JPL Work on a Gravitational Lensing Mission

Seeing oceans, continents and seasonal changes on an exoplanet pushes conventional optical instruments well beyond their limits, which is why NASA is exploring the Sun’s gravitational lens as a mission target in what is now the third phase of a study at NIAC (NASA Innovative Advanced Concepts). All of this builds upon the impressive achievements of Claudio Maccone that we’ve recently discussed. Led by Slava Turyshev, the NIAC effort takes advantage of light amplification of 10¹¹ and angular resolutions that dwarf what the largest instruments in our catalog can deliver, showing what the right kind of space mission can do.
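As a quick sanity check on that amplification figure (my own back-of-the-envelope sketch, not a calculation from the NIAC study), the commonly quoted on-axis light amplification of the SGL scales as 4π²rg/λ, where rg is the Sun’s Schwarzschild radius:

```python
# Back-of-the-envelope estimate of the solar gravitational lens (SGL) light
# amplification on the optical axis, assuming mu_0 ~ 4 * pi^2 * r_g / lambda,
# with r_g = 2GM_sun/c^2 the Sun's Schwarzschild radius. Illustrative only.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
c = 2.998e8        # speed of light, m/s

r_g = 2 * G * M_sun / c**2   # ~2950 m
wavelength = 1e-6            # 1 micron, roughly visible/near-infrared light

mu_0 = 4 * math.pi**2 * r_g / wavelength
print(f"Schwarzschild radius: {r_g:.0f} m")
print(f"On-axis amplification at 1 micron: {mu_0:.2e}")  # ~1.2e11
```

At a wavelength of one micron that works out to a bit over 10¹¹, which is where the figure quoted above comes from.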

We’re going to track the Phase III work with great interest, but let’s look back at what the earlier studies have accomplished along the way. Specifically, I’m interested in mission architectures, even as the NASA effort at the Jet Propulsion Laboratory continues to consider the issues surrounding untangling an optical image from the Einstein ring around the Sun. Turyshev and team’s work thus far argues for the feasibility of such imaging, and as we begin Phase III, sees viewing an exoplanet image with a 25-kilometer surface resolution as a workable prospect.

But how to deliver a meter-class telescope to a staggeringly distant 550 AU? Consider that Voyager 1, launched in 1977, is now 152 AU out, with Voyager 2 at 126 AU. New Horizons is coming up on 50 AU from the Earth. We have to do better, and one way is to re-imagine how such a mission would be achieved through advances in key technologies and procedures.

Here we turn to mission enablers like solar sails, artificial intelligence and nano-satellites. We can even bring formation flying into a multi-spacecraft mix. A technology demonstration mission drawing on the NIAC work could fly within four years if we decide to fund it, pointing to a full-scale mission to the gravitational focus launched a decade later. Travel time is estimated at 20 years.

These are impressive numbers indeed, and I want to look at how Turyshev and team achieve them, but bear in mind that in parsing the Phase II report, we’re not studying a fixed mission proposal. This is a highly detailed research report that tackles every aspect of a gravitational lens mission, with multiple solutions examined from a variety of perspectives. One thing it emphatically brings home is how much research is needed in areas like sail materials and instrumentation for untangling lensed images. Directions for such research are sharply defined by the analysis, which will materially aid our progress moving into the Phase III effort.

Image: A meter-class telescope with a coronagraph to block solar light, placed in the strong interference region of the solar gravitational lens (SGL), is capable of imaging an exoplanet at a distance of up to 30 parsecs with a few 10-km-scale resolution on its surface. The picture shows results of a simulation of the effects of the SGL on an Earth-like exoplanet image. Left: original RGB color image with (1024×1024) pixels; center: image blurred by the SGL, sampled at an SNR of ~10³ per color channel, or overall SNR of 3×10³; right: the result of image deconvolution. Credit: Turyshev et al.

Modes of Propulsion

A mission to the Sun’s gravity lens need not be conceived as a single spacecraft. Turyshev relies on spacecraft of less than 100 kg (smallsats, in the report’s terminology) using solar sails, working together and produced in numbers that will enable the study of multiple targets.

The propulsive technique is a ‘Sundiver’ maneuver in which each smallsat spirals in toward a perihelion in the range of 0.1 to 0.25 AU, achieving an exit velocity of 15-25 AU per year, which gets us to the gravity lensing region in less than 25 years. The sails are eventually ejected to reduce mass, and onboard propulsion (the study favors solar thermal) is available during cruise. The craft would enter the interstellar medium in 7 years as compared to Voyager’s 40, making the journey to the lens in a timeframe about 2.5 times longer than what it took to get New Horizons to Pluto.
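To put those velocities in perspective, here is a simple constant-velocity cruise estimate of my own, ignoring the slow early years of real missions and any residual deceleration in the Sun’s gravity well:

```python
# Rough cruise times to the heliopause (~120 AU) and to the start of the SGL
# focal region (550 AU), assuming the craft simply coasts at its exit velocity.
targets = {"heliopause (~120 AU)": 120, "SGL focal region (550 AU)": 550}
exit_velocities_au_per_yr = {
    "Voyager 1 class (~3.6 AU/yr)": 3.6,
    "Sundiver, slow case (15 AU/yr)": 15,
    "Sundiver, fast case (25 AU/yr)": 25,
}

for label, v in exit_velocities_au_per_yr.items():
    times = ", ".join(f"{name}: {dist / v:.0f} yr" for name, dist in targets.items())
    print(f"{label} -> {times}")
```

At 25 AU per year the 550 AU focal region is a little over two decades away, while a Voyager-class exit velocity would need roughly a century and a half.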

Image: Sailcraft example trajectory toward the Solar Gravity Lens. Credit: Turyshev et al.

The hybrid propulsion concept is necessary, and not just during cruise, because once in the focal lensing region, the spacecraft will need either chemical or electrical propulsion for navigation corrections and for operations and maintenance. Let’s pause on that point for a moment — Alex Tolley and I have been discussing this, and it shows up in the comments to the previous post. What Alex is interested in is whether there is in fact a ‘sweet spot’ where the problem of interference from the solar corona is maximally reduced compared to the loss of signal strength with distance. If there is, how do we maximize our stay in it?

Recall that while the focal line goes to infinity, the signal strength for FOCAL falls off with distance along it. A closer position gives you stronger signal intensity. Our craft will not only need to make course corrections as needed to keep on line with the target star, but may slow using onboard propulsion to remain in this maximally effective area longer. I ran this past Claudio Maccone, who responded that simulations on these matters are needed and will doubtless be part of the Phase II analysis. He has tackled the problem in some detail already:

“For instance: we do NOT have any reliable mathematical model of the Solar Corona, since the Corona keeps changing in an unpredictable way all the time.

“In my 2009 book I devoted the whole Chapters 8 and 9 to use THREE different Coronal Models just to find HOW MUCH the TRUE FOCUS is PUSHED beyond 550 AU because of the DIVERGENT LENS EFFECT created by the electrons in the lowest level of the Corona. For instance, if the frequency of the electromagnetic waves is the Peak Frequency of the Planckian CMB, then I found that the TRUE FOCUS is 763 AU Away from the Sun, rather than just 550 AU.

“My bottom-line suggestion is to let FOCAL observe HIGH Frequencies, like 160 GHz, that are NOT pushing the true focus too much beyond 550 AU.”

Where we make our best observations and how we keep our spacecraft in position are questions that highlight the need for the onboard propulsion assumed by the Phase II study.

Image: Our stellar neighborhood with notional targets. Credit: Turyshev et al.

For maximum velocity in the maneuver at the Sun, as close a perihelion as possible is demanded, which calls for a sailcraft design that can withstand the high levels of heat and radiation. That in turn points to the needed laboratory and flight testing of sail materials proposed for further study in the NIAC work. Let me quote from the report on this:

Interplanetary smallsats are still to be developed – the recent success of MarCO brings them perhaps to TRL 7. Solar sails have now flown – IKAROS and LightSail-2 already mentioned, and NASA is preparing to fly NEA-Scout. Scaling sails to be thinner and using materials to withstand higher temperatures near the Sun remains to be done. As mentioned above, we propose to do this in a technology test flight to the aforementioned 0.3 AU with an exit velocity ~6 AU/year. This would still be the fastest spacecraft ever flown.

The report goes on to analyze a technology demonstration mission that could be done within a few years at a cost less than $40 million, using a ‘rideshare’ launch to approximately GEO.

String of Pearls

The mission concept calls for an array of optical telescopes to be launched to the gravity lensing region. I’ll adopt the Turyshev acronym of SGL for this — Solar Gravity Lens. The thinking is that multiple small satellites can be launched in a ‘string of pearls’ architecture, where each ‘pearl’ is an ensemble of smallsats, with multiple such ensembles periodically launched. A series of these pearls, multiple smallsats operating interdependently using AI technologies, provides communications relays, observational redundancy and data management for the mission. From the report:

By launching these pearls on an approximately annual basis, we create the “string”, with pearls spaced along the string some 20-25 AU apart throughout the timeline of the mission, so that later pearls have the opportunity to incorporate the latest advancements in technology for improved capability, reliability, and/or reductions in size/weight/power which could translate to further cost savings.

In other words, rather than being a one-off mission in which a single spacecraft studies a single target, the SGL study conceives of a flexible investigation of multiple exoplanetary systems, with ‘strings of pearls’ launched toward a variety of regions of the focal area, each chosen so that particular exoplanet targets can be observed. Whereas the Phase I NIAC study analyzed instrument and mission requirements and demonstrated the feasibility of imaging, the Phase II study refines the mission architecture and makes the case that a gravity lens mission, while challenging, is possible with technologies that are already available or have reached a high degree of maturity.

Notice the unusual solar sail design — called SunVane — that was originally developed at the space technology company L’Garde. Here we’re looking at a sail design based on square panels aligned along a truss to provide the needed sail area. In the Phase II study, the craft would achieve 25 AU/year, reaching 600 AU in ~26 years (allowing two years for inner system approach to the Sun). [Note: I’ve replaced the earlier SunVane image with this latest concept, as passed along by Xplore’s Darren Garber. Xplore contributed the design for the demonstration mission’s solar sail].

Image: The SunVane concept. Credit: Darren D. Garber (Xplore, Inc).

The report examines a sail area of 45,000 m², equivalent to a ~212 m × 212 m sail, with spacecraft components to be configured along the truss. Deployment issues are minimal with the SunVane design. The vanes are kept aligned edge-on to the Sun as the craft approaches perihelion, then directed face-on to promote maximum acceleration.

We have to learn how to adjust parameters for the sail to allow the highest possible velocity, with the area-to-mass ratio A/m being critical — here A stands for the area of the sail in square meters, with m as the total mass of the sailcraft in kilograms (this includes spacecraft plus sail). Sail materials and their temperature properties will be crucial in determining the perihelion distance that can be achieved. This calls for laboratory and flight testing of sail material as part of the continuing research moving into the Phase III study and beyond. Sail size is a key issue:

The challenge for design of a solar sail is managing its size – large dimensions lead to unstable dynamics and difficult deployment. In this study we have consider[ed] a range of smallsat masses (<100 kg) and some of the tradeoffs of sail materials (defining perihelion distance) and sail area (defining the A/m and hence the exit velocity…). As an example, for the SGLF mission, consider perihelion distance of 0.1 AU (20 Rsun) and A/m=900 m²/kg; the exit velocity would be 25 AU/year, reaching 600 AU in ~26 years (allowing 2 years for inner solar system approach to the Sun). The resulting sail area is 45,000 m², equivalent to a ~212×212 m² sail.
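The arithmetic behind that passage is easy to check. What follows is my own quick sketch of the relationships being used (area-to-mass ratio, total sailcraft mass, equivalent sail dimensions and coast time), not code from the report:

```python
# Sanity check of the sail sizing quoted from the Phase II report:
# A/m = 900 m^2/kg, sail area 45,000 m^2, exit velocity 25 AU/yr,
# 600 AU reached in ~26 years (including ~2 years inside the inner system).
import math

area_to_mass = 900.0     # m^2 of sail per kg of total sailcraft mass
sail_area = 45_000.0     # m^2

total_mass = sail_area / area_to_mass   # kg; 50 kg, under the <100 kg smallsat limit
side_length = math.sqrt(sail_area)      # m; ~212 m if built as a single square

exit_velocity = 25.0      # AU/yr
target_distance = 600.0   # AU
inner_system_years = 2.0  # approach to perihelion before the sail takes over

coast_years = target_distance / exit_velocity
print(f"Sailcraft mass: {total_mass:.0f} kg")
print(f"Equivalent square sail: {side_length:.0f} m on a side")
print(f"Time to {target_distance:.0f} AU: ~{coast_years + inner_system_years:.0f} years")
```

Fifty kilograms of total sailcraft carrying 45,000 m² of sail is exactly why sail materials and deployment dominate the research agenda.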

The size of that number provokes the decision to explore the SunVane concept, which distributes sail area in a way that allows spacecraft components to be placed along the truss instead of being confined to the sail’s center of gravity, and which has the added benefit of high maneuverability. A low-cost near-term test flight is proposed with testing of sail material and control, closing to a perihelion in the range of 0.3 AU, with an escape velocity from the Solar System of 6-7 AU per year. Several such spacecraft would enable a test of swarm architectures.

Thus the concept: Multiple spacecraft would be launched together as an ensemble — the ‘pearl’ — using solar sails deployed on each and navigating through the Deep Space Network, with the spacecraft maintaining a separation on the order of 15,000 km as they pass through perihelion. Such ensembles are periodically launched, acting interdependently in ways that would maximize flexibility while reducing risk from a single catastrophic failure and lowering mission cost. We wind up with a system that would enable investigations of multiple extrasolar systems.

I haven’t had time to get into such issues as communications and power for the individual smallsats, or data processing and AI, all matters that are covered in the report, nor have I looked in as much detail as I would have liked at the sail arrays, envisioned through SunVane as on the order of 16 vanes of 10³ m² each, allowing the area necessary in a configuration the report considers realistic. This is a lengthy, rich document, and I commend it to those wanting to dig further into all these matters.

The report is Turyshev et al., “Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravity Lens Mission,” Final Report NASA Innovative Advanced Concepts Phase II (full text).


Two Planets Around Nearby Gliese 887

Red dwarf stars have fascinated me for decades, ever since I learned that a potentially habitable planet around one might well be tidally locked. Trying to imagine a living world with a sun that didn’t move in the sky was the kind of exercise that I love about science fiction, where playing with ideas always includes a vivid visual element. What kind of landscapes would a place like this offer to the view? What kind of weather would tidal lock conjure? Stephen Baxter’s novel Proxima (Ace, 2014) is a wonderful exercise in such world-building.

Thus my continuing interest in the splendid work being done by RedDots, which takes as its charter the detection of terrestrial planets orbiting red dwarfs near the Sun. You’ll recall that this is the team that discovered Proxima Centauri b, orbiting a star under increased scrutiny of late as other potential planetary signals are examined. RedDots also gave us Barnard’s Star b and has found three planets around the red dwarf GJ 1061.

Now we learn about a system of super-Earths orbiting nearby Gliese 887, which is the brightest red dwarf in our sky (as per RECONS, the Research Consortium On Nearby Stars). More massive than the Earth but smaller than the ice giants in our system, the two worlds may or may not be rocky — this is radial velocity work, so we have only minimum mass figures to work with. The minimum masses reported in the paper in Science are 4.2 ± 0.6 and 7.6 ± 1.2 Earth masses (M⊕). My guess is that these planets are more like Neptune than Earth, but we’ll see.

The RedDots team found the two planets using the HARPS spectrograph at the European Southern Observatory in Chile. There is also an unconfirmed signal with a period of roughly 50 days, possibly a third planet of a similar mass, but note this: “We regard the third signal at ~50 days as dubious and likely related to stellar activity.”

Lead author of the paper is Sandra Jeffers (University of Göttingen), who notes the opportunity this system provides astronomers for follow-up work. The two planets have orbital periods of 9.3 and 21.8 days respectively and circle a star that is about 11 light years away. A space-based observatory might be able to tease out their reflected light. From the paper:

GJ 887 has the brightest apparent magnitude of any known M dwarf planet host. This brightness, combined with the high photometric stability of GJ 887, exhibited in the TESS data, and the high planet-star brightness and radius ratios, make these planets potential targets for phase-resolved photometric studies, especially in emission. Spectrally resolved phase photometry has been shown to be sensitive to the presence of an atmosphere and molecules such as CO₂.

Image: Dr Sandra Jeffers. Credit: University of Göttingen.

The team believes that the star has few starspots:

The TESS variability can be explained by one starspot, or a group of starspots, with a total diameter of 0.3% of the stellar surface, indicating that GJ 887 is slowly rotating with very few surface brightness inhomogeneities.

And that’s interesting because it implies a relatively weak stellar wind, which could otherwise cause planetary atmospheres to erode and, if strong enough, conceivably strip them altogether. Thus the likelihood that there may be atmospheres on these planets, which renders them interesting potential targets for the who-knows-when-it-will-fly James Webb Space Telescope.

Image: Artist’s impression of the multiplanetary system of newly discovered super-Earths orbiting nearby red dwarf Gliese 887. Credit: Mark Garlick.

The paper is Jeffers et al., “A multiplanet system of super-Earths orbiting the brightest red dwarf star GJ 887,” Science Vol. 368, Issue 6498 (26 June 2020), pp. 1477-1481 (abstract).


Into the Magellanics

Somehow it feels as if the Hubble Space Telescope has been with us longer than the 30 years now being celebrated. But it was, in fact, on April 24, 1990 that the instrument was launched aboard the space shuttle Discovery, being deployed the following day. 1.4 million observations have followed, with data used to write more than 17,000 peer-reviewed papers. It’s safe to say that Hubble’s legacy will involve decades of research going forward as its archives are tapped by future researchers. That’s good reason to celebrate with a 30th anniversary image.

I’m reminded that the recent work we looked at on the interstellar comet 2I/Borisov involved Hubble as part of the effort that detected the highest levels of carbon monoxide ever seen in a comet so close to the Sun. Using Hubble data is simply a given wherever feasible. And given yesterday’s article on star formation and conditions in the Sun’s birth cluster that may have produced leftover material from other stellar systems still orbiting our star, it’s worth noting the portrait Hubble just took of two extraordinary nebulae that are part of a star-formation complex.

Image: This image is one of the most photogenic examples of the many turbulent stellar nurseries the NASA/ESA Hubble Space Telescope has observed during its 30-year lifetime. The portrait features the giant nebula NGC 2014 and its neighbor NGC 2020 which together form part of a vast star-forming region in the Large Magellanic Cloud, a satellite galaxy of the Milky Way, approximately 163,000 light-years away. Credit: NASA, ESA, and STScI.

Here we’re deep in the Large Magellanic Cloud. This is a visible-light image showing NGC 2014 and NGC 2020, which although appearing separate here, are part of a large star-forming region filled with stars much more massive than the Sun, many of them with lifetimes of little more than a few million years. Dominating the image, a group of stars at the center of NGC 2014 has shed the hydrogen gas and dust of stellar birth (shown in red) so that the star cluster lights up nearby regions in ultraviolet while sculpting and eroding the gas cloud above and to the right of the young stars. Bubble-like structures within the gas churn with the debris of starbirth.

At the left, NGC 2020 is dominated by a star 200,000 times more luminous than the Sun, a Wolf-Rayet star 15 times more massive than our own. Here the stellar winds have again cleared out the area near the star. According to the ESA/Hubble Information Center, the blue color of the nebula is oxygen gas being heated to 11,000 degrees Celsius, far hotter than the surrounding hydrogen gas. We should keep in mind when seeing such spectacular imagery that massive stars of this kind are relatively uncommon, but the processes at work in this image show how strong a role their stellar winds and supernovae explosions play in shaping the cosmos.

Inevitably I’m reminded of science fictional treatments of the Magellanics, especially Olaf Stapledon’s Star Maker (1937), in which a symbiotic race living in the Large Magellanic Cloud (LMC) is the most advanced life form in the galaxy. But let’s also remember that when Arthur C. Clarke’s starship leaves the Sun in Rendezvous with Rama (1973), it’s headed for the LMC. Among scores of other references, from Haldeman’s The Forever War to Blish’s Cities in Flight, I’m particularly fond of Robert Silverberg’s less known Collision Course, probably because a shorter version fired my imagination as a boy in a copy of Amazing Stories I still possess.

Image: The brilliant Cele Goldsmith was editing Amazing Stories by the time the July 1959 issue containing “Collision Course” came out. The full novel would be published in 1961.

There is something about what you would see in the night sky from a satellite galaxy, as opposed to our own view of the Milky Way from within its disk, that fires the imagination. For an entirely different view of the galaxy, read Poul Anderson’s World Without Stars (1966), as discussed in these pages back in 2014 in The Milky Way from a Distance.


Where Do ‘Hot Neptunes’ Come From?

Learning about the orbital tilt of a distant exoplanet may help us understand how young planets evolve, and especially how they interact with both their star and other nearby planets. Thus the question of ‘hot Neptunes’ and the mechanisms that put them in place. The issue has been under study since 2004. Are we looking at planets laden with frozen ices that have somehow migrated to the inner system, or are these worlds that formed in place, so that their heavy elements are highly refractory materials that can withstand high disk temperatures?

Among the exoplanets that can give us guidance here is DS Tuc Ab, discovered in 2019 in data from the TESS mission (Transiting Exoplanet Survey Satellite). Here we have a young world whose host star is conveniently part of the 45-million-year-old Tucana-Horologium moving group (allowing us to establish its age), a planet within a binary system in the constellation Tucana. The binary comprises a G-class and a K-class star, with DS Tuc Ab orbiting the G-class primary.

A team at the Center for Astrophysics | Harvard & Smithsonian has developed a new modeling tool that is described in a paper to be published in The Astrophysical Journal Letters, one that allows them to measure the orbital tilt of DS Tuc Ab, the first time the tilt of a planet this young has been determined. Systems evolve over billions of years, making analysis of the formation and orbital configuration of their planets difficult. It’s clear from the team’s work that DS Tuc Ab did not, in the words of the CfA’s George Zhou, “get flung into its star system. That opens up many other possibilities for other, similar young exoplanets…”

The work was complicated by the fact that the host star, DS Tuc A, had up to 40% of its surface covered in star spots. David Latham (CfA) describes the situation:

“We had to infer how many spots there were, their size, and their color. Each time we’d add a star spot, we’d check its consistency with everything we already knew about the planet. As TESS finds more young stars like DS Tuc A, where the shadow of a transiting planet is hidden by variations due to star spots, this new technique for uncovering the signal of the planet will lead to a better understanding of the early history of planets in their infancy.”

The work proceeded using the Planet Finder Spectrograph on the Magellan Clay Telescope at Las Campanas Observatory in Chile, with the goal of finding out whether this newly formed world had experienced chaotic interactions in its past that could account for its current orbital position. The analysis involved modeling how the planet blocked light across the surface of the star, folding in the team’s projections of how the star spots changed the stellar light emitted. A well-aligned orbit would block an equal amount of light as the planet passed across the star’s surface. The method should aid the study of other young ‘hot Neptunes.’
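To see why star spots complicate the picture, consider a toy model, emphatically not the CfA team’s code but a cartoon of the geometry: a planet transits a uniform stellar disk carrying a single dark spot, and when it crosses the spot it blocks less light, producing a bump in the transit that has to be modeled rather than ignored.

```python
# Toy spot-crossing illustration: a planet transits a uniform-brightness stellar
# disk that carries one dark spot on the transit chord. Crossing the spot blocks
# less light, so the relative flux shows a small bump inside the transit.
import numpy as np

n = 400
y, x = np.mgrid[-1.2:1.2:n*1j, -1.2:1.2:n*1j]
star = (x**2 + y**2 <= 1.0).astype(float)   # uniform disk, no limb darkening

# Dark spot: 30% dimmer, radius 0.15 stellar radii, centred at x = +0.3 on the chord
spot = (x - 0.3)**2 + y**2 <= 0.15**2
star[spot] *= 0.7

total = star.sum()
r_planet = 0.06   # planet radius in stellar radii (illustrative value)
for xc in np.linspace(-1.3, 1.3, 9):   # planet centre positions along the transit chord
    blocked = star[(x - xc)**2 + y**2 <= r_planet**2].sum()
    print(f"planet at x = {xc:+.2f}: relative flux = {(total - blocked) / total:.5f}")
```

Scale that up to a star with as much as 40% spot coverage and the need to infer spot sizes, positions and contrasts alongside the planetary signal becomes obvious.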

Image: Animation courtesy of George Zhou, CfA. Top right illustration of DS Tuc AB by M Weiss, CfA.

Benjamin Montet (University of New South Wales) is lead author of a companion paper on which the CfA team were co-authors (citation below). Montet’s team used a different technique called the Rossiter-McLaughlin effect to study the planet. Here the researchers measure slight blueshifts and redshifts in the star’s spectrum while simultaneously modeling both transit and stellar activity signals. Says Montet:

“DS Tuc Ab is at an interesting age. We can see the planet, but we thought it was still too young for the orbit of other distant stars to manipulate its path.”

What the combined work suggests is that DS Tuc Ab, because of its youth, probably did not form further out and migrate in. Its flat orbital tilt also indicates that the second star in the binary did not produce interactions that pulled it into its current position. The authors of the Montet paper consider this work “a first data point” in an analysis that may eventually confirm or rule out the hypothesis that wide binary companions can tilt protoplanetary disks to produce high inclination orbits in the planets that form within them.

A good deal of work is ahead to understand such young systems, but the methods the Montet team used are promising, for the Rossiter-McLaughlin (R-M) effect proves to be potent. From the already published companion paper:

DS Tuc Ab is one of a small number of planets to be confirmed by a detection of its R-M signal rather than its spectroscopic orbit. This approach may be the optimal strategy for future confirmation of young planets orbiting rapidly-rotating stars. While the RV [radial velocity] of the star varies on rotational period timescales at the 300 m s⁻¹ level, it does so relatively smoothly over transit timescales, enabling us to cleanly disentangle the stellar and planetary signals. While this planet would require a dedicated series of many spectra and a detailed data-driven analysis to measure a spectroscopic orbit, the R-M signal is visible by eye in observations from a single night. For certain systems, in addition to a more amenable noise profile, the amplitude of the R-M signal can be larger than the Doppler amplitude. Similar observations to these should be achievable for more young planets as they are discovered, which will shed light onto the end states of planet formation in protoplanetary disks.
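A rough order-of-magnitude estimate shows why the R-M anomaly can rival or exceed the Doppler amplitude for a fast-rotating young star. The standard approximation puts the maximum R-M amplitude at roughly the transit depth times the star’s projected rotational velocity; the numbers below are illustrative values I have assumed, not the measured DS Tuc parameters:

```python
# Order-of-magnitude Rossiter-McLaughlin amplitude: dV_RM ~ (Rp/R*)^2 * v*sin(i),
# ignoring limb darkening and impact-parameter factors. Values are illustrative.
radius_ratio = 0.05      # Rp/R*, assumed: a Neptune-class planet, Sun-sized star
vsini_m_per_s = 18_000   # projected rotational velocity of a fast young rotator, assumed

transit_depth = radius_ratio**2                 # ~0.25% drop in flux
rm_amplitude = transit_depth * vsini_m_per_s    # m/s

print(f"Transit depth: {transit_depth * 100:.2f} %")
print(f"Approximate R-M amplitude: {rm_amplitude:.0f} m/s")  # ~45 m/s
```

A signal of tens of meters per second confined to a few hours of transit is far easier to pull out of ~300 m s⁻¹ of slowly varying activity than a classical Doppler orbit of a few meters per second would be.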

So now we’re seeing two complementary methods for studying young planetary systems, both of which have turned in useful data on how one ‘hot Neptune’ must have formed.

Results of the CfA study will be published in The Astrophysical Journal Letters. The companion study is Montet et al., “The Young Planet DS Tuc Ab Has a Low Obliquity,” The Astronomical Journal Vol. 159, No. 3 (20 February 2020). Abstract / Preprint.


Exploring the Contact Paradox

Keith Cooper is a familiar face on Centauri Dreams, both through his own essays and the dialogues he and I have engaged in on interstellar topics. Keith is the editor of Astronomy Now and the author of both The Contact Paradox: Challenging Assumptions in the Search for Extraterrestrial Intelligence (Bloomsbury Sigma), and Origins of the Universe: The Cosmic Microwave Background and the Search for Quantum Gravity (Icon Books) to be published later this year. The Contact Paradox is a richly detailed examination of the history and core concepts of SETI, inspiring a new set of conversations, of which this is the first. With the recent expansion of the search through Breakthrough Listen, where does SETI stand both in terms of its likelihood of success and its perception among the general public?

  • Paul Gilster

Keith, we’re 60 years into SETI and no contact yet, though there are a few tantalizing things like the Wow! signal to hold our attention. Given that you’ve just delivered an exhaustive study of the field and mined its philosophical implications, what’s your take on how this lack of results is playing with the general public? Are we more or less ready today than we were in the days of Project Ozma to receive news of a true contact signal?

And despite what we saw in the film Contact, do you think the resultant clamor would be as widespread and insistent? Because to me, one of the great paradoxes about the whole idea of contact is that the public seems to get fired up for the idea in film and books, but relatively uninterested in the actual work that’s going on. Or am I misjudging this?

  • Keith Cooper

What a lot of people don’t realise is just how big space is. Our Galaxy is home to somewhere between 100 billion and 200 billion stars. Yet, until Yuri Milner’s $100 million Breakthrough Listen project, we had looked and listened, in detail, at about a thousand of those stars. And when I say listened closely, I mean we pointed a telescope at each of those stars for half an hour or so. Even Breakthrough Listen, which will survey a million stars in detail, finds the odds stacked against it. Let’s imagine there are 10,000 technological species in our Galaxy. That sounds like a lot, but on average we’d have to search between 10 million and 20 million stars just to find one of those species.

And remember, we’re only listening for a short time. If they’re not transmitting during that time frame, then we won’t detect them, at least not with a radio telescope. Coupled with the fact that incidental radio leakage will be much harder to detect than we thought, then it’s little wonder that we’ve not found anyone out there yet. Of course, the public doesn’t see these nuances – they just see that we’ve been searching for 60 years and all we’ve found is negative or null results. So I’m not surprised that the public are often uninspired by SETI.

Some of this dissatisfaction might stem from the assumptions made in the early days of SETI, when it was assumed that ETI would be blasting out messages through powerful beacons that would be pretty obvious and easy to detect. Clearly, that doesn’t seem to be the case. Maybe that’s because they’re not out there, or maybe it’s because the pure, selfless altruism required to build such a huge, energy-hungry transmitter to beam messages to unknown species is not very common in nature. Certainly on Earth, in the animal kingdom, altruism usually operates either on the basis of protecting one’s kin, or via quid pro quo, neither of which lend themselves to encouraging interstellar communication.

So I think we – that is, both the public and the SETI scientific community – need to readjust our expectations a little bit.

Are we ready to receive a contact signal? I suspect that we think we are, but that’s different from truly being ready. Of course, it depends upon a number of variables, such as the nature of the contact, whether we can understand the message if one is sent, and whether the senders are located close in space to us or on the other side of the Galaxy. A signal detected from thousands of light years away and which we can’t decode the message content of, will have much less impact than one from, say, 20 or 30 light years away, and which we can decode the message content and perhaps even start to communicate with on a regular basis.

  • Paul Gilster

I’ll go further than that. To me, the optimum SETI signal to receive first would be one from an ancient civilization, maybe one way toward galactic center, which by virtue of its extreme distance would make for a non-threatening experience. Or at least it would if we quickly went to work on expanding public understanding of the size of the Galaxy and the Universe itself, as you point out. An even more ancient signal from a different galaxy would be even better, as even the most rabid conspiracy theorist would have little sense of immediate threat.

I suppose the best scenario of all would be a detection that demonstrated other intelligent life somewhere far away in the cosmos, and then a century or so for humanity to digest the idea, working it not only into popular culture, but also into philosophy, art, so that it becomes a given in our school textbooks (or whatever we’ll use in the future in place of school textbooks). Then, if we’re going to receive a signal from a relatively nearby system, let it come after this period of acclimatization.

Great idea, right? As if we could script what happens when we’re talking about something as unknowable as SETI contact. I don’t even think we’d have to have a message we could decode at first, because the important thing would be the simple recognition of the fact that other civilizations are out there. On that score, maybe Dysonian SETI turns the trick with the demonstration of a technology at work around another star. The fact of its existence is what we have to get into our basic assumptions about the universe. I used to assume this would be easy and come soon, and while I do understand about all those stars out there, I’m still a bit puzzled that we haven’t turned up something. I’d call that no more than a personal bias, but there it is.

Image: The Parkes 64m radio telescope in Parkes, New South Wales, Australia with the Milky Way overhead. Breakthrough Listen is now conducting a survey of the Milky Way galactic plane over 1.2 to 1.5 GHz and a targeted search of approximately 1000 nearby stars over the frequency range 0.7 to 4 GHz. Credit: Wikimedia Commons / Daniel John Reardon.

  • Keith Cooper

It’s the greatest puzzle that there is. Radio SETI approaches things from the assumption that ET just sat at home belting out radio signals, and yet, as we know, the Universe is so old that ET has had ample time to reach us, or to build some kind of Dysonian artefact, or to do something to make their presence more obvious. And over the years we’ve all drawn our own conclusions as to why this does not seem to be the case – maybe they are here but hidden, watching us like we’re in some kind of cosmic zoo. Or maybe interstellar travel and building megastructures are more difficult than we envision. Perhaps they are all dead, or technological intelligence is rare, or they were never out there in the first place. We just don’t know. All we can do is look.

I think science fiction has also trained us to expect alien life to be out there – and I don’t mean that as a criticism of the genre. Indeed, in The Contact Paradox, I often use science fiction as allegory, largely because that’s where discussions about what form alien life may take and what might happen during contact have already taken place. So let me ask you this, Paul: From all the sf that you’ve read, are there any particular stories that stand out as a warning about the subtleties of contact?

  • Paul Gilster

I suppose my favorite of all the ‘first contact through SETI’ stories is James Gunn’s The Listeners (1972). Here we have multiple narrators working a text that is laden with interesting quotations. Gunn’s narrative methods go all the way back to Dos Passos and anticipate John Brunner (think Stand on Zanzibar, for example). It’s fascinating methodology, but beyond that, the tumult that greets the decoding of an image from Capella transforms into acceptance as we learn more about a culture that seems to be dying and await what may be the reply to a message humanity had finally decided to send in response. So The Listeners isn’t really a warning as much as an exploration of this tangled issue in all its complexity.

Of course, if we widen the topic to go beyond SETI and treat other forms of contact, I love what Stanislaw Lem did with Solaris (1961). A sentient ocean! I also have to say that I found David Brin’s Existence (2012) compelling. Here competing messages are delivered by something akin to Bracewell probes, reactivated after long dormancy. Which one do you believe, and how do you resolve deeply contradictory information? Very interesting stuff! I mean, how do we respond if we get a message, and then a second one saying “Don’t pay any attention to that first message?”

What are some of your choices? I could go on for a bit about favorite science fiction but I’d like to hear from you. I assume Sagan’s Contact (1985) is on your list, but how about dazzling ‘artifact’ contact, as in the Strugatsky brothers’ Roadside Picnic (1972)? And how do we fit in Cixin Liu’s The Three Body Problem (2008)? At first glance, I thought we were talking about Alpha Centauri, but the novel shows no familiarity with the actual Centauri system, while still being evocative and exotic. Here the consequences of contact are deeply disturbing.

  • Keith Cooper

I wish I were as well read as you are, Paul! I did read The Three Body Problem, but it didn’t strike a chord with me, which is a shame. For artefact contact, however, I have to mention the Arthur C. Clarke classic, Rendezvous with Rama (1973). One of the things I liked about that story is that it removed us from the purpose of Rama. We just happened to be bystanders, oblivious to Rama’s true intent and destination (at least until the sequel novels).

Clarke’s story feels relevant to SETI today, in which embracing the search for ‘technosignatures’ has allowed researchers to consider wider forms of detection than just radio signals. In particular, we’ve seen more speculation about finding alien spacecraft in our own Solar System – see Avi Loeb pondering whether 1I/’Oumuamua was a spacecraft (I don’t think it was), or Jim Benford’s paper about looking for lurkers.

I’ve got mixed feelings about this. On the one hand, although it’s speculative and I really don’t expect us to find anything, I see no reason why we shouldn’t look for probes in the Solar System, just in case, and it would be done in a scientific manner. On the other hand, it sets SETI on a collision course with ufology, and I’d be interested to see how that would play out in the media and with the public.

It could also change how we think about contact. Communication over many light years via radio waves or optical signals is one thing, but if the SETI community agrees that it’s possible that there could be a probe in our Solar System, then that would bring things into the arena of direct contact. As a species, I don’t think we’re ready to produce a coherent response to a radio signal, and we are certainly not ready for direct contact.

Contact raises ethical dilemmas. There’s the obvious stuff, such as who has the right to speak for Earth, and indeed whether we should respond at all, or stay silent. I think there are other issues though. There may be information content in the detected signal, for example a message containing details of new technology, or new science, or new cultural artefacts.

However, we live in a world in which resources are not shared equally. Would the information contained within the signal be shared to the whole world, or will governments covet that information? If the technological secrets learned from the signal could change the world, for good or ill, who should we trust to manage those secrets?

These issues become amplified if contact is direct, such as finding one of Benford’s lurkers. Would we all agree that the probe should have its own sovereignty and keep our distance? Or would one or more nations or organisations seek to capture the probe for their own ends? How could we disseminate what we learn from the probe so that it benefits all humankind? And what if the probe doesn’t want to be captured, and defends itself?

My frustration with SETI is that we devote our efforts to trying to make contact, but then shun any serious discussion of what could happen during contact. The search and the discussion should be happening in tandem, so that we are ready should SETI find success, and I’m frankly puzzled that we don’t really do this. Paul, do you have any insight into why this might be?

  • Paul Gilster

You’ve got me. You and I are on a slightly different page when it comes to METI, for example (Messaging to Extraterrestrial Intelligence). But we both agree that while we search for possible evidence of ETI, we should be having this broad discussion about the implications of success. And if we’re talking about actually sending a signal without any knowledge whatsoever of what might be out there, then that discussion really should take priority, as far as I’m concerned. I’d be much more willing to accept the idea of sending signals if we came to an international consensus on the goal of METI and its possible consequences.

As to why we don’t do this, I hear a lot of things. Most people from the METI side argue that the cat is already out of the bag anyway, with various private attempts to send signals proliferating, and the assumption that ever more sophisticated technology will allow everyone from university scientists to the kid in the basement to send signals whenever they want. I can’t argue with that. But I don’t think the fact that we have sent messages means we should give up on the idea of discussing why we’re doing it and why it may or may not be a sound idea. I’m not convinced anyway that any signals yet sent have the likelihood of being received at interstellar distances.

But let’s leave METI alone for a moment. On the general matter of SETI and implications of receiving a signal or finding ETI in astronomical data, I think we’re a bit schizophrenic. When I talk about ‘we,’ I mean western societies, as I have no insights into how other traditions now view the implications of such knowledge. But in the post-Enlightenment tradition of places like my country and yours, contacting ETI is on one level accepted (I think this can be demonstrated in recent polling) while at the same time it is viewed as a mere plot device in movies.

This isn’t skepticism, because that implies an effort to analyze the issue. This is just a holdover of old paradigms. Changing them might take a silver disc touching down and Michael Rennie strolling out. On the day that happens, the world really would stand still.

Let’s add in the fact that we’re short-sighted in terms of working for results beyond the next dividend check (or episode of a favorite show). With long-term thinking in such perilously short supply (and let’s acknowledge the Long Now Foundation‘s heroic efforts at changing this), we have trouble thinking about how societies change over time with the influx of new knowledge.

Our own experience says that superior technologies arriving in places without warning can lead to calamity, whether intentional or not, which in and of itself should be a lesson as we ponder signals from the stars. A long view of civilization would recognize how fragile its assumptions can be when faced with sudden intervention, as any 500 year old Aztec might remind us.

Image: A 17th century CE oil painting depicting the Spanish Conquistadores led by Hernan Cortes besieging the Aztec capital of Tenochtitlan in 1519 CE. (Jay I. Kislak Collection).

Keith, what’s your take on the ‘cat out of the bag’ argument with regard to METI? It seems to me to ignore the real prospect that we can change policy and shape behavior if we find it counterproductive, instead focusing on human powerlessness to control our impulses. Don’t we on the species level have agency here? How naive do you think I am on this topic?

  • Keith Cooper

That is the ‘contact paradox’ in a nutshell, isn’t it? This idea that we’re actively reaching out to ETI, yet we can’t agree on whether it’s safe to do so or not. That’s the purpose of my book, to try and put the discussion regarding contact in front of a wider audience.

In The Contact Paradox, I’m trying not to tell people what they should think about contact, although of course I give my own opinions on the matter. What I am asking is that people take the time to think more carefully about this issue, and about our assumptions, by embarking on having the broader debate.

Readers of Centauri Dreams might point out that they have that very debate in the comments section of this website on a frequent basis. And while that’s true to an extent, I think the debate, whether on this site or among researchers at conferences or even in the pages of science fiction, has barely scratched the surface. There are so many nuances and details to examine, so many assumptions to challenge, and it’s all too easy to slip back into the will they/won’t they invade discussion, which to me is a total straw-man argument.

To compound this, while the few reviews that The Contact Paradox has received so far have been nice, I am seeing a misunderstanding arise in those reviews that once again brings the debate back down to the question of whether ETI will be hostile or not. Yet the point I am making in the book is that even if ETI is benign, contact could potentially still go badly, through misunderstandings, or through the introduction of disruptive technology or culture.

Let me give you a hypothetical example based on a science-fiction technology. Imagine we made contact with ETI, and they saw the problems we face on Earth currently, such as poverty, disease and climate change. So they give us some of their technology – a replicator, like that in Star Trek, capable of making anything from the raw materials of atoms. Let’s also assume that the quandaries that I mentioned earlier, about who takes possession of that technology and whether they hoard it, don’t apply. Instead, for the purpose of this argument, let’s assume that soon enough the technology is patented by a company on Earth and rolled out into society to the point that replicators become as common a sight in people’s homes as microwave ovens.

Just imagine what that could do! There would be no need for people to starve or suffer from drought – the replicators could make all the food and water we’d ever need. Medicine could be created on the spot, helping people in less wealthy countries who can’t ordinarily get access to life-saving drugs. And by taking away the need for industry and farming, we’d cut down our carbon emissions drastically. So all good, right?

But let’s flip the coin and look at the other side. All those people all across the world who work in manufacturing and farming would suddenly be out of a job, and with people wanting for nothing, the economy would crash completely, and international trade would become non-existent – after all, why import cocoa beans when you can just make them in your replicator at home? We’d have a sudden obesity crisis, because when faced with an abundance of resources, history tells us that it is often human nature to take too much. We’d see a drugs epidemic like never before, and people with malicious intent would be able to replicate weapons out of thin air. Readers could probably imagine other disruptive consequences of such a technology.

It’s only a thought experiment, but it’s a useful allegory showing that there are pros and cons to the consequences of contact. What we as a society have to do is decide whether the pros outweigh the cons, and to be prepared for the disruptive consequences. We can get some idea of what to expect by looking at contact between different societies on Earth throughout history. Instead of the replicator, consider historical contact events where gunpowder, or fast food, or religion, or the combustion engine have been given to societies that lacked them. What were the consequences in those situations?

This is the discussion that we’re not currently having when we do METI. There’s no risk assessment, just a bunch of ill-thought-out assumptions masquerading as a rationale for attempting contact before we’re ready.

There’s still time though. ETI would really have to be scrutinising us closely to detect our leakage or deliberate signals so far, and if they’re doing that then they would surely already know we are here. So I don’t think the ‘cat is out of the bag’ just yet, which means there is still time to have this discussion, and more importantly to prepare. Because long-term I don’t think we should stay silent, although I do think we need to be cautious, and learn what is out there first, and get ready for it, before we raise our voice. And if it turns out that no one is out there, then we’ve not wasted our time, because I think this discussion can teach us much about ourselves too.

  • Paul Gilster

We’re on the same wavelength there, Keith. I’m not against the idea of communicating with ETI if we receive a signal, but only within the context you suggest, which means thinking long and hard about what we want to do, making a decision based on international consultation, and realizing that any such contact would have ramifications that have to be carefully considered. On balance, we might just decide to stay silent until we gathered further information.

I do think many people have simply not considered this realistically. I was talking to a friend the other day whose reaction was typical. He had been asking me about SETI from a layman’s perspective, and I was telling him a bit about current efforts like Breakthrough Listen. But when I added that we needed to be cautious about how we responded, if we responded, to any reception, he was incredulous, then thoughtful. “I’ve just never thought about that,” he said. “I guess it just seems like science fiction. But of course I realize it isn’t.”

So we’re right back to paradox. If we have knowledge of the size of the galaxy — indeed, of the visible cosmos — why do we not see more public understanding of the implications? I think people could absorb the idea of a SETI reception without huge disruption, but it will force a cultural shift that turns what had been fiction into the realm of possibility.

But maybe we should now identify the broad context within which this shift can occur. In the beginning of your book, Keith, you say this: “Understanding altruism may ultimately be the single most significant factor in our quest to make contact with other intelligent life in the Universe.”

I think this is exactly right, and the next time we talk, I’d like us to dig into why this statement is true, and its ramifications for how we deal with not only extraterrestrial contact but our own civilization. Along with this, let’s get into that thorny question of ‘deep time’ and how our species sees itself in the cosmos.


A Deep Dive into Tidal Lock

Mention red dwarf habitable zones and tidal lock invariably comes up. If a planet is close enough to a dim red star to maintain temperatures suitable for life, wouldn’t it keep one face turned toward it in perpetuity? But tidal lock, as Ashley Baldwin explains in the essay below, is more complex than we sometimes realize. And while there are ways to produce temperate climate models for such planets, tidal lock itself is a factor not just for M-dwarfs, but for K- and even G-class stars like the Sun. Flip a few starting conditions and Earth itself might have been in tidal lock. The indefatigable Dr. Baldwin keeps a close eye on the latest exoplanet research, somehow balancing his astronomical scholarship with a career as a consultant psychiatrist at the 5 Boroughs Partnership NHS Trust (Warrington, UK). Read on to learn a great deal about where current thinking stands on a subject critical to the question of red dwarf habitability.

by Ashley Baldwin

“Tidal locking”, “captured rotation” or “spin-orbit locking” occurs in its most recognised guise when an orbiting astronomical body (be it a moon, planet or even a star) always presents the same face towards the object it is orbiting. In this instance, the orbit of the “satellite” body can be referred to as “synchronous”, whereby the tidally locked body takes as long to rotate around its own axis as to orbit its partner. This occurs due to the primary body’s gravity flexing the orbiting body into an elongated “prolate” shape. This in turn is then exposed to varying gravitational interaction with the central body.

Figure 1: Tidal stresses and tidal locking

As the “orbiter” rotates, its now elongated axis falls out of line with the central mass, which consequently perturbs it as it rotates across its orbit. It thus becomes subject to gravitationally induced torques that can act as a brake — through energy exchange and dissipation, the latter via friction-induced heat loss in the perturbed orbiting body. As M dwarf habitable zones are closer to their central star, and the star’s gravitational influence there is thus greater, it’s easy to see how this dissipated heat can contribute substantially to an exoplanet’s overall energy flux and can even affect its habitability potential – possibly tipping it into a runaway greenhouse scenario (Kopparapu 2013).

Over millions of years (or more) this process can lead to “orbital synchronisation”. This arises when the orbiting body reaches a state where there is no longer any net exchange of rotation during the course of a completed orbit (Barnes 2010). Leaving a tidal locking state would only be possible with the addition of energy to the system. This might occur should some other massive object (such as a planet, or a star in, say, a binary system) break the equilibrium. If the masses of the two bodies (for instance Pluto & Charon) are similar, they can become tidally locked to each other.

Not all tidal locking involves synchronisation. “Super-synchronisation” occurs where an orbiting body becomes tidally locked to its parent body but rotates at a fixed but quicker rate. A topical example of this is the erstwhile “geosynchronous transfer orbit” (GTO). We see this on launcher specs all the time: “Payload to GTO”. This orbit is external to geosynchronous orbit, where many satellites start their operational lives, but allows for pre-orbital insertion inclination changes — economically expending less propellant prior to final insertion. Alternatively, such orbits can be used as dumping grounds for non-functioning satellites or related debris, so-called “geo-graveyard belts” (Luu 1998). Simulations suggest many exoplanets could exist in variants of such orbital types.

Gravitational interaction with a central star leads to progressive rotational slowing of a smaller planetary body like Mercury via energy exchange and heat dissipation. This is due to subtle but important tidal force variations across the orbiting body (remembering that gravity is inversely proportional to the square of the distance between any two bodies — thus “gravitational gradients” exist across solid bodies, leading to bulges). However, if the initial planetary orbit is significantly eccentric, this effect varies substantially across the orbital period (especially at periapsis — the point of strongest gravitational interaction) and can instead result in a spin-orbit resonance. In Mercury’s case, this is 3:2 (three rotations per two orbits) but other ratios can occur from 2:1 through 5:2 (Mahoney 2013). It’s worth noting that this effect is most pronounced for closer-in planets where the gravitational effects are greatest, so the effect should be even more relevant for the tightly packed exoplanetary architectures (e.g. TRAPPIST-1) that seem to be prevalent.
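A quick way to make those resonance ratios concrete: in a p:q spin-orbit resonance (p rotations per q orbits), the rotation period is simply q/p times the orbital period. A minimal sketch using Mercury’s orbital period:

```python
# Rotation period implied by a p:q spin-orbit resonance (p rotations per q orbits).
def rotation_period(orbital_period_days: float, p: int, q: int) -> float:
    return orbital_period_days * q / p

mercury_orbit = 87.97   # days
for p, q in [(1, 1), (3, 2), (2, 1), (5, 2)]:
    print(f"{p}:{q} resonance -> rotation period {rotation_period(mercury_orbit, p, q):.1f} days")
```

The 3:2 case reproduces Mercury’s observed rotation period of roughly 58.6 days.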

In extreme cases where the orbiting body’s orbit is nearly circular AND has a minimal or zero axial tilt — such as with the Moon — then the same hemisphere (libration allowing) faces the primary mass.

That said, for simplicity we will now assume that a smaller mass body (exoplanet) is orbiting a very much more massive body (star) — this is the focus of this review, with an unavoidable nod towards habitability.

For reasons of brevity and also pertaining to the exoplanet subject matter of recent posts, we will limit ourselves to the specific case of terrestrial exoplanets and their orbits around smaller main sequence stars.

The time to tidal locking can even be described by the adapted equation:

Tlock ≈ ω a⁶ (0.4 mp R²) / (3 G m*² k R⁵)   (Goldreich & Soter 1966; Peale 1977; Gladman 1996; Greenberg 2009)

Where Tlock is “time to tidal locking”, ω and k are constants (relating to the planet’s initial spin rate and its tidal response) which can be ignored for simplicity, m* is the mass of the star, mp is the mass of the planet, R is the exoplanet radius, a is the planet’s orbital semi-major axis and “G” is Newton’s all-important gravitational constant.

Tlock is substantially lengthened by “a” — increasing planetary semi-major axis (to the sixth power!). Tidal locking time also grows in proportion to the planet’s mass mp through the 0.4 mp R² term. However it is important to remember the context and just how massive a star, indeed ANY star, is — even an M dwarf star — many times, orders of magnitude even, more massive than a planet. A star thus plays the major role in the tidal locking of its attendant planets.

Because the stellar mass m* enters squared in the denominator alongside the gravitational constant G, increasing stellar mass will substantially decrease Tlock. All other things being equal, increasing stellar mass is a major factor in reducing time to tidal locking.
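Those two dependencies, semi-major axis to the sixth power and stellar mass squared in the denominator, are what separate the Earth’s situation from that of a habitable zone planet around an M dwarf. A minimal scaling sketch, keeping only the a⁶/m*² terms of the equation above (every constant cancels in the ratio) and assuming illustrative values of 0.1 AU and 0.3 solar masses for the M dwarf case:

```python
# Relative tidal locking timescale, keeping only the scaling terms of the
# equation above: Tlock is proportional to a^6 / Mstar^2 (planet held fixed).
def relative_lock_time(a_au: float, mstar_msun: float) -> float:
    return a_au**6 / mstar_msun**2

earth_around_sun = relative_lock_time(1.0, 1.0)   # Earth at 1 AU around the Sun
m_dwarf_hz = relative_lock_time(0.1, 0.3)         # HZ planet at 0.1 AU, 0.3 Msun M dwarf

print(f"M dwarf HZ planet locks ~{earth_around_sun / m_dwarf_hz:.0e} times faster")
```

Roughly five orders of magnitude faster, which is why synchronisation within a few billion years is the default expectation inside an M dwarf’s habitable zone.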

Figure 2: Stellar mass and type versus semi-major axis, with superimposed Tsync contours for 0.1, 1 and 10 gigayears, for an Earth-mass planet. (Penz 2005)

As applied to exoplanets, the concept of synchronisation is relatively new, dating back to Stephen Dole's seminal Habitable Planets for Man at the beginning of the space age in the early 1960s. The treatment was purely theoretical, with somewhat arbitrary parameters, but it implied that tidal lock would be a major impediment to the human-friendly "habitable" exoplanets Dole had in mind for his book. It was here that tidally locked planets and M-dwarf systems were first linked, in a negative light that to some extent persists today (before we even get to coronal mass ejections, EUV and stellar flares!). Atmospheric collapse, with the atmosphere freezing out on the side of the planet facing permanently away from the star, is not the least of these problems.

It was only in 1993 that Kasting et al employed sophisticated 1-D climate modelling as part of describing what constituted habitable planets. Habitable planets essentially now meant planets with conditions that could sustain liquid water on their surfaces. This is a rather lower bar than that set by Dole thirty years earlier, but far more applicable and still a pillar of exoplanet science today. More importantly, Kasting’s team also simulated star/planet gravitational interaction.

They did this using the "equilibrium tide" (ET) model, refined variants of which have become the staple of all subsequent related studies as the model itself has "evolved". The model essentially assumes that the gravitational force of the tide-raiser (the star) produces an elongated shape in the perturbed body (the exoplanet), and that the long axis of this bulge is slightly misaligned with respect to the line connecting the two centres of mass.

The misalignment is crucial: it arises from dissipative processes within the deformed exoplanet, and it is what drives the evolution of the orbit and of the spin angular momenta. From this, equations can be derived that map out the orbital and rotational history of exoplanets over time (see above). The ET model was originally derived from the Earth/Moon system by Darwin in 1880 and refined by Peale in 1977. Its iterations vary in subtle but significant ways, and they underpin increasingly sophisticated simulations as computing power increases. Barnes 2017 provides a detailed review of synchronisation and ET modelling (see below).
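By way of illustration, here is a minimal numerical sketch of the kind of spin evolution such equations describe: a planet's rotation is stepped forward in time under a fixed-magnitude equilibrium-tide torque that drives the spin toward the orbital mean motion. All the parameter values (Love number k2, dissipation factor Q, the 0.3 Msun star, the 0.05 AU orbit, the 8-hour starting spin) are illustrative assumptions rather than values from the papers cited, and prefactor conventions differ between ET variants.

```python
import math

G = 6.674e-11     # gravitational constant, SI units
YEAR = 3.156e7    # seconds per year

def despin_time(m_star, m_p, r_p, a, k2=0.3, Q=100.0, p_rot0_hr=8.0,
                dt_yr=1e3, t_max_yr=1e10):
    """Years for the planet's spin to slow to the orbital mean motion under a
    constant-magnitude equilibrium-tide torque (a deliberately simple model)."""
    torque = 1.5 * (k2 / Q) * G * m_star**2 * r_p**5 / a**6  # tidal torque on the planet
    inertia = 0.4 * m_p * r_p**2                             # uniform-sphere moment of inertia
    n = math.sqrt(G * m_star / a**3)                         # orbital mean motion
    omega = 2.0 * math.pi / (p_rot0_hr * 3600.0)             # initial spin rate
    t, dt = 0.0, dt_yr * YEAR
    while omega > n and t < t_max_yr * YEAR:
        omega -= (torque / inertia) * dt                     # tidal spin-down step
        t += dt
    return t / YEAR

# Hypothetical Earth-like planet 0.05 AU from a 0.3 Msun M dwarf:
t_sync = despin_time(0.3 * 1.989e30, 5.97e24, 6.371e6, 0.05 * 1.496e11)
print(f"Spin reaches the mean motion after ~{t_sync:.1e} yr")
```

For this toy configuration the spin reaches the orbital rate within roughly a hundred thousand years, a small fraction of any star's lifetime, which is the essential reason close-in planets are expected to end up locked (or caught in a resonance if eccentricity intervenes).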

Kasting et al showed that putative exoplanets orbiting in the habitable zones of M dwarfs, stars with masses up to 0.42 Msun, would synchronise within 4.5 billion years, and they introduced the now familiar term "tidal locking radius". Though a big step forward, this had the unfortunate consequence of continuing to propagate a pessimistic view of habitable exoplanets around such stars. Importantly, stellar mass was still viewed as the major, if not sole, cause of synchronisation. The graph below (from Yang et al 2014), though based on sophisticated modelling, still captures this type of thinking. Here various habitable zone model limits are superimposed on a plot of stellar temperature (and spectral type) versus stellar flux, with known exoplanets marked to add realistic perspective. Note too that for a 0.42 Msun star, with a temperature of around 3500 K, the 1-D inner edge of the habitable zone lies at a stellar flux very close to the value attributed to the recently discovered TOI 700d: the mid-80s as a percentage of Earth's insolation.

Figure 3: Temperature of star versus stellar flux graph with superimposed coloured star classes and dashed gray “tidal locking radius” line.
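For a quick check of that mid-80s percent figure, relative insolation scales as stellar luminosity divided by the square of the orbital distance. The snippet below uses rounded literature values for TOI 700's luminosity (about 0.023 Lsun) and TOI 700d's semi-major axis (about 0.163 AU); both are approximate, so treat the result as indicative only.

```python
# Relative insolation S = (L / Lsun) / (a / AU)^2, in units of Earth's insolation.
# Approximate, rounded literature values; illustrative only.
L_star = 0.023      # TOI 700 luminosity, solar luminosities
a_planet = 0.163    # TOI 700d semi-major axis, AU

s_rel = L_star / a_planet**2
print(f"TOI 700d receives ~{s_rel:.0%} of Earth's insolation")   # ~87%
```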

The effects of other factors, such as starting orbital eccentricity (already encountered above with Mercury), baseline rotation rate, the presence of companion bodies (Greenberg, Correia 2013), thermal tides arising from atmospheres (Leconte et al 2015), stellar and planetary interiors (Driscoll & Barnes 2015) and axial tilt (Barnes 2017), were not considered. It is only over the last five years or so that these have been added to simulations. The results of those studies substantially alter the whole tidal locking paradigm, with particular relevance to habitable zones, which despite refinement (Kopparapu 2013, Selsis 2007) have themselves changed only slightly, a considerable compliment to Kasting's work in 1993.

Taken altogether, habitable zone planets of M, K and G stars all have the potential to become tidally locked, not just those of M dwarfs, though the M dwarf potential remains much the greatest, especially around stars below 0.1 Msun such as TRAPPIST-1. Even the Earth, according to Barnes 2017, might have become synchronous had its initial rotation period been longer than about three days.

For the sake of brevity, this review has largely focused on stellar mass as a major driver of exoplanetary synchronisation. As knowledge in this area progresses, however, other processes come into play, and it is becoming increasingly difficult to tease these apart from the drivers of exoplanetary habitability. To that end, we must look in more detail at some of the factors named above.

The planet Venus is unusual in many ways, but one in particular stands out: its rotation is retrograde, and so slow that its rotation period is longer than its orbital period. Why? What makes Venus different? One factor is that it is a rocky planet with a substantial atmosphere (92 bar at the surface). We all know about the infamous runaway greenhouse effect this drives, making Venus the hottest planet in the Solar System despite being further from the Sun than (spin-orbit resonant) Mercury. But does this atmosphere have any other effects?

On Earth, the day/night cycle leads to variations in how heat is distributed through the atmosphere. The hottest time of day does not occur when the Sun is highest in the sky and heating is strongest, but several hours later. This is thermal inertia: the delay between solar heating and the atmosphere's thermal response redistributes atmospheric mass. Because the atmosphere and the Earth's surface are generally well coupled by friction, this gives rise to non-negligible thermal torques.

These torques are akin to the torques arising from the Sun's uneven gravitational pull on the Earth described above, though not as potent. For the Earth, on its comparatively distant 1 AU orbit, they are largely inconsequential, but for Venus, some 0.3 AU closer to the Sun, they become significant. Depending on their direction they can either slow down or speed up planetary rotation, but either way they help to resist synchronisation. Over time, the torques acting on Venus have slowed its rotation so much that it has reversed into the retrograde state we see today.

So if this is true of Venus, how about exoplanets? Can these atmospheric torques resist, or at least delay, synchronisation and tidal locking in vulnerable regions around a star? This has been extensively modelled by Leconte et al 2015, and the answer was a resounding yes, especially for smaller, less luminous stars with close-in habitable zones, and not just for exoplanets with 90 bar atmospheres either. Even 1 bar Earth-like atmospheres could help resist synchronisation for planets in the habitable zones of 0.5 to 0.7 Msun stars.

Ten bar atmospheres were also simulated and shown to resist synchronisation even for habitable zone planets orbiting 0.3 Msun stars (mid-M dwarfs). These are the high-pressure "maximum greenhouse" CO2 atmospheres postulated to occur in the outer regions of stellar habitable zones. But there are limits. Venus' 92 bar atmosphere is, ironically, so thick that most of the incident sunlight not reflected back to space is absorbed or scattered before it can reach the planetary surface and drive the thermal torques (Leconte et al 2015).
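One way to see how an asynchronous end state can arise is the deliberately crude torque balance sketched below: a gravitational tidal term that pulls the spin back toward synchronism (here taken simply proportional to ω − n) competes with a thermal atmospheric term of roughly constant size that pushes it away. The functional forms and the numbers A and B are illustrative assumptions only, not the Leconte et al model itself, but they show how the stable equilibria end up displaced from synchronous rotation.

```python
# Toy torque balance: gravitational tide vs thermal atmospheric tide.
# d(omega)/dt ~ -A*(omega - n) + B*sign(omega - n)
# The two terms cancel where |omega - n| = B/A, so the stable end state
# is *not* synchronous rotation.
A = 1.0      # strength of the gravitational (bodily) tide (arbitrary units)
B = 0.3      # strength of the thermal (atmospheric) tide (arbitrary units)

def net_torque(delta):
    """delta = omega - n, the offset from synchronous rotation."""
    sign = (delta > 0) - (delta < 0)
    return -A * delta + B * sign

for delta in (0.5, 0.3, 0.1, -0.1, -0.3, -0.5):
    print(f"omega - n = {delta:+.1f}  ->  net torque {net_torque(delta):+.2f}")
# The torques push the offset toward +/- B/A = +/-0.3: two stable, asynchronous states.
```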

Figure 4: Red arrow synchronous rotation / blue arrow asynchronous rotation graph (Leconte 2015).

Orbital synchronisation and exoplanet habitability remain a contentious theoretical field, subject to continual debate and constant change. Modern global climate modelling (GCM) has become a sophisticated sub-science. Using an earlier iteration of GCM, Yang et al showed in 2013 that synchronised M-dwarf habitable zone planets would form thick cloud banks above their sub-stellar point, reflecting much of the incident stellar flux, reducing the energy reaching the surface and so lowering global temperatures. The net effect, in theory, is to extend the stellar habitable zone inwards. However, the same author collaborated with Wolf and Kopparapu in 2016 to apply an updated 3-D model to the same problem. This showed that the sub-stellar cloud bank could not form, or would form and then drift, effectively rebutting the 2013 findings and moving the habitable zone back to its pre-2013 position. Expect more of this!

So, all things considered, just how easily can an exoplanet, and in particular a habitable zone planet, become tidally locked? Barnes 2017 attempted to address exactly this question for exoplanets in circular orbits. He applied two well-recognised refined variants of the ET model (CPL left, CTL right in the graphic below) to two model populations of exoplanets orbiting stars of differing mass, and ran thousands of gigayear simulations for each (think of the computing power and time!). One population had a starting rotation period of 8 hours and an axial tilt of 60°; the other had a starting rotation period of ten days and a tilt of 0°. This produced the four outcomes illustrated below. The superimposed grey shading represents the latest iteration of the habitable zone (Kopparapu 2013), dark grey for the "conservative" limits and light grey for the "optimistic".

Figure 5: “Four in one” black and white stellar mass vs semi-major axis / superimposed greyscale habzone graphs.

These results are indicative and differ significantly from the status quo view that tidal locking applies only to exoplanets orbiting close in to M dwarfs and the smaller K dwarfs. For one thing, even that older paradigm implies that at least some "Goldilocks" stars are not quite as homely as expected (more Kasting than Dole). The Barnes work hints at overlap between the tidal locking region and the habitable zone for a large fraction of K-class and even many G-class stars, driven by factors beyond simple stellar mass. Planets with a slow initial rotation rate and low axial tilt are clearly at greater risk. Opposed to this are non-synchronising factors such as, inter alia, higher baseline orbital eccentricities and the close proximity of other orbiting bodies: moons, planets (think TRAPPIST-1) and binary stars or brown dwarfs, as with the recently described Gliese 229Ac system.

What this also shows is the inextricable link between orbital features and planetary habitability. Nothing demonstrates this better than the Kepler discoveries, with their abundance of short orbital period planets whose potential habitable zone candidates lie within just tenths of an AU, or less, of their parent star. That is very much inside the "red arrow" synchronous zone of the Leconte graphic above.

There are now over 4,000 known exoplanets. The current focus is on their "characterisation", which largely means atmospheres and biosignatures. But it is clear that we also need to know far more about their present and historical orbital properties. This is part of the process of identifying habitable planets and zones, which are about so much more than stellar mass.

Most of the exoplanets discovered so far by Kepler and others orbit close in to their stars, including the few that sit in the potentially tidally locked habitable zone. Ongoing Doppler spectroscopy and TESS will identify thousands more such worlds, many even closer to their host stars given TESS's short 27-day observation runs. TOI 700d and Gliese 229Ac are just for starters. Hopefully the search for habitability will expand to encompass its unavoidable connection with planetary orbital features.

Know the star to know the planet, but know the orbit to know them both.

Figure 6: Stellar effects/planetary properties/planetary systems (Meadows and Barnes 2018)

References

Barnes, R. Formation and Evolution of Exoplanets. John Wiley & Sons, p. 248, 2010

Barnes, R. Tidal locking of habitable exoplanets. Celestial mechanics and dynamical astronomy Vol 129, Issue 4, pp 509-536, Dec 2017

Darwin, G H. On the secular changes in the elements of the orbit of a satellite revolving about a tidally distorted planet. Royal Society of London Philosophical Transactions, Series I, 171:713-891 ; 1880.

Dole, S H. Habitable Planets for Man. 1964

Goldreich, P. Final spin rates of planets and satellites. Astronomical Journal, 71, 1966

Goldreich, P., Soter, S. Q in the solar system. Icarus 5, 375-389, 1966

Gladman, B et al. Synchronous locking of tidally evolving satellites. Icarus 133 (1) 166-192, 1996

Greenberg, R. Frequency dependence of tidal Q, The Astrophysical Journal, 698, L42-45, 2009

Kasting, J. F. Habitable zones around main sequence stars. Icarus 101, 108-128, Jan 1993

Kopparapu, R K et al. Habitable zones around main sequence stars: New Estimates. The Astrophysical Journal, 765;131, March 2013

Kopparapu R K, Wolf E, Yang et al. The inner edge of the habitable zone for synchronously rotating planets around low-mass stars using general circulation models. The Astrophysical Journal Volume 819, Number 1, March 2016

Luu, K. Effects of perturbations on space debris in super-synchronous storage orbits. Air Force Research Laboratory Technical Reports, 1998

Mahoney, T. J. Mercury. Springer Science & Business Media, 2013

Meadows, V. S., Barnes, R. K. Factors affecting exoplanet habitability. In Handbook of Exoplanets, p. 57, 2018

Peale, S J. Rotation histories of natural satellites. Burns, J A, Editor, IAU Colloquium 28; Planetary Satellites, p 87-111, 1977

Penz,T et al. Constraints for the evolution of habitable planets: Implications for the search of life in the Universe: Evolution of Habitable planets, 2005

Yang, J et al. Stabilising cloud feedback dramatically expands the habitable zone of tidally locked planets. The Astrophysical Journal Letters: 771:L45, July 2013

Yang, J et al. Strong dependence of the inner edge of the habitable zone on planetary rotation rate. The Astrophysical Journal Letters: 787:1, April 2014
