Data Return from Proxima Centauri b

The challenges involved in sending gram-class probes to Proxima Centauri could not be more stark. They’re implicit in Kevin Parkin’s analysis of the Breakthrough Starshot system model, which ran in Acta Astronautica in 2018 (citation below). The project settled on twenty percent of the speed of light as a goal, one that would reach Proxima Centauri b well within the lifetime of researchers working on the project. The probe mass is 3.6 grams, with a 200 nanometer-thick sail some 4.1 meters in diameter.

The paper we’ve been looking at from Marshall Eubanks (along with a number of familiar names from the Initiative for Interstellar Studies including Andreas Hein, his colleague Adam Hibberd, and Robert Kennedy) accepts the notion that these probes should be sent in great numbers, and not only to exploit the benefits of redundancy to manage losses along the way. A “swarm” approach in this case means a string of probes launched one after the other, using the proposed laser array in the Atacama desert. The exciting concept here is that these probes can reform themselves from a string into a flat, lens-shaped mesh network some 100,000 kilometers across.

Image: Figure 16 from the paper. Caption: Geometry of swarm’s encounter with Proxima b. The Beta-plane is the plane orthogonal to the velocity vector of the probe "at infinity" as it approaches the planet; in this example the star is above (before) the Beta-plane. To ensure that the elements of the swarm pass near the target, the probe-swarm is a disk oriented perpendicular to the velocity vector and extended enough to cover the expected transverse uncertainty in the probe-Proxima b ephemeris. Credit: Eubanks et al.

The Proxima swarm presents one challenge I hadn’t thought of. We have to be able to predict the position of Proxima b to within 10,000 kilometers at least 8.6 years before flyby – this being the time for a complete information cycle from Earth to Proxima and back. Effectively, we need to know the planet’s velocity to within 1 meter per second, with a correspondingly tight angular position (0.1 microradians).

Although we already have Proxima b’s period (11.68 days), we need to determine its line of nodes, eccentricity, inclination and epoch, along with its perturbations by the other planets in the system. At the time of flyby, the most recent Earth update will be at least 8.5 years old. The Proxima b orbit state will need to be propagated over at least that interval to predict its position, and that prediction needs to be accurate to the order of the swarm diameter.
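As a rough illustration of what that propagation involves, here is a minimal two-body Kepler propagator in Python. The orbital elements other than the 11.68-day period are placeholders rather than measured values for Proxima b, and real work would of course include perturbations from the other planets plus a full uncertainty analysis.

```python
import math

def propagate_kepler(a_au, e, period_days, t_days, tol=1e-12):
    """Propagate a two-body orbit: return true anomaly and orbital radius
    after t_days, starting from periapsis at t = 0."""
    M = 2.0 * math.pi * (t_days % period_days) / period_days  # mean anomaly
    E = M if e < 0.8 else math.pi                              # initial guess
    for _ in range(100):                                       # Newton's method on Kepler's equation
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    nu = 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))  # true anomaly
    r = a_au * (1 - e * math.cos(E))                           # orbital radius, AU
    return nu, r

# Period from the paper; semi-major axis is the approximate published value;
# the eccentricity is purely a placeholder.
period = 11.68           # days
a = 0.049                # AU
e = 0.05                 # assumed

t = 8.5 * 365.25         # propagate over the 8.5-year light-lag interval
nu, r = propagate_kepler(a, e, period, t)
print(f"After {t:.0f} days: true anomaly {math.degrees(nu):.1f} deg, r = {r:.4f} AU")
```

A real ephemeris would also have to carry the uncertainties on every element, which is where the 1 m/s velocity requirement bites.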

The authors suggest that a small spacecraft in Earth orbit can refine Proxima b’s position and the star’s ephemeris, but note that a later paper will dig into this further.

In the previous post I looked at the “Time on Target” and “Velocity on Target” techniques that would make swarm coherence possible: later-launched probes are given higher speeds, then use increased drag as they catch up to the craft sent before them, slowing to match their velocity. From the paper again:

A string of probes relying on the ToT technique only could indeed form a swarm coincident with the Proxima Centauri system, or any other arbitrary point, albeit briefly. But then absent any other forces it would quickly disperse afterwards. Post-encounter dispersion of the swarm is highly undesirable, but can be eliminated with the VoT technique by changing the attitude of the spacecraft such that the leading edge points at an angle to the flight direction, increasing the drag induced by the ISM, and slowing the faster swarm members as they approach the slower ones. Furthermore, this approach does not require substantial additional changes to the baseline BTS [Breakthrough Starshot] architecture.

In other words, probes launched at different times with a difference in velocity target a point on their trajectory where the swarm can cohere, as the paper puts it. The resulting formation is then retained for the rest of the mission. The plan is to adjust the attitude of the leading probes continually as they move through the interstellar medium, which means varying their aspect ratio and sectional density. A probe can fly edge-on, for instance, or fully face-on, with variations in between. The goal is that the probes launched later catch up with, but do not move past, the early probes.
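To make the “time on target” idea concrete, here is a small sketch of my own, with made-up numbers rather than anything from the paper, computing the cruise speed each later-launched probe needs in order to arrive at a chosen rendezvous point at the same moment as the first probe:

```python
# Illustrative "time on target" calculation: probes launched at different
# times all reach the same rendezvous distance simultaneously.
C = 299_792.458                    # speed of light, km/s
RENDEZVOUS_LY = 0.1                # rendezvous point, light years from Earth (assumed)
LY_KM = 9.4607e12                  # kilometers per light year

v0 = 0.20 * C                      # first probe cruises at 0.2 c
t_arrival = RENDEZVOUS_LY * LY_KM / v0   # arrival time of the first probe, seconds

for delay_days in (0, 1, 2, 5, 10):      # later launches, delayed by this many days
    delay_s = delay_days * 86400.0
    v_needed = RENDEZVOUS_LY * LY_KM / (t_arrival - delay_s)
    print(f"launch +{delay_days:2d} d -> cruise speed {v_needed / C:.5f} c")
```

In the paper’s VoT scheme, the later probes would then shed that excess speed through attitude-induced drag as they close on the leaders, rather than overshooting.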

All this is going to take a lot of ‘smarts’ on the part of the individual probes, meaning we have to have ways for them to communicate not just with Earth but with each other. The structure of the probes discussed here is an innovation. The authors propose concentrating key components like laser communications and computation: the central disk stays flat, while the ‘heart of the device,’ as they put it, sits in a 2-cm thickened rim around the outside of the sail disk.

The center of the disk is optical, or as the paper puts it, ‘a thin but large-aperture phase-coherent meta-material disk of flat optics similar to a fresnel lens…’ which will be used for imaging as well as communications. Have a look at the concept:

Image: This is Figure 3a from the paper. Caption: Oblique view of the top/forward of a probe (side facing away from the launch laser) depicting array of phase-coherent apertures for sending data back to Earth, and optical transceivers in the rim for communication with each other. Credit: Eubanks et al.

So we have a sail moving at twenty percent of lightspeed through an incoming hydrogen flux, an interesting challenge for materials science. The authors consider both aerographene and aerographite. I had assumed these were the same material, but digging into the matter reveals that aerographene consists of a three-dimensional network of graphene sheets mixed with porous aerogel, while aerographite is a sponge-like formation of interconnected carbon nanotubes. Both offer extremely low density, so much so that the paper notes the performance of aerographene for deceleration is 10⁴ times better than conventional Mylar. Usefully, both of these materials have been synthesized in the laboratory and mass production seems feasible.

Back to the probe’s shape, which is dictated by the needs not only of acceleration but of the survival of its electronics – remember that these craft must endure a laser launch involving at least 10,000 g’s. The raised-rim layout reminds the authors of a red corpuscle, as opposed to the simple flat disk envisioned up to now. The four-meter central disk contains 247 25-cm structures arranged, as the illustration shows, like a honeycomb. This optical array will be used both for imaging Proxima b and for returning data to Earth, and each element offers redundancy, given that impacts with interstellar hydrogen will invariably damage some of them.

Remember that the plan is to build an intelligent swarm, which demands laser links between the probes themselves. Making sure each probe is aware of its neighbors is crucial here, for which purpose it will use the optical transceivers around its rim. The paper calculates that this would make each probe detectable by its closest neighbor out to something close to 6,000 kilometers. The probes transmit a pulsed beacon as they scan for neighboring probes, and align to create the needed mesh network. The alignment phase is under study and will presumably factor into the NIAC work.
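For a feel of why a few-thousand-kilometer detection range is plausible, here is a toy diffraction-limited link budget. Every number in it (transmit power, aperture size, wavelength) is an assumption of mine for illustration, not a value from the paper:

```python
import math

# Toy free-space optical link budget between two swarm probes.
# All parameters below are illustrative assumptions, not values from the paper.
P_tx = 1e-3            # transmit power, W (1 mW beacon, assumed)
wavelength = 1.06e-6   # m (assumed near-infrared laser)
D_tx = 0.01            # transmit aperture diameter, m (assumed 1 cm rim transceiver)
D_rx = 0.01            # receive aperture diameter, m
d = 6_000e3            # separation, m
E_photon = 6.626e-34 * 3e8 / wavelength   # energy per photon, J

theta = 1.22 * wavelength / D_tx          # diffraction-limited half-angle, rad
spot_radius = theta * d                   # beam radius at the receiver, m
A_rx = math.pi * (D_rx / 2) ** 2
frac = A_rx / (math.pi * spot_radius ** 2)  # fraction of beam power collected

P_rx = P_tx * frac
print(f"beam radius at {d/1e3:.0f} km: {spot_radius:.0f} m")
print(f"received power: {P_rx:.2e} W  (~{P_rx / E_photon:.1e} photons/s)")
```

A few hundred thousand photons per second against a dark sky is comfortably within reach of an avalanche photodiode, at least in this toy picture, so a detection range in the thousands of kilometers is not outlandish.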

The paper backs out to explain the overall strategy:

…our innovation is to use advances in optical clocks, mode-locked optical lasers, and network protocols to enable a swarm of widely separated small spacecraft or small flotillas of such to behave as a single distributed entity. Optical frequency and reliable picosecond timing, synchronized between Earth and Proxima b, is what underpins the capability for useful data return despite the seemingly low source power, very large space loss and low signal-to-noise ratio.

What happens is that the optical pulses from the probes are synchronized, so that despite the sharp constraints on available energy, the same signal photons are ‘squeezed’ into a smaller transmission slot, which increases the brightness of the signal. That brightening yields data rates that could not otherwise be achieved, and we also get data from various angles and distances. On Earth, a square kilometer array of 796 ‘light buckets’ can receive the pulses.
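A minimal sketch of that ‘squeezing’ idea, with assumed numbers: keep the average emitted energy fixed, shrink the transmission slot, and watch the peak power of each pulse climb.

```python
# Illustrative only: same average power, shorter synchronized pulse slots.
# The numbers are assumptions, not figures from the Eubanks et al. paper.
P_avg = 1e-3          # average optical power per probe, W (assumed)
rep_rate = 1e3        # pulse repetition rate, Hz (assumed)
energy_per_pulse = P_avg / rep_rate   # J delivered in each pulse slot

for slot in (1e-3, 1e-6, 1e-9, 1e-12):       # slot width in seconds, down to 1 ps
    peak_power = energy_per_pulse / slot
    print(f"slot {slot:.0e} s -> peak power {peak_power:.1e} W")
```

Synchronizing many probes so their pulses land in the same picosecond slot then multiplies that peak again at the receiver, which is what the optical-clock timekeeping buys.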

Image: This is Figure 13 from the paper. Caption: Figure 13: A conceptual receiver implemented as a large inflatable sphere, similar to widely used inflatable antenna domes; the upper half is transparent, the lower half is silvered to form a half-sphere mirror. At the top is a secondary mirror which sends the light down into a cone-shaped accumulator which gathers it into the receiver in the base. The optical signals would be received and converted to electrical signals – most probably with APDs [avalanche photo diodes] at each station and combined electrically at a central processing facility. Each bucket has a 10-nm wide band-pass filter, centered on the Doppler-shifted received laser frequency. This could be made narrower, but since the probes will be maneuvering and slowing in order to meet up and form the swarm, and there will be some deceleration on the whole swarm due to drag induced by the ISM, there will be some uncertainty in the exact wavelength of the received signal. Credit: Eubanks et al.

If we can achieve a swarm that is in communication with its members using micro-miniaturized clocks to keep operations synchronous, we can thus use all of the probes to build up a single detectable laser pulse bright enough to overcome the background light of Proxima Centauri and reach the array on Earth. The concept is ingenious and the paper so rich in analysis and conjecture that I keep going back to it, but don’t have time today to do more than cover these highlights. The analysis of enroute and approach science goals and methods alone would make for another article. But it’s probably best that I simply send you to the paper itself, one which anyone interested in interstellar mission design should download and study.

The paper is Eubanks et al., “Swarming Proxima Centauri: Optical Communication Over Interstellar Distances,” submitted to the Breakthrough Starshot Challenge Communications Group Final Report and available online. Kevin Parkin’s invaluable analysis of Starshot is Parkin, K.L.G., “The Breakthrough Starshot system model,” Acta Astronautica 152 (2018), 370–384 (abstract / preprint).

Atmospheric Types and the Results from K2-18b

The exoplanet K2-18b has been all over the news lately, with provocative headlines suggesting a life detection because of the possible presence of dimethyl sulfide (DMS), a molecule produced by life on our own planet. Is this a ‘Hycean’ world, covered with oceans under a hydrogen-rich atmosphere? Almost nine times as massive as Earth, K2-18b is certainly noteworthy, but just how likely are these speculations? Centauri Dreams regular Dave Moore has some thoughts on the matter, and as he has done before in deeply researched articles here, he now zeroes in on the evidence and the limitations of the analysis. This is one exoplanet that turns out to be provocative in a number of ways, some of which will move the search for life forward.

by Dave Moore

124 light years away in the constellation of Leo lies an undistinguished M3V red dwarf, K2-18. Two planets are known to orbit this star: K2-18c, a 5.6 Earth-mass planet orbiting 6 million miles out, and K2-18b, an 8.6 Earth-mass planet orbiting 16 million miles out. The latter planet transits its primary, so from its mass and size (2.6 x Earth’s), we have its density (2.7 g/cm3), which classes the planet as a sub-Neptune. The planet’s relatively large radius and its primary’s low luminosity make it a good target for obtaining its atmospheric spectrum, but what also makes this planet of special interest to astronomers is that its estimated irradiance of 1368 watts/m2 is almost the same as Earth’s (1380 watts/m2).

Determining an exosolar planet’s atmospheric constituents, even with the help of the James Webb telescope, is no easy matter. For a detectable infrared spectrum, molecules like H2O, CH4, CO2 and CO generally need to have a concentration above 100 ppm. The presence of O3 can function as a stand-in for O2, but molecules such as H2, N2, with no permanent dipole moment, are much harder to detect.

The Hubble telescope got a spectrum of K2-18b in 2019. Water vapor and H2 were detected, and it was assumed to have a deep H2/He/steam atmosphere above a high pressure ice layer over an iron/rocky core, much like Neptune. On September 11 of this year, the results of spectral studies by the James Webb telescope were announced: CH4 and CO2 were found as well as possible traces of DMS (Dimethyl sulfide). No signal of NH3 was found. Nor was there any sign of water vapor. The feature thought to be water vapor turned out to be a methane line of the same frequency.

Figure 1: Spectra of K2-18b obtained by the James Webb telescope

This announcement resulted in considerable excitement and speculation in the popular press. K2-18b was called a Hycean planet. It was speculated that it had an ocean, and the possible presence of DMS was taken as an indication of life, because oceanic algae produce this chemical. But that was not what intrigued me. What caught my attention was the seemingly anomalous combination of CH4 and CO2 in the planet’s atmosphere. How could a planet have CH4, a highly reduced form of carbon, in equilibrium with CO2, the oxidized form of carbon? A search turned up a paper from February 2021: “Coexistence of CH4, CO2, and H2O in exoplanet atmospheres,” by Woitke, Herbort, Helling, Stüeken, Dominik, Barth and Samra.

The authors’ purpose for this paper was to help with the detection of biosignatures. To quote:

The identification of spectral signatures of biological activity needs to proceed via two steps: first, identify combinations of molecules which cannot co-exist in chemical equilibrium (“non-equilibrium markers”). Second, find biological processes that cause such disequilibria, which cannot be explained by other physical non-equilibrium processes like photo-dissociation. […] The aim of this letter is to propose a robust criterion for step one…

The paper presents an exhaustive study of the lowest energy state (Gibbs free energy) composition of exoplanet atmospheres for all possible abundances of Hydrogen, Carbon, Oxygen, and Nitrogen in chemical equilibrium. To do that, they ran thermodynamic simulations of varying mixtures of the above atoms and looked at the resulting molecular ratios. At low temperatures (T ≤ 600 K), they found that the only molecular species you get in any abundance are H2, H2O, CH4, NH3, N2, CO2 and O2. At higher temperatures, the equilibrium shifts towards more H2, and CO begins to appear.

Some examples of their results:

If O > 0.5 x H + 2 x C ––> O2-rich atmosphere, no CH4
If H > 2 x O + 4 x C ––> H2-rich atmosphere, no CO2
If C > 0.25 x H + 0.5 x O ––> Graphite condensation, no H2O

They also used the equations to determine which elemental mixtures produce equal partial pressures of the various molecules:

If H = 2 x O then the CO2 level will equal CH4
If 12 C = 2 x O + 3 x H then the CO2 level will equal H2O
If 12 C = 6 x O + H then the H2O level will equal CH4
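The threshold conditions above lend themselves to a tiny classifier. The sketch below simply encodes the three inequalities quoted earlier (my own transcription, taking H, C and O element abundances in any consistent units), so treat it as an illustration of how the boundaries carve up composition space rather than a substitute for the paper’s full equilibrium calculation:

```python
def classify_atmosphere(H, C, O):
    """Rough atmosphere-type guess from H, C, O element abundances,
    using the threshold inequalities quoted above (N is ignored,
    since N2/NH3 do not set the type)."""
    if O > 0.5 * H + 2 * C:
        return "Type B (O2-rich: O2, H2O, CO2, N2; no CH4)"
    if H > 2 * O + 4 * C:
        return "Type A (H2-rich: H2O, CH4, NH3; no CO2)"
    if C > 0.25 * H + 0.5 * O:
        return "graphite condensation regime (no H2O)"
    return "Type C candidate (H2O, CO2 and CH4 can coexist)"

# A few illustrative mixtures (atom abundances, arbitrary units):
print(classify_atmosphere(H=10, C=0.5, O=1))   # strongly hydrogen-rich
print(classify_atmosphere(H=1,  C=0.5, O=4))   # oxygen-rich
print(classify_atmosphere(H=4,  C=0.8, O=2))   # in between
```

K2-18b’s observed CH4 plus CO2 combination with no NH3 is what pushes it toward the Type C corner of this space.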

To summarize, I quote from their abstract:

We propose a classification of exoplanet atmospheres based on their H, C, O, and N element abundances below about 600 K. Chemical equilibrium models were run for all combinations of H, C, O, and N abundances, and three types of solutions were found, which are robust against variations of temperature, pressure, and nitrogen abundance.

Type A atmospheres [which] contain H2O, CH4, NH3, and either H2 or N2, but only traces of CO2 and O2.

Type B atmospheres [which] contain O2, H2O, CO2, and N2, but only traces of CH4, NH3, and H2.

Type C atmospheres [which] contain H2O, CO2, CH4, and N2, but only traces of NH3, H2, and O2.

Type A atmospheres are found in the giant planets of our outer solar system. Type B atmospheres occur in our inner solar system. Earth, Venus and Mars fall under this classification, but we don’t see any planets with Type C atmospheres.

Below is a series of charts showing the results for each of the six main molecular species over a range of mixtures.

Figure 2: The vertical axis is the ratio of Hydrogen to Oxygen, starting at 100% Hydrogen at the bottom and running to 100% Oxygen at the top. The horizontal axis shows the proportion of Carbon in the total mixture (the ratio runs up to 35%). Molecular concentrations are in chemical equilibrium as a function of Hydrogen, Carbon, and Oxygen element abundances, calculated for T = 400 K and p = 1 bar. The blank regions are concentrations of < 10⁻⁴.

The central grey triangle marks the region in which H2O, CH4, and CO2 can coexist in chemical equilibrium. The thin grey lines bisecting the triangle indicate where two of the constituents are at equal concentration. These lines are hard to discern unless you can magnify the original image. For H2O and CO2 at equal concentration, it’s the dashed line (the near-vertical line running upwards from 0.2 on the horizontal scale). For CO2 and CH4, it’s the horizontal line. And for H2O and CH4, it’s the dotted line swooping upwards toward the top right-hand corner.

The color bars at the right-hand side of the charts are both a color representation of the concentration and show the proportion of Nitrogen tied up as N2, i.e. that which is not NH3. Not surprisingly, the more Hydrogen there is in the mix, the higher the proportion of NH3 there is.

Other Results from the Paper

In the area around the stoichiometric ratio for water you get maximum H2O production, and supersaturation occurs. Clouds form and the water rains out. Therefore, you cannot get an atmosphere with very high concentrations of water vapor unless the temperature is over 650 K, near the critical point of water. Precipitation moves the atmospheric composition out of the area that gives CO2/CH4 mixtures.

Atmospheres with high carbon concentrations and with Hydrogen and Oxygen near their stoichiometric ratio have most of their constituents tied up as water, so at a certain point carbon forms neither CO2 nor CH4 but rains out as soot. This, however, only precludes mixtures at the far right-hand side of the CO2/CH4 triangle.

Full-equilibrium condensation models show that outgassing from warm rock, such as mid-oceanic ridge basalt, can naturally produce Type C atmospheres.

Thoughts and Speculations

i) While it is difficult to argue with the man who coined the term, I still think Madhusudhan’s description of K2-18b as Hycean is too broad. In a YouTube interview, Madhusudhan refers to his paper “Habitability and Biosignatures of Hycean Worlds,” which suggests that ocean-covered planets under a Hydrogen atmosphere can exist within a zone that reaches up to irradiance levels slightly greater than Earth’s; however, he doesn’t mention the work by Lous et al in their paper, “Potential long-term habitable conditions on planets with primordial H–He atmospheres,” which showed that inside the equivalent of 2 au from our Sun (that is, at irradiance levels equal to or greater than at 2 au), the Hydrogen atmosphere required to maintain Earthlike temperatures without cooking the planet is so thin that it is lost quickly over geological timescales. (You can see this in more detail in my article Super Earths/Hycean Worlds.) I would therefore define a Hycean planet as a rocky world with a radius up to 1.8 x Earth’s, orbiting outside the irradiance equivalent of 2 au from our sun. K2-18b, being both larger than this and less dense than a rocky world, would fall, in my mind, firmly into the category of sub-Neptune.

ii) Another way of thinking of Type A, Type B and Type C atmospheres is to denote them as Hydrogen dominated, Oxygen dominated and Carbon dominated. Carbon-dominated atmospheres may have the bulk of their constituents as Hydrogen and Oxygen; but because the enthalpy of the Hydrogen-Oxygen reaction is so much greater than that of the other reactions, when Hydrogen and Oxygen are close to their stoichiometric ratio they preferentially remove themselves from the mix, leaving Carbon as the dominant constituent. There is no Nitrogen-dominated atmosphere because over most of its range Nitrogen sticks to itself, forming inert N2.

iii) The lack of H2O spectral lines is puzzling. Madhusudhan in his interview suggests that the spectrum was a shot of the high, dry stratosphere. To cross-check the plausibility of this, I looked up the physical data on DMS. Dimethyl sulfide vaporizes at 37°C and freezes at -98°C, which is lower than CO2’s freezing point. It also has a much higher vapor pressure than water at below-freezing temperatures, so this does not contradict the assumption.

iv) I’m surprised this paper is not more widely known, as it not only provides a powerful tool for the analysis of exosolar planets’ atmospheric spectra, but can also point to other aspects of a planet.

After the Hubble results came out in 2019, papers were published to model the formation of K2-18b, and while a range of possibilities could match the planet’s characteristics, they all started from the assumption that the planet began with the formation of a rocky/iron core followed by the accretion of large amounts of H2, Helium, and H2O. According to the coexistence paper, though, you cannot have large amounts of H2 and get a CO2/CH4 mix with no NH3. So to arrive at this state, this planet must either never have had much gas accretion in the first place, or have lost large amounts of Hydrogen after it formed. This latter scenario would require the planet to gain a Hydrogen envelope while at less than full mass in a hot nebula and then, at full mass in a cooler environment, lose most of its Hydrogen.

It is much easier to explain the planet’s characteristics by assuming it formed outside the snowline, never gained much of a gas envelope in the first place, and spiraled in to its present position. If it formed from icy bodies like Ganymede and Titan (density ~1.9 g/cm3), this would give a good match for its density (2.7 g/cm3), allowing for gravitational contraction. The snow line is also the zone where carbonaceous chondrites form, so this would give the planet a higher carbon content than a purely rocky/iron one.

v) Madhusudhan, again from his interview, seems to think that K2-18b is an ocean planet, but I’m dubious about this for two reasons:

The first is that, from the work done on Hycean planets by Lous et al, any depth of atmosphere, especially with the potent greenhouse mix of CO2 and CH4, is likely to result in a runaway-greenhouse steam atmosphere inside the classically defined habitable zone (inside 2 au for our sun).

The planet’s CO2/CH4 mix also points against this. From the paper, if there is a slight excess of Hydrogen over the stoichiometric ratio for water, then condensing H2O out, as either water or high-pressure ice, pushes the planet’s atmosphere towards a Type A Hydrogen excess, with no CO2 and with NH3 lines appearing.

All of this would point towards a planet with a rocky/iron core overlaid by high-pressure ice, which would, at about the megabar level, transition to a gas atmosphere composed mainly of supercritical steam. This would make up a significant volume of the planet. At the top of this atmosphere, the water, now in the form of steam, would condense out as virga (rain that evaporates before reaching a surface), leaving a dry stratosphere consisting mainly of CO2, CH4, H2 and N2.

To test my assumption, I did a rough back-of-the-envelope calculation using online calculators, looking at the wet adiabatic lapse rate (the rate of increase in temperature when saturated air is compressed) per pressure doubling, starting from 1 bar at 20°C. This rate (1.5°C/1000 ft) is considerably less than the rate for dry gases (3°C/1000 ft).

It was all very ad hoc, but the first thing I noted was that for each pressure doubling, the boiling point of water goes up significantly (at 100 bar, water boils at around 300°C) until its temperature approaches the critical point (374°C), where it levels off. So the lapse-rate increase in temperature chases the boiling point of water as you go deeper and deeper into the atmosphere; from my calculations, it catches water’s boiling point at 270°C and 64 bar. The calculations are arbitrary (I was using Earth’s atmospheric composition and gravity), and small changes in the parameters can result in big changes in the crossover point; but what this does point to is that if the planet has an ocean, it could be a rather hot one under a dense atmosphere, and if the atmosphere has any great depth, the ocean is likely to be a supercritical fluid.
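Here is a rough sketch of that back-of-the-envelope reasoning. It uses the Clausius-Clapeyron relation for water’s boiling point and a single assumed temperature rise per pressure doubling for the wet adiabat; the 42.5 K figure below is picked purely so the curves cross in the regime Dave describes, and, as he notes, small changes in it move the crossover a long way.

```python
import math

# Boiling point of water vs pressure from the Clausius-Clapeyron relation,
# integrated from 373 K at 1 bar (a crude approximation, capped at the
# 647 K critical temperature).
L_VAP = 40660.0        # J/mol, latent heat of vaporization
R_GAS = 8.314          # J/(mol K)

def boiling_point(p_bar):
    inv_T = 1.0 / 373.15 - (R_GAS / L_VAP) * math.log(p_bar)
    return min(1.0 / inv_T, 647.0)

T_adiabat = 293.15      # start: 20 C at 1 bar
dT_per_doubling = 42.5  # K per pressure doubling -- an assumed, tunable number
p_bar = 1.0

while p_bar < 600:
    p_bar *= 2
    T_adiabat += dT_per_doubling
    T_boil = boiling_point(p_bar)
    print(f"{p_bar:4.0f} bar: adiabat {T_adiabat - 273.15:4.0f} C, "
          f"boiling point {T_boil - 273.15:4.0f} C")
    if T_adiabat >= T_boil:
        print("adiabat has caught the boiling point -- below this level the water is vapor/supercritical")
        break
```

With a different assumed rate the crossover slides by hundreds of bars, which is exactly the sensitivity Dave mentions.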

Also, for the atmosphere to be thin, the planet’s ratio of CO2, CH4 and H2 must be less than 1/10,000 that of H2O, which is not something I regard as likely, given what we know about the outer solar system.

I’ll leave you with a phase diagram of water showing (red line) the dry adiabat of Venus, shifted 25°C cooler to represent a dry Earth, and (blue line) the wet adiabat I calculated. It’s also a handy diagram to play with, as it gives you an idea of how deep the ocean or critical-fluid layer will be at a given temperature before it turns into a layer of high-pressure ice.

vi) One final point, and this reinforces the purpose of the paper: we need to thoroughly understand planetary chemistry to eliminate false biomarkers. DMS is widely touted as a biomarker, but look at the most thermodynamically stable forms of sulfur: in a Type A reducing atmosphere, it’s H2S; in a wet, oxidizing Type B atmosphere, it’s the sulfate (SO4²⁻) ion. Unfortunately, the authors of the paper did not extend their thermodynamic analysis to Sulfur, but if we look at DMS’s formula, (CH3)2S, it looks an awful lot like a good candidate for the most thermodynamically stable form of Sulfur in a Type C atmosphere, not a biomarker.

References

Wikipedia: K2-18b
https://en.wikipedia.org/wiki/K2-18b

N. Madhusudhan, S. Sarkar, S. Constantinou, M. Holmberg, A. Piette, and J. Moses, Carbon-bearing Molecules in a Possible Hycean Atmosphere, Preprint, arXiv: 2309.05566v2, Oct 2023
https://esawebb.org/media/archives/releases/sciencepapers/weic2321/weic2321a.pdf

P. Woitke, O. Herbort, Ch. Helling, E. Stüeken, M. Dominik, P. Barth and D. Samra, Coexistence of CH4, CO2, and H2O in exoplanet atmospheres, Astronomy & Astrophysics, Vol. 646, A43, Feb 2021
https://doi.org/10.1051/0004-6361/202038870

N. Madhusudhan, M. Nixon, L. Welbanks, A. Piette and R. Booth, The Interior and Atmosphere of the Habitable-zone Exoplanet K2-18b, The Astrophysical Journal Letters, 891:L7 (6pp), 2020 March 1
https://doi.org/10.3847/2041-8213/ab7229

Super Earths/Hycean Worlds, Centauri Dreams 11 November, 2022

Youtube interview of Nikku Madhusudhan, Is K2-18b a Hycean Exoworld? on Colin Michael Godier’s Event Horizon

What We’re Learning about TRAPPIST-1

It’s no surprise that the James Webb Space Telescope’s General Observers program should target TRAPPIST-1 with eight different efforts slated for Webb’s first year of scientific observations. Where else do we find a planetary system that is not only laden with seven planets, but also with orbits so aligned with the system’s ecliptic? Indeed, TRAPPIST-1’s worlds comprise the flattest planetary arrangement we know about, with orbital inclinations throughout less than 0.1 degrees. This is a system made for transits. Four of these worlds may allow temperatures that could support liquid water, should it exist in so exotic a locale.

Image: This diagram compares the orbits of the planets around the faint red star TRAPPIST-1 with the Galilean moons of Jupiter and the inner Solar System. All the planets found around TRAPPIST-1 orbit much closer to their star than Mercury is to the Sun, but as their star is far fainter, they are exposed to similar levels of irradiation as Venus, Earth and Mars in the Solar System. Credit: ESO/O. Furtak.

The parent star is an M8V red dwarf about 40 light years from the Sun. It would be intriguing indeed if we detected life here, especially given the star’s estimated age of well over 7 billion years. Any complex life would have had plenty of time to evolve into a technological phase, if this can be done in these conditions. But our first order of business is to find out whether these worlds have atmospheres. TRAPPIST-1 is a flare star, implying the possibility that any gaseous envelopes have long since been disrupted by such activity.

Thus the importance of the early work on TRAPPIST-1 b and c, the former examined by Webb’s Mid-Infrared Instrument (MIRI), with results presented in a paper in Nature. We learn here that the planet’s dayside temperature is in the range of 500 Kelvin, a remarkable find in itself given that this is the first time any form of light from a rocky exoplanet as small and cool as this has been detected. The planet’s infrared glow as it moved behind the star produced a striking result, explained by co-author Elsa Ducrot (French Alternative Energies and Atomic Energy Commission):

“We compared the results to computer models showing what the temperature should be in different scenarios. The results are almost perfectly consistent with a blackbody made of bare rock and no atmosphere to circulate the heat. We also didn’t see any signs of light being absorbed by carbon dioxide, which would be apparent in these measurements.”

The TRAPPIST-1 work is moving relatively swiftly, for we already have the results of a second JWST program, this one executed by the Max Planck Institute for Astronomy and explained in another Nature paper by lead author Sebastian Zieba. Here the target is TRAPPIST-1 c, which is roughly the size of Venus and which, moreover, receives about the same amount of stellar radiation. That might imply the kind of thick atmosphere we see at Venus, rich in carbon dioxide, but no such result is found. Let me quote Zieba:

“Our results are consistent with the planet being a bare rock with no atmosphere, or the planet having a really thin CO2 atmosphere (thinner than on Earth or even Mars) with no clouds. If the planet had a thick CO2 atmosphere, we would have observed a really shallow secondary eclipse, or none at all. This is because the CO2 would be absorbing all of the 15-micron light, so we wouldn’t detect any coming from the planet.”

Image: This light curve shows the change in brightness of the TRAPPIST-1 system as the second planet, TRAPPIST-1 c, moves behind the star. This phenomenon is known as a secondary eclipse. Astronomers used Webb’s Mid-Infrared Instrument (MIRI) to measure the brightness of mid-infrared light. When the planet is beside the star, the light emitted by both the star and the dayside of the planet reach the telescope, and the system appears brighter. When the planet is behind the star, the light emitted by the planet is blocked and only the starlight reaches the telescope, causing the apparent brightness to decrease. Credits: NASA, ESA, CSA, Joseph Olmsted (STScI)

What JWST is measuring is the 15-micron mid-infrared light emitted by the planet, using the world’s secondary eclipse, the same technique used in the TRAPPIST-1 b work. The MIRI instrument observed four secondary eclipses as the planet moved behind the star. The comparison of brightness between starlight only and the combined light of star and planet allowed the calculation of the amount of mid-infrared given off by the dayside of the planet. This is remarkable work: the decrease in brightness during the secondary eclipse amounts to just 0.04 percent, all of this with a target 40 light years out.
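To see why the eclipse is so shallow, here is a quick estimate of the expected depth at 15 microns from blackbody arguments. The stellar and planetary parameters are approximate published values (Rs ≈ 0.12 R_sun, Ts ≈ 2570 K, Rp ≈ 1.1 R_Earth) plus the roughly 380 K dayside temperature reported for TRAPPIST-1 c; small changes in any of them shift the answer.

```python
import math

def planck(wavelength_m, T):
    """Blackbody spectral radiance (W m^-2 sr^-1 m^-1)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    x = h * c / (wavelength_m * k * T)
    return (2 * h * c**2 / wavelength_m**5) / math.expm1(x)

lam = 15e-6                      # observing wavelength, 15 microns
R_sun, R_earth = 6.957e8, 6.371e6
Rs = 0.12 * R_sun                # TRAPPIST-1 radius (approximate)
Rp = 1.1 * R_earth               # TRAPPIST-1 c radius (approximate)
Ts, Tp = 2570.0, 380.0           # stellar Teff and reported dayside temperature

depth = (Rp / Rs) ** 2 * planck(lam, Tp) / planck(lam, Ts)
print(f"estimated secondary eclipse depth: {depth * 1e6:.0f} ppm (~{depth * 100:.3f} %)")
```

This simple blackbody estimate lands within a factor of about 1.5 of the roughly 0.04 percent quoted above, close enough to show just how small a signal MIRI had to pull out.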

Image: This graph compares the measured brightness of TRAPPIST-1 c to simulated brightness data for three different scenarios. The measurement (red diamond) is consistent with a bare rocky surface with no atmosphere (green line) or a very thin carbon dioxide atmosphere with no clouds (blue line). A thick carbon dioxide-rich atmosphere with sulfuric acid clouds, similar to that of Venus (yellow line), is unlikely. Credit: NASA, ESA, CSA, Joseph Olmsted (STScI).

I should also mention that the paper on TRAPPIST-1 b points out the similarity of its results to earlier observations of two other M-dwarf stars and their inner planets, LHS 3844 b and GJ 1252 b, where the recorded dayside temperatures showed that heat was not being redistributed through an atmosphere and that there was no absorption of carbon dioxide, as one would expect from an atmosphere like that of Venus.

Thus the need to move further away from the star, as in the TRAPPIST-1 c work, and now, it appears, further still, to cooler worlds more likely to retain their atmospheres. As I said, things are moving swiftly. In the coming year for Webb is a follow-up investigation on both TRAPPIST-1 b and c, in the hands of the system’s discoverer, Michaël Gillon (Université de Liège) and team. With a thick atmosphere ruled out at planet c, we need to learn whether the still cooler planets further out in this system have atmospheres of their own. If not, that would imply formation with little water in the early circumstellar disk.

The paper is Zieba et al., “No thick carbon dioxide atmosphere on the rocky exoplanet TRAPPIST-1 c,” Nature 19 June 2023 (full text). The paper on TRAPPIST-1 b is Greene et al., “Thermal emission from the Earth-sized exoplanet TRAPPIST-1 b using JWST,” Nature 618 (2023), 39-42 (abstract).

Part II: Sherlock Holmes and the Case of the Spherical Lens: Reflections on a Gravity Lens Telescope

Aerospace engineer Wes Kelly continues his investigations into gravitational lensing with a deep dive into what it will take to use the phenomenon to construct a close-up image of an exoplanet. For continuity, he leads off with the last few paragraphs of Part I, which then segue into the practicalities of flying a mission like JPL’s Solar Gravitational Lens concept, and the difficulties of extracting a workable image from the maze of lensed photons. The bending of light in a gravitational field may offer our best chance to see surface features like continents and seasonal change on a world around another star. The question to be resolved: Just how does General Relativity make this possible?

by Wes Kelly

Conclusion of Part I

At this point we have in hand an all-around deflection angle for light at the edge of a “spherical lens” of about 700,000 kilometers radius (impact parameter b equal to the radius of the sun, r_S). If this were the objective lens of a corresponding telescope, what would be the value of its focal length, expressed in astronomical units?

The angle subtended by the 700,000 km solar radius observed from 1 AU gives an arcsine of 0.26809 degrees. This is consistent with the rule-of-thumb solar diameter estimate of ~0.5 degrees.

Expressed in still another way, the solar radius from this arcsine measure is 965 arc seconds. At the distance where the solar disc is observed to be about 1.75 arc seconds in radius (which is the relativistic deflection angle at the solar limb), that’s where you will find the focus for this objective lens.

If we take the ratio of 965 to 1.75, we obtain a value of 551.5. In other words, a focal point for the relativistic effect lies about 551.5 AUs out. Thus, General Relativity implies that light bent by the sun’s gravity near its surface is focused about 550 AUs from the sun. And like the protagonist of Molière’s 17th century comedy, as I run off to tell everyone I know, I discover a feeling akin to, “For more than forty years I have been speaking prose while knowing nothing of it.”
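The same number drops out of the standard lensing formula. Here is a small check, using the textbook expression for the focal distance of a gravitational lens with impact parameter b (here the solar radius); the constants are the usual physical values:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
C = 2.998e8            # speed of light, m/s
R_SUN = 6.957e8        # solar radius, m (the impact parameter b)
AU = 1.496e11          # astronomical unit, m

# Deflection angle at the limb: alpha = 4GM / (b c^2)
alpha = 4 * G * M_SUN / (R_SUN * C**2)
print(f"deflection at the limb: {math.degrees(alpha) * 3600:.2f} arc seconds")

# Rays grazing the limb converge where the lens's angular radius equals alpha:
# z = b / alpha = b^2 c^2 / (4 G M)
z = R_SUN / alpha
print(f"minimum focal distance: {z / AU:.0f} AU")
```

Both routes, the arcsecond ratio above and the lens formula, land at roughly 550 AU, which is why that number keeps appearing in SGL mission studies.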

This could be a primary lens for a very unwieldy telescope. True, but not unwieldy in all respects. When we consider the magnification power of a telescope system, we speak of the focal length of the objective lens over that of an eyepiece or sensor lens. And habitually one might assume the whole thing is enclosed in a canister, as most telescopes sold over the counter at hobby stores are. But that is not always necessary, or even an advantage. Consider the largest ground-based optical reflectors, or the JWST and radio telescopes: their objective focal lengths extend through the open air or space. The JWST focal length is 131.4 meters, taller than its Ariane 5 launch vehicle. Its collected light reaches the sensors through a succession of ricochets within its instrumentation package, not through a cylindrical conduit extending any significant distance out in front of the reflector. [Note: The Jupiter deflection case mentioned above would make the focal length 100x longer.]

Continued Discussion

(Tables, Figures and References for Parts I and II are sequential).

In contrast with a 130-meter objective focal length, at 550 AU any focal length for a conventionally manufactured “eyepiece” optical system would give enormous magnification or light-gathering potential. Were it a lens of 1 or 10 or 100 meter focal length at the instrument end of the telescope, paired with the “Oort Cloud radius sized” objective focal length (550 x 1.5 x 10⁸ km ≈ 8.2 x 10¹⁰ km), the exact value would not matter much so far as interstellar mapping is concerned. We should add as well that the magnification is in terms of area rather than diameter or radius. In effect, magnification is multiplication of projected surface area, or surface light.

Given the above, several issues remain to be addressed related to the field of view.

1. The spherical lens (the sun) is a light source itself, which needs to be blocked out with a coronagraph on board the SGL spacecraft.

2. The signal obtained from the star (but especially the planet!) is “convoluted” by passage around the perimeter of the solar lens. This must be undone by a deconvolution process.

3. In application for examining an exoplanet in orbit around another star, the fix on the star must be either adjusted to center on the related planetary target or else the planet’s data must be extracted from an enormous extraneous data package.

On issue 1, there are many coronagraph techniques already applied in telescopes for blocking solar or stellar light sources. The Nancy Grace Roman Space Telescope’s coronagraph, when launched, will be the state of the art and is likely to influence SGL coronagraph design. For issue 2, it would be interesting to see a simple illustrative example (e.g., a sphere with a simple pattern such as broad colored latitudinal and longitudinal bands alternating in some pattern… a yellow or green smiley face?) transformed and then converted back. On issue 3, however, I believe that the discussion below will provide more immediate insights.

Figure-6 As noted in [7], a meter-class telescope with a coronagraph to block solar light is placed in the strong interference region of the solar gravitational lens (SGL) and is capable of imaging an exoplanet at a distance of up to 30 parsecs with a few-10-km scale resolution on its surface. The picture shows results of a simulation of the effects of the SGL on an Earth-like exoplanet image.

Left: Original RGB (red, green, blue) image with a 1024 x 1024 pixel array.

Center: Image blurred by the SGL, sampled at an SNR (signal-to-noise ratio) of 10³ per color channel, or an overall SNR of about 3 x 10³.

Right: Result of image deconvolution.

In Reference 7 by Turyshev et al., with Figure-7, potential benefits of an SGL telescope are illustrated with a targeted planet similar to the Earth within a range of 100 light years. What follows is a reference point which we would like to examine as well; in this case, with a specific range (10 parsecs) to illustrate engineering and operational questions, concerns or trades. In archives, see also [ref. A12].

Figure-7 Contrast of benefits illustration with planet observed with an orbital plane in the line of sight of the GLT.

Figure-8 Observation of a target planet with an orbital plane inclined to the GLT line of sight.

On the left side, with the perpendicular to the orbital plane tipped forward, we can observe crescent phases similar to those of the planets orbiting the sun interior to the Earth, but at low angles the exoplanet face turned toward us is largely unilluminated. On the far side of its star it is in full phase, but perhaps experiencing significant glare. On the right side, with higher inclination, the exoplanet appears as a cat’s eye above the center point; below, as a crescent rotated at a right angle to its path.

What to Do about Slew?

As for deploying a telescope out into the Oort Cloud, to ~550 AUs: this seems feasible with a combination of conventional propulsion and orbital mechanics taken to a higher state of the art (nuclear thermal, nuclear fusion electric or thermal), sized against constraints such as mass, mission duration, infrastructure and finance. It is assumed here, by this aerospace engineer, that the trajectory, propulsion, navigation and guidance issues of deployment can be solved with resources not yet available, but likely to be once larger spacecraft are assembled and tested in the future. However, I would still like to explore the operational issues of this baseline or reference mission. In pursuit of this, we will add a reference target (perhaps the first of an enlarging set): an exoplanet similar to the Earth, in a solar system similar to ours, at the viewing distance used to set stellar absolute magnitudes, ten parsecs.

Now if a stellar system were ten parsecs away, or 32.6 light years off, the maximum radial offset of an Earth-like planet from a Sol-like star (1 AU) would be 0.1 arc seconds. Hence the Earth analog would be in the “nominal” field of view (FOV), but the FOV would encompass a radius of 17.5 AUs – if the center of the nominal FOV can be considered the center of the target star. The stellar absolute magnitude measure distance (10 parsecs) is a middle distance for this exercise, and a parsec (3.26 light years), also basic to astronomy, could be considered a minimum, just below the Alpha Centauri distance (4.3 light years).

However, FOV might be a misleading or unclear term in these circumstances, because it is not clear to me how much of the blocked celestial sphere is transferred back via the gravity lens phenomenon. In this analysis, without a full understanding of how the coronagraph or the deconvolutions will work, I am unsure whether there is any control over what the steradian field behind the sun will be. Focusing on the star could bring in the whole 17.5 AU radius as the field of view, or some fraction thereof. But if centering on a planetary target can limit the wasted scan area, I highly recommend doing so.

For argument’s sake, this celestial “blockage” region could range from the infinitesimal to the whole. The image obtained might be treated akin to a point source, from which we might extract image data somewhat as we extract the spectrum of a similarly un-dimensioned source. Or there might be several different deconvolution methods which provide options. But the aspect that concerns me here is how one searches for a point source in this so-called FOV, which is characterized more by the blockage of the sun’s angular width. The FOV might be better described as an area within an FOB, a field of blockage. Whether discerned directly without need of a deconvolution or not, at ten parsecs distance the field of blockage would include a radius of 17.5 AUs, within the sun’s roughly 1.75 arc second angular radius as seen from 550 AU.

The diameter of a G2V sun like ours is about 0.01 AU and a terrestrial planet like our own is 0.01 of that. And then what kind of transformation or convolution would be required to take the information from the other side and convert it back into an image? An image we would recognize as a planet with continents, oceans and clouds. Not knowing for sure, I suspected that if the position of the target planet were known, it would make more sense to focus the telescope on it rather than the star itself. On the other hand, if obtaining a coronagraphic blocking of the star required centering on the star, and capturing the planet required processing the thick ring around the star, then the total amount of data processing could become enormous – as the following table shows.

In terms of the terrestrial planet’s viewed area vs. that of the 1 AU radius region and the 17.5 AU radius encompassing the entire celestial patch blocked by the sun, the ratios are 1 to 500 million and 1 to 168 billion respectively. Depending on the resolution sought for the planetary analysis (e.g., 10-kilometer features distinguishable), data bits characterizing individual “squares” of smaller dimensions must be processed. For present purposes, we can select ten kilometers for illustration.
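Those area ratios are easy to check. A quick sketch, with Earth’s radius in AU as the only input and the 17.5 AU figure taken from the blocked radius discussed above:

```python
R_EARTH_AU = 6371.0 / 1.496e8     # Earth's radius expressed in AU

for region_radius_au in (1.0, 17.5):
    ratio = (region_radius_au / R_EARTH_AU) ** 2    # ratio of disc areas
    print(f"{region_radius_au:5.1f} AU region / Earth disc = {ratio:.2e}")
```

This reproduces the 500 million and 168 billion figures quoted above.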

Table-3 Scanning the entire FOV for a target at 10 parsecs, an exoplanet similar to the Earth orbiting a G2V star. At the distance selected for calibrating stellar absolute magnitude (about 33 light years), and with a GLT placed 550 AUs from our sun, the region blocked by the solar disc is about 1.75 arc seconds in angular radius; that corresponds to a region 1.75 AUs in radius at 1 parsec and, 10x wider, 17.5 AUs in radius at 10 parsecs. The sun-like star’s diameter is ~0.01 AU and an exoplanet Earth about 1/100th of that, or 0.0001 AU wide.

As the NIAC Phase II Report and AIAA journal article [7 and 8] indicate, targeted resolution objectives are on the order of 10 kilometers, implying sampling cells of smaller dimensions. We select a one-kilometer-wide sample cell for the sake of argument. For each observed cell, the GLT instrument suite will include 3-5 color band sweeps (e.g., ultraviolet, blue, yellow, red, infrared), each with intensity levels. A spectrometer could also seek evidence of discrete spectral lines or molecular bands. So for each square kilometer scanned, there could be considerable binary coded data for the telemetry link; certainly more than one data bit associated with each polygon of space scanned by the SGL telescope. If each polygon has a location defined in a 2-dimensional grid, then that point likely has two 32- or 64-bit position assignments; then each color filter has an intensity. In addition, if spectral lines are tracked, another data code will be assigned to that point as well.
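To put rough numbers on that, here is a toy data budget: an Earth-sized planet gridded into 1-km cells, with an assumed per-cell record of two 64-bit coordinates plus five 16-bit intensities. The encoding choices and the downlink rate are mine, purely to show the order of magnitude involved:

```python
import math

R_PLANET_KM = 6371.0                        # Earth-sized target
surface_km2 = 4 * math.pi * R_PLANET_KM**2  # ~5.1e8 one-km cells

bits_per_cell = 2 * 64 + 5 * 16             # two coordinates + five band intensities (assumed)
total_bits = surface_km2 * bits_per_cell

print(f"cells: {surface_km2:.2e}")
print(f"one global map: {total_bits / 8 / 1e9:.0f} GB")

# At an assumed downlink of 1 kbit/s from 550 AU, one such map would take:
rate_bps = 1_000.0
print(f"transmission time at 1 kbit/s: {total_bits / rate_bps / 86400 / 365.25:.1f} years")
```

Even this toy budget makes the point: a single global map runs to tens of gigabytes, and at deep-space data rates that is years of downlink, which is why limiting the scanned area matters so much.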

Processing the FOV indiscriminately with focus on the star is like searching for a needle (or data) in a haystack. Tracking the planet itself could eliminate orders of magnitude of excess data processing. On the other hand, slewing at 550 AU circular orbit entails 40,000 km magnitude oscillations over a year to follow the target, distances equivalent to a tenth of Earth-Moon separation, but an expenditure of propulsive resources. Consequently, this would become at least one resource trade between data handling and maneuverability. One possible solution would be multiple telescopes formation flying over “seasonal” tracking points a quarter of orbital revolution apart in the projected orbital track.

The scenario for deploying the telescope assumes considerable outbound velocity accumulated in the form of continuous low-thrust acceleration. Consequently, on station a very large radial velocity will remain. Remarkably, at 550 AU distance, circular orbit velocities are still over a kilometer per second (e.g., Earth’s orbital velocity of about 29.7 km/sec divided by the square root of 550, or about 1.27 km/sec). With the Earth-based example at 10 parsecs and the requirement to cover 40,000 km back and forth within about 6 months, the corresponding constant velocity would be 0.0025 km/sec to hold the alignment. This type of slewing would work better with a more rapidly orbiting exoplanet located in the HZ of a red dwarf, though the M star case would require more frequent reversals of direction. Significantly, were we to do this exercise for a target at 1 parsec, such as the Alpha Centauri stars, the oscillations would be ten times larger (400,000 km), or about the distance to the moon.
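The geometry behind those numbers is simple: the image of the planet in the focal region is displaced from the star’s image line by the planet’s angular offset times the 550 AU lever arm. A sketch, using the same 10-parsec Earth analog (and the 1-parsec case for comparison):

```python
AU_KM = 1.496e8
PC_AU = 206_265.0            # AU per parsec
Z_AU = 550.0                 # distance of the GLT from the sun
YEAR_S = 3.156e7

for dist_pc, label in ((10.0, "Earth analog at 10 pc"), (1.0, "same planet at 1 pc")):
    theta = 1.0 / (dist_pc * PC_AU)            # 1 AU offset seen from dist_pc, radians
    amplitude_km = theta * Z_AU * AU_KM        # image displacement at 550 AU
    v_kms = amplitude_km / (0.5 * YEAR_S)      # cover that swing in ~6 months
    print(f"{label}: swing ~{amplitude_km:,.0f} km, slew speed ~{v_kms * 1000:.1f} m/s")
```

The roughly 40,000 km swing and ~2.5 m/s slew rate for the 10-parsec case, and the tenfold larger numbers at 1 parsec, match the figures in the text.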

Additionally, the rotation rate about the planetary axis could be star-synchronous or, as with the Earth or Mars, much faster than the orbital revolution. There could be moons in its near vicinity. All these are natural considerations for a habitable zone exoplanet survey, and reasons that features on the exoplanet surface could become blurred. Other cases would generate different requirements, no doubt. And all this will affect how long it will take to process square-kilometer data sets into each of their relevant maps.

Besides stellar glare, the galactic background needs to be considered too. A dark field behind the target star would be preferable, giving a higher signal-to-noise ratio. It would be a shame if threshold levels for observing a planet against magnified stellar backgrounds could not be assessed prior to flight. The potential problem of making out the planet against the background also makes a planetary ephemeris important, along with linkage to home-base guide telescopes directing the GLT’s pointing, since in a sense the GLT itself will be blind. We have discussed just an Earth analog so far, but HZ targets at cooler K and M stars, as well as hotter F main sequence stars, could possess eye-opening properties too.

Several decades ago, during an undergraduate satellite design project, I participated as the communications engineer, and space navigation assignments later called on me to put on that hat again. It was an interesting experience each time, and I found some overall equations that formulated relations among distance, signal-to-noise thresholds, signal rates and the power required to stay in touch at both ends, spacecraft and the Deep Space Network. Unfortunately, I lost our first team’s final report in a flood, not of information like that discussed, but of tropical storm water. But it is not necessary to reconstruct the methods found then. There is by now a long-established literature base for communications with spacecraft in deep space, thanks to publications of the Jet Propulsion Laboratory: illustrative examples such as Voyager and other Jupiter-bound spacecraft, and even earlier spacecraft examined as if they were beaming from there and received with the network capabilities of a given epoch (see Figure 9).

Figure-9 A diagram from Ref. 5 pegs down one end of the trade issues: chronological increases in data rates obtained from spacecraft in the Jupiter vicinity. Reception is associated with 5.2 AU distance from the sun (varying with the Earth’s position) vs. the 550 AUs or more anticipated for the GLT. On one axis, acquisition data rates are shown. For each spacecraft that set out on these Jovian missions (some, of course, actually did not), a liftoff limit on power or data rate can be assumed for the spacecraft or observatory. Once launched, most of the growth was likely at the Earth-based part of the communication link.

In comparison with the attenuation of signals from the Jovian system at 5.2 AU for the various systems shown in the Figure-9 JPL diagram, signals from 100x further out will be decreased in strength to ~1/10,000th or less beyond 550 AU. Consequently, data rates shown in the diagram for the various extant technologies will drop by a factor of 1/10,000 (10⁻⁴) as well.
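That factor is just the inverse-square law applied to the distance ratio. A one-line check, with the figure’s 5.2 AU Jupiter baseline:

```python
# Inverse-square scaling of received signal strength (and hence achievable data rate)
d_jupiter_au, d_glt_au = 5.2, 550.0
factor = (d_jupiter_au / d_glt_au) ** 2
print(f"signal strength relative to Jupiter range: {factor:.1e}")   # ~9e-5, i.e. ~1/10,000
```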

Depending on when such an SGL space observatory will be launched, some technologies will improve data transmission rates or storage capacities with respect to mass density or power required. Other technologies likely will not experience similar trends. For example, it is unclear what new Deep Space Network type tracking facilities will be employed in support of the SGL mission. However, if the data load is driven by a full scan of the equivalent of the solar angular area or FOV, the spacecraft system requirements for data storage and transmission are increased enormously.

On the other hand, as shown, slewing from the stellar focal point to a planetary position will require propellant resources and attitude-control increases over those for the stellar fix. Even at 550 AU, there is a 1.27 km/sec characteristic circular orbit velocity. And depending on time of flight to outpost station delivery, in coast the spacecraft can be considered to be on an extremely hyperbolic heliocentric path. Consequently, even without planet tracking, low thrust would be required simply to stay fixed on the stellar focal line.

My own quick assessment is that narrow field of view scanning in the planetary vicinity as it tracks around the star in some arbitrary orbital plane is the better procedure. The actual orbital plane’s normal could be inclined by some angle to the line of sight (See Figure 8). Hence, a circular path would be perceived as an elliptical projection; more complex if actually eccentric to a considerable fraction. But with a mean likelihood of 45-degree inclination and circular orbit, half phases would appear at greatest stellar elongation. Near the line of sight, a cat’s-eye would appear behind the star and a crescent in front with lowest elongation and greatest glare. With zero inclination of the planet, we are bound to learn much about its northern hemisphere and much less about its south, depending on its rotational axis alignment.

To recap the ten-parsec example: the Earth analog’s maximum offset from its star (1 AU) is 0.1 arc seconds, well inside the 17.5 AU radius field of blockage behind the sun, though again it is not clear to me how much of that blocked field the lens actually transfers back, nor whether the image is best treated as a point source to be deconvolved.

In this situation, there would have to be some foreknowledge of where the target planet should be. You would need a tracker observatory, probably much closer to home. You still need a means to locate a body orbiting an object about a hundredth of an AU in diameter, and in turn a planet about 1/10,000 of an AU wide. To relay information from a stellar observatory not experiencing this occultation by the sun out to 550 AUs, the lag would be about 3.17 days at the speed of light.

And then, presumably, the observatory would need to slew toward this planetary target from the reference point of the stellar primary – or perhaps even the center of mass in a binary system. Alpha Centauri could be such an example.

A Mission for One Star System and Exoplanet or More?

Additional trade issues to consider are related to completion of observation and characterization of one planetary system. Perhaps there is more than one planet (or a moon) in a target system to study. But there is also the issue of observing more than one planetary system. Minimal angular separation of two “good” candidate systems in the celestial sphere would have to be weighed against the “excellence” of an isolated stellar system with no potential for a phase II mission elsewhere, say within one degree of circular arc. Faced with such a dilemma I would hope that observing the isolated system over years until system deactivation will be well worthwhile.

At this writing we are aware of about 5000 exoplanets with attributable features, providing a range of reasons for continued or closer observation. Like the other design issues described above, eventually there will be the dilemma of which exoplanet or planets to select.

In terms of steradians, the whole celestial sphere has an area of 4π units. With some experimentation I find that it is possible to determine the equidistant positions of any number of stars, which can illustrate the dilemma of deciding how to deploy the SGL telescope. The celestial arc A between equally spaced stars of a given number n can be described with the answer in radians, convertible to degrees. Once n equals or exceeds 3, the equidistant points can be viewed as vertices of equilateral and equiangular spherical triangles of given arc segments, the latter being the significant parameter. The total of 5000 exoplanets is not distributed with equal spacing, but there is an element of likelihood in the fractional-degree separations overall. And, of course, a smaller selection of choice exoplanet systems will have wider individual separations overall, though perhaps a few will be less than a degree apart. For the case of the ten-parsec planetary system, we noted that a traverse to cover 17.5 AUs encompassed about 400,000 kilometers at 550 AU. A one-degree traverse is 205 times as large, but it does not have tracking-determined maneuver velocity requirements.

It is likely that by some set of selection parameters, several exoplanets could be chosen for further scrutiny. However, if several parameters are involved and a couple of candidates or alternates lie close together on the sky, it is possible that two neighboring star systems could outscore a focus on one system, even one generally acknowledged as the best, if the latter is located on the wrong side of the sky for total mission benefit.

Consequently, the mission analysis could become more complicated as time passes with a larger and larger selection of nearby systems identified with one or more planets.

What parameters would warrant such a trade? Even with no evidence of life, an exoplanet of exceptional nature could outweigh criteria tied to the habitable zone or to signs of life. And to examine signs of life, our knowledge will have to go beyond such basics as diameter, albedo and placement in a habitable zone, to atmospheric composition, the nature of any hydrosphere, traces of processes similar to terrestrial ones… The cost-benefit of propulsion and maneuvering to survey two planets would also need an identifiable threshold weighed against the additional spacecraft mass budget for propulsion. If the two candidate systems are far apart, the choice might in a way be easier, since it would require launching two distinct missions.
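Purely to illustrate how such a trade might be scored, the sketch below uses a weighted figure of merit; the weights, factors and penalty values are entirely hypothetical placeholders, not numbers from any mission study.

```python
# All weights, factors and penalties here are hypothetical placeholders.
WEIGHTS = {"biosignature": 0.5, "characterizability": 0.3, "reachability": 0.2}

def mission_score(targets, maneuver_penalty=0.0):
    """Sum the weighted scores (each factor scaled 0..1) of every target one
    spacecraft would visit, then subtract a penalty representing the extra
    propulsion mass needed to retarget between them."""
    total = sum(sum(WEIGHTS[k] * t[k] for k in WEIGHTS) for t in targets)
    return total - maneuver_penalty

best_single = [{"biosignature": 0.9, "characterizability": 0.8, "reachability": 0.7}]
close_pair = [
    {"biosignature": 0.6, "characterizability": 0.7, "reachability": 0.8},
    {"biosignature": 0.5, "characterizability": 0.6, "reachability": 0.8},
]

print(mission_score(best_single))                       # 0.83
print(mission_score(close_pair, maneuver_penalty=0.3))  # 0.96: the close pair can outscore the single best target
```

The maneuver penalty is where the threshold mentioned above would enter: if retargeting costs too much propulsion mass, the single outstanding system wins.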

Whether going for two exoplanets separated by a degree or more is worthwhile is difficult to ascertain at this early stage. The determination will depend on establishing criteria for a trade. To first order it will depend on how outstanding the signs of life might be within a future database of exoplanets, and, if that is not clear, on which parameters of an exoplanet or a maneuverable spacecraft should be considered and with what weight. Reflecting on an earlier orbital application, Arthur C. Clarke proposed geosynchronous orbit for communications relay stations, envisioned as staffed stations with human operators at switchboards. Instead, we have numerous geosats with no one aboard. It could be that SGL spacecraft will proliferate similarly and for several purposes. At the very least, we can be thankful to be able to consider such possibilities, coming as we do from a time decades back when exoplanets were considered simple fantasy, like Spock’s planet – or, closer to home, Lescarbault’s and Le Verrier’s Vulcan.

References for Part I and Part II

1.) Pais, Abraham, Subtle is the Lord … The Science and Life of Albert Einstein, Oxford University Press, 1982.

2.) https://www.stsci.edu/jwst/science-execution/observing-schedules

3.) Vallado, David A., Fundamentals of Astrodynamics and Applications, 2nd edition, Appendix D4, Space Technology Library, 2001.

4.) Moulton, Forest Ray, An Introduction to Celestial Mechanics, 2nd Edition (1914), Dover reprint.

5.) Taylor, Jim et al. Deep Space Communications, online at https://descanso.jpl.nasa.gov/monograph/series13_chapter.html

6.) Wali, Kameshwar C., Chandra: A Biography of S. Chandrasekhar, University of Chicago Press, 1984.

7.) Turyshev et al., “Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravity Lens Mission,” Final Report, NASA Innovative Advanced Concepts (NIAC) Phase II.

8.) Helvajian, H. et al., “Mission Architecture to Reach and Operate at the Focal Region of the Solar Gravitational Lens,” Journal of Spacecraft and Rockets, American Institute of Aeronautics and Astronautics (AIAA), February 2023, online preprint.

9.) Xu, Ya et al., “Solar oblateness and Mercury’s perihelion precession,” MNRAS 415, 3335–3343, 2011.

A1.) Archives: In the Days before Centauri Dreams… An Essay by WDK (centauri-dreams.org)

A2.) Archives: A Mission Architecture for the Solar Gravity Lens (centauri-dreams.org)

Here in Houston, the University of Houston-Clear Lake Physics and Astronomy Club recently met on a night when the sky was obscured by clouds. The club president had asked in advance, just in case of such circumstances, whether I would have a presentation I could give that evening. My other candidate talks had grown all out of control, so I decided to start on a fresh topic. This article grew out of that evening’s presentation, and consequently it is dedicated to the club and its members.

WDK
13 April 23


Ring of Life? Terminator Habitability around M-dwarfs

It would come as no surprise to readers of science fiction that the so-called ‘terminator’ region on certain kinds of planets might be a place where the conditions for life can emerge. I’m talking about planets that experience tidal lock to their star, as habitable zone worlds around some categories of M-dwarfs most likely do. But I can also go way back to science fiction read in my childhood to recall a story set, for example, on Mercury, then supposed to be locked to the Sun in its rotation, depicting humans setting up bases on the terminator zone between broiling dayside and frigid night.

Addendum: Can you name the science fiction story I’m talking about here? Because I can’t recall it, though I suspect the setting on Mercury was in one of the Winston series of juvenile novels I was absorbing in that era as a wide-eyed kid.

The subject of tidal lock is an especially interesting one because we have candidates for habitable planets around stars as close as Proxima Centauri, if indeed a possibly tidally locked planet can sustain clement conditions at the surface. Planets like this are subject to extreme conditions, with a nightside that receives no incoming radiation and an irradiated dayside where greenhouse effects might dominate depending on available water vapor. Even so, moderate temperatures can be achieved in models of planets with oceans, and most earlier work has gone into modeling water worlds. I also think it’s accurate to say that earlier work has focused on how habitable conditions might be maintained in the substellar ‘eye’ region directly facing the star.

But what about planets that are largely covered in land? It’s a pointed question, because a new study in The Astrophysical Journal finds that tidally locked worlds mostly covered in water would eventually be blanketed by a thick layer of vapor. The study, led by Ana Lobo (UC Irvine), also finds that plentiful land surfaces produce a terminator region that could well be friendly to life even if the equatorial zone directly beneath the star on the dayside should prove inhospitable. Says Lobo:

“We are trying to draw attention to more water-limited planets, which despite not having widespread oceans, could have lakes or other smaller bodies of liquid water, and these climates could actually be very promising.”

Image: Some exoplanets have one side permanently facing their star while the other side is in perpetual darkness. The ring-shaped border between these permanent day and night regions is called a “terminator zone.” In a new paper in The Astrophysical Journal, physics and astronomy researchers at UC Irvine say this area has the potential to support extraterrestrial life. Credit: Ana Lobo / UCI.

The team’s modeling simulates both water-rich and water-limited planet scenarios, even as the question of how much water to expect on a habitable zone M-dwarf planet remains open. After all, water content likely depends on planet formation. If a habitable zone planet formed in place, it likely emerged with lower water content than one that formed beyond the snowline (relatively close in for M-dwarfs) and migrated inward. We also have to remember that flare activity could trigger water loss for such worlds.

Water’s effects on climate are abundant, from affecting surface albedo to the production of clouds and the development of greenhouse effects. They’re also tricky to model when we move into other planetary scenarios. As the paper notes:

Due to water’s various climate feedbacks and its effects on the atmospheric structure, the habitable zone of a water-limited Earth twin is broader than that of an aquaplanet Earth (Abe et al. 2011). But while water’s impact on climate is well understood for Earth, many of these fundamental climate feedbacks behave differently on M-dwarf planets, due to the lower frequency of the stellar radiation.

To perform the study, Lobo’s team considered a hypothetical Earth-class planet orbiting the nearby star AD Leonis (Gliese 388), an M3.5V red dwarf, using a 3D global climate model to find out whether a tidally locked world there could sustain a temperature gradient large enough to make the terminator habitable. The study uses a simplified habitability definition based solely on surface temperature. The researchers deployed ExoCAM, a modified version of the Community Atmosphere Model (CAM4) developed by the National Center for Atmospheric Research and used to study climate conditions on Earth. ExoCAM adapts the original code to account for factors such as planetary rotation.
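Since the habitability definition is a simple surface-temperature cut, the bookkeeping is easy to picture. The sketch below is my own illustration, not code from the paper; it assumes NumPy, a liquid-water range of 273–373 K, and a latitude-longitude surface temperature field of the kind a GCM such as ExoCAM produces.

```python
import numpy as np

def habitable_fraction(t_surf_k, lats_deg):
    """Area-weighted fraction of the surface whose temperature falls in an
    assumed liquid-water range (273-373 K). t_surf_k has shape (nlat, nlon);
    lats_deg holds the latitude of each row. Rows are weighted by cos(latitude)
    so polar grid cells do not count as much area as equatorial ones."""
    weights = np.cos(np.deg2rad(lats_deg))[:, None] * np.ones_like(t_surf_k)
    habitable = (t_surf_k > 273.15) & (t_surf_k < 373.15)
    return float((weights * habitable).sum() / weights.sum())

# Toy temperature field: a warm patch around the substellar point, cold elsewhere.
lats = np.linspace(-89.0, 89.0, 46)
lons = np.linspace(0.0, 358.0, 72)
dayside = np.clip(np.cos(np.deg2rad(lons)), 0.0, None)[None, :]
t_surf = 200.0 + 110.0 * dayside * np.cos(np.deg2rad(lats))[:, None]
print(habitable_fraction(t_surf, lats))
```

The “fractional habitability” the authors refer to in the excerpt further below is a surface-fraction metric of this general kind.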

The results are straightforward: With abundant land on the planet, terminator habitability increases dramatically. A water-rich world like Earth, with land covering only 30 percent of the surface, is not necessarily the best model for habitability once tidal lock enters the picture; extensive land offers viable conditions over at least part of the surface. A ‘ring’ of habitability may prove a common outcome for such worlds. But it’s interesting to consider how these initial conditions might complicate the early development of biology. Here I return to the paper:

There are still many uncertainties regarding the water content of habitable-zone M-dwarf planets. Based on our current understanding, it is possible that water-limited planets could be abundant and possibly more common than ocean-covered worlds. Therefore, terminator habitability may represent a significant fraction of habitable M-dwarf planets. Compared to the temperate climates obtained with aquaplanets, terminator habitability does offer reduced fractional habitability. Also, while achieving a temperate terminator is relatively easy on water-limited planets, constraining the water availability at the terminator remains a challenge. Overall, the lack of abundant surface water in these simulations could pose a challenge for life to arise under these conditions, but mechanisms, including glacier flow, could allow for sufficient surface water accumulation to sustain locally moist and temperate climates at or near the terminator.

The paper is Lobo et al., “Terminator Habitability: The Case for Limited Water Availability on M-dwarf Planets,” Astrophysical Journal Vol. 945, No. 2 (16 March 2023), 161 (full text).


Interstellar Research Group: 8th Interstellar Symposium Second Call for Papers

Abstract Submission Final Deadline: April 21, 2023

The Interstellar Research Group (IRG) in partnership with the International Academy of Astronautics (IAA) hereby invites participation in its 8th Interstellar Symposium, hosted by McGill University, to be held from Monday, July 10 through Thursday, July 13, 2023, in Montreal, Quebec, Canada. This is the first IRG meeting outside of the United States, and we are excited to partner with such a distinguished institution!

Topics of Interest

Physics and Engineering

Propulsion, power, communications, navigation, materials, systems design, extraterrestrial resource utilization, breakthrough physics

Astronomy

Exoplanet discovery and characterization, habitability, solar gravitational focus as a means to image exoplanets

Human Factors

Life support, habitat architecture, worldships, population genetics, psychology, hibernation, finance

Ethics

Sociology, law, governance, astroarchaeology, trade, cultural evolution

Astrobiology

Technosignature and biosignature identification, SETI, the Fermi paradox, von Neumann probes, exoplanet terraformation

Submissions on other topics of direct relevance to interstellar travel are also welcome. Examples of presentations at past symposia can be found here:
https://www.youtube.com/c/InterstellarResearchGroup/videos

Confirmed Speakers

Dr. Stephen Webb (University of Portsmouth)
“Silence is Golden: SETI and the Fermi Paradox”

Dr. Kathryn Denning (York University)
“Anthropological Observations for Interstellar Aspirants”

Dr. Rebecca M. Rench (Planetary Science Division, NASA Headquarters)
“The Search for Life and Habitable Worlds at NASA: Past, Present and Future”

Dr. Frank Tipler (Tulane University)
“The Ultimate Rocket and the Ultimate Energy Source and their Use in the Ultimate Future”

Contributed Plenary Lectures

The primary submissions for the Interstellar Symposium are plenary lectures. The lectures will be approximately 20 minutes in length and be accompanied by a manuscript prior to the Symposium. The early bird deadline for abstract submission, which ensures expedited consideration and notification of acceptance, is January 15, 2023. Submitted abstracts will continue to be considered until April 21, 2023, if space in the program permits. The submitted abstract should follow the format described in the Abstract Submission section below. Abstracts should be emailed to: registrar@irg.space

No Paper, No Podium: Contributed plenary lectures are to be accompanied by a written paper, with an initial draft due June 23, 2023. You will have an opportunity to revise and extend your draft before the publication deadline of September 8, 2023. If a paper is not submitted by the final manuscript deadline, authors will not be permitted to present their work. Papers should be original work that has not been previously published.

Work in Progress Posters

Contributors wishing to present projects still in progress or at a preliminary stage may submit an abstract for a Work in Progress poster presentation. The deadline for abstract submission for Work in Progress posters is May 20, 2023. The abstract describing the work to be presented should follow the format described in the Abstract Submission section below. Posters should not exceed 36 inches (width) by 48 inches (height). Presenters are responsible for printing their own posters and must bring them to the Interstellar Symposium. Abstracts should be emailed to: registrar@irg.space

Sagan Meetings

An interested Sagan Meeting organizer is given the option to define a particular question for an in-depth panel discussion. The organizer would be responsible for inviting five speakers to give short presentations staking out a position on a particular question. These speakers will then form a panel to engage in a lively discussion with the audience on that topic. Carl Sagan famously employed this format for his 1971 conference at the Byurakan Observatory in old Soviet Armenia, which dealt with the Drake Equation. A one-page description (format of your choosing) of the panel topic, the questions to be addressed, and the suggested panel members should be emailed by January 15, 2023 to: registrar@irg.space

Seminars

Seminars are 3-hour presentations providing an in-depth look at a single subject. Seminars are held before the Symposium begins, on Sunday, July 9, 2023, with morning and afternoon sessions. The content must be suitable to count as continuing education credit for those holding a Professional Engineer (PE) certificate.

Other Content

Other content includes, but is not limited to, posters, displays of art or models, demonstrations, panel discussions, interviews, or public outreach events. IRG recognizes the importance of a holistic human cultural experience and encourages the submission of non-academic works to be involved with the symposium program.

Publications

The IRG serves as a critical incubator of ideas for the interstellar community. Following the success of the 7th Interstellar Symposium, papers may be submitted for consideration in publication within a special issue of Acta Astronautica. Papers from the 7th Symposium (September 2021) have now been published in the August 2022 issue of Acta Astronautica. Contributors who wish to publish their papers elsewhere may do so. Abstracts and papers not published elsewhere will be compiled into a complete Symposium proceedings in book form.

Video and Archiving

All symposium events may be captured on video or in still images for use on the IRG website, in newsletters and social media. All presenters, speakers, and selected participants will be asked to complete a Release Form that grants permission for IRG to use this content as described.

Abstract Submission

Abstracts for the 8th Interstellar Symposium must relate to one or more of the many interstellar mission related topics. The previously listed topics are not exclusive but represent a cross-section of possible categories. All abstracts must be submitted online via email to: registrar@irg.space.

Acceptable formats are text, Microsoft Word, and PDF only. Submissions of Contributed Plenary Lectures and Work in Progress Posters must follow the format described below.

Presenting Author(s)

Please list only the author(s) who will actually be in attendance and presenting at the conference. (First name, last name, degree – for example, Susan Smith, MD)

Additional Author(s)

List all authors here, including Presenting Author(s) – (first name, last name, degree(s) – for example, Mary Rockford, RN; Susan Smith, MD; John Jones, PhD)

Abbreviation(s)

Abbreviations within the body should be kept to a minimum and must be defined upon first use in the abstract by placing the abbreviation in parenthesis after the represented full word or phrase. Non-proprietary (generic) names should be used.

Abstract Length

The entire abstract, including any tables or figures but excluding the title, authors, and the presenting author’s institutional affiliation(s), city, and state, should be a maximum of 350 words. It is your responsibility to verify compliance with the length requirement.

Abstract Structure

Abstracts must include the following headings:

  • Title – The presentation title.
  • Background – Describes the research or initiative context.
  • Objective – Describes the research or initiative objective.
  • Methods – Describes research methodology used. For initiatives, describes the target population, program or curricular content, and evaluation method.
  • Results – Summarizes findings in sufficient detail to support the conclusions.
  • Conclusion – States the conclusions drawn from results, including their applicability.

Questions and responses to this call for papers, workshops, and participation should be directed to:

registrar@irg.space

For updates on the meeting, speakers, and logistics, please refer to the website:

https://irg.space/irg-2023/
