The Physics of Starship Catastrophe

Now that gravitational wave astronomy is a viable means of investigating the cosmos, we’re capable of studying extreme events like the merger of black holes and even neutron stars. Anything that generates ripples in spacetime large enough to spot is fair game, and that would include supernovae events and individual neutron stars with surface irregularities. If we really want to push the envelope, we could conceivably detect the proposed defects in spacetime called cosmic strings, which may or may not have been formed in the early universe.

The latter is an intriguing thought, a conceivably observable one-dimensional relic of phase transitions from the beginning of the cosmos that would be on the order of the Planck length (about 10^-35 meters) in width but long enough to span light years. Oscillations in these strings, if indeed they exist, would theoretically generate gravitational waves that could be involved in the large-scale structure of the universe. Because new physics could well lurk in any detection, cosmic strings remain a tantalizing subject for speculation in gravitational wave astronomy.

Remember the resources that are coming into play in this field. In addition to LIGO (Laser Interferometer Gravitational-Wave Observatory), we have KAGRA (Kamioka Gravitational Wave Detector) in Japan and Virgo (VIRgo interferometer for Gravitational-wave Observations) in Italy. The LISA observatory (Laser Interferometer Space Antenna) is currently scheduled for a launch some time in the 2030s.

For that matter, could a cosmic string be detected in other ways? One possibility is in any signature it might leave in the cosmic microwave background (CMB). Another, and this seems promising, is the potential for gravitational lensing as light from background objects travels through the distorted spacetime produced by the string. That would be an interesting signature to find, and indeed, one of the exciting aspects of gravitational wave astronomy is speculation on what new phenomena it would allow us to detect.

As witness a new paper from Katy Clough (Queen Mary University, London) and colleagues, who ask whether an artificial gravitational event could generate a signal that an observatory like LIGO could detect. Now we nudge comfortably into science fiction, for at issue is what would happen if a starship powered by a warp drive were to experience a malfunction. Given the curvature of spacetime induced by an Alcubierre-style drive, a problem in its operations could be detectable, although not, the team points out, at the frequencies currently observed by LIGO.

An Alcubierre warp drive would produce a spacetime that is truly exotic, but one that can be described within the theory of General Relativity. Our starship never locally exceeds the speed of light, thus satisfying Special Relativity, but a craft that contracts spacetime in front of it and expands spacetime behind it could cross distances faster than light as measured by an outside observer.
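To make that picture a bit more concrete, here is a minimal sketch of the textbook Alcubierre ‘shape function’ and the volume expansion it implies along the direction of flight. The parameter values are purely illustrative and are not drawn from the Clough et al. model discussed below; the point is only the sign pattern, contraction ahead of the bubble and expansion behind it.

```python
import math

# Textbook Alcubierre shape function f(r_s) and the expansion scalar it
# produces along the direction of motion. Parameters are illustrative only.
sigma = 8.0   # steepness of the bubble wall (1/length, arbitrary units)
R = 1.0       # bubble radius (arbitrary units)
v_s = 2.0     # bubble speed in units of c (the bubble itself may exceed c)

def f(r_s):
    """Shape function: ~1 inside the bubble, ~0 outside."""
    return (math.tanh(sigma * (r_s + R)) - math.tanh(sigma * (r_s - R))) / (2.0 * math.tanh(sigma * R))

def df_dr(r_s, h=1e-6):
    """Numerical derivative of the shape function."""
    return (f(r_s + h) - f(r_s - h)) / (2.0 * h)

def expansion(x, rho=0.0, x_ship=0.0):
    """Expansion scalar theta = v_s * (x - x_ship)/r_s * df/dr_s.
    Negative ahead of the bubble (contraction), positive behind (expansion)."""
    r_s = math.sqrt((x - x_ship) ** 2 + rho ** 2)
    return v_s * (x - x_ship) / max(r_s, 1e-12) * df_dr(r_s)

for x in (-1.5, -1.0, 0.0, 1.0, 1.5):   # sample points along the axis of motion
    print(f"x = {x:+.1f}   theta = {expansion(x):+.3f}")
```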

Huge problems would be created by such a craft, including some that may be insurmountable. It seems to violate what is known as the Null Energy Condition, for one thing, which demands negative energy seemingly not allowed in standard theories of spacetime. But the authors note that “The requirement that warp drives violate the NEC may be considered a practical rather than fundamental barrier to their construction since NEC violation can be achieved by quantum effects and effective descriptions of modifications to gravity, albeit subject to quantum inequality bounds and other semiclassical considerations that seem likely to prove problematic.”

Image: Two-dimensional visualization of an Alcubierre drive, showing the opposing regions of expanding and contracting spacetime that displace the central region. Credit: AllenMcC., CC BY-SA 3.0 , via Wikimedia Commons.

Problematic is a useful word, and it seems appropriate here. It’s also appropriate when we consider that a functioning warp drive raises paradoxical issues with regard to time travel, allowing closed time-like curves (in other words, the possibility of traveling into the past, with all the headaches that causes for causality and our view of reality). That puts us in the realm of rotating black holes and wormholes, powerful gravitational wave generators. The authors also point out that a warp drive would be a difficult thing to control and deactivate, as Miguel Alcubierre himself pointed out in a 2017 paper.

So how would we detect a starship of this variety? The authors note that at constant velocity, an Alcubierre drive spacecraft would not generate gravitational waves, but interesting phenomena would be observed if the drive bubble were to collapse, accelerate or decelerate:

There is (to our knowledge) no known equation of state that would maintain the warp drive metric in a stable configuration over time – therefore, whilst one can require that initially, the warp bubble is constant, it will quickly evolve away from that state and, in most cases, the warp fluid and spacetime deformations will disperse or collapse into a central point….This instability, whilst undesirable for the warp ship’s occupants, gives rise to the possibility of generating gravitational waves.

In other words, a working warp drive craft may well be undetectable, but a prototype that fails could throw an observable signature. The paper homes in on the collapse of a warp drive bubble, which could be created by the breakdown of the containment field that the makers of the starship use to support it. So we have a potential gravitational wave signature for a technological catastrophe as an advanced civilization experiments with the distortion of spacetime for interstellar travel.

Such events are presumably rare. I’m reminded of Greg Benford’s story “Bow Shock,” in which an astronomer discovers that what he thinks is a runaway neutron star – “a faint finger in maps centered on the plane of the galaxy, just a dim scratch” – is in fact a technological object. Here’s a clip:

“What you wrote,” she said wonderingly. “It’s a…star ship?”

“Was. It got into trouble of some kind these last few days. That’s why the wake behind it – ” he tapped the Fantis’ image – “got longer. Then, hours later, it got turbulent, and—it exploded.”

She sipped her coffee. “This is…was…light years away?”

“Yes, and headed somewhere else. It was sending out a regular beamed transmission, one that swept around as the ship rotated, every 47 seconds.”

Her eyes widened. “You’re sure?”

“Let’s say it’s a working hypothesis.”

Great scenario for a science fiction story, and there are a number of papers on starship detection from other angles in the scientific literature. In Benford’s case, the starship is thought to be of the Bussard ramjet variety, definitely not moving through warp drive methods. All this reminds me that a survey of starship detection papers is overdue in these pages, and I’ll plan to get to that in coming weeks. But back to warp drives.

Let’s assume things occasionally go wrong at whatever level of technology we’re looking at. We’re witnessing SpaceX actively developing Starship, a craft that gets a little better, and sometimes a lot better, each time it is launched, but development is hard and there are errors along the way. Introduce a failure into an Alcubierre-style starship and gravitational effects should show up, some involving nasty tidal outcomes.

To investigate these, Clough and colleagues develop a structured framework to simulate warp bubble collapse and analyze the gravitational wave signatures that would be produced at the point of collapse. Other types of signal may also be produced, but the paper notes: “Since we do not know the type of matter used to construct the warp ship, we do not know whether it would interact (apart from gravitationally) with normal matter as it propagates through the Universe.”

We don’t yet have equipment tuned to pick up such signals. Observatories like LIGO have the needed sensitivity, but we would need instruments tuned to a different range of gravitational wave frequencies. The paper continues:

…for a 1km-sized ship, the frequency of the signal is much higher than the range probed by existing detectors, and so current observations cannot constrain the occurrence of such events. However, the amplitude of the strain signal would be significant for any such event within our galaxy and even beyond, and so within the reach of future detectors targeting higher frequencies… We caution that the waveforms obtained are likely to be highly specific to the model employed, which has several known theoretical problems, as discussed in the Introduction. Further work would be required to understand how generic the signatures are, and properly characterise their detectability.
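To see why the frequencies land so far above the LIGO band, a crude scaling argument (mine, not the paper’s) is enough: if the collapse unfolds on something like the light-crossing time of the bubble, the characteristic frequency goes as c over the bubble size, which for a kilometer-scale ship is hundreds of kilohertz.

```python
# Order-of-magnitude scaling only: assume the collapse timescale is roughly
# the light-crossing time of the warp bubble, so f ~ c / R. This is my own
# back-of-the-envelope sketch, not the waveform model of Clough et al.
C = 299_792_458.0  # speed of light, m/s

def characteristic_frequency_hz(bubble_radius_m: float) -> float:
    return C / bubble_radius_m

for radius_m in (1e3, 1e5, 1e7):   # from a 1 km ship up to planet-sized bubbles
    print(f"R = {radius_m:8.0e} m  ->  f ~ {characteristic_frequency_hz(radius_m):9.2e} Hz")

# Ground-based detectors like LIGO cover roughly 10 Hz to a few kHz, so only
# bubbles hundreds of kilometers across or larger would fall into that band.
```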

A funding request to study starships undergoing catastrophic failure is going to be a tough sell. But probing the question produces the formalism developed by the Clough team and gives us further insights into warp drive prospects. Fascinating.

The paper is Clough et al, “What no one has seen before: gravitational waveforms from warp drive collapse” (preprint).

ACS3: Refining Sail Deployment

Rocket Lab, a launch service provider based in Long Beach, California, launched a rideshare payload on April 23 from its launch complex in New Zealand. I’ve been tracking that launch because aboard the Electron rocket was an experimental solar sail that NASA is developing to study boom deployment. This is important stuff, because the lightweight materials we need to maximize payload and performance are evolving, and so are boom deployment methods. Hence the Advanced Composite Solar Sail System (ACS3), created to test composites and demonstrate new deployment methods.

The thing about sails is that they are extremely scalable. In fact, it’s remarkable how many different sizes and shapes of sails we’ve discussed in these pages, ranging from Jordin Kare’s ‘nanosails’ to the small sails envisioned by Breakthrough Starshot that are just a couple of meters to a side, and on up to the behemoth imaginings of Robert Forward, designed to take a massive starship with human crew to Barnard’s Star and other targets. Sail strategies thus move from using them as propulsive projectiles (Kare) to full-blown interstellar photon-catchers for high-speed star travel.

With ACS3, we’re at the lower end of the size spectrum and digging into such fundamental matters as composite materials and boom deployment engineering. Entertainingly, the Electron launch vehicle was named ‘Beginning of the Swarm,’ doubtless a nod to the primary payload, which is a South Korean imaging satellite that will be complemented by 10 similar craft in coming years. But I also like to think that ‘swarms’ of small solar sails like the twelve-unit (12U) CubeSat used for ACS3 will eventually offer options not only for near-Earth but also outer system observation and exploration. But first, we have to nail down those tricky deployment issues. Keats Wilkie is ACS3 principal investigator at NASA Langley in Hampton, Virginia:

“Booms have tended to be either heavy and metallic or made of lightweight composite with a bulky design – neither of which work well for today’s small spacecraft. Solar sails need very large, stable, and lightweight booms that can fold down compactly. This sail’s booms are tube-shaped and can be squashed flat and rolled like a tape measure into a small package while offering all the advantages of composite materials, like less bending and flexing during temperature changes.”

Image: On 24 April 2024, Rocket Lab launched the ACS3 & NeonSat-1 missions from Onenui Station (Mahia Peninsula), New Zealand. In this image, engineers at NASA’s Langley Research Center test deployment of the Advanced Composite Solar Sail System’s solar sail. The unfurled solar sail is approximately 30 feet (about 9 meters) on a side. Credit: NASA Ames.

ACS3 reached its final orbit a little less than two hours after liftoff, after earlier deployment of the South Korean NEONSAT-1 via a kick stage that changed orbit for the second of the deployments. The craft is now roughly 1000 kilometers up, and if everything goes well, full deployment of the composite booms spanning the diagonals of the sail will give us an 80 square meter sail as bright as Sirius in the night sky. Digital cameras onboard should provide imagery of the sail before and during deployment. No signs of sail deployment yet but the satellite is being observed at numerous sites.

The polymer from which the composite booms are made is reinforced with carbon fiber and flexible enough to allow it to be rolled for compact storage. According to Alan Rhodes, lead systems engineer for the mission at NASA Ames, seven meters of deployable booms can roll up into a shape that fits into the hand. Note too that these booms are 75 percent lighter than previous metallic deployable booms and should experience far less in-space thermal distortion during flight. A new tape-spool boom extraction system is being tested which will, engineers hope, minimize the possibility of the coiled booms jamming during the deployment. We shall see.

Animation: Deployment of the ACS3 sail. Credit: NASA Ames.

We’re getting pretty good at miniaturization, as shown by the fact that the 12-unit CubeSat carrying ACS3 into orbit measures roughly 23 centimeters by 34 centimeters, which makes it about the size of the microwave oven sitting on my kitchen counter. Refining the material and structure of the booms is another step toward lower-cost missions which we can eventually hope to deploy in networked swarms. Imagine a constellation of exploratory craft to targets like the ice giants. Larger sails using these technologies may eventually fly the kind of ‘sundiver’ missions we’ve often discussed here, deploying at perihelion for maximum thrust to deep space.

Data Return from Proxima Centauri b

The challenges involved in sending gram-class probes to Proxima Centauri could not be more stark. They’re implicit in Kevin Parkin’s analysis of the Breakthrough Starshot system model, which ran in Acta Astronautica in 2018 (citation below). The project settled on twenty percent of the speed of light as a goal, one that would reach Proxima Centauri b well within the lifetime of researchers working on the project. The probe mass is 3.6 grams, with a 200 nanometer-thick sail some 4.1 meters in diameter.
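For a sense of scale, a quick calculation using those quoted figures (and taking the distance to Proxima Centauri as roughly 4.25 light years, a value not stated in the passage above) gives the cruise time and the kinetic energy locked up in each tiny probe:

```python
import math

# Back-of-the-envelope numbers for a Starshot-style probe: 3.6 grams at 0.2 c,
# with the distance to Proxima Centauri taken as ~4.25 light years.
C = 299_792_458.0          # m/s
mass_kg = 3.6e-3
beta = 0.2
distance_ly = 4.25

cruise_years = distance_ly / beta                      # coasting time in transit
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
kinetic_energy_j = (gamma - 1.0) * mass_kg * C ** 2    # relativistic kinetic energy

print(f"Cruise time     : {cruise_years:.1f} years")
print(f"Kinetic energy  : {kinetic_energy_j:.2e} J "
      f"(~{kinetic_energy_j / 4.184e12:.1f} kilotons TNT equivalent)")
```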

The paper we’ve been looking at from Marshall Eubanks (along with a number of familiar names from the Initiative for Interstellar Studies including Andreas Hein, his colleague Adam Hibberd, and Robert Kennedy) accepts the notion that these probes should be sent in great numbers, and not only to exploit the benefits of redundancy to manage losses along the way. A “swarm” approach in this case means a string of probes launched one after the other, using the proposed laser array in the Atacama desert. The exciting concept here is that these probes can reform themselves from a string into a flat, lens-shaped mesh network some 100,000 kilometers across.

Image: Figure 16 from the paper. Caption: Geometry of swarm’s encounter with Proxima b. The Beta-plane is the plane orthogonal to the velocity vector of the probe ”at infinity” as it approaches the planet; in this example the star is above (before) the Beta-plane. To ensure that the elements of the swarm pass near the target, the probe-swarm is a disk oriented perpendicular to the velocity vector and extended enough to cover the expected transverse uncertainty in the probe-Proxima b ephemeris. Credit: Eubanks et al.

The Proxima swarm presents one challenge I hadn’t thought of. We have to be able to predict the position of Proxima b to within 10,000 kilometers at least 8.6 years before flyby, this being the time for a complete information cycle from Earth to Proxima and back. Effectively, we need to know the planet’s velocity to within 1 meter per second, with a correspondingly tight angular position (0.1 microradians).

Although we already have Proxima b’s period (11.2 days), we need to determine its line of nodes, eccentricity, inclination and epoch, and also its perturbations by the other planets in the system. At the time of flyby, the most recent Earth update will be at least 8.5 years old. The Proxima b orbit state will need to be propagated over at least that interval to predict its position, and that prediction needs to be accurate to the order of the swarm diameter.
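A simple drift estimate shows why meter-per-second velocity knowledge is the right order of magnitude. This is my own illustration built from the numbers above and the ~100,000 km swarm diameter mentioned earlier, not a calculation from the paper:

```python
# How an uncorrected velocity error in the Proxima b ephemeris grows into a
# position error over the light-travel-limited update interval.
SECONDS_PER_YEAR = 3.156e7
SWARM_DIAMETER_KM = 1.0e5      # the ~100,000 km swarm quoted earlier

def drift_km(velocity_error_m_s: float, years_stale: float = 8.5) -> float:
    """Position error accumulated by a constant velocity error."""
    return velocity_error_m_s * years_stale * SECONDS_PER_YEAR / 1e3

for v_err in (0.1, 1.0, 10.0):   # meters per second
    d = drift_km(v_err)
    print(f"velocity error {v_err:4.1f} m/s -> drift ~ {d:9.0f} km "
          f"({d / SWARM_DIAMETER_KM:5.1f} swarm diameters)")
```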

The authors suggest that a small spacecraft in Earth orbit can refine Proxima b’s position and the star’s ephemeris, but note that a later paper will dig into this further.

In the previous post I looked at the “Time on Target” and “Velocity on Target” techniques that would make swarm coherence possible, with variations in acceleration and velocity allowing later-launched probes to reach higher speeds, but with higher drag so that as they reach the craft sent before them, they slow to match their speed. From the paper again:

A string of probes relying on the ToT technique only could indeed form a swarm coincident with the Proxima Centauri system, or any other arbitrary point, albeit briefly. But then absent any other forces it would quickly disperse afterwards. Post-encounter dispersion of the swarm is highly undesirable, but can be eliminated with the VoT technique by changing the attitude of the spacecraft such that the leading edge points at an angle to the flight direction, increasing the drag induced by the ISM, and slowing the faster swarm members as they approach the slower ones. Furthermore, this approach does not require substantial additional changes to the baseline BTS [Breakthrough Starshot] architecture.

In other words, probes launched at different times with a difference in velocity target a point on their trajectory where the swarm can cohere, as the paper puts it. The resulting formation is then retained for the rest of the mission. The plan is to adjust the attitude of the leading probes continually as they move through the interstellar medium, which means variations in their aspect ratio and sectional density. A probe can move edge-on, for instance, or fully face-on, with variations in between. The goal is that the probes launched later in the process catch up with but do not move past the early probes.
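A toy calculation makes the ‘Time on Target’ part of this easy to visualize. In the sketch below (all values invented for illustration), each later probe is launched just fast enough to arrive at a chosen distance at the same moment as the first; the string is compact only at that instant, which is exactly why the attitude-and-drag ‘Velocity on Target’ adjustment is needed to freeze the formation afterwards.

```python
# Toy "Time on Target" illustration: later probes get just enough extra speed
# to arrive at the rendezvous distance simultaneously. With no drag trick the
# swarm is compact only momentarily and spreads out again afterwards.
TARGET = 1000.0            # distance to the rendezvous point (arbitrary units)
LAUNCH_INTERVAL = 10.0     # time between successive launches
N_PROBES = 5
BASE_SPEED = 1.0           # speed of the first probe

def probe_speed(i: int) -> float:
    """Speed needed so probe i (launched at t = i*interval) arrives with probe 0."""
    arrival_time = TARGET / BASE_SPEED
    return TARGET / (arrival_time - i * LAUNCH_INTERVAL)

def positions(t: float) -> list:
    """Positions of all probes at time t, ignoring drag entirely."""
    return [max(0.0, probe_speed(i) * (t - i * LAUNCH_INTERVAL)) for i in range(N_PROBES)]

arrival = TARGET / BASE_SPEED
for t in (0.5 * arrival, arrival, 1.5 * arrival):
    pos = positions(t)
    print(f"t = {t:7.1f}   swarm spread = {max(pos) - min(pos):6.2f}")
```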

All this is going to take a lot of ‘smarts’ on the part of the individual probes, meaning we have to have ways for them to communicate not just with Earth but with each other. The structure of the probes discussed here is an innovation. The authors propose that key components like laser communications and computation be concentrated in a 2-cm thickened rim around the outside of the sail disk, the ‘heart of the device,’ as they put it, while the central disk itself remains flat.

The center of the disk is optical, or as the paper puts it, ‘a thin but large-aperture phase-coherent meta-material disk of flat optics similar to a fresnel lens…’ which will be used for imaging as well as communications. Have a look at the concept:

Image: This is Figure 3a from the paper. Caption: Oblique view of the top/forward of a probe (side facing away from the launch laser) depicting array of phase-coherent apertures for sending data back to Earth, and optical transceivers in the rim for communication with each other. Credit: Eubanks et al.

So we have a sail moving at twenty percent of lightspeed through an incoming hydrogen flux, an interesting challenge for materials science. The authors consider both aerographene and aerographite. I had assumed these were the same material, but digging into the matter reveals that aerographene consists of a three-dimensional network of graphene sheets mixed with porous aerogel, while aerographite is a sponge-like formation of interconnected carbon nanotubes. Both offer extremely low density, so much so that the paper notes the performance of aerographene for deceleration is 10^4 times better than conventional mylar. Usefully, both of these materials have been synthesized in the laboratory and mass production seems feasible.

Back to the probe’s shape, which is dictated by the needs not only of acceleration but survival of its electronics – remember that these craft must endure a laser launch that will involve at least 10,000 g’s. The raised rim layout reminds the authors of a red corpuscle as opposed to what has been envisioned up to now as a simple flat disk. The four-meter central disk contains 247 25-cm structures arranged, as the illustration shows, like a honeycomb. We’ll use this optical array both for imaging Proxima b and for returning data to Earth, and each of the arrays offers redundancy given that impacts with interstellar hydrogen will inevitably damage some elements.

Remember that the plan is to build an intelligent swarm, which demands laser links between the probes themselves. Making sure each probe is aware of its neighbors is crucial here, for which purpose it will use the optical transceivers around its rim. The paper calculates that this would make each probe detectable by its closest neighbor out to something close to 6,000 kilometers. The probes transmit a pulsed beacon as they scan for neighboring probes, and align to create the needed mesh network. The alignment phase is under study and will presumably factor into the NIAC work.

The paper backs out to explain the overall strategy:

…our innovation is to use advances in optical clocks, mode-locked optical lasers, and network protocols to enable a swarm of widely separated small spacecraft or small flotillas of such to behave as a single distributed entity. Optical frequency and reliable picosecond timing, synchronized between Earth and Proxima b, is what underpins the capability for useful data return despite the seemingly low source power, very large space loss and low signal-to-noise ratio.

What happens is that the optical pulses from the probes will be synchronized, meaning that despite the sharp constraints on available energy, the same signal photons are ‘squeezed’ into a smaller transmission slot, which increases the brightness of the signal. We get data rates through this brightening that could not otherwise be achieved, and we also get data from various angles and distances. On Earth, a square kilometer array of 796 ‘light buckets’ can receive the pulses.
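The gain from pulsing is easy to illustrate with a toy photon-counting model (my own numbers, not the link budget in the paper): the signal photons all arrive inside a narrow, precisely timed window, while background light is spread evenly in time, so shrinking the window sharply reduces the background the signal has to compete with.

```python
# Toy photon-counting model: one signal window per second of integration.
# Rates below are invented purely to show the scaling with window width.
signal_rate = 10.0        # signal photons per second landing in the windows
background_rate = 1.0e4   # background photons per second after filtering

def snr(window_s: float, integration_s: float = 1.0) -> float:
    """Simple photon-counting SNR = S / sqrt(S + B) within the timed windows."""
    s = signal_rate * integration_s
    b = background_rate * window_s * integration_s
    return s / (s + b) ** 0.5

for window in (1.0, 1e-3, 1e-6, 1e-9):   # from continuous to nanosecond slots
    print(f"window = {window:8.0e} s  ->  SNR ~ {snr(window):6.2f}")
```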

Image: This is Figure 13 from the paper. Caption: Figure 13: A conceptual receiver implemented as a large inflatable sphere, similar to widely used inflatable antenna domes; the upper half is transparent, the lower half is silvered to form a half-sphere mirror. At the top is a secondary mirror which sends the light down into a cone-shaped accumulator which gathers it into the receiver in the base. The optical signals would be received and converted to electrical signals – most probably with APDs [avalanche photo diodes] at each station and combined electrically at a central processing facility. Each bucket has a 10-nm wide band-pass filter, centered on the Doppler-shifted received laser frequency. This could be made narrower, but since the probes will be maneuvering and slowing in order to meet up and form the swarm, and there will be some deceleration on the whole swarm due to drag induced by the ISM, there will be some uncertainty in the exact wavelength of the received signal. Credit: Eubanks et al.

If we can achieve a swarm that is in communication with its members using micro-miniaturized clocks to keep operations synchronous, we can thus use all of the probes to build up a single detectable laser pulse bright enough to overcome the background light of Proxima Centauri and reach the array on Earth. The concept is ingenious and the paper so rich in analysis and conjecture that I keep going back to it, but don’t have time today to do more than cover these highlights. The analysis of enroute and approach science goals and methods alone would make for another article. But it’s probably best that I simply send you to the paper itself, one which anyone interested in interstellar mission design should download and study.

The paper is Eubanks et al., “Swarming Proxima Centauri: Optical Communication Over Interstellar Distances,” submitted to the Breakthrough Starshot Challenge Communications Group Final Report and available online. Kevin Parkin’s invaluable analysis of Starshot is Parkin, K.L.G., “The Breakthrough Starshot system model,” Acta Astronautica 152 (2018), 370–384 (abstract / preprint).

Atmospheric Types and the Results from K2-18b

The exoplanet K2-18b has been all over the news lately, with provocative headlines suggesting a life detection because of the possible presence of dimethyl sulfide (DMS), a molecule produced by life on our own planet. Is this a ‘Hycean’ world, covered with oceans under a hydrogen-rich atmosphere? Almost nine times as massive as Earth, K2-18b is certainly noteworthy, but just how likely are these speculations? Centauri Dreams regular Dave Moore has some thoughts on the matter, and as he has done before in deeply researched articles here, he now zeroes in on the evidence and the limitations of the analysis. This is one exoplanet that turns out to be provocative in a number of ways, some of which will move the search for life forward.

by Dave Moore

124 light years away in the constellation of Leo lies an undistinguished M3V red dwarf, K2-18. Two planets are known to orbit this star: K2-18c, a 5.6 Earth mass planet orbiting 6 million miles out, and K2-18b, an 8.6 Earth mass planet orbiting 16 million miles out. The latter planet transits its primary, so from its mass and size (2.6 x Earth’s), we have its density (2.7 g/cm³), which classes the planet as a sub-Neptune. The planet’s relatively large radius and its primary’s low luminosity make it a good target for obtaining its atmospheric spectra, but what also makes this planet of special interest to astronomers is that its estimated irradiance of 1368 watts/m² is almost the same as Earth’s (1380 watts/m²).
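Those numbers hang together: a quick check (mine, using standard Earth values) recovers the quoted bulk density from the mass and radius.

```python
import math

# Consistency check: 8.6 Earth masses in a 2.6 Earth-radius body.
M_EARTH_G = 5.972e27     # grams
R_EARTH_CM = 6.371e8     # centimeters

mass = 8.6 * M_EARTH_G
volume = 4.0 / 3.0 * math.pi * (2.6 * R_EARTH_CM) ** 3
print(f"K2-18b bulk density ~ {mass / volume:.2f} g/cm^3")   # ~2.7, sub-Neptune territory
```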

Determining an exosolar planet’s atmospheric constituents, even with the help of the James Webb telescope, is no easy matter. For a detectable infrared spectrum, molecules like H2O, CH4, CO2 and CO generally need to have a concentration above 100 ppm. The presence of O3 can function as a stand-in for O2, but molecules such as H2, N2, with no permanent dipole moment, are much harder to detect.

The Hubble telescope got a spectrum of K2-18b in 2019. Water vapor and H2 were detected, and it was assumed to have a deep H2/He/steam atmosphere above a high pressure ice layer over an iron/rocky core, much like Neptune. On September 11 of this year, the results of spectral studies by the James Webb telescope were announced: CH4 and CO2 were found as well as possible traces of DMS (Dimethyl sulfide). No signal of NH3 was found. Nor was there any sign of water vapor. The feature thought to be water vapor turned out to be a methane line of the same frequency.

Figure 1: Spectra of K2-18b obtained by the James Webb telescope

This announcement resulted in considerable excitement and speculation in the popular press. K2-18b was called a Hycean planet. It was speculated that it had an ocean, and the possible presence of DMS was taken as an indication of life because oceanic algae produce this chemical. But that was not what intrigued me. What caught my attention was the seemingly anomalous combination of CH4 and CO2 in the planet’s atmosphere. How could a planet have CH4, a highly reduced form of carbon, in equilibrium with CO2, the oxidized form of carbon? A search turned up a paper from February 2021: “Coexistence of CH4, CO2, and H2O in exoplanet atmospheres,” by Woitke, Herbort, Helling, Stüeken, Dominik, Barth and Samra.

The authors’ purpose for this paper was to help with the detection of biosignatures. To quote:

The identification of spectral signatures of biological activity needs to proceed via two steps: first, identify combinations of molecules which cannot co-exist in chemical equilibrium (“non-equilibrium markers”). Second, find biological processes that cause such disequilibria, which cannot be explained by other physical non-equilibrium processes like photo-dissociation. […] The aim of this letter is to propose a robust criterion for step one…

The paper presents an exhaustive study for the lowest energy state (Gibbs free energy) composition of exoplanet atmospheres for all possible abundances of Hydrogen, Carbon, Oxygen, and Nitrogen in chemical equilibrium. To do that, they ran thermodynamic simulations of varying mixtures of the above atoms and looked at the resulting molecular ratios. At low temperatures (T ≤ 600 K), they found that the only molecular species you get in any abundance are H2, H2O, CH4, NH3, N2, CO2, O2. At higher temperature, the equilibrium shifts towards more H2, and CO begins to appear.

Some examples of their results (these threshold conditions are coded up in a short sketch following the second list below):

If O > 0.5 x H + 2 x C ––> O2-rich atmosphere, no CH4
If H > 2 x O + 4 x C ––> H2-rich atmosphere, no CO2
If C > 0.25 x H + 0.5 x O ––> Graphite condensation, no H2O

They also used the equations to tell what partial pressures of the elemental mixture will produce equal pressures of the various molecules:

If H = 2 x O then the CO2 level will equal CH4
If 12 C = 2 x O + 3 x H then the CO2 level will equal H2O
If 12 C = 6 x O + H then the H2O level will equal CH4
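Here is the short sketch promised above: a toy classifier that does nothing more than restate the threshold conditions just listed (it is not the full Gibbs free energy calculation of Woitke et al., and the Type labels are loose mappings of mine):

```python
# Toy classifier restating the threshold conditions quoted above, with H, C
# and O as element abundances by number (nitrogen mostly forms inert N2).
def classify_atmosphere(H: float, C: float, O: float) -> str:
    if O > 0.5 * H + 2.0 * C:
        return "O2-rich (roughly Type B): oxidized species, no CH4"
    if H > 2.0 * O + 4.0 * C:
        return "H2-rich (roughly Type A): reduced species, no CO2"
    if C > 0.25 * H + 0.5 * O:
        return "graphite condenses: carbon rains out as soot, no H2O"
    return "intermediate (roughly Type C): H2O, CO2 and CH4 can coexist"

# Illustrative mixtures (element fractions by number, arbitrary normalization):
print(classify_atmosphere(H=0.90, C=0.02, O=0.08))   # hydrogen-dominated
print(classify_atmosphere(H=0.10, C=0.05, O=0.85))   # oxygen-dominated
print(classify_atmosphere(H=0.55, C=0.10, O=0.35))   # middle ground
```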

To summarize, I quote from their abstract:

We propose a classification of exoplanet atmospheres based on their H, C, O, and N element abundances below about 600 K. Chemical equilibrium models were run for all combinations of H, C, O, and N abundances, and three types of solutions were found, which are robust against variations of temperature, pressure, and nitrogen abundance.

Type A atmospheres [which] contain H2O, CH4, NH3, and either H2 or N2, but only traces of CO2 and O2.

Type B atmospheres [which] contain O2, H2O, CO2, and N2, but only traces of CH4, NH3, and H2.

Type C atmospheres [which] contain H2O, CO2, CH4, and N2, but only traces of NH3, H2, and O2.

Type A atmospheres are found in the giant planets of our outer solar system. Type B atmospheres occur in our inner solar system. Earth, Venus and Mars fall under this classification, but we don’t see any planets with Type C atmospheres.

Below is a series of charts showing the results for each of the six main molecular species over a range of mixtures.

Figure 2: The vertical axis is the ratio of Hydrogen to Oxygen, starting at 100% Hydrogen at the bottom and running to 100% Oxygen at the top. The horizontal axis shows the proportion of Carbon in the total mixture (The ratio runs up to 35%.) Molecular concentrations are in chemical equilibrium as a function of Hydrogen, Carbon, and Oxygen element abundances, calculated for T = 400 K and p = 1 bar. The blank regions are concentrations of < 10−4.

The central grey triangle marks the region in which H2O, CH4, and CO2 can coexist in chemical equilibrium. The thin grey lines bisecting the triangle indicate where two of the constituents are at an equal concentration. These lines are hard to discern unless you can magnify the original image. For H2O and CO2 at equal concentration, it’s the dashed line (the near vertical line running upwards from 0.2 on the horizontal scale). For CO2 and CH4, it’s the horizontal line. And for H2O and CH4, it’s the dotted line swooping upwards toward the top right-hand corner.

The color bars at the right-hand side of the charts are both a color representation of the concentration and show the proportion of Nitrogen tied up as N2, i.e. that which is not NH3. Not surprisingly, the more Hydrogen there is in the mix, the higher the proportion of NH3 there is.

Other Results from the Paper

In the area around the stoichiometric ratio for water you get maximum H2O production and supersaturation occurs. Clouds form and the water rains out. Therefore, you cannot get an atmosphere with very high concentrations of water vapor unless the temperature is over about 647 K, the critical point of water. Precipitation results in the atmospheric composition moving out of the area that gives CO2/CH4 mixtures.

Atmospheres with high carbon concentrations and having Hydrogen and Oxygen near their stoichiometric ratio have most of the atmospheric constituents tied up as water, so at a certain point carbon forms neither CO2 nor CH4 but rains out as soot. This, however, only precludes mixtures in the very right hand side of the CO2/CH4 Triangle.

Full-equilibrium condensation models show that the outgassing from warm rock, such as mid-oceanic ridge basalt, can naturally produce Type C atmospheres.

Thoughts and Speculations

i) While it is difficult to argue with the man who coined the term, I still think Madhusudhan’s description of K2-18b as Hycean is too broad. Watching Madhusudhan in a YouTube interview, he refers to his paper “Habitability and Biosignatures of Hycean Worlds,” which suggests that ocean covered planets under a Hydrogen atmosphere can exist within a zone that reaches into a level of irradiance slightly greater than Earth’s; however, he doesn’t mention the work by Lous et al in their paper, “Potential long-term habitable conditions on planets with primordial H–He atmospheres,” which showed that at irradiance levels equal to or greater than those found at 2 au from our Sun, the Hydrogen atmosphere required to maintain Earthlike temperatures without cooking the planet is so thin that it is lost quickly over geological timescales. (You can see this in more detail in my article Super Earths/Hycean Worlds.) I would therefore define a Hycean planet as a rocky world with a radius up to 1.8 x Earth’s outside the irradiance equivalent of 2 au from our sun. K2-18b, being both larger than this and less dense than a rocky world, would fall, in my mind, firmly into the category of sub-Neptune.

ii) Another way of thinking of Type A, Type B and Type C atmospheres is to denote them as Hydrogen dominated, Oxygen dominated and Carbon dominated. Carbon dominated atmospheres may have by far the bulk of their constituents being Hydrogen and Oxygen; but because the enthalpy of the Hydrogen-Oxygen reaction is so much greater than the other reactions, when Hydrogen and Oxygen are close to their stoichiometric ratio, they preferentially remove themselves from the mix leaving Carbon as the dominant constituent. There is no Nitrogen dominated atmosphere because for most of its range Nitrogen sticks to itself forming N2 and is inert.

iii) The lack of H2O spectral lines is puzzling. Madhusudhan in his interview suggests that the spectrum was a shot of the high, dry stratosphere. To cross-check the plausibility of this, I looked up the physical data on DMS. Dimethyl sulfide vaporizes at 37°C and freezes at -98°C, which is lower than CO2’s freezing point. It also has a much higher vapor pressure than water at below-freezing temperatures, so this does not contradict the assumption.

iv) I’m surprised this paper is not more widely known, as not only does it provide a powerful tool for the analysis of exosolar planets’ atmospheric spectra, but it can also point to other aspects of a planet.

After the Hubble results came out in 2017, papers were published to model the formation of K2-18b, and while a range of possibilities could match the planet’s characteristics, they all started from the assumption that the planet began via the formation of a rocky/iron core followed by the gas accretion of large amounts of H2, Helium, and H2O. According to the coexistence paper, though, you cannot have large amounts of H2 and get a CO2/CH4 mix with no NH3. So to arrive at this state, the planet must never have had much gas accretion in the first place, or must have lost large amounts of Hydrogen after it formed. This latter scenario would require the planet to gain a Hydrogen envelope while at less than full mass in a hot nebula and then, at full mass in a cooler environment, lose most of its Hydrogen.

It is much easier to explain the planet’s characteristics by assuming it formed outside the snowline, never gained much of a gas envelope in the first place and spiraled into its present position. If it was formed from icy bodies like Ganymede and Titan (density ~ 1.9 gm/cc), this would give a good match for its density (2.7 gm/cc) allowing for gravitational contraction. The snow line is also the zone where carbonaceous chondrites form, so this would give the planet a higher carbon content than a pure rocky/iron one.

v) Madhusudhan, again from his interview, seems to think that K2-18b is an ocean planet, but I’m dubious about this for two reasons:

The first is that from the work done on Hycean planets by Lous et al, any depth of atmosphere, especially with the potent greenhouse mix of CO2 and CH4, is likely to result in a runaway-greenhouse steam atmosphere inside the classically defined habitable zone (inside 2 au for our sun).

The planet’s CO2/CH4 mix also points against this. From the paper, if there is a slight excess of Hydrogen over the stoichiometric ratio for water, then condensing H2O out, as either water or high pressure ice, pushes the planet’s atmosphere towards a Type A Hydrogen excess, with the CO2 lines disappearing and NH3 lines appearing.

All of this would point towards a planet with a rocky/iron core overlaid by high pressure ice, which would, at about the megabar level, transition to a gas atmosphere composed mainly of super-critical steam. This would make up a significant volume of the planet. At the top of this atmosphere, the water, now in the form of steam, would condense out as virga rain, leaving a dry stratosphere consisting mainly of CO2, CH4, H2 and N2.

To test my assumption, I did a rough back of the envelope calculation using online calculators, and looked at the wet adiabatic lapse rate (the rate of increase in temperature when saturated air is compressed) per atm. pressure doubling starting from 1 bar at 20°C. This rate (1.5°C/1000 ft) is considerably less than the rate for dry gases (3°C/1000 ft).

It was all very ad hoc, but the first thing I noted was that for each pressure doubling, the boiling point of water goes up significantly (at 100 bar, water boils at 300°C) until its temperature approaches its critical point (374°C), where it levels off. So the lapse rate increase in temperature chases the boiling point of water as you go deeper and deeper into the atmosphere; however, from my calculations, it catches water’s boiling point at 270°C and 64 bar. The calculations are arbitrary (I was using Earth’s atmospheric composition and gravity) and small changes in the parameters can result in big changes in the crossover point; but what this does point to is that if the planet has an ocean, it could be a rather hot one under a dense atmosphere, and if the atmosphere has any great depth then the ocean is likely to be a supercritical fluid.
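For readers who want to play with the same kind of exercise, here is a rough sketch of the method. The boiling point comes from the Clausius-Clapeyron relation (a fair approximation well below the critical point), while the warming per pressure doubling is an assumed stand-in for the wet adiabat rather than Dave’s actual figures; as he notes, small changes in that parameter move the crossover point a great deal.

```python
import math

# Compare the boiling point of water with an assumed parcel temperature that
# rises by a fixed amount per pressure doubling, and find where they cross.
L_VAP = 40660.0                 # J/mol, latent heat of vaporization of water
R_GAS = 8.314                   # J/(mol K)
T_REF, P_REF = 373.15, 1.013    # normal boiling point (K) at 1.013 bar
T_CRIT = 647.0                  # K, critical point of water

def boiling_point_k(p_bar: float) -> float:
    """Clausius-Clapeyron estimate of the boiling temperature at pressure p."""
    return 1.0 / (1.0 / T_REF - (R_GAS / L_VAP) * math.log(p_bar / P_REF))

def parcel_temperature_k(p_bar: float, warming_per_doubling: float = 45.0) -> float:
    """Assumed moist-adiabat-like profile starting from 20 C at 1 bar."""
    return 293.15 + warming_per_doubling * math.log2(p_bar)

p = 1.0
while p < 300.0:
    t_boil, t_parcel = boiling_point_k(p), parcel_temperature_k(p)
    if t_boil >= T_CRIT:
        print(f"Reached the critical point near {p:.0f} bar with no crossover.")
        break
    if t_parcel >= t_boil:
        print(f"Parcel temperature overtakes the boiling point near {p:.0f} bar "
              f"(about {t_boil - 273.15:.0f} C): any ocean down there is a hot one.")
        break
    p *= 2.0
```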

Also, for the atmosphere to be thin, the planet’s ratio of CO2, CH4 and H2 must be less than 1/10,000 that of H2O, which is not something I regard as likely, given what we know about the outer solar system.

I’ll leave you with a phase diagram of water showing the dry adiabat of Venus (red line) moved 25°C cooler to represent a dry Earth, and the wet adiabat (blue line) that I calculated. It’s also a handy diagram to play with, as it gives you an idea of how deep the ocean or critical fluid layer will be at a given temperature before it turns into a layer of high pressure ice.

vi) One final point, and this reinforces the purpose of the paper: that we need to thoroughly understand planetary chemistry to eliminate false bio-markers. DMS is widely touted as a biomarker, but if we look at the most thermodynamically stable forms of sulfur: in a Type A reducing atmosphere, it’s H2S; and in a wet, oxidizing, Type B atmosphere, it’s the sulfate (SO4^2-) ion. Unfortunately, the authors of the paper did not extend their thermodynamic analysis to Sulfur, but if we look at DMS’s formula, (CH3)2S, it looks an awful lot like a good candidate for the most thermodynamically stable form of Sulfur for a Type C atmosphere, not a biomarker.


Wikipedia: K2-18b

N. Madhusudhan, S. Sarkar, S. Constantinou, M Holmberg, A. Piette, and J. Moses, Carbon-bearing Molecules in a Possible Hycean Atmosphere, Preprint, arXiv: 2309.05566v2, Oct 2023

P. Woitke, O. Herbort, Ch. Helling, E. Stüeken, M. Dominik, P. Barth and D. Samra, Coexistence of CH4, CO2, and H2O in exoplanet atmospheres, Astronomy & Astrophysics, Vol. 646, A43, Feb 2021

N. Madhusudhan, M. Nixon, L. Welbanks, A. Piette and R. Booth, The Interior and Atmosphere of the Habitable-zone Exoplanet K2-18b, The Astrophysical Journal Letters, 891:L7 (6pp), 2020 March 1

Super Earths/Hycean Worlds, Centauri Dreams 11 November, 2022

YouTube interview of Nikku Madhusudhan, Is K2-18b a Hycean Exoworld? on John Michael Godier’s Event Horizon

What We’re Learning about TRAPPIST-1

It’s no surprise that the James Webb Space Telescope’s General Observers program should target TRAPPIST-1 with eight different efforts slated for Webb’s first year of scientific observations. Where else do we find a planetary system that is not only laden with seven planets, but also with orbits so aligned with the system’s ecliptic? Indeed, TRAPPIST-1’s worlds comprise the flattest planetary arrangement we know about, with orbital inclinations throughout less than 0.1 degrees. This is a system made for transits. Four of these worlds may allow temperatures that could support liquid water, should it exist in so exotic a locale.

Image: This diagram compares the orbits of the planets around the faint red star TRAPPIST-1 with the Galilean moons of Jupiter and the inner Solar System. All the planets found around TRAPPIST-1 orbit much closer to their star than Mercury is to the Sun, but as their star is far fainter, they are exposed to similar levels of irradiation as Venus, Earth and Mars in the Solar System. Credit: ESO/O. Furtak.

The parent star is an M8V red dwarf about 40 light years from the Sun. It would be intriguing indeed if we detected life here, especially given the star’s estimated age of well over 7 billion years. Any complex life would have had plenty of time to evolve into a technological phase, if this can be done in these conditions. But our first order of business is to find out whether these worlds have atmospheres. TRAPPIST-1 is a flare star, implying the possibility that any gaseous envelopes have long since been disrupted by such activity.

Thus the importance of the early work on TRAPPIST-1 b and c, the former examined by Webb’s Mid-Infrared Instrument (MIRI), with results presented in a paper in Nature. We learn here that the planet’s dayside temperature is in the range of 500 Kelvin, a remarkable find in itself given that this is the first time any form of light from a rocky exoplanet as small and cool as this has been detected. The planet’s infrared glow as it moved behind the star produced a striking result, explained by co-author Elsa Ducrot (French Alternative Energies and Atomic Energy Commission):

“We compared the results to computer models showing what the temperature should be in different scenarios. The results are almost perfectly consistent with a blackbody made of bare rock and no atmosphere to circulate the heat. We also didn’t see any signs of light being absorbed by carbon dioxide, which would be apparent in these measurements.”

The TRAPPIST-1 work is moving relatively swiftly, for already we have the results of a second JWST program, this one executed by the Max Planck Institute for Astronomy and explained in another Nature paper, with lead author Sebastian Zieba. Here the target is TRAPPIST-1 c, which is roughly the size of Venus and which, moreover, receives about the same amount of stellar radiation. That might imply the kind of thick atmosphere we see at Venus, rich in carbon dioxide, but no such result is found. Let me quote Zieba:

“Our results are consistent with the planet being a bare rock with no atmosphere, or the planet having a really thin CO2 atmosphere (thinner than on Earth or even Mars) with no clouds. If the planet had a thick CO2 atmosphere, we would have observed a really shallow secondary eclipse, or none at all. This is because the CO2 would be absorbing all of the 15-micron light, so we wouldn’t detect any coming from the planet.”

Image: This light curve shows the change in brightness of the TRAPPIST-1 system as the second planet, TRAPPIST-1 c, moves behind the star. This phenomenon is known as a secondary eclipse. Astronomers used Webb’s Mid-Infrared Instrument (MIRI) to measure the brightness of mid-infrared light. When the planet is beside the star, the light emitted by both the star and the dayside of the planet reach the telescope, and the system appears brighter. When the planet is behind the star, the light emitted by the planet is blocked and only the starlight reaches the telescope, causing the apparent brightness to decrease. Credits: NASA, ESA, CSA, Joseph Olmsted (STScI)

What JWST is measuring is the 15-micron mid-infrared light emitted by the planet, using the world’s secondary eclipse, the same technique used in the TRAPPIST-1 b work. The MIRI instrument observed four secondary eclipses as the planet moved behind the star. The comparison of brightness between starlight only and the combined light of star and planet allowed the calculation of the amount of mid-infrared given off by the dayside of the planet. This is remarkable work: The decrease in brightness during the secondary eclipse amounts to 0.04 percent, and all of this working with a target 40 light years out.
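The size of that signal is roughly what a blackbody estimate predicts. The sketch below uses approximate values for the planet-to-star radius ratio and temperatures (my assumptions for illustration, not the fitted numbers from the Zieba et al. paper) and lands in the few-hundred-parts-per-million range quoted above.

```python
import math

# Rough secondary-eclipse depth: (Rp/Rs)^2 times the ratio of Planck
# radiances at 15 microns. All input values are approximate assumptions.
H, C_LIGHT, K_B = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m: float, temp_k: float) -> float:
    """Planck spectral radiance B_lambda."""
    x = H * C_LIGHT / (wavelength_m * K_B * temp_k)
    return (2.0 * H * C_LIGHT ** 2 / wavelength_m ** 5) / (math.exp(x) - 1.0)

wavelength = 15e-6        # MIRI 15-micron band
radius_ratio = 0.084      # ~1.1 Earth-radius planet around a ~0.12 solar-radius star
t_star, t_planet = 2600.0, 380.0   # assumed stellar and dayside temperatures (K)

depth = radius_ratio ** 2 * planck(wavelength, t_planet) / planck(wavelength, t_star)
print(f"Eclipse depth ~ {depth * 1e6:.0f} ppm (~{depth * 100:.3f} percent)")
```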

Image: This graph compares the measured brightness of TRAPPIST-1 c to simulated brightness data for three different scenarios. The measurement (red diamond) is consistent with a bare rocky surface with no atmosphere (green line) or a very thin carbon dioxide atmosphere with no clouds (blue line). A thick carbon dioxide-rich atmosphere with sulfuric acid clouds, similar to that of Venus (yellow line), is unlikely. Credit: NASA, ESA, CSA, Joseph Olmsted (STScI).

I should also mention that the paper on TRAPPIST-1 b points out the similarity of its results to earlier observations of two other M-dwarf stars and their inner planets, LHS 3844 b and GJ 1252 b, where the recorded dayside temperatures showed that heat was not being redistributed through an atmosphere and that there was no absorption of carbon dioxide, as one would expect from an atmosphere like that of Venus.

Thus the need to move further away from the star, as in the TRAPPIST-1 c work, and now, it appears, further still, to cooler worlds more likely to retain their atmospheres. As I said, things are moving swiftly. In the coming year for Webb is a follow-up investigation on both TRAPPIST-1 b and c, in the hands of the system’s discoverer, Michaël Gillon (Université de Liège) and team. With a thick atmosphere ruled out at planet c, we need to learn whether the still cooler planets further out in this system have atmospheres of their own. If not, that would imply formation with little water in the early circumstellar disk.

The paper is Zieba et al., “No thick carbon dioxide atmosphere on the rocky exoplanet TRAPPIST-1 c,” Nature 19 June 2023 (full text). The paper on TRAPPIST-1 b is Greene et al., “Thermal emission from the Earth-sized exoplanet TRAPPIST-1 b using JWST,” Nature 618 (2023), 39-42 (abstract).

Part II: Sherlock Holmes and the Case of the Spherical Lens: Reflections on a Gravity Lens Telescope

Aerospace engineer Wes Kelly continues his investigations into gravitational lensing with a deep dive into what it will take to use the phenomenon to construct a close-up image of an exoplanet. For continuity, he leads off with the last few paragraphs of Part I, which then segue into the practicalities of flying a mission like JPL’s Solar Gravitational Lens concept, and the difficulties of extracting a workable image from the maze of lensed photons. The bending of light in a gravitational field may offer our best chance to see surface features like continents and seasonal change on a world around another star. The question to be resolved: Just how does General Relativity make this possible?

by Wes Kelly

Conclusion of Part I

At this point, having one’s hands on an all-around deflection angle for light at the edges of a “spherical lens” of about 700,000 kilometers radius (or b equal to the radius of the sun rS), if it were an objective lens of a corresponding telescope, what would be the value of the focal length for this telescopic component expressed in astronomical units?

The angle subtended by the 700,000 km solar radius observed from 1 AU gives an arcsine of 0.26809 degrees. This is consistent with the rule of thumb solar diameter estimate of ~0.5 degrees.

Expressed in still another way, the solar radius from this arcsine measure is 965 arc seconds. The relativistic deflection of light grazing the solar limb is about 1.75 arc seconds, so the focus of this objective lens lies where the solar disc itself is observed to be about 1.75 arc seconds in radius.

If we take the ratio of 965 to 1.75, we obtain a value of 551.5. In other words, a focal point for the relativistic effect lies 551.5 AU out. Thus, the General Relativity effect implies that light bent by the sun’s gravity near its surface radius is focused about 550 AU out from the sun. And like the protagonist of Molière’s 17th century comedy play, as I run off to tell everyone I know, I discover a feeling akin to, “For more than forty years I have been speaking prose while knowing nothing of it.”
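The same result falls out of the standard General Relativity deflection formula, a satisfying cross-check on the arc-second bookkeeping above:

```python
import math

# Light grazing the solar limb is deflected by alpha = 4GM/(c^2 b); rays at
# impact parameter b = R_sun cross the axis at roughly F = b / alpha.
G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
C = 2.998e8          # m/s
R_SUN = 6.96e8       # m
AU = 1.496e11        # m

alpha = 4.0 * G * M_SUN / (C ** 2 * R_SUN)         # radians
print(f"Deflection at the limb : {math.degrees(alpha) * 3600:.2f} arc seconds")
print(f"Minimum focal distance : {R_SUN / alpha / AU:.0f} AU")
```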

This could be a primary lens for a very unwieldy telescope. True, but not unwieldy in all manners. When we consider the magnification power of a telescope system, we speak of the focal length of the objective lens over that of an eyepiece or sensor lens. And habitually one might assume the whole assembly is enclosed in a canister, as most telescopes sold over the counter at hobby stores are. But that is not always necessary or even an advantage. Consider the largest ground-based optical reflectors, or the JWST and radio telescopes. Their objective focal lengths extend through the open air or space. The JWST focal length is 131.4 meters, longer than its Ariane 5 launch vehicle is tall. Its collected light reaches sensors through a succession of ricochets in its instrumentation package, but not through a cylindrical conduit extending any significant distance out in front of the reflector. [Note: The Jupiter deflection case mentioned above would make the focal length 100x longer.]

Continued Discussion

(Tables, Figures and References for Parts I and II are sequential).

In contrast with a 130-meter objective focal length, an objective focal length of 550 AU means that any conventionally manufactured “eyepiece” lens or optical system at the instrument end would have enormous magnification or light gathering potential. Were it a lens of 1, 10 or 100 meter focal length at the instrument end of the telescope, paired with the “Oort Cloud radius sized” objective focal length (550 × 1.5 × 10^11 meters ≈ 8.2 × 10^13 meters), it would not matter much so far as interstellar mapping is concerned. We should add as well that the magnification is in terms of area rather than diameter or radius. In effect, magnification is a multiplication of projected surface area, or surface light.

Given the above, several issues remain to be addressed relating to the field of view:

1. The spherical lens (the sun) is a light source itself, which needs to be blocked out with a coronagraph on board the SGL spacecraft.

2. The signal obtained from the star (but especially the planet!) is “convoluted” by passage around the perimeter of the solar lens. This must be undone by a deconvolution process.

3. In application for examining an exoplanet in orbit around another star, the fix on the star must be either adjusted to center on the related planetary target or else the planet’s data must be extracted from an enormous extraneous data package.

On issue 1, there are many coronagraph techniques already applied in telescopes for blocking solar or stellar light sources. The coronagraph aboard the Nancy Grace Roman Space Telescope, when launched, will be the state of the art and is likely to influence SGL coronagraph design. For issue 2, it would be interesting to see a simple illustrative example (e.g., a sphere with a simple pattern such as broad colored latitudinal and longitudinal bands alternating in some pattern… a yellow or green smiley face?), transformed and then converted back. On issue 3, however, I believe that the discussion below will provide more immediate insights.

Figure-6 As noted in [7], a meter class telescope with a coronagraph to block solar light is placed in the strong interference region of the solar gravitational lens (SGL) and is capable of imaging an exoplanet at a distance of up to 30 parsecs with a few 10-km scale resolution on its surface. The picture shows results of a simulation of the effects of the SGL on an Earth-like exoplanet image.

Left: Original RGB (red, green, blue) image with a 1024 x 1024 pixel array.

Center: Image blurred by the SGL, sampled at an SNR (signal to noise ratio) of 10^3 per color channel or overall SNR of 3 x 130.

Right: Result of image deconvolution.

In Reference 7 by Turyshev et al., with Figure-7, potential benefits of an SGL telescope are illustrated with a targeted planet similar to the Earth within a range of 100 light years. What follows is a reference point which we would like to examine as well; in this case, with a specific range (10 parsecs) to illustrate engineering and operational questions, concerns or trades. In archives, see also [ref. A12].

Figure-7 Contrast of benefits illustration with planet observed with an orbital plane in the line of sight of the GLT.

Figure-8 Observation of a target planet with an orbital plane inclined to the line of GLT line of sight.

Left side, with the perpendicular to the orbital plane tipped forward, we can observe crescent phases similar to the planets orbiting the sun interior to the Earth, but at low angles the face turned toward us is largely unilluminated. On the aft side of the sun it is in full phase, but perhaps experiencing significant glare. On the right side, with higher inclination, the exoplanet appears as a cat’s eye above the center point; below, as a crescent rotated at a right angle to its path.

What to Do about Slew?

As for deploying a telescope out into the Oort Cloud to ~550 AU: this seems explicable and feasible with a combination of conventional propulsion and orbital mechanics taken to a higher state of the art (nuclear thermal, nuclear fusion electric or thermal), sized against constraints such as mass, mission duration, infrastructure and finance. It is assumed here, by this aerospace engineer, that the trajectory, propulsion, navigation and guidance issues of deployment can be solved with resources not yet available, but which will become available as larger spacecraft are assembled and tested in the future. However, I would still like to explore the operational issues of this baseline or reference mission. In pursuit of this, we will add a reference target (perhaps the first of an enlarging set): an exoplanet similar to the Earth in a solar system similar to ours, at the viewing distance used to set stellar absolute magnitudes, ten parsecs.

Now if a stellar system were ten parsecs away, or 32.6 light years off, the maximum radial offset of an Earth-like planet from a Sol-like star (1 AU) would be 0.1 arc seconds. Hence, the Earth analog would be in the “nominal” field of view (FOV), but the FOV would encompass a radius of 17.5 AU, if the center of the nominal FOV can be considered the center of the target star. The stellar absolute magnitude measure distance (10 parsecs) is a middle distance for this exercise, and a parsec (3.26 light years), also basic to astronomy, could be considered a minimum, just below the Alpha Centauri distance (4.3 light years).

However, the FOV behind the sun, as used here, might be misleading or unclear in these circumstances, because it is not clear to me how much of the blocked celestial sphere is transferred back via the gravity lens phenomenon. In this analysis, without full understanding of how the coronagraph or convolutions will work, I am unsure whether there is any control over what the steradian field behind the sun will be; whether it can be entirely controlled. Focusing on the star could provide all of the 17.5 AU radius in the field of view, or some fraction thereof. But if centering on a planetary target can limit the wasted scan area, I highly recommend such.

For argument’s sake, this celestial “blockage” region could range from the infinitesimal to the whole. The image obtained might be treated akin to a point source from which we might extract image data, somewhat akin to extracting the spectrum of a similar un-dimensioned source. Or there might be several different deconvolution methods which provide options. But the aspect that concerns me here is how one searches for a point source in this so-called FOV, more characterized by the blockage of the sun’s angular width. The FOV might be described as an area within an FOB, a field of blockage. Whether discerned directly without need of a deconvolution or not, at ten parsec distance the field of “blockage” (FOB?) would include a radius of 17.5 AU within the roughly 1.75 arc second (radius) maximum field of view.

The diameter of a G2V star like our Sun is about 0.01 AU, and a terrestrial planet like our own is about 0.01 of that. And then, what kind of transformation or deconvolution would be required to take the information from the other side and convert it back into an image, one we would recognize as a planet with continents, oceans and clouds? Not knowing for sure, I suspected that if the position of the target planet were known, it would make more sense to focus the telescope on it rather than on the star itself. On the other hand, if obtaining a coronagraphic blocking of the star required centering on the star, and capturing the planet required processing the thick ring around the star, then the total amount of data processing could become enormous, as the following table shows.

In terms of the terrestrial planet’s viewed area versus that of the 1 AU radius region and the 17.5 AU radius encompassing the entire celestial patch blocked by the Sun, the ratios are about 1 to 500 million and 1 to 168 billion respectively. Depending on the resolution sought for the planetary analysis (e.g., 10 kilometer features distinguishable), data bits characterizing individual “squares” of smaller dimensions must be processed. For present purposes, we can select ten kilometers for illustration.
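
The area ratios quoted above can be reproduced in a few lines; the Earth radius used here is the familiar 6,371 km, and the 17.5 AU figure is the blocked-region radius derived earlier, so treat the output as rough rather than exact.

```python
AU_KM = 1.496e8
r_planet_au = 6371.0 / AU_KM            # Earth radius in AU (~4.3e-5)

ratio_1au = (1.0 / r_planet_au) ** 2    # planet disc vs 1 AU radius region
ratio_fov = (17.5 / r_planet_au) ** 2   # planet disc vs 17.5 AU blocked region

print(f"1 AU region  : 1 to {ratio_1au:.2e}")   # ~5e8, i.e. ~500 million
print(f"17.5 AU field: 1 to {ratio_fov:.2e}")   # ~1.7e11, i.e. ~170 billion
```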

Table-3 Scanning the entire FOV for a target at 10 parsecs, an exoplanet similar to the Earth orbiting a G2V star. At the distance used for calibrating stellar absolute magnitudes (about 33 light years), and with a GLT placed 550 AU from our Sun, the solar disc subtends an angular radius of about 1.75 arc seconds, blocking a region 17.5 AU in radius at 10 parsecs (1.75 AU at 1 parsec). The Sun-like star’s diameter is ~0.01 AU and an exoplanet Earth about 1/100th of that, or 0.0001 AU wide.

As the NIAC Phase II Report and the AIAA journal article [7, 8] indicate, targeted resolution objectives are on the order of 10 kilometers, which implies sampling cells of smaller dimensions. We select a one-kilometer-wide sample cell for the sake of argument. For each observed cell, the GLT instrument suite will include 3-5 color band sweeps (e.g., ultraviolet, blue, yellow, red, infrared), each with its intensity level. A spectrometer could also seek evidence of discrete spectral lines or molecular bands. So for each square kilometer scanned, there could be considerable binary coded data for the telemetry link, certainly more than one data bit per polygon of space scanned by the SGL telescope. If each polygon has a location defined in a two-dimensional grid, then that point likely has two 32- or 64-bit position assignments; each color filter then has an intensity. In addition, if spectral lines are tracked, another data code will be assigned to that point as well.
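
To get a feel for the resulting data volume, here is an illustrative tally assuming one-kilometer cells over the visible hemisphere, two 64-bit coordinates per cell, five 16-bit color intensities, and one 16-bit spectral code. These encodings are my own assumptions for the sketch, not values from the references.

```python
import math

R_EARTH_KM = 6371.0
hemisphere_km2 = 2.0 * math.pi * R_EARTH_KM ** 2   # visible hemisphere, ~2.6e8 km^2
cells = hemisphere_km2                              # one 1 km x 1 km cell per km^2

# Assumed encoding per cell: two 64-bit grid coordinates, five 16-bit color
# intensities, one 16-bit spectral-feature code.
bits_per_cell = 2 * 64 + 5 * 16 + 16                # 224 bits
total_gigabytes = cells * bits_per_cell / 8 / 1e9

print(f"cells per hemisphere map: {cells:.2e}")
print(f"data per map            : {total_gigabytes:.1f} GB (uncompressed)")
```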

Processing the FOV indiscriminately with focus on the star is like searching for a needle (or data) in a haystack. Tracking the planet itself could eliminate orders of magnitude of excess data processing. On the other hand, slewing while in a 550 AU circular orbit entails oscillations on the order of 40,000 km over a year to follow the target, a distance equivalent to about a tenth of the Earth-Moon separation, and an expenditure of propulsive resources. Consequently, this becomes at least one resource trade between data handling and maneuverability. One possible solution would be multiple telescopes formation flying over “seasonal” tracking points a quarter of an orbital revolution apart along the projected orbital track.
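
The 40,000 km figure follows from simple proportion: the telescope’s lateral displacement is the planet’s 1 AU offset scaled by the ratio of the 550 AU lens distance to the ten parsec target distance. A short sketch, under that small-angle assumption:

```python
AU_KM = 1.496e8
PC_AU = 206265.0                        # astronomical units per parsec

def lateral_amplitude_km(planet_offset_au, target_dist_pc, lens_dist_au=550.0):
    """Telescope displacement needed to keep the planet-star-telescope line
    aligned, for a planet displaced planet_offset_au from its star."""
    angle_rad = planet_offset_au / (target_dist_pc * PC_AU)   # small-angle offset
    return angle_rad * lens_dist_au * AU_KM

print(f"{lateral_amplitude_km(1.0, 10.0):,.0f} km")   # ~40,000 km at 10 pc
print(f"{lateral_amplitude_km(1.0, 1.0):,.0f} km")    # ~400,000 km at 1 pc
```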

The scenario for deploying the telescope assumes considerable outbound velocity accumulated in the form of continuous low thrust acceleration; consequently, a very large radial velocity will remain on station. Remarkably, at 550 AU, circular orbit velocities are still over a kilometer per second (e.g., Earth’s orbital speed of about 29.8 km/sec divided by the square root of 550, about 1.27 km/sec). With the Earth-analog example at 10 parsecs and the requirement to cover 40,000 km back and forth within about six months, the corresponding constant velocity to hold the alignment would be about 0.0025 km/sec. This type of slewing would involve smaller excursions for a more rapidly orbiting exoplanet in the HZ of a red dwarf, but the M star case would require more frequent reversals of direction. Significantly, were we to do this exercise for a target at one parsec, such as the Alpha Centauri stars, the oscillations would be ten times larger (400,000 km), about the distance to the Moon.
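
The two velocities quoted can be checked the same way: the circular orbit speed scales as Earth’s 29.8 km/sec divided by the square root of the orbital radius in AU, and the tracking speed is simply 40,000 km spread over half a year.

```python
import math

V_EARTH_KM_S = 29.8                              # Earth's circular orbital speed
v_circ_550 = V_EARTH_KM_S / math.sqrt(550.0)     # ~1.27 km/s

half_year_s = 0.5 * 365.25 * 86400.0
v_track = 4.0e4 / half_year_s                    # 40,000 km in ~6 months

print(f"circular orbit speed at 550 AU: {v_circ_550:.2f} km/s")
print(f"mean tracking speed           : {v_track:.4f} km/s")
```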

Additionally, the planet’s rotation could be synchronous with its star or, as with the Earth or Mars, much faster than its orbital revolution. There could be moons in its near vicinity. All these are natural considerations for a habitable zone exoplanet survey, and reasons that features on the exoplanet surface could become blurred. Other cases would generate different requirements, no doubt. And all of this will affect how long it takes to process square kilometer data sets into each of their relevant maps.

Besides stellar glare, the galactic background needs to be considered too. A dark field behind the target star would be preferable, yielding a higher signal to noise ratio. It would be a shame if threshold levels for observing a planet against magnified stellar backgrounds could not be assessed prior to flight. The potential difficulty of making out the planet against the background makes a planetary ephemeris important, along with linkage to home-base guide telescopes directing the GLT pointing, since in a sense the GLT will be blind. We have discussed just an Earth analog so far, but HZ targets at cooler K and M stars, as well as hotter F main sequence stars, could possess eye-opening properties too.

Several decades ago, during an undergraduate satellite design project, I participated as the communications engineer, and space navigation assignments later called on putting on that hat again. It was an interesting experience each time, and I found some overall equations that formulated relations among distance, signal to noise thresholds, signal rates and the power required to stay in touch at both ends, spacecraft and Deep Space Network. Unfortunately, I lost our first team’s final report in a flood, not of information like that discussed here, but of tropical storm water. But it is not necessary to reconstruct the methods found then. By now there is an established literature base for communications with spacecraft in deep space, thanks to publications of the Jet Propulsion Laboratory, with illustrative examples such as Voyager and other Jupiter-bound spacecraft, and even earlier spacecraft examined as if they were beaming from that distance and received with the network capabilities of a given epoch (see Figure 9).

Figure-9 A diagram from Ref. 5 pegs down one end of the trade issues: chronological increases in data rates obtained from spacecraft in the Jupiter vicinity. Reception is associated with a 5.2 AU distance from the Sun (varying with the Earth’s position) vs. the 550 AU or more anticipated for the GLT. Acquisition data rates are shown on one axis. For each spacecraft that set out on these Jovian missions (some, of course, did not), a liftoff limit on power or data rate can be assumed for the spacecraft or observatory. Once launched, most of the growth was likely in the Earth-based part of the communication link.

In comparison with the attenuation of signals from the Jovian system at 5.2 AU for the various systems shown in the Figure-9 JPL diagram, signals from roughly 100x further out will be decreased in strength to ~1/10,000th or less beyond 550 AU. Consequently, the data rates shown in the diagram for the various extant technologies will drop by a factor of about 1/10,000 (1.0e-04) as well.
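
The factor of roughly 1/10,000 is simply the inverse-square law applied to the ratio of distances, as this short check confirms:

```python
jupiter_au, sgl_au = 5.2, 550.0
attenuation = (sgl_au / jupiter_au) ** 2          # inverse-square distance penalty

print(f"signal weaker by a factor of ~{attenuation:,.0f}")        # ~11,000
print(f"data rate scale factor        ~{1.0 / attenuation:.1e}")  # ~9e-5
```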

Depending on when such an SGL space observatory will be launched, some technologies will improve data transmission rates or storage capacities with respect to mass density or power required. Other technologies likely will not experience similar trends. For example, it is unclear what new Deep Space Network type tracking facilities will be employed in support of the SGL mission. However, if the data load is driven by a full scan of the equivalent of the solar angular area or FOV, the spacecraft system requirements for data storage and transmission are increased enormously.

On the other hand, as shown, slewing from the stellar focal point to a planetary position will require propellant and attitude control resources beyond those needed for a stellar fix. Even at 550 AU there is a characteristic circular orbit velocity of about 1.27 km/sec, and depending on the time of flight to the outpost station, the coasting spacecraft can be considered to be on an extremely hyperbolic heliocentric path. Consequently, even without planet tracking, low thrust would be required simply to hold the stellar focal line.

My own quick assessment is that narrow field of view scanning in the planetary vicinity, tracking the planet around the star in its orbital plane, is the better procedure. The orbital plane’s normal could be inclined by some angle to the line of sight (see Figure 8). Hence a circular path would be perceived as an elliptical projection, more complex still if the orbit is significantly eccentric. But with a mean-likelihood 45-degree inclination and a circular orbit, half phases would appear at greatest stellar elongation. Near the line of sight, a cat’s-eye would appear behind the star and a crescent in front, at lowest elongation and greatest glare. With zero inclination of the planet’s orbit, we are bound to learn much about its northern hemisphere and much less about its southern, depending on its rotational axis alignment.


In this situation, there would have to be some foreknowledge of where the target planet should be. A tracker observatory, probably closer to Terra, would be needed, and there must still be a means to locate a body orbiting an object about a hundredth of an AU in diameter, the planet itself being about 1/10,000th of an AU wide. To relay information from a stellar observatory not experiencing this occultation by the Sun out to 550 AU, the lag would be about 3.17 days at the speed of light.
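
The one-way light time is straightforward to verify:

```python
AU_KM = 1.496e8
C_KM_S = 299792.458

lag_days = 550.0 * AU_KM / C_KM_S / 86400.0
print(f"one-way light time to 550 AU: {lag_days:.2f} days")   # ~3.17 days
```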

And then, presumably, the observatory would need to slew toward this planetary target from the reference point of the stellar primary – or perhaps even the center of mass in a binary system. Alpha Centauri could be such an example.

A Mission for One Star System and Exoplanet or More?

Additional trade issues are related to completing the observation and characterization of one planetary system. Perhaps there is more than one planet (or a moon) in a target system to study. But there is also the issue of observing more than one planetary system. The minimal angular separation of two “good” candidate systems on the celestial sphere, say within one degree of arc of each other, would have to be weighed against the “excellence” of an isolated stellar system with no potential for a phase II mission elsewhere. Faced with such a dilemma, I would hope that observing the isolated system over years until system deactivation would prove well worthwhile.

At this writing we are aware of about 5000 exoplanets with attributable features, providing a range of reasons for continued or closer observation. As with the other design issues described above, eventually there will be the dilemma of which exoplanet or planets to select.

In terms of steradians, the whole celestial sphere has an area of 4π units. With some experimentation I find that it is possible to determine the equidistant positions of any number of stars, which can illustrate the dilemma of deciding how to deploy the SGL telescope. The celestial arc A between equally spaced stars of a given number n can be computed in radians and converted to degrees. Once n equals or exceeds 3, the equidistant points can be viewed as vertices of equilateral and equiangular spherical triangles of given arc segments, the latter being the significant parameter. The total of 5000 known exoplanets is not distributed with equal spacing, but there is some likelihood of candidate pairs separated by only a fraction of a degree. And, of course, a smaller selection of choice exoplanet systems will have wider individual separations overall, though perhaps a few will still be less than a degree apart. For the planetary system at ten parsecs, we noted that a traverse covering the 17.5 AU blocked region corresponds to about 700,000 kilometers of lateral motion at 550 AU. A one-degree traverse is roughly 2,000 times as large, but it does not have tracking-determined maneuver velocity requirements.
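
As a rough guide to how crowded the sky gets, one can estimate the characteristic arc between n stars spread evenly over the celestial sphere by allotting each star 4π/n steradians and treating that cell as roughly square. This is a crude estimate of my own for illustration, not a statement about the actual exoplanet catalog, which is far from uniformly distributed.

```python
import math

def mean_separation_deg(n):
    """Arc between n points spread evenly over the sphere: each point gets
    4*pi/n steradians; treat that cell as roughly square, side sqrt(4*pi/n)."""
    return math.degrees(math.sqrt(4.0 * math.pi / n))

for n in (3, 100, 5000):
    print(f"n = {n:5d}: ~{mean_separation_deg(n):.1f} degrees between neighbors")
```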

It is likely that, by some set of selected parameters, several exoplanets can be chosen for further scrutiny. However, if several parameters are involved and a couple of candidates or alternates can be identified in proximity, it is possible that two close star systems could outscore a focus on one, even one generally acknowledged the best, if the latter is located on the wrong side of the sky for total mission benefit.

Consequently, the mission analysis could become more complicated as time passes and a larger and larger selection of nearby systems with one or more planets is identified.

What parameters would warrant such a trade? Even with no evidence of life, an exoplanet of exceptional nature could transcend habitable zone criteria or signs of life. And for examination of signs of life, our knowledge will have to go beyond such identifications as diameter, albedo and placement in a habitable zone: atmospheric composition, nature of the hydrosphere, traces of processes similar to terrestrial ones… Cost-benefit issues of the propulsion and maneuvering needed to survey two planets would also need an identifiable threshold against the additional spacecraft weight budget for propulsion. If the two candidate systems are far apart, then the choice might in a way be easier, since it would require launching two distinct missions.

Whether going after two exoplanets separated by a degree or more is worthwhile is difficult to ascertain at this early stage. The determination will depend on establishing criteria for a trade. To first order it will depend on how outstanding the signs of life might be within a future database of exoplanets; and, if that is not clear, on which parameters of an exoplanet or a maneuverable spacecraft should be considered and with what weight. Reflecting on an earlier orbital application proposal, Arthur C. Clarke suggested geosynchronous orbit for a single communications relay station, elaborated as a call center with humans at switchboards. Instead, we have numerous geosats with no one aboard. It could be that SGL spacecraft will proliferate similarly, and for several purposes. At the very least, we can be thankful to be able to consider such possibilities, coming as we do from a time decades back when exoplanets were considered simple fantasy, like Spock’s planet or, more locally, Lescarbault’s and Le Verrier’s Vulcan.

References for Part I and Part II

1.) Pais, Abraham, Subtle is the Lord … The Science and Life of Albert Einstein, Oxford University Press, 1982.


3.) Vallado, David A., Fundamentals of Astrodynamics and Applications, 2nd edition, Appendix D4, Space Technology Library, 2001.

4.) Moulton, Forest Ray, An Introduction to Celestial Mechanics, 2nd Edition, Dover (reprint of the 1914 text).

5.) Taylor, Jim et al. Deep Space Communications, online at

6.) Wali, Kameshwar C., Chandra – A Biography of S. Chandrasekhar, U. of Chicago Press, 1984.

7.) Turyshev et al., ”Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravity Lens Mission,” Final Report, NASA Innovative Advanced Concepts (NIAC) Phase II.

8.) Helvajian, H. et al., “Mission Architecture to Reach and Operate at the Focal Region of the Solar Gravitational Lens,” Journal of Spacecraft and Rockets, American Institute of Aeronautics and Astronautics (AIAA), February 2023, online preprint.

9.) Xu, Ya et al., ”Solar oblateness and Mercury’s perihelion precession”, MNRAS, 415, 3335-3343, 2011.

A1.) Archives: In the Days before Centauri Dreams… An Essay by WDK (

A2.) Archives: A Mission Architecture for the Solar Gravity Lens (

Here in Houston, the University of Houston-Clear Lake Physics and Astronomy Club had a recent meeting on a night when the sky was obscured by clouds, and the president had asked in advance, just in case of such circumstances, whether I would have a presentation I could give. There were some other topics that had grown all out of control, so I decided to start on a fresh one. This article grew out of that evening presentation, and consequently it is dedicated to the club and its members.

13 April 23