
Musings on Art, Brown Dwarfs & Galactic Disks

I was getting ready to start writing a story with implications for brown dwarfs and the galaxy’s ‘thick disk’ (as opposed to its ‘thin disk,’ about which more in a moment) when I ran across the artwork below. This is the work of French artist and astronomer Étienne Léopold Trouvelot (1827-1895), whose careful astronomical observations were rendered into illustrations and pastel drawings in the era before astrophotography. I learned from Maria Popova’s The Marginalian that Trouvelot produced 50 scientific papers, but almost 7000 works of art based on what he saw. Thus the study of part of the Milky Way below, evidently created somewhere between 1874 and 1876.

Trouvelot’s work caught the attention of the director of the Harvard Observatory, who invited him to join its staff in 1872. The concept of his art was to get across to those without the privilege of seeing these objects through a telescope just how they looked to a trained scientist. He prized the value of human rendering over instrumentation, as in this passage from the introduction to The Trouvelot Astronomical Drawings Manual, published in 1882:

Although photography renders valuable assistance to the astronomer in the case of the Sun and Moon … for other subjects, its products are in general so blurred and indistinct that no details of any great value can be secured. A well-trained eye alone is capable of seizing the delicate details of structure and of configuration of the heavenly bodies, which are liable to be affected, and even rendered invisible, by the slightest changes in our atmosphere.

Thus the view from the 1870s. These days we explore the deep sky without the flourishes of a human pen through exquisite imagery from our Earth-based telescopes and space instruments like Hubble and JWST. But I always like to learn more about the development of our views of the cosmos, and wanted to introduce Centauri Dreams readers to a figure I only learned about over the weekend. I notice that Maria Popova is producing high-quality prints of some of Trouvelot’s work, with the proceeds benefiting an attempt to build a public observatory in New York City. A worthy project; I’m sure Trouvelot would have approved.

Brown Dwarfs and the Thick Disk

JWST’s early imaging has already proven stunning, and the discoveries mount. Today we look at a small object called GLASS-JWST-BD1, a member of that subclass of brown dwarfs known as T dwarfs. These are difficult objects to detect, with temperatures between 500 and 1500 K, and thus useful for exploring the boundary between star and planet when we can find them. What is exciting here is the demonstration of what JWST can do as we push outward in our observations of stars that cannot ignite hydrogen fusion, looking now into more distant parts of the galactic disk.

Objects like this emit primarily at infrared wavelengths. Their inherent faintness has meant that surveys of brown dwarfs work largely with objects within about 100 parsecs (326 light years) of the Sun. That has made it difficult to find them in the galaxy’s ‘thick disk,’ which consists largely of metal-poor stars rising well above the galactic plane. Surveys using the Hubble telescope’s WFC3 instrument have pushed detection further out, but JWST’s results change the game.
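
The reach of a magnitude-limited survey follows directly from the distance modulus, m − M = 5 log10(d / 10 pc). Here is a back-of-the-envelope sketch; the absolute magnitude and the two survey limits are illustrative ballpark figures of my own, not values taken from the surveys mentioned above:

```python
def detection_distance_pc(m_limit, abs_mag):
    """Distance (parsecs) at which an object of the given absolute
    magnitude reaches a survey's limiting apparent magnitude,
    from the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10.0 * 10.0 ** ((m_limit - abs_mag) / 5.0)

M_T_DWARF = 15.5  # illustrative near-infrared absolute magnitude for a T dwarf

d_hst = detection_distance_pc(24.0, M_T_DWARF)   # deep HST-like field (assumed limit)
d_jwst = detection_distance_pc(29.0, M_T_DWARF)  # deep JWST-like field (assumed limit)

print(f"HST-like limit:  ~{d_hst:.0f} pc")
print(f"JWST-like limit: ~{d_jwst:.0f} pc")
```

Each five magnitudes of extra depth multiplies the accessible distance tenfold, which is why even a few magnitudes of added JWST sensitivity opens the thick disk to T dwarf searches.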

Image: Edge on diagram of the Milky Way with several structures indicated (not to scale). The thick disk is shown in light yellow. Credit: Wikimedia Commons (CC BY-SA 3.0).

Indeed, the discovery paper puts the matter this way with reference to Hubble:

…these surveys are restricted to wavelengths < 2 μm, limiting their sensitivity to the reddest and coldest brown dwarfs. The James Webb Space Telescope (JWST) represents a major step forward in the detection of cool and distant brown dwarfs, with imaging and spectroscopy extending to ∼ 5μm and providing orders of magnitude greater sensitivity than Spitzer.

Image: Using the James Webb Space Telescope (JWST), an international team of astronomers have detected a new faint, distant, and cold brown dwarf. The newly found object, designated GLASS-JWST-BD1, turns out to be about 31 times more massive than Jupiter. Credit: Nonino et al., 2022.

The work was led by Mario Nonino of the Astronomical Observatory of Trieste in Italy. GLASS-JWST-BD1 is between 1850 and 2350 light years from the Sun in a direction perpendicular to the galactic plane. Its mass is calculated at 31.43 Jupiter masses, with an effective temperature of 600 K. Its age is estimated at 5 billion years.

The discovery was made with JWST’s Near-Infrared Spectrograph (NIRSpec) and Near Infrared Imager and Slitless Spectrograph (NIRISS). Only about 400 T dwarfs are known to date, but it’s clear that JWST will expand the catalog substantially. What we’re seeing here is that JWST can probe out into the galactic thick disk to find objects that are small and faint, meaning that the study of brown dwarfs, their metallicity and the evolution of their atmospheres, will be considerably enhanced.

Backyard Brown Dwarfs

From an entirely different dataset comes brown dwarf news from the citizen science project Backyard Worlds: Planet 9, which has just announced the discovery of 34 binary star systems where a brown dwarf is a companion object to a white dwarf. Citizen scientist Frank Kiwy is in fact listed as lead author of the paper on these discoveries that appears in The Astronomical Journal. Kiwy’s work involved data mining within a database of 4 billion objects in the NOIRLab Source Catalog DR2.

We have a long way to go in learning how common brown dwarf companions to stars are, but these are objects that merit attention for what they can tell us about atmospheres both stellar and planetary. Brown dwarf atmospheres contain interesting molecules and offer hints to the development of planetary atmospheres in gas giants.

Moreover, have you looked at the numbers on some of these citizen science projects lately? Backyard Worlds: Planet 9 has a network encompassing over 100,000 volunteers who scan telescope images to search for features that machine learning algorithms may miss. The binary systems Kiwy found were among a far larger group of 2500 potential ultracool brown dwarfs that appear in the NOIRLab data. So while JWST pushes the limits from L2, data gathered by more than 40 Earth-based instruments and collected in the NOIRLab holdings is being combed by citizen volunteers.

Aaron Meisner is an astronomer at NOIRLab (National Optical-Infrared Astronomy Research Laboratory) and the co-founder of Backyard Worlds:

“These discoveries were made by an amateur astronomer who conquered astronomical big data. Modern astronomy archives contain an immense treasure trove of data and often harbor major discoveries just waiting to be noticed.”

Something tells me that Étienne Trouvelot would have liked the idea of an amateur making contributions worthy of professional publication. In any case, wouldn’t he have loved to have gotten hold of some of the views we’re seeing from JWST? It’s hard, though, to see how he could have crafted works of art more lovely than the Webb instrument’s recent deep field images. “[No] human skill,” he once admitted, “can reproduce upon paper the majestic beauty and radiance of the celestial objects.”

Nonetheless, his work is deeply attractive. Let me close with another, this one of the Orion Nebula.

The paper is Nonino et al, “Early results from GLASS-JWST. XIII. A faint, distant, and cold brown dwarf,” submitted to Astrophysical Journal Letters (preprint). The Kiwy paper is “Discovery of 34 Low-mass Comoving Systems Using NOIRLab Source Catalog DR2,” Astronomical Journal Vol. 164, No. 1 (6 June 2022). Abstract.


CNEOS 2014-01-08: Sampling the Interstellar Meteor

How unusual that the study of an interstellar object should receive a boost from the United States Space Command, which is responsible for US military operations off-planet. But that’s part of the story of CNEOS 2014-01-08, which is described in its discovery paper as “a meteor of interstellar origin.” The 2019 finding came from Harvard’s Avi Loeb, working with then undergraduate student Amir Siraj. Loeb had been examining a catalog containing data on meteors over the last three decades in terms of the strength of their fireball, prompted by a 2018 fireball off the Kamchatka peninsula.

The Kamchatka meteor produced a blast with ten times the energy of the Hiroshima bomb, leading Loeb to put Siraj to work on calculating the past trajectories of the fastest meteors in the CNEOS catalog – CNEOS is NASA’s Center for Near Earth Object Studies. In an email yesterday morning, Loeb explained that numerous factors went into the study. Siraj was able to work with the position and velocity of the meteors at impact while factoring in the Earth’s gravity as well as that of the Sun and planets.

You might expect the fastest such objects to be the ones with interstellar implications, but the fastest meteor in the catalog turned out not to be on a hyperbolic orbit; it had simply struck the Earth head-on. CNEOS 2014-01-08, which impacted the ocean off the coast of Papua New Guinea in 2014, was another matter. The 2019 discovery paper (citation below) outlined the case for this object as interstellar in origin, unbound to the Sun.

A new paper is now available, submitted to the Journal of Astronomical Instrumentation. Says Loeb:

In our 2019 discovery paper, Amir and I inferred CNEOS-2014–01–08 to be moving at nearly sixty kilometers per second outside the Solar system, twice faster than the characteristic speed of stars in the so-called “Local Standard of Rest” of the Milky Way. In our new paper we took account of the meteor slowdown in the atmosphere and found that its speed was initially larger than the value measured from the fireball deep in the atmosphere by twenty kilometers per second. If the meteor was natural in origin, then this high initial speed suggests gravitational ejection from a deep potential well, such as found in the interior of a planetary system, within the orbit of a Mercury-like planet around a Sun-like star. Alternatively, the meteor could have been a technological object propelled by artificial means.

Image: This is Figure 1 from the paper. Caption: Trajectory of the January 8, 2014 meteor (red), shown intersecting with that of Earth (blue) at the time of impact, ti = 2014-01-08 17:05:34. Credit: Siraj & Loeb.

We’re able to draw some conclusions about this interstellar meteor even from the relatively sparse data available. But first, a word about the data collection process. You can imagine how wide-ranging the network of sensors that tracks objects entering the Earth’s atmosphere for reasons of national security must be. I learned from Loeb’s email that Space Command and NASA had made an agreement in 2020 that would boost NASA’s asteroid tracking capabilities through the use of Pentagon resources. Thus NASA is able to take advantage of light curve data generated by this source.

For more on these interactions, see Amir Siraj’s Spy Satellites Confirmed Our Discovery of the First Meteor from Beyond the Solar System. Because confirming the nature of CNEOS-2014–01–08 required referencing classified datasets, a letter to NASA from US Space Command came into play, issued on April 6, 2022 and making note of the 2019 paper by Loeb and Siraj. The letter confirms the interstellar nature of this object.

Loeb points out that as the meteor detection occurred in January of 2014, it predates the discovery of ‘Oumuamua by almost four years. Thus CNEOS-2014–01–08 “should be recognized as the first massive interstellar object ever discovered.”

We can already make some statements, as the authors do in the new paper, about the composition of this object, because the US Department of Defense released, along with its confirmation letter, the light curve for CNEOS 2014-01-08, showing three flashes separated from each other by roughly a tenth of a second. The authors note that it is possible to use the measured direction of motion for the object to calculate the altitude of these flashes as well as the density of the air at the level they occurred.
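
The paper’s trajectory solution is what actually fixes the flash altitudes, but the underlying kinematics are easy to sketch. In the snippet below, the speed and entry angle are illustrative stand-ins: the ~40 km/s comes from the ~60 km/s initial speed minus the ~20 km/s of atmospheric slowdown quoted earlier, and the 45-degree angle is simply assumed:

```python
import math

def altitude_drop_m(speed_ms, dt_s, entry_angle_deg):
    """Vertical distance covered between two fireball flashes, assuming
    a straight-line trajectory at the given angle below horizontal."""
    return speed_ms * dt_s * math.sin(math.radians(entry_angle_deg))

# ~40 km/s deep-atmosphere speed, ~0.1 s between flashes, assumed 45-degree entry
drop = altitude_drop_m(40_000.0, 0.1, 45.0)
print(f"altitude drop between flashes: ~{drop / 1000:.1f} km")
```

With flashes a tenth of a second apart, each one samples the atmosphere a few kilometers lower than the last, which is what lets the ambient air density at each flash be estimated.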

The calculations are complex and I send you to the paper for the details. But here is a taste of the logic behind them as stated within:

When a supersonic meteor moves through air, it is subject to a friction force on its frontal surface area. The force per unit area equals the ambient mass-density of air times the square of the object’s speed. This ram pressure reflects the flux of momentum per unit area per unit time delivered to the object in slowing down its motion. The meteor disintegrates if the ram pressure exceeds the yield strength of the material it is made of, representing the maximum stress that can be applied to it before it begins to deform. The heat released by the friction with air melts the fragments and generates the flashes of light in the fireball.

Loeb and Siraj calculated the ram pressure exerted on CNEOS 2014-01-08 at the time the three flashes in the light curve occurred. Here I’ll again draw from Loeb’s email:

We translated the meteor light curve to a plot of the power released as a function of the ambient ram pressure. To our surprise, the disintegration of CNEOS-2014–01–08 occurred when the external ram pressure reached a value of 113 megapascals (MPa). This value is twenty times larger than the highest yield strength of stony meteorites and two times larger than that of the toughest iron meteorites. The first interstellar meteor could not have been a stony meteorite similar to most solar system asteroids.
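
The arithmetic behind that comparison is compact. The relation quoted earlier is ram pressure = ambient air density times speed squared; the density and speed below are illustrative stand-ins chosen to land near the reported value, not numbers from the paper’s fit:

```python
def ram_pressure_pa(air_density_kg_m3, speed_m_s):
    """Ram pressure on a supersonic meteor: ambient air density
    times the square of the object's speed."""
    return air_density_kg_m3 * speed_m_s ** 2

# Illustrative only: ~0.11 kg/m^3 (thin stratospheric air) at ~32 km/s
p_ram = ram_pressure_pa(0.11, 32_000.0)
print(f"ram pressure: ~{p_ram / 1e6:.0f} MPa")

# Yield strengths implied by the comparison in the passage above
stony_mpa = 113 / 20  # "twenty times larger" than stony meteorites
iron_mpa = 113 / 2    # "two times larger" than the toughest irons
print(f"implied yield strengths: stony ~{stony_mpa:.1f} MPa, iron ~{iron_mpa:.1f} MPa")
```

A body that holds together at 113 MPa must be tougher than any iron meteorite in our collections, which is the heart of the composition argument.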

Indeed, as Loeb points out, the required material strength for this object has to exceed that of iron meteorites to allow it to survive the ram pressure down to the 18.7 kilometer altitude where the brightest flare shows up in the data. About one in twenty of the objects impacting the Earth are iron meteorites – 90% to 95% iron, mixed with a remainder of nickel alloys and trace amounts of iridium, gallium and sometimes gold. Loeb’s email points out how useful a sample of this object would be:

We could confirm the interstellar origin of this meteor independent of its speed based on its composition being different from solar system objects. It could deliver exotic abundances of heavy elements, depending on the proximity of its birth place to a supernova or a merger event of two neutron stars.

Confirming this with actual samples from the object would be ideal, which is why Loeb is hoping to find the funding to send what he describes as “an experienced expedition team” and the needed equipment to the impact site off the coast of Papua New Guinea. He has already received half a million dollars toward this purpose but needs another million to proceed with the expedition. From the paper:

The best way to decipher anomalies is to gather additional data. We are currently planning an expedition to Papua New Guinea where we could retrieve the meteor’s fragments from the ocean floor. Studying these fragments in a laboratory would allow us to determine the isotope abundances in CNEOS-2014-01-08 and check whether they are different from those found in solar system meteors. Altogether, anomalous properties of interstellar objects like CNEOS-2014-01-08 and ‘Oumuamua, hold the potential for revising conventional wisdom on our cosmic neighborhood. The expedition to the ocean floor around Papua New Guinea will illustrate metaphorically how scientific evidence expands our island of knowledge into the ocean of ignorance that surrounds it.

The search area appears to be a relatively reasonable 10 kilometers by 10 kilometers, offering the potential for discovery of fragments on the ocean floor. The plan is ambitious but seems entirely workable. I’ll close with its description in the paper:

Our plan is to mobilize a ship with a magnetic sled deployed using a long line winch. We will be operating approximately ∼ 300 km north of Manus Island. The team will consist of seven sled operators, plus the scientific team… We will tow a sled mounted with magnets, cameras and lights on the ocean floor inside of a 10 km × 10 km search box. A number of sources have been used to narrow the search site to this relatively small search box. A sled, ∼ 2 m long, ∼ 1 m wide and ∼ 0.2 m tall, weighing ∼ 55 kg, will be towed along the seabed to sample for ferro-magnetic meteorite fragments from CNEOS 2014-01-08.

It would never have occurred to me when I began publishing Centauri Dreams that one day we might be mounting a search in our own oceans looking for debris from an interstellar object. Readers with deep pockets take note.

The paper is Siraj & Loeb, “An Ocean Expedition by the Galileo Project to Retrieve Fragments of the First Large Interstellar Meteor CNEOS 2014-01-08,” submitted to the Journal of Astronomical Instrumentation (preprint). The discovery paper is Siraj & Loeb, “The 2019 Discovery of a Meteor of Interstellar Origin,” submitted to Astrophysical Journal Letters (preprint).


The discovery of a super-Earth around the M-dwarf Ross 508 gives us an interesting new world close to, if not sometimes within, the inner edge of the star’s habitable zone. This is noteworthy not simply because of the inherent interest of the planet, but because the method used to detect it was Doppler spectroscopy. In other words, radial velocity methods in which we study shifts in the spectrum of the star are here being applied to a late M-dwarf that emits most of its energies in the near-infrared (NIR).

I usually think about transits in relation to M-dwarf planets, because our space-based observatories, from CoRoT to Kepler and now TESS, have demonstrated the power of these techniques in finding exoplanets. M-dwarfs are made to order for transits because they’re small enough to offer deep transits – the signature of the planet in the star’s lightcurve is more pronounced than it would be for a transit across a larger star.
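
The depth argument is pure geometry: a transit blocks a fraction (Rp/Rs)² of the star’s light. A quick sketch, in which the 0.2 solar-radius M-dwarf is an assumed typical value rather than a specific star:

```python
R_SUN_KM = 696_000.0
R_EARTH_KM = 6_371.0

def transit_depth(planet_radius_km, star_radius_km):
    """Fraction of starlight blocked during transit: (Rp / Rs)^2."""
    return (planet_radius_km / star_radius_km) ** 2

# The same Earth-sized planet against a Sun-like star vs. a mid-M dwarf
depth_sun = transit_depth(R_EARTH_KM, R_SUN_KM)
depth_mdwarf = transit_depth(R_EARTH_KM, 0.2 * R_SUN_KM)

print(f"Sun-like star:    ~{depth_sun * 1e6:.0f} ppm")
print(f"0.2 Rsun M-dwarf: ~{depth_mdwarf * 1e6:.0f} ppm")
```

Shrinking the star to a fifth of the Sun’s radius deepens the transit twenty-five-fold, which is exactly why M-dwarfs are made to order for the transit method.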

From a radial velocity perspective, planets in an M-dwarf habitable zone orbit the star closely, making for a strong RV signal if we can detect it. But there are limitations to both methods: transit searches have clustered around younger red dwarfs that are relatively more massive, while most radial velocity surveys have employed optical CCDs, whereas older, more evolved M-dwarfs are brighter in the near-infrared. From an exoplanet perspective, then, cool late M-dwarfs remain largely unexplored terrain, a situation that is now being addressed.

What is needed for this kind of work is a spectrograph specifically designed for NIR wavelengths, and in fact NIR spectrographs have begun to appear, some of which involve projects we’ve looked at here, as for example CARMENES (Calar Alto high-Resolution search for M dwarfs with Exoearths with Near-infrared and optical Echelle Spectrographs). Other such projects, like SPIROU (SPectropolarimetre InfraROUge) and HPF (Habitable Planet Finder) also employ NIR spectrographs.

The most famous of the M-dwarf planets is, of course, Proxima Centauri b, found by the team led by Guillem Anglada-Escudé using visible light spectroscopy. But M-dwarfs with temperatures below Proxima Centauri’s roughly 3000 K, the late-type M-dwarfs, have not been systematically searched for planets.

Consider this: Seen from 30 light years out, the Sun is a 5th magnitude object in visible light, but a 3rd magnitude target in infrared. A late-type red dwarf comes in at around 19th magnitude in visible light, but brightens to 11th magnitude in the infrared. We’ve found dozens of exoplanets around stars with effective temperature higher than 3,000 K, but only a handful around cooler M-dwarfs. The authors of the discovery paper on Ross 508 b are not exaggerating when they describe the detection of planets around such stars using high-precision radial velocity methods as “a frontier in exoplanet exploration.” Their paper serves as a helpful introduction to NIR spectroscopy.
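
Those magnitude gaps translate into large flux ratios through Pogson’s relation, in which every five magnitudes is a factor of exactly 100 in brightness. Using the figures quoted in the paragraph above:

```python
def flux_ratio(mag_faint, mag_bright):
    """Brightness ratio implied by a magnitude difference:
    ratio = 10^(0.4 * delta_m), i.e. 5 magnitudes = a factor of 100."""
    return 10.0 ** (0.4 * (mag_faint - mag_bright))

# A late-type red dwarf: 19th magnitude visible vs. 11th in the infrared
print(f"late M-dwarf, infrared advantage: ~{flux_ratio(19.0, 11.0):.0f}x")

# The Sun at 30 light years: 5th magnitude visible vs. 3rd in the infrared
print(f"Sun, infrared advantage: ~{flux_ratio(5.0, 3.0):.1f}x")
```

An eight-magnitude gap is a factor of roughly 1600 in flux, which is why a spectrograph working in the near-infrared sees these stars so much more easily than an optical one.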

The team, led by Hiroki Harakawa (NAOJ Subaru Telescope, Hawaii), reports on the Ross 508 work as the beginning of a campaign exploring low-temperature stars with the Subaru Telescope IRD (InfraRed Doppler) instrument, which the Astrobiology Center of Japan, where it was developed, describes as the first high-precision infrared spectrograph for 8-meter class telescopes. The observing program now underway is the IRD Subaru Strategic Program (IRD-SSP), which began in 2019 and scans late-type M-dwarfs. Stable red dwarfs with low surface activity are the targets.

Radial velocity work detects stellar wobbles, but those wobbles can be mimicked in several ways, making planet hunting as much a matter of excluding false positives as of locating candidates. M-dwarfs are problematic because they are prone to violent flare activity and the changes in surface brightness it produces. A false planetary signature like this has to be identified and subtracted before a genuine planet can be claimed. Ross 508 b holds up to the scrutiny, showing a minimum mass about four times that of Earth at an average distance of 0.05 AU from the star.
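
We can sanity-check the detection with the standard radial-velocity semi-amplitude scaling for a circular orbit. In the sketch below, the stellar mass of ~0.18 solar masses is an assumed value of mine for an M4.5 dwarf like Ross 508, not a figure from the text:

```python
M_JUP_IN_M_EARTH = 317.8  # Jupiter's mass in Earth masses

def rv_semi_amplitude_ms(planet_msini_mjup, star_mass_msun, period_years):
    """Stellar wobble (m/s) induced by a planet on a circular orbit,
    using the standard scaling (valid for planet mass << stellar mass):
    K = 28.4329 * (Mp sin i / Mjup) * (M* / Msun)^(-2/3) * (P / yr)^(-1/3)
    """
    return (28.4329 * planet_msini_mjup
            * star_mass_msun ** (-2.0 / 3.0)
            * period_years ** (-1.0 / 3.0))

# ~4 Earth masses on an ~11-day orbit around an assumed 0.18 Msun host
k = rv_semi_amplitude_ms(4.0 / M_JUP_IN_M_EARTH, 0.18, 10.77 / 365.25)
print(f"expected wobble: ~{k:.1f} m/s")
```

The result comes out at a few meters per second, consistent with the very small velocity variation IRD actually measured, slower than a person running.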

There are indications that the planet’s orbit is elliptical, with an orbital period of about 11 days, part of which may include crossing into and back out of the habitable zone. An interesting consequence of studying late-type M-dwarfs is that their presumed lower levels of flare activity may offer a planetary environment more conducive to life than that around their younger cousins, with a surface less frequently bathed in flare-induced radiation. I hasten to add that this is a tentative conclusion still the subject of active study.

In any case, a planet like Ross 508 b may well turn out to be a target for atmospheric analysis once we’re able to image it directly, probably with the coming generation of 30-meter class telescopes. Transits are unlikely here, so we’re reliant on imaging rather than transmission spectroscopy, which analyzes planetary atmospheres by studying the star’s light as it filters through the atmosphere during transit events.

We should be hearing a lot more from the IRD-SSP project. Lead author Hiroki Harakawa has this to say:

“Ross 508 b is the first successful detection of a super-Earth using only near-infrared spectroscopy. Prior to this, in the detection of low-mass planets such as super-Earths, near-infrared observations alone were not accurate enough, and verification by high-precision line-of-sight velocity measurements in visible light was necessary. This study shows that IRD-SSP alone is capable of detecting planets, and clearly demonstrates the advantage of IRD-SSP in its ability to search with a high precision even for late-type red dwarfs that are too faint to be observed with visible light.”

Image: Periodic variation in the line-of-sight velocity of the star Ross 508 observed by IRD. It is wrapped around the orbital period of the planet Ross 508 b (10.77 days). The change in the line-of-sight velocity of Ross 508 is less than 4 meters per second, indicating that IRD captured a very small wobble that is slower than a person running. The red curve is the best fit to the observations and its deviation from a sinusoidal curve indicates that the planet’s orbit is most likely elliptical. Credit: Harakawa et al. 2022.

The authors are interested in the question of eccentricity, pointing out that it may offer early clues to the planet’s origin, although it will take further radial velocity measurements to clarify just how eccentric this orbit is. The paper examines four different scenarios to explain the RV data, but none of these constrain the eccentricity conclusively. From the paper:

…there remains the possibility that Ross 508 b is in a high-eccentricity orbit. In a multiple-planet system, migrated planets experience giant impacts or are trapped in a resonant chain (e.g., Ogihara & Ida 2009; Izidoro et al. 2017). Planetary eccentricities are excited by giant impacts. The eccentricity of a planet can be also excited by gravitational interactions between neighboring planets or secular perturbations from a (sub)stellar companion on a wider orbit. The confirmation of a long-term RV trend will help disentangle the formation history of the super-Earth Ross 508 b.

It’s also far too early to make any statements about this planet’s habitability. For one thing, the inner edge of the habitable zone at Ross 508 is not well understood, depending as it does on the star’s luminosity, which in turn is affected by its low metallicity. It does appear that the planet is near the runaway greenhouse limit. But our knowledge of super-Earth habitability is nascent. Climate, plate tectonics, and other potent factors would play a role that we won’t be able to measure until we can start taking atmospheric measurements with next generation telescopes.

Ross 508 is one of the faintest, lowest-mass stars with a planet detected through radial velocity. The discovery points to the need for a large telescope and a high-precision spectrograph in the near infrared to analyze the planetary systems around this kind of star. We should be learning a great deal more about late M-dwarfs as we press on with projects like the IRD Subaru Strategic Program, coupling near-infrared RV work with transit observations from space and ground-based observatories.

The paper is Harakawa et al., “A Super-Earth Orbiting Near the Inner Edge of the Habitable Zone around the M4.5-dwarf Ross 508,” Publications of the Astronomical Society of Japan 30 June 2022 (full text).


Interesting things happen at the edge of the Solar System. Or perhaps I should say, at the boundary of the heliosphere, since the Solar System itself conceivably extends (in terms of possible planets) further out than the 100 or so AU that marks the heliosphere’s boundary at its closest. The fact that the heliosphere is pliable, reacting among other things to the solar cycle, in turn means that the boundary is a moving target. It would be useful if we could get something like JHU/APL’s Interstellar Probe mission out well beyond the heliosphere to help us understand this morphology better.

But let’s think about the heliosphere’s boundaries from the standpoint of incoming spacecraft. Because deceleration at the destination system is a huge problem for starship mission planning. A future crew, human or robotic, could deploy a solar sail to slow down, but a magsail seems better, as its effects kick in earlier on the approach. Looking at the image below, however, suggests another possibility, one using the interactions between stars and the interstellar medium to assist the slowdown. And then the question arises: Does our own Sun produce a similar kind of bow shock?

Image: A multi-wavelength view of Zeta Ophiuchi. Credit: X-ray: NASA/CXC/Dublin Inst. Advanced Studies/S. Green et al.; Infrared: NASA/JPL/Spitzer.

Here we’re looking at a star, Zeta Ophiuchi, that is some 440 light years from Earth. It’s about 20 times as massive as the Sun, and evidently was once in a tight orbit around another star that became a supernova perhaps a million years ago. As a result, Zeta Ophiuchi was ejected from its binary orbit, and we have data from the Spitzer Space Telescope as well as the Chandra X-ray Observatory depicting the spectacular after-effects. The shock wave consists of matter blowing away from the star’s surface, slamming into gas. In the above image, the shock wave is in vivid red and green.

The latest work on Zeta Ophiuchi comes from a team led by Samuel Green (Dublin Institute for Advanced Studies, Ireland), with a paper laying out computer modeling of the shock wave and testing it against observational data obtained at X-ray, optical, infrared and radio wavelengths. The results are interesting: the observed X-ray emission, visible as the blue bubble around the star in the image above, is brighter than the modeling predicts. That excess brightness indicates that further modeling, incorporating turbulence and particle acceleration, is needed.

I’ll send you to the paper for more on Zeta Ophiuchi, whose position – enveloped by the nebula Sh2-27 and pushing through dense dust clouds – makes it a natural for studying what happens when a shock wave develops. But let’s cut back to more mundane interactions, such as what happens when the Sun’s solar wind encounters the interstellar medium. Does a bow shock form here? Depending on the relative velocity of the heliosphere and the strength of the local interstellar magnetic field, such a phenomenon may or may not occur, as suggested by Voyager data as well as earlier findings from the Interstellar Boundary Explorer spacecraft (IBEX). A bow shock had been assumed, but we’re learning that these interactions are complicated.

While we investigate our heliosphere’s interactions with the interstellar medium, we can point to numerous bow shocks especially associated with more massive stars. In fact, a citizen science effort called The Milky Way Project is all about mapping bow shocks, building our catalog of these interesting astrophysical features. Learning more about how bow shocks form will clearly take us into the influence of interstellar magnetic fields as they roil the outflowing stellar winds they encounter. The density and pressures of the medium and the speed of the star’s astrosphere determine the result.
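
The balance that sets the size of such a structure can be sketched as a simple ram-pressure equality: the standoff distance is where the wind’s outward ram pressure matches that of the oncoming medium. The solar numbers below (mass-loss rate, wind speed, cloud density, space velocity) are rough illustrative values of mine, not measurements:

```python
import math

M_SUN_KG = 1.989e30
YEAR_S = 3.156e7
M_H_KG = 1.67e-27
AU_M = 1.496e11

def standoff_distance_au(mdot_msun_yr, v_wind_ms, n_ism_cm3, v_star_ms):
    """Standoff radius where stellar-wind ram pressure balances the
    ram pressure of the oncoming interstellar medium:
    R = sqrt(Mdot * v_wind / (4 pi * rho_ism * v_star^2))."""
    mdot = mdot_msun_yr * M_SUN_KG / YEAR_S
    rho_ism = n_ism_cm3 * 1e6 * M_H_KG  # atoms/cm^3 -> kg/m^3
    r_m = math.sqrt(mdot * v_wind_ms / (4.0 * math.pi * rho_ism * v_star_ms ** 2))
    return r_m / AU_M

# Rough solar values: ~2e-14 Msun/yr of wind at ~450 km/s, the Sun moving
# at ~25 km/s through a local cloud of ~0.2 hydrogen atoms per cm^3
r = standoff_distance_au(2e-14, 4.5e5, 0.2, 2.5e4)
print(f"standoff scale: ~{r:.0f} AU")
```

Reassuringly, this lands near the ~100 AU heliospheric boundary scale mentioned at the top of this piece; a massive runaway like Zeta Ophiuchi, with a far denser wind and higher space velocity, pushes the same balance into something spectacular enough to image.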

Image: Stars travel through the galaxy surrounded by a bubble of charged gas and magnetic fields, rounded at the front and trailing into a long tail behind. The bubble is called an astrosphere, or — in the case of the one around our Sun — a heliosphere. This image shows a few examples of astrospheres that are very strong and therefore visible. Credit: NASA/Goddard Space Flight Center.

All of this has implications for our thinking about certain kinds of interstellar missions. If a star does form a bumper of plasma and higher density gas at the edge of its astrosphere, then as Gregory Benford has suggested (in correspondence some years back), we are looking at an obvious place to slow down an incoming starship. As Benford noted, the bow shock produces 3D structures, surfaces within which one can move while shedding speed, perhaps braking via a magsail. Each star would produce its own unique deceleration environment, allowing us to brake where possible along the bow shock, the astropause (cognate to the heliopause) and the termination shock.

We are talking about long, spiraling approaches to a destination system with continual magsail braking – decelerating from interstellar velocities is not going to be fast or easy. But it seems clear that one kind of precursor mission before we send missions that are more than flybys to other stars will be to visit our own shock environment at the edge of the Solar System, where we can learn more about using shock surfaces to slow down. I like the way Benford put it in an email: “As a starship approaches a star, sensing the shock structures will be like having a good eye for the tides, currents and reefs of a harbor.” For more, see 2012’s Starship Surfing: Ride the Bow Shock, where I assumed the existence of a solar bow shock.

All of this reminds us that the interstellar medium is anything but uniform. If the Sun is currently near the boundary of the Local Interstellar Cloud (and its exact position within it is unclear), the Alpha Centauri stars appear to be outside that cloud in the direction of the G cloud, another variation in the medium. So we have another kind of boundary crossing to consider. Different hydrogen densities play havoc with the Bussard ramjet concept, too. Robert Bussard assumed hydrogen densities in the range of 1 hydrogen atom per cubic centimeter, but move outside denser clouds and that figure should drop precipitously. If you’re flying an interstellar ramjet, pay attention to the clouds!
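
The sensitivity to density is easy to quantify, since the hydrogen a ramjet can scoop scales linearly with the ambient density. A hedged sketch, with an assumed (and enormous) 1000 km scoop radius and a cruise speed of a tenth of lightspeed:

```python
import math

M_H_KG = 1.67e-27  # mass of a hydrogen atom, kg

def hydrogen_intake_kg_s(n_per_cm3, speed_ms, scoop_radius_m):
    """Interstellar hydrogen swept up per second by a circular scoop:
    number density * atom mass * speed * scoop area."""
    return (n_per_cm3 * 1e6) * M_H_KG * speed_ms * math.pi * scoop_radius_m ** 2

# Bussard's 1 atom/cm^3 vs. a tenfold-thinner intercloud medium, both
# at 10% of lightspeed with a 1000 km scoop radius (assumed values)
dense = hydrogen_intake_kg_s(1.0, 3.0e7, 1.0e6)
sparse = hydrogen_intake_kg_s(0.1, 3.0e7, 1.0e6)
print(f"inside cloud:  ~{dense * 1000:.0f} g/s")
print(f"outside cloud: ~{sparse * 1000:.1f} g/s")
```

Even in the denser medium the scoop gathers only on the order of a hundred grams of fuel per second, and leaving the cloud cuts that tenfold; hence the advice about watching the clouds.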

The Zeta Ophiuchi paper is Green et al., “Thermal emission from bow shocks. II. 3D magnetohydrodynamic models of zeta Ophiuchi,” in process at Astronomy & Astrophysics (abstract).


The Challenge of ‘Twilight Asteroids’

We have the Zwicky Transient Facility at Palomar Observatory to thank for the detection of the strikingly named ’Ayló’chaxnim (2020 AV2), a large near-Earth asteroid with a claim to distinction: it is the first NEO found to orbit entirely inside the orbit of Venus. I love to explore the naming of things, and now that we have ’Ayló’chaxnim (2020 AV2), we have to name the category, at least provisionally. The chosen name is Vatira, which in turn is a nod to the Atiras, a class of asteroids that orbit entirely inside Earth’s orbit. Thus a Vatira is an Atira whose orbit lies interior to that of Venus.

As to the ’Ayló’chaxnim, it’s a word from indigenous peoples whose ancestral lands took in the mountainous region where the Palomar Observatory is located. I’m told by the good people at Caltech that the word means something like ‘Venus Girl.’ On June 7, people of Pauma descent gathered for a ceremony at the observatory, having been asked by the team manning the Zwicky Transient Facility to choose a local name.

I couldn’t tell you how ’Ayló’chaxnim is pronounced, but with the ZTF on watch, it’s possible we’ll find more Vatiras – or at least Atiras, which seem to be more numerous – so more Pauma names may be on the way, and perhaps we’ll learn. 2020 AV2 is 1 to 3 kilometers in size and has an orbit tilted about 15 degrees from the plane of the Solar System. On its 151-day orbit, it stays interior to Venus and comes close to the orbit of Mercury. Postdoc Bryce Bolin at Caltech flagged it as a candidate in early 2020.

The ZTF itself is a survey camera mounted on the Samuel Oschin Telescope at Palomar, conducting a wide-field survey with rapid scans of the sky. 2020 AV2, says Caltech’s George Helou, who is a ZTF co-investigator, is on an interesting orbit, surely the result of migration from further out in the system:

“Getting past the orbit of Venus must have been challenging. The only way it will ever get out of its orbit is if it gets flung out via a gravitational encounter with Mercury or Venus, but more likely it will end up crashing on one of those two planets.”

Image: The Zwicky Transient Facility field of view. The ZTF Observing System delivers efficient, high-cadence, wide-field-of-view, multi-band optical imagery for time-domain astrophysics analysis. The camera utilizes the entire focal plane of 47 square degrees of the 48-inch Samuel Oschin Schmidt telescope, providing the largest instantaneous field of view of any camera on a telescope of aperture greater than 0.5 m: each image covers 235 times the area of the full moon. Credit: Zwicky Transient Facility.

This close to the Sun, Vatiras are only going to be visible at dusk or dawn. As the Carnegie Institution’s Scott Sheppard points out in a recent issue of Science, our asteroid surveys mostly take place with a dark night sky, which implies that small objects orbiting between the Earth and the Sun are not likely to be found. Modeling of the NEO population predicts that objects as large as 2020 AV2 are unlikely among Vatiras, but smaller objects could be plentiful. Asteroid surveys interior to Venus’ orbit are few, so there is work here for facilities like the ZTF, or the NSF’s Blanco 4-meter telescope in Chile with the Dark Energy Camera (DECam), to fill out this population. Both have fields of view sufficient to carry out this kind of survey.

So let’s get down to the asteroid mitigation question. Sheppard points out that, given current NEO surveys coupled with formation models for these objects, more than 90 percent of what he calls ‘planet killer’ NEOs have probably already been found – these would be objects larger than 1 kilometer, and he’s talking here about the entire range of NEOs, not just those interior to the orbits of Earth or Venus. He writes:

The last few unknown 1-km NEOs likely have orbits close to the Sun or high inclinations, which keep them away from the fields of the main NEO surveys. The 48-inch Zwicky Transient Facility telescope has found one Vatira and several Atira asteroids, making it one of the most prolific asteroid hunters interior to Earth. To combat twilight to find smaller asteroids, one can use a bigger telescope. Large telescopes usually do not have big fields of view to efficiently survey. The National Science Foundation’s Blanco 4-meter telescope in Chile with the Dark Energy Camera (DECam) is an exception. A new search for asteroids hidden in plain twilight with DECam has found a few Atira asteroids, including 2021 PH27.

Sheppard also describes a category he calls ‘city killers,’ which takes in NEOs larger than 140 meters; of these, he believes we have found about half. The progress in tracking NEOs has been heartening as we learn about potentially dangerous trajectories, and turning to twilight surveys like these will help us learn more about NEOs hidden in the glare of the Sun.

It turns out that the Zwicky team recently found the asteroid with the smallest known semimajor axis (0.46 AU). This is 2021 PH27, an object with high eccentricity whose orbit crosses the orbits of both Mercury and Venus. Thus, given our categorization, PH27 is an Atira rather than a Vatira. With a perihelion of 0.13 AU, this NEO shows 1 arc minute of precession per century, the highest of any object in the Solar System, including Mercury. This is another large NEO at about 1 kilometer in size, although as Sheppard notes:

…because the diameter of these interior asteroids is calculated with an assumed albedo and solar phase function, the actual diameters for both of these discoveries could be under 1 km. This would put them in a more-expected population and make them less of a statistical fluke.

Image: 2020 AV2 orbits entirely within the orbit of Venus. Credit: Bryce Bolin/Caltech

Clearly we have much to do to build our catalog of objects close to the Sun. We can extend the catalog of exotic names as well. Asteroids called Amors are those that approach the Earth but do not cross its orbit. Apollos do cross the orbit of the Earth but have semimajor axes greater than Earth’s. Atens, in turn, cross Earth’s orbit but have semimajor axes less than that of the Earth. Sheppard points out that NEOs have dynamically unstable orbits, and speculates that a reservoir that replenishes their numbers must exist because the overall count seems to be in a steady state.

Among possible reservoirs are those that may exist in long-term resonances with Venus or Mercury, and there may conceivably be a population of asteroids not yet observed, the so-called Vulcanoids, that could have orbits entirely within the orbit of Mercury. Sheppard’s excellent article makes the point that Vulcanoids would be at the mercy of many factors, including Yarkovsky drift, collisions and thermal fracturing from proximity to the Sun, so they’re likely uncommon. We do know that spacecraft observations of the region near the Sun seem to rule out Vulcanoids larger than 5 kilometers, but stable reservoirs for smaller objects may exist. Remember, too, that we have found numerous exoplanets closer to their host stars than the Vulcanoid region in our Solar System.

Overall, NEOs in the Sun’s glare should not be too prolific:

Fewer Atiras should exist than the more-distant NEOs, and even fewer Vatiras, because it becomes harder and harder for an object to move inward past Earth’s and then Venus’ orbit. Random walks of a NEO’s orbit through planetary gravitational interactions can make an Aten into an Atira and/or Vatira orbit and vice versa. Atiras should make up some 1.2% and Vatiras only 0.3% of the total NEO population coming from the main belt of asteroids (4). 2020 AV2 itself will spend only a few million years in a Vatira orbit before crossing Venus’ orbit. Eventually, 2020 AV2 will either collide with or be tidally disrupted by one of the planets, disintegrate near the Sun, or be ejected from the inner Solar System.

Scott Sheppard’s article is “In the Glare of the Sun,” Science Vol. 377, Issue 6604 (21 July 2022), pp. 366-367 (full text). For more on the Zwicky Transient Facility, see Graham et al., “The Zwicky Transient Facility: Science Objectives,” Publications of the Astronomical Society of the Pacific Vol. 131, No. 1001 (22 May 2019). Full text.


Getting There Quickly: The Nuclear Option

Adam Crowl has been appearing on Centauri Dreams for almost as long as the site has been in existence, a welcome addition given his polymathic interests and ability to cut to the heart of any issue. His long-term interest in interstellar propulsion has recently been piqued by the Jet Propulsion Laboratory’s work on a mission to the Sun’s gravitational lens region. JPL is homing in on multiple sailcraft with close solar passes to expedite the cruise time, leading Adam to run through the options to illustrate the issues involved in so dramatic a mission. Today he looks at the pros and cons of nuclear propulsion, asking whether it could be used to shorten the trip dramatically. Beamed sail and laser-powered ion drive possibilities are slated for future posts. With each of these, if we want to get out past 550 AU as quickly as possible, the devil is in the details. To keep up with Adam’s work, keep an eye on Crowlspace.

by Adam Crowl

The Solar Gravitational Lens amplifies signals from distant stars and galaxies immensely, thanks to the slight distortion of space-time caused by the Sun’s mass-energy. Basically the Sun becomes an immense spherical lens, amplifying incoming light by focussing it hundreds of Astronomical Units (AU) away. Depending on the light frequency, the Sun’s surrounding plasma in its Corona can cause interference, so the minimum distance varies. For optical frequencies it can be ~600 AU at a minimum and light is usefully focussed out to ~1,000 AU.
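To see where those distances come from, a quick back-of-the-envelope check may help. Light grazing the Sun at radius r is deflected by θ = 4GM/(c²r) and crosses the optical axis at d = r²c²/(4GM); rays passing farther out focus farther away, which is why the focus is a line rather than a point. A sketch (standard solar constants, not figures from the mission papers):

```python
# Geometric minimum distance of the solar gravitational lens focus.
# Light grazing the solar limb at radius r is bent by theta = 4GM/(c^2 r)
# and converges at d = r / theta = r^2 c^2 / (4 G M).

AU_M   = 1.495978707e11   # astronomical unit, m
R_SUN  = 6.957e8          # solar radius, m
GM_SUN = 1.32712e20       # G * M_sun, m^3/s^2
C      = 2.998e8          # speed of light, m/s

d_min = R_SUN**2 * C**2 / (4 * GM_SUN)
print(d_min / AU_M)  # ~548 AU for limb-grazing rays
```

The geometric minimum comes out near 548 AU; as the text notes, coronal interference pushes the usable optical region out toward ~600 AU and beyond.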

One AU is traveled in 1 Julian Year (365.25 days) at a speed of 4.74 km/s. Thus to travel 100 AU in 1 year needs a speed of 474 km/s, which is much faster than the 16.65 km/s that probes have been launched away from the Earth. If a Solar Sail propulsion system could be deployed close to the Sun and have a Lifting Factor (the ratio of Light-Pressure to Weight of Solar Sail vehicle) greater than 1, then such a mission could be launched easily. However, at present, we don’t have super-reflective gossamer light materials that could usefully lift a payload against solar gravity.
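The conversion is easy to verify (a minimal sketch using the IAU value of the astronomical unit and the Julian year):

```python
# Convert cruise speeds between AU/year and km/s, as in the text.

AU_KM = 1.495978707e8    # astronomical unit in kilometers
YEAR_S = 365.25 * 86400  # Julian year in seconds

def au_per_year_to_km_s(v_au_yr: float) -> float:
    """Convert a speed in AU/year to km/s."""
    return v_au_yr * AU_KM / YEAR_S

print(au_per_year_to_km_s(1))    # ~4.74 km/s
print(au_per_year_to_km_s(100))  # ~474 km/s, the 100 AU/yr pace quoted above
```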

Carbon nanotube mesh has been studied in such a context, as has aerographite, but both are yet to be created in large enough areas to carry large payloads. The ratio of the push of sunlight, for a perfect reflector, to the gravity of the Sun means an areal mass density of 1.53 grams per square metre gives a Lifting Factor of 1. A Sail with such an LF will hover when pointing face on at the Sun. If a Solar Sail LF is less than 1, then it can be angled and used to speed up or slow down the Sail relative to its initial orbital vector, but the available trajectories are then slow spirals – not fast enough to reach the Gravity Lens in a useful time.
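The 1.53 g/m² figure can be checked directly: light pressure and solar gravity both fall off as the inverse square of distance, so the critical areal density is the same everywhere in the Solar System. A sketch with standard solar constants:

```python
# Critical sail loading for Lifting Factor = 1 (perfect reflector).
# Force balance per unit area: 2 * L/(4*pi*r^2*c) = sigma * G*M/r^2,
# and the r^2 factors cancel, so sigma is distance-independent.
import math

L_SUN = 3.828e26   # solar luminosity, W
M_SUN = 1.989e30   # solar mass, kg
G     = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C     = 2.998e8    # speed of light, m/s

sigma_crit = L_SUN / (2 * math.pi * G * M_SUN * C)  # kg/m^2
print(sigma_crit * 1000)  # ~1.53 g/m^2, matching the figure in the text
```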

Image: A logarithmic look at where we’d like to go. Credit: NASA.

Absent super-light Solar Sails, what are the options? Modern day rockets can’t reach 474 km/s without some radical improvements. Multi-grid Ion Drives can achieve exhaust velocities of the right scale, but no power source yet available can supply the energy required. The reason why leads into the next couple of options so it’s worth exploring. For deep space missions the only working option for high-power is a nuclear fission reactor, since we’re yet to build a working nuclear fusion reactor.

When a rocket’s thrust is limited by the power supply’s mass, then there’s a minimum power & minimum travel time trajectory with a specific acceleration/deceleration profile – it accelerates 1/3 the time, then cruises at constant speed 1/3 the time, then brakes 1/3 the time. The minimum Specific Power (Power per kilogram) is:

P/M = (27/4) × S² / T³

…where P/M is Power/Mass, S is displacement (distance traveled) and T is the total mission time to travel the displacement S. In units of AU and Years, the P/M becomes:

P/M = 4.8 × S² / T³ W/kg

However while the Average Speed is 474 km/s for a 6 year mission to 600 AU, the acceleration/deceleration must be accounted for. The Cruise Speed is thus 3/2 times higher, so the total Delta-Vee is 3 times the Average Speed. The optimal mass-ratio for the rocket is about 4.41, so the required Effective Exhaust Velocity is a bit over twice the Average Speed – in this case 958 km/s. As a result the energy efficiency is 0.323, meaning the required Specific Power for a rocket is:

P/M = 14.9 × S² / T³ W/kg
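The kinematics described above can be checked in a few lines (a sketch; the 4.41 mass ratio is the value quoted in the text):

```python
# Speed bookkeeping for the 1/3 accelerate, 1/3 cruise, 1/3 brake profile,
# applied to 600 AU in 6 years (average speed 474 km/s).
import math

v_avg    = 474           # km/s, average speed
v_cruise = 1.5 * v_avg   # cruise must make up for the accel/brake phases
delta_v  = 2 * v_cruise  # one full speed-up plus one full slow-down
R        = 4.41          # near-optimal mass ratio for a power-limited rocket
v_e      = delta_v / math.log(R)  # exhaust velocity via the rocket equation

print(v_cruise)  # 711 km/s
print(delta_v)   # 1422 km/s, i.e. 3x the average speed
print(v_e)       # ~958 km/s, "a bit over twice the Average Speed"
```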

For a mission to 600 AU in 6 years a Specific Power of 24,850 W/kg is needed. But this is the ideal Jet-Power – the kinetic energy that actually goes into the forward thrust of the vehicle. Assuming the power source is 40% (40% drive and 10% payload) of the vehicle’s empty mass and the efficiency of the higher-powered multi-grid ion-drive is 80%, then the power source must produce 77,600 W/kg of power. Every power source produces waste heat. For a fission power supply, the waste heat can only be expelled by a radiator. Thermodynamic efficiency is defined as the difference in temperature between the heat-source (reactor) and the heat-sink (radiator), divided by the temperature of the heat source:

Thermal Efficiency = (T_source – T_sink) / T_source

For a reactor with a radiator in space, the mass of that radiator is (usually) minimised when the efficiency is 25% – so to maximise the Power/Mass ratio the reactor has to be really HOT. The heat of the reactor is carried away into a heat exchanger and then travels through the radiator to dump the waste heat to space. To minimise mass and moving parts, so-called Heat-Pipes can be used, which are conductive channels of certain alloys.
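The 25% figure falls out of minimising radiator area: pushing efficiency toward the Carnot limit means a colder radiator, and radiated flux drops as the fourth power of temperature. A quick numerical check, assuming radiator mass simply tracks area:

```python
# Numerical check that radiator area (hence mass) is minimised at 25%
# efficiency. Waste heat per unit output is (1-eta)/eta, radiated at
# T_sink with flux ~ T_sink^4. With eta = 1 - x (x = T_sink/T_source),
# area per unit output ~ 1/((1-x) * x^3).

def relative_radiator_area(x: float) -> float:
    """Radiator area per unit output power, up to constants."""
    return 1.0 / ((1.0 - x) * x**3)

# Scan temperature ratios to find the minimum.
best = min((relative_radiator_area(x / 1000), x / 1000) for x in range(1, 1000))
x_opt = best[1]
print(x_opt)      # ~0.75: radiator runs at 3/4 the source temperature
print(1 - x_opt)  # ~0.25: the 25% efficiency quoted above
```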

Another option, which may prove highly effective given clever reactor designs, is to use high performance thermophotovoltaic (TPV) cells to convert high temperature thermal emissions directly into electrical power. High performance TPVs have hit 40% efficiency at over 2,000 degrees C, which would also maximise the P/M ratio of the whole power system.

Pure Uranium-235, if perfectly fissioned (a Burn-Up Fraction of 1), releases 88 trillion joules (88 TJ) per kilogram. A jet-power of 24,850 W/kg sustained for 4 years is a total power output of 3.1 TJ/kg. Operating the Solar Lens Telescope payload won’t require such power levels, so we’ll assume it’s a negligible fraction of the total output – a much lower power setting. So our fuel needs to be *at least* 3.6% Uranium-235. But there are multipliers which increase the fraction required – not all the vehicle will be U-235.

First, the power-supply mass fraction and the ion-drive efficiency – a multiplier of 1/0.32. Therefore the fuel must be 11.1% U-235.

Second, there’s the thermodynamic efficiency. To minimise the radiator area (thus mass) required, it’s set at 25%. Therefore the U-235 is 45.6% of the power system mass. The Specific Power needed for the whole system is thus 310,625 W per kilogram.
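The chain of multipliers can be followed in a few lines (a sketch using the coefficients quoted above; small rounding differences against the figures in the text remain):

```python
# Specific-power chain for the 600 AU / 6 year mission.

S, T = 600, 6                   # distance in AU, mission time in years
p_jet = 14.9 * S**2 / T**3      # ideal jet power, ~24,850 W/kg
p_source = p_jet / (0.4 * 0.8)  # power source is 40% of dry mass, drive is
                                # 80% efficient -> ~77,600 W/kg electrical
p_thermal = p_source / 0.25     # 25% thermodynamic efficiency
                                # -> ~3.1e5 W/kg of reactor heat

# U-235 burn-up fraction: 4 of the 6 years are spent thrusting.
YEAR_S = 365.25 * 86400
energy_per_kg = p_jet * 4 * YEAR_S   # ~3.1 TJ per kg of vehicle
U235_J_PER_KG = 88e12
frac = energy_per_kg / U235_J_PER_KG / (0.4 * 0.8) / 0.25
print(p_jet, p_source, p_thermal)
print(frac)  # ~0.45: nearly half the power system must be U-235
```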

The final limitation I haven’t mentioned until now – the thermophysical properties of Uranium itself. Typically Uranium is in the form of Uranium Dioxide, which is 88% uranium by mass. When heated every material goes up in temperature by absorbing (or producing internally) a certain amount of heat – the so called Heat Capacity. The total amount of heat stored in a given amount of material is called the Enthalpy, but what matters to extracting heat from a mass of fissioning Uranium is the difference in Enthalpy between a Higher and a Lower temperature.

Considering the whole of the reactor core and the radiator as a single unit, the Lower temperature will be the radiator temperature. The Higher will be the Core where it physically contacts the heat exchanger/radiator. Thanks to the Thermal efficiency relation we know that if the radiator is at 2,000 K, then the Core must be at least ~2,670 K. The Enthalpy difference is 339 kilojoules per kilogram of Uranium Dioxide core. Extracting that heat difference every second maintains the temperature difference between the Source and the Sink to make Work (useful power), and since the system must deliver 310,625 W per kilogram, a bare minimum of 91.6% of the specific mass of the whole power system must be very hot fissioning Uranium Dioxide core (310,625 / 339,000 ≈ 0.916). Even if the Core is at melting point – about 3,120 K – the Enthalpy difference is only 348 kJ/kg, and 89.3% of the Power System is Core.

The trend is obvious. The power supply ends up being almost all fissioning Uranium, which is obviously absurd.

To conclude: A fission powered mission to 600 AU will take longer than 6 years. As the Power required is proportional to the inverse cube of the mission time, the total energy required is proportional to the inverse square of the mission time. So a mission time of 12 years means the fraction of U-235 burn-up comes down to a more achievable 22.9% of the power supply’s total mass. A reactor core is more than just fissioning metal oxide. Small reactors have been designed with fuel fractions of 10%, but this is without radiators. A 5% core mass puts the system in range of a 24 year mission time, but that’s approaching near term Solar Sail performance.
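The scaling argument can be made concrete (a sketch using the figures above):

```python
# Stretching the mission time: power required scales as T^-3, and since
# thrusting time grows linearly with T, total energy scales as T^-2.

frac_6yr = 0.916  # fraction of power-system mass that must be fissioning
                  # core for the 6-year mission (from the text)
frac_12yr = frac_6yr * (6 / 12)**2
print(frac_12yr)  # ~0.229, the 22.9% quoted for a 12-year mission

p_ratio = (6 / 12)**3
print(p_ratio)    # 0.125: the 12-year mission needs 1/8 the specific power
```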


The last time we looked at the Jet Propulsion Laboratory’s ongoing efforts toward designing a mission to the Sun’s gravitational lens region beyond 550 AU, I focused on how such a mission would construct the image of a distant exoplanet. Gravitational lensing takes advantage of the Sun’s mass, which as Einstein told us distorts spacetime. A spacecraft placed on the other side of the Sun from the target exoplanetary system would take advantage of this, constructing a high resolution image of unprecedented detail. It’s hard to think of anything short of a true interstellar mission that could produce more data about a nearby exoplanet.

In that earlier post, I focused on one part of the JPL work, as the team under the direction of Slava Turyshev had produced a paper updating the modeling of the solar corona. The new numerical simulations led to a powerful result. Remember that the corona is an issue because the light we are studying is being bent around the Sun, and we are in danger of losing information if we can’t untangle the signal from coronal distortions. And it turned out that because the image we are trying to recover would be huge – almost 60 kilometers wide at 1200 AU from the Sun if the target were at Proxima Centauri distance – the individual pixels are as much as 60 meters apart.

Image: JPL’s Slava Turyshev, who is leading the team developing a solar gravitational lens mission concept that pushes current technology trends in striking new directions. Credit: JPL/S. Turyshev.

The distance between pixels turns out to help; it actually reduces the integration time needed to pull all the data together to produce the image. The integration time (the time it takes to gather all the data that will result in the final image) is in fact reduced in proportion to the inverse square of the pixel spacing when the pixels are not adjacent. I’ve more or less quoted the earlier paper there to make the point that according to the JPL work thus far, exoplanet imaging at high resolution using these methods is ‘manifestly feasible,’ another quotation from the earlier work.

We now have a new paper from the JPL team, looking further at this ongoing engineering study of a mission that would operate in the range of 550 to 900 AU, performing multipixel imaging of an exoplanet up to 100 light years away. The telescope is meter-class, the images producing a surface resolution measured in tens of kilometers. Again I will focus on a specific topic within the paper, the configuration of the architecture that would reach these distances. Those looking for the mission overview beyond this should consult the paper, the preprint of which is cited below.

Bear in mind that the SGL (solar gravitational lens) region is, helpfully, not a focal ‘point’ but rather a cylinder, which means that a spacecraft stays within the focus as it moves further from the Sun. This movement also causes the signal to noise ratio to improve, and means we can hope to study effects like planetary rotation, seasonal variations and weather patterns over integration times that may amount to months or years.

Image: From Geoffrey Landis’ presentation at the 2021 IRG/TVIW symposium in Tucson, a slide showing the nature of the gravitational lens focus. Credit: Geoffrey Landis.

Considering that Voyager 1, our farthest spacecraft to date, is now at a ‘mere’ 156 AU, a journey that has taken 44 years, we have to find a way to move faster. The JPL team talks of reaching the focal region in less than 25 years, which implies a hyperbolic escape velocity of more than 25 AU per year. Chemical methods fail, giving us no more than 3 to 4 AU per year, while solar thermal and even nuclear thermal move us into a still unsatisfactory 10-12 AU per year in the best case scenario. The JPL team chooses solar sails in combination with a close perihelion pass of the Sun. The paper examines perihelion possibilities at 15 as well as 10 solar radii but notes that the design of the sailcraft and its material properties define what is going to be possible.

Remember that we have also been looking at the ongoing work at the Johns Hopkins Applied Physics Laboratory on a mission called Interstellar Probe, which likewise needs high velocity to reach the distances required to study the heliosphere from the outside (a putative goal of 1000 AU in 50 years has been suggested). The JHU/APL effort has just released a new paper of its own, which I’ll return to in the near future; thus far the researchers working under Ralph McNutt have not found a close perihelion pass, coupled with a propulsive burn but without a sail, to be sufficient for their purposes. But more on that later. Keep it in mind in relation to this, from the JPL paper:

…the stresses on the sailcraft structure can be well understood. For the sailcraft, we considered among other known solar sail designs, one with articulated vanes (i.e., SunVane). While currently at a low technology readiness level (TRL), the SunVane does permit precision trajectory insertion during the autonomous passage through solar perigee. In addition, the technology permits trimming of the trajectory injection errors while still close to the Sun. This enables the precision placement of the SGL spacecraft on its path towards the image cylinder which is 1.3 km in diameter and some 600+ AU distant.

Is the SunVane concept the game-changer here? I looked at it 18 months ago (see JPL Work on a Gravitational Lensing Mission), where I used the image below to illustrate the concept. The sail is constructed of square panels aligned along a truss. In the Phase II study for NIAC that preceded the current papers, a sail based on SunVane design could achieve 25 AU per year – that would be arrival at 600 AU in 26 years in conjunction with a close solar pass – using a craft with total sail area of 45,000 square meters (equivalent to a single square sail a bit over 200 meters on a side).

Image: The SunVane concept. Credit: Darren D. Garber (Xplore, Inc).

With sail area distributed along the truss rather than confined to the sail’s center of gravity, this is a highly maneuverable design that continues to be of great interest. Maneuverability is a key factor as we look at injecting spacecraft into perihelion trajectory, where errors can be trimmed out while still in close proximity to the Sun.

But current thinking goes beyond flying a single spacecraft. What the JPL work has developed through the three NIAC phases and beyond is a mission built around a constellation of smaller spacecraft. The idea is chosen, the authors say, to enhance redundancy, enable the needed precision of navigation, remove the contamination of background light during SGL operations, and optimize the return of data. What intrigues me particularly is the use of in-flight assembly, with the major spacecraft modules placed on separate sailcraft. This will demand that the sailcraft fly in formation in order to effect the needed rendezvous for assembly.

Let’s home in on this concept, pausing briefly on the sail, for this mission will demand an attitude control system to manage the thrust vector and sail attitude once we have reached perihelion with our multiple craft, each making a perihelion pass followed by rendezvous with the other craft. I turn to the paper for more:

Position and velocity requirements for the incoming trajectory prior to perihelion are < 1 km and ∼1 cm/sec. Timing through perihelion passage is days to weeks with errors in entry-time compensated in the egress phase. As an example, if there is a large position and/or velocity error upon perihelion passage that translated to an angular offset of 100” from the nominal trajectory, there is time to correct this translational offset with the solar sail during the egress phase all the way out to the orbit of Jupiter. The sail’s lateral acceleration is capable of maneuvering the sailcraft back to the desired nominal state on the order of days depending on distance from the Sun. This maneuvering capability relaxes the perihelion targeting constraints and is well within current orbit determination knowledge threshold for the inner solar system which drive the ∼1 km and ∼1 cm/sec requirements.

Why the need to go modular and essentially put the craft together during the cruise phase? The paper points out that the 1-meter telescope that will be necessary cannot currently be produced in the mass and volume range needed to fit a CubeSat. The mission demands something on the order of a 100 kg spacecraft, which in turn would demand solar sails of extreme size as needed to reach the target velocity of 20 AU per year or higher. Such sails will be commonplace one day (I assume), but with the current state of the art, in-flight robotic assembly leverages our growing experience with miniaturization and small satellites and allows for a mission within a decade.

If in-flight assembly is used, because of the difficulties in producing very large sails, the spacecraft modules…are placed on separate sailcraft. After in-flight assembly, the optical telescope and if necessary, the thermal radiators are deployed. Analysis shows that if the vehicle carries a tiled RPS [radioisotope power system]…where the excess heat is used for maintaining spacecraft thermal balance, then there is no need for thermal radiators. The MCs [the assembled spacecraft] use electric propulsion (EP) to make all the necessary maneuvers for the cruise (∼25 years) and science phase of the mission. The propulsion requirements for the science phase are a driver since the SGL spacecraft must follow a non inertial motion for the 10-year science mission phase.

According to the authors, numerous advantages accrue from using a modular approach with in-space assembly, including the ability to use rideshare services; i.e., we can launch modules as secondary payloads, with related economies in money and time. Moreover, this approach means we can use conventional propulsion rather than sails as an option for carrying the cluster of sailcraft inbound toward perihelion in formation. In any case, at some point the sailcraft deploy their sails and establish the needed trajectory for the chosen solar perihelion point. After perihelion, the sails — whose propulsive qualities diminish with distance from the Sun — are ejected, perhaps while nearing Earth orbit, as the sailcraft prepare for assembly.

Flying in formation, the sailcraft reduce their relative distance outbound and begin the in-space assembly phase while passing near Earth orbit. The mission demands that each of the 10-20 kg spacecraft be a fully functional nanosatellite that will use onboard thrusters for docking. Autonomous docking in space has already been demonstrated, essentially doing what the SGL mission will have to do, assembling larger craft from smaller ones. It’s worth noting, as the authors do, that NASA’s Space Technology Mission Directorate has already begun a project called On-Orbit Autonomous Assembly from Nanosatellites (OAAN) along with a CubeSat Proximity Operations Demonstration (CPOD) mission, so we see these ideas being refined.

What demands attention going forward is the needed development of proximity operation technologies, which range from sensor design to approach algorithms, all to be examined as study of the SGL mission continues. There was a time when I would have found this kind of self-assembly en route to deep space fanciful, but there was also a time when I would have said landing a rocket booster on its tail for re-use was fanciful, and it’s clear that self-assembly in the SGL context is plausible. The recent deployment of the James Webb Space Telescope reinforces the same point.

The JPL team has been working with simulation tools based on concurrent engineering methodology (CEM), modifying current software to explore how such ‘fractionated’ spacecraft can be assembled. Note this:

Two types of distributed functionality were explored. A fractionated spacecraft system that operates as an “organism” of free-flying units that distribute function (i.e., virtual vehicle) or a configuration that requires reassembly of the apportioned masses. Given that the science phase is the strong driver for power and propellant mass, the trade study also explored both a 7.5 year (to ∼800 AU) and 12.5 year (to ∼900 AU) science phase using a 20 AU/yr exit velocity as the baseline. The distributed functionality approach that produced the lowest functional mass unit is a cluster of free-flying nanosatellites…each propelled by a solar sail but then assembled to form a MC [mission capable] spacecraft.

Image: Various approaches will emerge about the kind of spacecraft that might fly a mission to the gravitational focus of the Sun. In this image (not taken from the Turyshev et al. paper), swarms of small sailcraft capable of self-assembly into a larger spacecraft are depicted that could fly to a spot where our Sun’s gravity distorts and magnifies the light from a nearby star system, allowing us to capture a sharp image of an Earth-like exoplanet. Credit: NASA/The Aerospace Corporation.

The current paper goes deeply into the attributes of the kind of nanosatellite that can assemble the final design, and I’ll send you to it for further details. Each of the component craft has the capability of a 6U CubeSat/nanosat and each carries components of the final craft, from optical communications to primary telescope mirror. Current thinking is that the design is in the shape of a round disk about 1 meter in diameter and 10 cm thick, with a carbon fiber composite scaffolding. The idea is to assemble the final craft as a stack of these units, producing the final round cylinder.

What a fascinating, gutsy mission concept, and one with the possibility of returning extraordinary data on a nearby exoplanet. The modular approach can be used to enhance redundancy, the authors note, as well as allowing for reconfiguration to reduce the risk of mission failure. Self-assembly leverages current advances in miniaturization, composite materials, and computing as reflected in the proliferation of CubeSat and nanosat technologies. What this engineering study is pointing to is a mission to the solar gravity lens that seems feasible with near-term technologies.

The paper is Helvajian et al., “A mission architecture to reach and operate at the focal region of the solar gravitational lens,” now available as a preprint. The earlier report on the study’s progress is “Resolved imaging of exoplanets with the solar gravitational lens,” (preprint). The Phase II NIAC report on this work is Turyshev & Toth, “Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravity Lens Mission,” Final Report NASA Innovative Advanced Concepts Phase II (2020). Full text.


Getting Down to Business with JWST

So let’s get to work with the James Webb Space Telescope. Those dazzling first images received a gratifying degree of media attention, and even my most space-agnostic neighbors were asking me about what exactly they were looking at. For those of us who track exoplanet research, it’s gratifying to see how quickly JWST has begun to yield results on planets around other stars. Thus WASP-96 b, 1150 light years out in the southern constellation Phoenix, a lightweight puffball planet scorched by its star.

Maybe ‘lightweight’ isn’t the best word. Jupiter is roughly 320 Earth masses, and WASP-96 b weighs in at less than half that, but its tight orbit (0.04 AU, or almost ten times closer to its Sun-like star than Mercury) has puffed its diameter up to 1.2 times that of Jupiter. This is a 3.5-day orbit producing temperatures above 800 ℃.

As you would imagine, this transiting world is made to order for analysis of its atmosphere. To follow JWST’s future work, we’ll need to start learning new acronyms, the first of them being the telescope’s NIRISS, for Near-Infrared Imager and Slitless Spectrograph. NIRISS was a contribution to the mission from the Canadian Space Agency. The instrument measured light from the WASP-96 system for 6.4 hours on June 21.

Parsing the constituents of an atmosphere involves taking a transmission spectrum, which examines the light of a star as it filters through a transiting planet’s atmosphere. This can then be compared to the light of the star when no transit is occurring. As specific wavelengths of light are absorbed during the transit, atmospheric gases can be identified. Moreover, scientists can gain information about the atmosphere’s temperature based on the height of peaks in the absorption pattern, while the spectrum’s overall shape can flag the presence of haze and clouds.

These NIRISS observations captured 280 individual spectra detected in a wavelength range from 0.6 microns to 2.8 microns, thus taking us from red into the near infrared. Even with a relatively large object like a gas giant, the actual blockage of starlight is minute, here ranging from 1.36 percent to 1.47 percent. As the image below reveals, the results show the huge promise of the instrument as we move through JWST’s Cycle 1 observations, nearly a quarter of which are to be devoted to exoplanet investigation.
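As a quick sanity check on those percentages, the transit depth is roughly (Rp/R★)². A minimal back-of-envelope sketch of my own (not from the NIRISS analysis), assuming the star is exactly Sun-sized, puts a 1.2-Jupiter-radius planet right inside the quoted range:

```python
# Toy transit-depth check: depth ≈ (R_planet / R_star)^2.
# Assumption (mine): the host star has exactly the Sun's radius; the real
# star is only approximately Sun-like, so treat this as a rough estimate.

R_JUP_IN_R_SUN = 0.10045          # Jupiter's radius in solar radii

r_planet = 1.2 * R_JUP_IN_R_SUN   # planet radius, in solar radii
depth = r_planet ** 2             # fraction of starlight blocked

print(f"transit depth ≈ {depth * 100:.2f}%")   # ≈ 1.45%, inside 1.36–1.47%
```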

Image: A transmission spectrum is made by comparing starlight filtered through a planet’s atmosphere as it moves across the star, to the unfiltered starlight detected when the planet is beside the star. Each of the 141 data points (white circles) on this graph represents the amount of a specific wavelength of light that is blocked by the planet and absorbed by its atmosphere. The gray lines extending above and below each data point are error bars that show the uncertainty of each measurement, or the reasonable range of actual possible values. For a single observation, the error on these measurements is remarkably small. The blue line is a best-fit model that takes into account the data, the known properties of WASP-96 b and its star (e.g., size, mass, temperature), and assumed characteristics of the atmosphere. Credit: NASA, ESA, CSA, and STScI.

No more detailed infrared transmission spectrum has ever been taken of an exoplanet, and this is the first that includes wavelengths longer than 1.6 microns at such resolution, as well as the first to cover the entire frequency range from 0.6 to 2.8 microns simultaneously. Here we can detect water vapor and infer the presence of clouds, as well as finding evidence for haze in the shape of the slope at the left of the spectrum. Peak heights can be used to deduce an atmospheric temperature of about 725 ℃.

Moving into wavelengths longer than 1.6 microns gives scientists a part of the spectrum well suited to the detection of water, oxygen, methane and carbon dioxide, all of which are expected to be found in other exoplanets observed by the instrument, and a portion of the spectrum not available from predecessor instruments. All this bodes well for what JWST will have to offer as it widens its exoplanet observations.


Just how likely is it that the galaxy is filled with technological civilizations? Kelvin F Long takes a look at the question using diffusion equations to probe the possible interactions among interstellar civilizations. Kelvin is an aerospace engineer, physicist and author of Deep Space Propulsion: A Roadmap to Interstellar Flight (Springer, 2011). He is the Director of the Interstellar Research Centre (UK), has been on the advisory committee of Breakthrough Starshot since its inception in 2016, and was the co-founder of Icarus Interstellar and the Initiative/Institute for Interstellar Studies. He has served as editor of the Journal of the British Interplanetary Society and continues to maintain the Interstellar Studies Bibliography, currently listing some 1400 papers on the subject.

by Kelvin F Long

Many excellent papers have been written about the Fermi paradox over the years, and until we find solid evidence for the existence of life or intelligent life elsewhere in the galaxy the best we can do is to estimate based on what we do know about the nature of the world we live in and the surrounding universe we observe across space and time.

Yet ultimately, to increase the chances of finding life, we need to send robotic probes beyond our solar system to visit the planets around other stars. Whilst telescopes can do a lot of significant science, in principle a probe can conduct in-situ reconnaissance of a system, including orbiters, atmospheric penetrators and even landers.

Currently, the Voyager 1 and 2 probes are taking up the vanguard of this frontier and hopefully in the years ahead more will follow in their wake. Although these are only planetary flyby probes and would take tens of thousands of years to reach the nearest stars, our toes have been dipped into the cosmic ocean at least, and this is a start.

If we can send a probe out into the Cosmos, it stands to reason that other civilizations may do the same. As probes from different civilizations explore space, there is a possibility that they may encounter each other. Indeed, it could be argued that species-species first contact is more likely to occur between their robotic ambassadors than between the original biological organisms that launched them on their vast journeys.

However, the actual probability of two different probes from alternative points of origin (different species) interacting is low. This is for several reasons. The first relates to astrobiology in that we do not yet know how frequent life is in the galaxy. The second relates to the time of departure of the probes within the galaxy’s history. Two probes may appear in the same region of space, but if this happens millions of years apart then they will not meet. Third, and an issue not often discussed in the literature, is the fact that each probe will have a different propulsion system and so its velocity of motion will be different.

As a result, not only do probes have to contend with relativistic effects with respect to their world of origin (particularly if they are going close to the speed of light), but they will also have to deal with the fact that their clocks are not synchronised with each other. The implication is that for probes interacting from civilizations that are far apart, the relativistic effects become so large that they create a complex scenario of temporal synchronization. This becomes more pronounced the larger the number of different probe species, and the larger the differences in their respective average speeds. This is a state we might call ‘temporal spaghettification’, in reference to the complex space-time history of the spacecraft trajectories relative to each other.

An implication of this is that ideas like Isaac Asimov’s Foundation series, where vast empires are constructed across hundreds or thousands of light years of space, do not seem plausible. This is particularly the case for ultra-fast speeds (where relativistic effects dominate) that approach the speed of light. In general, the faster the probe speeds and the further apart the separate civilizations, the more pronounced the effect. In 2016 this author framed the idea as a postulate:

“Ultra-relativistic spaceflight leads to temporal spaghettification and is not compatible with galaxy wide civilizations interacting in stable equilibrium.”

Another consequence of ultra-fast speeds is that if civilizations do interact, it will not be possible to prevent the technology (i.e. power and propulsion) associated with the more advanced race from eventually emerging within the other species at some point in the future. Imagine, for example, if a species turned up with faster than light drives and simply chose to share that technology, even if for a price, as a part of a cultural information exchange.

Should such a culture refuse to share that technology with us, we would likely work towards its fruition anyway. This is because our knowledge of its existence will promote research within our own science to work towards its realisation. Alternatively, knowledge of that technology will eventually just leak out and be known by others.

There is also a statistical argument: if a technology can be invented by one species, it will be invented by another, as a consequence of the law of large numbers. As a result, when one species has this technology and starts interacting with others, eventually many other species will obtain it, even if it takes a long time to mature. We might think of this as a form of technological equilibration, by analogy with thermodynamics.

Ultimately, this implies that it is not possible to contain the information associated with the technology forever once species-species interaction begins. Indeed, it has been discovered that even the gravitational prisons of light (black holes) are leaky through Hawking evaporation. The idea that there is no such thing as a permanently closed system was also previously framed as a second postulate by this author:

“No information can be contained in any system indefinitely.”

Adopting analogues from plasma physics and the concept of distribution functions, we can imagine a scenario in which a galaxy contains multiple populations, each sending out waves of probes at some average expansion velocity. If most of the populations adopted fusion propulsion technology, for example, as their choice of interstellar transport, then the average velocity might be around 0.1c (plausible speeds for fusion propulsion are 0.05-0.15c), and this would then define the peak of a velocity distribution function.

The case of human-carrying ships may be represented by world ships traveling at the slow speeds of 0.01-0.03c. In the scenario of the majority of the populations employing a more energetic propulsion method, such as using antimatter fuel, the peak would shift to the right. In general, the faster the average expansion speed, the further to the right the peak would shift, since the peak represents the average velocity.

The more the populations interacted, the greater the technological equilibration over time, and this could see a gradual shift into the relativistic and then ultra-relativistic (>0.9c) speed regimes. Yet, due to the speed of light limit (~300,000 km/s, or 1c), the peak could only approach that limiting value asymptotically.

There is also the special case of faster-than-light travel (ftl), but by the second postulate if any one civilization develops it then eventually many of the others will also develop it. Then as the mean velocity of many of the galactic populations tends towards some ftl value, you get a situation where many civilizations can now leave the galaxy, creating a massive population expansion outwards, as starships are essentially capable of reaching other galaxies. That population would also be expanding inwards to the other stars within our galaxy since trip times are so short. Indeed, ships would also be arriving from other galaxies due to the ease of travel. But if this were the case, starships would be arriving in Earth orbit by now.

In effect, the more those civilizations interact, the more the average speed of spacecraft in the galaxy would shift to higher speeds, and eventually this average would begin to move asymptotically towards ftl (assuming it is physically possible), which is an effect we might refer to as ‘spatial runaway’ since there is no longer any tendency towards some equilibrium speed limit. In addition, the ubiquity of ftl transport comes with all sorts of implications for communications and causality and in general creates a chaotic scenario that does not lean towards a stable state.

This then leads to the third postulate:

“Faster than light spaceflight leads to spatial runaway, and is not compatible with galaxy wide civilizations interacting in stable equilibrium.”

Each species that is closely interacting may start out with different propulsion systems so that they have an average speed of population expansion, but if technology is swapped there will be some sort of equilibration that will occur such that all species tend towards some mean velocity of population diffusion.

The modeling of a population density of a substance is borrowed from stochastic potential theory, with discrete implementation for the quantization of space and time intervals by the use of average collision parameters. This is analogous to problems such as Brownian motion, where particles undergo a random walk. This can be adopted as an analogy to explain the motion of a population of interstellar probes dispersing through the galaxy from a point of origin.

Modeling population interaction is best done using the diffusion equation of physics, which is derived from Fick’s first and second laws for the dispersion of a material flux, together with the continuity equation. This is a second-order partial differential equation whose solution describes a population that starts at some initial high density and drops off to some low density. The solution is a flux that is a function of both distance and time, proportional to the exponential of the negative distance squared.

Using this physics as a model, it is possible to show that the galaxy can be populated within only a couple of million years, but even faster if the population is growing rapidly, as for instance via von Neumann self-replication. A key part of the use of the diffusion equation is the definition of a diffusion coefficient which is equal to ½(distance squared/time), where the distance is the average collision distance between stars (assumed to be around 5 light years) and time is the average collision time between stars (assumed to be between 50-100 years for 0.05-0.1c average speed). These relatively low cruise speeds were chosen because the calculations were conducted in relation to fusion propulsion designs only.
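The diffusion coefficient defined above is simple to evaluate from the stated assumptions (a 5 light year mean hop between stars, cruise speeds of 0.05-0.1c). A minimal sketch; the variable names are mine:

```python
# Diffusion coefficient as defined in the text: D = (1/2) * d^2 / tau,
# where d is the average collision distance between stars (~5 ly) and
# tau the average collision time between stars for a given cruise speed.

d = 5.0                        # average distance between stars, light years

for v in (0.05, 0.1):          # average cruise speed, as a fraction of c
    tau = d / v                # hop time in years (ly divided by fraction of c)
    D = 0.5 * d**2 / tau       # diffusion coefficient, ly^2 per year
    print(f"v = {v:.2f}c: tau = {tau:.0f} yr, D = {D:.3f} ly^2/yr")
```

The hop times come out at 100 and 50 years respectively, matching the 50-100 year range assumed in the text.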

For probes that on average manufacture just one successor probe (i.e., holding the population steady rather than growing it), this might be seen as analogous to a critical nuclear state. Where the probe reproduction rate drops below unity on average, this is like a sub-critical state, and eventually the probe population will fall off until some stagnation horizon is reached. For example, calculations by this author using the diffusion equation show that with an initial population as large as 1 million probes, each traveling at an average velocity of 0.1c, after about ~1,000 years the population would have stagnated at a distance of approximately ~100 light years.
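As a rough illustration of how such a stagnation horizon emerges, one can set the Gaussian fall-off of the diffusion solution equal to a single surviving probe and solve for distance. This is my own back-of-envelope sketch, not the author's actual calculation; the one-probe threshold is an assumption, so read the result as order-of-magnitude only:

```python
import math

# Sketch: the diffusion solution falls off as phi(x, t) ~ N * exp(-x^2 / (4 D t)).
# Setting phi = 1 (my assumed "last surviving probe" threshold) and solving
# for x gives x = sqrt(4 * D * t * ln N).

D = 0.25          # ly^2 / yr, i.e. 0.1c cruise speed with 5 ly hops
t = 1000.0        # years of diffusion
N = 1e6           # seed population of probes

x = math.sqrt(4 * D * t * math.log(N))
print(f"stagnation horizon ≈ {x:.0f} ly")   # on the order of ~100 ly
```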

If however, the number of probes being produced is greater than unity, such as through self-replicating von Neumann probes, then the population will grow from a low density state to a high density state as a type of geometrical progression. This is analogous to a supercritical state. For example, if each probe produced a further two probes on average from a starting population of 10 probes, then by the 10th generation there would be a total of 10,000 probes in the population.

Assume that there are at least 100 billion stars in the Milky Way galaxy. For the number of von Neumann probes in the population to equal that number of stars would only require a starting population of less than 100 probe factories, with each producing 10 replication probes, and after only 10 generations of replication. This underscores the argument made by some such as Boyce (Extraterrestrial Encounter, A Personal Perspective, 1979) that von Neumann-like replication probes should be here already. The suggestion of self-replicating probes was advanced by Bracewell (The Galactic Club: Intelligent Life in Outer Space, 1975) but has its origins in automata replication and the research of John von Neumann (Theory of Self-Reproducing Automata, 1966).
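The supercritical arithmetic in the last two paragraphs is a straightforward geometric progression; a minimal sketch (the function name is mine):

```python
# Geometric growth of self-replicating probes: each generation, every probe
# produces `offspring` new probes, so after g generations a seed of n0
# probes has grown to n0 * offspring**g.

def population(n0, offspring, generations):
    return n0 * offspring ** generations

# 10 seed probes, 2 offspring each: ~10,000 by the 10th generation.
print(population(10, 2, 10))                # 10240

# 100 factories, 10 offspring each, 10 generations vs. 100 billion stars:
print(population(100, 10, 10) >= 100e9)     # True — exceeds the star count
```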

Any discussion about robotic probes interacting is also a discussion about the number of intelligent civilizations – such probes had to be originally designed by someone. It is possible that these probes are no longer in contact with their originator civilization, which may be many hundreds of light years away. This is why such probes would have to be fully autonomous in their decision making capability. Indeed, it could be argued that the probability of the human species first meeting an artificial intelligence-based robotic probe is more likely than meeting an alien biological organism. It may also be the case that in reality there is no difference, if biological entities have figured out how to go fully artificial and avoid their mortal fate.

Indeed, when considering the future of Homo sapiens and our continued convergence with technology, the science and science fiction writer Arthur C Clarke referred to a new species that would eventually emerge, which he called Homo Electronicus. He depicted it thus:

“One day we may be able to enter into temporary unions with any sufficiently sophisticated machines, thus being able not merely to control but to become a spaceship or a submarine or a TV network… the thrill that can be obtained from driving a racing car or flying an aeroplane may be only a pale ghost of the excitement our great grandchildren may know, when the individual human consciousness is free to roam at will from machine to machine, through all reaches of sea and sky and space.” (Profiles of the Future, 1962).

So even the idea of separating a biological organism from a machine intelligence may be an incorrect description of the likely encounter scenarios of the future. A von Neumann robotic spacecraft could turn up in our orbit tomorrow and from a cultural information exchange perspective there may be no distinction. It is certainly the case that robotic probes are more suited for the environment of space than biological organisms that require a survival environment.

Consider a thought experiment. Assume the galaxy’s disc diameter is 100,000 light years and consider only one dimension of space. A population of probes starts out at one end with an average diffusion wave speed of around 10 percent of the speed of light (0.1c). We assume no stopping and instantaneous time between populations of diffusion waves (in reality, there would be a superposition of diffusion waves propagating as a function of distance and time). This diffusion wave would take on the order of 1 million years to cross from one side of the galaxy to the other. We can continue this thought experiment and imagine that the same population starts at the centre and expands out as a spherical diffusion wave. Assuming that the wave did not dissipate and continued to grow, the time to cover the galactic disc would be approximately half that of a wave starting at one edge.
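The crossing times in this thought experiment reduce to simple arithmetic; a quick sketch with the stated numbers:

```python
# Thought experiment: a wave front moving at an effective 0.1c crossing a
# 100,000 ly disc edge-to-edge, versus a spherical wave starting at the
# centre, which only needs to cover the 50,000 ly radius.

disc_diameter_ly = 100_000
v = 0.1                                  # wave speed as a fraction of c

t_edge = disc_diameter_ly / v            # years, starting at one edge
t_centre = (disc_diameter_ly / 2) / v    # years, starting at the centre

print(f"edge start:   {t_edge:,.0f} yr")     # 1,000,000 yr
print(f"centre start: {t_centre:,.0f} yr")   # 500,000 yr
```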

Now imagine there are two originating civilizations, each sending out populations of probes that continue to grow and do not dissipate. These two civilizations are located at opposite ends of the galaxy. The time for the galaxy to be covered by the two populations will now be half that of a single population starting out at the edge of the disc. We can continue to add populations n=1,2,3,4,5,6… and we get t, t/2, t/4, t/6, t/8, t/10… and we eventually find that for n>1 the series takes the form tn = t0/(2(n−1)), where t0 is the galactic crossing timescale (i.e. 1 million years) for a single initiating population of probes, itself a function of the diffusion wave speed.

For a high number of initiating populations, where n → infinity, the interaction time between populations will be low, so that tn → 0, and the probability of interaction is therefore high. Conversely, for a low number of initiating populations, where n → 1, the interaction time between populations will be high, so that tn → infinity; the timescales between potential interactions are then much larger and the probability of interaction is therefore low.
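The series above can be written as a small function (the 1-million-year crossing timescale is the value assumed in the thought experiment; the function name is mine):

```python
# Interaction timescale for n initiating populations: t_n = t0 / (2*(n-1))
# for n > 1, with t_1 = t0, reproducing the series t, t/2, t/4, t/6, ...

def interaction_time(n, t0=1_000_000):
    """Crossing/interaction timescale in years for n initiating populations."""
    return t0 if n == 1 else t0 / (2 * (n - 1))

for n in range(1, 7):
    print(n, interaction_time(n))
# The timescale shrinks toward zero as n grows, i.e. interaction becomes
# ever more probable with more initiating populations.
```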

It is important to clarify the definition of interaction time used here. The shorter the interaction time, the higher the probability of interaction, since the time between effective overlapping diffusion waves is short. Conversely, where the interaction time is long, the time between overlapping diffusion waves is long and so the probability of interaction is low. The illustrated graphic below demonstrates these limits and the boxes are the results of diffusion calculations and the implications for population interaction.

As discussed by Bond & Martin (‘Is Mankind Unique?’, JBIS 36, 1983), the graphic illustrates two extreme viewpoints about intelligence within the galaxy. The first is known as Drake-Sagan chauvinism and advocates for a crowded galaxy. This has been argued by Shklovskii & Sagan (‘Intelligent Life in the Universe’, 1966), Sagan & Drake (The Search for Extraterrestrial Intelligence, 1975). In the graphic this occurs when n → ∞ , tn → 0, so that the probability of interaction is extremely high.

This is especially so since there is likely to be a large superposition of diffusion waves overlapping each other, an effect that would become more pronounced for multiple populations of vN probes diffusing simultaneously. We note also that an implication of this model for the galaxy is that if there are large populations of probes, then there must have been large populations of civilizations to launch them, which implies that the many steps to complexity in astrobiology are easier than we might believe. In terms of diffusion waves this scenario is characterised by very high population densities, φ(S,t) → ∞, which also implies that the probability of probe-probe interaction is high, p(S,t) → 1. This is box (d) in the graphic.

The second viewpoint is known as Hart-Viewing chauvinism and advocates for a quiet galaxy. This has been argued by Tipler (‘Extraterrestrial Intelligent Beings do not Exist’, 1980), Hart (‘An Explanation for the Absence of Extraterrestrials on Earth’, 1975) and Viewing (‘Directly Interacting Extraterrestrial Technological Communities’, 1975). This occurs when n → 0, tn → ∞, so that the probability of interaction is extremely low. In contrast with the first argument, this might imply that the many steps to complexity in astrobiology are hard. This scenario is characterised by very low population densities such that φ(S,t) → 0 so that few diffusion waves can be expected and also that the probability of interaction is low p(S,t) → 0. This is box (a) in the graphic.

In discussing biological complexity, we are referring to the difficulty of going from single-celled to multi-celled organisms, then to large animals, and then to intelligent life that proceeds towards a state of advanced technological attainment. Biology is considered ‘easy’ if all this happens regularly wherever the environmental conditions for life are met within a habitat. Biology is considered ‘hard’ if, for example, life can emerge purely as a function of chemistry, but building up to more complex life, such as an intelligent life-form that may one day build robotic probes, is far more difficult and less probable. This is the province of astrobiology, which will not be discussed further here; but since the existence of robotic probes requires a starting population of organisms, it has to be mentioned at least.

Given that these two extremes are the limits of the argument, it stands to reason that there must be transition regimes in between, which work either towards or against the existence of intelligence and therefore the probability of interaction. The right set of parameters in this middle ground would best explain the Fermi paradox, in which our theoretical predictions stand in contradiction to our observations.

As shown in the graphic, it comes down to the variance σ² of the statistical distribution of the distances S of a number of probe populations ni within a region of space in a galaxy (not necessarily a whole galaxy), where the variance is the square of the standard deviation σ, taken relative to a mean distance between population sources μS. In other words, it matters whether the originating civilizations that initiated the probe populations are closely compacted or widely spread out.

A region of space with a high probe population density (not spread out; a sharp distribution function) would be characterised by a low variance. A region with a low probe population density (widely distributed; a flattened distribution function) would be characterised by a high variance. The starting interaction time t0 of two separate diffusion waves from independent civilizations would then be proportional to the variance and inversely proportional to the diffusion wave velocity vdw of each population, such that t0 ∝ σ²/vdw.

Going back to the graphic, there comes a point where the number of populations of probes becomes less than some critical number, n < nc, the value of which we do not know; as this threshold is crossed, the interaction time will also increase past a critical value, tn > tc. In box (c) of the graphic, biology is ‘hard’, and so despite the low variance the population density will be less than some critical value, φ(S,t) < φc(S,t), which means that the probability of probe-probe interaction will be low, p(S,t) → 0. This is referred to as a low spatio-temporal distributed galaxy. In box (b) of the graphic, although biology may be ‘easy’, the large variance of the populations makes for a low combined population density, and so also a low probability of probe-probe interaction. This is referred to as a high spatio-temporal distributed galaxy.

Taking all this into account and assessing the Milky Way, we don’t see evidence of a crowded galaxy, which would rule out box (d) in the graphic. In this author’s opinion, the existence and diversity of life on Earth argues, at least, against a fully quiet galaxy (unless one invokes something special about planet Earth). This is indicated in (a). On the basis of all this, we might consider a fourth postulate along the following lines:

“The probability of interaction for advanced technological intelligent civilizations within a galaxy strongly depends on the number of such civilizations, and their spatial-temporal variance.”

Due to the exponential fall-off in the solution of the diffusion wave equation, the various calculations by this author suggest that intelligent life may occur at separations of less than ~200 ly, which for a 100-200 kly diameter galaxy might suggest somewhere in the range of ~500-1,000 intelligent civilizations along a galactic disc. Given the vast numbers of stars in the galaxy this leans towards a sparsely populated galaxy, but one where civilizations do occur. Considering the calculated timescales for interaction, a high probability of von Neumann probes or other types of probes interacting nevertheless remains.
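The spacing estimate reduces to dividing the disc diameter by the typical separation; a trivial check with the stated numbers:

```python
# If intelligent life occurs at typical separations of ~200 ly, a disc of
# diameter 100-200 kly fits roughly 500-1,000 such separations along it.

separation_ly = 200
counts = [d // separation_ly for d in (100_000, 200_000)]
print(counts)   # [500, 1000]
```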

We note that the actual diffusion calculations performed by this author showed that even with a seed population of 1 billion probes, the distance where the population falls off was around ~164 ly. This is not too dissimilar to the independent conclusion of Betinis (“On ETI Alien Probe Flux Density”, JBIS, 1978), who calculated that the sources of probes would likely be somewhere within 70-140 ly. Bond and Martin (‘A Conservative Estimate of the Number of Habitable Planets in the Galaxy’, 1978) also calculated that the average distance between habitable planets was likely ~110 ly, and ~140 ly between planets relevant to intelligent life. Sagan (‘Direct Contact Among Galactic Civilizations by Relativistic Interstellar Spaceflight’, 1963) also calculated that the most probable distance to the nearest extant advanced technical civilization in our galaxy would be several hundred light years. This all implies that an extraterrestrial civilization would lie within several hundred light years, and this therefore is where we should focus search efforts.

When it comes down to the Fermi paradox, this analysis implies that we live in a moderately populated galaxy, and so the probability of interaction is low when considering both the spatial and temporal scales. However, when it comes to von Neumann probes it is clear that the galaxy could potentially be populated in a timescale of less than a million years. This implies they should be here already. As we perhaps ponder recent news stories that are gaining popular attention, we might once again consider the words of Arthur C Clarke in this regard:

“I can never look now at the Milky Way without wondering from which of those banked clouds of stars the emissaries are coming…I do not think we will have to wait for long.” (‘The Sentinel’, 1951).

The content of this article is by this author and appears in a recently accepted 2022 paper for the Journal of the British Interplanetary Society titled ‘Galactic Crossing Times for Robotic Probes Driven by Inertial Confinement Fusion Propulsion’, as well as in an earlier paper published in the same journal titled ‘Unstable Equilibrium Hypothesis: A Consideration of Ultra-Relativistic and Faster than Light Interstellar Spaceflight’, JBIS, 69, 2016.


In a long and discursive paper on self-replicating probes as a way of exploring star systems, Alex Ellery (Carleton University, Ottawa) digs, among many other things, into the question of what we might detect from Earth of extraterrestrial technologies here in the Solar System. The idea here is familiar enough. If at some point in our past, a technological civilization had placed a probe, self-replicating or not, near enough to observe Earth, we should eventually be able to detect it. Ellery believes such probes would be commonplace because we humans are developing self-replication technology even today. Thus a lack of probes would indicate that there are no extraterrestrial civilizations to build them.

There are interesting insights in this paper that I want to explore, some of them going a bit far afield from Ellery’s stated intent, but worth considering for all that. SETA, the Search for Extraterrestrial Artifacts, is a young endeavor but a provocative one. Here self-replication attracts the author because probing a stellar system is a far different proposition than colonizing it. In other words, exploration per se — the quest for information — is a driver for exhaustive seeding of probes not limited by issues of sustainability or sociological constraints. Self-replication, he believes, is the key to exponential exploration of the galaxy at minimum cost and greatest likelihood of detection by those being studied.

Image: The galaxy Messier 101 (M101, also known as NGC 5457 and nicknamed the ‘Pinwheel Galaxy’) lies in the northern circumpolar constellation, Ursa Major (The Great Bear), at a distance of about 21 million light-years from Earth. This is one of the largest and most detailed photos of a spiral galaxy that has been released from Hubble. How long would it take a single civilization to fill a galaxy like this with self-replicating probes? Image credit: NASA/STScI.

Growing the Idea of Self-Reproduction

Going through the background to ideas of self-replication in space, Ellery cites the pioneering work of Robert Freitas, and here I want to pause. It intrigues me that Freitas, the man who first studied the halo orbits around the Earth-Moon L4 and L5 points looking for artifacts, is also responsible for one of the earliest studies of machine self-replication in the form of the NASA/ASEE study in 1980. The latter had no direct interstellar intent but rather developed the concept of a self-replicating factory on the lunar surface using resources mined by robots. Freitas would go on to explore REPRO, a robot factory coupled to a Daedalus-class starship, though one taken to the next level and capable of decelerating at the target star, where the factory would grow to its full capabilities upon landing.

I should mention that following REPRO, Freitas would turn his attention to nanotechnology, a world where payload constraints are eased and self-reproduction occurs at the molecular level. But let’s stick with REPRO a moment longer, even though I’m departing from Ellery in doing so. For in Freitas’ original concept, half the REPRO payload would be devoted to self-reproduction, with a specialized payload exploiting the resources of a gas giant moon to produce a new REPRO probe every 500 years.

As you can see, the REPRO probe would have taken Project Daedalus’ onboard autonomy to an entirely new level. Freitas’ studies foresaw thirteen distinct robot species, among them chemists, miners, metallurgists, fabricators, assemblers, wardens and verifiers. Each would have a role to play in the creation of the new probe. The chemist robots, for example, were to process ore and extract the heavy elements needed to build the factory on the moon of the gas giant planet. Aerostat robots would float like hot-air balloons in the gas giant’s atmosphere, where they would collect the needed propellants for the next generation REPRO probe. Fabricators would turn raw materials (produced by the metallurgists) into working parts, from threaded bolts to semiconductor chips, while assemblers created the modules that would build the initial factory. Crawler robots would specialize in surface hauling, while wardens, as with Project Daedalus, remained responsible for maintenance and repair of ship systems.

I spend so much time on this because of my fascination with the history of interstellar ideas. In any case, I know of no study earlier than Freitas’ 1980 JBIS paper “A Self-Reproducing Interstellar Probe” that explored self-reproduction in the interstellar context and in terms of mission hardware; the paper is conveniently available online. This was a step forward in interstellar studies, and I want to highlight it with this quotation from its text:

A major alternative to both the Daedalus flyby and “Bracewell probe” orbiter is the concept of the self-reproducing starprobe. Replicating spacefaring machines recently have received a cursory examination by Calder [4] and Boyce [5], but the basic feasibility of this approach has never been seriously considered despite its tremendous potential. In theory, each self-reproducing device dispatched by the launching society would become an independent agent, slowly scouting the Galaxy for evidence of life, intelligence and civilization. While such machines might be costlier to design and construct, given sufficient time a relatively few replicating starprobes could search the entire Milky Way.

The present paper addresses the plausibility of self-reproducing starprobes and the basic parameters of feasibility. A subsequent paper [10] compares reproductive and nonreproductive probe search strategies for missions of interstellar and galactic exploration.

Hart, Tipler and the Spread of Intelligence

These days, as Freitas went on to explore, massive redundancy, miniaturization and self-assembly at the molecular level have moved into tighter focus as we contemplate missions to the stars. The enormous Daedalus-style craft (54,000 tonnes initial mass, including 50,000 tonnes of fuel and 500 tonnes of scientific payload) and its successors, while historically important, resonate a bit with Captain Nemo’s Nautilus: spectacular creations of the imagination that defied no laws of physics but remain in tension with the realities of payload and propulsion. Hence the current turn toward miniaturization, with Breakthrough Starshot’s tiny payloads as one example.

But back to Ellery. From a philosophical standpoint, self-reproduction, he rightly points out, had also been considered by Michael Hart and Frank Tipler, each noting that if self-replication were possible, a civilization could fill the galaxy in a timeframe short compared to the age of the galaxy. Self-reproducing probes exploit local materials upon arrival and make copies of themselves, a wave of exploration that would ensure every habitable planet had an attendant probe. Thus the Hart/Tipler contention that the lack of evidence for such a probe is an indication that extraterrestrial intelligence does not exist, an idea that still has currency.

Would any exploring civilization turn to self-replication? The author sees many reasons to do so:

There are numerous reasons to send out self-replicating probes – reconnaissance prior to interstellar migration, first-mover advantage, insurance against planetary disaster, etc – but only one not to – indifference to information growth (which must apply to all extant ETI without exception). Self-replicating probes require minimal capital investment and represent the most economical means to explore space, interstellar space included. In a real sense, self-replicating machines cannot become obsolete – new design developments can be broadcast and uploaded to upgrade them when necessary. Once the self-replicating probe is established in a star system, the probe may be exploited in various ways. The universal construction capability ensures that the self-replicating probe can construct any other device.

Probes that fill the galaxy extract maximum information, and can not only monitor local species but also communicate with them. Should a civilization choose to implement panspermia in systems devoid of life, the capability is implicit here, including “the prospect of exploiting microorganism DNA as a self-replicating message.” Such probes could also, in the event of colonization at a later period, establish needed infrastructure for the new arrivals, with the possibility of terraforming.

Thus probes like these become a route from Kardashev II to III. In fact, as Ellery sees it, if a Kardashev Type I civilization is capable of self-reproduction technology – and remember, Ellery believes we are on the cusp of it now – then the entire Type I phase may be relatively short on the way to Kardashev Types II and III, perhaps as little as a few thousand years. It’s an interesting thought given our current status somewhere around Kardashev 0.72, beset by problems of our own making and wondering whether we will survive long enough to establish a Type I civilization.
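That 0.72 figure traces back to Carl Sagan’s interpolation of the Kardashev scale, K = (log₁₀ P − 6) / 10, with P the civilization’s total power use in watts. A quick sketch of the arithmetic (the roughly 17-terawatt figure for humanity’s present power consumption is my own illustrative assumption, not a number from Ellery’s paper):

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use is on the order of 1.7e13 W (assumed figure),
# which lands us a little short of three-quarters of the way to Type I.
print(round(kardashev(1.7e13), 2))   # ~0.72

# A full Type I civilization corresponds to ~1e16 W by this convention.
print(round(kardashev(1e16), 2))     # 1.0
```

Note how slowly the scale moves: each step of 0.1 on the K axis is a full order of magnitude in power.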

Image: NASA’s James Webb Space Telescope has produced the deepest and sharpest infrared image of the distant universe to date. Known as Webb’s First Deep Field, this image of galaxy cluster SMACS 0723 is overflowing with detail. Thousands of galaxies – including the faintest objects ever observed in the infrared – have appeared in Webb’s view for the first time. This slice of the vast universe covers a patch of sky approximately the size of a grain of sand held at arm’s length by someone on the ground. If self-reproducing probes are possible, are all galaxies likely to be explored by other civilizations? Credit: NASA, ESA, CSA, and STScI.

Early Days for SETA

The question of diffusion through the galaxy here gets a workover from TRIZ (Teorija Reshenija Izobretatel’skih Zadach, the “theory of inventive problem solving”), which Ellery uses to analyze the implications of self-reproduction, finding that the entire galaxy could be colonized within 24 probe generations. This produces a population of 424 billion probes. He assumes a short replication time at each stop – a few years at most – and thus finds that the spread of such probes is dominated by the transit time across the galactic plane, a million-year process to complete assuming travel at a tenth of lightspeed.
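Ellery’s figures are easy to sanity-check. If 24 generations yield roughly 424 billion probes, each probe must spawn about three offspring per generation; and a 100,000 light-year galactic disk crossed at 0.1 c does indeed take a million years. A rough sketch (the branching factor is inferred from the quoted totals, not stated explicitly in the paper):

```python
# Sanity-check of the figures quoted above: ~424 billion probes after
# 24 generations, with spread limited by transit time rather than replication.

TARGET_PROBES = 424e9
GENERATIONS = 24

# Branching factor b such that b ** GENERATIONS ≈ TARGET_PROBES
b = TARGET_PROBES ** (1 / GENERATIONS)
print(f"offspring per probe per generation: {b:.2f}")   # ~3.05

# Transit time dominates: ~100,000 ly across the disk at a tenth of lightspeed
GALAXY_DIAMETER_LY = 100_000
SPEED_FRACTION_C = 0.1
print(f"crossing time: {GALAXY_DIAMETER_LY / SPEED_FRACTION_C:,.0f} years")
```

Only three copies per probe per stop is a strikingly modest requirement for galaxy-wide coverage, which is the force of the exponential argument.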

Given this short timespan compared with the age of the Galaxy, our Galaxy should be swarming with self-replicating probes yet there is no evidence of them in our solar system. Indeed, it only requires a civilization to exist long enough to send out such probes as they would thenceforth continue to propagate through the Galaxy even if the sending civilization were no more. And of course, it requires only one ETI to do this.

Part of Ellery’s intent is to show how humans might create a self-replicating probe, going through the essential features of such and arguing that self-replication is near- rather than long-term, based on the idea of the universal constructor, a machine that builds any or all other machines including itself. Here we find intellectual origins in the work of Alan Turing and John von Neumann. Ellery digs into 3D printing and ongoing experiments in self-assembly as well as in-situ resource utilization of asteroid material, and along the way he illustrates probe propulsion concepts.

At this stage of the game in SETA, there is no evidence of self-replication or extraterrestrial probes of any kind, the author argues:

There is no observational evidence of large structures in our solar system, nor signs of large-scale mining and processing, nor signs of residue of such processes. Our current terrestrial self-replication scheme with its industrial ecology is imposed by the requirements for closure of the self-replication loop that (i) minimizes waste (sustainability) to minimize energy consumption; (ii) minimizes materials and components manufacture to minimize mining; (iii) minimizes manufacturing and assembly processes to minimize machinery. Nevertheless, we would expect extensive clay residues. We conclude therefore that the most tenable hypothesis is that ETI do not exist.

The answer to that contention is, of course, that we haven’t searched for local probes in any coordinated way. Now that we are becoming capable of minute analysis of, for instance, the lunar surface (through Lunar Reconnaissance Orbiter imagery, for one), we can become more systematic in the investigation, taking in Earth co-orbitals, as Jim Benford has suggested, or looking for signs of lurkers in the asteroid belt. Ellery notes that the latter might demand searching for signs of resource exploitation there as opposed to finding an individual probe amidst the plethora of candidate objects.

But Ellery is adamant that efforts to find such lurkers should continue, noting that SETA has so far been a meager and sporadic endeavor. I’m going to recommend this paper to those Centauri Dreams readers who want to get up to speed on the scholarship on self-reproduction and its consequences. The ideas jammed into its pages come at a bewildering pace, but the scholarship is thorough and the references handy to have in one place. Whether self-reproducing probes are indeed imminent is a matter for debate, but their implications demand our attention.

The paper is Ellery, “Self-replicating probes are imminent – implications for SETI,” International Journal of Astrobiology 8 July 2022 (full text). A companion paper published at the same time is “Curbing the fruitfulness of self-replicating machines,” International Journal of Astrobiology 8 July 2022 (full text).