Methane as Biosignature: A Conceptual Framework

A living world around another star will not be an easy catch, no matter how sophisticated the coming generation of space- and ground-based telescopes turns out to be. It’s one thing to develop the tools to begin probing an exoplanet atmosphere, but quite another to be able to say with any degree of confidence that the result we see is the result of biology. When we do begin picking up an interesting gas like methane, we’ll need to evaluate the finding against other atmospheric constituents, and the arguments will fly about non-biological sources for what might be a biosignature.

This is going to begin playing out as the James Webb Space Telescope turns its eye on exoplanets, and methane is the one potential sign of life that should be within its range. We know that oxygen, ozone, methane and carbon dioxide are produced through biological activity on Earth, and we also know that each can be produced in the absence of life. The simultaneous presence of such gases is what would intrigue us most, but the opening round of biosignature detection will be methane. Here I’ll quote Maggie Thompson, who is a graduate student in astronomy and astrophysics at UC Santa Cruz and lead author of a new study on methane in exoplanet atmospheres:

“Oxygen is often talked about as one of the best biosignatures, but it’s probably going to be hard to detect with JWST. We wanted to provide a framework for interpreting observations, so if we see a rocky planet with methane, we know what other observations are needed for it to be a persuasive biosignature.”

The problem of interpretation is huge, given how many processes produce methane. The study Thompson is discussing has just appeared in Proceedings of the National Academy of Sciences, addressing phenomena ranging from volcanic activity, hydrothermal vents and tectonic subduction zones to asteroid and comet impacts. There are many ways to produce methane, but because it is unstable in an atmosphere and easily destroyed by photochemical reactions, it must be replenished to remain at high levels. Thus the authors look for clues as to how that replenishment works and how to distinguish these processes from signs of life.

Image: Methane in a planet’s atmosphere may be a sign of life if non-biological sources can be ruled out. This illustration summarizes the known abiotic sources of methane on Earth, including outgassing from volcanoes, reactions in settings such as mid-ocean ridges, hydrothermal vents, and subduction zones, and impacts from asteroids and comets. Credit: © 2022 Elena Hartley.

Current methods for studying exoplanet atmospheres rely on transits: as the planet crosses the face of its host star, some of the starlight passes through the planet’s atmosphere, which absorbs it at wavelengths that offer clues to its composition. To do this well, we need relatively quiet stars with little flare activity. M-dwarfs are great targets for this kind of work because of their small size, which makes the transit depth of a rocky planet in the habitable zone relatively large and the signal stronger. It’s also useful that small red stars represent as much as 80% of all stars in the galaxy (for a deep dive into this question, see Alex Tolley’s Red Dwarfs: Their Impact on Biosignatures).
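To see why small stars help, here is a back-of-envelope sketch of transit depth, the fraction of starlight a transiting planet blocks. The stellar radii are illustrative round numbers of my own, not values from any particular survey:

```python
# Transit depth is approximately (R_planet / R_star)^2: the fraction of
# the stellar disk the planet covers during transit.
R_SUN_KM = 696_000
R_EARTH_KM = 6_371

def transit_depth(r_planet_km, r_star_km):
    """Fractional dip in stellar flux (simple geometric approximation)."""
    return (r_planet_km / r_star_km) ** 2

# Earth-sized planet around a Sun-like star vs. a late M-dwarf (~0.15 R_sun).
print(f"{transit_depth(R_EARTH_KM, R_SUN_KM):.1e}")         # ~8.4e-05 (84 ppm)
print(f"{transit_depth(R_EARTH_KM, 0.15 * R_SUN_KM):.1e}")  # ~3.7e-03, ~44x deeper
```

That factor of roughly 44 in signal strength is the whole argument for M-dwarf transit spectroscopy in miniature.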

Context will be the key in the hunt for biosignatures, with false positives a persistent danger. Outgassing volcanoes should add not only methane but also carbon monoxide to the atmosphere, while biological activity should consume carbon monoxide. The authors argue that it would be difficult for non-biological processes to produce an atmosphere rich in both methane and carbon dioxide with little carbon monoxide.

Thus a small, rocky world in the habitable zone will need to be evaluated in terms of its geochemistry and its geological processes, not to mention its interactions with its host star. Find atmospheric methane there and it is more likely to be an indication of life if the atmosphere also shows carbon dioxide, the methane is more abundant than carbon monoxide, and the planet is not extremely rich in water. The paper is an attempt to build a framework not just for flagging false positives but for identifying real biosignatures that might otherwise be overlooked.
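That chain of conditions lends itself to a checklist. Here is a minimal sketch of the reasoning as I read it; the function name and input choices are my own framing, and the thresholds are illustrative placeholders rather than values from the study:

```python
def screen_methane_biosignature(ch4, co2, co, water_mass_fraction):
    """Qualitative screen following the paper's logic: CH4 plus CO2 with
    little CO on a rock-dominated planet is hard to explain abiotically.
    Inputs are atmospheric mixing ratios plus a bulk water mass fraction."""
    if ch4 <= 0 or co2 <= 0:
        return "weak: need detections of both CH4 and CO2"
    if water_mass_fraction > 0.01:
        return "weak: a water-rich world can sustain abiotic CH4 for long periods"
    if co >= ch4:
        return "weak: abundant CO suggests abiotic sources (life tends to consume CO)"
    return "promising: CH4 + CO2, CO-poor, rocky -- worth follow-up observation"
```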

Making the process even more complex is the fact that the scope of abiotic methane production on a planetary scale is not fully understood. Even so, the authors argue that while various abiotic mechanisms can replenish methane, it is hard to produce a methane flux comparable to Earth’s biogenic flux without creating clues that signal a false positive. Here we’re at the heart of things; let me quote the paper:

…we investigated whether planets with very reduced mantles and crusts can generate large methane fluxes via magmatic outgassing and assessed the existing literature on low-temperature water-rock and metamorphic reactions, and, where possible, determined their maximum global abiotic methane fluxes. In every case, abiotic processes cannot easily produce atmospheres rich in both CH4 and CO2 with negligible CO due to the strong redox disequilibrium between CO2 and CH4 and the fact that CO is expected to be readily consumed by life. We also explored whether habitable-zone exoplanets that have large volatile inventories like Titan could have long lifetimes of atmospheric methane. We found that, for Earth-mass planets with water mass fractions that are less than ∼1% of the planet’s mass, the lifetime of atmospheric methane is less than ∼10 Myr, and observational tools can likely distinguish planets with larger water mass fractions from those with terrestrial densities.

Let’s also recall that when searching for biosignatures, terms like ‘Earth-like’ are easy to misuse. Today’s atmosphere is mostly nitrogen and oxygen, with carbon dioxide a trace constituent, but we know that over geological time the atmosphere has changed profoundly. The early Earth would have been shrouded in hydrogen and helium, with volcanic eruptions producing carbon dioxide, water vapor and sulfur. The Great Oxidation Event some two and a half billion years ago brought oxygen levels up. We thus have to remember where a given exoplanet may be in its own process of development as we evaluate it.

So by all means let’s hope we one day find something like a simultaneous detection of oxygen and methane, two gases that ought not to co-exist unless there were a sustaining process (life) to keep them present. An out-of-equilibrium chemistry is intriguing, because life wants to throw chemical stability out of whack. And by all means let’s accelerate our work in the direction of biosignature analysis to root out those false positives. We begin with methane because that is what JWST can most readily detect.

And as to the question of ambiguity in life detection, JWST is unlikely to be able to detect atmospheric oxygen and ozone, nor will it be a reliable probe of water vapor, so its ability to make the call on habitability is limited. Going forward, the authors think that if the instrument detects significant methane and carbon dioxide and can constrain the ratio of carbon monoxide to methane, this will motivate future instruments like the ground-based Extremely Large Telescopes to follow up these observations. It will take observational tools in combination to nail down methane as a biosignature, but the ELTs should be well placed to take the next step forward.

The paper is Thompson et al., “The case and context for atmospheric methane as an exoplanet biosignature,” Proceedings of the National Academy of Sciences 119 (14) (March 30, 2022). Abstract. See also Krissansen-Totton et al., “Understanding planetary context to enable life detection on exoplanets and test the Copernican principle,” Nature Astronomy 6 (2022), 189-198 (abstract).


SETI as Exploration

Early exoplanet detections always startled my friends outside the astronomical community. Anxious for a planet something like the Earth, they found themselves looking at a ‘hot Jupiter’ like 51 Pegasi b, which at the time seemed like little more than a weird curiosity. A Jupiter-like planet hugging a star? More hot Jupiters followed, which led to the need to explain how exoplanet detection worked with radial velocity methods, and why big planets close to their star should turn up early in the hunt.

Earlier, there were the pulsar planets, as found by Aleksander Wolszczan and Dale Frail around the pulsar PSR B1257+12 in the constellation Virgo. These were interestingly small, but obviously accumulating a sleet of radiation from their primary. Detected a year later, PSR B1620-26 b was found to orbit a white dwarf/pulsar binary system. But these odd detections some 30 years ago actually made the case for the age of exoplanet discovery that was about to open, a truly golden era of deep space research.

Aleksander Wolszczan himself put it best: “If you can find planets around a neutron star, planets have to be basically everywhere. The planet production process has to be very robust.”

Indeed. With NASA announcing another 65 exoplanets added to its Exoplanet Archive, we now take the tally of confirmed planets up past 5,000, their presence firmed up by multiple detection methods or by analytical techniques. These days, of course, the quickly growing catalog is made up of all kinds of worlds, from those gas giants near their stars to the super-Earths that seem to be rocky worlds larger than our Earth, and the intriguing ‘mini-Neptunes,’ which seem to slot into a category of their own. And let’s not forget those interesting planets on circumbinary orbits in multiple star systems.

Wolszczan is quoted in a NASA news release as saying that life is an all but certain find – “most likely of some primitive kind” – for future instrumentation like ESA’s ARIEL mission (launching in 2029), the James Webb Space Telescope, or the Nancy Grace Roman Space Telescope, which will launch at the end of the decade. These instruments should be able to take us into exoplanet atmospheres, where we can start taking apart their composition in search of biosignatures. This, in turn, will open up whole new areas of ambiguity, and I predict a great deal of controversy over early results.

Image: The more than 5,000 exoplanets confirmed in our galaxy so far include a variety of types – some that are similar to planets in our Solar System, others vastly different. Among these are a mysterious variety known as “super-Earths” because they are larger than our world and possibly rocky. Credit: NASA/JPL-Caltech.

But what about life beyond the primitive? I noticed a short essay by Seth Shostak recently published by the SETI Institute which delves into why we humans seem fixated on finding not just exo-biology but exo-intelligence. Shostak digs into the act of exploration itself as a justification for this quest, pointing out that experiments to find life around other stars are not science experiments as much as searches. After all, there is no way to demonstrate that life does not exist, so the idea of a profoundly biologically-infused universe is not something that any astronomer can falsify.

So is exploration, rather than science, a justification for SETI? Surely the answer is yes. Exploration usually mixes with commercial activity – Shostak’s example is the voyages of James Cook, who served the British admiralty by looking for trade routes and mapping hitherto uncharted areas of the southern ocean. Was there a new continent to be found somewhere in this immensity, a Terra Australis, which some cartographers had been placing on maps to balance the land-heavy northern hemisphere? The idea was ancient but still had life in Cook’s time.

In our parlous modern world, we make much of the downside of enterprises once considered heroic, noting their depredations in the name of commerce and empire. But we shouldn’t overlook the scope of their accomplishment. Says Shostak:

Exploration has always been important, and its practical spin-offs are often the least of it. None of the objectives set by the English Admiralty for Cook’s voyages was met. And yes, the exploration of the Pacific often left behind death, disease and disruption. But two-and-a-half centuries later, Cook’s reconnaissance still has the power to stir our imagination. We thrill to the possibility of learning something marvelous, something that no previous generation knew.

Image: The routes of Captain James Cook’s voyages. The first voyage is shown in red, second voyage in green, and third voyage in blue. The route of Cook’s crew following his death is shown as a dashed blue line. Credit: Wikimedia Commons / Jon Platek. CC BY-SA 3.0.

Shostak’s mention of Cook reminds me of the Conference on Interstellar Migration, held way back in 1983 at Los Alamos, where anthropologist Ben Finney and astrophysicist Eric Jones, who had organized the interdisciplinary meeting, discussed humans as what they called “The Exploring Animal.” Like Konrad Lorenz, Finney and Jones saw the exploratory urge as an outcome of evolution that inevitably pushed people into new places out of innate curiosity. The classic example, discussed by the duo in a separate paper, was the peopling of the Pacific in waves of settlement, as these intrepid sailors set off, navigating by the stars, the wind, the ocean swells, and the flight of birds.

The outstanding achievement of the Stone Age? Finney and Jones thought so. In my 2004 book Centauri Dreams, I reflected on how often the exploratory imperative came up as I talked with interstellar-minded writers, physicists and engineers:

The maddening thing about the future is that while we can extrapolate based on present trends, we cannot imagine the changes that will make our every prediction obsolete. It is no surprise to me that in addition to their precision and, yes, caution, there is a sense of palpable excitement among many of the scientists and engineers with whom I talked. Their curiosity, their sense of quest, is the ultimate driver for interstellar flight. A voyage of a thousand years seems unthinkable, but it is also within the span of human history. A fifty-year mission is within the lifetime of a scientist. Somewhere between these poles our first interstellar probe will fly, probably not in our lifetimes, perhaps not in this century. But if there was a time before history when the Marquesas seemed as remote a target as Alpha Centauri does today, we have the example of a people who found a way to get there.

I’ve argued before that exploration is not an urge that can be tamped down, nor is it one that needs to be exercised by a large percentage of the population to shape outcomes that can be profound. To return to the Cook era, the people who joined the voyages that took Europeans to the Pacific islands, Australia and New Zealand were the exceptions, the few who left what they knew behind (some, of course, were forced to go by the legal apparatus of the time). The point is: It doesn’t take mass human colonization to drive our eventual spread off-planet. It does take inspired and determined individuals, and history yields no shortage of these.

The 1983 conference in Los Alamos is captured in the book Interstellar Migration and the Human Experience, edited by Ben R. Finney and Eric M. Jones (Berkeley: University of California Press, 1985), an essential title in our field.


A Hybrid Interstellar Mission Using Antimatter

Epsilon Eridani has always intrigued me because in astronomical terms, it’s not all that far from the Sun. I can remember as a kid noting which stars were closest to us – the Centauri trio, Tau Ceti and Barnard’s Star – wondering which of these would be the first to be visited by a probe from Earth. Later, I thought we would have quick confirmation of planets around Epsilon Eridani, since it’s a scant (!) 10.5 light years out, but despite decades of radial velocity data, astronomers have only found one gas giant, and even that confirmation was slowed by noise-filled datasets.

Even so, Epsilon Eridani b is confirmed. Also known as Ægir (named for a figure in Old Norse mythology), it’s in a 3.5 AU orbit, circling the star every 7.4 years, with a mass somewhere between 0.6 and 1.5 times that of Jupiter. But there is more: We also get two asteroid belts in this system, as Gerald Jackson points out in his new paper on using antimatter for deceleration into nearby star systems, as well as another planet candidate.

Image: This artist’s conception shows what is known about the planetary system at Epsilon Eridani. Observations from NASA’s Spitzer Space Telescope show that the system hosts two asteroid belts, in addition to previously identified candidate planets and an outer comet ring. Epsilon Eridani is located about 10 light-years away in the constellation Eridanus. It is visible in the night skies with the naked eye. The system’s inner asteroid belt appears as the yellowish ring around the star, while the outer asteroid belt is in the foreground. The outermost comet ring is too far out to be seen in this view, but comets originating from it are shown in the upper right corner. Credit: NASA/JPL-Caltech/T. Pyle (SSC).

This is a young system, estimated at less than one billion years. For both Epsilon Eridani and Proxima Centauri, deceleration is crucial for entering the planetary system and establishing orbit around a planet. The amount of antimatter available will determine our deceleration options. Assuming a separate method of reaching Proxima Centauri in 97 years (perhaps beamed propulsion getting the payload up to 0.05c), we need 120 grams of antiproton mass to brake into the system. A 250 year mission to Epsilon Eridani at this velocity would require the same 120 grams.
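The mission durations quoted above are easy to sanity-check against pure cruise time. A minimal sketch, assuming a constant cruise velocity and ignoring the acceleration and deceleration phases (which is why Jackson’s figures run longer):

```python
def cruise_years(distance_ly, v_over_c):
    """Coast time in years: light-years divided by speed as a fraction of c."""
    return distance_ly / v_over_c

print(cruise_years(4.24, 0.05))   # Proxima Centauri: ~85 yr of pure coasting
print(cruise_years(10.5, 0.05))   # Epsilon Eridani: ~210 yr of pure coasting
# Jackson's 97- and 250-year mission times add the acceleration phase and
# the decade-long antimatter deceleration burn that this sketch omits.
```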

Thus we consider the twin poles of difficulty when it comes to antimatter, the first being how to produce enough of it (current production levels are measured in nanograms per year), the second how to store it. Jackson, who has long championed the feasibility of upping our antimatter production, thinks we need to reach 20 grams per year before we can start thinking seriously about flying one of these missions. But as both he and Bob Forward have pointed out, there are reasons why we produce so little now, and reasons for optimism about moving to a dedicated production scenario.

Past antiproton production was constrained by the need to produce antiproton beams for high energy physics experiments, requiring strict longitudinal and transverse beam characteristics. Their solution was to target a 120 GeV proton beam into a nickel target [41] followed by a complex lithium lens [42]. The world record for the production of antimatter is held by the Fermilab. Antiproton production started in 1986 and ended in 2011, achieving an average production rate of approximately 2 ng/year [43]. The record instantaneous production rate was 3.6 ng/year [44]. In all, Fermilab produced and stored 17 ng of antiprotons, over 90% of the total planetary production.

Those are sobering numbers. Can we cast antimatter production in a different light? Jackson suggests using our accelerators in a novel way, colliding two proton beams in an asymmetric collider scenario, in which one beam is given more energy than the other. The result will be a coherent antiproton beam that, moving downstream in the collider, is subject to further manipulation. This colliding beam architecture makes for a less expensive accelerator infrastructure and sharply reduces the costs of operation.

The theoretical costs for producing 20 grams of antimatter per year assume a production facility powered by a 7 km x 7 km solar array, sufficient to supply all of the needed 7.6 GW of facility power. Using present-day costs for solar panels, the capital cost for this power plant comes in at $8 billion (i.e., the cost of 2 SLS rocket launches). $80 million per year covers operation and maintenance. Here’s Jackson on the cost:

…3.3% of the proton-proton collisions yields a useable antiproton, a number based on detailed particle physics calculations [45]. This means that all of the kinetic energy invested in 66 protons goes into each antiproton. As a result, the 20 g/yr facility would theoretically consume 6.7 GW of electrical power (assuming 100% conversion efficiencies). Operating 24/7 this power level corresponds to an energy usage of 67 billion kW-hrs per year. At a cost of $0.01 per kW-hr the annual operating cost of the facility would be $670 million. Note that a single Gerald R. Ford–class aircraft carrier costs $13 billion! The cost of the Apollo program adjusted for 2020 dollars was $194 billion.
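The arithmetic in that passage can be reproduced in a few lines. A sketch using the 7.6 GW facility figure quoted earlier, which is the draw that yields the 67 billion kWh in the quote (the 6.7 GW number assumes perfect conversion efficiency):

```python
facility_power_gw = 7.6                 # total facility draw from the text
hours_per_year = 365.25 * 24            # continuous 24/7 operation

energy_kwh = facility_power_gw * 1e6 * hours_per_year   # GW -> kW, then kWh
print(f"{energy_kwh / 1e9:.0f} billion kWh/yr")         # ~67 billion

cost_per_kwh = 0.01                     # $/kWh, as in the quote
print(f"${energy_kwh * cost_per_kwh / 1e6:.0f}M/yr")    # ~$670M operating cost
```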

Science Along the Way

Launching missions that take decades, and in some cases centuries, to reach their destination calls for good science return wherever possible, and Jackson argues that an interstellar mission will determine a great deal about its target star just by aiming for it. Past missions like New Horizons could count on positioning information for targets like Pluto and Arrokoth, derived from Earth-based observation, being uploaded to the spacecraft computers. Our interstellar craft will need more advanced tools. It will have to be capable of making its own astrometric observations, feeding its calculations to the propulsion system for deceleration into the target system and orbital insertion, refining exoplanet parameters on the fly.

Remember that what we are considering is a hybrid mission, using one form of propulsion to attain interstellar cruise velocity, and antimatter as the method for deceleration. You might recall, for example, the starship ISV Venture Star in the film Avatar, which uses both antimatter engines and a photon sail. What Jackson has added to the mix is a deep dive into the possibilities of antimatter for turning what would have been a flyby mission into a long-lasting planet orbiter.

Let’s consider what happens along the line of flight as a spacecraft designed with these methods makes its way out of the Solar System. If we take a velocity of 0.02c, our spacecraft passes the outgoing Voyager and Pioneer spacecraft in two years, and within three more years it passes into the gravitational lensing regions of the Sun beginning at 550 AU. A mere five years has taken the vehicle through the Kuiper Belt and moved it out toward the inner Oort Cloud, where little is currently known about such things as the actual density distribution of Oort objects as a function of radius from the Sun. We can also expect to gain data on any comparable cometary clouds around Proxima Centauri or Epsilon Eridani as the spacecraft continues its journey.

By Jackson’s calculations, when we’re into the seventh year of such a mission, we are encountering Oort Cloud objects at a pretty good clip, with an estimated 450 Oort objects within 0.1 AU of its trajectory based on current assumptions. Moving at 1 AU every 5.6 hours, we can extrapolate an encounter rate of one object per month over a period of three decades as the craft transits this region. Jackson also notes that data on the interstellar medium, including the Local Interstellar Cloud, will be prolific, including particle spectra, galactic cosmic ray spectra, dust density distributions, and interstellar magnetic field strength and direction.
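Those encounter numbers follow from simple bookkeeping, sketched below with the paper’s figures. Note that at exactly 0.02c an AU takes closer to 6.9 hours, so the 5.6-hour figure in the text presumably reflects a slightly higher post-burn velocity:

```python
AU_PER_LY = 63_241
HOURS_PER_YEAR = 365.25 * 24

v = 0.02                                   # cruise velocity, fraction of c
au_per_year = v * AU_PER_LY                # ~1,265 AU covered per year
print(f"{HOURS_PER_YEAR / au_per_year:.1f} h per AU")   # ~6.9 h at exactly 0.02c

# Encounter rate from the paper's estimate: ~450 Oort objects passing
# within 0.1 AU of the trajectory during a ~30-year transit of the cloud.
print(f"{450 / (30 * 12):.2f} encounters per month")    # ~1.25, i.e. ~1/month
```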

Image: This is Figure 7 from the paper. Caption: Potential early science return milestones for a spacecraft undergoing a 10-year acceleration burn with a cruise velocity of 0.02c. Credit: Gerald Jackson.

It’s interesting to compare science return over time with what we’ve achieved with the Voyager missions. Voyager 2 reached Jupiter about two years after launch in 1977, and passed Saturn in four. It would take twice that time to reach Uranus (8.4 years into the mission), while Neptune was reached after 12. Voyager 2 entered the heliopause after 41.2 years of flight, and as we all know, both Voyagers are still returning data. For purposes of comparison, the Voyager 2 mission cost $865 million in 1973 dollars.

Thus, while funding missions demands early return on investment, there should be abundant opportunity for science in the decades of interstellar flight between the Sun and Proxima Centauri, with surprises along the way. The Voyagers occasionally throw us a curveball – consider the twists and wrinkles detected in the Sun’s magnetic field as lines of magnetic force criss-cross and reconnect, producing a kind of ‘foam’ of magnetic bubbles, all this detected over a decade ago in Voyager data. The long-term return on investment is considerable, as it includes years of up-close exoplanet data, with orbital operations around, for example, Proxima Centauri b.

It will be interesting to see Jackson’s final NIAC report, which he tells me will be complete within a week or so. As to the future, a glimpse of one aspect of it is available in the current paper, which quotes the original NIAC project description’s call for “a powerful LIDAR system…to illuminate, identify and track flyby candidates” in the Oort Cloud. But as the paper notes, this now seems impractical:

One preliminary conclusion is that active interrogation methods for locating 10 km diameter objects, for example with the communication laser, are not feasible even with megawatts of available electrical power.

We’ll also find out in the NIAC report whether or not Jackson’s idea of using gram-scale chipcraft for closer examination of, say, objects in the Oort Cloud has stood up to scrutiny in the subsequent work. This hybrid mission concept using antimatter is rapidly evolving, and what lies ahead, he tells me in a recent email, is a series of papers expanding on antimatter production and storage, and further examining both the electrostatic trap and electrostatic nozzle. Because drastically increasing antimatter production and learning how to make the most of small amounts are both critical to any hope of antimatter propulsion, I’ll be tracking this work closely.


Antimatter-driven Deceleration at Proxima Centauri

Although I’ve often seen Arthur Conan Doyle’s Sherlock Holmes cited in various ways, I hadn’t chased down the source of this famous quote: “When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.” Gerald Jackson’s new paper identifies the story as Doyle’s “The Adventure of the Blanched Soldier,” which somehow escaped my attention when I read through the Sherlock Holmes corpus a couple of years back. I’m a great admirer of Doyle and love both Holmes and much of his other work, so it’s good to get this citation straight.

As I recall, Spock quotes Holmes to this effect in one of the Star Trek movies; this site’s resident movie buffs will know which one, but I’ve forgotten. In any case, a Star Trek reference comes into useful play here because what Jackson (Hbar Technologies, LLC) is writing about is antimatter, a futuristic thing indeed, but also in Jackson’s thinking a real candidate for a propulsion system that involves using small amounts of antimatter to initiate fission in depleted uranium. The latter is a by-product of the enrichment of natural uranium to make nuclear fuel.

Both thrust and electrical power emerge from this, and in Jackson’s hands, we are looking at a mission architecture that can not only travel to another star – the paper focuses on Proxima Centauri as well as Epsilon Eridani – but also decelerate. Jackson has been studying the matter for decades now, and has presented antimatter-based propulsion concepts for interstellar flight at, among other venues, symposia of the Tennessee Valley Interstellar Workshop (now the Interstellar Research Group). In the new paper, he looks at a 10-kilogram scale spacecraft with the capability of deceleration as well as a continuing source of internal power for the science mission.

Image: Depiction of the deceleration of interstellar spacecraft utilizing antimatter concept. Credit: Gerald Jackson.

On the matter of the impossible, the quote proves useful. Jackson applies it to the propulsion concepts we normally think of in terms of making an interstellar crossing. This is worth quoting:

Applying this Holmes Method to space propulsion concepts for exoplanet exploration, in this paper the term “impossible” is re-interpreted arbitrarily to mean any technology that requires: 1) new physics that has not been experimentally validated; 2) mission durations in excess of one thousand years; and 3) material properties that are not currently demonstrated or likely to be achievable during this century. For example, “warp drives” can currently be classified as impossible by criterion #1, and chemical rockets are impossible due to criterion #2. Breakthrough Starshot may very well be impossible according to criterion #3 simply because of the needed material properties of the accelerating sail that must survive a gigawatt laser beam for 30 minutes. Though traditional nuclear thermal rockets fail due to criterion #2, specific fusion-based propulsion systems might be feasible if breakeven nuclear fusion is ever achieved.

Can antimatter supply the lack? The kind of mission Jackson has been analyzing uses antimatter to initiate fission, so we could consider this a hybrid design, one with its roots in the ‘antimatter sail’ Jackson and Steve Howe have described in earlier technical papers. For the background on this earlier work, you can start by looking at Antimatter and the Sail, one of a number of articles here on Centauri Dreams that has explored the idea.

In this paper, we move the antimatter sail concept to a deceleration method, with the launch propulsion being handed off to other technologies. The sail’s antimatter-induced fission is not used only to decelerate, though. It also provides a crucial source of power for the decades-long science mission at target.

If we leave the launch and long cruise of the mission to other technologies, we might see the kind of laser-beaming methods we’ve looked at in other contexts handling those phases. But if Breakthrough Starshot can develop a model for a fast flyby of a nearby star (moving at a remarkable 20 percent of lightspeed) via a laser array, various problems emerge, especially in data acquisition and return. On the former, the issue is that a flyby mission at these velocities allows precious little time at target. Successful deceleration would allow in situ observations from a stable exoplanet orbit.

That’s a breathtaking idea, given how much energy we’re thinking about using to propel a beamed-sail flyby, but Jackson believes it’s a feasible mission objective. He gives a nod to other proposed deceleration methods, which have included using a ‘magnetic sail’ (magsail) to brake against a star’s stellar wind. The problem is that the interstellar medium is too tenuous to slow a craft moving at a substantial percentage of lightspeed for orbital insertion upon arrival – Jackson considers the notion in the ‘impossible’ camp, whereas antimatter may come in under the wire as merely ‘improbable.’ That difference in degree, he believes, is well worth exploring.

The antimatter concept described generates a high specific impulse thrust, with the author noting that approximately 98 percent of antiprotons that stop within uranium induce fission. It turns out that antiproton annihilation on the nucleus of any uranium isotope – and that includes non-fissile U238 – induces fission. In Jackson’s design, about ten percent of the annihilation energy released is channeled into thrust.

Jackson analyzes an architecture in which the uranium “propagates as a singly-charged atomic ion beam confined to an electrostatic trap.” The trap can be likened in its effects to what magnetic storage rings do when they confine particle beams, providing a stable confinement for charged particles. Antiprotons are sent in the same direction as the uranium ions, reaching the same velocity in the central region, where the matter/antimatter annihilation occurs. Because the uranium is in the form of a sparse cloud, the energetic fission ‘daughters’ escape with little energy loss.

Here is Jackson’s depiction of an electrostatic annihilation trap. In this design, both the positively charged uranium ions and the negatively charged antiprotons are confined.

Image: This is Figure 1 from the paper. Caption: Axial and radial confinement electrodes (top) and two-species electrostatic potential well (bottom) of a lightweight charged-particle trap that mixes U238 with antiprotons.

A workable design? The author argues that it is, saying:

Longitudinal confinement is created by forming an axial electrostatic potential well with a set of end electrodes indicated in figure 1. To accomplish the goal of having oppositely charged antiprotons and uranium ions traveling together for the majority of their motion back and forth (left/right in the figure) across the trap, this electrostatic potential has a double-well architecture. This type of two-species axial confinement has been experimentally demonstrated [53].

The movement of antiprotons and uranium ions within the trap is complex:

The antiprotons oscillate along the trap axis across a smaller distance, reflected by a negative potential “hill”. In this reflection region the positively charged uranium ions are accelerated to a higher kinetic energy. Beyond the antiproton reflection region a larger positive potential hill is established that subsequently reflects the uranium ions. Because the two particle species must have equal velocity in the central region of the trap, and the fact that the antiprotons have a charge density of -1/nucleon and the uranium ions have a charge density of +1/(238 nucleons), the voltage gradient required to reflect the uranium ions is roughly 238 times greater than that required to reflect the antiprotons.
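The factor of 238 follows directly from the equal-velocity condition. Making the arithmetic explicit (a non-relativistic sketch, which should be adequate for trap-scale ion energies):

```latex
\frac{E_{\mathrm{U}}}{E_{\bar{p}}}
  = \frac{\tfrac{1}{2} m_{\mathrm{U}} v^{2}}{\tfrac{1}{2} m_{\bar{p}} v^{2}}
  = \frac{m_{\mathrm{U}}}{m_{\bar{p}}} \approx 238,
\qquad
\frac{V_{\mathrm{U}}}{V_{\bar{p}}}
  = \frac{E_{\mathrm{U}}/e}{E_{\bar{p}}/e}
  = \frac{m_{\mathrm{U}}}{m_{\bar{p}}} \approx 238 .
```

Both species carry a single elementary charge, so the reflecting potential simply scales with kinetic energy, and hence with mass.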

The design must reckon with the fact that the fission daughters escape the trap in all directions, which is compensated for through a focusing system in the form of an electrostatic nozzle that produces a collimated exhaust beam. The author is working with a prototype electrostatic trap coupled to an electrostatic nozzle to explore the effects of lower-energy electrons produced by the uranium-antiproton annihilation events as well as the electrostatic charge distribution within the fission daughters.

Decelerating at Proxima Centauri in this scheme involves a propulsive burn lasting ten years as the craft sheds kinetic energy on the long arc into the planetary system. Under these calculations, a 200-year mission to Proxima requires 35 grams of total antiproton mass. Upping this to a 56-year mission moving at 0.1c demands 590 grams.
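Those two figures are roughly consistent with a simple energy argument. A sketch under assumptions of my own, not the paper’s: that the required antiproton mass scales with the kinetic energy to be shed, and that the 200-year profile implies a cruise speed near 0.025c:

```python
# Hypothetical consistency check: assume antiproton mass needed for the
# deceleration burn scales as v^2 (non-relativistic kinetic energy).
m_slow_g = 35.0    # grams, 200-year Proxima mission (from the text)
v_slow = 0.025     # fraction of c; assumed cruise speed for that profile
v_fast = 0.10      # fraction of c; the 56-year mission

m_fast_g = m_slow_g * (v_fast / v_slow) ** 2
print(f"{m_fast_g:.0f} g")   # ~560 g, in the neighborhood of the 590 g quoted
```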

Addendum: I wrote ’35 kilograms’ in the above paragraph before I caught the error. Thanks, Alex Tolley, for pointing this out!

Current antimatter production remains in the nanogram range. What to do? In work for NASA’s Innovative Advanced Concepts office, Jackson has argued that despite minuscule current production, output can be vastly ramped up. He believes that production of 20 grams of antimatter per year is a feasible goal. More on this issue, to which Jackson has devoted many years of work, in the next post.

The paper is Jackson, “Deceleration of Exoplanet Missions Utilizing Scarce Antimatter,” in press at Acta Astronautica (2022). Abstract.


An Abundance of Technosignatures?

What expectations do we bring to the hunt for life elsewhere in the universe? Opinions vary depending on who has the podium, but we can neatly divide the effort into two camps. The first looks for biosignatures, spurred by our remarkably growing and provocative catalog of exoplanets. The other explicitly looks for signs of technology, as exemplified by SETI, which from the start hunted for signals produced by intelligence.

My guess is that a broad survey of those looking for biosignatures would find that they are excited by the emerging tools available to them, such as new generations of ground- and space-based telescopes, and the kind of modeling we saw in the last post applied to a hypothetical Alpha Centauri planet. We use our growing datasets to examine the nature of exoplanets and move beyond observation to model benchmarks for habitable worlds, including their atmospheric chemistry and even geology.

Technosignatures are a different matter, and it’s fascinating to read through a new paper from Jason Wright and colleagues – Jacob Haqq-Misra, Adam Frank, Ravi Kopparapu, Manasvi Lingam and Sofia Sheikh – discussing just that. The intent is to show that technosignatures offer a vast search space that in a sense dwarfs the hunt for biosignatures. That’s not what you would expect, as the latter are usually described as a kind of all-encompassing envelope within which technosignatures would be a subset.

On the contrary, write the authors, “there is no incontrovertible reason that technology could not be more abundant, longer-lived, more detectable, and less ambiguous than biosignatures.” How this potential is unlocked impacts how the search proceeds, and it also sends out a call for collaboration among all those hunting for life elsewhere.

Image: Photo of the central region of the Milky Way. Credit: UCLA SETI Group/Yuri Beletsky, Carnegie Las Campanas Observatory.

Technosignatures as Subset?

Remember that technosignatures do not require an intent to communicate, but are evidence of technologies in use or even long abandoned, perhaps found in already existing datasets needing re-examination, or in results from upcoming observatories. Check your own assumptions here, based on the Drake equation, in which factors include the fraction of habitable planets that develop life, the fraction that produce species that are intelligent and can communicate, and so on. Traditional thinking sees technosignatures as an embedded feature within a broader spectrum of life.

Reasonably enough, then, we might decide that if intelligence is a rare subset within biological systems, technosignatures would prove even rarer. Our own planet seems to exemplify this, with our species having become communicative only within roughly a century of today, despite 4.6 billion years in which to evolve. But Wright and team make the case that technology cannot be bounded in this way. Its emergence may be rare, but once it appears, it is possible that it will outlive its biological creators.

Biology may confine itself to a single habitable planet, but why should technosignatures be thus limited? In our own Solar System, the authors argue, we are producing technosignatures at multiple worlds right now, especially at Mars, where our combined force of landers and orbital assets takes data and communicates results back to Earth. Such signals should increase as we follow through on plans to explore Mars with human crews and robotic spacecraft. As we spread into the Solar System, new technosignatures will emerge at each venue we study.

Why, too, should technology not spread through self-replication, perhaps not under the control of the biological beings who set it into motion? For that matter, why should we confine technology to planets? Places with no biology may prove extremely useful for our species, as for example the asteroid belt for resource extraction. We might expect technosignatures to emerge from these operations, another separate appearance of technology that grows ultimately out of the single planetary source. Moreover, this diaspora is unlikely to confine itself to a single star system, as the authors point out:

There is also no reason to think that technological life in the galaxy cannot spread beyond its home planetary system (see Mamikunian & Briggs 1965; Drake 1980). While interstellar spaceflight of the sort needed to settle a nearby star system is beyond humanity’s current capabilities, the problem is one being seriously considered now, and there are no real physical or engineering obstacles to such a thing happening (e.g., Mauldin 1992; Ashworth 2012; Lingam & Loeb 2021). Even if we cannot envision it happening for humans in the near future, it is not hard to imagine it transpiring in, say, 10,000 or 100,000 yr.

What a shift in thinking the above paragraph represents. To us it merely states the obvious, but a mere 75 years ago the idea of interstellar flight was considered science fictional in the extreme, and we were only beginning to probe the physics of the engines that might make crossing to another star possible. Today we’re more likely to think of interstellar journeys as expeditions awaiting new generations of technology and engineering rather than a mystical new physics. We also factor artificial intelligence into an interstellar future that may be exclusively robotic.

Image: A rendering of a potential Dyson sphere, collecting stellar energy on a system-wide scale for a highly advanced civilization. How many separate technosignatures might have emerged out of a single biological source in the building of such a thing? Credit:

Recall our recent discussion of von Neumann probes. While the average distance between stars is vast, Greg Matloff looked at the problem in an exceedingly practical way. Suppose, he said, we confine ourselves to times when stars are within a single light year of each other, which happens to our Sun every 500,000 years or so. If we launch a self-replicating probe only every 500,000 years, we nonetheless set up a process of such crossings that fills a large percentage of stellar systems in the galaxy within a time frame of tens of thousands of years. All of these can produce technosignatures.

Thus even the most conservative assumptions for interstellar flight using speeds not much beyond what we can achieve with a Jupiter gravity assist today still create the opportunity for technology to spread far beyond the planet of its origin. As the authors are quick to point out, the Drake equation cannot capture this spreading, and the search space for technosignatures could vastly outnumber that for biological life.

Lifetimes Civilizational and Technological

Looming over discussion of the Drake equation has always been the issue of the lifetime of a technological civilization, the L factor. How likely would we be to pick up a signal from another civilization if our own is threatened at this comparatively early stage of its growth by factors like nuclear or biological war? The Fermi question may be answered simply enough by saying that no technological species lives very long.

Here it’s fair to ask how much we are projecting human tendencies onto our extraterrestrial counterparts. This gets intriguing. The collapse of civilization would be a dire event, but absent actual extinction, our species might recover or, indeed, re-develop the technologies that once proliferated. The time between catastrophe and potential recovery is not known, but such events do not put a fixed limit on a civilization’s lifetime. Even if we assume that technological civilizations will roughly track our own, we may understand our own only imperfectly. From the paper:

…humanity is the first species on Earth that can prevent its own extinction with technology, for instance by diverting asteroids, stopping or mitigating pandemics, or building “lifeboat” settlements elsewhere in the solar system or beyond (Baum et al. 2015; Turchin & Green 2017; Turchin & Denkenberger 2018). This means that the upper limit on our technology’s survival is essentially unlimited in theory, even in the face of inevitable natural catastrophes. Apart from these modern examples, Earth-analogs from human history teach us that a technological downshift—to temporarily become less technological until circumstances improve—is a common and healthy adaptation to catastrophe in human history and that technology and longevity are in this way inextricably linked…

Nor can we rule out the possibility that Earth could, following humanity’s extinction, evolve other species capable of producing a technological society. For that matter, are we so sure about our past? If there have been prior periods of technology on Earth, the processes of time over millions of years would likely have eradicated them. Thus using our experience on Earth as the model for the Drake L factor is inadvisable because of how little we know about L for our own planet.

Technosignatures can outlast the beings that create them, and as the authors point out, the ones we produce are already on a par with Earth’s biosignatures in terms of detectability. While we would not be able to detect the biosignatures of Earth from Alpha Centauri’s distance, the final iteration of the Square Kilometre Array should be sensitive enough to pick up our radars at distances of several parsecs, and an advanced space telescope within our engineering capabilities now (such as the proposed LUVOIR) might be able to detect atmospheric pollution at 10 parsecs.

It seems a safe assumption that if our biosignatures and technosignatures are roughly comparable in detectability today, the advance of technology, as a species continues to innovate, should produce ever more robust technosignatures. We cannot, in other words, assume a biology-like trajectory, as implicit in the Drake equation, for the evolution of technosignatures and their detectability through SETI. Indeed:

…the spread of technology could reasonably imply that the number of sites of technosignatures might be larger than that of biosignatures, potentially by a factor of as much as > 10^10 if the galaxy were to be virtually filled with technology.

No wonder some authors have considered adding a ‘spreading factor’ to the Drake equation, one that accounts for the possibility of technologies moving far beyond their home worlds. One technosphere can thus produce myriad technosignatures, growth the Drake equation in its classic form does not account for. Where the equation assumes life emerges and stays on its home world, the authors of this paper see technology as having a separate evolutionary arc, one that potentially takes it far into the galaxy in ever-proliferating form.
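To make the contrast concrete, here is a toy version of the comparison. The parameter values are placeholders of my own, and the multiplicative ‘spreading factor’ is just one simple way such a term could enter:

```python
def drake_sites(r_star, f_p, n_e, f_l, f_i, f_c, L, spreading=1.0):
    """Classic Drake estimate of communicating sites, times an optional
    'spreading factor' for technology settled beyond the home world.
    Every value passed in below is an illustrative placeholder."""
    return r_star * f_p * n_e * f_l * f_i * f_c * L * spreading

home_worlds_only = drake_sites(1.0, 0.5, 1.0, 0.1, 0.01, 0.1, 1e4)
with_spreading   = drake_sites(1.0, 0.5, 1.0, 0.1, 0.01, 0.1, 1e4, spreading=1e6)
print(home_worlds_only, with_spreading)   # 0.5 home sites vs. 500,000 spread sites
```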

While the search for biosignatures continues, it makes sense given all these factors for technosignatures to remain under active investigation, and to encourage the astrobiology and SETI communities to engage with each other in the common pursuit of extraterrestrial life. Comparative and cooperative analysis should enhance the work of both disciplines.

The paper is Wright et al., “The Case for Technosignatures: Why They May Be Abundant, Long-lived, Highly Detectable, and Unambiguous,” Astrophysical Journal Letters 927, L30 (10 March 2022). Full text.