
Mistakes in the Drake Equation

Juggling all the factors impacting the emergence of extraterrestrial civilizations is no easy task, which is why the Drake equation has become such a handy tool. But are there assumptions locked inside it that need examination? Robert Zubrin thinks so, and in the essay that follows, he explains why, with a particular nod to the possibility that life can move among the stars. Although he is well known for his work at The Mars Society and authorship of The Case for Mars, Zubrin became a factor in my work when I discovered his book Entering Space: Creating a Spacefaring Civilization back in 2000, which led me to his scientific papers, including key work on the Bussard ramjet concept and magsail braking. Today’s look at Frank Drake’s equation reaches wide-ranging conclusions, particularly when we begin to tweak the parameters affecting both the lifetime of civilizations and the length of time it takes them to emerge and spread into the cosmos.

by Robert Zubrin

There are 400 billion other solar systems in our galaxy, and it’s been around for 10 billion years. It stands to reason that there must be extraterrestrial civilizations. We know this, because the laws of nature that led to the development of life and intelligence on Earth must be the same as those prevailing elsewhere in the universe.

Hence, they are out there. The question is: how many?

In 1961, radio astronomer Frank Drake developed a pedagogy for analyzing the question of the frequency of extraterrestrial civilizations. According to Drake, in steady state the rate at which new civilizations form should equal the rate at which they pass away, and therefore we can write:

N/L = R∗ × fp × ne × fl × fi × fc        (1)

Equation (1) is therefore known as the “Drake Equation.” Herein, N is the number of technological civilizations in our galaxy, and L is the average lifetime of a technological civilization. The left-hand side term, N/L, is the rate at which such civilizations are disappearing from the galaxy. On the right-hand side, we have R∗, the rate of star formation in our galaxy; fp, the fraction of these stars that have planetary systems; ne, the mean number of planets in each system that have environments favorable to life; fl, the fraction of these that actually developed life; fi, the fraction of these that evolved intelligent species; and fc, the fraction of intelligent species that developed sufficient technology for interstellar communication. (In other words, the Drake equation defines a “civilization” as a species possessing radiotelescopes. By this definition, civilization did not appear on Earth until the 1930s.)

By plugging in numbers, we can use the Drake equation to compute N. For example, if we estimate L=50,000 years (ten times recorded history), R∗ = 10 stars per year, fp = 0.5, and each of the other four factors ne, fl, fi, and fc equal to 0.2, we calculate the total number of technological civilizations in our galaxy, N, equals 400.
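
The arithmetic is simple enough to sketch in a few lines of Python (the variable names are mine; the values are the illustrative estimates from the text):

```python
# Drake equation in steady state: N = R* x fp x ne x fl x fi x fc x L
R_star = 10    # rate of star formation (stars per year)
f_p = 0.5      # fraction of stars with planetary systems
n_e = 0.2      # mean number of life-favorable planets per system
f_l = 0.2      # fraction of those that actually develop life
f_i = 0.2      # fraction of those that evolve intelligent species
f_c = 0.2      # fraction of intelligent species achieving radio technology
L = 50_000     # average lifetime of a technological civilization (years)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # ~400 technological civilizations
```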

Four hundred civilizations in our galaxy may seem like a lot, but scattered among the Milky Way’s 400 billion stars, they would represent a very tiny fraction: just one in a billion to be precise. In our own region of the galaxy, (known) stars occur with a density of about one in every 320 cubic light years. If the calculation in the previous paragraph were correct, it would therefore indicate that the nearest extraterrestrial civilization is likely to be about 4,300 light years away.
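
The 4,300 light year figure follows from spreading one civilization per billion stars through the local stellar density; as a sketch (names mine):

```python
import math

stars_per_civ = 1e9   # one civilization per billion stars
vol_per_star = 320    # cubic light years per star in our neighborhood

vol_per_civ = stars_per_civ * vol_per_star
# radius of the sphere that, on average, contains one civilization
r = (3 * vol_per_civ / (4 * math.pi)) ** (1 / 3)
print(r)  # ~4,300 light years to the nearest civilization
```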

But, classic as it may be, the Drake equation is patently incorrect. For example, the equation assumes that life, intelligence, and civilization can only evolve in a given solar system once. This is manifestly untrue. Stars evolve on time scales of billions of years, species over millions of years, and civilizations take mere thousands of years.

Current human civilization could knock itself out with a thermonuclear war, but unless humanity drove itself into complete extinction, there is little doubt that 1,000 years later global civilization would be fully reestablished. An asteroidal impact on the scale of the K-T event that eliminated the dinosaurs might well wipe out humanity completely. But 5 million years after the K-T impact the biosphere had fully recovered and was sporting the early Cenozoic’s promising array of novel mammals, birds, and reptiles. Similarly, 5 million years after a K-T class event drove humanity and most of the other land species to extinction, the world would be repopulated with new species, including probably many types of advanced mammals descended from current nocturnal or aquatic varieties.

Human ancestors 30 million years ago were no more intelligent than otters. It is unlikely that the biosphere would require significantly longer than that to recreate our capabilities in a new species. This is much faster than the 4 billion years required by nature to produce a brand-new biosphere in a new solar system. Furthermore, the Drake equation also ignores the possibility that both life and civilization can propagate across interstellar space.

So, let’s reconsider the question.

Estimating the Galactic Population

There are 400 billion stars in our galaxy, and about 10 percent of them are good G and K type stars which are not part of multiple stellar systems. Almost all of these probably have planets, and it’s a fair guess that 10 percent of these planetary systems feature a world with an active biosphere, probably half of which have been living and evolving for as long as the Earth. That leaves us with two billion active, well-developed biospheres filled with complex plants and animals, capable of generating technological species on time scales of somewhere between 10 and 40 million years. As a middle value, let’s choose 20 million years as the “regeneration time” tr. Then we have:

N/L = (ns × fg × fb × fm)/tr = nb/tr        (2)

where N and L are defined as in the Drake equation, and ns is the number of stars in the galaxy (400 billion), fg is the fraction of them that are “good” (single G and K) type stars (about 0.1), fb is the fraction of those with planets with active biospheres (we estimate 0.1), fm is the fraction of those biospheres that are “mature” (estimate 0.5), and nb, the product of these last four factors, is the number of active mature biospheres in the galaxy.

If we stick with our previous estimate that the lifetime, L, of an average technological civilization is 50,000 years, and plug in the rest of the above numbers, equation (2) says that there are probably 5 million technological civilizations active in the galaxy right now. That’s a lot more than suggested by the Drake equation. Indeed, it indicates that one out of every 80,000 stars warms the home world of a technological society. Given the local density of stars in our own region of the galaxy, this implies that the nearest center of extraterrestrial civilization could be expected at a distance of about 185 light years.
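
Equation (2)’s arithmetic can be checked directly (variable names are mine; values as in the text):

```python
n_s = 400e9   # stars in the galaxy
f_g = 0.1     # fraction that are "good" single G and K type stars
f_b = 0.1     # fraction of those with planets hosting active biospheres
f_m = 0.5     # fraction of biospheres that are mature
t_r = 20e6    # regeneration time for a technological species (years)
L = 50_000    # mean lifetime of a technological civilization (years)

n_b = n_s * f_g * f_b * f_m   # active mature biospheres: ~2 billion
N = n_b * L / t_r             # steady state: N / L = n_b / t_r
print(N)                      # ~5 million civilizations
print(n_s / N)                # ~1 technological society per 80,000 stars
```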

Technological civilizations, if they last any time at all, will become starfaring. In our own case (and our own case is the only basis we have for most of these estimations), the gap between development of radiotelescopes and the achievement of interstellar flight is unlikely to span more than a couple of centuries, which is insignificant when measured against L=50,000 years. This suggests that once a civilization gets started, it’s likely to spread. Propulsion systems capable of generating spacecraft velocities on the order of 5 percent the speed of light appear possible. However, interstellar colonists will probably target nearby stars, with further colonization efforts originating in the frontier stellar systems once civilization becomes sufficiently well-established there to launch such expeditions.

In our own region of the galaxy, the typical distance between stars is five or six light years. So, if we guess that it might take 1,000 years to consolidate and develop a new solar system to the point where it is ready to launch missions of its own, this would suggest the speed at which a settlement wave spreads through the galaxy might be on the order of 0.5 percent the speed of light. However, the period of expansion of a civilization is not necessarily the same as the lifetime of the civilization; it can’t be more, and it could be considerably less. If we assume that the expansion period might be half the lifetime, then the average rate of expansion, V, would be half the speed of the settlement wave, or 0.25 percent the speed of light.
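
The settlement-wave estimate is just the interstellar distance divided by transit time plus consolidation time; under the text’s assumptions (names mine):

```python
star_spacing = 6      # light years between neighboring useful stars
ship_speed = 0.05     # cruise velocity as a fraction of lightspeed
consolidation = 1000  # years to develop a system before it launches missions

transit_time = star_spacing / ship_speed       # 120 years per hop
wave_speed = star_spacing / (transit_time + consolidation)
V = wave_speed / 2    # average expansion rate if expansion lasts half of L
print(wave_speed, V)  # ~0.005 c and ~0.0027 c
```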

As a civilization expands, its zone of settlement encompasses more and more stars. The density, d, of stars in our region of the galaxy is about 0.003 stars per cubic light year, of which a fraction, fg, of about 10 percent are likely to be viable potential homes for life and technological civilizations. Combining these considerations with equation (2), we can create a new equation to estimate C, the number of civilized solar systems in our galaxy, by multiplying the number of civilizations, N, by nu, the average number of useful stars available to each:

C = N × nu = N × fg × d × (4π/3)(VL/2)³        (3)

For example, we have assumed that the average lifespan, L, of a technological species is 50,000 years, and if that is true, then the average age of one is half of this, or 25,000 years. If a typical civilization has been spreading out at the above estimated rate for this amount of time, the radius, R, of its settlement zone would be 62.5 light years (R = VL/2 = 62.5 ly), and its domain would include about 3,000 stars. If we multiply this domain size by the number of expected civilizations calculated above, we find that about 15 billion stars, or 3.75 percent of the galactic population, would be expected to lie within somebody’s sphere of influence. If 10 percent of these stars are actually settled, this implies there are about 1.5 billion civilized stellar systems within our galaxy. Furthermore, we find that the nearest outpost of extraterrestrial civilization could be expected to be found at a distance of 185-62.5 = 122.5 light years.
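
Those domain numbers can be verified in a short sketch (names mine, inputs from the text):

```python
import math

V = 0.0025       # average expansion rate (fraction of c)
L = 50_000       # mean civilization lifetime (years)
N = 5e6          # civilizations in the galaxy (from equation 2)
d = 0.003        # stars per cubic light year
f_g = 0.1        # fraction of in-domain stars actually settled
n_stars = 400e9  # stars in the galaxy

R = V * L / 2                                   # 62.5 ly domain radius
stars_in_domain = d * (4 / 3) * math.pi * R**3  # ~3,000 stars per domain
in_spheres = N * stars_in_domain                # ~15 billion stars in domains
C = in_spheres * f_g                            # ~1.5 billion settled systems
S = (3 * (n_stars / N) / d / (4 * math.pi)) ** (1 / 3)  # ~185 ly separation
D = S - R                                       # ~122.5 ly to nearest outpost
print(R, round(stars_in_domain), C, round(S), round(D))
```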

The above calculation represents my best guess as to the shape of things, but there’s obviously a lot of uncertainty in the calculation. The biggest uncertainty revolves around the value of L; we have very little data to estimate this number and the value we pick for it strongly influences the results of the calculation. The value of V is also rather uncertain, although less so than L, as engineering knowledge can provide some guide. In Table 1 we show how the answers might change if we take alternative values for L and V, while keeping the other assumptions we have adopted constant.

Table 1 The Number and Distribution of Galactic Civilizations

                                           V = 0.005 c    V = 0.0025 c   V = 0.001 c
L = 10,000 years
  N (# civilizations)                      1 million      1 million      1 million
  C (# civilized stars)                    19.5 million   2.4 million    1 million
  R (radius of domain)                     25 ly          12.5 ly        5 ly
  S (separation between civilizations)     316 ly         316 ly         316 ly
  D (distance to nearest outpost)          291 ly         304 ly         311 ly
  F (fraction of stars within domains)     0.048%         0.006%         0.0025%
L = 50,000 years
  N (# civilizations)                      5 million      5 million      5 million
  C (# civilized stars)                    12 billion     1.5 billion    98 million
  R (radius of domain)                     125 ly         62.5 ly        25 ly
  S (separation between civilizations)     185 ly         185 ly         185 ly
  D (distance to nearest outpost)          60 ly          122.5 ly       160 ly
  F (fraction of stars within domains)     30%            3.75%          0.245%
L = 200,000 years
  N (# civilizations)                      20 million     20 million     20 million
  C (# civilized stars)                    40 billion     40 billion     18 billion
  R (radius of domain)                     500 ly         250 ly         100 ly
  S (separation between civilizations)     131 ly         131 ly         131 ly
  D (distance to nearest outpost)          0 ly           0 ly           31 ly
  F (fraction of stars within domains)     100%           100%           44%

In Table 1, N is the number of technological civilizations in the galaxy (5 million in the previous calculation), C is the number of stellar systems that some civilization has settled (1.5 billion, above), R is the radius of a typical domain (62.5 ly above), S is the separation distance between the centers of civilization (185 ly above), D is the probable distance to the nearest extraterrestrial outpost (122.5 ly, above), and F is the fraction of the stars in the galaxy that are within someone’s sphere of influence (3.75% above).
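
Table 1’s entries follow from the same relations used above. As a rough sketch (parameter names are mine; the published table also applies caps for fully overlapping domains in the extreme cases, which the simple min/max clamps here only approximate):

```python
import math

d = 0.003        # stars per cubic light year (local density)
n_stars = 400e9  # stars in the galaxy
n_b = 2e9        # active mature biospheres (equation 2's factors)
t_r = 20e6       # regeneration time (years)
f_s = 0.1        # fraction of in-domain stars actually settled

def model(L, V):
    """Estimate civilization statistics for lifetime L (years)
    and mean expansion rate V (fraction of c)."""
    N = n_b * L / t_r                       # number of civilizations
    R = V * L / 2                           # radius of an average domain (ly)
    domain = d * (4 / 3) * math.pi * R**3   # stars per domain
    F = min(N * domain / n_stars, 1.0)      # fraction of stars within domains
    C = max(N * domain * f_s, N)            # settled systems (at least home star)
    S = (3 * (n_stars / N) / d / (4 * math.pi)) ** (1 / 3)  # separation (ly)
    D = max(S - R, 0.0)                     # distance to nearest outpost (ly)
    return N, C, R, S, D, F

for L in (10_000, 50_000, 200_000):
    for V in (0.005, 0.0025, 0.001):
        N, C, R, S, D, F = model(L, V)
        print(f"L={L:>7,} V={V:<6} N={N:.1e} C={C:.2e} "
              f"R={R:g} ly S={S:.0f} ly D={D:.0f} ly F={F:.3%}")
```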

Examining the numbers in Table 1, we can see how the value of L completely dominates our picture of the galaxy. If L is “short” (10,000 years or less), then interstellar civilizations are few and far between, and direct contact would almost never occur. If L is “medium” (~50,000 years), then the radius of domains is likely to be smaller than the distance between civilizations, but not much smaller, and so contact could be expected to happen occasionally (remember, L, V, and S are averages; particular civilizations in various localities could vary in their values for these quantities). If L is a long time (> 200,000 years), then civilizations are closely packed, and contact should occur frequently. (These relationships between L and the density of civilizations apply in our region of the galaxy. In the core, stars are packed tighter, so smaller values of L are needed to produce the same “packing fraction,” but the same general trends apply.)

Any way you slice it, one thing seems rather certain: There’s plenty of them out there.

What are these civilizations like? What have they achieved?

It would be good to know.



Galaxies in Motion

“Wherever you go, there you are.” So goes an old saw that makes a valid point: You can’t escape yourself by changing locations. Translating the great Greek poet C. P. Cavafy, Lawrence Durrell tweaked the language of “The God Abandons Antony” to come up with these closing lines:

Ah! don’t you see
Just as you’ve ruined your life in this
One plot of ground you’ve ruined its worth
Everywhere now — over the whole earth?

All this in the service of Durrell’s Alexandria Quartet, noting the fact that not even a Roman autocrat could escape his fate. Bear with me — I think about stuff like this when I’m out walking late at night and the stars are particularly stunning. Before my walk, I had been looking at images of M31, the Andromeda galaxy, and doing my usual “What would it be like to be there” routine. Minus Durrell/Cavafy’s dark vision, I might still ask myself what had changed. From a vantage in the Andromeda galaxy, there would be a Milky Way in my sky. And what else?

Then David Herne dropped me a note, providing a link to new research from Australia’s International Centre for Radio Astronomy Research dealing with this very galaxy. The work described therein raised the question anew: Just how alike are Andromeda and the Milky Way? Some previous estimates have held that M31 was two to three times the size of our galaxy, while others have found rough parity, with Harvard’s Mark Reid and colleagues arguing in 2009 that our galaxy is about as massive as Andromeda, with a mass of up to 3 trillion Suns.

Image: The Andromeda Galaxy, perhaps a twin of our own. We can see Andromeda from without, but determining the structure of the galaxy we move through ourselves is a continuing challenge. Credit: NASA.

I described Reid’s work in 2009 (see How Many Stars in the Galaxy), and I mention it here because when we’re discussing these matters, it’s necessary to add a caveat. We have to ask ourselves, what exactly does a mass of 1 to 3 trillion stars actually mean? Much has happened since 2009, but I wrote this back then:

Does that mean that the Milky Way contains three trillion stars? Absolutely not. I’m seeing the three trillion star number popping up all over the Internet, and almost reported it that way here when I first encountered the work. The misunderstanding comes from making mistaken assumptions about galactic mass. Reid used the Very Long Baseline Array to examine regions of intense star formation across the galaxy, a study the scientist reported at the American Astronomical Society’s winter meeting this past January [2009]. The Milky Way does indeed turn out to have much more mass than earlier studies had indicated.

But a heavier than expected Milky Way means — according to much current thinking — a larger amount of dark matter. Reid and team had found that the Milky Way was rotating 15% faster than previously assumed, matching the rotation rate of M31 and implying similar overall mass and size. But only a fraction of this would be normal matter, so that a mass of three trillion Suns would still translate to, say, five hundred billion actual stars, and they would be spread over all stellar classes. In any case, the 3 trillion figure is now in doubt.

Back to the ICRAR work. In 2014, Prajwal Kafle (University of Western Australia) revised the mass of the Milky Way downward, studying the kinematics of halo stars to determine the underlying distribution of mass and revealing about half as much dark matter as had previously been thought. Now Kafle and colleagues have gone to work on Andromeda, concluding that the galaxy is about 800 billion times heavier than the Sun. Our nearest galactic neighbor thus turns out to be roughly the same mass as the Milky Way.

As in their earlier study, Kafle’s group looked at the orbits of high-velocity stars as a way of gauging galactic escape velocity, a technique developed by British astronomer James Jeans in 1915. For the Milky Way, this value is thought to be in the neighborhood of 550 kilometers per second, a figure Kafle and team confirmed in 2014. The new paper’s data mean that the value for Andromeda is not dissimilar. Like the Milky Way, M31 turns out to have much less dark matter than previously thought, perhaps only a third of earlier high-end estimates.

If this is the case, then we can start to re-think the remote future, when the two giant spiral galaxies (over two million light years apart) begin to approach each other. Indeed, galactic interactions within the local group — 54 galaxies, most of them dwarfs — are affected by these changes, given that the gravitational center is located between the Milky Way and Andromeda. The two galaxies are now shown to be evenly matched in terms of size. Adds Kafle:

“It completely transforms our understanding of the local group. We had thought there was one biggest galaxy and our own Milky Way was slightly smaller but that scenario has now completely changed.”

We have five billion years to wait before the merger of our two galaxies, roughly the same timescale on which our Sun will swell into a red giant. What will the Solar System become as the Sun swells and the galaxies begin their close encounter? Kafle’s simulations of the event are spectacular, as you can see below.

The paper is Kafle et al., “The Need for Speed: Escape velocity and dynamical mass measurements of the Andromeda galaxy,” Monthly Notices of the Royal Astronomical Society February 15th, 2018 (abstract). Kafle’s 2014 paper on the Milky Way’s mass is “On the Shoulders of Giants: Properties of the Stellar Halo and the Milky Way Mass Distribution,” Astrophysical Journal 794, No. 1 (24 September 2014). Abstract.



‘Oumuamua: New Work on Interstellar Objects

Anomalous objects are a problem — we need more than one to figure them out. One ‘hot Jupiter’ could have been an extreme anomaly, but we went on to find enough of them to realize this was a kind of planet that had a place in our catalog. Or think of those two Kuiper Belt objects that New Horizons imaged, as discussed in yesterday’s post. Soon we’ll have much closer imagery of MU69, but it will take more encounters — and more spacecraft — to begin to fathom the full range of objects that make up the Kuiper Belt. Ultimately, we’d like to see enough KBOs up close to start drawing statistically valid conclusions about the entire population.

So where does the intriguing ‘Oumuamua fit into all this? It was the first interstellar asteroid we’ve been able to look at, even if the encounter was fleeting. A friend asked me, having learned of the Breakthrough Listen SETI monitoring of the object, whether it wasn’t absurd to imagine it could be a craft from another civilization. I could only say that the idea was highly unlikely, but given how little time we had and how rare the object was, how could we not have listened? I favor throwing whatever resources we have at an opportunity this unusual.

And time was short, as Joshua Sokol recently noted in Scientific American. We found ‘Oumuamua in late October of last year, but getting a probe to it on the best possible trajectory would have demanded a launch the previous July. I see that Greg Laughlin (UC-Santa Cruz), working with Yale doctoral student Darryl Seligman, has been exploring how we might drive an impactor into a future interstellar visitor, allowing the kind of analysis we did with the Deep Impact mission. I’ll have more on the idea as the paper wends its way through peer review.

Image: This animation shows the path of ‘Oumuamua, which passed through our inner solar system in September and October 2017. From analysis of its motion, scientists calculate that it probably originated from outside of our Solar System. Credit: NASA/JPL-Caltech.

We appear to be getting into the era of comparative interstellar object studies. One, two, many ‘Oumuamuas, not to mention their cousins, who may not just pass through but stick around. Harvard’s Avi Loeb, working with Manasvi Lingam (Harvard-Smithsonian Center for Astrophysics), offers a paper on ‘Oumuamua that’s now available on the arXiv server. Here we get a sense of the broader population of interstellar objects, not all of which may have departed.

The authors have approached the question by asking how likely it is for interstellar objects to be captured in our Solar System, performing the same kind of analysis for the Alpha Centauri system. The scientists believe several thousand captured interstellar objects may be within the Solar System at any given time, with the largest of these reaching tens of kilometers in size.

‘Oumuamua came and went quickly, but a long-lingering population offers us ample grounds for investigation. Likening the effects of the Sun and Jupiter to a fishing net, the authors peg the number of interstellar objects currently within the system at ~6 × 10³, pointing out that they offer us the potential to study exoplanetary debris without leaving our own system.

But how to determine whether an object now bound to our Solar System really is interstellar in origin? The answer may lie in the chemical constitution of water vapor found associated with the object. The oxygen isotope ratios may hold the key, as the paper explains:

…if the oxygen isotope ratios are markedly different from the values commonly observed in the Solar system, it may suggest that the object is interstellar in nature; more specifically, the ratio of 17O/18O is distinctly lower for the Solar system compared to the Galactic value (Nittler & Gaidos 2012), and hence a higher value of this ratio may be suggestive of interstellar origin.

To make this work, we could analyze these isotopes through high-resolution spectroscopy, working in the optical, infrared and submillimeter ranges of water vapor in cometary tails, just as the Herschel observatory was able to measure the isotope ratio of comet C/2009 P1 in the Oort cloud. A flyby and perhaps even a sample return mission could not be ruled out either, with the interesting implication that a technology like Breakthrough Starshot’s could be used to explore much closer targets than Proxima Centauri with short mission times.

But if thousands of interstellar objects are within our Solar System now, what implications does this offer for the emergence of life? The paper notes that some 400 interstellar objects with a radius in the 0.1 kilometer range could have struck the Earth prior to abiogenesis, and about 10 could have been kilometer-sized. The possibility of interstellar panspermia is evident. The paper continues:

If a km-sized interstellar object were to strike the Earth, we suggested that it would result in pronounced local changes, although the global effects may be transient. Habitable planets could have been seeded by means of panspermia through two different channels: (i) direct impact of interstellar objects, and (ii) temporary capture of the interstellar object followed by interplanetary panspermia. There are multiple uncertainties involved in all panspermia models, as the probability of alien microbes surviving ejection, transit and reentry remains poorly constrained despite recent advancements.

It’s interesting to note on this score that while the Solar System might have snared objects up to tens of kilometers in size, the Alpha Centauri system could capture objects up to Earth size, making for the possibility of a life-bearing world being acquired in its entirety.

‘Oumuamua work continues in a letter from Carlos de la Fuente Marcos (Complutense University of Madrid) that analyzes the orbits of 339 known hyperbolic objects and models their histories, finding eight possible interstellar objects within past astronomical observations. Unlike Loeb and Lingam’s population of captured objects, these visitors followed the ‘Oumuamua model, making a single brief appearance, but they offer the possibility that our archives contain further examples of such wanderers. The onset of observations with the Large Synoptic Survey Telescope in the early 2020s may help us further constrain the population of unbound objects.

The paper is Lingam & Loeb, “Implications of Captured Interstellar Objects for Panspermia and Extraterrestrial Life” (preprint). The de la Fuente Marcos paper is “Where the Solar system meets the solar neighbourhood: patterns in the distribution of radiants of observed hyperbolic minor bodies,” Monthly Notices of the Royal Astronomical Society 20 February 2018 (abstract).



The View from the Kuiper Belt

New Horizons continues to push our limits, revealing new sights as it makes its way through the Kuiper Belt en route to a January 1, 2019 encounter with the KBO 2014 MU69. No object this far from the Sun has ever been visited by a spacecraft. Adding further interest is the unusual nature of the target, for MU69 is thought to be a contact binary, two independent bodies that have touched (comet Churyumov–Gerasimenko is likely a contact binary as well). The beauty of this kind of exploration, of course, is that we so often get surprised when we reach our destination.

Below is an image of NGC 3532, also known as the Wishing Well Cluster, an open cluster in the constellation Carina that has its own place in our observational history, becoming the first target ever observed by the Hubble Space Telescope. That was in May of 1990; this is New Horizons’ view in December.

The Wishing Well Cluster is a naked eye object for southern hemisphere observers, one of the most spectacular clusters of its type. It’s worth noting that astronomer John Herschel (1792-1871) considered NGC 3532 one of the most beautiful clusters in the sky, describing “several elegant double stars” during a residence in southern Africa in the 1830s. The New Horizons image below doesn’t bring out its aesthetic appeal (see the following image for that), but it’s stirring nonetheless when we consider how far from home the image was made.

Image: For a short time, this New Horizons Long Range Reconnaissance Imager (LORRI) frame of the “Wishing Well” star cluster, taken Dec. 5, 2017, was the farthest image ever made by a spacecraft, breaking a 27-year record set by Voyager 1. About two hours later, New Horizons broke the record again. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.

Here we’re seeing the work of New Horizons’ Long Range Reconnaissance Imager (LORRI), taken when the spacecraft was 6.12 billion kilometers (40.9 AU) from Earth. And yes, that puts it further away than Voyager 1 was when it took the ‘Pale Blue Dot’ photo back in 1990 — Voyager at that time was 6.06 billion kilometers away. Because the Voyager cameras were turned off not long after that image was made, its distance record for images has stood until now.

By way of comparison, and in the spirit of the great John Herschel, here’s the Wishing Well Cluster in all its glory in an image from the European Southern Observatory’s La Silla site, using the Wide Field Imager instrument.

Image: The MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile captured this richly colourful view of the bright star cluster NGC 3532. Some of the stars still shine with a hot bluish colour, but many of the more massive ones have become red giants and glow with a rich orange hue. Credit: ESO/G. Beccari.

Two hours after the Wishing Well image from New Horizons, LORRI set still another distance record, imaging Kuiper Belt objects 2012 HZ84 and 2012 HE85. The spacecraft’s travels in the Kuiper Belt will be replete with observations of KBOs other than MU69, although none will be approached nearly as closely as the latter. This update from JHU/APL tells us that the plan is to observe at least two dozen KBOs, dwarf planets and Centaurs, hoping to determine their shapes and examine their surface properties, while likewise looking for moons and rings. Meanwhile, measurements of plasma, dust and the neutral-gas environment in this region proceed, useful data for future missions to the heliosphere and beyond.

Image: With its Long Range Reconnaissance Imager (LORRI), New Horizons has observed several Kuiper Belt objects (KBOs) and dwarf planets at unique phase angles, as well as Centaurs at extremely high phase angles to search for forward-scattering rings or dust. This December 2017 false-color image of KBO 2012 HZ84 is, for now, one of the farthest from Earth ever captured by a spacecraft. At the time it was among the closest observations yet made of the mysterious, distant objects known as KBOs. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.

Image: A second KBO image. Here, New Horizons’ range to the KBO 2012 HE85 was only 51 million kilometers, or 0.34 AU – closer than the planet Mars ever comes to Earth. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.



Europa and Enceladus: Hotspots for Life

Icy moons around Jupiter and Saturn offer exciting venues for possible life elsewhere in our Solar System. But how do we penetrate surface ice to reach the oceans below? In today’s post, Kostas Konstantinidis surveys the field of in-situ operations on places like Enceladus and Europa. Enceladus will be a tricky place to land thanks to rough topography and polar lighting conditions. Europa poses its own challenges; once we’re down, how do we power up the technologies to get below the ice? Kostas developed a mission concept for DLR, the German space agency, to sample subsurface plume sources on Enceladus as part of the Enceladus Explorer (EnEx) project. He is currently working on a PhD thesis at Bundeswehr University (Munich) simulating a safe landing on that world, and tells me he hopes that by the end of his academic career, he will have ‘a nice mugshot of an alien microbe swimming around in its natural environment to show for it.’ How to get that mugshot is a fascinating issue, as he explains below.

by Kostas Konstantinidis

The search for life in the solar system has been one of the guiding goals of space exploration since its conception. The recent discoveries that the icy moons of the giant planets contain vast oceans have made them prime targets for that search. In particular, Jupiter’s moon Europa and Saturn’s moon Enceladus (Figure 1) are currently the most promising candidates among the icy moons, as they appear to fulfill the basic requirements to host life: heat generated by the tidal pull of the parent planet maintains a subglacial ocean in a liquid state and in direct contact with the moon’s rocky core, where reactions critical for creating the building blocks of life as we understand it can occur. Exchange processes through the thick ice shells covering these moons, much like those in the polar regions of Earth, mean that further chemicals needed for life are transported all the way down to the ocean from the surface, where they have been delivered by micrometeoroids and other sources. The chemical makeup of the plume jets found emanating from the south pole of Enceladus by the recently decommissioned Cassini spacecraft further points to a chemically rich ocean hospitable to microbial life.

Figure 1: Jupiter’s moon Europa (left) and Saturn’s moon Enceladus (right), currently the most promising targets for the search of life in the solar system. Credit: Wikimedia

As evidence for the habitability of Europa and Enceladus mounts, two questions arise: where on these moons could microbial ecosystems exist, and how could we investigate them?

Signs of life and potential ecosystems on Europa and Enceladus

The currently most plausible hypothesis is that life might emerge and flourish near hydrothermal vents on the ocean bottom. Around such vents an exotic energy-generating process occurs: carbon dioxide dissolved in ocean water reacts to form organic matter. But instead of sunlight supplying the energy, as in familiar photosynthesis, microorganisms use the chemical energy of the vent fluids. This process is called chemosynthesis, and ecosystems of such chemotrophic microbes can flourish around the vents. Since the discovery of the first hydrothermal vent on the Earth's ocean floor in 1977, it has even been proposed that life on Earth itself originated at such a location.

Figure 2: A hydrothermal vent in the bottom of the Earth’s ocean. Credit: Smithsonian Magazine

Once life emerges near the vents, it can migrate and populate other hospitable niches in the ocean of an icy moon (Figure 3). A first niche where microbes could survive is the interface between the ice shell and the ocean. Exchange processes transferring chemicals from the surface mean that a concentration of chemicals and nutrients could be present at the bottom of the ice shell, from which microbial life could be sustained.

There are various transport mechanisms through which any microbial life could be carried closer to the surface. The most direct is through channels that connect directly to the plumes on the surface. Such plumes have been observed on Enceladus, and there are strong indications that they exist on Europa [3]. Microbes from the ocean could be carried along with the ocean water and then ejected into space via the plumes. The microbes would remain in their original state up to a certain depth below the plumes and could even form small microbial communities in pockets of liquid water close to the plume channel. Any life forms ejected by the plumes will be heavily altered by exposure to vacuum, but signatures of life might still be detectable in the plume material and in the deposits of plume material on the surface of the icy moon.

There are also less dramatic ways for life and its signs to be transferred from the ocean closer to the surface. Glaciological processes in the ice shell slowly transfer ocean material upwards. Geological features on the surface point to the existence of subglacial lakes and liquid water pockets that might or might not host ecosystems of microbes originating from the ocean. Material from the ocean can eventually reach the surface, particularly in areas showing evidence of intense glaciological activity such as breaks in the ice shell and overturned ice blocks. Once on the surface, ocean material, and any signs of possible life it contains, is degraded by the strong radiation surrounding the giant planets.

Figure 3: Potential habitable regions on an icy moon, and transport mechanisms of ocean material to the surface [1]

The development of instruments to detect the signs of life, and to study any existing life, is a fascinating field in its own right. I will not discuss those instruments here; instead I will describe mission concepts and some of the technologies necessary to deliver such instruments to their target environments on Europa and Enceladus.

Plume fly-through

A first mission concept aims to take advantage of the "free samples" of fresh ocean material that the plumes kindly offer (Figure 4). In this concept, a spacecraft either flies through the plumes, analyzes the captured sample on the spot, and transmits the resulting data back to Earth, or it returns the captured sample to Earth for analysis in dedicated laboratories.

The Cassini spacecraft, until recently in orbit around Saturn, flew through the plumes several times but was not equipped to perform the sophisticated measurements necessary for definitive life detection. A mission concept combining on-the-spot sampling and sample return to Earth is the LIFE (Life Investigation For Enceladus) mission, originally proposed in 2009 [4].

The main technological challenge is instrumentation to capture the sample at the high relative velocities involved without evaporating it. This is done using aerogel, an extremely low-density material that decelerates impacting particles gradually. If the sample is to be returned to Earth for analysis, very strict planetary protection rules require that the risk of exposing it to the Earth environment be minimal. This means that atmospheric reentry of the sample-carrying vehicle must be very reliable, and that expensive installations and labs must be built to handle the sample safely. A similar mission, returning a sample from the tail of a comet, was Stardust [5], which successfully brought its sample back to Earth in 2006.

Figure 4: Cassini flying through the plumes of Enceladus. A plume flythrough mission would follow the same concept. Credit: MailOnline

Surface and near sub-surface

As we saw above, glaciological processes can transport material from the ocean all the way to the surface over thousands of years. Material on the surface, and down to roughly a meter below it, is degraded by the intense radiation around the parent planet of the icy moon (especially for Europa, orbiting Jupiter). Sampling the surface, and especially the near subsurface, can therefore allow the detection of certain signs of microbial life. We can gain access to the surface with planetary landers; NASA is currently planning such a lander mission to Europa (Figure 6) [2].

Figure 5: Typical terrains of the icy moons. Left: a chaos terrain on Europa. Right: canyon terrain on Enceladus. Inset: A digital elevation model of a Europa terrain. Credit: NASA

Figure 6: Illustration of the NASA Europa Lander on the surface on Europa. Credit: NASA

Due to the various glaciological processes on Europa and Enceladus, the terrain on these moons tends to be very rough (Figure 5). This roughness significantly affects the design of the landing system. To land safely, the lander must sense the terrain and be able to autonomously choose a new landing spot if the original one proves too risky. Hazard detection uses sensors such as cameras and scanning lidar; on-board software then selects the optimal landing area. The lander must also know its own position accurately enough to land precisely on a safe spot. Using input from the same sensors, software detects surface features and tracks them over time to infer the lander's position relative to them. If a trajectory to a new, safer landing spot must be calculated, on-board guidance software does so, and the lander's thrusters are commanded to follow it (Figure 7).
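The hazard detection and retargeting logic described above can be sketched as a simple decision loop over a terrain map. This is an illustrative toy, not flight software: the grid representation, the slope and roughness thresholds, and the scoring rule are all invented for the example.

```python
import numpy as np

def select_landing_site(elevation, cell=1.0, max_slope_deg=15.0, max_rough=0.3):
    """Pick the safest cell of a terrain elevation map (toy model).

    elevation: 2D array of terrain heights [m], one value per grid cell.
    Returns (row, col) of the chosen site, or None if nothing is safe.
    """
    gy, gx = np.gradient(elevation, cell)            # local slope components
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))  # slope angle per cell

    # Roughness: deviation of each cell from the mean of its 3x3 neighborhood
    rough = np.abs(elevation - _neighborhood_mean(elevation))

    safe = (slope < max_slope_deg) & (rough < max_rough)
    if not safe.any():
        return None                                  # no safe spot: retarget from orbit
    # Among the safe cells, prefer the flattest one
    score = np.where(safe, slope, np.inf)
    return np.unravel_index(np.argmin(score), score.shape)

def _neighborhood_mean(z):
    """Mean of each cell's 3x3 neighborhood, with edge padding."""
    pad = np.pad(z, 1, mode="edge")
    return sum(pad[i:i + z.shape[0], j:j + z.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
```

Run on a flat plain with a single boulder-like bump, the selection steers away from the bump; a real system would add feature tracking for position estimation and a guidance law to reach the chosen cell.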

Such a landing scenario is significantly more challenging than those of past Moon and Mars lander missions, where landing areas were less hazardous and less landing accuracy was required, since a rover could typically be deployed after landing and drive up to several kilometers to interesting targets on the surface.

Figure 7: The landing sequence of the ESA Lunar Lander mission concept [16]. A landing on the icy moons would follow a similar scenario. Credit: ESA

Another concept for sampling the near subsurface is the so-called planetary penetrator: a bullet-shaped vehicle that is deployed from orbit and impacts the planetary surface at a velocity of close to 300 m/s (over 1000 km/h). Thanks to the high impact velocity, the penetrator embeds itself a few meters under the ice. Once there it can deploy its instruments and sample the surrounding ice, which is unaffected by the radiation at the surface (Figure 8).

A few penetrators have been sent to Mars and the Moon as secondary payloads, unfortunately without success. A penetrator concept for Europa is CLEP (Clipper Europa Penetrator), which was considered as a secondary payload for the upcoming NASA Europa Clipper mission [6].

Penetrators can be a relatively inexpensive way to sample the surfaces of the icy moons, but they are risky, as their track record demonstrates. The main technological challenge for penetrators is to develop a meaningful set of miniaturized instruments that can withstand the high impact shock, along with other resilient subsystems, such as the communications equipment that relays the measurement data. A simpler version of the landing system described above could help target a penetrator more accurately toward safer areas.
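To get a feel for that impact shock, assume constant deceleration from the stated ~300 m/s over the embedding depth; the 3 m stopping distance below is an assumed illustrative figure, not a quoted design value.

```python
G0 = 9.81  # standard gravity, m/s^2

def impact_load(v, depth):
    """Average deceleration (in g) for a penetrator stopping from
    velocity v [m/s] over a distance depth [m], assuming constant
    deceleration: a = v^2 / (2 * depth)."""
    return v**2 / (2.0 * depth) / G0

# ~300 m/s stopping over ~3 m of ice
print(round(impact_load(300.0, 3.0)), "g")  # roughly 1500 g
```

Even this idealized average is around 1500 g, which is why every instrument and subsystem on board must be shock-hardened.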

Figure 8: The concept of operations of a penetrator for Europa. Inset: Illustration of a penetrator and its delivery vehicle. Credit: University of Surrey

Subsurface liquid water pockets

Water pockets in the ice shells of Enceladus and Europa are interesting targets in the search for possible signs of life. Arguably the most promising spot for accessing near-subsurface liquid water pockets is right under the plume sources on Enceladus (and possibly also on Europa). The combination of relatively easy access to these water pockets and the freshness of the material they contain makes them a very desirable target.

To access these subglacial areas, a probe must melt through up to a few hundred meters of ice to sample the potential liquid water pocket habitat. The melting probe is deployed and set melting by a lander. A mission concept to sample subglacial water pockets under the plume sources on Enceladus was developed (by the author) as part of the Enceladus Explorer (EnEx) project funded by the German space agency DLR [7][8].

Figure 9: Illustration of the Enceladus Lander concept [7]: a lander touches down inside the plume source canyon and deploys the IceMole melting probe (red circle on the left); right, a cut-out of a plume source canyon on Enceladus. Adapted from [9]

To melt through the few hundred meters of ice, a so-called shallow melting probe is needed. Electrical power produced on the lander is transferred through a cable to electrical melting heads on the probe. By heating the melting heads on different sides of the probe, the probe becomes maneuverable and can avoid obstacles under the ice such as meteorites and cracks. The design of an appropriate cable is itself a challenging proposition: it must be lightweight yet strong enough to withstand the tug and pull of the surrounding ice, it must unroll without getting stuck or causing other issues, and it must carry high power loads with minimal loss. To take proper advantage of its maneuverability, the probe must maintain accurate knowledge of its position using on-board navigation sensors and be capable of autonomously making decisions to avoid subglacial obstacles. An important issue with such melting probes is their large power demand: much of the heat produced at the melting heads is conducted away into the surrounding ice rather than contributing to melting, so the probe's power needs are large. Currently those needs can be met only by a small nuclear reactor on board the lander, or at best by high-capacity fuel cells. Both solutions face limitations, chiefly the large mass of reactors and the issues accompanying the use of nuclear material near a potential extraterrestrial habitat. There is, however, a relative advantage: the reactor stays on the surface and does not go under the ice where the potential habitat is.
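A rough energy budget shows why the power demand is so large. The figures below (channel radius, surface ice temperature, melt duration, and an average specific heat for cold ice) are assumptions chosen for illustration, not EnEx design values, and conductive losses to the surrounding ice would raise the real requirement further.

```python
import math

RHO_ICE = 917.0   # density of ice, kg/m^3
C_ICE   = 1600.0  # assumed mean specific heat of ice over ~100-273 K, J/(kg K)
L_FUS   = 334e3   # latent heat of fusion of water ice, J/kg

def melt_power(depth_m, radius_m, t_surface_k, days):
    """Ideal average power [W] to melt a cylindrical channel through ice,
    warming it from t_surface_k to the melting point and then melting it,
    ignoring conduction losses into the surrounding ice."""
    mass = RHO_ICE * math.pi * radius_m**2 * depth_m
    energy = mass * (C_ICE * (273.15 - t_surface_k) + L_FUS)
    return energy / (days * 86400.0)

# 200 m channel, 10 cm probe radius, 100 K surface ice, one month of melting
print(f"{melt_power(200.0, 0.10, 100.0, 30.0):.0f} W")
```

Even in this loss-free idealization the probe needs well over a kilowatt for a month-long descent, which is what pushes the concept toward reactors or very high capacity fuel cells on the lander.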

An example of a maneuverable melting probe is the IceMole, developed by FH Aachen in Germany. The probe combines screwing and melting for increased maneuverability. The IceMole was tested near Blood Falls in Antarctica in 2015, in a scenario analogous to melting through the ices of Enceladus [10].

Subglacial water pockets tend to lie in geologically rough areas, and the plume source terrain on Enceladus is a good example. The pockets are situated along the bottoms of canyons at the south pole of Enceladus, dubbed the Tiger Stripes (Figure 5). The rough canyon topography, the superfine snow deposited on the canyon terrain by plume fallout, and the polar lighting conditions together make a landing near the plumes very difficult, with accuracy and safety requirements far more demanding than those of any past or currently planned automated planetary landing. An even more sophisticated version of the landing system discussed above will be needed for such a landing.

One of the most challenging problems of astrobiological planetary exploration that we have not yet discussed is planetary protection: the need to avoid contaminating any possible habitable niches in the target environments with microbial life from Earth. Some microbes can survive the radiation and vacuum of space for years, tucked away in parts of a spacecraft. For sensitive missions, a great deal of work goes into identifying and limiting the bioload that remains on a spacecraft or lander, and such microbes can be very persistent: reducing microbial loads to acceptable levels can be a daunting task. With the lander aiming to touch down right next to an opening that very likely leads directly to the ocean of an icy moon, reducing the risk of inadvertently introducing Earth microbes and other contaminants becomes a driving design requirement. Considering the risky nature and statistics of past planetary landings, this will be a difficult thing to ensure. The melting probe that directly accesses the potential habitat must be sterilized to levels so far unprecedented in planetary protection.

Accessing the ocean

The subglacial ocean is the main source of interest in the icy moons, as it is the most likely origin and host of any currently existing microbial life there. To gain access to the ocean itself, a mission would have to melt through the entire ice shell, a distance of at least a few kilometers. A probe could study the ice-ocean interface or simply float in the ocean while sampling it. To sample a hydrothermal vent on the ocean bottom, a submersible would have to navigate several kilometers down and locate a vent there. It could then study the potential source of life on the icy moons, arguably the single most promising location in the solar system for the existence of life.

Figure 10 illustrates a mission concept for deploying a submersible in the ocean of an icy moon. Since the bulk of the science of such a mission would be performed by the submersible, it must be ensured that it reaches the ocean with high likelihood. For the mission to be viable, the landing must be safe and reliable, significantly more so than past planetary lander missions have been.

Figure 10: Left: Concept for the deployment of a submersible to explore the ocean floor of Europa. Credit: Cornell University. Right: Nuclear heater melting probe concept. Credit: NASA

Melting through several kilometers of ice will be quite a challenge for a mission to access the ocean of an icy moon, and potent melting methods will be necessary. The shallow melting probe described above could of course be adapted for deep melting, but the length of the cable needed could prove problematic. A related method is to melt the ice with a high-energy laser beam, powered by a high-performance energy source on the lander and delivered through a fiber optic cable. The main advantage is that the power losses associated with an electrical cable and melting head are eliminated, making melting more efficient. This would, however, require even larger amounts of power generated at the surface by the lander, which in turn means even heavier power sources. Such a melting probe is being developed by Stone Aerospace [11].

A method of melting through the ice that circumvents cables entirely is to provide thermal power directly on board the melting probe, using nuclear heat sources. Such heat sources have been used in space exploration since the 1960s; they are typically non-fissile isotopes such as plutonium-238 that produce large amounts of heat through radioactive decay (Figure 11). Bricks of such material, enclosed in vessels highly resistant to impact and explosion, are used either to heat the surrounding ice and produce hot water jets (Figure 10) or to melt the ice by direct contact. Hot water jets allow melting with some degree of maneuverability. Such power sources face possible objections to transporting nuclear material to a potential extraterrestrial habitat, concerns somewhat alleviated by the containment vessels. But the heat generated by these nuclear sources cannot be turned off, and it could keep alive Earth microbes that hitched a ride on the melting probe, allowing them to survive and contaminate the ocean. Still, such power sources offer a relatively lightweight and efficient way of melting through the ice. Communication back to the lander is handled by dropping radio or acoustic relay stations behind in the ice.
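The appeal of radioisotope heat follows from the specific power of plutonium-238, roughly 0.57 W of decay heat per gram when freshly produced. Taking an assumed kilowatt-class thermal requirement for a melting probe (an illustrative figure, not a mission specification):

```python
PU238_SPECIFIC_POWER = 0.57  # W of decay heat per gram of fresh Pu-238

def pu238_mass_kg(thermal_watts):
    """Mass of fresh Pu-238 [kg] needed to supply a given thermal power."""
    return thermal_watts / PU238_SPECIFIC_POWER / 1000.0

# e.g. 1.5 kW of melting heat
print(f"{pu238_mass_kg(1500.0):.1f} kg")  # a few kilograms
```

A few kilograms of isotope, plus containment, compares very favorably with the mass of a reactor or a cable reel, which is why cable-free radioisotope concepts keep reappearing despite the planetary protection concerns.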

Figure 11: A pellet of plutonium-238, used for radioisotope thermoelectric generators. The pellet can be seen glowing from its own heat. Credit: Wikimedia

Once in the ocean, the submersible is deployed and must navigate towards a plume source. It must do so over a depth of tens of kilometers, despite ocean currents, the lack of environmental reference points to help localize it, and the lack of contact with operators on Earth. The submersible must then communicate the collected data to a relay station remaining at the ice-ocean interface, which forwards the data back to the lander.

This requires an impressive capability for autonomous localization in the ocean and, to a large extent, autonomous operations. The DLR-funded Europa Explorer (EurEx) project has been developing navigation and autonomy methods for such an exploration scenario [12] (Figure 12). The ARTEMIS submersible being developed by Stone Aerospace is another example of the technology development necessary for exploring the oceans of the icy moons [13]. A recent study by Cornell University investigated alternative mobility in these oceans using (appropriately squid-like) biomimetic soft robots, along with novel power harvesting via electromagnetic tethers deployed in the ocean [14]. Such alternative approaches hold promise for new and unexpected ways to explore the icy moons.

Figure 12: Schematic overview of a possible mission scenario for the EurEx submersible. The submersible is deployed at the ice-ocean interface, navigates to the ocean bottom for scientific measurements, and returns to the deployment point for data transfer and battery recharging [12]


The search for extant or extinct life in the solar system has been one of the driving motivations behind space exploration. Several missions have explored Mars and identified it as a very promising place where microbial life could once have existed. The next step in the search is to turn to the places most likely to host life today: the icy moons. A programmatic shift in priorities towards the search for life may be in the works at the space agencies, as suggested by NASA's establishment of an Ocean Worlds program [15]. An ambitious stepwise exploration program in the coming decades, comprising the mission concepts described above, would be one of the most challenging undertakings in the history of space exploration. It would, however, help answer some of humanity's most fundamental questions about the origin and prevalence of life in the universe.


[1] Europa Lander SDT Report & Mission Concept, presentation to OPAG in Atlanta, GA, Kevin Hand et al., 2017

[2] Europa Lander Study 2016 Report, Europa Lander Mission Concept Team, 2016, https://solarsystem.nasa.gov/docs/Europa_Lander_SDT_Report_2016.pdf

[3] https://www.space.com/36464-jupiter-moon-europa-water-plume-hubble.html

[4] LIFE – Enceladus Plume Sample Return via Discovery, Peter Tsou et al., 45th Lunar and Planetary Science Conference, 2014

[5] https://stardust.jpl.nasa.gov/home/index.html

[6] CDF study report CLEO/P, Assessment of a Europa Penetrator Mission as part of NASA Clipper Mission, 2015

[7] A lander mission to probe subglacial water on Saturn's moon Enceladus for life, Acta Astronautica, v. 106, p. 63-89, 2015, https://www.sciencedirect.com/science/article/pii/S0094576514003610

[8] http://www.dlr.de/rd/en/desktopdefault.aspx/tabid-10572/18379_read-42824/

[9] The possible origin and persistence of life on Enceladus and detection of biomarkers in the plume., C.P. McKay et al., Astrobiology. 8 (2008) 909–19. doi:10.1089/ast.2008.0265.

[10] Blood Falls – EnEx probe collects first ‘clean’ water samples, DLR press release, 2015, http://www.dlr.de/dlr/presse/en/desktopdefault.aspx/tabid-10172/213_read-12733/#/gallery/18529

[11] VALKYRIE, A prototype cryobot for clean subglacial access and sampling, http://stoneaerospace.com/valkyrie/

[12] Design of an Autonomous Under-Ice Exploration System, Mark Hildebrandt et al., 2013, http://ieeexplore.ieee.org/document/6741164/

[13] ARTEMIS, The Robotic Search for Life on Icy Worlds Begins in the Analog Environment Beneath the McMurdo Ice Shelf, http://stoneaerospace.com/artemis/

[14] Soft-Robotic Rover with Electrodynamic Power Scavenging, NIAC Phase 1 report, Cornell University, https://www.nasa.gov/sites/default/files/atoms/files/11-2015_phase_i_mason_peck_soft_robotic_rover_electrodynamic_power_scavenging.pdf

[15] https://www.nasa.gov/specials/ocean-worlds/

[16] The European Lunar Lander: Robotics Operations in a Harsh Environment, Presentation, R. Fisackerly



Lunar Recession: Implications for the Early Earth

It was in 1775 that Pierre-Simon Laplace developed his theories of tidal dynamics, formulating in the following year a set of equations to explain the phenomenon at a greater level of detail than ever before. Looking at the Moon on a frosty winter night, it’s pleasing to realize that there is a mountainous region at the end of Montes Jura in Mare Imbrium that is called Promontorium Laplace. Surely the French astronomer and mathematician would have been pleased.

One result of Laplace's calculations was his pointing out that the Moon's equatorial bulge is far too large to be accounted for by its current rate of rotation. Here we're dealing with the formation conditions of an object thought to have resulted from a collision between the Earth and a Mars-sized planet early in our system's evolution. I seldom write about the Moon in these pages, but today's story on its development catches my eye because it relates to the early history of our own world and the Solar System itself. For Chuan Qin (now at Harvard University) and colleagues have modeled how quickly the hot young Moon receded from the Earth.

The current rate of the Moon’s recession from the Earth is about 4 centimeters per year. But what was the recession rate in the earliest periods of its formation?

Image credit: University of Colorado at Boulder.

The tidal bulge at the equator evidently has much to tell us. A hot, fast-rotating early Moon would have possessed a much larger equatorial bulge than today's. As the Moon moved farther from the Earth and its rotation slowed, the bulge would have shrunk until, cooled and hardened, a permanent 'fossil' bulge remained in its crust. Working with a model that adjusts the relative timing of lithosphere thickening and lunar orbit recession, Qin and team have found that early lunar recession was slow, with the bulge forming over several hundred million years in an era roughly four billion years ago.

If this dynamic modeling is correct, it can tell us something about the early Earth, says Shijie Zhong (University of Colorado at Boulder), a co-author on the paper:

“The moon’s fossil bulge may contain secrets of Earth’s early evolution that were not recorded anywhere else. Our model captures two time-dependent processes and this is the first time that anyone has been able to put timescale constraints on early lunar recession.”

The new model has implications for the hydrosphere, the combined mass of water on the early Earth. The researchers argue that the Moon’s equatorial bulge is evidence that Earth’s energy dissipation in response to tidal forces would have been greatly reduced in this period. That’s assuming that a hydrosphere even existed in the Hadean, a geologic eon that began with the planet’s formation some 4.6 billion years ago and ended roughly 4 billion years ago. From the paper:

Viable solutions indicate that lunar bulge formation was a geologically slow process lasting several hundred million years, that the process was complete about 4 Ga when the Moon-Earth distance was less than ~32 Earth radii, and that the Earth in Hadean was significantly less dissipative to lunar tides than during the last 4 Gyr, possibly implying a frozen hydrosphere due to the fainter young Sun.

The paper makes the case that Earth's hydrosphere may have been frozen during this period, producing little tidal dissipation. A faint young Sun, radiating about 30 percent less energy than today, could have produced such a 'snowball Earth' in the Hadean, though we have no direct evidence for this in the geological record. Qin and team intend to continue work on their model as they dig deeper into the Moon's evolution in the period ending with the Late Heavy Bombardment some 3.8 billion years ago.
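A back-of-envelope integration shows why a less dissipative early Earth is needed at all. For a constant tidal quality factor, the recession rate scales steeply with distance, da/dt proportional to a^(-11/2). Calibrating that law to today's distance and rate (384,400 km and about 3.8 cm/yr) and integrating backward brings the Moon all the way down to Earth in well under the Moon's 4.5-billion-year age; the starting distance assumed below is an illustrative figure and barely affects the result.

```python
A_NOW   = 3.844e8  # present Earth-Moon distance, m
DADT    = 0.038    # present recession rate, m/yr
A_START = 2.5e7    # assumed formation distance (~4 Earth radii), m

# da/dt = C * a**(-5.5)  =>  t = (a2**6.5 - a1**6.5) / (6.5 * C)
C = DADT * A_NOW**5.5  # calibrated to the present-day rate

t_years = (A_NOW**6.5 - A_START**6.5) / (6.5 * C)
print(f"{t_years / 1e9:.2f} billion years")  # ~1.56
```

Constant present-day dissipation thus allows only about 1.5 billion years of recession, far short of lunar history; the shortfall is resolved if tidal dissipation was much lower in the Hadean, exactly the regime the fossil-bulge model points to.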

The paper is Qin et al., “Formation of the Lunar Fossil Bulges and Its Implication for the Early Earth and Moon,” Geophysical Research Letters 2 February 2018 (abstract).



TRAPPIST-1: Planets Likely Rich in Volatiles

Yesterday we saw that, by pushing the Hubble telescope to its limits, we could make a call about three of the TRAPPIST-1 planets — d, e and f — and one possibility for their respective atmospheres. The Hubble data rule out puffy atmospheres rich in hydrogen for these three (TRAPPIST-1 g needs more work before a definitive call can be made there).

This is a useful finding, for hydrogen is a greenhouse gas that can heat planets close to their star beyond our usual norms for habitability. For a world set out deeper in a stellar system, think of Neptune, a gaseous planet far different from the kind of rocky, terrestrial-class planets most likely to produce surface water. So on balance the Hubble work, while not telling us what these atmospheres actually are, does rule out the Neptune scenario. That leaves open the question of whether future instruments will find more compact atmospheres.

The James Webb Space Telescope should be able to probe these worlds, perhaps revealing heavier gases like methane, carbon dioxide, water and oxygen. Meanwhile, we have another new paper to look at, from lead author Simon Grimm and colleagues, taking another angle on the composition of the TRAPPIST-1 worlds. Grimm (University of Bern Centre for Space and Habitability) and team have produced new mass estimates that allow a more fine-grained appraisal of the planets’ density, which is a step toward characterizing each planet.

Image: This chart shows, on the top row, artist concepts of the seven planets of TRAPPIST-1 with their orbital periods, distances from their star, radii, masses, densities and surface gravity as compared to those of Earth. These numbers are current as of February 2018. On the bottom row, the same numbers are displayed for the bodies of our inner solar system: Mercury, Venus, Earth and Mars. The TRAPPIST-1 planets orbit their star extremely closely, with periods ranging from 1.5 to only about 20 days. This is much shorter than the period of Mercury, which orbits our sun in about 88 days. Credit: NASA/JPL-Caltech.

The work of Grimm and team is yet another illustration of why TRAPPIST-1 is such a remarkable target. A transit can tell us about the radius of the world between us and the star, but we also need mass information to make a call on density. Calling this system “…a fascinating setting to study the formation and evolution of tightly-packed small planet systems…” the paper explains the problem in a nutshell:

While the TRAPPIST-1 planet sizes are all known to better than 5%, their densities suffer from significant uncertainty (between 28 and 95%) because of loose constraints on planet masses. This critically impacts in turn our knowledge of the planetary interiors, formation pathway (Ormel et al. 2017; Unterborn et al. 2017) and long-term stability of the system. So far, most exoplanet masses have been measured using the radial-velocity technique. But because of the TRAPPIST-1 faintness (V=19), precise constraints on Earth-mass planets are beyond the reach of existing spectrographs.

Fortunately, in this system we are dealing with seven tightly packed planets, all orbiting within the distance of Mercury's orbit around the Sun. In this resonant chain, more massive planets perturb the orbits of lighter ones, creating transit timing variations (TTVs) that can be modeled to produce mass values for each world. A total of 284 transit timings obtained with the SPECULOOS and TRAPPIST instruments between September 17, 2015 and March 27, 2017 were complemented by previously published TRAPPIST data as well as Spitzer and Kepler (K2) observations.

The models employed are anything but simple. In fact, the researchers had to fit 35 different parameters, a problem they tackled with new computer algorithms. Simulating the orbits until the calculations agree with the observed TRAPPIST-1 transit times tightens up the previous mass estimates. The work absorbed a year, says Grimm, who goes on to explain:

“The TRAPPIST-1 planets are so close together that they interfere with each other gravitationally, so the times when they pass in front of the star shift slightly. These shifts depend on the planets’ masses, their distances and other orbital parameters. With a computer model, we simulate the planets’ orbits until the calculated transits agree with the observed values, and hence derive the planetary masses.”

What emerges corroborates what the Hubble data show. Rather than being gaseous worlds, the TRAPPIST-1 planets are primarily made of rock. Moreover, they contain significant amounts of volatiles, probably water, given that water in the form of vapor, liquid or ice is the most abundant source of volatiles in the kind of protoplanetary disk that would have produced this system. In some cases water can amount to 5% of a planet's mass; by contrast, Earth is only about 0.02% water by mass. Some of the TRAPPIST-1 planets could thus hold 250 times more water than the Earth's oceans.
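The density step is simple once TTVs pin down the masses: in Earth units, bulk density scales as mass over radius cubed. The mass and radius figures below are approximate values for planets d and e from the Grimm et al. results, used purely to illustrate the calculation.

```python
def density_rel_earth(mass_me, radius_re):
    """Bulk density relative to Earth, from mass and radius in Earth units:
    (M / M_E) / (R / R_E)**3."""
    return mass_me / radius_re**3

# Approximate values from Grimm et al. (illustrative)
print(f"e: {density_rel_earth(0.77, 0.91):.2f} x Earth")  # slightly denser than Earth
print(f"d: {density_rel_earth(0.30, 0.78):.2f} x Earth")  # much less dense: volatile-rich

# Water mass fractions quoted in the text: 5% vs Earth's 0.02%
print(round(0.05 / 0.0002))  # -> 250
```

These ratios reproduce the qualitative picture in the text: TRAPPIST-1 e comes out denser than Earth, while the low density of d implies a substantial volatile envelope of some kind.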

TRAPPIST-1 b and c, the two innermost planets, appear to have rocky cores and thick atmospheres, according to this work, while TRAPPIST-1 d, the lightest of the planets at about 30 percent of Earth's mass, may have a large atmosphere, an ocean or an ice layer. Any of the three would account for the volume of volatiles implied by a planet of this density.

TRAPPIST-1 e turns out to be somewhat denser than the Earth, suggestive of a dense iron core and, perhaps, the absence of a thick atmosphere, ocean or ice layer. In terms of insolation from the central star, as well as size and density, this is the planet most like the Earth. The question of why it seems to have a rockier composition than any of its companions remains unresolved.

As to the outer worlds, TRAPPIST-1 f, g and h are distant enough for ice to be frozen on their surfaces, and according to Grimm’s team, are unlikely to have any more than thin atmospheres.

Image: This graph presents known properties of the seven TRAPPIST-1 exoplanets (labeled b through h), showing how they stack up to the inner rocky worlds in our own solar system. The horizontal axis shows the level of illumination that each planet receives from its host star. TRAPPIST-1 is a mere 9 percent the mass of our Sun, and its temperature is much cooler. But because the TRAPPIST-1 planets orbit so closely to their star, they receive comparable levels of light and heat to Earth and its neighboring planets. The vertical axis shows the densities of the planets. Density, calculated based on a planet’s mass and volume, is the first important step in understanding a planet’s composition. The plot shows that the TRAPPIST-1 planet densities range from being similar to Earth and Venus at the upper end, down to values comparable to Mars at the lower end. Credit: NASA/JPL-Caltech.

Among the most interesting things about TRAPPIST-1 is the history of its planetary system, which the paper addresses this way:

The resonant structure of the TRAPPIST-1 system (Luger et al. 2017) is a telltale sign of orbital migration (Terquem & Papaloizou 2007; Ogihara & Ida 2009). The fact that all seven planets form a single resonant chain indicates that the entire system migrated in concert (Cossou et al. 2014; Izidoro et al. 2017). Indeed, orbital solutions generated by disk-driven migration have been shown to be more stable than other solutions (Tamayo et al. 2017b). Whereas most resonant systems are likely to be unstable (Izidoro et al. 2017; Matsumoto et al. 2012), the TRAPPIST-1 system can be interpreted as one that underwent a relatively slow migration creating a long-lived resonant system.
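The resonant chain is visible directly in the orbital periods: each adjacent pair of planets sits near a small-integer period ratio. A quick illustration, using approximate published period values (rounded figures from the discovery papers, quoted here only to show the spacing):

```python
from fractions import Fraction

# Orbital periods in days (approximate published values)
periods = {"b": 1.5109, "c": 2.4218, "d": 4.0496, "e": 6.0996,
           "f": 9.2067, "g": 12.354, "h": 18.767}

# Each adjacent pair of planets sits near a small-integer period ratio.
names = list(periods)
for inner, outer in zip(names, names[1:]):
    ratio = periods[outer] / periods[inner]
    approx = Fraction(ratio).limit_denominator(5)
    print(f"{outer}/{inner}: {ratio:.3f} ~ {approx}")
```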

We will need the James Webb Space Telescope to move to greater certainty on the question of whether atmospheres actually exist here and what they are made of. I notice that the robotic SAINT-EX Observatory is under construction in Mexico, with the goal of searching for terrestrial planets around cool stars like TRAPPIST-1 (it will also provide ground support for the European Space Agency’s CHEOPS mission). Demory and team hope to apply the computer code they used in the TRAPPIST-1 work on systems detected by SAINT-EX, which should begin operations this year.

The paper is Grimm et al., “The nature of the TRAPPIST-1 exoplanets,” in press at Astronomy & Astrophysics (preprint).



Falcon Heavy: Extraordinary!

The Tau Zero Foundation and Centauri Dreams congratulate the Space Exploration Technologies team for the successful, historic, pioneering test flight of the Falcon Heavy.

Ad Astra Incrementis indeed!

From all of us,

Jeff Greason
Marc Millis
Rhonda Stevenson
Andrew Aldrin
Paul Gilster
Bill Tauskey
Rod Pyle



Probing TRAPPIST-1 Planetary Atmospheres

This week offers two interesting papers about the TRAPPIST-1 planets, one from Hubble data looking at the question of hydrogen in potential planetary atmospheres, the other drawing on data from the European Southern Observatory’s Paranal facility as well as the Spitzer and Kepler space-based instruments. We’ll look at the Hubble work this morning and move on to the second paper tomorrow. Both offer meaty stuff to dig into, for we’re beginning to characterize these seven planets, which form a unique laboratory for the study of red dwarf systems.

Published in Nature Astronomy, the Hubble results screen four of the TRAPPIST-1 planets — d, e, f and g — to study their potential atmospheres in the infrared, using Hubble’s Wide Field Camera 3 in data collected from December 2016 to January 2017. The data allow us to rule out a cloud-free hydrogen-rich atmosphere on three of these worlds, while TRAPPIST-1g needs further observation before a hydrogen atmosphere can be conclusively excluded.

Image: These spectra show the chemical makeup of the atmospheres of four Earth-size planets orbiting within or near the habitable zone of the nearby star TRAPPIST-1. The habitable zone is a region at a distance from the star where liquid water, the key to life as we know it, could exist on the planets’ surfaces. To obtain the spectra, astronomers used the Hubble Space Telescope to collect light from TRAPPIST-1 that passed through the exoplanets’ atmospheres as the alien worlds crossed the face of the star. Credit: NASA, ESA, and Z. Levy (STScI).

Pay particular attention to the purple curves in the above image. These show the signature we would expect from gases like water and methane, which would appear if any of these planets had a hydrogen-dominated atmosphere like Neptune’s. The spectroscopic signature would be strong in the near-infrared. The Hubble results, indicated by the green crosses, show no evidence of such an extended atmosphere for TRAPPIST-1 d, e and f.

Julien de Wit (Massachusetts Institute of Technology), lead author on the paper, explains the significance of the finding:

“The presence of puffy, hydrogen-dominated atmospheres would have indicated that these planets are more likely gaseous worlds like Neptune. The lack of hydrogen in their atmospheres further supports theories about the planets being terrestrial in nature. This discovery is an important step towards determining if the planets might harbour liquid water on their surfaces, which could enable them to support living organisms.”

The work proceeded through transmission spectroscopy, in which some of the star’s light passes through the planetary atmosphere and leaves a distinctive trace in the star’s spectrum. The beauty of the TRAPPIST-1 system is that we have so many transits to work with. All seven planets orbit much closer to their star than Mercury does to the Sun, so transits occur frequently, and because the star is so dim and cool, liquid water remains possible on some of these close-in surfaces. We’re also dealing with a planetary system that is a relatively nearby 40 light years away.
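Why does a hydrogen atmosphere stand out so clearly in transmission? The amplitude of spectral features scales with the atmospheric scale height H = kT/(μg), and hydrogen's low mean molecular weight (μ ≈ 2.3) makes H, and hence the signal, nearly an order of magnitude larger than for a water-dominated atmosphere. A rough estimate (the planetary and stellar parameters below are illustrative assumptions, not values from the paper):

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
AMU = 1.66054e-27    # atomic mass unit, kg
R_EARTH = 6.371e6    # m
R_SUN = 6.957e8      # m

def transmission_signal_ppm(mu_amu, t_eq=250.0, g=9.0,
                            r_planet=0.92 * R_EARTH, r_star=0.121 * R_SUN):
    """Approximate amplitude of one-scale-height spectral features in the
    transit depth: delta ~ 2 * R_p * H / R_s**2, with H = k*T / (mu * g).
    Defaults are rough TRAPPIST-1-like numbers chosen for illustration."""
    h = K_B * t_eq / (mu_amu * AMU * g)          # scale height, m
    return 2.0 * r_planet * h / r_star**2 * 1e6  # parts per million

# Hydrogen-dominated (mean molecular weight ~2.3) vs water-dominated (~18)
print(f"H2-rich: {transmission_signal_ppm(2.3):.0f} ppm, "
      f"steam: {transmission_signal_ppm(18.0):.0f} ppm")
```

The hydrogen case comes out well over a hundred parts per million, within Hubble's reach for a star this small; the heavier atmospheres fall to tens of ppm, which is why they must wait for JWST.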

Image: The graphic at the top shows a model spectrum containing the signatures of gases that the astronomers would expect to see if the exoplanets’ atmospheres were puffy and dominated by primordial hydrogen from the distant worlds’ formation. The Hubble observations, however, revealed that the planets do not have hydrogen-dominated atmospheres. The flatter spectrum shown in the lower illustration indicates that Hubble did not spot any traces of water or methane, which are abundant in hydrogen-rich atmospheres. The researchers concluded that the atmospheres are composed of heavier elements residing at much lower altitudes than could be measured by the Hubble observations. Credit: NASA, ESA and Z. Levy (STScI).

If heavier gases like carbon dioxide, methane, water and oxygen are atmospheric constituents in this system, the James Webb Space Telescope may well be able to find them. What the Hubble work achieves is to take one possibility off the table before JWST goes to work, assuming the latter is successfully deployed in 2019.

The TRAPPIST-1 planets may have had hydrogen atmospheres when first formed, assuming they formed further away from the parent star and migrated into their present positions. The primordial hydrogen would then have been lost as the planets moved close to the star, allowing the formation of secondary atmospheres. Our own Solar System’s rocky planets, by contrast, evidently formed in hotter, drier regions relatively much closer to the Sun.

Hannah Wakeford (STScI), one of the scientists involved with this work, adds:

“There are no analogs in our solar system for these planets. One of the things researchers are finding is that many of the more common exoplanets don’t have analogs in our solar system.”

The paper is de Wit et al., “Atmospheric reconnaissance of the habitable-zone Earth-sized planets orbiting TRAPPIST-1,” Nature Astronomy 5 February 2018 (abstract). Tomorrow we’ll look at work just published in Astronomy and Astrophysics on new constraints on the mass, density and composition of the seven planets around TRAPPIST-1.



Detection of Extragalactic Planets?

I was pleased to be a guest on David Livingston’s The Space Show last week. David’s questions are always well chosen, as were those of the listeners who participated in the show, and we spoke broadly about the interstellar effort and what it will take to eventually get human technologies to the stars. The show is now available in David’s archives.

I suspect that if David and I had spoken a couple of days later, the topic would have gotten around to gravitational microlensing, and specifically, the news about planets in other galaxies. On the surface, the story seems sensational. In our own galaxy, we can use radial velocity and transit studies on stars, but the working distances of both methods are limited. The original Kepler field of view in Cygnus, Lyra and Draco, for example, contained stars ranging from 600 to 3,000 light years out; beyond roughly 3,000 light years, the stars are too faint for Kepler to detect Earth-size transits.

Image: The Sun is about 25,000 light years from the center of the galaxy, about half the distance from the center to the edge. The blue cone shows the region of the Milky Way that Kepler explored for planets. Kepler looked along a spiral arm of our galaxy. The distance to most of the stars for which Earth-size planets can be detected by Kepler is from 600 to 3,000 light years. Less than 1% of these stars in the region are closer than 600 light years. Stars farther than 3,000 light years are too faint for Kepler to observe the transits needed to detect Earth-size planets. Credit: NASA/JPL-Caltech/R. Hurt (SSC).
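The distance cutoff is a matter of photometry: an Earth-size transit of a Sun-like star dims the star by only about 84 parts per million, and the photon-noise-limited signal-to-noise falls off linearly with distance (flux goes as 1/d², SNR as the square root of flux). A back-of-envelope check:

```python
R_EARTH = 6.371e6  # m
R_SUN = 6.957e8    # m

# Transit depth: fraction of the stellar disk blocked by an Earth-size planet
depth_ppm = (R_EARTH / R_SUN) ** 2 * 1e6
print(round(depth_ppm))  # → 84

# Photon-limited SNR scales as sqrt(flux), i.e. as 1/d: moving the same
# star from 600 to 3,000 light years cuts the per-transit SNR fivefold.
snr_ratio = 600.0 / 3000.0
print(snr_ratio)  # → 0.2
```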

Gravitational microlensing, in which a star moves in front of a more distant star so that light from the background object is distorted by the foreground star’s gravitational field, can turn up distant planets within our own galaxy. In fact, it’s quite a useful tool because it is not limited by line of sight — no planet needs to transit — and is not dependent on the planet’s distance from its star.

Microlensing has allowed us to find planets thousands of light years away, near the center of the Milky Way. We see the smooth brightening of the microlensing event temporarily disrupted by a spike as a planet orbiting the foreground star contributes its own gravitational deflection.
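The underlying signal has a simple analytic form: a point lens magnifies a point source by A(u) = (u² + 2)/(u·√(u² + 4)), where u is the lens-source separation in units of the Einstein radius. The standard formula:

```python
import math

def point_lens_magnification(u):
    """Total magnification of a point source by a point-mass lens,
    with u the lens-source separation in Einstein radii."""
    return (u**2 + 2.0) / (u * math.sqrt(u**2 + 4.0))

# Closer alignment means higher magnification:
for u in (1.0, 0.5, 0.1):
    print(f"u = {u}: A = {point_lens_magnification(u):.3f}")
```

A planet near the lens perturbs this smooth curve for hours to days, and that brief anomaly is what microlensing surveys search for.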

But how do we hope to find planets billions of light years away? At the University of Oklahoma, Xinyu Dai and Eduardo Guerras have tackled the question using data from the Chandra X-Ray Observatory, working with microlensing models calculated at the university’s Supercomputing Center for Education and Research. Their work revolves around microlensing of the emission from the supermassive black hole at the heart of the quasar RX J1131–1231. The background quasar, about 6 billion light years away, is being lensed by a foreground galaxy 3.8 billion light years out.

We are dealing with what is known as a quasar-galaxy strong lensing system, one in which a background quasar is being gravitationally lensed by a foreground galaxy. The result is that multiple images of the quasar form, as seen in the image below. The light from the background quasar crosses different locations in the foreground galaxy, and is lensed as well by nearby stars in the lens galaxy, an effect called quasar microlensing. The latter is a useful tool, for astronomers have used it to study accretion disks around supermassive black holes at the center of quasars. It can also provide information about the lensing galaxy itself.

Image: The gravitational lens RX J1131-1231 galaxy with the lens galaxy at the center and four lensed background quasar images. Credit: University of Oklahoma.

Let’s turn to the paper, where the relevance of this to extragalactic planets emerges:

As we probe smaller and smaller emission regions of the accretion disk close to the event horizon of the SMBH, the gravitational fields of planets in the lensing galaxy start to contribute to the overall gravitational lensing effect, providing us with an opportunity to probe planets in extragalactic galaxies…

The authors analyze the Einstein ring created by lensing effects to show that emissions close to the Schwarzschild radius of the central supermassive black hole of the source quasar will be affected by planets in the lensing galaxy. Thus we have a way of identifying a population of planets in another galaxy, though not planets orbiting a central star. For the paper goes on:

We have shown that quasar microlensing can probe planets, especially the unbound ones, in extragalactic galaxies, by studying the microlensing behavior of emission very close to the innermost stable circular orbit of the super-massive black hole of the source quasar. For bound planets, they contribute little to the overall magnification pattern in this study.

These planets would be, in other words, so-called ‘rogue’ planets not associated with any star, for bound planets of the kind we are used to studying with radial velocity and transit methods would be below the threshold of detection — they would not change the magnification patterns being observed.

The numbers on these rogue planets are impressive: roughly 2,000 objects per main sequence star, ranging in size from the Moon to Jupiter, with about 200 per main sequence star in the Mars-to-Jupiter range.

These numbers, Dai and Guerras note, are consistent with theoretical studies showing a large population of unbound planets in our own galaxy. Stanford’s Louis Strigari and colleagues, for example, have found that there may be up to 10⁵ compact objects per main sequence star in the Milky Way, using evidence from microlensing as well as direct imaging. Our galaxy may, in other words, be well populated with such objects, most of them relatively small but some larger than Jupiter. See Island Hopping to the Stars for more on Strigari’s work.

What to make of all this? Before this work, the only evidence for an extragalactic planet was the microlensing event PA-99-N2, detected in 1999, and consistent with a star in the disk of M31, the Andromeda galaxy, lensing a background red giant. A planet of 6 times Jupiter’s mass is one explanation for the lensing profile, but there is no way to confirm the possible planet.

Now we have not the detection of individual planets but a hypothesis that lensing data of a galaxy 3.8 billion light years away can be explained by the presence of a population of unbound planets and other compact objects much smaller than planets. The idea that there would be planets in other galaxies is hardly unusual, given their numbers in our own Milky Way. But it’s an exciting thought that we can now begin to study extragalactic exoplanets, even if we’re extremely early in the process and there is much to be learned. As the paper notes:

It is possible that a population of distant but bound planets (Sumi et al. 2011) can contribute to a significant fraction of the planet population, which we defer to future investigations. Because of the much larger Einstein ring size for extragalactic microlensing, we expect that two models, the unbound and the distant but bound planets, can be better distinguished in the extragalactic regime.

The paper is Xinyu Dai & Eduardo Guerras, “Probing Planets in Extragalactic Galaxies Using Quasar Microlensing,” accepted at Astrophysical Journal Letters (preprint). And if you’re interested in the PA-99-N2 event, one source is Ingrosso et al., “Detection of Exoplanets in M31 with Pixel-Lensing: The Event Pa-99-N2 Case,” Twelfth Marcel Grossmann Meeting: on Recent Developments in Theoretical and Experimental General Relativity. p. 2191 (preprint).