
In a long and discursive paper on self-replicating probes as a way of exploring star systems, Alex Ellery (Carleton University, Ottawa) digs, among many other things, into the question of what we might detect from Earth of extraterrestrial technologies here in the Solar System. The idea here is familiar enough. If at some point in our past, a technological civilization had placed a probe, self-replicating or not, near enough to observe Earth, we should at some point be able to detect it. Ellery believes such probes would be commonplace because we humans are developing self-replication technology even today. Thus a lack of probes would indicate that there are no extraterrestrial civilizations to build them.

There are interesting insights in this paper that I want to explore, some of them going a bit far afield from Ellery’s stated intent, but worth considering for all that. SETA, the Search for Extraterrestrial Artifacts, is a young endeavor but a provocative one. Here self-replication attracts the author because probing a stellar system is a far different proposition than colonizing it. In other words, exploration per se — the quest for information — is a driver for exhaustive seeding of probes not limited by issues of sustainability or sociological constraints. Self-replication, he believes, is the key to exponential exploration of the galaxy at minimum cost and greatest likelihood of detection by those being studied.

Image: The galaxy Messier 101 (M101, also known as NGC 5457 and nicknamed the ‘Pinwheel Galaxy’) lies in the northern circumpolar constellation, Ursa Major (The Great Bear), at a distance of about 21 million light-years from Earth. This is one of the largest and most detailed photos of a spiral galaxy that has been released from Hubble. How long would it take a single civilization to fill a galaxy like this with self-replicating probes? Image credit: NASA/STScI.

Growing the Idea of Self-Reproduction

Going through the background to ideas of self-replication in space, Ellery cites the pioneering work of Robert Freitas, and here I want to pause. It intrigues me that Freitas, the man who first studied the halo orbits around the Earth-Moon L4 and L5 points looking for artifacts, is also responsible for one of the earliest studies of machine self-replication in the form of the NASA/ASEE study in 1980. The latter had no direct interstellar intent but rather developed the concept of a self-replicating factory on the lunar surface using resources mined by robots. Freitas would go on to explore REPRO, a robot factory coupled to a Daedalus-class starship, though one taken to the next level and capable of deceleration at the target star, where the factory would grow itself to its full capabilities upon landing.

I should mention that following REPRO, Freitas would turn his attention to nanotechnology, a world where payload constraints are eased and self-reproduction occurs at the molecular level. But let’s stick with REPRO a moment longer, even though I’m departing from Ellery in doing so. For in Freitas’ original concept, half the REPRO payload would be devoted to self-reproduction, with a specialized payload exploiting the resources of a gas giant moon to produce a new REPRO probe every 500 years.

As you can see, the REPRO probe would have taken Project Daedalus’ onboard autonomy to an entirely new level. Freitas’ studies foresaw thirteen distinct robot species, among them chemists, miners, metallurgists, fabricators, assemblers, wardens and verifiers. Each would have a role to play in the creation of the new probe. The chemist robots, for example, were to process ore and extract the heavy elements needed to build the factory on the moon of the gas giant planet. Aerostat robots would float like hot-air balloons in the gas giant’s atmosphere, where they would collect the needed propellants for the next generation REPRO probe. Fabricators would turn raw materials (produced by the metallurgists) into working parts, from threaded bolts to semiconductor chips, while assemblers created the modules that would build the initial factory. Crawler robots would specialize in surface hauling, while wardens, as with Project Daedalus, remained responsible for maintenance and repair of ship systems.

I spend so much time on this because of my fascination with the history of interstellar ideas. In any case, I know of no study earlier than Freitas' 1980 JBIS paper "A Self-Reproducing Interstellar Probe" (conveniently available online) that explored self-reproduction in the interstellar context and in terms of mission hardware. This was a step forward in interstellar studies, and I want to highlight it with this quotation from its text:

A major alternative to both the Daedalus flyby and “Bracewell probe” orbiter is the concept of the self-reproducing starprobe. Replicating spacefaring machines recently have received a cursory examination by Calder [4] and Boyce [5], but the basic feasibility of this approach has never been seriously considered despite its tremendous potential. In theory, each self-reproducing device dispatched by the launching society would become an independent agent, slowly scouting the Galaxy for evidence of life, intelligence and civilization. While such machines might be costlier to design and construct, given sufficient time a relatively few replicating starprobes could search the entire Milky Way.

The present paper addresses the plausibility of self-reproducing starprobes and the basic parameters of feasibility. A subsequent paper [10] compares reproductive and nonreproductive probe search strategies for missions of interstellar and galactic exploration.

Hart, Tipler and the Spread of Intelligence

These days, as Freitas went on to explore, massive redundancy, miniaturization and self-assembly at the molecular level have moved into tighter focus as we contemplate missions to the stars. The enormous Daedalus-style craft (54,000 tonnes initial mass, including 50,000 tonnes of fuel and 500 tonnes of scientific payload) and its successors, while historically important, also resonate a bit with Captain Nemo’s Nautilus: spectacular creations of the imagination that defied no laws of physics, but remain in tension with the realities of payload and propulsion. Today we explore miniaturization instead, with Breakthrough Starshot’s tiny payloads as one example.

But back to Ellery. From a philosophical standpoint, self-reproduction, he rightly points out, had also been considered by Michael Hart and Frank Tipler, each noting that if self-replication were possible, a civilization could fill the galaxy in a relatively short (compared to the age of the galaxy) timeframe. Ultimately self-reproducing probes exploit local materials upon arrival and make copies of themselves, a wave of exploration that would ensure every habitable planet had an attendant probe. Thus the Hart/Tipler contention that the lack of evidence for such a probe is an indication that extraterrestrial intelligence does not exist, an idea that still has currency.

Would any exploring civilization turn to self-replication? The author sees many reasons to do so:

There are numerous reasons to send out self-replicating probes – reconnaissance prior to interstellar migration, first-mover advantage, insurance against planetary disaster, etc – but only one not to – indifference to information growth (which must apply to all extant ETI without exception). Self-replicating probes require minimal capital investment and represent the most economical means to explore space, interstellar space included. In a real sense, self-replicating machines cannot become obsolete – new design developments can be broadcast and uploaded to upgrade them when necessary. Once the self-replicating probe is established in a star system, the probe may be exploited in various ways. The universal construction capability ensures that the self-replicating probe can construct any other device.

Probes that can fill the galaxy extract maximum information and can not only monitor but communicate with local species. Should a civilization choose to implement panspermia in systems devoid of life, the capability is implicit here, including “the prospect of exploiting microorganism DNA as a self-replicating message.” Such probes could also, in the event of colonization at a later period, establish needed infrastructure for the new arrivals, with the possibility of terraforming.

Thus probes like these become a route from Kardashev II to III. In fact, as Ellery sees it, if a Kardashev Type I civilization is capable of self-reproduction technology – and remember, Ellery believes we are on the cusp of it now – then the entire Type I phase may be relatively short on the way to Kardashev Types II and III, perhaps as little as a few thousand years. It’s an interesting thought given our current status somewhere around Kardashev 0.72, beset by problems of our own making and wondering whether we will survive long enough to establish a Type I civilization.

Image: NASA’s James Webb Space Telescope has produced the deepest and sharpest infrared image of the distant universe to date. Known as Webb’s First Deep Field, this image of galaxy cluster SMACS 0723 is overflowing with detail. Thousands of galaxies – including the faintest objects ever observed in the infrared – have appeared in Webb’s view for the first time. This slice of the vast universe covers a patch of sky approximately the size of a grain of sand held at arm’s length by someone on the ground. If self-reproducing probes are possible, are all galaxies likely to be explored by other civilizations? Credit: NASA, ESA, CSA, and STScI.

Early Days for SETA

The question of diffusion through the galaxy here gets a workover from TRIZ (Teorija Reshenija Izobretatel’skih Zadach, the Russian acronym for the ‘theory of inventive problem solving’), which Ellery uses to analyze the implications of self-reproduction, finding that the entire galaxy could be colonized within 24 probe generations, producing a population of 424 billion probes. He assumes a short replication time at each stop, a few years at most, and thus finds that the spread of such probes is dominated by the transit time across the galactic plane: a million-year process to complete, assuming travel at a tenth of lightspeed.
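Ellery's headline figure is easy to sanity-check with a geometric series. A branching factor of three daughters per probe, which is my assumption for illustration rather than a parameter quoted from the paper, happens to reproduce the 424 billion total almost exactly:

```python
# Cumulative probe population after n generations of self-replication.
# The branching factor of 3 daughters per probe is an illustrative assumption.
def probe_population(generations: int, daughters: int = 3) -> int:
    # Geometric series: 1 + b + b^2 + ... + b^n
    return sum(daughters ** k for k in range(generations + 1))

print(f"{probe_population(24):,}")  # -> 423,644,304,721, about 424 billion

# With replication taking only years per stop, the spread is transit-limited:
# roughly 100,000 ly across the galactic disc at 0.1 c.
print(100_000 / 0.1)  # -> 1000000.0 years
```

The exponent does the work here: almost any branching factor greater than one fills the galaxy in a few dozen generations, which is why the crossing time at 0.1 c, not the replication schedule, sets the million-year timescale.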

Given this short timespan compared with the age of the Galaxy, our Galaxy should be swarming with self-replicating probes yet there is no evidence of them in our solar system. Indeed, it only requires a civilization to exist long enough to send out such probes as they would thenceforth continue to propagate through the Galaxy even if the sending civilization were no more. And of course, it requires only one ETI to do this.

Part of Ellery’s intent is to show how humans might create a self-replicating probe, going through the essential features of such a machine and arguing that self-replication is near- rather than long-term, based on the idea of the universal constructor, a machine that builds any or all other machines including itself. Here we find intellectual origins in the work of Alan Turing and John von Neumann. Ellery digs into 3D printing and ongoing experiments in self-assembly as well as in-situ resource utilization of asteroid material, and along the way he illustrates probe propulsion concepts.

At this stage of the game in SETA, there is no evidence of self-replication or extraterrestrial probes of any kind, the author argues:

There is no observational evidence of large structures in our solar system, nor signs of large-scale mining and processing, nor signs of residue of such processes. Our current terrestrial self-replication scheme with its industrial ecology is imposed by the requirements for closure of the self-replication loop that (i) minimizes waste (sustainability) to minimize energy consumption; (ii) minimizes materials and components manufacture to minimize mining; (iii) minimizes manufacturing and assembly processes to minimize machinery. Nevertheless, we would expect extensive clay residues. We conclude therefore that the most tenable hypothesis is that ETI do not exist.

The answer to that contention is, of course, that we haven’t searched for local probes in any coordinated way. Now that we are becoming capable of minute analysis of, for instance, the lunar surface (through Lunar Reconnaissance Orbiter imagery, for one), we can become more systematic in the investigation, taking in Earth co-orbitals, as Jim Benford has suggested, or looking for signs of lurkers in the asteroid belt. Ellery notes that the latter might demand searching for signs of resource exploitation there as opposed to finding an individual probe amidst the plethora of candidate objects.

But Ellery is adamant that the search for such lurkers should continue, noting that SETA has up to now been a meager and sporadic effort. I’m going to recommend this paper to those Centauri Dreams readers who want to get up to speed on the scholarship on self-reproduction and its consequences. The ideas jammed into its pages come at a bewildering pace, but the scholarship is thorough and the references handy to have in one place. Whether self-reproducing probes are indeed imminent is a matter for debate, but their implications demand our attention.

The paper is Ellery, “Self-replicating probes are imminent – implications for SETI,” International Journal of Astrobiology 8 July 2022 (full text). A companion paper published at the same time is “Curbing the fruitfulness of self-replicating machines,” International Journal of Astrobiology 8 July 2022 (full text).


Two Close Stellar Passes

Interstellar objects are much in the news these days, as witness the flurry of research on ‘Oumuamua and 2I/Borisov. But we have to be cautious as we look at objects on hyperbolic orbits, avoiding the assumption that any of these are necessarily from another star. Spanish astronomers Carlos and Raúl de la Fuente Marcos dug several years ago into the question of objects on hyperbolic orbits, noting that some of these may well have origins much closer to home. Let me quote their 2018 paper on this:

There are mechanisms capable of generating hyperbolic objects other than interstellar interlopers. They include close encounters with the known planets or the Sun, for objects already traversing the Solar system inside the trans-Neptunian belt; but also secular perturbations induced by the Galactic disc or impulsive interactions with passing stars, for more distant bodies (see e.g. Fouchard et al. 2011, 2017; Królikowska & Dybczyński 2017). These last two processes have their sources beyond the Solar system and may routinely affect members of the Oort cloud (Oort 1950), driving them into inbound hyperbolic paths that may cross the inner Solar system, making them detectable from the Earth (see e.g. Stern 1987).

Scholz’s Star Leaves Its Mark

So much is going on in the outer reaches of the Solar System! In the 2018 paper, the two astronomers looked for patterns in how hyperbolic objects move, noting that anything approaching us from the far reaches of the Solar System seems to come from a well-defined location in the sky known as its radiant (also called its antapex). Given the mechanisms for producing objects on hyperbolic orbits, they identify distinctive coordinate and velocity signatures among these radiants.

Work like this relies on modeling the past orbital evolution of hyperbolic objects and on statistical analyses of their radiants, and I wouldn’t have dug quite so deeply into this arcane work except that it tells us something about objects that are coming under renewed scrutiny: the stars that occasionally pass close to the Solar System and may disrupt the Oort Cloud. Such passing stars are an intriguing subject in their own right and even factor into studies of galactic diffusion; i.e., how a civilization might begin to explore the galaxy by using close stellar passes as stepping stones.

But more about that in a moment, because I want to wrap up this 2018 paper before moving on to a later paper, likewise from the de la Fuente Marcos team, on close stellar passes and the intriguing Gliese 710. Its close pass is to happen in the distant future, but we have one well characterized pass that the 2018 paper addresses, that of Scholz’s Star, which is known to have made the most recent flyby of the Solar System when it moved through the Oort Cloud 70,000 years ago. In their work on minor objects with long orbital periods and extreme orbital eccentricity, the researchers find a “significant overdensity of high-speed radiants toward the constellation of Gemini” that may be the result of the passage of this star.

This is useful stuff, because as we untangle prior close passes, we learn more about the dynamics of objects in the outer Solar System, which in turn may help us uncover information about still undiscovered objects, including the hypothesized Planet 9, that may lurk in the outer regions and may have caused its own gravitational disruptions.

Before digging into the papers I write about today, I hadn’t realized just how many objects – presumably comets – are known to be on hyperbolic orbits. The astronomers work with the orbits of 339 of these, all with nominal heliocentric eccentricity > 1, using data from JPL’s Solar System Dynamics Group Small-Body Database and the Minor Planet Center Database. For a minor object moving with an inbound velocity of 1 kilometer per second, which is the Solar System escape velocity at about 2000 AU, the de la Fuente Marcos team runs calculations going back 100,000 years to examine the modeled object’s orbital evolution all the way out to 20,000 AU, which is in the outer Oort Cloud.
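That 1 km/s threshold is straightforward to verify from the Sun's escape velocity, v = sqrt(2GM/r). A minimal sketch using standard solar constants (my own check, not a calculation from the paper):

```python
import math

# Escape velocity from the Sun at a given heliocentric distance.
GM_SUN = 1.32712440018e20   # m^3 s^-2, heliocentric gravitational parameter
AU = 1.495978707e11         # m, one astronomical unit

def solar_escape_velocity_kms(r_au: float) -> float:
    return math.sqrt(2.0 * GM_SUN / (r_au * AU)) / 1000.0

print(round(solar_escape_velocity_kms(2000), 2))  # -> 0.94 km/s, i.e. about 1 km/s
```

So an object falling inward at 1 km/s when it is observed is indeed unbound to the Sun once it is beyond roughly 2000 AU, which is why the team uses that velocity as its working cutoff.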

That overdensity of radiants toward Gemini that I mentioned above does seem to implicate the Scholz’s Star flyby. If so, then a close stellar pass that occurred 70,000 years ago may have left traces we can still see in the orbits of these minor Solar System bodies today. The uncertainties in the analysis of other stellar flybys relate to the fact that past encounters with other stars are not well determined, with Scholz’s Star being the prominent exception. Given the lack of evidence about other close passes, the de la Fuente Marcos team acknowledges the possibility of other perturbers.

Image: This is Figure 3 from the paper. Caption: Distribution of radiants of known hyperbolic minor bodies in the sky. The radiant of 1I/2017 U1 (‘Oumuamua) is represented by a pink star, those objects with radiant’s velocity > −1 km s⁻¹ are plotted as blue filled circles, the ones in the interval (−1.5, −1.0) km s⁻¹ are shown as pink triangles, and those < −1.5 km s⁻¹ appear as goldenrod triangles. The current position of the binary star WISE J072003.20-084651.2, also known as Scholz’s star, is represented by a red star, the convergent brown arrows represent its motion and uncertainty as computed by Mamajek et al. (2015). The ecliptic is plotted in green. The Galactic disc, which is arbitrarily defined as the region confined between Galactic latitude −5° and 5°, is outlined in black, the position of the Galactic Centre is represented by a filled black circle; the region enclosed between Galactic latitude −30° and 30° appears in grey. Data source: JPL’s SSDG SBDB. Credit: Carlos and Raúl de la Fuente Marcos.

The Coming of Gliese 710

Let’s now run the clock forward, looking at what we might expect to happen in our next close stellar passage. Gliese 710 is an interesting K7 dwarf in the constellation Serpens Cauda that occasionally pops up in our discussions because of its motion toward the Sun at about 14 kilometers per second. Right now it’s a little over 60 light years away, but give it time: in about 1.3 million years, the star should close to somewhere in the range of 10,000 AU, which is about 1/25th of the current distance between the Sun and Proxima Centauri. As we’re learning, wait long enough and the stars come to us.

Hold onto that figure of 10,000 AU; we’ll tighten it up further in a minute. But notice that it is actually inside the distance between the closest star, Proxima Centauri, and the Centauri A/B binary.
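Both comparisons check out numerically. A quick sketch, noting that the ~13,000 AU Proxima-to-A/B separation is a commonly quoted figure I am supplying here, not a value from the papers under discussion:

```python
AU_PER_LY = 63_241                      # astronomical units per light year

proxima_from_sun_au = 4.25 * AU_PER_LY  # ~268,800 AU, Proxima's current distance
gliese_710_close_au = 10_000            # approximate perihelion quoted above
proxima_from_ab_au = 13_000             # commonly quoted Proxima to alpha Cen A/B

print(round(proxima_from_sun_au / gliese_710_close_au))  # -> 27, "about 1/25th"
print(gliese_710_close_au < proxima_from_ab_au)          # -> True
```

In other words, at closest approach Gliese 710 will be nearer to the Sun than Proxima Centauri now is to its own companion pair.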

Image: Gliese 710 (center), destined to pass through the inner Oort Cloud in our distant future. Credit: SIMBAD / DSS

An encounter like this is interesting for a number of reasons. Interactions with the Oort Cloud should be significant, although well spread over time. Here I go back to a 1999 study by Joan García-Sánchez and colleagues that made the case that spread over human lifetimes, the effects of such a close passage would not be pronounced. Here’s a snippet from that paper:

For the future passage of Gl 710, the star with the closest approach in our sample, we predict that about 2.4 × 10⁶ new comets will be thrown into Earth-crossing orbits, arriving over a period of about 2 × 10⁶ yr. Many of these comets will return repeatedly to the planetary system, though about one-half will be ejected on the first passage. These comets represent an approximately 50% increase in the flux of long-period comets crossing Earth’s orbit.
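Spread evenly, a simplification since the actual arrival profile would not be uniform, those numbers imply a modest average rate:

```python
# Average rate of new Earth-crossing comets implied by the figures quoted
# from Garcia-Sanchez et al. (1999).
new_comets = 2.4e6       # comets perturbed onto Earth-crossing orbits
duration_years = 2.0e6   # arrival period in years

print(new_comets / duration_years)  # -> 1.2 comets per year on average
```

About one extra long-period comet per year crossing Earth's orbit, which is why the paper frames the encounter as a ~50% increase in flux rather than a bombardment.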

As far as I know, the García-Sánchez paper was the first to identify Gliese 710’s flyby possibilities. The work was quickly confirmed in several independent studies before the first Gaia datasets were released, and the parameters of the encounter were then tightened using Gaia’s results, the most recent paper using Gaia’s third data release. Back to Carlos and Raúl de la Fuente Marcos, who tackle the subject in a new paper appearing in Research Notes of the American Astronomical Society.

The researchers have subjected the Gliese 710 flyby to N-body simulations using a suite of software tools that model perturbations from the star and factor in the four massive planets in our own system as well as the barycenter of the Pluto/Charon system. They assume a mass of 0.6 Solar masses for Gliese 710, consistent with previous estimates. In addition to the Gaia data, the authors include the latest ephemerides information for Solar System objects as provided by the Jet Propulsion Laboratory’s Horizons System.

Image: This is Figure 1 from the paper. Caption: Future perihelion passage of Gliese 710 as estimated from Gaia DR3 input data and the N-body simulations discussed in the text. The distribution of times of perihelion passage is shown in the top-left panel and perihelion distances in the top-right one. The blue vertical lines mark the median values, the red ones show the 5th and 95th percentiles. The bottom panels show the times of perihelion passage (bottom-left) and the distance of closest approach (bottom–right) as a function of the observed values of the radial velocity of Gliese 710 and its distance (randomly generated using the mean values and standard deviations from Gaia DR3), both as color coded scatter plots of the distribution in the associated top panel. Histograms have been produced using the Matplotlib library (Hunter 2007) with sets of bins computed using Numpy (Harris et al. 2020) by applying the Freedman and Diaconis rule; instead of considering frequency-based histograms, we used counts to form a probability density so the area under the histogram will sum to one. The colormap scatter plot has also been produced using Matplotlib. Credit: Carlos and Raúl de la Fuente Marcos.

The de la Fuente Marcos paper now finds that the close approach of Gliese 710 will take it to within 10,635 ± 500 AU, putting it inside the inner Oort Cloud in about 1.3 million years; both the distance of the approach and the time of perihelion passage are tightened from earlier estimates. And as we’ve seen, Scholz’s Star passed through part of the Oort Cloud at perhaps 52,000 AU some 70,000 years ago. We thus get a glimpse of a Solar System influenced by passing stars on a time frame that begins to take shape, one that clearly defines a factor in the system’s evolution.

What Gaia Can Tell Us

We can now step back further to a 2018 paper from Coryn Bailer-Jones (Max Planck Institute for Astronomy, Heidelberg), which examines not just two stars with direct implications for our Solar System, but Gaia data (the Gaia DR2 dataset) on 7.2 million stars to look for further evidence of close stellar encounters. Here we begin to see the broader picture. Bailer-Jones and team find 26 stars that have approached or will approach within 1 parsec, 7 that will close to 0.5 parsecs, and 3 that will pass within 0.25 parsecs of the Sun. Interestingly, the closest encounter is with our friend Gliese 710.

How often can these encounters be expected to occur? The authors estimate about 20 encounters per million years within a range of one parsec. Greg Matloff has used these data to infer roughly 2.5 encounters within 0.5 parsecs per million years; on that estimate, some 400,000 to 500,000 years should separate close stellar encounters as found in the Gaia DR2 data. We should keep in mind what Bailer-Jones and team say about the current state of this research, especially given subsequent results from Gaia: “There are no doubt many more close – and probably closer – encounters to be discovered in future Gaia data releases.” But at least we’re getting a feel for the time spans involved.
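Matloff's rate converts directly into the quoted spacing; a trivial check:

```python
# Mean interval between close stellar encounters within 0.5 parsec,
# using Matloff's estimate of ~2.5 encounters per million years.
encounters_per_myr = 2.5
mean_interval_years = 1_000_000 / encounters_per_myr

print(f"{mean_interval_years:,.0f}")  # -> 400,000 years
```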

So given the distribution of stars in our neighborhood of the galaxy, our Sun should have a close encounter every half million years or so. Such encounters between stars dramatically reduce the distance for any would-be travelers. In the case of Scholz’s Star, for instance, the flyby cut the current distance to the nearest star by a factor of 5, while Gliese 710 is even more provocative, for as I mentioned, it will close to a distance not all that far off Proxima Centauri’s own distance from Centauri A/B.

A good time for interstellar migration? We’ve considered the possibilities in the past, but as new data accumulate, we have to keep asking how big a factor stellar passages like these may play in helping a technological civilization spread throughout the galaxy.

The earlier de la Fuente Marcos paper is “Where the Solar system meets the solar neighbourhood: patterns in the distribution of radiants of observed hyperbolic minor bodies,” Monthly Notices of the Royal Astronomical Society Letters Vol. 476, Issue 1 (May 2018) L1-L5 (abstract). The later de la Fuente Marcos paper is “An Update on the Future Flyby of Gliese 710 to the Solar System Using Gaia DR3: Flyby Parameters Reproduced, Uncertainties Reduced,” Research Notes of the AAS Vol. 6, No. 6 (June, 2022) 136 (full text). The García-Sánchez et al. paper is “Stellar Encounters with the Oort Cloud Based on Hipparcos Data,” Astronomical Journal 117 (February, 1999), 1042-1055 (full text). The Bailer-Jones paper is “New stellar encounters discovered in the second Gaia data release,” Astronomy & Astrophysics Vol. 616, A37 (13 August 2018). Abstract.


The Great Venusian Bug Hunt

Our recent focus on life detection on nearby worlds concludes with a follow-up to Alex Tolley’s June essay on Venus Life Finder. What would the sequence of missions look like that resulted in an unambiguous detection of life in the clouds of Venus? To answer that question, Alex takes the missions in reverse order, starting with a final, successful detection, and working back to show what the precursor mission to each step would have needed to accomplish to justify continuing the effort. If the privately funded VLF succeeds, it will be in the unusual position of making an astrobiological breakthrough before the large space organizations could achieve it, but there are a lot of steps along the way that we have to get right.

by Alex Tolley

In my previous essay, Venus Life Finder: Scooping Big Science, I introduced the near-term, privately financed plan to send a series of dedicated life-finding probes to Venus’ clouds. The first was a tiny atmosphere entry vehicle with a dedicated instrument, the Autofluorescing Nephelometer (AFN). The follow-up probes would culminate in a sample return to Earth, all this before the big NASA and ESA probes had even reached Venus at the end of this decade to investigate planetary conditions.

When the discussion turns to missions on or around planets or moons that may be home to life, the focus is on whether these probes could be loaded with life-finding instruments to front-load life detection science. The VLF missions are perhaps the first since the Viking Mars missions in the mid-1970s to make detecting life the primary science goal, with the possible exception of ESA’s Beagle 2 (whose objectives included landing site geology, mineralogy, geochemistry, atmosphere, meteorology, climate, and the search for biosignatures [8]).

The approach I am going to use here is to start with what an Earth laboratory might do to investigate samples with suspected novel life. I will then reverse the thinking for each mission stage until the decision to launch a Venusian atmosphere entry AFN becomes the obvious, logical choice.

So let us start with the science and technology that would likely be employed on Earth, assuming that we have samples from the VLF missions previously undertaken indicating that the conditions for life are not prohibitive, and earlier analyses suggesting that the collected particles are not just inanimate but appear to be, or to contain, life. As we do not know whether this life arose from a de novo abiogenesis or shares a common origin with terrestrial life, and thus is perhaps from Earth, there are a number of basic tests that would be employed to determine if the VLF samples contain life.

The key analyses would include:

1) Are there complex organic molecules with structural regularities rather than randomness? For example, terrestrial cell membranes are composed of lipid chains with characteristic carbon-chain lengths (phospho- and glycolipids peak at 16- and 18-carbon chains). Are there high abundances of certain molecules that might form the basis of an information storage molecule, e.g. the 4 bases used in DNA (adenine, thymine, guanine, cytosine), or an abundant subset of the many possible amino acids?

2) Are the cell-like particles compartmentalized? Are there cell membranes? Do the cells contain other compartments that manage key biological functions [5]?

3) Do the molecules show homochirality, as we see on Earth? If not, and the molecules are racemic as we see with amino acids in meteorites, then this indicates a non-biological formation. Terrestrial proteins are based on levorotatory amino acids (L-amino acids), whilst sugars are dextrorotatory (D-sugars).

4) Do the samples generate or consume gases that are associated with life? This can be deceptive as we learned with the ambiguous Viking experiment to detect gas emissions from cultured Martian regolith. Lab experiments can resolve such issues.

5) Do the samples have different isotope ratios than the planetary material? On Earth, biology tends to alter the ratios of carbon and oxygen isotopes, which are used as proxies in analyses of samples for paleo life. For example, photosynthesis reduces the 13C/12C ratio, which can therefore be used to infer whether carbon compounds are biogenic.
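For the isotope test, the standard convention is delta notation relative to a reference ratio. A minimal sketch; the sample ratio below is illustrative, not a Venus measurement:

```python
# delta-13C in permil: (R_sample / R_standard - 1) * 1000, where R = 13C/12C.
R_VPDB = 0.0112372   # 13C/12C of the Vienna Pee Dee Belemnite reference standard

def delta13c_permil(r_sample: float) -> float:
    return (r_sample / R_VPDB - 1.0) * 1000.0

# Photosynthesis discriminates against 13C, so biogenic carbon is depleted in it;
# typical terrestrial C3-plant material sits near -25 to -30 permil.
print(round(delta13c_permil(0.010940), 1))  # -> -26.4 permil, a depleted signature
```

A Venusian analysis would compare cloud-particle carbon against bulk atmospheric CO2 in the same way, looking for a consistent biological offset rather than any particular absolute value.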

Note that the goals do not initially include using optical microscopes or DNA sequencers. Terrestrial life is increasingly surveyed by analyzing samples for DNA sequences, but DNA reading instruments will only work if the same nucleobases are used by Venusian life, and if they are, there remains the issue of whether they share a common origin with terrestrial life. For bacteria-sized particles, electron microscopes are more appropriate.

The types of instruments used include mass spectrometers, liquid and gas chromatographs, optical spectrometers of various wavelengths, nanotomographs (nano-sized CT scans), atomic force microscopes, etc. These instruments tend to be rather large and heavy, although specially designed ones are being flown on the big missions, such as the Mars Perseverance rover. Table 1 below details these biosignature analyses to be done on the returned samples.

Table 1 (click to enlarge). Laboratory biosignature analyses for the returned samples, the instruments, and the specific outputs.

For the goal of detecting biosignature gases and their changes, table 2 shows the prior information collected from probes and telescopes that might indicate extant life on Venus.

Table 2 (click to enlarge). Prior data of potential biosignature gases in the Venusian atmosphere.

Given that these are the types of experiments on samples returned to Earth, how do we collect those samples for return? Unlike Mars, life on Venus is expected to be in the clouds, in a temperate habitable zone (HZ) layer. The problem is not dissimilar to collecting samples in the deep ocean. A container must be exposed to the environment and then closed and sealed. Apart from pressure, the ocean is a benign environment.

Imagine the difficulties of collecting a sample near the bottom of a deep, highly acidic lake when it is not possible to take a boat out and lower an acid-resistant sample bottle. The VLF team has not decided how best to do this, but for the sample return mission the sampling is designed to take place from a balloon floating in the atmosphere.

Possible sampling methods include:

  • Use of aerogels
  • Filters
  • Electrostatic sticky tape
  • Funnels, jars, and bottles
  • Fog Harp droplet collector
  • Gas sampling bags

In order to preserve the sample from contamination and to ensure planetary protection from the returned samples, containment must be carefully designed to cover contingencies that might expose the sample to Earth’s biosphere.

Note that this Venus return mission is no longer a small project. The total payload to reach LEO (including the transit vehicle, the balloon and instrument gondola, the Venus ascent vehicle, and the Earth-return transit vehicle) is 38,000 kg – far more massive than the Mars Perseverance mission, roughly double the launch capability of the Atlas V and Ariane 5, and thus requiring the Falcon Heavy. The number of components indicates a very complex and difficult mission, probably requiring the capabilities of a national space organization. This is definitely no longer a small, privately funded mission.

But let’s backtrack again. The samples were deemed worth the cost of returning to Earth because prior missions have supported the case that life may be present in the atmosphere. What experiments would best be done to make that assessment, given a prior mission that indicated this ambitious, complex, and expensive effort was worth attempting?

The science goals for this intermediate Habitability Mission are:

1) Measure the physical conditions in the cloud layer to ensure they are not outside of a possible extremophile range. The most important metric is perhaps temperature, as no terrestrial thermophile can survive above 122 °C, nor metabolize in solids. A lower bound might be that water freezes at 0 °C, although salt water will lower that freezing point, so halophiles could live in salty water at below 0 °C. Is there water vapor in the clouds that indicates that the particles are not pure sulfuric acid? Allied with that, are the particles less acidic than pure H2SO4? Are there non-volatile elements, such as phosphorus and metals, that are used by terrestrial biology to harvest and transfer energy for metabolism?

2) Can the organic materials previously detected be identified to indicate biologic rather than abiologic chemistry? Are there any hints at compound regularity that will inform the sample return mission? Can we detect gas changes that indicate metabolism is happening, and are the gases in disequilibrium? Of particular interest may be the detection of phosphine (PH3) emissions, an unambiguous terrestrial biosignature, detected in the Venusian clouds by terrestrial ground-based telescopes in 2020.

3) Are the non-spherical particles detected in a prior mission solid or liquid, and are they homogeneous in composition (non-biologic) or not (possible life)?

To be able to do these experiments, the mission will use balloons that can float in the Venusian clouds. They may need to be able to adjust their altitude to find the best layers (but this adds complexity, risk, and cost) and travel spatially, especially if there is a desire to sample the patchy cloud layers that are strongly UV absorbing and have been likened to algal blooms on Earth.

A balloon mission is quite complex and carries a number of instruments, so its cost and complexity are substantial. A simpler, low-cost prior mission is needed to capture key data. What is the simplest, lowest-mass mission whose positive results would tell the team that the balloon mission should definitely go ahead? What science goals and instrument[s] could best provide the data to inform this decision?

This earlier mission is designed around two sources of information that it can leverage. First, there are the many Venus entry probe missions flown from the early 1960s to the mid-1980s. The most intriguing finding is that some particles in the clouds were not spherical, as physics would predict, and this non-spherical shape might indicate cellular life, such as bacilli (rod-shaped bacteria).

Shape, however, is insufficient, as life must be able to interact with the environment to feed, grow, and reproduce. On Earth, these functions require a range of organic molecules – proteins, DNA, RNA, lipids and sugars. This implies that organic compounds must be present in these non-spherical particles; otherwise the shape may be due to physical processes, including agglomeration and/or merging of spherical particles.

The VLF team is also testing some of the assumptions and technology in the lab, confirming, for example, that autofluorescence of carbon materials works in concentrated sulfuric acid. Their lab experiments show that linear carbon molecules like formaldehyde and methanol in concentrated H2SO4 produce both UV absorption and fluorescence over time, implying that the structures are altered, as is found in industrial processes. From the report:

If there is organic carbon in the Venus atmosphere, it will react with concentrated sulfuric acid in the cloud droplets, resulting in colored, strongly UV absorbing, and fluorescent products that can be detected (…). We have exposed several samples containing various organic molecules (e.g., formaldehyde) to 120 °C, 90% sulfuric acid for different lengths of time. As a result of the exposure to concentrated sulfuric acid all of the tested organic compounds produced visible coloration, increased absorbance (mainly in the UV range of the spectrum), and resulted in fluorescence (…)

It should also be noted that Misra has shown that remote autofluorescing can detect carbon compounds and distinguish between organic material (leaves, microbes on rocks, and fossils) and fluorescing minerals [6,7].

Table 3 (click to enlarge). The science goals for the balloon mission, showing that the AFN is the best single instrument to both detect and confirm the non-spherical particles found in the prior Venus probes, and the presence of organic compounds in the particles. It can also determine whether the particles are liquid or solid, and estimate the pH of the particles.

Of the possible choices of instruments, the Autofluorescing Nephelometer (AFN) best meets the requirements of being able to measure both particle shape and the presence of organic compounds. This can be seen in table 3 above for the science goals of the balloon mission. The instrument is described in the prior post and in more detail in the VLF Report [1]. Ideally, both conditions should be met with positive results, although even both together are suggestive but not unambiguous.

Organic compounds can form abiotically in concentrated H2SO4, and cocci are essentially spherical bacteria. Nevertheless, a positive result for one or both justifies funding the follow-up mission. Conversely, a negative result for both, especially the absence of detectable organic compounds, would put a nail in the coffin of the idea that there is life in the Venusian clouds (a classic falsification experiment) – at least for that 4-5 minute data acquisition window as the probe falls through the HZ layers of cloud where these non-spherical particles have previously been detected.

It could certainly be argued that life is patchy, and just as failing to catch a fish does not mean there are no fish to be caught, it is possible that the probe fell through a [near] lifeless patch and that other attempts should be made, for example the balloon mission that will take measurements over a wider range of space and time.

The first VLF probe mission raises the question of why we should even consider Venus as an abode for life. The prior missions have shown that the surface is a very hot, dry, and acidic environment which is inimical to life as we know it. The only suggestions for the presence of life are the aforementioned patchy UV absorbing regions implying organic compounds in the clouds, and the presence of the biosignature gas PH3.

For life to be on Venus, either the conditions must once have been clement to allow abiogenesis, or life must have been seeded by panspermia to allow it ultimately to evolve to survive in the cloud refugia when the oceans were lost during the runaway greenhouse era. Is there any evidence that Venus was once our sister world with conditions like Earth, but warmer, before the runaway greenhouse conditions transformed the planet?

The scholarly literature is divided, from the optimistic view of Grinspoon [2] and others that Venus had an early ocean that lasted long enough (e.g. 1 Gy) to support life [3], to the pessimistic view of Turbet [4], whose modeling suggests Venus never had an ocean (and that Earth was only able to condense one during the faint young sun period).

The science goals of NASA’s DAVINCI+ and VERITAS probes and ESA’s EnVision orbiter are designed to answer just these questions.

The VLF team, however, has built its plan on the optimistic view that early Venus was clement and that life could have taken hold. In the team’s judgment, a series of dedicated life-finding missions will best answer the question of whether there is life on Venus, rather than first establishing that a clement paleoclimate was indeed present and lasted long enough for life to emerge, or long enough for it to have been transferred from Earth once we are sure life on Earth was present.

If the first VLF mission returns positive results, then it seems likely that the following missions, however designed and by whomever executed, will push forward the science goals toward more life detection. Negative results could well derail subsequent life detection goals. The time frame will overlap with the Mars sample return mission that will collect the Perseverance rover samples for analysis back on Earth. It may well also overlap with the early results of biosignature detection on exoplanets. Whatever the outcome, the end of this decade will be an exciting time and will pose fundamental questions about our place in the galaxy.

References

1. Seager, S., et al. Venus Life Finder Mission Study (2021). Accessed 02/18/2022. https://venuscloudlife.com/venus-life-finder-mission-study/

2. Grinspoon, D. & Bullock, M. (2007). Searching for Evidence of Past Oceans on Venus. American Astronomical Society, DPS meeting #39, id. 61.09; Bulletin of the American Astronomical Society, Vol. 39, p. 540.

3. Way, M. J., et al. (2016). Was Venus the first habitable world of our solar system? Geophys. Res. Lett., 43, 8376–8383. doi:10.1002/2016GL069790.

4. Turbet, M., Bolmont, E., Chaverot, G. et al. Day–night cloud asymmetry prevents early oceans on Venus but not on Earth. Nature 598, 276–280 (2021). https://doi.org/10.1038/s41586-021-03873-w

5. Cornejo E, Abreu N, Komeili A. Compartmentalization and organelle formation in bacteria. Curr Opin Cell Biol. 2014 Feb;26:132-8. doi: 10.1016/j.ceb.2013.12.007. Epub 2014 Jan 16. PMID: 24440431; PMCID: PMC4318566.

6. Misra, A.K., Rowley, S.J., Zhou, J. et al. Biofinder detects biological remains in Green River fish fossils from Eocene epoch at video speed. Sci Rep 12, 10164 (2022). https://doi.org/10.1038/s41598-022-14410-8

7. Misra, A., et al. (2021). Compact Color Biofinder (CoCoBi): Fast, Standoff, Sensitive Detection of Biomolecules and Polyaromatic Hydrocarbons for the Detection of Life. Applied Spectroscopy, 75. doi:10.1177/00037028211033911.

8. Beagle 2. https://en.wikipedia.org/wiki/Beagle_2 Accessed July 2, 2022


Drilling into Icy Moon Oceans

While we talk often about subsurface oceans in places like Europa, the mechanisms for getting through layers of ice remain problematic. We’ll need a lot of data through missions like Europa Clipper and JUICE just to make the call on how thick Europa’s ice is before determining which ice penetration technology is feasible. But it’s exciting to see how much preliminary work is going into the issue, because the day will come when one or another icy moon yields the secrets of its ocean to a surface lander.

By way of comparison, the thickest ice sheet on Earth is said to reach close to 5,000 meters. This is at the Astrolabe Subglacial Basin, which lies at the southern end of Antarctica’s Adélie Coast. Here we have glacial ice covering continental crust, as opposed to ice atop an ocean (although there appears to be an actively circulating groundwater system, which has been recently mapped in West Antarctica). The deepest bore into this ice so far is 2,152 meters, achieved in a 63-hour continuous drilling session that will one day be dwarfed by whatever ice-penetrating technologies we take to Europa.

Consider the challenge. We may, on Europa, be dealing with an ice sheet up to 25 kilometers thick – figuring out just how thick it actually is may take decades if the above missions return ambiguous results. In any case, we will need hardware that can operate at cryogenic temperatures in a hard vacuum, with radiation shielding adequate for the Jovian surface environment. The lander, after all, remains on the surface to sustain communications with the Earth.

Moreover, we need a system that is reliable, meaning one that can work its way around problems it finds in the ice as it moves downward. Here again we need ice profiles that can be developed by future missions. We do know the ice we encounter will contain salts, sulfuric acids and other materials whose composition is currently unknown. And we will surely have to cope with liquid water ‘pockets’ on the way down, as well as the fact that the ice may be brittle near the surface and warmer at greater depths.

SESAME Program Targets Europan Ice

NASA’s SESAME program at Glenn Research Center, which coordinates work from a number of researchers, is doing vital early work on all these problems. On its website, the agency has listed a number of assumptions and constraints for a lander/ice penetrator mission, including the ability to reach up to 15 kilometers within three years (assuming we learn that the ice isn’t thicker than this). For preliminary study, a total system mass of less than 200 kg is assumed, and the overall penetration system must be able to survive three years of operations in this hostile environment.

So far this program is Europa-specific, the acronym translating to Scientific Exploration Subsurface Access Mechanism for Europa. The idea is to identify which penetration systems can reach liquid water. It’s early days for thinking about penetrating Europa and other icy moon oceans, but you have to begin somewhere, and SESAME is about figuring out which approach is most likely to work and developing prototype hardware.

SESAME is dealing with proposals from a number of sources. Johns Hopkins, for example, will be testing communication tether designs and analyzing problems with RF communications. Stone Aerospace is studying a closed-cycle hot water drilling technology running on a fission reactor. Georgia Tech is contributing data from projects in Antarctica and studying a subsurface access drill design, hoping to get it up to TRL 4. Honeybee Robotics is focused on a “hybrid, thermomechanical drill system combining thermal (melting) and mechanical (cutting) penetration approaches.”

Image: A preliminary VERNE design from Georgia Tech showing conceptual component layout. Credit: Georgia Tech.

We’re pretty far down on the TRL scale (Technology Readiness Level), which runs from 1 to 9, from back-of-the-cocktail-napkin sketches up to tested, flight-ready hardware. Well, I shouldn’t be so cavalier about TRL 1, which NASA defines as “scientific research is beginning and those results are being translated into future research and development.” The real point is that it’s a long haul from TRL 1 to TRL 9, and the nitty-gritty work is occurring now for missions we haven’t designed yet, but which will one day take us to an icy moon and drill down into its ocean.

Swarming Technologies at JPL

Let’s home in on work that’s going on at the Jet Propulsion Laboratory, in the form of SWIM (Sensing With Independent Micro-Swimmers), a concept that, in the hands of JPL’s Ethan Schaler, has just moved into Phase II funding from NASA’s Innovative Advanced Concepts program. The $600,000 award should allow Schaler’s team to develop and test 3D printed prototypes over the next two years. The plan is to design miniaturized robots of cellphone size that would swarm through subsurface oceans, released underwater from the ice-melting probe that drilled through the surface.

Schaler, a robotics mechanical engineer, focuses on miniaturization because of the opportunity it offers to widen the search space:

“My idea is, where can we take miniaturized robotics and apply them in interesting new ways for exploring our solar system? With a swarm of small swimming robots, we are able to explore a much larger volume of ocean water and improve our measurements by having multiple robots collecting data in the same area.”

Image: In the Sensing With Independent Micro-Swimmers (SWIM) concept, illustrated here, dozens of small robots would descend through the icy shell of a distant moon via a cryobot – depicted at left – to the ocean below. The project has received funding from the NASA Innovative Advanced Concepts program. Credit: NASA/JPL-Caltech.

We are talking about robots described as ‘wedge-shaped,’ each about 12 centimeters long and 60 to 75 cubic centimeters in volume. Space is tight on the cryobot that delivers the package to the Europan surface, but up to 50 of these robots could fit into the envisioned 10-centimeter-long (25 centimeters in diameter) delivery package, while leaving enough room for accompanying instruments that will remain stationary under the ice.

I mentioned the Johns Hopkins work on communications tethers, and here the plan would be to connect to the surface lander (obviously ferociously shielded from radiation in this environment), allowing an open channel for data to flow to controllers on Earth. The swarm notion expands the possibilities for what the ice penetrating technology can do, as project scientist Samuel Howell, likewise at JPL, explains:

“What if, after all those years it took to get into an ocean, you come through the ice shell in the wrong place? What if there’s signs of life over there but not where you entered the ocean? By bringing these swarms of robots with us, we’d be able to look ‘over there’ to explore much more of our environment than a single cryobot would allow.”

Image: This illustration shows the NASA cryobot concept called Probe using Radioisotopes for Icy Moons Exploration (PRIME) deploying tiny wedge-shaped robots into the ocean miles below a lander on the frozen surface of an ocean world. Credit: NASA/JPL-Caltech.

One of the assumptions built into the SESAME effort is that the surface lander will use one of two nuclear power systems along with whatever technologies are built into the penetration hardware. Thus for the surface cryobot we have the option of a “small fission reactor providing 420 We and 43,000 Wth waste heat” or a “radioisotope power system providing up to 110 We and 2,000 Wth waste heat.” SWIM counts on nuclear waste heat to melt through the ice and also to produce a thermal bubble whose reactions with the ice above could be analyzed in terms of water chemistry.

The robots envisioned here have to be semi-autonomous, each with its own propulsion system, ultrasound communications capability and basic sensors, including chemical sensors to look for biomarkers. Overlapping measurements should allow this ‘flock’ of instrumentation to examine temperature or salinity gradients and more broadly characterize the chemistry of the subsurface water. We’ll follow SWIM with interest not only in the context of Europa but other ocean worlds that may be of astrobiological significance. If life can exist in these conditions, just how much biomass may turn up if we consider all the potential ice-covered oceans on moons and dwarf planets in the Solar System?


My prediction that we’re going to find evidence for exo-life around another star before we find it in our own Solar System is being challenged from several directions. Alex Tolley recently looked at the Venus Life Finder mission, a low-cost and near-term way to examine the clouds of the nearest planet for evidence of biology (see Venus Life Finder: Scooping Big Science). Now we learn of advances in a ten-year-old project at the University of Hawai‘i at Manoa, where Anupam Misra and team have been working on remote sensing instruments to detect minute biomarkers. This one looks made to order for Mars, but it also by extension speaks to future rovers on a variety of worlds.

Image: This artist’s impression shows how Mars may have looked about four billion years ago. The young planet Mars would have had enough water to cover its entire surface in a liquid layer about 140 m deep, but it is more likely that the liquid would have pooled to form an ocean occupying almost half of Mars’s northern hemisphere, and in some regions reaching depths greater than 1.6 km. How can we best identify markers of early life, assuming they exist? Credit: NOVA Next / UH Manoa.

The challenge is immense, because the lifeforms in question may be tiny, and may have been extinct for millions, if not billions, of years. As Misra’s recent paper notes, organic chemicals formed by biology, or minerals produced by living organisms, are the kind of biomarkers research efforts have targeted. We’re talking about proteins, lipids and fossil residues, the detection of any of which on another planet would lock down the case for life off the Earth. Instruments that can sweep wide areas with sensors and deliver fast detection times are critical for invigorating the biomarker hunt.

Remote sensing is the operative term. Misra’s team have developed what they call a Compact Color Biofinder that, in the words of the paper, “detects trace quantities of organic matter in a large area at video speed.” Moreover, the device can operate from distances of a few centimeters up to five meters. The intent is to move quickly, scanning large areas to locate these biological tracers. The device draws on fluorescence, a short-lived signal that can be found in most biological materials, including amino acids, fossils, clays, sedimentary rocks, plants, microbes, bio-residues, proteins and lipids. According to the authors, fluorescence also occurs in polycyclic aromatic hydrocarbons (PAHs) and abiotic organics, such as plastics or amino acids.

Misra, who is lead instrument developer at the Hawai‘i Institute of Geophysics and Planetology at the university, makes the case that these traces are still viable, and that the Compact Color Biofinder can tell the difference between mineral phosphorescence and organic fluorescence in daylight conditions, with measurement times on the order of one microsecond. It can also distinguish between different organic materials. Says Misra:

“There are some unknowns regarding how quickly bio-residues are replaced by minerals in the fossilization process. However, our findings confirm once more that biological residues can survive millions of years, and that using biofluorescence imaging effectively detects these trace residues in real time.”

Demonstrating the point is news that the device can detect the bio-residue of fish fossils from the Green River Formation, a geological feature resulting from sedimentation in a series of lakes along the Green River in Colorado, Wyoming and Utah. The formation is thought to be between 34 and 56 million years old. The fish in question is Knightia spp. (‘spp.’ stands for species pluralis, indicating several species within the genus), a now extinct fish that lived in freshwater lakes during the Eocene. The team examined 35 fish fossils, all of which still retained a significant quantity of bio-fluorescence.

Detection is from a distance of several meters and can be achieved over large areas, which should greatly accelerate the process of astrobiological detection on a planetary surface. From the paper:

To further test the detection capability of the Biofinder the camera lens was changed to a long working distance microscope objective, thus turning the instrument into a standoff fluorescence microscope. The same fossil was cut into several pieces to be imaged in cross-section (Fig. 1c). At the microscopic scale, fluorescence images (Fig. 1d) demonstrated the clear presence of organic material in the fossil by the characteristic fluorescence of organic matter detected using a 10× objective at a working distance of 54 mm. The brown color material has been known to paleontologists to be organic matter formed from residues of fish bones along with soft tissues [20] and hence, we can say that the organic fluorescence comes from biological origin.

Image: This is Figure 1 from the paper. Caption: Biofinder detection of biological residues in fish fossil. (a) White light image of a Green River formation fish fossil, Knightia sp., from a distance of 50 cm using the Biofinder without laser excitation. (b) Fluorescence image of the fish fossil obtained by the Biofinder using a single laser pulse excitation, 1 µs detection time, and 3.6% gain on the CMOS detector. (c) Close-up white light image of the fish fossil cross-section using a 10× objective with 54 mm working distance showing the fish remains and rock matrix. (d) Fluorescence image with a single laser pulse excitation showing strong bio-fluorescence from the fish remains. Credit: Misra, et al., 2022.

So fluorescence imaging may join our toolkit for future rovers on other worlds, able to detect organisms that have been dead for millions of years by scanning large areas of terrain in short periods of time. The Biofinder detections were corroborated by a wide range of instruments, from laboratory spectroscopy analysis and scanning electron microscopy to fluorescence lifetime imaging microscopy.

Image: This is Figure 3 from the paper. Caption: Confirmation of carbon and short-lived biofluorescence in fish fossil. (a) SEM–EDS analysis of the fish fossil cross-section showing that the fossil contains considerable quantities of carbon in comparison to the rock matrix. The rock matrix is rich in silica and has more oxygen than the fish. (b) FLIM image of the fossil cross-section showing strong bio-fluorescence in the fish (shown as false-coloured green-yellow region) with a lifetime of 2.7 ns. Credit: Misra et al.
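The nanosecond-scale lifetime reported above is what makes daylight, time-gated detection work: organic fluorescence decays in nanoseconds, while mineral phosphorescence glows for far longer. A back-of-the-envelope sketch of the separation (my own construction, not from the paper; the millisecond mineral lifetime is an assumed order-of-magnitude value):

```python
# Why a ~1 microsecond detection gate isolates fast organic fluorescence
# from slow mineral phosphorescence, modeling each as an exponential decay.
import math

def fraction_emitted(lifetime_s: float, gate_s: float) -> float:
    """Fraction of an exponential decay's total emission that falls inside the gate."""
    return 1.0 - math.exp(-gate_s / lifetime_s)

GATE = 1e-6  # ~1 microsecond detection window, as quoted for the Biofinder

organic = fraction_emitted(2.7e-9, GATE)  # ~ns fluorescence (cf. the 2.7 ns FLIM value)
mineral = fraction_emitted(1e-3, GATE)    # ~ms phosphorescence (assumed lifetime)

# Essentially the entire organic signal, but only ~0.1% of the mineral
# signal, lands inside the gate.
print(organic, mineral)
```

The same contrast, read out pixel by pixel, is what lets the FLIM image above paint the fish remains in a distinct false color against the rock matrix.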

The upshot: Biological residues can last for millions of years, and standoff bio-fluorescence imaging as used in the Compact Color Biofinder can detect them. Remote sensing is heating up in astrobiological circles, and I should mention two other ongoing projects: WALI (Wide Angle Laser Imaging enhancement to ExoMars PanCam) and OrganiCam, both based on fluorescence detection. The work of Misra and team indicates the method is sound, and should be capable of being deployed for large landscape surveys on future lander missions. The fact that the technology does not introduce contamination likewise speaks to its utility, says Sonia J. Rowley, a co-author of the paper and the biologist on the project:

“The Biofinder’s capabilities would be critical for NASA’s Planetary Protection program, for the accurate and non-invasive detection of contaminants such as microbes or extraterrestrial biohazards to or from planet Earth.”

The paper is Misra et al, “Biofinder detects biological remains in Green River fish fossils from Eocene epoch at video speed,” Scientific Reports 12, Article number: 10164 (2022). Full text.


Of Algorithms and Hidden Planets

It’s hard to imagine what the field of exoplanet discovery will look like in a hundred years, just as it’s difficult to imagine what might happen if we do reach a ‘singularity’ in machine intelligence beyond which we humans can’t venture. Will the study of other stellar systems become largely a matter of computers analyzing data acquired by AI, with human operators standing by only in case of equipment failure? Or will the human eye for pattern and detail so evident in many current citizen science projects always be needed to help us piece together what the machines find?

I wonder this when I read about the effort going into teasing new data out of older observations, as we saw recently in VASCO, a project to study old astronomical photographic plates looking for possible technosignatures. And I suspect we’ll always need human/machine collaboration to draw maximum knowledge out of our data. Today let’s look at how useful software tools are illuminating what we’ve already learned about an exceedingly interesting and relatively close planetary system.

Sometimes it becomes necessary to begin writing about something by carefully explaining what it is not. In this case, I’m talking about the planetary system e Eridani, otherwise known as 82 Eridani, and it’s important to add that this is not the system known as Epsilon Eridani. The latter, interesting in its own right, is nearby (10.5 light years) and in fact is the third closest individual star system visible to the naked eye. The former, our subject today, is 20 light years out, a G-class dwarf with several confirmed planets. In the southern hemisphere Gould star catalogs, compiled in the late 19th Century, it is listed as the 82nd star in the constellation Eridanus.

This is potentially confusing enough that I’m going to use 82 Eridani rather than e Eridani in this article, which will look at an interesting way to study exoplanet systems that are close by, and one that offers useful new insights into what may be found in the 82 Eridani system that we have yet to discover. We already know about two planets, now confirmed, that were found through radial velocity data, and the same data suggest another. As many as six planets may exist here based on recent analysis by Fabo Feng (University of Hertfordshire) and colleagues in a 2017 paper.

Image: This table shows what we currently know about the planetary system at 82 Eridani, including evidence for a dust disk. As we’re about to see, a hypothetical seventh planet turns up in the work we discuss below. Credit: Wikimedia Commons.

In a new paper in the Astronomical Journal, Ritvik Basant (University of Arizona) and colleagues go to work on the planetary architecture of 82 Eridani with a software package called DYNAMITE (developed by co-author Daniel Apai) that folds information specific to this system into a broader analysis incorporating what the authors call ‘exoplanet demographics.’ At stake here is this question: If an additional planet exists in a given system, what can we say about the probability distributions of its orbit, its eccentricity, its likely size? Let me quote from the paper:

To answer this question, DYNAMITE uses the robust trends identified in the Kepler exoplanet demographics data (orbital period distribution, planet-size distribution, etc., based on the ∼2400 exoplanets that form the Kepler population) with specific data for a given single exoplanet system (detected planets and constraints on their orbits and sizes). Based on this information, DYNAMITE uses a Monte Carlo approach to map the likelihood of different planetary architectures, also considering the orbital dynamical stability and allowing for the freedom of statistical model choice.
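This is not the actual DYNAMITE code, but the Monte Carlo idea in the passage above can be sketched in a few lines: draw candidate orbital periods from a period prior, then keep only configurations that pass a crude dynamical-stability test against the known planets. The planet parameters are approximate literature values for 82 Eridani b, c, and d; the log-uniform prior, the mass range, and the `min_sep` separation threshold are my illustrative assumptions, not values from the paper.

```python
import math
import random

def hill_stable(p1, p2, m1, m2, mstar=0.9, min_sep=8.0):
    """Crude stability check: require two orbits to be separated by at
    least `min_sep` mutual Hill radii. Periods in days, planet masses in
    Earth masses, stellar mass in solar masses."""
    a1 = (p1 / 365.25) ** (2 / 3) * mstar ** (1 / 3)   # Kepler's third law, AU
    a2 = (p2 / 365.25) ** (2 / 3) * mstar ** (1 / 3)
    r_hill = 0.5 * (a1 + a2) * (((m1 + m2) * 3.0e-6) / (3 * mstar)) ** (1 / 3)
    return abs(a2 - a1) / r_hill >= min_sep

def sample_injected_planet(known, n_trials=100_000):
    """Monte Carlo: draw candidate periods from a broad log-uniform prior
    (a stand-in for the Kepler period distribution) and keep those that
    are stable against all known planets."""
    accepted = []
    for _ in range(n_trials):
        p = 10 ** random.uniform(0.0, 3.0)        # 1 to 1000 days
        m = 10 ** random.uniform(0.0, 1.3)        # 1 to ~20 Earth masses
        if all(hill_stable(p, pk, m, mk) for pk, mk in known):
            accepted.append(p)
    return accepted

# Approximate (period in days, mass in Earth masses) for 82 Eridani b, c, d
known = [(18.3, 2.7), (40.1, 2.4), (90.3, 4.8)]
periods = sample_injected_planet(known)
```

The histogram of `periods` would then show where an injected planet could plausibly hide, which is the kind of probability map the full package produces with far richer demographic priors.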

I’m going into the weeds here because this package has already shown its worth. Back in 2020, Apai and co-author Jeremy Dietrich used DYNAMITE on 45 transiting systems discovered by TESS (Transiting Exoplanet Survey Satellite) to make predictions about undiscovered planets. Their work showed in multiple instances that an already discovered planet, if initially hidden from the software, would be recovered by DYNAMITE. The software passed the same test when applied to the TOI-174 system, where more than one planet was removed and DYNAMITE still flagged a high probability of additional planets.

The accomplishments of DYNAMITE can be further examined in the paper, but I’ll mention its utility in the Tau Ceti system and its prediction of a habitable zone planet there, as well as interesting work on the K2-138 system, where it made what turns out to have been accurate predictions on two planets. So this seems to be a robust package, drawing heavily on existing data on planetary populations – it works best with the typical rather than the outlier, in other words, a fact to keep in mind before we extrapolate too freely.

Exoplanet science is all about tugging facts out of challenging data, as has been the case since the detection of 51 Pegasi b or, for that matter, the pulsar planets at PSR 1257+12. Continually refining our techniques through ever more sophisticated equipment sharpens radial velocity and transit detections, but we’re also learning how the right algorithms, tuned up as the instruments themselves improve, can be applied to the data we generate to suggest new targets for study.

What we have so far for 82 Eridani shows the method at work in a system where our knowledge of several planet candidates is uncertain. DYNAMITE generates hypotheses exploring possible combinations of planet candidates. Each of these hypotheses produces predictions, and as it turns out, all four hypotheses produced for 82 Eridani result in planetary orbits that are quite similar. The authors also draw on a new DYNAMITE module that uses a statistical approach to explore possible surface temperatures. So this is a wide-ranging look at the system, and they consider the work an “exploratory assessment” only until more constraining data become available.

It will be interesting indeed to see how accurately this assessment describes what we will one day find with improved observational techniques. Beginning with the assumption of a system consisting of only the three known planets, DYNAMITE provides further support for the earlier work that predicted three more potential worlds (no information from the 2017 study, mentioned above, was used as input for the software). The parameters for the three candidate planets turn out to be in good agreement with the results of Fabo Feng and team. If all six planets, confirmed and unconfirmed, are used as input, DYNAMITE then predicts one additional planet in the habitable zone.

Here the software is suggestive in relation to the orbital eccentricity of these worlds:

From our eccentricity analysis, we find that if e Eridani is a three-planet system with planets b, d, and e, then the combined mean eccentricity for the system to be stable is ∼0.05. If the system is a six-planet system instead, then the combined mean eccentricity for the system to be stable is of an order ∼0.026. In either case, we find that the eccentricity of each planet should be significantly lower than the value fitted to the RV data, as also proposed by Feng et al. (2017a).

As the planetary system’s stability necessitates a lower-than-reported eccentricity for the planets, our analysis is based on this assumption. If better constraints on the eccentricities become available in future, then our analysis could be repeated again with the updated values.

So this is a rolling process, with the DYNAMITE results seeming to support seven planets at this star, including one additional candidate in the habitable zone, joining the previously predicted 82 Eridani f there. Indeed, the habitable zone around this star is wide enough, and the inner planetary system likely to be complex enough, to raise 82 Eridani higher on the list of planetary systems we will want to examine for life, using future direct imaging via space-based observatories and terrestrial extremely large telescopes. That new habitable zone planet candidate, by the way, would likely be a mini-Neptune rather than a terrestrial world based on the DYNAMITE results.

It’s interesting to see that Guillem Anglada-Escudé, the astronomer behind the discovery of Proxima Centauri b, worked with exoplanet hunter Paul Butler to develop an algorithm called TERRA to filter noise and sharpen radial velocity analysis. It was this algorithm that turned up the evidence for the three additional candidates at 82 Eridani in Feng and team’s 2017 paper that played into the work using DYNAMITE.

So we have three known planets at 82 Eridani, three more suggested by the TERRA analysis of the existing RV data and strengthened by the DYNAMITE results, and now a possible seventh world with an orbital period of 549-733 days in the habitable zone. Again, the new worlds here are planet candidates at this point and await further observation and analysis. The latter will one day give us data that tighten algorithms like these still further, improving our ability to weigh probabilities and decide which candidates merit precious telescope time.

The paper is Basant et al, “An Integrative Analysis of the Rich Planetary System of the Nearby Star e Eridani: Ideal Targets for Exoplanet Imaging and Biosignature Searches,” Astronomical Journal Vol. 164, No. 1 (16 June 2022) 12 (full text). If you want to dig further into the background, the Feng et al. paper is “Evidence for at least three planet candidates orbiting HD 20794,” Astronomy & Astrophysics Vol. 605 (September 2017) A103 (abstract).


White Dwarfs: Planetary System Rebirth?

Let’s catch up with white dwarfs, a kind of star that may spawn planetary systems of its own. For I’ve just found another case of archival data being put to good use in the form of a study of a white dwarf system called G238-44. Here, the data come from the Hubble instrument (specifically, its Cosmic Origins Spectrograph and Space Telescope Imaging Spectrograph), the Far Ultraviolet Spectroscopic Explorer (FUSE), and the Keck Observatory’s High Resolution Echelle Spectrometer (HIRES) in Hawaii.

What astronomers presented at a recent AAS conference is a picture of a system severely disrupted by its star’s transition to white dwarf status. Moreover, this is a star in the process of accretion with a distinct twist from earlier such discoveries. For the white dwarf – the remnant left behind after the system’s star went through its red giant phase – is actively drawing rocky and metallic material as well as ices from the debris of the disrupted system. These are the stuff of planet formation. We’re learning how extreme conditions are in what astronomers call an ‘evolved’ planetary system as it undergoes destruction and what may be a kind of rebirth.

Image: This illustrated diagram of the planetary system G238-44 traces its destruction. The tiny white dwarf star is at the center of the action. A very faint accretion disk is made up of the pieces of shattered bodies falling onto the white dwarf. The remaining asteroids and planetary bodies make up a reservoir of material surrounding the star. Larger gas giant planets may still exist in the system. Much farther out is a belt of icy bodies such as comets, which also ultimately feed the dead star. Credit: NASA, ESA, Joseph Olmsted (STScI).

White dwarfs force us to revisit assumptions about planet formation drawn from main sequence stars. Such a star contains roughly half the mass of the Sun, for example, but while it’s only a bit bigger than the Earth, it sports a density of about 1 × 10⁹ kg/m³. The average white dwarf is 200,000 times as dense as the Sun, a remnant stellar core with a temperature in the range of 100,000 Kelvin. And the system it finds itself in, having survived the red giant phase of the star, is hardly a static place.
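That density figure is easy to sanity-check: pack half a solar mass into an Earth-sized sphere and see what falls out. The constants below are standard values; the half-solar-mass, Earth-radius white dwarf is the idealization used in the text.

```python
import math

M_SUN = 1.989e30     # solar mass, kg
R_EARTH = 6.371e6    # Earth radius, m

mass = 0.5 * M_SUN                       # a typical white dwarf mass
volume = (4 / 3) * math.pi * R_EARTH**3  # an Earth-sized sphere
density = mass / volume                  # kg/m^3
print(f"{density:.1e} kg/m^3")           # ~9e8, i.e. of order 10^9
```

The result lands within a factor of order unity of the quoted 10⁹ kg/m³, which is as close as a back-of-the-envelope figure like this can be expected to get.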

To analyze what is happening at G238-44, we have to take into account that the original red giant, perhaps much like the Sun in its earlier days, would have cast off its outer layers as nuclear burning ceased. This shedding of mass can cause asteroids and small moons to be scattered gravitationally by remaining large planets, their own orbits disrupted. Materials like these experience tidal forces that can tear them apart as they move inward toward the star. The result: A disk of gas and dust that, over time, settles onto the surface of the white dwarf and throws a distinct observational signal.

At G238-44, the white dwarf left behind is seen in the process of accreting two such objects, a process observed before in a number of white dwarf systems but never with both icy and rocky-metallic components in the mix. Now we have a case of a white dwarf evidently drawing on a planetary system that was once abundant in ices. As UCLA’s Benjamin Zuckerman, a co-author of the paper on this work, notes:

“Life as we know it requires a rocky planet covered with a variety of elements like carbon, nitrogen, and oxygen. The abundances of the elements we see on this white dwarf appear to require both a rocky and a volatile-rich parent body – the first example we’ve found among studies of hundreds of white dwarfs.”

Within about 100 million years of the white dwarf’s formation, the star will be capturing material from regions analogous to our asteroid and Kuiper Belt. The total mass involved in this study is relatively small, about that of a large asteroid. Nitrogen, oxygen, magnesium, silicon and iron have been measured in the debris disk here, and in interesting proportions. Lead researcher Ted Johnson, a colleague of Zuckerman’s at UCLA, sees a two-to-one mix of Mercury-like material, high in iron and suggestive of a metallic planetary core, with comet-like, volatile-rich debris.

Image: This illustration shows a white dwarf star siphoning off debris from shattered objects in a planetary system. The Hubble Space Telescope detects the spectral signature of the vaporized debris that reveals a combination of rocky-metallic and icy material, the ingredients of planets. The findings help describe the violent nature of evolved planetary systems and the composition of its disintegrating bodies. Credit: NASA, ESA, Joseph Olmsted (STScI).

Terrestrial Planet in the Habitable Zone?

With this destruction derby in mind, let’s catch up with the white dwarf WD1054–226, found not so long ago to have objects – apparently of small moon or asteroid size – orbiting close to the star. Their presence is an indication, according to astronomers at University College London, that there may be a nearby planet in the star’s small habitable zone. This finding is based on data from the ESO’s 3.5m New Technology Telescope (NTT) at the La Silla Observatory in Chile. Fully 65 dips in the star’s light show the extent of the orbital material, whipping around the star in clouds every 25 hours.

A planet farther out seems the best explanation for how this arrangement stays in place, and if it is there, it would be in an orbit about 1.7 percent of the distance between the Earth and the Sun (roughly 2.5 million kilometers). That’s in the liquid water habitable zone, and the planet would be about the size of the Earth, based on these calculations.
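Kepler’s third law ties the distance and the 25-hour cadence noted above together. The sketch below assumes a typical white dwarf mass of ~0.6 solar masses (my assumption; the source does not quote one) and computes the orbital period at 2.5 million kilometers.

```python
import math

G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
M_WD = 0.6 * 1.989e30     # assumed white dwarf mass, kg (~0.6 solar masses)
a = 2.5e9                 # orbital distance, m (~1.7 percent of an AU)

# Kepler's third law: T = 2*pi * sqrt(a^3 / (G*M))
period_s = 2 * math.pi * math.sqrt(a**3 / (G * M_WD))
period_h = period_s / 3600
print(f"{period_h:.1f} hours")
```

The answer comes out near 24.5 hours, pleasingly close to the observed 25-hour recurrence of the dips, which is a useful consistency check on the reported geometry.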

What interesting scenarios stars like these represent. Some 95 percent of the stars in the galaxy will eventually become white dwarfs, with our Sun joining their ranks in four or five billion years. At WD1054-226, we’re hypothesizing the existence of a kind of planet that has yet to be confirmed around such a star. UCL’s Jay Farihi is lead author of the paper on this work:

“This is the first time astronomers have detected any kind of planetary body in the habitable zone of a white dwarf. The moon-sized structures we have observed are irregular and dusty (e.g. comet-like) rather than solid, spherical bodies. Their absolute regularity is a mystery we cannot currently explain. An exciting possibility is that these bodies are kept in such an evenly-spaced orbital pattern because of the gravitational influence of a nearby major planet. Without this influence, friction and collisions would cause the structures to disperse, losing the precise regularity that is observed. A precedent for this ‘shepherding’ is the way the gravitational pull of moons around Neptune and Saturn help to create stable ring structures orbiting these planets.”

Image: An artist’s impression of the white dwarf star WD1054–226 orbited by clouds of planetary debris and a major planet in the habitable zone. Credit Mark A. Garlick / markgarlick.com. License type Attribution (CC BY 4.0).

Tantalizing, but remember that the ‘planet’ here is unconfirmed. JWST data on the debris disk would be helpful as we learn more. White dwarf planets of terrestrial size should eventually turn up if they’re out there in any numbers. If we could find a transit, we’d be looking at a world as large as the star it orbits; the transit depth such an alignment would afford would make for an unforgettable light curve.

The paper is Farihi et al., “Relentless and Complex Transits from a Planetesimal Debris Disc,” Monthly Notices of the Royal Astronomical Society Vol. 511, No. 2 (April 2022), 1647-1666 (full text).


‘Lurker’ Probes & Disappearing Stars

We’ve looked before at the growing interest in exploring near-Earth space for evidence of probes from other civilizations that may have been sent in the distant past to monitor and report home on the progression of life in our Solar System. If extraterrestrial civilizations exist, the idea that one of them might have explored our system and left behind what Jim Benford calls a ‘lurker’ probe is sensible enough. We send probes to places we want to learn more about, and we would certainly have probes around the nearest stars if we had the means. Breakthrough Starshot is an example of such interest. A century from now, human probes to other stars may be commonplace.

Various places to search for lurker probes have been suggested, from Lagrange points – where objects placed there tend to stay put, with minimal need for fuel consumption – to barely studied Earth co-orbitals to the surface of the Moon. But what about Earth orbit? Surveillance of the Earth could involve probes in long-term high altitude orbits, the geosynchronous realm of our present-day communication satellites, which can always remain above the same location on the planet. As opposed to low-Earth orbits, GEO offers stable conditions over millions, perhaps billions of years.

The immediate objection is that looking into Earth’s sky is confounded by multiple factors. We have close to 5,000 satellites already in one kind of Earth orbit or another. We must also cope with centimeter-scale debris in lower orbits that seems to be increasing over time, another reason why higher orbits would be preferable for searching for something anomalous. Even so, human contamination near our planet means that using modern survey tools like Pan-STARRS is complicated and time-consuming.

If we had a time machine, we could see the sky as it was before Sputnik. But as Beatriz Villarroel and colleagues note in a new paper in Acta Astronautica, we have much easier ways of doing this. Photographic plate projects like the First Palomar Sky Survey (POSS-1) are available from earlier periods, and Villarroel (Stockholm University) is behind a new citizen science project called VASCO (Vanishing & Appearing Sources during a Century of Observations) to exploit these resources.

VASCO builds upon an earlier project of the same name in which Villarroel and colleagues analyzed old sky catalogs looking for stars that appear in the older plates but are not found in later imaging. In the earlier work, about 100 red transients turned up, interesting objects that likely represent flare stars worth follow-up investigation. The image below is drawn from this work (citation at the end of the article).

Image: A source visible in an old plate (left, seen as the bright source at the centre of the square) has disappeared in a later plate (right). Credit: Villarroel et al. (2019).

With the online VASCO project, the focus shifts to human volunteers, who work in the cause of finding anomalous features that may point to a technology in Earth orbit before the first Earth satellite flew. While VASCO uses the POSS-1 dataset, other plate material is available from the Lick and Sonneberg observatories and the Carte du Ciel, a decades-long mapping project from the early 20th Century. Interest in photographic plates is quickening because this is a resource ripe for analysis with our new digital tools. Thus projects like DASCH (Digital Access to a Sky Century @ Harvard), which has spent two decades thus far scanning photographic plates and archiving data.

Long-term Centauri Dreams readers will know that I champion the idea of using older datasets, which are priceless windows into the pre-digital sky. How we can exploit this material and expand our understanding with our new digital tools is an exciting area of research, and here we have the great benefit that the data have already been collected. We need build no new observatories to acquire information, but can concentrate on mining older plates for what may turn out to be new discoveries.

In the case of VASCO, the trick is to come up with the necessary filters to isolate anything that may be artificial, i.e., a technosignature. Low Earth orbits remain relatively uninteresting because they do not fit the long-term survival we’re presuming in a lurker, and in any case a fast-moving point source in a long exposure can just as readily turn out to be a natural object like an asteroid or meteor. An object in geosynchronous orbit may throw a glint when observer and reflective surface happen to be precisely aligned. But single glints are not enough. The authors are after indicators that cannot be confused with natural phenomena and are not the result of instrumental or photographic defects.

Image: This is Figure 2 from the paper. Caption: Fig. 2. A typical streak. The POSS-I streak, found in a red image identified through the citizen science project, shows the effect of tumbling and may be a near-Earth asteroid, but it is also a possible candidate. The streak is roughly 40 arcminutes in length and, given its angular velocity and its pattern of first being dim, then brightening, then dimming once again, is unlikely to be a meteor. The typical exposure time for POSS-I images is about 50 minutes. Credit: Villarroel et al.

While a reflective satellite can produce a short, powerful glint, the glint shows a Point Source Function (PSF) shape that does not help us much. A point source is one smaller than the resolution limit of the instrument; its image spreads because of diffraction at the telescope aperture, a factor any analysis must take into account. From the paper:

Satellites that are uniformly illuminated at low- or medium altitude orbits leave clear streaks in the long time exposures from old photographic plates as they move at speeds projected as hundreds of arcseconds per second. At higher or GEO altitudes the presence of satellites or space debris can be detected by fast, transient glints caused by surface reflection of the Sun. When the reflective surface of the satellite coincides perfectly with the position of the observer and the Sun, a short but powerful glint can be observed. Despite the fast movement of the satellite, the very brief reflective alignment means that the resulting short duration glint has a Point Source Function (PSF)-like shape…

And it turns out that a single glint on older photographic plates is indistinguishable from an astrophysical transient. In fact, ground-based searches for such transients today often pull up solar reflections from artificial objects in geosynchronous orbits. Moreover, 75 percent of glints from GEO, while not associated with any known object, are almost certainly centimeter-sized human space debris.

Going beyond the single glint, then, the paper analyzes multiple glints, and notes in particular glints with point source functions that occur along a straight line – which can occur when a spinning object reflects sunlight – and triple glints, another sign of possible rotation. And indeed, multiple glints have been found in at least one image exposed in 1950, though it is impossible to rule out contamination or emulsion defects on old photographic plates. What the authors are after is something more reliable:

The smoking-gun observation that settles the question unequivocally, is the one of repeating glints with clear PSFs along a straight line in a long-exposure image. When an object spins fast around itself and when its reflective surface faces the Earth, some of its parts could reflect sunlight. That results in multiple glints following a trail in an image. The number of glints might depend on the geometry and the speed of the rotation of the object. An object with only one single reflective surface that spins slowly will produce fewer glints than an object with several reflective surfaces that, moreover, spins fast. From the period one can also determine the shape of the glinting object.

This, then, is the kind of signature Villarroel and colleagues hope to find during the course of the VASCO investigations:

An exciting aspect of these suggestions is that precisely these type of objects could be found during the course of the VASCO project [8], [29]. Among the many objects classified as “Vanished”, we could discover both single and multiple glint objects. Also through automatised methods, we seek to identify all cases of multiple glints within a small area of 10 × 10 arcmin2, and to see if any of these represent cases where the glints follow a straight line.
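The collinearity test the authors describe is simple to sketch. The function below is my own illustration, not VASCO code: given a handful of glint centroids from a plate (pixel coordinates are hypothetical), it anchors a line on the two most widely separated detections and asks whether every other glint falls within a tolerance of that line.

```python
import itertools

def collinear(points, tol=1.0):
    """Return True if all (x, y) glint centroids lie along one straight
    line to within `tol` pixels of perpendicular scatter."""
    if len(points) < 3:
        return True  # two points always define a line
    # Anchor the line on the most widely separated pair of glints
    (x1, y1), (x2, y2) = max(
        itertools.combinations(points, 2),
        key=lambda pq: (pq[0][0] - pq[1][0]) ** 2 + (pq[0][1] - pq[1][1]) ** 2,
    )
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5
    # Perpendicular distance of each point from the anchor line
    return all(abs(dy * (x - x1) - dx * (y - y1)) / norm <= tol
               for x, y in points)

# A tight trail of three glints passes; a scattered triple does not
print(collinear([(10, 10), (20, 20.3), (30, 30.1)]))   # True
print(collinear([(10, 10), (20, 28), (30, 12)]))       # False
```

A production pipeline would also fold in glint brightness and spacing, since a spinning reflector should produce roughly periodic flashes, but the straight-line cut alone already removes most chance alignments.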

Image: This is Figure 5 from the paper. Caption: Fig. 5. Triple glints. An example of triple glints in a red POSS-I image from the 1950s. The left column shows the POSS-I image, and the right column the Pan-STARRS image (year 2015). The example is from Villarroel et al. (2021) [54] and uses the VASCO citizen science web interface. Credit: Villarroel et al.

A series of multiple glints along a line in photographic plate images, if found in the VASCO plates, would be of great interest, but there is a ticking clock, because trying to locate any such object today comes up against the growing volume of human-made space debris. The authors argue that searches for technosignatures in photographic plates should thus be done as soon as possible, and preferably performed on datasets beyond the POSS-1 material now used by VASCO.

A sky without human contamination in orbit is available through these plates, and if an object in geosynchronous orbit has been left behind – perhaps millions of years ago – as an observing platform or other kind of probe, this method is one way citizen science can be employed to spot it. Just how long a reflective surface can endure in an environment of dust grain and micrometeorite collisions is debatable, but of course we know nothing about what measures probe builders might take to protect their equipment. The authors think the imponderables keep VASCO a viable project.

The paper is Villarroel et al., “A glint in the eye: Photographic plate archive searches for non-terrestrial artefacts,” Acta Astronautica Vol. 194 (May 2022), 106-113 (full text). For earlier work, see Villarroel et al., “The Vanishing and Appearing Sources during a Century of Observations Project. I. USNO Objects Missing in Modern Sky Surveys and Follow-up Observations of a ‘Missing Star’,” Astronomical Journal Vol. 159, No. 1 (2020) 8 (full text). Thanks to my friend Antonio Tavani for the pointer to the 2022 paper.


Comet Interceptor Could Snag an Interstellar Object

It pleases me to learn that Dutch astronomer Jan Oort was among the select group of people who have seen Halley’s Comet twice. At the age of 10, he saw it with his father on the shore at Noordwijk, Netherlands. In 1986, he saw it again from an aircraft. What a fine experience that would have been for a man who brought so much to the study of comets, including the idea that the Solar System is surrounded by a massive cloud of such objects in orbits far beyond those of the outer planets.

Image: Dutch astronomer Jan Oort, a pioneer in the study of radio astronomy and a major figure in mid-20th Century science. Credit: Wikimedia Commons CC BY-SA 3.0.

Halley’s Comet is a short-period object, roughly defined as a comet with an orbit of 200 years or less, and thus not a member of the Oort Cloud. But let’s linger on it for just a moment. The most famous person associated with two appearances of Halley’s Comet is Mark Twain, who was born in 1835 with the comet in the sky, and who sensed that its approach in 1910 would also mark his demise. As Twain put it:

I came in with Halley’s Comet… It is coming again … and I expect to go out with it… The Almighty has said, no doubt: ‘Now here are these two unaccountable freaks; they came in together, they must go out together.’

And so they did.

Edging in from the Oort

The Oort Cloud is an intriguing concept because by some accounts, it may extend halfway to the nearest star, meaning that it’s conceivable that the cometary cloud around the Sun nudges into a similar cloud around Centauri A/B, assuming there is one there. We use the Oort to explain the appearance of long-period comets, assuming that among these trillions of objects, a few are occasionally nudged out of their orbits and fall toward the Sun. The concept makes sense but observational data is sparse, as these dark objects are not directly observable until one of them moves inward.

Image: The presumed distance of the Oort cloud compared to the rest of the Solar System. Credit: NASA / JPL-Caltech / R. Hurt.

We’ve recently learned about a long-period comet with interesting properties indeed. C/2014 UN271 (Bernardinelli–Bernstein) is the object in question, named after the two astronomers who discovered it in Dark Energy Survey (DES) data at a heliocentric distance of 29 au. Recent work with the Hubble Space Telescope has determined that the object may be as much as 130 kilometers across, making it the largest nucleus ever seen in a comet. Moreover, we can assume that it’s not an aberration.

David Jewitt (UCLA) is a co-author of the paper on this work:

“This comet is the tip of the iceberg for many thousands of comets that are too faint to see in the more distant parts of the solar system. We’ve always suspected this comet had to be big because it is so bright at such a large distance. Now we confirm it is.”

Getting an accurate read on an object like this was no easy matter. At this distance from the Sun, the nucleus is too faint to be resolved even by the Hubble instrument, so Jewitt and team had to rely on data showing the spike of light where the nucleus was thought to be. Lead author Man-To Hui (Macau University of Science and Technology) led the development of a computer model of the surrounding coma, adjusting it to the Hubble data and then subtracting its glow, leaving behind the nucleus. Observations from the Atacama Large Millimeter/submillimeter Array (ALMA) confirmed its size and also made it clear that the nucleus is, as Jewitt puts it, “blacker than coal.”

Image: Sequence showing how the nucleus of Comet C/2014 UN271 (Bernardinelli-Bernstein) was isolated from a vast shell of dust and gas surrounding the solid icy nucleus. Credit: NASA, ESA, Man-To Hui (Macau University of Science and Technology), David Jewitt (UCLA). Image processing: Alyssa Pagan (STScI).

Intercepting a Comet

If long-period comets are difficult objects to study from Earth orbit, we may need to get up close with a spacecraft. It’s good to hear that the European Space Agency has approved the mission known as Comet Interceptor for construction, slotting it to fly in 2029 in the same launch that will carry the Ariel exoplanet finder into space. We’ve studied comets before, of course, including Halley’s, with notable success. But it’s obvious that short-period comets like Halley’s and the Rosetta target 67P/Churyumov–Gerasimenko would have been changed by their long proximity to the inner Solar System. What will we find when we study a newly arriving Oort object?

Michael Küppers is an ESA scientist working on the Comet Interceptor mission:

“A comet on its first orbit around the Sun would contain unprocessed material from the dawn of the Solar System. Studying such an object and sampling this material will help us understand not only more about comets, but also how the Solar System formed and evolved over time.”

Both Ariel and Comet Interceptor will proceed to the L2 Lagrangian point 1.5 million kilometers from the Earth, where the latter will wait for a target, presumably an Oort object jostled inward by gravitational interactions. Here we rely on the fact that comets are often detected more than a year before they reach perihelion, a time too short to allow for the construction of a dedicated space mission. The plan is to make Comet Interceptor ready to move when the time comes, performing a flyby of the incoming object and releasing twin probes to build up a 3D profile of the comet.

Image: An illustration of the L2 point showing the distance between the L2 and the Sun, compared to the distance between Earth and the Sun. Credit: ESA.

ESA will build the spacecraft and one of the two probes, the other being developed by the Japanese space agency JAXA. Given that over 100 comets are known to come close to Earth in their orbit around the Sun, along with the 29,000 asteroids cataloged so far, it will likewise be useful to have a better understanding of the composition of a pristine comet in case it ever becomes necessary to take action to avert an impact on Earth.

And if the target turns out to be an interstellar new arrival like ‘Oumuamua? So much the better. We should be finding more such newcomers shortly, given the success of the Pan-STARRS observatory and the development of the Large Synoptic Survey Telescope, now known as the Vera C. Rubin Observatory, under construction in Chile. Waiting in space for an Oort object or an interstellar comet means we won’t need to know the target in advance, but can adjust the mission as data become available. In any case, ESA is optimistic, saying Comet Interceptor “is expected to complete its mission within six years of launch.”

An ESA factsheet on Comet Interceptor can be found here. The paper on C/2014 UN271 (Bernardinelli–Bernstein) is Man-To Hui et al, “Hubble Space Telescope Detection of the Nucleus of Comet C/2014 UN271 (Bernardinelli–Bernstein),” Astrophysical Journal Letters Vol. 929, No. 1 (12 April 2022) L12 (abstract).


Europa: Catching Up with the Clipper

I get an eerie feeling when I look at spacecraft before they launch (not that I get many opportunities to do that, at least in person). But seeing the Spirit and Opportunity rovers on the ground at JPL just before their shipment to Florida was an experience that has stayed with me, as I pondered how something built by human hands would soon be exploring another world. I suppose the people who do these things at the Johns Hopkins Applied Physics Laboratory and the Jet Propulsion Laboratory itself get used to the feeling. For me, though, the old-fashioned ‘sense of wonder’ kicks in long and hard, as it did when Europa Clipper arrived recently at JPL.

Not that the spacecraft is by any means complete, but its main body has been delivered to the Pasadena site, where it will see final assembly and testing over a two-year period. Here I fall back on the specs to note that this is the largest NASA spacecraft ever designed for exploration of another planet. It’s about the size of an SUV when stowed for launch, but we know from the James Webb Space Telescope how large these things can become when fully deployed. In Europa Clipper’s case, the recently delivered main body is 3 meters tall and 1.5 meters wide. Extending the solar arrays and other deployable equipment takes it up to basketball court size.

Image: The main body of NASA’s Europa Clipper spacecraft has been delivered to the agency’s Jet Propulsion Laboratory in Southern California, where, over the next two years, engineers and technicians will finish assembling the craft by hand before testing it to make sure it can withstand the journey to Jupiter’s icy moon Europa. Here it is being unwrapped in a main clean room at JPL, as engineers and technicians inspect it just after delivery in early June 2022. Credit: NASA.

Eight antennas are involved, powered by a radio frequency subsystem that will service a high-gain antenna measuring three meters wide, and as JPL notes in a recent update, the electrical wires and connectors collectively called the ‘harness’ themselves weigh 68 kilograms. Stretch all that wiring out and you get 640 meters, taking us twice around a football field. The main body will include a fuel tank and an oxidizer tank connecting to an array of 24 engines. Tim Larson is JPL deputy project manager for Europa Clipper:

“Our engines are dual purpose. We use them for big maneuvers, including when we approach Jupiter and need a large burn to be captured in Jupiter’s orbit. But they’re also designed for smaller maneuvers to manage the attitude of the spacecraft and to fine tune the precision flybys of Europa and other solar system bodies along the way.”

So what is arriving, or has arrived at JPL, is a spacecraft in pieces, its main body now joining key instruments like E-THEMIS, a thermal emission imaging system developed at Arizona State, and Europa-UVS, the mission’s ultraviolet spectrograph. E-THEMIS is an infrared camera that should give us insights into temperatures on the Jovian moon, and hence offer information about its geological activity. Given that we’re interested in finding places where liquid water is close to the surface, the data from this instrument should be extremely valuable during the spacecraft’s nearly fifty close passes.

The theory here is that as Europa’s surface cools after local sunset, the areas of the most solid ice will retain heat longer than areas with a looser, more granular texture. E-THEMIS will be able to map cooling rates across the surface. The infrared camera works in three heat-sensitive bands, and the warmer regions it sees may be the result of liquid water close to the surface, or of recent impacts or convective activity. Not surprisingly, E-THEMIS lead project engineer Greg Mehall points to the radiation environment in Jupiter space as one of the team’s biggest issues:

“The extreme radiation environment at Europa gave far more design challenges for the ASU engineering team than on any previous instrument we’ve developed. We had to use dense shielding materials, such as copper-tungsten alloys, to provide the necessary protection from the expected radiation. And to ensure that E-THEMIS will survive during the mission, we also carried out radiation tests on the instrument’s electronic components and materials.”

Image: The thermal imager will use infrared light to distinguish warmer regions on Europa’s surface, where liquid water may be near the surface or might have erupted onto the surface. The thermal imager will also characterize surface texture to help scientists understand the small-scale properties of Europa’s surface. In the image above, we’re seeing a diurnal temperature color image from the first light test of Europa Clipper’s thermal imager (called E-THEMIS), taken from the rooftop of the Interdisciplinary Science and Technology Building 4 on the Tempe Campus of Arizona State University (ASU). The top image was acquired at 12:40 PM, the middle at 4:40 PM, and the bottom image at 6:20 PM (after sunset). Temperatures are approximations during this testing phase. Credit: ASU.

As to the Europa-UVS instrument, this ultraviolet spectrograph will search for water vapor plumes and study the composition of both the surface and the tenuous atmosphere as it uses an optical grating to spread and analyze light, identifying basic molecules like hydrogen, oxygen, hydroxide and carbon dioxide.

The spacecraft’s visible light imaging system (EIS) will vastly improve on those well-studied images from the Galileo mission. The plan is to map 90 percent of the moon’s surface at 100 meters per pixel, covering six times more of Europa’s surface than Galileo did, and at five times better resolution. And when Europa Clipper swings close to Europa during a flyby, it will produce images with a resolution fully 100 times better than Galileo’s. The Europa Imaging System includes both wide- and narrow-angle cameras, each with an eight-megapixel sensor. Both cameras will produce stereoscopic images and include the filters needed to acquire color images.

All told, the spacecraft’s nine science instruments should be able to extract information about the depth and salinity of the ocean under the ice and, crucially, the thickness of the ice crust (I can imagine wagers on that issue going around in certain quarters). Gathering information about the moon’s surface and interior should further illuminate the issue of plumes from the ocean below that may break through the ice.

Assembly, test and launch operations make up a two-year phase that, by the end of this year, should see most of the flight hardware and the remaining science instruments in place. Kudos to JHU/APL, which has just delivered a flight system that is the largest ever built by engineers and technicians there. Now we look toward bolting on the radio frequency module, radiation monitors, power converters, the propulsion electronics and those hundreds of meters of wiring. Not to mention the electronics vault that must stand up to hard radiation.

The full instrument package will include an imaging spectrometer, ice-penetrating radar, a magnetometer, a plasma instrument, a mass spectrometer and a dust analyzer. Only two years and four months remain before launch on a six-year journey of 2.9 billion kilometers. Europa Clipper isn’t a life-finder, but it does have the capability of determining whether the moon’s ocean really could support the development of life. It’s our first reconnaissance of Europa since the 1990s. What surprises will it reveal?

Bear in mind, too, that we still have ESA’s JUICE (JUpiter ICy moons Explorer) in the offing, with launch planned for 2023. I note with interest that on June 19, Europa will occult a distant star, which should be useful in tweaking our knowledge of the moon’s orbit before the arrival of both missions. Destined to end its life as a Ganymede orbiter, JUICE will make only two close passes of Europa, but its period of operations will coincide with part of Europa Clipper’s numerous flybys of the moon.
