Open Cluster SETI

Globular clusters, those vast ‘cities of stars’ that orbit our galaxy, get a certain amount of traction in SETI circles because of their age, dating back as they do to the earliest days of the Milky Way. But as Henry Cordova explains below, they’re a less promising target in many ways than the younger, looser open clusters which are often home to star formation. Because it turns out that there are a number of open clusters that likewise show considerable age. A Centauri Dreams regular, Henry is a retired map maker and geographer now living in southeastern Florida and an active amateur astronomer. Here he surveys the landscape and points to reasons why older open clusters are possible homes to life and technologies. Yet they’ve received relatively short shrift in the literature exploring SETI possibilities. Is it time for a new look at open clusters?

by Henry Cordova

If you're looking for signs of extra-terrestrial intelligence in the cosmos, whether it be radio signals or optical beacons or technological residues, doesn't it make sense to observe an area of sky where large numbers of potential candidates (particularly stars) are concentrated? Galaxies, of course, are large concentrations of stars, but they are so remote that it is doubtful we would be able to detect any artifacts at those distances. Star clusters are concentrations of stars gathered together in a small area of the celestial sphere, easily within the field of view of a telescope or radio antenna. These objects also have the advantage that all their members are at the same distance, and of the same age.

Ask any amateur astronomer, "How many kinds of star cluster are there?" and he will answer, "Two: Open Clusters (OCs) and Globular Clusters (GCs)." The terms "Globular" and "Open" refer both to their general morphology and to their appearance through the eyepiece. It's important to keep in mind that both are collections of stars presumably born at the same time and place (and hence from the same material), but they are nevertheless very different kinds of objects. There does not seem to be a clearly defined transitional or intermediate state between the two. One type does not evolve into the other. Incidentally, the term 'Galactic Cluster' is often encountered when researching this field. It is an obsolete term for an OC and should be abandoned. It is too easily misunderstood as meaning a 'cluster of galaxies' and can lead to confusion.

GCs are in fact globular. They are collections of thousands, if not hundreds of thousands, of stars forming spheroidal aggregates much more densely packed towards their centers. OCs are amorphous and irregular in shape, random clumps of several hundred to several thousand stars resembling clouds of buckshot flying through space. Their distribution throughout the galaxy is different as well. GCs orbit the galactic center in highly elliptical orbits scattered randomly through space. They are, for the most part, located at great distances from us. OCs, on the other hand, appear to be restricted to mostly circular orbits in the plane of the Milky Way. Due to the obscuring effects of interstellar dust in the plane of the galaxy, most are seen relatively near Earth, although they are scattered liberally throughout the spiral arms.

Image: The NASA/ESA Hubble Space Telescope has captured the best ever image of the globular cluster Messier 15, a gathering of very old stars that orbits the center of the Milky Way. This glittering cluster contains over 100 000 stars, and could also hide a rare type of black hole at its center. The cluster is located some 35 000 light-years away in the constellation of Pegasus (The Winged Horse). It is one of the oldest globular clusters known, with an age of around 12 billion years. Very hot blue stars and cooler golden stars are seen swarming together in this image, becoming more concentrated towards the cluster’s bright center. Messier 15 is also one of the densest globular clusters known, with most of its mass concentrated at its core. Credit: NASA, ESA.

Studies of both types of clusters in nearby galaxies confirm these patterns are general, not a consequence of our Milky Way’s history and architecture, but a feature of galactic structure everywhere. Other galaxies are surrounded by clouds of GCs, and swarms of OCs circle the disks of nearby spirals. It appears that the Milky Way hosts several hundred GCs and several thousand OCs. It is now clear that not only is the distribution and morphology of star clusters divided into two distinct classes but their populations are as well. OCs are often associated with clouds of gas and dust, and are sometimes active regions of star formation. Their stellar populations are often dominated by massive bright, hot stars evolving rapidly to an early death. GCs, on the other hand, are relatively dust and gas free, and the stars there are mostly fainter and cooler, but long-lived. Any massive stars in GCs evolved into supernovae, planetary nebulae or white dwarfs long ago.

It appears that the globulars are very old. They were created during the earliest stages of the galaxy's evolution. Conditions must have been very different back then; indeed, globulars may be almost as old as the universe itself. GC stars formed during a time when the interstellar medium was predominantly hydrogen and helium, and their spectra reveal only low concentrations of heavy elements ("metals", in astrophysical jargon). What metals do appear have largely been carried up from the stellar cores by convective processes late in the stars' lives. Any planets formed around this early generation of stars would likely be gas giants, composed primarily of H and He—not the rocky Earth-type worlds we tend to associate with life.

Open Clusters, on the other hand, are relatively new objects. Many of those we can see are still in the process of formation, condensing from molecular clouds well enriched by metals from previous cycles of nucleosynthesis and star formation. These clouds have been seeded by supernovae, stellar winds and planetary nebulae with fusion products, so that subsequent generations of stars will have the heavier elements to incorporate in their own retinue of planets.

Image: Some of our galaxy's most massive, luminous stars burn 8,000 light-years away in the open cluster Trumpler 14. Credit: NASA, ESA, and J. Maíz Apellániz (Institute of Astrophysics of Andalusia, Spain); Acknowledgment: N. Smith (University of Arizona).

Older OCs may have broken up due to galactic tidal stresses, but new ones seem to be forming all the time, and there appears to be sufficient material in the galactic plane to ensure a continuous supply of new OCs for the foreseeable future. In general, GCs are extremely old and stable, but not chemically enriched enough to be suitable for life. OCs are young, several million years old, and they usually don't survive long enough for life to evolve there. Any intelligent life would probably evolve after the cluster broke up and its stars dispersed. BUT…there are exceptions.

The most important parameter that determines a star’s history is its initial mass. All stars start off as gravitationally collapsing masses of gas, glowing from the release of gravitational potential energy. Eventually, temperatures and pressures in the stars’ cores rise to the point where nuclear fusion reactions start producing light and heat. This energy counteracts gravity and the star settles down to a long period of stability, the main sequence. The terminology arises from a line of stars in the color-magnitude diagram of a star cluster. Main sequence stars stay on this line until they run out of fuel and wander off the main sequence.

All stars follow the same evolutionary pattern, but where on the main sequence they wind up, and how long they stay there, depend on their initial mass. Massive stars evolve quickly, lighter ones tend to stay on the main sequence a long time. Our Sun has been a main sequence star for about 4.6 billion years, and it will remain on the main sequence for about another 5 billion years. When it runs out of nuclear fuel it will wander off the main sequence, getting brighter and cooler as it evolves.

All stars evolve in a similar way, but the amount of time they spend in that stable main sequence state is highly dependent on their mass at birth. Studying the point on a cluster's color-magnitude diagram where stars start to "peel off" from the main sequence allows astrophysicists to determine the age of the cluster. It is not necessary to know the absolute brightness, or distance, of the stars since, by definition, all the stars in a cluster are at the same distance. The color-magnitude (or Hertzsprung-Russell) diagram is as important to astronomy as the periodic table is to chemistry. It allows us to visualize stellar evolution using a simple graphic model to interpret the data. It is one of the triumphs of 20th century science.
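To see how a turnoff-age estimate works in practice, here is a back-of-envelope sketch in Python. It uses the standard textbook scaling for main-sequence lifetime, roughly 10 Gyr times (M/M☉)^-2.5; the normalization and exponent are coarse approximations for illustration, not values taken from this article.

```python
# Back-of-envelope estimate of a cluster's age from its main-sequence
# turnoff, using the textbook scaling t_MS ~ 10 Gyr * (M/Msun)^-2.5.
# The exponent and the 10 Gyr solar normalization are rough approximations.

def ms_lifetime_gyr(mass_msun: float) -> float:
    """Approximate main-sequence lifetime in Gyr for a star of given mass."""
    return 10.0 * mass_msun ** -2.5

# If the most massive stars still on a cluster's main sequence have about
# 1.1 solar masses, the cluster must be roughly as old as their MS lifetime:
for turnoff_mass in (2.0, 1.5, 1.1, 1.0):
    print(f"turnoff at {turnoff_mass} Msun -> age ~ "
          f"{ms_lifetime_gyr(turnoff_mass):.1f} Gyr")
```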

It is this ability to determine the age of a cluster that allows us to select a set of OCs that meet the criterion of great age needed for biological evolution to take place. Although open clusters tend to lose their stars quickly through gravitational interactions with molecular clouds in the disc of the galaxy, a surprising number seem to have survived long enough for biological, and possibly technologically advanced, species to evolve. And while less massive stars, such as main sequence red dwarfs, tend to be preferentially ejected from OCs by gravitational tides, the more massive F, G, and K stars are more likely to remain.

Sky Catalog 2000.0 (1) lists 32 OCs of ages greater than 1.0 Gyr. A more up-to-date reference, the Wikipedia entry (2), lists others. No doubt, a thorough search of the literature will reveal still more. A few of these OCs are comparable in age to the globulars. They are relics of an ancient time. But many others are comparable to our Sun in age (indeed, our own star, like many others, was born in an open cluster).

Regardless of the observing technique or wavelength utilized, an OC provides the opportunity to examine a large number of stars simultaneously, stars which have been pre-selected as being of a suitable age to support life or a technically advanced civilization. We can also be assured that, as members of an OC, all the stars sampled were formed in a metal-rich environment, and that any planets formed around those stars may be rocky or otherwise Earthlike.

If a technical civilization has arisen around any of those stars, it is possible that it has explored or colonized other stars in the cluster, giving us the opportunity to eavesdrop on intra-cluster communications. And from the purely practical point of view, when acquiring scarce funding or telescope time for such a project, it will be possible to piggy-back a SETI program onto non-SETI cluster research. Quite apart from SETI, there are very good reasons to study OCs: they provide a useful laboratory for investigations into stellar evolution.

References

1) Sky Catalog 2000.0, Vol II, Sky Publishing Corp, 1985.

2) https://en.wikipedia.org/wiki/List_of_open_clusters

Suggestions for Additional Reading

1. H. Cordova, The SETI Potential of Open Star Clusters, SETIQuest, Vol I No 4, 1995

2. R. De La Fuente Marcos, C. De La Fuente Marcos, SETI in Star Clusters: A Theoretical Approach, Astrophysics and Space Science 284: 1087-1096, 2003

3. M.C. Turnbull, J.C. Tarter, Target Selection for SETI II: Tycho-2 Dwarfs, Old Open Clusters, And the Nearest 100 Stars, ApJ Supp. Series 149: 423-436, 2003

Alien Life or Chemistry? A New Approach

Working in the field has its limitations, as Alex Tolley reminds us in the essay that follows, but at least biologists have historically been on the same planet as their specimens. Today's hottest news would be the discovery of life on another world, as we saw in the brief flurries over the Viking results in 1976 or the Martian meteorite ALH84001. We rely, of course, on remote testing and will increasingly count on computer routines that can make the fine distinctions needed to choose between biotic and abiotic reactions. A new technique recently put forward by Robert Hazen and James Cleaves holds great promise. Alex gives it a thorough examination, including running tests of his own that point to the validity of the approach. One day such methods, used on Mars or an icy moon, may confirm that abiogenesis is not restricted to Earth, a finding that would have huge ramifications not just for our science but also our philosophy.

by Alex Tolley


Image: Perseverance rover on Mars – composite image.

Cast your mind back to Darwin's distant five-year voyage on HMS Beagle. He could make only limited observations, sketch and take notes, and preserve his specimen collection for the return home to England.

Fifty years ago, a field biologist might not have had much more to work with: hours from a field station or lab, with field guides and kits to preserve specimens, and no way to communicate. As for computers to make repetitive calculations, fuggedaboutit.

Fast forward to the late 20th and early 21st centuries, and fieldwork is extending out to the planets of our solar system to search for life. Like Darwin's voyage, the missions are distant and long. Unlike Darwin's, samples have not yet been returned from any planet, only from asteroids and comets. Communication is slow, more on the order of those early field experiences. But instead of humans, our robot probes are "going where no one has gone before," and humans may not follow until much later. The greater the communication lag, the more problematic the model of central command controlling the periphery. Working around this delay demands more autonomy at the periphery to make local decisions.

The 2006 Astrobiology Field Laboratory Science Steering Group report recommended that the Mars rover be a field laboratory, with more autonomy [17]. The current state of the art is the Perseverance rover taking samples in the Jezero crater, a prime site for possible biosignatures. Its biosignature instrument, SHERLOC, uses Raman spectrography and luminescence to detect and identify organic molecules [6]. While organic molecules may have been detected [19], the data had to be transmitted to Earth for interpretation, preserving the problem of lag times between choosing each sample and analyzing it.

As our technology improves, can robots operating on planetary surfaces do more effective in situ analyses, so that they can operate more quickly, like a human field scientist, in the search for extant or extinct life?

While we "know life when we see it," we still struggle to define what life is, although for terrestrial life we have a sufficient set of characteristics, except for edge cases like viruses and some ambiguous early fossil material. However, some defining characteristics do not apply to dead or fossilized organisms and their traces. Fossil life does not metabolize, reproduce, or move, and molecules that are common to life no longer exist in their original form. Consider the "fossil microbes" in the Martian meteorite ALH84001 that caused such a sensation when announced but proved ambiguous.

Historically, for fossil life, we have relied on detecting biosignatures, such as C13/C12 ratios in minerals (photosynthetic carbon fixation prefers the lighter isotope), long-lasting biomolecules like lipids, homochirality of organic compounds, and disequilibria in atmospheric gases. Biomolecules can be ambiguous, as the amino acids detected in meteorites are most likely abiotic, something the Miller-Urey experiment demonstrated many decades ago.

Ideally, we would like a detection method that is simple, robust, and whose results can be interpreted locally without requiring analysis on Earth.

A new method to identify the probable biotic nature of samples containing organic material is the subject of a new paper from a collaboration under Robert Hazen and James Cleaves. The team uses an analytical method, pyrolysis gas chromatography coupled to electron impact ionization mass spectrometry (Pyr-GC-EI-MS), to heat a sample (pyrolysis), fractionate its volatile components (gas chromatography), and determine their masses (mass spectrometry), and then analyzes the data to classify whether new samples contain organic material of biological origin. Their reported early results are very encouraging [10, 11, 12].

The elegance of Hazen et al’s work has been to apply the Pyr-GC-EI-MS technique [3, 15, 18] that is not only available in the laboratory, but is also designed for planetary rovers to meet the need for local analysis. Their innovation has been to couple this process with computationally lightweight machine learning models to classify the samples, thereby bypassing the time lags associated with distant terrestrial interpretation. A rover could relatively rapidly take samples in an area and determine whether any might have a biosignature based on a suite of different detected compounds and make decisions locally on how to proceed.

The resulting data of masses and extraction times can be reduced and then classified using a pre-trained Random Forest [4], a suite of Decision Trees (see Figure 3), each built from samples of the feature set of masses. With the currently tested samples, this provides a better than 90% probability of correct classification. The reported experiment used 134 samples, 75 labeled as abiotic and 59 as biotic or of biotic origin. The data set ranged in mass from 50 to 700, with several thousand scans over time. This data was reduced to a manageable size by trimming the mass and time ranges to 8147 values. The samples were then run against several machine learning methods, of which the Random Forest worked best.
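To make the pipeline concrete, here is a minimal sketch of the classification step in Python with scikit-learn. The authors' actual data reduction and model tuning differ, and the array contents below are random stand-ins; only the shapes (134 samples, 8147 reduced features, a 75/59 label split) come from the text.

```python
# Minimal sketch of the classification step, assuming the Pyr-GC-EI-MS data
# have already been reduced to a feature matrix X (one row per sample, one
# column per retained mass/time bin) with labels y (0 = abiotic, 1 = biotic
# or of biotic origin). The random matrix is a stand-in for the real data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((134, 8147))          # stand-in for the reduced MS data
y = np.array([0] * 75 + [1] * 59)    # 75 abiotic, 59 biotic/natural labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, forest.predict(X_test)))
```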

To provide a visualization of which mass and time values were most instrumental in classifying the data, the 20 most informative data points were extracted and overlaid on the MS data as shown in Figure 1.

Figure 1 above shows two samples of data. One is from the Allende meteorite, classified as a 4.5-billion-year-old abiotic sample. This is contrasted with one of the microbial samples. While the details of the meteorite sample were not provided, older analyses by others indicated that the surface was contaminated with terrestrial material, whilst the interior matrix included polycyclic aromatic hydrocarbons, a common material found in samples from space missions [7, 8]. The bacterial sample, as expected, shows many more compounds after pyrolysis, as the organism is composed of a large variety of organic compounds including amino acids, nucleobases, lipids, and sugars, which decompose with heating. A key point is that the discriminant features are not coincident with the most common masses in the samples, but rather lie among the rarer compounds, as indicated by their intensities. [The lower bound of the mass bins ensures that common pyrolysis products, such as low-carbon-number compounds, are excluded from the analysis and visualization. The data is normalized to the highest peak so that relative values rather than absolutes are analyzed, eliminating the effect of sample amounts.] Most of the defining compounds are in the 140-200 mass range, which would imply skeletons of roughly 12-16 atoms if composed of carbon alone.

Figure 2 shows a 2-dimensional Principal Components Analysis (PCA) using the 20 most informative features, illustrating the separation of the sample types. The expanded box encompasses all the abiotic samples.
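A sketch of the corresponding visualization step: rank features by the forest's impurity-based importances, keep the top 20, and project the samples onto two principal components, loosely mirroring what Figure 2 displays. The paper's own feature-ranking procedure may differ, and the data here are again random stand-ins.

```python
# Pick the 20 most informative features from a fitted Random Forest, then
# project samples onto the first two principal components (cf. Figure 2).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((134, 8147))        # stand-in for the reduced MS data
y = np.array([0] * 75 + [1] * 59)  # 0 = abiotic, 1 = biotic/natural

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top20 = np.argsort(forest.feature_importances_)[-20:]  # 20 most informative

coords = PCA(n_components=2).fit_transform(X[:, top20])
# coords can now be scattered, colored by y, to see whether the abiotic
# samples occupy a tight cluster as they do in the paper's Figure 2.
for label, name in ((0, "abiotic"), (1, "biotic/natural")):
    print(name, "centroid:", coords[y == label].mean(axis=0).round(3))
```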

I note that even though the biotic and natural samples were given the same classification label, these samples separate quite clearly too, indicating that the natural samples are almost distinct from the biotic samples. What is perhaps surprising is that biological materials like cedarwood (oils derived from the bark) cluster with the abiotic samples, and even the cyanobacteria seem similar in this view. Notice that the dried oak leaf, clearly a degraded living material, is remarkably similar to a cysteine (amino acid) and glucose Maillard reaction (the reaction used in the searing of foods to create flavors). A number of the natural materials classified as being of biological origin, or containing material of biological origin, also cluster closely with the abiotic samples, such as quisqueite and asphaltum. The peat sample (labeled natural) sits between the bulk of the biological and natural samples.

Why should this technique work to classify samples according to the type labels? It has been argued that living things are complex, yet composed of molecules that occupy a relatively small region of the space of possible diversity. [Work by Lee Cronin's group has looked at the way biology restricts the possible structures of organic molecules, creating complex macromolecules from few subunits. For example, the amino acid glycine is both important as a constituent of proteins, forming much of the structure of collagen, and central to several biosynthesis pathways, including the synthesis of porphyrins and thence heme in red blood corpuscles. Some macromolecules, such as cellulose, are formed entirely of D-glucose, as are most complex sugar macromolecules. Cronin calls his technique Assembly Theory [1].]

But larger molecules constructed of a small number of simpler subunits are not alone sufficient. Cellulose is a polymer of D-glucose molecules, but clearly we would not state that a sheet of wet paper was once living, or that it formed by natural processes. A minimal complexity is required. Life relies on a suite of molecules connected by metabolic pathways that exquisitely restrict the possible number of resulting molecules, however complex. Proteins, for instance, are constructed from just 20 of the vastly larger possible set of amino acids. At the heart of all life is the Krebs cycle, which some autotrophs run in reverse (the reductive direction) as part of carbon fixation to build biomass, often glucose for cellulose cell walls.

The Pyr-GC-EI-MS technique detects a wide range of organic molecules, but the machine learning algorithm uses a set of specific ones to detect the requisite complexity as well as the abiotic randomness. In other words, this is complementary to Cronin’s “Assembly Theory” of life.

I would note that the PCA uses just 20 variables to separate the abiotic and biotic/natural samples. This appears adequate for the majority of the sample set but may be fewer than the number of variables used in the Random Forest machine learning algorithm. [A single Decision Tree using my reduced data needs just 12 rules (masses and normalized frequencies), but its accuracy is far lower. The Random Forest, whose trees use different rules (masses and quantities), would be expected to use more features.]

How robust is this analysis?

The laboratory instrument generates a large amount of data for each sample: over 650 mass readings repeated across some 6000 scans over the scan time. The data was reduced for testing, in this case to 8149 values. There were 134 samples; 59 were classed as biotic or natural, and 75 as abiotic. A Random Forest (a suite of Decision Trees) proved the best method to classify the samples, resulting in a 90+% correct classification of the sample types. The PCA visualization in Figure 2 is instructive as it shows how the samples were likely classified by the Random Forest model, and which samples were likely misclassified. The PCA used just 20 of the highest-scoring variables to separate the 2 classes of samples.

Generally, the Pyr-GC-EI-MS technique is considered robust with respect to the masses extracted from different samples of the same material. The authors included replicates in the samples, which should, ideally, be classified together in the same leaf of each Decision Tree in the Random Forest. That this is the case in this experiment is hinted at by the few labels in the PCA of Figure 2 that point to two samples close together, e.g. the cysteine-glucose Maillard reaction. That replicates are very similar is important, as it indicates that the sample processing technique reliably produces the same output, and therefore single samples are producing reliable mass and time signals with low noise. [In my experiment (see Appendix A), where K-means clustering was used, in most cases the replicate pairs were collected together in the same cluster, indicating that no special data treatment was needed to keep the replicates together.]

The pyrolysis of the samples transforms many of the compounds, often producing more molecular species than the original contained. For example, cellulose composed purely of D-glucose will pyrolyze into several different compounds [18]. The assumption is that pyrolysis will preserve the differences between the biotic and abiotic samples, especially for material that has already undergone heating, such as coal. And since the pyrolysis products in the mass range of 50 to 200 need not be the same as the original compounds, the technique can be applied to any sample containing organic material.

The robustness of the machine learning approach can be assessed by the distribution of the accuracy of the individual runs of the Random Forest. This is not indicated in the article. However, the high accuracy rate reported does suggest that the technique will report this level of accuracy consistently. What is not known is whether the existing trained model would continue to classify new samples accurately. Testing that will also indicate the likely boundary conditions within which the model works, and whether retraining will be needed as the sample set is increased. This will be particularly important when assessing the nature of any confirmed extraterrestrial organic material that is materially different from that recovered from meteorites.
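One way to probe that robustness question, sketched below on the same stand-in data as before: repeat a stratified cross-validation many times with different shuffles and examine the spread of accuracies rather than a single score. This is a suggestion for such a test, not a procedure taken from the paper.

```python
# Estimate the distribution of Random Forest accuracies via repeated
# stratified cross-validation; the spread (not just the mean) indicates
# how consistently the reported accuracy would be reproduced.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((134, 8147))        # stand-in for the reduced data
y = np.array([0] * 75 + [1] * 59)  # 0 = abiotic, 1 = biotic/natural

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0), X, y, cv=cv)
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f} "
      f"(min {scores.min():.2f}, max {scores.max():.2f})")
```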

The robustness may also depend on the labeling used to train the Random Forest model. The sample set labels RNA and DNA as abiotic because they were sourced from a laboratory supply, while the lower-complexity insect chitin exoskeleton was labeled biotic. But note that the chitin sample lies within the abiotic bounding box in Figure 2, as does the DNA sample.

Detecting life from samples that are fossils, degraded material, or parts of an organism like a skeletal structure, probably requires being able to look for both complexity and material that is composed of fewer, simpler subunits. In extremis, a sample with few organic molecules even after pyrolysis will likely not be complex enough to be identified as biotic (e.g. the meteorite samples), while a large range of organic molecules may be too varied and indicate abiotic production (e.g. Maillard reactions caused by heating). There will be intermediate cases, such as the chitinous exoskeleton of an insect that has relatively low molecular complexity but which the label defines as biotic.

What is important here is that while it might be instructive to know what the feature molecules are, and their likely pre-heated composition, the method does not rely on anything more than the mass and peak appearance time of the signal to classify the material.

Why does the Random Forest algorithm work well, exceeding the performance of a single Decision Tree or a 2-layer Perceptron [a component of neural networks used in binary classification tasks]? A single Decision Tree requires that the set of features have a strong common overlap across all samples in a class. The greater the overlap, the fewer rules are needed. However, a single Decision Tree model is brittle in the face of noise. The Random Forest overcomes this by using different subsets of the features to build each tree in the forest. With noisy data, this builds robustness, as the predicted classification is based on a majority vote. (See Appendix A for a brief discussion of this.)
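The brittleness argument can be illustrated on purely synthetic data: inject label noise and compare a single Decision Tree, a Random Forest (a majority vote over trees built on feature subsets), and a Perceptron. A small sketch follows; everything here is synthetic, and the relative scores, not the absolute numbers, are the point.

```python
# Compare one Decision Tree, a Random Forest, and a Perceptron on a noisy
# synthetic problem (15% of labels flipped) to illustrate why majority
# voting over many trees is more robust than a single tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score

Xn, yn = make_classification(n_samples=300, n_features=50, n_informative=8,
                             flip_y=0.15, random_state=0)  # 15% label noise
for name, model in [("decision tree", DecisionTreeClassifier(random_state=0)),
                    ("random forest", RandomForestClassifier(
                        n_estimators=300, random_state=0)),
                    ("perceptron", Perceptron(random_state=0))]:
    print(name, cross_val_score(model, Xn, yn, cv=5).mean().round(2))
```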

Is this technique agnostic?

Now let me address the important issue of whether this approach is agnostic to different biologies, as this is the crux of whether the experimental results will detect not just life, but extraterrestrial life. Will this approach address the possibly very different biologies of life evolved from a different biogenesis?

Astrobiology, a subject with no confirmed examples, is currently theoretical. There is almost an industry in devising tests for alien life. Perhaps the most famous example is the use of the disequilibria of atmospheric gases, proposed by James Lovelock. The idea is that life, especially autotrophs like plants on Earth, will create an imbalance in reactive gases such as oxygen and methane, holding them away from their chemical equilibrium. This idea has since been bracketed with constraints and additional gases, but it remains a principal approach for exoplanets, where only atmospheric gas spectra can be measured.

As life is hypothesized to require a complex set of molecules, yet far fewer than a random sampling of all possible molecules (or, as Cronin has suggested, to reuse molecules to reduce the complexity of building large macromolecules), it is possible that there could be fossil life, terrestrial or extraterrestrial, with the same apparent complexity but largely non-overlapping molecules. The Random Forest could therefore build Decision Trees that select different sets of molecules yet make the same biotic classification, suggesting that this is an agnostic method. However, this has yet to be tested, as there are no extraterrestrial biotic samples to test it on. It may require such samples, if found and characterized as biotic, to be added to a new training set, should they not be classified as biotic by the current model.

As this experiment assumes that life is carbon-based, truly exotic life based on other key elements such as silicon would be unlikely, though not impossible, to detect, unless volatile non-organic materials in a sample could also be classified correctly.

The authors explain what "agnostic" means in their experiment:

Our Proposed Biosignature is Agnostic. An important finding of this study is that abiotic, living, and taphonomic suites of organic molecules display well-defined clusters in their high-dimensional space, as illustrated in Fig. 2. At the same time, large “volumes” of this attribute space are unpopulated by either abiotic suites or terrestrial life. This topology suggests the possibility that an alien biochemistry might be recognized by forming its own attribute cluster in a different region of Fig. 2—a cluster that reflects the essential role in selection for function in biotic systems, albeit with potentially very different suites of functional molecules. Abiotic systems tend to cluster in a very narrow region of this phase space, which could in principle allow for easy identification of anomalous signals that are dissimilar to abiotic geochemical systems or known terrestrial life.

What they are stating is that their approach will detect the signs of life in both extant organisms and the resulting decay of their remains when fossilized, such as shales and fossil fuels like coal and oil. As the example PCA of Figure 2 shows, the abiotic samples are tightly clustered in a small space compared to the far greater space of the biotic and once-biotic samples. The authors’ Figure 1 shows that their chosen method results in fewer different molecules found in the Allende meteorite compared to a microbe. I note that the dried oak leaf that is also within the abiotic cluster of the PCA visualization is possibly there because the bulk of the material is cellulose. Cellulose is made of chains of polymerized D-glucose, and while the pyrolysis of cellulose is a physical process that creates a wider assortment of organic compounds [18], this still limits the possible pyrolysis products.

This analysis is complementary to Cronin’s Assembly Theory which theorizes a reduced molecular space of life compared to the randomness and greater complexity of purely chemical and physical processes. This is because life constrains its biochemistry to enzyme-mediated reaction pathways. Assembly Theory [1] and other complexity theories of life [15] would be expected to reduce the molecular space compared to the possible arrangements of all the atoms in an organism.

The authors’ method is probably detecting the greater space of molecules from the required complexity of life compared to the simpler samples and reactions that were labeled as abiotic.

For any extraterrestrial "carbon units" that are theorized to follow similar organizing principles, this method may well detect extraterrestrial life, whether extant or fossilized, from a unique abiogenesis. However, I would be cautious about this claim simply because no biotic extraterrestrial samples were used (we have none), only presumed abiotic samples such as the organic material inside meteorites, which should not be contaminated with terrestrial life.

The authors suggest that an alien biology using very different biological molecules might form its own discrete cluster and therefore be detectable. In principle this is true, but I am not sure that the Random Forest model would detect the attributes of this cluster without training examples to define the rules needed. Any such samples might simply expose brittleness in the model and either cause an error or be classified as a false positive for either a biotic or abiotic sample. Ideally, as Asimov once observed, the phrase most associated with interesting discoveries "is not 'Eureka' but 'That's funny…'", and an anomalous classification might be just such a moment. This might be particularly noticeable if the technique indicates that a sample is abiotic while direct observation by microscope clearly shows wriggling microbes.

In summary, the method has yet to be tested against new, unknown samples to confirm whether it is both robust and agnostic for other carbon-based life.

The advantage of this technique for remote probes

While the instrument data would likely be sent to Earth regardless of local processing and any subsequent rover actions, the trained Random Forest model is computationally very lightweight and easy to run on the data. Inspection of the various Decision Trees in the Random Forest provides an explanation of which features best classify the samples. As the Random Forest is updated with larger sample sets, it is easy to update the model used to analyze samples in the lab or on a remote robotic instrument, in contrast to Artificial Neural Network (ANN) architectures, which are computationally intensive. Should a sample look like it could be alien life but produce an anomalous result ("That's funny…"), the data can be analyzed on Earth, assigned a classification, and the Random Forest model retrained with the new data, either on Earth with the updated model then uploaded, or locally on the probe.

Let me stress again that the instrumentation needed is already available for life-detection missions on robotic probes. The most recent is the Mars Organic Molecule Analyzer (MOMA) [9], which is to be one of the suite of instruments on the Rosalind Franklin rover, part of the delayed ExoMars mission now planned for a 2028 launch. MOMA will use both the Pyr-GC-EI-MS sample processing approach and a UV laser on organic material extracted from 2-meter subsurface drill cores to characterize the material. I would speculate that it might make sense to calibrate the sample set against the MOMA instruments, to determine whether the approach is as robust with that hardware as with the lab equipment used for this study. The sample set can be increased and run on the MOMA instruments and finalized well before the launch date.

[If the Morningstar Mission to Venus does detect organic material in the temperate Venusian clouds, perhaps in 2025, this type of analysis using instruments flown on a subsequent balloon mission might offer the fastest way to determine if that material is from a life form before any later sample return.]

While this is an exciting, innovative approach to analyzing organic molecules and classifying them as biotic or abiotic, it is not the only approach and should be considered complementary. For example, terrestrial fossils may be completely mineralized, with their form indicating their origin. A low-complexity fragment of an insect's exoskeleton would have a form indicative of biotic origin. The dried oak leaf in the experiment that clusters with the abiotic samples would leave an impression in sediment indicative of life, just as we see occasionally in coal seams. Impressions left by soft-bodied creatures that have completely decayed would not be detectable by this method even though their shape may be obviously that of an organism. [Although note that shape alone was insufficient for determining the nature of the "fossils" in the Martian meteorite ALH84001.]

Earlier, I mentioned that the cellulose of paper represents an example with low complexity compared to an organism. However, if a robot probe detected a fragment of paper buried in a Martian sediment, we would have little hesitation in identifying it as a technosignature. Similarly, a stone structure on Mars might have no organic material in its composition but clearly would be identified as an artifact built by intelligent beings.

Lastly, isotopic composition of elements can be indicative of origin when compared to the planetary background isotopic ratios. If we detected methane (CH4) with isotope ratios indicative of production by subsurface methanogens, that would be an important discovery, one that would be independent of this experimental approach.

Despite my caveats and cautions, local life detection, rather like the attempts with the 1976 Viking landers, may be particularly important now that the Mars Sample Return mission costs are ballooning and may result in a cancellation, stymying the return to Earth of the samples Perseverance is collecting [16]. One of the major benefits of training the Apollo astronauts to understand geology and identify important rock samples was the local decision-making by the astronauts over which rock samples to collect, rather than taking random samples and hoping the selection was informative. A mission to an icy moon would benefit from such local life detection if multiple attempts need to be made in fairly rapid succession, without communication delays with Earth for analysis and decision-making, and where no sample return to Earth is likely. This innovative technique appears to be an important contribution to the search for extraterrestrial life in our system, and possibly even beyond if our probes capture samples from interstellar objects.

The paper is Cleaves, J. et al. and Hazen, R., "A robust, agnostic molecular biosignature based on machine learning," PNAS 120 (41) (September 25, 2023) e2307149120. Abstract.

———————————————————————

Appendix A. My experiment with the supplied data. [12]

Method

To test some of the feedback from the authors, I ran some simple machine-learning experiments on the data. Rather than reduce the data to the number of variables in the paper, I used a simpler data reduction, collapsing the scan-time dimension so that only a single value per mass remained. I normalized each sample so that its largest mass value was set to 100, and all normalized floating point numbers were reduced to integers; all resulting values of less than 1 were therefore set to 0. I used the classification labels as given. I also shuffled the class labels to test that the information in the data was lost by this operation. I used the Weka ML software package for running Decision Trees, Random Forests, and other ML methods [20].
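For readers who prefer Python to Weka, here is roughly what that data reduction looks like in NumPy. The array shape and function names are my own illustration of the steps just described, not the author's actual scripts.

```python
# NumPy rendering of the data reduction described above. `raw` is a
# hypothetical per-sample array of shape (n_scans, n_masses); collapsing
# the scan axis leaves one value per mass bin.
import numpy as np

def reduce_sample(raw: np.ndarray) -> np.ndarray:
    collapsed = raw.sum(axis=0)                    # collapse scan dimension
    scaled = collapsed / collapsed.max() * 100.0   # largest peak -> 100
    return scaled.astype(int)                      # values < 1 become 0

def shuffle_labels(y: np.ndarray, seed: int = 0) -> np.ndarray:
    """Sanity check: shuffled labels should drive accuracy to chance."""
    rng = np.random.default_rng(seed)
    return rng.permutation(y)
```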

Results and Discussion

From the example I ran [Figure 3], it is clear that the presence of a molecule (or molecules) of mass 280 is sufficient to classify 14 of the 59 biological samples with no other rules needed; if that rule fails, a rule keyed to the presence of a molecule of about half the mass of the first adds a further 8 samples correctly classified as biological. However, it takes a further 6 rules to classify another 22 biological samples, and 7 rules to select 48 (plus 1 false positive) of the 75 abiotic samples. The rules mostly relied on larger molecules to determine the classifications, because they had the most discriminatory power, consistent with the number of larger molecules among the 20 features used in the PCA visualization. Of the 12 rules in my experiment, all but 3 used masses of 100 or greater, with 3 rules at 200 or greater. It should be noted that many rules simply needed the presence or absence (less than 1% of the peak frequency) of a molecule. The two largest biotic and abiotic leaves each required 7 rules, about half of which required some non-zero value: the biotic leaf with 22 samples had just 3 rules requiring peaks to be present, while the abiotic leaf with 49 classified samples had all 7 rules requiring no peak, or values below a threshold. (A sketch of how such rules can be inspected follows the figure caption below.)

Figure 3. The model for a Decision Tree output from the reduced, collapsed data set. It shows the rule tree of different mass-normalized frequencies classifying abiotic [A] and biotic/natural [B] samples as leaves. There are 134 samples; for training, all the samples were used, 75 classed as abiotic and 59 as biotic/natural. [The few misclassified samples were excluded for simplicity and clarity.] As all samples were used, there was no out-of-sample testing of the model.
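For anyone reproducing this in Python rather than Weka, a tree's nested rules can be dumped as text, which is the equivalent of reading Figure 3. A sketch on stand-in data (random values with the same shape and label split as the real set):

```python
# Fit a single Decision Tree and print its nested "feature <= value" rules,
# analogous to the Weka tree shown in Figure 3. Stand-in data only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((134, 8147))        # stand-in for the reduced data
y = np.array([0] * 75 + [1] * 59)  # 0 = abiotic, 1 = biotic/natural

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(export_text(tree, max_depth=3))  # top levels of the rule tree
```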

The best classifier was the Random Forest, as the authors also found. It far exceeded a single Decision Tree, and even exceeded a 2-layer Perceptron. The Random Forest managed a little more than 80% correct classification, which fell to chance with the shuffled data. While the results using the more heavily reduced data were less accurate than those of the paper, this is expected given the data reduction method.

To test whether the data had sufficient information to separate the two classes simply by clustering, I ran K-means clustering [14] to determine how the data separated (a minimal sketch follows the numbered results below).

1. The two clusters were each composed of about 60% of one class. So while the separation was poor, there was some separation using all the data. Shuffling the labels destroyed the information in the samples, as it did in the Decision Tree and Random Forest tests.

2. The replicate pairs almost invariably stayed in the same cluster together, confirming the robustness of the data.

3. The natural samples, i.e. those with a biogenic origin, like coal, tended mostly to cluster with the abiogenic samples, rather than the biotic ones.
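A minimal version of that clustering check, again on stand-in arrays rather than the real data (the author's runs used Weka): fit two K-means clusters, then look at how the two label classes distribute across them. Replicate pairs can be checked by confirming both members of each pair received the same cluster index.

```python
# Two-cluster K-means on the reduced data, then a count of how the
# abiotic and biotic/natural labels fall across the two clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((134, 8147))        # stand-in for the reduced data
y = np.array([0] * 75 + [1] * 59)  # 0 = abiotic, 1 = biotic/natural

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for c in (0, 1):
    members = y[clusters == c]
    print(f"cluster {c}: {(members == 0).sum()} abiotic, "
          f"{(members == 1).sum()} biotic/natural")
```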

I would point out that the PCA in Figure 2 was interpreted to mean that the abiotic samples cluster tightly together. An alternative interpretation, however, is that the abiotic and natural samples separate from the biotic ones if a dividing line is drawn diagonally, splitting the biotic samples from all the rest.

One labeling question I have concerns placing the commercially supplied DNA and RNA samples in the abiotic class. If we detected either as [degraded] samples on another world, we would almost certainly claim that we had detected life, once the possibility of contamination was ruled out. Switching these labels made very little difference to my Random Forest classification overall, though it did switch more samples to the biotic class than the two relabeled samples alone would account for. It did make a difference for the simpler Decision Tree: it increased the correct classifications (92 to 97 of 134), mostly by reducing misclassifications between the abiotic and biotic classes (23 to 16). The cost of this improvement was 2 extra nodes and 1 extra leaf in the Decision Tree.

The poor results of the 2-layer Perceptron indicate that the nested rules used in the Decision Trees are needed to classify the data. Perceptrons are 2-layer artificial neural networks (ANNs) that have an input and an output layer, but no hidden layers. Perceptrons are known to fail the exclusive-OR (XOR) test, although the example Decision Tree in Figure 3 does not appear to require any XOR-like combinations of variables. A multilayer neural net with at least one hidden layer would be needed to match the results of the Random Forest.

In conclusion, my results show that even with a dimensionally reduced data set, the data contains enough information overall to allow a weak separation of the two classification labels, and that the Random Forest is the best classifier of the many available in the Weka ML software package.

References

1. Assembly Theory (AT) – A New Approach to Detecting Extraterrestrial Life Unrecognizable by Present Technologies www.centauri-dreams.org/2023/05/16/assembly-theory-at-a-new-approach-to-detecting-extraterrestrial-life-unrecognizable-by-present-technologies/

2. Venus Life Finder: Scooping Big Science
www.centauri-dreams.org/2022/06/03/venus-life-finder-scooping-big-science/

3. Pyrolysis – Gas Chromatography – Mass Spectrometry en.wikipedia.org/wiki/Pyrolysis%E2%80%93gas_chromatography%E2%80%93mass_spectrometry

4. Random Forest en.wikipedia.org/wiki/Random_forest accessed 10/05/2023

5. PCA “Principal Component Analysis” en.wikipedia.org/wiki/Principal_component_analysis accessed 10/05/2023

6. SHERLOC "Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals" en.wikipedia.org/wiki/Scanning_Habitable_Environments_with_Raman_and_Luminescence_for_Organics_and_Chemicals accessed 10/06/2023

7. Han, J et al, Organic Analysis on the Pueblito de Allende Meteorite Nature 222, 364–365 (1969). doi.org/10.1038/222364a0

8. Zenobi, R et al, Spatially Resolved Organic Analysis of the Allende Meteorite. Science, 24 Nov 1989 Vol 246, Issue 4933 pp. 1026-1029 doi.org/10.1126/science.246.4933.1026

9. Goesmann, F et al The Mars Organic Molecule Analyzer (MOMA) Instrument: Characterization of Organic Material in Martian Sediments. Astrobiology. 2017 Jul 1; 17(6-7): 655–685.
Published online 2017 Jul 1. doi: 10.1089/ast.2016.1551

10. Cleaves, J. et al, Hazen, R., A robust, agnostic molecular biosignature based on machine learning, PNAS September 25, 2023, 120 (41) e2307149120
doi.org/10.1073/pnas.2307149120

11. __ Supporting information. www.pnas.org/action/downloadSupplement?doi=10.1073%2Fpnas.2307149120&file=pnas.2307149120.sapp.pdf

12. __ Mass spectrometry data: osf.io/ubgwt

13. Gold, T. The Deep Hot Biosphere: The Myth of Fossil Fuels. Springer Science and Business Media, 2001.

14. K-means clustering en.wikipedia.org/wiki/K-means_clustering

15. Chou, L et al Planetary Mass Spectrometry for Agnostic Life Detection in the Solar System Front. Astron. Space Sci., 07 October 2021 Sec. Astrobiology Volume 8 – 2021
doi.org/10.3389/fspas.2021.755100

16. “Nasa’s hunt for signs of life on Mars divides experts as mission costs rocket“ Web access 11/13/2023 www.theguardian.com/science/2023/nov/12/experts-split-over-nasa-mission-to-mars-costs-rocket

17. The Astrobiology Field Laboratory. September 26, 2006. Final report of the MEPAG Astrobiology Field Laboratory Science Steering Group (AFL-SSG). Web: mepag.jpl.nasa.gov/reports/AFL_SSG_WHITE_PAPER_v3.doc

18. Wang, Q., Song, H., Pan, S. et al. Initial pyrolysis mechanism and product formation of cellulose: An Experimental and Density functional theory(DFT) study. Sci Rep 10, 3626 (2020). https://doi.org/10.1038/s41598-020-60095-2

19. Sharma, S., Roppel, R.D., Murphy, A.E. et al. Diverse organic-mineral associations in Jezero crater, Mars. Nature 619, 724–732 (2023). https://doi.org/10.1038/s41586-023-06143-z

20. Weka 3: Machine Learning Software in Java https://www.cs.waikato.ac.nz/ml/weka/

Galactic ‘Nature Preserves’ over Deep Time

Speculating about the diffusion of intelligent species through the galaxy, as we’ve been doing these past few posts, is always jarring. I go back to the concept of ‘deep time,’ which is forced on us when we confront years in their billions. I can’t speak for anyone else, but for me thinking on this level is closer to mathematics than philosophy. I can accept a number like 13.4 × 10⁹ years (the estimate for the age of globular cluster NGC 6397 and a pointer to the Milky Way’s age) without truly comprehending how vast it is. As biological beings, a century pushes us to the limit. What exactly is an aeon?

NGC 6397 and other globular clusters are relevant because these ancient stellar metropolises are the oldest large-scale populations in the Milky Way. But I’m reminded that even talking about the Milky Way can peg me as insufferably parochial. David Kipping takes me entirely out of this comparatively ‘short-term’ mindset by pushing the limits of chronological speculation into a future so remote that elementary particles themselves have begun to break down. Not only that – the Columbia University astrophysicist finds a way for human intelligence to witness this.

You absolutely have to see how he does this in Outlasting the Universe, a presentation on his Cool Worlds YouTube channel. Now Cool Worlds is a regular stop here because Kipping is a natural at rendering high-level science into thoughtful explanations that even the mathematically challenged like me can understand. Outlasting the Universe begins with Kipping the narrator saying “We are in what you would call the future…the deep future” and takes human evolution through the end of its biological era and into a computer-borne existence in which a consciousness can long outlive a galaxy.

Image: Astrophysicist, author and indeed philosopher David Kipping. Credit: Columbia University.

Along the way we remember (and visit in simulation) Freeman Dyson, who once speculated that to become (almost) immortal, a culture could slow down the perceived rate of time. “Like Zeno’s arrow,” says Kipping, “we keep dialing down the speed.” The visuals here are cannily chosen, the script crisp and elegant, imbued with the ‘sense of wonder’ that brought so many of us to science fiction. Outlasting the Universe is indeed science fiction of the ‘hard SF’ variety as Kipping draws out the consequences of deep time and human consciousness in ways that make raw physics ravishing. I envy this man’s students.

With scenarios like this to play with, where do we stand with the ‘zoo hypothesis?’ It must, after all, reckon with years by the billions and the spread of intelligence. Science fiction writer James Cambias responded to my Life Elsewhere? Relaxing the Copernican Principle post with a tight analysis of the notion that we may be under observation from a civilization whose principles forbid contact with species they study. This is of course Star Trek’s Prime Directive exemplified (although the lineage of the hypothesis dates back decades), and it brings up Jim’s work because he has been so persistent a critic of the idea of shielding a population from ETI contact.

Jim's doubts about the zoo hypothesis go back to his first novel. A Darkling Sea posits a Europa-like exoplanet being studied by a star-faring species called the Sholen, who employ a hands-off policy toward local intelligence even as they demand that human scientists on the world's sea bottom do the same. Not long after publication of the novel (Tor, 2014), he told John Scalzi that he saw Prime Directives and such as "…a mix of outrageous arrogance and equally overblown self-loathing, a toxic brew masked by pure and noble rhetoric." The arrogance comes from ignoring the desires of the species under study and denying them a choice in the matter.

In a current blog post called The Zoo Hypothesis: Objections, Jim lays this out in rousing fashion:

…we deduce that you can’t hide a star system which contains a civilization capable of large-scale interstellar operations, which the Zookeepers are by definition. They’re going to be emitting heat, EM radiation, laser light, all the spoor of a Kardashev Type I or higher civilization. And the farther away they are, the more they’re going to be emitting because they need to be bigger and more energy-rich in order to have greater reach.

This gives us one important lesson: if the goal of a Zoo is to keep the civilizations inside from even knowing of the existence of other civilizations, the whole thing is impossible. You can’t have a Zoo without Zookeepers, and the inhabitants of the Zoo will detect them.

Jim's points are well-taken, and he extends the visibility issue by noting that we need to address time, which must be deep indeed, for a civilization maintaining all the apparatus of a protected area around a given star must do so on time frames that are practically geological in length. Here we can argue a bit, for a 'zoo' set up for reasons we don't understand in the first place might well come into existence only when the species being studied has reached the capability of detecting its observers.

I referenced Amri Wandel (Hebrew University of Jerusalem) on this the other day. Wandel argues that our own industrial lifespan is currently on the order of a few centuries, and who knows what level of technological sophistication a ‘zoo-keeping’ observer culture might want us to reach before it decides it can initiate contact? That would drop the geological timeframe down to a more manageable span, although the detectability problem still remains. So does the issue of interaction with other star-faring species who might conceivably need to be warned off entering the zoo. Cambias again:

If Captain Kirk or whoever shows up on your planet and says “I’m from another planet. Let’s talk and maybe exchange genetic material — or not, if you want me to leave just say so,” that’s an infinitely more reasonable and moral act than for Captain Kirk to sneak around watching you without revealing his own existence. The first is an interaction between equals, the second is the attitude of a scientist watching bacteria. Is that really a moral thing to do? Why does having cooler toys than someone else give you the right to treat them like bacteria?

This is lively stuff, and speculation of this order is why many people begin reading and writing science fiction in the first place. A hard SF writer, a 'world builder,' will make sure to think through the implications of every action attributed not only to the characters but to the non-human intelligences they interact with. One thing that had never occurred to me was the issue of visibility when translated to the broader galaxy, because a zoo needs to be clearly marked. Here's Jim's view:

If you’re going to exclude other civilizations from a particular region of the Galaxy, you have to let them know. Shooting relativistic projectiles or giant laser beams at incoming starships is a very ham-fisted way of communicating “keep out!” — and it runs the risk of convincing the grabby civilization that you’re shooting at to start shooting back. And if they’re grabby and control a lot of star systems, that’s going to be a lot of shooting.

Jim’s points are telling, and the comments on my recent Centauri Dreams posts also reflect readers’ issues with the zoo hypothesis. My partiality to it takes these issues into account. If the zoo hypothesis is the best of the solutions to the Fermi question, then the likelihood that other intelligent species are in our neighborhood is vanishingly small. Which lets me circle back to the paper by Ian Crawford and Dirk Schulze-Makuch that set off this entire discussion. It asked, you’ll recall, whether the zoo hypothesis wasn’t the last standing alternative to the idea that technological civilizations are, at the least, rare. It’s not a good alternative, but there it is.

In other words, I’d like the zoo hypothesis to have some traction, because it’s the only way I can find to imagine a galaxy in which intelligent civilizations are common.

Consider the thinking of Crawford and Schulze-Makuch on other hypotheses. Interstellar flight might be impossible for reasons of distance and energy, but this seems a non-starter given that we know of ways within known physics to send a payload to another star, perhaps even in this century. A slow exploration front moving at Voyager speeds could do the trick in a fraction of the time available given the age of the Milky Way. The lack of SETI detections likewise points to technologies that are physically feasible (various kinds of technosignatures) but simply not observed.

Is the answer that civilizations don’t live very long, and the chances of any two existing at the same brief time in the galaxy are remote? The nagging issue here is that we would have to assume that all civilizations are temporally limited. It takes only one to find a way through whatever ‘great filter’ is out there and survive into a star-faring maturity to get the galaxy effectively visited and perhaps colonized by now. Crawford and Schulze-Makuch reject models that result in volumes of the galactic disk being unvisited during the four billion years of Earth’s existence, considering them valid mathematically but implausible as solutions to the larger Fermi puzzle.

Many of the hypotheses advanced to explain the Great Silence go even further into the unknowable. What, for example, do we make of attempts to parse out an alien psychology, which inevitably winds up reflecting, wittingly or not, our own human instincts and passions? Monkish cultures that choose not to expand for philosophical reasons will remain unknowable to us, for example, as will societies that self-destruct before they achieve interstellar flight. We can still draw a few conclusions, though, as Crawford and Schulze-Makuch do, all of them pointing, at the least, to intelligence being rare.

Although we know nothing of alien sociology, it seems inevitable that the propensity for self-destruction, interstellar colonization and so on must be governed by probability distributions of some kind. The greater the number of ETIs that have existed over the history of the Galaxy, the more populated will be the non-self-destructed and/or pro-colonization wings of these distributions, and it is these ETIs that we do not observe. On the other hand, if the numbers of ETIs have always been small, these distributions will have been sparsely populated and the non-observation of ETIs in their expansionist wings follows naturally.

Image: Are ancient ruins the only thing we may expect to find if we reach other star systems? Are civilizations always going to destroy themselves? The imposing remains of Angkor Wat. Credit: @viajerosaladeriva.

Likewise, we still face the problem that, as Stapledon long ago noted, different cultures will choose different priorities. Why assume that in a galaxy perhaps stuffed with aliens adopting Trappist-like vows of silence there will not be a few societies that do want to broadcast to the universe, a METI-prone minority perhaps, but observable in theory? We have no paradox in the Fermi question if we assume that aliens are rare, but if they are as common as early science fiction implied, the paradox is only reinforced.

So Crawford and Schulze-Makuch have boiled this down to the zoo hypothesis or nothing, with the strong implication that technological life must indeed be rare. I rather like my “one to ten” answer to the question of how many technological species are in the galaxy, because I think it squares with their conclusions. And while we can currently only speculate on the reasons for this, it’s clear that we’re on a path to draw conclusions about the prevalence of abiogenesis, probably within this century. How often technologies emerge once unicellular life covers a planet is a question that may have to wait for the detection of a technosignature. And as is all too clear, that detection may never come.

The paper is Crawford & Schulze-Makuch, “Is the apparent absence of extraterrestrial technological civilizations down to the zoo hypothesis or nothing?” Published online in Nature Astronomy 28 December 2023 (abstract). James Cambias’ fine A Darkling Sea (Tor, 2014) is only the first of his novels, the most recent of which is The Scarab Mission (Baen, 2023), part of his ‘billion worlds’ series. Modesty almost, but not quite, forbids me from mentioning my essay “Ancient Ruins,” which ran in Aeon a few years back.

Can the ‘Zoo Hypothesis’ Be Saved?

If we were to find life other than Earth’s somewhere else in the Solar System, the aftershock would be substantial. After all, a so-called ‘second genesis’ would confirm the common assumption that life forms often, and in environments that range widely. The implications for exoplanets are obvious, as would be the conclusion that the Milky Way contains billions of living worlds. The caveat, of course, is that we would have to be able to rule out the transfer of life between planets, which could make Mars, say, controversial. But find living organisms on Titan and the case is definitively made.

Ian Crawford and Dirk Schulze-Makuch point out in their new paper on the Fermi question and the ‘zoo hypothesis’ that this issue of abiogenesis could be settled relatively soon as our planetary probes gain in sophistication. We could settle it within decades if we found definitive biosignatures in an exoplanet atmosphere, but here my skepticism kicks in. My guess is that once we have something like the Habitable Worlds Observatory in place (and a note from Dominic Benford informs me that NASA has just put together teams to guide the development of HWO, the flagship mission after the Nancy Grace Roman Space Telescope), the results will be immediately controversial.

In fact, I can see a veritable firestorm of debate on the question of whether a given biosignature can be considered definitive. Whole journals a few decades from now will be filled with essays pushing abiotic ways to produce any signature we can think of, and early reports that support abiogenesis around other stars will be countered with long and not always collegial analysis. This is just science at work (and human nature); recall how quickly the Viking results on Mars came into question.

So I think in the near term we’re more likely to gain insights on abiogenesis by probing our own planetary system. Life may turn up on the moon of an ice giant, or around a gas giant like Saturn on an obviously interesting moon like Enceladus, strengthening our hunch that abiogenesis is common. In which case, where do we stand on the development of intelligence or, indeed, consciousness? What kind of constraints can we put on how often technology is likely to be the result of highly evolved life? Absent a game-changing SETI detection, we’re still left with the Fermi question. We have billions of years of cosmic history to play with and a galaxy that over time could be colonized.

Image: JWST’s spectacular image of M51 (NGC 5194), some 27 million light-years away in the constellation Canes Venatici. Taken with the telescope’s Near-InfraRed Camera (NIRCam), the image is so lovely that I’ve been looking for an excuse to run it. This seems a good place, for we’re asking whether a universe that can produce so many potential homes for life actually gives rise to intelligence and technologies on a galaxy-wide scale. Here the dark red features trace warm dust, while colors of red, orange, and yellow flag ionized gas. How long would it take for life to emerge in such an environment, and would it ever become space-faring? Credit: ESA/Webb, NASA & CSA, A. Adamo (Stockholm University) and the FEAST JWST team.

Crawford and Schulze-Makuch ask a blunt question in the title of their paper in Nature Astronomy: “Is the apparent absence of extraterrestrial technological civilizations down to the zoo hypothesis or nothing?” The zoo hypothesis posits that we are being studied by beings that for reasons of their own avoid contact. David Brin referred in his classic 1983 paper “The Great Silence” (citation below) to this as one variation of a quarantine, with the Solar System something like a nature preserve whose inhabitants have no idea that they are under observation.

Quarantines can come in different flavors, of course. Brin notes the possibility that observers might wait for our species to reach a level of maturity sufficient to join what could be a galactic ‘club’ or network. Or perhaps the notion is simply to let planets early in their intellectual development lie fallow as their species mature. Wilder notions include the idea that we could be quarantined because we represent a danger to the existing order, though it’s hard to imagine a scenario in which this occurs.

But the Crawford / Schulze-Makuch paper is not exactly a defense of the zoo hypothesis. Rather, it asks whether it is the only remaining alternative to the idea that the galaxy is free of other civilizations. The paper quickly notes the glaring issue with the hypothesis, and it’s one anticipated by Olaf Stapledon in Star Maker. While any species with the ability to cross interstellar distances might remain temporarily hidden, wouldn’t there be larger trends that mitigate the effectiveness of their strategy? Can you hide one or more civilizations that have expanded over millions of years to essentially fill the galaxy? At issue is the so-called ‘monocultural fallacy’:

…to explain the Fermi paradox in a Galaxy where ETIs are common, all these different, independently evolved civilizations would need to agree on the same rules for the zoo. Moreover, to account for the apparent non-interference with Earth’s biosphere over its history, these rules may have had to remain in place, and to have been adhered to, ever since the first appearance of colonizing ETI in the Galaxy, which might be billions of years if ETIs are common. Indeed, Stapledon (ref. 29, p.168) anticipated this problem when he noted, from the point of view of a future fictional observer, that “different kinds of races were apt to have different policies for the galaxy”.

I always return to Stapledon with pleasure. I dug out my copy of Star Maker to cite more from the book. Here the narrator surveys the growth and philosophies of civilizations in their multitudes during his strange astral journey:

Though war was by now unthinkable, the sort of strife which we know between individuals or associations within the same state was common. There was, for instance, a constant struggle between the planetary systems that were chiefly interested in the building of Utopia, those that were most concerned to make contact with other galaxies, and those whose main preoccupation was spiritual. Besides these great parties, there were groups of planetary systems which were prone to put the well-being of individual world-systems above the advancement of galactic enterprise. They cared more for the drama of personal intercourse and the fulfillment of the personal capacity of worlds and systems than for organization or exploration or spiritual purification. Though their presence was often exasperating to the enthusiasts, it was salutary, for it was a guarantee against extravagance and against tyranny.

That’s a benign kind of strife, but it has an impact. The matter becomes acute when we consider interacting civilizations in light of the differential galactic rotation of stars, as Brin pointed out decades ago. The closest species to us at any given time would vary as different stars come into proximity. That seems to imply a level of cultural uniformity that is all but galaxy-wide if the zoo hypothesis is to work. But Crawford and Schulze-Makuch are on this particular case, noting that a single early civilization (in galactic history) might be considered a ‘pre-emptive civilization’ (this is Ronald Bracewell’s original idea), enforcing the rules of the road for subsequent ETIs. In such a way we might still have a galaxy filled with technological societies.
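
How quickly does the stellar neighborhood turn over? Here’s another minimal sketch, again with round numbers of my own rather than anything from Brin’s paper, assuming disk stars drift past one another at something like 20 km/s:

    # Rough estimate of how fast the solar neighborhood reshuffles.
    # Assumed figures for illustration: disk stars move relative to one
    # another at ~20 km/s, and we ask how long one takes to drift
    # across an arbitrary 20 light-year 'neighborhood'.

    RELATIVE_SPEED_KM_S = 20       # typical velocity dispersion of disk stars
    LY_IN_KM = 9.46e12             # kilometers in one light year
    NEIGHBORHOOD_RADIUS_LY = 20    # arbitrary 'who are our neighbors' radius

    km_per_year = RELATIVE_SPEED_KM_S * 3600 * 24 * 365.25
    years_per_ly = LY_IN_KM / km_per_year
    turnover_years = NEIGHBORHOOD_RADIUS_LY * years_per_ly

    print(f"~{years_per_ly:,.0f} years to drift one light year")
    print(f"~{turnover_years / 1e6:.1f} million years to cross the neighborhood")

Call it a few hundred thousand years, an eyeblink against galactic history, which is why any long-lived quarantine would have to be honored by a constantly changing cast of neighbors.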

An interesting digression here involves the age of likely civilizations. We know that the galaxy dates back to the earliest era of the universe. European Southern Observatory work on the beryllium content of two stars in the globular cluster NGC 6397 pegs their age at 13,400 ± 800 million years. Extraterrestrial civilizations have had time to arise in their multitudes, exacerbating the ‘monocultural’ issue raised above. But the authors point out that despite its age, the galaxy’s habitability would have been influenced by such issues as “a possibly active galactic nucleus, supernovae and close stellar encounters.” Conceivably, the galaxy at large evolved in habitability so that it is only within the last few billion years that galaxy-spanning civilizations could become possible.

Does that help explain the Great Silence? Not really. Several billion years allows ample time for civilizations to develop and spread. As the paper notes, we have only the example of our Earth, in which it took something like two billion years to develop an atmosphere rich in the oxygen that allowed the development of complex creatures. You don’t have to juggle the numbers much to realize that different stellar systems and their exoplanets are going to evolve at their own pace, depending on the growth of their unique biology and physical factors like plate tectonics. There is plenty of room even in a galaxy where life only emerged within the last billion years for civilizations to appear that are millions of years ahead of us technologically.

Image: The globular cluster NGC 6397. A glorious sight that reminds us of the immensity in both space and time that our own galaxy comprehends. Credit: ESO.

Back to the zoo hypothesis. Here’s one gambit to save it that the paper considers. A policy of non-interference would only need to be enforced for a few thousand years – perhaps only a few hundred – if extraterrestrials were interested primarily in technological societies. This is Amri Wandel’s notion in an interesting paper titled “The Fermi paradox revisited: technosignatures and the contact era” (citation below). Wandel (Hebrew University of Jerusalem) eases our concern over the monocultural issue by compressing the time needed for concealment. Crawford and Schulze-Makuch cite Wandel, but I don’t sense any great enthusiasm for pressing his solution as likely.

The reasons for doubt multiply:

Even if they can hide evidence of their technology (space probes, communications traffic and so forth), hiding the large number of inhabited planets in the background implied by such a scenario would probably prove challenging (unless they are able to bring an astonishingly high level of technical sophistication to the task). In any case, advanced technological civilizations may find it difficult to hide the thermodynamic consequences of waste heat production, which is indeed the basis of some current technosignature searches. Moreover, any spacefaring civilization is likely to generate a great deal of space debris, and the greater the number of ETIs that have existed in the history of the Galaxy the greater the quantity of debris that will drift into the Solar System, where a determined search may discover evidence for it.

Why then highlight the zoo hypothesis when it has all these factors working against it? Because in the view of the authors, other solutions to the Fermi question are even worse. I’m running out of time this morning, but in the next post I want to dig into some of these other answers to see whether any of them can still be salvaged. For the more dubious our solutions to the ‘where are they’ question, the more likely it seems that there are no civilizations nearby. We’ll continue to push against that likelihood with technosignature and biosignature searches that could change everything.

The paper is Crawford & Schulze-Makuch, “Is the apparent absence of extraterrestrial technological civilizations down to the zoo hypothesis or nothing?” Published online in Nature Astronomy 28 December 2023 (abstract). David Brin’s essential paper is “The Great Silence – the Controversy Concerning Extraterrestrial Intelligent Life,” Quarterly Journal of the Royal Astronomical Society Vol. 24, No. 3 (1983), pp. 283-309 (abstract/full text). Amri Wandel’s paper is “The Fermi Paradox revisited: Technosignatures and the Contact Era,” Astrophysical Journal 941 (2022), 184 (preprint).

Life Elsewhere? Relaxing the Copernican Principle

Most people I know are enthusiastic about the idea that other intelligent races exist in the galaxy. Contact is assumed to be an inevitable and probably profoundly good thing, with the exchange of knowledge possibly leading to serious advances in our own culture. This can lead to a weighting of the discourse in favor of our not being alone. The ever popular Copernican principle swings in: We can’t be unique, can we? And thus every search that comes up empty is seen as an incentive to try still other searches.

I’m going to leave the METI controversy out of this, as it’s not my intent to question how we should handle actual contact with ETI. I want to step back further from the question. What should we do if we find no trace of extraterrestrials after not just decades but centuries? I have no particular favorite in this race. To me, a universe teeming with life is fascinating, but a universe in which we are alone is equally provocative. Louis Friedman’s new book Alone But Not Lonely (University of Arizona Press, 2023) gets into these questions, and I’ll have more to say about it soon.

I’ve thought for years that we’re likely to find the galaxy stuffed with living worlds, while the number of technological civilizations is tiny, somewhere between 1 and 10. The numbers are completely arbitrary and, frankly, a way I spur (outraged) discussion when I give talks on these matters. I’m struck by how many people simply demand a galaxy that is alive with intelligence. They want to hear ‘between 10,000 and a million civilizations,’ or something of that order. More power to them, but it’s striking that such a lively collection of technological races would not have become apparent by now. I realize that the search space is far vaster than our efforts so far, but still…

Image: The gorgeous M81, 12 million light years away in Ursa Major, and seen here in a composite Spitzer/Hubble/Galaxy Evolution Explorer view. Blue is ultraviolet light captured by the Galaxy Evolution Explorer; yellowish white is visible light seen by Hubble; and red is infrared light detected by Spitzer. The blue areas show the hottest, youngest stars, while the reddish-pink denotes lanes of dust that line the spiral arms. The orange center is made up of older stars. Should we assume there is life here? Intelligence? Credit: NASA/JPL.

So when Ian Crawford (Birkbeck, University of London) was kind enough to send me a copy of his most recent paper, written with Dirk Schulze-Makuch (Technische Universität Berlin), I was glad to see the focus on an answer to the Fermi question that resonates with me, the so-called ‘zoo hypothesis.’ A variety of proposed resolutions to the ‘where are they’ question exist, but this one is still my favorite, a way we can save all those teeming alien civilizations, and a sound reason for their non-appearance.

As far as I know, Olaf Stapledon first suggested that intelligent races might keep hands off civilizations while they observed them, in his ever compelling novel Star Maker (1937). But it appears that credit for the actual term ‘zoo hypothesis’ belongs to John Ball, in a 1973 paper in Icarus. From Ball’s abstract:

Extraterrestrial intelligent life may be almost ubiquitous. The apparent failure of such life to interact with us may be understood in terms of the hypothesis that they have set us aside as part of a wilderness area or zoo.

That’s comforting for those who want a galaxy stuffed with intelligence. I’ll get into this paper in the next post, but for now I’ll simply note that what is usually styled the Fermi ‘paradox’ is, as Crawford and Schulze-Makuch remind us, no paradox at all if intelligent races beyond our own do not exist. We have a paradox because we are uneasy with the idea that we are somehow special in being here. Yet a universe devoid of technologies other than ours will look pretty much like what we see.

The angst this provokes comes back to our comfort with the ‘Copernican principle,’ which is frequently cited, especially when we use it to validate what we want to find. Just as the Sun is not the center of the Solar System, so the Solar System is not the center of the galaxy, etc. We are, in other words, nothing special, which makes it more likely that there are other civilizations out there because we are here. If we can build radio telescopes and explore space, so can they, because by virtue of our very mediocrity, we represent what the universe doubtless continues to offer up.

But let’s consider some implications, because the Copernican principle doesn’t always work. It was Hermann Bondi, for example, who came up with the notion that we could apply the principle to the cosmos at large, noting that the universe was not only homogeneous but isotropic, and going on to add that it would show the exact same traits for any observer not just at any place but at any time. The collapse of the Steady State theory put an end to that speculation as we pondered an evolving universe where time’s vantage counted critically in terms of what we would see.

Our position in time matters. So, for that matter, does our position in the galaxy.

But physics seems to work no matter where we look, and the assumption of widespread physical principles is essential for us to do astronomy. So as generalizations go, this Copernican notion isn’t bad, and we’d better hang on to it. Kepler figured out that planetary orbits weren’t circular, and as Caleb Scharf points out in his book The Copernicus Complex: Our Cosmic Significance in a Universe of Planets and Probabilities (Farrar, Straus and Giroux, 2014), this was a real break from the immutable universe of Aristotle. So too was Newton’s realization that the Sun itself orbits around a variable point close to its surface and well offset from its core.

So even the Sun isn’t the center of the Solar System in any absolute sense. As we move from Ptolemy to Copernicus, from Tycho Brahe to Kepler, we see a continuing exploration that pushes humanity out of any special position and any fixed notions that are the result of our preconceptions. I think the problem comes when we make this movement a hard principle, when we say that no ‘special places’ can exist. We can’t assume from a facile Copernican model that each time we apply the principle of mediocrity, we’ve solved a mystery about things we haven’t yet proven.

Consider: We’ve learned how unusual our own Solar System appears to be; indeed, how unusual so many stellar systems are as they deviate hugely from any ‘model’ of system development that existed before we started actually finding exoplanets. This is why the first ‘hot Jupiters’ were such a surprise, completely unexpected to most astronomers.

Is the Sun really just another average star lost in the teeming billions that accompany it in its 236 million year orbit of the galaxy? There are many G-class stars, to be sure, but if we were orbiting a more average star, we would have a red dwarf in the sky. These account for 75 percent, and probably more, of the stars in the Milky Way. We’re not average on that score, not when G-class stars amount to a paltry 7 percent of the total. Better to say that we’re only average, or mediocre, up to a point. If we want to take this to its logical limit, we can back our view out to the scale of the cosmos. Says Scharf:

The fact that we are so manifestly located in a specific place in the universe — around a star, in an outer region of a galaxy, not isolated in the intergalactic void, and at just this time in cosmic history — is simply inconsistent with ‘perfect’ mediocrity.

And what about life itself? Let me quote Scharf again (italics mine). Here he works in the anthropic idea that our observations of the universe are not truly random but are demanded by the fact that the universe can produce life in the first place:

…a Copernican worldview at best suggests that the universe should be teeming with life like that on Earth, and at worst doesn’t really tell us one way or the other. The alternative — anthropic arguments — require only a single instance of life in the universe, which would be us. At best, some fine-tuning studies suggest that the universe could be marginally suitable for heavy-element-based life forms, rather than being especially fertile. Neither view reveals much about the actual abundance of life to be expected in our universe, or much about our own more parochial significance or insignificance.

So when we speculate about the Fermi question, we need to be frank about our assumptions and, indeed, our personal inclinations. If we relax our Copernican orthodoxy, we have to admit that because we are here does not demand that they are there. Let’s just keep accumulating data to begin answering these questions.

And as we’ll discuss in the next post, Crawford and Schulze-Makuch point out that we’re already entering the era when meaningful data about these questions can be gathered. One key issue is abiogenesis. How likely is life to emerge even under the best of conditions? We may have some hard answers within decades, and they may come from discoveries in our own system or in biosignatures from a distant exoplanet.

If abiogenesis turns out to be common (and I would bet good money that it is), we still have no knowledge of how often it evolves into technological societies. An Encyclopedia Galactica could still exist. Could John Ball be right that other civilizations may be ubiquitous, but hidden from us because we have been sequestered into ‘nature preserves’ or the like? Are we an example of Star Trek’s ‘Prime Directive’ at work? There are reasons to think that the zoo hypothesis, out of all the Fermi ‘solutions’ that have been suggested, may be the most likely answer to the ‘where are they’ question other than the stark view that the galaxy is devoid of other technological societies. We’ll examine Crawford and Schulze-Makuch’s view on this next time.

Caleb Scharf’s The Copernicus Complex: Our Cosmic Significance in a Universe of Planets and Probabilities is a superb read, highly recommended. The Ball paper is “The Zoo Hypothesis,” Icarus Volume 19, Issue 3 (July 1973), pp. 347-349 (abstract). The Crawford & Schulze-Makuch paper we’ll look at next time is “Is the apparent absence of extraterrestrial technological civilizations down to the zoo hypothesis or nothing?” Nature Astronomy 28 December 2023 (abstract).

Talking to Starglider

When we’ve discussed interstellar ‘interlopers’ like ‘Oumuamua and 2I/Borisov, the science fiction-minded among us have now and then noted Arthur Clarke’s Rendezvous with Rama (Gollancz, 1973). Although we’ve yet to figure out definitively what ‘Oumuamua is (2I/Borisov is definitely a comet), the Clarke reference is an imaginative nod to the possibility that one day an alien craft might enter our Solar System during a gravitational assist maneuver and be flung outward on whatever its mission was (in Rama’s case, out in the direction of the Large Magellanic Cloud).

Since we’ll never see ‘Oumuamua again, we await with great anticipation the work of the Legacy Survey of Space and Time (LSST), which will be run at the Vera C. Rubin Observatory (first light in 2025). Estimates vary widely, but the consensus seems to be that with a telescope capable of imaging the entire visible southern sky every few nights, the LSST should produce more than a few interstellar objects, perhaps ten or more, every year. We probably won’t find a Rama, but who knows?

Meanwhile, I’m reminded of another Clarke novel that rarely gets the attention in this regard that Rendezvous with Rama does. This is 1979’s The Fountains of Paradise (BCA/Gollancz). Although known primarily for its exploration of space elevators (and its reality-distorting geography), the novel includes as a separate theme another entry into the Solar System, this time by a craft that, unlike Rama, is willing to take notice of us. Starglider is its name, and it represents a civilization that is cataloging planetary systems through probes scattered across a host of nearby stars.

Starglider has a 500 kilometer antenna to communicate with its home star (humans name this Starholme), and in the words of a report on its activities within the novel, it more or less ‘charges its batteries’ each time it makes a close stellar pass. Having explored the Alpha Centauri trio, its next destination after the Sun is Tau Ceti. The game plan is that each stellar encounter will gather data and open communications with any civilization found there as a precursor to long-term radio contact and, presumably, entry into some kind of interstellar information network.

This is rather fascinating. For Starglider is smart enough to have studied human languages and is able to converse, after a fashion. From the novel:

It was obvious from its first messages that Starglider understood the meaning of several thousand basic English and Chinese words, which it had deduced from an analysis of television, radio, and especially broadcast video-text services. But what it had picked up during its approach was a very unrepresentative sample from the whole spectrum of human culture; it contained little advanced science, still less advanced mathematics, and only a random selection of literature, music, and the visual arts.

Like any self-taught genius, therefore, Starglider had huge gaps in its education. On the principle that it was better to give too much than too little, as soon as contact was established, Starglider was presented with the Oxford English Dictionary, the Great Chinese Dictionary (Mandarin edition), and the Encyclopedia Terrae. Their digital transmission required little more than fifty minutes, and it was notable that immediately thereafter Starglider was silent for almost four hours — its longest period off the air. When it resumed contact, its vocabulary was immensely enlarged, and more than ninety-nine percent of the time it could pass the Turing test with ease — that is, there was no way of telling from the messages received that Starglider was a machine, and not a highly intelligent human.

Clarke slyly notes the cultural differences between species as opposed to the commonality of, say, mathematics, saying that Starglider had little comprehension of lines like these from Keats:

Charm’d magic casements, opening on the foam
Of perilous seas, in faery lands forlorn…

And it drew a blank on Shakespeare as well:

Shall I compare thee to a summer’s day?
Thou art more lovely and more temperate…

Well, these are aliens, after all. We have enough trouble with cross-cultural references here on Earth. Humans broadcast thousands of hours of music and video drama to Starglider to help it out, but here, of course, we run into the messaging problem. Just how much do we want to reveal of ourselves to a culture about which we have all too little information other than that it is markedly more advanced than our own? You’ll find that aspect of the METI debate explored as a core part of the Starglider subplot.

Some have panned Starglider’s appearance in the novel because it seems intrusive to the plot (although I suppose I could argue that autonomous probes cataloging stellar systems almost have to be intrusive to get their job done). But in the midst of the Starglider passages, we learn that the chatty aliens, now freely talking to humans via radio, catalog the civilizations they find on a scale based on their technological accomplishments. Is this Clarke channeling Nikolai Kardashev?

Whatever the case, Clarke as always takes the long view, and the long view by its very nature always pushes out into mystery. Consider the scale used by Starglider:

    I. Stone Tools

    II. Metals, fire

    III. Writing, handicrafts, ships

    IV. Steam power, basic science

    V. Atomic energy, space travel

    VI. “…the ability to convert matter completely into energy, and to transmute all elements on an industrial scale.”

On this scale of one through six we can place our species at Level V, as Starglider sees us. But are there further levels? Clarke is wise to imply their existence without exploring them further, as this lets the reader’s imagination do the job. He’s expert at this:

“And is there a Category Seven?” Starglider was immediately asked. The reply was a brief “Affirmative.” When pressed for details, the probe explained: “I am not allowed to describe the technology of a higher-grade culture to a lower one.” There the matter remained, right up to the moment of the final message, despite all the leading questions designed by the most ingenious legal brains of Earth.

When the University of Chicago’s Department of Philosophy transmits the whole of Thomas Aquinas’ Summa Theologica to Starglider, all hell breaks loose. I turn you to the novel for more.

Image: Hubble took this image on Oct. 12, 2019, when comet 2I/Borisov was about 418 million kilometers from Earth. The image shows dust concentrated around the nucleus, but the nucleus itself was too small to be seen by Hubble. We are on the cusp of a windfall of ‘interstellar interloper’ data as the LSST comes online within a few years. Will we ever find a Rama, or a Starglider, amidst our observations? Credit: NASA, ESA and D. Jewitt (UCLA).

As I mentioned, some critics fault The Fountains of Paradise for Starglider’s very presence, noting that there are essentially two plots at work. In fact there are three, playing out on different timescales, one of them dating back several thousand years. Recall that the voyage of Starglider itself spans millennia, the mission having begun some 60,000 years before the events at the heart of the novel – the construction of the space elevator – take place. This kind of chronological juggling allows Clarke to inspire deeper reflection on humanity’s place in the universe, and I find it enormously effective.

Wonders fairly pop out of Clarke’s early novels and much of his later work. On that score, I likewise refuse to fault him severely for not achieving complex characterization. A case can be made (James Gunn makes it strongly) that science fiction of Clarke’s ilk needs to put the wonder first. Rich, strange and complicated characters confronting rich, strange and wondrous events may lead to one richness too many. If we, the readers, are to absorb the mystery, we need to see how a relatively straightforward character reacts. It’s that contrast that Clarke aims to mine.

That’s only one way of doing science fiction, but much science fiction of the 1950s, which I consider the genre’s true golden age (with a nod to the late 1930s, as one must), often operated with precisely this conceit. And that’s okay, because when writers of greater literary style began to emerge – writers like Alfred Bester, say, with his staggering The Stars My Destination (1956) – we were able to see complex characters confronting the deeply strange in ways that simply added depth to the experience. Look at Robert Silverberg in the 1960s as an exemplar of an almost magical insight into what makes the individual human tick. Once you’ve begun on that journey, the field is altered forever, but that doesn’t negate its rich past.

In fact, none of this subsequent growth nullifies Clarke’s accomplishment in the realm of big ideas. Consider him a writer of a kind of SF that flourished and fed a mighty stream into what has now become a river of wildly untamed ideas and insights. And sometimes only Clarke will do. Thus when I read, for the umpteenth time, The City and the Stars, I’m again dazzled by the very title, and the first few pages take me back into a realm where there are suns not quite our own casting a numinous glow over landscapes we learn to navigate through characters who learn with us. Like Stapledon’s, like Asimov’s, Clarke’s is a voice we’ll celebrate deep into the future.