Centauri Dreams

Imagining and Planning Interstellar Exploration

The “Habitability” of Worlds (Part II)

If we ever thought it would be easy to tell whether a planet was ‘habitable’ or not, Stephen Dole quickly put the idea to rest when he considered all the factors involved in his study Habitable Planets for Man (1964). In this second part of his essay on habitability, Dave Moore returns to Dole’s work and weighs these factors in light of our present knowledge. What I particularly appreciate about this essay in addition to Dave’s numerous insights is the fact that he has brought Dole’s work back into focus. The original Habitable Planets for Man was a key factor in firing my interest in writing about interstellar issues. And Centauri Dreams reader Mark Olson has just let me know that Dole appears as a major character in a novel by Harry Turtledove called Three Miles Down. It’s now in my reading stack.

by Dave Moore

In Part I of this essay, I listed the requirements for human habitability in Stephen Dole’s report, Habitable Planets for Man. Now I’ll go over what we’ve subsequently learned and see how this has changed our perspective.

Dole, in calculating the likelihood of a star having a habitable planet, produced his own ‘Drake equation.’

Image: Dole’s ‘Drake Equation.’

Dole assigns the following probabilities to his equation, PHP = NS Pp Pi PD PM Pe PB PR PA PL:

Pp = 1.0, Pi = 0.81, PM = 0.19, Pe = 0.94, PR = 0.9, PL = 1.0, PB = 0.95 for a star taken at random, 1.0 if there is no interference with the other star in a binary system. He calculates that for stars around solar mass there is a 5.4% chance of having a habitable planet.
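As a quick illustration of how these factors combine, the sketch below simply multiplies the probabilities listed above together with PD = 0.63, the value Dole used for the chance of a planet sitting in the habitable zone (quoted later in this essay). The product comes to roughly 8%; the remaining terms, the stellar-mass weighting NS and the age factor PA, bring this down to his quoted 5.4%. This is only a back-of-envelope combination of his published numbers, not a reconstruction of his full calculation.

```python
# Illustrative only: multiply the per-factor probabilities Dole lists for a
# roughly solar-mass star. The stellar-mass weighting (NS) and the age factor
# (PA) are not included here, which is why the product sits above his 5.4%.
factors = {
    "Pp": 1.0,   # star has planets
    "Pi": 0.81,  # suitable inclination
    "PD": 0.63,  # a planet lies in the habitable zone
    "PM": 0.19,  # planet mass in the Earth-like range
    "Pe": 0.94,  # orbital eccentricity low enough
    "PB": 0.95,  # binary companion does not interfere
    "PR": 0.9,   # suitable rotation rate
    "PL": 1.0,   # life arises
}

product = 1.0
for probability in factors.values():
    product *= probability

print(f"Product of the listed factors: {product:.3f}")  # ~0.078
```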

I’ll only summarize his calculations as this is not the primary thrust of this essay. Some of his estimates such as Pp = 1.0, the number of stars with planets, have held up well. Others need adjusting, but by far the biggest factors that determine the likelihood of a planet being habitable for humans are those he didn’t consider in depth.

Since Dole’s report, we’ve learned a lot more about the carbonate-silicate cycle and atmospheric circulation. The carbonate-silicate cycle provides a stronger negative feedback loop over a wider range of insolation than was thought at the time of his report. Atmospheric and oceanic heat transport have also been shown to work more efficiently. This leads to a more positive assessment of the range of habitability. Planets with high axial tilts and eccentricities, which Dole had excluded, are now considered potentially habitable; and more importantly, there’s the possibility that tidally-locked planets around M-dwarf stars may be habitable. Since M-dwarf stars are the most common in the galaxy, this makes a big difference to the number of potentially habitable planets. NS, the mass range of stars, is now opened up. Pi, the range of inclination, is probably 1.0, and PD, the probability that there is a planet in the habitable zone, which he gave as 0.63 and is still a good estimate, is now extended to M dwarfs. And given that tidally locked planets are no longer excluded, PR, the rate of rotation, is not a limiting factor.

On PM, Dole’s assumptions for the size of a habitable Earth-like world have held up well. His calculations on atmospheric retention and escape conclude that planets between 0.4 Earth mass and 2.35 Earth mass could be Earth-like. Planets below 0.4 Earth mass would lose their atmospheres. Planets above 2.35 Earth mass would retain their primordial Hydrogen and Helium atmospheres and become what we now call Hycean planets or Super-Earths.

This gives a range of surface gravities, assuming a composition similar to Earth’s, of between 0.68 and 1.5 G, which would mean from a gravitational perspective most of the range is within what humanity could handle. Dole puts the upper limit at 1.25 G based on mobility measurements made in centrifuges from that time. I would agree with him even though there are a lot of people walking around today with one and a half times their ideal weight. The limiting factor for high G is heart failure at an early age, a condition extremely tall people here on Earth suffer from. If you are a six-foot person on a 1.5 G world, your heart is pumping blood equivalent to that of a nine-foot person. In this case, people of short stature have a distinct advantage. A five-foot person would have the blood pressure equivalent of being seven foot six on a 1.5 G world and six foot three on a 1.25 G world.
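As a rough check on those endpoints, the short sketch below converts 0.4 and 2.35 Earth masses into surface gravities. It assumes an empirical rocky-planet mass-radius scaling of R ∝ M^0.27 in Earth units (my assumption for illustration, not anything taken from Dole), which lands close to the 0.68-1.5 G range quoted above.

```python
# Rough check on the quoted gravity range, assuming (my assumption, not
# Dole's) an empirical rocky-planet mass-radius relation R ~ M**0.27 in
# Earth units. Surface gravity then scales as g = M / R**2.
def surface_gravity(mass_earths: float, beta: float = 0.27) -> float:
    """Surface gravity in Earth g for a rocky planet of the given mass."""
    radius_earths = mass_earths ** beta
    return mass_earths / radius_earths ** 2

for m in (0.4, 1.0, 2.35):
    print(f"{m:5.2f} Earth masses -> {surface_gravity(m):.2f} g")
# ~0.66 g, 1.00 g and ~1.48 g: close to the 0.68-1.5 G range in the text.
```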

However, when it comes to the frequency of Earth-sized worlds in the habitable zone, Dole’s guess at PM = 0.19 is probably too high even when we now include tidally-locked planets around red-dwarf stars. He, like the rest of us until recently, had no clue that sub-Neptunes and super-Earths would be the most common sizes of planet in the habitable zone of a roughly Sol-mass star.

From our observations, Dole’s guess on orbital eccentricity, Pe, looks like it’s in the ballpark, again due to the inclusion of red-dwarf stars with their tidally circularized orbits. With a lot of these factors, though, slight changes in probability do not make a big difference in the frequency of habitable planets. The big differences come from those he didn’t consider.

Dole noted that water coverage on a planet could determine its habitability. He did not go over this in any detail, however, mainly I suspect because he had no information to go on. He didn’t include a term for it in his calculations. But, we do know from density determinations of transiting Earth-sized planets that there’s a significant possibility that a large percentage of them may be excluded due to being covered by deep oceans. This would mean, even if they had breathable atmospheres, they would not meet Dole’s criteria for habitability.

While Dole went carefully over the range of breathable atmospheres humans could tolerate, he essentially assigned a probability of 1.0 to the formation of this atmosphere once life appears on the planet, PL, and sufficient time has passed, PA, to which he arbitrarily assigns a period of 3 billion years. He made no consideration of how likely it would be for this process to go off the rails.

Yet, if you consider the range of possible atmospheric compositions and pressures on Earth-like planets, those that meet the requirements of human habitability are narrow. This is the one factor that is most likely to winnow the field, with the possible exception of average water content.

When considering what percentage of Earth-like planets could have a breathable atmosphere (Oxygen between 100 and 400 millibars, Nitrogen less than 2.3 bar, CO2 less than 10 millibars, and no poisonous gasses), we are helped by a natural connection among these parameters. Oxygen destroys most poisonous gasses. The Carbonate-Silicate cycle will draw down CO2 to low levels. With Nitrogen we note that Venus has 3 bars of Nitrogen. Earth has a similar stock, but most of it is either dissolved in the oceans or mineralized as nitrates. Mars still has a 2.6% by volume trace of its primordial Nitrogen atmosphere. This points to a certain consistency for terrestrial planets with regard to their Nitrogen stock; however, Oxygen to Nitrogen ratios do vary from star to star. Getting the level of Oxygen within breathable parameters is more problematic, though. It’s a reactive gas that disappears with time. I can see two possible pathways that can lead to a breathable atmosphere, one abiotic and one biotic.

On the abiotic front, there’s a robust mechanism available for generating Oxygen. If the planet is warm enough to have significant quantities of water vapor in the upper atmosphere or has a steam atmosphere, then photolysis and subsequent Hydrogen escape will result in the build-up of Oxygen.

Planets less massive than the Earth-like range lose their atmospheres. Planets more massive retain their primordial Hydrogen, which means any Oxygen resulting from photolysis will recombine to form water. Intermediate-sized planets, however, can build up Oxygen via Hydrogen escape.

How much it builds up depends on the balance of production and removal. The amount produced depends on stratospheric water vapor and UV levels. The rate of removal is determined by three main processes: Oxygen escape, which is dependent on planetary mass, magnetic field strength and the strength of plasma wind from its primary; chemical reaction with reducing gasses, which is proportional to the level of volcanic emissions; and the oxidation of exposed regolith due to volcanism and weathering, the first being proportional to the level of volcanism and the second being proportional to the planet’s temperature.

Abiotic Oxygen atmospheres are probably transitory in nature over geological time periods, but I do see sufficient Oxygen being generated at various stages in an Earth-like planet’s history. The first is from the time when a planet’s red-dwarf primary is sliding down its Hayashi track towards its position on the main sequence. Due to the star’s greater luminosity at this time, an Earth-like planet destined for the habitable zone will spend 100 million to a billion years with a steam atmosphere. Models of this process indicate it could lose up to several Earth oceans of water through photolysis and Hydrogen loss. The loss of an Earth ocean translates into roughly 300 bar of Oxygen, most of which, as with Venus, will finish up oxidizing the crust. If, however, the various factors balance out, so that when the planet’s steam atmosphere condenses as the star arrives at its main sequence position the water fraction is sufficient to provide both oceans and continents, and Oxygen production and removal have balanced out to produce a breathable but non-toxic level of Oxygen, then we should get a habitable planet, albeit one with a highly oxidizing surface chemistry like Mars.

If this all sounds highly unlikely, you are probably right, but there are a lot of red dwarf stars in our galaxy.

Image: Artist’s impression of the ultracool dwarf star TRAPPIST-1 from the surface of one of its planets. We’re beginning to learn whether the inner worlds here have atmospheres, but will we find that any of the seven are habitable? Credit: ESO.

Oxygen generation through photolysis occurs anytime an Earth-like planet has a high level of water loss. Mars is thought to have lost an ocean of water corresponding to 1.4% of Earth’s ocean early in its history, which translates into a total partial pressure of 4.2 bar of Oxygen (under 1 G). This Oxygen generation would have occurred over a long period, so the partial pressure at any given time was probably low; but you’ll notice that the mineralogy of Mars from around 4 billion years ago is highly oxidizing whereas Earth’s surface didn’t become oxidizing until 2.2 billion years ago.
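The conversion behind these oxygen figures is simple bookkeeping, and the sketch below runs it with round numbers: an Earth ocean of about 1.4 × 10^21 kg of water, Oxygen carrying 16/18 of that mass once the Hydrogen escapes, and the gas spread over an Earth-sized surface at 1 G. These constants are my own assumptions for illustration; they give on the order of 240 bar per ocean and ~3.4 bar for the Mars case, the same ballpark as the roughly 300 bar and 4.2 bar figures used in the text.

```python
# Back-of-envelope: equivalent O2 surface pressure from photolysis of a given
# mass of water, assuming the hydrogen escapes and the oxygen stays behind.
# Round-number assumptions (mine, for illustration only).
EARTH_OCEAN_KG = 1.4e21          # approximate mass of Earth's ocean
EARTH_AREA_M2 = 5.1e14           # Earth's surface area
G = 9.81                         # surface gravity, m/s^2
O2_MASS_FRACTION = 16.0 / 18.0   # oxygen's share of water by mass

def o2_pressure_bar(water_mass_kg: float) -> float:
    """Equivalent O2 surface pressure (bar) left behind after hydrogen escape."""
    o2_mass = water_mass_kg * O2_MASS_FRACTION
    return o2_mass * G / EARTH_AREA_M2 / 1e5  # Pa -> bar

print(f"Full Earth ocean lost: ~{o2_pressure_bar(EARTH_OCEAN_KG):.0f} bar of O2")
print(f"Mars's 1.4% of an ocean: ~{o2_pressure_bar(0.014 * EARTH_OCEAN_KG):.1f} bar of O2")
# Gives ~240 bar and ~3.4 bar with these round numbers; the essay's 300 and
# 4.2 bar figures correspond to a somewhat larger per-ocean equivalent.
```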

An Earth-like planet undergoing a runaway greenhouse, as Venus did some two billion years ago, would also experience a build-up of Oxygen.

If the presence of life in the galaxy is sparse, then this mechanism may result in more planets having Oxygen in their atmospheres than those that get it through biotic means, so Oxygen lines in the spectra of a planet’s atmosphere would not be a good indication that it harbors life.

From accounts of the history of life on Earth, we are familiar with how the biotic process leads to a breathable atmosphere. This has implications, however. To frame this, I’ll use a model in which planets become habitable at the rate of one per million stars starting nine billion years ago. (The figure I selected is arbitrary. You are welcome to adjust it and see what sort of results you get.) Given that star formation in our galaxy is about one star per year (star formation rates have varied over time, but an average of one per year will suffice for this model), this results in a total of 9,000 planets that will be habitable to humans at some point in their lifetime. There may well be many more life-bearing planets than this, but this model is only interested in the ones that become habitable to humans.

If we assume these planets have a similar evolutionary track to Earth, then the youngest 5% of these will be at the prebiotic stage. Until about 2.2 billion years ago Earth was dominated by anaerobic life, so the next 20% will have anaerobic atmospheres full of toxic gasses. Hydrogen Sulfide in particular is lethal, killing at 1000 ppm. Intrepid explorers will have to live in sealed habitats with airlocks and go around on the surface in spacesuits. Does this meet your definition of habitable?

About 2.2 billion years ago on Earth, photosynthetic aerobes got the upper hand in Earth’s chemistry and the surface became oxidized under an atmosphere of 1-2% oxygen. If their timeline is similar to Earth’s, then 20% of these planets would fit this condition.

These planets would be a far more pleasant place to explore. Toxic gasses would be removed by the Oxygen. You could probably go around with just an oxygen concentrator on your back feeding a tube to your nose. Habitats wouldn’t need airlocks; double doors would do. How would you classify these planets?

Then 500 million years ago Earth became fully habitable when the Oxygen concentration crossed 15% and the air became breathable. This period represents 5% of the sample. However, there’s a side effect to this. Oxygen is not very soluble in water and O2 concentrations fall off rapidly with distance. This is why the macroscopic lifeforms from the Pre-Cambrian age (>500 mya) were either flat leaf-like shapes or sponges, both of which give short diffusion distances throughout the organism. Once the oxygen concentration rose, however, lifeforms could develop thickness, and with thickness, they could develop organs such as hearts and circulatory systems, which could then circulate an oxygenated fluid throughout their bodies. A breathable atmosphere allows for the development of complex macroscopic life.
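A toy version of this bookkeeping is easy to write down. The sketch below assumes, as the model above does, that habitable-class planets appear at a steady rate over nine billion years and that each follows Earth's timeline, so the fraction currently in a given stage is simply that stage's duration divided by nine billion years. The stage durations are my rough readings of the Earth history described above, included purely for illustration.

```python
# Toy model: habitable-class planets appear at a constant rate over 9 Gyr
# (one per million stars, ~1 star/yr formed -> ~9,000 planets in total) and
# each follows an Earth-like timeline. Stage durations are rough readings of
# Earth's history as used in the essay, not measured values.
TOTAL_PLANETS = 9e9 / 1e6   # 9 Gyr of star formation at 1/yr, one per million stars
WINDOW_GYR = 9.0

stages_gyr = {
    "prebiotic": 0.45,                   # roughly the first half-billion years
    "anaerobic, toxic atmosphere": 1.8,  # until ~2.2 Gya on Earth's clock
    "oxidized, 1-2% O2": 1.7,            # ~2.2 Gya to ~0.5 Gya
    "breathable (>15% O2)": 0.5,         # the last ~500 Myr
}

for stage, duration in stages_gyr.items():
    fraction = duration / WINDOW_GYR
    print(f"{stage:30s} {fraction:5.1%}  (~{fraction * TOTAL_PLANETS:.0f} planets)")
# Roughly 5%, 20%, 19% and 6%: the percentages quoted in the essay.
```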

And, over time, complex macroscopic life gives rise to the second side effect of breathable Oxygen levels – sapience. This has often been considered a rare possibility, a fortuitous combination of circumstance, and in the Drake equation it is assigned a low fractional value, but the idea that intelligent life is rare and unique derives from our historical and religious concept that mankind is something unique and apart from the animal kingdom. However, studies show a steady increase in encephalization over time and its widespread occurrence in different phyla and classes: octopi in the mollusks, parrots and corvids in the birds, and dolphins, elephants and apes in the mammals.

Varying levels of communication signaling have been found in numerous species. Just recently, a troop of Chimpanzees has been found to have a 390-word vocabulary constructed by combining grunts and chirps in various sequences. It therefore seems that our ability with language is merely a development of existing trends rather than something that came out of nowhere. And language is the abstract representation of an object or action, so the manipulation of language leads to abstract reasoning.

Encephalization is a tradeoff between the energy consumption of neurons and the benefits they produce in reproductive fitness. Increasing the number of neurons in an organism is easy. A simple mutation in the precursor cells allowing them to divide one more time will do this; however, organizing those extra neurons into something useful enough to justify their extra metabolic cost is a lot more difficult. But increases in neural complexity can lead to more complex behaviors, which can increase fitness or allow the creature to colonize new niches. In addition, neurons, over time, have evolved to become more efficient. Moore’s law operates, but with a doubling time on the order of 100 million years. Parrots’ neurons are both smaller and three times more energy efficient than human ones. So, not only does encephalization increase with time, but the tradeoff moves in its favor. However, like any increases in biological complexity and sophistication, this does take time.

This points to the conclusion that on planets habitable to humans, the evolution of sentience is not so much a case of if, but when.

An atmosphere breathable to humans is also flammable over most of its range, so a good proportion of these sapients would have access to fire allowing smelting technology to develop. What the model I used implies is that 50% of habitable planets will by now have had intelligent life forms evolve on them, a majority of which could develop technology.

I would support this argument by applying the Law of Universality that states that no matter where you are in the universe the laws of nature operate in the same way. This means that a planet like Earth would produce intelligent life forms. There is a certain contingent element in evolution, so the timing and the resulting life forms would not be identical; however, the broad driving forces of evolution would produce something similar. This can be seen in the many cases of convergent evolution that have occurred on Earth. How different from Earth a planet has to be before it stops producing intelligent life forms is a matter of conjecture, but if these changes cripple the evolution of intelligent lifeforms, there’s a good chance they cripple the formation of a breathable atmosphere.

What these intelligent life forms would do to their planet over the eons is a matter of speculation, but if for some reason intelligent life did not arise, then complex life could thrive and the planet would be habitable for another billion years or more – depending on the star’s spectral type – before the star’s increasing luminosity sets off a runaway greenhouse. This means that of the planets that are habitable for humans at some stage in their life approximately 15-25% will be habitable at any given time. (The upper bound assumes that there are a high proportion of them around lower mass stars with longer lifetimes.)

If, however, intelligent life develops on planets as a matter of course, then the model indicates that for every habitable planet we have now (5% of the total) approximately ten planets had intelligent lifeforms at some stage in their history (50% of the total.) And if intelligent life is a side effect of habitability, then there will be a correlation between the number of habitable planets and the number of exosolar technological civilizations in our galaxy. So, in an inversion of the usual order of things, we can estimate the number of planets habitable for humans from the number of alien civilizations in the galaxy. The model I’ve been using points to them being within an order of magnitude of each other.

Since we have no information on the evolution of intelligent life on non-habitable planets, calculating the number of habitable planets from evidence of alien civilizations gives an upper bound. On the other side of the scales, there’s the number of planets that are habitable through abiotic means. Planetary atmospheric spectra within the next couple of decades may give us some indication of this. If, however, we use Hanson’s estimate, which deduces from the lack of evidence of alien civilizations in our galaxy that the number of technological life forms is just one – us – then this would also point to the number of habitable planets in our galaxy being just one: Earth.

As a final point I would like to add that while I have not done a full literature search, I have read widely in this field and have not come across a consideration as rigorous as Dole’s work in defining habitability for humans and assessing the likelihood of finding planets that match those criteria. The field’s general mindset seems to focus on finding the conditions under which life arises; it then just assumes evolution will automatically lead to a planet habitable for humans. We have learned a lot since Dole wrote his paper, but there does not seem to have been much reexamination of the topic. It is perhaps time we applied our minds to it.

References

Stephen Dole, Habitable Planets For Man, The Rand Corporation, R414-R
https://www.rand.org/content/dam/rand/pubs/reports/2005/R414.pdf

Dave Moore, “’If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare’: A review”
https://centauri-dreams.org/2022/05/20/if-loud-aliens-explain-human-earliness-quiet-aliens-are-also-rare-a-review/

Robin Hanson, Daniel Martin, Calvin McCarter, Jonathan Paulson, “If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare,” The Astrophysical Journal, 922, (2) (2021)

The “Habitability” of Worlds (Part I)

Dave Moore is a Centauri Dreams regular who has long pursued an interest in the observation and exploration of deep space. He was born and raised in New Zealand, spent time in Australia, and now runs a small business in Klamath Falls, Oregon. He counts Arthur C. Clarke as a childhood hero, and science fiction as an impetus for his acquiring a degree in biology and chemistry. Dave has kept up an active interest in SETI (see If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare) as well as the exoplanet hunt. In the essay below, he examines questions of habitability and how we measure it, issues that resonate in a time when we are preparing to evaluate exoplanets as life-bearing worlds and look for their biosignatures.

by Dave Moore

In this essay I’ll be examining the meaning of the word ‘habitable’ when applied to planetary bodies. What do we mean when we talk about a habitable planet or a planet’s habitability? What assumptions do we make? The first part of this essay will look into this and address the implications that come with it. In part two, I’ll focus on human habitability, looking at the mechanisms that could produce a habitable planet for humans and what this would imply.

If you look at the Wikipedia entry on habitable planets, the author implies that “habitability” refers to the ability of a planetary body to sustain life, and this is by far the most frequent use of the term, particularly in the literature of popular science articles.

Europa has sulfate deposits on it, which indicates that its surface is oxidizing. If the hydrothermal vents in the moon’s subsurface ocean are like those on Earth, they would release reducing gases such as H2S and Methane. A connection between the two would provide an electrochemical differential that life could exploit. So it’s quite plausible that Europa’s ocean could harbor life, and if it does, would this now make it a “habitable” moon? If we find subsurface Methanogens on Mars, does Mars become a habitable planet? Traces of Phosphine in Venusian clouds point to the possibility of life forms there. If that’s so, would Venus now be considered habitable?

Andrew LePage on his website is more careful in defining what a habitable planet is. On his Habitable Planet Reality Check postings, he has the following definition:

…the best we can hope to do at this time is to compare the known properties of extrasolar planets to our current understanding of planetary habitability to determine if an extrasolar planet is “potentially habitable.” And by “habitable,” I mean in an Earth-like sense where the surface conditions allow for the existence of liquid water – one of the presumed prerequisites for the development of life as we know it. While there may be other worlds that might possess environments that could support life, these would not be Earth-like habitable worlds of the sort being considered here.

By Andrew’s definition, a habitable planet is first a body that can give rise to life. He then narrows it by adding that the type of life is “life as we know it,” which is life that needs an aqueous medium to evolve. If life evolved in some other medium, say Ammonia, then this would be life as we don’t know it; and the planet would not be classified as habitable. But this is not the only definitional constraint he makes. The planet must also be Earth-like in the sense that its surface conditions allow for liquid water. Europa would be excluded even if it had life in its oceans as its surface conditions do not allow for liquid water. His definition also implies that the planet must be in the habitable zone as defined by Kopparapu, which is thought to be the zone of insolation that allows for surface water on “Earth-like” planets. Would an ocean world with an ocean full of life fit his definition of habitable? Would a Super-Earth with a deep Hydrogen atmosphere (sometimes called a Hycean world) outside the habitable zone but with both oceans and continents and a temperate surface be habitable? I do note however that his definition does not include human survivability as a requirement because elsewhere in his post he talks about the factors that have kept Earth habitable over billions of years, and Earth’s atmosphere has only been breathable to humans over the last 500 million years.

I’m not picking on Andrew in particular here; he has put more thought into the matter of defining habitability than most. Why I am using him as an example is to show just how fraught defining habitability can be. It’s a word that is bandied about with a lot of unexamined assumptions.

This may seem picayune, but the study of life on other worlds has very little data to rely on, so hypotheses are made using logical inference and logical deduction. And if your definitions are inexact, sliding in meaning through your logical process, then you are likely to draw invalid conclusions. Also, if the definition of habitable is that of a planet that could have life evolve on it, why include this arbitrary set of exclusions?

The answer becomes obvious from reading articles in the popular press. A habitable planet is not just one that is life-bearing, but a planet in which life gives rise to conditions that may be habitable for humans.

The assumption that life leads to human habitability is strongly ingrained from our historical experience. By the early 19th century, it was known that oxygen was required to survive and plants produced oxygen, hence the idea of life and human habitability became intertwined. Also, our experience of exploring Earth strongly influenced our perception of other planets. We found parts of Earth hot, parts cold, others wet and others dry. Indigenous inhabitants were almost everywhere, and you could always breathe the air. And this mindset was carried over to our imaginings of planets. They would be like Earth, only different.

For instance, H. G. Wells, an author known for applying scientific rigor to his stories, in The First Men in the Moon (1901), postulates a thin but breathable atmosphere on the moon and its native inhabitants. This is despite the lack of atmosphere on the moon having been known for over 100 years prior. Such was our mindset about other planetary bodies. Pulp SF before WWII got away with swash-buckling adventures on pretty much every body in the solar system without the requirement for space suits. Post WWII, until the early sixties, both Venus and Mars were portrayed as having breathable atmospheres, Mars usually as a dying planet as per Bradbury, Venus as a tropical planet as, for example, in Heinlein’s Between Planets (1951).

When the first results from Mariner 2 came back in 1962 showing the surface of Venus was hot enough to roast souls, there was considerable resistance in the scientific community to accepting this and much scrambling to come up with alternative explanations. In 1965 Mariner 4 flew by Mars showing us a planet that was a cratered approximation of the moon and erased our last hopes that the new frontiers in our solar system would be anything like the old frontier. Crushed by what our solar system had served up, we turned to the stars.

Our search for life is now two-pronged: the first part being a search for signals from technological civilizations, which we regard as a pretty good indication of life; the second being the search for biomarkers on exosolar planets. We’re searching for biomarkers because, in the near future, characterizing exosolar planets will be by mass, radius and atmospheric spectra. Buoyed by our knowledge of extremophiles, we continue to search the planets and moons of our solar system for signs of life, but now it is in places not remotely habitable by humans. If the parameters for the search for life touch on habitable conditions for humans, they are purely tangential. These two elements once fused together in our romantic past have now become separate.

This divergence has led to a change in the goals of the search for life. We now look for the basic principles that govern the emergence of life, the conditions under which life can evolve, and whether those conditions allow for panspermia. This makes the concept of planetary habitability secondary. Life, once evolved, in its single-celled form, is tough and adaptable, so it is likely to continue until there’s a really major change in the state of a planet; habitability is a parameter of life’s continuity, not its origins. So when describing planets, terms like life-potential or life-bearing become more pertinent. This latter term is now starting to be used in preference to the description habitable.

If we now look at the other fork, the idea of habitability when applied to humans, we note that the term has been used in a loose sort of way since the 17th century. Even the idea of the habitable zone was first raised in the 19th century, but it was Stephen Dole with his report, Habitable Planets For Man, under the auspices of the Rand Corporation in 1964 that put a modern framework to it by precisely defining what a habitable planet was for humans. The book can be downloaded at the Rand site.

This report has held up well considering it was written at a time (1962) when Mercury’s mass had not been fully established and Venus’s atmosphere and surface temperature were unknown.

Image: PG note: Neither Dave nor I could find a better image of the cover of the original Dole volume than the one above, but Stephen Dole’s Planets for Man was a new version of the more technical Habitable Planets for Man, co-authored by Isaac Asimov and published in 1964. If you happen to have a copy of the earlier volume and could scan the cover at higher resolution, I would appreciate having the image in the Centauri Dreams files.

Dole first defines carefully what he means by habitability (material omitted for brevity):

“For present purposes, we shall enlarge on our definition of a habitable planet (a planet on which large numbers of people could live without needing excessive protection from the natural environment) to mean that the human population must be able to live there without dependence on materials bought from other planets. In other words, a planet that is habitable can supply all of the physical requirements of human beings and provide an environment in which people can live comfortably and enjoyably…”

You’ll note that Dole’s definition contains echoes of the experience of American settlement where initial settlement is exercised with minimal technology and living off the land. There is emphasis on self-sustainment. It’s the sort of place you’d send an ark ship to.

I take a view of habitability as more of a sliding scale on how much technology you need to survive and live comfortably. On some parts of Earth, the level of technology needed to survive is minimal: basic shelter, light clothes and a pair of flip-flops will do the job. Living at the South Pole is a different story. You must have a heated, insulated station to live in, and when you venture outside, you need heavily insulated clothing covering your entire body and goggles to prevent your eyeballs from freezing. Move to Mars and you need to add radiation protection and a pressurized, breathable atmosphere. The more hostile the environment the more technology you need. By stretching the definition, you could say that an O’Neill colony makes space itself habitable.

I contrast my definition to Dole’s to show that even when dealing with what makes a planet “habitable for humans” you can still get a significant variation on what this entails.

Dole does however itemize carefully the specific requirements necessary to meet his definition. They are:

Temperature: The planet must have substantial areas with mean annual temperatures between 32°F and 86°F. This is not only to meet human needs for comfort, but to allow the growing of crops and the raising of animals. Also seasonal temperatures cannot be too extreme.

Surface gravity: up to 1.5 g.

Atmospheric composition and pressure: For humans, the lower limit for Oxygen is a partial pressure of 100 millibars, below which hypoxia sets in. The upper limit is about 400 millibars at which you get Oxygen toxicity, resulting in things like blindness over time. For inert gasses, there is a partial pressure above which narcosis occurs. This is proportional to the molecular weight of the molecule. The most important of these to consider is Nitrogen, which becomes narcotic above a partial pressure of 2.3 bar. For CO2, the upper limit is a partial pressure of 10 millibars, above which acidosis leads to long term health problems and impaired performance. Most other gasses are poisonous at low or very low concentrations.

Image: Original illustration from Dole’s Report. You may notice the lower level of O2 set at 60 mm Hg. This is the blood level minimum not the atmospheric minimum. There is a 42 millibar drop in O2 partial pressure between the atm. and the blood.
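Dole’s atmospheric limits lend themselves to a simple check. The sketch below encodes just the numbers listed above (Oxygen between 100 and 400 millibars, Nitrogen below about 2.3 bar, CO2 below 10 millibars); the function and its example values are mine, added for illustration.

```python
# Encode the breathability limits Dole lists above (pressures in millibars).
# The function and its structure are illustrative; the numbers are Dole's.
def breathable(o2_mbar: float, n2_mbar: float, co2_mbar: float) -> bool:
    """True if the partial pressures fall within Dole's human limits."""
    if not (100 <= o2_mbar <= 400):   # hypoxia below, oxygen toxicity above
        return False
    if n2_mbar > 2300:                # nitrogen narcosis threshold
        return False
    if co2_mbar > 10:                 # acidosis and impaired performance
        return False
    return True

print(breathable(o2_mbar=212, n2_mbar=780, co2_mbar=0.4))  # present-day Earth: True
print(breathable(o2_mbar=212, n2_mbar=780, co2_mbar=15))   # too much CO2: False
```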

Other factors he considered were having enough water for oceans but not enough to drown the planet, sufficient light, wind velocities that aren’t excessive, and not too much radioactivity, volcanic activity, or meteorite in-fall.

Dole then went on to discuss general planetology and how stellar parameters would affect habitability (something we now know in much greater detail), and he finished up by calculating the likelihood of a habitable planet around the nearest stars in a manner similar to the Drake equation.

You will notice that these requirements listed bear little resemblance to the parameters used when discussing habitability with regard to life. The two have gone their separate ways.

Using Dole’s report as a basis for examining the habitability of a planet, in Part II of this essay, I will note how our current state of knowledge has updated his conclusions. Then I will look at how you could produce a planet habitable for humans and the consequences of those mechanisms.

——–

Wikipedia Planetary Habitability Definition
https://en.wikipedia.org/wiki/Planetary_habitability

Andrew LePage: Habitable Planet Reality Check: TOI-700e
https://www.drewexmachina.com/2023/01/23/habitable-planet-reality-check-toi-700e-discovered-by-nasas-tess-mission/

Manasvi Lingam, A brief history of the term ‘habitable zone’ in the 19th century, International Journal of Astrobiology, Volume 20, Issue 5, October 2021, pp. 332 – 336.

Stephen Dole, Habitable Planets For Man, The Rand Corporation, R414-R
https://www.rand.org/content/dam/rand/pubs/reports/2005/R414.pdf

A Liquid Water Mechanism for Cold M-dwarf Planets

A search for liquid water on a planetary surface may be too confining when it comes to the wide range of possibilities for supporting life. We see that in our own Solar System. Consider the growing interest in icy moons like Europa and Enceladus, where there is no possibility of surface water but a potentially rich environment under a thick layer of ice. Extending these thoughts into the realm of exoplanets reminds us that our calculations about how many life-bearing worlds are out there may be in need of revision.

This is the thrust of work by Lujendra Ojha (Rutgers University) and colleagues, as developed in a paper in Nature Communications and presented at the recent Goldschmidt geochemistry conference in Lyon. What Ojha and team point out is that radiogenic heating can maintain liquid water below the surface of planets in M-dwarf systems, and that added into our astrobiological catalog, such worlds, orbiting a population of stars that takes in 75 percent or more of all stars in the galaxy, dramatically increase the chances of life elsewhere. The effect is striking. Says Ojha:

“We modeled the feasibility of generating and sustaining liquid water on exoplanets orbiting M-dwarfs by only considering the heat generated by the planet. We found that when one considers the possibility of liquid water generated by radioactivity, it is likely that a high percentage of these exoplanets can have sufficient heat to sustain liquid water – many more than we had thought. Before we started to consider this sub-surface water, it was estimated that around 1 rocky planet every 100 stars would have liquid water. The new model shows that if the conditions are right, this could approach 1 planet per star. So we are a hundred times more likely to find liquid water than we thought. There are around 100 billion stars in the Milky Way Galaxy. That represents really good odds for the origin of life elsewhere in the universe.”

Image: This is Figure 2 from the paper. Caption: Schematic of a basal melting model for icy exo-Earths. a Due to the high surface gravity of super-Earths, ice sheets may undergo numerous phase transformations. Liquid water may form within the ice layers and at the base via basal melting with sufficient geothermal heat. If high-pressure ices are present, meltwater will be buoyant and migrate upward, feeding the main ocean. The red arrows show geothermal heat input from the planet’s rocky interior. b Pure water phase diagram from the SeaFreeze representation illustrating the variety of phases possible in a thick exo-Earth ice sheet. Density differences between the ice phases lead to a divergence from a linear relationship between pressure and ice-thickness. Credit: Ojha et al.

The effect is robust. Indeed, water can be maintained above freezing even when planets are subject to as little as 0.1 times Earth’s geothermal heat produced by radiogenic elements. The paper models the formation of ice sheets on such worlds and implies that the circumstellar region that can support life should be widened, which would take in colder planets outside what we have normally considered the habitable zone.

But the work goes further still, for it implies that planets closer to their host star than the inner boundaries of the traditional habitable zone may also support subglacial liquid water. We also recall that the sheer ubiquity of M-dwarfs in the galaxy helps us, for if water from an internal ocean does reach the surface, perhaps through cracks venting plumes and geysers, we may find numerous venues relatively close to the Sun on which to search for biosignatures.

The key factor here is subglacial melting through geothermal heat, for oceans and lakes of liquid water should be able to form under the ice on Earth-sized planets even when temperatures are as low as 200 K, as we find, for example, on TRAPPIST-1g, which is the coldest of the exoplanets for which Ojha’s team runs calculations.

Such water is found to be buoyant and can migrate through this ‘basal melting,’ a term used, explain the authors, for “any situation where the local geothermal heat flux, as well as any frictional heat produced by glacial sliding, is sufficient to raise the temperature at the base of an ice sheet to its melting point.” Subglacial meltwater is found on Earth beneath the West Antarctic Ice Sheet and Greenland, and possibly in the Canadian Arctic, and the paper points out the possibility of the same mechanism at work at the south pole of Mars.
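For a feel for the numbers, a purely conductive estimate (far cruder than the paper’s modeling) relates the required ice thickness to the geothermal heat flux: in steady state the heat conducted through the sheet is q = k(Tmelt − Tsurface)/H, so basal melting needs a thickness of at least H = k ΔT/q. The constants below are round-number assumptions of mine, not values from the paper.

```python
# Back-of-envelope conductive estimate (not the paper's full model): the ice
# thickness needed for geothermal heat to bring the base to its melting point.
# Steady-state conduction: q = k * (T_melt - T_surf) / H  ->  H = k * dT / q.
K_ICE = 2.5      # W/m/K, rough thermal conductivity of ice (assumed constant)
T_MELT = 273.0   # K, melting point at the base (pressure effects ignored)

def thickness_for_basal_melt(t_surface_k: float, heat_flux_w_m2: float) -> float:
    """Minimum ice-sheet thickness (km) for basal melting, purely conductive."""
    return K_ICE * (T_MELT - t_surface_k) / heat_flux_w_m2 / 1e3

# A 200 K surface with 0.1x Earth's mean geothermal flux (~0.09 W/m2),
# then with the full Earth value, for comparison:
for q in (0.009, 0.09):
    h = thickness_for_basal_melt(200.0, q)
    print(f"q = {q:.3f} W/m2 -> ice thicker than ~{h:.0f} km melts at its base")
```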

The authors’ modeling uses a software tool called SeaFreeze along with a heat transport model to investigate the thermodynamic and elastic properties of water and ice at a wide range of temperatures and pressures. Given the high surface gravity of worlds like Proxima Centauri b, LHS 1140 b and some of the planets in the TRAPPIST-1 system, water ice should be subjected to extreme pressures and temperatures, and as the paper points out, may evolve into high-pressure ice phases. In such conditions, the meltwater migrates upward to form lakes or oceans. Indeed, this kind of melting and migration of water is more likely to occur on planets where the ice sheets are thicker and there is both higher surface gravity as well as higher surface temperatures.

Image: A frozen world heated from within, as envisioned by the paper’s lead author, Lujendra Ojha.

Beyond radiogenic heating, tidal effects are an interesting question, given the potential tidal lock of planets in close orbits around M-dwarfs. Yet planets further out in the system could still benefit from tidal activity, as the paper notes about TRAPPIST-1:

…the age of the TRAPPIST-1 system is estimated to be 7.6 ± 2.2 Gyr; thus, if geothermal heating has waned more than predicted by the age-dependent heat production rate assumed here, tidal heating could be an additional source of heat for basal melting on the TRAPPIST-1 system. On planets e and f of the TRAPPIST-1 system, tidal heating is estimated to contribute heat flow between 160 and 180 mW m−2. Thus, even if geothermal heating were to be negligible on these bodies, basal melting could still occur via tidal heating alone. However, for TRAPPIST-1 g, the mean tidal heat flow estimate from N-body simulation is less than 90 mW m−2. Thus, ice sheets thinner than a few kilometers are unlikely to undergo basal melting on TRAPPIST-1 g.

So we have two mechanisms in play to maintain lakes or oceans beneath surface ice on M-dwarf planets. The finding is encouraging given that one of the key objections to life in these environments is the time needed for life to evolve while the young planet is bombarded by ultraviolet and X-ray radiation, a common issue for these stars. We put in place what Amri Wandel (Hebrew University of Jerusalem), who writes a commentary on this work for Nature Communications, calls ‘a safe neighborhood,’ and one for which forms of biosignature detection relying on plume activity will doubtless emerge, building on our experience at Enceladus and Europa.

The paper is Ojha et al., “Liquid water on cold exo-Earths via basal melting of ice sheets,” Nature Communications 13, Article number: 7521 (6 December, 2022). Full text. Wandel’s excellent commentary is “Habitability and sub glacial liquid water on planets of M-dwarf stars,” Nature Communications 14, Article number: 2125 (14 April 2023). Full text.

Reducing the Search Space with the SETI Ellipsoid

SETI’s task challenges the imagination in every conceivable way, as Don Wilkins points out in the essay below. A retired aerospace engineer with thirty-five years experience in designing, developing, testing, manufacturing and deploying avionics, Don is based in St. Louis, where he is an adjunct instructor of electronics at Washington University. He holds twelve patents and is involved with the university’s efforts at increasing participation in science, technology, engineering, and math. The SETI methodology he explores today offers one way to narrow the observational arena to targets more likely to produce a result. Can spectacular astronomical phenomena serve as a potential marker that could lead us to a technosignature?

by Don Wilkins

SETI search facilities are finite, while the volume they must search is vast, so priorities for exploration have to be set. Dr. Jill Tarter, Chair Emeritus for SETI Research, describes the search space as a “nine-dimensional haystack” composed of three spatial, one temporal (when the signal is active), two polarization, central frequency, sensitivity, and modulation dimensions. Methods to reduce the search space and prioritize targets are urgently needed.

One method for limiting the search volume is the SETI Ellipsoid, Figure 1, which is reproduced from a recent paper in The Astronomical Journal by lead author James R. A. Davenport (University of Washington, Seattle) and colleagues. [1]

Image: This is Figure 1 from the paper. Caption: Schematic diagram of the SETI Ellipsoid framework. A civilization (black dot) could synchronize a technosignature beacon with a noteworthy source event (green dot). The arrival time of these coordinated signals is defined by the time-evolving ellipsoid, whose foci are Earth and the source event. Stars outside the Ellipsoid (blue dot) may have transmitted signals in coordination with their observation of the source event, but those signals have not reached Earth yet. For stars far inside the Ellipsoid (pink dot), we have missed the opportunity to receive such coordinated signals. Credit: Davenport et al.

In this approach, an advanced civilization (black dot) synchronizes a technosignature beacon with a significant astronomical event (green dot). The astronomical event, in the example, is SN 1987A, a type II supernova in the Large Magellanic Cloud, a dwarf satellite galaxy of the Milky Way. The explosion occurred approximately 51.4 kiloparsecs (168,000 light-years) from the Sun.

Arrival time of the coordinated signals is defined by a time-evolving ellipsoid, with foci at Earth (or an observation station within the Solar System) and the source event. When a synchronized signal arrives depends on the distance from the advanced civilization to the Solar System, or to another observing system (d1), and the distance from the advanced civilization to the astronomical event (d2). Signals from civilizations (blue dot) outside the Ellipsoid that were coordinated with the source event have not yet reached the Solar System. Stars inside the Ellipsoid (pink dot) but on the line between the advanced civilization and the Solar System will not receive the signals intended for the Solar System. However, the advanced civilization could beam new signals to the pink star and form a new Ellipsoid.
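The geometry reduces to the defining property of an ellipse: a star could have sent a signal synchronized with the event, and that signal could have reached us by now, only if d1 + d2 is no greater than the Earth-event distance plus the light-travel distance corresponding to the time since we observed the event. A minimal sketch of that membership test follows; the coordinates in the example are made up for illustration.

```python
# Minimal sketch of the SETI Ellipsoid membership test. Distances are in
# light-years and positions are Cartesian coordinates with Earth at the
# origin; the example numbers below are made up for illustration.
import math

def inside_ellipsoid(star_xyz, event_xyz, years_since_event_seen: float) -> bool:
    """True if a signal synced to the event could have reached Earth by now.

    d1 = Earth-star distance, d2 = star-event distance. The ellipsoid's foci
    are Earth and the event, and it grows by one light-year per year.
    """
    d1 = math.dist((0.0, 0.0, 0.0), star_xyz)
    d2 = math.dist(star_xyz, event_xyz)
    d_event = math.dist((0.0, 0.0, 0.0), event_xyz)
    return d1 + d2 <= d_event + years_since_event_seen

# Hypothetical event 1,000 ly away along +x, observed 36 years ago:
event = (1000.0, 0.0, 0.0)
print(inside_ellipsoid((20.0, 0.0, 0.0), event, 36.0))   # toward the event: True
print(inside_ellipsoid((-20.0, 0.0, 0.0), event, 36.0))  # directly away: False
```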

The source event acts as a “Schelling Point” to facilitate communication between observers who have not coordinated the time or place of message exchanges. A Schelling point is a game theory concept which proposes links can be formed between two would-be communicators simply by using common references, in this case a supernova, to coordinate the time and place of communication. In addition to supernovae, source events include gamma-ray bursts, binary neutron star mergers, and galactic novae.

In conjunction with the natural event which attracts the attention of other civilizations, the advanced civilization broadcasts a technosignature signal unambiguously advertising its existence. The technosignature might, as an example, mimic a pulsar’s output: modulation, frequency, bandwidth, periods, and duty cycle.

The limiting factor in using the SETI Ellipsoid to search for targets has been the unavailability of precise distance measurements to nearby stars. The Gaia project remedies that problem. The mission’s two telescopes provide parallaxes, with precision 100 times better than its predecessors, for over 1.5 billion sources. Distance uncertainties are less than 10% for stars within several kiloparsecs of Earth. This precision directly translates into lower uncertainties on the timing for signal coordination along the SETI Ellipsoid.

“I think the technique is very straightforward. It’s dealing with triangles and ellipses, things that are like high-school geometry, which is sort of my speed,” James Davenport, University of Washington astronomer and lead author of the referenced papers, joked with GeekWire. “I like simple shapes and things I can calculate easily.” [2]

An advanced civilization identifies a prominent astronomical event, as an example, a supernova. It then determines which stars could harbor civilizations which could also observe the supernova and the advanced civilization’s star. An unambiguous beacon is transmitted to stars within the Ellipsoid. The volume devoted to beacon propagation is significantly reduced, which reduces power and cost, when compared to an omnidirectional beacon.

At the receiving end, the listeners would determine which stars could see the supernova and which would have time to send a signal to the listeners. The listening astronomers would benefit by limiting their search volume to stars which meet both criteria.

For example, astronomers on Earth only observed SN 1987A in 1987, thirty-six years ago. If the advanced civilization beamed a signal at the Solar System a century ago, our astronomers would not have the necessary clue, the observation of SN 1987A, to select the advanced civilization’s star as the focus of a search. Assuming both civilizations are using SN 1987A as a coordination beacon, human astronomers should listen to targets within a hemisphere defined by a radius of thirty-six light-years.

The following is written with apologies to Albert Einstein. The advanced civilization could observe the motion of stars and predict when a star will come within the geometry defined by the Ellipsoid. In the case of the Earth and SN1987A, the advanced civilization could have begun transmissions thirty-six years ago.

The recently discovered SN 2023ixf in the spiral galaxy M101 could serve as one of the foci of an Ellipsoid; 108 stars lie within 0.1 light-year of the SN 2023ixf–Earth SETI Ellipsoid. [3]

Researchers propose to use the Allen Telescope Array (ATA), designed specifically for radio technosignature searches, to search this Ellipsoid. The authors point out the utility of the approach and caution about its inherent anthropocentric biases:

“…there are numerous other conspicuous astronomical phenomena that have been suggested for use in developing the SETI Ellipsoid, including gamma-ray bursts (Corbet 1999), binary neutron star mergers (Seto 2019), and historical supernovae (Seto 2021). We cannot know what timescales or astrophysical processes would seem “conspicuous” to an extraterrestrial agent with likely a much longer baseline for scientific and technological discovery (e.g., Kipping et al. 2020; Balbi & Ćirković 2021). Therefore we acknowledge the potential for anthropogenic bias inherent in this choice, and instead focus on which phenomena may be well suited to our current observing capabilities.”

1. James R. A. Davenport, Bárbara Cabrales, Sofia Sheikh, Steve Croft, Andrew P. V. Siemion, Daniel Giles, and Ann Marie Cody, Searching the SETI Ellipsoid with Gaia, The Astronomical Journal, 164:117 (6pp), September 2022, https://doi.org/10.3847/1538-3881/ac82ea

2. Alan Boyle, How ‘Big Data’ could help SETI researchers intensify the search for alien civilizations, 22 June 2022, https://www.geekwire.com/2022/how-big-data-could-help-seti-researchers-intensify-the-search-for-alien-civilizations/

3. James R. A. Davenport, Sofia Z. Sheikh, Wael Farah, Andy Nilipour, Bárbara Cabrales, Steve Croft, Alexander W. Pollak, and Andrew P. V. Siemion, Real-Time Technosignature Strategies with SN2023ixf, Draft version June 7, 2023.

Earth in Formation: The Accretion of Terrestrial Worlds

It would be useful to have a better handle on how and when water appeared on the early Earth. We know that comets and asteroids can bring water from beyond the ‘snowline,’ that zone demarcated by temperatures beyond which volatiles like water, ammonia or carbon dioxide are cold enough to condense into ice grains. For our Solar System, that distance in our era is 5 AU, roughly the orbital distance of Jupiter, although the snowline would have been somewhat closer to the Sun during the period of planet formation. So we have a mechanism to bring ices into the inner Solar System but don’t know just how large a role incoming ices played in Earth’s development.

Knowing more about the emergence of volatiles on Earth would help us frame what we see in other stellar systems, as we evaluate whether or not a given planet may be habitable. Usefully, there are ways to study our planet’s formation that can drill down to its accretion from the materials in the original circumstellar disk. A new study from Caltech goes to work on the magmas that emerge from the planetary interior, finding that water could only have arrived later in the history of Earth’s formation.

Published in Science Advances, the paper involves an international team working in laboratories at Caltech as well as the University of the Chinese Academy of Sciences, with Caltech grad student Weiyi Liu as first author. When I think about studying magma, zircon comes first to mind. It appears in crystalline form as magma cools and solidifies. I’m no geologist, but I’m told that the chemistry of melt inclusions can identify factors such as volatile content and broader chemical composition of the original magma itself. Feldspar crystals are likewise useful, and the isotopic analysis of a variety of rocks and minerals can tell us much about their origin.

So it’s no surprise to learn that the Caltech paper uses isotopes, in this case the changing ratio of isotopes of xenon (Xe) as found in mid-ocean ridge basalt vs. ocean island basalt. Specifically, 129Xe* comes from the radioactive decay of the extinct volatile 129I, whose half-life is 15.7 million years, while 136Xe*Pu comes from the spontaneous fission of the extinct 244Pu, whose half-life is 80 million years. So the 129Xe*/136Xe*Pu ratio is a useful tool. As the paper notes, this ratio:

…evolves as a function of both time and reservoirs compositions (i.e., I/Pu ratio) early in Earth’s history. Hence, the study of the 129Xe*/136Xe*Pu in silicate reservoirs of Earth has the potential to place strong constraints on Earth’s accretion and evolution.

The ocean island basalt samples, originating as far down as the core/mantle boundary, reveal this ratio to be low by a factor of 2.8 as compared to mid-ocean ridge basalts, which have their origin in the upper mantle. Using computationally intensive simulations drawing on what is known as first-principles molecular dynamics (FPMD), the authors find that the low I/Pu levels were established in the first 80 to 100 million years of the Solar System (thus before 129I extinction), and have been preserved for the past 4.45 billion years. Their calculations assess the I/Pu findings under different accretion scenarios, drawing on simulated magmas from the lower mantle, which runs from 680 kilometers below the surface, to the core-mantle boundary (2,900 kilometers), and also from the upper mantle beginning at 15 kilometers and extending downward to 680 kilometers.
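The logic of the chronometer can be illustrated with a toy calculation. Once a reservoir stops losing xenon, every subsequent decay of 129I, and every subsequent fission of 244Pu that yields 136Xe, is retained, so the retained 129Xe*/136Xe*Pu ratio scales with the reservoir’s I/Pu ratio times the fractions of each parent that survived to that closure time. The sketch below is my own simplified illustration of that scaling, not the paper’s FPMD-based model.

```python
# Toy illustration (mine, not the paper's model) of why the retained
# 129Xe*/136Xe*Pu ratio constrains early events. Once a reservoir closes,
# all later 129I decays and 244Pu fission contributions are retained, so the
# ratio scales with the reservoir's I/Pu times the surviving parent fractions.
T_HALF_I129 = 15.7    # Myr
T_HALF_PU244 = 80.0   # Myr

def relative_ratio(closure_myr: float, i_pu_relative: float = 1.0) -> float:
    """129Xe*/136Xe*Pu relative to a reference reservoir closing at t = 0."""
    surviving_i = 0.5 ** (closure_myr / T_HALF_I129)
    surviving_pu = 0.5 ** (closure_myr / T_HALF_PU244)
    return i_pu_relative * surviving_i / surviving_pu

for t in (0, 30, 60, 100):
    print(f"closure at {t:3d} Myr -> ratio x{relative_ratio(t):.2f}")
# A ~2.8x lower ratio corresponds to closure roughly 30 Myr later at fixed
# I/Pu, or to a ~2.8x lower initial I/Pu at fixed closure time, or a mix.
```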

The result: The lower mantle reveals an early Earth composed primarily of dry, rocky materials, with a distinct lack of volatiles, with the later-forming upper mantle numbers showing three times the amount of volatiles found below. The volatiles essential for life seem to have emerged only within the last 15 percent, and perhaps less, of Earth’s formation. In the caption below, the italics are mine.

Image: This is Figure 4 from the paper. Caption: Schematic representation of the heterogeneous accretion history of Earth that is consistent with the more siderophile behavior of I and Pu at high P-T [pressure-temperature] conditions (this work). As core formation alone does not result in I/Pu fractionations sufficient to explain the ~3 times lower 129Xe*/136Xe*Pu ratio observed in OIBs [ocean island basalt] compared to MORBs [mid-ocean ridge basalt], a scenario of heterogeneous accretion has to be invoked in which volatile-depleted differentiated planetesimals constitute the main building blocks of Earth for most of its accretion history (phase 1), before addition of, comparatively, volatile-rich undifferentiated materials (chondrite and possibly comet) during the last stages of accretion (phase 2). Isolation and preservation, at the CMB [core mantle boundary], of a small portion of the proto-Earth’s mantle before addition of volatile-rich material would explain the lower I/Pu ratio of plume mantle, while the mantle involved in the last stages of the accretion would have higher, MORB-like, I/Pu ratios. Because the low I/Pu mantle would also have an inherently lower Mg/Si, its higher viscosity could help to be preserved at the CMB until today. Credit: Liu et al.

We’re a long way from knowing in just what proportions Earth’s water has derived from incoming materials from beyond the snowline. But we’re making progress:

…our model sheds light on the origin of Earth’s water, as it requires that chondrites represent the main material delivered to Earth in the last 1 to 15% of its accretion. Independent constraints from Mo [molybdenum] nucleosynthetic anomalies require these late accreted materials to come from the carbonaceous supergroup. Together, these results indicate that carbonaceous chondrites [the most primitive class of meteorites, containing a high proportion of carbon along with water and minerals] must have represented a non-negligible fraction of the volatile-enriched materials in phase 2 and, thus, play a substantial role in the water delivery to Earth.

All this from the observation that mid-ocean ridge basalts show roughly three times higher iodine/plutonium ratios (inferred from xenon isotopes) than ocean island basalts. The key to this paper, though, is the demonstration that the difference most likely records a history of accretion that began with dry planetesimals and ended with a secondary phase driven by infalling, volatile-rich materials.
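
A rough mass balance shows why even a small, late, chondrite-rich phase is plausible as the source of Earth’s water. The figures in this little sketch are rounded values I am assuming for illustration, not numbers from the paper.

EARTH_MASS = 5.97e24    # kg
OCEAN_MASS = 1.4e21     # kg, approximate mass of Earth's surface oceans

def oceans_delivered(late_mass_fraction, water_weight_fraction):
    """Ocean masses of water delivered if late_mass_fraction of Earth's
    final mass arrives as material carrying water_weight_fraction of water,
    ignoring impact losses and water sequestered in the deep interior."""
    return late_mass_fraction * EARTH_MASS * water_weight_fraction / OCEAN_MASS

# If just 1% of Earth's mass arrived late as carbonaceous-chondrite-like
# material holding ~10% water by weight (an assumed figure), that alone
# amounts to roughly four ocean masses:
print(oceans_delivered(0.01, 0.10))

Even allowing for generous losses during impacts, a few percent of late-arriving chondritic material comfortably covers the surface water inventory.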

Thus Earth presents us with a model of planet formation from dry, rocky materials, one that presumably would apply to other terrestrial worlds, though we’d like to know more. To push the inquiry forward, Caltech’s Francois Tissot, a co-author on the paper, advocates looking at rocky worlds within our own Solar System:

“Space exploration to the outer planets is really important because a water world is probably the best place to look for extraterrestrial life. But the inner solar system shouldn’t be forgotten. There hasn’t been a mission that’s touched Venus’ surface for nearly 40 years, and there has never been a mission to the surface of Mercury. We need to be able to study those worlds to better understand how terrestrial planets such as Earth formed.”

And indeed, to better measure the impact of ices brought from far beyond the snowline to the infant worlds of the inner system. Tissot’s work demonstrates how deeply we are now delving into the transition between the protoplanetary nebula and fully formed planets. He works across the entire spectrum of what he calls ‘geochemical problematics,’ which includes studying the isotopic makeup of meteorites and their inclusions, reconstructing the earliest redox conditions in the Earth’s ocean and atmosphere, and analyzing isotopes to investigate ancient magmas. At Caltech, he has created the Isotoparium, a state-of-the-art facility for high-precision isotope studies.

That we are now probing our planet’s very accretion is likely not news to many of my readers, but it stuns me as another example of extraordinary methodologies driving theory forward through simulation and laboratory work. And as we don’t often consider work on the geological front in these pages, it seems a good time to point this out.

The paper is Weiyi Liu et al., “I/Pu reveals Earth mainly accreted from volatile-poor differentiated planetesimals,” Science Advances Vol. 9, No. 27 (5 July 2023) (full text).

On Retrieving Dyson

One of the pleasures of writing and editing Centauri Dreams is connecting with people I’ve been writing about. A case in point is my recent article on Freeman Dyson’s “Gravitational Machines” paper, which has only lately come to light again thanks to the indefatigable efforts of David Derbes (University of Chicago Laboratory Schools, now retired). See Freeman Dyson’s Gravitational Machines for more, as well as the follow-up, Building the Gravitational Machine. I was delighted to begin an email exchange with Dr. Derbes following the Centauri Dreams articles, out of which emerged today’s post, which presents elements of that exchange.

I run this particularly because of my continued fascination with the work and personality of Freeman Dyson, who is one of those rare individuals who seems to grow in stature every time I read or hear about his contributions to physics. It was fascinating to receive from Dr. Derbes not only the background on how this manuscript hunter goes about his craft, thereby illuminating some of the more hidden corners of physics history, but also to learn of his recollections of the interactions between Dyson and Peter Higgs, whose ‘Higgs mechanism’ has revolutionized our understanding of mass and contributed a key factor to the Standard Model of particle physics. I’m also pleased to make the acquaintance of a kindred spirit, who shares my fascination with how today’s physics came to be, and the great figures who shaped its growth.

by David Derbes

I have a lifelong interest in the history of physics, particularly the history of physicists. Somehow I got through graduate school (in the UK; but I’m American) with only a very shaky acquaintance with Feynman diagrams and calculations in QED [quantum electrodynamics, the relativistic quantum theory of electrically charged particles, mutually interacting by exchange of photons]. This led me to a program of self-study (resulting in “Feynman’s derivation of the Schrödinger equation”, Amer. Jour. Phys. 64 (1996) 881-884, two editions of Dyson’s AQM [Advanced Quantum Mechanics], and, with Richard Sohn, David J. Griffiths, and a cast of thousands, Sidney Coleman’s Lectures on Quantum Field Theory).

Along the way I stumbled onto David Kaiser’s Drawing Theories Apart, a sociological study of Feynman’s diagrams. Kaiser, who is now a friend, is a very remarkable fellow; he has two PhDs, one in physics (ostensibly under Coleman but actually under Alan Guth) and another in the history and philosophy of science. Kaiser mentioned the Cornell AQM notes of Dyson, never published, and I thought, hmmm… I found scans of them online at MIT, and (deleting a few side trips here) contacted Dyson about LaTeX’ing them for the arXiv (where they may be found today).

Image: Physicist, writer and teacher David Derbes, recently retired from University of Chicago Laboratory Schools. Credit: Maria Shaughnessy.

Dyson was quite enthusiastic. It probably helped that I had been a grad student of Higgs’ under Nick Kemmer at Edinburgh; Kemmer had steered Dyson towards physics and away from mathematics at Cambridge after the war. Ultimately (in my opinion) it is Dyson who was (very quietly) responsible for the recognition of Higgs’s work, and its incorporation by Weinberg into the Standard Model. Dyson had seen Higgs’s short pieces from 1964, learned (maybe from Kemmer) that he was at UNC Chapel Hill for 1965-66, and wrote Higgs to invite him to give a talk at the IAS, which led to his giving a talk at Harvard (with Coleman, Glashow, and maybe Weinberg, then at MIT, in the audience).

Typing up Dyson’s Cornell lectures killed two birds: I learned more about QED, and I learned LaTeX from scratch. In retirement, “manuscript salvage” is my main hobby. (There are at least a couple of other oddballs who are doing much the same thing: David Delphenich, and there’s a guy in Australia, Ian Bruce, who has done a bunch of stuff from the 17th and 18th centuries, among other things a new translation of the Principia.)

Flash forward to shortly after LIGO’s results were announced. A letter in Physics Today drew attention to Dyson’s “Gravitational Machines”, so I went looking for it in the Cameron collection. I have a copy of Dyson’s Selected Works, and as you report, the paper is not there. I couldn’t find it anywhere else, either. Cameron’s collection was mostly published in ephemeral paperback (I think there were a small number of hardbacks for libraries, but the U of Chicago’s copy is in paper covers).

So I wrote Dyson, with whom I had developed a very friendly relationship (there is a second edition of AQM, and it was more work than the first, due to the ~200 Feynman diagrams in the supplement), and asked if he would consent to my retyping (and redrawing the illustration for) his article for the arXiv. He was pleased by this. I very much regret that I couldn’t get it done before he died. The reason for that was copyright problems.

I’m going to give you only bullet points for that. Cameron died in 2005. His Interstellar Communication was published by W. A. Benjamin, which was later purchased by Cummings; Cummings was purchased by Addison-Wesley, and most of A-W’s assets were purchased by Pearson, some by Taylor & Francis (UK). It took about four years to unravel. Neither Pearson (totally unhelpful) nor T&F (much better) had any record of the Cameron collection. As this may be helpful to you down the road, here was the resolution:

A work which was in copyright prior to January 1, 1964 had to have its copyright renewed in the 28th year after the original copyright or lose its US copyright protection forever. Cameron’s collection was copyrighted in 1963. It took hours, but by scouring the online catalog at the US Copyright Office (you can do it in person near the Library of Congress) I was able to convince myself that the copyright had never been renewed. As far as US copyright goes, “Gravitational Machines” is in the public domain, and so I was clear of corporate entanglements (more to the point, so is the arXiv).

However, as I learned from Dyson’s Selected Papers, the article had originally been entered into an annual contest by the Gravity Research Foundation. The list of contributors to this contest reads historically like a Who’s Who of astrophysicists, general relativists and astronomers. So I got in touch with that organization’s director, George Rideout Jr. Rideout’s father had been appointed director by Roger W. Babson, who made a pile of money and set the foundation up. The story behind this is very sad: His beloved older sister drowned, and he blamed gravity. So he thought, well, if people could only invent anti-gravity, that might prevent future disasters. So he set up the foundation. (I think they also provided some funding for GR1 [Conference on the Role of Gravitation in Physics], the first international general relativity conference, Chapel Hill, 1957.)

I quickly obtained permission from George Rideout, satisfied the arXiv officials that they were free and clear to post “Gravitational Machines,” and here we are. (As I mentioned in the arXiv posting, the abstract comes from the original Gravity Research Foundation submission; it is absent from the Cameron collection.)

Incidentally, in chasing down other things, I found something I’d been seeking for a long time, the report from the Chapel Hill conference:

https://edition-open-sources.org/publications/#sources

https://edition-open-sources.org/sources/5/index.html

(So as you can see, there are several of us oddball manuscript hunters out there.)

Theoretical physics was not that large a community in 1965, and the British community even smaller. The physicists of Dyson’s generation typically went to Cambridge (which remains the main training ground for math and physics in the UK), with smaller spillover at Oxford, Imperial College London, and Edinburgh.

Kemmer hired Higgs at Edinburgh. (Peter had been in the same department as Maurice Wilkins and Rosalind Franklin at King’s College, London, where he was an expert at the time on crystal structure via group theory. He had no direct involvement with the DNA work, though he subsequently wrote an article that did much, post facto, to explain the helical structure. The big boss at the lab, not Wilkins, was apparently quite annoyed that Higgs didn’t want to work on DNA.) Higgs wrote a Kemmer obituary for the University of Edinburgh bulletin. He had been at Edinburgh in a junior position for a couple of years in the 1950s before he returned for good in 1960 (I think).

If I recall correctly, as Peter tells the story, Sheldon Glashow (whom Higgs had known since a Scottish Summer School in Physics conference, in 1960 I think) told Higgs that if he were ever planning to be in the Northeast, Glashow would arrange for Peter to give a talk at Harvard on whatever he liked. Independently of Glashow, Dyson wrote Peter to invite a talk at the IAS on what is now famously the Higgs mechanism, and Peter called Glashow to say something like, “Well, I’m driving from Chapel Hill to Princeton, and I see that Cambridge is only another few hours, so…” and that led to Higgs giving pretty much the same talk at Harvard, a really important event. But if Dyson hadn’t asked Peter to come to Princeton, he would not have gone to Harvard.

[Thus the contingencies of history, always telling a fascinating tale, in this case of a concept that rocked the world of physics, and wouldn’t you know Freeman Dyson would be in the middle of it. – PG]

Charter

In Centauri Dreams, Paul Gilster looks at peer-reviewed research on deep space exploration, with an eye toward interstellar possibilities. For many years this site coordinated its efforts with the Tau Zero Foundation. It now serves as an independent forum for deep space news and ideas. In the logo above, the leftmost star is Alpha Centauri, a triple system closer than any other star, and a primary target for early interstellar probes. To its right is Beta Centauri (not a part of the Alpha Centauri system), with Beta, Gamma, Delta and Epsilon Crucis, stars in the Southern Cross, visible at the far right (image courtesy of Marco Lorenzi).


On Comments

If you'd like to submit a comment for possible publication on Centauri Dreams, I will be glad to consider it. The primary criterion is that comments contribute meaningfully to the debate. Among other criteria for selection: Comments must be on topic, directly related to the post in question, must use appropriate language, and must not be abusive to others. Civility counts. In addition, a valid email address is required for a comment to be considered. Centauri Dreams is emphatically not a soapbox for political or religious views submitted by individuals or organizations. A long form of the policy can be viewed on the Administrative page. The short form is this: If your comment is not on topic and respectful to others, I'm probably not going to run it.
