
New Approaches to the Age of Saturn’s Moons

The always intriguing Titan brings into sharper focus recent work by Samuel Bell (Planetary Science Institute) on the age of the moons of Saturn. The assumption that Titan is at least four billion years old, which draws on earlier work on the age of Saturn’s moon system, is challenged by the active weathering visible in the Cassini data: the lakes, mountains, riverbeds and dunes we see there. Bell argues that an older Titan would have to be one with an extremely low erosion rate and minimal resurfacing.

But maybe Titan is younger than we’ve thought. Bell assembles the context of Titan in the overall system at Saturn by studying the cratering rate on the various moons. Determining the age of a planetary surface — think Mars or the Moon — is generally done by counting the impact craters and weighing this against the cratering rate. At Saturn, the problem is that the cratering rate is not known. It would be one value if, as previous work has assumed, the craters on the Saturnian moons all came from objects orbiting the Sun. Bell wondered if this was true:

“If the impacts came solely from Sun-orbiting objects, the relative cratering rate would be much, much higher the closer the moons are to Saturn. However, the crater densities of the oldest surfaces of Mimas, Tethys, Dione, Rhea, and Iapetus are all relatively similar. It would be too much of a coincidence for the ages of the oldest surfaces on each moon to vary by the exact amounts necessary to produce broadly similar crater densities. As a result, it seems much likelier that the impactors actually come from objects orbiting Saturn itself, moonlets that would be too small to detect with current technology.”
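The logic of crater-count dating is worth making concrete. A minimal sketch, with wholly invented numbers (these are not measured values for any moon): a measured crater density yields an age only once a cratering rate is assumed, and at Saturn that rate is exactly the unknown.

```python
# Illustrative sketch of crater-count dating. All numbers are
# invented for demonstration, not measurements.

def surface_age(crater_density, cratering_rate):
    """Age in Gyr, given craters per 10^6 km^2 and craters per
    10^6 km^2 per Gyr, assuming a constant impact flux."""
    return crater_density / cratering_rate

# With a known rate (as for the Moon, calibrated against Apollo
# samples), the age follows directly:
density = 120.0   # hypothetical crater density
rate = 30.0       # hypothetical cratering rate
print(surface_age(density, rate))  # 4.0 Gyr

# At Saturn the rate is unknown: the same crater density is
# consistent with very different surface ages.
for rate in (30.0, 120.0, 600.0):
    print(rate, "->", surface_age(density, rate), "Gyr")
```

This is why Bell's question about the impactor population matters so much: changing the assumed source of impactors changes the assumed rate, and the inferred ages scale inversely with it.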

Image: This mosaic of Saturn’s moon Mimas showing its cratered surface was created from images taken by NASA’s Cassini spacecraft. Credit: NASA/JPL-Caltech/Space Science Institute.

A new chronology emerges if we accept this model. Saturn-orbiting impactors allow a younger age to be calculated, one that, for Titan, more clearly squares with the Cassini data.

Bell is clear about the factors of system age that we have yet to explain, and acknowledges that an older Titan is still possible. From the paper:

I… prefer a model of dominantly planetocentric cratering, with an impactor production function that probably does not vary by more than a factor of ~5 between Mimas and Iapetus. This planetocentric cratering model makes the young moons hypothesis possible and implies that the cratered plains of Mimas, Tethys, Dione, Rhea, and Iapetus are of broadly similar age. Under this model, the surface of Titan is definitely younger than the cratered plains of Rhea and Iapetus, and it could easily be much, much younger. However, due to lack of constraints on the planetocentric cratering rate and how it varies with time, the planetocentric model provides very limited constraints in terms of absolute age. While it suggests a vigorously resurfaced Titan with a young surface, the model cannot rule out a surface of Titan that dates back to the early solar system, a very old surface with a very slow erosion rate and negligible endogenic resurfacing.

I bring all this up this morning to add context to a 2019 paper on Saturn’s moons from Marc Neveu (NASA GSFC) and Alyssa Rhoden (Southwest Research Institute). In “Evolution of Saturn’s Mid-sized Moons,” the duo make the case that the orbits of Mimas, Enceladus, Tethys, Dione and Rhea are hard to square with their geology. From their paper:

The moons’ ages are debated. Their crater distributions, assuming Sun-orbiting impactors extrapolated from present-day observed small-body populations, suggest surfaces billions of years old. Conversely, the measured fast expansion of their orbits, probably due to tides raised by the moons on Saturn, indicates—assuming dissipation levels that are constant over both time and frequency of tidal excitation—that this relatively compact moon system is less than a billion years old. This could explain why some moons may not have encountered predicted orbital resonances, and supports scenarios of non-primordial formation from debris of the tidal or collisional disruption of progenitor moons.
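The tidal argument quoted above can be roughed out numerically. If Saturn's tidal dissipation (the ratio k2/Q) is assumed constant, the standard constant-Q migration law da/dt ∝ a^(-11/2) integrates to give an upper bound on how long a moon can have been spiraling outward. The sketch below is illustrative only: the physical parameters are approximate, the starting orbit and k2/Q value are assumptions (k2/Q here is merely the order of magnitude suggested by recent astrometric work), and the real calculation in Neveu and Rhoden's paper couples this to geophysical evolution.

```python
# Back-of-envelope bound on a moon's tidal migration age,
# assuming constant tidal dissipation in Saturn. Parameter
# values are approximate; k2/Q and the starting orbit are
# assumptions, not measurements.
import math

GM_SATURN = 3.793e16   # Saturn's gravitational parameter, m^3/s^2
M_SATURN = 5.683e26    # Saturn's mass, kg
R_SATURN = 6.0268e7    # Saturn's radius, m

def max_tidal_age_years(moon_mass, a_now, a_start, k2_over_q):
    """Time (years) for a moon to migrate from a_start to a_now
    under constant-Q tides raised on Saturn:
    a^(13/2) grows linearly in time."""
    c = 3.0 * k2_over_q * (moon_mass / M_SATURN) \
        * math.sqrt(GM_SATURN) * R_SATURN**5
    t_seconds = (2.0 / 13.0) * (a_now**6.5 - a_start**6.5) / c
    return t_seconds / 3.156e7  # seconds -> years

# Mimas, starting near the outer edge of the rings (~1.4e8 m),
# with k2/Q ~ 2e-4 (assumed):
age = max_tidal_age_years(3.75e19, 1.855e8, 1.4e8, 2e-4)
print(f"{age / 1e9:.2f} Gyr")  # comes out well under 4.5 Gyr
```

Even this crude version lands in the sub-billion-year range for Mimas, which is the heart of the tension with the crater-based chronology: the same surfaces cannot easily be both ancient and recently assembled.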

The scientists have run numerical simulations coupling geophysical and orbital evolution over a 4.5 billion year period, with the orbits expanding with time through tidal effects. For the overview, let me just quote the abstract below, as I’m short on time this morning. But notice the ramifications of system age for another interesting moon, Enceladus:

Dissipation within the moons decreases their eccentricities, which are episodically increased by moon−moon interactions, causing past or present oceans to exist in the interiors of Enceladus, Dione and Tethys. In contrast, Mimas’s proximity to Saturn’s rings generates interactions that cause such rapid orbital expansion that Mimas must have formed only 0.1−1 billion years ago if it postdates the rings. The resulting lack of radionuclides keeps it geologically inactive. These simulations explain the Mimas−Enceladus dichotomy, reconcile the moons’ orbital properties and geological diversity, and self-consistently produce a recent ocean on Enceladus.

But back to Samuel Bell, who is clearly right about how meager our knowledge of the evolution of this system of moons really is:

“With the new chronology, we can much more accurately quantify what we do and don’t know about the ages of the moons and the features on them. The grand scale history of the Saturn system still hides many mysteries, but it is beginning to come into focus.”

The paper is Bell, “Relative Crater Scaling Between the Major Moons of Saturn: Implications for Planetocentric Cratering and the Surface Age of Titan,” Journal of Geophysical Research Planets 26 May 2020 (abstract). The Neveu and Rhoden paper is “Evolution of Saturn’s mid-sized moons,” Nature Astronomy 3 (1 April 2019), 543-552 (abstract).


Is there a single technology that can take us from being capable of reaching space to actually building an infrastructure system-wide? Or at least getting to a tipping point that makes the latter possible, one that Nick Nielsen, in today’s essay, refers to as a ‘space breakout’? We can think of game-changing devices like the printing press with Gutenberg’s movable type, or James Watt’s steam engine, as altering — even creating — the shape and texture of their times. The issue for space enthusiasts is how our times might be similarly altered. Nick here follows up an earlier investigation of spacefaring mythologies with this look at indispensable technologies, forcing the question of whether any such technologies exist, or whether technologies necessarily come in clusters that reinforce each other’s effects. The more topical question: What is holding back a spacefaring future that after the Apollo landings had seemed all but certain? Nielsen, a frequent author in these pages, is a prolific writer whose work can be tracked in Grand Strategy: The View from Oregon, and Grand Strategy Annex.

by J. N. Nielsen

1. Another Hypothesis on a Sufficient Condition for Spacefaring Civilization
2. The Nineteenth Century and the Steam Engine
3. The Twentieth Century and the Internal Combustion Engine
4. The Twenty-First Century and the Energy Problem
5. The World That Might Have Been: Accessible Fission Technology
6. Nuclear Rocketry as a Transformative Technology
7. Practical, Accessible, and Ubiquitous Technologies
8. The Potential of an Age of Fusion Technology
9. Indispensability and Fungibility
10. Four Hypotheses on Spacefaring Breakout

1. Another Hypothesis on a Sufficient Condition for Spacefaring Civilization

Civilization is the largest, the longest-lived, and the most complex institution that human beings have built. As such, describing civilization and the mechanisms by which it originates, grows, develops, matures, declines, and becomes extinct is difficult. It is to be expected that there will be multiple explanations to account for any major transition in civilization. At our present state of understanding, the best we can hope to do is to rough out the possible classes of explanations and so lay the groundwork for future discussions that penetrate into greater depth of detail. It is in this spirit that I want to return to the argument I made in an earlier Centauri Dreams post about the origins of spacefaring civilization.

The central argument of Bound in Shallows was that, while being a space-capable civilization is a necessary condition of being a spacefaring civilization, an adequate mythology is the sufficient condition that facilitates the transition from space-capable to spacefaring civilization. According to this argument, the contemporary institutional drift of the space program and of our civilization is a result of no contemporary mythology being readily available (or, if available, such a mythology remains unexploited) to serve as the social framework within which a spacefaring breakout could be understood, motivated, rationalized, and justified.

In the present essay I will consider an alternative hypothesis on the origins of spacefaring civilization, again building on the fact that we are, today, a space-capable civilization that has not as of yet, however, experienced a spacefaring breakout. The alternative hypothesis is that a key technology is necessary to great transitions in the history of civilization, and that a key technology is like the keystone of an arch, which when present constitutes a stable structure that will endure, but, when absent, the structure collapses. Successful civilizations see a sequence of key technologies that are exploited at a moment of opportunity that allows civilization to internally revolutionize itself and so avoid stagnation. I will call this the technological indispensability hypothesis.

There are many key technologies that could be identified—the bone needle, agriculture, written language, the movable type printing press—each of which represented a major turning point in human history when the technology in question was exploited to the fullness of its potential. We will take up this development relatively late in the history of civilization, beginning with the steam engine as the crucial technology of the industrial revolution, and therefore the technology responsible for the breakthrough to industrialized civilization.

[Indian & Primrose Mills steam engine, built in 1884, in service until 1981]

2. The Nineteenth Century and the Steam Engine

The nineteenth century belonged to steam power, which both built upon previous technological innovations and laid the groundwork for the large-scale exploitation of later technologies. It was steam power that enabled the industrial revolution, an inflection point in human agency, both in terms of the human ability to reshape our environment and the human ability to harness energy on ever-greater scales. Without the rapid adoption and large-scale exploitation of steam engine technologies for shipping, railways, resource extraction, and industrial production as the model for industrialized civilization, later technological developments (like the internal combustion engine or the electric motor) probably would not have been so effectively exploited.

Almost two hundred years of continuous development, each advance building on prior technology, separate the earliest steam devices from James Watt’s steam engine (not counting earlier steam turbines such as that of Hero of Alexandria, which was not a stepping stone to later developments). A series of inventors, starting in the early seventeenth century—Giovanni Battista della Porta (1535-1615), Jerónimo de Ayanz y Beaumont (1553-1613), Edward Somerset, second Marquess of Worcester (1602-1667), Denis Papin (1647-1713), Thomas Savery (1650-1715), and Jean Desaguliers (1683-1744)—created steam-powered devices of increasing efficiency and utility. And, of course, while James Watt’s steam engine was the culmination of these developments, it was not an end point of design, but the point of origin of the exponential technological improvements that followed.

The technology of the steam engine, then, could be construed as a key technology that enabled the industrial revolution. Previous labor-saving technologies—not only earlier forms of the steam engine as implied by the evolution of that technology, but also water mill and windmill technology known since classical antiquity—were limited by their inefficiency and by the sources of energy they harvested. The steam engine, once understood, was capable of increasing efficiency both through improved design and precision engineering, and it allowed human beings to tap into sources of energy sufficiently plentiful and dense that powered machine works could, in principle, be installed at almost any location and be operated continuously for as long as fuel could be supplied (which supply was facilitated by the energy density of the fuel, first coal for steam technologies, then oil for the internal combustion engine).

About fifty years after Watt’s later iterations of his steam engine design, Sadi Carnot published Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance (Reflections on the Motive Power of Fire and on Machines Fitted to Develop that Power, 1824), and in so doing systematically assimilated steam engine technology to the conceptual framework of science. It was this scientific understanding of what exactly the steam engine was doing that made it possible to improve the technology beyond the limits of tinkering (or what we might today call “hacking”). As we shall see, however, the full exploitation of a transformative technology seems to require both scientific development and practical tinkering.
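Carnot's central result can be stated in a single line: the maximum efficiency of any heat engine depends only on the temperatures of its hot and cold reservoirs, not on its mechanism. A small sketch, with illustrative temperatures (not historical measurements), shows why this mattered for engine design: it told engineers that higher-pressure, hotter steam was the road to efficiency, a conclusion tinkering alone could not have established.

```python
# Carnot's theorem as a one-line bound. Temperatures below are
# illustrative assumptions, not historical engine data.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on thermal efficiency for reservoirs at the
    given absolute temperatures (kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# A low-pressure engine (boiler near 100 C) versus a later
# high-pressure engine (steam near 300 C), both exhausting
# near 30 C:
print(carnot_efficiency(373.0, 303.0))  # roughly 0.19
print(carnot_efficiency(573.0, 303.0))  # roughly 0.47
```

Real engines of the period fell far short of these bounds, but the bound itself reframed improvement as a scientific problem rather than a craft tradition, which is the point made above.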

In regard to my thesis in Bound in Shallows, mythologies present in the Victorian age that enabled the exploitation of steam technology could include the belief in human progress and belief in the distinctive institutions of Victorian society. To take the latter first, in The Victorian Achievement I argued that the ability for Victorian England to keep itself intact despite the wrenching changes wrought by the industrial revolution was key to the success of the industrial revolution: “[Victorian civilization] achieved nothing less than the orderly transition from agricultural civilization to industrialized civilization.”

At the same time that a civilization must internally revolutionize itself in order to avoid stagnation, it must also provide for continuity by way of some tradition that transcends the difference between past, present, and future. The ideology of Victorian society made this possible for England during the industrial revolution. A sufficiently large internal revolution that fails to maintain some continuity of tradition could result in the emergence of a new kind of civilization that must furnish itself with novel institutions or reveal itself as stillborn. If the population of a revolutionized civilization cannot be brought along with the radical changes in social institutions, however, the internal revolution, rather than staving off stagnation, simply becomes an elaborate and complex form of catastrophic failure in which a society approaches an inflection point and cannot complete the process, coming to grief rather than advancing to the next stage of development.

It has become a commonplace of historiography that nineteenth-century Europe, and Victorian England in particular, believed in a “cult of progress”; the studies on this question are too numerous to cite. A revisionary history might seek to overturn this consensus, but let us suppose this is true. If belief in progress distinctively marked the nineteenth-century engagement with the earliest industrial technologies, we can regard this as an antithetical state of mind to what Gilbert Murray called a “failure of nerve” [1], and as such a steeling of nerve may have been what was necessary for a previously agricultural economy to find itself rapidly transformed into an industrialized economy and to survive the transition intact.

At this point, we can equally well argue for the indispensability of technology or the indispensability of mythology in the advent of a transformation in civilization, but now we will pass over into further developments of the industrial revolution. After the age of the steam engine, the twentieth century belonged to the internal combustion engine burning fossil fuel. It was the internal combustion engine that drove technological and economic modernity first revealed by steam technology to new heights.

[The Wärtsilä-Sulzer RTA96-C internal combustion engine]

3. The Twentieth Century and the Internal Combustion Engine

The key technology of the twentieth century, and the successor technology to the steam engine, was the internal combustion engine. The first diesel engine was built in 1897, and the diesel engine rapidly found itself employed in a variety of industrial applications, especially in transportation: shipping, railroads, and trucking. Two-stroke and four-stroke gasoline engines converged on practical designs in the late nineteenth and early twentieth century and began to replace steam engines in those applications where diesel engines had not already replaced steam.

The internal combustion engine has a fuel source that can be stored in bulk (also true for steam engines), and it is scalable. The scalability of the internal combustion engine often goes unremarked, but it is the scalability that ensured the penetration of the internal combustion engine into all sectors of the economy. An internal combustion engine can be made so small and light that it can be carried around by one person (as in the case of a yard trimmer) and it can be made so large and powerful that it can be used to power the largest ships ever built. [2] The internal combustion engine is sufficiently versatile that it can be dependably employed in automobiles, trucks, trains, ships, power generation facilities, and industrial applications.

While it would be misleading to claim that the internal combustion engine was revolutionary to the degree that the steam engine was revolutionary, it would nevertheless be accurate to say that the internal combustion engine allowed for the expansion and consolidation of the industrialized civilization made possible by the steam engine.

The internal combustion engine proliferated at a time when belief in the institutions of societies undergoing industrialization weakened, and arguably has never recovered. It would therefore be difficult to argue that the ongoing industrial revolution was driven by a distinctive mythology: the crucial technologies of industrialization continued to advance even as the core mythologies of industrializing societies were questioned as never before. At this point, technology looks more indispensable to ongoing industrialization than does mythology.

The experience of the First World War was a turning point in both technology and social change. I have called the First World War the First Global Industrialized War; for the first time, the war effort was existentially dependent upon fossil fuel powered trains, trucks, motorcycles, aircraft, and tanks, which transformed the experience of combat, so that German soldiers thereafter spoke of the “frontline experience” (Fronterlebnis). Even as all traditional warfighting seemed to vanish as irrelevant (heroic cavalry charges no longer carried the day or turned the tide), a new kind of industrialized war experience appeared, and we can find this experience not merely described but celebrated by Ernst Jünger in Storm of Steel, Copse 125, and other works.

The war led to the destruction of many political regimes in Europe that had endured for hundreds of years, and saw the appearance of radical new regimes like Soviet Russia, which emerged from the wreckage of a Tsarist state that could trace its origins back almost a millennium. Whether these ancient regimes were the victims of a mythology that catastrophically failed in the midst of industrialized warfare, or whether the failed regimes brought down traditional mythologies with them, is probably a chicken-and-egg question. But even as ancient regimes and their associated mythologies failed, technology triumphed, and with technology there arose new forms of human experience, the principal driver of which was continued technological innovation.

[Reactor dome being lowered into place at Shippingport Atomic Power Station in Pennsylvania]

4. The Twenty-First Century and the Energy Problem

Both steam engines and internal combustion engines exploited the energy of fossil fuels. What economists would call the negative externalities of the trade in fossil fuels that grew in the wake of the adoption of the internal combustion engine included the “resource curse,” which marred the political economy of many nation-states that possessed fossil fuels, and extensive pollution resulting from the extraction, refining, transportation, and consumption of fossil fuels. No one could have guessed, at the beginning of the twentieth century (much less at the beginning of the nineteenth century), the monstrosity that fossil-fueled internal combustion engines would become, and, by the time our civilization was utterly dependent upon the internal combustion engine, it was too late to do anything except to attempt to mitigate the damage of the entire energy infrastructure that had been created to fuel our industries.

Having realized, after the fact, the dependency of industrialized civilization upon fossil fuels, we find ourselves and our society dependent upon industries that have high energy requirements, but lacking the technology to replace these industries at scale. We are trapped by our energy needs.

I am not going to attempt to summarize the large and complex issues of the advantages and disadvantages of energy alternatives, as countless volumes have already been devoted to this topic; I will only observe that an abundant and non-polluting source of energy is necessary to the continued existence of technological civilization. We can have civilization without abundant and non-polluting sources of energy, but it will not be the energy-profligate civilization we know today. If energy is non-abundant, it must be rationed; and if energy is polluting, we will gradually but inevitably poison ourselves on our own wastes. Both alternatives are suboptimal and eventually dystopian; neither leads to future transformations of civilization that transcend the past by attaining greater complexity.

Just as there are those who argue for the continuing exploitation of fossil fuels without limit, and who appear to be prepared to accept the consequences of this unlimited use of fossil fuels, there are also those who argue for the abandonment of fossil fuels without any replacement, so that our fossil fuel dependent civilization must necessarily come to an end. Among those who argue for the abolition of energy-intensive industry, we can distinguish between those who advocate the complete abolition of technological civilization (Ted Kaczynski, John Zerzan, Derrick Jensen) and those who look toward a kind of “small is beautiful” localism of “eco-communalism” [3] that would preserve some quality of life features of industrialized civilization while severely curtailing consumerism and mass production.

Human beings would accept sacrifices on this scale, including sacrificing their energy demands, if they believed their sacrifice to be meaningful and that it contributed to some ultimate purpose (or what Paul Tillich called an “ultimate concern”). In other words, a sufficient mythological basis is necessary to justify great sacrifices. We have seen intimations of this level of ideological engagement and call to sacrifice in the most zealous environmental organizations, such as Extinction Rebellion, which cultivates a quasi-religious intensity among its followers (its “Red Brigade” protesters present themselves with a theatricality certain to attract some while repelling others; I personally find them deeply disturbing). It is unlikely that those who came to maturity within a technological civilization fully understand what the implied sacrifices would entail, but that is irrelevant to the foundation of the movement; if the movement were to be successful, the eventual regret of those caught up in it would not arrest the progress of a new ideology that sweeps aside all impediments to its triumph.

The proliferation of environmental groups since the late twentieth century (the inflection point is often given as being the publication of Rachel Carson’s Silent Spring in 1962) demonstrates that this is a growing movement, but it is not clear that the most zealous groups can seize the narrative of the movement and become the focus of environmental activism. If, however, individuals were inspired by a quasi-religious zealotry to sacrifice energy-intensive living, we cannot rule out the possibility that the intensity of environmental belief could pave the way, so to speak, toward a transformative future for civilization that did not involve energy resources equal to or greater than those in use at present.

Energy resources equal to or greater than those in use today are crucial to any other scenario for the continuation of civilization. In the same way that eight billion or more human beings can only be kept alive by a food production industry scaled as at present, and to tamper with this arrangement would be to court malnutrition and mass starvation, so too eight billion human beings can only be kept alive by an energy industry scaled as at present, and to tamper with this arrangement would be to court disaster. This disaster could be borne if everyone possessed a burning faith in the righteousness of energy sacrifice, but in planning for the needs of mass society we cannot count on mass conversion experiences; such experiences cannot be the basis of policy, because there is no way to impose this kind of belief.

One of the persisting visions of a solution for the energy problem of the twenty-first century is widely and cheaply available electricity that can be used to power electric motors that would replace the fossil fueled engines that now power our industrialized economy. Throughout the nineteenth century dominance of the steam engine and the twentieth century dominance of the internal combustion engine, electric motors were under continual development and improvement. Electric motors came into wide use in industrial applications in the twentieth century, and into limited use for transportation, especially in streetcars when electrical power could be supplied by overhead lines. This can be done, and has been done, for longer-distance electric railways as well, but the added infrastructure cost of constructing electrical power distribution lines in addition to laying track has limited electric train development. For ships and planes, electrical power has not been practicable to date. Only now, in the twenty-first century, are electrical technologies advancing to the point that electric aircraft may become practical.

The problem is not electric motors, but the electricity. Providing electricity at industrial scale is a challenge, and we meet that challenge today with fossil fuels, so that even if every form of transportation (automobiles, buses, trucks, shipping, trains, aircraft, etc.) were converted to electric motors, the electricity grid supplying these applications would still involve burning fossil fuels. A number of well-heeled businesses have recognized this and installed solar panels on the roofs of their garages so that their employees can plug in their electric cars while they work. This is an admirable effort, but it is not yet a solution for transportation at the scale demanded by our civilization.

If the electrical grid could either be developed in the direction of highly distributed generation with a large number of small electricity sources feeding the grid (which could well be renewables), or a continuation of the centralized generation model but without the fossil fuel dependency of coal, oil, and natural gas generating facilities, the use of electricity as the primary energy for industrial processes could be achieved with a minimum of compromises (primarily those compromises entailed by the difficulty of storing electricity, i.e., the battery problem). What would replace centralized generation if fossil fuel use were curtailed? There is the tantalizing promise of fusion, but before this technology can supply our energy needs, it would have to be shown to be practicable, accessible, and ubiquitous, which is an achievement above and beyond proof-of-concept for better-than-break-even fusion. At present, there seem to be few alternatives to nuclear fission.

The twenty-first century energy problem is the problem of the maintenance of the industrialized civilization that was built first upon steam engines and then upon the internal combustion engine; it is partially a problem of the direction our civilization will take, but it is not a problem of managing a transformative technology and the social changes driven by the introduction of a transformative technology. The initial introduction of powered machinery was such a transformative technology, but the ability to continue the use of powered machinery is no longer transformative, merely a continuation of more of the same.

It is as though we find ourselves, in the early twenty-first century, groping in the dark for a way forward. There is no clear path for the direction of civilization (which would include a clear path to energy resources commensurate with our energy-intensive civilization), and no consensus on defining a clear path forward. This absence of a clear path forward can be construed as a mythological deficit, or as the absence of a crucial technology. Here, I think, the balance of the argument favors a mythological deficit, because we possess nuclear technology, but no mythology surrounds the use of nuclear technology that would rationalize and justify its use at industrial scale—or, at least, no mythology sufficiently potent to overcome the objections to nuclear power.

[The unbuilt Clinch River Breeder Reactor Project (CRBRP)]

5. The World That Might Have Been: Accessible Fission Technology

One of the potential answers to the twenty-first century energy problem is nuclear power, but nuclear power is one of many nuclear technologies, and nuclear technologies taken together, had they been exploited at scale, might have been a transformative technology, both for the maintenance of industrialized civilization without fossil fuels, as well as for the transformation of our planetary industrialized civilization into a spacefaring civilization. Submarines and aircraft carriers are now routinely powered by fission reactors, and it would be possible to engineer fission reactors for railways and aircraft. Ford once proposed the Nucleon automobile, but this level of fission miniaturization is probably impractical. Still, the nuclearization of our infrastructure has stagnated. Once-ambitious plans to build hundreds of nuclear reactors across the US were scrapped, and instead we find new natural gas generating plants under construction.

Darcy Ribeiro wrote of a “thermonuclear revolution” as one of many technological revolutions constituting civilizational processes that are, “…transformations in man’s ability to exploit nature or to make war that are prodigious enough to produce qualitative alterations in the whole way of life of societies.” [4] But if we do recognize thermonuclear technologies as revolutionary, we cannot identify them as having fulfilled their revolutionary function because of the stagnation of nuclearization. The promise and potential of nuclear technology never really got started, despite plans to the contrary.

There were plans for the nuclear industry to be a major sector of the US economy, plans largely derailed by construction costs that spiraled under regulation; but the industry thus conceived, and thus derailed, was always to be held under the watchful eye of the government and its nuclear regulatory agencies. After the construction of nuclear weapons, it was too late to put the nuclear genie back in the bottle, but if the genie couldn’t be put back, it could be shackled and placed under surveillance. The real worry was proliferation. If fissile materials became easily available, other nation-states would possess nuclear weapons sooner rather than later, and the post-war political imperative was to bring into being a less dangerous world. A world in which nuclear weapons were commonplace would be far more dangerous than the world that preceded the Second World War, so despite the division of the world by the Cold War, the one policy on which almost all could agree was the tight control of fissile materials, hence the de facto constraints placed upon nuclear science, nuclear technology, and nuclear engineering. [5]

The human factor in technological development is essential, as it is in mythology. The details of a mythology may speak to one person and not another; so, too, a particular technological challenge may speak to one person and not to another. For those who might have had a special bent for nuclear technologies, their moment never arrived. At least two, perhaps three, generations of scientists, technologists, and engineers who would have dedicated their careers to the emerging and rapidly changing technology of nuclear rocketry and the application of nuclear technology to space systems had to find other uses for their talents. These careers that didn’t happen, and lives that didn’t unfold, can never be measured, but we should be haunted by the lost opportunity they represent. And perhaps we are haunted; this silent, unremarked loss would account for institutional drift and national malaise (i.e., stagnation) as readily as the absence of a mythology.

Even benign nuclear technologies that do not directly involve fissionable materials have suffered due to their expense. When funding for the Superconducting Super Collider (SSC) was cancelled (after an initial two billion dollars had been spent), an entire generation of American scientists has had to go to CERN in Geneva, because that is where the instrument that allows research at the frontiers of fundamental physics is located. The LHC is the only facility in the world for research into fundamental particle physics at these energy levels. The expense of nuclear science has been another strike against its potential accessibility. Funding for scientific research is viewed as a zero-sum game, in which building a new particle accelerator is understood to mean that some other device does not get funded. Sabine Hossenfelder’s tireless campaign of questioning the construction of ever-larger particle accelerators takes place against this background of zero-sum funding. But if science were growing exponentially, as industry grew exponentially during the industrial revolution, there would be few (or at least fewer) conflicts over funding scientific research.

Not only are nuclear technologies politically dangerous and expensive, nuclear technologies are also physically dangerous; extreme care must be taken so that nuclear materials do not kill their handlers. The “demon core” sphere of plutonium, which was slated to be the core of another implosion nuclear weapon (tentatively scheduled to be dropped August 19, but the Japanese surrendered on August 15), was responsible for the deaths of Harry Daghlian (due to an incident on 21 August 1945) and Louis Slotin (due to an incident on 21 May 1946) as they tested the core’s criticality. Fermi had warned Slotin that he would be dead within a year if he failed to maintain safety protocols, but apparently there was a certain thrill involved in “tickling the dragon’s tail.” The bravado of young men taking risks with dangerous technology is part of the risk/reward dialectic. Daghlian and Slotin were nuclear tinkerers, and it cost them their lives.

Generally speaking, industrial technologies are dangerous. The enormous machines of the early industrial revolution sometimes failed catastrophically, and took lives when they did so. Sometimes steam boilers exploded; sometimes trains jumped their tracks. Nuclear technologies are subject to dangers of this kind, as well as the unique dangers of the nuclear materials themselves. Because of this extreme danger—partly for reasons of personal safety, and partly for reasons of proliferation, which can be understood as social safety—nuclear reactors have developed toward a model of sealed containers that can operate nearly autonomously for long periods of time. [6] This limits hands-on experience with the technology and the ability to tinker with a functioning technology in order to improve efficiency and to make new discoveries.

There is a kind of dialectic in the development of technology since the advent of scientific methods: the most advanced science of the day allows for new technological innovations, but once those innovations are made available to industry, daily use by thousands, perhaps tens or hundreds of thousands, of individuals produces a level of familiarity and practical know-how that can be employed to fine-tune the use of the technology, and can sometimes be the basis of genuine innovations. Scientists design and build the prototypes of a technology, but engineers refine and improve the prototypes in industrial application, and this is a process more like tinkering than like science. So while the introduction of scientific method into the development of technology results in an inflection point (which is what the industrial revolution was), tinkering does not thereby disappear or become irrelevant.

Because of the dangers of nuclear technologies, there is very little tinkering that goes on. Indeed, I suspect that the very idea of “nuclear tinkering” would send shudders down the spine of regulators and concerned citizens alike. And yet, it would be nuclear tinkering with a variety of different designs of nuclear rockets that would lead to a more effective and efficient use of nuclear technologies. As we noted with the steam engine, incremental improvements were made throughout the seventeenth and eighteenth centuries until the efficiency of James Watt’s steam engine became possible, and most of this was the result of tinkering rather than strictly scientific research, as the science of steam engines was not made explicit until Carnot’s book fifty years after Watt’s steam engine. In the case of nuclear technology, the fundamental science was accomplished first, and only later was that science engineered into specific nuclear technologies, which may be one of the factors that has limited hands-on engagement with nuclear technologies.

[Phoebus 1 A was part of the Rover Program to build a nuclear thermal rocket.]

6. Nuclear Rocketry as a Transformative Technology

Suppose that, for any spacefaring civilization, the key and indispensable technology is nuclear rocketry, or, more generally, nuclear technology employed in spacecraft. Whether nuclear technology is employed in nuclear rockets or to deliver megawatts of power in a relatively small package (e.g., to power an ion thruster), nuclear fission could be a key means of harnessing energy on a scale that enables space exploration with an accessible technology.

In what way is nuclear technology accessible? Human civilization has been making use of nuclear fission to generate electrical power (among other uses) for more than fifty years, even as research into nuclear fusion has continued. Nuclear fusion is proving to be a difficult technology to master; a century or two may separate the practical utility of fission power from that of fusion power. In historiographical terms, fission and fusion technologies may find themselves in distinct longue durée periods, an Age of Fission and, later, an Age of Fusion. That means nuclear fission technology is potentially available and accessible throughout a period of history during which nuclear fusion technology is not yet available or accessible.

How much could be achieved in one or two hundred years of unrestrained development of nuclear fission technology and its engineering applications? With an early spacefaring breakout, this could mean one or two hundred years of building a spacefaring civilization, all the while refining and improving nuclear fission technology in a way that is only possible when a large number of individuals are involved in an industry, with, say, two or more nuclear rocket manufacturers in competition, each trying to derive the best performance from their technology.

We know that the ideas were available in abundance for the exploitation of nuclear technology in space exploration. The early efflorescence of nuclear rocket designs has been exhaustively catalogued by Winchell Chung in his Atomic Rockets website, but this early enthusiasm for nuclear rocketry became a casualty of proliferation concerns. However, the imagination revealed early in the Atomic Age demonstrates that, had the opportunity been open, human creativity was equal to the challenge, and had this industry been allowed to grow, to develop, and to adapt, the present age would not have been one of stagnation.

In a steampunk kind of way, a spacefaring civilization of nuclear rocketry would in some structural ways resemble the early industrialized civilization of steam power. The nineteenth century industrial revolution was made possible by enormous machinery—steamships, steam locomotives, steam shovels (which made it possible to dig the Panama Canal), etc. A technological civilization that projected itself beyond Earth by nuclear rocketry would similarly be attended by enormous machinery. While fission reactors can be made somewhat compact, there are lower limits to practicality even for compact reactors, so that technologies enabled by the widespread exploitation of fission technology would be built at any scale that would be convenient and inexpensive. Nuclear powered spacecraft could open up the solar system to human beings, but these craft would likely be large and require a significant contingent of engineers and mechanics to keep them functioning safely and efficiently, much as steam locomotives and steamships required a large crew and numerous specializations to operate dependably.

[The bone needle, the moveable type printing press, and the steam engine]

7. Practical, Accessible, and Ubiquitous Technologies

We can summarize the technological indispensability hypothesis as follows: being a space-capable civilization is a necessary condition of being a spacefaring civilization, but a crucial spacefaring technology is the sufficient condition that facilitates the transition from space-capable to spacefaring civilization. What makes a spacefaring technology a sufficient condition for this transition is its practicality, its accessibility, and its ubiquity. A practical technology accomplishes its end with a minimum of complexity and difficulty; an accessible technology is affordable and adaptable; ubiquitous technologies are widely available with few barriers to acquisition. Stated otherwise, practical technologies don’t break down; accessible technologies can be repaired and modified; ubiquitous technologies are easy to buy, cheap, and plentiful.

Given the technological indispensability hypothesis, we can account for the drift of contemporary technological civilization by the absence of a key technology that would have allowed our civilization to take its next step forward, and we can further identify one technology—nuclear rocketry—as the absent key technology that, had it been exploited at the scale of steam engines in the nineteenth century or internal combustion engines in the twentieth century, would have resulted in a spacefaring breakout, and therefore a transformation of civilization.

None of this is inevitable, however. The mere existence of a technology is not, in itself, sufficient to transform a society. Some technologies, probably most, are not intrinsically transformative. Of those that are transformative, not all have the potential to be practical, accessible, and ubiquitous. And of those that are socially transformative and practical, accessible, and ubiquitous, not all are adopted widely enough to have a transformational impact.

The list of technologies I cited earlier—among them, the bone needle, moveable type printing, and the steam engine—all were technologies that were transformative as well as being practical, accessible, and ubiquitous. The bone needle allowed for sewing form-fitting clothing during the last glacial maximum, therefore making it possible for human beings to expand across the entire surface of Earth. Movable type printing made books and pamphlets inexpensive and resulted in the exponential growth of knowledge; without inexpensive books and journals, the scientific revolution would not have made the impact that it did. Steam engines made the industrial revolution possible.

However, the existence of the technology alone is not sufficient; stated otherwise, it is not inevitable that a transformative technology will have the social impact that some of these technologies have had. The Chinese independently developed movable type printing, and while the technology was in limited use, it did not revolutionize Chinese society, which stagnated in spite of possessing it. There are many possible explanations for this; first and foremost, the Chinese language may simply require too many characters for movable type to be as effective as it was for languages employing phonetic symbols with a smaller character set. In other words, the transformative technology of movable type printing may not have been practical and accessible using the Chinese character set; clearly it did not achieve ubiquity.

The example of the role of the Chinese language [7] in idea diffusion points to the possibility that a sequence of technologies (language is a technology of communication) may have to unfold in a particular order, with a civilization at each developmental juncture adopting a particular key technology (for linguistic technology, this might be a syllabary or a phonetic script), in order for later transformative events in civilization to occur. Formulated otherwise, transformative changes in civilization, like the industrial revolution, or a spacefaring breakout if that were to occur, may be metaphorically compared to inserting a key into the lock, such that each successive tumbler must be positioned in a particular way in order to finally unlock the mechanism.

In light of the above, we can reformulate the technological indispensability thesis: a key spacefaring technology is the sufficient condition that facilitates the transition from space-capable to spacefaring civilization, but this crucial spacefaring technology must supervene upon the adoption of earlier technologies that facilitate and serve as the foundation for later spacefaring technology. We can call this the strong technological indispensability hypothesis, as it refers to technology alone as the transformative catalyst in civilizational change. The fact that the existence of a technology alone does not inevitably result in its industrial exploitation once again points to the role of social factors—what I would call a sufficient mythological basis for the exploitation of a technology. In a weak formulation of the technological indispensability hypothesis, a sequence of technologies must be available, but it is a mythological trigger that leads to their exploitation. Here technology is still central to the historical process, but it must be supplemented by mythology. If we take this mythological supplement to be the sufficient condition for a spacefaring breakout, then we are back at the argument I made in Bound in Shallows.

We needn’t, of course, focus on any single causal factor, such as technology. It may be both the absence of a key technology and the absence of a key mythology. Just as the absence of a mythology may have kept the technology from being exploited, the absence of the technology may have limited the mythological elaboration of its role in society. Much that I have written above about technology could be applied, mutatis mutandis, to mythology: a key mythology may need to develop organically out of previous mythologies, so that if a particular mythological tradition is absent, or develops in a different way, it cannot become the mythology that would superintend the expansion of a civilization beyond Earth. Moreover, these developments in technology and mythology may need to occur in parallel, like two keys inserted into two locks, each aligning its successive tumblers in a particular orientation—like launching a nuclear missile.

[Princeton Plasma Physics Laboratory, PFRC-2]

8. The Potential for an Age of Fusion Technology

Can we skip a stage of technological development? Can we make the transition directly from our fossil-fueled economy to a fusion-based economy, without passing through the stage of the thermonuclear revolution? Or should we regard the development of fusion technologies to be an extension of, and perhaps even the fulfillment of, the thermonuclear revolution?

Part of the promise of fusion is that it does not require fissile materials and so does not fall under the interdict that cripples the development of fission technologies, but fusion technology is not without its dangers; the promise of fusion technology is balanced by its problems. One can gain an appreciation of the complexity and difficulty of fusion engineering from a pessimistic article by Daniel Jassby, “Fusion reactors: Not what they’re cracked up to be,” which, in addition to discussing the problems of making fusion work as an energy source, also notes that the neutron flux from deuterium-tritium fusion could be used to breed uranium-238 into plutonium-239, so that fusion does not eliminate the nuclear proliferation problem (although, presumably, continued tight control of uranium could obtain non-proliferation outcomes similar to those we have today with fission). Of course, for every pessimist there is an optimist, and there are plenty of optimists about the future of fusion.

While fusion technology would not necessarily involve fissionable material, and therefore would facilitate the construction of nuclear weapons to a lesser degree than fission technologies, the capabilities that widespread exploitation of fusion technology would put into the hands of human beings would scarcely be any less frightening than nuclear weapons. In this sense, the problem of nuclear weapons proliferation is only a stand-in for a more general problem of proliferation that follows from any technological advance, as any technology that enhances human agency also enhances the ability of human beings to wage war and to commit atrocities. Biotechnology, for example, also places potentially catastrophic powers into the hands of human beings. Nuclear weapons finally pushed human agency over the threshold of human extinction and so prompted a response—international non-proliferation efforts—but this problem will re-appear whenever a technology reaches a given level of development. Will each successive technological development that pushes human agency over the threshold of human extinction provoke a similar response? And is this a mechanism that limits the technological development of civilizations generally, so that this can be extrapolated as a response to the Fermi paradox?

It may be possible for humanity to skip the stage of development that would have been represented by the widespread exploitation of thermonuclear technology (here understood as fission technologies), but skipping a stage comes with an opportunity cost: everything that might have been achieved in the meantime through thermonuclear technologies is delayed until fusion technologies can be made sufficiently practical, accessible, and ubiquitous. And because of the severe engineering challenges of fusion, the mastery of fusion technology will greatly enhance human agency; as such, it will eventually suggest the possibility of human extinction through the weaponization of fusion technologies, and so bring itself under a regime of tight control that would ensure fusion technologies never achieve a transformative role in civilization, because they never become practical, accessible, and ubiquitous.

[Mercury-vapor, fluorescent, and incandescent electrical lighting technologies]

9. Indispensability and Fungibility

The technological indispensability hypothesis implies its opposite number, which is the technological fungibility hypothesis: no technology, certainly no one, single technology, is the key to a transformative change in civilization. But what does it mean for a technology to be one technology? Are there not classes of related technologies? How do we distinguish technologies or classes of technologies?

One could argue that some particular technology is necessary to advance a civilization to a new stage of complexity, but that the nature of technology is such that, if one technology is not available (i.e., some putatively key technology is absent), some other technology will serve as well, or almost as well. If we cannot build nuclear rockets due to proliferation concerns, then we can build reusable chemical rockets and ion thrusters and solar sails. Under this interpretation, no single technology is key; what matters is how effectively some given technology is exploited.

Arguments such as this appear frequently in discussions of the ability of civilization to be rebuilt after a catastrophic failure. Some have argued that our near exhaustion of fossil fuels means that if our present industrialized civilization fails, there will be no second chance on Earth for a spacefaring breakout, because fossil fuels are a necessary condition for industrialization (and, by extension, a necessary condition for fossil fuel technologies like steam engines and internal combustion engines that are key technologies for industrialization). We have picked the low-hanging fruit of fossil fuels, so that any subsequent industrialization would have to do without them. [8]

In order to do justice to the technological fungibility hypothesis it would be necessary to formulate a thorough and rigorous distinction between technologies and engineering solutions to technological problems. This in turn would require an exhaustive taxonomy of technology. Is electric lighting a technology, while mercury-vapor lamps and fluorescent bulbs are two distinct engineering solutions to the same technological problem, or do we need to be much more specific and identify incandescent light bulbs as a single technology, with the different materials used to construct the filament being distinct engineering solutions to the technological problems posed by incandescent bulb design? If the latter, is electrical lighting then a class of technologies? Should we distinguish fungibility within a single technology (i.e., the diverse engineering expressions of one technology) or within a class of technologies? Without such a technological taxonomy, we are comparing apples to oranges, and we cannot distinguish between technological indispensability and technological fungibility.

These arguments about the fungibility of technology in industrialization also point to a parallel treatment of mythology: mythologies, too, may be fungible, and if a given mythology is not available in a culture, another could serve the same function just as well.

[Wilhelm Windelband, 1848-1915]

10. Four Hypotheses on Spacefaring Breakout

We are now in a position to distinguish four hypotheses for an historiographical explanation for a spacefaring breakout, and, by extension, for other macrohistorical transformations of civilization (beyond a narrow focus on spacefaring mythology and spacefaring technology):

  • The Mythological Indispensability Hypothesis: a key mythology is a sufficient condition for a transformation of civilization.
  • The Mythological Fungibility Hypothesis: some mythology is a sufficient condition for a transformation of civilization, but there are many such peer mythologies.
  • The Technological Indispensability Hypothesis: a key technology is a sufficient condition for a transformation of civilization.
  • The Technological Fungibility Hypothesis: some technology is a sufficient condition for a transformation of civilization, but there are many such peer technologies.

Each of these hypotheses can be given a strong form and a weak form, yielding eight permutations: strong permutations of the hypotheses are formulated in terms of a single cause; weak permutations of the hypotheses are formulated in terms of multiple causes, though one cause may predominate.

I began this essay with the assertion that civilization is the largest, the longest lived, and the most complex institution that human beings have built. This makes maintaining any hypothesis about civilization difficult, but not, I think, impossible. We cannot grow civilizations in the laboratory, and we cannot experiment with civilizations in any meaningful way. However, we can learn to observe civilizations under controlled conditions, even if we cannot control what will be the dependent variable and what the independent variable.

History is the record of controlled observation of civilization (or an implicit attempt at such), but history leaves much to be desired in terms of scientific rigor. Explicitly coming to understand history as a controlled observation of civilization would require a transformation of how history is pursued as a discipline. The conceptual framework required for this transformation does not yet exist, so we cannot pursue history in this way at the present time, but we can contribute to the formulation of the conceptual framework that will make it possible to pursue history as the controlled observation of civilization in the future.

This process of transforming the conceptual framework of history must follow the time-tested path of the sciences: making our assumptions explicit, making the principles by which we reason explicit, employing only evidence collected under controlled conditions, and so on. Another crucial element, less widely recognized, is that of formulating a conceptual framework that employs concepts of the proper degree of scientific abstraction, something I have previously discussed in Scientific Knowledge and Scientific Abstraction. This latter is perhaps the greatest hurdle for history, which has been understood as a concretely idiographic form of knowledge, in contradistinction to the nomothetic forms of knowledge of the natural sciences. [9]

In a future essay I will argue that history is intrinsically a big picture discipline, so that it must employ big picture concepts, which would make of history the antithesis of the idiographic. Moreover, there is no extant epistemology of big picture concepts (which we can also call overview concepts) that recognizes their distinctiveness and theoretically distinguishes them from smaller scale concepts, and this means that a transformation of history is predicated upon the formulation of an adequate epistemology that can clearly delineate a body of historical knowledge. In order to assess the hypotheses formulated above, it will be necessary to supply these missing elements of historical thought.

Notes

[1] I discussed Gilbert Murray on the failure of nerve in an earlier Centauri Dreams post, Where Do We Come From? What Are We? Where Are We Going?

[2] The largest internal combustion engine is the Wärtsilä-Sulzer RTA96-C; one of the remarkable things about this engine is how closely it resembles the construction of an internal combustion engine you would find in any conventional automobile.

[3] The Tellus Institute describes eco-communalism as follows: “… the green vision of bio-regionalism, localism, face-to-face democracy, small technology, and economic autarky. The emergence of a patchwork of self-sustaining communities from our increasingly interdependent world, although a strong current in some environmental and anarchist subcultures seems implausible, except in recovery from collapse.”

[4] Darcy Ribeiro, The Civilizational Process, Washington: Smithsonian Institution Press, 1968, p. 13.

[5] I have previously examined this idea in Trading Existential Opportunity for Existential Risk Mitigation: a Thought Experiment, where I posed the choice between the exploitation of nuclear technologies or the containment of nuclear technologies as a thought experiment.

[6] The newest reactor under development for the next class of US nuclear submarines, the S1B reactor, will be designed to operate for 40 years without refueling.

[7] Civilizations can and have changed their languages in order to secure greater efficiency in communication, and therefore idea diffusion. Mainland China has adopted a simplified character set. Both Japanese Kanji characters and traditional Korean characters were based on traditional Chinese models; the Japanese developed two alternative writing systems, Katakana and Hiragana (both of which are premodern in origin); the Koreans developed Hangul, credited to Sejong the Great in 1443. Under Atatürk, the Turks abandoned the Arabic script and adopted a Latin character set. Almost every civilization has adopted Hindu-Arabic numerals for mathematics.

[8] I have addressed this in answer to a question on Quora: If our civilization collapsed to pre-Industrial; do we have sufficient resources to recover (repeat the Industrial Revolution) to high tech? Or do we need to get into space on this go?

[9] On the distinction between the idiographic and the nomothetic cf. Windelband, Wilhelm, “Rectorial Address, Strasbourg, 1894,” History and Theory, Vol. 19, No. 2 (Feb., 1980), pp. 169-185.

K2-315b: Tight Orbits and the Joy of Numbers

The newly found planet K2-315b catches the eye because of its 3.14-day orbit, a catch from K2, the extended Kepler mission, with a period that reminds us of a certain mathematical constant. As I’m currently prowling through David Berlinski’s Infinite Ascent (Modern Library, 2011), a quirky and quite lively history of mathematics, the references to ‘pi in the sky’ that I’m seeing in coverage of the discovery are worth a chuckle. Maybe the Pythagoreans were right that everything is number; Pythagoras would have loved K2-315b and would have speculated on its nature.

After all, as Berlinski notes about Pythagoras (ca. 570 to ca. 490 BCE) and his followers, they were devoted to what he calls “a higher spookiness”:

The Pythagoreans never succeeded in explaining what they meant by claiming that number is the essence of all things. Early in the life of the sect, they conjectured that numbers might be the essence of all things because quite literally “the elements of numbers were the elements of all things.” In this way, Aristotle remarks, “they constructed the whole heaven out of numbers.” This view they could not sustain. Aristotle notes dryly that “it is impossible that [physical] bodies should consist of numbers,” if only because physical bodies are in motion and numbers are not. At some time, the intellectual allegiances of the sect changed and the Pythagoreans began to draw a most Platonic distinction between the world revealed by the senses and the world revealed by the intellect.

And we’re off into weird metaphysics, down a historical rabbit hole. But enough of the Pythagorean fascination with number remains that to this day we love the odd coincidence. K2-315b is the 315th planetary system discovered in the K2 data, a near miss of 314. MIT’s Julien de Wit, a co-author of the paper on the discovery, points out that “everyone needs a bit of fun these days,” a nod to the paper’s playful title: “π Earth: A 3.14 day Earth-sized Planet from K2’s Kitchen Served Warm by the SPECULOOS Team.” MIT graduate student Prajwal Niraula is lead author of the paper, published in the Astronomical Journal.

Image: Scientists at MIT and elsewhere have discovered an Earth-sized planet that zips around its star every 3.14 days. Credit: NASA Ames/JPL-Caltech/T. Pyle, Christine Daniloff, MIT.

What we know about K2-315b is that its radius is about 0.95 that of Earth and, importantly, that it orbits a cool, low-mass star about a fifth of the Sun’s size. Its mass has yet to be determined, but as MIT press materials point out, its surface temperature is around 450 K, which is about where you want your oven to be if you’re baking an actual pie. There is little likelihood of any lifeforms on this planet capable of groaning at puns, though I do think the discovery is helpful because it’s yet another case of an ultracool dwarf star that may be a target for the James Webb Space Telescope. Large transit depths make for interesting studies of planetary atmospheres.

I try to keep up with SPECULOOS, another wonderful acronym: Search for habitable Planets EClipsing ULtra-cOOl Stars. Here we’re dealing with four 1-meter telescopes at Chile’s Paranal Observatory in the Atacama Desert, plus a more recently added fifth instrument called Artemis on Tenerife, in Spain’s Canary Islands. The observing effort is led by Michael Gillon (University of Liège, Belgium) and conducted in collaboration with institutions including MIT and the University of Bern, along with the Canary Islands Institute of Astrophysics and the European Southern Observatory.

Image: The SPECULOOS project aims to detect terrestrial planets eclipsing some of the smallest and coolest stars of the solar neighborhood. This strategy is motivated by the unique possibility to study these planets in detail with future giant observatories like the European Extremely Large Telescope (E-ELT) or the James Webb Space Telescope (JWST). The exoplanets discovered by SPECULOOS should thus provide mankind with an opportunity to study the atmosphere of extrasolar worlds similar in size to our Earth, notably to search for traces of biological activity. Credit: SPECULOOS.

The K2-315b detection draws on several months of K2 observations from 2017, in which 20 transit signatures turned up repeating every 3.14 days. At that point, closer examination depended on tightening the transit timing even further, as co-author Benjamin Rackham points out:

“Nailing down the best night to follow up from the ground is a little bit tricky. Even when you see this 3.14 day signal in the K2 data, there’s an uncertainty to that, which adds up with every orbit.”
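Rackham’s point about uncertainty that “adds up with every orbit” is easy to quantify: the error in a predicted mid-transit time grows roughly linearly with the number of elapsed orbits. A minimal sketch in Python, using illustrative uncertainty values (not figures from the paper):

```python
# Transit-time uncertainty after N orbits, treating the epoch and period
# errors as independent: sigma_T(N) = sqrt(sigma_T0**2 + (N * sigma_P)**2).
import math

def transit_time_uncertainty(n_orbits, sigma_t0_min, sigma_p_min):
    """Uncertainty (minutes) in the predicted mid-transit time after n_orbits."""
    return math.sqrt(sigma_t0_min**2 + (n_orbits * sigma_p_min)**2)

# Illustrative values: a 2-minute epoch uncertainty and a 0.5-minute
# period uncertainty from the 2017 K2 data.
period_days = 3.14
years_elapsed = 2.5   # K2 (2017) to SPECULOOS follow-up (Feb 2020)
n = int(years_elapsed * 365.25 / period_days)   # ~290 orbits elapsed
print(n, round(transit_time_uncertainty(n, 2.0, 0.5), 1))
```

Even a half-minute period error balloons into an uncertainty of a couple of hours after a few hundred orbits, which is exactly why a forecasting algorithm was needed to pick the right night to observe.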

Fortunately, Rackham had developed a forecasting algorithm to pin the transits down, and subsequent observations in February of 2020 with the SPECULOOS telescopes nailed three transits, one from Artemis in Spain and the other two from the Paranal instruments. The paper points out that differences in atmospheric “mean molecular mass, surface pressure, and/or cloud/haze altitude will strongly affect the actual potential of a planet for characterization,” with ramifications for the study even of promising worlds like those circling TRAPPIST-1.

Nonetheless, K2-315b (referred to in the K2 data as EPIC 249631677) looks intriguing enough for JWST observations to be considered:

With an estimated radial velocity semi-amplitude of 1.3 m s−1 (assuming a mass comparable to that of Earth), the planet could be accessible for mass measurements using modern ultra-precise radial velocity instruments. Such possibilities and a ranking amongst the 10 best-suited Earth-sized planets for atmospheric study, EPIC 249631677 b will therefore play an important role in the upcoming era of comparative exoplanetology for terrestrial worlds. It will surely be a prime target for the generation of observatories to follow JWST and bring the field fully into this new era.

Note that reference to ‘comparative exoplanetology.’ Not all exoplanets singled out for atmospheric characterization are going to be ‘habitable’ in the sense of life as we know it. After all, we began using transmission spectroscopy to study atmospheres by working with ‘hot Jupiters’ like HD 209458b. We learn as we go, and firming up our methods by studying small planets around ultracool dwarf stars within 100 parsecs or so is part of the path toward finding a living world.
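Incidentally, the 1.3 m s−1 semi-amplitude quoted from the paper is easy to sanity-check with the standard radial-velocity relation, assuming an Earth-mass planet and a host of roughly 0.2 solar masses (my assumption, scaled from the star being about a fifth the Sun’s size):

```python
# RV semi-amplitude for a planet of negligible mass relative to its star:
# K = 28.4329 m/s * (mp sin i / M_Jup) * (M_star/M_sun)^(-2/3) * (P/1 yr)^(-1/3)
import math

def rv_semi_amplitude(p_days, mp_mearth, mstar_msun, e=0.0, sin_i=1.0):
    """Radial-velocity semi-amplitude in m/s (planet mass << stellar mass)."""
    p_years = p_days / 365.25
    mp_mjup = mp_mearth / 317.8   # Earth masses -> Jupiter masses
    return (28.4329 * mp_mjup * sin_i
            / math.sqrt(1.0 - e**2)
            * mstar_msun**(-2.0 / 3.0) * p_years**(-1.0 / 3.0))

k = rv_semi_amplitude(3.14, 1.0, 0.2)  # ~1.3 m/s
print(round(k, 2))
```

The short period and low stellar mass are what push an Earth-mass signal up to a level modern spectrographs can plausibly reach.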

The paper is Niraula et al., “π Earth: a 3.14-day Earth-sized Planet from K2’s Kitchen Served Warm by the SPECULOOS Team,” Astronomical Journal Vol. 160, No. 4 (21 September 2020). Abstract / Preprint.


Radar for a Giant Planet’s Moons

One of my better memories involving space exploration is getting the chance to be at the Jet Propulsion Laboratory to see the Mars rovers Spirit and Opportunity just days before they were shipped off to Florida for their eventual launch. Being near an object that, though crafted by human hands, is about to be a presence on another world is an unusual experience, one that made me reflect on artifacts from deep in the human past and their excavation by archaeologists today. Will future humans one day recover our early robotic explorers?

That reflection was prompted by news from JPL that engineers have delivered the key elements of a critical ice-penetrating radar instrument for the European Space Agency’s mission to three of Jupiter’s icy moons. JUICE — JUpiter ICy moons Explorer — is scheduled for a launch in 2022, with plans to orbit Jupiter for three years, involving multiple flybys of both Europa and Callisto, with eventual orbital insertion at Ganymede. Analyses of the interiors as well as surfaces of the three moons should vastly improve our knowledge of their composition.

Image: NASA’s Jet Propulsion Laboratory built and shipped the receiver, transmitter, and electronics necessary to complete the radar instrument for Jupiter Icy Moons Explorer (JUICE), the ESA (European Space Agency) mission to explore Jupiter and its three large icy moons. In this photo, shot at JPL on April 27, 2020, the transmitter undergoes random vibration testing to ensure the instrument can survive the shaking that comes with launch. Credit: NASA/JPL-Caltech.

Here again we’re looking at something in the hands of humans on Earth that will one day move out beyond our orbit, in this case to the moons of our system’s largest planet, sending back priceless data. On a practical level, this is what people in the space exploration business do. On the level of sheer human response, my own at least, looking at how we build our spacecraft puts a bit of a chill up my spine, the good kind of chill that signals being in the presence of something profound, something caught up in what seems a hard-wired human need to explore.

The words “ice-penetrating radar” should resonate among all of those who wonder about the ocean under the ice at Europa. But of course we also have reason to believe that both Ganymede and Callisto have oceans whose depths we have yet to measure. Getting a sense for how thick the ice is on these worlds will be part of what the JUICE mission’s RIME instrument will, we can hope, deliver. RIME — Radar for Icy Moon Exploration — is said to have the capability of sending out radio waves that can penetrate up to 10 kilometers deep, reflecting off subsurface features and helping us figure out the thickness of the ice.
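The 10-kilometer figure translates directly into timing: an ice-penetrating radar infers depth from the two-way travel time of the echo, slowed by the refractive index of the ice. A rough sketch with generic cold-ice numbers (these are textbook values, not RIME specifications):

```python
# Depth of a subsurface reflector from two-way radar travel time:
# d = c * t / (2 * n_ice), where n_ice = sqrt(relative permittivity).
import math

C = 299_792_458.0          # speed of light in vacuum, m/s
N_ICE = math.sqrt(3.15)    # cold water ice has relative permittivity ~3.15

def ice_depth_m(two_way_time_s):
    """Depth (m) of a reflector given the echo's round-trip time."""
    return C * two_way_time_s / (2 * N_ICE)

def echo_delay_s(depth_m):
    """Round-trip time (s) for an echo from a reflector at depth_m."""
    return 2 * depth_m * N_ICE / C

# A reflector at the quoted 10 km maximum depth returns an echo after ~118 microseconds.
print(round(echo_delay_s(10_000) * 1e6, 1))
```

The hard part in practice is not the timing but pulling that faint, delayed echo out of the much stronger surface return.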

Image: The Radar for Icy Moon Exploration, or RIME, instrument is a collaboration by JPL and the Italian Space Agency (ASI) and is one of ten that will fly aboard JUICE. This photo, shot at JPL on July 23, 2020, shows the transmitter as it exits a thermal vacuum chamber. The test is one of several designed to ensure the hardware can survive the conditions of space travel. The thermal chamber simulates deep space by creating a vacuum and by varying the temperatures to match those the instrument will experience over the life of the mission. Credit: NASA/JPL-Caltech.

And as we all know, work on anything these days is complicated by COVID-19, which has forced many JPL employees to work remotely and delayed equipment testing, including the vibration, shock and thermal vacuum tests that ensure the equipment is ready for the deep space environment. The engineers returning to work after the delay under new safety protocols faced a tight schedule, but they made it work. JPL delivered the transmitter and receiver for RIME along with electronics necessary for communicating with its antenna.

All this occurs as part of a collaboration between JPL and the Italian Space Agency (ASI). The RIME instrument is led by principal investigator Lorenzo Bruzzone (University of Trento, Italy). As to JPL’s role under trying pandemic conditions, co-principal investigator Jeffrey Plaut says:

“I’m really impressed that the engineers working on this project were able to pull this off. We are so proud of them, because it was incredibly challenging. We had a commitment to our partners overseas, and we met that – which is very gratifying.”

Gratifying indeed, and a reminder that along with JUICE, we can also anticipate NASA’s Europa Clipper, set to launch some time in the mid-2020s. Europa Clipper should arrive about the same time as JUICE, and will perform multiple flybys of Europa. Will we be able to determine the thickness of Europa’s frozen surface from the combined data of both missions? A relatively thin crust would make for the possibility of eventual penetration by instruments for a look at what lies beneath, but a shell of 15 to 25 kilometers in thickness would call for other strategies.

Image: The European Space Agency (ESA) Jupiter Icy Moons Explorer (JUICE) spacecraft explores the Jovian system in this illustration. Credit: ESA/NASA/ATG medialab/University of Leicester/DLR/JPL-Caltech/University of Arizona.


On White Dwarf Planets as Biosignature Targets

So often a discovery sets off a follow-up study that strikes me as even more significant in practical terms. This is not for a moment to downplay the accomplishment of Andrew Vanderburg (University of Wisconsin – Madison) and team that discovered a planet in close orbit around a white dwarf. This is the first time we’ve found a planet that has survived its star’s red giant phase and remains in orbit around the remnant, and quite a tight orbit at that. Previously, we’ve had good evidence only of atmospheric pollution in such stars, indicating infalling material from possible asteroids or other objects during the primary’s cataclysmic re-configuration.

The white dwarf planet, found via data gathered from TESS (the Transiting Exoplanet Survey Satellite) and the Spitzer Space Telescope, makes for quite a discovery. But coming out of this work, I also love the idea of studying such a world with tools we’re likely to have soon, such as the James Webb Space Telescope. On that score, Lisa Kaltenegger (Carl Sagan Institute, Cornell University), working with Ryan MacDonald and a team that includes Vanderburg, has shown how JWST could identify chemical signatures in the atmospheres of possible Earth-like planets around white dwarf stars. Assuming we find such planets, and I suspect we will.

The planet at the white dwarf WD 1856+534 is anything but Earth-like. It’s running around the star every 34 hours, which means it’s on a pace 60 times faster than Mercury orbits the Sun. The planet here is also the size of Jupiter, and what a system we’ve uncovered — the new world orbits a star that is itself only 40 percent larger than Earth (imagine the transit depth possible when a white dwarf is transited by a gas giant!). In this planetary system, the planet we’ve detected is about seven times larger than its primary. Says Vanderburg:

“WD 1856 b somehow got very close to its white dwarf and managed to stay in one piece. The white dwarf creation process destroys nearby planets, and anything that later gets too close is usually torn apart by the star’s immense gravity. We still have many questions about how WD 1856 b arrived at its current location without meeting one of those fates.”

Image: In this illustration, WD 1856b, a potential Jupiter-size planet, orbits its dim white dwarf star every day-and-a-half. WD 1856 b is nearly seven times larger than the white dwarf it orbits. Astronomers discovered it using data from NASA’s Transiting Exoplanet Survey Satellite (TESS) and now-retired Spitzer Space Telescope. Credit: NASA GSFC.
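The numbers above are worth checking: Mercury’s 88-day year against a 34-hour orbit, and what a ‘transit depth’ even means when the planet is larger than its star (the dip saturates, because the star is blotted out entirely). A quick sketch:

```python
# Two sanity checks on the WD 1856 b figures quoted in the text.

MERCURY_PERIOD_H = 87.97 * 24          # Mercury's orbital period, in hours
pace_ratio = MERCURY_PERIOD_H / 34.0   # ~62, i.e. "about 60 times faster"

def transit_depth(rp_over_rstar):
    """Fractional flux dip ~ (Rp/Rstar)**2, capped at 1 (total occultation)."""
    return min(1.0, rp_over_rstar**2)

# A planet ~7x the size of its star blocks the stellar disk completely.
print(round(pace_ratio), transit_depth(7.0))
```

That saturated depth is why white dwarf transits by large planets are so easy to spot once you catch them, and why small rocky planets around white dwarfs would still give unusually deep, characterization-friendly transits.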

So on the immediate question of WD 1856 b, let’s note that we have a serious problem explaining how the planet got this close to the white dwarf in the first place. White dwarfs form when stars like the Sun swell into red giants as they run out of fuel, a phase in which some 80 percent of the star’s mass is ejected, leaving a hot core — the white dwarf — behind. Anything on a relatively close orbit would presumably have been swallowed up during the stellar expansion phase.

Which is why Vanderburg’s team believes the planet probably formed at least 50 times farther from the star than its present location, later migrating inward, perhaps through interactions with other large bodies near its original orbit, with its orbit circularizing as tidal forces dissipated energy. Such instabilities could bring a planet inward, as could other scenarios involving the red dwarfs G229-20 A and B in this triple star system, although the paper plays down this idea, as well as the notion of a rogue star acting as a perturber. Other Jupiter-like planets, presumably long gone, seem the best bet to explain this configuration.

From the paper:

…a more probable formation history is that WD 1856 b was a planet that underwent dynamical instability. It is well established that when stars evolve into white dwarfs, their previously stable planetary systems can undergo violent dynamical interactions that excite high orbital eccentricities. We have confirmed with our own simulations that WD 1856 b-like objects in multi-planet systems can be thrown onto orbits with very close periastron distances. If WD 1856 b were on such an orbit, the orbital energy would have rapidly dissipated, owing to tides raised on the planet by the white dwarf. The final state of minimum energy would be a circular, short-period orbit. The advanced age of WD 1856 (around 5.85 Gyr) gives plenty of time for these relatively slow (of the order of Gyr) dynamical processes to take place. In this case, it is no coincidence that WD 1856 is one of the oldest white dwarfs observed by TESS.

Did you catch that reference to the white dwarf’s age? The 5.85 billion year frame gives ample opportunity for such orbital adjustments to take place, winding up with the observed orbit. Or perhaps we’re dealing with interactions with a debris disk around the star, as co-author Stephen Kane (UC-Riverside, and a member of the TESS science team) hypothesizes:

“In this case, it’s possible that a debris disc could have formed from ejected material as the star changed from red giant to white dwarf. Or, on a more cannibalistic note, the disc could have formed from the debris of other planets that were torn apart by powerful gravitational tides from the white dwarf. The disc itself may have long since dissipated.”

But back to Lisa Kaltenegger, lead author of a paper in Astrophysical Journal Letters asking whether planets around an exposed stellar core — a white dwarf — would be workable targets for JWST searches for atmospheric biosignatures. Here the news is good: Kaltenegger believes such detections would be possible, assuming rocky planets exist around these stars. WD 1856 b gives hope that such a world could survive in the white dwarf’s habitable zone for longer than the time it took life to develop on Earth. The implications are intriguing:

“What if the death of the star is not the end for life?” Kaltenegger said. “Could life go on, even once our sun has died? Signs of life on planets orbiting white dwarfs would not only show the incredible tenacity of life, but perhaps also a glimpse into our future.”

Image: In newly published research, Cornell researchers show how NASA’s upcoming James Webb Space Telescope could find signatures of life on Earth-like planets orbiting burned-out stars, known as white dwarfs. Credit: Jack Madden/Carl Sagan Institute.

The Kaltenegger team used methods developed to study gas giant atmospheres and combined them with computer models configured to apply the technique to small, rocky white dwarf planets. The researchers found that JWST, when observing an Earth-class planet around a white dwarf, could detect carbon dioxide and water with data from as few as 5 transits. According to co-lead author Ryan MacDonald, it would take a scant two days of observing time with JWST to probe for the classic biosignature gases ozone and methane. Adds MacDonald:

“We know now that giant planets can exist around white dwarfs, and evidence stretches back over 100 years showing rocky material polluting light from white dwarfs. There are certainly small rocks in white dwarf systems. It’s a logical leap to imagine a rocky planet like the Earth orbiting a white dwarf.”
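The ‘as few as 5 transits’ claim above reflects a standard photon-noise argument: detection significance grows roughly as the square root of the number of stacked transits. A toy illustration (the per-transit significance below is invented for the example, not taken from the paper):

```python
# If one transit yields detection significance s1 for a given molecule,
# stacking N transits gives ~ s1 * sqrt(N) in the photon-noise limit,
# so the transits needed for a target significance scale as (target/s1)**2.
import math

def transits_needed(sigma_single, target_sigma=5.0):
    """Number of stacked transits to reach target_sigma, photon-noise limited."""
    return math.ceil((target_sigma / sigma_single) ** 2)

# With ~2.3 sigma per transit (an assumed value), five transits reach 5 sigma.
print(transits_needed(2.3))
```

The deep transits of white dwarf systems push the per-transit significance up, which is what makes such small transit counts plausible in the first place.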

So we have a possible target type to add to the exoplanet mix when it comes to nearby white dwarf systems. WD 1856 is about 80 light years out in the direction of Draco. The white dwarf formed over 5 billion years ago, as noted in the paper, but the age of the original Sun-like star may take us back as much as 10 billion years. The post-red giant phase allows plenty of time for orbital adjustment, drawing rocky worlds inward and circularizing their orbits. Will we find such planets in this setting in the near future? The hunt will surely intensify.

The paper is Vanderburg et al., “A giant planet candidate transiting a white dwarf,” Nature 585 (16 September 2020), 363-367 (abstract). The Kaltenegger paper is “The White Dwarf Opportunity: Robust Detections of Molecules in Earth-like Exoplanet Atmospheres with the James Webb Space Telescope,” Astrophysical Journal Letters Vol. 901, No. 1 (16 September 2020). Abstract.


SETI and Altruism: A Dialogue with Keith Cooper

Keith Cooper’s The Contact Paradox is as thoroughgoing a look at the issues involved in SETI as I have seen in any one volume. After I finished it, I wrote to Keith, a Centauri Dreams contributor from way back, and we began a series of dialogues on SETI and other matters, the first of which ran here last February as Exploring the Contact Paradox. Below is a second installment of our exchanges, which were slowed by external factors at my end, but the correspondence continues. What can we infer from human traits about possible contact with an extraterrestrial culture? And how would we evaluate its level of intelligence? Keith is working on a new book involving both the Cosmic Microwave Background and quantum gravity, the research into which will likewise figure into our future musings that will include SETI but go even further afield.

Keith, in our last dialogue I mentioned a factor you singled out in your book The Contact Paradox as hugely significant in our consideration of SETI and possible contact scenarios. Let me quote you again: “Understanding altruism may ultimately be the single most significant factor in our quest to make contact with other intelligent life in the Universe.”

I think this is exactly right, but the reasons may not be apparent unless we take the statement apart. So let’s start today by talking about altruism before we explore the question of ‘deep time’ and how our species sees itself in the cosmos. I think we have ramifications here for how we deal not only with extraterrestrial contact but issues within our own civilization.

I’m puzzled by the seemingly ready acceptance of the notion that any extraterrestrial civilization will be altruistic or it could not have survived. Perhaps it’s true, but it seems anthropocentric given our lack of knowledge of any life beyond Earth. What, then, did you mean with your statement, and why is understanding altruism a key to our perception of contact?

  • Keith Cooper

I think so much that is integral to SETI comes down to our assumptions about altruism. How often do we hear that an older extraterrestrial society will be altruistic, as though altruism were the end result of some kind of evolutionary trajectory. But there are several problems with this. One is that the person making such claims – usually an astrophysicist straying into areas outside their field of expertise – is often conflating ‘altruism’ with ‘being nice’.

And sure, maybe aliens are nice. I kind of get the logic, even though it’s faulty. The argument is that if they are still around then they must have abandoned war long ago, otherwise they would have destroyed themselves by now, ergo they must be peaceful.

And it’s entirely possible, I suppose, that a civilisation may have developed in that direction. In The Better Angels of Our Nature, Steven Pinker attempted to argue that our civilization is becoming more peaceable over time, although Pinker’s analysis and conclusions have been called into question by numerous academics.

  • Paul Gilster

I hope so. I think the notion is facile at best.

  • Keith Cooper

It’s what human societies should always aim for, I truly believe that, but whether we can achieve it or not is another question. When it comes to SETI, we seem to home in on the most simplistic definitions of what an extraterrestrial society might be like – ‘they’ve survived this long, they must be peaceful’. A xenophobic civilization might be at peace with its own species, but malevolent towards life on other planets. A planet could be at peace, but that peace could be implemented by some 1984-style dystopian dictatorship where nobody is free. Neither of which is particularly ‘nice’, and we could think of many other scenarios, too.

Nevertheless, this myth of wise, kindly aliens has grown up around SETI – that was the expectation, 60 years ago, that ET would be pouring resources into powerful beacons to make it easy for us to detect them. To transmit far and wide across the Galaxy, and to maintain those transmissions for centuries, millennia, maybe even millions of years, would require huge amounts of resources. When we consider that the aliens may not even know for sure whether they share the Universe with other life, it’s a huge gamble on their part to sacrifice so much time and energy in trying to communicate with others in the Universe.

If we look at what altruism really is, and how that may play into the likelihood that ET will want to beam messages across the Galaxy given the cost in time and energy, then it poses a big problem for SETI. ET really needs to help us out – to display a remarkable degree of selfless altruism towards us – by plowing all those resources into transmitting signals that we’ll be able to detect.

One of the forms that altruism can take in nature is kin selection. We can see how this has evolved: lifeforms want to ensure that their genes are passed on to later generations, so a parent will act to protect and give the greatest possible advantage to their child, or nieces and nephews. That’s a form of altruism predicated by genes, not ethics. Unless some form of extreme panspermia has been at play, alien life would not be our kin, so they would be unlikely to show us altruistic behaviour of this type.

  • Paul Gilster

But we haven’t exhausted all the forms altruism might take. Is there an expectation of mutual benefit that points in that direction?

  • Keith Cooper

Okay, so what about quid pro quo? That’s a form of reciprocal altruism. Consider, though, the time and distance separating the stars. It could take centuries or millennia for a message to reach a destination, and there’s no guarantee that anyone is going to hear that message, nor that they will send a reply. That’s a long time to wait for a return on an investment, if there even is a return. Why plow so many resources into transmitting if that’s the case? What’s in it for them?

So if kin selection and reciprocal altruism are not really tailored for interstellar communication, then it seems more unlikely that we will hear from aliens. Of course, there is always the possibility of exceptions to the rule, one-off reasons why a society might wish to broadcast its existence. Maybe ET wants to transmit a religious gospel to the stars to convert us all. Maybe they are about to go extinct and want to send one last hurrah into the Universe. But these would not be global reasons, and we shouldn’t expect alien societies to make it easy for us to discover them.

  • Paul Gilster

Good point. Why indeed should they want us to discover them? I can think of reasons a society might decide to broadcast its existence to the stars, though I admit that it’s a bit of a strain. But aliens are alien, right? So let’s assume some may want to do this. I like your mention of reciprocal altruism, as it’s conceivable that an urge to spread knowledge, for example, might result in a SETI beacon of some kind that points to an information resource, the fabled Encyclopedia Galactica. What a gorgeous dream that something like that might be out there.

Curiosity leads where curiosity leads. I wonder if it’s a universal trait of intelligence?

  • Keith Cooper

It’s interesting that you describe the Encyclopedia Galactica as a ‘dream’, because I think that’s exactly what it is, a fantasy that we’ve imagined without any strong rationale other than falling back on this outdated idea that aliens are going to act with selfless altruism. As David Brin argues, if you pump all your knowledge into space freely, what do you have left to barter with? And yet it is expectations such as receiving an Encyclopedia Galactica that still drive SETI and influence the kinds of signals that we search for. I really do think SETI needs to move on from this quaint idea. But I digress.

  • Paul Gilster

It’s certainly worth keeping up the SETI effort just to see what happens, especially when it’s privately funded. But I want to circle back around. I’ve always had an interest in what the general public’s reaction to the idea of extraterrestrial civilization really is. In the 16 years that I’ve been writing about this and talking to people, I’ve found a truly lopsided percentage that believe as a matter of course that an advanced civilization will be infinitely better than our own. This plays to a perceived disdain for human culture and a faith in a more beneficent alternative, even if it has to come from elsewhere to set right our fallen nature.

Put that way, it does sound a bit religious, but so what — I’m talking about how human beings react to an idea. Humans construct narratives, some of them scientific, some of them not.

I’m also talking about the general public, not people in the interstellar community, or scientists actively working on these matters. As you would imagine with COVID about, I’m not giving many talks these days, but when I was fairly active, I’d always ask audiences of lay people what they thought of intelligent aliens. The reaction was almost always along two lines: 1) The idea used to seem crazy, but now we know it’s not. And 2) it would be something like a European Renaissance all over again if we made contact, because they would have so much to teach us.

A golden age, with its Dantes and Shakespeares and Leonardos. Or think of the explosion of Chinese culture and innovation in the Tang Dynasty, or Meiji Japan, all this propelled by the infusion not of recovered ancient literature and teaching, as in the European example, but materials discovered in the evidently limitless databanks of the Encyclopedia Galactica.

I ran into these audience reactions so frequently, both in talks to interested audiences and in conversations among neighbors and friends, that I had to ask what was propelling the Hollywood tradition of scary movies about alien invasion. What about Independence Day, with its monstrous ships crushing the life out of our planet? So I would ask: if you believe all this altruistic stuff, why do you keep going to these sensational movies of death and destruction?

The answer: Because people think they’re fun. They’re a good diversion, a comic book tale, a late night horror movie where getting scared is the point. Whole film franchises are built around the idea that fear is addictive when experienced within the cocoon of a home or theater. Thus the wave of horror fiction that has been so prominent in recent years. It’s because people like being scared, and the reason for that goes a lot deeper into psychiatry than I would know how to go. I admit I may not believe in Cthulhu, but I love going to Dunwich with H. P. Lovecraft.

Keith, as we both know — and you, as the author of The Contact Paradox, would know a lot more about this than I do — there is an active controversy over messaging to the stars, the practice known as METI. I’ve expressed my own opposition to METI on many an occasion in these pages, and the discussion has always been robust and contentious, with the evidently minority position being that we should hold back on such broadcasts unless we reach international consensus, and the majority position being that it doesn’t matter because sufficiently intelligent aliens already know about us anyway.

I don’t want to re-litigate any of that here. Rather, I just want to note that if the anti-METI position gets loud pushback in the interstellar community, it gets even louder pushback among the general public. In my talks, bringing up the dangers of METI invariably causes people to accuse me of taking films like Independence Day too seriously. From what I can see from my own experience, most people think ETI may be out there but assume that if it ever shows up on our doorstep, it will represent a refined, sophisticated, and peaceful culture.

I don’t buy that idea, but I’m so used to seeing it in print that I was startled to read this in James Trefil and Michael Summers’ recent book Imagined Life. The two first tell a tale:

Two hikers in the mountains encounter an obviously hungry grizzly bear. One of the hikers starts to shed his backpack. The other says, “What are you doing? You can’t run faster than that bear.”

“I don’t have to run faster than the bear — I just have to run faster than you.”

Natural selection doesn’t select for bonhomie or moral hair-splitting. The one whose genes will survive in the above encounter is the faster runner. Trefil and Summers go on:

So what does this tell us about the types of life forms that will develop on Goldilocks worlds? We’re afraid that the answer isn’t very encouraging, for the most likely outcome is that they will probably be no more gentle and kind than Homo sapiens. Looking at the history of our species and the disappearance of over 20 species of hominids that have been discovered in the fossil record, we cannot assume we will encounter an advanced technological species that is more peaceful than we are. Anyone we find out there will most likely be no more moral or less warlike than we are…

That doesn’t mean any ETI we find will try to destroy us, but it does give me pause when contemplating the platitudes of the original The Day the Earth Stood Still movie, for example. It’s so easy to point to our obvious flaws as humans, but the more likely encounter with ETI, if we ever meet them face to face, will probably be deeply enigmatic and perhaps never truly understood. I also argue that there is no reason to assume that individual members of a given species will not have as much variation between them as do individual humans.

It’s a long way from Francis of Assisi to Joseph Goebbels, but both were human. So what happens, Keith, if we do get a SETI signal one day? And then, a few days later, another one that says, “Disregard that first message. The one you want to talk to is me”?

  • Keith Cooper

I’m hesitant to rely too closely on comparisons with ourselves and our own evolution, since ultimately we are just a sample of one, and we could be atypical for all we know. I see what Trefil and Summers are saying, but equally I could imagine a world, perhaps with a hostile environment, where species have to work together to survive. Instead of survival of the fittest, it becomes survival of those who cooperate. And suppose intelligent life evolves to be post-biological. What role do evolutionary hangovers play then?

I think the most we can say is that we don’t know, but that for me is enough of a reason to be cautious both about the assumptions we make in SETI, and about the possible consequences of METI.

But you’re right about our flawed assumption that aliens will exist in a monolithic culture. Unless there’s some kind of hive mind or network, there will likely be variation and dissonance, and different members of their species may have different reactions to us.

If we detected two beacons in the same system, I think that would be great! Why? Because it would give us more information about them than a single signal would. Since we will have no knowledge of their language, their culture, their history or their biology, being able to understand their message in even the most general sense is going to be exceptionally difficult.

So, if we detect a signal, we might not be able to decipher it or learn a great deal. But if we detect two different, competing beacons from the same planet, or planetary system, then we will know something about them that we couldn’t know from just one unintelligible signal, which is that they are not necessarily a monolithic culture, and that their society may contain some dissonance, and this may influence how, and if, we respond to their messages.

For me, the name of the game is information. Learn as much about them as we can before we embark on making contact, because the more we know, the less likely we are to be surprised, or to stumble into a misunderstanding that could be catastrophic.

  • Paul Gilster

Just so. But there, you see, is the reason why I think we have to be a lot more judicious about METI. It’s just conceivable that, to them as well as us, content matters.

But look, I see you’re headed in a direction I wanted to go. If information is the name of the game, then information theory is going to play a mighty role in our investigations. So it’s no surprise that you dwell on the matter in The Contact Paradox. Here we’re in the domain of Claude Shannon at Bell Laboratories in the 1940s, but of course signal content analysis applies across the whole spectrum of information transmittal. Shannon entropy measures disorder in information, which is a way of saying that it lets us analyze communications quantitatively.
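Shannon’s measure is easy to make concrete. As a minimal sketch (my own illustration, not anything from Shannon’s papers or Keith’s book), first-order entropy is just H = −Σ p·log₂(p) over symbol frequencies:

```python
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """First-order Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A repetitive signal carries less information per symbol than a varied one.
print(shannon_entropy("aaaaaaab"))   # low: mostly predictable
print(shannon_entropy("abcdefgh"))   # → 3.0 bits, the maximum for 8 symbols
```

A uniform spread over 2ⁿ symbols gives exactly n bits per symbol, which is why entropy serves as a quantitative yardstick for how much a signal could be saying.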

Do you know Stephen Baxter’s story “Turing’s Apple”? Here a brief signal is detected by a station on the far side of the Moon, no more than a second-long pulse that repeats roughly once a year. It comes from a source 6,500 light years from Earth, and Baxter delightfully presents it as a ‘Benford beacon,’ after the work Jim and Greg Benford have done on the economics of extraterrestrial signaling. Their insight is that instead of a strong, continuous signal, we’re more likely to find something like a lighthouse that sweeps its beam around the galaxy, in this case along the galactic plane, where the bulk of the stars are to be found.

Baxter’s story sees the SETI detection as a confirmation rather than a shock, a point I’m glad to see emerging, since I think the idea of extraterrestrial intelligence is widely understood. No great revolution in thought follows, but rather a deepening acceptance of the fact that we’re not alone.

Anyway, in the story, the signal is investigated, six pulses being gathered over six years, with the discovery that this ETI uses something like wavelength division multiplexing, dividing the signal into sections packed with data. Scientists turn to Zipf graphing to tackle the problem of interpretation – as you present this in your book, Keith, this means breaking the message into components and going to work on the relative frequency of appearance of these components. From this they deduce that the signal is packed with information, but what are its elements?
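Zipf analysis of this kind is simple to sketch. In natural languages the frequency of a component tends to fall off as roughly 1/rank, so the slope of log-frequency against log-rank comes out near −1. A toy version (my own least-squares fit, not the method Baxter’s analysts or any real SETI pipeline uses):

```python
from collections import Counter
from math import log

def zipf_slope(tokens):
    """Least-squares slope of log(frequency) vs. log(rank).
    Natural-language corpora tend toward a slope near -1."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

text = ("the quick brown fox jumps over the lazy dog and the dog "
        "barks at the fox while the fox runs").split()
print(round(zipf_slope(text), 2))  # negative slope: a Zipf-like falloff
```

A slope near −1 hints that the components behave like words in a language rather than random noise (slope near 0) — which is exactly the kind of first-pass diagnosis the story’s scientists are after.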

Shannon entropy analysis looks for the relationships between signal elements: how likely is it that a particular element will follow another particular element? Entropy levels can be deduced – how likely are not just pairs of elements to appear, but triples of elements? In English, for example, how likely is it that we might find a G following an I and an N? Dolphin languages get as high as fourth-order entropy by this analysis, as you know. Humans get up to eighth or ninth. Baxter’s signal analysts come up with a Shannon entropy in the range of 30 for ETI.

Let me quote this bit, because I love the idea:

“The entropy level breaks our assessment routines… It is information, but much more complex than any human language. It might be like English sentences with a fantastically convoluted structure – triple or quadruple negatives, overlapping clauses, tense changes… Or triple entendres, or quadruples.”

We’re in challenging territory here. In the story, ETI is a lot smarter than us, based on Shannon entropy. The presence of this kind of complexity in a signal, in Baxter’s scenario, is evidence that the detected message could not have been meant for us, because if it were, the broadcasting civilization would have ‘dumbed it down’ to make it accessible. Instead, humanity has found a signal that demonstrates the yawning gap between humanity and a culture that may be millions of years old. If we find something like this, it’s likely we would never be able to figure it out.

Would something like this be a message, or perhaps a program? If we did decode it, what would it mean? An even better question: What might it do? Baxter’s story is so ingenious that I don’t want to give away its ending, but suffice it to say that impersonal forces may fall well outside our conventional ideas of ‘friendly’ vs. ‘hostile’ when it comes to bringing meaning to the cosmos.

But let’s wrap back around to Shannon and Zipf, and the SETI Institute’s Laurance Doyle, to whom you talked as you worked on The Contact Paradox. Doyle told you that communication complexity invariably tells us something about the cultural complexity of the beings that sent the message. And I think the great point that he makes is that the best way to approach a possible signal is by studying how communications systems work right here on Earth. Thus Claude Shannon, who started working out his theories during World War II, gets applied to the question of species intelligence (dolphins vs. humans) and now to hypothetical alien signals.

In a broader sense, we’re exploring what intelligence is. Does intelligence mean technology, or are technological societies a subset of all the intelligent but non-tool making cultures out there? SETI specifically targets technology, which may itself be a rarity even in a universe awash with forms of life with high Shannon entropy in communications they make only among themselves.

A great benefit of SETI is that it is teaching us just how much we don’t know. Thus the recent Breakthrough Listen breakdown of their findings, which expands the number of stars analyzed by a factor of roughly 220, all at various distances and all within the ‘field of view,’ so to speak, of the antennae at Green Bank and Parkes. Still more recent work at the Murchison Widefield Array tackles an even vaster starfield. Still no detections, but we’re getting a sense of what is not there in terms of Arecibo-like signals aimed intentionally at us.

So how do you react to the idea that, in the absence of information to analyze from an actual technological signal, we will always be doing no more than collecting data about a continually frustrating ‘great silence?’ Because SETI can’t ever claim to have proven there is no one there.

  • Keith Cooper

That’s one of my unspoken worries about SETI; how long do we give it before we start to suspect that we’re alone? People might say, well, we’ve been searching for 60 years now – surely that’s long enough? Of course, modern SETI may be 60 years old, but we’ve certainly not accrued 60 years’ worth of detailed SETI searches. We’ve barely scratched the tip of the iceberg bobbing up above the cosmic waters.

So how long until we can safely say we’ve not only seen the tip of the iceberg, but that we’ve also taken a deep dive to the bottom of it as well? Maybe our limited human attention spans will come into play long before then, and we’ll get bored and give up. I think we can also be too quick to assume that there’s no one out there. Take the recent re-analysis of Breakthrough Listen data, which prompted one of the researchers, Bart Wlodarczyk-Sroka of the University of Manchester, to declare:

“We now know that fewer than one in 1600 stars closer than about 330 light years host transmitters just a few times more powerful than the strongest radar we have here on Earth. Inhabited worlds with much more powerful transmitters than we can currently produce must be rarer still.”

Except that we don’t know that at all. All we can say is that no one was transmitting a radio signal our way during the brief time that Breakthrough was listening. We could easily have missed a Benford beacon, for instance. It’s a problem of expectation versus reality – we expect these powerful, omnipresent beacons, and when we don’t find them we jump to the conclusion that ET must not exist, rather than entertaining the possibility that our expectation is flawed.

The Encyclopedia Galactica is a similar kind of expectation – one that isn’t just a fanciful notion, but a concept that actively influences SETI. We expect ET to be blasting out this guide to the cosmos, so we tailor SETI to look for that kind of signal rather than something like a Benford beacon. It also biases our thinking as to what we might gain from first contact – all this knowledge given to us by peaceful, selflessly altruistic beings. It would be lovely if true, but I think it’s dangerous to expect it.

Case in point: Brian McConnell recently wrote on Centauri Dreams about his concept for an Interstellar Communication Relay – basically a way of disseminating the data detected within a received signal, giving everybody the chance to try and decipher it [see What If SETI Finds Something, Then What?]. He rightly points out that we need to start thinking about what happens after we detect a signal, and the relay is a nifty way of organising that, so that should we detect a signal tomorrow, we will already have procedures in hand.

I won’t comment too much on the technical aspects, other than to say that if a message has a Shannon entropy of 30, then it probably won’t matter how many people try to make sense of it – we won’t get close (A.I., on the other hand, may have a bit more luck).

The Interstellar Communication Relay is an effort to democratize SETI. My cynical side worries, however, about safeguards. The relay relies on people acting in good faith, and not concealing or misusing any information gleaned from a signal. McConnell proposes a ‘copyleft license’, a bit like a Creative Commons license, that will put the data in the public domain while preventing people from commercialising it for their own gain. I can see how this makes sense in the Encyclopedia Galactica paradigm – McConnell refers to entrepreneurs being allowed to make “games and educational software” from what we may learn from the alien signal.

I worry about this. In The Contact Paradox, I wrote about how even something as innocent as the tulip, when introduced into seventeenth-century Dutch society, proved disruptive (https://en.wikipedia.org/wiki/Tulip_mania). The Internet, motor cars, nuclear power – they’ve all been disruptive, sometimes positively, other times negatively.

How do we manage the disruptive consequences of information from an extraterrestrial signal? Even if ET has the best of intentions for us, they can’t foresee what the effects will be when facets of their culture or technology are introduced into human society, in which case the expectation that ET will be wise and ‘altruistic’ is almost irrelevant. Heaven forbid they send us technology that could be turned into a weapon, and we can’t guarantee that bad actors – after being freely given that information – won’t run off with it and use it for their own nefarious ends. A copyleft license surely isn’t going to put them off.

My feeling is that fully deciphering a signal will take a long, long time, if it ever happens at all, in which case we shouldn’t worry quite so much. But suppose we are able to decipher it quickly, and it’s more than just a simple ‘greetings’. Yes, we have to think about what happens after we detect a signal, but it’s not just the mechanics of processing that data that we have to think about; we also have to plan how we manage the dissemination of potentially disruptive information into society in a safe way. It’s a dilemma that the whole of SETI should be grappling with, I think, and nobody – certainly not me – has yet come up with a solution. But I think that revising our assumptions, recasting our expectations, and casting aside the idea that ET will be selflessly altruistic and wise, would be a good start.

  • Paul Gilster

Well said. As I look back through our exchanges, I see I didn’t get around to the Deep Time concept I wanted to explore, but maybe we can talk about that in our next dialogue, given your interest in the Cosmic Microwave Background, which is the very boundary of Deep Time. Let’s plan on discussing how ideas of time and space have, in relatively short order, gone from a small, Earth-centered universe defined in mere thousands of years to today’s awareness of a cosmos beyond measure that undergoes continuous accelerated expansion. All Fermi solutions emerge within this sense of the infinite and challenge previous human perspectives.


Odds and Ends on the Clouds of Venus

James Gunn may have been the first science fiction author to anticipate the ‘new Venus,’ i.e., the one we later discovered thanks to observations and Soviet landings on the planet that revealed what its surface was really like. His 1955 tale “The Naked Sky” described “unbearable pressures and burning temperatures” when it ran in Startling Stories for the fall of that year. Gunn was guessing, but we soon learned Venus really did live up to that depiction.

I think Larry Niven came up with the best title among SF stories set on the Venus we found in our data. “Becalmed in Hell” is a 1965 tale in Niven’s ‘Known Space’ sequence that deals with clouds of carbon dioxide, hydrochloric and hydrofluoric acids. No longer a tropical paradise, this Venus was a serious do-over as a story environment, and the more we learned about the planet, the worse the scenario got.

But when it comes to life in the Venusian clouds — human, no less — I always think of Geoffrey Landis, not only because of his wonderful novella “The Sultan of the Clouds,” but also because of his earlier work on how the planet might be terraformed, and what might be possible within its atmosphere. For a taste of his ideas on terraforming, a formidable task to say the least, see his “Terraforming Venus: A Challenging Project for Future Colonization,” from the AIAA SPACE 2011 Conference & Exposition, available here. But really, read “The Sultan of the Clouds,” where human cities float atop the maelstrom:

“A hundred and fifty million square kilometers of clouds, a billion cubic kilometers of clouds. In the ocean of clouds the floating cities of Venus are not limited, like terrestrial cities, to two dimensions only, but can float up and down at the whim of the city masters, higher into the bright cold sunlight, downward to the edges of the hot murky depths… The barque sailed over cloud-cathedrals and over cloud-mountains, edges recomplicated with cauliflower fractals. We sailed past lairs filled with cloud-monsters a kilometer tall, with arched necks of cloud stretching forward, threatening and blustering with cloud-teeth, cloud-muscled bodies with clawed feet of flickering lightning.”

Published originally in Asimov’s (September 2010) and reprinted in the Dozois Year’s Best Science Fiction: Twenty-Eighth Annual Collection, the story depicts a vast human presence in aerostats floating at the temperate levels. Landis has explored a variety of Venus exploration technologies including balloons, aircraft and land devices, all of which might eventually be used in building a Venusian infrastructure that would support humans.

We’ve already seen that Carl Sagan had written about possible life in the Venusian atmosphere, and the even more ambitious Paul Birch considered using huge mirrors in space to deflect sunlight, generate power, and cool down the planet. Closer to our time, NASA ran an internal study called HAVOC, a High Altitude Venus Operational Concept built around balloons, though my understanding is that the project, in the hands of Dale Arney and Chris Jones at NASA Langley, has been abandoned. Maybe the phosphine news will give it impetus for renewal. The Landis aerostats would be far larger, of course, carrying huge populations. I have to wonder what ideas might emerge or be reexamined given the recent developments.

Image: Artist’s rendering of a NASA crewed floating outpost on Venus

With Venus so suddenly in the news, I see that Breakthrough Initiatives has moved swiftly to fund a research study looking into the possibility of primitive life in the Venusian clouds. The funding goes to Sara Seager (MIT) and a group that includes Janusz Petkowski (MIT), Chris Carr (Georgia Tech), Bethany Ehlmann (Caltech), David Grinspoon (Planetary Science Institute) and Pete Klupar (Breakthrough Initiatives). The group will go to work with the phosphine findings definitely in mind. Pete Worden is executive director of Breakthrough Initiatives:

“The discovery of phosphine is an exciting development. We have what could be a biosignature, and a plausible story about how it got there. The next step is to do the basic science needed to thoroughly investigate the evidence and consider how best to confirm and expand on the possibility of life.”

Phosphine has been detected elsewhere in the Solar System in the atmospheres of Jupiter and Saturn, where it forms deep below the cloud tops and is later transported to the upper atmosphere by the strong circulation on those worlds. Given the rocky nature of Venus, we’re presumably looking at far different chemistry as we try to sort out what the ALMA and JCMT findings portend, with exotic and hitherto unknown natural processes still possible. On that matter, I’ll quote Hideo Sagawa (Kyoto Sangyo University, Japan), who was a member of the science team led by Jane Greaves that produced the recent paper:

“Although we concluded that known chemical processes cannot produce enough phosphine, there remains the possibility that some hitherto unknown abiotic process exists on Venus. We have a lot of homework to do before reaching an exotic conclusion, including re-observation of Venus to verify the present result itself.”

Image: ALMA image of Venus, superimposed with spectra of phosphine observed with ALMA (in white) and JCMT (in grey). As molecules of phosphine float in the high clouds of Venus, they absorb some of the millimeter waves that are produced at lower altitudes. When observing the planet in the millimeter wavelength range, astronomers can pick up this phosphine absorption signature in their data as a dip in the light from the planet. Credit: ALMA (ESO/NAOJ/NRAO), Greaves et al. & JCMT (East Asian Observatory).

I’ll close with the interesting note that the BepiColombo mission, carrying the Mercury Planetary Orbiter (MPO) and Mio (Mercury Magnetospheric Orbiter, MMO), will be using Venus flybys to brake for its destination, one on October 15, the other next year on August 10. It has yet to be determined whether the onboard MERTIS (MErcury Radiometer and Thermal Infrared Spectrometer) could detect phosphine at the distance of the first flyby — about 10,000 kilometers — but the second will close to about 550 kilometers, a far more promising prospect. You never know when a spacecraft asset is going to suddenly find a secondary purpose.

Image: A sequence taken by one of the MCAM selfie cameras on board of the European-Japanese Mercury mission BepiColombo as the spacecraft zoomed past the planet during its first and only Earth flyby. Images in the sequence were taken in intervals of a few minutes from 03:03 UTC until 04:15 UTC on 10 April 2020, shortly before the closest approach. The distance to Earth diminished from around 26,700 km to 12,800 km during the time the sequence was captured. In these images, Earth appears in the upper right corner, behind the spacecraft structure and its magnetometer boom, and moves slowly towards the upper left of the image, where the medium-gain antenna is also visible. Credit: ESA/BepiColombo/MTM, CC BY-SA IGO 3.0.

And keep your eye on the possibility of a Venus mission from Rocket Lab, a privately owned aerospace manufacturer and launch service, which could involve a Venus atmospheric entry probe using its Electron rocket and Photon spacecraft platform. According to this lengthy article in Spaceflight Now, Rocket Lab founder Peter Beck has already been talking with MIT’s Sara Seager about the possibility. Launch could be as early as 2023, a prospect we’ll obviously follow with interest.

A final interesting reference re life in the clouds, one I haven’t had time to get to yet, is Limaye et al., “Venus’ Spectral Signatures and the Potential for Life in the Clouds,” Astrobiology Vol. 18, No. 9 (2 September 2018). Full text.


What Phosphine Means on Venus

A biosignature is always going to create a rolling discussion that gradually homes in on a consensus. Which is to say that the recent discovery of phosphine in the upper atmosphere of Venus has inspired a major effort to figure out how phosphine could emerge abiotically. After all, the scientists behind the just published paper on the phosphine discovery seem to be saying something to the community like “We can’t come up with a solution other than life to explain this. Maybe you can.”

The ‘maybes’ are out there and they include life, but what a tough spot for life to develop, for obvious reasons, not the least of which is the hyper-acidic nature of its clouds. So let’s dig into the story a bit more. The idea of life in the cloud layers of an atmosphere has a long pedigree, even on Venus, where discussions go back at least to the 1960s. Harold Morowitz and Carl Sagan examined the matter in a paper in Science in 1967, a speculation that led them to conclude “it is by no means difficult to imagine an indigenous biology in the clouds of Venus.”

And while the temperature at Venus’ surface can reach 480° Celsius, the temperatures between 48 and 60 kilometers above the surface are relatively benign, in the range of 1° to 90° C. A team led by Jane Greaves (Cardiff University) detected the spectral signature of phosphine through observations at 1 millimeter wavelength made with the James Clerk Maxwell Telescope (JCMT) in Hawaii, later confirmed with data from the Atacama Large Millimeter Array (ALMA) observatory in Chile. The resulting paper is lengthy and judiciously written, as witness:

If no known chemical process can explain PH3 within the upper atmosphere of Venus, then it must be produced by a process not previously considered plausible for Venusian conditions. This could be unknown photochemistry or geochemistry, or possibly life. Information is lacking—as an example, the photochemistry of Venusian cloud droplets is almost completely unknown. Hence a possible droplet-phase photochemical source for PH3 must be considered (even though PH3 is oxidized by sulfuric acid). Questions of why hypothetical organisms on Venus might make PH3 are also highly speculative…

And here again, the note that what we are talking about is unusual chemistry:

Even if confirmed, we emphasize that the detection of PH3 is not robust evidence for life, only for anomalous and unexplained chemistry. There are substantial conceptual problems for the idea of life in Venus’s clouds—the environment is extremely dehydrating as well as hyperacidic. However, we have ruled out many chemical routes to PH3…

Image: Artist’s impression of Venus, with an inset showing a representation of the phosphine molecules detected in the high cloud decks. Credit: ESO / M. Kornmesser / L. Calçada & NASA / JPL / Caltech. Licence type Attribution (CC BY 4.0).

Phosphine is a rare molecule, one that is made on Earth through industrial methods, although microbes that live in environments without oxygen can likewise produce it when phosphate is drawn from minerals or other sources and coupled with hydrogen. MIT researchers have previously investigated it as a potential biosignature, one of a great many studied by Sara Seager and William Bains that we’ll want to use in our investigations of exoplanet atmospheres. It’s clear, though, that no one expected to find it in the clouds of Venus. Greaves explains:

“This was an experiment made out of pure curiosity, really – taking advantage of JCMT’s powerful technology, and thinking about future instruments. I thought we’d just be able to rule out extreme scenarios, like the clouds being stuffed full of organisms. When we got the first hints of phosphine in Venus’ spectrum, it was a shock!… In the end, we found that both observatories had seen the same thing – faint absorption at the right wavelength to be phosphine gas, where the molecules are backlit by the warmer clouds below.”

The international team working on the phosphine detection has investigated everything from minerals drawn into the clouds from the surface to volcanoes, lightning, even sunlight, but none of the processes examined made enough phosphine to account for the data. In fact, the abiotic methods could produce at best one ten-thousandth of the amount found in the telescope data.

But what a tough place for life to persist given an atmosphere where the high clouds are about 90 percent sulphuric acid. The hostility of the Venusian environment doubles down on the question of whether there are abiotic processes we have yet to consider. Following up on the phosphine detection, a new paper from the MIT researchers homes in on the matter:

(Greaves et al. 2020) have reported the candidate spectral signature of phosphine at altitudes >~57 km in the clouds of Venus, corresponding to an abundance of tens of ppb [parts per billion]. It was previously predicted that any detectable abundance of PH3 in the atmosphere of a rocky planet would be an indicator of biological activity (Sousa-Silva et al. 2020). In this paper we show in detail that no abiotic mechanism based on our current understanding of Venus can explain the presence of ~20 ppb phosphine in Venus’ clouds. If the detection is correct, then this means that our current understanding of Venus is significantly incomplete.

Image: This artistic impression depicts Venus. Astronomers at MIT, Cardiff University, and elsewhere may have observed signs of life in the atmosphere of Venus. Credit: ESO (European Southern Observatory)/M. Kornmesser & NASA/JPL/Caltech.

And from MIT co-author Clara Sousa-Silva, who examined phosphine as an exoplanet biosignature in a paper earlier this year, a look at the broader implications:

“A long time ago, Venus is thought to have had oceans, and was probably habitable like Earth. As Venus became less hospitable, life would have had to adapt, and they could now be in this narrow envelope of the atmosphere where they can still survive. This could show that even a planet at the edge of the habitable zone could have an atmosphere with a local aerial habitable envelope.”

What a boon this finding will be to those interested in taking our eye off Mars for an astrobiological moment and looking toward the nearest terrestrial planet, for follow-up studies have to include one or more missions to Venus to study its atmosphere, perhaps including some kind of sampling and return to Earth. The MIT paper, Bains et al. as referenced below, includes both Seager and Sousa-Silva as co-authors, along with Cardiff’s Greaves, and bears a title that defines the issue: “Phosphine on Venus Cannot be Explained by Conventional Processes.”

Seager’s work on a wide range of potential biosignatures is definitive and has been examined before in these pages. Anyone interested in the broader question of how we go about defining a biosignature needs to get conversant with her “Toward a List of Molecules as Potential Biosignature Gases for the Search for Life on Exoplanets and Applications to Terrestrial Biochemistry,” Astrobiology, June 2016, 16(6): 465-485 (abstract).

So perhaps life, or perhaps a yet undiscovered mechanism for producing phosphine on Venus. Either way, the path forward includes an examination of a possible paradigm shift — the authors use this phrase — involving not just Venus but terrestrial planets in general. And I think we can assume that laboratory work on phosphorous chemistry is about to get a major boost.

The paper is Greaves et al., “Phosphine gas in the cloud decks of Venus,” Nature Astronomy 14 September 2020 (abstract). The MIT paper is Bains et al., “Phosphine on Venus Cannot be Explained by Conventional Processes,” submitted to Astrobiology – Special Collection: Venus (preprint). The Sousa-Silva paper on phosphine is “Phosphine as a Biosignature Gas in Exoplanet Atmospheres,” Astrobiology Vol. 20, No. 2 (31 January 2020). Abstract.


Exploring Tidal Heating in Large Moons

Io, Jupiter’s large, inner Galilean moon, is the very definition of a tortured surface, as seen in the image below, taken by the Galileo spacecraft in 1997. Discovering volcanic activity — and plenty of it — on Io was one of the early Voyager surprises, even if it didn’t surprise astrophysicist Stanton Peale (UC-Santa Barbara) and colleagues, who predicted the phenomenon in a paper published shortly before Voyager 1’s encounter. We now know that Io is home to over 400 active volcanoes, making it the most geologically active body in the Solar System.

We’re a long way from the Sun here, but we know to ascribe Io’s surface upheaval to tidal heating forced by the presence of Jupiter as the gravitational forces involved stretch and squeeze not just Io but, of course, Europa, Ganymede and Callisto, all of them interesting because of the possibility of liquid oceans beneath the surface. Io is close enough to the giant world that rock can be melted into magma, but it’s the ice under more distant Europa that gets the lion’s share of interest because of its astrobiological possibilities. And now we learn that not just Jupiter but the other Jovian moons may be involved in significant tidal heating effects.

Image: NASA’s Galileo spacecraft caught Jupiter’s moon Io, the planet’s third-largest moon, undergoing a volcanic eruption. Locked in a perpetual tug of war between the imposing gravity of Jupiter and the smaller, consistent pulls of its neighboring moons, Io’s distorted orbit causes it to flex as it swoops around the gas giant. The stretching causes friction and intense heat in Io’s interior, sparking massive eruptions across its surface. Credit: NASA.

In the paper on this work, recently published in Geophysical Research Letters, lead author Hamish Hay (JPL) refines graduate work he performed at the University of Arizona’s Lunar and Planetary Laboratory. The scientists have found that the tidal response to other moons is surprisingly large, and consider it an important factor in the evolution of the satellite system at Jupiter, which comprises almost 80 moons in its entirety. Subsurface oceans could be maintained only through a balance between internal heat and its dissipation, so we need to know where this heat comes from and how it is distributed to understand these oceans.

Resonance appears to be the key. Push any object and let go and you create a wobble at the object’s own natural frequency. Hay uses the example of pushing a swing to explain it: Keep pushing the swing at that frequency and the resulting oscillations increase. Push at the wrong frequency — or in Hay’s analogy, push the swing at the wrong time — and the swing’s motion is dampened. In the case of the Jovian moons, the depth of a subsurface ocean determines the natural frequency of each of the moons the team studied. Says Hay:
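Hay’s swing analogy maps directly onto the textbook driven, damped oscillator. As a toy illustration only (not the paper’s ocean model), a short Python sketch shows how sharply the steady-state response peaks when the drive frequency matches the natural frequency:

```python
import math

def driven_amplitude(f_drive, f_natural, damping=0.05):
    """Steady-state amplitude of a damped oscillator driven at f_drive.

    Classic result for a sinusoidally driven harmonic oscillator with
    unit forcing: amplitude = 1 / sqrt((w0^2 - w^2)^2 + (2*z*w0*w)^2),
    with w = 2*pi*f. The response peaks sharply as the drive frequency
    approaches the natural frequency w0.
    """
    w, w0 = 2 * math.pi * f_drive, 2 * math.pi * f_natural
    return 1.0 / math.sqrt((w0**2 - w**2) ** 2 + (2 * damping * w0 * w) ** 2)

# Pushing the "swing" at its natural frequency (here 1.0 Hz) yields a far
# larger response than pushing well off-resonance.
on_res = driven_amplitude(1.0, 1.0)
off_res = driven_amplitude(3.0, 1.0)
print(on_res / off_res)  # response ratio, much greater than 1
```

The point of the sketch is simply that a small periodic push, applied at the right frequency, can dominate the response — which is why the comparatively weak moon-moon tides can matter at all.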

“These tidal resonances were known before this work, but only known for tides due to Jupiter, which can only create this resonance effect if the ocean is really thin (less than 300 meters or under 1,000 feet), which is unlikely. When tidal forces act on a global ocean, it creates a tidal wave on the surface that ends up propagating around the equator with a certain frequency, or period.”
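To see why ocean depth sets the natural period, note that a shallow-water gravity wave travels at speed √(gh): deeper water carries faster waves, so the wave circles the equator in less time. The sketch below uses rough Europa numbers purely as an illustration; the real calculation in the paper accounts for rotation, the ice shell, and multiple wave modes, so these figures should not be read as the paper’s results:

```python
import math

# Illustrative values only: Europa's approximate mean radius and
# surface gravity. The toy model ignores the ice shell and rotation.
RADIUS_M = 1.561e6   # Europa's mean radius, meters
GRAVITY = 1.315      # Europa's surface gravity, m/s^2

def natural_period_hours(depth_m):
    """Travel time of a shallow-water gravity wave around the equator.

    Shallow-water wave speed is sqrt(g * h), so a deeper ocean carries
    faster waves and has a shorter natural period.
    """
    wave_speed = math.sqrt(GRAVITY * depth_m)
    circumference = 2 * math.pi * RADIUS_M
    return circumference / wave_speed / 3600.0

for depth in (300, 10_000, 100_000):  # meters
    print(f"{depth:>7} m ocean -> natural period ~ {natural_period_hours(depth):.1f} h")
```

The scaling is the takeaway: a thin ocean has a natural period of days, while an ocean tens of kilometers deep rings at a period of hours — so the forcing frequency that resonates depends strongly on how deep the ocean is.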

Image: The four largest moons of Jupiter in order of distance from Jupiter: Io, Europa, Ganymede and Callisto. Credit: NASA.

Hay and company are arguing that each Galilean moon raises tides on the others, even if we’ve ignored the process in the past because Jupiter’s gravitational effects are obviously so huge. The researchers have modeled subsurface tidal currents to study how the resonant response of an ocean shows up in the generation of tidal waves that can release significant amounts of heat into the oceans and crusts of Io (where the ocean is thought to be magma) and Europa.

The result: in this modeling, Jupiter alone cannot supply tides at the right frequency to drive the resonance needed to maintain the internal oceans we believe exist among these moons, because the oceans we predict under the ice on moons like Europa are simply too deep. Only when the gravitational effects of the other moons are added to Jupiter’s do the requisite tidal forces emerge. The resulting tidal resonance allows oceans tens to hundreds of kilometers deep to remain stable over geological time.

If this is correct, there should be observable effects on the surface, opening the way for new observations as future spacecraft explore the Galilean moons. From the paper:

Additional observable signatures may emerge if an ocean is nearly resonant. The dominant modes due to moon forcing are westward‐propagating tidal waves. These waves produce unique, zonally symmetric patterns of time‐averaged heat flux, with heating focused toward low latitudes and peaking either side of the equator (Figure 3b). Heightened geological activity at low latitudes would be expected from such a distribution of heat flow, which has been suggested from the locations of chaos terrains on Europa (Figueredo & Greeley, 2004; Soderlund et al., 2014) and volcanism on Io (Mura et al., 2020; Veeder et al., 2012), although the polar coverage is poor. The crust would correspondingly be thinner at low latitudes, which could be observable using gravity and topography data. Small‐scale turbulent mixing in the ocean may act to diffuse this heating pattern…

The heating pattern explored in this paper is, the scientists say, significantly different from the Jupiter-forced tidal heating in the crust, which tends to be enhanced toward the poles. The authors also see consequences for the ambient magnetic field, which the paper explores and which would be within the sensitivity of the magnetometer to be flown aboard the upcoming JUICE mission, and probably within range of the instrumentation on Europa Clipper.

There are interesting exoplanet implications here as well. Note this:

Our study suggests for the first time a mechanism where the ocean could play a crucial role in the heat budget of the Galilean moons, as opposed to previous studies limited to diurnal frequencies where dissipation is often negligible (e.g., Chen et al., 2014; Hay & Matsuyama, 2019a). In light of this, reexamination of evolution models may be needed in the future. The effect of moon‐moon tides may be even larger in the TRAPPIST‐1 system if any of the planets contain significant bodies of liquid, as has been suggested (Grimm et al., 2018). The habitability of closely packed ocean worlds may depend on these tides.

The paper is Hay et al., “Powering the Galilean Satellites with Moon‐Moon Tides,” Geophysical Research Letters Vol. 47, Issue 15 (16 August 2020). Abstract / Full Text.


Janus: Twin Spacecraft to Study Binary Asteroids

When we looked earlier this week at the Solaris mission, a concept designed to study the Sun’s polar regions, I commented on another early concept called the Auroral Reconstruction CubeSwarm (ARCS). The mission intrigued me because it consisted of CubeSats in swarm formation, working together with numerous ground observatories to study the Earth’s auroras. The paradigm of miniaturization, low cost and creative design surfaces yet again in Janus, a proposal out of the University of Colorado at Boulder and Lockheed Martin that would involve twin spacecraft studying twin targets, the binary asteroids 1996 FG3 and 1991 VH.

Daniel Scheeres (CU-Boulder) is principal investigator for Janus, the plan being for the university to handle the analysis of data and images from the mission, with Lockheed Martin building and operating the two spacecraft. It should be a familiar role for both entities, as Lockheed Martin supports operations for OSIRIS-REx at asteroid Bennu, while Scheeres leads the radio science team for that mission. Each Janus spacecraft is roughly the size of a suitcase, a carry-on at that, echoing the theme of keeping spacecraft small and straightforward.

Lockheed Martin’s Josh Wood is project manager for Janus:

“We see an advantage to be able to shrink our spacecraft. With technology advancements, we can now explore our solar system and address important science questions with smaller spacecraft… We see this evolution to smaller and more capable spacecraft being a key market in the future for scientific missions. Now, we want to execute and show that we can do it.”

Lowering costs and preparation time by using off-the-shelf components is all part of the same parameter space that supports the movement toward CubeSats and so-called SmallSats. The mission will be part of NASA’s SIMPLEx program, which focuses on small spacecraft and satellites, and is projected to cost less than $55 million.

The twin Janus spacecraft have a long journey ahead, with a gravity assist at Earth following an initial solar orbit and a subsequent trajectory that takes them beyond the orbit of Mars. The craft are to use VACCO MiPS (Micro-Propulsion System), a low-cost, cold gas propulsion option designed for CubeSats consisting of five thrusters for pitch, yaw, roll and delta-v.

Malin Space Science Systems is to provide the instrument suite, including visible and infrared cameras, with power delivered by three deployable solar arrays and batteries. Launch is to be in 2022, with the twin craft lofted as secondary payloads on a Falcon Heavy (Block 5) in the same launch that will carry the Psyche and EscaPADE missions.

From a short summary presented at the 51st Lunar and Planetary Science Conference (2020):

Janus science will combine flyby observations of the target binary asteroids with ground-based observations, enabling the high resolution imaging and thermal data to be placed into a global context and leveraging all available data to construct an accurate topographical and morphological model of these bodies. Based on these measurements, the formation and evolutionary implications for small rubble pile asteroids will be studied.

Image: Rendering of the orbital pattern of the binary asteroid 1999 KW4. We have much to learn about binary asteroids. (Credit: NASA/JPL).

Binary asteroids have yet to be studied up close, but this is a configuration that represents about 15 percent of the asteroids in the Solar System. Says Scheeres:

“We think that binary asteroids form when you have a single asteroid that gets spun up so fast that the whole thing splits in two and goes through this crazy dance… Once we see them up close, there will be a lot of questions we can answer, but these will raise new questions as well. We think Janus will motivate additional missions to binary asteroids.”
