
Into Titan’s Haze

I can remember when I first read about the experiment that Stanley Miller and Harold Urey performed at the University of Chicago in 1952 to see if organic molecules could be produced under conditions like those of the early Earth. It was a test of abiogenesis, though that wasn’t a word I knew at the time. Somewhere around 5th grade, I was a kid reading a book whose title has long escaped me, but the thought that scientists could re-create the atmosphere the way it was billions of years ago seized my imagination.

Never mind that exactly what was in that atmosphere has been controversial. What thrilled me was the attempt to reproduce something long gone — billions of years gone — and to experiment to find out what it might produce. I just finished Samanth Subramanian’s elegant biography of J. B. S. Haldane, the polymathic geneticist, mathematician, physiologist (and too much more to list here), whose work on the chemical formation of life was strongly supported by the Miller and Urey results, as was that of the Soviet biochemist Alexander Ivanovich Oparin, to whom Haldane always deferred when asked who should be given priority for the idea.

The biography, A Dominant Character (W. W. Norton, 2020), is a gem; I highly recommend it to those interested in these matters. And it was just the thing to be reading when I began to hear about the work of Fabian Schulz and Julien Maillard (IBM Research-Zurich).

Working with colleagues at the University of Paris-Saclay, the University of Rouen at Mont-Saint-Aignan, and the Fritz Haber Institute of the Max Planck Society, the researchers have been experimenting with atmospheres as well, though not of our own world but Titan, a moon frequently described as having analogues to the early Earth. In fact, they’ve re-created its atmosphere in an Earth laboratory, which may eventually tell us much about abiogenesis in both places through the use of atomic-scale microscopy.

Titan continues to fascinate. No other object in the Solar System offers up a nitrogen atmosphere of this density, along with organic processes and interactions between the atmosphere and the surface on a grand and highly visible scale. There is a distinct possibility that Earth’s atmosphere 2.8 billion years ago was close to what we see on Titan today. The timeframe is based on the creation of the first reef systems in the Mesoarchean Era, as cyanobacteria began their photosynthetic work to turn carbon dioxide into oxygen. So this is an obviously fecund arena for researchers to probe.

Image: Although the Huygens probe has now pierced the murky skies of Titan and landed on its surface, much of the moon remains for the Cassini spacecraft to explore. Titan continues to present exciting puzzles. This view of Titan uncovers new territory not previously seen at this resolution by Cassini’s cameras. The view is a composite of four nearly identical wide-angle camera images. Credit: NASA/JPL/Space Science Institute.

We’d like to know a lot more about that frustrating photochemical haze that hid the surface of Titan when Voyager 1 took its jog at Saturn to get a look at the moon. Here we’re seeing nanoparticles made out of organic molecules, with carbon, hydrogen and nitrogen in abundance. All this is the result of radiation from the Sun as it streams into the methane and nitrogen mix making up the bulk of Titan’s atmosphere. Previous lab experiments have focused on the organic molecules called tholins to understand the chemical nature of the molecules from which the haze is ultimately derived.

The term ‘tholin’ was first used in a 1979 Nature paper co-authored by Carl Sagan and Bishun Khare, who would doubtless be thrilled to see how large a role they play in our analysis of material in the outer system. Tholins got a lot of public exposure, for instance, when New Horizons flew past 486958 Arrokoth in the outer Solar System. They’re thought to have accounted for its reddish color, and are in fact common in this distant region as solar UV and cosmic rays interact with organic compounds on icy bodies.

The IBM experiment was structured to allow Schulz and Maillard to observe tholins in the formation process. Co-authors Leo Gross and Nathalie Carrasco explain:

“We flooded a stainless-steel vessel with a mixture of methane and nitrogen and then triggered chemical reactions through an electric discharge, thereby mimicking the conditions in Titan’s atmosphere. We then analyzed over 100 resulting molecules composing Titan’s tholins in our lab at Zurich, obtaining atomic resolution images of around a dozen of them with our home-built low-temperature atomic force microscope.”

Image: Titan’s aerosol analogues as seen by Scanning Electron Microscopy. Credit Nathalie Carrasco.

The work is significant because it is revealing how compounds like those found in Titan’s haze are built by using atomic-scale microscopy. This is a deep look into chemical bonding and structure that goes well beyond previous techniques, and offers what appears to be a new astrobiological tool. The scientists believe their work can be turned toward the analysis of Titan’s methane cycle, which, like Earth’s hydrological cycle, moves between gaseous and liquid states, producing the moon’s lakes and seas.

The IBM work confirms that Titan’s orange haze is primarily made up of nitrogen-containing polycyclic aromatic hydrocarbons, with chemical structures that are related to the ‘wettability’ of the haze, a factor that determines whether the haze nanoparticles float on the moon’s hydrocarbon lakes. From an IBM research blog:

Finding these new details on the chemical structure of tholins adds to our understanding not only of Titan’s haze but also of the likelihood that aerosols might have favored life on the early Earth in the past.

Did hazes like this at one time protect fragile DNA molecules from the Sun’s radiation? Gross and Carrasco point to the fact that the molecular structures the team has imaged are good absorbers of ultraviolet light. That would be useful information not just about the early Earth but also about the prospects for forms of life emerging on Titan itself. Future missions like Dragonfly should give us much information in this regard.

Meanwhile, I’m most interested in the implications of this work for astrobiology in general. Let me quote from the paper on these laboratory analogues of Titan’s haze:

These molecules are for example good UV absorbers and thus modulate the radiative balance of the atmosphere (Brassé et al. 2015). This chemical structure would also influence the surface energy of the haze particles, controlling their wettability with liquid/solid hydrocarbons and nitriles: it would impact their propensity to trigger methane rains in the troposphere and/or to transiently float at the lake surfaces of Titan (Cordier & Carrasco 2019; Yu et al. 2020).

And keep this in mind for the overall context:

More generally this work showed the potential of AFM technique to reveal the chemical structure of complex organic material of interest for astrochemistry, opening new perspectives in the chemical analysis of rare and complex material such as organic matter contained in meteorites or in the frame of future sample return missions.

The paper is Schulz et al., “Imaging Titan’s Organic Haze at Atomic Scale,” Astrophysical Journal Letters Vol. 908, No. 1 (12 February 2021). Abstract / Full Text.


How do we go about crafting a spacefaring civilization? Nick Nielsen has been exploring the issues involved in terms of the choices cultures make and their conception of their future. Change the society and you change the outcome, with huge ramifications for our potential growth off-planet and on. The history of so-called ‘futurism’ tells us that visions of human potential differ according to the desirability (or lack of it) of deploying resources to space research, and it is a telling fact that many analyses extant today leave space out of the equation altogether. Have a look, then, at possible civilizations, their outcomes dictated by the assumptions they draw on as they attempt to pass through a bottleneck defined by a planetary society negotiating its relationship with the cosmos.

by J. N. Nielsen

1. Space Infrastructure Architectures
2. The Problems of Futurism
3. Beyond Institutionalized Futurism
4. Futurism at the Scale of Civilization
5. Six Possible Civilizations
5a. Space Development of Enlightenment Civilizations
5b. Space Development of Scientific Civilizations
5c. Space Development of Environmentalist Civilizations
5d. Space Development of Traditionalist Civilizations
5e. Space Development of Virtualist Civilizations
5f. Space Development of Urbanist Civilizations
6. Internal Conflict, Growth, and Destabilization
7. Buildout and the Exaptation of Civilizations
8. The View from the Bottom of a Gravity Well: Crabs in a Bucket

1. Space Infrastructure Architectures

Some years ago I wrote The Infrastructure Problem (2014), in which I touched upon the different spacefaring infrastructure architectures that would result from different admixtures of scientific research, technological development, and practicable engineering. Some years later I revisited some of these themes in The Return of the Space Settlement Vision (2017), especially the difference between the minimal space development architecture of Zubrin and Musk, and the maximal space development architecture of Wernher von Braun, Gerard K. O’Neill, and Bezos. These are the two most obvious alternative architectures for space development, but not the only two possibilities; in what follows I will inquire into the possibilities for qualitatively distinct space development as this development reflects the priorities of the society that designs, funds, and builds space infrastructure.

The choice between space development architectures is not merely a question of how best to get to Mars, or to some other destination; the question of space development architectures extrapolated to its greatest reach converges on the kind of civilization that builds a space infrastructure: the kind of space development that occurs will be a function of the kind of civilization that undertakes this development. But this is not merely an asymmetrical expression of a given kind of civilization that builds a given kind of space infrastructure; the buildout of a given kind of space architecture will have (or would have, in each case) consequences both intended and unintended, influencing in turn the civilization that builds the infrastructure.

For a terrestrial analogy, consider the buildout of transportation networks: Japan has a rail network that allows almost anyone to travel anywhere without the need of a personal vehicle; Europe has both extensive highway systems and extensive rail networks; the Americas have relied mostly on road networks and airports for transportation. Each of these transportation infrastructures is a reflection of the society that built the infrastructure, but the existence of the infrastructure in its turn contributes to the growth of certain social institutions while limiting the possibilities for other social institutions. Infrastructure projects are not socially neutral; they represent the buildout of a particular kind of society.

What kinds of societies, then, pursue particular kinds of space development? During the Cold War, space development took the form of the Space Race, which was an ideological competition intended to prove one social model superior to the other. But the Cold War eventually converged on the Apollo-Soyuz handshake in space. That cooperation, over several decades, grew into the ISS, and many see this cooperative model as the future of space development, even as private industry enters the launch market and national space programs multiply. What do these divergent trends portend for the future of space development? Let us turn to some futurist scenarios for relatively near-term prognostications in regard to the forms that social and space development may take.

2. The Problems of Futurism

Many futurist scenarios are formulated without any reference whatsoever to space development. It is this kind of blindness, to an opportunity that could grow into a future dwarfing all other possible futures, that makes futurism so consistently disappointing. [1] Past futurist efforts have not merely been wrong, but often in retrospect are laughable, so wide of the mark are they. Futurists have learned at least a few lessons from their past disappointments, now typically framing multiple scenarios based on explicitly identified variables, rather than predicting particular events or developments.

Recently there has been much discussion of a 2010 Rockefeller Foundation study, Scenarios for the Future of Technology and International Development, because one of the scenarios of the study (“Lock Step”) so closely resembled the events of 2020, but the interesting feature of the Rockefeller Foundation study was its creative use of a graphed quadrant defined by two variables (also known as a political compass), with the variables being political and economic alignment on the one hand, and adaptive capacity on the other. [2] If one takes these two variables as continua and uses the continua as x and y axes of a graph, the four quadrants of the graph define four scenarios, as follows:

1. “Clever Together” (strong alignment, strong adaptive capacity)

2. “Lock Step” (strong alignment, weak adaptive capacity) [3]

3. “Smart Scramble” (weak alignment, strong adaptive capacity)

4. “Hack Attack” (weak alignment, weak adaptive capacity) [4]

What this principled futurism with scenarios defined by variables implies is that, if our world today more resembles the “Lock Step” scenario than the other scenarios defined by the method employed, that is because the world is becoming more aligned but with less adaptive capacity. The fact that a futurist scenario written ten years ago bears some resemblance to the world today is a tribute to the well-chosen axes of the Rockefeller Foundation futurists. [5]
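The quadrant method is simple enough to state mechanically: two continua cross at an origin, and the sign of each variable selects one of four named scenarios. As a minimal sketch (the scenario names come from the Rockefeller Foundation report; the numeric encoding and function name are my own illustrative assumptions, not anything in the report):

```python
def compass_scenario(alignment: float, adaptive_capacity: float) -> str:
    """Map a (political/economic alignment, adaptive capacity) pair to one
    of the four quadrant scenarios. Positive values stand for 'strong',
    negative for 'weak'; the axes are treated as continua crossing at zero."""
    if alignment >= 0 and adaptive_capacity >= 0:
        return "Clever Together"   # strong alignment, strong adaptive capacity
    if alignment >= 0:
        return "Lock Step"         # strong alignment, weak adaptive capacity
    if adaptive_capacity >= 0:
        return "Smart Scramble"    # weak alignment, strong adaptive capacity
    return "Hack Attack"           # weak alignment, weak adaptive capacity

# Example: a world growing more aligned but less adaptive
print(compass_scenario(0.4, -0.7))  # → Lock Step
```

The point of the encoding is that the scenarios exhaust the plane: every possible combination of the two variables falls into exactly one quadrant, which is what gives compass futurism its appearance of completeness.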

Another principled futurist schematism is that of the Tellus Institute, which instead of employing a compass framework (four quadrants divided by two axes), distinguishes between scenarios that are better (“Great Transitions”), approximately the same (“Conventional Worlds”), or obviously worse (“Barbarization”) than the world today. Then for each of these three possibilities, two further permutations (better and worse) of each possibility are defined, for six possible scenarios:

1. Market Forces (a conventional world in which market-driven forces dominate)

2. Policy Reform (a conventional world in which significant reforms are possible)

3. Fortress World (barbarization unto a neo-feudal society) [6]

4. Breakdown (barbarization unto civilizational collapse)

5. Eco-communalism (a great transition to local, ecologically sustainable societies)

6. New Paradigm (a great transition attended by a variety of glittering generalities, aiming at, “…a just, fulfilling, and sustainable civilization”)

While I find the futurism of the Rockefeller Foundation and the Tellus Institute to be interesting and instructive, I also find them to be fatally flawed, and not merely because little or no hint of space development plays a role in their scenarios. The Tellus Institute, for example, cannot let go of the idée fixe of world government (I wrote about this recently in When Futurism Gets Stuck in the Past), and therefore defines its scenarios such that a closer approximation to world government is always preferable, while the maintenance of local governments is always suboptimal. Similarly embedded presuppositions vitiate the value of the Rockefeller Foundation report.

John Maynard Keynes in his seminal work The General Theory of Employment, Interest and Money, wrote: “Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back. I am sure that the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas.” [7] Keynes was right, but these influences are not confined to practical men. Dreamers and utopians are guilty of the same fault, as we see throughout futurist writings; embedded presuppositions, rarely made explicit, guide most futurist scenarios. In order to transcend our presuppositions and fundamentally question our relationship to the future, we need to take the presuppositions themselves as variables that may play out to a greater or a lesser extent. It is only through questioning our assumptions that we can ultimately understand ourselves and understand where we are going.

There are any number of institutional futurist scenarios, different to some degree from the reports of the Rockefeller Foundation and the Tellus Institute, as there are any number of institutions that produce them. [8] For example, ARUP (a consultancy that assists in the construction of major infrastructure projects) has produced a report, 2050 Scenarios, that, like the Rockefeller Foundation, details four future scenarios, and, also like the Rockefeller report, employs a compass, with its x-axis a continuum from social deterioration to social improvement, and with the y-axis a continuum from biosphere deterioration to biosphere improvement, defining its four scenarios as follows:

1. “Post Anthropocene” (social improvement, biosphere improvement)

2. “Humans Inc.” (social improvement, biosphere deterioration)

3. “Greentocracy” (social deterioration, biosphere improvement)

4. “Extinction Express” (social deterioration, biosphere deterioration)

The ARUP report notes, “The science-based targets of the nine Planetary Boundaries, Arup’s Drivers of Change cards, as well as the United Nations’ Sustainable Development Goals (often abbreviated as UN SDGs) were used to set parameters and guide the scenario development.” [9] The charts that accompany each scenario imply that the report writers relied heavily upon UN SDGs, which in the most optimistic scenario, “Post Anthropocene,” are all shown as “improved” [10] over today; in the most pessimistic scenario, “Extinction Express,” the UN SDGs are shown as all deteriorated, while the “Greentocracy” and “Humans Inc.” scenarios are mixed in terms of progress or deterioration of UN SDGs.

Space development is mentioned in the ARUP report in the context of the most pessimistic of the scenarios: “The depletion of Earth’s natural resources has necessitated the expansion of new extractive frontiers in space and the deep sea” (p. 60), as though they were seeking a pretext to frame space development in the worst possible light. Global Trends 2030: Alternative Worlds (2012) by the National Intelligence Council includes a smattering of references to space development, frequently in conjunction with the militarization of space. For example: “The ability of a future adversary to deny or mitigate that information advantage—including through widening the combat to outer space—would have a dramatic impact on the future conduct of war.” (p. 69) The book Journey to Earthland: The Great Transition to Planetary Civilization (2016) by Paul Raskin of the Tellus Institute briefly mentions space development (pp. 78, 85, 92), although in a better light, a more utopian light, than ARUP or the NIC. Scenarios for the Future of Technology and International Development (2010) from the Rockefeller Foundation has no mention of space development at all. In none of these reports is space development integral across all scenarios, and in none of the scenarios does space development play a significant role in the development of civilization. [11] RAND, to its credit, engages with the idea of space exploration to a much greater extent. [12]

For what little is said of space development in institutionalized futurism, the space development architectures are characterized in terms of resource extraction and military development. These motivations could drive space development futures based on resource extraction and military supremacy imperatives, which would entail distinctive space infrastructure architectures in each case. In other words, we can already see in these scenarios qualitatively distinct forms of space development, even where space development is not seen as central to a future scenario.

3. Beyond Institutionalized Futurism

The futurism discussed in the previous section I will call “institutionalized futurism,” as all of these reports were overseen by institutions and were written by teams of authors who work within the institution in question. Since institutions are created for a purpose, we rightly expect that purpose to be expressed throughout an institution, including being expressed in the reports of an institutional think tank. This is quite clearly the case with the reports described above, which come from institutions with agendas ranging from the conventional status quo to utopianism.

All of the scenarios that we have examined from institutional futurism have in common assumptions regarding shared values across nation-states, populations, and geographical regions. [13] In the Rockefeller Foundation report these assumptions are smuggled in as “alignment,” while in the ARUP report the burden is borne by improving societal conditions. However, it is meaningless to posit convergence or alignment of interests and values where these interests and values are left as a cipher, and that is why I say that these assumptions are “smuggled in.”

The Tellus Institute is the most egregious in its utopianism, that is to say, in its denial of the reality of the human condition, which is a condition of a plurality of interests and values, many of them misaligned, admitting of no common standard of social improvement. However, the utopianism of the Tellus Institute is a consequence of the Institute making its interests and values explicit, whereas in the Rockefeller and ARUP reports these interests and values are artfully dissembled—not exactly hidden, but also not displayed in the way that the Tellus Institute displays them. In this way, utopianism is a valuable exercise, as it makes explicit what others are thinking but do not say aloud. There is a sense in which the Rockefeller and ARUP reports exemplify what the Tellus Institute calls “Conventional Worlds,” as these reports embody unstated assumptions of Enlightenment ideology as it is understood in the early twenty-first century. The Tellus Institute, by contrast, plainly states these assumptions, and this explicitness is a virtue.

As a contrast to the institutionalized futurism considered above, let us now consider some instances of individual futurism. Individuals, it is true, are more vulnerable to simple mistakes (in a group, these would be pointed out) and to personal quirks and partiality (which would be diluted in institutionalized futurism), but they are not beholden to an institutional culture, and they are less vulnerable to groupthink than a number of individuals gathered together under an institutional umbrella.

As we have seen, space development does not seem to greatly interest institutionalized futurism. If we expand our survey of futurism to include individual futurists, it is an easy matter to find individuals who focus on space development and little else. I am going to avoid these and other specialist scenarios in order to discuss futurist scenarios of more general interest. There are, of course, countless individual futurists, but I will only mention two.

Peter Thiel in a number of talks has lately emphasized three scenarios for Europe’s future, asserting that a scenario needs to be concrete in order for it to be meaningful. This concreteness requirement is interesting; we have seen in the reports of institutionalized futurism the use of fictionalized vignettes in order to try to make these scenarios concrete, although I think that these efforts are much less effective than Thiel’s plain-spoken alternatives. His three scenarios are 1) environmentalism, 2) the surveillance state, and 3) Islamization. Intimations of all three are already present in contemporary Europe, so that little imagination is required to extrapolate any one of these into the future in a concrete and realistic manner.

Peter Thiel’s insistence upon the concreteness of future visions of society is possibly a reaction against the glittering generalities of utopianism and tacitly introduced shared values in terms of alignment or improvement. Thiel focuses on the hopes and fears of ordinary persons living and working today in societies in which environmentalism, state surveillance, and the presence of Islamic minority enclaves are already a reality, and any one of these could become the determining reality of Europe’s future.

The other individual futurist I will mention is Laurence Smith, who wrote The World in 2050: Four Forces Shaping Civilization’s Northern Future (2010). He has focused on approximately the same time period as the Rockefeller Foundation, ARUP, and the National Intelligence Council. Smith narrows his focus by discussing the far north, but his scenarios have wider repercussions and so can be readily extrapolated to a planetary scale. While Smith’s book is longer than the institutionalized reports discussed above, he doesn’t go deeply into methodology, or employ a schematic approach as in the institutionalized reports, but, like Thiel, he develops trends existing in the present in order to converge upon a future that exemplifies the direction he sees these trends heading toward.

The four forces that Smith identifies are demography, growing demand for natural resources, globalization (which, as Smith characterizes it, resembles “alignment” in the Rockefeller Foundation report), and climate change. Whereas Thiel identified three forces in the present and implied their divergence, Smith identifies four forces and weaves them together, implying their convergence, but both approaches involve identifying trends in the present and extrapolating them into the future. Smith explicitly characterizes his futurism as a thought experiment, and seeks to inject a conservative bias into his thought experiment by obeying four ground rules: 1) no silver bullets (only incremental technological progress), 2) no WWIII (no reshuffling the geopolitical deck), 3) no black swan events (which he calls “hidden genies”), and 4) the slogan “the models are good enough” (meaning that conventional scientific predictions guide his scenarios). It is worth noting that Thiel did not mention any similar cautions, but his three scenarios are consistent with Smith’s ground rules.

Where futurism is formulated in terms of historical trends and social forces, rather than being laid out in the form of a political compass, the variables are the rapidity of the development and the completeness of its realization, and these correspond to the axes of compass futurism: any two trends or forces identified by Thiel or Smith could be used to construct four scenarios based on quadrants defined by two axes. Thus Thiel’s three scenarios of environmentalism, surveillance, and Islam for Europe can each admit of how rapidly the scenario comes into being and how completely the scenario is realized. For example, environmentalism as a political project might unfold according to some accelerated timeline tied to a deadline in the near future, [14] or more slowly over 50 years, 100 years, or 200 years. And the degree of realization of an environmentalist political project might be diluted by the admixture of other trends that are also realized over a similar time period.
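The equivalence claimed here, that any two trends can be made into a compass, can be made concrete: take each trend as an axis from weak to strong realization, and the cross-product of the levels enumerates the four quadrant scenarios. A minimal sketch (the function name and string labels are my own illustration; only the trend names come from the discussion above):

```python
from itertools import product

def quadrant_scenarios(trend_x: str, trend_y: str) -> list[str]:
    """Cross two trends, each taken as an axis of strong/weak realization,
    to enumerate the four quadrant scenarios of 'compass' futurism."""
    levels = ("strong", "weak")
    return [f"{trend_x} {lx} / {trend_y} {ly}"
            for lx, ly in product(levels, repeat=2)]

# Example: building a compass from two of Thiel's trends for Europe
for scenario in quadrant_scenarios("environmentalism", "surveillance"):
    print(scenario)
```

The same machinery accommodates the rapidity/completeness reading as well: substitute "rapid/slow" and "complete/partial" as the levels on each axis and the function enumerates timelines of realization rather than combinations of trends.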

4. Futurism at the Scale of Civilization

My futurism is going to be an instance of individual futurism, for obvious reasons, and like Thiel and Smith I will identify trends in the present to play out in the future, but unlike Thiel and Smith my interest is in futurism on the civilization level. Seeing the future through the lens of how civilizations develop, and even how new civilizations come into being, casts the enterprise in a new light, and accounts for some of my choices of scenarios.

To be fair, in the kind of futurism that only looks toward the next thirty years or so (as with the futurism I have considered so far) [15], change at the scale of civilization isn’t in the cards, and the varying continua that we find most significant are precisely those that could be found in any civilization of any kind, and which were the focus of the institutionalized futurism discussed earlier: more or less economic activity, more or less adaptive capacity, a better or worse environment, etc. However, while the most noticeable developments over this near-term horizon will be sub-civilizational, these variables will continue to vary and even to reverse themselves indefinitely into the future, while the most important developments—the developments that will continue to shape history over longer time horizons—will be those that occur on a civilizational scale. A distinction can be made, then, between what I will call zero-sum variables that will pass through cycles of improvement or deterioration (as defined by some index) [16] and directional variables that will push human history toward unprecedented developments; civilizational scale variables are directional variables, and we will focus on these. [17]

How future developments manifest in institutions will be in part a function of the structure of the institutions in question. I define the institutional structure of civilization as an economic framework coupled to a conceptual framework by a central project. What is a central project? When a sufficiently large number of persons are able to unify their efforts around common interests, meanings, and values, I call this the central project of a civilization, but in so doing I recognize that conditions that allow for common interests and values are always limited in space and time. Civilizations rise and fall as conditions allow for large-scale social organization to unify around coherent social purposes. Such purposes are re-interpreted over time and so constitute a moving target; eventually conditions are transformed to the point that the purpose (or the social body devoted to the purpose) can no longer remain coherent, and the social institutions that had temporarily formed about the purpose begin their dissolution. Civilizations may, at this point, bifurcate [18], transform [19], or cede their place in history to a successor [20].

If you will grant me my institutional analysis at least hypothetically (having a model at least gives us a common framework for discussion), the most important question about a civilization, and perhaps the most difficult question to answer for the most complex and longest enduring civilizations, is what constitutes that civilization’s central project. For any new civilization that should arise in the future, the most significant question is what its central project will be, as this will be the glue that will hold the civilization together, that will mediate between practices that keep the civilization functioning and the theories by which a civilization justifies itself to itself, explaining the world in a way that will make sense for the civilization’s population (and for its neighbors, with which it will be engaged in relationships of cooperation, competition, and conflict).

Building on this institutional analysis of civilization, I will frame my futurist scenarios on a civilizational scale, and in terms of the nascent central projects of future civilizations coming into being. The only kinds of civilization that can come into being in the future are those that are consistent with having our civilization as their past, thus I see futurist scenarios through the lens of the kind of civilization we have today, and the kinds of civilization that ours could become, depending upon the trajectory of development we pursue. Therefore I will formulate my scenarios in terms of the kind of future civilization that comes into being as the result of a particular ideology assuming the role of central project.

Moreover, I will adopt most futurists’ indifference to space development by postulating future civilizations that constitute what I have called indifferently spacefaring civilizations, i.e., civilizations that do not take spacefaring as their central project. So we will consider futurist scenarios like those formulated by futurists, whether institutions or individuals, in which space development and space exploration do not play a central role in civilization, but may still be technically and economically possible for the civilization in question. In these scenarios, space development will not occur as an end in itself, but, if it occurs, it will occur as a means to the end or ends embodied in the central project. Indifferently spacefaring civilizations of the future will still have to have some central project as the purpose that drives human activity, and without which social cohesion fails and a society fragments, but in what follows that central project will not be spacefaring.

5. Six Possible Civilizations

I have my own futurist scenarios of which I am rather fond, and these scenarios grow out of contemporary trends and forces, much as we find in Smith and Thiel, rather than deriving from a schematic framework, like the institutionalized futurism we have considered. However, the futurist scenarios I have previously worked out were not based on my current understanding of civilization, which will be my point of departure here. Focusing on kinds of civilization, and differentiating kinds of civilization by differentiating central projects of civilization (which is to say, the macro-institutional structure of civilization remains the same in all instances of civilization, even as the central projects will differ), I will (briefly) discuss the following scenarios for civilization:

1. The Enlightenment — One could just as well call Enlightenment civilizations humanist civilizations, as this communicates much of the content of the Enlightenment. I maintain that, since the Enlightenment, western civilization has attempted to make the Enlightenment the central project of civilization. In historical terms, the Enlightenment is still inchoate and not yet fully formed (like Christianity in the third century AD), and in so far as we today constitute an Enlightenment civilization, we have great difficulty in seeing this for what it is. There is a sense in which future Enlightenment civilizations are scenarios of stagnation, as the ongoing Enlightenment project means more of the same, but with variations within the parameters defined by Enlightenment imperatives.

2. Science — I have discussed the possibility of a properly scientific civilization, i.e., a civilization that takes science as its central project, in several places, especially Properly Scientific Civilization and The Central Project of Properly Scientific Civilizations. Civilization today is indifferently scientific, meaning that science plays an important role in Enlightenment civilization, but only serves as an end in itself for particular individuals and institutions, and not for society on the whole. A society devoted to the growth of scientific knowledge as an end in itself would undertake scientific research not because it improves human life or because it produces new technologies and industries, but simply for the sake of scientific knowledge.

3. Environmentalism — I have often said that environmentalism is the only ideology to emerge in the second half of the twentieth century with the power to influence the policy of nation-states, and even to make or unmake political destinies. Thiel recognized this in proposing environmentalism as one of the concrete futures for Europe. We can already see several possibilities for environmentalism as a nascent central project, including quasi-religious intensity of belief.

4. Traditionalism — Taking Julius Evola as my point of reference for traditionalism, I will identify traditionalism as an attempt to return to pre-modern (i.e., pre-Enlightenment) principles of social organization, though not necessarily a return to (specific) pre-modern institutions. A contemporary traditionalism sufficiently adapted to the transformative influence of industry and technology might not be recognizable as traditionalism from the perspective of past institutions, but that isn’t the point. Science came of age under absolutist regimes, and there is no reason to believe that science and technology cannot continue to develop under a future traditionalist absolutism.

5. Virtualization — The class of civilizations considered under “virtualization” will be all those that take computation and virtual worlds as their central project, which includes singularity scenarios, human enhancement (transhumanism), and John Smart’s Transcension Hypothesis, inter alia. I have previously written about scenarios like this in A Virtually Optimized World and Existential Risks to a Virtually Optimized World. As the cultivation of virtual worlds could potentially substitute for outward exploration and expansion, virtualization scenarios are mostly inwardly focused (not unlike the Enlightenment, or humanist scenarios), with a proportionately diminished interest in the outward focus of spacefaring.

6. Urbanism — In so far as civilization began with the building of cities, civilization is an essentially urban undertaking, so that to take cities as the focus of civilization is to make civilization itself its own reflexive central project. And indeed reflexivity often characterizes the later stages of social development (an inwardness not unlike that of virtualization scenarios). In the urbanism scenario, human beings focus on better ways of living together in cities, and the world more and more approximates an archipelago of megacities in which almost all human beings live; these urban populations have a compelling interest in optimizing urban life, which could well result in taking the cultivation of urban life as an end in itself.

Needless to say, all of these scenarios admit of countless interpretations, so that we are here only generically discussing these ideas [21]; each scenario above is rather a class of scenarios exemplifying a range of zero-sum variables in constituent institutions. Also, the above list is not intended to be exhaustive; we cannot rule out the possibility of a dark horse central project. The most interesting and most likely scenarios for the future of civilization will be those that incrementally depart from the above generic scenarios, and continue to developmentally diverge until they become something unrecognizable and inconceivable from our present perspective. [22] Thus we take up these scenarios in the spirit of experimentation and exploration.

Following Laurence Smith, I will note some ground rules for the scenarios. In every futurist scenario we can formulate, there is a permutation of that scenario in which space development comes to be neglected and ceases to play any role in human history for the foreseeable future. However, a certain amount of space development is already “baked into the cake,” as it were, by plans and budgets already in existence today. This planned and funded space development will go forward, but whether it will be a starting point for greater things, or whether it will be allowed to die, as the Apollo program was defunded and abandoned, will depend upon ongoing developments, which are subject to change.

In the sense of multiple distinct scenarios that converge upon space development neglect, contemporary space development is a race against time to establish an independent and self-sustaining human presence in space before history forecloses on this opportunity and humanity remains confined at the bottom of its homeworld gravity well until extinction. Each scenario in its space development neglect permutation is a unique race against time scenario, in which the race is conducted under distinct circumstances that bear upon its success or failure.

Just as every scenario we will consider will have a permutation in which space development comes to be neglected, every scenario we consider will also have a permutation in which that civilization is a failing civilization that is on a trajectory to extinction (i.e., a civilization for which a failure condition obtains). In the case of civilizational failure, space development for that civilization must necessarily end (even if spacefaring has an integral role in such a civilization), so there is a sense in which we can say that every scenario we will consider has at least two paths to the end of space development: through the neglect of space development, and through the failure of a civilization that might otherwise superintend space development of its own peculiar kind.

However, each futurist scenario also suggests a permutation in which space development plays a role in the political, economic, social, and scientific development of future society, even if that role is distinct from the role that space development plays in the contemporary world, or would play in a properly spacefaring civilization that takes spacefaring as its central project. I will focus on these latter permutations, though it might well be interesting to consider the many distinct scenarios by which space development might fail under different civilizational scenarios.

Above all, the purpose here is not merely to enumerate several quantitatively distinct space development futures (i.e., more or less space development), but rather to identify qualitatively distinct space development futures (i.e., different kinds of space development). Space development admits of the possibility of more or less rapid deployment (rapidity of realization), and of more or less complete deployment (degree of realization), but both of these variables apply to all space development futures, and so constitute what I earlier called zero-sum variables.

“Religion is… the first spring of civilization: it preaches to us, and constantly reminds us of brotherhood, softens our heart, elevates our spirit, flatters and directs our imagination by extending the field of rewards and advantages into boundless territory, and interests us in the fortunes of others like us, while we envy this almost everywhere else.” Victor de Riqueti, marquis de Mirabeau [23]

5a. Space Development of Enlightenment Civilizations

I begin with the assumption that the Enlightenment project is the central project of western civilization in its present incarnation, and that the Enlightenment project has passed through multiple permutations since its inception. I will not attempt to make the argument for this sweeping claim here, as it requires its own exposition in a separate place. The assumption bears upon the present discussion because postulating the Enlightenment project as the central project of an indifferently spacefaring civilization means that our present civilization seamlessly develops into a spacefaring civilization while retaining its central project intact, albeit changed, as the Enlightenment project continues to take shape in light of ongoing contingent factors that influence its interpretation in theory and its application in practice. In a sense, then, an Enlightenment central project is the baseline scenario that represents the most probable future development for contemporary civilization, because it is a continuation of the same ideological program as that of the previous 250 years or so. Should the interpretation of the Enlightenment become fixed and cease to change, no longer passing through novel permutations, the future scenario of Enlightenment civilization would be a scenario of stagnation.

One way to conceptualize the Enlightenment project as a central project of civilization is its interest, varyingly expressed, in human flourishing. [24] As noted above (at 5.1), the Enlightenment largely coincides with humanism, which could be defined in terms of human flourishing, and even among the polarized political differences that have appeared in the wake of the Enlightenment—most notably the left/right political dichotomy—human flourishing is represented on both sides of the ideological divide, albeit differently interpreted. Is human flourishing, as realized within human societies, best secured by liberty or by equality? Can liberty and equality be reconciled within a single social context?

An Enlightenment civilization’s space infrastructure development could be characterized as humanism in outer space. Where human flourishing is an end in itself, and spacefaring, among other activities, is a means to the end of human flourishing, space development serves the end of human development and is pursued as it is understood to realize the ends of human development. Yet the same conflict that has dogged the Enlightenment since its inception would continue to play out in competing visions of space development: would human flourishing in space best be secured by the liberty of space development (nascent private space industries vying for profit and market share) or would human flourishing in space best be secured by the equality of space development (an international space program in theory open and accessible to all)?

Human development is one of the great themes of the Enlightenment, especially human development in the form of education, as in Rousseau’s novel Emile. A minimalist Enlightenment space development scenario would involve an emphasis upon educational initiatives, which could include a significant component of space science, but only where space science does not conflict with Enlightenment ideology. Such a space program framed in terms of an educational initiative could involve a continued presence in space like the ISS, perhaps small scientific bases on the moon and Mars, and more space science undertaken by robotic probes.

While the space science component of space development as an educational initiative points toward automated spacecraft as scientific instruments, the deeper humanist promptings of the Enlightenment point toward a human space program in order to realize human possibilities in space, though the funding for such initiatives would always be balanced against humanist initiatives undertaken on Earth for the majority of the population largely untouched by and uninterested in space development. Thus a human space program would be pursued, but would be subject to both the opportunities and the conflicts of Enlightenment ideology.

In both of these scenarios—the space science educational scenario and the human space program scenario—any scientific knowledge derived as a consequence would be a mere means to the end of human flourishing. Neither science nor space program nor national achievement would take precedence over human achievement, which, like the tension between liberty and equality, is subject to a tension between individual human achievement (which represents liberty) and collective human achievement (which represents equality). If an Enlightenment civilization engaged in space exploration and settlement can balance these opposing imperatives (as Enlightenment civilization has, to date, attempted to do, though not always happily), it could extend itself into the cosmos; but if it fails to negotiate a sustainable social model suspended between polarized extremes, its efforts will fracture, and, from the fracturing of Enlightenment civilization, other civilizations will emerge in its wake—smaller, and so less capable, but also more focused and less constrained.

“The catastrophes provoked by the wars and revolutions of the past concerned or wrought havoc upon only limited regions; in the future a political catastrophe would mean the self-destruction of civilization, perhaps of the whole of humanity.” Werner Heisenberg

5b. Space Development of Scientific Civilizations

The idea of a properly scientific civilization holds a great fascination for me, partly because it seems so familiar on the one hand, while on the other hand it would be something unprecedented, and, in its pure form, something utterly alien to us. It seems familiar because many scientists and philosophers have spoken as though we today live in a scientific civilization (I have discussed some of these claims in Pathways into the Deep Future and The Role of Science in Enlightenment Universalism); it seems unfamiliar when we stop to think about what would be entailed by human beings pursuing science as an end in itself and not as a means to an end, and to do so at the scale of civilization, and this could take us quite far afield.

The scientific revolution is often conflated with the Enlightenment project, and the two forces have been tightly intertwined in western history ever since both have been present together (meaning that Enlightenment civilization and scientific civilization could easily be mistaken for one another), but modern science is older than the Enlightenment and is distinct from it. That is to say, we could ideally isolate modern science from the Enlightenment, and vice versa, treating each separately, but that ideal isolation would be an abstraction, because the two are not separate in fact. Further developments in civilization could nevertheless separate the two, with a bifurcation of western civilization into a properly Enlightenment civilization and a properly scientific civilization.

Many scientists in the twentieth century came to understand the dark underbelly of science—Oppenheimer said that physicists had “known sin” as a result of having constructed nuclear weapons—that high technology made possible by advanced science was morally neutral, and could be exploited equally effectively for good or evil. That science is tainted with sin is a deeply Christian conception (by derivation), while the idea that scientists should be socially responsible (i.e., responsive to the impact of their work upon society) is an Enlightenment idea, so we see the degree to which existing conceptions are an admixture drawn from a long history. However, there are also deep sources in the western tradition that identify knowledge as the good; this was the position of Socrates and Plato (along with the corollary that no man sins knowingly), and these would be the sources to which a properly scientific civilization would return in order to justify the pursuit of knowledge as an end in itself.

For a properly scientific civilization, space development would be about scientific research, and outer space offers almost limitless possibilities for research. Science as we know it today, as it has been developed on Earth, is a mere fragment of what science can be, what science can become, in a cosmological context. Science pursued as an end in itself could not avoid this realization, and as a result would be driven to extensive exploration and discovery in the cosmos, as has occurred in past episodes of scientific curiosity.

There is a sense in which the European Age of Discovery was the practical implementation of the theoretical framework of science. As I like to point out, the scientific revolution occurred before the Enlightenment and before the industrial revolution, so we have the historical example of science as practiced before the advent of these features of modernity. The scientific revolution and the Age of Discovery were respectively the framework and the infrastructure of a properly scientific civilization that was on the verge of realization, but which was preempted by the Enlightenment and industrialization (more on this terminology in sections 7 and 8 below).

The imperatives of a properly scientific civilization would not resolve the tension between those who would prefer to spend the entire space exploration budget on automated probes and those who would include a human space program as part of space exploration. As in other civilization scenarios developed here, even under the umbrella of a properly scientific civilization, many different degrees of space development buildout are possible. However, a maximally robotic space science program would still likely involve human scientists in space, perhaps not at the scale of settlement and the establishment of permanent communities, but still a robust human space program, perhaps at the scale of, say, Antarctic scientific missions, where thousands stay in settlements in Antarctica primarily conceived and operated as research stations. Needless to say, a maximally human space science program would always continue to use robotic space science missions as an extension of human reach, to go ahead of human researchers and to go into environments where human beings could not go, like the surface of Venus.

“The question is whether any civilization can wage relentless war on life without destroying itself, and without losing the right to be called civilized.” Rachel Carson

5c. Space Development of Environmentalist Civilizations

Environmentalism is the only ideology to emerge in the second half of the twentieth century that has proved to have transformative ambitions and social and political reach. Environmentalism has not only inspired changed practices (shaping the economic infrastructure), but has also produced a significant body of scholarship—in the case of conservation biology, this scholarship is scientific, but environmentalism has also resulted in a distinctive environmental philosophy (shaping the conceptual framework). With this distinctively environmentalist theoria and praxis, an environmental central project is almost inevitable as the fulfillment of environmentalist thought. Whether or not an environmentalist central project would prove to be a viable form of human civilization is another matter; I will here assume that this is possible.

Environmentalism spans the spectrum of Enlightenment political engagement. Whether we are talking about some kind of utopian eco-communalism (as in one of the Tellus Institute’s scenarios) or a dystopian ecofascism (as in ARUP’s “Greentocracy” scenario), merely identifying an environmentalist civilization based on an environmentalist central project does little to constrain the political institutions of such a civilization. Similarly, space development in an environmentalist civilization could span a spectrum from the minimal to the maximal.

While many environmentalists are personally skeptical of any space program, many credit the photographs made possible by the space program as inflection points in the development of environmental consciousness. The “Blue Marble” and “Earthrise” photographs in particular have been cited as playing a role in the rise of environmentalism to political prominence. This dual attitude to space exploration, both a distrust of its significance and a recognition of its value, suggests that environmentalist civilizations may bifurcate into those that are favorable to space development and those that are unfavorable to space development. These horns of the dilemma of space development under environmentalism point to radically different outcomes.

Humanity under an environmentalist civilization might entirely retreat from space, or might project itself into the cosmos in order to practice conservation on a cosmological scale, but, between these two radically different outcomes, the space development of an environmentalist civilization would first of all focus on Earth observation and maintenance of the terrestrial biosphere, maintaining and improving the satellite network we have for this at present. Research missions to other bodies in our solar system might be undertaken to determine whether or not any possessed some form of life.

The quest for life beyond Earth undertaken from an environmentalist perspective would likely mean an expanding definition of life based on unclassifiable phenomena likely to be found (i.e., unclassifiable from the perspective of terrestrial biology). This expansion of the conception of life would point to an expanding conception of conservation, which we have already seen on Earth with the extension of conservation efforts from particular species to biotic communities to the non-living context of biotic communities. The conservation worldview projected at cosmological scale may entail the conservation of entire worlds (such as Mars) even if no life is found, on the basis of the intrinsic value of that world’s features.

There would be, then, a dialectic in any environmentally-driven space program, in so far as the more space exploration is undertaken, the more human beings will be made aware of diverse forms of emergent complexity that could be recognized as having intrinsic value and therefore as to be regarded from the perspective of conservation. The less space exploration undertaken, the less the conservation worldview is expanded, and the environmentalist perspective remains cosmologically parochial. Those sectors of society not entirely on board with the environmentalist central project (for every civilization has its dissenting minority) would put pressure on wider society by forcing this dialectic to play out, expanding our conception of the universe at the same time as expanding space development. However, this dialectic of expansive vs. parochial conservation imperatives could play out for as long as an environmentalist civilization could endure, unfolding over hundreds if not thousands of years, thus displacing significant space development into the distant future.

“…a civilization or a society is ‘traditional’ when it is ruled by principles that transcend what is merely human and individual…” Julius Evola

5d. Space Development of Traditionalist Civilizations

Traditionalism represents many forces acting in society, among them the rejection of the Enlightenment project in its many manifestations, meaning that traditionalism can have many manifestations as it counters the many manifestations of the Enlightenment. Traditionalism is not one, but many, as there are many traditions. The plurality of tradition extends not only to various traditions deemed worthy of preservation, but also to radically different conceptions of what it means to be a tradition. Usually when we think of traditionalism we think of the preservation of ancient (or, at least, old) traditions and institutions, preferably in their pristine and unaltered form—something like what Marx had in mind when he wrote that, “The tradition of all dead generations weighs like a nightmare on the brains of the living.” [25] But whatever tradition that traditionalism celebrates, I take the possibility of a traditionalist civilization to be the antithesis of the baseline Enlightenment scenario, and perhaps a reaction against it.

Perhaps the most eminent traditionalist of the twentieth century was Julius Evola, who explicitly rejected the familiar conception of traditionalism (conceived in terms of traditional institutions) in favor of a preservation of principles: “For the authentic revolutionary conservative, what really counts is to be faithful not to past forms and institutions, but rather to principles of which such forms and institutions have been particular expressions, adequate for a specific period of time and in a specific geographical area.” [26] Elsewhere in the same book Evola said of traditionalism: “…it is the form bestowed by forces from above upon the overall possibilities of a given cultural area and specific period, through super-individual and even antihistorical values and through elites that know how to derive an authority and natural prestige from such values.” [27]

Of course, there can be disagreement over the interpretation of the principles themselves to which Evola refers; I argued above that the more complex a civilization becomes, the more difficult it is to identify its central project. It would be all too easy to pluck out a few traditional principles and identify these as the authentic basis for a traditional society, while neglecting a number of principles no less present in past social formations. This problem is not insuperable, but it also cannot be taken lightly. However, my present interest is to assume that this can be done (as with the environmentalist scenario) and that a distinctive civilization can be based on consciously traditional motives.

A familiar caricature of traditionalism is its presumed rejection of science and rationalism (this is of a piece with the criticism that traditionalism wants to turn back the clock and return to the horse-and-buggy days), but science and rationality are among the many ancient principles that contemporary peoples can choose to honor or not. It is Evola’s traditionalism of principles that is most readily adaptable to the science and technology that have developed since the industrial revolution; since modern science emerged in the midst of early modern absolutism, there is no reason that further development in science and technology could not occur in a revived absolutist tradition.

The space development of a traditionalist civilization would likely take the form of affirming the principles of a traditionalist society—honoring the past, affirming continuity with the past, seeking to live worthy lives and to pursue worthy achievements in the light of tradition, shunning novelties for the sake of novelty, and so forth. A traditionalist civilization might in this spirit pursue a “flags and footprints” space program intended to send the message of the superiority of the social system adopted by that civilization—proof of concept of the social model, as it were. An ongoing “flags and footprints” space program would not be a negligible accomplishment; in order to continue to establish historic firsts, ever more daring efforts would need to be made, and this would eventually mean the buildout of space development consistent with ever further missions into deep space.

If, as during the Cold War, a successful space program is intended to prove the merit of the social model that has facilitated that space program, one risk is that failure is then interpreted as a failure of the model, which is one of the sources of the idea that “failure is not an option.” One might then predict risk aversion, but in so far as risk aversion will produce neither heroic accomplishments nor heroic failures, risk aversion fails to serve the social model. A traditionalist society can embrace dead heroes, but it cannot do without heroism in the celebration of the established social order. A space program can supply heroes, and when events go badly, the heroes can be celebrated and their memory can be invoked to greater efforts in the future.

Moreover, it would be a relatively simple matter to frame the space frontier in terms of super-individual and antihistorical value; the naturally non-anthropocentric character of space places it beyond the human concerns defined by our geocentrism and planetary endemism. To travel into space is a concrete form of transcendence—it is to transcend the mundane world. There is a potential conflict here with the rootedness of traditionalism in a particular place and time, though this sense of rootedness could be used to great effect in the establishment of space settlements, in which the settlers develop an attachment to their new home and are prepared to bear any burden in order to make a success of it.

“It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make, since it will lead to an ‘intelligence explosion.’ This will transform society in an unimaginable way.” I. J. Good

5e. Space Development of Virtualist Civilizations

We have by now all become familiar with scenarios like the technological singularity, intelligence explosion, and a life lived in virtual worlds. The popularity of immersive gaming experiences has demonstrated the power of these possibilities. We do not yet know the limits of such scenarios, either the technological limits in terms of what is possible, or the human limits in terms of what human beings would be willing to sacrifice in order to live in a boundless virtual world. Virtualist scenarios could range from individuals maximizing their time in virtual reality to the complete abandonment of biology. The former case would leave much of civilization intact; the latter case would mean the end of human civilization, or a transition to para-civilization.

The boundless virtual worlds of a virtualist civilization could replace the actual experience of exploration and discovery, so that a virtualist civilization can expand virtually without expanding actually. Such a civilization could “grow” without leaving its homeworld, and may find the virtual worlds that it creates for itself to be more interesting and more satisfying than the actual world. However, a virtualist civilization would also be an energy-intensive civilization, both in terms of its need for ever-greater quantities of energy and in terms of its ever-increasing need to rid itself of waste heat.

Much that a virtualist civilization would want to do could be done more efficiently and at a larger scale in space. Space solar power would be an efficient and almost inexhaustible source of energy, and using this energy in space rather than on a planetary surface would mean that the waste heat could be radiated into space. Just as a virtualist civilization may have no interest in the outer world and so may invest no resources in space exploration and discovery, so too a virtualist civilization may have no interest in preserving the biosphere of its origin, if we assume that a virtualist civilization has naturalistic origins in some intelligent progenitor species evolving in a biosphere. The ability to radiate waste heat into outer space may be of no interest if a virtualist civilization is unconcerned about the condition of its homeworld biosphere. Indeed, a radically virtualist civilization might choose to sterilize its homeworld, strip away its atmosphere, and reduce itself exclusively to virtual existence. As I. J. Good observed, radical virtualization could “…transform society in an unimaginable way.” Even given this scenario, however, space solar power would still offer the advantage of being uninterrupted by diurnal cycles.

In the scenarios of Freeman Dyson, John Smart, and Clément Vidal, all of whom focus on high energy density civilizations (the Dyson sphere, Vidal’s Stellivore Hypothesis, and Smart’s Transcension Hypothesis, which adds the element of turning inward to virtual worlds), there are to be found elements of the above scenario. All of these high energy density scenarios involve a considerable infrastructure buildout based on the civilization’s energy budget, so that even if we understand these civilizations to have turned “inward” to virtual worlds in place of outward exploration, all require a substantial infrastructure for their execution. Furthermore, the more radical the scenario (i.e., the greater the divergence from civilization as we know it), the more exotic and elaborate the infrastructure implied by the scenario.

In the virtualist scenario, the buildout of space development to serve a virtualist civilization could be a pathway to the emergence of other civilizations only if a virtualist civilization remained a human civilization. If, as in the most exotic scenarios, human beings surrender their biological embodiment and pursue complete virtualization, the result is the end of civilization as we know it and the development of another kind of complexity, distinct from civilization even though descended from it. Space development could well continue, but it would not be the space development of human civilization, as humanity would have transformed itself into something wholly other than what we are as biological beings. But in so far as a virtualist civilization falls short of full virtualization, leaving at least part of the population as biological beings, a new human civilization could use a virtualized civilization as a stepping stone to other forms of civilization.

“The country is faced with an ineluctable task: that of adjusting what it builds to the realities of a machine-governed civilization…” Le Corbusier

5f. Space Development of Urbanist Civilizations

Civilization begins with cities [28], so that an urbanist civilization would constitute a reflexive return to origins for civilization—in a sense, a renewal, a re-founding, and a re-interpretation of what it means for human beings to live in cities: how we live in cities, why we live in cities, and what is the nature of cities as the domicile of human beings. Le Corbusier, a modernist’s modernist who rejected traditionalism on pragmatic, moral, and scientific grounds, famously said that a house is a machine for living in [29]; if we understand a city to be a place for human beings to collectively inhabit, then in the same spirit as Le Corbusier we can say that a city is a factory for living in. What Le Corbusier sometimes called “machinist civilization” was, for him, a call to rebuild cities on modern principles; he even proposed, in his Plan Voisin, to tear down a large portion of central Paris and replace it with tower blocks.

Cities are many things. In so far as we understand the city as a unit of production, as a locus of economic opportunities, or as a ritual center for religious ordinances, we do not understand the city as an end in itself; it is a mere means to an end. But insofar as we understand the city as holding a special place in human affairs, where human beings have created a unique way of life, and therefore urbanism is to be cultivated as a central feature of human life, then the existence and flourishing of cities is an end in itself, and urbanism is the central project of a civilization that embodies this understanding.

A spacefaring urbanist civilization might see its spacefaring capacity as a way to extend and expand the urban project through building cities in many different environments, subject to many different selection pressures, and testing the possible limits of cities in every possible way, focused on the development of urbanism beyond Earth. In this way, spacefaring opens up possibilities that can allow an urbanist civilization to exhaust and thus to realize its potentiality; an urbanist civilization that remains on a single planet, on its homeworld, cannot fully realize its potential in the same way.
Cities in space and cities on other worlds would be a way to more fully realize the possibilities of urbanism.

However, space development undertaken on a sufficiently large scale would involve fundamental challenges to the urban paradigm. For example, it is not clear whether an artificial settlement in space would be a city, at least as we have known cities on Earth. This conceptual challenge to urbanism could be avoided by developing space settlements on the model of cities, following the urban paradigm as closely as possible in a radically different context, or by focusing on building cities on other planets or moons. Building cities on Mars would be entirely within the urban paradigm, and indeed it would be extraordinarily difficult for a homesteader on Mars to live apart from a large settlement, but artificial habitats in space would be a different matter.

If someone lives on an artificial settlement in space with 10,000 others, is this a city? Could human habitats be built that housed hundreds of thousands, or even millions, but which were not cities in the strict sense? What is the strict sense of a city? Do we even know the conceptual parameters of cities? What if one built such an artificial structure, and made it a “wilderness,” and placed all human housing and industry below decks, as it were? Imagine an enormous O’Neill habitat is constructed, with the inner surface given over to forests, meadows, trails, and gardens. There are a few cabins artfully distributed around the landscape. Almost everyone lives below decks in minimalist apartments, but everyone gets a week in a picturesque cabin several times per year. Also, no one is more than 5 minutes away from a walk in an apparently natural setting. There are natural amphitheaters and public parks that host daily events so that everyone has access to the “outdoors.” How would this compare to life in a contemporary city on Earth? Would this be a city? This example suggests that it may be possible to radically question our definitions and typologies of cities by presenting us with something unprecedented in previous human history.

The space development of an urbanist civilization could conceivably involve a robust buildout of cities across the solar system, including communities that would transcend the urban paradigm and point to another kind of civilization beyond the urbanist, one ripe for exaptation as a properly spacefaring civilization.

6. Internal Conflict, Growth, and Destabilization

In each of these scenarios we can discern a fundamental source of tension. For Enlightenment civilization, the tension is between freedom and equality, both of which are presented as absolute goods, and neither of which can be exhaustively reconciled with the other; for scientific civilization, it is between science and pseudo-science, which can also be seen as the tension between the appearance and reality of scientific knowledge, which, for fallible human beings, cannot be fully disentangled; for environmentalist civilization, between humanity negatively impacting the biosphere and humanity as an agent facilitating environmental imperatives (i.e., between humanism and anti-humanism); for traditionalism, between the preservation of particular institutions and the preservation of principles; for virtualization, between remaining human and surrendering humanity to full virtualization; and for urbanism, between the pull exercised by cities and the parallel desire to escape them, i.e., the dialectic of solitude and society.

Every civilization has internal conflicts; the triumph of a single central project in dominating a civilization does not mean uniformity of belief, but rather uniformity of presuppositions and a diversity of interpretations of the shared central project. We are all familiar today with the internal conflict of the Enlightenment, which, since the French Revolution, has taken the form of the political left vs. the political right. Past civilizations had their internal conflicts as well. For us today the Investiture Controversy is almost meaningless, as few are kept awake at night over the problem of secular appointment of bishops, and we do not see the political implications of differing interpretations of the Beatific Vision, but these were some of the conflicts that struck at the core of medieval European civilization. Similarly, future civilizations will have their internal conflicts. Both parties to these conflicts will share their dedication to the central project, which is a presupposition of their thought and action, but will disagree on its interpretation and the best means to its end.

When a civilization is at the height of its powers and confidence, internal tensions are channeled toward creative ends, so that the dialectic of opposed interests pushes the social narrative forward. When a civilization is failing, at a low ebb of confidence, with much of the population feeling little or no sense of investment in the central project, internal tensions can become destructive, opening a rift within that civilization through which chaos pours out, like fuel poured on a fire, destabilizing the social whole. It is not that the tensions are different when they become destructive, but that the ability to manage and to employ the consequences of social tension toward constructive ends has faltered, and destabilization escalates.

Joseph Schumpeter characterized capitalism in terms of “creative destruction,” and anyone who has lived long enough has seen both the constructive and the destructive side of capitalism; ideologues see only one side or the other, and not both. It is the same with the tensions intrinsic to any central project. To take only one of my scenarios as an example, I noted that the tension within environmentalism is between humanism and anti-humanism. Humanism is constructive when it brings out the best in us, our idealism and our altruism, and destructive when it becomes hubristic pride; analogously, anti-humanism is constructive when it is manifested as the pursuit of non-anthropocentric understanding, and destructive when it is manifested as self-loathing misanthropy. Constructive humanism and constructive anti-humanism can be channeled together into larger projects; destructive humanism and destructive anti-humanism can only conflict with each other, and withdrawal becomes the only rational strategy, further weakening the social whole.

Internal conflicts will push zero-sum variables back and forth, oscillating above and below an equilibrium value, which explains why within a given civilization these variables do not become directional variables and push civilization in a given direction. The imperatives incorporated in a civilization’s central project determine what are zero-sum variables and what are directional variables; in another civilization, the same forces shaping history could be differently distributed among zero-sum variables and directional variables.

Any of the internal conflicts in the civilization scenarios discussed above could be graphed as two axes defining four quadrants, as in the institutional futurism examined in section 2, yielding multiple scenarios all consistent with one and the same institutional structure of a given civilization. A long-lived civilization in passing through permutations of its central project may play out all of these possibilities, by turns exemplifying superficially distinct stages that are all expressions of the same underlying central project.
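As a purely illustrative aside, the quadrant construction can be made concrete: crossing two bipolar axes yields four scenario quadrants, one per pairing of poles. The sketch below is my own, not part of the essay's apparatus; the function name is hypothetical, and the example axes are drawn from the environmentalist tension discussed above (humanism vs. anti-humanism, crossed with the constructive vs. destructive expression of that tension).

```python
from itertools import product

def quadrant_scenarios(axis_a, axis_b):
    """Cross two bipolar axes to enumerate the four quadrant scenarios,
    as in the two-axis compass frameworks of institutional futurism."""
    return list(product(axis_a, axis_b))

# Illustrative axes for an environmentalist civilization.
scenarios = quadrant_scenarios(
    ("humanism", "anti-humanism"),
    ("constructive", "destructive"),
)
# Four quadrants result: e.g., ("humanism", "constructive") corresponds
# to idealism and altruism, ("humanism", "destructive") to hubristic
# pride, and so on for the anti-humanist pairings.
print(len(scenarios))  # → 4
```

On this toy model, a long-lived civilization passing through permutations of its central project would, by turns, occupy different quadrants of the same scenario space, which is one way to picture superficially distinct stages expressing the same underlying central project.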

7. Buildout and the Exaptation of Civilizations

What can be learned from this exploration of six scenarios of space development futurism? I have argued that the most significant developments—those of the greatest impact that will endure for the longest period of time—are those tied to the destinies of entire civilizations (the directional variables of a civilization). As civilizations are born, mature, flourish, decay, and die they realize purposes, meanings, and values embodied in the central project. The buildout of infrastructure and framework is part of this realization of purposes, meanings, and values. [30]

In the same way that the buildout of infrastructure is an expression of a society that influences the development of that society in turn, the buildout of our conceptions of the future is an expression of the society that develops this conceptual framework, and the conceptual framework, once formulated, becomes the framework within which we express ourselves, our hopes, our desires, our fears, and our aspirations, just as the buildout of infrastructure becomes the setting within which the events of the future will transpire.

As noted earlier, infrastructure is not socially neutral; it embodies, albeit implicitly, a particular structure of society and a particular worldview. The same will be true for spacefaring infrastructure, which will correspond to the civilization that builds the infrastructure. Allow me to summarize in the following theses how the institutional structure of civilization is shaped by buildout:

The Infrastructure Thesis

A civilization fulfills only those possibilities for which an infrastructure buildout has been undertaken (whether knowingly or unknowingly) that can realize the possibilities in question.

A civilization has the possibility of realizing potential transformations of itself, perhaps many different transformations into novel forms of civilization, but only those possibilities that it acts upon through buildout are ever realized. In the specific case of transformation into spacefaring civilization, the Infrastructure Thesis becomes the following:

Spacefaring Infrastructure Thesis

Spacefaring breakout for any civilization whatsoever will occur only after a spacefaring infrastructure buildout makes the breakout possible.

Space development has both an infrastructure dimension and a framework dimension; infrastructure does not exist in a vacuum, and it is not constructed in a vacuum. In each of the scenarios discussed above, a particular conception of how space ought to be developed (the framework) entails an infrastructure buildout, though this buildout in turn is subject to exaptation. Thus the Infrastructure Thesis alone is incomplete, and must be supplemented with a complementary formulation regarding the framework:

The Framework Thesis

There is no infrastructure buildout without a framework buildout that confers meaning and value on the effort, motivating and justifying the infrastructure buildout.

In the specific case of spacefaring civilization, the Framework Thesis becomes the following:

Spacefaring Framework Thesis

Spacefaring breakout will occur after a conceptual framework is formulated that is adequate to motivate the construction of a space-capable infrastructure at a scale consistent with breakout.

Both the Infrastructure Thesis and the Framework Thesis invoke the concept of the buildout of the institutional structure of civilization, which can be formulated as its own thesis:

The Buildout Thesis

A civilization makes those transitions that its institutional buildout makes possible.

Again, for the specific case of spacefaring civilization the Buildout Thesis becomes the following:

Spacefaring Buildout Thesis

A space-capable civilization makes the transition to a spacefaring civilization through an institutional buildout that facilitates spacefaring.

A stagnant civilization that maintains itself only, devoting no resources to expansion, may be more viable in the long term than a growing civilization engaged in the buildout of infrastructure and framework. To commit resources to an infrastructure project involves an opportunity cost in terms of the alternative projects that are not constructed, and there is the possibility that a civilization might invest resources in building out infrastructure and framework that prove to be a dead end. If this investment in a dead end comes at the opportunity cost of the buildout of viable scenarios for the future of that civilization (or its successor), buildout can be a way for a civilization to dig its own grave.

It sometimes happens in history that a civilization apparently confident in its purposes and possessed of energy and resources will exhaust itself building out infrastructure and framework, only to collapse from the effort (which we could call the Overshoot Thesis), and to have its buildout exapted by a new civilization that takes the place of the old. We see this pattern in the economic growth of the Mughal Empire, which produced such masterpieces as the Taj Mahal and the Shalamar Gardens, and then ceded its control of India to the British. Both of these civilizations—Mughal India and British India—presided over a submerged Hindu civilization. Another example: the expansion of Hellenism under Alexander the Great involved the founding of Greek cities across West and Central Asia. Alexander’s unprecedented conquests fell apart after his death, but many of the cities survived, some as part of the eastern Roman Empire (later Byzantium) and some as part of the Buddhist civilization of Central Asia, a product of idea diffusion along the Silk Road (whose major surviving artifacts included the Bamiyan Buddhas destroyed by the Taliban), which endured until Central Asia fell to Islam.

The use of institutions built by a previous civilization to realize novel ends that come to be identified with a new civilization (this new civilization being a transformation facilitated by buildout) is a process of exaptation that is embodied in the Buildout Thesis. What is exaptation? The Cambridge Dictionary of Human Biology and Evolution has a brief entry on exaptation, but also refers the reader to “preadaptation,” which is defined as follows:

preadaptation: any previously existing gene, anatomical structure, physiological process, or behavior pattern that makes new forms of evolutionary adaptation more likely. Any trait that confers an eventual advantage before the conditions that will make it adaptive prevail…

Exaptation as I use the term is the social equivalent of biological preadaptation: when a new civilization comes into being, the society was preadapted for the transition. [31] While we can entertain future scenarios for civilization that are possible in principle, without an institutional buildout there is nothing to be exapted for novel initiatives and aspirations.

Since I have formulated an Infrastructure Thesis and a Framework Thesis, for purposes of completeness I ought to also formulate the implied Central Project Thesis, which is as follows:

Central Project Thesis

A central project is the axis of alignment for a civilization, integrating infrastructure and framework into a coherent whole with historical directionality.

In the present essay I have been exclusively concerned with civilizations that do not have spacefaring as an integral part of their central project (i.e., indifferently spacefaring civilizations), but for purposes of completeness here is the spacefaring permutation of the Central Project Thesis:

Spacefaring Central Project Thesis

A spacefaring central project would be the axis of alignment for a spacefaring civilization, integrating infrastructure and framework into a coherent whole with historical directionality. [32]

If the above theses are a reasonable approximation of how civilizations function, then I can conclude that, if any of the above scenarios for civilization are realized, such civilizations will be the result of the exaptation of some existing buildout of infrastructure or framework that facilitates the emergence of such a civilization. In so far as there are already intimations of all my above scenarios in contemporary civilization, we can discern these civilizations in a nascent form. For example, there is already a significant buildout of urban infrastructure, and, in the form of urban studies, there is a growing buildout of an urbanism framework, so that intimations of an urbanist civilization already exist—there is a kind of nascent urbanist civilization, but whether this nascent civilization ever fully takes shape and consolidates its institutional structure is yet to be seen.

When a novel civilization does take form, a cyclical process, already loosely-coupled before the civilization proper comes into existence, becomes more tightly-coupled as the civilization consolidates its institutional structure. Here lies the relationship between what I have previously called the STEM cycle and the form of civilization that emerged in the wake of the industrial revolution. [33] The appearance of industrial civilization drew together a loosely-coupled STEM cycle of science, technology, and engineering into a tightly-coupled STEM cycle in which the buildout of infrastructure facilitated the buildout of framework, and vice versa. [34] Science primarily belongs to the framework, while industrial engineering primarily belongs to the infrastructure, but when the two are brought together in a virtuous circle each advances the other.

8. The View from the Bottom of a Gravity Well: Crabs in a Bucket

While I have been here explicitly discussing civilizations that do not have spacefaring as their central project, the next obvious step is to consider how properly spacefaring civilizations might come into being, which is part of a larger inquiry into the problem of central project formation. If any of the civilization scenarios discussed above come to be realized, which would involve the emergence of a novel civilization based on a novel central project (except for the baseline scenario of Enlightenment civilization), the subsequent emergence of a spacefaring civilization from any of these predecessors would constitute yet another traumatic punctuation in history—which could be what I have elsewhere called a preemption, an idea I applied to early modern civilization, which was not yet fully formed and mature when it was preempted by the industrial revolution and thus became something else entirely.

Granting the assumption that human civilization continues in its development without catastrophic failure (again, the failure condition), the most likely outcome is not some single civilization that emerges victorious in a contest among different traditions, but a multiplicity of civilizations, some of which exhibit no interest whatsoever in space development, some of which are space-capable but with only limited interest in spacefaring, and a few that actively pursue spacefaring. Among those that actively pursue spacefaring, a properly spacefaring civilization could emerge either by transforming that civilization in an historical preemption, or through bifurcation, with the properly spacefaring civilization breaking away to separately pursue its destiny, and this could occur while having little influence over other civilizations that demonstrate little or no interest in spacefaring.

A plurality of civilizations each pursuing different ends has been the norm of human history for the past ten thousand years. The expansionist semi-nomadic civilizations of the pre-modern era—horse nomads of Central Asia, including the Huns, the Mongols, and the Turks, inter alia, and seafaring nomads such as the Polynesians and the Norse — pursued their initiatives of exploration, trade, raiding, and conquest even as other peoples remained settled, and arguably deepened their connection to the land, building institutions that reflected their settled status and developing that suspicion and distrust of nomadic peoples that marks the history of borderlands where these different peoples, settled and nomadic, cross paths.

The reality of limited space on Earth means that multiple civilizations coexist, awkwardly and uncomfortably jostling one another on a crowded planet unified by transportation and communication networks. As humanity reaches the limit of its homeworld, and before it can effectively sustain itself away from its homeworld, civilization must experience a bottleneck. Civilization on Earth prior to the buildout of planetary-scale transportation and communication networks is the world before this bottleneck; a spacefaring breakout, whether or not the result of a spacefaring central project, in which a spacefaring frontier is opened to human exploitation, is the world after this bottleneck; civilization today is the world of the bottleneck.

During this bottleneck, civilizations are forced into unaccustomed intimacy, like the passengers on a lifeboat, and planetary-scale selection pressures entail the convergence of planetary-scale social institutions, so that there is an appearance of a single, unified human civilization, but the appearance only — not the reality of unity. [35] At the bottom of our terrestrial gravity well we are like crabs in a bucket, dragging each other down. It will only be if and when some civilization escapes the terrestrial gravity well that plurality rather than convergence will become manifest as an adaptive radiation of human (and post-human) societies is iterated on a cosmological scale.

And by a “bottleneck” in history I mean an historical present that is not an event, or even a conjuncture of events, but itself a longue durée period—from the advent of the industrial revolution, when the possibility of a technical solution to escape from our crowded homeworld first became conceivable, through another several hundred years into the future. Contemporary civilization still has several possible trajectories at this point. There is always the lurking possibility of catastrophic failure (the failure condition), and always the possibility of entering into long-term stagnation. Any of the scenarios discussed above in section 5, with or without a spacefaring capability, represent trajectories of development distinct from failure and stagnation (though any one of them could also terminate in failure or be extended in stagnation). And in so far as any of the scenarios discussed above could be transformed into or preempted by a properly spacefaring civilization, there are multiple trajectories by which a properly spacefaring civilization could come into being.

Notes

[1] Another blindness: I find it a troubling epitaph upon institutionalized futurism that none of the scenarios I examined deigned to acknowledge freedom as a key variable; individual and national self-determination seem to be as irrelevant to these futurists as is space development; and, in the case of national self-determination, those reports that emphasize alignment (Rockefeller Foundation) or global governance (Tellus Institute) stigmatize national self-determination as an element in their most pessimistic scenarios.

[2] Note that the y axis of the Rockefeller compass framework, political and economic alignment, could itself be divided into two axes of political alignment and economic alignment, plotted against each other and yielding four quadrants of distinct permutations of alignment.

[3] The global catastrophic risk that Bostrom calls “ephemeral global tyranny” would constitute a strong form of “alignment,” and this kind of alignment is implicitly recognized in the “Lock Step” scenario.

[4] The Rockefeller Foundation’s “Hack Attack” scenario, or weak alignment with weak adaptive capacity, is the most pessimistic scenario in this report, but there could be aspects of a future dominated by non-state actors that would produce better outcomes (i.e., a more preferred outcome) than state actors; the Tellus Institute’s “Eco-communalism” scenario could be interpreted in this way.

[5] In the August 2018 briefing paper Four Future Scenarios for the San Francisco Bay Area we find another two-dimensional compass, with the variables being a continuum from equality to inequality and a continuum from economic decline to economic growth.

[6] The Tellus Institute’s “Fortress World” scenario is quite similar to the Rockefeller Foundation’s “Lock Step.” Though the formulation of these scenarios derived from distinct principles, these distinct principles point to the possibility of an authoritarian future in which the strong do as they will and the weak suffer what they must (to invoke Thucydides’ description of Athenian hubris vis-à-vis the Melians).

[7] John Maynard Keynes, The General Theory of Employment, Interest and Money, Palgrave Macmillan, 2018, p. 340.

[8] An annoying feature of many recent futurist reports is their use of brief fictional scenarios attempting to make these speculative scenarios seem more real, as we find in the Rockefeller and ARUP reports. On the other hand, an interesting feature of the ARUP report is that in the credits they acknowledge “Media Influences,” which includes such dystopian cinema classics as Metropolis, Soylent Green, and Mad Max. It is refreshing to see this explicit acknowledgement of the influence of cinematic dystopianism, which almost makes up for the annoyance of poorly written fictional scenarios that give us no reason whatsoever to sympathize with the protagonists, who are generally unlikeable in their mediocrity.

[9] “Planetary Boundaries” is a reference to the framework for quantifying human impacts on the biosphere formulated by the Stockholm Resilience Centre. The nine planetary boundaries include climate change, change in biosphere integrity (biodiversity loss and species extinction), stratospheric ozone depletion, ocean acidification, biogeochemical flows (phosphorus and nitrogen cycles), land-system change (e.g., deforestation), freshwater use, atmospheric aerosol loading (microscopic particles in the atmosphere that affect climate and living organisms), and the introduction of novel entities (e.g., organic pollutants, radioactive materials, nanomaterials, and micro-plastics). I have looked at planetary boundaries through the wrong end of the telescope in my series Planetary Constraints, in which I consider the problem from the perspective of the constraints that planetary endemism imposes upon civilizations. ARUP’s “Drivers of Change” refers to 25 forces across 10 topics (climate change, convergence, demographics, energy, food, oceans, poverty, urbanization, waste, and water) shaping contemporary history as identified by ARUP Foresight.

[10] I place “improved” in scare quotes as I am a skeptic of the UN Sustainable Development Goals, not only because there are no enforcement mechanisms attached to them (an objection that could easily be set aside if the SDGs are only used as key indicators, as in the ARUP report), but also because there are good reasons to question whether these SDGs capture the well-being of the peoples they presume to quantify. Cf. my blog post Happiness: A Tale of Two Surveys.

[11] The very indifference to space development shown by many futurists is an implicit admission that their scenarios are consistent with either space development or space neglect, which could be introduced into their scenarios as another variable and mapped out schematically.

[12] I would be remiss if I did not also mention the Superforecasters project, but the superforecasters project is not especially relevant to what I am discussing here, as their methodology focuses on incrementally improving predictions of individual forecasters through feedback to forecasters on previous predictions. This method of improving forecasts may be effective, but it does not illuminate the larger theoretical issues involved in understanding the trajectory of a civilization’s development.

[13] I do not hold that human beings cannot share interests and values across nation-states, populations, and geographical regions, only that, heretofore, we have not seen this in human history. Human beings have their evolutionary psychology in common, and this common evolutionary origin could theoretically serve as the basis of planetary unification, but it is precisely on this point that social institutions are most mendacious, and the mendacity is at its worst in the most “advanced” nation-states, in which honesty about human nature has come to be morally unacceptable.

[14] The deadline I have in mind was the oft-stated slogan of “12 Years to Save the Planet” (this slogan was based on the UN report, “Global Warming of 1.5 °C,” originally published in 2018, which argued that the dangers of anthropogenic climate change necessitated mitigation efforts that would limit global warming to 1.5 to 2.0 °C by 2030, hence the 12-year figure in the slogan). Deadlines for a political project have the virtue of communicating urgency, but, on the flipside, when the deadline passes and the Apocalypse fails to materialize, the ideology takes a hit. Or it should take a hit, but human self-deception is such that failed prophecies of the past are readily forgotten if the proper incentive to forget them is present.

[15] In The Human Future in Space I discussed this kind of short-term futurism and contrasted it with futurism at another order of magnitude—looking 250 years into the future, rather than looking 25 years into the future—over which longer time scale we find obvious changes that are absent on shorter time scales.

[16] Note that what I have called “zero-sum variables” (in contrast to “directional variables”) do possess directionality, and this directionality may influence the course of history over the longest time scales, so that calling them “zero-sum variables” is far from being optimal terminology (if I can find a better way to formulate this I will do so), but they are variables that can and do reverse their directionality, and are likely to do so within the context of a generational time scale (20-30 years) and within any one civilization that comes to be defined by a directional variable. One way in which civilizations transform themselves into distinct kinds of civilization is when a zero-sum variable takes values beyond its ordinary parameters of oscillation around an equilibrium value and is transformed into a directional variable (I will discuss this further in a future essay).

[17] When Polybius wrote that Rome had conquered the known world in 53 years in the first chapter of his history, he explicitly noted that this was an unprecedented development in human affairs. I would identify this as a development on a civilizational scale; Polybius recognized this, though expressed it in different terms.

[18] Roman civilization bifurcated into Rome and Byzantium, and Roman civilization in the west collapsed thereafter.

[19] Medieval Europe was transformed into modern Europe in a process that was continuous at every point, but which resulted in the definitive end of medieval civilization.

[20] When expanding Islamic civilization conquered Central Asia and North Africa, the prior civilizations in these geographical regions came to an end and were replaced by Islam, as with the Buddhist civilization of Central Asia mentioned in section 7.

[21] It would be the business of a fiction writer or a poet to fill out these generic scenarios with concrete detail, such as Thiel demands for futurism. T. S. Eliot does something like this in his “The Journey of the Magi,” in which he imagines the Magi during their quest.

[22] Claudius Gros’ Genesis Project (cf. Developing Ecospheres on Transiently Habitable Planets: The Genesis Project), should this or some equivalent undertaking become the central project of a future civilization, could be understood as a biological central project, a technological central project, or a spacefaring central project. An example such as this is a salutary example that challenges facile classification; in this way, this is like actual central projects of actual civilizations, which are rarely easily classifiable. The more organically integral a central project is to the life of a people, the more difficult it is to separate out the central project; to do so is to exhibit an abstraction that has been disentangled from everything that gives it life. In any case, Gros’ Genesis Project comes integral with a spacefaring program, so this case is sufficiently straight-forward that we need not treat it separately. Cf. The Genesis Project as Central Project, Addendum on the Genesis Project as Central Project, Second Addendum on the Genesis Project as Central Project: Invasive Species, and Third Addendum on the Genesis Project as Central Project: the Biological Conception of Civilization.

[23] This is my own translation from Victor de Riqueti, marquis de Mirabeau’s L’Ami des hommes, ou traité de la population, which is credited with the first use of the word “civilisation” (p. 168 of the 1758 edition). The two uses of “civilisation” in Mirabeau’s L’Ami des hommes are instructive, the first by invoking religion as the “spring” of civilization, and the second for contrasting civilization with barbarism. Both are familiar themes, and here we see them present from the beginning. Also, our view of the Enlightenment today tends to overstate the skepticism and religious non-conformity of the era. Carl L. Becker in his classic study, The Heavenly City of the Eighteenth-Century Philosophers, argued that the Enlightenment was, if not orthodox, still deeply pious, much as E. M. W. Tillyard argued in his famous study The Elizabethan World Picture: A Study of the Idea of Order in the Age of Shakespeare, Donne & Milton that the Shakespearean world was, if not orthodox, still deeply pious.

[24] Interestingly, the idea of “human flourishing” has its origins in natural law theory. Of human flourishing John Finnis wrote:

“What are principles of natural law? The sense that the phrase ‘natural law’ has in this book can be indicated in the following rather bald assertions, formulations which will seem perhaps empty or question-begging until explicated in Part Two. There is (i) a set of basic practical principles which indicate the basic forms of human flourishing as goods to be pursued and realized, and which are in one way or another used by everyone who considers what to do, however unsound his conclusions; and (ii) a set of basic methodological requirements of practical reasonableness (itself one of the basic forms of human flourishing) which distinguish sound from unsound practical thinking and which, when all brought to bear, provide the criteria for distinguishing between acts that (always or in particular circumstances) are reasonable-all-things-considered (and not merely relative-to-a-particular purpose) and acts that are unreasonable-all-things-considered, i.e. between ways of acting that are morally right or morally wrong—thus enabling one to formulate (iii) a set of general moral standards.” (Natural Law and Natural Rights, 1980, p. 23)

Finnis furthermore gives a list of seven “basic forms of human good,” which includes life, knowledge, play, aesthetic experience, friendship, practical reasonableness, and religion (pp. 85-90). I first encountered “human flourishing” in either Sam Harris or Nick Bostrom (can’t remember which), and as far as I can tell its current usage is derived from Finnis’ exposition, but if anyone knows of earlier expositions of human flourishing I would be interested to hear about them. I previously wrote about this in my newsletter 13.

[25] Karl Marx, The Eighteenth Brumaire of Louis Bonaparte (1852).

[26] Julius Evola, Men Among the Ruins: Postwar Reflections of a Radical Traditionalist, p. 115.

[27] Op. cit. p. 13.

[28] V. Gordon Childe’s paper, “The Urban Revolution,” in which he discussed the rise of the first cities, has in the archaeological literature been taken as formulating the diagnostic criteria for civilization.

[29] Le Corbusier made this claim at least three times. Cf. my blog post The Technology of Living.

[30] The distinction between infrastructure and framework can never be made exhaustive because the distinction can be pursued to its origin in the individual person, who both thinks and acts, and in whom thought and action are integral. Or, rather, the distinction can be made exhaustive, but only at the cost of the resulting conceptions being entirely abstract, that is to say, not exemplified in the actual world in the way we find these abstract concepts exemplified in theory.

[31] The buildout of shipping capacity in late medieval Europe made the Age of Discovery possible, and the Age of Discovery made the modern world possible, but when medieval traders were building better ships and exploring farther afield, they were not trying to create the modern world; they were working within their own civilization to attain their own ends as defined within that civilization. Nevertheless, these efforts made a transition to modern civilization possible, and these developments also entailed the dissolution of the previous civilization that had made it all possible.

[32] Permutations of the Central Project Thesis could be formulated for each of the six scenarios discussed above, such as the Enlightenment Central Project Thesis, and so on.

[33] I previously discussed the STEM cycle in my Centauri Dreams post Where Do We Come From? What Are We? Where Are We Going?

[34] This, at least, was the process as it occurred across the civilizations of western Europe. The west had been developing institutions of private property, industry, law, natural philosophy, and education for hundreds of years prior to the industrial revolution, so that these societies were primed for the change that organically grew out of this social milieu. Even as the western nation-states rapidly industrialized in the wake of the industrial revolution, most of the rest of the world continued their lifeways of agricultural civilization, and when industrialization did eventually come to them, it came as a top-down imposition by fiat of a distant central government—an ersatz industrial civilization. The developing world did not develop the institutions that facilitated the original appearance of industrialization, so that these institutions also had to be artificially imposed, and, even as they were imposed, indigenous institutions were submerged by them rather than being replaced.

[35] An illustration of the converging selection pressures upon a plurality of civilizations forced into coexistence on a planetary scale can be found in the quote from Heisenberg at the beginning of section 5b. It is the appearance of convergence and the appearance of unity that have deceived the institutional futurists at the Rockefeller Foundation into invoking alignment as a fundamental variable, and ARUP into invoking improving societal conditions as a fundamental variable, without either of these institutions defining the purposes that are the motive for alignment or the metric by which social conditions are to be measured.


TOI 451: Three Planets in a Stellar Stream

The planets orbiting the young star TOI 451 should be useful for astronomers working on the evolution of atmospheres on young planets. This is a TESS find, three planets tracked through their transits and backed by observations from the now retired Spitzer Space Telescope, with follow-ups as well from Las Cumbres Observatory and the Perth Exoplanet Survey Telescope. TOI 451 (also known as CD-38 1467) is about 400 light years out in Eridanus, a star with 95% of the Sun’s mass, some 12% smaller than the Sun, and rotating every 5.1 days.

That rotation is interesting, as it’s more than five times faster than our Sun rotates, a marker for a young star, and indeed, astronomers have ways of verifying that the star is only about 120 million years old. Here the Pisces-Eridanus stream, only discovered in 2019, becomes a helpful factor. A stream of stars forms out of gravitational interactions between our galaxy and a star cluster or dwarf galaxy, shoe-horning stars out of their original orbits to form an elongated flow.
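The “more than five times faster” comparison is easy to sanity-check. A quick Python sketch, taking roughly 27 days for the Sun’s equatorial rotation period (an assumed round figure, since the Sun rotates differentially and quoted values range from about 25 to 27 days):

```python
# Rough check of the spin comparison between TOI 451 and the Sun.
# The ~27-day solar period is an assumed round figure; the Sun
# rotates differentially, faster at the equator than at the poles.
sun_period_days = 27.0
toi451_period_days = 5.1  # from the TESS light curve

ratio = sun_period_days / toi451_period_days
print(f"TOI 451 spins about {ratio:.1f}x faster than the Sun")
# prints: TOI 451 spins about 5.3x faster than the Sun
```

Either end of the solar range gives a ratio comfortably above five, consistent with the text.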

Named after the two constellations in which the bulk of its stars reside, the Pisces-Eridanus stream is actually some 1,300 light years in length and as seen from Earth extends across fourteen different constellations. And while Stefan Meingast (University of Vienna) and team, who discovered the stream, pegged its age as somewhat older, follow-up work by Jason Curtis at Columbia University (New York) determined that the stream was 120 million years old.

Stars of the same age with a common motion through space occur in several forms. A stellar association is a loose grouping of stars with a common origin, now gravitationally unbound but still moving together (I’m simplifying here, to be sure, because there are a number of sub-classifications of stellar associations). A moving group is still coherent, but now the stars are less obviously associated as the formation ages. The Ursa Major moving group is the closest of these to Earth. A stellar stream like the Pisces-Eridanus stream has been stretched out by tidal forces, the remnant of a star cluster or dwarf galaxy now torn apart and gradually dispersing.

Image: The Pisces-Eridanus stream spans 1,300 light-years, sprawling across 14 constellations and one-third of the sky. Yellow dots show the locations of known or suspected members, with TOI 451 circled. TESS observations show that the stream is about 120 million years old, comparable to the famous Pleiades cluster in Taurus (upper left). Credit: NASA GSFC.

As with stellar moving groups we’ve looked at before, the Pisces-Eridanus stream seems to feature many stars that share common traits of age and metallicity. TESS comes into its own when studying a system like TOI 451 because its measurements of stars in the Pisces-Eridanus stream show strong evidence of starspots (rotating in and out of view and thus causing the kind of brightness variation TESS was made to measure). Starspots are prominent in younger stars, as is fast rotation. And all of that helps narrow down the possible age of the TOI 451 system.

The three planets around TOI 451 have a story of their own to tell. With temperatures ranging from 1,200 °C to 450 °C, these are super-Earths, with orbits of 1.9 days, 9.2 days and 16 days. Despite the intense heat from the star, the researchers believe these worlds will have retained their atmospheres, making them laboratories for theories of how atmospheres evolve and what their properties should be. Already we know there is a strong infrared signature between 12 and 24 micrometers, which suggests the likely presence of a debris disk. The paper describes it this way, likening the age of stars in the Pisces-Eridanus stream to that found in the Pleiades:

The frequency of infrared excesses decreases with age, declining from tens of percent at ages less than a few hundred Myr to a few percent in the field (Meyer et al. 2008; Siegler et al. 2007; Carpenter et al. 2009). In the similarly-aged Pleiades cluster, Spitzer 24µm excesses are seen in 10% of FGK stars (Gorlova et al. 2006). This excess emission suggests the presence of a debris disk, in which planetesimals are continuously ground into dust…

And in this case we have a debris disk with a temperature near or somewhat less than 300 K.

Image: This illustration sketches out the main features of TOI 451, a triple-planet system located 400 light-years away in the constellation Eridanus. Credit: NASA’s Goddard Space Flight Center.

A comparatively close system like this one should help us piece together the chemical composition of the planetary atmospheres as well as evidence of clouds and other features, with follow-up studies through instruments like the James Webb Space Telescope using transmission spectroscopy. Adding to the interest of TOI 451 is the fact that there may be a distant companion star, TOI 451 B, identified based on Gaia data on what appears to be a faint star about two pixels away from TOI 451. Or perhaps this is a triple system, as the paper suggests:

We note that Rebull et al. (2016), in their analysis of the Pleiades, detect periods for 92% of the members, and suggest the remaining non-detections are due to non-astrophysical effects. We have suggested TOI 451 B is a binary, which we might expect to manifest as two periodicities in the lightcurve. We only detect one period in our lightcurve; however, a second signal could have been impacted by systematics removal or be present at smaller amplitude than the 1.64 day signal, and so we do not interpret the lack of a second period further.

The difficulty of data collection here is apparent:

TOI 451 and its companion(s) are only separated by 37 arcseconds, or about two TESS pixels, so the images of these two stars overlap substantially on the detector. The light curve of the companion TOI 451 B is clearly contaminated by the 14x brighter primary star.
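The quoted separation squares with TESS’s deliberately coarse optics. A back-of-envelope check, using the commonly cited figure of about 21 arcseconds per TESS pixel:

```python
# How many TESS pixels separate TOI 451 from its companion(s)?
# TESS pixels subtend roughly 21 arcseconds on a side, a value
# taken from the mission's published instrument specifications.
tess_pixel_arcsec = 21.0
separation_arcsec = 37.0  # TOI 451 to TOI 451 B

pixels = separation_arcsec / tess_pixel_arcsec
print(f"separation is about {pixels:.1f} TESS pixels")
# prints: separation is about 1.8 TESS pixels
```

With the stars under two pixels apart and one of them fourteen times brighter, blending of their images on the detector is unavoidable, which is why the extraction required non-standard methods.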

The non-standard methods used to extract the light curve of the companion star(s) are explained in the paper, and I’ll send you there if interested in the details. Note, too, the useful synergy of the TESS and Gaia datasets, which allowed the age of this system to be constrained and also resulted in the discovery of the three planets. As always, rapid growth in our datasets and cross-correlations between them raise the prospect of continuing discovery.

In connection with this work, I should also mention another finding from THYME, the TESS Hunt for Young and Maturing Exoplanets, out of which grew the TOI 451 work. HD 110082 b is a Neptune-class world of approximately 3.2 Earth radii, estimated at about 11 times the mass of the Earth, in a 250-million-year-old stellar system, another useful find when it comes to examining planet formation and evolution. The F-class primary is about 343 light years away.

The paper is Newton et al., “TESS Hunt for Young and Maturing Exoplanets (THYME). IV. Three Small Planets Orbiting a 120 Myr Old Star in the Pisces–Eridanus Stream,” Astronomical Journal Vol. 161, No. 2 (14 January 2021). Abstract / Preprint. The paper on HD 110082 b is Tofflemire et al., “TESS Hunt for Young and Maturing Exoplanets (THYME) V: A Sub-Neptune Transiting a Young Star in a Newly Discovered 250 Myr Association,” accepted at the Astronomical Journal (preprint).


Extraterrestrial: On ‘Oumuamua as Artifact

The reaction to Avi Loeb’s new book Extraterrestrial (Houghton Mifflin Harcourt, 2021) has been quick in coming and dual in nature. I’m seeing a certain animus being directed at the author in social media venues frequented by scientists, not so much for suggesting the possibility that ‘Oumuamua is an extraterrestrial technological artifact, but for triggering a wave of misleading articles in the press. The latter, that second half of the dual reaction, has certainly been widespread and, I have to agree with the critics, often uninformed.

Image credit: Kris Snibbe/Harvard file photo.

But let’s try to untangle this. Because my various software Net-sweepers collect most everything that washes up on ‘Oumuamua, I’m seeing stark headlines such as “Why Are We So Afraid of Extraterrestrials,” or “When Will We Get Serious about ET?” I’m making those particular headlines up, but they catch the gist of many of the stories I’ve seen. I can see why some of the scientists who spend their working days digging into exoplanet research, investigating SETI in various ways, or pondering how to build the spacecraft that are helping us understand the Solar System would be nonplussed.

We are, as a matter of fact, taking the hypothesis of extraterrestrial life, even intelligent extraterrestrial life, more seriously now than ever before, and this is true not just among the general public but also within the community of working scientists. But I don’t see Avi Loeb saying anything that discounts that work. What I do see him saying in Extraterrestrial is that in the case of ‘Oumuamua, scientists are reluctant to consider a hypothesis of extraterrestrial technology even though it stands up to scrutiny — as a hypothesis — and offers as good an explanation as others I’ve seen. Well actually, better, because as Loeb says, it checks off more of the needed boxes.

Invariably, critics quote Sagan: “Extraordinary claims require extraordinary evidence.” Loeb is not overly impressed with the formulation, saying “evidence is evidence, no?” And he goes on: “I do believe that extraordinary conservatism keeps us extraordinarily ignorant. Put differently, the field doesn’t need more cautious detectives.” Fighting words, those. A solid rhetorical strategy, perhaps, but then caution is also baked into the scientific method, as well it should be. So let’s talk about caution and ‘Oumuamua.

Loeb grew up on his family’s farm south of Tel Aviv, hoping at an early age to become a philosopher but delayed in the quest by his military service, where he began to turn to physics. An early project was the use of electrical discharges to propel projectiles, a concept that wound up receiving funding from the US Strategic Defense Initiative during the latter era of the Cold War. He proceeded to do postgraduate work at the Institute for Advanced Study in Princeton, mixing with the likes of Freeman Dyson and John Bahcall, and moved on to become a tenured professor at Harvard. Long before ‘Oumuamua, his life had begun to revolve around the story told in data. He seems to have always believed that data would lead him to an audacious conclusion, and was perhaps primed by his childhood even to expect such an outcome.

I also detect a trace of the mischief-maker, though a very deliberate one. To mix cultures outrageously, Loeb came out of Beit Hanan with a bit of Loki in him. And he’s shrewd: “You ask nature a series of questions and listen carefully to the answers from experiments,” he writes of that era, a credo which likewise informs his present work. Extraterrestrial is offered as a critique of the way we approach the unknown via our scientific institutions, and the reaction to the extraterrestrial hypothesis is displaying many of the points he’s trying to make.

Can we discuss this alien artifact hypothesis in a rational way? Loeb is not sure we can, at least in some venues, given the assumptions and accumulated inertia he sees plaguing the academic community. He describes pressure on young postdocs to choose career paths that will fit into accepted ideas. He asks whether what we might call the science ‘establishment’ is simply top-heavy, a victim of its own inertia, so that the safer course for new students is not to challenge older models.

These seem like rational questions to me, and Loeb uses ‘Oumuamua as the rhetorical church-key that pops open the bottle. So let’s look at what we know about ‘Oumuamua with that in mind. The things that trigger our interest and raise eyebrows arrive as a set of anomalies. They include the fact that the object’s brightness varied by a factor of ten every eight hours, from which astronomers could deduce an extreme shape, much longer than wide. And despite a trajectory that had taken it near the Sun, ‘Oumuamua did not produce an infrared signature detectable by the Spitzer Space Telescope, leading to the conclusion that it must be small, perhaps 100 yards long, if that.
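The step from light-curve amplitude to shape is simple geometry: reflected brightness tracks projected area, so the ratio of maximum to minimum flux constrains how elongated the body must be. A minimal sketch under deliberately naive assumptions (uniform albedo, rotation seen edge-on); real modeling must fold in viewing geometry and light scattering, which is why published estimates for ‘Oumuamua’s elongation range from roughly 5:1 to 10:1:

```python
def min_elongation(flux_max: float, flux_min: float) -> float:
    """Naive lower bound on the long/short axis ratio of a tumbling
    body, assuming brightness is proportional to projected area and
    the rotation is viewed edge-on (simplifying assumptions)."""
    return flux_max / flux_min

# 'Oumuamua's brightness varied by a factor of ten every eight hours.
print(f"naive elongation estimate: {min_elongation(10.0, 1.0):.0f}:1")
# prints: naive elongation estimate: 10:1
```

An ordinary asteroid typically shows amplitudes corresponding to ratios of 3:1 or less, which is what made this light curve so striking.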

‘Oumuamua seemed to be cigar-like in shape, or else flat, either of these being shapes that had not been observed at these extremes in naturally occurring objects in space. Loeb also notes that despite its small size and odd shape, the object was ten times more reflective than typical asteroids or comets in our system. Various theories spawned from all this try to explain its origins, but a slight deviation in trajectory as ‘Oumuamua moved away from the Sun stood out in our two weeks of data. Before the encounter, the object had been moving at the local standard of rest, an unusual state of affairs in itself, and it was the encounter with our Sun that set it moving relative to that frame.

I don’t want to go over ground we’ve already covered in some detail here in the past — a search for ‘Oumuamua in the archives will turn up numerous articles, of which the most germane to this review is probably ‘Oumuamua, Thin Films and Lightsails. This deals with Loeb’s work with Shmuel Bialy on the non-gravitational acceleration, which occurred despite a lack of evidence for either a cometary tail or gas emission and absorption lines. All this despite an approach to the Sun of a tight 0.25 AU.

The fact that we do not see outgassing that could cause this acceleration is not the problem. According to Loeb’s calculations, such a process would have caused ‘Oumuamua to lose about a tenth of its mass, and he points out that this could have been missed by our telescopes. What is problematic is the fact that the space around the object showed no trace of water, dust or carbon-based gases, which makes the comet hypothesis harder to defend. Moreover, whatever the cause of the acceleration, it did not change the spin rate, as we would expect from asymmetrical, naturally occurring jets of material pushing a comet nucleus in various directions.

Extraterrestrial should be on your shelf for a number of reasons, one of which is that it encapsulates the subsequent explanations scientists have given for ‘Oumuamua’s trajectory, including the possibility that it was made entirely of hydrogen, or the possibility that it began to break up at perihelion, causing its outward path to deviate (again, no evidence for this was evident to our instruments). And, of course, he makes the case for his hypothesis that sunlight bouncing off a thin sail would explain what we see, citing recent work on the likelihood that the object was disk-shaped.

So what do we do with such an object, beyond saying that none of our hypotheses can be validated by future observation since ‘Oumuamua is long gone (although do see the i4IS work on Project Lyra)? Now we’re at the heart of the book, for as we’ve seen, Extraterrestrial is less about ‘Oumuamua itself and more about how we do science, and what the author sees as a too conservative approach that is fed by the demands of making a career. He’s compelled to ask: Shouldn’t the possibility of ‘Oumuamua being an extraterrestrial artifact, a technological object, be a bit less controversial than it appears to be, given the growth in our knowledge in recent decades? Let me quote the book:

Some of the resistance to the search for extraterrestrial intelligence boils down to conservatism, which many scientists adopt in order to minimize the number of mistakes they make during their careers. This is the path of least resistance, and it works; scientists who preserve their images in this way receive more honors, more awards, and more funding. Sadly, this also increases the force of their echo effect, for the funding establishes ever bigger research groups that parrot the same ideas. This can snowball; echo chambers amplify conservatism of thought, wringing the native curiosity out of young researchers, most of whom feel they must fall in line to secure a job. Unchecked, this trend could turn scientific consensus into a self-fulfilling prophecy.

Here I’m at sea. I’ve been writing about interstellar studies for the past twenty years and have made the acquaintance of many scientists both through digital interactions and conversations at conferences. I can’t say I’ve found many who are so conservative in their outlook as to resist the idea of other civilizations in the universe. I see ongoing SETI efforts like the privately funded Breakthrough Listen, which Loeb is connected to peripherally through his work with the Breakthrough Starshot initiative to send a probe to Proxima Centauri or other nearby stars. The book contains the background of Starshot by way of showing the public how sails might make sense as the best way to cross interstellar distances, perhaps like Starshot propelled by beamed energy.

I also see active research on astrobiology, while the entire field of exoplanetary science is frothing with activity. To my eye as a writer who covers these matters rather than a scientist, I see a field that is more willing to accept the possibility of extraterrestrial intelligence than ever before. But I’m not working within the field as Loeb is, so his chastening of tribal-like patterns of behavior reflects, I’m sure, his own experience.

When I wrote the piece mentioned above, ‘Oumuamua, Thin Films and Lightsails, it was by way of presenting Loeb’s work on the deviation of the object’s trajectory as caused by sunlight, which he produced following what he describes in the book as “the same scientific tenet I had always followed — a hypothesis that satisfied all the data ought to be considered.” If nature wasn’t producing objects shaped like that of a lightsail that could apparently accelerate through the pressure of photons from a star, then an extraterrestrial intelligence was the exotic hypothesis that could explain it.

The key statement: “If radiation pressure is the accelerating force, then ‘Oumuamua represents a new class of thin interstellar material, either produced naturally…or is of an artificial origin.”

After this, Loeb goes on to say, “everything blew up.” Which is why on my neighborhood walks various friends popped up in short order asking: “So is it true? Is it ET?” I could only reply that I had no idea, and refer them to the discussion of Loeb’s paper on my site. Various headlines announcing that a Harvard astronomer had decided ‘Oumuamua was an alien craft have been all over the Internet. I can see why many in the field find this a nuisance, as they’re being besieged by people asking the same questions, and they have other work they’d presumably like to get on with.

So there are reasons why Extraterrestrial is, to some scientists, a needling, even cajoling book. I can see why some dislike the fact that it was written. But having to talk about one’s work is part of the job description, isn’t it? It was Ernest Rutherford who said that a good scientist should be able to explain his ideas to a barmaid. In these parlous times, we might change Rutherford’s dismissive ‘barmaid’ to a gender-neutral ‘blog writer’ or some such. But the point seems the same.

Isn’t communicating ideas part of the job description of anyone employed to do scientific research? So much of that research is funded by the public through their tax dollars, after all. If Loeb’s prickly book is forcing some scientists to take the time to explain why they think his hypothesis is unlikely, I cannot see that as a bad thing. Good for Avi Loeb, I’d say.

And whatever ‘Oumuamua is, we may all benefit from the discussion it has created. I enjoyed Loeb’s section on exotic theories within the physics community — he calls these “fashionable thought bubbles that currently hold sway in the field of astrophysics,” and in many quarters they seem comfortably accepted:

Despite the absence of experimental evidence, the mathematical ideas of supersymmetry, extra-spatial dimensions, string theory, Hawking radiation, and the multiverse are considered irrefutable and self-evident by the mainstream of theoretical physics. In the words of a prominent physicist at a conference that I attended: “These ideas must be true even without experimental tests to support them, because thousands of physicists believe in them and it is difficult to imagine that such a large community of mathematically gifted scientists could be wrong.”

That almost seems like a straw man argument, except that I don’t doubt someone actually said this — I’ve heard more or less the same sentiment voiced at conferences myself. Even so, I doubt many of the scientists I’ve gotten to know would go that far. But the broader point is sound. Remember, Loeb is all about data, and isn’t it true that multiverse ideas take us well beyond the realm of testable hypotheses? And yet many support them, as witness Leonard Susskind in his book The Black Hole War (2008):

“There is a philosophy that says that if something is unobservable — unobservable in principle — it is not part of science. If there is no way to falsify or confirm a hypothesis, it belongs to the realm of metaphysical speculation, together with astrology and spiritualism. By that standard, most of the universe has no scientific reality — it’s just a figment of our imaginations.”

So Loeb is engaging on this very charged issue that goes to the heart of what we mean by a hypothesis, about the falsifiability of an idea. We know where he stands:

Getting data and comparing it to our theoretical ideas provides a reality check and tells us we are not hallucinating. What is more, it reconfirms what is central to the discipline. Physics is not a recreational activity to make us feel good about ourselves. Physics is a dialogue with nature, not a monologue.

You can see why Extraterrestrial is raising hackles in some quarters, and why Loeb is being attacked for declaring ‘Oumuamua a technology. But of course he hasn’t announced that ‘Oumuamua is an alien artifact. He has said that this is a hypothesis, not a statement of fact; that it fits what we currently know; and that it is plausible, perhaps the most plausible explanation among those that have been offered.

He goes on to call for deepening our commitment to Dysonian SETI, looking for signs of extraterrestrial intelligence through its artifacts, a field becoming known as astro-archaeology. And he considers what openness to the hypothesis could mean in terms of orienting our research and our imagination under the assumption that extraterrestrial intelligence is a likely outcome that should produce observables.

As I said above, Extraterrestrial should be on your shelf because it is above all else germane, with ‘Oumuamua being the tool for unlocking a discussion of how we do research and how we discuss the results. My hope is that it will give new public support to ongoing work that aims to answer the great question of whether we are alone in the universe. A great deal of that work continues even among many who find the ‘Oumuamua-as-technology hypothesis far-fetched and believe it over-reaches.

Is science too conservative to deal with a potentially alien artifact? I don’t think so, but I admire Avi Loeb for his willingness to shake things up and yank a few chains along the way. The debate makes for compelling drama and widens the sphere of discourse. He may well be right that by taking what he calls “‘Oumuamua’s Wager” (based on Pascal’s Wager, and advocating for taking the extraterrestrial technology hypothesis seriously) we would open up new research channels or revivify stagnant ones.

Some of those neighbors of mine that I’ve mentioned actually dug ‘Oumuamua material out of arXiv when I told them about that service and how to use it, an outcome Ernest Rutherford would have appreciated. I see Extraterrestrial as written primarily for people like them, but if it does rattle the cages of some in the physics community, I think the field will somehow muddle through. Add in the fact that Loeb is a compelling prose stylist and you’ll find your time reading him well spent.


Crafting the Bussard Ramjet

The Bussard ramjet is an idea whose attractions do not fade, especially given stunning science fiction treatments like Poul Anderson’s novel Tau Zero. Not long ago I heard from Peter Schattschneider, a physicist and writer who has been exploring the Bussard concept in a soon-to-be-published novel. In the article below, Dr. Schattschneider explains the complications involved in designing a realistic ramjet for his novel, with an interesting nod to the work of John Ford Fishback, whose ideas on magnetic field configurations we have discussed in these pages before; I’ll publish a follow-up piece on Fishback as soon as it is available.

The author is professor emeritus in solid state physics at Technische Universität Wien, but he has also worked for a private engineering company as well as the French CNRS, and has been director of the Vienna University Service Center for Electron Microscopy. With more than 300 research articles in peer-reviewed journals and several monographs on electron-matter interaction, Dr. Schattschneider’s current research focuses on electron vortex beams, which are exotic probes for solid state spectroscopy. He tells me that his interest in physics emerged from an early fascination with science fiction, leading to the publication of several SF novels in German and many short stories in SF anthologies, some of them translated into English and French. As we see below, so-called ‘hard’ science fiction, scrupulously faithful to physics, demands attention to detail while pushing into fruitful speculation about future discovery.

by Peter Schattschneider

When the news about the BLC1 signal from Proxima Centauri came in, I was just finishing a scientific novel about an expedition to our neighbour star. Good news, I thought – the hype would spur interest in space travel. Disappointment set in immediately: Should the signal turn out to be real, this kind of science fiction would land in the dustbin.

Image: Peter Schattschneider. Credit & copyright: Klaus Ranger Fotografie.

The space ship in the novel is a Bussard ramjet. Collecting interstellar hydrogen with some kind of electrostatic or magnetic funnel that would operate like a giant vacuum cleaner is a great idea promoted by Robert W. Bussard in 1960 [1]. Interstellar protons (and some other stuff) enter the funnel at the ship’s speed without further ado. Fusion to helium will not pose a problem in a century or so (ITER is almost working), conversion of the energy gain into thrust would work as in existing thrusters, and there you go!

Some order-of-magnitude calculations show that it isn’t as simple as that. But more on that later. Let us first look at the more mundane problems occurring on a journey to our neighbour. The values given below were taken from my upcoming The EXODUS Incident [2], calculated for a ship mass of 1500 tons, an efficiency of 85% of the fusion energy going into thrust, an interstellar medium of density 1 hydrogen atom/cm3, completely ionized by means of electron strippers.

On the Way

Like existing ramjets, the Bussard ramjet is an assisted take-off engine. In order to harvest fuel it needs a take-off speed, here 42 km/s, the escape velocity from the solar system. The faster a Bussard ramjet goes, the higher the thrust, which means that one cannot assume a constant acceleration but must solve the dynamic rocket equation. The following table shows acceleration, speed and duration of the journey for different scoop radii.

At the midway point, the thrust is inverted to slow the ship down for arrival. To achieve an acceleration of the order of 1 g (as for instance in Poul Anderson’s celebrated novel Tau Zero [3]), the fusion drive must produce a thrust of 18 million Newton, about half the thrust of the Saturn-V. That doesn’t seem tremendous, but a short calculation reveals that one needs a scoop radius of about 3500 km to harvest enough fuel because the density of the interstellar medium is so low. Realizing magnetic or electric fields of this dimension is hardly imaginable, even for an advanced technology.
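The severity of the fuel problem is easy to check: the mass swept up per second is just the medium’s density times ship speed times scoop area (ρvπR²). A minimal sketch, using the article’s density of 1 hydrogen atom per cubic centimeter; the 10%-of-lightspeed cruise speed is my own illustrative assumption:

```python
# Hydrogen collection rate for a Bussard scoop: rho * v * pi * R^2.
# Density from the article (1 atom/cm^3); the 10%-of-c cruise speed
# is an illustrative assumption, not a figure from the novel.
import math

M_H = 1.67e-27   # hydrogen atom mass [kg]
RHO = 1e6 * M_H  # 1 atom/cm^3 = 1e6 atoms/m^3, as a mass density [kg/m^3]

def collection_rate(scoop_radius_m: float, speed_m_s: float) -> float:
    """Mass of interstellar hydrogen swept up per second [kg/s]."""
    return RHO * speed_m_s * math.pi * scoop_radius_m ** 2

v = 0.1 * 3e8  # 10% of light speed
for r_km in (200, 1000, 3500):
    print(f"scoop radius {r_km:>4} km -> {collection_rate(r_km * 1e3, v):.2e} kg/s")
```

Even the enormous 3500 km scoop gathers only a couple of kilograms of hydrogen per second at this speed, and fusion to helium releases only about 0.7% of that rest mass as energy, which is why the numbers are so unforgiving.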

A perhaps more realistic funnel entrance of 200 km results in a time of flight of almost 500 years. Such a scenario would call for a generation starship. I thought that an acceleration of 0.1 g was perhaps a good compromise, avoiding both technical and social fantasizing. It stipulates a scoop radius of 1000 km, still enormous, but let us play the “what-if” game: The journey would last 17.3 years, quite reasonable with future cryo-hibernation. The acceleration increases slowly, reaching a maximum of 0.1 g after 4 years. Interestingly, after that the acceleration decreases, although the speed and therefore the proton influx increases. This is because the relativistic mass of the ship increases with speed.
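The peak-then-decline behavior can be reproduced with a toy model (my own simplification, not the novel’s full dynamic rocket equation): take thrust proportional to speed, since the swept-up proton flux grows with v, and divide by the relativistic γ³ that governs coordinate acceleration for a force along the direction of motion.

```python
# Toy Bussard dynamics: thrust ~ beta (proton influx grows with speed),
# coordinate acceleration dv/dt = F / (gamma^3 * m). The gamma^3 factor
# eventually wins, so dv/dt rises, peaks, and then falls.
import math

def coord_accel(beta: float) -> float:
    """Coordinate acceleration up to a constant factor."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return beta / gamma ** 3

betas = [i / 1000 for i in range(1, 1000)]
peak = max(betas, key=coord_accel)
print(f"acceleration peaks near beta = {peak:.2f}")  # beta = 0.50
```

In this toy model the peak sits at exactly half the speed of light; in the more complete treatment behind the numbers above, the maximum arrives after about 4 years of flight.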

Fusion Drive

It has been pointed out by several authors that the “standard” operation of a fusion reactor, burning deuterium (2D) into helium (3He), cannot work because the amount of 2D in interstellar space is too low. The proton-proton burning that would render p+p → 2D for the 2D → 3He reaction is 24 orders of magnitude (!) slower.

The interstellar ramjet seemed impossible until in 1975 Daniel Whitmire [4] proposed the Bethe-Weizsäcker or CNO cycle that operates in hot stars. Here, carbon, nitrogen and oxygen serve as catalysts. The reaction is fast enough for thrust production. The drawback is that it needs a very high core temperature of the plasma of several hundred million Kelvin. Reaction kinetics, cross sections and other gadgets stipulate a plasma volume of at least 6000 m3 which makes a spherical chamber of 11 m radius (for design aficionados a torus or – who knows? – a linear chamber of the same order of magnitude).
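The chamber size quoted is just geometry, solving V = (4/3)πr³ for r:

```python
# Radius of a spherical fusion chamber holding the ~6000 m^3 of plasma
# that the CNO reaction kinetics demand.
import math

def sphere_radius(volume_m3: float) -> float:
    """Radius of a sphere with the given volume."""
    return (3.0 * volume_m3 / (4.0 * math.pi)) ** (1.0 / 3.0)

print(f"chamber radius: {sphere_radius(6000):.1f} m")  # ~11 m
```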

At this point, it should be noted that the results shown above were obtained without taking account of many limiting conditions (radiation losses, efficiency of the fusion process, drag, etc.). The numerical values are at best accurate to the first digit. They should be understood as optimistic estimates, and not as input for the engineer.

Waste Heat

Radioactive high-energy by-products of the fusion process are blocked by a massive wall between the engine and the habitable section, made up of heavy elements. This is not the biggest problem because we already handle it in the experimental ITER design. The main problem is waste heat. The reactor produces 0.3 million GW. Assuming an efficiency of 85% going into thrust, the waste energy is still 47,000 GW in the form of neutrinos, high energy particles and thermal radiation. The habitable section should be at a considerable distance from the engine in order not to roast the crew. An optimistic estimate renders a distance of about 800 m, with several stacks of cooling fins in between. The surface temperature of the sternside hull would be at a comfortable 20-60 degrees Celsius. Without the shields, the hull would receive waste heat at a rate of 6 GW/m2, 5 million times more than the solar constant on earth.
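The waste-heat arithmetic follows from the stated figures; my quick check lands slightly below the article’s ~47,000 GW, presumably because the reactor power is rounded to 0.3 million GW:

```python
# Waste power = (1 - efficiency) * reactor power, using the figures
# quoted above (0.3 million GW output, 85% into thrust).
reactor_power_w = 0.3e6 * 1e9  # 0.3 million GW, in watts
efficiency = 0.85              # fraction of fusion energy into thrust

waste_w = (1.0 - efficiency) * reactor_power_w
print(f"waste power: {waste_w / 1e9:,.0f} GW")  # ~45,000 GW
```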

Radiation Shielding

An important aspect of the Bussard ramjet design is shielding from cosmic rays. At the maximum speed of 60% of light speed, interstellar hydrogen hits the bow with a kinetic energy of 200 MeV per proton, dangerous for the crew. Arthur C. Clarke proposed a protective ice sheet at the bow of a starship in his novel The Songs of Distant Earth [5]. A similar solution is also known from modern proton cancer therapy: the penetration depth of such protons in tissue (or water, for that matter) is 26 cm. So it suffices to put a 26 cm thick water tank at the bow.
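The 26 cm figure can be checked against the empirical Bragg-Kleeman rule from proton therapy, R ≈ 0.0022·E^1.77 cm in water (my choice of approximation, not the author’s; it holds to a few percent between roughly 10 and 250 MeV):

```python
# Empirical Bragg-Kleeman rule for proton range in water:
# R [cm] ~ 0.0022 * E^1.77, with E in MeV. A standard approximation
# from proton therapy, valid roughly from 10 to 250 MeV.
def proton_range_cm(energy_mev: float) -> float:
    return 0.0022 * energy_mev ** 1.77

print(f"range of 200 MeV protons in water: {proton_range_cm(200):.0f} cm")  # ~26 cm
```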

Artificial Gravity

It is known that long periods of zero gravity are disastrous to the human body. It is therefore advisable to have the ship rotate in order to create artificial gravity. In such an environment there are unusual phenomena, e.g. a different barometric height equation, or atmospheric turbulence caused by Coriolis forces. Throwing an object in a rotating space ship has surprising consequences, exemplified in Fig. 1. Funny speculations about exquisite sporting activities are allowed.

Fig. 1: Freely falling objects in a rotating cylinder, thrown in different directions with the same starting speed. In this example, drawn from my novel, the cylinder has a radius of 45 m, rotating such that the artificial gravity on the inner hull is 0.3 g. The object is thrown with 40 km/h in different directions. Seen by an observer at rest, the cylinder rotates counterclockwise.
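For the curious, trajectories like those in the figure can be computed with a few lines: in the inertial frame a thrown object simply moves in a straight line until it meets the hull again; the curves of Fig. 1 appear when that line is transformed into the rotating frame. A sketch with the figure’s parameters (the 45-degree throw direction is my own choice):

```python
# Flight of a thrown object in a rotating habitat (Fig. 1 parameters:
# radius 45 m, 0.3 g at the hull). In the inertial frame the flight is
# a straight line; the curved paths belong to the rotating frame.
import math

R = 45.0                      # cylinder radius [m]
G_ART = 0.3 * 9.81            # artificial gravity at the hull [m/s^2]
OMEGA = math.sqrt(G_ART / R)  # spin rate giving that gravity [rad/s]

def flight_time(v_throw: float, angle_rad: float, dt: float = 1e-3) -> float:
    """Seconds aloft for a throw from the hull, spinward at
    angle_rad above the local horizontal."""
    x, y = R, 0.0                                   # start on the hull
    vx = -v_throw * math.sin(angle_rad)             # radially inward
    vy = OMEGA * R + v_throw * math.cos(angle_rad)  # rim speed + throw
    t = 0.0
    while True:                                     # straight-line flight
        x += vx * dt
        y += vy * dt
        t += dt
        if math.hypot(x, y) >= R:                   # back on the hull
            return t

t = flight_time(40 / 3.6, math.radians(45))  # 40 km/h, 45 degrees up
print(f"airborne for about {t:.1f} s")
```

A spinward 40 km/h throw stays aloft only about 1.6 s. Note that the rim speed here is ωR ≈ 11.5 m/s, about 41 km/h: an anti-spinward throw at just that speed would cancel the rotation, leaving the ball floating in place while the hull sweeps past, which suggests some of those exquisite sports.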

Scooping

The central question for scooping hydrogen is this: Which electric or magnetic field configuration allows us to collect a sufficient amount of interstellar hydrogen? There are solutions for manipulating charged particles: colliders use magnetic quadrupoles to keep the beam on track. The symmetry of the problem stipulates a cylindrical field configuration, such as ring coils or round electrostatic or magnetic lenses which are routinely used in electron microscopy. Such lenses are annular ferromagnetic yokes with a round bore hole of the order of a millimeter. They focus an incoming electron beam from a diameter of some microns to a nanometer spot.

Scaling the numbers up, one could dream of collecting incoming protons over tens of kilometers into a spot of less than 10 meters, good enough as input to a fusion chamber. This task is a formidable technological challenge, and in any case the sheer mass of such a lens rules it out. Apart from that, one is still far away from the needed scoop radius of 1000 km.

The next best idea relates to the earth’s magnetic dipole field. It is known that charged particles follow the field lines over long distances, for instance causing aurora phenomena close to earth’s magnetic poles. So it seems that a simple ring coil producing a magnetic dipole is a promising device. Let’s have a closer look at the physics. In a magnetic field, charged particles obey the Lorentz force. Calculating the paths of the interstellar protons is then a simple matter of plugging the field into the force equation. The result for a dipole field is shown in Fig. 2.

Fig. 2: Some trajectories of protons starting at z=2R in the magnetic field of a ring coil of radius R that sits at the origin. Magnetic field lines (light blue) converge towards the loop hole. Only a small part of the protons would pass through the ring (red lines), spiralling down according to cyclotron gyration. The rest is deflected (black lines).

An important fact is seen here: the scoop radius is smaller than the coil radius. It turns out that it diminishes further when the starting point of the protons is set at higher z values. This starting point is defined where the coil field is as low as the galactic magnetic field (~1 nT). Taking a maximum field of a few Tesla at the origin and the 1/(z/R)3 decay of the dipole field, where R is the coil radius (10 m in the example), the charged particles begin to sense the scooping field at a distance of 10 km. The scoop radius at this distance is a ridiculously small 2 cm. All particles outside this radius are deflected, producing drag.
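The 10 km figure follows directly from the dipole decay law: set B0(R/z)³ equal to the galactic field and solve for z. A quick check, with B0 = 1 T as my representative pick for the maximum field:

```python
# Distance at which a coil's dipole field fades into the galactic
# background: solve B0 * (R/z)**3 = B_gal for z.
B0 = 1.0      # maximum field at the coil [T] (representative pick)
B_GAL = 1e-9  # galactic magnetic field [T]
R = 10.0      # coil radius [m]

z = R * (B0 / B_GAL) ** (1.0 / 3.0)
print(f"scooping field felt from about {z / 1e3:.0f} km out")  # ~10 km
```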

In short, loop coils are hopelessly inefficient for hydrogen scooping, but they are ideal braking devices for future deep space probes, and interestingly they may also serve as protection shields against cosmic radiation. On Proxima b, strong flares of the star create particle showers, largely protons of 10 to 50 MeV energy. A loop coil protects the crew as shown in Fig. 3.

Fig.3: Blue: Magnetic field lines from a horizontal superconducting current loop of radius R=30 cm. Red lines are radial trajectories of stellar flare protons of 10 MeV energy approaching from top. The loop and the mechanical protection plate (a 3 cm thick water reservoir colored in blue) are at z=0. It absorbs the few central impinging particles. The fast cyclotron motion of the protons creates a plasma aureole above the protective plate, drawn as a blue-green ring right above the coil. The field at the coil center is 6 Tesla, and 20 milliTesla at ground level.

After all this paraphernalia the central question remains: Can a sufficient amount of hydrogen be harvested? From the above it seems that magnetic dipole fields, or even a superposition of several dipole fields, cannot do the job. Surprisingly, this is not quite true. For it turns out that an arcane article from 1969 by a certain John Ford Fishback [6] gives us hope, but this is another story and will be narrated at a later time.

References

1. Robert W. Bussard: Galactic Matter and Interstellar Flight. Astronautica Acta 6 (1960), 1-14.

2. P. Schattschneider: The EXODUS Incident – A Scientific Novel. Springer Nature, Science and Fiction Series. May 2021, DOI: 10.1007/978-3-030-70019-5.

3. Poul Anderson: Tau Zero (1970).

4. Daniel P. Whitmire: Relativistic Spaceflight and the Catalytic Nuclear Ramjet. Acta Astronautica 2 (1975), 497-509.

5. Arthur C. Clarke: The Songs of Distant Earth (1986).

6. John F. Fishback: Relativistic Interstellar Space Flight. Astronautica Acta 15 (1969), 25-35.


Technosignatures: Looking to Planetary Atmospheres

While we often think about so-called Dysonian SETI, which looks for signatures of technology in our astronomical data, as a search for Dyson spheres, the parameter space it defines is getting to be quite wide. A technosignature has to be both observable and unique, to distinguish it from natural phenomena. Scientists working this aspect of SETI have considered not just waste heat (a number of searches for distinctive infrared signatures of Dyson spheres have been run), but also artificial illumination, technological features on planetary surfaces, artifacts not associated with a planet, stellar pollution and megastructures.

Thus the classic Dyson sphere, a star enclosed by a swarm or even shell of technologies to take maximum advantage of its output, is only one option for SETI research. As Ravi Kopparapu (NASA GSFC) and colleagues point out in an upcoming paper, we can also cross interestingly from biosignature searches to technosignatures by looking at planetary atmospheres.

Biosignature science is the more developed of the two fields, though we’re seeing a lot of activity in technosignature work, the robust nature of which can be seen in the extensive references the Kopparapu team identifies. As applied to atmospheres, a search for technosignatures can involve looking for various forms of pollution that flag industrial activity.

To my knowledge, most work on atmospheric pollution has targeted chlorofluorocarbons (CFCs), a useful choice because CFCs have no biological source, although our own use of them occurred in a fairly brief window and for a specific purpose (refrigeration). The NASA work targets the much more ubiquitous nitrogen dioxide (NO2), which can be a by-product of an industrial process and in general is produced by any form of combustion.

As Kopparapu notes:

“In the lower atmosphere (about 10 to 15 kilometers or around 6.2 to 9.3 miles), NO2 from human activities dominate compared to non-human sources. Therefore, observing NO2 on a habitable planet could potentially indicate the presence of an industrialized civilization.”

Adds Giada Arney, a co-author on the paper and a colleague of Kopparapu at GSFC:

“On Earth, about 76 percent of NO2 emissions are due to industrial activity. If we observe NO2 on another planet, we will have to run models to estimate the maximum possible NO2 emissions one could have just from non-industrial sources. If we observe more NO2 than our models suggest is plausible from non-industrial sources, then the rest of the NO2 might be attributed to industrial activity. Yet there is always a possibility of a false positive in the search for life beyond Earth, and future work will be needed to ensure confidence in distinguishing true positives from false positives.”

Image: Artist’s illustration of a technologically advanced exoplanet. The colors are exaggerated to show the industrial pollution, which otherwise is not visible. Credit: NASA/Jay Freidlander.

This is evidently the first time NO2 has been examined in technosignature terms. The scientists deploy a cloud-free 1-dimensional photochemical model that uses the atmospheric temperature profile of today’s Earth to examine possible mixing ratio profiles of nitrogen oxide compounds on a planet orbiting several stellar types: a G-class star like the Sun, a K6V star, and two M-dwarfs, one of them Proxima Centauri. The authors then calculate the observability of these NO2 features, considering observing platforms like the James Webb Space Telescope and the projected Large UV/Optical/IR Surveyor (LUVOIR) instrument.

Usefully, atmospheric NO2 strongly absorbs some wavelengths of visible light, and the authors’ calculations show that an Earth-like planet orbiting a star like the Sun could be studied from as far as 30 light years away and an NO2 signature detected even with a civilization producing the pollutant at roughly the same levels we do today. This would involve observing at visible wavelengths over the course of at least 400 hours, which parallels what the Hubble instrument needed to produce its well-known Deep Field observations.

But adding yet more interest to K-class stars, whose fortunes as future targets for bio- and technosignature observations seem to be rising, is the fact that stars cooler than the Sun should generate a stronger NO2 signal. These stars produce less ultraviolet light that can break down NO2. As to M-dwarfs, we have this:

Further work is needed to explore the detectability of NO2 on Earth-like planets around M-dwarfs in direct imaging observations in the near-IR with ground-based 30 m class telescopes. NO2 concentrations increase on planets around cooler stars due to reduced availability of short-wavelength photons that can photolyze NO2 . Non-detectability at longer observation times could place upper limits on the amount [of] NO2 present on M-dwarf HZ planets like Prox Cen b.

Where work will proceed is in the model used to make these calculations, which will need to be more complex, as the paper acknowledges:

…when we prescribe water-ice and liquid water clouds, there is a moderate decrease in the SNR of the geometric albedo spectrum from LUVOIR-15 m, with present Earth-level NO2 concentration on an Earth-like planet around a Sun-like star at 10 pc. Clouds and aerosols can reduce the detectability and could mimic the NO2 feature, posing a challenge to the unique identification of this signature. This highlights the need for performing these calculations with a 3-D climate model which can simulate variability of the cloud cover and atmospheric dynamics self-consistently.

The authors consider biosignatures and technosignatures to be “two sides of the same coin,” a nod to the fact that we should be able to search for each at the same time with the next generation of observatories. Finding the common ground between biosignature research and SETI seems overdue, for a positive result for either would demonstrate life’s emergence elsewhere in the universe, and that remains question number one.

The paper is Kopparapu et al., “Nitrogen Dioxide Pollution as a Signature of Extraterrestrial Technology,” accepted at the Astrophysical Journal. (Preprint).


Interstellar Travel and Stellar Evolution

The stars move ever on. What seems like a fixed distance due to the limitations of our own longevity morphs over time into an evolving maze of galactic orbits as stars draw closer to and then farther away from each other. If we were truly long-lived, we might ask why anyone would be in such a hurry to mount an expedition to Alpha Centauri. Right now we’d have to travel 4.2 light years to get to Proxima Centauri and its interesting habitable zone planet. But 28,000 years from now, Alpha Centauri — all three stars — will have drawn to within 3.2 light years of us.

But we can do a lot better than that. Gliese 710 is a K-dwarf about 64 light years away in the constellation Serpens Cauda. For the patient among us, it will move in about 1.3 million years to within 14,000 AU, placing it well within the Oort Cloud and making it an obvious candidate for worst cometary orbit disruptor of all time. But read on. Stars have come much closer than this.

In any case, imagine another star just 14,000 AU away, 20 times closer than Proxima Centauri is right now. Suddenly interstellar flight looks a bit more plausible, just as it would if we could, by some miracle, find ourselves in a globular cluster like M80, where stellar distances, at the densest point, can be something on the order of the size of the Solar System.

Image: This stellar swarm is M80 (NGC 6093), one of the densest of the 147 known globular star clusters in the Milky Way galaxy. Located about 28,000 light-years from Earth, M80 contains hundreds of thousands of stars, all held together by their mutual gravitational attraction. Globular clusters are particularly useful for studying stellar evolution, since all of the stars in the cluster have the same age (about 12 billion years), but cover a range of stellar masses. Every star visible in this image is either more highly evolved than, or in a few rare cases more massive than, our own Sun. Especially obvious are the bright red giants, which are stars similar to the Sun in mass that are nearing the ends of their lives. Credit: NASA, The Hubble Heritage Team, STScI, AURA.

These thoughts are triggered by a paper from Bradley Hansen and Ben Zuckerman, both at UCLA, with the interesting title “Minimal Conditions for Survival of Technological Civilizations in the Face of Stellar Evolution.” The authors note the long-haul perspective: The physical barriers we associate with interstellar travel are eased dramatically if species attempt such journeys only in times of close stellar passage. Put another star within 1500 AU, dramatically closer than even Gliese 710 will one day be, and the travel time is reduced by perhaps two orders of magnitude compared with the times needed to travel under average stellar separations near the Sun today.

I find this an interesting thought experiment, because it helps me visualize the galaxy in motion and our place within it in the time of our civilization (whether or not our civilization will last is Frank Drake’s L factor in his famous equation, and for today I posit no answer). All depends upon the density of stars in our corner of the Orion Arm and their kinematics, so location in the galaxy is the key. Just how far apart are stars in Sol’s neighborhood right now?

Drawing on research from Gaia data as well as the stellar census of the local 10-parsec volume compiled by the REsearch Consortium On Nearby Stars (RECONS), we find that 81 percent of the main-sequence stars in this volume have masses below half that of the Sun, meaning most of the close passages we would experience will be with M-dwarfs. The average distance between stars in our neck of the woods is 3.85 light years, pretty close to what separates us from Alpha Centauri. RECONS counts 232 single-star systems and 85 multiple in this space.
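The 3.85 light year figure is the classic mean nearest-neighbour distance for a random (Poisson) distribution of points, ⟨d⟩ = 0.554·n^(-1/3). A sketch, where the local density of roughly 0.1 stars per cubic parsec is my own estimate from the RECONS census, counting companions in multiple systems individually:

```python
# Mean nearest-neighbour distance for randomly distributed stars of
# number density n [stars/pc^3]: <d> = 0.554 * n**(-1/3) parsecs.
LY_PER_PC = 3.2616  # light years per parsec

def mean_nn_distance_ly(n_per_pc3: float) -> float:
    return 0.554 * n_per_pc3 ** (-1.0 / 3.0) * LY_PER_PC

print(f"mean nearest neighbour: {mean_nn_distance_ly(0.10):.2f} ly")
```

which lands close to the 3.85 light years quoted above.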

Hansen and Zuckerman are intrigued. They ask what a truly patient civilization might do to make interstellar travel happen only at times when a star is close by. We can’t know whether a given civilization would necessarily expand to other stars, but the authors think there is one reason that would compel even the most recalcitrant into attempting the journey. That would be the swelling of the parent star to red giant status. Here’s the question:

As mentioned above, this stellar number density yields an average nearest neighbor distance between stars of 3.85 light years. However, such estimates rely on the standard snapshot picture of interstellar migration − that a civilization decides to embark instantaneously (at least, in cosmological terms) and must simply accept the local interstellar geography as is. If one were prepared to wait for the opportune moment, then how much could one reduce the travel distance, and thus the travel time?

Maybe advanced civilizations don’t tend to make interstellar journeys until they have to, meaning when problems arise with their central star. If this is the case, we might expect stars in close proximity at any given era — ruling out close binaries but talking only about stars that are passing and not gravitationally bound — to be those between which we could see signs of activity, perhaps as artifacts in our data implying migration away from a star whose gradual expansion toward future red giant phase is rendering life on its planets more and more unlivable.

Here we might keep in mind that in our part of the galaxy, about 8.5 kiloparsecs out from galactic center, the density of stars is what the authors describe as only ‘modest.’ Higher encounter rates occur depending on how close we want to approach galactic center.

Reading this paper reminds me why I wish I had the talent to be a science fiction writer. Stepping back to take the ‘deep time’ view of galactic evolution fires the imagination as little else can. But I leave fiction to others. What Hansen and Zuckerman point out is that we can look at our own Solar System in these same terms. Their research shows that if we take the encounter rate they derive for our Sun and multiply it by the 4.6 billion year age of our system, we can assume that at some point within that time a star passed within a breathtaking 780 AU.

Image: A passing star could dislodge comets from otherwise stable orbits so that they enter the inner system, with huge implications for habitable worlds. Is this a driver for travel between stars? Credit: NASA/JPL-Caltech.

Now let’s look forward. A gradually brightening Sun eventually pushes us — our descendants, perhaps, or whatever species might be on Earth then — to consider leaving the Solar System. Recent work sees this occurring when the Sun reaches an age of about 5.7 billion years. Thus the estimate for remaining habitability on Earth is about a billion years. The paper’s calculations show that within this timeframe, the median distance of closest stellar approach to the Sun is 1500 AU, with an 81 percent chance that a star will close to within 5000 AU. From the paper:

Thus, an attempt to migrate enough of a terrestrial civilization to ensure longevity can be met within the minimum requirement of travel between 1500 and 5000 AU. This is two orders of magnitude smaller than the current distance to Proxima Cen. The duration of an encounter, with the closest approach at 1500 AU, assuming stellar relative velocities of 50km/s, is 143 years. In the spirit of minimum requirements, we note that our current interstellar travel capabilities are represented by the Voyager missions (Stone et al. 2005); these, which rely on gravity assists off the giant planets, have achieved effective terminal velocities of ∼ 20 km/s. The escape velocity from the surface of Jupiter is ∼ 61 km/s, so it is likely one can increase these speeds by a factor of 2 and achieve rendezvous on timescales of order a century.
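The quoted figures are easy to reproduce; the 143-year encounter duration corresponds to the 1500 AU closest-approach distance divided by the 50 km/s relative velocity, and the century-scale crossing to covering that same distance at roughly twice Voyager’s speed:

```python
# Reproducing the paper's arithmetic: encounter duration at closest
# approach, and crossing time at roughly twice Voyager's 20 km/s.
AU_M = 1.496e11          # astronomical unit [m]
YR_S = 3.156e7           # year [s]

closest_m = 1500 * AU_M  # closest stellar approach
v_rel = 50e3             # relative stellar velocity [m/s]
v_ship = 40e3            # achievable ship speed [m/s]

duration_yr = closest_m / (v_rel * YR_S)   # ~142 yr (paper: 143)
crossing_yr = closest_m / (v_ship * YR_S)  # ~178 yr, 'order a century'
print(f"encounter: {duration_yr:.0f} yr, crossing: {crossing_yr:.0f} yr")
```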

My takeaway on this parallels what the authors say: We can conceive of an interstellar journey in this distant era that relies on technologies not terribly advanced beyond where we are today, with travel times on the order of a century. The odds on such a journey being feasible for other civilizations rise as we move closer to galactic center. At 2.2 kiloparsecs from the center, where peak density seems to occur, the characteristic encounter distance is 250 AU over the course of 10 billion years, or an average 800 AU during a single one billion year period.

You might ask, as the authors do, how binary star systems would affect these outcomes, and it’s an interesting point. Perhaps 80 percent of all G-class star binaries will have separations of 1000 AU or less, which the authors consider disruptive to planet formation. Where technological civilizations do arise in binary systems, having a companion star is an obvious driver for interstellar travel. But single stars like ours would demand migration to another system.

We can plug Hansen and Zuckerman’s work into the ongoing discussion of interstellar migration. From the paper:

Our hypothesis bears resemblance to the slow limit in models of interstellar expansion (Wright et al. 2014; Carroll-Nellenback et al. 2019). In a model in which civilizations diffuse away from their original locations with a range of possible speeds, the behavior at low speeds is no longer a diffusion wave but rather a random seeding dominated by the interstellar dispersion. Even in this limit, the large age of the Galaxy allows for widespread colonization unless the migration speeds are sufficiently small. In this sense our treatment converges with prior work, but our focus is very different. We are primarily interested in how a long-lived technological civilization may respond to stellar evolution and not how such civilizations may pursue expansion as a goal in and of itself. Thus our discussion demonstrates the requirements for technological civilizations to survive the evolution of their host star, even in the event that widespread colonization is physically infeasible.

It’s interesting that the close passage of a second star is a way to reduce the search space for SETI purposes if we go looking for the technological signature of a civilization in motion. Separating out stars undergoing close passage from truly bound binaries is another matter, and one that would, the authors suggest, demand a solid program for eliminating false positives.

Ingenious. An imaginative exercise like this, or Greg Laughlin and Fred Adams’ recent work on ‘black cloud’ computing, offers us perspectives on the galactic scale, a good way to stretch mental muscles that can sometimes atrophy when limited to the near-term. Which is one reason I read science fiction and pursue papers from people working the far edge of astrophysics.

The paper is Hansen and Zuckerman, “Minimal conditions for survival of technological civilizations in the face of stellar evolution,” in process at the Astronomical Journal (preprint). Thanks to Antonio Tavani for the pointer on a paper I hadn’t yet discovered.


‘Farfarout’ Confirmed Far Beyond Pluto

One thing is certain about the now confirmed object that is being described as the most distant ever observed in our Solar System. We’ll just be getting used to using the official designation of 2018 AG37 (bestowed by the Minor Planet Center according to IAU protocol) when it will be given an official name, just as 2003 VB12 was transformed into Sedna and 2003 UB313 became Eris. It’s got a charming nickname, though, the jesting title “Farfarout.”

I assume the latter comes straight from the discovery team, and it’s a natural because the previous most distant object, found in 2018, was dubbed “Farout” by the same team of astronomers. That team includes Scott Sheppard (Carnegie Institution for Science), Chad Trujillo (Northern Arizona University) and David Tholen (University of Hawaiʻi). Farout, by the way, has the IAU designation 2018 VG18, but has not to my knowledge received an official name. Trans-Neptunian objects can be useful for investigating the gravitational effects of possible larger objects — like the putative Planet 9 — deep in the reaches of the system.

Image: Solar System distances to scale, showing the newly discovered planetoid, nicknamed “Farfarout,” compared to other known Solar System objects, including the previous record holder 2018 VG18 “Farout,” also found by the same team. Credit: Roberto Molar Candanosa, Scott S. Sheppard (Carnegie Institution for Science) and Brooks Bays (University of Hawaiʻi).

As to Farfarout, it turned up in data collected at the Subaru 8-meter telescope at Maunakea (Hawaiʻi) in 2018, with observations at Gemini North and the Magellan telescopes (Las Campanas Observatory, Chile) helping to constrain its orbit. Its average distance from the Sun appears to be 101 AU, but the orbit is elliptical, reaching 175 AU at aphelion and closing to 27 AU (inside the orbit of Neptune) at its closest approach to the Sun. That makes for a single revolution about the Sun that lasts a thousand years, and a long history of gravitational interactions with Neptune.
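The thousand-year period follows directly from Kepler’s third law, which for orbits around the Sun reduces to P² = a³ with P in years and a in AU. A quick check using the quoted orbital extremes:

```python
# Kepler's third law check on Farfarout's quoted orbit.
# For solar orbits: P[years]^2 = a[AU]^3.
aphelion_au, perihelion_au = 175, 27

a = (aphelion_au + perihelion_au) / 2   # semi-major axis, ~101 AU
period_years = a ** 1.5                 # orbital period from Kepler's third law

print(round(a), round(period_years))    # ~101 AU, ~1015 years
```

The result agrees with both the 101 AU average distance and the roughly thousand-year revolution cited above.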

Farfarout is thought to be about 400 kilometers in diameter, making it a very small dwarf planet, though this would depend on interpretations of its albedo and the assumption that it is an icy object. In any case, its gravitational dealings with Neptune over the course of the Solar System’s history affect its usefulness as a marker for detecting massive objects further out. For that, we turn to objects like Sedna and 2012 VP113, which do not approach Neptune.

On the other hand, the Neptune interactions can be useful, as Chad Trujillo points out:

“Farfarout’s orbital dynamics can help us understand how Neptune formed and evolved, as Farfarout was likely thrown into the outer solar system by getting too close to Neptune in the distant past. Farfarout will likely strongly interact with Neptune again since their orbits continue to intersect.”

Image: An early estimate of Farfarout’s orbit. Credit: Tomruen / JPL, CC BY-SA 4.0.

We’re at the early stages of our explorations of the outer system, and it’s safe to assume that a windfall of such objects awaits astronomers as our cameras and telescopes continue to improve. Sheppard, Tholen and Trujillo will doubtless turn up more as they continue the hunt for Planet 9.


Imaging Alpha Centauri’s Habitable Zones

We may or may not have imaged a planet around Alpha Centauri A, possibly a ‘warm Neptune’ at an orbital distance of roughly 1 AU, the distance between Earth and the Sun. Let’s quickly move to the caveat: This finding is not a verified planet, and may in fact be an exozodiacal disk detection or even a glitch within the equipment used to see it.

But as the paper notes, the finding called C1 “is not a known systematic artifact, and is consistent with being either a Neptune-to-Saturn-sized planet or an exozodiacal dust disk.” So this is interesting.

As it may be some time before we can make the call on C1, I want to emphasize not so much the possible planet as the method used to investigate it. What the team behind a new paper in Nature Communications has revealed is a system for imaging in the mid-infrared, coupled with long observing times, that can extend the capabilities of ground-based telescopes to capture planets in the habitable zones of nearby stars.

Lead author Kevin Wagner (University of Arizona Steward Observatory) and colleagues describe a method showing a tenfold improvement over existing direct imaging solutions. Wavelength is important here, for exoplanet imaging usually works at infrared wavelengths below the optimum. Wagner points to the nature of observations from a warm planetary surface to explain why the wavelengths where planets are brightest can be problematic:

“There is a good reason for that because the Earth itself is shining at you at those wavelengths. Infrared emissions from the sky, the camera and the telescope itself are essentially drowning out your signal. But the good reason to focus on these wavelengths is that’s where an Earthlike planet in the habitable zone around a sun-like star is going to shine brightest.”

With exoplanet imaging up to now operating below 5 microns, where background noise is low, the planets we’ve been successful at imaging have been young, hot worlds of Jupiter class in wide orbits. Let me quote from the paper on this as well:

Their high temperatures are a remnant of formation and reflect their youth (~1–100 Myr, compared to the Gyr ages of typical stars). Imaging potentially habitable planets will require imaging colder exoplanets on shorter orbits around mature stars. This leads to an opportunity in the mid-infrared (~10 µm), in which temperate planets are brightest. However, mid-infrared imaging introduces significant challenges. These are primarily related to the much higher thermal background—that saturates even sub-second exposures—and also the ~2–5× coarser spatial resolution due to the diffraction limit scaling with wavelength. With current state-of-the-art telescopes, mid-infrared imaging can resolve the habitable zones of roughly a dozen nearby stars, but it remains to be shown whether sensitivity to detect low-mass planets can be achieved.
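The “coarser spatial resolution” constraint is easy to quantify. A rough Rayleigh-criterion estimate — a sketch assuming the VLT’s 8.2-meter aperture and α Centauri’s distance of about 1.34 parsecs — shows why only the nearest habitable zones are within reach at 10 microns:

```python
WAVELENGTH = 10e-6      # mid-infrared observing wavelength, meters
APERTURE = 8.2          # VLT primary mirror diameter, meters
RAD_TO_ARCSEC = 206265  # radians to arcseconds

# Rayleigh diffraction limit: theta = 1.22 * lambda / D
theta_arcsec = 1.22 * WAVELENGTH / APERTURE * RAD_TO_ARCSEC
print(round(theta_arcsec, 2))            # ~0.31 arcsec

# At ~1.34 pc, one arcsecond subtends 1.34 AU of projected separation,
# so the resolution limit in physical units is:
print(round(theta_arcsec * 1.34, 2))     # ~0.41 AU, inside a ~1 AU habitable zone
```

For a star ten times farther away the same 0.3-arcsecond limit corresponds to several AU, swallowing the habitable zone entirely — hence the short list of viable targets.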

Getting around these challenges is part of what Breakthrough Watch is trying to do via its NEAR (New Earths in the Alpha Centauri Region) experiment, which focuses on the technologies needed to directly image low-mass habitable-zone exoplanets. The telescope in question is the European Southern Observatory’s Very Large Telescope in Chile, where Wagner and company are working with an adaptive secondary telescope mirror designed to minimize atmospheric distortion. That effort works in combination with a light-blocking mask optimized for the mid-infrared to block the light of Centauri A and then Centauri B in sequence.

Remember that stable habitable zone orbits have been calculated for both of these stars. Switching between Centauri A and B rapidly — as fast as every 50 milliseconds, in a method called ‘chopping’ — allows both habitable zones to be scrutinized simultaneously. Background light is further reduced by image stacking and specialized software.

“We’re moving one star on and one star off the coronagraph every tenth of a second,” adds Wagner. “That allows us to observe each star for half of the time, and, importantly, it also allows us to subtract one frame from the subsequent frame, which removes everything that is essentially just noise from the camera and the telescope.”
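The chop-and-subtract idea Wagner describes can be illustrated in a few lines. This is a toy model of my own, not the team’s pipeline — the array sizes, pixel positions, and signal levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)

# Thermal background from sky, telescope, and camera, common to both frames
background = rng.normal(1000.0, 5.0, shape)

# Frame 1: star A behind the coronagraph; a faint off-axis source near one pixel
frame_a = background.copy()
frame_a[20, 40] += 50.0

# Frame 2: a tenth of a second later, star B behind the coronagraph
frame_b = background.copy()
frame_b[44, 12] += 30.0

# Subtracting consecutive frames cancels the common background exactly;
# sources from frame 1 appear positive, sources from frame 2 negative.
diff = frame_a - frame_b
print(diff[20, 40], diff[44, 12])   # ~ +50, -30
```

In reality each frame also carries independent readout and photon noise, which is why the team stacks millions of such difference images to beat the residuals down.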

Among possible systematic artifacts, the paper notes the presence of ‘negative arcs’ due to reflections introduced within the system that must be eliminated. The image below shows the view before the artifacts have been removed and a second view after that process is complete.

Image: This is Figure 2 from the paper. Caption: (a) High-pass filtered image without PSF subtraction or artifact removal. The α Centauri B on-coronagraph images have been subtracted from the α Centauri A on-coronagraph images, resulting in a central residual and two off-axis PSFs to the SE and NW of α Centauri A and B, respectively. Systematic artifacts labeled 1–3 correspond to detector persistence from α Centauri A, α Centauri B, and an optical ghost of α Centauri A. (b) Zoom-in on the inner regions following artifact removal and PSF subtraction. Regions impacted by detector persistence are masked for clarity. The approximate inner edge of the habitable zone of α Centauri A is indicated by the dashed circle. A candidate detection is labeled as ‘C1’. Credit: Wagner et al.

Over the years, we’ve seen the size of possible planetary companions of Centauri A and B gradually constrained, and as the paper notes, radial velocity work has excluded planets more massive than 53 Earth masses in the habitable zone of Centauri A (by comparison, Jupiter is 318 Earth masses). The constraint at Centauri B is 8.4 Earth masses, meaning that in both cases, lower-mass planets could still be present and in stable orbits. We already know of two worlds orbiting the M-dwarf Proxima Centauri.

You can find the results of the team’s nearly 100 hours of observations (enough to collect more than 5 million images) in the 7 terabytes of data now made available at http://archive.eso.org. Wagner is forthcoming about the likelihood of the Centauri A finding being a planet:

“There is one point source that looks like what we would expect a planet to look like, that we can’t explain with any of the systematic error corrections. We are not at the level of confidence to say we discovered a planet around Alpha Centauri, but there is a signal there that could be that with some subsequent verification.”

A second imaging campaign is planned in several years, which could reveal the same possible exoplanet at a different part of its modeled orbit, with potential confirmation via radial velocity methods. From the paper:

The habitable zones of α Centauri and other nearby stars could host multiple rocky planets–some of which may host suitable conditions for life. With a factor of two improvement in radius sensitivity (or a factor of four in brightness), habitable-zone super-Earths could be directly imaged within α Centauri. An independent experiment (e.g., a second mid-infrared imaging campaign, as well as RV, astrometry, or reflected light observations) could also clarify the nature of C1 as an exoplanet, exozodiacal disk, or instrumental artifact. If confirmed as a planet or disk, C1 would have implications for the presence of other habitable zone planets. Mid-infrared imaging of the habitable zones of other nearby stars, such as ε Eridani, ε Indi, and τ Ceti is also possible.

It’s worth keeping in mind that the coming extremely large telescopes will bring significant new capabilities to ground-based imaging of planets around nearby stars. Whether or not we have a new planet in this nearest of all stellar systems to Earth, we do have significant progress at pushing the limits of ground-based observation, with positive implications for the ELTs.

The paper is Wagner et al., “Imaging low-mass planets within the habitable zone of α Centauri,” Nature Communications 12: 922 (2021). Abstract / full text.


A Black Cloud of Computation

Moore’s Law, first stated all the way back in 1965, came out of Gordon Moore’s observation that the number of transistors per silicon chip was doubling every year (it would later be revised to doubling every 18-24 months). While it’s been cited countless times to explain our exponential growth in computation, Greg Laughlin, Fred Adams and team, whose work we discussed in the last post, focus not on Moore’s Law but on a less publicly visible statement known as Landauer’s Principle. Drawing from Rolf Landauer’s work at IBM, the 1961 equation defines the lower limit on energy consumption in computation.

You can find the equation here, or in the Laughlin/Adams paper cited below, where the authors note that for an operating temperature of 300 K (a fine summer day on Earth), the maximum efficiency is 3.5 × 10^13 bit operations per erg. As we saw in the last post, a computational energy crisis emerges when exponentially increasing power requirements for computing exceed the total power input to our planet. Given current computational growth, the saturation point is on the order of a century away.

Thus Landauer’s limit becomes a tool for predicting a problem ahead, given the linkage between computation and economic and technological growth. The working paper that Laughlin and Adams produced looks at the numbers in terms of current computational throughput and sketches out a problem that a culture deeply reliant on computation must overcome. How might civilizations far more advanced than our own go about satisfying their own energy needs?

Into the Clouds

We’re familiar with Freeman Dyson’s interest in enclosing stars with technologies that can exploit the great bulk of their energy output, with the result that there is little to mark their location to distant astronomers other than an infrared signature. Searches for such megastructures have already been made, but thus far with no detections. Laughlin and Adams ponder exploiting the winds generated by Asymptotic Giant Branch stars, which might be tapped to produce what they call a ‘dynamical computer.’ Here again there is an infrared signature.

Let’s see what they have in mind:

In this scenario, the central AGB star provides the energy, the raw material (in the form of carbon-rich macromolecules and silicate-rich dust), and places the material in the proper location. The dust grains condense within the outflow from the AGB star and are composed of both graphite and silicates (Draine and Lee 1984), and are thus useful materials for the catalyzed assembly of computational components (in the form of nanomolecular devices communicating wirelessly at frequencies (e.g. sub-mm) where absorption is negligible in comparison to required path lengths.

What we get is a computational device surrounding the AGB star that is roughly the size of our Solar System. In terms of observational signatures, it would be detectable as a blackbody with a temperature in the range of 100 K. It’s important to realize that in natural astrophysical systems, objects at these temperatures show a spectral energy distribution that, the authors note, is much wider than that of a blackbody. The paper cites molecular clouds and protostellar envelopes as examples; these should be readily distinguishable from what the authors call Black Clouds of computation.

It seems odd to call this structure a ‘device,’ but that is how Laughlin and Adams envision it. We’re dealing with computational layers in the form of radial shells within the cloud of dust being produced by the AGB star in its relatively short lifetime. It is a cloud in an environment that subjects it to the laws of hydrodynamics, which the paper tackles by way of characterizing its operations. The computer, in order to function, has to be able to communicate with itself via operations that the authors assume occur at the speed of light. Its calculated minimum temperature predicts an optimal radial size of 220 AU, an astronomical computing engine.

And what a device it is. The maximum computational rate works out to 3 × 10^50 bits per second for a single AGB star. That rate is slowed by considerations of entropy and rate of communication, but we can optimize the structure at the above size constraint and a temperature between 150 and 200 K, with a mass roughly comparable to that of the Earth. This is a device in need of refurbishment on a regular basis because it is dependent upon the outflow from the star. The authors calculate that the computational structure would need to be rebuilt on a timescale of 300 years, comparable to infrastructure timescales on Earth.

Thus we have what Laughlin, in a related blog post, describes as “a dynamically evolving wind-like structure that carries out computation.” And as he goes on to note, AGB stars in their pre-planetary nebula phase have lifetimes on the order of 10,000 years, during which time they produce vast amounts of graphene suitable for use in computation, with photospheres not far off room temperature on Earth. Finding such a renewable megastructure in astronomical data could be approached by consulting the WISE source catalog with its 563,921,584 objects. A number of candidates are identified in the paper, along with metrics for their analysis.

These types of structures would appear from the outside as luminous astrophysical sources, where the spectral energy distributions have a nearly blackbody form with effective temperature T ≈ 150 − 200 K. Astronomical objects with these properties are readily observable within the Galaxy. Current infrared surveys (the WISE Mission) include about 200 candidate objects with these basic characteristics…

And a second method of detection, looking for nano-scale hardware in meteorites, is rather fascinating:

Carbonaceous chondrites (Mason 1963) preserve unaltered source material that predates the solar system, much of which was ejected by carbon stars (Ott 1993). Many unusual materials have been identified within carbonaceous chondrites, including, for example, nucleobases, the informational sub-units of RNA and DNA (see Nuevo et al. 2014). Most carbonaceous chondrites have been subject to processing, including thermal metamorphism and aqueous alteration (McSween 1979). Graphite and highly aromatic material survives to higher temperatures, however, maintaining structure when heated transiently to temperatures of order, T ∼ 700K (Pearson et al. 2006). It would thus potentially be of interest to analyze carbonaceous chondrites to check for the presence of (for example) devices resembling carbon nanotube field-effect transistors (Shulakar, et al. 2013).

Meanwhile, Back in 2021

But back to the opening issue, the crisis posited by the rate of increase in computation vs. the energy available to our society. Should we tie Earth’s future economic growth to computation? Will a culture invariably find ways to produce the needed computational energies, or are other growth paradigms possible? Or is growth itself a problem that has to be surmounted?

At the present, the growth of computation is fundamentally tied to the growth of the economy as a whole. Barring the near-term development of practical reversible computing (see, e.g., Frank 2018), the forthcoming computational energy crisis can be avoided in two ways. One alternative involves transition to another economic model, in contrast to the current regime of information-driven growth, so that computational demand need not grow exponentially in order to support the economy. The other option is for the economy as a whole to cease its exponential growth. Both alternatives involve a profound departure from the current economic paradigm.

We can wonder as well whether what many are already seeing as the slowdown of Moore’s Law will lead to new forms of exponential growth via quantum computing, carbon nanotube transistors or other emerging technologies. One thing is for sure: Our planet is not at the technological level to exploit the kind of megastructures that Freeman Dyson and Greg Laughlin have been writing about, so whatever computational crisis we face is one we’ll have to surmount without astronomical clouds. Is this an aspect of the L term in Drake’s famous equation? It referred to the lifetime of technological civilizations, and on this matter we have no data at all.

The working paper is Laughlin et al., “On the Energetics of Large-Scale Computation using Astronomical Resources.” Full text. Laughlin also writes about the concept on his oklo.org site.
