
Go Voyager

It’s worth thinking about why Voyager 1 and 2, now coming up on their 40th year of operation, are still sending back data. After all, mission longevity becomes increasingly important as we anticipate missions well outside the Solar System, and the Voyagers are giving us a glimpse of what can be done even with 1970s technology. We owe much of their staying power to their encounters with Jupiter, which demanded substantial protection against the giant planet’s harsh radiation, a design margin still used in space missions today.

The Voyagers were the first spacecraft to be protected against external electrostatic charges and the first with autonomous fault protection, meaning each spacecraft had the ability to detect problems onboard and correct them. We still use the Reed-Solomon code for spacecraft data to reduce data transmission errors, and we all benefited from Voyager’s programmable attitude and pointing capabilities during its planetary encounters.

Pioneer 6 was a doughty vehicle, but Voyager 2 (launched before Voyager 1) surpassed its record as the longest continuously operating spacecraft back in August of 2012, while Voyager 1 eclipsed Pioneer 10’s distance mark in 1998 and is now traveling some 21 billion kilometers out. Voyager 1 is our sole spacecraft to leave the heliosphere, though Voyager 2 is expected to follow it in a few years, and we’ve already acquired important information, such as the fact that cosmic rays are four times more abundant in interstellar space than near the Earth.

You can see how all this begins to build the foundation for a ‘true’ interstellar mission, by which I mean one designed solely for the purpose of penetrating the local interstellar medium and reporting data from it. The heliosphere, Voyager has shown us, wraps around our Solar System and helps to provide a radiation shield for the planets. Missions both robotic and manned will need to be designed around the cosmic ray issues Voyager has uncovered.

Image: Voyager 1 image of Io showing active plume of Loki on limb. Heart-shaped feature southeast of Loki consists of fallout deposits from active plume Pele. The images that make up this mosaic were taken from an average distance of approximately 490,000 kilometers. Credit: NASA/JPL/USGS.

Still thinking interstellar, the Voyagers are telling us about the solar wind’s termination shock, that region where charged particles from the Sun slow to below the speed of sound as they push out into the interstellar medium — these are Voyager 2 measurements. Voyager 1 has measured the density of the interstellar medium as well as magnetic fields outside the heliosphere. The final benefit: We’ll have Voyager 2 outside the heliosphere while still in communication, so we can sample the interstellar medium from two different locations.

I always think of long spacecraft missions in terms of the people who work on them. Voyager is pushing on the ‘lifetime of a researcher’ rubric that some consider essential (though I disagree), the notion that missions have to be flown so that those who worked on them can see them through to destination. But of course the Voyagers have no destination as such; they’ll press on in a galactic orbit that takes fully 225 million years to complete. And as our spacecraft get even more rugged and capable of autonomy, we’ll soon take it as a given that multiple generations will be involved in seeing any complex mission through to completion. (See Voyager to a Star for my riff on a symbolic ‘extension’ to the Voyager mission).

Image: These two pictures of Uranus — one in true color (left) and the other in false color — were compiled from images returned Jan. 17, 1986, by the narrow-angle camera of Voyager 2. The spacecraft was 9.1 million kilometers from the planet, several days from closest approach. The picture at left has been processed to show Uranus as human eyes would see it from the vantage point of the spacecraft. Credit: NASA/JPL.

We have, according to the Jet Propulsion Laboratory, perhaps until 2030 before data from the Voyagers ceases. Each spacecraft contains three radioisotope thermoelectric generators (RTGs) running off the decay of plutonium-238. And as this JPL news release reminds us, with the spacecraft power decreasing by four watts per year, engineers have to be creative at figuring out how best to squeeze out data results under extreme power constraints.
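The arithmetic behind that decline is easy to sketch. As a rough illustration (the half-life and initial power figures below are commonly cited values I’m assuming here, not numbers from the JPL release), plutonium-238 decays with a half-life of about 87.7 years, which sets a floor on how fast RTG output can fall:

```python
# Sketch: the power floor set by Pu-238 radioactive decay alone.
# ASSUMED figures (commonly cited, not from the article): 87.7-year
# half-life for Pu-238, ~470 W combined electrical output at launch.
PU238_HALF_LIFE_YEARS = 87.7
P0_WATTS = 470.0

def rtg_power_decay_only(years_since_launch, p0=P0_WATTS):
    """Electrical output if it tracked Pu-238 decay exactly
    (i.e., with no degradation in the power converters)."""
    return p0 * 0.5 ** (years_since_launch / PU238_HALF_LIFE_YEARS)

for t in (0, 20, 40):
    print(f"year {t:2d}: {rtg_power_decay_only(t):.0f} W")
```

On these assumptions, decay alone would leave roughly 340 watts after 40 years; the actual available power is lower still because the thermocouples that convert the RTGs’ heat into electricity degrade over time as well.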

For a mission this long, that means consulting documents written decades ago and at a completely different stage of technological development.

“The technology is many generations old, and it takes someone with 1970s design experience to understand how the spacecraft operate and what updates can be made to permit them to continue operating today and into the future,” said Suzanne Dodd, Voyager project manager based at NASA’s Jet Propulsion Laboratory in Pasadena.

Image: Global color mosaic of Triton, taken in 1989 by Voyager 2 during its flyby of the Neptune system. Triton is one of only three objects in the Solar System known to have a nitrogen-dominated atmosphere (the others are Earth and Saturn’s giant moon, Titan). The greenish areas include what is called the cantaloupe terrain, whose origin is unknown, and a set of “cryovolcanic” landscapes apparently produced by icy-cold liquids (now frozen) erupting from Triton’s interior. Credit: NASA/JPL/USGS.

It’s been quite a ride. Voyager discovered Io’s volcanoes and imaged rings around Jupiter, Uranus and Neptune, while finding hints of the apparent ocean within Europa that carries so much astrobiological interest. Between them, the Voyagers found a total of 24 new moons amongst the four planets they visited, detecting lightning on Jupiter and a nitrogen-rich atmosphere at Titan, the first to be found outside the Earth itself. And who can forget that bizarre terrain on Triton, or the tortured surface of Uranus’ moon Miranda?

“None of us knew, when we launched 40 years ago, that anything would still be working, and continuing on this pioneering journey,” said Ed Stone, Voyager project scientist based at Caltech in Pasadena, California. “The most exciting thing they find in the next five years is likely to be something that we didn’t know was out there to be discovered.”

Image: The Voyagers outbound. A representation of the heliosphere, including the termination shock (TS), the heliopause and the interstellar medium where the heliosphere ends. Credit: Science, NASA/JPL-California Institute of Technology. Note: the Voyager locations in this image are updated only to September 2011; adaptation by Brad Baxley, JILA.

Who knew that Voyager’s measurements of solar wind plasma, low-frequency radio waves, charged particles and magnetic fields would still be informing us fully forty years on? The next spacecraft to cross the heliosphere after Voyager, this time designed for just that purpose, will surely live even longer, challenging our conceptions of human achievement across generations and our willingness to tackle projects involving not just deep space but deep time.

This mission isn’t over. Go Voyager.



Exomoons: Rare in Inner Stellar Systems?

Exomoons — moons around planets in other star systems — are an exhilarating and at the same time seemingly inevitable prospect. There is little reason to assume our Solar System is unique in its menagerie of moons, with the gas giants favoring us particularly with interesting mission targets, and then there’s that fascinating double system at Pluto/Charon. If we visualize what we expect to find in any given stellar system, surely moons are part of the mix, and investigations like the Hunt for Exomoons with Kepler will doubtless find them.

An actual exomoon detection would be a triumph for exoplanet science, especially given how recently we nailed down the first confirmed exoplanet, 51 Pegasi b, in 1995 (or, if you prefer, the 1992 detection of terrestrial-mass planets orbiting the pulsar PSR B1257+12). We’re new at this, and what huge strides we’ve made! Given the small size of the transit signal and its changing relation to the body it orbits, exomoons offer a particularly difficult challenge, although David Kipping’s team at HEK has plenty of Kepler data to work with.
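To get a feel for why the signal is so small: a transit blocks a fraction of starlight roughly equal to the squared ratio of the transiting body’s radius to the star’s radius. Here is a minimal back-of-envelope sketch, assuming standard radii for the Sun, Jupiter, and Ganymede (illustrative values of mine, not numbers drawn from the HEK papers):

```python
# Back-of-envelope transit depths: fraction of starlight blocked
# is approximately (R_body / R_star)**2. Radii below are ASSUMED
# standard values for the Sun, Jupiter, and Ganymede.
R_SUN_KM = 696_000
R_JUPITER_KM = 69_911
R_GANYMEDE_KM = 2_634

def transit_depth(r_body_km, r_star_km=R_SUN_KM):
    """Fractional dimming of the star during transit."""
    return (r_body_km / r_star_km) ** 2

print(f"Jupiter analog:  {transit_depth(R_JUPITER_KM) * 1e6:.0f} ppm")
print(f"Ganymede analog: {transit_depth(R_GANYMEDE_KM) * 1e6:.1f} ppm")
```

A Jupiter analog dims a Sun-like star by about one percent, while a Ganymede-sized moon manages only a dozen or so parts per million, and the moon’s transit shifts position from one planetary transit to the next as it circles its planet.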

Image: A star with a transiting planet and its moon. The angled area shows the inclination of the moon orbit. Orbit positions beyond the dashed line are not undergoing transit, and are thus not observable. Credit: Michael Hippke.

With all this in mind, every paper that comes out of HEK gets my attention. Kipping (Columbia University), working with graduate student Alex Teachey and citizen scientist Allan Schmitt, has now produced a paper that takes a significant step as the investigation proceeds. We have no detection yet — more about that in a moment — but we do have a broader result showing that exomoons are unusual in the inner regions of the systems surveyed.

Kipping and Teachey looked at 284 viable moon-hosting Kepler planetary candidates, searching for moons around planets ranging from Earth to Jupiter in size, at distances from their stars of 0.1 to 1 AU. This finding seems to be getting less attention in the press than it deserves, so let’s dig into the paper on it:

Our results place new upper limits on the exomoon population for planets orbiting within about 1 AU of their host star, upper limits that are remarkably low. We have also analyzed subsets of the ensemble to test the effect of various data cuts, and we have identified the regime in which the OSE model presented in Heller (2014) breaks down, which we call the “Callisto Effect” — beyond 20 planetary radii, discrepancies appear in the results.

OSE stands for Orbital Sampling Effect, developed by René Heller in 2014 and described by Michael Hippke in Exomoons: A Data Search for the Orbital Sampling Effect and the Scatter Peak. OSE stacks multiple planet transits to search for an exomoon signature. What the paper is referring to as the ‘Callisto effect’ is the disagreement between OSE predictions and moons like Callisto. Even so, the authors continue to see OSE as a useful tool, and learning about an area in which it breaks down is helpful as we fine-tune our capabilities.

Back to the paper:

Our analysis suggests that exomoons may be quite rare around planets at small semi-major axes, a finding that supports theoretical work suggesting moons may be lost as planets migrate inward. On the other hand, if the dearth of exomoons can be read as a reliable indicator of migration, our results suggest a large fraction of the planets in the ensemble have migrated to their present location.

And that is a pointer to which we need to pay attention. Is a lack of exomoons a marker for planetary migration? If further analysis determines that it is, then we’ve found an extremely handy tool for studying the formation history of other stellar systems.

The Kepler data did yield one exomoon candidate, in the Kepler-1625 system, for which the authors have planned follow-up observations with Hubble this fall. There is no way to know at this point whether we’ve got a genuine exomoon here or not. And I much appreciate the thorough job that Alex Teachey did in getting this point across to the public in his article Are Astronomers on the Verge of Finding an Exomoon? We learn here that the authors put their paper online earlier than intended because a media outlet was going to publish news about the upcoming Hubble study (Hubble proposals are publicly posted online).

And Teachey’s point is sound at a time when ideas whip around the Internet at lightspeed:

Peer review is a critical part of the scientific process, and we are not terribly comfortable putting out our results before they have been examined by a qualified referee. Unfortunately, we feel the circumstances have forced us to make our results freely available to the public before such a review, so that everyone may see for themselves what we are claiming and what we are not. While David and I are both big proponents of engaging with the public and boosting interest in the incredible things happening every day in astronomy, we have serious concerns about the potential for sensational headlines misleading the public into thinking a discovery has been made when it is really too early to say that for sure.

It’s a solid point. But I also want to emphasize that this paper’s findings about the apparent rarity of exomoons in the inner regions of the systems being studied are quite significant. To my knowledge this is the first time we’ve developed a constraint on exomoon formation. We doubtless have moons hiding in the data (recall that the authors are looking for analogs to the Galilean moons of Jupiter), and we can also suspect they are going to be much more common in outer stellar systems, which is certainly the case in our own Solar System.

Don’t expect an immediate result from the Hubble observations. According to this article in Nature, Kipping and team will take about six months to analyze the work before making any announcements. Steady, painstaking effort is how this job gets done.

The paper is Teachey, Kipping & Schmitt, “HEK VI: On the Dearth of Galilean Analogs in Kepler and the Exomoon Candidate Kepler-1625b I,” submitted to AAS journals and available as a preprint. For helpful background, check Kipping, “The Transits of Extrasolar Planets with Moons,” PhD thesis, University College London (March 14, 2011), available online.



Stagnant Supercivilizations and Interstellar Travel

Just how long can a civilization live? It’s a key question, showing up as a factor in the Drake Equation and possibly explaining our lack of success at finding evidence for ETI. But as Nikolai Kardashev believed, it is possible that civilizations can live for aeons, curbed only by the resources available to them, opening up the question of how they evolve. In today’s essay, Nick Nielsen looks at long-lived societies, asking whether they would tend toward stasis — Clarke’s The City and the Stars comes to mind — and how the capability of interstellar flight plays into their choices for growth. Would we be aware of them if they were out there? Have a look at supercivilizations, their possible trajectories of development, and consider what such interstellar stagnation might look like to a young and questing species searching for answers.

by J. N. Nielsen

What are stagnant supercivilizations?

As far as I know there are no precise definitions of supercivilizations, but this should not surprise us as there are no precise definitions of civilization simpliciter. In his paper, “On the Inevitability and the Possible Structures of Supercivilizations” (1985), Nikolai S. Kardashev explicitly formulated two assumptions regarding supercivilizations:

“I. The scales of activity of any civilization are restricted only by natural and scientific factors. This assertion implies that all processes observed in Nature (from phenomena in the microcosmos to those in the macrocosmos and all the way to the whole Universe) may in time be utilized by civilizations, be reproduced or even somewhat changed, though of course always in accordance with the laws of Nature.

“II. Civilizations have no inner, inherent limitations on the scales of their activities. This implies that presumptions of a possible self destruction of a civilization, or of a certain restrictions on the level of its development are not factual. Actually social conflicts may in fact be resolved, while civilizations will always face problems that demand larger scales of activity.” [1]

If Kardashev was right, there being only natural and scientific restrictions on the scale of the activity of civilization, and the absence of inherent limitations on civilizations, would mean that an expanding civilization would just keep expanding, subject only to natural laws like those of general relativity and quantum theory, thermodynamics and conservation laws. Presumably, then, older expanding civilizations would eventually become supercivilizations in virtue of the scale of their activities, which would grow proportionally (or perhaps exponentially) to their age. Here we see the relationship between supercivilizations and the recurrent motif of million-year-old or even billion-year-old civilizations. But once grown to these dimensions, what then?

In a series of posts — Stagnant Supercivilizations, An Alternative Formulation of Stagnant Supercivilizations, Suboptimal Civilizations, Supercivilizations and Superstagnation, and What Do Stagnant Supercivilizations Do During Their Million Year Lifespans? — I discussed Kardashevian supercivilizations that have become stagnant—in other words, civilizations that are very old, very large, very powerful, and very advanced, but which have attained a plateau of achievement and thus have ceased to develop. Such civilizations, in a growth phase, may have taken advantage of the absence of any inherent limitation upon the scale of their activities and would have grown to utilize all the processes of nature, subject only to the laws of nature. Their growth trajectory would have described an S-curve, much like a species that converges upon the carrying capacity of its ecosystem. Having reached an equilibrium with its environment—which, in the case of a supercivilization, is the cosmos itself—the growth of a supercivilization would then be limited by galactic ecology. [2]

This seems to contradict Kardashev’s second assumption, that, “Civilizations have no inner, inherent limitations on the scales of their activities,” but the carrying capacity of the cosmos would constitute an extrinsic or exogenous limitation on the scales of a supercivilization’s activities, rather than an intrinsic or endogenous limitation. Moreover, this extrinsic limitation, which, once encountered, entails stagnation, is consistent with Kardashev’s first assumption, that a supercivilization’s activities must be, “in accordance with the laws of Nature” and are restricted by natural factors. The carrying capacity of the cosmos is the natural restriction upon the growth of supercivilizations.

If a galaxy is the ecosystem in which a supercivilization comes to maturity, then the carrying capacity of a galaxy will determine the growth and eventual stagnation of supercivilizations once carrying capacity is reached, with that carrying capacity being determined by the accessibility of available matter and usable energy at the disposal of a supercivilization. This ecological limit to the growth of supercivilizations would constitute, “natural and scientific factors,” that would restrict a supercivilization’s scale of activity, constituting a confirmation of Kardashev’s principles, and would, additionally, make the metaphor of galactic ecology literally true.

This is but one possible scenario for the stagnation of a supercivilization. Sagan and Newman suggested a scenario of supercivilization stagnation based upon the intelligent progenitor species of a civilization transcending their biological limitations and becoming effectively immortal:

“A society of immortals must practice more stringent population control than a society of mortals. In addition, whatever its other charms, interstellar spaceflight must pose more serious hazards than residence on the home planet. To the extent that such predispositions are inherited, natural selection would tend in such a world to eliminate those individuals lacking a deep passion for the longest possible lifespans, assuming no initial differential replication.” [3]

According to Sagan and Newman the result of this would be:

“…a civilization with a profound commitment to stasis even on rather long cosmic time scales and a predisposition antithetical to interstellar colonization.” [4]

I could criticize this scenario on several grounds, but my purpose here is not to engage with the argument, but to present it for exhibition as one among multiple possible sources of stagnation for advanced civilizations. The point is that even the largest, oldest, most advanced civilizations are subject to stagnation—perhaps especially subject to stagnation.

[We could pursue terraforming within our own planetary system even without interstellar travel.]

Are there hard limits to interstellar travel?

In the argument that I unfolded in What Do Stagnant Supercivilizations Do During Their Million Year Lifespans? so as to concede a point to potential critics before this was used as a cudgel against my argument, I tried to show how, even without interstellar travel, a supercivilization could provide for itself civilizational-scale stimulation. My argument was that even a supercivilization confined to its home planetary system could engage in terraforming (or its non-terrestrial equivalent) and even world-building, and so might be able to observe the development of life over biological scales of time and the development of intelligence and civilization over their respective scales of time.

My assumption in making this argument was that a civilization in a position to make scientific observations of phenomena as fundamental as the origins of life, intelligence, and civilization, eventually would formulate a vast body of scientific knowledge based on these scientific observations. All of this was mere prelude in order to ask the question that was bothering me at the time: could a supercivilization remain stagnant when it was in a position to assimilate a vast body of scientific knowledge? It seems unlikely to me that a civilization that had grown to supercivilization status in virtue of its mastery of science and technology could remain unaffected by an influx of scientific knowledge.

As I noted above, I sought to demonstrate the possibility of civilizational-scale intellectual stimulation without recourse to interstellar space travel in order to focus on what is still possible to a very old civilization even under hard limits to space travel. If such a civilization also possessed technology sufficient for interstellar travel, then the possibilities for stimulation would be all the greater, and my argument would be strengthened, so that considering the narrower question of a supercivilization stranded within its home planetary system constituted a more rigorous test of the idea of civilizational-scale scientific stimulation.

We all know that, even among scientists, even among advocates of space travel, there are those who insist upon hard limits to interstellar travel. Hence the need to make an argument without an appeal to interstellar travel. This insistence upon hard limits to interstellar travel is not my position, but I do want to try to understand the reasoning and the motivations that have led otherwise intelligent individuals to declare interstellar travel to be not merely difficult, but an insuperable impossibility (or so difficult as to be impossible for all practical purposes). What, then, are the reasons given for the impossibility or impracticality of interstellar travel? I will consider this question by way of a digression discussing the idea of the search for extraterrestrial intelligence (SETI) and what I call the SETI paradigm.

[The SETI paradigm incorporates assumptions about the likelihood of interstellar travel.]

What is the SETI Paradigm?

Among those who insist upon hard limits to interstellar travel are many advocates of SETI, which is usually conceived as searching for intelligent extraterrestrial signals, whether radio or optical or otherwise. The two positions—denial of the possibility of interstellar travel and pursuit of SETI—are tightly-coupled, as the unlikelihood of interstellar spacefaring civilization is used to argue for SETI as the only alternative to discovering other life and intelligence in the universe through space exploration.

Philip Morrison, who along with Giuseppe Cocconi wrote the first paper on the possibility of SETI, also held this view in regard to, “…real interstellar travel, where people, intelligent machines, or whatever you like, go out to colonize. You travel in space as Magellan circumnavigated the world. I do not think this will ever happen. It is very difficult to travel in space.” [5]

Perhaps the locus classicus of the SETI paradigm was to be found already in 1962, three years after the Cocconi and Morrison paper:

“…space travel, even in the most distant future, will be confined completely to our own planetary system, and a similar conclusion will hold for any other civilization, no matter how advanced it may be. The only means of communication between different civilizations thus seems to be electro-magnetic signals.” [6]

And here is another clear statement of the SETI paradigm:

“The bottom line of all this is quite simply that interstellar travel is so enormously expensive and/or perhaps hazardous, that advanced civilizations do not engage in the practice because of the ease of information transfer via interstellar communication links.” [7]

The frequency with which cautions regarding the danger of interstellar travel are employed as an argument against interstellar travel suggests that the class of persons writing against interstellar travel is risk averse, but that does not mean that all sectors of society are equally risk averse. Some individuals seek out risk in order to confront “limit-experiences” (expérience limite), and never feel so fully alive as when facing danger, death, and the possibility of personal annihilation. [8]

If we set aside the danger of interstellar travel as an artifact of risk aversion, knowing that risk tolerance is one of those individual variations that drives natural selection, we are left with the argument that interstellar spaceflight would be too expensive and too difficult to pursue. The potential cost of interstellar travel is a matter for another essay on another occasion, but I will only observe here that we do not yet know the economics of supercivilizations, so we must keep an open mind as to whether or not interstellar missions would be prohibitively expensive. I do not think that interstellar travel would be too expensive because a fully automated space-based industrial infrastructure, in possession of the energy and materials that are available beyond planetary surfaces, would find few construction projects to be too expensive, as there would be no economic trade-offs between building starships and producing consumer goods.

The idea that interstellar travel is enormously difficult I do not dispute, though I find it strange that anyone would argue for the, “…ease of information transfer via interstellar communication links,” when these links could not facilitate communication over scales of time relevant to civilization, except for communication with the nearest stars. If there were advanced civilizations located at the nearest stars, with which we might communicate over a time scale of years or even decades, we would already know about these cosmic neighbors. If there are advanced civilizations, then, they must be distant from us, and the greater the distance from us, the more unrealistic it is to imagine that civilizations could communicate on a civilizational scale of time.

I find it astonishing that those coming from the perspective of the SETI paradigm (which assumes limits on interstellar travel, whether hard or relatively soft limits) imagine an advanced civilization having the patience to wait thousands or tens of thousands of years for a message exchange, but being unwilling to send out interstellar missions operating on a similar scale of time. Here we must imagine supercivilizations who do not have the patience to develop advanced transportation technologies, but which do have the patience to wait thousands of years, or tens of thousands of years, or hundreds of thousands of years, to exchange messages with another civilization. For a stagnant supercivilization, this is easily imaginable and possible, but for a civilization in its growth phase, on the path to attaining supercivilization status, a thousand years of technological development is many times longer than terrestrial technological development since the industrial revolution, which has taken us from sailing ship to spaceship.

If a civilization were to send out a message, then collapse some thousands of years later, and the response to the message were then to arrive for some successor civilization still more millennia later, this could not be considered a conversation among civilizations. Under these conditions, only one-way messages make any sense. However, if relativistic spaceflight were to be developed, the intelligent progenitors of a civilization could travel directly to other civilizations and converse with them face-to-face (if both parties to the conversation possessed faces, that is). Now, it is true that civilization on the homeworld of this intelligent progenitor species would experience the same time lapse as beings who stayed on their homeworld and attempted to communicate by conventional SETI means, but those who actually traveled and experienced time dilation could directly experience all that there is to be experienced in the universe. A species in possession of relativistic spaceflight could always arrange for rendezvous with similarly time dilated communities to which they could return. Such a civilization would be “temporally distributed.” This is the argument I attempted to make, however imperfectly, in my previous Centauri Dreams post, Stepping Stones Across the Cosmos, though I suppose I didn’t explain myself adequately.
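The numbers behind a “temporally distributed” civilization are plain special relativity, and easy to check. As a minimal sketch (the 100 light-year distance and 0.9c cruise speed are purely illustrative choices of mine), shipboard time runs slower than homeworld time by the Lorentz factor:

```python
import math

def lorentz_gamma(beta):
    """Time-dilation factor gamma = 1 / sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def trip_times(distance_ly, beta):
    """Return (homeworld years, shipboard years) for a one-way leg
    at constant speed beta = v/c, ignoring acceleration phases."""
    coordinate_years = distance_ly / beta
    proper_years = coordinate_years / lorentz_gamma(beta)
    return coordinate_years, proper_years

home, ship = trip_times(100, 0.9)
print(f"homeworld: {home:.1f} yr, shipboard: {ship:.1f} yr")
```

On these assumptions a 100 light-year leg costs about 111 years of homeworld time but only about 48 years of shipboard time, which is what makes the rendezvous scheme among similarly time-dilated communities workable.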

It beggars belief to suppose that a civilization in possession of relativistic spaceflight would choose to remain on its homeworld, waiting for signals thousands or millions of years old, when it could go out into the cosmos and investigate matters firsthand and to engage with the intelligent progenitors of other civilizations (if there are such) as peers, i.e., as fellow beings. I do not say that it is impossible that this should be the case, but it strikes me as extremely unlikely. If human civilization came into possession of relativistic spaceflight technology, and only one percent of the present human population of (more than) seven billion were interested in this development, there would still be seventy million human beings exploring the universe, and arranging rendezvous with groups having experienced similar time dilation and so belonging to the same historical period (and thus having something in common).

It is not uncommon, however, to view SETI not as predicated upon the impossibility of interstellar flight, and therefore as a substitute for direct contact, but rather as what we can do right now to establish contact, with interstellar travel still in the offing, yet to play its role when our technology achieves that level of development. In this sense, the SETI paradigm and actual exploration are in no sense inherently in conflict. It is entirely possible that a spacefaring civilization might possess a capability to explore relatively nearby planetary systems and yet eventually find itself at a very great distance from any other civilization, with which it could only communicate by electromagnetic means. Both of these enterprises—exploring nearby planetary systems, even if they have no life and no civilization, and communicating with other civilizations too distant for direct travel—would be profoundly stimulating to a civilization in scientific terms. Nevertheless, the SETI paradigm remains a powerful point of reference because of the internal coherency of the assumptions it makes.

The advocate of the SETI paradigm must assert that interstellar travel is impossible, because, if it is possible, the idea of a grand Encyclopedia Galactica existing in the form of a network of SETI signals crisscrossing the cosmos is very unlikely to be realized. Thus this cluster of assumptions that I call the SETI paradigm—that interstellar travel is difficult or impossible, that communication is easy, and therefore SETI and METI are, or ought to be, the focus of the efforts of advanced civilizations to interact with peers—hangs together by mutual implication. If we reject any one aspect of the paradigm, it falls apart. [9] The SETI enterprise may remain, but it becomes a small part of a big picture, and is no longer the big picture itself.

[Are we confined to our oasis in space?]

Is planetary endemism the eternal truth of humanity?

For some scientists not directly concerned with SETI as an alternative to exploration, asserting the difficulty of interstellar travel and the unlikelihood of human beings traveling to other worlds has been a way to express the spirit of seriousness (yes, I am invoking Sartre [10]) in relation to human planetary endemism, since the prior seriousness of our cosmological disposition (our Ptolemaic centrality) was taken from us by the Copernican revolution. No longer at the center of the universe, and schooled in humility by hundreds of years of Copernicanism, we have become acculturated to our apparently marginal role in the universe, and one way to express this idea is to assert that our marginal status is bound to our marginal homeworld orbiting a marginal star in a marginal galaxy.

Given this acculturation, our attachment to our homeworld—rather than being a mere empirical contingency, a truth ready-made by the accident of our origin upon a planetary body—is, as Sartre said, “…an ethics which is ashamed of itself and does not dare speak its name.” Instead of saying (though some do say this), “We ought not to leave Earth,” the SETI paradigm tells us, “We cannot leave Earth.” (The “ought” has been transformed into an “is”; it is a brute fact, and no longer subject to volition.) And if we cannot leave Earth, our special relationship to Earth is retained. What Copernicanism has taken from us with one hand, it gives back with the other. We once again have a “special” relationship to Earth, though not the special relationship posited by the Ptolemaic system and its Aristotelian embroiderings.

For example, in my earlier Centauri Dreams post How We Get There Matters I quoted this from Peter Ward and Donald Brownlee:

“The starships of TV, movies, and novels are products of wishful thinking. Interstellar travel will likely never happen, meaning we are stranded in this solar system forever. We are also likely to be permanently stuck on Earth. It is our oasis in space, and the present is our very special place in time. Humans should enjoy and cherish their day in the Sun on a very special planet and not dwell too seriously on thoughts of unicorns, minotaurs, mermaids, or the Starship Enterprise. Our experience on Earth is probably repeated endlessly in the cosmos. Life develops on planets but it is ultimately destroyed by the light of a slowly brightening star. It is a cruel fact of nature that life-giving stars always go bad.” [11]

Eminent entomologist E. O. Wilson [12] went even farther than Ward and Brownlee:

“Another principle that I believe can be justified by scientific evidence so far is that nobody is going to emigrate from this planet, not ever.” [13]

Note that these are assertions without argument, though they invoke scientific evidence without actually arguing from scientific evidence. (I am going to quote more of the latter passage in another post to come, as it perfectly exemplifies a particular perspective on the human condition.)

These extrapolations beyond the SETI paradigm are arguably more damaging than the SETI paradigm itself, because they raise planetary endemism to a metaphysical status, seeking to overturn the essence of the Copernican revolution. The original formulations of the SETI paradigm were made by scientists who had clear and unambiguous reasons for favoring SETI communication over actual exploration, but those who have taken up the SETI paradigm as a way to express their skepticism about a spacefaring future have no such reasons, or, if they have them, they do not state them.

[Ludwig Wittgenstein]

Are we dealing with implicit proscriptions?

It could be that those who argue for hard limits to interstellar travel are incorporating implicit boundaries into the discussion, which, not having been made explicit, have not been part of the argument. This is particularly true in relation to a discussion of supercivilizations, as I will try to show below.

Wittgenstein noted such implicit proscriptions in a passage from his Philosophical Investigations:

“Someone says to me, ‘Show the children a game.’ I teach them gambling with dice, and the other says, ‘I didn’t mean that sort of game.’ In that case, must he have had the exclusion of the game with dice before his mind when he gave me the order?” [14]

This is how people most often talk at cross-purposes, and so we must make an effort to bring such presuppositions to the surface and make them explicit. What I particularly have in mind in regard to implicit boundaries to the scope of a discussion is the possibility that when someone says, “Interstellar travel is impossible,” what they really mean to say is that, “Interstellar travel is impossible within a given time horizon,” or, “Interstellar travel is impossible based on known science and technology.” This is of interest to me in the present context because the longevity of a supercivilization would presumably exceed the bounds of some ordinarily assumed time horizon, so that while most discussion of civilization would not need to address interstellar travel, it might still be allowed that interstellar travel is possible for supercivilizations, and ought to be discussed in relation to them.

Some of the quotes above seem to clearly rule out implicit qualifications to the assertions being made. For example, the quote from Sebastian von Hoerner explicitly stipulates that, “…space travel, even in the most distant future, will be confined completely to our own planetary system, and a similar conclusion will hold for any other civilization, no matter how advanced it may be.” [emphasis added] This doesn’t seem to leave much room for ambiguity. We need to take von Hoerner at his word, and see what it would mean for a civilization to be incapable of interstellar travel regardless of its age or its technological achievements, regardless of where it finds itself in the universe or in the history of the cosmos.

Without making the implicit boundaries of a discussion explicit, the denial of the possibility of interstellar travel becomes the denial of the possibility of interstellar travel by any civilization (1), at any stage of development (2), at any time in the history of the universe (3), by any means (4), and at any location within the universe (5). This would be a very strong assertion to make, and I can’t imagine that many would agree to it if they fully understood what they were implicitly asserting. [15]

We could take these five implied conditions in turn and consider how each qualification to the denial of the possibility of interstellar travel might be formulated if made explicit:

    1. Yes, interstellar travel is impossible for our civilization, but not necessarily for some other kind of civilization, and not necessarily impossible for a supercivilization.

    2. Yes, interstellar travel is impossible for our civilization at its present stage of development, but given a sufficiently long-lived civilization interstellar travel might be possible.

    3. Yes, interstellar travel is impossible at the present time in the history of the universe, but it may be possible at some other time when, for instance, another star approaches the sun closely enough for us to travel to it. [16]

    4. Yes, interstellar travel is impossible with known technologies, but we may yet develop technologies that will make it possible, or these technologies may be developed by other kinds of civilizations.

    5. Yes, interstellar travel is impossible for us, located in a diffusely populated arm of our spiral galaxy, but it might be possible for civilizations located in regions of the galaxy where stars are more closely spaced (such as galactic centers, globular clusters, or merely closely-packed regions of elliptical galaxies).

When we put together the possibilities of different kinds of civilizations (including the different kind of civilization our civilization may become in the future), at different stages of development, at different times in the natural history of the universe, involving different means of transportation, and in other parts of the universe where stars are not as diffusely distributed, it seems a bit contrarian (and I don’t mean that in a flattering way) to insist that any and all interstellar travel is impossible.

A further implicit qualification may be present. Disavowals of the possibility of interstellar travel might be interpreted as specifically addressing the known cosmological circumstances of terrestrial civilization only, or might be more widely interpreted as holding for any civilization that shares Earth’s cosmological circumstances, or, more widely yet, for any civilization whatsoever. In the narrowest of these three senses, the implicit qualification may be made explicit by asserting the proviso, “Well, yes, interstellar travel might be possible under these circumstances, addressing the above qualifications as we have done, but since we are likely the only civilization in the galaxy, the particular cosmological circumstances of Earth and terrestrial civilization are the only cosmological circumstances that really count. A civilization located in a globular cluster where stars are less than a light year apart might be able to pursue interstellar travel, but there are no civilizations so located; this class of civilizations is the empty set, so we may set it aside.”

By this same reasoning, any consideration of what supercivilizations might accomplish can also be set aside, because terrestrial civilization is not a supercivilization, and if we limit ourselves to what terrestrial civilization is now, and what it can do now, where it is located now, and so on, then we can dismiss the possibility of interstellar travel. (We can also dismiss any future for ourselves other than an eternally-iterated present.) Moreover, we have no particular reason to believe that terrestrial civilization will become a supercivilization, even if it survives for thousands of years or more. Whether or not a civilization does or can develop into a supercivilization may be entirely a matter of mere historical contingency, and, in this sense, the particular cosmological circumstances of Earth will mean the difference between whether terrestrial civilization can develop into a supercivilization, or if it will inevitably fail to do so. Moreover, whether or not a supercivilization stagnates or continues to develop may also be entirely a matter of mere historical contingency (an artifact of galactic endemism, as it were).

[“…we have all entered the Interstellar Age.” Jim Bell]

Is interstellar travel inevitable for long-lived civilizations?

When we combine technologies already known to us, despite our rudimentary development as a technological civilization, with the changing circumstances of the galaxies, which will, over a cosmological scale of time, move some stars closer to us (as other stars move farther from us), denying the possibility of eventual interstellar travel amounts to denying the possibility of what is already known. It is arguable, then, that interstellar travel is inevitable for supercivilizations. If a civilization persists for a period of time sufficient to become a supercivilization, it would persist through additional stages of development, through changing distances among stars, and through changing cosmological conditions, so that a settled and deliberate avoidance of interstellar travel would seem to be a precondition for any very old and advanced civilization that never achieved interstellar breakout. We cannot rule this out, but we also cannot assume that every civilization will cultivate a settled and deliberate avoidance of space travel.

We are already capable of sending a spacecraft into interstellar space. The “grand tour” gravitational assist of the Voyager probes has already sent Voyager 1 outside the solar system, though that was not part of its original mission, and it is not on a trajectory specifically tailored to encounter another star (though it may pass near another star over sufficiently long scales of time). But Voyager is in interstellar space, and in virtue of this Jim Bell has asserted, “…now the Voyagers are leaving the protective bubble of our sun and crossing over into the uncharted territory between the stars… we have all entered the Interstellar Age.” [17] By this measure, terrestrial civilization has already achieved interstellar breakout.

The gravitational assist that has been extensively employed to send robotic probes throughout our solar system, if specifically tailored to interstellar purposes, could significantly improve on Voyager’s trajectory in terms of getting a spacecraft to another planetary system. Given the possibility of an interstellar gravitational assist (cf. The Interstellar Gravitational Assist by Paul Gilster), and the possibility of selecting a trajectory specifically for the purpose of traveling to a star brought relatively near to us (i.e., optimizing the gravitational assist for an interstellar trajectory), even if terrestrial civilization stagnated at or near its present technological level of development, it would still be capable of interstellar travel if it endures for a sufficient period of time.
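To put rough numbers on this, here is a back-of-the-envelope sketch; the speed and distances below are illustrative assumptions, not figures from any mission plan. Even at Voyager-class velocities, a star brought within a light year of us by galactic motion is reachable on timescales short compared to the lifetime of a supercivilization:

```python
# Illustrative coast-time arithmetic. The speeds and distances are
# assumptions for the sake of example, not mission figures.

SECONDS_PER_YEAR = 3.156e7        # approx. one Julian year
KM_PER_LIGHT_YEAR = 9.461e12

def travel_time_years(distance_ly, speed_km_s):
    """Years needed to coast a given distance at constant speed."""
    return distance_ly * KM_PER_LIGHT_YEAR / speed_km_s / SECONDS_PER_YEAR

# Voyager 1 recedes at roughly 17 km/s. A star brought within one
# light year would be reachable in under twenty thousand years at
# that speed -- long for us, short for a civilization that endures
# on cosmological timescales.
print(f"{travel_time_years(1.0, 17.0):,.0f} years")
```

Nothing here assumes new physics; the point is only that patience, not propulsion, is the binding constraint for a sufficiently long-lived civilization.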

Similar considerations hold for civilizations that happen to find themselves in cosmological circumstances more amenable to interstellar travel. In their paper “Globular Clusters as Cradles of Life and Advanced Civilizations” (which I discussed in The Globular Cluster Opportunity), R. Di Stefano and A. Ray discuss the possibilities for advanced spacefaring civilizations in globular clusters, where stars are more closely distributed and travel times between stars and their planetary systems would therefore be shorter than travel times among stars as we typically find them distributed in the arms of spiral galaxies. [18]

[“Assembling a Space Station” by Klaus Bürgle]

Would we recognize another stagnant supercivilization as a peer?

Even without “breakthrough” technologies, utilizing the science and technology available to a civilization a couple of hundred years into its industrial revolution, interstellar flight is conceivable, and, under some circumstances, practicable. Unique cosmological circumstances in which relatively low-technology interstellar travel is possible may serve as incubators for spacefaring civilizations, which, under this unique selection pressure, would be more likely to develop the sciences and technologies conducive to the expansion of spacefaring civilization, and which would certainly develop the practical engineering skills necessary for (even nearby) interstellar travel.

Such a civilization would have far more practical engineering experience in spacecraft and living in space than we possess, even if it did not possess any science or technology that we do not also possess. To a certain degree (though not to an absolute degree), engineering expertise can vary independently of scientific knowledge and technological development. (Technologies have often grown out of engineering experience, so that technology and engineering tend to be more tightly-coupled than science and engineering.) We are reminded of this when we consider the lithic technology of Pleistocene human beings, or the stone-working technologies of early civilizations and their monumental architecture, the particular engineering techniques of which have been lost, and which are thus mysterious to us. Analogously, a spacefaring civilization with greater engineering experience in space than contemporary terrestrial civilization, but no greater scientific knowledge, initially might appear mysterious to us.

A truly ambitious civilization of this kind, perhaps not greatly advanced technologically, but with a determination to project itself into the cosmos, could, over cosmological scales of time (if it could survive that long), pass from one planetary system to another as stars passed near each other, pursuing a strategy of opportunistic interstellar travel, hopping from one nearby planetary system to the next as the occasion presented itself. Such a civilization need not be advanced much beyond the level contemplated by Wernher von Braun in his mid-twentieth century plans for a space program that could ultimately, “…build a bridge to the stars, so that when the Sun dies, humanity will not die.” [19] A rudimentary spacefaring civilization of this kind could, over millions of years, expand throughout a significant portion of the galaxy. It might even be so “quiet” in electromagnetic terms, and leave so light a footprint on the galaxy, that we would not see it coming.

It would be a shock for us on Earth if we were eventually “discovered” by some civilization less technologically advanced than we are, but more keen on space exploration, and willing to invest blood and treasure in the effort when terrestrial civilization is not yet willing to invest in the enterprise. For if terrestrial civilization endures to become a supercivilization, but remains tightly-coupled to its homeworld, fearful to extend its reach into the cosmos, we are likely to be “discovered” rather than being the ones to do the discovering. Carl Sagan once wrote, “The surface of the Earth is the shore of the cosmic ocean… Recently, we have waded a little out to sea, enough to dampen our toes or, at most, wet our ankles. The water seems inviting. The ocean calls.” [20] Though the ocean calls, we have hesitated on the shore. Given a sufficiently long period of time—a scale of time over which a supercivilization might endure—there may be other civilizations that do not hesitate.

In my last Centauri Dreams post, Synchrony in Outer Space, I argued that civilizations can retrench from development that becomes so rapid as to be disorienting and socially disruptive, and that this may have happened with the mid-twentieth century space program, which was defunded and neglected after the Apollo Program, but which could have been expanded had the political will been present (cf. Late-Adopter Spacefaring Civilization: the Preemption that Didn’t Happen). In the event of a (counterfactual) expansion of the mid-twentieth century space program, the history of terrestrial civilization would have bifurcated sharply from the path it did in fact take.

If we encountered a civilization that had taken an earlier path to spacefaring civilization, would we recognize them as the path not taken by terrestrial civilization, as being, in a sense, a peer civilization? This would be the meeting of two different kinds of stagnant supercivilizations—one that stagnated scientifically, but which expanded beyond its homeworld, and another that continued to expand the frontiers of scientific knowledge, but which stagnated on its homeworld—neither of them the kind of supercivilization that runs into the limit of the carrying capacity of the galaxy, and neither of them in possession of relativistic spaceflight technology.

These two civilizations, supercivilizations in virtue of having endured for cosmologically significant periods of time, might be identified as instances of partially stagnant civilizations, and, in this sense, suboptimal civilizations (more specifically, suboptimal supercivilizations). If we acknowledge the possibility of suboptimal, partially stagnant civilizations, we would not be surprised that such civilizations had not exhaustively colonized the entire galaxy, and that they had not built a powerful SETI beacon. Many such civilizations might be simultaneously present in the galaxy and yet know nothing of each other. This could be called the “suboptimal hypothesis” in response to the Fermi paradox.

– – – – – – – – – – – – – – – – – – – – – – – – – – – – –


[1] “On the Inevitability and the Possible Structures of Supercivilizations,” Nikolai S. Kardashev, in M. D. Papagiannis (ed.), The Search for Extraterrestrial Life: Recent Developments, Proceedings of the 112th Symposium of the International Astronomical Union Held at Boston University, Boston, Mass., U.S.A., June 18–21, 1984, Springer, 1985, 497-504.

[2] Galactic ecology has been characterized thus: “The timescale for the Galactic ecology is determined by the rate of star formation and the lifetime of the most massive stars (a few million years). This ecology must have existed, though in gradually changing form, over the life of the Galaxy. It is driven by the energy flows from the massive stars, and the material cycle through these same stars. Carbon, and heavier elements, are created in the massive stars, and released through winds and supernova explosions. They cycle between the various phases of the interstellar medium, before again being incorporated into stars and, in some cases, planetary systems and life. Further star formation in a molecular cloud is self-regulated by the massive stars already forming, and by the cooling agents which are already present in it. These agents gradually change as the elemental abundances, particularly of carbon, increase as the Galaxy evolves.” Michael G Burton, “Ecosystems, from life, to the Earth, to the Galaxy” (2001)

[3] “Galactic Civilizations: Population Dynamics and Interstellar Diffusion,” William I. Newman, Carl Sagan, ICARUS 46, 293-327, 1981, p. 295.

[4] Loc. cit.

[5] Morrison, Philip, “Conclusion: Entropy, Life, and Communication,” in Ponnamperuma, Cyril, and Cameron, A.G.W., Interstellar Communication: Scientific Perspectives, Boston, et al.: Houghton Mifflin Company, 1974, p. 171.

[6] von Hoerner, Sebastian, “The General Limits of Space Travel,” Science, 06 Jul 1962, Vol. 137, Issue 3523, pp. 18-23, DOI: 10.1126/science.137.3523.18.

[7] Wolfe, John H., “On the Question of Interstellar Travel,” in The Search for Extraterrestrial Life: Recent Developments, edited by Papagiannis, Michael D., Dordrecht: D. Reidel Publishing Company, 1985, pp. 449-454.

[8] Of limit-experiences Michel Foucault wrote, “…the point of life which lies as close as possible to the impossibility of living, which lies at the limit or the extreme.” Foucault, Remarks on Marx, semiotext(e), 1991, p. 31. In relation to John Rawls’ famous thought experiment characterizing a just society as one in which the society is constituted from behind a veil of ignorance as to our place in that society, it has been pointed out that the implied risk aversion is in no sense universal, and there are many who might favor a less “just” society on the premise that an able individual not opposed to risk-taking may make a better place for himself in such a world through his own effort.

[9] In calling this the “SETI paradigm” I do not mean to imply that everyone engaged in SETI accepts this paradigm, nor do I wish to argue against the legitimacy or indeed the importance of SETI, which I view as a worthwhile endeavor.

[10] Of the spirit of seriousness Sartre wrote, “The spirit of seriousness has two characteristics: it considers values as transcendent givens independent of human subjectivity, and it transfers the quality of ‘desirable’ from the ontological structure of things to their simple material constitution. For the spirit of seriousness, for example, bread is desirable because it is necessary to live (a value written in an intelligible heaven) and because bread is nourishing. The result of the serious attitude, which as we know rules the world, is to cause the symbolic values of things to be drunk in by their empirical idiosyncrasy as ink by a blotter; it puts forward the opacity of the desired object and posits it in itself as a desirable irreducible. Thus we are already on the moral plane but concurrently on that of bad faith, for it is an ethics which is ashamed of itself and does not dare speak its name. It has obscured all its goals in order to free itself from anguish. Man pursues being blindly by hiding from himself the free project which is this pursuit.” Sartre, Jean-Paul, Being and Nothingness, New York: Washington Square Press, 1969, p. 796.

[11] Peter Ward and Donald Brownlee, The Life and Death of Planet Earth: How the New Science of Astrobiology Charts the Ultimate Fate of Our World, New York: Henry Holt and Company, 2002, pp. 207-208.

[12] Of Wilson I recently noted, “…the major ideas that have marked his scientific career — island biogeography, sociobiology (which turned out to be evolutionary psychology in its nascent state), biophilia, multi-level selection, of which one component is group selection, and the recognition of eusociality as a distinct form of emergent complexity—are ideas that I have used repeatedly in the exposition of my own thought.” I repeat this here so that the reader understands that I in no sense impugn the scientific work of Wilson.

[13] E. O. Wilson, The Social Conquest of Earth, Part VI, chapter 27.

[14] Wittgenstein, Ludwig, Philosophical Investigations, Macmillan, 1989, between sections 70 and 71. This remark is not included in all editions of the Philosophical Investigations, e.g., it does not appear in the 50th anniversary commemorative edition.

[15] The argument I am employing here closely parallels the argument that G. E. Moore makes against unqualified formulations of utilitarianism in his short book Ethics. It is interesting to note in the present context that Moore’s argument against utilitarianism takes as a counterfactual, unanticipated by unqualified formulations of utilitarianism, the possibility of extraterrestrial beings who would not respond to pleasure and pain as human beings do.

[16] Gliese 710 is likely to pass close to our solar system 1.35 million years from now, by which time, if terrestrial civilization survives, it will be a million-year-old supercivilization. In the recent paper “Searching for Stars Closely Encountering with the Solar System Based on Data from the Gaia DR1 and RAVE5 Catalogues,” by V.V. Bobylev and A.T. Bajkova, the authors review stars that will pass within one parsec of our solar system (less than the current distance to Proxima Centauri).

[17] Bell, Jim, The Interstellar Age: Inside the Forty-Year Voyager Mission, New York: Dutton, 2015, p. 3.

[18] Farther yet in the future, after the Milky Way and Andromeda galaxies have merged and the stars of both have been significantly rearranged, so to speak, our sun will have run its race, but many stars that are relatively isolated in their stellar neighborhoods may find themselves suddenly (on a cosmological scale of time) with a close neighbor, and vice versa. In this way, the cosmological context of any given planetary system might be radically altered over time.

[19] Quoted in Bob Ward, Dr. Space: The Life of Wernher von Braun, Annapolis, US: Naval Institute Press, 2013, Chapter 22, p. 218, with a footnote giving as the source, “Transcript, NBC’s Today program, New York, November 11, 1998.”

[20] Carl Sagan, Cosmos, chapter 1.



Breakthrough Starshot ‘Sprites’ in Orbit

If Breakthrough Starshot succeeds in launching a fleet of tiny probes to Proxima Centauri in 30 or 40 years, their payloads will be highly miniaturized and built to specifications far beyond our capabilities today. But the small ‘Sprites’ launched into low Earth orbit on June 23 give us an idea of where the research is heading. Sprites are ‘satellites on a chip,’ growing out of research performed by Mason Peck and his team at Cornell University, which included Breakthrough Starshot’s Zac Manchester, who used a Kickstarter campaign to develop the concept in 2011 (see Sprites: A Chip-Sized Spacecraft Solution for background on the Cornell work).

Breakthrough Starshot executive director Pete Worden refers to Sprites as ‘a very early version of what we would send to interstellar distances,’ a notion that highlights the enormity of the challenge while pointing to the revolutionary changes that may make such payloads possible. The issues multiply the more you think about them — chip-like satellites in space have no radiation shielding and are susceptible to damage along the route of flight. But missions like these will help us analyze these problems and refine the technology.

Consider communications. In an email yesterday, Mason Peck told me that the Cornell team has juiced up the networking capabilities of the tiny spacecraft. “Now we have them talking to each other in a peer-to-peer network, and this demonstration shows how they synchronize like fireflies,” Peck said, a lovely image that points to what is becoming possible. Instead of a single large probe, think of a cluster of them, a fleet of spacecraft on chips, each carried by a sail. Losses along the route are assumed, but they are overcome by sheer numbers.

And as Peck, himself a key player in Breakthrough Starshot, goes on to point out, we’re beginning to learn how such chips can work among themselves:

This [peer-to-peer networking] capability would allow many of them to share science data, for example, or to create a persistent virtual sensor out of many discrete sensors-on-chip. Also, in principle, their transmitting simultaneously could amplify the signals, enabling them to be heard from farther away. Or they could each transmit part of a dataset — say part of a large image.
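Peck’s firefly image has a well-studied mathematical counterpart: networks of phase-coupled oscillators that pull one another into lockstep. The sketch below is a generic Kuramoto-style toy model of that behavior; it is purely illustrative, not the Sprites’ actual radio protocol, and all parameter values are arbitrary assumptions:

```python
import cmath
import math
import random

# Toy "firefly" synchronization: n identical oscillators, each nudged
# toward the group's mean phase. Illustrative only -- this is NOT the
# Sprites' real protocol, and the parameters are arbitrary.

def kuramoto_sync(n=10, coupling=1.0, dt=0.05, steps=2000, seed=42):
    """Return the order parameter r (0 = scattered phases, 1 = in phase)
    after simulating the mean-field Kuramoto model."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = 1.0  # identical natural frequencies (same clock rate)
    for _ in range(steps):
        # mean field: complex average of all phases
        mean = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(mean), cmath.phase(mean)
        # each oscillator drifts at omega and is pulled toward psi
        theta = [t + dt * (omega + coupling * r * math.sin(psi - t))
                 for t in theta]
    return abs(sum(cmath.exp(1j * t) for t in theta)) / n

order = kuramoto_sync()
print(f"order parameter after coupling: {order:.3f}")  # approaches 1.0
```

With identical clock rates and any positive coupling, the oscillators converge on a common phase from almost any starting configuration, which is why firefly-style synchronization is attractive for swarms of cheap, identical nodes: no master clock is required.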

We’ve never launched fully functional space probes as small as these, each 3.5-by-3.5 centimeter probe built upon a single circuit board and weighing in at just four grams. A Sprite can contain solar panels, computers, communications capability and an array of sensors. The tiny spacecraft’s electronics all function off the 100 milliwatts of electricity each generates.

The Sprites went into space aboard an Indian rocket as supplementary payloads. Now in orbit, the Latvian Venta satellite and the Italian Max Valier satellite, operated by OHB System AG, each have a Sprite attached to the outside, while the Max Valier satellite contains four more Sprites that are to be deployed into space for subsequent study of their orbital dynamics.

Breakthrough Starshot is saying that communications from the mission show the Sprites are performing as designed, although Lee Billings, in a Scientific American post, has noted that the Sprites aboard the Max Valier satellite are problematic, with mission controllers thus far unable to establish communications with the external Sprite.

That could mean trouble for deploying the Max Valier’s four internal Sprites, but the stable orbits of the satellites give time for attempted fixes. Zac Manchester tells Billings that controllers have picked up signals from one external Sprite but are not sure which one it is. Even so, adds Manchester: “This is the first time we’ve successfully demonstrated Sprites end-to-end by flying them in space, powering them with sunlight and receiving their signals back on Earth.”

You may recall that Sprites have had their day aboard the International Space Station, being mounted for a long-term experiment outside the station before being returned to Earth undamaged from the exposure. Making a point that resonates with yesterday’s post on deorbiting space debris, Billings adds that the 2014 attempt to put 100 Sprites into orbit aboard a crowd-funded KickSat raised concerns over space debris; in any case, the Sprites were not deployed. Sprites will continue to be tested in space, but for now they will need to operate no higher than 400 kilometers above Earth, below which their orbits decay quickly.

How Sprites will evolve as Breakthrough Starshot continues to examine the technology remains to be seen. But remember that along the way, we have numerous potential uses for the tiny spacecraft here in our own system. Mason Peck has even talked about letting Sprites become charged through plasma interactions and then using a huge magnetic field like Jupiter’s as a particle accelerator to push the chips to thousands of kilometers per second.

That’s actually another way to get a payload to Proxima Centauri, though one that would take decades to get up to speed, and would still require several centuries for the journey. Even so, the idea of swarms of Sprites as interstellar probes, each communicating with the others like fireflies, has a surreal kind of beauty. In the meantime, could we use Sprites for interplanetary missions? Peck pointed out in a 2011 IEEE Spectrum article that the chips could use radiation pressure from the Sun to move around the Solar System. Let me quote him:

If a Sprite could be made thin enough, then its entire body could act as a solar sail. We calculate that at a thickness of about 20 micrometers—which is feasible with existing fabrication techniques—a 7.5-mg Sprite would have the right ratio of surface area to volume to accelerate at about 0.06 mm/s2, maybe 10 times as fast as IKAROS [the Japanese solar sail]. That should be enough for some interplanetary missions. If Sprites could be printed on even thinner material, they could accelerate to speeds that might even take them out of the solar system and on toward distant stars.
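Peck’s figure is easy to sanity-check. The sketch below is my own back-of-envelope calculation, not Peck’s: it assumes a bare silicon chip, perfect reflectivity, and sunlight at 1 AU, and computes the radiation-pressure acceleration from the chip’s areal density alone.

```python
# Back-of-envelope check on a chip-as-solar-sail. Assumptions are mine:
# silicon sail material, ideal reflectivity, sunlight at 1 AU.
SOLAR_FLUX = 1361.0   # W/m^2, solar constant at 1 AU
C = 2.998e8           # speed of light, m/s
RHO_SI = 2330.0       # density of silicon, kg/m^3 (assumed material)
THICKNESS = 20e-6     # 20 micrometers, per the quote

# For a flat sail, acceleration depends only on areal density sigma:
# a = (1 + reflectivity) * flux / (c * sigma)
sigma = RHO_SI * THICKNESS                 # kg/m^2
a_ideal = 2.0 * SOLAR_FLUX / (C * sigma)   # perfect reflector

print(f"areal density: {sigma:.4f} kg/m^2")
print(f"ideal acceleration: {a_ideal * 1000:.2f} mm/s^2")
```

The ideal-case result, roughly 0.2 mm/s², sits a factor of a few above Peck’s quoted 0.06 mm/s², which is consistent with a real chip reflecting imperfectly and spending time off sun-normal.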

Image: Artist’s conception of a cloud of Sprite satellites over the Earth. Credit: Space Systems Design Studio/Cornell University.

Zac Manchester makes the same case, adding that Sprites can also be used to form three-dimensional antennas in deep space to monitor the kind of space weather that can damage power grids and orbiting satellites. Flying aboard larger spacecraft, they could be deployed as a rain of small probes to coat distant planetary surfaces with sensors.

“Eventually, every mission that NASA does may carry these sorts of nanocraft to perform various measurements,” says Pete Worden. “If you’re looking for evidence of life on Mars or anywhere else, for instance, you can afford to use hundreds or thousands of these things—it doesn’t matter that a lot of them might not work perfectly. It’s a revolutionary capability that will open up all sorts of opportunities for exploration.”



Testing out new sail applications is part of a European project called RemoveDebris, which focuses on strategies for dealing with the enormous amount of junk that is piling up around the Earth. Run by the Surrey Space Centre at the University of Surrey (UK) and the von Karman Institute in Belgium, the work takes note of the fact that, from flecks of paint to inactive satellites to spent rocket boosters, our planet is orbited by about 7000 tonnes of material. If you want to visualize that amount, it’s the equivalent of 583 London buses, according to this SSC news release.

You may recall that in the film Gravity, a Space Shuttle is destroyed by space debris. But the issue is hardly confined to Hollywood imaginings. Jason Forshaw is Surrey Space Centre project manager on the RemoveDebris team:

“Various orbits around the Earth that are commonly used for satellites and space missions are full of junk, which is a significant danger to our current and future spacecraft. Certain orbits – which are commonly used for imaging the earth, disaster monitoring and weather observation – are quickly filling up with junk, which could jeopardise the important satellites orbiting there. A future big impact between junk in that orbit could result in a real life ‘Gravity-like’ chain reaction of collisions.”

A scary thought, but as you would imagine, my interest for Centauri Dreams lies primarily in that sail deployment. Funded by the European Commission, the Surrey effort, called InflateSail, has demonstrated inflatable sail deployment techniques and will be testing deorbiting sail technologies from a small satellite. Launch occurred on June 23, with deployment shortly after the CubeSat carrying the sail achieved orbit.

The sail is designed to demonstrate the effectiveness of a drag sail at causing satellites to lose altitude and burn up in the atmosphere. The satellite uses a cool gas generator to inflate a one-meter-long boom. After boom inflation, a motor extends four carbon booms that extract the 10-square-meter sail. In future use, such a sail could be carried aboard a satellite and deployed at the end of its life, to ensure that it does not join the ranks of space debris.

Image: The InflateSail mission has successfully tested both inflatable and ‘deorbit sail’ technologies in space from a small nanosatellite. Credit: University of Surrey.

You may recall that NASA launched a small sail called NanoSail-D2 in 2010 that eventually re-entered the atmosphere after 240 days in orbit. Its deployment from the FASTSAT satellite in which it launched did not occur on time, but the sail ejected on its own and deployed three days afterward, a follow-up to an earlier NanoSail-D that was lost during the launch attempt. In addition to testing systems for sail deployment, NanoSail-D2, like InflateSail from the SSC, was designed to explore deorbiting measures that could be applied to space debris.

The Surrey sail is in orbit and returning data to ground controllers. Drag produced by the sail will gradually lower its altitude for re-entry, causing it to burn up in the atmosphere. Craig Underwood, who is in charge of the Surrey Space Centre’s environments and instrumentation group, is principal investigator for the mission. Says Underwood:

“We are getting tremendous data from the spacecraft, which have already given new insights into these key deorbiting technologies in the real space environment. InflateSail heralds yet another successful CubeSat mission for the space engineering and academic team at the SSC. It also demonstrates how we can effectively help reduce space junk, and later this year we will launch one of our flagship missions, RemoveDebris – one of the world’s first missions to test capturing of artificial space junk with a net and harpoon.”

Centauri Dreams’ take: The more experience we gain with sail deployment and operations, the better. In this case, we are looking at near-term sail applications to solve a serious problem for spacecraft near the Earth. But remember the success of Japan’s IKAROS mission at going interplanetary and testing sail deployment, navigation and data return. Early in the next decade, if all goes as planned, a new sail from JAXA spanning 50 meters on a side will deploy and head for Jupiter to study its trojan asteroids.

The future of sail technologies seems bright, particularly as we gain experience and begin to explore beamed energy possibilities. The fact that the Surrey Space Centre has successfully deployed an inflatable sail from the small CubeSat in which it was contained is an encouraging nod to the continued development of sails for near-Earth use. As we master these technologies, we’ll apply them to missions deep into the Solar System and beyond.



SailBeam: A Conversation with Jordin Kare

Looking around on the Net for background information about Jordin Kare, who died last week at age 60 (see yesterday’s post), I realized how little is available on his SailBeam concept, described yesterday. SailBeam accelerates myriads of micro-sails and turns them into a plasma when they reach a departing starship, giving it the propulsion to reach one-tenth of lightspeed. Think of it as a cross between the ‘pellet propulsion’ ideas of Cliff Singer and the MagOrion concept explored by Dana Andrews.

So I thought this morning to offer you some thoughts about SailBeam and its genesis from the man himself. I interviewed Jordin back in early 2003 in a wide-ranging discussion that took in most aspects of his work. He was an easy interview — all I had to do was offer the occasional nudge and he would take off. I found him engaging and hugely likeable. What follows is a fraction of the entire interview, the part that focuses primarily on SailBeam and a bit on Kare himself. I’ve edited it but in general preferred to let Kare’s own voice come through. The images I use here are from Jordin’s “SailBeam Space Propulsion by Macroscopic Sail-type Projectiles,” a presentation he delivered at the 2001 NIAC workshop in Atlanta.

PG: Your work with NIAC on the SailBeam concept takes sail technologies down a new path. Tell me how SailBeam and the NIAC report came about.

JK: I am an astrophysicist by background. I worked at UC-Berkeley and got a doctorate there in 1984. For most of the time since, I’ve been an aerospace engineer, dividing my identity between physicist and engineer. A lot of what I’ve worked on in this area are advanced propulsion projects. So I’ve been involved in a community of people who do exotic propulsion things.

One of the things that’s always in my mind is doing advanced interstellar propulsion. In this case, I’d been aware of ideas for doing laser and microwave sails for interstellar propulsion. Bob Forward did prototypical work on that. I’d been involved in a couple of workshops where he talked about the concept, one at the Jet Propulsion Laboratory a couple of years back.

Along those lines, I had realized that there’s a scaling law to how laser sails worked. If you took a laser sail and tried to get to a certain velocity with a certain size laser and certain size sail, and then you took that sail and cut it into several pieces and accelerated those one after another, you could get the same amount of mass to the same velocity in the same amount of time, but you could use a smaller laser because the sail doesn’t accelerate over as long a distance.

That was interesting but not very useful. And then I thought about work that Geoffrey Landis had done about using sails not made of metal foils. The trouble with small sails is that you’re pushing them harder, accelerating faster. And if you’re using metal sails, you can’t do much of that before they just melt. Landis had pointed out you could push harder on dielectric sails.


PG insert: A brief bit of background on this. The problem with metal films is that they have low emissivity. A small sail made of such materials overheats under the beam. High emissivity materials with higher melting temperatures are needed. Dielectrics are non-conductive materials that will emit a lot but absorb little of the radiation impinging upon them. Silicon carbide is a dielectric, as is aluminum oxide and, Kare’s favorite, diamond. But back to the interview.


JK: Dielectric sails that are a thin layer of transparent material reflect better than metal foil sails because they have a different index of refraction [describing how light propagates through a particular medium], like the reflection off the surface of a piece of glass, or reflection off a metal film. Forward noted that dielectric sails could potentially have higher acceleration.

My mental light bulb went on and said I know from working in laser technology and other areas I work on, that you can make very low absorption dielectric materials. If I can make very high quality, very low absorption dielectrics, I could push them really hard. Now instead of thinking in terms of taking a sail and dividing it into ten pieces, I can divide it into a million pieces. I started doing calculations and realized that this made sense as a propulsion system.

I started pulling in pieces from other projects I’ve worked on. This got to the point where I could make a rough design of the system concept and started telling my associates about it. I did a quick presentation at a meeting of space people that we exotic propulsion people go to — the Space Technology and Applications International Forum every January in Albuquerque.

I was describing the SailBeam idea to Bob Forward and he was the one who said you should get some money out of NIAC to study this further; he’d been involved with reviewing stuff for NIAC. I knew other people who had worked with them, so I went to the next NIAC workshop and did a proposal for the following round. When I looked at the kinds of things NIAC was supporting, SailBeam fit with the tenor of their proposals.

PG: Of course the whole idea of sail technologies is changing.

JK: It is for sure, and we have to distinguish between solar and laser sails, or beamed energy sails. The idea of solar sails has been around for a long time. And there have been many changes of direction. They used to look primarily at metallic sails for solar sail missions, usually metallic sails coated on a plastic film. But people who were really aggressive thought in terms of free-standing metal sails. Just in the last two or three years, carbon-carbon has emerged. Here we have carbon fibers fused together to make an open lattice material that is as lightweight as anything they were ever hoping for out of metal-coated plastics, and much easier to handle. Carbon-carbon also takes much higher temperatures than plastic film.

Suddenly the solar sail people began looking at Sundiver missions, where they fly a solar sail and let it drop close to the Sun, flying edge on until it gets well inside the orbit of Mercury and then turning it face-on to the Sun for that propulsive kick. This gives you much higher velocities than anything we’ve done today, something like 100 or 200 kilometers per second, which means this has application for missions far past Pluto. This kind of velocity lets you begin to talk about thousand astronomical unit missions, a true interstellar precursor.

PG: Missions to another star demand even more. A lot more.

JK: True. Bob Forward was working at Hughes on some of the earliest lasers back in the 1960s when he first came up with the beamed sail idea. His idea was that if you have a laser or microwave beam, you can focus much more energy over a longer distance than you can with sunlight. You can use the same light pressure that solar sail people are talking about to get much higher velocities. Forward came up with this thing called Starwisp [a microwave-driven wire-mesh sail about a kilometer in diameter with a flight time of 20 years to Alpha Centauri — see The Case for Beamed Sails].

Forward realized there were problems with the basic Starwisp concept. One that always bothered me was how Bob was going to get any useful information out of this Starwisp. He talked about having little sensors at the intersections of this fine wire mesh, magically having them turn into a large telescope aperture. I was never quite clear how that actually worked.

So that was his first notion of a very high velocity sail. Forward also came up with concepts for laser sails, in particular the multistage laser sail that would be able to decelerate at destination by splitting off part of the sail and using that to reflect the beam back onto a separate section. A lot of people were interested in that and the idea got used in a lot of science fiction, including Bob’s own writing.

I’ll mention there’s a paper Bob Forward wrote for a workshop I ran at Livermore in 1986, when we were looking at non-interstellar laser propulsion applications. His paper was “Laser Weapon Target Practice with GeeWhiz Targets.” And in there he talked about a sail that was made of multiple layers of diamond film. I had almost forgotten about this when I came up with my notion of the SailBeam. He had the idea of using dielectric reflectors by way of getting to extremely high performance in a sail.

I use artificial diamond as the best material for my sail. So Bob, as was usually the case, had some of the same pieces considerably earlier than anyone else. But he also had much thicker sails with more layers and wasn’t trying for quite such high performance. He was talking about building something that could fly at perhaps 100 kilometers per second using the types of lasers we were talking about building for strategic defense.

The problem with his interstellar propulsion scheme, and everyone agreed it was a problem, was the scale that was required. Because Bob Forward wrote about 10,000 kilometer diameter Fresnel zone plate lenses. He would show an artist’s conception of the lens hanging next to the Earth, and it was the same size as the Earth! The sail by itself would be a hundred or thousand kilometers in diameter, and the lasers were in the terawatt category. It was clear that in principle it would work, but it was, to say the least, a monumental engineering task.

We all wondered if we could do this better somehow. At a workshop out at the Jet Propulsion Laboratory, Geoff Landis ran the session on laser sails and we looked at how you could make smaller sails, asking what was the smallest sail you could build and still do interesting missions. We were still looking at a single laser pushing a single sail.

It was hard to come up with something buildable and still interesting, but Landis had looked at optimizing sails in terms of choosing the best possible material. He was the one who pointed out there was this notion of designing not with multiple layers of dielectric that Bob Forward had put into his ‘gee whiz targets’ paper, but with a single layer of dielectric a quarter wavelength thick. That plus the scaling property that I had been thinking about were some of the ingredients that led to the SailBeam.

PG: SailBeam works by turning your micro-sails into plasma to push the departing spacecraft.

JK: This is where Dana Andrews’ work with magsails was so critical. The notion of putting magnetic coils on a spacecraft, essentially a magnetic loop, and making a magnetic field around it to deflect the solar wind, the stream of charged particles from the Sun. I had done some work for Dana on MagOrion, a notion of making the magnetic field strong enough that you could set off an atomic blast behind the spacecraft and deflect the plasma produced by the bomb.

PG: This was the Project Orion idea applied to magsails.

JK: Exactly. The magsail replaces what would have been a physical sail. The idea was designed originally for cruising around inside the Solar System. But magsails and all these other threads tie together — remember that the original invention of the magsail came when Dana Andrews and Bob Zubrin were trying to figure out if they could make the Bussard ramjet work. They wanted to see what you could do if you were trying to collect interstellar hydrogen with a magnetic scoop. And what they discovered is that they couldn’t make a Bussard ramjet work, because the magnetic fields always ended up deflecting the ionized hydrogen at high velocity. What that turns into is a very good drag brake.

Tweaking the numbers a bit, they could make it a drag brake against the solar wind, which is flying along at a pretty good velocity in the Solar System, 75 kilometers per second or so. So they could fly around on the solar wind. But all this originated from looking at another interstellar propulsion concept. These ideas build on each other; they’re hybrids.

So I had been working on MagOrion, had done designs of the field coils for Dana Andrews, and that was another piece, because I wondered if I can accelerate little bitty sails and do this scaling of launching a million little sails instead of one big sail, what do I do with them? They are too small to be useful individually. Well, I can use them like a MagOrion. I can turn them into blobs of ions and bounce them off a magnetic field at the vehicle. So I got to pull in yet another piece from things that other people had come up with that I adapted for my own design.

PG: You also applied magsail to deceleration in the target stellar system.

JK: Exactly. One of the things that Dana and Bob Zubrin had pointed out in the past is that a magsail worked as a way of decelerating interstellar spacecraft. I’m carrying a magsail anyway, so Dana and I collaborated on an IAF paper on slowing down a SailBeam vehicle at the far end. Now we had both a way of accelerating and reusing some of that hardware to stop at the far end.

PG: This seems like a more realistic way to do it than Forward’s ‘staged sail’ concept.

JK: I think it is. The one limit on it is that it is not a very fast braking system. It does take tens of years to stop. And it doesn’t bring you down to a full stop. That’s because the force you get to slow down varies with how fast you’re going. So the slower you’re going, the less you slow down. At some point, the time it takes to slow down from a tenth of the speed of light to one percent of the speed of light isn’t too bad, but it takes progressively longer to slow down the rest of the way. You can argue design details as to whether you can get down slow enough that you can then come to a stop by braking against the wind from whatever star you’re approaching. That gives you an extra 75 or 100 kilometers per second for the wind velocity to work against.

Or maybe you’re going to have to carry some system like nuclear electric to slow you down the last 100 kilometers per second. Forward’s sail in principle would let you come to a complete stop or reach any final velocity you wanted to. But it does seem like a very difficult thing to do. It’s in the category of ideal technology. It’s pretty hard to see how you’d actually build it.
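The diminishing returns Kare describes can be made concrete with a toy model. The sketch below assumes (my simplification, not his analysis) that magsail drag scales with the square of velocity; under that law, each factor-of-ten drop in speed takes ten times longer than the last, and the vehicle never quite coasts to zero.

```python
# Toy magsail braking model: dv/dt = -k * v^2 (illustrative assumption only;
# real magsail drag scaling is more complicated). Integrating gives the time
# to slow from v1 to v2 as t = (1/v2 - 1/v1) / k.
C = 2.998e8           # speed of light, m/s
YEAR = 3.156e7        # seconds per year

def brake_time(v1, v2, k):
    """Time to coast down from v1 to v2 under dv/dt = -k v^2."""
    return (1.0 / v2 - 1.0 / v1) / k

# Calibrate k so the first leg (0.1c down to 0.01c) takes 20 years.
k = (1.0 / (0.01 * C) - 1.0 / (0.1 * C)) / (20 * YEAR)

t1 = brake_time(0.10 * C, 0.010 * C, k) / YEAR
t2 = brake_time(0.01 * C, 0.001 * C, k) / YEAR
print(f"0.1c  -> 0.01c : {t1:.0f} years")
print(f"0.01c -> 0.001c: {t2:.0f} years")
```

With the first leg calibrated to 20 years, the next factor-of-ten reduction takes 200, which is why Kare reaches for a stellar wind, or an onboard system, to shed the last 100 kilometers per second.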

PG: You talk about using relay lenses along the acceleration path for your micro-sails. How does this system improve the original design?

JK: That was something I realized late in the process of doing the design. My little sails accelerate over short distances by comparison to Forward’s big sail concepts, a few tens of thousands of kilometers. The problem with pushing a big sail is that I have this one big lens that has to focus the light on the sail some large distance away. How about if I take a smaller lens and use it to focus light, but then I put another lens at a place where the beam spreads out again? And I put another lens out and focus the beam yet again. So I have this spaced series of lenses.

It’s pretty easy to show this is not a useful thing to do if you’re trying to accelerate a large sail over a light year. Partly because you have to put the intermediate lenses a large fraction of a lightyear away and partly because you don’t gain when the lens and the sail are about the same size. There’s no advantage to it; you end up having the same amount of material in multiple lenses as you would in one big lens. Geoff Landis did a paper to show why it doesn’t work.

With my situation, though, I was only accelerating things for a few tens of thousands of kilometers. I had been thinking I’ll do this with one big telescope, a 500 meter telescope. But at some point I realized I’m taking this 500 meter telescope and focusing the beam on this little tiny sail. If I were to try to focus on another lens, another telescope, I could do that easily. I’m only accelerating over a short distance, so I can physically put a telescope forty thousand kilometers away; it’s not like I have to put it half a light year away.

So I realized I could build a 50 meter telescope and have ten of them. Because of the way the numbers work out, because I’m focusing on a very small object, it turns out I gain in terms of the total area of the telescopes. I can make ten telescopes, each a tenth of the diameter, spaced a tenth of the way along the path. Since each telescope is a tenth of the diameter, it has a hundredth of the area, so all ten together use only a tenth of the material of the one big telescope.
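The aperture bookkeeping is worth checking explicitly. Using the interview’s own numbers (one 500-meter telescope versus ten 50-meter relays), each relay carries one percent of the big aperture’s collecting area, and all ten together only a tenth of it:

```python
import math

# One 500 m telescope vs. ten 50 m relay telescopes (Kare's numbers).
D_single = 500.0
n_relays = 10
D_relay = D_single / n_relays     # 50 m each

area_single = math.pi * (D_single / 2.0) ** 2
area_relay = math.pi * (D_relay / 2.0) ** 2
area_all_relays = n_relays * area_relay

print(f"each relay: {area_relay / area_single:.1%} of the single aperture")
print(f"all relays: {area_all_relays / area_single:.1%} of the single aperture")
```

The gain comes from the target being tiny: since each relay only has to hold focus over a tenth of the path, the smaller mirrors suffice, and the total glass shrinks by a factor of ten.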

Now I had gotten the lens down from 10,000 kilometers to a few hundred meters, which certainly helps. Look, Bob Forward figured out a way you could get to the stars using known physics. Cliff Singer talked about using particle accelerators for ‘pellet’ propulsion. Both these notions left us huge engineering problems. What Geoff Landis and I both did was to ask: can we do better from an engineering standpoint? Can we make this something we can actually build?

What I like best about SailBeam is that as far as I know right now, it is the most engineering-practical way to get up to a tenth of the speed of light.

PG: These sails get up to speed, shall we say, quickly.

JK: Yes. In some of the designs they go from zero to a tenth of light speed in about a tenth of a second. That’s pushing it, and in the design that’s in the final proposal, they take about three seconds. I love showing that slide; it shows what the limiting acceleration would be for an ideal microsail, and it’s like 30 million gravities, or zero to lightspeed in 0.97 seconds. But even backing off because of materials properties, you’re talking about accelerating at hundreds of thousands of gravities and getting up to a large fraction of lightspeed in a few seconds.
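The slide arithmetic is easy to verify. At 30 million Earth gravities, a test mass reaches lightspeed (ignoring relativity, as the slide does) in about a second:

```python
C = 2.998e8          # speed of light, m/s
G0 = 9.81            # standard gravity, m/s^2

a_limit = 30e6 * G0  # "30 million gravities"
t_to_c = C / a_limit # non-relativistic time to reach c
print(f"time to lightspeed at 30 million g: {t_to_c:.2f} s")
```

That comes out near 1.02 seconds, within rounding of Kare’s 0.97-second figure (which would imply closer to 31.5 million gravities).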

PG: And you’re pushing, in an ideal scenario, an interstellar probe of what size.

JK: The baseline is a one ton probe. You probably can’t go a lot smaller than that, though I wouldn’t swear you couldn’t build a one kilogram probe. But even with sophisticated miniaturization, it becomes hard to make a useful probe that’s much smaller than a ton. So I tend to look at that size scale. On the other hand, there isn’t an upper limit. You could build much bigger probes, but they would cost more because you would need bigger lasers to launch them. The laser power you need is proportional to the mass.

PG: Up to near lightspeed in a second! You seem to be somebody who enjoys pushing the boundaries.

JK: It’s definitely a lot of fun to do. The interesting part of my work is coming up with new schemes and combinations to see if they work. The flip side is that interstellar flight is such a hard problem that you don’t get the satisfaction of something you expect to see built.

PG: Interstellar flight is all about long time frames. Even mission durations of 50 to 100 years are wildly beyond our current capabilities. So how do you cope with this perspective — long-term thinking isn’t something our culture has much patience with.

JK: It’s certainly something that is pretty rare in our society. Although I am occasionally amazed, because on the one hand people don’t think long term, and then, on the other hand, I see people worrying about things like Social Security going bankrupt in thirty years, when we have no idea what the economy is going to be like in thirty years. So there are a few places in our society where people do think long term, but not many. It’s actually an interesting phenomenon.

PG: The notion of working on projects where you won’t get results in a lifetime or more is rare indeed. But from talking to you, I get the idea that you would be pleased to think that something you did today would contribute to a mission that might not launch until you and I are both gone.

JK: That’s absolutely true. I would be delighted if when I am old and gray, I discover that people are just starting to work on building something like SailBeam and are referring to me as having come up with the idea, or part of the idea. It’s not that I can’t imagine this SailBeam concept actually being launched within my lifetime — it’s not impossible — but it’s as much as I can reasonably expect to hope that in my time on Earth we’ll maybe be getting started on it.

PG: You’re also a science fiction fan.

JK: Yes. No fiction of my own, other than the occasional song. But I do often point out that I write both science fiction and fantasy. It’s just that the science fiction is usually titled ‘technical proposal’ and the fantasy is titled ‘budget proposal.’ I have never turned pro like Geoff Landis.

Certainly I’ve been an SF reader since way back when. I will note in fact that if there was any single book that turned me onto the notion of engineering interstellar flight, it would be the book Tau Zero by Poul Anderson. That was the one that got me going, stimulating a lot of interest in interstellar flight as something that we might actually make happen.


Jordin’s report on SailBeam concepts is “High-Acceleration Micro-Scale Laser Sails for Interstellar Propulsion,” Final Report, NIAC Research Grant #07600-070, revised February 15, 2002 and available here. And see Geoffrey Landis’ “Optics and Materials Considerations for a Laser-propelled Lightsail,” presented at the 40th International Astronautical Federation Congress, Málaga, Spain, Oct. 7-12, 1989 (full text).



Remembering Jordin Kare (1956-2017)

We’ve just lost a fine interstellar thinker. Jordin Kare has died of aortic valve failure at age 60. While Kare played a role in the Clementine lunar mapping mission and developed a reusable rocket concept in the 1990s that he thought could be parlayed into a space launch system (in typical Kare fashion, he called it “DIHYAN,” for ‘Do I Have Your Attention Now?’), it is through a laser sail system called SailBeam and a ‘fusion runway’ concept that he will most likely be remembered among those who study starflight. But he was also an active science fiction fan, ‘filksinger’ and poet whose name resonates wherever science fiction fans gather.

To science fiction writer Jerry Pournelle, who remembered Kare to a small mailing list over the weekend, it was a song called ‘Fire in the Sky’ that first came to mind. The first verse:

Prometheus, they say, brought God’s fire down to man
And we’ve caught it, tamed it, trained it since our history began
Now we’re going back to Heaven just to look Him in the eye
And there’s a thunder ‘cross the land and a fire in the sky

The song is a rousing tribute to outward yearning, written by a man who was a regular at science fiction conventions, where he achieved his fame as a singer. If you’re not familiar with the term ‘filksinger,’ it emerged in the musical community that evolved inside science fiction fandom. Kare was prolific at the genre and released two albums of his own work: Fire in the Sky (1991) and Parody Violation: Jordin Kare Straight and Twisted (2000). He was also a partner in Off Centaur Publications, a commercial venture specializing in such music. Pournelle liked ‘Fire in the Sky’ enough to feature it in the novel Fallen Angels (Baen, 1991), which he wrote with Larry Niven and Michael Flynn.

Image: Astrophysicist and space systems consultant Jordin Kare, who died on July 19.

As a physicist and aerospace engineer, Kare focused primarily on laser propulsion, both from ground-to-orbit and deep space perspectives. A long-time researcher at Lawrence Livermore National Laboratory, he put together an early laser propulsion workshop at LLNL in 1986; his work on laser launch from ground to orbit drew support from the Strategic Defense Initiative.

Kare left Livermore in the mid-90s to become a consultant specializing in spacecraft design. I ran into him through reports he did for what was then called the NASA Institute for Advanced Concepts, where he wrote about launch prospects using laser arrays, and reshaped laser sail concepts for speed and efficiency. I highly recommend you take a look at his “High-Acceleration Micro-Scale Laser Sails for Interstellar Propulsion” report for NIAC in 2002 (citation below). He would go on to become chief scientist at the beamed-power company LaserMotive.

In the interstellar community, it may be SailBeam that stands as his primary legacy. At a time when Robert Forward had studied vast lightsails hundreds of kilometers across, Kare went the other direction. He had realized that Forward’s sails demanded gigantic optical systems including in one instance a Fresnel lens in the outer Solar System that would be the size of a planet (this was to be used to collimate the powerful laser beam from the inner system). Why not power down and aim for a system far less complex by shrinking the sails themselves?

The gist of the idea is this: Kare’s tiny sails, made of diamond film and pushed by a multi-billion watt orbiting laser, could be accelerated much closer to their power source than Forward’s sails and brought up to a substantial fraction of lightspeed within seconds. Kare coupled sail concepts with Cliff Singer’s pellet propulsion, reasoning that his tiny sails could intercept a large interstellar probe and become a source of propulsion as they were vaporized into plasma.

Aerospace engineer Dana Andrews worked with Kare on various magsail concepts and wrote about SailBeam himself in a paper cited below. Andrews pointed out that SailBeam solved a key problem in particle beam propulsion — a neutral particle beam will disperse as it travels. A stream of tiny sails driven by laser will not. You might see some crossover here with another concept Andrews and Kare both studied called MagOrion, which would use plasma pulses from small nuclear explosions to drive a starship deploying a magnetic sail.

Kare’s fame also rests on the concept of a fusion runway, which he saw as a long string of fusion-fuel pellets deployed ahead of a departing spacecraft. The vehicle, initially moving at several hundred kilometers per second, would begin encountering the pellets with enough velocity to light up its main engines. A cruising velocity of 30,000 kilometers per second, he believed, was possible with a runway strung over a tenth of a light year. I wrote about this concept recently in A Fusion Runway to Deep Space? and had an interesting conversation with Kare about it.
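Treating the runway as a simple constant-acceleration problem (v² = 2aL, a deliberate simplification that ignores the staged pellet encounters), we can sanity-check those figures:

```python
LIGHT_YEAR = 9.4607e15   # meters
YEAR = 3.1557e7          # seconds

v_cruise = 3.0e7             # 30,000 km/s cruise velocity, from the article
runway = 0.1 * LIGHT_YEAR    # a tenth of a light year, from the article

# v^2 = 2 * a * L  =>  a = v^2 / (2 * L)
accel = v_cruise**2 / (2 * runway)
burn_time = v_cruise / accel

print(f"average acceleration ~{accel:.2f} m/s^2 (~{accel / 9.81:.3f} g)")
print(f"time spent on the runway ~{burn_time / YEAR:.1f} years")
```

The answer, roughly half a meter per second squared sustained for about two years, shows why a runway rather than onboard fuel is attractive: the acceleration is gentle, but it never has to stop.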

More about that conversation tomorrow, because I would like to bring Kare’s own words into the mix. For now, Kare’s rousing finale to the song with which I began this piece:

Now the rest is up to us – there’s a future to be won
We must turn our faces outward, we will do what must be done
For no cradle lasts forever, every bird must learn to fly
And we’re going to the stars, see our fire in the sky
Yes, we’re going to the stars, see our fire in the sky

The Andrews paper is “Interstellar Propulsion Opportunities Using Near-Term Technologies,” in Acta Astronautica Vol. 55 (2004), pp. 443-451. Jordin Kare’s report on SailBeam concepts is “High-Acceleration Micro-Scale Laser Sails for Interstellar Propulsion,” Final Report, NIAC Research Grant #07600-070, revised February 15, 2002 and available here.



METI: A Response to Steven Johnson

Yesterday’s post dwelt on an article by Steven Johnson in the New York Times Magazine that looked at the issue of broadcasting directed messages to the stars. The article attempted a balanced look, contrasting the goals of METI-oriented researchers like Douglas Vakoch with the concerns of METI opponents like David Brin, and fleshing out the issues through conversations with Frank Drake and anthropologist Kathryn Denning. Johnson’s treatment of the issue prompted a response from a number of METI critics, as seen below. The authors, all of them prominent in SETI/METI issues for many years, are listed at the end of the text.

We thank Steven Johnson for his thoughtful New York Times Magazine article, which makes it clear that there are two sides to the METI issue. We applaud his idea that humankind needs a mechanism for decision-making on long-term issues that could threaten our future.

As Johnson implies, deliberately calling ourselves to the attention of a technological civilization more advanced than ours is one of those issues. What we do now could affect our descendants.

As Johnson asks, who decides? Without an agreed approach, the decision to transmit might be made by whoever has a sufficiently powerful transmitter.

Astronomers have given us an additional reason for addressing this question: the discovery of thousands of planets in orbit around other stars, increasing the probability that life and intelligence have evolved elsewhere in our galaxy.

METI is not scientific exploration. It is an attempt to provoke a reaction from an alien civilization whose capabilities and intentions are not known to us.

The most likely motivation for alien intervention is not a wish to exploit Earth’s territory or resources, but the potential threat posed by a new space-faring civilization — us. Scientists and engineers already are designing humankind’s first unmanned interstellar probes. Some might be visiting nearby stars less than a century from now.

Image: Taken by the Advanced Camera for Surveys on the Hubble Space Telescope, this image shows the core of the great globular cluster Messier 13, to which a message was beamed in 1974. Credit: ESA/Hubble and NASA.

Though altruism may be a noble goal, human history suggests that it rarely extends beyond one’s own species. We have not been very altruistic toward dolphins, whales, or chimpanzees.

What mechanism can we devise for what Johnson calls global oversight of METI? In the 1970s conferences at Asilomar assessed dangers from the then theoretical notion of genetic engineering. The resulting compromises improved laboratory safety while allowing continued research in this field under an agreed set of rules.

In the 1980s, some of us proposed a first step toward agreed rules through the document known informally as the First SETI Protocol, which calls for consultations before responding to a detected alien signal. (That protocol has been endorsed by most SETI researchers, but has not been adopted by government agencies.) An attempt to gain consensus on a second protocol calling for consultations before the transmission of powerful, human-initiated signals foundered on a basic disagreement that is mirrored in today’s METI debate.

Seventeen years ago, the International Academy of Astronautics presented a proposal to the United Nations for an international decision-making process for sending such communications. The U.N. noted the report and filed it.

Plans to send powerful targeted messages to nearby solar systems have brought this issue back to our attention. The underlying issue has not changed. As renowned Chinese science fiction writer Cixin Liu wrote, “I’ve always felt that extraterrestrial contact will be the greatest source of uncertainty for humanity’s future.” Let’s address that issue as rationally as we can.

Gregory Benford, astrophysicist and science fiction author

James Benford, physicist

David Brin, astrophysicist and science fiction author

Catharine A. Conley, NASA Planetary Protection Officer

John Gertz, former chairman of the SETI Institute

Peter W. Madlem, former board member of the SETI Institute

Michael Michaud, former diplomat, author

John Rummel, former Director, NASA Planetary Protection Office

Dan Werthimer, radio astronomer



Wrestling with METI

If we were to send a message to an extraterrestrial civilization and make contact, should we assume it would be significantly more advanced than us? The odds say yes, and the thinking goes like this: We are young enough that we have only been using radio for a century or so. How likely is it that we would reach a civilization that has been using such technologies for an even shorter period of time? As assumptions go, this one seems sensible enough.

But let’s follow it up. In an interesting piece in the New York Times Magazine, Steven Johnson makes the case this way: the universe is almost 14 billion years old, which means it took 13,999,999,900 years before radio communications became a factor here on Earth. Now imagine a civilization whose timeline of development deviates from our own by just one tenth of one percent of that age. If they are more advanced than us, they will have been using technologies like radio and its successors for 14 million years.
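The arithmetic behind Johnson’s example is worth making explicit; a quick sketch:

```python
AGE_UNIVERSE = 14_000_000_000   # years, the rounded figure Johnson uses
RADIO_AGE = 100                 # rough years since humans began using radio

# Radio arrived this many years into cosmic history:
print(AGE_UNIVERSE - RADIO_AGE)          # 13999999900

# A timeline shifted by one tenth of one percent of the universe's age:
head_start = AGE_UNIVERSE * 0.001
print(f"{head_start:,.0f} years")        # 14,000,000 years
```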

Assumptions can be tricky. We make them because we have no hard data on any civilization outside our own. About this one, we might ask: Why should there be any universal ‘timeline’ of development? Are there ‘plateaus’ where the steep upward climb of technological change goes flat? Soon we have grounds for an even deeper debate. What constitutes civilization? What constitutes intelligence, and is it necessarily beneficial, or a path toward extinction?

Image: The Arecibo Observatory in Puerto Rico, from which a message was broadcast to the globular cluster M13 in 1974.

Airing out the METI Debate

I want to commend Johnson’s piece, which is titled “Greetings, E.T. (Please Don’t Murder Us.)” As you can gather from the title, the author is looking at our possible encounter with alien civilizations in terms not of detection but of contact, and that means we’re talking METI — Messaging Extraterrestrial Intelligence. What I like about Johnson’s treatment is that he goes out of his way to talk to both sides of a debate known more for its acrimony than its enlightenment. Civility counts, because both sides of the METI issue need to listen to each other. And the enemies of civilized discussion are arrogance and facile assertion.

It was Martin Ryle, then Astronomer Royal of Britain, who launched the first salvo in the METI debate in response to the Arecibo message of 1974, asking the International Astronomical Union to denounce the sending of messages to the stars. In the four decades since, about a dozen intentional messages have been sent. The transmissions of Alexander Zaitsev from Evpatoria are well known among Centauri Dreams readers (see the archives). Douglas Vakoch now leads a group called METI that plans to broadcast a series of messages beginning in 2018. The Breakthrough Message initiative has also announced a plan to design the kind of messages with which we might communicate with an extraterrestrial civilization.

All of this will be familiar turf for Centauri Dreams readers, but Johnson’s essay is a good refresher in basic concepts and a primer for those still uninitiated. He’s certainly right that the explosion of exoplanet discovery has materially fed into the question of when we might detect ETI and how we could communicate with it. It has also raised questions of considerable significance about the Drake Equation; specifically, about the provocative term L, meant to represent the lifespan of a technological civilization.

Johnson runs through the Fermi question — Where are they? — by way of pointing to L’s increasing significance. After all, when Frank Drake drew up the famous equation and presented it at a 1961 meeting at Green Bank (the site of his Project Ozma searches), no one knew of a single planet beyond our Solar System. Now we’re learning not just how frequently they occur but how often we’re likely to find planets in the habitable zone around their stars. The numbers may still be rough, but they’re substantial. There are billions of habitable zone planets in the galaxy, so the likelihood of success for SETI would seem to rise.

And if we continue to observe no other civilizations? The L factor may be telling us that there is a cap to the success of intelligent life, a filter ahead of us in our development through which we may not pass, whether it be artificial intelligence or nuclear weaponry or nanotechnology. METI’s critics thus worry about planet-wide annihilation, and wonder if a limiting factor for L, at least for some civilizations, might be interactions with other, more advanced cultures. Far better for our own prospects if the ‘filter’ is behind us, perhaps in abiogenesis itself.

Hasn’t our own civilization already announced its presence, not just through an expanding wavefront of old TV and radio shows but also through the activity of our planetary radars, and the chemistry of our atmosphere? After all, even at our level of technology, we’re closing in on the ability to study the atmospheres of Earth-class planets around other stars. If this is the case, are we simply being watched from afar because we’re just one of many civilizations, and perhaps not one worth communicating with? METI proponents will argue that this is another reason to send a message: Announce that, at long last, we are ready to talk.

The counter-argument runs like this: A deliberately targeted message is a far different thing than the detection of life-signs on a distant planet. The targeted message is a wake-up call, saying that we are intent on reaching the civilizations around us and are beginning the process. Passive signal leakage is one thing; targeting a specific star implies an active level of interest. And the problem is, we have no way of knowing how an alien culture might respond.

Procedures for Consensus

In his article, Johnson is well served by the interviews he conducted with Frank Drake (anti-METI, but largely because he would prefer to see METI funding applied to conventional SETI); METI proponent and former SETI scientist Vakoch; anti-METI spokesman and author David Brin; and anthropologist Kathryn Denning, who supports broad consultation on METI. Johnson does an admirable job in summarizing the key questions, one of which is this: If we are dealing with technologies whose use has huge consequences, do individuals and small groups have the right to decide when and how these technologies should be used?

I think Johnson hits the right note on this matter:

Wrestling with the METI question suggests, to me at least, that the one invention human society needs is more conceptual than technological: We need to define a special class of decisions that potentially create extinction-level risk. New technologies (like superintelligent computers) or interventions (like METI) that pose even the slightest risk of causing human extinction would require some novel form of global oversight. And part of that process would entail establishing, as Denning suggests, some measure of risk tolerance on a planetary level. If we don’t, then by default the gamblers will always set the agenda, and the rest of us will have to live with the consequences of their wagers.

Easier said than done, of course. How does global oversight work? And how can we bring about a discussion that legitimately represents the interests of humanity at large?

Consultation also meets an invariable response: You can talk all you want, but someone is going to do it anyway. In fact, various groups already have. In any case, when have you ever heard of human beings turning their back on technological developments? For that matter, how often have we deliberately chosen not to interact with another society? Johnson adds:

But maybe it’s time that humans learned how to make that kind of choice. This turns out to be one of the surprising gifts of the METI debate, whichever side you happen to take. Thinking hard about what kinds of civilization we might be able to talk to ends up making us think even harder about what kind of civilization we want to be ourselves.

The METI debate is robust and sometimes surprising because of what doesn’t get said. Under the frequent assumption that human civilization is debased, we suppose an older culture will invariably have surmounted its own challenges to become enlightened and altruistic. Possibly so, but without data, how can we rule out that other civilizations are more or less like ourselves, with the capacity for great achievement as well as the predatory instincts that can turn them on themselves and on others? Is there a way of living with expansive technologies while remaining a flawed and striving culture that can still make huge mistakes?

We can’t know the characteristics of any civilization without data, which is why a robust SETI effort remains so crucial. As for METI, I’ll be publishing tomorrow a response to Johnson’s article from a group of METI’s chief opponents exploring these and other points.



Keeping an Eye on Ross 128

Frank Elmore Ross (1874-1960), an American astronomer and physicist, became the successor to E. E. Barnard at Yerkes Observatory. Barnard, of course, is the discoverer of the high proper motion of the star named after him, alerting us to its proximity. And as his successor, Ross would go on to catalog over 1000 stars with high proper motion, many of them nearby. Ross 128, now making news for what observers at the Arecibo Observatory are calling “broadband quasi-periodic non-polarized pulses with very strong dispersion-like features,” is one of these, about 11 light years out in the direction of Virgo.

Any nearby stars are of interest from the standpoint of exoplanet investigations, though thus far we’ve yet to discover any companions around Ross 128. An M4V dwarf, Ross 128 has about 15 percent of the Sun’s mass. More significantly, it is an active flare star, capable of unpredictable changes in luminosity over short periods. Which leads me back to that unusual reception. The SETI Institute’s Seth Shostak described it this way in a post:

What the Puerto Rican astronomers found when the data were analyzed was a wide-band radio signal. This signal not only repeated with time, but also slid down the radio dial, somewhat like a trombone going from a higher note to a lower one.

And as Shostak goes on to say, “That was odd, indeed.”

It’s this star’s flare activity that stands out for me as I look over the online announcement of its unusual emissions, which were noted during a ten-minute spectral observation at Arecibo on May 12. Indeed, Abel Mendez, director of the Planetary Habitability Laboratory at Arecibo, cited Type II solar flares first in a list of possible explanations, though his post goes on to note that such flares tend to occur at lower frequencies. An additional novelty is that the dispersion of the signal points to a more distant source, or perhaps to unusual features in the star’s atmosphere. All of this leaves a lot of room for investigation.
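For readers wondering what “dispersion-like features” means: when a broadband signal crosses ionized plasma, lower frequencies arrive later, and the delay follows the standard cold-plasma formula used in pulsar astronomy, with the dispersion measure DM expressed in pc cm⁻³. A sketch, using an arbitrary illustrative DM (the actual signal parameters have not been published in this form):

```python
def dispersion_delay_ms(dm: float, f_lo_ghz: float, f_hi_ghz: float) -> float:
    """Delay (ms) of the low-frequency edge relative to the high-frequency edge
    for a dispersion measure `dm` in pc cm^-3 (standard cold-plasma formula)."""
    return 4.149 * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

# Illustrative only: DM = 10 pc cm^-3 across a 4-5 GHz band.
print(f"{dispersion_delay_ms(10.0, 4.0, 5.0):.2f} ms")   # 0.93 ms
```

Large delays imply more plasma along the line of sight, which is why strong dispersion in the Ross 128 reception argued for a source more distant than the star itself.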

We also have to add possible radio frequency interference (RFI) into the mix, something the scientists at Arecibo are examining as observations continue. The possibility that we are dealing with a new category of M-dwarf flare is intriguing and would have obvious ramifications given the high astrobiological interest now being shown in these dim red stars.

All of this needs to be weighed as we leave the SETI implications open. The Arecibo post notes that signals from another civilization are “at the bottom of many other better explanations,” as well they should be, assuming those explanations pan out. But we should also keep our options open, which is why the news that the Breakthrough Listen initiative has now observed Ross 128 with the Green Bank radio telescope in West Virginia is encouraging.

No evidence of the emissions Arecibo detected has turned up in the Breakthrough Listen data. We’re waiting for follow-up observations from Arecibo, which re-examined the star on the 16th, and Mendez in an update noted that the SETI Institute’s Allen Telescope Array had also begun observations. Seth Shostak tells us that the ATA has thus far collected more than 10 hours of data, observations which may help us determine whether the signal has indeed come from Ross 128 or has another source.

“We need to get all the data from the other partner observatories to put all things together for a conclusion,” writes Mendez. “Probably by the end of this week.”

Or perhaps not, given the difficulty of detecting the faint signal and the uncertainties involved in characterizing it. If you’re intrigued, an Arecibo survey asking for public reactions to the reception is now available.

I also want to point out that Arecibo Observatory is working on a new campaign to observe stars like Ross 128, the idea being to characterize their magnetic environment and radiation. One possible outcome of work like that is to detect perturbations in their emissions that could point to planets — planetary magnetic fields could conceivably affect flare activity. That’s an intriguing way to look for exoplanets, and the list being observed includes Barnard’s Star, Gliese 436, Ross 128, Wolf 359, HD 95735, BD +202465, V* RY Sex, and K2-18.

A final note: Arecibo is now working with the Red Dots campaign in coordination with other observatories to study Barnard’s Star, for which there is some evidence of a super-Earth mass planet. More on these observations can be found in this Arecibo news release.