The Prime Directive – A Real World Case

by Paul Gilster on August 28, 2015

Trying to observe but not harm another civilization can be tricky business, as Michael Michaud explains in the article below. While Star Trek gave us a model for non-interference when new cultures are encountered, even its fictional world was rife with departures from its stated principles. We can see the problem in microcosm in ongoing events in Peru, where a tribal culture coming into contact with its modern counterparts raises deeply ambiguous questions about its intentions. Michaud, author of Contact with Alien Civilizations (Copernicus, 2007), draws on his lengthy career in the U.S. Foreign Service to frame the issue of disruptive cultural encounter.

By Michael A.G. Michaud


Science fiction fans all know of the Prime Directive, usually described as avoiding contact with a less technologically advanced civilization to prevent disruption of that society’s development. In a 1968 Star Trek episode, the directive was explicitly defined: “No identification of self or mission. No interference with the social development of said planet. No references to space or the fact that there are other worlds or civilizations.” Another version of the Prime Directive forbade introducing superior knowledge or technology to a society that is incapable of handling such advantages wisely.

Commentators have pointed out many violations of the directive in the Star Trek series (and in other science fiction programs). The Enterprise crew sometimes intervened to prevent tragedy or promote positive outcomes. De facto, observance of the Prime Directive was scenario-dependent.

Star Fleet personnel sometimes used hidden observation posts or disguises to watch or interact with natives. In one episode, Captain Kirk left behind a team of sociologists to help restore an alien society to a “human form.” At the other extreme, the Prime Directive was interpreted as allowing an alien civilization to die.

Star Trek was not the first source of a prime directive. In Olaf Stapledon’s 1937 novel Star Maker, superior beings take great care to keep their existence hidden from “pre-utopian” primitives so that the less advanced beings will not lose their independence of mind.

A recent article in Science reminds us that the practical application of such a principle in a real contact situation on Earth is riddled with complications and uncertainties. The government of Peru has been debating whether or not to make formal contact with a tribal people living in the Peruvian Amazon, sighted frequently over the past year.

Peruvian policy has been to avoid contact with isolated tribes and to protect them from intruders in their reserves. In practice, this policy has been difficult to enforce. Tour operators sell tickets for “human safaris;” some tribespeople loiter on the river bank, letting themselves be seen. One anthropologist said that they were deliberately seeking to interact with people on the river.

There is a darker side to the tribe’s behavior as well. Some of the tribespeople raided a nearby village for supplies, killing two villagers.

The tribespeople’s conflicting actions have left their desires unclear. Though some have sought goods, shooting arrows at Peruvians suggests that they do not want contact.

Peru’s government wants to train local people to avoid isolated tribes unless those tribes make the first move. The plan is to increase patrols, discourage raids, and make contact with the tribespeople only if they show a willingness for conversation.

This is termed “controlled contact.” Two anthropologists proposed in a Science editorial that “a well-designed contact can be quite safe,” but another group accused them of advocating a dangerous and misleading idea.

One of the proposed explanations for our non-detection of alien intelligences is the Zoo Hypothesis, which claims that more advanced civilizations deliberately avoid making themselves known to us so as not to disturb humankind’s autonomous development. Others suggest practical reasons for such apparently altruistic behavior. As Robert Rood put it, the only thing we could offer them is new ideas. Their intervention would stop our development.

Much of this debate has been driven by guilt over the impact of Western colonial powers on other Earthly societies. Star Trek and other science fiction treatments used interactions with aliens as allegories for our own world.

Some argue that external cultural influences can be positive. What we call Western Civilization was the product of many forces that came from outside. Europe’s major religions came from the Middle East. Others see Westernization as a threat that must be resisted, notably in the Islamic world.

If we ever find ourselves in contact with an alien civilization, one of the parties is likely to be more scientifically and technologically advanced than the other. Will the more powerful intelligences observe some sort of Prime Directive? That may be more complicated than many humans believe.

——-

References

Andrew Lawler, “Mashco Piro tribe emerges from isolation in Peru,” Science 349 (14 August 2015), 679.

“Prime Directive,” Wikipedia, accessed 21 August 2015.

Michael A.G. Michaud, Contact with Alien Civilizations, Copernicus (Springer), 2007, 237.



Back to the Ice Giants?

by Paul Gilster on August 27, 2015

As data return from New Horizons continues, we can hope that an encounter with a Kuiper Belt Object is still in its future. But such an encounter will, like the flyby of Pluto/Charon itself, be a fleeting event past an object at huge distance. Our next chance to study a KBO might take place a bit closer in, and perhaps we’ll be able to study it with the same intense focus that Dawn is now giving the dwarf planet Ceres. How about an orbiter around Neptune, whose moon Triton is thought by many to be a KBO captured by the ice giant long ago?

The thought is bubbling around some parts of NASA, and was voiced explicitly by the head of the agency’s planetary science division, Jim Green, at this week’s meeting of a working group devoted to missions to the outer planets. Stephen Clark tackles the story in Uranus, Neptune in NASA’s Sights for a New Robotic Mission, which recounts the basic issues now in play. What comes across more than anything else is the timescale involved in putting together flagship missions, multi-billion dollar efforts on the order of our Cassini Saturn orbiter.


Image: Neptune’s moon Triton, as seen by Voyager 2. Credit: NASA/JPL.

Right now Europa is the more immediate priority when it comes to outer planets work, and for good reason, since NASA has already approved a probe to the Jovian moon. Here we’re talking about 2022 as the earliest possible launch date for a spacecraft that would orbit Jupiter and perform repeated close flybys of Europa, a world we need to study close-up because of the evidence for a liquid water ocean beneath its crust and the possibility of life there. Whether such a probe actually flies as early as 2022 is problematic, and so is the launch vehicle, which in a perfect confluence of events could conceivably be NASA’s powerful Space Launch System.

I say ‘perfect’ confluence because the muscular SLS, if it lives up to expectations, would offer more robust mission options not just for Europa but for all the outer planets. Before any of that happens, of course, we have to build and fly SLS. While issues like that remain fluid, the current investigations into still later missions to Uranus and Neptune might seem premature, but they have to begin now, given not just the scientific but the bureaucratic issues involved. The JPL study on Uranus and Neptune ponders missions that would probably launch in the 2030s, with the expectation of coming up with a mission design that could be used at both of the ice giants, although cheaper options will also be considered.

In any case, space missions begin with preliminary studies that launch a process feeding into the decadal surveys that set priorities for the next ten years of research. The last such survey, coming out of the National Research Council in 2011, gave particular weight to Mars sample return and the Europa probe. Getting an ice giant mission into the next decadal survey is no sure thing, given the strong case to be made for further investigation of Titan, and Clark notes that Venus is also likely to gain support.

So it’s early days indeed for Uranus and Neptune, but conceptual studies are a critical first step toward eventual approval. It’s daunting, and a bit humbling, to realize that if all goes well — if the bureaucratic gauntlet can be successfully run all the way through myriad technical studies and peer review into the budgeting phase and beyond — any mission to the ice giants will arrive half a century after Voyager 2 made our first and only encounters with these worlds. A return with orbiters would give us the opportunity to evaluate the major differences between the ice giants and the larger gas giants around which we’ve been able to conduct orbital operations.


Image: Voyager 2 image of the crescents of Neptune and Triton taken on its outbound path, about 3 days after closest approach. The picture is a composite of images taken from a distance of almost 5 million km as Voyager 2 flew southward at an angle of 48 degrees to the ecliptic after its encounter with Neptune, the final encounter of its journey through the solar system. Credit: NASA/JPL.

It’s often said that we want missions that can be flown within the lifetime of the people who designed them, which is an understandable though mistaken thought. If you were a planetary scientist, would you turn down the chance to work on a Uranus/Neptune mission design even if you were unlikely to see it arrive? Clark quotes William McKinnon (Washington University, St. Louis) on the ice giant probe concept: “An ice giant mission, presumably an orbiter, is, alas, over the horizon as far as my lifespan is concerned, so I salute those who will live to see it!”

Which is what we do, looking forward to the next generation if we can’t complete a task within our own. On the individual level we can say little about the details of our own mortality, so there is no guarantee of survival to mission completion even for relatively nearby targets. We keep building spacecraft anyway. In the case of the outer system, we take decades to design, approve, build and fly spacecraft not in the name of individual ego but of a shared humanity.

The interstellar effort places this principle in even starker terms, for in terms of the physics we understand today, missions to another star will be a matter of, at best, decades and perhaps centuries of flight time. We can hope that rather than turning away from this effort, we continue to probe it with better designs, continuing missions and a determination to keep exploring.



Sharper Views of Ceres

by Paul Gilster on August 26, 2015

The mapping of Ceres continues at a brisk pace. The Dawn spacecraft is now operating at 1470 kilometers from the surface, taking eleven days to capture and return images of the entire surface. As this JPL news release points out, each eleven day cycle consists of fourteen orbits, so we’re accumulating views of this formerly faint speck in unprecedented detail. Within the next two months, Dawn will map Ceres — all of Ceres — six times.

Have a look, for example, at this view of one of Ceres’ more intriguing surface features. Taken by Dawn’s framing camera on August 19, the image has a resolution of 140 meters per pixel.


Image: NASA’s Dawn spacecraft spotted this tall, conical mountain on Ceres from a distance of 1,470 kilometers. The mountain, located in the southern hemisphere, stands 6 kilometers high. Its perimeter is sharply defined, with almost no accumulated debris at the base of the brightly streaked slope. Credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA.

The naming of surface features also continues, the image below showing a mountain ridge at lower left that’s in the center of Urvara crater. The 163-kilometer Urvara takes its name from an Indian and Iranian deity of plants and fields.

[Image: Urvara crater on Ceres, with the mountain ridge at its center visible at lower left.]

And below we have Gaue crater at the bottom of the frame, named after a Germanic goddess of the harvest.

[Image: Gaue crater on Ceres, at the bottom of the frame.]

JPL’s Marc Rayman, chief engineer for Dawn and mission director, notes the continuing success of the mapping operation:

“Dawn is performing flawlessly in this new orbit as it conducts its ambitious exploration. The spacecraft’s view is now three times as sharp as in its previous mapping orbit, revealing exciting new details of this intriguing dwarf planet.”

How to get views as good as Dawn is currently sending without actually making the trip? Rayman points out in his latest Dawn Journal entry that it would take a telescope 217 times the diameter of Hubble to provide the same images, which makes a click on the Ceres image gallery all the more appealing. At its current altitude, Dawn’s camera sees a square 140 kilometers on a side, which is less than one percent of the almost 2.8 million square kilometer surface of the world.

Ahead for Dawn is a set of six mapping cycles (the images above come from the first of these), with changes in camera angle providing stereo views that will help us understand the topography. As it records infrared and visible spectra of the terrain, Dawn is also returning a radio signal that will help researchers probe the dwarf planet’s gravitational field, a key to the distribution of mass inside the object. At 308 million kilometers from Earth, Dawn’s radio signals take 34 minutes to make the round trip. Remember that all this is being accomplished despite the earlier failure of two of the craft’s reaction wheels, a problem in spacecraft orientation that has been surmounted by ground controllers and will not affect the outcome of the mission.
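
For readers who want to check the arithmetic, the coverage and light-time figures above follow directly from the numbers quoted in the post. A minimal back-of-envelope sketch in Python, using only those figures plus the standard value for the speed of light:

    # Quick sanity check of the Dawn figures quoted above. The 140 km footprint,
    # the ~2.8 million square kilometer surface area and the 308 million km
    # Earth-Ceres distance are the numbers given in the post, not mission data.
    C_KM_PER_S = 299_792.458                    # speed of light in km/s

    footprint_km2 = 140 * 140                   # framing camera footprint, 140 km on a side
    ceres_surface_km2 = 2.8e6                   # approximate surface area of Ceres
    earth_ceres_km = 308e6                      # Earth-Ceres distance at the time of writing

    coverage = footprint_km2 / ceres_surface_km2
    round_trip_minutes = 2 * earth_ceres_km / C_KM_PER_S / 60

    print(f"Single-frame coverage: {coverage:.2%} of the surface")     # about 0.70%
    print(f"Round-trip light time: {round_trip_minutes:.1f} minutes")  # about 34 minutes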



OSIRIS-REx: Asteroid Sample Return

by Paul Gilster on August 25, 2015

Just over a year from now, we’ll be anticipating the launch of the OSIRIS-REx mission, scheduled to rendezvous with the asteroid Bennu in 2018. This will be the first American mission to sample an asteroid, and it’s interesting to note that the materials that scientists hope to return will constitute the largest sample from space since the days of Apollo. As with recent comet studies, asteroid investigations may give us information about the origin of the Solar System, and perhaps tell us something about sources of early water and organic materials. This NASA Goddard animation offers a fine overview of the target and the overall mission.

But OSIRIS-REx is about more than the early Solar System. Recent scare stories have compelled NASA to state that a different asteroid, sometimes identified as 2012 TT5, will not impact our planet in September of this year. As Colin Johnston points out in Astronotes (the blog of Armagh Planetarium), 2012 TT5 will, on the 24th of September of this year, pass within 0.055 AU, or roughly 8.25 million kilometers of the Earth, a rather comfortable miss in anyone’s book. We’re talking about a distance more than twenty times that between the Earth and the Moon. [NOTE: I originally mis-stated the kilometer equivalent of 0.055 AU, now fixed thanks to readers who spotted the mistake in the comments].
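
To make the scale concrete, here is a minimal sketch of the conversion. The 0.055 AU miss distance is the figure from the post; the astronomical unit and the mean Earth-Moon distance are standard values:

    # Convert the quoted 0.055 AU miss distance for 2012 TT5 into kilometers
    # and into multiples of the mean Earth-Moon distance.
    AU_KM = 149_597_870.7        # astronomical unit in kilometers
    LUNAR_DISTANCE_KM = 384_400  # mean Earth-Moon distance in kilometers

    miss_km = 0.055 * AU_KM
    print(f"Miss distance: {miss_km / 1e6:.2f} million km")                # about 8.23
    print(f"In lunar distances: {miss_km / LUNAR_DISTANCE_KM:.1f} times")  # about 21.4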

In other words, this asteroid poses no problem whatsoever when it passes by in September. Let me just quote the NASA news release briefly to get this out of the way:

“There is no scientific basis — not one shred of evidence — that an asteroid or any other celestial object will impact Earth on those dates,” said Paul Chodas, manager of NASA’s Near-Earth Object office at the Jet Propulsion Laboratory in Pasadena, California.

In fact, NASA’s Near-Earth Object Observations Program says there have been no asteroids or comets observed that would impact Earth anytime in the foreseeable future. All known Potentially Hazardous Asteroids have less than a 0.01% chance of impacting Earth in the next 100 years.

The longer-term picture is that assessing asteroids that could be a problem — astronomers call these Potentially Hazardous Asteroids, or PHAs — is an ongoing effort and a wise one. We know that asteroid impacts have had a role to play in the history of our planet, and it would be folly to ignore the potential. Fortunately, the OSIRIS-REx mission plays into that effort, because Bennu is an object with a chance (about 1 in 2500) of impacting the Earth late in the next century. Getting samples here will help us understand how to mitigate any future impact.


Image: An artist’s concept of NASA’s OSIRIS-REx asteroid-sample-return spacecraft arriving at the asteroid Bennu. Credit: NASA’s Goddard Space Flight Center Conceptual Image Lab.

An extremely dark object, Bennu absorbs most incoming sunlight and radiates it away as heat. This brings the so-called Yarkovsky effect into play, gradually changing the orbit of the asteroid over time. Clearly, understanding the asteroid’s trajectory involves anticipating what this tenuous effect can do. Edward Beshore (University of Arizona), who is deputy principal investigator for OSIRIS-REx, explains the mission’s role:

“We’ll get accurate measurements of the Yarkovsky effect on Bennu by precisely tracking OSIRIS-REx as it orbits the asteroid. In addition, the instrument suite the spacecraft is carrying is perfectly suited to measure all the things that contribute to the Yarkovsky effect, such as composition, energy transport through the surface, temperature, and Bennu’s topography. If astronomers someday identify an asteroid that presents a significant impact hazard to Earth, the first step will be to gather more information about that asteroid. Fortunately, the OSIRIS-REx mission will have given us the experience and tools needed to do the job.”

When Bennu was selected as the target in 2008, there were over 7000 known Near-Earth Objects (NEOs), of which fewer than 200 had orbits with the low eccentricity and inclination best suited for the mission. At 492 meters in diameter, Bennu is large enough to offer a stable target for sample collection, with a carbon-rich composition believed to include the organic molecules, volatiles and amino acids that could be considered life’s precursors. Of the list of 7000, only 5 NEOs met all these criteria. Bennu was the final pick, and we’ll track OSIRIS-REx’s operations there with great interest. The spacecraft is to arrive at the asteroid in August of 2018.



Comet Impacts: Triggers for Life?

by Paul Gilster on August 24, 2015

With Rosetta’s continuing mission at Comet 67P/Churyumov–Gerasimenko, now post-perihelion but continuing to gather data, comets and their role in the history of the Solar System stay very much on my mind. Their role as delivery mechanisms for volatiles to an infant Earth is widely investigated, as is the idea that comet impacts may be linked to some of the great extinction events. But perhaps nothing is as provocative as the idea that comets had a role in actually starting life on our planet, with obvious implications for the likelihood of life elsewhere.


Image: This series of images of Comet 67P/Churyumov–Gerasimenko was captured by Rosetta’s OSIRIS narrow-angle camera on 12 August 2015, just a few hours before the comet reached the closest point to the Sun along its 6.5-year orbit, or perihelion. The image at left was taken at 14:07 GMT, the middle image at 17:35 GMT, and the final image at 23:31 GMT. The images were taken from a distance of about 330 km from the comet. The comet’s activity, at its peak intensity around perihelion and in the weeks that follow, is clearly visible. In particular, a significant outburst can be seen in the image captured at 17:35 GMT. Credit: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA.

And now we have the interesting news out of the recent Goldschmidt geochemistry conference in Prague, where we learned about experiments designed to duplicate early cometary impacts in a laboratory environment that replicates the Earth as it was four billion years ago. The study was performed by two Japanese scientists, Haruna Sugahara (Japan Agency for Marine-Earth Science and Technology, Yokohama) and Koichi Mimura (Nagoya University).

We learn that the result of simulated comet impact is the production of peptides up to three units long. Peptides are chains of two or more amino acids, the same kind of linkage that, at much greater length, builds proteins. Tripeptides — what Sugahara and Mimura found in their experiment — consist of three amino acids bound by two peptide bonds.

The researchers used a propellant gun to simulate the cometary impact, working with frozen mixtures of amino acids, water ice and silicates cooled to a brisk 77 K. Gas chromatography could then be used to analyze the result, showing that a significant number of the amino acids had joined into peptides after impact. According to this European Association of Geochemistry news release, the amount of peptides produced in these simulated events was roughly the same as would have been produced by terrestrial processes including lightning storms.

Because proteins are made up of polypeptides, the mechanisms that can form them are key to life. Bear in mind that NASA’s Stardust mission has already found amino acids like glycine in Comet Wild 2, while the Deep Impact collision in 2005 revealed organic particles inside the comet, adding to the notion that early comet strikes may have been a factor on Earth. Says Sugahara:

“Our experiment showed that the cold conditions of comets at the time of the impacts were key to this synthesis, as the type of peptide formed this way are more likely to evolve to longer peptides. This finding indicates that comet impacts almost certainly played an important role in delivering the seeds of life to the early Earth. It also opens the likelihood that we will have seen similar chemical evolution in other extraterrestrial bodies, starting with cometary-derived peptides.”


Image: This spectacular image of comet Tempel 1 was taken 67 seconds after it obliterated Deep Impact’s impactor spacecraft. The image was taken by the high-resolution camera on the mission’s flyby craft. Scattered light from the collision saturated the camera’s detector, creating the bright splash seen here. Linear spokes of light radiate away from the impact site, while reflected sunlight illuminates most of the comet surface. The image reveals topographic features, including ridges, scalloped edges and possibly impact craters formed long ago. Credit: NASA/JPL-Caltech/UMD.

What this work highlights is the fact that we’re still learning about the ingredients that go into comets — Sugahara and Mimura, after all, worked with only some of these constituents. Thus the value of missions like Rosetta, as we continue to plumb the depths of cometary interiors. A model in which comets contributed some necessary substances while terrestrial processes like lightning created others is still very much in the mix. Both cometary and asteroid impacts during the Late Heavy Bombardment would have delivered a wide range of substances to the surface.

“This is a new piece of work which adds significantly to the exciting field of the origin of complex molecules on the Earth,” says Mark Burchell (University of Kent), who goes on to say:

“It has long been known that ices under shock can generate and break bonds in complex organics. The detection of amino acids on comet 81P/Wild 2 by the NASA Stardust mission in the last decade, and the now regular exciting news from the Rosetta mission to comet 67P/Churyumov-Gerasimenko indicates that comets are a rich source of materials. Two key parts to this story are how complex molecules are initially generated on comets and then how they survive/evolve when the comet hits a planet like the Earth. Both of these steps can involve shocks which deliver energy to the icy body. For example, Zita Martins and colleagues recently showed how complex organic compounds can be synthesized on icy bodies via shocks. Now, building on earlier work, Dr. Sugahara and Dr. Mimura have shown how amino acids on icy bodies can be turned into short peptide sequences, another key step along the path to life.”

The paper is Sugahara and Mimura, “Peptide synthesis triggered by comet impacts: A possible method for peptide delivery to the early Earth and icy satellites,” Icarus Vol. 257 (1 September 2015), pp. 103-112 (abstract).



The Scientific Imperative of Human Spaceflight

by Paul Gilster on August 21, 2015

Interstellar distances seem to cry out for robotics and artificial intelligence. But as Nick Nielsen explains in the essay below, there is a compelling argument that our long-term goal should be human-crewed missions. We might ask whether the ‘overview effect’ that astronauts report from their experience of seeing the Earth from outside would have a counterpart on ever larger scales, including the galactic. In any case, what of ‘tacit knowledge,’ and that least understood faculty of human experience, consciousness? As always, Nielsen ranges widely in this piece, drawing on the philosophies of science and human experience to describe the value of an observing, embodied mind on the longest of all conceivable journeys. For more of Nick’s explorations, see his Grand Strategy: The View from Oregon and Grand Strategy Annex.

by J. N. Nielsen

0. A Scientific Argument for Human Exploration
1. The Human Condition in Outer Space
2. The Scientific Ellipsis of Tacit Knowledge
3. The Excommunication of the Eye
4. Human Experiences Intrinsic to Spacefaring Civilization
5. Upper and Lower Bounds of the Overview Effect
6. Observation and the Embodied Mind
7. The Knowledge Argument in Space Science
8. The Interstellar Imperative and the Human Imperative


0. A Scientific Argument for Human Exploration


In my essay The Moral Imperative of Human Spaceflight [1] I sought to construct an explicitly moral argument for human space travel. I would now like to make an explicitly scientific argument for human space travel. For those who dismiss the moral claims of human space flight, an argument from the scientific necessity of human spaceflight might possibly sound more plausible (T. E. Hulme once wrote that, “There has always been something rather unreal about ethics. In a library one’s hand glided over books on that subject instinctively.” [2]). Moreover, the claim is sometimes made that science inevitably favors robotic probes, which deliver “more bang for the buck”—a claim often coupled with a derisive attitude to human exploration as mere grandstanding or as a feel-good exercise.

While the moral goods and the scientific benefits of human spaceflight probably cannot be cleanly separated in practice, we can treat them according to the method of isolation and consider them individually and independently, as though the scientific benefits of human spaceflight might accrue regardless of the moral goods or evils that might result from human spaceflight, or as though the moral goods of human spaceflight might accrue regardless of the scientific benefit or harm that might result from human spaceflight.

1. The Human Condition in Outer Space

Why go to the trouble of bringing the human body into extraterrestrial space? The human body requires oxygen, water, food, disposal of wastes (gaseous, liquid, and solid), a particular temperature range, and probably also gravity to remain healthy for extended periods of time. A human being additionally requires sleep, diversion (entertainment), and appropriate intellectual stimulation in order to achieve optimal performance for short periods of time. [3] Having evolved in a terrestrial biosphere in which all these resources are readily available (though some are at times contested and obtained only through competition), the human body is ill-adapted to the sterility of extraterrestrial space. In order to survive in space, all of these resources must be made available. In principle, this presents no essential problem, but in practical terms this means lifting all these resources out of Earth’s gravity well, until such time as there is sufficient infrastructure off the surface of Earth to provide these resources without immediate recourse to terrestrial sources—again, something that in principle presents no problems, but which in practice is a matter of great difficulty.

Because of the practical expense and difficulty of maintaining the human body in space, especially in comparison to the relative ease of operating a machine in the sterility of space, it has been argued that space science can be done most effectively and efficiently through the use of robotic probes. [4] With the financial resources to support a human presence in space becoming more scarce after the “space race” was won by the US, robotic probes have become the accepted method of doing science beyond Earth for the past several decades. These missions have transformed and are still transforming our knowledge of cosmology. While this strategy has been highly successful, it has given us a certain kind of science, and the science that has emerged from the use of robotic probes is not the only possible science.

2. The Scientific Ellipsis of Tacit Knowledge

The science performed by robotic probes is the result of a long process of development of the scientific method and scientific knowledge. This process had its earliest beginnings in ancient Greece, and accelerated with the scientific revolution. The technological iteration of science that has emerged since the industrial revolution has become a highly refined exercise in picking the low-hanging fruit of sensory perception. This narrow specialization, like so much in the process of industrialization, has yielded disproportionate successes, but it has yielded these gains at the expense of certain blind spots. One of these blind spots is tacit knowledge.

We know more than we can say, more than we can explain, more than we understand how we know what we know. Much of what we know is tacit knowledge, i.e., knowledge that we possess but which we cannot explain or make explicit. [5] Being able to recognize the faces of those familiar to you is an instance of tacit knowledge. You immediately recognize these faces, and yet you cannot say how exactly you recognize them. We can, of course, program a computer to recognize faces, and here we can say exactly how the recognition is accomplished, but this is not how human beings recognize another human face.

Only a small fragment of our knowledge is explicitly formalized. Once we arrive at a method for formalizing knowledge (and this is one of the functions of science), the process of producing and formalizing knowledge can be rendered systematic. Once made systematic, a body of knowledge takes on a life of its own, and the growth of knowledge can be pursued often without reference to the original source of knowledge in human experience. Like the use of scientific instruments to enhance and then to far surpass human senses, the enterprise of scientific knowledge at first enhances our common sense knowledge and then surpasses it. Nevertheless, human experience remains as a potential source of knowledge not yet fully exploited by science, waiting, as it were, for the insights that will capture some heretofore unappreciated and unformalized aspect of human experience that can then, in turn, take on a life of its own that grows independently of its human source.

3. The Excommunication of the Eye

The human body is the original scientific instrument. Science began as we explored the world with our native sensory endowment. Most scientific instruments began as instruments to augment the human senses, as in the obvious case of the microscope and the telescope, which augment the capacity of the human eye. Such instruments can grow in complexity until human senses are made irrelevant, even while the conceptual framework of the science becomes less anthropocentric by purging itself of human-specific terms and concepts (sometimes called “folk concepts”).

When Benoît Mandelbrot said that, “The eye had been banished out of science. The eye had been excommunicated,” [6] this can be understood both literally and metaphorically. In seeking non-anthropocentric formulations in the sciences, the human eye had indeed been banished from the sciences; the human eye was no longer needed as an instrument in science because it had been replaced by far more sensitive instruments, and in a more radical sense the eye as a contingent relic of anthropomorphic science had to be banished, along with the centrality of the other human sensory organs to scientific knowledge.

The scientific account of sight and all that sight reveals to us of the world is the most advanced instance of science not only exhausting the capabilities of the human body as a scientific instrument, but of going far beyond the capabilities of the human body. The scientific account of vision has far surpassed what the human eye can see, and has become integrated with the fundamental physics that explains the electromagnetic spectrum, of which the eye perceives only a small fraction. (The scientific account of vision has also become integrated with the biology and physiology that explains the details of how the human vision system functions.) The eye can be banished from science because the eye has been surpassed and superseded by science; the eye must be banished from science in order for science to fulfill its promise as objective, i.e., non-anthropocentric, knowledge.


Not all human senses, however, have been exhausted or exceeded by science. Our visceral sensations that reveal gravity, acceleration, and movement are less well understood, and they connect us corporeally to a different area of fundamental physics—that which falls within the purview of general relativity—which cannot yet be reconciled to the particle physics that explains the EM spectrum. Thus within our own bodies we experience the division in physics between quantum theory and general relativity—except that with our body we experience the world as a seamless whole, something we cannot yet do with physics. This is ironic, as the contemporary scientific mainstream view is that the conscious apprehension of the world is deceptive—a mere “user illusion”—and it is only (non-anthropocentric) science that can give us the correct view of the world.

4. Human Experiences Intrinsic to Spacefaring Civilization

While the human body is the original scientific instrument, in so far as we have not yet scientifically mastered all that our senses tell us about the world, the human body remains a valuable instrument for scientific research even in a time of advanced technology. The still unexplained functions of the human body that have yet to be fully exapted for science argue for the continued relevance of human beings as observers in scientific contexts, even in an advanced spacefaring civilization.


[The famous “Blue Marble” photograph of Earth taken 07 December 1972 by the crew of Apollo 17, at a distance of about 45,000 kilometers, or 28,000 miles.]

One of the consequences of knowing more than we can say, i.e., one of the consequences of possessing tacit knowledge, is that we occasionally have experiences that affect us in unpredictable and unprecedented ways. One of these experiences that has emerged from technologies that have expanded the range of human experiences is known as the overview effect. [7] Individuals who have traveled into space and have seen our planet whole from above report an experience of great personal relevance. All of us have had an attenuated experience of the same through the “blue marble” photographs that have shown us the vision of Earth from space. This experience has not yet been fully explicated and remains at the level of a profound feeling because we do not yet have a theoretical framework sufficient to clarify and formalize the experience and thus to assimilate it to scientific knowledge. [8]

The overview effect may be understood as an experience intrinsic to spacefaring civilization, and in Cognitive Astrobiology and the Overview Effect I suggested that the overview effect as we know it, the view of our planet entire from space, is one among a class of experiences to be studied by what David Dunér has called, “astrocognitive epistemology, what we can learn through extraterrestrial explorations, interactions and encounters.” [9] Further experiences of this class are to be expected as our spacefaring capacity grows in sophistication.

Our natural sensory endowment allows for an overview of Earth, and, if we could place ourselves in space outside our galaxy, we would have a breathtaking overview of the Milky Way—an overview effect that our technology does not yet afford us. What would the rest of the sky beyond the Milky Way look like? Probably we would see the other galaxies of the local group: the Magellanic Clouds and the Helmi Stream would be obvious, and the Andromeda and Triangulum galaxies would be visible. Probably other galaxies would be visible as well, and some large scale structure unobscured by the many stars that populate any vision of the universe as viewed from within the Milky Way. We would not, however, see with our natural sensory endowment the faint galaxies captured in deep field images of the universe beyond our galaxy. [10] There are limits to the knowledge yielded by personal experience, just as there are limits to the kind of knowledge derived from robotic probes, so that these approaches to science are complementary.

5. Upper and Lower Bounds of the Overview Effect

The kind of personal knowledge represented by the overview effect, involving human experiences intrinsic to spacefaring civilization, has been limited by the limits of our sensory organs and the limits of our conscious apprehension in understanding our observations. Technological instrumentation expands the range of observation, and may someday also expand the range of conscious apprehension as well. The spacecraft we have constructed to date can be understood as scientific instruments that have enabled the observations that have made the overview effect possible. We cannot yet build the scientific instruments (spacecraft) that would enable the overview of the Milky Way described in the previous section, though we may someday be able to do so, and, if we do, we can predict what we will see. We cannot, however, predict how we will understand and interpret what we see.

With a starship as a scientific instrument, one might employ the time dilation of relativistic travel to arrive at an overview of our galaxy when the Milky Way and Andromeda were colliding. Relativistic travel could be the ultimate time lapse observation tool, allowing us to see and to personally experience scales of time far in excess of ordinary human perception, like the temporal inverse of employing high speed cameras to see phenomena that occur far too rapidly to be observed by ordinary means. Science on a far larger scale than any “big science” to date would be enabled by the use of a starship as a scientific instrument.
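
As a rough illustration of the time-dilation arithmetic this idea rests on, here is a minimal sketch. The cruise speeds are purely illustrative, the 4-billion-year figure is only the commonly cited timescale for the Milky Way-Andromeda merger, and the calculation ignores acceleration and deceleration phases entirely:

    import math

    def lorentz_gamma(beta: float) -> float:
        """Lorentz factor for a ship coasting at v = beta * c."""
        return 1.0 / math.sqrt(1.0 - beta ** 2)

    def ship_years(coordinate_years: float, beta: float) -> float:
        """Proper time aboard the ship while coordinate_years pass in the galaxy's rest frame."""
        return coordinate_years / lorentz_gamma(beta)

    # How much shipboard time elapses while roughly 4 billion years pass outside,
    # for a few illustrative coasting speeds.
    for beta in (0.9, 0.99, 0.999999):
        print(f"v = {beta}c: gamma = {lorentz_gamma(beta):8.1f}, "
              f"ship time = {ship_years(4e9, beta):.2e} years")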

We can distinguish between kinds of scientific instruments that enable novel observations. Firstly, a rough distinction can be made between observations that can be made with the unaided eye and observations only possible with instrumentation, with the former giving us direct, personal, immediate, and visceral experience, and the latter providing us with a derivative experience of varying degrees of immediacy. The distinction is rough because it is difficult to say at what point experiences cease to possess the immediate and visceral qualities of personal experience. Watching an event through binoculars seems more immediate than watching on a television screen, and watching an event live on a television screen seems more immediate than watching a recording after the fact on a screen. Thus the immediacy of a personal experience is subject to a certain degree of ambiguity between mediated and unmediated experience.

A further distinction can be made between scientific instruments that register certain readings of phenomena not accessible to human beings without instrumentation, and scientific instruments that place the human observer within a context that allows novel observations to be made. There are, then, at least two ways in which technology can be used to augment our natural sensory endowment: through improvements in the resolution of a particular sense or senses, or through placing the entire observer into a context in which it is possible to observe that which could not be previously observed. The starship as a scientific instrument belongs in the second class of technological augmentations of human experience.

As described in the previous section (and as I noted in Cognitive Astrobiology and the Overview Effect), one could never experience, with one’s native sensory endowment, the view provided by the Hubble Space Telescope’s deep field image when this scientific instrument focused on a very small portion of the sky for a long period of time, but if one’s mind and senses were augmented by the technologies of transhumanism—i.e., if the limits of our sensory organs and our consciousness were mitigated or eliminated—it might well become possible to personally observe a deep field view of the cosmos. One might be enhanced so that one could in fact focus on a miniscule portion of the sky for days at a time—something not possible with one’s native cognitive endowment—or consciousness might be directly interfaced with the instrumentation that would make this possible. In an example such as this, technologies of sensory enhancement coincide with technologies that place the observer in a context that enables novel observations.

Why should we be concerned with a personal experience of something like a deep field image of the cosmos? Given a mind shaped by the evolutionary psychology of our particular embodiment and history—and given that this is what makes us human, and is therefore something we are likely to want to retain as our identity—the most powerful (if not transformative) epistemic experiences are those that follow from the use of technology to place a human observer entire into a context that allows immediate personal experience of a phenomenon. A high speed camera is a marvelous scientific instrument, but better still (from an epistemic point of view) would be the technology to speed up the consciousness and sensory capabilities until it were possible to observe high speed phenomena with one’s own eyes.

It is not only our sensory organs that allow us to observe the world. Consciousness, too, is an aspect of the human condition that can be understood as a scientific instrument that gives us a unique and indeed uniquely intimate way of perceiving the world. Consciousness makes it possible for us to “observe” time. This is the scientific instrument least understood among all the faculties of the human condition, and the least integrated into science, even as it is pervasively present. Contemporary science cannot explain what an observer is, or how an observer observes, but it recognizes that an observer is crucial for all science, which must often correct for the perspective and the biases of the observer.

Technologies that could enhance consciousness in the way that we have used technology to enhance our senses are not yet on the horizon, partly because we have no science of consciousness that could be technologically implemented, and partly because of moral objections to the enhancement of consciousness. The efficacy of consciousness enhancing drugs is questionable at this time, while social disapproval and legal penalties stymie systematic research into the enhancement of consciousness. Since consciousness is coupled to all human observations, the only kind of scientific instrumentation that can expand the range of observation, where “observation” is understood in terms of personal experience, is the kind that places the observer within a context in which novel observations are possible. This is the kind of science that has not been possible with robotic probes.

The transformative nature of observational experiences in which the individual is present as an individual, and observes with his or her own body, is almost certainly a function of several factors, including a reflexive self-awareness of being present as well as those subtle aspects of observation not yet mastered by science. Gestalt experiences that involve the entire body are only possible when the entire body is present for the experience in question.

We can hold out hope for such transformative observational scientific experiences (i.e., further instances of the class of experiences including the overview experience) wherever physical symmetries hold. It seems unlikely that we could shrink ourselves down in size to be able to observe what an electron microscope allows us to see (since symmetries of scale do not hold), but symmetries of space and time imply the possibility of personal knowledge of here or there, now or then, even on a cosmic scale. From this it is obvious that the overview effect is a special case—a limiting case, if you will (following Einstein, who wrote that, “No fairer destiny could be allotted to any physical theory, than that it should of itself point out the way to the introduction of a more comprehensive theory, in which it lives on as a limiting case.” [11])—of these possibilities of personal observation. The overview effect known to date is, as it were, the lower bound of astrocognitive epistemology. The upper bound of astrocognitive epistemology would be approached by a deep field overview.

6. Observation and the Embodied Mind

The presence of the whole person in making an observation matters because the human mind is, as we now say, embodied; the human body is the corporeal context of the human mind—and one might say with equal justification that the human mind is the cognitive context of the human body. One of the most significant developments in the philosophy of mind over the past several decades has been a rejection of Cartesian dualism and a recognition of and engagement with the “embodied” nature of the mind.

The embodied mind is embodied in a body with an evolutionary history, and the mind no less than the body has been selected for its evolutionary fitness (which does not always entail logical rigor). For some, this is a problem. Daniel Dennett opened his book Darwin’s Dangerous Idea with this observation:

“Darwin’s theory of evolution by natural selection has always fascinated me, but over the years I have found a surprising variety of thinkers who cannot conceal their discomfort with his great idea, ranging from nagging skepticism to outright hostility. I have found not just lay people and religious thinkers, but secular philosophers, psychologists, physicists, and even biologists who would prefer, it seems, that Darwin were wrong. This book is about why Darwin’s idea is so powerful, and why it promises—not threatens—to put our most cherished visions of life on a new foundation.” [12]

The contemporary iteration of this range of Darwinian rejectionism from nagging skepticism to outright hostility is the controversy over evolutionary psychology, which is sometimes dismissed as unfalsifiable, as indeed Popper once held that the whole of Darwinism was unfalsifiable and therefore unscientific. [13] Moreover, evolutionary psychology is distasteful because it forces us to recognize some unflattering aspects of human nature. In other words, evolutionary psychology is a Copernican punishment of the pride of the human intellect. But it doesn’t, or need not, stop there.

The embodied mind cuts both ways: there are reductionist consequences for consciousness, but there are also anti-reductionist consequences for the body (and especially for the central nervous system, which is integrated both into the body and into the world). But the idea of anti-reductionism is so unfamiliar to us that we don’t have the terminology or the concepts to explain it. We can, however, begin to glimpse that evolutionary psychology has edifying as well as humbling implications—a duality that has long been recognized as a consequence of other aspects of the Copernican revolution. We are not the center of the universe, or even the center of our own solar system, but we are part of something that possesses an ineffable grandeur, and our minds are part of this also. Indeed, it is our minds that grasp their own derivation from the cosmos. [14]

7. The Knowledge Argument in Space Science

When the embodied mind is placed in a position to personally observe experiences intrinsic to spacefaring civilization, new forms of knowledge intrinsic to spacefaring civilization may result, and new forms of consciousness may emerge, shaped by the knowledge. This perfectly exemplifies Frank White’s description of the overview effect as, “…the predicted experience of astronauts and space settlers, who would have a different philosophical point of view as a result of having a different physical perspective.” (cf. note [7])

One might hold that nothing new is learned by the personal observation of such experiences. There is, appropriately, a philosophical thought experiment that addresses exactly this question. Known as “Mary’s room” or as the knowledge argument, the thought experiment was originally formulated by Frank Jackson in this way:

“Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. […] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?” [15]

An individual might master what I above called, “the fundamental physics that explains the electromagnetic spectrum… [and] the biology and physiology that explains the details of how the human vision system functions,” while never having had personal experience of them. The “Mary’s room” thought experiment is intended to force the question of whether anything is learned when personal experience of a given phenomenon is added to scientific knowledge of the same phenomenon. There is, as yet, no consensus on the question; philosophers disagree on whether Mary learns anything upon leaving Mary’s room.

As a thought experiment, “Mary’s room” is intended not to give us a definitive answer to a circumstance that is never likely to occur in fact, but to sharpen our intuitions and refine our formulations. The same could be done for the overview effect. We could easily formulate a parallel circumstance to the knowledge argument that addresses the class of astrocognitive epistemological experiences to which the overview effect belongs, according to which some individual has studied scientifically everything there is to know about space travel and human perception in extraterrestrial space, and then the same individual travels into space and experiences this personally. Does the individual learn anything from the personal experience of space travel?

Astronauts and cosmonauts themselves, who have personally experienced this transition from scientific knowledge of what they expect to see, to the actual first person experience, have testified to the impact of the personal experience. Alan B. Shepard, Jr. said, “…no one could be briefed well enough to be completely prepared for the astonishing view that I got.” [16] Robert Cenker said, “Of all the people I’ve spoken to about the experience of space, only those closest to me can begin to understand. My wife knows what I mean by the tone of my voice. My children know what I mean by the look in my eye. My parents know what I mean because they watched me grow up with it. Unless you actually go and experience it yourself you will never really know.” [17]

If our scientific knowledge of space travel is incomplete, then it is difficult to avoid the conclusion that one learns something from personal experience of a phenomenon incompletely described by science. Indeed, there is an interesting interplay between the problem of tacit knowledge and the knowledge argument. The knowledge argument might be revised so that the situation it describes only holds for mature sciences from which all tacit, folk, and anthropocentric concepts have been purged. [18] Therefore a poorly understood experience such as the overview effect, which has not been fully assimilated to science, might be epistemically augmented by personal experience, but if we possessed an exhaustive account in the context of a mature science, the epistemic augmentation of personal experience would disappear.

The pursuit of mature and definitive formulations of science is as unending as the universe. We might arrive at the mature formulation of a given science, but in so doing new questions will be posed, and we will want to push the frontiers of knowledge further outward, so that there will always be an unknown epistemic margin where personal experiences may epistemically augment our account of the world where no science as yet exists to fully explain what we see. For example, a galactic overview such as described earlier would take place in the context of the continuing expansion of scientific knowledge, and seeing this for ourselves may still contribute to scientific knowledge in unexpected and unanticipated ways. Much is likely to change in ourselves and our civilization by the time we can achieve a galactic overview, so that we cannot predict what we will learn, or even what our science will be like at that time; we can only postulate the possibility of a particular kind of experience, anticipating but not knowing its content.

8. The Interstellar Imperative and the Human Imperative

In a previous Centauri Dreams post, The Interstellar Imperative, I argued that the starship will become the ultimate scientific instrument, “…constituting both a demanding engineering challenge to build and offering the possibility of greatly expanding the scope of scientific knowledge by studying up close the stars and worlds of our universe, as well as any life and civilization these worlds may comprise.” Our future starships could take the form of robotic probes [19], but given what I have written here about the human body as a scientific instrument, and the as-yet-unrealized scientific potential of human experience, the starship as a scientific instrument could only be fully exploited for the purposes of scientific knowledge when coupled with a human presence. Putting human beings in starships and sending them to other worlds is an indispensable condition of the continued advancement of science and scientific civilization.

– – – – – – – – – – – –

Notes

[1] This essay was the basis of my presentation at the 2011 100YSS symposium.

[2] T. E. Hulme, Speculations, New York: Harcourt Brace and Company, 1924, p. 257. And, again, in Hulme’s Further Speculations: “It must be very difficult for the writers on ethics (who seem to be more happily endowed than most of us) to realise how excessively difficult it is for the ordinary modern to realise that there is any real subject ‘Ethics’ which can be at all compared with ‘Logic’ or even with ‘Aesthetic.’ It seems almost impossible for us to look on it as anything objective; everything seems to us arbitrary and human, and we should at a certain age no more think of reading a book on ethics than we should reading one on manners or astrology. There may even seem something ridiculous about the word ‘Virtue’.” (p. 203)

[3] These lists of bodily and intellectual needs are not intended to be exhaustive, but are presented here only to give a sense of the challenges of supporting human life in space.

[4] As a survey of some of these robotic probes, cf., e.g., Great robotic missions to explore space by Pallab Ghosh.

[5] The term “tacit knowledge” is due to Michael Polanyi, whose book The Tacit Dimension is an exposition of tacit knowledge. The idea is developed throughout Polanyi’s works.

[6] From an interview with Benoît Mandelbrot in the NOVA documentary Fractals: Hunting the Hidden Dimension.

[7] The “overview effect” is a term due to Frank White, describing the experience of astronauts and cosmonauts who have seen the Earth whole from orbit or beyond, given exposition in his book The Overview Effect: Space Exploration and Human Evolution. White summarizes the overview effect as, “…the predicted experience of astronauts and space settlers, who would have a different philosophical point of view as a result of having a different physical perspective.” (Frank White, The Overview Effect, Boston: Houghton Mifflin Company, 1987, p. 4) My posts on the overview effect include The Epistemic Overview Effect, The Overview Effect as Perspective Taking, Hegel and the Overview Effect, The Overview Effect in Formal Thought, Our Knowledge of the Internal World, Personal Experience and Empirical Knowledge, and Cognitive Astrobiology and the Overview Effect.

[8] On the difference between profundity and clarity cf. The Study of Civilization as Rigorous Science and Addendum on the Study of Civilization as Rigorous Science.

[9] David Dunér, “Astrocognition: Prolegomena to a future cognitive history of exploration,” in Humans in Outer Space – Interdisciplinary Perspectives, edited by Ulrike Landfester, Nina-Louisa Remuss, Kai-Uwe Schrogl, and Jean-Claude Worms, Springer, 2011, p. 119. I prefer Pauli Laine’s term “cognitive astrobiology” to Dunér’s “astrocognition,” though Dunér’s analysis of the forms of astrocognition is a helpful framework.

[10] There have been three such “deep field” images from the Hubble Space Telescope, of increasing depth: the Hubble Deep Field (HDF), the Hubble Ultra Deep Field (HUDF), and the Hubble eXtreme Deep Field (XDF). There is also the Subaru Deep Field (SDF) image.

[11] Albert Einstein, Relativity: The Special and General Theory, New York: Plume, 2006, pp. 98-99.

[12] Daniel C. Dennett, Darwin’s Dangerous Idea: Evolution and the Meanings of Life, Penguin Books, 1995, Preface, p. 11.

[13] Popper changed his opinion on the scientificity of Darwinism over the course of his career. Popper, famous for his definition of scientificity in terms of falsifiability, wrote in his Unended Quest: An Intellectual Autobiography (1974, section 37), “I have come to the conclusion that Darwinism is not a testable scientific theory, but a metaphysical research programme—a possible framework for testable scientific theories.” Not many years later, in “Natural Selection and the Emergence of Mind,” a lecture delivered at Darwin College, Cambridge, on 8 November 1977 (also Chapter VI in Evolutionary Epistemology, Rationality, and the Sociology of Knowledge, by Gerard Radnitzky, William Warren Bartley, and Karl Raimund Popper), Popper said, “…I have changed my mind about the testability and the logical status of the theory of natural selection; and I am glad to have an opportunity to make a recantation. My recantation may, I hope, contribute a little to the understanding of the status of natural selection.”

[14] Evolutionary psychology places the human mind in the context of cognitive astrobiology, because evolutionary psychology itself must eventually be placed in the context of astrobiology, which is the more comprehensive discipline. The human mind placed in the context of cognitive astrobiology, i.e., the embodiment of mind in nature and history, means that, in Carl Sagan’s terms, our minds are star stuff too: “…if you are bothered by the disturbing vision that evolutionary psychology paints of the human mind, take heart, because it also implies this edifying corollary, that the mind is as much of the stars as it is of the earth, as much of the universe at large as of nature, red in tooth and claw.” (Cf. also The Mind as Star Stuff) In this astrocognitive context, the human being has a crucial role to play as an observer in science, and especially in those sciences that will emerge from the comparative study of other worlds, other life, and hopefully also other civilizations.

[15] Frank Jackson, “Epiphenomenal Qualia,” 1982, Philosophical Quarterly 32: 127–136. I have previously written about the “Mary’s room” thought experiment in Computational Omniscience and Colonia del Sacramento and the Knowledge Argument.

[16] Quoted in Frank White, The Overview Effect: Space Exploration and Human Evolution, Boston: Houghton Mifflin Company, 1987, p. 197.

[17] Quoted in Kevin W. Kelley, editor, The Home Planet, Reading, et al.: Addison-Wesley, 1988, p. 142. These astronaut and cosmonaut experiences might be interpreted as experiments that verify the epistemic role of personal experience explored by the knowledge argument.

[18] Cf. Folk Concepts and Scientific Progress

[19] Interstellar probes without a live crew are sometimes referred to as “Bracewell probes,” following the work of Ronald Bracewell, whose papers “Communications from Superior Galactic Communities” and “Interstellar Probes” proposed interstellar probes as a medium for communication among civilizations, as an alternative to the SETI paradigm of radio or optical communication. Cf. Bracewell, R. N. (1960), “Communications from Superior Galactic Communities,” Nature 186 (4726): 670–671, reprinted in A. G. Cameron, ed., Interstellar Communication, New York: W. A. Benjamin, 1963, pp. 243–248, and Bracewell, R. N., “Interstellar Probes,” in A.G.W. Cameron and Cyril Ponnamperuma, eds., Interstellar Communication: Scientific Perspectives, Boston: Houghton Mifflin, 1974, pp. 102-117.

Building the Gas Giants

by Paul Gilster on August 20, 2015

Yesterday’s article on supernovae ‘triggers’ for star and planet formation shed some light on how a shock wave moving through a cloud of gas and dust could not only cause the collapse and contraction of a proto-star but also impart angular momentum to an infant solar system. Today’s essay focuses on a somewhat later phase of system formation. Specifically, how is it that gas giants like Jupiter and Saturn can form in the first place, given core accretion models that have ‘trigger’ problems of their own?

Here’s the issue: To create a gas giant, you need plenty of hydrogen and helium, material in which a solar nebula would be rich. But we’re learning a lot about how planetary systems evolve, and the emerging reality is that the gas disks from which planets are made usually last a comparatively brief time, somewhere on the order of one to ten million years. That would imply that the gas giants had to accumulate their atmospheres within this timeframe.

But how? Jupiter’s atmosphere is massive enough that it requires a large solid core. Forming first, that ice and rock object, itself of planetary size, then causes the gravitational inflow of gas and dust. So we’re asking that a core perhaps ten times the size of Earth form in no more than than a few million years. Hal Levison (SwRI), lead author of a new study on this issue, calls this “the timescale problem,” as you can see in this news release from his parent institution. It’s a problem, says Levison, that “has been sticking in our throats for some time.”

Consider rocky worlds like the Earth: the current thinking is that our planet needed at least 30 million years to form, and that’s a bare minimum — the number could reach 100 million years. During this period, small objects gradually interact, banging into each other to create larger rocks, and so on in a process that leads to planetesimals and ultimately to terrestrial-class worlds. So how do we get the gas giant’s core to form quickly enough to enable gas accumulation sufficient to produce the observed thick atmospheres of Jupiter and Saturn?

Image: This artist’s concept of a young star system shows gas giants forming first, while the gas nebula is present. Southwest Research Institute scientists used computer simulations to nail down how Jupiter and Saturn evolved in our own solar system. These new calculations show that the cores of gas giants likely formed by gradually accumulating a population of planetary pebbles – icy objects about 30 centimeters in diameter. Credit: NASA/JPL-Caltech.

The Levison paper, co-authored with Martin Duncan (Queen’s University, Ontario) and Katherine Kretke (SwRI), makes the case that a 10 million year timeframe for a gas giant’s core is sufficient if the infant planet accumulates small planetary ‘pebbles,’ here explained as objects of ice and dust about 30 centimeters in diameter. Objects in this size range quickly spiral onto a protoplanet when sufficient gas is present. The rapidly accumulating gas acts as the snare that gathers the core materials, which are concentrated by drag and then collapse gravitationally.

Levison and colleagues believe that this ‘aerodynamic drag and collapse’ model can produce cores of the needed size in timeframes as short as a few thousand years. Moreover, properly tuned, the method produces a Solar System not so different from what we see. That ‘tuning’ involves assuming pebble formation timed just right so that gravitational interactions among the growing planetesimals cause the larger of them to scatter the smaller out of the disk, slowing down their further growth. Ironically, we need pebbles that aren’t in too much of a hurry:

“If the pebbles form too quickly, pebble accretion would lead to the formation of hundreds of icy Earths,” said Kretke. “The growing cores need some time to fling their competitors away from the pebbles, effectively starving them. This is why only a couple of gas giants formed.”

We wind up with a system that forms between one and four gas giants some 5 to 15 AU from the Sun, which isn’t a bad match at all for our own Solar System, with its two gas giants and the ice giants Uranus and Neptune. This is a computer simulation that, unlike numerous earlier attempts at modeling core accretion, does produce gas giants in the timeframe and configuration needed. The formation of gas giants early in a system’s history, then, remains consistent with the basic model, with a short period of core formation no longer a deal breaker.

The paper is Levison, Kretke and Duncan, “Growing the gas-giant planets by the gradual accumulation of pebbles,” Nature 524 (20 August 2015) pp. 322-324 (abstract).

A Supernova Trigger for Our Solar System

by Paul Gilster on August 19, 2015

The interactions between supernovae and molecular clouds may have a lot to tell us about the formation of our own Solar System. Alan Boss and Sandra Keiser (Carnegie Institution for Science) have been exploring the possibility that our system was born as a result of a supernova ‘trigger.’ Their new paper follows up on work the duo have performed in recent years on how a cloud of dust and gas, when struck by a shock wave from an exploding star, could collapse and contract into a proto-star. The surrounding gas and dust disk would eventually give birth to the planets, although just how the latter occurs gets interesting, as the latest from Boss and Keiser reveals.

Image: An artist’s illustration of a protoplanetary disk. Credit: NASA/JPL-Caltech/T. Pyle (SSC).

The new work extends Boss and Keiser’s modeling of such events. But before getting into that, let’s look at what we already know from observations of far more distant celestial objects. Working at radio and submillimeter wavelengths, various researchers have examined a Type II supernova remnant called W44 in light of how it is interacting with the W44 giant molecular cloud. We think of molecular clouds as venues for star formation, places where molecules like molecular hydrogen can form.

Supernova remnants are bounded by an expanding shock wave behind which heated ejecta from the explosion are found, with the shock wave capable of heating up plasma to millions of Kelvin. What we see in the W44 remnant is that its shock wave, pushing into the W44 molecular cloud, is producing clumps of compressed molecular gas. The idea of a triggered collapse of dense molecular clouds and the injection of material from the supernova into a collapsing disk is supported by previous work Boss and Keiser performed with 2-D and 3-D modeling.

Image: Shockwave interactions with the supernova remnant W44. Credit: Keio University/NAOJ.

What we have in the current paper is a new look at the key issue in this collapse — the distribution of short-lived radioisotopes formed during the supernova explosion and pushed into the region of a cloud that would eventually become our Solar System. Boss and Keiser’s models show that a pressure wave from a supernova blast striking a dense gas cloud would form indentations in the surface of the cloud, where the radioisotopes would be injected into the collapsing gas. These ‘finger-like’ indentations are described in the paper as ‘R-T fingers,’ after English physicists Lord Rayleigh and Sir Geoffrey Taylor, who explored instabilities at the interface where two fluids of different densities meet.

The Boss/Keiser models can account for the isotopes we find in early meteorites, but the researchers believe they also point to an explanation for the spin of the Solar System. Angular momentum imparted by the shock indentations, they argue, would allow the disk of gas and dust to form around the Sun rather than being pushed directly into it. In their models, without the necessary spin produced by the shock front, the disk materials would simply disappear into the Sun. Thus shock-induced spin enables planet formation, a result Boss calls ‘a complete surprise to me.’

Image: These images show the central plane of a rotating disk orbiting a newly formed protostar (dark dot) formed in a three-dimensional model of the shock-triggered collapse of a molecular cloud of gas and dust. Density is shown on the left, while the x velocity plot on the right shows how the shock (outer edge) has injected fingers with motions that are responsible for producing the spin of the disk around the central protostar. Credit: Alan Boss.

From the paper:

Remarkably, these new models have also introduced a new feature of the shock wave triggering and injection mechanism: the R-T fingers responsible for SLRI [short-lived radioisotope] injection can concomitantly result in the injection of enough momentum to largely determine the direction of the resulting disks’ spin axis orientations. In such cases, the R-T fingers may have been responsible not only for the acquisition of the SLRIs inferred to have been present in the most primitive meteorites, but also for the very fact that a rotating protostellar disk was formed, a disk that eventually led to the formation of our planetary system.

Thus we get an idea of what may have caused the collapse that led to the protostar that would become our own Sun, and by extension a mechanism for star and system formation that would have occurred widely in the galaxy. The R-T ‘fingers’ impart angular momentum that makes disk formation around the infant star at the heart of the cloud possible.

The paper is Boss and Keiser, “Triggering Collapse of the Presolar Dense Cloud Core and Injecting Short-Lived Radioisotopes with a Shock Wave. IV. Effects of Rotational Axis Orientation,” accepted by The Astrophysical Journal (preprint).

Dione: The Last Close Flyby

by Paul Gilster on August 18, 2015

We’re in the immediate aftermath of Cassini’s August 17 flyby of Saturn’s moon Dione. The raw image below gives us not just Dione but a bit of Saturn’s rings in the distance. As always, we’ll have better images than these first, unprocessed arrivals, but let’s use this new one to underscore the fact that this is Cassini’s last close flyby of Dione. I’m always startled to realize that outside the space community, the public is largely unaware that Cassini’s days are numbered. It’s as if these images, once they began, would simply go on forever.

The reality is that processes are already in place for Cassini’s final act. The ‘Grand Finale’ will be the spacecraft’s close pass by Titan (within 4000 kilometers of the cloud tops), followed by its fall into Saturn’s atmosphere on September 15, 2017, a day that will surely be laden with a great deal of introspection. Bear in mind that not long after Cassini’s demise, we’ll also see the end of the Juno mission at Jupiter. We may still have our two Voyagers out there pushing toward the interstellar deep and New Horizons as well, but once we lose Juno, we’ll undergo a hiatus in which there will be no missions on the way to the giant planets.

Image: A Cassini image taken on August 17, 2015. The camera was pointing toward Dione, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated. A validated/calibrated image will be archived with the NASA Planetary Data System in 2016. Credit: NASA/JPL-Caltech/Space Science Institute.

If you think back to the Voyager days, we were working ahead (on missions like Galileo) even as we tracked the Voyagers ever deeper into the Solar System. Cassini comes out of the Galileo days (it was launched in October of 1997, while Galileo was launched in October eight years earlier), a reminder of the parallel tracks that deep space exploration demands. Given the time to develop a mission and actually get it launched, we should always be a few steps ahead of ourselves. But as of 2018, we’re facing a long slog until the European Space Agency’s JUICE mission (Jupiter Icy Moon Explorer) and whatever NASA comes up with for Europa.

The Planetary Society’s Casey Dreier makes the case that the gap we’re facing will last at least eight years, between Cassini’s demise and any potential Europa mission from NASA. Eight years sounds like a long time, but it’s a best-case scenario, one possible only if we have a Europa mission ready to fly by 2022 and a fully functional SLS (Space Launch System) capable of launching it on a three-year trajectory.

We’ve gotten so used to Cassini’s stunning images that it’s hard to imagine they’re going to stop, but perhaps that eight year gap or whatever it turns out to be will remind us of what we’ve had and how hard we have to work to make such missions happen again. All of which is why I’m paying particular attention to these final targeted flybys of places like Dione. The recent flyby should offer helpful comparisons between it and Saturn’s other icy moons. But it will also give us a glimpse of the moon’s north pole, and at a resolution of only a few meters. Cassini’s Composite Infrared Spectrometer will be mapping areas that show unusual thermal anomalies.

Image: Dione as seen by Cassini on an earlier pass, on April 11, 2015. This view looks toward the trailing hemisphere of Dione. North on Dione is up. The image was taken in visible light with the Cassini spacecraft narrow-angle camera. The view was acquired at a distance of approximately 110,000 kilometers from Dione. Image scale is 660 meters per pixel. Credit: NASA/JPL-Caltech/Space Science Institute.

Dione turns out to be an interesting place, quite different from Enceladus. Notice in the image the so-called chasmata, producing what the Voyager team once labelled ‘wispy terrain.’ Features that had once been assumed to be surface deposits of frost were later revealed, by Cassini, to be icy cliffs standing out brightly against the backdrop of surface fractures. We’re probably looking at the result of tidal stress and, as with Enceladus, we’re being given a graphic lesson in how such forces can shape the evolution of a moon both externally and internally.

Giovanni Cassini would discover four moons around Saturn — Tethys, Dione, Iapetus and Rhea — although naming the satellites wouldn’t occur until John Herschel suggested in the mid-19th Century that the four be named after the Titans of classical myth. It’s interesting to note that Dione has two co-orbital ‘moons’ of its own — Helene and Polydeuces — located at Dione’s Lagrangian points 60 degrees ahead and behind Dione. A subsurface ocean is a possibility here, based on observations and modeling of mountains like Janiculum Dorsa, and given inevitable tidal heating, it could explain some of Dione’s surface features.

Image: A view of Saturn’s moon Dione captured by NASA’s Cassini spacecraft during a close flyby on June 16, 2015. The diagonal line near upper left is the rings of Saturn, in the distance. Credit: NASA/JPL-Caltech/Space Science Institute.

“Dione has been an enigma, giving hints of active geologic processes, including a transient atmosphere and evidence of ice volcanoes. But we’ve never found the smoking gun. The fifth flyby of Dione will be our last chance,” said Bonnie Buratti, a Cassini science team member at NASA’s Jet Propulsion Laboratory in Pasadena, California.

Last chance indeed, at least for a time. The close moon flybys of late 2015 will be followed by Cassini’s departure from Saturn’s equatorial plane to begin the set-up for the mission’s final year, in which the spacecraft will move repeatedly between Saturn and the ring system. Let’s hope that a new wave of exploration will get us back to Saturn’s moons so that images like these don’t take on the almost antique air now settling in over Apollo landing site photos, scenes that taunt us with where we have been and make us ask where we are truly bound.

A Science Critique of Aurora by Kim Stanley Robinson

by Paul Gilster on August 14, 2015

I haven’t yet read Kim Stanley Robinson’s new novel Aurora (Orbit, 2015), though it’s waiting on my Kindle. And a good thing, too, for this tale of a human expedition to Tau Ceti is turning out to be one of the most controversial books of the summer. The issues it explores are a touchstone for the widening debate about our future among the stars, if indeed there is to be one. Stephen Baxter does such a good job of introducing the issues and the authors of the essay below that I’ll leave that to him, but I do want to note that Baxter’s novel Ultima is just out (Roc, 2015) taking the interstellar tale begun in 2014’s Proxima in expansive new directions.

by Stephen Baxter, James Benford and Joseph Miller

‘Ever since they put us in this can, it’s been a case of get everything right or else everyone is dead . . .’ (Aurora Chapter 2)

This essay is a follow-up to a review of Kim Stanley Robinson’s new novel Aurora by Gregory Benford, which critically examines the case that Robinson makes in the book that ‘no starship voyage will work’ (Chapter 7) – at least if crewed by humans. This is a strong statement, and even if the case is made in fictional form it needs to be backed up by a powerful and consistent argument. Greg criticises Robinson’s book mostly on sociological, political and ethical grounds.

Here, to complement Greg’s analysis, we take a critical look at the science in the book. Is Robinson’s ship a plausible habitat for a centuries-long voyage? Could the propulsion systems function as described? Is the planetary threat encountered by the would-be colonists biologically plausible?

This entry is mainly the initiative of Jim Benford, well known to readers of this blog; Jim is President of Microwave Sciences based in Lafayette, California, and his interests include electromagnetic power beaming for space propulsion. Also contributing has been Joseph Miller, biologist and neuroscientist, previously of the University of Southern California Keck School of Medicine, now at the American University of the Caribbean School of Medicine, with a long-time interest in extraterrestrial life. As for myself, I’m a science fiction writer, part-time contributor to such technical projects as the BIS-initiated Project Icarus, and author of some interstellar fiction myself, such as Ark (2009). And as the full-time writer I’m the one who got the privilege of writing up our conversations. Thanks, guys!

I should start by saying that Stan Robinson has been on my own (very short) list of must-read writers for the last twenty-five years at least, and that Aurora is a key book, as with all Robinson’s work deeply researched and deeply felt. If you haven’t bought the book yet, do so now.

Basics

Aurora is a tale of a multigeneration starship mission to Tau Ceti. (Note that Robinson’s starship is unnamed; here I’ve referred to it as ‘the Ship’.) The Ship reaches its target, but when it proves impossible to colonise the worlds there, a remnant of the crew struggles back to Earth.

This review is an analysis of technical and science aspects of this mission, based solely on evidence in the novel’s text. Of course any errors or misreadings of Robinson’s text are our sole responsibility.

We’ll be making comparisons with two classic studies. The BIS’s Project Daedalus (1978) was a study of an uncrewed interstellar probe which used the same fusion-rocket technology as did Robinson’s Ship in its deceleration mode. Daedalus had initial mass 50,000t (tonnes) of fuel (30kt deuterium (D), and 20kt helium-3 (He3)), the dry mass of its two stages amounted to 2700t, the payload was 450t, and the exhaust velocity was about 3.3%c, with cruise velocity 0.12c (c being the velocity of light). The Daedalus propulsion system was used only for acceleration; it couldn’t decelerate, and so was a flyby mission at its target star. In Aurora the Ship uses its fusion rocket only to decelerate.

Meanwhile the ‘Stanford Torus’ space habitat design (Johnson, 1976) was a product of a 1975 workshop involving NASA Ames and Stanford University. The final design was a torus 1790m across with the habitable tube 130m in diameter. Of a total surface area of about 2.3km2, 10,000 people would inhabit a usable surface area of about 0.7 km2. The station, located at L5, would be built of lunar resources. The total mass would be about 10 million t, of which 9.9 million t would be a radiation shield of lunar slag around the habitable ring in a layer 1.7m thick, leaving 0.1 million t as structural mass. The relevance to Aurora is that the Ship looks like two Stanford Toruses attached to a central spine.

Let’s begin by looking at the Ship’s construction and inhabitants.

The Ship

Construction

Most of what we learn about the Ship’s structure is given in Chapter 2. The Ship consists of a central spine 10km long, around which 2 rings of habitable ‘biomes’ spin, torus-like. Each ring consists of 12 cylindrical biomes, each 4km long, 1km diameter. There are also spokes and inner rings. The rings rotate around the spine to give a centrifugal gravity of 0.83g.

The 24 biomes contain samples of ecospheres from 12 climatic zones: Old World versions in one ring, New World in the other. Each biome has a ‘roof’ with a sunline, which models the required sunlight and seasonality, and a ‘floor’ on the side away from the spine. The liveable area in each cylinder is given as about 4 km2, which is about a third of the cylinder’s inner surface area: 96km2 total. In each biome there are stores under the ‘floor’, including fuel; we’re told this is used as a radiation shield during the cruise.

The total habitable space is allocated as 70% agricultural; 5% urban / residential; 13% water; 13% protected wilderness. The wilderness areas are meant to be complete ecologies.

The crew numbers given appear contradictory; in some places Robinson states there are about 2100 in total, but elsewhere a figure of 300 people per biome is given, which would total 7200. The crew numbers do vary through the centuries-long mission, with births and deaths.

How reasonable are these numbers, given the mission’s objectives? Could the Ship support that many people? Are they enough to found a human population at the target? And is there room for true wilderness?

Closed Ecologies

We don’t yet know how to maintain closed ecologies for long periods. The Ship’s biomes would suffer from small-closed-loop-ecology buffering problems, as Robinson illustrates very well in the text; we see the crew having to micro-manage the biospheres, and dealing with such problems as the depletion of key trace elements through unexpected chemical reactions. In some ways this may prove to be an even more daunting obstacle to interstellar exploration than propulsion systems.

Human population

If there are 300 people per biome, and given a total of 96km2 of habitable area, that’s a population density of 75/km2. Compare this with Earth’s global average of 13/km2; crowded southern England is 667/km2. In terms of the ability of the agricultural space (70% of the total) to support the crew, that seems reasonable to us.

But if only 5% of the space is used for residential purposes, the effective living density is high, at 1500 per km2 – comparable to densely populated urban areas such as Hong Kong. Such densities would seem problematic on a long-duration mission, though of course the crew do have access to the other 95% of the habitable areas; people hike the wildernesses.
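
As a rough check on these density figures, here is a minimal sketch in Python, using only the numbers quoted above and taking the larger, 300-per-biome crew count:

    # Rough density check for the Ship's habitable areas (figures from the text).
    biomes = 24
    people_per_biome = 300           # the larger of the two crew figures Robinson gives
    habitable_km2 = 96.0             # ~4 km2 of liveable floor in each of 24 biomes
    residential_fraction = 0.05      # 5% of habitable space is urban / residential

    crew = biomes * people_per_biome                         # 7200 people
    overall_density = crew / habitable_km2                   # ~75 people per km2
    residential_density = crew / (habitable_km2 * residential_fraction)   # ~1500 per km2

    print(f"crew: {crew}")
    print(f"overall density: {overall_density:.0f} per km2")
    print(f"residential density: {residential_density:.0f} per km2")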

This group is of course meant to be sufficient to found a new human breeding population on a virgin world. What is the minimal population size to maintain the species without an evolutionary bottleneck? Something like 1000 is a good guess. Robinson’s original population was at least twice that. If that population size was maintained, genetic diversity would plausibly be sufficient.

‘Wilderness’

We’re told (Chapter 2) that each biome has about 4km2 of living space and that 13% of that space is given over to ‘wilderness’, that is 0.52 km2 per biome. The ecologies can include apex predators. In a biome called Labrador, for instance, ‘In the flanking hills sometimes a wolf pack was glimpsed, or bears’ (chapter 2).

This idea is explored in more depth in Robinson’s 2312, in which mobile habitats called ‘terraria’, hollowed-out asteroids, are used as reserves for species threatened on a post-climate-change Earth. But even these terraria are not very large in terms of the space needed by wildlife in nature. A wolf pack, consisting of about 10 animals, may have a territory of 35 km2 (Jędrzejewski et al, 2007). A 2312 terrarium with an inner surface area of about 160 km2 would have room for only about 4 packs, or about 40 individual animals, a small population in terms of genetic diversity.

It seems clear that the much smaller biomes of the Ship, though large in engineering terms, would be far too small to be able to host meaningful numbers of many animal species in anything resembling a natural population distribution. A wilderness needs a lot of room.

Mass

We are given a mass breakdown for the Ship as a whole. We’re told that during the Ship’s cruise phase, when it is fully laden with fuel, the total mass is 76% fuel, 10% each biome ring, and 4% the spine.

We aren’t told the Ship’s total mass, however, and to study the propulsion system’s performance we’ll need at least a guesstimate. This is derived by a comparison with the Stanford Torus design.

Each torus-like biome ring consists of 12 pods of length 4km, diameter 1km. So the surface area of 1 pod is 14.1 km2, including end caps. And the surface area of one biome ring is 170 km2 (which is much larger than the Stanford Torus).

The Ship’s biomes seem to lack a Stanford-like cloak of radiation-shielding material. Robinson says that ‘fuel, water and other supplies’ are stored under the biome floors to provide shielding; the ceilings are shielded by the presence of the spine. Elsewhere Robinson says that during the voyage, the fuel is ‘deployed as cladding around the toruses and the spine’ (Chapter 2)

Assume, then, that a Ship biome ring has the same structural properties as the Stanford torus, and that most of its mass is in the hull. A guesstimate for a single ring’s mass (without the fuel cladding) can then be obtained by multiplying Stanford’s 0.1 million ton structural mass (without shielding) by a factor to allow for the Ship ring’s larger surface area. The result is (0.1 * 170 / 2.3 =) 7.4 million tons per biome ring. We know this is 10% of the Ship’s total mass, which therefore breaks down as

76% fuel = 56.2 million tons
20% biome rings = 14.8 million tons
4% spine = 3 million tons
Total = 74 million tons.
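
For readers who want to follow the arithmetic, here is a minimal Python sketch of the scaling argument; every input is a figure quoted above, and rounding explains the small differences from the breakdown:

    import math

    # Scale the Stanford Torus structural mass up to one of the Ship's biome rings.
    stanford_structure_mt = 0.1     # million tonnes of structure, without shielding
    stanford_area_km2 = 2.3         # total surface area of the Stanford torus

    pod_length_km, pod_diameter_km = 4.0, 1.0
    pod_area = (math.pi * pod_diameter_km * pod_length_km
                + 2 * math.pi * (pod_diameter_km / 2) ** 2)    # ~14.1 km2 with end caps
    ring_area = 12 * pod_area                                  # ~170 km2 per biome ring

    ring_mass_mt = stanford_structure_mt * ring_area / stanford_area_km2   # ~7.4 Mt
    total_mt = 2 * ring_mass_mt / 0.20      # two rings make up 20% of the Ship's mass
    fuel_mt, spine_mt = 0.76 * total_mt, 0.04 * total_mt

    print(f"one biome ring: {ring_mass_mt:.1f} million tonnes")
    print(f"total: {total_mt:.0f} Mt, fuel: {fuel_mt:.1f} Mt, spine: {spine_mt:.1f} Mt")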

These numbers shouldn’t be taken seriously, of course, except as an order of magnitude guide. Maybe they seem large – but remember that Daedalus needed 50,000t of fuel to send a 450t payload on a flyby mission to the stars, a payload comparable to the completed mass of the ISS. By comparison the Ship will be hauling two habitat rings each fifteen kilometres across. This is not a modest design.

Notice that if the Ship’s propulsion follows the Daedalus ratio, the fuel would consist of 60% D = 33.7m tons, 40% He3 = 22.5m tons.

And notice that since this fuel is used for deceleration only, the acceleration systems need to push all this mass up to ten per cent of lightspeed. These numbers do illustrate the monstrous challenges of interstellar travel, with a need to send very large masses to very large velocities, and decelerate them again.

On that note, let’s consider the propulsion systems.

Propulsion

Mission Profile

The Ship is a generation starship. Launched in 2545, it travels 11.8ly (light years) to Tau Ceti at cruise 0.1c (chapter 2). According to the text the journey consists of a number of phases.

  • The Ship is accelerated to the cruise speed of 0.1c by means of an electromagnetic ‘scissors’ slingshot at Titan, imposing a brief acceleration of about 10g, and then a laser impulse for 60 years.
  • The Ship decelerates at the Tau Ceti system using its on-board fusion propulsion system. The technology, like that used by Daedalus, is known as ‘inertial confinement fusion’ (ICF), in which pellets of fuel are compressed, perhaps with laser or electron beams, until they undergo fusion; the high-speed products provide a rocket exhaust. For twenty years the Ship is decelerated by the detonation of fusion pellets at a rate of two per second. The fusion fuel is a mix of D and He3, as was the case for Daedalus (Chapter 1).
  • We’re told that the total journey time is about 170 years (Chapter 3), consistent with the profile given.
  • Colonisation in the Tau Ceti system is attempted and fails (this will be considered below).
  • A section of the crew chooses to return to the Solar System. The ICF system is refuelled at Tau Ceti, and used to accelerate the Ship to 0.1c (Chapter 5).
  • As the Ship’s systems break down, the surviving crew completes the final leg of the journey in cryosleep.
  • The Ship has no onboard way to decelerate at the Solar System (Chapter 6). The ICF fuel was exhausted by the acceleration from Tau Ceti, save for a trickle to be used during Oberth Manoeuvres (see below). The laser system reduces the Ship’s speed, but not to rest: from 10%c to 3%c. We’re told that the Ship then sheds the rest of this velocity mostly with 28 Oberth Manoeuvres, using the gravity wells of the sun, Jupiter, and other bodies. This process takes 12 years before crew shuttles are finally returned to Earth.

We can consider these phases in turn.

Acceleration from Solar System

In considering the acceleration system, it should be borne in mind that what we need to do is to give a very large, fuel-laden Ship sufficient kinetic energy for it to cruise at 0.1c. And because of inevitable inefficiencies, the energy input to any acceleration system will have to be that much greater.

In fact the launch out of the Solar System is a combination of two methods, vaguely described, neither of which is remotely efficient. There’s a ‘magnetic scissor’ that accelerates the ship over 200 million miles: ‘…two strong magnetic fields held the ship between them, and when the fields were brought together, the ship was briefly projected at an accelerative force equivalent to 10 g’s’.

(Of course such acceleration would stress the crew, even though in tests humans have survived such accelerations for very short periods – indeed the book claims five crew died. And such acceleration could stress lateral structures, such as the spars to the biome rings. Perhaps the stack is launched with its major masses in line with the thrust, and reassembled later.)

In Jim Benford’s grad school days, he ran some actual experiments on this effect, using a single turn coil. The energy in the capacitor bank driving it was about 1 kJ and the subject of the acceleration was a screwdriver sitting on a piece of wood in the coil centre. The coil current pulsed to peak in 2 µs. The screwdriver was accelerated across the room to a target at about 10 meters per second. The kinetic energy of the screwdriver was about 5 J and therefore the efficiency of transfer was less than 1%. It seems unsafe to assume an efficiency much better than this.

For the Ship, there then follows a laser driven acceleration. While lasers can certainly accelerate light craft, as has been shown experimentally, they can’t accelerate the enormously massive vehicle that the novel describes. The power required to accelerate by reflection of the laser photons can be calculated from the Ship mass (74 million tons), final velocity and acceleration time (to 0.1c in 60 years, so 0.17% g). The amount of power is about 100,000 TW, a truly astronomical scale. (Earth’s present electrical power output is 18 TW.) The efficiency of power beaming is low because only momentum is transferred from the photons to the ship. Efficiency is the time-averaged ratio of velocity to the speed of light. Therefore the efficiency of this process is about 5%.
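
As a cross-check, here is a minimal Python sketch of that beam-power estimate. It assumes the 74-million-tonne mass guess above, perfect reflection of the beam, and a steady 60-year push; it lands within a factor of two of the ~100,000 TW quoted above, which is all an estimate at this level can claim:

    c = 3.0e8                          # speed of light, m/s
    year = 3.156e7                     # seconds in a year

    ship_mass = 74e6 * 1000            # 74 million tonnes, in kg (the guesstimate above)
    final_v = 0.1 * c                  # cruise speed
    burn_time = 60 * year              # duration of the laser push

    accel = final_v / burn_time                  # ~0.0016 g
    force = ship_mass * accel                    # required thrust, in newtons
    beam_power = force * c / 2                   # photon thrust on a mirror: F = 2P/c
    efficiency = (final_v / 2) / c               # time-averaged v/c, ~5%

    print(f"acceleration: {accel / 9.81:.4f} g")
    print(f"beam power: {beam_power / 1e12:.0f} TW")
    print(f"momentum-transfer efficiency: {efficiency:.0%}")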

The Ship and its mission would have to be a project of a very wealthy and very powerful interplanetary civilisation. It seems unlikely that they would resort to such a hopelessly inefficient system, if it could be made to work at all.

Deceleration at Tau Ceti

The Ship uses its onboard fusion rocket to decelerate.

We’re told the ICF deceleration phase takes 20 years at 0.005g, starting from 10%c cruise speed, with a Ship with an initial fuel load of 76% total mass. These numbers enable us immediately to calculate one critical number, the exhaust velocity of the fusion rocket. A ship with 76% fuel mass has a mass ratio (wet mass / dry mass) of (100/24=) 4.17. The rocket equation tells us that given that mass ratio and a total velocity change of 0.1c, the exhaust velocity must be 7%c. This is twice that of Daedalus, but perhaps not impossible for an advanced ICF system.

Our mass guesstimate above allows us to assess the performance of the rocket. Consuming 56.2mt of fuel in 20 years gives a mass usage rate of 94 kg/sec (cf Daedalus first stage 0.8 kg/sec). (Notice that the two fusion ‘pellets’ consumed per second are pretty massive beasts; in the Daedalus design pellets a few millimetres across were delivered at a rate of hundreds per second. This detail may be implausible. Indeed 47kg may be larger than a fission critical mass!)

You can find the rocket’s thrust by multiplying mass usage by exhaust velocity, to get about 2000 MN (megaNewtons). This is much larger than the Daedalus first stage’s 8 MN. And the rocket power is 20,000 TW (the Daedalus first stage delivered 30 TW). Note that this power number is comparable to the launch figures.

Again, these numbers can be taken only as a guide. But you can see that the power generated needs to be maybe three orders of magnitude better than Daedalus, and exceeds our modern global usage by four orders of magnitude.

Meanwhile this system would consume a heck of a lot of fusion fuel. Where would you acquire that fuel, and where would you store it?

The storage is the easy part, relatively. Daedalus’s 50 kt of fuel was stored in six spherical cryogenic tanks with a total volume of 76,000 m3. At similar densities, storing the Ship’s fuel load would require 860 million m3. That sounds like a lot, but the volume of a biome ring is about 38 billion m3, so the fuel volume is only 2% of this, making it plausible that it could be stored, as Robinson says, in cladding tanks on the biome rings and spine, without requiring large separate structures. The Ship is big but hollow. It’s not immediately clear, however, how effective a layer of fuel would be as a cosmic radiation shield.

And note that the need for cryogenic storage over centuries before use would be a challenge – as would the need to store any short-half-life propulsion components such as tritium, which has a half-life of 12.3 years and would decay away long before the 170-year mission was over.

Getting hold of the fusion fuel, meanwhile, is the tricky part. It’s hard to overstate the scarcity of He3 in the Solar System, and presumably at Tau Ceti. Even Daedalus’s 20,000t would deplete the entire inventory of the isotope on Earth (37,000t), and the Ship’s 22.5mt would dwarf the Moon’s store (1 million t); only the gas giants could reasonably meet this demand (the Daedalus estimate was that the Jovian atmosphere contains about 10^16 t). The Daedalus design posited acquisition from Jupiter, but estimated that to acquire Daedalus’s fuel load in 20 years would require that the Jovian atmosphere be processed at a rate of 28 tonnes per second. So again the challenge for the Ship’s engineers will be three orders of magnitude more difficult.
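
The scale of that difficulty can be read straight off the Daedalus figures (a rough sketch; the 28-tonnes-per-second rate is the Daedalus study’s own estimate for Jovian atmospheric processing):

    daedalus_he3_t = 20e3           # Daedalus He3 load, tonnes
    ship_he3_t = 22.5e6             # the Ship's He3 load, tonnes (40% of 56.2 Mt of fuel)
    daedalus_mining_rate = 28.0     # tonnes of Jovian atmosphere processed per second

    scale = ship_he3_t / daedalus_he3_t                   # ~1100 times the Daedalus load
    ship_mining_rate = daedalus_mining_rate * scale       # ~30,000 tonnes per second

    print(f"scale factor over Daedalus: {scale:.0f}x")
    print(f"implied processing rate: {ship_mining_rate:.0f} tonnes of atmosphere per second")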

And regarding the return journey, although the Ship is stripped down, a fuel load of similar order of magnitude must be acquired from the Tau Ceti system, and without the assistance of a Solar-System-wide infrastructure. Of this huge project, Robinson says only that ‘volatiles came from the gas giants’ (Chapter 4).

Deceleration at Solar System

At the end of the novel, the Ship returns to Earth, decelerating mostly using what is called the ‘Oberth Manoeuvre’, invented by Hermann Oberth in 1928. This is a two-burn orbital manoeuvre that would, on the first burn, drop an orbiting spacecraft down into a central body’s gravity well, followed by a second burn deep in the well, to accelerate the spacecraft to escape the gravity well. A ship can gain energy by firing its engines to accelerate at the periapsis of its elliptical path.

Robinson wants to use this to decelerate from 3% of light speed down to Earth orbital velocity. 3% of lightspeed is 9,000 km/s. For reference, Earth’s orbital velocity is 30 km/s. Several deceleration mechanisms are referred to in the book. An unpowered gravity assist, passing by the sun and reversing direction, can steal energy from the sun’s rotational motion around the centre of the galaxy. That’s worth about 440 km/s. Other unpowered gravity assists can be used once the ship is in a closed orbit in the sun’s gravitational well. Flybys for aerobraking in the atmospheres of the gas giants are referred to as well. Altogether, these can get you <100 km/s.

But the key problem with using the Oberth Manoeuvre to decelerate this returning starship is that the craft arrives on an unbound orbit. On entering the Solar System its trajectory can be bent by the sun’s gravity, but it will then exit the System again, because it has not lost enough velocity to be bound. To be captured, its velocity would have to be brought down to perhaps 100 km/sec, about 1% of the incoming velocity, so some 99% of the deceleration has to take place in the first pass. And you can’t get that much from an Oberth Manoeuvre.
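
To put a rough number on this, here is a minimal Python sketch of a single retrograde burn at perihelion, taking the solar escape speed at an (impossibly low) grazing perihelion as roughly 618 km/s, the most generous case imaginable:

    import math

    v_infinity = 9000.0     # incoming hyperbolic excess speed, km/s (3% of c)
    v_escape = 618.0        # solar escape speed at a grazing perihelion, km/s

    # Speed at perihelion on the incoming hyperbolic trajectory:
    v_perihelion = math.hypot(v_infinity, v_escape)       # ~9021 km/s

    # To be captured at all, the post-burn speed must fall below local escape speed:
    burn_needed = v_perihelion - v_escape                 # ~8400 km/s

    print(f"perihelion speed: {v_perihelion:.0f} km/s")
    print(f"minimum capture burn: {burn_needed:.0f} km/s")

Even in this most favourable geometry the Oberth ‘discount’ is only a few hundred km/s, so essentially all of the 9,000 km/s would have to be shed propulsively in that single pass.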

Cryosleep

As the Ship’s systems collapse, the returning crew gets from Earth plans to build a cryonic cold sleep method, which allows the viewpoint characters to survive until they reach the Earth.

This technology logically sidesteps most of the problems the early parts of the novel confront, and therefore undermines much of Robinson’s point about the difficulty of interstellar travel: if only the colonists had waited a few centuries for cryo technology, it would all have been so much easier! That outcome contradicts Robinson’s own thesis.

Aurora

On arrival at Tau Ceti, the colonists’ target planet, called Aurora, is judged lifeless but habitable on the basis of remote sensing of an oxygen atmosphere – presumed to have been created by non-biological processes billions of years ago – but in the event the environment proves lethal to humans because of the presence of a deadly ‘prion’.

In a sense this is the point of the novel, that even if we reach the stars we will find only dead or hostile worlds: ‘I mean, they [alien worlds] are all going to be dead or alive, right? If they’ve got water and orbit in the habitable zone, they’ll be alive. Alive and poisonous . . . What’s funny is anyone thinking it [interstellar colonisation] would work in the first place’ (chapter 3). And as Greg noted in his essay this reflects recent misgivings expressed by Paul Davies and others about the habitability by Earth life of exoplanets.

Is this reasonable? And is Robinson correct that this could be the solution to Fermi’s famous paradox?

Robinson seems to be saying ‘alive’ worlds will be toxic to all possible biological explorers (there is a little wiggle room here since non-biological automated probes might still survive such worlds). This is a bold statement, but plausible since we lack any relevant data. However Robinson also says ‘dead’ worlds, essentially rocky Earth-size planets in the Goldilocks zone, could be terraformed but that project would take thousands of years. But why should that matter in a galaxy that is billions of years old? There should be plenty of time to terraform such planets, either by biological explorers or perhaps some type of self-replicating von Neumann probes or seed ships. There appears to be no solution in Aurora to Fermi’s question.

Oxygen and Biosignatures

(See Sinclair et al (2012) for a relevant reference.)

It seems implausible that oxygen in Aurora’s atmosphere might not be a biosignature: that is, that it could credibly be created by non-biological processes. Without some continual input into the atmosphere, you would expect any oxygen to rust out, as on Mars. Robinson says the oxygen on Aurora is due to the ultraviolet breakdown of water. We haven’t run the numbers, but that would be a hell of a lot of UV (which itself could make the planet uninhabitable). That might actually work better as a mechanism for oxygen production on Mars, at least long ago when Mars had liquid water. Indeed, UV is how Mars lost its water and atmosphere, and the same would happen on a dead Earthlike world. So Aurora can’t keep its oxygen; it is lost to space after the hydrogen from the photodissociated water escapes.

Robinson also cites a failure to detect CH4 and H2S, possible markers of life, in Aurora’s air as ruling out a biological origin for the oxygen. However the interpretation of the presence of methane (CH4) in the Martian atmosphere has been a bone of contention for well over 15 years. Is it a biomarker or an index of geological activity? And as far as hydrogen sulphide goes, it sure as hell is not a biomarker on Io!

The ‘Prion’

The most significant biological problem in Robinson’s scenario is the organism that was so toxic to humans on Aurora. This is said to be ‘something like a prion’, and is apparently an isolated organism: as far as the explorers could tell there simply was no wider biosphere on Aurora.

For a biologist, that sounds really weird. This is a satellite a couple of billion years older than Earth and the only evolved organism is a prion? In addition we are not sure what ‘something like’ really means, but if it was indeed like a prion one must ask: where on Aurora are the proteins capable of being misfolded by a prion action? That’s what prions do; they cannot exist in isolation. And then why was it that human proteins, from a different biosphere altogether, were such a good match to the prion’s mechanisms?

Of course you can say it was ‘something like’ a prion but not really a prion. But then, what makes it ‘like’ a prion if not protein-folding?

It would take a lot more detail to make this strange single-organism biosphere a plausible ecosystem. Maybe if Robinson ever revisits Aurora and the stayers we could find out! Joe Miller thinks that an Andromeda Strain-like organism, inimical to Earth biology, is no more or less likely than ET organisms which simply find Earth biology indigestible. We don’t know, but the possibility that ET biology would be simply oblivious to Earth biology is a plausible situation, though not treated very much in SF because it is not very dramatic!

Conclusions

Robinson’s Aurora is a finely crafted tale of human drama and interstellar exploration. Its polemic purpose appears to be to demonstrate, in Robinson’s words, that ‘no [human-crewed] starship voyage will work’. There is much of the science and technology we haven’t explored in this brief note; there’s probably a master’s thesis here – indeed I’ve recommended the book to Project Icarus as a study project.

However, to summarise our conclusions:

  • The human crew transported to Aurora may plausibly be large enough to support a new breeding population. And the Ship’s dimensions seem adequate to support the crew through their centuries-long mission.
  • The challenge of maintaining small closed biospheres is depicted credibly, but the ‘wilderness’ areas of the biome arks are too small for their purpose.
  • Of the elements of the propulsion system, the electromagnetic / laser Solar System acceleration system needs to be so powerful it stretches credibility, while the Oberth Manoeuvre return-deceleration system as depicted is impossible. The ICF fusion rocket system appears generally credible, but would require the acquisition of heroic amounts of helium-3 fuel, a challenge especially at Tau Ceti.
  • Regarding Aurora itself, the notions of a non-biogenic oxygen atmosphere, and of a single-organism biosphere, and that an extraterrestrial organism as described might necessarily be inimical to humans, all lack credibility.

In summary, while Aurora is an intriguing combination of literary, political, scientific and technical notions, and while it reflects many current speculations about the difficulty of interstellar travel, in many instances it lacks the supporting credible scientific and technical detail required to make its polemic case that human interstellar travel is impossible. The journey is not plausible, and nor is the destination.

What Aurora illustrates very well, however, at least at an impressionistic level, is the tremendous difficulty of mounting such a voyage. Interstellar travel is a challenge for future generations, which will bring both triumph and tragedy.

References

Kim Stanley Robinson, Aurora, Orbit, 2015.

Kim Stanley Robinson, 2312, Orbit, 2012.

Bond et al, Project Daedalus Final Report, British Interplanetary Society, 1978.

Johnson, Richard D., and Holbrow, Charles (editors), ‘Space Settlements: A Design Study’, NASA SP-413, 1977.

Jędrzejewski W, Schmidt K, Theuerkauf J, Jędrzejewska B, Kowalczyk R. 2007. ‘Territory size of wolves Canis lupus: linking local (Białowieża Primeval Forest, Poland) and Holarctic-scale patterns’. Ecography 30: 66–76.

Sinclair, S., Schulze-Makuch, D., Radley, C., Papazian, A., Miller, J., Marzocca, P., Lee, J., Gaviraghi, G., How to Develop the Solar System And Beyond: A Roadmap to Interstellar Space, Kindle Books, 2012.
