Ice Volcanoes on Ceres?

by Paul Gilster on February 7, 2017

If a terrestrial volcano erupts molten rock, an ice volcano in the outer Solar System presumably erupts volatiles like water or ammonia. We have evidence of such things in places like Pluto and Triton, far beyond the snowline, where water is abundant. Some scientists think Quaoar may have had cryovolcanic activity, and other candidates include Titan, Europa and Miranda. Which brings us to Ahuna Mons on the dwarf planet Ceres.

Discovered by the Dawn spacecraft in 2015, Ahuna Mons is unusual in many respects. Its sides are steep, its features well-defined, which suggests it is geologically young. If it is a cryovolcano, it seems to exist in splendid isolation, half the height of Mt. Everest on a surface otherwise bereft of similar features. Moreover, the orbit of Ceres between Mars and Jupiter gives us potential cryovolcanism closer to the Sun than has ever been observed before.


Image: Ahuna Mons seen in a simulated perspective view. The elevation has been exaggerated by a factor of two. The view was made using enhanced-color images from NASA’s Dawn mission. Credit: NASA.

Michael Sori (Lunar and Planetary Laboratory, University of Arizona) has a theory that just may explain Ahuna Mons’ peculiarities. In a new paper on the dwarf planet, Sori, the lead author, and his colleagues investigate the possibility that Ahuna Mons is simply the most recent of many cryovolcanoes that have formed on Ceres over millions of years, a young example of its type left standing while older ice volcanoes have gradually deformed.

The heart of the hypothesis is viscous relaxation, the gradual flow of solids over time. We don’t see this with volcanoes on Earth because they are made of rock, but a high ice content could make a gradual flattening of a cryovolcano on Ceres possible. Given enough time, features like Ahuna Mons would effectively disappear from view, leaving no sign of their blocky structure. Ceres’ location relatively close to the Sun could accelerate the process.


Image: Ceres’ mysterious mountain Ahuna Mons is seen in this mosaic of images from NASA’s Dawn spacecraft. Dawn took these images from 385 kilometers above the surface, in December 2015. The resolution of the image is 35 meters per pixel. Credit: NASA.

We know that viscous relaxation occurs on Earth — we see the process in the flow of glaciers. On Ceres, we would have to presume a structure that is ice rich, which is not true over the entire surface. In fact, Dawn has revealed a high population of craters in many areas that show the crust of the dwarf planet is not sufficiently ice-rich to smooth out the topography uniformly. But some areas of Ceres present a different picture. From the paper:

This observation is consistent with Dawn geophysical observations [Ermakov et al., 2016a; Fu et al. 2016; Park et al. 2016], which reveal that Ceres (whose bulk density suggests an ice-rock mixture) is only partially differentiated [Zolotov, 2009] into icy and rocky layers in contrast to some pre-Dawn predictions of complete differentiation [McCord and Sotin, 2005; Thomas et al., 2005; Castillo-Rogez and McCord, 2010; Castillo-Rogez, 2011]. However, while the crust on average must be <30% ice by volume to support topography [Fu et al., 2016], variation in crater morphology [Bland et al., 2016] and spectroscopic detection of localized H2O [Combe et al., 2016] indicate ice content is laterally heterogeneous. Localized regions or individual landforms may be sufficiently ice-rich for flow to occur [Schmidt et al., 2016] even if the crust as a whole is not.

Is Ahuna Mons one such place? Sori and team modeled the flow of the feature assuming different proportions of water in the constituent materials of the mountain. The modeling demonstrates that if Ahuna Mons is composed of more than 40 percent water ice, viscous relaxation could indeed be in play. This would allow a flattening of 10 to 50 meters per million years, enough to render cryovolcanoes unrecognizable over geologic time.
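A quick back-of-envelope check (my own arithmetic, not a calculation from the paper) makes the timescale concrete. Taking Ahuna Mons to be roughly 4 kilometers tall, the modeled rates imply that an ice-rich edifice of similar size would relax away on a timescale of order a hundred million years:

```python
# Back-of-envelope relaxation timescale for an Ahuna Mons-like dome.
# The ~4 km height is an approximation (roughly half the height of Everest);
# 10-50 m per million years is the paper's modeled flattening rate range.

height_m = 4000                     # approximate height of Ahuna Mons
rates_m_per_myr = (10, 50)          # modeled flattening rates, meters/Myr

for rate in rates_m_per_myr:
    timescale_myr = height_m / rate
    print(f"At {rate} m/Myr: ~{timescale_myr:.0f} million years to flatten")
```

On these crude numbers, older cryovolcanoes would indeed have had time to vanish, while a feature at most 200 million years old could plausibly still be standing.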

There could, in other words, once have been other features like Ahuna Mons; the mountain itself, no more than 200 million years old, is still in the gradual process of flattening. To firm up the idea, it will be necessary to study the surface for evidence of the remnants of other cryovolcanoes. Testable predictions flow from the modeling:

Based on our results, we predict older cryovolcanoes have shallower slopes, and that cryovolcanoes at mid-latitudes have asymmetries between poleward and equatorward facing slopes. We do not expect extensive viscous relaxation of polar features. The detection of this distribution of features would add strong support to the hypothesis that Ceres undergoes ice-rich cryovolcanism, and flow models would constrain Cerean cryovolcanic history.

The idea seems reasonable, especially given the alternative, that Ahuna Mons is the only ice volcano that formed on a world otherwise without such activity. “Ahuna Mons is at most 200 million years old. It just hasn’t had time to deform,” says Sori. It will take painstaking scrutiny of the Cerean surface to see whether other dome-like features fit into this picture. Also fascinating: The comparison between putative cryovolcanoes here and on other small worlds.

The paper is Sori et al., “The vanishing cryovolcanoes of Ceres,” accepted for publication at Geophysical Research Letters.



Agricultural Resources Beyond the Earth

by Paul Gilster on February 3, 2017

Gaining a human foothold on another world — Mars is the obvious first case, but we can assume there will be others — will require a search for resources to support the young colony. In today’s essay, Ioannis Kokkinidis looks at our needs in terms of agriculture, whether on a planetary surface or a space-borne vessel like an O’Neill colony or a worldship. Happily, his first reference, to Lucian of Samosata, has deep science fiction roots. The author of several Centauri Dreams posts including Agriculture on Other Worlds, Ioannis graduated with a Master of Science in Agricultural Engineering from the Department of Natural Resources Management and Agricultural Engineering of the Agricultural University of Athens. He holds a Mastère Spécialisé Systèmes d’informations localisées pour l’aménagement des territoires (SILAT) from AgroParisTech and AgroMontpellier and a PhD in Geospatial and Environmental Analysis from Virginia Tech. He now lives in Fresno CA and works for local government, while continuing to pursue his interest in sustaining human life outside our own planet.

By Ioannis Kokkinidis



About noon, when the island was no longer in sight, a whirlwind suddenly arose, spun the boat about, raised her into the air about three hundred furlongs and did not let her down into the sea again; but while she was hung up aloft a wind struck her sails and drove her ahead with bellying canvas. For seven days and seven nights we sailed the air, and on the eighth day we saw a great country in it, resembling an island, bright and round and shining with a great light. Running in there and anchoring, we went ashore, and on investigating found that the land was inhabited and cultivated. By day nothing was in sight from the place, but as night came on we began to see many other islands hard by, some larger, some smaller, and they were like fire in colour. We also saw another country below, with cities in it and rivers and seas and forests and mountains. This we inferred to be our own world. We determined to go still further inland, but we met what they call the Vulture Dragoons, and were arrested. These are men riding on large vultures and using the birds for horses. The vultures are large and for the most part have three heads: you can judge of their size from the fact that the mast of a large merchantman is not so long or so thick as the smallest of the quills they have. The Vulture Dragoons are commissioned to fly about the country and bring before the king any stranger they may find, so of course they arrested us and brought us before him. When he had looked us over and drawn his conclusions from our clothes, he said: “Then you are Greeks, are you, strangers?” and when we assented, “Well, how did you get here, with so much air to cross?”

— Lucian (ca. 125–180 AD), True Story, chapters 9-11 translated by A. M. Harmon (1913).

Lucian of Samosata’s most famous work, True Story, defies easy categorization. He most likely wrote it as a parody of the travel novels popular during the Antonine Era and more specifically Antonius Diogenes’ now lost The Wonders Beyond Thule. Modern critics have called it the first surviving work of both Science Fiction and Fantasy, and ironically it is the only work of both genres that is part of the school curriculum in Greece today.

We can see that already in this earliest work of science fiction, space colonization, war and agriculture are important themes. Alas, unlike Lucian, who like Herodotus implores us to travel to the places he has just described and see for ourselves that he is telling the truth, we find that neither the Sun, nor our Moon, nor Venus has an Earth-like biosphere. Technology, though, can allow us to produce the agricultural products necessary for human survival on other celestial bodies, provided those bodies offer, in easily available form, the resources that agriculture needs. This article first describes in general terms what sorts of resources agriculture can provide, and then lists the important elements, and the forms they take, that are necessary for an artificial ecology to function.


When planning planetary colonization we should take note that the biosphere of Earth provides resources and ecosystem services to people through large-scale cycles that are hard to replicate. It is very hard, moreover, to create a completely enclosed system; resource inputs of several forms will be necessary in order to maintain a system that can sustain human civilization. On Earth, cultivated plants assimilate carbon from the atmosphere during the growing season; it is then released back in the short term after the growing season ends and in the long term through the geologic carbon cycle. Until a colony reaches a very large size, which it might never do, we will most likely keep our crops in a permanent growing season, planting a new crop as soon as the previous one is harvested. That in turn means we will need to be constantly adding resources instead of allowing them to be slowly released by decomposition.

Furthermore, even if we do reach a balance of agricultural inputs and outputs in our artificial ecosystem, it will likely still require a large buffer, far larger than what is cycled each year. For example, if we use agriculture only to grow food, and we grow our food exclusively from plants, we consume only a small part of each plant: less than 50% of the aboveground biomass for annual crops, and an even smaller fraction for tree crops. It is simply not possible to plan to colonize a body that does not contain, in significant quantities, easily available forms of the elements we need, unless we set up large-scale resource transfer from outside it. I believe I am not the first person to raise the issues below, though I have not done a systematic search of the literature. All suggestions are welcome.

Image: A fictionalized portrait of Lucian taken from a seventeenth century engraving by William Faithorne (1616-1691). Credit: Wikimedia Commons.

Resources from agriculture


Food, sustenance in all its forms for the colonists, is the most readily given reason to engage in agriculture in space. Any food grown is food that does not need to be transported from Earth, not to mention the variety of psychological benefits that come from watching it grow. We can divide edible crops into two categories: autotrophic organisms such as plants, and heterotrophic organisms such as fungi and animals. Over the last ten millennia we have domesticated a huge number of plants, of which we eat a wide variety of parts but rarely the entire plant. With heterotrophic organisms we can take advantage of the parts of a plant humans cannot eat and convert them into edible food, though again we do not eat entire animals, except perhaps octopuses and their relatives. There is no such thing as the perfect diet for all conditions; we need to balance the macro- and micronutrient needs of humans against the available resources and the need to maintain a healthy population. And since plants produce their edible parts on an irregular schedule, we will also need to store and preserve food, especially to guard against crop failure.


Usually when we talk about plants providing food and fiber, by fiber we often mean wood fiber. While we will likely see trees planted in arboretums, we are not likely to see forest-style plantations for harvesting timber; colony space is too valuable and tree growth is too slow. Unless we find a celestial body with forests, wood furniture will likely remain a luxury item reserved for the well off, or for very specific uses where it is indispensable. Another use of wood fiber for which we will need a ready substitute is paper; it is far easier to set up paper production than a factory making electronics. There is already tree-free paper on the market made from bagasse, a byproduct of sugarcane processing, and from several other plant waste fibers. Historically, before the invention of paper by the Chinese and its introduction to Europe by the Arabs in the 11th century, papyrus and vellum were the writing materials, although it is highly unlikely that we will see vellum used in a non-ceremonial setting in space.

Moving on to other fiber uses, the most obvious one is cloth making. Cotton fiber is the most popular of the vegetable fibers, though other plant fibers such as flax, jute and hemp are also used. Among animal fibers wool is the most popular, though silk and leather are also fine choices. On Earth, biologically derived fibers are today more expensive than petroleum-derived fibers such as polyester. In practice, with the exception of Titan, celestial bodies are not known to harbor large bodies of hydrocarbons from which we could derive artificial fibers. The specific crops planted and animals selected for use in space will depend on the needs of the colony and on related infrastructure, such as the cotton gins needed to produce usable materials.


Before the industrial revolution most materials used for energy purposes were derived from the active biosphere, e.g. firewood. Today fossil fuels, biogenic in nature, cover most of the energy needs of human civilization. There has been an effort, though, to produce biofuels as substitutes for fossil fuels ever since the oil crises of the 1970s. In Europe, which has long mined coal but does not have large petroleum resources, biofuels are subsidized by the Common Agricultural Policy. The purpose is not so much to cover energy needs with European resources as to keep farm prices from dropping so low that unhappy farmers block the highways demanding better prices. In the US, corn biofuel policy is tied more to the political cycle, such as the first-in-the-nation Iowa caucus and its voters; after all, the US is one of the largest petroleum producers in the world. The most successful bioenergy program in the world is considered to be Brazil’s, which blends sugarcane-derived ethanol into gasoline and has abolished the need to import oil (Brazil is itself an oil-producing country).

The use of biofuel in space is tied to the selection of the energy cycle for the colony. It is highly unlikely that we will use internal combustion engines to power a colony. The most likely energy sources are either photovoltaics, which in the long term will require a plant to produce them from silicon wafers, or nuclear power, which requires an entire cycle of mining, refining and isotope enrichment. It is possible that we will see hydrocarbons as energy sources in the colony. There are already plans to use abiotic processes to produce methane as rocket and rover fuel in future Mars colonies, and it is possible to produce RP-1 from biological sources should a rocket require it. In general, though, I see biofuels occupying a niche in a future colony. We might create biodiesel out of waste edible oils, but we are unlikely to see entire sunflower plantations devoted to biodiesel production.


According to Wikipedia, over 300,000 tons of bioplastics are produced each year, or 0.1% of total global plastics production. Modern technological civilization is deeply dependent on a variety of plastics, even inside a greenhouse (e.g. drippers). Unless the celestial body colonized has prodigious amounts of easily available hydrocarbons, as Titan does, we will need to create an infrastructure to produce bioplastics for colony needs very early, or else set up a logistics chain for plastics from Earth. Generally the feedstock for bioplastics is readily available plant material, such as cellulose or dextrose, though some animal sources such as casein (a milk protein) have been used. The harder part will be creating a production line for these bioplastics from the local raw material.

Elements for agriculture

What follows is a list of the major elements necessary for plant growth. Some 17 elements are necessary for plants to survive, though the majority are required in minute amounts and are often easily available in the soil or as impurities in fertilizers. Carbon, hydrogen and oxygen combined account for 95% of plant mass. Often, though, element deficiencies can arise due to pH despite the presence of the element in the soil.


Carbon enters the biosphere when it is assimilated by plants through photosynthesis in the form of CO2. While there are a few methanotrophic bacteria known, it is unlikely that we will require carbon in any form except CO2 for agriculture. Plants can oxidize CO in the presence of O2 to CO2, but cannot use raw carbon. Thus if carbon is available in the environment but not in the form of CO2, we will likely need to set up processes to produce CO2 before plants can assimilate it.


Plants assimilate hydrogen mostly in the form of water. Water has an important function in plants, both as biology’s solvent and as the stream that transports elements within the plant.


Oxygen as an element is assimilated by plants in the form of water and CO2. It is released to the environment in molecular form by photosynthesis, which is critical for the survival of animal life. Plants also use molecular oxygen from the environment during respiration, however they produce far more O2 than they consume, and this allows heterotrophic life to exist.


Image: The colors in the spectra show dips, the size of which reveal the amount of these elements in the atmosphere of a star. The human body on the left uses the same color coding to evoke the important role these elements play in different parts of our bodies, from oxygen in our lungs to phosphorous in our bones (although in reality all elements are found all across the body). In the background is an artist’s impression of the Galaxy, with cyan dots to show the APOGEE measurements of the oxygen abundance in different stars; brighter dots indicate higher oxygen abundance. Credit: Dana Berry/SkyWorks Digital Inc.; SDSS collaboration.


Plants require nitrogen in a variety of forms, but unlike the previous three elements they cannot assimilate it from the atmosphere. Rather, they take it in through the roots, more specifically through the soil solution in the form of nitrate. Nitrates, though, are highly mobile in the soil, which is why we also fertilize with ammonia, which soil microorganisms convert to nitrate over time. Both forms of nitrogen are typically produced on Earth in chemical factories using atmospheric nitrogen as a feedstock. In parts of the outer solar system they are available as rocks and ices.


Phosphorus is another element assimilated from the soil solution. Unlike nitrogen, though, it is not found in the Earth’s atmosphere; rather, we mine phosphate rocks and fertilize with phosphate salts. Some 80% of global phosphate mining exploits deposits of biogenic sedimentary rocks of marine origin. The other 20% is of igneous origin in the form of apatite. Outside Earth, it is this phosphoric apatite that will likely provide our phosphorus needs.


Just as with phosphorus, potassium is mostly mined from sedimentary rocks, more specifically evaporites. While evaporites have been found on Mars and are likely present on Venus, for other bodies of the solar system we will need to locate other forms of the element and process them into the salts that plants require.


Iron occupies an intermediate position between micro- and macronutrients, required in quantities that are small for a macronutrient but large for a micronutrient. Plants assimilate iron in ferrous (Fe++) form, often, at some energy cost to the plant, from organic iron complexes that contain the ferric (Fe+++) form. Since the concentration and availability of ferrous and ferric iron depend on soil pH and on other antagonist ions in solution, we very often see plants with iron deficiency despite a large iron concentration in the soil and parent rock. In hydroponic fertilization and urgent deficiency interventions we tend to use organic iron so as to provide a highly available form to the plants. Organic iron, though, is not necessary if we take pains to control the pH and antagonists such as calcium, phosphorus and carbonates.


Calcium is not needed in large quantities by plants themselves. However, it is often applied in macronutrient quantities in order to control soil pH. In areas of high rainfall such as the eastern US and western Greece, we find many soils that are calciferous in origin but have a low pH, because rainfall washes away the Ca++ ions, lowering the pH to acid levels. Calcium is used in hydroponics to raise solution pH, and it will likely be necessary to stockpile and use calcium for this purpose rather than for the plants’ specific need for the element.


Sulfur is the opposite of calcium in that it is used to lower soil pH. There is no shortage of sulfur in agricultural soils on Earth; fossil fuel use has spread it far and wide. Pollution control measures have reduced atmospheric deposition in developed countries, and it is likely that in a few decades sulfur fertilization will become necessary in some areas. For now, though, we are more likely to see sulfur in hydroponics, lowering pH when it rises too high. Just as with calcium, plants do not require large quantities, but we may need to stockpile it for the same reasons.

Other micronutrients

The rest of the elements necessary are required in minute quantities and while pH is very important for their availability, their limited requirements mean that we will not need to seek them specifically. In general, micronutrient fertilization can become necessary and critical if we choose an agricultural system where we remove the entirety of the plant mass from the soil or substrate and do not allow any plant decomposition to take place, which is what we will do at first. The decomposing remains of the previous harvest are often the primary source of micronutrients for the next, even in intensive agriculture. If we remove the entirety of the crop each time, we will need to provide the elements that were mined in the process, though again, it is unlikely that we will need to search for extensive quantities.


This contribution was inspired by news reports of the first NASA Mars landing site selection symposium. They mentioned that along with geologists seeking interesting formations there were also colonization specialists arguing for sites with mineral resources for metallurgy in the future colony. They did not mention plant specialists looking for areas with the resources to grow plants. I did not write this contribution with Mars specifically in mind; it is intended as a general guide for all celestial bodies. Bodies with carbon dioxide in the atmosphere will not require creating it from other elements. Bodies with nitrate rocks have an advantage over those with only gaseous nitrogen in the atmosphere.

Also, while we are fortunate enough to know the surface composition of several bodies of the solar system, we just don’t know enough about exoplanets to be able to judge which are more suitable for colonization. At best we have managed to infer the presence of some elements in the atmosphere of a few exoplanets but we are nowhere near a full resource guide. Human civilization has always been dependent on agriculture for a variety of resources to survive and thrive. This will continue to be true when we move beyond Earth.



Proxima Centauri: The Problem of Arrival

by Paul Gilster on February 2, 2017

Given his key role in the development of sail ideas for interstellar flight, Robert Forward inevitably comes up in any discussion of deep space missions. The late physicist put forward a number of sail concepts and mission ideas, including a laser-driven lightsail to Epsilon Eridani with return capability that would travel at 50 percent of the speed of light. Those were numbers that made a manned mission theoretically possible, though demanding a huge sail (1000 kilometers in diameter) and a mind-bending space-based 75,000 TW laser system.

Yesterday we looked at the critical problem of deceleration in a sail-based interstellar mission, with reference to the new paper by René Heller and Michael Hippke. I only wish Forward were here to give us his thoughts on the newly proposed ‘photogravitational assist’ method of deceleration, because for years his own method for the Epsilon Eridani mission — a ‘staged’ sail that separates, so that one sail ring reflects laser light back onto another — has been the only method I’ve seen for slowing a sailcraft down for orbital insertion at another star.


Image: Forward’s separable sail concept used for deceleration, from his paper “Roundtrip Interstellar Travel Using Laser-Pushed Lightsails,” Journal of Spacecraft and Rockets 21 (1984), pp. 187-195. In the paragraph above, I didn’t even mention the ‘paralens,’ a huge Fresnel lens made of concentric rings of lightweight, transparent material, with free space between the rings and spars to hold the vast structure together, all of this located between the orbits of Saturn and Uranus. The structure would be used to collimate the laser beam.

To my knowledge, Forward never considered the possibility of using stellar photon pressure combined with gravity assists as a means of deceleration. The method wouldn’t have occurred to him in relation to Epsilon Eridani in any case. For one thing, moving at 50 percent of c, his sailcraft would be unable to achieve the needed braking from the method, and for another, Epsilon Eridani, a single star, is the wrong kind of target for this type of maneuver. As Heller and Hippke explain, a multiple star system is the destination of choice.

This quote from the paper gets the point across. In the passage, L refers to stellar luminosity:

In multi-stellar systems, successive fly-bys at the system members can leverage the additive nature of photogravitational assists. For multiple assists to work, however, the stars need to be aligned within a few tens of degrees along the incoming sail trajectory of the sail. Such a successive braking is particularly interesting for multi-stellar system, where bright stars can be used as photon bumpers to decelerate the sail into an orbit around a low-luminosity star, such as Proxima (0.0017 L) in the α Cen system or the white dwarf Sirius B (0.056 L) around Sirius A.

Sirius A? Indeed. For the paper notes that other nearby stars offer more favorable conditions even than the Alpha Centauri triple system for decelerating an incoming lightsail. Sirius A is about twice the distance from the Sun as Alpha Centauri but offers an extremely bright target (25 L) for deceleration, making the maximum injection speed into the system almost 15 percent of lightspeed. It would take something other than a solar photon sail to get the initial payload up to cruise speed for such a journey, but deceleration upon arrival is possible.

We need to learn everything we can about deceleration given the advantages of a sail that operates for years in a bound orbit within a stellar system (and even around a target planet like Proxima b) vs. a flyby mission. Early probes to nearby stars might well be flyby missions, particularly if we build the Breakthrough Starshot infrastructure, which would also be useful here in our own Solar System. But detailed follow-ups could come through decelerating lightsails in those destinations most suited for such methods. Fortunately, the nearest stars to our own form one such system.

I refer you back to yesterday’s post if you’re just coming into the discussion, but the brief summary is that the combination of the gravitational pulls of Centauri A and B along with their photon pressures is what makes deceleration of Heller and Hippke’s 316-meter sail possible. Centauri A is thus the first target, with the flyby there being manipulated through autonomous onboard technologies to maximize the braking effect before sending the sail on to Centauri B.

With the help of Centauri B, we slow from 4.6% of c to about 1280 kilometers per second, the figure that Heller and Hippke have determined would allow entry into a bound orbit around Proxima Centauri. A flight time of 46 years to Proxima ensues. At the destination, the resulting highly elliptical orbit is then circularized over time using photon pressure; we wind up with a functioning, data-returning probe in the star’s habitable zone. This obviously demands extreme and precise maneuvering but needs no onboard fuel.


Image: Artist’s concept of Proxima b orbiting Proxima Centauri. Credit: ESO/L. Calçada/Nick Risinger.

Navigation during the critical period of the photogravitational assists demands careful attention. The paper argues that multiple spacecraft may be one way to handle this. In the passage below, rmin refers to the sail’s minimum distance from the star:

Regarding the nautical issues of an A-B-C trajectory, communication among sails within a fleet could support their navigation during stellar approach, as it will be challenging for an individual sail to perform parallel observations of both the approaching star and its subsequent target star or of other background stars. Course corrections will need to be calculated live on board. In particular, the location of rmin will need to be determined on-the-fly as it will depend on the actual velocity and approach trajectory and, hence, on the local stellar radiation pressure and magnetic fields (Reiners & Basri 2008) along this trajectory.

I mentioned yesterday the question of what any beings on Proxima b might see if a sail like this one were headed for them. In a Frequently Asked Questions document timed for release with the paper, the authors point out that the sail would indeed be observable, appearing as a new star in Proxima b’s skies that would have the same electromagnetic spectrum as Proxima Centauri itself, although blue-shifted. There’s also this:

…any time variability of their host star’s spectrum would be delayed in that star — initially by years, later only by months, weeks, and finally just days or seconds. This new star would also become brighter as the sail approaches Proxima b, and the blue-shift would decrease until, upon the sail’s arrival at Proxima b, the blueshift would disappear and the time delay would be very short, e.g. seconds only. At some point, when the sail would reorient itself into an oblique angle to transfer into an orbit at Proxima b, this fake star would suddenly disappear for an observer on Proxima b. As the sail would orbit the planet over the next months or years, it could occasionally reappear for just a few seconds as a very bright star-like dot in the sky. In principle, if these potential inhabitants of Proxima b were able to identify the sail as being artificial, they might conceive of a way to deliberately betray their presence to the cameras aboard the sail.

Interesting fodder for science fiction! I can recall the incoming lightsail seen by characters in Niven and Pournelle’s The Mote in God’s Eye (Simon & Schuster 1993), but I’m hard pressed to think of other science fictional treatments of this scenario. Perhaps the readers can help me out. Meanwhile, have a look at the Heller and Hippke paper, whose methods offer serious hope for solving the critical question of slowing down at another star.

The paper is Heller, R., & Hippke, M. (2017), “Deceleration of high-velocity interstellar photon sails into bound orbits at α Centauri,” The Astrophysical Journal Letters, Volume 835, L32, DOI:10.3847/2041-8213/835/2/L32 (preprint).



By ‘Photogravitational Assists’ to Proxima b

by Paul Gilster on February 1, 2017

Given the distances involved, faster would always seem to be better when it comes to interstellar flight. Voyager 2 took 12 years to reach Neptune; Voyager 1 took roughly 35 years to cross the heliopause. At Voyager speeds of about 17 kilometers per second, crossing the 4.22 light years to Proxima Centauri would take some 75,000 years. That clearly doesn’t cut it, but how fast can we realistically hope to go?
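The arithmetic behind these figures is easy to verify; a quick back-of-the-envelope sketch using only standard constants:

```python
# Rough check of the Voyager-to-Proxima travel time quoted in the text
LY_KM = 9.4607e12            # kilometers per light year
SEC_PER_YEAR = 3.156e7

dist_km = 4.22 * LY_KM       # distance to Proxima Centauri
v_kms = 17.0                 # Voyager's cruise speed, km/s

years = dist_km / v_kms / SEC_PER_YEAR
print(round(years))          # on the order of 75,000 years
```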

Let’s say we manage to build the phased laser array contemplated in the early Breakthrough Starshot discussions. Starshot’s researchers envision driving small sails to 20 percent of the speed of light, a figure that should allow safe passage through the interstellar medium for a large percentage of the sails sent. But get to Proxima Centauri in 20 years and another problem arises: each sail blows through the system in mere hours. In fact, at 0.2c, these sails cross a distance equivalent to the Earth-Moon separation in six seconds. Hence the huge problem: how do we explore the system we’ve reached?
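The six-second figure checks out against the mean Earth-Moon distance, and the same sketch shows why a 0.2c flyby of an inner planetary system lasts only hours (the 2 AU span below is my illustrative choice, not a figure from the paper):

```python
# Crossing times at Starshot's 0.2c cruise speed
C_KMS = 299_792.458
v = 0.2 * C_KMS                  # km/s

moon_km = 384_400                # mean Earth-Moon distance, km
t_moon = moon_km / v
print(t_moon)                    # ~6.4 seconds

au_km = 1.496e8                  # one astronomical unit, km
t_system = 2 * au_km / v         # crossing 2 AU of an inner system
print(t_system / 60)             # ~83 minutes
```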

A new paper from René Heller (Max Planck Institute for Solar System Research, Göttingen), working with German colleague Michael Hippke, gives us another way to frame the matter. I would say that it’s not so much an alternative to Starshot as an idea that could be pursued along with it, and perhaps implemented as a follow-on to any early sail flybys of Proxima. For Hippke and Heller believe a somewhat slower craft could make the Proxima crossing, but also achieve a bound orbit around the star and perhaps even its planet, Proxima Centauri b.


Image: Artist concept of an Autonomous Active Sail (AAS) approaching the potentially habitable exoplanet Proxima b. The reflection of Proxima Centauri and background stars can be seen on the mirror-like surface of the sail. Four communication laser beams fire from its corners to transmit information back to Earth. The lower right panels of the sail are in the process of becoming darker, changing its direction and orientation through differences in radiation pressure. Credit: Planetary Habitability Laboratory, University of Puerto Rico at Arecibo.

At the heart of the concept are what the duo call ‘photogravitational swings,’ used to decelerate an incoming light sail and deflect it. Here things get more interesting still, because Heller and Hippke believe the proper use of these maneuvers will allow flybys of both Centauri A and B en route to Proxima itself. Much brighter than the red dwarf Proxima Centauri, Centauri A and B serve as ‘photon bumpers’ to slow the spacecraft, dropping it from the 13,800 kilometers per second of cruise to 1280 km/sec.

Launching from our Solar System involves the Sun’s photons alone. The numbers the authors put forth show a graphene sail, some 316 meters on a side, closing to within 5 solar radii and receiving enough of a ‘sundiver’-style boost to reach 4.6 percent of lightspeed. The sail takes 95 years to make the crossing to Centauri A, where it uses both photon pressure and the gravitational pull of the star to reduce speed. A second encounter, with Centauri B, drops the sail to 1280 km/sec for transfer into a bound orbit at Proxima, one that could gradually be adjusted into a planetary orbit around Proxima b.

The paper calculates 46 years to make the crossing between the AB binary and Proxima, making for a total travel time of 141 years. That’s a good bit more than the lifetime of a researcher, the figure often cited as acceptable for a deep space mission, but if we abandon that preconception, the advantages are considerable. From the paper:

In a more general context, photogravitational assists of a large, roughly 10⁵ m² = (316 m)²-sized graphene sail could (1.) decelerate a small probe into orbit around a nearby exoplanet and therefore reduce the technical demands on the onboard imaging systems substantially; (2.) in principle allow sample return missions from distant stellar systems; (3.) avoid the necessity of a large-scale Earth-based laser launch system by instead using the sun’s radiation at departure from the solar system; (4.) limit accelerations to about 1,000 g compared to some 10,000 g invoked for a 1 m² laser-riding sail; and (5.) leave of the order of 10 gram for the sail’s reflective coating and equipment.

These are powerful advantages, especially as they forgo the need for a phased laser array on Earth as the launch system (although it should be pointed out that such an array, once built, would have myriad uses for exploration in the Solar System as well as interstellar applications). And the prospect of a platform in another star system, able to return data through years of close observation, is a huge incentive. It could be argued that we are far from being able to craft the graphene sail depicted in this paper, but several decades of technological development could well make graphene our tool of choice for sail missions.


Image: An interstellar mission of an Autonomous Active Sail (AAS) to the nearest three stars. The sail uses an active reflective surface to change its direction and orientation via photogravitational assists from the stars, including the Sun. A light 90-gram sail could take nearly 100 years to reach Alpha Centauri A and another 46 years to Proxima Centauri. Many engineering challenges will need to be solved to pack enough communication and science instruments into such light but wide interstellar probes. Credit: PHL @ UPR Arecibo.

The paper points out that the maximum injection speed at Centauri A for a photogravitational assist to Centauri B and then Proxima depends on the mass-to-surface ratio of the sail, the idea being to maximize the photon force on the sail and yield the highest decelerations. But can even a graphene sail handle the conditions this one would be exposed to? Returning to the paper:

Close stellar encounters necessarily invoke the risk of impacts of high-energy particles and of thermal overheating. On the one hand, impacts of high-energy particles could damage the physical structure of the sail, its science instruments, its communication systems, or its navigational capacities. On the other hand, if those impacts could be effectively absorbed by the sail, they could even help to decelerate it. As shown in Section (3), heating from the stellar thermal radiation will not have a major effect on a highly reflective sail. However, the electron temperature of the solar corona is > 100,000 K at a distance of five solar radii. The Solar Probe Plus (planned launch in mid-2018) is expected to withstand these conditions for tens of hours (Fox et al. 2015), although the shielding technology for an interstellar sail would need to be entirely different (Hoang et al. 2016), possibly integrated into the highly reflective surface covering.

Also present is the issue of stellar alignments. Heller and Hippke’s analysis found that the optimal conditions for a photogravitational assist to work at Centauri A are when all three Centauri stars are in the same plane as the incoming sail, which minimizes the deflection angle required by the sail to reach the next star, while maximizing the injection speed for the first encounter (this, in turn, makes for the fastest possible travel time from Earth):

Proxima is not located in the orbital plane of the AB binary, but for a distant observer all three stars align about every 79.91 yr (the orbital period of the AB binary). From the perspective of an incoming probe from Earth, the alignment occurs near the time of the AB periastron, the next of which will take place on June 24, 2035 (Beech 2015).

Thus we can define a launch window involving the position of the Centauri stars. The next alignment comes in 2035, clearly out of reach for a probe with such long travel times, and there is another in 2115, likewise unreachable because we would have to launch in 2020 to take advantage of it. The 86-gram ‘fiducial’ sail analyzed by Heller and Hippke would thus have a launch window at the end of this century to make it to its destination for the following alignment, though dropping to a 57-gram sail of equivalent size would allow faster travel times, and in some cases a launch within 25 years. Thus we have a bit of flexibility depending on advances in materials science and lightsail technologies in the intervening years.
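The window arithmetic follows directly from the numbers quoted: alignments recur every 79.91 years from mid-2035, and the assist at Centauri A must coincide with one, so a launch must precede an alignment by the crossing time to that star. A sketch under those assumptions (the 95-year cruise is the fiducial sail’s figure; a lighter sail shortens it and shifts the windows):

```python
# Launch windows implied by the alignment geometry described above
ALIGN_PERIOD = 79.91      # yr, orbital period of the alpha Cen AB binary
FIRST_ALIGN = 2035.5      # next alignment, near the June 2035 periastron
CRUISE_TO_A = 95          # yr, Sun -> Centauri A for the fiducial sail

# Arrive at Centauri A just as the three stars align
windows = [FIRST_ALIGN + k * ALIGN_PERIOD - CRUISE_TO_A for k in range(5)]
print([round(w) for w in windows])  # includes ~2020 and ~2100
```

The ~2020 entry is the missed window the text describes; the next, around 2100, is the end-of-century window for the fiducial sail.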

There is a good deal more to discuss, and rather than trying to cram everything into a single post, I want to go deeper into the photogravitational assist idea tomorrow, with renewed attention to the sail itself and in particular the question of navigation. We’ll also entertain an interesting thought — what would such a sail look like to any observers on Proxima b as it approached their star system?

The paper is Heller, R., & Hippke, M. (2017), “Deceleration of high-velocity interstellar photon sails into bound orbits at α Centauri,” The Astrophysical Journal Letters, Volume 835, L32, DOI:10.3847/2041-8213/835/2/L32 (preprint).



Is it possible to use natural phenomena to boost signals to the stars? In the essay below, Bill St. Arnaud takes a look at the possibilities, noting that civilizations that chose to broadcast information might select a method that mimics by electromagnetic means what the classic von Neumann probe would achieve with physical probes. St. Arnaud is an optical communications engineer, a network and green IT consultant who works with clients on a variety of subjects such as next generation research/education and Internet networks. His interest in practical solutions — free broadband and dynamic charging of electric vehicles — to reduce greenhouse gas emissions is matched by a fascination with interstellar matters, particularly SETI.

By Bill St. Arnaud


In their recent post on Centauri Dreams, Roger Guay and Scott Guerin make a compelling argument that fading electromagnetic halos may be all that’s left for us to discover of an extraterrestrial civilization. They argue that there is only a short window in the evolution of a sufficiently intelligent species during which it will broadcast its presence through inefficient electromagnetic transmission of radio, TV and radar signals.

Current SETI searches assume that an advanced civilization will use extremely powerful omnidirectional transmitters or highly directional and focused beacons targeted at our solar system. The challenge with either approach is the fact that successful one-way communication between intelligent species is dependent on the “L” term in the Drake equation. L represents the length of time for which such civilizations release detectable signals into space. If L is relatively short then the possibility of two separate intelligent civilizations being coincident in time to send and receive a signal is very small, as demonstrated by Guay-Guerin. So even though there may have been many intelligent civilizations we will probably never be aware of their existence.

On the other hand, Stephen Webb, in his book If the Universe Is Teeming with Aliens… Where Is Everybody?, argues that we may be the only intelligent civilization in our galaxy, if not the known universe. This is often referred to as the Rare Earth Hypothesis. Given the large multiplier of improbabilities from the creation of simple life, through the prokaryotic-eukaryotic transition, to the many divergent evolutionary pathways leading to technology-savvy beings, Webb argues that the odds of this being replicated elsewhere in the universe are extremely low.

The bottom line from both Guay-Guerin and Stephen Webb is that, regardless of whether the universe is teeming with advanced civilizations or home to only a very few, the probability of detecting a SETI signal by conventional means is very limited.

Another line of reasoning in SETI suggests that we look for physical artifacts as well as electromagnetic signals. These artifacts could include devices like Von Neumann probes and the physical remains of past advanced civilizations.

A Von Neumann probe is a self-replicating spacecraft that would travel from stellar system to stellar system through a galaxy, extracting raw materials from planets to create replicas of itself. These replicas would then be sent out to other stellar systems.

Searching for small physical artifacts such as Von Neumann probes would seem even more daunting than looking for the proverbial “needle in a haystack” electromagnetic signals. If a civilization were advanced enough to launch artifacts through space, you would think that at some earlier stage in its existence it would have deployed electromagnetic beacons or omnidirectional broadcasts. Of course, this assumes that they want to make their presence known to other advanced civilizations.


Von Neumann Signaling

But perhaps there is another approach to SETI that avoids many of the challenges of looking for physical artifacts and the limitation of a small L in the Drake equation. Maybe we can look for “electromagnetic artifacts.” These can be thought of as “virtual” Von Neumann probes: instead of having physical devices replicate and propel themselves through the galaxy, electromagnetic signals are transmitted whose properties are designed to be amplified and replicated by natural stellar and physical processes. Such natural processes might include gravitational lensing to refocus signals and stellar lasers or masers to amplify a given signal.

Such self amplifying and replicating electromagnetic signals are different from the normal transmissions used in beacons in that they are not intended as point-to-point communications. Like Von Neumann probes, they are expected to propagate randomly through a galaxy, using passing stellar systems to amplify and replicate the original transmission. This capability might allow electromagnetic Von Neumann probes to propagate throughout a galaxy much faster than physical probes. The advantage of a low cost self amplifying and replicating signal is that only one replicated signal in a billion or a trillion needs to multi-hop across many hundreds of stars and be detected by another civilization.

Gravitational lensing has been used in astronomy for some time. In an interesting post, Claudio Maccone calculates that by using gravitational lensing, with satellites at the appropriate focal points of each stellar system, a detectable radio signal could be sent from our Solar System to Alpha Centauri using less than 10⁻⁴ watts, i.e. one tenth of a milliwatt!


Normally a signal sent from Earth would not benefit from gravitational lensing, as the focal point is too far out (well past Pluto). The signal might be bent, but otherwise it would quickly suffer dispersion like any other signal. To achieve lensing, a signal must pass both sides of the Sun at roughly the same time, with the wavefront recombining coherently on the far side.
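The “too far out” point can be made quantitative. A light ray grazing the solar limb is deflected by 4GM/(Rc²), which puts the minimum focal distance of the Sun’s gravitational lens at F = R²c²/(4GM); evaluating with standard solar constants:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
AU = 1.496e11        # astronomical unit, m

# A grazing ray is bent by 4GM/(R c^2); it crosses the optical axis at
# F = R / (bend angle) = R^2 c^2 / (4 G M)
focal_m = R_SUN**2 * C**2 / (4 * G * M_SUN)
print(focal_m / AU)  # ~550 AU, far beyond Pluto's ~40 AU orbit
```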

One solution would be to put satellites at the focal point, as Maccone has suggested. But an easier solution is to use an Earth-based phased array antenna. A signal generated by a phased array can be shaped with a wavefront that looks as if it originated from the Sun’s focal point, or even from a more distant point. One might want to use a more distant (or closer) artificial focal point in order to use the gravitational lens of a more distant star; i.e., launch our signal so that it is deliberately out of focus (but collimated) by our Sun yet comes into focus at a distant star. These are the same principles used in multi-lens cameras or telescopes. The converging point of the signal could be at the focal point of a distant star, or perhaps even of a multi-hop star.
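To make the wavefront-shaping idea concrete: for an emitted beam to look as though it diverged from a virtual focal point a distance d behind the array, an element offset x from the axis must fire later by the extra light-travel time, roughly x²/(2dc) in the paraxial limit. A minimal sketch with a hypothetical 64-element array (the element count, spacing, and 550 AU focal distance are illustrative assumptions, not a design):

```python
import numpy as np

C = 299_792_458.0        # speed of light, m/s
AU = 1.495978707e11      # astronomical unit, m

# Hypothetical 1-D phased array: 64 elements spaced 10 m apart
x = (np.arange(64) - 31.5) * 10.0

# Virtual focal point 550 AU behind the array (near the Sun's minimum
# gravitational-lens focal distance). Delaying each element by the extra
# paraxial path x^2/(2d), converted to light-travel time, shapes the
# outgoing wavefront as if it had diverged from that point.
d = 550 * AU
delays = x**2 / (2 * d * C)     # seconds, relative to the on-axis element

print(delays.max())             # a few attoseconds of required offset
```

The vanishingly small delay spread is itself informative: at lensing distances the required wavefront curvature is almost flat, so the practical challenge is phase stability across the array rather than gross timing.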

Once you have a collimated signal with a coherent wavefront and wide aperture (i.e. the Sun’s diameter) you could in theory hop many stars that are in line with our orbital plane (or will be by the time the signal gets there). You could also steer the signal up and down a little bit from the orbital plane with a phased array antenna.

Additional amplification could use natural masers/lasers in our Sun or a distant star. There are several suitable natural maser frequencies; the choice of frequency will depend on the types of stars we are aiming at.

Stellar masers have been known for some time. A maser emission may be created in molecular clouds, comets, planetary atmospheres, and stellar atmospheres. They are frequently used in radio astronomy as they provide important information on distant stellar objects, such as temperature, velocity, etc. The first “natural” laser in space was detected by scientists on board NASA’s Kuiper Airborne Observatory (KAO) in 1995 as they trained the aircraft’s infrared telescope on a young, very hot, luminous star in the constellation Cygnus. Since then many other examples of both planetary and stellar lasers have been found.

The problem with natural masers/lasers is their noise level. The same is true of gravitational lensing if the signal passes through, or close to, the corona. To extract the signal one would need a reference clock, and this would be the telltale sign that the signal is artificial. I would theorize that a good reference clock would be a distant, highly regular quasar.

In effect we have created a beacon, but rather than looking in the water hole, a receiving civilization would have to look at the known natural maser/laser frequencies and then autocorrelate the signal to see if it can extract a reference clock. This is the same technique we use in very long baseline interferometry with deep space radio dishes.
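The clock-extraction step is straightforward to illustrate: a hidden periodic reference survives correlation averaging while uncorrelated stellar noise does not. A synthetic sketch (the pulse period and noise level are arbitrary choices for demonstration):

```python
import numpy as np

rng = np.random.default_rng(7)
n, period = 4096, 64

# A weak periodic "clock" (one pulse every 64 samples) buried in noise
clock = np.zeros(n)
clock[::period] = 1.0
received = clock + 0.3 * rng.standard_normal(n)

# Circular autocorrelation via FFT; peaks appear at multiples of the period
spec = np.fft.rfft(received)
acf = np.fft.irfft(spec * np.conj(spec), n)

# The strongest nonzero-lag peak reveals the hidden clock period
lag = int(np.argmax(acf[period // 2 : n // 2])) + period // 2
print(lag)  # a multiple of the 64-sample period
```

In practice one would search many candidate maser lines with far longer integrations, but the peak-at-the-period signature is the same.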

Now the interesting thing about natural masers/lasers is that they can amplify a given signal not only along the line of propagation but in other, orthogonal directions as well. If the signal maintains its coherent wavefront (this still needs to be verified), then a given signal can be replicated in many directions from a given star. If it is still collimated, it would also look like a beacon pointed in some unknown, random direction. A single star could produce many beacons, like a disco ball, based on the original transmission.

On a small scale, real world examples of self amplifying and replicating electromagnetic signals already exist. They are called Long Delayed Echoes (LDEs). They were first discovered in 1927 by amateur radio enthusiasts who noticed echoes of their original radio transmissions delayed by up to 40 seconds.

Up to now I have only been talking about signaling from our limited knowledge. I suspect there are other stellar phenomena, like long delayed echoes, that could be used to replicate and amplify signals.

There is no clear agreement on what causes LDEs, but there are several hypotheses on some possible natural phenomena that may enable electromagnetic echoes. These include such things as reflections from distant plasma clouds originating from the sun, magnetosphere ducting, mode conversion and four wave mixing, etc. While these natural phenomena may not be suitable processes for interstellar electromagnetic transmission they do demonstrate the possibility that perhaps equivalent stellar processes could be used on a larger scale to amplify and replicate electromagnetic signals much greater distances.

Given that they depend on natural physical processes for amplification and replication, the originating transmission will likely not need to be especially powerful or directional. It is conceivable that low power transmissions are all that is required to launch a self amplifying and replicating electromagnetic probe. Most importantly, with electromagnetic replication a single instance of the signal may be replicated thousands or millions of times as it propagates through the galaxy or the universe. Compared with physical probes, replication could accelerate on an exponential scale, increasing the probability of detection, particularly in a ‘rare Earth’ situation.

Issues to Be Surmounted


Although self amplification and replication sound like an interesting idea, there are a number of theoretical and physical challenges that still must be addressed. How, for example, do we account for the proper motion of our Sun relative to distant stars? Will any such signal just sweep by like a beam from a lighthouse, making detection near impossible (as, perhaps, with the ‘Wow!’ signal)? Other issues include accuracy and phase noise in the phased array antenna: how precisely can we control a given signal?

Thermal noise in the stellar atmospheres to be used for laser/maser gain is clearly a major issue. The “gain” of a stellar laser or maser is also very limited, as there is no resonant cavity. There is also a host of well-known problems with interstellar electromagnetic signaling, such as attenuation, dispersion, and group delay. In addition, the proper motion of our solar system, and of any intermediate amplifying and replicating stellar system, would seem to make detection difficult.

To address these limitations in detecting such a signal it would be useful to explore how we might deploy a self replicating electromagnetic signal given our current technology limitations. Many techniques currently being used in modern radio and optical communication systems could be deployed to launch a self replicating and amplifying electromagnetic probe.

Clearly an external reference clock or coding reference would be required to extract any signal that was amplified by a stellar laser/maser as the inherent stellar noise would mask any external signal. A quasar may provide such a reference signal. Phased array antennae and signal preconditioning could be used to take advantage of gravitational lensing without placing transmitters at the lens focal point.

Gravitational lenses also act as gradient amplifiers, and with time delay from two phased array sources it might be possible to regenerate a given signal (timing, shaping, etc.) using all-electromagnetic techniques, a process now largely done by electronics. By constantly steering the phased array transmitter(s), a signal could be directed like a beacon at nearby stars aligned with our orbital plane, taking advantage of the Sun’s gravitational lens. Similarly, steering the phased array might allow a given signal to converge at the gravitational focal point of a nearby star, where it could be amplified and replicated by that star and propagated to even more distant stars.

With a little imagination and speculation on the future direction of these technologies a self amplifying and replicating electromagnetic Von Neumann probe might be within our technology capability. Once we have identified a plausible approach on how we would deploy such signals, the obvious next step would be to see if we can detect such signals.


Up to now we have always assumed that a distant civilization would want to send a direct beam at us, and so we have been exploring the part of the electromagnetic spectrum that has the least absorption and attenuation. But if a distant civilization discovered it could use natural, low cost processes to amplify and replicate a signal, that would be its preferred route, especially if intelligence is a rarity in our galaxy. With laser/maser replication, millions or billions of signals could be traversing the galaxy, of which only one needs to be detected by another civilization. This would be a much cheaper approach, from an energy perspective, than building omnidirectional antennas, Dyson spheres, or beacons aimed at our Sun.

The assumption here is that a distant civilization wants to make contact with us, but I suspect that self replicating and amplifying signals would be transmitted for much more mundane reasons. If such a signal can be transmitted practically forever, really cheaply, then forget about contacting other civilizations: I want knowledge of my brief presence here on Earth to be preserved forever. Paradoxically, religion and belief in the hereafter may be a driving force for transmitting such signals!



A Contact between Civilizations in the 19th Century

by Paul Gilster on January 27, 2017

When we contemplate contact scenarios between ourselves and extraterrestrial civilizations, we can profit from remembering our own history. The European arrival in the Americas is often a model, but there are other events of equal complexity. In the essay below, Michael Michaud looks at America’s encounter with Japan to examine how we might react to a civilization not vastly more advanced in technology than our own. A familiar figure on Centauri Dreams, Michaud is now retired from an extensive diplomatic career that took him from director of the U.S. State Department’s Office of Advanced Technology to chairman of working groups at the International Academy of Astronautics that discuss SETI issues, along with posts as Counselor for Science, Technology and Environment at U.S. embassies in Paris and Tokyo. He is also the author of the seminal Contact with Alien Civilizations (Springer, 2007).

By Michael A.G. Michaud


In the literature about possible future contact with an extraterrestrial civilization, one of the most familiar presumptions is that the aliens will employ technologies vastly more advanced and more powerful than our own. Hollywood has given us binary images of such technologically empowered beings, depicting them as either benign altruistic teachers or as monsters who want to destroy us. There is a lot of room between those extremes.

A countervailing theory suggests that we are most likely to encounter a technological civilization closer to our own level, as the most advanced would ignore us or treat us as irrelevant. What might happen if we came into direct contact with a civilization whose technologies were only a century in advance of ours? Here is an Earthly example.

In 1852, U.S. President Millard Fillmore assigned U.S. Navy Commodore Matthew Perry to force the opening of Japanese ports to American trade. Perry was to deliver a letter from the President to the Emperor of Japan.

At the time, the only authorized port of entry into Japan was Nagasaki, where the Dutch maintained a trading post. The Japanese, forewarned by the Dutch that Perry’s ships were on their way from the United States, refused to change their 220-year-old policy of exclusion.

Perry’s mission to Japan was part of a much longer voyage around southern Africa to Asia, a showing of the American flag. After an eight-month journey with multiple port calls, Perry’s squadron of four ships reached the entry to Edo (now Tokyo) Bay on July 8, 1853.

Numerous Japanese fishing boats hastily retreated from the American ships, whose crews were at battle stations. Perry’s account reports that the fishermen seemed astonished to see the steam-powered American vessels proceed against the wind with their sails furled.

Japanese officials in boats approached the American ships several times, asking them to leave. The first boat bore large banners with characters inscribed on them. The Americans, who could not read Japanese, conjectured that this boat was a government vessel of some kind.

Another Japanese boat approaching the American ships carried a man holding up a scroll which he read aloud. He was admitted aboard to meet with a lower-ranking American officer. The scroll, later found to be a document in French, conveyed an order that the American ships should leave, reiterating that Nagasaki was the only place in Japan for trading with foreigners.

The crews of Japanese “guard boats” made several attempts to board Perry’s ships, but were repelled by Americans with pikes, cutlasses, and pistols. No casualties were reported.


Image: American Navy Commodore Matthew Perry arrives in Japan, August 7, 1853. Credit: Tsukioka Yoshitoshi, woodblock print.

Through his officers, Perry warned that he would not permit Japanese guard boats to remain close to his ships. If they were not immediately removed, he would disperse them by force. When a few guard boats remained, Perry sent armed men to drive them away.

The Japanese made token shows of force on shore, firing out-of-date cannons and launching rockets. None seemed to be aimed at the American ships.

Perry warned the Japanese that, if they chose to fight, he would destroy them. In a demonstration of force that did no physical harm, he ordered blank shots to be fired from his squadron’s 73 cannons.

The Americans discovered that Japanese defenses were more for show than combat. Using a telescope to observe forts on headlands, they found some in an unfinished state. Screens had been stretched in front of the breastworks, possibly with the intention of making “a false show of concealed force.” The narrative’s writer observed that the Japanese had not calculated on the “exactness of view” afforded by a telescope.

Perry strove to impress the Japanese with “a just idea” of the power and superiority of the United States. He described his demands as “a right,” and not an attempt to solicit a favor. He expected “those acts of courtesy which are due from one civilized nation to another.” If the Japanese assumed superiority, that was a game he could play as well as they.

Perry refused to meet with lower level Japanese officials. His narrative observes that the more exclusive he made himself, and the more unyielding he was, the more respect “these people of forms and ceremonies” would award him.

A Japanese man who was described as a Governor came to Perry’s flagship, where he met with American officers while Perry remained invisible. (The visitor actually was the Deputy Governor.) At one point, lower ranking American officers dealing with the Japanese elevated Perry’s rank from Commodore to Admiral.

As one might expect, language was a problem. The Americans had one interpreter who knew Chinese and another who knew Dutch, but no one who spoke Japanese. The Japanese provided an interpreter who spoke Dutch; the two sides used a third country’s language.

The Americans warned that if the Japanese did not appoint a suitable person to receive the President’s letter and other documents from the American capital, Perry’s forces would go ashore in sufficient force and deliver them in person (by implication, to the Emperor). They pointed out that one hour’s steaming would bring Perry’s ships in sight of Edo (Tokyo). Perry did send one of his ships closer to Edo, anticipating that this would alarm the Japanese authorities and induce them to give a more favorable answer to his demands.

When Perry sent out boats to survey the coastline, Japanese vessels carrying armed men rushed toward them. The American officer in charge of the surveying party gave orders for his men to arm their weapons. Seeing the armed sailors, the Japanese avoided a direct confrontation.

At last, the real Governor visited Perry’s ship, exhibiting a letter from the Emperor that met Perry’s demand for a high level Japanese official to accept the letter from the American President. The Governor provided the Americans with a copy in Dutch.

Arrangements were made for a ceremony on shore. Japanese officials asked Perry to move his ships close to a beach in modern day Yokosuka (where there is now an American naval base). There he would be allowed to land.


Image: Japanese 1854 print describing Commodore Matthew Perry’s “Black Ships.” Source: Wikimedia Commons.

Perry’s approach the next morning was announced by American guns. In a formal ceremony under a large, decorated tent, he presented the documents from Washington. He promised to return the following year to receive the Japanese reply.

Perry commented in his report that the Japanese officials showed a quiet dignity of manner and never lost their self-possession. However backward the Japanese might be in practical science, he wrote, the best educated among them were “tolerably well-informed of progress among more civilized nations.” Perry expressed a hope in his account that “our attempt to bring a singular and isolated people into the family of civilized nations may succeed without resort to bloodshed.”

Perry returned to Japan in February 1854 with ten ships and 1,600 men, putting even more pressure on the Japanese. After initial resistance, he was permitted to land at Kanagawa, near present-day Yokohama. A month of negotiations led to the Convention of Kanagawa, the first treaty between Japan and the United States.

The American negotiator on the spot had greater latitude in that pre-radio era, when communication with capitals was slow. Unfortunately, Perry was mistaken in his belief that this agreement had been made with the Emperor’s representatives. He did not understand the position of the Shogun, the de facto ruler of Japan.

What lessons can we draw from this? Most obvious is that finding a shared language would be difficult. Points might be made through actions or images rather than words.

Another lesson is that this meeting of technological unequals did not lead to armed conflict. The Americans used their superior weapons and propulsion technologies to intimidate, not to damage or conquer. Despite their threats, they acted with restraint, and got their way without violence.

The Japanese, though making only weak shows of force, insisted on being treated as diplomatic equals through rituals and symbols. The Americans, as the more powerful civilization, maintained a balance between intimidation and respect.

We may want to keep histories like this in mind as we weigh possible contact scenarios. Contact may not be between cultures separated by millennia of scientific and technological development.

Readers interested in a complete account of this event may wish to look at Commodore M.C. Perry, Narrative of the Expedition to the China Seas and Japan, 1852-1854, reprinted by Dover in 2000. As Perry’s visit took place in the pre-photographic age, that report was illustrated with lithographs and woodcuts by American on-board artists.



Wolf 1061 Unlikely to Host Habitable Worlds

by Paul Gilster on January 26, 2017

A key way to learn more about a given exoplanet is to home in on the properties of its star. So argue Stephen Kane (San Francisco State University) and colleagues in a new paper slated for the Astrophysical Journal. The star in question is Wolf 1061 (V2306 Ophiuchi), an M-class red dwarf some 13.8 light years away in the constellation Ophiuchus. In December of 2015, Australian astronomers announced the discovery of three planets around the star.

Drawn out of data from the HARPS spectrograph at La Silla, the planets are all super-Earths, their radial velocity data supplemented with eight years of photometry from the All Sky Automated Survey. All three seem likely to be rocky planets, but firming this up would take transits, which the discovery team at the University of New South Wales estimated might occur, with a likelihood of about 14 percent for the inner world, dropping to 3 percent for the outer.
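As a back-of-the-envelope check, the geometric probability that a randomly oriented, roughly circular orbit happens to transit is about the stellar radius divided by the orbital distance. Here is a minimal sketch of that scaling; the 0.3207 solar-radius figure comes from the CHARA measurement discussed below, but the semi-major axis used for the inner planet is an assumed, illustrative value the article does not quote:

```python
R_SUN_AU = 0.00465  # one solar radius expressed in astronomical units

def transit_probability(r_star_rsun, a_au):
    """Geometric transit probability ~ R_star / a for a circular orbit."""
    return (r_star_rsun * R_SUN_AU) / a_au

# Stellar radius of 0.3207 R_sun (CHARA); the semi-major axis of the
# inner planet is an assumed value for illustration only.
p = transit_probability(0.3207, 0.036)
print(f"geometric transit probability ~ {p:.1%}")
```

Even for a close-in planet around a small star, the geometric odds come out at only a few percent, which is why transit searches lean so heavily on surveying many stars at once.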

Kane and team investigate the transit question in light of the fact that two recent papers have produced sharply different orbital periods for the outermost planet here, though both find planet c near or within the habitable zone, which itself depends on the star’s luminosity and effective temperature. Both recent papers see reasonably high transit probabilities, but Kane’s work rules out transits of the two inner worlds, leaving open a possibility for the outer.

The researchers have used observations from the Center for High Angular Resolution Astronomy (CHARA) interferometric array at Mount Wilson Observatory near Los Angeles. From this emerges a precise stellar radius measurement (0.3207 ± 0.0088 R☉), from which the team has calculated the star’s effective temperature and luminosity. The photometry data reveal a stellar rotation period of 89.3 ± 1.8 days. The work has useful implications for upcoming space-based exoplanet studies. As the paper notes:

The assessment of host star properties is a critical component of exoplanetary studies, at least for the realm of indirect detections through which exoplanet discoveries thus far have predominantly occurred. This situation will remain true for the coming years during which the transit method will primarily be used from space missions such as the Transiting Exoplanet Survey Satellite (TESS), the CHaracterising ExOPlanet Satellite (CHEOPS), and the PLAnetary Transits and Oscillations of stars (PLATO) mission. Of particular interest are the radius and effective temperature of the stars since the radius impacts the interpretation of observed transit events and the combination of radius and temperature is used to calculate the extent of the HZ.

And indeed, it is through this painstaking analysis of stellar properties that the researchers have been able to calculate habitable zone boundaries for Wolf 1061 of 0.09–0.23 AU. Have a look at the paper’s Figure 8, which shows the Wolf 1061 system as diagrammed from above.


Image: Figure 8 from the paper shows the orbits of the planets overlaid on the habitable zone. The scale here is 1.0 AU to the side, with the ‘conservative’ habitable zone shown as light gray, and the more optimistic extension shown in dark gray. Credit: Kane et al.
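The habitable zone distances quoted above follow from the stellar luminosity through a simple inverse-square flux scaling: a star of luminosity L (in solar units) delivers an Earth-equivalent effective flux S_eff at a distance of sqrt(L / S_eff) AU. A rough sketch of the calculation, using the CHARA radius from the text; note that the effective temperature and the flux limits below are illustrative assumptions on my part, not values quoted in this article:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8    # solar radius, m
L_SUN = 3.828e26   # solar luminosity, W

def luminosity(r_rsun, teff):
    """Stellar luminosity (in L_sun) from radius and effective temperature."""
    r = r_rsun * R_SUN
    return 4 * math.pi * r**2 * SIGMA * teff**4 / L_SUN

def hz_boundary(l_lsun, seff):
    """Distance (AU) at which the star delivers effective flux seff,
    in units of the flux Earth receives from the Sun."""
    return math.sqrt(l_lsun / seff)

# Radius from CHARA; Teff of ~3300 K is an assumed value typical of
# mid-M dwarfs, and the flux limits are illustrative, not the paper's.
L = luminosity(0.3207, 3300)
inner = hz_boundary(L, 1.6)    # assumed optimistic inner-edge flux
outer = hz_boundary(L, 0.22)   # assumed optimistic outer-edge flux
print(f"L = {L:.4f} L_sun, HZ ~ {inner:.2f}-{outer:.2f} AU")
```

With these assumptions the star comes out at roughly one percent of the Sun’s luminosity, and the boundaries land close to the 0.09–0.23 AU range the paper derives, showing why a precise radius measurement matters so much for HZ work.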

Notice the outer planet (d), which passes briefly through the habitable zone at closest approach to the star before swinging back out along its eccentric orbit. Indeed, only 6 percent of its orbital period is spent within the habitable zone. Planet c spends 61 percent of its orbit within the habitable zone, but only under the optimistic assumptions for the HZ. Taking into account the recent orbital solutions for this system, both inner worlds are problematic:

…planet c is quite similar to the case of Kepler-69 c, which was proposed to be a strong super-Venus candidate by Kane et al. (2013). Indeed, both of the inner two planets, terrestrial in nature according to the results of both Wright et al. (2016) and Astudillo-Defru et al. (2016b), lie within the Venus Zone of the host star (Kane et al. 2014) and are thus possible runaway greenhouse candidates.
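The “percent of the orbital period within the HZ” figures above come from following an eccentric orbit through time, which means solving Kepler’s equation. Here is a minimal sketch of that calculation; the orbital elements below are hypothetical stand-ins (the paper’s actual elements for planet d are not quoted in this article), so the resulting fraction is illustrative only:

```python
import math

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration."""
    E = M
    for _ in range(50):
        step = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= step
        if abs(step) < tol:
            break
    return E

def hz_time_fraction(a, e, r_in, r_out, n=20000):
    """Fraction of the orbital period spent with r_in <= r <= r_out (AU).
    Sampling the mean anomaly uniformly samples uniformly in time."""
    inside = 0
    for i in range(n):
        M = 2 * math.pi * (i + 0.5) / n
        E = kepler_E(M, e)
        r = a * (1 - e * math.cos(E))  # star-planet distance
        if r_in <= r <= r_out:
            inside += 1
    return inside / n

# Hypothetical semi-major axis and eccentricity; the HZ limits are the
# 0.09-0.23 AU range quoted above.
frac = hz_time_fraction(a=0.21, e=0.55, r_in=0.09, r_out=0.23)
print(f"time fraction inside HZ: {frac:.0%}")
```

The key subtlety is that an eccentric planet moves fastest near perihelion, so it can dip inside the HZ at closest approach yet spend most of its period outside it, exactly the situation described for planet d.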

Measuring stellar parameters unlocks the boundaries of the habitable zone, allowing the researchers to study the planetary orbits and weigh the chances for liquid water on the surface. The results hardly favor the Wolf 1061 system as a promising candidate for life.

We find that, although the eccentric solution for planet c allows it to enter the optimistic HZ, the two inner planets are consistent with possible super-Venus planets (Kane et al. 2013, 2014). Long-term stability analysis shows that the system is stable in the current configuration, and that the eccentricity of the two inner planets frequently reduces to zero, at which times the orbit of planet c is entirely interior to the optimistic HZ. We thus conclude that the system is unlikely to host planets with surface liquid water.

The paper is Kane et al., “Characterization of the Wolf 1061 Planetary System,” accepted for publication at the Astrophysical Journal (preprint).



PROCYON: An Overview of Cometary Water

by Paul Gilster on January 25, 2017

The Japanese PROCYON spacecraft (Proximate Object Close flyby with Optical Navigation) has just given us an interesting case of repurposing a scientific instrument, not to mention drawing value out of a mission whose initial plans had gone awry. Launched together with JAXA’s Hayabusa 2 probe in December of 2014, PROCYON was to have flown by asteroid 2000 DP107 in 2016, but a malfunctioning ion thruster put an end to that plan.

Fortunately, PROCYON carried LAICA, a telescope that was put to use to study the Earth’s geocorona (the outermost layer of the atmosphere). Developed at Japan’s Rikkyo University, LAICA observes emissions from hydrogen atoms, a useful capability when turned to comet studies, as a team of researchers has now done with comet 67P/Churyumov-Gerasimenko. Water being the most abundant cometary ice, its release rate helps map activity on the comet and offers clues to how water was incorporated into comets in the early Solar System.


Image: The PROCYON spacecraft and comet 67P/Churyumov-Gerasimenko (Conceptual Image). Credit: NAOJ/ESA/Go Miyazaki.

The researchers, from the National Astronomical Observatory of Japan, University of Michigan, Kyoto Sangyo University, Rikkyo University and the University of Tokyo, set out to map the entire hydrogen coma of comet 67P, the same comet studied in such spectacular detail by the European Space Agency’s Rosetta mission. The challenge for Rosetta, however, was that, operating inside the cometary coma, it could not observe the entire coma structure.

None of this was in the original PROCYON mission, but it became clear that the spacecraft could be valuable in supplementing the Rosetta observations. Rosetta had studied specific areas on the comet, but the researchers wanted an estimate of the total amount of water released by the comet per second, which in turn demanded a model for the coma itself. If Rosetta could not provide the answer, PROCYON could study the coma in its entirety.

Adapting PROCYON/LAICA for a comet proved workable. The hydrogen atoms in a cometary coma come from water molecules ejected from the nucleus which are then broken apart by ultraviolet radiation from the Sun. Coma models based on these processes allowed the team to estimate the water release rate based on a brightness map of the hydrogen atoms.


Image: Processed and cropped hydrogen-Lyα image of the comet 67P/C-G in Rayleigh units (upper panel) taken by the LAICA telescope on September 13, 2015 UT and the hydrogen coma appearance predicted by a two-dimensional axisymmetric model of the atomic hydrogen coma (lower panel). The yellow dotted arrow in the lower panel indicates the direction to the Sun at the time of observation. Credit: NAOJ.

Working with observations of comet 67P’s entire hydrogen coma, the researchers were able to derive its absolute water production rates near the 2015 perihelion. This allowed them to test their developing models for the coma, which could be combined with the Rosetta results to estimate the total ejected mass of the comet during this period. Credit the high spatial resolution of the LAICA instrument and the pointing control of PROCYON for the result.

These are useful results, but let’s also look at what they imply about how we design our missions. PROCYON is a small cube about 60 cm on a side, weighing 65 kg. The low-cost mission has been able to support key elements of a much larger mission with additional observations in a way that JAXA believes to be a model for small spacecraft in the future.

As we continue to explore what we can do with CubeSats and other micro-designs, we can think in terms of ‘clusters’ of small spacecraft networking together to reduce overall risk and provide widely dispersed observing platforms for a variety of targets. Swarms of micro-sailcraft to the outer planets are just one scenario that may grow out of spacecraft interactions like these.

The paper is Shinnaka et al. 2017 “Imaging observations of the hydrogen coma of comet 67P/Churyumov-Gerasimenko in September 2015 by the PROCYON/LAICA,” Astronomical Journal, Volume 153, Issue 2, Article number 76 (24 January 2017). Abstract.



Probing the Surface of Ceres

by Paul Gilster on January 24, 2017

It doesn’t stretch credulity to hypothesize that the early Earth benefited from an influx of comet and asteroid material that contributed water and organic compounds to its composition. The surface of a world can clearly be affected by materials from other bodies in the Solar System. Now we’re learning that the dwarf planet Ceres may have a surface dusted by material from asteroid impacts. The findings come from a team of astronomers investigating Ceres with SOFIA, the airborne Stratospheric Observatory for Infrared Astronomy. The observatory is a highly modified 747SP aircraft carrying a 2.5m reflecting telescope.

The study shows that not just Ceres but other asteroids and dwarf planets may be coated with asteroid fragments, a result that adjusts our view of Ceres’ surface composition. After all, what we’re looking at may simply be the result of asteroid impacts in the early days of the Solar System’s formation. Three quarters of all asteroids, including Ceres, have been classified as type C (carbonaceous) on the basis of their colors, but the SOFIA infrared data show a substantial difference between the dwarf planet and C-type asteroids in nearby orbits.

Carbonaceous asteroids are dark (albedo in the range of 0.03-0.09, on a scale where a white, perfectly reflecting surface has an albedo of 1.0), with a composition depleted in hydrogen, helium and other volatiles. What SOFIA shows us is that Ceres doesn’t fit this model. Pierre Vernazza is a research scientist at the Laboratoire d’Astrophysique de Marseille:

“By analyzing the spectral properties of Ceres we have detected a layer of fine particles of a dry silicate called pyroxene. Models of Ceres based on data collected by NASA’s Dawn as well as ground-based telescopes indicated substantial amounts of water-bearing minerals such as clays and carbonates. Only the mid-infrared observations made using SOFIA were able to show that both types of material are present on the surface of Ceres.”


Image: Ceres’ surface is contaminated by a significant amount of dry material while the area below the crust contains essentially water-bearing materials. The mid-infrared observations revealed the presence of dry pyroxene on the surface probably coming from interplanetary dust particles. The internal structure of the dwarf planet Ceres was derived from NASA Dawn spacecraft data. Credit: SETI Institute.

Interplanetary dust particles, according to this SETI Institute news release, are the most likely source for the pyroxene, and are also implicated as having accumulated on other asteroid surfaces. Ceres thus takes on the coloration of some of its drier neighbors, while actually housing more substantial resources of water below. The larger picture is that infrared observations may help us better understand an asteroid’s true composition. Vernazza even speculates that ammoniated clays mixing with watery clay on Ceres may point to an origin in the outer parts of the Solar System, with migration occurring later in the dwarf planet’s life.

“The bottom line is that seeing is not believing when it comes to asteroids,” says Franck Marchis, senior planetary astronomer at the SETI Institute, a researcher who collaborated in this project. “We shouldn’t judge these objects by their covers, as it were.”



Jupiter in the Public Eye

by Paul Gilster on January 23, 2017

Have a look at Jupiter as seen by the Juno spacecraft on its third close pass. A view as complex as the one below reminds us how images can be manipulated to bring out detail. This happens so frequently in astronomical imaging that it’s easy to forget such a view is not necessarily what the human eye would see; we always have to check how a given image was processed. In this case, we’re looking at the work of a ‘citizen scientist,’ one Eric Jorgensen, who enhanced a JunoCam image to highlight the cloud movement.


Image: This amateur-processed image was taken on Dec. 11, 2016, at 1227 EST (1727 UTC), as NASA’s Juno spacecraft performed its third close flyby of Jupiter. At the time the image was taken, the spacecraft was about 24,400 kilometers from the gas giant planet. Credit: NASA/JPL-Caltech/SwRI/MSSS/Eric Jorgensen.

The image shows a region of Jupiter southeast of what is known as the ‘pearl,’ one of eight rotating storms at 40 degrees south latitude on the planet, a region of vast and roiling turbulence. Citizen science efforts like Planet Hunters, SETI@Home and Galaxy Zoo have brought private individuals into contact with scientific data and fostered interest in a wide range of sciences, with Planet Hunters rising to particular visibility thanks to its work with Boyajian’s Star and the still mysterious light curves observed there.

The Juno mission is delving into this realm with the announcement that on the spacecraft’s February 2 pass of Jupiter, the public will have a voice in the selection of targets for the imaging team. As JPL notes in this news release, JunoCam will begin taking pictures as Juno approaches Jupiter’s north pole. Scientists have to keep an eye on onboard storage limitations as they consider which images to collect with JunoCam. Each close pass (‘perijove’) happens in a 2-hour window as the spacecraft goes from the north pole of the giant planet to the south pole, with JunoCam imaging a circumscribed strip of territory.

The voting for the February 2 flyby is still open, and the process will repeat: each orbit gets its own voting page. Each perijove on Juno’s 53-day orbit reserves space for two polar images, and within the remaining capacity the public can help prioritize particular points of interest, in accordance with the science goals the mission is trying to meet. Several pages at the voting site will be devoted to unique points of interest that will be within range of JunoCam’s field of view during the next close approach. Raw images will then be made available for processing.

“The pictures JunoCam can take depict a narrow swath of territory the spacecraft flies over, so the points of interest imaged can provide a great amount of detail,” said Juno co-investigator Candy Hansen (Planetary Science Institute). “They play a vital role in helping the Juno science team establish what is going on in Jupiter’s atmosphere at any moment. We are looking forward to seeing what people from outside the science team think is important.”

Bear in mind that JunoCam was included on the mission because, working in color and visible light, it could offer a wide field of view that would, among other things, spur public interest and involvement. So it’s not surprising to see this citizen science angle being brought forward, offering engagement not just from amateur scientists but students worldwide. Building public support is also a key component in keeping up the pressure for better space funding.

The February 2 flyby makes its closest approach to Jupiter at 0758 EST (1258 UTC), with the spacecraft about 4300 kilometers above the cloud tops. We’ll see Jupiter up close once again through a spacecraft’s lens, translated for us into images that mimic what we would see with our own eyes before we get to work processing them. If you’re interested in having a say on future JunoCam targets, click here for information on how to get involved.


Image: Jupiter’s south pole as seen during perijove 3, in an image processed by Julien Potier (Planetario Silvia Torres Castilleja, Ags, Mexico), rotated, cropped to get rid of yellowish band, processed with RGB levels, brightness, contrast and HDR Toning.