
‘Oumuamua: Future Study of Interstellar Objects

‘Oumuamua continues to inspire questions and provoke media attention, not only because of its unusual characteristics, but because of the discussion that has emerged on whether it may be a derelict (or active) technology. Harvard’s Avi Loeb examined the interstellar object in these terms in a paper with Shmuel Bialy, one we talked about at length in these pages (see ‘Oumuamua, Thin Films and Lightsails). The paper would quickly go viral.

Those who have been following his work on ‘Oumuamua will want to know about two articles in the popular press in which Loeb answers questions. From the Israeli newspaper Ha’aretz comes an interview conducted by Oded Carmeli, while at Der Spiegel Johann Grolle asks the questions. From the latter, a snippet, in which Grolle asks Loeb what the moment would be like if and when humanity discovers an extraterrestrial intelligence. Loeb’s answer raises intriguing questions:

I can’t tell you what this moment will look like. But it will be shocking. Because we are biased by our own experiences. We imagine other beings to be similar to us. But maybe they are radically different. For example, it is quite possible that we won’t encounter the life forms themselves, but rather only their artifacts. In any case, we ourselves are not designed for interstellar journeys. The only reason astronauts survive in space is that they are under the protection of the Earth’s magnetic field. Even when traveling to Mars, cosmic rays will become a major problem.

Image: Avi Loeb (center) at the Daniel K. Inouye Solar Telescope (DKIST) in June of 2017. Credit: Avi Loeb.

Intriguing, given our conversations here about artificial intelligence and the emergence of non-biological civilizations. After all, we are in the nearby galactic company of numerous stars far older than our own. Would robotic beings supplant their biological cousins, or would the scenario be more like biological beings using artilects as their way of achieving interstellar travel? Either way, Loeb’s guess is that our first evidence will be an encounter with technological debris. The interview goes on to cover the ‘Oumuamua story’s outline thus far.

Meanwhile, two new papers from Loeb have appeared, the first written with John C. Forbes. “Turning Up the Heat on ‘Oumuamua” looks at the interstellar object, whatever it is, from another angle. If we were to discover more objects like this, how could we best analyze them? In earlier work with Manasvi Lingam, Loeb examined the population of interstellar objects that could be trapped within the Solar System, slung by Jupiter into parabolic orbits around the Sun.

The number could be as high as 6,000, a figure based on the deduced abundance of interstellar objects given the fact that we observed ‘Oumuamua as early as we did with instrumentation of the sensitivity of the Pan-STARRS telescopes. The paper references work on the overall abundance of these objects performed in 2017 by Greg Laughlin (UC-Santa Cruz) and Konstantin Batygin (Caltech), as well as a 2018 paper from Aaron Do (University of Hawai’i).

Learning more could involve a flyby mission, says Loeb, but there may be a better way:

In our new paper with John Forbes we proposed instead studying the vapor produced when such objects pass close to the Sun and get evaporated by the intense solar heat. We calculated the likelihood of that happening, keeping in mind that ‘Oumuamua did not show any signs of a cometary tail or carbon-based gas since it did not pass close enough to the Sun.

We used the known orbit of ‘Oumuamua and assumed a population of similar interstellar objects on random orbits in the vicinity of the Sun. This provided us with a likelihood of passages close to the Sun.

These objects would be expected to show a high orbital inclination, and assuming a population of this size, they should be readily detectable by future telescopes, such as the forthcoming Daniel K. Inouye Solar Telescope (DKIST). Another marker of interstellar origin, according to the paper, would be anomalous oxygen isotope ratios. If we can find interstellar objects that pass close to the Sun, we should be able to learn something about their composition. Loeb and Forbes use Monte Carlo methods to determine that such objects collide with the Sun once every 30 years, while about two should pass within the orbit of Mercury each year.
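Those rates can be sanity-checked with a simple flux-times-cross-section calculation. This is not the Monte Carlo method of the Forbes and Loeb paper, just a back-of-envelope sketch; the number density (~0.2 objects per cubic AU, the order of magnitude suggested by post-'Oumuamua abundance estimates) and the 'Oumuamua-like excess speed of 26 km/s are assumptions for illustration:

```python
# Back-of-envelope estimate of Sun-grazer rates for interstellar objects.
# Rate = n * v_inf * sigma, where sigma is the gravitationally focused
# cross-section for passing within r_close of the Sun:
#   sigma = pi * r_close^2 * (1 + v_esc(r_close)^2 / v_inf^2)
import math

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.96e8      # solar radius, m
AU    = 1.496e11    # astronomical unit, m
YEAR  = 3.156e7     # seconds per year

n_ism = 0.2 / AU**3  # ASSUMED number density of interstellar objects, m^-3
v_inf = 26e3         # ASSUMED velocity at infinity, m/s ('Oumuamua-like)

def encounter_rate(r_close):
    """Rate (per second) of passages within r_close of the Sun."""
    v_esc_sq = 2 * G * M_SUN / r_close              # escape speed squared at r_close
    sigma = math.pi * r_close**2 * (1 + v_esc_sq / v_inf**2)
    return n_ism * v_inf * sigma

sun_hits = encounter_rate(R_SUN) * YEAR             # direct solar collisions
mercury_passes = encounter_rate(0.387 * AU) * YEAR  # passages inside Mercury's orbit

# Both come out within a factor of ~2 of the quoted values
# (one collision per ~30 yr; ~2 Mercury-crossers per yr).
print(f"Sun collisions: ~1 every {1 / sun_hits:.0f} yr")
print(f"Passages inside Mercury's orbit: ~{mercury_passes:.1f} per yr")
```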

Usefully, spectroscopic study of cometary tails is a well-practiced science. As the paper notes:

Generally these studies are able to classify comets into different groups depending on the inferred production rates of H2O, C2, CN, and NH2 as well as dynamical properties, which likely reflect formation in different parts of the protoplanetary disk (Levison 1996)… The promise of using close encounters with the sun to learn about extrasolar small bodies is that the sun has the ability to disrupt even large cometary nuclei via its intense radiation, sublimating not just surface volatiles but even silicates and iron. In principle this exposes the interiors of these objects to remote spectroscopy, which could place strong constraints on the composition of these objects.

And indeed, two comets — 96P/Machholz 1 and Yanaka (1998r) — have been found to have depleted levels of CN and C2 relative to water. Sun-grazing comets of interstellar origin, assuming we can identify them early through instrumentation like the LSST (Large Synoptic Survey Telescope), should be available for such examination, a way to probe their composition without the need for sending fast flyby missions, although the latter would obviously be useful.

In a second paper, just accepted at Research Notes of the American Astronomical Society, Loeb and Harvard colleague Amir Siraj note that ‘Oumuamua’s shape may be more extreme than we have thought. Noting that the axis ratio for the object has been pegged at between 6:1 and 10:1, the paper delves into the lightcurve, with a startling result, as Loeb explained in an email this morning:

The lightcurve of the interstellar object ‘Oumuamua showed a net brightening by one magnitude between October and November 2017, after corrections for the changing distances to the Sun and Earth and solar phase angle, assuming isotropic uniform albedo and the canonical phase function slope value for cometary and D-class objects of -0.04 magnitude per degree. We used the change in the orientation of ‘Oumuamua between October and November 2017 to show that this brightening implies a more extreme shape for the object. We inferred a ratio between its brightest and dimmest phases of at least 50:1 for a cigar shape and 20:1 for a pancake-like geometry. The revised values can be avoided if the phase function slope is 3 times larger than the canonical value, implying in turn another unusual property of ‘Oumuamua.

Variations in albedo could be in play, although here we would be looking at sharp variations over a minor change in viewing angle of ~11°, which Loeb and Siraj consider a possibility, though one without precedent in previous studies of asteroids and comets.
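The corrections Loeb describes are standard photometric practice: subtract the distance term 5·log10(d_Sun·d_Earth) and a linear phase-angle term. A minimal sketch follows; the function names and example numbers are illustrative, not taken from the paper:

```python
import math

BETA = 0.04  # canonical phase-function slope, mag/deg (cometary and D-class objects)

def reduced_magnitude(m_app, d_sun_au, d_earth_au, phase_deg, beta=BETA):
    """Correct an apparent magnitude for Sun/Earth distances and phase angle.

    m_app      : observed apparent magnitude
    d_sun_au   : object-Sun distance in AU
    d_earth_au : object-Earth distance in AU
    phase_deg  : solar phase angle in degrees
    """
    return m_app - 5 * math.log10(d_sun_au * d_earth_au) - beta * phase_deg

def brightness_ratio(delta_mag):
    """Flux ratio corresponding to a magnitude difference."""
    return 10 ** (0.4 * delta_mag)

# The one-magnitude net brightening in the lightcurve corresponds to a
# flux ratio of about 2.5:
print(brightness_ratio(1.0))
```

Note how a steeper assumed slope (beta three times the canonical value) soaks up more of the observed brightening as a phase effect, which is why it would relax the inferred axis ratios.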

The papers are Forbes and Loeb, “Turning Up the Heat on ‘Oumuamua,” submitted to The Astrophysical Journal Letters (preprint); and Siraj and Loeb, “‘Oumuamua’s Geometry Could be More Extreme than Previously Inferred,” accepted at Research Notes of the American Astronomical Society (full text).



Is Most Life in the Universe Lithophilic?

Seeking life on other worlds necessarily makes us examine our assumptions about the detectability of living things in extreme environments. We’re learning that our own planet supports life in regions we once would have ruled out for survival, and as we examine such extremophiles, it makes sense to wonder how similar organisms might have emerged elsewhere. Pondering these questions in today’s essay, Centauri Dreams regular Alex Tolley asks whether we are failing to consider possibly rich biospheres that could thrive without the need for surface water.

By Alex Tolley

Image: An endolithic lifeform showing as a green layer a few millimeters inside a clear rock, which has been split open. Antarctica. Credit: https://en.wikipedia.org/wiki/Endolith#/media/File:Cryptoendolith.jpg, Creative Commons.

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, “this is where the light is” – The Streetlight Effect

I’m going to make a bold claim that we are searching for life where the starlight can reach, and not where it is most common, in the lithosphere.

One of the outstanding big questions is whether life is common or rare in the universe. With the rapid discovery of thousands of exoplanets, the race is now on to determine if any of those planets have life. This means using spectroscopic techniques to find proxies, such as atmospheric composition, chlorophyll “red edge”, and other signatures that indicate life as we know it. There is the exciting prospect that new telescopes and instruments will give us the answer to whether life exists elsewhere within a decade or two.

The search for life on exoplanets starts with locating rocky planets in the habitable zone (HZ). The HZ is defined as the region where a planet can potentially have liquid surface water, which requires an atmosphere dense enough to ensure that water is retained. While the complex, multicellular life that visibly populates our planet is the vision most people have of life, as I have argued previously [13], it is most likely that we will detect the signatures of bacterial life, particularly archaean methanogens, as prokaryotes were the only form of life on Earth for over 85% of its existence. Most worlds in the HZ will probably look more like Venus or Mars, either too dry and/or with an atmosphere insufficient to allow surface water. Such worlds will be bypassed for more attractive Earth analogs.

This is particularly important for the most common star type, the M-dwarfs. These stars are often downgraded as hosts of habitable planets because their frequent flares can strip planetary atmospheres and irradiate the surface. This reduces the likelihood of life at the surface and, for many, is a showstopper.

However, if life established itself well below the surface, these surface hazards become relatively unimportant. All stars, including M-dwarfs, may well have a retinue of living worlds, but with their life undetectable by current means.

Despite mid-20th-century hopes that multicellular life would be found on Mars or Venus, it is now clear that the surfaces of these planets are devoid of any sort of multicellular ecosystem. Venus’ surface is too hot for any carbon-based life to survive. The various Martian orbiters and landers have found no multicellular life, and so far no unambiguous evidence of microbial life on or near the surface. The Moon is the only world from which surface rock samples have been returned to Earth, and these samples suggest, unsurprisingly, that the lunar surface is sterile [10,12].

NASA’s mantra for the search for life, echoing the HZ requirement, is “Follow the water!” On its face, this makes the lunar surface unlikely as a habitat, and similarly Mars, although Mars does have an abundance of frozen water below the surface. This leaves the subsurface-ocean icy moons as the current favorites for the discovery of life in our solar system, particularly around any hypothetical “hot vents” that mimic Earth’s.

However, when following the trail of liquid water, we now know that the Earth has a huge inventory of water in the mantle, providing a new source of water for the crustal rocks. This water is most likely primordial, sourced from the chondritic material during formation.[6,9] If the Earth has primordial water in the mantle, so might the Moon, as it was formed from the same material as the Earth. A recent analysis of lunar rocks indicates that the bulk of the water in the Moon is also primordial, with concentrations only an order of magnitude less than the water in the Earth’s mantle [1]. While we know Mars has water just below the surface, the same argument about primordial water deep within Mars also follows.

The question then becomes whether this water is in a form suitable for life. Is there a zone in these worlds where water is both liquid and at a temperature below the maximum we know terrestrial thermophiles can survive?

Table 1 below shows some estimates for Earth, Mars and the Moon where a suitable liquid water temperature range exists. The estimated thermal gradients are used to suggest the depths where life might start to be found as temperatures and pressures result in liquid water, and the maximum depth life might survive.

On Earth, the reference planet, the high thermal gradient and warm surface suggest life can be found at any depth down to about 5-6 km. The Moon, due to its low thermal gradient, might only have a habitable zone starting 15 km below the surface but reaching down to nearly 120 km. Mars is intermediate, with a habitable zone extending from roughly 6 to 29 km.

Table 1. Estimates of thermal gradients and the range of depths at which water is liquid but below 120 C, the approximate current maximum for thermophiles

World | Surface temp (C) | Thermal gradient (C/km) | Depth (km) at 120 C (0 C at surface) | Depth (km) at 0 C (actual surface temp) | Depth (km) at 120 C (actual surface temp)
Mars  | -63              | 6.4-10.6 **             | 11-19                                | 6-10                                    | 18-29
Moon  | -18 *            | 1.17 ***                | 103                                  | 15                                      | 118

* Assumes the Moon’s surface temperature would be the same as the Earth’s without an atmosphere
** [7]
*** [8]
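The depths in Table 1 follow from simple linear arithmetic: the liquid-water zone starts where the temperature climbs to 0 C and ends where it reaches the ~120 C thermophile limit. A minimal sketch, assuming temperature rises linearly with depth:

```python
# Reproduce Table 1: depth range where subsurface temperature lies between
# 0 C (liquid water) and 120 C (approximate thermophile limit), given a
# surface temperature and a linear thermal gradient T(z) = surface + gradient*z.

def habitable_depths(surface_c, gradient_c_per_km, t_max_c=120.0):
    """Return (top_km, bottom_km) of the liquid-water zone."""
    top = max(0.0, (0.0 - surface_c) / gradient_c_per_km)  # depth where T reaches 0 C
    bottom = (t_max_c - surface_c) / gradient_c_per_km     # depth where T reaches 120 C
    return top, bottom

# Moon: -18 C surface, 1.17 C/km gradient [8] -> zone from ~15 km to ~118 km
print(habitable_depths(-18, 1.17))
# Mars: -63 C surface, gradient 6.4-10.6 C/km [7] -> zone spanning ~6-29 km
print(habitable_depths(-63, 10.6))
print(habitable_depths(-63, 6.4))
```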

So we have two rocky worlds in our solar system that may hold water reservoirs in their mantles, delivered by primordial chondritic material, and therefore liquid water deep in their lithospheres, protected from radiation and at fairly constant temperatures within the range terrestrial organisms tolerate. Our necessary condition of liquid water may thus be met inside these worlds, rather than at the surface.

Given that liquid water may be found deep below the surface, is there any evidence that life exists there too?

In 1999, the iconoclastic astrophysicist and astronomer Thomas Gold published a popular account of his theory that fossil fuels are not derived from biological sources, but rather from primordial methane contaminated by organisms living deep within the Earth’s crust [4,5]. While his theory remains controversial, his suggestion that organisms live in the lithosphere has been proven correct [11]. Boreholes have shown microorganisms living at least 4 km below the surface, and it has been suggested that the biomass of these organisms may exceed that of humanity, so life in the lithosphere is not trivial compared to that on the surface of our planet.

Figure 1. Illustration of the search for life in the lithosphere. To date, life has been found at depths of nearly 4 km, but was absent at 9 km, where the temperatures were too high.
1. Deep-sea, manned submersibles and remotely operated vehicles collect fluid samples that exit natural points of access to the oceanic crust, such as underwater volcanoes or hydrothermal vents. These samples contain microbes living in the crust beneath.
2. Drilling holes into the Earth’s crust allows retrieval of rock and sediment cores reaching kilometers below the surface. The holes can then be filled with monitoring equipment to make long-term measurements of the deep biosphere.
3. Deep mines provide access points for researchers to journey into the Earth’s continental crust, from where they can drill even deeper into the ground or search for microbes living in water seeping directly out of the rock.

Source: [11]

From the article:

To date, studies of crustal sites all over the world—both oceanic and continental—have documented all sorts of organisms getting by in environments that, until recently, were deemed inhospitable, with some theoretical estimates now suggesting life might survive at least 10 kilometers into the crust. And the deep biosphere doesn’t just comprise bacteria and archaea, as once thought; researchers now know that the subsurface contains various fungal species, and even the occasional animal. Following the 2011 discovery of nematode worms in a South African gold mine, an intensive two-year survey turned up members of four invertebrate phyla—flatworms, rotifers, segmented worms, and arthropods—living 1.4 kilometers below the Earth’s surface.

With our existence proof of a deep, hot biosphere on Earth, is it possible that similar life exists in the lithospheres of other rocky worlds of our solar system, including our Moon?

Mars is particularly attractive, as there is evidence that Mars was both warmer and wetter in the past. There was geologic activity, as is clearly evident from the Tharsis bulge and shield volcanoes like Olympus Mons. We know there is frozen water below the surface on Mars. What we are not certain of is whether Mars’ core is still molten and hot, and what the areothermal gradient is. One of the scientific goals of the InSight lander, currently on Mars, is to determine heat flow within the planet. This will help provide the data necessary to determine the range of the habitable zone in the Martian lithosphere.

In contrast, we do have samples of Moon rock. An analysis of the Apollo 11 samples showed that organic material was present, but there was no sign of life other than terrestrial contamination [10, 12]. Since then, very little effort has been applied to looking for life in the lunar rocks. The theory that the Moon is desiccated, hostile to life, and sterile seems to have deterred further work. The early analyses indicated that methane (CH4) is present in the Apollo 11 samples. This may be primordial, or delivered subsequently by impacts from asteroids or comets. If we ever discovered pockets of natural gas, or even petroleum, on the Moon, this would be a staggering confirmation of Gold’s theory.

So where should we look?

Although the Moon is in our proverbial backyard, the expected depth of liquid water starts well below the bottom of the deepest craters. This suggests that either deep boring would be necessary, or we must hope for impact ejecta to be recoverable from the needed depths. The prospects for either seem rather remote, although scientific and commercial activities on the Moon might make this possible within this century.

Despite its remoteness, Mars may be more attractive. Sampling at the bottom of crater walls and the sides of the Valles Marineris may give us relatively easy access to samples at the needed depths. Should the transient dark marks on the sides of crater walls prove to be liquid water, we would have samples within easy reach. The recent discovery of a possible subsurface water deposit just 1.5 km beneath the surface of Mars might be another possible target to reach.

Water is a necessary, but not sufficient, condition for life, and this requirement has focused efforts on looking for life where liquid surface water exists. Because of the available techniques, exoplanet targets will be those that satisfy the HZ requirements. While these may provide the first confirmation of extraterrestrial life, they cannot answer some of the fundamental questions we would like answered: for example, is abiogenesis common or rare, and is panspermia the means by which life spreads? For that, we will need samples of such life. For the foreseeable future, that means sampling the solar system. We have two nearby worlds, and Gold suggested that there might be ten suitable worlds, Moon-sized and above, that might have deep biospheres [5]. That might be ample.

To date, our search for life beyond Earth has been little more than looking for fish in the waves lapping the shore. We need to search more comprehensively. I am arguing that this search should focus on the habitable regions of the lithospheres of any suitable rocky world. We might start with signs of bacterial fossils in exposed rock strata and ejecta, and then move to core samples taken from boreholes to look for living organisms. Finding life, especially life from a different genesis, would indicate that life is indeed ubiquitous in the universe.


1. Barnes, J. J., Tartèse, R., Anand, M., Mccubbin, F. M., Franchi, I. A., Starkey, N. A., & Russell, S. S. (2014). The origin of water in the primitive Moon as revealed by the lunar highlands samples. Earth and Planetary Science Letters, 390, 244-252. doi:10.1016/j.epsl.2014.01.015

2. Davies, P. C., Benner, S. A., Cleland, C. E., Lineweaver, C. H., Mckay, C. P., & Wolfe-Simon, F. (2009). Signatures of a Shadow Biosphere. Astrobiology, 9(2), 241-249. doi:10.1089/ast.2008.0251

3. Davies, P. C. (2011). The Eerie Silence: Renewing Our Search for Alien Intelligence. Boston: Mariner Books, Houghton Mifflin Harcourt.

4. Gold, T. (1992). The deep, hot biosphere. Proceedings of the National Academy of Sciences, 89(13), 6045-6049. doi:10.1073/pnas.89.13.6045

5. Gold, T. (2010). The Deep Hot Biosphere: The Myth of Fossil Fuels. New York, NY: Copernicus Books.

6. Hallis, L. J., Huss, G. R., Nagashima, K., Taylor, G. J., Halldórsson, S. A., Hilton, D. R., . . . Meech, K. J. (2015). Evidence for primordial water in Earth’s deep mantle. Science, 350(6262), 795-797. doi:10.1126/science.aac4834

7. Hoffman N.(2001) Modern geothermal gradients on Mars and implications for subsurface liquids. Conference on the Geophysical Detection of Subsurface Water on Mars (2001)

8. Kuskov O (2018) Geochemical Constraints on the Cold and Hot Models of the Moon’s Interior: 1–Bulk Composition. Solar System Research, 2018, Vol. 52, No. 6, pp. 467–479.

9. Mccubbin, F. M., Steele, A., Hauri, E. H., Nekvasil, H., Yamashita, S., & Hemley, R. J. (2010). Nominally hydrous magmatism on the Moon. Proceedings of the National Academy of Sciences, 107(25), 11223-11228. doi:10.1073/pnas.1006677107

10. Nagy, B., Drew, C. M., Hamilton, P. B., Modzeleski, V. E., Murphy, S. M., Scott, W. M., . . . Young, M. (1970). Organic Compounds in Lunar Samples: Pyrolysis Products, Hydrocarbons, Amino Acids. Science, 167(3918), 770-773. doi:10.1126/science.167.3918.770

11. Offord, C. (2018) Life Thrives Within the Earth’s Crust. The Scientist, October 1, 2018.

12. Oyama, V. I., Merek, E. L., & Silverman, M. P. (1970). A Search for Viable Organisms in a Lunar Sample. Science,167(3918), 773-775. doi:10.1126/science.167.3918.773

13. Tolley, A. (2018). Detecting Early Life on Exoplanets. Centauri Dreams, February 2018.

14. Way, M. J., Genio, A. D., Kiang, N. Y., Sohl, L. E., Grinspoon, D. H., Aleinov, I., . . . Clune, T. (2016). Was Venus the first habitable world of our solar system? Geophysical Research Letters, 43(16), 8376-8383. doi:10.1002/2016gl069790

15. Woo, M. The Hunt for Earth’s Deep Hidden Oceans. Quanta Magazine, July 11, 2018



Technosearch: An Interactive Tool for SETI

Jill Tarter, an all but iconic figure in SETI, has just launched Technosearch, an Internet tool that includes all published SETI searches from 1960 to the present. A co-founder of the SETI Institute well known for her own research as well as her advocacy on behalf of the field, Tarter presents scientists with a way to track and update all SETI searches that have been conducted, allowing users to submit their own searches and keep the database current. The tool grows out of needs she identified in her own early research, as Tarter acknowledges:

“I started keeping this search archive when I was a graduate student. Some of the original papers were presented at conferences, or appear in obscure journals that are difficult for newcomers to the SETI field to access. I’m delighted that we now have a tool that can be used by the entire community and a methodology for keeping it current.”

Image: Screenshot of the Radio List on https://technosearch.seti.org/.

Among the materials included in Technosearch are:

  • Title of the search paper
  • Name(s) of observers
  • Search date
  • Objects observed
  • Facility where the search was conducted
  • Size and sensitivity of the telescope used
  • Resolving power of the instrument
  • Time spent observing each object
  • A link to the original published research paper
  • Comments that explain the search strategy
  • Observer notes

Technosearch currently contains 102 radio searches and 38 optical searches. The tool was presented yesterday at the 2019 winter meeting of the American Astronomical Society in Seattle and will be maintained by the SETI Institute. The AAS meeting always produces interesting developments, including exoplanet investigations that I intend to discuss next week.

On Technosearch, a personal thought: No one who has not attempted a deep dive into the scholarship on SETI can know how frustrating it is to chase down lesser-known investigations or details of major ones. The issue of ready availability extends to the broad field of interstellar flight research, as I learned when compiling materials for my Centauri Dreams book. The trail from conference presentation to published paper can be obscure, while materials relating to specific researchers can be scattered through library collections or spread over a range of journals, some of them behind paywalls, or available only in expensive books.

For interstellar studies, I’ve long advocated a return to what Robert Forward began with Eugene Mallove: a detailed bibliography, whose last appearance was in the Journal of the British Interplanetary Society in 1980. Putting such a resource online opens it to the world and strengthens a field whose online databases are in many cases incomplete and often do not include older materials. All fields of scholarship will be following this essential path even as we continue to wrestle with academic publishers over questions of access to complete texts.

Technosearch is a step forward for SETI that helps scientists work with consolidated information while building a useful archive of contemporary work going forward. Tarter developed the tool in collaboration with graduate students working with Jason Wright (Penn State), a well-known figure in Dysonian SETI, which culls astronomical data looking for the possible physical artifacts of advanced civilizations. Also in the mix is Research Experience for Undergraduates, a program supporting students in areas of research funded by the National Science Foundation.

Image: Jill Tarter and Andrew Garcia presenting the Technosearch Tool.

SETI Institute REU student Andrew Garcia worked with Tarter in the summer of 2018:

“I started helping Dr. Tarter with this project as a research opportunity during the summer. I’ve become convinced that Technosearch will become an important instrument for astronomers and amateurs interested in exploring the cosmos for indications of other technological civilizations. We can’t know where to look for evidence tomorrow if we don’t know where we have already looked. Technosearch will help us chronicle where and how we’ve looked at the sky. I would like to thank the NSF REU program and the CAMPARE program for their encouragement and support throughout this project.”



The advantages of neutral particle beam propulsion seem clear: Whereas a laser’s photons can impart only their momentum to the sail, neutral particle beams transfer energy as well, and are considerably more efficient. In fact, as we saw in the first part of this essay, that efficiency can approach 100 percent. A mission concept emerges, one that reaches a nearby star in a matter of decades. But what about the particle beam generators themselves, and the hard engineering issues that demand solution? For that matter, how does the concept compare with Breakthrough Starshot? Read on as James Benford, working in collaboration with Alan Mole, describes the salient issues involved in building an interstellar infrastructure.

By James Benford and Alan Mole

We discuss the concept for a 1 kg probe that can be sent to a nearby star in about seventy years using neutral beam propulsion and a magnetic sail. We describe key elements of neutral particle beam generators, their engineering issues, cost structure and practical realities. Comparison with the Starshot laser beam-driven concept gives roughly similar costs.

Beam Generator Concept

Figure 1. Block diagram of an early neutral particle beam generator [1]. The drift-tube linac is not shown.

Creation of the neutral particle beam begins with

1. Extraction of a negative ion beam (ions with an extra attached electron) from a plasma source; it then drifts into the first acceleration stage, the RFQ. The first element of the accelerator will appear much like the geometry shown in Figure 2. Here ions are extracted from the plasma source on the left by electrostatics and brought by a converging magnetic field to the linear accelerator.

Figure 2. Ion beam on left is propagated along converging magnetic field to the linac.

2. The ion beam enters a radiofrequency quadrupole (RFQ) accelerator, a vane-like structure where the application of radiofrequency power produces a continuous gentle acceleration much like a surfer riding a wave. It also provides strong electrostatic focusing to prevent divergence growth. The structure bunches the particles in phase space.

The RFQ fulfills three different functions at the same time:

  • focusing of the particle beam by an electric quadrupole field, particularly valuable at low energy where space charge forces are strong and conventional magnetic quadrupoles are less effective;
  • adiabatic bunching of the beam: starting from the continuous beam produced by the source it creates with minimum beam loss the bunches at the basic RF frequency that are required for acceleration in the subsequent structures;
  • acceleration of the beam from the extraction energy of the source to the minimum required for injection into the following linac structure.

3. After the ions exit the RFQ at energies of a few MeV, further acceleration to increase the particle energy is done with a drift-tube linac (DTL), which consists of drift tubes separated by acceleration regions, as shown in Figure 3. Particles arriving at the gaps at the proper phase in the radiofrequency waves are given acceleration impulses. When the electric field of the wave reverses, the particles are shielded from being accelerated by passing through the drift tubes. The typical accelerating gradient is a few MeV/m.
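That "few MeV/m" gradient sets the physical scale of the machine. A rough illustration follows; the 3 MeV injection energy, 3 MeV/m gradient, and 1 GeV target are assumptions for the example, not the authors' design figures:

```python
# Rough length of a drift-tube linac for a given final particle energy,
# using a constant accelerating gradient. Illustrative only: a real
# beamline also includes the RFQ, focusing magnets and beam-expansion optics.

def linac_length_m(final_energy_mev, injection_energy_mev=3.0,
                   gradient_mev_per_m=3.0):
    """Length (m) needed to take ions from injection energy to final energy."""
    return (final_energy_mev - injection_energy_mev) / gradient_mev_per_m

# Accelerating to 1 GeV at 3 MeV/m needs a structure a few hundred meters long:
print(f"{linac_length_m(1000):.0f} m")
```

The linear scaling is why the man-year-per-meter cost rule of thumb quoted later in the essay translates so directly into accelerator cost estimates.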

Figure 3. Drift-Tube Linac, which consists of drift tubes separated by acceleration regions.

4. In order to maintain low emittance and produce the microradian divergence we desire, the beam is expanded considerably as it exits the accelerator. Beam handling elements must have minimal chromatic and spherical aberrations.

5. Beam pointing is done by bending magnets with large apertures.

6. Finally, the extra electrons are stripped from the beam, making it a neutral particle beam. This can be done by stripping the electrons in a gas neutralization cell or by photodetachment with a laser beam. It may be possible to achieve 100% neutralization by a combination of methods, but thus far such high-efficiency neutralization has not been demonstrated.

Beamer Engineering

There are several possible schemes for building the beam generator. Both electrostatic and electromagnetic accelerators have been developed to produce high power beams. The most likely approach is to use linear accelerators. In the past, the cost of an electromagnetic accelerator has been on the order of one person-year per meter of accelerator (~1 man-year/m), but this could be larger for more sophisticated technologies.

The power system to drive such accelerators could come from nuclear power (fission or fusion) or solar power. Furthermore, if it were to be space-based, the heavy mass of the TW-level high average power required would mean a substantially massive system in orbit. Therefore Mole’s suggestion, that the neutral beam be sited on Earth, has its attractions. There is also the question of the effects of propagating in the atmosphere, on both beam attenuation and on divergence.
If the beam generator were on Earth, it should be sited at the highest practical altitude. The Atacama Desert, for example, would offer very low humidity and half of sea-level pressure. In addition, beam losses in the atmosphere could be reduced by launching a hole-boring laser beam just before the neutral beam. This laser would heat a cylinder of atmosphere, lowering its density and allowing the neutral beam to propagate with less loss. Such hole-boring exercises have been conducted in laser weapon studies and appear to be a viable technique.

The final neutral beam can be generated by many small beam drivers or a single large one. A great number of driver devices and their associated power supplies would increase the construction and maintenance expense of this portion of the system. Of course, economies of scale from mass production of system modules will reduce the cost of individual segments of the Beamer. Making such choices is an exercise for future engineers and designers.

Neutral particle beam generators have so far been operated in pulsed mode of at most a microsecond, using pulsed-power equipment at high voltage. Going to continuous beams, necessary for the seconds of beam operation required as a minimum for useful missions, would require rethinking the construction and operation of the generator. The average power requirement is quite high, and any adequate cost estimate would have to include substantial prime power and pulsed power (voltage multiplication) equipment, the major cost element in the system. Of course, the Beamer cost will vastly exceed the cost of the Magsails it launches, which is an economic advantage of beamed propulsion: the expensive Beamer is reused across many inexpensive sails.

However, this needs economic analysis to see what the cost optimum would actually be. Such analysis would take into account the economies of scale of a large system as well as the cost to launch into space versus the advantages of beaming from Earth.

Beamer Cost Estimates

The interstellar neutral particle beam system described here is a substantial extrapolation beyond the present state-of-the-art. Nevertheless, estimates can be made of both the capital and operating costs.

The cost of the Beamer is divided between the cost of the accelerator structure (RFQ and DTL) and the power system that drives it. For a cost estimate for the mercury system, we assume that the present-day accelerating gradient, ~2 MeV/m, is maintained for this very high-power system. For the mercury neutral particle beam, the length of the 1.35 GeV accelerator would then be 675 m.

There is an extensive technology base for drift-tube linacs; many are in operation around the world [2]. We use as a model the well-documented Brookhaven National Laboratory 200 MeV ion beam system, which was completed in 1978 at a cost of $47M. It used 22 MW of radiofrequency power and was 145 m long. In that era, the cost of microwave equipment was ~$1/W; today it is ~$3/W, so the 22 MW would have cost $22M then and $66M today. Since the total cost of the accelerator was $47M, the accelerator structure would have cost $47M − $22M = $25M. Thus at this level the two cost elements are roughly equal. The accelerator structure then costs $25M / 145 m = $0.17M per meter in 1978 dollars. We multiply all costs by a factor of three to account for inflation to get today's costs.
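The scaling from the 1978 Brookhaven machine can be written out explicitly; the figures below are the ones stated in the text.

```python
# Reproducing the cost breakdown in the text for the 1978 Brookhaven linac.
total_cost_1978 = 47e6       # $, whole 200 MeV machine
rf_power_w = 22e6            # 22 MW of RF power
rf_cost_per_w_1978 = 1.0     # ~$1/W in that era
length_m = 145.0
inflation = 3.0              # rough 1978 -> today multiplier used in the text

rf_cost_1978 = rf_power_w * rf_cost_per_w_1978        # $22M
structure_cost_1978 = total_cost_1978 - rf_cost_1978  # $25M
per_meter_1978 = structure_cost_1978 / length_m       # ~$0.17M/m
per_meter_today = per_meter_1978 * inflation          # ~$0.5M/m

print(f"structure: ${per_meter_today/1e6:.2f}M per meter today")
```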

To estimate the capital cost of the mercury NPB described here, we have the following relations:

Caccl = 0.5 M$/m × 675 m ≈ 350 M$

Cmicrowave = 3 $/W × 18 TW = 5.47 B$

Therefore the dominant cost element would be the microwave system driving the accelerator.

However, high-volume manufacturing will drive costs down. Such economies of scale are accounted for by the learning curve, the decrease in unit cost of hardware with increasing production. This is expressed as the cost reduction for each doubling of the number of units, the learning curve factor f. This factor typically varies with differing fractions of labor and automation, 0.7 < f < 1, the latter value being total automation.

It is well documented that microwave sources follow an 85% learning curve, f = 0.85, based on large-scale production of antennas, magnetrons, klystrons, etc. [3]. Today's cost is about $3/W for ~1 MW systems. Note that this includes not only the microwave generating tube but also the power system that drives it continuously. The 18 TW needed would require 18 million such units. With the learning curve, the cost is ~1.1 B$. Adding together the accelerator and microwave power system, the cost will be 1.45 B$.
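The learning-curve mechanics can be sketched as follows. This shows only how unit cost falls with cumulative production; the article's ~1.1 B$ figure presumably folds in further assumptions beyond this bare formula.

```python
# Sketch: learning-curve cost reduction. With factor f per doubling,
# the n-th unit costs C1 * n**log2(f). Mechanics only; not an attempt
# to reproduce the article's ~$1.1B bottom line.
import math

def unit_cost(n: int, first_unit_cost: float, f: float) -> float:
    """Cost of the n-th unit under a learning curve with factor f."""
    return first_unit_cost * n ** math.log2(f)

# 85% curve: each doubling of cumulative production cuts unit cost 15%.
c1 = 3e6  # ~$3/W for a 1 MW microwave source, per the text
print(unit_cost(2, c1, 0.85) / c1)   # 0.85
print(unit_cost(4, c1, 0.85) / c1)   # 0.7225
```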

The electrical power to drive this large system cannot possibly come from the electrical grid of Earth. Therefore a large cost element will be the system that stores the 162 TJ of energy. (Note that the beam power starts at zero and rises linearly with time to 18 TW at the end, so the energy delivered grows as t².) From Parkin's estimates of the Starshot energy storage system [4], based on Li-ion batteries, we take the storage cost to be $50 per kilowatt-hour, which is $13.9M per TJ. Consequently the cost for the energy store is $13.9M/TJ × 162 TJ = 2.25 B$, comparable to the cost of the accelerator and its microwave power system.
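The unit conversion behind the storage figure is worth making explicit, since $/kWh and TJ mix two energy units:

```python
# Check on the energy-store figures: $50/kWh Li-ion storage (Parkin's
# Starshot estimate, per the text) applied to 162 TJ of launch energy.
KWH_PER_TJ = 1e12 / 3.6e6   # ~277,778 kWh in a terajoule
cost_per_kwh = 50.0         # $
store_tj = 162.0            # total launch energy, TJ

cost_per_tj = cost_per_kwh * KWH_PER_TJ   # ~$13.9M per TJ
total = cost_per_tj * store_tj            # ~$2.25B
print(f"${total/1e9:.2f}B")
```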

The total capital cost is:

Caccl = 350 M$
Cmicrowave = 1.1 B$
Cstore = 2.25 B$

Total accelerator capital cost: 3.7 B$.

The operating cost to launch a single Magsail is of course far smaller: it is simply the cost of the spacecraft and the energy to launch it. We assume the cost of the spacecraft to be on the order of $10 million. The cost of the electricity, at the current rate of $0.10 per kilowatt-hour, is $4.5 million.

Total operating cost for a single launch is ~15M$.
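The per-launch arithmetic above is short enough to check directly:

```python
# Operating-cost check: 162 TJ of grid electricity at $0.10/kWh,
# plus the assumed ~$10M spacecraft.
energy_tj = 162.0
kwh = energy_tj * 1e12 / 3.6e6     # 45 million kWh
electricity = kwh * 0.10           # $4.5M
launch_cost = electricity + 10e6   # $14.5M, i.e. ~15 M$ as in the text
print(f"${launch_cost/1e6:.1f}M")
```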

Comparison with Starshot

The neutral particle beam approach is conceptually similar to photon beams such as the laser-driven Starshot project. A disadvantage of reflecting photons from the sail is that they carry away much of the energy, because they exchange only momentum with the sail. Neutral particle beams transfer energy, which is much more efficient: the reflected particles can in principle be left unmoving in space, so the energy efficiency can approach 100%.

The Starshot system, a laser beam-driven 1 gram sail with the goal of reaching 0.2c, has been quantified in a detailed system model by Kevin Parkin [4]. Since both the high acceleration neutral particle beam described here and Starshot are both beam-driven high-velocity systems, we make the following comparison between their key parameters and cost elements:

Physical parameters and cost elements of beam-driven probes

                              Mercury Neutral Particle Beam System   Starshot
Sail mass                     1 kg                                   1 g
Velocity                      0.06 c                                 0.2 c
Beamer capital cost           1.45 B$                                4.9 B$
Energy store cost             2.25 B$                                3.4 B$
Total capital cost            3.7 B$                                 8.3 B$
Energy cost/launch            4.5 M$                                 7 M$
Kinetic energy                1.6 × 10¹⁴ J                           1.8 × 10¹² J
Kinetic energy/capital cost   43.2 kJ/$                              0.2 kJ/$
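The bottom-line ratios follow directly from the kinetic energies and total capital costs; a quick check:

```python
# Verifying the table's kinetic-energy-per-dollar ratios.
npb_ke, npb_cost = 1.6e14, 3.7e9              # J, $
starshot_ke, starshot_cost = 1.8e12, 8.3e9    # J, $

npb_ratio = npb_ke / npb_cost                 # ~43 kJ/$
starshot_ratio = starshot_ke / starshot_cost  # ~0.2 kJ/$
print(npb_ratio / starshot_ratio)             # ~200x advantage for the NPB probe
```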

Here we have summed the accelerator and microwave power system costs for the neutral Beamer, and the laser and optics costs for Starshot. A major caveat is that Parkin's estimates use realistic efficiencies for the Starshot systems, while our costs assume unrealistically high efficiencies.

Although they differ in detail, the two concepts give the same order of magnitude cost. However, the kinetic energy in the NPB-driven probe is 90 times that of the Starshot probe. This shows the disadvantage of reflecting photons from the sail: they carry away much of the energy because they exchange only momentum with the sail. Neutral particle beams transfer energy, which is much more efficient. The kinetic energy/capital cost ratio is 200 times greater in the NPB case.

It is instructive that the high energy requirement of interstellar probes drives the existence of a stand-alone storage system, which is a major element in the total cost of both systems. The similarity of costs for these rather different beam-driven systems gives us some confidence that the rough estimates in this paper are credible.

Neutral Particle Beam Realities

Practical realities are always bad news. Performance of most systems degrades below the design point because of process inefficiencies. Note that the beam systems described here are perfectly efficient, as determined from equation 5: the beam reflects from the sailcraft so as to stop dead, transferring all its energy to the spacecraft. The realities of present-day neutral particle beams are substantially poorer.

To see where the problems lie, we consider a daring experiment called BEAR, conducted 30 years ago [1, 5]. A neutral particle beam generator was actually deployed and operated in space and its performance was measured.

On July 13, 1989 the Beam Experiment Aboard Rocket (BEAR) linear accelerator was successfully launched and operated in space by Los Alamos National Laboratory. The rocket trajectory was sub-orbital, reaching an altitude of 220 km. The flight demonstrated that a neutral hydrogen beam could be successfully propagated in an exoatmospheric environment. The cross-section of the rocket is shown in Figure 4.

Figure 4. Beam Experiment Aboard Rocket (BEAR) [1].

The accelerator, the result of an extensive collaboration between Los Alamos National Laboratory and industrial partners, was designed to produce a 10 mA, 1 MeV neutral hydrogen beam in 50 microsecond pulses at 5 Hz. The major components were a 30 keV H⁻ injector, a 1 MeV radio-frequency quadrupole, two 425 MHz RF amplifiers, a gas cell neutralizer, beam optics, a vacuum system and controls. The extracted beam was 1 cm in diameter with a divergence of 1 milliradian. There was no unexpected behavior such as beam instability in space.

The design was strongly constrained by the need for a lightweight, rugged system that would survive the rigors of launch and operate autonomously. The payload was parachuted back to Earth. Following the flight the accelerator was recovered and successfully operated again in the laboratory.

From the paper and report describing this experiment we see substantial inefficiencies, which should guide our future expectations.

The input power to the accelerator was 620 kW for 60 µs, a 7.2 J energy input. The beam as extracted was 27 mA at 1 MeV for 50 µs, which gives 1.35 J. The efficiency therefore is 19%, so approximately 4/5 of the energy supplied was lost in the beamline shown in Figure 4. The major loss was in the neutralizer, which used xenon gas injected into the beamline. The efficiency of the neutralizer was varied by changing the amount of gas injected; the best result was 50% neutral hydrogen and 25% each of negative and positive hydrogen. The neutralization process was therefore only 50% efficient in producing a neutral beam, which accounts for most of the loss. The remaining losses can be attributed to inefficiencies in the optics of the low-energy and high-energy beam regions.

In the 30 years since the flight, little work on particle beams has occurred at high power levels because of the termination of the Strategic Defense Initiative. Doubtless substantial improvements could be made in the efficiency of NPBs, given substantial research funding. Therefore the concept in this paper, with its hundred-percent efficiency of energy transfer from the electrical system to the sail, is an upper bound on performance. Consequently the parameters in Table 1 and the capital and operating cost estimates given here are lower bounds on what would actually occur.


The cost model presented here lacks realistic efficiencies. The next level of analysis should address this lack.

We can foresee a development path: the system starts with lower-speed, lower-mass Magsails for faster missions in the inner solar system. As the system grows, the neutral beam Beamer grows and the technology improves. Economies of scale lead to faster missions with larger payloads. As interplanetary commerce begins to develop, beam-driven sails make that commerce operate efficiently, outcompeting the long transit times of rockets between the planets and asteroids, and the system evolves [6]. Nordley and Crowl describe such a development scenario [7]. We conclude that this concept is a promising method for interstellar travel.


1. P. G. O'Shea, T. A. Butler, M. T. Lynch, K. F. McKenna, M. B. Pongratz, T. J. Zaugg, "A Linear Accelerator in Space: The Beam Experiment Aboard Rocket", Proceedings of the Linear Accelerator Conference, 1990.

2. H. B. Knowles, "Thirty-Five Years of Drift-Tube Linac Experience", Los Alamos Scientific Laboratory Report LA-10138-MS, 1984. See also reference 4, p. 81.

3. J. Benford, J. A. Swegle and E. Schamiloglu, High Power Microwaves, Third Edition, p. 77, Taylor and Francis, Boca Raton, FL, 2015.

4. K. L. G. Parkin, “The Breakthrough Starshot System Model”, Acta Astronautica 152, 370-384, 2018.

5. G. J. Nunz, "Beam Experiments Aboard a Rocket (BEAR) Project Summary", LA-11737, 1990.

6. J. Benford, "Beam-Driven Sails and Divergence of Neutral Particle Beams", JBIS 70, pp. 449-452, 2017.

7. G. Nordley and A. J. Crowl, “Mass Beam Propulsion, An Overview”, JBIS 68, pp. 153-166, 2015.



Beamed propulsion has clear advantages when it comes to pushing a payload up to interstellar flight speeds, which is why Breakthrough Starshot is looking at laser strategies. But what about a neutral particle beam in conjunction with a magnetic sail? We’ve discussed the possibilities before (see Interstellar Probe: The 1 KG Mission), where I wrote about Alan Mole’s paper in JBIS, followed by a critique from Jim Benford. Mole, a retired aerospace engineer, is now collaborating with plasma physicist Benford (CEO of Microwave Sciences) to examine a solution to the seemingly intractable problem of beam divergence. Getting around that issue could be a game-changer. Read on for the duo’s thoughts on sending a 1 kg probe to a nearby star system with a flight time in the range of 70 years. Part 2 of this study, outlining engineering issues and the practical realities of cost, will follow.

by James Benford and Alan Mole

We advance the concept for a 1 kg probe that can be sent to a nearby star in about seventy years using neutral beam propulsion and a magnetic sail. The concept has been challenged because the beam diameter was too large, due to inherent divergence, so that most of the beam would miss the sail. Increasing the acceleration from 1000 g’s to 100,000 g’s along with reducing the final speed from 0.1 c to 0.06 c redeems the idea. Such changes greatly reduce the acceleration distance so that the mission can be done with realistic beam spread. Magsail-beam interaction remains an aspect of this concept that needs further study, probably by simulations.

Central features of Neutral Particle Beam Propulsion

Use of a neutral particle beam to drive a Magsail was proposed by Geoffrey Landis as an alternative to photon beam-driven sails [1]. Compared to photon beam-driven propulsion such as Starshot, particle-beam-propelled magnetic sails (Magsails) substitute a neutral particle beam for the laser and a Magsail for the 'lightsail' or 'sailship'. The particle beam intercepts the spacecraft: payload and structure encircled by a magnetic loop. The loop's magnetic field deflects the particle beam around it, imparting momentum to the sail. The general 'mass beam' approach has been reviewed by Nordley and Crowl [2].

Particle beam propelled Magsails require far less power for acceleration of a given mass; there is a ~10³ increase in force on the sail for a given beam power. Deceleration at the target star is possible with the Magsail but not with a laser-driven sail.

The neutral particle beam approach is conceptually similar to photon beams such as the laser-driven Starshot project. A disadvantage of reflecting photons from the sail is that they carry away much of the energy, because they exchange only momentum with the sail. Neutral particle beams transfer energy, which is much more efficient: the reflected particles can in principle be left unmoving in space, so the energy efficiency can approach 100%.

The thrust per watt of beam power is maximized when the particle velocity is twice the spacecraft velocity. The Magsail, with a hoop force from the magnetic field, is an ideal structure because it is under tension; high-strength, low-density fibers make this lightweight system capable of handling the large forces from high accelerations. The rapidly moving magnetic field of the Magsail, seen in the frame of the beam as an electric field, ionizes the incoming neutral beam particles. Nordley and Crowl discuss on-board lasers to ionize the incoming beam, although this adds on-board mass and power [2]. When the dipole field of the Magsail is inclined to the beam vector, the Magsail experiences a force perpendicular to the beam vector which centers it on the particle beam, perhaps providing beam-riding stability.

Ultrahigh Acceleration

Alan Mole proposed using this approach to propel a lightweight probe of 1 kg [3]. The probe was accelerated to 0.1 c at 1,000 g's by a neutral particle beam of power 300 GW, with 16 kA current and 18.8 MeV per particle. The particle beam intercepts a spacecraft that is a Magsail: payload and structure encircled by a magnetic loop. The loop's magnetic field deflects the particle beam around it, imparting momentum to the sail and accelerating it.

Benford showed that beam divergence is fundamentally limited by the requirement, at the end of the acceleration process, to strip electrons from the beam of negative hydrogen ions to produce a neutral beam [4,5]. Neutral beam divergence is therefore typically a few microradians; Mole's beam had an inherent divergence of 4.5 µradians.

In Mole's work, the neutral hydrogen beam at 18.8 MeV per particle and inherent beam divergence of 4.5 µradians, accelerating the sail to one-tenth of the speed of light (0.1 c) at 10³ g's, required 50 minutes [3]. This resulted in a 411 km diameter beam spot, far larger than the 0.27 km Magsail diameter, so most of the beam missed the sail.

But if we use much higher acceleration, the sail will stay within the beam until it reaches the desired final velocity, even with microradian divergence. We choose 10⁵ g's (10⁶ m/s²) to accelerate to 0.06 c (1.8 × 10⁷ m/s).
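The kinematics behind this choice can be worked out in a few lines, following the text's convention that the beam spot diameter is roughly twice the divergence angle times the acceleration distance:

```python
# Kinematics of the ultrahigh-acceleration case: constant acceleration
# to 0.06 c, and the resulting beam spot at the end of the run.
a = 1e6                  # m/s^2 (10^5 g's)
v_final = 1.8e7          # m/s (0.06 c)

t_accel = v_final / a               # 18 s
d_accel = 0.5 * a * t_accel**2      # 1.62e8 m, i.e. ~1.6e5 km or ~1e-3 AU

def spot_diameter(divergence_rad: float) -> float:
    """Beam diameter at the end of acceleration, taking
    diameter ~ 2 x divergence x distance, as in the text."""
    return 2.0 * divergence_rad * d_accel

for name, div in (("hydrogen", 4.5e-6), ("mercury", 0.8e-6)):
    print(f"{name}: spot ~ {spot_diameter(div):.0f} m")
```

The results (~1.46 km for hydrogen, ~260 m for mercury) reproduce the sail diameters in Table 1.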

Numerical experiments with the model developed by Nordley [6], later replicated by Crowl, showed that momentum delivery efficiency is greatest when the velocity of the neutral beam is twice the sail velocity. The physics is straightforward: maximum energy efficiency comes when all of the energy goes to the sail and none remains in the beam. For a perfectly reflective sail, the beam bounces off at the same relative speed with which it impinges. If after reflection it is moving at zero velocity (so no energy is left in the beam), the initial beam velocity must be twice the sail velocity, so that it impinges on the sail at a relative velocity equal to the sail velocity.
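The argument above can be expressed as a one-line energy balance. For a non-relativistic elastic reflection, a particle of velocity u bouncing off a perfectly reflective sail moving at v leaves with velocity 2v − u; the delivered fraction of the beam's kinetic energy peaks at exactly 1 when u = 2v:

```python
# Sketch of the u = 2v optimum for a perfectly reflective sail
# (elastic, non-relativistic reflection in the lab frame).
def delivered_fraction(u: float, v: float) -> float:
    """Fraction of beam kinetic energy transferred to the sail."""
    reflected = 2.0 * v - u          # velocity of the reflected particle
    return 1.0 - (reflected / u) ** 2

print(delivered_fraction(2.0, 1.0))  # 1.0  (reflected beam left at rest)
print(delivered_fraction(3.0, 1.0))  # ~0.89 (beam too fast; energy wasted)
```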

We take the beam velocity at the end of acceleration to be twice the final sail velocity of 0.06 c. This energy is imparted to a hydrogen atom by accelerating it through a voltage of 6.76 MV. The mission parameters for a hydrogen beam then become those shown in Table 1.
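The per-particle energies in Table 1 follow from the beam velocity of 0.12 c. A non-relativistic estimate (good to about 1% at this speed) recovers both the hydrogen and mercury values:

```python
# Required per-particle energy for a beam at twice the sail velocity
# (0.12 c), using the non-relativistic E = (1/2) m u^2.
AMU = 1.66054e-27   # kg, atomic mass unit
EV = 1.602e-19      # J per electron-volt
u = 2 * 1.8e7       # beam velocity, m/s (0.12 c)

def beam_energy_ev(mass_amu: float) -> float:
    """Kinetic energy per particle, in eV."""
    return 0.5 * mass_amu * AMU * u**2 / EV

for name, mass_amu in (("hydrogen", 1.008), ("mercury", 200.59)):
    print(f"{name}: {beam_energy_ev(mass_amu)/1e6:.2f} MeV")
```

This gives ~6.8 MeV for hydrogen and ~1.35 GeV for mercury, matching Table 1.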

The lighter the particle to be accelerated, the shorter the beam driver can be at a fixed field gradient. However, shorter drivers using lighter particles, while they may cost less, require a larger sail because of the higher divergence of the beam.

For a second example, a mercury beam has a minimum divergence of 0.8 µradians, but must use far higher voltage because of the larger mass [4]. Mercury beam parameters are also given in Table 1.

Table 1. Parameters of neutral particle beam-driven sail probes

Beam and Sail Parameters   Hydrogen Beam             Mercury Beam
Beam divergence            4.5 µradian               0.8 µradian
Acceleration               10⁵ g's = 10⁶ m/s²        10⁵ g's = 10⁶ m/s²
Sail diameter              1.46 km                   260 m
Sail final velocity        0.06 c = 1.8 × 10⁷ m/s    0.06 c = 1.8 × 10⁷ m/s
Acceleration distance      1.6 × 10⁵ km (10⁻³ AU)    1.6 × 10⁵ km (10⁻³ AU)
Acceleration time          18 sec                    18 sec
Magsail mass               1 kg                      1 kg
Kinetic energy             1.6 × 10¹⁴ J              4 × 10¹⁴ J
Beam peak power            1.8 × 10¹³ W (18 TW)      1.8 × 10¹³ W (18 TW)
Beam voltage               6.76 MeV                  1.35 GeV
Beam current               2.66 MA                   13.3 kA

We will see that the beam divergence being roughly three orders of magnitude larger than previous studies have assumed (microradians rather than nanoradians) rapidly moves the beam generator toward being very large and expensive.

Because the hydrogen beam sail diameter in Table 1 is so large, we will focus the rest of this discussion on the mercury beam. Even so, the mercury-beam Magsail has a 260 m diameter and only 1 kg of mass: if the superconducting hoop has the density of steel, its thickness can be no larger than 0.44 mm; if the density of carbon, 0.8 mm.
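A quick mass-budget check makes the point, treating the hoop as a thin wire of uniform circular cross-section. The densities are assumed round numbers (steel ~7900 kg/m³, carbon fiber ~2000 kg/m³), not values from the text:

```python
# Mass-budget check on the 1 kg, 260 m diameter hoop: wire diameter for
# a thin hoop of uniform circular cross-section. Densities are assumed
# round numbers: steel ~7900 kg/m^3, carbon fiber ~2000 kg/m^3.
import math

HOOP_DIAMETER = 260.0            # m
MASS = 1.0                       # kg
LENGTH = math.pi * HOOP_DIAMETER # ~817 m of wire in the hoop

def wire_diameter(density: float) -> float:
    """Wire diameter (m) that puts the whole 1 kg budget into the hoop."""
    area = MASS / (density * LENGTH)       # cross-sectional area, m^2
    return 2.0 * math.sqrt(area / math.pi)

for name, rho in (("steel", 7900.0), ("carbon", 2000.0)):
    print(f"{name}: wire diameter ~ {wire_diameter(rho)*1000:.2f} mm")
```

Both cases come out at well under a millimeter, underscoring how severe the 1 kg constraint is.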

Magsail-Beam Interaction

Note that the sail diameter given in Table 1 is taken to be simply the diameter of the divergent beam encountering the Magsail. The diameter of the reflection region produced by the magnetic field of the sail could well be somewhat larger than the superconducting hoop diameter. (Of course, early in the acceleration the beam will hit the sail on axis, where the magnetic field is greatest.)

Early in the acceleration, the beam will have a considerably smaller spot size on the Magsail than it will later, striking it on axis where the magnetic field is greatest. Later on, as the Magsail flies away, the beam will reach a size dictated by its divergence. A question arises: does the initially high intensity of the beam on the magnetic field tend to push the sail's magnetosphere outward radially and make the effective diameter of the Magsail larger? If it does, then the beam divergence can be a bit larger and still strike the Magsail. Or, conversely, one could accelerate the Magsail for a longer time, because some of the beam would still be captured.

Simulations show the field being compressed, but they model the solar wind, which is taken to be uniform across the magnetic dipole. There are no simulations of a beam smaller than the sail. One would expect the loop-generated field to be compressed in the direction of motion, but it seems reasonable for it to be inflated radially, especially if charged particles are trapped in it for significant periods of time.

Andrews and Zubrin have done single-particle numerical calculations that do not model dynamic effects (such as field distortions from magnetic pressure) and do not include any such "inflation" of the mirror due to trapped beam ions [7].

Figure 1 is taken from the late Jordan Kare's NIAC report [8]. (In that study he considered using a nuclear detonation to accelerate a Magsail, which is not relevant to our discussion.) From the left, a uniform solar wind strikes the Magsail; in our case this would be a non-uniform neutral particle beam. The beam encounters the peak of the magnetic field along the axis of the sail. On the right of the figure, the field is distorted, producing a plasma interface shock against the magnetic field of the Magsail. Inflation of the magnetic field due to particle beam pressure could occur, but the effect would be to allow the beam divergence to be only a bit larger.

Note also that in this diagram the sail is shown dragging the payload behind it as it accelerates. If part of the particle beam reaches the payload, it could create substantial damage. Consequently, it might be better to distribute the payload around the superconducting hoop, where it would have the most protection against incoming charged particles. Note also that the stability of the superconducting loop on a beam of finite width has not been investigated to date; however, the Starshot program is looking at this issue extensively.

Figure 1: Interaction of streaming plasma flow with a Magsail. From Jordan Kare NIAC report [8].

The assumption that the moving magnetic field of the Magsail, seen in the frame of the beam as an electric field, ionizes the incoming neutral beam particles must be quantified.


Since beam divergence is fundamentally limited, high accelerations can be used to ensure the sail stays within the beam until it reaches the desired final velocity, even with microradian divergence. This leads to ultrahigh acceleration, 10⁵ g's (10⁶ m/s²), to reach 0.06 c. The Starshot system, a laser beam-driven 1 gram sail with the goal of reaching 0.2 c, has been quantified in a detailed system model by Kevin Parkin [9]; it too uses 10⁵ to 10⁶ g's. Magsail-beam interaction remains an aspect of this concept that needs further study, probably by simulations. This promising method for interstellar travel should receive further attention.


1. G.A. Landis, “Optics and Materials Considerations for Laser-Propelled Lightsail,” IAA-89-664, 1989.

2. G. Nordley and A. J. Crowl, “Mass Beam Propulsion, An Overview”, JBIS 68, pp. 153-166, 2015.

3. Alan Mole, “One Kilogram Interstellar Colony Mission”, JBIS, 66, pp.381-387, 2013.

4. J. Benford, "Beam-Driven Sails and Divergence of Neutral Particle Beams", JBIS 70, pp. 449-452, 2017.

5. Report to the APS of the Study on Science and Technology of Directed Energy Weapons, Rev. Mod. Phys. 59, number 3, part II, p. 80, 1987.

6. G. D. Nordley, "Relativistic Particle Beams for Interstellar Propulsion," JBIS 46, pp. 145-150, 1993.

7. Andrews, D. G. and R. M. Zubrin, "Magnetic Sails and Interstellar Travel", JBIS 43, pp. 265-272, 1990.

8. J. T. Kare, “High-acceleration Micro-scale Laser Sails for Interstellar Propulsion,” Final Report NIAC RG#07600-070, 2002.
www.niac.usra.edu/files/studies/final_report/597Kare.pdf. Accessed 03 Dec 2018.

9. K. L. G. Parkin, “The Breakthrough Starshot System Model”, Acta Astronautica 152, 370-384, 2018.



A Closer Look at Ultima Thule

“We think we are looking at the most primitive object ever imaged by a spacecraft,” said Jeff Moore (NASA Ames) at today’s Ultima Thule press conference. Moore, New Horizons geology and geophysics lead, went on to describe the process of innumerable particles accreting into two nodes through gentle, low-velocity collisions and interactions. We are truly looking at primordial materials with Ultima Thule, which is now revealed as a contact binary. Have a look.

Image: This image taken by the Long-Range Reconnaissance Imager (LORRI) is the most detailed of Ultima Thule returned so far by the New Horizons spacecraft. It was taken at 5:01 Universal Time on January 1, 2019, just 30 minutes before closest approach from a range of 18,000 miles (28,000 kilometers), with an original scale of 730 feet (140 meters) per pixel. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.

Bear in mind that New Horizons was working with a Sun 1,900 times fainter than a sunny day on Earth, as mission principal investigator Alan Stern reminded the audience when he unveiled the image above. “It’s a snowman, not a bowling pin,” joked Stern as the image was displayed. Bear in mind as well that these early images are just the beginning. The mission team has now downloaded less than 1 percent of the data available on the spacecraft’s solid state recorders.

One of Jeff Moore’s slides:

And here’s the slide Moore showed to illustrate the process of accretion:

Putting these two lobes together would, Moore said, be gentle enough that “…if you were in a car collision at this speed you wouldn’t bother to fill out the insurance forms.” These are high-Sun images, meaning we see little shadow, but the Sun angle will change as we move into later views at higher resolution. Even so, note the absence of obvious impact craters, and the mottled suggestions of hills and ridges. Also note the brightness of the ‘neck’ between the lobes.

Image: The first color image of Ultima Thule, taken at a distance of 85,000 miles (137,000 kilometers) at 4:08 Universal Time on January 1, 2019, highlights its reddish surface. At left is an enhanced color image taken by the Multispectral Visible Imaging Camera (MVIC), produced by combining the near infrared, red and blue channels. The center image taken by the Long-Range Reconnaissance Imager (LORRI) has a higher spatial resolution than MVIC by approximately a factor of five. At right, the color has been overlaid onto the LORRI image to show the color uniformity of the Ultima and Thule lobes. Note the reduced red coloring at the neck of the object. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.

Ultima Thule’s rotation period is currently pegged at 15 hours, plus or minus an hour. The object turns out to be red, as expected. As to reflectivity, deputy project scientist Kathy Olkin (SwRI) pointed out that the brightest areas reflect about 13 percent of incident sunlight, the darkest areas only 6 percent. Ultima Thule is, in other words, very dark, as dark as potting soil, Olkin added, with significant variation across the surface.



OSIRIS-REx: Orbital Operations at Bennu

Sometimes one mission crowds out another in the news cycle, which is what has happened recently with OSIRIS-REx. The study of asteroid Bennu, significant in so many ways, continues with the welcome news that OSIRIS-REx is now in orbit, making Bennu the smallest object ever to be orbited by a spacecraft. That milestone was achieved at 1943 UTC on December 31, which in addition to the upcoming New Year’s celebration was also deep into the countdown for New Horizons’ epic flyby of MU69, the Kuiper Belt object widely known as Ultima Thule.

Image credit: Heather Roper/University of Arizona.

I suppose the classic case of mission eclipse was the Voyager flyby of Uranus, which occurred on January 24, 1986. I was flying commercial students in a weekend course four days later in Frederick, MD, anxious to hear everything I could about the flyby, its images and their analysis, but mid-morning between flights I learned of the Challenger explosion, and the news for days, even weeks, was filled with little else. Now, of course, we can study the striking images of Uranus’ rings and the tortured moon Miranda, putting them in the great context of Voyager exploration, but for a time the story was muted.

OSIRIS-REx has a long period of mapping and sampling ahead of it, with the sample site selection gearing up, and we’ll have plenty to say about it in coming weeks. Ponder that the spacecraft orbits Bennu at a distance of just 1.75 kilometers from its center, a tighter orbit even than Rosetta’s, which circled 7 kilometers from the center of comet 67P/Churyumov-Gerasimenko.

From OSIRIS-REx flight dynamics system manager Mike Moreau (NASA GSFC):

“Our orbit design is highly dependent on Bennu’s physical properties, such as its mass and gravity field, which we didn’t know before we arrived. Up until now, we had to account for a wide variety of possible scenarios in our computer simulations to make sure we could safely navigate the spacecraft so close to Bennu. As the team learned more about the asteroid, we incorporated new information to hone in on the final orbit design.”

Using 3-D models of Bennu’s terrain created from OSIRIS-REx’s recent global imaging and mapping campaign, the mission team will intensify its navigation survey, analyzing changes in the spacecraft’s orbit to study the minute gravitational pull of the object, which should tighten existing models of not just the gravity field but Bennu’s thermal properties and spin rate. By the summer of 2020, controllers will be ready for the spacecraft to touch the surface for sampling operations, with the sample scheduled for return to Earth in September of 2023.



New Horizons Healthy and Full of Data

We’ve just learned that New Horizons is intact and functional, with a ‘phone home’ message at about 1530 UTC that checked off subsystem by subsystem — all nominal — amidst snatches of applause at the Johns Hopkins Applied Physics Laboratory. The solid state recorders (SSR) are full, with pointers indicating that flyby information is there for the sending, even as the spacecraft continues with outbound science. New Horizons will pass behind the Sun in early January, giving us a break in communications for a few days this weekend. Over the next 20 months we will get the entire package from Ultima Thule. Patience will be in order.

Here’s the approach image that was released yesterday.

Image: Just over 24 hours before its closest approach to Kuiper Belt object Ultima Thule, the New Horizons spacecraft has sent back the first images that begin to reveal Ultima’s shape. The original images have a pixel size of 10 kilometers (6 miles), not much smaller than Ultima’s estimated size of 30 kilometers (20 miles), so Ultima is only about 3 pixels across (left panel). However, image-sharpening techniques combining multiple images show that it is elongated, perhaps twice as long as it is wide (right panel). This shape roughly matches the outline of Ultima’s shadow that was seen in observations of the object passing in front of a star made from Argentina in 2017 and Senegal in 2018. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.

And here’s the best approach image, released a few minutes ago at the press briefing.

The bi-lobate structure is obvious, but is this a single object or two objects in tight orbit around each other? We should have the answer to that question in an image that will be released tomorrow. Project scientist Hal Weaver displayed the slide below showing the shape and spin of Ultima. The lack of a lightcurve is explained by New Horizons approaching nearly along Ultima’s axis of rotation.

Some background thoughts:

Ultima Thule has pushed New Horizons to its limits. Mission principal investigator Alan Stern put it best at yesterday’s mid-afternoon news conference when he noted “We are straining at the capabilities of this spacecraft. There are no second chances for New Horizons.”

Where the primary mission had been the long-studied flyby of Pluto/Charon, whose orbit had the benefit of decades of analysis, Ultima Thule presented controllers with an object not known until 2014, when it was discovered as part of the deliberate hunt for a Kuiper Belt object within range. Thus much about the orbit was unknown, making for what Stern described as a ‘tough intercept.’ Factor in the increased distance from the Sun far beyond Pluto and its effects on lighting conditions, as well as a power generator now producing less wattage because of its age.

Fortunately, LORRI, the Long Range Reconnaissance Imager, had spotted Ultima as far back as August 16, and the spacecraft had been imaging the object ever since, using long exposure times and co-adding procedures in which multiple optical navigation images are layered over each other. In the last month of the approach, the motion of the target could be seen, as mission project manager Helene Winters showed graphically at the same news event. Hazards like moons and rings were ruled out and the optimal trajectory, with approach to within 3500 kilometers, was available. If all has gone well, the early imagery will give way to fine detail.

1.6 billion kilometers beyond Pluto, New Horizons needed to hit a 40 square mile box with a timing window of 80 seconds, an epic feat of navigation that will surely wind up discussed in the next edition of David Grinspoon and Alan Stern’s book Chasing New Horizons (Picador, 2018), unless the duo decide to spin Kuiper Belt exploration into a book of its own. But I think not. New Horizons’ story should be seen whole, a continuing narrative pushed to its limits and, like the Voyagers that preceded it to the system’s edge, still returning priceless data.



Ultima Thule Flyby Approaches

Despite the partial government shutdown underway, the New Horizons flyby of Ultima Thule is happening as scheduled, the laws of physics having their own inevitability. Fortunately, NASA TV and numerous social media outlets are operational despite the shutdown, and you’ll want to keep an eye on the schedule of televised events as well as the New Horizons website and the Johns Hopkins Applied Physics Laboratory YouTube channel.

Image: New Horizons’ path through the solar system. The green segment shows where New Horizons has traveled since launch; the red indicates the spacecraft’s future path. The yellow names denote the Kuiper Belt objects New Horizons has observed or will observe from a long distance. (NASA/JHUAPL/SwRI).

We’re close enough now, with flyby scheduled for 0533 UTC on January 1, that the mission’s navigation team has been tightening up its estimates of Ultima Thule’s position relative to the spacecraft, key information when it comes to the timing and orientation of New Horizons’ observations. Raw images from the encounter will be available here. Bear in mind how tiny this object is — in the range of 20 to 30 kilometers across — so that we have yet to learn much about its shape and composition, though we’ve already found that it has no detectable light curve.

On the latter point, mission principal investigator Alan Stern (SwRI):

“It’s really a puzzle. I call this Ultima’s first puzzle – why does it have such a tiny light curve that we can’t even detect it? I expect the detailed flyby images coming soon to give us many more mysteries, but I did not expect this, and so soon.”

Thus the mission proceeds in these last 24 hours before flyby with grayscale, color, near-infrared and ultraviolet observations, along with longer-exposure imaging to look for objects like rings or moonlets around Ultima. Closest approach is to be 3,500 kilometers at a speed of 14.43 kilometers per second. JHU/APL is reporting that the pixel sizes of the best expected color images, grayscale images and infrared spectra will be 330 meters, 140 meters and 1.8 kilometers, respectively, with possible images at 33-meter grayscale resolution depending on the pointing accuracy of LORRI, the Long Range Reconnaissance Imager.

Image: New Horizons’ cameras, imaging spectrometers and radio science experiment are the busiest members of the payload during close approach operations. New Horizons will send high-priority images and data back to Earth in the days surrounding closest approach; placed among the data returns is a status check – a “phone home signal” from the spacecraft, indicating its condition. That signal will need just over 6 hours, traveling at light speed, to reach Earth. (NASA/JHUAPL/SwRI).

Post flyby, New Horizons will turn its ultraviolet instrument back toward the Sun to scan for UV absorption by any gases the object may be releasing, while simultaneously renewing the search for rings. Scant hours after the flyby, New Horizons will report back on the success of the encounter, after which the downlinking of approximately 7 gigabytes of data can begin. The entire downlink process, as at Pluto/Charon, is lengthy, requiring about 20 months to complete.

Let’s keep in mind that, assuming all goes well at Ultima Thule, we still have a working mission in the Kuiper Belt, one with the potential for another KBO flyby, and if nothing else, continuing study of the region through April of 2021, when the currently funded extended mission ends (a second Kuiper Belt extended mission is to be proposed to NASA in 2020). The Ultima Thule data return period will be marked by continuing observation of more distant KBOs even as New Horizons uses its plasma and dust sensors to study charged-particle radiation and dust in the Kuiper Belt while mapping interplanetary hydrogen gas produced by the solar wind.

So let’s get this done, and here’s hoping for a successful flyby and continued exploration ahead! It will be mid-afternoon UTC on January 1 (mid-morning Eastern US time) when we get the first update on the spacecraft’s condition, with science data beginning to arrive at 2015 UTC, and a first image about 100 pixels across (with more science data coming in) on January 2 at 0155 UTC. The best imagery is going to take time to be released, perhaps becoming available by the end of February. We’ll be talking about Ultima Thule a good deal between now and then.



Exoplanet Imaging from Space: EXCEDE & Expectations

We are entering the greatest era of discovery in human history, an age of exploration that the thousands of Kepler planets, both confirmed and candidate, only hint at. Today Ashley Baldwin looks at what lies ahead, in the form of several space-based observatories, including designs that can find and image Earth-class worlds in the habitable zones of their stars. A consultant psychiatrist at the 5 Boroughs Partnership NHS Trust (Warrington, UK), Dr. Baldwin is likewise an amateur astronomer of the first rank whose insights are shared with and appreciated by the professionals designing and building such instruments. As we push into atmospheric analysis of planets in nearby interstellar space, we’ll use tools of exquisite precision shaped around the principles described here.

by Ashley Baldwin

This review is going to look at the current state of play with respect to direct exoplanet imaging. To date this has only been done from ground-based telescopes, limited by atmospheric turbulence to wide-orbit, luminous young gas giants. However, the imaging technology that has been developed on the ground can be adapted and massively improved for space-based imaging. The technology to do this has matured immeasurably over even the last 2-3 years, and we stand on the edge of the next step in exoplanet science. Not least because of a disparate collection of “coronagraphs” – descendants of the simple physical block that French astronomer Bernard Lyot, who lends his name to one type of coronagraph, placed in the optical pathway of telescopes designed to image the Sun’s corona.

This is an instrument that, in combination with pioneering ground-based work on telescope “adaptive optics” systems and advanced infrared sensors in the late 1980s and early ’90s, has progressed over the last ten years or so into the design of space-based instruments – later generations of which now drive telescopes like the 2.4m WFIRST, 0.7m EXCEDE and 4m HabEX. Different coronagraphs work in different ways, but the basic principle is the same. On-axis starlight is blocked out as much as possible, creating a “dark hole” in the telescope field of view where much dimmer off-axis exoplanets can then be imaged.

Detailed exoplanetary characterisation including formation and atmospheric characteristics is now within tantalising reach. Numerous flagship telescopes are at various stages of development, awaiting only the eventual launch of the James Webb Space Telescope (JWST) – and the resolution of its cost overrun – before proceeding. Meantime I’ve taken the opportunity this provides to review where things are by looking at the science through the eyes of an elegant telescope concept called EXCEDE (Exoplanetary Circumstellar Environment & Disk Explorer), proposed for NASA’s Explorer program to observe circumstellar protoplanetary and debris discs and study planet formation around nearby stars of spectral classes M to B.

Image: French astronomer Bernard Lyot.

Although only a concept and not yet selected for development, I believe EXCEDE – or something like it – may yet fly in some iteration or other, bridging the gap between lab maturity and proof of concept in space and in so doing hastening the move to the bigger telescopes to come. Two of these, WFIRST (Wide Field Infrared Survey Telescope) and HabEX (Habitable Exoplanet Imaging Mission), also get coverage here.

Why was telescope segmented deployability so aggressively pursued for the JWST?

“Monolithic”, one-piece mirror telescopes are heavy and bulky – which gives them their convenient rigid stability, of course.

However, even a 4m monolithic mirror-based telescope would take up the full 8.4m fairing of the proposed SLS Block 1B, and with a starshade added would only just fit in lengthways if it had a partially deployable “scarfed” baffle. The telescope would mass around 20 tonnes built from conventional materials, though if built with lightweight silicon carbide, already proven by the success of ESA’s 3.5m Herschel telescope, it would come in at about a quarter of this mass.

Big mirrors made out of much heavier glass ceramics like Zerodur have yet to be used in space beyond the 2.4m Hubble and would need construction of 4m-sized test “blanks” prior to incorporation in a space telescope. Bear in mind too that Herschel had to carry four years’ worth of liquid coolant in addition to propellant. With minimal modification, a similarly proportioned silicon carbide telescope might fit within the fairing of a modified New Glenn launcher – provided NASA shakes off its reticence about using silicon carbide in space telescope construction, something that may yet be driven, like JWST before it, by launcher availability, given the uncertain future of the SLS and especially its later iterations.

Meantime, at JWST’s conception there just wasn’t any suitable heavy lift/big fairing rocket available (nor indeed is there now!) to get a single 6.5m mirror telescope into space – especially not to the prime observation point at the Sun/Earth L2 Lagrange point, 1.5 million kilometers (about 900,000 miles) away in deep space. And that was the aperture deemed necessary to be a worthy successor to Hubble.

An answer was found in a Keck-style segmented mirror which could be folded up for launch and then deployed after launch. Cosmic origami if you will (it may be urban myth but rumour has it origami experts were actually consulted).

The mistake was in thinking that transferring the well established principle of deployable space radio antennae to visible/IR telescopes would be (much) easier than it turned out to be. The initially low cost “evolved”, and as it did, so did the telescope and its mission: from infrared cosmology telescope to “Hubble 2” and finally to exoplanet characteriser as that new branch of astronomy arose in the late nineties.

A giant had been woken and filled with a terrible resolve.

The killer for JWST hasn’t been the optical telescope assembly itself so much as folding up the huge attached sunshade for launch and then deploying it. That’s what went horribly wrong with “burst seams” in the latest round of tests and which continues to cause delays. Too many moving parts, too – 168 if I recall. Moving parts and hard vacuums just don’t mix, and the answer isn’t something as simple as lubricants, given that conventional ones would evaporate in space. That leaves powders, the limitations of which were seen with the failure of Kepler’s infamous reaction wheels. Cutting-edge a few years ago, reaction wheels are now deemed obsolete for precision imaging telescopes, replaced instead by “microthrusters” – a technology that has matured quietly on the sidelines and will be employed on the upcoming ESA Euclid and then NASA’s HabEX.

From WFIRST to HabEX

The Wide Field Infrared Survey Telescope, WFIRST, is monolithic more by circumstance than design, and sadly committed to using reaction wheels – six instead of Kepler’s paltry four, admittedly. I have written about this telescope before, but a lot of water, as they say, has flowed under the bridge since then. An ocean’s worth indeed, and with wider implications – the link, as ever, being exoplanet science.

To this end, any overview of exoplanet imaging cannot be attempted without starting with JWST and its ongoing travails, before revisiting WFIRST and segueing into HabEX, then finally seeing how all this can be applied. I will do this by focusing on an older but still robust and rather more humble telescope concept, EXCEDE.

Reaction wheels – so long the staple of telescope pointing, but now passé. Why? Exoplanet imaging. The vibration the wheels cause, though slight, can impact imaging stability even at the larger 200mas inner working angle (IWA) of the WFIRST coronagraph, IWA being defined as the nearest angle to the star at which maximum contrast can be maintained. In the case of the WFIRST coronagraph this is a contrast of 6e-10 (which already significantly exceeds its original design parameters).

The angular separation of a planet from its star, or “elongation”, e, can be expressed as e = a/d, where a is the planetary semi-major axis expressed in Astronomical Units (AU) and d is the distance of the star from Earth in parsecs (1 parsec = 3.26 light years), giving e in arcseconds. By way of illustration, the Earth as imaged from ten parsecs would thus appear 100mas from the Sun – but would require a minimum 3.1m aperture scope to capture enough light and provide enough angular resolution of its own. Angular resolution of a telescope is its ability to resolve two separate points and is expressed as the related λ/D, where λ is the observation wavelength and D is the aperture of the telescope, both in meters, giving an angle in radians. So the shorter the wavelength and the bigger the aperture, the greater the angular resolution.
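As a quick numerical check on these two relations, here is a minimal Python sketch (the function names are mine, and the 0.5 micron wavelength is an illustrative assumption):

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ~206,265 arcseconds per radian

def elongation_mas(a_au, d_pc):
    """Angular star-planet separation e = a/d, in milliarcseconds.

    With a in AU and d in parsecs, a/d gives arcseconds directly."""
    return a_au / d_pc * 1000.0

def resolution_mas(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution lambda/D, converted to mas."""
    return wavelength_m / aperture_m * ARCSEC_PER_RAD * 1000.0

# Earth analog at 10 parsecs: 1 AU / 10 pc = 0.1 arcsec = 100 mas
print(elongation_mas(1.0, 10.0))  # 100.0

# lambda/D for a 3.1 m aperture at an assumed 0.5 microns: ~33 mas,
# comfortably below the 100 mas separation it needs to resolve
print(round(resolution_mas(0.5e-6, 3.1), 1))
```

The diffraction limit alone is not the whole story, which is where the coronagraph multiplier discussed next comes in.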

A coronagraph in the optical pathway will impact on this resolution according to the related equation nλ/D, where n is a nominal integer set somewhere between 1 and 3 and dependent on coronagraph type, with a lower number giving a smaller inner working angle nearer to the resolution/diffraction limit of the parent telescope. In practice n=2 is the best currently theoretically possible for coronagraphs, with HabEX set at 2.4 λ/D. EXCEDE’s PIAA coronagraph rather optimistically aimed for 1 λ/D – currently unobtainable, though later VVC iterations or perhaps a revised PIAA might yet achieve this, and what better way to find out than via a small technological demonstrator mission?

This also shows that searching for exoplanets is best done at shorter visible wavelengths between 0.4 and 0.55 microns, with telescope aperture determining how far from Earth planets can be searched for at different angular distances from their star. This in turn will govern the requirements determining mission design: for a habitable zone imager like HabEX, n=2.4 and a 4m aperture allow a search of the habitable zones of Sun-like stars out to a distance of about 12 parsecs. Coronagraph contrast performance varies according to design and wavelength, so higher values of n, for instance, might still allow precision imaging further out from a star, perhaps looking for Jupiter/Neptune analogues or exo-Kuiper belts. Coronagraphs also have outer working angles, the maximum angular separation that can be viewed between a star and planet or planetary system (cf. starshades, whose outer working angle is limited only by the field of view of the host telescope and is thus large).

Any such telescope, be it WFIRST or HabEX, will for success require numerous imaging impediments – so-called “noise” – to be adequately mitigated. Noise from many sources: target star activity, stellar jitter, telescope pointing and drift, optical aberrations. First, “low-order wavefront errors”, accounting for up to 90% of all telescope optical errors (ground and space) and including defocus, pointing errors like tip/tilt and telescope drift occurring as a target is tracked, due for instance to variations in exposure to sunlight at different angles. Then classical “higher-order errors” such as astigmatism, coma, spherical aberration and trefoil, due to imperfections in telescope optics. Individually tiny but unavoidably cumulative.

It cannot be emphasised enough that for exoplanet imaging, especially of Earth-mass habitable zone planets, we are dealing with required precision levels down to hundredths of billionths of a meter. Picometers. Tiny fractions of even short optical wavelengths. Such wavefront errors are by far the biggest obstacle to be overcome in high-contrast imaging systems. The image above makes the whole process seem so simple, yet in practice this remains the biggest barrier to direct imaging from space, and even more so from the ground.

The delay between a (varying) wavefront error being picked up by the sensor, fed to the onboard computer and passed in turn to the deformable correcting mirror (along with parallel correction of tip/tilt pointing errors by a delicate “fast steering mirror”) – and the precision of that correction – has historically been too great. This loop is the central core of the adaptive optics (AO) system.

It has only been over the last few years that there have been the essential breakthroughs that should finally allow elegant theory to become pragmatic practice, through a combination of wavefront correction via improved deformable mirrors and wavefront sensors and their enabling computer processing speed, all working in tandem. This has led to the creation of so-called “extreme adaptive optics”, with the general rule that the shorter the observed wavelength, the greater the sensitivity of the required AO. (The problem is an even larger impediment on the ground, where the atmosphere adds an extra layer of difficulty.) These elements combine to allow a telescope to find and image tiny, faint exoplanets and, more importantly still, to maintain that image for the tens or even hundreds of hours necessary to locate and characterise them. Essentially, a space telescope’s adaptive optics.

A word here. Deformable mirrors, fast steering mirrors, wavefront sensors, fine guidance sensors & computers, coronagraphs, microthrusters, software algorithms. All of these, and more, add up to a telescope’s adaptive optics – originally developed and then evolved on the ground, this instrumentation is now being adapted in turn for use in space. It all shares the feature of modifying and correcting any errors in the wavefront of light entering a telescope pupil before it reaches the focal plane and sensors.

Without it imaging via big telescopes would be severely hampered and the incredible precision imaging described here would be totally impossible.

That said, the smaller the IWA, the greater the sensitivity to noise – especially vibration and line-of-sight “tip/tilt” pointing errors – and the greater the need for the highest performance, so-called “extreme” adaptive optics. HabEX has a tiny IWA of 65 mas for its coronagraph (to allow imaging of at least 57% of all Sun-like star habitable zones out as far as 12 parsecs) and operates at a raw contrast as low as 1e-11 – suppressing starlight to one part in a hundred billion!

Truly awesome. To be able to image at that kind of level is frankly incredible when this was just theory less than a decade ago.

That’s where the revolutionary Vector Vortex “charge” coronagraph (VVC) now comes in – the “charge 6” version still offers a tiny IWA but is less sensitive to all forms of noise – especially the low-order wavefront errors described above, noise arising from small but cumulative errors in the telescope optics – than other ultra high performance coronagraphs.

This played a major if not pivotal role in the VVC 6 selection for HabEX. The downside (compromise) is that only 20% of the light incident on the telescope pupil gets through to the focal-plane instruments. This is where the unobscured, largish 4m aperture of HabEX helps, to say nothing of removing superfluous causes of diffraction and additional noise in the optical path.

There are other VVC versions, the “charge 2” for instance (see illustration), that allow 70% throughput – but the charge 2 is so sensitive to noise as to be ineffectual at high contrast and low IWA. Always a trade-off. That said, at the higher IWA (144mas) and lower contrast (1e-8 raw) of a small imager telescope like the Small Explorer Programme concept EXCEDE, where throughput really matters, the charge 2 might work with suitable wavefront control. With a raw contrast (the contrast provided by the coronagraph alone) goal of < 1e-8, “post-processing” would bring this down to the 1e-9 required to meet the mission goals highlighted below. Post-processing involves increasing contrast after imaging and includes a number of techniques, of varying effectiveness, that together can increase contrast by up to an order of magnitude or more. For brevity I will mention only the main three here. Angular differential imaging involves rotating the image (and telescope) through 360 degrees. Stray starlight, so-called “speckles”, consists of artefacts that move with the image.

A target planet does not, allowing the speckles to be removed and thus increasing the contrast. This is the second most effective type of post-processing. Speckles also tend to be wavelength-specific, so looking at different wavelengths across the spectrum again allows them to be removed, with a planetary target persisting through the various wavelengths: so-called spectral differential imaging.

Finally, light reflected from a target tends to be polarised, unlike direct starlight, and thus polarised sources can be picked out from the background of unpolarised leaked starlight speckles with the use of an imaging polarimeter (see below): polarimetric differential imaging.

Of the three, the last is generally the most potent and is specifically exploited by EXCEDE. Taken together these processes can improve contrast by at least an order of magnitude. Enter the concept conceived by the Steward Observatory at the University of Arizona: EXCEDE.
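The logic of polarimetric differential imaging can be shown with a toy model – a sketch of the principle, not any mission pipeline (the grid size, planet position and 30% polarisation fraction are all invented for illustration). Unpolarised speckle light lands equally in two orthogonal polarisation channels and cancels when they are differenced, while the partially polarised planet signal survives:

```python
import random

random.seed(42)
size = 64

# Unpolarised leaked starlight ("speckles") splits equally between the
# two orthogonal polarisation channels of an imaging polarimeter.
speckles = [[random.expovariate(1 / 100.0) for _ in range(size)] for _ in range(size)]
ch_a = [[v / 2 for v in row] for row in speckles]
ch_b = [[v / 2 for v in row] for row in speckles]

# Hypothetical planet at (row 40, col 20): faint, but 30% polarised,
# so its light splits unevenly between the channels.
planet_total, pol_fraction = 20.0, 0.3
ch_a[40][20] += planet_total / 2 * (1 + pol_fraction)
ch_b[40][20] += planet_total / 2 * (1 - pol_fraction)

# Differencing the channels cancels the unpolarised speckles exactly in
# this toy, leaving only the polarised fraction of the planet's light.
diff = [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(ch_a, ch_b)]
peak = max(((r, c) for r in range(size) for c in range(size)),
           key=lambda rc: diff[rc[0]][rc[1]])
print(peak)          # (40, 20): the planet location stands out
print(diff[40][20])  # ~6.0 = planet_total * pol_fraction
```

In reality speckles are only mostly unpolarised and detector noise intrudes, so the cancellation is partial – hence the order-of-magnitude contrast gain quoted above rather than perfection.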

EXCEDE: The Exoplanetary Circumstellar Environment & Disk Explorer

Using a PIAA coronagraph with a best IWA of 144 mas (λ/D) and a raw contrast of 1e-8, the EXCEDE (see illustration) proposal consisted of a three year mission that would involve:

1/ Exploring the amount of dust in habitable zones

2/ Determining if said dust would interfere with future planet-finding missions – the amount of zodiacal dust in the Solar System is set at 1 “zodi”, and exozodiacal dust around other stars is expressed in multiples of this, though a value of 1 zodi appears atypically low, with most observed stellar systems having (far) higher values

3/ Constraining the composition of material delivered to newly formed planets

4/ Investigating what fraction of stellar systems have large planets in wide orbits (Jupiter & Neptune analogues)

5/ Observing how protoplanetary disks shape solar system architectures and how they relate to protoplanets.

6/ Measuring the reflectivity of giant planets and constraining their compositions.

7/ Demonstrating advanced space coronagraphic imaging

A small and light telescope requiring only a small and cheap launcher to get it to its efficient but economic observation point: a 2,000 km “sun synchronous” low Earth orbit, whereby the telescope would be in a near-polar orbit such that its position with respect to the Sun would remain the same at all points, allowing orientation of its solar panels and field of view for near-continual viewing. EXCEDE would view up to 350 circumstellar and protoplanetary disks and related giant planets, visualised out to a hundred parsecs in 230 star systems.

The giant planets would be “cool” Jupiters and Neptunes located within seven to ten parsecs and orbiting between 0.5 and 7 AU from their host stars – often in the stellar habitable zone.

No big bandwidths here: the coronagraph will image at just two wavelengths, 0.4 and 0.8 microns – short optical wavelengths chosen to maximise coronagraph IWA and utilise an economical CCD sensor. The giant planets will be imaged for the first time (with a contrast well beyond any theoretical maximum from even a high performance ELT), with additional information provided via follow-up RV spectroscopy studies – or Gaia astrometry for subsequent concepts. Circumstellar disks have been imaged before by Hubble, but its older coronagraphs don’t allow anything like the same detail and are orders of magnitude short of the necessary contrast and inner working angle to view into the habitable zones of stars.

High contrast imaging in visual light is thus necessary to clearly view close-in circumstellar and protoplanetary disks around young and nearby stars, looking for their interaction with protoplanets and especially for the signature of water and organic molecules.

Exozodiacal light arises from starlight reflecting off the dust and asteroid/cometary rubble within a star system, material that, along with the disks above, plays a big role in the development of planetary systems. It also inhibits exoplanetary imaging by acting as a contaminating light source in the dark field created around a star by a coronagraph with the goal of isolating planet targets. Especially warm dust close to a star, e.g. in its habitable zone – a specific target for EXCEDE, whose findings could supplement ground-based studies in mapping nearby systems for this.

The Spitzer and Herschel space telescopes (with ALMA on the ground) both imaged exozodiacal light and circumstellar disks, but at longer infrared wavelengths, and thus saw much cooler material lying further from the parent stars – more Kuiper belt than asteroid belt. Surveying exozodi levels now would make later habitable planet imaging surveys more efficient, since above a certain level of “zodis” imaging becomes more difficult (larger telescope apertures tolerate more zodis), with a median tolerable value of 26 zodis for a HabEX 4m scope. Exozodiacal light is yet another cause of background imaging noise – cf. the Solar System’s own “zodiacal” light, essentially the same phenomenon seen from within our own system (see illustration).

EXCEDE payload:

  • 0.7m unobscured off-axis lightweight telescope
  • Fine steering mirror for precision pointing control
  • Low order wavefront sensor for focus and tip/tilt control
  • MEMs deformable mirror for wavefront error control (see below)
  • PIAA coronagraph
  • Two band imaging polarimeter

EXCEDE as originally envisaged used a Phase Induced Amplitude Apodisation (PIAA) coronagraph (see illustration), which also has a high throughput ideal for a small 0.7m off-axis telescope.

It was proposed to have an IWA of 144 mas, enough at 5 parsecs to image in or around habitable zones – though not any terrestrial planets. However, this type of coronagraph has optics that are very difficult to manufacture, and technological maturity has come slowly despite its great early promise (see illustration). To this end it has for the time being been superseded by other less potent but more robust and testable coronagraphs such as the Hybrid Lyot (see illustration for comparison) earmarked for WFIRST and, more recently, the related VVC with its greater performance and flexibility. Illustrations of these are available for those who are interested in their design and also as a comparison. Ultimately, though, one way or the other they block or “reject” the light of the central star and in doing so create a dark hole in the telescope field of view in which dim objects like exoplanets can be imaged as point sources, mapped and then analysed by spectrometry. These are exceedingly faint. The dimmest star visible to the naked eye has a magnitude of about 6 in good viewing conditions; a nearby exoplanet might have a magnitude of 25 or fainter. Bear in mind that each successive magnitude is about 2.5 times fainter than its predecessor. Dim!
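That magnitude arithmetic is easy to make concrete. A short Python sketch (the contrast-to-magnitude conversion is standard photometry; the pairing of example values is mine):

```python
import math

def flux_ratio(m_bright, m_faint):
    """Each magnitude step is a factor of 100**(1/5) ~ 2.512 in brightness,
    so the flux ratio between two magnitudes is 10**(0.4 * (m_faint - m_bright))."""
    return 10 ** (0.4 * (m_faint - m_bright))

def contrast_to_delta_mag(contrast):
    """Convert a planet/star flux contrast (e.g. one part in 1e8) to a
    magnitude difference via delta_m = -2.5 * log10(contrast)."""
    return -2.5 * math.log10(contrast)

# Magnitude 25 exoplanet vs. the magnitude ~6 naked-eye limit:
print(f"{flux_ratio(6, 25):.2e}")    # ~4e7: tens of millions of times fainter

# Raw contrasts quoted in the text, as magnitudes below the host star:
print(contrast_to_delta_mag(1e-8))   # 20.0 magnitudes (EXCEDE-class raw contrast)
print(contrast_to_delta_mag(1e-11))  # 27.5 magnitudes (HabEX's deepest raw contrast)
```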

Returning to the VVC, a variant of it could easily be substituted instead, without impacting excessively on what remains a robust design and a practical yet relevant mission concept. Off-axis silicon carbide telescopes of the type proposed for EXCEDE are readily available. Light, strong, cheap and unobscured, these offer the same imaging benefits as HabEX on a smaller scale. EXCEDE’s three year primary mission should locate hundreds of circumstellar/protoplanetary discs and numerous nearby cool gas giants along with multiple protoplanets – revealing their all-important interaction with the disks. The goal is quite unlike that of ACEsat, a similar concept telescope, which I have described in detail before [see ACEsat: Alpha Centauri and Direct Imaging]; the latter prioritised finding planets around the two principal Alpha Centauri stars.

The EXCEDE scope was made to fit a NASA Small Explorer programme $170 million budget, but could easily be scaled according to funding; Northrop Grumman manufactures such telescopes up to an aperture of 1.2m. The limited budget excludes the use of a full spectrograph; instead the concept is designed to look at narrow visual spectrum bandwidths within the coronagraph’s etendue [a property of light in an optical system, which characterizes how “spread out” the light is in area and angle] that coincide with emission from elements and molecules within the planetary or disk targets, water in particular. All this with a cost effective CCD-based sensor.

Starlight reflected from an exoplanet or circumstellar disc tends to be polarised, unlike direct starlight, so a compact and cheap imaging polarimeter helps pick the targets out of the image formed at the pupil after the coronagraph has removed some, but not all, of the light of the central star. Some of the starlight “rejected” by the coronagraph is directed to a sensor linked to computers that calculate the various wavefront errors and other sources of noise, then send compensatory instructions to the deformable mirrors and fast steering mirror in the optical pathway.
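That sensing-and-correction loop can be sketched, in highly simplified form, as a plain integrator: measure the residual wavefront error, scale it by a gain, and fold it into the deformable mirror commands. Everything here (the actuator count, gain and toy "sensor") is an illustrative assumption, not the WFIRST or EXCEDE implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_act = 32                             # toy actuator count (EXCEDE's MEMs have ~2000)
aberration = rng.normal(0, 50, n_act)  # static wavefront error per actuator, in nm
dm = np.zeros(n_act)                   # deformable-mirror commands, in nm
gain = 0.5                             # integrator gain (assumed)

def sense(residual):
    # Toy sensor: a real low-order wavefront sensor estimates errors
    # from the starlight "rejected" by the coronagraph.
    return residual + rng.normal(0, 1, n_act)  # ~1 nm of sensing noise

for _ in range(20):
    residual = aberration - dm    # wavefront error left after correction
    dm += gain * sense(residual)  # fold the measurement into the mirror shape

print(f"residual wavefront error: {np.std(aberration - dm):.2f} nm RMS")
```

After a couple of dozen iterations the residual collapses from tens of nanometres down towards the sensing-noise floor – which is exactly why the quality of the wavefront sensor, and the number of actuators available to act on its output, matter so much.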

Then there are the all-important deformable mirrors, manipulated from beneath by multiple mobile actuators – especially the cheap but efficient new MEMs (micro-electro-mechanical mirrors), with 2000 actuators per mirror for EXCEDE, climbing to 4096 or more for the more potent HabEX. These have yet to be used in space; WFIRST is committed to an older, less efficient and more expensive “piezoelectric” alternative.

So this might be an ideal opportunity to show that MEMs work, on a smaller, less risky scale with a big science return. MEMs may remain untested in space – especially the later, more sensitive multi-actuator variety – but the more actuators, the better the wavefront control.

EXCEDE was originally conceived and submitted in 2011, unsuccessfully – largely due to the immaturity at that time of its coronagraph and of related technology like MEMs. The concept remains sound, and the technology has now moved forward apace thanks to the incredible development work done by numerous US centres (NASA Ames, JPL, Princeton, the Steward Mirror Lab and the Subaru telescope) on the Coronagraphic Instrument, CGI, for WFIRST. I am not aware, though, of any current plans to resurrect the concept.

However, the need remains stronger than ever and the time would seem more propitious. Exozodiacal light is a major impediment to exoplanet imaging, so pre-surveying the systems that both WFIRST and HabEX will look at could save time and effort – to say nothing of the crucial understanding of planetary formation that imaging circumstellar discs around young stars will bring. Perhaps via a future NASA Explorer programme round, or via the European Space Agency’s recent “F class” $170 million call for submissions – possibly in collaboration with NASA, whose “missions of opportunity” programme allows materiel up to a value of $55 million to supplement international partner schemes. The next F-class mission also gets a “free” ride on the launcher that sends the exoplanet telescopes PLATO or ARIEL to L2 in 2026 or 2028. Add in an EXCEDE-class direct imager and you get an L2 exoplanet observatory.

Mauna Kea in space, if you will.

The 72m HabEX starshade has an IWA of 45 mas and a throughput of near 100% (as does the smaller version proposed for WFIRST) – by way of comparison, the overall light throughput of the obscured WFIRST is just 2%! A starshade also requires little of the telescope-side mitigation and adaptive optics that coronagraphs demand. This makes it ideal for the prolonged observation periods required for spectroscopic analysis of prime exoplanetary targets, where every photon counts – be it habitable zone planets with HabEX or a smaller-scale proof of concept for a starshade “rendezvous” mission with WFIRST.

By way of comparison, the proposed Exo-S Probe Class study (circa $1 billion) included an option for just such a WFIRST/starshade “rendezvous” mission, whereby a HabEX-like but smaller 30-34m self-propelled starshade joins WFIRST at the end of its five-year primary mission to begin a much deeper three-year exoplanet survey. Though considerably smaller than the HabEX starshade, it possesses the same benefits: high optical throughput (even more important on a non-bespoke, obscured and smaller 2.4m aperture), a small inner working angle (much less than with the WFIRST coronagraph), significantly reduced star/planet contrast requirements and, most important of all as we have already seen above, vastly reduced constraints on telescope stability and related wavefront control.

Bear in mind that WFIRST will still be using vibration-inducing reaction wheels for fine pointing. Operating at closer distances to the telescope than the HabEX starshade, the “slew” times between imaging targets would be significantly reduced too. This addition would increase the exoplanet return (in both numbers and characterisation) many fold, even to the point of a small chance of imaging potentially habitable exoplanets – the more so if there have been the expected advances in the post-processing algorithms required to increase contrast (see above) and in the multi-star wavefront control that permits imaging of promising nearby binary systems (see below). Just a few tens of millions of dollars are required to make WFIRST “starshade ready” prior to launch, keeping this option open for the duration.

The obvious drawback of this approach is the long time required to manoeuvre from one target to the next, along with the precision “formation flying” required between telescope and starshade (stationed tens of thousands of kilometres apart according to the observed wavelength). For HabEX, the tolerance is 250 km along the telescope-starshade axis, but just 1m laterally and one degree of starshade tilt.

So the observation strategy is staged. First the coronagraph searches for planets in each target star system over multiple visits, or “epochs”, spread across the orbital period of any prospective exoplanet. This helps map out the orbit and increases the chances of discovery. The inclination of an exoplanetary system relative to the solar system is unknown – unless it closely approaches 90 degrees (edge-on) and displays exoplanetary transits. So unless the inclination is zero degrees (the system faces us, lying in the plane of the sky like a saucer seen face-on), the apparent angular separation between an exoplanet and its parent star will vary across the orbital period. This can include a stretch during which the planet lies interior to the IWA of the coronagraph – potentially giving rise to false negatives. Multiple observation visits help compensate for this.
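The geometry behind those repeat epochs is easy to sketch. For an assumed circular orbit, the apparent star-planet separation depends on orbital phase and inclination, and can dip inside the coronagraph's IWA for part of the orbit. The orbit radius, distance and inclination below are illustrative assumptions; the 144 mas IWA is the figure quoted earlier in the text:

```python
import math

def projected_separation_mas(a_au, dist_pc, inclination_deg, phase_deg):
    """Apparent star-planet separation for a circular orbit.
    Projection onto the sky: x = a*cos(phase), y = a*sin(phase)*cos(i)."""
    i = math.radians(inclination_deg)
    ph = math.radians(phase_deg)
    sep_au = a_au * math.hypot(math.cos(ph), math.sin(ph) * math.cos(i))
    return 1000 * sep_au / dist_pc  # 1 AU at 1 pc subtends 1 arcsecond

IWA_MAS = 144                            # coronagraph IWA from the text
a_au, dist_pc, incl = 1.0, 5.0, 60.0     # illustrative habitable-zone planet
hidden = [ph for ph in range(0, 360, 10)
          if projected_separation_mas(a_au, dist_pc, incl, ph) < IWA_MAS]
print(f"orbital phases (deg) hidden inside the IWA: {hidden}")
```

A face-on system (inclination zero) keeps a constant 200 mas separation here and is always observable; tilt it to 60 degrees and a sizeable arc of the orbit slips inside the IWA – exactly the false-negative window that multiple epochs are designed to catch.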

Once the exoplanet discovery and orbital characteristics are constrained, starshade-based observations follow up. With its far larger light throughput (near 100%), the extra light allows detailed spectroscopy across a wide bandwidth and detailed characterisation of high-priority targets. For HabEX, this will include up to 100 of the most promising habitability prospects plus a representative sample of other targets. Increasing or reducing the distance between telescope and starshade allows analysis across different wavelengths.

In essence the receiver is “tuned in”, with smaller telescope/starshade separations for longer wavelengths. For HabEX, coverage extends from the UV through to 1.8 microns in the NIR. The coronagraph can characterise too if required, but is limited to multiple overlapping 20% bandwidths at much lower resolution because of its heavily reduced light throughput.
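This tuning works because a starshade's suppression holds over a band set roughly by the Fresnel number F = R²/(λz), where R is the starshade radius and z the separation: keeping F fixed, longer wavelengths demand smaller separations. The sketch below uses a 34m starshade (the rendezvous class mentioned above); the working Fresnel number is an assumed round figure, not a design value:

```python
def separation_m(radius_m, wavelength_m, fresnel_number):
    """Telescope-starshade separation keeping F = R^2/(lambda*z) fixed."""
    return radius_m**2 / (wavelength_m * fresnel_number)

R = 17.0   # radius of a 34 m rendezvous-class starshade, from the text
F = 15.0   # assumed working Fresnel number
for wl_nm in (300, 500, 1000, 1800):  # UV through the 1.8-micron NIR limit
    z = separation_m(R, wl_nm * 1e-9, F)
    print(f"{wl_nm:5d} nm -> separation {z/1e3:,.0f} km")
```

The separations come out in the tens of thousands of kilometres, shrinking as the wavelength grows – hence the long "retuning" manoeuvres between bands.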

Of note, presumed high-priority targets like the Alpha Centauri, Eta Cassiopeiae and 70 and 36 Ophiuchi systems are excluded. All are relatively close binaries, and as the coronagraph and especially the starshade have largish fields of view, light from the binary companion would contaminate the “dark hole” around the imaged star and mask any planet signal. (Background stars and galaxies pose the same issue, though they are much fainter and easier to counteract.) It is an unavoidable hazard of the “fast” F2 telescope employed – the F number being the ratio of focal length to aperture. A “slower”, higher-F-number scope would have a much smaller field of view, but would need to be longer and consequently even more bulky and expensive. F2 is another compromise, in this case driven largely by fairing size.
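That field-of-view trade is pure geometry: for a fixed detector at the focal plane, the angular field scales inversely with focal length f = N × D. A quick sketch, where the aperture and detector size are illustrative assumptions:

```python
import math

def fov_deg(aperture_m, f_number, detector_mm):
    """Angular field of view subtended by a detector at the focal plane."""
    focal_mm = aperture_m * 1000 * f_number
    return math.degrees(2 * math.atan(detector_mm / (2 * focal_mm)))

D, det = 0.7, 20.0  # 0.7 m aperture, 20 mm detector (assumed values)
for N in (2, 4, 8):
    print(f"F{N}: focal length {D*N:.1f} m, field {fov_deg(D, N, det):.3f} deg")
```

Going from F2 to F8 shrinks the field by a factor of four, which would keep binary companions out of frame – but only at the cost of a tube four times longer, which is exactly the fairing problem described above.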

Are you beginning to see the logic behind JWST a bit better now? As we saw with ACEsat, NASA Ames is looking to perfect software algorithms that work in conjunction with the telescope’s adaptive optics hardware (deformable mirrors and coronagraph) to compensate for the contaminating starlight from the off-axis binary companion.

This is only at an early stage of development in terms of contrast reduction, as can be seen in the diagram above, but it is proceeding fast – and software can be uploaded to any telescope mission at any time, up to and beyond launch.

Watch that space.

So exoplanetary science finds itself at a crossroads. Its technology is now advancing rapidly, but at a bad time for big space telescopes, with JWST languishing. I’m sure JWST will ultimately prove a qualified success, and its transit spectroscopy of planets like those around TRAPPIST-1 will open the way to habitable zone terrestrial planets and drive forward telescope concepts like HabEX – as will EXCEDE, or something like it, around the same time.

JWST’s delay holds up its successors in both time and funding. But lessons have been learned, and are likely to be put to good use – just as exoplanet science is exploding thanks to Kepler, with TESS only just started, PLATO to come and then the bespoke ARIEL transit spectroscopy telescope to follow. No huge leaps so much as incremental but accumulating gains, ARIEL moving on from merely counting exoplanets to provisional characterisation.

Then on to imaging via WFIRST before finally HabEX and characterisation proper. That is over a decade away, and in the meantime expect to see smaller exploratory imaging concepts capitalising on falling technology and launch costs to help mature and refine the techniques required for HabEX – to say nothing of whetting the appetite and keeping exoplanets firmly where they belong.

But to finish on a word of perspective. Just twenty-five years or so ago, the first true exoplanet was discovered. Now not only do we have thousands, with ten times that to come, but the technology is arriving to actually see and characterise them. Make no mistake, that is an incredible scientific achievement, as indeed is everything described here. The amount of light available for all exoplanet research is utterly minuscule, and the pace of progress in stretching its use so far has been incredible – and so quick. Not to resolve exoplanets, for sure (that would take hundreds of scopes operating in tandem over hundreds of kilometres), but to see them and scrutinise their telltale light, down to Earth-mass and below and, most crucially, in stellar habitable zones. Precision par excellence. Maybe even to find signs of life. Something philosophers have deliberated over for centuries and “imagined” at length can now be “imaged” at length.

At the forefront of astronomy, the public consciousness and in the eye of the beholder.


“A White Paper in Support of Exoplanet Science Strategy,” Crill et al., JPL, March 2018

“Technology Update,” Exoplanet Exploration Program, ExoPAG 18, Siegler & Crill, JPL/Caltech, July 2018

“HabEX Interim Report,” Gaudi et al., August 2018

“EXCEDE: Science, Mission Technology Development Overview,” Schneider et al., 2011

“EXCEDE Technology Development I,” Belikov et al., Proceedings of SPIE, 2012

“EXCEDE Technology Development II,” Belikov et al., Proceedings of SPIE, 2013

“EXCEDE Technology Development III,” Belikov et al., Proceedings of SPIE, 2014

“The Exozodiacal Dust Problem for Direct Imaging of ExoEarths,” Roberge et al., Publications of the Astronomical Society of the Pacific, March 2012

“Numerical Modelling of Proposed WFIRST-AFTA Coronagraphs and Their Predicted Performances,” Krist et al., Journal of Astronomical Telescopes, Instruments & Systems, 2015

“Combining High-Dispersion Spectroscopy with High Contrast Imaging: Probing Rocky Planets Around Our Nearest Stellar Neighbours,” Snellen et al., Astronomy & Astrophysics, 2015

“Exo-S Study: Final Report,” Seager et al., June 2015

“ACEsat: Alpha Centauri and Direct Imaging,” Baldwin, Centauri Dreams, December 2015

Atmospheric Evolution on Inhabited and Lifeless Worlds, Catling & Kasting, Cambridge University Press, 2017

“WFIRST CGI Update,” NASA ExoPAG, July 2018

“Two Decades of Exoplanetary Science with Adaptive Optics,” Chauvin, Proceedings of SPIE, August 2018

“Low Order Wavefront Sensing and Control for WFIRST Coronagraph,” Shi et al., Proceedings of SPIE, 2016

“Low Order Wavefront Sensing and Control..for Direct Imaging of Exoplanets,” Guyon, 2014

“Optic Aberration,” Wikipedia, 2018

“Tilt (Optics),” Wikipedia, 2018

“The Vector Vortex Coronagraph,” Mawet et al., Proceedings of SPIE, 2010

“Phase-Induced Amplitude Apodisation Complex Mask Coronagraph Tolerancing and Analysis,” Knight et al., Advances in Optical and Mechanical Technologies for Telescopes and Instrumentation III, July 2018

“Review of High Contrast Imaging Systems for Current and Future Ground and Space-Based Telescopes,” Ruane et al., Proceedings of SPIE, 2018

“HabEX Telescope WFE Stability Specification Derived from Starlight Leakage,” Nemati, Stahl, Stahl et al., Proceedings of SPIE, 2018

“Fast Steering Mirror,” Wikipedia, 2018