Greg Matloff: Conscious Stars Revisited

by Paul Gilster on September 18, 2015

It’s no exaggeration to say that without Greg Matloff, there would have been no Centauri Dreams. After reading his The Starflight Handbook (Wiley, 1989) and returning to it for years, I began working on my own volume in 2001. Research for that book would reveal Matloff’s numerous contributions in the journals, especially on solar sail technologies, where he illustrated early on the methods and materials needed for interstellar applications. A professor of physics at New York City College of Technology (CUNY) as well as Hayden Associate at the American Museum of Natural History, Dr. Matloff is the author of, among others, Deep Space Probes (Springer, 2005) and Solar Sails: A Novel Approach to Interplanetary Travel (with Les Johnson and Giovanni Vulpetti; Copernicus, 2008). His latest, Starlight, Starbright, is now available from Curtis Press, treating the controversial subject of today’s essay.

by Greg Matloff


Introduction: Motivations

As any web search will reveal, most of my research contributions have been in the fields of in-space propulsion, SETI, Earth protection from asteroid impacts, planetary atmospheres, extra-solar planet detection and spacecraft navigation. Since I have consulted for NASA on solar-sail applications, I have trained myself to err on the side of conservatism. However, a true scientist cannot ignore observational data. He or she must base hypotheses and theories upon such results, not upon previous experience, ideology and dogma.

Image: Gregory Matloff (left) being inducted into the International Academy of Astronautics by JPL’s Ed Stone.

Until 2011, I never expected that I might contribute to the fascinating debate regarding the origin and nature of consciousness. On one side are the epiphenomenalists, who believe that consciousness is a mere byproduct of bio-chemical activity in the complex brains of higher organisms. On the other side are the panpsychists, who believe that a universal field responsible for consciousness, sometimes referred to as “proto-consciousness,” interacts with matter to produce conscious activity at all levels. The philosophical arguments were fascinating, but to me as a scientist they were a bit disappointing. There seemed to be no way of elevating the argument from the realm of deductive philosophy to the realm of observational/experimental science.

But in 2011, as documented in my June 12, 2012 contribution to this blog – Star Consciousness: An Alternative to Dark Matter – I learned (much to my surprise) that it may be possible now to construct simple models of universal consciousness and test them against observational evidence.

I was primed for this work by several factors. First, an early mentor of mine and coauthor of several astronautics papers was the late Evan Harris Walker. With expertise in plasma and quantum physics, Harris (as his friends called him) was a pioneer in the infant field of quantum consciousness. Although I am far from an expert in quantum mechanics, I was fascinated by Harris' attempt to explain consciousness by the quantum tunneling of wave functions through potential wells created by the inter-synaptic spacing in mammal brains [1].

After the success of The Starflight Handbook and other contributions to interstellar travel studies, I was asked by Apollo 11 astronaut Buzz Aldrin in the early 1990s to join the team of scientific consultants for a science-fiction novel he was co-authoring with John Barnes [2]. For plot purposes, Buzz required the stable, long-term existence of a Jupiter-like planet at a 1 Astronomical Unit (AU) distance from a Sun-like star. When he asked me to check the possibility of such a planet, I was initially very pessimistic. When I told Buzz that most exoplanet experts believed that the Hydrogen-Helium atmosphere of such a planet would likely evaporate quickly (in cosmic terms), he asked me to check this assumption. I located an appropriate equation in a space science handbook and calculated the estimated lifetime of the giant planet's atmosphere. I was surprised and Buzz was gratified to learn that the lifetime of the Jovian's atmosphere at 1 AU would be billions of years. At that point in my career, I was an adjunct professor and consultant. Since I was unable to locate a derivation for the equation in question, I elected not to challenge scientific orthodoxy and did not attempt to publish these results in a scientific journal. After the discovery of “hot Jupiters” circling Sunlike stars a few years later, I became credited (by Paul Gilster and others) with predicting the existence of hot Jupiters in a science-fiction novel, but not in a peer-reviewed journal. I vowed never again to hold back results simply because they challenged established paradigms.
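For readers who want a feel for why such an atmosphere is so persistent, the dimensionless Jeans escape parameter offers a rough illustration. To be clear, this is not the handbook equation I used at the time, and the exospheric temperature, planet mass and radius in the sketch below are merely assumed, representative values.

```python
# Rough illustration (not the handbook equation referenced above): the
# dimensionless Jeans escape parameter lambda = G*M*m / (k*T*r) for
# molecular hydrogen on a Jupiter-mass planet at 1 AU. All inputs are
# illustrative assumptions.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k = 1.381e-23   # Boltzmann constant, J/K
M = 1.9e27      # planet mass, kg (about one Jupiter mass)
r = 7.0e7       # exobase radius, m (roughly one Jupiter radius)
m = 3.35e-27    # mass of an H2 molecule, kg
T = 1000.0      # assumed exospheric temperature at 1 AU, K

escape_parameter = G * M * m / (k * T * r)
print(f"Jeans escape parameter ~ {escape_parameter:.0f}")
# A value of several hundred means thermal (Jeans) escape is utterly
# negligible, consistent with an atmospheric lifetime of billions of years.
```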

The third influence pointing me in the direction of conscious stars was an undergraduate liberal arts student at New York City College of Technology. Between the time I became a tenure-track professor in 2003 and my retirement from full-time teaching in 2011, I organized and coordinated the astronomy program at New York City College of Technology (NYCCT). In the first term of the NYCCT astronomy sequence, students learn about astronomical history, aspects of classical and modern physics and solar-system astronomy. In the second term, they investigate the astrophysics of the Sun, stars, and galaxies, cosmology, and the prospects for extraterrestrial life. In one Astronomy 2 section, I was lecturing about dark matter. The existence of this mysterious substance has been invoked to explain anomalous stellar motions. A liberal arts undergraduate interrupted the lecture to say that he doubted dark matter's existence. His supposition was that physics today is at a stage analogous to the situation in 1900: a major shift in physical paradigms may be necessary to explain the many anomalies (including dark matter) accumulating in observational astrophysics.

In 2011, it all came together. Kelvin Long, who edits the Journal of the British Interplanetary Society (JBIS), invited me to participate in a one-day symposium at the London headquarters of the BIS to celebrate the work of Olaf Stapledon, a British science-fiction author and philosopher who has greatly influenced astronomical and astronautical thought. In his 1937 masterwork Star Maker, Stapledon predicted nuclear energy, nuclear war, interstellar travel, space habitats and rearrangement of solar systems by intelligent extraterrestrials. Because I usually author papers on these topics and have often cited Star Maker, I elected to avoid astrotechnology in my contribution to this BIS symposium and instead concentrate on a core aspect of Stapledon’s philosophy: that the stars and indeed the entire universe are in some sense conscious.

A Toy Model of Stellar Consciousness and Astrophysical Evidence

Many people have written about consciousness. Since there is no agreed-upon definition of this quality, I decided to investigate a symptom of stellar consciousness. This is Stapledon's supposition that a fraction of stellar motions around the centers of their galaxies is volitional. According to Stapledon, stars obey the canons of a cosmic dance as they travel through space. Many researchers consider the seat of consciousness in humans and other lifeforms to be neurons or tubules [1,3,4]. I have little knowledge regarding the intimate details of the stellar interior. But I am pretty sure that neurons and tubules do not exist within stars. However, most cooler stars, including the Sun, do have simple molecules in their upper layers.

Contrary to what many of us learned in high school chemistry, the Van der Waals forces that hold the atoms in molecules together are not purely electromagnetic. Some of this attraction is due to the so-called Casimir Effect [5]. Vacuum is not truly empty. Instead, in tiny intervals of space and time, there are enormous fluctuations of energy and matter. Generally, positive and negative energies in these fluctuations exactly balance. But in the opinion of most cosmologists, the Big Bang was a stabilized vacuum fluctuation. All the matter, energy, space and time in the universe inflated from a tiny volume of dynamic vacuum during this event.

An echo of this most creative event in the universe’s history occurs in every molecule. Not all vacuum fluctuations can fit between adjacent molecules. A fraction of the Van der Waals force holding molecules together is produced by the pressure of these vacuum fluctuations.

With astrophysicist Bernard Haisch [6], I assumed that a proto-consciousness field operates through vacuum fluctuations or is identical to these fluctuations. I developed a very simple “toy model” in which this field produces a form of primitive consciousness by its interaction with molecular matter in the Casimir Effect (Fig. 1).

Fig. 1. A “Toy Model” of Proto-Panpsychism.


But models, no matter how simple or complex, are useful in physics only if they can be validated through experiment or observation. So I conducted a Google search for “Star Kinematics Anomaly and Discontinuity”.

Contrary to my expectation, what appeared on my screen was amazing. There was a Soviet-era Russian astronomer named Pavel Parenago (1906-1960). In addition to his astronomical contributions, Dr. Parenago was a very clever man. Unlike many of his colleagues, he avoided an extended vacation in a very cold place by dedicating a monograph to the most highly evolved human of all times – Joseph Stalin!

The anomaly named after Parenago, referred to as “Parenago’s Discontinuity,” is his observation that cool, low-mass stars in our galactic vicinity (such as the Sun) move around the center of the Milky Way galaxy a bit faster than their hotter, higher-mass sisters.

I used two sources to quantify Parenago’s Discontinuity for nearby main sequence stars. One was a chapter in Allen’s Astrophysical Quantities, a standard reference in astrophysics [7]. The second was a compilation of observations of 5610 main sequence stars using the European Space Agency (ESA) Hipparcos space observatory out to a distance of ~260 light years [8]. Figure 2, a graph presenting this data, is also included in my June 12, 2012 contribution to this blog and the JBIS paper based on my contribution to the BIS Stapledon symposium [9].

In Fig. 2, star motion in the direction of galactic rotation is plotted against star (B-V) color index, which is a measure of the difference between star radiant output in the blue range of the spectrum and the center of the human eye’s visual sensitivity. Hot, blue, massive stars have low and negative (B-V) color indices. From Table 19.1 of Ref. 7, G spectral class main sequence stars such as the Sun have (B-V) color indices in the range of about 0.6-0.7.
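For readers curious about how such a plot is assembled, the sketch below bins a stellar kinematics catalog by (B-V) color and averages the velocity component V in each bin. The file name and column names are hypothetical placeholders, not the actual Hipparcos reduction behind Fig. 2.

```python
# Minimal sketch of how Parenago's Discontinuity shows up in a kinematics
# catalog: bin stars by (B-V) color and compute the mean velocity in the
# direction of galactic rotation for each bin. Catalog file and column
# names are hypothetical placeholders.
import numpy as np
import pandas as pd

stars = pd.read_csv("main_sequence_kinematics.csv")   # columns: 'B_V', 'V_kms'

bins = np.arange(-0.4, 1.8, 0.1)                      # (B-V) color bins
stars["color_bin"] = pd.cut(stars["B_V"], bins)
profile = stars.groupby("color_bin")["V_kms"].agg(["mean", "std", "count"])

print(profile)
# Plotted against (B-V), the mean V jumps by roughly 10-20 km/s near
# (B-V) ~ 0.6, where the discontinuity appears in Fig. 2.
```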

Fig 2: Solar Motion in Direction of Galactic Rotation (V) for Main Sequence Stars vs. Star Color Index (B-V). Diamond data points are from Gilmore & Zelik. Square data points are from Binney et al.


Note in Fig. 2 that cooler stars to the right of the discontinuity move as much as ~20 kilometers per second faster than their hotter sisters around the center of the galaxy. As discussed in the June 12, 2012 contribution to this blog and in Ref. 9, Parenago’s Discontinuity occurs near the point where stable molecules begin to appear in stellar spectra.

Recent Work and Consideration of Alternative Hypotheses

Science is essentially a testing ground of alternative hypotheses to explain observational and experimental data. Since the data point to at least the local reality of Parenago's Discontinuity, some astrophysicists have developed explanations that rival Volitional Stars.

One possibility is stellar boil-off from local stellar nurseries. Perhaps this results in faster motions for cooler, low mass stars. But this process should result in a greater velocity dispersion in low mass stars, not a higher velocity of revolution around the galaxy’s center. Also, stellar nurseries typically live for tens of millions of years [10]. Why is there no discontinuity in the motions of short-lived O and B stars?

If Parenago's Discontinuity is a local phenomenon extending out a few hundred light years from the Sun, at least one other alternative explanation is possible. This is the Spiral Arms Density Waves concept [11]. The matter density of the interstellar medium is not uniform. Although the typical density of ions and neutral atoms in the Sun's vicinity (the so-called Intercloud Medium) is less than 0.1 per cubic centimeter, the matter density in the cooler, mostly neutral diffuse nebulae that operate as stellar nurseries in the spiral arms of our galaxy is orders of magnitude greater. If a dense diffuse nebula passed through our galactic vicinity in the distant past, low-mass, cool, redder stars might be dragged along faster by the dense cloud than hot, blue, more massive stars.

There are at least two ways to check the validity of the Spiral Arms Density Waves hypothesis. One is to investigate the typical size of diffuse nebulae in the Milky Way galaxy. The second is to check observational consequences of this hypothesis.

In a recent book, I reviewed the sizes of diffuse nebulae in Messier's compilation [12]. As part of a recent research paper, I performed a similar review of the more comprehensive Herschel catalog and an on-line listing of New General Catalog (NGC) deep-sky objects [13]. These results are summarized in Fig. 3.
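The comparison behind Fig. 3 amounts to a complementary cumulative distribution: for each catalog, the fraction of nebulae whose diameters exceed a threshold D. A minimal sketch follows; the diameter list is a hypothetical placeholder rather than the real Messier, Herschel or NGC data.

```python
# Sketch of the comparison summarized in Fig. 3: the fraction of bright
# diffuse nebulae with diameters greater than D light years.
import numpy as np

def fraction_larger_than(diameters_ly, thresholds_ly):
    """Complementary cumulative fraction of nebulae exceeding each threshold."""
    d = np.asarray(diameters_ly, dtype=float)
    return {D: float(np.mean(d > D)) for D in thresholds_ly}

messier_diameters = [12, 25, 30, 40, 70, 90, 110, 300]   # placeholder values
print(fraction_larger_than(messier_diameters, [100, 200, 500]))
# In all three real compilations, nebulae larger than a few hundred light
# years turn out to be rare.
```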

Fig 3: Fraction of Galactic Bright Diffuse Nebulae with Diameters > D Light Years from Messier (Blue), Herschel (Green) and Atlas of the Universe—NGC (Yellow) Compilations.


Note in Fig. 3 that, in all three compilations of deep-sky objects, diffuse nebulae with diameters greater than a few hundred light years are rare. Since the Hipparcos main sequence dataset used in Ref. 8 includes stars in a ~500 light year diameter sphere, Fig. 3 does not support the Spiral Arms Density Wave hypothesis.

But there is worse news for this hypothesis, also derived from Hipparcos data. Giant stars are considerably brighter than their less evolved counterparts on the main sequence and
are consequently visible over greater distances. Richard Branham, an astrophysicist based in Argentina, has analyzed the kinematics of thousands of giant stars in the Hipparcos data set [14]. His conclusion that Parenago’s Discontinuity is present in these results is demonstrated in Fig. 4.

Fig 4: Giant Star Motion (V) in Direction of Sun’s Galactic Revolution. The reduction of Branham’s data to produce Fig. 4 is discussed in Chap. 23 of Ref. 12.


Note that Fig. 4 is not as neat as the corresponding results for main sequence stars in Fig. 2. This may be due to uncertainty in the > 1,000 light year distance estimates for many of the stars in Branham’s Sample.

An interpretation of the above results is that a local explanation for Parenago's Discontinuity is unlikely. Existing galactic diffuse nebulae are simply too small (and too widely separated, as discussed in Ref. 12) to produce a stellar kinematics anomaly over a radius greater than 1,000 light years.

However, although the existing data does not support Spiral Arms Density Waves, the sample of stars, which numbers in the thousands, is not large enough to rule out this and other local explanations for Parenago’s Discontinuity. After all, the Milky Way galaxy contains more than a hundred billion stars.

Within the next few years, astrophysicists should know conclusively whether Parenago's Discontinuity is a local or galactic phenomenon. In December 2013, the European Space Agency (ESA) launched Gaia as a more capable successor to the Hipparcos space observatory. While Hipparcos accurately determined the distances and motions of perhaps 100,000 stars, Gaia should gather similar data over the next few years for about a billion stars in the Milky Way galaxy. Gaia, its mission and capabilities are discussed in more detail in Ref. 12.

Fig 5: The European Space Agency’s Gaia Space Observatory (Courtesy ESA).


But even before the data from Gaia is analyzed and released, astronomers using different equipment have gathered preliminary data that may lead to the falsification of the Spiral Arms Density Waves hypothesis. Note in Fig. 6 the structure of M51, a typical nearby spiral galaxy not dissimilar from the Milky Way. The revolution of this galaxy is in the counterclockwise direction, from our point of view. Hundreds of millions of years are required for one
complete revolution [15].

A team of astronomers has carefully analyzed the light received from the leading and lagging edges of the spiral arms of twelve nearby spiral galaxies. For the Spiral Arms Density Waves hypothesis to be correct, differences should be observable between these two locations. Sadly for Density Waves (and happily for Volitional Stars), no such effect was observed.

Fig 6: The Whirlpool Galaxy M51 (courtesy NASA).


Since the universe contains ~100 billion spiral galaxies, this result is not conclusive. Using new telescopes, about 300 spirals should be observed to statistically rule out Density Waves. Density Waves is apparently limping, but it cannot yet be completely ruled out.

If observations from Gaia indicate that Parenago’s Discontinuity is a galactic phenomenon rather than a local phenomenon, some astrophysicists will attempt to develop explanations that are alternatives to Volitional Stars. As discussed in Ref. 13, this will be challenging. The only reasonable galaxy-wide explanation might be a collision between the Milky Way galaxy and another large galaxy in the distant past. While such a collision might have produced a galaxy-wide “starburst” episode of rapid star formation, simulations indicate that the ultimate result of such galaxy smash-ups is a giant elliptical galaxy, not a spiral such as the Milky Way.

Volitional Star Kinematics

In my June 12, 2012 contribution to this blog, I considered methods that a volitional star could use to adjust its galactic velocity. One possibility was stellar jets.

Many infant stars eject high-velocity matter streams (Fig. 7). Surprisingly, some of them are unipolar or unidirectional, ejecting more material in one direction than in the other [16]. In April 2015, Paul Gilster e-mailed a link indicating that solar winds from mature stars like the Sun enter interstellar space in a complex system of jets [17]. The complexity of these jets is at least partially due to solar galactic motion and the interaction between the solar and galactic magnetic fields. Uni-directional matter jets from infant and young stars are discussed in greater detail in Chap. 15 of Ref. 12.

Fig 7: A Jet of High-Velocity Material Ejected From an Infant Star (courtesy NASA).


If Gaia observations reveal that Parenago’s Discontinuity is a galaxy-wide phenomenon, attention might turn to these unidirectional stellar jets. Are they generally aligned to accelerate molecule-bearing stars in the direction of their galactic motion? Since star galactic revolution velocities generally increase with distance from the galactic center, do jet velocities increase as well?

Although unidirectional material jets from infant and mature stars are one method that a volitional star could use, there is another possibility. This is the admittedly very controversial possibility of a weak psychokinetic (PK) force. Much has been written about the investigation of PK and related paranormal phenomena funded by US intelligence agencies.

As I have described in my earlier treatments of this subject, this is the only scientific controversy in which I am privileged to know participants on both sides. On one hand are the physicists who claim that Uri Geller, the alleged psychic who scored best on their screening tests, could not possibly have cheated on these tests. On the other hand, I met a retired Time-Warner editor at a cocktail party years ago who demonstrated that Geller's signature fork bending could be duplicated as a magic trick, and who also claimed to have enlisted a magician, The Amazing Randi, to further investigate Geller.

Many web sources conclude that Geller is indeed a trained magician. When my friend Dr. Eric Davis of the Institute for Advanced Studies at Austin (Texas) mentioned (while reviewing a draft copy of Ref. 12) that there is no confirmation of Geller actually having attended a magician’s college, I decided to check what I consider the best reference available on the Geller-Randi controversy. I carefully checked a book by MIT physics professor David Kaiser on this topic and learned that Dr. Davis is apparently correct [18].

Eric Davis also sent me an electronic copy of a report he authored for the US Air Force in 2005. Many countries other than the US have investigated PK and related phenomena in studies funded by government agencies. Some of the results are positive and have reportedly been replicated [19, 20].

As discussed in Refs. 9 and 12 and my June 12, 2012 submission to this blog, a PK force required to accelerate a Sun-like star by 20 km/s during a ~1-billion-year time interval is many orders of magnitude less than that required to bend a kitchen utensil. Perhaps it is time for experimental physicists to put the Geller-Randi controversy aside and perform a new set of carefully controlled experiments to test the existence or non-existence of a weak PK effect.
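As a back-of-envelope illustration of how gentle such a push would be, the required acceleration works out as follows. This sketch addresses only the acceleration itself; the full force comparison is developed in Refs. 9 and 12.

```python
# Back-of-envelope sketch: the acceleration implied by changing a star's
# galactic velocity by 20 km/s over roughly one billion years.
delta_v = 20.0e3              # velocity change, m/s
delta_t = 1.0e9 * 3.156e7     # ~1 billion years, in seconds

acceleration = delta_v / delta_t
print(f"required acceleration ~ {acceleration:.1e} m/s^2")
# ~6e-13 m/s^2: an extraordinarily gentle, sustained nudge.
```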

One possibility discussed by others is to include professional magicians on the experiment design team. Another possibility, raised by a responder to my June 12, 2012 contribution to this blog, is to perform PK tests on the interaction between human subjects and a Bose-Einstein condensate. As further discussed in Ref. 12, a Bose-Einstein condensate is a macroscopic state of matter in which all of the particles share the same quantum state. A human subject might be instructed to see if he or she could “will” the condensate to climb the enclosure wall repeatedly to the same level. This would test not only the validity of PK but the assumption that consciousness is related to quantum phenomena.

Conclusions: A Learning Experience

Since 2011, I have spent a large fraction of my creative time investigating whether the Volitional Star hypothesis can be considered scientific. As reviewed in Ref. 12, it is certainly a venerable concept. Shamans, astrologers, philosophers, mystery-cult members, poets, and fiction authors have considered this possibility for millennia.

It is also interesting that at least a few scientists have walked this path before me. Although the concepts of stellar or universal consciousness are certainly not in the scientific mainstream at present, scientific speculation along these lines is becoming more respectable.

One creative group that apparently welcomes these concepts is fine artists. The chapter frontispiece art in Ref. 12 created by C Bangs has been presented in several artistic forums, including the Arts Program at the 9th IAA Symposium on the Future of Space Exploration, which was held in Turin, Italy in July 2015. A version of one of these images is presented as Fig. 8. Modifications of 18 of these images on 11” X 14” panels painted on both sides in the form of an accordion book are on display at the Manhattan gallery that C Bangs is affiliated with: Central Booking Art Space, 21 Ludlow Street.

Fig 8: Modified Version of C Bangs Chapter frontispiece from Starlight, Starbright.


Recently, with my assistance, C prepared an Artist’s Book entitled Star Bright?. In July 2015, Star Bright? was collected by the Prints and Illustrated Books division of the Museum of Modern Art in Manhattan.

It is of course very premature to claim that the work presented here has proven the case for volitional stars. The toy model of proto-panpsychism is certainly too simple to have much traction in the theoretical world. But it is not impossible that this work might move panpsychism from the realm of deductive philosophy to the realm of observational astrophysics.


1. E. H. Walker, “The Nature of Consciousness,” Mathematical Biosciences, 7, 131-178 (1970). Also see E. H. Walker, The Physics of Consciousness, Perseus, Cambridge, MA (2000).

2. B. Aldrin and J. Barnes, Encounter with Tiber, Warner, NY (1996).

3. L. Margulis, “The Conscious Cell”, Annals of the New York Academy of Sciences, 929, 55-70 (2001).

4. S. Hameroff, “Consciousness, the Brain, and Spacetime Geometry”, Annals of the New York Academy of Sciences, 929, 74-104 (2001) and R. Penrose, “Consciousness, the Brain, and Spacetime Geometry: An Addendum”, Annals of the New York Academy of Sciences, 929, 105-110 (2001).

5. H. Genz, Nothingness: The Science of Empty Space, Perseus, Cambridge, MA (1999).

6. B. Haisch, The God Theory, Weiser, San Francisco, CA (2006).

7. G. F. Gilmore and M. Zelik, “Star Populations and the Solar Neighborhood,” in Allen’s Astrophysical Quantities, 4th ed. A. N. Cox ed., Springer-Verlag, NY (2000), Chap. 19.

8. J. J. Binney, W. Dehnen, N. Houk, C. A. Murray, and M. J. Preston, “Kinematics of Main Sequence Stars from Hipparcos Data,” Proceedings of the ESA Symposium Hipparcos Venice, SP-402, Venice, Italy, 13-15 May 1997, pp. 473-477 (July 1997).

9. G. L. Matloff, “Olaf Stapledon and Conscious Stars: Philosophy or Science?”, JBIS, 65, 5-6 (2012).

10. E. Chaisson and S. McMillan, Astronomy Today, 6th ed., Pearson-Addison/Wesley, San Francisco, CA (2008), Chap. 19.

11. R. S. DeSimone, X. Wu, and S. Tremaine, “The Stellar Velocity Distribution of the Solar Neighborhood”, Monthly Notices of the Royal Astronomical Society, 350, 627-643 (2004).

12. G. L. Matloff and C Bangs, Starlight, Starbright: Are Stars Conscious?, Curtis Press, UK (2015).

13. G. L. Matloff, “The Non-Locality of Parenago’s Discontinuity and Universal Self Organization”, IAA-FSE-15-06-03. Presented at 9th IAA Symposium on the Future of Space Exploration, Turin, Italy, July 7-9, 2015. Published in Conference Proceedings.

14. R. L. Branham, “The Kinematics and Velocity Ellipsoid of GIII Stars,” Revista Mexicana de Astronomia y Astrofisica, 47, 197-209 (2011).

15. K. Foyle, H.-W. Rix, C. Dobbs, A. Leroy, and F. Walter, “Observational Evidence Against Long-Lived Spiral Arms in Galaxies,” Astrophysical Journal, 735 (2), Article ID = 101 (2011), arXiv: 1105.5141 [astro-ph.CO].

16. F. Namouni, “On the Flaring of Jet-Sustaining Accretion Disks”, Astrophysical Journal, 659, 1505-1510 (2007).

17. I. O’Neill, “Sun May Blast Two Jets of Plasma into Interstellar Space” (March 4, 2015). Also see “A New View of the Solar System: Astrophysical Jets Driven by the Sun” (February 19, 2015).

18. D. Kaiser, How the Hippies Saved Physics, Norton, NY (2011).

19. E. W. Davis, “Teleportation: Mind and Intelligence”, Report to the US Air Force Future Technology Branch, Future Concepts and Transformation Division Workshop, Mitre Corporation, McLean VA (Oct. 21, 2005).

20. E. W. Davis, “Teleportation Physics Study,” Final Report AFRL-PR-ED-TR-2003-0034, Air Force Research Laboratory, Air Force Materiel Command, Edwards AFB, CA (2004).



New Look at β Pictoris b

by Paul Gilster on September 17, 2015

Given the scale of our own Solar System, the system circling the star Beta Pictoris can’t help but give us pause. Imagine not only the orbiting clouds of gas, dust and debris that we would expect around a young star (8-20 million years old) with a solar system in formation, but also a gas giant planet some ten to twelve times the mass of Jupiter, in an orbit something like Saturn’s. Now factor in this: The disk in question, if translated into our own system’s terms, would extend from about the orbit of Neptune to almost 2000 AU.

Now we have a view of Beta Pictoris b as it moves through a small slice (one and a half years) of a 22 year orbital period. The work of Maxwell Millar-Blanchaer (a doctoral candidate at the University of Toronto) and colleagues, the imagery appears in a paper published yesterday by The Astrophysical Journal. Millar-Blanchaer used observations from the Gemini Planet Imager on the Gemini South telescope in Chile to image Beta Pictoris b, the work being part of the GPI Exoplanet Survey, which will examine some 600 stars in the coming three years.

“The images in the series represent the most accurate measurements of the planet’s position ever made,” says Millar-Blanchaer. “In addition, with GPI, we’re able to see both the disk and the planet at the exact same time. With our combined knowledge of the disk and the planet we’re really able to get a sense of the planetary system’s architecture and how everything interacts.”

Image: A series of images taken between November 2013 to April 2015 with the Gemini Planet Imager (GPI) on the Gemini South telescope in Chile shows the exoplanet β Pic b orbiting the star β Pictoris, which lies over 60 light-years from Earth. In the images, the star is at the center of the left-hand edge of the frame; it is hidden by the Gemini Planet Imager’s coronagraph. We are looking at the planet’s orbit almost edge-on; the planet is closer to the Earth than the star. Credit: M. Millar-Blanchaer, University of Toronto; F. Marchis, SETI Institute.

The intensively studied Beta Pictoris disk is known for a disk asymmetry (one side of the disk is longer and thinner than the other) and a ‘warp’ that has been thought to be the result of disk ‘sculpting’ by the known planet — a 1997 study argued that a planet with an inclination of between 3 and 5 degrees could account for the observed perturbation, with the subsequent discovery of Beta Pictoris b lending weight to the idea. The other possibility posed in the literature was that the disk is actually composed of two disks that appear superimposed in our view, with a roughly 3 degree difference in position angle.

The new paper refines measurements of the planet’s orbit and the circumstellar disk, showing an inner disk that is slightly offset from the main outer disk. The results also indicate that the sculpting effect cannot be accounted for purely through Beta Pictoris b. From the paper:

When considered together, the disk model and the orbital fit indicate that the dynamics of the inner edge of the disk are not consistent with sculpting by the planet β Pic b alone. This could be explained by an as-of-yet undetected planet in-between the known planet and the inner edge of the disk. Under this scenario the less massive, further out planet would dynamically influence the inner regions of disk, while the more massive β Pic b would have a greater effect at larger radii, causing the well known warp. If there is in fact another planet at this location, this will have significant consequences for our understanding of the planet formation history and dynamical evolution of this system.

Beta Pictoris, some 63 light years away in the constellation Pictor (the Painter’s Easel), is a system that is sure to see intensified investigation. In this case, we’re seeing images of the debris disk in polarized light that, as the paper notes, reach angular separations that have been inaccessible to both space- and ground-based telescopes. Learning more will require more sophisticated dust grain models that will allow researchers to further test their theories about the inner part of the disk.

The paper is Millar-Blanchaer et al., “β Pictoris’ inner disk in polarized light and new orbital parameters for β Pictoris b,” published September 16 2015 by The Astrophysical Journal (abstract / preprint). A University of Toronto news release is available.



Enceladus: A Global Ocean

by Paul Gilster on September 16, 2015

Seven years' worth of Cassini images of Enceladus has told us what many have long suspected: The intriguing moon does indeed have a subsurface ocean. Not that the presence of water on Enceladus comes as a surprise: The south polar region in the area of the famous ‘tiger stripes’ has long been known to be venting vapor and liquid water from its fractures. The question had become, is this a regional body of water, or is the Enceladus ocean global?

To find out, a team at Cornell University led by Peter Thomas, whose work was just published in Icarus, charted about 5800 surface features, contrasting images taken at different times and at different angles. Using a combination of dynamical modeling and statistical analysis, they sought to find the best values for the interior that would explain an apparent libration or ‘wobble’ (0.120 ± 0.014°) detectable in the imagery, a larger motion by far than would be expected if the surface of Enceladus were solidly connected with its core. The size of the libration tells us that the ocean is indeed global.


Image: What lies beneath… Now we learn that there is solid evidence for an ocean below this entire surface. Credit: NASA/JPL-CalTech.

Matthew Tiscareno, now at the SETI Institute after working on the Enceladus data at Cornell (he is a co-author of the just published paper), explains the significance of the finding for possible astrobiology:

“This exciting discovery expands the region of habitability for Enceladus from just a regional sea under the south pole to all of Enceladus. The global nature of the ocean likely tells us that it has been there for a long time, and is being maintained by robust global effects, which is also encouraging from the standpoint of habitability.”


Image: This illustration is a speculative representation of the interior of Saturn’s moon Enceladus with a global liquid water ocean between its rocky core and icy crust. The thickness of layers shown here is not to scale. Scientists on NASA’s Cassini mission determined that the slight wobble of Enceladus as it orbits Saturn is much too large for the moon to be frozen from surface to core. The wobble, technically referred to as a libration, reveals that the crust of Enceladus is disconnected from its rocky interior. Credit: NASA/JPL-Caltech.

We’ve come a long way since the first Cassini discoveries of the plume of water vapor, ice and organic molecules erupting from Enceladus’ south pole. It was in 2009 that we learned by measuring the saltiness of the geyser particles that their source could only be a reservoir of liquid, and by 2014 it was possible to analyze the gravitational pull of the Saturnian moon on Cassini itself, which showed that at least a regional sea must be present under the ice.

Now we have a global ocean to deal with, although the question of how it remains liquid is still unresolved. Are tidal forces from Saturn’s gravity generating more heat than we realize? Much work remains as we try to answer that question. In the near term, Cassini is scheduled for a close flyby at the end of October in which it will pass a scant 49 kilometers above the moon’s surface. This ‘deep dive’ through Enceladus’ active plumes will be the closest yet. With a global ocean spewing material into space, the case for further work at Enceladus is overwhelming.

The paper is Thomas et al., “Enceladus’ measured physical libration requires a global subsurface ocean,” published online by Icarus on 11 September 2015 (abstract).



CubeSats: Deep Space Possibilities

by Paul Gilster on September 15, 2015

The Planetary Society’s LightSail-A, launched on May 20 of this year, demonstrated sail deployment from a CubeSat despite software problems that plagued the mission. You’ll recall that communications were spotty and the upload of a software fix was compromised because of the spacecraft’s continued tumbling. After a series of glitches, the craft’s sail was deployed on the 7th of June, with LightSail-A entering the atmosphere shortly thereafter, a test flight that did achieve its primary objective, serving as a prototype for the upcoming LightSail-1.

Mixing CubeSats with solar sails seems like an excellent idea once we’ve ironed out the wrinkles in the technology, and as I’ve speculated before, we may one day see interplanetary missions carried out by small fleets of CubeSats propelled by solar sails. Although the LightSail-A demonstrator mission was in a low orbit, LightSail-1 will deploy its four triangular sails once it reaches an orbital altitude of 800 kilometers. A key reading will be what sort of increase in the spacecraft’s orbital speed is observed once it deploys its sail at altitude.
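For a rough sense of the numbers involved, the ideal acceleration of a perfectly reflecting sail facing the Sun is a = 2FA/(cm), where F is the solar constant, A the sail area and m the spacecraft mass. The sketch below uses assumed, round figures for a LightSail-class 3U CubeSat, not official mission values.

```python
# Rough sketch of the ideal sunlight-driven acceleration of a small sailcraft:
# a = 2 * F_sun * A / (c * m) for a perfectly reflecting sail facing the Sun.
# Sail area and spacecraft mass are assumed round numbers, not mission specs.
F_SUN = 1361.0      # solar constant at 1 AU, W/m^2
C = 2.998e8         # speed of light, m/s
sail_area = 32.0    # m^2 (assumed)
mass = 5.0          # kg (assumed)

accel = 2.0 * F_SUN * sail_area / (C * mass)
print(f"ideal characteristic acceleration ~ {accel:.1e} m/s^2")
# ~6e-5 m/s^2: tiny, but applied continuously it adds up to a measurable
# change in orbital speed, which is what LightSail-1 will look for.
```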

We’ll be watching this one with interest in April of next year, when it is scheduled to launch aboard a SpaceX Falcon Heavy, itself the object of great interest (this will be its first launch). Whether the launch goes on time will depend upon how well SpaceX recovers from the recent Falcon 9 launch failure. Whenever it launches, a successful LightSail-1 flight would lead to two more solar sail projects on The Planetary Society’s agenda, with LightSail-3 traveling to the L1 Lagrangian point, a useful position for monitoring the solar activity that drives geomagnetic storms at Earth.

NASA, meanwhile, has CubeSat plans of its own, likewise dependent upon the health of a booster, in this case the Space Launch System (SLS) rocket. The first flight of the SLS, planned for 2018, will carry an uncrewed Orion spacecraft to a deep space orbit beyond the Moon and return it to Earth. It’s interesting to see that the first SLS mission, according to this NASA information sheet, has the ability to accommodate eleven 6U-sized CubeSats. The standard 10×10×11 cm basic CubeSat is a ‘one unit’ (1U) CubeSat, but larger platforms of 6U and 12U allow more complex missions (LightSail-1 is built around a 3U CubeSat format).


The future of CubeSats with NASA is confirmed by the Advanced Exploration Systems Division’s choice of three secondary payloads intended for SLS launch and a destination in deep space. BioSentinel is intriguing because it will mark the first time we’ve sent living organisms to deep space since the days of the Apollo missions. The organisms in question are yeast (S. cerevisiae), useful in studying DNA lesions caused by highly energetic particles. The idea is to operate in a deep space radiation environment for eighteen months, measuring the effects of radiation on living organisms at distances far beyond low Earth orbit. So far, the longest human mission in deep space was 12.5 days, accomplished by the crew of Apollo 17.

Image: Conceptual graphic of a radiation particle causing a DNA Double Strand Break (DSB). Credit: NASA.

Crews aboard the International Space Station have obviously spent far longer in space, but only in low-Earth orbit, leaving us with plenty to learn about the effects of deep space on biological systems. Another use of CubeSats will be to scout out important targets near our planet, which is the mission designed for NEA (Near-Earth Asteroid) Scout. Here we have another solar sail/CubeSat combination, allowing maneuvering during cruise for the approach to an asteroid. The plan is to study a small asteroid less than about 90 meters in diameter, homing in on a range of parameters including the asteroid’s shape, rotational properties, spectral class, local dust and debris field, regional morphology and properties of its regolith. Ideally these data will be used to resolve issues related to the eventual human exploration of NEAs.


Image: Near-Earth Asteroid Scout, or NEA Scout, will perform reconnaissance of an asteroid using a CubeSat and solar sail propulsion. Credit: NASA/JPL.

Lunar Flashlight is the third approved mission, a solar sail craft with a 6U CubeSat intended for insertion into lunar orbit to look for ice deposits and areas best suited for resource extraction by future human crews. So this one is likewise a scout, one whose sail will be able to reflect 50 kW of sunlight and light up dark craters at the lunar poles where surface water ice may be lurking. It’s also the first mission that will attempt to fly an 80 m2 solar sail. Repeated measurements will give us a chart of ice concentrations in these regions, while also building a catalog of places rich enough in materials to support in-situ resource utilization (ISRU).


Image: Lunar Flashlight will map the lunar south pole for volatiles and demonstrate several technological firsts, including being the first CubeSat to reach the Moon, the first mission to use an 80 m2 solar sail, and the first mission to use a solar sail as a reflector for science observations. Credit: NASA/MSFC.

The CubeSats designed for the first SLS mission won’t get a lot of the publicity when the big rocket flies, but if they perform as expected, they’ll be pushing the small modular satellite concept into new areas. Particularly with regard to solar sails, NEA Scout and Lunar Flashlight should give us opportunities to navigate and maneuver with sails, building experience for the larger sail missions of the future. Meanwhile, Japan’s IKAROS sail, in its ten-month solar orbit, remains in hibernation, with its fifth wake-up call scheduled for winter of this year. Remember that a 50 meter sail, an ambitious successor to IKAROS designed for a mission to Jupiter and the Trojan asteroids, is in the works, with launch some time later in the decade.



Pluto/Charon: Complexities Abound

by Paul Gilster on September 14, 2015

Given the flow of new imagery from New Horizons, I began to realize that mission data were changing my prose. To be sure, I still lean to describing the system as Pluto/Charon, because given the relative size of the two bodies, this really seems like a binary object to me. I tend to call it a ‘binary planet’ among friends because I still think of Pluto as a planet, dwarf or not. But when New Horizons blew through the Pluto/Charon system, it was finally possible to start talking separately about Charon, because now we were seeing it, for the first time, up close.

Charon as a distinct object from Pluto is a fascinating thought, one I’ve mused over since the days of the smaller object’s discovery in 1978. An enormous moon hanging in the sky, never changing its position, over a landscape unknown — the imagination ran wild. In the event, New Horizons outdid anything I ever conceived, with imagery of both worlds we’ll be debating for a long time. But in some ways my favorite of the images so far is the one just below.


Image: Details of Pluto’s largest moon, Charon, are revealed in this image from New Horizons’ Long Range Reconnaissance Imager (LORRI), taken July 13, 2015, from a distance of 466,000 kilometers, combined with color information obtained by New Horizons’ Ralph instrument on the same day. The marking in Charon’s north polar region appears to be a thin deposit of dark material over a distinct, sharply bounded, angular feature; scientists expect to learn more by studying higher-resolution images still to come. (Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute).

What’s striking about Charon at first glance is the darkening at the north pole, a phenomenon Carly Howett (Southwest Research Institute, Boulder) discussed in a recent blog article for NASA. Redder and darker it is, a circumstance Howett explains by reference to the surface composition of the northern polar region. A working theory is that traces of Pluto’s atmosphere reach Charon on occasion, where the gases come into contact with polar regions with temperatures between -258 and -213° C. This is a range not a lot higher than absolute zero (−273.15° C, or −459.67° Fahrenheit).

Gases arriving at Charon’s winter pole would simply freeze rather than escaping, so we have a deposit of Pluto’s atmospheric nitrogen, with methane and carbon monoxide, gradually building up. This would not occur at Charon’s somewhat warmer equator. When the winter pole re-emerges into sunlight, solar radiation on these ices produces tholins, which form when simple organic compounds like methane are irradiated. With their higher sublimation temperature, these tholins cannot escape back into space. Howett sums it up this way:

Charon likely has gradually built up a polar deposit over millions of years as Pluto’s atmosphere slowly escapes, during which time the surface is being irradiated by the sun. It appears the conditions on Charon are right to form red tholins similar to those shown, although we have yet to figure out exactly why. This is one of the many things I am looking forward to better understanding as we receive more New Horizons data over the next year and analyze it in conjunction with continued laboratory work.

Tholin color depends on the ratios of the molecules involved and the kind of radiation received — various shades have been produced in the laboratory. We don’t find them on Earth outside of our own laboratories, but tholins are thought to be abundant on the icy objects in the outer system, usually taking on a reddish brown hue. You may recall they’ve also been discussed in relation to Titan, where we’ve learned from Cassini measurements that tholins appear higher in the atmosphere than was once believed (see Titan’s Tholins: Precursors of Life?). Think of them not as a single specific compound but a range of molecules with a generally reddish color.

Did tholin-rich comets play a role in delivering the precursor materials needed for life to develop on Earth? It’s a notion we can’t rule out, but for now what we can do is study tholins in the places they naturally occur, and in the case of Charon, we can see that they are part of a mechanism that can explain surface color variations. The object 28978 Ixion, a Kuiper Belt object in orbital resonance with Neptune, appears to be particularly rich in tholins.

Meanwhile, the flow of data from New Horizons in just the last few days has more than doubled what we can see of Pluto’s surface at the 400 meter per pixel level. We’re finding still more puzzles as we push deeper into the imagery, with features that appear to be dune-like, and what seem to be nitrogen ice flows and valleys that could have been carved by such flows over the surface. The processes at work should provide fodder for countless dissertations. Here’s one of the new images — for more, see New Pluto Images from New Horizons: It’s Complicated.


Image: This 350-kilometer wide view of Pluto from NASA’s New Horizons spacecraft illustrates the incredible diversity of surface reflectivities and geological landforms on the dwarf planet. The image includes dark, ancient heavily cratered terrain; bright, smooth geologically young terrain; assembled masses of mountains; and an enigmatic field of dark, aligned ridges that resemble dunes; its origin is under debate. The smallest visible features are 0.8 kilometers in size. This image was taken as New Horizons flew past Pluto on July 14, 2015, from a distance of 80,000 kilometers. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.

“The surface of Pluto is every bit as complex as that of Mars,” said Jeff Moore, leader of the New Horizons Geology, Geophysics and Imaging (GGI) team at NASA’s Ames Research Center in Moffett Field, California. “The randomly jumbled mountains might be huge blocks of hard water ice floating within a vast, denser, softer deposit of frozen nitrogen within the region informally named Sputnik Planum.”

Heavily cratered terrain next to young icy plains, with the suggestion of dunes on a place whose atmosphere should be too thin to produce them. No wonder William McKinnon (Washington University, St. Louis) calls the latter a ‘head-scratcher.’ The surfaces of Pluto and Charon have delivered complexities galore, and we’re only now learning that Pluto’s atmospheric haze is far more complex than earlier thought, offering a twilight effect that helps light nightside terrain. “If an artist had painted this Pluto before our flyby, I probably would have called it over the top,” says New Horizons principal investigator Alan Stern, “but that’s what is actually there.”




Extraterrestrial Life: The Giants are Coming…

by Paul Gilster on September 11, 2015

Finding a biological marker in the atmosphere of an exoplanet is a major goal, but as Ignas Snellen argues in the essay below, space-based missions are not the only way to proceed. A professor of astronomy at Leiden University in The Netherlands, Dr. Snellen makes a persuasive case that technologies like high dispersion spectroscopy and high contrast imaging are at their most effective when deployed at large observatories on the ground. A team of European observers he led has already used these techniques to determine the eight-hour rotation rate of Beta Pictoris b. We’ll need carefully conceived space missions to study those parts of the spectrum inaccessible from the ground, but these will find powerful synergies with the next generation of giant ground-based telescopes planned for operations in the 2020s.

by Ignas Snellen


While I was deeply involved in my PhD project, studying the active centers of distant galaxies, a real scientific revolution was unfolding in a very different field of astronomy. In the mid-1990s the first planets were found to orbit stars other than our Sun. For several years I managed to ignore it. Not impeded by any knowledge of the field, I was happy to join the many skeptics in dismissing the early results. But soon they could be ignored no more. And when the first transiting planet was found and, a little later, its atmosphere detected, I radically changed research fields and threw myself, like many others, into exoplanet research. More than a decade later the revolution is still going strong.


Not all scientific endeavors were successful during this twenty-year period. Starting soon after the first exoplanet discoveries, enormous efforts were put into the design of (and into securing political support for) a spacecraft that could detect potential biomarker gases in the atmospheres of nearby planetary systems. European astronomers were concentrating on DARWIN. This mission concept was composed of four to five free-flying spacecraft carrying out high-resolution imaging using nulling interferometry, in which the starlight from the different telescopes is combined in such a way that it cancels out on-axis light, leaving the potential off-axis planet-light intact. After a series of studies over more than a decade, in 2007 the European Space Agency stopped all DARWIN developments – it was too difficult. Over the same time period, several versions of the Terrestrial Planet Finder (TPF) were proposed to NASA, including a nulling interferometer and a coronagraph. The latter uses a smart optical design to strongly reduce the starlight while letting any planet light pass through. These projects, too, were subsequently cancelled. Arguably an even bigger anticlimax was the Space Interferometry Mission (SIM), which was to hunt for Earth-mass planets in the habitable zones of nearby stars using astrometry. After being postponed several times, it was finally cancelled in 2010.

How pessimistic should we be?

Enormous amounts of people’s time and energy were spent on these projects, costing hundreds of millions of dollars and euros. A real pity, considering all the other exciting projects that could have been funded instead. We should set more realistic goals and learn from highly successful missions such as the NASA Kepler mission, which was conceived and developed during that same period. A key aspect of the adoption of Kepler as a NASA space mission was the demonstration of technological readiness through ground-based experiments (by Bill Borucki and friends). A mission gets approved only if it is thought to be a guaranteed success. It is this aspect that killed DARWIN and TPF, and it is this aspect that worries me about new, very smart spacecraft concepts such as the large external occulter for the New Worlds Mission. Maybe I am just not enough of a (Centauri) dreamer.

In any case, lead times of large space missions, as the Kepler story has shown, are huge. This implies that it is highly unlikely that within the next 25 years we will have a space mission that will look for biomarker gases in the atmospheres of Earth-like planets. If I am lucky I will still be alive to see it happen. My idea is – let’s start from the ground!

The ground-based challenge

The first evidence for extraterrestrial life will come from the detection of so-called biomarkers – absorption from gases that are only expected in an exoplanet atmosphere when produced by biological processes. The prime examples of such biomarkers are oxygen and ozone, as seen in the Earth’s atmosphere. Observing these gases in exoplanet atmospheres will not be the ultimate proof of extraterrestrial life, but it will be a first step. These observations require high-precision spectral photometry, which is very challenging to do from the ground. First of all, our atmosphere absorbs and scatters light. This is a particular problem for observations of Earth-like planets, because their spectra will show absorption bands at the same wavelengths as the Earth’s atmosphere. In addition, turbulence in our atmosphere causes the light that enters ground-based telescopes to become distorted. Therefore, light does not form perfect incoming wavefronts, hampering high-precision measurements. Furthermore, when objects are observed over the course of a night, their light-path through the Earth’s atmosphere changes, as does the way starlight enters an instrument, making stability a big issue. These are the main reasons why many exoplanet enthusiasts thought that it would be impossible to ever probe exoplanet atmospheres from the ground.

The technique

Work over the last decade has shown that one particular ground-based technique – high dispersion spectroscopy (HDS) – is very suitable for detecting absorption features in exoplanet atmospheres. The dispersion of a spectrograph is a measure of the ‘spreading’ of different wavelengths into a spectrum of the celestial object. Space telescopes, such as the Hubble Space Telescope (HST), Spitzer, and the future James Webb (JWST) have instruments on board that are capable of low to medium dispersion spectroscopy, where the incoming light can be measured at typically 1/100th to 1/1000th of a wavelength. With HDS, precisions of 1/100,000th of a wavelength are reached – hence about two orders of magnitude higher than from space. For two reasons this can practically only be done from the ground: 1) the physical size of a spectrograph scales with its dispersion, meaning that HDS instruments are generally too big to launch to space. 2) At high dispersion the light is spread very thinly, requiring a lot of photons to do it right, hence a large telescope. For example, the hot Jupiter tau Bootis b required 3 nights on the 8m Very Large Telescope to measure carbon monoxide in its atmosphere. Scaling this to the HST (pretending it would have an HDS instrument) it would have cost on the order of 200 hours of observing time – more than was spent on the Hubble Deep Field. Hence, HDS is the sole domain of ground-based telescopes.
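To put the two orders of magnitude in velocity terms, a quick calculation of the Doppler shift corresponding to one resolution element at each resolving power is instructive; the resolving powers below are simply the representative values quoted above.

```python
# What "1/100,000th of a wavelength" buys you: at resolving power
# R = lambda / delta_lambda, one resolution element corresponds to a Doppler
# shift of roughly c / R.
C_KMS = 2.998e5   # speed of light, km/s

for R in (1_000, 100_000):   # medium dispersion vs. HDS
    print(f"R = {R:>7}: velocity per resolution element ~ {C_KMS / R:.1f} km/s")
# R = 1,000 gives ~300 km/s (planet lines blended with telluric features);
# R = 100,000 gives ~3 km/s (planet lines shift visibly during a night).
```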

The high dispersion is key to overcoming the challenges that arise from observing through the Earth’s atmosphere. At a dispersion of 1/100,000th of a wavelength, HDS measurements are sensitive to Doppler effects due to the orbital motion of the planet. For example, the Earth moves at nearly 30 km/sec around the Sun, while hot Jupiters have velocities of 150 km/sec or more. This means that during an observation, the radial component of the orbital velocity of a planet can change by tens of km/sec. While this makes absorption features from the planet move in wavelength, any Earth-atmospheric and stellar absorption lines remain stationary. Clever data analysis techniques can filter out all the stationary components of a time-sequence of spectra, while the moving planet signal is preserved. Ultimately, the signals from numerous individual planet lines can be added together to boost the planet signal using the cross-correlation technique – weighting the contribution from each line by its expected strength.
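A highly simplified sketch of that analysis chain is shown below. It is schematic only; the function and variable names are hypothetical rather than belonging to any published pipeline, and real reductions add detrending, line-strength weighting and careful noise handling.

```python
# Schematic HDS analysis: remove the stationary (telluric and stellar)
# components from a time series of spectra, then cross-correlate the
# residuals with a model template over a range of trial planet velocities.
import numpy as np

def planet_cross_correlation(spectra, wavelengths, template, trial_velocities):
    """spectra: (n_exposures, n_pixels); returns the CCF summed over exposures."""
    c_kms = 2.998e5
    # Stationary components divide out when each wavelength channel is
    # normalized by its median over the time series.
    residuals = spectra / np.median(spectra, axis=0) - 1.0

    ccf = np.zeros(len(trial_velocities))
    for i, v in enumerate(trial_velocities):
        # Evaluate the template at Doppler-shifted wavelengths for this trial velocity.
        shifted = np.interp(wavelengths * (1.0 + v / c_kms), wavelengths, template)
        ccf[i] = np.sum(residuals @ shifted)   # co-add the signal from all lines
    return ccf
```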


Image: Illustration of the HDS technique, with the moving planet lines in purple.

So why does this work? Although the Earth’s atmosphere has a profound influence on the observed spectrum, the absorption and scattering processes are well behaved on scales of 1/100,000th of a wavelength and can be calibrated out. The signal of the planet can be preserved, even if variations in the Earth’s atmosphere are many orders of magnitude larger. In this way starlight reflected off a planet’s atmosphere can be probed, but also a planet’s transmission spectrum – when a planet crosses the face of a star and starlight filters through its atmosphere. In addition, a planet’s direct thermal emission spectrum can be observed. This is particularly powerful in the infrared. And it works well! In the optical, absorption from sodium has been found in the transmission spectra of several exoplanets. In the near-infrared, carbon monoxide and water vapor have been seen in both the transmission spectra and the thermal emission spectra of several hot Jupiters – on par with the best observations from space. In the next two years new instruments will come online (such as CRIRES+ and ESPRESSO on the VLT) that will take this significantly further – allowing a complete inventory of the spectroscopically active molecules in the upper atmospheres of hot Jupiters, and extending this research to significantly cooler and smaller planets.

One step beyond

There is more. The HDS technique makes no attempt to spatially separate the planet light from that of the much brighter star – the planet signal is isolated purely through its spectral features. Hot Jupiters are much too close to their parent stars to be seen separately anyway. However, planets in wider orbits can also be directly imaged, using high-contrast imaging (HCI) techniques (also in combination with coronography). This technique is really starting to flourish using modern adaptive optics, in which atmospheric turbulence is compensated by fast-moving deformable mirrors. A few dozen planets have already been discovered using HCI, and new imagers like SPHERE on the VLT and GPI on Gemini, which came online last year, hold great promise. What I am very excited about is that HDS combined with HCI (let’s call it HDS+HCI) can be even more powerful. While HDS is completely dominated by noise from the host star, HCI strongly reduces the starlight at the planet position – increasing the sensitivity of the spectral separation technique used by HDS by orders of magnitude. Last year we showed the power of HDS+HCI by measuring, for the first time, the spin velocity of an extrasolar planet, showing beta Pictoris b to have a length of day of 8 hours. [For more on this work, see Night and Day on β Pictoris b].


Image: HDS+HCI observations of beta Pictoris b.

The giants are coming

Both the US and Europe are building a new generation of telescopes that can truly be called giants. The Giant Magellan Telescope (GMT) will combine seven 8.4m mirrors, equivalent to a single 24.5m diameter telescope. The Thirty Meter Telescope (TMT) will be as large as its name suggests, while the European Extremely Large Telescope (E-ELT) will be the largest, with an effective diameter of 39m. All three projects are in a race with each other and hope to be fully operational in the mid-2020s.

Size is everything in this game – in particular for HDS and HDS+HCI observations. HDS benefits from the number of photons that can be collected, which scales with the telescope diameter squared. Taking other effects into account as well, the E-ELT will be >100 times faster than the VLT (in particular using the first-light instrument METIS, and HIRES). This will bring us near the range needed to target molecular oxygen in the atmospheres of Earth-like planets that transit nearby red dwarf stars. We have to be somewhat lucky for such nearby transiting systems to exist, but simulations show that the smaller host star makes the transmission signal of molecular oxygen from an Earth-size planet similar to the carbon monoxide signals we have already detected in hot Jupiter atmospheres – it is just that these systems will be much fainter than tau Bootis, requiring the significantly bigger telescopes. The technology is already here; it is all about collecting enough photons. This could also be solved in a different way if even the ELTs turn out not to be large enough: HDS observations of bright stars do not require precisely shaped mirrors, so arrays of low-precision light collectors could do the job, but that is something for the more distant future.


Image: Artist impression of the E-ELT – ready in 2024! (credit: ESO).

Even more promising are the high-contrast imaging capabilities of the future ELTs. Bigger telescopes not only collect more photons, they also see more sharply. This makes their ability to see faint planets in the glare of bright stars scale with telescope size up to the fifth power, making the E-ELT more than 1,000 times faster than the VLT. Excitingly, rocky planets in the habitable zones of nearby stars come within reach. Again, simulations show that their thermal emission can be detected around the nearest stars, while HDS+HCI at optical wavelengths can target their reflectance spectra, possibly even including molecular oxygen signatures.
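A quick back-of-envelope check of these scalings, taking the quoted diameters of an 8.2m VLT unit telescope and the 39m E-ELT at face value:

```python
# Back-of-envelope check of the telescope-size scalings quoted above,
# using an 8.2 m VLT unit telescope and the 39 m E-ELT.
d_vlt, d_eelt = 8.2, 39.0

photon_gain = (d_eelt / d_vlt) ** 2        # collecting area ~ D^2
hci_speed_gain = (d_eelt / d_vlt) ** 5     # HDS+HCI speed ~ D^5 (as quoted)

print(f"photon collection gain : {photon_gain:.0f}x")    # ~23x; >100x with other effects
print(f"HDS+HCI speed gain     : {hci_speed_gain:.0f}x")  # ~2400x, i.e. 'more than 1,000x'
```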

Realistic space missions

Whatever happens with space-based exoplanet astronomy, ground-based telescopes will push their way forward towards characterizing Earth-like planets. This does not mean there is no need for space missions. First of all, I have not done justice to the fantastic, groundbreaking exoplanet science the JWST is going to provide. Secondly, a series of transit missions, TESS from NASA (launch 2017) and CHEOPS and PLATO from ESA (launch 2018 and 2024), will discover all nearby transiting planet systems, a crucial prerequisite for much of the science discussed here.

Above all, ground-based measurements will not be able to provide a complete picture of a planet’s atmosphere – simply because large parts of the planet’s spectrum are not accessible from the ground. This means that the ultimate proof of extraterrestrial life will likely have to come from a space mission of the DARWIN or TPF type. Imagine how a ground-based detection of, say, water in an Earth-like atmosphere would open up political possibilities – but the right timing for such missions is of utmost importance. Aiming too high and too early means that lots of time and money will be wasted, at the expense of progress in exoplanet science. It is good to dream, but we should not forget to stay realistic.

Further reading

Snellen et al. (2013), Astrophysical Journal 764, 182: Finding Extraterrestrial Life Using Ground-based High-dispersion Spectroscopy

Snellen et al. (2014), Nature 509, 63: Fast spin of the young extrasolar planet beta Pictoris b

Snellen et al. (2015), Astronomy & Astrophysics 576, 59: Combining high-dispersion spectroscopy with high contrast imaging: Probing rocky planets around our nearest neighbors



The Closed Loop Conundrum

by Paul Gilster on September 10, 2015

In Stephen Baxter’s novel Ultima (Roc, 2015), Ceres is moved by a human civilization in a parallel universe toward Mars, the immediate notion being to use the dwarf planet’s volatiles to help terraform the Red Planet. Or is that really the motive? I don’t want to give too much away (and in any case, I haven’t finished the book myself), but naturally the biggest question is how to move an object the size of Ceres into an entirely new orbit.

Baxter sets up an alternate-world civilization that has discovered energy sources it doesn’t understand but can nonetheless use for interstellar propulsion and the numerous demands of a growing technological society, though one that is backward in comparison to our own. That juxtaposition is interesting because we tend to assume technologies emerge at the same pace, supporting each other. What if they don’t, or what if we simply stumble upon a natural phenomenon we can tap into without being able to reproduce its effects through any known science?

Something of the same juxtaposition occurs in Kim Stanley Robinson’s Aurora (Orbit, 2015), where we find a society that has the propulsion technologies to enable travel at a pace that can get a worldship to Tau Ceti in a few human generations. We’ve discussed Aurora in these pages recently, looking at some of the problems in its science — I’ll let those better qualified than myself have the final word on those — but what I found compelling about the novel was its depiction of what happens aboard that worldship.

Because it’s not at all inconceivable that we might solve the propulsion problem before we solve the closed-loop life support problem, and that is more or less what we see happening in Aurora. A worldship could house habitats of choice, and if you think of some visions of O’Neill cylinders, you’ll recall depictions that made space living seem almost idyllic. But Robinson shows us a ship that’s simply too small for its enclosed ecologies to flourish. Travel between the stars in such a ship would be harrowing, as indeed it turns out to be in the book. Micro-managing a biosphere is no small matter, and we have yet to demonstrate the ability.


Image: The O’Neill cylinder depicted here is one take on what might eventually become an interstellar worldship. Keeping its systems and crew healthy is a skill that will demand space-based experimentation, and plenty of it. Credit: Rick Guidice/NASA.

In Baxter’s Ultima, what happens with Ceres is compounded by the fact that just as humans don’t fully understand their power source, they also have to deal with an artificial intelligence whose motives are opaque. Put the two together and you can see why the movement of Ceres to a new position in the Solar System takes on an aura of menace. Various notions of a ‘singularity’ posit a human future in which our computers are creating entirely new generations of themselves that are designed according to principles we cannot begin to fathom. What happens then, and how do we ensure that the resulting machines want us to survive?

With Ceres very much in mind, I was delighted to receive the new imagery from the Dawn spacecraft at the present-day Ceres (in our non-alternate reality), showing us the bright spots that have commanded so much attention. Here we’re looking at a composite of two different images of Occator crater, one made with a short exposure to capture as much detail as possible, the other a longer exposure that best captures the background surface.


Image: Occator crater on Ceres, home to a collection of intriguing bright spots. The images were obtained by Dawn during the mission’s High Altitude Mapping Orbit (HAMO) phase, from which the spacecraft imaged the surface at a resolution of about 140 meters per pixel. Credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA.

We’re looking at the view from 1470 kilometers, with images offering three times better resolution than we had from the spacecraft’s previous orbit in June. Two eleven-day cycles of surface mapping have now been completed at this altitude, with the third beginning on September 9. All of Ceres is to be mapped six times over the next two months, with each cycle consisting of fourteen orbits. Changing angles in each mapping cycle will allow the Dawn researchers to put together 3-D maps from the resulting imagery.

So we’re learning more about the real Ceres every day. Given our lack of Baxter’s ‘kernels’ — the enigmatic power sources that energize his future civilization as well as the unusual but related culture they encounter — we may do better to consider this dwarf planet as a terraforming possibility in its own right, rather than a candidate for future use near Mars. On that score, I remind you of Robert Kennedy, Ken Roy and David Fields, who have written up a terraforming concept that could be applied to small bodies in or outside of the habitable zone (see Terraforming: Enter the ‘Shell World’ for background and citation).

It will be through myriad experiments in creating sustainable ecologies off-world that we finally conquer the life support problem. It always surprises me that it has received as little attention as it has in science fiction, given that any permanent human presence in space depends upon robust, recyclable systems that reliably sustain large populations. Our earliest attempts at closed-loop life support (think of the BIOS-3 experiments in the 1970s and 80s, and the Biosphere 2 attempt in the 1990s) have revealed how tricky such systems are. Robinson’s faltering starship in Aurora offers a useful cautionary narrative. We’ll need orbital habitats of considerable complexity as we learn how to master the closed-loop conundrum.



Nitrogen Detection in the Exoplanet Toolkit

by Paul Gilster on September 9, 2015

Extending missions beyond their initial goals is much on my mind as we consider the future of New Horizons and its possible flyby past a Kuiper Belt Object. But this morning I’m also reminded of EPOXI, which has given us views of the Earth that help us study what a terrestrial world looks like from a distance, characterizing our own planet as if it were an exoplanet. You’ll recall that EPOXI (Extrasolar Planet Observation and Deep Impact Extended Investigation) is a follow-on to another successful mission, the Deep Impact journey to comet Tempel 1.

As is clear from its acronym, EPOXI combined two extended missions, one following up the Tempel 1 studies with a visit to comet Hartley 2 (this followed an unsuccessful plan to make a flyby past comet 85P/Boethin, which proved to be too faint for accurate orbital calculations). The extrasolar component of EPOXI was called EPOCh (Extrasolar Planet Observation and Characterization), using the craft’s high resolution telescope to make photometric observations of stars with known transiting exoplanets. But the spacecraft produced observations of Earth that have been useful for exoplanet studies, as well as recording some remarkable views.


Image: Four images from a sequence of photos taken by the Deep Impact spacecraft when it was 50 million km from the Earth. Africa is at right. Notice how much darker the moon is compared to Earth. It reflects only as much light as a fresh asphalt road. Credit: Donald J. Lindler, Sigma Space Corporation, GSFC, Univ. Maryland, EPOCh/DIXI Science Teams.

Although communications with EPOXI were lost in the summer of 2013, the mission lives on in the form of the data it produced, some of which are again put to use in a new paper out of the University of Washington. Edward Schwieterman, a doctoral student and lead author on the work in collaboration with the university’s Victoria Meadows, reports on Earth observations from EPOXI that have been compared to three-dimensional planet-modeling data from the university’s Virtual Planetary Laboratory. The comparison has allowed confirmation of the signature of nitrogen collisions in our atmosphere, a phenomenon that should have wide implications.

The presence of nitrogen is significant because it can help us determine whether an exoplanet’s surface pressure is suitable for the existence of liquid water. Moreover, if we find nitrogen and oxygen in an atmosphere and are able to measure the nitrogen accurately, we can use the nitrogen as a tool for ruling out non-biological origins for the oxygen. But nitrogen is hard to detect, and the best way to find it in a distant planet’s atmosphere is to measure how nitrogen molecules collide with each other. The paper argues that these ‘collisional pairs’ create a signature we can observe, something the team has modeled and that the EPOXI work has confirmed.

Nitrogen pairs, written as (N2)2, are visible in a spectrum at shorter wavelengths, giving us a useful tool. The paper explains how this works:

A comprehensive study of a planetary atmosphere would require determination of its bulk properties, such as atmospheric mass and composition, which are crucial for ascertaining surface conditions. Because (N2)2 is detectable remotely, it can provide an extra tool for terrestrial planet characterization. For example, the level of (N2)2 absorption could be used as a pressure metric if N2 is the bulk gas, and break degeneracies between the abundance of trace gases and the foreign pressure broadening induced by the bulk atmosphere. If limits can be set on surface pressure, then the surface stability of water may be established if information about surface temperature is available.

It’s interesting as well that for half of Earth’s geological history, there was little oxygen present, despite the presence of life for a substantial part of this time. The paper argues that given Earth’s example, there may be habitable and inhabited planets without O2 we can detect. Moreover, atmospheres with low abundances of gases like N2 and argon are more likely to accumulate O2 abiotically, giving us a false positive for life.

A water dominated atmosphere lacks a cold trap, allowing water to more easily diffuse into the stratosphere and become photo-dissociated, leaving free O2 to build up over time. Direct detection of N2 through (N2)2 could rule out abiotic O2 via this mechanism and, in tandem with detection of significant O2 or O3, potentially provide a robust biosignature. Moreover, the simultaneous detection of N2, O2, and a surface ocean would establish the presence of a significant thermodynamic chemical disequilibrium (Krissansen-Totton et al. 2015) and further constrain the false positive potential.

Combining the EPOXI data with the Virtual Planetary Laboratory modeling demonstrates that the nitrogen collisions apparent in our own atmosphere should likewise be apparent in exoplanet studies by future space telescopes. EPOXI, then, demonstrated that nitrogen collisions could be found in a planetary spectrum, and the VPL work modeling a variety of nitrogen abundances in an exoplanet atmosphere shows how accurately the gas can be measured. “One of the interesting results from our study,” adds Schwieterman, “is that, basically, if there’s enough nitrogen to detect at all, you’ve confirmed that the surface pressure is sufficient for liquid water, for a very wide range of surface temperatures.”

The paper is Schwieterman et al., “Detecting and Constraining N2 Abundances in Planetary Atmospheres Using Collisional Pairs,” The Astrophysical Journal Vol. 810, No. 1 (28 August 2015). Abstract / preprint.



New Horizons: River of Data Commences

by Paul Gilster on September 8, 2015

Hard to believe it’s been 55 days since the New Horizons flyby. When the event occurred, I was in my daughter’s comfortable beach house working at a table in the living room, a laptop in front of me monitoring numerous feeds. My grandson, sitting to my right with his machine, was tracking social media on the event and downloading images. When I was Buzzy’s age that day, Scott Carpenter’s Mercury flight was in the works, and with all of Gemini and Apollo ahead, I remember the raw excitement as the space program kept pushing our limits. I had a sense of generational hand-off as I worked New Horizons with my similarly enthusiastic grandson.

Carpenter took the second manned orbital flight in the Mercury program when Deke Slayton had to step down because of his heart condition, and the flight may be most remembered for the malfunction in Carpenter’s pitch horizon scanner, leading to the astronaut’s taking manual control of the reentry, which in turn led to overshooting the splashdown point by 400 kilometers. Carpenter’s status during reentry was unknown and fear rose as forty minutes passed before his capsule could be located. Exactly how the overshoot happened remains controversial, at least in some quarters.

But back to New Horizons, which hit its targets so precisely that no controversy is necessary. The intensive downlinking of tens of gigabits of data is now fully launched, with the prospect of about a year before we have the entire package. Principal Investigator Alan Stern (SwRI) explains:

“This is what we came for – these images, spectra and other data types that are going to help us understand the origin and the evolution of the Pluto system for the first time. And what’s coming is not just the remaining 95 percent of the data that’s still aboard the spacecraft – it’s the best datasets, the highest-resolution images and spectra, the most important atmospheric datasets, and more. It’s a treasure trove.”


Image: This close-up image of a region near Pluto’s equator captured by New Horizons on July 14 reveals a range of youthful mountains rising as high as 3.4 kilometers above the surface of the dwarf planet. This iconic image of the mountains, informally named Norgay Montes (Norgay Mountains) was captured about 1 ½ hours before New Horizons’ closest approach to Pluto, when the craft was 77,000 kilometers from the surface of the icy body. The image easily resolves structures smaller than 1.6 kilometers across. The highest resolution images of Pluto are still to come, with an intense data downlink phase commencing on Sept. 5. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.

Given images as rich as the above, the prospect of significantly more detailed views will keep the coming months lively, and after that, we have the possibility of a Kuiper Belt Object flyby in a New Horizons extended mission. Remember that since the flyby, the data being returned has been information collected by the energetic particle, solar wind and space dust instruments. Now we move into higher gear, although it’s a pace that still demands patience. Given the distance of the spacecraft from Earth (as I write, the craft is 65,512,553 kilometers beyond Pluto, and 33.36 AU from the Sun), the downlink rate is no more than 1-4 kilobits per second.
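As a rough illustration of why the downlink takes so long, assume (hypothetically) about 50 gigabits still on board, an average rate of 2 kilobits per second within the quoted range, and 16 hours of Deep Space Network contact per day; only the 1-4 kbit/s figure comes from the mission, the rest are assumptions made purely for the sake of the estimate:

```python
# Rough feel for the downlink timescale quoted above. The 1-4 kbit/s range is
# from the article; the data volume and daily DSN contact time are assumptions.
data_volume_gbit = 50.0          # assumed "tens of gigabits" still on board
rate_kbps = 2.0                  # within the quoted 1-4 kbit/s range
contact_hours_per_day = 16.0     # assumed average Deep Space Network coverage

bits = data_volume_gbit * 1e9
bits_per_day = rate_kbps * 1e3 * contact_hours_per_day * 3600
print(f"~{bits / bits_per_day:.0f} days of downlink")   # ~434 days, i.e. over a year
```

That works out to a bit over a year of patient downlinking, broadly consistent with the timescale quoted above.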


Image: All communications with New Horizons – from sending commands to the spacecraft, to downlinking all of the science data from the historic Pluto encounter – happen through NASA’s Deep Space Network of antenna stations in (clockwise, from top left) Madrid, Spain; Goldstone, California, U.S.; and Canberra, Australia. Even traveling at the speed of light, radio signals from New Horizons need more than 4 ½ hours to travel the 4.83 billion kilometers between the spacecraft and Earth. Credit: NASA.

New Horizons is sometimes described as the fastest spacecraft ever launched, which isn’t correct given the Helios probes, launched in 1974 and 1976, that reached 70 kilometers per second at closest approach to the Sun. Helios II, just slightly faster than its counterpart, can be considered the fastest man-made object in history. But it’s true that New Horizons left Earth traveling outward faster than any previous vehicle. Will it catch up with the Voyagers? No, because although it left Earth faster than either Voyager, it didn’t have the benefit of full-fledged gravitational assists around both Jupiter and Saturn. While Voyager 1 has a heliocentric speed of 17.05 kilometers per second, New Horizons is now at 14.49 kilometers per second.

Unprocessed imagery from New Horizons’ Long Range Reconnaissance Imager (LORRI) becomes available each Friday at the LORRI Images from the Pluto Encounter page, with the next batch due on September 11. And although it’s been widely published, I do want to get the Pluto flyby animation up on Centauri Dreams, and note that Stuart Robbins (SwRI), who created the fly-through sequence, has written up the process in To Pluto and Beyond. Robbins notes that this is a system we’re unlikely to revisit in our lifetimes, but the good news is that we still have an operational craft with the potential for at least one KBO flyby.


The Shape of Space Telescopes to Come

by Paul Gilster on September 4, 2015

Planning and implementing space missions is a long-term process, which is why we’re already talking about successors to the James Webb Space Telescope, itself a Hubble successor that has yet to be launched. Ashley Baldwin, who tracks telescope technologies deployed on the exoplanet hunt, here looks at the prospects not just for WFIRST (Wide-Field InfraRed Survey Telescope) but a recently proposed High-Definition Survey Telescope (HDST) that could be a major factor in studying exoplanet atmospheres in the 2030s. When he is not pursuing amateur astronomy at a very serious level, Dr. Baldwin serves as a consultant psychiatrist at the 5 Boroughs Partnership NHS Trust (Warrington, UK).

by Ashley Baldwin


“It was the best of times, it was the worst of times…” Dickens apart, the future of exoplanet imaging could be about two telescopes rather than two cities. Consider the James Webb Space Telescope (JWST) and the Wide-Field InfraRed Survey Telescope (WFIRST), which as we shall see have the power not just to see a long way but also to determine any big-telescope future. JWST, or rather its performance, will determine whether there is even to be such a future. The need for a big telescope, and what its function should be, are the subject of increasing debate as the next NASA ten-year roadmap, the Decadal Survey for 2020, approaches.

NASA will form Science Definition “focus” groups from the full range of its astrophysics community to determine the shape of this map. The Exoplanet Program Analysis Group (ExoPAG) is a dedicated group of exoplanetary specialists tasked with soliciting and coordinating community input to NASA’s exoplanet exploration programme through missions like Kepler, the Hubble Space Telescope (HST) and, more recently, Spitzer. They have produced an outline of their vision in response to NASA’s solicitation of ideas, which is addressed here along with a detailed look at some of its central elements, by way of explaining some of the complex features that exoplanet science requires.

Various members of ExoPAG have been involved in the exoplanet arm of the JWST and most recently in the NASA dark energy mission, which with the adoption of the “free” NRO 2.4m mirror array and a coronagraph is increasingly becoming an ad hoc exoplanet mission too. This mission has also been renamed: Wide-Field InfraRed Survey Telescope (WFIRST), a name that will hopefully go down in history! More about that later.

The Decadal Survey and Beyond

As we build towards the turn of the decade, though, the next Decadal Survey looms. This is effectively a road map of NASA’s plans for the coming decade. Never has there been a decade as important for exoplanet science if it is to build on Kepler’s enormous legacy. To date, over 4000 “candidate” planets have been identified and are awaiting confirmation by other means, such as the radial velocity technique. Recently twelve new planets have been identified in the habitable zones of their parent stars, all roughly Earth-sized. Why so many now? Sophisticated new software has been developed to automate the screening of the vast number of signals returned by Kepler, increasing the number of potential targets and, more importantly, becoming more sensitive to the smaller signals of Earth-sized planets.

So what is next? In general these days NASA can afford one “Flagship” mission. This will be WFIRST for the 2020s. It is not a dedicated mission but as Kepler and ground-based surveys return increasingly exciting data, WFIRST evolves. In terms of the Decadal Survey, the exoplanet fraternity has been asked to develop mission concepts within the still-available funds.

Three “Probe” class concepts — up to and above current Discovery mission cost caps but smaller than flagship-class missions — have been mooted, the first of which is developing a star-shade to accompany WFIRST. This, if you recall, is an external occulting device that blocks out starlight by sitting several tens of thousands of kilometers away, between the parent star and the telescope, allowing through the much dimmer accompanying planetary light and making characterisation possible. A recent Probe concept, Exo-S, addressed this very issue and proposed either a small 1.1m dedicated telescope and star-shade, or the addition of a star-shade to a pre-existing mission like WFIRST. At that time, the “add-on” option wasn’t deemed possible, as it was proposed to put WFIRST into a geosynchronous orbit where a star-shade could not function.

The ExoPAG committee have recently produced a consensus statement of intent in response to a NASA request for guidance on an exoplanet roadmap for incorporation into NASA’s generic version for Decadal Survey 2020. As stated above, this group consists of a mixture of different professionals and amateurs (astrophysicists, geophysicists, astronomers, etc) who advise on all things exoplanet including strategy and results. They have been asked to create two science definition teams representing the two schools of exoplanet thinking to contribute to the survey.

One suggestion involved placing WFIRST at the star-shade friendly Earth/Sun Lagrange 2 point (about 1.5 million kilometers from Earth, where the combined gravity of the Sun and Earth lets a spacecraft keep station with the Earth in a relatively stable orbit). This, if it happens, represents a major policy change from the original geosynchronous orbit, and is very exciting because, unlike the current exoplanet coronagraph on the telescope, a star-shade of 34m diameter could image Earth-mass planets in the habitable zones of Sun-like stars. More on that below.

WFIRST at 2.4m will be limited in how much atmospheric characterisation it can perform given its relatively small aperture and time-limited observation period (it is not a dedicated exoplanet mission and still has to do dark energy science). The mission can be expected to locate several thousand planets via conventional transit photometry as well as micro-lensing and possibly even a few new Earth-like planets by combining its results with the ESA Gaia mission to produce accurate astrometry (position and mass in three dimensions) within 30 light years or so. There has even been a recent suggestion that exoplanet science or at least the coronagraph actually drives the WFIRST mission. A total turnaround if it happens and very welcome.

The second Probe mission is a dedicated transmission spectroscopy telescope. It would be a telescope of around 1.5m with a spectrograph, fine guidance system and mechanical cooler, used to spectroscopically analyse the light of a distant star as it passes through the atmosphere of a transiting exoplanet. There is no image of the planet here, but the spectrum of its atmosphere tells us almost as much as seeing it. The bigger the telescope aperture, the better for seeing smaller planets with thinner atmospheric envelopes. Planets circling M-dwarfs make the best targets, as the planet-to-star size ratio is highest there. The upcoming TESS mission is intended to provide such targets for the JWST, although even JWST’s 6.5m aperture will struggle to characterise atmospheres around all but the largest planets or perhaps, if lucky, a small number of “super-terrestrial” planets around M-dwarfs. It will be further limited by general astrophysics demands on its time. A Probe telescope would pick up where JWST left off and, although smaller, could compensate by being a dedicated instrument with greater imaging time.
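The advantage of M-dwarf hosts is easy to quantify with the basic transit-depth relation, depth ≈ (Rplanet/Rstar)²; the sketch below uses standard approximate radii and is only meant to show the scale of the effect:

```python
# Why M-dwarfs make the best transit targets: the transit depth goes as
# (R_planet / R_star)^2, so the same planet gives a much larger signal in
# front of a smaller star. Radii below are standard approximate values.
R_sun_km = 696_000.0
R_earth_km = 6_371.0
R_mdwarf_km = 0.15 * R_sun_km      # a small M-dwarf, ~0.15 solar radii

depth_sun = (R_earth_km / R_sun_km) ** 2
depth_mdwarf = (R_earth_km / R_mdwarf_km) ** 2

print(f"Earth transiting a Sun-like star : {depth_sun:.1e}")      # ~8e-5
print(f"Earth transiting a small M-dwarf : {depth_mdwarf:.1e}")   # ~4e-3
print(f"signal boost                     : {depth_mdwarf / depth_sun:.0f}x")  # ~44x
```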

The final Probe concept links to WFIRST and Gaia. It would involve a circa 1.5m class telescope as part of a mission that, like Gaia, observes multiple stars on multiple occasions to measure subtle variations in their positions over time, determining the presence of orbiting planets by their effect on the star. Unlike radial velocity methods, it can accurately determine mass and orbital period down to Earth-sized planets around neighbouring stars. A similar concept called NEAT was proposed for ESA funding but rejected despite being robust — a good summary is available through a Google search.

These parameters are obviously useful in their own right, but more importantly they provide targets for direct imaging telescopes like WFIRST rather than leaving the telescope to search star systems “blindly,” wasting limited time. At present the plan for WFIRST is to image pre-existing radial velocity planets to maximise searching, but nearby RV [radial velocity] planets are largely limited to the larger gas giants, and although important to exoplanetary science, they are not the targets that are going to excite the public or, importantly, Congress.

All of these concepts occur against the backdrop of the ESA PLATO transit mission and the new generation of super telescopes, the ELTs. Though ground based and limited by atmospheric interference, these will synergize perfectly with space telescopes, as their huge light-gathering capacity will allow high-resolution spectroscopy of suitable exoplanet targets identified by their space-based peers, especially if also combined with high quality coronagraphs.


Image: A direct, to-scale comparison between the primary mirrors of the Hubble Space Telescope, James Webb Space Telescope, and the proposed High Definition Space Telescope (HDST). In this concept, the HDST primary is composed of 36 1.7-meter segments. Smaller segments could also be used: an 11-meter class aperture could be made from 54 1.3-meter segments. Credit: C. Godfrey (STScI).

Moving Beyond JWST

So the 2020s have the potential to be hugely exciting. But simultaneously we are fighting a holding battle to keep exoplanet science at the top of the agenda and make a successful case for a large telescope in the 2030s. It should be noted that there is still an element in NASA that is unsure what the reaction to the discovery of Earth-like planets would be!

A series of “Probe” class missions will run in parallel with or before any flagship mission. No specific plans have been made for a flagship mission, but an outline review of its necessary requirements has been commissioned by the Association of Universities for Research in Astronomy (AURA) and released under the descriptive title “High Definition Space Telescope” (HDST). A smaller review has produced an outline for a dedicated exoplanet flagship telescope called HabEx. These have been proposed for the end of the next decade but have met resistance as being too close in time to the expensive JWST. As WFIRST is in effect a flagship mission (although never publicly announced as such), and NASA generally can afford one such mission per decade, any big telescope will have to wait until the 2030s at the earliest. Decadal 2020 and the exoplanet consensus and science definition groups contributing to it will basically have to play a “holding” role, keeping the exoplanet case alive throughout the decade and using evidence from available resources to build support for a subsequent large HDST.

The issue then becomes the launch vehicle upper stage “shroud,” or width. The first version of the Space Launch System (SLS) is only about 8.5m. Ideally the shroud should be at least a meter larger than the payload to allow “give” during launch pressures, which is especially important for a monolithic mirror where the best orientation is “face on”. Given the large stresses of launch, lightweight “honeycomb” versions of traditional mirrors cannot be used and solid versions weigh in at 56 tonnes, even before the rest of the telescope. For the biggest possible monolithic telescopes at least, we will have to wait for the 10m-plus shroud and heavier lifting ability of the SLS or any other large launcher.

A star-shade on WFIRST via one of these Probe missions seems the best bet as a short-term driver of change. Internal coronagraphs on 2m class telescopes allow too little light through for eta Earth spectroscopic characterisation, but star-shades will (provided their light enters the telescope optical train high enough up, if, like WFIRST, the plan is to have both internal and external occulters). There will be a smaller inner working angle, too, to get at the habitable zones of later spectral type (K) stars. That’s if WFIRST ends up at L2, though L2 is talked about more and more.

The astrometry mission would be a dedicated version of the WFIRST/Gaia synergy, saving lots of eta Earth searching time. It should be doable within Probe funding, as the ESA NEAT mission concept came in at under that. NEAT fell through due to its formation-flying element, but after PROBA-3 (a European solar coronagraphy mission that will in effect be the first dedicated “precision” formation-flying mission) that issue should be resolved.

A billion dollars probably gets a decent transit spectroscopy mission with enough resolution to follow up some of the more promising TESS discoveries. Put these together and that’s a lot of exoplanet science, with a tantalising amount of habitability material too. WFIRST’s status seems to be increasing all the time, and at one recent exoplanet meeting led by Gary Blackwood it was even stated (and highlighted) publicly that the coronagraph should LEAD the mission science. That’s totally at odds with previous statements that emphasised the opposite.

Other Probe concepts consider high-energy radiation such as X-rays, and though less relevant to exoplanets, the idea acknowledges the fact that any future telescopes will need to look at all facets of the cosmos and not just exoplanets. Indeed, competition for time on telescopes will become even more intense. Given the very faint targets that exoplanets present it must be remembered that collecting adequate photons takes a lot of precious telescope time, especially for small, close-in habitable zone planetary targets.

The ExoPAG consensus represents a compromise between two schools of thought: Those who wish to prioritise habitable target planets for maximum impact, and those favouring a methodical analysis of all exoplanets and planetary system architecture to build up a detailed picture of what is out there and where our own system fits into this. All of these are factors that are likely to determine the likelihood of life, and both approaches are robust. I would recommend that readers consult this article and related material and reach their own conclusions.


Image: A simulated image of a solar system twin as seen with the proposed High Definition Space Telescope (HDST). The star and its planetary system are shown as they would be seen from a distance of 45 light years. The image here shows the expected data that HDST would produce in a 40-hour exposure in three filters (blue, green, and red). Three planets in this simulated twin solar system – Venus, Earth, and Jupiter – are readily detected. The Earth’s blue color is clearly detected. The color of Venus is distorted slightly because the planet is not seen in the reddest image. The image is based on a state-of-the-art design for a high-performance coronagraph (that blocks out starlight) that is compatible for use with a segmented aperture space telescope. Credit: L. Pueyo, M. N’Diaye (STScI).

Defining a High Definition Space Telescope

What of the next generation of “Super Space Telescope”? The options are all closely related and fall under the broad heading of High Definition Space Telescope (HDST). Such a telescope requires an aperture of between 10 and 12 metres minimum to have adequate light-capturing ability and resolution to carry out both exoplanet imaging and wider astrophysics, such as viewing extragalactic phenomena like quasars and related supermassive black holes. Regardless of the specifics, these requirements demand absolute stability, with the telescope needing to hold its figure at picometre (10⁻¹² metre) levels in order to function.

The telescope is diffraction limited at 500nm, right in the middle of the visible spectrum. The diffraction limit is effectively the wavelength at which a given circular mirror delivers its best angular resolution, the ability to discern detail. Angular resolution (in radians) is governed by the ratio λ/D, where λ (lambda) is the wavelength expressed in metres and D is the telescope aperture, also in metres. For the HDST at its optimum functioning or “diffraction limit” of 500nm, this works out to 500 × 10⁻⁹ m / 12 m, or about 8.6 milliarcseconds.

The larger the aperture of a telescope, the more detail it can see at any given wavelength; conversely, the longer the wavelength, the less detail it can see. That is under the perfect conditions experienced in space, as opposed to the constantly moving atmosphere above ground-based scopes, which will rarely approach the diffraction limit. So the HDST will not have the same degree of resolution at infrared wavelengths as at visible wavelengths, which is relevant because several potential biosignatures appear in spectra at longer wavelengths.
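For a feel for the numbers, the λ/D relation above can be evaluated directly; this small sketch simply applies the formula quoted in the text for a 12m aperture at a visible and an infrared wavelength:

```python
import math

# Diffraction-limited angular resolution ~ wavelength / aperture (in radians),
# converted to milliarcseconds, for the 12 m HDST discussed above.
RAD_TO_MAS = 180.0 / math.pi * 3600.0 * 1000.0   # radians -> milliarcseconds

def resolution_mas(wavelength_m, aperture_m):
    return (wavelength_m / aperture_m) * RAD_TO_MAS

D = 12.0
print(f"500 nm (visible)  : {resolution_mas(500e-9, D):.1f} mas")   # ~8.6 mas
print(f"5000 nm (infrared): {resolution_mas(5000e-9, D):.0f} mas")  # ~86 mas, ten times coarser
```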

Approaching the diffraction limit is possible on the ground with the use of laser-produced guide stars and modern deformable mirrors or “adaptive optics,” which help compensate. This technique of deformable primary and especially secondary mirrors will be important in space as well, in order to achieve the incredible stability required for any telescope observing distant and dim exoplanets. This is especially true of coronagraphs, though much less so with star-shades, which could be important in determining which starlight suppression technique to employ.

Additionally, the polishing “finish” of the mirror itself requires incredible precision. As a telescope becomes larger, the quality of its mirror needs to improve, given the minute wavelengths being worked with. The degree of polish or “finish” required is defined as a fraction of a wavelength, the wavefront error (WFE). For the HDST this is as low as 1/10 or even 1/20 of the wavelength in question – in its case generally visible light around 500nm, so the error must be below 50nm, a tiny margin that illustrates the ultra-high quality of telescope mirror required.

A large 12m HDST would require a WFE of about 1/20 lambda and possibly even lower, which works out to less than 30nm. The telescope would also require a huge giga-pixel array of sensors to capture any exoplanet detail – Electron Multiplying CCDs (EMCCDs) or their Mercury Cadmium Telluride-based near-infrared equivalents – which would need passive cooling to prevent heat generated by the sensors themselves from producing “dark current,” creating a false digital image and background “noise”.

Such arrays already exist in space telescopes like ESA’s Gaia, and producing larger versions would be one of the easier design requirements. For an UltraViolet-Optical-InfraRed (UVOIR) telescope an operating temperature of about -100 C would suffice for the sensors, while the telescope itself could remain near room temperature.

All of the above is difficult but not impossible even today and certainly possible in the near future, with conventional materials like ultra-low expansion glass (ULE) able to meet this requirement, and more recently silicon carbide composites, too. The latter have the added advantage of a very low coefficient of expansion. This last feature can be crucial depending on the telescope sensor’s operating temperature range. Excessive expansion due to a “warm” telescope operating around 0-20 degrees C could disturb the telescope’s stability. It was for this reason that silicon carbide was chosen for the structural frame of the astrometry telescope Gaia, where stability was also key to accurately positioning one billion stars.

A “warm” operating temperature of around room temperature helps reduce telescope cost significantly, as illustrated by the roughly $8 billion cost of the JWST, whose operating temperature of a few tens of Kelvin demands elaborate and expensive cryogenic engineering. Think how sad it was seeing the otherwise operational 3.5m ESA Herschel space telescope drifting off to oblivion when its supply of liquid helium ran out.

The operating temperature of a telescope’s sensors determines its wavelength-sensitive range or “bandpass.” For wavelengths longer than about 5 micrometers (5000 nm), the sensors of the telescope require cooling in order to prevent the temperature of the telescope apparatus from impacting any incoming information. Bandpass is also influenced, generally made much smaller, by passing through a coronagraph. The longer the wavelength, the greater the cooling required. Passive cooling involves attaching the sensors to a metal plate that radiates heat out to space. This is useful for a large telescope that requires precision stability, as it has no moving parts that can vibrate. Cooler temperatures can be reached by mechanical “cryocoolers,” which can get down as low as a few tens of Kelvin (seriously cold) but at the price of vibration.

This was one of the two main reasons why the JWST telescope was so expensive. It has to be cooled to within a few tens of Kelvin of absolute zero (the point at which a body has essentially no thermal energy and therefore the lowest reachable temperature), and without vibration, in order to reach longer infrared wavelengths and look back further in time.

Remember, the further light has travelled since the Big Bang, the more it is stretched or “red-shifted,” and seeing back as far as possible was a big driver for JWST. The problem with liquid helium is that it only lasts so long before boiling off, with the large volumes required for ten years of service presenting a large mass and requiring extensive, expensive testing; it is cooling complexity of this kind that contributed so much to JWST’s cost and time overrun.

The other issue with large telescopes is whether they are made from one single mirror, like Hubble, or are segmented like the Keck telescopes and JWST. The largest monolithic mirrors that can currently be manufactured are off-axis (unobstructed) designs 8.4m in diameter, bigger than JWST and perfected in ground scopes like the LBT and GMT. Off-axis means that the focal plane of the telescope is offset from its aperture, so that a focusing secondary mirror, sensor array, spectrograph or coronagraph doesn’t obstruct the aperture and reduce the available light by up to 20%. A big attraction of this design is that an unobstructed 8.4m mirror thus collects roughly the equivalent of a 9.2m on-axis mirror, ironically near the minimum requirement for the ideal exoplanet telescope.

Given the construction of six such mirrors for the GMT, this mirror is now almost “mass produced,” and thus very reasonably priced. The off-axis design allows sensor arrays, spectrographs and especially large coronagraphs to sit outside the telescope without needing to be suspended within it on “spider” attachments – the supports that create the familiar “star”-shaped diffraction patterns in images from conventional telescope designs. Despite such a mirror being cheaper to manufacture and already tested extensively on the ground, the problem is that there are currently no launchers big and powerful enough to lift what would in effect be a 50-tonne-plus telescope into orbit (a solid rather than lightweight honeycomb design is needed because of the high “g” and acoustic vibration forces at launch).

In general, a segmented telescope can be “folded” up inside a launcher fairing very efficiently, up to a maximum aperture of about 2.5 times the fairing width. The Delta IV Heavy launcher has a fairing width of about 5.5m, so in theory a segmented telescope of up to 14m could be launched, provided it stayed below the maximum weight capacity of about 21 tonnes to geosynchronous transfer orbit. So it could be launched tomorrow! It was this novel segmentation that, along with cooling, added to the cost and construction time of the JWST, though hopefully, once successfully launched, it will have demonstrated its technological readiness and be cheaper next time round.

By the time an HDST variant is ready to launch it is hoped that there will be launchers with the fairing widths and power to lift such telescopes, and they will be segmented because at 12m they exceed the monolithic limit. With a wavelength operating range from circa 90nm to 5000nm, they will require passive cooling only, and the segmentation design will already have been tested, both of which will help reduce cost, leaving it more simply dependent on size and launcher. This sort of bandpass, though not so large as that of a helium-cooled telescope, is more than adequate for looking for key biosignatures of life such as ozone (O3), methane, water vapour and CO2 under suitable conditions and with a good “signal to noise ratio”, the degree to which the required signal stands out from background noise.

Separating Planets from their Stars

Ideally the signal to noise ratio should be better than ten. In terms of instrumentation, all exoplanet scientists will want a large telescope of the future to have starlight suppression systems to help directly image exoplanets as near to their parent stars as possible, with a contrast reduction of 10⁻¹⁰ in order to view Earth-sized planets in the liquid water “habitable zone.” The more Earth-like planets and biosignatures the better. There are ways of producing biosignature-like features in a spectrum abiotically, so a larger sample of such signatures strengthens the case for a biological origin rather than a coincidental non-biological one.

As has been previously discussed, there are two ways of doing this, with internal and external occulting devices. Internal coronagraphs are a series of masks and mirrors that help “shave off” the offending starlight, leaving only the orbiting planets. The race is on as to how close to the star this can be done. NASA’s WFIRST will tantalisingly achieve contrast reductions between 10⁻⁹ and 10⁻¹⁰, which shows how far this technology has come since the mission was conceived three years ago, when such levels were pure fantasy.

The inner working angle (IWA) – how close to the parent star a planet can be imaged – is measured in milliarcseconds (mas), and for WFIRST it is slightly more than 100 mas, corresponding to somewhere between the orbits of Earth and Mars for a star within roughly 10 to 15 parsecs. A future HDST coronagraph would hope to get as low as 10 mas, thus reaching habitable zone planets around smaller, cooler (and more common) stars. That said, coronagraphs are an order of magnitude more difficult to design for segmented scopes than for monolithic designs, and little research has yet gone into this area. An external occulter or star-shade achieves the same goals as a coronagraph, but does so by sitting way out in front of the telescope, between it and the target star, casting a shadow that excludes the starlight. The recent Probe class concept explored the use of a 34m shade with WFIRST at up to 35,000 km from the telescope. The throughput of light is 100%, versus a maximum of 20-30% for most coronagraph designs, in an area where photons are at a premium: perhaps just 1 photon per second or less from an exoplanet might hit the sensor array.
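To translate those inner working angles into physical separations, one can use the small-angle relation that follows from the definition of the parsec: projected separation in AU ≈ angle in arcseconds × distance in parsecs. The distances below are chosen purely for illustration:

```python
# Converting inner working angle to a projected separation: by the definition
# of the parsec, separation [AU] ~ angle [arcsec] x distance [pc].
def projected_sep_au(iwa_mas, distance_pc):
    return (iwa_mas / 1000.0) * distance_pc

for iwa in (100.0, 10.0):                       # WFIRST-like vs HDST goal, in mas
    for d in (5.0, 10.0, 15.0):                 # illustrative stellar distances, pc
        print(f"IWA {iwa:5.0f} mas at {d:4.0f} pc -> {projected_sep_au(iwa, d):.2f} AU")
# 100 mas reaches ~1-1.5 AU (Earth to Mars) only for stars within ~10-15 pc;
# 10 mas would reach ~0.1 AU, the habitable zones of much cooler stars.
```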

A quick word on coronagraph type might be useful. Most coronagraphs consist of a “mask” that sits at the focal plane and blocks out the central parent starlight whilst allowing the fainter peripheral exoplanet light to pass and be imaged. Some starlight will diffract around the mask (especially at longer wavelengths like the infrared) but can be removed by shaping the entry pupil or by subsequent apodization (an optical filtering technique), a process utilising a series of mirrors to “shave” off additional starlight until just the planet light is left.

For WFIRST the coronagraph is a combination of a “Lyot” mask and shaped pupil. This is efficient at blocking starlight to within 100 mas of the star but at the cost of losing 70-80% of the planet light, as previously stipulated. Such is the current level of technological progression ahead of proposals for the HDST. The reserve design utilises apodization, which has the advantage of removing starlight efficiently but without losing planet light; indeed, as much as 95% gets through. The design has not yet been tested to the same degree as the WFIRST primary coronagraph, though, as the necessary additional mirrors are very hard to manufacture. Its high “throughput” of light is very appealing where light is so precious, and thus the design is likely to see action at a later date. A coronagraph throughput of 95% on an off-axis 8.4m telescope compared to 20-30% for an alternative on even a 12m would allow more light to be analysed.

The advantage of the star-shade is that the even more stringent stability requirements of a coronagraph are very much relaxed, and the amount of useful light reaching the focal plane of the telescope is near 100%. No place for waste. Star-shades offer deeper spectroscopic analysis than coronagraphs, too. The disadvantage is that a star-shade requires two separate spacecraft engaged in precision “formation flying” to keep the shade’s shadow in the right place, and the shade needs to move into a new position every time a new target is selected, taking days or weeks to get there. Its finite propellant supply limits its lifespan to a maximum of about 5 years, and perhaps thirty or so premium-target exoplanets. Thus it may be that preliminary exoplanet discovery and related target mapping is done rapidly via a coronagraph before atmospheric characterisation is done later by a star-shade, with its greater throughput of light and greater spectroscopic range.

The good news is that the recent NASA ExoPAG consensus criteria require an additional Probe class ($1 billion) star-shade mission for WFIRST as well as a coronagraph. This would need the telescope to be at the stable Sun/Earth Lagrange point, but would make the mission in effect a technological demonstration mission for both types of starlight suppression, saving development costs for any future HDST while imaging up to 30 habitable zone Earth-like planets and locating many more within ten parsecs in combination with the Gaia astrometry results.

The drawback is that WFIRST has a monolithic mirror, and coronagraph development to date has focused on this mode rather than on the segmented mirrors of larger telescopes. Star-shades are less affected by mirror type or quality, but a 12m telescope — compared to WFIRST’s 2.4m — would only achieve maximum results with a huge 80m shade. Building and launching a 34m shade is no mean feat, but building and launching an enormous 80-100m version might even require fabrication in orbit. It would also need to be 160,000-200,000 km from its telescope, making formation flying no easy achievement, especially as all star-shade technology can be tested only in computer simulations or at reduced scale in practice.
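A simplified way to see why shade size and separation scale together is to treat the inner working angle as simply the angle the shade’s radius subtends at the telescope, ignoring diffraction and petal design entirely; on that rough picture the two configurations mentioned above come out as follows:

```python
import math

# Simplified star-shade geometry: the inner working angle is roughly the
# angle subtended by the shade's radius at the telescope-shade separation.
# This ignores diffraction and petal design, so treat it only as a rough guide.
RAD_TO_MAS = 180.0 / math.pi * 3600.0 * 1000.0

def shade_iwa_mas(diameter_m, separation_km):
    return (diameter_m / 2.0) / (separation_km * 1000.0) * RAD_TO_MAS

print(f"34 m shade at  35,000 km : {shade_iwa_mas(34.0, 35_000):.0f} mas")   # ~100 mas
print(f"80 m shade at 180,000 km : {shade_iwa_mas(80.0, 180_000):.0f} mas")  # ~45 mas
```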

HDST Beyond Exoplanets

So that’s the exoplanet element. Exciting as such science is, it only represents a small portion of all astrophysics and any such HDST is going to be a costly venture, probably in excess of JWST. It will need to have utility across astrophysics, and herein lies the problem. What sort of compromise can be reached amongst different schools of astrophysics in terms of telescope function and also access time? Observing distant exoplanets can take days, and characterising their atmospheres even longer.

Given the price of JWST and its huge cost and time overrun, any Congress will be skeptical of being drawn into a bottomless financial commitment. It is for this reason that increasingly the focus is on both JWST and WFIRST. The first has absolutely GOT to work, well and for a long time, so that all its faults (as with Hubble, ironically) can be forgotten amid the celebration of its achievements. WFIRST must illustrate how a flagship level mission can work at a reasonable cost (circa $2.5 billion) and also show that all the exoplanet technology required for a future large telescope can work and work well.

The HABX2 telescope is in effect a variant of HDST whose aperture would be determined by available funds, with the maximum possible passively cooled sensor bandpass described above and a larger version of WFIRST’s additional starlight suppression technology. In effect, a dedicated exoplanet telescope. It, too, would use a coronagraph or star-shade.

The overarching terms for all these telescope variants are determined by wavelength; thus the instrument would be referred to as a Large UltraViolet Optical InfraRed (LUVOIR) telescope, with the specific wavelength range to be determined as necessary. Such a telescope is not a dedicated exoplanet scope and would obviously require suitable hardware. This loose definition is important, as there are other telescope types — high energy, for instance, looking at X-rays; the NASA Chandra telescope doesn’t image the highest-energy X-rays emitted by quasars or black holes. Between JWST’s coverage and that of ALMA (the Atacama Large Millimeter/submillimeter Array) lies the far infrared, which can be served by dedicated telescopes and has not been explored extensively. There are astrophysicist groups lobbying for all these telescope types.

Here WFIRST is again key. It will locate thousands of planets through conventional transit photometry and micro-lensing as well as astrometry, but the directly imaged planets found via its coronagraph and, better still, its star-shade should, if characterised (with the JWST?), speak for themselves and, if not guarantee a dedicated exoplanet HDST, at least provide NASA and Congress with the confidence to back a large space “ELT” with suitable bandpass and starlight suppression hardware, and time to investigate further. The HDST is an outline of what a future space telescope, be it HABX2 or a more generic instrument, might be.


Image: A simulated spiral galaxy as viewed by Hubble, and the proposed High Definition Space Telescope (HDST), at a lookback time of approximately 10 billion years (z = 2). The renderings show a one-hour observation for each space observatory. Hubble detects the bulge and disk, but only the high image quality of HDST resolves the galaxy’s star-forming regions and its dwarf satellite. The zoom shows the inner disk region, where only HDST can resolve the star-forming regions and separate them from the redder, more distributed old stellar population. Credit: D. Ceverino, C. Moody, G. Snyder, and Z. Levay (STScI).

Challenges to Overcome

The concern is that although much of its technology will hopefully be proven through the success of JWST and WFIRST, the step up in size in itself requires a huge technological advance, not least because of the exquisite accuracy required at all levels of its functioning, from observing exoplanets via a star-shade or coronagraph to the actual design, construction and operation of these devices. A big caveat is that it was this technological uncertainty that contributed to the time and cost overrun of JWST, something both the NASA executive and Congress are aware of. It is highly unlikely that such a telescope will launch before the mid-2030s at an optimistic estimate. There has already been pushback on an HDST telescope from NASA. What might be more likely is a compromise, one which delivers a LUVOIR telescope as opposed to an X-Ray or far-infrared alternative, but at more reasonable cost and budgeted for over an extended time prior to a 2030s launch.

Congress is keen to drive forward high-profile manned spaceflight. Whatever your thoughts on that, it is likely to lead to the evolution of the SLS and private equivalents such as SpaceX launchers. Should these offer a fairing of around 10m, it would be possible to launch the largest possible monolithic mirror in an off-axis format, which allows easier and more efficient use of a coronagraph or an intermediate (50m) star-shade with minimal technology development and at substantially lower cost. Such a telescope would not represent as big a technological leap and would be a relatively straightforward design. Negotiation over telescope usage could lead to more time devoted to exoplanet science, compensating further for the “descoping” from the 12m HDST ideal (only 15% of JWST observing time is granted for exoplanet use). Thus the futures of manned and robotic spaceflight are intertwined.

A final interesting point concerns the “other,” forgotten NRO telescope. It is identical to its high-profile sibling apart from “imperfections” in its manufacturing, but in a recent interview a NASA executive conceded it could still be used for space missions. At present, logic would have it serve as a backup for WFIRST. Could it, too, be the centrepiece of an exoplanet mission, one of the Probe concepts perhaps, especially the transit spectroscopy mission, where mirror quality is less important?

As with WFIRST, its large aperture would dramatically increase the potency of any mission relative to a bespoke mirror and deliver a flagship mission at Probe costs. It would be a bonus if, like WFIRST, it too were launched next decade; as with Hubble and JWST, a period of overlap with JWST would provide great synergy, the combined light-gathering capacity of the two telescopes allowing greater spectroscopic characterisation of interesting targets provided by missions like TESS. JWST's workload could also be relieved, critically extending its active lifespan. This is supposition only at this point; I don't think NASA is sure what to do with the mirror, though Probe funding could represent a way of using it without diverting additional funds from elsewhere.
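To give a rough sense of what combined light-gathering capacity means here, the minimal sketch below compares the collecting areas of a 2.4m NRO-class mirror and a 6.5m JWST-class aperture. It treats both as idealised, unobstructed circular mirrors, which is a simplification (the real JWST primary is segmented and partially obstructed), so the numbers are illustrative only.

```python
import math

def collecting_area(diameter_m: float) -> float:
    """Collecting area of an idealised, unobstructed circular mirror (m^2)."""
    return math.pi * (diameter_m / 2) ** 2

nro_area = collecting_area(2.4)    # NRO-class / WFIRST-class mirror
jwst_area = collecting_area(6.5)   # JWST-class aperture (idealised)

print(f"2.4m mirror: {nro_area:5.1f} m^2")
print(f"6.5m mirror: {jwst_area:5.1f} m^2")
print(f"Combined:    {nro_area + jwst_area:5.1f} m^2 "
      f"(~{100 * nro_area / jwst_area:.0f}% more collecting area than JWST alone)")
```

On these simple numbers the smaller mirror adds on the order of a seventh of JWST's collecting area; arguably its bigger contribution would be the JWST observing time it frees up for targets that demand the larger aperture.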

When all is said and done, the deciding factors are likely to be JWST and the evidence collected from exoplanet Probe missions. JWST is five years overdue, five billion dollars overspent and laden with 162 moving parts, yet it will be stationed almost a million kilometres away. It has simply got to work, and work well, if there is to be any chance of other big space telescopes. Be nervous and cross your fingers when it launches in late 2018. Meanwhile, enjoy TESS and, with luck, WFIRST and the other Probe missions, which should be more than enough to keep everyone interested even before the ground-based ELT reinforcements arrive with their high-dispersion spectroscopy, which in combination with their own coronagraphs may also characterise habitable exoplanets. These planets, and the success of the technology that finds them, will be key to the development of the next big space telescope, if there is to be one.

Capturing public interest will be central to this, and we have seen just how much astrophysics missions can achieve in this regard with the recent high-profile successes of Rosetta and New Horizons. With ongoing innovation and the exoplanet missions of the next decade, this could usher in a golden era of exoplanet science. A final, often forgotten facet of space telescopes, and one central to the HDST concept, is observing Solar System bodies from Mars out to the Kuiper belt. Given the success of New Horizons, it wouldn't be a surprise to see a similar future flyby of Uranus, but it gives some idea of the sheer potency of an HDST that it could resolve features down to roughly 300 km across even in the outer Solar System. It could clearly image the icy “plumes” of Europa and Enceladus, especially in the UV, where the shorter wavelength allows the telescope's best resolving power, which illustrates the need for an ultraviolet capability on the instrument.
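To put those resolution figures in context, here is a minimal sketch of the diffraction limit for a 12m-class aperture using the standard Rayleigh criterion (θ ≈ 1.22 λ/D), with illustrative wavelengths and distances of my own choosing; the exact numbers for any real HDST design would depend on its final aperture and instruments.

```python
import math

AU_M = 1.496e11  # one Astronomical Unit in metres

def resolution_km(aperture_m: float, wavelength_m: float, distance_au: float) -> float:
    """Smallest resolvable feature (km) at a given distance, Rayleigh criterion."""
    theta = 1.22 * wavelength_m / aperture_m        # diffraction limit, radians
    return theta * distance_au * AU_M / 1e3         # linear scale at the target, km

aperture = 12.0  # metres, HDST-class primary (illustrative)
for label, wavelength in [("UV  (300 nm)", 300e-9), ("Vis (550 nm)", 550e-9)]:
    for body, dist in [("Uranus (~19 AU)", 19), ("Kuiper belt (~40 AU)", 40)]:
        print(f"{label} at {body}: ~{resolution_km(aperture, wavelength, dist):.0f} km")
```

On these assumptions the visible-light limit at Kuiper belt distances comes out at a few hundred kilometres, consistent with the figure quoted above, while moving to the UV roughly halves the smallest resolvable feature, which is why an ultraviolet capability matters for Solar System work.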

By 2030 we are likely to know of tens of thousands of exoplanets, many characterised and even imaged, and, who knows, maybe some exciting hints of biosignatures warranting the kind of detailed examination only a large space telescope can deliver.

Plenty to keep Centauri Dreams going, for sure, and perhaps to help us realise our position in the Universe.


Further reading

Dalcanton, Seager et al., “From Cosmic Birth to Living Earths: The Future of UVOIR Space Astronomy.” Full text.

Swain, Redfield et al., “HABX2: A 2020 mission concept for a flagship at modest cost.” A white paper response to the Cosmic Origins Program Analysis Group call for Decadal 2020 Science and Mission concepts. Full text.