Centauri Dreams
Imagining and Planning Interstellar Exploration
Extraterrestrial Life: The Giants are Coming…
Finding a biological marker in the atmosphere of an exoplanet is a major goal, but as Ignas Snellen argues in the essay below, space-based missions are not the only way to proceed. A professor of astronomy at Leiden University in The Netherlands, Dr. Snellen makes a persuasive case that technologies like high dispersion spectroscopy and high contrast imaging are at their most effective when deployed at large observatories on the ground. A team of European observers he led has already used these techniques to determine the eight-hour rotation rate of Beta Pictoris b. We’ll need carefully conceived space missions to study those parts of the spectrum inaccessible from the ground, but these will find powerful synergies with the next generation of giant Earth telescopes planned for operations in the 2020s.
by Ignas Snellen
While I was deeply involved in my PhD project, studying the active centers of distant galaxies, a real scientific revolution was unfolding in a very different field of astronomy. In the mid-1990s the first planets were found to orbit stars other than our Sun. For several years I managed to ignore it. Not impeded by any knowledge of the field, I was happy to join the many skeptics in dismissing the early results. But soon they could be ignored no more. And when the first transiting planet was found, and a little later its atmosphere detected, I radically changed research fields and threw myself, like many others, into exoplanet research. More than a decade later the revolution is still going strong.
DARWIN, TPF, and SIM
Not all scientific endeavors were successful during this twenty-year period. Starting soon after the first exoplanet discoveries, enormous efforts were put into the design of (and into securing political support for) a spacecraft that could detect potential biomarker gases in the atmospheres of planets in nearby systems. European astronomers concentrated on DARWIN. This mission concept was composed of four to five free-flying spacecraft carrying out high-resolution imaging using nulling interferometry, in which the starlight collected by the different telescopes is combined in such a way that it cancels out on-axis light, leaving the potential off-axis planet light intact. After a series of studies over more than a decade, in 2007 the European Space Agency stopped all DARWIN developments – it was too difficult. Over the same time period, several versions of the Terrestrial Planet Finder (TPF) were proposed to NASA, including a nulling interferometer and a coronagraph. The latter uses a smart optical design to strongly reduce the starlight while letting any planet light pass through. These projects, too, were subsequently cancelled. Arguably an even bigger anticlimax was the Space Interferometry Mission (SIM), which was to hunt for Earth-mass planets in the habitable zones of nearby stars using astrometry. After being postponed several times, it was finally cancelled in 2010.
How pessimistic should we be?
Enormous amounts of people’s time and energy were spent on these projects, costing hundreds of millions of dollars and euros. A real pity, considering all the other exciting projects that could have been funded instead. We should set more realistic goals and learn from highly successful missions such as NASA’s Kepler, which was conceived and developed during that same period. A key aspect of the adoption of Kepler as a NASA space mission was the demonstration of technological readiness through ground-based experiments (by Bill Borucki and friends). A mission gets approved only if it is thought to be a guaranteed success. It is this aspect that killed DARWIN and TPF, and it is this aspect that worries me about new, very smart spacecraft concepts such as the large external occulter of the New Worlds Mission. Maybe I am just not enough of a (Centauri) dreamer.
In any case, lead times of large space missions, as the Kepler story has shown, are huge. This implies that it is highly unlikely that within the next 25 years we will have a space mission that will look for biomarker gases in the atmospheres of Earth-like planets. If I am lucky I will still be alive to see it happen. My idea is – let’s start from the ground!
The ground-based challenge
The first evidence for extraterrestrial life will come from the detection of so-called biomarkers – absorption from gases that are only expected in an exoplanet atmosphere when produced by biological processes. The prime examples of such biomarkers are oxygen and ozone, as seen in the Earth’s atmosphere. Observing these gases in exoplanet atmospheres will not be the ultimate proof of extraterrestrial life, but it will be a first step. These observations require high-precision spectral photometry, which is very challenging to do from the ground. First of all, our atmosphere absorbs and scatters light. This is a particular problem for observations of Earth-like planets, because their spectra will show absorption bands at the same wavelengths as the Earth’s atmosphere. In addition, turbulence in our atmosphere distorts the light that enters ground-based telescopes. The light therefore does not arrive as perfect wavefronts, hampering high-precision measurements. Furthermore, when objects are observed over the course of a night, their light path through the Earth’s atmosphere changes, as does the way starlight enters an instrument, making stability a big issue. These are the main reasons why many exoplanet enthusiasts thought that it would be impossible to ever probe exoplanet atmospheres from the ground.
The technique
Work over the last decade has shown that one particular ground-based technique – high dispersion spectroscopy (HDS) – is very suitable for detecting absorption features in exoplanet atmospheres. The dispersion of a spectrograph is a measure of the ‘spreading’ of different wavelengths into a spectrum of the celestial object. Space telescopes such as the Hubble Space Telescope (HST), Spitzer, and the future James Webb Space Telescope (JWST) carry instruments capable of low to medium dispersion spectroscopy, where the incoming light can be measured at typically 1/100th to 1/1000th of a wavelength. With HDS, precisions of 1/100,000th of a wavelength are reached – about two orders of magnitude higher than from space. For two reasons this can practically only be done from the ground: 1) the physical size of a spectrograph scales with its dispersion, meaning that HDS instruments are generally too big to launch into space; 2) at high dispersion the light is spread very thinly, requiring a lot of photons to do it right, and hence a large telescope. For example, the hot Jupiter tau Bootis b required 3 nights on the 8m Very Large Telescope to measure carbon monoxide in its atmosphere. Scaling this to the HST (pretending it had an HDS instrument), it would have cost on the order of 200 hours of observing time – more than was spent on the Hubble Deep Field. Hence, HDS is the sole domain of ground-based telescopes.
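To see where that ‘order of 200 hours’ comes from, here is a back-of-the-envelope sketch in Python (my own illustration, assuming photon collection scales simply with mirror area and that a VLT night yields roughly eight hours of integration; neither assumption is spelled out in the essay):

# Rough scaling of the tau Bootis b observation from the 8.2m VLT to the 2.4m HST.
d_vlt, d_hst = 8.2, 2.4            # primary mirror diameters in metres
vlt_hours = 3 * 8                  # three VLT nights at ~8 hours each (assumed)

area_ratio = (d_vlt / d_hst) ** 2  # ~12x more collecting area at the VLT
hst_hours = vlt_hours * area_ratio # time HST would need for the same photon count

print(f"Collecting-area ratio VLT/HST: {area_ratio:.1f}")
print(f"Equivalent HST time: {hst_hours:.0f} hours")   # ~280 h, i.e. 'order of 200 hours'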
The high dispersion is key to overcoming the challenges that arise from observing through the Earth’s atmosphere. At a dispersion of 1/100,000th of a wavelength, HDS measurements are sensitive to Doppler effects due to the orbital motion of the planet. For example, the Earth moves around the Sun at nearly 30 km/sec, while hot Jupiters have velocities of 150 km/sec or more. This means that during an observation, the radial component of the orbital velocity of a planet can change by tens of km/sec. While this makes absorption features from the planet move in wavelength, any Earth-atmospheric and stellar absorption lines remain stationary. Clever data analysis techniques can filter out all the stationary components of a time-sequence of spectra, while the moving planet signal is preserved. Ultimately, the signals from numerous individual planet lines can be added together to boost the planet signal using the cross-correlation technique – weighting the contribution from each line by its expected strength.
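The description above amounts to an algorithm, and a minimal toy sketch may make it concrete (this is my own illustration, not the authors’ actual pipeline; the array names and the simple median normalisation are assumptions):

import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def remove_stationary(spectra):
    # spectra: 2-D array of shape (n_exposures, n_wavelengths).
    # Dividing each wavelength channel by its median over time suppresses
    # features fixed in wavelength (star, Earth's atmosphere), while the
    # Doppler-shifting planet lines survive in the residuals.
    return spectra / np.median(spectra, axis=0) - 1.0

def cross_correlate(residuals, wave, template_wave, template_flux, rv_grid):
    # Shift a model template over a grid of trial radial velocities and
    # correlate it with each residual spectrum; strong template lines
    # automatically receive more weight, so the many weak planet lines
    # add up coherently at the planet's true velocity.
    ccf = np.zeros((residuals.shape[0], rv_grid.size))
    for j, rv in enumerate(rv_grid):
        shifted = np.interp(wave, template_wave * (1.0 + rv / C_KM_S), template_flux)
        shifted -= shifted.mean()
        for i, res in enumerate(residuals):
            ccf[i, j] = np.dot(res - res.mean(), shifted)
    return ccf

# Usage (shapes only, no real data):
# cc = cross_correlate(remove_stationary(spectra), wave,
#                      model_wave, model_flux, np.arange(-200.0, 201.0, 1.0))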
Image: Illustration of the HDS technique, with the moving planet lines in purple.
So why does this work? Although the Earth’s atmosphere has a profound influence on the observed spectrum, the absorption and scattering processes are well behaved on scales of 1/100,000th of a wavelength and can be calibrated out. The signal of the planet can be preserved even if variations in the Earth’s atmosphere are many orders of magnitude larger. In this way starlight reflected off a planet’s atmosphere can be probed, and so can a planet’s transmission spectrum – when a planet crosses the face of a star and starlight filters through its atmosphere. In addition, a planet’s direct thermal emission spectrum can be observed. This is particularly powerful in the infrared. And it works well! In the optical, absorption from sodium has been found in the transmission spectra of several exoplanets. In the near-infrared, carbon monoxide and water vapor have been seen in both the transmission spectra and the thermal emission spectra of several hot Jupiters – on par with the best observations from space. In the next two years new instruments will come online (such as CRIRES+ and ESPRESSO on the VLT) that will take this significantly further – allowing a complete inventory of the spectroscopically active molecules in the upper atmospheres of hot Jupiters, and extending this research to significantly cooler and smaller planets.
One step beyond
There is more. The HDS technique makes no attempt to spatially separate the planet light from that of the much brighter star – it is only filtered out using its spectral features. Hot Jupiters are much too close to their parent stars to be seen separately anyway. However, planets in wider orbits can also be directly imaged, using high-contrast imaging (HCI) techniques (also in combination with coronagraphy). This technique is really starting to flourish using modern adaptive optics, in which atmospheric turbulence is compensated by fast-moving deformable mirrors. A few dozen planets have already been discovered using HCI, and new imagers like SPHERE on the VLT and GPI on Gemini, which came online last year, hold great promise. What I am very excited about is that HDS combined with HCI (let’s call it HDS+HCI) can be even more powerful. While HDS is completely dominated by noise from the host star, HCI strongly reduces the starlight at the planet position – increasing the sensitivity of the spectral separation technique used by HDS by orders of magnitude. Last year we showed the power of HDS+HCI by measuring, for the first time, the spin velocity of an extrasolar planet, showing Beta Pictoris b to have a length of day of eight hours. [For more on this work, see Night and Day on Beta Pictoris b].
Image: HDS+HCI observations of beta Pictoris b.
The giants are coming
Both the US and Europe are building a new generation of telescopes that can truly be called giants. The Giant Magellan Telescope (GMT) will consist of seven 8.4m mirrors, equivalent to a single 24.5m diameter telescope. The Thirty Meter Telescope (TMT) will be as large as the name suggests, while the European Extremely Large Telescope (E-ELT) will be the largest, with an effective diameter of 39m. All three projects are in a race with each other and hope to be fully operational in the mid-2020s.
Size is everything in this game – in particular for HDS and HDS+HCI observations. HDS benefits from the number of photons that can be collected, which scales with the diameter squared. Taking other effects into account as well, the E-ELT will be >100 times faster than the VLT (in particular using the first-light instrument METIS, and HIRES). This will bring us near the range needed to target molecular oxygen in the atmospheres of Earth-like planets that transit nearby red dwarf stars. We have to be somewhat lucky for such nearby transiting systems to exist, but simulations show that the smaller host star makes the transmission signal of molecular oxygen from an Earth-size planet similar to the carbon monoxide signals we have already detected in hot Jupiter atmospheres – it is just that these systems will be much fainter than tau Bootis, requiring significantly bigger telescopes. The technology is already here; it is all about collecting enough photons. If even the ELTs turn out not to be large enough, the problem could be solved in a different way: HDS observations of bright stars do not require precisely shaped mirrors, so arrays of low-precision light collectors could do the job, but this is something for the more distant future.
Image: Artist impression of the E-ELT – ready in 2024! (credit: ESO).
Even more promising are the high-contrast imaging capabilities of the future ELTs. Bigger telescopes not only collect more photons, they also see more sharply. This makes their capability to see faint planets in the glare of bright stars scale with telescope size up to the fifth power, making the E-ELT more than 1,000 times faster than the VLT. Excitingly, rocky planets in the habitable zones of nearby stars come within reach. Again, simulations show that their thermal emission can be detected around the nearest stars, while HDS+HCI at optical wavelengths can target their reflectance spectra, possibly even including molecular oxygen signatures.
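For the record, the quoted speed-ups follow from simple scaling laws; here is a sketch of the arithmetic (my own, assuming pure D^2 and D^5 scaling and ignoring the instrument-specific factors the author alludes to):

d_eelt, d_vlt = 39.0, 8.2            # effective apertures in metres

hds_gain = (d_eelt / d_vlt) ** 2     # ~23x from collecting area alone;
                                     # other effects push the HDS gain past 100x
hci_gain = (d_eelt / d_vlt) ** 5     # ~2400x when sharper imaging also
                                     # suppresses starlight at the planet position

print(f"HDS (D^2) gain:     {hds_gain:.0f}x")
print(f"HDS+HCI (D^5) gain: {hci_gain:.0f}x")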
Realistic space missions
Whatever happens with space-based exoplanet astronomy, ground-based telescopes will push their way forward towards characterizing Earth-like planets. This does not mean there is no need for space missions. First of all, I have not done justice to the fantastic, groundbreaking exoplanet science the JWST is going to provide. Secondly, a series of transit missions, TESS from NASA (launch 2017) and CHEOPS and PLATO from ESA (launches in 2018 and 2024), will discover all nearby transiting planet systems, a crucial prerequisite for much of the science discussed here.
Above all, ground-based measurements will not be able to provide a complete picture of a planet’s atmosphere, simply because large parts of the planet’s spectrum are not accessible from the ground. This means that the ultimate proof of extraterrestrial life will likely have to come from a DARWIN- or TPF-type space mission. Imagine how a ground-based detection of, say, water in an Earth-like atmosphere would open up political possibilities. But the right timing for such missions is of utmost importance. Aiming too high and too early means that lots of time and money will be wasted, at the expense of progress in exoplanet science. It is good to dream, but we should not forget to stay realistic.
Further reading
Snellen et al. (2013) Astrophysical Journal 764, 182: Finding Extraterrestrial Life Using Ground-based High-dispersion Spectroscopy (http://xxx.lanl.gov/abs/1302.3251).
Snellen et al. (2014), Nature 509, 63: Fast spin of the young extrasolar planet beta Pictoris b (http://xxx.lanl.gov/abs/1404.7506).
Snellen et al. (2015), Astronomy & Astrophysics 576, 59: Combining high-dispersion spectroscopy with high contrast imaging: Probing rocky planets around our nearest neighbors (http://xxx.lanl.gov/abs/1503.01136).
The Closed Loop Conundrum
In Stephen Baxter’s novel Ultima (Roc, 2015), Ceres is moved by a human civilization in a parallel universe toward Mars, the immediate notion being to use the dwarf planet’s volatiles to help terraform the Red Planet. Or is that really the motive? I don’t want to give too much away (and in any case, I haven’t finished the book myself), but naturally the biggest question is how to move an object the size of Ceres into an entirely new orbit.
Baxter sets up an alternate-world civilization that has discovered energy sources it doesn’t understand but can nonetheless use for interstellar propulsion and the numerous demands of a growing technological society, though one that is backward in comparison to our own. That juxtaposition is interesting because we tend to assume technologies emerge at the same pace, supporting each other. What if they don’t, or what if we simply stumble upon a natural phenomenon we can tap into without being able to reproduce its effects through any known science?
Something of the same juxtaposition occurs in Kim Stanley Robinson’s Aurora (Orbit, 2015), where we find a society that has the propulsion technologies to enable travel at a pace that can get a worldship to Tau Ceti in a few human generations. We’ve discussed Aurora in these pages recently, looking at some of the problems in its science — I’ll let those better qualified than myself have the final word on those — but what I found compelling about the novel was its depiction of what happens aboard that worldship.
Because it’s not at all inconceivable that we might solve the propulsion problem before we solve the closed-loop life support problem, and that is more or less what we see happening in Aurora. A worldship could house habitats of choice, and if you think of some visions of O’Neill cylinders, you’ll recall depictions that made space living seem almost idyllic. But Robinson shows us a ship that’s simply too small for its enclosed ecologies to flourish. Travel between the stars in such a ship would be harrowing, as indeed it turns out to be in the book. Micro-managing a biosphere is no small matter, and we have yet to demonstrate the ability.
Image: The O’Neill cylinder depicted here is one take on what might eventually become an interstellar worldship. Keeping its systems and crew healthy is a skill that will demand space-based experimentation, and plenty of it. Credit: Rick Guidice/NASA.
In Baxter’s Ultima, what happens with Ceres is compounded by the fact that just as humans don’t fully understand their power source, they also have to deal with an artificial intelligence whose motives are opaque. Put the two together and you can see why the movement of Ceres to a new position in the Solar System takes on an aura of menace. Various notions of a ‘singularity’ posit a human future in which our computers are creating entirely new generations of themselves that are designed according to principles we cannot begin to fathom. What happens then, and how do we ensure that the resulting machines want us to survive?
With Ceres very much in mind, I was delighted to receive the new imagery from the Dawn spacecraft at the present-day Ceres (in our non-alternate reality), showing us the bright spots that have commanded so much attention. Here we’re looking at a composite of two different images of Occator crater, one made with a short exposure to capture as much detail as possible, the other a longer exposure that best captures the background surface.
Image: Occator crater on Ceres, home to a collection of intriguing bright spots. The images were obtained by Dawn during the mission’s High Altitude Mapping Orbit (HAMO) phase, from which the spacecraft imaged the surface at a resolution of about 140 meters per pixel. Credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA.
We’re looking at the view from 1470 kilometers, with images offering three times better resolution than we had from the spacecraft’s previous orbit in June. Two eleven-day cycles of surface mapping have now been completed at this altitude, with the third beginning on September 9. All of Ceres is to be mapped six times over the next two months, with each cycle consisting of fourteen orbits. Changing angles in each mapping cycle will allow the Dawn researchers to put together 3-D maps from the resulting imagery.
So we’re learning more about the real Ceres every day. Given our lack of Baxter’s ‘kernels’ — the enigmatic power sources that energize his future civilization as well as the unusual but related culture they encounter — we may do better to consider this dwarf planet as a terraforming possibility in its own right, rather than a candidate for future use near Mars. On that score, I remind you of Robert Kennedy, Ken Roy and David Fields, who have written up a terraforming concept that could be applied to small bodies in or outside of the habitable zone (see Terraforming: Enter the ‘Shell World’ for background and citation).
It will be through myriad experiments in creating sustainable ecologies off-world that we finally conquer the life support problem. It always surprises me that it has received as little attention as it has in science fiction, given that any permanent human presence in space depends upon robust, recyclable systems that reliably sustain large populations. Our earliest attempts at closed-loop life support (think of the BIOS-3 experiments in the 1970s and 80s, and the Biosphere 2 attempt in the 1990s) have revealed how tricky such systems are. Robinson’s faltering starship in Aurora offers a useful cautionary narrative. We’ll need orbital habitats of considerable complexity as we learn how to master the closed-loop conundrum.
Nitrogen Detection in the Exoplanet Toolkit
Extending missions beyond their initial goals is much on my mind as we consider the future of New Horizons and its possible flyby past a Kuiper Belt Object. But this morning I’m also reminded of EPOXI, which has given us views of the Earth that help us study what a terrestrial world looks like from a distance, characterizing our own planet as if it were an exoplanet. You’ll recall that EPOXI (Extrasolar Planet Observation and Deep Impact Extended Investigation) is a follow-on to another successful mission, the Deep Impact journey to comet Tempel 1.
As is clear from its acronym, EPOXI combined two extended missions, one following up the Tempel 1 studies with a visit to comet Hartley 2 (this followed an unsuccessful plan to make a flyby past comet 85P/Boethin, which proved to be too faint for accurate orbital calculations). The extrasolar component of EPOXI was called EPOCh (Extrasolar Planet Observation and Characterization), using the craft’s high resolution telescope to make photometric observations of stars with known transiting exoplanets. But the spacecraft produced observations of Earth that have been useful for exoplanet studies, as well as recording some remarkable views.
Image: Four images from a sequence of photos taken by the Deep Impact spacecraft when it was 50 million km from the Earth. Africa is at right. Notice how much darker the moon is compared to Earth. It reflects only as much light as a fresh asphalt road. Credit: Donald J. Lindler, Sigma Space Corporation, GSFC, Univ. Maryland, EPOCh/DIXI Science Teams.
Although communications with EPOXI were lost in the summer of 2013, the mission lives on in the form of the data it produced, some of which are again put to use in a new paper out of the University of Washington. Edward Schwieterman, a doctoral student and lead author on the work in collaboration with the university’s Victoria Meadows, reports on Earth observations from EPOXI that have been compared to three-dimensional planet-modeling data from the university’s Virtual Planet Laboratory. The comparison has allowed confirmation of the signature of nitrogen collisions in our atmosphere, a phenomenon that should have wide implications.
The presence of nitrogen is significant because it can help us determine whether an exoplanet’s surface pressure is suitable for the existence of liquid water. Moreover, if we find nitrogen and oxygen in an atmosphere and are able to measure the nitrogen accurately, we can use the nitrogen as a tool for ruling out non-biological origins for the oxygen. But nitrogen is hard to detect, and the best way to find it in a distant planet’s atmosphere is to measure how nitrogen molecules collide with each other. The paper argues that these ‘collisional pairs’ create a signature we can observe, something the team has modeled and that the EPOXI work has confirmed.
Nitrogen pairs, written as (N2)2, are visible in a spectrum at shorter wavelengths, giving us a useful tool. The paper explains how this works:
A comprehensive study of a planetary atmosphere would require determination of its bulk properties, such as atmospheric mass and composition, which are crucial for ascertaining surface conditions. Because (N2)2 is detectable remotely, it can provide an extra tool for terrestrial planet characterization. For example, the level of (N2)2 absorption could be used as a pressure metric if N2 is the bulk gas, and break degeneracies between the abundance of trace gases and the foreign pressure broadening induced by the bulk atmosphere. If limits can be set on surface pressure, then the surface stability of water may be established if information about surface temperature is available.
It’s interesting as well that for half of Earth’s geological history there was little oxygen present, despite the presence of life for a substantial part of this time. The paper argues that, given Earth’s example, there may be habitable and inhabited planets without detectable O2. Moreover, atmospheres with low abundances of gases like N2 and argon are more likely to accumulate O2 abiotically, giving us a false positive for life.
A water dominated atmosphere lacks a cold trap, allowing water to more easily diffuse into the stratosphere and become photo-dissociated, leaving free O2 to build up over time. Direct detection of N2 through (N2)2 could rule out abiotic O2 via this mechanism and, in tandem with detection of significant O2 or O3, potentially provide a robust biosignature. Moreover, the simultaneous detection of N2, O2, and a surface ocean would establish the presence of a significant thermodynamic chemical disequilibrium (Krissansen-Totton et al. 2015) and further constrain the false positive potential.
Combining the EPOXI data with the Virtual Planetary Laboratory modeling demonstrates that the nitrogen collisions apparent in our own atmosphere should likewise be apparent in exoplanet studies by future space telescopes. EPOXI, then, demonstrated that nitrogen collisions could be found in a planetary spectrum, and the VPL work modeling a variety of nitrogen abundances in an exoplanet atmosphere shows how accurately the gas can be measured. “One of the interesting results from our study,” adds Schwieterman, “is that, basically, if there’s enough nitrogen to detect at all, you’ve confirmed that the surface pressure is sufficient for liquid water, for a very wide range of surface temperatures.”
The paper is Schwieterman et al., “Detecting and Constraining N2 Abundances in Planetary Atmospheres Using Collisional Pairs,” The Astrophysical Journal Vol. 810, No. 1 (28 August 2015). Abstract / preprint.
New Horizons: River of Data Commences
Hard to believe it’s been 55 days since the New Horizons flyby. When the event occurred, I was in my daughter’s comfortable beach house, working at a table in the living room with a laptop in front of me monitoring numerous feeds. My grandson, sitting to my right with his machine, was tracking social media on the event and downloading images. When I was Buzzy’s age, Scott Carpenter’s Mercury flight was in the works, and with all of Gemini and Apollo ahead, I remember the raw excitement as the space program kept pushing our limits. I had a sense of generational hand-off as I worked New Horizons with my similarly enthusiastic grandson.
Carpenter took the second manned orbital flight in the Mercury program when Deke Slayton had to step down because of his heart condition, and the flight may be most remembered for the malfunction in Carpenter’s pitch horizon scanner, leading to the astronaut’s taking manual control of the reentry, which in turn led to overshooting the splashdown point by 400 kilometers. Carpenter’s status during reentry was unknown and fear rose as forty minutes passed before his capsule could be located. Exactly how the overshoot happened remains controversial, at least in some quarters.
But back to New Horizons, which hit its targets so precisely that no controversy is necessary. The intensive downlinking of tens of gigabits of data is now fully launched, with the prospect of about a year before we have the entire package. Principal Investigator Alan Stern (SwRI) explains:
“This is what we came for – these images, spectra and other data types that are going to help us understand the origin and the evolution of the Pluto system for the first time. And what’s coming is not just the remaining 95 percent of the data that’s still aboard the spacecraft – it’s the best datasets, the highest-resolution images and spectra, the most important atmospheric datasets, and more. It’s a treasure trove.”
Image: This close-up image of a region near Pluto’s equator captured by New Horizons on July 14 reveals a range of youthful mountains rising as high as 3.4 kilometers above the surface of the dwarf planet. This iconic image of the mountains, informally named Norgay Montes (Norgay Mountains) was captured about 1 ½ hours before New Horizons’ closest approach to Pluto, when the craft was 77,000 kilometers from the surface of the icy body. The image easily resolves structures smaller than 1.6 kilometers across. The highest resolution images of Pluto are still to come, with an intense data downlink phase commencing on Sept. 5. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.
Given images as rich as the above, the prospect of significantly more detailed views will keep the coming months lively, and after that, we have the possibility of a Kuiper Belt Object flyby in a New Horizons extended mission. Remember that since the flyby, the data being returned has been information collected by the energetic particle, solar wind and space dust instruments. Now we move into higher gear, although it’s a pace that still demands patience. Given the distance of the spacecraft from Earth (as I write, the craft is 65,512,553 kilometers beyond Pluto, and 33.36 AU from the Sun), the downlink rate is no more than 1-4 kilobits per second.
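A quick sanity check on those numbers (an illustration only; the real data volume and downlink scheduling are more complicated than this):

data_bits   = 50e9            # 'tens of gigabits' -- assume ~50 Gb for the estimate
rate_bps    = 2e3             # middle of the quoted 1-4 kilobits per second
distance_km = 33.36 * 1.496e8 # heliocentric distance quoted above, in km
c_km_s      = 299792.458

downlink_days  = data_bits / rate_bps / 86400
light_time_hrs = distance_km / c_km_s / 3600

print(f"Downlink time at 2 kbps: ~{downlink_days:.0f} days")   # on the order of a year
print(f"One-way light time:      ~{light_time_hrs:.1f} hours") # ~4.6 hours (from the Sun; Earth is slightly closer)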
Image: All communications with New Horizons – from sending commands to the spacecraft, to downlinking all of the science data from the historic Pluto encounter – happen through NASA’s Deep Space Network of antenna stations in (clockwise, from top left) Madrid, Spain; Goldstone, California, U.S.; and Canberra, Australia. Even traveling at the speed of light, radio signals from New Horizons need more than 4 ½ hours to travel the 4.83 billion kilometers between the spacecraft and Earth. Credit: NASA.
New Horizons is sometimes described as the fastest spacecraft ever launched, which isn’t correct given the Helios probes, launched in 1974 and 1976, that reached 70 kilometers per second at closest approach to the Sun. Helios II, just slightly faster than its counterpart, can be considered the fastest man-made object in history. But it’s true that New Horizons left Earth traveling outward faster than any previous vehicle. Will it catch up with the Voyagers? No, because although it left Earth faster than either Voyager, it didn’t have the benefit of full-fledged gravitational assists around both Jupiter and Saturn. While Voyager 1 has a heliocentric speed of 17.05 kilometers per second, New Horizons is now at 14.49 kilometers per second.
Unprocessed imagery from New Horizons’ Long Range Reconnaissance Imager (LORRI) becomes available each Friday at the LORRI Images from the Pluto Encounter page, with the next batch due on September 11. And although it’s been widely published, I do want to get the Pluto flyby animation up on Centauri Dreams, and note that Stuart Robbins (SwRI), who created the fly-through sequence, has written up the process in To Pluto and Beyond. Robbins notes that this is a system we’re unlikely to revisit in our lifetimes, but the good news is that we still have an operational craft with the potential for at least one KBO flyby.
The Shape of Space Telescopes to Come
Planning and implementing space missions is a long-term process, which is why we’re already talking about successors to the James Webb Space Telescope, itself a Hubble successor that has yet to be launched. Ashley Baldwin, who tracks telescope technologies deployed on the exoplanet hunt, here looks at the prospects not just for WFIRST (Wide-Field InfraRed Survey Telescope) but for the recently proposed High Definition Space Telescope (HDST), which could be a major factor in studying exoplanet atmospheres in the 2030s. When he is not pursuing amateur astronomy at a very serious level, Dr. Baldwin serves as a consultant psychiatrist at the 5 Boroughs Partnership NHS Trust (Warrington, UK).
by Ashley Baldwin
“It was the best of times, it was the worst of times…” Dickens apart, the future of exoplanet imaging could be about two telescopes rather than two cities. Consider the James Webb Space Telescope (JWST) and the Wide-Field InfraRed Survey Telescope (WFIRST), which as we shall see have the power not just to see a long way but also to determine any big-telescope future. JWST, or rather its performance, will determine whether there is even to be a big-telescope future. The need for a big telescope, and what its function should be, are the subject of increasing debate as the next NASA ten-year roadmap, the Decadal Survey for 2020, approaches.
NASA will form Science Definition “focus” groups from the full range of its astrophysics community to determine the shape of this map. The Exoplanet Program Analysis Group (ExoPAG) is a dedicated group of exoplanetary specialists tasked with soliciting and coordinating community input to NASA’s exoplanet exploration programme, which works through missions like Kepler, the Hubble Space Telescope (HST) and, more recently, Spitzer. They have produced an outline of their vision in response to NASA’s soliciting of ideas, which is addressed here in conjunction with a detailed look at some of the central elements, by way of explaining some of the complex features that exoplanet science requires.
Various members of ExoPAG have been involved in the exoplanet arm of the JWST and most recently in the NASA dark energy mission, which, with the adoption of the “free” NRO 2.4m mirror and a coronagraph, is increasingly becoming an ad hoc exoplanet mission too. This mission has also been renamed: Wide-Field InfraRed Survey Telescope (WFIRST), a name that will hopefully go down in history! More about that later.
The Decadal Survey and Beyond
As we build towards the turn of the decade, though, the next Decadal Survey looms. This is effectively a road map of NASA’s plans for the coming decade. Never has there been a decade as important for exoplanet science if it is to build on Kepler’s enormous legacy. To date, over 4000 “candidate” planets have been identified and are awaiting confirmation by other means, such as the radial velocity technique. Recently twelve new planets have been identified in the habitable zones of their parent stars, all with Earth-like sizes. Why so many now? Sophisticated new software has been developed to automate the screening of the vast number of signals returned by Kepler, increasing the number of potential targets and, more importantly, becoming more sensitive to the smaller signals of Earth-sized planets.
So what is next? In general these days NASA can afford one “Flagship” mission. This will be WFIRST for the 2020s. It is not a dedicated exoplanet mission, but as Kepler and ground-based surveys return increasingly exciting data, WFIRST evolves. In terms of the Decadal Survey, the exoplanet fraternity has been asked to develop mission concepts within the still-available funds.
Three “Probe” class concepts — up to and above current Discovery mission cost caps but smaller than flagship-class missions — have been mooted, the first of which is developing a star-shade to accompany WFIRST. This, if you recall, is an external occulting device that sits tens of thousands of kilometers away from the telescope, between it and the parent star, blocking out starlight while allowing through the much dimmer accompanying planetary light and so making characterisation possible. A recent Probe concept, Exo-S, addressed this very issue and proposed either a small 1.1m dedicated telescope and star-shade, or the addition of a star-shade to a pre-existing mission like WFIRST. At that time, the “add on” option wasn’t deemed possible, as it was proposed to put WFIRST into a geosynchronous orbit where a star-shade could not function.
The ExoPAG committee have recently produced a consensus statement of intent in response to a NASA request for guidance on an exoplanet roadmap for incorporation into NASA’s generic version for Decadal Survey 2020. As stated above, this group consists of a mixture of different professionals and amateurs (astrophysicists, geophysicists, astronomers, etc) who advise on all things exoplanet including strategy and results. They have been asked to create two science definition teams representing the two schools of exoplanet thinking to contribute to the survey.
One suggestion involved placing WFIRST at the star-shade friendly Earth/Sun Lagrange 2 point (about 1.5 million kilometers from Earth, where the combined gravity of the Sun and Earth allows a spacecraft to keep station with the Earth in a relatively stable orbit). This, if it happens, represents a major policy change from the original geosynchronous orbit, and is very exciting because, unlike the current exoplanet coronagraph on the telescope, a star-shade of 34m diameter could image Earth-mass planets in the habitable zones of Sun-like stars. More on that below.
WFIRST at 2.4m will be limited in how much atmospheric characterisation it can perform, given its relatively small aperture and time-limited observation period (it is not a dedicated exoplanet mission and still has to do dark energy science). The mission can be expected to locate several thousand planets via conventional transit photometry as well as micro-lensing, and possibly even a few new Earth-like planets by combining its results with the ESA Gaia mission to produce accurate astrometry (position and mass in three dimensions) within 30 light years or so. There has even been a recent suggestion that exoplanet science, or at least the coronagraph, actually drives the WFIRST mission. A total turnaround if it happens, and a very welcome one.
The second Probe mission is a dedicated transmission spectroscopy telescope. It would be a telescope of around 1.5m with a spectroscope, fine guidance system and mechanical cooler, built to spectroscopically analyse the light of a distant star as it passes through the atmosphere of a transiting exoplanet. No image of the planet here, but the spectrum of its atmosphere tells us almost as much as seeing it. The bigger the telescope aperture, the better for seeing smaller planets with thinner atmospheric envelopes. Planets circling M-dwarfs make the best targets, as the planet-to-star size ratio here will obviously be the highest. The upcoming TESS mission is intended to provide such targets for the JWST, although even its 6.5m aperture will struggle to characterise atmospheres around all but the largest planets or perhaps, if lucky, a small number of “super-terrestrial” planets around M-dwarfs. It will be further limited by general astrophysics demand on its time. A Probe telescope would pick up where JWST left off and, although smaller, could compensate by being a dedicated instrument with greater imaging time.
The final Probe concept links to WFIRST and Gaia. It would involve a circa 1.5m class telescope as part of a mission that, like Gaia, observes multiple stars on multiple occasions to measure subtle variations in their position over time, determining the presence of orbiting planets by their effect on the star. Unlike radial velocity methods, it can accurately determine mass and orbital period down to Earth-sized planets around neighbouring stars. A similar concept called NEAT was proposed unsuccessfully for ESA funding, rejected despite being robust — a good summary is available through a Google search.
These parameters are obviously useful in their own right but, more importantly, provide targets for direct imaging telescopes like WFIRST rather than leaving the telescope to search star systems “blindly,” thus wasting limited time. At present the plan for WFIRST is to image pre-existing radial velocity planets to maximise searching, but nearby RV [radial velocity] planets are largely limited to the larger range of gas giants, and although important to exoplanetary science, they are not the targets that are going to excite the public or, importantly, Congress.
All of these concepts occur against the backdrop of the ESA PLATO transit mission and the new generation of super telescopes, the ELTs. Though ground based and limited by atmospheric interference, these will synergize perfectly with space telescopes, as their huge light-gathering capacity will allow high-resolution spectroscopy of suitable exoplanet targets identified by their space-based peers, especially if also combined with high quality coronagraphs.
Image: A direct, to-scale comparison between the primary mirrors of the Hubble Space Telescope, James Webb Space Telescope, and the proposed High Definition Space Telescope (HDST). In this concept, the HDST primary is composed of 36 1.7 meter segments. Smaller segments could also be used. An 11 meter class aperture could be made from 54 1.3 meter segments. Credit: C. Godfrey (STScI).
Moving Beyond JWST
So the 2020s have the potential to be hugely exciting. But simultaneously we are fighting a holding battle to keep exoplanet science at the top of the agenda and make a successful case for a large telescope in the 2030s. It should be noted that there is still an element within NASA that is unsure what the reaction to the discovery of Earth-like planets would be!
A series of “Probe” class missions will run in parallel with or before any flagship mission. No specific plans have been made for a flagship mission, but an outline review of its necessary requirements has been commissioned by the Association of Universities for Research in Astronomy (AURA) and released under the descriptive title “High Definition Space Telescope” (HDST). A smaller review has produced an outline for a dedicated exoplanet flagship telescope called HabEx. These have been proposed for the end of the next decade but have met resistance as being too close in time to the expensive JWST. As WFIRST is in effect a flagship mission (although never publicly announced as such), and NASA generally can afford one such mission per decade, any big telescope will have to wait until the 2030s at the earliest. Decadal 2020 and the exoplanet consensus and science definition groups contributing to it will basically have to play a “holding” role, keeping up the exoplanet case throughout the decade and using evidence from available resources to build a case for a subsequent large HDST.
The issue then becomes the launch vehicle’s upper stage “shroud,” or fairing width. The first version of the Space Launch System (SLS) is only about 8.5m. Ideally the shroud should be at least a meter wider than the payload to allow “give” under launch stresses, which is especially important for a monolithic mirror, where the best orientation is “face on.” Given the large stresses of launch, lightweight “honeycomb” versions of traditional mirrors cannot be used, and solid versions weigh in at 56 tonnes even before the rest of the telescope. For the biggest possible monolithic telescopes, at least, we will have to wait for the 10m-plus shroud and heavier lifting ability of the SLS or any other large launcher.
A star-shade on WFIRST via one of these Probe missions seems the best bet for a short-term driver of change. Internal coronagraphs on 2m class telescopes allow too little light through for spectroscopic characterisation of eta Earth targets, but star-shades will (provided their light enters the telescope optical train high enough up, if, like WFIRST, the plan is to have both internal and external coronagraphs). There will be a smaller inner working angle, too, to get at the habitable zones of later spectral type (K) stars. That’s if WFIRST ends up at L2, though L2 is talked about more and more.
The astrometry mission would be a dedicated version of the WFIRST/Gaia synergy, saving lots of eta Earth searching time. It should be doable within Probe funding, as the ESA NEAT mission concept came in at under that. NEAT fell through due to its formation flying element, but after PROBA 3 (a European solar coronagraphy mission that will in effect be the first dedicated “precision” formation flying mission) that issue should be resolved.
A billion dollars probably gets a decent transmission spectroscopy mission, which would have enough resolution to follow up some of the more promising TESS discoveries. Put these together and that’s a lot of exoplanet science, with a tantalising amount of habitability material, too. WFIRST’s status seems to be increasing all the time, and at one recent exoplanet meeting led by Gary Blackwood it was even stated (and highlighted) publicly that the coronagraph should LEAD the mission science. That’s totally at odds with previous statements that emphasised the opposite.
Other Probe concepts consider high-energy radiation such as X-rays, and though less relevant to exoplanets, the idea acknowledges the fact that any future telescopes will need to look at all facets of the cosmos and not just exoplanets. Indeed, competition for time on telescopes will become even more intense. Given the very faint targets that exoplanets present, it must be remembered that collecting adequate photons takes a lot of precious telescope time, especially for small, close-in habitable zone planetary targets.
The ExoPAG consensus represents a compromise between two schools of thought: Those who wish to prioritise habitable target planets for maximum impact, and those favouring a methodical analysis of all exoplanets and planetary system architecture to build up a detailed picture of what is out there and where our own system fits into this. All of these are factors that are likely to determine the likelihood of life, and both approaches are robust. I would recommend that readers consult this article and related material and reach their own conclusions.
Image: A simulated image of a solar system twin as seen with the proposed High Definition Space Telescope (HDST). The star and its planetary system are shown as they would be seen from a distance of 45 light years. The image here shows the expected data that HDST would produce in a 40-hour exposure in three filters (blue, green, and red). Three planets in this simulated twin solar system – Venus, Earth, and Jupiter – are readily detected. The Earth’s blue color is clearly detected. The color of Venus is distorted slightly because the planet is not seen in the reddest image. The image is based on a state-of-the-art design for a high-performance coronagraph (which blocks out starlight) that is compatible with a segmented aperture space telescope. Credit: L. Pueyo, M. N’Diaye (STScI).
Defining a High Definition Space Telescope
What of the next generation of “Super Space Telescope”? The options are all closely related and fall under the broad heading of High Definition Space Telescope (HDST). Such a telescope requires an aperture of between 10 and 12 metres minimum to have adequate light-gathering ability and resolution to carry out both exoplanet imaging and wider astrophysics, such as viewing extragalactic phenomena like quasars and related supermassive black holes. Regardless of specifics, these parameters demand absolute stability, with the telescope requiring picometer (10^-12 metre) levels of precision in order to function.
The telescope is diffraction limited at 500nm, right in the middle of the visible spectrum. The diffraction limit is effectively the wavelength at which any circular mirror gives its best angular resolution, the ability to discern detail. Angular resolution is governed by the ratio of the wavelength lambda (expressed in metres) to the telescope aperture D (also in metres), i.e. lambda/D; e.g. HDST has its optimum functioning or “diffraction limit” at a wavelength of 500nm, given by 500 x 10^-9 m / 12 m.
The larger the aperture of a telescope, the more detail it can see at any given wavelength; conversely, the longer the wavelength, the less detail it can see. That is under the perfect conditions experienced in space, as opposed to the constantly moving atmosphere for ground-based scopes, which will rarely approach the diffraction limit. So the HDST will not have the same degree of resolution at infrared wavelengths as at visible wavelengths, which is relevant because several potential biosignatures will appear in spectra at longer wavelengths.
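Here is a worked example of that diffraction limit (using the standard Rayleigh criterion theta = 1.22 * lambda / D; the 1.22 factor for a circular aperture is my addition, not stated in the text):

import math

wavelength = 500e-9   # metres, middle of the visible band
aperture   = 12.0     # metres, the larger HDST option

theta_rad = 1.22 * wavelength / aperture
theta_mas = math.degrees(theta_rad) * 3600 * 1000   # radians -> milliarcseconds

print(f"Diffraction limit at 500nm: {theta_mas:.1f} mas")   # roughly 10 mas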
Approaching the diffraction limit is possible on the ground with the use of laser-produced guide stars and modern “deformable mirrors” or “adaptive optics,” which help compensate. This technique of deformable primary and especially secondary mirrors will be important in space as well, in order to achieve the incredible stability required for any telescope observing distant and dim exoplanets. This is especially true of coronagraphs, though much less so with star-shades, which could be important in determining which starlight suppression technique to employ.
Additionally, the polishing “finish” of the mirror itself requires incredible precision. As a telescope becomes larger, the quality of its mirror needs to improve, given the minute wavelengths being worked with. The degree of polish or “finish” required is defined as a fraction of a wavelength, or wavefront error (WFE). For the HDST this is as low as 1/10 or even 1/20 of the wavelength in question, in its case generally visible light around 500nm, so this error will be below 50nm, a tiny margin that illustrates the ultra high quality of telescope mirror required.
A large 12m HDST would require a WFE of about 1/20 lambda and possibly even lower, which works out to less than 30nm. The telescope would also require a huge giga-pixel array of sensors to capture any exoplanet detail: electron-multiplying CCDs (EMCCDs), or their Mercury Cadmium Telluride-based near infrared equivalents, which would need passive cooling to prevent heat generated by the sensors themselves from producing “dark current,” creating a false digital image and background “noise.”
Such arrays already exist in space telescopes like ESA’s Gaia, and producing larger versions would be one of the easier design requirements. For an UltraViolet-Optical-InfraRed (UVOIR) telescope, an operating temperature of about -100 C would suffice for the sensors, while the telescope itself could remain near room temperature.
All of the above is difficult but not impossible even today and certainly possible in the near future, with conventional materials like ultra-low expansion glass (ULE) able to meet this requirement, and more recently silicon carbide composites, too. The latter have the added advantage of a very low coefficient of expansion. This last feature can be crucial depending on the telescope sensor’s operating temperature range. Excessive expansion due to a “warm” telescope operating around 0-20 degrees C could disturb the telescope’s stability. It was for this reason that silicon carbide was chosen for the structural frame of the astrometry telescope Gaia, where stability was also key to accurately positioning one billion stars.
A “warm” operating temperature of around room temperature helps reduce telescope cost significantly, as illustrated by the $8 billion cost of the JWST, whose operating temperature of a few tens of Kelvin demands elaborate and expensive cooling. Think how sad it was seeing the otherwise operational 3.5m ESA Herschel space telescope drifting off to oblivion when its supply of liquid helium ran out.
The operating temperature of a telescope’s sensors determines its wavelength-sensitive range or “bandpass.” For wavelengths longer than about 5 micrometers (5000 nm), the sensors of the telescope require cooling in order to prevent the temperature of the telescope apparatus from impacting any incoming information. Bandpass is also influenced, generally made much smaller, by passing through a coronagraph. The longer the wavelength, the greater the cooling required. Passive cooling involves attaching the sensors to a metal plate that radiates heat out to space. This is useful for a large telescope that requires precision stability, as it has no moving parts that can vibrate. Cooler temperatures can be reached by mechanical “cryocoolers,” which can get down as low as a few tens of Kelvin (seriously cold) but at the price of vibration.
This was one of the two main reasons why the JWST telescope was so expensive. It required cooling to within a few tens of Kelvin of absolute zero (the point at which a body has no thermal energy and therefore the lowest reachable temperature), without vibration, in order to reach longer infrared wavelengths and look back further in time.
Remember, the further light has travelled since the Big Bang, the more it is stretched or “red-shifted,” and seeing back as far as possible was a big driver for JWST. The problem is that achieving, maintaining and testing such deep cooling over ten years of service is a major engineering undertaking, and this contributed to the telescope’s cost and time overrun.
The other issue with large telescopes is whether they are made from one single mirror, like Hubble, or are segmented, like the Keck telescopes and JWST. The largest currently manufacturable monolithic mirrors are off-axis (unobstructed), 8.4m in diameter, bigger than JWST’s and perfected in ground scopes like the LBT and GMT. Off-axis means that the focal plane of the telescope is offset from its aperture, so that a focusing secondary mirror, sensor array, spectroscope or coronagraph does not obstruct the aperture and reduce the available light by up to 20%. A big attraction of this design is that the unobstructed 8.4m mirror thus collects roughly the equivalent of a 9.2m on-axis mirror, ironically near the minimum requirement for the ideal exoplanet telescope.
Given the construction of six such mirrors for the GMT, this mirror is now almost “mass produced,” and thus very reasonably priced. The off-axis design allows sensor arrays, spectroscopes and especially large coronagraphs to sit outside the telescope without needing to be suspended within it by the “spider” attachments that create the “star” shaped diffraction patterns we are all familiar with in conventional telescope designs. Despite being cheaper to manufacture and already tested extensively on the ground, such a design faces the problem that there are currently no launchers big and powerful enough to lift what would in effect be a 50 tonne-plus telescope into orbit (a solid rather than lightweight honeycomb mirror being necessary due to the high “g” and acoustic vibration forces at launch).
In general, a segmented telescope can be “folded” up inside a launcher fairing very efficiently, up to a maximum aperture of roughly 2.5 times the fairing width. The Delta IV Heavy launcher has a fairing width of about 5.5m, so in theory a segmented telescope of up to 14m could be launched, provided it stayed below the maximum weight capacity of about 21 tonnes to geosynchronous transfer orbit. So it could be launched tomorrow! It was this novel segmentation that, along with cooling, added to the cost and construction time of the JWST, though hopefully once successfully launched it will have demonstrated its technological readiness and be cheaper next time round.
By the time an HDST variant is ready to launch it is hoped that there will be launchers with the fairing widths and power to lift such telescopes, and they will be segmented, because at 12m they exceed the monolithic limit. With a wavelength operating range from circa 90nm to 5000nm, they will require passive cooling only, and the segmentation design will have been tested already; both of these will help reduce cost, which will then depend more simply on size and launcher cost. This sort of bandpass, though not so large as that of a helium-cooled telescope, is more than adequate for looking for key biosignatures of life such as ozone (O3), methane, water vapour and CO2 under suitable conditions and with a good “signal to noise ratio,” the degree to which the required signal stands out from background noise.
Separating Planets from their Stars
Ideally the signal to noise ratio should be better than ten. In terms of instrumentation, all exoplanet scientists will want a large telescope of the future to have starlight suppression systems to help directly image exoplanets as near to their parent stars as possible, with a contrast reduction of 10^-10 in order to view Earth-sized planets in the liquid water “habitable zone.” The more Earth-like planets and biosignatures the better. There are abiotic ways of producing biosignature-like features in a spectrum, so a larger sample of such signatures strengthens the case for a biological origin rather than a coincidental non-biological one.
As previously discussed, there are two ways of doing this: with internal and with external occulting devices. Internal coronagraphs are a series of masks and mirrors that “shave off” the offending starlight, leaving only the orbiting planets. The race is on to see how close to the star this can be done. NASA’s WFIRST should tantalisingly achieve contrast reductions of between 10⁻⁹ and 10⁻¹⁰, which shows how far this technology has come since the mission was conceived three years ago, when such levels were pure fantasy.
How close to its parent star a planet can be imaged is set by the inner working angle (IWA), measured in milliarcseconds (mas); for WFIRST this is slightly more than 100 mas, which for a nearby star corresponds to a separation between the orbits of Earth and Mars in our solar system. A future HDST coronagraph would hope to get as low as 10 mas, allowing habitable-zone planets to be imaged around smaller, cooler (and more common) stars. That said, coronagraphs are an order of magnitude more difficult to design for segmented telescopes than for monolithic ones, and little research has yet gone into this area. An external occulter, or star-shade, achieves the same goal as a coronagraph but does so by sitting far out in front of the telescope, between it and the target star, casting a shadow that excludes the starlight. A recent Probe-class concept explored the use of a 34m shade flying up to 35,000 km from WFIRST. The throughput of light is essentially 100%, versus a maximum of 20-30% for most coronagraph designs, in a regime where photons are at a premium: perhaps just one photon per second or less from an exoplanet might reach the sensor array.
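For a rough sense of scale, the sketch below converts these numbers into physical units, assuming a 2.4m aperture observing at 550nm and a target star ten parsecs away; both are illustrative choices, not mission specifications.

MAS_PER_RAD = 206265e3   # milliarcseconds per radian

def lambda_over_d_mas(wavelength_m, diameter_m):
    """Diffraction scale lambda/D in milliarcseconds."""
    return wavelength_m / diameter_m * MAS_PER_RAD

def projected_separation_au(angle_mas, distance_pc):
    """Separation subtended by an angle at a given stellar distance (1 arcsec at 1 pc = 1 AU)."""
    return (angle_mas / 1000.0) * distance_pc

ld = lambda_over_d_mas(550e-9, 2.4)   # ~47 mas for a 2.4 m aperture at 550 nm
print(f"lambda/D ~ {ld:.0f} mas; a typical coronagraph IWA of 2-3 lambda/D is ~{2*ld:.0f}-{3*ld:.0f} mas")
print(f"100 mas at 10 pc ~ {projected_separation_au(100, 10):.1f} AU")   # about Earth's orbit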
A quick word on coronagraph types might be useful. Most coronagraphs use a “mask” that sits in the focal plane of the telescope and blocks out the light of the central parent star whilst allowing the fainter, off-axis exoplanet light to pass and be imaged. Some starlight will diffract around the mask (especially at longer wavelengths such as the infrared), but it can be removed by shaping the entrance pupil or by subsequent apodization, an optical filtering technique that uses a series of shaped masks and mirrors to “shave off” additional starlight until just the planet light is left.
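For readers who like to see the principle in action, here is a deliberately simplified numerical sketch of a classical Lyot-style coronagraph using Fourier optics: a circular pupil, a hard-edged focal-plane occulter and an undersized Lyot stop. It illustrates only the basic mechanism; real designs add apodization, deformable mirrors and wavefront control to push the residual starlight toward the 10⁻¹⁰ level, and every numerical value below is an arbitrary illustrative choice.

import numpy as np

N, D = 512, 128                       # grid size and pupil diameter, in pixels
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

pupil = (R <= D / 2).astype(float)    # unobstructed circular aperture

def to_focal(field):
    """Pupil plane -> focal plane (Fraunhofer approximation via FFT)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))

def to_pupil(field):
    """Focal plane -> pupil plane."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(field)))

# On-axis starlight through the coronagraph
focal = to_focal(pupil)
occulter = (R > 3 * N / D).astype(float)        # block the inner ~3 lambda/D
lyot_plane = to_pupil(focal * occulter)
lyot_stop = (R <= 0.8 * D / 2).astype(float)    # undersized stop removes edge-diffracted light
image = np.abs(to_focal(lyot_plane * lyot_stop)) ** 2

# The same star with no occulting mask, for comparison
reference = np.abs(to_focal(to_pupil(to_focal(pupil)) * lyot_stop)) ** 2

print("fraction of on-axis starlight surviving:", image.sum() / reference.sum())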
For WFIRST the coronagraph is a combination of a “Lyot” mask and a shaped pupil. This is efficient at blocking starlight to within 100 mas of the star, but at the cost of losing 70-80% of the planet light, as noted above. Such is the current state of the art ahead of any HDST proposal. The backup design uses apodization, which removes starlight efficiently without losing planet light; indeed, as much as 95% gets through. That design has not yet been tested to the same degree as the WFIRST primary coronagraph, though, because the additional mirrors it needs are very hard to manufacture. Its high throughput is very appealing where light is so precious, and the design is likely to see action at a later date. A coronagraph throughput of 95% on an off-axis 8.4m telescope, compared with 20-30% for an alternative on even a 12m, would allow more light to be analysed.
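A quick throughput-weighted comparison of collecting areas, using the percentages quoted above, bears this out:

import math

def effective_area(diameter_m, throughput):
    """Collecting area weighted by coronagraph throughput."""
    return throughput * math.pi * (diameter_m / 2) ** 2

print(f"8.4 m at 95% throughput: {effective_area(8.4, 0.95):.0f} m^2")   # ~53 m^2
print(f"12 m  at 30% throughput: {effective_area(12.0, 0.30):.0f} m^2")  # ~34 m^2
print(f"12 m  at 20% throughput: {effective_area(12.0, 0.20):.0f} m^2")  # ~23 m^2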
The star-shade’s advantage is that the even more stringent stability requirements a coronagraph imposes on the telescope are very much relaxed, and the amount of useful light reaching the focal plane is near 100%. No place for waste. Star-shades also offer deeper spectroscopic analysis than coronagraphs. The disadvantage is that a star-shade requires two separate spacecraft in precision “formation flying” to keep its shadow in the right place, and the shade must move to a new position every time a new target is selected, taking days or weeks to do so. Its finite propellant supply limits its lifespan to a maximum of about five years, and perhaps thirty or so premium-target exoplanets. It may therefore be that preliminary exoplanet discovery and target mapping is done rapidly with a coronagraph, before atmospheric characterisation via spectroscopy is carried out later by a star-shade, with its greater throughput of light and greater spectroscopic range.
The good news is that the recent NASA ExoPAG consensus criteria call for an additional Probe-class (~$1 billion) star-shade mission for WFIRST in addition to its coronagraph. This would need the telescope to be stationed at a stable Sun/Earth Lagrange point, but it would make the mission in effect a technology demonstration for both types of starlight suppression, saving development costs for any future HDST while imaging up to 30 habitable-zone Earth-like planets and locating many more within ten parsecs in combination with the Gaia astrometry results.
The drawback is that WFIRST has a monolithic mirror, and coronagraph development to date has focused on that configuration rather than on the segmented mirrors of larger telescopes. Star-shades are less affected by mirror type or quality, but a 12m telescope — compared to WFIRST’s 2.4m — would only achieve maximum results with a huge 80m shade. Building and launching a 34m shade is no mean feat; building and launching an enormous 80-100m version might even require fabrication in orbit. It would also need to fly 160,000-200,000 km from its telescope, making formation flying no easy achievement, especially as star-shade technology can only be tested in computer simulations or at reduced scale in practice.
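The separations quoted above follow from simple geometry: to first order, a star-shade’s inner working angle is just its angular radius as seen from the telescope. A small sketch, using the shade sizes and distances mentioned in the text (real designs also depend on petal shape and wavelength, so treat this as a scaling argument only):

MAS_PER_RAD = 206265e3   # milliarcseconds per radian

def starshade_iwa_mas(shade_diameter_m, separation_km):
    """Geometric inner working angle: shade radius divided by telescope-shade separation."""
    return (shade_diameter_m / 2) / (separation_km * 1e3) * MAS_PER_RAD

print(f"34 m shade at  35,000 km: {starshade_iwa_mas(34, 35_000):.0f} mas")    # ~100 mas
print(f"80 m shade at 200,000 km: {starshade_iwa_mas(80, 200_000):.0f} mas")   # ~40 mas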
HDST Beyond Exoplanets
So that’s the exoplanet element. Exciting as such science is, it represents only a small portion of astrophysics, and any HDST is going to be a costly venture, probably exceeding JWST. It will need to have utility across astrophysics, and herein lies the problem: what sort of compromise can be reached amongst the different schools of astrophysics in terms of telescope function and observing time? Observing distant exoplanets can take days, and characterising their atmospheres even longer.
Given JWST’s price and its huge cost and schedule overruns, any Congress will be skeptical of being drawn into another bottomless financial commitment. It is for this reason that the focus is increasingly on both JWST and WFIRST. The first has absolutely GOT to work, well and for a long time, so that all its faults (as with Hubble, ironically) can be forgotten amid the celebration of its achievements. WFIRST must show that a flagship-level mission can work at a reasonable cost (circa $2.5 billion) and also that all the exoplanet technology required for a future large telescope can work, and work well.
The HABX2 telescope is in effect a variant of HDST whose aperture would be determined by available funds, with the maximum possible passively cooled bandpass described above and a scaled-up version of WFIRST’s starlight suppression technology: in effect, a dedicated exoplanet telescope. It, too, would use a coronagraph or star-shade.
The umbrella term for all these telescope variants is determined by wavelength coverage; such an instrument is referred to as a Large Ultraviolet Optical InfraRed (LUVOIR) telescope, with the specific wavelength range to be determined as necessary. A LUVOIR telescope is not a dedicated exoplanet scope and would obviously require suitable hardware for that role. This loose definition is important because there are other telescope types competing for attention — high-energy observatories, for instance, looking at X-rays, since NASA’s Chandra telescope does not image the highest-energy X-rays emitted by quasars or black holes. And between JWST and ALMA (the Atacama Large Millimeter/submillimeter Array) lies the far infrared, which calls for dedicated telescopes and has not been explored extensively. There are astrophysicist groups lobbying for all of these telescope types.
Here WFIRST is again key. It will locate thousands of planets through conventional transit photometry and microlensing as well as astrometry, but the directly imaged planets found by its coronagraph, and better still its star-shade, should, if characterised (with JWST?), speak for themselves. If they do not guarantee a dedicated exoplanet HDST, they should at least give NASA and Congress the confidence to back a large space “ELT” with suitable bandpass and starlight suppression hardware, and the observing time to investigate further. The HDST is an outline of what such a future space telescope, be it HABX2 or a more generic instrument, might be.
Image: A simulated spiral galaxy as viewed by Hubble and by the proposed High Definition Space Telescope (HDST) at a lookback time of approximately 10 billion years (z = 2). The renderings show a one-hour observation for each space observatory. Hubble detects the bulge and disk, but only the high image quality of HDST resolves the galaxy’s star-forming regions and its dwarf satellite. The zoom shows the inner disk region, where only HDST can resolve the star-forming regions and separate them from the redder, more distributed old stellar population.
Credit: D. Ceverino, C. Moody, and G. Snyder, and Z. Levay (STScI).
Challenges to Overcome
The concern is that although much of the technology will hopefully be proven through the success of JWST and WFIRST, the step up in size itself requires a huge technological advance, not least because of the exquisite accuracy demanded at every level, from observing exoplanets via a star-shade or coronagraph to the design, construction and operation of those devices. A big caveat is that it was exactly this sort of technological uncertainty that contributed to the time and cost overruns of JWST, something both the NASA executive and Congress are well aware of. It is highly unlikely that such a telescope will launch before the mid-2030s, even on an optimistic estimate, and there has already been pushback on an HDST from within NASA. What might be more likely is a compromise: one that delivers a LUVOIR telescope rather than an X-ray or far-infrared alternative, but at more reasonable cost and budgeted over an extended period ahead of a 2030s launch.
Congress is keen to drive forward high-profile manned spaceflight. Whatever your thoughts on that, it is likely to lead to the evolution of the SLS and private equivalents such as SpaceX launchers. Should these offer a fairing of around 10m, it would be possible to launch the largest monolithic mirror in an off-axis format, which allows the easiest and most efficient use of a coronagraph, or an intermediate (50m) star-shade, with minimal technology development and at substantially lower cost. Such a telescope would not represent as big a technological leap and would be a relatively straightforward design. Negotiation over telescope usage could lead to more time devoted to exoplanet science, compensating further for the “descoping” from the 12m HDST ideal (only 15% of JWST observing time is granted for exoplanet use). Thus the futures of manned and robotic spaceflight are intertwined.
A final interesting point concerns the “other,” forgotten NRO telescope. It is identical to its high-profile sibling apart from “imperfections” in its manufacture, but a recent NASA executive interview conceded it could still be used for space missions. At present, logic would have it serve as a backup for WFIRST. Could it, too, be the centrepiece of an exoplanet mission, one of the Probe concepts perhaps, especially the transit spectroscopy mission, where mirror quality is less important?
As with WFIRST, its large aperture would dramatically increase the potency of any such mission over a smaller bespoke mirror and could deliver a flagship-class mission at Probe cost. It would be a bonus if, like WFIRST, it too were launched next decade; as with Hubble and JWST, a bit of overlap would provide great synergy, the combined light-gathering capacity of the two telescopes allowing deeper spectroscopic characterisation of interesting targets provided by missions like TESS. The JWST workload could also be relieved, critically extending its active lifespan. This is supposition only at this point: I don’t think NASA is sure what to do with the mirror, though Probe funding could offer a way of using it without diverting additional funds from elsewhere.
When all is said and done, the deciding factors are likely to be JWST and the evidence collected by exoplanet Probe missions. JWST is five years overdue, five billion dollars over budget and laden with 162 moving parts, yet it will operate some 1.5 million km away. It has simply got to work, and work well, if there is to be any chance of other big space telescopes. Be nervous and cross your fingers when it launches in late 2018. Meantime, enjoy TESS and hopefully WFIRST and the other Probe missions, which should be more than enough to keep everyone interested even before the ground-based ELT reinforcements arrive with their high dispersion spectroscopy, which in combination with their own coronagraphs may also characterise habitable exoplanets. These planets, and the success of the technology that finds them, will be key to the development of the next big space telescope, if there is to be one.
Capturing public interest will be central to this, and we have seen just how much astrophysics missions can achieve in that regard from the recent high-profile successes of Rosetta and New Horizons. With ongoing innovation and the exoplanet missions of the next decade, this could usher in a golden era of exoplanet science. A final, often forgotten facet of space telescopes, and one central to HDST use, is observing solar system bodies from Mars out to the Kuiper Belt. Given the success of New Horizons it wouldn’t be a surprise to see a similar future flyby of Uranus, but it gives some idea of the sheer potency of an HDST that it could resolve features on Uranus down to just 300 km. It could clearly image the icy “plumes” of Europa and Enceladus, especially in the UV, where the shorter wavelengths give its best resolving power, illustrating the need for an ultraviolet capacity on the telescope.
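That resolution figure is broadly consistent with simple diffraction-limit arithmetic; the sketch below assumes a 12m aperture, Uranus at roughly 19 AU, and illustrative observing wavelengths.

AU = 1.496e11   # metres

def smallest_resolved_feature_km(wavelength_m, aperture_m, distance_au):
    """Rayleigh-criterion resolution (1.22 lambda/D) projected to the target's distance."""
    theta = 1.22 * wavelength_m / aperture_m      # radians
    return theta * distance_au * AU / 1e3         # kilometres

print(f"near-IR (1000 nm): {smallest_resolved_feature_km(1000e-9, 12, 19):.0f} km")  # ~290 km
print(f"UV      ( 200 nm): {smallest_resolved_feature_km(200e-9, 12, 19):.0f} km")   # ~60 km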
By 2030 we are likely to know of several tens of thousands of exoplanets, many characterised and even imaged, and, who knows, maybe some exciting hints of biosignatures warranting the kind of detailed examination only a large space telescope can deliver.
Plenty to keep Centauri Dreams going, for sure, and maybe to help us realise our position in the Universe.
——-
Further reading
Dalcanton, Seager et al., “From Cosmic Birth to Living Earths: The Future of UVOIR Space Astronomy.” Full text.
Swain, Redfield et al., “HABX2: A 2020 Mission Concept for a Flagship at Modest Cost,” a white paper response to the Cosmic Origins Program Analysis Group call for Decadal 2020 Science and Mission Concepts. Full text.
Equinox at Saturn: Puzzling Out the A Ring
I’m really going to miss Cassini when it takes its plunge into Saturn’s atmosphere in 2017. Having an orbiter in the outer system means that periodically we’ve been handed spectacular imagery and vast amounts of data for present and future analysis. Each new encounter now, such as the recent one with Dione, is a poignant reminder of how successful this mission has been, and how much we could gain with similar instrumentation around the ice giants.
Meanwhile, I look at this striking view of Saturn and its rings from 20 degrees above the ring plane, a mosaic built from 75 exposures using Cassini’s wide angle camera, and marvel at the view. The images were made in August of 2009, a day and a half after Saturn equinox, when the Sun was exactly overhead at the planet’s equator. The result is a darkening of the rings from this perspective because of the Sun’s lower angle to the ring plane, with shadows cast across the ring structure. It will be a while before we see this view again — even if we had another spacecraft somehow in place, equinox on Saturn occurs only once every 15 Earth years.
Image: This close to equinox, illumination of Saturn’s rings by sunlight reflected off the planet vastly dominates any meager sunlight falling on the rings. Hence, the half of the rings on the left illuminated by planetshine is, before processing, much brighter than the half of the rings on the right. On the right, it is only the vertically extended parts of the rings that catch any substantial sunlight. With no enhancement, the rings would be essentially invisible in this mosaic. To improve their visibility, the dark (right) half of the rings has been brightened relative to the brighter (left) half by a factor of three, and then the whole ring system has been brightened by a factor of 20 relative to the planet. So the dark half of the rings is 60 times brighter, and the bright half 20 times brighter, than they would have appeared if the entire system, planet included, could have been captured in a single image. Credit: NASA/JPL/Space Science Institute.
What the equinox event gave Cassini scientists was the opportunity to see unusual shadows and wavy structures that appeared during a time when the temperature of the rings’ icy particles began to drop because of the Sun’s position. A recent study in Icarus shows that during the equinox, Cassini’s Composite Infrared Spectrometer found temperatures that matched models of ring particle cooling over much of the expanse of the rings. But the outermost section — the A ring — turned out to be an anomaly, much warmer than the models predicted, with a temperature spike particularly evident in the middle of the A ring.
The JPL researchers believe that differences in the structure of Saturn’s ring particles account for the variation. As this JPL news release explains, most ring particles are thought to have a fluffy exterior something like fresh snow. The A ring seems to be different, however, composed of particles roughly one meter wide made up of solid ice, with only a thin outer layer (regolith). Thus we seem to have a concentration of what JPL’s Ryuji Morishima, who led the recent work, calls “solid ice chunks,” a result the researcher considers unusual. “Ring particles,” he adds, “usually spread out and become evenly distributed on a timescale of about 100 million years.”
“This particular result is fascinating because it suggests that the middle of Saturn’s A ring may be much younger than the rest of the rings,” says Linda Spilker, Cassini project scientist at JPL and a co-author of the study. “Other parts of the rings may be as old as Saturn itself.”
Another possibility: The particles in question are being confined to their present location, perhaps by a moon that existed in the region within the past hundred million years, to be destroyed later by an impact. In this scenario, ring debris might not have had time to diffuse evenly throughout the ring. The researchers also suggest that small ‘rubble-pile’ moonlets could be breaking up in the A ring under the gravitational influence of Saturn and its larger moons.
Cassini is far from finished, despite my musings about its fate. In fact, the spacecraft will measure the mass of the main rings during its last orbits, with the hope that the data will constrain the rings’ age. All of this will give us further insights into how ring structures like these work, with the equinox data showing how short-lived changes can occur that reveal the rings’ deep structure. Until now, we’ve been unable to probe more than a millimeter below the surface of these countless particles, but we’re learning to build models of what must be there.
The paper is Morishima et al., “Incomplete cooling down of Saturn’s A ring at solar equinox: Implication for seasonal thermal inertia and internal structure of ring particles,” Icarus 23 June 2015 (abstract).