Centauri Dreams

Imagining and Planning Interstellar Exploration

WFIRST: The Starshade Option

What's ahead for exoplanet telescopes in space? Ashley Baldwin, who tracks today's exciting developments in telescope technology, brings us a look at how a dark energy mission, WFIRST, may be adapted to perform exoplanet science of the highest order. One possibility is the use of a large starshade to remove excess starlight and reveal Earth-like planets many light years away. A plan is afoot to make starshades happen, as explained below. Dr. Baldwin is a consultant psychiatrist at the 5 Boroughs Partnership NHS Trust (Warrington, UK), a former lecturer at Liverpool and Manchester Universities and, needless to say, a serious amateur astronomer.

by Ashley Baldwin


Big things have small beginnings. Hopefully.

Many people will be aware of NASA's proposed 2024 WFIRST mission. Centauri Dreams readers will also be aware that this mission was originally identified in the 2010 Decadal Survey roadmap as a $1.7 billion mission to explore and quantify "dark energy". Given that the mission involved prolonged viewing of large areas of sky, an exoplanet "microlensing" survey was also proposed as a secondary goal, an add-on that would not intrude on the dark energy component.

Researchers believed that the scientific requirements would be best met by a wide-field telescope, and various design proposals were put forward. All of these had to sit within a budget envelope that also included testing, five years of mission operations costs and a launch vehicle. This allowed a maximum telescope aperture of 1.5 m, a size that has flown many times before, reducing both the risk and the need for the extensive, costly pre-mission testing that had dogged JWST. As the budget fluctuated over the years, the aperture was reduced to 1.3 m.

Then suddenly, in 2012, the National Reconnaissance Office (NRO) donated two 2.4 m telescopes to NASA. These were similar in size, design and quality to Hubble, so they had plenty of "heritage", but critically they were also wide-field and worked largely, though not exclusively, in the infrared. What an opportunity: such a scope would collect roughly three and a half times as much light as the 1.3 m telescope ((2.4/1.3)² ≈ 3.4).

Tuning Up WFIRST

At this point a proposal was put forward to add a coronagraph in order to increase the exoplanet science return of WFIRST, always a big driver in prioritising missions. A coronagraph is a set of mirrors and masks, an "optical train" placed between the telescope aperture and the focal plane, that removes starlight before it reaches the focal plane while letting through the much dimmer reflected light of any orbiting exoplanet, allowing imaging and spectrographic analysis. Given the savings presented by using a "free" mirror, it was felt that this addition could be made at no extra cost to the mission itself.

The drawback is that the architecture of the two NRO telescopes was not designed for this purpose. There are internal supporting struts or "spiders" that could get in the way of the complex apparatus of any type of coronagraph, impinging on its performance (these struts are what create the diffraction spikes seen in images from reflecting telescopes). The task of fitting a coronagraph was not felt to be insurmountable, though, and the potentially huge science return was deemed worth it, so NASA funded an investigation into the best use of the new telescopes for WFIRST.

The result was WFIRST-AFTA, ostensibly a new dark energy mission with all of its old requirements but with the addition of a coronagraph. Subsequent independent analyses by organisations such as the NRC (National Research Council – the auditors), however, suggested that the addition of a coronagraph could increase the overall mission cost to over $2 billion. This was largely driven by the use of "new" technology that might need protracted and costly testing; the spectre of JWST and its damaging cost overruns still looms over space telescope policy. Although the matter is disputed, the higher estimate rests on the premise that, despite over ten years of research, coronagraph technology remains somewhat immature and in need of further development. A research grant was given to coronagraph testbed centres to rectify this.


Image: An artist’s rendering of WFIRST-AFTA (Wide Field Infrared Survey Telescope – Astrophysics Focused Telescope Assets). From the cover of the WFIRST-AFTA Science Definition Team Final Report (2013-05-23). Credit: NASA / Goddard Space Flight Center WFIRST Project – Mark Melton, NASA/GSFC.

Another issue is whether WFIRST funding will actually be delivered in the 2017 budget. This is far from certain, given the ongoing requirement to pay off the JWST "debt". Those who remember the various TPF initiatives that followed a similar pathway and were ultimately scrapped at a late stage will shiver in apprehension. And that was before the huge overspend and delay of JWST, largely due to its use of numerous new technologies and the related testing. Congress is understandably keen to avoid a repeat.

This spawned a proposal for the two cheaper "backup" concepts I have talked of before, EXO-S and EXO-C. These are probe-class concepts with a $1 billion budget cap; primary and backup proposals will be submitted in 2015. EXO-S itself involved using a 1.1 m mirror with a 34 m starshade to block out starlight, rather than the coronagraphs of WFIRST-AFTA and EXO-C. Its backup plan proposed using the starshade with a pre-existing telescope, which, with JWST already irrevocably ruled out, left only WFIRST. Meanwhile, a research budget was allocated to bring coronagraph technology up to a higher "technology readiness level".

Recently sources have suggested that the EXO-S backup option has come under closer scrutiny, with the possibility of combining WFIRST with a starshade becoming more realistic. At present that would only involve making WFIRST "starshade ready", meaning fitting the two-way guidance systems necessary to link the telescope and a starshade for joint operation. The cost is only a few tens of millions, easily covered by some of the coronagraph research fund plus the assistance of the Japanese space agency JAXA (which may well fund the coronagraph too, taking the pressure off NASA). This doesn't cover the starshade itself, however, which would need a separate, later funding source.

Rise of the Starshade

What is a starshade? The idea was originally put forward by astronomer Webster Cash of the University of Colorado in 2006; Cash is now part of the EXO-S team led by Professor Sara Seager of MIT. The starshade was proposed as a cheap alternative to the ultra-smooth mirrors (up to twenty times smoother than Hubble's) and the advanced precision wavefront control of incoming light that coronagraphs demand. A starshade is essentially a large occulting device, flying as an independent satellite, placed between the telescope and any star being investigated for exoplanets. In doing so, it blocks out the light of the star itself but allows any exoplanet light to pass. This is exactly the same principle as a coronagraph, only outside the telescope rather than inside it (the coronagraph takes its name from instruments originally developed to image the solar corona). The apparent brightness of a planet is determined not so much by its star's brightness as by its own reflectivity and radius, which is why anyone viewing our solar system from a distance would see both Jupiter and Venus more easily than Earth.


Image: A starshade blocks out light from the parent star, allowing the exoplanet under scrutiny to be revealed. Credit: University of Colorado/Northrop Grumman.

Any exoplanetary observatory has two critical parameters that determine its sensitivity. The first is how close to the star it can suppress starlight, allowing nearby planets to be imaged: the so-called Inner Working Angle, IWA. For the current WFIRST-plus-starshade mission, this is 100 milliarcseconds (mas), which for nearby stars would encompass any planet orbiting at Earth's distance from a Sun-like star. Remember that Earth sits within a "habitable zone", HBZ, the region whose temperature allows liquid water on the planetary surface. The HBZ is determined by star temperature and thus size, so 100 mas unfortunately would not allow viewing of planets orbiting much closer to a smaller star with a smaller HBZ, such as a K-class star. A slightly less important parameter is the Outer Working Angle, OWA, which determines how far out from the star the imager can see. This limits the narrow field of coronagraphs but not starshades, which enable planetary imaging many astronomical units from the star.
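
The 100 mas figure is easy to sanity-check with the small-angle approximation: a planet orbiting a astronomical units from a star d parsecs away reaches a maximum separation of roughly a/d arcseconds. A minimal sketch (the distances are illustrative):

```python
# Small-angle approximation: a planet a_au astronomical units from a star
# d_pc parsecs away reaches a maximum separation of a_au / d_pc arcseconds.

def separation_mas(a_au: float, d_pc: float) -> float:
    """Maximum star-planet angular separation in milliarcseconds."""
    return 1000.0 * a_au / d_pc

for d_pc in (5, 10, 15):
    print(f"1 AU orbit at {d_pc} pc -> {separation_mas(1.0, d_pc):.1f} mas")
# Prints 200.0, 100.0 and 66.7 mas respectively.
```

A 100 mas inner working angle therefore reaches Earth-like orbits out to roughly ten parsecs, which is one reason target selection favours nearby stars.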

The second key parameter is "contrast brightness", or fractional planetary brightness. Jupiter-sized planets are, on average, a billion times dimmer than their star in visible light, and this worsens by a factor of ten for smaller Earth-sized planets. The difference shrinks by about 100-1000 times in infrared light. The WFIRST starshade is designed to work predominantly in visible light, just creeping into the infrared at 825 nm, because the most common and cheapest light detectors available for telescopes, CCDs, operate best in this range. Thus to image a planet, the light of the parent star needs reducing by at least a billion times to see gas giants and ten billion times to see terrestrial planets. The WFIRST starshade suppresses starlight by a factor of ten billion (a contrast of 10⁻¹⁰) before its data is processed, improving to roughly 3 × 10⁻¹¹ after processing. So WFIRST can view both terrestrial and gas giant planets in terms of its contrast imaging. The 100 mas IWA allows imaging of planets in the HBZ of Sun-like stars and brighter K-class stars (whose HBZs overlap with those of larger stars).
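
As a rough check on the billion and ten-billion figures: the planet/star flux ratio in reflected light is, to first order, the geometric albedo times the square of (planet radius / orbital distance). A minimal sketch, with illustrative albedos and the dimming phase factor ignored:

```python
# First-order reflected-light contrast: albedo * (radius / orbital_distance)^2.
# Albedo values are illustrative; the phase factor (a further factor of a few)
# is ignored.

AU = 1.496e11  # metres

def contrast(albedo: float, radius_m: float, orbit_m: float) -> float:
    return albedo * (radius_m / orbit_m) ** 2

print(f"Jupiter-like: {contrast(0.5, 7.15e7, 5.2 * AU):.1e}")  # ~4e-09
print(f"Earth-like:   {contrast(0.3, 6.37e6, 1.0 * AU):.1e}")  # ~5e-10
```

Folding in the phase factor brings these close to the 10⁻⁹ and 10⁻¹⁰ contrasts quoted above.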

By way of comparison, the WFIRST-AFTA coronagraph can only reduce the star's light by a billion times, which only allows gas giants to be viewed. Its IWA corresponds to just inside Mars' orbit, so it can see the HBZ only for some Sun-like stars or larger, since a given HBZ shrinks in angular size with distance. EXO-C can do better than this, and aims to view terrestrial planets around nearby stars. It is limited instead by the amount of light it can collect with its smaller 1.5 m aperture, less than half WFIRST's collecting area, and less still after the light has passed through the extended "optical train" of the coronagraph and the deformable mirrors used in the critical precision wavefront control.

One thing that affects starshades, and indeed all forms of exoplanet imaging, is that other objects appearing in the imaging field can mimic a planet even though they are in fact either nearer or further away. Nebulae, supernovae and quasars are a few of the many possibilities. In general, these can be separated out by spectroscopy, as they will have spectra different from exoplanets, which mimic their parent star's light in reflection. One significant exception is "exozodiacal light", caused by the reflection of starlight from free dust within the system.

The dust arises from collisions between comets and asteroids within the system. Exozodiacal light is expressed as the ratio of its luminosity to the star's luminosity, measured in "zodis", where one zodi is the level found in our own Solar System. For us this ratio is only 10⁻⁷. In general the age of a system determines how high the zodiacal light ratio is, with older systems like our own having less as their dust dissipates. Thus zodiacal light in the Sol system is lower than in many systems, some of which have up to a thousand times the level. This causes problems in imaging exoplanets, as exozodiacal light is reflected at the same wavelengths as planets and can clump together to form planet-like masses. Detailed photometry is needed to separate planets from this exozodiacal confounder, prolonging integration time in the process.

Pros and Cons of Starshades

No one has ever launched a starshade; indeed, no one has even built a full-scale one, although large operational mock-ups have been built and deployed on the ground. A lot of the testing has been done in simulation. The key point is that, being big, starshades would need to fold up inside their launch vehicle and "deploy" once in orbit. Similar things have been done with conventional antennae in orbit, so the principle is not totally new. The other issue is so-called "formation flying". To work, the shade must cast an exact shadow over the viewing telescope throughout any imaging session, and sessions could last days or weeks. This requires position control of both telescope and starshade to an accuracy of plus or minus one metre, and for WFIRST the starshade/telescope separation must be 35,000 kilometres. Such precision has been achieved before, but only over much shorter distances of tens of metres. Even so, this is not perceived as a major obstacle.
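
To get a feel for how demanding a metre of lateral tolerance is at that separation, express it as an angle seen from the telescope:

```python
import math

# Lateral alignment of +/- 1 m at a 35,000 km telescope-starshade separation,
# expressed as the angle subtended at the telescope.
tolerance_m = 1.0
separation_m = 35_000e3

angle_rad = tolerance_m / separation_m
angle_mas = math.degrees(angle_rad) * 3_600_000  # degrees -> milliarcseconds
print(f"{angle_mas:.1f} mas")  # ~5.9 mas, far tighter than the 100 mas IWA
```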


Image: Schematic of a deployed starshade. TOMS: Thermo-Optical Micrometeorite Shield. Credit: NASA/JPL/Princeton University.

The other issue is “slew” time. Either the telescope or starshade must be self-propelled in order to move into the correct position for imaging. For the WFIRST mission this will be the starshade. To move from one viewing position to another can take days, with an average time between viewings of 11 days. Although moving the starshade is time-consuming, it isn’t a problem — recall that first and foremost WFIRST is a dark energy mission, so the in-between time can be used to conduct such operations.

What are the advantages and disadvantages of starshades? To some extent we have touched on the disadvantages already. They are bulky, necessarily requiring separate, larger launch vehicles and unfolding or "deployment" in orbit, factors that increase costs. They require precision formation flying. I mentioned earlier that at longer wavelengths the contrast between planet and star diminishes, which suggests that viewing in the infrared would be less demanding.

Unfortunately, longer wavelength light is more likely to "leak" by diffraction around the starshade, corrupting the image. This is an issue for starshades even in visible light, and it is for this reason they are flower-shaped, with anywhere from 12 to 28 petals, a design that has been found to reduce diffraction significantly. The starshade will also carry a finite amount of propellant, allowing a maximum of about 100 pointings before running out. A coronagraph, conversely, should last as long as the telescope it sits in.

Critically, because of the unpredictable thermal environment around Earth, as well as the obstructive effect of the Earth, Moon and Sun, starshades cannot be used in Earth orbit. This means either a Kepler-like "Earth-trailing" orbit (like EXO-C) or an orbit at the Sun-Earth Lagrange 2 (L2) point, 1.5 million km outward from Earth. Such orbits are more complex and thus costlier options than the geosynchronous orbit originally proposed. WFIRST is also the first telescope designed to be service/repair friendly, likely through robotic or manned missions, but a servicing mission to L2 is an order of magnitude harder and more expensive.

The benefits of starshades are numerous. We have talked of formation flying; for coronagraphs, the equivalent, and if anything more difficult, issue is wavefront control. Additional deformable mirrors are required to achieve this, adding yet more elements between the aperture and the focal plane. Remember that the coronagraph has up to thirty extra mirrors. No mirror is perfect, and even at 95% reflectivity per element over the length of the "optical train", a substantial amount of light is lost, light from a very dim target. Furthermore, coronagraphs demand the use of light-blocking masks, as parts of the coronagraph block light in a non-circular way, so care has to be taken that unwanted starlight outside this area does not go on to pollute the light reaching the telescope focal plane.

Even in the bespoke EXO-C, only 37% of the light reaches the focal plane, and WFIRST-AFTA will be worse. It is possible to alleviate this by improving the quality of the primary mirror, but obviously with WFIRST this is not possible. In essence, a starshade takes the pressure off the telescope: the IWA and contrast ratio depend largely on the shade rather than the telescope. Thus starshades can work with most telescopes irrespective of type or quality, without recourse to the complex and expensive wavefront control necessary with a coronagraph.
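
Those throughput numbers follow directly from compounding the per-mirror losses. A minimal sketch (the element counts are illustrative, not the actual EXO-C or AFTA optical prescriptions):

```python
# Compound throughput of an optical train: with per-element reflectivity r
# and n reflective elements, only r**n of the light survives.

def throughput(reflectivity: float, n_elements: int) -> float:
    return reflectivity ** n_elements

for n in (19, 30):
    print(f"{n} mirrors at 95% each -> {throughput(0.95, n):.0%} throughput")
# 19 mirrors at 95% each -> 38% throughput
# 30 mirrors at 95% each -> 21% throughput
```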

The Best Way Forward

In summary, it is fair to say that of the various options for a WFIRST/probe mission, the WFIRST telescope in combination with a starshade is the most potent in terms of exoplanetary imaging, especially for potentially habitable planets. Once at the focal plane, the light will be analysed by a spectrograph that, depending on how much light it receives, will be able to characterise the atmospheres of the planets it views. The more light, the more sensitive the result, so with the 2.4 m WFIRST mirror as opposed to the 1.1 m telescope of EXO-S, discovery or "integration" times will be reduced by up to five times. Conversely and unsurprisingly, the further away the target planet, the longer the time needed to integrate and characterise it. This climbs dramatically for targets further than ten parsecs, or about 33 light years, away, but there are still plenty of promising stars in and around this distance and certainly up to 50 light years away. Target choice will be important.
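
The "up to five times" figure is simply the ratio of collecting areas, which scale as the square of the aperture diameter:

```python
# Collected light scales with aperture area (diameter squared); integration
# time for a given signal-to-noise scales inversely with collected light.
d_wfirst, d_exo_s = 2.4, 1.1  # apertures in metres
ratio = (d_wfirst / d_exo_s) ** 2
print(f"{ratio:.1f}x the light")  # ~4.8x, hence integration times cut ~5x
```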

At present it is proposed that the mission would characterise up to 52 planets during the time allotted to it. The issue is which 52 planets to image. Spectroscopy tells us about planetary atmospheres and characterises them, but there are many potential targets, some with known planets, others, including many nearer targets, with none. Which stars have planets most likely to reveal exciting "biosignatures", potential signs of life?

It is likely that astronomers will use a list of potentially favourable targets produced by respected exoplanet astronomer Margaret Turnbull in 2003. Dr. Turnbull is also part of the EXO-S team and helped float the idea of using a starshade for exoplanetary imaging through the New Worlds Observer concept, a 4 m telescope proposed at the end of the last decade. That design used a larger, 50 m starshade with an inner working angle of 65 mas, reaching within Venus' orbit and thus effective for planets in the habitable zones of stars smaller and cooler than the Sun.

To date, the plan is to view 13 known exoplanets and examine the HBZ of 19 more stars. The expectation is that 2-3 "Earths" or "super-Earths" will be found in this sample. This is where the larger aperture of WFIRST comes into its own. The more light, the greater the likelihood that any finding or image is indeed a planet standing out from background light or "noise". This is the so-called signal-to-noise ratio, SNR, critical to all astronomical imaging. The higher the SNR, the better, eliminating the possibility of false results; a level of ten is set for the WFIRST starshade mission's spectroscopic characterisation of exoplanetary atmospheres. Spectral resolution is a related figure of merit, describing a spectrograph's ability to separate spectral lines due, for example, to a biosignature gas like O2 in an atmosphere. For WFIRST this is set at 70, which, while sounding reasonable, is tiny when compared to the resolutions used in radial velocity searches for exoplanets, which run into the hundreds of thousands.
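
To make the resolution figure concrete: R = λ/Δλ, so at R = 70 the wavelength bins around the O2 A-band near 760 nm are about 11 nm wide, enough to catch a broad absorption feature but nowhere near the line-by-line detail radial velocity spectrographs resolve. A quick sketch:

```python
# Spectral resolution R = wavelength / smallest resolvable interval.
wavelength_nm = 760.0  # O2 A-band, a classic biosignature feature

for R in (70, 100_000):
    print(f"R = {R:>6} -> bin width {wavelength_nm / R:.4f} nm")
# R =     70 -> bin width 10.8571 nm
# R = 100000 -> bin width 0.0076 nm
```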

Given the lack of in-space testing, the 34 m starshade of EXO-S is closest in size to the scaled model deployed on the ground at Princeton University, and is thus the lowest-risk size that remains effective. The larger New Worlds Observer concept was considered by NASA but rejected on grounds of cost at the time of JWST development. Apart from this, only "concept" exoplanet telescopes like ATLAST, an 8 m-plus scope, have been proposed, with vague and distant timelines.

So now it comes down to whether NASA feels able to cover making WFIRST starshade-ready and is willing to shift WFIRST's orbit from geosynchronous to L2. That decision could come as early as 2016, when NASA decides whether the mission can go forward and seek its budget. Then Congress will need to be persuaded that the $1.7 billion budget will stay at that level and not escalate. Assuming WFIRST is made starshade-ready, the battle to fund a starshade will begin once 2016 clearance is given. Ironically, NASA bid processes don't currently cover starshades, so there would need to be a rule change. There are various funding pools whose timelines would support a WFIRST starshade, however. No one has ever built one, but we do know that the costs run into the hundreds of millions, especially if a launch is included, as it would have to be unless the next generation of large, cheap launchers like the Falcon Heavy can manage shade and telescope in one go. Otherwise the idea would be for the telescope to go up in 2024, with starshade deployment the following year, once the telescope is operational.

Meanwhile there are private initiatives. The BoldlyGo Institute proposes launching a telescope, ASTRO-1, with both a coronagraph and a starshade. This telescope will be off-axis, meaning light from the primary is directed to a secondary mirror outside the telescope baffle before being redirected to a focal plane behind the telescope. This avoids the obstruction of the aperture in an on-axis telescope, where the secondary mirror sits in front of the primary and directs light to a focal plane through a hole in the primary itself. That hole reduces effective mirror size and light gathering even before the optical train is reached, which is why the off-axis layout is the natural choice for a bespoke coronagraphic telescope. For further information consult the website at boldlygo.org.

Exciting but uncertain times. Coronagraph or no coronagraph, starshade or no starshade? Or perhaps both? Properly equipped, WFIRST will become the first true space exoplanet telescope — one actually capable of seeing an exoplanet as a point source — and a potent one despite its Hubble-sized aperture. Additionally, like Gaia, WFIRST will hunt for exoplanets using astrometry. Coronagraph, starshade, microlensing and astrometry — four for the price of one! Moreover, this is possibly the only such telescope for some time, with no big U.S. telescopes even planned. Not so much ATLAST as last chance. Either way, it is important to keep this mission in the public eye, as its results will be exciting and could be revolutionary.

The next decade will see the U.S. resume manned spaceflight with SpaceX's Dragon V2 and NASA's own Orion spacecraft. At the same time, missions like Gaia, TESS, CHEOPS, PLATO and WFIRST will discover tens of thousands of exoplanets, which can then be characterised by the new ELTs and JWST. Our exoplanetary knowledge base will expand enormously, and we may even discover biosignatures of life on some of these worlds. Such ground-breaking discoveries will lead to dedicated "New Worlds Observer"-style telescopes that, with the advent of new lightweight materials, autonomous software and cheap launch vehicles, will have larger apertures than ever before, allowing even more detailed atmospheric characterisation. In this way the pioneering work of CoRoT and Kepler will be fully realised.


Astrobiology and Sustainability

As the Thanksgiving holiday approaches here in the US, I’m looking at a new paper in the journal Anthropocene that calls the attention of those studying sustainability to the discipline of astrobiology. At work here is a long-term perspective on planetary life that takes into account what a robust technological society can do to affect it. Authors Woodruff Sullivan (University of Washington) and Adam Frank (University of Rochester) make the case that our era may not be the first time “…where the primary agent of causation is knowingly watching it all happen and pondering options for its own future.”

How so? The answer calls for a look at the Drake Equation, the well-known synthesis by Frank Drake of the factors that determine the number of intelligent civilizations in the galaxy. What exactly is the average lifetime of a technological civilization? 500 years? 50,000 years? Much depends upon the answer, for it helps us calculate the likelihood that other civilizations are out there, some of them perhaps making it through the challenges of adapting to technology and using it to spread into the cosmos. A high number would also imply that we too can make it through the tricky transition and hope for a future among the stars.

Sullivan and Frank believe that even if the chances of a technological society emerging are as few as one in 1,000 trillion, there will still have been 1,000 instances of such societies undergoing transitions like ours in "our local region of the cosmos." The authors refer to extraterrestrial civilizations as Species with Energy-Intensive Technology (SWEIT) and discuss issues of sustainability that begin with planetary habitability and extend to mass extinctions and their relation to today's Anthropocene epoch, as well as changes in atmospheric chemistry, comparing what we see today with previous eras of climate alteration, such as the so-called Great Oxidation Event, the dramatic increase in oxygen levels (by a factor of at least 10⁴) that occurred some 2.4 billion years ago.
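
Inverting the quoted figures shows the size of the candidate pool the authors have in mind; this is just arithmetic on the numbers above:

```python
# 1,000 SWEIT instances at odds of one in 1,000 trillion (1e-15) per
# candidate implies roughly 1e18 candidate sites in "our local region".
instances = 1_000
odds_per_candidate = 1e-15
print(f"{instances / odds_per_candidate:.0e} candidate planets")  # 1e+18
```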

Out of this comes a suggested research program that models SWEIT evolution and the evolution of the planet on which it arises, using dynamical systems theory as a theoretical methodology. As with our own culture, these ‘trajectories’ (development paths) are tied to the interactions between the species and the planet on which it emerges. From the paper:

Each SWEIT’s history defines a trajectory in a multi-dimensional solution space with axes representing quantities such as energy consumption rates, population and planetary systems forcing from causes both “natural” and driven by the SWEIT itself. Using dynamical systems theory, these trajectories can be mathematically modeled in order to understand, on the astrobiology side, the histories and mean properties of the ensemble of SWEITs, as well as, on the sustainability science side, our own options today to achieve a viable and desirable future.


Image: Schematic of two classes of trajectories in SWEIT solution space. Red line shows a trajectory representing population collapse. Blue line shows a trajectory representing sustainability. Credit: Michael Osadciw/University of Rochester.

The authors point out that other methodologies could also be called into play, in particular network theory, which may help illuminate the many routes that can lead to system failures. Using these modeling techniques could allow us to explore whether the atmospheric changes our own civilization is seeing are an expected outcome for technological societies based on the likely energy sources being used in the early era of technological development. Rapid changes to Earth systems are, the authors note, not a new phenomenon, but as far as we know, this is the first time where the primary agent of causation is watching it happen.

Sustainability emerges as a subset of the larger frame of habitability. Finding the best pathways forward involves the perspectives of astrobiology and planetary evolution, both of which imply planetary survival but no necessary survival for a particular species. Although we know of no extraterrestrial life forms at this time, this does not stop us from proceeding, because any civilization using energy to work with technology is also generating entropy, a fact that creates feedback effects on the habitability of the home planet that can be modeled.


Image: Plot of human population, total energy consumption and atmospheric CO2 concentration from 10,000 BCE to today as trajectory in SWEIT solution space. Note the coupled increase in all 3 quantities over the last century. Credit: Michael Osadciw/University of Rochester.

Modeling evolutionary paths may help us understand which of these are most likely to point to long-term survival. In this view, our current transition is a phase forced by the evolutionary route we have taken, demanding that we learn how a biosphere adapts once a species with an energy-intensive technology like ours emerges. The authors argue that this perspective “…allows the opportunities and crises occurring along the trajectory of human culture to be seen more broadly as, perhaps, critical junctures facing any species whose activity reaches significant level of feedback on its host planet (whether Earth or another planet).”

The paper is Frank and Sullivan, “Sustainability and the astrobiological perspective: Framing human futures in a planetary context,” Anthropocene Vol. 5 (March 2014), pp. 32-41 (full text). This news release from the University of Washington is helpful, as is this release from the University of Rochester.


Our Best View of Europa

Apropos of yesterday's post questioning what missions will follow up the current wave of planetary exploration, the Jet Propulsion Laboratory has released a new view of Jupiter's intriguing moon Europa. The image, shown below, looks familiar because it was published in 2001, though at lower resolution and with considerable color enhancement. The new mosaic gives us the largest portion of the moon's surface at the highest resolution, and without the color enhancement, so that it approximates what the human eye would see.

The mosaic of images that go into this view was put together in the late 1990s using imagery from the Galileo spacecraft, which again makes me thankful for Galileo, a mission that succeeded despite all its high-gain antenna problems, and anxious for renewed data from this moon. The original data for the mosaic were acquired by the Galileo Solid-State Imaging experiment on two different orbits through the system of Jovian moons, the first in 1995, the second in 1998.

NASA is also offering a new video explaining why the interesting fracture features merit investigation, given the evidence for a salty subsurface ocean and the potential for at least simple forms of life within. It’s a vivid reminder of why Europa is a priority target.


Image (click to enlarge): The puzzling, fascinating surface of Jupiter’s icy moon Europa looms large in this newly-reprocessed color view, made from images taken by NASA’s Galileo spacecraft in the late 1990s. This is the color view of Europa from Galileo that shows the largest portion of the moon’s surface at the highest resolution. Credit: NASA/Jet Propulsion Laboratory.

Areas that appear blue or white are thought to be relatively pure water ice, with the polar regions (left and right in the image — north is to the right) bluer than the equatorial latitudes, which are more white. This JPL news release notes that the variation is thought to be due to differences in ice grain size in the two areas. The long cracks and ridges on the surface are interrupted by disrupted terrain that indicates broken crust that has re-frozen. Just what do the reddish-brown fractures and markings have to tell us about the chemistry of the Europan ocean, and the possibility of materials cycling between that ocean and the ice shell?


Rosetta: Building Momentum for Deep Space?

Even though its arrival on the surface of comet 67P/Churyumov-Gerasimenko did not go as planned, the accomplishment of the Rosetta probe is immense. We have a probe on the surface that was able to collect 57 hours' worth of data before going into hibernation, and a mother ship that will stay with the comet as it moves ever closer to the Sun (closest approach comes on August 13 of next year).

What a shame the lander’s ‘docking’ system, involving reverse thrusters and harpoons to fasten it to the surface, malfunctioned, leaving it to bounce twice before it landed with solar panels largely shaded. But we do know that the Philae lander was able to detect organic molecules on the cometary surface, with analysis of the spectra and identification of the molecules said to be continuing. The comet appears to be composed of water ice covered in a thin layer of dust. There is some possibility the lander will revive as the comet moves closer to the Sun, according to Stephan Ulamec (DLR German Aerospace Center), the mission’s Philae Lander Manager, and we can look forward to reams of data from the still functioning Rosetta.

What an audacious and inspiring mission this first soft landing on a comet has been. Congratulations to all involved at the European Space Agency as we look forward to continuing data return as late as December 2015, four months after the comet’s closest approach to the Sun.


Image: The travels of the Philae lander as it rebounds from its touchdown on Comet 67P/Churyumov-Gerasimenko. Credit: ESA/Rosetta/Philae/ROLIS/DLR.

A Wave of Discoveries Pending

Rosetta used gravitational assists around both Earth and Mars to make its way to its target, hibernating for two and a half years to conserve power during the long journey. Now we wait for the wake-up call to another distant probe, New Horizons, as it comes out of hibernation for the last time on December 6. Since its January 2006 launch, the Pluto-bound spacecraft has spent 1,873 days in hibernation, fully two-thirds of its flight time, across eighteen hibernation periods ranging from 36 to 202 days, a strategy that reduces wear on the spacecraft's electronics and frees up an overloaded Deep Space Network for other missions.

When New Horizons transmits confirmation that it is again in active mode, the signal will take four hours and 25 minutes to reach controllers on Earth; at that point the spacecraft will be more than 2.9 billion miles from the Earth, and less than twice the Earth-Sun distance from Pluto/Charon. According to the latest report from the New Horizons team, direct observations of the target begin on January 15, with closest approach on July 14.
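
The quoted delay is simply the one-way light travel time; a quick check of the numbers:

```python
# One-way light travel time across ~2.9 billion miles.
miles = 2.9e9
km = miles * 1.609344
hours = km / 299_792.458 / 3600  # divide by c in km/s, then convert to hours
print(f"{hours:.2f} hours")      # ~4.3 h; 4 h 25 min corresponds to a
                                 # distance slightly beyond 2.9e9 miles
```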

Nor is exploration slowing down in the asteroid belt, with the Dawn mission on its way to Ceres. Arrival is scheduled for March of 2015. Eleven scientific papers were published last week in the journal Icarus, including a series of high-resolution geological maps of Vesta, which the spacecraft visited between July of 2011 and September of 2012.


Image (click to enlarge): This high-resolution geological map of Vesta is derived from Dawn spacecraft data. Brown colors represent the oldest, most heavily cratered surface. Purple colors in the north and light blue represent terrains modified by the Veneneia and Rheasilvia impacts, respectively. Light purples and dark blue colors below the equator represent the interior of the Rheasilvia and Veneneia basins. Greens and yellows represent relatively young landslides or other downhill movement and crater impact materials, respectively. This map unifies 15 individual quadrangle maps published this week in a special issue of Icarus. Credit: NASA/JPL.

Geological mapping develops the history of the surface from analysis of factors like topography, color and brightness, a process that took two and a half years to complete. We learn that several large impacts, particularly the Veneneia and Rheasilvia impacts in Vesta’s early history and the much later Marcia impact, have been transformative in the development of the small world. Panchromatic images and seven bands of color-filtered images from the spacecraft’s framing camera, provided by the Max Planck Society and the German Aerospace Center, helped to create topographic models of the surface that could be used to interpret Vesta’s geology. Crater statistics fill out the timescale as scientists date the surface.

With a comet under active investigation, an asteroid thoroughly mapped, a spacecraft on its way to the largest object in the asteroid belt, and an outer system encounter coming up for mid-summer of 2015, we’re living in an exciting time for planetary discovery. But we need to keep looking ahead. What follows New Horizons to the edge of the Solar System and beyond? What assets should we be hoping to position around Jupiter’s compelling moons? Is a sample-return mission through the geysers of Enceladus feasible, and what about Titan? Let’s hope Rosetta and upcoming events help us build momentum for following up our current wave of deep space exploration.


Slingshot to the Stars

Back in the 1970s, Peter Glaser patented a solar power satellite that would supply energy from space to the Earth, a concept involving space platforms whose cost was one of many issues that put the brakes on the idea, although NASA did revisit it in the 1980s and '90s. But changing technologies may help us make space-based power more manageable, as John Mankins (Artemis Innovations) told his audience at the Tennessee Valley Interstellar Workshop.

What Mankins has in mind is SPS-ALPHA (Solar Power Satellite by means of Arbitrarily Large Phased Array), a system of his devising that uses modular, reconfigurable components to create large space systems in the same way that ants and bees form elegant and long-lived ecosystems on Earth. The goal is to harvest sunlight using thin-film reflector surfaces as part of an ambitious roadmap for solar power. Starting small — using small satellites and beginning with propulsion and stabilization modules — we scale up, one step at a time, to full-sized solar power installations. The energy harvested is beamed to a receiver on the ground.


Image: An artist’s impression of SPS-ALPHA at work. Credit: John Mankins.

All this is quite a change from space-based solar power concepts from earlier decades, which demanded orbital factories to construct and later maintain the huge platforms needed to harvest sunlight. But since the late 1990s, intelligent modular systems have come to the fore as the tools of choice. Self-assembly involving modular 10 kg units possessed of their own artificial intelligence, Mankins believes, will one day allow us to create structures of sufficient size that can essentially maintain themselves. Thin-film mirrors to collect sunlight keep the mass down, as does the use of carbon nanotubes in composite structures.

There is no question that we need the energy if we're thinking in terms of interstellar missions, though some would argue that fusion may eventually solve the problem (I'm as dubious as ever on that idea). Mankins harked back to the Daedalus design, estimating its cost at $4 trillion and noting that it would require an in-space infrastructure of huge complexity. Likewise Starwisp, a Robert Forward beamed-sail design, which would need beamers in close solar orbit to impart energy to the spacecraft. Distance and time translate into energy and power.

Growing out of the vast resources of space-based solar power is a Mankins idea called Star Sling, in which SPS-ALPHA feeds power to a huge maglev ring as a future starship accelerates. Unlike a fusion engine or a sail, the Star Sling allows acceleration times of weeks, months or even years, its primary limitation being the tensile strength of the material in the radial acceleration direction (a fraction of what would be needed in a space elevator, Mankins argues). The goal is not a single starship but a stream of 50 or 100 one to ten ton objects sent one after another to the same star, elements that could coalesce and self-assemble into a larger starship along the way.

Like SPS-ALPHA itself, Star Sling also scales up, beginning with an inner Solar System launcher that helps us build the infrastructure we’ll need. Also like SPS-ALPHA, a Star Sling can ultimately become self-sustaining, Mankins believes, perhaps within the century:

“As systems grow, they become more capable. Consider this a living mechanism, insect-class intelligences that recycle materials and print new versions of themselves as needed. The analog is a coral atoll in the South Pacific. Our systems are immortal as we hope our species will be.”

All of this draws from a 2011-2012 Phase 1 project for the NASA Innovative Advanced Concepts program on SPS-ALPHA, one that envisions “…the construction of huge platforms from tens of thousands of small elements that can deliver remotely and affordably 10s to 1000s of megawatts using wireless power transmission to markets on Earth and missions in space.” The NIAC report is available here. SPS-ALPHA is developed in much greater detail in Mankins’ book The Case for Space Solar Power.

Ultra-Lightweight Probes to the Stars

Knowing of John Rather’s background in interstellar technologies (he examined Robert Forward’s beamed sail concepts in important work in the 1970s, and has worked with laser ideas for travel and interstellar beacons in later papers), I was anxious to hear his current thoughts on deep space missions. I won’t go into the details of Rather’s long and highly productive career at Oak Ridge, Lawrence Livermore and the NRAO, but you can find a synopsis here, where you’ll also see how active this kind and energetic scientist remains.

Like Mankins, Rather (Rather Creative Innovations Group) is interested in structures that can build and sustain themselves. He invoked self-replicating von Neumann machines as a way we might work close to the Sun while building the laser installations needed for beamed sails. But of course self-replication plays out across the whole spectrum of space-based infrastructure. As Rather noted:

“Tiny von Neumann machines can beget giant projects. Our first generation projects can include asteroid capture and industrialization, giving us the materials to construct lunar colonies and expand to Mars and the Jovian satellites. We can see some of the implementing technologies now in the form of MEMS – micro electro-mechanical systems – along with 3D printers. As we continue to explore tiny devices that build subsequent machines, we can look toward expanding from colonization of our own Solar System into the problems of interstellar transfer.”

Building our in-system infrastructure requires cheap access to space. Rather's concept is called StarTram, an electromagnetic accelerator that can launch unmanned payloads at Mach 10 (pulling 30 g's at launch). The key here is to drop launch costs from roughly $20,000 per kilogram to $100 per kilogram. Using these methods, we can turn our attention to asteroid materials that can, via self-replicating von Neumann technologies, be turned into solar concentrators, lightsails and enormous telescope apertures (imagine a Forward-class lens 1000 meters in radius). 100-meter solar concentrators could change asteroid orbits for subsequent mining.

This is an expansive vision that comprises a blueprint for an eventual interstellar crossing. With reference to John Mankins' Star Sling, Rather mused that a superconducting magnetically inflated cable 50,000 kilometers in radius could be spun around the Earth, allowing the kind of solar power concentrator just described to power up the launcher. Taking its time to accelerate, a lightweight probe carrying a 30 kg payload could reach three percent of lightspeed within 300 days. The macro-engineering envisioned by Robert Forward still lives, to judge from both Rather's and Mankins' presentations, transformed by what may one day be our ability to create the largest of structures from tiny self-replicating machines.
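
Worth noting is how gentle that acceleration actually is; the challenge lies in sustaining it for months, not surviving it:

```python
# Average acceleration implied by reaching 3% of lightspeed in 300 days.
c = 299_792_458.0            # speed of light, m/s
v = 0.03 * c                 # target speed
t = 300 * 86_400             # 300 days in seconds
print(f"{v / t:.2f} m/s^2")  # ~0.35 m/s^2, only a few percent of 1 g
```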

The Solar Power Pipeline

Back when I was writing Centauri Dreams in 2004, I spent some time at Marshall Space Flight Center in Huntsville interviewing people like Les Johnson and Sandy Montgomery, who were both in the midst of the center's work on advanced propulsion. A major player in the effort that brought us NanoSail-D, Sandy has been interstellar-minded all along, as I discovered the first time I talked to him. I had asked whether people would be willing to turn their backs on everything they ever knew to embark on a journey to another star, and he reminded me of how many people had left their homes in our own history to voyage to and live on the other side of the world.


Image: Edward "Sandy" Montgomery, NanoSail-D payload manager at Marshall (in the red shirt), and Charlie Adams, NanoSail-D deputy payload manager, Gray Research, Huntsville, Ala., look on as Ron Burwell and Rocky Stephens, test engineers at Marshall, attach the NanoSail-D satellite to the vibration test table. In addition to characterizing the satellite's structural dynamic behavior, a successful vibration test also verifies the structural integrity of the satellite and gauges how it will endure the harsh launch environment. Credit: NASA/MSFC/D. Higginbotham.

We’re a long way from making such decisions, of course, but Montgomery’s interest in Robert Forward’s work has stayed active, and in Oak Ridge he described a way to power up a departing starship that didn’t have to rely on Forward’s 1000-kilometer Fresnel lens in the outer Solar System. Instead, Montgomery points to building a power collector in Mercury orbit that would use optical and spatial filtering to turn sunlight into a coherent light source and stream it out into the Solar System through a series of relays built out of lightweight gossamer structures.

Work the calculations as Montgomery has and you wind up with 23 relays between Earth orbit and the Sun, with more extending deeper into the Solar System. Sandy calls this a ‘solar power pipeline’ that would give us maximum power for a departing sailcraft. The relaying of coherent light has been demonstrated already in experiments conducted by the Department of Defense, in a collector and re-transmitter system developed by Boeing and the US Air Force. Although some loss occurs because of jitter and imperfect coatings, the concept is robust enough to warrant further study. I suspect Forward would have been eager to run the calculations on this idea.

Wrapping Up TVIW

Les Johnson closed the formal proceedings at TVIW late on the afternoon of the 11th, and that night held a public outreach session, where I gave a talk running through the evolution of interstellar propulsion concepts in the last sixty years. Following that was a panel with science fiction writers Sarah Hoyt, Tony Daniel, Baen Books’ Toni Weisskopf and Les Johnson on which I, a hapless non-fiction writer, was allowed to have a seat. A book signing after the event made for good conversations with a number of Centauri Dreams readers.

All told, this was an enthusiastic and energizing conference. I’m looking forward to TVIW 2016 in Chattanooga. What a pleasure to spend time with these people.


TVIW: From Wormholes to Orion

People keep asking what I think about Christopher Nolan’s new film ‘Interstellar.’ The answer is that I haven’t seen it yet, but plan to early next week. Some of the attendees of the Tennessee Valley Interstellar Workshop were planning to see the film on the event’s third day, but I couldn’t stick around long enough to join them. I’ve already got Kip Thorne’s The Science of Interstellar queued up, but I don’t want to get into it before actually seeing the film. I’m hoping to get Larry Klaes, our resident film critic, to review Nolan’s work in these pages.

Through the Wormhole

Wormholes are familiar turf to Al Jackson, who spoke at TVIW on the development of our ideas on the subject in science and in fiction. Al’s background in general relativity is strong, and because I usually manage to get him aside for conversation at these events, I get to take advantage of his good humor by asking what must seem like simplistic questions that he always answers with clarity. Even so, I’ve asked both Al and Marc Millis to write up their talks in Oak Ridge, because both of them get into areas of physics that push beyond my skillset.


Al's opening slide was what he described as a 'traversable wormhole,' and indeed it was: a shiny red apple with a wormhole on its face. What we really want to do, of course, is to connect two pieces of spacetime, an idea that has percolated through Einstein's General Relativity down through Schwarzschild, Wheeler, Morris and Thorne. The science fiction precedents are rich, with a classic appearance in Robert Heinlein's Starman Jones (1953), the best of his juveniles in my opinion. Thus our hero Max explains how to get around the universe:

You can't go faster than light, not in our space. If you do, you burst out of it. But if you do it where space is folded back and congruent, you pop right back into our space again but it's a long way off. How far off depends on how it's folded. And that depends on the mass in the space, in a complicated fashion that can't be described in words but can be calculated.

I chuckled when Al showed this slide because the night before we had talked about Heinlein over a beer in the hotel bar and discovered our common admiration for Starman Jones, whose description of ‘astrogators’ — a profession I dearly wanted to achieve when I read this book as a boy — shows how important it is to be precisely where you need to be before you go “poking through anomalies that have been calculated but never tried.” Great read.

If natural wormholes exist, we do have at least one paper on how they might be located, a team effort from John Cramer, Robert Forward, Michael Morris, Matt Visser, Gregory Benford and Geoffrey Landis. As opposed to gravitational lensing, where the image of a distant galaxy has been magnified by the gravitational influence of an intervening galaxy, a wormhole should show a negative mass signature, which means that it defocuses light instead of focusing it.

Al described what an interesting signature this would be to look for. If the wormhole moves between the observer and another star, the star's light would suddenly defocus, but as the wormhole continues to cross in front of the star, a spike of light would occur. So there's your wormhole detection: two spikes of light with a dip in the middle, an anomalous and intriguing observation! It's also one, I'll hasten to add, that has never been found. Maybe we can manufacture wormholes? Al described plucking a tiny wormhole from the quantum Planck foam, the math of which implies we'd have to be way up the Kardashev scale to pull off any such feat. For now, about the best we can manage is to keep our eyes open for that astronomical signature, which would at least indicate wormholes actually exist. The paper cited above, by the way, is "Natural Wormholes as Gravitational Lenses," Physical Review D (March 15, 1995), pp. 3124-27.
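
For the curious, the two-spikes-and-a-dip profile falls out of the published negative-mass lensing result: with the impact parameter B in Einstein-radius units, the total magnification is (B² - 2)/(B√(B² - 4)) for B > 2 and zero (complete defocusing) for B < 2. A toy light curve sketched from that formula, with arbitrary time units:

```python
import math

# Toy negative-mass microlensing light curve (the Cramer et al. 1995
# signature): mu(B) = (B^2 - 2) / (B * sqrt(B^2 - 4)) for B > 2, with
# complete defocusing (no images) for B < 2.

def magnification(B: float) -> float:
    if B <= 2.0:
        return 0.0  # source light fully defocused
    return (B * B - 2.0) / (B * math.sqrt(B * B - 4.0))

B_min = 1.0  # impact parameter at closest approach, inside the dark zone
for t in range(-30, 31, 2):           # time in arbitrary crossing-time units
    B = math.sqrt(B_min**2 + (t / 10.0) ** 2)
    mu = magnification(B)
    print(f"t={t:+3d}  mu={mu:5.2f}  " + "#" * int(mu * 10))
# Output shows brightness spiking as B falls toward 2, a dark central dip
# while B < 2, and a second spike on the way out: two spikes and a dip.
```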


Enter the Space Drive

To dig into wormholes, the new Thorne book would probably be a good starter, though I base this only on reviews, as I haven’t gotten into it yet. Frontiers of Propulsion Science (2009) also offers a look into the previous scholarship on wormhole physics and if you really want to dig deep, there’s Matt Visser’s Lorentzian Wormholes: From Einstein to Hawking (American Institute of Physics, 1996). I wanted to talk wormholes with Marc Millis, who co-edited the Frontiers of Propulsion Science book with Eric Davis, but the tight schedule in Oak Ridge and Marc’s need to return to Ohio forced a delay.


In any event, Millis has been working on space drives rather than wormholes, the former being ways of moving a spacecraft without rockets or sails. Is it possible to make something move without expelling any reaction mass (rockets) or in some way beaming momentum to it (lightsails)? We don’t know, but the topic gets us into the subject of inertial frames — frames of reference defined by the fact that the law of inertia holds within them, so that objects observed from this frame will resist changes to their velocity. Juggling balls on a train moving at a constant speed (and absent visual or sound cues), you could not determine whether the train was in motion or parked. The constant-velocity train is considered an inertial frame of reference.

Within the inertial frame, in other words, Newton’s laws of motion hold. An accelerating frame of reference is considered a non-inertial frame because the law of inertia is not maintained in it. If the conductor pulls the emergency brake on the train, you are pushed forward suddenly in this decelerating frame of reference. From the standpoint of the ground (an inertial frame), you aboard the train simply continue with your forward motion when the brake is applied.

We have no good answers on what causes an inertial frame to exist, an area where unsolved physics regarding the coupling of gravitation and inertia to other fundamental forces leaves open the possibility that one could be used to manipulate the other. We're at the early stages of such investigations, asking whether an inertial frame is an intrinsic property of space itself, or whether it somehow involves, as Ernst Mach believed, a relationship with all matter in the universe. That leaves us in the domain of thought experiments, which Millis illustrated in a series of slides that I hope he will discuss further in an article here.

Fusion’s Interstellar Prospects

Rob Swinney, who is the head of Project Icarus, used his time at TVIW to look at a subject that would seem to be far less theoretical than wormholes and space drives, but which still has defeated our best efforts at making it happen. The subject is fusion and how to drive a starship with it. The Daedalus design of the 1970s was based on inertial confinement fusion, using electron beams to ignite fusion in fuel pellets of deuterium and helium-3. Icarus is the ongoing attempt to re-think that early Daedalus work in light of advances in technology since.

But like Daedalus, Icarus will need to use fusion to push the starship to interstellar speeds. Robert Freeland and Andreas Hein, also active players in Icarus, were in Oak Ridge as well, and although Andreas was involved with a different topic entirely (see yesterday's post), Robert was able to update us on the current status of the Icarus work. He illustrated one possibility using Z-pinch methods that confine and heat a plasma to fusion conditions.

Three designs are still in play at Icarus, with the Z-pinch version (Freeland coined it ‘Firefly’ because of the intense glow of waste heat that would be generated) relying on the same Z-pinch phenomenon we see in lightning. The trick with Z-pinch is to get the plasma moving fast enough to create a pinch that is free of hydrodynamic instabilities, but Icarus is tracking ongoing work at the University of Washington on the matter. As to fuel, the team has abandoned deuterium/helium-3 in favor of deuterium/deuterium fusion, a choice that must flow from the problem of obtaining the helium-3, which Daedalus assumed would be mined at Jupiter.

Freeland described the Firefly design as having an exhaust velocity of 10,000 kilometers per second, with a 25-year acceleration period to reach cruise speed. The cost: $35 billion a year spread out over 15 years. I noted in Rob Swinney's talk that the Icarus team is also designing interstellar precursor missions, with the idea of building a roadmap. All told, 35,000 hours of volunteer research are expected to go into this project (I believe Daedalus took 10,000), with the goal of not just reaching another star but decelerating at the target to allow close study.
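
Those numbers plug straight into the Tsiolkovsky rocket equation, Δv = vₑ ln(m₀/m₁). A sketch with the quoted exhaust velocity; the mass ratios are illustrative, not Icarus design figures:

```python
import math

# Tsiolkovsky rocket equation, delta_v = v_e * ln(mass_ratio), using
# Firefly's quoted 10,000 km/s exhaust velocity. Mass ratios illustrative.
v_e = 10_000.0   # exhaust velocity, km/s
c = 299_792.458  # speed of light, km/s

for mass_ratio in (5, 10, 50):
    dv = v_e * math.log(mass_ratio)
    print(f"mass ratio {mass_ratio:>2} -> {dv:6.0f} km/s ({dv / c:.1%} of c)")
# mass ratio  5 ->  16094 km/s (5.4% of c)
# mass ratio 10 ->  23026 km/s (7.7% of c)
# mass ratio 50 ->  39120 km/s (13.0% of c)
```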


Image: Artist’s conception of Icarus Pathfinder. Credit: Adrian Mann.

Let me also mention a design from the past that antedates Daedalus, which was begun in 1973. Brent Ziarnick is a major in the US Air Force who described the ARPA-funded work on nuclear pulse propulsion that grew into Orion, with work at General Atomics from 1958 to 1965. Orion was designed around the idea of setting off nuclear charges behind the spacecraft, which would be protected by an ablation shield and a shock absorber system to cushion the blasts.

We’ve discussed Orion often in these pages as a project that might have opened up the outer Solar System, and conceivably produced an interstellar prototype if Freeman Dyson’s 1968 paper on a long-haul Orion driven by fusion charges had been followed up. Ziarnick’s fascinating talk explained how the military had viewed Orion. Think of an enormous ‘battleship’ of a spacecraft that could house a nuclear deterrent in a place that Soviet weaponry couldn’t reach. At least, that was how some saw the Cold War possibilities in the early years of the 1960s.

The military was at this time looking at stretch goals that went way beyond the current state of the art in Project Mercury, and had considered systems like Dyna-Soar, an early spaceplane design. With a variety of manned space ideas in motion and nuclear thermal rocket engines under investigation, a strategic space base that would be invulnerable to a first strike won support all the way up the command chain to Thomas Power at the Strategic Air Command and Curtis LeMay, who was then Chief of Staff of the USAF. Ziarnick followed Orion’s budget fortunes as it ran into opposition from Robert McNamara and ultimately Harold Brown, who worked under McNamara as director of defense research and engineering from 1961 to 1965.

Orion would eventually be derailed by the Atmospheric Test Ban Treaty of 1963, but the idea still has its proponents as a way of pushing huge payloads to deep space. Ziarnick called Orion ‘Starfleet Deferred’ rather than ‘Starflight Denied,’ and noted the possibility of renewed testing of pulse propulsion without nuclear pulse units. The military lesson from Orion:

“The military is not against high tech and will support interstellar research if they can find a defense reason to justify it. We learn from Orion that junior officers can convince senior leaders, that operational commanders like revolutionary tech. Budget hawks distrust revolutionary tech. Interstellar development will be decided by political, international, defense and other concerns.”

Several other novel propulsion ideas, as well as a book signing event, will wrap up my coverage of the Tennessee Valley Interstellar Workshop tomorrow.
