The Emergence of Solitary Stars

by Paul Gilster on October 9, 2014

Looking at the latest work from Carnegie’s Alan Boss reminds me once again of the crucial role computers play in astrophysical calculations. We’re so used to the process that we’ve come to take it for granted, but imagine where we’d be without the ability to model complex gravitational systems. To understand planet formation, we can simulate a protoplanetary disk around a young star and let a billion years pass in front of our eyes. And as our models improve, we can set the process in motion with ever greater fidelity.

Read Caleb Scharf’s The Copernicus Complex (Farrar, Straus and Giroux, 2014) to see how much we’ve learned by ever more precise modeling. Back in the late 1980s, Jacques Laskar (Bureau des Longitudes, Paris), Gerald Sussman and Jack Wisdom (the latter two at MIT) developed mathematical approaches that could track changes to orbital motions to understand our solar system’s past. Their work and the wave of innovation that followed helped us understand exponential divergence over million-year time periods, a crucial factor, as Scharf shows, in how unpredictable planetary motions can be:

Newton’s physics and its application by scientists like Laplace had appeared to be describing a clockwork universe, a reality based on laws that could always lead you from point A to point B, through space and time. And although the concepts of chaos and nonlinearity were well-known by the time these numerical computer experiments were carried out on planetary motions, this was the first real confirmation that our solar system was neither clockwork nor predictable.

In other words, when dealing with astronomical time-frames, we begin to find outcomes that could not have been predicted as we run our simulations. We’re observing chaos at play in complex gravitational systems, where tiny interactions can ultimately change the trajectories of entire planets. Scharf’s discussion of these matters celebrates the computer’s ability to model these phenomena and observe different results, but it’s also a humbling reminder of our limitations in thinking that with enough information we can always predict the outcome.
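The effect is easy to demonstrate. The sketch below (my own illustration, not from Scharf’s book) integrates a planet orbiting a Sun-like star with a Jupiter-mass perturber, alongside a ‘shadow’ planet offset by a hundred-millionth of an AU, and watches the separation between the two trajectories grow:

```python
import numpy as np

GM = 4 * np.pi**2          # Sun's GM in AU^3/yr^2 (G = 1 in solar-mass units)
GM_JUP = GM * 9.54e-4      # a Jupiter-mass perturber

def accel(r_planet, r_jup):
    """Acceleration on the planet from the Sun (at the origin) and the perturber."""
    a = -GM * r_planet / np.linalg.norm(r_planet)**3
    d = r_planet - r_jup
    return a - GM_JUP * d / np.linalg.norm(d)**3

def accel_jup(r_jup):
    """The perturber feels only the Sun in this toy model."""
    return -GM * r_jup / np.linalg.norm(r_jup)**3

dt, n_steps = 0.002, 50_000    # 100 years of model time

# planet at 1 AU, a shadow copy offset by 1e-8 AU, perturber at 5.2 AU
r1 = np.array([1.0, 0.0]);        v1 = np.array([0.0, np.sqrt(GM)])
r2 = np.array([1.0 + 1e-8, 0.0]); v2 = v1.copy()
rj = np.array([5.2, 0.0]);        vj = np.array([0.0, np.sqrt(GM / 5.2)])

sep0 = 1e-8
for _ in range(n_steps):
    # leapfrog (kick-drift-kick) for all three bodies
    v1 += 0.5 * dt * accel(r1, rj)
    v2 += 0.5 * dt * accel(r2, rj)
    vj += 0.5 * dt * accel_jup(rj)
    r1 += dt * v1; r2 += dt * v2; rj += dt * vj
    v1 += 0.5 * dt * accel(r1, rj)
    v2 += 0.5 * dt * accel(r2, rj)
    vj += 0.5 * dt * accel_jup(rj)

growth = np.linalg.norm(r1 - r2) / sep0
print(f"separation grew by a factor of {growth:.3g} over 100 years")
```

In a toy integration this short, much of the growth is simple orbital shear, but the technique itself, tracking the divergence of neighboring trajectories, is exactly how Laskar, Sussman and Wisdom quantified chaos in the real solar system.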

Learning How Stars Form

What Alan Boss is modeling is the formation of stars, using three-dimensional models of the collapse of magnetic molecular cloud cores. His simulations follow what happens as clusters of newly formed protostars come apart, and they show that younger populations of stars and protostars have a higher frequency of multiple-star systems than older ones. In other words, many single-star systems like our own start out as multi-star systems, with stars ultimately being ejected as the system settles toward stability. You can see the modeling at work below.


Image: The distribution of density in the central plane of a three-dimensional model of a molecular cloud core from which stars are born. The model computes the cloud’s evolution over the free-fall timescale, which is how long it would take an object to collapse under its own gravity without any opposing forces interfering. The free-fall time is a common metric for measuring the timescale of astrophysical processes. In a) the free-fall time is 0.0, meaning this is the initial configuration of the cloud, and moving on the model shows the cloud core in various stages of collapse: b) a free-fall time of 1.40 or 66,080 years; c) a free-fall time of 1.51 or 71,272 years; and d) a free-fall time of 1.68 or 79,296 years. Collapse takes somewhat longer than a free-fall time in this model because of the presence of magnetic fields, which slow the collapse process, but are not strong enough to prevent the cloud from fragmenting into a multiple protostar system (d). For context, the region shown in a) and b) is about 0.21 light years (or 2.0 x 10^17 centimeters) across, while the region shown in c) and d) is about 0.02 light years (or 2.0 x 10^16 cm) across. Credit: Alan Boss.
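The free-fall timescale has a simple closed form for a uniform sphere: t_ff = sqrt(3π / (32 G ρ)). Working backward from the caption (1.40 free-fall times equals 66,080 years), one free-fall time in this model is 47,200 years, which in turn implies the kind of density typical of a dense molecular cloud core. A quick sketch, with the mean molecular weight an assumption on my part:

```python
import math

G = 6.674e-11                      # m^3 kg^-1 s^-2
YEAR = 3.156e7                     # seconds

def free_fall_time(rho):
    """Free-fall timescale t_ff = sqrt(3*pi / (32*G*rho)) for a uniform sphere (SI)."""
    return math.sqrt(3 * math.pi / (32 * G * rho))

# From the caption, one free-fall time in Boss's model is 66,080 yr / 1.40:
t_ff_years = 66080 / 1.40
print(f"model free-fall time: {t_ff_years:.0f} yr")

# Panels b-d as multiples of t_ff:
for t in (1.40, 1.51, 1.68):
    print(f"  t = {t:.2f} t_ff -> {t * t_ff_years:,.0f} yr")

# The mean density this implies (inverting the formula), expressed as molecules
# per cm^3, assuming a mean molecular weight of 2.33 (H2 plus helium):
t_ff_s = t_ff_years * YEAR
rho = 3 * math.pi / (32 * G * t_ff_s**2)       # kg/m^3
n = rho / (2.33 * 1.67e-27) / 1e6              # per cm^3
print(f"implied mean density: ~{n:.1e} H2 molecules per cm^3")
```

The implied density, a few hundred thousand molecules per cubic centimeter, is indeed what one expects for a collapsing core rather than the much thinner bulk of a molecular cloud.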

As the molecular cloud that will form a star collapses, how it fragments depends, Boss shows, on the initial strength of the magnetic field. If the magnetic field is strong enough, single protostars emerge, but below this level, the cloud begins to fragment into multiple protostars. From the paper:

The calculations produce clumps with masses in the range of ~0.01 to 0.5 M☉, clumps which will continue to accrete mass and interact gravitationally with each other. It can be expected that the multiple systems will undergo dramatic subsequent orbital evolution, through a combination of mergers and ejections following close encounters, resulting ultimately in a small cluster of stable hierarchical multiple protostars, binary systems, and single protostars. Such evolution appears to be necessary in order [to] produce the binary and multiple star statistics that hold for the solar-type stars in the solar neighborhood…
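Whether a magnetized core collapses at all is conventionally expressed through the mass-to-flux ratio λ: the core’s mass-to-flux compared against the critical value 1/(2π√G) derived by Mouschovias and Spitzer, with λ > 1 (supercritical) meaning gravity beats magnetic support. A sketch, with the core parameters below purely illustrative assumptions:

```python
import math

# cgs units
G = 6.674e-8
M_SUN = 1.989e33        # g
PC = 3.086e18           # cm

def mass_to_flux_ratio(M, R, B):
    """lambda = (M/Phi) / (M/Phi)_crit, with (M/Phi)_crit = 1/(2*pi*sqrt(G))
    (the Mouschovias & Spitzer critical value) and Phi = pi * R^2 * B."""
    phi = math.pi * R**2 * B
    crit = 1.0 / (2 * math.pi * math.sqrt(G))
    return (M / phi) / crit

# Illustrative (assumed) core: 1 solar mass, 0.05 pc radius, 30 microgauss field
lam = mass_to_flux_ratio(1.0 * M_SUN, 0.05 * PC, 30e-6)
verdict = "supercritical: collapses" if lam > 1 else "subcritical: magnetically supported"
print(f"lambda = {lam:.2f}  ({verdict})")
```

Cores observed in nearby clouds tend to be mildly supercritical, which is the regime in which Boss finds fragmentation into multiple protostars.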

Those statistics are striking. Roughly two-thirds of the stars within 81 light years of the Earth are either binary or part of multi-star systems. And because what we see today as single stars can also be the result of ejection from a multi-star system, the formation of binary and multi-star systems seems to be commonplace. I’m interested in these findings because if we are to understand our own place in the cosmos, we’re beginning to see that we have to account for why single-star systems do not seem to be the default in the Milky Way.

The paper is Boss and Keiser, “Collapse and Fragmentation of Magnetic Molecular Cloud Cores with the Enzo AMR MHD Code. II. Prolate and Oblate Cores,” in press at The Astrophysical Journal (preprint).



Interstellar Flight: Risks and Assumptions

by Paul Gilster on October 8, 2014

The interstellar mission that Dana Andrews describes in his recent paper — discussed here over the past two posts — intrigues me because I’m often asked what the first possible interstellar mission might be. Sure, we can launch a flyby Voyager-class probe to Alpha Centauri if we’re willing to tolerate seventy-five thousand years in cruise, but what cruise times would we actually find acceptable? The lifetime of a human being? Multiple generations? And if we had to launch as soon as possible, what would the mission parameters be?
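The seventy-five-thousand-year figure is easy to check: at roughly Voyager 1’s 17 km/s solar-system escape speed, the 4.37 light years to Alpha Centauri take on the order of 75,000 years:

```python
LY = 9.4607e15               # meters per light year
V_PROBE = 17_000.0           # m/s, roughly Voyager 1's escape speed from the Sun
SEC_PER_YEAR = 3.156e7

d = 4.37 * LY                # distance to Alpha Centauri
years = d / V_PROBE / SEC_PER_YEAR
print(f"cruise time at Voyager speed: {years:,.0f} years")
```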

The mission that Andrews conceives grows out of questions like these. I can say upfront that this isn’t a mission I would want to fly on. For one thing, it’s a generation ship, so entire lives will be spent in cramped quarters, and the prospect of being overtaken by a later, faster ship is always there. But that’s not the point. 18th Century voyagers with a yen for the unknown could have waited for the age of steamships, but how could they have anticipated it? In any case, waiting would have cost them the journey that was in front of them. I think there will always be pioneers in search of an experience unique to them, ready to be the first to step aboard as soon as a viable mission presents itself.

Yesterday we looked at various propulsion strategies for Andrews’ starship, including a personal favorite, the Sailbeam design of Jordin Kare, which accelerates a stream of tiny laser-driven microsails that are ionized when they reach the ship, providing thrust to a magsail. Dana Andrews knows a lot about magsails — working with Robert Zubrin in 1988, he showed that Robert Bussard’s interstellar ramjets would produce more drag than thrust, and from that insight the idea of turning a magnetic scoop into a magnetic sail began to grow. We’re seeing that it can be used both for acceleration and for deceleration upon arrival.


Image: Interstellar generation ship configured for braking. Credit: Dana Andrews.

Several decades before the Andrews/Zubrin paper, Robert Forward had been taking note of laser developments at Hughes Research Laboratories in Malibu, CA. He already knew about solar sails, which had appeared in the work of Konstantin Tsiolkovsky and Fridrikh Tsander in the 1920s and which had been the subject of a technical paper by Richard Garwin in 1958. As a science fiction writer, Forward was surely aware as well of Carl Wiley’s “Clipper Ships of Space” article, which appeared under the byline Russell Saunders in Astounding Science Fiction. Why not, Forward mused, boost a solar sail with a laser?

I can see why Andrews included Forward’s laser lightsail ideas in the current paper, but the magsail seems like a far more likely candidate for the near-term mission that he describes. Even working with a minimal Forward configuration, we still have to solve problems of deployment and infrastructure that are huge, including, in Andrews’ calculations, a beam aperture fully 20 kilometers in diameter. He goes on to describe a lightsail mission with acceleration of 0.05 gees that reaches 2 percent of c in 155 days at a distance of 267 AU. “The minimum cost system is to invest in really good stationary optics,” he adds, “thereby allowing less power and smaller sails, but then beam jitter begins to dominate.”
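Those lightsail numbers are close to what simple constant-acceleration kinematics gives. A quick check (which lands slightly above the quoted 267 AU, presumably because the paper’s acceleration profile is not perfectly constant):

```python
C = 299_792_458.0      # m/s
AU = 1.496e11          # m
G0 = 9.80665           # m/s^2

a = 0.05 * G0                  # 0.05 gees
t = 155 * 86_400               # 155 days, in seconds
v = a * t                      # final speed under constant acceleration
d_au = 0.5 * a * t**2 / AU     # distance covered in that time

print(f"final speed: {v / C:.3f} c after {d_au:.0f} AU")
```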

Summing up the various propulsion methods discussed, Andrews comments:

We quickly examined four different near-term interstellar propulsion concepts. Each has its issues… The laser-powered ion thruster needs aggressive weights for the design to close, but has no obvious showstoppers. The Neutral Particle Beam concept appears workable at planetary distances, but requires very high acceleration and power levels to maintain divergence angles of one microradian or more. Projecting a beam of neutralized particles presents the problem of re-ionizing a dispersed cloud of particles, which is a definite showstopper. The Sailbeam propulsion has potential, but needs tests of the acceleration capability and is still power hungry (~4000 TW of electrical power for the example presented here). Even at 4000 TW it needs pointing accuracy better than a nanoradian to finish the acceleration. The laser-lightsail actually came off as relatively low risk at 800 TW of electrical power, but that is very dependent on the availability of a 20 km diameter diffraction-limited steering optic, and a one-gram/m2 lightsail (both risk factor 4+).


Image: Total energy usage comparison. Credit: Dana Andrews.

What makes predictions about spaceflight so tricky is that we can’t anticipate the emergence of disruptive technologies. The risk factors that Andrews develops as he looks at the progress of interstellar flight are, by his admission, estimates and ‘guesstimates,’ which is about the best we can do, and he characterizes near-term technologies as less than risk factor 4.


Image: Relative risk between candidate interstellar technologies.

We can all find things we might take exception to here and there in this list. You can see, for example, that Andrews characterizes a breakeven fusion reactor at risk factor 4, with a 40 year development time. Fusion has wreaked havoc with our predictions since the 1950s, and I think it’s optimistic to hope for working fusion power-plants even within this timeframe, though I know fusion-minded people who think we’re much closer. Fusion for starship propulsion he ranks at a risk factor of 7, needing 100 years to develop. Notice, too, that for the purposes of this mission, freezing or suspended animation are ruled out as being at risk factor 9, which would place their development 400 years out. A disruptive advance could negate this.

I find it useful to lay out our assumptions in such direct form. The biggest question I have regards fully closed-cycle biological ECLSS (Environmental Control and Life Support Systems). At a risk factor of 2.5 and 25 years of development, we could deploy these technologies on a generation ship, but will they be tested and ready by late in this century, when the starship would presumably be launched? My own guesstimate would lean toward a higher risk factor and 50 years for development, a tight but perhaps possible fit.

Other things the ship will need: Protection against galactic cosmic radiation (GCR), which Andrews proposes may be resolved by using magnetic fields to deflect charged particles away from the crew areas. He gives the topic a fuller discussion in a 2004 paper (see citation below). Dust in the interstellar medium poses a challenge because at 2 percent of c, impacting particles become plasma and can cause erosion to the spacecraft. Andrews notes this will need to be addressed in any starship design but doesn’t elaborate.

The conclusions [Andrews adds] are that near-term interstellar colonization flights are not completely science fiction, but there has to be a powerful requirement to generate the funding necessary to work many of the problems identified. The alternative is to wait a hundred years or so for low specific power fusion, or much longer for warp drive. We’ll see.

The paper is Andrews, “Defining a Near-Term Interstellar Colony Ship,” presented at the IAC’s Toronto meeting and now being submitted to Acta Astronautica. The 2004 paper is Andrews, “Things To Do While Coasting Through Interstellar Space,” AIAA-2004-3706, 40th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, Fort Lauderdale, Florida, July 11-14, 2004.



Pondering Interstellar Propulsion Strategies

by Paul Gilster on October 7, 2014


Back in 1950, George Pal produced Destination Moon, a movie that was based (extremely loosely) on Robert Heinlein’s Rocketship Galileo. Under the direction of Irving Pichel, the film explained the basics of a journey to the Moon — using among other things an animated science lesson — to a world becoming intrigued with space travel. I’ve wondered in the past whether we might one day have an interstellar equivalent of this film, a look at ways of mounting a star mission in keeping with the laws of classical mechanics. Call it Destination Alpha Centauri or some such and let’s see what we get.

Christopher Nolan’s film Interstellar doesn’t seem to be that movie, at least based on everything I’ve seen about it so far. I did think the early trailers were interesting, evoking the human urge to overcome enormous obstacles and buzzing with a kind of Apollo-era triumphalism. The most recent trailer looks like starflight is again reduced to magic, although maybe there will be some attempt to explain how what seems to be wormhole transit works. While we wait to find out, I still wonder about what kind of mission we would mount if we really did have a world-ending scenario on our hands that demanded an interstellar trip soon.

[Addendum: I’ve been reminded that Kip Thorne has a hand in the science of this film, which does make me more interested in seeing it.]

Dana Andrews, as we saw yesterday, has been exploring a near-term interstellar colony ship that could fit into this scenario. Today I want to return to the paper he presented at the recent International Astronautical Congress. Andrews defines near-term this way:

Near-term means we can’t use warp drive, fusion engines, or multi-hundred terawatt lasers (which could destroy civilization on Earth in the wrong hands). Affordable means we’re going to do design and cost trades and select the lowest risk option with reasonable costs…. The goal was to design the system with state of the art technologies, assuming there will be twenty to thirty years of Research and Development before we start construction.

I speculated yesterday that given the need for a robust space-based infrastructure, it was hard to see the methods Andrews studies being turned to the construction of an actual starship before at least 2070, but it’s probably best to leave chronologies out of the picture since every year further out makes prediction that much more likely to be wrong.

The ‘straw man point of departure’ design in Andrews’ paper invoked beamed lasers mounted on an asteroid for stabilization and momentum control, with a 500 meter Fresnel lens directing the beam to the spacecraft’s laser reflectors, where it is focused on solar panels to power up ion thrusters using hydrogen propellant. A 32-day boost period gets the mission underway and a magsail is used to slow from 2 percent of c upon arrival. Andrews is assuming a laser output of around 100 TW to make this system work.

Enter the Sailbeam

But the point-of-departure method is only one possibility Andrews addresses. He also looks at Jordin Kare’s Sailbeam concept, in which a large laser array (250 5.8 TW lasers) accelerates, one after another, a vast number of 20 centimeter spinning microsails. The acceleration is a blinding 30 million gees over a period of 244 milliseconds, with the ‘beam’ being kept tight during the brief acceleration by using ‘fast acting off axis lasers,’ and additional lasers downrange. Only for the first 977 kilometers are the sails actually under the beam.


Image: The Sailbeam propulsion schematic. Credit: Dana Andrews.

Notice the requirement for keeping the beam tight, an issue that has bedeviled concepts from laser-pushed lightsails to particle beam acceleration. After all, beam spread means you can’t deliver the bulk of the required power to the vessel that needs its energies. Assuming a tight microsail beam, however, the huge boost delivered at the beginning is sufficient, according to Kare’s numbers, to produce serious thrust for the starship. The beam reaches the ship, which vaporizes and ionizes the incoming sails by its own laser system, creating a plasma pulse that drives a magsail. Numerous issues arise with this method, as Andrews describes:

250 micro sails per second amounts to 3.2 kg/sec, and that requires 150 to 300 MW of laser power to ionize the microsails. Assuming 50% efficient lasers we need about a gigawatt of thermal power. Assuming risk factor three, that’s about 1000 mT of powerplant mass, which is actually less than the laser-powered ion propulsion system.
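The arithmetic behind that passage checks out in round numbers: 3.2 kg/s spread over 250 sails per second is 12.8 grams per microsail, and 300 MW of laser light at 50% efficiency needs 600 MW of prime power, ‘about a gigawatt’ once margins are added:

```python
rate = 250                   # microsails launched per second
mdot = 3.2                   # kg/s of total sail mass arriving at the ship
m_sail = mdot / rate         # mass of each microsail
print(f"each microsail masses {m_sail * 1000:.1f} g")

laser_power = 300e6          # W of laser light to ionize the stream (upper figure)
efficiency = 0.5             # assumed 50% efficient lasers, per the quote
prime_power = laser_power / efficiency
print(f"prime power: {prime_power / 1e9:.1f} GW")   # 'about a gigawatt' in round numbers
```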

Could we beam the needed power to the ship? Perhaps, but there are other problems:

To maintain 0.01 gee with 250 microsails/second as the spacecraft approaches 2% c, we need to accelerate each microsail for 0.18 seconds. Since we assume each microsail launcher needs about 0.5 seconds to place, align, and spin up its microsail, we don’t need to increase the number of laser launchers to maintain the 0.01 gee. The problem is the long (730 day) acceleration time, which results in end of boost at 1238 AU. This requires very small sailbeam divergence (< Nano radian).
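Constant-acceleration kinematics again puts these figures in the right ballpark: 0.01 gee held for 730 days gives just over 2 percent of c, with boost ending around 1,300 AU (the paper’s 1,238 AU presumably reflects a not-quite-constant acceleration profile):

```python
C = 299_792_458.0      # m/s
AU = 1.496e11          # m
G0 = 9.80665           # m/s^2

a = 0.01 * G0                  # 0.01 gee
t = 730 * 86_400               # 730 days, in seconds
v = a * t                      # speed at end of boost
d_au = 0.5 * a * t**2 / AU     # distance at end of boost

print(f"end of boost: {v / C * 100:.1f}% of c at {d_au:,.0f} AU")
```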


Image: Microsail plasma interaction magsail dipole field. Credit: Dana Andrews.

With regard to the divergence problem, you’ll remember our recent discussions about neutral particle beams, which occurred in the context of Alan Mole’s paper in JBIS on a small interstellar probe. Mole, in turn, had drawn on earlier work by Andrews in 1994. In that paper, Andrews described a 2000 kilogram probe driven by a neutral particle beam sent to a magsail (the same magsail would be used for deceleration in the destination system). Writing in these pages, James Benford later found this propulsion method inapplicable for interstellar uses because of unavoidable spread of the beam and a corresponding loss of efficiency, although he considered it quite interesting as a driver for fast interplanetary travel.

In the current paper, Andrews revisits particle beam propulsion as a third alternative to the ion thrusters in his point-of-departure design. He cites Benford’s critique in the footnotes and goes on to add that the divergence of the beam will limit the useful range of operation. That implies high accelerations at the beginning of the flight, when the spacecraft is still close enough to exploit maximum power from the beam within the required divergence angle. To catch up on our Centauri Dreams discussions of these issues, start with The Probe and the Particle Beam and Sails Driven by Diverging Neutral Particle Beams, both of which launched discussions and follow-up articles that can be found in the archives.

But there is a final possibility, the laser-driven lightsail as envisioned by Robert Forward back in the 1960s and refined by him in numerous papers in the years following. Which of these best fits our need to launch a near-term mission aboard a crewed starship? More on the laser lightsail and Andrews’ thoughts on energy requirements when I wrap up the paper tomorrow.

The paper is Andrews, “Defining a Near-Term Interstellar Colony Ship,” presented at IAC 2014 (Toronto) and in preparation for submission to Acta Astronautica. The earlier Andrews paper is “Cost considerations for interstellar missions,” Acta Astronautica 34 (1994), pp. 357-365. Alan Mole’s paper is “One Kilogram Interstellar Colony Mission,” Journal of the British Interplanetary Society Vol. 66, No. 12, pp. 381-387 (available in its original issue through JBIS).



Starflight: Near-Term Prospects

by Paul Gilster on October 6, 2014

If our exoplanet hunters eventually discover an Earth-class planet in the habitable zone of its star — a world, moreover, with interesting biosignatures — interest in sending a robotic probe and perhaps a human follow-up mission would be intense. In fact, I’m always surprised to get press questions whenever an interesting exoplanet is found, asking what it would take to get there. The interest is gratifying but I always find myself having to describe just how tough a challenge a robotic interstellar mission would be, much less a crewed one.

But we should keep thinking along these lines because the odds are that exoplanetary science may well uncover a truly Earth-like world long before we are in any position to make the journey. I would expect public fascination with such a discovery to be strong. Dana Andrews (Andrews Space, now retired) has been pondering these matters and recently forwarded a paper he presented at the International Astronautical Congress meeting in Toronto in early October. It’s an intriguing survey of what could be done in the near-term.

How we define ‘near-term’ is of course the key here. Is fusion near-term or not? Let’s think about this in context of the mission Andrews proposes. He’s looking at a minimum transit speed of 2 percent of the speed of light (fifty years per light year), making for transit times in the neighborhood of hundreds of years, depending on destination. A key requirement is the ability to decelerate and rendezvous with the destination planet using a magnetic sail (magsail) that can be built using high-temperature superconductors. Andrews also assumes 20-30 years of research and development before construction actually begins.
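‘Fifty years per light year’ makes the cruise-time arithmetic trivial. For a few nearby stars, with distances from standard catalogs:

```python
speed_frac = 0.02                    # cruise speed as a fraction of c
years_per_ly = 1 / speed_frac        # 50 years per light year at 2% of c

# distances in light years to a few plausible nearby targets
targets = {"Alpha Centauri": 4.37, "Epsilon Eridani": 10.5, "Tau Ceti": 11.9}
for name, d_ly in targets.items():
    print(f"{name}: {d_ly * years_per_ly:.0f} years in cruise")
```

Hence the ‘hundreds of years’ transit times, and hence the generation ship.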

The R&D alone takes us out to, say, 2040, but there are other factors we have to look at. As Andrews also notes, we have to assume that a space-based infrastructure sufficient to begin active asteroid mining will be in place, for it would be needed to build any of the systems we can imagine to make a starship. He cites a thirty-five year timeframe for getting these needed systems operational. Some of the R&D could presumably be underway even as this infrastructure is being built, but we also have to take into account what we know about the destination. Remember, it’s the Earth-like world discovery that sets all this in motion.

The ability to detect not just biosignatures but data of the kind needed for a human mission to an Earth-class planet may take twenty years or more to develop, and I think even that is a highly conservative estimate. For naturally we’re not going to launch a mission unless we have not just a hint of a biosignature but solid data about the world to which we are committing the mission. That might mean instruments like a starshade and perhaps interferometric techniques by a flotilla of observatories to pull up information about the planet’s surface, its continents and seas, and its compatibility with Earth-like life.

I’d say that backs us off at least to the mid-2030s just for the beginning of exoplanet analysis of the destination, after which the planet is declared suitable and mission planning can begin. What technologies, then, might be available to us to begin interstellar R&D for a specific starship mission in, say, 2045, when we may conceivably have such detailed data, aiming at a 2070 departure if all goes well? Andrews doubts that high specific impulse, low power density fusion rockets will be available within the century, if then, and thinks that antimatter, if it ever becomes viable for interstellar propulsion, will follow fusion. That leaves us with a number of interesting alternatives that the paper goes on to analyze.

An Interstellar Point of Departure

Andrews develops an interesting take on using space-based lasers, working in combination with four-grid ion thrusters using hydrogen propellant and optimized for a boost period of thirty-two days. The specific impulse is 316,000 seconds. The lasers are mounted on a small asteroid (one or two kilometers) and take advantage of a 500-meter Fresnel lens that directs their beam to a laser reflector on the spacecraft. Let me take this directly out of the paper:

An actively steered 500 m diameter Fresnel lens (risk scale 3) directs the beam to the laser reflector on the Generation Ship Spacecraft, where it is focused onto hydrogen-cooled solar panels (risk factor 1), using a light-pressure supported ¼ wave Silicon Carbide reflector (risk factor 2.5). The light conversion panels operate at thousands of volts at multiple suns (risk factor 2) to allow direct-drive of four-grid Ion thrusters using hydrogen propellant and optimized for short life (30 days) and very low weight (risk factor 3). The four-grid thrusters provide a Specific Impulse (Isp) of 316,000 seconds operating at 50,000 volts using hydrogen. The triple point liquid hydrogen propellant is stored in the habitat torus during boost (the crew rides in the landing pods for the duration of the 32 day boost period), so there is no mass for propellant tanks. After acceleration the crew warms up the insulated torus, fills it with air, and moves in.
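The 316,000-second specific impulse follows directly from the grid voltage. An ion dropped through a potential V leaves with ½mv² = qV; for hydrogen (protons) at 50,000 volts that is about 3,100 km/s of exhaust velocity, or an Isp near 316,000 seconds, so the quoted numbers are self-consistent:

```python
import math

Q = 1.602e-19        # elementary charge, C
M_P = 1.673e-27      # proton mass, kg
G0 = 9.80665         # standard gravity, m/s^2

def ion_exhaust(volts, q=Q, m=M_P):
    """Exhaust velocity and Isp for an ion accelerated through a potential:
    (1/2) m v^2 = q V  ->  v = sqrt(2 q V / m)."""
    v = math.sqrt(2 * q * volts / m)
    return v, v / G0

v_ex, isp = ion_exhaust(50_000)      # hydrogen at 50,000 volts, per the paper
print(f"exhaust velocity: {v_ex / 1000:.0f} km/s, Isp = {isp:,.0f} s")
```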


Image: The ‘point of departure’ design with associated infrastructure. Credit: Dana Andrews.

The ‘risk factors’ mentioned above refer to a ranking of relative risk for the development of various technologies that Andrews introduces early in the paper. Low Earth Orbit tourism, for example, ranks as risk factor 1, with a development time of 10 years. Faster than light transport ranks as risk factor 10, with a development time (if ever) of over 1000 years. Low risk factor elements within the needed time-frame include asteroid-based mining and possible colonies, gigawatt-level beamed power, thorium fluoride nuclear space power and — this is critical for everything that follows — fully closed-cycle biological life support systems.


Image: Candidate generation ship configuration. Credit: Dana Andrews.

The habitation torus is assumed to be 312 meters in radius, rotating at two revolutions per minute to provide artificial gravity for the crew. The paper assumes a crew of 250, selected to provide differing skillsets and large enough to prevent problems of inbreeding — remember, we’re talking about a generation ship. The craft uses two thorium fluoride liquid reactors to provide power, acting as breeder reactors to reprocess fuel during the mission. Andrews comments:

The challenge to the spacecraft designer is to include everything needed within the 4000 mT allocated for end of thrust… This design closes but there is very little margin. For instance, there is only 15% replacement air and only 200 kg of survival equipment per person. All generation ships here use the same basic habitat and power systems.
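One detail worth checking is the spin gravity. Centripetal acceleration at the rim is ω²r, and at two revolutions per minute on a 312-meter radius that comes out somewhat above one g (roughly 1.7 rpm would give one g at that radius), so presumably the quoted figures carry some margin:

```python
import math

R = 312.0                            # habitat torus radius, meters
RPM = 2.0                            # spin rate, revolutions per minute
G0 = 9.80665                         # m/s^2

omega = RPM * 2 * math.pi / 60       # angular rate in rad/s
a = omega**2 * R                     # centripetal acceleration at the rim
print(f"rim acceleration: {a:.1f} m/s^2 = {a / G0:.2f} g")
```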


Image: Laser-powered ion propulsion generation ship description. Credit: Dana Andrews.

The magsail needed for deceleration takes, by Andrews’ calculations, 73 years to slow from 2 percent of c, with deployment at approximately 17,000 AU from a star like the Sun and the goal of entering orbit around the star at 3 AU, after which the magsail can be used to maneuver within the system. Electric sails seem to work for operating inside the stellar wind, but Andrews notes that their mass scales linearly with drag, whereas a magsail’s mass scales as the square root of the drag desired. The magsail is thus the lighter option for this mission, envisioned here as a 6,000 kilometer sail using high temperature superconductors, a technology the author believes will fit the proposed chronology.

The propulsion methods outlined here are what Andrews calls a “straw man point of departure (PoD) design,” with other near-term propulsion possibilities enumerated. Tomorrow I’ll take a look at these alternatives, all of them relatively low on the paper’s risk scale. The paper is Andrews, “Defining a Near-Term Interstellar Colony Ship,” presented at the IAC’s Toronto meeting and now being submitted to Acta Astronautica.



Centauri Dreams welcomes Ravi Kopparapu, a research associate in the Department of Geosciences at Pennsylvania State University. He obtained his Ph.D. in Physics from Louisiana State University, working with the LIGO (Laser Interferometer Gravitational-wave Observatory) collaboration. After a brief stint as a LIGO postdoc at Penn State, Ravi switched to the exoplanet field and started working with Prof. James Kasting. His current research work includes estimating habitable zones around different kinds of stars, calculating the occurrence of exoplanets using the data from NASA’s Kepler space telescope, and understanding the bio-signatures that can potentially be detected by future space telescope missions.

by Ravi Kopparapu


Imagine this scenario: You are planning to buy a new house in a nice neighborhood. The schools in the area are good and the neighborhood is very safe, but you also want to know which parts of it are ‘kid friendly’, so that your kids can have friends. You drive around, looking at the available houses and watching for any ‘kid signatures’. You notice that a good proportion of the homes in the neighborhood show some potential to have kids, and based on your observations you estimate the percentage of houses with kids.

A very similar process is currently being carried out in the field of ‘exoplanets’: planets orbiting other stars. The past two decades have seen a rapid increase in the discovery of exoplanets (although, if you follow the International Astronomical Union’s definition of a planet, exoplanets are not technically ‘planets’. But that discussion is for another time). Just this year, the number of confirmed exoplanets has almost doubled. The discoveries of the first decade and a half were dominated by the radial velocity (or Doppler) technique, in which an orbiting planet causes its star to wobble; the wobble shifts the star’s spectral lines back and forth, providing clues about the planet’s mass and orbital period. But the floodgates of exoplanet discovery really opened after the launch of NASA’s Kepler space telescope in March 2009.
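The size of that wobble is worth putting in numbers. For a circular, edge-on orbit with the planet much lighter than the star, the star’s reflex velocity is K = (2πG/P)^(1/3) · m_planet / m_star^(2/3). A Jupiter analog moves a Sun-like star by roughly 12 m/s; an Earth analog, by only about 9 cm/s, which is why radial velocity surveys found the giants first:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
YEAR = 3.156e7       # seconds

def rv_semi_amplitude(m_planet, period_s, m_star=M_SUN):
    """Stellar reflex velocity K (m/s) for a circular, edge-on orbit,
    assuming m_planet << m_star: K = (2*pi*G / P)**(1/3) * m_p / m_star**(2/3)."""
    return (2 * math.pi * G / period_s)**(1/3) * m_planet / m_star**(2/3)

k_jup = rv_semi_amplitude(1.898e27, 11.86 * YEAR)    # Jupiter analog
k_earth = rv_semi_amplitude(5.972e24, 1.0 * YEAR)    # Earth analog
print(f"Jupiter analog: {k_jup:.1f} m/s; Earth analog: {k_earth * 100:.0f} cm/s")
```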

Kepler finds planets using the ‘transit’ method: A planet crossing in front of a star blocks a portion of the star’s surface from the observer’s view, causing the star’s light to dim in proportion to the area of the planet’s disc. Just this year, the Kepler mission has more than doubled the number of exoplanets discovered. Although it is easiest for Kepler to detect large planets like Jupiter or Saturn close to their stars (because they block a larger portion of the star’s disc), the recent discoveries have produced a huge increase in the Earth-size planet population and only a modest increase in the population of Jupiter- and Saturn-class planets. That suggests Kepler has probably already detected most of the giant planets it can detect, while a large population of Earth-size planets is only now being discovered.
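The transit signal scales as the square of the planet-to-star radius ratio, which is why Jupiters are so much easier to spot than Earths:

```python
R_SUN, R_JUP, R_EARTH = 696_000.0, 71_492.0, 6_371.0   # radii in km

def transit_depth(r_planet, r_star=R_SUN):
    """Fractional dimming when the planet's disc covers part of the star's disc."""
    return (r_planet / r_star)**2

print(f"Jupiter transiting the Sun: {transit_depth(R_JUP) * 100:.2f}% dip")
print(f"Earth transiting the Sun:   {transit_depth(R_EARTH) * 1e6:.0f} parts per million")
```

A one-percent dip is within reach of ground-based photometry; the roughly 80 parts-per-million signature of an Earth is what Kepler was built to see.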

Now, what is of interest (to me, and hopefully to many of you) is how many of these Earth-size planets are potentially habitable. We do not know whether these planets actually have habitable conditions, but we do know how far away they are from their host stars. And if they are at the “right” distance from the star, in the so-called ‘habitable zone’ (HZ) where liquid water can be sustained on the surface of a planet with appropriate atmospheric conditions, then they are good candidates for potentially habitable worlds. But how do we estimate the habitable zone around a star? ‘Not too hot, not too cold’ is a nice guess, but a more rigorous approach is needed, because we can’t measure a distant exoplanet’s surface temperature. Furthermore, the location of the HZ varies from star to star. And we are looking for planets with an atmospheric composition similar to Earth’s, because we know that whatever Earth has works for life, and we know what kinds of life signatures to look for. This is where climate models come into the picture.

Recently, my group at Pennsylvania State University and I, with collaborators from the NASA Astrobiology Institute’s Virtual Planetary Laboratory, used a climate model to estimate HZs around various kinds of stars. We assumed an Earth-mass planet with an Earth-like composition and determined the boundaries of the HZ for different stars [1]. The results are shown in Figure 1. The horizontal axis shows the amount of ‘starlight’ a planet receives: a value of ‘1’ represents a planet receiving the same amount of light from its star as Earth does from the Sun, a value of 1.25 means the planet receives 25% more light than Earth, and 0.75 means it receives 25% less. The vertical axis shows stars of different sizes (or temperatures), with hotter stars at the top and cooler stars at the bottom. The yellow curve labeled ‘runaway greenhouse’ marks where a planet is so hot (because it is close to the star) that all the water on its surface evaporates into the atmosphere (much as happened to Venus billions of years ago). This is the ‘conservative inner edge of the HZ’. There is also an ‘optimistic inner edge of the HZ’, shown as the red curve labeled ‘Recent Venus’. This limit is based on the observation that Venus seems to have lost its water by 1 billion years (Gyr) ago, when the Sun was 8% less bright than it is today. In short, an Earth-mass planet in the blue shaded area has a good chance of having liquid water on its surface (if it has the right atmospheric conditions), and it may have some water if it lies in the red shaded region.

[1] We also changed the planet mass to see how the HZ changes. Assuming similar composition as Earth, larger planets have wider HZs than do smaller ones.
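The quantity on the horizontal axis of Figure 1 follows from the inverse-square law: the flux a planet receives, relative to Earth, depends only on the star’s luminosity and the orbital distance. The luminosity and distance values below are illustrative, not taken from the paper:

```python
def relative_flux(luminosity_solar, distance_au):
    """Starlight a planet receives, relative to Earth's insolation:
    proportional to stellar luminosity, falling off as distance squared."""
    return luminosity_solar / distance_au ** 2

print(relative_flux(1.0, 1.0))    # 1.0   -- an Earth twin
print(relative_flux(1.0, 0.72))   # ~1.93 -- Venus's distance from a Sun-like star
print(relative_flux(0.02, 0.15))  # ~0.89 -- a close-in planet around a dim M-dwarf
```

The last line shows why cool M-dwarfs sit at the bottom of Figure 1 with habitable zones huddled close in: a star with 2% of the Sun’s luminosity delivers Earth-like flux at only a fraction of an AU.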


Figure 1: Habitable Zones around various stars. The horizontal axis indicates the amount of starlight a planet receives compared to the Earth: a value of 0.75 implies the planet receives 25% less light than Earth does from the Sun. Similarly, a value of 1.25 indicates the planet receives 25% more light, and so on. The vertical axis is the star’s temperature. The blue and red shaded regions are the conservative and optimistic widths of the habitable zone, respectively. Some known exoplanets that are potential habitable worlds are also shown. Image credit: Chester Harman.

You can see from Figure 1 that many of the Kepler-discovered (and confirmed) exoplanets of Earth or near-Earth size reside in the HZ. So we can count them and obtain an answer to our question: “How common are potential habitable worlds?” But that count doesn’t tell the whole story. The Kepler telescope finds a planet when it crosses in front of its star, by observing the dip in starlight. Not every planet does this: some orbits are aligned such that the planet never crosses in front of its star from our point of view, so we may be missing some. Nor have we confirmed that all the Kepler-detected planets are indeed planets, or even Earth-size. To complicate matters, there are imposters: two stars orbiting each other can produce a signal that looks like a planet orbiting a star. So one needs to weigh all these issues carefully when calculating how common Earth-size planets are.

In the past year, there have been several estimates of the occurrence of Earth-size planets in the HZ. Prof. David Charbonneau and his graduate student Courtney Dressing at Harvard University used Kepler data to calculate that about 15% of low-mass stars in our Galaxy, the so-called M-dwarfs that are cool and red, have Earth-size planets in the HZ. This is a big number and great news! Dressing and Charbonneau did a phenomenal job of calculating it. It means nearly 1 out of 7 M-dwarfs in our Galaxy may host a potential habitable planet. But even this number seemed low to me. M-dwarfs are the most prevalent stars in our Galaxy: about 77% of its stars are M-dwarfs. Within 30 light-years of the Sun there are nearly 250 M-dwarfs (compared with only about 20 Sun-type stars). So I recalculated Dressing & Charbonneau’s estimate of the prevalence of potential habitable planets, using my newly determined HZs (Figure 1). And the number I got is a BIG increase from 15%: a conservative estimate showed that about 48% of M-dwarfs should have Earth-size planets in the HZ. That means nearly 1 out of 2 M-dwarfs (i.e., approximately half of the M-dwarfs in our Galaxy) may have Earth-size planets in the HZ!
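To get a feel for what that revision means locally, here is a back-of-envelope count using the figures quoted above (about 250 M-dwarfs within 30 light-years). This is hypothetical arithmetic for scale, not part of either published analysis:

```python
# ~250 M-dwarfs lie within 30 light-years of the Sun (figure quoted above).
m_dwarfs_within_30_ly = 250
occurrence_original = 0.15   # Dressing & Charbonneau's estimate
occurrence_revised = 0.48    # revised estimate with the updated HZ boundaries

print(round(m_dwarfs_within_30_ly * occurrence_original))  # ~38 nearby candidates
print(round(m_dwarfs_within_30_ly * occurrence_revised))   # 120 nearby candidates
```

Going from 15% to 48% turns a few dozen potentially habitable worlds in the solar neighborhood into well over a hundred.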

Did you ever get that euphoric feeling when you discover something that is really cool? Well, I was in that moment and nearly jumped out of my chair! 48%, and that is a conservative estimate! I had absolutely no prior expectations of what that number could be, except that it may be bigger than 15%. For the first time in the history of human civilization, we not only know there are Earth-size planets around stars, but also that there are a good number of them that could be habitable!

The next obvious question is: “How common are potential habitable worlds around Sun-like stars?” A recent study by Eric Petigura and Geoff Marcy of the University of California at Berkeley (with Andrew Howard at the University of Hawaii) estimated that about 22% of Sun-like stars have Earth-size planets in the HZ. That is, 1 out of 5 Sun-like stars in our Galaxy have Earth-size planets in the HZ. That is an amazing discovery!


Figure 2: Planet size versus the amount of starlight incident on a planet. The green box shows the assumed HZ width in a study by Petigura et al. (2013) to calculate how common are Earth-size planets around Sun-like stars. Image credit: Petigura et al. (2013), Proceedings of National Academy of Sciences, 110, 48.

Petigura and collaborators assumed that for Earth-size planets around Sun-like stars, the inner edge of the HZ is at 0.5 AU, the distance at which a planet receives 4 times the starlight that Earth receives from the Sun (see Figure 2). Some people, including me, think that a planet receiving 4 times the Earth flux will be too hot to have liquid water on its surface. For example, Venus, the hottest planet in our Solar System, receives only about 2 times the sunlight Earth does, and it is not a habitable place. Furthermore, looking at Figure 1, which shows HZs for different kinds of stars, even the most optimistic HZ estimate from climate models (based on the physics of atmospheres) indicates that the inner edge of the HZ cannot be much closer than about 1.75 times the Earth flux. So an inner edge at 4 times the Earth flux is certainly too close to the star!
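For a Sun-like star, any of these flux values can be converted back into an orbital distance with the inverse-square law, d = 1/√S in AU. A quick sketch (my own arithmetic, not from the papers under discussion):

```python
import math

def orbital_distance_au(flux_relative_to_earth):
    """Distance from a Sun-like star (in AU) at which a planet receives
    the given multiple of Earth's flux: d = 1 / sqrt(S)."""
    return 1.0 / math.sqrt(flux_relative_to_earth)

print(orbital_distance_au(4.0))   # 0.5 AU   -- Petigura et al.'s assumed inner edge
print(orbital_distance_au(2.0))   # ~0.71 AU -- roughly where Venus orbits (0.72 AU)
print(orbital_distance_au(1.75))  # ~0.76 AU -- optimistic inner edge from climate models
```

Laid out this way, the disagreement is plain: the assumed 0.5 AU inner edge sits well inside the orbit of Venus, a planet we already know to be uninhabitable.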

When we use the correct HZ limits from Figure 1, the Petigura et al. (2013) estimate for potential habitable worlds around Sun-like stars actually drops to about 10% (i.e., 1 out of 10 stars)! That looks like a low number, but note that Petigura et al. do not consider planets smaller than 1 Earth radius (because their analysis method, quite appropriately, is not sensitive to such small planets). Dressing & Charbonneau’s analysis of M-dwarfs, and mine, do consider planets smaller than 1 Earth radius. So for a consistent comparison between all these studies, the calculations would need to be redone on the same footing.

If not 22% (or 10%) of Sun-like stars hosting potential habitable worlds, what number can we expect? Recently, along with my colleagues Stephen Kane and Shawn Domagal-Goldman, I published a paper asking how common Venus-like planets are, and we found the answer is about 45% for Sun-like stars. Figure 3 shows this “Venus Zone”, with some candidate planets detected by the Kepler mission. Interestingly, Petigura et al. find that the planet distribution remains flat at longer orbital periods; that is, the number of planets does not fall off as the period increases. So if Venus-like planets, which lie just inside the Earth-like range, are as prevalent as 45% around Sun-like stars, does that mean Earth-like planets are also roughly 45% prevalent? At this point that is speculation based on a non-rigorous analysis.


Figure 3: Similar to Figure 1, which shows the habitable zone, this figure shows the “Venus Zone”, the region around a star in which a planet is likely to exhibit Venus-like conditions. Some of the candidate planets discovered by the Kepler mission are shown as yellow circles, with each circle’s size shown relative to the size of Venus. Image credit: Chester Harman.

Returning to the house-buying analogy we started with: now let’s say you purchased a house. You moved in with your family. You want to introduce yourself to your neighbors, so you knock on your neighbor’s door. Nobody answers. You knock on your other neighbor’s door. No response. You try every house on your street and in your neighborhood. Silence is all you hear; there is no response from anyone. There are just houses, no people you can see or talk to. This is more or less the situation humanity is pondering. We see lots of houses (planets), but haven’t seen life yet. Maybe we need to look harder. Finding life on a distant planet would be of profound importance to humanity. It could unite us to work towards a common goal, and to focus more on our strengths than our weaknesses. We have to commit ourselves to investing in the technologies and telescopes that can find inhabited worlds. We know potential habitable planets exist. We know they are quite common. We even know (or will soon know) where they are in our Sun’s neighborhood. What are we waiting for?



Titan: Polar Weather in Flux

by Paul Gilster on October 2, 2014

Curiosities like the unusual feature in Ligeia Mare we discussed yesterday emphasize how important it is to have a long-term platform from which to study a planetary surface. If we are looking at something related to seasonal change on Titan, we have to remember that each season there lasts about seven Earth years. Winter turned to spring in 2009 in the northern hemisphere and as we approach summer there, we’re seeing rapid activity. Studying these changes over time is essential if we’re to understand meteorology on the only moon in the Solar System with a dense atmosphere.

Alex Tolley mentioned in a comment to yesterday’s post that he wasn’t sure we should rule out evaporation as the explanation for what might be an emerging area of sea floor. The argument against that is that the shoreline of Ligeia Mare seems stable throughout this period, but we have a lot to learn about Ligeia Mare and the other Titan seas, and as Alex notes, it’s possible that we’re seeing erosion at work on a gentle sea-floor rise that is now being revealed. What we really need, of course, is something like AVIATR (Aerial Vehicle for In-situ and Airborne Titan Reconnaissance), a 120 kg unmanned aerial explorer that could fly at will over Titan’s landscape and give us stunning views of its mountains and lakes.

We do know that Ligeia Mare, an ethane/methane sea, is all but glass-like in its appearance, based on Cassini measurements from 2013. The spacecraft bounced radio waves off the surface, finding that the resulting echo was bright, an indication that any waves on Ligeia Mare would be smaller than one millimeter, the sensitivity of Cassini’s radar in the study. An earlier flyby studying Ontario Lacus had indicated a surface just as smooth there.

Immediately after the later Ligeia Mare news came word of another study of Titan’s weather. A paper just published in Nature describes a huge, toxic cloud that appeared over Titan’s south pole as the atmosphere cooled and autumn took hold in the region. As we edge toward winter there, frozen particles of toxic hydrogen cyanide (HCN) have been detected in this polar vortex, which is found some 300 kilometers above the surface. Lead author Remco de Kok (Leiden Observatory and SRON Netherlands Institute for Space Research) notes: “We really didn’t expect to see such a massive cloud so high in the atmosphere.”

The reason: hydrogen cyanide can only condense to form these frozen particles when atmospheric temperatures drop to minus 148 degrees Celsius. This is fully 100 degrees Celsius colder than previous models of Titan’s upper atmosphere had predicted. Cassini’s Composite Infrared Spectrometer (CIRS) confirms that the southern hemisphere has indeed been cooling rapidly as large masses of gas have been drawn south since autumn began in 2009.


Image: These two views of Saturn’s moon Titan show the southern polar vortex, a huge, swirling cloud that was first observed by NASA’s Cassini spacecraft in 2012. The view at left is a spectral map of Titan obtained with the Cassini Visual and Infrared Mapping Spectrometer (VIMS) on Nov. 29, 2012. The inset image is a natural-color close-up of the polar vortex taken by Cassini’s wide-angle camera. Three distinct components are evident in the VIMS image, represented by different colors: the surface of Titan (orange, near center), atmospheric haze along the limb (light green, at top) and the polar vortex (blue, at lower left). Credit: JPL/Remco de Kok.

This is a region seeing much less sunlight as winter approaches but the frozen HCN molecules and the rapid cooling they represent appear to have caught researchers by surprise. Earl Maize (JPL) is Cassini project manager:

“These fascinating results from a body whose seasons are measured in years rather than months provide yet another example of the longevity of the remarkable Cassini spacecraft and its instruments,” said Maize. “We look forward to further revelations as we approach summer solstice for the Saturn system in 2017.”

I wish writing about workable concepts like AVIATR didn’t seem so much like crafting a science fiction story, because this is a mission we can fly if we can get it funded, and other ideas for Titan surface exploration — from Titan Mare Explorer to various balloon designs — could provide not only priceless data but spectacular views from the distant moon. The problem isn’t a lack of ideas but the budgetary constraints that bedevil the space agencies. The Titan Lake In-situ Sampling Propelled Explorer (TALISE) was actually developed with Ligeia Mare in mind by SENER, a private engineering group, with the idea of landing a probe in the middle of the sea and cruising its coast for a six- to twelve-month period.

Over the long haul, we’ll want both landers and aerial craft, but I particularly like the option of being able to move quickly to observe features anywhere on the moon. Remember that flying on Titan is relatively simple. You’re dealing with a much reduced gravity well and a dense atmosphere. The designers think that AVIATR could stay aloft for a year, drawing power from Advanced Stirling Radioisotope Generators (ASRG), a stable platform for continuing observations. Imagine if we had it available to check out that odd object in Ligeia Mare!

For more on AVIATR, see A Closer Look at the Titan Airplane. The paper on the HCN cloud over Titan’s south pole is de Kok et al., “HCN ice in Titan’s high-altitude southern polar cloud,” Nature 514 (2 October 2014), 65-67 (abstract). JPL’s news release on de Kok’s work is also available.



A Surprise from Ligeia Mare

by Paul Gilster on October 1, 2014

Interesting doings on Titan. I would guess that the odd feature that has cropped up in Ligeia Mare, a large ethane/methane sea in Titan’s northern hemisphere, has something to do with seasonal change, and that’s one possibility this JPL news release explores. After all, summer is coming to the northern hemisphere, and studying what happens during the course of a full seasonal cycle is one of Cassini’s more intriguing duties. Have a look at the image:


Image: These three images, created from Cassini Synthetic Aperture Radar (SAR) data, show the appearance and evolution of a mysterious feature in Ligeia Mare, one of the largest hydrocarbon seas on Saturn’s moon Titan. The dark areas represent the sea, which is thought to be composed of mostly methane and ethane. Most of the bright areas represent land surface above or just beneath the water line. The mysterious bright feature appears off the coast below center in the middle and right images. Credit: NASA/JPL-Caltech/ASI/Cornell.

We’re looking at a feature that covers about 260 square kilometers within the 126,000 square kilometers of Ligeia Mare (the latter is an area a bit larger than Lake Michigan and Lake Huron combined). My first thought was that this could be explained by evaporation, but JPL points out that the shoreline of Ligeia Mare has not changed noticeably in this period, which would seem to rule that out. For now, we can consider the feature an enigma and leave it interestingly unsolved, even as we watch what happens as Titan moves into its cold summer.

Three different Cassini flybys were involved in the imagery above, demonstrating that whatever this is, it was not visible in 2007, appearing only in early July of 2013. The mission’s Visible and Infrared Mapping Spectrometer could not find the feature in late July or in September of that year, and low-resolution Synthetic Aperture Radar (SAR) images from October 2013 also fail to show it. But by August 2014, SAR imaging found the feature again, although it had changed in the eleven months since the previous observation.

From the JPL news release:

The SAR observation from Cassini’s August 21, 2014 Titan flyby shows that the feature was still visible, although its appearance changed during the 11 months since it was last observed. The feature seems to have changed in size between the images from 2013 and 2014 — doubling from about 30 square miles (about 75 square kilometers) to about 60 square miles (about 160 square kilometers). Ongoing analyses of these data may eliminate some of the explanations previously put forward, or reveal new clues as to what is happening in Titan’s seas.

Possibilities under discussion include surface waves, rising bubbles and solids either floating on the surface or suspended just below it. Whether or not the enigmatic changes are caused by the approaching summer at the north pole of Titan, we’re learning a great deal about how the seasons impact the distant moon. The image below gives some idea of the effect.


Image: This artist’s impression of Saturn’s moon Titan shows the change in observed atmospheric effects before, during and after equinox in 2009. The Titan globes also provide an impression of the detached haze layer that extends all around the moon (blue). Credit: ESA.

Notice the high-altitude red areas at the north pole in the earlier depictions, when it was summer in the southern hemisphere. These are apparent ‘hot spots’ amidst dense haze over the pole, in a period when the north pole was pointed away from the Sun. At equinox in 2009, both hemispheres are receiving equal amounts of sunlight and the red area is almost gone. As spring arrives in the north and the south plunges toward fall and winter, some of the haze over the north pole persists, but the hot spot is now to be found over the south pole.

Nick Teanby (University of Bristol) commented on the reversal in circulation of Titan’s atmosphere back in 2012, in an ESA news release following publication of a paper for which he was lead author in Nature:

“Even though the amount of sunlight reaching the south pole was decreasing, the first thing we saw there during the six months after equinox was actually an increase in temperature at altitudes of 400–500 km, as atmospheric gases that had been lofted to these heights were compressed as they subsequently sank into a newly forming southern vortex.”

Notice, too, the bluish layer of haze at higher altitude (400-500 kilometers), which can be seen in the limb of the moon throughout these images. It’s a separate layer from the now familiar orange smog that is produced by complex molecules filtering down into the lower atmosphere. We’re looking at a place that receives about 100 times less sunlight than the Earth does. Considering that a Titan year is almost 30 Earth years, the atmospheric changes we can make out in a period of months by Cassini’s close observation are startlingly swift.
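That “about 100 times less sunlight” figure follows directly from the inverse-square law. A quick sanity check, taking Saturn’s orbital distance as roughly 9.5 AU (a round value I’m assuming here, not quoted in the article):

```python
# Sanity check on "about 100 times less sunlight": Saturn and Titan orbit
# at roughly 9.5 AU, and sunlight falls off as the square of the distance.
saturn_distance_au = 9.5
dimming_factor = saturn_distance_au ** 2

print(dimming_factor)  # 90.25 -- close to the quoted factor of 100, before
                       # Titan's thick haze absorbs still more of the light
```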

Surely the huge area seen surfacing and partially disappearing again in the latest Cassini studies is related to all of this activity? But just what it is, and how that relationship would work, remain unknown. The Daily Mail quotes Emma Bunce (University of Leicester) as speculating that we might be seeing something analogous to an iceberg. An interesting thought, but all we can do now is continue to observe what happens next.

The paper on Titan’s seasonal change is Teanby et al., “Active upper-atmosphere chemistry and dynamics from polar circulation reversal on Titan,” Nature 491 (29 November 2012), 732-735 (abstract).



Myriad Worlds, Some with Clear Skies

by Paul Gilster on September 30, 2014

Like most people, I’m highly interested in the hunt for habitable worlds, planets that could truly be called Earth 2.0. But sometimes we need to step back from the ‘habitable’ preoccupation and think about the extraordinary range of worlds we’ve been finding. I’m reminded of something Caleb Scharf says in his new book The Copernicus Complex (Farrar, Straus and Giroux, 2014), in a chapter where he describes the work of Johannes Kepler and other astronomical pioneers. Kepler’s laws of planetary motion first told us that planetary orbits are ellipses rather than the perfect circles envisioned by the school of Ptolemy.

The implications are striking and lead us to expect just the kind of wild variety we find in the exoplanet hunt, where we’re uncovering everything from ‘hot Jupiters’ to ‘super-Earths’ and a wide variety of Neptune-like worlds. Says Scharf:

If planets follow elliptical paths as a general rule, and those paths need not be all within a single plane around a centrally massive star, the possibility exists for an extraordinary range of planetary motions and arrangements that nonetheless all obey Kepler’s rules (and what would soon be Newton’s physics). I doubt anyone suspected it at the time, but the door had been opened to a universe of far greater abundance and diversity than anything yet imagined, even by the atomists and pluralists of the past.

That’s a variety that carries its own awe — Scharf also talks about Galileo’s telescope and his discovery that the Milky Way, a seemingly smooth cloud of light, was in fact made of stars, something that would have surely set him back on his heels. We’re so lucky to live in a time when exoplanet discoveries are coming at such a rapid pace that we can share in the same kind of wonder.

Which brings me to the planet HAT-P-11b, the subject of new work out of the University of Maryland. Jonathan Fraine and team have been using transmission spectroscopy to study the atmosphere of this world. Here a planet is studied as it transits its parent star. The light of the star filters through the planetary atmosphere to provide us with the signatures of various molecules. What emerges about HAT-P-11b is the discovery of water vapor on a planet about the size of Neptune, the smallest world yet on which we’ve found water vapor.


Image: How transmission spectroscopy (also known as absorption spectroscopy) works. As the planet passes in front of its star, starlight filters through the rim of the planet’s atmosphere and into the telescope. If molecules like water vapor are present, they absorb some of the starlight, leaving distinct signatures in the light that reaches our telescopes. Using this technique, astronomers discovered clear skies and steamy water vapor on the planet. Credit: NASA/JPL-Caltech.

As described in Nature, HAT-P-11b is a Neptune-class planet in a five-day orbit around a star some 120 light years away in the constellation Cygnus. It is four times the size of Earth and 26 times as massive. Exo-Neptunes are another marker of how frequently other solar systems differ from our own. We’re used to our ice giants — Uranus and Neptune — being in just the kind of orbit we would expect them to have, far from the Sun and well beyond the ‘snowline,’ where ices can readily coalesce.
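Those two numbers already hint that this is no rocky world. A rough bulk-density estimate from the quoted size and mass (my own arithmetic, using Earth’s well-known mean density):

```python
# Rough bulk density of HAT-P-11b from the figures quoted above:
# 4 Earth radii and 26 Earth masses. Density scales as mass / radius**3.
earth_density_g_cm3 = 5.51             # Earth's mean density

relative_density = 26 / 4 ** 3         # ~0.41 times Earth's density
hatp11b_density = relative_density * earth_density_g_cm3

print(round(hatp11b_density, 1))       # ~2.2 g/cm^3: far too light to be rocky
```

A density near 2 g/cm³ points to a substantial envelope of gas and volatiles, consistent with the Neptune-class label.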

But now we have Neptunes in close orbits to account for, and this work on HAT-P-11b is helping us dig into their characteristics. We’ve already detected water vapor in the atmospheres of Jupiter-class planets close to their stars, their size making them natural targets for study, whereas the smaller exo-Neptunes have proven more difficult to probe. Thus far such planets have yielded evidence of nothing more than thick layers of clouds and haze. Four Neptune-class worlds have been studied as they transited their stars, all of them yielding no absorption features, probably because of clouds.

But HAT-P-11b evidently lacks the upper atmosphere clouds that might have concealed information about the molecular makeup of its lower atmosphere. Hence the discovery of water vapor, which left a strong signature. Nikku Madhusudhan (University of Cambridge), a member of the study team, comments on the work, which was accomplished with the help of the Hubble telescope’s Wide Field Camera 3, along with Kepler and Spitzer data:

“We set out to look at the atmosphere of HAT-P-11b without knowing if its weather would be cloudy or not. By using transmission spectroscopy, we could use Hubble to detect water vapour in the planet. This told us that the planet didn’t have thick clouds blocking the view and is a very hopeful sign that we can find and analyse more cloudless, smaller, planets in the future. It is groundbreaking!”


Image: A plot of the transmission spectrum for exoplanet HAT-P-11b, with data from NASA’s Kepler, Hubble and Spitzer observatories combined. The results show a robust detection of water absorption in the Hubble data. Transmission spectra of selected atmospheric models are plotted for comparison. Credit: NASA/ESA/STScI.

HAT-P-11b is not only the smallest planet on which water vapor has been detected, but it is also the smallest planet for which spectroscopy has been used to detect molecules of any kind. The work will be extended to other exo-Neptunes, and we can assume that methods like these will eventually be used to study ‘super-Earths,’ planets up to ten times the mass of Earth that have proven to be relatively common. The James Webb Space Telescope, scheduled for a 2018 launch, should be able to make similar detections for these interesting worlds.

And yes, we’re pushing toward rocky planets like the Earth as part of our quest for life elsewhere. But for now, let’s revel in the sheer diversity of the worlds we are finding.

The paper is Fraine et al., “Water vapour absorption in the clear atmosphere of a Neptune-sized exoplanet,” Nature 513 (25 September 2014), 526-529 (abstract). An ESA news release is also available.



Primordial Origins of (Some of) Earth’s Water

by Paul Gilster on September 29, 2014

With one interstellar conference in the books for 2014, I’ll be headed next for the Tennessee Valley Interstellar Workshop, whose upcoming gathering will be held in Oak Ridge this November. Last week’s coverage of the 100 Year Starship Symposium in Houston has allowed several interesting stories to back up in the queue, and I’ll spend the next few days going over some of the latest findings, starting with the discovery that a large fraction of the water in Earth’s oceans may be substantially older than we think. The results make a strong case for water as a common ingredient in planet formation no matter where the planet forms or around what kind of star.

Ilsedore Cleeves (University of Michigan) is lead author on the new paper in Science that argues the case. What Cleeves and colleagues have found is that up to half of the water in our Solar System formed before the Sun itself emerged from the primordial gas and dust cloud that gave it birth. That encompasses more than the Earth’s oceans, of course, because we know that water is found in places as widely disparate as Mercury, the Moon, on comets, and on the moons of gas giants. The question the new work answers is the sequence in which that water forms. Cleeves explains the issue:

“There has been a long-standing question as to whether any of these ancient ices, including water, are incorporated into young planetary systems, or if all the pre-planetary building blocks are reprocessed and/or locally synthesized near the star.

“These two scenarios have very different consequences for the composition of planets. In the latter case, the chemical make-up of the planets, including water, would depend upon what type of star a planet ends up next to. In contrast, the former case implies that all planetary systems would form from similar starting materials, including abundant interstellar water.”


Image: How common are views like this in the galaxy? New work out of the University of Michigan indicates that water in the interstellar medium survives solar system formation and can contribute to the planets that emerge, an opportunity available to all nascent stellar systems.

The researchers studied the question using deuterium, or heavy hydrogen, an isotope of hydrogen with a proton and a neutron in its nucleus (ordinary hydrogen has no neutron). On Earth, deuterium’s abundance is one atom for every 6,420 atoms of hydrogen. This ratio, found not only on Earth but also in comets, is higher than the ratio in the Sun. Significantly, deuterium-enriched water forms only at very cold temperatures, around 10 degrees above absolute zero. Ted Bergin, a University of Michigan astronomer, explains the implication:

“Chemistry tells us that Earth received a contribution of water from some source that was very cold — only tens of degrees above absolute zero, while the Sun being substantially hotter has erased this deuterium, or heavy water, fingerprint.”
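A quick sketch of the comparison Bergin is making. The solar deuterium-to-hydrogen value below is a commonly cited protosolar figure of roughly 2 × 10⁻⁵, an assumption on my part rather than a number from the article:

```python
# Deuterium-to-hydrogen ratios: the article quotes one deuterium atom per
# ~6,420 hydrogen atoms for Earth's oceans. The solar value (~2e-5) is a
# commonly cited protosolar figure -- an assumption, not from the article.
earth_d_to_h = 1 / 6420
solar_d_to_h = 2e-5

print(f"{earth_d_to_h:.2e}")                  # ~1.56e-04
print(round(earth_d_to_h / solar_d_to_h, 1))  # Earth's water is ~8x enriched
```

An enrichment of several times the Sun’s ratio is the “fingerprint” the study traces back to cold, pre-solar ices.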

Simulating chemical evolution in a planetary disk over a million year period, Cleeves found that the processes at work in the disk were insufficient to make the heavy water that exists throughout the Solar System. Thus the first scenario above is likely — the planetary disk didn’t make all the system’s water but acquired much of it from elsewhere. Deuterium enrichment happens not only under cold temperatures but in the presence of oxygen and ionizing radiation. All of these are found in the interstellar medium, where the radiation is provided by cosmic rays, and deuterium-enriched water has indeed been observed in the ISM.

Like the interstellar medium, the young planetary disk would also offer up radiation, even if the cosmic ray count would drop because of the growing star’s magnetic field and stellar wind. But the process cannot yield heavy water in the amounts needed. From the paper:

With our updated disk ionization model, we can now exclude chemical processes within the disk as an enrichment source term and conclude that the solar nebula accreted and retained some amount of pristine interstellar ices. One potential explanation is that during the formation of the disk, there was an early high temperature episode followed by continued infall from deuterium-enriched interstellar ices.

The numbers that flow out of this work show that 30 to 50% of the water in Earth’s oceans, and between 60 and 100% of the water in comets, predates the Sun, arriving as interstellar ices that survived the Solar System’s formation and were incorporated into the various planets and other bodies. The implication is that a fundamental prerequisite for life as we know it is available to all planetary systems in formation. Water is ‘inherited’ from the immediate environment of an emerging planetary system and is therefore widespread elsewhere in the universe, locked up in the ices, gas and dust circling the infant star.

The paper is Cleeves et al., “The ancient heritage of water ice in the solar system,” Science Vol. 345, No. 6204 (26 September 2014), 1590-1593 (abstract / preprint). See also Daniel Clery’s summation in the same issue of Science. Clery quotes Karen Willacy (JPL), who places the new study in context:

“This is a very interesting result. We’ve been debating this for years, whether or not the ices have an interstellar heritage.” She notes that other groups have tried to model the collapse of clouds in the ISM into planetary systems to see if ice would survive, but “with various results, that don’t always agree,” Willacy says. “This is a much more simple approach, just using the chemistry which is well understood.”

This news release from the University of Exeter is also helpful.



I don’t envy the track chairs at any conference, particularly conferences that are all about getting large numbers of scientists into the right place at the right time. Herding cats? But the track model makes inherent sense when you’re dealing with widely disparate disciplines. Earlier in the week I mentioned how widely the tracks at the 100 Year Starship Symposium in Houston ranged, and I think that track chairs within each discipline — already connected to many of the speakers — are the best way to move the discussion forward after each paper.


Still, what a job. My friend Eric Davis, shown at right, somehow stays relaxed each year as he handles the Propulsion & Energy track at this conference, though how he manages it escapes me, given problems like three already-accepted presentations being withdrawn as the deadline approached, and one outright no-show at the conference itself. Unfortunately, there were no-shows in other tracks as well, though the wild weather the night before the first day’s meetings may have had something to do with it.

Processes will need to be put in place before future symposia to keep this kind of thing from happening. Fortunately, Eric is quick on his feet and managed to keep Propulsion & Energy on course, and I assume other track chairs had their own workarounds. A high point of the conference was the chance to have dinner and a good bottle of Argentinian Malbec with Eric and Jeff Lee (Baylor University), who joined my son Miles and myself in the hotel restaurant.

The Antimatter Conundrum

I found two papers on antimatter within Eric’s track particularly interesting given the challenge of producing antimatter in sufficient quantity to make it viable in a future propulsion system. We’d love to master antimatter because of the numbers. A fusion reaction releases maybe one percent of the total energy locked up inside matter. But annihilate a kilogram of antimatter and you produce ten billion times the energy of a kilogram of TNT. In nuclear energy terms, antimatter yields a thousand times more energy than fission and a hundred times more than fusion, a compelling thought for interstellar mission needs.
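The ‘ten billion times TNT’ figure is easy to sanity-check with E = mc². A minimal sketch (the TNT energy density and the decision to count both the antimatter and its matter partner are standard textbook values, not from the talks):

```python
# Back-of-envelope check of the antimatter energy figures quoted above.
C = 2.998e8             # speed of light, m/s
E_TNT_PER_KG = 4.184e6  # energy released by 1 kg of TNT, joules

# Annihilating 1 kg of antimatter with 1 kg of ordinary matter converts
# both masses entirely to energy (E = mc^2 for each kilogram).
e_annihilation = 2 * 1.0 * C**2  # joules, per kg of antimatter

ratio = e_annihilation / E_TNT_PER_KG
print(f"{ratio:.2e}")  # on the order of 10^10 -- tens of billions
```

The ratio lands around 4 × 10¹⁰, agreeing with the ‘ten billion times’ order of magnitude (the exact factor depends on whether you count the matter partner’s mass).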

Sumontro Lal Sinha described the requirements for a small, modular antimatter harvesting satellite that could be launched into the Van Allen radiation belt about 15,000 kilometers up. I was inevitably reminded of James Bickford’s ideas on creating an antimatter trap in an equatorial orbit around the Earth that could harvest naturally occurring antiparticles — Bickford has always maintained that space harvesting of antimatter using his ‘magnetic scoop’ is five orders of magnitude more cost effective than producing antimatter on Earth. In any case, antimatter resources here and elsewhere in the Solar System offer useful options.

Remember that the upper atmospheres of the planets are under bombardment from high-energy galactic cosmic rays (GCR), which results in ‘pair production’ as the kinetic energy of the GCR is converted into mass after collision with another particle. Out of this we get an elementary particle and its antiparticle. Planets with strong magnetic fields become antimatter sources because particles interact with both the magnetic field and the atmosphere. Sinha’s harvester is an attempt to collect these pair-production products with hardware he describes as lightweight and modular. I haven’t seen a paper on this one, so I can’t go into useful detail. I’ll hope to do that later.

Storing macroscopic amounts of antimatter for propulsion purposes is the other side of the antimatter conundrum, an issue tackled by Marc Weber (Washington State), who described long antimatter traps in the form of stacks of wafers that essentially form an array of tubes. Storage is an extreme issue because like charges repel, so that large numbers of positrons, for example, generate repulsive forces that magnetic bottles cannot fully contain. Weber’s long traps are in proof-of-principle testing as he tries to push storage times up.


Image: One of Marc Weber’s slides, illustrating principles behind a new kind of magnetic storage trap for antimatter.

Thermonuclear Propulsion and the Gravitational Lens

It’s always a pleasure to see old friends at these events, and I was happy to have the chance to share breakfast with Claudio Maccone, whose long-standing quest to see the FOCAL mission built and flown has come to define his career. But in addition to speaking about the gravitational lens at 550 AU and beyond, Claudio was in Houston to discuss the Karhunen-Loève Transform (KLT), a mathematical technique developed in the 1940s that can improve sensitivity to artificial signals by a large factor, another idea he has long championed. The idea here is that the KLT has SETI applications, helping researchers in the challenging task of sifting through signals that may be spread through a wide range of frequencies.

Consider our own civilization’s use of code division multiplexing. Mason Peck was also talking about this at the conference — the reason you can use your cellphone in a conversation is that multiple access methods (code division multiple access, or CDMA) allow several transmitters to send information simultaneously over the same communications channel. Spread-spectrum methods are at work — the signal is sent over not one but a range of frequencies — and the result is a combination of many bits that acts like a code. If we use these methods, perhaps an extraterrestrial civilization does as well, and perhaps the best way to unlock such a signal is the KLT.
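The core trick behind CDMA can be shown in a few lines: give each transmitter its own orthogonal chip code, let both signals pile up on the channel, and recover each stream by correlating against its code. This toy sketch (my own illustration, not anything presented at the conference) uses two rows of a Walsh-Hadamard matrix as the codes:

```python
# Toy direct-sequence CDMA: two transmitters share the channel at once,
# each spreading its bits with its own orthogonal chip code.

# Orthogonal spreading codes (rows of a 4x4 Walsh-Hadamard matrix).
CODE_A = [1,  1, 1,  1]
CODE_B = [1, -1, 1, -1]

def spread(bits, code):
    """Map each bit to +1/-1 and multiply it across the chip code."""
    return [b * c for b in ((1 if x else -1) for x in bits) for c in code]

def despread(signal, code):
    """Correlate each code-length chunk with the code to recover bits."""
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        bits.append(corr > 0)
    return bits

bits_a = [True, False, True]
bits_b = [False, False, True]

# Both transmissions occupy the channel simultaneously: just add them.
channel = [a + b for a, b in zip(spread(bits_a, CODE_A),
                                 spread(bits_b, CODE_B))]

print(despread(channel, CODE_A))  # [True, False, True]
print(despread(channel, CODE_B))  # [False, False, True]
```

Because the codes are orthogonal, each correlation cancels the other transmitter’s contribution exactly, which is why the summed channel looks like noise until you know the code — the property that makes the KLT attractive for SETI.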

I missed Claudio’s session on the KLT but was able to be there for his talk on using the gravitational lens as a communications tool. Beyond the propulsion question, one of the biggest problems with putting a probe around another star is data return. How do we get a workable signal back to Earth? Fortunately, the gravitational lens can offer huge gains by employing the focusing power of the Sun on electromagnetic radiation from an object on the other side of it. Using conventional radio communications would require huge antennae and substantial (and massive) resources aboard the probe itself. These would not be necessary if we fly a precursor mission out to the distances at which the Sun’s gravitational lens becomes usable.

Thus we send a relay spacecraft not toward Alpha Centauri but in exactly the opposite direction. Ordinary radio links can be easily maintained. If we tried conventional methods using a typical Deep Space Network antenna and a 12-meter antenna aboard the spacecraft (assuming a link frequency in the Ka band, or 32 GHz, a bit rate of 32 kbps, and 40 watts of transmitting power), we still get a 50 percent probability of errors. A relay probe at the gravitational lens, however, shows no bit error rate increase out to fully nine light years.
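A rough Friis-equation sketch shows why the direct link fails so badly. The transmitter power, frequency, dish size, and bit rate below come from the talk; the 70-meter ground dish, aperture efficiency, and system noise temperature are my own assumed values:

```python
import math

# Hedged link-budget sketch for a direct probe-to-Earth radio link
# from Alpha Centauri. Assumptions (mine, not from the talk): 70 m
# DSN-class ground dish, 0.6 aperture efficiency, 20 K system noise.
C = 2.998e8            # speed of light, m/s
K_BOLTZMANN = 1.381e-23

f = 32e9               # Ka band link frequency, Hz
wavelength = C / f     # ~9.4 mm
d = 4.37 * 9.461e15    # Alpha Centauri distance, metres
p_tx = 40.0            # transmitter power, W
bitrate = 32e3         # bits per second
t_sys = 20.0           # assumed system noise temperature, K
efficiency = 0.6       # assumed aperture efficiency

def dish_gain(diameter):
    """Gain of a parabolic dish of the given diameter in metres."""
    return efficiency * (math.pi * diameter / wavelength) ** 2

# Friis transmission: received power falls as (lambda / 4 pi d)^2.
p_rx = (p_tx * dish_gain(12.0) * dish_gain(70.0)
        * (wavelength / (4 * math.pi * d)) ** 2)

# Energy per bit over noise density; a usable link needs Eb/N0 >> 1.
eb_n0 = p_rx / (bitrate * K_BOLTZMANN * t_sys)
print(f"Eb/N0 ~ {eb_n0:.1e}")  # vanishingly small
```

With these assumptions Eb/N0 comes out around 10⁻⁶ — the received bits are essentially random, consistent with the 50 percent error probability quoted above.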


I’m moving quickly here and I can’t go through each presentation, but I do want to mention as well Friedwardt Winterberg’s talk on thermonuclear propulsion options. Dr. Winterberg has a long history of research in nuclear rocketry, dating back to the days of Ted Taylor, Freeman Dyson, and the era of Project Orion (which he could not join because he was not yet a US citizen). The Atmospheric Test Ban Treaty of 1963 was one of the factors that put Orion to rest, but Fred has been championing nuclear micro-bombs with non-fission triggers, an idea he first broached at a fusion workshop all the way back in 1956. His most recent paper reminds us of von Braun’s ideas about assembling a huge fleet in orbit for the exploration of Mars:

A thermonuclear space lift can follow the same line as it was suggested for Orion-type operation space lift, but without the radioactive fallout in the earth atmosphere. With a hydrogen plasma jet velocity of 30 km/s, it is possible to reach the orbital speed of 8 km/s in just one fusion rocket stage, instead of several hundred multi-stage chemical rockets, to assemble in space one Mars rocket, for example. … The launching of very large payloads in one piece into a low earth orbit has the distinct advantage that a large part of the work can be done on the earth, rather than in space.

Exactly how to ignite a thermonuclear micro-explosion by a convergent shockwave produced without a fission trigger is the subject of the new paper, and I’m looking for someone more conversant with fusion than I am to give it a critical reading to be reported here. The basic Orion concept survives in Winterberg’s work, with fission bombs replaced by deuterium-tritium fusion micro-bombs set off behind a large magnetic mirror rather than Orion’s pusher plate.

All Too Little Time

With so many papers running in different tracks at conflicting times, and advisory board meetings to attend on top of that, I missed out on a number of good things. I wish I could have attended Kathleen Toerpe’s entire Interstellar Education track, and there were sessions in Becoming an Interstellar Civilization and Life Sciences in Interstellar that looked very promising. I hope in the future the conference organizers will set up video recording capabilities in each track, so that attendees and others can catch up on what they missed.

Several upcoming articles will deal with subjects touched on at 100YSS. Al Jackson is writing up his SETI ideas using extreme astronomical objects, and I’ll be talking about Ken Wisian’s paper on military planning for interstellar flight — Ken and his lovely wife joined Heath Rezabek, Al Jackson, Miles and myself for dinner. The conversation was far-ranging but unfortunately the Friday night restaurant scene was noisy enough that I missed some of it. Miles and I stopped down the street the next night at the Guadalajara, a good Mexican place with a quiet upstairs bar. Great margaritas, and a fine way to close out the conference. Expect an upcoming article from Miles, shown below, on his recent interstellar presentation in a seriously unconventional venue. I’m giving nothing away, but I think you’ll find it an encouraging story.