A Mass-Radius Relationship for ‘Sub-Neptunes’

by Paul Gilster on May 22, 2015

The cascading numbers of exoplanet discoveries raise questions about how to interpret our data. In particular, what do we do about all those transit finds where we can work out a planet’s radius and need to determine its mass? Andrew LePage returns to Centauri Dreams with a look at a new attempt to derive the relationship between mass and radius. Getting this right will be useful as we analyze statistical data to understand how planets form and evolve. LePage is the author of an excellent blog on exoplanetary science called Drew ex Machina, and a senior project scientist at Visidyne, Inc. specializing in the processing and analysis of remote sensing data.

By Andrew LePage


As anyone with even a passing interest in planetary studies can tell you, we are witnessing an age of planetary discovery unrivaled in the long history of astronomy. Over the last two decades, thousands of extrasolar planets have been discovered using a variety of techniques. The most successful of these to date in terms of sheer number of finds is the transit method – the use of precision photometric measurements to spot the tiny decrease in a star’s brightness as an orbiting planet passes directly between us and the star. The change in the star’s brightness during the transit allows astronomers to estimate the size of the planet relative to the star while the time between successive transits allows the orbital period of the planet to be determined. Combined with information about the properties of the star being observed, other characteristics can be calculated such as the actual size of the planet and its orbit. The most successful campaign to date to search for planets using the transit method has been performed using NASA’s Kepler spacecraft, launched in 2009.
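As a rough illustration of the geometry described above, the fractional dip in brightness scales with the square of the planet-to-star radius ratio. A minimal sketch (the ~0.0084% depth for an Earth-Sun transit is a standard textbook figure, not taken from this article):

```python
import math

R_SUN_IN_EARTH_RADII = 109.2  # approximate ratio of solar radius to Earth radius

def planet_radius_from_transit(depth, stellar_radius_solar):
    """Estimate a planet's radius (in Earth radii) from the fractional
    transit depth and the star's radius (in solar radii).

    Assumes an idealized transit: depth ~ (Rp / R*)^2, ignoring limb
    darkening, grazing geometry and photometric noise.
    """
    radius_ratio = math.sqrt(depth)
    return radius_ratio * stellar_radius_solar * R_SUN_IN_EARTH_RADII

# An Earth-sized planet crossing a Sun-like star dims it by only ~0.0084%
print(planet_radius_from_transit(8.4e-5, 1.0))  # ~1 Earth radius
```

The orbital period then follows directly from the interval between successive transits, just as the text describes.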

One of the other important bulk properties of a planet that is of interest to scientists is its mass. Unfortunately, the transit method is typically unable to supply us with this information except in special circumstances where planets in a system strongly interact with each other to produce measurable variations in the timing or duration of their transits. The transit timing variation (TTV) or transit duration variation (TDV) methods can be used to estimate the masses of the planets of a system, including non-transiting planets that might be present. Based on an analysis of Kepler results to date, however, these methods can be used in only about 6% of planetary systems that produce transits.

A more widely applicable method to determine the mass of an extrasolar planet is through the precision measurement of a star’s radial velocity to detect the reflex motion caused by the orbiting planet. Combined with information from transit observations as well as the star’s properties, it is possible to calculate the actual mass of a planet and further refine its orbital properties. Unfortunately, NASA’s Kepler mission has discovered thousands of planets, while making precision radial velocity measurements takes a great deal of time on a limited number of busy telescopes that are equipped to make the required observations. In addition, many of the stars observed by Kepler are too dim or their planets too small for the current generation of instruments to detect radial velocity variations above the noise. This is especially a problem for sub-Neptune size planets, including Earth-size terrestrial planets. Taken as a whole, only a small minority of all of Kepler’s finds currently have had their masses measured.
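The reflex motion described here is tiny for small planets, which is why current instruments struggle. A back-of-the-envelope sketch using the standard two-body semi-amplitude formula (a textbook result, not from this article) shows the scale of the problem:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_EARTH = 5.972e24 # kg
YEAR = 3.156e7     # seconds

def rv_semi_amplitude(m_planet, m_star, period, inclination=math.pi / 2, ecc=0.0):
    """Radial-velocity semi-amplitude K (m/s) induced by a planet.

    Standard two-body result: K = (2*pi*G/P)^(1/3) * m_p * sin(i)
    / ((M* + m_p)^(2/3) * sqrt(1 - e^2)).  SI units throughout.
    """
    return ((2 * math.pi * G / period) ** (1 / 3)
            * m_planet * math.sin(inclination)
            / ((m_star + m_planet) ** (2 / 3) * math.sqrt(1 - ecc ** 2)))

# An Earth analog tugs a Sun-like star by only ~9 cm/s, far below
# the noise floor of most current spectrographs.
print(round(rv_semi_amplitude(M_EARTH, M_SUN, YEAR), 3))
```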

Puzzling Out a Planetary Mass

While astronomers continue to struggle to measure the masses of thousands of individual extrasolar planets found by Kepler, there have been efforts to derive a mass-radius relationship so that the mass of a planet with a known radius can at least be estimated. In addition to being useful for evaluating the level of accuracy required for detection using radial velocity measurements or other methods, such mass estimates are also valuable for scientists wishing to use Kepler radius and orbit data in statistical studies of planetary properties, dynamics, formation and evolution. Over the past few years, various investigators have attempted to derive a planetary mass-radius relationship as information on the mass and radius of known planets has expanded. These relationships have taken a mathematical form known as a power law, M = CR^γ, where M is the mass of the planet (in terms of Earth masses or ME), R is its radius (in terms of Earth radii or RE) and C and γ are constants determined by analysis.

The latest work to derive a mass-radius relationship for sub-Neptune size planets (i.e. planets whose radii are less than 4RE) is a paper by Angie Wolfgang (University of California – Santa Cruz), Leslie A. Rogers (California Institute of Technology), and Eric B. Ford (Pennsylvania State University), which they recently submitted for publication in The Astrophysical Journal. These sub-Neptune size worlds are of particular interest to the scientific community since they span the size range between the Earth and Neptune where no Solar System analogs exist to provide guidance for deriving a mass-radius relationship.

Earlier work over the last few years on the planetary mass-radius relationship relied on least squares regression analysis of a set of planetary radius and mass measurements – a fairly straightforward mathematical method used to determine the constants of an equation that provides the best fit to a set of data points. Unfortunately, this classic method has some drawbacks. It does not properly take into account the uncertainty in the independent variable (i.e. the planet radius, in this case) or instances where the planet has not been detected using precision radial velocity measurements and only an upper limit of the mass can be derived. Another issue is that the least squares regression method assumes a deterministic relationship where a particular planetary radius value corresponds to a unique mass value. In reality, planets with a given radius can have a range of different mass values, in part reflecting the variation in planetary composition running from massive rocky planets with large iron-nickel cores to less massive, volatile-rich planets with deep atmospheres. These variations are expected to be especially important in sub-Neptune-class worlds.

A Bayesian Approach to the Mass/Radius Problem

Instead of using the least squares regression method, Wolfgang, Rogers and Ford evaluated their data using a hierarchical Bayesian technique which allowed them not only to derive the parameters for a best fit of the available data, but also to quantify the uncertainty in those parameters as well as the distribution of actual planetary mass values. Using their approach, they have derived a probabilistic mass-radius relationship where the most likely mass and the distribution of those values are determined. The team considered a total of 90 extrasolar planets with known radii less than 4 RE whose masses have been measured or constrained using radial velocity or TTV methods. Neither unconfirmed planets nor circumbinary planets were considered, to keep the sample as homogeneous as possible. The team also truncated the mass distribution to physically plausible values that were no less than zero (since it is physically impossible to have a negative mass) and no greater than the mass of a planet composed of iron (since it is unlikely for a planet to have a composition dominated by any element denser than iron).


Image: This plot shows the available mass and radius data (and associated error bars) used in the latest analysis of the mass-radius relationship for sub-Neptune size planets. Various fits to these data are shown including an earlier analysis by Lauren Weiss and Geoffrey Marcy (black dashed line) as well as fits for radii <8 RE, <4 RE and <1.6 RE (solid colored lines). (credit: Wolfgang et al.)

The detailed analysis of the dataset by Wolfgang, Rogers and Ford found that the subset of extrasolar planets whose masses were measured using the TTV method has a definite bias towards lower density planets. This bias had been suspected, since a low density planet will have a larger radius than a denser planet with the same mass. And all else being equal, a larger planet is more likely to be detected using the transit method than a smaller planet. When only considering the sample of extrasolar planets with masses determined using precision radial velocity measurements, the team found that the best fit for the data set was a power law with C = 2.7 and γ = 1.3 (i.e. M = 2.7R^1.3). Based on their statistical analysis, Wolfgang, Rogers and Ford found that the data were consistent with a Gaussian or bell-curve distribution of actual planet masses with a sigma of 1.9 ME at any given radius value. Just as had been suspected, planets with radii less than 4 RE display a range of compositions that is reflected as a fairly broad distribution of actual mass values.
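To make the probabilistic relationship concrete, here is a minimal sketch of how a plausible mass might be drawn for a planet of known radius using the fitted values above. The simple rejection loop and the caller-supplied iron-planet mass limit are illustrative stand-ins for the paper's actual truncated-Gaussian machinery:

```python
import random

def sample_mass(radius, m_max, c=2.7, gamma=1.3, sigma=1.9, rng=random):
    """Draw a plausible mass (Earth masses) for a planet of the given
    radius (Earth radii): mean M = c * R**gamma with Gaussian scatter of
    width sigma, redrawn until it falls in the physically plausible range
    (0, m_max], where m_max is the mass of a pure-iron planet of that
    radius (to be supplied by the caller from an interior model).
    """
    mean = c * radius ** gamma
    while True:
        m = rng.gauss(mean, sigma)
        if 0.0 < m <= m_max:
            return m

# Most-likely mass for a 2 Earth-radius planet: 2.7 * 2**1.3 ~ 6.6 M_E,
# but individual draws scatter broadly around that value.
print(round(2.7 * 2 ** 1.3, 1))
draws = [sample_mass(2.0, m_max=30.0) for _ in range(5)]
```

The broad sigma is the whole point of the probabilistic approach: two planets with identical radii can legitimately receive quite different mass estimates.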

In earlier work by Rogers, it was found that there seems to be a transition in planet composition at a radius no larger than 1.6 RE, above which planets are unlikely to be dense, rocky worlds like the Earth and much more likely to be less dense, volatile-rich planets like Neptune (see The Transition from Rocky to Non-Rocky Planets in Centauri Dreams for a full discussion of this work). For the sample of planets considered here with radii less than 1.6 RE, the team found that C = 1.4 and γ = 2.3. Unfortunately, the sample considered by Wolfgang, Rogers and Ford has little good data for planets in this size range, and the masses with their large uncertainties tend to span the full range of physically plausible values. As a result, this analysis cannot rule out the possibility of a deterministic mass-radius relationship where there is only a very narrow range of actual planet masses for any particular radius value. Recent work by others suggests that these smaller planets tend to have a more Earth-like, rocky composition which could be characterized with a more deterministic mass-radius relationship (see The Composition of Super-Earths in Drew Ex Machina for a discussion of this work).
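For a sense of scale, the two point-estimate fits quoted in this article diverge noticeably even at modest radii; a quick sketch:

```python
def mass_estimate(radius, c, gamma):
    """Point estimate of planet mass (M_E) from a power law M = c * R**gamma."""
    return c * radius ** gamma

# The article's fit for all planets under 4 Earth radii (C=2.7, gamma=1.3)
# versus the fit restricted to likely-rocky planets under 1.6 Earth radii
# (C=1.4, gamma=2.3):
for r in (1.0, 1.5):
    broad = mass_estimate(r, c=2.7, gamma=1.3)
    rocky = mass_estimate(r, c=1.4, gamma=2.3)
    print(f"R = {r} R_E: broad fit {broad:.1f} M_E, rocky fit {rocky:.1f} M_E")
```

The steeper exponent of the rocky fit reflects the way density climbs with size for compressed rocky interiors, while the shallower broad fit is dragged down by volatile-rich planets at larger radii.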

This new work by Wolfgang, Rogers and Ford represents the best attempt to date to determine the mass-radius relationship for planets smaller than Neptune. While more data of better quality for planets in this size range are needed, it does appear that sub-Neptunes can have a range of different compositions and therefore possess a range of mass values at any given radius. This new relation will be most useful to scientists hoping to get the maximum benefit out of the ever-growing list of Kepler planetary finds where only the radius is known. Much more data will be required to determine more accurately the mass-radius distribution of planets with radii less than 1.6 RE and more precisely characterize the transition from large, rocky Earth-like planets to larger, volatile-rich planets like Neptune.

The preprint of the paper by Wolfgang, Rogers and Ford, “Probabilistic Mass-Radius Relationship for Sub-Neptune-Sized Planets”, can be found here.



LightSail Aloft!

by Paul Gilster on May 21, 2015

One of the joys of science fiction is the ability to enter into conjectured worlds at will, tweaking parameters here and there to see what happens. I remember talking a few years ago to Jay Lake, a fine writer especially of short stories who died far too young in 2014. Jay commented that while it was indeed wonderful to move between imagined worlds as a reader, it was even more wondrous to do so as a writer. I’ve mostly written non-fiction in my career, but the few times I’ve done short stories, I’ve experienced a bit of this ‘world-building’ sense of possibility.

Even so, it’s always striking how science and technology keep moving in ways that defy our expectations. Take yesterday’s launch of The Planetary Society’s crowd-funded LightSail, which went aloft thanks to a United Launch Alliance Atlas V from Cape Canaveral. LightSail violates expectations on a number of fronts. For one thing, there is the crowd-funding itself, a consequence of an Internet era that science fiction writers lustily engaged, but one that enters homes on desktop computers SF had trouble anticipating.

My old saying applies: it’s the business of the future to surprise us, even those of us who keep thinking about the future every day. Another LightSail surprise is its size. Many science fiction tales have featured solar sails, dating back to the wondrous “The Lady Who Sailed the Soul” from Cordwainer Smith and Arthur C. Clarke’s “The Wind from the Sun.” We’ve looked at a number of the early stories in these pages over the years. But imagined sails in those days were vast, just like Robert Forward’s gigantic designs, and I can’t think of anyone in those days who anticipated matching up sails with tiny satellites — CubeSats — which have brought space capabilities down from the level of government organizations to small university groups.


Image: The launch of LightSail aboard an Atlas V, as captured by remote camera on May 20. Credit: Navid Baraty / The Planetary Society.

So we have a CubeSat about the size of a loaf of bread that is about to deploy a sail measuring 32 square meters. CubeSats are cheap, and while they can’t mount missions of the complexity of a Juno or a Cassini, I can see a robust future for them. The beauty of The Planetary Society’s effort here is that while CubeSats can be readily orbited, they’ve had no real propulsion capabilities. Until now. So we’re not testing just one sail. We’re testing a broader concept.
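For a sense of what a sail this size can do, photon pressure on a perfectly reflective 32-square-meter sail at Earth's distance from the Sun works out to a few tenths of a millinewton. A rough sketch (the ~5 kg spacecraft mass is an assumed round figure, not from this article):

```python
SOLAR_FLUX = 1361.0  # W/m^2 at 1 AU
C = 2.998e8          # speed of light, m/s

def sail_thrust(area, reflectivity=1.0):
    """Photon thrust (newtons) on a flat sail facing the Sun at 1 AU.

    Perfectly reflective sail at normal incidence: F = 2 * S * A / c.
    Real sails do worse (imperfect reflection, off-normal angles).
    """
    return (1 + reflectivity) * SOLAR_FLUX * area / C

force = sail_thrust(32.0)  # ~2.9e-4 N for LightSail's 32 m^2
accel = force / 5.0        # assumed ~5 kg CubeSat mass
print(f"{force:.1e} N, {accel * 86400:.1f} m/s gained per day")
```

Tiny as that thrust is, it never runs out, which is exactly why a sail makes a plausible propulsion module for an otherwise engine-less CubeSat.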

Can we get a CubeSat to another planet? I can see no reason why not if it turns out that the solar sail strategy employed here does the job. And if we can get one CubeSat to another planet, we can surely get more. Thus the possibility of future missions designed around ‘swarms’ of CubeSat descendants, deployed on missions in which the components of a much larger spacecraft are effectively distributed among a host of carriers, all driven by solar photon momentum. Perhaps LightSail is the first step in making such a vision a reality.

Remember, too, that LightSail was launched as only one payload among many. Much media attention went into the launch of the X-37B, understandable because the small space plane has been operated with relative secrecy. But the Atlas V carrying LightSail also carried several other CubeSats into space. Contrast this with the early days of the space program, when each rocket lifted a single payload, and consider where miniaturization and improved design have begun to take us. With Mason Peck’s ‘sprites,’ we’re now exploring an even smaller realm some call ‘satellites on a chip,’ where the idea of swarm operations takes on a whole new luster.

We have about four weeks to wait before LightSail attempts deployment of its mylar sail. Even then the craft will quickly be pulled back into the Earth’s atmosphere, returning along the way images and data on spacecraft performance that will flow to the ground stations at Cal Poly San Luis Obispo and Georgia Tech (LightSail was designed by the San Luis Obispo firm Stellar Exploration, Inc.). Data return has already begun. You’ll want to follow Jason Davis’ updates on The Planetary Society’s site as this story unfolds. LightSail’s first telemetry file can be downloaded; according to Jason, the early values appear to be ‘nominal or near predicted ranges.’ Here’s the one item that could be problematic:

The team’s only major concern is a line of telemetry showing the indicator switches for solar panel deployment have been triggered. (Look for line 77 in the telemetry file—the “f” is a hexadecimal value indicating all switches were released.) Under normal circumstances, the solar panels do not open until the sail deployment sequence starts, because the sails have a tendency to start billowing out of their storage cavities.

This telemetry reading, however, does not necessarily mean the panels are open. The switches were once inadvertently triggered during vibration testing, so it’s possible they popped loose during the ride to orbit. We’ll know for sure after flight day four, when we test out the camera system. This is one time we don’t want to see a pretty picture of Earth—it would mean the panels are open.

I’ll be checking in with Jason’s blog frequently during the mission as we get closer to sail deployment. Meanwhile, be aware that the second iteration of LightSail is scheduled for a 2016 flight, this one a full demonstration of solar sailing in Earth orbit, with launch aboard a SpaceX Falcon Heavy to an orbit of about 720 kilometers. The Kickstarter campaign supporting the LightSail project can be accessed here. The level of support that has emerged is encouraging indeed, as success with LightSail will energize the entire community of sail researchers.



Enter the ‘Warm Titan’

by Paul Gilster on May 20, 2015

Our definition of the habitable zone is water-based, focusing on planetary surfaces warm enough that liquid water can exist there. New work by Steven Benner (Foundation for Applied Molecular Evolution) and colleagues considers other kinds of habitable zones, specifically those supporting hydrocarbons, which can be liquids, solids or gases depending on the ambient temperature and pressure. Benner’s work focuses on compounds called ethers that can link together to form polyethers, offering life a chance to emerge and adapt in hydrocarbon environments.

Out of this comes the notion of ‘warm Titans,’ worlds with hydrocarbon seas that are not made up of methane. We have no such worlds in our Solar System, and they needn’t be moons of gas giants to fit the bill. Think of them, as this Astrobio.net news release does, as oily Earths drenched in hydrocarbons like propane or octane. Although they do not appear in any genetic molecules on Earth, ethers may be able to fill the role that DNA and RNA play on such worlds.

The nucleobases in the four-letter code of DNA and RNA can mutate even as the molecule’s form is retained, and out of this come the proteins that help life interact with and adapt to its environment. Like DNA, ethers show repeating elements, in this case of carbon and oxygen, in their chemical backbones. But unlike DNA and RNA, they have no outward negative charge of the kind that lets them dissolve and float freely so they can interact with other biomolecules. Says Benner:

“This is the central point of the ‘polyelectrolyte theory of the gene,’ which holds that any genetic biopolymer able to support Darwinian evolution operating in water must have an ever-repeating backbone charge. The repeating charges so dominate the physical behavior of the genetic molecule that any changes in the nucleobases that influence genetic information have essentially no significant impact on the molecule’s overall physical properties.”

Molecules like DNA and RNA cannot dissolve in a hydrocarbon ocean, making them unable to provide the necessary interactions on worlds like Titan. But ethers, strung together in complex polyethers, while they lack an outward charge, do have internal charge repulsions that allow small parts of the molecule to function in ways similar to DNA and RNA nucleobases.


Image: An artist’s impression of the low-lit surface of Titan under the moon’s thick, orange haze, with liquid hydrocarbons pooling and eroding the surface much like water on Earth. Credit: Steven Hobbs (Brisbane, Queensland, Australia).

Benner’s experiments with ethers show that they are not soluble at temperatures as low as Titan’s, making Saturn’s largest moon an unlikely venue for such life. But while methane has a narrow liquid range (between -184 and -173 degrees Celsius), we can still put ethers to work in warmer hydrocarbon oceans. Thus the emergence of the ‘warm Titan,’ a world perhaps covered with oceans of propane rather than methane; propane stays liquid over a broad range (-184 to -40 degrees Celsius). Octane turns out to be even better, remaining liquid from -57 degrees Celsius all the way up to 125 degrees.
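The liquid ranges quoted above can be collected into a small lookup; a toy sketch of the 'hydrocarbon habitable zone' idea, using the article's figures:

```python
# Liquid ranges in degrees Celsius, as quoted in the article above
LIQUID_RANGE_C = {
    "methane": (-184, -173),
    "propane": (-184, -40),
    "octane": (-57, 125),
}

def liquid_solvents(temperature_c):
    """Return the hydrocarbon solvents liquid at a given surface temperature."""
    return [name for name, (melt, boil) in LIQUID_RANGE_C.items()
            if melt <= temperature_c <= boil]

print(liquid_solvents(-180))  # Titan-like cold: methane and propane seas
print(liquid_solvents(-30))   # a 'warm Titan': only octane remains liquid
```

Each solvent, in other words, defines its own temperature band, which is the sense behind Benner's closing remark about every star having a habitable zone for every solvent.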

Thus hydrocarbon molecules larger than methane come to the rescue. Once again we reconsider the notion of a habitable zone. Certainly in terms of life that we are familiar with, liquid water at the surface is a prerequisite. But as we’ve seen on the icy moons of our system’s gas giants, oceans can provide subsurface environments where life could conceivably emerge. Now we have to consider a hydrocarbon habitable zone where propane or octane can exist in a liquid state. “Virtually every star,” says Benner, “has a habitable zone for every solvent.”

The paper is Christopher et al., “Solubility of Polyethers in Hydrocarbons at Low Temperatures. A Model for Potential Genetic Backbones on Warm Titans,” Astrobiology Vol. 15, Issue 3 (11 March 2015). Thanks to Ivan Vuletich for the pointer to this one.



Exoplanets: The Hunt for Circular Orbits

by Paul Gilster on May 19, 2015

If you’re looking for planets that may be habitable, eccentric orbits are a problem. Vary the orbit enough and the surface goes through extreme swings in temperature. In our own Solar System, planets tend to follow circular orbits. In fact, Mercury is the planet with the highest eccentricity, at 0.21, while the other seven planets average a modest 0.04 (on a scale where 0 is a completely circular orbit). But much of our work on exoplanets has revealed gas giant planets with a wide range of eccentricities, and we’ve even found one (HD 80606b) with an eccentricity of 0.927. As far as I know, this is the current record holder.

These values have been measured using radial velocity techniques that most readily detect large planets close to their stars, although there is some evidence for high orbital eccentricities for smaller worlds. Get down into the range of Earth and ‘super-Earth’ planets, however, and the RV signal is tiny. But a new paper from Vincent Van Eylen (Aarhus University) and Simon Albrecht (MIT) goes to work on planetary transits. It’s possible to work with transit timing variations to make inferences about eccentricity, but these appear only in a subset of transiting systems.

Instead, Van Eylen and Albrecht look at transit duration. The length of a transit can vary depending on the eccentricity and orientation of the orbit. By measuring how long a planetary transit lasts, and weighing the result against what is known about the properties of the star, the eccentricities of the transiting planets can be measured, as explained in the paper:

Here we determine orbital eccentricities of planets making use of Kepler’s second law, which states that eccentric planets vary their velocity throughout their orbit. This results in a different duration for their transits relative to the circular case: transits can last longer or shorter depending on the orientation of the orbit in its own plane, the argument of periastron (ω)… Transit durations for circular orbits are governed by the mean stellar density (Seager & Mallen-Ornelas 2003). Therefore if the stellar density is known from an independent source then a comparison between these two values constrains the orbital eccentricity of a transiting planet independently of its mass…
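The duration comparison in the quoted passage can be captured in a single approximate expression familiar from the transit literature (this compact form is a standard approximation, not quoted from the paper itself):

```python
import math

def duration_ratio(ecc, omega):
    """Approximate ratio of an eccentric planet's transit duration to the
    duration it would have on a circular orbit of the same period, where
    omega is the argument of periastron in radians:

        T / T_circ ~ sqrt(1 - e^2) / (1 + e * sin(omega))

    Comparing an observed duration against the circular expectation
    (set by the mean stellar density) therefore constrains e and omega.
    """
    return math.sqrt(1 - ecc ** 2) / (1 + ecc * math.sin(omega))

# A transit near periastron (omega = 90 deg) is shortened; one near
# apastron (omega = -90 deg) is stretched.
print(round(duration_ratio(0.3, math.pi / 2), 2))   # ~0.73
print(round(duration_ratio(0.3, -math.pi / 2), 2))  # ~1.36
```

A circular orbit (e = 0) gives a ratio of exactly 1 regardless of omega, which is why durations consistent with the stellar-density prediction point to low eccentricities.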

Using these methods, the researchers have measured the eccentricity of 74 small extrasolar planets orbiting 28 stars, discovering that most of their orbits are close to circular. The systems under study were chosen carefully to avoid false positives — the team primarily used confirmed multi-transiting planet systems around bright host stars, and pulled in asteroseismological data — information on stellar pulsations — to help determine stellar parameters. Asteroseismology can refine our estimates of a star’s mass, radius and density. The stars in the team’s sample have all been characterized in previous asteroseismology studies.


Image: Researchers measuring the orbital eccentricity of 74 small extrasolar planets have found their orbits to be close to circular, similar to the planets in the Solar System. This is in contrast to previous measurements of more massive exoplanets where highly eccentric orbits are commonly found. Credit: Van Eylen and Albrecht / Aarhus University.

No Earth-class planets appear in the team’s dataset, but the findings cover planets with an average radius of 2.8 Earth radii, while orbital periods range from 0.8 to 180 days. Van Eylen and Albrecht conclude that it is plausible that low eccentricity orbits would be common in solar systems like ours, a finding that would have ramifications for habitability and the location of the habitable zone.

Interestingly, when weighed against parameters like the host star’s temperature and age, no trend emerges. But in systems with multiple transiting planets on circular orbits, Van Eylen and Albrecht believe that the density of the host star can be reliably estimated from transit observations. This information can help to rule out false positives, a technique they use to validate candidate worlds in several systems — KOI-270, now Kepler-449, and KOI-279, now Kepler-450, as well as KOI-285.03, now Kepler-92d, in a system with previously known planets.

The work has helpful implications for upcoming space missions that will generate the data needed for putting these methods to further use:

We anticipate that the methods used here will be useful in the context of the future photometry missions TESS and PLATO, both of which will allow for asteroseismic studies of a large number of targets. Transit durations will be useful to confirm the validity of transit signals in compact multi-planet systems, in particular for the smallest and most interest[ing] candidates that are hardest to confirm using other methods. For systems where independent stellar density measurements exist the method will also provide further information on orbital eccentricities.

The TESS mission (Transiting Exoplanet Survey Satellite) is planned for launch in 2017, and is expected to find more than 5000 exoplanet candidates, including 50 Earth-sized planets around relatively nearby stars. PLATO (PLAnetary Transits and Oscillations of stars) will likewise monitor up to a million stars looking for transit signatures, with launch planned by 2024.

The paper is Van Eylen and Albrecht, “Eccentricity from transit photometry: small planets in Kepler multi-planet systems have low eccentricities,” accepted for publication at The Astrophysical Journal (preprint). An Aarhus University news release is available.



Spacecoach on the Stage

by Paul Gilster on May 18, 2015

I’m glad to see that Brian McConnell will be speaking at the International Space Development Conference in Toronto this week. McConnell, you’ll recall, has been working with Centauri Dreams regular Alex Tolley on a model the duo call ‘Spacecoach.’ It’s a crewed spacecraft using solar electric propulsion, one built around the idea of water as propellant. The beauty of the concept is that we normally treat water as ‘dead weight’ in spacecraft life support systems: a single-use consumable, critical but heavy, and one that exacts a high toll in propellant.

The spacecoach, on the other hand, can use the water it carries for radiation shielding and climate control within the vessel, while crew comfort is drastically enhanced in an environment where water is plentiful and space agriculture a serious option. Along with numerous other benefits that Brian discusses in his recent article A Stagecoach to the Stars, mission costs are sharply reduced by constructing a spaceship that is mostly water. McConnell and Tolley believe that cost reductions of one or two orders of magnitude are possible. Have a look, if you haven’t already seen it, at Alex’s Spaceward Ho! for an imaginative look at what a spacecoach can be.

ISDC is a good place to get this model before an audience of scientists, engineers, business contacts and educators from the military, civilian, commercial and entrepreneurial sectors. ISDC 2014 brought over 1000 attendees to the four-day event, and this year’s conference features plenary talks from top names in the field: Buzz Aldrin, Charles Bolden, Neil deGrasse Tyson, Peter Diamandis, Lori Garver, Richard Garriott, Bill Nye, Elon Musk and more. My hope is that a concept as novel but also as feasible as the spacecoach will resonate.


Image: Ernst Stuhlinger’s concept for a solar powered ship using ion propulsion, a notion now upgraded and highly modified in the spacecoach concept, which realizes huge cost savings by its use of water as reaction mass. This illustration, which Alex Tolley found as part of a magazine advertisement, dates from the 1950s.

Towards Building an Infrastructure

We have to make the transition from expensive, highly targeted missions with dedicated spacecraft to missions that can be flown with adaptable, low-cost technologies like the spacecoach. Long-duration missions to Mars and the asteroid belt will be rendered far more workable once we can offer a measure of crew safety and comfort not available today, with all the benefits of in situ refueling and upgradable modularity. Building up a Solar System infrastructure that can one day support the long expansion beyond it demands vehicles that can carry humans on deep space journeys that will eventually become routine.

The response to the two spacecoach articles here on Centauri Dreams has been strong, and I’ll be tracking the idea as it continues to develop. McConnell and Tolley are currently working on a book for Springer that should be out by late summer or early fall. You can follow the progress of the idea as well on the Spacecoach.org site, where the two discuss a round-trip mission from Earth-Moon Lagrange point 2 (EML-2) to Ceres, a high delta-v mission in which between 80 and 90 percent of the mission cost is the cost of delivering water to EML-2.

The idea in this and other missions is to use a SpaceX Falcon Heavy to launch material to low-Earth orbit, with a solar-electric propulsion spiral out to EML-2 (the crew will later take a direct chemical propulsion trajectory to EML-2 to minimize exposure time in the Van Allen belts). The water cost is about $3000 per kilogram. The Falcon Heavy should be able to deliver 53,000 kilograms to low-Earth orbit per launch. McConnell and Tolley figure about 40,000 kilograms of this will be water, while the remainder will be other equipment including the module engines and solar arrays. From EML-2, various destinations can be modeled, with values adjustable within the model so you can see how costs change with different parameters.
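The arithmetic behind these figures is simple enough to sketch. Dividing the roughly $100 million per-launch cost by the payload gives the cost per kilogram in low-Earth orbit; the higher ~$3000/kg quoted for water presumably reflects the additional expense of moving it on to EML-2:

```python
LAUNCH_COST = 100e6         # dollars per Falcon Heavy launch (article's figure)
PAYLOAD_TO_LEO = 53000.0    # kg delivered to low-Earth orbit per launch
WATER_PER_LAUNCH = 40000.0  # kg of water in the cargo manifest

cost_per_kg_leo = LAUNCH_COST / PAYLOAD_TO_LEO  # ~$1,900/kg
water_bill = WATER_PER_LAUNCH * 3000.0          # at ~$3000/kg delivered to EML-2
print(f"${cost_per_kg_leo:,.0f}/kg to LEO; ~${water_bill / 1e6:.0f}M for the water")
```

Numbers on this order are why McConnell and Tolley can talk about one to two orders of magnitude in cost reduction: the dominant expense is simply lifting water, and water needs no special handling.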

The online parametric model has just been updated to calculate mission costs as a function of the number of Falcon Heavy launches required. You can see the new graph below (click on it to enlarge). At a specific impulse of 2000 seconds or better for the solar-electric engines, only two launches are required for most missions: one taking the crew direct to EML-2, the other carrying the water and durable equipment on a spiral orbit out from LEO. Only the most ambitious destinations like Ceres require three launches. At $100 million per launch, even that mission is cheap by today’s spaceflight standards.
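The sensitivity to specific impulse comes straight from the rocket equation; a minimal sketch of why 2000 seconds changes the picture (the 10 km/s delta-v target and the 450 s chemical comparison are illustrative values, not from the article):

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def propellant_fraction(delta_v, isp):
    """Fraction of initial mass that must be propellant, from the
    Tsiolkovsky rocket equation: m_prop / m0 = 1 - exp(-dv / (isp * g0))."""
    return 1 - math.exp(-delta_v / (isp * G0))

# At 2000 s of specific impulse, even an ambitious 10 km/s mission needs
# only ~40% of launch mass as water propellant; a chemical engine at
# 450 s would need ~90%.
print(round(propellant_fraction(10e3, 2000), 2))
print(round(propellant_fraction(10e3, 450), 2))
```

That difference in mass fraction is what lets the spacecoach double-book its propellant as life-support water rather than hauling both.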


Brian notes in a recent email that the launches do not need to be closely spaced, because the spiral transfer from LEO to EML-2 takes months to complete. The crew only goes when everything else is in place at EML-2. For more on this model, see spacecoach.org. I’ll be interested to hear how the idea is received at ISDC, and how the upcoming publication of the spacecoach book helps to put this innovative design for interplanetary transport on the map.



Doppler Worlds and M-Dwarf Planets

by Paul Gilster on May 15, 2015

Finding small and possibly habitable worlds around M-dwarfs has already proven controversial, as we’ve seen in recent work on Gliese 581. The existence of Gl 581d, for example, is contested in some circles, but as Guillem Anglada-Escudé argues below, sound methodology turns up a robust signal for the world. Read on to learn why as he discusses the early successes of the Doppler technique and its relevance for future work. Dr. Anglada-Escudé is a physicist and astronomer who did his PhD work at the University of Barcelona on the Gaia/ESA mission, working on the mission simulator and data reduction prototype. His first serious observational venture, using astrometric techniques to detect exoplanets, was with Alan Boss and Alycia Weinberger during a postdoctoral period at the Carnegie Institution for Science. He began working on high-resolution spectroscopy for planet searches around M-stars during that time in collaboration with exoplanet pioneer R. Paul Butler. In a second postdoc, he worked at the University of Goettingen (Germany) with Prof. Ansgar Reiners, participating in the CRIRES+ project (an infrared spectrometer for the VLT/ESO), and joined the CARMENES consortium. Dr. Anglada-Escudé is now a Lecturer in Astronomy at Queen Mary University of London, working on detection methods for very low-mass extrasolar planets around nearby stars.

by Guillem Anglada-Escudé


The Doppler technique has been the driving force for the first fifteen years of extrasolar planet detection. The method is most sensitive to close-in planets, and many of its most exciting results come from planets around low-mass stars (also called M-dwarfs). Although these stars are substantially fainter than our Sun, the noise floor seems to be imposed by stellar activity rather than by instrumental precision or brightness, meaning that small planets are more easily detected around them than around Sun-like stars. In detection terms, the new leading method is space-based transit photometry, brilliantly demonstrated by NASA’s Kepler mission.

Despite its efficiency, the transit method requires a fortunate alignment of the orbit with our line of sight, so planets around the closest stars are unlikely to be detected this way. In the new era of space-photometry surveys, and given all the caveats associated with accurate radial velocity measurements, the most likely roles of the Doppler method for the next few years will be the confirmation of transiting planets and the detection of potentially habitable super-Earths around the nearest M-dwarfs. It is becoming increasingly clear that the Doppler method may be unsuitable for detecting Earth analogs, even around our closest Sun-like neighbors. Unless there is an unexpected breakthrough in the understanding of stellar Doppler variability, nearby Earth-twin detection will have to wait a decade or two for the emergence of new techniques such as direct imaging and/or precision space astrometry. In the meantime, very exciting discoveries are expected from our reddish and unremarkable stellar neighborhood.

The Doppler years

We knew stars should have planets. After the Copernican revolution, it was broadly acknowledged that Earth and our Sun occupy unremarkable places in the cosmos. Our solar system was known to have nine planets (as then counted), so it was only natural to expect planets around other stars. After years of failed or ambiguous claims, the first solid evidence of planets beyond the Solar System arrived in the early 90’s. First came the pulsar planets (PSR B1257+12). Although the evidence for their existence was well consolidated, these planets were regarded as space oddities: a pulsar is the remnant core of an exploded massive star, and the survival or re-formation of planets after such an event is unlikely to be the most universal channel for planet formation.

In 1995, the first planets around main sequence stars were reported. The hot Jupiters arrived courtesy of M. Mayor and D. Queloz (51 Peg b, 1995), and shortly thereafter a series of gas giants were announced by the competing American duo of G. Marcy and P. Butler (70 Vir, 47 UMa, etc.). These were days of wonder, and the Doppler method was the norm. In a few months, the count grew from nothing to several worlds. These discoveries became possible thanks to the ability to measure the radial velocities of stars with ~3 meters-per-second (m/s) precision, roughly human running speed. 51 Peg b periodically moves its host star at 50 m/s and 70 Vir b changes the velocity of its parent star by 300 m/s, so these became easily detectable once precision reached that level.

Lighter and smaller planets

Given the technological improvements, and solid proof that planets were out there in possibly large numbers, the exoplanet cold war ramped up. Large planet-hunting teams built up around Mayor & Queloz (Swiss) and Marcy & Butler (Americans) in a strongly competitive environment. Precision kept improving and, combined with longer time baselines, a few tens of gas giants had been reported by 2000. Then the first exoplanet transiting in front of its host star was detected. Unlike the Doppler method, the transit method measures the dip in brightness caused by a planet crossing in front of the star. Such an alignment happens randomly, so a large number of stars (10 000+) need to be monitored simultaneously to find planets using this technique.

Plans to engage in such surveys quickly started to consolidate (TrES, HAT, WASP), and small (COROT) to medium-class space missions (NASA’s Kepler, ESA’s Eddington, later cancelled) started to be seriously considered. By the mid-2000s, the Doppler technique had led to the first reports of hot Neptunes (GJ 436b), and the first so-called super-Earths (GJ 876d, M ~ 7 Mearth) came onto the scene. Let me note that the first of these ‘smaller’ planets were found around those even more unremarkable small stars called M-dwarfs.

While not obvious at that moment, such a trend would later have serious consequences. Several hot Neptunes and super-Earths followed during the mid-2000’s, mostly coming from the large surveys led by the Swiss and American teams. By then the first instruments specifically designed to hunt for exoplanets had been built, such as the High Accuracy Radial velocity Planet Searcher (HARPS), by a large consortium led by the Geneva Observatory and the European Southern Observatory (ESO). While the ‘American method’ relied on measuring the stellar spectrum through an iodine gas cell, whose superimposed spectral features serve as a simultaneous wavelength reference, the HARPS concept consisted of stabilizing the hardware as much as possible. After ten years of HARPS operations, it has become clear that the stabilized-instrument option outperforms the iodine designs, as it significantly reduces the data-processing effort needed to obtain accurate measurements (~1 m/s or better). Dedicated iodine spectrometers now in operation deliver comparable precisions (APF, PFS), which seems to point towards a fundamental limit set by the stars rather than by the instruments.

Sun-like stars (G dwarfs) were massively favoured in the early Doppler surveys. While many factors were folded into target selection, there were two main reasons for this choice. First, Sun-like stars were considered more interesting due to their similarity to our host star (the search for planets like our own), and second, M-dwarfs are intrinsically fainter, so the number of bright-enough targets is quite limited. For main sequence stars, luminosity grows roughly as the 4th power of the mass, while apparent brightness falls as the square of the distance.

As a result, one quickly runs out of intrinsically faint objects. Most of the stars we see in the night sky have A and F spectral types, some are distant supergiants (e.g., Betelgeuse), and only a handful of Sun-like G and K dwarfs are visible (the Alpha Centauri binary, Tau Ceti, Epsilon Eridani, etc.). No M-dwarf is bright enough to be seen with the naked eye. By setting a magnitude cut-off of V ~ 10, early surveys included thousands of yellow G-dwarfs, a few hundred orange K-dwarfs, and a few tens of red M-dwarfs. Even though M-dwarfs were clearly disfavoured in numbers, many ‘firsts’ and some of the most exciting exoplanet detection results have come from these ‘irrelevant’ tiny objects.
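To get a feel for these numbers, the sketch below applies the distance modulus, m − M = 5 log10(d / 10 pc), to rough textbook absolute magnitudes (illustrative values of mine, not those of any actual survey) to show how far a V ~ 10 cut-off reaches for each spectral type, and how steeply the accessible volume shrinks for intrinsically fainter stars.

```python
def max_distance_pc(abs_mag, mag_limit=10.0):
    """Farthest distance at which a star of absolute magnitude abs_mag
    stays brighter than mag_limit: m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((mag_limit - abs_mag) / 5 + 1)

# Rough textbook absolute V magnitudes (illustrative assumptions):
rough_abs_mag = {"G2 (Sun-like)": 4.8, "K5 dwarf": 7.4, "M4 dwarf": 12.7}

d_sun = max_distance_pc(4.8)
for sp_type, m_v in rough_abs_mag.items():
    d = max_distance_pc(m_v)
    # Surveyable volume scales as distance cubed:
    print(f"{sp_type:14s}: out to {d:6.1f} pc, "
          f"volume vs Sun-like = {(d / d_sun) ** 3:.5f}")
```

A Sun-like star clears the cut out to roughly 110 pc, a mid-M dwarf only to about 3 pc; since the surveyable volume goes as distance cubed, the thousands-to-tens imbalance in the early target lists follows directly.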

M-dwarfs have masses between 0.1 and 0.5 Msun and radii between 0.1 and 0.5 Rsun. Since their temperatures are known from optical to near-infrared photometry (~3500 K, compared to the Sun’s 5800 K), the basic physics of blackbody radiation shows that their luminosities range from 0.1% to 5% that of the Sun. As a result, the orbits at which planets can keep liquid water on their surfaces are much closer in and have shorter periods. All things combined, one finds that a ‘warm’ Earth-mass planet would imprint a wobble of 1-2 m/s on an M-dwarf (versus 0.1 m/s for the Earth/Sun), and the same planet would cause a ~0.15% dip in the starlight during transit (0.01% for Earth/Sun).
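Those amplitudes can be reproduced with back-of-the-envelope scalings. The sketch below (illustrative only) combines L ∝ M^4, Kepler's third law and the ~0.1 m/s Earth-on-Sun Doppler amplitude quoted above, and assumes R ≈ M in solar units; real stellar models shift the numbers somewhat, so treat the output as order-of-magnitude.

```python
import math

def warm_earth_signals(m_star):
    """Rough signals of a 1 Earth-mass, 1 Earth-radius planet receiving
    Earth-like flux from a main-sequence star of mass m_star (in Msun).
    Assumes L ~ M^4 and R_star ~ M (both in solar units)."""
    lum = m_star ** 4                        # luminosity, Lsun
    a = math.sqrt(lum)                       # Earth-flux orbital distance, AU
    period = math.sqrt(a ** 3 / m_star)      # Kepler's third law, years
    # Doppler semi-amplitude, scaled from the ~0.09 m/s Earth-on-Sun value:
    k = 0.09 * m_star ** (-2 / 3) * period ** (-1 / 3)
    # Transit depth = (R_planet / R_star)^2, with R_earth/Rsun ~ 0.00915:
    depth = (0.00915 / m_star) ** 2
    return period * 365.25, k, depth * 100   # days, m/s, per cent

for m in (0.15, 0.25, 1.0):
    p, k, d = warm_earth_signals(m)
    print(f"M = {m:4.2f} Msun: P = {p:6.1f} d, K = {k:4.2f} m/s, depth = {d:.3f} %")
```

For the Sun this returns the 0.1 m/s and 0.01% figures cited above; for mid-M dwarfs the wobble climbs past 1 m/s and the transit dip to a few tenths of a per cent.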

Two papers by the Swiss group from 2007 and 2009 (Udry et al., http://adsabs.harvard.edu/abs/2007A%26A...469L..43U, Mayor et al., http://adsabs.harvard.edu/abs/2009A%26A...507..487M) presented evidence for the first super-Earth with realistic chances of being habitable, orbiting the M-dwarf GJ 581 (GJ 581d). Although its orbit was at first considered too cold, subsequent papers and climate simulations (for example, see Von Paris et al. 2010, http://cdsads.u-strasbg.fr/abs/2010A%26A...522A..23V) indicated that there was no reason why water could not exist on its surface given reasonable amounts of greenhouse gases. As of 2010, GJ 581d was considered the first potentially habitable planet found beyond the Solar System. The word potentially is key here. It simply acknowledges that, given the known information, the properties of the planet are compatible with a solid surface and sustainable liquid water over its lifetime. Theoretical considerations about the practical habitability of such planets are yet another source of intense debate.

GJ 581 was remarkable in another important way. Its Doppler data could be best explained by the existence of (at least) four low-mass planets in orbits with periods shorter than ~2 months (inside the orbit of Mercury). A handful of other similar systems were known (or reported) in those days, including HD 69830 (3 Neptunes, G8V), HD 40307 (3 super-Earths, K3V) and 61 Vir (3 sub-Neptunes). These and many other Doppler planet reports from the large surveys led to the first occurrence rate estimates for sub-Neptune mass planets by ~2010. According to those (for example, see http://adsabs.harvard.edu/abs/2010Sci...330..653H), at least ~30% of stars host a super-Earth within the orbit of Mercury. Simultaneously, the COROT mission started to produce its first hot rocky planet candidates (e.g., COROT-7b) and the Kepler satellite was slowly building up its high quality space-based light curves.

What Kepler was about to reveal was even more amazing. Not only do ~30% of stars host ‘hot’ super-Earths; at least ~30% also host compact (highly co-planar) systems of small planets, again with orbits interior to Mercury’s. Thanks to this unexpected overabundance of compact systems, the Kepler reports of likely planets came in the thousands (famously known as Kepler Objects of Interest, or KOIs), which in astronomy means we can move from interesting individual objects to a fully mature discipline where statistical populations can be drawn. Today, the exoplanet portrait is smoothly covered by ~2000 reasonably robust detections, extending from sub-Earth-sized planets with orbits of a few hours (e.g., Kepler-78b) out to periods of thousands of days for the Jupiter analogs that (at the end of the day) have turned out to be rather rare (<5% of stars). The clustering of objects in different regions of the mass-period diagram (see Figure 1) encodes the tale of planet formation and the origin of these systems. This is where we are now in terms of detection techniques.


Figure 1: The exoplanet portrait (data extracted from exoplanet.eu, April 1st 2015). Short period planets are generally favoured by the two leading techniques (transits and Doppler spectroscopy), which explains why the left part of the diagram is the most populated. The ‘classic’ gas giants are on the top right (massive, long periods), and the bottom left is the realm of the hot Neptunes and super-Earths. The relative paucity of planets in some areas of this diagram tells us about important processes that formed the planets and shaped the architectures of the systems. For example, the horizontal gap between the Neptunes and the Jupiters is likely caused by the runaway accretion of gas once a planet grows a bit larger than Neptune in the protoplanetary nebula, quickly jumping into the Saturn-mass regime. The large abundance of hot Jupiters on the top left is an observational bias due to high detection efficiency (large planets in short period orbits), but the gap between the hot Jupiters and the classical gas giants is not well understood and probably has to do with the migration process that drags the hot Jupiters so close to the star. Detection efficiency quickly drops to the right (longer periods) and toward the bottom (very small planets).

Having reached this point, and given the wild success of Kepler, we might ask ourselves what the relevance of the Doppler method is as a detection technique for small planets. The transit method requires a lucky alignment of the orbit. Using statistical arguments, one can easily show that most transiting planets will be detected around distant stars. The Doppler technique, by contrast, can achieve great precision on individual objects and detect planets irrespective of their orbital inclination (except in the rare cases where the orbit is close to face-on). Therefore, the hunt for nearby planets remains the niche of the Doppler technique. Small planets around nearby stars should enable unique follow-up opportunities (direct imaging attempts in 10-20 years) and transmission spectroscopy in the rare cases of transits.

However, there are other reasons why nearby stars are really exciting. These are brand new worlds next to ours that might be visited one day. Nearby stars trigger the imagination of the public, science fiction writers, filmmakers and explorers. While the scientific establishment tends to deem this quality irrelevant, many of us still find such motivation perfectly valid. As in many other areas, this is not only about pure scientific knowledge but about exploration.

For those who prefer a more results-per-dollar oriented approach, the motivational aspect of nearby exoplanets cannot be ignored either. Modern mathematics and the physical sciences were broadly motivated by the need to improve our understanding of observations of the Solar System. Young scientists keep being attracted to space sciences and technology because of this (combined with the push from the film and video-game industries). A nearby exoplanet is not just one more point in a diagram. It represents a place, a goal and a driver. In this light, reports and discoveries of nearby Doppler detections (even if tentative) still rival or surpass the social relevance of those exotic worlds in the distant Kepler systems. As long as there is public support and wonder for exploration, we will keep searching for evidence of nearby worlds. And to do this we need spectrometers.

Why is GJ 581 d so relevant?

We have established that nearby M-dwarfs are great places to look for small planets. But there is a caveat. The rotation periods of mature stars are in the 20-100 day range, meaning that spots or other features on the stellar surface will generate apparent Doppler signals in the same range. After some years of simulations and solar observations, we think that these spurious signals will produce Doppler amplitudes between 0.5 and 3 m/s even for the quietest stars (the level is highly object dependent). Moreover, this variability is not strictly random, which causes all sorts of difficulties. In technical terms, structure in the noise is often referred to as correlated noise (or red noise, activity-induced variability, etc.).

Detecting a small planet is like trying to measure the velocity of a pan filled with boiling water by looking at its wiggling surface. If we can wait long enough, the surface motion averages out. However, consecutive measurements over fractions of a second will not be random and can be confused with periodic variability on those same timescales. The same happens with stars. We can achieve arbitrarily great precision (down to cm/s), but our measurements will also be tracing occasional flows and spectral distortions caused by the variable surface.

Going back to the boiling water example, we could in principle disentangle the true velocity from the jitter if we had access to more information, such as the temperature or the density of the water at each instant. Our hope is that this same approach can be applied to stars by looking at the so-called ‘activity indicators’. In the case of Gliese 581d, Robertson et al. subtracted an apparent correlation of the velocities with a chromospheric activity index. As a result, the signal of GJ 581d vanished, so they argued the planet was unlikely to exist (http://adsabs.harvard.edu/abs/2014Sci...345..440R).

However, in our response to that claim, we argued that one cannot simply remove possible effects from the observations. Instead, one needs to fold all the information into a comprehensive model of the data (http://adsabs.harvard.edu/abs/2015Sci...347.1080A). When doing that, the signal of GJ 581d shows up again as very significant. This is a subtle point with far-reaching consequences. The activity-induced variability is in the 1-3 m/s regime, and the amplitude of the planetary signal is about 2 m/s. Unless activity is modeled at the same level as the planetary signal, there is no hope of obtaining a robust detection. By comparison, the amplitude of the signal induced by the Earth on the Sun is 10 cm/s, while the Sun’s spurious variability is in the 2-3 m/s range. With a little more effort, we are likely to detect many potentially habitable planets around M-stars using new-generation spectrometers. Once we can agree on the way to do that, we can try to go one step further and attempt similar approaches with Sun-like stars.
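The difference between subtracting an activity correlation first and modeling everything jointly can be shown with a toy simulation. To be clear, this is not the analysis from either paper: the 66-day period, the amplitudes and the noise levels below are all invented for illustration. When the activity index partly shares the planet's phase, regressing it out first absorbs part of the real signal, while a joint least-squares fit recovers the injected amplitude.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 500, 120))          # observation epochs, days
period = 66.0                                   # invented planet period, days
planet = 2.0 * np.sin(2 * np.pi * t / period)   # injected 2 m/s planet signal
# Toy activity index that partly correlates with the planet's phase:
activity = np.sin(2 * np.pi * t / period) + 0.8 * rng.normal(size=t.size)
rv = planet + 1.5 * activity + 0.5 * rng.normal(size=t.size)

basis = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])

# (a) Subtract-first: regress RV on the activity index alone, then fit the planet.
slope = np.linalg.lstsq(activity[:, None], rv, rcond=None)[0]
amp_a = np.hypot(*np.linalg.lstsq(basis, rv - activity * slope, rcond=None)[0])

# (b) Joint fit: planet terms and activity term in a single model.
coef = np.linalg.lstsq(np.column_stack([basis, activity]), rv, rcond=None)[0]
amp_b = np.hypot(coef[0], coef[1])

print(f"subtract-first recovers {amp_a:.2f} m/s; "
      f"joint fit recovers {amp_b:.2f} m/s (injected: 2.00 m/s)")
```

The subtract-first estimate comes out well below the injected amplitude because the activity regression steals the shared periodic component, which is the essence of the methodological argument above.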

The debate is on and the jury is still out, but clarifying all these points is essential to the viability of the Doppler technique and to future plans for new instruments. (What’s the need for more precise machines if we have already hit the noise floor?)

This same boiling-pan effect sets the physical noise floor for other techniques as well, but the impact on detection sensitivity can be rather different. For example, photometric measurements (e.g., Kepler) are now mostly limited by the noise floor set by the Sun-like stars, which on average have been found to be twice as active as our Sun. However, the transit ‘signal’ (a short box-like feature, strictly periodic) is harder for stellar variability to mimic. It is only a matter of staring at the target longer to be sure the transit-like feature repeats itself at a very precise time. The Kepler mission was extended beyond its nominal 3.5 years to account for this, and it would probably have succeeded if its reaction wheels hadn’t failed (note that most ‘warm Earth-sized’ objects are around K and M-stars). The PLATO/ESA mission (http://sci.esa.int/plato/) will likely finish the job and detect a few dozen Earth twins, among many other things.

So, what’s next?

New-generation spectrometers will become available soon. Designed to reach similar or better hardware stability than HARPS, these instruments will extend the useful wavelength range towards the red and near-infrared parts of the spectrum. A canonical example is the CARMENES spectrometer (https://carmenes.caha.es/), which will cover from 500 nm up to 1.7 microns (HARPS covers 380 to 680 nm). CARMENES is expected to be installed on the telescope this summer. In addition to collecting more photons, access to other regions of the spectrum will enable the incorporation of many more observables into the analysis. In the meantime, a series of increasingly ambitious space-photometry missions will keep identifying planet-sized objects by the thousands. In this context, careful use of Doppler instruments will provide confirmation and mass measurements for transiting exoplanet candidates.

In parallel, the high follow-up potential and the motivational component of nearby stars justify the continued use of precision spectrometers, at least on low-mass stars. In addition, stabilized spectrometers ‘might’ play a key role in the atmospheric characterization of transiting super-Earths around nearby M-dwarf stars. Concerning the nearest Sun-like stars, alternative techniques such as direct imaging or astrometry should become viable once dedicated space missions are built, perhaps in the next 15-20 years. However, given the trend towards stagnant economies and increasingly long technological cycles for space instrumentation, we may need to hope for the era of space industrialization (or something as dramatic as a technological singularity taking over the hard work) to catch a glimpse of the best targets for interstellar travel.



Sea Salt in Europa’s Dark Materials?

by Paul Gilster on May 14, 2015

‘Europa in a can’ may be the clue to what’s happening on Jupiter’s most intriguing moon. Created by JPL’s Kevin Hand and Robert Carlson, ‘Europa in a can’ is the nickname for a laboratory setup that mimics conditions on the surface of Europa. It’s a micro-environment of extremes, as you would imagine. The temperature in the vacuum chamber is minus 173 degrees Celsius. Moreover, materials within are bombarded with an electron beam that simulates the effects of Jupiter’s magnetic field. Ions and electrons strike Europa in a constant bath of radiation.

What Hand and Carlson are trying to understand is the nature of the dark material that coats Europa’s long fractures and much of the other terrain thought to be geologically young. The association with younger terrain would implicate materials that have welled up from within the moon, providing an interesting glimpse of what is assumed to be Europa’s ocean. Previous studies have suggested that these discolorations could be attributed to sulfur and magnesium compounds, but Hand and Carlson have produced a new candidate: sea salt.


Image: The Galileo spacecraft gave us our best views thus far of Europa, with the discolorations along linear fractures rendered strikingly clear in this reprocessed color view. Credit: NASA/JPL.

Radiation peppers Europa’s surface with particle-accelerator intensity. It becomes part of the story, causing the discoloration evident in the terrain. Hand and Carlson tested a variety of candidate substances, collecting the spectra of each to compare with what our spacecraft and telescopes have found. Sodium chloride and various salt-and-water mixtures proved the best candidates. When bombarded with the electron beam, they turned from white to the same reddish-brown hues found on Europa over a timeframe of tens of hours, which corresponds to about a century of exposure on Europa. Spectral measurements showed a strong resemblance to the color within Europa’s fractures as seen by the Galileo spacecraft.


Image: A closer look at Europa. This is a colorized image pulled from clear-filter grayscale data from one orbit of the Galileo spacecraft combined with lower resolution data taken on a different orbit. The blue-white terrain indicates relatively pure water ice. The new work indicates that although some of the colors of Europa come from radiation-processed sulfur, irradiated salts may explain the color of the youngest regions. Highly intriguing is the possibility that these surface features may have communicated with a global subsurface ocean. Credit: NASA/JPL.

Finding sea salt on Europa’s surface would imply interactions between the ocean and the rocky seafloor, according to this JPL news release, with astrobiological implications. In any case, “This work tells us the chemical signature of radiation-baked sodium chloride is a compelling match to spacecraft data for Europa’s mystery material,” says Hand, who speculates that because the samples grew darker with increasing radiation exposure, we might be able to use color variation to determine the age of features on the moon’s surface.

The paper is Hand and Carlson, “Europa’s surface color suggests an ocean rich with sodium chloride,” accepted at Geophysical Research Letters for publication online (abstract).



SETI and Stellar Drift

by Paul Gilster on May 13, 2015

It was natural enough that Richard Carrigan would come up with the model for what he called ‘Fermi bubbles,’ which I invoked in Monday’s post. A long-time researcher of the infrared sky, Carrigan (Fermi National Accelerator Laboratory, now retired) had mined data from the Infrared Astronomical Satellite (IRAS) in 2009 to mount a search for interesting sources that could be Dyson spheres, entire stars enclosed by a swarm of power stations, or conceivably wrapped entirely by a sphere of material presumably mined from the planetary population of the system.

Carrigan’s work on infrared sources goes back well over a decade, involving not only data mining but theorizing about the nature of truly advanced civilizations. If we were to find a civilization transforming a galaxy by gradually building Dyson spheres to exploit all the energies of its stars, we would be witnessing the transformation from Kardashev Type II (a culture that uses all the power of its star) to Type III (a culture that exploits its entire galaxy’s energies). Carrigan reasoned that areas of such a galaxy would gradually grow dark in visible light, the signature of the civilization’s activities becoming traceable only in the infrared.
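The infrared signature follows from simple thermodynamics: a shell that intercepts its star's entire luminosity must re-radiate that power as waste heat at the shell's much lower temperature. A minimal sketch, assuming a solar-luminosity star, a shell at 1 AU, and radiation from the outer surface only:

```python
import math

L_SUN = 3.828e26      # solar luminosity, W
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.496e11         # astronomical unit, m
WIEN_B = 2.898e-3     # Wien displacement constant, m K

R = 1.0 * AU
# Thermal equilibrium: L = 4*pi*R^2 * sigma * T^4, solved for T
T = (L_SUN / (4 * math.pi * R ** 2 * SIGMA)) ** 0.25
peak_microns = WIEN_B / T * 1e6  # Wien's law, peak wavelength in microns
print(f"Shell temperature ~{T:.0f} K, emission peaking near {peak_microns:.1f} microns")
```

A shell at roughly 400 K peaks in the mid-infrared, several microns redward of visible light, which is exactly the territory covered by surveys like IRAS and WISE.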

Both Carrigan and the researchers in the Glimpsing Heat from Alien Technologies (G-HAT) project at Penn State point out that there are natural phenomena that could mimic the Fermi bubble. In a recent paper, the G-HAT team led by Jason Wright mentions a kind of galaxy known as a flocculent spiral as a case in point. Unlike the classic spiral with well-defined structure, these are galaxies with discontinuous spiral arms. What might be perceived as a ‘bubble’ structure here would almost certainly be a natural feature.


Image: NGC 4414, a flocculent spiral galaxy in an image taken by the Hubble Space Telescope. It would be tricky business to find the signature of a Fermi bubble here given the lack of definition in the spiral arms. A bright foreground star from our Milky Way Galaxy shines in the foreground of the image. Credit: Olivier Vallejo (Observatoire de Bordeaux), HST, ESA, NASA.

Galaxy in Motion

But I think the G-HAT critique of the Fermi bubble idea truly gains strength when we consider the motion of stars in the galaxy vs. the times needed for galactic colonization to occur. For we have to remember that when we’re dealing with a galaxy of stars over billions of years, we have to set the galaxy in motion. In a 2014 paper cited on Monday, Wright and company note this:

The static model of stars, in which a supercivilization can be said to occupy a compact and contiguous region of space, is a reasonable approximation for short times (≲ 10^5 years) and in the case of fast ships (with velocities in significant excess of the typical thermal or orbital velocities of the stars, so ∼ 10^-2 c). In such cases, the stars essentially sit still while the ships move at a significant fraction of c and populate a small region of the galaxy in some small multiple of the region’s light-crossing time.

Remember that the shorter the period for colonization, the briefer the ‘window’ for finding a Fermi bubble. But would such bubbles be apparent even assuming the slowest possible expansion?

The G-HAT team’s work makes a compelling case that they would not. For longer times, and assuming slower ships, the static model fails and fails badly. Stars do not stay in one place, and galactic rotation muddles the works. The G-HAT paper considers what it calls ‘conservative timescales’ for a ‘slow’ colonization of the Milky Way. We can use this work to consider a maximum galaxy colonization time to give us a sense of how apparent galactic colonization would be. It also has ramifications, and significant ones, for Michael Hart’s view that we are alone in the galaxy, but I’m not going to stray from the Fermi bubble question in this post.

Imagine that a single spacefaring civilization emerges that uses colony ships traveling at 10^-4 c, a speed not so different from our own interplanetary probes. Also assume a very slow launch rate, so that a ship is launched every 10^4 years, with a maximum range of 10 parsecs. This is slow travel indeed: The travel time to the nearest stars in this scenario is roughly 10^5 years, a time during which 10 more ships will be launched. The paper explains that this travel speed is comparable to the velocity dispersion of stars in the galactic midplane, a fact that brings new stars into range of the colony ships.
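The arithmetic is easy to check. A quick sketch using the scenario's assumed numbers (10^-4 c, one launch every 10^4 years, 10-parsec range):

```python
LY_PER_PC = 3.26            # light-years per parsec
SPEED_C = 1e-4              # assumed ship speed, fraction of light speed
LAUNCH_INTERVAL_YR = 1e4    # one colony ship launched every 10^4 years

def trip_years(distance_pc):
    """Travel time in years: distance in light-years divided by speed in c."""
    return distance_pc * LY_PER_PC / SPEED_C

near = trip_years(3.0)      # a typical nearest-star distance, ~3 pc
print(f"3 pc hop: {near:,.0f} yr, with {near / LAUNCH_INTERVAL_YR:.0f} "
      f"more ships launched in the meantime")
print(f"10 pc hop (maximum range): {trip_years(10.0):,.0f} yr")
```

A ~3 pc hop takes on the order of 10^5 years, during which roughly ten more ships depart, matching the figures in the text.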

This is an expansion without pause because as the stars mix locally, a stellar system can continue to populate the ten nearest stars every 10^5 years:

To first order, the stellar system can thus continue to populate the 10 nearest stars every 10^5 years, without immediately saturating its neighborhood with colonies or the need to launch faster or longer-lived colony ships to continue its expansion. Further, arrival of the colony ships at the nearby stars should not be modeled as a pause in the expansion of the civilization. Rather, the colonies themselves will continue to travel at these speeds with respect to the home stellar system, and themselves encounter fresh stars for colonization every 10^5 years, during which time they can also launch 10 colony ships.

Using halo stars, which have high velocities relative to the disk, for gravitational assists can provide a boost in cruise speed. We wind up with the capability of crossing the galaxy on a galactic rotational timescale. Here is a model of slow expansion that is anything but the uniform growth imagined in a static field of stars:

The slow expansion of an ETI should thus be modeled not as an expanding circle or sphere, subject to saturation and “fronts” of slower-expanding components of the supercivilization. A better model is as the mixing of a gas, as every colonizing world populates the stars that come near it, and those stars disperse into the galaxy in random directions, further “contaminating” every star they come near. If halo stars are themselves colonizable, then those that counter-rotate and remain near the plane will provide even faster means of colonization, since they will encounter ∼ 10 times as many stars per unit time as disk stars.

Here again we note the key fact that this stellar motion obscures any well-defined Fermi bubble:

Non-circular orbits also provide significant radial mixing, and Galactic shear provides an additional source of mixing that is comparable to that of the velocity dispersion of the disk stars once the colonies have spread to ∼ 1/10 of the galaxy’s size, or ∼ 1 kpc from the home stellar system.

Remember, these are extremely conservative assumptions, and they still show that when a civilization begins to colonize its nearest stars, it will populate the entire galaxy in no more than 10^8 to 10^9 years. The maximum timescale for galactic colonization is found to be on the order of a galactic rotation (10^8 years) even for present-day probe speeds. This has implications for the detectability of Fermi bubbles, for on a rotational timescale, such bubbles will be subject to rotational shear and thermal motions that disperse and ‘mix’ them. Or as Centauri Dreams regular Eniac put it in a comment yesterday, “Such bubbles would be sheared into streamers in relatively short order. The spread of civilization would look more like milk stirred into coffee than a clearly delineated expanding bubble.”

The upshot here is that it will be only during a fairly brief period of a galaxy’s history that a spacefaring civilization will have populated only a single contiguous part of that galaxy. The length of that time depends upon how fast the civilization is capable of expanding — the faster the expansion, the shorter the time to observe the interim period between Kardashev Types II and III. The transition between this era and the galaxy-spanning civilization to follow is, by galactic standards, relatively brief. And if we assume the slowest possible expansion, our Fermi bubbles would be quickly obscured by natural stellar motion within the galaxy. Fermi bubbles, if they do exist, are going to be exceedingly hard to find.

The paper is Wright et al., “The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. I. Background and Justification,” The Astrophysical Journal Vol. 792, No. 1 (2014), p. 26 (abstract / preprint). I consider the SETI work that Jason Wright and his colleagues Matthew Povich and Steinn Sigurðsson are doing with the Glimpsing Heat from Alien Technologies project to be ground-breaking, and plan to check in with it often.



SETI: Are ‘Fermi Bubbles’ Detectable?

by Paul Gilster on May 11, 2015

I’m enough of a perfectionist that when I get something wrong, I can’t rest easy until I figure out how and why I missed the story. Such a case occurred in an article I wrote for Aeon Magazine called Distant Ruins. The article covered the rise of so-called ‘Dysonian SETI,’ which is adding an entirely new dimension to current radio and optical methods by looking into observational evidence for advanced civilizations in our abundant astronomical data.

In the story, I homed in at one point on the work that Jason Wright and his colleagues Matthew Povich and Steinn Sigurðsson are doing with the Glimpsing Heat from Alien Technologies (G-HAT) project at Penn State. Keith Cooper went over the basics of this effort on Friday, putting his own spin on the group’s recent search of 100,000 galaxies. For more background, see Jason Wright’s Glimpsing Heat from Alien Technologies essay.

I noted in the Aeon article that the G-HAT team was examining infrared data from the Wide-field Infrared Survey Explorer (WISE) and the Spitzer Space Telescope in search of the signs of an advanced civilization. What I had wrong in my description was the statement that “Wright’s group is also looking for ‘Fermi bubbles’, patches of a galaxy that show higher infrared emissions than the rest, which could be a sign that a civilisation is gradually transforming a galaxy as it works its way across it.” I know I drew the idea of Fermi bubbles from Richard Carrigan’s work, and generalized from there, but generalizing was a mistake, because it turns out that the G-HAT team doesn’t believe Fermi bubbles are something we could detect.

Below is the ‘Whirlpool’ galaxy, M51, a beautiful image and a useful object for study because we are looking at a spiral galaxy in many ways like the Milky Way from an angle that lets us see it face-on. Could we see Fermi bubbles here?


Image: The Whirlpool Galaxy is a classic spiral galaxy. At only 30 million light years distant and fully 60 thousand light years across, M51, also known as NGC 5194, is one of the brightest and most picturesque galaxies in the sky. The above image is a digital combination of a ground-based image from the 0.9-meter telescope at Kitt Peak National Observatory and a space-based image from the Hubble Space Telescope. Credit: N. Scoville (Caltech), T. Rector (U. Alaska, NOAO) et al., Hubble Heritage Team, NASA.

Richard Carrigan has studied this galaxy closely, looking for such Fermi bubbles, which he described in a 2010 paper. Here’s my description in Toward an Interstellar Archaeology, written for these pages in the same year:

Suppose a civilization somewhere in the cosmos is approaching Kardashev type III status. In other words, it is already capable of using all the power resources of its star (4 × 10^26 W for a star like the Sun) and is on the way to exploiting the power of its galaxy (4 × 10^37 W). Imagine it expanding out of its galactic niche, turning stars in its stellar neighborhood into a series of Dyson spheres. If we were to observe such activity in a distant galaxy, we would presumably detect a growing void in visible light from the area of the galaxy where this activity was happening, and an upturn in the infrared. Call it a ‘Fermi bubble.’

Carrigan (Fermi National Accelerator Laboratory) studied M51 and concluded that there were no unexplained ‘bubbles’ at the level of 5 percent of the galactic area. The Whirlpool galaxy seems like an ideal place to mount such a search given its orientation towards us. A Fermi bubble, if such things exist, might manifest itself as a void in the visible light we see in the image.

Carrigan talked about an expanding front of colonization as an advanced civilization moved through its galaxy, engulfing the galaxy on a time scale comparable to the galaxy’s rotation period or even less. But M51 produced no ‘bubbles,’ and James Annis would suggest that elliptical, rather than spiral, galaxies might be a better place to look for Fermi bubbles because ellipticals exhibit little structure, so that a potential void would stand out.

Here’s Carrigan in the 2010 paper (citation below) on how a civilization on its way to Kardashev Type III status might proceed:

If it was busily turning stars into Dyson spheres the civilization could create a “Fermi bubble” or void in the visible light from a patch of the galaxy with a corresponding upturn in the emission of infrared light. This bubble would grow following the lines of a suggestion attributed to Fermi that patient space travelers moving at 1/1000 to 1/100 of the speed of light could span a galaxy in one to ten million years. Here “Fermi bubble” is used rather than “Fermi void”, in part because the latter is also a term in solid state physics and also because such a region would only be a visible light void, not a matter void.

Wright and the G-HAT team are not persuaded by Carrigan’s Fermi bubbles. For one thing, as Carrigan has noted himself, bubble-like structures are not unusual in extragalactic astronomy, and spiral galaxies include areas that might mimic a void yet would be hard to regard as anything but natural. In one of their recent papers, the G-HAT researchers add that, with galactic arm widths on the order of a kiloparsec, it is difficult to identify structures below this size scale.
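To put that kiloparsec scale in observational terms, a small-angle estimate shows how large a ~1 kpc structure would appear in M51, using the ~30 million light-year distance quoted in the image caption above (a sketch, not a calculation from the paper):

```python
# Angular size of a ~1 kpc structure at M51's distance, via the
# small-angle approximation: theta = size / distance.
import math

LY_PER_PC = 3.26156                 # light-years per parsec
distance_pc = 30e6 / LY_PER_PC      # M51 distance (~30 Mly) in parsecs
bubble_pc = 1_000                   # a ~1 kpc structure

theta_rad = bubble_pc / distance_pc
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"~{theta_arcsec:.0f} arcseconds")
```

A few tens of arcseconds is well within a telescope’s resolving power, so the difficulty is not resolution but distinguishing such a patch from the galaxy’s own kiloparsec-scale structure.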

The Annis idea, therefore, seems more useful, but for now let’s home in on that word ‘void.’ In the Aeon story, I referred to VIRGOHI21 as a galaxy that contains a ‘void.’ But that’s a mistake, for as Jason Wright explained in a recent email, VIRGOHI21 has no emissions at any wavelength except 21 cm. It may, in fact, be a starless or ‘dark’ galaxy, one composed largely of dark matter, although the nature of the object is still controversial. The G-HAT team, according to Wright, has studied VIRGOHI21 and found no infrared emission.

In any case, as Wright explained, the word ‘void’ isn’t appropriate, for galaxies do not actually contain them. Areas where there has been no star formation for the past 10 million years or so may manifest themselves as darker lanes between the spiral arms, and dust lanes may also appear dark, but Wright does not believe the shape of these darker lanes is consistent with the spread of a civilization. Either way, these are not voids: they contain just as many stars as other regions of the galaxy. So detecting Fermi bubbles gets to be more and more problematic.

Fermi bubbles would be hard to detect for other reasons as well, as explained by the G-HAT team and presented in their recent work. This is intriguing stuff, having to do with the time scales involved in the spread of a civilization and the motions of stars in that period — these ‘bubbles’ would not be static! I want to look at this issue next but probably won’t be able to get the piece written and published before Wednesday due to an intersection of competing duties elsewhere.

The Carrigan paper is “Starry Messages: Searching for Signatures of Interstellar Archaeology,” JBIS Vol. 63 (2010), p. 90 (preprint). The G-HAT paper I am discussing today and on Wednesday is Wright et al., “The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. I. Background and Justification,” The Astrophysical Journal Vol. 792, No. 1 (2014), p. 26 (abstract / preprint).



Project Dragonfly Design Competition Funded

by Paul Gilster on May 11, 2015

Andreas Hein recently wrote up the Project Dragonfly design competition, which has been running as a Kickstarter project. Leveraging advances in miniaturization and focusing on laser-beamed lightsail technologies, Project Dragonfly aims to study the smallest possible spacecraft. From the Kickstarter announcement:

Project Dragonfly builds upon the recent trend of miniaturization of space systems. Just a few decades ago, thousands of people were involved in developing the first satellite Sputnik. Today, a handful of university students are able to build a satellite with the same capability as Sputnik, which is much cheaper and weighs hundreds of times less than the first satellite. We simply think further. What could we do with the technologies in about 20-30 years from now? Would it be possible to build spacecraft that can go to the stars but are as small as today’s picosatellites or even smaller?


You can read about the competition in Andreas’ post Project Dragonfly: Design Competitions and Crowdfunding. He tells me that the Kickstarter campaign has been fully funded since last Friday. But those interested in supporting the effort further can still do so for another three days. You can access the campaign at https://www.kickstarter.com/projects/1465787600/project-dragonfly-sail-to-the-stars.