LightSail Aloft!

One of the joys of science fiction is the ability to enter into conjectured worlds at will, tweaking parameters here and there to see what happens. I remember talking a few years ago to Jay Lake, a fine writer especially of short stories who died far too young in 2014. Jay commented that while it was indeed wonderful to move between imagined worlds as a reader, it was even more wondrous to do so as a writer. I’ve mostly written non-fiction in my career, but the few times I’ve done short stories, I’ve experienced a bit of this ‘world-building’ sense of possibility.

Even so, it’s always striking how science and technology keep moving in ways that defy our expectations. Take yesterday’s launch of The Planetary Society’s crowd-funded LightSail, which went aloft atop a United Launch Alliance Atlas V from Cape Canaveral. LightSail violates expectations on a number of fronts. For one thing, there is the crowd-funding itself, a consequence of an Internet era that science fiction writers lustily engaged, but one that entered homes on desktop computers SF had trouble anticipating.

My old saying applies: it’s the business of the future to surprise us, even those of us who keep thinking about the future every day. Another LightSail surprise is its size. Science fiction tales of solar sails date back to Cordwainer Smith’s wondrous “The Lady Who Sailed the Soul” and Arthur C. Clarke’s “The Wind from the Sun,” and we’ve looked at a number of the early stories in these pages over the years. But the imagined sails in those days were vast, just like Robert Forward’s gigantic designs, and I can’t think of anyone back then who anticipated matching sails with tiny satellites — CubeSats — which have brought space capabilities down from the level of government organizations to small university groups.


Image: The launch of LightSail aboard an Atlas V, as captured by remote camera on May 20. Credit: Navid Baraty / The Planetary Society.

So we have a CubeSat about the size of a loaf of bread that is about to deploy a sail measuring 32 square meters. CubeSats are cheap, and while they can’t mount missions of the complexity of a Juno or a Cassini, I can see a robust future for them. The beauty of The Planetary Society’s effort here is that while CubeSats can be readily orbited, they’ve had no real propulsion capabilities. Until now. So we’re not testing just one sail. We’re testing a broader concept.

Can we get a CubeSat to another planet? I can see no reason why not if it turns out that the solar sail strategy employed here does the job. And if we can get one CubeSat to another planet, we can surely get more. Thus the possibility of future missions designed around ‘swarms’ of CubeSat descendants, deployed on missions in which the components of a much larger spacecraft are effectively distributed among a host of carriers, all driven by solar photon momentum. Perhaps LightSail is the first step in making such a vision a reality.

Remember, too, that LightSail was launched as only one payload among many. Much media attention went into the launch of the X-37B, understandable because the small space plane has been operated with relative secrecy. But the Atlas V carrying LightSail also carried several other CubeSats into space. Contrast this with the early days of the space program, when each rocket lifted a single payload, and consider where miniaturization and improved design have begun to take us. With Mason Peck’s ‘sprites,’ we’re now exploring an even smaller realm some call ‘satellites on a chip,’ where the idea of swarm operations takes on a whole new luster.

We have about four weeks to wait before LightSail attempts deployment of its Mylar sail. Even then the craft will quickly be pulled back into the Earth’s atmosphere, returning along the way images and data on spacecraft performance that will flow to the ground stations at Cal Poly San Luis Obispo and Georgia Tech (LightSail was designed by San Luis Obispo firm Stellar Exploration, Inc.). Data return has already begun. You’ll want to follow Jason Davis’ updates on The Planetary Society’s site as this story unfolds. LightSail’s first telemetry file can be downloaded — according to Jason, the early values appear to be ‘nominal or near predicted ranges.’ Here’s the one item that could be problematic:

The team’s only major concern is a line of telemetry showing the indicator switches for solar panel deployment have been triggered. (Look for line 77 in the telemetry file—the “f” is a hexadecimal value indicating all switches were released.) Under normal circumstances, the solar panels do not open until the sail deployment sequence starts, because the sails have a tendency to start billowing out of their storage cavities.

This telemetry reading, however, does not necessarily mean the panels are open. The switches were once inadvertently triggered during vibration testing, so it’s possible they popped loose during the ride to orbit. We’ll know for sure after flight day four, when we test out the camera system. This is one time we don’t want to see a pretty picture of Earth—it would mean the panels are open.
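For readers curious about that hexadecimal ‘f’: a single hex digit encodes four bits, so ‘f’ (binary 1111) reads as all four switches released. Here is a minimal sketch of the decoding; the four-switch, one-bit-per-panel layout is an assumption for illustration, not the documented LightSail telemetry format:

```python
# Decode a one-digit hex telemetry field into per-switch booleans.
# The four-switch, one-bit-per-panel layout is assumed for illustration;
# the actual LightSail telemetry format may differ.

def decode_panel_switches(hex_digit: str, n_switches: int = 4):
    value = int(hex_digit, 16)          # 'f' -> 15 -> binary 1111
    return [bool(value >> i & 1) for i in range(n_switches)]

print(decode_panel_switches("f"))       # [True, True, True, True]: all released
print(decode_panel_switches("0"))       # [False, False, False, False]: all stowed
```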

I’ll be checking in with Jason’s blog frequently during the mission as we get closer to sail deployment. Meanwhile, be aware that the second iteration of LightSail is scheduled for a 2016 flight, this one a full demonstration of solar sailing in Earth orbit, with launch aboard a SpaceX Falcon Heavy to an orbit of about 720 kilometers. The Kickstarter campaign supporting the LightSail project can be accessed here. The level of support that has emerged is encouraging indeed, as success with LightSail will energize the entire community of sail researchers.


Enter the ‘Warm Titan’

Our definition of the habitable zone is water-based, focusing on planetary surfaces warm enough that liquid water can exist there. New work by Steven Benner (Foundation for Applied Molecular Evolution) and colleagues considers other kinds of habitable zones, specifically those supporting hydrocarbons, which can be liquids, solids or gases depending on the ambient temperature and pressure. Benner’s work focuses on compounds called ethers that can link together to form polyethers, offering life a chance to emerge and adapt in hydrocarbon environments.

Out of this comes the notion of ‘warm Titans,’ moons with hydrocarbon seas that are not made up of methane. We have no such worlds in our Solar System, and they needn’t be moons of gas giants to fit the bill. Think of them, as this Astrobio.net news release does, as oily Earths drenched in hydrocarbons like propane or octane. Although ethers do not appear in any genetic molecules on Earth, they may be able to fill the role that DNA and RNA play on such worlds.

The nucleobases in the four-letter code of DNA and RNA can mutate even as the molecule’s form is retained, and out of this come the proteins that help life interact with and adapt to its environment. Like DNA, ethers show repeating elements, in this case carbon and oxygen, in their chemical backbones. But unlike DNA and RNA, they have no outward negative charge of the kind that lets them dissolve and float freely so they can interact with other biomolecules. Says Benner:

“This is the central point of the ‘polyelectrolyte theory of the gene,’ which holds that any genetic biopolymer able to support Darwinian evolution operating in water must have an ever-repeating backbone charge. The repeating charges so dominate the physical behavior of the genetic molecule that any changes in the nucleobases that influence genetic information have essentially no significant impact on the molecule’s overall physical properties.”

Molecules like DNA and RNA cannot dissolve in a hydrocarbon ocean, making them unable to provide the necessary interactions on worlds like Titan. But ethers strung together into complex polyethers, while lacking an outward charge, do have internal charge repulsions that allow small parts of the molecule to function in ways similar to DNA and RNA nucleobases.


Image: An artist’s impression of the low-lit surface of Titan under the moon’s thick, orange haze, with liquid hydrocarbons pooling and eroding the surface much like water on Earth. Credit: Steven Hobbs (Brisbane, Queensland, Australia).

Benner’s experiments with ethers show that they are not soluble at temperatures as low as Titan’s, making Saturn’s largest moon an unlikely venue for such life. But while methane has a narrow liquid range (between -184 and -173 degrees Celsius), we can still put ethers to work in warmer hydrocarbon oceans. Thus the emergence of the ‘warm Titan,’ a world perhaps covered with oceans not of methane but of propane, which stays liquid over a much broader range (-184 to -40 degrees Celsius). Octane turns out to be even better, not freezing until it reaches -57 degrees Celsius and not vaporizing until it hits 125 degrees.
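Those numbers amount to a lookup table for a hydrocarbon habitable zone. A minimal sketch, using only the liquid ranges quoted in this article (approximate and pressure-dependent), that reports which solvents stay liquid at a given surface temperature:

```python
# Liquid ranges (degrees Celsius) as quoted above; values are
# approximate and pressure-dependent.
LIQUID_RANGES_C = {
    "methane": (-184, -173),
    "propane": (-184, -40),
    "octane":  (-57, 125),
    "water":   (0, 100),
}

def liquid_solvents(surface_temp_c: float):
    """Return the solvents that are liquid at the given temperature."""
    return [name for name, (melt, boil) in LIQUID_RANGES_C.items()
            if melt <= surface_temp_c <= boil]

print(liquid_solvents(-179))   # Titan-like: ['methane', 'propane']
print(liquid_solvents(-100))   # a 'warm Titan': ['propane']
print(liquid_solvents(15))     # Earth-like: ['octane', 'water']
```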

Thus hydrocarbon molecules larger than methane come to the rescue. Once again we reconsider the notion of a habitable zone. Certainly in terms of life that we are familiar with, liquid water at the surface is a prerequisite. But as we’ve seen on the icy moons of our system’s gas giants, oceans can provide subsurface environments where life could conceivably emerge. Now we have to consider a hydrocarbon habitable zone where propane or octane can exist in a liquid state. “Virtually every star,” says Benner, “has a habitable zone for every solvent.”

The paper is McLendon et al., “Solubility of Polyethers in Hydrocarbons at Low Temperatures. A Model for Potential Genetic Backbones on Warm Titans,” Astrobiology Vol. 15, Issue 3 (11 March 2015). Thanks to Ivan Vuletich for the pointer to this one.


Exoplanets: The Hunt for Circular Orbits

If you’re looking for planets that may be habitable, eccentric orbits are a problem. Vary the distance from the star enough and the surface goes through extreme swings in temperature. In our own Solar System, planets tend to follow circular orbits. Mercury has the highest eccentricity, at 0.21 (on a scale where 0 is a completely circular orbit), while the other seven planets average a modest 0.04. But much of our work on exoplanets has revealed gas giants with a wide range of eccentricities, and we’ve even found one (HD 80606b) with an eccentricity of 0.927. As far as I know, this is the current record holder.

These values have been measured using radial velocity techniques that most readily detect large planets close to their stars, although there is some evidence for high orbital eccentricities for smaller worlds. Get down into the range of Earth and ‘super-Earth’ planets, however, and the RV signal is tiny. But a new paper from Vincent Van Eylen (Aarhus University) and Simon Albrecht (MIT) goes to work on planetary transits. It’s possible to work with Transit Timing Variations to make inferences about eccentricity, but these appear only in a subset of transiting systems.

Instead, Van Eylen and Albrecht look at transit duration. The length of a transit varies with the eccentricity and orientation of the orbit. By measuring how long a transit lasts and weighing the result against the known properties of the star, the eccentricity of the transiting planet can be measured, as the paper explains:

Here we determine orbital eccentricities of planets making use of Kepler’s second law, which states that eccentric planets vary their velocity throughout their orbit. This results in a different duration for their transits relative to the circular case: transits can last longer or shorter depending on the orientation of the orbit in its own plane, the argument of periastron (ω)… Transit durations for circular orbits are governed by the mean stellar density (Seager & Mallen-Ornelas 2003). Therefore if the stellar density is known from an independent source then a comparison between these two values constrains the orbital eccentricity of a transiting planet independently of its mass…
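The relation at work here is compact: for the same period, an eccentric orbit carries the planet across the stellar disk faster or slower than the circular case, by a factor set by the eccentricity and the argument of periastron. Below is a minimal sketch of that first-order scaling, which is standard in this kind of analysis; the paper’s full model also folds in stellar density and impact parameter, omitted here:

```python
import numpy as np

def duration_ratio(e: float, omega_deg: float) -> float:
    """Transit duration relative to a circular orbit with the same period.

    First-order relation used in duration-based eccentricity analyses:
    T / T_circ ~ sqrt(1 - e^2) / (1 + e * sin(omega)),
    where omega is the argument of periastron.
    """
    omega = np.radians(omega_deg)
    return np.sqrt(1 - e**2) / (1 + e * np.sin(omega))

# A circular orbit gives exactly 1; an eccentric one runs short or long
# depending on orientation:
print(duration_ratio(0.0, 90))    # 1.0
print(duration_ratio(0.3, 90))    # ~0.73: transit near periastron, shorter
print(duration_ratio(0.3, 270))   # ~1.36: transit near apastron, longer
```

Comparing the observed duration against the value implied by an independently measured stellar density is what picks out the eccentricity.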

Using these methods, the researchers have measured the eccentricity of 74 small extrasolar planets orbiting 28 stars, discovering that most of their orbits are close to circular. The systems under study were chosen carefully to avoid false positives — the team primarily used confirmed multi-transiting planet systems around bright host stars, and pulled in asteroseismological data — information on stellar pulsations — to help determine stellar parameters. Asteroseismology can refine our estimates of a star’s mass, radius and density. The stars in the team’s sample have all been characterized in previous asteroseismology studies.


Image: Researchers measuring the orbital eccentricity of 74 small extrasolar planets have found their orbits to be close to circular, similar to the planets in the Solar System. This is in contrast to previous measurements of more massive exoplanets where highly eccentric orbits are commonly found. Credit: Van Eylen and Albrecht / Aarhus University.

No Earth-class planets appear in the team’s dataset, but the findings cover planets with an average radius of 2.8 Earth radii, while orbital periods range from 0.8 to 180 days. Van Eylen and Albrecht conclude that it is plausible that low eccentricity orbits would be common in solar systems like ours, a finding that would have ramifications for habitability and the location of the habitable zone.

Interestingly, when weighed against parameters like the host star’s temperature and age, no trend emerges. But in systems with multiple transiting planets on circular orbits, Van Eylen and Albrecht believe that the density of the host star can be reliably estimated from transit observations. This information can help to rule out false positives, a technique they use to validate candidate worlds in several systems — KOI-270, now Kepler-449, and KOI-279, now Kepler-450, as well as KOI-285.03, now Kepler-92d, in a system with previously known planets.

The work has helpful implications for upcoming space missions that will generate the data needed for putting these methods to further use:

We anticipate that the methods used here will be useful in the context of the future photometry missions TESS and PLATO, both of which will allow for asteroseismic studies of a large number of targets. Transit durations will be useful to confirm the validity of transit signals in compact multi-planet systems, in particular for the smallest and most interest[ing] candidates that are hardest to confirm using other methods. For systems where independent stellar density measurements exist the method will also provide further information on orbital eccentricities.

The TESS mission (Transiting Exoplanet Survey Satellite) is planned for launch in 2017, and is expected to find more than 5000 exoplanet candidates, including 50 Earth-sized planets around relatively nearby stars. PLATO (PLAnetary Transits and Oscillations of stars) will likewise monitor up to a million stars looking for transit signatures, with launch planned by 2024.

The paper is Van Eylen and Albrecht, “Eccentricity from transit photometry: small planets in Kepler multi-planet systems have low eccentricities,” accepted for publication at The Astrophysical Journal (preprint). An Aarhus University news release is available.


Spacecoach on the Stage

I’m glad to see that Brian McConnell will be speaking at the International Space Development Conference in Toronto this week. McConnell, you’ll recall, has been working with Centauri Dreams regular Alex Tolley on a model the duo call ‘Spacecoach.’ It’s a crewed spacecraft using solar electric propulsion, one built around the idea of water as propellant. The beauty of the concept is that we normally treat water as ‘dead weight’ in spacecraft life support systems. It has a single use, critical but heavy and demanding a high toll in propellant.

The spacecoach, on the other hand, can use the water it carries for radiation shielding and climate control within the vessel, while crew comfort is drastically enhanced in an environment where water is plentiful and space agriculture a serious option. Along with numerous other benefits that Brian discusses in his recent article A Stagecoach to the Stars, mission costs are sharply reduced by constructing a spaceship that is mostly water. McConnell and Tolley believe that cost reductions of one or two orders of magnitude are possible. Have a look, if you haven’t already seen it, at Alex’s Spaceward Ho! for an imaginative look at what a spacecoach can be.

ISDC is a good place to get this model before an audience of scientists, engineers, business contacts and educators from the military, civilian, commercial and entrepreneurial sectors. ISDC 2014 brought over 1000 attendees into the four-day event, and this year’s conference brings plenary talks and speakers from top names in the field: Buzz Aldrin, Charles Bolden, Neil deGrasse Tyson, Peter Diamandis, Lori Garver, Richard Garriott, Bill Nye, Elon Musk and more. My hope is that a concept as novel but also as feasible as the spacecoach will resonate.


Image: Ernst Stuhlinger’s concept for a solar powered ship using ion propulsion, a notion now upgraded and highly modified in the spacecoach concept, which realizes huge cost savings by its use of water as reaction mass. This illustration, which Alex Tolley found as part of a magazine advertisement, dates from the 1950s.

Towards Building an Infrastructure

We have to make the transition from expensive, highly targeted missions with dedicated spacecraft to missions that can be flown with adaptable, low-cost technologies like the spacecoach. Long-duration missions to Mars and the asteroid belt will be rendered far more workable once we can offer a measure of crew safety and comfort not available today, with all the benefits of in situ refueling and upgradable modularity. Building up a Solar System infrastructure that can one day begin the long expansion beyond demands vehicles that can carry humans on deep space journeys that will eventually become routine.

The response to the two spacecoach articles here on Centauri Dreams has been strong, and I’ll be tracking the idea as it continues to develop. McConnell and Tolley are currently working on a book for Springer that should be out by late summer or early fall. You can follow the progress of the idea as well on the Spacecoach.org site, where the two discuss a round-trip mission from Earth-Moon Lagrange point 2 (EML-2) to Ceres, a high delta-v mission in which between 80 and 90 percent of the mission cost is the cost of delivering water to EML-2.

The idea in this and other missions is to use a SpaceX Falcon Heavy to launch material to low-Earth orbit, with a solar-electric propulsion spiral out to EML-2 (the crew will later take a direct chemical propulsion trajectory to EML-2 to minimize exposure time in the Van Allen belts). The water cost is about $3000 per kilogram. The Falcon Heavy should be able to deliver 53,000 kilograms to low-Earth orbit per launch. McConnell and Tolley figure about 40,000 kilograms of this will be water, while the remainder will be other equipment including the module engines and solar arrays. From EML-2, various destinations can be modeled, with values adjustable within the model so you can see how costs change with different parameters.

The online parametric model has just been updated to calculate mission costs as a function of the number of Falcon Heavy launches required. You can see the new graph below (click on it to enlarge). At a specific impulse of 2000 seconds or better for the solar-electric engines, only two launches are required for most missions, one taking the crew direct to EML-2, the other carrying the water and durable equipment on a spiral orbit out from LEO. Only the most ambitious destinations like Ceres require three launches. At $100 million per launch, even that mission is cheap by today’s spaceflight standards.

Image: Mission costs as a function of the number of Falcon Heavy launches required, from the updated spacecoach.org parametric model.
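The launch-count arithmetic behind that graph is simple enough to sketch. Here is a toy version of the costing logic using only the figures quoted above (53,000 kg to LEO per Falcon Heavy, $100 million per launch); the real spacecoach.org model folds in delta-v, specific impulse and propellant mass fractions that are omitted here:

```python
import math

FALCON_HEAVY_LEO_KG = 53_000     # payload per launch, as quoted above
COST_PER_LAUNCH_USD = 100e6      # as quoted above

def cargo_launches(water_kg: float, dry_equipment_kg: float):
    """Toy sketch: launch count and cost for the cargo leg, driven by how
    many Falcon Heavy flights the water plus hardware requires."""
    total_kg = water_kg + dry_equipment_kg
    launches = math.ceil(total_kg / FALCON_HEAVY_LEO_KG)
    return launches, launches * COST_PER_LAUNCH_USD

# The article's example load-out: ~40,000 kg of water plus ~13,000 kg of
# engines, arrays and other equipment fits on a single cargo launch; a
# separate crew launch to EML-2 makes two in total.
launches, cost = cargo_launches(40_000, 13_000)
print(launches, f"${cost / 1e6:.0f}M")   # 1 launch, $100M for the cargo leg
```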

Brian notes in a recent email that the launches do not need to be closely spaced, because the spiral transfer from LEO to EML-2 takes months to complete. The crew only goes when everything else is in place at EML-2. For more on this model, see spacecoach.org. I’ll be interested to hear how the idea is received at ISDC, and how the upcoming publication of the spacecoach book helps to put this innovative design for interplanetary transport on the map.


Doppler Worlds and M-Dwarf Planets

Finding small and possibly habitable worlds around M-dwarfs has already proven controversial, as we’ve seen in recent work on Gliese 581. The existence of GJ 581d, for example, is contested in some circles, but as Guillem Anglada-Escudé argues below, sound methodology turns up a robust signal for the world. Read on to learn why as he discusses the early successes of the Doppler technique and its relevance for future work. Dr. Anglada-Escudé is a physicist and astronomer who did his PhD work at the University of Barcelona on the Gaia/ESA mission, working on the mission simulator and data reduction prototype. His first serious observational venture, using astrometric techniques to detect exoplanets, was with Alan Boss and Alycia Weinberger during a postdoctoral period at the Carnegie Institution for Science. He began working on high-resolution spectroscopy for planet searches around M-stars during that time in collaboration with exoplanet pioneer R. Paul Butler. In a second postdoc, he worked at the University of Goettingen (Germany) with Prof. Ansgar Reiners, participating in the CRIRES+ project (an infrared spectrometer for the VLT/ESO), and joined the CARMENES consortium. Dr. Anglada-Escudé is now a Lecturer in Astronomy at Queen Mary University of London, working on detection methods for very low-mass extrasolar planets around nearby stars.

by Guillem Anglada-Escudé


The Doppler technique has been the driving force for the first fifteen years of extrasolar planet detection. The method is most sensitive to close-in planets, and many of its most exciting results come from planets around low-mass stars (also called M-dwarfs). Although these stars are substantially fainter than our Sun, the noise floor seems to be imposed by stellar activity rather than instrumental precision or brightness, meaning that small planets are more easily detected here than around Sun-like stars. In detection terms, the new leading method is space-transit photometry, brilliantly demonstrated by NASA’s Kepler mission.

Despite its efficiency, the transit method requires a fortunate alignment of the orbit with our line of sight, so planets around the closest stars are unlikely to be detected this way. In the new era of space-photometry surveys, and given all the caveats associated with accurate radial velocity measurements, the most likely role of the Doppler method for the next few years will be the confirmation of transiting planets and the detection of potentially habitable super-Earths around the nearest M-dwarfs. It is becoming increasingly clear that the Doppler method might be unsuitable for detecting Earth analogs, even around our closest sun-like neighbors. Unless there is an unexpected breakthrough in the understanding of stellar Doppler variability, nearby Earth-twin detection will have to wait a decade or two for the emergence of new techniques such as direct imaging and/or precision space astrometry. In the meantime, very exciting discoveries are expected from our reddish and unremarkable stellar neighborhood.

The Doppler years

We knew stars should have planets. After the Copernican revolution, it had been broadly acknowledged that Earth and our Sun occupy unremarkable places in the cosmos. Our Solar System was known to have nine planets, so it was only natural to expect them around other stars. After years of failed or ambiguous claims, the first solid evidence of planets beyond the Solar System arrived in the early 90’s. First came the pulsar planets (PSR B1257+12). Though the claims of their existence were well consolidated, these planets were regarded as space oddities. A pulsar, after all, is the remnant core of an exploded massive star, so whatever process leaves planets around one after such an event is unlikely to be the most universal channel of planet formation.

In 1995, the first planets around main sequence stars were reported. The hot Jupiters came by the hand of M. Mayor and D. Queloz (51 Peg, 1995), and shortly thereafter a series of gas giants were announced by the competing American duo G. Marcy and P. Butler (70 Vir, 47 UMa, etc.). These were days of wonder, and the Doppler method was the norm. In a few months, the count grew from nothing to several worlds. These discoveries became possible thanks to the ability to measure the radial velocities of stars at ~3 meters-per-second (m/s) precision, that is, human running speed. 51 Peg b periodically moves its host star at 50 m/s and 70 Vir b changes the velocity of its parent star by 300 m/s, so both became easily detectable once precision reached that level.

Lighter and smaller planets

Given the technological improvements, and solid proof that planets were out there in possibly large numbers, the exoplanet cold war ramped up. Large planet-hunting teams built up around Mayor & Queloz (Swiss) and Marcy & Butler (Americans) in a strongly competitive environment. Precision kept improving and, combined with longer time baselines, had yielded a few tens of gas giants by 2000. Then the first exoplanet transiting in front of its host star was detected. Unlike the Doppler method, the transit method measures the dip in brightness caused by a planet crossing in front of the star. Such alignment happens randomly, so a large number of stars (10,000+) needs to be monitored simultaneously to find planets using this technique.

Plans for such surveys quickly consolidated (TrES, HAT, WASP), and small (COROT) to medium-class space missions (NASA’s Kepler, ESA’s Eddington, later cancelled) started to be seriously considered. By 2004, the Doppler technique was turning up the first hot Neptunes (GJ 436b), and the first so-called super-Earths (GJ 876d, M ~ 7 Mearth) soon came onto the scene. Let me note that the first discoveries of such ‘smaller’ planets were made around the even more unremarkable small stars called M-dwarfs.

While not obvious at that moment, such a trend would later have serious consequences. Several hot Neptunes and super-Earths followed during the mid-2000’s, mostly coming from the large surveys led by the Swiss and American teams. By then the first instruments specifically designed to hunt for exoplanets had been built, such as the High Accuracy Radial velocity Planet Searcher (HARPS), by a large consortium led by the Geneva observatory and the European Southern Observatory (ESO). While the ‘American method’ relied on measuring the stellar spectrum simultaneously with the spectral features of an iodine gas cell, the HARPS concept consisted of stabilizing the hardware as much as possible. After ten years of HARPS operations, it has become clear that the stabilized instrument option outperforms the iodine designs, as it significantly reduces the data-processing effort needed to obtain accurate measurements (~1 m/s or better). Dedicated iodine spectrometers now in operation deliver comparable precision (APF, PFS), which seems to point toward a fundamental limit set by the stars rather than by the instruments.

Sun-like stars (G dwarfs) were massively favoured in the early Doppler surveys. While many factors were folded into target selection, there were two main reasons for this choice. First, sun-like stars were considered more interesting due to their similarity to our own star (the search for planets like our own), and second, M-dwarfs are intrinsically fainter, so the number of sufficiently bright targets is quite limited. For main sequence stars, luminosity grows as the 4th power of the mass, while apparent brightness falls as the square of the distance.

As a result, one quickly runs out of intrinsically faint objects bright enough to observe. Most of the stars we see in the night sky have A and F spectral types, some are distant supergiants (eg. Betelgeuse), and only a handful of sun-like G and K dwarfs are visible (the Alpha Centauri binary, Tau Ceti, Epsilon Eridani, etc.). No M-dwarf is bright enough to be seen with the naked eye. By setting a magnitude cut-off of V ~ 10, early surveys included thousands of yellow G-dwarfs, a few hundred orange K-dwarfs, and a few tens of red M-dwarfs. Even though M-dwarfs were clearly disfavoured in numbers, many ‘firsts’ and some of the most exciting exoplanet results have come from these ‘irrelevant’ tiny objects.

M-dwarfs have masses between 0.1 and 0.5 Msun and radii between 0.1 and 0.5 Rsun. Since their temperatures are known from optical and near-infrared photometry (~3500 K, compared to the Sun’s 5800 K), the basic physics of blackbody radiation shows that their luminosities lie between 0.1% and 5% of the Sun’s. As a result, orbits at which planets can keep liquid water on their surfaces are much closer in and have shorter periods. All things combined, one finds that a ‘warm’ Earth-mass planet would imprint a wobble of 1-2 m/s on an M-dwarf (versus the 0.1 m/s Earth imprints on the Sun), and the same planet would cause a ~0.15% dip in the starlight during transit (versus 0.01% for the Earth/Sun case).
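Those figures follow from Kepler’s third law and the standard Doppler semi-amplitude formula. A back-of-the-envelope sketch, assuming a circular edge-on orbit, luminosity scaling as the fourth power of stellar mass, and stellar radius scaling linearly with mass (crude but serviceable approximations for the lower main sequence):

```python
import numpy as np

G = 6.674e-11                         # SI units throughout
M_SUN, R_SUN = 1.989e30, 6.957e8
M_EARTH, R_EARTH = 5.972e24, 6.371e6
AU = 1.496e11

def warm_earth_signals(m_star_solar):
    """RV amplitude and transit depth of an Earth-mass planet receiving
    Earth-like insolation, assuming L ~ M^4 and R* ~ M* (solar units)."""
    a = m_star_solar**2 * AU          # HZ distance scales as sqrt(L) = M^2
    m_star = m_star_solar * M_SUN
    period = 2 * np.pi * np.sqrt(a**3 / (G * m_star))   # Kepler's third law
    k = (2 * np.pi * G / period)**(1 / 3) * M_EARTH / m_star**(2 / 3)
    depth = (R_EARTH / (m_star_solar * R_SUN))**2
    return period / 86400, k, depth

for m in (1.0, 0.5, 0.2):
    p_days, k, depth = warm_earth_signals(m)
    print(f"M* = {m:.1f} Msun: P = {p_days:5.1f} d, "
          f"K = {k:.2f} m/s, depth = {100 * depth:.3f}%")
# M* = 1.0 Msun: P ~ 365 d, K ~ 0.09 m/s, depth ~ 0.008%
# M* = 0.2 Msun: P ~ 6.5 d, K ~ 1.0 m/s, depth ~ 0.2%
```

That order-of-magnitude gain over the solar case is what makes warm M-dwarf planets the Doppler method’s natural hunting ground.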

Two papers by the Swiss group from 2007 and 2009 (Udry et al., http://adsabs.harvard.edu/abs/2007A%26A...469L..43U, Mayor et al. http://adsabs.harvard.edu/abs/2009A%26A...507..487M) presented evidence for the first super-Earth with realistic chances of being habitable, orbiting the M-dwarf GJ 581 (GJ 581d). Although its orbit was at first considered too cold, subsequent papers and climate simulations (for example, see Von Paris et al. 2010, http://cdsads.u-strasbg.fr/abs/2010A%26A...522A..23V) indicated that there was no reason why water could not exist on its surface given the presence of reasonable amounts of greenhouse gases. As of 2010, GJ 581d was considered the first potentially habitable planet found beyond the Solar System. The word potentially is key here. It simply acknowledges that, given the known information, the properties of the planet are compatible with a solid surface and sustainable liquid water over its lifetime. Theoretical considerations about the practical habitability of such planets are, yet again, a source of intense debate.

GJ 581 was remarkable in another important way. Its Doppler data could best be explained by the existence of (at least) four low-mass planets in orbits with periods shorter than ~2 months (the orbit of Mercury). A handful of other similar systems were known (or reported) in those days, including HD 69830 (3 Neptunes, G8V), HD 40307 (3 super-Earths, K3V) and 61 Vir (3 sub-Neptunes). These and many other Doppler planet reports from the large surveys led to the first occurrence rate estimates for sub-Neptune mass planets by ~2010. According to those (for example, see http://adsabs.harvard.edu/abs/2010Sci...330..653H), at least ~30% of stars hosted a super-Earth within the orbit of our Mercury. Simultaneously, the COROT mission started to produce its first hot rocky planet candidates (eg. COROT-7b) and the Kepler satellite was slowly building up its high quality space-based light curves.

What Kepler was about to reveal was even more amazing. Not only did ~30% of stars host ‘hot’ super-Earths; at least ~30% of stars hosted compact (highly co-planar) planetary systems of small planets, again with orbits interior to our Mercury’s. Thanks to this unexpected overabundance of compact systems, the Kepler reports of likely planets came in the thousands (famously known as Kepler Objects of Interest, or KOIs), which in astronomy means we can move from interesting individual objects to a fully mature discipline where statistical populations can be drawn. Today, the exoplanet portrait is smoothly covered by ~2000 reasonably robust detections, extending from sub-Earth sized planets with orbits of only a few hours (eg. Kepler-78b) up to thousands of days for those Jupiter analogs that (at the end of the day) have turned out to be rather rare (<5% of stars). Clustering of objects in the different regions of the mass-period diagram (see Figure 1) encodes the tale of planet formation and the origin of system architectures. This is where we are now in terms of detection techniques.


Figure 1: The exoplanet portrait (data extracted from exoplanet.eu, April 1st 2015). Short period planets are generally favoured by the two leading techniques (transits and Doppler spectroscopy), which explains why the left part of the diagram is the most populated. The ‘classic’ gas giants are on the top right (massive, long periods), and the bottom left is the realm of the hot Neptunes and super-Earths. The relative paucity of planets in some areas of this diagram tells us about important processes that formed them and shaped the architectures of the systems. For example, the horizontal gap between the Neptunes and the Jupiters is likely caused by the runaway accretion of gas once a planet grows a bit larger than Neptune in the protoplanetary nebula, quickly jumping into the Saturn mass regime. The large abundance of hot Jupiters at the top left is an observational bias due to high detection efficiency (large planets, short period orbits), but the gap between the hot Jupiters and the classical gas giants is not well understood and probably has to do with the migration process that drags hot Jupiters so close to their stars. Detection efficiency quickly drops to the right (longer periods) and toward the bottom (very small planets).

Having reached this point, and given the wild success of Kepler, we might ask what relevance the Doppler method retains for the detection of small planets. The transit method requires a lucky alignment of the orbit, and simple statistical arguments show that most transiting planets will be found around distant stars. The Doppler technique, by contrast, can achieve great precision on individual objects and detect planets irrespective of their orbital inclination (except in the rare cases when the orbit is close to face-on). The hunt for nearby planets therefore remains the niche of the Doppler technique. Small planets around nearby stars should enable unique follow-up opportunities (direct imaging attempts in 10-20 years) and transmission spectroscopy in the rare cases of transits.
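The ‘lucky alignment’ is pure geometry: for a randomly oriented orbit, the chance that a planet transits is roughly the stellar radius divided by the orbital distance. A quick sketch of why transit surveys must watch large, distant samples while Doppler work can concentrate on individual nearby stars:

```python
R_SUN, AU = 6.957e8, 1.496e11   # meters

def transit_probability(r_star_solar: float, a_au: float) -> float:
    """Geometric chance that a randomly oriented orbit transits: ~ R*/a."""
    return (r_star_solar * R_SUN) / (a_au * AU)

print(f"Earth-Sun analog:     {transit_probability(1.0, 1.0):.1%}")   # ~0.5%
print(f"Warm planet, M-dwarf: {transit_probability(0.2, 0.04):.1%}")  # ~2.3%
# Even at a few percent, thousands of stars must be monitored to catch a
# handful of transits, which pushes surveys toward distant, faint stars,
# while Doppler searches can target the nearest stars one by one.
```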

However, there are other reasons why nearby stars are really exciting. These are brand new worlds next to ours that might be visited one day. Nearby stars trigger the imagination of the public, science fiction writers, filmmakers and explorers. While the scientific establishment tends to deem this quality irrelevant, many of us still find such motivation perfectly valid. As in many other areas, this is not only about pure scientific knowledge but about exploration.

For those who prefer a results-per-dollar approach, the motivational aspect of nearby exoplanets cannot be ignored either. Modern mathematics and the physical sciences were broadly motivated by the need to better understand observations of the Solar System. Young scientists keep being attracted to space sciences and technology for the same reason (combined with the push from the film and video-game industries). A nearby exoplanet is not one more point in a diagram. It represents a place, a goal and a driver. Seen this way, reports of nearby Doppler detections (even tentative ones) still rival or surpass the social relevance of the exotic worlds in the distant Kepler systems. As long as there is public support and wonder for exploration, we will keep searching for evidence of nearby worlds. And to do this we need spectrometers.

Why is GJ 581d so relevant?

We have established that nearby M-dwarfs are great places to look for small planets. But there is a caveat. The rotation periods of mature stars are in the 20-100 day range, meaning that spots or features on the stellar surface will generate apparent Doppler signals in that same period range. After some years of simulations and solar observations, we think these spurious signals produce Doppler amplitudes between 0.5 and 3 m/s even for the quietest stars (the level is highly object dependent). Moreover, this variability is not strictly random, which causes all sorts of difficulties. In technical terms, structure in the noise is often referred to as correlated noise (or red noise, activity-induced variability, etc.).

Detecting a small planet is like trying to measure the velocity of a pan filled with boiling water by looking at its wiggling surface. If we can wait long enough, the surface motion averages out. But consecutive measurements over fractions of a second will not be random, and can be confused with periodic variability on those same timescales. The same happens with stars. We can get arbitrarily good precision (down to cm/s), but our measurements will also be tracing occasional flows and spectral distortions caused by the variable surface.

Going back to the boiling water example, we could in principle disentangle the true velocity from the jitter if we had access to more information, such as the temperature or the density of the water at each moment. Our hope is that the same approach can be applied to stars by looking at so-called ‘activity indicators.’ In the case of Gliese 581d, Robertson et al. subtracted an apparent correlation of the velocities with a chromospheric activity index. As a result, the signal of GJ 581d vanished, so they argued the planet was unlikely to exist (http://adsabs.harvard.edu/abs/2014Sci...345..440R).

However, in our response to that claim, we argued that one cannot simply remove possible effects from the observations one at a time. Instead, one needs to fold all the information into a comprehensive model of the data (http://adsabs.harvard.edu/abs/2015Sci...347.1080A). When we do that, the signal of GJ 581d shows up again as highly significant. This is a subtle point with far-reaching consequences. The activity-induced variability is in the 1-3 m/s regime, and the amplitude of the planetary signal is about 2 m/s. Unless activity is modeled at the same level as the planetary signal, there is no hope of obtaining a robust detection. By comparison, the amplitude of the signal induced by the Earth on the Sun is 10 cm/s, while the Sun’s spurious variability is in the 2-3 m/s range. With a little more effort, we are likely to detect many potentially habitable planets around M-stars using new generation spectrometers. Once we can agree on the way to do that, we can try to go one step further and attempt similar approaches with Sun-like stars.
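The distinction between subtracting activity first and modeling it jointly can be made concrete with a toy example. The sketch below uses synthetic data and invented numbers, not the actual GJ 581 measurements: a planetary signal whose period partly overlaps the activity index, analyzed both ways.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 600, 120))       # observation epochs (days)
P, K = 66.0, 2.0                            # toy planet: period (d), amplitude (m/s)
planet = K * np.sin(2 * np.pi * t / P)

# Toy activity index that happens to share some power with the planet
# period (sampling and stellar rotation can conspire this way in real data):
activity = 0.5 * np.sin(2 * np.pi * t / P) + rng.normal(0, 0.3, t.size)
rv = planet + 1.5 * activity + rng.normal(0, 0.5, t.size)

def sine_cols(t, P):
    return np.column_stack([np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)])

# (1) Sequential: subtract the activity correlation, then fit the planet.
slope = np.polyfit(activity, rv, 1)[0]
resid = rv - slope * activity
amp_seq = np.hypot(*np.linalg.lstsq(sine_cols(t, P), resid, rcond=None)[0])

# (2) Joint: fit planet and activity terms in one linear model.
X = np.column_stack([sine_cols(t, P), activity, np.ones_like(t)])
coef = np.linalg.lstsq(X, rv, rcond=None)[0]
amp_joint = np.hypot(coef[0], coef[1])

print(f"true K = {K:.2f}, sequential = {amp_seq:.2f}, joint = {amp_joint:.2f}")
# The sequential subtraction absorbs part of the planet into the activity
# slope and underestimates K; the joint fit recovers it.
```

In this toy setup the sequential approach returns an amplitude well below the injected 2 m/s while the joint fit recovers it, which is the sense in which removing ‘possible effects’ first can erase a real planet.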

The debate is on and the jury is still out, but clarifying these points is essential to the viability of the Doppler technique and to future plans for new instruments. (What’s the need for more precise machines if we have already hit the noise floor?)

This same boiling pan effect sets the physical noise floor for other techniques as well, but the impact on detection sensitivity can be rather different. For example, photometric measurements (eg. Kepler) are now mostly limited by the noise floor set by Sun-like stars, which on average have been found to be twice as active as our Sun. However, the transit ‘signal’ (a short box-like feature, strictly periodic) is harder for stellar variability to mimic. It is only a matter of staring longer at the target to be sure the transit-like feature repeats itself at a very precise interval. The Kepler mission had been extended to 3.5 years to account for this, and it would probably have succeeded if its reaction wheels hadn’t failed (note that most ‘warm Earth-sized’ objects are around K and M-stars). The PLATO/ESA mission (http://sci.esa.int/plato/) will likely finish the job and detect a few dozen Earth twins, among many other things.

So, what’s next?

New generation spectrometers will become available soon. Designed to reach hardware stability similar to or better than HARPS, these instruments will extend the useful wavelength range toward the red and near-infrared parts of the spectrum. A canonical example is the CARMENES spectrometer (https://carmenes.caha.es/), which will cover from 500 nm up to 1.7 microns (HARPS covers 380 to 680 nm). CARMENES is expected to go on the telescope this summer. In addition to collecting more photons, access to other regions of the spectrum will enable the incorporation of many more observables in the analysis. In the meantime, a series of increasingly ambitious space-photometry missions will keep identifying planet-sized objects by the thousands. In this context, careful use of Doppler instruments will provide confirmation and mass measurements for transiting exoplanet candidates.

In parallel, the high follow-up potential and the motivational component of nearby stars justify the continued use of precision spectrometers, at least on low-mass stars. In addition, stabilized spectrometers ‘might’ play a key role in the atmospheric characterization of transiting super-Earths around nearby M-dwarf stars. Concerning the nearest Sun-like stars, alternative techniques such as direct imaging or astrometry should be viable once dedicated space missions are built, maybe in the next 15-20 years. However, given the trend towards stagnant economies and increasingly long technological cycles for space instrumentation, we might need to hope for the era of space industrialization (or something as dramatic as a technological singularity taking over the hard work) to catch a glimpse of the best targets for interstellar travel.
