Centauri Dreams

Imagining and Planning Interstellar Exploration

Exoplanets: The Hunt for Circular Orbits

If you’re looking for planets that may be habitable, eccentric orbits are a problem. Make the orbit eccentric enough and the surface goes through extreme swings in temperature. In our own Solar System, planets tend to follow circular orbits. Mercury has the most eccentric orbit, with a value of 0.21, while the other seven planets average a modest 0.04 (on a scale where 0 is a perfectly circular orbit). But much of our work on exoplanets has revealed gas giant planets with a wide range of eccentricities, and we’ve even found one (HD 80606b) with an eccentricity of 0.927. As far as I know, this is the current record holder.

These values have been measured using radial velocity techniques that most readily detect large planets close to their stars, although there is some evidence for high orbital eccentricities for smaller worlds. Get down into the range of Earth and ‘super-Earth’ planets, however, and the RV signal is tiny. But a new paper from Vincent Van Eylen (Aarhus University) and Simon Albrecht (MIT) goes to work on planetary transits. It’s possible to work with Transit Timing Variations to make inferences about eccentricity, but these appear only in a subset of transiting systems.

Instead, Van Eylen and Albrecht look at transit duration. The length of a transit varies with the eccentricity and orientation of the orbit. By measuring how long a planetary transit lasts and weighing the result against what is known about the properties of the star, the eccentricity of a transiting planet can be measured, as explained in the paper:

Here we determine orbital eccentricities of planets making use of Kepler’s second law, which states that eccentric planets vary their velocity throughout their orbit. This results in a different duration for their transits relative to the circular case: transits can last longer or shorter depending on the orientation of the orbit in its own plane, the argument of periastron (ω)… Transit durations for circular orbits are governed by the mean stellar density (Seager & Mallen-Ornelas 2003). Therefore if the stellar density is known from an independent source then a comparison between these two values constrains the orbital eccentricity of a transiting planet independently of its mass…
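To see how the geometry works, consider a minimal sketch (my own illustration, not code from the paper): to first order an eccentric orbit changes the transit duration relative to the circular case by a factor of roughly sqrt(1 - e^2) / (1 + e sin ω), while the circular-orbit duration itself follows from the orbital period and the mean stellar density. Comparing a measured duration with the circular expectation therefore constrains the eccentricity. The numbers below are illustrative only.

```python
import numpy as np

def circular_duration(P_days, rho_star_cgs, b=0.0):
    """Approximate duration (hours) of a central transit on a circular orbit.

    Uses T ~ (P/pi) * (R_star/a) * sqrt(1 - b^2), with a/R_star obtained from
    Kepler's third law and the mean stellar density rho_star (g/cm^3).
    """
    G = 6.674e-8                               # cgs gravitational constant
    P = P_days * 86400.0                       # period in seconds
    a_over_rstar = (G * rho_star_cgs * P**2 / (3.0 * np.pi)) ** (1.0 / 3.0)
    return (P / np.pi) * np.sqrt(1.0 - b**2) / a_over_rstar / 3600.0

def eccentric_duration(P_days, rho_star_cgs, e, omega_deg, b=0.0):
    """Same transit on an eccentric orbit: the duration scales by roughly
    sqrt(1 - e^2) / (1 + e * sin(omega)), because the planet's speed during
    transit depends on where the transit falls along the orbit."""
    factor = np.sqrt(1.0 - e**2) / (1.0 + e * np.sin(np.radians(omega_deg)))
    return circular_duration(P_days, rho_star_cgs, b) * factor

# Illustrative case: a 10-day planet around a Sun-like star (rho ~ 1.4 g/cm^3)
print(circular_duration(10.0, 1.4))                 # ~3.9 hours (circular)
print(eccentric_duration(10.0, 1.4, 0.5, 90.0))     # transit near periastron: shorter
print(eccentric_duration(10.0, 1.4, 0.5, 270.0))    # transit near apastron: longer
```

If the duration implied by the measured stellar density disagrees with what is observed, an eccentric orbit (with some orientation ω) is one way to reconcile the two, which is the comparison the paper exploits.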

Using these methods, the researchers have measured the eccentricity of 74 small extrasolar planets orbiting 28 stars, discovering that most of their orbits are close to circular. The systems under study were chosen carefully to avoid false positives — the team primarily used confirmed multi-transiting planet systems around bright host stars, and pulled in asteroseismological data — information on stellar pulsations — to help determine stellar parameters. Asteroseismology can refine our estimates of a star’s mass, radius and density. The stars in the team’s sample have all been characterized in previous asteroseismology studies.


Image: Researchers measuring the orbital eccentricity of 74 small extrasolar planets have found their orbits to be close to circular, similar to the planets in the Solar System. This is in contrast to previous measurements of more massive exoplanets where highly eccentric orbits are commonly found. Credit: Van Eylen and Albrecht / Aarhus University.

No Earth-class planets appear in the team’s dataset, but the findings cover planets with an average radius of 2.8 Earth radii, while orbital periods range from 0.8 to 180 days. Van Eylen and Albrecht conclude that it is plausible that low eccentricity orbits would be common in solar systems like ours, a finding that would have ramifications for habitability and the location of the habitable zone.

Interestingly, when eccentricity is weighed against parameters like the host star’s temperature and age, no trend emerges. But in systems with multiple transiting planets on circular orbits, Van Eylen and Albrecht believe that the density of the host star can be reliably estimated from transit observations. This information can help to rule out false positives, a technique they use to validate candidate worlds in several systems — KOI-270, now Kepler-449, and KOI-279, now Kepler-450, as well as KOI-285.03, now Kepler-92d, in a system with previously known planets.

The work has helpful implications for upcoming space missions that will generate the data needed for putting these methods to further use:

We anticipate that the methods used here will be useful in the context of the future photometry missions TESS and PLATO, both of which will allow for asteroseismic studies of a large number of targets. Transit durations will be useful to confirm the validity of transit signals in compact multi-planet systems, in particular for the smallest and most interest[ing] candidates that are hardest to confirm using other methods. For systems where independent stellar density measurements exist the method will also provide further information on orbital eccentricities.

The TESS mission (Transiting Exoplanet Survey Satellite) is planned for launch in 2017, and is expected to find more than 5000 exoplanet candidates, including 50 Earth-sized planets around relatively nearby stars. PLATO (PLAnetary Transits and Oscillations of stars) will likewise monitor up to a million stars looking for transit signatures, with launch planned by 2024.

The paper is Van Eylen and Albrecht, “Eccentricity from transit photometry: small planets in Kepler multi-planet systems have low eccentricities,” accepted for publication at The Astrophysical Journal (preprint). An Aarhus University news release is available.


Spacecoach on the Stage

I’m glad to see that Brian McConnell will be speaking at the International Space Development Conference in Toronto this week. McConnell, you’ll recall, has been working with Centauri Dreams regular Alex Tolley on a model the duo call ‘Spacecoach.’ It’s a crewed spacecraft using solar electric propulsion, one built around the idea of water as propellant. The beauty of the concept is that it puts to work what we normally treat as ‘dead weight’ in spacecraft life support systems. Water ordinarily serves a single purpose, one that is critical but heavy, exacting a high toll in propellant.

The spacecoach, on the other hand, can use the water it carries for radiation shielding and climate control within the vessel, while crew comfort is drastically enhanced in an environment where water is plentiful and space agriculture a serious option. Along with numerous other benefits that Brian discusses in his recent article A Stagecoach to the Stars, mission costs are sharply reduced by constructing a spaceship that is mostly water. McConnell and Tolley believe that cost reductions of one or two orders of magnitude are possible. Have a look, if you haven’t already seen it, at Alex’s Spaceward Ho! for an imaginative take on what a spacecoach can be.

ISDC is a good place to get this model before an audience of scientists, engineers, business contacts and educators from the military, civilian, commercial and entrepreneurial sectors. ISDC 2014 brought over 1000 attendees into the four-day event, and this year’s conference features plenary talks from top names in the field: Buzz Aldrin, Charles Bolden, Neil deGrasse Tyson, Peter Diamandis, Lori Garver, Richard Garriott, Bill Nye, Elon Musk and more. My hope is that a concept as novel but also as feasible as the spacecoach will resonate.


Image: Ernst Stuhlinger’s concept for a solar powered ship using ion propulsion, a notion now upgraded and highly modified in the spacecoach concept, which realizes huge cost savings by its use of water as reaction mass. This illustration, which Alex Tolley found as part of a magazine advertisement, dates from the 1950s.

Towards Building an Infrastructure

We have to make the transition from expensive, highly targeted missions with dedicated spacecraft to missions that can be flown with adaptable, low-cost technologies like the spacecoach. Long-duration missions to Mars and the asteroid belt will be rendered far more workable once we can offer a measure of crew safety and comfort not available today, with all the benefits of in situ refueling and upgradable modularity. Building up a Solar System infrastructure that can one day begin the long expansion beyond demands vehicles that can carry humans on deep space journeys that will eventually become routine.

The response to the two spacecoach articles here on Centauri Dreams has been strong, and I’ll be tracking the idea as it continues to develop. McConnell and Tolley are currently working on a book for Springer that should be out by late summer or early fall. You can follow the progress of the idea as well on the Spacecoach.org site, where the two discuss a round-trip mission from Earth-Moon Lagrange point 2 (EML-2) to Ceres, a high delta-v mission in which between 80 and 90 percent of the mission cost is the cost of delivering water to EML-2.

The idea in this and other missions is to use a SpaceX Falcon Heavy to launch material to low-Earth orbit, with a solar-electric propulsion spiral out to EML-2 (the crew will later take a direct chemical propulsion trajectory to EML-2 to minimize exposure time in the Van Allen belts). Water delivered to EML-2 this way costs about $3000 per kilogram. The Falcon Heavy should be able to deliver 53,000 kilograms to low-Earth orbit per launch. McConnell and Tolley figure about 40,000 kilograms of this will be water, while the remainder will be other equipment including the module engines and solar arrays. From EML-2, various destinations can be modeled, with values adjustable within the model so you can see how costs change with different parameters.

The online parametric model has just been updated to calculate mission costs as a function of the number of Falcon Heavy launches required. You can see the new graph below (click on it to enlarge). At a specific impulse of 2000 seconds or better for the solar-electric engines, only two launches are required for most missions, one taking the crew direct to EML-2, the other carrying the water and durable equipment on a spiral orbit out from LEO. Only the most ambitious destinations like Ceres require three launches. At $100 million per launch, even that mission is cheap by today’s spaceflight standards.
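To give a feel for how a parametric model like this hangs together, here is a simplified sketch of my own (not the spacecoach.org model itself). The rocket equation turns a mission delta-v and the engine’s specific impulse into the mass of water required; that mass, plus the dry ship, sets how many Falcon Heavy flights are needed and what the launch bill comes to. The 30-tonne dry mass and the two delta-v figures are assumed placeholders, not numbers from McConnell and Tolley.

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def water_mass(dry_mass_kg, delta_v_ms, isp_s):
    """Water (propellant) mass from the rocket equation:
    m_prop = m_dry * (exp(dv / (Isp * g0)) - 1)."""
    return dry_mass_kg * (math.exp(delta_v_ms / (isp_s * G0)) - 1.0)

def mission_launches(dry_mass_kg, delta_v_ms, isp_s,
                     payload_per_launch_kg=53000.0,
                     cost_per_launch_usd=100e6):
    """Cargo flights needed to put ship plus water into LEO, plus one
    direct chemical flight for the crew to EML-2, and the launch bill.
    Spiral losses, margins and hardware costs are ignored here."""
    prop = water_mass(dry_mass_kg, delta_v_ms, isp_s)
    cargo_flights = math.ceil((dry_mass_kg + prop) / payload_per_launch_kg)
    flights = cargo_flights + 1                      # crew flight
    return prop, flights, flights * cost_per_launch_usd

# Assumed placeholders: a 30 t dry ship at Isp = 2000 s, for a modest
# round trip (~10 km/s) and a Ceres-class round trip (~20 km/s).
for dv_km_s in (10.0, 20.0):
    prop, n, cost = mission_launches(30000.0, dv_km_s * 1000.0, 2000.0)
    print(f"dv = {dv_km_s:.0f} km/s: water ~{prop / 1000:.0f} t, "
          f"{n} launches, ~${cost / 1e6:.0f}M in launch costs")
```

With these placeholder numbers the modest mission comes out at two launches and the Ceres-class trip at three, echoing the pattern in the published graph.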


Brian notes in a recent email that the launches do not need to be closely spaced, because the spiral transfer from LEO to EML-2 takes months to complete. The crew only goes when everything else is in place at EML-2. For more on this model, see spacecoach.org. I’ll be interested to hear how the idea is received at ISDC, and how the upcoming publication of the spacecoach book helps to put this innovative design for interplanetary transport on the map.


Doppler Worlds and M-Dwarf Planets

Finding small and possibly habitable worlds around M-dwarfs has already proven controversial, as we’ve seen in recent work on Gliese 581. The existence of Gl 581d, for example, is contested in some circles, but as Guillem Anglada-Escudé argues below, sound methodology turns up a robust signal for the world. Read on to learn why as he discusses the early successes of the Doppler technique and its relevance for future work. Dr. Anglada-Escudé is a physicist and astronomer who did his PhD work at the University of Barcelona on the Gaia/ESA mission, working on the mission simulator and data reduction prototype. His first serious observational venture, using astrometric techniques to detect exoplanets, was with Alan Boss and Alycia Weinberger during a postdoctoral period at the Carnegie Institution for Science. He began working on high-resolution spectroscopy for planet searches around M-stars during that time in collaboration with exoplanet pioneer R. Paul Butler. In a second postdoc, he worked at the University of Goettingen (Germany) with Prof. Ansgar Reiners, participating in the CRIRES+ project (an infrared spectrometer for the VLT/ESO), and joined the CARMENES consortium. Dr. Anglada-Escudé is now a Lecturer in Astronomy at Queen Mary University of London, working on detection methods for very low-mass extrasolar planets around nearby stars.

by Guillem Anglada-Escudé


The Doppler technique has been the driving force for the first fifteen years of extrasolar planet detection. The method is most sensitive to close-in planets and many of its most exciting results come from planets around low-mass stars (also called M-dwarfs). Although these stars are substantially fainter than our Sun, the noise floor seems to be imposed by stellar activity rather than by instrumental precision or brightness, meaning that small planets are more easily detected here than around Sun-like stars. In detection terms, the new leading method is space-transit photometry, brilliantly demonstrated by NASA’s Kepler mission.

Despite its efficiency, the transit method requires a fortunate alignment of the orbit with our line of sight, so planets around the closest stars are unlikely to be detected this way. In the new era of space-photometry surveys and given all the caveats associated with accurate radial velocity measurements, the most likely role of the Doppler method for the next few years will be the confirmation of transiting planets, and detection of potentially habitable super-Earths around the nearest M-dwarfs. It is becoming increasingly clear that the Doppler method might be unsuitable to detect Earth analogs, even around our closest sun-like neighbors. Unless there is an unexpected breakthrough in the understanding of stellar Doppler variability, nearby Earth-twin detection will have to wait a decade or two for the emergence of new techniques such as direct imaging and/or precision space astrometry. In the meantime, very exciting discoveries are expected from our reddish and unremarkable stellar neighborhood.

The Doppler years

We knew stars should have planets. After the Copernican revolution, it had been broadly acknowledged that Earth and our Sun occupy unremarkable places in the cosmos. Our own Solar system is full of planets, so it was only natural to expect them around other stars. After years of failed or ambiguous claims, the first solid evidence of planets beyond the Solar system arrived in the early 90’s. First came the pulsar planets (PSR B1257+12). Although the claims of their existence were soon well consolidated, these planets were regarded as space oddities. A pulsar is, after all, the remnant core of an exploded massive star, so planets surviving or re-forming after such an event are unlikely to represent the most universal channel of planet formation.

In 1995, the first planets around main sequence stars were reported. The hot Jupiters came by the hand of M. Mayor and D. Queloz (51 Peg, 1995), and shortly thereafter a series of gas giants were announced by the competing American duo G. Marcy and P. Butler (70 Vir, 47 UMa, etc.). These were days of wonder and the Doppler method was the norm. In a few months, the count grew from nothing to several worlds. These discoveries became possible thanks to the ability to measure the radial velocities of stars at ~3 meters-per-second (m/s) precision, roughly human running speed. 51 Peg b periodically moves its host star at 50 m/s and 70 Vir b changes the velocity of its parent star by 300 m/s, so these became easily detectable once precision reached that level.
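For readers who want to see where numbers like these come from, here is a small sketch of my own using the standard radial-velocity semi-amplitude formula, K = (2πG/P)^(1/3) m_p sin i / ((M_star + m_p)^(2/3) sqrt(1 - e^2)). The stellar and planetary masses below are approximate literature values, used only for illustration.

```python
import math

G       = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN   = 1.989e30         # kg
M_JUP   = 1.898e27         # kg
M_EARTH = 5.972e24         # kg
DAY     = 86400.0          # s

def rv_semi_amplitude(P_days, m_planet_kg, m_star_kg, e=0.0, sin_i=1.0):
    """Reflex velocity semi-amplitude K (m/s) of the star:
    K = (2*pi*G/P)^(1/3) * m_p * sin(i) / ((M_* + m_p)^(2/3) * sqrt(1 - e^2))."""
    P = P_days * DAY
    return ((2.0 * math.pi * G / P) ** (1.0 / 3.0) * m_planet_kg * sin_i
            / ((m_star_kg + m_planet_kg) ** (2.0 / 3.0) * math.sqrt(1.0 - e**2)))

# 51 Peg b: roughly half a Jupiter mass in a 4.23-day orbit around a ~1.06 Msun star
print(rv_semi_amplitude(4.23, 0.47 * M_JUP, 1.06 * M_SUN))   # ~56 m/s
# Earth orbiting the Sun, for comparison
print(rv_semi_amplitude(365.25, M_EARTH, M_SUN))             # ~0.09 m/s
```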

Lighter and smaller planets

Given the technological improvements, and solid proof that planets were out there in possibly large numbers, the exoplanet cold war ramped up. Large planet-hunting teams built up around Mayor & Queloz (Swiss) and Marcy & Butler (Americans) in a strongly competitive environment. Precision kept improving and, when combined with longer time baselines, a few tens of gas giants were already reported by 2000. Then the first exoplanet transiting in front of its host star was detected. Unlike the Doppler method, the transit method measures the dip in brightness caused by a planet crossing in front of the star. Such alignment happens randomly, so a large number of stars (10,000+) needs to be monitored simultaneously to find planets using this technique.

Plans to engage in such surveys suddenly started to consolidate (TrES, HAT, WASP) and small (COROT) to medium-class space missions (NASA’s Kepler, and ESA’s Eddington, later cancelled) started to be seriously considered. A few years later, the Doppler technique led to the first reports of hot Neptunes (GJ 436b), and the first so-called super-Earths (GJ 876d, M ~ 7 Mearth) came onto the scene. Let me note that the first of these ‘smaller’ planets were found around the even more unremarkable small stars called M-dwarfs.

While not obvious at that moment, such a trend would later have serious consequences. Several hot Neptunes and super-Earths followed during the mid-2000’s, mostly coming from the large surveys led by the Swiss and American teams. By then the first instruments specifically designed to hunt for exoplanets had been built, such as the High Accuracy Radial velocity Planet Searcher (or HARPS), by a large consortium led by the Geneva observatory and the European Southern Observatory (ESO). While the ‘American method’ relied on measuring the stellar spectrum simultaneously with the spectral features of iodine gas, the HARPS concept consisted of stabilizing the hardware as much as possible. After ten years of HARPS operations, it has become clear that the stabilized-instrument option outperforms the iodine designs, as it significantly reduces the data-processing effort needed to obtain accurate measurements (~1 m/s or better). Dedicated iodine spectrometers now in operation deliver comparable precisions (APF, PFS), which seems to point towards a fundamental limit set by the stars rather than by the instruments.

Sun-like stars (G dwarfs) were massively favoured in the early Doppler surveys. While many factors were folded into target selection, there were two main reasons for this choice. First, sun-like stars were considered more interesting due to their similarity to our own star (the search for planets like our own), and second, M-dwarfs are intrinsically fainter, so the number of bright enough targets is quite limited. For main sequence stars, luminosity grows as the 4th power of the mass, while apparent brightness falls as the square of the distance.

As a result, one quickly runs out of observable examples of intrinsically faint objects. Most of the stars we see in the night sky have A and F spectral types, some are distant supergiants (eg. Betelgeuse), and only a handful of sun-like G and K dwarfs are visible (the Alpha Centauri binary, Tau Ceti, Epsilon Eridani, etc). No M-dwarf is bright enough to be visible to the naked eye. By setting a magnitude cut-off of V ~ 10, early surveys included thousands of yellow G-dwarfs, a few hundred orange K dwarfs, and a few tens of red M-dwarfs. Even though M-dwarfs were clearly disfavoured in numbers, many ‘firsts’ and some of the most exciting exoplanet detection results come from these ‘irrelevant’ tiny objects.

M-dwarfs have masses between 0.1 and 0.5 Msun and radii between 0.1 and 0.5 Rsun. Since their temperatures are known from optical to near infrared photometry (~3500 K, compared to the Sun’s 5800 K), the basic physics of blackbody radiation shows that their luminosities are between 0.1% and 5% that of the Sun. As a result, orbits at which planets can keep liquid water on their surface are much closer in and have shorter periods. All things combined, one finds that ‘warm’ Earth-mass planets would imprint wobbles of 1-2 m/s on an M-dwarf (0.1 m/s for the Earth/Sun case), and the same planet would cause a ~0.15% dip in the starlight during transit (0.01% for Earth/Sun).
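The scalings in that paragraph are easy to reproduce. Below is a rough sketch of my own that chains them together for one illustrative mid-to-late M-dwarf: luminosity from the Stefan-Boltzmann law, the distance receiving Earth-like insolation, the orbital period there, the reflex wobble an Earth-mass planet would induce (scaled from the ~0.09 m/s Earth-Sun value), and the transit depth. The particular stellar mass, radius and temperature are assumptions chosen for illustration; real M-dwarfs span a range and the exact numbers shift accordingly.

```python
import math

SIGMA   = 5.670e-8         # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN   = 6.957e8          # m
L_SUN   = 3.828e26         # W
R_EARTH = 6.371e6          # m

def mdwarf_numbers(m_star_msun, r_star_rsun, t_eff_k):
    """Luminosity, Earth-equivalent-insolation distance, orbital period there,
    reflex wobble of an Earth-mass planet at that distance, and transit depth."""
    # Stefan-Boltzmann: L = 4*pi*R^2 * sigma * T^4
    lum_w = 4.0 * math.pi * (r_star_rsun * R_SUN) ** 2 * SIGMA * t_eff_k ** 4
    lum_lsun = lum_w / L_SUN
    d_au = math.sqrt(lum_lsun)                    # same flux Earth receives
    p_yr = math.sqrt(d_au ** 3 / m_star_msun)     # Kepler's third law
    # Scale the ~0.09 m/s Earth-Sun amplitude: K ~ m_p * M_*^(-2/3) * P^(-1/3)
    k_ms = 0.09 * m_star_msun ** (-2.0 / 3.0) * p_yr ** (-1.0 / 3.0)
    depth = (R_EARTH / (r_star_rsun * R_SUN)) ** 2
    return lum_lsun, d_au, p_yr * 365.25, k_ms, depth

# One illustrative mid-to-late M-dwarf (assumed values)
lum, d, p_days, k, depth = mdwarf_numbers(0.15, 0.20, 3200.0)
print(f"L = {lum:.2%} Lsun, HZ ~ {d:.3f} AU, P ~ {p_days:.0f} d, "
      f"K ~ {k:.2f} m/s, transit depth ~ {depth:.2%}")
# -> roughly 0.4% Lsun, 0.06 AU, ~14 days, ~1 m/s, ~0.2%
```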

Two papers by the Swiss group from 2007 and 2009 (Udry et al., http://adsabs.harvard.edu/abs/2007A%26A...469L..43U, Mayor et al. http://adsabs.harvard.edu/abs/2009A%26A...507..487M) presented evidence for the first super-Earth with realistic chances of being habitable orbiting around the M-dwarf GJ 581 (GJ 581d). Although its orbit was at first considered too cold, subsequent papers and climatic simulations (for example, see Von Paris et al. 2010, http://cdsads.u-strasbg.fr/abs/2010A%26A...522A..23V) indicated that there was no reason why water could not exist on its surface given the presence of reasonable amounts of greenhouse gases. As of 2010, GJ 581d is considered the first potentially habitable planet found beyond the Solar system. The word potentially is key here. It just acknowledges that, given the known information, the properties of the planet are compatible with having a solid surface and sustaining liquid water over its lifetime. Theoretical considerations about the practical habitability of these planets are yet another source of intense debate.

GJ 581 was remarkable in another important way. Its Doppler data could be best explained by the existence of (at least) four low-mass planets in orbits with periods shorter than ~2 months (inside the orbit of Mercury). A handful of similar systems were known (or reported) in those days, including HD 69830 (3 Neptunes, G8V), HD 40307 (3 super-Earths, K3V) and 61 Vir (3 sub-Neptunes). These and many other Doppler planet reports from the large surveys led to the first occurrence rate estimates for sub-Neptune mass planets by ~2010. According to those (for example, see http://adsabs.harvard.edu/abs/2010Sci...330..653H), at least ~30% of stars host a super-Earth within the orbit of our Mercury. Simultaneously, the COROT mission started to produce its first hot rocky planet candidates (eg. COROT-7b) and the Kepler satellite was slowly building up its high quality space-based light curves.

What Kepler was about to reveal was even more amazing. Not only did 30% of stars host ‘hot’ super-Earths, but at least ~30% of stars hosted compact (highly co-planar) planetary systems of small planets, again with orbits interior to that of our Mercury. Thanks to this unexpected overabundance of compact systems, the Kepler reports of likely planets came in the thousands (famously known as Kepler Objects of Interest, or KOIs), which in astronomy means we can move from interesting individual objects to a fully mature discipline where statistical populations can be drawn. Today, the exoplanet portrait is smoothly covered by ~2000 reasonably robust detections, extending from sub-Earth sized planets with orbits of a few hours (eg. Kepler-78b) out to the thousands of days of those Jupiter analogs that (at the end of the day) have been found to be rather rare (<5% of stars). Clustering of objects in the different regions of the mass-period diagram (see Figure 1) encodes the tale of planet formation and the origins of these systems. This is where we are now in terms of detection techniques.


Figure 1: The exoplanet portrait (data extracted from exoplanet.eu, April 1st 2015). Short period planets are generally favoured by the two leading techniques (transits and Doppler spectroscopy), which explains why the left part of the diagram is the most populated. The ‘classic’ gas giants are at top right (massive and long periods), and the bottom left is the realm of the hot Neptunes and super-Earths. The relative paucity of planets in some areas of this diagram tells us about important processes that formed them and shaped the architectures of the systems. For example, the horizontal gap between the Neptunes and the Jupiters is likely caused by the runaway accretion of gas once a planet grows a bit larger than Neptune in the protoplanetary nebula, quickly jumping into the Saturn mass regime. The large abundance of hot Jupiters at top left reflects an observational bias (large planets in short period orbits are the easiest to detect), but the gap between the hot Jupiters and the classical gas giants is not well understood and probably has to do with the migration process that drags hot Jupiters so close to their stars. Detection efficiency drops off quickly to the right (longer periods) and towards the bottom (very small planets).

Having reached this point, and given the wild success of Kepler, we might ask ourselves what the relevance of the Doppler method is as a detection method for small planets. The transit method requires a lucky alignment of the orbit, and a simple statistical argument shows that most transiting planets will be detected around distant stars. The Doppler technique, on the other hand, can achieve great precision on individual objects and detect planets irrespective of their orbital inclination (except in the rare cases when the orbit is close to face-on). Therefore, the hunt for nearby planets remains the niche of the Doppler technique. Small planets around nearby stars should enable unique follow-up opportunities (direct imaging attempts in 10-20 years) and transmission spectroscopy in the rare cases of transits.
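That statistical argument can be made concrete with a few lines of arithmetic. The chance that a randomly oriented orbit transits is roughly R_star/a, and since the number of stars surveyed grows with the cube of distance, the nearest transiting example of a given kind of planet sits several times farther away than the nearest example overall. A rough sketch of my own, with round placeholder numbers for the local stellar density and the planet occurrence rate:

```python
import math

def transit_probability(r_star_over_a):
    """Geometric probability that a randomly oriented orbit transits: ~ R_star / a."""
    return r_star_over_a

def nearest_example_pc(occurrence_per_star, stars_per_pc3=0.1, geometric_prob=1.0):
    """Distance (pc) out to which we expect one such detectable planet:
    (4/3) * pi * d^3 * n_stars * occurrence * probability = 1."""
    n_eff = stars_per_pc3 * occurrence_per_star * geometric_prob
    return (3.0 / (4.0 * math.pi * n_eff)) ** (1.0 / 3.0)

# Illustrative case: a habitable-zone planet around an M-dwarf with a/R_star ~ 30,
# assumed to occur around ~30% of such stars (round numbers only).
p_tr = transit_probability(1.0 / 30.0)
print(f"transit probability        ~ {p_tr:.1%}")
print(f"nearest such planet        ~ {nearest_example_pc(0.3):.1f} pc")
print(f"nearest transiting example ~ {nearest_example_pc(0.3, geometric_prob=p_tr):.1f} pc")
```

The transiting example lands roughly a factor of (1/p)^(1/3), about three times, farther away, which is why transit surveys naturally end up studying more distant systems while the Doppler method keeps its hold on the nearest ones.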

However, there are other reasons why nearby stars are really exciting. These are brand new worlds next to ours that might be visited one day. Nearby stars trigger the imagination of the public, science fiction writers, filmmakers and explorers. While the scientific establishment tends to deem this quality irrelevant, many of us still find such motivation perfectly valid. As in many other areas, this is not only about pure scientific knowledge but about exploration.

For those who prefer a results-per-dollar approach, the motivational aspect of nearby exoplanets cannot be ignored either. Modern mathematics and physical sciences were broadly motivated by the need to improve our understanding of observations of the Solar system. Young scientists keep being attracted to space sciences and technology because of this (combined with the push from the film and video-game industry). A nearby exoplanet is not one more point in a diagram. It represents a place, a goal and a driver. Under this scope, reports and discoveries of nearby Doppler detections (even if tentative) still rival or surpass the social relevance of those exotic worlds in the distant Kepler systems. As long as there is public support and wonder for exploration, we will keep searching for evidence of nearby worlds. And to do this we need spectrometers.

Why is GJ 581 d so relevant?

We have established that nearby M-dwarfs are great places to look for small planets. But there is a caveat. The rotation periods of mature stars are in the 20-100 day range, meaning that spots or features on the stellar surface will generate apparent Doppler signals in the same period range. After some years of simulation and solar observations, we think that these spurious signals will produce Doppler amplitudes between 0.5 and 3 m/s even for the quietest stars (the level is highly object dependent). Moreover, this variability is not strictly random, which causes all sorts of difficulties. In technical terms, structure in the noise is often referred to as correlated noise (or red noise, activity-induced variability, etc.).

Detecting a small planet is like trying to measure the velocity of a pan filled with boiling water, by looking at its wiggling surface. If we can wait long enough, the surface motion averages out. However, consecutive measurements over fractions of seconds will not be random and can be confused with periodic variability in these same timescales. The same happens with stars. We can get arbitrarily great precision (down to cm/s) but our measurements will also be tracing occasional flows and spectral distortions caused by the variable surface.

Going back to the boiling water example, we could in principle disentangle the true velocity from the jitter if we have access to more information, such as the temperature or the density of the water at each time. Our hope is that this same approach can be applied to stars by looking at the so-called ‘activity indicators’. In the case of Gliese 581d, Robertson et al. subtracted an apparent correlation of the velocities with a chromospheric activity index. As a result, the signal of GJ 581d vanished, so they argued the planet was unlikely to exist (http://adsabs.harvard.edu/abs/2014Sci...345..440R).

However, in our response to that claim, we argued that one cannot simply remove effects that might be relevant to the observations. Instead, one needs to fold all the information into a comprehensive model of the data (http://adsabs.harvard.edu/abs/2015Sci...347.1080A). When this is done, the signal of GJ 581d shows up again as highly significant. This is a subtle point with far-reaching consequences. The activity-induced variability is in the 1-3 m/s regime, and the amplitude of the planetary signal is about 2 m/s. Unless activity is modeled at the same level as the planetary signal, there is no hope of obtaining a robust detection. By comparison, the amplitude of the signal induced by Earth on the Sun is 10 cm/s while the Sun’s spurious variability is in the 2-3 m/s range. With a little more effort, we are likely to detect many potentially habitable planets around M-stars using new generation spectrometers. Once we can agree on the way to do that, we can try to go one step further and attempt similar approaches with Sun-like stars.
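To see why the order of operations matters, here is a toy numerical experiment of my own (it is not the analysis from either paper, and the period, amplitudes and noise levels are invented for illustration). A synthetic radial-velocity series contains a 2 m/s planetary sinusoid plus a term proportional to an activity index that is partly correlated with the planet’s phase. Pre-subtracting the activity correlation soaks up part of the planetary signal and biases the recovered amplitude low, while fitting both terms simultaneously recovers the injected value.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic observations: 200 epochs over ~500 days
t = np.sort(rng.uniform(0.0, 500.0, 200))
P_planet = 66.0                                  # days, GJ 581d-like period
K_true = 2.0                                     # m/s planetary amplitude
phase = 2.0 * np.pi * t / P_planet

# Activity index partially correlated with the planet's phase (by construction)
activity = 0.6 * np.sin(phase + 0.3) + rng.normal(0.0, 0.8, t.size)
rv = K_true * np.sin(phase) + 1.5 * activity + rng.normal(0.0, 1.0, t.size)

# Approach A: regress out the activity correlation first, then fit the planet
slope = np.polyfit(activity, rv, 1)[0]
residual = rv - slope * activity
A = np.column_stack([np.sin(phase), np.cos(phase), np.ones_like(t)])
coef_a, *_ = np.linalg.lstsq(A, residual, rcond=None)
k_subtract_first = np.hypot(coef_a[0], coef_a[1])

# Approach B: fit the planet and the activity term simultaneously
B = np.column_stack([np.sin(phase), np.cos(phase), activity, np.ones_like(t)])
coef_b, *_ = np.linalg.lstsq(B, rv, rcond=None)
k_joint = np.hypot(coef_b[0], coef_b[1])

print(f"injected amplitude:      {K_true:.2f} m/s")
print(f"subtract-then-fit gives: {k_subtract_first:.2f} m/s (biased low)")
print(f"joint fit gives:         {k_joint:.2f} m/s")
```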

The debate is on and the jury is still out, but clarifying all these points is essential to the viability of the Doppler technique and to future plans for new instruments. (What’s the need for more precise machines if we have already hit the noise floor?)

This same boiling pan effect sets the physical noise floor for other techniques as well, but the impact on detection sensitivity can be rather different. For example, photometric measurements (eg. Kepler) are now mostly limited by the noise floor set by the Sun-like stars, which on average have been found to be twice as active as our Sun. However, the transit ‘signal’ (a short box-like feature, strictly periodic) is harder for stellar variability to mimic. It is only a matter of staring at the target longer to be sure the transit-like feature repeats itself at a very precise time. The Kepler mission had been extended beyond its nominal 3.5 years to account for this, and it would probably have succeeded if its reaction wheels hadn’t failed (note that most ‘warm Earth-sized’ objects are around K and M-stars). The PLATO/ESA mission (http://sci.esa.int/plato/) will likely finish the job and detect a few dozen Earth twins, among many other things.

So, what’s next?

New generation spectrometers will become available soon. Designed to reach similar or better hardware stability than HARPS, these instruments will extend the useful wavelength range towards the red and near-infrared part of the spectrum. A canonical example is the CARMENES spectrometer (https://carmenes.caha.es/), which will cover from 500 nm up to 1.7 microns (HARPS covers from 380 to 680 nm). CARMENES is expected to be installed at the telescope this summer. In addition to collecting more photons, access to other regions of the spectrum will enable the incorporation of many more observables in the analysis. In the meantime, a series of increasingly ambitious space-photometry missions will keep identifying planet-sized objects by the thousands. In this context, careful use of Doppler instruments will provide confirmation and mass measurements for transiting exoplanet candidates.

In parallel, the high follow-up potential and the motivational component of nearby stars justify the continued use of precision spectrometers, at least on low-mass stars. In addition, stabilized spectrometers ‘might’ play a key role in atmospheric characterization of transiting super-Earths around nearby M-dwarf stars. Concerning the nearest Sun-like stars, alternative techniques such as direct imaging or astrometry should be viable once dedicated space missions are built, maybe in the next 15-20 years. However, given the trend towards stagnant economies and increasingly long technological cycles for space instrumentation, we might need to hope for the era of space industrialization (or something as dramatic as a technological singularity taking over the hard work) to catch a glimpse of the best targets for interstellar travel.


Sea Salt in Europa’s Dark Materials?

‘Europa in a can’ may be the clue to what’s happening on Jupiter’s most intriguing moon. Created by JPL’s Kevin Hand and Robert Carlson, ‘Europa in a can’ is the nickname for a laboratory setup that mimics conditions on the surface of Europa. It’s a micro-environment of extremes, as you would imagine. The temperature in the vacuum chamber is minus 173 degrees Celsius. Moreover, materials within are bombarded with an electron beam that simulates the effects of Jupiter’s magnetic field. Ions and electrons strike Europa in a constant bath of radiation.

What Hand and Carlson are trying to understand is the nature of the dark material that coats Europa’s long fractures and much of the other terrain that is thought to be geologically young. The association with younger terrain would implicate materials that have welled up from within the moon, providing an interesting glimpse of what is assumed to be Europa’s ocean. Previous studies have suggested that these discolorations could be attributed to sulfur and magnesium compounds, but Hand and Carlson have produced a new candidate: Sea salt.


Image: The Galileo spacecraft gave us our best views thus far of Europa, with the discolorations along linear fractures rendered strikingly clear in this reprocessed color view. Credit: NASA/JPL.

Radiation peppers Europa’s surface with particle accelerator intensity. It becomes part of the story, causing the discoloration evident in the terrain. Hand and Carlson tested a variety of candidate substances, collecting the spectra of each to compare them with what our spacecraft and telescopes have found. Sodium chloride and various salt and water mixtures proved the best match. When bombarded with the electron beam, they turned from white to the same reddish brown hues found on Europa in a timeframe of tens of hours, which corresponds to about a century of exposure on Europa. Spectral measurements showed a strong resemblance to the color within Europa’s fractures as seen by the Galileo spacecraft.


Image: A closer look at Europa. This is a colorized image pulled from clear-filter grayscale data from one orbit of the Galileo spacecraft combined with lower resolution data taken on a different orbit. The blue-white terrain indicates relatively pure water ice. The new work indicates that although some of the colors of Europa come from radiation-processed sulfur, irradiated salts may explain the color of the youngest regions. Highly intriguing is the possibility that these surface features may have communicated with a global subsurface ocean. Credit: NASA/JPL.

Finding sea salt on Europa’s surface would imply interactions between the ocean and the rocky seafloor, according to this JPL news release, with astrobiological implications. In any case, “This work tells us the chemical signature of radiation-baked sodium chloride is a compelling match to spacecraft data for Europa’s mystery material,” says Hand, who speculates that because the samples grew darker with increasing radiation exposure, we might be able to use color variation to determine the age of features on the moon’s surface.

The paper is Hand and Carlson, “Europa’s surface color suggests an ocean rich with sodium chloride,” accepted for publication in Geophysical Research Letters (abstract).


SETI and Stellar Drift

It was natural enough that Richard Carrigan would come up with the model for what he called ‘Fermi bubbles,’ which I invoked in Monday’s post. A long-time researcher of the infrared sky, Carrigan (Fermi National Accelerator Laboratory, now retired) had mined data from the Infrared Astronomical Satellite (IRAS) in 2009 to mount a search for interesting sources that could be Dyson spheres, entire stars enclosed by a swarm of power stations, or conceivably wrapped entirely by a sphere of material presumably mined from the planetary population of the system.

Carrigan’s work on infrared sources goes back well over a decade, involving not only data mining but theorizing about the nature of truly advanced civilizations. If we were to find a civilization transforming a galaxy by gradually building Dyson spheres to exploit all the energies of its stars, we would be witnessing the transformation from Kardashev Type II (a culture that uses all the power of its star) to Type III (a culture that exploits its entire galaxy’s energies). Carrigan reasoned that areas of such a galaxy would gradually grow dark in visible light, the signature of the civilization’s activities becoming traceable only in the infrared.

Both Carrigan and the researchers in the Glimpsing Heat from Alien Technologies (G-HAT) project at Penn State point out that there are natural phenomena that could mimic the Fermi bubble. In a recent paper, the G-HAT team led by Jason Wright mentions a kind of galaxy known as a flocculent spiral as a case in point. Unlike the classic spiral with well-defined structure, these are galaxies with discontinuous spiral arms. What might be perceived as a ‘bubble’ structure here would almost certainly be a natural feature.


Image: NGC 4414, a flocculent spiral galaxy in an image taken by the Hubble Space Telescope. It would be tricky business to find the signature of a Fermi bubble here given the lack of definition in the spiral arms. A bright star from our own Milky Way shines in the foreground of the image. Credit: Olivier Vallejo (Observatoire de Bordeaux), HST, ESA, NASA.

Galaxy in Motion

But I think the G-HAT critique of the Fermi bubble idea truly gains strength when we consider the motion of stars in the galaxy vs. the timescales needed for galactic colonization to occur. When we’re dealing with a galaxy of stars over billions of years, we have to set the galaxy in motion. In a 2014 paper cited on Monday, Wright and company note this:

The static model of stars, in which a supercivilization can be said to occupy a compact and contiguous region of space, is a reasonable approximation for short times (≲ 10^5 years) and in the case of fast ships (with velocities in significant excess of the typical thermal or orbital velocities of the stars, so ≳ 10^-2 c). In such cases, the stars essentially sit still while the ships move at a significant fraction of c and populate a small region of the galaxy in some small multiple of the region’s light-crossing time.

Remember that the shorter the period for colonization, the briefer the ‘window’ for finding a Fermi bubble. But would such bubbles be apparent even assuming the slowest possible expansion?

The G-HAT team’s work makes a compelling case that they would not. For longer times, and assuming slower ships, the static model fails and fails badly. Stars do not stay in one place, and galactic rotation muddles the works. The G-HAT paper considers what it calls ‘conservative timescales’ for a ‘slow’ colonization of the Milky Way. We can use this work to consider a maximum galaxy colonization time to give us a sense of how apparent galactic colonization would be. It also has ramifications, and significant ones, for Michael Hart’s view that we are alone in the galaxy, but I’m not going to stray from the Fermi bubble question in this post.

Imagine that a single spacefaring civilization emerges that uses colony ships traveling at 10^-4 c, a speed not so different from our own interplanetary probes. Also assume a very slow launch rate, so that a ship is launched every 10^4 years, with a maximum range of 10 parsecs. This is slow travel indeed: The travel time to the nearest stars in this scenario is roughly 10^5 years, a time during which 10 more ships will be launched. The paper explains that this travel speed is comparable to the velocity dispersion of stars in the galactic midplane, a fact that brings new stars into range of the colony ships.
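The scales involved are easy to check. Here is a back-of-the-envelope sketch (my own arithmetic, with a round 30 kiloparsec figure assumed for the width of the galactic disk) of the hop time between neighboring stars and the time to cross the galaxy at this ship speed:

```python
LY_PER_PARSEC = 3.26

def travel_time_years(distance_pc, speed_fraction_c):
    """Light needs 3.26 years per parsec, so a ship moving at a fraction
    `speed_fraction_c` of lightspeed needs distance * 3.26 / speed years."""
    return distance_pc * LY_PER_PARSEC / speed_fraction_c

ship_speed = 1e-4        # fraction of c, comparable to our interplanetary probes

print(f"hop to a star 3 pc away:  {travel_time_years(3.0, ship_speed):.1e} years")
print(f"maximum 10 pc range:      {travel_time_years(10.0, ship_speed):.1e} years")
print(f"crossing a ~30 kpc disk:  {travel_time_years(30000.0, ship_speed):.1e} years")
# -> roughly 1e5, 3e5 and 1e9 years respectively
```

The nearest-star hop takes on the order of 10^5 years, while a straight crossing of the disk at the same speed already lands in the 10^8 to 10^9 year range quoted below.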

This is an expansion without pause because as the stars mix locally, a stellar system can continue to populate the ten nearest stars every 10^5 years:

To first order, the stellar system can thus continue to populate the 10 nearest stars every 10^5 years, without immediately saturating its neighborhood with colonies or the need to launch faster or longer-lived colony ships to continue its expansion. Further, arrival of the colony ships at the nearby stars should not be modeled as a pause in the expansion of the civilization. Rather, the colonies themselves will continue to travel at these speeds with respect to the home stellar system, and themselves encounter fresh stars for colonization every 10^5 years, during which time they can also launch 10 colony ships.

Using halo stars, which have high velocities relative to the disk, for gravitational assists can boost cruise speed. We wind up with the capability of crossing the galaxy on a galactic rotational timescale. Here is a model of slow expansion that is anything but the uniform growth imagined in a static field of stars:

The slow expansion of an ETI should thus be modeled not as an expanding circle or sphere, subject to saturation and “fronts” of slower-expanding components of the supercivilization. A better model is as the mixing of a gas, as every colonizing world populates the stars that come near it, and those stars disperse into the galaxy in random directions, further “contaminating” every star they come near. If halo stars are themselves colonizable, then those that counter-rotate and remain near the plane will provide even faster means of colonization, since they will encounter ~10 times as many stars per unit time as disk stars.

Here again we note the key fact that this stellar motion obscures any well-defined Fermi bubble:

Non-circular orbits also provide significant radial mixing, and Galactic shear provides an additional source of mixing that is comparable to that of the velocity dispersion of the disk stars once the colonies have spread to ~ σv/vrot ~ 1/10 of the galaxy’s size, or ~ 1 kpc from the home stellar system.

Remember, these are extremely conservative assumptions, and they still show that when a civilization begins to colonize its nearest stars, it will populate the entire galaxy in no more than 10^8 to 10^9 years. The maximum timescale for galactic colonization is found to be on the order of a galactic rotation (10^8 years) even for present-day probe speeds. This has implications for the detectability of Fermi bubbles, for on a rotational timescale, such bubbles will be subject to rotational shear and thermal motions that disperse and ‘mix’ them. Or as Centauri Dreams regular Eniac put it in a comment yesterday, “Such bubbles would be sheared into streamers in relatively short order. The spread of civilization would look more like milk stirred into coffee than a clearly delineated expanding bubble.”

The upshot here is that it will be only during a fairly brief period of a galaxy’s history that a spacefaring civilization will have populated only a single contiguous part of that galaxy. The length of that time depends upon how fast the civilization is capable of expanding — the faster the expansion, the shorter the time to observe the interim period between Kardashev Levels II and III. The transition between this era and the galaxy-spanning civilization to follow is, by galactic standards, relatively brief. And if we assume the slowest possible expansion, our Fermi bubbles would be quickly obscured by natural stellar motion within the galaxy. Fermi bubbles, if they do exist, are going to be exceedingly hard to find.

The paper is Wright et al., “The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. I. Background and Justification,” The Astrophysical Journal Vol. 792, No. 1 (2014), p. 26 (abstract / preprint). I consider the SETI work that Jason Wright and his colleagues Matthew Povich and Steinn Sigurðsson are doing with the Glimpsing Heat from Alien Technologies project to be ground-breaking, and plan to check in with it often.


SETI: Are ‘Fermi Bubbles’ Detectable?

I’m enough of a perfectionist that when I get something wrong, I can’t rest easy until I figure out how and why I missed the story. Such a case occurred in an article I wrote for Aeon Magazine called Distant Ruins. The article covered the rise of so-called ‘Dysonian SETI,’ which is adding an entirely new dimension to current radio and optical methods by looking into observational evidence for advanced civilizations in our abundant astronomical data.

In the story, I homed in at one point on the work that Jason Wright and his colleagues Matthew Povich and Steinn Sigurðsson are doing with the Glimpsing Heat from Alien Technologies (G-HAT) project at Penn State. Keith Cooper went over the basics of this effort on Friday, putting his own spin on the group’s recent search of 100,000 galaxies. For more background, see Jason Wright’s Glimpsing Heat from Alien Technologies essay.

I noted in the Aeon article that the G-HAT team was examining infrared data from the Wide-field Infrared Survey Explorer (WISE) and the Spitzer Space Telescope in search of the signs of an advanced civilization. What I had wrong in my description was the statement that “Wright’s group is also looking for ‘Fermi bubbles’, patches of a galaxy that show higher infrared emissions than the rest, which could be a sign that a civilisation is gradually transforming a galaxy as it works its way across it.” I know I drew the idea of Fermi bubbles from Richard Carrigan’s work, and generalized from there, but generalizing was a mistake, because it turns out that the G-HAT team doesn’t believe Fermi bubbles are something we could detect.

Below is the ‘Whirlpool’ galaxy, M51, a beautiful image and a useful object for study because we are looking at a spiral galaxy in many ways like the Milky Way from an angle that lets us see it face-on. Could we see Fermi bubbles here?


Image: The Whirlpool Galaxy is a classic spiral galaxy. At only 30 million light years distant and fully 60 thousand light years across, M51, also known as NGC 5194, is one of the brightest and most picturesque galaxies in the sky. The above image is a digital combination of a ground-based image from the 0.9-meter telescope at Kitt Peak National Observatory and a space-based image from the Hubble Space Telescope. Credit: N. Scoville (Caltech), T. Rector (U. Alaska, NOAO) et al., Hubble Heritage Team, NASA.

Richard Carrigan has studied this galaxy closely, looking for such Fermi bubbles, which he described in a 2010 paper. Here’s my description in Toward an Interstellar Archaeology, written for these pages in the same year:

Suppose a civilization somewhere in the cosmos is approaching Kardashev type III status. In other words, it is already capable of using all the power resources of its star (4×10^26 W for a star like the Sun) and is on the way to exploiting the power of its galaxy (4×10^37 W). Imagine it expanding out of its galactic niche, turning stars in its stellar neighborhood into a series of Dyson spheres. If we were to observe such activity in a distant galaxy, we would presumably detect a growing void in visible light from the area of the galaxy where this activity was happening, and an upturn in the infrared. Call it a ‘Fermi bubble.’
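Those two power figures follow directly from the Sun’s luminosity and a round count of stars in a Milky Way-like galaxy; a quick sketch of my own arithmetic:

```python
L_SUN = 3.8e26            # W, the Sun's luminosity
N_STARS_GALAXY = 1e11     # rough star count for a Milky Way-like galaxy

type_ii_power = L_SUN                        # harvest one star completely
type_iii_power = L_SUN * N_STARS_GALAXY      # harvest every star in the galaxy

print(f"Type II:  ~{type_ii_power:.0e} W")   # ~4e26 W
print(f"Type III: ~{type_iii_power:.0e} W")  # ~4e37 W
```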

Carrigan (Fermi National Accelerator Laboratory) studied M51 and concluded that there were no unexplained ‘bubbles’ at the level of 5 percent of the galactic area. The Whirlpool galaxy seems like an ideal place to mount such a search given its orientation towards us. A Fermi bubble, if such things exist, might manifest itself as a void in the visible light we see in the image.

Carrigan talked about an expanding front of colonization as an advanced civilization moved through its galaxy, engulfing the galaxy on a time scale comparable to the galaxy’s rotation period or even less. But M51 produced no ‘bubbles,’ and James Annis would suggest that elliptical, rather than spiral, galaxies might be a better place to look for Fermi bubbles because ellipticals exhibit little structure, so that a potential void would stand out.

Here’s Carrigan in the 2010 paper (citation below) on how a civilization on its way to Kardashev Type III status might proceed:

If it was busily turning stars into Dyson spheres the civilization could create a “Fermi bubble” or void in the visible light from a patch of the galaxy with a corresponding upturn in the emission of infrared light. This bubble would grow following the lines of a suggestion attributed to Fermi that patient space travelers moving at 1/1000 to 1/100 of the speed of light could span a galaxy in one to ten million years. Here “Fermi bubble” is used rather than “Fermi void”, in part because the latter is also a term in solid state physics and also because such a region would only be a visible light void, not a matter void.

Wright and the G-HAT team are not persuaded by Carrigan’s Fermi bubbles. For one thing, as Carrigan has noted himself, bubble-like structures are not unusual in extragalactic astronomy, and spiral galaxies include areas that might mimic a void yet would be hard to regard as anything but natural. In one of their recent papers, the G-HAT researchers add that with galactic arm widths on the order of a kiloparsec, it is difficult to identify structures below this size scale.

The Annis idea, therefore, seems more useful, but for now let’s home in on that word ‘void.’ In the Aeon story, I referred to the galaxy VIRGOHI21 as a galaxy that contains a ‘void.’ But that’s a mistake, for as Jason Wright explained in a recent email, Virgo HI21 has no emissions at any wavelength except 21cm. It may, in fact, be a starless or ‘dark’ galaxy, a galaxy composed of dark matter, although the nature of the object is still controversial. The G-HAT team, according to Wright, has studied Virgo HI21 and found no infrared emission.

In any case, as Wright explained, the word ‘void’ isn’t appropriate, for galaxies do not actually contain them. Areas where there has been no star formation for the past 10 million years or so may manifest themselves as darker lanes between the spiral arms, and dust lanes may also appear dark, but Wright does not believe the shape of these darker lanes is consistent with the spread of a civilization. Nor are these regions true voids: they contain just as many stars as other regions in the galaxy. So detecting Fermi bubbles gets to be more and more problematic.

Fermi bubbles would be hard to detect for other reasons as well, as explained by the G-HAT team and presented in their recent work. This is intriguing stuff, having to do with the time scales involved in the spread of a civilization and the motions of stars in that period — these ‘bubbles’ would not be static! I want to look at this issue next but probably won’t be able to get the piece written and published before Wednesday due to an intersection of competing duties elsewhere.

The Carrigan paper is “Starry Messages: Searching for Signatures of Interstellar Archaeology,” JBIS Vol. 63 (2010), p. 90 (preprint). The G-HAT paper I am discussing today and on Wednesday is Wright et al., “The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. I. Background and Justification,” The Astrophysical Journal Vol. 792, No. 1 (2014), p. 26 (abstract / preprint).




