
Selection is going to be a key issue for future ground- and space-based observatories. Given lengthy observing times for targets of high interest, we have to know how to cull from our exoplanet catalog those specific worlds that can tell us the most about life in the universe. Recently, Ramses Ramirez (Earth-Life Science Institute, Tokyo Institute of Technology) went to work on the question of habitable zones for complex life, which are narrower than the classic habitable zone defined by the potential for water on the surface. In today’s essay, Alex Tolley looks at Ramirez’ recent paper, which examines the question in relation to the solubility of gases in lipid membranes. What emerges in this work is a constrained habitable zone suited to complex life, with limits Alex explores. The model has interesting ramifications right here in the Solar System, but it also points the way toward constraining the list of planets upon which we’ll apply our emerging tools for atmospheric characterization.

By Alex Tolley

Daggerwrist on Darwin IV. Artist Wayne Douglas Barlowe. Source: Expedition.

For all but the last three-quarters of a billion years of its history, life on Earth was almost entirely represented by unicellular organisms. As we explored in Detecting Early Life on Exoplanets, biosignatures for microbial life are likely to be far more prevalent than for worlds with complex life. While rocky worlds in the classic habitable zone (HZ) are still relatively few, academic PR departments trumpet every find as “Earth-like”, and a selection of these worlds will be targeted for biosignatures. However, as the number of these worlds increases, scientists will want to distinguish worlds that have a biosphere that can be characterized as more Earth-like, with verdant landscapes and megafauna in the seas and on land.

When the term “Earth-like” is used, the public thinks of a world that looks like Earth, with oceans, continents variously clothed in verdant landscapes, and perhaps most importantly of all, “charismatic megafauna”, the animals you go to see at the zoo or watch on David Attenborough’s excellent nature programs. A blue sea lapping on a muddy beach, despite teeming with microbes and other unicellular life, looks dead to the unpracticed eye, which is to say most of the human population. It is human-scale animals like the daggerwrist pictured above, from Barlowe’s “Expedition: Being an Account in Words and Artwork of the 2358 A.D. Voyage to Darwin IV”, that excite the public.

If life is rare, then the classic HZ will have the least constraints, although most of those worlds will still have biospheres populated only with microbes, and fewer probably with unicellular plants and animals. If life is not rare, then there will be a desire to discover true Earth-like worlds with complex life, which may mean limiting the range of the HZ that will allow for such life to flourish.

The classic HZ range is defined by the possibility of liquid water remaining continuously on the surface, warmed by the star’s radiation under an atmosphere of sufficient pressure containing some greenhouse gases. All of Earth’s life requires liquid water, which has led to the mantra “Follow the water” for life-detection missions. Inside the inner HZ limit, a runaway greenhouse eventually desiccates the planet, as happened on Venus. Towards the outer edge, the atmosphere needs to be increasingly composed of greenhouse gases, particularly carbon dioxide (CO2), until a limit is reached beyond which additional CO2 no longer warms the surface.

For the solar system, the classic HZ extends from about 0.95 AU, just inside Earth’s orbit but excluding Venus, out to about 1.67 AU, just outside the orbit of Mars. It is this outer reach that offers the possibility of a second genesis on Mars, and of finding extant life in refuges and in the lithosphere beneath the now inhospitable Martian surface.

Complex, or multicellular, life on Earth emerged less than 1 billion years ago as photosynthesis reduced the CO2 in the atmosphere and replaced it with oxygen (O2). Except for a few recently discovered species, all multicellular life is aerobic and requires a rich O2 atmosphere. It is the much greater energy released by aerobic respiration compared to anaerobic respiration that allows for the energetic lifestyles of multicellular animal life (metazoa). At least for our planet, we believe that the conditions for complex life to survive are constrained; Earth has its own habitable zone limits that are narrower than the classic HZ. The question is, “What might those HZ limits be for complex life, and how does that translate for exoplanets around different stellar types?”

CO2 is one of the main greenhouse gases extending the outer boundary of the HZ. Nitrogen (N2) also helps extend the outer edge: although not a conventional greenhouse gas, as a main constituent of the atmosphere it broadens the absorption lines of other gases and, at high pressures, absorbs through N2-N2 collisions. Are there limits to the pressures of these gases, due to their effects on complex life, that narrow the possible HZ for multicellular life living on the planet’s surface?

A new paper by Dr. Ramses Ramirez attempts to answer that question by applying the relationship between the solubility of gases in lipid membranes and their anesthetic potency (see figure 1 below). This theory, a partial explanation for the still imperfectly understood mechanism of anesthesia, holds that the solubility of a gas in lipid membranes correlates with its anesthetic potency. Anesthetists must monitor the dosage of these gases to maintain unconsciousness: too little and the patient remains conscious of the pain during surgery; too much, and the patient stops breathing and dies.

The anesthetic gases lie toward the bottom right of the chart in figure 1. Nitrous oxide (N2O) is less potent and still used in dentistry (as well as at “nitrous parties”). Less well known is that CO2 also acts as such a gas, with a lipid solubility similar to that of N2O. Physiologically, CO2 at first increases the breathing rate to flush it from the lungs, but at higher concentrations it produces respiratory, and later metabolic, acidosis as the gas dissolves into the blood serum, eventually causing respiration to cease and death to follow. As can be seen in figure 1 below, N2 has low solubility in lipid membranes, 2 to 3 orders of magnitude lower than CO2, and a concomitantly lower anesthetic potency.

However, we are probably also familiar with the effects of high-pressure N2 as nitrogen narcosis that is experienced by divers breathing compressed air at depth. The argument is that both CO2 and N2 dissolving in the lipid membranes of cells will cause death if those gas concentrations reach the anesthetic level for complex life.

Figure 1: The Meyer-Overton correlation of oil/gas solubility versus anesthetic potential of inhaled gases. Figure recreated from published data. Source Ramirez [1].

Figure 1 above shows the relationship between gases’ lipid solubility and their anesthetic potency. CO2’s solubility is similar to that of nitrous oxide, while N2 is far less potent and therefore apparently less of a constraint. Note that helium sits at the low-solubility, low-potency end of the range, which is why helium replaces N2 in breathing mixtures for deep diving in soft suits.

While the Meyer-Overton correlation is primarily established for humans, it has been shown to apply across several different phyla, as it is a physical rather than physiological effect. Determining the tolerance limits for CO2 and N2 provides a constraint that narrows the HZ to a “Complex Life Habitable Zone” (CLHZ). Dr. Ramirez supports the general applicability of lipid gas solubility to metazoa with prior experimental work, primarily on mammals but also on other animals, suggesting that 0.1 bar of CO2 (1/10th of surface atmospheric pressure, or about 1.4 psi) is a reasonable, conservative tolerance limit for complex life. N2 limits are set primarily by experiments on human divers: 2 bar of N2 seems to be the safe limit below which divers do not get narcosis, corresponding to a depth of only about 10 meters, at which even beginner scuba divers can safely operate for short durations. Using upper limits of 0.1 bar CO2 and 2 bar N2, Dr. Ramirez finds that his radiative-convective (RC) model gives an estimated HZ for complex life (CLHZ) of 0.95 – 1.21 AU. Using an advanced energy balance model (EBM) that allows for different temperatures across the Earth’s surface, and thus for liquid water at the equator but not at the poles, the CLHZ extends to 0.95 – 1.31 AU.

Pushing N2 higher still extends the outer edge: for a hypothetical 5 bar N2, 0.1 bar CO2 atmosphere, the outer limit reaches 1.36 AU using the energy balance model (see the quoted passage below). The CLHZ is shown in figure 2 below not just for the Sun, but for a range of main sequence star types. The relative narrowing of the CLHZ compared to the classic HZ is greatest for cooler stars, the type for which we currently have the most examples of exoplanets in the HZ.

Figure 2. The Complex Life Habitable Zone (CLHZ) for A – M stars (2,600 – 9,000 K) compared to other definitions. The CLHZ shown is for a 0.1 bar CO2, 2 bar N2 atmosphere, compared with the classic HZ. While the inner edges of the HZ and CLHZ are the same at 0.95 AU, the outer edge of the CLHZ is now well inside the orbit of Mars. Image source: Ramirez.

Dr. Ramirez compares his results to a similar paper by Dr. Edward Schwieterman that looks at the same problem through the lens of CO2 chemistry, noting that carbon monoxide (CO), while not limiting the CLHZ, is toxic and could limit the evolution of complex life [2]. (The CO is created by photolysis of CO2.) Schwieterman uses a 1D radiative-convective climate model for his calculations across a range of CO2 levels. Schwieterman does not investigate higher N2 pressures, which leaves his modeling with a narrower CLHZ than Dr. Ramirez’s most comparable runs. However, the CO toxicity does not appear significant except for planets orbiting cool stars such as M dwarfs.

While both authors attempt to redefine the likely boundaries for the HZ of complex life based on Earth’s biological evolution, only Dr. Ramirez employs the possibility of increasing the N2 pressure to increase the outer limit.

To quote from the paper:

“The CLHZ is slightly wider at the higher N2 pressure because of increased N2-N2 collision induced absorption and a decrease in the outgoing infrared flux, which more than offset an increase in planetary albedo.”

Dr. Ramirez also states:

“I consider how our solar system’s HZ changes if we assume (for the moment) that complex life could evolve to breathe in a hypothetical 5-bar N2 atmosphere. For this sensitivity study, the RC model predicts that such worlds in our solar system can remain habitable at 1.24AU (SEFF = 0.65) whereas atmospheric collapse can be avoided as far as 1.36 AU (SEFF = 0.54) in the EBM (nearly 60% classical HZ width). I find that the additional N2 opacity is sufficient to counter the ice-albedo feedback, allowing for effective planetary heat transfer even at relatively far distances.”
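The quoted S_EFF values map directly to orbital distance: for a star of luminosity L (in solar units), the distance receiving an effective flux S_EFF (in units of Earth's insolation) is d = sqrt(L/S_EFF) AU. A minimal check in Python, assuming the Sun's luminosity:

```python
from math import sqrt

def orbital_distance_au(s_eff, l_star=1.0):
    """Distance (AU) at which a star of luminosity l_star (solar units)
    delivers an effective stellar flux s_eff (Earth's insolation = 1)."""
    return sqrt(l_star / s_eff)

print(orbital_distance_au(0.65))  # ~1.24 AU, the RC model limit quoted above
print(orbital_distance_au(0.54))  # ~1.36 AU, the EBM limit
```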

Dr. Ramirez’s 0.1 bar constraint for CO2 should be put in context for life on Earth. CO2 currently makes up about 0.04% (roughly 0.006 psi) of the Earth’s atmosphere. Even during the Cambrian period, when multicellular animals were rapidly diversifying into phyla, the atmospheric CO2 fraction was never more than 1%, and it fell fairly continuously through the period. During the Great Permian Extinction, which saw 90-95% of all complex life become extinct, primarily through anoxia in the oceans, CO2 levels were little more than 0.1% at their peak. [See “Climate Change and Mass Extinctions: Implications for Exoplanet Life” and figure 3 below.] For highly cognitive humans, NASA conservatively stipulated that the highest emergency level of CO2 in the Apollo Command and Lunar Modules should be no more than 0.29 psi (0.02 bar) in an atmosphere of 5 psi O2 before cognitive skills become impaired [4]. The Centers for Disease Control and Prevention (CDC) guideline is that 0.04 bar of CO2 is immediately dangerous.
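Since the paragraph above mixes percentages, psi, and bar, a short conversion sketch may help keep the thresholds straight (assuming a 1 atm total pressure for the present-day figure):

```python
ATM_PSI = 14.696   # 1 standard atmosphere in psi
BAR_PSI = 14.504   # 1 bar in psi

# Present-day CO2, ~0.04% of a ~1 atm atmosphere:
print(0.0004 * ATM_PSI)          # ~0.0059 psi
# Apollo emergency ceiling of 0.29 psi, expressed in bar:
print(0.29 / BAR_PSI)            # ~0.020 bar
# The CDC's 0.04 bar limit as a fraction of sea-level pressure:
print(0.04 * BAR_PSI / ATM_PSI)  # ~0.039, i.e. about 4% of the atmosphere
```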

It should also be noted that the analysis is limited to surface-living, air-breathing animals. Bathypelagic organisms, such as deep oceanic fish, may be adapted to tolerate far higher N2 pressures.

Figure 3. O2 and CO2 levels in the Phanerozoic. [3] While the Permian extinction is associated with a rise in CO2 levels to about 0.1%, and a decline in O2 levels from the Carboniferous, the CO2 levels were far higher at 1% at the start of the Cambrian and still high in the Devonian (the age of fishes).

But what about multicellular organisms other than animals? While Dr. Ramirez acknowledges that complex life includes plants and fungi, not just metazoa (animals), he is unable to address the range of CO2 and N2 pressures these life forms might tolerate, because there is next to no data on the effects of high pressures and concentrations of these gases on plants or fungi, beyond experiments that incrementally increase CO2 to probe the limits of plant photosynthesis and productivity. Where we do have data is in Earth’s history of complex life, which indicates that even relatively modest increases in atmospheric CO2 from volcanic emissions, combined with the die-back of the plant life that would otherwise draw down CO2 and replenish O2, under sulfuric acid rains and ash-darkened skies, were sufficient to force most species, including plants, to extinction. We do not know which factor or combination of factors mattered most, nor whether it was primary factors such as anoxia or n-th order effects that produced the final extinctions.

Now that the inventory of exoplanets is rapidly increasing, it is certainly time to think more critically about what sort of life we are looking for and what that means for the range of the habitable zone that supports each kind. Rather than accepting the widest possible HZ, any atmospheric composition and pressure that permits liquid water, we could also be looking for the constraints apparently required by the sort of surface-dwelling, air-breathing complex life that gives rise to the charismatic fauna we have on Earth. Dr. Ramirez has posited one interesting idea for terrestrial complex life, based on respiration across a range of metazoans, that constrains the atmospheric gas composition and hence the HZ.

As Ramirez’ CLHZ has an outer limit well inside the orbit of Mars, this invites speculation that if Mars ever had any life during its earlier, wetter, period, it did not have complex life. If this model proves correct, while we may find subterranean microbial life on Mars, we will not find metazoan fossils, such as mollusk shells or vertebrate skeletons.

It should be borne in mind that life as a whole maintains Earth’s low CO2 levels, keeping the surface temperature equable for itself and maximizing biodiversity and biomass. While hotter periods (e.g. the Eocene maximum) and cooler ones (the ice ages) have upset that balance, life in concert with much slower geological processes acts as a thermostat. It is also the case that biomass and diversity are greatest in the tropical forests and lowest at the poles. Life must have been relatively sparse during the “snowball Earth” episodes but recovered once the global ice sheets melted. Life has evolved on the Earth as it is, with a biochemistry that matches those conditions.

Today, that requirement is for an atmosphere that has a low CO2 level. On exoplanets, where much higher CO2 levels are needed to keep the planet warm, different biochemistries might develop, and this is a caveat that Ramirez considers for his analysis. However, without examples of such life, we are forced to use Earth’s life as our only model. In a half-billion or so years in the future, as the sun increases its luminosity, the required CO2 level to keep Earth cool enough will be below that needed by plants. A technological species might utilize technology like orbital sunshades or perhaps genetic engineering to maintain life on Earth.

The more important point is that we may be able to provide more granular characterizations of exoplanets. Rather than the binary in-or-out of the classic HZ, and therefore potentially living or not, we can add granularity, such as inside the CLHZ and therefore capable of hosting complex life too. This conclusion does depend on exo-life following our terrestrial biology. If it doesn’t, we have to fall back on the more generous HZ calculations alone.

References

1. Ramirez, Ramses M. “A Complex Life Habitable Zone Based On Lipid Solubility Theory.” Scientific Reports, vol. 10, no. 1, 2020, doi:10.1038/s41598-020-64436-z.

2. Schwieterman, Edward W., et al. “A Limited Habitable Zone for Complex Life.” The Astrophysical Journal, vol. 878, no. 1, 2019, p. 19., doi:10.3847/1538-4357/ab1d52.

3. CO2 and O2 levels in the phanerozoic. Web accessed May 11, 2020. https://notrickszone.com/2018/05/28/2-new-papers-permian-mass-extinction-coincided-with-global-cooling-falling-sea-levels-and-low-co2/

4. Michel, E. L., et al, SP-368 Biomedical Results of Apollo – Chap. 5 Environmental Factors. Accessed from web, May 11th, 2020. https://history.nasa.gov/SP-368/s2ch5.htm


Exoplanet Hunting with CubeSats

55 Cancri e is a confirmed planet, and thus a departure from our topic of the last two days, which was the act of exoplanet confirmation as regards Proxima Centauri b and c, the latter still in need of further work before it can be considered confirmed. But 55 Cancri e has its uses in offering a tight orbit around a Sun-like star that can be detected using the transit method. That was just what was needed for ASTERIA (Arcsecond Space Telescope Enabling Research in Astrophysics), a technology demonstration mission involving a tiny CubeSat.

Sara Seager (MIT) has been at the heart of the investigation of CubeSats as exoplanet research platforms. I think the idea is brilliant. If we want to mount the most effective search of nearby Sun-like stars for Earth analogs, multiple telescopes must be in use. CubeSats are cheap. Why not launch a fleet of them, each with the task of monitoring a single star at a time? Launched in 2017, ASTERIA was the prototype, a nanosatellite equipped with the precision pointing control and thermal stability needed to meet the tight tolerances of such observations.

Image: Left to right: Electrical Test Engineer Esha Murty and Integration and Test Lead Cody Colley prepare the ASTERIA spacecraft for mass-properties measurements in April 2017 prior to spacecraft delivery ahead of launch. ASTERIA was deployed from the International Space Station in November 2017. Credit: NASA/JPL-Caltech.

ASTERIA is a collaboration between the Jet Propulsion Laboratory and MIT, one in which MIT retains the lead in science operations while JPL handles overall project management. Seager is principal investigator on the project. Three mission extensions pushed the original 90-day prime mission into extensive prototype testing, which culminated in the CubeSat using its fine pointing control to detect 55 Cancri e’s transits. This is quite an achievement for the tiny satellite, given the need for a steady platform free of movement or vibration as the star is examined.

And ponder this: 55 Cancri e blocks only 0.04% of the host star’s light. Mary Knapp is ASTERIA project scientist at MIT’s Haystack Observatory and lead author of the paper on this work, which will appear in the Astronomical Journal:

“We went after a hard target with a small telescope that was not even optimized to make science detections – and we got it, even if just barely. I think this paper validates the concept that motivated the ASTERIA mission: that small spacecraft can contribute something to astrophysics and astronomy.”
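That 0.04% figure is worth unpacking. A transit's fractional dip in starlight is (Rp/Rs)^2, so the planet's radius follows directly once the stellar radius is known. A minimal sketch, taking a stellar radius of about 0.94 solar for 55 Cancri as an assumed value:

```python
from math import sqrt

R_SUN_KM = 695_700.0
R_EARTH_KM = 6_371.0

def planet_radius_earths(transit_depth, r_star_suns):
    """Planet radius (Earth radii) implied by a transit depth and stellar radius."""
    r_planet_km = sqrt(transit_depth) * r_star_suns * R_SUN_KM
    return r_planet_km / R_EARTH_KM

# 55 Cancri e: depth ~0.04% of the star's light
print(planet_radius_earths(4e-4, 0.94))  # ~2 Earth radii
```

That lands near the roughly two Earth radii usually quoted for this super-Earth, and it shows why a 0.04% dip is so demanding a target for a suitcase-sized telescope.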

Image: The super-Earth exoplanet 55 Cancri e, depicted with its star in this artist’s concept, likely has an atmosphere thicker than Earth’s but with ingredients that could be similar to those of Earth’s atmosphere. Scientists say the planet may be entirely covered in lava. The planet is so close to its star that one face of the planet consistently faces the star, resulting in a dayside and a nightside. Credit: NASA/JPL-Caltech.

Yesterday we saw how data from three different sources were used to investigate Proxima Centauri c, strengthening the case for its existence but not yet confirming it. On its own, the ASTERIA data would be suggestive of a planet but not proof of one; only by comparing the CubeSat data with previous observations could the team determine that ASTERIA had indeed seen the planet. As we develop CubeSat capabilities, we can use them to follow up on detections made by larger telescopes, focusing on one star and keeping our gaze fixed.

That’s especially useful for potential Earth analogs, whose orbital periods around G-class stars are long enough to require persistence if we are to catch a transit. Thus it’s good news that Sara Seager has been awarded a NASA Astrophysics Science SmallSat Studies grant to develop a follow-on mission involving a constellation of satellites, each about twice the size of ASTERIA.

This excerpt from the paper describes the constellation concept:

ASTERIA was a successful technology demonstration of a future constellation of up to dozens of satellites, dubbed the ExoplanetSat Constellation. Each satellite would share ASTERIA’s precision pointing and thermal control capabilities, operate independently from the others, but may have different aperture sizes in order to reach down to fainter stars than ASTERIA’s current capability. The primary motivation is the fact that if there is a transiting Earth size planet in an Earth-like orbit about the nearest, brightest (V<7) Sun-like stars, we currently have no way to discover them; current missions saturate on these bright stars. The ultimate goal for the constellation is to monitor dozens of the brightest sun-like stars, searching for transiting Earth-size planets in Earth-like (i.e., up to one year) orbits.

The advantages of this ‘fleet’ approach are apparent. The paper continues:

Because the brightest sun-like stars are spread all across the sky, a single telescope will not do. Instead, each satellite would monitor a single sun-like star target of interest for as long as possible, before switching to another star, with targets only limited by the Sun, Earth and Moon constraints. To narrow down the approximately 3,000 target stars brighter than V=7, one would have to find a way to constrain the stellar inclinations and assume the planets orbit within about 10 degrees of the stars equatorial plane. This would reduce the number of target stars from about 3000 to about 300 (Beatty & Seager 2010), a much more tractable number of targets. The ExoplanetSat Constellation has a unique niche in context of existing and planned space transit surveys…, but is still in concept phase.

How to keep these spacecraft small? The use of CMOS detectors (complementary metal-oxide-semiconductor) working in visible light allowed ASTERIA to operate without a large cooling system, as would have been required by a CCD (charge-coupled device) to keep the instrument cold. We’ll follow MIT’s CubeSat work as the lessons learned from ASTERIA are drawn into the next design.

The paper is Knapp et al., “Demonstrating high-precision photometry with a CubeSat: ASTERIA observations of 55 Cancri e,” in process at the Astronomical Journal (preprint). Thanks to Centauri Dreams regular Andrew Tribick for the heads-up on this paper.


Confirmation of Proxima Centauri c?

Hard on the heels of the confirmation of Proxima Centauri b, we get news of Proxima c, which has now been analyzed in new work by Fritz Benedict (McDonald Observatory, University of Texas at Austin). Benedict has presented his findings at the ongoing virtual meeting of the American Astronomical Society, which ends today. The work follows up and lends weight to the discovery of Proxima c announced earlier this year by a team led by Mario Damasso of Italy’s National Institute for Astrophysics (INAF), which had used radial velocity methods to observe the star. We need further work, however, to say that Proxima c has been confirmed, as Dr. Benedict explained in an email this morning.

But first, let’s straighten out a question of identity. Yesterday, when discussing the confirmation of habitable zone world Proxima b, we talked about a second signal in data culled by the ESPRESSO spectrograph. If the second ESPRESSO signal does turn out to be a planet, it will be a third Proxima Centauri planet, not Proxima c. That signal does not rise to candidate planet status, nor does the ESPRESSO team claim it as such, but it suggests a minimum mass about a third of Earth’s at an orbital distance inside Proxima b in a five-day orbit.

Proxima c as studied by Benedict is a different world entirely. What the new work addresses is the Damasso finding of a planet in a 1,907-day orbit at a distance of 1.5 AU, well outside the star’s habitable zone. Seeing Damasso’s work, Benedict made the decision to re-examine data he had collected on Proxima Centauri using the Fine Guidance Sensors (FGS) on the Hubble Space Telescope. This is a classic case of tapping old data, as the Hubble work was done in the 1990s.

Image: Fritz Benedict, emeritus senior research scientist with the University of Texas at Austin’s McDonald Observatory. Credit: McDonald Observatory

And while Damasso used radial velocity methods (examining the star’s movements toward and away from Earth as influenced by planetary companions), the Hubble FGS, which were designed for pointing accuracy, allowed Benedict to use astrometry, the measurement of the positions and motions of stars. In the earlier study, Benedict worked with Barbara MacArthur, also at McDonald Observatory, to look for planets with orbital periods of 1,000 days or fewer, and found none. A re-investigation of the dataset looking for planets in longer orbital periods turned up the signal at 1,907 days.

Benedict then turned to images collected by INAF’s Raffaele Gratton using the SPHERE instrument on the Very Large Telescope in Chile, which showed what could be Proxima c at several points in its orbit. In An Image of Proxima c?, I ran a figure from the Gratton paper reproduced below, along with the paper’s caption.

Image: This is Figure 2 from the paper. The SPHERE images were acquired during four years through a survey called SHINE, and as the authors note, “We did not obtain a clear detection.” The figure caption in the paper reads like this: Fig. 2. Individual S/N maps for the five 2018 epochs. From left to right: Top row: MJD 58222, 58227, 58244; bottom row: 58257, 58288. The candidate counterpart of Proxima c is circled. Note the presence of some bright background sources not subtracted from the individual images. However, they move rapidly due to the large proper motion of Proxima, so that they are not as clear in the median image of Figure 1. The colour bar is the S/N. S/N detection is at S/N=2.2 (MJD 58222), 3.4 (MJD 58227), 5.9 (MJD 58244), 1.2 (MJD=58257), and 4.1 (MJD58288). Credit: Gratton et al.

What we now have on Proxima c, then, is the result of Hubble astrometry, radial velocity studies (Damasso et al.) and direct imaging (Gratton et al.), all of which allowed Benedict to refine the mass of the planet to about 7 times that of Earth. Older data serve us well.

“Basically, this is a story of how old data can be very useful when you get new information,” Benedict said. “It’s also a story of how hard it is to retire if you’re an astronomer, because this is fun stuff to do!”

Amen to that. Indeed, it’s hard to see how any astronomers specializing in planets around other stars could bring themselves to retire as we go ever deeper into what will surely be described as the ‘golden age’ of exoplanet studies.

When I contacted Dr. Benedict this morning, he told me that for now, his official statement on Proxima Centauri c is “A Preliminary Mass for Proxima Centauri C,” in Research Notes of the AAS Volume 4, Issue 4, id.46 (full text).

Because the individual detections from FGS astrometry, radial velocity, and imaging are all at the limit of detection, we should look toward more observations from SPHERE, and to future Gaia data on orbital perturbations at Proxima Centauri, as a further check for confirmation.


Confirming Proxima b

I’ve always liked the image of Proxima Centauri b that the ESO’s Martin Kornmesser has conjured directly below, and have used it in a couple of previous articles about the planet. Indeed, you’ll see it propagated widely when the topic comes up. But like all of these exoplanet artist impressions, it’s made up of educated guesses, as it has to be. We don’t even know, for example, whether the world we see here even has an atmosphere, as depicted.

Whether or not it does is important because it affects the possibilities for life around the star nearest to our own. Twenty times closer to its star than the Earth is to the Sun, Proxima b nonetheless receives roughly the same energy, meaning we could have surface temperatures there that would support liquid water on the surface. But the planet also receives 400 times more X-rays than the Earth, which leads the University of Geneva’s Christophe Lovis to ask:

“Is there an atmosphere that protects the planet from these deadly rays? And if this atmosphere exists, does it contain the chemical elements that promote the development of life (oxygen, for example)? How long have these favourable conditions existed? We’re going to tackle all these questions, especially with the help of future instruments like the RISTRETTO spectrometer, which we’re going to build specially to detect the light emitted by Proxima b, and HIRES, which will be installed on the future ELT 39 m giant telescope that the European Southern Observatory (ESO) is building in Chile.”

Image: This artist’s impression shows a view of the surface of the planet Proxima b orbiting the red dwarf star Proxima Centauri, the closest star to the Solar System. © ESO/M. Kornmesser.

RISTRETTO is exciting stuff, but I don’t want to get ahead of myself. Lovis (University of Geneva, or UNIGE) is responsible for data processing on ESPRESSO, currently the most accurate spectrograph in our arsenal. ESPRESSO has indeed just confirmed Proxima Centauri b’s existence, the world having first been discovered by a team led by Guillem Anglada-Escudé some four years ago (and boy do I remember when that news came in).

Anglada-Escudé’s work was brilliant, as the ESPRESSO confirmation shows, for his team was working with the older HARPS spectrograph, a formidable instrument in its own right but one with a third of ESPRESSO’s precision. The ESPRESSO confirmation of Proxima b, led by Francesco Pepe at UNIGE, was achieved at an accuracy of 30 centimeters per second (cm/s), with the goal of eventually reaching 10 cm/s. By way of comparison, the ELODIE spectrograph that found 51 Peg b, the first exoplanet discovered around a main sequence star, operated with an accuracy of 10 meters per second.

So the news is two-fold. First, we have a solid success from an instrument that is changing the game in terms of radial velocity detections, one that presages great things to come. Second, we have tightened up the numbers on Proxima b, which is now shown to have a minimum mass of 1.17 Earth masses, as opposed to the previous estimate of 1.3, in an orbit of 11.2 days. Both the 2016 discovery and the 2020 confirmation represent radial velocity work at the highest level.

Bubbling interestingly in the mix is the possibility of a second planet in the ESPRESSO results. We do have a signal in the data, but its cause remains uncertain. Radial velocity studies have to contend with photospheric and chromospheric phenomena on the surface of the star (associated with magnetic fields) that can look much like the signal of a planet. Here I turn to the paper on this work, which will appear in Astronomy & Astrophysics:

We find evidence for a second short period signal with a period of 5.15 days and a semi-amplitude of 0.4 m·s−1. If caused by a planetary companion, it would correspond to a minimum mass of 0.29 ± 0.08 M⊕ at an orbital distance of 0.02895 ± 0.00022 AU, with an equilibrium temperature of 330 ± 30 K. Further ESPRESSO observations will be needed to confirm the presence of the signal and establish its origin. We do not detect any additional companions up to 0.4 M⊕ at orbital distances shorter than the HZ of the star.

Take a look at that mass, less than one-third that of the Earth. If this is a planet, it’s the smallest planet ever measured with radial velocity methods. What an achievement if ESPRESSO can pull that one out of the noise!
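The paper's numbers hang together under the standard radial velocity formula, which for a circular orbit gives the stellar wobble K from the planet's minimum mass, the orbital period, and the stellar mass. A minimal sketch, assuming 0.12 solar masses for Proxima Centauri:

```python
from math import pi

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
DAY = 86_400.0       # s

def rv_semi_amplitude(m_planet_earths, period_days, m_star_suns):
    """RV semi-amplitude (m/s) for a circular, edge-on orbit."""
    m_p = m_planet_earths * M_EARTH
    m_s = m_star_suns * M_SUN
    return (2 * pi * G / (period_days * DAY)) ** (1 / 3) * m_p / (m_s + m_p) ** (2 / 3)

# The candidate second signal: 0.29 Earth masses in a 5.15-day orbit
print(rv_semi_amplitude(0.29, 5.15, 0.12))  # ~0.44 m/s, matching the quoted 0.4 m/s
# Proxima b itself: 1.17 Earth masses, 11.2 days
print(rv_semi_amplitude(1.17, 11.2, 0.12))  # ~1.4 m/s
```

A 0.4 m/s signal sits barely above ESPRESSO's current 30 cm/s accuracy, which is exactly why further observations are needed to establish its origin.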

The paper is Mascareño et al., “Revisiting Proxima with ESPRESSO,” in process at Astronomy & Astrophysics (preprint).


Sublake Settlements for Mars

Terraforming a world is a breathtaking task, one often thought about in relation to making Mars into a benign environment for human settlers. But there are less challenging alternatives for providing shelter to sustain a colony. As Robert Zubrin explains in the essay below, ice-covered lakes are an option that can offer needed resources while protecting colonists from radiation. The founder of the Mars Society and author of several books and numerous papers, Zubrin is the originator of the Mars Direct concept, which envisions exploration using current and near-term technologies. We’ve examined many of his ideas on interstellar flight, including magsail braking and the nuclear salt water rocket concept, in these pages. Now president of Pioneer Astronautics, Zubrin’s latest book is The Case for Space: How the Revolution in Spaceflight Opens Up a Future of Limitless Possibility, recently published by Prometheus Books.

by Robert Zubrin

Abstract

This paper examines the possibilities of establishing Martian settlements beneath the surface of ice-covered lakes. It is shown that such settlements offer many advantages, including the ability to rapidly engineer very large volumes of pressurized space, comprehensive radiation protection, highly efficient power generation, temperature regulation, copious resource availability, outdoor recreation, and the creation of a vibrant local biosphere supporting both the nutritional and aesthetic needs of a growing human population.

Introduction

The surface of Mars offers many challenges to human settlement. Atmospheric pressure is only about 1 percent that of Earth, imposing a necessity for pressurized habitats, making spacesuits necessary for outdoor activity, and providing less than optimum shielding against cosmic radiation. For these reasons some have proposed creating large subsurface structures, comparable to city subway systems, to provide pressurized, well-shielded volumes for human habitation [1]. The civil engineering challenges of constructing such systems, however, are quite formidable. Moreover, food for such settlements would have to be grown in greenhouses, limiting potential acreage, and imposing either huge power requirements if placed underground, or the necessity of building large transparent pressurized structures on the surface. Water is available on the Martian surface as either ice or permafrost. These materials can be mined and the product transported to the base, but the logistics of doing so, while greatly superior to anything possible on the Moon, are considerably less convenient than the direct access to liquid water available to nearly all human settlements on Earth. While daytime temperatures are acceptably close to 0 C, nighttime temperatures drop to -90 C, imposing issues on machinery and surface greenhouses. Yet despite the cold night temperatures, the efficiency of nuclear power is impaired by the necessity of rejecting waste heat to a near-vacuum environment.

All of these difficulties could readily be solved by terraforming the planet [2]. However, that is an enormous project whose vast scale will require an already-existing Martian civilization of considerable size and industrial power to be seriously undertaken. For this reason, some have proposed the idea of “paraterraforming” [3], that is, roofing over a more limited region of the Red Planet, such as the Valles Marineris, and terraforming just that part. But building such a roof would itself be a much larger engineering project than any yet done in human history.

There are, however, locations on Mars that have already been roofed over. These are the planet’s numerous ice-filled craters.

Making Lakes on Mars

Earth’s Arctic and Antarctic regions feature numerous permanently ice-covered or “subglacial” lakes [4]. These lakes have been shown to support active microbial and planktonic ecosystems.

Most subarctic and high-latitude temperate lakes are ice-covered in winter, but many members of their aquatic communities remain highly active, a fact well known to ice fishermen.

Could there be comparable ice-covered lakes on Mars?

At the moment, it appears that there are not. The ESA Mars Express orbiter has detected highly-saline liquid water deep underground on Mars using ground penetrating radar, and such environments are of great interest for scientific sampling via drilling. But to be of use for settlement, we need ice-covered lakes that are directly accessible from the surface. There are plenty of ice-filled craters on Mars. These are not lakes, however, as while composed of nearly pure water ice, they are frozen top to bottom. But might this shortcoming be correctable?

I believe so. Let us examine the problem by considering an example.

Korolev is an ice-filled impact crater in the Mare Boreum quadrangle of Mars, located at 73° north latitude and 165° east longitude (Fig. 1). It is 81.4 kilometers in diameter and contains about 2,200 cubic kilometers of water ice, similar in volume to Great Bear Lake in northern Canada. Why not use a nuclear reactor to melt the ice beneath the frozen surface, creating a huge ice-covered lake?

Fig. 1. Korolev Crater could provide a home for a sublake city on Mars. Photo by ESA/DLR.

Let’s do the math. Melting ice at 0 C requires 334 kJ/kg. We will need to supply this plus another 200 kJ/kg, assuming that the ice’s initial temperature is -100 C, for 534 kJ/kg in all. Ice has a density of 0.92 kg/liter, so melting 1 cubic kilometer of ice would require 4.9 x 10^17 J, or 15.6 GW-years of energy. A 1 GWe nuclear power plant on Earth requires about 3 GWt of thermal power generation. This would also be true in the case of a power plant located adjacent to Korolev, since it would be using the ice water it was creating in the crater as an excellent heat rejection medium. With the aid of 5 such installations, using both their waste heat and the dissipation from their electric power systems, we could melt a cubic kilometer of ice every year.
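A few lines of Python reproduce these figures:

```python
LATENT_HEAT = 334e3   # J/kg to melt ice at 0 C
WARMING = 200e3       # J/kg to raise the ice from -100 C to 0 C
RHO_ICE = 920.0       # kg/m^3
YEAR_S = 3.156e7      # seconds per year

mass_kg = 1e9 * RHO_ICE                       # one cubic kilometer of ice
energy_j = mass_kg * (LATENT_HEAT + WARMING)
print(energy_j)                  # ~4.9e17 J
print(energy_j / YEAR_S / 1e9)   # ~15.6 GW-years
```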

Korolev averages 500 m in depth, which is much deeper than we need. So rather than try to melt it all the way through, an optimized strategy might be to focus on coastal regions with an average depth of perhaps 40 meters. In that case, each cubic kilometer of ice melted would open 25 square kilometers of liquid lake for settlement. Alternatively, we could just choose a smaller crater with less depth, and melt the whole thing, except the ice cover at its top.

Housing in a Martian Lake

On Earth, 10 meters of water creates one atmosphere of pressure. Because Martian gravity is only 38 percent as great as that of Earth, 26 meters of water would be required to create the same pressure. But so much pressure is not necessary. With as little as 10 meters of water above, we would still have 0.38 bar of outside pressure, or 5.6 psi, allowing a 3 psi oxygen/2.6 psi nitrogen atmosphere comparable to that used on the Skylab space station. Reducing nitrogen atmospheric content in this way could also be advantageous because nitrogen is only a small minority constituent of the Martian atmosphere, making it harder to come by on Mars, and limiting the nitrogen fraction of breathing air would also facilitate traveling to lower pressure environments without fear of getting the bends. Ten meters of water above an underwater habitat would also provide shielding against cosmic rays equivalent to that provided by Earth’s atmosphere at sea level.
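These pressures follow from simple hydrostatics, P = ρgh, with Mars gravity substituted for Earth's. A minimal check (taking g on Mars as 3.71 m/s²):

```python
RHO_WATER = 1000.0   # kg/m^3
G_EARTH = 9.81       # m/s^2
G_MARS = 3.71        # m/s^2
PA_PER_PSI = 6894.8

# Pressure under 10 m of water on Mars:
p_pa = RHO_WATER * G_MARS * 10.0
print(p_pa / 1e5, p_pa / PA_PER_PSI)   # ~0.37 bar, ~5.4 psi (the essay rounds to 0.38 bar / 5.6 psi)

# Martian depth matching the pressure of 10 m of water on Earth:
p_earth_10m = RHO_WATER * G_EARTH * 10.0
print(p_earth_10m / (RHO_WATER * G_MARS))  # ~26 m, as stated above
```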

Construction of the habitats could be done using any of the methods employed for underwater habitats on Earth. These include closed pressure vessels, like submarines, or open-bottom systems, like diving bells. The latter offer the advantage of minimizing structural mass, since they have an interior pressure nearly equal to that of the surrounding environment, and direct, easy access to the sea via their bottom doors, without any need for airlocks. Thus, while closed submarines are probably better for travel, as their occupants do not experience pressure changes with depth, open-bottom habitats offer superior options for settlement. We will therefore focus our interest on the latter.

Consider an open-bottom settlement module consisting of a dome 100 m in diameter, whose peak is 4 meters below the surface and whose base is 16 meters below the surface. The dome thus has four decks, with 3 meters of head space for each. The dome is in tension, because the air in it is all at a pressure of 9 psi, corresponding to the lake water pressure at its base, while the lake water pressure at its top is only about 2.2 psi, for an outward pressure on the dome material near the top of 6.8 psi. The dome has a radius of curvature of 110 m.

The required yield stress of the material composing a pressurized sphere is given by:

σ = xPR/2t (1)

Where σ is the yield stress, P is the pressure, R is the radius, t is the dome thickness, and x is the safety factor. Let’s say the dome is made of steel with a yield stress of 100,000 psi and x=2. In that case, equation (1) says that:

100,000 = (2)(6.8)(110)/(2t), or t = 0.0075 m = 7.5 mm. (Stress and pressure are both in psi, so those units cancel, leaving t in meters.)

The mass of the steel would be about 600 tons. That’s not too bad, for creating a habitat with about 30,000 square meters of living space.
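The same arithmetic in Python, keeping the essay's mixed units (psi for stress and pressure so they cancel, meters for lengths); the steel density and the spherical-cap geometry are my own assumptions:

```python
from math import pi

def shell_thickness_m(pressure_psi, radius_m, yield_psi, safety=2.0):
    """Equation (1) solved for t: sigma = x*P*R/(2*t)."""
    return safety * pressure_psi * radius_m / (2.0 * yield_psi)

t = shell_thickness_m(6.8, 110.0, 100_000.0)   # ~0.0075 m
cap_area_m2 = 2 * pi * 110.0 * 12.0            # spherical cap, R = 110 m, height 12 m
mass_tons = cap_area_m2 * t * 7850.0 / 1000.0  # steel at ~7,850 kg/m^3
print(t, mass_tons)   # ~7.5 mm; ~500 tons, near the essay's round ~600-ton figure
```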

If instead of using steel, we made a tent dome from Spectra fabric, which has 4 times the strength of steel and 1/9th the density, the mass of the dome would only need to be about 17 tons. It would, however, need to be tied down around its circumference. Ballast weights of 90,000 tons of rocks could be used for this purpose. Otherwise the tie-down lines could be anchored to stakes driven deep into the frozen ground under the lake.

An attractive alternative to these engineering methods for creating a dome out of manufactured materials could be to simply melt the dome out of the ice covering the lake itself. For example, let’s say the ice cover is 20 m thick, and we melt a dome into it that is 12 m tall, 100 m in diameter, and has a radius of curvature of 110 m. Filling this with an oxygen/nitrogen gas mixture would provide a habitat of equal size to that discussed above. The pressure under 20 m of ice (density = 0.92) is 0.7 bar, or 10.3 psi. The roof of the dome is under 8 m of ice, whose mass exerts a compressive pressure of 0.28 bar, or 4.1 psi, leaving a pressure difference of 6.2 psi to be held by the strength of the ice. The tensile strength of ice is about 150 psi, so sticking these values into equation (1) we find that the safety factor, x, at the dome’s thinnest point would be:

150 = x(6.2)(110)/[(8)(2)], or x = 3.52

This safety factor is more than adequate. Networks of domes of this size could be melted into the ice cover, linked by tunnels through the thick material at their bases. If domes with a much larger radius of curvature were desired, the ice could be greatly strengthened by freezing a spectra net into it.

The mass of ice melted to create each such dome is about 80,000 tons, requiring 1 MWt-year of energy to do the melting. It would also require about 90 tons of oxygen to fill the dome with gas. This could be generated via water electrolysis. Assuming 80% efficient electrolysis units, this would require 1950 GJ, or 62 kWe-year of electric power to produce. Such large habitation domes could therefore be constructed and filled with breathable gas well in advance of the creation of the lake using much more modest power sources.
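The electrolysis figure can be checked from the enthalpy of splitting water, roughly 286 kJ per mole of H2O, with two moles of water yielding one mole (32 g) of O2. A sketch:

```python
DH_PER_MOL_H2O = 285.8e3   # J to split one mole of water
O2_MOLAR_KG = 0.032        # kg per mole of O2
YEAR_S = 3.156e7           # seconds per year

def electrolysis_energy_j(o2_kg, efficiency=0.8):
    """Electrical energy needed to produce o2_kg of oxygen by electrolysis."""
    moles_o2 = o2_kg / O2_MOLAR_KG
    return 2 * DH_PER_MOL_H2O * moles_o2 / efficiency

e = electrolysis_energy_j(90_000.0)
print(e / 1e9)            # ~2,000 GJ, near the essay's 1950 GJ
print(e / YEAR_S / 1e3)   # ~64 kWe-years, near the essay's 62
```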

Compressive habitation structures can be created under ice that are much larger still. This is so because ice has 92 percent the density of water, so that if a 50 meter deep column of ice beneath the lake’s surface ice were melted, it would yield a column of water 46 meters deep and 4 meters of void, which could be filled with air.

So, let’s say we had an ice crater, section of an ice crater, or even a glacier 5 km in radius and 70 meters or more deep. We melt a section of it starting 20 m under the top of the ice and going down 50 m. As noted, this would create a headroom space 4 m thick above the water. The ice above this void would have a weight of 7 psi, so we would fill the void with an oxygen/nitrogen gas mixture with a pressure of 6.999 psi. This would negate almost all the weight to leave the ice roof in an extremely mild state of compression. (Mild compression is preferred to mild tension, because the compressive strength of ice is about 1500 psi – ten times the tensile strength.) Under such circumstances the radius of curvature of the overhanging surface could be unlimited. As a result, a pressurized and amply shielded habitable region of 78 square kilometers would be created. Habitats could be placed on rafts or houseboats on this indoor lake, or an ice shelf formed to provide a solid floor for conventional buildings over much of it.

The total amount of water that would need to be melted to create this indoor lake city would be 4 cubic kilometers. This could be done in about 4 years by our proposed 5 GWe power system. Further heating would continue to expand the habitable region laterally over time. If the lake were deep, so that there was ice beneath the water column, it would gradually melt, increasing the headroom over the settlement as well.

Terraforming the Lake

The living environment of the sublake Mars settlement need not be limited to the interior of the air-filled habitats. By melting the ice, we are creating the potential for a vibrant surrounding aquatic biosphere, which could be readily visited by Mars colonists wearing ordinary wet suits and SCUBA gear.

The lake is being melted using hot water produced by the heat rejection of onshore or floating nuclear reactors. If the heat is rejected near the bottom of the lake, forceful upwelling will occur, powerfully fertilizing the lake water with mineral nutrients.

Assuming that the ice cover is reduced to less than 30 meters, there will be enough natural light during daytime to support phytoplankton growth, as has been observed in the Earth’s Arctic Ocean [5]. The lake’s primary biological productivity could be greatly augmented, however, by the addition of artificial light.

The Arctic Ocean exhibits high biological activity as far north as 75° N, where the sea receives an average day/night, year-round solar illumination of about 50 W/m2. If we take this as our standard, then each GW of our available electric power could be used to illuminate 20 square kilometers of lake. Combined with the mineral-rich water produced by thermal upwelling, and artificial delivery of CO2 from the Martian atmosphere as required, this illumination could serve to create an extremely productive biosphere in the waters surrounding the settlement.

The first organisms to be released into the lake should be photosynthetic phytoplankton and other algae, including macroscopic forms such as kelp. These would serve to oxygenate the water. Once that is done, animals could be released, starting with zooplankton, with a wide range of aquatic macrofauna, potentially including sponges, corals, worms, mollusks, arthropods, and fish coming next. Penguins and sea otters could follow.

As the lake continues to grow, its cities would multiply, giving birth to a new branch of human civilization, supported by and supporting a lively new biosphere on a new world.

Conclusion

We find that the best places to settle Mars could be under water. By creating lakes beneath the surface of ice-covered craters, we can create miniature worlds, providing acceptable pressure, temperature, radiation protection, voluminous living space, and everything else needed for life and civilization. The sublake cities of Mars could serve as bases for the exploration and development of the Red Planet, providing homes within which new nations can be born and grow in size, technological ability, and industrial capacity, until such time as they can wield sufficient power to go forth and take on the challenge of terraforming Mars itself.

References

1. Frank Crossman, editor, Mars Colonies: Plans for Settling the Red Planet, The Mars Society, Polaris Books, 2019

2. Robert Zubrin with Richard Wagner, The Case for Mars: The Plan to Settle the Red Planet and Why We Must, Simon and Schuster, NY, 1996, 2011.

3. Richard S. Taylor, “Paraterraforming: The Worldhouse Concept,” Journal of the British Interplanetary Society, vol. 45, no. 8, Aug. 1992, p. 341-352.

4. Subglacial lake, Wikipedia, https://en.wikipedia.org/wiki/Subglacial_lake#Biology accessed May 15, 2020.

5. Kevin Arrigo, et al, “Massive Phytoplankton Blooms Under Sea Ice,” Science, Vol. 336, page 1408, June 15, 2012 https://www2.whoi.edu/staff/hsosik/wp-content/uploads/sites/11/2017/03/Arrigo_etal_Science2012.pdf. Accessed May 15, 2020.


Modeling Hot Jupiter Clouds

Studying the atmospheres of exoplanets is a process that is fairly well along, especially when it comes to hot Jupiters. Here we have a massive target so close to its star that, when a transit occurs, we can look at the star’s light filtering through the atmosphere of the planet. Even so, clouds are a problem because they prevent accurate readings of atmospheric composition below the upper cloud layers. Aerosols — suspended solid particles or droplets in a gas — are common, range widely in composition, and make studying a planet’s atmosphere harder.

We’d like to learn more about which aerosols are where and in what kind of conditions, for we have a useful database of planets to work with. Over 70 exoplanets currently have transmission spectra available. A wide range of cloud types, many of them exotic indeed, have been proposed by astronomers to explain what they are seeing.

Imagine clouds of sapphire, or rubies, which is essentially what we get with aerosols of aluminum oxides like corundum. Potassium chloride can produce clouds of molten salt. Sulfides of manganese or zinc can be components, as well as organic hydrocarbon compounds. Which of these are most likely to form and affect our observations? And what about silicates?

A new model, produced by an international team of astronomers, bodes well for future work. The model predicts that the most common type of hot Jupiter cloud consists not of the most exotic of these ingredients but of liquid or solid droplets of silicon and oxygen — think melted quartz.

But much depends on the temperature, with the cooler hot Jupiters (below about 950 Kelvin) marked by hydrocarbon hazes. Peter Gao (UC-Berkeley) is first author of a paper describing the model that pulls all these and more possibilities together:

“The kinds of clouds that can exist in these hot atmospheres are things that we don’t really think of as clouds in the solar system. There have been models that predict various compositions, but the point of this study was to assess which of these compositions actually matter and compare the model to the available data that we have… The idea is that the same physical principles guide the formation of all types of clouds. What I have done is to take this model and bring it out to the rest of the galaxy, making it able to simulate silicate clouds and iron clouds and salt clouds.”

Some planets have clear atmospheres, making spectroscopy easier, but all too frequently high clouds block observations of the gases below them. Gao considers such clouds a kind of contamination in the data, making it hard to trace atmospheric elements like water and methane. The new model examines how gases of various atoms or molecules condense into cloud droplets, their patterns of growth or evaporation, and their transport by local winds.

Image: Predicted cloud altitudes and compositions for a range of temperatures common on hot Jupiter planets. The range, in Kelvin, corresponds to about 800-3,500 degrees Fahrenheit, or 427-1,927 degrees Celsius. Credit: UC Berkeley. Image by Peter Gao.

The team worked with computer models of Earth’s clouds and extended them to planets like Jupiter, where we find ammonia and methane clouds, before moving on to hot Jupiter temperatures up to 2,800 K (2,500 degrees Celsius) and the kind of elements that could condense into clouds under these conditions. The scientists simulated the distribution of aerosol particles, studying cloud formation through thermochemical reactions and haze formation through methane photochemistry. This is intricate stuff, modeling condensation from one gas to another, so that we can simulate the emergence of unusual clouds, but it draws on 30 of the exoplanets with recorded transmission spectra as a check on the model’s accuracy.

Using the model, we can move through layers of atmosphere as mediated by temperature, with the hottest atmospheres showing condensation of aluminum oxides and titanium oxides, producing high-level clouds, while lowering the temperature allows such clouds to form deeper in the planet’s atmosphere, leaving them obscured by bands of higher silicate clouds. Lower the temperatures further and the upper atmosphere becomes clear as the silicate clouds form further down. High-level hazes can form at lower temperatures still.
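The temperature sequencing described here can be caricatured in a few lines. This is only a toy classifier built on the round-number thresholds quoted in this article, not the Gao et al. model itself:

```python
def dominant_aerosol(t_eq_kelvin):
    """Toy classification of a hot Jupiter's dominant aerosol by equilibrium
    temperature, using the approximate thresholds quoted in the text."""
    if t_eq_kelvin < 950:
        return "hydrocarbon haze"
    if t_eq_kelvin <= 2000:
        return "silicate clouds"
    return "aluminum/titanium oxide clouds, with a largely clear upper atmosphere"

for t_eq in (800, 1200, 1800, 2400):
    print(t_eq, dominant_aerosol(t_eq))
```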

Looking for a clear sky to study the atmosphere without hindrance? Planets in the range of 950 to 1,400 K are the most likely to produce a cloudless sky, but planets hotter than 2,200 K also fit the bill, says Gao. Hannah Wakeford (University of Bristol, UK) is a co-author on the paper:

“The presence of clouds has been measured in a number of exoplanet atmospheres before, but it is when we look collectively at a large sample that we can pick apart the physics and chemistry in the atmospheres of these worlds. The dominant cloud species is as common as sand — it is essentially sand — and it will be really exciting to be able to measure the spectral signatures of the clouds themselves for the first time with the upcoming James Webb Space Telescope (JWST).”

The key finding here is that only one type of cloud made of silicates dominates cloud opacity over a wide range of temperatures, and thus has the greatest implications for observation. Silicates dominate above planetary equilibrium temperatures of 950 K and extend out to 2,000 K, while hydrocarbon hazes dominate below 950 K. Many of the most exotic cloud types proposed in the literature simply require too much energy to condense.

Too bad. I liked the idea of sapphire clouds. But as the paper notes: “The observed trends in warm giant exoplanet cloudiness is a natural consequence of the dominance of only two types of aerosol.” And it continues:

Even though we do not consider the day- and nightside cloud opacity of warm giant exoplanets explicitly in our modelling, our finding that only one type of cloud—silicates—dominates exoplanet cloud opacity over a wide range of temperatures has important implications for exoplanet emission and reflected light observations. For example, the brightness temperature of an atmosphere with an optically thick silicate cloud deck would be fixed to a value slightly below the condensation temperature of silicates where the cloud deck becomes optically thin, resulting in minimal variations in the atmospheric brightness temperature for 950 K < Teq < 2,100 K. This is indeed what is observed for the nightsides of warm giant exoplanets, which all have brightness temperatures of ~1,100 K… Meanwhile, the relatively high albedo of certain warm giant exoplanets such as Kepler-7b could also be explained by the dominance of silicate clouds, which are highly reflective at optical wavelengths.

The paper is Gao et al., “Aerosol composition of hot giant exoplanets dominated by silicates and hydrocarbon hazes,” Nature Astronomy 25 May 2020 (abstract).


A New Class of Astronomical Transients

Some of the fastest outflows in nature are beginning to turn up in the phenomena known as Fast Blue Optical Transients (FBOTs). These are observed as bursts that quickly fade but leave quite an impression with their spectacular outpouring of energy. The transient AT2018cow was found in 2018, for example, in data from the ATLAS-HKO telescope in Hawaii, an explosion 10 to 100 times as bright as a typical supernova that appeared in the constellation Hercules. It was thought to be produced by the collapse of a star into a neutron star or black hole.

Now we have a new FBOT that is brighter at radio wavelengths than AT2018cow, the third of these events to be studied at radio wavelengths. The burst occurred in a small galaxy about 500 million light years from Earth and was first detected in 2016. Let’s call it CSS161010 (short for CRTS-CSS161010 J045834-081803), and note that it completely upstages its predecessors in terms of the speed of its outflow. The event launched gas and particles at more than 55 percent of the speed of light. Such FBOTs, astronomers believe, begin with the explosion of a massive star, with differences from supernovae and GRBs only showing up in the aftermath.

Deanne Coppejans (Northwestern University) led the study:

“This was unexpected. We know of energetic explosions that can eject material at almost the speed of light, specifically gamma-ray bursts, but they only launch a small amount of mass — about 1 millionth the mass of the sun. CSS161010 launched 1 to 10 percent the mass of the sun at more than half the speed of light — evidence that this is a new class of transient.”
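To get a feel for those numbers, here is a back-of-the-envelope calculation in Python. It is mine, not the paper’s; it simply takes the quoted mass range and speed at face value and compares the result with the roughly 10^44 joules of kinetic energy carried by a typical supernova:

import math

# Relativistic kinetic energy of the CSS161010 outflow, using the
# quoted figures of 1-10% of a solar mass moving at 0.55c.
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
SN_KE = 1e44         # typical supernova kinetic energy, J (~1e51 erg)

def kinetic_energy(mass_kg, beta):
    """Relativistic kinetic energy (gamma - 1) * m * c^2, in joules."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2

for fraction in (0.01, 0.1):
    e = kinetic_energy(fraction * M_SUN, beta=0.55)
    print(f"{fraction:.0%} of M_sun at 0.55c -> {e:.1e} J, "
          f"~{e / SN_KE:.0f}x a typical supernova")

Even at the low end, the outflow carries several times the kinetic energy of an ordinary supernova, concentrated in material moving at more than half the speed of light.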

Image: Keck’s view of where the CSS161010 explosion (red circle) occurred in a dwarf galaxy. Credit: Giacomo Terreran/Northwestern University.

Meanwhile, a second explosion, called ZTF18abvkwla (“The Koala”), has turned up in a galaxy considerably further out at 3.4 billion light years. Caltech’s Anna Ho led the study of this one, with both teams gathering data from the Very Large Array, the Giant Metrewave Radio Telescope in India and the Chandra X-ray Observatory. In both cases, it was clear that the type of explosion, bright at radio wavelengths, differed from both supernovae explosions and gamma-ray bursts. “When I reduced the data,” said Ho, “I thought I had made a mistake.”

FBOTs became recognized as a specific class of object in 2014, but the assumption is that our archives contain other examples of what Coppejans’ co-author Raffaella Margutti calls ‘weird supernovae,’ a concession to the fact that it is hard to gather information on these objects solely in the optical. The location of the CSS161010 explosion is a dwarf galaxy containing roughly 10 million stars in the southern constellation Eridanus.

Bright FBOTs like CSS161010 and AT2018cow have thus far turned up only in dwarf galaxies, which the authors note is reminiscent of some types of supernovae as well as gamma-ray bursts (GRBs). A transient like this flares up so quickly that it may prove impossible to pin down its origin, but black holes and neutron stars are prominent in the astronomers’ thinking:

“The Cow and CSS161010 were very different in how fast they were able to speed up these outflows,” Margutti said. “But they do share one thing — this presence of a black hole or neutron star inside. That’s the key ingredient.”

Even so, the differences among the three FBOTs thus far studied at multiple wavelengths are notable. In the excerpt below, the authors of the Coppejans paper use the term ‘engine-driven’ to refer to a central engine, the rotating accretion disk around a neutron star or black hole produced by supernova core-collapse, which can propel narrow jets of material outward in opposite directions. The authors believe that FBOTs produce this kind of engine, but in this case one surrounded by material shed by the star before it exploded. The surrounding shell, as it is struck by the blast wave, would be the source of the FBOT’s visible light burst and radio emission.

From the paper:

The three known FBOTs that are detected at radio wavelengths are among the most luminous and fastest-rising among FBOTs in the optical regime… Intriguingly, all the multi-wavelength FBOTs also have evidence for a compact object powering their emission… We consequently conclude… that at least some luminous FBOTs must be engine-driven and cannot be accounted for by existing FBOT models that do not invoke compact objects to power their emission across the electromagnetic spectrum. Furthermore, even within this sample of three luminous FBOTs with multiwavelength observations, we see a wide diversity of properties of their fastest ejecta. While CSS161010 and ZTF18abvkwla harbored mildly relativistic outflows, AT 2018cow is instead non-relativistic.

Which is another way of saying that we have a long way to go in understanding FBOTs. We see characteristics of both supernovae and GRBs, but also distinctive differences. Further observations at radio and X-ray wavelengths will be critical for learning more about their physics.

Image: Artist’s conception of the new class of cosmic explosions called Fast Blue Optical Transients. Credit: Bill Saxton, NRAO/AUI/NSF.

The first paper is Coppejans, Margutti et al., “A Mildly Relativistic Outflow from the Energetic, Fast-rising Blue Optical Transient CSS161010 in a Dwarf Galaxy,” Astrophysical Journal Letters Vol. 895, No. 1 (26 May 2020). Abstract.

On the FBOT ZTF18abvkwla, see Ho et al., “The Koala: A Fast Blue Optical Transient with Luminous Radio Emission from a Starburst Dwarf Galaxy at z = 0.27,” Astrophysical Journal Vol. 895, No. 1 (26 May 2020). Abstract.


Star Formation and Galactic Mergers

Our galaxy is 10,000 times more massive than Sagittarius, a dwarf galaxy discovered in the 1990s. But we’re learning that Sagittarius may have had a profound effect on the far larger galaxy it orbits, colliding with it on at least three occasions in the past six billion years. These interactions would have triggered periods of star formation that we can, for the first time, begin to map with data from the Gaia mission, a challenge tackled in a new study in Nature Astronomy.

The paper in question, produced by a team led by Tomás Ruiz-Lara (Instituto de Astrofísica de Canarias, Tenerife), argues that the influence of Sagittarius was substantial. The data show three periods of increased star formation, with peaks at 5.7 billion years ago, 1.9 billion years ago and 1 billion years ago, corresponding to the passage of Sagittarius through the Milky Way disk.

The work is built around Gaia Data Release 2, combining the photometry and parallax information with modeling of observed color-magnitude diagrams to build a star formation history within a bubble around the Sun with a radius of 2 kiloparsecs (about 6,500 light years). The star formation ‘enhancements,’ as the paper calls them, are well-defined, though of decreasing strength, with a possible fourth burst spanning the last 70 million years.
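The color-magnitude fitting behind this result is elaborate, but the signature it recovers is easy to visualize. The sketch below uses entirely invented numbers, with synthetic stellar ages standing in for the derived star formation history, just to show how bursts superposed on steady star formation stand out in a simple age histogram:

import numpy as np

# Synthetic illustration only: steady star formation over 10 Gyr plus
# bursts near the reported enhancements (5.7, 1.9 and 1.0 Gyr ago).
# All counts and burst widths are invented for display purposes.
rng = np.random.default_rng(42)
ages = rng.uniform(0.0, 10.0, size=20_000)
for peak, n in ((5.7, 4000), (1.9, 3000), (1.0, 2000)):
    ages = np.concatenate([ages, rng.normal(peak, 0.15, size=n)])

counts, edges = np.histogram(ages, bins=50, range=(0.0, 10.0))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:4.1f}-{hi:4.1f} Gyr ago  {'#' * int(c // 100)}")

The real analysis must contend with stellar age uncertainties far larger than this toy version admits, which is part of why the precision of the Gaia data matters so much.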

Ruiz-Lara sees the disruption caused by Sagittarius as substantial, a follow-on to an earlier merger:

“At the beginning you have a galaxy, the Milky Way, which is relatively quiet. After an initial violent epoch of star formation, partly triggered by an earlier merger as we described in a previous study, the Milky Way had reached a balanced state in which stars were forming steadily. Suddenly, you have Sagittarius fall in and disrupt the equilibrium, causing all the previously still gas and dust inside the larger galaxy to slosh around like ripples on the water.”

Image: The Sagittarius dwarf galaxy has been orbiting the Milky Way for billions of years. As its orbit around the 10,000 times more massive Milky Way gradually tightened, it started colliding with our galaxy’s disc. The three known collisions between Sagittarius and the Milky Way have, according to a new study, triggered major star formation episodes, one of which may have given rise to the Solar System. Credit: ESA.

The idea is that higher concentrations of gas and dust are produced in some areas as others empty, the newly dense material triggering star formation. According to the paper, the 2 kiloparsec local volume is:

…characterized by an episodic SFH [star formation history], with clear enhancements of star formation ~ 5.7, 1.9 and 1.0 Gyr ago. All evidence seems to suggest that recurrent interactions between the Milky Way and Sgr dwarf galaxy are behind such enhancements. These findings imply that low mass satellites not only affect the Milky Way disk dynamics, but also are able to trigger notable events of star formation throughout its disk. The precise dating of such star forming episodes provided in this work sets useful boundary conditions to properly model the orbit of Sgr and its interaction with the Milky Way. In addition, this work provides important constraints on the modelling of the interstellar medium and star formation within hydrodynamical simulations, manifesting the need of understanding physical processes at subresolution scales and of further analysis to unveil the physical mechanisms behind global and repeated star formation events induced by satellite interaction.

Could the passage of Sagittarius through the Milky Way be behind the Sun’s formation? That seems a stretch given the length of time between the first disruption and the Sun’s formation some 4.6 billion years ago, but co-author Carme Gallart (IAC) doesn’t rule it out:

“The Sun formed at the time when stars were forming in the Milky Way because of the first passage of Sagittarius. We don’t know if the particular cloud of gas and dust that turned into the Sun collapsed because of the effects of Sagittarius or not. But it is a possible scenario because the age of the Sun is consistent with a star formed as a result of the Sagittarius effect.”

What I learned here is that understanding the physical processes behind star formation, and incorporating that understanding into workable models, remains a hard problem for astronomers, because ongoing work is challenging earlier views of what happens when galaxies merge. The paper points out that while we have a number of colliding galaxies to examine, there is little theoretical work on the impact of a single satellite galaxy on a spiral galaxy.

And a key point: “…although we can easily link the reported enhancements with possible pericentric passages of Sgr, we cannot pinpoint what exact physical mechanisms are triggering such events.” Plenty of opportunity ahead for researchers looking into the Milky Way’s history.

The paper is Ruiz-Lara et al., “The recurrent impact of the Sagittarius dwarf on the star formation history of the Milky Way,” published in Nature Astronomy 25 May 2020 (abstract).


On SETI, International Law, and Realpolitik

When Ken Wisian and John Traphagan (University of Texas at Austin) published “The Search for Extraterrestrial Intelligence: A Realpolitik Consideration” (Space Policy, May 2020), they tackled a problem I hadn’t considered. We’ve often discussed Messaging to Extraterrestrial Intelligence (METI) in these pages, pondering the pros and cons of broadcasting to the stars, but does SETI itself pose issues we are not considering? Moreover, could addressing these issues possibly point the way toward international protocols to address METI concerns?

Ken was kind enough to write a post summarizing the paper’s content, which appears below. A Major General in the USAF (now retired), Dr. Wisian is currently Associate Director of the Bureau of Economic Geology, Jackson School of Geosciences at UT. He is also affiliated with the Center for Space Research and the Center for Planetary Systems Habitability at the university. A geophysicist whose main research is in geothermal energy systems, modeling, and instrumentation & data analysis, he is developing a conference on First Contact Protocols to take place at UT-Austin this fall, a follow-on to his recent session at TVIW 2019 in Wichita.

by Ken Wisian

The debate over the wisdom of active Messaging to ExtraTerrestrial Intelligence (METI) has been vigorously engaged for some time. Its progenitor, the more accepted passive Search for ExtraTerrestrial Intelligence (SETI), has largely been assumed to carry little or no risk. The reasons for this assumption appear to be:

1. It does not alert ETI to our existence, and therefore we should not face a threat of invasion or destruction from aliens (if such action is even practical over interstellar distances).

2. The minor Earthbound threat from extremists (of various possible persuasions) whose “world view” might conflict with the possibility of ETI’s existence would be no more than an annoyance.

Implicit in the above is the underlying assumption that the only realistic threat that could arise from METI or SETI is that from a hostile ETI. In other words, the threat is external to humanity. What this too-simple reasoning overlooks is human history, particularly international affairs, conflict and war. [1]

SETI as used here is the passive search for electromagnetic signals from ETI, currently considered most likely to take the form of radio or laser signals deliberately sent to somewhere. The search for non-signal evidence (e.g. inadvertent laser propulsion leakage) is not considered here, though it could tie into the discussion in a distant, indirect manner. Note: an ETI artifact (e.g. a spaceship) could carry an import similar to that of the SETI detection discussed here.

So what harm could SETI do? Looking at current and historical international affairs, particularly great-power competition, the answer is readily apparent – competition for dominance. In international affairs, nations compete, sometimes violently, for position in the world. This can be for economic or military advantage, more land or control over the seas, or mere survival. Witness the South China Sea today, the theft of nuclear weapons secrets in the 1940s and 1950s, or the Byzantine Empire engaging in industrial espionage to steal the secret of silk-making from China.

Now contemplate the potential technology advances that could come with a SETI detection. These could range from downloading the “Encyclopedia Galactica” to establishing a two-way dialogue that includes sharing technology. With the potential for revolutionary leaps in science and technology, whether directly destructive or not, to say the great and superpowers would be “interested” is a monumental understatement.

Now think about the potential advantage (read: domination-enabling) that could accrue to one country if it were the only beneficiary of those technology advances. “How?” you ask. “Anyone can point a radio telescope at the sky.” Not so fast. Unless the signal comes from within our own galactic back yard, most likely within the Solar System, sending and receiving interstellar communications will take a relatively large, complicated industrial complex (physical plant) run by very specialized personnel. This is the key fact that could lead to SETI/METI becoming the next “Great Game” [2] of international affairs.

Large, specialized complexes and their associated personnel are limited in number and therefore subject to physical control. For the sake of argument, let’s say there are a dozen such facilities in the world. This is far fewer than the number of critical infrastructure sites the US and coalition forces decided had to be taken out in Iraq in the Gulf Wars to reduce Iraqi military capability – a very manageable target set. Now you begin to see the problem: superpowers, seeing a potentially world-dominating advantage in monopolizing the ETI communication channel, might also see it as feasible to preserve their own access to ETI while denying the same to all other countries.

While “Great Games” like this can sometimes be kept in the purely political realm, that is relatively rare. Competition of this sort often includes violent espionage or proxy wars and occasionally can escalate to direct super-power competition. Thus, an actual SETI detection could lead rapidly to the first true information war – a superpower war (with all the existential risk that carries) fought purely for control of knowledge.

Monopolizing communication with ETI could be the trigger for the first information-driven world war.

Realization of the risk that even passive SETI presents should drive further actions:

1. The development of realistic and binding international treaties on the handling of first contact protocols – admittedly a long shot. The existing post-detection protocol is a very small and completely non-binding first step in this direction.

2. Formation of deliberately international SETI facilities with uninterruptible data sharing to partner countries (and/or the UN). These would also have interleaved internal chains of command drawn from member countries. While this would be somewhat inefficient, the reduction in risk would be well worth the effort. A phase 2 would be a similar arrangement for METI, which would implicitly force the adoption of international standards and provide a process for METI.

3. Further (renewed?) discussion and research into SETI risk. This should bring in many disciplines that are often not involved deeply in the SETI/METI fields, from government policy to history to psychology and many others. In staring so hard at the very obvious risk of METI, we missed the risk from SETI alone. We need to turn around and explore that road before proceeding further down the highway to METI.

Notes

[1] What I am getting at here is that unfortunately, this is a stereotypical “ivory tower” point of view, too idealistic and disconnected from messy, illogical human affairs. I say this reluctantly as a “card-carrying” (i.e. Ph.D.) member of the academic world.

[2] I am definitely abusing the term “Great Game” in multiple ways. The term refers to the 19th century competition between the British and Russian empires for control of Central and South Asia. It was a deadly serious, and deadly, game in actuality, but the term captures well the feeling of being in a fierce competition.

PG: Let me insert here this excerpt from the paper highlighting the question of international law and the issues it raises:

The potentially enormous value to nation states of monopolizing communication with ETI, for the purpose of technological dominance in human affairs, is a significant factor in understanding possible scenarios after a confirmed contact event and bears further thinking by scholars and policy specialists. History shows that in circumstances that states perceive as vital they will likely act in their perceived best interest in accordance with principles of realpolitik thinking. In these circumstances, international law is frequently not a strong constraint on the behavior of governments and a protocol developed by scientists on how to handle first contact is unlikely to be of any concern at all. This risk needs to be acknowledged and understood by the larger international community to include scientists active in SETI in addition to political leaders and international relations scholars.

The paper is Wisian and Traphagan, “The Search for Extraterrestrial Intelligence: A Realpolitik Consideration,” Space Policy, May 2020 (abstract).


Astrobiological Science Fiction

I had never considered the possibilities for life on Uranus until I read Geoffrey Landis’ story “Into the Blue Abyss,” which first ran in Asimov’s in 1999, and later became a part of his collection Impact Parameter. Landis’ characters looked past the lifeless upper clouds of the 7th planet to go deep into warm, dark Uranian oceans, his protagonist a submersible pilot and physicist set to explore:

Below the clouds, way below, was an ocean of liquid water. Uranus was the true water-world of the solar system, a sphere of water surrounded by a thick atmosphere. Unlike the other planets, Uranus has a rocky core too small to measure, or perhaps no solid core at all, but only ocean, an ocean that has actually dissolved the silicate core of the planet away, a bottomless ocean of liquid water twenty thousand kilometers deep.

It would be churlish to give away what turns up in this ocean, so I’m going to direct you to the story itself, now available for free in a new anthology edited by Julie Novakova. Strangest of All is stuffed with good science fiction by the likes of David Nordley, Gregory Benford, Geoffrey Landis and Peter Watts. Each story is followed by an essay about the science involved and the implications for astrobiology.

Although I’ve been reading science fiction for decades, our discussions of it in these pages are generally sparse, related to specific scientific investigations. That’s because SF is a world in itself, and one I can cheerfully get lost in. I have to tread carefully to be able to stay on topic. But now and then something comes along that tracks precisely with the subject matter of Centauri Dreams. Strangest of All is such a title, downloadable as a PDF, .mobi or .epub file. I use both a Kindle Oasis and a Kobo Forma for varying reading tasks, and I’ve downloaded the .epub for use on the Forma, but .mobi works just fine for the Kindle.

What we have here is a collaborative volume, developed through the European Astrobiology Institute, containing work by authors we’ve talked about in these pages before because of their tight adherence to physics amidst literary skills beyond the norm. The quote introducing the volume still puts a bit of a chill down my spine:

“…this strangest of all things that ever came to earth from outer space must have fallen while I was sitting there, visible to me had I only looked up as it passed.”

That’s H. G. Wells from The War of the Worlds (1898), still a great read since the first time I tackled it as a teenager. What Novakova wants to do is use science fiction to make astrobiology more accessible, which is why the science commentaries following each story are useful. Strangest of All looks to be a classroom resource for those who teach, part of what the European Astrobiology Institute plans as a continuing publishing program in outreach and education. We’ve talked before about science fiction’s role as a career starter for budding physicists and engineers.

Gerald Nordley’s “War, Ice, Egg, Universe” takes us to an ocean world with a frozen surface on top, a place like Europa, where the tale has implications for how we approach the exploration of Europa and Enceladus, and perhaps Ganymede as well. In fact, with oceans now defensibly proposed for objects ranging from Titan to Pluto, we are looking at potential venues for astrobiology that defy conventional descriptions of habitable zones as orbital arcs supporting liquid water on the surface. Referring to characters in the story, the EAI essay following Nordley’s tale comments:

Chyba (2000) and Chyba & Phillips (2001) tried to work even with these unknowns and calculate the amount of energy for putative Europan life, and to describe what ecosystems might potentially thrive there. According to these estimates, even a purely surface radiation-driven ecosystem might yield cell counts of over one cell per cubic centimeter; perhaps even a thousand cells per cubic centimeter in the uppermost ocean layers. Putative hydrothermal vents, of course, would create a different source of energy and chemicals for life (albeit one much more difficult to discover – in contrast, life near the icy shell might erupt into space in the geysers and be discovered by “simple” flybys). Any macrofauna, though, seems highly improbable given the energy estimates. Since Loudpincers was about eight times larger than the human Cyndi, by his own account, we’ll really have to look for his civilization elsewhere, perhaps on a larger moon of some warm Jupiter.
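Those cell densities sound vanishingly small, but integrated over an ocean they add up. Here is an order-of-magnitude sketch in Python, mine rather than the essay’s; the radius is Europa’s, while the assumed ocean depth and the comparison to Earth’s microbial census are my own labeled assumptions:

import math

# Total cell count implied by ~1 cell per cubic centimeter throughout
# Europa's ocean, treated as a thin spherical shell.
R_EUROPA = 1.561e6        # Europa's radius, m
OCEAN_DEPTH = 1.0e5       # assumed ocean depth of ~100 km, m
CELLS_PER_CM3 = 1.0       # the Chyba-based figure quoted above

ocean_volume_m3 = 4.0 * math.pi * R_EUROPA**2 * OCEAN_DEPTH
total_cells = ocean_volume_m3 * 1e6 * CELLS_PER_CM3   # 1 m^3 = 1e6 cm^3
print(f"Ocean volume: {ocean_volume_m3:.1e} m^3")
print(f"Implied cell count: {total_cells:.1e}")
# Earth's prokaryote census is often quoted near 1e30 cells, so this
# would be roughly a millionth of Earth's microbial biosphere.

A biosphere a millionth the size of Earth’s microbial one might turn up in a plume sample, but it builds no reefs, let alone a Loudpincers.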

You see the method — the follow-up essay explores the ideas, but goes beyond that to provide references for continuing the investigation in the professional literature. This essay also speculates about Ganymede, where a liquid water ocean may be caught between two layers of ice. The ‘club sandwich’ model for Ganymede posits several layers of oceans and ice, which would make Ganymede perhaps the most bizarre ocean-bearing world in the Solar System, one with incredible pressures bearing down on high-pressure ice at the bottom (20 times the pressure of the bottom of the Mariana Trench on Earth).
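That pressure figure is easy to sanity-check with the hydrostatic relation P = ρgh. The quick estimate below is mine, not the essay’s; the surface gravity is Ganymede’s, while the shell thickness, the constant mean density and the neglect of compression are simplifying assumptions:

# Hydrostatic pressure at the base of Ganymede's hydrosphere.
RHO = 1100.0        # assumed mean shell density, kg/m^3 (water plus denser ice)
G_GANYMEDE = 1.43   # Ganymede's surface gravity, m/s^2
MARIANA = 1.1e8     # pressure at the Challenger Deep, Pa (~1,100 atm)

for depth_km in (400, 800):
    p = RHO * G_GANYMEDE * depth_km * 1e3   # P = rho * g * h
    print(f"{depth_km} km shell -> {p:.1e} Pa, ~{p / MARIANA:.0f}x Mariana Trench")

A thicker shell or a denser column of high-pressure ice pushes the multiple from around ten toward the essay’s factor of twenty, so the quoted figure is the right order of magnitude.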

Gregory Benford’s “Backscatter” is likewise ice-oriented, this time in the remote reaches of the Kuiper Belt and the Oort Cloud beyond. From the essay following the story:

Although it’s difficult to imagine a path from putative simple life in early water-soaked asteroids heated by the radioactive aluminum to vacflowers blooming on the surface of an iceteroid, life in the Kuiper Belt, the Oort Cloud and beyond cannot be ruled out – and we haven’t even touched the issue of rogue planets, which might have vastly varying surface conditions stemming from their size, mass, composition, history and any orbiting bodies.

The essay gives us an overview of the science that, as in Benford’s story, conceives of possible life sustained by sparse inner heat and the presence of ammonia and salts, perhaps with tidal heating thrown in for good measure. Cold brines would demand chemical and energy gradients to sustain life, a difficult thing to discover or measure unless cryovolcano activity coughs up evidence of the ocean below the ice. Some silicon compounds may support a form of life in ice as far out as the Oort, or perhaps in liquid nitrogen. Usefully, the essay on “Backscatter” runs through the scholarship.

The European Astrobiology Institute has put together a project team around “Science Fiction as a Tool for Astrobiology Outreach and Education,” out of which has come this initial volume. The references in the science essays make Strangest of All valuable even for those of us who have encountered some of these stories before, for the fiction has lost none of its punch. Tobias Buckell’s “A Jar of Goodwill” looks at new forms of plant metabolism on a world dominated by chlorine and a key question in addressing alien life: Will we know intelligence when we see it? Peter Watts’ “The Island” looks at Dyson spheres in an astrobiologically relevant form that Dyson himself never thought of (well, he probably did; I bet it’s somewhere in his notebooks).

All told, there are eight stories here along with the essays that explore their implications, an easy volume to recommend given the EAI’s willingness to make it available at no cost to readers. See what you think about the Fermi paradox as addressed in D. A. Xiaolin Spires’ “But Still I Smile.” Plenty of material here for discussion of the sort we routinely do on Centauri Dreams!
