Sublake Settlements for Mars

Terraforming a world is a breathtaking task, one often thought about in relation to making Mars into a benign environment for human settlers. But there are less challenging alternatives for providing shelter to sustain a colony. As Robert Zubrin explains in the essay below, ice-covered lakes are an option that can offer needed resources while protecting colonists from radiation. The founder of the Mars Society and author of several books and numerous papers, Zubrin is the originator of the Mars Direct concept, which envisions exploration using current and near-term technologies. We’ve examined many of his ideas on interstellar flight, including magsail braking and the nuclear salt water rocket concept, in these pages. Now president of Pioneer Astronautics, Zubrin’s latest book is The Case for Space: How the Revolution in Spaceflight Opens Up a Future of Limitless Possibility, recently published by Prometheus Books.

by Robert Zubrin

Abstract

This paper examines the possibilities of establishing Martian settlements beneath the surface of ice-covered lakes. It is shown that such settlements offer many advantages, including the ability to rapidly engineer very large volumes of pressurized space, comprehensive radiation protection, highly efficient power generation, temperature regulation, copious resource availability, outdoor recreation, and the creation of a vibrant local biosphere supporting both the nutritional and aesthetic needs of a growing human population.

Introduction

The surface of Mars offers many challenges to human settlement. Atmospheric pressure is only about 1 percent that of Earth, imposing a necessity for pressurized habitats, making spacesuits necessary for outdoor activity, and providing less than optimal shielding against cosmic radiation. For these reasons some have proposed creating large subsurface structures, comparable to city subway systems, to provide pressurized, well-shielded volumes for human habitation [1]. The civil engineering challenges of constructing such systems, however, are quite formidable. Moreover, food for such settlements would have to be grown in greenhouses, limiting potential acreage and imposing either huge power requirements if placed underground or the necessity of building large transparent pressurized structures on the surface. Water is available on the Martian surface as either ice or permafrost. These materials can be mined and the product transported to the base, but the logistics of doing so, while greatly superior to anything possible on the Moon, are considerably less convenient than the direct access to liquid water available to nearly all human settlements on Earth. While daytime temperatures are acceptably close to 0 C, nighttime temperatures drop to -90 C, posing problems for machinery and surface greenhouses. And despite the cold nights, the efficiency of nuclear power is impaired by the necessity of rejecting waste heat into a near-vacuum environment.

All of these difficulties could readily be solved by terraforming the planet [2]. However, that is an enormous project whose vast scale will require an already-existing Martian civilization of considerable size and industrial power to be seriously undertaken. For this reason, some have proposed the idea of “paraterraforming” [3], that is, roofing over a more limited region of the Red Planet, such as the Valles Marineris, and terraforming just that part. But building such a roof would itself be a much larger engineering project than any yet done in human history.

There are, however, locations on Mars that have already been roofed over. These are the planet’s numerous ice-filled craters.

Making Lakes on Mars

Earth’s Arctic and Antarctic regions feature numerous permanently ice-covered or “subglacial” lakes [4]. These lakes have been shown to support active microbial and planktonic ecosystems.

Most sub-Arctic and high-latitude temperate lakes are ice-covered in winter, but many members of their aquatic communities remain highly active, a fact well known to ice fishermen.

Could there be comparable ice-covered lakes on Mars?

At the moment, it appears that there are not. The ESA Mars Express orbiter has used ground-penetrating radar to detect highly saline liquid water deep underground on Mars, and such environments are of great interest for scientific sampling via drilling. But to be of use for settlement, we need ice-covered lakes that are directly accessible from the surface. There are plenty of ice-filled craters on Mars. These are not lakes, however: while composed of nearly pure water ice, they are frozen from top to bottom. But might this shortcoming be correctable?

I believe so. Let us examine the problem by considering an example.

Korolev is an ice-filled impact crater in the Mare Boreum quadrangle of Mars, located at 73° north latitude and 165° east longitude (Fig. 1). It is 81.4 kilometers in diameter and contains about 2,200 cubic kilometers of water ice, similar in volume to Great Bear Lake in northern Canada. Why not use a nuclear reactor to melt ice beneath the surface, creating a huge ice-covered lake?

Fig. 1. Korolev Crater could provide a home for a sublake city on Mars. Photo by ESA/DLR.

Let’s do the math. Melting ice at 0 C requires 334 kJ/kg. We will need to supply this plus another 200 kJ/kg, assuming that the ice’s initial temperature is -100 C, for 534 kJ/kg in all. Ice has a density of 0.92 kg/liter, so melting 1 cubic kilometer of ice would require 4.9 × 10¹⁷ J, or 15.6 GW-years of energy. A 1 GWe nuclear power plant on Earth requires about 3 GWt of thermal power generation. This would also be true in the case of a power plant located adjacent to Korolev, since it would be using the ice water it was creating in the crater as an excellent heat rejection medium. With the aid of 5 such installations, using both their waste heat and the dissipation from their electric power systems, we could melt a cubic kilometer of ice every year.
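The arithmetic is easy to check. Here is a minimal sketch, with Python used purely as a calculator and only the constants quoted above:

```python
# Energy to melt one cubic kilometer of Martian crater ice, using the
# figures in the text (-100 C starting temperature, 334 kJ/kg latent heat).

LATENT_HEAT = 334e3      # J/kg, heat of fusion of ice at 0 C
SENSIBLE_HEAT = 200e3    # J/kg, warming ice from -100 C to 0 C
ICE_DENSITY = 920        # kg/m^3 (0.92 kg/liter)
GW_YEAR = 1e9 * 365.25 * 24 * 3600   # joules in one gigawatt-year

ice_mass = ICE_DENSITY * 1e9         # kg in one cubic kilometer
energy = ice_mass * (LATENT_HEAT + SENSIBLE_HEAT)
print(f"{energy:.2e} J = {energy / GW_YEAR:.1f} GW-years")
# -> 4.91e+17 J = 15.6 GW-years. Five 1-GWe plants dissipating ~15 GW of
#    heat in total therefore melt about one cubic kilometer per year.
```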

Korolev averages 500 m in depth, which is much deeper than we need. So rather than try to melt it all the way through, an optimized strategy might be to focus on coastal regions with an average depth of perhaps 40 meters. In that case, each cubic kilometer of ice melted would open 25 square kilometers of liquid lake for settlement. Alternatively, we could just choose a smaller crater with less depth, and melt the whole thing, except the ice cover at its top.

Housing in a Martian Lake

On Earth, 10 meters of water creates one atmosphere of pressure. Because Martian gravity is only 38 percent as great as Earth’s, 26 meters of water would be required to create the same pressure there. But so much pressure is not necessary. With as little as 10 meters of water above, we would still have 0.38 bar of outside pressure, or 5.6 psi, allowing a 3 psi oxygen/2.6 psi nitrogen atmosphere comparable to that used on the Skylab space station. Reducing the nitrogen content in this way could be advantageous, because nitrogen is only a minor constituent of the Martian atmosphere and is therefore harder to come by on Mars. Limiting the nitrogen fraction of breathing air would also let settlers travel to lower-pressure environments without fear of getting the bends. Ten meters of water above an underwater habitat would also provide shielding against cosmic rays equivalent to that provided by Earth’s atmosphere at sea level.
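The pressure figures are simple hydrostatics. A quick sketch, assuming standard water density and the 38 percent gravity figure above:

```python
# Gauge pressure at the bottom of a water column, Earth vs. Mars.

PSI_PER_PA = 1 / 6894.76
RHO_WATER = 1000.0    # kg/m^3
G_EARTH = 9.81        # m/s^2
G_MARS = 0.38 * G_EARTH

def water_pressure_psi(depth_m, g):
    """Hydrostatic gauge pressure (psi) under depth_m of water."""
    return RHO_WATER * g * depth_m * PSI_PER_PA

print(water_pressure_psi(10, G_EARTH))  # ~14.2 psi, about one atmosphere
print(water_pressure_psi(10, G_MARS))   # ~5.4 psi, close to the text's rounded 5.6 psi
print(water_pressure_psi(26, G_MARS))   # ~14.1 psi, back to Earth-normal pressure
```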

Construction of the habitats could be done using any of the methods employed for underwater habitats on Earth. These include closed pressure vessels, like submarines, and open-bottom systems, like diving bells. The latter offer the advantage of minimizing structural mass, since their interior pressure is nearly equal to that of the surrounding environment, and they provide direct, easy access to the water through their bottom doors, with no need for airlocks. Thus, while closed submarines are probably better for travel, as their occupants do not experience pressure changes with depth, open-bottom habitats offer superior options for settlement. We will therefore focus our interest on the latter.

Consider an open-bottom settlement module consisting of a dome 100 m in diameter, whose peak is 4 meters below the surface and whose base is 16 meters below the surface. The dome thus has four decks, each with 3 meters of headroom. The dome is in tension because the air inside it is all at a pressure of 9 psi, corresponding to the lake water pressure at its base, while the water pressure at its top is only about 2.2 psi, leaving an outward pressure on the dome material near the top of 6.8 psi. The dome has a radius of curvature of 110 m.

The required yield stress of the material composing a pressurized sphere is given by:

σ = xPR/2t (1)

where σ is the yield stress, P is the pressure, R is the radius, t is the dome thickness, and x is the safety factor. Let’s say the dome is made of steel with a yield stress of 100,000 psi and x = 2. With x = 2 the factor of two in the numerator cancels the one in the denominator, and equation (1) says that:

100,000 = (6.8)(110)/t, or t = 0.0075 m = 7.5 mm.

The mass of the steel would be about 600 tons. That’s not too bad for creating a habitat with about 30,000 square meters of living space.
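Equation (1) and the mass estimate are easy to bundle into a few lines. A sketch, assuming a spherical-cap geometry and roughly 7,900 kg/m³ for steel; the ~600-ton figure above presumably allows some margin for framing and attachments:

```python
import math

# Thin-shell sizing from equation (1), sigma = x*P*R/(2*t). Mixing psi and
# meters is fine here, because the pressure units cancel when solving for t.

def shell_thickness(p_psi, radius_m, yield_psi, safety=2.0):
    """Required wall thickness (m) of a pressurized thin shell."""
    return safety * p_psi * radius_m / (2 * yield_psi)

# Radius of curvature of a cap 100 m across (half-width 50 m) and 12 m tall:
r_curv = (50**2 + 12**2) / (2 * 12)                   # ~110 m
t = shell_thickness(6.8, r_curv, 100_000)
print(f"R = {r_curv:.0f} m, t = {t * 1000:.1f} mm")   # R = 110 m, t = 7.5 mm

cap_area = 2 * math.pi * r_curv * 12                  # spherical-cap surface area, m^2
mass_tonnes = cap_area * t * 7900 / 1000
print(f"steel mass ~{mass_tonnes:.0f} tonnes")        # ~490 t before margins
```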

If instead of using steel we made a tent dome from Spectra fabric, which has four times the strength of steel at one-ninth the density, the mass of the dome would only need to be about 17 tons. It would, however, need to be tied down around its circumference. Ballast weights totaling 90,000 tons of rock could be used for this purpose. Alternatively, the tie-down lines could be anchored to stakes driven deep into the frozen ground under the lake.

An attractive alternative to building a dome out of manufactured materials could be to simply melt the dome out of the ice covering the lake itself. For example, let’s say the ice cover is 20 m thick, and we melt a dome into it that is 12 m tall and 100 m in diameter, with a radius of curvature of 110 m. Filling this with an oxygen/nitrogen gas mixture would provide a habitat of equal size to the one discussed above. The pressure under 20 m of ice (density = 0.92) is 0.7 bar, or 10.3 psi. The roof of the dome is under 8 m of ice, whose mass exerts a compressive pressure of 0.28 bar, or 4.1 psi, leaving a pressure difference of 6.2 psi to be held by the strength of the ice. The tensile strength of ice is about 150 psi, so inserting these values into equation (1), we find that the safety factor, x, at the dome’s thinnest point would be:

150 = x(6.2)(110)/[(8)(2)], or x = 3.52

This safety factor is more than adequate. Networks of domes of this size could be melted into the ice cover, linked by tunnels through the thick material at their bases. If domes with a much larger radius of curvature were desired, the ice could be greatly strengthened by freezing a Spectra net into it.
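The safety-factor arithmetic is just equation (1) rearranged, and checks out:

```python
# Safety factor of the melted-in ice dome: x = sigma * 2t / (P * R),
# using the tensile strength and geometry quoted in the text.

TENSILE_ICE = 150   # psi, approximate tensile strength of ice
p_diff = 6.2        # psi, net outward pressure at the roof
r_curv = 110        # m, radius of curvature of the dome
t_ice = 8           # m, ice thickness at the thinnest point

x = TENSILE_ICE * 2 * t_ice / (p_diff * r_curv)
print(f"safety factor x = {x:.2f}")   # x = 3.52
```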

The mass of ice melted to create each such dome is about 80,000 tons, requiring about 1 MWt-year of energy. It would also take about 90 tons of oxygen to fill the dome with gas, which could be generated via water electrolysis. Assuming 80 percent efficient electrolysis units, this would require 1,950 GJ, or 62 kWe-years of electric power. Such large habitation domes could therefore be constructed and filled with breathable gas well in advance of the creation of the lake, using much more modest power sources.
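The electrolysis figure can be verified from the standard enthalpy of water splitting, about 286 kJ per mole of H2O (that constant is my assumption; the text supplies only the 80 percent efficiency and the final numbers):

```python
# Electric energy to generate ~90 tons of O2 by electrolysis. Splitting
# two moles of H2O (286 kJ each) yields one mole of O2 (32 g).

H_SPLIT = 286e3              # J per mol of H2O split
MOLS_O2_PER_KG = 1000 / 32   # moles of O2 per kilogram
EFFICIENCY = 0.80
KWE_YEAR = 1000 * 365.25 * 24 * 3600   # joules in one kWe-year

o2_kg = 90_000
energy = o2_kg * MOLS_O2_PER_KG * 2 * H_SPLIT / EFFICIENCY
print(f"{energy / 1e9:.0f} GJ = {energy / KWE_YEAR:.0f} kWe-years")
# -> ~2,011 GJ = ~64 kWe-years, in line with the text's 1,950 GJ / 62 kWe-years.
```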

Compressive habitation structures can be created under the ice that are much larger still. This is so because ice has 92 percent the density of water: if a 50-meter-deep column of ice beneath the lake’s ice cover were melted, it would yield a column of water 46 meters deep topped by 4 meters of void, which could be filled with air.

So, let’s say we had an ice crater, a section of an ice crater, or even a glacier 5 km in radius and 70 meters or more deep. We melt a section of it starting 20 m under the top of the ice and going down 50 m. As noted, this would create a headroom space 4 m thick above the water. The 20 m of ice above this void would press down with a weight of 0.7 bar, or about 10.3 psi, so we would fill the void with an oxygen/nitrogen gas mixture at a pressure just a hair below that figure. This would negate almost all of the weight, leaving the ice roof in an extremely mild state of compression. (Mild compression is preferred to mild tension, because the compressive strength of ice is about 1,500 psi – ten times the tensile strength.) Under such circumstances the radius of curvature of the overhanging surface could be unlimited. As a result, a pressurized and amply shielded habitable region of 78 square kilometers would be created. Habitats could be placed on rafts or houseboats on this indoor lake, or an ice shelf formed to provide a solid floor for conventional buildings over much of it.

The total amount of water that would need to be melted to create this indoor lake city would be 4 cubic kilometers. This could be done in about 4 years by our proposed 5 GWe power system. Further heating would continue to expand the habitable region laterally over time. If the lake were deep, so that there was ice beneath the water column, it would gradually melt, increasing the headroom over the settlement as well.
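Both the headroom and the schedule follow directly from the ice-to-water density ratio. A sketch with the figures above:

```python
import math

# Headroom gained by melting a column of ice under the roof, plus the
# melt volume and schedule for the 5-km-radius indoor lake city.

DENSITY_RATIO = 0.92    # density of ice relative to liquid water
melt_depth = 50         # m of ice melted beneath the 20 m roof

headroom = melt_depth * (1 - DENSITY_RATIO)
print(f"headroom: {headroom:.0f} m")    # 4 m of gas space atop 46 m of water

area_km2 = math.pi * 5**2               # ~78.5 km^2 of habitable region
melt_km3 = area_km2 * melt_depth / 1000
print(f"melt volume ~{melt_km3:.1f} km^3, ~4 years at 1 km^3/year")
```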

Terraforming the Lake

The living environment of the sublake Mars settlement need not be limited to the interior of the air-filled habitats. By melting the ice, we are creating the potential for a vibrant surrounding aquatic biosphere, which could be readily visited by Mars colonists wearing ordinary wet suits and SCUBA gear.

The lake is being melted using hot water produced by the heat rejection of onshore or floating nuclear reactors. If the heat is rejected near the bottom of the lake, forceful upwelling will occur, powerfully fertilizing the lake water with mineral nutrients.

Assuming that the ice cover is reduced to less than 30 meters, there will be enough natural light during daytime to support phytoplankton growth, as has been observed in Earth’s Arctic Ocean [5]. The lake’s primary biological productivity could be greatly augmented, however, by the addition of artificial light.

The Arctic Ocean exhibits high biological activity as far north as 75° N, where the sea receives an average day/night, year-round solar illumination of about 50 W/m². If we take this as our standard, then each GW of our available electric power could be used to illuminate 20 square kilometers of lake. Combined with the mineral-rich water produced by thermal upwelling, and artificial delivery of CO2 from the Martian atmosphere as required, this illumination could serve to create an extremely productive biosphere in the waters surrounding the settlement.
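The acreage figure is a one-liner:

```python
# Lake area that one gigawatt of artificial lighting can sustain at the
# Arctic benchmark flux of 50 W/m^2 adopted in the text.

FLUX = 50       # W/m^2
POWER = 1e9     # W, one gigawatt
print(f"{POWER / FLUX / 1e6:.0f} km^2 per GW")   # 20 km^2
```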

The first organisms to be released into the lake should be photosynthetic phytoplankton and other algae, including macroscopic forms such as kelp. These would serve to oxygenate the water. Once that is done, animals could be released, starting with zooplankton, with a wide range of aquatic macrofauna, potentially including sponges, corals, worms, mollusks, arthropods, and fish coming next. Penguins and sea otters could follow.

As the lake continues to grow, its cities would multiply, giving birth to a new branch of human civilization, supported by and supporting a lively new biosphere on a new world.

Conclusion

We find that the best places to settle Mars could be under water. By creating lakes beneath the surface of ice-covered craters, we can create miniature worlds, providing acceptable pressure, temperature, radiation protection, voluminous living space, and everything else needed for life and civilization. The sublake cities of Mars could serve as bases for the exploration and development of the Red Planet, providing homes within which new nations can be born and grow in size, technological ability, and industrial capacity, until such time as they can wield sufficient power to go forth and take on the challenge of terraforming Mars itself.

References

1. Frank Crossman, editor, Mars Colonies: Plans for Settling the Red Planet, The Mars Society, Polaris Books, 2019

2. Robert Zubrin with Richard Wagner, The Case for Mars: The Plan to Settle the Red Planet and Why We Must, Simon & Schuster, New York, 1996, 2011.

3. Richard L. S. Taylor, “Paraterraforming: The Worldhouse Concept,” Journal of the British Interplanetary Society, vol. 45, no. 8, August 1992, pp. 341-352.

4. “Subglacial lake,” Wikipedia, https://en.wikipedia.org/wiki/Subglacial_lake#Biology, accessed May 15, 2020.

5. Kevin Arrigo et al., “Massive Phytoplankton Blooms Under Arctic Sea Ice,” Science, vol. 336, p. 1408, June 15, 2012, https://www2.whoi.edu/staff/hsosik/wp-content/uploads/sites/11/2017/03/Arrigo_etal_Science2012.pdf, accessed May 15, 2020.


Modeling Hot Jupiter Clouds

Studying the atmospheres of exoplanets is a process that is fairly well along, especially when it comes to hot Jupiters. Here we have a massive target so close to its star that, when a transit occurs, we can look at the star’s light filtering through the atmosphere of the planet. Even so, clouds are a problem because they prevent accurate readings of atmospheric composition below the upper cloud layers. Aerosols — suspended solid particles or droplets in a gas — are common, range widely in composition, and make studying a planet’s atmosphere harder.

We’d like to learn more about which aerosols are where and in what kind of conditions, for we have a useful database of planets to work with. Over 70 exoplanets currently have transmission spectra available. A wide range of cloud types, many of them exotic indeed, have been proposed by astronomers to explain what they are seeing.

Imagine clouds of sapphire, or rubies, which is essentially what we get with aerosols of aluminum oxides like corundum. Potassium chloride can condense into salt clouds. Sulfides of manganese or zinc can be components, as can organic hydrocarbon compounds. Which of these are most likely to form and affect our observations? And what about silicates?

A new model, produced by an international team of astronomers, bodes well for future work. The model predicts that the most common type of hot Jupiter cloud consists not of the most exotic of these ingredients but of liquid or solid droplets of silicon and oxygen — think melted quartz.

But much depends on the temperature, with the cooler hot Jupiters (below about 950 Kelvin) marked by hydrocarbon hazes. Peter Gao (UC-Berkeley) is first author of a paper describing the model that pulls all these and more possibilities together:

“The kinds of clouds that can exist in these hot atmospheres are things that we don’t really think of as clouds in the solar system. There have been models that predict various compositions, but the point of this study was to assess which of these compositions actually matter and compare the model to the available data that we have… The idea is that the same physical principles guide the formation of all types of clouds. What I have done is to take this model and bring it out to the rest of the galaxy, making it able to simulate silicate clouds and iron clouds and salt clouds.”

Some planets have clear atmospheres, making spectroscopy easier, but all too frequently high clouds block observations of the gases below them. Gao considers such clouds a kind of contamination in the data, making it hard to trace atmospheric elements like water and methane. The new model examines how gases of various atoms or molecules condense into cloud droplets, their patterns of growth or evaporation, and their transport by local winds.

Image: Predicted cloud altitudes and compositions for a range of temperatures common on hot Jupiter planets. The range, in Kelvin, corresponds to about 800-3,500 degrees Fahrenheit, or 427-1,927 degrees Celsius. Credit: UC Berkeley. Image by Peter Gao.

The team worked with computer models of Earth’s clouds and extended them to planets like Jupiter, where we find ammonia and methane clouds, before moving on to hot Jupiter temperatures up to 2,800 K (2,500 degrees Celsius) and the kind of elements that could condense into clouds under these conditions. The scientists simulated the distribution of aerosol particles, studying cloud formation through thermochemical reactions and haze formation through methane photochemistry. This is intricate stuff, modeling condensation from one gas to another, so that we can simulate the emergence of unusual clouds, but it draws on 30 of the exoplanets with recorded transmission spectra as a check on the model’s accuracy.

Using the model, we can move through layers of atmosphere as mediated by temperature, with the hottest atmospheres showing condensation of aluminum oxides and titanium oxides, producing high-level clouds, while lowering the temperature allows such clouds to form deeper in the planet’s atmosphere, leaving them obscured by bands of higher silicate clouds. Lower the temperatures further and the upper atmosphere becomes clear as the silicate clouds form further down. High-level hazes can form at lower temperatures still.

Looking for a clear sky to study the atmosphere without hindrance? Planets in the range of 950 to 1,400 K are the most likely to produce a cloudless sky, but planets hotter than 2,200 K also fit the bill, says Gao. Hannah Wakeford (University of Bristol, UK) is a co-author on the paper:

“The presence of clouds has been measured in a number of exoplanet atmospheres before, but it is when we look collectively at a large sample that we can pick apart the physics and chemistry in the atmospheres of these worlds. The dominant cloud species is as common as sand — it is essentially sand — and it will be really exciting to be able to measure the spectral signatures of the clouds themselves for the first time with the upcoming James Webb Space Telescope (JWST).”

The key finding here is that only one type of cloud, made of silicates, dominates cloud opacity over a wide range of temperatures, and thus has the greatest implications for observation. Silicates dominate above planetary equilibrium temperatures of 950 K and extend out to 2,000 K, while hydrocarbon hazes dominate below 950 K. Many of the most exotic cloud types proposed in the literature simply require too much energy to condense.
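Those thresholds lend themselves to a crude summary. The toy lookup below simply encodes the numbers reported above, as an illustration of the headline trend rather than anything resembling the paper’s actual model:

```python
# Dominant aerosol type vs. planetary equilibrium temperature, per the
# thresholds quoted in the text (a gross simplification of the paper).

def dominant_aerosol(t_eq_k: float) -> str:
    """Rough dominant aerosol for a given equilibrium temperature in K."""
    if t_eq_k < 950:
        return "hydrocarbon haze"
    if t_eq_k <= 2000:
        return "silicate clouds"
    return "relatively clear upper atmosphere"  # per the text, >2,200 K skies tend to be clear

for t in (700, 1200, 1800, 2500):
    print(f"{t} K -> {dominant_aerosol(t)}")
```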

Too bad. I liked the idea of sapphire clouds. But as the paper notes: “The observed trends in warm giant exoplanet cloudiness is a natural consequence of the dominance of only two types of aerosol.” And it continues:

Even though we do not consider the day- and nightside cloud opacity of warm giant exoplanets explicitly in our modelling, our finding that only one type of cloud—silicates—dominates exoplanet cloud opacity over a wide range of temperatures has important implications for exoplanet emission and reflected light observations. For example, the brightness temperature of an atmosphere with an optically thick silicate cloud deck would be fixed to a value slightly below the condensation temperature of silicates where the cloud deck becomes optically thin, resulting in minimal variations in the atmospheric brightness temperature for 950 K < Teq < 2,100 K. This is indeed what is observed for the nightsides of warm giant exoplanets, which all have brightness temperatures of ~1,100 K… Meanwhile, the relatively high albedo of certain warm giant exoplanets such as Kepler-7b could also be explained by the dominance of silicate clouds, which are highly reflective at optical wavelengths.

The paper is Gao et al., “Aerosol composition of hot giant exoplanets dominated by silicates and hydrocarbon hazes,” Nature Astronomy 25 May 2020 (abstract).


A New Class of Astronomical Transients

Some of the fastest outflows in nature are beginning to turn up in the phenomena known as Fast Blue Optical Transients (FBOTs). These are observed as bursts that quickly fade but leave quite an impression with their spectacular outpouring of energy. The transient AT2018cow was found in 2018, for example, in data from the ATLAS-HKO telescope in Hawaii, an explosion 10 to 100 times as bright as a typical supernova that appeared in the constellation Hercules. It was thought to be produced by the collapse of a star into a neutron star or black hole.

Now we have a new FBOT that is brighter at radio wavelengths than AT2018cow, the third of these events to be studied at radio wavelengths. The burst occurred in a small galaxy about 500 million light years from Earth and was first detected in 2016. Let’s call it CSS161010 (short for CRTS-CSS161010 J045834-081803), and note that it completely upstages its predecessors in terms of the speed of its outflow. The event launched gas and particles at more than 55 percent of the speed of light. Such FBOTs, astronomers believe, begin with the explosion of a massive star, with differences from supernovae and GRBs only showing up in the aftermath.

Deanne Coppejans (Northwestern University) led the study:

“This was unexpected. We know of energetic explosions that can eject material at almost the speed of light, specifically gamma-ray bursts, but they only launch a small amount of mass — about 1 millionth the mass of the sun. CSS161010 launched 1 to 10 percent the mass of the sun at more than half the speed of light — evidence that this is a new class of transient.”
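Those numbers imply a staggering kinetic energy. A back-of-envelope sketch using only the mass and speed from the quote (the teams’ actual energy estimates come from detailed modeling of the radio and X-ray data):

```python
import math

# Order-of-magnitude kinetic energy of the CSS161010 outflow:
# 1 to 10 percent of a solar mass moving at roughly 0.55c.

C = 2.998e8        # m/s, speed of light
M_SUN = 1.989e30   # kg, solar mass

def kinetic_energy(mass_kg, beta):
    """Relativistic kinetic energy (J) of a mass moving at v = beta * c."""
    gamma = 1 / math.sqrt(1 - beta**2)
    return (gamma - 1) * mass_kg * C**2

for frac in (0.01, 0.10):
    print(f"{frac:.0%} M_sun at 0.55c: ~{kinetic_energy(frac * M_SUN, 0.55):.1e} J")
# -> ~3.5e44 to ~3.5e45 J, i.e. a few times 10^51 to 10^52 erg.
```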

Image: Keck’s view of where the CSS161010 explosion (red circle) occurred in a dwarf galaxy. Credit: Giacomo Terreran/Northwestern University.

Meanwhile, a second explosion, called ZTF18abvkwla (“The Koala”), has turned up in a galaxy considerably further out at 3.4 billion light years. Caltech’s Anna Ho led the study of this one, with both teams gathering data from the Very Large Array, the Giant Metrewave Radio Telescope in India and the Chandra X-ray Observatory. In both cases, it was clear that the type of explosion, bright at radio wavelengths, differed from both supernovae explosions and gamma-ray bursts. “When I reduced the data,” said Ho, “I thought I had made a mistake.”

FBOTs became recognized as a specific class of object in 2014, but the assumption is that our archives contain other examples of what Coppejans’ co-author Raffaella Margutti calls ‘weird supernovae,’ a concession to the fact that it is hard to gather information on these objects solely in the optical. The location of the CSS161010 explosion is a dwarf galaxy containing roughly 10 million stars in the southern constellation Eridanus.

Bright FBOTs like CSS161010 and AT2018cow have thus far turned up only in dwarf galaxies, which the authors note is reminiscent of some types of supernovae as well as gamma-ray bursts (GRBs). A transient like this flares up so quickly that it may prove impossible to pin down its origin, but black holes and neutron stars are prominent in the astronomers’ thinking:

“The Cow and CSS161010 were very different in how fast they were able to speed up these outflows,” Margutti said. “But they do share one thing — this presence of a black hole or neutron star inside. That’s the key ingredient.”

Even so, the differences among the three FBOTs thus far studied at multiple wavelengths are notable. In the excerpt below, the authors of the Coppejans paper use the term ‘engine-driven’ to refer to a central engine: the rotating accretion disk around a neutron star or black hole left behind by a supernova core collapse, which can propel narrow jets of material outward in opposite directions. The authors believe that FBOTs produce this kind of engine, but in this case one surrounded by material shed by the star before it exploded. That surrounding shell, struck by the blast wave, would be the source of the FBOT’s visible light burst and radio emission.

From the paper:

The three known FBOTs that are detected at radio wavelengths are among the most luminous and fastest-rising among FBOTs in the optical regime… Intriguingly, all the multi-wavelength FBOTs also have evidence for a compact object powering their emission… We consequently conclude… that at least some luminous FBOTs must be engine-driven and cannot be accounted for by existing FBOT models that do not invoke compact objects to power their emission across the electromagnetic spectrum. Furthermore, even within this sample of three luminous FBOTs with multiwavelength observations, we see a wide diversity of properties of their fastest ejecta. While CSS161010 and ZTF18abvkwla harbored mildly relativistic outflows, AT 2018cow is instead non-relativistic.

Which is another way of saying that we have a long way to go in understanding FBOTs. We see characteristics of both supernovae and GRBs, but also distinctive differences. Further observations at radio and X-ray wavelengths will be critical for learning more about their physics.

Image: Artist’s conception of the new class of cosmic explosions called Fast Blue Optical Transients. Credit: Bill Saxton, NRAO/AUI/NSF.

The first paper is Coppejans, Margutti et al., “A Mildly Relativistic Outflow from the Energetic, Fast-rising Blue Optical Transient CSS161010 in a Dwarf Galaxy,” Astrophysical Journal Letters Vol. 895, No. 1 (26 May 2020). Abstract.

On the FBOT ZTF18abvkwla, see Ho et al., “The Koala: A Fast Blue Optical Transient with Luminous Radio Emission from a Starburst Dwarf Galaxy at z = 0.27,” Astrophysical Journal Vol. 895, No. 1 (26 May 2020). Abstract.


Star Formation and Galactic Mergers

Our galaxy is 10,000 times more massive than Sagittarius, a dwarf galaxy discovered in the 1990s. But we’re learning that Sagittarius may have had a profound effect on the far larger galaxy it orbits, colliding with it on at least three occasions in the past six billion years. These interactions would have triggered periods of star formation that we can, for the first time, begin to map with data from the Gaia mission, a challenge tackled in a new study in Nature Astronomy.

The paper in question, produced by a team led by Tomás Ruiz-Lara (Instituto de Astrofísica de Canarias, Tenerife), argues that the influence of Sagittarius was substantial. The data show three periods of increased star formation, with peaks at 5.7 billion years ago, 1.9 billion years ago and 1 billion years ago, corresponding to the passage of Sagittarius through the Milky Way disk.

The work is built around Gaia Data Release 2, examining photometry and parallax information combined with modeling of observed color-magnitude diagrams to build a star formation history within a bubble around the Sun with a radius of 2 kiloparsecs (about 6,500 light years). The star formation ‘enhancements,’ as the paper calls them, are well defined, though of decreasing strength, with a possible fourth burst spanning the last 70 million years.

Ruiz-Lara sees the disruption caused by Sagittarius as substantial, a follow-on to an earlier merger:

“At the beginning you have a galaxy, the Milky Way, which is relatively quiet. After an initial violent epoch of star formation, partly triggered by an earlier merger as we described in a previous study, the Milky Way had reached a balanced state in which stars were forming steadily. Suddenly, you have Sagittarius fall in and disrupt the equilibrium, causing all the previously still gas and dust inside the larger galaxy to slosh around like ripples on the water.”

Image: The Sagittarius dwarf galaxy has been orbiting the Milky Way for billions of years. As its orbit around the 10,000 times more massive Milky Way gradually tightened, it started colliding with our galaxy’s disc. The three known collisions between Sagittarius and the Milky Way have, according to a new study, triggered major star formation episodes, one of which may have given rise to the Solar System. Credit: ESA.

The idea is that higher concentrations of gas and dust are produced in some areas as others empty, the newly dense material triggering star formation. According to the paper, the 2 kiloparsec local volume is:

…characterized by an episodic SFH [star formation history], with clear enhancements of star formation ~ 5.7, 1.9 and 1.0 Gyr ago. All evidence seems to suggest that recurrent interactions between the Milky Way and Sgr dwarf galaxy are behind such enhancements. These findings imply that low mass satellites not only affect the Milky Way disk dynamics, but also are able to trigger notable events of star formation throughout its disk. The precise dating of such star forming episodes provided in this work sets useful boundary conditions to properly model the orbit of Sgr and its interaction with the Milky Way. In addition, this work provides important constraints on the modelling of the interstellar medium and star formation within hydrodynamical simulations, manifesting the need of understanding physical processes at subresolution scales and of further analysis to unveil the physical mechanisms behind global and repeated star formation events induced by satellite interaction.

Could the passage of Sagittarius through the Milky Way be behind the Sun’s formation? That seems a stretch given the length of time between the first disruption and the Sun’s formation some 4.6 billion years ago, but co-author Carme Gallart (IAC) doesn’t rule it out:

“The Sun formed at the time when stars were forming in the Milky Way because of the first passage of Sagittarius. We don’t know if the particular cloud of gas and dust that turned into the Sun collapsed because of the effects of Sagittarius or not. But it is a possible scenario because the age of the Sun is consistent with a star formed as a result of the Sagittarius effect.”

What I learned here is that understanding the physical processes behind star formation and incorporating that understanding into workable models is a problematic issue for astronomers today, because ongoing work is challenging earlier views of what happens when galaxies merge. The paper points out that while we have a number of colliding galaxies to examine, there is little theoretical work on the impact of a single satellite galaxy on a spiral galaxy.

And a key point: “…although we can easily link the reported enhancements with possible pericentric passages of Sgr, we cannot pinpoint what exact physical mechanisms are triggering such events.” Plenty of opportunity ahead for researchers looking into the Milky Way’s history.

The paper is Ruiz-Lara et al., “The recurrent impact of the Sagittarius dwarf on the star formation history of the Milky Way,” published in Nature Astronomy 25 May 2020 (abstract).


On SETI, International Law, and Realpolitik

When Ken Wisian and John Traphagan (University of Texas at Austin) published “The Search for Extraterrestrial Intelligence: A Realpolitik Consideration” (Space Policy, May 2020), they tackled a problem I hadn’t considered. We’ve often discussed Messaging to Extraterrestrial Intelligence (METI) in these pages, pondering the pros and cons of broadcasting to the stars, but does SETI itself pose issues we are not considering? Moreover, could addressing these issues possibly point the way toward international protocols to address METI concerns?

Ken was kind enough to write a post summarizing the paper’s content, which appears below. A Major General in the USAF (now retired), Dr. Wisian is currently Associate Director of the Bureau of Economic Geology, Jackson School of Geosciences at UT. He is also affiliated with the Center for Space Research and the Center for Planetary Systems Habitability at the university. A geophysicist whose main research is in geothermal energy systems, modeling, and instrumentation & data analysis, he is developing a conference on First Contact Protocols to take place at UT-Austin this fall, a follow-on to his recent session at TVIW 2019 in Wichita.

by Ken Wisian

The debate over the wisdom of active Messaging to ExtraTerrestrial Intelligence (METI) has been vigorously engaged for some time. Its progenitor, the more accepted passive Search for ExtraTerrestrial Intelligence (SETI), has largely been assumed to carry little or no risk. The reasons for this assumption appear to be:

1. It does not alert ETI to our existence, and therefore we should not face a threat of invasion or destruction from aliens (if such an attack is even practical over interstellar distances).

2. The minor Earthbound threat from extremists (of various possible persuasions) who might not like the possibility of ETI’s existence conflicting with their “world view” would be no more than an annoyance.

Implicit in the above is the underlying assumption that the only realistic threat that could arise from METI or SETI is that from a hostile ETI. In other words, the threat is external to humanity. What this too-simple reasoning overlooks is human history, particularly international affairs, conflict, and war. [1]

SETI as used here is the passive search for electromagnetic signals from ETI, currently thought most likely to take the form of radio or laser signals deliberately beamed somewhere. The search for non-signal evidence (e.g. inadvertent laser propulsion leakage) is not considered here, though it could tie into the discussion in a distant, indirect manner. Note: an ETI artifact (e.g. a spaceship) could have similar import as the SETI detection discussed here.

So what harm could SETI do? Looking at current and historical international affairs, particularly great-power competition, the answer is readily apparent – competition for dominance. In international affairs, nations compete, sometimes violently, for position in the world. This can be for economic or military advantage, more land or control over the seas, or mere survival. Witness the South China Sea today, the theft of nuclear weapons secrets in the 1940s and 1950s, or the Byzantine Empire’s industrial espionage to steal the secret of silk-making from China.

Now contemplate the potential technology advances that could come with a SETI detection. This could range from downloading the “Encyclopedia Galactica” to establishing a two-way dialogue that includes sharing technology. With the potential for revolutionary leaps in science and technology, whether directly destructive or not, to say the great powers and superpowers would be “interested” is a monumental understatement.

Now think about the potential advantage (read: domination-enabling) that could accrue to one country if it were the only beneficiary of those technology advances. “How?” you ask. “Anyone can point a radio telescope at the sky.” Not so fast. Unless the signal comes from within our own galactic back yard, most likely within the Solar System, it will take a relatively large, complicated industrial complex (physical plant), run by very specialized personnel, to send and receive interstellar communications. This is the key fact that could lead to SETI/METI being the next “Great Game” [2] of international affairs.

Large, specialized complexes and their associated personnel are limited in number and therefore subject to physical control. For the sake of argument, let’s say there are a dozen such facilities in the world. This is far fewer than the number of critical infrastructure sites the US and coalition forces decided had to be taken out in Iraq during the Gulf Wars in order to reduce its military capability – a very manageable target set. Now you begin to see the problem: superpowers, seeing a potentially world-dominating advantage in monopolizing the ETI communication channel, might also see it as feasible to preserve their own access to ETI while denying the same to all other countries.

While “Great Games” like this can sometimes be kept in the purely political realm, that is relatively rare. Competition of this sort often includes violent espionage or proxy wars and occasionally can escalate to direct super-power competition. Thus, an actual SETI detection could lead rapidly to the first true information war – a superpower war (with all the existential risk that carries) fought purely for control of knowledge.

Monopolizing communication with ETI could be the trigger for the first information-driven world war.

Realization of the risk that even passive SETI presents should drive further actions:

1. The development of realistic and binding international treaties on the handling of first contact protocols – admittedly a long shot. The existing post-detection protocol is a very small and completely non-binding first step in this direction.

2. Formation of deliberately international SETI facilities with uninterruptible data sharing to partner countries (and/or the UN). These would also have interleaved internal chains of command from member countries. While this would be somewhat inefficient, the offset to risk is well worth the effort. A phase 2 to this would be a similar arrangement for METI. This would implicitly force the adoption of international standards and provide a process for METI.

3. Further (renewed?) discussion and research into SETI risk. This should bring in many disciplines that are often not involved deeply in the SETI/METI fields, from government policy to history to psychology and many others. In staring so hard at the very obvious risk of METI, we missed the risk from SETI alone. We need to turn around and explore that road before proceeding further down the highway to METI.

Notes

[1] What I am getting at here is that unfortunately, this is a stereotypical “ivory tower” point of view, too idealistic and disconnected from messy, illogical human affairs. I say this reluctantly as a “card-carrying” (i.e. Ph.D.) member of the academic world.

[2] I am definitely abusing the term “Great Game” in multiple ways. The term refers to the 19th-century competition between the British and Russian empires for control of Central and South Asia. It was a deadly serious, and frequently deadly, game in actuality, but the term captures well the feeling of being in a fierce competition.

PG: Let me insert here this excerpt from the paper highlighting the question of international law and the issues it raises:

The potentially enormous value to nation states of monopolizing communication with ETI, for the purpose of technological dominance in human affairs, is a significant factor in understanding possible scenarios after a confirmed contact event and bears further thinking by scholars and policy specialists. History shows that in circumstances that states perceive as vital they will likely act in their perceived best interest in accordance with principles of realpolitik thinking. In these circumstances, international law is frequently not a strong constraint on the behavior of governments and a protocol developed by scientists on how to handle first contact is unlikely to be of any concern at all. This risk needs to be acknowledged and understood by the larger international community to include scientists active in SETI in addition to political leaders and international relations scholars.

The paper is Wisian and Traphagan, “The Search for Extraterrestrial Intelligence: A Realpolitik Consideration,” Space Policy, May 2020 (abstract).
