Remote Observation: What Could ET See?

As we puzzle out the best observing strategies to pick up a bio- or technosignature, we’re also asking in what ways our own world could be observed by another civilization. If such civilizations exist, they would have a number of tools at their disposal by which to infer our existence and probe what we do. Extrapolation is dicey, but we naturally begin with what we understand today, as Brian McConnell does in this, the third of a three-part series on SETI issues. A communications systems engineer, Brian has worked with Alex Tolley to describe a low-cost, high-efficiency spacecraft in their book A Design for a Reusable Water-based Spacecraft Known as the Spacecoach (Springer, 2015). His latest book is The Alien Communication Handbook — So We Received A Signal, Now What?, recently published by Springer Nature. Is our existence so obvious to the properly advanced observer? That doubtless depends on the state of their technology, about which we know nothing, but if the galaxy includes billion-year-old cultures, it’s hard to see how we might be missed.

by Brian McConnell

In SETI discussions, it is often assumed that an ET civilization would be unaware of our existence until they receive a signal from us; I Love Lucy is an often-cited example of the early broadcasts they might stumble across. But just as we are developing the capability to directly image exoplanets, a more astronomically advanced civilization may already be aware of our existence, and may have been for a long time. Let’s consider several methods by which an ET could observe Earth:

  • Spectroscopic analysis of Earth’s atmosphere
  • Deconvolution of Earth’s light curve
  • Solar gravitational lens telescopes
  • Solar system scale interferometers
  • High speed flyby probes (e.g. Starshot)
  • Slow traveling probes that loiter in near Earth space (Lurkers, Bracewell probes)

Spectroscopic Analysis

We are already capable of conducting spectroscopic analysis of the light passing through exoplanet atmospheres, and as a result are able to learn about their general characteristics. This capability will soon be extended to Earth-sized planets. An ET astronomer who had been studying Earth’s atmosphere over the past several centuries would have been able to see the rapid accumulation of carbon dioxide and other fossil fuel waste gases, a signal plainly evident from the mid 1800s onward. Would this be a definitive sign of an emergent civilization? Probably not, but it would be among the possible explanations, and perhaps a common pattern as an industrial civilization develops. Other gases, such as fluorocarbons (CFCs and HFCs), have no known natural origin and would more clearly indicate recent industrial activity.

There is also no reason to stop at optical/IR. Similar observations can be conducted in the microwave band, both to look for artificial signals such as radars and to study the magnetic environment of exoplanets, much as we are already using the VLA to study exoplanet magnetic fields. It’s worth noting that most of the signals we transmit are not focused at other star systems and would appear very weak to a distant observer, though they might notice a general brightening in the microwave spectrum, much as artificial illumination might be detectable. This would be a sure sign of intelligence, but we have not been “radio bright” for very long, so it would only be visible to nearby systems.

Deconvolution

Even if we can only obtain a single pixel image of an exoplanet, we can use a technique called deconvolution to develop a low resolution image of it by measuring how its brightness and color vary as the planet rotates. This is not unlike moving a light meter across a surface to build a map of light levels that can be translated into an image. It won’t be possible to build a high resolution image this way, but it will be possible to see large-scale features such as oceans, continents and ice caps. While human built structures would not be directly visible, it would be clear that Earth has oceans and vegetation. Images of Pluto taken before the arrival of the New Horizons probe offer an example of what can be done with a limited amount of information.
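As a toy illustration of why a single-pixel light curve carries map information, the sketch below forward-models a rotating planet with one bright “continent” on a darker surface (all values invented for the example) and shows that the disk-integrated flux peaks when the continent faces the observer:

```python
import numpy as np

# Toy single-pixel light curve: a rotating planet with one bright "continent"
# on a darker surface. All numbers are illustrative.
N_SLICES = 36                      # longitudinal slices on the planet
albedo = np.full(N_SLICES, 0.1)    # dark baseline surface
albedo[8:14] = 0.6                 # bright band near longitudes 80-130 degrees

longitudes = np.linspace(0.0, 2 * np.pi, N_SLICES, endpoint=False)

def disk_flux(phase):
    """Disk-integrated flux with the sub-observer point at `phase` (radians).

    Each slice is weighted by the cosine of its distance from the sub-observer
    longitude; slices on the far hemisphere contribute nothing.
    """
    visibility = np.cos(longitudes - phase)
    visibility[visibility < 0.0] = 0.0
    return float(np.sum(albedo * visibility))

phases = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
curve = np.array([disk_flux(p) for p in phases])

# The curve peaks when the bright band rotates into view, so the rotation
# phase of the peak pins down the feature's longitude.
peak_deg = np.degrees(phases[np.argmax(curve)])
print(f"light curve peaks at rotation phase ~{peak_deg:.0f} degrees")
```

Inverting many such curves, taken at different wavelengths and viewing geometries, is what lets a one-pixel observation be turned into a crude map.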

Comparison of images of Pluto taken by the New Horizons probe (left) and the Hubble Space Telescope via light curve reconstruction (right). Image credit: NASA / Planetary Society.

Svetlana Berdyugina and Jeff Kuhn gave a presentation on this topic at the 2018 NASA Technosignatures workshop in which they simulated what the Earth would look like through this deconvolution process. In the simulated image, continents, oceans and ice caps are clearly visible, and because the Earth’s light curve can be split out by wavelength, it would be possible to see evidence of vegetation.

Solar Gravitational Lens Telescopes

A telescope placed along a star’s gravitational lens focal line will be able to take multi pixel images of exoplanets at considerable distances. Slava Turyshev et al show in their NASA NIAC paper that it will be possible to use an SGL telescope to image exoplanets at 1 kilometer per pixel resolution out to distances of 100 light years. An SGL telescope pointed at Earth might be able to see evidence of large scale agriculture, urban centers, night side illumination, reservoirs, and other signs of civilization. Pre-industrial activity and urban settlements might also be visible to this type of instrument, which raises the possibility that an ET civilization with this capability would have been able to see evidence of human civilization centuries ago, perhaps longer.

A simulated image of an exoplanet as seen from an SGL telescope. Image credit: NASA/JPL

A spacefaring civilization that happens to have access to a nearby black hole would have an even better lens to use (the Sun’s gravitational lens is slightly distorted because of the Sun’s rotation and oblate shape).
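A quick back-of-the-envelope check of the geometry (standard solar values assumed): the focal line of the Sun’s gravitational lens begins where light grazing the solar limb converges, at F = R²c²/(4GM):

```python
# Minimum focal distance of the Sun's gravitational lens, F = R^2 c^2 / (4 G M),
# for light that just grazes the solar limb. Standard values assumed.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
AU = 1.496e11        # astronomical unit, m

focal_au = (R_SUN ** 2) * (c ** 2) / (4.0 * G * M_SUN) / AU
print(f"SGL focal line begins at ~{focal_au:.0f} AU")
```

This is the familiar ~550 AU figure; the usable focal line extends outward from there.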

Solar System Scale Interferometers

The spatial resolution of a telescope is a function of its aperture size and the wavelength of the light being observed. Using interferometry, widely separated telescopes can combine their observations, increasing the effective aperture to the distance between the telescopes. The Event Horizon Telescope used interferometry to create a virtual radio telescope whose aperture was the size of Earth. With it, we were able to directly image the shadow and surrounding emission of galaxy M87’s central black hole, some 53 million light years away.

Synthetic microwave band image of M87’s central black hole’s shadow and nearby environment. Image credit: Event Horizon Telescope

Now imagine a fleet of optical interferometers in orbit around a star. They would have an effective aperture measuring tens to hundreds of millions of kilometers, and would be able to see small surface details on distant exoplanets. This is beyond our capability today, but the underlying physics says such instruments can be built; it is an expensive and difficult engineering problem, and one a more advanced civilization may already have solved. Indeed, we began to venture down this path with the since-canceled SIM (Space Interferometry Mission) and LISA (Laser Interferometer Space Antenna) telescopes.

A solar system scale constellation of optical interferometers would be able to resolve surface details of distant objects at a resolution of 1-10 meters per pixel, comparable to satellite imagery of the Earth, meaning that even early agriculture and settlements would be visible to them.
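The 1-10 meter figure follows from the diffraction limit, θ ≈ 1.22 λ/B. The sketch below uses illustrative values (visible light, a 100-million-km baseline, a target 100 light years away), not numbers from any specific design:

```python
# Diffraction-limited resolution of a hypothetical optical interferometer with
# a 100-million-km baseline, projected onto a target 100 light years away.
wavelength_m = 550e-9            # visible light
baseline_m = 1.0e11              # 100 million km effective aperture
LY_M = 9.461e15                  # meters per light year

theta_rad = 1.22 * wavelength_m / baseline_m     # angular resolution
ground_res_m = theta_rad * 100 * LY_M            # linear resolution at target
print(f"~{ground_res_m:.1f} m per resolution element at 100 light years")
```

A longer baseline or shorter wavelength tightens this further, which is why the quoted range spans 1-10 meters.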

Fast Flyby Probes

Fast lightsail probes, similar to the Breakthrough Starshot probes that we hope to fly in a few decades, will be able to take high resolution images of exoplanets as the probes fly past target planets. Images taken of Pluto by the New Horizons probe probably give an idea of what to expect in terms of resolution. It was able to return images at a resolution of less than 100 meters per pixel, smaller than a city block.

The primary challenges in obtaining high resolution images from probes like these are the speed at which the probe flies past its target (0.2c in the case of the proposed Starshot probe) and transmitting observations back to the home system. Both are engineering problems. For example, the challenge of capturing images can be solved by taking as many images as possible during the flyby and then using onboard post-processing to create a synthesized image. Communication is likewise an engineering problem that can be solved with better onboard power sources and/or large receiving facilities at the home system. If the probe itself is autonomous and somewhat intelligent, it can also decide which parts of the collected imagery are most interesting and prioritize their transmission.
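To get a feel for how compressed the encounter is, here is the arithmetic for the time a 0.2c probe spends near its target; the 7.5-million-km “close range” figure is an arbitrary assumption for illustration:

```python
# Time a 0.2c flyby probe spends within "close range" of its target.
# The 7.5-million-km range is an arbitrary choice for illustration.
c_km_s = 2.998e5                     # speed of light, km/s
v = 0.2 * c_km_s                     # ~60,000 km/s
path_km = 2 * 7.5e6                  # within 7.5 million km on approach and departure

window_s = path_km / v
print(f"time in close range: {window_s:.0f} s (~{window_s / 60:.1f} minutes)")
```

A few minutes of prime observing time is why burst imaging and onboard processing matter so much for this mission profile.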

The Breakthrough Starshot program envisions launching a large number of cheap, lightweight lightsails on a regular cadence, so while an individual probe might only be able to capture a limited set of observations, in aggregate they may be able to return extensive observations and imagery over an extended period of time.

Slow Loitering Probes (Lurkers and Bracewell Probes)

An ET civilization that has worked out nuclear propulsion would be able to send slower traveling probes to loiter in near Earth space. These probes could be long lived and designed for a variety of purposes. Being in close proximity to Earth, they would be able to take high resolution images over an extended period of time. Consider that the Voyager probes, among the first deep space probes we built, are still operational today. ET probes could be considerably longer lived and capable of autonomous operation. If they are operating in our vicinity, they would have been able to see early signs of human activity going back to antiquity. One important limitation: only nearby civilizations could get probes to our vicinity within a few hundred years of launch.

The implication is not just that an ETI could see us today; they could have been studying the development of human civilization from afar over a period spanning centuries or millennia. Beyond that, Earth has had life for 3.5 billion years, and life on land for several hundred million years. So if other civilizations are surveying habitable worlds on an ongoing basis, Earth may have been noticed and flagged as a site of interest long before we appeared on the scene.

One of the criticisms of SETI is that the odds of two civilizations going “online” within an overlapping time frame may be vanishingly small, which implies that searching for signals from other civilizations may be a lost cause. But what if early human engineering projects, such as the Pyramids of Giza, had been visible to them long ago? Then the sphere of detectability expands by orders of magnitude, and more importantly, these signals we have been broadcasting unintentionally have been persistent and visible for centuries or millennia.

This has ramifications for active SETI (METI) as well. Arguments against transmitting our own artificial signals, on the basis that we might be risking hostile action by neighbors, may be moot if most advanced civilizations have some of the capabilities mentioned in this article. At the very least, they would know Earth is an inhabited world and a site for closer study, and may well have been able to see early signs of human civilization long ago. So perhaps it is time to revisit the METI debate, but this time with a focus on understanding what unintentional signals or technosignatures we have been sending and who could see them.


Wind Rider: A High Performance Magsail

Can you imagine the science we could do if we had the capability of sending a probe to Jupiter with travel time of less than a month? How about Neptune in 18 weeks? Alex Tolley has been running the numbers on a concept called Wind Rider, which derives from the plasma magnet sail he has analyzed in these pages before (see, for example, The Plasma Magnet Drive: A Simple, Cheap Drive for the Solar System and Beyond). The numbers are dramatic, but only testing in space will tell us whether they are achievable, and whether the highly variable solar wind can be stably harnessed to drive the craft. A long-time contributor to Centauri Dreams, Alex is co-author (with Brian McConnell) of A Design for a Reusable Water-Based Spacecraft Known as the Spacecoach (Springer, 2015), focusing on a new technology for Solar System expansion.

by Alex Tolley

In 2017 I outlined a proposed magnetic sail propulsion system called the Plasma Magnet that was presented by Jeff Greason at an interstellar conference [6]. It caught my attention because of its simplicity and potential high performance compared to other propulsion approaches. For example, the Breakthrough Starshot beamed sail required hugely powerful and expensive phased-array lasers to propel a sail into interstellar space. By contrast, the Plasma Magnet [PM] required relatively little energy and yet was capable of propelling a much larger mass at a velocity exceeding any current propulsion system, including advanced solar sails.

The Plasma Magnet was proposed by Slough [5] and uses an arrangement of coils to induce circulating currents in the solar wind ions, creating a very large magnetosphere that is propelled by the solar wind. Unlike earlier magnetic sail proposals, which required an electric coil kilometers in diameter to create the magnetic field, using the solar wind ions themselves to generate the field meant that the structure was low mass and that the size of the resulting magnetic field increased as the surrounding particle density declined. This allowed for constant acceleration as the PM was propelled away from the sun, very different from solar sails and even magsails with fixed collecting areas.

The PM concept has been developed further with a much sexier name: the Wind Rider, and missions to use this updated magsail vehicle are being defined.

Wind Rider was presented at the 2021 Division of Planetary Sciences (DPS) meeting by the team led by Brent Freeze, showing their concept of the design for a Jupiter mission they called JOVE. The December meeting of the American Geophysical Union was the venue for a different Wind Rider concept mission to the SGL, called Pathfinder.

The main upgrade from the earlier PM to the Wind Rider is the substitution of superconducting coils. These allow the craft to maintain the magnetic field without constant power to sustain the electric current, reducing the size of the required power source. Because the superconducting coils would quickly heat up in the inner system and lose their superconductivity, a gold foil reflective sun shield is deployed, facing the sun to keep the coils in shadow (see Figure 1). The shield is also expected to do double duty as a radio antenna, reducing the net parasitic mass on the vehicle.

The performance of the Wind Rider is very impressive. Calculations show that it will accelerate very rapidly and reach the velocity of the solar wind, about 400 km/s. This has implications for the flight trajectory of the vehicle and the mission time.

The first mission proposal is a flyby of Jupiter – Jupiter Observing Velocity Experiment (JOVE) – much like the New Horizons mission did at Pluto.

Figure 1. The Wind Rider on a flyby of Jupiter. The solar panels are hidden behind the sun shield facing the sun. The 16U CubeSat chassis is at the intersection of the 2 coils and sun shield.

The JOVE mission proposal is for an instrumented flyby of Jupiter [2]. The chassis is a 16U CubeSat. The scientific payload primarily measures the magnetic field and ion density around Jupiter. The craft is powered by 4 solar panels that also double as struts supporting the sun shield; they generate about 1300 W at 1 AU, falling to about 50 W at Jupiter.

Figure 2. Trajectory of the Wind Rider from Earth to Jupiter

The flight trajectory is effectively a beeline directly to Jupiter, starting the flight almost at opposition. No gravity assists from Earth or Venus are required, nor a long arcing trajectory to intercept Jupiter. Figure 2 shows the trajectory, which is almost a straight-line course with the average velocity close to that of the solar wind.

Although the mission is planned as a flyby, a future mission could allow for orbital insertion if the craft approaches Jupiter’s rotating magnetosphere to maximize the impinging field velocity. Although not mentioned by the authors, it should be noted that Slough has also proposed using a PM as an aerobraking shield that decelerates the craft as it creates a plasma in the upper atmosphere of planets.

How does the performance of the Wind Rider compare to other comparable missions?

The Juno space probe to Jupiter had a maximum velocity of about 73 km/s as Jupiter’s gravity accelerated the craft towards the planet. The required gravity assists and long flight path, about 63 AU or over 9 billion km, meant that its average velocity was about 60 km/s. This is not the fairest comparison, as the Juno probe had to attain orbital insertion at Jupiter.

A fairer comparison is the fastest probe we have flown, the New Horizons mission to Pluto, which reached 45 km/s as it left Earth but slowed to 14 km/s by the time it flew past Pluto. New Horizons took a year to reach Jupiter for a gravity assist on its 9 year mission to Pluto, for an average velocity of about 19 km/s between Earth and Jupiter.

Wind Rider can reach Jupiter in less than a month. Figure 2 shows the almost straight-line trajectory to Jupiter. Launched just before opposition, Wind Rider reaches Jupiter in just over 3 weeks. Because opposition happens annually, a new mission could be launched every year.

Because the Wind Rider quickly reaches a terminal velocity matching the solar wind, it can reach the outer planets in comparably short times using the same kind of trajectory and annual launch windows.

The Wind Rider can fly by Saturn in just 6 weeks, and Neptune in 18 weeks. Compare that to the Voyager 2 probe launched in 1977 that took 4 years and 12 years to fly by the same planets respectively. Pluto could be reached by Wind Rider in just 6 months.
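These trip times are easy to sanity-check. The sketch below computes cruise times at a steady 400 km/s over approximate opposition distances, neglecting the initial acceleration phase (which is why the figures come out slightly shorter than the quoted mission times):

```python
# Cruise times at the solar wind speed, ignoring the brief acceleration phase.
# Distances are approximate Earth-to-planet values near opposition.
AU_KM = 1.496e8      # kilometers per astronomical unit
V_WIND = 400.0       # nominal solar wind speed, km/s

targets_au = {"Jupiter": 4.2, "Saturn": 8.6, "Neptune": 29.1, "Pluto": 33.0}
times_days = {name: d * AU_KM / V_WIND / 86400.0 for name, d in targets_au.items()}
for name, t in times_days.items():
    print(f"{name}: {t:.0f} days (~{t / 7:.1f} weeks)")
```

Neptune comes out at about 18 weeks, matching the quoted figure almost exactly.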

Because of its high terminal velocity that does not reduce during its mission, the Wind Rider is also ideally suited for precursor interstellar missions.

The second proposed mission, Pathfinder [1], would ultimately reach the solar gravity lens focal line around 550 AU from the sun. Flight time is less than 7 years, making this a viable project for a single science and engineering team, rather than the multi-generation effort it would be with existing rocket propulsion technology. As the flight trajectory is a straight line, the craft is well suited to follow the focal line while imaging a target star or exoplanet, using the sun as a gravitational lens to increase resolving power.
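The quoted flight time is consistent with simple arithmetic at the solar wind speed:

```python
# Time to the 550 AU solar gravitational lens focal region at a steady
# 400 km/s (acceleration phase neglected).
AU_KM = 1.496e8                 # kilometers per astronomical unit
YEAR_S = 365.25 * 86400         # seconds per year

years = 550 * AU_KM / 400.0 / YEAR_S
print(f"~{years:.1f} years to 550 AU")
```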

As the Wind Rider matches the solar wind velocity, it may even be able to ride gusts of higher solar wind speed, perhaps reaching closer to 550 km/s.

While solar sails have been considered the more likely means to reach high velocities, especially when making sun-diver maneuvers, even advanced sails with proposed areal densities well below anything available today would reach solar system escape velocities in the range of 80-120 km/s [3]. If the Wind Rider can indeed reach the velocity of the solar wind, it would prove a far faster vehicle than any solar sail being planned, and would not need a boost from large laser arrays, nor risky sun-diver maneuvers.

I would inject some caution at this point regarding performance, which so far rests entirely on theoretical work and a small scale laboratory experiment. What is needed is a prototype launched into cis-lunar space to test actual hardware and confirm the capability of the technology to operate as theorized.

It should also be noted that despite its theoretical high performance, there is a potential issue with propelling a probe with a magnetic sail. Compared to a solar sail or a vehicle with reaction thrusters, the Wind Rider as described so far has no crosswind capability. It simply runs before the solar wind like a dandelion seed. This means it would have to be aimed very accurately at its target, and it would be subject to the vagaries of a solar wind that is far less stable than the sun’s photon output. Like the dandelion, if the Wind Rider were inexpensive enough, many could be launched in the expectation that at least one would successfully reach its target.

However, there is a possibility that some crosswind capability is possible. This is based on modelling by Nishida [4]. This paper was recommended by Dr. Freeze [7].

The study modeled the effect of the angle of attack of the magnetic field of a coil against the solar wind. The coil in this case stands in for the circulating current of solar wind ions induced by the primary Wind Rider/PM coils.

Theoretically, the angle of attack affects the total force the streaming solar wind exerts on the magnetic field.

Figure 3 shows the pressure on the field as the coil is rotated from 0 through 45 to 90 degrees relative to the solar wind.

The force experienced is maximal at 90 degrees. This is shown visually in figure 3 and graphically in figure 4.

Figure 4. Force on the coil as affected by angle of attack. An angle of attack near 90 degrees increases the force by about 50%.

The angle of attack also induces a change in the thrust vector experienced by the coil, which would act as a crosswind maneuvering capability, allowing for trajectory adjustments as well as a longer launch window for the Wind Rider.

Figure 5. The angle of attack affects the thrust vector. But note the countervailing torque on the coil.

If the coil can maintain an angle of attack with respect to the solar wind, then the Wind Rider can steer across the solar wind to some extent.

Figure 6. (left) Angle of attack, and steering angle. (right) angle of attack and the torque on the coil.

Figure 6 shows that the craft could steer up to 12 degrees away from the solar wind direction. However, maintaining that angle of attack requires a constant force to oppose the torque restoring it to zero or 90 degrees; the coil acts like a weather vane, always trying to align itself with the solar wind. Holding the angle would be difficult. Reaction wheels like those on the Kepler telescope could only act transiently. Another suggestion is to shift the craft’s center of gravity in some way. Adding booms with coils might be another solution, albeit at the cost of mass and complexity, undesirable for a first generation probe. Jeff Greason has a paper upcoming in 2022 on theoretical navigation with possible ranges of steering capability.
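To illustrate what such a steering angle would be worth, a rough calculation, assuming (optimistically) that a 12 degree angle could be held for an entire Earth-Jupiter cruise:

```python
import math

# Cross-range displacement if a 12 degree steering angle could be held over
# an entire Earth-Jupiter cruise (~4.2 AU downrange). Purely illustrative.
steer_deg = 12.0
downrange_au = 4.2
cross_range_au = math.tan(math.radians(steer_deg)) * downrange_au
print(f"~{cross_range_au:.2f} AU of cross-range over {downrange_au} AU downrange")
```

Even a fraction of this, held intermittently, would relax the aiming precision required at launch.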

In summary, the Wind Rider is an upgraded version of the Plasma Magnet propulsion concept, now applied to a reference design for two missions: a fast flyby of Jupiter, and an interstellar precursor mission that could reach the solar gravity lens focus. The performance of the design is primarily based on modelling, and as yet there is no experimental evidence to support the predicted velocities or a finite lift/drag ratio for the craft.

Having said that, the propulsion principle and the hardware necessary are not expensive, and there seems to be much interest from the AIAA. Maybe this propulsion method can finally be built, flown and evaluated. If it works as advertised, it would open up the solar system to exploration by fast, cheap robotic probes and eventually crewed ships.

References

1. Freeze, B., et al. “Wind Rider Pathfinder Mission to Trappist-1 Solar Gravitational Lens Focal Region in 8 Years” (poster at AGU, December 13, 2021). https://agu.confex.com/agu/fm21/meetingapp.cgi/Paper/796237

2. Freeze, B., et al. “Jupiter Observing Velocity Experiment (JOVE), Introduction to Wind Rider Solar Electric Propulsion Demonstrator and Science Objective.”
https://baas.aas.org/pub/2021n7i314p05/release/1

3. Vulpetti, Giovanni, et al. Solar Sails: A Novel Approach to Interplanetary Travel. New York: Springer, 2008.

4. Nishida, Hiroyuki, et al. “Verification of Momentum Transfer Process on Magnetic Sail Using MHD Model.” 41st AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit, 2005.
https://doi.org/10.2514/6.2005-4463

5. Slough, J. “Plasma Magnet NASA Institute for Advanced Concepts Phase I Final Report.” 2004. http://www.niac.usra.edu/files/studies/final_report/860Slough.pdf. See Figure 2.

6. Tolley, A “The Plasma Magnet Drive: A Simple, Cheap Drive for the Solar System and Beyond” (2017).
https://www.centauri-dreams.org/2017/12/29/the-plasma-magnet-drive-a-simple-cheap-drive-for-the-solar-system-and-beyond/

7. Generous email communications with Dr. Brent Freeze in preparation of this article.


What If SETI Finds Something, Then What?

Beyond its immediate cultural and philosophical implications, the reception of a signal from another civilization will call for analysis across all academic disciplines as we try to make sense of it. Herewith a proposal for an Interstellar Communication Relay, both data repository and distribution system designed to apply worldwide resources to the problem. Author Brian McConnell is an American computer engineer who has written three technical books, two about SETI (the search for extraterrestrial intelligence), and one about electric propulsion systems for spacecraft. The latter, A Design for a Reusable Water-Based Spacecraft Known as the Spacecoach (Springer, 2015) has been the subject of extensive discussion on Centauri Dreams (see, for example, Brian’s A Stagecoach to the Stars, and Alex Tolley’s Spaceward Ho!). Brian has also published numerous peer reviewed scientific papers and book chapters related to SETI, and is an expert on interstellar communication systems and on translation technology. His new paper on the matter is just out.

by Brian McConnell

SETI organizations understandably focus most of their efforts on the initial step of detecting and vetting candidate signals. This work mostly involves astronomers and signal processing experts, and as such involves a fairly small group of subject matter experts.

But what if SETI succeeds in discovering an information bearing signal from another civilization? The process of analyzing and comprehending the information encoded in an extraterrestrial signal will involve a much broader community. Anyone with a computer and a hypothesis to test will be able to participate in this effort. I would wager that the most important insights will come from people who are not presently involved in SETI research. What will that process look like?

The first step following the detection of an extraterrestrial signal will be to determine if the signal is modulated to transmit information. Let’s consider the case of a pulsed laser signal that optical SETI (OSETI) instruments look for. This type of signal consists of a laser that emits very bright but very short pulses on nanosecond time scales. By transmitting very short pulses, the laser can outshine its background star while it is active, and without requiring excessive amounts of energy. OSETI detectors work by counting individual photons as they arrive. Photons from the background star will be randomly distributed in time, while the pulsed signal’s photons will arrive in tight clusters.
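The clustering statistic is easy to simulate. The sketch below (invented rates and pulse times, chosen only for illustration) draws Poisson-distributed background photons, injects two 10-photon nanosecond pulses, and flags any 1 ns bin with an implausibly high count:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative simulation: stellar background photons arrive at random times,
# while a laser pulse deposits many photons inside a single nanosecond.
# Counting photons per 1 ns bin cleanly separates the two populations.
SPAN_NS = 1_000_000                 # 1 ms observation window
BG_RATE = 1e-4                      # background photons per ns (1e5 per second)
PULSE_TIMES = [250_000, 750_000]    # two pulses, 10 photons each (assumed values)

background = rng.uniform(0, SPAN_NS, rng.poisson(BG_RATE * SPAN_NS))
pulses = np.concatenate([t + rng.uniform(0.0, 1.0, 10) for t in PULSE_TIMES])
arrivals = np.concatenate([background, pulses])

counts, _ = np.histogram(arrivals, bins=SPAN_NS, range=(0, SPAN_NS))
candidate_bins = np.flatnonzero(counts >= 5)   # 5+ photons in 1 ns: not noise
print("pulse-like bins at ns:", candidate_bins)
```

At these rates the chance of five background photons landing in the same nanosecond is negligible, so the flagged bins recover the pulse times exactly.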

This type of signal can be modulated to transmit information in several ways. The duration of each pulse can be altered, as can the time interval between pulses. The transmitter can also transmit on several different wavelengths (colors) to further increase the data rate of the combined signal.

Image: Pulse interval modulation varies the delay between individual pulses.

This type of modulation will be easy to see with currently deployed OSETI detectors, so it is possible that in the case of an OSETI detection, we would also be able to extract data from the signal right away.

How much information can be encoded in an OSETI signal that is also designed to be easy to detect? The transmission rate is simply the number of channels, times the pulse rate per channel, times the number of bits encoded in each pulse.

Let’s work an example. The signal has 20 distinct color channels, each chirping on average about ten times per second. Each pulse can have a duration of 1, 2, 3 or 4 nanoseconds, encoding two bits of information in the pulse width. The interval between pulses can take 256 unique values, encoding 8 bits of information in the pulse interval. Multiplying these together, we get 2,000 bits per second. While this is glacially slow compared to high speed internet connections, it works out to about 173 megabits of data per day, or 21.6 megabytes per day. At this rate, the sender could transmit several thousand high resolution images per year.
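The example rate can be computed explicitly; note the assumption that each of the 20 channels chirps about ten times per second:

```python
# The example transmission rate, computed explicitly:
# rate = channels * pulses_per_second * (width bits + interval bits)
channels = 20
pulses_per_sec = 10          # per channel
width_bits = 2               # 4 possible pulse durations  -> log2(4) bits
interval_bits = 8            # 256 possible pulse intervals -> log2(256) bits

bits_per_sec = channels * pulses_per_sec * (width_bits + interval_bits)
mbits_per_day = bits_per_sec * 86400 / 1e6
print(f"{bits_per_sec} bits/s, {mbits_per_day:.1f} megabits/day, "
      f"{mbits_per_day / 8:.1f} megabytes/day")
```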

The Interstellar Communication Relay, described in a recently published paper in the International Journal of Astrobiology, is a system that would be deployed in the event of a detection of an information bearing signal. It is modeled on the Deep Space Network, although it will be much less expensive to build and operate, as it will use virtualized, cloud based computing and data transfer services. The ICR will enable millions of amateur and professional researchers worldwide to obtain data extracted from an ET signal, and to participate in the analysis and comprehension effort that will follow the initial detection.

What type of information might we encounter in an alien transmission? This is anyone’s guess, and that is why it will be important to have a broad range of people and expertise represented in the message analysis and comprehension effort. Anything that can be represented in a digital format could potentially be included in a transmission.

Let’s consider images. A civilization that is capable of interstellar communication will, by definition, be proficient at astronomy and photography. Images are trivially easy to encode in a digital communication channel. Images are an interesting medium because they are easy to encode, and can represent objects and scenes on microscopic to cosmological scales. Certain types of images, such as planetary images, will be especially easy to recognize, and can be used to calibrate the decoding process on the receiver’s end.

The bitstream below is an example of what an undecoded image might look like in a raw binary stream. The receiver only needs to guess the number of pixels per row to see the image in its correct aspect ratio. This image is encoded with nine bits per pixel, with the nine bits arranged in 3×3 cells, so even the undecoded bitstream appears in roughly the correct aspect ratio. Even before the image is decoded, it is obvious that it depicts a spheroid object against a black background, which is what a planetary image will look like.

The receiver only needs to work through a small number of parameters to decode the image successfully, and once they have learned the transmitter’s preferred encoding scheme(s), they will be able to decode arbitrarily complex images. Because planetary images have well understood properties, the receiver can also use these to calibrate the decoding algorithm, for example to implement non-linear brightness encodings.
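One way to “guess the number of pixels per row” algorithmically is to try candidate widths and score how similar consecutive rows are; at the true width (and its multiples) the row-to-row difference collapses, so we take the smallest near-minimal candidate. The 48×36 synthetic “planet” below is an invented stand-in for the bitstream in the article:

```python
import numpy as np

# Recover the row width of a flattened image by trying candidate widths.
# At the correct width (and its multiples) consecutive rows are nearly
# identical, so take the smallest candidate whose score is near the minimum.
def make_disk(width=48, height=36, radius=14):
    """Synthetic 'planet': a bright disk on a black background."""
    y, x = np.mgrid[0:height, 0:width]
    return ((x - width / 2) ** 2 + (y - height / 2) ** 2 < radius ** 2).astype(float)

stream = make_disk().ravel()     # the "received" flat pixel stream

def row_difference(stream, width):
    rows = stream[: (len(stream) // width) * width].reshape(-1, width)
    return float(np.abs(np.diff(rows, axis=0)).mean())

scores = {w: row_difference(stream, w) for w in range(8, 200)}
best = min(scores.values())
recovered = min(w for w, s in scores.items() if s <= 1.2 * best)
print("recovered row width:", recovered)
```

The same trial-and-score idea extends to guessing bits per pixel and channel interleaving once the row width is known.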

Image: The bitstream above decoded as a grayscale (monochrome) image. Credit: NASA / Apollo 17.

What about color? Color is a physical property that will be well understood by any astronomically literate civilization. The sender can assist the receiver in decoding photographs with multiple color channels by sending photographs of mutually observable objects such as nebulae.

Image: The Cat’s Eye nebula, imaged in red, green and blue color channels.

Image: Combining these color channels yields the following image. A receiver can work out which color channels were used in an image by combining them and comparing the output against images they have taken of the same object.

Images are a good example of observables. Observables, such as images and audio, are straightforward to encode digitally. Communicating qualia, internal experiences, may be quite difficult or impossible due to the lack of shared senses and experiences, but it will be possible to communicate quite a bit through observables which, in and of themselves, may be quite interesting. Photographs from another inhabited world would surely captivate scientists and the general public.

Computer programs or algorithms are another type of information to be on the watch for. Computer programs will be useful in interstellar communication for a number of reasons. The sender can describe an interpreted programming language using a small collection of math and logic symbols. While this foundation can be quite simple, with about a dozen elemental symbols, the programs written in this language can be arbitrarily complex and possibly even intelligent.

An algorithmic communication system will have a number of advantages over static content. The programs can interact with their receivers in real-time, and thus eliminate the long delays associated with two-way communication across interstellar distances. Algorithms can also make the communication link itself more reliable, for example by implementing robust forward error correction and compression algorithms that both boost the information carrying capacity of the link and allow transmission errors to be detected and corrected without requesting retransmission of data.
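As a concrete terrestrial example of forward error correction of the kind such a link might employ, here is the classic Hamming(7,4) code in Python, which corrects any single flipped bit in a 7-bit block. Nothing here is claimed about what a sender would actually use; it simply illustrates errors being corrected without retransmission.

```python
def hamming74_encode(nibble):
    """Encode 4 data bits into 7 bits with 3 parity bits (Hamming(7,4))."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Recompute the parity checks; their pattern (the syndrome) gives the
    1-based position of a single flipped bit, or 0 if none."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:
        c[syndrome - 1] ^= 1        # correct the flipped bit in place
    return [c[2], c[4], c[5], c[6]]  # extract the data bits

word = [1, 0, 1, 1]
sent = hamming74_encode(word)
sent[3] ^= 1                      # simulate a transmission error
print(hamming74_decode(sent))     # -> [1, 0, 1, 1]
```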

Take images as an example. Lossy compression algorithms, similar to the JPEG format, can reduce the amount of information needed to encode an image by a factor of ten or more. Order of magnitude improvements like this will favor the use of algorithmic systems over static, uncompressed data.
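A toy illustration of the idea, under stated assumptions: a smooth synthetic image is coarsely quantized (the lossy step, loosely analogous to JPEG quantization) and then entropy coded with the standard-library zlib. The exact ratio depends on the image content, but order-of-magnitude savings are typical for smooth data.

```python
import zlib
import numpy as np

# A smooth synthetic image: 64 x 64 pixels, one byte each.
yy, xx = np.mgrid[0:64, 0:64]
img = (128 + 60 * np.sin(xx / 9.0) * np.cos(yy / 11.0)).astype(np.uint8)
raw = img.tobytes()

# Lossy step: keep only the top 4 bits of each pixel (coarse quantization),
# then let a lossless entropy coder squeeze the now-redundant data.
quantized = (img >> 4).astype(np.uint8).tobytes()
compressed = zlib.compress(quantized, 9)

print(len(raw) / len(compressed))  # ratio on the order of 10:1 for smooth data
```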

These are just a couple of examples of the types of information we should be on the watch for, but the range of possible information types we may encounter is much greater than that. That’s why it will be important to draw in people representing many different areas of expertise to evaluate and understand the information conveyed by an ET signal.

The paper is McConnell, “The interstellar communication relay,” International Journal of Astrobiology 26 August 2020 (abstract).


Introducing the Q-Drive: A concept that offers the possibility of interstellar flight

If Breakthrough Starshot is tackling the question of velocities at a substantial percentage of lightspeed, what do we do about the payload question? A chip-sized spacecraft is challenging in terms of instrumentation and communications, not to mention power. Enter Jeff Greason’s Q-Drive, with an entirely different take on high velocity missions within the Solar System and beyond it. Drawing its energies from the medium to deploy an inert propellant, the Q-Drive ups the payload enormously. But can it be engineered? Alex Tolley has been doing a deep dive on the concept and talking to Dr. Greason about the possibilities, out of which today’s essay has emerged. A Centauri Dreams regular, Alex has a history of innovative propulsion work, and with Brian McConnell is co-author of A Design for a Reusable Water-Based Spacecraft Known as the Spacecoach (Springer, 2016).

by Alex Tolley

Image: Technical University of Munich for Project Icarus. Credit: Adrian Mann.

The interstellar probe coasted at 4% c after its fusion drive first stage was spent. It massed 50,000 kg, mostly propellant water ice stored as a cone ahead of the probe that did double duty as a particle shield. The probe extended a spine several hundred kilometers in length behind the shield. Then the plasma magnet sails at each end started to cycle, using just the power from a small nuclear generator. The magsails captured and extracted power from the ISM streaming by, which powered the ionization and ejection of the propellant. Ejected at the streaming velocity of the ISM, the probe steadily increased in velocity, eventually reaching 20% c after exhausting 48,000 kg of propellant. The probe, targeted at Proxima Centauri, would reach its destination in less than 20 years. It wouldn’t be the first to reach that system; the Breakthrough microsails had done that decades earlier. But this probe was the first with the scientific payload to make a real survey of the system and collect data from its habitable world.

(sound of a needle skidding across a vinyl record). Wait, what? How can a ship accelerate to 20% c without expending massive amounts of power from an onboard power plant, or an intense external power beam from the solar system?

In a previous article, I explained the plasma magnet drive, a magsail technology that did not require a large physical sail structure, but rather a compact electromechanical engine whose magnetic sail size was dependent on the power and the surrounding medium’s plasma density.

Like other magsail and electric sail designs, the plasma magnet could only run before the solar wind, making only outward bound trips at a velocity limited by the wind speed. This inherently limited the missions a magsail could perform compared to a photon sail. Where it excelled was that its thrust did not fall off with distance from the sun, the limitation that severely constrains solar sail thrust, and this made the plasma magnet sail particularly suited to missions to the outer planets and beyond.

Jeff Greason has since considered how the plasma magnet could be decelerated to allow the spacecraft to orbit a target in the outer system. Following the classic formulations of Fritz Zwicky, Greason considered whether the spacecraft could use onboard mass but external energy to achieve this goal. This external energy was to be extracted from the external medium, not solar or beamed energy, allowing it to operate anywhere where there was a medium moving relative to the vehicle.

The approach to achieve this was to use the momentum and energy of a plasma stream flowing past the ship, using that energy to transfer momentum to an onboard propellant to drive the ship. That plasma stream would be the solar wind inside the solar system (or another star system), and the ionized interstellar medium once beyond the heliosphere.

Counterintuitively, such a propulsion system can work in principle. By ejecting reaction mass, the ship’s kinetic energy is carried by a smaller remaining mass, and its velocity therefore increases. There is no change in the ship’s kinetic energy, just an adjustment of the ship’s mass and velocity to keep that energy constant.
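This constant-kinetic-energy argument can be checked against the numbers in the opening scenario (50,000 kg at 4% c, with 48,000 kg of propellant expended). A quick sketch:

```python
import math

def coast_velocity(m0, v0, m1):
    """Velocity after dropping ship mass from m0 to m1 while holding
    kinetic energy constant: (1/2) m0 v0^2 = (1/2) m1 v1^2."""
    return v0 * math.sqrt(m0 / m1)

# 50,000 kg at 4% of lightspeed, ending at 2,000 kg after
# exhausting 48,000 kg of propellant.
v1 = coast_velocity(50_000, 0.04, 2_000)
print(round(v1, 3))  # 0.2, i.e. 20% of lightspeed
```

The mass ratio of 25 yields a velocity multiplier of sqrt(25) = 5, exactly the 4% to 20% c jump described in the scenario.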

Box 1 shows that net momentum (and force) can be attained when the energy of the drag medium and propellant thrust are equal. However, this simple momentum exchange would not be feasible as a drive, as the ejected mass would have to be greater than the intercepted medium, resulting in very high mass ratios. In contrast, the Q-Drive achieves a net thrust with a propellant mass flow far less than that of the medium passing by the craft, resulting in a low mass ratio yet high performance in terms of velocity increase.

Figure 1 shows the principle of the Q-Drive using a simple terrestrial vehicle analogy. Wind blowing through a turbine generates energy that is then used to eject onboard propellant. If the energy extracted from the wind is used to eject the propellant, in principle the onboard propellant mass flow can be lower than the mass of air passing through the turbine. The propellant’s exhaust velocity is matched to that of the wind, and under these conditions, the thrust can be greater than the drag, allowing the vehicle to move forward into the wind.

Box 2 below shows the basic equations for the Q-Drive.

Let me draw your attention to equations 1 and 2, the drag and thrust forces. The drag force depends on the mass flow of the medium, which is set by the velocity of the wind (or of the ship moving through it), multiplied by the change in velocity (delta V) of the medium as it passes through the energy harvesting mechanism, not by the wind velocity itself. Compare that to the thrust from the propellant, where for a given power the mass flow depends on the square of the exhaust velocity. When the velocities of the ship and the exhaust are equal, the ratio of the mass flows depends on the ratio of the medium’s delta V to the exhaust velocity: the lower the delta V of the medium as energy is extracted from it, the lower the propellant mass flow. As the delta V approaches zero, the mass flow of the medium becomes far greater than that of the propellant. Conversely, as the delta V approaches the full velocity of the medium, i.e. slowing it to a dead stop relative to the ship, the medium and exhaust mass flows converge.

Equations 3 and 7 give the power delivered by the medium and the power of the propellant thrust. As the power needed to generate the thrust cannot be higher than that delivered by the medium, at 100% conversion the two must be equal. As can be seen, the power generated by the energy harvesting is the drag force multiplied by the speed of the medium, while the power to generate the thrust is ½ the thrust force multiplied by the exhaust velocity, which is the same as the velocity of the medium. Therefore the thrust is twice the drag force, and a net thrust equal to the drag force is achieved [equation 9]. [Because the sail area must be very large to capture the thin solar wind and the even more rarefied ISM, the drag force on the ship itself can be discounted.]
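The bookkeeping in this paragraph can be verified numerically. The sketch below assumes an arbitrary medium mass flow and the idealized 100% conversion efficiency used in the text; the delta V value is illustrative.

```python
# Numerical check of the power balance argument (idealized, 100% efficiency).
mdot_medium = 1.0e-3   # kg/s of medium passing through the sail (assumed)
v = 400e3              # m/s: speed of the medium relative to the ship
dv = 40e3              # m/s: velocity change extracted from the medium

F_drag = mdot_medium * dv          # drag force on the sail
P_harvested = F_drag * v           # power extracted from the medium

v_e = v                            # exhaust velocity matched to the wind
mdot_prop = 2 * P_harvested / v_e**2   # from P = 1/2 * mdot * v_e^2
F_thrust = mdot_prop * v_e         # propellant thrust

print(F_thrust / F_drag)           # 2.0: net thrust equals the drag force
print(mdot_prop / mdot_medium)     # 0.2 = 2*dv/v: far less propellant than medium
```

Note how shrinking `dv` shrinks the propellant mass flow relative to the medium, exactly the trade discussed above.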

Because the power delivered by the external medium increases as the ship’s velocity increases, the harvested power can in turn be used to raise the exhaust velocity to match. This is very different from our normal expectations of powering vehicles. Because of this, the Q-Drive can continue to accelerate a ship for as long as it can continue to exhaust propellant.

Figure 2 shows the final velocity versus mass ratio performance of the Q-Drive compared to a rocket with a fixed exhaust velocity, and to the rocket equation with a variable exhaust but with thrust reduced by 50% to match the Q-Drive’s net thrust of 50% of the propellant thrust. At mass ratios below 10, a rocket with an exhaust equal to the absolute wind velocity would marginally outperform the Q-Drive, although it would need its own power source, such as a solar array or nuclear reactor. Beyond that, the Q-Drive rapidly outperforms the rocket. This is primarily because as the vehicle accelerates, the increased power harvested from the wind is used to commensurately increase the exhaust velocity. If a rocket could do this, for example the VASIMR drive, the performance curve would be the same. However, the Q-Drive does not need a huge power supply to work, and therefore offers the potential for very high velocity without a matching power supply.

Equation A16 [1] and Box 3 equation 1 show that the Q-Drive has a velocity multiplier equal to the square root of the mass ratio. This is highly favorable compared to the rocket equation. Equations 2 and 3 in Box 3 show that the required propellant, and hence the mass ratio, is reduced the less the medium’s velocity is reduced to extract power. However, reducing the delta V of the medium also reduces the acceleration of the craft. This implies that the design of the ship will be driven by mission requirements rather than some fixed optimization.

Box 4 provides some illustrative values for the size of the mag sails in the solar system for the Q-Drive and the expected performance for a 1 tonne craft. While the magnetic sail radii are large, they are achievable and allow for relatively high acceleration. As explained in [4], the plasma magnet sails increase in size as the medium density decreases, maintaining the forces on the sail. Once in interstellar space, the ISM is yet more rarefied and the sails have to commensurately expand.

How might the plasma medium’s energy be harvested?

The wind turbine shown in figure 1 is replaced by an arrangement of the plasma magnet sails. To harvest the energy of the medium, it is useful to conceptualize the plasma magnet sail as a parachute that slows the wind to run a generator. At the end of this power stroke, the parachute is collapsed and rewound to the starting point to start the next power cycle. This is illustrated in figure 3. A ship would have 2 plasma magnet sails that cycle their magnetic fields at each end of a long spine that is aligned with the wind direction to mimic this mechanism. The harvested energy is then used to eject propellant so that the propellant exhaust velocity is optimally the same as the medium wind speed. By balancing the captured power with that needed to eject propellant, the ship needs no dedicated onboard power beyond that for maintenance of other systems, for example, powering the magnetic sails.

Within the solar system, the Q-Drive could therefore push a ship towards the sun into the solar wind, as well as away from the sun with the solar wind at its back. Ejecting propellant ahead of the ship on an outward bound journey would allow the ship to decelerate, while ejecting it ahead of the ship as it faced the solar wind would allow the ship to fall towards the sun. In both cases, the maximum velocity is about 400 km/s, the bulk velocity of the solar wind at its peak density.

Can the drive achieve velocities greater than the solar wind?

With pure drag sails, whether photon or magnetic, the maximum velocity is the same as the medium pushing on the sail. For a magnetic sail, this is the bulk velocity of the solar wind, about 400 km/s at the sun’s equator, and 700 km/s at the sun’s poles.

Unlike drag sails, the Q-Drive can achieve velocities greater than the medium, e.g. the solar wind. As long as the wind is flowing into the bow of the ship, the ship can accelerate indefinitely until the propellant is exhausted. The limitation is that this can only happen while the ship is facing into the wind (or the wind vector has a forward facing component). In the solar system, this requires that there is sufficient distance to allow the ship to accelerate until its velocity is higher than the solar wind before it flies past the sun. Once past perihelion, the ship is traveling outward faster than the solar wind, overtaking it from behind, so the wind still streams past the bow and the ship can keep accelerating.

What performance might be achievable?

To indicate the possible performance of the Q-drive in the solar system, 2 missions are explored, both requiring powered flight into the solar wind.

Two Solar System Missions

1. Mercury Rendezvous

To reach Mercury quickly requires the probe to reduce its orbital speed around the sun to drop down to Mercury’s orbit, and then to reduce velocity further to allow orbital insertion. The Q-Drive ship points its bow towards the sun and ejects propellant off-axis. This quickly pushes the probe into a fast trajectory towards the sun. Further propellant ejection is required to keep the probe off a fast return trajectory and to hold it in Mercury’s orbital path around the sun. From there, a mix of propellant ejection and simple drag can be used to place the probe in orbit around Mercury. Flight time is of the order of 55 days. Figure 4 illustrates the maneuver.

2. Sundiver with Triton Flyby

The recent Centauri Dreams post on a proposed flyby mission to Triton indicated a flight time of 12 years using gravity assists from Earth, Venus, and Jupiter. The Q-Drive could eliminate most of that flight time using a sundiver approach. Figure 5 shows the possible flight path. The Q-Drive powers towards the sun against the solar wind. It must have a high enough acceleration to ensure that at perihelion it is traveling faster than the solar wind. This allows it to continue on a hyperbolic trajectory, continually accelerating until its propellant is exhausted.

This sundiver maneuver allows the Q-Drive craft to fly downwind faster than the wind.

For a ship outward bound beyond the heliosphere, the ISM is experienced as a wind coming from the bow. While extremely tenuous, there is enough medium to extract the energy for continued acceleration as long as the ship has ejectable mass.

Up to this point, I have been careful to state that this works IN PRINCIPLE. In practice there are some very severe engineering challenges. The first is to extract energy from the drag of the plasma winds with sufficient efficiency to generate the power needed for propellant ejection. The second is to eject propellant at a velocity that matches the vehicle’s; in other words, the exhaust velocity must rise with the vehicle’s velocity, unlike the constant exhaust velocity of a rocket. If the engines can only eject mass at a constant velocity, the delta V of the drive would look more like a conventional rocket’s, growing with the natural logarithm of the mass ratio. The ship would still be able to extract energy from the medium, but the mass ratio would have to be very much higher. The chart in Figure 2 shows the difference between a fixed velocity exhaust and the Q-Drive with variable velocity.

The engineering issues to turn the Q-Drive into hardware are formidable. To extract the energy of the plasma medium, whether solar wind or ISM, with high efficiency is non-trivial. Greason’s idea is to have 2 plasma magnet drag sails, one at each end of the probe’s spine, that cycle in power to extract the energy. The model is rather like a parachute that opens to create drag and run a generator, then collapses to release the trapped medium and restart at the bow (see figure 3). Whether this can deliver the needed energy extraction efficiency remains to be worked out. If the efficiencies are like those of vertical axis wind turbines, which work as drag devices, they will be far too low; the efficiency would need to exceed that of horizontal axis wind turbines to avoid the mass penalties for the propellant. It can readily be seen that if the combined efficiencies fall below 50%, the Q-Drive effectively drops back into the regime illustrated in Box 1, in which the propellant mass must exceed that of the intercepted medium and be ejected more slowly. This hugely raises the mass ratio of the craft and in turn reduces its performance.

The second issue is how to eject the propellant to match the velocity of the medium streaming over the probe. Current electric engines have exhaust velocities in the tens of km/s. Theoretical electric engines might manage the solar wind velocity, and efficiencies of ion drives are in the 50% range at present. Reaching a fraction of light speed for the interstellar mission is orders of magnitude harder. Greason suggests something like a magnetic field particle accelerator operating the length of the ship’s spine. Existing particle accelerators have low efficiencies, so this may present another very significant engineering challenge. If the exhaust velocity cannot be matched to the speed of the ship through the medium, the performance looks much more like a rocket’s, with velocity increases that depend on the natural logarithm of the mass ratio rather than its square root. For the interstellar mission, increasing the velocity from 4% to 20% of light speed would then require a mass ratio not of 25, but closer to 150.
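The two mass ratio figures can be reproduced from the scalings quoted above. Note one assumption on my part: the ~150 figure follows if the fixed exhaust velocity stays pinned at the initial 4% c while the full 20% c must be built up against it.

```python
import math

v_i, v_f = 0.04, 0.20          # initial and final speeds as fractions of c

# Q-Drive scaling: final velocity grows with the square root of the mass
# ratio, so the required mass ratio is (v_f / v_i) squared.
mr_qdrive = (v_f / v_i) ** 2

# Fixed-exhaust (rocket equation) scaling: mass ratio = exp(delta_v / v_e).
# Assumption: exhaust velocity pinned at the initial 4% c, with the full
# 20% c built up against it.
mr_fixed = math.exp(v_f / v_i)

print(round(mr_qdrive))        # 25
print(round(mr_fixed))         # 148, i.e. "closer to 150"
```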

Figure 6 shows my attempt to illustrate a conceptual Q-Drive powered spacecraft for interstellar flight. The propellant is at the front to act as a particle shield in the ISM. A science platform and communication module sit behind this propellant shield. Behind them stretches a spine many kilometers long, with a plasma magnet at either end to harvest the energy of the ISM and accelerate the propellant. Waste heat is handled by a radiator along this spine.

In summary, the Q-Drive offers an interesting path to high velocity missions both intra-system and interstellar, with much larger payloads than the Breakthrough Starshot missions, but with anticipated engineering challenges comparable with other exotic drives such as antimatter engines. The elegance of the Q-Drive is the capability of drawing the propulsion energy from the medium, so that the propellant can be common inert material such as water or hydrogen.

The conversion of the medium’s momentum to net thrust is more efficient than a rocket with constant exhaust velocity using onboard power, allowing far higher velocities at equivalent mass ratios. The two example missions show substantial improvements in mission time for both an inner system rendezvous and an outer system flyby. The Q-Drive also offers the intriguing possibility of interstellar missions with reasonable scientific and communication payloads that are not heroic feats of miniaturization.

References

1. Greason J. “A Reaction Drive Powered by External Dynamic Pressure” (2019) JBIS v72 pp146-152.

2. Greason J. ibid. equation A4 p151.

3. Greason J. “A Reaction Drive Powered by External Dynamic Pressure” (2019) TVIW video https://youtu.be/86z42y7DEAk

4. Tolley A. “The Plasma Magnet Drive: A Simple, Cheap Drive for the Solar System and Beyond” (2017) https://www.centauri-dreams.org/2017/12/29/the-plasma-magnet-drive-a-simple-cheap-drive-for-the-solar-system-and-beyond/

5. Zwicky F. The Fundamentals of Power (1946). Manuscript for the International Congress of Applied Mechanics in Paris, September 22-29, 1946.


Climate Change and Mass Extinctions: Implications for Exoplanet Life

The right kind of atmosphere may keep a planet habitable even if it crowds the inner region of the habitable zone. But atmospheric evolution involves many things, including the kind of geological activity our own planet has experienced, leading to sudden, deep extinctions. Centauri Dreams regular Alex Tolley today takes a look at a new paper that examines the terrestrial extinction of marine species in the Permian event some 252 million years ago. As we examine exoplanet habitability, it will be good to keep the factors driving such extinctions in mind. Tolley is a lecturer in biology at the University of California and author, with Brian McConnell, of A Design for a Reusable Water-Based Spacecraft Known as the Spacecoach (Springer, 2016). A key question in his essay today: Is our definition of the habitable zone simply too broad?

by Alex Tolley

In the search for life on exoplanets, the question of whether a planet is within the HZ, given a plausible atmosphere, is usually framed on timescales that are a substantial fraction of the star’s main sequence lifetime. With water may come the emergence of life as we know it, and then the long, slow evolution to multicellular life and possibly technological civilization. Planets may initially form too close to a pre-main sequence star to be in the HZ, then enter the HZ, only to leave it again as the star increases in luminosity with age. Earth has experienced about a 30% increase in solar luminosity over its lifetime, and the CO2 level needed to maintain a constant surface temperature via the greenhouse effect has had to decline to offset the increased insolation. In 1 to 2 billion years, the further increase in solar luminosity will require CO2 levels to decline below those needed for photosynthesis, or the Earth’s surface will heat up beyond that sustainable for life.

Yet when considering the environment on a world in the HZ, we should be cognizant that climatic instability may create short term shocks with major impacts on life. Earth has experienced 5 major extinctions based on our reading of the fossil record, the most famous being the dinosaur-killing KT event that ended the Cretaceous and allowed mammals to evolve into the newly vacated ecological niches. However, the largest extinction is the Permian extinction, or “Great Dying,” when over 95% of marine species became extinct about 252 mya. Unlike the KT event, which was a cosmic throw of the dice, the Permian extinction is believed to be due to massive volcanism of the Siberian Traps, which released vast quantities of CO2 into the atmosphere, increasing its concentration at least severalfold. This caused a rapid temperature rise of tens of degrees Fahrenheit and was accompanied by ocean acidification.

A new paper by Justin Penn et al. suggests that this global temperature change caused the extinction of marine species primarily by metabolic stress and hypoxia.

The core idea is that multicellular, aerobic organisms require critical oxygen pressures to live, with their lowest levels of metabolism during resting and higher levels for activities such as swimming or feeding. Sessile organisms may have just a 1.5x increase in active metabolic rate over resting, while energetic organisms like fish may be 5x or more. As temperatures rise, so does the metabolic rate, which in turn requires adequate oxygen for respiration. But as the temperatures rise, the dissolved oxygen levels fall, placing additional stress on the animals to maintain their respiration rate. Penn and colleagues combined climate model outputs of temperature change and dissolved oxygen partial pressure with estimated metabolic rates of various modern animals, used as stand-ins for Permian species, to determine how warming ocean habitats impact the metabolisms of marine genera and their probable extinction rates.
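The Metabolic Index at the heart of the model can be sketched in a few lines. The functional form below (an Arrhenius temperature dependence for oxygen demand) and the reference temperature are my assumptions; the parameter values follow the averages quoted in the Figure 1 caption (1/Ao ~ 4.5 kPa, Eo ~ 0.4 eV).

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def metabolic_index(pO2, T, A0=1 / 4.5, Eo=0.4, Tref=288.15):
    """Metabolic Index Phi: ratio of O2 supply to resting O2 demand.
    Demand grows with temperature (Arrhenius), so at fixed O2 pressure
    the index falls as the water warms. T in kelvin, pO2 in kPa."""
    return A0 * pO2 * math.exp((Eo / K_B) * (1.0 / T - 1.0 / Tref))

# A habitat at the reference temperature (15 C) with pO2 = 9 kPa:
before = metabolic_index(9.0, 288.15)
# The same habitat after ~10 C of warming and a 20% drop in dissolved O2:
after = metabolic_index(0.8 * 9.0, 298.15)

print(round(before, 2))  # 2.0
print(round(after, 2))   # ~0.93
# If a genus needs Phi >= Phi_crit of roughly 2 for sustained activity,
# this habitat has dropped out of its viable aerobic range.
```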

Figure 1 shows the relation between metabolic rate and temperature, and the temperature increased metabolic index of ocean habitat by latitude and depth. The polar latitudes and shallower depths show the highest changes in the metabolic index, indicating the most stressed habitats.

Figure 1. Physiological and ecological traits of the Metabolic Index (F) and its end-Permian distribution. (A) The critical O2 pressure (pO2 crit) needed to sustain resting metabolic rates in laboratory experiments (red circles, Cancer irroratus) vary with temperature with a slope proportional to Eo from a value of 1/Ao at a reference temperature (Tref), as estimated by linear regression when F = 1 (19). Energetic demands for ecological activity increase hypoxic thresholds by a factor Fcrit above the resting state, a value estimated from the Metabolic Index at a species’ observed habitat range limit. (B) Zonal mean distribution of F in the Permian simulation for ecophysiotypes with average 1/Ao and Eo (~4.5 kPa and 0.4 eV, respectively). (C and D) Variations in F for an ecophysiotype with weak (C) and strong (D) temperature sensitivities (Eo = 0 eV and 1.0 eV, respectively), both with 1/Ao ~ 4.5 kPa. Example values of Fcrit (black lines) outline different distributions of available aerobic habitat for a given combination of 1/Ao and Eo. Credit: Justin Penn and Curtis Deutsch, University of Washington.

Figure 2 shows the spatial changes in ocean temperature and oxygen concentrations. Oceanic temperatures rise, particularly towards the poles, and with it a reduction in dissolved oxygen. As expected the greatest rises in temperature are at the shallower depths, particularly with the highly productive continental shelves. Oxygen level declines are most widely seen at all depths at the poles, but far less so in the tropics.

Figure 2. Permian/Triassic ocean temperature and O2. (A) Map of near surface (0 to 70 m) ocean warming across the Permian/Triassic (P/Tr) transition simulated in the Community Earth System Model. The region in gray represents the supercontinent Pangaea. (B) Simulated near surface ocean temperatures (red circles) in the eastern Paleo-Tethys (5°S to 20°N) and reconstructed from conodont d18Oapatite measurements (black circles) (4). The time scale of the d18Oapatite data (circles) has been shifted by 700,000 years to align it with d18Oapatite calibrated by U-Pb zircon dates (open triangles) (1), which also define the extinction interval (gray band). Error bars are 1°C. (C) Simulated zonal mean ocean warming (°C) across the P/Tr transition. (D) Map of seafloor oxygen levels in the Triassic simulation. Hatching indicates anoxic regions (O2 < 5 mmol/m3). (E) Simulated seafloor anoxic fraction ƒanox (red circles). Simulated values are used to drive a published one-box ocean model of the ocean’s uranium cycle (8) and are compared to d238U isotope measurements of marine carbonates formed in the Paleo-Tethys (black circles). Error bars are 0.1‰. (F) Same as in (C) but for simulated changes in O2 concentrations (mmol/m3). Credit: Justin Penn and Curtis Deutsch, University of Washington.

The authors conclude:

The correspondence between the simulated and observed geographic patterns of selectivity strongly implicates aerobic habitat loss, driven by rapid warming, as a main proximate cause of the end-Permian extinction.

However, while the temperature is the proximate cause, the authors note that other factors are also involved.

“In our simulations, net primary productivity is reduced by ~40% globally, with strongest declines in the low latitudes, where essential nutrient supply to phytoplankton is most curtailed.”

Ocean acidification is also a potential factor, as we may be seeing today. Acidification will be higher at the poles, creating a habitat barrier for species that require more calcification.

Figure 3 is a schematic of the model, fitting the probable extinction rates to the fossil record. Their model predicts a latitudinal impact of warming that is also suggested by the fossil record. Their explanation for this spatial pattern is that tropical organisms are preadapted to warmer temperatures and lower O2 levels. As the oceans warm, these organisms migrate polewards to cooler waters. However, polar species have nowhere to migrate to, and therefore have a higher rate of extinction.

Figure 3. An illustration depicting the percentage of marine animals that went extinct at the end of the Permian era by latitude, from the model (black line) and from the fossil record (blue dots). The color of the water shows the temperature change, with red representing the most severe warming and yellow less warming. At the top is the supercontinent Pangaea, with massive volcanic eruptions emitting carbon dioxide. The images below the line represent some of the 96 percent of marine species that died during the event. Credit: Justin Penn and Curtis Deutsch, University of Washington.

As our current analog of the Permian climate change impacts the oceans, we are already seeing warm water species appearing in the cold North Atlantic, far north of their historic ranges. We can also expect species like the Antarctic icefish, which has no red blood cells thanks to the high O2 concentrations of polar waters, to become extinct as those waters continue to warm.

What about the extinction of terrestrial life? 70% of terrestrial faunal species went extinct. The attractiveness of this theory is that it also applies to terrestrial life, although oxygen depletion was not a factor there. What is also clear is that the CO2 increase heated the planet, overwhelming any cooling from dust blown into the atmosphere, as experienced with the 2 year global cooling after Mt. Pinatubo erupted.

Had the Earth been closer to our sun, or temperatures risen further due to greater volcanic activity, the extinctions might conceivably have been 100% for all multicellular genera. Earth life might have been pushed back to primarily archaea and bacteria. The atmosphere might have reverted back to its Archaean state. If photosynthesizers were still present, how long would it take for aerobic multicellular life to evolve again?

The major extinctions have implications for life on exoplanets. Worlds closer to the inner edge of the HZ may be supportive of life if the atmosphere stays stable. However, as we have seen with the example of the Permian extinction, geologic processes can upset that balance, potentially making a world uninhabitable for a period, forcing any life to be restricted to simpler forms. How frequently could such events cause mass, even total extinctions, on other worlds, despite long-term conditions being favorable for life? It is perhaps worth considering whether the inner edge HZ limits should be made more conservative to allow for such events.

The paper is Penn et al., “Temperature-dependent hypoxia explains biogeography and severity of end-Permian marine mass extinction” Science Vol. 362, Issue 6419 (7 December 2018). Abstract (Full Text behind paywall).
