Centauri Dreams
Imagining and Planning Interstellar Exploration
‘Oumuamua: A Shard of Nitrogen Ice?
I’m only just getting to Steven Desch and Alan Jackson’s two papers on ‘Oumuamua, though in a just world (where I could clone myself and work on multiple stories simultaneously) I would have written them up sooner. Following Avi Loeb’s book on ‘Oumuamua, the interstellar object has been in the news more than ever, and the challenge it throws out by its odd behavior has these two astrophysicists, both at Arizona State, homing in on a possible solution.
No extraterrestrial technologies in this view, but rather an unusual object made of nitrogen ice, common in the outer Solar System and likely to be similarly distributed in other systems. Think of it as a shard of a planet like Pluto, where nitrogen ice is ubiquitous. Desch and Jackson calculated the object’s albedo, or reflectivity, with this idea in mind, realizing that nitrogen ice would be more reflective than astronomers had assumed ‘Oumuamua to be, and thus the object could be smaller. As the authors note: “Its brightness would be consistent with an albedo of 0.64, which is exactly consistent with the albedo of the surface of Pluto, which is > 98% N2 ice.”
That’s a useful finding because a nitrogen ice object would behave like ‘Oumuamua was observed to do. Recall the salient problem this interloper presented as it left the system. It moved away from the Sun at a slightly larger velocity than an average comet should have. Desch and Jackson discovered that if it were made of nitrogen ice, and thus smaller (and more reflective than thought), the so-called ‘rocket effect’ could be accounted for. A tiny object is affected by a small amount of escaping gas to a greater extent than a larger, more massive one.
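The size argument is easy to sanity-check. Here is a minimal sketch using the standard relation between a small body’s diameter, absolute magnitude and geometric albedo; the absolute magnitude of roughly H ≈ 22.4 for ‘Oumuamua is an assumed input for illustration, not a number taken from the papers:

```python
import math

def effective_diameter_m(abs_magnitude, geometric_albedo):
    """Standard small-body size relation: D [km] = 1329 / sqrt(p_V) * 10^(-H/5),
    converted here to meters."""
    d_km = 1329.0 / math.sqrt(geometric_albedo) * 10 ** (-abs_magnitude / 5.0)
    return d_km * 1000.0

H = 22.4  # assumed absolute magnitude for 'Oumuamua (illustrative only)

for p_v in (0.1, 0.64):  # dark comet-like surface vs. Pluto-like nitrogen ice
    print(f"albedo {p_v:.2f}: effective diameter ~{effective_diameter_m(H, p_v):.0f} m")
```

The same observed brightness, in other words, implies an object less than half the size if the surface reflects like nitrogen ice rather than like a dark comet.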
The effect can be calculated by examining how different kinds of ices sublimate, moving from a solid to a gas with no intervening liquid state. As to how an object constituted of nitrogen ice might have gone interstellar, the astronomers worked out the rate that breakaway nitrogen ice pieces would have been produced through collisions in the outer system of an exoplanet. Says Jackson:
“It was likely knocked off the surface by an impact about half a billion years ago and thrown out of its parent system. Being made of frozen nitrogen also explains the unusual shape of ‘Oumuamua. As the outer layers of nitrogen ice evaporated, the shape of the body would have become progressively more flattened, just like a bar of soap does as the outer layers get rubbed off through use.”
That question of ‘Oumuamua’s shape continues to intrigue me, though, as I ponder my bar of Irish Spring. We’ve never observed anything of this shape in the Solar System. And exactly what would have happened at perihelion? I turned to the first of the two papers for more:
Our modelling shows that, perhaps surprisingly, an N2 ice fragment can survive passing the Sun at a perihelion distance of 0.255 au, in part because evaporative cooling maintains surface temperatures less than 50 K. Despite being closer to the Sun than Mercury, ‘Oumuamua’s surface temperatures remained closer to those of Pluto.
Even so, surely nitrogen ice would have been a huge factor in its behavior:
The volatility of N2 did, however, lead to significant mass loss – we calculate that by the time ‘Oumuamua was observed, a month after perihelion, it retained only around 8% of the mass it had on entering the solar system. This loss of mass is key to explaining the extreme shape of ‘Oumuamua: isotropic irradiation and removal of ice by sublimation increases the axis ratios, a process also identified by Seligman & Laughlin (2020). Between entering the Solar system and the light curve observations the loss of mass from ‘Oumuamua increased its axis ratios from an unremarkable 2:1 to the extreme observed value of around 6:1.
In other words, we are dealing with a ‘flattening’ that occurred in our own system and not at the object’s place of origin. I suspect this flattening process is going to receive a thorough vetting in the community, key as it is to explaining a salient oddity about ‘Oumuamua.
And so we wind up with a theory that presents ‘Oumuamua as shown below, an unusual aspect ratio to be sure (and yes, reminiscent of what could be a lightsail), but Desch and Jackson think their theory of nitrogen ice matches every aspect of ‘Oumuamua’s behavior without the need for invoking alien technology. Desch comments:
“…it’s important in science not to jump to conclusions. It took two or three years to figure out a natural explanation — a chunk of nitrogen ice — that matches everything we know about ‘Oumuamua. That’s not that long in science, and far too soon to say we had exhausted all natural explanations.”
Image: This painting by William K. Hartmann, who is a senior scientist emeritus at the Planetary Science Institute in Tucson, Arizona, is based on a commission from Michael Belton and shows a concept of the ‘Oumuamua object as a pancake-shaped disk. Credit: William Hartmann.
Thus ‘Oumuamua, in the eyes of Desch and Jackson, might be considered a chunk of an exo-Pluto, which in itself opens the topic of studying interstellar objects for information about their parent systems. We’ve never observed an exo-Pluto before, so ‘Oumuamua may probe the surface composition of worlds like this.
Moreover, if our first identified interstellar interloper is made of nitrogen ice, then exo-Plutos must be common, although we’ll have to decide exactly how we want to define the term (and I suppose we can now invoke the ruckus about Pluto’s planetary status by asking whether exo-Plutos are actually to be referred to as ‘exo-dwarf planets’).
As the Vera Rubin Observatory/Large Synoptic Survey Telescope in Chile comes online, regular surveys of the southern sky will doubtless up the number of interstellar objects we can identify, helping us home in further on their composition and likely origin.
Image: Illustration of a plausible history for ‘Oumuamua: Origin in its parent system around 0.4 billion years ago; erosion by cosmic rays during its journey to the solar system; and passage through the solar system, including its closest approach to the Sun on Sept. 9, 2017, and its discovery in October 2017. At each point along its history, this illustration shows the predicted size of ‘Oumuamua, and the ratio between its longest and shortest dimensions. Credit: S. Selkirk/ASU.
The first paper is Jackson et al., “1I/’Oumuamua as an N2 ice fragment of an exo-Pluto surface: I. Size and Compositional Constraints,” Journal of Geophysical Research: Planets (16 March 2021). Abstract / Preprint. The second is Desch et al., “1I/’Oumuamua as an N2 ice fragment of an exo-Pluto surface II: Generation of N2 ice fragments and the origin of ‘Oumuamua,” Journal of Geophysical Research: Planets (16 March 2021). Abstract / Preprint.
Technosignatures and the Age of Civilizations
Given that we are just emerging as a spacefaring species, it seems reasonable to think that any civilizations we are able to detect will be considerably more advanced — in terms of technology, at least — than ourselves. But just how advanced can a civilization become before it does irreparable damage to itself and disappears? This question of longevity appears as a factor in the famous Drake Equation and continues to bedevil SETI speculation today.
In a paper in process at The Astronomical Journal, Amedeo Balbi (Università degli Studi di Roma “Tor Vergata”) and Milan Ćirković (Astronomical Observatory of Belgrade) explore the longevity question and create a technosignature classification scheme that takes it into account. Here we’re considering the kinds of civilization that might be detected and the most likely strategies for success in the technosignature hunt. The ambiguity in Drake’s factor L is embedded in its definition as the average length of a civilization’s communication phase.
Immediately we’re in shifting terrain, for in the early days of SETI, radio communication was the mode of choice, but even in the brief decades since Project Ozma, we’ve seen our own civilization drastically changing the radio signature it produces through new forms of connection. And as Balbi and Ćirković point out, the original L in Drake’s equation leaves open a rather significant matter: How do we treat the possibility of civilizations that have gone extinct?
These two authors have written before about what they call ‘temporal Copernicanism,’ which leads us to ask how the longevity of a civilization is affected by its location in our past or in our future. We are, after all, dealing with a galaxy undergoing relentless processes of astrophysical evolution. As we speculate, we have to question a value for L based on a civilization (our own) whose duration we cannot know. How can we know how far our own L extends into the future?
Image: Messier 107, a globular cluster around the disk of the Milky Way in the constellation Ophiuchus, is a reminder of the variety of stellar types and ages we find in our galaxy. What kind of technosignature might we be able to detect at a distance of about 20,000 light-years, and would ancient clusters like these in fact make reasonable targets for a search? Many factors go into our expectations as we formulate search strategies. This image was taken with the Wide Field Camera of Hubble’s Advanced Camera for Surveys. Credit: ESA/NASA.
Thinking about these matters always brings me back to Arthur C. Clarke’s 1956 novel The City and the Stars, set in the city of Diaspar a billion years from now. How do we wrap our heads around a civilization measured not just in millennia but in gigayears? Speculative as they are, I find a kind of magic in playing around with terms like Γ, cited here as the average rate of appearance of communicating civilizations (with L, as before, their average longevity), so that if we take Γ as constant in time, its value can be estimated as the total number of technosignatures over the history of the galaxy (N_tot) divided by the age of the galaxy (T_G). Thus Balbi and Ćirković cite the equation:
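In the notation just introduced, with N the number of technosignatures detectable at any given time:

$$ N \;=\; \Gamma\,L \;=\; N_{\rm tot}\,\frac{L}{T_G} $$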
From the paper:
It is apparent that the number of technosignatures that we can detect is a fraction of the total number that ever existed: the fraction is precisely L/T_G. Because T_G ≈ 10¹⁰ years, L/T_G is presumed to be generally small; any specific precondition imposed on the origination of technosignatures, like the necessity of terrestrial planets for biological evolution, will act to reduce the fraction. This is the quantitative argument that justifies one of the most widely cited assertions of classical SETI, i.e. that the chances of finding ETIs depend on the average longevity of technological civilizations. (In fact, it is well-known that Frank Drake himself used to equate N to L.)
The equation clarifies the idea that SETI depends upon the average longevity of technological cultures, but the authors point out that another way to look at the matter is this: L needs to be large, for we’re requiring a high number of technosignatures indeed to have any chance for detecting a single one. Spread out over time, many such signatures need to have existed for us to make a single detection, or at best a few, with our present level of technology.
And here is where Balbi and Ćirković take us away from the more conventional approach derived above. Is the number of detectable technosignatures, N, static over time? From the paper:
…both Γ and L are average quantities, and there is an implicit assumption that N is stationary over the history of the Galaxy. There are good reasons to believe that this is not the case. Of course, it is unrealistic to assume that Γ is constant with cosmic time. Even if we limit ourselves to the last ∼10 Gyr of existence of thin disk Pop I stars which are likely to harbour the predominant fraction of all possible habitats for intelligent species, their rate of emergence is likely to be very nonuniform. One obvious source of nonuniformity is the changing rate of emergence of planetary habitats, as first established by Lineweaver (2001) and subsequently elaborated by Behroozi & Peeples (2015), as well as by Zackrisson et al. (2016). This nonuniformity can be precisely quantified today and some contemporary astrobiological numerical simulations have taken it into account (Đošović et al. 2019).
We should assume, the authors argue, that the appearance of technosignatures varies with time. They are interested less in coming up with a figure for N — and again, this is defined in their terms (not Drake’s) as ‘the number of detectable technosignatures’ — than in spotlighting the most likely type of technosignature we can detect. Their classification scheme for technosignatures as filtered through the lens of longevity goes like this:
Type A: technosignatures that last for a duration comparable to the typical timescale of technological and cultural evolution on Earth, τ ∼ 10³ years
Type B: technosignatures that last for a duration comparable to the typical timescale of biological evolution of species on Earth, τ ∼ 10⁶ years
Type C: technosignatures that last for a duration comparable to the typical timescale of stellar and planetary evolution, τ ∼ 10⁹ years
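A minimal sketch of the scheme in code, with the cutoffs between bins being my own rough order-of-magnitude choices rather than anything the authors specify:

```python
def technosignature_class(duration_years: float) -> str:
    """Sort a technosignature into the Type A/B/C bins described above.
    The boundaries used here are rough order-of-magnitude cuts."""
    if duration_years < 1e5:
        return "Type A (~10^3 yr, cultural/technological timescale)"
    elif duration_years < 1e8:
        return "Type B (~10^6 yr, biological-evolution timescale)"
    else:
        return "Type C (~10^9 yr, stellar/planetary timescale)"

print(technosignature_class(2e3))   # e.g. leakage radio from a short-lived culture
print(technosignature_class(4e6))   # e.g. a derelict probe still intact
print(technosignature_class(2e9))   # e.g. a Dyson-like megastructure
```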
The scheme carries an interesting subtext: The longevity of technosignatures does not have to coincide with the longevity of the species that created the detectable technology. Here we’re at major variance from the L in Frank Drake’s equation, which had to do with the lifetime of a civilization that was capable of communicating. Balbi and ?irkovi? are tightly focused on the persistence not of civilizations but of artifacts. Notice that a technosignature search is likewise not limited to planetary systems — an interstellar probe could throw its own technosignature.
We might assume that technosignatures of long duration could only be produced by highly advanced civilizations capable of planetary engineering, say, but let’s not be too sure of ourselves on that score, for some technosignatures might be left behind by species well down on the Kardashev scale of civilizations. Consider Breakthrough Starshot, for example. Let’s push its ambitions back a bit and just say that perhaps within a century, we may be able to launch flocks of small sailcraft to nearby stars using some variation of its methods.
These would constitute a technosignature if detected by another civilization, as would remnant probes like Voyager and Pioneer, as would some forms of atmospheric pollution or simple space debris. A single civilization could readily produce different kinds of technosignatures over the course of its lifetime. As the authors note:
Our species has not yet produced Type A technosignatures, if we only consider the leakage of radio transmissions or the alteration of atmospheric composition by industrial activity; but its artifacts, such as the Voyager 1 and 2, Pioneer 10 and 11, and New Horizons probes, could in principle become type B or even C in the far future, even if our civilization should not survive that long. Similarly, a Type C technosignature can equally be produced by a very long-lived civilization, or by one that has gone extinct on a shorter time scale but has left behind persistent remnants, such as a beacon in a stable orbit or a Dyson-like megastructure.
Persistent remnants. I think of the battered, but more or less intact, Voyager 2 as it passes the red dwarf Ross 248 at about 111,000 AU some 40,000 years from now (Ross 248 will, in that era, be the closest star to the Sun). That’s a technosignature waiting to be found, one produced by a civilization low on the Kardashev scale, but it bears the same message, of a culture that explores space. I wonder what kind of a technosignature Clarke’s billion year old civilization in Diaspar would have thrown?
Whatever it might be, it would surely be more likely to be detected than our Voyager 2, a stray bit of flotsam among the stars. That said, I keep in mind what we learned from the TechnoClimes workshop — and Jim Benford’s continuing work on ‘artifact’ SETI — making the point that we can’t rule out a local artifact in our own system. And, of course, if Avi Loeb is correct, we may already have found one, though suitably ambiguous in its interpretation. Clearly, if we did detect technosignatures close to home, the implication would be that they are found widely in the galaxy, and that would dramatically change the nature of the hunt.
So the scope for technosignatures is wide, but drawing the lessons of this paper together, the authors find that the technosignature we are most likely to detect with present technological tools is a long-lived one, meaning in Balbi and Ćirković’s terms, one with a duration of at least 10⁶ years. Technosignatures younger than this may be detectable, but only if it turns out they are common, and thus relatively nearby and easier for us to find. Of course we can search for them, but the authors believe these searches are unlikely to pay off. Their thought:
This suggests that an anthropocentric approach to SETI is flawed: it is rational to expect that the kind of technosignatures we are most likely to get in contact with is wildly different, in terms of duration, from what has been produced over the course of human history. This conclusion strengthens the case for the hitherto downplayed hypothesis (which is not easily labeled as “optimistic” or “pessimistic”) that a significant fraction of detectable technosignatures in the Galaxy are products of extraterrestrial civilizations which are now extinct.
How to proceed? The authors’ focus on longevity leads them to conclude that our most likely targets may well be rare and they may flag extinct civilizations, but the value N that Balbi and ?irkovi? are talking about is different than classical SETI’s N, which needs a large value to ensure detection. It only takes one technosignature, and a few of the Type C signatures would be much more likely to be detected than a spectacularly high number of Type A signatures:
Dysonesque megastructures, interstellar probes, persistent beacons—as well as activities related to civilizations above Type 2 of the Kardashev scale, or to artificial intelligence—should be the preferred target for future searches. These technosignatures would not only be ‘weird’ when measured against our own bias, but could arguably be less common than short-lived ones. Such [a] conclusion deflates the emphasis on large N (and human-like technosignatures) that informed much of classical SETI’s literature.
If this sounds discouraging, it need not be. It simply tells us the kind of strategy that has the greatest chance for success:
…the supposed rarity of long-lived technosignatures should not be regarded, in itself, as a hindrance for the SETI enterprise: in fact, a few Type C technosignatures, over the course of the entire history of the Galaxy, would have much higher chance of being detected than a large number of Type A. Also, possible astrophysical mechanisms which could lead to a posteriori synchronization of shorter lived technosignatures should be investigated, to constrain the parameter space of this possibility, if nothing else.
Civilizations that appeared long ago and survived have conceivably found a way to persist, and therefore may still be active, but for detection purposes their existence now is less significant than what they may have left behind. Just how they grew to the point where they could begin the construction of detectable technosignatures is explored in the paper’s discussion of ‘phase-transition’ scenarios via a mathematical framework used to model longevity. “Achieving such form[s] of institutions and social structures might count as an advanced engineering feat in its own right,” as the authors note.
Technosignature work is young and constitutes a significant extension of the older SETI paradigm. Thus modeling how to proceed, as we saw both here and in the previous post on NASA’s TechnoClimes workshop, is the only path toward developing a search strategy that is both sound in its own right and also may have something to teach us about how our own civilization views its survival. The kinds of insight technosignature modeling could produce would take us well beyond the foolish notion of some early SETI critics that its only didactic function is as a form of religion, looking for salvation in the form of the gift of interstellar knowledge. To the contrary, the search may tell us much more about ourselves.
The paper is Balbi and Ćirković, “Longevity is the key factor in the search for technosignatures,” in process at The Astronomical Journal (preprint).
A Path Forward for Technosignature Searches
Héctor Socas-Navarro (Instituto de Astrofísica de Canarias) is lead author of a paper on technosignatures that commands attention. Drawing on work presented at the TechnoClimes 2020 virtual meeting, held under the auspices of NASA at the Blue Marble Space Institute of Science in Seattle, the paper pulls together a number of concepts for technosignature detection. Blue Marble’s Jacob Haqq-Misra is a co-author, as are James Benford (Microwave Sciences), Jason Wright (Pennsylvania State) and Ravi Kopparapu (NASA GSFC), all major figures in the field, but the paper also draws on the collected thinking of the TechnoClimes workshop participants.
We’ve already looked at a number of technosignature possibilities in these pages, so rather than simply listing possibilities again, let me look for commonalities that point toward a research agenda, something that NASA clearly had in mind for the TechnoClimes meeting. The first thing to say is that technosignature work is nicely embedded within more traditional areas of astronomy, sharing a commensal space with observations being acquired for other reasons. Thus the search through archival data will always be a path for potential discovery.
The Socas-Navarro paper, however, homes in on new projects and mission concepts that could themselves provide useful data for other areas of astronomy and astrophysics. A broad question is what kind of civilization we would be likely to detect if technosignature research succeeds. Only technologies much superior to our own could be detected with our current tools. Recent work on a statistical evaluation of the lifespan of technological civilizations points to the same conclusion: First detection would almost certainly be of a high-order technology. Would it also be a signature of a civilization that still exists? As we’ll see in the next post, there are reasons for thinking this will not be the case.
Image: Artistic recreation of a hypothetical exoplanet with artificial lights on the night side. Credit: Rafael Luis Méndez Peña/Sciworthy.com.
This is a useful paper for those looking for an overview of the technosignature space, and it also points to the viability of new searches on older datasets as well as data we can expect from already scheduled missions and new instrumentation on the ground. Thus exoplanet observations offer obvious opportunities for detecting unusual phenomena as a byproduct of their work. The workshop suggested taking advantage of this fact by modeling complex light curves with technosignatures in mind, by photometric and spectroscopic searches for night-time illumination, and by developing new algorithms to analyze optimal communications pathways between exoplanets in a given volume of interstellar space:
A region of space with the right distribution of suitable worlds to become a communication hub may be a promising place to search. TS [technosignatures] might be more abundant there, just like Earth TS are more abundant wherever there is a high density of human population, which in turn tends to clutter in the form [of] network structures.
Other methods piggyback on existing exoplanet campaigns. Observing planetary atmospheres, for instance, is useful because it ties in to existing biosignature detection efforts. Future projects on missions observing in the mid-infrared like the Large Ultraviolet Optical Infrared Surveyor (LUVOIR) could explore this space. See Technosignatures: Looking to Planetary Atmospheres, for example, for Ravi Kopparapu’s work on nitrogen dioxide (NO2) as an industrial byproduct, a kind of search we have only begun to explore. Back to the paper:
A nice advantage of this method of detecting atmospheric technosignatures is that the same instruments and telescopes can be used to characterize atmospheres of exoplanets. Our view of habitability and technosignatures is based on our own Earth’s evolutionary history. There are innumerable examples in the history of science where new phenomena were discovered serendipitously. By having a dedicated mission to look for atmospheric technosignatures that also covers exoplanet science, we can increase our chances of detecting extraterrestrial technology on an unexpected exoplanet, or may discover a spectral signature that we usually do not associate with technology. The only way to know is to search.
Where else might we push with new observational and mission concepts? A 3-meter space telescope performing an all-sky survey with high point source sensitivity in the infrared could provide benefits to astrophysics as well as being sensitive to Dyson spheres at great distances. The paper argues for a dedicated effort to develop fast infrared detectors capable of nanosecond timing to enable a space mission searching the entire infrared sky. Such detectors would be sensitive to transients like pulsars and fast radio bursts as well as broadband pulses.
The paper also makes the case for a radio observatory on the far side of the Moon. Here we are all but completely free from contamination from radio interference by our own species, although even now the matter is complicated by satellites like China’s Queqiao, which has been at the Earth-Moon L2 Lagrange point for almost three years. Issues of radio protection of the far side will grow in importance as we try to protect this resource, where Earth radio waves are attenuated by 10 orders of magnitude or more. Again, we are dealing with a future facility that would also be of inestimable value for conventional astronomy and lunar exploration.
Close encounters with other stars (which occur as another star penetrates the Sun’s Oort Cloud every 10⁵ years or so) highlight the possibility that extraterrestrial civilizations, having noted biosignatures from Earth, could have placed probes in our system. Few searches for such artifacts have been conducted, but as Jim Benford has discussed in these pages (see Looking for Lurkers: A New Way to do SETI) a host of objects could be easily examined for artifacts. Few have been studied in depth, but Benford has made the case that both the surface of the Moon and the Earth Trojans can now be studied at an unprecedented level of detail.
We already have monthly mapping of the Moon at high resolution via the Lunar Reconnaissance Orbiter (LRO) with a resolution of 100 m/pixel (LRO can also work at a higher resolution mode of 0.5 m/pixel, but this mode has not been widely used). Future exploration might include an orbiter working in ultra high-resolution at the ~10 cm per pixel level. The workshop also discussed high-resolution mapping of Mars and, perhaps, Mercury and larger asteroids coupled with machine learning techniques identifying anomalies.
Not surprisingly, given the high public visibility of objects like ‘Oumuamua or 2I/Borisov, a ready-to-launch intercept mission also comes into consideration here, planned in advance to study future interstellar arrivals. Other possibilities for pushing the technosignature envelope include an asteroid polarimetry mission studying either main belt asteroids or the Jupiter Trojans, gathering information that would be useful for our understanding of small objects with a potential for impact on the Earth. The Jupiter mission could probe for natural and possible artificial objects that might have wound up being ensnared over time in Jupiter’s gravitational well. The asteroid mission would produce a statistical description of small objects in solar orbit. The paper describes it this way:
A telescope similar to Kepler would be sensitive to objects of 10 m up to a distance of 0.02 AU (assuming a high albedo of 0.8) or 0.01 AU for typical asteroid albedos. Extrapolating the current knowledge of asteroid size distribution, there should be some 250,000 asteroids of 10 m in the radius of 0.02 AU accessible to such [a] telescope in the asteroid belt. The mission could be designed with an elliptical orbit having the perihelion near the Earth’s orbit and the aphelion in the asteroid belt. Under these conditions it would regularly dive into a different region of the belt, probing a different space in every orbit.
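The two distances quoted there are consistent with a simple scaling: if reflected flux falls off as albedo over the square of the distance to the telescope, the limiting range for a fixed-size object goes as the square root of the albedo. A back-of-envelope sketch (the scaling assumption is mine, not the paper’s full calculation):

```python
import math

def detection_range_au(albedo, ref_albedo=0.8, ref_range_au=0.02):
    """Scale the quoted detection range for a 10 m object with surface albedo,
    assuming reflected flux ~ albedo / distance^2, hence range ~ sqrt(albedo)."""
    return ref_range_au * math.sqrt(albedo / ref_albedo)

for p in (0.8, 0.2, 0.05):  # bright icy surface, typical asteroid, very dark object
    print(f"albedo {p:.2f}: limiting range ~{detection_range_au(p):.3f} AU")
```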
Such are some of the ways we can extend the search for technosignatures while supporting existing astronomical and astrophysical work. The paper breaks new ground in introducing a framework for future work on the different types of technosignatures, defining what it calls the ‘ichnoscale’ and analyzing it in relation to the number of targets and the persistence of a possible signal. The ichnoscale parameter is “the relative size scale of a given TS [technosignature] in units of the same TS produced by current Earth technology.”
We’re only beginning to map out a path forward for technosignature investigation, but the authors believe that given advances in exoplanet research, astrobiology and astrophysics, we are at the right place to inject new energy into the attempt. Thus what the community is trying to do is to learn the best avenues for proceeding while developing a framework to advance the effort by quantifying targets and potential signals. Along the way, we may well discover new astrophysical phenomena as a byproduct.
I’m particularly interested in the thorny question of how long technological civilizations can be expected to live, and am looking into a new paper from Amedeo Balbi and Milan Ćirković on the matter. I’ll be exploring some thoughts from this paper in the next entry.
The paper for today is Héctor Socas-Navarro et al., “Concepts for future missions to search for technosignatures,” Acta Astronautica Volume 182 (May 2021), pp. 446-453 (abstract / preprint).
FTL: Thoughts on a New Paper by Erik Lentz
I see that Erik Lentz (Göttingen University) has just begun a personal blog, something that may begin to attract attention given that Dr. Lentz has offered up a new paper on faster than light travel. At the moment, the blog is bare-bones, listing only the paper itself (citation below) and an upcoming online talk that may be of interest. Here’s what the Lentz blog has on this:
Upcoming online talk to be given on 18 March 2021 at 3pm Eastern Standard Time for the Science Speaker Series at the Jim and Linda Lee Planetarium: https://youtu.be/6O8ji46VBK0
I checked the URL and found the page with a countdown timer, so I assume the event is publicly accessible. I would imagine it will draw a number of curious scientists and lay-people.
On the subject of faster than light travel, much of the work in the journals has evolved from Miguel Alcubierre’s now well known paper “The Warp Drive: Hyper-fast travel within general relativity,” which presented the idea of a ‘bubble’ of spacetime within which a volume of flat space could exist. In other words, it might be possible to enclose a spacecraft within such a bubble. While there is a physical restriction on objects within spacetime moving faster than the speed of light, spacetime itself is theoretically capable of expansion without limit — this is essentially the notion of ‘inflation’ that drives most current thinking about the earliest moments of the universe.
Alcubierre’s paper ran in May 1994 in the prestigious journal Classical and Quantum Gravity, a venue whose demanding standards of peer review and acceptance give papers published there high credibility, which is why they rightfully attract attention. I had more or less overlooked the new paper by Dr. Lentz until I realized that it was published in the same journal, after which I began to take notice.
This does not mean, of course, that either the Alcubierre ‘warp drive’ concept or the much different ideas of Erik Lentz can ever be engineered, but it does offer a great deal of interest from the standpoint of the mathematics of warped spacetime. After the Alcubierre paper, much of the ongoing work has been involved in exploring how negative energy operates, for ‘negative energy density’ is exotic and vast amounts would be required to form the needed ‘bubble’ of spacetime. The Lentz paper does away with negative energy. I’m hearing it described as an idea more in conformance with conventional physics, though that may also need clarification.
The essential notion put forward by Dr. Lentz is that there are configurations of spacetime curvature that can be organized as ‘solitons,’ solutions he deems physically viable and thus not dependent on negative energy at all. Here we’re already in deep water. A soliton, as I have been learning, is a wave that can retain its shape and move at constant velocity. That such curiosities are within the realm of physical possibility is made clear by the origin of the study of solitons. They actually go back to an observation by British engineer John S. Russell in 1834. In a famous and oft-quoted passage delivered ten years later to the British Association for the Advancement of Science, Russell had this to say about what he called a ‘wave of translation’:
I was observing the motion of a boat which was rapidly drawn along a narrow channel by a pair of horses, when the boat suddenly stopped – not so the mass of water in the channel which it had put in motion; it accumulated round the prow of the vessel in a state of violent agitation, then suddenly leaving it behind, rolled forward with great velocity, assuming the form of a large solitary elevation, a rounded, smooth and well-defined heap of water, which continued its course along the channel apparently without change of form or diminution of speed. I followed it on horseback, and overtook it still rolling on at a rate of some eight or nine miles an hour, preserving its original figure some thirty feet long and a foot to a foot and a half in height. Its height gradually diminished, and after a chase of one or two miles I lost it in the windings of the channel. Such, in the month of August 1834, was my first chance interview with that singular and beautiful phenomenon which I have called the Wave of Translation.
Thus was born the study of solitons, which now extends into nuclear physics, optics and other fields, now including exotic propulsion. Notice that what Russell describes is a wave that is stable and can travel. His use of the word ‘translation’ means that this is not a wave made up of the same water traveling the length of the channel he was observing, but rather a wave that moves through the medium; the water itself is only displaced locally as the wave passes. We can think of the wave of translation (or at least I’ve seen it referred to this way) as a ‘wave packet’ that can maintain its shape, as it did in Scotland’s Union Canal for Russell.
I turned to Hilborn and Cross’ Chaos and Nonlinear Dynamics (Oxford University Press, 2000) to see solitons described as ‘nonlinear wave phenomena.’ Thus:
A soliton is a spatially localized wave disturbance that can propagate over long distances without changing its shape. In brief, many nonlinear spatial modes become synchronized to produce a stable localized disturbance.
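For a concrete example of such a nonlinear wave, the single-soliton solution of the Korteweg-de Vries equation, which governs shallow-water waves of the kind Russell chased, is a localized hump that travels at constant speed c without changing shape (and taller solitons travel faster):

$$ u_t + 6\,u\,u_x + u_{xxx} = 0, \qquad u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left[\frac{\sqrt{c}}{2}\,(x - c\,t)\right] $$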
Solitons turn out to be remarkably stable. A great deal of mathematics has been done since as soliton concepts evolved, all much beyond my pay grade. I looked again at Dr. Lentz’ website to get a notion of what he was proposing in his own words, because I find it hard to make the considerable jump from the early observations of Russell to today’s understanding of solitons. Here’s Lentz with a vest-pocket description of faster than light travel that does not violate Einsteinian relativity:
Hyper-fast (as in faster than light) solitons within modern theories of gravity have been a topic of energetic speculation for the past three decades. One of the most prominent critiques of compact mechanisms of superluminal motion within general relativity is that the geometry must largely be sourced from a form of negative energy density, though there are no such known macroscopic sources in particle physics. I was recently able [to] disprove this position by constructing a new class of hyper-fast soliton solutions within general relativity that are sourced purely from positive energy densities, thus removing the need for exotic negative-energy-density sources. This is made possible through considering hyperbolic relations between components of the space-time metric’s shift vector. Further, these solutions are sourceable by a classical electronic plasma, placing superluminal phenomena into the purview of known physics. This is a very exciting breakthrough that I hope to have more [to] report on soon.
I take this to mean that there are mathematical solutions for spacetime curvature that use solitons as the mode of organization. Alcubierre’s ‘warp bubble’ becomes, in soliton mode, a wave that maintains its shape and moves at constant velocity. The key here, Lentz believes, is that this is a way of altering spacetime geometry without the use of exotic negative energy. Moreover, Lentz’ equations evidently show that tidal forces within the bubble can be minimized. The passage of time inside the soliton can be adjusted to match the time outside the bubble.
Image: Artistic impression of different spacecraft designs considering theoretical shapes of different kinds of “warp bubbles.” Credit: E Lentz.
We would still need enormous amounts of energy, but we are dealing with the kind of energy we understand rather than the far more amorphous ‘negative energy.’ Here’s Lentz again:
“The energy required for this drive travelling at light speed encompassing a spacecraft of 100 meters in radius is on the order of hundreds of times of the mass of the planet Jupiter. The energy savings would need to be drastic, of approximately 30 orders of magnitude to be in range of modern nuclear fission reactors… Fortunately, several energy-saving mechanisms have been proposed in earlier research that can potentially lower the energy required by nearly 60 orders of magnitude.”
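Those numbers hang together on a back-of-envelope check; the sketch below is my own arithmetic, reading ‘hundreds of times the mass of Jupiter’ as a rest-mass energy equivalent:

```python
# Rough check of the energy figures quoted above (my own arithmetic).
M_JUPITER = 1.898e27          # kg
C = 2.998e8                   # speed of light, m/s

drive_energy = 300 * M_JUPITER * C**2        # "hundreds" of Jupiter masses as E = mc^2
after_savings = drive_energy / 1e30          # the quoted ~30 orders of magnitude reduction
gigawatt_year = 1e9 * 3.15e7                 # output of a 1 GW fission plant over a year, J

print(f"drive energy:                {drive_energy:.1e} J")
print(f"after 30 orders of savings:  {after_savings:.1e} J")
print(f"one gigawatt-year:           {gigawatt_year:.1e} J")
```

Knocking 30 orders of magnitude off a few times 10^46 joules does indeed land in the range of a large fission plant’s annual output, which is the comparison Lentz is making.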
Such energy savings methods would be prodigious indeed and it is to these that Dr. Lentz apparently turns next. The paper is Lentz, “Breaking the Warp Barrier: Hyper-Fast Solitons in Einstein-Maxwell-Plasma Theory,” Classical and Quantum Gravity Vol. 38, No. 7 (March, 2021). Abstract. We are in very deep mathematical waters here, so all I want to do is point to the paper and urge those interested to take in Dr. Lentz’ talk on the 18th.
A Useful Nearby Super-Earth
Gliese 486b is, in the words of astronomer Ben Montet, “the type of planet we’ll be studying for the next 20 years.” Montet (University of New South Wales) is excited about this hot super-Earth because it’s the closest such planet we’ve found to our own Solar System, at about 26 light years away. That has implications for studying its atmosphere, if it has one, and by extension sharpening our techniques for atmospheric analysis of other nearby worlds. The goal we’re moving toward is being able to examine smaller rocky planets for biosignatures.
But we’re not there yet, and what we have in Gliese 486b is an exoplanet that has now been identified as a prime target for future space- and ground-based instruments, one that, given its proximity, is an ideal next step to push our methods forward. The paper on this work shows that two techniques can be deployed here, the first being transmission spectroscopy, when this transiting world passes in front of its star and starlight filters through the atmosphere.
So-called emission spectroscopy happens when the planet orbits around to the other side of the star, making parts of the illuminated surface visible (think phases of the Moon as an analogy). Astronomers can deploy spectrographic tools in both methods to work out the chemical composition of the atmosphere, and according to Montet, Gliese 486b is the best single planet yet found for emission spectroscopy out of all the rocky planets we know. Moreover, says the astronomer, it’s the second best for transmission spectroscopy.
I asked Dr. Montet about this, wondering about the absolute best planet for transmission spectroscopy. His reply:
The best planet for transmission spectroscopy is TRAPPIST-1 b. Our new planet is #2, and the third best is L98-59 d, a planet discovered by TESS in 2019. We’re quantifying relative goodness using the Transmission Spectroscopy Metric from Eliza Kempton’s work in 2018.
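For those curious, the metric Montet mentions (Kempton et al. 2018) combines planet radius, mass and temperature with the host star’s size and J-band brightness. A minimal sketch follows; the planetary numbers are those quoted later in this post, while the stellar radius and J magnitude are rough placeholder values I have supplied for illustration, not figures from the discovery paper:

```python
def transmission_spectroscopy_metric(r_p, m_p, t_eq, r_star, j_mag, scale=0.19):
    """Transmission Spectroscopy Metric of Kempton et al. (2018).
    r_p in Earth radii, m_p in Earth masses, t_eq in kelvin, r_star in solar
    radii, j_mag the host star's apparent J magnitude; scale ~0.19 applies to
    planets smaller than about 1.5 Earth radii."""
    return scale * (r_p**3 * t_eq) / (m_p * r_star**2) * 10 ** (-j_mag / 5)

# Gliese 486b: planet values from this post; stellar values are illustrative placeholders.
tsm = transmission_spectroscopy_metric(r_p=1.31, m_p=2.8, t_eq=700,
                                       r_star=0.33, j_mag=7.2)
print(f"TSM ~ {tsm:.0f}")
```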
Image: The graph illustrates the orbit of a transiting rocky exoplanet like Gliese 486b around its host star. During transit, the planet obscures the stellar disk. Simultaneously, a tiny portion of the starlight passes through the planet’s atmospheric layer. While Gliese 486b continues to orbit, parts of the illuminated hemisphere become visible like lunar phases until the planet vanishes behind the star. Credit: © MPIA graphics department.
A surface temperature of 430 degrees Celsius makes Gliese 486b a nightmarish place, perhaps one with rivers of lava, and its gravity is 70 percent stronger than Earth’s. The planet orbits its star, an M-dwarf, every 36 hours. We can only imagine what an orbit this tight means for the planet’s exposure to flares and coronal mass ejections, and it’s likely that the atmosphere itself could be threatened. What we find here in future studies will help us calibrate atmospheric survival and composition on planets orbiting red dwarfs.
The work on Gliese 486b comes out of the CARMENES Consortium, an effort involving over 200 scientists and engineers from eleven institutions in Spain and Germany who designed the spectrograph now mounted on the 3.5 meter telescope at Calar Alto in southern Spain. The purpose is to monitor 350 M-dwarf stars in search of low-mass planets. The word CARMENES is actually an acronym, and one that takes us into epic territory in terms of length: Calar Alto high-Resolution search for M dwarfs with Exoearths with Near-infrared and optical Échelle Spectrographs.
So far, what we know about Gliese 486b — using transit photometry and radial velocity spectroscopic data from a variety of Earth-based instruments as well as space-based TESS — is that it is about 2.8 times as massive as the Earth and about 30 percent larger. Calculating from mass and radius measurements, the astronomers on the team led by Trifon Trifonov (Max Planck Institute for Astronomy) find a mean density that indicates a rocky world with a metallic core, and as mentioned above, a gravitational pull about 70 percent stronger than Earth’s.
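A quick sketch of how those two measurements translate into bulk density and surface gravity, using the rounded values quoted above:

```python
import math

M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2

def density_and_gravity(mass_earths, radius_earths):
    """Mean density (g/cm^3) and surface gravity (in Earth gravities)
    for a planet of the given mass and radius in Earth units."""
    m = mass_earths * M_EARTH
    r = radius_earths * R_EARTH
    density = m / ((4.0 / 3.0) * math.pi * r**3) / 1000.0   # kg/m^3 -> g/cm^3
    gravity = (G * m / r**2) / 9.81
    return density, gravity

rho, g = density_and_gravity(2.8, 1.31)   # rounded values for Gliese 486b
print(f"mean density ~{rho:.1f} g/cm^3, surface gravity ~{g:.2f} g")
```

The density comes out close to 7 g/cm^3, consistent with the rocky, metal-cored composition the team describes, and the surface gravity lands in the neighborhood of the quoted 70 percent enhancement.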
But is this planet’s tight orbital distance (2.5 million kilometers) too close for any atmosphere to survive? The paper makes it clear that this is possible but by no means certain:
With a radius of 1.31 R⊕, Gliese 486b is located well below the radius range of 1.4 to 1.8 R⊕, under which planets are expected to have lost their primordial hydrogen-helium atmospheres owing to photoevaporation processes. It remains unknown how stellar irradiation and planet surface gravity affect the formation and retention of secondary atmospheres.
Which makes this an interesting test case, because the numbers are provocative:
Planets with T_eq > 880 K, such as 55 Cancri e, are expected to have molten (lava) surfaces and no atmospheres, except for vaporized rock. Gliese 486 b is not hot enough to be a lava world, but its temperature of ~700 K makes it suitable for emission spectroscopy and phase curve studies in search of an atmosphere. Our orbital model constrains the secondary eclipse time to within 13 min (at 1σ uncertainty), which is necessary for efficient scheduling of observations. Compared with other known nearby rocky planets around M dwarfs, Gliese 486 b has a shorter orbital period and correspondingly higher equilibrium temperature of ~700 K and orbits a brighter, cooler, and less active stellar host.
Image: The diagram provides an estimate of the interior compositions of selected exoplanets based on their masses and radii in Earth units. The red marker represents Gliese 486b, and orange symbols depict planets around cool stars like Gliese 486. Grey dots show planets hosted by hotter stars. The coloured curves indicate the theoretical mass-radius relationships for pure water at 700 Kelvin (blue), for the mineral enstatite (orange), for the Earth (green), and pure iron (red). For comparison, the diagram also highlights Venus and the Earth. Credit: Trifonov et al./MPIA graphics department.
We’re going to be learning a lot more about Gliese 486b as the effort to investigate it continues. How well rocky planets retain their atmospheres under extreme conditions will help us understand possible atmospheric processes going on in their stars’ presumably more clement habitable zones. Given their ubiquity, red dwarfs could be interesting places to look for life, but as this planet shows us, that investigation is in its early stages. For now, hot super-Earths are the best way to proceed.
The paper is Trifonov et al., “A nearby transiting rocky exoplanet that is suitable for atmospheric investigation,” Science Vol. 371, Issue 6533 (5 March 2021), pp. 1038-1041 (abstract).
Delivery Mechanism? Comet Catalina Shows Abundance of Carbon
Were the rocky worlds of the inner Solar System depleted in carbon as they formed (the so-called ‘carbon deficit problem’)? There is evidence for a system-wide carbon gradient in that era, which makes for interesting interactions between our Sun’s habitable zone and the far reaches of the system, for as the planets gradually cooled, the carbon so necessary for life as we know it would have been available only far from the Sun.
How much of a factor were early comets in bringing carbon into the inner system? This question underlies new work by Charles Woodward and colleagues. Woodward (University of Minnesota Twin Cities / Minnesota Institute of Astrophysics) focuses on Comet Catalina, which passed through the inner system in early 2016. He sees carbon in the context of life:
“Carbon is key to learning about the origins of life. We’re still not sure if Earth could have trapped enough carbon on its own during its formation, so carbon-rich comets could have been an important source delivering this essential element that led to life as we know it.”
Image: Illustration of a comet from the Oort Cloud as it passes through the inner Solar System with dust and gas evaporating into its tail. SOFIA’s observations of Comet Catalina reveal that it is carbon-rich, suggesting that comets delivered carbon to the terrestrial planets like Earth and Mars as they formed in the early system. Credit: NASA/SOFIA/ Lynette Cook.
Let’s zoom in on this a little more closely. Volatile ices of water, carbon monoxide and carbon dioxide are found mixing with dust grains in the outer system, an indication that the young Solar System beyond the snowline was, in the authors’ words, “not entirely ‘primordial’ but was ‘polluted’ with the processed materials from the inner disk, the ‘hot nebular product.'” Or to slip the metaphor slightly, we can say that comets were salted with materials that were originally produced at higher temperatures. Comets can offer a window into this process.
The work is anything but straightforward, for although we’ve learned a lot through missions like Giotto, Rosetta/Philae and Deep Impact (including, of course, abundant telescope observations from Earth and a sample return mission called Stardust), the interplanetary dust particles we’ve been able to analyze from comets 81P/Wild 2 and 26P/Grigg-Skjellerup differ considerably. The paper explains:
The former contains material processed at high temperature (Zolensky et al. 2006), while the latter is very “primitive” (Busemann et al. 2009). For these reasons, it is necessary to determine as best as we can the properties of dust grains from a large sample of comets using remote techniques (Cochran et al. 2015). These include observations of both the thermal (spectrophotometric) and scattered light (spectrophotometric and polarimetric). The former technique provides our most direct link to the composition (mineral content) of the grains.
The research team drew on data from the Stratospheric Observatory for Infrared Astronomy (SOFIA), a Boeing 747 aircraft carrying a 2.7-meter reflecting telescope with an effective diameter of 2.5 meters. At altitude (SOFIA generally operates between 38,000 and 45,000 feet), the observatory is above 99 percent of Earth’s atmosphere, which can block infrared wavelengths. SOFIA data show Catalina as a carbon-rich object.
The paper points out that carbon dominates as well in other comets we’ve seen, both those in closer orbits (103P/Hartley 2) and Oort Cloud comets like C/2007 N3 and C/2001 HT50. It also turns out that dusty material from comet 67P/Churyumov-Gerasimenko was rich in carbon, although the authors note that comets can show changes in their silicate-to-carbon ratio, sometimes even during the course of a single night’s observations. The paper adds:
A dark refractory carbonaceous material darkens and reddens the surface of the nucleus of 67P/Churyumov-Gerasimenko. Comet C/2013 US10 (Catalina) is carbon rich. Analysis of comet C/2013 US10 (Catalina)’s grain composition and observed infrared spectral features compared to interplanetary dust particles, chondritic materials, and Stardust samples suggest that the dark carbonaceous material is well represented by the optical properties of amorphous carbon. We argue that this dark material is endemic to comets.
All this suggests that carbon delivered by comets is a part of the evolution of the early Solar System. Each carbon-rich comet we study has implications for how life may have been spurred by impacts, making the investigation of carbon-rich Oort Cloud comets a continuing priority for SOFIA, which can be deployed quickly when comets are found entering the inner system.
The paper is Woodward et al, “The Coma Dust of Comet C/2013 US10 (Catalina): A Window into Carbon in the Solar System,” The Planetary Science Journal (2021). Abstract / Full Text.