Comet Impact Enables Probe of Jupiter’s Winds

Scientists at the European Southern Observatory are describing newly observed wind processes on Jupiter as “a unique meteorological beast.” I like the phrase and can see its application to the 1,450-kilometer-per-hour jets they’ve uncovered near Jupiter’s poles. Just how they made this detection is fascinating in its own right, since they drew on a spectacular natural event, the 1994 collision of comet Shoemaker-Levy 9 with the planet, to deduce current conditions. The molecules produced in that impact are the lever that moves the investigation, which is headed by Thibault Cavalié (Laboratoire d’Astrophysique de Bordeaux).

Image: This image shows an artist’s impression of winds in Jupiter’s stratosphere near the planet’s south pole, with the blue lines representing wind speeds. These lines are superimposed on a real image of Jupiter, taken by the JunoCam imager aboard NASA’s Juno spacecraft. Jupiter’s famous bands of clouds are located in the lower atmosphere, where winds have previously been measured. But tracking winds right above this atmospheric layer, in the stratosphere, is much harder since no clouds exist there. By analysing the aftermath of a comet collision from the 1990s and using the ALMA telescope, in which ESO is a partner, researchers have been able to reveal incredibly powerful stratospheric winds, with speeds of up to 1450 kilometres an hour, near Jupiter’s poles. Credit: ESO/L. Calçada & NASA/JPL-Caltech/SwRI/MSSS.

Thus we use an astronomical event to reveal information about another, otherwise unrelated system. And a good thing the Atacama Large Millimeter/submillimeter Array (ALMA) made this possible, because directly measuring winds in Jupiter’s stratosphere has been impossible up to now. The stratosphere is cloud free, eliminating cloud-tracking as an option.

But the Shoemaker-Levy 9 impacts produced hydrogen cyanide (HCN) that the scientists have tracked as it moves with the system of stratospheric jets on Jupiter. Similar to Earth’s jet streams, these narrow bands of winds pack quite a punch. The highest wind speeds occur under the aurorae near Jupiter’s poles, more than twice the maximum winds found in the planet’s Great Red Spot. Co-author Bilal Benmahi, also of the Laboratoire d’Astrophysique de Bordeaux, notes that “these jets could behave like a giant vortex with a diameter of up to four times that of Earth, and some 900 kilometres in height.” A unique meteorological beast indeed.

Image: This image, taken with the MPG/ESO 2.2-metre telescope and the IRAC instrument, shows comet Shoemaker-Levy 9 impacting Jupiter in July 1994. Credit: ESO.

The strong winds near Jupiter’s poles were previously thought to exist hundreds of kilometers above the stratosphere, and their existence this deep in the atmosphere is the major surprise delivered by the paper, which has just been published in Astronomy & Astrophysics. The winds can be traced by measuring the Doppler shift in the ALMA data, which records the minute changes in the frequency of radiation emitted by the HCN molecules as they are carried in the flow. Strong stratospheric winds reaching 600 kilometers per hour likewise occur around Jupiter’s equator, but the polar winds clearly take the prize in a pattern of atmospheric circulation that turns out to be considerably more complex than we’ve ever realized.
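To make the method concrete, here is a minimal sketch (my own illustration, not the team’s ALMA pipeline) of how a line-of-sight wind speed falls out of a measured frequency shift, using the HCN J=4-3 line near 354.5 GHz as an example:

```python
# Minimal sketch: line-of-sight wind speed from the Doppler shift of an HCN
# emission line (illustrative only, not the team's actual ALMA pipeline).
C = 299_792_458.0          # speed of light, m/s
NU_REST = 354.5055e9       # HCN J=4-3 rest frequency in Hz (example line)

def radial_velocity_kmh(nu_obs_hz: float) -> float:
    """Line-of-sight speed in km/h; positive means gas moving toward us."""
    delta_nu = nu_obs_hz - NU_REST            # blueshift -> positive
    return (C * delta_nu / NU_REST) * 3.6     # m/s -> km/h

# A 1450 km/h jet shifts the 354.5 GHz line by only about half a megahertz:
shift_hz = NU_REST * (1450.0 / 3.6) / C
print(f"expected shift:  {shift_hz / 1e3:.0f} kHz")
print(f"recovered speed: {radial_velocity_kmh(NU_REST + shift_hz):.0f} km/h")
```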

Image: Jupiter taken in infrared light on the night of 17 August 2008 with the Multi-Conjugate Adaptive Optics Demonstrator (MAD) prototype instrument mounted on ESO’s Very Large Telescope. This false-colour photo is the combination of a series of images taken over a time span of about 20 minutes, through three different filters (2, 2.14, and 2.16 microns). The image sharpness obtained is about 90 milli-arcseconds across the whole planetary disc, a record for similar images taken from the ground; this corresponds to seeing details about 300 km wide on the surface of the giant planet. The Great Red Spot is not visible in this image as it was on the other side of the planet during the observations. The observations were done at infrared wavelengths where absorption due to hydrogen and methane is strong. This explains why the colours are different from how we usually see Jupiter in visible light. This absorption means that light can be reflected back only from high-altitude hazes, and not from deeper clouds. These hazes lie in the very stable upper part of Jupiter’s troposphere, where pressures are between 0.15 and 0.3 bar. Mixing is weak within this stable region, so tiny haze particles can survive for days to years, depending on their size and fall speed. Additionally, near the planet’s poles, a higher stratospheric haze (light blue regions) is generated by interactions with particles trapped in Jupiter’s intense magnetic field. Credit: ESO/F. Marchis, M. Wong, E. Marchetti, P. Amico, S. Tordo.

The paper is Cavalié et al., “First direct measurement of auroral and equatorial jets in the stratosphere of Jupiter,” Astronomy & Astrophysics Vol. 647, L8 (March 2021). Abstract / Full Text.


Thoughts on Acceleration, Nitrogen Ice & the Local Standard of Rest

I’ve used the discovery of ‘Oumuamua as a learning opportunity. I knew nothing about the Local Standard of Rest (LSR) when the analysis of the object began, but soon learned that it describes the mean motion of material in the Milky Way in the neighborhood of the Sun. The Sun moves clockwise as viewed from galactic north, with an orbital speed that has been measured, through interferometric techniques, at 255.2 kilometers per second, give or take 5.1 km/s. Invoking the LSR in this connection calls for a quote from Eric Mamajek (JPL/Caltech) in his paper “Kinematics of the Interstellar Vagabond 1I/’Oumuamua (A/2017 U1)” (abstract here):

‘Oumuamua’s velocity is within 5 km/s of the median Galactic velocity of the stars in the solar neighborhood (<25 pc), and within 2 km/s of the mean velocity of the local M dwarfs. Its velocity appears to be statistically “too” typical for a body whose velocity was drawn from the Galactic velocity distribution of the local stars (i.e. less than 1 in 500 field stars in the solar neighborhood would have a velocity so close to the median UVW velocity). In the Local Standard of Rest frame (circular Galactic motion), ‘Oumuamua is remarkable for showing both negligible radial (U) and vertical (W) motion, while having a slightly sub-Keplerian circular velocity (V; by ~11 km/s). These calculations strengthen the interpretation that A/2017 U1 has a distant extrasolar origin, but not among the very nearest stars. Any formation mechanism for this interstellar asteroid should account for the coincidence of ‘Oumuamua’s velocity being so close to the LSR.

Below is what ‘Oumuamua really looks like. As opposed to the artists’ conceptions we’ve all seen, each of which makes its own set of assumptions based on current studies, this is what we have on this object visually. As you can see, it isn’t much to work with.

Image: This very deep combined image shows the interstellar object ‘Oumuamua at the center of the image. It is surrounded by the trails of faint stars that are smeared as the telescopes tracked the moving object. Credit: ESO/K. Meech et al.

Harvard’s Avi Loeb has likewise noted that an interstellar object should have inherited the motion of its birth star, and thus would not have been likely to be found at the Local Standard of Rest. Now, in a new piece for Scientific American called Was the Interstellar Object ‘Oumuamua a Nitrogen Iceberg?, Loeb references the paper by Alan Jackson and Steven Desch that argues for ‘Oumuamua being a nitrogen ‘shard’ from an outer system object, the equivalent of Pluto in our own Solar System. We looked at this paper on Friday.

Loeb points out that a nitrogen iceberg would be an oddity given that we’ve never seen one among objects found in our own Oort Cloud, that agglomeration of perhaps trillions of comets extending halfway to the Alpha Centauri triple system. Yet if the first interstellar object detected turns out to be made of nitrogen, the implication is that objects like it are common, at least in some other stellar systems. Fortunately, this is something we’ll be able to study in the near future as facilities like the Rubin Observatory’s LSST become available. The catalog of interstellar objects should grow quickly, allowing the nitrogen-ice hypothesis to be tested.

It always helps to know what to look for, but then, ‘Oumuamua has surprised us on several fronts. Loeb also points out that about a tenth of the object’s mass would need to have evaporated to explain its deviation from the expected path as it left the Solar System (Desch and Jackson cite a much higher fraction). The point continues to draw attention because post-perihelion evaporation should have imposed a jitter on the departing object, changing the rate at which it was tumbling. As far as I can see, the issue of ‘Oumuamua’s behavior remains problematic.

Meanwhile, the Oort Cloud itself retains its fascination. A cloud of cometary material reaching out to what may well be a similar Oort Cloud at Alpha Centauri (its existence is not proven, of course) would give a far-future civilization a way to move slowly outward, exploiting resources along the way in a multi-generational wave that takes advantage of the ubiquity of such objects. For today’s purposes, you would think we might be able to exploit exo-Oort Clouds, if they are there, to work backward along the route of ‘Oumuamua to find the star from which it came, but Loeb points out that there would simply be too many Oort analogs along the line of sight to allow any firm identification.

So we move forward in the ‘Oumuamua discussion, with nitrogen ice now in the mix, with its own caveats, as a potential explanation. As we work out the implications, Loeb also notes that the population of interstellar objects may exceed anything we’ve previously estimated:

…most objects within the Oort cloud volume may not be bound to the sun. In another paper with Amir [Siraj], we showed that the recent discovery of the interstellar comet 2I/Borisov, which most likely originated from an Oort cloud around another star, implies that interstellar objects may outnumber solar system objects within our own Oort cloud. In other words, the Oort cloud objects bound to the sun are swimming in an ocean of background interstellar objects that come and go. Moreover, the discovery of 2I/Borisov implies that about a percent of all the carbon and oxygen in the Milky Way galaxy may be locked in interstellar objects.

The opportunity presented by such a wealth of interstellar materials ranging from dust particles to free-floating planets is immense, and if there is a growth engine embedded within the astronomical community for the next decade or so, it’s surely here, as we begin identifying more and more objects born around stars other than our own. Who knew even a few years ago that we might be able to take advantage of so widespread a phenomenon to study materials from other stellar systems without the need to put our own probes around their stars?


‘Oumuamua: A Shard of Nitrogen Ice?

I’m only just getting to Steven Desch and Alan Jackson’s two papers on ‘Oumuamua, though in a just world (where I could clone myself and work on multiple stories simultaneously) I would have written them up sooner. Following Avi Loeb’s book on ‘Oumuamua, the interstellar object has been in the news more than ever, and the challenge it throws out by its odd behavior has these two astrophysicists, both at Arizona State, homing in on a possible solution.

No extraterrestrial technologies in this view, but rather an unusual object made of nitrogen ice, common in the outer Solar System and likely to be similarly distributed in other systems. Think of it as a shard of a planet like Pluto, where nitrogen ice is ubiquitous. Desch and Jackson calculated the object’s albedo, or reflectivity, with this idea in mind, realizing that nitrogen ice would be more reflective than astronomers had assumed ‘Oumuamua to be, and thus the object could be smaller. As the authors note: “Its brightness would be consistent with an albedo of 0.64, which is exactly consistent with the albedo of the surface of Pluto, which is > 98% N2 ice.”
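To see why a brighter surface implies a smaller body for the same observed brightness, here is a back-of-the-envelope sketch using the standard small-body relation between absolute magnitude, albedo and effective diameter. The H value is the commonly quoted figure for ‘Oumuamua and the albedos are my own illustrative picks, not numbers from the papers:

```python
# Back-of-the-envelope sketch: effective diameter vs. assumed albedo, using the
# standard small-body relation D(km) = (1329 / sqrt(p_V)) * 10^(-H/5).
# H ~ 22.4 is the commonly quoted absolute magnitude for 'Oumuamua; the albedo
# values below are illustrative picks, not numbers from the Jackson/Desch papers.
import math

H_ABS = 22.4  # approximate absolute magnitude of 'Oumuamua

def effective_diameter_m(albedo: float, h_mag: float = H_ABS) -> float:
    """Equivalent spherical diameter in meters for a given geometric albedo."""
    d_km = (1329.0 / math.sqrt(albedo)) * 10.0 ** (-h_mag / 5.0)
    return d_km * 1000.0

for p in (0.04, 0.1, 0.64):   # comet-dark, typical asteroid, N2-ice-like
    print(f"albedo {p:.2f} -> roughly {effective_diameter_m(p):.0f} m across")
```

At Pluto-like reflectivity, the same brightness corresponds to a body only a few tens of meters across, which is the heart of the argument.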

That’s a useful finding because a nitrogen ice object would behave as ‘Oumuamua was observed to do. Recall the salient problem this interloper presented as it left the system: it moved away from the Sun slightly faster than gravity alone can explain. Desch and Jackson found that if it were made of nitrogen ice, and thus smaller (and more reflective) than thought, the so-called ‘rocket effect’ of escaping gas could account for the anomaly. A tiny object is affected by a small amount of escaping gas to a greater extent than a larger, more massive one.
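A rough sketch of that scaling, with toy numbers of my own rather than the model in the papers: outgassing thrust grows with surface area while mass grows with volume, so the resulting acceleration climbs as the body shrinks.

```python
# Illustrative sketch of the 'rocket effect' scaling (toy numbers, not the
# papers' model): thrust from sublimating gas scales with surface area (R^2),
# mass with volume (R^3), so the non-gravitational acceleration goes as 1/R.
import math

RHO = 1000.0    # assumed bulk density, kg/m^3 (illustrative)
V_GAS = 200.0   # assumed speed of escaping gas, m/s (illustrative)
Z = 1e-5        # assumed sublimation mass flux, kg/m^2/s (illustrative)

def nongrav_accel(radius_m: float) -> float:
    """Outgassing acceleration (m/s^2) for a sphere of the given radius."""
    area = 4.0 * math.pi * radius_m ** 2
    mass = RHO * (4.0 / 3.0) * math.pi * radius_m ** 3
    thrust = Z * area * V_GAS        # rate of momentum carried off by the gas
    return thrust / mass

for r in (200.0, 50.0):              # a larger dark body vs. a smaller bright one
    print(f"R = {r:5.0f} m -> a = {nongrav_accel(r):.1e} m/s^2")
```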

The effect can be calculated by examining how different kinds of ices sublimate, moving from solid to gas with no intervening liquid state. As to how an object made of nitrogen ice might have gone interstellar, the astronomers worked out the rate at which breakaway pieces of nitrogen ice would be produced by impacts in the outer regions of other planetary systems. Says Jackson:

“It was likely knocked off the surface by an impact about half a billion years ago and thrown out of its parent system. Being made of frozen nitrogen also explains the unusual shape of ‘Oumuamua. As the outer layers of nitrogen ice evaporated, the shape of the body would have become progressively more flattened, just like a bar of soap does as the outer layers get rubbed off through use.”

That question of ‘Oumuamua’s shape continues to intrigue me, though, as I ponder my bar of Irish Spring. We’ve never observed anything of this shape in the Solar System. And exactly what would have happened at perihelion? I turned to the first of the two papers for more:

Our modelling shows that, perhaps surprisingly, an N2 ice fragment can survive passing the Sun at a perihelion distance of 0.255 au, in part because evaporative cooling maintains surface temperatures less than 50 K. Despite being closer to the Sun than Mercury, ‘Oumuamua’s surface temperatures remained closer to those of Pluto.

Even so, surely nitrogen ice would have been a huge factor in its behavior:

The volatility of N2 did, however, lead to significant mass loss – we calculate that by the time ‘Oumuamua was observed, a month after perihelion, it retained only around 8% of the mass it had on entering the solar system. This loss of mass is key to explaining the extreme shape of ‘Oumuamua: isotropic irradiation and removal of ice by sublimation increases the axis ratios, a process also identified by Seligman & Laughlin (2020). Between entering the Solar system and the light curve observations the loss of mass from ‘Oumuamua increased its axis ratios from an unremarkable 2:1 to the extreme observed value of around 6:1.

In other words, we are dealing with a ‘flattening’ that occurred in our own system and not at the object’s place of origin. I suspect this flattening process is going to receive a thorough vetting in the community, key as it is to explaining a salient oddity about ‘Oumuamua.
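A toy way to see the geometry, using uniform surface recession on a simple ellipsoid with made-up dimensions (a caricature, not the paper’s insolation-driven model):

```python
# Toy geometry sketch (uniform surface recession, NOT the paper's insolation-
# driven erosion model): start from a 2:1:1 ellipsoid with made-up dimensions
# and shave the same thickness off every axis, watching shape and volume.
a0, b0, c0 = 200.0, 100.0, 100.0     # illustrative starting semi-axes, meters

for recession in (0.0, 40.0, 60.0, 80.0):
    a, b, c = a0 - recession, b0 - recession, c0 - recession
    ratio = a / c
    volume_left = (a * b * c) / (a0 * b0 * c0)
    print(f"recession {recession:4.0f} m: aspect ratio {ratio:3.1f}:1, "
          f"{volume_left:5.1%} of the original volume remains")
```

The numbers here aren’t the paper’s, but the trend is the point: by the time the short axes have been mostly eaten away, the aspect ratio has climbed from 2:1 to around 6:1 and only a few percent of the original volume remains.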

And so we wind up with a theory that presents ‘Oumuamua as shown below, an unusual aspect ratio to be sure (and yes, reminiscent of what could be a lightsail), but Desch and Jackson argue that nitrogen ice matches every aspect of ‘Oumuamua’s behavior without the need to invoke alien technology. Desch comments:

“…it’s important in science not to jump to conclusions. It took two or three years to figure out a natural explanation — a chunk of nitrogen ice — that matches everything we know about ‘Oumuamua. That’s not that long in science, and far too soon to say we had exhausted all natural explanations.”

Image: This painting by William K. Hartmann, who is a senior scientist emeritus at the Planetary Science Institute in Tucson, Arizona, is based on a commission from Michael Belton and shows a concept of the ‘Oumuamua object as a pancake-shaped disk. Credit: William Hartmann.

Thus ‘Oumuamua, in the eyes of Desch and Jackson, might be considered a chunk of an exo-Pluto, which in itself opens the topic of studying interstellar objects for information about their parent systems. We’ve never observed an exo-Pluto before, so ‘Oumuamua may probe the surface composition of worlds like this.

Moreover, if our first identified interstellar interloper is made of nitrogen ice, then exo-Plutos must be common, although we’ll have to decide exactly how we want to define the term (and I suppose we can now invoke the ruckus about Pluto’s planetary status by asking whether exo-Plutos are actually to be referred to as ‘exo-dwarf planets’).

As the Vera C. Rubin Observatory in Chile comes online with its Legacy Survey of Space and Time, regular surveys of the southern sky will doubtless increase the number of interstellar objects we can identify, helping us home in further on their composition and likely origin.

Image: Illustration of a plausible history for ‘Oumuamua: Origin in its parent system around 0.4 billion years ago; erosion by cosmic rays during its journey to the solar system; and passage through the solar system, including its closest approach to the Sun on Sept. 9, 2017, and its discovery in October 2017. At each point along its history, this illustration shows the predicted size of ‘Oumuamua, and the ratio between its longest and shortest dimensions. Credit: S. Selkirk/ASU.

The first paper is Jackson et al., “1I/’Oumuamua as an N2 ice fragment of an exo-Pluto surface: I. Size and Compositional Constraints,” Journal of Geophysical Research: Planets (16 March 2021). Abstract / Preprint. The second is Desch et al., “1I/’Oumuamua as an N2 ice fragment of an exo-Pluto surface II: Generation of N2 ice fragments and the origin of ‘Oumuamua,” Journal of Geophysical Research: Planets (16 March 2021). Abstract / Preprint.


Technosignatures and the Age of Civilizations

Given that we are just emerging as a spacefaring species, it seems reasonable to think that any civilizations we are able to detect will be considerably more advanced — in terms of technology, at least — than ourselves. But just how advanced can a civilization become before it does irreparable damage to itself and disappears? This question of longevity appears as a factor in the famous Drake Equation and continues to bedevil SETI speculation today.

In a paper in process at The Astronomical Journal, Amedeo Balbi (Università degli Studi di Roma “Tor Vergata”) and Milan Ćirković (Astronomical Observatory of Belgrade) explore the longevity question and create a technosignature classification scheme that takes it into account. Here we’re considering the kinds of civilization that might be detected and the most likely strategies for success in the technosignature hunt. The ambiguity in Drake’s factor L is embedded in its definition as the average length of a civilization’s communication phase.

Immediately we’re in shifting terrain, for in the early days of SETI, radio communication was the mode of choice, but even in the brief decades since Project Ozma, we’ve seen our own civilization drastically changing the radio signature it produces through new forms of connection. And as Balbi and Ćirković point out, the original L in Drake’s equation leaves open a rather significant matter: How do we treat the possibility of civilizations that have gone extinct?

These two authors have written before about what they call ‘temporal Copernicanism,’ which leads us to ask how the longevity of a civilization is affected by its location in our past or in our future. We are, after all, dealing with a galaxy undergoing relentless processes of astrophysical evolution. As we speculate, we have to question a value for L based on a civilization (our own) whose duration we cannot know. How can we know how far our own L extends into the future?

Image: Messier 107, a globular cluster around the disk of the Milky Way in the constellation Ophiuchus, is a reminder of the variety of stellar types and ages we find in our galaxy. What kind of technosignature might we be able to detect at a distance of about 20,000 light-years, and would ancient clusters like these in fact make reasonable targets for a search? Many factors go into our expectations as we formulate search strategies. This image was taken with the Wide Field Camera of Hubble’s Advanced Camera for Surveys. Credit: ESA/NASA.

Thinking about these matters always gets me thinking of Arthur C. Clarke’s 1956 novel The City and the Stars, set in the city of Diaspar a billion years from now. How do we wrap our heads around a civilization measured not just in millennia but in gigayears? Speculative as they are, I find a kind of magic in playing around with terms like Γ, cited here as the average rate of appearance of communicating civilizations (with L, as before, their average longevity), so that if we take Γ as constant in time, its value can be estimated as the total number of technosignatures over the history of the galaxy (N_tot) divided by the age of the galaxy (T_G). Thus Balbi and Ćirković cite the equation:
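N = Γ × L = (N_tot / T_G) × L

In words: the number of technosignatures detectable at any given moment is the rate at which they appear multiplied by how long they last.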

From the paper:

It is apparent that the number of technosignatures that we can detect is a fraction of the total number that ever existed: the fraction is precisely L/T_G. Because T_G ≈ 10^10 years, L/T_G is presumed to be generally small; any specific precondition imposed on the origination of technosignatures, like the necessity of terrestrial planets for biological evolution, will act to reduce the fraction. This is the quantitative argument that justifies one of the most widely cited assertions of classical SETI, i.e. that the chances of finding ETIs depend on the average longevity of technological civilizations. (In fact, it is well-known that Frank Drake himself used to equate N to L.)

The equation clarifies the idea that SETI depends upon the average longevity of technological cultures, but the authors point out another way to look at the matter: L needs to be large, because we require a very high number of technosignatures to have any chance of detecting even a single one. Spread out over time, many such signatures need to have existed for us to make a single detection, or at best a few, with our present level of technology.
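To put rough and purely illustrative numbers on it: a technosignature lasting 10^4 years against a galactic history of 10^10 years gives L/T_G = 10^-6, so something like a million such signatures would have to arise over the Galaxy’s lifetime for us to expect even one to be observable today.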

And here is where Balbi and Ćirković take us away from the more conventional approach derived above. Is the number of detectable technosignatures, N, static over time? From the paper:

…both Γ and L are average quantities, and there is an implicit assumption that N is stationary over the history of the Galaxy. There are good reasons to believe that this is not the case. Of course, it is unrealistic to assume that Γ is constant with cosmic time. Even if we limit ourselves to the last ∼ 10 Gyr of existence of thin disk Pop I stars which are likely to harbour the predominant fraction of all possible habitats for intelligent species, their rate of emergence is likely to be very nonuniform. One obvious source of nonuniformity is the changing rate of emergence of planetary habitats, as first established by Lineweaver (2001) and subsequently elaborated by Behroozi & Peeples (2015), as well as by Zackrisson et al. (2016). This nonuniformity can be precisely quantified today and some contemporary astrobiological numerical simulations have taken it into account (Đošović et al. 2019).

We should assume, the authors argue, that the appearance of technosignatures varies with time. They are interested less in coming up with a figure for N — and again, this is defined in their terms (not Drake’s) as ‘the number of detectable technosignatures’ — than in spotlighting the most likely type of technosignature we can detect. Their classification scheme for technosignatures as filtered through the lens of longevity goes like this:

Type A: technosignatures that last for a duration comparable to the typical timescale of technological and cultural evolution on Earth, τ ∼ 10^3 years

Type B: technosignatures that last for a duration comparable to the typical timescale of biological evolution of species on Earth, τ ∼ 10^6 years

Type C: technosignatures that last for a duration comparable to the typical timescale of stellar and planetary evolution, τ ∼ 10^9 years
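The scheme is simple enough to express as a quick sketch in code; the characteristic timescales are the authors’, but the cutoffs I use between classes are my own choice, purely for illustration:

```python
# Minimal sketch of the longevity classes. The characteristic timescales
# (~1e3, ~1e6, ~1e9 years) are the authors'; the decade cutoffs between the
# classes used below are my own choice, purely for illustration.
import math

def technosignature_class(longevity_years: float) -> str:
    """Assign a persistence time to the nearest longevity class."""
    decade = math.log10(longevity_years)
    if decade < 2.0:
        return "shorter-lived than Type A"
    if decade < 4.5:
        return "Type A (cultural/technological timescale)"
    if decade < 7.5:
        return "Type B (biological-evolution timescale)"
    return "Type C (stellar/planetary timescale)"

for name, years in [("Earth's radio leakage to date", 1.0e2),
                    ("a beacon maintained for a million years", 1.0e6),
                    ("a derelict probe drifting for gigayears", 1.0e9)]:
    print(f"{name}: {technosignature_class(years)}")
```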

The scheme carries an interesting subtext: The longevity of technosignatures does not have to coincide with the longevity of the species that created the detectable technology. Here we’re at major variance from the L in Frank Drake’s equation, which had to do with the lifetime of a civilization that was capable of communicating. Balbi and ?irkovi? are tightly focused on the persistence not of civilizations but of artifacts. Notice that a technosignature search is likewise not limited to planetary systems — an interstellar probe could throw its own technosignature.

We might assume that technosignatures of long duration could only be produced by highly advanced civilizations capable of planetary engineering, say, but let’s not be too sure of ourselves on that score, for some technosignatures might be left behind by species well down on the Kardashev scale of civilizations. Consider Breakthrough Starshot, for example. Let’s push its ambitions back a bit and just say that perhaps within a century, we may be able to launch flocks of small sailcraft to nearby stars using some variation of its methods.

These would constitute a technosignature if detected by another civilization, as would remnant probes like Voyager and Pioneer, as would some forms of atmospheric pollution or simple space debris. A single civilization could readily produce different kinds of technosignatures over the course of its lifetime. As the authors note:

Our species has not yet produced Type A technosignatures, if we only consider the leakage of radio transmissions or the alteration of atmospheric composition by industrial activity; but its artifacts, such as the Voyager 1 and 2, Pioneer 10 and 11, and New Horizons probes, could in principle become type B or even C in the far future, even if our civilization should not survive that long. Similarly, a Type C technosignature can equally be produced by a very long-lived civilization, or by one that has gone extinct on a shorter time scale but has left behind persistent remnants, such as a beacon in a stable orbit or a Dyson-like megastructure.

Persistent remnants. I think of the battered, but more or less intact, Voyager 2 as it passes the red dwarf Ross 248 at about 111,000 AU some 40,000 years from now (Ross 248 will, in that era, be the closest star to the Sun). That’s a technosignature waiting to be found, one produced by a civilization low on the Kardashev scale, but it bears the same message, of a culture that explores space. I wonder what kind of a technosignature Clarke’s billion year old civilization in Diaspar would have thrown?

Whatever it might be, it would surely be more likely to be detected than our Voyager 2, a stray bit of flotsam among the stars. That said, I keep in mind what we learned from the TechnoClimes workshop — and Jim Benford’s continuing work on ‘artifact’ SETI — making the point that we can’t rule out a local artifact in our own system. And, of course, if Avi Loeb is correct, we may already have found one, though suitably ambiguous in its interpretation. Clearly, if we did detect technosignatures close to home, the implication would be that they are found widely in the galaxy, and that would dramatically change the nature of the hunt.

So the scope for technosignatures is wide, but drawing the lessons of this paper together, the authors find that the technosignature we are most likely to detect with present technological tools is a long-lived one, meaning, in Balbi and Ćirković’s terms, one with a duration of at least 10^6 years. Technosignatures younger than this may be detectable, but only if it turns out they are common, and thus relatively nearby and easier for us to find. Of course we can search for them, but the authors believe these searches are unlikely to pay off. Their thought:

This suggests that an anthropocentric approach to SETI is flawed: it is rational to expect that the kind of technosignatures we are most likely to get in contact with is wildly different, in terms of duration, from what has been produced over the course of human history. This conclusion strengthens the case for the hitherto downplayed hypothesis (which is not easily labeled as “optimistic” or “pessimistic”) that a significant fraction of detectable technosignatures in the Galaxy are products of extraterrestrial civilizations which are now extinct.

How to proceed? The authors’ focus on longevity leads them to conclude that our most likely targets may well be rare and may flag extinct civilizations, but the value N that Balbi and Ćirković are talking about is different from classical SETI’s N, which needs a large value to ensure detection. It only takes one technosignature, and a few of the Type C signatures would be much more likely to be detected than a spectacularly high number of Type A signatures:

Dysonesque megastructures, interstellar probes, persistent beacons—as well as activities related to civilizations above Type 2 of the Kardashev scale, or to artificial intelligence—should be the preferred target for future searches. These technosignatures would not only be ‘weird’ when measured against our own bias, but could arguably be less common than short-lived ones. Such [a] conclusion deflates the emphasis on large N (and human-like technosignatures) that informed much of classical SETI’s literature.

If this sounds discouraging, it need not be. It simply tells us the kind of strategy that has the greatest chance for success:

…the supposed rarity of long-lived technosignatures should not be regarded, in itself, as a hindrance for the SETI enterprise: in fact, a few Type C technosignatures, over the course of the entire history of the Galaxy, would have much higher chance of being detected than a large number of Type A. Also, possible astrophysical mechanisms which could lead to a posteriori synchronization of shorter lived technosignatures should be investigated, to constrain the parameter space of this possibility, if nothing else.

Civilizations that appeared long ago and survived have conceivably found a way to persist, and therefore may still be active, but for detection purposes their existence now is less significant than what they may have left behind. Just how they grew to the point where they could begin the construction of detectable technosignatures is explored in the paper’s discussion of ‘phase-transition’ scenarios via a mathematical framework used to model longevity. “Achieving such form[s] of institutions and social structures might count as an advanced engineering feat in its own right,” as the authors note.

Technosignature work is young and constitutes a significant extension of the older SETI paradigm. Thus modeling how to proceed, as we saw both here and in the previous post on NASA’s TechnoClimes workshop, is the only path toward developing a search strategy that is both sound in its own right and also may have something to teach us about how our own civilization views its survival. The kinds of insight technosignature modeling could produce would take us well beyond the foolish notion of some early SETI critics that its only didactic function is as a form of religion, looking for salvation in the form of the gift of interstellar knowledge. To the contrary, the search may tell us much more about ourselves.

The paper is Balbi and Ćirković, “Longevity is the key factor in the search for technosignatures,” in process at The Astronomical Journal (preprint).


A Path Forward for Technosignature Searches

Héctor Socas-Navarro (Instituto de Astrofísica de Canarias) is lead author of a paper on technosignatures that commands attention. Drawing on work presented at the TechnoClimes 2020 virtual meeting, held under the auspices of NASA at the Blue Marble Space Institute of Science in Seattle, the paper pulls together a number of concepts for technosignature detection. Blue Marble’s Jacob Haqq-Misra is a co-author, as are James Benford (Microwave Sciences), Jason Wright (Pennsylvania State) and Ravi Kopparapu (NASA GSFC), all major figures in the field, but the paper also draws on the collected thinking of the TechnoClimes workshop participants.

We’ve already looked at a number of technosignature possibilities in these pages, so let me look for commonalities as we begin, beyond simply listing possibilities, to point toward a research agenda, something that NASA clearly had in mind for the TechnoClimes meeting. The first thing to say is that technosignature work is nicely embedded within more traditional areas of astronomy, sharing a commensal space with observations being acquired for other reasons. Thus the search through archival data will always be a path for potential discovery.

The Socas-Navarro paper, however, homes in on new projects and mission concepts that could themselves provide useful data for other areas of astronomy and astrophysics. A broad question is what kind of civilization we would be likely to detect if technosignature research succeeds. Only technologies much superior to our own could be detected with our current tools. Recent work on a statistical evaluation of the lifespan of technological civilizations points to the same conclusion: First detection would almost certainly be of a high-order technology. Would it also be a signature of a civilization that still exists? As we’ll see in the next post, there are reasons for thinking this will not be the case.

Image: Artistic recreation of a hypothetical exoplanet with artificial lights on the night side. Credit: Rafael Luis Méndez Peña/Sciworthy.com.

This is a useful paper for those looking for an overview of the technosignature space, and it also points to the viability of new searches on older datasets as well as data we can expect from already scheduled missions and new instrumentation on the ground. Thus exoplanet observations offer obvious opportunities for detecting unusual phenomena as a byproduct of their work. The workshop suggested taking advantage of this by modeling complex light curves with technosignatures in mind, by mounting photometric and spectroscopic searches for night-time illumination, and by developing new algorithms to identify optimal communication pathways between exoplanets in a given volume of interstellar space:

A region of space with the right distribution of suitable worlds to become a communication hub may be a promising place to search. TS [technosignatures] might be more abundant there, just like Earth TS are more abundant wherever there is a high density of human population, which in turn tends to clutter in the form [of] network structures.

Other methods piggyback on existing exoplanet campaigns. Observing planetary atmospheres, for instance, is useful because it ties in to existing biosignature detection efforts. Future projects such as the Large Ultraviolet Optical Infrared Surveyor (LUVOIR) could explore this space. See Technosignatures: Looking to Planetary Atmospheres, for example, for Ravi Kopparapu’s work on nitrogen dioxide (NO2) as an industrial byproduct, a kind of search we have only begun to explore. Back to the paper:

A nice advantage of this method of detecting atmospheric technosignatures is that the same instruments and telescopes can be used to characterize atmospheres of exoplanets. Our view of habitability and technosignatures is based on our own Earth’s evolutionary history. There are innumerable examples in the history of science where new phenomena were discovered serendipitously. By having a dedicated mission to look for atmospheric technosignatures that also covers exoplanet science, we can increase our chances of detecting extraterrestrial technology on an unexpected exoplanet, or may discover a spectral signature that we usually do not associate with technology. The only way to know is to search.

Where else might we push with new observational and mission concepts? A 3-meter space telescope performing an all-sky survey with high point source sensitivity in the infrared could provide benefits to astrophysics as well as being sensitive to Dyson spheres at great distances. The paper argues for a dedicated effort to develop fast infrared detectors capable of nanosecond timing to enable a space mission searching the entire infrared sky. Such detectors would be sensitive to transients like pulsars and fast radio bursts as well as broadband pulses.

The paper also makes the case for a radio observatory on the far side of the Moon. Here we are all but completely free from contamination from radio interference by our own species, although even now the matter is complicated by satellites like China’s Queqiao, which has been at the Earth-Moon L2 Lagrange point for almost three years. Issues of radio protection of the far side will grow in importance as we try to protect this resource, where Earth radio waves are attenuated by 10 orders of magnitude or more. Again, we are dealing with a future facility that would also be of inestimable value for conventional astronomy and lunar exploration.

Close encounters with other stars (another star penetrates the Sun’s Oort Cloud every 10^5 years or so) highlight the possibility that extraterrestrial civilizations, having noted biosignatures from Earth, could have placed probes in our system. Few searches for such artifacts have been conducted, but as Jim Benford has discussed in these pages (see Looking for Lurkers: A New Way to do SETI), a host of objects could readily be examined for them, and Benford has made the case that both the surface of the Moon and the Earth Trojans can now be studied at an unprecedented level of detail.

We already have monthly mapping of the Moon via the Lunar Reconnaissance Orbiter (LRO) at a resolution of 100 m/pixel (LRO can also work in a higher-resolution mode of 0.5 m/pixel, but this mode has not been widely used). Future exploration might include an orbiter working at ultra-high resolution, at the ∼10 cm per pixel level. The workshop also discussed high-resolution mapping of Mars and, perhaps, Mercury and larger asteroids, coupled with machine learning techniques for identifying anomalies.

Not surprisingly, given the high public visibility of objects like ‘Oumuamua or 2I/Borisov, a ready-to-launch intercept mission also comes into consideration here, planned in advance for the study of future interstellar arrivals. Other possibilities for pushing the technosignature envelope include an asteroid polarimetry mission studying either main belt asteroids or the Jupiter Trojans, gathering information that would also be useful for our understanding of small objects with a potential for impact on the Earth. The Jupiter mission could probe for natural, and possibly artificial, objects that might have wound up ensnared over time in Jupiter’s gravitational well. The asteroid mission would produce a statistical description of small objects in solar orbit. The paper describes it this way:

A telescope similar to Kepler would be sensitive to objects of 10 m up to a distance of 0.02 AU (assuming a high albedo of 0.8) or 0.01 AU for typical asteroid albedos. Extrapolating the current knowledge of asteroid size distribution, there should be some 250,000 asteroids of 10 m in the radius of 0.02 AU accessible to such [a] telescope in the asteroid belt. The mission could be designed with an elliptical orbit having the perihelion near the Earth’s orbit and the aphelion in the asteroid belt. Under these conditions it would regularly dive into a different region of the belt, probing a different space in every orbit.
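That factor of two between the quoted ranges follows from a simple scaling. Here is a back-of-the-envelope sketch, under my own assumption that reflected flux goes as albedo times diameter squared over range squared at a fixed heliocentric distance, with phase effects ignored:

```python
# Back-of-the-envelope scaling (my assumption: reflected flux ~ albedo * D^2 /
# range^2 at a fixed heliocentric distance, phase effects ignored), anchored to
# the quoted limit of 0.02 AU for a 10 m object with albedo 0.8.
import math

def limiting_range_au(albedo: float, diameter_m: float = 10.0,
                      ref_albedo: float = 0.8, ref_range_au: float = 0.02) -> float:
    """Detection range scaled as sqrt(albedo) times diameter."""
    return ref_range_au * math.sqrt(albedo / ref_albedo) * (diameter_m / 10.0)

for p in (0.8, 0.2, 0.05):
    print(f"albedo {p:.2f}: limiting range ~{limiting_range_au(p):.3f} AU")
```

An albedo near 0.2 reproduces the quoted 0.01 AU figure for a typical asteroid under this scaling.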

Such are some of the ways we can extend the search for technosignatures while supporting existing astronomical and astrophysical work. The paper breaks new ground in introducing a framework for future work covering the different types of technosignatures, defining what it calls the ‘ichnoscale’ and analyzing it in relation to the number of targets and the persistence of a possible signal. The ichnoscale parameter is “the relative size scale of a given TS [technosignature] in units of the same TS produced by current Earth technology.”

We’re only beginning to map out a path forward for technosignature investigation, but the authors believe that given advances in exoplanet research, astrobiology and astrophysics, we are at the right place to inject new energy into the attempt. Thus what the community is trying to do is to learn the best avenues for proceeding while developing a framework to advance the effort by quantifying targets and potential signals. Along the way, we may well discover new astrophysical phenomena as a byproduct.

I’m particularly interested in the thorny question of how long technological civilizations can be expected to live, and am looking into a new paper from Amedeo Balbi and Milan ?irkovi? on the matter. I’ll be exploring some thoughts from this paper in the next entry.

The paper for today is Héctor Socas-Navarro et al., “Concepts for future missions to search for technosignatures,” Acta Astronautica Vol. 182 (May 2021), pp. 446-453 (abstract / preprint).
