Exploring the Contact Paradox

Keith Cooper is a familiar face on Centauri Dreams, both through his own essays and the dialogues he and I have engaged in on interstellar topics. Keith is the editor of Astronomy Now and the author of The Contact Paradox: Challenging Assumptions in the Search for Extraterrestrial Intelligence (Bloomsbury Sigma) as well as Origins of the Universe: The Cosmic Microwave Background and the Search for Quantum Gravity (Icon Books), to be published later this year. The Contact Paradox is a richly detailed examination of the history and core concepts of SETI, inspiring a new set of conversations, of which this is the first. With the recent expansion of the search through Breakthrough Listen, where does SETI stand, both in terms of its likelihood of success and its perception among the general public?

  • Paul Gilster

Keith, we’re 60 years into SETI and no contact yet, though there are a few tantalizing things like the Wow! signal to hold our attention. Given that you have just given us an exhaustive study of the field and mined its philosophical implications, what’s your take on how this lack of results is playing with the general public? Are we more or less ready today than we were in the days of Project Ozma to receive news of a true contact signal?

And despite what we saw in the film Contact, do you think the resultant clamor would be as widespread and insistent? Because to me, one of the great paradoxes about the whole idea of contact is that the public seems to get fired up about the idea in films and books, yet remains relatively uninterested in the actual work that’s going on. Or am I misjudging this?

  • Keith Cooper

What a lot of people don’t realise is just how big space is. Our Galaxy is home to somewhere between 100 billion and 200 billion stars. Yet, until Yuri Milner’s $100 million Breakthrough Listen project, we had looked at and listened to, in detail, only about a thousand of those stars. And when I say listened closely, I mean we pointed a telescope at each of those stars for half an hour or so. Even Breakthrough Listen, which will survey a million stars in detail, finds the odds stacked against it. Let’s imagine there are 10,000 technological species in our Galaxy. That sounds like a lot, but on average we’d have to search between 10 million and 20 million stars just to find one of those species.
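To put rough numbers on that, here’s a minimal back-of-envelope sketch in Python, using just the figures above – the 10,000-species count is, of course, a pure assumption:

```python
# Back-of-envelope search odds, using the figures quoted above.
# The species count is an assumption for illustration only.
GALAXY_STARS_LOW = 100e9    # lower estimate of Milky Way stars
GALAXY_STARS_HIGH = 200e9   # upper estimate
TECH_SPECIES = 10_000       # hypothetical technological species

for stars in (GALAXY_STARS_LOW, GALAXY_STARS_HIGH):
    per_species = stars / TECH_SPECIES
    print(f"{stars:.0e} stars -> ~{per_species:,.0f} stars to search per species")

# 1e+11 stars -> ~10,000,000 stars to search per species
# 2e+11 stars -> ~20,000,000 stars to search per species
```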

And remember, we’re only listening for a short time. If they’re not transmitting during that time frame, then we won’t detect them, at least not with a radio telescope. Couple that with the fact that incidental radio leakage will be much harder to detect than we thought, and it’s little wonder that we’ve not found anyone out there yet. Of course, the public doesn’t see these nuances – they just see that we’ve been searching for 60 years and all we’ve found is negative or null results. So I’m not surprised that the public are often uninspired by SETI.

Some of this dissatisfaction might stem from the assumptions made in the early days of SETI, when it was assumed that ETI would be blasting out messages through powerful beacons that would be pretty obvious and easy to detect. Clearly, that doesn’t seem to be the case. Maybe that’s because they’re not out there, or maybe it’s because the pure, selfless altruism required to build such a huge, energy-hungry transmitter to beam messages to unknown species is not very common in nature. Certainly on Earth, in the animal kingdom, altruism usually operates either on the basis of protecting one’s kin, or via quid pro quo, neither of which lends itself to encouraging interstellar communication.

So I think we – that is, both the public and the SETI scientific community – need to readjust our expectations a little bit.

Are we ready to receive a contact signal? I suspect that we think we are, but that’s different from truly being ready. Of course, it depends upon a number of variables, such as the nature of the contact, whether we can understand the message if one is sent, and whether the senders are located close to us in space or on the other side of the Galaxy. A signal detected from thousands of light years away, carrying a message we can’t decode, will have much less impact than one from, say, 20 or 30 light years away whose content we can decode, and whose senders we might even begin to communicate with on a regular basis.

  • Paul Gilster

I’ll go further than that. To me, the optimum SETI signal to receive first would be one from an ancient civilization, maybe one far toward galactic center, which by virtue of its extreme distance would make for a non-threatening experience. Or at least it would if we quickly went to work on expanding public understanding of the size of the Galaxy and the Universe itself, as you point out. An even more ancient signal from a different galaxy would be better still, as even the most rabid conspiracy theorist would have little sense of immediate threat.

I suppose the best scenario of all would be a detection that demonstrated other intelligent life somewhere far away in the cosmos, and then a century or so for humanity to digest the idea, working it not only into popular culture but also into philosophy and art, so that it becomes a given in our school textbooks (or whatever we’ll use in the future in place of school textbooks). Then, if we’re going to receive a signal from a relatively nearby system, let it come after this period of acclimatization.

Great idea, right? As if we could script what happens when we’re talking about something as unknowable as SETI contact. I don’t even think we’d have to have a message we could decode at first, because the important thing would be the simple recognition of the fact that other civilizations are out there. On that score, maybe Dysonian SETI turns the trick with the demonstration of a technology at work around another star. The fact of its existence is what we have to get into our basic assumptions about the universe. I used to assume this would be easy and come soon, and while I do understand about all those stars out there, I’m still a bit puzzled that we haven’t turned up something. I’d call that no more than a personal bias, but there it is.

Image: The Parkes 64m radio telescope in Parkes, New South Wales, Australia with the Milky Way overhead. Breakthrough Listen is now conducting a survey of the Milky Way galactic plane over 1.2 to 1.5 GHz and a targeted search of approximately 1000 nearby stars over the frequency range 0.7 to 4 GHz. Credit: Wikimedia Commons / Daniel John Reardon.

  • Keith Cooper

It’s the greatest puzzle that there is. Radio SETI approaches things from the assumption that ET just sat at home belting out radio signals, and yet, as we know, the Universe is so old that ET has had ample time to reach us, or to build some kind of Dysonian artefact, or to do something to make their presence more obvious. And over the years we’ve all drawn our own conclusions as to why this does not seem to be the case – maybe they are here but hidden, watching us like we’re in some kind of cosmic zoo. Or maybe interstellar travel and building megastructures are more difficult than we envision. Perhaps they are all dead, or technological intelligence is rare, or they were never out there in the first place. We just don’t know. All we can do is look.

I think science fiction has also trained us to expect alien life to be out there – and I don’t mean that as a criticism of the genre. Indeed, in The Contact Paradox, I often use science fiction as allegory, largely because that’s where discussions about what form alien life may take and what might happen during contact have already taken place. So let me ask you this, Paul: From all the sf that you’ve read, are there any particular stories that stand out as a warning about the subtleties of contact?

  • Paul Gilster

I suppose my favorite of all the ‘first contact through SETI’ stories is James Gunn’s The Listeners (1972). Here we have multiple narrators working a text that is laden with interesting quotations. Gunn’s narrative methods go all the way back to Dos Passos and anticipate John Brunner (think Stand on Zanzibar, for example). It’s a fascinating methodology, but beyond that, the tumult that greets the decoding of an image from Capella transforms into acceptance as we learn more about a culture that seems to be dying, and as we await what may be the reply to a message humanity had finally decided to send in response. So The Listeners isn’t really a warning as much as an exploration of this tangled issue in all its complexity.

Of course, if we widen the topic to go beyond SETI and treat other forms of contact, I love what Stanislaw Lem did with Solaris (1961). A sentient ocean! I also have to say that I found David Brin’s Existence (2012) compelling. Here competing messages are delivered by something akin to Bracewell probes, reactivated after long dormancy. Which one do you believe, and how do you resolve deeply contradictory information? Very interesting stuff! I mean, how do we respond if we get a message, and then a second one saying “Don’t pay any attention to that first message”?

What are some of your choices? I could go on for a bit about favorite science fiction but I’d like to hear from you. I assume Sagan’s Contact (1985) is on your list, but how about dazzling ‘artifact’ contact, as in the Strugatsky brothers’ Roadside Picnic (1972)? And how do we fit in Cixin Liu’s The Three-Body Problem (2008)? At first glance, I thought we were talking about Alpha Centauri, but the novel shows no familiarity with the actual Centauri system, while still being evocative and exotic. Here the consequences of contact are deeply disturbing.

  • Keith Cooper

I wish I were as well read as you are, Paul! I did read The Three-Body Problem, but it didn’t strike a chord with me, which is a shame. For artefact contact, however, I have to mention the Arthur C. Clarke classic, Rendezvous with Rama (1973). One of the things I liked about that story is that it kept us removed from Rama’s purpose. We just happened to be bystanders, oblivious to Rama’s true intent and destination (at least until the sequel novels).

Clarke’s story feels relevant to SETI today, in which embracing the search for ‘technosignatures’ has allowed researchers to consider wider forms of detection than just radio signals. In particular, we’ve seen more speculation about finding alien spacecraft in our own Solar System – see Avi Loeb pondering whether 1I/‘Oumuamua was a spacecraft (I don’t think it was), or Jim Benford’s paper about looking for lurkers.

I’ve got mixed feelings about this. On the one hand, although it’s speculative and I really don’t expect us to find anything, I see no reason why we shouldn’t look for probes in the Solar System, just in case, and it would be done in a scientific manner. On the other hand, it sets SETI on a collision course with ufology, and I’d be interested to see how that would play out in the media and with the public.

It could also change how we think about contact. Communication over many light years via radio waves or optical signals is one thing, but if the SETI community agrees that it’s possible that there could be a probe in our Solar System, then that would bring things into the arena of direct contact. As a species, I don’t think we’re ready to produce a coherent response to a radio signal, and we are certainly not ready for direct contact.

Contact raises ethical dilemmas. There’s the obvious stuff, such as who has the right to speak for Earth, and indeed whether we should respond at all, or stay silent. I think there are other issues though. There may be information content in the detected signal, for example a message containing details of new technology, or new science, or new cultural artefacts.

However, we live in a world in which resources are not shared equally. Would the information contained within the signal be shared with the whole world, or would governments covet it? If the technological secrets learned from the signal could change the world, for good or ill, who should we trust to manage those secrets?

These issues become amplified if contact is direct, such as finding one of Benford’s lurkers. Would we all agree that the probe should have its own sovereignty and keep our distance? Or would one or more nations or organisations seek to capture the probe for their own ends? How could we disseminate what we learn from the probe so that it benefits all humankind? And what if the probe doesn’t want to be captured, and defends itself?

My frustration with SETI is that we devote our efforts to trying to make contact, but then shun any serious discussion of what could happen during contact. The search and the discussion should be happening in tandem, so that we are ready should SETI find success, and I’m frankly puzzled that we don’t really do this. Paul, do you have any insight into why this might be?

  • Paul Gilster

You’ve got me. You and I are on a slightly different page when it comes to METI (Messaging to Extraterrestrial Intelligence), for example. But we both agree that while we search for possible evidence of ETI, we should be having this broad discussion about the implications of success. And if we’re talking about actually sending a signal without any knowledge whatsoever of what might be out there, then that discussion really should take priority, as far as I’m concerned. I’d be much more willing to accept the idea of sending signals if we came to an international consensus on the goal of METI and its possible consequences.

As to why we don’t do this, I hear a lot of things. Most people from the METI side argue that the cat is already out of the bag anyway, with various private attempts to send signals proliferating, and the assumption that ever more sophisticated technology will allow everyone from university scientists to the kid in the basement to send signals whenever they want. I can’t argue with that. But I don’t think the fact that we have sent messages means we should give up on the idea of discussing why we’re doing it and why it may or may not be a sound idea. I’m not convinced anyway that any signals yet sent have much likelihood of being received at interstellar distances.

But let’s leave METI alone for a moment. On the general matter of SETI and implications of receiving a signal or finding ETI in astronomical data, I think we’re a bit schizophrenic. When I talk about ‘we,’ I mean western societies, as I have no insights into how other traditions now view the implications of such knowledge. But in the post-Enlightenment tradition of places like my country and yours, contacting ETI is on one level accepted (I think this can be demonstrated in recent polling) while at the same time it is viewed as a mere plot device in movies.

This isn’t skepticism, because that implies an effort to analyze the issue. This is just a holdover of old paradigms. Changing them might take a silver disc touching down and Michael Rennie strolling out. On the day that happens, the world really would stand still.

Let’s add in the fact that we’re short-sighted in terms of working for results beyond the next dividend check (or episode of a favorite show). With long-term thinking in such perilously short supply (and let’s acknowledge the Long Now Foundation’s heroic efforts at changing this), we have trouble thinking about how societies change over time with the influx of new knowledge.

Our own experience says that superior technologies arriving in places without warning can lead to calamity, whether intentional or not, which in and of itself should be a lesson as we ponder signals from the stars. A long view of civilization would recognize how fragile its assumptions can be when faced with sudden intervention, as any 500-year-old Aztec might remind us.

Image: A 17th century CE oil painting depicting the Spanish Conquistadores led by Hernán Cortés besieging the Aztec capital of Tenochtitlan in 1521 CE. (Jay I. Kislak Collection).

Keith, what’s your take on the ‘cat out of the bag’ argument with regard to METI? It seems to me to ignore the real prospect that we can change policy and shape behavior if we find it counterproductive, instead focusing on human powerlessness to control our impulses. Don’t we on the species level have agency here? How naive do you think I am on this topic?

  • Keith Cooper

That is the ‘contact paradox’ in a nutshell, isn’t it? This idea that we’re actively reaching out to ETI, yet we can’t agree on whether it’s safe to do so or not. That’s the purpose of my book, to try and put the discussion regarding contact in front of a wider audience.

In The Contact Paradox, I’m trying not to tell people what they should think about contact, although of course I give my own opinions on the matter. What I am asking is that people take the time to think more carefully about this issue, and about our assumptions, by embarking on the broader debate.

Readers of Centauri Dreams might point out that they have that very debate in the comments section of this website on a frequent basis. And while that’s true to an extent, I think the debate, whether on this site or among researchers at conferences or even in the pages of science fiction, has barely scratched the surface. There are so many nuances and details to examine, so many assumptions to challenge, and it’s all too easy to slip back into the will they/won’t they invade discussion, which to me is a total straw-man argument.

To compound this, while the few reviews that The Contact Paradox has received so far have been nice, I am seeing a misunderstanding arise in those reviews that once again brings the debate back down to the question of whether ETI will be hostile or not. Yet the point I am making in the book is that even if ETI is benign, contact could potentially still go badly, through misunderstandings, or through the introduction of disruptive technology or culture.

Let me give you a hypothetical example based on a science-fiction technology. Imagine we made contact with ETI, and they saw the problems we face on Earth currently, such as poverty, disease and climate change. So they give us some of their technology – a replicator, like that in Star Trek, capable of making anything from the raw materials of atoms. Let’s also assume that the quandaries that I mentioned earlier, about who takes possession of that technology and whether they hoard it, don’t apply. Instead, for the purpose of this argument, let’s assume that soon enough the technology is patented by a company on Earth and rolled out into society to the point that replicators become as common a sight in people’s homes as microwave ovens.

Just imagine what that could do! There would be no need for people to starve or suffer from drought – the replicators could make all the food and water we’d ever need. Medicine could be created on the spot, helping people in less wealthy countries who can’t ordinarily get access to life-saving drugs. And by taking away the need for industry and farming, we’d cut down our carbon emissions drastically. So all good, right?

But let’s flip the coin and look at the other side. All those people all across the world who work in manufacturing and farming would suddenly be out of a job, and with people wanting for nothing, the economy would crash completely, and international trade would become non-existent – after all, why import cocoa beans when you can just make them in your replicator at home? We’d have a sudden obesity crisis, because when faced with an abundance of resources, history tells us that it is often human nature to take too much. We’d see a drugs epidemic like never before, and people with malicious intent would be able to replicate weapons out of thin air. Readers could probably imagine other disruptive consequences of such a technology.

It’s only a thought experiment, but it’s a useful allegory showing that there are pros and cons to the consequences of contact. What we as a society have to do is decide whether the pros outweigh the cons, and to be prepared for the disruptive consequences. We can get some idea of what to expect by looking at contact between different societies on Earth throughout history. Instead of the replicator, consider historical contact events where gunpowder, or fast food, or religion, or the combustion engine have been given to societies that lacked them. What were the consequences in those situations?

This is the discussion that we’re not currently having when we do METI. There’s no risk assessment, just a bunch of ill-thought-out assumptions masquerading as a rationale for attempting contact before we’re ready.

There’s still time though. ETI would really have to be scrutinising us closely to detect our leakage or deliberate signals so far, and if they’re doing that then they would surely already know we are here. So I don’t think the ‘cat is out of the bag’ just yet, which means there is still time to have this discussion, and more importantly to prepare. Because long-term I don’t think we should stay silent, although I do think we need to be cautious, and learn what is out there first, and get ready for it, before we raise our voice. And if it turns out that no one is out there, then we’ve not wasted our time, because I think this discussion can teach us much about ourselves too.

  • Paul Gilster

We’re on the same wavelength there, Keith. I’m not against the idea of communicating with ETI if we receive a signal, but only within the context you suggest, which means thinking long and hard about what we want to do, making a decision based on international consultation, and realizing that any such contact would have ramifications that have to be carefully considered. On balance, we might just decide to stay silent until we have gathered further information.

I do think many people have simply not considered this realistically. I was talking to a friend the other day whose reaction was typical. He had been asking me about SETI from a layman’s perspective, and I was telling him a bit about current efforts like Breakthrough Listen. But when I added that we needed to be cautious about how we responded, if we responded, to any reception, he was incredulous, then thoughtful. “I’ve just never thought about that,” he said. “I guess it just seems like science fiction. But of course I realize it isn’t.”

So we’re right back to paradox. If we have knowledge of the size of the galaxy — indeed, of the visible cosmos — why do we not see more public understanding of the implications? I think people could absorb the idea of a SETI reception without huge disruption, but it will force a cultural shift that turns what had been fiction into the realm of possibility.

But maybe we should now identify the broad context within which this shift can occur. In the beginning of your book, Keith, you say this: “Understanding altruism may ultimately be the single most significant factor in our quest to make contact with other intelligent life in the Universe.”

I think this is exactly right, and the next time we talk, I’d like us to dig into why this statement is true, and its ramifications for how we deal with not only extraterrestrial contact but our own civilization. Along with this, let’s get into that thorny question of ‘deep time’ and how our species sees itself in the cosmos.


G 9-40b: Confirming a Planet Candidate

M-class dwarfs within 100 light years are highly sought-after objects these days, given that any transiting worlds around such stars will present unusually useful opportunities for atmospheric analysis. That’s because these stars are small, allowing a large transit depth — in other words, the planet blocks a comparatively large fraction of the star’s light. Studying a star’s light as it filters through a planetary atmosphere — transmission spectroscopy — can tell us much about the chemical constituents involved. We’ll soon extend that with space-based direct imaging.
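To see why small stars help so much, recall that transit depth scales as the square of the planet-to-star radius ratio. Here’s a minimal sketch – the radii below are illustrative assumptions, not the measured parameters of any particular system:

```python
# Transit depth ~ (Rp / Rs)**2: the fraction of starlight blocked.
R_SUN_KM = 696_000
R_EARTH_KM = 6_371

def transit_depth(planet_radius_earth: float, star_radius_sun: float) -> float:
    """Fractional dip in stellar brightness during a central transit."""
    rp = planet_radius_earth * R_EARTH_KM
    rs = star_radius_sun * R_SUN_KM
    return (rp / rs) ** 2

# The same ~3 Earth-radius sub-Neptune around a Sun-like star
# versus a small M dwarf (~0.3 solar radii):
for rs in (1.0, 0.3):
    print(f"star of {rs} R_sun: depth = {transit_depth(3.0, rs):.2%}")

# star of 1.0 R_sun: depth = 0.08%
# star of 0.3 R_sun: depth = 0.84%
```

An order of magnitude more signal for the same planet, which is why small red dwarfs are such attractive transit targets.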

While the discoveries we’re making today are exciting in their own right, bear in mind that we’re also building the catalog of objects that next generation ground telescopes (the extremely large, or ELT, instruments on the way) and their space-based cousins can examine in far greater depth. And it’s also true that we are tuning up our methods for making sure that our planet candidates are real and not products of data contamination.

Thus a planet called G 9-40b, orbiting its red dwarf host about 90 light years out, is significant not so much for the planet itself as for the methods used to confirm it. Probably the size of Neptune or somewhat smaller, G 9-40b was first flagged by Kepler (in its K2 phase) as a candidate when it was seen to transit the star every six days. Confirmation that this is an actual planet has been achieved through three instruments. The first is the Habitable-zone Planet Finder (HPF), a spectrograph developed at Penn State that has been installed on the 10m Hobby-Eberly Telescope at McDonald Observatory in Texas.

HPF provides high-precision Doppler readings in the infrared, allowing astronomers to exclude possible signals that might have mimicked a transiting world — we now know that G 9-40b is not a close stellar or substellar binary companion. HPF is distinguished by its spectral calibration using a laser frequency comb built by scientists at the National Institute of Standards and Technology and the University of Colorado. The instrument was able to achieve high precision in its radial velocity study of this planet while also observing the world’s transits across the star.

A post on the Habitable Zone Planet Finder blog notes that the brightness of the host star (given its proximity) and the large transit depth of the planet makes G 9-40b “…one of the most favorable sub-Neptune-sized planets orbiting an M-dwarf for transmission spectroscopy with the James Webb Space Telescope (JWST) in the future…”

But the thing to note about this work is the collaborative nature of the validation process, putting different techniques into play. High-contrast adaptive optics imaging at Lick Observatory showed no stellar companions near the target, helping researchers confirm that the transits detected in the K2 mission were indeed coming from the star G 9-40. The Apache Point observations using high-precision diffuser-assisted photometry (see the blog entry for details on this technique) produced a transit plot that agreed with the K2 observations and allowed the team to tighten the timing of the transit. This work grew out of lead author Guðmundur Stefánsson’s doctoral research at Penn State. Says Stefánsson:

“G 9-40b is amongst the top twenty closest transiting planets known, which makes this discovery really exciting. Further, due to its large transit depth, G 9-40b is an excellent candidate exoplanet to study its atmospheric composition with future space telescopes.”

Image: Drawn from the HPF blog. Caption: Precise radial velocities from HPF (left) on the 10m Hobby-Eberly Telescope (right) allowed us to place an upper limit on the mass of the planet of 12 Earth masses. We hope to get a further precise mass constraint by continuing to observe G 9-40 in the future. Image credit: Figure 11a from the paper (left), Gudmundur Stefansson (right).

Near-infrared radial velocities from HPF allowed the team to place the 12 Earth-mass upper limit on the planet, the tightening of which through future work will allow its composition to be constrained. All of this is by way of feeding a space-based instrument like the James Webb Space Telescope with the data it will need to study the planet’s atmosphere. In such ways do we pool the results of our instruments, with HPF continuing its survey of the nearest low-mass stars in search of other planets in the Sun’s immediate neighborhood.
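For a sense of scale on that measurement, the radial velocity semi-amplitude for a circular orbit follows directly from Kepler’s third law. A minimal sketch, assuming an illustrative 0.3 solar-mass host star (an assumed value for a mid-M dwarf, not the measured mass of G 9-40):

```python
import math

# Circular-orbit RV semi-amplitude with sin(i) = 1 and Mp << Ms:
#   K = (2*pi*G / P)**(1/3) * Mp / Ms**(2/3)
G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
DAY = 86_400         # s

def rv_semi_amplitude(planet_mass_earth, star_mass_sun, period_days):
    """Stellar reflex velocity (m/s) induced by the orbiting planet."""
    mp = planet_mass_earth * M_EARTH
    ms = star_mass_sun * M_SUN
    p = period_days * DAY
    return (2 * math.pi * G / p) ** (1 / 3) * mp / ms ** (2 / 3)

# A 12 Earth-mass planet (the HPF upper limit) on a 6-day orbit
# around an assumed 0.3 solar-mass M dwarf:
print(f"K ~ {rv_semi_amplitude(12, 0.3, 6.0):.1f} m/s")   # K ~ 9.4 m/s
```

A reflex signal of several meters per second is within reach of modern precision spectrographs, which is how HPF could place a meaningful bound on the mass.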

The paper is Stefansson et al., “A Sub-Neptune-sized Planet Transiting the M2.5 Dwarf G 9-40: Validation with the Habitable-zone Planet Finder,” Astronomical Journal Vol. 159, No. 3 (12 February 2020). Abstract / preprint.


How NASA Approaches Deep Space Missions

Centauri Dreams reader Charley Howard recently wrote to ask about how NASA goes about setting its mission priorities and analyzing mission concepts like potential orbiter missions to the ice giants. It’s such a good question that I floated it past Ashley Baldwin, who is immersed in the evolution of deep space missions and moves easily within the NASA structure to extract relevant information. Dr. Baldwin had recently commented on ice giant mission analysis by the Outer Planets Advisory Group. But what is this group, and where does it fit within the NASA hierarchy? Here is Ashley’s explanation of this along with links to excellent sources of information on the various mission concepts under analysis for various targets, and a bit of trenchant commentary.

By Ashley Baldwin

Each of the relevant NASA advisory groups has its own page on the NASA site with archives stuffed full of great presentations. The most germane to our discussion here is the Outer Planets Assessment Group (OPAG). My own focus has been on the products OPAG and the other PAGs produce, though OPAG produces the most elegant presentations with interesting subject matter. Product more than process is my focus, along with politics with a little ‘p’ within the NASA administration, and ‘high’ politics with a big P.

There are a number of such “advisory groups” feeding into NASA through its Planetary Science Advisory Committee (PAC), some of them of direct interest to Centauri Dreams readers:

Exoplanet Exploration Program Analysis Group (ExoPAG);

Mars Exploration Program Analysis Group (MEPAG);

Venus Exploration Analysis Group (VEXAG);

Lunar Exploration Analysis Group (LEAG);

Small Bodies Assessment Group (SBAG).

The relative influence of these groups doubtless waxes and wanes over time, with Mars in the ascendancy for a long time and Venus in inferior conjunction for ages. Most were formed in 2004, with the exoplanet group unfortunately a year later (see * below for my thoughts on why and how this happened).

These groups are essentially panels of relevant experts/academics — astronomers, astrophysicists, geophysicists, planetary scientists, astronautical engineers, astrobiologists etc — from within the various NASA centers (JPL, Glenn, Goddard et al.), along with universities and related institutions. The chairpersons are elected and serve a term of three years. James Kasting, for instance, chaired the exoplanetary advisory group ExoPAG during the first decade of this century.

Each group has two to three full member meetings per year which are open to the public. They have set agendas and take the form of plenary sessions discussing presentations – all of which are made available in the meeting archives, which over the years tell the story of what is being prioritised as well as offering a great deal on planetary science. There are also more frequent policy committee meetings, some of which I have attended via Skype. The PAGs also work in collaboration with other space agencies, the European Space Agency (ESA) and Japan Aerospace Exploration Agency (JAXA) in particular. This all creates technological advice that informs and is informed by NASA policy, which is in turn informed politically, as you would imagine. All of this leads to the missions under consideration, such as Europa Clipper, the Space Launch System (SLS), the James Webb Space Telescope (JWST), the International Space Station (ISS) and the planning for future manned Lunar/Martian landings.

NASA can task the advisory groups to produce work relating to particular areas, such as ice giant missions, and to contribute towards the Decadal studies via a report that is due in March of 2023. On the Decadals: The National Research Council (NRC) conducts studies that provide a science community consensus on key questions being examined by NASA and other agencies. The broadest of these studies in NASA’s areas of research are the Decadal surveys.

So once each decade NASA and its partners ask the NRC to project 10 or more years into the future in order to prioritize research areas and develop mission concepts needed to make the relevant observations. You can find links to the most recent Decadal surveys here.

There is obviously jostling and internal competition for each group to get its priorities as high up the Decadal priority list as possible. Bear in mind that there is a similar and equally competitive pyramid lobby for astrophysics, earth science and heliophysics.

Each PAG is encouraged to get its members to both individually and collectively submit ‘white papers’ championing research areas they feel are relevant. That amounts to thousands of papers, so no wonder they need some serious and time-consuming collation to produce the final document. This time around it will be Mars sample return versus the ice giants vying for the all-important top spot (anything less than this and you are unlikely to receive a once-a-decade flagship mission).

The Planetary Science Advisory Committee, in turn, advises the central NASA Advisory Council (NAC). Its members are appointed at the discretion of and are directly advisory to the NASA administrator on all science matters within NASA’s purview. NAC was formed from the merger of several related groups in 1977, though its origins predate NASA’s formation in 1958.

The Discovery (small) and New Frontiers (medium) Planetary Science programmes (with “flagship” missions like Clipper effectively being “large,” occurring generally once per decade) each run over a five-year cycle, with one New Frontiers mission picked each round and up to two Discovery missions chosen. This comes after short-listing from all concepts submitted in response to an “announcement of opportunity” – the formal NASA application process. The Discovery and New Frontiers programmes are staggered, as are the missions chosen under those programmes, with the aim of having a mission launch roughly every 24 months, presumably to help spread out operational costs.

Both Discovery and New Frontiers come with a set budget cap: $850-1000 million for New Frontiers and $500 million for Discovery. However, on top of this they receive a free launcher (from a preselected list) and some or all operational costs for the duration of the primary mission (which without extensions is about 2 years for a Discovery mission like InSight and 3-4 years for a New Frontiers mission). There are also varying amounts of additional government furnished equipment (GFE) on offer, consisting of equipment, special tooling, or special test equipment.

Sometimes other additional-cost technology is included, such as multi-mission radioisotope thermoelectric generators (MMRTGs). Two have been slotted this time around for Discovery, which is very unusual as MMRTGs are at a premium and generally limited to New Frontiers missions or bigger. There were three on offer for last year’s New Frontiers round, but as Dragonfly to Titan only needs one, there were two left over, and they only have a limited shelf life.

This Discovery round has also broken with former policy in that ALL operations costs are being covered, including those outside of the mission proper (i.e., whilst in transit to the target), thus removing cost penalties for missions with long transit times, like Trident to Triton. Even in hibernation, systems engineering and the maintenance of a science team together add up to several million dollars per year. A big clue as to NASA’s Planetary Science Division’s priorities? I hope so!

The Explorer programme is the Astrophysics Division’s parallel process, run in similar fashion, with one Medium Explorer and one Small Explorer (budget $170 million) picked every five years, though each programme is again staggered to push out a mission about every two and a half years. There is some talk of the next Decadal study creating a funded “Probe” programme. Such programmes are generally only conceptual, but there is talk of a $1 billion budget for some sort of astrophysics mission, hopefully exoplanet related. No more than gossip at this point, though.

* And here is the ExoPAG bone of contention I mentioned above. Kepler was selected as a Discovery mission in 2003, prior to the formation of ExoPAG, and the rest of the planetary science groups went ballistic. This led to NASA excluding exoplanet missions from future Discovery and New Frontiers rounds. Despite the tremendous success of Kepler, this limited ExoPAG to analogous but smaller Astrophysics Explorer funding. These are small- and medium-class, PI-led astrophysics missions, as well as astrophysics missions of opportunity.

Imagine what could have been produced, for instance, if the ESA’s ARIEL (or EChO) transit telescope had been done in conjunction with a New Frontiers budget instead of Astrophysics Explorer. The Medium Explorer budget reaches $200 million plus; New Frontiers gets up to $850-1000 million.


Juno: Looking Deep into Jupiter’s Atmosphere

We’re learning more about the composition of Jupiter’s atmosphere, and in particular, the amount of water therein, as a result of data from the Juno mission. The data come in the 1.25 to 22 GHz range from Juno’s microwave radiometer (MWR), depicting the deep atmosphere in the equatorial region. Here, water (considered in terms of its component oxygen and hydrogen) makes up about 0.25 percent of the molecules in Jupiter’s atmosphere, almost three times the percentage found in the Sun. All of this gets intriguing when compared to the results from Galileo.

You’ll recall that the Galileo probe descended into the Jovian atmosphere back in 1995, sending back spectrometer measurements of the water it found down to a depth of almost 120 kilometers, where atmospheric pressure reached 320 pounds per square inch (22 bar). Unlike Juno, Galileo suggested that Jupiter might be dry compared to the Sun — it found a tenth as much water as expected — but it also found water content increasing even as it reached its greatest depth, an oddity given the assumption that mixing in the atmosphere would create a constant water content. Did Galileo run into some kind of meteorological anomaly?
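As a quick sanity check on the pressures quoted in this piece (1 bar is 14.5038 psi):

```python
# Convert the quoted pressures between psi and bar.
PSI_PER_BAR = 14.5038

for psi in (320, 480):   # Galileo's last reading; Juno's deepest sensing level
    print(f"{psi} psi = {psi / PSI_PER_BAR:.1f} bar")

# 320 psi = 22.1 bar
# 480 psi = 33.1 bar
```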

A new paper in Nature Astronomy looks at the matter as part of its analysis of the Juno results, which also depict an atmosphere not well mixed:

The findings of the Galileo probe were puzzling because they showed that where ammonia and hydrogen sulfide become uniformly mixed occurs at a level much deeper (~10 bar) than what was predicted by an equilibrium thermochemical model. The concentration of water was subsolar and still increasing at 22 bar, where radio contact with the probe was lost, although the concentrations of nitrogen and sulfur stabilized at ~3 times solar at ~10 bar. The depletion of water was proposed to be caused by meteorological effects at the probe location. The observed water abundance was assumed not to represent the global mean water abundance on Jupiter, which is an important quantity that distinguishes planetary formation models and affects atmospheric thermal structure.

Now Juno has found water content greater than what Galileo measured. But the fact that Galileo showed a water concentration that was still increasing when the probe no longer could send data makes its results inconclusive. The matter is important for those interested in planet formation because as the likely first planet to form, Jupiter would have contained the great bulk of gas and dust that did not go into the composition of the Sun. Thus planet formation models are keyed to factors like the amount of water the young planet would have assimilated. Scott Bolton, Juno principal investigator at the Southwest Research Institute in San Antonio, comments:

“Just when we think we have things figured out, Jupiter reminds us how much we still have to learn. Juno’s surprise discovery that the atmosphere was not well mixed even well below the cloud tops is a puzzle that we are still trying to figure out. No one would have guessed that water might be so variable across the planet.”

Image: The JunoCam imager aboard NASA’s Juno spacecraft captured this image of Jupiter’s southern equatorial region on Sept. 1, 2017. The image is oriented so Jupiter’s poles (not visible) run left-to-right of frame. Credit: NASA/JPL-Caltech/SwRI/MSSS/Kevin M. Gill.

The research team, led by Cheng Li (JPL/Caltech), used data from Juno’s first eight science flybys, focusing on the equatorial region first because the atmosphere appears to be better mixed there than in other regions. Juno’s microwave radiometer can measure the absorption of microwave radiation by water at multiple depths at the same time. Using these methods, Juno could collect data from deeper in the atmosphere than Galileo, down to where pressures reach about 480 psi (33 bar). The next move will be to compare this with other regions, giving us a picture of water abundance as Juno coverage extends deeper into Jupiter’s northern hemisphere. Of particular interest will be what Juno finds at the planet’s poles.

From the paper:

We have shown that the structure of Jupiter’s EZ [equatorial zone] is steady, relatively uniform vertically and close to a moist adiabat [a region where heat does not enter or leave the system]; from this we have derived its water abundance. The thermal structure outside of the equator is still ambiguous owing to the non-uniform distribution of ammonia gas, for which we do not know the physical origin. Deriving the thermal structure outside of the equator in the future not only hints about the water abundance on Jupiter at other latitudes but also places constraints on the atmospheric circulation model for giant planets in the Solar System and beyond.

Image: Thick white clouds are present in this JunoCam image of Jupiter’s equatorial zone. These clouds complicate the interpretation of infrared measurements of water. At microwave frequencies, the same clouds are transparent, allowing Juno’s Microwave Radiometer to measure water deep into Jupiter’s atmosphere. The image was acquired during Juno’s flyby of the gas giant on Dec. 16, 2017. Credit: NASA/JPL-Caltech/SwRI/MSSS/Kevin M. Gill.

The authors add that Juno has already revealed a deep atmosphere that is surprisingly variable as a function of latitude, highlighting the need to tread cautiously before making any assumptions about the planet’s overall water abundance. Extending these observations into other regions of the planet will be useful because oxygen is the most common element in Jupiter’s atmosphere after hydrogen and helium, and water ice may thus have been the primary condensable in the protoplanetary disk. Consider this a deep probe into planet formation.

The paper is Li et al., “The water abundance in Jupiter’s equatorial zone,” Nature Astronomy 10 February 2020 (abstract).


Trident: Firming up the Triton Flyby

It’s not a Triton orbiter, or even a Neptune orbiter, but Trident is still an exciting mission, a Triton flyby that would take a close look at the active resurfacing underway on this remarkable moon. Trident has recently been selected by NASA’s Discovery Program as one of four science investigations, one or two of which will be chosen at the end of the study for development and launch in the 2020s.

These are nine-month studies, and they include, speaking of young and constantly changing surfaces, the Io Volcanic Observer (IVO). The other two missions are the Venus Emissivity, Radio Science, InSAR, Topography, and Spectroscopy (VERITAS) mission, and DAVINCI+ (Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging Plus).

Each of these studies will receive $3 million to bring its concepts to fruition, concluding with a Concept Study Report, at which point we’ll get word on the one or two that have made it to further development and flight. The NASA Discovery program has been in place since 1992, dedicated to supporting smaller missions with lower cost and shorter development times than the larger flagship missions. That these missions can have serious clout is obvious from some of the past selections: Kepler, Dawn, Deep Impact, MESSENGER, Stardust and NEAR.

Active missions at the moment include Lunar Reconnaissance Orbiter and InSight, but we leave the inner system with Lucy, a Discovery mission visiting a main belt asteroid as well as six Jupiter trojans, and Psyche, which will explore the unusual metal asteroid 16 Psyche. Discovery missions carry a $500 million cost-cap excluding launch vehicle, operations, data analysis and partner contributions. The next step up in size is New Frontiers, now with a $1 billion cost-cap — here we can mention New Horizons, OSIRIS-REx and Juno, as well as Dragonfly.

I assume that New Horizons’ success at Pluto/Charon helped Trident along, showing how much good science can be collected from a flyby. Triton makes for a target of high interest because of its atmosphere and erupting plumes, along with the potential for an interior ocean. The goal of Trident is to characterize the processes at work while mapping a large swath of Triton and learning whether in fact the putative ocean beneath the surface exists. A mid-2020s launch takes advantage of a rare and efficient gravity assist alignment to make the mission feasible. Louise Prockter, director of the Lunar and Planetary Institute in Houston, is principal investigator.

Image: Dr. Louise Prockter, program director for the Universities Space Research Association, as well as director of the Lunar and Planetary Institute, is now principal investigator for Trident. Credit: USRA.

We can thank Voyager 2 for providing our only close-up images of Triton, which was revealed to be a place where explosive venting blows dark material from beneath the ice into the air, material which falls back onto the surface to create new features. The terrain is varied and notable for the striking ‘cantaloupe’ pattern covering large areas. With its distinctive retrograde orbit, moving opposite to Neptune’s rotation, and its high orbital inclination, Triton may well be an object captured from the Kuiper Belt, in an orbit where tidal forces likely lead to interior heating that could maintain an ocean. What we learn here could inform our understanding not just of KBOs, but also of giant moons like Titan and Europa, and smaller ocean worlds like Enceladus.

This would be a flyby with abundant opportunities for data collection, as this precis from the 2019 Lunar and Planetary Science Conference makes clear:

An active-redundant operational sequence ensures unique observations during an eclipse of Triton – and another of Neptune itself – and includes redundant data collection throughout the flyby… High-resolution imaging and broad-spectrum IR imaging spectroscopy, together with high-capacity onboard storage, allow near-full-body mapping over the course of one Triton orbit… Trident passes through Triton’s thin atmosphere, within 500 km of the surface, sampling its ionosphere with a plasma spectrometer and performing magnetic induction measurements to verify the existence of an extant ocean. Trident’s passage through a total eclipse allows observations through two atmospheric radio occultations for mapping electron and neutral atmospheric density, Neptune-shine illuminated eclipse imaging for change detection since the 1989 Voyager 2 flyby, and high-phase angle atmospheric imaging for mapping haze layers and plumes.

Image: Global color mosaic of Triton, taken in 1989 by Voyager 2 during its flyby of the Neptune system. Color was synthesized by combining high-resolution images taken through orange, violet, and ultraviolet filters; these images were displayed as red, green, and blue images and combined to create this color version. With a radius of 1,350 kilometers (839 mi), about 22% smaller than Earth’s moon, Triton is by far the largest satellite of Neptune. It is one of only three objects in the Solar System known to have a nitrogen-dominated atmosphere (the others are Earth and Saturn’s giant moon, Titan). Triton has the coldest surface known anywhere in the Solar System (38 K, about -391 degrees Fahrenheit); it is so cold that most of Triton’s nitrogen is condensed as frost, making it the only satellite in the Solar System known to have a surface made mainly of nitrogen ice. The pinkish deposits constitute a vast south polar cap believed to contain methane ice, which would have reacted under sunlight to form pink or red compounds. The dark streaks overlying these pink ices are believed to be an icy and perhaps carbonaceous dust deposited from huge geyser-like plumes, some of which were found to be active during the Voyager 2 flyby. The bluish-green band visible in this image extends all the way around Triton near the equator; it may consist of relatively fresh nitrogen frost deposits. The greenish areas include what is called the cantaloupe terrain, whose origin is unknown, and a set of “cryovolcanic” landscapes apparently produced by icy-cold liquids (now frozen) erupted from Triton’s interior.
Credit: NASA/JPL/USGS.

If it flies, Trident would launch in 2026 and reach Triton in 2038, using gravity assists at Venus, the Earth and, finally, Jupiter for a final course deflection toward Neptune. The current thinking is to bring the spacecraft, which will weigh about twice New Horizons’ 478 kg, within 500 kilometers of Triton, a close pass indeed compared to New Horizons’ 12,500-kilometer pass by Pluto. That is close enough for the spacecraft to sample Triton’s ionosphere and conduct the needed magnetic induction measurements to confirm or refute the existence of its ocean. As this mission firms up, we’ll be keeping a close eye on its prospects in the outer system. Remember, too, the 2017 workshop in Houston examining a possible Pluto orbiter, still a long way from being anything more than a concept, but interesting enough to make the pulse race.
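As a rough plausibility check on that timeline, consider the implied average speed, assuming Neptune at about 30 AU – the real trajectory, with its Venus, Earth and Jupiter gravity assists, is anything but a straight line:

```python
# Order-of-magnitude cruise speed for a 2026 launch and 2038 arrival.
AU_KM = 1.496e8     # kilometers per astronomical unit
YEAR_S = 3.156e7    # seconds per year

distance_km = 30 * AU_KM    # assumed straight-line distance to Neptune
cruise_s = 12 * YEAR_S      # 2026 -> 2038

print(f"mean speed ~ {distance_km / cruise_s:.1f} km/s")   # ~ 11.9 km/s
```

For comparison, New Horizons left Earth at about 16 km/s, so a 12-year cruise to Neptune with gravity assists along the way is demanding but hardly outlandish.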

My friend Ashley Baldwin, who sent along some good references re Trident, also noted that Trident’s trajectory is such that the gravity assist around Jupiter could, at 1.24 Jupiter radii, provide a close flyby of Io. Interesting in terms of the competing Io Volcanic Observer entry.
