Centauri Dreams
Imagining and Planning Interstellar Exploration
Voyager and the Deep Space Network Upgrade
The fault protection routines programmed into Voyager 1 and 2 were designed to protect the spacecraft in the event of unforeseen circumstances. Such an event occurred in late January, when a rotation maneuver planned to calibrate Voyager 2’s onboard magnetic field instrument did not execute as scheduled. The unexpected delay left two systems consuming high levels of power (in Voyager terms) at the same time, overdrawing the available power supply.
We looked at this event not long after it happened, and noted that within a couple of days, the Voyager team was able to turn off one of the systems and turn the science instruments back on. Normal operations aboard Voyager 2 were announced on March 3, with the five science instruments that had been turned off once again returning data. Such autonomous operation is reassuring because Voyager 2 is now going to lose the ability to receive commands from Earth, owing to upgrades to the Deep Space Network in Australia. This is a temporary situation, but one that will last the entire 11 months of the upgrade period.
Fortunately, scientists will still be able to receive science data from the craft, which is now 17 billion kilometers from Earth, but they will not be able to send commands to it during this period. The Canberra site is critical to the Voyager interstellar mission because its 70-meter antenna is the only one of the DSN’s three 70-meter antennae that can communicate with Voyager 2, which is moving below the Earth’s orbital plane in such a way that it can only be seen from the southern hemisphere. Thus the California (Goldstone) and Spain (Robledo de Chavela) sites are ruled out, and there is no southern hemisphere antenna other than Canberra’s DSS43 capable of sending S-band signals powerful enough to reach Voyager 2.
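The distances involved are worth pausing over. A quick back-of-the-envelope calculation (my own illustration, not from the mission team) shows how long any signal takes to cross the gap:

```python
# Light travel time to Voyager 2, using the ~17 billion km
# distance quoted in the article.
SPEED_OF_LIGHT_KM_S = 299_792.458   # km per second

distance_km = 17e9                   # approximate distance to Voyager 2

one_way_hours = distance_km / SPEED_OF_LIGHT_KM_S / 3600
round_trip_hours = 2 * one_way_hours

print(f"One-way signal time: {one_way_hours:.1f} hours")
print(f"Round trip (command + confirmation): {round_trip_hours:.1f} hours")
```

That works out to roughly 16 hours each way, so even in normal operations a command and its confirmation take well over a day, which is part of why robust onboard fault protection matters so much.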
Image: DSS43 is a 70-meter-wide (230-feet-wide) radio antenna at the Deep Space Network’s Canberra facility in Australia. It is the only antenna that can send commands to the Voyager 2 spacecraft. Credit: NASA/Canberra Deep Space Communication Complex.
The maintenance at DSS43 is essential, because the network must meet communication and navigation needs for missions like the Mars 2020 rover and for future exploration of both the Moon and Mars, including, at some point, the crewed lunar missions of the Artemis program. Canberra has, in addition to the 70-meter dish, three 34-meter antennae that can receive the Voyager 2 signal, but they are unable to transmit commands. During the period in question, Voyager 2 will continue to return data, according to Voyager project manager Suzanne Dodd:
“We put the spacecraft back into a state where it will be just fine, assuming that everything goes normally with it during the time that the antenna is down. If things don’t go normally – which is always a possibility, especially with an aging spacecraft – then the onboard fault protection that’s there can handle the situation.”
Expect the work at Canberra to be completed by January of 2021, placing an updated and more reliable antenna back into service and, presumably, continuing the active work managing Voyager 2’s ongoing mission. Better this, engineers reason, than dealing with future unplanned outages as DSS43 ages, while the upgrades will add state-of-the-art technology to the site. Putting all this in perspective is the fact that the dish has been in service for fully 48 years.
Calculating Life’s Possibilities on Titan
With surface temperatures around -180°C, Titan presents problems for astrobiology, even if its seasonal rainfall, lakes and seas, and nitrogen-rich atmosphere bear similarities to Earth. Specifically, what kind of cell membrane can form and function in an environment this cold? Five years ago, researchers at Cornell used molecular simulations to screen for the possibilities, suggesting a membrane the scientists called an azotosome, which would be made out of molecules containing nitrogen, carbon and hydrogen known to exist in Titan’s seas.
The azotosome was a useful construct because the phospholipid bilayer membranes giving rise to liposomes on Earth need an analog that can survive Titan’s conditions, a methane-based membrane that can form at cryogenic temperatures. And the Cornell work suggested that azotosomes would offer flexibility similar to that of cell membranes found on Earth. Titan’s seas of methane and ethane, then, might offer a novel form of life the chance to emerge.
Now we have new work out of Chalmers University of Technology in Gothenburg, Sweden that raises serious doubts about whether azotosomes could develop on Titan. The Cornell work examined the liquid organic compound acrylonitrile, found in Titan’s atmosphere, and built the azotosome idea around it, but the Swedish team’s calculations show that azotosomes are unlikely to be able to self-assemble in Titan’s conditions, because the acrylonitrile would instead crystallize into its molecular ice.
Martin Rahm (Department of Chemistry and Chemical Engineering, Chalmers University of Technology) is co-author of the paper:
“Titan is a fascinating place to test our understanding of the limits of prebiotic chemistry – the chemistry that precedes life. What chemical, or possibly biological, structures might form, given enough time under such different conditions? The suggestion of azotosomes was a really interesting proposal for an alternative to cell membranes as we understand them. But our new research paper shows that, unfortunately, although the structure could indeed tolerate the extremes of Titan, it would not form in the first place.”
This is interesting work, and not only because we are on track to launch Dragonfly in 2026, a mission to investigate the surface and sample different locations around the moon in an assessment of prebiotic chemistry. What we’re seeing is the emergence of computational astrobiology, the necessary follow-on to studies like the predictive work of 2015. The idea is to model the properties and formation routes of the materials proposed as supporting possible biological processes. In this case, we learn that the azotosome structure that looked so promising is not thermodynamically feasible.
But this work hardly eliminates the possibility of life on Titan. What if, the authors speculate, the cell structure itself is not critical? From the paper:
…on Titan, any hypothetical life-bearing macromolecule or crucial machinery of a life form will exist in the solid state and never risk destruction by dissolution. The question is then whether these biomolecules would benefit from a cell membrane. Already rendered immobile by the low temperature, biological macromolecules on Titan would need to rely on the diffusion of small energetic molecules, such as H2, C2H2, or HCN, to reach them in order for growth or replication to ensue. Transport of these molecules might proceed in the atmosphere or through the surrounding methane/ethane environment. A membrane would likely hinder this beneficial diffusion. Similarly, a membrane would likely hinder necessary removal of waste products of metabolism, such as methane and nitrogen, in the opposite direction.
Image: Researchers looking for life on Titan, Saturn’s largest moon, used quantum mechanical calculations to investigate the viability of azotosomes, a potential form of cell membrane. Credit: NASA / Yen Strandqvist / Chalmers.
At this stage, as the authors note, the limits of prebiotic chemistry and biology on Titan will have to stay in the realm of speculation, but computations like these can inform the choice of sites for Dragonfly as it explores the moon, helping us to match the reality on the ground with theory.
The paper is Sandström & Rahm, “Can polarity-inverted membranes self-assemble on Titan?” Science Advances Vol. 6, No. 4 (24 January 2020). Full text. The 2015 paper on azotosomes is Stevenson, Lunine & Clancy, “Membrane alternatives in worlds without oxygen: Creation of an azotosome,” Science Advances Vol. 1, No. 1 (27 February 2015), e1400067 (full text).
On Freeman Dyson
Freeman Dyson’s response to the perplexity of our existence was not purely scientific. A polymath by nature, he responded deeply to art and literature and often framed life’s dilemmas through their lens. Always thinking of himself as a mathematician first, he unified the competing formulations of quantum electrodynamics and saw the Nobel Prize go to the three who had formulated, in their different ways, its structure, but he would cast himself as the Ben Jonson to Richard Feynman’s Shakespeare, a fact noted by Gregory Benford in his review of Phillip F. Schewe’s recent biography. That would be a typical allusion for a man whose restless intellect chafed at smug over-specialization, something neither he nor Feynman could ever be accused of.
Feynman, Julian Schwinger and Shinichiro Tomonaga each came up with ways to describe how electrons and photons interrelate, but it was Dyson, on one of his long cross-continental bus trips, who worked out the equivalence of their theories, giving us QED. He would publish the unifying paper in Physical Review in 1949. A year later, he met Tomonaga at Princeton, describing him in a June 24, 1950 letter to his parents as “a charming man, like so many of the really good ones. He talked with me for three hours with much humour and common sense… I have the impression that he is an exceptionally unselfish person.”
Which is exactly the impression I had of Dyson in the one interaction (other than email) I had with him, back in 2003 while I was pulling together material for Centauri Dreams and called the Institute for Advanced Study, his scholarly home since 1953, to schedule an interview. It was a spring day and, unfortunately for my purposes, a loud lawn mower was moving up and down outside Dyson’s window. I had to shout to be heard and had trouble hearing him, but we persisted with much repetition and his good humor.
Always associated with Project Orion, the dramatic concept to propel a spacecraft by exploding nuclear charges behind it, Dyson had moved away from the idea, and indeed from nuclear energy entirely. He wanted to talk about microwave and laser propulsion, and expressed an interest in Clifford Singer’s ideas on pellet streams, an idea he liked because of the lack of diffraction. Over a close pass by the outside lawnmower, I heard him clearly: “Nuclear energy doesn’t cut it! Nuclear energy is too small. You’re using less than one percent of the mass with any kind of nuclear reaction so you’re limited to less than a tenth of lightspeed. Nuclear is great inside the Solar System, but not very interesting outside of it.”
If you would know something of this man, of his values and his conception of life, I direct you to the splendid Maker of Patterns: An Autobiography Through Letters, published in 2018. The concept is daring, for by eschewing standard autobiography to present himself largely through letters he wrote at the time, Dyson gives up the opportunity to edit his persona. None of us can point to a lifetime without contradiction, which is just another way of describing growth. Dyson was willing for that growth to be in full view. Thus the Dyson of 1958, writing about the Project Orion work he would later discount:
The basic idea is absurdly simple. One is amazed that nobody thought of it before. But the only man who could think of it was somebody who had been working and thinking for years with bombs, so that he could know exactly what a bomb of a given size will do. It was not an accident that this man happened to be Ted [Taylor]. The problem is to convince oneself that one can sit on top of a bomb without being fried… Ted’s genius led him to question the obvious impossibility. For the last six months Ted has spent his time talking to people in the government and trying to convince them that this idea is not crazy. He has had a hard time. But it seems we have now a lot of influential people on our side… Ted and I will fly to Los Alamos this evening. We travel like Paul and Barnabas.
Nothing would come of these travels, of course, because of the signing of the Limited Test Ban Treaty of 1963, though Dyson would later support the treaty amid his deep concern over nuclear destruction. The idea of Orion still tantalizes many interstellar advocates today.
The lack of self-justifying ego — so rare in all too many quarters — that informs Dyson’s writings informs his wide reach into non-scientific markets, where he became the eloquent explainer of concepts he worked with in the course of his long life. I doubt there are many Centauri Dreams readers who do not have at least a few of his titles, books like Disturbing the Universe (1979) and Infinite in All Directions (1988). So many concepts sprang from his insistence on seeing things from a cosmological perspective, including, for our interstellar purposes, the Dyson sphere and the self-replicating biological probe he called the ‘astrochicken,’ enabled by artificial intelligence.
Image: Around the table clockwise are Dyson, Gregory Benford, Jim Benford and David Brin. Taken Jan. 30, 2019, before a discussion between Greg and Dyson at the Clarke Center (available here on YouTube).
All of these concepts he could relate to the general public through a style that was at once clear and enabling, so that the reader would, like this one, often look up from his or her reading to take in the audacity of ideas that were as logical as they were innovative. The archives of this site are awash with references to Dyson’s contributions, a tribute to his range and his reach. Remarkably, that intellect never deserted him even as his physical strength began to fail. Jim Benford, who had known Dyson since the 1960s, told me on the day of Dyson’s death that he had continued his yearly trips across the country to his La Jolla (CA) residence up until last year. This time around, at 96, he told Jim his doctors had argued against it. He would die a week later, a loss as deep to this field as his contribution was rich.
We shall know what we go to Mars for only after we get there. The study of whatever forms of life exist on Mars is likely to lead to better understanding of life in general. This may well be of more benefit to humanity than irrigating ten Saharas. But that is only one of many reasons for going. The main purpose is a general enlargement of human horizons.
Thus Dyson in a letter from La Jolla in 1958. Really, you must read Maker of Patterns. And from my 2003 interview with him:
Look at how people spread around the Earth. It’s not clear why we want to travel so much, but we do. It seems to be characteristic of humans from the time we left Africa. Why do people leave Africa to spread out to all these desolate places, to Siberia and across the Pacific? We know that people just do this. It’s part of human nature…
I think of him foremost as a deeply sane man, one who saw both the aspirations of the human mind as well as its limitations and took on the challenge of explaining life’s mysteries with a fierce joy. No one who reads, and re-reads, his essays and papers can miss this affirmation of mind at work, always building in new directions, unifying, shaping, questioning. It would be superfluous to try to summarize his many accomplishments in one post, for we will, inevitably, be turning his ideas over in our discussions for the rest of the lifetime of Centauri Dreams.
Exploring the Contact Paradox
Keith Cooper is a familiar face on Centauri Dreams, both through his own essays and the dialogues he and I have engaged in on interstellar topics. Keith is the editor of Astronomy Now and the author of both The Contact Paradox: Challenging Assumptions in the Search for Extraterrestrial Intelligence (Bloomsbury Sigma), and Origins of the Universe: The Cosmic Microwave Background and the Search for Quantum Gravity (Icon Books) to be published later this year. The Contact Paradox is a richly detailed examination of the history and core concepts of SETI, inspiring a new set of conversations, of which this is the first. With the recent expansion of the search through Breakthrough Listen, where does SETI stand both in terms of its likelihood of success and its perception among the general public?
- Paul Gilster
Keith, we’re 60 years into SETI and no contact yet, though there are a few tantalizing things like the Wow! signal to hold our attention. Given that you have just given us an exhaustive study of the field and mined its philosophical implications, what’s your take on how this lack of results is playing with the general public? Are we more or less ready today than we were in the days of Project Ozma to receive news of a true contact signal?
And despite what we saw in the film Contact, do you think the resultant clamor would be as widespread and insistent? Because to me, one of the great paradoxes about the whole idea of contact is that the public seems to get fired up for the idea in film and books, but relatively uninterested in the actual work that’s going on. Or am I misjudging this?
- Keith Cooper
What a lot of people don’t realise is just how big space is. Our Galaxy is home to somewhere between 100 billion and 200 billion stars. Yet, until Yuri Milner’s $100 million Breakthrough Listen project, we had looked and listened, in detail, at about a thousand of those stars. And when I say listened closely, I mean we pointed a telescope at each of those stars for half an hour or so. Even Breakthrough Listen, which will survey a million stars in detail, finds the odds stacked against it. Let’s imagine there are 10,000 technological species in our Galaxy. That sounds like a lot, but on average we’d have to search between 10 million and 20 million stars just to find one of those species.
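Keith’s odds are easy to verify with a quick calculation (my own illustrative sketch of the arithmetic, not from the book):

```python
# If N technological species are spread evenly among the Galaxy's stars,
# the expected number of stars to search per detection is stars / N.
def stars_per_species(total_stars: float, species: int) -> float:
    return total_stars / species

species = 10_000               # hypothetical number of technological species
low, high = 100e9, 200e9       # estimated star count of the Milky Way

print(f"Stars to search per species: "
      f"{stars_per_species(low, species):,.0f} to "
      f"{stars_per_species(high, species):,.0f}")
```

Even under the generous assumption of 10,000 civilizations, the averages come out to 10 million to 20 million stars per detection, which puts Breakthrough Listen’s million-star survey into sobering perspective.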
And remember, we’re only listening for a short time. If they’re not transmitting during that time frame, then we won’t detect them, at least not with a radio telescope. Coupled with the fact that incidental radio leakage will be much harder to detect than we thought, then it’s little wonder that we’ve not found anyone out there yet. Of course, the public doesn’t see these nuances – they just see that we’ve been searching for 60 years and all we’ve found is negative or null results. So I’m not surprised that the public are often uninspired by SETI.
Some of this dissatisfaction might stem from the assumptions made in the early days of SETI, when it was assumed that ETI would be blasting out messages through powerful beacons that would be pretty obvious and easy to detect. Clearly, that doesn’t seem to be the case. Maybe that’s because they’re not out there, or maybe it’s because the pure, selfless altruism required to build such a huge, energy-hungry transmitter to beam messages to unknown species is not very common in nature. Certainly on Earth, in the animal kingdom, altruism usually operates either on the basis of protecting one’s kin, or via quid pro quo, neither of which lend themselves to encouraging interstellar communication.
So I think we – that is, both the public and the SETI scientific community – need to readjust our expectations a little bit.
Are we ready to receive a contact signal? I suspect that we think we are, but that’s different from truly being ready. Of course, it depends upon a number of variables, such as the nature of the contact, whether we can understand the message if one is sent, and whether the senders are located close in space to us or on the other side of the Galaxy. A signal detected from thousands of light years away, whose message content we can’t decode, will have much less impact than one from, say, 20 or 30 light years away, whose message content we can decode and whose senders we might even begin to communicate with on a regular basis.
- Paul Gilster
I’ll go further than that. To me, the optimum SETI signal to receive first would be one from an ancient civilization, maybe one way toward galactic center, which by virtue of its extreme distance would make contact a non-threatening experience. Or at least it would if we quickly went to work on expanding public understanding of the size of the Galaxy and the Universe itself, as you point out. An even more ancient signal from a different galaxy would be better still, as even the most rabid conspiracy theorist would have little sense of immediate threat.
I suppose the best scenario of all would be a detection that demonstrated other intelligent life somewhere far away in the cosmos, and then a century or so for humanity to digest the idea, working it not only into popular culture, but also into philosophy, art, so that it becomes a given in our school textbooks (or whatever we’ll use in the future in place of school textbooks). Then, if we’re going to receive a signal from a relatively nearby system, let it come after this period of acclimatization.
Great idea, right? As if we could script what happens when we’re talking about something as unknowable as SETI contact. I don’t even think we’d have to have a message we could decode at first, because the important thing would be the simple recognition of the fact that other civilizations are out there. On that score, maybe Dysonian SETI turns the trick with the demonstration of a technology at work around another star. The fact of its existence is what we have to get into our basic assumptions about the universe. I used to assume this would be easy and come soon, and while I do understand about all those stars out there, I’m still a bit puzzled that we haven’t turned up something. I’d call that no more than a personal bias, but there it is.
Image: The Parkes 64m radio telescope in Parkes, New South Wales, Australia with the Milky Way overhead. Breakthrough Listen is now conducting a survey of the Milky Way galactic plane over 1.2 to 1.5 GHz and a targeted search of approximately 1000 nearby stars over the frequency range 0.7 to 4 GHz. Credit: Wikimedia Commons / Daniel John Reardon.
- Keith Cooper
It’s the greatest puzzle that there is. Radio SETI approaches things from the assumption that ET just sat at home belting out radio signals, and yet, as we know, the Universe is so old that ET has had ample time to reach us, or to build some kind of Dysonian artefact, or to do something to make their presence more obvious. And over the years we’ve all drawn our own conclusions as to why this does not seem to be the case – maybe they are here but hidden, watching us like we’re in some kind of cosmic zoo. Or maybe interstellar travel and building megastructures are more difficult than we envision. Perhaps they are all dead, or technological intelligence is rare, or they were never out there in the first place. We just don’t know. All we can do is look.
I think science fiction has also trained us to expect alien life to be out there – and I don’t mean that as a criticism of the genre. Indeed, in The Contact Paradox, I often use science fiction as allegory, largely because that’s where discussions about what form alien life may take and what might happen during contact have already taken place. So let me ask you this, Paul: From all the sf that you’ve read, are there any particular stories that stand out as a warning about the subtleties of contact?
- Paul Gilster
I suppose my favorite of all the ‘first contact through SETI’ stories is James Gunn’s The Listeners (1972). Here we have multiple narrators working a text that is laden with interesting quotations. Gunn’s narrative methods go all the way back to Dos Passos and anticipate John Brunner (think Stand on Zanzibar, for example). It’s fascinating methodology, but beyond that, the tumult that greets the decoding of an image from Capella transforms into acceptance as we learn more about a culture that seems to be dying and await what may be the reply to a message humanity had finally decided to send in response. So The Listeners isn’t really a warning as much as an exploration of this tangled issue in all its complexity.
Of course, if we widen the topic to go beyond SETI and treat other forms of contact, I love what Stanislaw Lem did with Solaris (1961). A sentient ocean! I also have to say that I found David Brin’s Existence (2012) compelling. Here competing messages are delivered by something akin to Bracewell probes, reactivated after long dormancy. Which one do you believe, and how do you resolve deeply contradictory information? Very interesting stuff! I mean, how do we respond if we get a message, and then a second one saying “Don’t pay any attention to that first message?”
What are some of your choices? I could go on for a bit about favorite science fiction but I’d like to hear from you. I assume Sagan’s Contact (1985) is on your list, but how about dazzling ‘artifact’ contact, as in the Strugatsky brothers’ Roadside Picnic (1972)? And how do we fit in Cixin Liu’s The Three-Body Problem (2008)? At first glance, I thought we were talking about Alpha Centauri, but the novel shows no familiarity with the actual Centauri system, while still being evocative and exotic. Here the consequences of contact are deeply disturbing.
- Keith Cooper
I wish I were as well read as you are, Paul! I did read The Three-Body Problem, but it didn’t strike a chord with me, which is a shame. For artefact contact, however, I have to mention the Arthur C. Clarke classic, Rendezvous with Rama (1973). One of the things I liked about that story is that it removed us from the purpose of Rama. We just happened to be bystanders, oblivious to Rama’s true intent and destination (at least until the sequel novels).
Clarke’s story feels relevant to SETI today, in which embracing the search for ‘technosignatures’ has allowed researchers to consider wider forms of detection than just radio signals. In particular, we’ve seen more speculation about finding alien spacecraft in our own Solar System – see Avi Loeb pondering whether 1I/’Oumuamua was a spacecraft (I don’t think it was), or Jim Benford’s paper about looking for lurkers.
I’ve got mixed feelings about this. On the one hand, although it’s speculative and I really don’t expect us to find anything, I see no reason why we shouldn’t look for probes in the Solar System, just in case, and it would be done in a scientific manner. On the other hand, it sets SETI on a collision course with ufology, and I’d be interested to see how that would play out in the media and with the public.
It could also change how we think about contact. Communication over many light years via radio waves or optical signals is one thing, but if the SETI community agrees that it’s possible that there could be a probe in our Solar System, then that would bring things into the arena of direct contact. As a species, I don’t think we’re ready to produce a coherent response to a radio signal, and we are certainly not ready for direct contact.
Contact raises ethical dilemmas. There’s the obvious stuff, such as who has the right to speak for Earth, and indeed whether we should respond at all, or stay silent. I think there are other issues though. There may be information content in the detected signal, for example a message containing details of new technology, or new science, or new cultural artefacts.
However, we live in a world in which resources are not shared equally. Would the information contained within the signal be shared to the whole world, or will governments covet that information? If the technological secrets learned from the signal could change the world, for good or ill, who should we trust to manage those secrets?
These issues become amplified if contact is direct, such as finding one of Benford’s lurkers. Would we all agree that the probe should have its own sovereignty and keep our distance? Or would one or more nations or organisations seek to capture the probe for their own ends? How could we disseminate what we learn from the probe so that it benefits all humankind? And what if the probe doesn’t want to be captured, and defends itself?
My frustration with SETI is that we devote our efforts to trying to make contact, but then shun any serious discussion of what could happen during contact. The search and the discussion should be happening in tandem, so that we are ready should SETI find success, and I’m frankly puzzled that we don’t really do this. Paul, do you have any insight into why this might be?
- Paul Gilster
You’ve got me. You and I are on a slightly different page when it comes to METI (Messaging to Extraterrestrial Intelligence), for example. But we both agree that while we search for possible evidence of ETI, we should be having this broad discussion about the implications of success. And if we’re talking about actually sending a signal without any knowledge whatsoever of what might be out there, then that discussion really should take priority, as far as I’m concerned. I’d be much more willing to accept the idea of sending signals if we came to an international consensus on the goal of METI and its possible consequences.
As to why we don’t do this, I hear a lot of things. Most people from the METI side argue that the cat is already out of the bag anyway, with various private attempts to send signals proliferating, and the assumption that ever more sophisticated technology will allow everyone from university scientists to the kid in the basement to send signals whenever they want. I can’t argue with that. But I don’t think the fact that we have sent messages means we should give up on the idea of discussing why we’re doing it and why it may or may not be a sound idea. I’m not convinced anyway that any signals yet sent have the likelihood of being received at interstellar distances.
But let’s leave METI alone for a moment. On the general matter of SETI and implications of receiving a signal or finding ETI in astronomical data, I think we’re a bit schizophrenic. When I talk about ‘we,’ I mean western societies, as I have no insights into how other traditions now view the implications of such knowledge. But in the post-Enlightenment tradition of places like my country and yours, contacting ETI is on one level accepted (I think this can be demonstrated in recent polling) while at the same time it is viewed as a mere plot device in movies.
This isn’t skepticism, because that implies an effort to analyze the issue. This is just a holdover of old paradigms. Changing them might take a silver disc touching down and Michael Rennie strolling out. On the day that happens, the world really would stand still.
Let’s add in the fact that we’re short-sighted in terms of working for results beyond the next dividend check (or episode of a favorite show). With long-term thinking in such perilously short supply (and let’s acknowledge the Long Now Foundation‘s heroic efforts at changing this), we have trouble thinking about how societies change over time with the influx of new knowledge.
Our own experience says that superior technologies arriving in places without warning can lead to calamity, whether intentional or not, which in and of itself should be a lesson as we ponder signals from the stars. A long view of civilization would recognize how fragile its assumptions can be when faced with sudden intervention, as any 500-year-old Aztec might remind us.
Image: A 17th century CE oil painting depicting the Spanish Conquistadores led by Hernan Cortes besieging the Aztec capital of Tenochtitlan in 1521 CE. (Jay I. Kislak Collection).
Keith, what’s your take on the ‘cat out of the bag’ argument with regard to METI? It seems to me to ignore the real prospect that we can change policy and shape behavior if we find it counterproductive, instead focusing on human powerlessness to control our impulses. Don’t we on the species level have agency here? How naive do you think I am on this topic?
- Keith Cooper
That is the ‘contact paradox’ in a nutshell, isn’t it? This idea that we’re actively reaching out to ETI, yet we can’t agree on whether it’s safe to do so or not. That’s the purpose of my book, to try and put the discussion regarding contact in front of a wider audience.
In The Contact Paradox, I’m trying not to tell people what they should think about contact, although of course I give my own opinions on the matter. What I am asking is that people take the time to think more carefully about this issue, and about our assumptions, by engaging in the broader debate.
Readers of Centauri Dreams might point out that they have that very debate in the comments section of this website on a frequent basis. And while that’s true to an extent, I think the debate, whether on this site or among researchers at conferences or even in the pages of science fiction, has barely scratched the surface. There are so many nuances and details to examine, so many assumptions to challenge, and it’s all too easy to slip back into the will they/won’t they invade discussion, which to me is a total straw-man argument.
To compound this, while the few reviews that The Contact Paradox has received so far have been nice, I am seeing a misunderstanding arise in those reviews that once again brings the debate back down to the question of whether ETI will be hostile or not. Yet the point I am making in the book is that even if ETI is benign, contact could potentially still go badly, through misunderstandings, or through the introduction of disruptive technology or culture.
Let me give you a hypothetical example based on a science-fiction technology. Imagine we made contact with ETI, and they saw the problems we face on Earth currently, such as poverty, disease and climate change. So they give us some of their technology – a replicator, like that in Star Trek, capable of making anything from the raw materials of atoms. Let’s also assume that the quandaries I mentioned earlier, about who takes possession of that technology and whether they hoard it, don’t apply. Instead, for the purpose of this argument, let’s assume that soon enough the technology is patented by a company on Earth and rolled out into society to the point that replicators become as common a sight in people’s homes as microwave ovens.
Just imagine what that could do! There would be no need for people to starve or suffer from drought – the replicators could make all the food and water we’d ever need. Medicine could be created on the spot, helping people in less wealthy countries who can’t ordinarily get access to life-saving drugs. And by taking away the need for industry and farming, we’d cut down our carbon emissions drastically. So all good, right?
But let’s flip the coin and look at the other side. All those people all across the world who work in manufacturing and farming would suddenly be out of a job, and with people wanting for nothing, the economy would crash completely, and international trade would become non-existent – after all, why import cocoa beans when you can just make them in your replicator at home? We’d have a sudden obesity crisis, because when faced with an abundance of resources, history tells us that it is often human nature to take too much. We’d see a drugs epidemic like never before, and people with malicious intent would be able to replicate weapons out of thin air. Readers could probably imagine other disruptive consequences of such a technology.
It’s only a thought experiment, but it’s a useful allegory showing that there are pros and cons to the consequences of contact. What we as a society have to do is decide whether the pros outweigh the cons, and to be prepared for the disruptive consequences. We can get some idea of what to expect by looking at contact between different societies on Earth throughout history. Instead of the replicator, consider historical contact events where gunpowder, or fast food, or religion, or the combustion engine have been given to societies that lacked them. What were the consequences in those situations?
This is the discussion that we’re not currently having when we do METI. There’s no risk assessment, just a bunch of ill-thought-out assumptions masquerading as a rationale for attempting contact before we’re ready.
There’s still time though. ETI would really have to be scrutinising us closely to detect our leakage or deliberate signals so far, and if they’re doing that then they would surely already know we are here. So I don’t think the ‘cat is out of the bag’ just yet, which means there is still time to have this discussion, and more importantly to prepare. Because long-term I don’t think we should stay silent, although I do think we need to be cautious, and learn what is out there first, and get ready for it, before we raise our voice. And if it turns out that no one is out there, then we’ve not wasted our time, because I think this discussion can teach us much about ourselves too.
- Paul Gilster
We’re on the same wavelength there, Keith. I’m not against the idea of communicating with ETI if we receive a signal, but only within the context you suggest, which means thinking long and hard about what we want to do, making a decision based on international consultation, and realizing that any such contact would have ramifications that have to be carefully considered. On balance, we might just decide to stay silent until we have gathered further information.
I do think many people have simply not considered this realistically. I was talking to a friend the other day whose reaction was typical. He had been asking me about SETI from a layman’s perspective, and I was telling him a bit about current efforts like Breakthrough Listen. But when I added that we needed to be cautious about how we responded, if we responded, to any reception, he was incredulous, then thoughtful. “I’ve just never thought about that,” he said. “I guess it just seems like science fiction. But of course I realize it isn’t.”
So we’re right back to paradox. If we have knowledge of the size of the galaxy — indeed, of the visible cosmos — why do we not see more public understanding of the implications? I think people could absorb the idea of a SETI reception without huge disruption, but it will force a cultural shift that turns what had been fiction into the realm of possibility.
But maybe we should now identify the broad context within which this shift can occur. In the beginning of your book, Keith, you say this: “Understanding altruism may ultimately be the single most significant factor in our quest to make contact with other intelligent life in the Universe.”
I think this is exactly right, and the next time we talk, I’d like us to dig into why this statement is true, and its ramifications for how we deal with not only extraterrestrial contact but our own civilization. Along with this, let’s get into that thorny question of ‘deep time’ and how our species sees itself in the cosmos.
G 9-40b: Confirming a Planet Candidate
M-class dwarfs within 100 light years are highly sought after objects these days, given that any transiting worlds around such stars will present unusually useful opportunities for atmospheric analysis. That’s because these stars are small, allowing large transit depth — in other words, a great deal of the star’s light is blocked by the planet. Studying a star’s light as it filters through a planetary atmosphere — transmission spectroscopy — can tell us much about the chemical constituents involved. We’ll soon extend that with space-based direct imaging.
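To put a number on that transit-depth advantage, here is a quick back-of-the-envelope sketch in Python. The radii used are generic illustrative values (a 2 Earth-radius planet, a 0.3 solar-radius dwarf), not measured parameters of G 9-40b:

```python
# Transit depth scales as (R_planet / R_star)^2, which is why small
# M-dwarf hosts are so valuable: the same planet blocks a far larger
# fraction of a smaller star's light. Radii below are illustrative
# values, not measurements from the article.

R_SUN_KM = 695_700.0
R_EARTH_KM = 6_371.0

def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    """Fraction of starlight blocked during a central transit."""
    return (planet_radius_km / star_radius_km) ** 2

# A 2 Earth-radius sub-Neptune in front of a Sun-like star...
sun_depth = transit_depth(2 * R_EARTH_KM, 1.0 * R_SUN_KM)
# ...versus the same planet in front of a 0.3 R_sun M dwarf.
mdwarf_depth = transit_depth(2 * R_EARTH_KM, 0.3 * R_SUN_KM)

print(f"Sun-like host: {sun_depth:.4%}")
print(f"M-dwarf host:  {mdwarf_depth:.4%}")
print(f"Depth ratio:   {mdwarf_depth / sun_depth:.1f}x")
```

The same planet blocks roughly eleven times more light in front of the smaller star, which is exactly the leverage transmission spectroscopy needs.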
While the discoveries we’re making today are exciting in their own right, bear in mind that we’re also building the catalog of objects that next generation ground telescopes (the extremely large, or ELT, instruments on the way) and their space-based cousins can examine in far greater depth. And it’s also true that we are tuning up our methods for making sure that our planet candidates are real and not products of data contamination.
Thus a planet called G 9-40b orbiting its red dwarf host about 90 light years out is significant not so much for the planet itself but for the methods used to confirm it. Probably the size of Neptune or somewhat smaller, G 9-40b was first flagged by Kepler (in its K2 phase) as a candidate when it was seen transiting the star every six days. Confirmation that this is an actual planet has been achieved through three instruments. The first is the Habitable-zone Planet Finder (HPF), a spectrograph developed at Penn State and installed on the 10m Hobby-Eberly Telescope at McDonald Observatory in Texas.
HPF provides high precision Doppler readings in the infrared, allowing astronomers to exclude signals that might have mimicked a transiting world — ruling out, for example, a close stellar or substellar binary companion masquerading as a planet. HPF is distinguished by its spectral calibration using a laser frequency comb built by scientists at the National Institute of Standards and Technology and the University of Colorado. The instrument was able to achieve high precision in its radial velocity study of this planet while also observing the world’s transits across the star.
A post on the Habitable Zone Planet Finder blog notes that the brightness of the host star (given its proximity) and the large transit depth of the planet makes G 9-40b “…one of the most favorable sub-Neptune-sized planets orbiting an M-dwarf for transmission spectroscopy with the James Webb Space Telescope (JWST) in the future…”
But the thing to note about this work is the collaborative nature of the validation process, putting different techniques into play. High contrast adaptive optics imaging at Lick Observatory showed no stellar companions near the target, helping researchers confirm that the transits detected in the K2 mission were indeed coming from the star G 9-40. The Apache Point observations using high-precision diffuser-assisted photometry (see the blog entry for details on this technique) produced a transit plot that agreed with the K2 observations and allowed the team to tighten the timing of the transit. The Apache Point observations grew out of lead author Guðmundur Stefánsson’s doctoral work at Penn State. Says Stefánsson:
“G 9-40b is amongst the top twenty closest transiting planets known, which makes this discovery really exciting. Further, due to its large transit depth, G 9-40b is an excellent candidate exoplanet to study its atmospheric composition with future space telescopes.”
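To get a feel for the atmospheric signal a space telescope would be chasing, here is a rough scale-height sketch. Every parameter below — the 2 Earth-radius size, the hydrogen-dominated atmosphere (mean molecular weight ~2.3) at ~450 K, the 0.3 solar-radius star — is an illustrative assumption rather than a value from the paper; only the 12 Earth-mass figure echoes the HPF upper limit:

```python
import math

# Rough transmission-spectroscopy signal estimate: the extra transit
# depth contributed by an atmospheric annulus one scale height thick.
# All planetary and stellar parameters are illustrative assumptions.

K_B = 1.380649e-23     # Boltzmann constant, J/K
M_H = 1.6735e-27       # hydrogen atom mass, kg
G_CONST = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
R_SUN = 6.957e8        # m
R_EARTH = 6.371e6      # m
M_EARTH = 5.972e24     # kg

def scale_height(T_kelvin: float, mu: float, g_ms2: float) -> float:
    """Atmospheric scale height H = kT / (mu * m_H * g), in metres."""
    return K_B * T_kelvin / (mu * M_H * g_ms2)

# Assumed sub-Neptune: 2 Earth radii, 12 Earth masses (the HPF upper
# limit), H2-dominated atmosphere (mu ~ 2.3) at ~450 K, 0.3 R_sun star.
r_p = 2 * R_EARTH
g = G_CONST * 12 * M_EARTH / r_p**2
H = scale_height(450.0, 2.3, g)

# Extra depth from an annulus one scale height thick around the planet.
r_s = 0.3 * R_SUN
signal = 2 * H * r_p / r_s**2

print(f"Surface gravity:         {g:.1f} m/s^2")
print(f"Scale height:            {H / 1000:.0f} km")
print(f"Signal per scale height: {signal * 1e6:.0f} ppm")
```

A few tens of parts per million per scale height, over the several scale heights an atmosphere typically presents, is comfortably within JWST’s reach for a bright, nearby host — which is why the large transit depth matters so much.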
Image: Drawn from the HPF blog. Caption: Precise radial velocities from HPF (left) on the 10m Hobby-Eberly Telescope (right) allowed us to place an upper limit on the mass of the planet of 12 Earth masses. We hope to get a further precise mass constraint by continuing to observe G 9-40 in the future. Image credit: Figure 11a from the paper (left), Gudmundur Stefansson (right).
Near-infrared radial velocities from HPF made the 12 Earth-mass upper limit possible, the tightening of which through future work will allow the composition of the planet to be constrained. All of this is by way of feeding a space-based instrument like the James Webb Space Telescope with the data it will need to study the planet’s atmosphere. In such ways do we pool the results of our instruments, with HPF continuing its survey of the nearest low-mass stars in search of other planets in the Sun’s immediate neighborhood.
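The mass limit comes from the amplitude of the star’s reflex motion. As a sketch, the standard radial-velocity semi-amplitude relation ties a 12 Earth-mass planet on a roughly six-day orbit (both from the article) to a measurable stellar wobble; the 0.3 solar-mass stellar mass is my own illustrative assumption for an M2.5 dwarf, not a figure from the paper:

```python
import math

# Radial-velocity semi-amplitude K: the reflex speed of the star induced
# by an orbiting planet, for m_planet << m_star on a circular orbit.
# Planet mass and period follow the article; stellar mass is assumed.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg
DAY = 86_400.0         # s

def rv_semi_amplitude(m_planet_kg: float, m_star_kg: float,
                      period_s: float, sin_i: float = 1.0) -> float:
    """Stellar reflex velocity K in m/s (circular orbit, m_p << m_star)."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet_kg * sin_i / m_star_kg ** (2 / 3))

# A transiting planet has sin(i) near 1, so the RV mass is nearly the
# true mass. 12 Earth masses, 6-day period, assumed 0.3 M_sun host:
K = rv_semi_amplitude(12 * M_EARTH, 0.3 * M_SUN, 6 * DAY)
print(f"K for a 12 Earth-mass planet: {K:.1f} m/s")
```

A wobble of order ten metres per second is well within HPF’s near-infrared precision, which is why the instrument could cap the planet’s mass even before pinning it down exactly.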
The paper is Stefansson et al., “A Sub-Neptune-sized Planet Transiting the M2.5 Dwarf G 9-40: Validation with the Habitable-zone Planet Finder,” Astronomical Journal Vol. 159, No. 3 (12 February 2020). Abstract / preprint.
How NASA Approaches Deep Space Missions
Centauri Dreams reader Charley Howard recently wrote to ask about how NASA goes about setting its mission priorities and analyzing mission concepts like potential orbiter missions to the ice giants. It’s such a good question that I floated it past Ashley Baldwin, who is immersed in the evolution of deep space missions and moves easily within the NASA structure to extract relevant information. Dr. Baldwin had recently commented on ice giant mission analysis by the Outer Planets Assessment Group. But what is this group, and where does it fit within the NASA hierarchy? Here is Ashley’s explanation of this along with links to excellent sources of information on the various mission concepts under analysis for various targets, and a bit of trenchant commentary.
By Ashley Baldwin
Each of the relevant NASA advisory groups has its own page on the NASA site with archives stuffed full of great presentations. The most germane to our discussion here is the Outer Planets Assessment Group (OPAG). My own focus has been on the products OPAG and the other PAGs produce, though OPAG produces the most elegant presentations with interesting subject matter. Product more than process is my focus, along with politics with a little ‘p’ within the NASA administration, and ‘high’ politics with a big P.
There are a number of such “advisory groups” feeding into NASA through its Planetary Science Advisory Committee (PAC), some of them of direct interest to Centauri Dreams readers:
Exoplanet Exploration Program Analysis Group (ExoPAG);
Mars Exploration Program Analysis Group (MEPAG);
Venus Exploration Analysis Group (VEXAG);
Lunar Exploration Analysis Group (LEAG);
Small Bodies Assessment Group (SBAG)
The relative influence of these groups doubtless waxes and wanes over time, with Mars in the ascendancy for a long time and Venus in inferior conjunction for ages. Most were formed in 2004, with the exoplanet group unfortunately a year later (see * below for my thoughts on why and how this happened).
These groups are essentially panels of relevant experts/academics — astronomers, astrophysicists, geophysicists, planetary scientists, astronautical engineers, astrobiologists etc — from within the various NASA centers (JPL, Glenn, Goddard et al.), along with universities and related institutions. The chairpersons are elected and serve a term of three years. James Kasting, for instance, chaired the exoplanetary advisory group ExoPAG during the first decade of this century.
Each group has two to three full member meetings per year which are open to the public. They have set agendas and take the form of plenary sessions discussing presentations – all of which are made available in the meeting archives, which over the years tell the story of what is being prioritised as well as offering a great deal on planetary science. There are also more frequent policy committee meetings, some of which I have attended via Skype. The PAGs also work in collaboration with other space agencies, the European Space Agency (ESA) and Japan Aerospace Exploration Agency (JAXA) in particular. This all creates technological advice that informs and is informed by NASA policy, which is in turn informed politically, as you would imagine. All of this leads to the missions under consideration, such as Europa Clipper, the Space Launch System (SLS), the James Webb Space Telescope (JWST), the International Space Station (ISS) and the planning for future manned Lunar/Martian landings.
NASA can task the advisory groups with producing work relating to particular areas, such as ice giant missions, and with contributing towards the Decadal studies via a report that is due in March of 2023. On the Decadals: The National Research Council (NRC) conducts studies that provide a science community consensus on key questions being examined by NASA and other agencies. The broadest of these studies in NASA’s areas of research are the Decadal surveys.
So once each decade NASA and its partners ask the NRC to project 10 or more years into the future in order to prioritize research areas and develop mission concepts needed to make the relevant observations. You can find links to the most recent Decadal surveys here.
There is obviously jostling and internal competition for each group to get its priorities as high up the Decadal priority list as possible, bearing in mind that there is a similar and equally competitive pyramid of lobbying for astrophysics, earth science and heliophysics.
Each PAG is encouraged to get its members to submit ‘white papers’, both individually and collectively, championing research areas they feel are relevant. That adds up to thousands of papers, so no wonder they need some serious and time-consuming collation to produce the final document. This time around it will be Mars sample return versus the ice giants vying for the all-important top spot (anything less than this and you are unlikely to receive a once-a-decade flagship mission).
The Planetary Science Advisory Committee, in turn, advises the central NASA Advisory Council (NAC). Its members are appointed at the discretion of and are directly advisory to the NASA administrator on all science matters within NASA’s purview. NAC was formed from the merger of several related groups in 1977, though its origins predate NASA’s formation in 1958.
The Discovery (small) and New Frontiers (medium) Planetary Science programmes (with “flagship” missions like Clipper effectively being “large,” occurring generally once per decade) each run on a five-year cycle, with one New Frontiers mission picked each round and up to two Discovery missions chosen. This comes after short-listing from all concepts submitted in response to an “announcement of opportunity” – the formal NASA application process. The Discovery and New Frontiers programmes are staggered, as are the missions chosen under them, with the aim of having a mission launch roughly every 24 months on a rolling basis, presumably to help spread out operational costs.
Both Discovery and New Frontiers come with a set budget cap: $850-1000 million for New Frontiers and $500 million for Discovery. On top of this, however, they receive a free launcher (from a preselected list) and some or all operational costs for the duration of the primary mission (which without extensions is about two years for a Discovery mission like InSight and three to four years for a New Frontiers mission). There is also varying additional government furnished equipment (GFE) on offer, consisting of equipment, special tooling, or special test equipment.
Sometimes other additional cost technology is included such as multi-mission radioisotope thermoelectric generators (MMRTG). Two have been slotted this time around for Discovery, which is very unusual as MMRTGs are at a premium and generally limited to New Frontiers missions or bigger. There were three on offer for last year’s New Frontiers round but as Dragonfly to Titan only needs one, there were two left over and they only have a limited shelf life.
This Discovery round has also broken with former policy insofar as ALL operations costs are being covered, including those outside the mission proper (i.e., while in transit to the target), thus removing cost penalties for missions with long transit times, like Trident to Triton. Even in hibernation, systems engineering costs and the maintenance of a science team together add up to several million dollars per year. A big clue as to NASA’s Planetary Science Division’s priorities? I hope so!
The Explorer programme is the Astrophysics Division parallel process, run in similar fashion with one medium Explorer and one small Explorer (budget $170 million) picked every five years, though each programme is again staggered to effectively push out a mission about every two and a half years. There is some talk of the next Decadal study creating a funded “Probe” programme. Such programmes are generally only conceptual, but there is talk of a $1 billion budget for some sort of astrophysics mission, hopefully exoplanet related. No more than gossip at this point, though.
* And here is the ExoPAG bone of contention I mentioned above. Kepler was selected as a Discovery mission in 2003 prior to the formation of ExoPAG, and the rest of the planetary science groups went ballistic. This led to NASA excluding exoplanet missions from future Discovery and New Frontiers rounds. Despite the tremendous success of Kepler, this limited ExoPAG to analogous but smaller Astrophysics Explorer funding. These are small- and medium-class, PI-led astrophysics missions, as well as astrophysics missions of opportunity.
Imagine what could have been produced, for instance, if the ESA’s ARIEL (or EChO) transit telescope had been done in conjunction with a New Frontiers budget instead of Astrophysics Explorer. The Medium Explorer budget reaches $200 million plus; New Frontiers gets up to $850-1000 million.