If we were to send a message to an extraterrestrial civilization and make contact, should we assume it would be significantly more advanced than us? The odds say yes, and the thinking goes like this: We are young enough that we have only been using radio for a century or so. How likely is it that we would reach a civilization that has been using such technologies for an even shorter period of time? As assumptions go, this one seems sensible enough.

But let’s follow it up. In an interesting piece in the New York Times Magazine, Steven Johnson makes the case this way: The universe is almost 14 billion years old, which means that roughly 13,999,999,900 of those years passed before radio communications became a factor here on Earth. Now imagine a civilization whose timeline of development deviates from our own by just one tenth of one percent. If it is more advanced than we are, it will have been using technologies like radio and its successors for 14 million years.
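To make the arithmetic concrete, here is a quick back-of-the-envelope sketch in Python. The figures (a 14-billion-year-old universe, about a century of radio on Earth, a 0.1 percent offset between timelines) are Johnson’s; the script itself is only an illustration.

```python
# Back-of-the-envelope version of Johnson's arithmetic (the figures are his).
UNIVERSE_AGE_YEARS = 14_000_000_000   # ~14 billion years
OUR_RADIO_AGE_YEARS = 100             # roughly a century of radio on Earth

# Years that elapsed before radio showed up here:
pre_radio_years = UNIVERSE_AGE_YEARS - OUR_RADIO_AGE_YEARS
print(f"{pre_radio_years:,} years before Earth went on the air")   # 13,999,999,900

# A civilization whose development runs just 0.1% ahead of ours on the same timeline:
head_start_years = int(UNIVERSE_AGE_YEARS * 0.001)
print(f"{head_start_years:,} years of radio and its successors")   # 14,000,000
```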

Assumptions can be tricky. We make them because we have no hard data on any civilization outside our own. About this one, we might ask: Why should there be any universal ‘timeline’ of development? Are there ‘plateaus’ where the steep upward climb of technological change goes flat? Soon we have grounds for an even deeper debate. What constitutes civilization? What constitutes intelligence, and is it necessarily beneficial, or a path toward extinction?

Image: The Arecibo Observatory in Puerto Rico, from which a message was broadcast to the globular cluster M13 in 1974.

Airing out the METI Debate

I want to commend Johnson’s piece, which is titled “Greetings, E.T. (Please Don’t Murder Us.)” As you can gather from the title, the author is looking at our possible encounter with alien civilizations in terms not of detection but of contact, and that means we’re talking METI — Messaging Extraterrestrial Intelligence. What I like about Johnson’s treatment is that he goes out of his way to talk to both sides of a debate known more for its acrimony than its enlightenment. Civility counts, because both sides of the METI issue need to listen to each other. And the enemies of civilized discussion are arrogance and facile assertion.

It was Martin Ryle, then Astronomer Royal of Britain, who launched the first salvo in the METI debate in response to the Arecibo message of 1974, asking the International Astronomical Union to denounce the sending of messages to the stars. In the forty years since, about a dozen intentional messages have been sent. The transmissions of Alexander Zaitsev from Evpatoria are well known among Centauri Dreams readers (see the archives). Douglas Vakoch now leads a group called METI that plans to broadcast a series of messages beginning in 2018. The Breakthrough Message initiative has likewise announced a plan to design the kind of messages with which we might communicate with an extraterrestrial civilization.

All of this will be familiar turf for Centauri Dreams readers, but Johnson’s essay is a good refresher in basic concepts and a primer for those still uninitiated. He’s certainly right that the explosion of exoplanet discovery has materially fed into the question of when we might detect ETI and how we could communicate with it. It has also raised questions of considerable significance about the Drake Equation; specifically, about the provocative term L, meant to represent the lifespan of a technological civilization.

Johnson runs through the Fermi question — Where are they? — by way of pointing to L’s increasing significance. After all, when Frank Drake drew up the famous equation and presented it at a 1961 meeting at Green Bank (the site of his Project Ozma searches), no one knew of a single planet beyond our Solar System. Now we’re learning not just how frequently planets occur but how often they turn up in the habitable zones of their stars. The numbers may still be rough, but they’re substantial. With billions of habitable-zone planets in the galaxy, the likelihood of success for SETI would seem to rise.
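To see how heavily that hope leans on L, here is a minimal sketch of the Drake Equation in Python. The form of the equation is the standard one; every parameter value below is an illustrative placeholder rather than an estimate from Drake or Johnson, chosen only to show how the answer swings with the lifespan term.

```python
# Drake Equation: N = R* x fp x ne x fl x fi x fc x L
# The structure is standard; the parameter values are placeholders chosen only to
# show how strongly N, the number of detectable civilizations, depends on L.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Placeholder inputs: 1.5 stars formed per year, most stars with planets,
# two potentially habitable worlds per system, and guessed fractions for
# life, intelligence, and detectable technology.
for L in (100, 10_000, 1_000_000):
    N = drake(R_star=1.5, f_p=0.9, n_e=2.0, f_l=0.1, f_i=0.1, f_c=0.1, L=L)
    print(f"L = {L:>9,} years  ->  N ~ {N:,.1f}")
```

With these placeholder fractions, a civilization lifetime of a century yields essentially no detectable neighbors, while a million-year lifetime yields thousands; the point is the sensitivity to L, not the particular numbers.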

And if we continue to observe no other civilizations? The L factor may be telling us that there is a cap to the success of intelligent life, a filter ahead of us in our development through which we may not pass, whether it be artificial intelligence or nuclear weaponry or nanotechnology. METI’s critics thus worry about planet-wide annihilation, and wonder if a limiting factor for L, at least for some civilizations, might be interactions with other, more advanced cultures. Far better for our own prospects if the ‘filter’ is behind us, perhaps in abiogenesis itself.

Hasn’t our own civilization already announced its presence, not just through an expanding wavefront of old TV and radio shows but also through the activity of our planetary radars, and the chemistry of our atmosphere? After all, even at our level of technology, we’re closing in on the ability to study the atmospheres of Earth-class planets around other stars. If this is the case, are we simply being watched from afar because we’re just one of many civilizations, and perhaps not one worth communicating with? METI proponents will argue that this is another reason to send a message: Announce that, at long last, we are ready to talk.

The counter-argument runs like this: A deliberately targeted message is a far different thing than the detection of life-signs on a distant planet. The targeted message is a wake-up call, saying that we are intent on reaching the civilizations around us and are beginning the process. Passive signal leakage is one thing; targeting a specific star implies an active level of interest. And the problem is, we have no way of knowing how an alien culture might respond.

Procedures for Consensus

In his article, Johnson is well served by the interviews he conducted with Frank Drake (anti-METI, but largely because he would prefer to see METI funding applied to conventional SETI); METI proponent and former SETI scientist Vakoch; anti-METI spokesman and author David Brin; and anthropologist Kathryn Denning, who supports broad consultation on METI. Johnson does an admirable job in summarizing the key questions, one of which is this: If we are dealing with technologies whose use has huge consequences, do individuals and small groups have the right to decide when and how these technologies should be used?

I think Johnson hits the right note on this matter:

Wrestling with the METI question suggests, to me at least, that the one invention human society needs is more conceptual than technological: We need to define a special class of decisions that potentially create extinction-level risk. New technologies (like superintelligent computers) or interventions (like METI) that pose even the slightest risk of causing human extinction would require some novel form of global oversight. And part of that process would entail establishing, as Denning suggests, some measure of risk tolerance on a planetary level. If we don’t, then by default the gamblers will always set the agenda, and the rest of us will have to live with the consequences of their wagers.

Easier said than done, of course. How does global oversight work? And how can we bring about a discussion that legitimately represents the interests of humanity at large?

Consultation also meets an invariable response: You can talk all you want, but someone is going to do it anyway. In fact, various groups already have. In any case, when have you ever heard of human beings turning their back on technological developments? For that matter, how often have we deliberately chosen not to interact with another society? Johnson adds:

But maybe it’s time that humans learned how to make that kind of choice. This turns out to be one of the surprising gifts of the METI debate, whichever side you happen to take. Thinking hard about what kinds of civilization we might be able to talk to ends up making us think even harder about what kind of civilization we want to be ourselves.

The METI debate is robust and sometimes surprising because of what doesn’t get said. Working from the frequent assumption that human civilization is debased, we take it for granted that an older culture will invariably have surmounted its own challenges to become enlightened and altruistic. Possibly so, but without data, how can we know that other civilizations are not more or less like ourselves, capable of great achievement yet carrying the predatory instincts that can turn them against themselves and others? Is there a way of living with expansive technologies while remaining a flawed and striving culture that can still make huge mistakes?

We can’t know the characteristics of any civilization without data, which is why a robust SETI effort remains so crucial. As for METI, tomorrow I’ll be publishing a response to Johnson’s article from a group of METI’s chief opponents, one that explores these and other points.
