The wish that humans will one day walk on exoplanets is a natural one. After all, the history of exploration is our model. We look at the gradual spread of humanity, its treks and voyages of discovery, and seamlessly apply that model to a future spacefaring civilization. Science fiction has long made the same assumption in countless tales of exploration. This is the Captain Cook model, in which a crew embarks on a journey into unknown regions, finds new lands and cultures, and returns with samples to stock museums and tales of valor and curiosity.

Captain Cook didn’t have a generation ship, but HMS Endeavour was capable of voyages lasting years, stocking itself along the way and often within reach of useful ports of call. A scant 250 years later, however, we need to consider evolutionary trends and ask ourselves whether our ‘Anthropocene’ era will itself be short-lived. Even as we ask whether human biology is up to voyages of interstellar magnitude, we should also question what happens when evolution is applied to the artificial intelligence growing in our labs. This is Martin Rees territory, the UK’s Astronomer Royal having discussed machine intelligence in books like his recent The End of Astronauts (Belknap Press, 2022) and in a continuing campaign of articles and talks.

I won’t comment further on The End of Astronauts because I haven’t read it yet, but its subtitle – Why Robots Are the Future of Exploration – makes clear where Rees and co-author Donald Goldsmith are heading. The title is a haunting one, reminding me of J.G. Ballard’s story “The Dead Astronaut,” a tale in which the Florida launch facilities that propelled the astronaut skyward are now overgrown and abandoned, and the astronaut’s widow awaits the automated return of her long-dead husband. It was an almost surreal experience to read this in the Apollo-infused world of 1971, when it first ran:

Cape Kennedy has gone now, its gantries rising from the deserted dunes. Sand has come in across the Banana River, filling the creeks and turning the old space complex into a wilderness of swamps and broken concrete. In the summer, hunters build their blinds in the wrecked staff cars; but by early November, when Judith and I arrived, the entire area was abandoned. Beyond Cocoa Beach, where I stopped the car, the ruined motels were half hidden in the sawgrass. The launching towers rose into the evening air like the rusting ciphers of some forgotten algebra of the sky.

“[T]he rusting ciphers of some forgotten algebra of the sky.” Can this guy write or what?

You’ll find no spoilers here (Ballard’s The Complete Short Stories is the easiest place to find it these days) but suffice it to say that not everything is as it seems and the scenario plays out in ways that explore human psychology coming to grips with a frontier of deeply uncertain implications. As uncertain, perhaps, as the implications Ballard did not explore here, the growth of artificial intelligence with its own evolutionary path. For that, we can investigate the work of Stanislaw Lem, in particular The Invincible (1964). N. Katherine Hayles wrote a fine foreword to the novel in 2020. Non-human, indeed non-biological evolutionary paths are at the heart of the work.

The scenario should intrigue anyone interested in interstellar exploration. Assume for a moment that a starship carrying both biological beings and what we can call artilects – AI-enabled beings, or automata – once landed on a distant planet, where the biological crew died. The surviving artilects cope with the local life forms and evolve gradually toward smaller and smaller beings that operate through swarm intelligence. The driver is the need to function with ever smaller sources of power (the artilects run on solar power and hence need less of it as their size decreases), creating an evolutionary pressure that results in intelligent ‘mites.’

A long time later, another crew, the humans of the starship Invincible, arrives and must cope with the result. As long ago as 1964, before the first Gemini mission had flown, the prescient Lem was saying that swarm intelligence was a viable path, something that later research continues to confirm. As Hayles points out in her foreword, it takes only a few rules to produce complex behaviors in swarming creatures like fish, birds and bees, with each creature essentially in sync with only the few creatures immediately around it. Simple behaviors (in computer terms, only a few lines of code) lead to complex results. Let me quote Hayles on this:

Decades before these ideas became disseminated within the scientific community, Lem intuited that different environmental constraints might lead to radically different evolutionary results in automata compared to biological life forms. Although on Earth the most intelligent species (i.e., humans) has tended to fare the best, their superior intelligence comes with considerable costs: a long period of maturation; a lot of resources invested in each individual; socialization patterns that emphasize pair bonding and community support; and a premium on individual achievement. But these are not cosmic universals, and different planetary histories might result in the triumph of very different kinds of qualities.
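The ‘few rules’ idea is worth making concrete. The standard demonstration is the boids model of flocking, in which each agent reacts only to its near neighbors through three steering rules: cohesion (drift toward the local center), alignment (match local velocity) and separation (avoid crowding). The sketch below is a minimal, illustrative implementation of those rules; the parameter values are arbitrary choices of mine, not anything drawn from Lem or Hayles:

```python
import math
import random

def step(boids, neighbor_radius=5.0, cohesion=0.01, alignment=0.05, separation=0.05):
    """One update of the three classic flocking rules.
    boids: list of [x, y, vx, vy]. Each boid reacts only to
    neighbors within neighbor_radius -- there is no global leader."""
    new = []
    for i, (x, y, vx, vy) in enumerate(boids):
        nbrs = [b for j, b in enumerate(boids)
                if j != i and math.hypot(b[0] - x, b[1] - y) < neighbor_radius]
        if nbrs:
            n = len(nbrs)
            cx = sum(b[0] for b in nbrs) / n   # cohesion: local center of mass
            cy = sum(b[1] for b in nbrs) / n
            avx = sum(b[2] for b in nbrs) / n  # alignment: local average velocity
            avy = sum(b[3] for b in nbrs) / n
            sx = sum(x - b[0] for b in nbrs)   # separation: push away from neighbors
            sy = sum(y - b[1] for b in nbrs)
            vx += cohesion * (cx - x) + alignment * (avx - vx) + separation * sx
            vy += cohesion * (cy - y) + alignment * (avy - vy) + separation * sy
        new.append([x + vx, y + vy, vx, vy])
    return new

random.seed(1)
flock = [[random.uniform(0, 10), random.uniform(0, 10),
          random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(30)]
for _ in range(20):
    flock = step(flock)
```

That really is only a few lines of code per rule, yet the emergent motion of the whole flock is not written down anywhere in it, which is Hayles’ point: the collective behavior lives in the interactions, not in any individual.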

In this environment, a visiting starship crew must confront an essential difference in values between the two types of being. Humans bring assumptions drawn out of our experience as a species, including the value of the individual life as opposed to the collective. Remember, we are some years off from Star Trek’s Borg, so once again Lem is pushing the envelope of more conventional science fiction. Hayles points out that, shorn of our anthropocentrism, we may find ourselves encountering forms of artificial life whose behavior we can grasp only through profoundly unsettling experience. A world of collective ‘mites’ may overwhelm all our values.

Given all this, we have to ask whether several more centuries of AI will produce artilects we are comfortable with. The question of control seems almost moot, as what Martin Rees refers to as ‘inorganic intelligence’ quickly moves past our own mental functioning if left to its own devices. We are in the realm of what today’s technologists call ‘strong AI,’ where the artificial intelligence is genuinely alive in its own right rather than a simulacrum merely emulating life through its programming. A strong AI outcome places us in a unique relationship with our own creations.

The result is a richer and stranger evolutionary path than even Darwin could have dreamed up. We don’t have to limit ourselves to swarms, of course, but I think we can join Rees in saying that creatures evolving out of current AI will probably be well beyond our ability to understand. In a recent essay for BBC Future, Rees quoted Darwin on the entire question of intentionality: “A dog might as well speculate on the mind of [Isaac] Newton.” Not even my smartest and most beloved Border Collie could have done that. At least I don’t think she could, although she frequently surprised me.

A side-note: I would be interested in suggestions for science fiction stories dealing with swarm concepts — as opposed to basic robotics — in the early years of science fiction. Were authors exploring this before Lem?

Rees is always entertaining as well as provocative. He takes an all but Olympian view of the cosmos that draws on his lifetime of scientific speculation, and writes a supple, direct prose that is without self-regard. I’ve met him only once, and then only briefly, but it’s clear that this is just who he is. In a way, what I might consider his detachment from the nonsensical frenzy of too much tenured academic science mirrors deeper changes that could occur as intelligence moves into inanimate matter. Why, for example, keep things like egotism or pomposity (and we all know examples in our various disciplines)? Why keep aggression if your goal is contemplation? For that matter, why live on planets and not between stars?

But for that matter, can we ever know the goal of such beings? As Rees writes:

Pessimistically, they could be what philosophers call “zombies”. It’s unknown whether consciousness is special to the wet, organic brains of humans, apes and dogs. Might it be that electronic intelligences, even if their intellects seem superhuman, lack self-awareness or inner life? If so, they would be alive, but unable to contemplate themselves, or the beauty, wonder and mystery of the Universe. A rather bleak prospect.

For all these what-ifs, I strongly second another Rees statement about first contact: “We will not be able to fathom their motives or intentions.”

As you might guess, Rees is all for pursuing what I always call ‘Dysonian SETI,’ meaning looking for evidence of non-natural phenomena (he includes the study of ‘Oumuamua as possibly technological in the realm of valid investigation). From the standpoint of our interests on Centauri Dreams, we should also consider whether fast-evolving AI may be our best path, at least in the early going, for our own interstellar exploration. Our biological nature is a tremendous problem for the mechanics of starflight as presently conceived, given travel times of centuries. Until we surmount such issues, I find the prospect of exploration by artilect a rational alternative. What’s intriguing, of course, is whether we can even prevent it.