Frank Wilczek has used the neologism ‘quintelligence’ to refer to the kind of sentience that might grow out of artificial intelligence and neural networks using genetic algorithms. I seem to remember running across Wilczek’s term in one of Paul Davies’ books, though I can’t remember which. In any case, Davies himself has speculated about what such intelligences might look like, located in interstellar space and exploiting ultracool temperatures.

A SETI target? If so, how would we spot such a civilization?

Wilczek is someone I listen to carefully. Now at MIT, he’s a mathematician and theoretical physicist who was awarded the Nobel Prize in Physics in 2004, along with David Gross and David Politzer, for work on the strong interaction. He’s also the author of several books explaining modern physics for lay readers. I’ve read his The Lightness of Being: Mass, Ether, and the Unification of Forces (Basic Books, 2008) and found it densely packed but rewarding. I haven’t yet tackled 2015’s A Beautiful Question: Finding Nature’s Deep Design.

Perhaps you saw Wilczek’s recent piece in The Wall Street Journal, sent my way by Michael Michaud. Here we find the scientist taking on the Fermi question that we have tackled so many times in these pages, always coming back to the issue that we have a sample of one when it comes to life in the universe, let alone technological society, and that sample is right here on Earth. For the record, Wilczek doesn’t buy the idea that life is unusual; in fact, he not only states that he thinks life is common, but also makes the case for advanced civilizations:

Generalized intelligence, that produces technology, took a lot longer to develop, however, and the road from amoebas to hominids is littered with evolutionary accidents. So maybe we’re our galaxy’s only example. Maybe. But since evolution has supported many wild and sophisticated experiments, and because the experiment of intelligence brings spectacular adaptive success, I suspect the opposite. Plenty of older technological civilizations are out there.

Civilizations may, of course, develop and then, in Wilczek’s phrase, ‘flame out,’ just as Edward Gibbon would describe the fall of Rome as “the natural and inevitable effect of immoderate greatness. . . . The stupendous fabric yielded to the pressure of its own weight.” We can pile evidence onto that one, from the British and Spanish empires to the decline of numerous societies like the Aztec and the Maya. Catastrophe is always a possible human outcome.

But is it an outcome for non-human technological societies? Wilczek doubts that, preferring to hark back to the idea with which we opened. The most advanced quantum computation — quintelligence — he believes, works best where it is cold and dark. And a civilization based on what we would today call artificial intelligence may be one that basically wants to be left alone.

Image: The Local Group of galaxies. Is the most likely place for advanced civilization to be found in the immensities between stars and galaxies? Credit: Andrew Z. Colvin.

Like all Fermi question talk, this is no more than speculation, but it’s interesting speculation, for Wilczek goes on to discuss the notion that one outcome for a hyper-advanced civilization may be to embrace the small. After all, the speed of light is a limit to communications, and effective computation involves communications bound by that limit. The implication: fast AI thinking works best when it occurs in relatively small spaces. Thus:

Consider a computer operating at a speed of 10 gigahertz, which is not far from what you can buy today. In the time between its computational steps, light can travel just over an inch. Accordingly, powerful thinking entities that obey the laws of physics, and which need to exchange up-to-date information, can’t be spaced much farther apart than that. Thinkers at the vanguard of a hyper-advanced technology, striving to be both quick-witted and coherent, would keep that technology small.

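The arithmetic behind that claim is easy to check: light travels c/f meters per clock cycle, so a 10 GHz clock implies roughly three centimeters. A minimal sketch in Python (the clock rate and the unit conversion are the only inputs; nothing here comes from Wilczek beyond the 10 GHz figure) confirms the ‘just over an inch’ number:

```python
# Back-of-envelope check: how far does light travel between
# the ticks of a 10 GHz clock?

C = 299_792_458.0  # speed of light in m/s (exact, by definition)

def light_travel_per_cycle(clock_hz: float) -> float:
    """Distance in meters that light covers during one clock cycle."""
    return C / clock_hz

d_m = light_travel_per_cycle(10e9)  # 10 GHz processor
d_inches = d_m / 0.0254             # convert meters to inches

print(f"{d_m * 100:.1f} cm per cycle, or {d_inches:.2f} inches")
# -> 3.0 cm per cycle, or 1.18 inches: 'just over an inch', as Wilczek says.
```

Note that the bound scales inversely with clock speed: double the rate and the coherent region halves, which is exactly why ever-faster thinking pushes a civilization toward ever-smaller hardware.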
A civilization based, then, on information processing would achieve its greatest gains by going small in search of the highest levels of speed and integration. We’re now back out in the interstellar wastelands, which may in this scenario actually contain advanced and utterly inconspicuous intelligences. As I mentioned earlier, it’s hard to see how SETI would find these.

Unstated in Wilczek’s article is a different issue. Let’s concede the possibility of all but invisible hyper-intelligence elsewhere in the cosmos. We don’t know how long it would take to develop such a civilization, which would presumably move out of its initial biological state into the realm of computation and AI. Surely along the way, there would still be societies in biological form leaving detectable traces of themselves. Or should we assume that the Singularity really is near enough that even a culture like ours may be succeeded by AI in the cosmic blink of an eye?
