Talk of a ‘singularity’ in which artificial intelligence reaches such levels that it moves beyond human capability and comprehension plays inevitably into the realm of interstellar studies. Some have speculated, as Paul Davies does in The Eerie Silence, that any civilization we make contact with will likely be made up of intelligent machines, the natural outcome of computer technology’s continuing evolution. But even without a singularity, it’s clear that artificial intelligence will have to play an increasing role in space exploration.

If we develop the propulsion technologies to get an interstellar probe off to Alpha Centauri, we’ll need an intelligence onboard that can continue to function for the duration of the journey, which could last centuries, or at the very least decades. Not only that, the onboard AI will have to make necessary repairs, perform essential tasks like navigation, conduct observations and scientific studies, and plan and execute arrival into the destination system. And when immediate needs arise, it will have to handle them without human help, given the travel time for radio signals to reach the spacecraft.

Consider how tricky it is just to run rover operations on Mars. Opportunity’s new software upgrade is called AEGIS, for Autonomous Exploration for Gathering Increased Science. It’s a good package, one that helps the rover identify the best targets for follow-up photographs on its own rather than waiting for instructions from Earth. AEGIS had to be sent to the Deep Space Network’s three transmitting sites and forwarded on to the Odyssey orbiter, from which it could be beamed to Opportunity on the surface. A new article in h+ Magazine takes a look at AEGIS in terms of what it portends for the future of artificial intelligence in space. Have a look at it, and ponder that the light-travel time to Mars is measured in minutes, not the hours it takes for a signal to reach the outer system.

Where do the early AI applications like AEGIS lead us? Writer Jason Louv asked Benjamin Bornstein, who leads JPL’s Machine Learning team, for a comment on machines and the near future:

“We absolutely need people in the loop, but I do see a future where robotic explorers will coordinate and collaborate on science observations,” Bornstein predicts. “For example, the MER dust devil detector, a precursor to AEGIS, acquires a series of Navcam images over minutes or hours and downlinks to Earth only those images that contain dust devils. A future version of the dust devil detector might alert an orbiter to dust storms or other atmospheric events so that the orbiter can schedule additional science observations from above, time and resources permitting. Dust devils and rover-to-orbiter communication are only one example. A smart planetary seismic sensor might alert an orbiting SAR [synthetic aperture radar] instrument, or a novel thermal reading from orbit could be followed up by ground spectrometer readings… Also, for missions to the outer planets, with one-way light time delays, onboard autonomy offers the potential for far greater science return between communication opportunities.”
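Bornstein’s dust devil example captures the core idea of onboard triage: the spacecraft itself decides which observations are worth scarce downlink bandwidth. The toy sketch below in Python is not the actual MER detector, just a minimal illustration of that kind of filtering, keeping only frames that differ noticeably from a static background estimate:

```python
# Toy illustration of onboard downlink triage (not the actual MER dust devil
# detector): keep only frames that differ noticeably from a static background
# estimate, mimicking "downlink only the images that contain something moving."
import numpy as np

def frames_worth_downlinking(frames, threshold=0.02):
    """Return indices of frames whose mean deviation from the median frame is large."""
    stack = np.stack(frames).astype(float)
    background = np.median(stack, axis=0)                     # static-scene estimate
    scores = np.abs(stack - background).mean(axis=(1, 2)) / 255.0
    return [i for i, s in enumerate(scores) if s > threshold]

# Synthetic demo: ten flat "sky" frames, with a bright transient blob in frames 3 and 7.
rng = np.random.default_rng(0)
frames = [rng.normal(100.0, 2.0, (64, 64)) for _ in range(10)]
for idx in (3, 7):
    frames[idx][20:36, 20:36] += 120.0                        # simulated dust devil
print(frames_worth_downlinking(frames))                       # -> [3, 7]
```

The point isn’t the particular statistic; it’s that whatever the detector computes has to be cheap enough to run onboard and selective enough to save bandwidth.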

One-way light delays are obviously critical as we look at the outer planets and beyond. Voyager 1, for example, was 113 AU from the Sun as of April 12, having passed the termination shock and moving now into the heliosheath. At that distance, the round-trip light time is 31 hours 34 minutes, and that’s just to the edge of the Solar System. A probe to the Oort Cloud will face much longer delays, with round-trip signal times ranging from 82 to 164 weeks. Pushing on to the Alpha Centauri stars lengthens the wait yet again: a message to a probe at Proxima takes roughly 4.2 years to arrive, with another 4.2 years for the acknowledgement to come back. The chances of managing short-term problems from Earth are obviously nil.
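For a rough feel for how these delays scale with distance, here is a minimal back-of-the-envelope sketch in Python. The distances are approximate, representative values chosen for illustration (closest-approach Mars, the Voyager 1 figure above, a nominal outer Oort Cloud distance, and Proxima Centauri), not mission data:

```python
# Back-of-the-envelope signal-delay calculator. Illustrative only: the
# distances below are rounded, representative values rather than mission data.
AU_KM = 1.495978707e8      # kilometers per astronomical unit
C_KM_S = 299_792.458       # speed of light, km/s
AU_PER_LY = 63_241.1       # astronomical units per light-year (approx.)

def one_way_hours(distance_au: float) -> float:
    """One-way light-travel time in hours for a distance given in AU."""
    return distance_au * AU_KM / C_KM_S / 3600.0

def pretty(hours: float) -> str:
    """Express a duration given in hours using a convenient unit."""
    if hours < 1.0:
        return f"{hours * 60:.1f} minutes"
    if hours < 100.0:
        return f"{hours:.1f} hours"
    if hours < 24 * 365 * 2:
        return f"{hours / (24 * 7):.1f} weeks"
    return f"{hours / (24 * 365.25):.2f} years"

targets = {
    "Mars at closest approach (~0.38 AU)": 0.38,
    "Voyager 1 (~113 AU)": 113.0,
    "Outer Oort Cloud (~50,000 AU)": 50_000.0,
    "Proxima Centauri (~4.24 light-years)": 4.24 * AU_PER_LY,
}

for name, au in targets.items():
    delay = one_way_hours(au)
    print(f"{name}: one-way {pretty(delay)}, round trip {pretty(2 * delay)}")
```

Run as-is, it reproduces the sort of numbers quoted above: a few minutes to Mars, about 31 hours round trip to Voyager 1, roughly 82 weeks round trip to 50,000 AU, and about eight and a half years for a round trip to Proxima.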

Image: Comet Hale-Bopp’s orbit (lower, faint orange); one light-day (yellow spherical shell with yellow Vernal point arrow as radius); the Termination Shock (blue shell); positions of Voyager 1 (red arrow) and Pioneer 10 (green arrow); Kuiper Belt (small faint gray torus); orbits of Pluto (small tilted ellipse inside Kuiper Belt) and Neptune (smallest ellipse); all to scale. Credit: Paul Stansifer/84user/Wikimedia Commons.

Just how far could an artificial intelligence aboard a space probe be taken? Greg Bear’s wonderful novel Queen of Angels posits an AI that has to learn to deal not only with the situation it finds in the Alpha Centauri system, but also with what appears to be its growing sense of self-awareness. But let’s back the issue out to a broader context. Suppose that a culture at a technological level a million years in advance of ours is run by AIs that have supplanted the biological civilization that created their earliest iterations.

Think it’s hard to guess what an alien culture would do while it’s still biological? Try extending the question to a post-singularity world made up of machines whose earliest ancestors were constructed by non-humans.

Will machine intelligence work side by side with the beings that created it, or will it render them obsolete? If Paul Davies’ conjecture that a SETI contact will likely be with a machine civilization proves true, are we safe in believing that the AIs that run it will act according to human logic and aspirations? There is much to speculate on here, but the answer is by no means obvious. In any case, it’s clear that work on artificial intelligence will have to proceed if we’re to operate spacecraft of any complexity outside our own Solar System. Any other species bent on exploring its neighborhood will have had to do the same thing, so the idea of running into non-biological aliens seems just as plausible as encountering their biological creators.
