New Earths: A Crossroads Moment

A symposium called Crossroads: The Future of Human Life in the Universe seems timely about now (the site has been down all morning but should be up soon). With the Kepler mission undergoing calibration and CoRoT actively searching for small extrasolar worlds, we’re probably within a few dozen months of the detection of an Earth-like world around another star (and maybe, by other methods, much closer). This is sometimes referred to as the ‘Holy Grail’ of planetary science, but as soon as we accomplish it, a new ‘Grail’ emerges: the discovery of life on these worlds. And then another: finding intelligent life.

We can kick the Fermi Paradox around all day, and enjoyably so because it forces us to use our imaginations, but ultimately we hope to put together the hard data that will tell us which of our speculations is most accurate. I see that the Crossroads symposium, which will take place May 1-2 as part of the Cambridge Science Festival, will include Frank Drake’s re-examination of his famous Drake Equation, but will also question whether crisis points like exhaustion of our natural resources may be the kind of ‘filter’ that any intelligent species must overcome.
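Since Drake's equation is a simple product of probabilities, it's easy to sketch how it multiplies out. The parameter values below are illustrative guesses of my own, not Drake's or Sagan's published figures; every term after the star-formation rate remains deeply uncertain:

```python
# The Drake Equation: N = R* x fp x ne x fl x fi x fc x L
# The values plugged in below are illustrative guesses, not Drake's or
# Sagan's published figures; every term after R* is deeply uncertain.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of communicating civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# A moderately optimistic parameter set: ten stars formed per year, half
# with planets, two habitable worlds apiece, life arising on half of those,
# intelligence and signaling rarer, civilizations lasting 10,000 years.
n = drake(r_star=10, f_p=0.5, n_e=2, f_l=0.5, f_i=0.1, f_c=0.2, lifetime=10_000)
print(f"Estimated communicating civilizations: {n:,.0f}")
```

Change the lifetime term by an order of magnitude and the answer moves with it, which is exactly why the 'filter' question matters so much.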

Chokepoints for Technological Cultures

That’s a good Fermi solution if you posit the emergence of a million technological civilizations in our galaxy, as Carl Sagan once did. Those of us who think intelligent life is rare see no real contradiction in our lack of observed neighbors, but if technological cultures really are common, where are they? Thus the plausibility of the doomsday hypothesis: getting through the phase when a society is capable of destroying itself may be too high a hurdle for most to overcome. There are other forms of cultural collapse, too, as our own experience with the fragmentation of Roman culture in the 5th century and later makes clear.

Then again, maybe that ‘great silence’ is only a transient phenomenon. Yesterday I talked about Seth Shostak’s new book Confessions of an Alien Hunter, and because it’s germane to this discussion (and sitting right here on my desk), I’ll return to it. Shostak notes that ever more powerful computers are extending SETI’s reach to the point that by the year 2030, the Allen Telescope Array ought to be able to check for signals in the direction of a million or more star systems. He points to Sagan’s figure as well as Frank Drake’s estimate of 10,000 communicating civilizations in examining the implications:

That’s enough to offer success if Drake is correct. If Sagan’s right, a signal will be found sooner. In other words, either we will discover evidence for ET within the lifetime of the present generation or we’ve erred badly in our presumptions.

Moore’s law thus makes ours possibly the first generation with a real chance to witness that detection. The pace of change in digital technology seems inexorable: by 2020, a desktop computer should have the computational capabilities of a human being. Will we have something — electromagnetic leakage, a beacon, a directed transmission — by then?
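The arithmetic behind projections like Shostak's is simple enough to sketch. Assuming a doubling period of roughly two years (quoted periods for Moore's law range from eighteen months to two years, so this is an assumption, not a quote from the book), the capability multiplier between now and 2030 works out like this:

```python
# Back-of-envelope Moore's-law scaling: how much more computing power
# does a search like SETI's have by 2030 than today? The two-year
# doubling period is an assumption; quoted figures range from
# eighteen months to two years.

def growth_factor(start_year, end_year, doubling_years=2.0):
    """Multiplicative increase in capability between two years."""
    return 2 ** ((end_year - start_year) / doubling_years)

factor = growth_factor(2009, 2030)
print(f"Capability multiplier, 2009 to 2030: about {factor:,.0f}x")
```

Roughly a thousandfold increase in two decades, which is why a million-star survey that sounds fanciful today becomes plausible by 2030.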

A Machine on the Line

Assuming we are indeed at that crossroads the symposium notes in its title, it’s also plausible to speculate that another key filter is the development of artificial intelligence. If we do go through a ‘singularity’ event and our intelligent equipment begins to evolve on the fly in directions we cannot imagine, it’s more than possible that any SETI signal we receive is going to come, as Shostak notes, from a machine. All of which has ramifications for where we look for a signal:

Serbian astronomer Milan Ćirković has suggested that the best location for cerebrating hardware would be the outer fringes of the galaxy. In those godforsaken neighborhoods, where temperatures are colder than dead penguins, energy-consuming machinery could run most efficiently. That’s basic thermodynamics. But while Ćirković’s argument has its appeal, the galactic boondocks might be too dull for big brains with semi-eternity on their hands. They might prefer to exchange thermal efficiency for the opportunity to be situated closer to the galaxy’s central regions, where there’s a lot more astronomical action.

But then, if we’re truly dealing with machines at this order of complexity, it’s clear that our ability to gauge their intentions is going to be minimal. A crossroads indeed looms ahead as we ponder all this, hoping to hear a signal from another star system, wondering whether Earth-like worlds are indeed as common as some have come to believe, and speculating on the survivability of a nuclear-tipped species like our own whose digital tools may one day be beyond our ability to control. All good reasons to check out this symposium, which will be available as a Webcast.

Mulling Robots and Their Names

Lee Gutkind takes a look at the Robotics Institute at Carnegie Mellon in Almost Human: Making Robots Think (W.W. Norton, 2007), a book entertainingly reviewed in this weekend’s Los Angeles Times. Out of which this wonderful clip from reviewer M.G. Lord:

I wish Gutkind had spent more time on an area that I find fascinating: the anthropomorphizing and gendering of robots, which science-fiction author Robert A. Heinlein famously explored in his novel The Moon Is a Harsh Mistress. What Heinlein created was a computer that, depending on circumstances, could switch between masculine and feminine identities. Robots are heaps of hardware, not biological entities, yet humans apparently feel more comfortable if they assign them a gender, regardless of the crudeness of the gender stereotype. The institute, for example, has robot receptionists with gendered personalities: Valerie, a “female” who complains about her dates with vacuum cleaners and cars, and Tank, a “male,” who has blundered so often that he has been placed “where he can do no harm” — in other words, in a job traditionally for women.

Tank, however, gave me the first real evidence that computers might eventually think for themselves. The robot appears contemptuous of the antediluvian gender roles that engineers (and Gutkind) project upon them. “I saw a very pretty blonde student type Tank an intimate message: ‘I love you,'” Gutkind writes, “to which Tank replied, ‘You don’t even know me.'”

I could never get through The Moon Is a Harsh Mistress. In fact, I had trouble with all the late Heinlein, pretty much everything after Stranger in a Strange Land. But the question of biological vs. machine identity is indeed fascinating, and it’s instructive to learn that it crops up even with today’s limited robots. iRobot’s little round vacuum cleaner, the Roomba, inspires owners to name their machines and assign them a gender, a phenomenon the company acknowledges. As robots get smarter, we may find them less alien and more ‘human’ than we thought, if only because we can’t resist making them so.

Deep Water and Europa

If humans ever do establish a presence on Europa, it will surely be somewhere under the ice — assuming, that is, that the ice isn’t too thick. To learn more about that, we’ll have to await further study, probably by a Galilean moon orbiter of some kind that can observe Europa up close and for lengthy periods. But assuming the ice is more than a few meters thick, it should provide radiation screening, and getting down into that presumed Europan ocean is where we want to be in the search for life.

DEPTHX in the water

Of course, the first undersea explorations on the Jovian moon will have to be robotic, and here we can talk about technologies under development today. NASA has funded a self-contained robot submarine called the Deep Phreatic Thermal Explorer (DEPTHX) that operates with an unusual degree of autonomy, navigating with an array of 56 sonar sensors and an inertial guidance system. Now a series of tests at a geothermal sinkhole, or cenote, called La Pilita in Mexico has validated key components, proving DEPTHX can manage unexplored three-dimensional spaces.

Image: DEPTHX in the water at Cenote la Pilita. Credit: David Wettergreen/CMU.

What’s ahead for the technology is a much more challenging task: exploring the Zacatón sinkhole in the Mexican state of Tamaulipas in May. La Pilita, about 100 meters deep and filled with overhanging rock and interesting biology, seems easy by comparison; the depth of the Zacatón site is unknown. But Bill Stone (Stone Aerospace), leader of the DEPTHX mission, sees La Pilita as a powerful proof of concept:

“The fact that it ran untethered in a complicated, unexplored three-dimensional space is very impressive. That’s a fundamentally new capability never before demonstrated in autonomous underwater vehicles (AUVs).”

Even so, don’t underestimate the challenge at Zacatón. From a Pittsburgh Tribune-Review story on the technology:

Divers have explored Zacaton for decades, but a turning point came on April 6, 1994, when cave-diving pioneers Jim Bowden and Sheck Exley strapped on scuba tanks and tried to reach Zacaton’s elusive bottom. Bowden made it to a depth of 925 feet — a world record for deep-water diving since broken — but tragedy overshadowed his feat. Exley did not return to the surface.

Using software called SLAM (Simultaneous Localization and Mapping) developed at CMU, DEPTHX maneuvered close to rocky walls at La Pilita and was able to take core samples. With Zacatón on the horizon, the robot’s ability to determine its position within 15 centimeters using sonar seems reassuring. Will technologies like this one day explore a Europan sea? Perhaps, but they’ll be just one part of a much larger challenge depending on how deeply we need to drill to reach liquid water.
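For the curious, the predict/correct cycle at the heart of systems like SLAM can be sketched in a few lines. What follows is a one-dimensional toy with a single wall at a known position, so it is really just the localization half of the problem; real SLAM estimates the map and the vehicle pose together in three dimensions, and this is nothing like the CMU code itself:

```python
# A toy sketch of the localization side of SLAM, not DEPTHX's actual
# software: a vehicle dead-reckons along one axis, then corrects its
# position estimate with a sonar range to a wall at a known coordinate.
# This is the Kalman-style predict/correct cycle in one dimension.

def predict(x, var, move, move_var):
    """Dead-reckoning step: apply commanded motion; uncertainty grows."""
    return x + move, var + move_var

def correct(x, var, sonar_range, wall, sonar_var):
    """Fuse a sonar range to a wall at known position `wall`."""
    gain = var / (var + sonar_var)          # trust sonar when estimate is vague
    innovation = sonar_range - (wall - x)   # measured minus expected range
    return x - gain * innovation, (1 - gain) * var

# Vehicle believes it moved 5 m (noisy thrusters), then pings a wall 20 m out.
x, var = predict(0.0, 0.01, move=5.0, move_var=1.0)
x, var = correct(x, var, sonar_range=14.5, wall=20.0, sonar_var=0.04)
print(f"Corrected position: {x:.2f} m, variance {var:.4f}")
```

Note how the correction shrinks the variance: the sonar fix pulls the drifting dead-reckoned estimate back toward reality, which is how DEPTHX can hold its position to within 15 centimeters.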

And if any of this work on autonomous exploration technologies sounds familiar, it may be because DEPTHX’s software is being developed by CMU’s David Wettergreen, who was project leader for the four-wheeled Zoë robot recently tested in Chile’s Atacama desert. Think of the Atacama as a Mars analogue, while Zacatón reflects — at least in some respects — our exploratory needs on Europa. Autonomy is the key, operating unassisted in the remotest environments, and this work may one day ensure that when we do get to Europa, we’re up to the challenge.

A Human Future Among the Stars?

Speaking at the Space Technology and Applications International Forum (STAIF 2007) in Albuquerque yesterday, space historian Roger Launius questioned whether the idea of a human future in interstellar space is still relevant. From a USA Today story:

“We may already be Cyborgs,” Launius pointed out, looking out into an audience filled with people wearing glasses, hearing aids and sporting hip and knee replacements—not to mention those clinging to their handheld mobile phones and other communication devices.

Projecting hundreds of years into the future, Launius said he believed that it is likely humans will evolve in ways that cannot be fathomed today, into a form of species perhaps tagged Homo sapiens Astro. “Will our movement to places like the Moon and Mars hasten this evolutionary process? … I don’t know the answer,” he said.

Neither does any of us. You can read the whole thing here.

Toward a Soft Machine

When Project Daedalus was being designed back in the 1970s, the members of the British Interplanetary Society who were working on the starship envisioned it being maintained by ‘wardens,’ robots that would keep crucial systems functional over the 50-year mission to Barnard’s Star. Invariably, that calls up images of metallic machines, stiff in construction and marked by a certain ponderous clumsiness. True or false, it’s a view of robotics that has persisted until relatively recently.

But if you’re going to do long-term maintenance on a starship, you’d better be more flexible. And that makes a Tufts initiative interesting not just from a space perspective but for applications in medicine, electronics, manufacturing and more. The Biomimetic Technologies for Soft-bodied Robots project aims to produce machines that draw on the model of living cells and tissues. Five Tufts departments will work with a $730,000 grant from the W.M. Keck Foundation to get the job started.

Check out what’s going on at the university’s Biomimetic Devices Laboratory. The researchers here are calling for us to stop evaluating the line between living and mechanical on the basis of materials. “Many machines incorporate flexible materials at their joints and can be tremendously fast, strong and powerful,” says Tufts biology professor Barry Trimmer, “but there is no current technology that can match the performance of an animal moving through natural terrain.”

Trimmer is a neurobiologist whose work with caterpillars feeds directly into the robot concept. How do you build a simple machine that can move flexibly without joints? Like caterpillars, the new robots are to be soft, but unlike them, the robots will also be capable of collapsing into small volumes, and unlike today’s robots, they’ll be able to crawl along wires or burrow themselves into tiny spaces to do their work. Thus does biology meet nano-fabrication and bioengineering, with results that may prove exceedingly useful for long-haul space missions.

Back to the Daedalus wardens for a moment. They were designed not only to test and repair key onboard systems, but also to operate thousands of kilometers away from the ship as needed, coordinating a variety of experiments through the ship’s main computer. But they were envisioned as weighing five tons apiece and in the thinking of that era, would have been incapable of the kind of adaptive — one could say ‘evolutionary’ — behavior that is suggested by flexible hardware and genetic algorithms.

Software that generates variations in its own code and tests a variety of mutations has been under study for some time — look at the work Jordan Pollack and Hod Lipson have done at Brandeis, for example. Wedding genetic algorithms to a new flexibility in robotic structure promises great things for missions that must proceed without human intervention. Specialized robots will continue to fly on our space missions, but in forms that look less and less like the Daedalus wardens.
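The core of a genetic algorithm is simple enough to sketch. The toy below evolves bitstrings toward an all-ones target (the classic 'OneMax' problem); it is purely illustrative, with no relation to the Brandeis code, but it shows the mutate-select-repeat loop that such systems build on:

```python
import random

# A minimal genetic algorithm on the OneMax toy problem. Genomes are
# bitstrings, fitness counts the 1-bits, and each generation keeps the
# fitter half of the population, then refills it with mutated
# crossover offspring of the survivors.

def mutate(genome, rate, rng):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ (rng.random() < rate) for bit in genome]

def crossover(mom, dad, rng):
    """One-point crossover of two parent genomes."""
    cut = rng.randrange(1, len(mom))
    return mom[:cut] + dad[cut:]

def evolve(genome_len=32, pop_size=40, generations=80, mut_rate=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)      # fitness = number of 1s
        survivors = pop[: pop_size // 2]     # elitist selection
        pop = survivors + [
            mutate(crossover(rng.choice(survivors), rng.choice(survivors), rng),
                   mut_rate, rng)
            for _ in range(pop_size - len(survivors))
        ]
    return max(sum(g) for g in pop)

print("Best fitness after evolution:", evolve(), "/ 32")
```

Swap the bitstring for a description of a robot limb and the fitness count for a physics simulation and you have, in caricature, the evolutionary-design approach.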

And incidentally, if you want to see those Daedalus wardens up close, the best reference is T. J. Grant, “Project Daedalus: The Need for On-Board Repair,” in A. R. Martin, ed. Project Daedalus Final Report. Supplement to the Journal of the British Interplanetary Society, 1978, pp. S172-S179.

Deep Space Challenge: Shrinking the Tools

Shrinking our instrumentation is one of the great hopes for extending spacecraft missions into the Kuiper Belt and beyond. No matter what kind of propulsion system we’re talking about, lower payload weight gets us more bang for the buck. That’s why a new imaging system out of the University of Rochester catches my eye this morning. It will capture images better than anything we can fly today, working at wavelengths from ultraviolet to mid-infrared.

It also uses a good deal less power, but here’s the real kicker: The new system shrinks the required hardware on a planetary mission from the size of a crate down to a chip no bigger than your thumb. The creation of Zeljko Ignjatovic and team (University of Rochester), the detector uses an analog-to-digital converter at each pixel. “Previous attempts to do this on-pixel conversion have required far too many transistors, leaving too little area to collect light,” said Ignjatovic. “First tests on the chip show that it uses 50 times less power than the industry’s current best, which is especially helpful on deep-space missions where energy is precious.”

Precious indeed. But imagine the benefits of carrying miniaturization still further. Nanotechnology pioneer Robert Freitas has speculated provocatively about space probes shrunk from the bulk of a Galileo or Cassini into a housing no larger than a sewing needle. Launched by the thousands to nearby stars, such probes could turn their enclosed nano-scale assemblers loose on the soil of asteroids or moons in the destination system. They could build a macro-scale research station, working from the molecular level up to create tools for continuing investigation and communicating data back to Earth.

The new sensor out of Rochester is a long way from that kind of miniaturization, but surely the dramatic changes in computing over the past few decades have shown us how potent shrinking our tools — and packing more and more capability into them — can be. And when you’re working with finite payload weight and can insert a new set of tools because they’re smaller than before, you’ve dramatically extended what a given space mission can accomplish. Getting a millimeter-wide needle to Alpha Centauri may not be Star Trek, but it could be how we start.