My friend David Warlick and I were having a conversation yesterday about what educators should be doing to anticipate the technological changes ahead. Dave is a specialist in using technology in the classroom and lectures all over the world on the subject. I found myself saying that as we move into a time of increasingly intelligent robotics, we should be emphasizing many of the same things we’d like our children to know as they raise their own families. A strong background in ethics, philosophy and moral responsibility is something they will have to bring to their children, and these are the same values we’ll want to instill in artificial intelligence.

The conversation inevitably summoned up Asimov’s Three Laws of Robotics, first presented in a 1942 science fiction story (‘Runaround,’ in the March issue of Astounding Science Fiction) and later the basic principles of all his stories about robots. In case you’re having trouble remembering them, here are the Three Laws:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov is given credit for these laws, but he was quick to acknowledge that it was a 1940 conversation with science fiction editor John Campbell that fully crystallized the ideas within them, so in some ways they were a joint creation. As Dave and I talked, I was also musing about the artificial intelligence aboard the Alpha Centauri probe in Greg Bear’s Queen of Angels (1990), which runs into existential issues that force it into an ingenious solution, one it could hardly have been programmed to anticipate.

We are a long way from the kind of robotic intelligence Asimov depicts in his stories, but interesting work out of Cornell University (thanks to Larry Klaes for the tip) points to continued growth in that direction. At Cornell’s Personal Robotics Lab, researchers have been working out how a robot can understand the relationship between people and the objects they use. Can a robot arrange a room in a way that is optimal for humans? To make that possible, the robot needs a basic sense of how people relate to things like furniture and gadgets.

It should be easy enough for a robot to measure the distances between objects in a room and to arrange furniture, but people are clearly the wild card. What the Cornell researchers are doing is teaching the robots to imagine where people might stand or sit in a room so that they can arrange objects in ways that support human activity. Earlier work in this field was based on developing a model that showed the relationship between objects, but that didn’t factor in patterns of human use. A TV remote might always be near a TV, for example, but if a robot located it directly behind the set, the people in the room might have trouble finding it.

Here’s the gist of the idea as expressed in a Cornell news release:

Relating objects to humans not only avoids such mistakes but also makes computation easier, the researchers said, because each object is described in terms of its relationship to a small set of human poses, rather than to the long list of other objects in a scene. A computer learns these relationships by observing 3-D images of rooms with objects in them, in which it imagines human figures, placing them in practical relationships with objects and furniture. You don’t put a sitting person where there is no chair. You can put a sitting person on top of a bookcase, but there are no objects there for the person to use, so that’s ignored. The computer calculates the distance of objects from various parts of the imagined human figures, and notes the orientation of the objects.
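To make that description concrete, here is a minimal sketch in Python of what scoring a placement against imagined human poses might look like. This is an illustration of the idea, not the Cornell group’s actual model: the poses, positions, distance preferences and tuning constants below are all invented.

```python
import math

# A toy version of the idea above: score a candidate object placement by
# its relationship to a small set of imagined human poses rather than to
# every other object in the scene. All values here are invented.

# Each pose is a location plus the height of the body part that matters
# (the hands of a seated figure, for instance).
SITTING_POSE = {"x": 1.0, "y": 2.0, "hand_height": 0.7}
STANDING_POSE = {"x": 3.0, "y": 1.0, "hand_height": 1.1}
poses = [SITTING_POSE, STANDING_POSE]

def distance_score(obj, pose, preferred=0.5, tolerance=0.4):
    """Prefer objects a comfortable reach from the pose's hands."""
    dx = obj["x"] - pose["x"]
    dy = obj["y"] - pose["y"]
    dz = obj["z"] - pose["hand_height"]
    d = math.sqrt(dx * dx + dy * dy + dz * dz)
    return math.exp(-((d - preferred) ** 2) / (2 * tolerance ** 2))

def orientation_score(obj, pose):
    """Reward objects (a laptop screen, say) that face the imagined person."""
    to_pose = math.atan2(pose["y"] - obj["y"], pose["x"] - obj["x"])
    # 1.0 when the object faces the pose directly, 0.0 when it faces away.
    return (1 + math.cos(obj["facing"] - to_pose)) / 2

def placement_score(obj, poses):
    """Score a placement against its best-matching human pose."""
    return max(distance_score(obj, p) * orientation_score(obj, p)
               for p in poses)

# Compare two spots for a laptop: on the desk beside the sitting pose and
# facing it, versus teetering on top of the fridge and facing away.
on_desk = {"x": 1.2, "y": 2.3, "z": 0.75,
           "facing": math.atan2(2.0 - 2.3, 1.0 - 1.2)}
on_fridge = {"x": 4.0, "y": 4.0, "z": 1.9, "facing": 0.0}

print(f"desk:   {placement_score(on_desk, poses):.3f}")   # high
print(f"fridge: {placement_score(on_fridge, poses):.3f}") # near zero
```

Note how the computation stays small: each candidate placement is checked against a handful of poses, not against a long list of every other object in the room.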

Image: Above left, random placement of objects in a scene puts food on the floor, shoes on the desk and a laptop teetering on the top of the fridge. Considering the relationships between objects (upper right) is better, but the laptop faces away from a potential user and the food is higher than most humans would like. Adding human context (lower left) makes things more accessible. Lower right: how an actual robot carried it out. Credit: Personal Robotics Lab.

The goal is for the robot to learn the constants of human behavior and thus figure out how humans use space. The work involves images of various household spaces like living rooms and kitchens, with the robots programmed to move things around within those spaces using a variety of algorithms. In general, factoring in human context made the placements more accurate than working only with the relationships between objects, but the best results came from combining human context with object-to-object programming, as shown in the image above.
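Since the best results came from blending the two signals, here is a continuation of the same sketch showing one way such a blend might look. The co-occurrence prior and the 60/40 weighting are assumptions for illustration, not values from the Cornell paper; this block reuses `placement_score` and `poses` from the sketch above.

```python
import math

# Continues the earlier sketch (reuses placement_score and poses).
# The co-occurrence table and the weighting below are invented.

# How strongly one object "expects" to sit near another.
CO_OCCURRENCE = {("remote", "tv"): 0.9, ("mouse", "laptop"): 0.8}

def object_score(name, obj, scene):
    """Proximity to the object's usual companions (remote near TV, etc.)."""
    best = 0.0
    for (a, b), strength in CO_OCCURRENCE.items():
        if a == name and b in scene:
            other = scene[b]
            d = math.dist((obj["x"], obj["y"]), (other["x"], other["y"]))
            best = max(best, strength * math.exp(-d))
    return best

def combined_score(name, obj, scene, poses, w_human=0.6):
    """Blend human context with object-to-object relations."""
    return (w_human * placement_score(obj, poses)
            + (1 - w_human) * object_score(name, obj, scene))

# The remote belongs near the TV *and* within reach of the sitting pose,
# so a spot in front of the set beats one directly behind it.
scene = {"tv": {"x": 1.0, "y": 3.0}}
in_front = {"x": 1.1, "y": 2.6, "z": 0.7, "facing": -1.7}
behind = {"x": 1.0, "y": 3.4, "z": 0.7, "facing": -1.7}
for label, spot in (("in front", in_front), ("behind", behind)):
    print(label, round(combined_score("remote", spot, scene, poses), 3))
```

The object-to-object term alone rates both spots about equally, since each is close to the TV; it is the human-context term that pushes the remote around to the front, which is exactly the failure mode the researchers describe.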

We’re a long way from Asimov’s Three Laws, not to mention the brooding AI of the Greg Bear novel. But it’s fascinating to watch the techniques of robotic programming emerge because what Cornell is doing is probing how robots and humans will ultimately interact. These issues are no more than curiosities at the moment, but as we learn to work with smarter machines — including those that begin to develop a sense of personal awareness — we’re going to be asking the same kind of questions Asimov and Campbell did way back in the 1940s, when robots seemed like the wildest of science fiction but visionary writers were already imagining their consequences.
