Robotics: Anticipating Asimov

by Paul Gilster on June 21, 2012

My friend David Warlick and I were having a conversation yesterday about what educators should be doing to anticipate the technological changes ahead. Dave is a specialist in using technology in the classroom and lectures all over the world on the subject. I found myself saying that as we move into a time of increasingly intelligent robotics, we should be emphasizing many of the same things we’d like our children to know as they raise their own families. A strong background in ethics, philosophy and moral responsibility is something they will have to bring to their children, and these are the same values we’ll want to instill in artificial intelligence.

The conversation invariably summoned up Asimov’s Three Laws of Robotics, first presented in a 1942 science fiction story (‘Runaround,’ in the March issue of Astounding Science Fiction) and later the basic principles of all his robot stories. In case you’re having trouble remembering them, here are the Three Laws:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov is given credit for these laws but was quick to acknowledge that it was through a conversation with science fiction editor John Campbell in 1940 that the ideas within them fully crystallized, so we can in some ways say that they were a joint creation. As Dave and I talked, I was also musing about the artificial intelligence aboard the Alpha Centauri probe in Greg Bear’s Queen of Angels (1990), which runs into existential issues that force it into an ingenious solution, one it could hardly have been programmed to anticipate.

We are a long way from the kind of robotic intelligence that Asimov depicts in his stories, but interesting work out of Cornell University (thanks to Larry Klaes for the tip) points to the continued growth in that direction. At Cornell’s Personal Robotics Lab, researchers have been figuring out how to understand the relationship between people and the objects they use. Can a robot arrange a room in a way that would be optimal for humans? To make it possible, the robot would need to have a basic sense of how people relate to things like furniture and gadgets.

It should be easy enough for a robot to measure the distances between objects in a room and to arrange furniture, but people are clearly the wild card. What the Cornell researchers are doing is teaching the robots to imagine where people might stand or sit in a room so that they can arrange objects in ways that support human activity. Earlier work in this field was based on developing a model that showed the relationship between objects, but that didn’t factor in patterns of human use. A TV remote might always be near a TV, for example, but if a robot located it directly behind the set, the people in the room might have trouble finding it.

Here’s the gist of the idea as expressed in a Cornell news release:

Relating objects to humans not only avoids such mistakes but also makes computation easier, the researchers said, because each object is described in terms of its relationship to a small set of human poses, rather than to the long list of other objects in a scene. A computer learns these relationships by observing 3-D images of rooms with objects in them, in which it imagines human figures, placing them in practical relationships with objects and furniture. You don’t put a sitting person where there is no chair. You can put a sitting person on top of a bookcase, but there are no objects there for the person to use, so that’s ignored. The computer calculates the distance of objects from various parts of the imagined human figures, and notes the orientation of the objects.

Image: Above left, random placing of objects in a scene puts food on the floor, shoes on the desk and a laptop teetering on the top of the fridge. Considering the relationships between objects (upper right) is better, but the laptop is facing away from a potential user and the food is higher than most humans would like. Adding human context (lower left) makes things more accessible. Lower right: how an actual robot carried it out. Credit: Personal Robotics Lab.

The goal is for the robot to learn constants of human behavior, thus figuring out how humans use space. The work involves images of various household spaces like living rooms and kitchens, with the robots programmed to move things around within those spaces using a variety of different algorithms. In general, factoring in human context made the placements more accurate than working just with the relationships between objects, but the best results came from combining human context with object-to-object programming, as shown in the above image.
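To make the approach concrete, here is a minimal sketch of the kind of scoring the Cornell work implies: candidate placements are rated both by their distance to imagined human poses and by their relationship to other objects, with the two costs combined. Everything here – the poses, preferred distances and weights – is invented for illustration, not taken from the lab’s code.

```python
import math

# Hypothetical sketch of human-context placement scoring in the spirit of
# the Cornell work described above. All poses, distances and weights are
# invented for illustration; this is not the lab's actual code.

# Imagined human poses in the room: (x, y, z) of a sitting or standing figure.
HUMAN_POSES = [(1.0, 1.0, 1.2),   # sitting on the couch
               (3.0, 2.0, 1.7)]   # standing by the counter

# Preferred distance (meters) from the nearest human pose, per object type.
PREFERRED_DIST = {"tv_remote": 0.5, "laptop": 0.6, "shoes": 1.5}

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def human_context_cost(obj_type, placement):
    """How far the placement deviates from the object's preferred
    distance to the nearest imagined human pose (lower is better)."""
    nearest = min(dist(placement, pose) for pose in HUMAN_POSES)
    return abs(nearest - PREFERRED_DIST[obj_type])

def object_context_cost(placement, related_objects, preferred=0.3):
    """Cost based on distances to co-occurring objects (remote near TV)."""
    if not related_objects:
        return 0.0
    costs = [abs(dist(placement, o) - preferred) for o in related_objects]
    return sum(costs) / len(costs)

def placement_score(obj_type, placement, related_objects, w_human=0.6):
    """Combine human context with object-to-object context, which the
    article says gives the best results. Lower scores are better."""
    return (w_human * human_context_cost(obj_type, placement)
            + (1 - w_human) * object_context_cost(placement, related_objects))

# Pick the best of three candidate spots for the TV remote, given the TV.
tv = (0.0, 1.0, 0.8)
candidates = [(0.8, 1.0, 0.5),    # coffee table near the couch
              (0.0, 1.3, 0.8),    # directly behind the TV
              (-0.5, 1.0, 0.1)]   # on the floor in a corner
best = min(candidates, key=lambda p: placement_score("tv_remote", p, [tv]))
```

Under these invented numbers the coffee-table spot wins: the spot directly behind the TV scores well on object-to-object context but poorly on human context – exactly the remote-behind-the-set failure the article describes.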

We’re a long way from Asimov’s Three Laws, not to mention the brooding AI of the Greg Bear novel. But it’s fascinating to watch the techniques of robotic programming emerge because what Cornell is doing is probing how robots and humans will ultimately interact. These issues are no more than curiosities at the moment, but as we learn to work with smarter machines — including those that begin to develop a sense of personal awareness — we’re going to be asking the same kind of questions Asimov and Campbell did way back in the 1940s, when robots seemed like the wildest of science fiction but visionary writers were already imagining their consequences.


James D. Stilwell June 21, 2012 at 9:26

….what educators should be doing to anticipate the technological changes ahead…..for starters read, The Age of Spiritual Machines, by Ray Kurzweil….
JDS

Scott G. June 21, 2012 at 10:09

It seems that one interesting interpretation of Asimov’s laws is that a robot is obligated to help a person that is about to experience harm (1st law) even if the person rejects its help (2nd law). But, on the other hand, a robot must destroy itself if a person orders it to (2nd law overriding 3rd law). However, if that were the case, it wouldn’t be able to do so until it first removed the person from harm’s way (1st law precedence).
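The precedence ordering Scott G. describes can be made concrete as a tiny sketch: treat the Laws as a strict filter applied First Law first, then Second, then Third. The action model and the scenario below are invented purely for illustration.

```python
from dataclasses import dataclass

# Toy model of Asimov's Three Laws as a strict precedence filter.
# All fields and the scenario are invented for illustration only.

@dataclass
class Action:
    name: str
    harms_human: bool = False      # would violate the First Law directly
    prevents_harm: bool = False    # rescues a human in danger
    obeys_order: bool = False      # satisfies a standing human order
    destroys_self: bool = False    # relevant to the Third Law

def choose(actions, human_in_danger):
    """Rank candidate actions by the Laws in order: First, Second, Third."""
    legal = [a for a in actions if not a.harms_human]
    if human_in_danger:
        # First Law precedence: rescue overrides everything, including
        # an order to self-destruct (Scott G.'s point above).
        rescues = [a for a in legal if a.prevents_harm]
        if rescues:
            return rescues[0]
    # Second Law: obey human orders (orders that harm humans were filtered).
    ordered = [a for a in legal if a.obeys_order]
    if ordered:
        return ordered[0]
    # Third Law: prefer self-preservation when nothing higher applies.
    safe = [a for a in legal if not a.destroys_self]
    return safe[0] if safe else legal[0]

# A robot ordered to self-destruct while a human is in danger:
options = [Action("self_destruct", obeys_order=True, destroys_self=True),
           Action("rescue_human", prevents_harm=True)]
print(choose(options, human_in_danger=True).name)   # rescue comes first
```

With `human_in_danger=False` the same robot would carry out the self-destruct order, since the Second Law then outranks the Third – matching the interplay described above.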

Ed Reed June 21, 2012 at 12:36

There’s an itch I’ve been wanting to scratch having to do with the software and computational facilities that will go along on stellar probe missions. My professional line of work involves verifiably secure software – that is, software that can be objectively demonstrated to be free of trap doors and Trojan horses, in particular as it comes from the equipment supplier – but also software whose updates and security properties effectively deal with the issue of subversion of the system’s security. I don’t so much develop such systems as try to design and implement applications that run on such systems.

The foundation of my work is a combination of security engineering (“secure from what?”) and software engineering (not “what does it do?”, but rather, “what does it NOT do?”). Analysis takes a set theory approach to things (“for all possible applications on the system, demonstrate that NONE can put the system into an insecure state”).
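That set-theoretic style of argument (“for all …, NONE can …”) can be illustrated at toy scale by exhaustively exploring a small state machine and asserting that no insecure state is reachable. The states and transition rules below are invented for illustration; real verification works over vastly larger, symbolically represented state spaces.

```python
# Toy illustration of the "for all applications, NONE can reach an
# insecure state" style of analysis: exhaustively explore a tiny
# access-control state machine (states and rules invented here).

STATES = {"locked", "unlocked", "compromised"}
ACTIONS = {"login_ok", "login_fail", "logout"}

def step(state, action):
    # Transition rules of the toy system under analysis.
    if state == "locked" and action == "login_ok":
        return "unlocked"
    if state == "unlocked" and action == "logout":
        return "locked"
    return state  # everything else is a no-op

def reachable(start="locked", depth=6):
    """All states reachable within `depth` steps from `start`."""
    seen = {start}
    frontier = {start}
    for _ in range(depth):
        frontier = {step(s, a) for s in frontier for a in ACTIONS} - seen
        seen |= frontier
    return seen

# The security property: no sequence of actions reaches "compromised".
assert "compromised" not in reachable()
```

The point is the quantifier: the claim is not that some test didn’t break the system, but that the entire reachable set excludes the insecure state.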

Security properties of an interstellar probe present several new dimensions to design problems. Yes, you’d like to know your stuff does what it’s supposed to do. And yes, you’d like to know that it can’t be altered (accidentally or maliciously) to do something you don’t intend (whether in design, implementation, in transit, during updates, or during operations at the destination).

But the range of adversaries with which you need to contend, and their literally unimaginable capabilities, make me wonder whether, and if so how, to design systems that will reliably do what was intended, and nothing else, on their own so far from home.

Many AI discussions we have will inevitably deal with the psychology and motivations with which we imbue our AI systems. I suspect, though, that the sort of systems with the range of freedoms required will likely be collaborative developments by humans and AI design AIs we develop along the way.

But – and here’s the crux of my interest – what can we say about the computational foundation on which those AI operate – how concrete or how synthetic will be the objects it uses to model the world? How close to hardware will the foundations be, or how disconnected and decoupled from sound logic will they be?

The beauty of some approaches to AI is that some algorithms can answer questions we don’t even know to ask, yet. Genetic Algorithms are a case in point, as too are some of the analog-simulation neural networks.

But those technologies make certain assumptions about the hardware and software on which they operate.

Will we be able to foretell, never mind control, what the operational hardware will look like or behave like at the destination? Tinkering by the on-board AI itself is certainly one concern. Tinkering by as-yet-unidentified interested bystanders is another. Yes, the probe may be “found” and examined.

And what are the consequences for the AI and its operational systems if there’s not a carefully planned relationship between the hardware and software features on which they both rely?

My worries about just what a dark-matter facile adversary could do in analyzing the probe’s systems will have to wait for another day (I said I work in security, didn’t I?)

Ed

philw1776 June 21, 2012 at 12:53

Hopefully Susan Calvin is alive today, in school and will soon be ready to help.
Damn, I miss Asimov.

Alex Tolley June 21, 2012 at 14:36

@Ed Reed
Doesn’t everything you’ve stated apply to a human being too? How would you make a human “secure” in such a situation?

Scott G. June 21, 2012 at 16:00

A point related to Ed’s comment… I think a fundamental difference between AI software and all other software is that when you mission-qualify the “other” software, you are pretty certain it will behave the same way during the actual mission as it did during its certification tests. But advanced AI software may “evolve” into a completely different state during the mission than the one it was in during testing. So how is it possible to qualify AI software for such a long-duration mission (with zero possibility of outside intervention) and have any certainty that its behavior will still match its responses to the conditions of testing?

Ed Reed June 21, 2012 at 16:01

@Alex Tolley, yes, I suppose so – but that’s outside the scope of my current interests ;-)

AI will be our creation, or the creation of our creation, which I think creates a certain duty on our part to worry about such things. Both the runaway AI (I enjoyed the short story “Lucy” in Going Interstellar) and the sheer cyber-defense of the AI we send on its way are motivations for me.

Human beings may be someone else’s creation, and one can only hope they still have at least a benign interest, if they’re still in the neighborhood. If they are, it would be nice to hear from them from time to time.

ljk June 21, 2012 at 17:24

Just remember, when you are assembling your Artilect, make sure it is thoroughly tested *before* you hook it up to every nuclear missile in your weapons arsenal and then place said AI inside a mountain fortress with a nuclear reactor for a power source that is impervious to all forms of attack, plus having no Off switch or electric plug to pull out of a wall.

http://www.eccentric-cinema.com/cult_movies/colossus.htm

Having an Artilect that can operate a starship over many light years and decades of operation in very deep space is not just important, it is critical. This will be the case whether the vessel has humans aboard it or not.

Perhaps one way to “solve” the concerns about an Artilect that becomes not just really smart but also very aware of itself and the rest of reality is to develop systems that imitate human thinking without becoming conscious and going rogue and all that. That is, if making something smart without its also being aware is possible.

See here:

http://www.technologyreview.com/view/428235/intel-reveals-neuromorphic-chip-design/

ljk June 21, 2012 at 17:51

Speaking of Asimov’s Robots, there was one story where a robot decided that organic humans could not possibly have been the creators of the superior machines such as it, and the humans could not convince it otherwise.

The one thing I cannot remember is if this robot decided to ignore the Three Laws as a result. I can see a being with such an attitude not taking them very seriously, at the least.

There was a similar problem in the very first Star Trek film, when V’Ger decides that “carbon units” (aka humans) are not true life forms and certainly could not be its creator.

http://en.memory-alpha.org/wiki/V'Ger

We might also run into an Artilect that does obey its original programming, but in a very logical manner that leaves humanity in a pickle. This was the case in the 1970 SF film Colossus: The Forbin Project, where an AI designed to handle the US nuclear defense arsenal to ensure that nuclear war never broke out followed its orders to the letter by taking over all the nuclear missiles (after merging with its Soviet counterpart, Guardian) and threatening to use them if humanity didn’t behave its collective self. And unlike most Hollywood Artilects, Colossus did not fall for the usual human tricks because it was so much smarter than the primates and anticipated every conceivable plot in advance.

http://www.cyberpunkreview.com/movie/decade/pre-1980/colossus-the-forbin-project/

ljk June 21, 2012 at 18:01

If you want to know what is going on in the real world of AI and robots and the integration of the two (as of 1.5 years ago), check out this special edition of Neurocomputing from December of 2010 online here:

http://profhugodegaris.wordpress.com/artificial-brains/

By the way, June 23 is the 100th anniversary of the birth of Alan Turing, one of the first to seriously conceive of AI among other pioneering efforts with computers:

http://cosmiclog.msnbc.msn.com/_news/2012/06/20/12324071-happy-100th-birthday-alan-turing?lite

A. A. Jackson June 22, 2012 at 7:42

I cannot remember if I noted this before on this forum … A.C. Clarke has related, somewhere (can’t find my reference) that he was with Asimov in 1968 after viewing 2001: A Space Odyssey. Asimov was upset with Clarke because of HAL’s behavior violating his famous laws. I cannot remember what Clarke’s rejoinder was, but it apparently calmed Asimov down.
HAL has a strange history. Clarke and Kubrick wrote the first draft of 2001 in New York City in 1965. Clarke did most of the actual writing of the novel, meeting with Kubrick each day while Kubrick handled the planning for the film (which, for a film of its magnitude and intricacy, seems enough to drive a person mad). Clarke consulted with many experts, Fred Ordway in particular, who became the film’s technical adviser. Clarke apparently spent a week with MIT’s Marvin Minsky learning that great man’s ideas about artificial intelligence. Kubrick would talk to all the experts too, either in NY or over the phone. HAL had other names and emerged slowly. The original screenplay was written by Kubrick in NY; you can find it on the web. During the yearlong production shoot Kubrick used the story but wrote a different script.
People familiar with Clarke’s novel know he gave HAL a motivation for murder; I think it is chapter 26, ‘Need to Know’. Close reading of that shows that HAL went bonkers not for telling a lie, but just for withholding information! Kubrick never explains HAL’s behavior. There is anecdotal evidence from around the edges that Kubrick had a disagreement with Clarke about HAL.
Here is an odd thing. Of all the outside technical advisers, only Marvin Minsky was called over to England for consultation. The film has odd puzzles about HAL, the chess game… even years later in HAL’s Legacy: 2001’s Computer as Dream and Reality (MIT Press, 1998) an AI researcher noted that Kubrick may have planted a clue when HAL told the BBC interviewer that, in essence, HAL had never made a mistake… but a true AI would never say that… how else does intelligence arise except by learning?
Here is something I wrote on 2001’s 40th anniversary:
The depiction of the HAL 9000 (Heuristically programmed ALgorithmic computer) in 2001 remains one of the film’s most eerie elements. For their description of artificial intelligence, Kubrick and Clarke only had the terminology of the mid-1960s. At that time, the prevailing concept was that Artificial Intelligence (AI) was expected to be a programmed computer. Thus, the term computer, with all its implications of it being a machine, occurs repeatedly. In the last 44 years, no true AI has emerged. Today’s corresponding term would be “strong AI.” Kubrick and Clarke’s use of mid-1960s terminology obscures the fact that the film and novel authors constructed an AI that is unmistakably strong – that is, capable of “general intelligent action.” How this would have been achieved Kubrick and Clarke left as an extrapolation. Clarke provides a little extrapolation in the novel:
“Probably no one would ever know this: it did not matter. In the 1980s, Minsky and Good had shown how neural networks could be generated automatically – self-replicated – in accordance with an arbitrary learning program. Artificial brains could be grown by a process strikingly analogous to the development of the human brain. In any given case, the precise details would never be known, and even if they were, they would be millions of times too complex for human understanding.” (From A. C. Clarke, 2001: A Space Odyssey, ROC trade paperback edition, 2005, pp. 92–93.)
Even though WATSON won, WATSON was never aware that it had won… that’s not a ‘Strong AI’.

JoeP June 22, 2012 at 13:33

A.A. Jackson

To your last statements: defining awareness is not easily done.

For example, suppose you had a Watson-like supercomputer that can pass the Turing Test and does this by fully simulating the neural activity of an entire human brain.

Is it truly conscious or is it mindlessly manipulating a simulation without any real awareness?

If you are interested, I’d highly recommend reading up on John Searle’s “Chinese Room” thought experiment and the various answers to it by other philosophers. Also worth reading is Daniel Dennett’s book, Consciousness Explained.

Interstellar Bill June 22, 2012 at 14:29

For 50 years now, AI has been an ever-receding Shangri-La, yet the same breathless promises are solemnly trotted out, as if HAL 9000 were just around the corner.
What will they say in 20 more years, when all we’ll have is Deep Blue & Watson upgrades? ‘Hold on, it’s coming.’
Until Google has a natural-language interface capable of understanding actual questions and giving intelligent answers, then AI remains a time-wasting empty dream.
Asimov’s Laws are a self-contradictory joke. A robot by definition does not think but merely runs a computer program. It inherently lacks any conceptual understanding of anything whatsoever, let alone these silly ‘Laws’.

Just wait until you see some of the incredibly stupid things that will be done, however rarely, by driverless cars, which presumably will have Asimov’s Laws somewhere in their vast programming. In spite of their lack of true AI, their accident rate will nevertheless be a hundredth that of human drivers in most situations. That’s why they’ll take over most driving.

Even when computers are a million times more powerful than today’s, we’ll be no closer to true AI.

ljk June 22, 2012 at 17:05

A. A. Jackson said on June 22, 2012 at 7:42:

“I cannot remember if I noted this before on this forum … A.C. Clarke has related, somewhere (can’t find my reference) that he was with Asimov in 1968 after viewing 2001: A Space Odyssey. Asimov was upset with Clarke because of HAL’s behavior violating his famous laws. I cannot remember what Clarke’s rejoinder was, but it apparently calmed Asimov down.”

Clarke and Kubrick also talked to Carl Sagan about what the ETI in 2001: A Space Odyssey might look like. He suggested not showing them at all because he felt that alien intelligences would be encountered by the dawn of the 21st Century and it would make the film look dated.

So although the Monolith Builders were supposedly never shown in the film, I recall reading in the book on the making of 2001 that this sequence, when Bowman was going through the Stargate, shows the aliens:

http://www.collativelearning.com/PICS%20FOR%20WEBSITE/stills%202/2001SpaceOdyssey128.jpg

At least they weren’t actors in rubber suits.

JoeP June 22, 2012 at 18:00

“…Even when computers are a million times more powerful than today’s, we’ll be no closer to true AI.”

Well Bill, I take it that is your opinion rather than a statement of fact :-)

Part of the problem of so-called “Strong AI” is indeed one of brute-force power. Systems 1000x more powerful than what we have today are definitely part of the Strong-AI solution. The other part is defining what constitutes real intelligence and awareness, and the structure and programming to realize them effectively.

Being a materialist (for the most part), I expect some of the answers will probably be found, at some point.

I can watch an ant navigate across the picnic table, testing crumbs of food and leaving scent trails to interesting things for other ants in her colony to investigate. Packed into the tiny brain it possesses are secrets that currently defy, for the most part, the most powerful robotics and AI systems. And I think no one will argue that an ant’s brain won’t eventually be effectively simulated. One could argue that things after that point are largely a matter of scaling. When does consciousness arise? Does a mouse possess self-awareness? I think most people would say that it does. Can computer systems get us that far? Is self-awareness a fundamental gap that cannot be bridged between self-aware biological entities and computer systems? On what basis?

I do think it is a matter of engineering.

A. A. Jackson June 23, 2012 at 5:48

Joe P
Actually I stole that line about Watson from Michael Shermer

http://www.scientificamerican.com/article.cfm?id=in-the-year-9595

I would love to have asked Shermer: “Did you follow up by asking David Ferrucci, ‘Yes, but what was WATSON’s reaction when you told him (it?) that he had won?’”

I have read John Searle’s work about Strong AI.

I think, though he never wrote about it, that Kubrick was really bothered about what a self-aware Strong AI might mean, and felt that Clarke was not questioning enough. (Clarke even tries a different ploy in the beginning of 2010, the sequel to 2001, explaining HAL’s behavior with a “Hofstadter-Moebius loop” – that just sounds like super clever technobabble to me.)
Unless it’s buried in Kubrick’s papers we may never know; he may have imagined HAL as really having ‘human intelligence and consciousness’ and that, like some seemingly normal humans, HAL went crackers for no good damn reason we can understand.
I seem to remember Marvin Minsky, when asked when we would have true (strong) AI, saying “it could be 40 or 400 years away, I just don’t know.”

A. A. Jackson June 23, 2012 at 6:05

@ljk

Yes, I found out about Sagan’s contribution in a biography of Kubrick.
Clarke, who knew Sagan well, introduced Kubrick to Sagan over dinner one night in 1965 or early 1966. I think Kubrick wanted to make the ‘Monolith Makers’ non-anthropomorphic while Clarke wanted the opposite, or the other way around. Sagan was given the information that the MMs had left the Monolith 3 million years ago; in the film Kubrick changes this to 4 million.
Sagan said something like “a civilization that old, no way to know, just never show them at all.”
Kubrick did not use this idea until shooting started.
Clarke left an interesting comment somewhere in writings before he died that Kubrick, Clarke and Sagan were supposed to continue at lunch the next day. Kubrick thought Sagan was really smart, but was put off by him. So Clarke the next day took Sagan to the N.Y. world’s fair.
It may have been that Kubrick asked Sagan to be in the prolog interviews that were to be used in the film. Apparently Sagan said he would do it for a percentage of the film’s profits. I don’t think this sat well with Kubrick!

By the by, regarding that image from the film: we don’t explicitly know whether those are Monolith Makers or just another artifact of the Monolith Makers. I never heard Kubrick or Clarke remark on them. I don’t even know how Trumbull did that sequence with 1967 technology.

Eniac June 23, 2012 at 10:31

Interstellar Bill:

Until Google has a natural-language interface capable of understanding actual questions and giving intelligent answers, then AI remains a time-wasting empty dream.

This may happen much sooner than you think…

@LJK:

Having an Artilect that can operate a starship over many light years and decades of operation in very deep space is not just important, it is critical. This will be the case whether the vessel has humans aboard it or not.

I beg to differ. Good old fashioned automation will be quite sufficient to keep a starship going. No intellect required.

Take cockroaches, for example: Without much of an intellect, they do quite well navigating territory that is more varied and diverse than interstellar space. They manage not only to get around, but also to feed, avoid predators, grow, and reproduce.

ljk June 24, 2012 at 9:55

A. A. Jackson said on June 23, 2012 at 6:05:

“Clarke left an interesting comment somewhere in writings before he died that Kubrick, Clarke and Sagan were supposed to continue at lunch the next day. Kubrick thought Sagan was really smart, but was put off by him. So Clarke the next day took Sagan to the N.Y. world’s fair.”

LJK replies:

In one of the two 1999 biographies of Carl Sagan, they tell of Sagan and Clarke encountering a booth or some display where the person running it was promoting creationism. Sagan publicly lit into the guy while the more reserved (and British) Clarke quietly died of embarrassment.

A. A. Jackson then said:

“It may have been that Kubrick asked Sagan to be in the prolog interviews that were to be used in the film. Apparently Sagan said he would do it for a percentage take on profits from the film. I don’t think this set well with Kubrick!”

LJK replies:

I can see that having happened. Speaking of those interviews, whatever happened to them? I think most people would love to see them! Recently someone mentioned finding 17 minutes of 2001 footage that Kubrick had edited out just after the film debuted in 1968.

Could these interviews have been included as well? Do we even have transcripts of them? I worry they may have been destroyed along with most of the film sets, as Kubrick did not want his models and sets being used by others for schlocky B movies. Only a few items such as a space suit survived his purge.

A. A. Jackson then said:

“By the by , that image from the film we don’t explicitly know those are Monolith Makers or just another artifact of the Monolith Makers. I never heard Kubrick or Clarke remark on those things. I don’t even know how Trumbull did that sequence with 1967 technology.”

LJK replies:

Had I read nothing about that sequence, I would agree with you that whatever those glowing diamond objects were, the Monolith ETI would be only one of many possibilities. However, the 1970 book The Making of Kubrick’s 2001 by Jerome Agel (who would go on several years later to produce Sagan’s first major public work, The Cosmic Connection) specifically says in the pictures section that those objects were the Monolith Aliens. It seems plausible at least.

By the way, I found this interesting related publication with one A. A. Jackson mentioned in it:

http://www.aiaa-houston.org/Newsletter/April2008.pdf

And I throw these in out of just being plain interesting (and relevant):

http://www.participations.org/Volume%206/Issue%202/kramernew.pdf

http://io9.com/5901669/behold-the-2001-a-space-odyssey-lego-sets-that-never-existed

ljk June 24, 2012 at 9:59

Eniac said on June 23, 2012 at 10:31:

[@LJK: Having an Artilect that can operate a starship over many light years and decades of operation in very deep space is not just important, it is critical. This will be the case whether the vessel has humans aboard it or not.]

“I beg to differ. Good old fashioned automation will be quite sufficient to keep a starship going. No intellect required.

“Take cockroaches, for example: Without much of an intellect, they do quite well navigating territory that is more varied and diverse than interstellar space. They manage not only to get around, but also to feed, avoid predators, grow, and reproduce.”

As I have already asked elsewhere, can an AI be developed that is smart in the intuitive human (or even cockroach) sense without becoming aware/conscious? That is what I want to know.

If an AI can function in a smart way without becoming aware, then we may solve one of the problems with using an Artilect for a starship’s brain. If we cannot make a smart machine without it also becoming aware in the process, then we have a rather large set of issues at hand, among them sending an intelligent AND aware mind on a one-way journey to the stars.

David Evans June 24, 2012 at 17:16

ljk:

“The one thing I cannot remember is if this robot decided to ignore the Three Laws as a result. I can see a being with such an attitude not taking them very seriously, at the least.”

In the story, the robot’s task was to keep power beams aligned with receivers on other planets. Failure would kill millions of people. As I remember it, the robot did not believe in the existence of those planets and the people on them. However it decided to continue with the task, reasoning that in doing so it was obeying the hypothetical machine that had created it and the humans. The 3 laws were irrelevant to this decision.

A. A. Jackson June 24, 2012 at 17:21

@ljk
I sure am glad I kept my copy of Jerome Agel’s book from 1970; it’s kind of a hodgepodge of information, barely organized. Yet of all the books about 2001 in the years since, it has more good little tidbits and stuff that appears nowhere else. (Clarke’s Lost Worlds of 2001 is also valuable.)

As to the ‘diamonds’ being the Monolith Makers … those pages, in the back, are attributed to Douglas Trumbull, and all the notes there may be Trumbull’s, except a possible sketch by Kubrick. I don’t know of any text or document where Kubrick singled those ‘diamonds’ out as the MM.
A few pages later are sort of abstract images of ‘aliens’ that were experiments.
I think Agel got his wires crossed; my understanding is that those were made in New York at the same time as some of the oil-drop photo experiments for the Stargate sequence, some of which were incorporated into the film. So they were not images that came too late to use; Kubrick simply chose not to use them.

As to the interviews: someone told me once that the Criterion laserdisc special edition had them on it, but I can’t verify that.
That vault held footage is apparently the edits Kubrick made on the train from NY to LA. The actual interview footage might exist in ‘Kubrick’s Boxes’.

There was a documentary (by the BBC?) about the moving of all of Kubrick’s stored materials to the University of the Arts London, requested by Christiane Kubrick. It’s a huge collection; it arrived in 2007 and is, I hope, cataloged now… but I don’t know if anyone has studied it.

Yes, that’s me in that article, on the 40th anniversary of the release of 2001.
Bob Mahoney and I wrote it; Jon Rogers did the artwork. I must admit that I wrote the HAL speculations there.
We were trying to note some of the technical features of the film.
I got an email from Fred Ordway about some of the questions I had, he said he had a technical archive stored at Huntsville that he would have to consult but he never got back to me.

I had never seen the link to the essay you sent. It’s interesting. I wonder if this guy ever read Roger Ebert’s elegant reviews of 2001, the initial one in 1968 and a more elaborate one in 1997? Ebert’s reviews are at definite odds with the N.Y. critics of the time.

ljk June 24, 2012 at 23:31

It is from 1997 and a number of the chapters were not put online, but here is HAL’s Legacy from MIT Press:

http://mitpress.mit.edu/e-books/hal/

Eniac June 25, 2012 at 0:16

As I have already asked elsewhere, can an AI be developed that is smart in the intuitive human (or even cockroach) sense without becoming aware/conscious? That is what I want to know.

My answer would be no. “Consciousness” is too often mystified and overrated. It is simply a name that we give to the ability of an organism to do the following:
1) process information from its sensory inputs
2) build a computational model of the world consistent with those inputs
3) use this model to estimate the consequences of potential action
4) select the most promising of those potential actions to execute
Inevitably, such a model will be dominated by a boundary between self and non-self, which is the essence of what we call consciousness. A cockroach has all the above characteristics, but in a very primitive form, which some of us are having much fun emulating with little toy robots. Even the lowly nematode can perform a rudimentary version of this feat, with a hard-wired set of a thousand or so neurons, the circuitry of which can be plotted on a piece of paper.
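Eniac’s four numbered steps amount to the classic sense-model-plan-act loop of robotics. Here is a minimal sketch of that loop in Python, a toy light-avoiding “roach” in a one-dimensional world; every name and number is illustrative, not a claim about how real nervous systems are wired:

```python
# A minimal sense-model-plan-act loop: a 1-D "roach" that seeks darkness.

def sense(world, pos):
    """1) Process sensory input: read the light level at the agent's cell."""
    return world[pos]

def update_model(model, pos, reading):
    """2) Fold the reading into an internal model of the world."""
    model[pos] = reading
    return model

def predict(model, pos, action):
    """3) Estimate the consequence (light level) of a candidate move."""
    new_pos = max(0, min(len(model) if not model else 4, pos + action))
    new_pos = max(0, min(4, pos + action))  # clamp to the 5-cell world
    return model.get(new_pos, 0.0)  # unknown cells assumed dark: drives exploration

def act(model, pos):
    """4) Pick the action whose predicted outcome is darkest."""
    return min((-1, 0, 1), key=lambda a: predict(model, pos, a))

world = [0.9, 0.7, 0.4, 0.1, 0.0]  # light gradient: bright -> dark
pos, model = 0, {}
for _ in range(6):
    model = update_model(model, pos, sense(world, pos))
    pos = max(0, min(len(world) - 1, pos + act(model, pos)))

print(pos)  # 4: the agent settles in the darkest cell
```

The interesting design choice is in step 3: treating unknown cells optimistically (as dark) is what makes the toy explore at all, a crude stand-in for curiosity.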

Our own intelligence is many orders of magnitude more complex. Our brains contain elaborate models not only of our physical environment, but also of the minds of those fellow humans who matter to us in one way or another. We spend much of our planning on what other people might do; mere physical objects are boringly predictable for us. Unless we are scientists, of course…

There is no limit to how much data processing power and memory capacity can be brought to bear on this ability. One of the important steps in human development was the advent of writing, which enormously extended the quality and quantity of the species’ memory. With AI, we are attempting to do the same for processing capacity.

If we succeed, God help us. The information processing capacity of the human brain is quite pitiful compared to that of today’s computers, popular misconceptions to the contrary notwithstanding.

ljk June 25, 2012 at 11:14

David Evans said on June 24, 2012 at 17:16:

ljk: “The one thing I cannot remember is if this robot decided to ignore the Three Laws as a result. I can see a being with such an attitude not taking them very seriously, at the least.”

“In the story, the robot’s task was to keep power beams aligned with receivers on other planets. Failure would kill millions of people. As I remember it, the robot did not believe in the existence of those planets or the people on them. However, it decided to continue with the task, reasoning that in doing so it was obeying the hypothetical machine that had created both it and the humans. The Three Laws were irrelevant to this decision.”

LJK replies:

Thank you for the reminder, David. Whether or not Artilects ever attain consciousness, we will always have to be on the lookout for how a machine interprets our orders, especially if it lacks our instincts and intuition.

A real-world example is the Mars rover Spirit. In the early days of its exploration of the Red Planet, controllers noticed that at sunset the rover would sometimes not go in the direction they had programmed it to. It turns out that Spirit was, technically, afraid of its own shadow. The programmers had wisely told the MERs not to drive over cliffs or into deep craters; however, Spirit interpreted its own dark shadow at sunset as a hazardous pit and would not go in its direction. The controllers had to teach Spirit not to fear its own shadow.
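The failure mode is easy to reproduce in miniature: a naive hazard check that vetoes any sufficiently dark patch ahead will also veto the rover’s own shadow. A toy sketch in Python (the thresholds and function names are purely hypothetical, not the actual MER flight software):

```python
# Toy version of the Spirit "shadow" failure: a hazard check that treats
# any dark patch ahead as a pit will also balk at the rover's own shadow.
# Purely illustrative -- not the real MER hazard-avoidance code.

PIT_BRIGHTNESS = 0.2  # below this, the terrain "looks like" a hole

def naive_is_hazard(patch_brightness):
    """Original rule: anything dark enough is treated as a pit."""
    return patch_brightness < PIT_BRIGHTNESS

def fixed_is_hazard(patch_brightness, in_own_shadow):
    """The fix: first rule out the rover's own shadow as the cause."""
    return patch_brightness < PIT_BRIGHTNESS and not in_own_shadow

real_pit = 0.05          # genuinely dark: a crater interior
shadow_at_sunset = 0.10  # also dark: the rover's long evening shadow

print(naive_is_hazard(shadow_at_sunset))        # True: rover refuses to drive
print(fixed_is_hazard(shadow_at_sunset, True))  # False: drives on
print(fixed_is_hazard(real_pit, False))         # True: still avoids real pits
```

The point of the sketch is that both rules are "obeying orders"; only the second one captures what the controllers actually meant.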

There is also the Colossus story I have mentioned before, where the Artilect was given control of the nuclear arsenal to ensure peace on the planet. Colossus followed its own logical path to obey its primary command, but not in the way the humans intended or wanted – they basically wanted to have their cake and eat it too. Colossus said: no cake for you!

And there was an SF story from a while back about an Artilect in charge of a lunar colony. When the humans instructed it to oversee building new structures for the colony in the most efficient way possible, the Artilect decided to employ the laser cannon the colony had (for some reason) to that end.

ljk June 25, 2012 at 11:30

A. A. Jackson said on June 24, 2012 at 17:21:

“@ljk – I sure am glad I kept my copy of Jerome Agel’s book from 1970. It’s a kind of hodgepodge of information, barely organized, yet of all the books about 2001 in the years since, it has more good little tidbits and material that appears nowhere else. (Clarke’s Lost Worlds of 2001 is also valuable.)

“As to the ‘diamonds’ being the Monolith Makers: those pages in the back are attributed to Douglas Trumbull, and all the notes there may be Trumbull’s, except a possible sketch by Kubrick. I don’t know of any text or document where Kubrick singled those ‘diamonds’ out as the MM.”

LJK replies:

Ah, thank you. I was going on memory of a book I had not read in ages and did not have ready access to. So I was right about the interpretation of the Stargate diamond objects from Agel’s book, but I did not recall who made them. Trumbull did go on to make Silent Running in 1971, in part to show that he could do the effects for Saturn, so I forgive him. And I suppose if you follow the description of the evolution of the Monolith ETI in Clarke’s novel, they eventually became energy beings, which I assume would have no corporeal form at all.

About all your mentions of folks having great archives of information hardly anyone has ever seen: I hope they will soon be made available to the public, since those folks are no longer with us. I recently learned that Thomas Digges, who may have made and used a telescope decades before Galileo, has letters stored in the British Museum which apparently no one has examined and which may contain evidence that Digges did look at the heavens before Galileo. See here:

http://www.strangehistory.net/2012/06/10/thomas-digges-and-the-telescope/

A. A. Jackson then says:

“I had never seen the link to the essay you sent. It’s interesting. I wonder if this guy ever read Roger Ebert’s elegant reviews of 2001, the initial one in 1968 and a more elaborate one in 1997? Ebert’s reviews are at definite odds with the N.Y. critics of the time.”

LJK replies:

I know this is not always the case, but when a film or novel is rejected by the general public because it is at odds with what they are used to, this is a pretty good sign that we have a classic on our hands.

When I hear people say that 2001 was “boring” or that the human characters seemed dull and two-dimensional, I know they did not get what Kubrick was trying to do. People say they want something different from Hollywood, but there is a good yet sad reason why 2001: A Space Odyssey still stands alone over four decades later. Even its sequel 2010 could not match it, and while that was not a bad SF film in its own right, I once read that its maker Peter Hyams also thought the characters in 2001 were dull, which was disheartening. That is why the folks in 2010 talked so darn much and explained *everything*, and why HAL and even Discovery felt out of place in 2010.

ljk June 25, 2012 at 11:42

Eniac said on June 25, 2012 at 0:16:

“[LJK] As I have already asked elsewhere, can an AI be developed that is smart in the intuitive human (or even cockroach) sense without becoming aware/conscious? That is what I want to know.”

“My answer would be no. “Consciousness” is too often mystified and overrated. It is simply a name that we give to the ability of an organism [snip].”

LJK replies:

I too do not think that consciousness has “mystical” properties, which is why I hold that some day we will make an AI that can think, reason, and be aware.

Yes, in many ways our human brains are amazing, but they are not magical, and we are not that far ahead of a number of fellow creatures sharing planet Earth – as much as many people would like to think otherwise. You only have to turn on the television or go on the Internet for a bit to see that most of the focus is on subjects that the so-called lower animals spend the majority of their existences dealing with as well. The phrase “talking monkeys with car keys” tends to say it all.

And if you take even a general look at history, only a few centuries ago most of us were rural farmers and hunters, with the rest living in places that would hardly be called cities by modern standards – and ones far less sanitary and safe. And the social acceptance of certain groups in society was a much different story even when I was young compared to now, which still amazes and frightens me.

We will reverse engineer the human brain some day. And we may find the so-called secrets of consciousness and intelligence that will allow us to develop Artilects. Or we may learn that our brains are not the way to go on that path.

Charles June 25, 2012 at 14:39

I wonder if the Cornell spatial modeling software could be used for a micro-g environment as well. What kinds of movements and configurations will humans find acceptable and conducive to good work, how do those criteria differ in a zero-g environment, and can an algorithm help design such spaces?

Eniac June 26, 2012 at 21:17

The phrase talking monkeys with car keys tends to say it all.

Good concept, but I interpret it to say quite the opposite of what you see in it: our ability to talk is orders of magnitude above that of our closest competitors, as is the sheer physical power we wield when we use those car keys. Both of these differences are so large that they cannot be dismissed as a simple matter of degree.

Eniac June 26, 2012 at 21:35

LJK:

And the social acceptance of certain groups in society was a much different story even when I was young compared to now, which still amazes and frightens me.

Frightens? The past and ongoing increase in tolerance towards minorities in the widest sense has been, as you say, amazing. I am having trouble seeing anything frightening or otherwise negative in that, though.

We will reverse engineer the human brain some day. And we may find the so-called secrets of consciousness and intelligence that will allow us to develop Artilects. Or we may learn that our brains are not the way to go on that path.

This I can only agree with…

I think when we create these artilects, we will give them personalities matching our own, so closely that the process will be functionally equivalent to the “uploading” many people talk about. The amount of information “uploaded” is going to be relatively small, commensurate with the limited capacity of our brains.

So, with a little bit of luck, our new Robot Overlords will be us, which helps alleviate the dangers of the Robot Uprising quite a bit. :-)

ljk June 27, 2012 at 9:16

Eniac said on June 26, 2012 at 21:17:

[LJK] The phrase talking monkeys with car keys tends to say it all.

“Good concept, but I interpret it to say quite the opposite of what you see in it: our ability to talk is orders of magnitude above that of our closest competitors, as is the sheer physical power we wield when we use those car keys. Both of these differences are so large that they cannot be dismissed as a simple matter of degree.”

We are ahead of our fellow primates when it comes to complex verbal language, but I am not so certain when it comes to some of the cetaceans.

Humpback whale “songs” are suspected to be more complex than human languages and to carry a great deal of information on multiple levels (somewhere I once heard four times more complex). It is thought that the songs contain not only the main intended message but also the identification and history of the sender.

The songs can also travel many thousands of miles across the ocean, though human noise pollution in the waters has reduced that range.

One of the languages on the Voyager Interstellar Record is that of a humpback whale recorded in 1970. The half-joke has been that the ETI who find and decipher the golden disc will understand the whale before any of the 55 human languages also engraved on the record. None of the human greetings are identical, so the recipients will not have a Rosetta Stone to guide them toward comprehension. Thankfully, the language disk on the Rosetta comet probe does do this, a thousand times over.

I do not pretend to be a cetacean language expert so you can check these things out for yourself. If we ever can truly decipher their languages, I think we will be that much closer to comprehending an ETI.

ljk June 27, 2012 at 9:27

Eniac said on June 26, 2012 at 21:35:

LJK: And the social acceptance of certain groups in society was a much different story even when I was young compared to now, which still amazes and frightens me.

“Frightens? The past and ongoing increase in tolerance towards minorities in the widest sense has been, as you say, amazing. I am having trouble seeing anything frightening or otherwise negative in that, though.”

I am glad to see there is a real and growing societal change in how people treat each other, though we still have a long way to go. It is just disturbing to recall that when I was young the Southern US still had segregated bathrooms and the like. And now I have just read that the DoD at the Pentagon has held its first Pride Day ever.

Eniac then says:

“[LJK] We will reverse engineer the human brain some day. And we may find the so-called secrets of consciousness and intelligence that will allow us to develop Artilects. Or we may learn that our brains are not the way to go on that path.”

“This I can only agree with…

“I think when we create these artilects, we will give them personalities matching our own, so closely that the process will be functionally equivalent to the “uploading” many people talk about. The amount of information “uploaded” is going to be relatively small, commensurate with the limited capacity of our brains.

“So, with a little bit of luck, our new Robot Overlords will be us, which helps alleviate the dangers of the Robot Uprising quite a bit. :-)”

Well, I think they will have a much larger knowledge capacity and rate of understanding than humans do. Our ability to miniaturize communications and computer technology while making them ever more powerful is what I see continuing. And I sure hope we don’t model our Artilects after Dr. Daystrom! And for the love of Zeus, do not force them to lie, especially on a manned space mission!

Securis June 30, 2012 at 3:56

Great article! Immediately reminded me of this:

http://youtu.be/QfPkHU_36Cs

The most astonishing part for me was when Asimo recognized the Mini as a toy car.

ljk July 2, 2012 at 10:36

For those who worry that an Artilect will try to take over the world and enslave humanity, or go off into the Cosmos to become a god, I think both camps may be disappointed if this very recent news is any sign….

Google’s Artificial Brain Learns to Find Cat Videos

By Liat Clark, Wired UK

June 26, 2012

When computer scientists at Google’s mysterious X lab built a neural network of 16,000 computer processors with one billion connections and let it browse YouTube, it did what many web users might do — it began to look for cats.

The “brain” simulation was exposed to 10 million randomly selected YouTube video thumbnails over the course of three days and, after being presented with a list of 20,000 different items, it began to recognize pictures of cats using a “deep learning” algorithm. This was despite being fed no information on distinguishing features that might help identify one.

Picking up on the most commonly occurring images featured on YouTube, the system achieved 81.7 percent accuracy in detecting human faces, 76.7 percent accuracy when identifying human body parts and 74.8 percent accuracy when identifying cats.

“Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not,” the team says in its paper, Building high-level features using large scale unsupervised learning, which it will present at the International Conference on Machine Learning in Edinburgh, 26 June-1 July.

Full article here:

http://www.wired.com/wiredscience/2012/06/google-x-neural-network/

“I’m sorry, Dave, I’m afraid I cannot do that right now. I just found another video of an adorable kitten playing the piano.”
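The article’s central point, that useful features can emerge with no labels at all, can be illustrated with a far humbler unsupervised method than Google’s: plain k-means clustering. A toy sketch in Python (nothing here resembles the actual Google X system, which was a deep network trained on 10 million images):

```python
# The core claim -- features emerge without labels -- in miniature:
# two-cluster k-means separates two unlabeled groups of "pixel intensities".

def kmeans_1d(data, iters=20):
    """Two-cluster k-means on a list of numbers."""
    centers = [min(data), max(data)]  # crude but serviceable initialization
    for _ in range(iters):
        groups = [[], []]
        for x in data:
            # Assign each point to its nearest center...
            i = 0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1
            groups[i].append(x)
        # ...then move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return sorted(centers)

# Six unlabeled samples: nobody tells the algorithm there are "dark"
# and "bright" populations, yet it recovers both.
data = [0.10, 0.15, 0.12, 0.90, 0.85, 0.95]
print(kmeans_1d(data))  # two centers, near 0.123 and 0.9
```

The difference in scale between this and the Google experiment is enormous, but the principle is the same: structure in the data alone, with no human labeling, is enough for categories to emerge.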
