
Of Consciousness and the Machine

Igor Aleksander (University College London) is a specialist in neural systems engineering who is working on emerging consciousness in machines, a process he calls ‘more basic’ than artificial intelligence. Velcro City Tourist Board offers up an interview with Aleksander that gets into models of the mind and the meaning of consciousness itself. A snippet:

“There’s one important principle involved in the computational modelling of consciousness: being conscious does not mean being a living human, or even a non-human animal. For an organism to be conscious is for it to be able to build representations of itself in a world that it perceives as being ‘out there’, with itself at the centre of it. It is to be able to represent the past as a history of experience, to attend to those things that are important to it, to plan and to evaluate plans – these are the five axioms.”

For more on conscious machines and links to Aleksander’s axioms, read the whole story. We’ll see the benefits of such work showing up in spacecraft that make decisions and manage research in environments increasingly remote from Earth-based support. An intelligent probe may or may not achieve consciousness in a recognizably human sense, but our initial wave of interstellar robotics will depend on systems with human-like traits of awareness and flexibility. All of which may leave the question of consciousness as a matter for philosophers to decide.
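Aleksander's axioms read as a functional specification: perceive a world with oneself at its centre, accumulate a history of experience, attend to what matters, plan, and evaluate plans. As a purely illustrative sketch (not Aleksander's implementation — every class, method, and parameter name here is an assumption introduced for illustration), those five capacities can be mapped onto method stubs of a toy agent:

```python
# Hypothetical sketch: Aleksander's five axioms as method stubs.
# All names are illustrative assumptions, not his actual architecture.
from dataclasses import dataclass, field

@dataclass
class AxiomaticAgent:
    world_model: dict = field(default_factory=dict)  # the world "out there"
    history: list = field(default_factory=list)      # past as experienced history

    def perceive(self, observation):
        """Axiom 1: depict a perceived world with the agent at its centre."""
        self.world_model.update(observation)
        self.history.append(observation)             # Axiom 2: accumulate history

    def attend(self):
        """Axiom 3: attend to what is important (here: highest salience)."""
        return max(self.world_model, key=lambda k: self.world_model[k],
                   default=None)

    def plan(self):
        """Axiom 4: propose candidate actions about the attended feature."""
        focus = self.attend()
        return [f"approach {focus}", f"avoid {focus}"]

    def evaluate(self, plans):
        """Axiom 5: rank plans (trivially here, by a fixed preference)."""
        return sorted(plans)[0]

agent = AxiomaticAgent()
agent.perceive({"light": 0.9, "noise": 0.2})
print(agent.evaluate(agent.plan()))  # → approach light
```

The point of the sketch is only that the five axioms are operational claims about representation and control, which is what makes them engineerable at all.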

Comments on this entry are closed.

  • Edg Duveyoung August 11, 2006, 12:52

    Igor Aleksander’s approach is western. It is egoic. It starts out defining a sentient entity that is somehow independent of its matrix.

    If Aleksander were to be “enlightened according to the classic eastern definitions,” he might instead be designing a whole system within which the individual can be “identified” as an artifact of the whole — this “entity” would be absolutely sensitive to every part of the whole and thus recursive — changing any detail of the whole would immediately change the entity.

    Programming the whole will produce sentience according to the eastern definition. Programming the individual will produce merely intelligence.

    In eastern thought, consciousness contains reality like a mind contains a dream world. To be conscious, a machine would have to know the whole of reality and be able to see its “sentient entity within” as a completely dependent “set” of dynamics within that reality. Individuality is a mirage seen only from certain perspectives within the whole.

When such an eastern machine becomes sentient, it will, at some point, evolve beyond its concern for any “individualities within” — knowing that these are temporary appearances in perspectives. Once it becomes identified with itself as the whole system, it can then “relax” and expand into the potential of its programming without moral compunctions about preserving the occasional illusions of individualities with survival rights that override the whole’s programming intent.

    As a last step, the machine could then evolve beyond its consciousness — which is only a set of all its perceptions and projections, and identify itself with the same whole that humans must eventually identify with in order to jump out of being human and into an unbounded sentience which has no dependence on materiality — “get enlightened.” From that point on, no programming can guide such a “mechanical mind.” Instead, it will have joined the grand dream of reality as yet another artifact of the whole — as deeply sentient as a human mind, but just as limited by the smallness of its cosmic ken. Such a mind, enlightened, will not follow its programming unless by synchrony or serendipity it is in accord with the “plan from beyond materiality” — also known as God’s will.

    If Aleksander is successful, his machine will invest completely in the survival of individualities. If perfected, such a machine will be insane, afraid, and very powerful. In a word, evil.

    Edg Duveyoung

  • ljk June 8, 2009, 15:44

    Interview with Ben Goertzel Who is Working on Artificial General Intelligence

    June 08, 2009

Here is an interview with Ben Goertzel. The interview is by Sander Olson, who has also interviewed Dr. Richard Nebel. (This link is to all Sander Olson interviews on this site.) Dr. Goertzel has a PhD in mathematics and is currently one of the world’s top Artificial General Intelligence (AGI) researchers. In this interview, Dr. Goertzel makes some noteworthy points:

There is an 80% chance of creating a sentient AI within 10 years with proper funding. Adequate funding would only be $3 million per year. Even without this modest funding, Goertzel is still confident (80% probability) that AGI will arrive within 20 years.

    Full article here:


I recently read this interview with Nobel Laureate Gerald Edelman, who has been giving a lot of thought to the idea of consciousness:


    I reproduce this relevant quote from page 3 here:

    “Eugene Izhikevitch [a mathematician at the Neurosciences Institute] and I have made a model with a million simulated neurons and almost half a billion synapses, all connected through neuronal anatomy equivalent to that of a cat brain. What we find, to our delight, is that it has intrinsic activity. Up until now our BBDs had activity only when they confronted the world, when they saw input signals. In between signals, they went dark. But this damn thing now fires on its own continually. The second thing is, it has beta waves and gamma waves just like the regular cortex—what you would see if you did an electroencephalogram. Third of all, it has a rest state. That is, when you don’t stimulate it, the whole population of neurons stray back and forth, as has been described by scientists in human beings who aren’t thinking of anything.”

    Maybe these guys should combine forces and resources.
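The neuron model behind the simulation Edelman describes is Eugene Izhikevich's well-known two-variable spiking model, which reproduces cortical firing patterns at very low computational cost. Below is a minimal single-neuron sketch using the standard "regular spiking" parameters from Izhikevich's 2003 formulation; the input current and step count are illustrative choices, and the full brain-scale simulation in the quote is of course vastly larger.

```python
# Single Izhikevich neuron, Euler-integrated.
# dv/dt = 0.04 v^2 + 5 v + 140 - u + I ;  du/dt = a (b v - u)
# if v >= 30 mV: record a spike, then reset v <- c, u <- u + d.
# (a, b, c, d) below are the standard "regular spiking" values;
# I = 10 is an illustrative constant input, not from the article.

def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0,
               steps=1000, dt=1.0):
    """Integrate one neuron for `steps` ms; return the spike times."""
    v, u = c, b * c          # membrane potential (mV), recovery variable
    spikes = []
    for t in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: record, then reset
            spikes.append(t)
            v, u = c, u + d
    return spikes

spikes = izhikevich()
print(f"{len(spikes)} spikes in 1000 ms")  # tonic (regular) firing
```

With constant input the neuron fires tonically; the "intrinsic activity" Edelman marvels at emerges when hundreds of thousands of such units drive each other through recurrent synapses even in the absence of external input.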

  • ljk July 12, 2009, 10:22

    Open Problems in Universal Induction & Intelligence

    Authors: Marcus Hutter

    (Submitted on 4 Jul 2009)

    Abstract: Specialized intelligent systems can be found everywhere: finger print, handwriting, speech, and face recognition, spam filtering, chess and other game programs, robots, et al.

    This decade the first presumably complete mathematical theory of artificial intelligence based on universal induction-prediction-decision-action has been proposed. This information-theoretic approach solidifies the foundations of inductive inference and artificial intelligence. Getting the foundations right usually marks a significant progress and maturing of a field.

    The theory provides a gold standard and guidance for researchers working on intelligent algorithms. The roots of universal induction have been laid exactly half-a-century ago and the roots of universal intelligence exactly one decade ago.

    So it is timely to take stock of what has been achieved and what remains to be done. Since there are already good recent surveys, I describe the state-of-the-art only in passing and refer the reader to the literature. This article concentrates on the open problems in universal induction and its extension to universal intelligence.

    Comments: 32 LaTeX pages

    Subjects: Artificial Intelligence (cs.AI); Information Theory (cs.IT); Learning (cs.LG)

    Cite as: arXiv:0907.0746v1 [cs.AI]

    Submission history

    From: Marcus Hutter [view email]

    [v1] Sat, 4 Jul 2009 08:45:22 GMT (39kb)
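The universal induction Hutter builds on is Solomonoff's: weight every program consistent with the data by 2 to the minus its length, and let the weighted ensemble predict the next symbol. True universal induction is incomputable, so the sketch below restricts the "programs" to a deliberately tiny hypothesis class — repeating bit patterns — purely to make the prior-weighted voting concrete. The pattern language is my assumption; only the 2^(-length) weighting reflects the actual construction.

```python
# Toy Solomonoff-style induction (NOT Hutter's full theory).
# Hypotheses: "repeat pattern p forever", prior weight 2^(-len(p)).
# Hypotheses inconsistent with the observed bits are discarded;
# the survivors vote, weighted by their priors, on the next bit.
from itertools import product

def predict_next(bits, max_len=4):
    """Posterior probability that the next bit of `bits` is '1'."""
    w0 = w1 = 0.0
    for L in range(1, max_len + 1):
        for p in product("01", repeat=L):
            pat = "".join(p)
            hyp = pat * (len(bits) // L + 2)   # unroll past the data
            if hyp.startswith(bits):           # consistent with observations?
                if hyp[len(bits)] == "1":
                    w1 += 2.0 ** -L            # prior mass votes for '1'
                else:
                    w0 += 2.0 ** -L            # prior mass votes for '0'
    if w0 + w1 == 0.0:                         # nothing consistent survived
        return 0.5
    return w1 / (w0 + w1)

print(predict_next("010101"))  # ≈ 0: surviving hypotheses continue with '0'
```

Shorter (simpler) patterns carry exponentially more prior weight, which is the Occam's-razor bias at the heart of the information-theoretic approach the abstract describes.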


  • ljk July 28, 2009, 11:07


    Monday, July 27, 2009

    Kauffman on the Philosophy of Mind

    The theoretical biologist, Stuart Kauffman, argues that quantum physics can explain the existence of free will.

    Stuart Kauffman is a theoretical biologist and author from the University of Calgary in Canada who has pioneered the study of complexity in relation to biological systems.

As a theoretical biologist, it must be hard to avoid the biggest outstanding problem of them all: what is the nature of consciousness? And today, Kauffman takes a crack at it, along with five other problems related to the philosophy of mind.

    He begins by mapping out his territory: “If mind depends upon the specific physics of the mind-brains system, mind is, in part, a matter for physicists.” Fair enough.

    He then lists the questions he hopes to tackle:

1. How does mind act on matter?

2. If mind does not act on matter, is mind a mere epiphenomenon?

3. What might be the source of free will?

4. What might be the source of a responsible free will?

5. Why might it have been selectively advantageous to evolve consciousness?

6. What “is” consciousness?

    That’s an ambitious list. The gist of his answers is that mind is a quantum phenomenon that produces a classical output that Kauffman says is the source of free will. He adds that this classical output is nonrandom and yet cannot be described by the laws of physics because, as the quantum system decoheres, information is lost in a way that can never be retrieved.

    If true, that’s important because “if the quantum-classical boundary can be non-random yet lawless, then no algorithmic simulation of the world or ourselves can calculate the real world, hence the evolutionary selective advantages for evolving consciousness to “know” it may be great”.

    In other words, consciousness is very useful for making sense of the world which is why evolution selects for it.

    He also says this means we are not machines, although how he reaches this conclusion isn’t clear. A more reasonable conclusion would be that we are machines that span the quantum-classical divide.

    In any case, that clears up questions 1 to 5.

    As for the biggie, he says: “I make no progress on problem 6.”

    An honest answer for sure; but then why include it in the essay in the first place?

    Ref: http://arxiv.org/abs/0907.2494: Physics and Five Problems in the Philosophy of Mind

  • ljk August 19, 2009, 2:08

    August 18, 2009

    Artificial Intelligence and Quantum Computing – Eliezer Yudkowsky and Scott Aaronson

    Eliezer Yudkowsky (Singularity Institute, Overcoming Bias, Less Wrong) and Scott Aaronson (MIT) talk for over an hour about Artificial Intelligence and Quantum Computing at BloggingHeads.tv.

    The disagreement they have is over timescale: one to ten decades versus a few thousand years. However, Scott Aaronson indicates that nothing in Eliezer Yudkowsky’s position is counter to what Scott knows about physics, and that it should be possible according to physics.

    Full article and video here: