Igor Aleksander (University College London) is a specialist in neural systems engineering who is working on emergent consciousness in machines, an endeavor he calls ‘more basic’ than artificial intelligence. Velcro City Tourist Board offers up an interview with Aleksander that gets into models of the mind and the meaning of consciousness itself. A snippet:
“There’s one important principle involved in the computational modelling of consciousness: being conscious does not mean being a living human, or even a non-human animal. For an organism to be conscious is for it to be able to build representations of itself in a world that it perceives as being ‘out there’, with itself at the centre of it. It is to be able to represent the past as a history of experience, to attend to those things that are important to it, to plan and to evaluate plans – these are the five axioms.”
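The five axioms in the quote read naturally as a set of capabilities an artificial agent must implement. As a purely illustrative sketch (the class and method names below are hypothetical, not drawn from Aleksander's own models), they might be expressed as an interface plus a toy implementation:

```python
from abc import ABC, abstractmethod

class AxiomaticAgent(ABC):
    """Toy interface mapping the five axioms to method contracts.
    Names are illustrative only, not from Aleksander's work."""

    @abstractmethod
    def depict(self, sensory_input): ...   # axiom 1: a world 'out there'
    @abstractmethod
    def recall(self): ...                  # axiom 2: the past as history
    @abstractmethod
    def attend(self, percepts): ...        # axiom 3: what matters now
    @abstractmethod
    def plan(self): ...                    # axiom 4: candidate actions
    @abstractmethod
    def evaluate(self, plans): ...         # axiom 5: judge the plans

class ToyProbe(AxiomaticAgent):
    """A minimal concrete agent, e.g. a probe weighing observations."""

    def __init__(self):
        self.history = []

    def depict(self, sensory_input):
        # Axiom 1: represent perceived objects, tagged with salience.
        percepts = dict(sensory_input)
        self.history.append(percepts)
        return percepts

    def recall(self):
        # Axiom 2: the accumulated history of experience.
        return list(self.history)

    def attend(self, percepts):
        # Axiom 3: focus on the most salient percept.
        return max(percepts, key=percepts.get)

    def plan(self):
        # Axiom 4: propose actions aimed at the current focus of attention.
        target = self.attend(self.history[-1])
        return [("approach", target), ("observe", target)]

    def evaluate(self, plans):
        # Axiom 5: rank plans by a stand-in utility (alphabetical here,
        # purely so the toy has a deterministic choice).
        return sorted(plans)[0]
```

A usage pass would feed `depict` a list of `(object, salience)` pairs, then let `attend`, `plan`, and `evaluate` pick an action; the point is only that the axioms decompose cleanly into distinct, testable faculties, which is what makes them attractive for autonomous spacecraft software.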
For more on conscious machines and links to Aleksander’s axioms, read the whole story. We’ll see the benefits of such work showing up in spacecraft that make decisions and manage research in environments increasingly remote from Earth-based support. An intelligent probe may or may not achieve consciousness in a recognizably human sense, but our initial wave of interstellar robotics will depend on systems with human-like traits of awareness and flexibility. All of which may leave the question of consciousness as a matter for philosophers to decide.