As the AI surge continues, it’s natural to speculate on the broader effects of machine intelligence on deep space missions. Will interstellar flight ever involve human crews? The question is reasonable given the difficulties of propulsion and, just as challenging, the closed-loop life support that missions lasting decades or longer entail. The idea of starfaring as the province of silicon astronauts already made a lot of sense. Thinkers like Martin Rees, after all, hold that non-biological intelligence is the most likely kind we will find.
But is this really an either/or proposition? Perhaps not. We can reach the Kuiper Belt right now, though we lack the ability to send human crews there and will for some time. But I see no contradiction in the belief that steadily advancing spacefaring expertise will eventually see us incorporating highly autonomous tools whose discoveries enable and nurture human-crewed missions. On this view, robots and artificial intelligence are invariably first into any new terrain, but perhaps with their help humans do one day get to Proxima Centauri.

An interesting article in the online journal NOĒMA prompts these reflections. Robin Wordsworth is a professor of Environmental Science and Engineering as well as Earth and Planetary Sciences at Harvard. His musings invariably bring to mind a wonderful conversation I had with NASA’s Adrian Hooke about twenty years ago at the Jet Propulsion Laboratory. We had been talking about the ISS and its insatiable appetite for funding, with Hooke pointing out that for a fraction of what we were spending on the space station, we could be putting orbiters around each planet and some of their moons.
Image credit: Manchu.
It’s hard to argue with the numbers, as Wordsworth points out that the ISS has so far cost many times more than Hubble or the James Webb Space Telescope. It is, in fact, the most expensive object ever constructed by human beings, amounting thus far to something in the range of $150 billion (the final cost of ITER, by contrast, is projected at a modest $24 billion). Hooke, an aerospace engineer, was co-founder of the Consultative Committee for Space Data Systems (CCSDS) and was deeply involved in the Apollo project. He wasn’t worried about sending humans into deep space but simply about maximizing what we were getting out of the dollars we did spend. Wordsworth differs.
In fact, sketching the linkages between technologies and the rest of the biosphere is what his essay is about. He sees a human future in space as essential. His perspective moves backward and forward in time and probes human growth as elemental to space exploration. He puts it this way:
Extending life beyond Earth will transform it, just as surely as it did in the distant past when plants first emerged on land. Along the way, we will need to overcome many technical challenges and balance growth and development with fair use of resources and environmental stewardship. But done properly, this process will reframe the search for life elsewhere and give us a deeper understanding of how to protect our own planet.
That’s a perspective I’ve rarely encountered at this level of intensity. A transformation achieved because we go off planet that reflects something as fundamental as the emergence of plants on land? We’re entering the domain of 19th Century philosophy here. There is precedent in, for example, the Cosmism of Nikolai Fyodorov, which saw interstellar flight as a simple necessity that would allow human immortality. Konstantin Tsiolkovsky embraced these ideas but welded them into a theosophy that saw human control over nature as an almost divine right. As Wordsworth notes, here the emphasis was entirely on humans and not any broader biosphere (and some of Tsiolkovsky’s writings on what humans should do to nature are unsettling).
But getting large numbers of humans off planet is proving a lot harder than the optimists and dreamers imagined. The contrast between Gerard O’Neill’s orbiting arcologies and the ISS is only one way to make the point. As we’ve discussed here at various times, human experiments with closed loop biological systems have been plagued with problems. Wordsworth points to the concept of the ‘ecological footprint,’ which makes estimates of how much land is required to sustain a given number of human beings. The numbers are daunting:
Per-person ecological footprints vary widely according to income level and culture, but typical values in industrialized countries range from 3 to 10 hectares, or about 4 to 14 soccer fields. This dwarfs the area available per astronaut on the International Space Station, which has roughly the same internal volume as a Boeing 747. Incidentally, the total global human ecological footprint, according to the nonprofit Global Footprint Network, was estimated in 2014 to be about 1.7 times the Earth’s entire surface area — a succinct reminder that our current relationship with the rest of the biosphere is not sustainable.
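The soccer-field conversion in the quoted passage is easy to sanity-check. A minimal sketch, assuming FIFA’s recommended pitch of 105 m × 68 m (a figure not in the essay, used here only for illustration):

```python
# Back-of-envelope check of the footprint figures quoted above.
# Assumption (not from the essay): a soccer field of 105 m x 68 m,
# i.e. about 7,140 m^2; one hectare is 10,000 m^2.

HECTARE_M2 = 10_000
SOCCER_FIELD_M2 = 105 * 68  # 7,140 m^2

def hectares_to_fields(hectares: float) -> float:
    """Convert an ecological footprint in hectares to soccer fields."""
    return hectares * HECTARE_M2 / SOCCER_FIELD_M2

print(f"3 ha  ~ {hectares_to_fields(3):.1f} fields")   # ~4.2
print(f"10 ha ~ {hectares_to_fields(10):.1f} fields")  # ~14.0
```

The result matches the quoted “about 4 to 14 soccer fields” for a 3 to 10 hectare footprint.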
As I interpret this essay, I’m hearing optimism that these challenges can be surmounted. Indeed, the degree to which our Solar System offers natural resources is astonishing, both in terms of bulk materials as well as energy. The trick is to maintain the human population exploiting these resources, and here the machines are far ahead of us. We can think of this not simply as turning space over to machinery but rather learning through machinery what we need to do to make a human presence there possible in longer timeframes.
As for biological folk like ourselves, moving human-sustaining environments into space for long-term occupation seems a distinct possibility, at least in the Solar System and perhaps farther. Wordsworth comments:
…the eventual extension of the entire biosphere beyond Earth, rather than either just robots or humans surrounded by mechanical life-support systems, seems like the most interesting and inspiring future possibility. Initially, this could take the form of enclosed habitats capable of supporting closed-loop ecosystems, on the moon, Mars or water-rich asteroids, in the mold of Biosphere 2. Habitats would be manufactured industrially or grown organically from locally available materials. Over time, technological advances and adaptation, whether natural or guided, would allow the spread of life to an increasingly wide range of locations in the solar system.
Creating machines capable of interstellar flight, from propulsion to research at the target to data return to Earth, pushes all our limits. While Wordsworth doesn’t address travel between stars, he does point out that the simplest bacterium is capable of growth. Not so the mechanical tools we are so far capable of constructing. A von Neumann probe is a hypothetical constructor that can make copies of itself, but it remains far beyond our capabilities. The distance between that bacterium and current technologies, as embodied for example in our Mars rovers, is vast. But machine evolution surely moves toward regeneration and self-assembly, and ultimately to internally guided self-improvement. Such ‘descendants’ challenge all our preconceptions.
What I see developing from this in interstellar terms is the eventual production of a star-voyaging craft that is completely autonomous, carrying our ‘descendants’ in the form of machine intellects to begin humanity’s expansion beyond our system. Here the cultural snag is the lack of vicarious identification. A good novel lets you see things through human eyes, the various characters serving as proxies for yourself. Our capacity for empathizing with the artilects we send to the stars is severely tested because they would be non-biological. Thus part of the necessary evolution of the starship involves making our payloads as close to human as possible, because an exploring species wants a stake in the game it has chosen to play.
We will need machine crewmembers so advanced that we have learned to accept their kind as a new species, a non-biological offshoot of our own. We’re going to learn whether empathy with such beings is possible. A sea-change in how we perceive robotics is inevitable if we want to push this paradigm out beyond the Solar System. In that sense, interstellar flight will demand an extension of moral philosophy as much as a series of engineering breakthroughs.
The October 27 issue of The New Yorker contains Adam Kirsch’s review of a new book on Immanuel Kant by Marcus Willaschek, considered a leading expert on Kant’s era and philosophy. Kant believed that humans were the only animals capable of free thought and hence free will. Kirsch adds this:
…the advance of A.I. technology may soon put an end to our species’ monopoly on mind. If computers can think, does that mean that they are also free moral agents, worthy of dignity and rights? Or does it mean, on the contrary, that human minds were never as free as Kant believed—that we are just biological machines that flatter ourselves by thinking we are something more? And if fundamental features of the world like time and space are creations of the human mind, as Kant argued, could artificial minds inhabit entirely different realities, built on different principles, that we will never fully understand?
My thought is that if Wordsworth is right that we are seeing a kind of co-evolution at work – human and machine evolution accelerated by expansion into this new environment – then our relationship with the silicon beings we need will demand acceptance of the fact that consciousness may never be fully measured. We have yet to arrive at an accepted understanding of what consciousness is. Most people I talk to see that as a barrier. I’m going to see it as a challenge, because our natures make us explorers. And if we’re going to continue the explorations that seem part of our DNA, we’re now facing a frontier that’s going to demand consensual work with beings we create.
Will we ever know if they are truly conscious? I don’t think it matters. If I’m right, we’re pushing moral philosophy deeply into the realm of the non-biological. The philosophical challenge is immense, and generative.
The article is Wordsworth, “The Future of Space is More Than Human,” in the online journal NOĒMA, published by the Berggruen Institute and available here.



The boundaries between humans and AIs need to be explored, minus the biases almost all of us hold that keep us viewing humans as unique in principle, even if our functions are mimicked by other creatures or machines. Those biases used to separate us from the rest of the animal world, until we learned that we, too, were animals. Perhaps we need to learn that we, too, are machines, as Descartes suggested. The same considerations you discuss in this issue of Centauri Dreams are covered in my essay, “The Future of Humanity,” at https://caseydorman.com/the-future-of-humanity/, and in my sci-fi novel series about an AI race exploring the galaxy.
Artificial minds will indeed live in their own universe, as do you and I. I choose to believe in an objective Universe, but none of us actually lives there. We live in the universe each of us creates between our ears.
This is peripheral to the rest of the article, but I think it needs to be said: There is no “AI surge”. There is analytic AI, trained on specific data sets to do specific jobs, which is advancing at the same rate it has for years. And there is a surge in Large Language Models, which are not intelligent, are not designed to be intelligent, and will never be intelligent. They are a machine for shaking a box until the most likely word falls out, based on its (stolen) training data. Neither of these is the path to AGI, although for what Paul describes, expert systems may be enough in the short term.
There is also a surge in grifters who are convincing people with more money than sense that they can pay for the fancy autocomplete engines instead of employees. The effectiveness of this is shown by the number of articles about “How ChatGPT made stuff up this week”. Note that that money could be going to, I don’t know, space industrialization, to bring it back on topic.
@Christian
I think you are defining “intelligence” in a way that is different from the industry. For example, OpenAI is now saying that it will reach AGI by the end of 2026. AGI is usually defined as human-level intelligence, but for all domains. But if you are defining intelligence differently, e.g. stochastic algorithms and models just mimic human intelligence (like weather prediction models are not the same as the weather), aren’t you effectively saying that Altman is deliberately misleading investors and observers about what OpenAI hopes to achieve?
I’m sympathetic to what you are saying, but we do have the saying “If it walks like a duck, and quacks like a duck, then it is a duck,” which is a popular way of stating Turing’s Test. In “The Imitation Game,” a computer’s I/O counts as intelligent if human judges cannot distinguish between a computer and a human responding to the same questions. [We hold different ideas today, but we couldn’t have tested this back in 1950, when Turing wrote his famous paper.] If we cannot tell whether a customer service chat has a human or a chatbot at the other end of the messaging app, what does “intelligence” actually mean? And if you use a chatbot at your end, so that the company’s chatbot cannot know whether it is dealing with a human customer or not…
If an alien examined a human brain and determined that it was just a dense accumulation of cells connected to one another, responding to patterns of electrical firing, could the alien not infer there was no true intelligence in this organ compared to its own electronic, perhaps rule-based, mind? Or more locally, don’t we generally assume that most animals have no conscious intelligence, despite their having neural brains as we do? Note that animals have intelligence as defined by dealing well with their environment and events, but have no consciousness about their responses, i.e., no free will or ability to “decide” on responses.
” aren’t you effectively saying that Altman is deliberately misleading investors and observers about what OpenAI hopes to achieve?”
Yes.
An LLM is a bunch of statistical equations connected to a vocabulary bank. There is no *reasoning* involved, no connections between separate ideas. All it can do is spit out the most likely next word based on its training data.
Let’s look at the case of the lawyer who was sanctioned for submitting briefs with citations hallucinated by ChatGPT. There is no way to tell the model “only make references to cases that actually exist,” because it doesn’t know the difference between “real” and “imaginary”: all it can do is generate text based on its prompts. The substrate isn’t the issue, the output is; it “looks” intelligent to us because we intuitively associate “communicates in coherent sentences” with our own brand of consciousness.
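The “most likely next word” picture can be made concrete with a toy model. This is a deliberately crude bigram sketch, nothing like a real transformer, but it illustrates the point: every word it can ever emit comes straight from its training text, weighted by frequency.

```python
import random
from collections import defaultdict

def train(text: str):
    """Count, for each word, which words followed it in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start: str, n: int, rng: random.Random) -> str:
    """Repeatedly sample the next word in proportion to training frequency."""
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:  # word never seen mid-sentence: the model is stuck
            break
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the probe left the station and the probe reached the belt"
model = train(corpus)
print(generate(model, "the", 5, random.Random(0)))
```

Note that the model has no concept of “real” versus “imaginary” probes; it simply continues patterns, which is the commenter’s point about hallucinated citations.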
Robin Wordsworth, while not as poetic as the more famous William Wordsworth, writes an almost elegiac essay asking for humanity to be a necessary part of space exploration. Yet, I find the argument unsatisfactory, mainly because of what I see as motivated reasoning.
Pulling out some key passages:
Yes, the last sentence would be nice, but it neither requires space exploration by humans, nor is there any historical precedent. Indeed, history shows the reverse, as more damage has been done by introducing invasive species, deliberately or accidentally. [I have just read that tree frogs have been accidentally introduced to the Galapagos Islands, which have no indigenous amphibian species. The frogs have become extremely populous without predators.]
Next:
Why must machines operate like life, repairing and reproducing like basic cellular machinery? Wordsworth dismisses the way machines can self-reproduce using fabricators, with only the hi-tech components needing to be supplied. Conceptually, we can design the way a machine can mine, process, and fabricate itself and other machines, just supplied with some components. But let us not forget that life needs many of its components supplied by physics and chemistry, as they cannot be “mined” directly from the rocks by life. Life can only reproduce the species that are present in the new ecology. Machines, however, can reproduce not only themselves, but other machine species with stored “blueprints”. Hence, they can create a machine ecosystem starting with a few machines and a blueprint library, and with intelligent algorithms, create new machine species adapted to the new environment.
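The blueprint-library scenario can be sketched as a toy simulation. All species names and component counts below are illustrative assumptions, not from the essay; the sketch shows the one structural point made above, that a seed machine plus blueprints can populate a machine ecosystem, limited only by the externally supplied hi-tech components (analogous to the nutrients life cannot “mine” from rock).

```python
# Toy sketch: a seed fabricator plus a blueprint library "reproduces"
# a machine ecosystem. "struct" stands in for bulk material minable
# in situ; "chips" for hi-tech components that must be supplied.
# All names and numbers here are hypothetical.

BLUEPRINTS = {
    "miner":      {"chips": 1, "struct": 4},
    "smelter":    {"chips": 1, "struct": 6},
    "fabricator": {"chips": 2, "struct": 8},
}

def build(species, stock, population):
    """Consume components from stock to instantiate one machine, if possible."""
    need = BLUEPRINTS[species]
    if all(stock.get(k, 0) >= v for k, v in need.items()):
        for k, v in need.items():
            stock[k] -= v
        population[species] = population.get(species, 0) + 1
        return True
    return False  # out of supplied components: growth halts

stock = {"chips": 6, "struct": 40}
population = {"fabricator": 1}  # the seed machine
for species in ["miner", "smelter", "fabricator", "miner"]:
    build(species, stock, population)
print(population, stock)
```

The library, not the seed machine’s own form, determines what can be built, which is why a few machines plus stored “blueprints” suffice to found many machine species.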
and again:
There is little evidence that space will drive the emergence of new forms in the same way. More likely, evolution will be too slow, and design will be the model. As stated previously, machines can design new forms too, and likely more rapidly than life, even [post]human life.
The plea for the “special nature” of humanity and human minds. Isn’t this a plea for humans to stay dominant with their special minds that cannot be replicated by machine intelligence? Why can’t machines acquire these “human mind” traits?
Why should technology be directed by the ultrawealthy? Why not everybody as technology is democratized? Why cannot AGI invent and innovate too?
What is the logic to support this “cosy idea?” It seems just an aversion to a “cold, non-biological, universe” where humanity is not welcome. And where does human-crewed space exploration need to be added to make the survival of the human species likely?
The essay effectively ignores the biological difficulties of human settlement of space within the solar system, handwaving the technologies that would be needed. Yes, O’Neill habitats would be a compromise, because living on different worlds would involve all sorts of difficulties not mentioned, along with the assumption that we and other species could “evolve, or change” to adapt. Perhaps, but it will need new technology to allow humans to travel to the stars, from long hibernations, to cryosleep, to time-dilation velocities with new propulsion, to FTL with totally handwavium technologies. Machines only need faster propulsion to reach the stars on a shorter time frame, and they are already the most suited to “settle” any celestial body with a wide range of conditions. Have we even settled the deep oceans on Earth, even the shallow seas? Answer: No.
Lastly, from a purely economic argument, machines are already capable of exploring deep space far more effectively and cheaply than humans. The only case for keeping humans deeply in the exploration loop is the eventual failure to design machines that genuinely think like humans, capable of being intellectually curious and able to design and carry out missions. Are we betting on this, perhaps based on some quasi-religious view of the uniqueness of human/wetware brains and minds, so often postulated by philosophers antithetical to the idea that human minds can be usurped by technology? There may be a transition as humans embrace machine mind enhancements, but eventually our biological natures will limit our capabilities. The exploration of space, especially interstellar space and beyond, will be done by machines. Humans will be observers, but not exploration participants. And if machines populate the galaxy, then even as observers we will be able to participate only peripherally.
For fun, read Terry Bisson’s “They’re Made Out of Meat.”
When will we get science fiction created by machine intelligences, written for themselves, rather than insisting it must be written by human authors to stay relevant to humans? These aren’t exclusive options; it need not be machines or humans but not both, with humans always first.
Thanks for your views, but while our explorations can be assisted by machines, an artificial intelligence should NEVER act as our ambassador, no matter how smart or devoted to humanity it may be. If humans are going to the stars, we need to do it ourselves.
@Douglas
On a terrestrial level, does that mean we shouldn’t use game theory to decide our responses to enemies?
Suppose we simply never have the technology for biological humans to go to the stars; does that mean we shouldn’t ever send AI-imbued robots either? [So no Bracewell probes? Were the aliens of Starholme mistaken in sending their AI probe, Starglider, to explore the galaxy, as in Clarke’s “The Fountains of Paradise” (or were the aliens a machine civilization)?]
Please explain why you hold that opinion.
Hi Paul
A very interesting read. AI sure is advancing and currently replacing a range of jobs.
Lots to think about here.
Cheers Edwin
The problem with the O’Neill space colony scenario is simply the cost. Freeman Dyson wrote about this in “Pilgrims, Saints, and Spacemen,” which was republished in L-5 News in 1979. This is the real reason why we do not see space colonies today.
I think “baseline” humans are unlikely to migrate to space. By the time we actually have the infrastructure to go to space in a big way, we will have developed bio-engineering and regeneration to the point that many people will have bodies based on bio-nano (the bio-engineering equivalent of molecular nanotechnology). It will be these people, along with actual AIs (not LLMs), who go to space. The bio-nano will be developed in efforts to eliminate aging as well as to enable complete regeneration of bodies.
Needless to say, those of us who become bio-nano will seek (and obtain) political autonomy from those who do not share our values and objectives.
Is there any adaptation/engineering of humans to post-human form and biology that can outperform a machine in space? Given all the basic terrestrial factors that make humans ill-adapted to living in space or on other planets, a post-human will still need to live in some environment that supports its body, whereas a machine can be “naked” in all these environments.
I cannot recall the author, but there was this idea of a human encased in a hard suit with tenuous “wings” to absorb sunlight to manufacture the needed sustenance to maintain the body and process the wastes, a personal ecosystem. These beings lived in orbit around Saturn. While intriguing, why would such a human be better than a robot at doing anything?
AFAICS, the arguments always dance around the idea that human minds are superior at some function that places us above the robot’s capabilities, as deities, for all their clear human foibles, are always above humanity.
Robots and their intelligence continue to improve in leaps and bounds compared to humans. AI is still nowhere near human capabilities except in some tasks. They will not achieve AGI or SI using hyperscaling, despite the claims of the big (too big to fail) AI companies like OpenAI. However, I see nothing peculiar about wetware that makes AIs unable to achieve these goals… eventually. To believe otherwise is similar to the belief that living things had an “élan vital” that dead or inorganic things did not. This was proven false. We still see this in ideas about minds: that brains use quantum states, or that in silico brains cannot be conscious, or… Wordsworth repeats these shibboleths with the idea that only humans have goals, the exploring spirit, etc., which is only a repeat of the old idea that “computers can only do what they are programmed to do,” i.e., are limited to some tasks and cannot generalize. There is no evidence that these limitations are inherent in artificial brains. Maybe these limitations will prove very difficult or impossible to overcome, but I haven’t seen it yet.
Therefore, if AGI or SI is attainable, then it implies that robots will be inherently better “pre-adapted” to explore and industrialize space. [Post-]Humans may also be able to do the same with better technology, but I cannot see them ever being as able to enter environments extremely hostile to life compared to machines. Humans have biophilia. We see that in the desire to own landscape pictures, potted plants, gardens, parks (mimicking the Savanna landscape), and “getting away” to picturesque locations. In the movie, 2010, Curnow says after what he misses as he awakens from hibernation in Jupiter space aboard the Leonov, “I miss green”. Maybe post-humans will not be so biophilic, and will be completely comfortable in an artificial environment. Maybe they will not need most ecosystem services to remain alive as we do. However, we know machines are not biophilic (although we might make them so), and can explore and live their operating lifetimes without some mental breakdown in space, perhaps traveling for millennia between the stars.
Check this out:
https://www.cryonicsarchive.org/library/24th-century-medicine/
This is the direction we want to go.
Transhumanism. The space-faring post-humans described by Fred Pohl in “Day Million” seem preferable (at least to this human 1.0). The problem is that none of it solves the vast amount of time needed to travel between the stars. Immortality might allow you to get there, but all those millennia stuck in a [large] tin can, even in a holodeck, are not going to be even remotely enjoyable. Either high-c STL velocity with good time dilation, FTL, or cryosleep will be needed for biological beings. Alternatively, fully artificial bodies in which perception can be slowed down to mimic time dilation. But why stop there? As in Morgan’s Altered Carbon, just keep a copy of your mind in a “stack” and install it in a waiting body “sleeve” on arrival. But if you can do that, why not stay in an artificial, non-biological body? Or a robot with its own AI? Going the complex mind uploading/downloading route, staying in VR, etc., is just a fantasy required so that a “human” mind stays dominant. It is like an aquatic animal, a fish, requiring a complex technology to expand fish civilization onto the land. Yes, the land allows air-breathing, waterproof animals to evolve, but the analogy ends when you need some sort of technology to allow interstellar travel, or even slow intra-system travel with trip times limited to a decade or so. Sure, we want to read about characters we can map ourselves onto, but why should that drive actual, realistic interstellar exploration and expansion?
To understand the “man versus machine” nature of potential explorers, we must look into the most horrifying corners of the near future. Societies reach the level of technology they are able to handle, and this is most definitely beyond it: “organoid intelligence”. The concept of growing human fetal brain cells to the point of potential awareness for research purposes may be disturbing enough for most, but we are scarcely even started.
The real issue is that these organoids might be used for some form of computing, for example, and that the process can go far further. We are eager to see 3D-printed organs to allow unlimited transplantation without immunosuppressants. The next step is 3D-printed organisms. A simple version might involve creating a brain organoid surrounded by a shell of supporting organ systems, making it cheaper and more reliable to maintain.
The more complex version is the homunculus — a little beast, of purely arbitrary form designed by the company AI, 3D printed from myriad cell lines of humans and animals, grown onto a matrix of controlling AI-enabled electronics. It might be made of the cells of a dog and speak with the voice of an AI, or it might be made from the cells of a human and mutely carry out its orders with all improper thought suppressed. None will know if it suffers.
The potential of homunculi includes the combination of advanced biochemistry, digital control, and eventually fabless self-replication of many or all components, permitting organisms designed to be native to the harsh ecologies of non-terrestrial planets and outer space. But if we cannot see clear to a society where consciousness respects itself and works to a common end, ‘progress’ could come at inconceivable price.
@Mike
Don’t blame me, but I don’t see any ‘progress’ in all this. Besides, which ‘progress’ are we talking about: human, technological, social? Progress… or regression?
All these wild speculations are once again only the fantasies of Man, who once again thinks he is a great watchmaker, and who risks, once again, creating a Golem.
Where is the philosophy in all this?
“Science sans conscience n’est que ruine de l’âme” (“Science without conscience is but the ruin of the soul,” as Rabelais put it).
Hi Paul,
Building an interstellar philosophy? It inspires me, my favorite trilogy: the human species, its environment, and technology. Remove one element and our world disappears:
Without a suitable environment (not the vacuum of space), no human species and probably nothing else(?). Without humans, no technology, so no artificial modification of the different environments, no A.I., and no space travel except in our dreams.
Let’s take the problem another way: could technology appear elsewhere without humans? No certainty, only speculation (Dyson spheres, etc.). Can technique appear as a spontaneous thing? I doubt it. Is it specific to the human species?
What is troubling in this small analysis is that there seems to be a kind of determinism in this evolution: first an environment that allows the appearance of a biological structure, which evolves up to our species, which then develops the ability to model its environment and even leave it to modify others (Apollo 11; who knows whether the Voyager probes might change a parameter somewhere?).
BTW, there is no philosophy without humans. Will an ETI be able to ‘build an interstellar philosophy’? Can an A.I. ‘philosophize’? Here is an interesting question, isn’t it?
We can multiply the questions; that’s the goal of the game. But ultimately the central question is: what place do we have on this great chessboard? What are we “being used” for? Are we the ones who modify the universe by our actions, or are we a simple adjustment variable that brings a bit of thermodynamics to the whole so that the universe does not die (…not right away)? Who knows?
Building an interstellar philosophy invites us to reflect on our place in the universe: what values do we want? peace, cooperation, war, curiosity, exchanges, predation? how will our technology impact the worlds we may visit one day? what is our “cosmic responsibility”? must we modify, contaminate, disrupt other worlds or give up our curiosity to preserve them? In short: should we invent a cosmic ethics ?
Doing astronomy by reducing it only to technique is a bit boring; you have to bring in a dose of philosophy.
Quantum Intelligence is not AI, and now Chinese scientists have announced the development of a revolutionary analog computing chip that they say can outperform today’s most advanced digital graphics processing units (GPUs) by up to 1,000 times while consuming only a fraction of the energy.
https://ilkha.com/english/science-technology/china-develops-analog-chip-that-outperforms-nvidia-and-amd-by-1000-times-488973
Project Suncatcher is a moonshot exploring a new frontier: equipping solar-powered satellite constellations with TPUs and free-space optical links to one day scale machine learning compute in space.
https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/
As I mentioned before, deep space would be the perfect place for Quantum Intelligence. A completely superconducting world at near-zero temperatures makes me wonder about interstellar comet 3I/ATLAS…
Maybe we should look at a stable point in our Moon’s shadow for a nice super-cold spot…
Analog neuromorphic chips have been developed in the US by Intel and IBM for over a decade. However, these chips, using some version of analog memory, may be a new approach. I look forward to hearing more about their development, especially if they can be produced at very low cost, comparable to RAM. This would undermine NVIDIA’s technology “moat” and hence its stock price.
Google’s idea to put data servers for AI in orbit is an even more expensive project than loss-making terrestrial data server farms for AI. I think they are making absurd claims that they would be cost effective by the 2030s. Maybe with those new Chinese analog chips?
Interesting that Musk said, “Quantum computing is best done in the permanently shadowed craters on the Moon” on November 2nd, 2025. We could put these quantum AI monsters in high orbit around Jupiter and have them eat their way through the hydrogen atmosphere by self-duplicating. Then collapse Jupiter into a white dwarf. Oh, that’s already been done? Or maybe that is where 3I/ATLAS, or the third-eye intergalactic Buddha, is going.
Neil Ruzic’s “Where the Winds Sleep” (1970) has a chapter, “Free Cold,” in which he suggests that cryostats located in permanently shadowed lunar craters could reach temperatures of 4 K. As quantum computers need temperatures very close to 0 K, this location might be very attractive for such machines. Theoretically…
Sadly, the arguments for the unique environments of locations in space for manufacturing never seem to pan out. The ISS was thought to be good for growing protein crystals in micro-g for 3D structural analysis; that has now been largely short-circuited by DeepMind’s AlphaFold, a superb computational means of determining tertiary protein structure from DNA sequences. More recently, it was manufacturing optically perfect fiber-optic cable. Solar power satellites just needed lower costs of access to space to be competitive, except that now that we have those reduced costs with reusable launchers, there are still no serious plans for SPSs. Economics? New methods of terrestrial manufacturing? Or just obsolete resource requirements, e.g., lunar platinum for fuel cells in a “hydrogen economy”?
OTOH, if we must have ever-increasing scales of AI servers, maybe it would be better to place them in space with access to both material to manufacture the “computronium” and the solar energy to power them. Those lunar craters might be a good place to locate them if the latency is acceptable.
Quantum computers should have no latency.
How entanglement works in quantum computers.
Interconnected qubits: Entanglement creates a fundamental link between qubits, which are the quantum equivalent of classical bits.
Instantaneous correlation: If you measure the state of one entangled qubit, you instantly know the state of the other, no matter how far apart they are. For example, if two qubits are entangled and one is measured in the state \(|0\rangle \), the other will immediately be in the state \(|0\rangle \) as well.
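The correlation-without-signaling point can be sketched with purely classical bookkeeping. This toy simulation (no quantum library involved) just tracks the shared random outcome of measuring a Bell pair, which is exactly why neither party can use it to send a message: each side sees pure noise, and only comparing records later reveals the correlation.

```python
import random

def measure_bell_pair():
    """Simulate measuring both qubits of the Bell state (|00> + |11>)/sqrt(2).

    Each outcome is individually random (50/50), but the two outcomes
    are always identical. Neither observer controls the shared result,
    so no information rides on the correlation.
    """
    outcome = random.choice([0, 1])  # the shared, random collapse result
    return outcome, outcome          # both qubits yield the same value

trials = [measure_bell_pair() for _ in range(10_000)]
assert all(a == b for a, b in trials)        # perfect correlation, every time
frac_ones = sum(a for a, _ in trials) / len(trials)
print("qubit A looks like a fair coin:", frac_ones)
```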
Well, put that way, quantum mechanics does not seem to make sense…
@Michael C Fidler
While entanglement correlations appear to act at FTL speeds, information cannot be transmitted faster than light. Therefore, a quantum computer on the Moon is subject to the same communication latency as any regular EM transmission.
David Kipping’s Cool Worlds video “Does Quantum Entanglement Allow for Faster-Than-Light Communication?” is a useful explanation of why.
A successful re-do of Biosphere 2 would be a good milestone. If we cannot do it on Earth, what hope do we have of doing so in space? OTOH, once we have done it on Earth, that will give us much confidence (and know-how) for doing it in space.
I’m saddened that in the last few decades progress has been slow. Getting a biosphere right (reaching 99% recycling) is an accomplishment we can achieve right here on the ground.
@Kamal Ali
If only… Life is good at recycling because the elements used and their configurations are so limited. Our use of elements and their configurations is far greater, and the energy to break them down could be so high that recycling is not going to be that successful. Most of the recycling improvements are finding ways to get microbes to consume a new type of plastic. We mix so many materials that it is very hard to separate them into streams for specific processing. It would be so much easier if our technology were wrapped around only organic materials that are sourced from living organisms.
Even if we could recycle at 99%, you can work out the numbers: how fast a starship could travel as some fraction of c, how large a materials supply the ship would need, and what fraction it would consume each period before recycling. The way out is going faster to reduce the mission time, and/or reducing the consumption rate with time dilation or hibernation/cryosleep, or just shutting down the machinery of robotic consuming agents.
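The arithmetic here is simple: with recycling efficiency r, only a fraction (1 - r) of each period's consumables is lost, so a fixed stock stretches by a factor of 1/(1 - r). A minimal sketch (the tonnage figures are invented for illustration):

```python
def mission_duration_years(initial_supply_kg, consumption_kg_per_year,
                           recycling_efficiency):
    """Years a closed-loop stock lasts when a fraction `recycling_efficiency`
    of each year's consumables is recovered for reuse.

    Net loss per year = consumption * (1 - efficiency), so the stock
    is effectively multiplied by 1 / (1 - efficiency).
    """
    net_loss_per_year = consumption_kg_per_year * (1.0 - recycling_efficiency)
    return initial_supply_kg / net_loss_per_year

# Illustrative numbers: 10 t of consumables, 1 t/yr demand.
for eff in (0.0, 0.90, 0.99):
    years = mission_duration_years(10_000, 1_000, eff)
    print(f"{eff:.0%} recycling -> supplies last {years:,.0f} years")
```

At 99% recycling the 10-year stock lasts a millennium; the point is that even very high closed-loop efficiency only buys a multiplicative factor, which is why mission speed and crew dormancy still matter.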
I have always been skeptical of libertarian New Spacers and such.
It is too easy to red-bait the patience needed for great things:
https://www.yahoo.com/news/articles/hegseth-shreds-soviet-style-bureaucracy-212958609.html
A long view is needed
https://www.spacedaily.com/m/reports/China_increases_lead_in_global_remote_sensing_research_as_US_share_slips_999.html
At least Elon is cosmic minded.
The recent talk about data centers in space may be what kick-starts powersat and sunshade construction and opens up the floodgates.
At this point we might add a qualifying word, such as “building an interstellar ->bound<- philosophy”. We’ve done a nice job of remaining in space with space stations, though it is difficult to predict how that will go henceforth, for a number of reasons. The aforementioned ISS is near end of life, many replacements are subscale and privately funded… and never mind the chaos in “public policy”.
But having been actively engaged with the space station and predecessor human spaceflight activities, I think we’ve got good odds of continuance and progress in this century. Not necessarily interstellar, but a beachhead on a moon or a planet or two could result in employing space resources instead of emptying our pockets to access them. Long term, meaning decades, there should continue to be high paybacks. If you are skeptical, remember some of the skepticism about catching up with Sputnik 1.
But this might sound like marching in place about when are we going to go interstellar and who is going to be on board.
OK. Well, so far the good news is that there is a confounding number of planets around stars, numbers that early 20th-century stellar formation modelers (and even those late into the century) would have dismissed as fantasy. Planets were supposed to be the result of the near passage of two stars. And our means of detecting them are just sampling methods, for reasons well understood here. But the bad news is that exoplanets can be placed and glommed together (at least so far) in just about any way except the ones we would feel comfortable with.
Of course, robots or AI won’t care, but why tax people here to send silicon chips there when building observatories here or in space would be much less expensive? Sending AI aboard starships to meet putative ETs is an expensive proposition too. And since they would confer light-years away, they are not going to relay their findings about everything right away, which should not raise the confidence of investors. Moreover, as rational as it might at first sound, the way it is discussed is so fatalistic that were it enacted… well, just think about it: it would be rather like a civilization building a pyramid and putting a dead pharaoh in it, resigned and hoping for the best for the cybernetic guy inside.
No, I am not against interstellar flight or discoveries. I follow news of it, and this site, and mull over the developments, looking forward to hearing of a star and exoplanet worth setting our sights on.
But assuming something like that happens, and the LGMs do not visit us (publicly?) first, we might just get the detection of an exotic, oxygen-bearing world within a tempting radius.
Well, when that happens, it becomes more difficult to be philosophical. And I hope it does happen. But were it to happen, I just don’t know what to expect, and there would go the interstellar philosophy, other than being prepared for anything as concentration of effort is focused on that planet and star system.
@WDK
You raise some excellent points.
Reducing costs with ISRU. Will this actually happen? Simple things like water extraction for rocket fuel and oxygen for oxidizer and LSS, I can see. But how much of the resource use is this simple, rather than complex, manufactured goods? Maybe we need Star Trek “replicators,” albeit realistic ones, to handle the many manufactured components needed for maintaining a base. I suggest we take a leaf from biology and try to make everything from easily created standard parts, a sort of Assembly Theory, that would simplify replacing worn-out and defective components, leaving just the very complex, but low mass components to be transported from Earth. For those, we will need supply chains to alleviate the long transport times across the solar system.
Sputnik to Apollo 11 was just 12 years, and we knew how to compete on the same basis, even if that meant throwing away nascent spaceplane technology (X-15) in favour of modified ICBMs (Atlas, Titan). ISRU and solar system supply chains are another matter, more like the slow empire building of the 18th and 19th centuries.
Robots, AI, and interstellar exploration. You make a very good point about the sheer latency of getting any results. However, without some form of FTL of either mass or information, we have no choice. Remote sensing will not provide the needed local “hands-on” work to investigate a planet, whether an Earth 2.0 or a hot Jupiter.
My thought has mostly been about machine civilization going off to do their own thing, so that communication back to earth of important discoveries might be optional. But you got me thinking. If we need to justify the initial cost, maybe we would do so if the AIs could ensure that we have a useful buffer against predatory ETI. But then we need to be sure that AIs are seriously aligned with humanity and won’t sell us out. It is the all too human game of “Risk” on an interstellar level, so no different from “political Star Trek”. If they are fully aligned with humanity, it would be useful for the contact with a predator 500 ly out to be communicated to Earth 500 years later with the message that the predators will travel around 0.2c and arrive at Earth in 2500 years, so here are their weapons, find countermeasures.
Cixin Liu’s “Remembrance of Earth’s Past” trilogy is a good read for a particular scenario (Dark Forest), and a long-term strategy to deal with the Trisolaran invaders over centuries. Western Sci-Fi rarely has stories with such extended time frame plots. (At least from my reading, exceptions include Clarke, Baxter.)
Hello, A. T.,
There are some real bottlenecks on the moon. The obvious ones are volatiles like water – or else extracting other life supporting compounds from lunar surface regolith. But treating the moon as a base for going somewhere else makes little sense to me, what with all the stumbling around to get a return.
Following up on your materials question, I do recall an intriguing development reported on the CBS news show 60 Minutes.
https://www.cbsnews.com/news/company-3d-prints-houses-on-earth-partners-with-nasa-to-3d-print-on-moon-60-minutes-transcript/
===
Prototypes have been used here on Earth for veteran housing, I believe.
Just in the midst of all this discussion, I came across an article (probably one of a succession of related ones) about AI structures in the New Yorker.
NYer 11 November 2025 – James Somers, Annals of Artificial Intelligence, “Open Mind – Is AI Thinking?”
An unexpected source for an explanation, but it hit me at about the right level for some takeaways. It was about LLMs, or Large Language Models, which could be AI’s analog (excuse me, digital?) of the brain’s neocortex. Humans, chimps, dogs, elephants, dolphins, and other mammals that are sharp in school seem to have a lot of that too. But this digital construct, which in mathematical terms is composed of very large vectors or matrices, has entries that are not necessarily rectangular coordinates, but just about anything with a connection to something else.
If you have a “vector” associated at the top with “dog”, you could have a host of specification entries in that vector related directly to dogs, but also connections to other vectors not directly related. E.g., certain dogs such as setters: “Ireland”. “Retriever”: “Labrador”, “Long Winter”, “Ice hockey”… ad nauseam.
This filing system has some resemblance to what goes on in a biological brain, though implemented in chips with quicker response times, lots of storage space, and access to a power grid.
So what’s the point? That this is what a lot of present-day AI is, or so it could be argued. To make a long story short, AI right now has a lot of built-in twitching, but it emulates living and consciousness by filling the vectors with trivia and connections to others.
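That “vectors of associations” picture can be made concrete with a toy cosine-similarity sketch. The four-dimensional vectors below are entirely invented (real LLM embeddings have thousands of learned dimensions), but they show how “dog” lands near “retriever” while only weakly linked to “Ireland” or “hockey”:

```python
import math

# Invented toy "embeddings": each entry loosely encodes a trait
# (animal-ness, dog-ness, place-ness, sport-ness). Purely illustrative.
vectors = {
    "dog":       [0.9, 0.8, 0.1, 0.0],
    "retriever": [0.9, 0.9, 0.1, 0.1],
    "ireland":   [0.1, 0.2, 0.9, 0.1],
    "hockey":    [0.0, 0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, ~0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

for word in ("retriever", "ireland", "hockey"):
    print(f"dog vs {word}: {cosine(vectors['dog'], vectors[word]):.2f}")
```

Nearness in this vector space stands in for association strength, which is roughly the filing system the article describes.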
There is no proof of consciousness.
If the AI starship lands at Proxima B or somewhere else, the aliens or AI there can expect an elaborate automated survey form.
Could an AI shipped to another star system be turned to work for another power? Well, I suppose it would be programmed to work with those (our descendants) who had dispatched it at such great expense. But since it is a machine and a complex set of algorithms, we have no reason to consider it conscious, even if there is some “deliberation” over which side it is on. Which brings us back to the dilemma of the appearance of consciousness somewhere in the chain of existence between a virus and ourselves.
A super HAL, set up with a starship on a century-long journey, can do some work in another star system, but it is questionable whether it makes any difference to HAL whether he/she/it does that or maintains a city water supply. And if super HAL has any descendants out there and we disappear, well, it would be like his counterparts here on Earth perhaps continuing to pave everything over after we are all gone.
Which is a way of illustrating what a robotic interstellar voyage and settlement would lack. If it is to be done at all, I would vote for humans, or something more akin to us, doing it. But how to do that I will just have to leave to a future comment.
@wdk
Re: Uses of the Moon
Yes, the Moon is very short of volatiles. The excitement of water at the lunar south pole is overblown. What we need is a much more ready source of water in a known location. One answer is the asteroid Ceres. Water and other minerals can be sent via any number of propulsion methods to the Moon.
While regolith can be a bulk building material, mainly as a radiation shield, we need a supply of CHONSP for sustaining life. No technological facilities can be maintained without complex manufactured parts. Apart from very difficult to manufacture components, maybe most can be made using universal fabs and 3D printing. I am a fan of origami and kirigami to make complex-shaped artifacts from flat sheets. Now, if only robots could learn to do the complex folding required…
Why the Moon at all? Because it offers a close location to test technologies and life support for other locations in the solar system. Travel to and from the Earth to the Moon is relatively easy and will be needed for humans to meet modern requirements for living. Who wants to live like the Shackleton expedition on the Endurance, especially when things go wrong?
But I agree that using the Moon as a jumping-off location to other planets makes little energetic sense, unless one locates at any of the Lagrange points.
Re: AI architectures and the brain
In principle, the brain’s neural wiring with 1k-10k connections and synapses between neurons creates a potential 1k-10k hypergraph. There is evidence that the brain stores patterns as vectors in a graph, where connection strength is important in traversing the graph and returning close concepts. Vector and graph databases mimic this architecture. [Around 25 years ago, I was very interested in Pentti Kanerva’s Sparse Distributed Memory approach.]
The beauty of vector approaches is that they are computationally inherently parallel, like the brain. At one point, it was thought this might be THE way to architect artificial brains. However, that proved a failure. Nevertheless, its simplicity is very attractive, and I continue to mull ideas on making it more flexible and multi-layered.
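For the curious, Kanerva’s Sparse Distributed Memory can be sketched in a few dozen lines: random binary “hard locations”, a Hamming-distance activation radius, and counter summation give content-addressable, noise-tolerant recall. All sizes below are toy parameters chosen for illustration:

```python
import random

random.seed(0)
N, M, RADIUS = 64, 400, 28   # word length, hard locations, activation radius

hard_addresses = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]
counters = [[0] * N for _ in range(M)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def write(address, data):
    # Add the data word (as +/-1 increments) into every hard location
    # whose address lies within RADIUS bits of the write address.
    for loc, ctr in zip(hard_addresses, counters):
        if hamming(loc, address) <= RADIUS:
            for i, bit in enumerate(data):
                ctr[i] += 1 if bit else -1

def read(address):
    # Sum the counters of all nearby locations, threshold each bit at zero.
    sums = [0] * N
    for loc, ctr in zip(hard_addresses, counters):
        if hamming(loc, address) <= RADIUS:
            for i, c in enumerate(ctr):
                sums[i] += c
    return [1 if s > 0 else 0 for s in sums]

pattern = [random.randint(0, 1) for _ in range(N)]
write(pattern, pattern)               # store autoassociatively
noisy = pattern[:]
for i in random.sample(range(N), 5):  # corrupt 5 of 64 bits
    noisy[i] ^= 1
recalled = read(noisy)
print("bits recovered:", N - hamming(recalled, pattern), "of", N)
```

The inherently parallel part is that every hard location computes its distance and contributes independently, which is the brain-like property that made the approach attractive.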
AI, robots, exploration, and ETI
Firstly, it is a strawman argument to suggest that our machine explorers will use some version of LLM technology. Of course, they are not conscious, and who knows how aligned they are to humanity. But by the time we have interstellar ships, they will be a lot more sophisticated. Will they still be philosophical zombies? We don’t know, but I wouldn’t bet against conscious AI. It is also likely that contact with ETI will be made with their machine proxies, so AI-AI interactions. Do either AI need to be conscious to make a productive contact event? Again, IDK, but even zombie robots can do a lot, whether for their own benefit or paving the way for eventual biological human settlement. If nothing else, such robots could “green” the galaxy for creating new evolutionary paths for intelligences to arise. Maybe the “Old Ones” did that to Earth, and other nearby exoplanets (Like the Engineer in “Alien: Covenant”).
In the long term, the human species would be extinct in 1-10 million years, assuming natural selection. My guess is that artificial, directed evolution using technology will reduce this time frame to millennia, unless there is some way to ban such technology permanently. What technology has ever been permanently banned? I fully expect different posthuman species to be well established within a millennium. But I also expect very capable robots to be co-evolving with humanity within the next few centuries at the latest, again, unless there is some permanent Butlerian Jihad ban on thinking machines as in the “Dune” universe. By then, I expect humans will have learned to accept that AIs are entitled to human-level rights and that they can carve out their own futures, within our solar system or beyond. Again, this may be forced, as for the replicants in P. K. Dick’s DADoES/Blade Runner, or the continuing development of Weyland-Yutani synthetics in the Alien franchise universe. The outcome could all be very bloody, as human history has demonstrated, whether with other human species, other “races”, other cultures, uplifted animals (the Planet of the Apes universe), or artificial beings (many Sci-Fi movies, e.g. the adaptation of Aldiss’s “Supertoys Last All Summer Long” as A.I. Artificial Intelligence, and of course boatloads of aliens from hostile to “benevolent”, as in TDtESS).
But I am optimistic that we can mature as a species and become a more diverse “Star Trek” view of interspecies cooperation. [My historian wife says I am very naive. She may be right. ]
@AT,
On the topic of the Moon supplemented by Ceres: about 10 or 12 years ago I submitted an SBIR proposal to examine the deposition of Ceres materials on the Moon. It did not change history, obviously. But the idea was to collect volatiles, put them in bags like high-altitude weather balloons, and drop them on the lunar surface with tethers, to make the story short. It’s been a while, but since it is topical… Getting the material to the Moon was not a question of overnight delivery once a space “pipeline” was established. Some propulsive expenditures are necessary, but at the very least Martian flybys can take up some of the slack.
Since we never got paid to do any calculation, I can’t point to detailed results.
But there are some considerations that still count for something. Ceres is probably the most tempting source for such material, though its orbit is inclined about 10 degrees to the ecliptic. And depending on flybys and propulsive efficiency, one runs the risk of using up “propulsive” volatiles just getting to the Moon, though not everything gathered is necessarily fuel. Delivery is bound to require lengthy tethers to dissipate lunar “impact” losses, so to speak. Perhaps it’s a question of finding a golden mean among a route, the remaining cargo, and a mild crash landing at the end of a long rope. … So I hear you.
But if that does not work, then maybe Ceres will be a more attractive future target than the moon is now.
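For scale, the heliocentric leg of a Ceres-to-Moon pipeline can be bounded with a textbook Hohmann estimate. This ignores escape from Ceres, lunar capture (the tether’s job), the roughly 10-degree inclination change, and any flyby assists, so treat it as a rough sketch, not mission design:

```python
import math

MU_SUN = 1.32712440018e20   # m^3/s^2, solar gravitational parameter
AU = 1.495978707e11         # m
R_CERES = 2.77 * AU         # Ceres' approximate mean orbital radius
R_EARTH = 1.00 * AU         # Earth-Moon system's orbital radius

def hohmann(mu, r1, r2):
    """Total delta-v (m/s) and transfer time (s) for a Hohmann
    transfer between circular coplanar orbits of radii r1 and r2."""
    a = 0.5 * (r1 + r2)                         # transfer ellipse semi-major axis
    dv1 = abs(math.sqrt(mu * (2 / r1 - 1 / a)) - math.sqrt(mu / r1))
    dv2 = abs(math.sqrt(mu / r2) - math.sqrt(mu * (2 / r2 - 1 / a)))
    t = math.pi * math.sqrt(a**3 / mu)          # half the transfer orbit period
    return dv1 + dv2, t

dv, t = hohmann(MU_SUN, R_CERES, R_EARTH)
print(f"heliocentric delta-v ~ {dv / 1000:.1f} km/s, "
      f"transfer ~ {t / 86400 / 365.25:.2f} yr")
```

Roughly 11 km/s heliocentric and about 1.3 years in transit, which is why flybys, low-thrust tugs, and tether capture matter so much to the economics.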
On AI, I am not sure whether there is anything to add or not. But our discussion seems to have taken a turn similar to the Turing Test: whether the responses from the other side of a screen are those of a computer or a human being. But I suspect that Turing did not write the rules to allow a hybrid choice, something like a moderator announcing, “Well, actually, Jim, it’s not a human being on the other side, or a computer. It’s an algorithm with genuine consciousness. How did it get that? … Darned if I know…”
I can forgive Turing. He was a mathematician, and mathematics mostly deals in the rigorous Boolean logic of true and false when delivering proofs. How many hundreds of pages was Wiles’ proof of Fermat’s Last Theorem, which had to be carefully checked by mathematicians for any errors?
Turing was offering a thought experiment, and not a bad one, given that it was accepted for over 50 years.
Yet we still have no way, even today, of defining intelligence except operationally, using techniques for animals that are analogous to “intelligence” tests on humans, but far more limited. As for consciousness, we are still mostly in the philosophical realm, let alone the experimental.
As AI has gotten better, I am not at all surprised that we are pushing the boundaries of “intelligence” and “consciousness”, as humans have a long history of making those errors even with inanimate objects, let alone animals.
Even Star Trek scientists could be fooled in the 23rd century. Think of Daystrom’s M-5 multitronic unit in “The Ultimate Computer”. Similar confusion is evident today, 50-odd years later. I guess a century later, Commander Data is treated as both very intelligent, albeit with inadequate world models about humans, and conscious, so that confusion was technologically overcome by Dr. Soong.