As the AI surge continues, it’s natural to speculate on the broader effects of machine intelligence on deep space missions. Will interstellar flight ever involve human crews? The question is reasonable given the difficulties of propulsion and, just as challenging, the closed-loop life support that missions lasting decades or longer demand. The idea of starfaring as the province of silicon astronauts has long made sense. Thinkers like Martin Rees, after all, believe non-biological intelligence is the most likely kind we will find.
But is this really an either/or proposition? Perhaps not. We can reach the Kuiper Belt right now, though we lack the ability to send human crews there and will for some time. But I see no contradiction in the belief that steadily advancing expertise in spacefaring will eventually find us incorporating highly autonomous tools whose discoveries will enable and nurture human-crewed missions. On this view, robots and artificial intelligence are invariably first into any new terrain, but perhaps with their help humans one day do get to Proxima Centauri.

An interesting article in the online journal NOĒMA prompts these reflections. Robin Wordsworth is a professor of Environmental Science and Engineering as well as Earth and Planetary Sciences at Harvard. His musings invariably bring to mind a wonderful conversation I had with NASA’s Adrian Hooke about twenty years ago at the Jet Propulsion Laboratory. We had been talking about the ISS and its insatiable appetite for funding, with Hooke pointing out that for a fraction of what we were spending on the space station, we could be putting orbiters around each planet and some of their moons.
Image credit: Manchu.
It’s hard to argue with the numbers, as Wordsworth points out that the ISS has so far cost many times more than Hubble or the James Webb Space Telescope. It is, in fact, the most expensive object ever constructed by human beings, amounting thus far to something in the range of $150 billion (the final cost of ITER, by contrast, is projected at a modest $24 billion). Hooke, an aerospace engineer, was co-founder of the Consultative Committee for Space Data Systems (CCSDS) and was deeply involved in the Apollo project. He wasn’t worried about sending humans into deep space but simply about maximizing what we were getting out of the dollars we did spend. Wordsworth differs.
In fact, sketching the linkages between technologies and the rest of the biosphere is what his essay is about. He sees a human future in space as essential. His perspective moves backward and forward in time and probes human growth as elemental to space exploration. He puts it this way:
Extending life beyond Earth will transform it, just as surely as it did in the distant past when plants first emerged on land. Along the way, we will need to overcome many technical challenges and balance growth and development with fair use of resources and environmental stewardship. But done properly, this process will reframe the search for life elsewhere and give us a deeper understanding of how to protect our own planet.
That’s a perspective I’ve rarely encountered at this level of intensity. A transformation achieved because we go off planet that reflects something as fundamental as the emergence of plants on land? We’re entering the domain of 19th Century philosophy here. There is precedent in, for example, the Cosmism of Nikolai Fyodorov, which saw interstellar flight as a simple necessity that would allow human immortality. Konstantin Tsiolkovsky embraced these ideas but welded them into a theosophy that saw human control over nature as an almost divine right. As Wordsworth notes, here the emphasis was entirely on humans and not any broader biosphere (and some of Tsiolkovsky’s writings on what humans should do to nature are unsettling).
But getting large numbers of humans off planet is proving a lot harder than the optimists and dreamers imagined. The contrast between Gerard O’Neill’s orbiting arcologies and the ISS is only one way to make the point. As we’ve discussed here at various times, human experiments with closed-loop biological systems have been plagued with problems. Wordsworth points to the concept of the ‘ecological footprint,’ which estimates how much land is required to sustain a given number of human beings. The numbers are daunting:
Per-person ecological footprints vary widely according to income level and culture, but typical values in industrialized countries range from 3 to 10 hectares, or about 4 to 14 soccer fields. This dwarfs the area available per astronaut on the International Space Station, which has roughly the same internal volume as a Boeing 747. Incidentally, the total global human ecological footprint, according to the nonprofit Global Footprint Network, was estimated in 2014 to be about 1.7 times the Earth’s entire surface area — a succinct reminder that our current relationship with the rest of the biosphere is not sustainable.
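The arithmetic in the quoted passage is easy to verify. Here is a quick sanity check, assuming a standard full-size soccer pitch of roughly 0.714 hectares (105 m by 68 m), a figure I am supplying, not one from the essay:

```python
# Back-of-the-envelope check of the quoted ecological-footprint comparison.
# Assumed value (not from the essay): a 105 m x 68 m soccer pitch = 7140 m^2.
SOCCER_FIELD_HA = 0.714

footprint_low_ha, footprint_high_ha = 3, 10
fields_low = footprint_low_ha / SOCCER_FIELD_HA
fields_high = footprint_high_ha / SOCCER_FIELD_HA

print(f"{footprint_low_ha} ha is about {fields_low:.1f} soccer fields")
print(f"{footprint_high_ha} ha is about {fields_high:.1f} soccer fields")
```

The result, roughly 4.2 to 14 fields, matches the “4 to 14 soccer fields” range quoted above.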
As I interpret this essay, I’m hearing optimism that these challenges can be surmounted. Indeed, the degree to which our Solar System offers natural resources is astonishing, both in terms of bulk materials as well as energy. The trick is to maintain the human population exploiting these resources, and here the machines are far ahead of us. We can think of this not simply as turning space over to machinery but rather learning through machinery what we need to do to make a human presence there possible in longer timeframes.
As for biological folk like ourselves, moving human-sustaining environments into space for long-term occupation seems a distinct possibility, at least in the Solar System and perhaps farther. Wordsworth comments:
…the eventual extension of the entire biosphere beyond Earth, rather than either just robots or humans surrounded by mechanical life-support systems, seems like the most interesting and inspiring future possibility. Initially, this could take the form of enclosed habitats capable of supporting closed-loop ecosystems, on the moon, Mars or water-rich asteroids, in the mold of Biosphere 2. Habitats would be manufactured industrially or grown organically from locally available materials. Over time, technological advances and adaptation, whether natural or guided, would allow the spread of life to an increasingly wide range of locations in the solar system.
Creating machines capable of interstellar flight, from propulsion to research at the target to data return to Earth, pushes all our limits. While Wordsworth doesn’t address travel between stars, he does point out that the simplest bacterium is capable of growth. Not so the mechanical tools we have so far been capable of constructing. A von Neumann probe is a hypothetical constructor that can make copies of itself, but it is far beyond our capabilities. The distance between that bacterium and current technologies, as embodied for example in our Mars rovers, is vast. But machine evolution surely moves toward regeneration and self-assembly, and ultimately toward internally guided self-improvement. Such ‘descendants’ challenge all our preconceptions.
What I see developing from this in interstellar terms is the eventual production of a star-voyaging craft that is completely autonomous, carrying our ‘descendants’ in the form of machine intellects to begin humanity’s expansion beyond our system. Here the cultural snag is the lack of vicarious identification. A good novel lets you see things through human eyes, the various characters serving as proxies for yourself. Our capacity for empathizing with the artilects we send to the stars is severely tested because they would be non-biological. Thus part of the necessary evolution of the starship involves making our payloads as close to human as possible, because an exploring species wants a stake in the game it has chosen to play.
We will need machine crewmembers so advanced that we have learned to accept their kind as a new species, a non-biological offshoot of our own. We’re going to learn whether empathy with such beings is possible. A sea-change in how we perceive robotics is inevitable if we want to push this paradigm out beyond the Solar System. In that sense, interstellar flight will demand an extension of moral philosophy as much as a series of engineering breakthroughs.
The October 27 issue of The New Yorker contains Adam Kirsch’s review of a new book on Immanuel Kant by Marcus Willaschek, considered a leading expert on Kant’s era and philosophy. Kant believed that humans were the only animals capable of free thought and hence free will. Kirsch adds this:
…the advance of A.I. technology may soon put an end to our species’ monopoly on mind. If computers can think, does that mean that they are also free moral agents, worthy of dignity and rights? Or does it mean, on the contrary, that human minds were never as free as Kant believed—that we are just biological machines that flatter ourselves by thinking we are something more? And if fundamental features of the world like time and space are creations of the human mind, as Kant argued, could artificial minds inhabit entirely different realities, built on different principles, that we will never fully understand?
My thought is that if Wordsworth is right that we are seeing a kind of co-evolution at work – human and machine evolution accelerated by expansion into this new environment – then our relationship with the silicon beings we need will demand acceptance of the fact that consciousness may never be fully measured. We have yet to arrive at an accepted understanding of what consciousness is. Most people I talk to see that as a barrier. I’m going to see it as a challenge, because our natures make us explorers. And if we’re going to continue the explorations that seem part of our DNA, we’re now facing a frontier that’s going to demand consensual work with beings we create.
Will we ever know if they are truly conscious? I don’t think it matters. If I’m right, we’re pushing moral philosophy deeply into the realm of the non-biological. The philosophical challenge is immense, and generative.
The article is Wordsworth, “The Future of Space is More Than Human,” in the online journal NOĒMA, published by the Berggruen Institute and available here.



The boundaries between humans and AIs need to be explored, minus the biases almost all of us hold that keep us viewing humans as unique in principle, even if our functions are mimicked by other creatures or machines. Those biases used to separate us from the rest of the animal world, until we learned that we, too, were animals. Perhaps we need to learn that we, too, are machines, as Descartes suggested. The same considerations you talk about in this issue of Centauri Dreams are covered in my essay, “The Future of Humanity,” at https://caseydorman.com/the-future-of-humanity/, and in my sci-fi novel series about an AI race exploring the galaxy.
Artificial minds will indeed live in their own universe, as do you and I. I choose to believe in an objective Universe, but none of us actually lives there. We live in the universe each of us creates between our ears.
This is peripheral to the rest of the article, but I think it needs to be said: there is no “AI surge”. There is analytic AI, trained on specific data sets to do specific jobs, which is advancing at the same rate it has for years. And there is a surge in Large Language Models, which are not intelligent, are not designed to be intelligent, and will never be intelligent. They are machines for shaking a box until the most likely word falls out, based on their (stolen) training data. Neither of these is the path to AGI, although for what Paul describes, expert systems may be enough in the short term.
There is also a surge in grifters who are convincing people with more money than sense that they can pay for the fancy autocomplete engines instead of employees. The effectiveness of this is shown by the number of articles about “How ChatGPT made stuff up this week”. Note that that money could be going to, I don’t know, space industrialization, to bring it back on topic.
@Christian
I think you are defining “intelligence” in a way that is different from the industry. For example, OpenAI is now saying that it will reach AGI by the end of 2026. AGI is usually defined as human-level intelligence, but for all domains. But if you are defining intelligence differently, e.g. stochastic algorithms and models just mimic human intelligence (like weather prediction models are not the same as the weather), aren’t you effectively saying that Altman is deliberately misleading investors and observers about what OpenAI hopes to achieve?
I’m sympathetic to what you are saying, but we do have the saying “If it walks like a duck, and quacks like a duck, then it is a duck,” which is a popular way of stating Turing’s Test. In “The Imitation Game,” a computer’s responses count as intelligent if human judges cannot distinguish between the computer and a human answering the same questions. [We hold different ideas today, but we couldn’t have tested this back in 1950, when Turing wrote his famous paper.] If we cannot tell whether a customer service chat has a human or a chatbot at the other end of the messaging app, what does “intelligence” actually mean? And if you have used a chatbot at your end, so that the company’s chatbot is incapable of knowing whether it is dealing with a human customer or not…
If an alien were examining a human brain and determined that it was just an accumulation of cells densely connected to one another, responding to patterns of electrical firing, could the alien not infer there was no true intelligence in this organ compared to its own electronic, perhaps rule-based, mind? Or more locally, don’t we generally assume that most animals have no conscious intelligence, despite their having neural brains as we do? Note that animals have intelligence as defined by dealing well with their environment and events, but have no consciousness about their responses, i.e., no free will or ability to “decide” on responses.
” aren’t you effectively saying that Altman is deliberately misleading investors and observers about what OpenAI hopes to achieve?”
Yes.
An LLM is a bunch of statistical equations connected to a vocabulary bank. There is no *reasoning* involved, no connections between separate ideas. All it can do is spit out what the most likely next word is based on its training data.
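The “most likely next word” description can be made concrete with a deliberately tiny caricature: a bigram frequency table over a made-up corpus. This is far simpler than a real LLM, which uses a neural network over subword tokens rather than a lookup table, but it illustrates the point that output is a function of co-occurrence statistics in training data:

```python
from collections import Counter, defaultdict

# Toy "next word" predictor: count which word follows which in a
# made-up corpus, then always emit the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    successors[prev_word][next_word] += 1

def most_likely_next(word):
    # Return the single most frequent successor seen in the training data.
    return successors[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat": seen twice, vs. once for mat/fish
```

The model knows nothing about cats or mats; it simply reproduces the statistics it was fed, which is the commenter’s point.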
Let’s look at the case of the lawyer who got disbarred for submitting briefs with citations hallucinated by ChatGPT. There is no way to tell the model “only make references to cases that actually exist,” because it doesn’t know the difference between “real” and “imaginary”; all it can do is generate text based on its prompts. The substrate isn’t the issue, the output is: it “looks” intelligent to us because we intuitively associate “communicates in coherent sentences” with our own brand of consciousness.
Robin Wordsworth, while not as poetic as the more famous William Wordsworth, writes an almost elegiac essay asking for humanity to be a necessary part of space exploration. Yet, I find the argument unsatisfactory, mainly because of what I see as motivated reasoning.
Pulling out some key passages:
Yes, the last sentence would be nice, but it neither requires space exploration by humans nor has any historical precedent. Indeed, history shows the reverse, as more damage has been done by introducing invasive species, deliberately or accidentally. [I have just read that tree frogs have accidentally been introduced to the Galapagos Islands, which have no indigenous amphibian species. The frogs have become extremely populous without predators.]
Next:
Why must machines operate like life, repairing and reproducing like basic cellular machinery? Wordsworth dismisses the way machines can self-reproduce using fabricators, with only the hi-tech components needing to be supplied. Conceptually, we can design the way a machine can mine, process, and fabricate itself and other machines, just supplied with some components. But let us not forget that life needs many of its components supplied by physics and chemistry, as they cannot be “mined” directly from the rocks by life. Life can only reproduce the species that are present in the new ecology. Machines, however, can reproduce not only themselves, but other machine species with stored “blueprints”. Hence, they can create a machine ecosystem starting with a few machines and a blueprint library, and with intelligent algorithms, create new machine species adapted to the new environment.
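The blueprint-library idea can be caricatured in a few lines of code. In this minimal toy model, all machine species, build costs, and resource stocks are invented for illustration: a single seed fabricator plus stored designs bootstraps new machine “species” for as long as local feedstock holds out:

```python
# Toy model of a machine ecosystem bootstrapped from a blueprint library.
# Each blueprint lists the feedstock needed to build one unit of a species.
blueprints = {
    "miner":      {"metal": 2},
    "fabricator": {"metal": 3, "silicon": 1},
    "surveyor":   {"metal": 1, "silicon": 2},
}

stock = {"metal": 10, "silicon": 4}   # locally mined resources
population = {"fabricator": 1}        # seed machine

def build(species):
    """Consume feedstock and add one unit of `species`, if affordable."""
    cost = blueprints[species]
    if all(stock.get(r, 0) >= n for r, n in cost.items()):
        for r, n in cost.items():
            stock[r] -= n
        population[species] = population.get(species, 0) + 1
        return True
    return False

build("miner"); build("surveyor"); build("fabricator")
print(population)  # {'fabricator': 2, 'miner': 1, 'surveyor': 1}
```

A real system would of course need the mining, refining, and fabrication steps this sketch waves away, which is exactly where the “hi-tech components supplied from outside” caveat comes in.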
and again:
There is little evidence that space will drive the emergence of new forms in the same way. More likely, evolution will be too slow, and design will be the model. As stated previously, machines can design new forms too, and likely more rapidly than life, even [post]human life.
The plea for the “special nature” of humanity and human minds. Isn’t this a plea for humans to stay dominant with their special minds that cannot be replicated by machine intelligence? Why can’t machines acquire these “human mind” traits?
Why should technology be directed by the ultrawealthy? Why not everybody as technology is democratized? Why cannot AGI invent and innovate too?
What is the logic to support this “cosy idea”? It seems just an aversion to a “cold, non-biological universe” where humanity is not welcome. And why does human-crewed space exploration need to be added to make the survival of the human species likely?
The essay effectively ignores the biological difficulties of human settlement of space within the solar system, handwaving the technologies that would be needed. Yes, O’Neill habitats would be a compromise, because living on different worlds would involve all sorts of difficulties not mentioned, along with the assumption that we and other species could “evolve, or change” to adapt. Perhaps, but it will need new technology to allow humans to travel to the stars, from long hibernations, to cryosleep, to time-dilation velocities with new propulsion, to FTL with totally handwavium technologies. Machines only need faster propulsion to reach the stars on a shorter time frame, and they are already the most suited to “settle” any celestial body across a wide range of conditions. Have we even settled the deep oceans on Earth, or even the shallow seas? Answer: no.
Lastly, from a purely economic argument, machines are already capable of exploring deep space far more effectively and cheaply than humans. The only value in keeping humans deeply in the exploration loop is the eventual failure to design machines that genuinely think like humans, capable of being intellectually curious and able to design and carry out missions. Are we betting on this, perhaps based on some quasi-religious view of the uniqueness of human/wetware brains and minds, so often postulated by philosophers antithetical to the idea that human minds can be usurped by technology? There may be a transition as humans embrace machine mind enhancements, but eventually our biological natures will limit our capabilities. The exploration of space, especially interstellar space and beyond, will be done by machines. Humans will be observers, but not exploration participants. And if machines populate the galaxy, then even as observers we will be able to participate only peripherally.
For fun, read Terry Bisson’s “They’re Made Out of Meat.”
When will we get science fiction created by machine intelligences, written for themselves, rather than insisting it must be produced by human writers to stay relevant to humans? These aren’t exclusive options, machines or humans but not both, with humans first.
Thanks for your views, but while our explorations can be assisted by machines, an artificial intelligence should NEVER act as our ambassador, no matter how smart or devoted to humanity it may be. If humans are going to the stars, we need to do it ourselves.
@Douglas
On a terrestrial level, does that mean we shouldn’t use game theory to decide our responses to enemies?
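Game theory is a good example of a decision procedure that requires no consciousness at all. A minimal sketch using standard textbook Prisoner’s Dilemma payoffs (the values are my own assumption, not from the comment):

```python
# One-shot Prisoner's Dilemma: a purely mechanical procedure (finding the
# best response) "decides" an action with no consciousness involved.
# Payoffs are (row player, column player); C = cooperate, D = defect.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move):
    # Pick our move maximizing our payoff against a fixed opponent move.
    return max("CD", key=lambda my: payoffs[(my, opponent_move)][0])

# Defection dominates: it is the best response to either opponent move.
print(best_response("C"), best_response("D"))
```

Whether such a mechanical recommendation should govern responses to enemies is, of course, exactly the moral question under discussion.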
Suppose we simply never have the technology for biological humans to go to the stars; does that mean we shouldn’t ever send AI-imbued robots either? [So no Bracewell probes? Were the aliens of Starholme mistaken in sending their AI probe, Starglider, to explore the galaxy, as in Clarke’s “The Fountains of Paradise” (or were the aliens a machine civilization)?]
Please explain why you hold that opinion.
Hi Paul
A very interesting read. AI sure is advancing and currently replacing a range of jobs.
Lots to think about here.
Cheers Edwin
The problem with the O’Neill space colony scenario is simply the cost. Freeman Dyson wrote about this in “Pilgrims, Saints, and Spacemen,” which was republished in L-5 News in 1979. This is the real reason why we do not see space colonies today.
I think “baseline” humans are unlikely to migrate to space. By the time we actually have the infrastructure to go to space in a big way, we will have developed bio-engineering and regeneration to the point that many people will have bodies based on bio-nano (the bio-engineering equivalent of molecular nanotechnology). It will be these people, along with actual AIs (not LLMs), that will go to space. The bio-nano will be developed in efforts to eliminate aging as well as to enable complete regeneration of bodies.
Needless to say, those of us who become bio-nano will seek (and obtain) political autonomy from those who do not share our values and objectives.
Is there any adaptation or engineering of humans to post-human form and biology that can outperform a machine in space? Given all the basic terrestrial factors that make humans ill-adapted to living in space or on other planets, a post-human will still need some environment that supports its body, whereas a machine can be “naked” in all these environments.
I cannot recall the author, but there was this idea of a human encased in a hard suit with tenuous “wings” to absorb sunlight to manufacture the needed sustenance to maintain the body and process the wastes, a personal ecosystem. These beings lived in orbit around Saturn. While intriguing, why would such a human be better than a robot at doing anything?
AFAICS, the arguments always dance around the idea that human minds are superior at some function that places us above the robot’s capabilities, as deities, for all their clear human foibles, are always above humanity.
Robots and their intelligence continue to improve by leaps and bounds compared to humans. AI is still nowhere near human capabilities except in some tasks. They will not achieve AGI or SI using hyperscaling, despite the claims of the big (too big to fail) AI companies like OpenAI. However, I see nothing peculiar about wetware that makes AIs unable to achieve these goals… eventually. To believe otherwise is similar to the belief that living things had an “élan vital” that dead or inorganic things did not. This was proven false. We still see this in ideas about minds: that brains use quantum states, or that in silico brains cannot be conscious, or… Wordsworth repeats these shibboleths with the idea that only humans have goals, the exploring spirit, etc., which is only a repetition of the old idea that “computers can only do what they are programmed to do,” i.e., are limited to some tasks and cannot generalize. There is no evidence that these limitations are inherent in artificial brains. Maybe these limitations will prove very difficult or impossible to overcome, but I haven’t seen it yet.
Therefore, if AGI or SI is attainable, then it implies that robots will be inherently better “pre-adapted” to explore and industrialize space. [Post-]Humans may also be able to do the same with better technology, but I cannot see them ever being as able to enter environments extremely hostile to life compared to machines. Humans have biophilia. We see that in the desire to own landscape pictures, potted plants, gardens, parks (mimicking the savanna landscape), and “getting away” to picturesque locations. In the movie 2010, Curnow, awakening from hibernation in Jupiter space aboard the Leonov, says what he misses: “I miss green.” Maybe post-humans will not be so biophilic, and will be completely comfortable in an artificial environment. Maybe they will not need most ecosystem services to remain alive as we do. However, we know machines are not biophilic (although we might make them so), and they can explore and live out their operating lifetimes without some mental breakdown in space, perhaps traveling for millennia between the stars.
Check this out:
https://www.cryonicsarchive.org/library/24th-century-medicine/
This is the direction we want to go.
Transhumanism. The space-faring post-humans described by Fred Pohl in “Day Million” seem preferable (at least to this human 1.0). The problem is that none of it solves the vast amount of time needed to travel between the stars. Immortality might allow you to get there, but all those millennia stuck in a [large] tin can, even in a holodeck, are not going to be even remotely enjoyable. Either high-c velocity STL with good time dilation, FTL, or cryosleep will be needed for biological beings. Otherwise it will require fully artificial bodies in which perception can be slowed down to mimic time dilation. But why stop there? As in Morgan’s Altered Carbon, just keep a copy of your mind in a “stack” and install it in a waiting body “sleeve” on arrival. But if you can do that, why not stay in an artificial, non-biological body? Or a robot with its own AI? Going the complex mind uploading/downloading route, staying in VR, etc., is just a fantasy required so that a “human” mind stays dominant. It is like an aquatic animal, say a fish, requiring a complex technology to expand fish civilization onto the land. Yes, the land allows air-breathing, waterproof animals to evolve, but the analogy ends when you need some sort of technology to allow interstellar travel, or even slow intra-system travel with trip times limited to a decade or so. Sure, we want to read about characters we can map ourselves onto, but why should that drive actual, realistic interstellar exploration and expansion?
To understand the “man versus machine” nature of potential explorers, we must look into the most horrifying corners of the near future. Societies reach the level of technology they are able to handle, and this is most definitely beyond it: “organoid intelligence”. The concept of growing human fetal brain cells to the point of potential awareness for research purposes may be disturbing enough for most, but we are scarcely even started.
The real issue is that these organoids might be used, for some form of computing for example; and that the process can go far further. We are eager to see 3D-printed organs to allow unlimited transplantation without immunosuppressants. The next step is 3D-printed organisms. A simple version might involve creating a brain organoid surrounded in a shell of supporting organ systems, making it cheaper and more reliable to maintain.
The more complex version is the homunculus — a little beast, of purely arbitrary form designed by the company AI, 3D printed from myriad cell lines of humans and animals, grown onto a matrix of controlling AI-enabled electronics. It might be made of the cells of a dog and speak with the voice of an AI, or it might be made from the cells of a human and mutely carry out its orders with all improper thought suppressed. None will know if it suffers.
The potential of homunculi includes the combination of advanced biochemistry, digital control, and eventually fabless self-replication of many or all components, permitting organisms designed to be native to the harsh ecologies of non-terrestrial planets and outer space. But if we cannot see our way clear to a society where consciousness respects itself and works to a common end, ‘progress’ could come at an inconceivable price.
@Mike
Don’t blame me, but I don’t see any ‘progress’ in all this. Besides, which ‘progress’ are we talking about: human, technological, or social progress? Progress… or regression?
All these wild speculations are once again only the fantasies of Man, who again thinks he is a great watchmaker and who risks, once again, creating a Golem.
Where is the philosophy in all this?
“Science sans conscience n’est que ruine de l’âme” (science without conscience is but the ruin of the soul).
Hi Paul,
Building an interstellar philosophy? It inspires me. My favorite trilogy: the human species, its environment, and technology. Remove one element and our world disappears:
Without a suitable environment (the vacuum of space is not one), no human species and probably nothing else (?). Without humans, no technology, and so no artificial modification of the different environments, no AI, and no space travel except in our dreams.
Let’s take the problem another way: could technology appear elsewhere without humans? No certainty, only speculation (Dyson spheres, etc.). Can technology appear as a spontaneous thing? I doubt it. Is it specific to the human species?
What is troubling in this small analysis is that there seems to be a kind of determinism in this evolution: first an environment that allows the appearance of a biological structure, which evolves up to our species, which then develops the ability to model its environment and even leave it to modify others (Apollo 11; who knows whether the Voyager probes are not going to change a parameter somewhere?).
By the way, there is no philosophy without humans. Will an ETI be able to ‘build an interstellar philosophy’? Can an AI ‘philosophize’? That is an interesting question, isn’t it?
We can multiply the questions, that’s the point of the game, but ultimately the central question is: what place do we have on this great chessboard? What are we “being used” for? Are we the ones who modify the universe by our actions, or are we a simple adjustment variable that brings a bit of thermodynamics to the whole so that the universe does not die (…not right away)? Who knows?
Building an interstellar philosophy invites us to reflect on our place in the universe. What values do we want: peace, cooperation, war, curiosity, exchange, predation? How will our technology impact the worlds we may visit one day? What is our “cosmic responsibility”? Must we modify, contaminate, and disrupt other worlds, or give up our curiosity to preserve them? In short: should we invent a cosmic ethics?
Doing astronomy by reducing it only to technique is a bit boring; you have to bring in a dose of philosophy.
Quantum Intelligence is not AI, and now Chinese scientists have announced the development of a revolutionary analog computing chip that they say can outperform today’s most advanced digital graphics processing units (GPUs) by up to 1,000 times while consuming only a fraction of the energy.
https://ilkha.com/english/science-technology/china-develops-analog-chip-that-outperforms-nvidia-and-amd-by-1000-times-488973
Project Suncatcher is a moonshot exploring a new frontier: equipping solar-powered satellite constellations with TPUs and free-space optical links to one day scale machine learning compute in space.
https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/
As I mentioned before, deep space would be the perfect place for quantum intelligence. A completely superconducting world at near-zero temperatures makes me wonder about interstellar comet 3I/ATLAS…
Maybe we should look at a stable point in our Moon’s shadow for a nice super-cold spot…
Analog neuromorphic chips have been developed in the US by Intel and IBM for over a decade. However, these chips, using some version of analog memory, may be a new approach. I look forward to hearing more about their development, especially if they can be produced at a very low cost, comparable to RAM. This would undermine NVIDIA’s technology “moat” and hence its stock price.
Google’s idea to put data servers for AI in orbit is an even more expensive project than loss-making terrestrial data server farms for AI. I think they are making absurd claims that they would be cost effective by the 2030s. Maybe with those new Chinese analog chips?
Interesting that Musk said, “Quantum computing is best done in the permanently shadowed craters on the Moon,” on November 2nd, 2025. We could put these quantum AI monsters in high orbit around Jupiter and have them eat their way through the hydrogen atmosphere by self-duplicating. Then collapse Jupiter into a white dwarf. Oh, that’s already been done? Or maybe that is where 3I/ATLAS, or the third-eye intergalactic Buddha, is going.
A successful re-do of Biosphere would be a good milestone. If we cannot do it on Earth, what hope do we have of doing so in space? OTOH: Once we have done it on Earth, that will give us much confidence (and know how) for how to do it in space.
I’m saddened that in the last few decades, progress has been slow. Doing the biosphere right (getting to 99% recycling) is an accomplishment we can do right here on the ground.
I have always been skeptical of libertarian New Spacers and such.
It is too easy to red-bait the patience needed for great things:
https://www.yahoo.com/news/articles/hegseth-shreds-soviet-style-bureaucracy-212958609.html
A long view is needed
https://www.spacedaily.com/m/reports/China_increases_lead_in_global_remote_sensing_research_as_US_share_slips_999.html
At least Elon is cosmic minded.
The recent talk about data centers in space may be what kick-starts powersat and sunshade construction and opens up the floodgates.