Self-Assembly: Reshaping Mission Design

It’s interesting to contemplate the kinds of missions we could fly if we develop lightweight smallsats coupled with solar sails, deploying them in Sundiver maneuvers to boost their acceleration. Getting past Voyager 1’s 17.1 kilometers per second would itself be a headline accomplishment, demonstrating the feasibility of this kind of maneuver for boosting delta-v as the spacecraft closes to perhaps 0.2 AU from the Sun before adjusting sail attitude to extract maximum acceleration from solar photons.
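To put rough numbers on the Sundiver idea, here is a back-of-the-envelope sketch (my own, not drawn from the paper) of the hyperbolic excess speed for an idealized flat sail deployed at the perihelion of a parabolic Sundiver orbit. The lightness numbers (beta) and the 0.2 AU perihelion are illustrative assumptions rather than actual mission parameters:

```python
import math

MU_SUN = 1.327e20     # solar gravitational parameter, m^3/s^2
AU = 1.496e11         # astronomical unit, m
KMS_PER_AU_YR = 4.74  # 1 AU/yr expressed in km/s

def sundiver_exit_speed_kms(beta, perihelion_au):
    """Hyperbolic excess speed (km/s) for an ideal sail deployed at the perihelion
    of a parabolic Sundiver orbit. With lightness number beta the effective solar
    gravity drops to mu*(1 - beta), leaving specific energy beta*mu/r_p, so
    v_inf = sqrt(2 * beta * mu / r_p)."""
    r_p = perihelion_au * AU
    return math.sqrt(2.0 * beta * MU_SUN / r_p) / 1e3

for beta in (0.05, 0.1, 0.3):                 # illustrative sail lightness numbers
    v = sundiver_exit_speed_kms(beta, 0.2)    # 0.2 AU perihelion, as in the text
    print(f"beta = {beta:.2f}: v_inf ~ {v:5.1f} km/s ~ {v / KMS_PER_AU_YR:4.1f} AU/yr")
```

Even the most conservative of these cases clears Voyager 1’s 17.1 km/s, the middle values bracket the 5–6 AU/yr quoted for the Technology Demonstrator, and the heaviest sail loading here approaches the ~10 AU/yr cited for the SGL microsats; the 20–30 AU/yr of the SGL mission itself implies still lighter sails or closer perihelia.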

The economic case for smallsats and sails is apparent. Consider The Planetary Society’s LightSail-2, a solar sail in low Earth orbit, which demonstrated its ability to operate and change its orbit for multiple years before reentering Earth’s atmosphere in November of 2022. Launched in 2019, LightSail-2 cost $7 million. NASA’s Solar Cruiser, a much larger design still in development despite budget hiccups, weighs in at $65 million. Slava Turyshev and his team at the Jet Propulsion Laboratory, working with the Aerospace Corporation, independently verified a cost model of $11 million for a one-year interplanetary flight based on their Technology Demonstrator design.

Those numbers go up with the complexity of the mission, but they can be reduced if we take advantage of the fact that spacecraft like these can be repurposed. A string of smallsat sailcraft sent, for example, to Uranus to conduct flybys of the planet, its moons and rings, would benefit from economies of scale, with successive missions to other outer system targets costing less than the ones that preceded them. Here the contrast with dedicated flagship missions (think Cassini or the Decadal Survey’s projected Uranus orbiter) could not be greater. Instead of a separately developed spacecraft for each destination, the modular smallsat/sail model creates a base platform allowing fast, low-cost missions throughout the Solar System.

To the objection that we need orbiters at places like Uranus to get the best science, the answer can only be that we need both kinds of mission if we are not to bog down in high-stakes financial commitments that preclude targets for decades at a time. Of course we need orbiters. But in between, the list of targets for fast flybys is long, and let’s not forget the extraordinary range of data returned by New Horizons at Pluto/Charon and beyond. As the authors of the recent paper from the JPL team note, heliophysics can benefit from missions sent in various directions through the heliosphere:

The shape of the heliosphere and the extent of its tail are subject to debate and the new model of the heliosphere—roughly spherical with a radius of ~100 AU—needs confirmation. Of course, every mission out to >100 AU will test it, but a series of paired missions (nose and tail, and in perpendicular directions) would provide a substantial improvement in our understanding of ISM/solar wind interactions and dynamics. High-velocity, low-cost sailcraft could probe these questions related to the transition region from local to pristine ISM sooner and at lower cost than competing mission concepts. Since the exact trajectory is not that crucial, this would also provide excellent opportunities for ad hoc trans-Neptunian object flybys.

Image: This is Figure 5 from the paper. Caption: New paradigm – fast, low-cost, interplanetary sailcraft with trajectories unconstrained to the ecliptic plane. Note the capability development phases from TDM (at 5–6 AU/yr) to the mission to the focal region of the SGL (20–30 AU/yr). Credit: Turyshev et al.

What I see emerging, however, is a new model not just for flyby missions but for the kind of complicated mission we’ve gotten so much out of through spacecraft like Cassini. We are on the cusp of the era of robotic self-assembly, which means we can usefully combine these ideas. Ten smallsats, each flying considerably faster than anything we’ve flown before, can in this vision self-assemble into one or more larger craft en route to a particular destination. The Solar Gravitational Lens mission as designed at JPL relies on self-assembly to achieve the needed payload mass and also draws on the ability of smallsats with sails to achieve the needed acceleration.

We can trace robotic self-assembly all the way back to John von Neumann’s self-replicating probes, but as far as I know, it was Robert Freitas who in 1980 first subjected the idea to a serious engineering study, applying it to a highly modified probe based on the Project Daedalus craft. Freeman Dyson considered using robot swarms to build large structures and also proposed his famous ‘Astrochicken,’ a 1 kg self-replicating automaton, part biological, conceived as a way of exploring the Solar System. Eric Drexler is well known for positing nanomachines that could build large structures in space.

So the idea has an interesting past. We can now read the Turyshev paper we’ve been examining in these past few posts as the outline of an overall rethinking of the classic one-destination-per-mission concept, one that allows cheap flybys but also alternative ways of putting larger instrumented craft into the kind of orbits the 2022 Decadal Survey has recommended for its putative Uranus mission. Modular smallsat design might incorporate self-assembly, including propulsion modules for slowing the encounter speed of a mission to the outer planets. Here is what the paper says on the topic as it relates to a possible mission to search for life in the plumes of Enceladus:

Another mission type may rely on in-flight aggregation [8], which may be needed to allow for orbital capture. For that, after perihelion passage and while moving at 5 AU/yr (~25 km/s), the microsats would perform inflight aggregation to make a fully capable smallsat to satisfy conditions for in situ investigations. One such important capability may be enhanced on-board propulsion capable of providing the Δv needed to slow down the smallsat. In this case, before approaching Enceladus, the spacecraft reduces its velocity by 7.5 km/s using a combination of on-board propulsion and gravity assists. Moving in the same direction with Enceladus (which orbits Saturn at 12.6 km/s) it achieves the conditions for in situ biomaterial collection.

We might, then, consider either multiple flybys of the ice giants and other targets in the outer reaches of the system by small probes, or larger payloads delivered by self-assembling smallsat craft. The paper names quite a few possibilities. Among them:

    The so-called ‘interstellar ribbon,’ evidently determined by interactions between the heliosphere and the local interstellar magnetic field.

    Indirect probing via sailcraft trajectories in search of information about the putative Planet 9 and its gravitational effects somewhere between 300 and 500 AU from the Sun (Breakthrough Starshot has also discussed this). And if Planet 9 is found, dedicated missions to a world much too far away to study with chemical propulsion methods.

    The Kuiper Belt and beyond: KBOs and dwarf planets like Haumea, Makemake, Eris, and Quaoar within roughly 100 AU of the Sun, or even Sedna, whose orbit takes it well beyond 100 AU.

    Observations of Earth as an exoplanet, watching its transits across the Sun and improving transit spectroscopy.

    Missions to interstellar objects like 1I/‘Oumuamua, which are believed to pass through the Solar System in substantial numbers and are likely to be a rich field for future discovery.

    Studies of the local interplanetary dust cloud responsible for the zodiacal light.

    Exoplanet imaging through self-assembling smallsats, as in the JPL Solar Gravitational Lens mission.

Image: This is Figure 9 from the paper. Caption: IBEX ENA Ribbon. A closer look suggests that the numbers of ENAs are enhanced at the interstellar boundary. A Sundiver spacecraft will go through this boundary as it travels to the ISM. Credit: SwRI.

As examined in JPL’s Phase III study for the SGL mission (the term ‘microsat’ below refers to that category of smallsats massing less than 20 kilograms):

The in-flight (as opposed to Earth-orbiting or cislunar) autonomous assembly [8] allows us to build large spacecraft from modules, separately delivered in the form of microsats (<20 kg), where each microsat is placed on a fast solar system transit trajectory via solar sail propulsion to velocities of ~10 AU/yr. Such a modular approach of combining various microsats into one larger spacecraft for a deep space mission is innovative and will be matured as part of the TDM flights. This unexplored concept overcomes the size and mass limits of typical solar sail missions. Autonomous docking and in-flight assembly are done after a large Δv maneuver, i.e., after passing through perihelion. The concept also offers the compelling ability to assemble different types of instruments and components in a modular fashion, to accomplish many different mission types.

To say that robotic assembly is an ‘unexplored concept’ underlines how much would have to be resolved to make such a daring mission work. The paper goes into more detail; I’ll mention only the high accuracy demanded in trajectory. Remember, we’re talking about flinging each microsat into the outer system on its own after perihelion, with successful rendezvous and assembly required not in Earth orbit but in outbound cruise. Docking technologies for structural, power and data connections would go far beyond those deployed on any missions flown to date.

Even so, I’m persuaded this concept is feasible. It’s also completely brilliant.

Autonomous in-space docking has been demonstrated, while proximity operation technologies specific to such missions can be developed with time. I’ve referred before in these pages to NASA’s On-Orbit Autonomous Assembly from Nanosatellites (OAAN) project, and note that the agency has followed with a CubeSat Proximity Operations Demonstration (CPOD) mission. Needless to say, we’ll keep an eye on these and other efforts. I’m reminded of the intricacies of JWST deployment and have to say that from this layman’s view, we are building the roadmap to make self-assembly happen.

Image: An early artist’s impression of OAAN. Credit: NASA.

Alex Tolley has been looking at self-assembly issues in the comments to the previous post. I highly recommend reading what he has to say. I noted this about redundancy, an issue I hadn’t considered. Quoting Alex:

“Normally, a swarm of independent probe sails would offer redundancy in case of failure. A swarm of flyby sail probes can afford the odd failure. However, this is not the case with probes that must be combined into a functioning whole. Now we have a weakest link problem. Any failure could jeopardize the mission if a failed probe has a crucial component needed for the final combined probes. That failure could be with the payload, or with the sail system itself. A sail may fail with a malfunctioning blade, which prevents being able to rendezvous with the rest of the swarm, or more subtly, be unable to manage fine maneuvering for docking.”

Self-assembly is complex indeed, making early missions that can demonstrate docking and assembly a priority. Success could reshape how we conceive deep space missions.

For a more detailed look at how the JPL team views self-assembly in the context of the SGL mission, see Helvajian et al., “A mission architecture to reach and operate at the focal region of the solar gravitational lens” (abstract). The Turyshev et al. paper is “Science opportunities with solar sailing smallsats,” available as a preprint. I’ve also written about self-assembly in Solar Gravitational Lens: Sailcraft and In-Flight Assembly.


AI Colonization: The Founder and the Ambassador

As we look toward future space missions using advanced artificial intelligence, when can we expect to have probes with cognitive capabilities similar to humans? Andreas Hein and Stephen Baxter consider the issue in their paper “Artificial Intelligence for Interstellar Travel” (citation below), working out mass estimates for the spacecraft and its subsystems and applying assumptions about the increase in computer power per unit of payload mass. By 2050, they estimate, onboard data handling systems reach a processing power of 15 million DMIPS per kilogram.

As DMIPS and flops are different performance measures [the computing power of the human brain is estimated at 10^20 flops], we use a value for flops per kg from an existing supercomputer (MareNostrum) and extrapolate this value (0.025×10^12 flops/kg) into the future (2050). By 2050, we assume an improvement of computational power by a factor 10^5, which yields 0.025×10^17 flops/kg. In order to achieve 10^20 flops, a mass of dozens to a hundred tons is needed.

All of this factors into the discussion of what the authors call a ‘generic artificial intelligence probe.’ Including critical systems like solar cells and radiator mass, Hein and Baxter arrive at an AI probe massing on the order of hundreds of tons, which is not far off the value calculated in the 1970s for the Daedalus probe’s payload. Their figures sketch out a probe that will operate close to the target star, maximizing power intake for the artificial intelligence. Radiators are needed to reject the heat generated by the AI payload. The assumption is that the AI will be switched off in cruise, there being no power source en route to support its operations.

The computing payload itself masses 40 tons, with 100 tons for the radiator and 100 tons for solar cells. We can see the overall configuration in the image below, drawn from the paper.

Image: This is Figure 10 from the paper. Caption: AI probe subsystems (Image: Adrian Mann).
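The arithmetic behind those masses is easy to reproduce. Here is a minimal sketch of the extrapolation in the quoted passage; the 10^5 improvement factor and the MareNostrum reference value are the authors’ assumptions, not mine:

```python
BRAIN_FLOPS = 1e20             # human-brain-equivalent processing, per the quote
FLOPS_PER_KG_TODAY = 0.025e12  # MareNostrum-class reference value used in the paper
IMPROVEMENT_BY_2050 = 1e5      # the authors' assumed gain in computing power per kg

flops_per_kg_2050 = FLOPS_PER_KG_TODAY * IMPROVEMENT_BY_2050  # 2.5e15 flops/kg
payload_kg = BRAIN_FLOPS / flops_per_kg_2050                  # mass needed for 1e20 flops

print(f"Computing payload in 2050: ~{payload_kg / 1e3:.0f} tonnes")  # ~40 tonnes
```

That is the 40-ton computing payload cited above; the radiators and solar cells account for most of the rest of the hundreds of tons.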

Of course, all this points to increasingly powerful AI and smaller payloads over time. The authors comment:

Under the assumption that during the 2050 to 2090 timeframe, computing power per mass is still increasing by a factor of 2^0.5, it can be seen…that the payload mass decreases to levels that can be transported by an interstellar spacecraft of the size of the Daedalus probe or smaller from 2050 onwards. If the trend continues till 2090, even modest payload sizes of about 1 kg can be imagined. Such a mission might be subject to the “waiting paradox”, as the development of the payload might be postponed successively, as long as computing power increases and consequently launch cost[s] decrease due to the lower payload mass.

And this is interesting: Let’s balance the capabilities of an advanced AI payload against the mass needed for transporting a human over interstellar distances (the latter being, in the authors’ estimation, about 100 tons). We reach a breakeven point for the AI probe with the cognitive capabilities of a human somewhere between 2050 and 2060. Of course, a human crew will mean more than a single individual on what would doubtless be a mission of colonization. And the capabilities of AI should continue to increase beyond 2060.

It’s intriguing that our first interstellar missions, perhaps in the form of Breakthrough Starshot’s tiny probes, are contemplated for this timeframe, at the same time that the development of AGI — artificial general intelligence — is extrapolated to occur around 2060. Moving well beyond this century, we can envision increasing miniaturization of increasingly capable AI and AGI, reducing the mass of an interstellar probe carrying such an intelligence to Starshot-sized payloads.

From Daedalus to a nano-probe is a long journey. It’s one that Robert Freitas investigated in a paper that took macro-scale Daedalus probes and folded in the idea of self-replication. He called the concept REPRO, a fusion-based design that would use local resources to produce a new REPRO probe every 500 years. But he would go on to contemplate probes no larger than sewing needles, each imbued with one or many AGIs and capable of using nanotechnology to activate assemblers, exploiting the surface resources of the objects found at destination.

As for Hein and Baxter, their taxonomy of AI probes, which we’ve partially examined this week, goes on to offer two more possibilities. The first is the ‘Founder’ probe, one able to alter its environment and establish human colonies. Thus a new form of human interstellar travel emerges. From the paper:

The classic application of a Founder-class probe may be the ‘seedship’ colony strategy. Crowl et al. [36] gave a recent sketch of possibilities for ‘embryo space colonisation’ (ESC). The purpose is to overcome the bottleneck costs of distance, mass, and energy associated with crewed interstellar voyages. Crowl et al. [36] suggested near-term strategies using frozen embryos, and more advanced options using artificial storage of genetic data and matterprinting of colonists’ bodies, and even ‘pantropy’, the pre-conception adaptation of the human form to local conditions. Hein [70] previously explored the possibility of using AI probes for downloading data from probes into assemblers that could recreate the colonists. Although appearing speculative, Boles et al. [18] have recently demonstrated the production of genetic code from data.

Founder probes demand capable AI indeed, for their job can include terraforming at destination (with all the ethical questions that raises) and the construction of human-ready habitats. You may recall, as the authors do, Arthur C. Clarke’s The Songs of Distant Earth (1986), wherein the Earth-like planet Thalassa is the colony site, and the first generation of colonists are raised by machines. Vernor Vinge imagines in a 1972 story called ‘Long Shot’ a mission carrying 10,000 human embryos, guided by a patient AI named Ilse. A key question for such concepts: Can natural parenting really be supplanted by AI, no matter how sophisticated?

Image: The cover of the June, 1958 issue of IF, featuring “The Songs of Distant Earth.” Science fiction has been exploring the issues raised by AGI for decades.

Taken to its speculative limit, the Founder probe is capable of digital to DNA conversion, which allows stem cells carried aboard the mission to be reprogrammed with DNA printed out from data aboard the probe or supplied from Earth. A new human colony is thus produced.

Image: This is Figure 9 from the paper. Caption: On-site production of genetic material via a data to DNA converter.

Hein and Baxter also explore what they call an ‘Ambassador’ probe. Here we’re in territory that dates back to Ronald Bracewell, who thought that a sufficiently advanced civilization could send out probes that would remain in a target stellar system until activated by its detection of technologies on a nearby planet. A number of advantages emerge when contrasted with the idea of long-range communications between stars:

A local probe would allow rapid dialogue, compared to an exchange of EM signals which might last millennia. The probe might even be able to contact cultures lacking advanced technology, through recognizing surface structures for example [11]. And if technological cultures are short-lived, a probe, if robust enough, can simply wait at a target star for a culture ready for contact to emerge – like the Monoliths of Clarke’s 2001 [32]. In Bracewell’s model, the probe would need to be capable of distinguishing between local signal types, interpreting incoming data, and of achieving dialogue in local languages in printed form – perhaps through the use of an animated dictionary mediated by television exchanges. In terms of message content, perhaps it would discuss advances in science and mathematics with us, or ‘write poetry or discuss philosophy’…

Are we in ‘Prime Directive’ territory here? Obviously we do not want to harm the local culture; there are planetary protection protocols to consider, and issues we’ve looked at before in terms of METI — Messaging Extraterrestrial Intelligence. The need for complex policy discussion before such probes could ever be launched is obvious. Clarke’s ‘Starglider’ probe (from The Fountains of Paradise) comes to mind, a visitor from another system that uses language skills acquired by radio leakage to begin exchanging information with humans.

Having run through their taxonomy, Hein and Baxter return to their concept for a generic artificial intelligence probe, discussed earlier, which assumes that future human-level AGI would consume as much energy for operations as the equivalent energy for simulating a human brain. Heat rejection turns out to be a major issue, as it is for supercomputers today, requiring the large radiators of the generic design. Protection from galactic cosmic rays during cruise, radiation-hardened electronics and self-healing technologies in hardware and software are a given for interstellar missions.

Frank Tipler, among others, has looked into the possibility of mind-uploading, which could theoretically take human intelligence along for the ride to the stars and, given the lack of biological crew, propel the colonization of the galaxy. Ray Kurzweil has gone even further, suggesting that nano-probes of the Freitas variety might traverse wormholes for journeys across the universe. Such ideas are mind-bending (plenty of science fiction plots here), but it’s clear that given the length of the journeys we contemplate, finding non-biological agents to perform such missions will continue to occupy researchers at the boundaries of computation.

The paper is Hein & Baxter, “Artificial Intelligence for Interstellar Travel,” submitted to JBIS (preprint).


Future AI: The Explorer and the Philosopher

Robert Bradbury had interesting thoughts about how humans would one day travel to the stars, although whether we could at that point still call them human is an open question. Bradbury, who died in 2011 at the age of 54, responded at one point to an article I wrote about Ben Finney and Eric Jones’ book Interstellar Migration and the Human Experience (1985). In a comment to that post, the theorist on SETI and artificial intelligence wrote this:

Statements such as “Finney believes that the early pioneers … will have so thoroughly adapted to the space environment” make sense only once you realize that the only “thoroughly adapted” pioneers will be pure information (i.e. artificial intelligences or mind uploaded humans) because only they can have sufficient redundancy that they will be able to tolerate the hazards that space imposes and exist there with minimal costs in terms of matter and energy.

Note Bradbury’s reference to the hazards of space, and the reasonable supposition — or at least plausible inference — that biological humanity may choose artificial intelligence as the most sensible way to explore the galaxy. Other scenarios are possible, of course, including humans who differentiate according to their off-Earth environments, igniting new speciation that includes unique uses of AI. Is biological life necessarily a passing phase, giving way to machines?

I’ll be having more to say about Robert Bradbury’s contribution to interstellar studies and his work with SETI theorist Milan Ćirković in coming months, though for now I can send you to this elegant, perceptive eulogy by George Dvorsky. Right now I’m thinking about Bradbury because of Andreas Hein and Stephen Baxter, whose paper “Artificial Intelligence for Interstellar Travel” occupied us yesterday. The paper references Bradbury’s concept of the ‘Matrioshka Brain,’ a vast collection of spacecraft orbiting a star and using its power to run the embedded AI.

Image: Artist’s concept of a Matrioshka Brain. Credit: Steve Bowers / Orion’s Arm.

Essentially, a Matrioshka brain (think dolls within dolls, like the eponymous Russian original) superficially resembles a Dyson sphere, but now imagine multiple Dyson spheres, nested within each other, drawing off stellar energy and cycling successive levels of waste heat into computational activity. Such prodigious computing environments have found their way into science fiction — Charles Stross explores the possibilities, for example, in his Accelerando (2005).

A Matrioshka brain civilization might be one that’s on the move, too, given that Bradbury and Milan Ćirković have postulated migration beyond the galactic rim to obtain the optimum heat gradient for power production — interstellar temperatures decrease with increasing distance from galactic center, and in the nested surfaces of a Matrioshka brain, the heat gradient is everything. All of which gives us food for thought as we consider the kind of objects that could flag an extraterrestrial civilization and enable its deep space probes.

But let’s return to the taxonomy that Hein and Baxter develop for AI and its uses in probes. The authors call extensions of the automated probes we are familiar with today ‘Explorer probes.’ Even at our current level of technological maturity, we can build probes that make key adjustments without intervention from Earth. Thus the occasional need to go into ‘safe’ mode due to onboard issues, or the ability of rovers on planetary surfaces to adjust to hazards. Even so, many onboard situations can only be resolved with time-consuming contact with Earth.

Obviously, our early probes to nearby stars will need a greater level of autonomy. From the paper:

On arrival at Alpha Centauri, coming in from out of the plane of a double-star system, a complex orbital insertion sequence would be needed, followed by the deployment of subprobes and a coordination of communication with Earth [12]. It can be anticipated that the target bodies will have been well characterised by remote inspection before the launch of the mission, and so objectives will be specific and detailed. Still, some local decision-making will be needed in terms of handling unanticipated conditions, equipment failures, and indeed in prioritising the requirements (such as communications) of a multiple-subprobe exploration.

Image: Co-author Andreas Hein, executive director of the Initiative for Interstellar Studies. Credit: i4IS.

Feature recognition through ‘deep learning’ and genetic algorithms are useful here, and we can envision advanced probes in the Explorer category that can use on-board manufacturing to replace components or create new mechanisms in response to mission demands. The Explorer probe arrives at the destination system and uses its pre-existing hardware and software, or modifications to both demanded by the encounter situation. As you can see, the Explorer category is defined by a specific mission and a highly-specified set of goals.

The ‘Philosopher’ probes we looked at yesterday go well beyond these parameters with the inclusion of highly advanced AI that can actually produce infrastructure that might be used in subsequent missions. Stephen Baxter’s story ‘Star Call’ involves a probe that is sent to Alpha Centauri with the capability of constructing a beamed power station from local resources.

The benefit is huge: Whereas the Sannah III probe (wonderfully named after a spaceship in a 1911 adventure novel by Friedrich Mader) must use an onboard fusion engine to decelerate upon arrival, all future missions can forgo this extra drain on their payload size and carry much larger cargoes, since incoming starships from then on decelerate along the beam. Getting that first mission there is the key to creating an infrastructure.

The Philosopher probe not only manufactures but designs what it needs, leading to a disturbing question: Can an artificial intelligence of sufficient power, capable of self-improvement, alter itself to the point of endangering either the mission or, in the long run, its own creators? The issue becomes more pointed when we take the Philosopher probe into the realm of self-replication. The complexity rises again:

…for any practically useful application, physical self-replicating machines would need to possess considerable computing power and highly sophisticated manufacturing capabilities, such as described in Freitas [52, 55, 54], involving a whole self-replication infrastructure. Hence, the remaining engineering challenges are still considerable. Possible solutions to some of the challenges may include partial self-replication, where complete self-replication is achieved gradually when the infrastructure is built up [117], the development of generic mining and manufacturing processes, applicable to replicating a wide range of components, and automation of individual steps in the replication process as well as supply chain coordination.

Image: Stephen Baxter, science fiction writer and co-author of “Artificial Intelligence for Interstellar Travel.” Credit: Stephen Baxter.

The range of mission scenarios is considerable. The Philosopher probe creates its exploration strategy upon arrival depending on what it has learned approaching the destination. Multiple sub-probes could be created from local resources, without the need for transporting them from Earth. In some scenarios, such a probe engineers a destination planet for human habitability.

But given the fast pace of AI growth on the home world, isn’t it possible that upgraded AI could be transmitted to the probe? From the paper:

This would be interesting, in case the evolution of AI in the solar system is advancing quickly and updating the on-board AI would lead to performance improvements. Updating on-board software on spacecraft is already a reality today [45, 111]. Going even a step further, one can imagine a traveling AI which is sent to the star system, makes its observations, interacts with the environment, and is then sent back to our solar system or even to another Philosopher probe in a different star system. An AI agent could thereby travel between stars at light speed and gradually adapt to the exploration of different exosolar environments.

We still have two other categories of probes to consider, the ‘Founder’ and the ‘Ambassador,’ both of which will occupy my next post. Here, too, we’ll encounter many tropes familiar to science fiction readers, ideas that resonate with those exploring the interplay between space technologies and the non-biological intelligences that may guide them to another star. It’s an interesting question whether artificial general intelligence — AGI — is demanded by the various categories. Is a human- or better level of artificial intelligence likely to emerge from all this?

The paper is Hein & Baxter, “Artificial Intelligence for Interstellar Travel,” submitted to JBIS (preprint).


Artificial Intelligence and the Starship

The imperative of developing artificial intelligence (AI) could not be clearer when it comes to exploring space beyond the Solar System. Even today, when working with unmanned probes like New Horizons and the Voyagers that preceded it, we are dealing with long communication times, making probes that can adapt to situations without assistance from controllers a necessity. Increasing autonomy brings challenges of its own, but given the length of the journeys involved, early interstellar efforts will almost certainly be unmanned and rely on AI.

The field has been rife with speculation by science fiction writers as well as scientists thinking about future missions. When the British Interplanetary Society set about putting together the first serious design for an interstellar vehicle — Project Daedalus in the 1970s — self-repair and autonomous operation were a given. The mission would operate far from home, performing a flyby of Barnard’s Star and the presumed planets there with no intervention from Earth.

We’re at an interesting place here because each step we take in the direction of artificial intelligence leads toward the development of what Andreas Hein and Stephen Baxter call ‘artificial general intelligence’ (AGI), which they describe in an absorbing new paper called “Artificial Intelligence for Interstellar Travel,” now submitted to the Journal of the British Interplanetary Society. The authors define AGI as “[a]n artificial intelligence that is able to perform a broad range of cognitive tasks at similar levels or better than humans.”

This is hardly new terrain for Hein, a space systems engineer who is executive director of the Initiative for Interstellar Studies, or Baxter, an award-winning and highly prolific science fiction novelist. A fascinating Baxter story titled “Star Call” appears in the Starship Century volume (2013), wherein we hear the voice of just such an intelligence:

I am called Sannah III because I am the third of four copies who were created in the NuMind Laboratory at the NASA Ames research base. I was the one who was most keen to volunteer for this duty. One of my sisters will be kept at NASA Ames as backup and mirror, which means that if anything goes wrong with me the sentience engineers will study her to help me. The other sisters will be assigned to different tasks. I want you to know that I understand that I will not come home from this mission. I chose this path freely. I believe it is a worthy cause.

What happens to Sannah III and the poignancy of its journey as it reports home illuminates some of the issues we’ll face as we develop AGI and send it outward.

Image: A visualization of the British Interplanetary Society’s Daedalus Probe by the gifted Adrian Mann.

On the one hand, deep space mandates our work in AI, leading toward this far more comprehensive, human-like intelligence, while at the same time human activity in nearby space runs directly into the fact that space is a hostile place for biological creatures. Evolutionary offshoots from Earth’s human stock may develop as pioneering colonists move to Mars and perhaps the asteroids, tapping cyborg technologies and perhaps beginning a posthuman era.

I notice that in Martin Rees’ new book On the Future, the famed astrophysicist and Astronomer Royal speculates that pressures such as these may lead to the end of Darwinian evolution, replaced by the artificial enhancement of intelligence through AGI, directed by increasingly capable generations of machines. It’s a conceivable outcome, and it’s one that would emerge more swiftly away from Earth, in Rees’ view. The need for powerful AGI for our explorations beyond the Kuiper Belt could well be a driving force in this development.

Of course, we don’t have to see future AI as excluding a human presence. One science fiction trope of considerable interest has been what Andreas Hein explored in earlier work (see Transcendence Going Interstellar: How the Singularity Might Revolutionize Interstellar Travel). One option for exploration: Send probes equipped with AGI to create the colonies that humans will eventually use. Could AGI raise a generation of humans from individual embryos upon arrival?

We can also think about self-replication. A first generation of probes could, as Frank Tipler and Robert Freitas have discussed, continually produce new generations, resulting in a step-by-step exploration of the galaxy.

Whether or not humans go with them or send them as humanity’s emissaries will depend on the decisions and technologies of the time. We have rich background speculations in science fiction to rely on, which the authors tap to analyze AI and AGI for a range of interstellar scenarios and the consequent mission architectures.

Thus AXIS (Automated eXplorer of Interstellar Space), the creation of Greg Bear in his novel Queen of Angels, which runs its own scientific investigations. Long-time Centauri Dreams readers will know of my interest in this novel because of the issues it raises about autonomy and growing self-awareness in AI far from human intervention. AXIS is an example of what Hein and Baxter refer to as ‘Philosopher’ probes. These are probes that, in contrast to probes with specific missions, are able to support open-ended exploration.

Probes like this are able, at least to some extent, to use local resources, which could involve manufacturing, hence the potential wave of new probes to further destinations. Agile and adaptive, they can cope with unexpected situations and produce and test hypotheses. A ‘Gödel machine’ contains a program that interacts with its environment and is capable of modification as it searches for proofs that such changes will produce benefits for the mission. Such a machine, write the authors, could “…modify any part of its code, including the proof searcher itself and the utility function which sums up the rewards…” and could “…modify its soft- and hardware with respect to a specific environment and even set its goals.”

‘Philosopher’ probes deserve more exploration, which I’ll get into tomorrow. But Hein and Baxter develop a taxonomy that includes four types, distinguished in terms of their objectives. We’ll need to look at samples of each as we consider AI and AGI as currently envisioned. The mix of formal and qualitative analysis available in this paper opens many a speculative door, pointing toward the paper’s design of a generic AI probe and predictions about AI availability.

The paper is Hein & Baxter, “Artificial Intelligence for Interstellar Travel,” submitted to JBIS (preprint).


A Vision to Bootstrap the Solar System Economy

Early probes are one thing, but can we build a continuing presence among the stars, human or robotic? An evolutionary treatment of starflight sees it growing from a steadily expanding presence right here in our Solar System, the kind of infrastructure Alex Tolley examines in the essay below. How we get to a system-wide infrastructure is the challenge, one analyzed by a paper that sees artificial intelligence and 3D printing as key drivers leading to a rapidly expanding space economy. The subject is a natural for Tolley, who is co-author (with Brian McConnell) of A Design for a Reusable Water-Based Spacecraft Known as the Spacecoach (Springer, 2016). An ingenious solution to cheap transportation among the planets, the Spacecoach could readily be part of the equation as we bring assets available off-planet into our economy and deploy them for even deeper explorations. Alex is a lecturer in biology at the University of California, and has been a Centauri Dreams regular for as long as I can remember, one whose insights are often a touchstone for my own thinking.

by Alex Tolley


Crewed starflight is going to be expensive, really expensive. All the various proposed methods, from slow world ships to faster fusion vessels, require huge resources to build and fuel. Even at Apollo levels of funding in the 1960s, an economy growing at a fast clip of 3% per year is estimated to need about half a millennium of sustained growth to afford the first flights to the stars. It is unlikely that planet Earth alone can sustain an economy millions of times larger than today’s. The energy use alone would be impossible to manage. The implication is that such a large economy will likely be solar system wide, exploiting the material and energy resources of the system with extensive industrialization.
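As a quick sanity check on that claim (my arithmetic, not the essay’s or the paper’s), compounding 3% annual growth over half a millennium does indeed land in the millions:

```python
growth_rate = 0.03   # assumed sustained annual economic growth
years = 500

factor = (1 + growth_rate) ** years
print(f"After {years} years at {growth_rate:.0%} per year: ~{factor:,.0f} times today's economy")
# prints roughly 2.6 million
```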

Economies grow through both productivity improvements and population increases. We can be fairly confident that Earth is nearing its carrying capacity and certainly cannot increase its population even 10-fold. This implies that a solar system wide economy will need huge human populations living in space. The vision has been illustrated by countless SciFi stories and was perhaps popularized most by Gerard O’Neill, who suggested that space colonies were the natural home of a spacefaring species. John Lewis showed that the solar system has immense resources to exploit, enough to sustain human populations in the trillions.


Image credit: John Frassanito & Associates

But now we run into a problem. Even with the most optimistic estimates of reduced launch costs, and assuming people want to go and live off planet, probably permanently, the difficulties and resources involved in developing this economy will make the European colonization of the Americas seem like a walk in the park by comparison. No doubt it can be done, but our industrial civilization is little more than a quarter of a millennium old. Can we sustain the sort of growth we have had on Earth for another 500 years, especially when it means leaving our home world behind to achieve it? Does this mean that our hopes of vastly larger economies, richer lives for our descendants and an interstellar future for humans are just a pipe dream, or at best a slow grind that might get us there if we are lucky?

Well, there may be another path to that future. Philip Metzger and colleagues have suggested that such a large economy can indeed be developed. More extraordinary, they argue it can be built quickly and without huge Earth spending, starting with very modest space-launched resources and quickly ending the need for them. Their suggestion is that the technologies of AI and 3D printing will drive a robotic economy that bootstraps itself to industrialize the Solar System. Quickly means that within a few decades, the total mass of space industrial assets will be in the millions of tonnes and expanding at rates far in excess of Earth-based economic growth.

The authors ask: can we solve the launch cost problem by using mostly self-replicating machines instead? This should remind you of the von Neumann replicating probe concept. Their idea is to launch seed factories of almost self-replicating robots to the Moon, with an initial payload of a mere 8 tonnes. The robots will not need to be fully autonomous at this stage, as they can be teleoperated from Earth thanks to the short 2.5 second communication delay. Nor are they fully self-replicating at this stage, as the need for microelectronics is best met by shipments from Earth. Almost complete self-replication has already been demonstrated with fabs, and 3D printing promises to extend the power of this approach.

The authors assume that initial replication will be neither fully complete nor high fidelity. They foresee the need for Earth to ship microelectronics to the Moon, as the task of building fabs there is at first too difficult. In addition, the materials for new robots will be much cruder than the technology Earth can currently deliver, so the next few generations of robots and machinery will be of poorer quality than the initial generation. The fidelity of replication improves with each generation, however, and by generation 4, a mere 8 years after starting, the robot technology will be back at the initial level of quality and the industrial base on the Moon should be large enough to support microelectronics fabs. From then on, replication closure is complete and Earth need ship no further resources to the Moon.

Gen | Human/Robotic Interaction | Artificial Intelligence | Scale of Industry | Materials Manufactured | Source of Electronics
1.0 | Teleoperated and/or locally operated by a human outpost | Insect-like | Imported, small-scale, limited diversity | Gases, water, crude alloys, ceramics, solar cells | Import fully integrated machines
2.0 | Teleoperated | Lizard-like | Crude fabrication, inefficient, but greater throughput than 1.0 | (Same) | Import electronics boxes
2.5 | Teleoperated | Lizard-like | Diversifying processes, especially volatiles and metals | Plastics, rubbers, some chemicals | Fabricate crude components plus import electronics boxes
3.0 | Teleoperated with experiments in autonomy | Lizard-like | Larger, more complex processing plants | Diversify chemicals, simple fabrics, eventually polymers | Locally build PC cards, chassis and simple components, but import the chips
4.0 | Closely supervised autonomy | Mouse-like | Large plants for chemicals, fabrics, metals | Sandwiched and other advanced material processes | Building larger assets such as lithography machines
5.0 | Loosely supervised autonomy | Mouse-like | Labs and factories for electronics and robotics; shipyards to support main belt | Large scale production | Make chips locally; make bots in situ for export to asteroid belt
6.0 | Nearly full autonomy | Monkey-like | Large-scale, self-supporting industry, exporting industry to asteroid main belt | Makes all necessary materials, increasing sophistication | Makes everything locally, increasing sophistication
X.0 | Autonomous robotics pervasive throughout Solar System enabling human presence | Human-like | Robust exports/imports through zones of Solar System | Material factories specialized by zone of the Solar System | Electronics factories in various locations

Table 1. The development path for robotic space industrialization. The type of robots and the products created are shown. Each generation takes about 2 years to complete. Within a decade, chip fabrication is initiated. By generation 6, full autonomy is achieved.

Asset | Qty. per Set | Mass minus Electronics (kg) | Mass of Electronics (kg) | Power (kW) | Feedstock Input (kg/hr) | Product Output (kg/hr)
Power Distrib. & Backup | 1 | 2000 | -- | -- | -- | --
Excavators (swarming) | 5 | 70 | 19 | 0.30 | 20 | --
Chem Plant 1 - Gases | 1 | 733 | 30 | 5.58 | 4 | 1.8
Chem Plant 2 - Solids | 1 | 733 | 30 | 5.58 | 10 | 1.0
Metals Refinery | 1 | 1019 | 19 | 10.00 | 20 | 3.15
Solar Cell Manufacturer | 1 | 169 | 19 | 0.50 | 0.3 | --
3D Printer 1 - Small Parts | 4 | 169 | 19 | 5.00 | 0.5 | 0.5
3D Printer 2 - Large Parts | 4 | 300 | 19 | 5.00 | 0.5 | 0.5
Robonaut assemblers | 3 | 135 | 15 | 0.40 | -- | --
Total per Set | -- | ~7.7 MT launched to Moon | -- | 64.36 | 20 kg regolith/hr | 4 kg parts/hr

Table 2. The products and resources needed to bootstrap the industrialization of the Moon with robots. Note the low mass needed to start, a capability already achievable with existing technology. For context, the Apollo Lunar Module had a gross mass of over 15 tonnes.

The authors test their basic model under a range of assumptions, and the conclusions seem robust: assets double every year, more than an order of magnitude faster than Earth’s economic growth.


Figure 13 of the Metzger paper shows that within six generations, about 12 years, the off-planet industrial base could potentially be pushing toward 100,000 metric tons.
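A toy compounding model shows why these curves climb so steeply. The sketch below is my own illustration, assuming the base case of assets doubling every year from an 8-tonne seed; the paper’s scenarios differ in detail, but the orders of magnitude are the point:

```python
SEED_TONNES = 8.0           # initial payload landed on the Moon (Table 2: ~7.7 MT)
DOUBLING_TIME_YEARS = 1.0   # base-case assumption: assets double every year
BELT_MASS_TONNES = 2.4e18   # asteroid belt mass, roughly 2.4e21 kg

def assets_tonnes(years):
    """Industrial assets after a given number of years of exponential doubling."""
    return SEED_TONNES * 2 ** (years / DOUBLING_TIME_YEARS)

for y in (12, 25, 50):
    a = assets_tonnes(y)
    print(f"after {y:2d} years: ~{a:.1e} tonnes (~{a / BELT_MASS_TONNES:.1e} of the belt's mass)")
```

Twelve years of doubling gives a few tens of thousands of tonnes, the same order as Figure 13, while fifty years comes within a factor of a few of the ‘one percent of the asteroid belt’ figure mentioned below.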


Figure 14 of the paper shows that across various robot scenarios, the launch masses needed from Earth every two years are far less than 100 tonnes and possibly below 10 tonnes. This is quite low and well within the launch capabilities of either government or private industry.

Once robots become sophisticated enough, with sufficient AI and full self-replication, they can leave the Moon and start industrializing the asteroid belt. This could happen a decade after initiation of the project.

With the huge resources that we know to exist, robotic industrialization would, within decades rather than centuries, create manufacturing capacity exceeding Earth’s by many orders of magnitude. Putting this growth in context, after just 50 years the assets in space would require about 1% of the mass of the asteroid belt, with complete use of the belt within the following decade. Most importantly, those manufactures, already outside Earth’s gravity well, require no further costly launches to be turned into useful products in space. O’Neill colonies popped out like automobiles? Trivial. The authors suggest that one product could be solar power satellites able to supply Earth with cheap, non-polluting power, in quantities suitable for environmental remediation and for achieving a high standard of living for Earth’s population.

With such growth, seed factories travel to the stars and continue their operation there, just as von Neumann would predict with his self-replicating probes. Following behind will be humans in starships, with habitats already prepared by their robot emissaries. All this within a century, possibly within the lifetime of a Centauri Dreams reader today.

Is it viable? The authors believe the technology is available today. The use of telerobotics staves off the need for autonomous robots for a decade. In the four years since the article was written, AI research has shown remarkable capabilities that might well increase the viability of this aspect of the project. Autonomy will certainly need to be ready once the robots leave the Moon to start extracting resources in the asteroid belt and beyond.

The vision of machines doing the work is probably comfortable. It is the fast exponential growth that is perhaps new. From a small factory launched from Earth, we end up with robots exploiting resources that dwarf the current human economy within a lifetime of the reader.

The logic of the model implies something the authors do not explore. The large human populations needed in space to use the robots’ industrial output in situ will initially have to be launched from Earth. This will remain expensive unless we envisage the birthing of humans in space, much as conceived for some approaches to colonizing the stars. Alternatively, an emigrant population will need to be highly reproductive to fill the cities the robots have built. How long will that take? Probably far longer: centuries, rather than the decades of robotic expansion.

Another issue is that the authors envisage the robots migrating to the stars and continuing their industrialization there. Will humans have the technology to follow, and if so, will they continue to fall behind the rate at which robots expand? Will the local star systems be full of machines, industriously creating manufactures with only themselves to use them? And what of the development of AI towards AGI, or Artificial General Intelligence? Will that mean that our robots become the inevitable dominant form of agency in the galaxy?

The paper is Metzger, Muscatello, Mueller & Mantovani, “Affordable, Rapid Bootstrapping of the Space Industry and Solar System Civilization,” Journal of Aerospace Engineering Volume 26 Issue 1 (January 2013). Abstract / Preprint.


Interstellar Journey: Shrinking the Probe

We’ve all imagined huge starships jammed with human crews, inspired by many a science fiction novel or movie. But a number of trends point in a different direction. As we look at what it would take to get even a robotic payload to another star, we confront the fact that tens of thousands of tons of spacecraft can deliver only the smallest of payloads. Lowering the mass requirement by miniaturizing and leaving propellant behind looks like a powerful option.

Centauri Dreams regular Alex Tolley pointed to this trend in relation to The Planetary Society’s LightSail-1 project. In a scant ten years, we have gone from the earlier Cosmos 1 sail with an area of 600 square meters to LightSail-1, with 32 square meters, but at no significant cost in scientific return because of continuing miniaturization of sensors and components. We can translate that readily into interstellar terms by thinking about future miniature craft that can be sent out swarm-style to reach their targets. Significant attrition along the way? Sure, but when you’re building tiny, cheap craft, you can lose some and count on the remainder to arrive.

The Emergence of SailBeam

I inevitably think about Jordin Kare’s SailBeam concepts when I hear thinking like this. Kare, a space systems consultant, had been thinking in terms of pellet propulsion of the kind that Clifford Singer and, later, Gerald Nordley have examined. The idea here was to replace a beam of photons from a laser with a stream of pellets fired by an accelerator — the pellets (a few grams in size) would be vaporized into plasma when they reached the spacecraft and directed back as plasma exhaust. Nordley then considered lighter ‘smart’ pellets with onboard course correction.

I’m long overdue for a revisit to both Singer and Nordley, but this morning I’m thinking about Kare’s idea of substituting tiny sails for the pellets, creating a more efficient optical system because a stream of small sails can be accelerated much faster close to the power source. Think of a solar sail, as Kare did, divided into a million pieces, each made of diamond film and accelerated along a 30,000 kilometer path. All of them are shot off toward a larger interstellar probe, where each is turned into hot plasma that pushes against the probe’s magnetic sail.


Image: Jordin Kare’s ‘SailBeam’ concept. Credit: Jordin Kare/Dana G. Andrews.

Kare, of course, was using his micro-sails for propulsion, but between Nordley and Kare, the elements are all here for tiny smart-probes that can be pushed to a substantial fraction of the speed of light while carrying onboard sensors shrunk through the tools of future nanotechnology. Kare’s sails, in some designs, get up to a high percentage of c within seconds, pushed by a multi-billion watt orbiting laser. Will we reach the point where we can make Kare’s sails and Nordley’s smart pellets not the propulsion method but the probes themselves?
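To see why the acceleration run has to be so violent, simple non-relativistic kinematics over a 30,000 kilometer track is enough; the target speeds below are illustrative, not Kare’s actual design points:

```python
C = 2.998e8      # speed of light, m/s
TRACK_M = 3.0e7  # 30,000 km acceleration path
G0 = 9.81        # standard gravity, m/s^2

def accel_and_time(frac_c):
    """Constant acceleration needed to reach frac_c of lightspeed over the track (a = v^2 / 2d)."""
    v = frac_c * C
    a = v * v / (2.0 * TRACK_M)
    return a, v / a

for frac in (0.01, 0.05, 0.1):   # illustrative final speeds
    a, t = accel_and_time(frac)
    print(f"{frac:.0%} of c: a ~ {a:.1e} m/s^2 (~{a / G0:.1e} g), reached in ~{t:.1f} s")
```

Tens of thousands to millions of gravities, over in seconds, which is why the sails have to be gram-scale films of something as robust as diamond rather than anything with conventional structure.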

In that case, the idea of a single probe gives way to fleets of tiny, cheap spacecraft sent out at much lower cost. It’s a long way from LightSail-1, of course, but the principle is intact. LightSail-1 is a way of taking off-the-shelf Cubesat technology and giving it a propulsion system. Cubesats are cheap and modular. Equipped with sails, they can become interplanetary exploration tools, sent out in large numbers, communicating among themselves and returning data to Earth. LightSail’s cubesats compel anyone thinking long-term to ask where this trend might lead.

A Gravitational Lensing Swarm

In Existence, which I think is his best novel, David Brin looks at numerous scenarios involving miniaturization. When I wrote about the book in Small Town Among the Stars, I was fascinated with what Brin does with intelligence and nanotechnology, and dwelled upon the creation of a community of beings simulating environments aboard a starship. But Brin also talks about a concept that is much closer to home, the possibility of sending swarms of spacecraft to the Sun’s gravitational focus for observation prior to any star mission.

We normally cite roughly 550 AU as the distance at which the Sun’s gravity brings light from objects on the other side of it to a focus, but useful effects begin closer in if we’re talking about gravitons and neutrinos, and in Brin’s book, early probes go out to this nearer region, between Uranus and Neptune, to test the concept. But get to 550 AU and beyond and photon lensing effects begin and continue, for the focal line goes to infinity. We have coronal distortion to cope with at 550 AU, but the spacecraft doesn’t stop, and as it continues ever farther from the Sun, we can sample different wavelengths of light to make observations assisted by this lensing.
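The 550 AU figure itself falls straight out of general relativity’s light-deflection formula, a standard calculation not specific to Brin or Maccone:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
R_SUN = 6.957e8   # solar radius, m
C = 2.998e8       # speed of light, m/s
AU = 1.496e11     # astronomical unit, m

# A ray grazing the solar limb is deflected by about 4GM/(R c^2) radians, so parallel
# rays from a distant source converge at roughly d = R / deflection = R^2 c^2 / (4 G M).
focal_m = R_SUN**2 * C**2 / (4.0 * G * M_SUN)
print(f"Minimum photon focal distance ~ {focal_m / AU:.0f} AU")   # about 548 AU
```

Rays that pass farther from the solar limb converge farther out, which is why the focal line runs on to infinity instead of ending at 550 AU.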

Before committing resources to any interstellar mission, we want to know which targets are the most likely to reward our efforts. Why not, then, send a swarm of probes? Claudio Maccone, who has studied gravitational lensing more than any other physicist, calls his design the FOCAL probe, but I’m talking about its nanotech counterpart. Imagine millions of these sent out to use the Sun’s natural lens, each with an individual nearby target of interest. Use the tools of future nanotech and couple them with advances in AI and emulation and you open the way for deep study of planets and perhaps civilizations long before you visit them.

The possibilities are fascinating, and one of the energizing things about them is that while they stretch our own technology and engineering well beyond the breaking point, they exceed no physical laws and offer solutions to the vast problems posed by the rocket equation. Perhaps we’ll build probes massing tens of thousands of tons to deliver a 100 kilogram package to Alpha Centauri one day, but a simultaneous track researching what we can do at the level of the very small could pay off as our cheapest, most effective way to reach a neighboring star.
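On that last point, the tyranny of the rocket equation is easy to illustrate. The exhaust velocities and cruise speeds below are purely illustrative, and the sketch ignores staging and structural mass, so real designs fare worse:

```python
import math

C = 2.998e8   # speed of light, m/s

def initial_mass_tonnes(payload_kg, delta_v, exhaust_v):
    """Classical Tsiolkovsky rocket equation: m0 = m_payload * exp(delta_v / v_e)."""
    return payload_kg * math.exp(delta_v / exhaust_v) / 1e3

PAYLOAD_KG = 100.0   # the 100 kilogram package mentioned above
for frac_c, v_e in [(0.05, 1.0e7), (0.1, 1.0e7), (0.1, 5.0e6), (0.2, 5.0e6)]:
    m0 = initial_mass_tonnes(PAYLOAD_KG, frac_c * C, v_e)
    print(f"dv = {frac_c:.2f} c, v_e = {v_e / 1e3:,.0f} km/s: initial mass ~ {m0:,.1f} tonnes")
```

Nudge the cruise speed up or the exhaust velocity down and the propellant bill for a 100 kg payload explodes into the tens of thousands of tonnes, which is precisely the problem that tiny, externally propelled probes sidestep.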

More on this tomorrow, as I take a longer look at Clifford Singer and Gerald Nordley’s ideas on pellet propulsion. I want to use that discussion as a segue into a near term concept, Mason Peck’s ideas on spacecraft the size of computer chips operating in our Solar System.

And today’s references: Cliff Singer’s first pellet paper is “Interstellar Propulsion Using a Pellet Stream for Momentum Transfer,” JBIS 33 (1980), pp. 107-115. Gerald Nordley’s ideas can be found in “Beamriders,” Analog Vol. 119, No. 6 (July/August, 1999). Jordin Kare’s NIAC report “High-Acceleration Micro-Scale Laser Sails for Interstellar Propulsion,” (Final Report, NIAC Research Grant #07600-070, revised February 15, 2002) can be found on the NIAC site.
