The question of infrastructure haunts the quest to achieve interstellar flight. I’ve always believed that we will develop deep space capabilities not only for research and commerce but also as a means of defense, ensuring that we will be able to change the trajectories of potentially dangerous objects. But consider the recent Breakthrough Starshot discussion. There I noted that we might balance the images we could receive through Starshot’s sails with those we could produce through telescopes at the Sun’s gravitational focus.
Were infrastructure not an issue, it would be a simple thing to go with JPL’s Solar Gravitational Lens concept, since its target, somewhere around 600 AU out, is so much closer, and the mission could produce perhaps even better imagery. But let’s consider Starshot’s huge photon engine in the Atacama Desert not as a one-shot enabler for Proxima Centauri, but as a practical tool that, once built, will allow all kinds of fast missions within the Solar System. The financial outlay supports Oort Cloud exploration, fast access to the heliopause and nearby interstellar space, and planetary missions of all kinds. Add atmospheric braking and we can consider it a supply chain as well.
Robert Freeland, who has labored mightily in the Project Icarus Firefly design, told the Interstellar Research Group’s recent meeting in Montreal about work he is doing within the context of the British Interplanetary Society’s BIS SPACE project, whose goal is to consider the economic drivers, resources, transportation issues and future population growth that would drive an interplanetary economy. That Solar System-wide infrastructure in turn feeds interstellar capabilities, as it generates new technologies that funnel into propulsion concepts. A case in point: In-space fusion.
To make our engines go, we need fuel, an obvious point and a telling one, since the kind of fusion Freeland has been studying for the Firefly design is limited by our current inability to extract enough Helium-3 to use aboard an interstellar craft. Firefly would use Z-pinch fusion – this is a way of confining plasma and compressing it. An electrical current fed into the plasma generates the magnetic fields that ‘pinch,’ or compress the plasma, creating the high temperatures and pressures that can produce fusion.
I was glad to see Freeland’s slides on the fusion fuel possibilities, a helpful refresher. The easiest fusion reaction, if anything about fusion can be called ‘easy,’ is that of deuterium with tritium, with the caveat that this reaction releases most of its energy in neutrons, which cannot produce thrust. The reaction of deuterium with helium-3, by contrast, releases primarily charged particles that can be shaped into thrust, which is why D/He3 fusion was chosen by the Daedalus team for their gigantic starship design back in the 1970s. Along with that choice came the need to find the helium-3 to fuel the craft. The Daedalus team, ever imaginative, contemplated mining the atmospheres of the gas giants, where He3 can be found in abundance.
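For reference, the energetics behind that choice, using standard textbook values rather than anything from Freeland’s slides: in D/T fusion roughly 80 percent of the yield leaves in the neutron, while D/He3 puts nearly all of its energy into charged particles that a magnetic nozzle can direct.

```latex
% Standard reaction energetics (textbook values):
\mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV})

\mathrm{D} + {}^{3}\mathrm{He} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.6\ \mathrm{MeV}) + p\,(14.7\ \mathrm{MeV})
```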
The lack of He-3 caused Icarus to choose a pure deuterium fuel (DD). Freeland ran through the problems with DD, noting the abundance of produced neutrons and the gamma rays that result from shielding these fast neutrons. The reaction also produces so-called bremsstrahlung radiation, which emerges in the form of x-rays. Thus the Firefly design stripped down what would otherwise be a significant portion of its mass in shielding by going to what Freeland calls ‘distance shielding,’ meaning minimal structure that allows the radiation to escape into space.
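Pure deuterium burns through two branches of roughly equal probability (standard textbook values, not from the talk), and the tritium produced in the first branch burns in turn with deuterium, which is where the fast 14.1 MeV neutrons, and through their interactions with structure the gamma rays, come from:

```latex
\mathrm{D} + \mathrm{D} \;\rightarrow\; \mathrm{T}\,(1.01\ \mathrm{MeV}) + p\,(3.02\ \mathrm{MeV}) \qquad (\sim 50\%)

\mathrm{D} + \mathrm{D} \;\rightarrow\; {}^{3}\mathrm{He}\,(0.82\ \mathrm{MeV}) + n\,(2.45\ \mathrm{MeV}) \qquad (\sim 50\%)
```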
A starship using deuterium and helium-3 minimizes the neutron radiation, so the question becomes: when do we close the gap in our space capabilities to the point that we can extract helium-3 in the quantities needed from planets like Uranus? I see BIS SPACE as seeking to probe what the Daedalus team described as a Solar System-wide economy, and to put some numbers to the question of when this capability would evolve. The question gains urgency in terms of interstellar probes because Firefly was conceived as a starship that could launch before 2100, yet it seemed likely that helium-3 simply wouldn’t be available in sufficient quantities by then. So when would it be?
To create an infrastructure off-planet, we’ll need human migration outward, beginning most likely with orbital habitats not far from Earth – think of the orbital environments conceived by Gerard O’Neill, with their access to the abundant resources of the inner system. Freeland imagines future population growth moving further out over the course of the next 20,000 years until the Solar System is fully exploited. In four waves of expansion, he sees the era of chemical and ion rocketry, and perhaps beamed propulsion, lasting to about 2050; a second generation largely using fission-powered craft, in a phase ending around 2200; a third, from 2200 to 2500, tapping fusion energies (DD); and a fourth after 2500, when the entire Solar System is populated and mining of the gas giants becomes possible.
Let’s pause for a moment on the human population’s growth, because the trends noted in the image below, although widely circulated, seem not to be widely known. We’re looking here at the growth rate of our species and its acceleration followed by its long decline. As Freeland pointed out, the UN expects world population to peak at between 10 and 12 billion perhaps before the end of this century. After that, increase in the population is by no means assured. So much for the scenario that we have to go off-planet because we will simply overwhelm resources here with our numbers.
Image: In both this and the image below I am drawing from Freeland’s slides.
You would think this Malthusian notion would have long ago been discredited, but it is surprisingly robust. Even so, orbital habitats near Earth can potentially re-create basic Earth-like conditions while exploiting material resources in great abundance and solar power, with easy access to space for moving the wave of innovation further out. BIS SPACE looks with renewed interest at these O’Neill habitats in its first wave of papers.
The larger scenario plays out as follows: In the second half of our century, we move development largely to high Earth orbit, with materials drawn mostly from the Moon, using transport of goods by nuclear-powered cargo ships. The third generation creates orbital habitats at all the inner planets (and Ceres) and perhaps near-Earth asteroids using DD fusion propulsion, while the fourth generation takes in the outer planets and their moons. At this point we can set up the kind of aerostat mining rigs in the upper gas giant atmospheres that would enable the collection of helium-3. Here again we have to make comparisons with other technologies. Where will beamed spacecraft capabilities be by the time we are actively mining He-3 in the outer Solar System?
I’ve simplified the expansion scenario greatly, and send you to Freeland’s slides for the details. But I want to circle back to Firefly. Using DD fusion, Firefly’s radiator and coolant requirements are extreme (480 tonnes of beryllium coolant!). But move to the deuterium/helium-3 reaction and you drop radiation output by 75 percent while increasing exhaust velocity. Beryllium can be replaced with less expensive aluminum, and the physical size of the vessel is greatly reduced. This version of Firefly gets to Alpha Centauri in the same time using 1/5th the fuel and 1/12th the coolant.
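The arithmetic is worth making explicit. A minimal sketch applying the ratios Freeland quotes; only the 480-tonne coolant figure and the scaling factors come from the talk, the derived numbers simply follow from them:

```python
# Applying the ratios Freeland quotes to the DD Firefly baseline.
dd_coolant_tonnes = 480              # beryllium coolant, DD baseline
dhe3_coolant_tonnes = dd_coolant_tonnes / 12   # 1/12th the coolant
dhe3_radiation_fraction = 1 - 0.75   # 75 percent reduction in radiation

print(f"D/He3 coolant: {dhe3_coolant_tonnes:.0f} tonnes")        # 40 tonnes
print(f"Radiation output: {dhe3_radiation_fraction:.0%} of DD")  # 25%
```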
In other words, the sooner we can build the infrastructure allowing us to mine the critical helium-3, the sooner we can drop the costs of interstellar missions and expand their capabilities using fusion engines. If such a scenario plays out, it will be fascinating to see how the population growth curves for the entire Solar System track given access to abundant new resources and the technologies to exploit them. If we can imagine a Solar System-wide human population in the range of 100 billion, we can also imagine the growth of new propulsion concepts to power colonization outside the system.
If we’re going to get to the stars, the path along the way has to go through an effort like Breakthrough Starshot. This is not to say that Breakthrough will achieve an interstellar mission, though its aspirational goal of reaching a nearby star like Proxima Centauri with a flight time of 20 years is one that takes the breath away. But aspirations are just that, and the point is, we need them no matter how far-fetched they seem to drive our ambition, sharpen our perspective and widen our analysis. Whether we achieve them in their initial formulation cannot be known until we try.
So let’s talk for a minute about what Starshot is and isn’t. It is not an attempt to use existing technologies to begin building a starship today. Yes, metal is being bent, but in laboratory experiments and simulated environments. Rather than a construction project, Starshot is about clarifying where we are now, and projecting where we can expect to be within a reasonable time frame. In its early stages, it is about identifying the science issues that would enable us to use laser beaming to light up a sail and push it toward another star with prospects of a solid data return. Starshot’s Harry Atwater (Caltech) told the Interstellar Research Group in Montreal that it is about development and definition: develop the physics, define and grow the design concepts, and nurture a scientific community. These are the necessary and current preliminaries.
Image: The cover image of a Starshot paper illustrating Harry Atwater’s “Materials Challenges for the Starshot Lightsail,” Nature Materials 17 (2018), 861-867.
We’re talking about what could be a decades-long effort here, one that has already achieved a singular advance in interstellar studies. I don’t have the current count on how many papers have been spawned by this effort, but we can contrast the ongoing work of Starshot’s technical teams with where interstellar studies stood just 25 years ago, when few scientific conferences dealt with interstellar ideas and exoplanet research was still in its infancy. In terms of bringing focus to the issue, Starshot is sui generis.
It is also an organic effort. Starshot will assess its development as it goes, and the more feasible its answers, the more it will grow. I think that learning more about sail possibilities will spawn renewed effort in other areas, and I see the recent growth of fusion rocketry concepts as a demonstration that our field is attaining critical mass not only in the research labs and academy but in commercial space ventures as well.
So let’s add to Atwater’s statement that Starshot is also a cultural phenomenon. Although its technical meetings are anything but media fodder, their quiet work keeps the idea of an interstellar crossing in the public mind as a kind of background musical riff. Yes, we’re thinking about this. We’ve got ideas and lab experiments that point to new directions. We’re learning things about lightsails and beaming we didn’t know before. And yes, it’s a big universe, with approximately one planet per star on average, and we’ve got one outstanding example of a habitable zone planet right next door.
So might Starshot’s proponents say to themselves, although I have no idea how many of those participating in the effort back out sometimes to see that broader picture (I suspect quite a few, based on those I know, but I can’t speak for everyone). But because Starshot has not sought the kind of publicity that our media-crazed age demands, I want to send you to Atwater’s video presentation at Montreal to get caught up on where things stand. I doubt we’re ever going to fly the mission Starshot originally conceived because of cost and sheer scale, but I’m only an outsider looking in. I do think that when the first interstellar mission flies, it will draw heavily on Starshot’s work. And this will be true no matter what final choices emerge as to propulsion.
This is a highly technical talk compressed into an all too short 40 minutes, but let’s just go deep on one aspect of it, the discussion of the lightsail that would be accelerated to 20 percent of lightspeed for the interstellar crossing. Atwater’s charts are worth seeing, especially the background on what the sail team’s meetings have produced in terms of their work on sail materials and, especially, sail shape and stability. The sail is a structure approximately 4 meters in diameter, with a communications aperture 1 meter in size, as seen in the center of the image (2 on the figure). Surrounding it on the circular surface are image sensors (6) and thin-film radioisotope power cells (5).
Maneuvering LEDs (4) provide attitude control, and thin-film magnetometers (7) are in the central disk, with power and data buses (8) also illustrated. A key component: A laser reflector layer positioned between the instruments that are located on the lightsail and the lightsail itself, which is formed as a silicon nitride metagrating. As Atwater covers early in his presentation, the metagrating is crucial for attitude control and beam-riding, keeping the sail from slipping off the beam even though it is flat. The layering is crucial in protecting the sailcraft instrumentation during the acceleration stage, when it is fully illuminated by the laser from the ground.
How to design lensless transmitters and imaging apertures? Atwater said that lensless color camera and steerable phased array communication apertures are being prototyped in the laboratory now using phased arrays with electrooptic materials. Working one-dimensional devices have emerged in this early work for beam steering and electronic focusing of beams. The laser reflector layer offers the requisite high reflectivity at the laser wavelength being considered, using a hybrid design with silicon nitride and molybdenum disulfide to minimize absorption that would heat the sail.
I won’t walk us through all of the Starshot design concepts at this kind of detail, but rather send you to Atwater’s presentation, which shows the beam-riding lightsail structure and its current laboratory iterations. The discussion of power sources is particularly interesting given the thin-film lightweight structures involved, and as shown in the image below, it involves radioisotope thermoelectric generators actually integrated into the sail surface. Thin film batteries and fuel cells were considered by Breakthrough’s power working group but rejected in favor of this RTG design.
So much is going on here in terms of the selection of sail materials and the analysis of its shape, but I’ll also send you to Atwater’s presentation with a recommendation to linger over his discussion of the photon engine, that vast installation needed to produce the beam that would make the interstellar mission happen. The concept in its entirety is breathtaking. The photon engine is currently envisioned as an array of 1,767,146 panels consisting of 706,858,400 individual tiles (Atwater dryly described this as “a large number of tiles”), producing the 200 GW output and spanning 3 kilometers on the ground. The communications problem for data return is managed by scalable large-area ground receiver arrays, another area where Breakthrough is examining cost trends that, within the decades contemplated for the project, will drive component expenses sharply down. The project depends upon these economic outcomes.
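Those panel and tile counts invite a quick sanity check. The per-panel power figure below assumes the 200 GW is divided evenly across the array, an assumption for illustration rather than anything stated in the talk:

```python
# Sanity-checking the scale of the photon engine from the figures quoted.
panels = 1_767_146
tiles = 706_858_400
beam_power_w = 200e9   # 200 GW

tiles_per_panel = tiles // panels      # divides evenly
watts_per_panel = beam_power_w / panels

print(tiles_per_panel)                        # 400
print(f"{watts_per_panel / 1e3:.0f} kW per panel")   # ~113 kW
```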
Image: What we would see if we had a Starshot-class sailcraft approaching the Earth, from the image at two hours away to within five minutes of its approach. Credit for this and the two earlier images: Harry Atwater/Breakthrough Starshot.
Using a laser-beamed sail technology to reach the nearest stars may be the fastest way to get images like those above. The prospect of studying a planet like Proxima b at this level of detail is enticing, but how far can we count on economic projections to bring costs down to the even remotely foreseeable range? We also have to factor in the possibility of getting still better images from a mission to the solar gravitational lens (much closer) of the kind currently being developed at the Jet Propulsion Laboratory.
Economic feasibility is inescapably part of the Starshot project, and is clearly one of the fundamental issues it was designed to address. I return to my initial point. Identifying the principles involved and defining the best concepts to drive design both now and in the future is the work of a growing scientific community, which the Starshot effort continues to energize. That in itself is no small achievement.
It is, in fact, a key building block in the scientific edifice that will define the best options for achieving the interstellar dream. And while this is not the place to go into the complexities of scientific funding, suffice it to say that putting out the cash to enable these continuing studies is a catalytic gift to a field that has always struggled for traction both financial and philosophical. The Starshot initiative has a foundational role in defining the best technologies for interstellar flight that will lead one day to its realization.
RIME (Radar for Icy Moons Exploration) is the first instrument ever deployed to the outer Solar System that can make direct measurements of conditions below the surface of an object. That makes it precisely tailored for Europa as well as Ganymede and Callisto, two other Galilean moons that also seem to have an internal ocean. Consider it a radar ‘sounder’ that can penetrate up to 9 kilometers below surface ice. RIME is a major part of why JUICE is going to the moons of Jupiter.
Consider it problematic as well, at least for the moment, as controllers working the JUICE mission try to solve an unexpected deployment issue. The 16-meter antenna shows movement but has so far failed to come free of its mounting bracket. According to ESA, the antenna has extended to about a third of its full intended length, partially deployed but still largely stowed.
Image: Shortly after launch on 14 April, ESA’s Jupiter Icy Moons Explorer, JUICE, captured this image with its JUICE monitoring camera 2 (JMC2). JMC2 is located on the top of the spacecraft and is placed to monitor the multi-stage deployment of the 16 m-long Radar for Icy Moons Exploration (RIME) antenna. RIME is an ice-penetrating radar that will be used to remotely probe the subsurface structure of the large moons of Jupiter. In this image, RIME is seen in stowed configuration. The image was taken at 14:19 CEST. JMC images provide 1024 x 1024 pixel snapshots. Credit: ESA.
Given that two months of commissioning remain for the spacecraft, the agency is saying that there is abundant time to work the problem out, which may involve something as simple as a stuck pin, potentially sprung by warming the radar mount by rotating the spacecraft and turning the assembly into direct sunlight.
The memory of the Galileo probe to Jupiter hovers over the mission at least momentarily. Controllers never did free up Galileo’s high-gain antenna, though they were able to return outstanding data through ingenious use of its low-gain counterpart. Needless to say, the hope here is that RIME follows a different path and soon springs free.
In-flight adjustment and occasional repair are no strangers to deep space missions. We’re reminded of this also by the plan to save precious energy and keep Voyager 2 (and potentially Voyager 1) operational for a few years longer than previously thought possible. Both craft rely on RTGs (radioisotope thermoelectric generators) that convert heat from decaying plutonium into electricity, which means less power is available each year. Hence the need to turn off unneeded heaters and other systems to conserve power.
The new method: Use power heretofore reserved for a voltage regulator that triggers a backup circuit in the event of a serious fluctuation in voltage. Power is set aside in the spacecraft’s RTG for that purpose, but can be redirected to keeping the craft’s five science instruments operating until 2026. That gives up a certain safety measure, but even after 45 years in flight, the electrical systems on Voyagers 1 and 2 remain stable, so it seems a good gamble to produce further interstellar science. If the approach works for Voyager 2, it may be tried on Voyager 1 in the near future.
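Radioactive decay alone sets the floor on that power decline, and the trend is easy to sketch. The 470 W figure is the approximate combined electrical output of the three RTGs at launch, a commonly cited value rather than one from this article; thermocouple degradation (not modeled here) makes the real decline steeper:

```python
# Pu-238 decay model: power halves every 87.7 years.
HALF_LIFE_PU238 = 87.7   # years, standard value
initial_power_w = 470    # approximate combined RTG output at 1977 launch

def decay_power(p0: float, years: float) -> float:
    """Electrical power remaining after `years`, from decay alone."""
    return p0 * 2 ** (-years / HALF_LIFE_PU238)

print(f"{decay_power(initial_power_w, 45):.0f} W after 45 years")  # ~329 W
```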
Suzanne Dodd is Voyager project manager at the Jet Propulsion Laboratory:
“Variable voltages pose a risk to the instruments, but we’ve determined that it’s a small risk, and the alternative offers a big reward of being able to keep the science instruments turned on longer. We’ve been monitoring the spacecraft for a few weeks, and it seems like this new approach is working.”
Image: Each of NASA’s Voyager probes are equipped with three radioisotope thermoelectric generators (RTGs), including the one shown here. The RTGs provide power for the spacecraft by converting the heat generated by the decay of plutonium-238 into electricity. Credit: NASA/JPL-Caltech.
Anything we can do to keep these priceless assets functioning is to the good. They are our only operational craft outside the heliosphere, a striking thought given their originally projected mission duration of a scant four years. Because Voyager 1 operates without one of its science instruments, which failed much earlier in the mission, its power issues are slightly less pressing than its twin’s, but decisions about shutting down another instrument still loom, so the new RTG power draw may again come into play.
Take a look at our missions to Jupiter in context. The image below shows the history back to 1973, with the launch of Pioneer 10, and of course, the Voyager encounters. We also have the flybys by Ulysses, Cassini and New Horizons, each designed for other destinations, for Jupiter offers that highly useful gravitational assist to help us get places fast. JUICE (Jupiter Icy Moons Explorer) joins the orbiter side of the image tomorrow, with launch aboard an Ariane 5 from Kourou (French Guiana) scheduled for 1215 UTC (0815 EDT) on Thursday. You can follow the launch live here or here.
The first gravitational maneuver will be in August of next year with a Lunar-Earth flyby, followed by Venus in 2025 and then two more Earth flybys (2026 and 2029) before arrival at Jupiter in July of 2031. I’ve written a good deal about both Europa Clipper and JUICE in these pages and won’t go back to repeat the details, but we can expect 35 flybys of the icy moons Europa, Ganymede and Callisto before insertion into orbit at Ganymede, making JUICE the first spacecraft ever to orbit a moon other than our own. Needless to say, we’ll track JUICE closely in these pages.
Image: Ariane 5 VA 260 with JUICE, start of rollout on Tuesday 11 April. Credit for this and the above infographic: ESA.
Over the past several years we’ve looked at two missions that are being designed to go beyond the heliosphere, much farther than the two Voyagers that are our only operational spacecraft in what we can call the Local Interstellar Medium. Actually, we can be more precise. That part of the Local Interstellar Medium where the Voyagers operate is referred to as the Very Local Interstellar Medium, the region where the LISM is directly affected by the presence of the heliosphere. The Interstellar Probe design from Johns Hopkins Applied Physics Laboratory and the Jet Propulsion Laboratory’s Solar Gravity Lens (SGL) mission would pass through both regions as they conduct their science operations.
Both probes have ultimate targets beyond the VLISM, with Interstellar Probe capable of looking back at the heliosphere as a whole and reaching distances as far as 1000 AU while still operational and returning data to Earth. The SGL mission begins its primary science mission at the Sun’s gravitational lens distance on the order of 550 AU, using the powerful effects of gravity’s curvature of spacetime to build what the most recent paper on the mission calls “a ‘telescope’ of truly gigantic proportions, with a diameter of that of the sun.” The vast amplification of light would allow a planet on the other side of the Sun to be imaged at stunning levels of detail.
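Where does that 550 AU figure come from? A light ray grazing the Sun at impact parameter b is deflected by 4GM/(c²b), so rays skimming the solar limb converge at a distance d = b²c²/4GM. A minimal calculation with standard constants recovers the number:

```python
# Focal distance of the solar gravitational lens for limb-grazing rays.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
C = 2.998e8          # speed of light, m/s
R_SUN = 6.957e8      # solar radius, m (minimum impact parameter)
AU = 1.496e11        # astronomical unit, m

focal_distance = R_SUN**2 * C**2 / (4 * G * M_SUN)
print(f"{focal_distance / AU:.0f} AU")  # ~548 AU
```

Rays passing farther from the limb focus farther out, which is why the focal region is a line extending beyond 550 AU rather than a single point.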
Image: This is Figure 1 from the just released paper on the SGL mission. Caption: A visualization of the key primary optical axes (POA) and the projected image plane of the exoplanet. The imaging spacecraft is the tiny element in front of the exoplanet image plane. Credit: Helvajian et al.
Let’s poke around a bit in “Mission Architecture to Reach and Operate at the Focal Region of the Solar Gravitational Lens,” just out in the Journal of Spacecraft and Rockets, which sets out the basics of how such a mission could be flown. Remember that this work has proceeded through the NASA Innovative Advanced Concepts (NIAC) office, with Phase I, II and now III studies resulting in the refinement of a design that can satisfy the requirements of the heliophysics decadal survey. JHU/APL’s Interstellar Probe takes aim at the same decadal, with both missions designed to return data relevant to our own star and, in SGL’s case, a more distant one.
Given that it has taken Voyager 1 well over 40 years to reach 159 AU, getting a payload to the gravitational lens region for operations there and beyond as the craft departs the Sun is a challenge. But the rewards would be great if it can be made to happen. The JPL work and a great deal of theoretical study prior to it have revealed that an optical telescope of no more than meter-class equipped with an internal coronagraph for blocking the Sun’s light would see light from the target exoplanet appearing in the form of an ‘Einstein ring’ surrounding the solar disk. High-resolution imagery of an exoplanet can be extracted from this data. We can also trade spatial for spectral resolution. From the paper:
The direct high-resolution images of an exoplanet obtained with the SGL could lead to insight on the on-going biological processes on the target exoplanet and find signs of habitability. By combining spatially resolved imaging with spectrally resolved spectroscopy, scientific questions such as the presence of atmospheric gases and its circulation could be addressed. With sufficient SNR and visible to mid-infrared (IR) sensing, the inspection of weak biosignatures in the form of secondary metabolic molecules like dimethyl-sulfide, isoprene, and solid-state transitions could also be probed in the atmosphere. Finally, the addition of polarimetry to the spatially and spectrally resolved signals could provide further insight such as atmospheric aerosols, dust, and, on the ground, properties of the regolith (i.e., minerals) and bacteria and fauna (i.e., homochirality)…
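To put the pacing problem mentioned above in numbers, here is a back-of-envelope comparison using the figures already cited (Voyager 1 at 159 AU after roughly 45 years) against the 20 AU/year exit velocity the SGL study baselines:

```python
# Voyager 1's average outbound rate versus the SGL mission baseline.
voyager_distance_au = 159    # as quoted in the text
voyager_years = 45           # approximate flight time since 1977
sgl_exit_au_per_year = 20    # baseline exit velocity in the trade study

voyager_rate = voyager_distance_au / voyager_years
print(f"Voyager 1: {voyager_rate:.1f} AU/yr")                          # ~3.5 AU/yr
print(f"Speed-up needed: {sgl_exit_au_per_year / voyager_rate:.1f}x")  # ~5.7x
```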
I won’t labor the issue, as we’ve discussed gravity lens imaging on many an occasion in these pages, but I did want to make the point about spectroscopy as a way of underlining the huge reward obtainable from a mission that can collect data at these distances. The paper is rich in detailing the progress of our thinking on this, but I turn to the mission architecture for today, offering as it does a remarkable new way to conceive of deep space missions both in terms of configuration and propulsion. For we’re dealing here with spacecraft that are modular, reconfigurable and highly adaptable using clusters of spacecraft that practice self-assembly during cruise.
The SGL mission is based on a constellation of identical craft, the primary components being what the authors call ‘proto-mission capable’ (pMC) spacecraft, with final ‘mission capable’ (MC) craft being built as the mission proceeds. Smaller pMC nanosats, in other words, dock during cruise to build an MC; five or perhaps six of the latter are assumed in the mission description in this paper to allow full capability during the observational period within the focal region of the gravity lens. The pMC craft use solar sails for a close pass by the Sun, all of them launched into a parking orbit before deployment toward the Sun. The sailcraft fly in formation following perihelion, dispose of their thermal shielding, then their sails, and begin assembly into MC spacecraft.
How to separate a final, fully functional MC craft into the constituent units from which it will be assembled in flight is no small issue, and bear in mind the need for extreme adaptability, especially as the craft reach the gravitational lensing region. Near-autonomous operations are demanded. The SGL study used simulations based on current engineering methodology (CEM) tools, modifying them as needed. The need for in-flight assembly stood out from the alternative. From the paper:
Two types of distributed functionality were explored: a fractionated spacecraft system that operates as an “organism” of free-flying units that distribute function (i.e., virtual vehicle) or a configuration that requires reassembly of the apportioned masses. Given that the science phase is the strong driver for power and propellant mass, the trade study also explored both a 7.5-year (to ~800 AU) and 12.5-year (to ~900 AU) science phase using a 20 AU/year exit velocity as the baseline. The distributed functionality approach that produced the lowest functional mass unit is a cluster of free-flying nanosatellites (i.e., pMC) each propelled by a solar sail but then assembled to form an MC spacecraft.
Out of all this what emerges is a pMC design with the capability of a 6U CubeSat nanosatellite, self-contained and three-axis stabilized, each of these units to carry a critical part of the larger MC spacecraft. Power and data are shared as the pMCs dock. The current design for the pMC is a round disk approximately 1 meter in diameter and 10 cm thick, with the assembled MC spacecraft visualized as stacked pMC units. One pMC would carry the primary and secondary mirrors, a second the science package, optical communications package and star tracker sensors, and so on. In-space assembly need not be rushed. The paper mentions a time period of several months as needed to complete the operation.
The 28-year cruise phase ends in the region of 550 AU, with two of the five or six MC spacecraft now maneuvering to track the primary optical axis of the exoplanet host star, which is the line connecting the center of the star to the center of the Sun. The host star is thus a key navigational resource which will be used to determine the precise position of the exoplanet under study. Interestingly, motion in the image plane has to be accounted for – this is due to the effect of the wobble of the Sun caused by gas giants in our Solar System. Such wobbles are hugely helpful for those using radial velocity methods to study planets around other stars. Here they become a complicating factor in extracting the data the mission will need to construct its exoplanet imagery.
The disposition of the spacecraft at 550 AU is likewise interesting. All of the MC spacecraft are, as the acronym makes clear, capable of conducting the mission. It now becomes necessary to subtract the Sun’s coronal light from the incoming data, which is accomplished by having one of the spacecraft follow an inertial path down the center of the spiral trajectory the other craft will follow (the other craft all move in a noninertial frame to make it possible to acquire the SGL photons). Having one craft on an inertial path means it sees no exoplanet photons, and thus its coronal image can be subtracted from the data gathered by the other four craft. The inertial path spacecraft also acts as a local reference frame that can be used for navigation.
Image: A meter-class telescope with a coronagraph to block solar light, placed in the strong interference region of the solar gravitational lens (SGL), is capable of imaging an exoplanet at a distance of up to 30 parsecs with a few 10 km-scale resolution on its surface. The picture shows results of a simulation of the effects of the SGL on an Earth-like exoplanet image. Left: original RGB color image with (1024×1024) pixels; center: image blurred by the SGL, sampled at an SNR of ~10³ per color channel, or overall SNR of 3×10³; right: the result of image deconvolution. Credit: Turyshev et al., “Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravity Lens Mission,” Final Report NASA Innovative Advanced Concepts Phase II.
The spacecraft are moving at more than 20 AU per year and have up to five years between 550 and 650 AU to lock onto the primary optical axis of the exoplanet host star. As the craft reach 650 AU, the optical axis of the host star becomes what the authors call a ‘navigational steppingstone’ toward locating the image of the exoplanet, which once acquired begins a science phase lasting in the area of ten years.
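That timeline is easy to sanity-check with back-of-the-envelope arithmetic (mine, not the paper's): at the stated cruise speed, the 100 AU between 550 and 650 AU does indeed allow roughly five years for axis acquisition.

```python
# Back-of-the-envelope check on the axis-acquisition window (my arithmetic).
cruise_speed_au_per_yr = 20.0    # "more than 20 AU per year"
window_au = 650.0 - 550.0        # distance available to lock onto the optical axis

window_years = window_au / cruise_speed_au_per_yr
print(f"Acquisition window: ~{window_years:.1f} years")  # ~5.0 years
```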
The details of image acquisition are themselves fascinating and, as you would imagine, complex – I send you to the paper for more. My focus today is the novelty of the architecture here. If we can assemble a mission-capable spacecraft (and indeed a small fleet of these) out of the smaller pMC units, we reduce the size of sail needed for the perihelion acceleration phase and make it possible to achieve payload sizes for missions far beyond the heliosphere that would not otherwise be possible. We build this out of a known base; in-space assembly and autonomous docking have been demonstrated, and technologies for assembly operations continue to be refined. NASA’s On-Orbit Autonomous Assembly from Nanosatellites project and the CubeSat Proximity Operations Demonstration mission are examples of this ongoing research.
What a long and winding path it is to extend the human presence via robotic probe ever further from our planet. This paper examines technologies needed to advance this movement, and again I point to the ongoing Interstellar Probe study at JHU/APL as another rich source for current and projected thinking about the needed technologies. In the case of the SGL mission, what is being proposed could have a major impact on the search for life elsewhere in the universe. Imagine a green and blue exoplanet seen with weather patterns, oceans, continents and rich spectral data on its atmosphere.
But I come back to that mission architecture and the idea of self-assembly. As the authors write:
We realize that this architecture fundamentally changes how space exploration could be conducted. One can imagine small- to medium-scale spacecraft on fast-traveling scouting missions on quick cadence cycles that are then followed by flagship-class space vehicles. The proposed mission architecture leverages a global technology base driven by miniaturization and integration, and other technologies that are coming into fruition, including composite materials based on hierarchical structures, edge-computing platforms, small-scale power generation, and storage. These advances have had an effect on the small spacecraft industry with the development of a worldwide CubeSat and nanosat ecosystem that have continually demonstrated increasing functionality in missions…
We’ll continue to track robotic self-assembly and autonomy issues with great interest. I’m convinced the concept opens up mission possibilities we’ve yet to imagine.
The paper is Helvajian, “Mission Architecture to Reach and Operate at the Focal Region of the Solar Gravitational Lens,” Journal of Spacecraft and Rockets, published online 1 February 2023 (full text). For earlier Centauri Dreams articles on the SGL mission, see JPL Work on a Gravitational Lensing Mission, Good News for a Gravitational Focus Mission and Solar Gravitational Lens: Sailcraft and In-Flight Assembly.
Kelvin Long’s new paper on the mission concept called Sunvoyager would deploy inertial confinement fusion, described in the last post, to drive a spacecraft to 1000 AU in less than four years. The number pulsates with possibilities: A craft like this would move at 325 AU per year, or roughly 1500 kilometers per second, ninety times the velocity of Voyager 1. This kind of capability, which Long thinks we may achieve late in this century, would open up all kinds of fast science missions to the outer planets, the Kuiper Belt, and even the inner Oort Cloud. And the conquest of inertial confinement methods would open the prospect for later, still faster missions to nearby stars.
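Those figures hang together, as a quick cross-check shows (my arithmetic, using standard values for the astronomical unit and the year):

```python
# Cross-check of Sunvoyager's quoted performance figures (my arithmetic).
AU_KM = 1.496e8         # kilometers per astronomical unit
YEAR_S = 3.156e7        # seconds per year (approx.)
VOYAGER_1_KM_S = 17.0   # Voyager 1's roughly 17 km/s

speed_km_s = 325 * AU_KM / YEAR_S   # 325 AU/yr expressed in km/s
cruise_years = 1000 / 325           # time to cover 1000 AU at that speed

print(f"~{speed_km_s:.0f} km/s, ~{speed_km_s / VOYAGER_1_KM_S:.0f}x Voyager 1, "
      f"~{cruise_years:.1f} yr to 1000 AU")
```

The result lands close to 1500 kilometers per second and a bit over three years to 1000 AU, consistent with the paper's round numbers.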
Sunvoyager draws on the heritage of the Daedalus starship, that daring design conceived by British Interplanetary Society members in the 1970s. But as we saw last time, inertial confinement fusion (ICF) was likewise examined in a concept called Vista, and one of the pleasures of this kind of research for a scholarly sort like me is digging out the history of ideas. In the Long paper that history runs through ICF work published in JBIS and IEEE venues in the 1980s and ’90s.
Vista itself appeared in the literature in the 1980s, drawing on this earlier and ongoing work, its conical shape a response to the potentially damaging neutron and x-ray flux ICF produces. Long emulates its form factor in the Sunvoyager design. I should also mention a NASA concept called Discovery II, which I hadn’t encountered until now: a spacecraft designed for a mission to the gas giants using a magnetic fusion engine. Both it and an early ICF design by Lawrence Livermore Laboratory’s Rod Hyde and colleagues in the 1970s would use an engine with a mass of 300 tons, the figure Long selected for the calculations in his Sunvoyager paper as he validated the HeliosX code using Vista as the template: “The current level of accuracy will suffice for making predictions for the expected design performance of the Sunvoyager probe.”
So what do we get as we downselect to achieve the Sunvoyager design? The image below shows the concept.
Image: This is Figure 8 in the paper. Caption: Concept design layout of Sunvoyager spacecraft configuration. Credit: Kelvin Long.
Notice the radiators, a critical part of the design, for we need to find a way to reduce waste heat. Long notes that for Vista, the radiation interaction with the structure was about 3 percent – in other words, the vehicle intercepts about that amount of the neutron and x-ray flux from the fusion reactions. He assumes a higher figure for Sunvoyager, although adding that using a mixture of deuterium and helium-3 as the fuel (Vista used a capsule of deuterium and tritium) would reduce these effects. The design also includes an annular radiation shield within the engine structure.
Long assumes the use of X-band frequencies for communications, transmitting at 8.4 GHz with a power output of 100 W, the signals to be received via the Deep Space Network’s 70-meter dishes. It’s interesting that he does not push for laser methods here, wisely so, I think, given the pointing problems we’ve discussed recently at deep space distances. Pushing data back to Earth from 1000 AU is daunting enough:
The expected data rate at 1000 AU will be 1 kbit/s. Backup medium- and low-gain antennas are also likely to be required. Note that radio signals from a distance of 1000 AU will take around 138 h to reach Earth receiving antennas, and so significant data latency should be expected. The high-gain antenna will be mounted on a rotatable fixing (rather than body mounted) and on a set of rigid extension poles so that it can always be pointed toward Earth, which avoids the need of having to rotate the entire spacecraft such as was performed for the Voyager 2 and New Horizons missions.
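That latency figure checks out against the light-travel time (my arithmetic, using the standard light-time per AU):

```python
# Light-travel-time check on the quoted ~138 h signal latency (my arithmetic).
LIGHT_S_PER_AU = 499.005   # seconds for light to cross one astronomical unit

latency_s = 1000 * LIGHT_S_PER_AU
latency_h = latency_s / 3600
print(f"One-way latency from 1000 AU: ~{latency_h:.1f} h")  # ~138.6 h
```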
The Sunvoyager interstellar precursor probe would be assembled in Earth orbit following multiple launch missions. The author likens building the craft to the construction of the International Space Station, noting that on the order of ten launch vehicles may be needed to get all the parts into the assembly orbit. Booster rockets, perhaps nuclear thermal, would be used to move the vehicle away from Earth at 17 kilometers per second (which happens to be Voyager 1 speed). This reaches twice the mean Earth-Moon distance in a day or so, at which point the fusion engine can be ignited. And here we go with ICF on our way to the outer Solar System:
A capsule is accelerated into the target chamber where the bank of laser beam lines can target it within the open reaction chamber to the point of thermonuclear ignition. A set of externally placed laser-focusing mirrors may be required to ensure a symmetric implosion. The plasma from the detonation will expand into the hemispherical target chamber, with the charged particles then directed by large magnetic fields internal to the chamber. These are then ejected for thrust generation while the next capsule is loaded onto the target ignition point. This occurs 10 times per second, although the hydrodynamic and nuclear phases of the ignition take place on microsecond and nanosecond time scales, respectively, so that in between each ignition there will still be around 10⁻¹ s of time for the loading of the next capsule while the plasma from the previous one is being ejected.
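The timing budget in that passage is easy to verify (my arithmetic, not from the paper): at 10 ignitions per second each cycle lasts a tenth of a second, and since the hydrodynamic and nuclear phases consume only microseconds and nanoseconds, essentially the full tenth of a second remains for loading the next capsule.

```python
# Timing budget for the 10 Hz ignition cycle (my arithmetic, not from the paper).
pulse_rate_hz = 10
cycle_s = 1 / pulse_rate_hz    # 0.1 s between ignitions
hydro_s = 1e-6                 # hydrodynamic phase: microsecond scale
nuclear_s = 1e-9               # nuclear burn: nanosecond scale

loading_budget_s = cycle_s - hydro_s - nuclear_s
print(f"Time left to load the next capsule: ~{loading_budget_s:.4f} s")
```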
The numbers on the ICF propulsion system for Sunvoyager are, shall we say, mind-boggling. Consider this: The mission needs 200 million fuel capsules, or 50 million per tank. This is, as the author comments, “no small undertaking,” a thought I can only echo. If we’re looking at constructing and flying a mission like this in, say, 50 years’ time, we may be able to assume advances in robotic automation and additive manufacturing, but we also have the problem of acquiring the needed fuel. You may recall that the Daedalus starship design was built around the notion of mining the gas giants for helium-3. That, in turn, assumes a Solar System infrastructure sufficient to make such mining feasible.
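Another way to grasp the scale (again my arithmetic): at 10 ignitions per second, 200 million capsules sustain a continuous burn of roughly 230 days.

```python
# How long 200 million capsules last at 10 ignitions per second (my arithmetic).
capsules = 200_000_000
pulse_rate_hz = 10
tanks = 4                           # implied by 50 million capsules per tank

burn_s = capsules / pulse_rate_hz   # total seconds of engine burn
burn_days = burn_s / 86_400

print(f"{capsules // tanks:,} capsules per tank; ~{burn_days:.0f} days of burn")
```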
Image: This is the paper’s Figure 12. Caption: Concept design configuration (side view) of Sunvoyager spacecraft. Credit: Kelvin Long.
I like the sheer daring of concepts like Daedalus and Sunvoyager. Remember that when those frisky BIS engineers put Daedalus together, they worked at a time when reaching another star by any means was largely considered impossible. Daedalus seemed impossible to build (it still does), but it violated no laws of physics and thus became a vast engineering problem. The point wasn’t whether building it would bankrupt the planet; the point was that if we did decide to build it, nothing in physics would prevent it from working. Assuming, of course, that we did conquer ICF for propulsion.
In other words (and Robert Forward would hammer this home again and again in talks and in papers), interstellar flight was not science fictional dreaming but a matter of reaching the appropriate level of engineering, which one day we might very well do. A mission design like Sunvoyager reminds us that we can stretch our thinking based on what we have today to make wise decisions about how and where we invest in the needed technologies. We gain scientific knowledge in doing this and we also rough out the roadmap that points to still further missions that one day reach another star.
Image: The extraordinary Robert Forward, wearing one of the trademark vests created by his wife Martha. Forward chose this photograph to appear on his own Web site.
So I think Kelvin Long is spot on in his assessment of what he does here:
Additional studies will be required to further develop the design configuration and specification for the Sunvoyager mission proposal so that it can be matured to the point of a credible mission in the coming decades to include a subsystem-level definition. However, the calculations presented in this paper show promise for what may be possible in the future provided that investments into ICF ignition physics are continued and then the applications of this technology pursued with vigor.
I think Bob Forward would have liked this paper. And because I haven’t quoted his famous lines (from JBIS in 1996) in their entirety since 2005, let me do so here. He’s looking into a future when we go from interstellar precursors into actual interstellar crossings to places like Proxima Centauri, and he sees the process:
Travel to the stars will be difficult and expensive. It will take decades of time, gigawatts of power, kilograms of energy and trillions of dollars. Recently, however, some new technologies have emerged and are under development for other purposes, that show promise of providing propulsion systems that will make interstellar travel feasible within the foreseeable future — if the world community decides to direct its energies and resources in that direction. Make no mistake — interstellar travel will always be difficult and expensive, but it can no longer be considered impossible.
The paper is Long, “Sunvoyager: Interstellar Precursor Probe Mission Concept Driven by Inertial Confinement Fusion Propulsion,” Journal of Spacecraft and Rockets 2 January 2023 (full text).