Let’s talk about fusion fuels in relation to the recent discussion of building a spacecraft engine. A direct fusion drive (DFD) system using magnetic mirror technologies is, as we saw last time, being investigated at the University of Maryland in its Centrifugal Mirror Fusion Experiment (CMFX), as an offshoot of the effort to produce fusion for terrestrial purposes. The initial concept being developed at CMFX is to introduce a radial electric field into the magnetic mirror system. This enhances centrifugal confinement of the plasma in a system using deuterium and tritium as fusion fuel.
Out of this we get power but not thrust. However, both UMD’s Jerry Carson and colleague Tom Bone told the Interstellar Research Group’s Montreal gathering that such a reactor coupled with a reservoir of warm plasma offers prospects for in-space propulsion. Alpha particles (these are helium nuclei produced in the fusion reaction) may stay in the reactor, further energizing the fuel, or they can move upstream, to be converted into electricity by a Standing Wave Direct Energy Converter (SWDEC). A third alternative: They may move downstream to mix with the warm plasma, producing thrust as the plasma expands within a magnetic nozzle.
Image: The fusion propulsion system as shown in Jerry Carson’s presentation at IRG Montreal. Thanks to Dr. Carson for passing along the slides.
We also know that fusion fuel options carry their own pluses and minuses. We can turn to deuterium/deuterium reactions (D/D) at the expense of neutron production, something we have to watch carefully if we are talking about powering up a manned spacecraft. The deuterium/tritium reaction (D/T) produces even more neutron flux, while deuterium/helium-3 (D/He3) loses most of the neutron output but demands helium-3 in abundances we only find off-planet. Tom Bone’s presentation at Montreal turned the discussion in a new direction. What about hydrogen and boron?
Here the nomenclature is p-11B, or proton-boron-11, where a hydrogen nucleus (p) collides with a boron-11 nucleus in a reaction that is aneutronic and produces three alpha particles. The downside is that this kind of fusion demands temperatures even higher than D/He3, a challenge to our current confinement and heating technologies. A second disadvantage is the production of bremsstrahlung radiation, which Bone told the Montreal audience was of the same magnitude as the charged particle production.
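The energetics of that reaction are easy to check from published atomic masses. Here is a back-of-the-envelope sketch (the mass values are standard tabulated figures, and the calculation is my own illustration, not drawn from Bone's presentation):

```python
# Back-of-the-envelope Q-value for p + 11B -> 3 alpha (aneutronic).
# Atomic masses in unified atomic mass units (standard tabulated values).
M_H1  = 1.007825   # hydrogen-1
M_B11 = 11.009305  # boron-11
M_HE4 = 4.002602   # helium-4
U_TO_MEV = 931.494  # energy equivalent of 1 u in MeV

mass_defect = (M_H1 + M_B11) - 3 * M_HE4   # mass converted to energy, in u
q_value = mass_defect * U_TO_MEV           # in MeV

print(f"Q = {q_value:.2f} MeV")  # roughly 8.7 MeV, shared among the three alphas
```

That energy emerges entirely as charged alpha particles, which is exactly what makes the reaction attractive for direct energy conversion and thrust.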
The German word ‘bremsen’ means ‘to brake,’ hence ‘bremsstrahlung’ means ‘braking radiation,’ a reference to the X-ray radiation produced by a charged particle when it is decelerated by its encounter with atomic nuclei. So p-11B becomes even more problematic as a fuel, given the fact that boron has five electrons, creating a fusion plasma that is a lively place indeed. Bone’s notion is to take this otherwise crippling drawback and turn it to our advantage by converting some of the bremsstrahlung radiation into usable electricity. To do this, it will be necessary to absorb the radiation to produce heat.
Bone’s work at UMD focuses on thermal energy conversion using what is called a thermionic energy converter (TEC), which can convert heat directly into electricity. He pointed out that TECs are a good choice for space applications because they offer low maintenance and low mass coupled with high levels of efficiency. TECs operate off the thermionic emission that occurs when an electron can escape a heated material, a process Bone likened to ‘boiling off’ the electron. An emitter and collector in the TEC thus absorb the heat from the bremsstrahlung radiation to produce electricity.
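The physics behind that 'boiling off' is the Richardson-Dushman law for thermionic emission. A minimal sketch follows; the emitter work function and temperature below are illustrative assumptions of my own, not values from the presentation:

```python
import math

# Richardson-Dushman law: emitted current density J = A * T^2 * exp(-W / (k_B * T))
A_RICHARDSON = 1.20173e6   # A m^-2 K^-2, theoretical Richardson constant
K_B_EV = 8.617333e-5       # Boltzmann constant in eV/K

def emission_current_density(work_function_ev, temp_k):
    """Ideal thermionic emission current density in A/m^2."""
    return A_RICHARDSON * temp_k**2 * math.exp(-work_function_ev / (K_B_EV * temp_k))

# Illustrative emitter: a low work function of 2.5 eV at 1800 K (assumed values).
j = emission_current_density(2.5, 1800.0)
print(f"J ~ {j:.3e} A/m^2")
```

The exponential dependence on temperature is why a TEC wants a hot emitter and a low work function: small gains in either translate into large gains in current.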
Image: A screenshot from Dr. Bone’s presentation in Montreal.
I don’t want to get any deeper in the weeds here and will send you to Bone’s presentation for the details on the possibilities in TEC design, including putting the TEC emitter and collector in tight proximity with the air pumped out between them (a ‘vacuum TEC’) and putting an ionized vapor between the two (a ‘vapor TEC’). But Bone is upfront about the preliminary nature of this work. The objective at this early stage is to create a basic analytical model for p-11B fuel in a propulsion system using TECs to convert radiation into electricity, with the accompanying calculations to balance power and efficiency and find the lowest bremsstrahlung production for a given power setting.
The scope of needed future work on this is large. What exactly is the best ratio of hydrogen to boron in this scenario, for one thing, and how can the electric and magnetic field levels needed to light this kind of fusion be reduced? “It’s not an easy engineering problem,” Bone added. “It’s certainly not a near-term challenge to solve.”
True enough, but it’s clear that we should be pushing into every aspect of fusion as we learn more about confining these reactions in an in-space engine. Experimenting with alternate fusion fuels has to be part of the process, work that will doubtless continue even as we push forward on the far more tractable issues of deuterium/tritium.
I want to drop back to fusion propulsion at this point, as it bears upon the question of a Solar System-wide infrastructure that we looked at last time. We know that even chemical propulsion is sufficient to get to Mars, but clearly, reducing travel times is critical if for no other reason than crew health. That likely puts the nuclear thermal concept into play, as we have experience in the development of the technology as far back as NERVA (Nuclear Engine for Rocket Vehicle Application), and this fission-based method shows clear advantages over chemical means in terms of travel times.
It’s equally clear, though, that for missions deep into the Solar System and beyond, the high specific impulse (Isp) enabled by a theoretical direct fusion drive sets the standard we’d like to meet. In his presentation at the Interstellar Research Group’s Montreal symposium, Jerry Carson discussed the ongoing work at the University of Maryland on creating fusion conditions using deuterium/deuterium (D/D) and deuterium/tritium (D/T) fuel with centrifugal mirror confinement. D/T fusion will likely drive our first fusion engines, but its higher neutron flux will spotlight the advantages of helium-3 when the latter becomes widely available, as shielding the crew on a fusion-powered spacecraft will be a critical factor.
Image: The Centrifugal Mirror Fusion Experiment at the University of Maryland at Baltimore (principal investigator Carlos Romero-Talamás, University of Maryland, Baltimore County). The plan is to achieve fusion conditions (D/D) by 2025. Credit: UMD.
Let’s dig into the centrifugal mirror (CM) concept. The beauty of plasma is that it is electrically conductive, and hence manageable by magnetic and electric fields. Hall thrusters use plasma (though not fusion!), as do concepts like Ad Astra’s VASIMR (Variable Specific Impulse Magnetoplasma Rocket). In a centrifugal mirror, the notion is to confine, compress and heat the plasma as it is spun within a fusion chamber, as opposed to the perhaps more familiar compression methods of inertial fusion, or the magnetic field structures within tokamaks. Carson argues that the CM makes for a more compact reactor and greatly reduces radiation and momentum loss.
The Maryland work implements this effect using magnetic ‘mirrors’ to create the rapid spin that imposes radial and axial forces on the plasma, confining it into a ‘well’ where fusion can be attained. The fuel bounces back and forth along the lines of force between the two magnets, a method first explored in the 1950s, when research indicated that mirrors of this kind are leaky and cannot maintain the plasma long enough to ignite fusion. Carson said that it is the addition of an electric field via a central electrode in the UMD design that spins the ‘doughnut’ around its axis, so that the plasma is held in place both axially and radially. The basic diagram is below.
Image: Centrifugal mirror confinement of a high energy plasma. Credit: UMD.
The ongoing work at Maryland grows out of an experimental effort in the 2000s that has led to the current Centrifugal Mirror Fusion Experiment (CMFX). The latter is designed with terrestrial power generation in mind, so we are talking about adapting a power-generating technology into a spacecraft drive. To do that, we fire up a centrifugal mirror fusion reactor in tandem with warm plasma (likely a reservoir of hydrogen, though other gases are possible), so that high-energy fusion products escape the reactor downstream and deposit their energy in the plasma, causing it to expand as it passes through a magnetic nozzle to produce thrust. The reactor also uses energy leaving the upstream mirror to continue its own operations.
A direct fusion drive of this kind could, Carson said, make the round trip to Mars in 3 months, and reach Saturn in less than three years, a sharp contrast to nuclear electric methods. Even nuclear thermal methods would take over a year to make the Mars mission. Looking further out, the Uranus Orbiter and Probe (UOP), which is being considered as a flagship mission for the upcoming Decadal Survey, would make for a 12-year journey using chemical propulsion and a gravitational assist at Jupiter, while DFD-CM in these specs could do a considerably larger mission to more distant Neptune in as little as 3 years. A second-generation Interstellar Probe (50 years to the heliopause in the NASA concept) could reach 1000 AU in 30-35 years using DFD-CM.
We’re not quite through with the University of Maryland, because Carson’s colleague Tom Bone has been analyzing a unique way to take advantage of otherwise problematic bremsstrahlung radiation, which complicates operations with various kinds of fusion fuels. I’ll run through that work in the next post. Turning this challenging radiation into usable energy is conceivably a possibility, but requires fuel other than the deuterium/tritium combination examined for the DFD-CM drive. Bone’s choice is intriguing, to say the least, but more about this next time.
The question of infrastructure haunts the quest to achieve interstellar flight. I’ve always believed that we will develop deep space capabilities not only for research and commerce but also as a means of defense, ensuring that we will be able to change the trajectories of potentially dangerous objects. But consider the recent Breakthrough Starshot discussion. There I noted that we might balance the images we could receive through Starshot’s sails with those we could produce through telescopes at the Sun’s gravitational focus.
Setting aside the infrastructure issue, it would be a simple thing to go with JPL’s Solar Gravitational Lens concept since the target, somewhere around 600 AU, is so much closer, and could produce perhaps even better imagery. But let’s consider Starshot’s huge photon engine in the Atacama desert not as a one-shot enabler for Proxima Centauri, but as a practical tool that, once built, will allow all kinds of fast missions within the Solar System. The financial outlay supports Oort Cloud exploration, fast access to the heliopause and nearby interstellar space, and planetary missions of all kinds. Add atmospheric braking and we can consider it as a supply chain as well.
Robert Freeland, who has labored mightily in the Project Icarus Firefly design, told the Interstellar Research Group’s recent meeting in Montreal about work he is doing within the context of the British Interplanetary Society’s BIS SPACE project, whose goal is to consider the economic drivers, resources, transportation issues and future population growth that would drive an interplanetary economy. That Solar System-wide infrastructure in turn feeds interstellar capabilities, as it generates new technologies that funnel into propulsion concepts. A case in point: In-space fusion.
To make our engines go, we need fuel, an obvious point and a telling one, since the kind of fusion Freeland has been studying for the Firefly design is limited by our current inability to extract enough helium-3 to use aboard an interstellar craft. Firefly would use Z-pinch fusion, a way of confining plasma and compressing it. An electrical current fed into the plasma generates the magnetic fields that ‘pinch,’ or compress, the plasma, creating the high temperatures and pressures that can produce fusion.
I was glad to see Freeland’s slides on the fusion fuel possibilities, a helpful refresher. The easiest fusion reaction, if anything about fusion can be called ‘easy,’ is that of deuterium with tritium, with the caveat that this reaction produces most of its energy in neutrons that cannot produce thrust. The reaction of deuterium with helium-3, by contrast, releases primarily charged particles that can be shaped into thrust, which is why D/He3 fusion was chosen by the Daedalus team for their gigantic starship design back in the 1970s. Along with that choice came the need to find the helium-3 to fuel the craft. The Daedalus team, ever imaginative, contemplated mining the atmospheres of the gas giants, where He3 can be found in abundance.
The lack of He-3 caused Icarus to choose a pure deuterium fuel (DD). Freeland ran through the problems with DD, noting the abundance of produced neutrons and the gamma rays that result from shielding these fast neutrons. The reaction also produces so-called bremsstrahlung radiation, which emerges in the form of x-rays. Thus the Firefly design stripped down what would otherwise be a significant portion of its mass in shielding by going to what Freeland calls ‘distance shielding,’ meaning minimal structure that allows the radiation to escape into space.
A starship using deuterium and helium-3 minimizes the neutron radiation, so the question becomes, when do we close the gap in our space capabilities to the point that we can extract helium-3 in the quantities needed from planets like Uranus? I see BIS SPACE as seeking to probe what the Daedalus team described as a Solar System-wide economy, and to put some numbers to the question of when this capability would evolve. The question is given point in terms of interstellar probes because while Firefly had been conceived as a starship that could launch before 2100, it seemed likely that helium-3 simply wouldn’t be available in sufficient quantities. So when would it be?
To create an infrastructure off-planet, we’ll need human migration outward, beginning most likely with orbital habitats not far from Earth – think of the orbital environments conceived by Gerard O’Neill, with their access to the abundant resources of the inner system. Freeland imagines future population growth moving further out over the course of the next 20,000 years until the Solar System is fully exploited. In four waves of expansion, he sees the era of chemical and ion rocketry, and perhaps beamed propulsion, extending to about 2050, with the second generation largely using fission-powered craft, in a phase ending in about 2200. The period from 2200 to 2500 taps fusion energies (DD), while the entire Solar System is populated after 2500, with mining of the gas giants possible.
Let’s pause for a moment on the human population’s growth, because the trends noted in the image below, although widely circulated, seem not to be widely known. We’re looking here at the growth rate of our species and its acceleration followed by its long decline. As Freeland pointed out, the UN expects world population to peak at between 10 and 12 billion perhaps before the end of this century. After that, increase in the population is by no means assured. So much for the scenario that we have to go off-planet because we will simply overwhelm resources here with our numbers.
Image: In both this and the image below I am drawing from Freeland’s slides.
You would think this Malthusian notion would have long ago been discredited, but it is surprisingly robust. Even so, orbital habitats near Earth can potentially re-create basic Earth-like conditions while exploiting material resources in great abundance and solar power, with easy access to space for moving the wave of innovation further out. BIS SPACE looks with renewed interest at these O’Neill habitats in its first wave of papers.
The larger scenario plays out as follows: In the second half of our century, we move development largely to high Earth orbit, with materials drawn mostly from the Moon, using transport of goods by nuclear-powered cargo ships. The third generation creates orbital habitats at all the inner planets (and Ceres) and perhaps near-Earth asteroids using DD fusion propulsion, while the fourth generation takes in the outer planets and their moons. At this point we can set up the kind of aerostat mining rigs in the upper gas giant atmospheres that would enable the collection of helium-3. Here again we have to make comparisons with other technologies. Where will beamed spacecraft capabilities be by the time we are actively mining He-3 in the outer Solar System?
I’ve simplified the expansion scenario greatly, and send you to Freeland’s slides for the details. But I want to circle back to Firefly. Using DD fusion, Firefly’s radiator and coolant requirements are extreme (480 tonnes of beryllium coolant!). But move to the deuterium/helium-3 reaction and you drop radiation output by 75 percent while increasing exhaust velocity. Beryllium can be replaced with less expensive aluminum and the physical size of the vessel is greatly reduced. This version of Firefly gets to Alpha Centauri in the same time using 1/5th the fuel and 1/12th the coolant.
In other words, the sooner we can build the infrastructure allowing us to mine the critical helium-3, the sooner we can drop the costs of interstellar missions and expand their capabilities using fusion engines. If such a scenario plays out, it will be fascinating to see how the population growth curves for the entire Solar System track given access to abundant new resources and the technologies to exploit them. If we can imagine a Solar System-wide human population in the range of 100 billion, we can also imagine the growth of new propulsion concepts to power colonization outside the system.
If we’re going to get to the stars, the path along the way has to go through an effort like Breakthrough Starshot. This is not to say that Breakthrough will achieve an interstellar mission, though its aspirational goal of reaching a nearby star like Proxima Centauri with a flight time of 20 years is one that takes the breath away. But aspirations are just that, and the point is, we need them, no matter how far-fetched they seem, to drive our ambition, sharpen our perspective and widen our analysis. Whether we achieve them in their initial formulation cannot be known until we try.
So let’s talk for a minute about what Starshot is and isn’t. It is not an attempt to use existing technologies to begin building a starship today. Yes, metal is being bent, but in laboratory experiments and simulated environments. No, rather than a construction project, Starshot is about clarifying where we are now, and projecting where we can expect to be within a reasonable time frame. In its early stages, it is about identifying the science issues that would enable us to use laser beaming to light up a sail and push it toward another star with prospects of a solid data return. Starshot’s Harry Atwater (Caltech) told the Interstellar Research Group in Montreal that it is about development and definition. Develop the physics, define and grow the design concepts, and nurture a scientific community. These are the necessary and current preliminaries.
Image: The cover image of a Starshot paper illustrating Harry Atwater’s “Materials Challenges for the Starshot Lightsail,” Nature Materials 17 (2018), 861-867.
We’re talking about what could be a decades-long effort here, one that has already achieved a singular advance in interstellar studies. I don’t have the current count on how many papers have been spawned by this effort, but we can contrast the ongoing work of Starshot’s technical teams with where interstellar studies was just 25 years ago, when few scientific conferences dealt with interstellar ideas and exoplanets were still a field in their infancy. In terms of bringing focus to the issue, Starshot is sui generis.
It is also an organic effort. Starshot will assess its development as it goes, and the more feasible its answers, the more it will grow. I think that learning more about sail possibilities will spawn renewed effort in other areas, and I see the recent growth of fusion rocketry concepts as a demonstration that our field is attaining critical mass not only in the research labs and academy but in commercial space ventures as well.
So let’s add to Atwater’s statement that Starshot is also a cultural phenomenon. Although its technical meetings are anything but media fodder, their quiet work keeps the idea of an interstellar crossing in the public mind as a kind of background musical riff. Yes, we’re thinking about this. We’ve got ideas and lab experiments that point to new directions. We’re learning things about lightsails and beaming we didn’t know before. And yes, it’s a big universe, with approximately one planet per star on average, and we’ve got one outstanding example of a habitable zone planet right next door.
So might Starshot’s proponents say to themselves, although I have no idea how many of those participating in the effort back out sometimes to see that broader picture (I suspect quite a few, based on those I know, but I can’t speak for everyone). But because Starshot has not sought the kind of publicity that our media-crazed age demands, I want to send you to Atwater’s video presentation at Montreal to get caught up on where things stand. I doubt we’re ever going to fly the mission Starshot originally conceived because of cost and sheer scale, but I’m only an outsider looking in. I do think that when the first interstellar mission flies, it will draw heavily on Starshot’s work. And this will be true no matter what final choices emerge as to propulsion.
This is a highly technical talk compressed into an all too short 40 minutes, but let’s just go deep on one aspect of it, the discussion of the lightsail that would be accelerated to 20 percent of lightspeed for the interstellar crossing. Atwater’s charts are worth seeing, especially the background on what the sail team’s meetings have produced in terms of their work on sail materials and, especially, sail shape and stability. The sail is a structure approximately 4 meters in diameter, with a communications aperture 1 meter in size, as seen in the center of the image (2 on the figure). Surrounding it on the circular surface are image sensors (6) and thin-film radioisotope power cells (5).
Maneuvering LEDs (4) provide attitude control, and thin-film magnetometers (7) are in the central disk, with power and data buses (8) also illustrated. A key component: A laser reflector layer positioned between the instruments that are located on the lightsail and the lightsail itself, which is formed as a silicon nitride metagrating. As Atwater covers early in his presentation, the metagrating is crucial for attitude control and beam-riding, keeping the sail from slipping off the beam even though it is flat. The layering is crucial in protecting the sailcraft instrumentation during the acceleration stage, when it is fully illuminated by the laser from the ground.
How to design lensless transmitters and imaging apertures? Atwater said that lensless color cameras and steerable phased array communication apertures are being prototyped in the laboratory now using phased arrays with electro-optic materials. Working one-dimensional devices for beam steering and electronic focusing of beams have emerged from this early work. The laser reflector layer offers the requisite high reflectivity at the laser wavelength being considered, using a hybrid design with silicon nitride and molybdenum disulfide to minimize absorption that would heat the sail.
I won’t walk us through all of the Starshot design concepts at this kind of detail, but rather send you to Atwater’s presentation, which shows the beam-riding lightsail structure and its current laboratory iterations. The discussion of power sources is particularly interesting given the thin-film lightweight structures involved, and as shown in the image below, it involves radioisotope thermoelectric generators actually integrated into the sail surface. Thin film batteries and fuel cells were considered by Breakthrough’s power working group but rejected in favor of this RTG design.
So much is going on here in terms of the selection of sail materials and the analysis of its shape, but I’ll also send you to Atwater’s presentation with a recommendation to linger over his discussion of the photon engine, that vast installation needed to produce the beam that would make the interstellar mission happen. The concept in its entirety is breathtaking. The photon engine is currently envisioned as an array of 1,767,146 panels consisting of 706,858,400 individual tiles (Atwater dryly described this as “a large number of tiles”), producing the 200 GW output and covering 3 kilometers on the ground. The communications problem for data return is managed by scalable large-area ground receiver arrays, another area where Breakthrough is examining cost trends that within the decades contemplated for the project will drive component expenses sharply down. The project depends upon these economic outcomes.
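Those panel and tile counts invite a quick sanity check. The sketch below is simple division on the figures quoted above; the per-tile power figure is my own arithmetic, not a Starshot specification:

```python
# Sanity-check the photon engine numbers quoted above.
panels = 1_767_146
tiles = 706_858_400
total_power_w = 200e9  # 200 GW

tiles_per_panel = tiles / panels
power_per_tile_w = total_power_w / tiles

print(f"tiles per panel: {tiles_per_panel:.0f}")    # an even 400
print(f"power per tile: {power_per_tile_w:.0f} W")  # a few hundred watts
```

The numbers are internally consistent: 400 tiles per panel, with each tile contributing a modest few hundred watts to the beam. The engineering challenge is coherence and cost at scale, not raw power per element.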
Image: What we would see if we had a Starshot-class sailcraft approaching the Earth, from the image at two hours away to within five minutes of its approach. Credit for this and the two earlier images: Harry Atwater/Breakthrough Starshot.
Using a laser-beamed sail technology to reach the nearest stars may be the fastest way to get images like those above. The prospect of studying a planet like Proxima b at this level of detail is enticing, but how far can we count on economic projections to bring costs down to the even remotely foreseeable range? We also have to factor in the possibility of getting still better images from a mission to the solar gravitational lens (much closer) of the kind currently being developed at the Jet Propulsion Laboratory.
Economic feasibility is inescapably part of the Starshot project, and is clearly one of the fundamental issues it was designed to address. I return to my initial point. Identifying the principles involved and defining the best concepts to drive design both now and in the future is the work of a growing scientific community, which the Starshot effort continues to energize. That in itself is no small achievement.
It is, in fact, a key building block in the scientific edifice that will define the best options for achieving the interstellar dream. And while this is not the place to go into the complexities of scientific funding, suffice it to say that putting out the cash to enable these continuing studies is a catalytic gift to a field that has always struggled for traction both financial and philosophical. The Starshot initiative has a foundational role in defining the best technologies for interstellar flight that will lead one day to its realization.
One of the great problems of lightsail concepts for interstellar flight is the need to decelerate. Here I’m using lightsail as opposed to ‘solar sail’ in the emerging consensus that a solar sail is one that reflects light from our star, and is thus usable within the Solar System out to about 5 AU, where we deal with the diminishment of photon pressure with distance. Or we could use the Sun with a close solar pass to sling a solar sail outbound on an interstellar trajectory, acknowledging that once our trajectory has been altered and cruise velocity obtained, we might as well stow the now useless sail. Perhaps we could use it for shielding in the interstellar medium or some such.
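That diminishment is just the inverse-square law applied to photon pressure. A quick sketch, assuming a perfectly reflecting sail (the 5 AU 'useful range' above is a rule of thumb, not a hard boundary):

```python
# Solar radiation pressure on a reflecting sail vs. distance from the Sun.
SOLAR_CONSTANT = 1361.0   # W/m^2 at 1 AU
C = 2.998e8               # speed of light, m/s

def sail_pressure_pa(distance_au, reflectivity=1.0):
    """Photon pressure in pascals; a perfect reflector doubles momentum transfer."""
    flux = SOLAR_CONSTANT / distance_au**2
    return (1.0 + reflectivity) * flux / C

p1 = sail_pressure_pa(1.0)   # ~9 micropascals at Earth's distance
p5 = sail_pressure_pa(5.0)   # 25x weaker out near Jupiter's orbit
print(f"{p1*1e6:.1f} uPa at 1 AU, {p5*1e6:.2f} uPa at 5 AU")
```

Micropascals at best, falling off as 1/r²: hence the appeal of the close solar pass, where the pressure climbs steeply, and of beamed power, which removes the distance dependence altogether.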
A lightsail in today’s parlance defines a sail that is assumed to work with a beamed power source, as with the laser array envisioned by Breakthrough Starshot. With such an array, whether on Earth or in space, we can forgo the perihelion pass and simply bring our beam to bear on the sail, reaching much higher velocities. Of the various materials suggested for sails in recent times, graphene and aerographite have emerged as prime candidates, both under discussion at the recent Montreal symposium of the Interstellar Research Group. And that problem of deceleration remains.
Is a flyby sufficient when the target is not a nearby planet but a distant star? We accepted flybys of the gas giants as part of the Voyager package because we had never seen these worlds close up, and were rewarded with images and data that were huge steps forward in our understanding of the local planetary environment. But an interstellar flyby is challenging because at the speeds we need to reach to make the crossing in a reasonable amount of time, we would blow through our destination system in a matter of hours, and past any planet of interest in perhaps a matter of minutes.
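The 'hours and minutes' claim is easy to verify at Starshot's 0.2 c cruise speed. A quick sketch (the system width and encounter window below are illustrative scales I've chosen, not mission figures):

```python
AU_M = 1.496e11   # astronomical unit in meters
C = 2.998e8       # speed of light, m/s
v = 0.2 * C       # Starshot-class cruise velocity

def crossing_time_hours(distance_au):
    """Time to traverse a given distance at 0.2 c, in hours."""
    return distance_au * AU_M / v / 3600.0

# Crossing a Neptune-scale system (~60 AU) vs. a close planetary encounter (~0.1 AU).
print(f"60 AU system: {crossing_time_hours(60):.1f} hours")
print(f"0.1 AU encounter: {crossing_time_hours(0.1) * 60:.1f} minutes")
```

Under two days for the whole system, a handful of minutes for the planet itself: every observation has to be planned, autonomous, and right the first time.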
Robert Forward’s ingenious ‘staged’ lightsail got around the problem by using an Earth-based laser to illuminate one part of the now separated sail ring, beaming that energy back to the trailing part of the sail affixed to the payload and allowing it to decelerate. Similar contortions could divide the sail again to make it possible to establish a return trajectory to Earth once exploration of the distant stellar system was complete. We can also consider using magsail concepts to decelerate, or perhaps the incident light from a bright target star could allow sufficient energy to brake against.
Image: Forward’s lightsail separating at the beginning of its deceleration phase. Laser sailing may turn out to be the best way to the stars, provided we can work out the enormous technical challenges of managing the outbound beam. Or will we master fusion first? Credit: R.L. Forward.
But time is ever a factor: you want to reach your target quickly, yet if you approach it too fast, you cannot generate the needed deceleration. Moreover, what is your target? A bright star gives you options for deceleration if you approach at high velocity that are lacking from, say, a red dwarf star like Proxima Centauri, where the closest terrestrial-class world we know is in what appears to be a habitable zone orbit. In Montreal, René Heller (Max Planck Institute for Solar System Research), a familiar name in these pages, laid out the equations for a concept he has been developing for several years, a mission that could use not only the light of Proxima itself but also that of Centauri A and B to create a deceleration opportunity. You can follow Heller’s presentation at Montreal here.
Remember what we’re dealing with here. We have two stars in the central binary, Centauri A (G-class) and Centauri B (K-class), with the M-class dwarf Proxima Centauri about 13000 AU distant. Centauri A and B are close – their distance as they orbit around a common barycenter varies from 35.6 AU to 11.2 AU. These are distances in Solar System range, meaning that 35.6 AU is roughly the orbit of Neptune, while 11.2 AU is close to Saturn distance. Interesting visual effects in the skies of any planet there.
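For a sense of scale, it's worth converting that 13000 AU separation into more familiar units; a one-line sketch:

```python
AU_M = 1.496e11   # astronomical unit, meters
LY_M = 9.461e15   # light year, meters

proxima_sep_ly = 13000 * AU_M / LY_M
print(f"Proxima sits ~{proxima_sep_ly:.2f} light years from the A/B pair")
```

Roughly a fifth of a light year: close enough to be gravitationally bound to the central pair, but far enough that the outbound leg from Centauri B to Proxima dominates the mission timeline in Heller's scenario.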
Image: Orbital plot of Proxima Centauri showing its position with respect to Alpha Centauri over the coming millennia (graduations are in thousands of years). The large number of background stars is due to the fact that Proxima Cen is located very close to the plane of the Milky Way. Proxima’s orbital relation to the central stars becomes profoundly important in the calculations Heller and team make here. Credit: P. Kervella (CNRS/U. of Chile/Observatoire de Paris/LESIA), ESO/Digitized Sky Survey 2, D. De Martin/M. Zamani.
Using a target star for deceleration by braking against incident photons has been studied extensively, especially in recent years by the Breakthrough Starshot team, where the question of how its tiny sailcraft could slow from 20 percent of the speed of light to allow longer time at target is obviously significant. Deceleration into a bound orbit at Proxima would, of course, be ideal, but it turns out to be impossible given the faint photon pressure Proxima can produce. Investing decades of research and 20 years of travel time is hardly efficient if time in the system is measured in minutes.
In fact, to use photon pressure from Proxima Centauri, whose luminosity is 0.0017 that of the Sun, would require approaching the star so slowly to decelerate into a bound orbit that the journey would take thousands of years. Hence Heller’s notion of using the combined photon pressure and gravitational influences of Centauri A and B to work deceleration through a carefully chosen trajectory. In other words, approach A, begin deceleration, move to B and repeat, then emerge on course outbound to Proxima, where you’re now slow enough to use its own photons to enter the system and stay.
Working with Michael Hippke (Max Planck Institute for Solar System Research, Göttingen) and Pierre Kervella (CNRS/Universidad de Chile), Heller has refined the maximum speed that can be achieved on the approach into Alpha Centauri A to make all this happen: 16900 kilometers per second. If we launch in 2035, we arrive at Centauri A in 2092, with arrival at Centauri B roughly six days later and, finally, arrival at Proxima Centauri for operations there in a further 46 years. That launch time is not arbitrary. Heller chose 2035 because he needs Centauri A and B to be in precise alignment to allow the gravitational and photon braking effects to work their magic.
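That speed cap converts directly into a cruise fraction of lightspeed; a one-line check:

```python
C_KM_S = 299792.458   # speed of light in km/s
v_max = 16900.0       # maximum approach speed into Centauri A, km/s

fraction = v_max / C_KM_S
print(f"{fraction:.3%} of lightspeed")   # about 5.6%
```

That 5.6 percent figure is the price of admission for the photogravitational assist: any faster and the combined photon pressure and gravity of Centauri A and B cannot shed enough velocity on the way through.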
So we have backed away from Starshot’s goal of 20 percent of lightspeed to a more sedate 5.6 percent, but with the advantage (if we are patient enough) of putting our payload into the Proxima Centauri system for operations there rather than simply flying through it at high velocity. We also get a glimpse of the systems at both Centauri A and B. I wrote about the original Heller and Hippke paper on this back in 2017 and followed that up with Proxima Mission: Fine-Tuning the Photogravitational Assist. I return to the concept now because Heller’s presentation contrasts nicely with the Helicity fusion work we looked at in the previous post. There, the need for fusion to fly large payloads and decelerate into a target was a key driver for work on an in-space fusion engine.
Interstellar studies works, though, through multiple channels, as it must. Pursuing fusion in a flight-capable package is obviously a worthy goal, but so is exploring the beamed energy option in all its manifestations. I note that Helicity cites a travel time to Proxima Centauri in the range of 117 years, which compares with Heller and company’s now fine-tuned transit into a bound orbit at Proxima of 121 years. The difference, of course, is that Helicity can envision launching a substantially larger payload.
Clearly the pressure is on fusion to deliver, if we can make that happen. But the fact that we have gone from interstellar flight times thought to involve thousands of years to a figure of just over a century in the past few decades of research is heartening. No one said this would be easy, but I think Robert Forward would revel in the thought that we’re driving the numbers down for a variety of intriguing propulsion options.
The paper René Heller drew from in the Montreal presentation is Heller, Hippke & Kervella, “Optimized Trajectories to the Nearest Stars Using Lightweight High-velocity Photon Sails,” Astronomical Journal Vol. 154 No. 3 (29 August 2017), 115. Full text.