Reminiscing about some of Robert Forward’s mind-boggling concepts, as I did in my last post, reminds me that it was both Forward and the Daedalus project that convinced many people to look deeper into the prospect of interstellar flight. Not that there weren’t predecessors – Les Shepherd comes immediately to mind (see The Worldship of 1953) – but Forward was able to advance a key point: Interstellar flight is possible within known physics. He argued that the problem was one of engineering.

Daedalus made the same point. When the British Interplanetary Society came up with a starship design that grew out of the work of freelance scientists and engineers laboring on their own dime in a friendly pub, the notion was never to actually build a starship that would bankrupt an entire planet for a simple flyby mission. Rather, it was to demonstrate that even with technologies that could be extrapolated in the 1970s, there were ways to reach the stars within the realm of known physics. Starflight would be incredibly hard and expensive, but knowing it was possible, we could begin to figure out how to make it feasible.

And if figuring it out takes centuries rather than decades, what of it? The stars are a goal for humanity, not for individuals. Reaching them is a multi-generational effort that builds one mission at a time. At any point in the process, we do what we can.

What steps can we take along the way to start moving up the kind of technological ladder that Phil Lubin and Alexander Cohen examine in their recent paper? Because you can’t just jump to Forward’s 1000-kilometer sails pushed by a beam from a power station in solar orbit that feeds a gigantic Fresnel lens constructed in the outer Solar System between the orbits of Saturn and Uranus. The laser power demand for some of Forward’s missions is roughly 1000 times our current power consumption. That is to say, 1000 times the power consumption of our entire civilization.

Clearly, we have to find a way to start at the other end, looking at just how beamed energy technologies can produce early benefits through far smaller-scale missions right here in the Solar System. Lubin and Cohen hope to build on those by leveraging the exponential growth we see in some sectors of the electronics and photonics industries, which gives us that tricky moving target we looked at last time. How accurately can you estimate where we’ll be in ten years? How stable is the term ‘exponential’?

These are difficult questions, but we do see trends here that are sharply different from what we’ve observed in chemical rocketry, where we’re still using launch vehicles that anyone watching a Mercury astronaut blast off in 1961 would understand. Consumer demand doesn’t drive chemical propulsion, but when it comes to power beaming, we do have electronics and photonics industries in which consumer demand plays a key role. We also see exponential growth in capability paralleled by exponential decreases in cost in areas that can benefit beamed technologies.

Lubin and Cohen see such growth as the key to a sustainable program that builds capability in a series of steps, moving ever outward in terms of mission complexity and speed. Have a look at trends in photonics, as shown in Figure 5 of their paper.

Image: This is Figure 5 from the paper. Caption: (a) Picture of current 1-3 kW class Yb laser amplifier which forms the baseline approach for our design. Fiber output is shown at lower left. Mass is approx 5 kg and size is approximately that of this page. This will evolve rapidly, but is already sufficient to begin. Courtesy Nufern. (b) CW fiber laser power vs year over 25 years showing a “Moore’s Law” like progression with a doubling time of about 20 months. (c) CW fiber lasers and Yb fiber laser amplifiers (baselined in this paper) cost/watt with an inflation index correction to bring it to 2016 dollars. Note the excellent fit to an exponential with a cost “halving” time of 18 months.
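To get a feel for what those fitted trends would imply if they held, here is a minimal sketch in Python. The starting amplifier power comes from the caption above, but the starting cost per watt and the projection horizons are illustrative assumptions of mine, not figures from the paper:

```python
# Illustrative extrapolation of the Figure 5 trends: amplifier power
# doubling every ~20 months, cost per watt halving every ~18 months.
# Starting cost per watt below is an assumed value, not taken from the paper.

power_doubling_months = 20.0
cost_halving_months = 18.0

start_power_kw = 3.0        # ~1-3 kW class amplifier today (Fig. 5a)
start_cost_per_watt = 10.0  # assumed $/W starting point, illustrative only

for years in (5, 10, 20):
    months = 12 * years
    power = start_power_kw * 2 ** (months / power_doubling_months)
    cost = start_cost_per_watt * 0.5 ** (months / cost_halving_months)
    print(f"{years:>2} yr: ~{power:,.0f} kW per amplifier, ~${cost:.3g}/W")
```

Whether those curves actually hold for another decade or two is, of course, exactly the question raised above.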

Such growth makes developing a cost-optimized model for beamed propulsion a tricky proposition. We’ve talked in these pages before about the need for such a model, particularly in Jim Benford’s Beamer Technology for Reaching the Solar Gravity Focus Line, where he presented his analysis of cost optimized systems operating at different wavelengths. That article grew out of his paper “Intermediate Beamers for Starshot: Probes to the Sun’s Inner Gravity Focus” (JBIS 72, pg. 51), written with Greg Matloff in 2019. I should also mention Benford’s “Starship Sails Propelled by Cost-Optimized Directed Energy” (JBIS 66, pg. 85 – abstract), and note that Kevin Parkin authored “The Breakthrough Starshot System Model” (Acta Astronautica 152, 370–384) in 2018 (full text). So resources are there for comparative analysis on the matter.

But let’s talk some more about the laser driver that can produce the beam needed to power space missions like those in the Lubin and Cohen paper, remembering that while interstellar flight is a long-term goal, much smaller systems can grow through such research as we test and refine missions of scientific value to nearby targets. The authors see the photon driver as a phased laser array, the idea being to replace a single huge laser with numerous laser amplifiers in what is called a “MOPA (Master Oscillator Power Amplifier) configuration with a baseline of Yb [ytterbium] amplifiers operating at 1064 nm.”
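As a toy illustration of why a phased array of modest amplifiers can stand in for a single huge laser, and why the phase locking mentioned below is so critical, here is a minimal sketch assuming nothing from the paper beyond the basic idea: N coherently combined emitters add in field amplitude, so on-axis intensity scales as N², while phase errors erode that gain toward the incoherent sum.

```python
# Toy model of coherent beam combining: N emitters of unit amplitude.
# Perfect phasing gives on-axis intensity N^2; random phase errors
# (Gaussian, std sigma radians) pull the result toward the incoherent sum ~N.
import numpy as np

rng = np.random.default_rng(0)
N = 100  # number of laser amplifier elements (illustrative)

for sigma in (0.0, 0.3, 1.0, 3.14):
    intensities = []
    for _ in range(200):
        phases = rng.normal(0.0, sigma, N)
        field = np.exp(1j * phases).sum()    # coherent sum of unit fields
        intensities.append(abs(field) ** 2)  # on-axis intensity
    print(f"phase error sigma = {sigma:.2f} rad: mean intensity "
          f"~{np.mean(intensities):.0f} (perfect = {N**2}, incoherent ~ {N})")
```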

Lubin has been working on this concept through his Starlight program at UC-Santa Barbara, which has received Phase I and II funding through NASA’s Innovative Advanced Concepts program under the headings DEEP-IN (Directed Energy Propulsion for Interstellar Exploration) and DEIS (Directed Energy Interstellar Studies). You’ll also recognize the laser-driven sail concept as a key part of the Breakthrough Starshot effort, for which Lubin continues to serve as a consultant.

Crucial to the laser array concept in economic terms is that the array replaces conventional optics with numerous low-cost optical elements. The idea scales in interesting ways, as the paper notes:

The basic system topology is scalable to any level of power and array size where the tradeoff is between the spacecraft mass and speed and hence the “steps on the ladder.” One of the advantages of this approach is that once a laser driver is constructed it can be used on a wide variety of missions, from large mass interplanetary to low mass interstellar probes, and can be amortized over a very large range of missions.
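A rough way to see why array size sits at the heart of that tradeoff is diffraction: the distance over which the beam can be kept on a sail of a given size grows with the array aperture. The sketch below assumes the 1064 nm wavelength baselined in the paper and a simple spot-size estimate of about 2λL/d; the apertures and the one-meter sail are illustrative choices, not the authors’ design points.

```python
# Diffraction-limited "focus range": the distance L at which the beam spot
# (~ 2 * wavelength * L / aperture) grows to the sail diameter.
# Apertures and sail size are illustrative, not values from the paper.

wavelength = 1064e-9   # m, Yb amplifier line baselined in the paper
sail_diameter = 1.0    # m, meter-class reflector (assumed)
AU = 1.496e11          # m

for aperture in (10.0, 100.0, 1000.0, 10000.0):  # array size in meters
    focus_range = sail_diameter * aperture / (2 * wavelength)
    print(f"array {aperture:>7,.0f} m -> spot reaches 1 m at "
          f"~{focus_range/1e3:,.0f} km ({focus_range/AU:.2g} AU)")
```

Each tenfold step in aperture buys a tenfold longer acceleration run, which is one way to read the “steps on the ladder” in the passage above.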

So immediately we’re talking about building not a one-off interstellar mission (another Daedalus, though using beamed energy rather than fusion and at a much different scale), but rather a system that can begin producing scientific returns early in the process as we resolve such issues as phase locking to maintain the integrity of the beam. The authors liken this approach to building a supercomputer from a large number of modest processors. As it scales up, such a system could produce:

  • Beamed power for ion engine systems (as discussed in the previous post);
  • Power to distant spacecraft, possibly eliminating onboard radioisotope thermoelectric generators (RTG);
  • Planetary defense systems against asteroids and comets;
  • Laser scanning (LIDAR) to identify nearby objects and analyze them.

Take this to a full-scale 50 to 100 GW system and you can push a tiny payload (like Starshot’s ‘spacecraft on a chip’) to perhaps 25 percent of lightspeed using a meter-class reflective sail illuminated for no more than a few minutes. Whether you could get data back from it is another matter, and a severe constraint upon the Starshot program, though one that continues to be analyzed by its scientists.
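As a sanity check on “a few minutes” of illumination, here is a back-of-envelope sketch under idealized assumptions of my own: a perfectly reflective sail intercepting the entire 100 GW beam, a roughly two-gram total flight mass, and non-relativistic kinematics.

```python
# Back-of-envelope: time for a gram-scale sailcraft to reach 0.25 c under
# photon pressure from a 100 GW beam. Assumes perfect reflection, the full
# beam on the sail throughout, and non-relativistic kinematics; the ~2 g
# total mass is an assumption, not a Starshot design figure.

c = 3.0e8      # m/s
P = 100e9      # W
m = 0.002      # kg, assumed sail + chip payload

force = 2 * P / c          # N; reflected light transfers 2P/c of momentum per second
accel = force / m          # m/s^2
t = 0.25 * c / accel       # s to reach 0.25 c
dist = 0.5 * accel * t**2  # m covered during the burn

print(f"thrust ~{force:.0f} N, acceleration ~{accel/9.81:,.0f} g")
print(f"time to 0.25 c ~{t/60:.1f} minutes, distance ~{dist/1.496e11:.3f} AU")
```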

But let me dwell on closer possibilities: A system like this could also push a 100 kg payload to 0.01 c and – the one that really catches my eye – a 10,000 kg payload to more than 1,000 kilometers per second. At this scale of mass, the authors think we’d be better off moving to IDM methods, with the beam supplying power to onboard propulsion, but the point is that we would have startlingly swift options for reaching the outer Solar System and beyond with payloads that allow complex operations there.
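A hedged consistency check on those two numbers, using my own arithmetic rather than anything from the paper: if, as in Lubin’s earlier directed-energy roadmap analysis, the achievable speed for a fixed array falls off roughly as the fourth root of the total mass, the two figures line up, and the kinetic energies involved correspond to hours rather than minutes of 100 GW output.

```python
# Rough scaling and energy check (my arithmetic, not the authors').
# For a fixed array, diffraction-limited sail propulsion gives a final
# speed scaling roughly as (total mass)^(-1/4); the kinetic energies are
# compared to the raw output of a 100 GW array, ignoring coupling losses.

c = 3.0e8
v_100kg = 0.01 * c                            # 100 kg at 0.01 c, as quoted
v_10t = v_100kg * (100.0 / 10000.0) ** 0.25   # scale to 10,000 kg
print(f"m^(-1/4) scaling suggests ~{v_10t/1e3:.0f} km/s at 10,000 kg "
      "(the paper quotes > 1,000 km/s, via beam-powered propulsion at that mass)")

for mass, speed in ((100.0, v_100kg), (10000.0, 1.0e6)):
    ke = 0.5 * mass * speed**2      # joules
    hours = ke / 100e9 / 3600       # beam-on time at 100 GW, unity coupling
    print(f"{mass:>7,.0f} kg at {speed/1e3:,.0f} km/s: KE ~{ke:.1e} J, "
          f"~{hours:.1f} h of 100 GW output")
```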

If we can build it, a laser array like this can be modular, drawing on mass production for its key elements and thus achieving economies of scale. It is an enabler for interstellar missions but also a tool for building infrastructure in the Solar System:

There are very large economies of scale in such a system in addition to the exponential growth. The system has no expendables, is completely solid state, and can run continuously for years on end. Industrial fiber lasers have MTBF in excess of 50,000 hours. The revolution in solid state lighting including upcoming laser lighting will only further increase the performance and lower costs. The “wall plug” efficiency is excellent at 42% as of this year. The same basic system can also be used as a phased array telescope for the receive side in the laser communications as well as for future kilometer-scale telescopes for specialized applications such as spectroscopy of exoplanet atmospheres and high redshift cosmology studies…

Such capabilities have to be matched against the complications inevitable in such a design. These ideas rely on industrial capacity catching up, a challenge eased by drawing on technologies that are driven by other sectors or produced in mass quantities, so that the needed price point can be reached. A major issue: Can laser amplifiers parallel what is happening in the current LED lighting market, where costs continue to plummet? A parallel movement in laser amplifiers would, over the next 20 years, reduce their cost enough that it would not dominate the overall system cost.

This is problematic. Lubin and Cohen point out that LED costs are driven by the enormous production volumes involved. There is no comparable demand for laser amplifiers. Can we expect the exponential growth to continue in this area? I asked Dr. Lubin about this in an email. Given the importance of the issue, I want to quote his response at some length:

There are a number of ways we are looking at the economics of laser amplifiers. Currently we are using fiber based amplifiers pumped by diode lasers. There are other types of amplification that include direct semiconductor amplifiers known as SOA (Semiconductor Optical Amplifier). This is an emerging technology that may be a path forward in the future. This is an example of trying to predict the future based on current technology. Often the future is not just “more of the same” but rather the future often is disrupted by new technologies. This is part of a future we refer to as “integrated photonics” where the phase shifting and amplification are done “on wafer” much like computation is done “on wafer” with the CPU, memory, GPU and auxiliary electronics all integrated in a single elements (chip/ wafer).

Lubin uses the analogy of a modern personal computer as compared to the ENIAC machine of the mid-1940s: we went from a room-sized computer drawing on the order of 100 kW to something that, today, we can hold in our hands and carry in our pockets, a machine roughly a billion times faster with a billion times the memory. And he continues:

In the case of our current technique of using fiber based amplifiers the “intrinsic raw materials cost” of the fiber laser amplifier is very low and if you look at every part of the full system, the intrinsic costs are quite low per sub element. This works to our advantage as we can test the basic system performance incrementally and as we enlarge the scale to increase its capability, we will be able to reduce the final costs due to the continuing exponential growth in technology. To some extent this is similar to deploying solar PV [photovoltaics]. The more we deploy the cheaper it gets per watt deployed, and what was not long ago conceivable in terms of scale is now readily accomplished.

Hence the need to find out how to optimize the cost of the laser array that is critical to a beamed energy propulsion infrastructure. The paper is offered as an attempt to produce such a cost function, one that takes in the wide range of system parameters and their complex connections. Comparing their results to past NASA programs, Lubin and Cohen point out that exponential technologies fundamentally change the game, with the cost of the research and development phase being amortized over decades. Moreover, during that long-term development phase, directed energy systems are driven by market factors in areas as diverse as telecommunications and commercial electronics.

An effective cost model generates the best cost given the parameters necessary to produce a product. A cost function that takes into account the complex interconnections here is, to say the least, challenging, and I leave the reader to explore the equations the authors develop in the search for cost minimums, relating system parameters to the physics. Thus speed and mass are related to power, array size, wavelength, and so on. The model also examines staged system goals – in other words, it considers the various milestones that can be achieved as the system grows.
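The real equations are in the paper; purely to illustrate the kind of minimum being sought, here is a toy stand-in that trades an optics cost growing with array aperture against a laser cost growing with the power needed, constrained by the same diffraction-limited scaling sketched earlier. Every coefficient, exponent, and target below is invented for illustration and is not the authors’ cost function.

```python
# Toy cost minimization, NOT the authors' model. Cost = an optics term that
# grows with array aperture d plus a laser term that grows with the power P
# needed to reach a target speed within the diffraction-limited focus range
# (v^2 = 4 P L / (m c), with L = D_sail * d / (2 * lam)). All coefficients,
# exponents and targets are invented for illustration.

c, lam = 3.0e8, 1064e-9
m, D_sail, v_target = 0.01, 1.0, 0.01 * c   # 10 g craft, 1 m sail (assumed)

def power_needed(d):
    """Laser power required to hit v_target within the focus range of aperture d."""
    accel_distance = D_sail * d / (2 * lam)
    return v_target**2 * m * c / (4 * accel_distance)

def cost(d, dollars_per_m2=1e4, dollars_per_watt=1.0):
    """Toy cost: optics area term plus laser power term (made-up coefficients)."""
    return dollars_per_m2 * d**2 + dollars_per_watt * power_needed(d)

apertures = [100 * 1.2**k for k in range(40)]   # simple scan, 100 m to ~120 km
best = min(apertures, key=cost)
print(f"toy optimum: aperture ~{best:,.0f} m, power ~{power_needed(best)/1e9:.1f} GW, "
      f"cost ~${cost(best)/1e9:.1f}B")
```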

Bear in mind that this is a cost model, not a cost estimate, which the authors argue would not be credible given the long-term nature of the proposed program. But it’s a model based on cost expectations drawn from existing technologies. We can see that the worldwide photonics market is expected to exceed $1 trillion by this year (growing from $180 billion in 2016), with annual growth rates of 20 percent.

These are numbers that dwarf the current chemical launch industry; Lubin and Cohen consider them to reveal the “engine upon which a DE program would be propelled” through the integration of photonics and mass production. While fundamental physics drives the analytical cost model, it is the long-term emerging trends that set the cost parameters in the model.

Today’s paper is Lubin & Cohen, “The Economics of Interstellar Flight,” to be published in a special issue of Acta Astronautica (preprint).
