Why Fill a Galaxy with Self-Reproducing Probes?

We can’t know whether there is a probe from another civilization – a von Neumann probe of the sort we discussed in the previous post – in our own Solar System unless we look for it. Even then, though, we have no guarantee that such a probe can be found. The Solar System is a vast place, and even if we home in on the more obvious targets, such as the Moon and near-Earth objects in stable orbits, a well-hidden artifact a billion or so years old, likely designed not to draw attention to itself, is a tricky catch.

As with any discussion of extraterrestrial civilizations, we’re left to ponder the possibilities and the likelihoods, acknowledging how little we know about whether life itself is widely found. One question opens up another. Abiogenesis may be spectacularly rare, or it may be commonplace. What we eventually find in the ice moons of the outer system should offer us some clues, but widespread life doesn’t itself translate into intelligent, tool-making life. But for today, let’s assume intelligent toolmakers and long-lived societies, and ponder what their motives might be.

Let’s also acknowledge the obvious. In looking at motivations, we can only peer through a human lens. The actions of extraterrestrial civilizations, and certainly their outlook on existence itself, would be opaque to us. They would possibly act in ways we consider inexplicable, for reasons that defy the logic we apply to human decisions. But today’s post is a romp into the conjectural, and it’s a reflection of the fact that being human, we want to know more about these things and have to start somewhere.

Motivations of the Probe Builders

Greg Matloff suggests in his paper on von Neumann probes that one reason a civilization might fill the galaxy with these devices is the possibly universal wish to transcend death. A walk through the Roman ruins scattered around what was once the province of Gaul gave weight to the concept when my wife and I prowled round the south of France some years back. Humans, at least, want to put down a marker. They want to be remembered, and their imprint upon a landscape can be unforgettable.

But in von Neumann terms, I have trouble with this one. I stood next to a Roman wall near Saint-Rémy-de-Provence on a late summer day and felt the poignancy of all artifacts worn by time, but the Romans were decidedly mortal. They knew death was a horizon bounding a short life, and could transcend it only through propitiations to their gods and monuments to their prowess. A civilization that is truly long-lived, defined not by centuries but aeons, may have less regard for personal aggrandizement and even less sense of a coming demise. Life might seem to stretch indefinitely before it.

Image: Some of the ruins of the Roman settlement at Glanum in Saint-Rémy-de-Provence, recovered through excavations beginning in 1921. Walking here caused me to reflect on how potent memorials and monuments would be to a species that had all but transcended death. Would the impulse to build them be enhanced, or would it gradually disappear?

Probes as a means of species reproduction, another Matloff suggestion, ring more true to me, and I would suggest this may flag a biological universal, the drive to preserve the species despite the death of the individual. Here we’re in familiar science fiction terrain in which biological material is preserved by machines and flung to the stars, to be activated upon arrival and raised to awareness by artificial intelligence. Or we could go further – Matloff does – to say that biological materials may prove unnecessary, with computer uploads of the minds of the builders taking their place, another SF trope.

I can go with that as a satisfactory motivator, and it’s enough to make me want to at least try to find what Jim Benford calls ‘lurkers’ in our own corner of the galaxy. Another motivator that deeply satisfies me because it’s so universal among humankind is simple curiosity. A long-lived, perhaps immortal civilization that wants to explore can send von Neumann probes everywhere possible in the hope of learning everything it can about the universe. Encyclopedia Galactica? Why not? Imputing any human motive to an extraterrestrial civilization is dangerous, of course, but we have little else to go on. And centuries of human researchers and librarians attest to the power of this one.

Would such probes be configured to establish communication with any societies that arise on the planets under observation? This is the Bracewell probe notion that extends von Neumann self-reproduction to include this much more immediate form of SETI, with potential knowledge stored at planetary distances. Obviously, 2001: A Space Odyssey comes to mind as we recall the mysterious monoliths found on the early Earth and, much later, on the Moon, and the changes to humanity they portend.

But are long-lived civilizations necessarily friendly? Fred Saberhagen’s ‘berserker’ probes key off the Germanic and particularly Norse freelance bodyguards and specialized troops that became fixtures at the courts of royalty in early medieval times (the word is from the Old Norse word meaning ‘bearskin’). These were not guys you wanted to mess with, and associations with their attire of bear and wolfskins seem to have contributed to the legend of werewolves. Old Norse records show that they were prominent at the court of Norway’s king Harald I Fairhair (reigned 872–930).

Because they made violence into a way of life, we should hope not to find the kind of probe that would be named after them, which might be sent out to eliminate competition. Thus Saberhagen’s portrayal of berserker probes sterilizing planets just as advanced life begins to appear. The fact that we have not yet been sterilized may be due to the possibility that such a probe does not yet consider us ‘advanced,’ but more likely implies we have no berserker probes nearby. Let’s hope to keep it that way.

Or what about the spread of life itself? If abiogenesis does turn out to be unusually rare, it’s possible that any civilization with the power to do so would decide to seed the cosmos with life. In this case, we’re not sending uploaded intelligence or biological beings in embryonic form in our probes, but rather the most basic lifeforms that can proliferate on any planets offering the right conditions for their development. Perhaps an imperative emerges – written about, for example, by Michael Mautner and Matloff himself – to spread life as a way to transform the cosmos. Milan Ćirković continues to explore the implications of just such an effort.

In an interesting post at Sentient Developments, Canadian futurist George Dvorsky points out that self-reproduction has more than an outward-looking component. Suppose a civilization interested in building a megastructure – a Dyson sphere, let’s say – decides to harness self-reproduction to supply the ‘worker’ devices needed to mine the local stellar system and create the object in question.

At a truly cosmic level, Matloff speculates, self-replicating probes might be deployed to build megastructures that could alter the course of cosmic evolution. We’re in Stapledon territory now, freely mixing philosophy and wonder. We’re also in the arena claimed by Frank Tipler in his The Physics of Immortality (Doubleday, 1994).

We’ll want to search the Earth Trojan asteroids and co-orbitals for any indication of extraterrestrial probes, though it’s also true that the abundant resources of the Kuiper Belt might make operations there attractive to this kind of intelligence. One of the biggest questions has to do with the size of such probes. Here I’ll quote Matloff:

In a search for active or quiescent von Neumann probes in the solar system, human science would contend with great uncertainty regarding the size of such objects. Some science fiction authors contend that these devices might be the size of small planetary satellites (see for example L. Johnson, Mission to Methone and A. Reynolds, Pushing Ice). On the other hand, Haqq-Misra and Kopparapu (2012) believe that they may be in the 1-10 m size range of contemporary human space probes and these might be observable.

But there may be a limit to von Neumann probe detection. If they can be nano-miniaturized as suggested by Tipler (1994), the solar system might swarm with them and detection efforts would likely fail.

I remember having a long phone conversation two decades ago with Robert Freitas on this very point. Freitas had originally come up with a self-reproducing probe concept at the macro-scale called REPRO, but went on to delve into the implications of nanotechnology. He made Matloff’s point in our discussion: If probe technologies operate at this scale, the surface of planet Earth itself could be home to an observing network about which we would have no awareness. Self-reproducing probes will be hard to rule out, but looking where we can to screen for the obvious makes sense.

The paper is Matloff, “Von Neumann probes: rationale, propulsion, interstellar transfer timing,” International Journal of Astrobiology, published online by Cambridge University Press 28 February 2022 (abstract).


Galaxies Like Grains of Salt

I’m riffing on a Brian Aldiss title this morning, the reference being the author’s 1959 collection Galaxies Like Grains of Sand, which is a sequence of short stories spanning millions of years of Earth’s future (originally published as The Canopy of Time). But sand is appropriate for the exercise before us today, one suggested by memories of the day my youngest son told me he had to construct a model of an atom and we went hunting all over town for styrofoam balls. It turns out atoms are easy.

Suppose your child comes home with a project involving the creation of a scale model of the galaxy. Pondering the matter, you announce that grains of salt can stand in for stars. Sand might work as well, but salt is easier because you can buy boxes of salt at the grocery. So while your child goes outside to do other things, you and your calculator get caught up in the question of modeling the Milky Way. Just how much salt will you need?

Most models of the galaxy these days come in at a higher number than the once canonical 100 billion stars. In fact, 200 billion may be too low. But let’s economize by sticking with the lower number. So you need 100 billion grains of salt to make your scale model accurate. A little research reveals that the average box of Morton salt weighs in with about five million grains. Back to the calculator. You will need 20,000 boxes of salt to make this work. The local grocery doesn’t keep this much in stock, so you turn to good old Amazon, and pretty soon a semi has pulled up in front of your house with 20,000 blue boxes of salt.

But how to model this thing? I wouldn’t know where to begin, but fortunately JPL’s Rich Terrile thought the matter through some time back and he knows the answer. If we want to reflect the actual separation of stars just in the part of the galaxy we live in, we have to separate each grain of salt by eleven kilometers from any of its neighbors. Things get closer as we move in toward the bulge. Maybe your child has lots of friends to help spread the salt? Let’s hope so. And plenty of room to work with for the model.
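If you want to sanity-check the numbers yourself, the arithmetic fits in a few lines of Python. This is a minimal sketch: the star count and grains-per-box figure come from the text above, while the grain size, the Sun’s diameter and a typical local stellar separation of a few light years are round values I’m supplying.

```python
# Back-of-the-envelope check on the salt-grain model of the Milky Way.

STARS_IN_GALAXY = 100e9     # the conservative star count used above
GRAINS_PER_BOX = 5e6        # approximate grains in a box of table salt

print(f"Boxes of salt needed: {STARS_IN_GALAXY / GRAINS_PER_BOX:,.0f}")  # ~20,000

# Scale: let one grain (~0.3 mm) stand in for the Sun (~1.4 million km across).
GRAIN_DIAMETER_M = 0.3e-3
SUN_DIAMETER_M = 1.39e9
scale = GRAIN_DIAMETER_M / SUN_DIAMETER_M

# Stars near the Sun are separated by a few light years on average.
LIGHT_YEAR_M = 9.46e15
separation_km = 5 * LIGHT_YEAR_M * scale / 1000
print(f"Grain-to-grain separation in the model: ~{separation_km:.0f} km")
```

With these round inputs the grains land on the order of ten kilometers apart, right in line with the figure Terrile quotes.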

I mention all this because I was talking recently with Nate Simpson, lead developer of Kerbal Space Program 2, and colleague Jon Cioletti. This is the next iteration of the remarkable spaceflight simulation game that offers highly realistic launch and orbital physics capabilities. We were talking deep space, and the salt box comparison came naturally, because these guys are also in the business of reaching a broader audience with extraordinary scales of time and space.

I strongly recommend Kerbal, by the way, because I suspect Kerbal Space Program has already turned the future career path of more than a few young players in the direction of aerospace, just as, say, science fiction novels or Star Trek inspired an earlier generation in that direction. Watching what develops as Kerbal goes into version 2 will be fascinating.

Unexpected Interstellar Targets

But I was also thinking about the salt box analogy because it can be so difficult to get interstellar distances across to the average person, who may know on some level that a galaxy is a very big place, but probably doesn’t have that deep awe that a real acquaintance with the numbers delivers. I think about craft trying to navigate these immensities, and also about objects like ‘Oumuamua and 2I/Borisov, the first two known interstellar objects we have detected. What vast oceans of interstellar space such tiny objects have drifted through! Clearly, as our ability to observe them grows, we’ll find many more such objects, and one of these days we’ll get a mission off to study one up close (probably designed by Andreas Hein and team).

Image: This Hubble Space Telescope image of 2I/Borisov shows the first observed rogue comet, a comet from interstellar space that is not gravitationally bound to a star. It was discovered in 2019 and is the second identified interstellar interloper, after ‘Oumuamua. 2I/Borisov looks a lot like the traditional comets found inside our solar system, which sublimate ices, and cast off dust as they are warmed by the Sun. The wandering comet provided invaluable clues to the chemical composition, structure, and dust characteristics of planetary building blocks presumably forged in an alien star system. It’s rapidly moving away from our Sun and will eventually head back into interstellar space, never to return. Credit: NASA, ESA, and D. Jewitt (UCLA).

I was heartened to learn over the weekend that the James Webb Space Telescope will likely have a role to play in further detection efforts. Indeed, there is now a Webb Target of Opportunity program that homes in on just such discoveries. Here is how a Target of Opportunity is defined on the JWST website:

A target for JWST observation is deemed a Target of Opportunity (ToO) if it is associated with an event that may occur at an unknown time, and in this way ToOs are distinct from time constrained observations.

Sounds made to order for interstellar interlopers. But we can add this:

ToO targets include objects that can be identified in advance, but which undergo unpredictable changes (e.g., some dwarf novae), as well as objects that can only be identified in advance by class (e.g., novae, supernovae, gamma ray bursts, newly discovered comets, etc.). ToOs are generally not suitable for observations of periodic phenomena (e.g., eclipsing binary stars, transiting planets, etc.). ToO proposals must provide a clear definition of the trigger criteria and present a detailed plan for the observations to be performed in the technical justification of the PDF submission if the triggering event occurs. A ToO activation may consist of a single observation or of a set of observations executed with a pre-specified cadence.

Martin Cordiner (NASA GSFC/Catholic University of America) is principal investigator of the Webb Target of Opportunity program to study the composition of an interstellar object:

“The supreme sensitivity and power of Webb now present us with an unprecedented opportunity to investigate the chemical composition of these interstellar objects and find out so much more about their nature: where they come from, how they were made, and what they can tell us about the conditions present in their home systems. The ability to study one of these and find out its composition — to really see material from around another planetary system close up — is truly an amazing thing.”

Image: This artist’s illustration shows one take on the first identified interstellar visitor, 1I/’Oumuamua, discovered in 2017. The wayward object swung within 38 million kilometers of the Sun before racing out of the solar system. 1I/’Oumuamua still defies any simple categorization. It did not behave like a comet, and it had a variety of unusual characteristics. As the complex rotation of the object made it difficult to determine the exact shape, there are many models of what it could look like. Credit: NASA, ESA, and J. Olmsted and F. Summers (STScI).

When astronomers detect another interstellar interloper, they’ll first need to confirm that it’s on a hyperbolic orbit, and if JWST is to come into play, that its trajectory intersects with the telescope’s viewing field. If that’s the case, Cordiner’s team will use JWST’s Near-Infrared Spectrograph (NIRSpec) to examine gases released by the object due to the Sun’s heat. The spectral resolution available here should allow the detection of molecules ranging from water, methanol, formaldehyde and carbon dioxide to carbon monoxide and methane. The Mid-Infrared Instrument (MIRI) will track any dust or solid particles produced by the object.

The near- and mid-infrared wavelength ranges will be used to examine interstellar interlopers for the first time with this program, making this fertile ground for new discoveries. Assuming such objects exist in vast numbers, the Webb Target of Opportunity program should find material to work with, and likely soon, especially given JWST’s ability to detect incoming objects at extremely faint magnitudes. Are most such discoveries likely to be comet-like, or do we have the possibility of finding other objects as apparently anomalous as ‘Oumuamua?


Freeman Dyson’s Advice to a College Freshman

Anyone who ever had the pleasure of talking to Freeman Dyson knows that he was a gracious man deeply committed to helping others. My own all too few exchanges with him were on the phone or via email, but he always gave of his time no matter how busy his schedule. In the article below, Colin Warn offers an example, one I asked him for permission to publish so as to preserve these Dysonian nuggets for a wider audience. Colin is an Associate Propulsion Component Engineer at Maxar, with a Bachelor of Science in mechanical engineering from Washington State University. His research interests dip into everything from electric spacecraft propulsion to small satellite development, machine learning and machine vision applications for microrobotics. Thus far in his young career, he has published two papers on the topics of nuclear gas core rockets and interstellar braking mechanisms in the Journal of the British Interplanetary Society. He tells me that when he’s not working on interstellar research, he can be found teaching music production classes or practicing martial arts.

by Colin Warn

Three years ago, I decided to make a switch from being a part-time dance music ghost producer to study something that would help advance humanity’s knowledge of the stars. Eventually, I decided that something would be mechanical engineering, a switch which was in no small part due to space podcasts that introduced me to cool technologies such as Nuclear Pulsed Propulsion (NPP): rockets propelled by small nuclear explosions. The man behind this technology? Freeman Dyson.

Dyson worked on Project Orion for four years, deeply involved in studies that produced the world’s first and only prototype spacecraft powered by NPP. Due to the 1963 Partial Test Ban Treaty, which he supported, humanity’s best bet for interstellar travel was filed away. Yet, something about the audacity of this project resonated with me decades later when I uncovered it, especially when I found out that Dyson was in charge of this project despite not having a PhD.

So, as a bright-eyed and optimistic freshman entering his first year of college, figuring that out of anyone in the world he would have the best insights on what technology would lead humanity to the stars, I decided to send him this email:

Hello Professor Dyson,

My name is Colin Warn, and I’m a freshman pursuing a degree in mechanical engineering/physics.

Had a few questions for you regarding how I should structure my career path. My ambitions are to work on interstellar propulsion technologies, and I figured you might know a thing or two about the skill set required.

If you have the time, here’s what I’d love to hear your opinion on:

1. What research/internships would you suggest I focus on as an undergraduate to learn the skills that will be needed for working on advanced propulsion technologies? Especially in my freshman and sophomore years?

2. For my initial undergrad years, would you suggest that I focus more on taking physics or engineering courses initially?

Thank you so much in advance for your time. Been reading the book your son wrote about Orion. Let’s just say the reactions I’ve been getting from my friends when I tell them what I’m reading about is already quite fun to observe.

Regards,

-Colin

I sent it to his Princeton email, as I’ve sent many emails in the past to fairly high-caliber people, without a hope of getting anything in return.

Two days later, I woke up to find this email in my inbox.

Dear Colin Warn,

I will try to answer your two questions and then go on to more general remarks.

1. So far as I know, the only techniques for interstellar propulsion that are likely to be cost-effective are laser-propelled sails and microwave-propelled sails. Yuri Milner has put some real money into his Starshot project using a high-powered laser beam. Bob Forward many years ago proposed the Starwisp spacecraft using a microwave beam. Either way, the power of the beam has to be tens of Megawatts for a miniature instrument payload of the order of a gram, or tens of Terawatts for a human payload of the order of a ton. My conclusion, the manned mission is not feasible for the next century, the instrument mission might be feasible.

For the instrument mission, the propulsion system is the easy part, and the miniaturization of the payload is the difficult part. Therefore, you should aim to join a group working on miniaturization of instruments, optical sensors, transmitters and receivers, navigation and information handling systems. These are all the technologies that were developed to make cell-phones and surveillance drones. An interstellar mission is basically a glorified surveillance drone. You should go where the action is in the development of micro-drones. I do not know where that is. Probably a commercial business attached to a technical university.

2. For undergraduate courses, I would prefer engineering to physics. Some general background in physics is necessary, but specialized physics courses are not. More important is computer science, applied mathematics, electrical engineering and optics, chemistry of optical and electronic materials, microchip engineering. I would add some courses in molecular biology and neurology, with the possibility in mind that these sciences may be the basis for big future advances in miniaturization. We still have a lot to learn by studying how Nature does miniaturization in living cells and brains.

This email contained more detailed insights into my questions than I could ever have hoped for. Then to top it all off, he had one more piece of advice for me, on a question I hadn’t even asked.

General remarks. In my own career I never made long-range plans. I would advise you not to stick to plans. Always be prepared to grab at unexpected opportunities as they arise. Be prepared to switch fields whenever you have the chance to work with somebody who is doing exciting stuff. My daughter Esther, who is a successful venture capitalist running her own business, puts at the bottom of every E-mail her motto, “Always make new mistakes”. That is a good rule if you want to have an interesting life.

With all good wishes for you and your career, yours sincerely,

Freeman Dyson.

Upon reading this email in 2018, I promised myself that one day I’d put myself in a position to thank him in person. Sadly, I’ll never get the opportunity. I learned, while watching an old YouTube video featuring him, that he had died in February of 2020.

So this article is my way of saying thank you to him. For creating literal star-shot projects to inspire a new generation. For being someone who always questioned the status quo. But most of all, for still being down to earth enough to email some amazingly insightful answers to a freshman’s cold-email. I hope one day I’m in a position where I can pass on the favor.


The Long Result: Star Travel and Exponential Trends

Reminiscing about some of Robert Forward’s mind-boggling concepts, as I did in my last post, reminds me that it was both Forward and the Daedalus project that convinced many people to look deeper into the prospect of interstellar flight. Not that there weren’t predecessors – Les Shepherd comes immediately to mind (see The Worldship of 1953) – but Forward was able to advance a key point: Interstellar flight is possible within known physics. He argued that the problem was one of engineering.

Daedalus made the same point. When the British Interplanetary Society came up with a starship design that grew out of freelance scientists and engineers working on their own dime in a friendly pub, the notion was not to actually build a starship that would bankrupt an entire planet for a simple flyby mission. Rather, it was to demonstrate that even with technologies that could be extrapolated in the 1970s, there were ways to reach the stars within the realm of known physics. Starflight was incredibly hard and expensive, but if it was possible in principle, we could set about figuring out how to make it practical.

And if figuring it out takes centuries rather than decades, what of it? The stars are a goal for humanity, not for individuals. Reaching them is a multi-generational effort that builds one mission at a time. At any point in the process, we do what we can.

What steps can we take along the way to start moving up the kind of technological ladder that Phil Lubin and Alexander Cohen examine in their recent paper? Because you can’t just jump to Forward’s 1000-kilometer sails pushed by a beam from a power station in solar orbit that feeds a gigantic Fresnel lens constructed in the outer Solar System between the orbits of Saturn and Uranus. The laser power demand for some of Forward’s missions is roughly 1000 times our current power consumption. That is to say, 1000 times the power consumption of our entire civilization.

Clearly, we have to find a way to start at the other end, looking at just how beamed energy technologies can produce early benefits through far smaller-scale missions right here in the Solar System. Lubin and Cohen hope to build on those by leveraging the exponential growth we see in some sectors of the electronics and photonics industries, which gives us that tricky moving target we looked at last time. How accurately can you estimate where we’ll be in ten years? How stable is the term ‘exponential’?

These are difficult questions, but we do see trends here that are sharply different from what we’ve observed in chemical rocketry, where we’re still using launch vehicles that anyone watching a Mercury astronaut blast off in 1961 would understand. Consumer demand doesn’t drive chemical propulsion, but in terms of power beaming, we obviously do have electronics and photonics industries in which the consumer plays a key role. We also see the exponential growth in capability paralleled by exponential decreases in cost in areas that can benefit beamed technologies.

Lubin and Cohen see such growth as the key to a sustainable program that builds capability in a series of steps, moving ever outward in terms of mission complexity and speed. Have a look at trends in photonics, as shown in Figure 5 of their paper.

Image (click to enlarge): This is Figure 5 from the paper. Caption: (a) Picture of current 1-3 kW class Yb laser amplifier which forms the baseline approach for our design. Fiber output is shown at lower left. Mass is approx 5 kg and size is approximately that of this page. This will evolve rapidly, but is already sufficient to begin. Courtesy Nufern. (b) CW fiber laser power vs year over 25 years showing a “Moore’s Law” like progression with a doubling time of about 20 months. (c) CW fiber lasers and Yb fiber laser amplifiers (baselined in this paper) cost/watt with an inflation index correction to bring it to 2016 dollars. Note the excellent fit to an exponential with a cost “halving” time of 18 months.
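To get a feel for what those doubling and halving times imply if the trends simply continue (a sizable assumption), here is a minimal extrapolation sketch. The 20-month and 18-month figures come from the caption above; the starting values of 3 kW and $10 per watt are placeholders of my own, not numbers from the paper.

```python
# Naive extrapolation of the fiber laser trends in Figure 5, assuming the
# exponentials hold. Starting values are placeholders for illustration.

POWER_DOUBLING_MONTHS = 20   # amplifier power doubling time (Fig. 5b)
COST_HALVING_MONTHS = 18     # cost-per-watt halving time (Fig. 5c)

def amplifier_power_kw(years, start_kw=3.0):
    return start_kw * 2 ** (12 * years / POWER_DOUBLING_MONTHS)

def cost_per_watt(years, start_dollars=10.0):
    return start_dollars * 0.5 ** (12 * years / COST_HALVING_MONTHS)

for years in (5, 10, 20):
    print(f"+{years:2d} yr: ~{amplifier_power_kw(years):7.0f} kW per amplifier, "
          f"~${cost_per_watt(years):.2f}/W")
```

Run forward a couple of decades, the same trend lines turn kilowatt-class amplifiers into megawatt-class ones at a tiny fraction of today’s cost per watt, which is exactly the bet the authors are making.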

Such growth makes developing a cost-optimized model for beamed propulsion a tricky proposition. We’ve talked in these pages before about the need for such a model, particularly in Jim Benford’s Beamer Technology for Reaching the Solar Gravity Focus Line, where he presented his analysis of cost optimized systems operating at different wavelengths. That article grew out of his paper “Intermediate Beamers for Starshot: Probes to the Sun’s Inner Gravity Focus” (JBIS 72, pg. 51), written with Greg Matloff in 2019. I should also mention Benford’s “Starship Sails Propelled by Cost-Optimized Directed Energy” (JBIS 66, pg. 85 – abstract), and note that Kevin Parkin authored “The Breakthrough Starshot System Model” (Acta Astronautica 152, 370-384) in 2018 (full text). So resources are there for comparative analysis on the matter.

But let’s talk some more about the laser driver that can produce the beam needed to power space missions like those in the Lubin and Cohen paper, remembering that while interstellar flight is a long-term goal, much smaller systems can grow through such research as we test and refine missions of scientific value to nearby targets. The authors see the photon driver as a phased laser array, the idea being to replace a single huge laser with numerous laser amplifiers in what is called a “MOPA (Master Oscillator Power Amplifier) configuration with a baseline of Yb [ytterbium] amplifiers operating at 1064 nm.”

Lubin has been working on this concept through his Starlight program at UC-Santa Barbara, which has received Phase I and II funding through NASA’s Innovative Advanced Concepts program under the headings DEEP-IN (Directed Energy Propulsion for Interstellar Exploration) and DEIS (Directed Energy Interstellar Studies). You’ll also recognize the laser-driven sail concept as a key part of the Breakthrough Starshot effort, for which Lubin continues to serve as a consultant.

Crucial to the laser array concept in economic terms is that the array replaces conventional optics with numerous low-cost optical elements. The idea scales in interesting ways, as the paper notes:

The basic system topology is scalable to any level of power and array size where the tradeoff is between the spacecraft mass and speed and hence the “steps on the ladder.” One of the advantages of this approach is that once a laser driver is constructed it can be used on a wide variety of missions, from large mass interplanetary to low mass interstellar probes, and can be amortized over a very large range of missions.

So immediately we’re talking about building not a one-off interstellar mission (another Daedalus, though using beamed energy rather than fusion and at a much different scale), but rather a system that can begin producing scientific returns early in the process as we resolve such issues as phase locking to maintain the integrity of the beam. The authors liken this approach to building a supercomputer from a large number of modest processors. As it scales up, such a system could produce:

  • Beamed power for ion engine systems (as discussed in the previous post);
  • Power to distant spacecraft, possibly eliminating onboard radioisotope thermoelectric generators (RTG);
  • Planetary defense systems against asteroids and comets;
  • Laser scanning (LIDAR) to identify nearby objects and analyze them.

Take this to a full-scale 50 to 100 GW system and you can push a tiny payload (like Starshot’s ‘spacecraft on a chip’) to perhaps 25 percent of lightspeed using a meter-class reflective sail illuminated for a matter of no more than minutes. Whether you could get data back from it is another matter, and a severe constraint upon the Starshot program, though one that continues to be analyzed by its scientists.

But let me dwell on closer possibilities: A system like this could also push a 100 kg payload to 0.01 c and – the one that really catches my eye – a 10,000 kg payload to more than 1,000 kilometers per second. At this scale of mass, the authors think we’d be better off going to IDM methods, with the beam supplying power to onboard propulsion, but the point is we would have startlingly swift options for reaching the outer Solar System and beyond with payloads allowing complex operations there.
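Those numbers can be roughly sanity-checked with nothing more than photon pressure on an ideal reflective sail, where the thrust is 2P/c. The sketch below ignores relativity, the diffraction-limited range of the beam, and all losses, so treat it as an upper bound rather than a design; the 100 GW power level and the payload masses are the figures quoted above, everything else is idealization.

```python
# Idealized directed-energy boost: a perfectly reflective sail feels F = 2P/c.
# No relativity, no diffraction limit, no losses -- an upper bound only.

C = 3.0e8          # speed of light, m/s
POWER_W = 100e9    # full-scale 100 GW beamer

def boost_time_s(mass_kg, target_speed_m_s, power_w=POWER_W):
    accel = 2 * power_w / (C * mass_kg)   # m/s^2
    return target_speed_m_s / accel

print(f"1 g wafer craft to 0.25c: ~{boost_time_s(0.001, 0.25 * C) / 60:.0f} minutes")
print(f"100 kg payload to 0.01c:  ~{boost_time_s(100, 0.01 * C) / 86400:.0f} days")
print(f"10,000 kg to 1,000 km/s:  ~{boost_time_s(10000, 1.0e6) / 86400:.0f} days")
```

The gram-scale case works out to roughly two minutes of illumination, consistent with the ‘matter of minutes’ above, while the heavier payloads need days to months of continuous beam time, which is one reason the paper shifts to beam-powered onboard propulsion at those masses.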

If we can build it, a laser array like this can be modular, drawing on mass production for its key elements and thus achieving economies of scale. It is an enabler for interstellar missions but also a tool for building infrastructure in the Solar System:

There are very large economies of scale in such a system in addition to the exponential growth. The system has no expendables, is completely solid state, and can run continuously for years on end. Industrial fiber lasers have MTBF in excess of 50,000 hours. The revolution in solid state lighting including upcoming laser lighting will only further increase the performance and lower costs. The “wall plug” efficiency is excellent at 42% as of this year. The same basic system can also be used as a phased array telescope for the receive side in the laser communications as well as for future kilometer-scale telescopes for specialized applications such as spectroscopy of exoplanet atmospheres and high redshift cosmology studies…

Such capabilities have to be matched against the complications inevitable in such a design. These ideas rely on industrial capacity catching up, a process eased by finding technologies driven by other sectors or produced in mass quantities so as to reach the needed price point. A major issue: Can laser amplifiers parallel what is happening in the current LED lighting market, where costs continue to plummet? A parallel movement in laser amplifiers would, over the next 20 years, reduce their cost enough that it would not dominate the overall system cost.

This is problematic. Lubin and Cohen point out that LED costs are driven by the large volume needed. There is no such demand in laser amplifiers. Can we expect the exponential growth to continue in this area? I asked Dr. Lubin about this in an email. Given the importance of the issue, I want to quote his response at some length:

There are a number of ways we are looking at the economics of laser amplifiers. Currently we are using fiber based amplifiers pumped by diode lasers. There are other types of amplification that include direct semiconductor amplifiers known as SOA (Semiconductor Optical Amplifier). This is an emerging technology that may be a path forward in the future. This is an example of trying to predict the future based on current technology. Often the future is not just “more of the same” but rather the future often is disrupted by new technologies. This is part of a future we refer to as “integrated photonics” where the phase shifting and amplification are done “on wafer” much like computation is done “on wafer” with the CPU, memory, GPU and auxiliary electronics all integrated in a single element (chip/wafer).

Lubin uses the analogy of a modern personal computer as compared to an ENIAC machine from 1943, as we went from room-sized computers that drew 100 kW to something that, today, we can hold in our hands and carry in our pockets. We enjoy a modern version that is about 1 billion times faster and features a billion times the memory. And he continues:

In the case of our current technique of using fiber based amplifiers the “intrinsic raw materials cost” of the fiber laser amplifier is very low and if you look at every part of the full system, the intrinsic costs are quite low per sub element. This works to our advantage as we can test the basic system performance incrementally and as we enlarge the scale to increase its capability, we will be able to reduce the final costs due to the continuing exponential growth in technology. To some extent this is similar to deploying solar PV [photovoltaics]. The more we deploy the cheaper it gets per watt deployed, and what was not long ago conceivable in terms of scale is now readily accomplished.

Hence the need to find out how to optimize the cost of the laser array that is critical to a beamed energy propulsion infrastructure. The paper is offered as an attempt to produce such a cost function, to take in the wide range of system parameters and their complex connections. Comparing their results to past NASA programs, Lubin and Cohen point out that exponential technologies fundamentally change the game, with the cost of the research and development phase being amortized over decades. Moreover, directed energy systems are driven by market factors in areas as diverse as telecommunications and commercial electronics throughout a long-term development phase.

An effective cost model generates the best cost given the parameters necessary to produce a product. A cost function that takes into account the complex interconnections here is, to say the least, challenging, and I leave the reader to explore the equations the authors develop in the search for cost minimums, relating system parameters to the physics. Thus speed and mass are related to power, array size, wavelength, and so on. The model also examines staged system goals – in other words, it considers the various milestones that can be achieved as the system grows.
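To give a flavor of the kind of relation such a model ties together, here is a simplified, non-relativistic scaling often used in directed-energy sail analyses, not the authors’ full cost function. A perfectly reflective sail of diameter d accelerates at 2P/(mc), and a transmitting array of diameter D at wavelength λ can keep its diffraction-limited spot on the sail out to a distance of roughly dD/(2λ), which sets the final boost speed. Apart from the 1064 nm ytterbium wavelength mentioned earlier, the numbers plugged in below are illustrative placeholders of my own.

```python
# Simplified non-relativistic scaling for a beam-riding reflective sail:
#   acceleration        a = 2P / (m c)
#   diffraction range   L ~ d_sail * d_array / (2 * wavelength)
#   final speed         v = sqrt(2 a L) = sqrt(2 P d_array d_sail / (m c wavelength))
# Only a rough guide near relativistic speeds; not the paper's cost model.

from math import sqrt

C = 3.0e8  # m/s

def final_speed(power_w, array_diam_m, sail_diam_m, mass_kg, wavelength_m=1.064e-6):
    return sqrt(2 * power_w * array_diam_m * sail_diam_m / (mass_kg * C * wavelength_m))

# Illustrative: 100 GW beam, 10 km array, 1 m sail, 1 g total mass.
v = final_speed(100e9, 10_000, 1.0, 0.001)
print(f"Final speed: {v / C:.2f} c")   # roughly a quarter of lightspeed
```

Even this toy version shows why the optimization is so tightly coupled: speed scales only as the square root of power, array size and sail size, so the trade-offs among amplifiers, optics and sail mass dominate the economics.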

Bear in mind that this is a cost model, not a cost estimate, which the authors argue would not be credible given the long-term nature of the proposed program. But it’s a model based on cost expectations drawn from existing technologies. We can see that the worldwide photonics market is expected to exceed $1 trillion by this year (growing from $180 billion in 2016), with annual growth rates of 20 percent.

These are numbers that dwarf the current chemical launch industry; Lubin and Cohen consider them to reveal the “engine upon which a DE program would be propelled” through the integration of photonics and mass production. While fundamental physics drives the analytical cost model, it is the long term emerging trends that set the cost parameters in the model.

Today’s paper is Lubin & Cohen, “The Economics of Interstellar Flight,” to be published in a special issue of Acta Astronautica (preprint).


The Goodness of the Universe

The end of one year and the beginning of the next seems like a good time to back out to the big picture. The really big picture, where cosmology interacts with metaphysics. Thus today’s discussion of evolution and development in a cosmic context. John Smart wrote me after the recent death of Russian astronomer Alexander Zaitsev, having been with Sasha at the 2010 conference I discussed in my remembrance of Zaitsev. We also turned out to connect through the work of Clément Vidal, whose book The Beginning and the End tackles meaning from the cosmological perspective (see The Zen of SETI). As you’ll see, Smart and Vidal now work together on concepts described below, one of whose startling implications is that a tendency toward ethics and empathy may be a natural outgrowth of networked intelligence. Is our future invariably post-biological, and does such an outcome enhance or preclude journeys to the stars? John Smart is a global futurist, and a scholar of foresight process, science and technology, life sciences, and complex systems. His book Evolution, Development and Complexity: Multiscale Evolutionary Models of Complex Adaptive Systems (Springer) appeared in 2019. His latest title, Introduction to Foresight, 2021, is likewise available on Amazon.

by John Smart

In 2010, physicists Martin Dominik and John Zarnecki ran a Royal Society conference, Towards a Scientific and Societal Agenda on Extra-Terrestrial Life, addressing scientific, legal, ethical, and political issues around the search for extra-terrestrial intelligence (SETI). Philosopher Clement Vidal and I both spoke at that conference. It was the first academic venue where I presented my Transcension Hypothesis, the idea that advanced intelligence everywhere may be developmentally fated, the more complex it becomes, to venture into inner space, into increasingly local and miniaturized domains, with ever-greater density and interiority (simulation capacity, feelings, consciousness), rather than to expand into “outer space”. When this process is taken to its physical limit, we get black-hole-like domains, which a few astrophysicists have speculated may allow us to “instantly” connect with all the other advanced civilizations which have entered a similar domain. Presumably each of these intelligent civilizations will then compare and contrast our locally unique, finite and incomplete science, experiences and wisdom, and if we are lucky, go on to make something even more complex and adaptive (a new network? a universe?) in the next cycle.

Clement and I co-founded our Evo-Devo Universe complexity research and discussion community in 2008 to explore the nature of our universe and its subsystems. Just as there are both evolutionary and developmental processes operating in living systems, with evolutionary processes being experimental, divergent, and unpredictable, and developmental processes being conservative, convergent, and predictable, we think that both evo and devo processes operate in our universe as well. If our universe is a replicating system, as several cosmologists believe, and if it exists in some larger environment, aka the multiverse, it is plausible that both evolutionary and developmental processes would self-organize, under selection, to be of use to the universe as a complex system. With respect to universal intelligence, it seems reasonable that both evolutionary diversity, with many unique local intelligences, and developmental convergence, with all such intelligences going through predictable hierarchical emergences and a life cycle, would emerge, just as both evolutionary and developmental processes regulate all living intelligences.

Once we grant that developmental processes exist, we can ask what kind of convergences might we predict for all advanced civilizations. One of those processes, accelerating change, seems particularly obvious, even though we still don’t have a science of that acceleration. (In 2003 I started a small nonprofit, ASF, to make that case). But what else might we expect? Does surviving universal intelligence become increasingly good, on average? Is there an “arc of progress” for the universe itself?

Developmental processes become increasingly regulated, predictable, and stable as a function of their complexity and developmental history. Think of how much more predictable an adult organism is than a youth (try to predict your young kid’s thinking or behavior!), or how many fewer developmental failures occur in an adult versus a newly fertilized embryo. Development uses local chaos and contingency to converge predictably on a large set of far-future forms and functions, including youth, maturity, replication, senescence, and death, so the next generation may best continue the journey. At its core, life has never been about either individual or group success. Instead, life’s processes have self-organized, under selection, to advance network success. Well-built networks, not individuals or even groups, always progress. As a network, life is immortal, increasingly diverse and complex, and always improving its stability, resiliency, and intelligence.

But does universal intelligence also become increasingly good, on average, at the leading edge of network complexity? We humans are increasingly able to use our accelerating S&T to create evil, with ever-rising scale and intensity. But are we increasingly free to do so, or do we grow ever-more self-regulated and societally constrained? Steven Pinker, Rutger Bregman, and many others argue we have become increasingly self- and socially-constrained toward the good, for yet-unclear reasons, over our history. Read The Better Angels of Our Nature, 2012 and Humankind, 2021 for two influential books on that thesis. My own view on why we are increasingly constrained to be good is because there is a largely hidden but ever-growing network ethics and empathy holding human civilizations together. The subtlety, power, and value of our ethics and empathy grows incessantly in leading networks, apparently as a direct function of their complexity.

As a species, we are often unforesighted, coercive, and destructive. Individually, far too many of us are power-, possession- or wealth-oriented, zero-sum, cruel, selfish, and wasteful. Not seeing and valuing the big picture, we have created many new problems of progress, like climate change and environmental destruction, that we shamefully neglect. Yet we are also constantly progressing, always striving for positive visions of human empowerment, while imagining dystopias that we must prevent.

Ada Palmer’s science fiction debut, Too Like the Lightning, 2017 (I do not recommend the rest of the series), is a future world of technological abundance, accompanied by dehumanizing, centrally-planned control over what individuals can say, do, or believe. I don’t think Palmer has written a probable future. But it is plausible, under the wrong series of unfortunate and unforesighted future events, decisions and actions. Imagining such dystopias, and asking ourselves how to prevent them, is surely as important as positive visions to improving adaptiveness. I am also convinced we are rapidly and mostly unconsciously creating a civilization that will be ever more organized around our increasingly life-like machines. We can already see that these machines will be far smarter, faster, more capable, more miniaturized, more resource-independent, and more sustainable than our biology. That fast-approaching future will be importantly different from (and better than?) anything Earth’s amazing, nurturing environment has developed to date, and it is not well-represented in science-fiction yet, in my view.

On average, then, I strongly believe our human and technological networks grow increasingly good, the longer we survive, as some real function of their complexity. I also believe that postbiological life is an inevitable development, on all the presumably ubiquitous Earthlike planets in our universe. Not only does it seem likely that we will increasingly choose to merge with such life, it seems likely that it will be far smarter, stabler, more capable, more ethical, empathic, and more self-constrained than biological life could ever be, as an adaptive network. There is little science today to prove or disprove such beliefs. But they are worth stating and arguing.

Arguing the goodness of advanced intelligence was the subtext of the main debate at the SETI conference mentioned above. The highlight of this event was a panel debate on whether it is a good idea to not only listen for signs of extraterrestrial intelligence (SETI), but to send messages (METI), broadcasting our existence, and hopefully, increase the chance that other advanced intelligences will communicate with us earlier, rather than later.

One of the most forceful proponents of METI, Alexander Zaitsev, spoke at this conference. Clement and I had some good chats with him there (see picture below). Beginning in 1999, Zaitsev used a radio telescope in Ukraine, RT-70, to broadcast “Hello” messages to nearby interesting stars. He did not ask permission, or consult with many others, before sending these messages. He simply acted on his belief that doing so would be a good act, and that those able to receive them would not only be more advanced, but would be inherently more good (ethical, empathic) than us.

Image: Alexander Zaitsev and John Smart, Royal Society SETI Conference, Chicheley Hall, UK, 2010. Credit: John Smart.

Sadly, Zaitsev has now passed away (see Paul Gilster’s beautiful elegy for him in these pages). That elegy describes the 2010 conference, where Zaitsev debated others on the METI question, including David Brin. Brin advocates the most helpful position, one that asks for international and interdisciplinary debate prior to the sending of messages. Such debate, and any guidelines it might lead to, can only help us with these important and long-neglected questions.

It was great listening to these titans debate at the conference, yet I also realized how far we are from a science that could establish the general Goodness of the Universe and validate Zaitsev’s belief. We are a long way from his views being popular, or even discussed, today. Many scientists assume that we live in a randomness-dominated, “evolutionary” universe, when it seems much more likely that it is an evo-devo universe, with many things, both unpredictable and predictable, that we can say about the nature of advanced complexity. Also, far too many of us still believe we are headed for the stars, when our history to date shows that the most complex networks are always headed inward, into zones of ever-greater locality, miniaturization, complexity, consciousness, ethics, empathy, and adaptiveness. As I say in my books, it seems that our destiny is density, and dematerialization. Perhaps all of this will even be proven in some future network science. We shall see.


Two Takes on the Extraterrestrial Imperative

Topping the list of priorities for the Decadal Survey on Astronomy and Astrophysics 2020 (Astro2020), just released by the National Academies of Sciences, Engineering, and Medicine, is the search for extraterrestrial life. Entitled Pathways to Discovery in Astronomy and Astrophysics for the 2020s, the report can be downloaded as a free PDF here. At 614 pages, this is not light reading, but it does represent an overview in which to place continuing work on exoplanet discovery and characterization.

In the language of the report:

“Life on Earth may be the result of a common process, or it may require such an unusual set of circumstances that we are the only living beings within our part of the galaxy, or even in the universe. Either answer is profound. The coming decades will set humanity down a path to determine whether we are alone.”

A ~6 meter diameter space telescope capable of spotting exoplanets 10 billion times fainter than their host stars, thought to be feasible by the 2040s, leads the observatory priorities. As forwarded to me by Centauri Dreams regular John Walker, the survey recommends an instrument covering infrared, optical and ultraviolet wavelengths with high-contrast imaging and spectroscopy. Its goal: Searching for biosignatures in the habitable zone. Cost is estimated at an optimistic $11 billion.

I say ‘optimistic’ because of the cost overruns we’ve seen in past missions, particularly JWST. But perhaps we’re learning how to rein in such problems, according to Joel Bregman (University of Michigan), chair of the AAS Committee on Astronomy and Public Policy. Says Bregman:

“The Astro2020 report recommends a ‘technology development first’ approach in the construction of large missions and projects, both in space and on the ground. This will have a profound effect in the timely development of projects and should help avoid budgets getting out of control.”

Time will tell. It should be noted that a number of powerful telescopes, both ground- and space-based, have been built following the recommendations of earlier decadal surveys, of which this is the seventh.
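Going back to the contrast figure quoted above, the ‘10 billion times fainter’ requirement follows from simple geometry. Here is a minimal sketch using round numbers I’m supplying (Earth’s radius, a 1 AU orbit, an albedo of roughly 0.3), with phase effects ignored:

```python
# Why an Earth analog is ~10 billion times fainter than its star in
# reflected light: the planet intercepts and reflects only a tiny
# fraction of the starlight. Round numbers; phase effects ignored.

EARTH_RADIUS_M = 6.371e6
ORBIT_RADIUS_M = 1.496e11   # 1 AU
ALBEDO = 0.3

contrast = ALBEDO * (EARTH_RADIUS_M / (2 * ORBIT_RADIUS_M)) ** 2
print(f"Planet/star flux ratio: {contrast:.1e}")   # ~1e-10
```

That factor of roughly 10^-10 is the contrast the proposed ~6 meter telescope would have to reach to image an Earth-like planet in the habitable zone of a Sun-like star.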

Suborbital Building Blocks

We’re a long way from the envisioned instrument in terms of both technology and time, but the building blocks are emerging and the characterization of habitable planets is ongoing. What a difference between a flagship level space telescope like the one described by Astro2020 and the small, suborbital instrument slated for launch from the White Sands Missile Range in New Mexico on Nov. 8. SISTINE (Suborbital Imaging Spectrograph for Transition region Irradiance from Nearby Exoplanet host stars) is the second of a series of missions homing in on how the light of a star affects biosignatures on its planets.

False positives will likely bedevil biosignature searches as our technology improves. Principal investigator Kevin France (University of Colorado Boulder) points particularly to ultraviolet levels and their role in breaking down carbon dioxide, which frees oxygen atoms to form molecular oxygen, made of two oxygen atoms, or ozone, made of three. These oxygen levels can easily be mistaken for possible biosignatures. Says France: “If we think we understand a planet’s atmosphere but don’t understand the star it orbits, we’re probably going to get things wrong.”

Image: A sounding rocket launches from the White Sands Missile Range, New Mexico. Credit: NASA/White Sands Missile Range.

It’s a good point considering that early targets for atmospheric biosignatures will be M-dwarf stars. Now consider the early Earth, laden with perhaps 200 times more carbon dioxide than today, its atmosphere likewise augmented with methane and sulfur from volcanic activity in the era not long after its formation. It took molecular oxygen a billion and a half years to emerge as nothing more than a waste product produced during photosynthesis, eventually leading to the Great Oxygenation Event.

Oxygen becomes a biomarker on Earth, but it’s an entirely different question around other stars. M-dwarf stars like Proxima Centauri generate extreme levels of ultraviolet light, making France’s point that simple photochemistry can produce oxygen in the absence of living organisms. Bearing in mind that M-dwarfs make up as many as 80 percent of the stars in the galaxy, we may find ourselves with a number of putative biosignatures that turn out to be a reflection of these abiotic reactions. Aboard the spacecraft is a telescope and a spectrograph that will home in on ultraviolet light from 100 to 160 nanometers, which includes the range known to produce false positive biomarkers. The UV output in this range varies with the mass of the star; thus the need to sample widely.

SISTINE-2’s target is Procyon A. The craft will have a brief window of about five minutes from its estimated altitude of 280 kilometers to observe the star, with the instrument returning by parachute for recovery.

An F-class star larger and hotter than the Sun, Procyon A has no known planets, but what is at stake here is accurate determination of its ultraviolet spectrum. The goal is a reference spectrum for F-class stars, built from these observations of Procyon A and incorporating existing X-ray, extreme ultraviolet and visible light data on other F-class stars. France says the next SISTINE target will be Alpha Centauri A and B.

Image: A size comparison of main sequence Morgan-Keenan classifications. Main sequence stars are those that fuse hydrogen into helium in their cores. The Morgan-Keenan system shown here classifies stars based on their spectral characteristics. Our Sun is a G-type star. SISTINE-2’s target is Procyon A, an F-type star. Credit: NASA GSFC.

Launch is to be aboard a Black Brant IX sounding rocket. And although it sounds like a small mission, SISTINE-2 will be working at wavelengths the Hubble Space Telescope cannot observe. Likewise, the James Webb Space Telescope will work at visible to mid-infrared wavelengths, making the SISTINE observations useful at wavelengths that Webb cannot see. The mission also experiments with new optical coatings for better reflection of extreme ultraviolet light, along with what NASA describes as ‘novel UV detector plates.’

Image: SISTINE’s third mission, to be launched in 2022, will target Alpha Centauri A and B. Here we see the system in optical (main) and X-ray (inset) light. Only the two largest stars, Alpha Cen A and B, are visible. These two stars will be the targets of SISTINE’s third flight. Credit: Zdenek Bardon/NASA/CXC/Univ. of Colorado/T. Ayres et al.
