Centauri Dreams

Imagining and Planning Interstellar Exploration

Getting There Quickly: The Nuclear Option

Adam Crowl has been appearing on Centauri Dreams for almost as long as the site has been in existence, a welcome addition given his polymathic interests and ability to cut to the heart of any issue. His long-term interest in interstellar propulsion has recently been piqued by the Jet Propulsion Laboratory’s work on a mission to the Sun’s gravitational lens region. JPL is homing in on multiple sailcraft with close solar passes to expedite the cruise time, leading Adam to run through the options to illustrate the issues involved in so dramatic a mission. Today he looks at the pros and cons of nuclear propulsion, asking whether it could be used to shorten the trip dramatically. Beamed sail and laser-powered ion drive possibilities are slated for future posts. With each of these, if we want to get out past 550 AU as quickly as possible, the devil is in the details. To keep up with Adam’s work, keep an eye on Crowlspace.

by Adam Crowl

The Solar Gravitational Lens amplifies signals from distant stars and galaxies immensely, thanks to the slight distortion of space-time caused by the Sun’s mass-energy. Basically the Sun becomes an immense spherical lens, amplifying incoming light by focussing it hundreds of Astronomical Units (AU) away. Depending on the light frequency, the Sun’s surrounding plasma in its Corona can cause interference, so the minimum distance varies. For optical frequencies it can be ~600 AU at a minimum and light is usefully focussed out to ~1,000 AU.

One AU is traveled in 1 Julian Year (365.25 days) at a speed of 4.74 km/s. Thus to travel 100 AU in 1 year needs a speed of 474 km/s, which is much faster than the 16.65 km/s at which probes have been launched away from the Earth. If a Solar Sail propulsion system could be deployed close to the Sun and have a Lifting Factor (the ratio of Light-Pressure to Weight of the Solar Sail vehicle) greater than 1, then such a mission could be launched easily. However, at present, we don’t have super-reflective, gossamer-light materials that could usefully lift a payload against solar gravity.

Carbon nanotube mesh has been studied in such a context, as has aerographite, but both are yet to be created in large enough areas to carry large payloads. The ratio of the push of sunlight, for a perfect reflector, to the gravity of the Sun means an areal mass density of 1.53 grams per square metre gives a Lifting Factor of 1. A Sail with such an LF will hover when pointing face on at the Sun. If a Solar Sail LF is less than 1, then it can be angled and used to speed up or slow down the Sail relative to its initial orbital vector, but the available trajectories are then slow spirals – not fast enough to reach the Gravity Lens in a useful time.
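
For readers who want to check that 1.53 g/m² figure, here is a minimal sketch in Python, assuming standard solar values and a perfect, face-on reflector. Because both light pressure and solar gravity fall off as the inverse square of distance, the result is independent of where the sail sits.

import math

# Critical sail loading: the areal density at which radiation pressure on a
# perfect, face-on reflector exactly balances the Sun's gravity (Lifting Factor = 1).
L_SUN = 3.828e26   # solar luminosity, W
M_SUN = 1.989e30   # solar mass, kg
G     = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C     = 2.998e8    # speed of light, m/s

sigma_crit = L_SUN / (2 * math.pi * G * M_SUN * C)              # kg/m^2
print(f"critical areal density: {sigma_crit * 1e3:.2f} g/m^2")  # ~1.53 g/m^2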

Image: A logarithmic look at where we’d like to go. Credit: NASA.

Absent super-light Solar Sails, what are the options? Modern day rockets can’t reach 474 km/s without some radical improvements. Multi-grid Ion Drives can achieve exhaust velocities of the right scale, but no power source yet available can supply the energy required. The reason why leads into the next couple of options so it’s worth exploring. For deep space missions the only working option for high-power is a nuclear fission reactor, since we’re yet to build a working nuclear fusion reactor.

When a rocket’s thrust is limited by the power supply’s mass, there is a minimum-power, minimum-travel-time trajectory with a specific acceleration/deceleration profile – it accelerates for 1/3 of the time, cruises at constant speed for 1/3, then brakes for the final 1/3. The minimum Specific Power (Power per kilogram) is:

P/M = (27/4) * S² / T³

…where P/M is Power/Mass, S is displacement (distance traveled) and T is the total mission time to travel the displacement S. In units of AU and Years, the P/M becomes:

P/M = 4.8 * S² / T³ W/kg
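
As a check on that unit conversion, a quick sketch (Python) evaluates (27/4)·S²/T³ with S = 1 AU and T = 1 year expressed in SI units, which recovers the 4.8 W/kg coefficient:

AU   = 1.496e11   # metres in one Astronomical Unit
YEAR = 3.156e7    # seconds in one Julian year

# (27/4) * S^2 / T^3 evaluated at S = 1 AU, T = 1 year gives the coefficient
# of the formula when distances are in AU and times in years.
coefficient = (27 / 4) * AU**2 / YEAR**3
print(f"{coefficient:.2f} W/kg")   # ~4.8, matching the coefficient above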

However while the Average Speed is 474 km/s for a 6 year mission to 600 AU, the acceleration/deceleration must be accounted for. The Cruise Speed is thus 3/2 times higher, so the total Delta-Vee is 3 times the Average Speed. The optimal mass-ratio for the rocket is about 4.41, so the required Effective Exhaust Velocity is a bit over twice the Average Speed – in this case 958 km/s. As a result the energy efficiency is 0.323, meaning the required Specific Power for a rocket is:

P/M = 14.9 * S² / T³ W/kg
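
The kinematics quoted above can be reproduced in a few lines (Python); the mass ratio of 4.41 and the 0.323 energy efficiency are taken as given from the power-limited optimisation described in the text, not derived here.

import math

AU_PER_YEAR_KMS = 4.74          # 1 AU/year expressed in km/s
S, T = 600.0, 6.0               # mission distance (AU) and duration (years)

v_avg    = (S / T) * AU_PER_YEAR_KMS   # average speed: 474 km/s
v_cruise = 1.5 * v_avg                 # 1/3 accelerate, 1/3 cruise, 1/3 brake profile
delta_v  = 2.0 * v_cruise              # speed up to cruise, then kill it again: 3 x average
mass_ratio = 4.41                      # optimal mass ratio quoted above
v_exhaust  = delta_v / math.log(mass_ratio)   # effective exhaust velocity, ~958 km/s

print(v_avg, v_cruise, delta_v, round(v_exhaust))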

For a mission to 600 AU in 6 years a Specific Power of 24,850 W/kg is needed. But this is the ideal Jet-Power – the kinetic energy that actually goes into the forward thrust of the vehicle. Assuming the power source is 40% of the vehicle’s empty mass (with the drive making up another 40% and the payload 10%) and the efficiency of the higher-powered multi-grid ion-drive is 80%, then the power source must produce 77,600 W/kg. Every power source produces waste heat. For a fission power supply, the waste heat can only be expelled by a radiator. Thermodynamic efficiency is defined as the difference in temperature between the heat-source (reactor) and the heat-sink (radiator), divided by the temperature of the heat source:

Thermal Efficiency = (Tsource – Tsink) / Tsource

For a reactor with a radiator in space, the mass of that radiator is (usually) minimised when the efficiency is 25% – so to maximise the Power/Mass ratio the reactor has to be really HOT. The heat of the reactor is carried away into a heat exchanger and then travels through the radiator to dump the waste heat to space. To minimise mass and moving parts, so-called Heat-Pipes can be used, which are conductive channels of certain alloys.

Another option, which may prove highly effective given clever reactor designs, is to use high-performance thermophotovoltaic (TPV) cells to convert high-temperature thermal emissions directly into electrical power. High-performance TPVs have hit 40% efficiency at over 2,000 degrees C, which would also maximise the P/M ratio of the whole power system.

Pure Uranium-235, if perfectly fissioned (a Burn-Up Fraction of 1), releases 88 trillion joules (88 TJ) per kilogram. A jet-power of 24,850 W/kg sustained for 4 years is a total energy output of 3.1 TJ/kg. Operating the Solar Lens Telescope payload won’t require such power levels, so we’ll assume it’s a negligible fraction of the total output – a much lower power setting. So our fuel needs to be *at least* 3.6% Uranium-235. But there are multipliers which increase the fraction required – not all the vehicle will be U-235.

First, the power-supply mass fraction and the ion-drive efficiency – a multiplier of 1/0.32. Therefore the fuel must be 11.1% U-235.

Second, there’s the thermodynamic efficiency. To minimise the radiator area (thus mass) required, it’s set at 25%. Therefore the U-235 is 45.6% of the power system mass. The Specific Power needed for the whole system is thus 310,625 W per kilogram.
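
Pulling that chain of multipliers together, a minimal sketch (Python) using the fractions assumed in the preceding paragraphs recovers the figures above; the only thing it adds is the arithmetic, and small differences are rounding.

jet_power = 24_850                                  # ideal jet power, W per kg of vehicle (from the formula above)
f_power, eta_drive, eta_carnot = 0.40, 0.80, 0.25   # power-source mass fraction, drive efficiency, thermal efficiency

source_power = jet_power / (f_power * eta_drive)    # ~77,600 W per kg of power source
system_power = source_power / eta_carnot            # 310,625 W per kg of the whole power system

U235_YIELD = 88e12                                  # J/kg at a burn-up fraction of 1
jet_energy = jet_power * 4 * 365.25 * 86400         # 4 years of thrusting, ~3.1 TJ per vehicle kg

frac_ideal  = jet_energy / U235_YIELD               # ~3.6% of vehicle mass fissioned as U-235
frac_source = frac_ideal / (f_power * eta_drive)    # ~11.1%
frac_core   = frac_source / eta_carnot              # ~45%, in line with the 45.6% quoted above
print(round(source_power), round(system_power), frac_ideal, frac_source, frac_core)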

The final limitation I haven’t mentioned until now is the thermophysical properties of Uranium itself. Typically Uranium is in the form of Uranium Dioxide, which is 88% uranium by mass. When heated, a material rises in temperature by absorbing (or producing internally) a certain amount of heat – its so-called Heat Capacity. The total amount of heat stored in a given amount of material is called the Enthalpy, but what matters to extracting heat from a mass of fissioning Uranium is the difference in Enthalpy between a Higher and a Lower temperature.

Considering the whole of the reactor core and the radiator as a single unit, the Lower temperature will be the radiator temperature. The Higher will be the Core where it physically contacts the heat exchanger/radiator. Thanks to the Thermal efficiency relation we know that if the radiator is at 2,000 K, then the Core must be at least ~2,670 K. The Enthalpy difference is 339 kilojoules per kilogram of Uranium Oxide core. Extracting that heat difference every second maintains the temperature difference between the Source and the Sink to make Work (useful power) and that means a bare minimum of 91.6% of the specific mass of the whole power system must be very hot fissioning Uranium Dioxide core. Even if the Core is at melting point – about 3120 K – then the Enthalpy difference is 348 kJ/kg – 89.3% of the Power System is Core.
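
A minimal sketch of that last step (Python), using the enthalpy figures quoted above and assuming the enthalpy difference can be extracted once per second:

system_power = 310_625          # required W per kg of the whole power system (from above)

# Enthalpy differences per kg of UO2 core between core and radiator temperatures
dH_2670K = 339e3                # J/kg, core at ~2,670 K against a 2,000 K radiator
dH_melt  = 348e3                # J/kg, core at its ~3,120 K melting point

print(system_power / dH_2670K)  # ~0.916: fraction of the power system that must be hot core
print(system_power / dH_melt)   # ~0.893 even with the core at melting point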

The trend is obvious. The power supply ends up being almost all fissioning Uranium, which is obviously absurd.

To conclude: A fission powered mission to 600 AU will take longer than 6 years. As the Power required is proportional to the inverse cube of the mission time, the total energy required is proportional to the inverse square of the mission time. So a mission time of 12 years means the fraction of U-235 burn-up comes down to a more achievable 22.9% of the power supply’s total mass. A reactor core is more than just fissioning metal oxide. Small reactors have been designed with fuel fractions of 10%, but this is without radiators. A 5% core mass puts the system in range of a 24 year mission time, but that’s approaching near term Solar Sail performance.
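
The scaling in that conclusion is easy to sketch (Python): power goes as 1/T³ but thrust time grows as T, so the total energy, and with it the required fraction of fissioning core, falls as 1/T².

core_fraction_6yr = 0.916           # fissioning-core fraction for the 6-year mission (above)

for T in (6, 12, 24):
    # total energy, and hence the required U-235 / core fraction, scales as (6/T)^2
    print(T, "years:", round(core_fraction_6yr * (6 / T) ** 2, 3))
# 6 years: 0.916   12 years: 0.229   24 years: 0.057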


Solar Gravitational Lens: Sailcraft and In-Flight Assembly

The last time we looked at the Jet Propulsion Laboratory’s ongoing efforts toward designing a mission to the Sun’s gravitational lens region beyond 550 AU, I focused on how such a mission would construct the image of a distant exoplanet. Gravitational lensing takes advantage of the Sun’s mass, which as Einstein told us distorts spacetime. A spacecraft placed on the other side of the Sun from the target exoplanetary system would take advantage of this, constructing a high resolution image of unprecedented detail. It’s hard to think of anything short of a true interstellar mission that could produce more data about a nearby exoplanet.

In that earlier post, I focused on one part of the JPL work, as the team under the direction of Slava Turyshev had produced a paper updating the modeling of the solar corona. The new numerical simulations led to a powerful result. Remember that the corona is an issue because the light we are studying is being bent around the Sun, and we are in danger of losing information if we can’t untangle the signal from coronal distortions. And it turned out that because the image we are trying to recover would be huge – almost 60 kilometers wide at 1200 AU from the Sun if the target were at Proxima Centauri distance – the individual pixels are as much as 60 meters apart.

Image: JPL’s Slava Turyshev, who is leading the team developing a solar gravitational lens mission concept that pushes current technology trends in striking new directions. Credit: JPL/S. Turyshev.

The distance between pixels turns out to help; it actually reduces the integration time needed to pull all the data together to produce the image. The integration time (the time it takes to gather all the data that will result in the final image) is reduced, when pixels are not adjacent, at a rate proportional to the inverse square of the pixel spacing. I’ve more or less quoted the earlier paper there to make the point that according to the JPL work thus far, exoplanet imaging at high resolution using these methods is ‘manifestly feasible,’ another quotation from the earlier work.

We now have a new paper from the JPL team, looking further at this ongoing engineering study of a mission that would operate in the range of 550 to 900 AU, performing multipixel imaging of an exoplanet up to 100 light years away. The telescope is meter-class, the images producing a surface resolution measured in tens of kilometers. Again I will focus on a specific topic within the paper, the configuration of the architecture that would reach these distances. Those looking for the mission overview beyond this should consult the paper, the preprint of which is cited below.

Bear in mind that the SGL (solar gravitational lens) region is, helpfully, not a focal ‘point’ but rather a cylinder, which means that a spacecraft stays within the focus as it moves further from the Sun. This movement also causes the signal to noise ratio to improve, and means we can hope to study effects like planetary rotation, seasonal variations and weather patterns over integration times that may amount to months or years.

Image: From Geoffrey Landis’ presentation at the 2021 IRG/TVIW symposium in Tucson, a slide showing the nature of the gravitational lens focus. Credit: Geoffrey Landis.

Considering that Voyager 1, our farthest spacecraft to date, is now at a ‘mere’ 156 AU, a journey that has taken 44 years, we have to find a way to move faster. The JPL team talks of reaching the focal region in less than 25 years, which implies a hyperbolic escape velocity of more than 25 AU per year. Chemical methods fail, giving us no more than 3 to 4 AU per year, while solar thermal and even nuclear thermal move us into a still unsatisfactory 10-12 AU per year in the best case scenario. The JPL team chooses solar sails in combination with a close perihelion pass of the Sun. The paper examines perihelion possibilities at 15 as well as 10 solar radii but notes that the design of the sailcraft and its material properties define what is going to be possible.
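
To put those figures on a common footing, here is a small sketch (Python) converting the escape velocities quoted above into km/s and into cruise times to 600 AU; the values are the rough ones given in this paragraph, not mission-design numbers.

AU_PER_YEAR_KMS = 4.74

options = {
    "chemical": 4,                    # ~3-4 AU/yr
    "solar / nuclear thermal": 12,    # ~10-12 AU/yr in the best case
    "SGL sailcraft goal": 25,         # >25 AU/yr after a close perihelion pass
}

for name, au_per_year in options.items():
    print(f"{name}: {au_per_year * AU_PER_YEAR_KMS:.0f} km/s, "
          f"{600 / au_per_year:.0f} years to 600 AU")
# For comparison, Voyager 1 has averaged ~3.5 AU/yr (156 AU in 44 years).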

Remember that we have also been looking at the ongoing work at the Johns Hopkins Applied Physics Laboratory involving a mission called Interstellar Probe, which likewise is in need of high velocity to reach the distances needed to study the heliosphere from the outside (a putative goal of 1000 AU in 50 years has been suggested). The JHU/APL effort has just released a new paper of its own, and I’ll be referring to it in the near future; thus far the researchers working under Ralph McNutt on the problem have not found a close perihelion pass, coupled with a propulsive burn but without a sail, to be sufficient for their purposes. But more on that later. Keep it in mind in relation to this, from the JPL paper:

…the stresses on the sailcraft structure can be well understood. For the sailcraft, we considered among other known solar sail designs, one with articulated vanes (i.e., SunVane). While currently at a low technology readiness level (TRL), the SunVane does permit precision trajectory insertion during the autonomous passage through solar perigee. In addition, the technology permits trimming of the trajectory injection errors while still close to the Sun. This enables the precision placement of the SGL spacecraft on its path towards the image cylinder which is 1.3 km in diameter and some 600+ AU distant.

Is the SunVane concept the game-changer here? I looked at it 18 months ago (see JPL Work on a Gravitational Lensing Mission), where I used the image below to illustrate the concept. The sail is constructed of square panels aligned along a truss. In the Phase II study for NIAC that preceded the current papers, a sail based on SunVane design could achieve 25 AU per year – that would be arrival at 600 AU in 26 years in conjunction with a close solar pass – using a craft with a total sail area of 45,000 square meters (equivalent to a single square sail a bit over 200 meters on a side).

Image: The SunVane concept. Credit: Darren D. Garber (Xplore, Inc).

With sail area distributed along the truss rather than confined to the sail’s center of gravity, this is a highly maneuverable design that continues to be of great interest. Maneuverability is a key factor as we look at injecting spacecraft into perihelion trajectory, where errors can be trimmed out while still in close proximity to the Sun.

But current thinking goes beyond flying a single spacecraft. What the JPL work has developed through the three NIAC phases and beyond is a mission built around a constellation of smaller spacecraft. The idea is chosen, the authors say, to enhance redundancy, enable the needed precision of navigation, remove the contamination of background light during SGL operations, and optimize the return of data. What intrigues me particularly is the use of in-flight assembly, with the major spacecraft modules placed on separate sailcraft. This will demand that the sailcraft fly in formation in order to effect the needed rendezvous for assembly.

Let’s home in on this concept, pausing briefly on the sail, for this mission will demand an attitude control system to manage the thrust vector and sail attitude once we have reached perihelion with our multiple craft, each making a perihelion pass followed by rendezvous with the other craft. I turn to the paper for more:

Position and velocity requirements for the incoming trajectory prior to perihelion are < 1 km and ≤1 cm/sec. Timing through perihelion passage is days to weeks with errors in entry-time compensated in the egress phase. As an example, if there is a large position and/or velocity error upon perihelion passage that translated to an angular offset of 100” from the nominal trajectory, there is time to correct this translational offset with the solar sail during the egress phase all the way out to the orbit of Jupiter. The sail’s lateral acceleration is capable of maneuvering the sailcraft back to the desired nominal state on the order of days depending on distance from the Sun. This maneuvering capability relaxes the perihelion targeting constraints and is well within current orbit determination knowledge threshold for the inner solar system which drive the ≤1 km and ≤1 cm/sec requirements.

Why the need to go modular and essentially put the craft together during the cruise phase? The paper points out that the 1-meter telescope that will be necessary cannot currently be produced in the mass and volume range needed to fit a CubeSat. The mission demands something on the order of a 100 kg spacecraft, which in turn would demand solar sails of extreme size as needed to reach the target velocity of 20 AU per year or higher. Such sails will be commonplace one day (I assume), but with the current state of the art, in-flight robotic assembly leverages our growing experience with miniaturization and small satellites and allows for a mission within a decade.

If in-flight assembly is used, because of the difficulties in producing very large sails, the spacecraft modules…are placed on separate sailcraft. After in-flight assembly, the optical telescope and if necessary, the thermal radiators are deployed. Analysis shows that if the vehicle carries a tiled RPS [radioisotope power system]…where the excess heat is used for maintaining spacecraft thermal balance, then there is no need for thermal radiators. The MCs [the assembled spacecraft] use electric propulsion (EP) to make all the necessary maneuvers for the cruise (~25 years) and science phase of the mission. The propulsion requirements for the science phase are a driver since the SGL spacecraft must follow a non inertial motion for the 10-year science mission phase.

According to the authors, numerous advantages accrue from using a modular approach with in-space assembly, including the ability to use rideshare services; i.e., we can launch modules as secondary payloads, with related economies in money and time. Moreover, such a use means that we can use conventional propulsion rather than sails as an option for carrying the cluster of sailcraft inbound toward perihelion in formation. In any case, at some point the sailcraft deploy their sails and establish the needed trajectory for the chosen solar perihelion point. After perihelion, the sails — whose propulsive qualities diminish with distance from the Sun — are ejected, perhaps nearing Earth orbit, as the sailcraft prepare for assembly.

Flying in formation, the sailcraft reduce their relative distance outbound and begin the in-space assembly phase while passing near Earth orbit. The mission demands that each of the 10-20 kg mass spacecraft be a fully functional nanosatellite that will use onboard thrusters for docking. Autonomous docking in space has already been demonstrated, essentially doing what the SGL mission will have to do, assembling larger craft from smaller ones. It’s worth noting, as the authors do, that NASA’s space technology mission directorate has already begun a project called On-Orbit Autonomous Assembly from Nanosatellites-OAAN along with a CubeSat Proximity Operations Demonstration (CPOD) mission, so we see these ideas being refined.

What demands attention going forward is the needed development of proximity operation technologies, which range from sensor design to approach algorithms, all to be examined as study of the SGL mission continues. There was a time when I would have found this kind of self-assembly en route to deep space fanciful, but there was also a time when I would have said landing a rocket booster on its tail for re-use was fanciful, and it’s clear that self-assembly in the SGL context is plausible. The recent deployment of the James Webb Space Telescope reinforces the same point.

The JPL team has been working with simulation tools based on concurrent engineering methodology (CEM), modifying current software to explore how such ‘fractionated’ spacecraft can be assembled. Note this:

Two types of distributed functionality were explored. A fractionated spacecraft system that operates as an “organism” of free-flying units that distribute function (i.e., virtual vehicle) or a configuration that requires reassembly of the apportioned masses. Given that the science phase is the strong driver for power and propellant mass, the trade study also explored both a 7.5 year (to ~800 AU) and 12.5 year (to ~900 AU) science phase using a 20 AU/yr exit velocity as the baseline. The distributed functionality approach that produced the lowest functional mass unit is a cluster of free-flying nanosatellites…each propelled by a solar sail but then assembled to form a MC [mission capable] spacecraft.

Image: Various approaches will emerge about the kind of spacecraft that might fly a mission to the gravitational focus of the Sun. In this image (not taken from the Turyshev et al. paper), swarms of small sailcraft capable of self-assembly into a larger spacecraft are depicted that could fly to a spot where our Sun’s gravity distorts and magnifies the light from a nearby star system, allowing us to capture a sharp image of an Earth-like exoplanet. Credit: NASA/The Aerospace Corporation.

The current paper goes deeply into the attributes of the kind of nanosatellite that can assemble the final design, and I’ll send you to it for further details. Each of the component craft has the capability of a 6U CubeSat/nanosat and each carries components of the final craft, from optical communications to primary telescope mirror. Current thinking is that the design is in the shape of a round disk about 1 meter in diameter and 10 cm thick, with a carbon fiber composite scaffolding. The idea is to assemble the final craft as a stack of these units, producing the final round cylinder.

What a fascinating, gutsy mission concept, and one with the possibility of returning extraordinary data on a nearby exoplanet. The modular approach can be used to enhance redundancy, the authors note, as well as allowing for reconfiguration to reduce the risk of mission failure. Self-assembly leverages current advances in miniaturization, composite materials, and computing as reflected in the proliferation of CubeSat and nanosat technologies. What this engineering study is pointing to is a mission to the solar gravity lens that seems feasible with near-term technologies.

The paper is Helvajian et al., “A mission architecture to reach and operate at the focal region of the solar gravitational lens,” now available as a preprint. The earlier report on the study’s progress is “Resolved imaging of exoplanets with the solar gravitational lens,” (preprint). The Phase II NIAC report on this work is Turyshev & Toth, “Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravity Lens Mission,” Final Report NASA Innovative Advanced Concepts Phase II (2020). Full text.


Getting Down to Business with JWST

So let’s get to work with the James Webb Space Telescope. Those dazzling first images received a gratifying degree of media attention, and even my most space-agnostic neighbors were asking me about what exactly they were looking at. For those of us who track exoplanet research, it’s gratifying to see how quickly JWST has begun to yield results on planets around other stars. Thus WASP-96 b, 1150 light years out in the southern constellation Phoenix, a lightweight puffball planet scorched by its star.

Maybe ‘lightweight’ isn’t the best word. Jupiter is roughly 320 Earth masses, and WASP-96b weighs in at less than half that, but its tight orbit (0.04 AU, or almost ten times closer to its Sun-like star than Mercury) has puffed its diameter up to 1.2 times that of Jupiter. This is a 3.5-day orbit producing temperatures above 800 °C.

As you would imagine, this transiting world is made to order for analysis of its atmosphere. To follow JWST’s future work, we’ll need to start learning new acronyms, the first of them being the telescope’s NIRISS, for Near-Infrared Imager and Slitless Spectrograph. NIRISS was a contribution to the mission from the Canadian Space Agency. The instrument measured light from the WASP-96 system for 6.4 hours on June 21.

Parsing the constituents of an atmosphere involves taking a transmission spectrum, which examines the light of a star as it filters through a transiting planet’s atmosphere. This can then be compared to the light of the star when no transit is occurring. As specific wavelengths of light are absorbed during the transit, atmospheric gasses can be identified. Moreover, scientists can gain information about the atmosphere’s temperature based on the height of peaks in the absorption pattern, while the spectrum’s overall shape can flag the presence of haze and clouds.

These NIRISS observations captured 280 individual spectra detected in a wavelength range from 0.6 microns to 2.8 microns, thus taking us from red into the near infrared. Even with a relatively large object like a gas giant, the actual blockage of starlight is minute, here ranging from 1.36 percent to 1.47 percent. As the image below reveals, the results show the huge promise of the instrument as we move through JWST’s Cycle 1 observations, nearly a quarter of which are to be devoted to exoplanet investigation.
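
To see why the blockage is so small, the transit depth is roughly the square of the planet-to-star radius ratio. A minimal sketch (Python) using the 1.2 Jupiter radii quoted above; the host-star radius is my assumption of a roughly Sun-sized star, not a figure from the article.

R_JUPITER_KM = 71_492
R_SUN_KM     = 695_700

r_planet = 1.2 * R_JUPITER_KM     # WASP-96 b radius, from the article
r_star   = 1.05 * R_SUN_KM        # assumed host-star radius (roughly Sun-like)

transit_depth = (r_planet / r_star) ** 2   # fraction of starlight blocked mid-transit
print(f"{transit_depth:.2%}")              # ~1.4%, consistent with the 1.36-1.47% range above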

Image: A transmission spectrum is made by comparing starlight filtered through a planet’s atmosphere as it moves across the star, to the unfiltered starlight detected when the planet is beside the star. Each of the 141 data points (white circles) on this graph represents the amount of a specific wavelength of light that is blocked by the planet and absorbed by its atmosphere. The gray lines extending above and below each data point are error bars that show the uncertainty of each measurement, or the reasonable range of actual possible values. For a single observation, the error on these measurements is remarkably small. The blue line is a best-fit model that takes into account the data, the known properties of WASP-96 b and its star (e.g., size, mass, temperature), and assumed characteristics of the atmosphere. Credit: NASA, ESA, CSA, and STScI.

No more detailed infrared transmission spectrum has ever been taken of an exoplanet, and this is the first that includes wavelengths longer than 1.6 microns at such resolution, as well as the first to cover the entire frequency range from 0.6 to 2.8 microns simultaneously. Here we can detect water vapor and infer the presence of clouds, as well as finding evidence for haze in the shape of the slope at the left of the spectrum. Peak heights can be used to deduce an atmospheric temperature of about 725 °C.

Moving into wavelengths longer than 1.6 microns gives scientists a part of the spectrum that is made to order for the detection of water, oxygen, methane and carbon dioxide, all of which are expected to be found in other exoplanets observed by the instrument, and a portion of the spectrum not available from predecessor instruments. All this bodes well for what JWST will have to offer as it widens its exoplanet observations.

Spatial-Temporal Variance Explanation for the Fermi Paradox

Just how likely is it that the galaxy is filled with technological civilizations? Kelvin F Long takes a look at the question using diffusion equations to probe the possible interactions among interstellar civilizations. Kelvin is an aerospace engineer, physicist and author of Deep Space Propulsion: A Roadmap to Interstellar Flight (Springer, 2011). He is the Director of the Interstellar Research Centre (UK), has been on the advisory committee of Breakthrough Starshot since its inception in 2016, and was the co-founder of Icarus Interstellar and the Initiative/Institute for Interstellar Studies. He has served as editor of the Journal of the British Interplanetary Society and continues to maintain the Interstellar Studies Bibliography, currently listing some 1400 papers on the subject.

by Kelvin F Long

Many excellent papers have been written about the Fermi paradox over the years, and until we find solid evidence for the existence of life or intelligent life elsewhere in the galaxy the best we can do is to estimate based on what we do know about the nature of the world we live in and the surrounding universe we observe across space and time.

Yet ultimately, to increase the chances of finding life we need to send robotic probes beyond our solar system to visit the planets around other stars. Whilst telescopes can do a lot of significant science, in principle a probe can conduct in-situ reconnaissance of a system, including orbiters, atmospheric penetrators and even landers.

Currently, the Voyager 1 and 2 probes are taking up the vanguard of this frontier and hopefully in the years ahead more will follow in their wake. Although these are only planetary flyby probes and would take tens of thousands of years to reach the nearest stars, our toes have been dipped into the cosmic ocean at least, and this is a start.

If we can send a probe out into the Cosmos, it stands to reason that other civilizations may do the same. As probes from different civilizations explore space, there is a possibility that they may encounter each other. Indeed, it could be argued that species-species first contact is more likely to occur with their robotic ambassadors than with the original biological organisms that launched them on their vast journeys.

However, the actual probability of two different probes from alternative points of origin (different species) interacting is low. This is for several reasons. The first relates to astrobiology in that we do not yet know how frequent life is in the galaxy. The second relates to the time of departure of the probes within the galaxy’s history. Two probes may appear in the same region of space, but if this happens millions of years apart then they will not meet. Third, and an issue not often discussed in the literature, is the fact that each probe will have a different propulsion system and so its velocity of motion will be different.

As a result, not only do probes have to contend with relativistic effects with respect to their world of origin (particularly if they are going close to the speed of light), but they will also have to deal with the fact that their clocks are not synchronised with each other. The implication is that for probes interacting from civilizations that are far apart, the relativistic effects become so large that they create a complex scenario of temporal synchronization. This becomes more pronounced the greater the number of different species of probes, and the larger the differences in their respective average speeds. This is a state we might call ‘temporal spaghettification’, in reference to the complex space-time history of the spacecraft trajectories relative to each other.

An implication of this is that ideas like the Isaac Asimov Foundation series, where vast empires are constructed across hundreds or thousands of light years of space, do not seem plausible. This is particularly the case for ultra-fast speeds (where relativistic effects dominate) that do approach the speed of light. In general, the faster the probe speeds and the further apart the separate civilizations, the more pronounced the effect. In 2016 this author framed the idea as a postulate:

“Ultra-relativistic spaceflight leads to temporal spaghettification and is not compatible with galaxy wide civilizations interacting in stable equilibrium.”

Another consequence of ultra-fast speeds is that if civilizations do interact, it will not be possible to prevent the technology (i.e. power and propulsion) associated with the more advanced race from eventually emerging within the other species at some point in the future. Imagine, for example, if a species turned up with faster than light drives and simply chose to share that technology, even if for a price, as a part of a cultural information exchange.

Should such a culture refuse to share that technology with us, we would likely work towards its fruition anyway. This is because our knowledge of its existence will promote research within our own science to work towards its realisation. Alternatively, knowledge of that technology will eventually just leak out and be known by others.

There is also a statistical probability that if it can be invented by one species, it will be invented by another; as a law of large numbers. As a result when one species has this technology and starts interacting with others, eventually many other species will obtain it, even if it takes a long time to mature. We might think of this as a form of technological equilibration, in reference to an analogy to thermodynamics.

Ultimately, this implies that it is not possible to contain the information associated with the technology forever once species-species interaction begins. Indeed, it has been discovered that even the gravitational prisons of light (black holes) are leaky through Hawking evaporation. The idea that there is no such thing as a permanently closed system was also previously framed as a second postulate by this author:

“No information can be contained in any system indefinitely.”

Adopting analogues from plasma physics and the concept of distribution functions, we can imagine a scenario in which within a galaxy there are multiple populations, each sending out waves of probes at some average expansion velocity. If most of the populations adopted fusion propulsion technology, for example, as their choice of interstellar transport, then the average velocity might be around 0.1c (i.e. plausible speeds for fusion propulsion are 0.05-0.15c) and this would then define the peak of a velocity distribution function.

The case of human-carrying ships may be represented by world ships traveling at the slow speeds of 0.01-0.03c. In the scenario of the majority of the populations employing a more energetic propulsion method, such as using antimatter fuel, the peak would shift to the right. In general, the faster the average expansion speed, the further to the right the peak would shift, since the peak represents the average velocity.

The more the populations interacted, the greater the technological equilibration over time, and this could see a gradual shift into the relativistic and then ultra-relativistic (>0.9c) speed regimes. Yet, due to the limiting factor of the speed of light (~300,000 km/s, or 1c), the peak could only creep asymptotically towards that limit.

There is also the special case of faster-than-light travel (ftl), but by the second postulate if any one civilization develops it then eventually many of the others will also develop it. Then as the mean velocity of many of the galactic populations tends towards some ftl value, you get a situation where many civilizations can now leave the galaxy, creating a massive population expansion outwards, as starships are essentially capable of reaching other galaxies. That population would also be expanding inwards to the other stars within our galaxy since trip times are so short. Indeed, ships would also be arriving from other galaxies due to the ease of travel. But if this were the case, starships would be arriving in Earth orbit by now.

In effect, the more those civilizations interact, the more the average speed of spacecraft in the galaxy would shift to higher speeds, and eventually this average would begin to move asymptotically towards ftl (assuming it is physically possible), which is an effect we might refer to as ‘spatial runaway’ since there is no longer any tendency towards some equilibrium speed limit. In addition, the ubiquity of ftl transport comes with all sorts of implications for communications and causality and in general creates a chaotic scenario that does not lean towards a stable state.

This then leads to the third postulate:

“Faster than light spaceflight leads to spatial runaway, and is not compatible with galaxy wide civilizations interacting in stable equilibrium.”

Each species that is closely interacting may start out with different propulsion systems so that they have an average speed of population expansion, but if technology is swapped there will be some sort of equilibration that will occur such that all species tend towards some mean velocity of population diffusion.

The modeling of a population density of a substance is borrowed from stochastic potential theory, with discrete implementation for the quantization of space and time intervals by the use of average collision parameters. This is analogous to problems such as Brownian motion, where particles undergo a random walk. This can be adopted as an analogy to explain the motion of a population of interstellar probes dispersing through the galaxy from a point of origin.

Modeling population interaction is best done using the diffusion equation of physics, which is derived from Fick’s first and second laws for the dispersion of a material flux, together with the continuity equation. This is a second-order partial differential equation whose solution describes a population that starts at some initial high density and drops to some low density. The solution is a flux that is a function of both distance and time, and it is proportional to the exponential of the negative distance squared.

Using this physics as a model, it is possible to show that the galaxy can be populated within only a couple of million years, but even faster if the population is growing rapidly, as for instance via von Neumann self-replication. A key part of the use of the diffusion equation is the definition of a diffusion coefficient which is equal to ½(distance squared/time), where the distance is the average collision distance between stars (assumed to be around 5 light years) and time is the average collision time between stars (assumed to be between 50-100 years for 0.05-0.1c average speed). These relatively low cruise speeds were chosen because the calculations were conducted in relation to fusion propulsion designs only.
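
A toy one-dimensional version of this can be sketched as below (Python). The step length, speed and seed population are the figures assumed in the text; the Gaussian form is the standard point-release solution of the diffusion equation. This is only an illustration of the exponential fall-off, not a reproduction of the author's full calculation.

import math

STEP_LY = 5.0            # average star-to-star 'collision' distance, light years
SPEED_C = 0.1            # probe cruise speed as a fraction of c
TAU_YR  = STEP_LY / SPEED_C          # time per step: 50 years at 0.1c
D       = 0.5 * STEP_LY**2 / TAU_YR  # diffusion coefficient, ly^2 per year

def probe_density(r_ly, t_yr, n0=1e6):
    """1-D point-release solution of the diffusion equation: ~ exp(-r^2 / 4Dt)."""
    return n0 / math.sqrt(4 * math.pi * D * t_yr) * math.exp(-r_ly**2 / (4 * D * t_yr))

for r in (50, 100, 150):
    print(f"{r} ly after 1,000 yr: {probe_density(r, 1000):.3g} probes per ly")
# The density collapses by orders of magnitude beyond ~100 ly, echoing the
# stagnation horizon described in the next paragraph.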

For probes that on average eventually manufacture just one more probe each (i.e., not fully self-reproducing), this might be seen as analogous to a critical nuclear state. Where the probe reproduction rate drops to less than unity on average, this is like a sub-critical state and eventually the probe population will fall off until some stagnation horizon is reached. For example, calculations by this author using the diffusion equation show that with an initial population as large as 1 million probes, each traveling at an average velocity of 0.1c, after about 1,000 years the population would have stagnated at a distance of approximately 100 light years.

If however, the number of probes being produced is greater than unity, such as through self-replicating von Neumann probes, then the population will grow from a low density state to a high density state as a type of geometrical progression. This is analogous to a supercritical state. For example, if each probe produced a further two probes on average from a starting population of 10 probes, then by the 10th generation there would be a total of 10,000 probes in the population.

Assume that there are at least 100 billion stars in the Milky Way galaxy. For the number of von Neumann probes in the population to equal that number of stars would only require a starting population of less than 100 probe factories, with each producing 10 replication probes, and after only 10 generations of replication. This underscores the argument made by some such as Boyce (Extraterrestrial Encounter, A Personal Perspective, 1979) that von Neumann-like replication probes should be here already. The suggestion of self-replicating probes was advanced by Bracewell (The Galactic Club: Intelligent Life in Outer Space, 1975) but has its origins in automata replication and the research of John von Neumann (Theory of Self-Reproducing Automata, 1966).
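
The replication arithmetic in the last two paragraphs amounts to simple geometric growth; a short sketch (Python) reproduces both figures.

def probe_population(seed, offspring_per_probe, generations):
    # every probe in each generation builds 'offspring_per_probe' new probes
    return seed * offspring_per_probe ** generations

print(probe_population(10, 2, 10))    # ~10,000 probes after ten generations of doubling
print(probe_population(10, 10, 10))   # 10^11: one probe per Milky Way star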

Any discussion about robotic probes interacting is also a discussion about the number of intelligent civilizations – such probes had to be originally designed by someone. It is possible that these probes are no longer in contact with their originator civilization, which may be many hundreds of light years away. This is why such probes would have to be fully autonomous in their decision making capability. Indeed, it could be argued that the probability of the human species first meeting an artificial intelligence-based robotic probe is more likely than meeting an alien biological organism. It may also be the case that in reality there is no difference, if biological entities have figured out how to go fully artificial and avoid their mortal fate.

Indeed, when considering the future of Homo Sapiens and our continued convergence with technology the science and science fiction writer Arthur C Clarke referred to a new species that would eventually emerge, which he called Homo Electronicus. He depicted it thus:

“One day we may be able to enter into temporary unions with any sufficiently sophisticated machines, thus being able not merely to control but to become a spaceship or a submarine or a TV network….the thrill that can be obtained from driving a racing car or flying an aeroplane may be only a pale ghost of the excitement our great grandchildren may know, when the individual human consciousness is free to roam at will from machine to machine, through all reaches of sea and sky and space.” (Profiles of the Future, 1962).

So even the idea of separating a biological organism from a machine intelligence may be an incorrect description of the likely encounter scenarios of the future. A von Neumann robotic spacecraft could turn up in our orbit tomorrow and from a cultural information exchange perspective there may be no distinction. It is certainly the case that robotic probes are more suited for the environment of space than biological organisms that require a survival environment.

Consider a thought experiment. Assume the galaxy’s disc diameter is 100,000 light years and consider only one dimension of space. A population of probes starts out at one end with an average diffusion wave speed of around 10 percent of the speed of light (0.1c). We assume no stopping and instantaneous time between populations of diffusion waves (in reality, there would be a superposition of diffusion waves propagating as a function of distance and time). This diffusion wave would take on the order of 1 million years to cross from one side of the galaxy to the other. We can continue this thought experiment and imagine that the same population starts at the centre and expands out as a spherical diffusion wave. Assuming that the wave did not dissipate and continued to grow, the time to cover the galactic disc would be approximately half what it would be had the wave started at one edge.

Now imagine there are two originating civilizations, each sending out populations of probes that continue to grow and do not dissipate. These two civilizations are located at opposite ends of the galaxy. The time for the galaxy to be covered by the two populations will now be half that of a single population starting out on the edge of the disc. We can continue to add more populations n=1,2,3,4,5,6…and we get t, t/2, t/4, t/6, t/8, t/10…so that for n>1 the interaction time follows a series of the form tn = t0/[2(n-1)], where t0 is the galactic crossing timescale (i.e. 1 million years) assumed for an initiating population of probes derived from a single civilization, which is itself a function of the diffusion wave speed.
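
That series can be written as a one-line function (Python); t0 is the single-population crossing time of roughly a million years used above.

T0_YEARS = 1.0e6   # galactic crossing time for one population at ~0.1c

def interaction_time(n):
    # crossing / interaction timescale for n independently seeded probe populations
    return T0_YEARS if n == 1 else T0_YEARS / (2 * (n - 1))

print([interaction_time(n) for n in range(1, 7)])
# [1e6, 5e5, 2.5e5, ~1.67e5, 1.25e5, 1e5], i.e. t, t/2, t/4, t/6, t/8, t/10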

So that for a high number of initiating populations, where n → infinity, the interaction time between populations will be low, so that tn → 0, and the probability of interaction is therefore high. However, for a low number of initiating populations, where n → 0, the interaction time between populations will be high, so that tn → infinity; thus the timescales between potential interactions are a lot larger and the probability of interaction is therefore low.

It is important to clarify the definition of interaction time used here. The shorter the interaction time, the higher the probability of interaction, since the time between effective overlapping diffusion waves is short. Conversely, where the interaction time is long, the time between overlapping diffusion waves is long and so the probability of interaction is low. The illustrated graphic below demonstrates these limits and the boxes are the results of diffusion calculations and the implications for population interaction.

As discussed by Bond & Martin (‘Is Mankind Unique?’, JBIS 36, 1983), the graphic illustrates two extreme viewpoints about intelligence within the galaxy. The first is known as Drake-Sagan chauvinism and advocates for a crowded galaxy. This has been argued by Shklovskii & Sagan (‘Intelligent Life in the Universe’, 1966) and Sagan & Drake (The Search for Extraterrestrial Intelligence, 1975). In the graphic this occurs when n → ∞ and tn → 0, so that the probability of interaction is extremely high.

This is especially so since there are likely to be a large superposition of diffusion waves overlapping each other. This effect would become more pronounced for multiple populations of vN probes diffusing simultaneously. We note also that an implication of this model for the galaxy is that if there are large populations of probes, then there must have been large populations of civilizations to launch them, which implies that the many steps to complexity in astrobiology are easier than we might believe. In terms of diffusion waves this scenario is characterised by very high population densities such that ρ(S,t) → ∞, which also implies that the probability of probe-probe interaction is high, p(S,t) → 1. This is box (d) in the graphic.

The second viewpoint is known as Hart-Viewing chauvinism and advocates for a quiet galaxy. This has been argued by Tipler (‘Extraterrestrial Intelligent Beings do not Exist’, 1980), Hart (‘An Explanation for the Absence of Extraterrestrials on Earth’, 1975) and Viewing (‘Directly Interacting Extraterrestrial Technological Communities’, 1975). This occurs when n → 0 and tn → ∞, so that the probability of interaction is extremely low. In contrast with the first argument, this might imply that the many steps to complexity in astrobiology are hard. This scenario is characterised by very low population densities such that ρ(S,t) → 0, so that few diffusion waves can be expected, and also that the probability of interaction is low, p(S,t) → 0. This is box (a) in the graphic.

In discussing biological complexity, we are referring to the difficulty in going from single-celled to multi-celled organisms, but then also to large animals, and then to intelligent life which proceeds towards a state of advanced technological attainment. A state where biology is considered ‘easy’ is when all this happens regularly provided the environmental conditions for life are met within a habitat. A state where biology is considered ‘hard’ may be, for example, one where life can emerge purely as a function of chemistry, but where building that up to more complex life, such as an intelligent life-form that may one day build robotic probes, is a lot more difficult and less probable. This is a reference to the science of astrobiology which will not be discussed further here. However, since the existence of robotic probes would require a starting population of organisms it has to be mentioned at least.

Given that these two extremes are the limits of our argument, it stands to reason that there must be transition regimes in between which either work towards or against the existence of intelligence and therefore the probability of interaction. The right set of parameters would be optimum to explain our own thinking around the Fermi paradox in terms of our theoretical predictions being in contradiction to our observations.

As shown in the graphic it comes down to the variance σ² of the statistical distribution of the distance S for a number of probe populations ni within a region of space in a galaxy (not necessarily a whole galaxy), where the variance is the square of the standard deviation σ about the mean distance between population sources. In other words, it depends on whether the originating civilizations that initiated the probe populations are closely compacted or widely spread out.

A region of space which had a high probe population density (not spread out, or a sharp distribution function) would be characterised by a low variance. A region with a low probe population density (widely distributed, or a flattened distribution function) would be characterised by a high variance. The starting interaction time t0 of two separate diffusion waves from independent civilizations would then be proportional to the variance and inversely proportional to the diffusion wave velocity vdw of each population, such that t0 is proportional to σ²/vdw.

Going back to the graphic, there comes a point where the number of populations of probes becomes less than some critical number n < nc, the value of which we do not know, but as this threshold is crossed the interaction time will also increase past its critical value tn > tc. In box (c) of the graphic, biology is ‘hard’ and so, despite the low variance, the population density will be less than some critical value ρ(S,t) < ρc(S,t), which means that the probability of probe-probe interaction will be low, p(S,t) → 0. This is referred to as a low spatio-temporal distributed galaxy. In box (b) of the graphic, by contrast, although biology may be ‘easy’, the large variance of the populations makes for a low combined population density and so also a low probability of probe-probe interaction. This is referred to as a high spatio-temporal distributed galaxy.

Taking all this into account and assessing the Milky Way, we don’t see evidence of a crowded galaxy, which would rule out box (d) in the graphic. In this author’s opinion the existence of life on Earth and its diversity does not imply (at least) consistency with a quiet galaxy (unless one is invoking something special about planet Earth). This is indicated in (a). On the basis of all this, we might consider a fourth postulate along the following lines:

“The probability of interaction for advanced technological intelligent civilizations within a galaxy strongly depends on the number of such civilizations, and their spatial-temporal variance.”

Due to the exponential fall-off in the solution of the diffusion wave equation, the various calculations by this author suggest that intelligent life may occur at distances of less than ~200 ly, which for a 100-200 kly diameter galaxy might suggest somewhere in the range of ~500-1,000 intelligent civilizations along a galactic disc. Given the vast numbers of stars in the galaxy this would lean towards a sparsely populated galaxy, but one where civilizations do occur. Then considering the calculated time scales for interaction, the high probability of von Neumann probes or other types of probes interacting therefore remains.

We note that the actual diffusion calculations performed by this author showed that even with a seed population of 1 billion probes, the distance where the population falls off was at around ~164 ly. This is not too dissimilar to the independent conclusion of Betinis (“On ETI Alien Probe Flux Density”, JBIS, 1978) who calculated that the sources of probes would likely be somewhere within 70-140 ly. Bond and Martin (‘A Conservative Estimate of the Number of Habitable Planets in the Galaxy’ 1978) also calculated that the average distance between habitable planets was likely ~110 ly and ~140 ly between intelligent life relevant planets. Sagan (‘Direct Contact Among Galactic Civilizations by Relativistic Interstellar Spaceflight’, 1963) also calculated that the most probable distance to the nearest extant advanced technical civilization in our galaxy would be several hundred light years. This all implies that an extraterrestrial civilization would be at less than several hundred light years distance, and this therefore is where we should focus search efforts.

When it comes down to the Fermi paradox, this analysis implies that we live in a moderately populated galaxy, and so the probability of interaction is low when considering both the spatial and temporal scales. However, when it comes to von Neumann probes it is clear that the galaxy could potentially be populated in a timescale of less than a million years. This implies they should be here already. As we perhaps ponder recent news stories that are gaining popular attention, we might once again consider the words of Arthur C Clarke in this regard:

“I can never look now at the Milky Way without wondering from which of those banked clouds of stars the emissaries are coming…I do not think we will have to wait for long.” (‘The Sentinel’, 1951).

The content of this article is by this author and appears in a recently accepted 2022 paper for the Journal of the British Interplanetary Society titled ‘Galactic Crossing Times for Robotic Probes Driven by Inertial Confinement Fusion Propulsion’, as well as in an earlier paper published in the same journal titled ‘Unstable Equilibrium Hypothesis: A Consideration of Ultra-Relativistic and Faster than Light Interstellar Spaceflight’, JBIS, 69, 2016.


Probing the Galaxy: Self-Reproduction and Its Consequences

In a long and discursive paper on self-replicating probes as a way of exploring star systems, Alex Ellery (Carleton University, Ottawa) digs, among many other things, into the question of what we might detect from Earth of extraterrestrial technologies here in the Solar System. The idea here is familiar enough. If at some point in our past, a technological civilization had placed a probe, self-replicating or not, near enough to observe Earth, we should at some point be able to detect it. Ellery believes such probes would be commonplace because we humans are developing self-replication technology even today. Thus a lack of probes would indicate that there are no extraterrestrial civilizations to build them.

There are interesting insights in this paper that I want to explore, some of them going a bit far afield from Ellery’s stated intent, but worth considering for all that. SETA, the Search for Extraterrestrial Artifacts, is a young endeavor but a provocative one. Here self-replication attracts the author because probing a stellar system is a far different proposition than colonizing it. In other words, exploration per se — the quest for information — is a driver for exhaustive seeding of probes not limited by issues of sustainability or sociological constraints. Self-replication, he believes, is the key to exponential exploration of the galaxy at minimum cost and greatest likelihood of detection by those being studied.

Image: The galaxy Messier 101 (M101, also known as NGC 5457 and nicknamed the ‘Pinwheel Galaxy’) lies in the northern circumpolar constellation, Ursa Major (The Great Bear), at a distance of about 21 million light-years from Earth. This is one of the largest and most detailed photos of a spiral galaxy that has been released from Hubble. How long would it take a single civilization to fill a galaxy like this with self-replicating probes? Image credit: NASA/STScI.

Growing the Idea of Self-Reproduction

Going through the background to ideas of self-replication in space, Ellery cites the pioneering work of Robert Freitas, and here I want to pause. It intrigues me that Freitas, the man who first studied the halo orbits around the Earth-Moon L4 and L5 points looking for artifacts, is also responsible for one of the earliest studies of machine self-replication in the form of the NASA/ASEE study in 1980. The latter had no direct interstellar intent but rather developed the concept of a self-replicating factory on the lunar surface using resources mined by robots. Freitas would go on to explore a robot factory coupled to a Daedalus-class starship called REPRO, though one taken to the next level and capable of deceleration at the target star, where the factory would grow itself to its full capabilities upon landing.

I should mention that following REPRO, Freitas would turn his attention to nanotechnology, a world where payload constraints are eased and self-reproduction occurs at the molecular level. But let’s stick with REPRO a moment longer, even though I’m departing from Ellery in doing so. For in Freitas’ original concept, half the REPRO payload would be devoted to self-reproduction, with a specialized payload exploiting the resources of a gas giant moon to produce a new REPRO probe every 500 years.

As you can see, the REPRO probe would have taken Project Daedalus’ onboard autonomy to an entirely new level. Freitas’ studies foresaw thirteen distinct robot species, among them chemists, miners, metallurgists, fabricators, assemblers, wardens and verifiers. Each would have a role to play in the creation of the new probe. The chemist robots, for example, were to process ore and extract the heavy elements needed to build the factory on the moon of the gas giant planet. Aerostat robots would float like hot-air balloons in the gas giant’s atmosphere, where they would collect the needed propellants for the next generation REPRO probe. Fabricators would turn raw materials (produced by the metallurgists) into working parts, from threaded bolts to semiconductor chips, while assemblers created the modules that would build the initial factory. Crawler robots would specialize in surface hauling, while wardens, as with Project Daedalus, remained responsible for maintenance and repair of ship systems.

I spend so much time on this because of my fascination with the history of interstellar ideas. In any case, I know of no study earlier than Freitas’ 1980 paper “A Self-Reproducing Interstellar Probe” in JBIS, conveniently available online, that explored self-reproduction in the interstellar context and in terms of mission hardware. This was a step forward in interstellar studies, and I want to highlight it with this quotation from its text:

A major alternative to both the Daedalus flyby and “Bracewell probe” orbiter is the concept of the self-reproducing starprobe. Replicating spacefaring machines recently have received a cursory examination by Calder [4] and Boyce [5], but the basic feasibility of this approach has never been seriously considered despite its tremendous potential. In theory, each self-reproducing device dispatched by the launching society would become an independent agent, slowly scouting the Galaxy for evidence of life, intelligence and civilization. While such machines might be costlier to design and construct, given sufficient time a relatively few replicating starprobes could search the entire Milky Way.

The present paper addresses the plausibility of self-reproducing starprobes and the basic parameters of feasibility. A subsequent paper [10] compares reproductive and nonreproductive probe search strategies for missions of interstellar and galactic exploration.

Hart, Tipler and the Spread of Intelligence

These days, as Freitas went on to explore, massive redundancy, miniaturization and self-assembly at the molecular level have moved into tighter focus as we contemplate missions to the stars. The enormous Daedalus-style craft (54,000 tonnes initial mass, including 50,000 tonnes of fuel and 500 tonnes of scientific payload) and its successors, while historically important, resonate a bit with Captain Nemo’s Nautilus: spectacular creations of the imagination that defied no laws of physics but remain in tension with the realities of payload and propulsion. Now we explore miniaturization instead, with Breakthrough Starshot’s tiny payloads as one example.

But back to Ellery. From a philosophical standpoint, self-reproduction, he rightly points out, had also been considered by Michael Hart and Frank Tipler, each noting that if self-replication were possible, a civilization could fill the galaxy in a relatively short (compared to the age of the galaxy) timeframe. Ultimately self-reproducing probes exploit local materials upon arrival and make copies of themselves, a wave of exploration that would ensure every habitable planet had an attendant probe. Thus the Hart/Tipler contention that the lack of evidence for such a probe is an indication that extraterrestrial intelligence does not exist, an idea that still has currency.

Would any exploring civilization turn to self-replication? The author sees many reasons to do so:

There are numerous reasons to send out self-replicating probes – reconnaissance prior to interstellar migration, first-mover advantage, insurance against planetary disaster, etc – but only one not to – indifference to information growth (which must apply to all extant ETI without exception). Self-replicating probes require minimal capital investment and represent the most economical means to explore space, interstellar space included. In a real sense, self-replicating machines cannot become obsolete – new design developments can be broadcast and uploaded to upgrade them when necessary. Once the self-replicating probe is established in a star system, the probe may be exploited in various ways. The universal construction capability ensures that the self-replicating probe can construct any other device.

Probes that can fill the galaxy extract maximum information and can not only monitor but communicate with local species. Should a civilization choose to implement panspermia in systems devoid of life, the capability is implicit here, including “the prospect of exploiting microorganism DNA as a self-replicating message.” Such probes could also, in the event of colonization at a later period, establish needed infrastructure for the new arrivals, with the possibility of terraforming.

Thus probes like these become a route from Kardashev II to III. In fact, as Ellery sees it, if a Kardashev Type I civilization is capable of self-reproduction technology – and remember, Ellery believes we are on the cusp of it now – then the entire Type I phase may be relatively short on the way to Kardashev Types II and III, perhaps as little as a few thousand years. It’s an interesting thought given our current status somewhere around Kardashev 0.72, beset by problems of our own making and wondering whether we will survive long enough to establish a Type I civilization.
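Since the Kardashev fractions quoted here come from Carl Sagan's logarithmic interpolation of the scale, a minimal sketch of that formula may be useful. The ~1.5 × 10¹³ W figure for humanity's present power use is an assumed round number, chosen only because it lands near the 0.72 cited above.

import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10, P in watts."""
    return (math.log10(power_watts) - 6.0) / 10.0

# Humanity's present power use, very roughly 1.5e13 W (assumed figure):
print(f"K ~ {kardashev(1.5e13):.2f}")   # ~0.72
# A full Type I civilization commands roughly 1e16 W on this scale:
print(f"K ~ {kardashev(1e16):.2f}")     # 1.00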

Image: NASA’s James Webb Space Telescope has produced the deepest and sharpest infrared image of the distant universe to date. Known as Webb’s First Deep Field, this image of galaxy cluster SMACS 0723 is overflowing with detail. Thousands of galaxies – including the faintest objects ever observed in the infrared – have appeared in Webb’s view for the first time. This slice of the vast universe covers a patch of sky approximately the size of a grain of sand held at arm’s length by someone on the ground. If self-reproducing probes are possible, are all galaxies likely to be explored by other civilizations? Credit: NASA, ESA, CSA, and STScI.

Early Days for SETA

The question of diffusion through the galaxy here gets a workover from a theory called TRIZ (Teorija Reshenija Izobretatel’skih Zadach), which Ellery uses to analyze the implications of self-reproduction, finding that the entire galaxy could be colonized within 24 probe generations. This produces a population of 424 billion probes. He’s assuming a short replication time at each stop – a few years at most – and thus finds that the spread of such probes is dominated by the transit time across the galactic plane, a million year process to complete assuming travel at a tenth of lightspeed.
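A quick sketch of the arithmetic behind those figures, under stated assumptions: if each probe builds three daughters per generation (my assumption, chosen because it reproduces the quoted total), the cumulative population after 24 generations is about 4.24 × 10¹¹ probes, while a 100,000 light year crossing at a tenth of lightspeed takes a million years, which is why transit rather than replication dominates the timescale.

# Exponential replication and crossing-time arithmetic (a sketch, not Ellery's model).
BRANCHING = 3            # daughters per probe per generation -- assumed value
GENERATIONS = 24
population = sum(BRANCHING**g for g in range(GENERATIONS + 1))
print(f"Cumulative probes after {GENERATIONS} generations: {population:,}")
# -> ~424 billion

GALACTIC_DIAMETER_LY = 100_000
CRUISE_SPEED_C = 0.1     # cruise speed as a fraction of lightspeed
crossing_time_yr = GALACTIC_DIAMETER_LY / CRUISE_SPEED_C
print(f"Galactic crossing at 0.1 c: {crossing_time_yr:,.0f} years")
# -> 1,000,000 years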

Given this short timespan compared with the age of the Galaxy, our Galaxy should be swarming with self-replicating probes yet there is no evidence of them in our solar system. Indeed, it only requires a civilization to exist long enough to send out such probes as they would thenceforth continue to propagate through the Galaxy even if the sending civilization were no more. And of course, it requires only one ETI to do this.

Part of Ellery’s intent is to show how humans might create a self-replicating probe, going through the essential features of such and arguing that self-replication is near- rather than long-term, based on the idea of the universal constructor, a machine that builds any or all other machines including itself. Here we find intellectual origins in the work of Alan Turing and John von Neumann. Ellery digs into 3D printing and ongoing experiments in self-assembly as well as in-situ resource utilization of asteroid material, and along the way he illustrates probe propulsion concepts.

At this stage of the game in SETA, there is no evidence of self-replication or extraterrestrial probes of any kind, the author argues:

There is no observational evidence of large structures in our solar system, nor signs of large-scale mining and processing, nor signs of residue of such processes. Our current terrestrial self-replication scheme with its industrial ecology is imposed by the requirements for closure of the self-replication loop that (i) minimizes waste (sustainability) to minimize energy consumption; (ii) minimizes materials and components manufacture to minimize mining; (iii) minimizes manufacturing and assembly processes to minimize machinery. Nevertheless, we would expect extensive clay residues. We conclude therefore that the most tenable hypothesis is that ETI do not exist.

The answer to that contention is, of course, that we haven’t searched for local probes in any coordinated way, and that now that we are becoming capable of minute analysis of, for instance, the lunar surface through Lunar Reconnaissance Orbiter imagery, we can become more systematic in the investigation, taking in Earth co-orbitals, as Jim Benford has suggested, or looking for signs of lurkers in the asteroid belt. Ellery notes that the latter might demand searching for signs of resource exploitation there as opposed to finding an individual probe amidst the plethora of candidate objects.

But Ellery is adamant that efforts to find such lurkers should go on, citing the need to sustain what has so far been a meager and sporadic effort to conduct SETA. I’m going to recommend this paper to those Centauri Dreams readers who want to get up to speed on the scholarship on self-reproduction and its consequences. The ideas jammed into its pages come at a bewildering pace, but the scholarship is thorough and the references are handy to have in one place. Whether self-reproducing probes are indeed imminent is a matter for debate, but their implications demand our attention.

The paper is Ellery, “Self-replicating probes are imminent – implications for SETI,” International Journal of Astrobiology 8 July 2022 (full text). A companion paper published at the same time is “Curbing the fruitfulness of self-replicating machines,” International Journal of Astrobiology 8 July 2022 (full text).


Two Close Stellar Passes

Interstellar objects are much in the news these days, as witness the flurry of research on ‘Oumuamua and 2I/Borisov. But we have to be cautious as we look at objects on hyperbolic orbits, avoiding the assumption that any of these are necessarily from another star. Spanish astronomers Carlos and Raúl de la Fuente Marcos dug several years ago into the question of objects on hyperbolic orbits, noting that some of these may well have origins much closer to home. Let me quote their 2018 paper on this:

There are mechanisms capable of generating hyperbolic objects other than interstellar interlopers. They include close encounters with the known planets or the Sun, for objects already traversing the Solar system inside the trans-Neptunian belt; but also secular perturbations induced by the Galactic disc or impulsive interactions with passing stars, for more distant bodies (see e.g. Fouchard et al. 2011, 2017; Królikowska & Dybczyński 2017). These last two processes have their sources beyond the Solar system and may routinely affect members of the Oort cloud (Oort 1950), driving them into inbound hyperbolic paths that may cross the inner Solar system, making them detectable from the Earth (see e.g. Stern 1987).

Scholz’s Star Leaves Its Mark

So much is going on in the outer reaches of the Solar System! In the 2018 paper, the two astronomers looked for patterns in how hyperbolic objects move, noting that anything approaching us from the far reaches of the Solar System seems to come from a well-defined location in the sky known as its radiant (also called its antapex). Given the mechanisms for producing objects on hyperbolic orbits, they identify distinctive coordinate and velocity signatures among these radiants.

Work like this relies on modeling the past orbital evolution of hyperbolic objects and running statistical analyses of their radiants, and I wouldn’t have dug quite so deeply into such arcane work except that it tells us something about objects that are coming under renewed scrutiny: the stars that occasionally pass close to the Solar System and may disrupt the Oort Cloud. Such passing stars are an intriguing subject in their own right and even factor into studies of galactic diffusion; i.e., how a civilization might begin to explore the galaxy by using close stellar passes as stepping stones.

But more about that in a moment, because I want to wrap up this 2018 paper before moving on to a later paper, likewise from the de la Fuente Marcos team, on close stellar passes and the intriguing Gliese 710. Its close pass is to happen in the distant future, but we have one well characterized pass that the 2018 paper addresses, that of Scholz’s Star, which is known to have made the most recent flyby of the Solar System when it moved through the Oort Cloud 70,000 years ago. In their work on minor objects with long orbital periods and extreme orbital eccentricity, the researchers find a “significant overdensity of high-speed radiants toward the constellation of Gemini” that may be the result of the passage of this star.

This is useful stuff, because as we untangle prior close passes, we learn more about the dynamics of objects in the outer Solar System, which in turn may help us uncover information about still undiscovered objects, including the hypothesized Planet 9, that may lurk in the outer regions and may have caused its own gravitational disruptions.

Before digging into the papers I write about today, I hadn’t realized just how many objects – presumably comets – are known to be on hyperbolic orbits. The astronomers work with the orbits of 339 of these, all with nominal heliocentric eccentricity > 1, using data from JPL’s Solar System Dynamics Group Small-Body Database and the Minor Planet Center Database. For a minor object moving with an inbound velocity of 1 kilometer per second, which is the Solar System escape velocity at about 2000 AU, the de la Fuente Marcos team runs calculations going back 100,000 years to examine the modeled object’s orbital evolution all the way out to 20,000 AU, which is in the outer Oort Cloud.
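That “1 kilometer per second at about 2000 AU” criterion follows directly from the Sun’s escape velocity as a function of distance, v_esc = √(2GM☉/r). A minimal check:

import math

GM_SUN = 1.327e20   # m^3 s^-2, the Sun's standard gravitational parameter
AU = 1.496e11       # metres

def solar_escape_velocity_kms(r_au: float) -> float:
    """Escape velocity from the Sun at heliocentric distance r (in AU), in km/s."""
    return math.sqrt(2.0 * GM_SUN / (r_au * AU)) / 1000.0

# ~42 km/s at Earth's distance, falling to ~1 km/s near 2000 AU,
# and ~0.3 km/s out at 20,000 AU in the outer Oort Cloud.
for r in (1, 100, 2000, 20_000):
    print(f"{r:>7} AU : {solar_escape_velocity_kms(r):6.2f} km/s")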

That overdensity of radiants toward Gemini that I mentioned above does seem to implicate the Scholz’s Star flyby. If so, then a close stellar pass that occurred 70,000 years ago may have left traces we can still see in the orbits of these minor Solar System bodies today. The uncertainties in the analysis of other stellar flybys relate to the fact that past encounters with other stars are not well determined, with Scholz’s Star being the prominent exception. Given the lack of evidence about other close passes, the de la Fuente Marcos team acknowledges the possibility of other perturbers.

Image: This is Figure 3 from the paper. Caption: Distribution of radiants of known hyperbolic minor bodies in the sky. The radiant of 1I/2017 U1 (‘Oumuamua) is represented by a pink star, those objects with radiant’s velocity > −1 km s⁻¹ are plotted as blue filled circles, the ones in the interval (−1.5, −1.0) km s⁻¹ are shown as pink triangles, and those < −1.5 km s⁻¹ appear as goldenrod triangles. The current position of the binary star WISE J072003.20-084651.2, also known as Scholz’s star, is represented by a red star, the convergent brown arrows represent its motion and uncertainty as computed by Mamajek et al. (2015). The ecliptic is plotted in green. The Galactic disc, which is arbitrarily defined as the region confined between Galactic latitude −5° and 5°, is outlined in black, the position of the Galactic Centre is represented by a filled black circle; the region enclosed between Galactic latitude −30° and 30° appears in grey. Data source: JPL’s SSDG SBDB. Credit: Carlos and Raúl de la Fuente Marcos.

The Coming of Gliese 710

Let’s now run the clock forward, looking at what we might expect to happen in our next close stellar passage. Gliese 710 is an interesting K7 dwarf in the constellation Serpens Cauda that occasionally pops up in our discussions because of its motion toward the Sun at about 24 kilometers per second. Right now it’s a little over 60 light years away, but give it time – in about 1.3 million years, the star should close to somewhere in the range of 10,000 AU, which is about 1/25th of the current distance between the Sun and Proxima Centauri. As we’re learning, wait long enough and the stars come to us.

Keep that 10,000 AU figure in mind; we’ll tighten it up further in a minute. Notice that it is actually less than the distance between the nearest star, Proxima Centauri, and the Centauri A/B binary.

Image: Gliese 710 (center), destined to pass through the inner Oort Cloud in our distant future. Credit: SIMBAD / DSS

An encounter like this is interesting for a number of reasons. Interactions with the Oort Cloud should be significant, although well spread over time. Here I go back to a 1999 study by Joan García-Sánchez and colleagues, which made the case that, spread over human lifetimes, the effects of such a close passage would not be pronounced. A snippet from that paper:

For the future passage of Gl 710, the star with the closest approach in our sample, we predict that about 2.4 × 10⁶ new comets will be thrown into Earth-crossing orbits, arriving over a period of about 2 × 10⁶ yr. Many of these comets will return repeatedly to the planetary system, though about one-half will be ejected on the first passage. These comets represent an approximately 50% increase in the flux of long-period comets crossing Earth’s orbit.
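The quoted numbers imply a modest but sustained bump in the comet flux; a back-of-the-envelope reading:

# Quick arithmetic on the García-Sánchez et al. figures quoted above.
new_comets = 2.4e6          # comets thrown onto Earth-crossing orbits
arrival_window_yr = 2.0e6   # years over which they arrive

injected_rate = new_comets / arrival_window_yr
print(f"Injected flux: ~{injected_rate:.1f} new Earth-crossing comets per year")
# If that is a ~50% increase, the implied background long-period flux is ~twice that:
print(f"Implied background flux: ~{injected_rate / 0.5:.1f} comets per year")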

As far as I know, the García-Sánchez paper was the first to identify Gliese 710’s flyby possibilities. The work was quickly confirmed in several independent studies before the first Gaia datasets were released, and the parameters of the encounter were then tightened using Gaia’s results, the most recent paper using Gaia’s third data release. Back to Carlos and Raúl de la Fuente Marcos, who tackle the subject in a new paper appearing in Research Notes of the American Astronomical Society.

The researchers have subjected the Gliese 710 flyby to N-body simulations using a suite of software tools that model perturbations from the star and factor in the four massive planets in our own system as well as the barycenter of the Pluto/Charon system. They assume a mass of 0.6 Solar masses for Gliese 710, consistent with previous estimates. In addition to the Gaia data, the authors include the latest ephemerides for Solar System objects as provided by the Jet Propulsion Laboratory’s Horizons System.

Image: This is Figure 1 from the paper. Caption: Future perihelion passage of Gliese 710 as estimated from Gaia DR3 input data and the N-body simulations discussed in the text. The distribution of times of perihelion passage is shown in the top-left panel and perihelion distances in the top-right one. The blue vertical lines mark the median values, the red ones show the 5th and 95th percentiles. The bottom panels show the times of perihelion passage (bottom-left) and the distance of closest approach (bottom-right) as a function of the observed values of the radial velocity of Gliese 710 and its distance (randomly generated using the mean values and standard deviations from Gaia DR3), both as color coded scatter plots of the distribution in the associated top panel. Histograms have been produced using the Matplotlib library (Hunter 2007) with sets of bins computed using Numpy (Harris et al. 2020) by applying the Freedman and Diaconis rule; instead of considering frequency-based histograms, we used counts to form a probability density so the area under the histogram will sum to one. The colormap scatter plot has also been produced using Matplotlib. Credit: Carlos and Raúl de la Fuente Marcos.

The de la Fuente Marcos paper now finds that the close approach of Gliese 710 will take it to within 10,635 AU, plus or minus 500 AU, putting it inside the inner Oort Cloud in about 1.3 million years – both the perihelion distance and the time of its passage are tightened from earlier estimates. And as we’ve seen, Scholz’s Star passed through part of the Oort Cloud at perhaps 52,000 AU some 70,000 years ago. We thus get a glimpse of how passing stars influence the Solar System, on a time frame that is beginning to take shape and is clearly a factor in the system’s long-term evolution.
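To put the two well-characterized passes on a common scale with the parsec-denominated encounter statistics discussed below, a simple unit conversion helps (the constants are standard; the distances are those quoted above):

AU_PER_PC = 206_265
AU_PER_LY = 63_241

passes = {"Gliese 710 (future perihelion)": 10_635,
          "Scholz's Star (~70,000 years ago)": 52_000}
for name, d_au in passes.items():
    print(f"{name:34s}: {d_au:>7,} AU = "
          f"{d_au / AU_PER_PC:.3f} pc = {d_au / AU_PER_LY:.2f} ly")
# Gliese 710 comes to ~0.05 pc (~0.17 ly); Scholz's Star passed at ~0.25 pc (~0.8 ly).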

What Gaia Can Tell Us

We can now pull back still further, to a 2018 paper from Coryn Bailer-Jones (Max Planck Institute for Astronomy, Heidelberg) that examines not just two stars with direct implications for our Solar System but Gaia DR2 data on 7.2 million stars, looking for further evidence of close stellar encounters. Here we begin to see the broader picture. Bailer-Jones and team find 26 stars that have approached or will approach within 1 parsec, 7 that will close to 0.5 parsecs, and 3 that will pass within 0.25 parsecs of the Sun. Interestingly, the closest encounter is with our friend Gliese 710.

How often can these encounters be expected to occur? The authors estimate about 20 encounters per million years within a range of one parsec. Greg Matloff has used these data to infer roughly 2.5 encounters within 0.5 parsecs per million years. Perhaps 400,000 to 500,000 years should separate close stellar encounters as found in the Gaia DR2 data. We should keep in mind here what Bailer-Jones and team say about the current state of this research, especially given subsequent results from Gaia: “There are no doubt many more close – and probably closer – encounters to be discovered in future Gaia data releases.” But at least we’re getting a feel for the time spans involved.
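A rough way to see where such rates come from is the standard encounter-rate estimate Γ = n π b² v, with n the local stellar density, b the encounter distance and v a typical relative velocity. The density and velocity below are assumed round numbers; with them the formula lands within a factor of two of the figures quoted above, and the scaling with the square of the encounter distance is the real point.

import math

PC_PER_KM = 1.0 / 3.086e13   # parsecs per kilometre
SEC_PER_MYR = 3.156e13       # seconds per million years

n_stars = 0.1                # stars per cubic parsec (assumed local density)
v_rel_kms = 50.0             # typical Sun-star relative speed in km/s (assumed)
v_rel = v_rel_kms * PC_PER_KM * SEC_PER_MYR   # converted to pc per Myr

for b_pc in (1.0, 0.5, 0.25):
    rate = n_stars * math.pi * b_pc**2 * v_rel    # encounters per Myr
    print(f"within {b_pc:4.2f} pc : ~{rate:4.1f} per Myr "
          f"(one every ~{1e6 / rate:,.0f} years)")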

So given the distribution of stars in our neighborhood of the galaxy, our Sun should have a close encounter every half million years or so. Such encounters dramatically reduce the distance for any would-be travelers. The Scholz’s Star pass, for instance, cut the distance to the nearest star to about a fifth of today’s distance to Proxima Centauri, while Gliese 710 is even more provocative, for as I mentioned, it will close to a distance not all that far off Proxima Centauri’s own distance from Centauri A/B.

A good time for interstellar migration? We’ve considered the possibilities in the past, but as new data accumulate, we have to keep asking how big a factor stellar passages like these may play in helping a technological civilization spread throughout the galaxy.

The earlier de la Fuente Marcos paper is “Where the Solar system meets the solar neighbourhood: patterns in the distribution of radiants of observed hyperbolic minor bodies,” Monthly Notices of the Royal Astronomical Society Letters Vol. 476, Issue 1 (May 2018) L1-L5 (abstract). The later de la Fuente Marcos paper is “An Update on the Future Flyby of Gliese 710 to the Solar System Using Gaia DR3: Flyby Parameters Reproduced, Uncertainties Reduced,” Research Notes of the AAS Vol. 6, No. 6 (June, 2022) 136 (full text). The García-Sánchez et al. paper is “Stellar Encounters with the Oort Cloud Based on Hipparcos Data,” Astronomical Journal 117 (February, 1999), 1042-1055 (full text). The Bailer-Jones paper is “New stellar encounters discovered in the second Gaia data release,” Astronomy & Astrophysics Vol. 616, A37 (13 August 2018) (abstract).


