The Challenge of ‘Twilight Asteroids’

We have the Zwicky Transient Facility at Palomar Observatory to thank for the detection of the strikingly named ’Ayló’chaxnim (2020 AV2). This is a large near-Earth asteroid with a claim to distinction, being the first NEO found to orbit inside the orbit of Venus. I love to explore the naming of things, and now that we have ’Ayló’chaxnim (2020 AV2), we have to name the category, at least provisionally. The chosen name is Vatira, which in turn is a nod to Atira, a class of asteroids that orbit entirely inside Earth’s orbit. Thus Vatira refers to an Atira NEO with orbit interior to Venus.

As to the ’Ayló’chaxnim, it’s a word from indigenous peoples whose ancestral lands took in the mountainous region where the Palomar Observatory is located. I’m told by the good people at Caltech that the word means something like ‘Venus Girl.’ On June 7, people of Pauma descent gathered for a ceremony at the observatory, having been asked by the team manning the Zwicky Transient Facility to choose a local name.

I couldn’t tell you how ’Ayló’chaxnim is pronounced, but with the ZTF on watch, it’s possible we’ll find more Vatiras, or at least Atiras, which seem to be more numerous. So we may have more Pauma names to come, and perhaps we’ll learn. 2020 AV2 is 1 to 3 kilometers in size and has an orbit tilted about 15 degrees from the plane of the Solar System. On its 151-day orbit, it stays interior to Venus and comes close to the orbit of Mercury. Postdoc Bryce Bolin at Caltech flagged it as a candidate in early 2020.

The ZTF itself is a survey camera mounted on the Samuel Oschin Telescope at Palomar, conducting a wide-field survey making rapid scans of the sky. 2020 AV2, says Caltech’s George Helou, who is a ZTF co-investigator, is on an interesting orbit, surely the result of migration from further out in the system:

“Getting past the orbit of Venus must have been challenging. The only way it will ever get out of its orbit is if it gets flung out via a gravitational encounter with Mercury or Venus, but more likely it will end up crashing on one of those two planets.”

Image: The Zwicky Transient Facility field of view. The ZTF Observing System delivers efficient, high-cadence, wide-field-of-view, multi-band optical imagery for time-domain astrophysics analysis. The camera utilizes the entire 47-square-degree focal plane of the 48-inch Samuel Oschin Schmidt telescope, providing the largest instantaneous field-of-view of any camera on a telescope of aperture greater than 0.5 m: each image will cover 235 times the area of the full moon. Credit: Zwicky Transient Facility.

This close to the Sun, Vatiras are only going to be visible at dusk or dawn. As the Carnegie Institution for Science’s Scott Sheppard points out in a recent issue of Science, our asteroid surveys mostly take place with a dark night sky, which implies that small objects orbiting between the Earth and the Sun are not likely to be found. Modeling of the NEO population predicts that objects as large as 2020 AV2 are unlikely among Vatiras but smaller objects could be plentiful. Asteroid surveys interior to Venus’ orbit are few, so there is work here for facilities like the ZTF, or the NSF’s Blanco 4-meter telescope in Chile with the Dark Energy Camera (DECam) to fill out this population. Both have fields of view sufficient to carry out this kind of survey.

So let’s get down to the asteroid mitigation question. Sheppard points out that, with current NEO surveys coupled with formation models for these objects, more than 90 percent of what he calls ‘planet killer’ NEOs have probably already been found – these would be objects larger than 1 kilometer, and he’s talking here about the entire range of NEOs, not just those interior to the orbits of Earth or Venus. He writes:

The last few unknown 1-km NEOs likely have orbits close to the Sun or high inclinations, which keep them away from the fields of the main NEO surveys. The 48-inch Zwicky Transient Facility telescope has found one Vatira and several Atira asteroids, making it one of the most prolific asteroid hunters interior to Earth. To combat twilight to find smaller asteroids, one can use a bigger telescope. Large telescopes usually do not have big fields of view to efficiently survey. The National Science Foundation’s Blanco 4-meter telescope in Chile with the Dark Energy Camera (DECam) is an exception. A new search for asteroids hidden in plain twilight with DECam has found a few Atira asteroids, including 2021 PH27.

Sheppard also describes a category he calls ‘city killers,’ which takes in NEOs larger than 140 meters; of these, he believes we have found about half. The progress in tracking NEOs has been heartening as we learn about potentially dangerous trajectories, and turning to twilight surveys like these will help us learn more about NEOs hidden in the glare of the Sun.

It turns out that the twilight survey with DECam recently found the asteroid with the smallest known semimajor axis (0.46 AU). This is 2021 PH27, an object with high eccentricity whose orbit crosses the orbit of Mercury as well as Venus. Thus, given our categorization, PH27 is an Atira rather than a Vatira. With a perihelion of 0.13 AU, this NEO shows about 1 arc minute of precession per century, the highest of any object in the Solar System including Mercury. This is another large NEO at about 1 kilometer in size, although as Sheppard notes:

…because the diameter of these interior asteroids is calculated with an assumed albedo and solar phase function, the actual diameters for both of these discoveries could be under 1 km. This would put them in a more-expected population and make them less of a statistical fluke.

Image: 2020 AV2 orbits entirely within the orbit of Venus. Credit: Bryce Bolin/Caltech

Clearly we have much to do to build our catalog of objects close to the Sun. We can extend the catalog of exotic names as well. Asteroids called Amors are those that approach the Earth but do not cross its orbit. Apollos do cross the orbit of the Earth but have semimajor axes greater than Earth’s. Atens, in turn, cross Earth’s orbit but have semimajor axes less than that of the Earth. Sheppard points out that NEOs have dynamically unstable orbits, and speculates that a reservoir that replenishes their numbers must exist because the overall count seems to be in a steady state.

Among possible reservoirs are those that may exist in long-term resonances with Venus or Mercury, and there may conceivably be a population of asteroids not yet observed, the so-called Vulcanoids, that could have orbits entirely within the orbit of Mercury. Sheppard’s excellent article makes the point that Vulcanoids would be at the mercy of many factors, including Yarkovsky drift, collisions and thermal fracturing from proximity to the sun, so they’re likely uncommon. We do know that spacecraft observations of the region near the Sun seem to rule out Vulcanoids larger than 5 kilometers, but stable reservoirs for smaller objects may exist. Remember, too, that we have found numerous exoplanets closer to their host stars than the Vulcanoid region in our Solar System.

Overall, NEOs in the Sun’s glare should not be too prolific:

Fewer Atiras should exist than the more-distant NEOs, and even fewer Vatiras, because it becomes harder and harder for an object to move inward past Earth’s and then Venus’ orbit. Random walks of a NEO’s orbit through planetary gravitational interactions can make an Aten into an Atira and/or Vatira orbit and vice versa. Atiras should make up some 1.2% and Vatiras only 0.3% of the total NEO population coming from the main belt of asteroids (4). 2020 AV2 itself will spend only a few million years in a Vatira orbit before crossing Venus’ orbit. Eventually, 2020 AV2 will either collide with or be tidally disrupted by one of the planets, disintegrate near the Sun, or be ejected from the inner Solar System.

Scott Sheppard’s article is “In the Glare of the Sun,” Science Vol. 377, Issue 6604 (21 July 2022), pp. 366-367 (full text). For more on the Zwicky Transient Facility, see Graham et al., “The Zwicky Transient Facility: Science Objectives,” Publications of the Astronomical Society of the Pacific Vol. 131, No. 1001 (22 May 2019). Full text.


Getting There Quickly: The Nuclear Option

Adam Crowl has been appearing on Centauri Dreams for almost as long as the site has been in existence, a welcome addition given his polymathic interests and ability to cut to the heart of any issue. His long-term interest in interstellar propulsion has recently been piqued by the Jet Propulsion Laboratory’s work on a mission to the Sun’s gravitational lens region. JPL is homing in on multiple sailcraft with close solar passes to expedite the cruise time, leading Adam to run through the options to illustrate the issues involved in so dramatic a mission. Today he looks at the pros and cons of nuclear propulsion, asking whether it could be used to shorten the trip dramatically. Beamed sail and laser-powered ion drive possibilities are slated for future posts. With each of these, if we want to get out past 550 AU as quickly as possible, the devil is in the details. To keep up with Adam’s work, keep an eye on Crowlspace.

by Adam Crowl

The Solar Gravitational Lens amplifies signals from distant stars and galaxies immensely, thanks to the slight distortion of space-time caused by the Sun’s mass-energy. Basically the Sun becomes an immense spherical lens, amplifying incoming light by focussing it hundreds of Astronomical Units (AU) away. Depending on the light frequency, the Sun’s surrounding plasma in its Corona can cause interference, so the minimum distance varies. For optical frequencies it can be ~600 AU at a minimum and light is usefully focussed out to ~1,000 AU.

One AU is traveled in 1 Julian Year (365.25 days) at a speed of 4.74 km/s. Thus to travel 100 AU in 1 year needs a speed of 474 km/s, which is much faster than the 16.65 km/s at which probes have so far been launched away from the Earth. If a Solar Sail propulsion system could be deployed close to the Sun and have a Lifting Factor (the ratio of Light-Pressure to Weight of the Solar Sail vehicle) greater than 1, then such a mission could be launched easily. However, at present, we don’t have super-reflective gossamer light materials that could usefully lift a payload against solar gravity.
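For readers who like to check the arithmetic, here is a minimal sketch of that unit conversion; the constants are standard values, nothing mission-specific is assumed.

```python
# A minimal sketch of the AU-per-year arithmetic above.
AU_M = 1.496e11            # metres in one Astronomical Unit
YEAR_S = 365.25 * 86400    # seconds in one Julian year

au_per_year_kms = AU_M / YEAR_S / 1000.0   # ~4.74 km/s

def average_speed_kms(distance_au, years):
    """Average speed in km/s needed to cover distance_au in the given number of years."""
    return distance_au / years * au_per_year_kms

print(f"1 AU/yr = {au_per_year_kms:.2f} km/s")
print(f"600 AU in 6 years needs ~{average_speed_kms(600, 6):.0f} km/s average")
```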

Carbon nanotube mesh has been studied in such a context, as has aerographite, but both are yet to be created in large enough areas to carry large payloads. The ratio of the push of sunlight, for a perfect reflector, to the gravity of the Sun means an areal mass density of 1.53 grams per square metre gives a Lifting Factor of 1. A Sail with such an LF will hover when pointing face on at the Sun. If a Solar Sail LF is less than 1, then it can be angled and used to speed up or slow down the Sail relative to its initial orbital vector, but the available trajectories are then slow spirals – not fast enough to reach the Gravity Lens in a useful time.
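As an aside, the 1.53 g/m² figure can be recovered from nothing more than the Sun’s luminosity and mass, since both light pressure and gravity fall off as the inverse square of distance. A quick sketch, assuming a perfectly reflecting sail held face-on to the Sun:

```python
import math

# A quick check of the 1.53 g/m^2 critical sail loading quoted above,
# for a perfectly reflecting sail held face-on to the Sun.
L_SUN = 3.828e26   # W, solar luminosity
M_SUN = 1.989e30   # kg, solar mass
G     = 6.674e-11  # m^3 kg^-1 s^-2
C     = 2.998e8    # m/s

# Radiation pressure force per area on a perfect reflector: 2*L/(4*pi*r^2*c)
# Gravitational force per area on a sail of areal density sigma: G*M*sigma/r^2
# Both scale as 1/r^2, so the critical loading is independent of distance:
sigma_crit = L_SUN / (2 * math.pi * G * M_SUN * C)   # kg/m^2
print(f"Lifting Factor = 1 at {sigma_crit * 1000:.2f} g/m^2")
```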

Image: A logarithmic look at where we’d like to go. Credit: NASA.

Absent super-light Solar Sails, what are the options? Modern day rockets can’t reach 474 km/s without some radical improvements. Multi-grid Ion Drives can achieve exhaust velocities of the right scale, but no power source yet available can supply the energy required. The reason why leads into the next couple of options so it’s worth exploring. For deep space missions the only working option for high-power is a nuclear fission reactor, since we’re yet to build a working nuclear fusion reactor.

When a rocket’s thrust is limited by the power supply’s mass, then there’s a minimum power & minimum travel time trajectory with a specific acceleration/deceleration profile – it accelerates 1/3 the time, then cruises at constant speed 1/3 the time, then brakes 1/3 the time. The minimum Specific Power (Power per kilogram) is:

P/M = (27/4) × S² / T³

…where P/M is Power/Mass, S is displacement (distance traveled) and T is the total mission time to travel the displacement S. In units of AU and Years, the P/M becomes:

P/M = 4.8 × S² / T³ W/kg

However while the Average Speed is 474 km/s for a 6 year mission to 600 AU, the acceleration/deceleration must be accounted for. The Cruise Speed is thus 3/2 times higher, so the total Delta-Vee is 3 times the Average Speed. The optimal mass-ratio for the rocket is about 4.41, so the required Effective Exhaust Velocity is a bit over twice the Average Speed – in this case 958 km/s. As a result the energy efficiency is 0.323, meaning the required Specific Power for a rocket is:

P/M = 14.9 × S² / T³ W/kg

For a mission to 600 AU in 6 years a Specific Power of 24,850 W/kg is needed. But this is the ideal Jet-Power – the kinetic energy that actually goes into the forward thrust of the vehicle. Assuming the power source is 40% of the vehicle’s empty mass (with roughly 40% for the drive and 10% for the payload making up most of the rest) and the efficiency of the higher-powered multi-grid ion-drive is 80%, then the power source must produce 77,600 W/kg. Every power source produces waste heat. For a fission power supply, the waste heat can only be expelled by a radiator. Thermodynamic efficiency is defined as the difference in temperature between the heat-source (reactor) and the heat-sink (radiator), divided by the temperature of the heat source:

Thermal Efficiency = (Tsource – Tsink) / Tsource
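Before turning to the waste-heat problem, here is the specific-power chain so far in a few lines of Python. The 40% power-source mass fraction, 80% ion-drive efficiency and 0.323 energy efficiency are the assumptions quoted above, not hard limits.

```python
# Sketch of the specific-power chain for a 600 AU, 6-year mission.
AU_M = 1.496e11
YEAR_S = 365.25 * 86400

def minimum_propulsive_power_w_per_kg(distance_au, years):
    """(27/4)*S^2/T^3: minimum specific power for the 1/3 boost, 1/3 cruise, 1/3 brake profile."""
    S = distance_au * AU_M
    T = years * YEAR_S
    return (27.0 / 4.0) * S**2 / T**3        # equals ~4.8 * S_au^2 / T_yr^3 W/kg

energy_efficiency = 0.323      # vehicle kinetic energy gained / jet kinetic energy, at mass ratio ~4.41
source_mass_fraction = 0.40    # power supply as a share of dry mass (assumption from the text)
drive_efficiency = 0.80        # electrical-to-jet efficiency of the multi-grid ion drive (from the text)

jet_power = minimum_propulsive_power_w_per_kg(600, 6) / energy_efficiency
source_power = jet_power / (source_mass_fraction * drive_efficiency)
print(f"Ideal jet power:     ~{jet_power:,.0f} W per kg of vehicle (text: ~24,850)")
print(f"Power supply output: ~{source_power:,.0f} W per kg of power supply (text: ~77,600)")
```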

For a reactor with a radiator in space, the mass of that radiator is (usually) minimised when the efficiency is 25 % – so to maximise the Power/Mass ratio the reactor has to be really HOT. The heat of the reactor is carried away into a heat exchanger and then travels through the radiator to dump the waste heat to space. To minimise mass and moving parts so called Heat-Pipes can be used, which are conductive channels of certain alloys.

Another option, which may prove highly effective given clever reactor designs, is to use high performance thermophotovoltaic (TPV) cells to convert high temperature thermal emissions directly into electrical power. High performance TPVs have hit 40% efficiency at over 2,000 degrees C, which would also maximise the P/M ratio of the whole power system.

Pure Uranium-235, if perfectly fissioned (a Burn-Up Fraction of 1), releases 88 trillion joules (88 TJ) per kilogram. A jet-power of 24,850 W/kg sustained for 4 years is a total energy output of 3.1 TJ/kg. Operating the Solar Lens Telescope payload won’t require such power levels, so we’ll assume it’s a negligible fraction of the total output – a much lower power setting. So our fuel needs to be *at least* 3.6% Uranium-235. But there are multipliers which increase the fraction required – not all of the vehicle will be U-235.

First, the power-supply mass fraction and the ion-drive efficiency – a multiplier of 1/0.32. Therefore the fuel must be 11.1% U-235.

Second, there’s the thermodynamic efficiency. To minimise the radiator area (thus mass) required, it’s set at 25%. Therefore the U-235 is 45.6% of the power system mass. The Specific Power needed for the whole system is thus 310,625 W per kilogram.

The final limitation is one I haven’t mentioned until now – the thermophysical properties of Uranium itself. Typically Uranium is in the form of Uranium Dioxide, which is 88% uranium by mass. When heated, every material rises in temperature by absorbing (or producing internally) a certain amount of heat – the so-called Heat Capacity. The total amount of heat stored in a given amount of material is called the Enthalpy, but what matters for extracting heat from a mass of fissioning Uranium is the difference in Enthalpy between a Higher and a Lower temperature.

Considering the whole of the reactor core and the radiator as a single unit, the Lower temperature will be the radiator temperature. The Higher will be the Core where it physically contacts the heat exchanger/radiator. Thanks to the Thermal Efficiency relation we know that if the radiator is at 2,000 K, then the Core must be at least ~2,670 K. The Enthalpy difference is 339 kilojoules per kilogram of Uranium Dioxide core. Extracting that heat difference every second maintains the temperature difference between the Source and the Sink to make Work (useful power), and that means a bare minimum of 91.6% of the specific mass of the whole power system must be very hot fissioning Uranium Dioxide core. Even if the Core is at melting point – about 3,120 K – the Enthalpy difference is 348 kJ/kg, so 89.3% of the Power System is Core.
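To make the core-fraction argument concrete, here is the same arithmetic in a few lines. It simply follows the assumption above that each kilogram of hot core can hand over, per second, no more than its enthalpy difference between core and radiator temperature.

```python
# Core mass fraction implied by the enthalpy-throughput argument above.
source_specific_power = 77_600 / 0.25     # W per kg of power system at 25% thermal efficiency (~310,000)

cases = [(2_670, 339_000),   # radiator at 2,000 K, core at ~2,670 K: delta-H = 339 kJ/kg
         (3_120, 348_000)]   # core at the UO2 melting point: delta-H = 348 kJ/kg

for core_temp_K, enthalpy_diff_J_per_kg in cases:
    core_fraction = source_specific_power / enthalpy_diff_J_per_kg
    print(f"Core at {core_temp_K} K: UO2 core must be ~{core_fraction:.1%} of the power system mass")
```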

The trend is obvious. The power supply ends up being almost all fissioning Uranium, which is obviously absurd.

To conclude: A fission powered mission to 600 AU will take longer than 6 years. As the Power required is proportional to the inverse cube of the mission time, the total energy required is proportional to the inverse square of the mission time. So a mission time of 12 years means the fraction of U-235 burn-up comes down to a more achievable 22.9% of the power supply’s total mass. A reactor core is more than just fissioning metal oxide. Small reactors have been designed with fuel fractions of 10%, but this is without radiators. A 5% core mass puts the system in range of a 24 year mission time, but that’s approaching near term Solar Sail performance.


Solar Gravitational Lens: Sailcraft and In-Flight Assembly

The last time we looked at the Jet Propulsion Laboratory’s ongoing efforts toward designing a mission to the Sun’s gravitational lens region beyond 550 AU, I focused on how such a mission would construct the image of a distant exoplanet. Gravitational lensing takes advantage of the Sun’s mass, which as Einstein told us distorts spacetime. A spacecraft placed on the other side of the Sun from the target exoplanetary system would take advantage of this, constructing a high resolution image of unprecedented detail. It’s hard to think of anything short of a true interstellar mission that could produce more data about a nearby exoplanet.

In that earlier post, I focused on one part of the JPL work, as the team under the direction of Slava Turyshev had produced a paper updating the modeling of the solar corona. The new numerical simulations led to a powerful result. Remember that the corona is an issue because the light we are studying is being bent around the Sun, and we are in danger of losing information if we can’t untangle the signal from coronal distortions. And it turned out that because the image we are trying to recover would be huge – almost 60 kilometers wide at 1200 AU from the Sun if the target were at Proxima Centauri distance – the individual pixels are as much as 60 meters apart.
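A rough way to see where those numbers come from: the projected image scales with the ratio of the focal distance to the target’s distance. The sketch below assumes an Earth-diameter planet at Proxima Centauri’s distance purely for illustration; the papers treat the general case.

```python
# Back-of-envelope check of the ~60 km image-size figure above, assuming an
# Earth-diameter planet at Proxima Centauri distance (illustrative assumptions).
LY_AU = 63_241                       # astronomical units per light year

planet_diameter_km = 12_742          # Earth's diameter, assumed target size
target_distance_au = 4.24 * LY_AU    # Proxima Centauri, ~4.24 ly
focal_distance_au = 1200             # heliocentric distance of the focal-plane spacecraft

image_diameter_km = planet_diameter_km * focal_distance_au / target_distance_au
pixels_across = image_diameter_km * 1000 / 60     # 60 m pixel spacing, from the text
print(f"Projected image ~{image_diameter_km:.0f} km wide, ~{pixels_across:.0f} pixels across")
```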

Image: JPL’s Slava Turyshev, who is leading the team developing a solar gravitational lens mission concept that pushes current technology trends in striking new directions. Credit: JPL/S. Turyshev.

The distance between pixels turns out to help; it actually reduces the integration time needed to pull all the data together to produce the image. The integration time (the time it takes to gather all the data that will result in the final image) is in fact reduced when pixels are not adjacent at a rate proportional to the inverse square of the pixel spacing. I’ve more or less quoted the earlier paper there to make the point that according to the JPL work thus far, exoplanet imaging at high resolution using these methods is ‘manifestly feasible,’ another quotation from the earlier work.

We now have a new paper from the JPL team, looking further at this ongoing engineering study of a mission that would operate in the range of 550 to 900 AU, performing multipixel imaging of an exoplanet up to 100 light years away. The telescope is meter-class, the images producing a surface resolution measured in tens of kilometers. Again I will focus on a specific topic within the paper, the configuration of the architecture that would reach these distances. Those looking for the mission overview beyond this should consult the paper, the preprint of which is cited below.

Bear in mind that the SGL (solar gravitational lens) region is, helpfully, not a focal ‘point’ but rather a cylinder, which means that a spacecraft stays within the focus as it moves further from the Sun. This movement also causes the signal to noise ratio to improve, and means we can hope to study effects like planetary rotation, seasonal variations and weather patterns over integration times that may amount to months or years.

Image: From Geoffrey Landis’ presentation at the 2021 IRG/TVIW symposium in Tucson, a slide showing the nature of the gravitational lens focus. Credit: Geoffrey Landis.

Considering that Voyager 1, our farthest spacecraft to date, is now at a ‘mere’ 156 AU, a journey that has taken 44 years, we have to find a way to move faster. The JPL team talks of reaching the focal region in less than 25 years, which implies a hyperbolic escape velocity of more than 25 AU per year. Chemical methods fail, giving us no more than 3 to 4 AU per year, while solar thermal and even nuclear thermal move us into a still unsatisfactory 10-12 AU per year in the best case scenario. The JPL team chooses solar sails in combination with a close perihelion pass of the Sun. The paper examines perihelion possibilities at 15 as well as 10 solar radii but notes that the design of the sailcraft and its material properties define what is going to be possible.
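For a feel of what those AU-per-year figures mean in more familiar units, here is a short sketch; the representative values are mid-range picks from the ranges quoted above, not mission requirements.

```python
# AU/yr speeds in km/s and the resulting time to 600 AU (1 AU/yr is about 4.74 km/s).
AU_PER_YEAR_KMS = 4.74

for label, au_per_year in [("chemical", 3.5),
                           ("solar/nuclear thermal", 11.0),
                           ("SGL mission goal", 25.0)]:
    print(f"{label:>22}: {au_per_year:4.1f} AU/yr = {au_per_year * AU_PER_YEAR_KMS:6.1f} km/s, "
          f"~{600 / au_per_year:.0f} yr to 600 AU")
```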

Remember that we have also been looking at the ongoing work at the Johns Hopkins Applied Physics Laboratory involving a mission called Interstellar Probe, which likewise is in need of high velocity to reach the distances needed to study the heliosphere from the outside (a putative goal of 1000 AU in 50 years has been suggested). Because the JHU/APL effort has just released a new paper of its own, I’ll also be referring to it in the near future, because thus far the researchers working under Ralph McNutt on the problem have not found a close perihelion pass, coupled with a propulsive burn but without a sail, to be sufficient for their purposes. But more on that later. Keep it in mind in relation to this, from the JPL paper:

…the stresses on the sailcraft structure can be well understood. For the sailcraft, we considered among other known solar sail designs, one with articulated vanes (i.e., SunVane). While currently at a low technology readiness level (TRL), the SunVane does permit precision trajectory insertion during the autonomous passage through solar perigee. In addition, the technology permits trimming of the trajectory injection errors while still close to the Sun. This enables the precision placement of the SGL spacecraft on its path towards the image cylinder which is 1.3 km in diameter and some 600+ AU distant.

Is the SunVane concept the game-changer here? I looked at it 18 months ago (see JPL Work on a Gravitational Lensing Mission), where I used the image below to illustrate the concept. The sail is constructed of square panels aligned along a truss. In the Phase II study for NIAC that preceded the current papers, a sail based on SunVane design could achieve 25 AU per year – that would be arrival at 600 AU in 26 years in conjunction with a close solar pass – using a craft with total sail area of 45,000 square meters (equivalent to a single square sail roughly 210 meters on a side).

Image: The SunVane concept. Credit: Darren D. Garber (Xplore, Inc).

With sail area distributed along the truss rather than confined to the sail’s center of gravity, this is a highly maneuverable design that continues to be of great interest. Maneuverability is a key factor as we look at injecting spacecraft into perihelion trajectory, where errors can be trimmed out while still in close proximity to the Sun.

But current thinking goes beyond flying a single spacecraft. What the JPL work has developed through the three NIAC phases and beyond is a mission built around a constellation of smaller spacecraft. The idea is chosen, the authors say, to enhance redundancy, enable the needed precision of navigation, remove the contamination of background light during SGL operations, and optimize the return of data. What intrigues me particularly is the use of in-flight assembly, with the major spacecraft modules placed on separate sailcraft. This will demand that the sailcraft fly in formation in order to effect the needed rendezvous for assembly.

Let’s home in on this concept, pausing briefly on the sail, for this mission will demand an attitude control system to manage the thrust vector and sail attitude once we have reached perihelion with our multiple craft, each making a perihelion pass followed by rendezvous with the other craft. I turn to the paper for more:

Position and velocity requirements for the incoming trajectory prior to perihelion are < 1 km and < 1 cm/sec. Timing through perihelion passage is days to weeks with errors in entry-time compensated in the egress phase. As an example, if there is a large position and/or velocity error upon perihelion passage that translated to an angular offset of 100” from the nominal trajectory, there is time to correct this translational offset with the solar sail during the egress phase all the way out to the orbit of Jupiter. The sail’s lateral acceleration is capable of maneuvering the sailcraft back to the desired nominal state on the order of days depending on distance from the Sun. This maneuvering capability relaxes the perihelion targeting constraints and is well within current orbit determination knowledge threshold for the inner solar system which drive the < 1 km and < 1 cm/sec requirements.

Why the need to go modular and essentially put the craft together during the cruise phase? The paper points out that the 1-meter telescope that will be necessary cannot currently be produced in the mass and volume range needed to fit a CubeSat. The mission demands something on the order of a 100 kg spacecraft, which in turn would demand solar sails of extreme size as needed to reach the target velocity of 20 AU per year or higher. Such sails will be commonplace one day (I assume), but with the current state of the art, in-flight robotic assembly leverages our growing experience with miniaturization and small satellites and allows for a mission within a decade.

If in-flight assembly is used, because of the difficulties in producing very large sails, the spacecraft modules…are placed on separate sailcraft. After in-flight assembly, the optical telescope and if necessary, the thermal radiators are deployed. Analysis shows that if the vehicle carries a tiled RPS [radioisotope power system]…where the excess heat is used for maintaining spacecraft thermal balance, then there is no need for thermal radiators. The MCs [the assembled spacecraft] use electric propulsion (EP) to make all the necessary maneuvers for the cruise (~25 years) and science phase of the mission. The propulsion requirements for the science phase are a driver since the SGL spacecraft must follow a non inertial motion for the 10-year science mission phase.

According to the authors, numerous advantages accrue from using a modular approach with in-space assembly, including the ability to use rideshare services; i.e., we can launch modules as secondary payloads, with related economies in money and time. Moreover, such a use means that we can use conventional propulsion rather than sails as an option for carrying the cluster of sailcraft inbound toward perihelion in formation. In any case, at some point the sailcraft deploy their sails and establish the needed trajectory for the chosen solar perihelion point. After perihelion, the sails — whose propulsive qualities diminish with distance from the Sun — are ejected, perhaps nearing Earth orbit, as the sailcraft prepare for assembly.

Flying in formation, the sailcraft reduce their relative distance outbound and begin the in-space assembly phase while passing near Earth orbit. The mission demands that each of the 10-20 kg mass spacecraft be a fully functional nanosatellite that will use onboard thrusters for docking. Autonomous docking in space has already been demonstrated, essentially doing what the SGL mission will have to do, assembling larger craft from smaller ones. It’s worth noting, as the authors do, that NASA’s space technology mission directorate has already begun a project called On-Orbit Autonomous Assembly from Nanosatellites-OAAN along with a CubeSat Proximity Operations Demonstration (CPOD) mission, so we see these ideas being refined.

What demands attention going forward is the needed development of proximity operation technologies, which range from sensor design to approach algorithms, all to be examined as study of the SGL mission continues. There was a time when I would have found this kind of self-assembly en-route to deep space fanciful, but there was also a time when I would have said landing a rocket booster on its tail for re-use was fanciful, and it’s clear that self-assembly in the SGL context is plausible. The recent deployment of the James Webb Space Telescope reinforces the same point.

The JPL team has been working with simulation tools based on concurrent engineering methodology (CEM), modifying current software to explore how such ‘fractionated’ spacecraft can be assembled. Note this:

Two types of distributed functionality were explored. A fractionated spacecraft system that operates as an “organism” of free-flying units that distribute function (i.e., virtual vehicle) or a configuration that requires reassembly of the apportioned masses. Given that the science phase is the strong driver for power and propellant mass, the trade study also explored both a 7.5 year (to ~800 AU) and 12.5 year (to ~900 AU) science phase using a 20 AU/yr exit velocity as the baseline. The distributed functionality approach that produced the lowest functional mass unit is a cluster of free-flying nanosatellites…each propelled by a solar sail but then assembled to form a MC [mission capable] spacecraft.

Image: Various approaches will emerge about the kind of spacecraft that might fly a mission to the gravitational focus of the Sun. In this image (not taken from the Turyshev et al. paper), swarms of small sailcraft capable of self-assembly into a larger spacecraft are depicted that could fly to a spot where our Sun’s gravity distorts and magnifies the light from a nearby star system, allowing us to capture a sharp image of an Earth-like exoplanet. Credit: NASA/The Aerospace Corporation.

The current paper goes deeply into the attributes of the kind of nanosatellite that can assemble the final design, and I’ll send you to it for further details. Each of the component craft has the capability of a 6U CubeSat/nanosat and each carries components of the final craft, from optical communications to primary telescope mirror. Current thinking is that the design is in the shape of a round disk about 1 meter in diameter and 10 cm thick, with a carbon fiber composite scaffolding. The idea is to assemble the final craft as a stack of these units, producing the final round cylinder.

What a fascinating, gutsy mission concept, and one with the possibility of returning extraordinary data on a nearby exoplanet. The modular approach can be used to enhance redundancy, the authors note, as well as allowing for reconfiguration to reduce the risk of mission failure. Self-assembly leverages current advances in miniaturization, composite materials, and computing as reflected in the proliferation of CubeSat and nanosat technologies. What this engineering study is pointing to is a mission to the solar gravity lens that seems feasible with near-term technologies.

The paper is Helvajian et al., “A mission architecture to reach and operate at the focal region of the solar gravitational lens,” now available as a preprint. The earlier report on the study’s progress is “Resolved imaging of exoplanets with the solar gravitational lens,” (preprint). The Phase II NIAC report on this work is Turyshev & Toth, “Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravity Lens Mission,” Final Report NASA Innovative Advanced Concepts Phase II (2020). Full text.


Getting Down to Business with JWST

So let’s get to work with the James Webb Space Telescope. Those dazzling first images received a gratifying degree of media attention, and even my most space-agnostic neighbors were asking me about what exactly they were looking at. For those of us who track exoplanet research, it’s gratifying to see how quickly JWST has begun to yield results on planets around other stars. Thus WASP-96 b, 1150 light years out in the southern constellation Phoenix, a lightweight puffball planet scorched by its star.

Maybe ‘lightweight’ isn’t the best word. Jupiter is roughly 320 Earth masses, and WASP-96b weighs in at less than half that, but its tight orbit (0.04 AU, or almost ten times closer to its Sun-like star than Mercury) has puffed its diameter up to 1.2 times that of Jupiter. This is a 3.4-day orbit producing temperatures above 800°C.

As you would imagine, this transiting world is made to order for analysis of its atmosphere. To follow JWST’s future work, we’ll need to start learning new acronyms, the first of them being the telescope’s NIRISS, for Near-Infrared Imager and Slitless Spectrograph. NIRISS was a contribution to the mission from the Canadian Space Agency. The instrument measured light from the WASP-96 system for 6.4 hours on June 21.

Parsing the constituents of an atmosphere involves taking a transmission spectrum, which examines the light of a star as it filters through a transiting planet’s atmosphere. This can then be compared to the light of the star when no transit is occurring. As specific wavelengths of light are absorbed during the transit, atmospheric gasses can be identified. Moreover, scientists can gain information about the atmosphere’s temperature based on the height of peaks in the absorption pattern, while the spectrum’s overall shape can flag the presence of haze and clouds.

These NIRISS observations captured 280 individual spectra detected in a wavelength range from 0.6 microns to 2.8 microns, thus taking us from red into the near infrared. Even with a relatively large object like a gas giant, the actual blockage of starlight is minute, here ranging from 1.36 percent to 1.47 percent. As the image below reveals, the results show the huge promise of the instrument as we move through JWST’s Cycle 1 observations, nearly a quarter of which are to be devoted to exoplanet investigation.
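As a sanity check on the size of that signal, the geometric transit depth follows from the planet-to-star radius ratio alone. The 1.2 Jupiter-radius figure is from the text; the stellar radius of roughly 1.05 times the Sun’s is an assumed value for the Sun-like host, used here only for illustration.

```python
# Rough check of the ~1.4% transit depth quoted above.
R_JUP_KM = 71_492
R_SUN_KM = 696_000

r_planet = 1.2 * R_JUP_KM    # planet radius, from the text
r_star = 1.05 * R_SUN_KM     # stellar radius, an assumed approximate value

depth = (r_planet / r_star) ** 2
print(f"Geometric transit depth ~{depth:.2%}")   # ~1.4%, within the 1.36-1.47% range
```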

Image: A transmission spectrum is made by comparing starlight filtered through a planet’s atmosphere as it moves across the star, to the unfiltered starlight detected when the planet is beside the star. Each of the 141 data points (white circles) on this graph represents the amount of a specific wavelength of light that is blocked by the planet and absorbed by its atmosphere. The gray lines extending above and below each data point are error bars that show the uncertainty of each measurement, or the reasonable range of actual possible values. For a single observation, the error on these measurements is remarkably small. The blue line is a best-fit model that takes into account the data, the known properties of WASP-96 b and its star (e.g., size, mass, temperature), and assumed characteristics of the atmosphere. Credit: NASA, ESA, CSA, and STScI.

No more detailed infrared transmission spectrum has ever been taken of an exoplanet, and this is the first that includes wavelengths longer than 1.6 microns at such resolution, as well as the first to cover the entire frequency range from 0.6 to 2.8 microns simultaneously. Here we can detect water vapor and infer the presence of clouds, as well as finding evidence for haze in the shape of the slope at the left of the spectrum. Peak heights can be used to deduce an atmospheric temperature of about 725°C.

Moving into wavelengths longer than 1.6 microns gives scientists a part of the spectrum that is made to order for the detection of water, oxygen, methane and carbon dioxide, all of which are expected to be found in other exoplanets observed by the instrument, and a portion of the spectrum not available from predecessor instruments. All this bodes well for what JWST will have to offer as it widens its exoplanet observations.

Spatial-Temporal Variance Explanation for the Fermi Paradox

Just how likely is it that the galaxy is filled with technological civilizations? Kelvin F Long takes a look at the question using diffusion equations to probe the possible interactions among interstellar civilizations. Kelvin is an aerospace engineer, physicist and author of Deep Space Propulsion: A Roadmap to Interstellar Flight (Springer, 2011). He is the Director of the Interstellar Research Centre (UK), has been on the advisory committee of Breakthrough Starshot since its inception in 2016, and was the co-founder of Icarus Interstellar and the Initiative/Institute for Interstellar Studies. He has served as editor of the Journal of the British Interplanetary Society and continues to maintain the Interstellar Studies Bibliography, currently listing some 1400 papers on the subject.

by Kelvin F Long

Many excellent papers have been written about the Fermi paradox over the years, and until we find solid evidence for the existence of life or intelligent life elsewhere in the galaxy the best we can do is to estimate based on what we do know about the nature of the world we live in and the surrounding universe we observe across space and time.

Yet ultimately to increase the chances of finding life we need to send robotic probes beyond our solar system to visit the planets around other stars. Whilst telescopes can do a lot of significant science, in principle a probe can conduct in-situ reconnaissance of a system, including orbiters, atmospheric penetrators and even landers.

Currently, the Voyager 1 and 2 probes are taking up the vanguard of this frontier and hopefully in the years ahead more will follow in their wake. Although these are only planetary flyby probes and would take tens of thousands of years to reach the nearest stars, our toes have been dipped into the cosmic ocean at least, and this is a start.

If we can send a probe out into the Cosmos, it stands to reason that other civilizations may do the same. As probes from different civilizations explore space, there is a possibility that they may encounter each other. Indeed, it could be argued that the probability of species-species first contact is more with their robotic ambassadors rather than the original biological organisms that launched them on their vast journeys.

However, the actual probability of two different probes from alternative points of origin (different species) interacting is low. This is for several reasons. The first relates to astrobiology in that we do not yet know how frequent life is in the galaxy. The second relates to the time of departure of the probes within the galaxy’s history. Two probes may appear in the same region of space, but if this happens millions of years apart then they will not meet. Third, and an issue not often discussed in the literature, is the fact that each probe will have a different propulsion system and so its velocity of motion will be different.

As a result, not only do probes have to contend with relativistic effects with respect to their world of origin (particularly if they are going close to the speed of light), but they will also have to deal with the fact that their clocks are not synchronised with each other. The implication is that for probes interacting from civilizations that are far apart, the relativistic effects become so large that they create a complex scenario of temporal synchronization. This becomes more pronounced the larger the number of different species of probes, and the larger the difference in their respective average speeds. This is a state we might call ‘temporal spaghettification’, in reference to the complex space-time history of the spacecraft trajectories relative to each other.

An implication of this is that ideas like the Isaac Asimov Foundation series, where vast empires are constructed across hundreds or thousands of light years of space, do not seem plausible. This is particularly the case for ultra-fast speeds (where relativistic effects dominate) that do approach the speed of light. In general, the faster the probe speeds and the further apart the separate civilizations, the more pronounced the effect. In 2016 this author framed the idea as a postulate:

“Ultra-relativistic spaceflight leads to temporal spaghettification and is not compatible with galaxy wide civilizations interacting in stable equilibrium.”

Another consequence of ultra-fast speeds is that if civilizations do interact, it will not be possible to prevent the technology (i.e. power and propulsion) associated with the more advanced race from eventually emerging within the other species at some point in the future. Imagine, for example, if a species turned up with faster than light drives and simply chose to share that technology, even if for a price, as a part of a cultural information exchange.

Should such a culture refuse to share that technology with us, we would likely work towards its fruition anyway. This is because our knowledge of its existence will promote research within our own science to work towards its realisation. Alternatively, knowledge of that technology will eventually just leak out and be known by others.

There is also a statistical probability that if it can be invented by one species, it will be invented by another – a consequence of the law of large numbers. As a result, when one species has this technology and starts interacting with others, eventually many other species will obtain it, even if it takes a long time to mature. We might think of this as a form of technological equilibration, in reference to an analogy with thermodynamics.

Ultimately, this implies that it is not possible to contain the information associated with the technology forever once species-species interaction begins. Indeed, it has been discovered that even the gravitational prisons of light (black holes) are leaky through Hawking evaporation. The idea that there is no such thing as a permanently closed system was also previously framed as a second postulate by this author:

“No information can be contained in any system indefinitely.”

Adopting analogues from plasma physics and the concept of distribution functions, we can imagine a scenario in which within a galaxy there are multiple populations, each sending out waves of probes at some average expansion velocity. If most of the populations adopted fusion propulsion technology, for example, as their choice of interstellar transport, then the average velocity might be around 0.1c (i.e. plausible speeds for fusion propulsion are 0.05-0.15c) and this would then define the peak of a velocity distribution function.

The case of human-carrying ships may be represented by world ships traveling at the slow speeds of 0.01-0.03c. In the scenario of the majority of the populations employing a more energetic propulsion method, such as using antimatter fuel, the peak would shift to the right. In general, the faster the average expansion speed, the further to the right the peak would shift, since the peak represents the average velocity.

The more the populations interacted, the greater the technological equilibration over time, and this could see a gradual shift into the relativistic and then ultra-relativistic (>0.9c) speed regimes. Yet, due to the limiting factor of the speed of light (~300,000 km/s, or 1c), the peak could only ever approach that limit asymptotically.

There is also the special case of faster-than-light travel (ftl), but by the second postulate if any one civilization develops it then eventually many of the others will also develop it. Then as the mean velocity of many of the galactic populations tends towards some ftl value, you get a situation where many civilizations can now leave the galaxy, creating a massive population expansion outwards, as starships are essentially capable of reaching other galaxies. That population would also be expanding inwards to the other stars within our galaxy since trip times are so short. Indeed, ships would also be arriving from other galaxies due to the ease of travel. But if this were the case, starships would be arriving in Earth orbit by now.

In effect, the more those civilizations interact, the more the average speed of spacecraft in the galaxy would shift to higher speeds, and eventually this average would begin to move asymptotically towards ftl (assuming it is physically possible), which is an effect we might refer to as ‘spatial runaway’ since there is no longer any tendency towards some equilibrium speed limit. In addition, the ubiquity of ftl transport comes with all sorts of implications for communications and causality and in general creates a chaotic scenario that does not lean towards a stable state.

This then leads to the third postulate:

“Faster than light spaceflight leads to spatial runaway, and is not compatible with galaxy wide civilizations interacting in stable equilibrium.”

Each species that is closely interacting may start out with different propulsion systems so that they have an average speed of population expansion, but if technology is swapped there will be some sort of equilibration that will occur such that all species tend towards some mean velocity of population diffusion.

The modeling of a population density of a substance is borrowed from stochastic potential theory, with discrete implementation for the quantization of space and time intervals by the use of average collision parameters. This is analogous to problems such as Brownian motion, where particles undergo a random walk. This can be adopted as an analogy to explain the motion of a population of interstellar probes dispersing through the galaxy from a point of origin.

Modeling population interaction is best done using the diffusion equation of physics, which is derived from Fick’s first and second laws for the dispersion of a material flux, together with the continuity equation. This is a second order partial differential equation, and its solution for a population that starts at some initial high density and drops to some low density is a flux that is a function of both distance and time, proportional to the exponential of the negative distance squared.

Using this physics as a model, it is possible to show that the galaxy can be populated within only a couple of million years, but even faster if the population is growing rapidly, as for instance via von Neumann self-replication. A key part of the use of the diffusion equation is the definition of a diffusion coefficient which is equal to ½(distance squared/time), where the distance is the average collision distance between stars (assumed to be around 5 light years) and time is the average collision time between stars (assumed to be between 50-100 years for 0.05-0.1c average speed). These relatively low cruise speeds were chosen because the calculations were conducted in relation to fusion propulsion designs only.

For probes that on average eventually manufacture just one other probe (i.e., not fully self-reproducing), this might be seen as analogous to a critical nuclear state. Where the probe reproduction rate drops to less than unity on average, this is like a sub-critical state and eventually the probe population will fall off until some stagnation horizon is reached. For example, calculations by this author using the diffusion equation show that with an initial population as large as 1 million probes, each traveling at an average velocity of 0.1c, after about ~1,000 years the population would have stagnated at a distance of approximately ~100 light years.
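Those numbers can be reproduced, at least in order of magnitude, with a very crude version of the model. The sketch below is a simplification rather than the author’s exact calculation: it assumes a Gaussian diffusion profile and calls the ‘stagnation horizon’ the distance at which the expected probe count drops below one.

```python
import math

# Crude sketch of the stagnation-distance estimate above, under assumed simplifications:
# a Gaussian diffusion profile rho ~ N0 * exp(-S^2 / (4*D*t)) and a stagnation
# horizon where the expected probe count falls below one.
d_collision_ly = 5.0        # average star-to-star hop distance, from the text
t_collision_yr = 50.0       # hop time at 0.1c, from the text
D = d_collision_ly**2 / (2 * t_collision_yr)   # diffusion coefficient, ly^2 per yr

N0 = 1e6                    # initial probe population
t = 1000.0                  # years of diffusion

stagnation_ly = math.sqrt(4 * D * t * math.log(N0))
print(f"Stagnation horizon after {t:.0f} yr: ~{stagnation_ly:.0f} ly")   # ~100 ly scale
```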

If however, the number of probes being produced is greater than unity, such as through self-replicating von Neumann probes, then the population will grow from a low density state to a high density state as a type of geometrical progression. This is analogous to a supercritical state. For example, if each probe produced a further two probes on average from a starting population of 10 probes, then by the 10th generation there would be a total of 10,000 probes in the population.

Assume that there are at least 100 billion stars in the Milky Way galaxy. For the number of von Neumann probes in the population to equal that number of stars would only require a starting population of less than 100 probe factories, each producing 10 replication probes per generation, after only 10 generations of replication. This underscores the argument made by some, such as Boyce (Extraterrestrial Encounter, A Personal Perspective, 1979), that von Neumann-like replication probes should be here already. The suggestion of self-replicating probes was advanced by Bracewell (The Galactic Club: Intelligent Life in Outer Space, 1975) but has its origins in automata replication and the research of John von Neumann (Theory of Self-Reproducing Automata, 1966).
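The replication arithmetic is worth seeing explicitly; a short sketch assuming each factory or probe spawns ten copies per generation:

```python
# Generations of ten-fold replication needed to match the number of stars.
stars_in_galaxy = 1e11
factories = 10                 # initial probe factories (assumed starting population)
copies_per_generation = 10     # replicas produced per probe per generation

probes = float(factories)
generations = 0
while probes < stars_in_galaxy:
    probes *= copies_per_generation
    generations += 1

print(f"{factories} factories reach {probes:.0e} probes after {generations} generations")
```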

Any discussion about robotic probes interacting is also a discussion about the number of intelligent civilizations – such probes had to be originally designed by someone. It is possible that these probes are no longer in contact with their originator civilization, which may be many hundreds of light years away. This is why such probes would have to be fully autonomous in their decision making capability. Indeed, it could be argued that the probability of the human species first meeting an artificial intelligence-based robotic probe is more likely than meeting an alien biological organism. It may also be the case that in reality there is no difference, if biological entities have figured out how to go fully artificial and avoid their mortal fate.

Indeed, when considering the future of Homo Sapiens and our continued convergence with technology the science and science fiction writer Arthur C Clarke referred to a new species that would eventually emerge, which he called Homo Electronicus. He depicted it thus:

“One day we may be able to enter into temporary unions with any sufficiently sophisticated machines, thus being able not merely to control but to become a spaceship or a submarine or a TV network….the thrill that can be obtained from driving a racing car or flying an aeroplane may be only a pale ghost of the excitement our great grandchildren may know, when the individual human consciousness is free to roam at will from machine to machine, through all reaches of sea and sky and space.” (Profiles of the Future, 1962).

So even the idea of separating a biological organism from a machine intelligence may be an incorrect description of the likely encounter scenarios of the future. A von Neumann robotic spacecraft could turn up in our orbit tomorrow and from a cultural information exchange perspective there may be no distinction. It is certainly the case that robotic probes are more suited for the environment of space than biological organisms that require a survival environment.

Consider a thought experiment. Assume the galaxy’s disc diameter is 100,000 light years and consider only one dimension of space. A population of probes starts out at one end with an average diffusion wave speed of around 10 percent of the speed of light (0.1c). We assume no stopping and instantaneous time between populations of diffusion waves (in reality, there would be a superposition of diffusion waves propagating as a function of distance and time). This diffusion wave would take on the order of 1 million years to cross from one side of the galaxy to the other. We can continue this thought experiment and imagine that the same population starts at the centre and expands out as a spherical diffusion wave. Assuming that the wave did not dissipate and continued to grow, the time to cover the galactic disc would be approximately half what it would be had the wave started on one side.

Now imagine there are two originating civilizations, each sending out populations of probes that continue to grow and do not dissipate. These two civilizations are located at opposite ends of the galaxy. The time for the galaxy to be covered by the two populations will now be half that of a single population starting out on the edge of the disc. We can continue to add more populations n = 1, 2, 3, 4, 5, 6… and we get t, t/2, t/4, t/6, t/8, t/10… and we eventually find that for n > 1 the timescale follows a series of the form tn = t0/[2(n-1)], where t0 is the galactic crossing timescale (i.e. 1 million years) assumed for an initiating population of probes derived from a single civilization, which is a function of the diffusion wave speed.
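The series is simple enough to tabulate; a sketch using the 1-million-year single-population crossing time from the thought experiment:

```python
# Coverage timescale as more independent probe populations are added.
t0_years = 1_000_000    # single-population galactic crossing time at ~0.1c

def coverage_time(n):
    """t0 for one population; t0 / (2*(n-1)) once n > 1 populations expand at once."""
    return t0_years if n == 1 else t0_years / (2 * (n - 1))

for n in range(1, 7):
    print(f"n = {n}: ~{coverage_time(n):,.0f} years")
```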

So for a high number of initiating populations, where n → infinity, the interaction time between populations will be low, so that tn → 0, and the probability of interaction is therefore high. However, for a low number of initiating populations, where n → 0, the interaction time between populations will be high, so that tn → infinity; thus the timescales between potential interactions are a lot larger and the probability of interaction is therefore low.

It is important to clarify the definition of interaction time used here. The shorter the interaction time, the higher the probability of interaction, since the time between effective overlapping diffusion waves is short. Conversely, where the interaction time is long, the time between overlapping diffusion waves is long and so the probability of interaction is low. The illustrated graphic below demonstrates these limits and the boxes are the results of diffusion calculations and the implications for population interaction.

As discussed by Bond & Martin (‘Is Mankind Unique?’, JBIS 36, 1983), the graphic illustrates two extreme viewpoints about intelligence within the galaxy. The first is known as Drake-Sagan chauvinism and advocates for a crowded galaxy. This has been argued by Shklovskii & Sagan (‘Intelligent Life in the Universe’, 1966), Sagan & Drake (The Search for Extraterrestrial Intelligence, 1975). In the graphic this occurs when n → ∞, tn → 0, so that the probability of interaction is extremely high.

This is especially so since there is likely to be a large superposition of diffusion waves overlapping one another. The effect would become more pronounced for multiple populations of von Neumann probes diffusing simultaneously. We note also that an implication of this model for the galaxy is that if there are large populations of probes, then there must have been large populations of civilizations to launch them, which implies that the many steps to complexity in astrobiology are easier than we might believe. In terms of diffusion waves this scenario is characterised by very high population densities such that ρ(S,t) → ∞, which also implies that the probability of probe-probe interaction is high, p(S,t) → 1. This is box (d) in the graphic.

The second viewpoint is known as Hart-Viewing chauvinism and advocates for a quiet galaxy. This has been argued by Tipler (‘Extraterrestrial Intelligent Beings do not Exist’, 1980), Hart (‘An Explanation for the Absence of Extraterrestrials on Earth’, 1975) and Viewing (‘Directly Interacting Extraterrestrial Technological Communities’, 1975). This occurs when n → 0, tn → ∞, so that the probability of interaction is extremely low. In contrast with the first argument, this might imply that the many steps to complexity in astrobiology are hard. This scenario is characterised by very low population densities such that ρ(S,t) → 0, so that few diffusion waves can be expected and also that the probability of interaction is low, p(S,t) → 0. This is box (a) in the graphic.

In discussing biological complexity, we are referring to the difficulty in going from single celled to multi-celled organisms, but then also to large animals, and then to intelligent life which proceeds towards a state of advanced technological attainment. A state where biology is considered ‘easy’ is when all this happens regularly provided the environmental conditions for life are met within a habitat. A state where biology is considered ‘hard’ may be, for example, where it may be possible for life to emerge purely as a function of chemistry but building that up to more complex life such as to an intelligent life-form that may one day build robotic probes is a lot more difficult and less probable. This is a reference to the science of astrobiology which will not be discussed further here. However, since the existence of robotic probes would require a starting population of organisms it has to be mentioned at least.

Given that these two extremes are the limits of our argument, it stands to reason that there must be transition regimes in between which either work towards or against the existence of intelligence and therefore the probability of interaction. The right set of parameters would be optimum to explain our own thinking around the Fermi paradox in terms of our theoretical predictions being in contradiction to our observations.

As shown in the graphic it comes down to the variance σ² of the statistical distribution of the distances S of a number of probe populations ni within a region of space in a galaxy (not necessarily a whole galaxy), where the variance is the square of the standard deviation σ of those distances relative to a mean distance between population sources ΔS. In other words, whether the originating civilizations that initiated the probe populations are closely compacted or widely spread out.

A region of space which had a high probe population density (not spread out, or a sharp distribution function) would be characterised by a low variance. A region with a low probe population density (widely distributed, or a flattened distribution function) would be characterised by a high variance. The starting interaction time t0 of two separate diffusion waves from independent civilizations would then be proportional to the variance and inversely proportional to the diffusion wave velocity vdw of each population, such that t0 ∝ σ²/vdw.

Going back to the graphic there comes a point where the number of populations of probes becomes less than some critical number n < nc, the value of which we do not know, but as this threshold is crossed the interaction time will also increase past that critical value tn > tc. In box (c) of the graphic, biology is ‘hard’ and so despite the low variance the population density will be less than some critical value ρ(S,t) < ρc(S,t), which means that the probability of probe-probe interaction will be low, p(S,t) → 0. This is referred to as a low spatio-temporal distributed galaxy. Whereas for box (b) of the graphic, although biology may be ‘easy’, the large variance of the populations makes for a low density of the total combined population and so also a low probability of probe-probe interaction. This is referred to as a high spatio-temporal distributed galaxy.

Taking all this into account and assessing the Milky Way, we don’t see evidence of a crowded galaxy, which would rule out box (d) in the graphic. In this author’s opinion the existence of life on Earth and its diversity does not imply (at least) consistency with a quiet galaxy (unless one is invoking something special about planet Earth). This is indicated in (a). On the basis of all this, we might consider a fourth postulate along the following lines:

“The probability of interaction for advanced technological intelligent civilizations within a galaxy strongly depends on the number of such civilizations, and their spatial-temporal variance.”

Due to the exponential fall-off in the solution of the diffusion wave equation, the various calculations by this author suggest that intelligent life may occur at distances of less than ~200 ly, which for a 100-200 kly diameter galaxy might suggest somewhere in the range of ~500-1,000 intelligent civilizations along a galactic disc. Given the vast numbers of stars in the galaxy this would lean towards a sparsely populated galaxy, but one where civilizations do occur. Then considering the calculated time scales for interaction, the high probability of von Neumann probes or other types of probes interacting therefore remains.

We note that the actual diffusion calculations performed by this author showed that even with a seed population of 1 billion probes, the distance where the population falls off was at around ~164 ly. This is not too dissimilar to the independent conclusion of Betinis (“On ETI Alien Probe Flux Density”, JBIS, 1978) who calculated that the sources of probes would likely be somewhere within 70-140 ly. Bond and Martin (‘A Conservative Estimate of the Number of Habitable Planets in the Galaxy’ 1978) also calculated that the average distance between habitable planets was likely ~110 ly and ~140 ly between intelligent life relevant planets. Sagan (‘Direct Contact Among Galactic Civilizations by Relativistic Interstellar Spaceflight’, 1963) also calculated that the most probable distance to the nearest extant advanced technical civilization in our galaxy would be several hundred light years. This all implies that an extraterrestrial civilization would be at less than several hundred light years distance, and this therefore is where we should focus search efforts.

When it comes down to the Fermi paradox, this analysis implies that we live in a moderately populated galaxy, and so the probability of interaction is low when considering both the spatial and temporal scales. However, when it comes to von Neumann probes it is clear that the galaxy could potentially be populated in a timescale of less than a million years. This implies they should be here already. As we perhaps ponder recent news stories that are gaining popular attention, we might once again consider the words of Arthur C Clarke in this regard:

“I can never look now at the Milky Way without wondering from which of those banked clouds of stars the emissaries are coming…I do not think we will have to wait for long.” (‘The Sentinel’, 1951).

The content of this article is by this author and appears in a recently accepted 2022 paper for the Journal of the British Interplanetary Society titled ‘Galactic Crossing Times for Robotic Probes Driven by Inertial Confinement Fusion Propulsion’, as well as in an earlier paper published in the same journal titled ‘Unstable Equilibrium Hypothesis: A Consideration of Ultra-Relativistic and Faster than Light Interstellar Spaceflight’, JBIS, 69, 2016.
