SWIMMERs: A Thought Experiment on the Potential and Limitations of Propellantless Interstellar Travel

Can we tap ionized particles in the interstellar medium as a way of exchanging momentum for propulsion? It’s a concept with a lot of pluses if it can be made to work, chief among them the fact that such a device would be propellantless. Looking at the topic today is Drew Brisbin, a postdoctoral researcher in astronomy who received his PhD from Cornell University in 2014. Dr. Brisbin has since gone on to work towards better understanding his field of specialization: the study of galaxy evolution in the early universe. He currently works at Universidad Diego Portales, in Santiago, Chile, where he collaborates closely with other researchers using some of the most sensitive telescopes in the world, located in the mountainous Chilean desert. In addition to his formal work and outdoors-oriented hobbies, he also enjoys dreaming about the future of humanity. One particular dream recently seemed to warrant some further investigation, leading him to the ideas he explains today.

By Drew Brisbin

Foreword

This article represents a distillation of a work published in April 2019 in the Journal of the British Interplanetary Society (reference [0]). Some technical details have been omitted for brevity, but interested readers are encouraged to read the original publication which is freely available at https://arxiv.org/abs/1808.02019. Additional commentary has been included to address what the author sees as a critical flaw in Dr. Robert Zubrin’s “Dipole Drive” concept.

Abstract

Particularly in light of the recent public excitement and ensuing disappointment regarding the exotic “EM drive”, it is worthwhile to point out that space travel without on-board propellant is eminently possible based on well established physical principles. Here a new mode of transport is proposed which relies on electric-field moderated momentum exchange with the ionized particles in the interstellar medium. The application of this mechanism faces significant challenges requiring industrial-scale exploitation of space, but the technological roadblocks are different from those presented by light sails or particle beam powered craft. This mode of space travel is well suited to energy efficient travel at velocities below about five percent the speed of light (0.05 x c) and compares exceptionally well to light sails on an energy expenditure basis. It therefore represents an extremely attractive mode of transport for slow (of order multi-century long) voyages carrying massive payloads to nearby stellar neighbors. This will be a useful niche for missions that would otherwise be too energy intensive to carry out, including initial forays into nearby stellar systems with observatory probes, or long term transport of bulk materials as a precursor mission to set up colony infrastructure.

Introduction

The tyranny of the rocket equation has long been recognized as an impediment to becoming a truly spacefaring species. Due to the exorbitant reaction mass required for traditional rockets in interstellar travel, considerable attention has been paid to methods of space travel that circumvent the rocket equation. Laser-driven light sails are a prominent and long-standing idea (see for example [1] and references therein). While light sails are well established, and are the engines of the widely publicized Breakthrough Starshot program [2] and Project Dragonfly [3], their thrust is fundamentally limited to 6.67 N/GW. For comparison, the Three Gorges Dam, the largest capacity power plant currently in operation, has a capacity of about 22.5 GW. If this power were transmitted with perfect efficiency to a light sail it would provide thrust equivalent to the force required to lift a 15 kg mass on Earth. Scaling light sails up to larger-than-gram-scale spacecraft therefore necessarily depends on humanity’s ability to harness incredible power. Furthermore, since light is only able to push, it is very difficult for light sail spacecraft to slow down at their destination, limiting missions to fly-bys unless complicated reflecting infrastructure can be sent ahead of the craft. Alternatively, direct sunlight could be used as a source of photon pressure. Unfortunately, a practical interstellar solar sail is thought to require materials with extremely low areal densities, σ ≲ 10^-3 g/m^2 [4]. Current state-of-the-art reflective films developed for light sails reach areal densities of order 10 g/m^2, or four orders of magnitude too dense even without including any support structure or payload, so it is uncertain when, if ever, suitable materials will be developed for an interstellar solar sail [5].
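These numbers follow directly from the photon thrust relation F = 2P/c for a perfectly reflective sail. A quick back-of-the-envelope check (an illustrative Python sketch; the 22.5 GW figure is the Three Gorges capacity quoted above):

# Ideal light-sail thrust: F = 2P/c (perfect reflection).
c = 3.0e8   # speed of light, m/s
g = 9.81    # Earth surface gravity, m/s^2

print(f"thrust per GW: {2 * 1e9 / c:.2f} N")          # ~6.67 N/GW
P = 22.5e9  # Three Gorges Dam capacity, W
F = 2 * P / c
print(f"thrust at 22.5 GW: {F:.0f} N")                # ~150 N
print(f"equivalent weight on Earth: {F / g:.1f} kg")  # ~15 kg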

Another idea using external reaction mass is the particle-beam powered spacecraft. This hinges on a sail formed by an extended electric or magnetic field which is able to deflect a remotely-beamed stream of charged particles. Since charged particles carry much more momentum per unit energy than photons this could have much lower power requirements than light sails. This concept has its origins in the Magsail, a large loop of current carrying wire which deflects passing charged particles in the interstellar medium (ISM), eliciting a drag force which could be used as a brake to slow spacecraft down to rest with respect to the ISM after a high speed journey [6]. To provide acceleration, one could simply supplant the ISM with a beamed source of high velocity charged particles [7]. Providing a long distance beam of charged particles is, however, quite difficult because of beam divergence due to particle thermal motion, interaction with interplanetary or interstellar magnetic fields, and electrostatic beam expansion in the case of non-neutral particle beams. Andrews (2003) suggests that it would be necessary to construct a highway of beam generators at least every AU or so along the route on which the craft accelerates [8]. The related concept of the electric sail instead uses an electric field generated by a positively charged grid of wires or wire spokes extending from a central hub to push against the outward streaming solar wind [9]. This concept has the near term potential to allow travel within our own stellar neighborhood with very low energy costs. The electric sail, like the Magsail however, ultimately relies on a drag force, decelerating the spacecraft to rest with respect to the surrounding medium (the outward moving solar wind in this case). It is therefore unable to accelerate beyond the heliosphere, nor can it accelerate directly inwards towards the sun while in the heliosphere.

It would be possible to overcome these obstacles by actively pushing against the charged particles of the ISM, rather than passively coming to rest with respect to the medium. These spacecraft with interstellar medium momentum exchange reactions (SWIMMERs) can accelerate with respect to the ISM, are significantly more energy efficient than light sails, would be able to decelerate at their destination, do not require pre-established infrastructure along the route and are based on elementary physical principles. Recently Dr. Robert Zubrin discussed his independent work on a “dipole drive” concept which is similar to the SWIMMER concept described here [10,11]. Although the two ideas are related and even share a similar geometry, they were arrived at independently. Furthermore, the dipole drive as described suffers from a flaw which prevents its successful acceleration in the stationary ISM. The work presented here concerns the conceptual mechanism which allows SWIMMERs to accelerate through a stationary ISM.

Both the Magsail and electric sail concepts rely on the fact that there is significant mass in the ISM (or the heliosphere) which can interact with relatively low mass structures consisting of charged or current carrying wires. How, then, could a spacecraft interact to accelerate rather than decelerate with respect to the surrounding medium?

The need for a time varying electrical voltage

Quite generally this will require a time varying electric field which can do work on the surrounding particles of the ISM. As a thought experiment, imagine a spacecraft consisting of a pair of conducting plates arranged in a parallel-plate capacitor style configuration with a switchable power source connecting them and able to charge and discharge them at will. The conducting plates, rather than being solid, are composed of a wire mesh with the vast majority of the area taken up by open space rather than metal, such that particles are easily able to pass through the plate mesh without collision. The spacecraft is moving face on through a stationary medium of charged particles (like the interstellar medium), as shown schematically in Fig. 1. For the moment take the charged particles to be macroscopic and extremely dispersed so we can easily see individual particles and identify when they are in the vicinity of the spacecraft — charged pebbles rather than atoms or elementary particles. As the spacecraft moves through the field of charged particles we can strategically switch the power source on and off to create an electric field and push on charged particles as they pass between the conducting plates, accelerating the particles backward and creating thrust to push the spacecraft forward (Fig. 1). This scenario is perfectly in line with conservation laws: momentum is conserved since the particle gains momentum in the backward direction and the spacecraft gains momentum in the forward direction. Energy is conserved as the increase in kinetic energy (of the particle and the spacecraft) is drawn directly from the power source, depleting whatever energy source is being used (depleting a battery’s chemical energy, or converting beamed laser light, for instance).

In practice of course, the ISM is not made of macroscopic, easily separable pebbles, but microscopic ions and electrons with tens of thousands per cubic meter or more, so we cannot consider manually switching the charged plates based on the positions of individual particles. The possibility of simply leaving the plates continuously charged, front facing plate positive, back plate negative, may initially seem workable, and this appears to be the scenario imagined in the “Dipole Drive” [10]. Introductory electricity courses train us to think about parallel plate capacitors as having a strong field in one direction between a pair of charged plates, and no field outside the plates, so it is easy to initially imagine that a positively charged ion would approach the front plate while feeling absolutely no force, feel a strong force backwards while between the plates, and then feel absolutely no force again as it recedes beyond the back plate. In fact, however, a set of finite parallel plates will indeed have electric fields outside of their gap which directly oppose the electric field inside the gap, and would perfectly negate the thrust generated by particles transitioning the gap.

This is made clearer by considering the electrical voltage rather than electric fields. Fig. 2 (top) shows schematically the voltage through the center of an idealized infinite parallel plate capacitor charged to a potential difference of 2V. Positive charges will want to “slide down” the potential ramp located between the plates, accelerating rightward. For a finite sized set of parallel plates the voltage extends a bit to the left and right of the gap, continuously decaying from the voltage at the plates to a voltage of zero at great distances, as shown in Fig. 2 (middle). Note that these voltage ramps tilt the opposite way from the ramp between the plates and will tend to accelerate positive charges in the opposite direction. For a particle entering from the left and making it all the way through to exit out the right side, would it end up with more or less rightward velocity? Remember that voltage (often referred to as electric potential) is simply potential energy divided by charge (in SI units 1 volt = 1 joule/coulomb). Starting out very far on the left in Fig. 2, the rightward traveling particle with charge q will have some kinetic energy, KE, and zero potential energy.

As it approaches the front plate, it will begin slowing down as it rises up the potential ramp, converting kinetic energy to potential energy and eventually reaching a peak potential energy of +qV (with a kinetic energy of KE-qV). As it traverses the gap it is accelerated rightward as it slides down the potential, eventually reaching a potential energy of -qV and a kinetic energy of KE+qV. It then exits the parallel plate gap and is again forced up a potential ramp, converting kinetic energy back into potential energy, until it finally reaches distances far away from the plates where the voltage is 0, at which point it has potential energy 0 and kinetic energy KE — exactly the same as it started.

This is not a minor fluke of this particular geometry either. Any arrangement of charged plates – so long as the voltage is finite and the plate volume is finite – will leave the potential 0 at infinity. In [11] Zubrin has suggested that Debye shielding (the phenomenon of oppositely charged free particles in a plasma tending to cluster around charged objects and screen out the electric field) would somehow ameliorate this issue, but that is not the case. The effect of Debye shielding will be for the front facing positive plate to accumulate a cloud of electrons and the back plate to accumulate a cloud of positive ions, making the voltage ramps just outside the plate pair return to zero more steeply, as shown in Fig. 2 (bottom). Nonetheless, the potential remains 0 at large distances, and in the end passing particles enter and leave the system with the exact same kinetic energy. This does not preclude the particles from changing direction: they can be reflected back in the direction they came from, or deflected to the side if they interact with the parallel plates at an angle, so such a configuration could certainly be used to steer or to decelerate with respect to the charged particle medium. But a system with constant voltages cannot do work on charged particles which begin and end far away, and cannot accelerate with respect to them (it could still be useful for accelerating up to the velocity of the solar wind inside our heliosphere, much like the electric sail).
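The zero-net-work argument is easy to verify numerically. The sketch below is illustrative only: the potential profile is an arbitrary smooth stand-in for the fringing fields of a finite plate pair, not a solution of the real electrostatics, but any static potential that returns to zero far away gives the same result. A positive charge climbs the front ramp, slides down between the plates, climbs back out, and exits with exactly the kinetic energy it entered with.

import math

V0, a = 100.0, 1.0   # peak plate voltage (V) and plate half-spacing (arbitrary units)

def V(x):
    # Stand-in static potential: +V0 at the front plate (x = -a),
    # -V0 at the back plate (x = +a), decaying to zero far away.
    return V0 * (-x / a) * math.exp((a**2 - x**2) / (2 * a**2))

def E_field(x, h=1e-6):
    return -(V(x + h) - V(x - h)) / (2 * h)   # E = -dV/dx (central difference)

q, m = 1.0, 1.0          # charge and mass in arbitrary units
x, v = -50.0, 20.0       # start far to the left of the plates, moving right
KE0 = 0.5 * m * v**2     # initial KE (= 200) exceeds the qV0 = 100 barrier

dt = 1e-4
while x < 50.0:          # symplectic Euler integration out to the far side
    v += q * E_field(x) / m * dt
    x += v * dt

print(f"initial KE: {KE0:.3f}   final KE: {0.5 * m * v**2:.3f}")  # equal: no net work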

If the goal is to accelerate in the dead of the ISM, it is essential that a time-varying electrical potential be used to do work on the passing particles. There are multiple ways to do this, but one simple implementation, illustrated in Fig. 3, could feature a pusher plate made of a large grid of wires moving face-on through the ISM (much like the proposed geometry of a standard electric sail). Unlike a standard electric sail, however, the grid of wires would actually be two identical layers of wire sandwiching a strong insulator between them to keep the two layers physically apart and electrically isolated. These wire grids or tethers could be made from very fine superconducting wire and the entire ensemble could be spun to create tension and keep the wire grids extended without heavy support structure. The two faces of the pusher plate would be charged and discharged cyclically. In the “primer” portion of the operation cycle, the front layer in the pusher plate is raised to a positive potential and the back layer to an equal negative potential. Due to edge effects of the finite plates and the self-shielding behavior of plasmas, this results in a decaying electric potential of opposite sign on either side of the plates. Ions streaming towards the front positively charged layer slow down, building up an overdense clump in front of the pusher plate while an underdensity forms at the immediate location of the pusher plate. Then in the “pull” stage of the cycle the potential difference across the layers is reversed and significantly increased. The ion clump that was formed in front of the plate will be attracted to the negative front layer, pulling the spacecraft forward. As the clump approaches the pusher plate, the potential difference is turned off and the clump is allowed to coast through the plate to the other side. In the final “push” stage the same potential difference is applied and the clump is further pushed backwards by the positive back layer of the pusher plate. The clump drifts away beyond the influence of the pusher plate and the cycle repeats. Fig. 4 shows the electric potential and ion density at various cycle stages for a simple model.
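The primer/pull/push sequence amounts to a simple control loop. The sketch below is purely schematic: the stage durations, voltages and polarity convention are placeholder values chosen for illustration, not figures from [0], and realistic timings would have to come from the plasma modeling discussed below.

from dataclasses import dataclass
from itertools import cycle

@dataclass
class Stage:
    name: str
    front_kv: float    # potential applied to the front wire layer, kV (placeholder)
    back_kv: float     # potential applied to the back wire layer, kV (placeholder)
    duration_s: float  # placeholder stage duration, s

# primer: slow oncoming ions so an overdense clump forms ahead of the plate
# pull:   reverse and strengthen the field so the clump pulls the craft forward
# coast:  switch off while the clump drifts through the gridded plate
# push:   re-apply the field so the positive back layer pushes the clump aft
SWIMMER_CYCLE = [
    Stage("primer", front_kv=+10.0, back_kv=-10.0, duration_s=1e-3),
    Stage("pull",   front_kv=-50.0, back_kv=+50.0, duration_s=5e-4),
    Stage("coast",  front_kv=0.0,   back_kv=0.0,   duration_s=1e-4),
    Stage("push",   front_kv=-50.0, back_kv=+50.0, duration_s=5e-4),
]

def run(n_cycles=2):
    period = sum(s.duration_s for s in SWIMMER_CYCLE)
    t = 0.0
    for stage in cycle(SWIMMER_CYCLE):
        if t >= n_cycles * period:
            break
        print(f"t = {t:8.4f} s  {stage.name:6s}  "
              f"front = {stage.front_kv:+6.1f} kV  back = {stage.back_kv:+6.1f} kV")
        t += stage.duration_s

run()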

By intentionally setting up clumps in the oncoming ISM we can interact with it much more like in our initial thought experiment with charged pebbles. The spacecraft gains momentum by giving backward momentum to the ISM (pushing ion clumps to the right in Fig. 4). The source of the potential difference does work in the primer stage when it sets up the positive potential, raising the electrical potential energy of the ions in front, and again in the push stage when it raises the ion clump to a higher potential. Electrons encountering the potential ramps will largely be reflected, but this causes only a negligible momentum drag since they are far less massive than the protons and other positively charged ions. In a real three-dimensional case, there will also be loss of efficiency due to particles which do not interact perfectly in one dimension, but instead are pushed off to the side as they pass by the charged wires. Furthermore, this qualitative conceptual analysis does not account for the self-influencing behavior of plasmas. This will undoubtedly strongly affect the ion (and electron) distributions and the extended electric potential. Detailed particle-in-cell simulations will be necessary to investigate the optimal tuning of cycle timings, electrical potentials, and even the geometry of the charged plates, as it may be advantageous in some cases to accelerate ion clumps across a series of potential differences to gain more thrust per ion, at the expense of a more complex and massive pusher plate. These simulations are beyond the scope of this work but will be a critical step in transitioning the concept from a thought experiment to a practical real world device.

Mathematical expression of an idealized case

While the effectiveness and geometry of a SWIMMER will ultimately need to be tested thoroughly by simulation, it is straightforward to write down the force on an idealized system which is able to efficiently convert electrical power, P, into backwards acceleration of nearby particles. The resulting force, FSWIMMER, is

FSWIMMER = mp n v A [ ±√( v^2 + 2P/(mp n v A) ) - v ]    (2)

where mp is the particle mass (of order the proton mass for the ISM), n is the density of particles, v is the velocity of the spacecraft (with respect to the stationary-particle frame), and A is the cross sectional area over which the system can interact with particles (or equivalently v x A is the volume of particles swept out by the spacecraft per unit time). The positive sign is used when accelerating with respect to the stationary-particle frame, and the negative sign is used when decelerating. The derivation of this relationship is shown in [0].

The power referred to throughout this work is the delivered electrical power. Thus far the source of power for a SWIMMER has been ignored. There is no reason a SWIMMER could not use an onboard power source, making it totally independent of external infrastructure. This, of course, would require an exceptionally energy dense fuel source as well as a very efficient generator to achieve useful velocities for interstellar travel. Beaming power remotely to the SWIMMER is possibly a more viable strategy for interstellar travel, which invites a direct comparison to light sails. In this case an additional P/c term is included in eq. 2, corresponding to the photon pressure of the beamed energy being absorbed by the spacecraft. The total force is then:

Ftotal = mp n v A [ ±√( v^2 + 2P/(mp n v A) ) - v ] ± P/c    (3)

where P/c is either added or subtracted depending on whether the beamed energy source is coming from the origin or the destination, respectively. It will also be useful to consider the ratio, R, of the force on a SWIMMER to the force on an ideal light sail with equal delivered power (F = 2P/c). This ratio can be written as:

R = Ftotal / (2P/c) = (c/2P) { mp n v A [ √( v^2 + 2P/(mp n v A) ) - v ] + P/c }

where we have used the positive signs in the SWIMMER and photon forces, indicating the spacecraft is accelerating and beamed energy is coming from the origin (as would be the case for an initial out-bound journey to another star system). In Fig. 5 R is shown as a function of velocity for a few values of A/P. There is some uncertainty surrounding the structure and properties of the local ISM, but there is general consensus that a journey to α Cen A will involve passage through some combination of the Local Interstellar Cloud, the Circum-Heliospheric Interstellar Medium and the G Cloud. Therefore a conservatively low ion density of n = 0.07 cm^-3, consistent with the estimated densities in these clouds (see for example [13]), is used in Fig. 5. Fig. 5 shows the force initially rising with velocity due to the fact that at higher velocities the SWIMMER plates are sweeping out larger volumes of the ISM faster and are able to interact with more particles per second. The force peaks at some velocity, and then decreases since it takes more and more energy to accelerate the passing ions to yet higher velocities to get the same momentum change. Due to this initial rise in force with velocity, it may be useful to give SWIMMERs an initial velocity boost through other means (such as conventional rockets, gravitational assists, or particle beam assists) to take advantage of the forces at higher velocities.

Larger A/P values give significantly better performance at lower velocities, but trend together as velocity increases, with the force approaching (P/v) x (c+v)/c (the ratio R approaches (c+v)/(2v), shown by the red line in Fig. 5). This high velocity limit implies an order of magnitude larger force for SWIMMERs relative to light sails up to v=c/19 or about 5% the speed of light.
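A small numerical sketch of these relations, using eq. 2 and 3 as written above with the same conservative ion density; the power and interaction area below are arbitrary illustrative choices, and the printed ratios simply trace the rise-and-fall behavior shown in Fig. 5:

import math

c   = 3.0e8      # m/s
m_p = 1.67e-27   # proton mass, kg
n   = 0.07e6     # ISM ion density, m^-3 (0.07 cm^-3)

def swimmer_force(P, v, A, accelerating=True, beam_from_origin=True):
    """Idealized SWIMMER force (eq. 2) plus beamed-photon pressure (eq. 3)."""
    mdot = m_p * n * v * A                    # swept mass flow rate, kg/s
    sign = 1.0 if accelerating else -1.0
    F = mdot * (sign * math.sqrt(v**2 + 2 * P / mdot) - v)
    return F + (P / c if beam_from_origin else -P / c)

P = 1e9    # delivered power, W (arbitrary example)
A = 1e11   # interaction area, m^2 (arbitrary example)
for v in (1e-4 * c, 1e-3 * c, 1e-2 * c, c / 19, 0.1 * c):
    R = swimmer_force(P, v, A) / (2 * P / c)  # ratio to an ideal light sail
    print(f"v = {v / c:7.4f} c   R = {R:6.1f}")
# At high velocity R approaches (c + v)/(2v), i.e. about 10 at v = c/19.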

Example mission

To illustrate the potential of SWIMMERs for interstellar travel, it is helpful to consider a possible future mission. Further details and technical considerations are available in the published manuscript, but here we simply assume that the engineering difficulties of beaming power over interstellar distances are solved, and that the onboard energy converter has a specific power capacity of 4 kW/kg (every 4 kW of delivered power requires an increase of 1 kg in the mass of the power converter equipment). The ISM is assumed to be uniform with a density of 0.07 cm^-3, a temperature of 7000 K and therefore an electron Debye length, λD = 21.8 m.
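The quoted Debye length follows from the standard plasma formula (a quick check, assuming singly ionized hydrogen):

import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
k_B  = 1.381e-23   # Boltzmann constant, J/K
e    = 1.602e-19   # elementary charge, C

n = 0.07e6   # electron density, m^-3 (0.07 cm^-3)
T = 7000.0   # temperature, K

lambda_D = math.sqrt(eps0 * k_B * T / (n * e**2))
print(f"Debye length: {lambda_D:.1f} m")   # ~21.8 m, as quoted above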

A relatively lower mass SWIMMER mission might have the goal of transporting a modest space probe, mpay = 1000 kg, to α Cen A. Accelerating within our heliosphere and decelerating at the destination are possible, and in fact relatively energy efficient, but discussion of these is left to the main publication [0] for brevity. A modest electrical power delivered to the SWIMMER of 10 MW is assumed. The pusher plate will be made up of several long tethers. In practice these tethers will consist of very fine braided filaments to prevent failure due to micrometeoroid and interstellar dust collision, as described for the electric sail [9]. From a material mass standpoint these are considered to be single wires with an effective diameter of 30 μm. This is equivalent in material to eight filaments with diameters of about 10 μm. Given the pulsed nature of the SWIMMER electric field, the wire tethers should be made out of superconducting materials. A single charged wire will interact with charged particles passing within about λD on either side of it. The total cross sectional interaction area is given by

A = 2 λD L    (4)

where L is the summed length of all the tethers. This cross sectional interaction area is somewhat of an idealization, as the Debye length does not represent a hard cut off where particles suddenly cease to be affected by an electric field, and in regions where tethers intersect, part of their cross sectional areas will overlap. Nonetheless it is a sufficient estimate for our rough calculations. The mass devoted to this pusher plate will be mpusher = ρ x L x π rwire^2, where rwire is the effective radius of the wire tether (15 μm in our case), and ρ is the density of the tether material, which we take to be 2570 kg/m^3, the density of the popular superconducting material magnesium diboride.

The total mass of the SWIMMER ship comprises mpay = 1000 kg, mpower = 2500 kg (given by the 10 MW supplied electric power and a 4 kW/kg specific power), and mpusher. We will take 7400 kg as the mass of the pusher plate, which provides for a total summed tether length of 4.1 x 10^9 m. While this is seemingly a very long tether, it does not in any way represent the spatial scale of the SWIMMER, as the pusher plate will be made up of several thousand tethers, possibly splitting off from each other at greater radial distances. The summed length is merely a useful value for determining the total cross sectional area in plasmas of different temperatures and densities. In this case, from eq. 4 our SWIMMER tether length corresponds to a cross sectional interaction area of 180,000 km^2 (about the size of Uruguay or the state of Washington).
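These figures can be reproduced from the material numbers just quoted (an illustrative check using eq. 4):

import math

rho      = 2570.0   # MgB2 density, kg/m^3
r_wire   = 15e-6    # effective tether radius, m
m_pusher = 7400.0   # pusher plate mass budget, kg
lambda_D = 21.8     # electron Debye length, m

L = m_pusher / (rho * math.pi * r_wire**2)   # total summed tether length, m
A = 2 * lambda_D * L                         # eq. 4: interaction area, m^2

print(f"summed tether length: {L:.2e} m")         # ~4.1e9 m
print(f"interaction area: {A / 1e6:,.0f} km^2")   # ~180,000 km^2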

We will begin our voyage as the SWIMMER enters the ISM at 100 AU with a velocity of 4.0 x 10^5 m/s (0.133% the speed of light, and consistent with typical velocities of the solar wind). Iteratively integrating using eq. 3, we find that after just less than 300 years the spacecraft will be on the doorstep of Alpha Centauri, having travelled one parsec and achieved a final velocity of 1.66% the speed of light. Including the time to initially accelerate from rest within the heliosphere and to decelerate at the destination marginally increases the trip time, but we could also shorten the trip slightly by systematically shedding mass and reducing the size of the pusher plate en route. As Fig. 5 shows, at higher velocities larger plate areas provide diminishing returns, so as the spacecraft reaches higher velocities the larger area of the pusher plate becomes dead weight. A total journey of about 300 years starting from rest in the solar system to being gravitationally captured by α Cen A is reasonable for the overall journey [0].
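A minimal version of that integration is sketched below. It uses eq. 3 as written above, assumes a fixed spacecraft mass and a uniform ISM, and starts at 100 AU with solar-wind speed; the output lands in the same general range as the quoted ~300 years and ~1.7% of c, with the exact figures depending on these simplifications.

import math

c, m_p = 3.0e8, 1.67e-27
n      = 0.07e6               # ISM ion density, m^-3
P      = 10e6                 # delivered power, W
A      = 1.8e11               # interaction area, m^2 (from the tether budget above)
m_ship = 1000 + 2500 + 7400   # payload + power conversion + pusher plate, kg
parsec = 3.086e16             # m
year   = 3.156e7              # s

v, x, t = 4.0e5, 0.0, 0.0     # initial velocity, distance travelled, elapsed time
dt = year / 100.0
while x < parsec:
    mdot = m_p * n * v * A                           # swept mass flow rate, kg/s
    F = mdot * (math.sqrt(v**2 + 2 * P / mdot) - v)  # eq. 2, accelerating
    F += P / c                                       # beamed-photon pressure (eq. 3)
    v += F / m_ship * dt
    x += v * dt
    t += dt

print(f"trip time: {t / year:.0f} years, final velocity: {v / c * 100:.2f}% of c")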

While 300 years is a significant amount of time for a scientific endeavor, there is good precedent for multi-century science projects for worthwhile investigations (cf. [14-16]). Furthermore, the energy expense is a pittance compared to an equivalent mission using laser-pushed light sails. An equivalent, 7400 kg probe pushed by 10 MW of laser light incident on ideal light sails (and starting with a velocity of 4 x 10^5 m/s) would take about 1600 years to travel 1 parsec and reach a final velocity of 0.28% the speed of light. To reduce the light sail travel time to 300 years would require an average power consumption of nearly 700 MW (70x higher).
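The light sail comparison is simple constant-force kinematics (again an illustrative sketch; the 7400 kg mass and 10 MW power are the figures used above):

import math

c, parsec, year = 3.0e8, 3.086e16, 3.156e7
m, P, v0 = 7400.0, 10e6, 4.0e5    # sail craft mass, beamed power, initial speed

a = 2 * P / (c * m)               # ideal light-sail acceleration, m/s^2
# Solve d = v0*t + 0.5*a*t^2 for t, with d = 1 parsec:
t = (-v0 + math.sqrt(v0**2 + 2 * a * parsec)) / a
print(f"travel time: {t / year:.0f} years, "
      f"final velocity: {(v0 + a * t) / c * 100:.2f}% of c")   # ~1600 yr, ~0.28% c

# Power needed to cover 1 parsec in 300 years instead:
t300 = 300 * year
a_needed = 2 * (parsec - v0 * t300) / t300**2
print(f"power for a 300-year trip: {a_needed * m * c / 2 / 1e6:.0f} MW")  # ~670 MW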

Conclusion

SWIMMERs represent a new mode of interstellar transport. By dispensing with onboard reaction mass they circumvent the rocket equation, and by exchanging momentum with ions in the ISM they improve by orders of magnitude over the energy efficiency of traditional light sails at relatively low velocities. The key to this momentum exchange is the time varying electric field, which allows SWIMMERs to create inhomogeneities in the surrounding plasma and then push on these inhomogeneities to create thrust.

SWIMMERs perform exceptionally well at lower velocities, with their advantage over light sails diminishing quickly at v > 0.05 c. Furthermore, by relying on the ambient ISM as a momentum exchange medium, they are quite versatile, able to accelerate either away from or towards a beamed energy source, opening up myriad opportunities to serve as one-way transports, roundtrips, or even statites remaining in stationary positions with respect to the Sun and serving as useful waypoints with infrastructure for other potential space transportation networks.

The example discussed here only scratches the surface of the possible roles for SWIMMERs in our spacefaring future. Their characteristics make them ideal for any mission with large masses in which relatively low velocities are acceptable. They are unlikely to be the sole mode of space transport due to their diminishing advantages at high velocities and their structural complexity which requires onboard power conversion systems with significant mass. They can play the role of the proverbial Mack trucks of space, transporting goods slowly and reliably at a low energy cost, while more time sensitive cargo can make use of fast yet inefficient light sails – the Ferraris of space. SWIMMERs might, for instance, be well suited to aiding the construction of a fast interstellar highway by transporting massive particle beam stations along with their fuel supply out to stationary positions between us and our target destinations. These particle beam stations could be used to swiftly carry low mass Magsails along the path or augment the power of future SWIMMERs by replacing the stationary ISM with a corridor of fast moving beamed particles.

The mission analyzed here regards a one-way interstellar trip. While it does push the limits of current technology by assuming relatively high specific power electrical systems, very thin mass-produced superconducting wire, and low mass electrical insulators which can resist large potential differences (as well as very large laser array optics, which are addressed in other works regarding light sails), there are no obvious material or theoretical limits which would prevent such missions from realization. Future work in this vein will need to examine several issues ignored here. Areas of further investigation include the efficiency of the SWIMMER drive in three dimensions; the electrical potential and cycle timings during the pulsed SWIMMER operation and how they affect the required current density of the tethers; the expected impact of interstellar dust collisions and redundant tether configurations to avoid catastrophic damage from tether breakage; and realistic limits on power conversion system capabilities.

As our understanding of interstellar travel develops, we must face the realization that, not only is it difficult, but there is no one-size-fits-all solution. Where SWIMMERs excel in one metric, other methods may excel in another. Ultimately our best strategy is to develop all possible methods in the hope that their synergy will provide a means to accomplish our goals.

References

0. D. Brisbin, “Spacecraft with interstellar medium momentum exchange reactions: the potential and limitations of propellantless interstellar travel”, JBIS, 72, pp.116-124, 2019.

1. R.L. Forward, “Roundtrip Interstellar Travel Using Laser-Pushed Lightsails”, J. Spacecraft and Rockets, 21, pp.187-195, 1984.

2. P. Lubin, “A Roadmap to Interstellar Flight”, JBIS, 69, pp.40-72, 2016.

3. N. Perakis, L.E. Schrenk, J. Gutsmiedl, A. Kroop, M.J. Losekamm, “Project Dragonfly: A feasibility study of interstellar travel using laser-powered light sail propulsion”, Acta Astronautica, 129, pp.316-324, 2016.

4. R. Heller, and M. Hippke, “Deceleration of High-velocity Interstellar Photon Sails into Bound Orbits at α Centauri”, Astrophysical Journal Letters, 835, pp.L32, 2017.

5. D. Spieth, and R.M. Zubrin, “Ultra-Thin Solar Sails for Interstellar Travel–Phase I Final Report”, NASA Institute for Advanced Concepts, Pioneer Astronautics Inc, 1999.

6. D.G. Andrews, and R.M. Zubrin, “Magnetic sails and interstellar travel”, JBIS, 43, pp.265-272, 1990.

7. G.A. Landis, “Interstellar flight by particle beam”, in AIP Conference Proceedings, vol. 552, pp.393-396, 2001.

8. D.G. Andrews, “Interstellar Transportation using Today’s Physics”, Conference proceedings, American Institute of Aeronautics and Astronautics, 4691, 2003.

9. P. Janhunen, “Electric sail for spacecraft propulsion”, J. of Propulsion and Power, 20, pp.763-764, 2004.

10. R. Zubrin, “Dipole Drive for Space Propulsion”, JBIS, 70, pp.442-448, 2019.

11. R. Zubrin, “The Dipole Drive: A New Concept in Space Propulsion”, 70th International Astronautical Congress (IAC), Washington DC, October 2019.

12. Stilfehler, “technique of 4 strand braiding”, Wikimedia Commons file (licensed for sharing and adaptation), https://commons.wikimedia.org/wiki/File:4_Strand_Braiding.png, last accessed on 18 March 2019. Cropped and edited for 3-d effect.

13. I.A. Crawford, “Project Icarus: A review of local interstellar medium properties of relevance for space missions to the nearest stars”, Acta Astronautica, 68, pp.691-699, 2011.

14. A. Kivilaan, and R.S. Bandurski, “The one hundred-year period for Dr. Beal’s seed viability experiment”, American Journal of Botany, 68, pp.1290-1292, 1981.

15. R. Johnston, “World’s slowest-moving drop caught on camera at last”, Nature News, 18 July 2013.

16. C. Cockell, “The 500-year microbiology experiment”, Microbiology Today, 95, pp.95-96, May 2014.


In Search of a Wormhole

A star called S2 is intriguingly placed, orbiting around the supermassive black hole thought to be at Sgr A*, the bright, compact radio source at the center of the Milky Way. S2 has an orbital period of a little over 16 years and a semi-major axis in the neighborhood of 970 AU. Its elliptical orbit takes it no closer than 120 AU, but the star is close enough to Sgr A* that continued observations may tell us whether or not a black hole is really there. A new paper in Physical Review D now takes us one step further: Is it possible that the center of our galaxy contains a wormhole?

By now the idea of a wormhole that connects different spacetimes has passed into common parlance, thanks to science fiction stories and films like Interstellar. We have no evidence that a wormhole exists at galactic center at all, much less one that might be traversable, though the idea that it might be possible to pass between spacetimes using one of these is too tempting to ignore, at least on a theoretical level. At the University at Buffalo, Dejan Stojkovic, working with De-Chang Dai (Yangzhou University, China and Case Western Reserve University), thinks the star S2’s behavior may offer a way to look for wormholes.

Image: An artist’s concept illustrates a supermassive black hole. A new theoretical study outlines a method that could be used to search for wormholes (a speculative phenomenon) in the background of supermassive black holes. Credit: NASA/JPL-Caltech.

Note that the authors are not saying they find such an object in the existing datasets on S2 (the object has only been monitored since 1995 at UCLA and at the Max Planck Institute for Extraterrestrial Physics). Rather, they’re arguing for using the behavior of objects near black holes, where extreme astrophysical conditions exist, to see whether they exhibit unusual behavior that could be the result of a wormhole associated with the black hole. So this is a methodological paper, one that lays out a proposed course of observation.

You may remember that a 1995 paper from John Cramer, Robert Forward, Gregory Benford and other authors including Geoff Landis (see below) went to work on this question, though not using a star near the Milky Way’s center (see How to Find a Wormhole, a Centauri Dreams article from the same year). Cramer et al. argued for looking for an astrophysical signal of negative mass, which would be needed to keep a wormhole mouth open. Let me quote from something Geoff Landis told me about the paper:

“If the wormhole is exactly between you and another star, it would defocus the light, so it’s dim and splays out in all directions. But when the wormhole moves and it’s nearer but not in front of the star, then you would see a spike of light. So if the wormhole moves between you and another star and then moves away, you would see two spikes of light with a dip in the middle.”

That’s an astrophysical signature interesting enough to be noted. And from the paper itself:

“…the negative gravitational lensing presented here, if observed, would provide distinctive and unambiguous evidence for the existence of a foreground object of negative mass.”

Back to Stojkovic, whose new paper notes a property we would expect to exist in wormholes. Let me quote his paper on this:

The purpose of this work…is to establish a clear link between wormholes and astrophysical observations. By definition, a wormhole smoothly connects two different spacetimes. If the wormhole is traversable, then the flux (scalar, electromagnetic, or gravitational) can be conserved only in the totality of these two spaces, not individually in each separate space.

Interesting point. An example: A physical electric charge on one side of the wormhole would manifest itself on the other side. There, where there is no electric charge, an observer would notice the electric flux coming from the wormhole and assume that the wormhole is charged. There is, in fact, no real charge at the wormhole, but the flux is strictly conserved only if the entirety of both spaces connected by the wormhole is considered. And as the paper goes on to state, a gravitational source like a star orbiting the mouth of the wormhole should be observed as gravitational perturbations on the other side.

The message is clear. Again, from the Stojkovic paper:

As a direct consequence, trajectories of objects propagating in [the] vicinity of a wormhole must be affected by the distribution of masses/charges in the space on the other side of the wormhole. Since wormholes in nature are expected to exist only in extreme conditions, e.g. around black holes, the most promising systems to look for them are either large black holes in the centers of galaxies, or binary black hole systems.

By now it should be clear why S2 is an interesting star for this purpose. Its proper motion orbiting what is believed to be a supermassive black hole at Sgr A* could theoretically tell us whether the black hole harbors a wormhole. The extreme gravitational conditions make this the best place to look for a wormhole, and minute deviations in the expected orbit of S2 could indicate one’s presence. That means we need to assemble a lot more data about S2.

Stojkovic doesn’t expect to find a lot of traffic coming through any wormhole we do find:

“Even if a wormhole is traversable, people and spaceships most likely aren’t going to be passing through. Realistically, you would need a source of negative energy to keep the wormhole open, and we don’t know how to do that. To create a huge wormhole that’s stable, you need some magic.”

In the absence of magic, we can still put observational astronomy to work. We may be a decade or two away from being able to track S2 this closely, and in any case will need a lot more data to make the call, but the scientist cautions that even deviations in its expected orbit won’t be iron-clad proof of a wormhole. They’ll simply make it a possibility, leading us to ask what other causes on our own side of the presumed wormhole could be creating the perturbations. And any wormhole we do come to believe is there would not necessarily be traversable, but if the effects of gravity from a different spacetime are in play, that’s certainly something we’ll want to study as we untangle the complicated situation at galactic center.

The paper is Dai and Stojkovic, “Observing a Wormhole,” Phys. Rev. D 100, 083513 (10 October 2019) (abstract / preprint). The Cramer et al. paper is “Natural Wormholes as Gravitational Lenses,” Physical Review D (March 15, 1995), pp. 3124-27 (abstract).


Artificial Singularity Power: A Basis for Developing and Detecting Advanced Spacefaring Civilizations

Could an advanced civilization create artificial black holes? If so, the possibilities for power generation and interstellar flight would be profound. Imagine cold worlds rendered habitable by tiny artificial ‘suns.’ Robert Zubrin, who has become a regular contributor to Centauri Dreams, considers the consequences of black hole engines in the essay below. Dr. Zubrin is an aerospace engineer and founder of the Mars Society, as well as being the president of Pioneer Astronautics. His latest book, The Case for Space: How the Revolution in Spaceflight Opens Up a Future of Limitless Possibility, was recently published by Prometheus Books. As Zubrin notes, generating energy through artificial singularities would leave a potential SETI signal whose detectability is analyzed here, a signature unlike any we’ve examined before.

by Robert Zubrin

Abstract

Artificial Singularity Power (ASP) engines generate energy through the evaporation of modest sized (10^8-10^11 kg) black holes created through artificial means. This paper discusses the design and potential advantages of such systems for powering large space colonies, terraforming planets, and propelling starships. The possibility of detecting advanced extraterrestrial civilizations via the optical signature of ASP systems is examined. Speculation as to possible cosmological consequences of widespread employment of ASP engines is considered.

Introduction

According to a theory advanced by Stephen Hawking [1] in 1974, black holes evaporate on a timescale given by:

tev = (5120π) tP (m/mP)^3    (1)

where tev is the time it takes for the black hole to evaporate, tP is the Planck time (5.39e-44 s), m is the mass of the black hole in kilograms, and mP is the Planck mass (2.18e-8 kg) [2].

Hawking considered the case of black holes formed by the collapse of stars, which need to be at least ~3 solar masses to occur naturally. For such a black hole, equation 1 yields an evaporation time of 5e68 years, far longer than the expected life of the universe. In fact, evaporation would never happen, because the black hole would gain energy, and thus mass, by drawing in cosmic background radiation at a rate faster than its own insignificant rate of radiated power.
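Equation (1) is easy to evaluate (a quick check; a solar mass of 2e30 kg is assumed):

import math

t_P   = 5.39e-44   # Planck time, s
m_P   = 2.18e-8    # Planck mass, kg
year  = 3.156e7    # s
M_sun = 2.0e30     # kg (assumed)

def t_evap(m_kg):
    """Hawking evaporation time, equation (1)."""
    return 5120 * math.pi * t_P * (m_kg / m_P)**3

print(f"3 solar masses: {t_evap(3 * M_sun) / year:.1e} years")
# ~5.7e68 years, of the order of the 5e68 quoted above.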

However it can be seen from examining equation (1) that the evaporation time goes as the cube of the singularity mass, which means that the emitted power (= mc^2/tev) goes inversely with the square of the mass. Thus if the singularity could be made small enough, very large amounts of power could theoretically be produced.

This possibility was quickly grasped by science fiction writers, and such propulsion systems were included by Arthur C. Clarke in his 1976 novel Imperial Earth [3] and Charles Sheffield in his 1978 short story “Killing Vector.” [4]

Such systems did not receive serious technical analysis, however, until 2009, when they were examined by Louis Crane and Shawn Westmoreland, both then of Kansas State University, in their seminal paper “Are Black Hole Starships Possible?” [5]

In their paper, Crane and Westmoreland focused on the idea of using small artificial black holes powerful enough to drive a starship to interstellar-class velocities yet long-lived enough to last the voyage. They identified a “sweet spot” for such “Black Hole Starships” (BHS) with masses on the order of 2×10^9 kg, which they said would have lifetimes on the order of 130 years, yet yield power of about 13,700 TW. They proposed to use some kind of parabolic reflector to reflect this radiation, resulting in a photon rocket. The ideal thrust T of a rocket with jet power P and exhaust velocity v is given by:

T = 2P/v (2)

So with P = 13,700 TW and v = c = 3e8 m/s, the thrust would be 8.6e7 N. Assuming that the payload spacecraft had a mass of 1e9 kg, this would accelerate the ship at a rate of a = 8.6e7/3e9 = 2.8e-2 m/s^2. Accelerating at this rate, such a ship would reach about 30% the speed of light in 100 years.

There are a number of problems with this scheme. In the first place, the claimed acceleration is on the low side. Furthermore, their math appears to be incorrect. A 2e9 kg singularity would only generate about 270 TW, or 1/50th as much as their estimate, reducing thrust by a factor of 50 (although it would last about 20,000 years). These problems could be readily remedied, however, by using a smaller singularity and a smaller ship. For example a singularity with a mass of 2e8 kg would produce a power of 26,900 TW. Assuming a ship with a mass of 1e8 kg, an acceleration of 0.6 m/s^2 could be achieved, allowing 60% the speed of light to be achieved in 10 years. The singularity would only have a lifetime of 21 years. However it could be maintained by being constantly fed mass at a rate of about 0.33 kg/s.
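These corrected figures follow from equation (1), taking the emitted power as mc^2/tev as in the text and the feed rate needed to hold the mass constant as P/c^2 (an illustrative check):

import math

c, t_P, m_P, year = 3.0e8, 5.39e-44, 2.18e-8, 3.156e7

def t_evap(m):
    return 5120 * math.pi * t_P * (m / m_P)**3   # equation (1), seconds

def power(m):
    return m * c**2 / t_evap(m)                  # emitted power, W

for m in (2e9, 2e8, 1e8):
    print(f"m = {m:.0e} kg:  P = {power(m) / 1e12:,.0f} TW,  "
          f"lifetime = {t_evap(m) / year:,.1f} yr,  "
          f"feed rate = {power(m) / c**2:.3f} kg/s")
# ~270 TW and ~21,000 yr at 2e9 kg; ~27,000 TW, ~21 yr and ~0.3 kg/s at 2e8 kg.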

A bigger problem is that a 1e9 kg singularity would produce radiation with a characteristic temperature of 9 GeV, increasing in inverse proportion to the singularity mass. So for example a 1e8 kg singularity would produce gamma rays with energies of 90 GeV (i.e., for temperature T in electron volts, T = 9e18/m). There is no known way to reflect such high energy photons. So at this point the parabolic reflector required for the black hole starship photon engine is science fiction.

Yet another problem is the manufacture of the black hole. Crane and Westmoreland suggest that it could be done using converging gamma ray lasers. To make a 1e9 kg unit, they suggested a “high-efficiency square solar panel a few hundred km on each side, in a circular orbit about the sun at a distance of 1,000,000 km” to provide the necessary energy. A rough calculation indicates the implied power of this system from this specification is on the order of 10^6 TW, or about 100,000 times the current rate used by human civilization. As an alternative construction technique, they also suggest accelerating large masses to relativistic velocities and then colliding them. The density of these masses would be multiplied both by relativistic mass increase and length contraction. However the energy required to do this would still equal the combined masses times the speed of light squared. While this technique would eliminate the need for giant gamma ray lasers, the same huge power requirement would still present itself.

In what follows, we will examine possible solutions for the above identified problems.

Advanced Singularity Engines

In MKS units, equation (1) can be rewritten as:

tev = 8.37e-17 m^3    (3)

This implies that the power, P, in Watts, emitted by the singularity is given by:

P = 1.08e33/m^2    (4)

The results of these two equations are shown in Fig. 1.

Fig 1. Power and Lifetime of ASP Engines as a Function of Singularity Mass

No credible concept is available to enable a lightweight parabolic reflector of the sort needed to enable the Black Hole Starship. But we can propose a powerful and potentially very useful system by dropping the requirement for starship-relevant thrust to weight ratios. Instead let us consider the use of ASP engines to create an artificial sun.

Consider a 1e8 kg ASP engine. As shown in Fig 1, it would produce a power of 1.08e8 Gigawatts. Such an engine, if left alone, would only have a lifetime of 2.65 years, but it could be maintained by a constant feed of about 3 kg/s of mass. We can’t reflect its radiation, but we can absorb it with a sufficiently thick material screen. So let’s surround it with a spherical shell of graphite with a radius of 40 km and a thickness of 1.5 m. At a distance of 40 km, the intensity of the radiation will be about 5 MW/m^2, which the graphite sphere can radiate into space with a black body temperature of 3000 K. This is about the same temperature as the surface of a type M red dwarf star. We estimate that graphite has an attenuation length for high energy gamma rays of about 15 cm, so that 1.5 m of graphite (equivalent shielding to 5 m of water or half the Earth’s atmosphere) will attenuate the gamma radiation by ten factors of e, or about 20,000. The light will then radiate out further, dropping in intensity with the square of the distance, reaching typical Earth sunlight intensities of 1 kW/m^2 at a distance of about 3000 km from the center.
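The shell figures follow from simple flux and blackbody relations (an illustrative check; graphite emissivity is taken as 1):

import math

sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
P = 1.08e17       # ASP engine power for a 1e8 kg singularity, W

r_shell = 40e3                                     # graphite shell radius, m
flux = P / (4 * math.pi * r_shell**2)              # W/m^2 at the shell
T = (flux / sigma)**0.25                           # radiating temperature, K
r_sunlike = math.sqrt(P / (4 * math.pi * 1000.0))  # radius of ~1 kW/m^2 flux

print(f"flux at shell: {flux / 1e6:.1f} MW/m^2")                # ~5 MW/m^2
print(f"shell temperature: {T:.0f} K")                          # ~3000 K
print(f"1 kW/m^2 'sunlight' radius: {r_sunlike / 1e3:.0f} km")  # ~3000 km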

The mass of the artificial star will be about 10^14 kg (that is the mass of the graphite shell, compared to which the singularity is insignificant). As large as this is, however, it is still tiny compared to that of a planet, or even the Earth’s Moon (which is 7.35e22 kg). So, no planet would orbit such a little star. Instead, if we wanted to terraform a cold world, we would put the mini-star in orbit around it.

The preferred orbital altitude of 3000 km for the ASP mini-star in the above cited example was dictated by the power level of the singularity. Such a unit would be sufficient to provide all the light and heat necessary to terraform an otherwise sunless planet the size of Mars. Lower power units incorporating larger singularities but much smaller graphite shells are also feasible. (Shell mass is proportional to system power.) These are illustrated in Table 1.

The high-powered units listed in Table 1 with singularity masses in the 1e8 to 1e9 kg range are suitable to serve as mini-suns orbiting planets, moons or asteroids, with the characteristic radius of such terraforming candidates being about the same as the indicated orbital altitude. The larger units, with lower power and singularity masses above 1e10 kg are more appropriate for space colonies.

Consider an ASP mini-sun with a singularity mass of 3.16e10 kg positioned in the center of a cylinder with a radius of 10 km and a length of 20 km. The cylinder is rotating at a rate of 0.0316 radians per second, which provides it with 1 g of artificial gravity. Let’s say the cylinder is made of material with an areal density of 1000 kg per square meter. In this case it will experience an outward pressure of 10^4 pascals, or about 1.47 psi, due to outward acceleration. If the cylinder were made of solid Kevlar (density = 1000 kg/m^3) it would be about 1 m thick. So the hoop stress on it would be 1.47 x (10,000/1) = 14,700 psi, which is less than a tenth the yield stress of Kevlar. Or put another way, 10 cm of Kevlar would do the job of carrying the hoop stress, and the rest of the mass load could be anything, including habitations. If the whole interior of the cylinder were covered with photovoltaic panels with an efficiency of 10 percent, 100 GWe of power would be available for use by the inhabitants of the space colony, which would have an area of 1,256 square kilometers. The mini-sun powering it would have a lifetime of 84 million years, without refueling. Much larger space colonies (i.e., with radii over ~100 km) would not be possible however, unless stronger materials become available, as the hoop stress would become too great.
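The colony figures follow from elementary rotational mechanics (an illustrative check, using the numbers in the text):

import math

omega = 0.0316           # spin rate, rad/s
r     = 10e3             # cylinder radius, m
L     = 20e3             # cylinder length, m
sigma_areal = 1000.0     # wall areal density, kg/m^2
P_asp = 1.08e33 / (3.16e10)**2   # eq. (4): power of a 3.16e10 kg singularity, W

a = omega**2 * r                  # centripetal acceleration, m/s^2
pressure = sigma_areal * a        # outward loading on the wall, Pa
hoop_stress = pressure * r / 1.0  # thin-shell hoop stress for a 1 m thick wall, Pa
area = 2 * math.pi * r * L        # interior surface area, m^2

print(f"spin gravity: {a:.1f} m/s^2")                            # ~10 m/s^2, about 1 g
print(f"wall loading: {pressure:.0f} Pa ({pressure / 6895:.2f} psi)")
print(f"hoop stress (1 m wall): {hoop_stress / 6895:,.0f} psi")  # ~14,500 psi
print(f"interior area: {area / 1e6:,.0f} km^2")                  # ~1,256 km^2
print(f"electric power at 10% efficiency: {0.10 * P_asp / 1e9:.0f} GWe")  # ~100 GWe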

Both of these approaches seem potentially viable in principle. However we note that the space colony approach cited requires a singularity some 300 times more massive than the approach of putting a 1e8 kg mini-sun in orbit around a planet, which yields 4π(3000)^2 ≈ 100 million square kilometers of habitable area, or about 80,000 times as much land. Furthermore, the planet comes with vast supplies of matter of every type, whereas the space colony needs to import everything.

Building Singularities

Reducing the size of the required singularity by a factor of 10 from 1e9 to 1e8 kg improves feasibility of the ASP concept somewhat, but we need to do much better. Fortunately there is a way to do so.

If we examine equation (3), we can see that the expected lifetime of a 1000 kg singularity would be about 8.37 x 10^-6 s. In this amount of time, light can travel about 250 m, and an object traveling at half the speed of light 125 m. If a sphere with a radius of 125 m were filled with steel it would contain about 8 x 10^10 kg, or about 100 times what we need for our 1e8 kg ASP singularity. In fact, it turns out that if the initial singularity is as small as about 200 kg, and fired into a mass of steel, it will gain mass much faster than it loses it, and eventually grow into a singularity as massive as the steel provided.

By using this technique we can reduce the amount of energy required to form the required singularity by about 7 orders of magnitude compared to Crane and Westmoreland’s estimate. So instead of needing a 10^6 TW system, a 100 GW gamma ray laser array might do the trick. Alternatively, accelerating two 200 kg masses to near light speed would require 3.6e7 TJ, or 10,000 TW-hours of energy. This is about the energy humanity currently uses in 20 days. We still don’t know how to do it, but reducing the scale of the required operation by a factor of 10 million certainly helps.
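The energy figure is just the rest-mass energy of the two projectiles (a quick check; a present-day world consumption of roughly 18 TW is assumed for the comparison):

c = 3.0e8
E = 2 * 200 * c**2    # rest-mass energy of two 200 kg masses, J

TWh = 3.6e15          # joules per terawatt-hour
world_power = 18e12   # assumed current world consumption, W
day = 86400.0

print(f"energy: {E:.1e} J = {E / TWh:,.0f} TWh")                   # ~10,000 TWh
print(f"days of world energy use: {E / (world_power * day):.0f}")  # ~20 days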

ASP Starships

We now return to the subject of ASP starships. In the absence of a gamma ray reflector, we are left with using solid material to absorb the gamma rays and other energetic particles and re-radiate their energy as heat. (Using magnetic fields to try to contain and reflect GeV-class charged particles that form a portion of the Hawking radiation won’t work because the required fields would be too strong and too extensive, and the magnets to generate them would be exposed to massive heating by gamma radiation.)

Fortunately, we don’t need to absorb all the radiation in the absorber/reflector, we only need to absorb enough to get it hot. So let’s say that we position a graphite hemispherical screen to one side of a 1e8 kg ASP singularity, but instead of making it 1.5 m thick, we make it 0.75 mm thick. At that thickness it will only absorb about 5 percent of the radiation that hits it; the rest will pass right through. So we have 5e6 GW of useful energy, which we want to reduce to 5 MW/m^2 in order for the graphite to be kept at ~3000 K, where it can survive. The radius will be about 9 km, and the mass of the graphite hemisphere will be about 6e8 kg. A thin solar sail like parabolic reflector with an area 50 times as great as the carbon hemisphere but a thickness 1/500th as great (i.e. 1.5 microns) would be positioned in front of the hemisphere, adding another 0.6e8 kg to the system, which then plus the singularity and the 1e8 kg ship might be 7.6e8 kg in all. Thrust will be 0.67e8 N, so the ship would accelerate at a rate of 0.67/7.6 = 0.09 m/s^2, allowing it to reach 10 percent the speed of light in about 11 years.

Going much faster would become increasingly difficult, because using only 5% of the energy of the singularity mass would give the system an effective exhaust velocity of about 0.22 c. Higher efficiencies might be possible if a significant fraction of the Hawking radiation came off as charged particles, allowing a thin thermal screen to capture a larger fraction of the total available energy. In this case, effective exhaust velocity would go as c times the square root of the achieved energy efficiency. But sticking with our 5% efficiency, if we wanted to reach 0.22 c we could, but we would require a mass ratio of 2.7, meaning we would need about 1.5e9 kg of propellant to feed into the ASP engine. The mass of this propellant would decrease our average acceleration by about a factor of two over the burn, so we would take about 40 years to reach 20 percent the speed of light.

Detecting ET

The above analysis suggests that if ASP technology is possible, using it to terraform cold planets with orbital mini-suns will be the preferred approach. Orbiting (possibly otherwise isolated) cold worlds at distances of thousands of kilometers, and possessing 3000 K type M red dwarf star spectra, potentially with gamma radiation in excess of normal stellar expectations, such objects could well be detectable.

Indeed, one of the primary reasons to speculate on the design of ASP engines right now is to try to identify their likely signature. We are far away from being able to build such things. But the human race is only a few hundred thousand years old, and human civilization is just a few thousand years old. In 1905 the revolutionary HMS Dreadnought was launched, displacing 18,000 tons. Today ships 5 times that size are common. So it is hardly unthinkable that in a century or two we will have spacecraft in the million ton (10^9 kg) class. Advanced extraterrestrial civilizations may have reached our current technological level millions or even billions of years ago. So they have had plenty of time to develop every conceivable technology. If we can think it, they can build it, and if doing so would offer them major advantages, they probably have. Thus, looking for large energetic artifacts such as Dyson Spheres [6], starships [7,8], or terraformed planets [9] is potentially a promising way to carry out the SETI search, as unlike radio SETI, it requires no mutual understanding of communication conventions. Given the capabilities the ASP technology would offer any species seeking to expand its prospects by illuminating and terraforming numerous new worlds, such systems may actually be quite common.

ASP starships are also feasible and might be detectable as well. However the durations of starship flights would be measured in decades or centuries, while terraformed worlds could be perpetual. Furthermore, once settled, trade between solar systems could much more readily be accomplished by the exchange of intellectual property via radio than by physical transport. As a result, the amount of flight traffic will be limited. In addition, there could be opportunities for employment of many ASP terraforming engines within a single solar system. For example, within our own solar system there are seven worlds of planetary size (Mars, Ceres, Ganymede, Callisto, Titan, Triton, and Pluto) whose terraforming could be enhanced or enabled by ASP systems, not to mention hundreds of smaller but still considerable moons and asteroids, and potentially thousands of artificial space colonies as well. Therefore the number of ASP terraforming engines in operation in the universe at any one time most likely far exceeds those being used for starship propulsion. It would therefore appear advantageous to focus the ASP SETI search effort on such systems.

Proxima Centauri is a type M red dwarf with a surface temperature of 3000 K. It therefore has a black body spectrum similar to that of the 3000 K graphite shell of our proposed ASP mini-sun discussed above. The difference, however, is that it has about 1 million times the power, so an ASP engine placed 4.2 light years (Proxima Centauri’s distance) from Earth would have the same visual brightness as a star like Proxima Centauri positioned 4,200 light years away. Put another way, Proxima Centauri has a visual magnitude of 11. It takes 5 magnitudes to equal a 100-fold drop in brightness, so our ASP engine would have a visual magnitude of 26 at 4.2 light years, and magnitude 31 at 42 light years. The limit of optical detection of the Hubble Space Telescope is magnitude 31. So HST would be able to see our proposed ASP engine out to a distance of about 50 light years, within which there are some 1,500 stellar systems.
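
The magnitude bookkeeping is easy to reproduce. The sketch below assumes only the numbers quoted above (visual magnitude 11 at 4.2 light years, a power ratio of 1e-6, and an HST limit of magnitude 31) and returns a detection horizon in the same ~40-50 light year range.

```python
import math

# Apparent-magnitude bookkeeping for a 3000 K ASP mini-sun with ~1e-6 the power of
# Proxima Centauri (figures as quoted in the text; illustrative only).
m_proxima = 11.0            # visual magnitude of Proxima Centauri
d_proxima_ly = 4.2          # its distance in light years
power_ratio = 1e-6          # ASP engine power / Proxima Centauri power

def asp_magnitude(distance_ly):
    """Apparent visual magnitude of the ASP engine at the given distance."""
    dimming = -2.5 * math.log10(power_ratio)                   # +15 magnitudes
    distance_term = 5 * math.log10(distance_ly / d_proxima_ly)
    return m_proxima + dimming + distance_term

print(asp_magnitude(4.2), asp_magnitude(42))                   # ~26 and ~31

hst_limit = 31.0
d_max = d_proxima_ly * 10 ** ((hst_limit - asp_magnitude(d_proxima_ly)) / 5)
print(f"HST detection horizon ~ {d_max:.0f} light years")
```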

Consequently ASP engines may already have been imaged by Hubble, appearing on photographs as unremarkable dim objects assumed to be far away. These should be subjected to study to see if any of them exhibit parallax. If they do, this would show that they are actually nearby objects of much lower power than stars. Further evidence of artificial origin could be provided if they were found to exhibit a periodic Doppler shift, as would occur if they were in orbit around a planetary body. An anomalous gamma ray signature could be present as well.

I suggest we have a look.

Cosmological Implications

One of the great mysteries of science is why the laws of the universe are so friendly to life. Indeed, it can be readily shown that if any one of most of the twenty or so apparently arbitrary fundamental constants of nature differed from their actual value by even a small amount, life would be impossible [10]. Some have attempted to answer this conundrum by claiming that there is nothing to be explained because there are an infinite number of universes; we just happen to live in the odd one where life is possible. This multiverse-theory answer is absurd, as it could just as well be used to avoid explaining anything. For example, take the questions: why did the Titanic sink, why did it snow heavily last winter, why did the sun rise this morning, why did the moon form, why did the chicken cross the road? These can all also be answered by saying “no reason; in other universes they didn’t.” The Anthropic Principle reply, to the effect of “clearly they had to, or you wouldn’t be asking the question,” is equally useless.

Clearly a better explanation is required. One attempt at such an actual causal theory was put forth circa 1992 by physicist Lee Smolin [11], who proposed that daughter universes are formed by black holes created within mother universes. This has a ring of truth to it, because a universe, like a black hole, is something that you can’t leave. Well, says Smolin, in that case, since black holes are formed from collapsed stars, the universes that have the most stars will have the most progeny. So to have progeny a universe must have physical laws that allow for the creation of stars. This would narrow the permissible range of the fundamental constants by quite a bit. Furthermore, let’s say that daughter universes have physical laws that are close to, but slightly varied from, those of their mother universes. In that case, a kind of statistical natural selection would occur, overwhelmingly favoring the prevalence of star-friendly physical laws as one generation of universes follows another.

But the laws of the universe don’t merely favor stars, they favor life, which certainly requires stars, but also planets, water, organic and redox chemistry, and a whole lot more. Smolin’s theory gets us physical laws friendly to stars. How do we get to life?

Reviewing an early draft of Smolin’s book in 1994, Crane offered the suggestion [12] that if advanced civilizations make black holes, they also make universes, and therefore universes that create advanced civilizations would have much more progeny than those that merely make stars. Thus the black hole origin theory would explain why the laws of the universe are not only friendly to life, but to the development of intelligence and advanced technology as well. Universes create life because life creates universes. This result is consistent with complexity theory, which holds that if A is necessary to B, then B has a role in causing A.

These are very interesting speculations. So let us ask, what would we see if our universe was created as a Smolin black hole, and how might we differentiate between a natural star collapse or ASP engine origin? From the above discussion, it should be clear that if someone created an ASP engine, it would be advantageous for them to initially create a small singularity, then grow it to its design size by adding mass at a faster rate than it evaporates, and then, once it reaches its design size, maintain it by continuing to add mass at a constant rate equal to the evaporation rate. In contrast, if it were formed via the natural collapse of a star it would start out with a given amount of mass that would remain fixed thereafter.

So let’s say our universe is, as Smolin says, a black hole. Available astronomical observations show that it is expanding, at a velocity that appears to be close to the speed of light. Certainly the observable universe is expanding at the speed of light.

Now a black hole has an escape velocity equal to the speed of light. So for such a universe

c²/2 = GM/R (5)

Where G is the universal gravitational constant, c is the speed of light in vacuum, M is the mass of the universe, and R is the radius of the universe.

If we assume that G and c are constant, R is expanding at the speed of light, and τ is the age of the universe, then:

R = cτ (6)

Combining (5) and (6), we have:

M/τ = (Rc²/2G)(c/R) = c³/2G (7)

This implies that the mass of such a universe would be growing at a constant rate. Contrary to the classic Hoyle continuous creation theory, however, which postulated that mass creation would lead to a steady state universe featuring constant density for all eternity, this universe would have a big bang event with density decreasing afterwards inversely with the square of time (since density scales as M/R³, with M growing as τ while R³ grows as τ³).

Now the Planck mass, mₚ, is given by:

mₚ = (hc/2πG)^½ (8)

And the Planck time, tₚ, is given by:

tₚ = (hG/2πc⁵)^½ (9)

If we divide equation (8) by equation (9) we find:

mₚ/tₚ = c³/G (10)

If we compare equation (10) to equation (7) we see that:

M/τ = ½(mₚ/tₚ) (11)

So the rate at which the mass of such a universe would increase equals exactly ½ Planck mass per Planck time.

Comparison with Observational Astronomy

In MKS units, G = 6.674e-11 and c = 3e+8, so:

M/τ = c³/2G = 2.02277e+35 kg/s. (12)

For comparison, the mass of the Sun is 1.989e+30 kg. So this is saying that the mass of the universe would be increasing at a rate of about 100,000 Suns per second.

Our universe is believed to be about 13 billion years (4e+17 seconds) old. The Milky Way galaxy has a mass of about 1 trillion Suns. So this is saying that the mass of the universe should be equivalent to about 40 billion Milky Way galaxies. Astronomers estimate that there are 100 to 200 billion galaxies, but most are smaller than the Milky Way. So this number is in general agreement with what we see.
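
These figures follow directly from equation (12); here is a short numerical check, using the same rounded constants as the text (the 1-trillion-Sun Milky Way mass is the assumption quoted above).

```python
# Numerical check of the figures above, using the rounded constants from the text.
G = 6.674e-11                   # m^3 kg^-1 s^-2
c = 3.0e8                       # m/s
M_sun = 1.989e30                # kg
M_milky_way = 1e12 * M_sun      # kg, ~1 trillion Suns (assumption quoted above)
age = 4e17                      # s, ~13 billion years

rate = c**3 / (2 * G)           # kg/s, equation (12)
total = rate * age              # kg, equation (13)

print(f"M/tau ~ {rate:.3e} kg/s ~ {rate / M_sun:,.0f} Suns per second")
print(f"M ~ {total:.1e} kg ~ {total / M_milky_way / 1e9:.0f} billion Milky Way galaxies")
```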

According to this estimate, the total mass of the universe M, is given by:

M = (2e+35)(4e+17) = 8e+52 kg. (13)

This number is well known. It is the critical mass required to make our universe “flat.” It should be clear, however, that when the universe was half as old, with half its current diameter, this number would have needed to be half as great. Therefore, if the criterion is that such a universe’s mass always be critical for flatness, and not just critical right now, then its mass must be increasing linearly with time.

These are very curious results. Black holes, the expanding universe, and the constancy of the speed of light are results of relativity theory. Planck masses and Planck times relate to quantum mechanics. Observational astronomy provides data from telescopes. It is striking that these three separate approaches to knowledge should provide convergent results.

This analysis does require that mass be continually added to the universe at a constant rate, exactly as would occur in the case of an ASP engine during steady-state operation. It differs however in that in an ASP engine, the total mass only increases during the singularity’s buildup period. During steady state operation mass addition would be balanced by mass evaporation. How these processes would appear to the inhabitants of an ASP universe is unclear. Also unclear is how the inhabitants of any Smolinian black hole universe could perceive it as rapidly expanding. Perhaps the distance, mass, time, and other metrics inside a black hole universe could be very different from those of its parent universe, allowing it to appear vast and expanding to its inhabitants while looking small and finite to outside observers. One possibility is that space inside a black hole is transformed, in a three dimensional manner analogous to a ω = 1/z transformation in the complex plane, so that the point at the center becomes a sphere at infinity. In this case mass coming into the singularity universe from its perimeter would appear to the singularity’s inhabitants as matter/energy radiating outward from its center.

Is there a model that can reconcile all the observations of modern astronomy with those that would be obtained by observers inside either a natural black hole or ASP universe? Speculation on this matter by scientists and science fiction writers with the required physics background would be welcome [13].

Conclusions

We find that ASP engines appear to be theoretically possible, and could offer great benefits to advanced spacefaring civilizations. Particularly interesting is their potential use as artificial suns to enable terraforming of unlimited numbers of cold worlds. ASP engines could also be used to enable interstellar colonization missions. However the number of ASP terraforming engines in operation in the universe at any one time most likely far exceeds those being used for starship propulsion. Such engines would have optical signatures similar to M-dwarfs, but would differ in that they would be much smaller in power than any natural M star, and hence have to be much closer to exhibit the same apparent luminosity. In addition they would move in orbit around a planetary body, thereby displaying a periodic Doppler shift, and could have an anomalous additional gamma ray component to their spectra. An ASP engine of the type discussed would be detectable by the Hubble Space Telescope at distances as much as 50 light years, within which there are approximately 1,500 stellar systems. Their images may therefore already be present in libraries of telescopic images as unremarkable dim objects, whose artificial nature would be indicated if they were found to display parallax. It is therefore recommended that such a study be implemented.

As for cosmological implications, the combination of the attractiveness of ASP engines with Smolinian natural selection theory does provide a potential causal mechanism that could explain the fine tuning of the universe for life. Whether our own universe could have been created in such a manner remains a subject for further investigation.

References

1. Hawking, S. W. (1974). “Black hole explosions?” Nature 248(5443): 30-31. https://ui.adsabs.harvard.edu/abs/1974Natur.248...30H/abstract

2. Hawking Radiation, Wikipedia https://en.wikipedia.org/wiki/Hawking_radiation accessed September 22, 2019.

3. Arthur C. Clarke, Imperial Earth, Harcourt Brace and Jovanovich, New York, 1976.

4. Charles Sheffield, “Killing Vector,” in Galaxy, March 1978.

5. Louis Crane and Shawn Westmoreland, “Are Black Hole Starships Possible?” 2009, 2019. https://arxiv.org/pdf/0908.1803.pdf accessed September 24, 2019.

6. Freeman Dyson, “The Search for Extraterrestrial Technology,” in Selected Papers of Freeman Dyson with Commentary, Providence, American Mathematical Society. Pp. 557-571, 1996.

7. Robert Zubrin, “Detection of Extraterrestrial Civilizations via the Spectral Signature of Advanced Interstellar Spacecraft,” in Progress in the Search for Extraterrestrial Life: Proceedings of the 1993 Bioastronomy Symposium, Santa Cruz, CA, August 16-20 1993.

8. Crane, “Searching for Extraterrestrial Civilizations Using Gamma Ray Telescopes,” available at https://arxiv.org/abs/1902.09985.

9. Robert Zubrin, The Case for Space: How the Revolution in Spaceflight Opens Up a Future of Limitless Possibility, Prometheus Books, Amherst, NY, 2019.

10. Paul Davies, The Accidental Universe, Cambridge University Press, Cambridge, 1982.

11. Lee Smolin, The Life of the Cosmos, Oxford University Press, NY, 1997.

12. Louis Crane, “Possible Implications of the Quantum Theory of Gravity: An Introduction to the Meduso-Anthropic principle,” 1994. https://arxiv.org/PS_cache/hep-th/pdf/9402/9402104v1.pdf

13. I provided a light hearted explanation in my science fiction satire The Holy Land (Polaris Books, 2003) where the advanced extraterrestrial priestess (3rd Class) Aurora mocks the theory of the expanding universe held by the Earthling Hamilton. “Don’t be ridiculous. The universe isn’t expanding. That’s obviously physically impossible. It only appears to be expanding because everything in it is shrinking. What silly ideas you Earthlings have.” In a more serious vein, the late physicist Robert Forward worked out what life might be like on a neutron star in his extraordinary novel Dragon’s Egg (Ballantine Books, 1980.) A similar effort to describe life on the inside of a black hole universe could be well worthwhile. Any takers?

Robert Zubrin
Pioneer Astronautics
11111 W. 8th Ave, unit A
Lakewood, CO 80215
Zubrin@aol.com


Marc Millis: Testing Possible Spacedrives

Marc Millis, former head of NASA’s Breakthrough Propulsion Physics project, recently returned from another trip to Germany, where he worked with Martin Tajmar’s SpaceDrive project at Germany’s Technische Universität Dresden. Recent coverage of the ongoing experimental work into spacedrives in both the popular and scientific press has raised public interest, leading Millis to explain in today’s essay why and how the techniques for studying these matters are improving, and how far we have to go before we have something definitive. Millis is in the midst of developing an interstellar propulsion study from a NASA grant even as he continues to examine advanced propulsion concepts and the methodologies with which to approach them.

by Marc Millis

Two recent articles, one in Scientific American [1] and the other in Acta Astronautica [2], prompted this update about the experimental tests of possible spacedrives. In short, the experimental methods are improving, but definitive results are not yet in hand. While this update is mostly on the “Mach Effect Thruster,” it also touches on the infamous “EmDrive,” as well as a refresher on the general quest for spacedrive physics.

First, what is a spacedrive? Presently, a spacedrive is still a goal rather than a proven device. The ambition is to find a fundamentally different way to propel spacecraft other than rockets or sails. Rockets are limited by having to carry their entire journey’s reaction mass with them (propellant). Sails are limited by one-directional photons (or particles) from an external source. Imagine, instead, if there were some way for a spacecraft to interact with its surrounding spacetime to move in any direction and be limited only by the amount of available energy. That ambition is the essence of a spacedrive.

That detail – of interacting with spacetime to induce motion – is a matter of undiscovered physics. That makes it harder to grasp, harder to explain, and harder to solve. It’s easier to grasp engineering challenges that are based on known physics, since there are already operating principles to cite. With spacedrives, the operating principles are works-in-progress – more akin to lines of inquiry than having complete packages ready for scrutiny. Though theories for faster-than-light warp drives do exist (one type of spacedrive), the physics of the required negative energy is still debated – which itself is a prerequisite to devising how to engineer a warp drive. In addition, though there are experimental replications of thrusts from possible spacedrives, separating experimental artifacts from actual thrusts is also, still, a work in progress – and the main point of this update.

Before getting to the latest experiments, here is a bit more background behind the challenges of a spacedrive. At first blush, such wishful thinking might seem to violate conservation of momentum – a crucial detail. Conservation of momentum is easy to grasp for a rocket; the rearward-blasted propellant matches the forward momentum of the spacecraft. The situation is less obvious with spacedrives. There are at least 3 approaches to address conservation of momentum: 1) using a reaction mass indigenous to space or spacetime, 2) negative inertia, or 3) exploring the physics about inertial reference frames – the backdrop upon which the conservation laws are defined.

The majority of this update is related to the 3rd option – inertial frames. For new readers, a more complete introduction to various approaches and issues of both spacedrives and faster-than-light flight are spelled out in the book Frontiers of Propulsion Science [3]. If you’re curious about that broader coverage, that book and subsequent papers are one starting point.

Back to inertial frames and conservation laws: An inertial frame is such a ubiquitous property of spacetime that it is often taken for granted. It is what allows accelerated motion to be felt – the reference frame for Newton’s F=ma and the subsequent conservation laws. If you’ve never thought about it before, this can be hard to grasp because it’s so foundational. One useful book is Mach’s Principle: From Newton’s Bucket to Quantum Gravity [4], which articulates several different attempts to represent how inertial frames exist. What makes this book particularly useful is that it compiled workshop discussions about the differing approaches. Those discussions are illuminating.

One of those attempts is called “Mach’s Principle,” which asserts that the surrounding matter of the universe gives rise to the inertial frame properties of space. Or stated differently, “inertial here, because of matter, out there.” A similar perspective is something called “inertial induction.” The implication of these is that inertia is more than just a property of mass. Inertia is an interaction between mass and spacetime – and perhaps with undiscovered nuances.

Perhaps an analogy might help. When you plot trajectories on graph paper, you usually don’t give much thought to the paper. The paper is just some fixed, reliable background upon which the more interesting details are plotted. But what if the paper were neither uniform nor constant over time? What if the trajectories might vary because of the properties of the paper itself? In this case, the rules for plotting on graph paper would have to be updated to account for the rules about the paper itself. Here, the graph paper is analogous to an inertial frame and “plotting trajectories” is analogous to Newton’s F=ma and subsequent conservation laws. If there are deeper details about inertial frames and their effect on inertia, then Newton’s F=ma and the conservation laws would have to be refined to incorporate those finer details.

In terms of Einstein’s general relativity – an established refinement of Newton’s laws – inertial frames and momentum conservation are treated only locally. I’m not sure quite how to put this in words, so I’ll defer to examples. With the warp drive, Einstein’s equations describe the local effects on spacetime from the warp drive itself, but cannot describe how (or if) momentum is conserved across a whole journey, encompassing the departure and arrival points as a total picture. Similarly, the momentum conservation of traveling through a wormhole cannot be described. While the local effects at each throat can be described, the bigger picture encompassing both the entry and exit throats and the mass that went through, cannot. There is room for more advances in physics.

Mach’s Principle and Inertial Induction are still open investigations in general physics, though not a dominant theme. They are relevant to spacedrives because Mach’s Principle was the starting point for what is now called the “Mach Effect Thruster.” It began around 1990, when a reexamination of Mach’s Principle led to new hypotheses about fluctuating inertia, which then led to a 1994 patent for a propulsion concept [5]. Experiments followed. By 2016, three other labs were observing similar thrusts, which led NASA to award a 2017 NIAC grant for further investigations.

The original theory, from James Woodward of California State University, Fullerton, showed that the inertia of a mass would fluctuate with a change of power of that mass. At first, varying the power of the mass took the form of charging and discharging a capacitor – where the capacitor was that mass. By doing this with two capacitors, while also changing the distance between them (via a piezoelectric actuator), a propulsive force was claimed to be generated (see figure and caption).

Figure 1. Transient inertia applied for propulsion: While the rear capacitor’s inertia is higher and the forward capacitor lower, the piezoelectric separator is extended. The front capacitor moves forward more than the rear one moves rearward. Then, while the rear capacitor’s inertia is lower and the forward capacitor higher, the piezoelectric separator is contracted. The front capacitor moves backward less than the rear one moves forward. Repeating this cycle shifts the center of mass of the system forward.

Since the center of mass of such a system moves without the opposite motion of a reaction mass, it appears to violate conservation of momentum, but does it? Since inertia is no longer constant, the usual equations do not fit without some reconsideration. This is a debated issue – debated in a constructive way. One version asserts how momentum conservation is indeed satisfied [6]. Others would prefer that the original fluctuating inertia equation be further advanced to explicitly address the conservation laws. Another desired refinement is to have the original equations explicitly connected to the experimental hardware – to show what dimensions of that hardware are the most critical.

Armed with an apparently working device, Woodward and his team concentrated on improving the experiments rather than that additional theoretical work. Over the years of making modifications to the device to amplify the effect, the ‘fluctuating inertia’ capacitors and the piezoelectric actuator were merged. Now a stack of piezoelectric disks serves both the functions of the inertial fluctuations and the oscillating motion. The power that affects the inertia now includes the mechanical motion too.

This is where the Scientific American article is worth mentioning. That article gives a decent review of the history and status of the Mach Effect Thruster (which also goes by the name “Mach Effect Gravity Assist (MEGA) Device”) as conducted by Jim Woodward and Heidi Fearn. It includes some perspectives that are useful to read separately, instead of needing to repeat those here. It addresses other aspects of the bigger picture of pursuing these kinds of research inquiries.

The other article that prompted this update is in the journal Acta Astronautica. In addition to Woodward’s team, a group at the Technical University of Dresden, Germany, led by Martin Tajmar, secured funding for a broader project to research spacedrives in 2017. That group is one of the 3 labs that replicated the Woodward results in 2016. The recent Acta Astronautica article is an update on their experimental hardware and procedures, in preparation for careful testing of the Mach Effect Thruster, the EmDrive, and other possible spacedrive effects.

A preceding work by Tajmar that fed into this latest update was an attempt to advance Woodward’s original fluctuating inertia equations into a form that mapped to the experimental hardware [7]. With such equations a new thruster could be designed to maximize the thrust and experimental predictions could be made for the existing hardware. To span the possibility of debated assumptions (such as what kind of power affects the inertia; mechanical, electrical, other?), more than one version of such equations was derived for future tests.

Though this paper is more about the testing methods, in the course of that preparatory work, it became evident that none of the analytical models match the data. The models predicted correlations between the thrust and operating frequencies that were not observed. If the Mach Effect Thruster is indeed working, it is not producing thrust per these models derived from the original theory. Hence, that thruster is now considered a “black box” – a term used to denote a device whose operating principles are unknown, and where the test program concentrates on seeing if, and under what circumstances, it functions.

To test the thrusters, they are placed on the end of a torsion beam that can twist horizontally (vertical axis). The term “torsion” means that the beam is sprung: its rotation is limited and proportional to how much thrust occurs at the tip. This is the same concept as the Cavendish balance that measured Newton’s gravitational constant. When the thruster is pointed one way, the beam deflects one direction. When pointed in the other direction, the beam deflects in the other direction. And the third important orientation is when the thruster is pointed in a direction where it should not deflect the beam. By comparing the actual deflections in each direction (and under different operating conditions), the performance of the thruster can be assessed.

Deciphering actual thrust from all the other things that can look like thrust is difficult. A major clue for a false positive is if the beam is deflected when the thruster is not pointed in a thrusting direction. Another major clue is revealed when the power is delivered to a dummy device instead of to the thruster – to see if simply delivering the power through the apparatus affects the apparatus. Another possible effect is from the peculiarities of the balance beam itself while powered up (e.g. thermal drift of the electronics). When testing the thruster in a thrusting direction, there might be slight shifts in the center of mass as the thruster warms up – where that thermal effect might look like thrust. And then there is the challenge of how vibration might shift the position of the balance beam. There are more possible side-effects than these, but these are the major ones.

Another false positive that merits separate mention is confirmation bias. Confirmation bias is not an instrumentation phenomenon, but a psychological phenomenon. After people reach a conclusion, they tend to filter evidence to fit their preconceived notion, rather than letting the data speak for itself. It happens way more often than it should. It is so insidious that we seldom know when we are guilty of it ourselves. Our bias skews, well, our bias. The important lesson here, for you the audience, is how to spot those biases when you come across new articles. If an article sounds like it’s trying to prove or disprove, rather than decipher and conclude, then its findings are likely skewed.

The Acta Astronautica article comes across like an investigation in progress, rather than a conclusion in search of evidence (or advocacy). The article outlines the performance limits of their hardware and the procedures used to distinguish the aforementioned side-effects from potential genuine thrust. To measure a claimed thrust of 2 µN, the thrust stand has demonstrated a sensitivity of 0.1 µN, as well as plots of the background noise showing less than ± 0.02 µN. The procedures include calibration with known forces before and after each run, measuring the thermal drift of the electronics, and automated operation that repeats a set of runs 140 times to get ample data to average. The tests are conducted in vacuum and the thrusting directions can be changed during a test sequence remotely without having to break vacuum or risk affecting other configuration settings.
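
To see why the 140-fold repetition matters, here is a minimal sketch of the statistics, assuming (purely for illustration) that the 0.1 µN sensitivity behaves like a 1-sigma per-run uncertainty; this is not the Dresden team's actual analysis.

```python
import math

# Illustrative statistics only: treat the 0.1 uN per-run sensitivity as a 1-sigma
# uncertainty and see what averaging 140 repeated runs buys.
claimed_thrust_uN = 2.0
per_run_sigma_uN = 0.1        # assumed 1-sigma figure for a single run
n_runs = 140

standard_error = per_run_sigma_uN / math.sqrt(n_runs)
print(f"standard error of the mean ~ {standard_error:.3f} uN")
print(f"signal-to-noise: {claimed_thrust_uN / per_run_sigma_uN:.0f} per run "
      f"-> {claimed_thrust_uN / standard_error:.0f} after averaging")
```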

Other than the aforementioned conclusion that the Mach Effect Thruster is not following analytical models, there are no other conclusions to report. Sample data is shown for the Mach Effect Thruster (more than one version) and the EmDrive, but only to illustrate the measurements that can be made, rather than any attempt to report on the viability of either of those thrusters.

In closing

Conferences are coming up where more progress will be reported. Consider this article preparation for interpreting the next series of papers. Carl Sagan’s adage, “Extraordinary claims require extraordinary evidence,” is exactly the tactic here. The results only have as much substance as the fidelity of the tests. This most recent progress bodes well for that fidelity. The prior tactic of “quick and cheap” experiments to test other claimed devices turned out to be neither quick nor cheap. Promotional material and sensationalistic articles are easy to create. Reliable findings are harder, less glamorous, and take longer.

The implications of a genuine new propulsion method, plus the independent replications, are driving the perseverance to wade through these complications. If it turns out that a new propulsion method is discovered, then not only will we have a more effective way to propel spacecraft, but also a new window into the lingering mysteries of physics. The less obvious value is if it turns out to be a false positive. In that case the years-long ambiguity will be resolved, and the lessons learned will make it easier to assess future claims of new thrusters.

References

1. Scoles, S. (2019). The Good Kind of Crazy. Scientific American, 321, 59-65.

2. Kößling, M., Monette, M., Weikert, M., & Tajmar, M. (2019). The SpaceDrive project-Thrust balance development and new measurements of the Mach-Effect and EMDrive Thrusters. Acta Astronautica, 161, 139-152.

3. Millis, M. G., & Davis, E. W. (Eds.). (2009). Frontiers of propulsion science. American Institute of Aeronautics and Astronautics.

4. Barbour, J. B., & Pfister, H. (Eds.). (1995). Mach’s principle: from Newton’s bucket to quantum gravity (Vol. 6). Springer Science & Business Media.

5. Woodward, J. F. (1994). U.S. Patent No. 5,280,864. Washington, DC: U.S. Patent and Trademark Office.

6. Wanser, K. H. (2013). Center of mass acceleration of an isolated system of two particles with time variable masses interacting with each other via Newton’s third law internal forces: Mach effect thrust. J. of Space Exploration, 2(2).

7. Tajmar, M. (2017). Mach-Effect thruster model. Acta Astronautica, 141, 8-16.


Breakthrough Propulsion Study

Ideas on interstellar propulsion are legion, from fusion drives to antimatter engines, beamed lightsails and deep space ramjets, not to mention Orion-class fusion-bomb devices. We’re starting to experiment with sails, though beaming energy to a space sail is still an unrealized, though near-term, project. But given the sheer range of concepts out there and the fact that almost all are at the earliest stages of research, how do we prioritize our work so as to move toward a true interstellar capability? Marc Millis, former head of NASA’s Breakthrough Propulsion Physics project and founder of the Tau Zero Foundation, has been delving into the question in new work for NASA. In the essay below, Marc describes a developing methodology for making decisions and allocating resources wisely.

by Marc G Millis

In February 2017, NASA awarded a grant to the Tau Zero Foundation to compare propulsion options for interstellar flight. To be clear, this is not about picking a mission and its technology – a common misconception – but rather about identifying which research paths might have the most leverage for increasing NASA’s ability to travel farther, faster, and with more capability.

The first report was completed in June 2018 and is now available on the NASA Technical Report Server, entitled “Breakthrough Propulsion Study: Assessing Interstellar Flight Challenges and Prospects.” (4MB file at: http://hdl.handle.net/2060/20180006480).

This report is about how to compare the diverse propulsion options in an equitable, revealing manner. Future plans include creating a database of the key aspects and issues of those options. Thereafter comparisons can be run to determine which of their research paths might be the most impactive and under what circumstances.

This study does not address technologies that are on the verge of fruition, like those being considered for a probe to reach 1000 AU with a 50 year flight time. Instead, this study is about the advancements needed to reach exoplanets, where the nearest is 270 times farther (Proxima Centauri b). These more ambitious concepts span different operating principles and levels of technological maturity, and their original mission assumptions are so different that equitable comparisons have been impossible.

Furthermore, all of these concepts require significant additional research before their performance predictions are ready for traditional trade studies. Right now their values are more akin to goals than specifications.

To make fair comparisons that are consistent with the varied and provisional information, the following tactics are used: (1) all propulsion concepts will be compared to the same mission profiles in addition to their original mission context; (2) the performance of the disparate propulsion methods will be quantified using common, fundamental measures; (3) the analysis methods will be consistent with fidelity of the data; and (4) the figures of merit by which concepts will be judged will finally be explicit.

Regarding the figures of merit – this was one of the least specified details of prior interstellar studies. It is easy to understand why there are so many differing opinions about which concept is “best” when there are no common criteria with which to measure goodness. The criteria now include quantifiable factors spanning: (1) the value of the mission, (2) the time to complete the mission, and (3) the cost of the mission.

The value of a mission includes subjective criteria and objective values. The intent is to allow the subjective factors to be variables so that the user can see how their interests affect which technologies appear more valuable. One of those subjective judgments is the importance of the destination. For example, some might think that Proxima Centauri b is less interesting than the ‘Oumuamua object. Another subjective factor is motive. The prior dominant – and often implicit – figure of merit was “who can get there first.” While that has merit, it can only happen once. The full suite of motives continues beyond that first event, including gathering science about the destinations, accelerating technological progress, and ultimately, ensuring the survival of humanity.

Examples of the objective factors include: (1) time within range of target; (2) closeness to target (better data fidelity); and (3) the amount of data acquired. A mission that gets closer to the destination, stays there longer, and sends back more data, is more valuable. Virtually all mission concepts have been limited to fly-bys. Table 1 shows how long a probe would be within different ranges for different fly-by speeds. To shift attention toward improving capabilities, the added value (and difficulty) of slowing at the destination – and even entering orbit – will now be part of the comparisons.

Table 1: Time on target for different fly-by speeds and instrumentation ranges
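
Values of the kind Table 1 reports are easy to estimate: a probe on a straight fly-by spends roughly 2R/v within instrument range R of the target. The ranges and speeds below are illustrative examples, not necessarily the table's own entries.

```python
# Illustrative estimate of time on target for a straight fly-by: ~ (2 * range) / speed.
AU = 1.496e11                        # meters
c = 3.0e8                            # m/s

ranges_au = [0.5, 1, 10, 100]        # instrumentation ranges, AU (assumed examples)
speeds_c = [0.05, 0.10, 0.20]        # fly-by speeds as fractions of lightspeed

for v_c in speeds_c:
    v = v_c * c
    row = ", ".join(f"{r} AU: {2 * r * AU / v / 3600:.1f} h" for r in ranges_au)
    print(f"{v_c:.2f} c -> {row}")
```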

Quantifying the time to complete a mission involves more than just travel time. Now, instead of the completion point being when the probe arrives, it is defined as when its data arrive back at Earth. This shift is because the time needed to send the data back has a greater impact than often realized. For example, even though Breakthrough StarShot aims to get there the quickest, in just 22 years, that comes at the expense of making the spacecraft so small that it takes an additional 20 years to finish transmitting the data. Hence, the time from launch to data return is about a half century, comparable to other concepts (46 yrs = 22 trip + 4 signal + 20 to transmit data). The tradeoffs of using a larger payload with a faster data rate, but longer transit time, will be considered.
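
The bookkeeping is simple enough to sketch; the StarShot figures are those quoted above, while the second concept's numbers are hypothetical, included only to show how a heavier probe with a faster downlink can finish at a similar time.

```python
# "Launch to data return" bookkeeping, using the StarShot figures quoted above.
def time_to_data_return(trip_yr, signal_yr, transmit_yr):
    return trip_yr + signal_yr + transmit_yr

starshot = time_to_data_return(trip_yr=22, signal_yr=4, transmit_yr=20)
heavier_probe = time_to_data_return(trip_yr=40, signal_yr=4, transmit_yr=2)  # hypothetical

print(f"StarShot-class probe: ~{starshot} years from launch to full data return")
print(f"Hypothetical heavier probe: ~{heavier_probe} years")
```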

Regarding the total time to complete the mission, the beginning point is now. The analysis includes considerations for the remaining research and the subsequent work to design and build the mission hardware. Further, the mission hardware, now by definition, includes its infrastructure. While the 1000 AU precursor missions do not need new infrastructure, most everything beyond that will.

Recall that the laser lightsail concepts of Robert Forward required a 26 TW laser, firing through a 1,000 km diameter Fresnel lens placed beyond Saturn (around 10 AU), aimed at a 1,000 km diameter sail with a mass of 800 tonnes. Project Daedalus envisioned needing 50,000 tonnes of helium-3 mined from the atmospheres of the gas giant planets. This not only requires the infrastructure for mining those propellants, but also for processing and transporting that propellant to the assembly area of the spacecraft. Even the more modest Earth-based infrastructure of StarShot is beyond precedent. StarShot will require one million synchronized 100 kW lasers spread over an area of 1 km² to get it up to the required 100 GW.

While predicting these durations in the absolute sense is dubious (predicting what year concept A might be ready), it is easier to make relative predictions (if concept A will be ready before B) by applying the same predictive models to all concepts. For example, the infrastructure rates are considered proportional to the mass and energy required for the mission – where a smaller and less energetic probe is assumed to be ready sooner than a larger, energy-intensive probe.

The most difficult duration to estimate, even when relaxed to relative instead of absolute comparisons, is the pace of research. Provisional comparative methods have been outlined, but this is an area needing further attention. The reason that this must be included – even if difficult – is because the timescales for interstellar flight are comparable to breakthrough advancements.

The fastest mission concepts (from launch to data return) are 5 decades, even for StarShot (not including research and infrastructure). Compare this to the 7 decades it took to advance from the rocket equation to having astronauts on the moon (1903-1969), or the 6 decades to go from the discovery of radioactivity to having a nuclear power plant tied to the grid (1896-1954).

So, do you pursue a lesser technology that can be ready sooner, a revolutionary technology that will take longer, or both? For example, what if technology A is estimated to need just 10 more years of research, but 25 years to build its infrastructure, while option B is estimated to take 25 more years of research, but will require no infrastructure. In that case, if all other factors are equal, option B is quicker.

To measure the cost of missions, a more fundamental currency than dollars is used – energy. Energy is the most fundamental commodity of all physical transactions, and one whose values are not affected by debatable economic models. Again, this is anchoring the comparisons in relative, rather than the more difficult, absolute terms. The energy cost includes the aforementioned infrastructure creation plus the energy required for propulsion.

Comparing the divergent propulsion methods requires converting their method-specific measures to common factors. Laser-sail performance is typically stated in terms of beam power, beam divergence, etc. Rocket performance in terms of thrust, specific impulse, etc. And warp drives in terms of stress-energy-tensors, bubble thickness, etc. All these type-specific terms can be converted to the more fundamental and common measures of energy, mass, and time.

To make these conversions, the propulsion options are divided into 4 analysis groups, where the distinction is if power is received from an external source or internally, and if their reaction mass is onboard or external. Further, as a measure of propulsion efficiency (or in NASA parlance, “bang for buck”) the ratio of the kinetic energy imparted to the payload, to the total energy consumed by the propulsion method, can be compared.
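
As an illustration of this bookkeeping, the sketch below sets up the 2x2 grouping and the kinetic-energy-per-total-energy ratio; the class name, labels, and example numbers are placeholders, not taken from the study itself.

```python
from dataclasses import dataclass

# Placeholder sketch of the 2x2 analysis grouping and the efficiency measure above.
@dataclass
class PropulsionConcept:
    name: str
    power_external: bool            # power beamed or collected externally?
    reaction_mass_onboard: bool     # reaction mass carried onboard?
    payload_kinetic_energy_J: float
    total_energy_consumed_J: float

    def group(self) -> str:
        power = "external power" if self.power_external else "internal power"
        mass = "onboard reaction mass" if self.reaction_mass_onboard else "external reaction mass"
        return f"{power}, {mass}"

    def efficiency(self) -> float:
        """Kinetic energy delivered to the payload per unit of total energy consumed."""
        return self.payload_kinetic_energy_J / self.total_energy_consumed_J

# Example: a gram-scale laser-sail probe (illustrative numbers in line with the
# StarShot discussion that follows).
sail = PropulsionConcept("laser-sail probe", True, False, 2e12, 40e12)
print(sail.group(), f"-> efficiency ~ {sail.efficiency():.0%}")
```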

The other reason that energy is used as the anchoring measure is that it is a dominant factor with interstellar flight. Naively, the greatest challenge is thought to be speed. The gap between the achieved speeds of chemical rockets and the target goal of 10% lightspeed is a factor of 400. But, increasing speed by a factor of 400 requires a minimum of 160,000 times more energy. That minimum only covers the kinetic energy of the payload, not the added energy for propulsion and inefficiencies. Hence, energy is a bigger deal than speed.

For example, consider the 1-gram StarShot spacecraft traveling at 20% lightspeed. Just its kinetic energy is approximately 2 TJ. When calculating the propulsive energy in terms of the laser power and beam duration (100 GW for minutes), the required energy spans 18 to 66 TJ, for just a 1-gram probe. For comparison, the energy for a suite of 1,000 probes is roughly the same as 1-4 years of the total energy consumption of New York City (NYC @ 500 MW).
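
These energy figures can be reproduced in a few lines, assuming the non-relativistic kinetic energy formula (adequate at 0.2 c to within a few percent) and the 500 MW New York City figure used above.

```python
# Why energy, not speed, is the dominant hurdle (figures as quoted in the text).
c = 3.0e8                       # m/s
speed_factor = 400              # 10% lightspeed vs. achieved chemical-rocket speeds
print(f"{speed_factor}x the speed needs at least {speed_factor**2:,}x the kinetic energy")

# Kinetic energy of a 1-gram probe at 20% lightspeed (non-relativistic estimate).
m, v = 1e-3, 0.2 * c
ke = 0.5 * m * v**2
print(f"1 g at 0.2 c: ~{ke / 1e12:.1f} TJ")

# A suite of 1,000 probes at the low end of the 18-66 TJ range, expressed in years
# of a 500 MW draw (the New York City figure assumed above).
suite_energy = 1000 * 18e12     # joules
years = suite_energy / (500e6 * 3.15e7)
print(f"1,000 probes ~ {years:.1f} years at 500 MW")
```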

Delivering more energy faster requires more power. By launching only 1 gm at a time, StarShot keeps the power requirement at 100 GW. If they launched the full suite of 1000 grams at once, that would require 1000 times more power (100 TW). Power is relevant to another under-addressed issue – the challenge of getting rid of excess heat. Hypothetically, if that 100 GW system has a 50% efficiency, that leaves 50 GW of heat to radiate. On Earth, with atmosphere and convection, that’s relatively easy. If it were a space-based laser, however, that gets far more dicey. To run fair comparisons, it is desired that each concept uses the same performance assumptions for their radiators.

Knowing how to compare the options is one thing. The other need is knowing which problems to solve. In the general sense, the entire span of interstellar challenges has been distilled into this “top 10” list. It is too soon to rank these until after running some test cases:

  • Communication – Reasonable data rates with minimum power and mass.
  • Navigation – Aiming well from the start and acquiring the target upon arrival, with minimum power and mass. (The ratio of the distance traversed to a ½ AU closest approach is about a million).
  • Maneuvering upon reaching the destination (at least attitude control to aim the science instruments, if not the added benefit of braking).
  • Instrumentation – Measure what cannot be determined by astronomy, with minimum power and mass.
  • High density and long-term energy storage for powering the probe after decades in flight, with minimum mass.
  • Long duration and fully autonomous spacecraft operations (includes surviving the environment).
  • Propulsion that can achieve 400 times the speed of chemical rockets.
  • Energy production at least 160,000 times chemical rockets and the power capacity to enable that high-speed propulsion.
  • Highly efficient energy conversion to minimize waste heat from that much power.
  • Infrastructure creation in affordable, durable increments.

While those are the general challenges common to all interstellar missions, each propulsion option will have its own make-break issues and associated research goals. At this stage, none of the ideas are ready for mission trade studies. All require further research, but which of those research paths might be the most impactive, and under what circumstances? It is important to repeat that this study is not about picking “one solution” for a mission. Instead, it is a process for continually making the most impactive advances that will not only enable that first mission, but the continually improving missions after that.

Ad astra incrementis.


Small Provocative Workshop on Propellantless Propulsion

In what spirit do we pursue experimentation, and with what criteria do we judge the results? Marc Millis has been thinking and writing about such questions in the context of new propulsion concepts for a long time. As head of NASA’s Breakthrough Propulsion Physics program, he looked for methodologies by which to push the propulsion envelope in productive ways. As founding architect of the Tau Zero Foundation, he continues the effort through books like Frontiers of Propulsion Science, travel and conferences, and new work for NASA through TZF. Today he reports on a recent event that gathered people who build equipment and test for exotic effects. A key issue: Ways forward that retain scientific rigor and a skeptical but open mind. A quote from Galileo seems appropriate: “I deem it of more value to find out a truth about however light a matter than to engage in long disputes about the greatest questions without achieving any truth.”

by Marc G Millis

A workshop on propellantless propulsion was held at a sprawling YMCA campus of classy rusticity, in Estes Park, Colorado, from Sept 10 to 14. These are becoming annual events, with the prior ones being in LA in Nov 2017, and in Estes Park, Sep 2016. This is a fairly small event of only about 30 people.

It was at the 2016 event where three other labs reported the same thrust that Jim Woodward and his team had been reporting for some time – with the “Mach Effect Thruster” (which also goes by the name “Mach Effect Gravity Assist” device). Backed by those independent replications, NASA awarded Woodward’s team NIAC grants. Updates on this work and several other concepts were discussed at this workshop. There will be a proceedings published after all the individual reports are rounded up and edited.

Before I go on to describe these updates, I feel it would be helpful to share a technique that I regularly use when trying to assess potential breakthrough concepts. I began using this technique when I ran NASA’s Breakthrough Propulsion Physics project to help decide which concepts to watch and which to skip.

When faced with research that delves into potential breakthroughs, one faces the challenge of distinguishing which of those crazy ideas might be the seeds of breakthroughs and which are the more generally crazy ideas. In retrospect, it is easy to tell the difference. After years of continued work, the genuine breakthroughs survive, along with infamous quotes from their naysayers. Meanwhile the more numerous crazy ideas are largely forgotten. Making that distinction before the fact, however, is difficult.

So how do I tell that difference? Frankly, I can’t. I’m not clairvoyant nor brilliant enough to tell which idea is right (though it is easy to spot flagrantly wrong ideas). What I can judge and what needs to be judged is the reliability of the research. Regardless if the research is reporting supportive or dismissive evidence of a new concept, those findings mean nothing unless they are trustworthy. The most trustworthy results come from competent, rigorous researchers who are impartial – meaning they are equally open to positive or negative findings. Therefore, I first look for the impartiality of the source – where I will ignore “believers” or pedantic pundits. Next, I look to see if their efforts are focused on the integrity of the findings. If experimenters are systematically checking for false positives, then I have more trust in their findings. If theoreticians go beyond just their theory to consider conflicting viewpoints, then I pay more attention. And lastly, I look to see if they are testing a critical make-break issue or just some less revealing detail. If they won’t focus on a critical issue, then the work is less relevant.

Consider the consequences of that tactic: If a reliable researcher is testing a bad idea, you will end up with a trustworthy refutation of that idea. Null results are progress – knowing which ideas to set aside. Reciprocally, if a sloppy or biased researcher is testing a genuine breakthrough, then you won’t get the information you need to take that idea forward. Sloppy or biased work is useless (even if from otherwise reputable organizations). The ideal situation is to have impartial and reliable researchers studying a span of possibilities, where any latent breakthrough in that suite will eventually reveal itself (the “pony in the pile”).

Now, back to the workshop. I’ll start with the easiest topic, the infamous EmDrive. I use the term “infamous” to remind you that (1) I have a negative bias that can skew my impartiality, and (2) there are a large number of “believers” whose experiments never passed muster (which led to my negative bias and overt frustration).

Three different tests of the EmDrive were reported, of varying degrees of rigor. All of the tests indicated that the claimed thrust is probably attributable to false positives. The most thorough tests were from the Technical University of Dresden, Germany, led by Martin Tajmar, where his student Marcel Weikert presented the EmDrive tests and Matthias Kößling presented the details of their thrust stand. They are testing more than one version of the EmDrive, under multiple conditions, and all with alertness for false positives. Their interim results show that thrusts are measured when the device is not in a thrusting mode – meaning that something else is creating the appearance of a thrust. They are not yet fully satisfied with the reliability of their findings and tests continue. They want to trace the apparent thrust to its specific cause.

The next big topic was Woodward’s Mach Effect Thruster – determining if the previous positive results are indeed genuine, and then determining if they are scalable to practical levels. In short – it is still not certain if the Mach Effect Thruster is demonstrating a genuine new phenomenon or if it is a case of a common experimental false positive. In addition to the work of Woodward’s team, led by Heidi Fearn, the Dresden team also had substantial progress to report, specifically where Maxime Monette covered the Mach Effect thruster details in addition to the thrust stand details from Matthias Kößling. There was also an analytical assessment based on conventional harmonic oscillators, plus more than one presentation related to the underlying theory.

One of the complications that developed over the years is that the original traceability between Woodward’s theory and the current thruster hardware has thinned. The thruster has become a “black box” where the emphasis is now on the empirical evidence and less on the theory.

Originally, the thruster hardware closely followed the 1994 patent which itself was a direct application of Woodward’s 1990 hypothesized fluctuating inertia. It involved two capacitors at opposite ends of a piezoelectric separator, where the capacitors experience the inertial fluctuations (during charging and discharging cycles) and where the piezoelectric separator cyclically changes length between these capacitors.

Its basic operation is as follows: While the rear capacitor’s inertia is higher and the forward capacitor lower, the piezoelectric separator is extended. The front capacitor moves forward more than the rear one moves rearward. Then, while the rear capacitor’s inertia is lower and the forward capacitor higher, the piezoelectric separator is contracted. The front capacitor moves backward less than the rear one moves forward. Repeating this cycle shifts the center of mass of the system forward – apparently violating conservation of momentum.

The actual conservation of momentum is more difficult to assess. The original conservation laws are anchored to the idea of an immutable connection between inertia and an inertial frame. The theory behind this device deals with open questions in physics about the origins and properties of inertial frames, specifically evoking “Mach’s Principle.” In short, that principle is ‘inertia here because of all the matter out there.’ Another related physics term is “Inertial Induction.” Skipping through all the open issues, the upshot is that variations in inertia would require revisions to the conservation laws. It’s an open question.

Back to the tale of the evolved hardware. Eventually over the years, the hardware configuration changed. While Woodward and his team tried different ways to increase the observed thrust, the ‘fluctuating inertia’ components and the ‘motion’ components were merged. Both the motions and mass fluctuations are now occurring in a stack of piezoelectric disks. Thereafter, the emphasis shifted to the empirical observations. There were no analyses to show how to connect the original theory to this new device. The Dresden team did develop a model to link the theory to the current hardware, but determining its viability is part of the tests that are still unfinished [Tajmar, M. (2017). Mach-Effect thruster model. Acta Astronautica, 141, 8-16.].

Even with the disconnect between the original theory and hardware now under test, there were a couple of presentations about the theory, one by Lance Williams and the other by José Rodal. Lance, reporting on discussions he had when attending the April 2018 meeting of the American Physical Society’s Division of Gravitational Physics, suggested how to engage the broader physics community about this theory, such as using the more common term of “Inertial Induction” instead of “Mach’s Principle.” Lance elaborated on the prevailing views (such as the absence of Maxwellian gravitation) that would need to be brought into the discussion – facing the constructive skepticism to make further advances. José Rodal elaborated on the possible applicability of “dilatons” from the Kaluza-Klein theory of compactified dimensions. Amid these and other presentations, there was lively discussion involving multiple interpretations of well established physics.

An additional provocative model for the Mach Effect Thruster came from an interested software engineer, Jamie Ciomperlik, who dabbles in these topics for recreation. In addition to his null tests of the EmDrive, he created a numerical simulation for the Mach Effect using conventional harmonic oscillators. The resulting complex simulations showed that, with the right parameters, a false positive thrust could result from vibrational effects. After lengthy discussions, it was agreed to examine this more closely, both experimentally and analytically. Though the experimentalists already knew of possible false positives from vibration, they did not previously have an analytical model to help hunt for these effects. One of the next steps is to check how closely the analysis parameters match the actual hardware.
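
To illustrate the kind of effect such a simulation can reveal (this is a toy sketch with invented parameters, not Ciomperlik's model): an oscillator with a slightly asymmetric spring, shaken by a zero-mean vibration, settles to a nonzero average displacement, which a sprung thrust balance could register as a steady thrust.

```python
import math

# Toy model only: driven oscillator with a small quadratic (asymmetric) spring term.
# The zero-mean drive still produces a nonzero mean displacement ("rectified" vibration).
omega0 = 2 * math.pi * 0.5      # rad/s, balance natural frequency (~0.5 Hz)
zeta = 0.2                      # damping ratio
beta = 480.0                    # quadratic spring asymmetry (invented value)
drive_amp = 2.0                 # m/s^2, vibration acceleration amplitude
drive_freq = 2 * math.pi * 5.0  # rad/s, vibration well above resonance

dt, t_end, settle = 1e-4, 20.0, 10.0
x = v = t = 0.0
total, count = 0.0, 0
while t < t_end:
    accel = (-2 * zeta * omega0 * v
             - omega0**2 * x
             - beta * x * x
             + drive_amp * math.cos(drive_freq * t))
    v += accel * dt             # semi-implicit Euler step
    x += v * dt
    t += dt
    if t > settle:              # average only the steady-state portion
        total += x
        count += 1

print(f"mean displacement ~ {total / count:.1e} m despite a zero-mean drive force")
```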

Quantum approaches were also briefly covered, where Raymond Chiao discussed the negative energy densities of Casimir cavities and Jonathan Thompson (a prior student of Chiao’s) gave an update on experiments to demonstrate the “Dynamical Casimir effect” – a method to create a photon rocket using photons extracted from the quantum vacuum.

There were several other presentations too, spanning topics of varying relevance and fidelity. Some were very speculative works whose usefulness can be compared to the thought-provoking effect of good science fiction: they don’t have to be right to be enlightening. One was from retired physicist and science fiction writer John Cramer, who described the assumptions needed to induce, using the Large Hadron Collider (LHC), a wormhole that could cover 1200 light-years in 59 days.

Representing NASA’s Innovative Advanced Concepts (NIAC) program, Ron Turner gave an overview of its scope and how to propose for NIAC awards.

A closing thought about consequences. By this time next year we will have definitive results on the Mach Effect Thruster, and the findings on the EmDrive will likely arrive sooner. Depending on whether the results are positive or negative, here are my recommendations on how to proceed in a sane and productive manner. They are based on history repeating itself, using both the good and the bad lessons:

If It Does Work:

  • Let the critical reviews and deeper scrutiny run their course. If this is real, a lot of people will need to repeat it for themselves to discover what it’s about. This takes time, and not all of it will be useful or pleasant. Pay more attention to those who are attempting to be impartial than to those trying to “prove” or “disprove.” Because divisiveness sells, expect press coverage to focus on the controversy or the hype rather than on the blander facts.
  • Don’t fall for the hype of exaggerated expectations that are sure to follow. If you’ve never heard of the “Gartner Hype Cycle,” then now’s the time to look it up. Be patient, and track the real test results more than the news stories. The next progress will still be slow. It will take a while and a few more iterations before the effects start to get unambiguously interesting.
  • Conversely, don’t fall for the pedantic disdain (typically from those whose ideas are more conventional and less exciting). You’ll likely hear dismissals like, “OK, so it works, but it’s not useful,” or “We don’t need it to do the mission.” Those dismissals hold only a kernel of truth, and only in a very narrow, near-sighted sense.
  • Look out for the sharks and those riding the coattails of the bandwagon. Sorry to mix metaphors, but it seemed expedient. There will be a lot of people coming out of the woodwork in search of their own piece of the action. Some will be making outrageous claims (hype) and selling how their version is better than the original. Again, let the test results, not the sales pitches, help you decide.

If It Does Not Work:

  • Expect some to dismiss the entire goal of “spacedrives” based on the failure of one or two approaches. This is a “generalization error” which might make some feel better, but serves no useful purpose.
  • Expect others to chime in with their alternative new ideas to fill the void, the weakest of which will be evident from their hyped sales pitches.
  • Follow the advice given earlier: when trying to figure out which ideas to listen to, check their impartiality and rigor. Listen to those who are trying neither to sell nor to dismiss, but to honestly investigate and report. When you find those service providers, stay tuned in to them.
  • To seek new approaches toward the breakthrough goals, look for the intersection of open questions in physics with the critical make-or-break issues of those desired breakthroughs. Those intersections are listed in our book Frontiers of Propulsion Science.
