
Is seeding life into the universe to be a part of the human future? Space probes conceivably could be doing this inadvertently, and the processes of panspermia also may be moving biological possibilities between planets and even stars. Robert Buckalew has his own take on what humans might do in this regard, as discussed below. Robert has written fiction and non-fiction since 2013 under the pen name Ry Yelcho for the blog Yelcho’s Muses. In 2015 he received the Canopus Award for Excellence in Interstellar Fiction from 100 Year Starship for the story “Everett’s Awakening.” His short story “The Interlopers” appears on Literally Stories. What follows draws on his speculative science article “Microbots—The Seeds of Interstellar Civilization,” which was awarded the Canopus Award for Original Non-Fiction. The essay that follows is based on his presentation at the Icarus Interstellar Starship Congress 2019.

by Robert Buckalew

The series of pivotal events that led to the development of intelligent life on Earth is so long and seemingly random that intelligent life elsewhere in the galaxy may be very rare. The chance extinction of the dinosaurs, which opened the way for the diversification and opportunistic evolution of mammals, is but one of many such events. The likely rarity of such a sequence elsewhere in the galaxy should be a compelling reason for humans to regard our gifts of intellect as exceptional and to assume the obligation to prevent this cosmic largesse from vanishing through natural disaster, nuclear war or self-made neglect. Even if intelligent life is found to be common in the universe, our form of intelligent life is certainly unique. If we wish, someday, to communicate and interact with other sentient species and contribute our singular human culture to their diverse communities, we must project our species’ existence into cosmic time frames.

The creation of dispersed, self-sufficient human settlements, both interplanetary and extra-solar, is the best way to ensure our long-term survival as a species. Because of the unimaginable distances to other star systems, most proposals for interstellar colonization involve large multi-generational starships, warp drives or wormholes. Although common plot devices in science fiction, wormholes and warp drives appear to be unworkable travel methods given the constraints of known physics. Multi-generational starships come with technological, political, biological, social and psychological challenges that make their realization daunting to consider.

Nature, however, has developed efficient methods to spread life on Earth that could be employed for interstellar colonization. Engineered Exogenesis, modeled after successful natural processes, proposes a method to spread Earth-life throughout our local stellar neighborhood.

Exogenesis, in astrobiology, is the hypothesis that life originated elsewhere in the universe and was conveyed to Earth. It has been suggested, for example, that life in our solar system may have originated on Mars and been carried to this planet aboard a meteorite. The plausibility of Earth-life having been transplanted is supported by our inability to create spontaneous life from primordial organic chemicals in the laboratory and by the fact that there is no known remnant of pre-genetic life on Earth.

Engineered Exogenesis would use an additional strategy derived from nature. The survival strategy of plants, insects and many aquatic lifeforms is based on the overproduction of seeds, larvae or spores to overcome their natural failure rate. Mass-produced microbots can be designed to deliver engineered genetics to prospective exoplanets in numbers sufficient to assure that some will likely reach their destination and survive. As with seeds, this may require the dissemination of thousands to millions of them, depending on the projected failure rate of the delivery system and the expected germination rate.
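The overproduction strategy can be made quantitative. Assuming independent failures, a per-microbot success probability determines how many must be launched for at least one to survive; the sketch below is purely illustrative (the probabilities are assumptions, not estimates):

```python
import math

def microbots_needed(p_success, p_mission):
    """Number of microbots N so that the probability of at least one
    success reaches p_mission, assuming independent failures:
    1 - (1 - p_success)**N >= p_mission.
    """
    return math.ceil(math.log(1 - p_mission) / math.log(1 - p_success))

# If each microbot has a 1-in-10,000 chance of arriving and germinating,
# roughly 46,000 launches give a 99% chance that at least one succeeds.
print(microbots_needed(1e-4, 0.99))
```

This is why "thousands to millions" is the natural scale: the required count grows as the inverse of the per-unit success rate.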

We know that water and organic compounds, known as tholins, have been found on planets, moons, comets and asteroids throughout our solar system. These precursors for life are believed to exist on interstellar comets and asteroids as well. Planetary exo-systems are expected to offer a similar fertile environment ready for the introduction of earthly genetic material.

The major components of an Engineered Exogenesis system might include 1) a microbiotic vessel that travels to the extra-solar planet, 2) a space-based magnetic accelerator capable of providing the inertial energy to send the vessel to other solar systems, 3) a space-based laser providing communication and supplemental energy for solar sail navigation and maneuvering, and 4) the engineered genetic material capable of growing a bio-robotic agent on the exoplanet to prepare the planetary environment for humans.

The Microbot

The microbiotic vessel, hereafter referred to as the microbot, would transport hermetically encapsulated genetic material to the destination exoplanet while providing radiation, magnetic and acceleration protection. The vessel would be designed to open in the presence of liquid water and deploy a biobot zygote and, if necessary, a photosynthesizing food source such as phytoplankton or other aqueous plant food. The engineering of the microbot would incorporate nanotechnology, bio-robotics, AI and neural networks. It could be very small, possibly the size of a grain of rice, and constructed of low-mass materials to minimize the energy required for acceleration. Construction materials might include carbon fiber, graphene or Kevlar, chosen to withstand the high magnetic fields and acceleration rates and to provide heat shielding for atmospheric entry. Microbot vessels would have no on-board propulsion, using only their initial inertial energy for space travel. A ferromagnetic mass at the leading end of the vessel would be required for magnetic acceleration and inertial stability. This mass might be jettisoned for deceleration upon arrival at the planetary system or retained as atmospheric entry shielding.

Microbots would also use leading and trailing photo-sensors as navigational aids, with the leading photo-sensor directed at the destination star and the trailing sensor pointed at Sol. Fore and aft modulated bio-luminescent lasers would provide communication between traveling microbot ships, reminiscent of fireflies on a summer night.

A series of pivoting, flat panels would be extended following launch. Each panel would use one side for solar energy collection. Solar electric storage might be achieved by capacitance of the microbot body. The obverse side, capable of adjustable reflectivity, would be used as a solar sail. The panel ends would also be capable of latching with other microbot panels for the creation of microbot arrays, connected clusters of individual microbots. A powdered iron substrate layer which would become magnetized during acceleration could aid in microbot arraying and later become a magsail for magnetic braking. A superconducting loop for magnetic braking could be incorporated into the perimeter of the solar panels or otherwise deployed as an independent loop. Finally, the panels could be positioned for autogyro aerobraking during atmospheric entry in the same way maple seeds can dissipate energy by helicoptering to Earth.

The Accelerator

Magnetic acceleration works by activating each electromagnet just ahead of the projectile to pull it forward. As the projectile accelerates, the rate of coil activation increases to stay ahead of the accelerating mass. Projectile acceleration would be limited by the inertial mass of the projectile and the force produced by the electromagnets. Magnetic accelerators can use either a circular or a linear motor configuration.
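To see why the coil timing must keep speeding up, consider a toy model with a constant average force per coil: the dwell time in each successive coil shrinks as the projectile gains speed. All numbers below are illustrative assumptions, not design figures:

```python
import math

# Toy coil-gun kinematics: constant average force F over each coil of
# length L, applied to a projectile of mass m (all values assumed).
m = 0.01        # projectile mass, kg
F = 50.0        # average magnetic force per coil, N
L = 1.0         # coil length, m
a = F / m       # acceleration within a coil, m/s^2

v = 0.0
dwells = []
for coil in range(1, 6):
    v_new = math.sqrt(v**2 + 2 * a * L)   # work-energy theorem per coil
    dwell = (v_new - v) / a               # time spent inside this coil
    dwells.append(dwell)
    print(f"coil {coil}: exit {v_new:8.1f} m/s, dwell {dwell*1e3:6.2f} ms")
    v = v_new
```

The steadily shrinking dwell times are what force the switching electronics to fire each coil sooner than the last.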

Aimed at the destination star, the microbots would be accelerated sequentially for a maximum exit velocity with a minimum energy expenditure. The accelerator must be space based, as the atmospheric heating from the very high exit velocity precludes launching microbots from Earth. By timing the sequencing of the magnets, the rate of acceleration can be optimized for the given projectile mass and the vulnerability of the vehicle and payload to the forces of acceleration. The length of a linear accelerator is inversely related to the acceleration needed for a given exit velocity – the longer the coil gun, the less acceleration needed. A circular, toroidal accelerator would not have this length constraint, as it could use multiple cycles to reach the projectile’s terminal velocity. Once the system is deployed, it could be used for numerous target stars. If microbots can be accelerated to 10% of light speed, a trip to Alpha Centauri (4.37 light years away) would take about 44 years and a trip to Tau Ceti (11.9 light years away) would take about 119 years. Reaching these speeds would be a function of the length and power of the accelerator and the mass of the microbot ship.
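These scalings are easy to sketch. Coast time is just distance over speed, and for a linear accelerator the relation v² = 2aL links exit velocity, tolerable acceleration and length. The million-g tolerance below is an assumed figure chosen only for illustration:

```python
C = 299_792_458.0   # speed of light, m/s
G = 9.81            # standard gravity, m/s^2

def trip_years(light_years, frac_c):
    """Coast time in years for an unpowered ship at a fixed fraction of c."""
    return light_years / frac_c

def linear_accelerator_length_m(frac_c, max_g):
    """Minimum linear accelerator length from v^2 = 2 a L."""
    v = frac_c * C
    a = max_g * G
    return v**2 / (2 * a)

print(trip_years(4.37, 0.1))                        # Alpha Centauri: ~44 yr
print(linear_accelerator_length_m(0.1, 1e6) / 1e3)  # ~46,000 km at a million g
```

Even granting an extreme million-g payload, a straight coil gun reaching 10% of light speed would be tens of thousands of kilometers long, which is one argument for the toroidal, multi-pass configuration.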

The Laser

The laser is not for propulsion, as in Yuri Milner’s Breakthrough Starshot, but would be used for communication and programming updates through modulation of the beam. It could also provide energy for course-correction and arraying maneuvers. It, too, is best located in space, to reduce atmospheric light scattering.

Arrays

Microbots would be programmed to array, although some would necessarily remain as self-sufficient individual ships. The primary purpose of arraying is to improve communication with Earth, since a larger antenna area can enhance both transmission and reception performance. Arraying may also be used to pool solar power and energy storage and to organize the use of this power.

Implementing Arrays

Microbot arraying could be achieved through swarm intelligence, a naturally occurring behavior among social insects, migrating birds and fish schools. Using what is known in robotics as distributed AI, microbots could communicate with each other while traveling in space through their fore and aft photo-sensors and modulated bio-luminescence. As the trip may take 100 years or more, there should be adequate time for arraying, even considering the limited maneuvering power provided by the angular positioning and variable reflectivity of their panels. Although launched individually, the first vessels would be accelerated to a slower velocity than the later ones, causing them to clump in space as they travel and increasing their ability to form arrays.
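The staggered-velocity clumping reduces to simple kinematics: a later, faster microbot closes the leader's head start at the rate of the velocity difference. The 1% velocity stagger and one-day launch gap below are illustrative assumptions:

```python
def catch_up_time(v_lead, v_follow, launch_gap):
    """Time after the follower's launch at which it overtakes the leader,
    for constant coasting velocities; requires v_follow > v_lead."""
    head_start = v_lead * launch_gap      # leader's lead at follower launch
    return head_start / (v_follow - v_lead)

c = 299_792_458.0
day = 86400.0
# A microbot launched one day later but 1% faster overtakes in ~100 days.
print(catch_up_time(0.100 * c, 0.101 * c, day) / day)
```

Against a century-long cruise, rendezvous windows of months are comfortably short, which is why even feeble sail maneuvering could suffice for array formation.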

Microbot Arrival and Descent to the Exoplanet

Deceleration from near-relativistic speeds to planetary orbital velocity is always problematic. Explosive ejection of the leading ferromagnetic mass could substantially decelerate the containment vessel while reducing the remaining microbot maneuvering mass. Braking and maneuvering could then be achieved with the reflective solar sails and the magsail, using the retrograde radiant photon pressure, plasma streams and charged magnetosphere of the destination star. Solar and magnetic braking could continue after entering an elliptical orbit of the star, until a matching planetary orbital velocity is obtained. With aerobraking, the arrays could move from an elliptical orbit of the planet to a circular one. For the solitary microbots, aerobraking would slow them for atmospheric entry.

Microbots entering the atmosphere would experience further braking through atmospheric drag. With reduced velocity, low gravitational attraction and high surface-to-mass ratios, atmospheric entry damage to the microbots might be kept to a minimum. Descent might be further dampened and controlled by positioning the panels for autogyro energy dissipation.

Engineered Genetic Materials

Genetics has become a game-changing technology allowing for man-made biological creativity. Genetic engineering has been revolutionized by CRISPR and by the creation of artificial life by scientists like Craig Venter. Expanding the genetic alphabet beyond the four chemical bases found in DNA could add functions capable of assembling metal or silicon components into planet-inhabiting biobots.

Although the microbot starship incorporates some bio-robotic functions, such as neural networks and bio-luminescence for communication, it does not need the biobot’s ability to grow and reproduce. The biobots used for planetary exploration, terraforming and habitat construction would be grown from the genetic material in the microbot after it finds a suitable watery environment for germination/gestation. These bio-lifeforms would be genetically designed for the anticipated planetary environment but might also incorporate a greater degree of robotics into their biology for laser communication with Earth and for reprogrammability.

Essential BioBot Characteristics

The first terrestrial biobots might best be designed to be amphibious for food access and terrain mobility and cold blooded for temperature tolerance. They might be capable of solar power or photosynthesis for their energy, but separate genetic material may be included to produce a photosynthesizing food source for the biobots.

Their instinctive behavior would include a work activity for communication and infrastructure building, similar to the instinctive nest, hive or web building found in Earth life. Their behaviors would be re-programmable from Earth, allowing task-changing capability. For this they would require a means to transmit and receive laser-modulated signals to and from Earth, as well as interspecies communication using sound or light modulation.

Reproduction could be biological, though it would likely be asexual. Reproduction would also be programmable through Earth communication in order to create a series of diverse, specifically-tasked and specialized biobot offspring. Eventually there would be terraforming for human habitation. After successful habitat construction and terraforming, the final genetic download would be human genomes for incubation, which would require specialized biobots for human gestation and nurturing. These humans could be genetically designed for the gravity, atmosphere and temperatures of the exoplanet by adjusting metabolic rate, body mass, lung capacity, skin color, fat, fur covering, etc. Such modifications would anticipate the natural adaptations that would otherwise have evolved over time as humans adjusted to their new environment.

The simpler alternative to remote reproductive reprogramming would be to send sequential waves of microbots, each containing subsequent genetics. However, remote genetic reprogramming would not only result in faster colonization; genetic advances made during the 100+ year microbot travel time could also be incorporated in the transmitted code.

Advantages of Engineered Exogenesis

There are some obvious advantages to an Engineered Exogenesis approach to interstellar colonization. It would be scalable: the number of microbots manufactured and the launch frequency could be adjusted to suit political or financial circumstances. It would not be limited to one target planet; any number of planets, or newly discovered planets or moons, could be added as targets over time. It could be tested in our solar system and improved before deployment, with the seeding and growth of biobots taking place on Earth or in domed environments on the Moon or Mars. Finally, it reduces the time scale for starship colonization and eliminates human exposure to space travel. It does, however, lack the drama and romance found in human adventure stories in space.

For Engineered Exogenesis to become a reality, many technical problems must be resolved and numerous ethical issues considered, such as the chauvinistic imposition of our genetics onto other evolving planetary systems, the creation and dissemination of synthetic, reproducing life forms and the alteration of our own human genome. Society, so far, has been accepting of test tube babies, GMO food crops and gene therapy, especially when they seem to improve our lives. Such controversial technological impositions may also be accepted as necessary in order to achieve a human interstellar presence.

Has This Happened Before?

If Engineered Exogenesis is a viable idea, would it not already have been done by advanced alien civilizations? If so, why are there no alien biobots roving around on Earth? This is a version of the Fermi Paradox. If alien genomes were sent here, perhaps Earth’s ocean life fed on them before they could grow and reproduce. Possibly life on Earth is itself the result of an alien exogenesis. Then where are the microbot ships that carried it? They would be tiny, widely scattered and hard to find. Possibly the nascent search for micrometeorites on Earth may yet turn up one of these artificial nanobots.

References

1. Michael Noah Mautner, “Seeding the Universe with Life: Securing Our Cosmological Future,” The Interstellar Panspermia Society. http://www.panspermia-society.com/

2. N. Mathews, A. L. Christensen, R. O’Grady, F. Mondada, and M. Dorigo, “Mergeable nervous systems for robots,” Nature Communications 8(439), 2017 (full text).

3. Jennifer Doudna, “How CRISPR lets us edit our DNA,” TED Talk September 2015. https://www.ted.com/talks/jennifer_doudna_we_can_now_edit_our_dna_but_let_s_do_it_wisely?language=en

4. J. Craig Venter, “Watch me unveil ‘synthetic life,’” TED talk. May 2010. https://www.ted.com/talks/craig_venter_unveils_synthetic_life?language=en

5. Daniela Rus, “Autonomous boats can target and latch onto each other,” MIT News June 5, 2019. http://news.mit.edu/2019/autonomous-robot-boats-latch-0605

6. Sarah Hörst, “What in the world(s) are tholins?” Planetary Society July 22, 2015. http://www.planetary.org/blogs/guest-blogs/2015/0722-what-in-the-worlds-are-tholins.html

7. Francesco Corea, “Distributed Artificial Intelligence: A Primer on Multi-Agent Systems Agent Based Modeling and Swarm Intelligence,” https://www.kdnuggets.com/2019/04/distributed-artificial-intelligence-multi-agent-systems-agent-based-modeling-swarm-intelligence.html

8. Paul Gilster, “Starship Surfing: Ride the Bow Shock,” Centauri Dreams March 21, 2012 https://www.centauri-dreams.org/2012/03/21/starship-surfing-ride-the-bow-shock/


Investigating a Pluto Orbiter

The spectacular success of New Horizons inevitably leads to questions about what an orbiter at Pluto/Charon might accomplish. It’s heartening that NASA has funded the Southwest Research Institute (SwRI) to look further into the matter, the Institute having already examined the question on its own. Now a Pluto orbiter becomes one of ten mission studies NASA is sponsoring as we look toward the next National Academy Planetary Science Decadal Survey. Beginning in 2020, the survey will outline science objectives and recommend missions over a ten year period.

The NASA decision leverages all the work SwRI has put into the Pluto orbiter concept, and brings the focus to what we might accomplish with such a mission that a flyby could not. Particularly significant will be the choice of science instruments, which a spacecraft achieving global coverage will demand. And because we have a system at Pluto with five moons, we have a range of targets that can be subjected to detailed study. There is even the possibility of taking the mission to other targets, as New Horizons principal investigator Alan Stern explained:

“In an SwRI-funded study that preceded this new NASA-funded study, we developed a Pluto system orbital tour, showing the mission was possible with planned capability launch vehicles and existing electric propulsion systems. We also showed it is possible to use gravity assists from Pluto’s largest moon, Charon, to escape Pluto orbit and to go back into the Kuiper Belt for the exploration of more KBOs like MU69 and at least one more dwarf planet for comparison to Pluto.”

Image: To follow up on NASA’s New Horizons mission that revealed Pluto’s “heart,” SwRI is studying a new Pluto orbiter mission for NASA. SwRI has shown it is possible to orbit Pluto and then escape orbit to tour additional dwarf planets and Kuiper Belt Objects. Credit: NASA/JHUAPL/SwRI.

New Horizons carries seven instruments, all of which are still functioning well, as we learned from Stern in his latest PI’s Perspective. Having flown past Ultima Thule (2014 MU69), New Horizons continues to explore the Kuiper Belt, and it will be instructive to see how long it continues to return data; the Voyagers have demonstrated longevity far beyond the expectations of those who built them. Right now the spacecraft is continuing to return data on the Ultima Thule flyby, a process that will last another year or so, according to Stern, but New Horizons is also continuing to observe KBOs as it moves ever further out.

The seven scientific instruments aboard the spacecraft have just been put through a thorough calibration, the first such campaign run since just before the Pluto/Charon flyby. As with the Ultima Thule data, the complete calibration results will be returned with the dataflow over the next year, though Stern says the instruments ‘performed flawlessly.’ The crucial Long Range Reconnaissance Imager (LORRI) has received a software upgrade designed to detect fainter targets than before as well as to enable longer exposures. The new capability will be in place by December for further Kuiper Belt exploration.

We still don’t have a dedicated mission to the interstellar medium, meaning one with an instrument package expressly designed for operations beyond the heliosphere, but we do have continuing dust and plasma observations of the outer heliosphere from New Horizons. This is useful stuff, because we are building a dataset that complements what the two Voyagers have given us, though the New Horizons instrument package is more capable. Planetary scientists will take advantage of observations like these in learning more about how the surfaces of KBOs and dwarf planets are affected by the environment through which they orbit.

A number of science papers on Pluto/Charon and Ultima Thule are about to be submitted, with dozens of new results reported at the recent Division for Planetary Sciences meeting in Geneva. You can see that New Horizons is very much an ongoing mission, even as we look toward the benefits of a Pluto orbiter that is now under study not just at SwRI but at NASA. The continued naming of Pluto surface features is a reminder of how much we’ve learned, but imagine how we can fill out these young maps with features yet to be observed in detail.

Image (click to enlarge): This map, compiled from images and data gathered by NASA’s New Horizons spacecraft during its flight through the Pluto system in 2015, contains Pluto feature names approved by the International Astronomical Union. Names from the newest round of nominations, approved in 2019, are in yellow. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute/Ross Beyer.


In Search of a Wormhole

A star called S2 is intriguingly placed, orbiting around the supermassive black hole thought to be at Sgr A*, the bright, compact radio source at the center of the Milky Way. S2 has an orbital period of a little over 16 years and a semi-major axis in the neighborhood of 970 AU. Its elliptical orbit takes it no closer than 120 AU, but the star is close enough to Sgr A* that continued observations may tell us whether or not a black hole is really there. A new paper in Physical Review D now takes us one step further: Is it possible that the center of our galaxy contains a wormhole?
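As an aside, the quoted orbit is enough to weigh the central object: Kepler's third law in solar units gives M ≈ a³/P² with a in AU and P in years. The 16.05-year period below is an assumed refinement of "a little over 16 years":

```python
# Kepler's third law in solar units: M (solar masses) = a^3 / P^2,
# with the semi-major axis a in AU and the period P in years.
a_au = 970.0    # S2 semi-major axis, AU (figure quoted above)
p_yr = 16.05    # orbital period, years (assumed value)

mass_solar = a_au**3 / p_yr**2
print(f"{mass_solar:.2e}")   # on the order of a few million solar masses
```

The result lands in the millions of solar masses, consistent with the accepted figure for the Sgr A* black hole, which is exactly why S2's orbit is such a sensitive probe of what sits at galactic center.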

By now the idea of a wormhole that connects different spacetimes has passed into common parlance, thanks to science fiction stories and films like Interstellar. We have no evidence that a wormhole exists at galactic center at all, much less one that might be traversable, though the idea that it might be possible to pass between spacetimes using one of these is too tempting to ignore, at least on a theoretical level. At the University at Buffalo, Dejan Stojkovic, working with De-Chang Dai (Yangzhou University, China and Case Western Reserve University), thinks the star S2’s behavior may offer a way to look for wormholes.

Image: An artist’s concept illustrates a supermassive black hole. A new theoretical study outlines a method that could be used to search for wormholes (a speculative phenomenon) in the background of supermassive black holes. Credit: NASA/JPL-Caltech.

Note that the authors are not saying they find such an object in the existing datasets on S2 (the object has only been monitored since 1995 at UCLA and at the Max Planck Institute for Extraterrestrial Physics). Rather, they’re arguing for using the behavior of objects near black holes, where extreme astrophysical conditions exist, to see whether they exhibit unusual behavior that could be the result of a wormhole associated with the black hole. So this is a methodological approach that advances a proposed course of observation.

You may remember that a 1995 paper from John Cramer, Robert Forward, Gregory Benford and other authors including Geoff Landis (see below) went to work on this question, though not using a star near the Milky Way’s center (see How to Find a Wormhole, a Centauri Dreams article from the same year). Cramer et al. argued for looking for an astrophysical signal of negative mass, which would be needed to keep a wormhole mouth open. Let me quote from something Geoff Landis told me about the paper:

“If the wormhole is exactly between you and another star, it would defocus the light, so it’s dim and splays out in all directions. But when the wormhole moves and it’s nearer but not in front of the star, then you would see a spike of light. So if the wormhole moves between you and another star and then moves away, you would see two spikes of light with a dip in the middle.”

That’s an astrophysical signature interesting enough to be noted. And from the paper itself:

“…the negative gravitational lensing presented here, if observed, would provide distinctive and unambiguous evidence for the existence of a foreground object of negative mass.”

Back to Stojkovic, whose new paper notes a property we would expect to exist in wormholes. Let me quote his paper on this:

The purpose of this work…is to establish a clear link between wormholes and astrophysical observations. By definition, a wormhole smoothly connects two different spacetimes. If the wormhole is traversable, then the flux (scalar, electromagnetic, or gravitational) can be conserved only in the totality of these two spaces, not individually in each separate space.

Interesting point. An example: A physical electric charge on one side of the wormhole would manifest itself on the other side. There, where there is no electric charge, an observer would notice the electric flux coming from the wormhole and assume that the wormhole is charged. There is, in fact, no real charge at the wormhole, but the flux is strictly conserved only if the entirety of both spaces connected by the wormhole is considered. And as the paper goes on to state, a gravitational source like a star orbiting the mouth of the wormhole should be observed as gravitational perturbations on the other side.

The message is clear. Again, from the Stojkovic paper:

As a direct consequence, trajectories of objects propagating in [the] vicinity of a wormhole must be affected by the distribution of masses/charges in the space on the other side of the wormhole. Since wormholes in nature are expected to exist only in extreme conditions, e.g. around black holes, the most promising systems to look for them are either large black holes in the centers of galaxies, or binary black hole systems.

By now it should be clear why S2 is an interesting star for this purpose. Its proper motion orbiting what is believed to be a supermassive black hole at Sgr A* could theoretically tell us whether the black hole harbors a wormhole. The extreme gravitational conditions make this the best place to look for a wormhole, and minute deviations in the expected orbit of S2 could indicate one’s presence. That means we need to assemble a lot more data about S2.

Stojkovic doesn’t expect to find a lot of traffic coming through any wormhole we do find:

“Even if a wormhole is traversable, people and spaceships most likely aren’t going to be passing through. Realistically, you would need a source of negative energy to keep the wormhole open, and we don’t know how to do that. To create a huge wormhole that’s stable, you need some magic.”

In the absence of magic, we can still put observational astronomy to work. We may be a decade or two away from being able to track S2 this closely, and in any case will need a lot more data to make the call, but the scientist cautions that even deviations in its expected orbit won’t be iron-clad proof of a wormhole. They’ll simply make it a possibility, leading us to ask what other causes on our own side of the presumed wormhole could be creating the perturbations. And any wormhole we do come to believe is there would not necessarily be traversable, but if the effects of gravity from a different spacetime are in play, that’s certainly something we’ll want to study as we untangle the complicated situation at galactic center.

The paper is Dai and Stojkovic, “Observing a Wormhole,” Phys. Rev. D 100, 083513 (10 October 2019). Abstract / preprint. The Cramer et al. paper is “Natural Wormholes as Gravitational Lenses,” Physical Review D (March 15, 1995), pp. 3124–27 (abstract).


Exoplanet Collision at BD +20 307?

NASA collaborates with the German Aerospace Center (DLR) on one of our more interesting observatories. SOFIA, the Stratospheric Observatory for Infrared Astronomy, is a Boeing 747 aircraft that flies an infrared telescope with a 2.7 m diameter mirror. Located on the port side of the fuselage near the tail, the telescope houses a number of instruments for infrared astronomy at wavelengths from 1–655 micrometers (μm). One of these is FORCAST (Faint Object Infrared Camera for the SOFIA Telescope), which has now spotted an intriguing phenomenon, one that may be flagging a collision of two exoplanets.

The stars in question form a double system called BD +20 307, some 300 light years from Earth. Note the age of this system, about one billion years, an important consideration in what follows. About ten years ago, observations from the Spitzer space telescope as well as ground-based observatories produced evidence of warm debris here, whereas on age alone we would have expected warm circumstellar dust to have disappeared, just as it has in our own system.

What SOFIA brings to the table is a new set of measurements that shows the infrared brightness from the debris at BD +20 307 has increased by more than 10 percent in a time period of 10 years. We don’t usually find this kind of rapid fluctuation when studying what ought to be the gradual evolution of a planetary system, especially not in one as mature as this. There should be little dust here to begin with, much less warm dust, and while there are other possible mechanisms in play (see below), the rapid pace implies a collision.

“This is a rare opportunity to study catastrophic collisions occurring late in a planetary system’s history,” said Alycia Weinberger, staff scientist at the Carnegie Institution for Science’s Department of Terrestrial Magnetism in Washington, and lead investigator on the project. “The SOFIA observations show changes in the dusty disk on a timescale of only a few years.”

Image: Artist’s concept illustrating a catastrophic collision between two rocky exoplanets in the planetary system BD +20 307, turning both into dusty debris. Ten years ago, scientists speculated that the warm dust in this system was a result of a planet-to-planet collision. Now, SOFIA found even more warm dust, further supporting that two rocky exoplanets collided. This helps build a more complete picture of our own Solar System’s history. Such a collision could be similar to the type of catastrophic event that ultimately created our Moon. Credit: NASA/SOFIA/Lynette Cook.

From the paper:

We investigated several mechanisms that could cause the observed changes in the disk flux, including making the dust grains hotter, either through an increase in stellar luminosity or moving the dust grains closer to the stars, or increasing the number of dust grains in the system. If the origin of the copious amount of warm dust orbiting BD +20 307 is an extreme collision between planetary-sized bodies, then this system may help unlock clues into planetary systems around binary stars, along with providing a glimpse into catastrophic collisions occurring late in a planetary system’s history.

It’s also true that gaining a stronger understanding of dusty debris disks should give us insights into how binary systems evolve, useful as we investigate such interesting places as the Alpha Centauri triple system. If we are indeed looking at the result of a major collision at BD +20 307, further work should illuminate the kind of catastrophes we find evidence for in the Solar System, from our Moon’s formation (likely through an impact with a Mars-sized object) to the huge axial tilt of Uranus, which is probably the result of multiple impacts. The authors argue that new SOFIA observations at a wider wavelength range out to 20 µm will allow us to draw more definitive conclusions.

The paper is Thompson et al., “Studying the Evolution of Warm Dust Encircling BD +20 307 Using SOFIA,” Astrophysical Journal Vol. 875, No. 1 (12 April 2019). Abstract / preprint.


Exoplanet Geochemistry: The White Dwarf Factor

I continue to be fascinated by small stars. My earliest such passion involved red dwarfs, which seemed to offer habitable-planet possibilities of great interest to science fiction authors, assuming such environments could survive tidal lock and stellar flaring. But white dwarfs have a weird seductiveness of their own, because we’re learning how to extract from them information about planets that orbited them before being consumed.

Thus a new paper out of UCLA, which focuses on an unusual way of determining the geochemistry of rocks from beyond our Solar System. We can do this because white dwarfs, the remnants of normal stars that have gone through their red giant phase and collapsed into objects about the size of the Earth, have strong gravitational pull. That means we would expect heavy elements like carbon, oxygen and nitrogen to vanish into their interiors, utterly out of view to our instruments. We should see little more than hydrogen and helium, making what actually does show up in their atmospheres intriguing.

Image: An artist’s concept showing debris falling into a white dwarf star. Credit: NASA/JPL-Caltech.

According to the UCLA researchers, spectroscopic studies reveal that the atmospheres of up to half of white dwarfs with effective temperatures below 25,000 K are polluted by elements heavier than helium. With their own heavy elements hidden within the stars’ interiors, white dwarf atmospheres are clearly collecting something external, presumably debris from rocky bodies that once orbited the stars and became disrupted by their gravitational pull.

We are talking about stellar objects a long way off. The closest examined in the study led by graduate student Alexandra Doyle is some 200 light years out, but Doyle’s UCLA team went to work on six white dwarfs in all, the farthest being at a distance of 665 light years. Doyle likens the work to a tool we’re familiar with in other spheres of inquiry. “Observing a white dwarf,” she says, “is like doing an autopsy on the contents of what it has gobbled in its solar system.”

Image: An artist’s rendering shows a white dwarf star with a planet in the upper right. Credit: Mark Garlick.

A key issue here is what is known as fugacity, which is helpfully defined in the issue of Science in which the paper ran: “The oxygen fugacity of a rock, fO2, is a measure of how oxidizing or reducing its surroundings were when the rock formed. Different minerals form at different fO2 and have different physical properties, so the internal structure of an exoplanet depends on this value.”

The importance of oxidation — in which iron shares its electrons with oxygen — is clear when we consider its impact on our own planet. Co-author Edward Young (UCLA):

“All the chemistry that happens on the surface of the Earth can ultimately be traced back to the oxidation state of the planet. The fact that we have oceans and all the ingredients necessary for life can be traced back to the planet being oxidized as it is. The rocks control the chemistry.”

The question under consideration: Are rocks in our Solar System typical of those around other stars? Doyle’s results show oxygen fugacities within range of what we find on Earth and Mars, as well as asteroids. This would suggest that there are geochemical similarities between Earth and the rocky exoplanets whose disintegration is measured around the white dwarfs.

It’s hard to see how else we might go to work on geochemistry around other stars than by studying white dwarf atmospheres. The paper notes that estimating the composition of exoplanets from their host star abundances or from planet mass-radius relationships is unreliable. White dwarfs provide a more direct alternative for studying extrasolar rocks.

Image: UCLA researchers Benjamin Zuckerman, Beth Klein, Alexandra Doyle, Hilke Schlichting, Edward Young (left to right). Credit: Christelle Snow/UCLA.

The UCLA scientists homed in on the six most common elements in rock: iron, oxygen, silicon, aluminum, magnesium and calcium, as found in white dwarf atmospheres. They compared their calculated results with Solar System materials, with findings that imply, according to co-author Hilke Schlichting (UCLA), that a planet bearing such rocks would have plate tectonics and a potential for magnetic fields similar to Earth’s. Let the paper conclude the story:

Our results show that the parent objects that polluted these WDs had intrinsic oxidation states similar to those of rocks in the Solar System. Based on estimates of their mass, the bodies accreting onto WDs were either asteroids that represent the building blocks of rocky exoplanets, or they were fragments of rocky exoplanets themselves. In either case, our results constrain the intrinsic oxygen fugacities of rocky bodies that orbited the progenitor star of their host WD. Our data indicate that rocky exoplanets constructed from these planetesimals should be geophysically and geochemically similar to rocky planets in the Solar System, including Earth.

The paper is Doyle et al., “Oxygen fugacities of extrasolar rocks: Evidence for an Earth-like geochemistry of exoplanets,” Science Vol. 366, Issue 6463 (18 October 2019), pp. 356-359 (abstract/full text).


Could an advanced civilization create artificial black holes? If so, the possibilities for power generation and interstellar flight would be profound. Imagine cold worlds rendered habitable by tiny artificial ‘suns.’ Robert Zubrin, who has become a regular contributor to Centauri Dreams, considers the consequences of black hole engines in the essay below. Dr. Zubrin is an aerospace engineer and founder of the Mars Society, as well as being the president of Pioneer Astronautics. His latest book, The Case for Space: How the Revolution in Spaceflight Opens Up a Future of Limitless Possibility, was recently published by Prometheus Books. As Zubrin notes, generating energy through artificial singularities would leave a potential SETI signal whose detectability is analyzed here, a signature unlike any we’ve examined before.

by Robert Zubrin

Abstract

Artificial Singularity Power (ASP) engines generate energy through the evaporation of modest sized (10^8-10^11 kg) black holes created through artificial means. This paper discusses the design and potential advantages of such systems for powering large space colonies, terraforming planets, and propelling starships. The possibility of detecting advanced extraterrestrial civilizations via the optical signature of ASP systems is examined. Speculation as to possible cosmological consequences of widespread employment of ASP engines is considered.

Introduction

According to a theory advanced by Stephen Hawking [1] in 1974, black holes evaporate, with a lifetime given by:

tev = 5120π tP (m/mP)^3 (1)

where tev is the time it takes for the black hole to evaporate, tP is the Planck time (5.39e-44 s), m is the mass of the black hole in kilograms, and mP is the Planck mass (2.18e-8 kg) [2].

Hawking considered the case of black holes formed by the collapse of stars, which need to be at least ~3 solar masses to occur naturally. For such a black hole, equation 1 yields an evaporation time of 5e68 years, far longer than the expected life of the universe. In fact, evaporation would never happen, because the black hole would gain energy, and thus mass, by drawing in cosmic background radiation at a rate faster than its own insignificant rate of radiated power.
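As a sanity check, equation (1) can be evaluated numerically. A minimal sketch in Python, using the constants quoted above:

```python
import math

# Hawking evaporation time: t_ev = 5120*pi * t_P * (m/m_P)^3  -- equation (1)
T_PLANCK = 5.39e-44    # Planck time, s
M_PLANCK = 2.18e-8     # Planck mass, kg
SECONDS_PER_YEAR = 3.15e7

def evaporation_time(mass_kg):
    """Black hole evaporation time in seconds, per equation (1)."""
    return 5120 * math.pi * T_PLANCK * (mass_kg / M_PLANCK) ** 3

# The lightest naturally occurring black hole, ~3 solar masses (~6e30 kg):
t_years = evaporation_time(6e30) / SECONDS_PER_YEAR
print(f"{t_years:.1e} years")  # on the order of 5e68 years, as quoted above
```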

However, it can be seen from equation (1) that the evaporation time scales with the cube of the singularity mass, which means that the emitted power (= mc^2/tev) scales inversely with the square of the mass. Thus if the singularity could be made small enough, very large amounts of power could theoretically be produced.

This possibility was quickly grasped by science fiction writers, and such propulsion systems were included by Arthur C. Clarke in his 1976 novel Imperial Earth [3] and Charles Sheffield in his 1978 short story “Killing Vector.” [4]

Such systems did not receive serious technical analysis, however, until 2009, when the idea was examined by Louis Crane and Shawn Westmoreland, both then of Kansas State University, in their seminal paper “Are Black Hole Starships Possible?” [5]

In their paper, Crane and Westmoreland focused on the idea of using small artificial black holes powerful enough to drive a starship to interstellar-class velocities yet long-lived enough to last the voyage. They identified a “sweet spot” for such “Black Hole Starships” (BHS) with masses on the order of 2×10^9 kg, which they said would have lifetimes on order of 130 years, yet yield power of about 13,700 TW. They proposed to use some kind of parabolic reflector to reflect this radiation, resulting in a photon rocket. The ideal thrust T of a rocket with jet power P and exhaust velocity v is given by:

T = 2P/v (2)

So with P = 13,700 TW and v = c = 3e8 m/s, the thrust would be 8.6e7 N. Assuming that the payload spacecraft had a mass of 1e9 kg, this would accelerate the ship at a rate of a = 8.6e7/3e9 = 2.8e-2 m/s^2. Accelerating at this rate, such a ship would reach about 30% of the speed of light in 100 years.

There are a number of problems with this scheme. In the first place, the claimed acceleration is on the low side. Furthermore their math appears to be incorrect. A 2e9 kg singularity would only generate about 270 TW, or 1/50th as much as their estimate, reducing thrust by a factor of 50 (although it would last about 20,000 years). These problems could be readily remedied, however, by using a smaller singularity and a smaller ship. For example a singularity with a mass of 2e8 kg would produce a power of 26,900 TW. Assuming a ship with a mass of 1e8 kg, an acceleration of 0.6 m/s^2 could be achieved, allowing 60% of the speed of light to be achieved in 10 years. The singularity would only have a lifetime of 21 years. However, it could be maintained by being constantly fed mass at a rate of about 0.33 kg/s.
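The corrected figures for the 2e8 kg case follow directly from equations (1) and (2), taking the emitted power as P = mc^2/tev. A sketch, with constants as above:

```python
import math

T_PLANCK, M_PLANCK, C = 5.39e-44, 2.18e-8, 3e8

def t_ev(m):
    """Evaporation time in seconds -- equation (1)."""
    return 5120 * math.pi * T_PLANCK * (m / M_PLANCK) ** 3

def power(m):
    """Emitted power in watts, taken as P = m c^2 / t_ev."""
    return m * C**2 / t_ev(m)

m_bh = 2e8                           # singularity mass, kg
P = power(m_bh)                      # ~2.7e16 W, i.e. ~27,000 TW
thrust = 2 * P / C                   # photon thrust, equation (2) with v = c
accel = thrust / (1e8 + m_bh)        # 1e8 kg ship plus the singularity: ~0.6 m/s^2
feed_rate = P / C**2                 # mass feed to hold the singularity steady, ~0.3 kg/s
lifetime_years = t_ev(m_bh) / 3.15e7 # ~21 years if left unfed
```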

A bigger problem is that a 1e9 kg singularity would produce radiation with a characteristic temperature of 9 GeV, increasing in inverse proportion to the singularity mass. So for example a 1e8 kg singularity would produce gamma rays with energies of 90 GeV (i.e., for temperature T in electron volts, T = 9e18/m). There is no known way to reflect such high energy photons. So at this point the parabolic reflector required for the black hole starship photon engine is science fiction.

Yet another problem is the manufacture of the black hole. Crane and Westmoreland suggest that it could be done using converging gamma ray lasers. To make a 1e9 kg unit, they suggested a “high-efficiency square solar panel a few hundred km on each side, in a circular orbit about the sun at a distance of 1,000,000 km” to provide the necessary energy. A rough calculation indicates the implied power of this system from this specification is on the order of 10^6 TW, or about 100,000 times the current rate used by human civilization. As an alternative construction technique, they also suggest accelerating large masses to relativistic velocities and then colliding them. The density of these masses would be multiplied both by relativistic mass increase and length contraction. However the energy required to do this would still equal the combined masses times the speed of light squared. While this technique would eliminate the need for giant gamma ray lasers, the same huge power requirement would still present itself.

In what follows, we will examine possible solutions for the above identified problems.

Advanced Singularity Engines

In MKS units, equation (1) can be rewritten as:

tev = 8.37e-17 m^3 (3)

This implies that the power, P, in Watts, emitted by the singularity is given by:

P = 1.08e33/m^2 (4)

The results of these two equations are shown in Fig. 1.

Fig 1. Power and Lifetime of ASP Engines as a Function of Singularity Mass
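In lieu of the figure, the same two curves can be tabulated numerically. In the sketch below the coefficient of equation (3) is derived from the Planck quantities in equation (1) rather than quoted:

```python
import math

T_PLANCK, M_PLANCK, C = 5.39e-44, 2.18e-8, 3e8
K = 5120 * math.pi * T_PLANCK / M_PLANCK**3   # coefficient of equation (3), ~8.4e-17

for m in (1e8, 1e9, 1e10, 1e11):
    t = K * m**3                    # lifetime in seconds -- equation (3)
    p = m * C**2 / t                # power in watts      -- equation (4)
    print(f"m = {m:.0e} kg:  P = {p:.2e} W,  lifetime = {t / 3.15e7:.2e} yr")
```

Note that a 1e8 kg singularity comes out at about 1.08e17 W with a 2.65-year lifetime, matching the mini-sun example discussed below.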

No credible concept is available to enable a lightweight parabolic reflector of the sort needed to enable the Black Hole Starship. But we can propose a powerful and potentially very useful system by dropping the requirement for starship-relevant thrust to weight ratios. Instead let us consider the use of ASP engines to create an artificial sun.

Consider a 1e8 kg ASP engine. As shown in Fig 1, it would produce a power of 1.08e8 Gigawatts. Such an engine, if left alone, would only have a lifetime of 2.65 years, but it could be maintained by a constant feed of about 3 kg/s of mass. We can’t reflect its radiation, but we can absorb it with a sufficiently thick material screen. So let’s surround it with a spherical shell of graphite with a radius of 40 km and a thickness of 1.5 m. At a distance of 40 km, the intensity of the radiation will be about 5 MW/m^2, which the graphite sphere can radiate into space with a black body temperature of 3000 K. This is about the same temperature as the surface of a type M red dwarf star. We estimate that graphite has an attenuation length for high energy gamma rays of about 15 cm, so that 1.5 m of graphite (equivalent shielding to 5 m of water or half the Earth’s atmosphere) will attenuate the gamma radiation by ten factors of e, or about 20,000. The light will then radiate out further, dropping in intensity with the square of the distance, reaching typical Earth sunlight intensities of 1 kW/m^2 at a distance of about 3000 km from the center.
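The two radii quoted here follow from simple inverse-square flux arithmetic, assuming the 1.08e8 GW output radiates isotropically:

```python
import math

P = 1.08e17   # output of a 1e8 kg ASP engine, W

def radius_for_flux(power_w, flux_w_m2):
    """Distance at which an isotropic source's flux falls to the given level."""
    return math.sqrt(power_w / (4 * math.pi * flux_w_m2))

r_shell = radius_for_flux(P, 5e6)      # 5 MW/m^2 -> ~40 km (graphite at ~3000 K)
r_earth_sun = radius_for_flux(P, 1e3)  # 1 kW/m^2 -> ~3000 km (Earth-like sunlight)
```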

The mass of the artificial star will be about 10^14 kg (that’s the mass of the graphite shell, compared to which the singularity is insignificant). As large as this is, however, it is still tiny compared to that of a planet, or even the Earth’s Moon (which is 7.35e22 kg). So, no planet would orbit such a little star. Instead, if we wanted to terraform a cold world, we would put the mini-star in orbit around it.

The preferred orbital altitude of about 3000 km for the ASP mini-star in the above example was dictated by the power level of the singularity. Such a unit would be sufficient to provide all the light and heat necessary to terraform an otherwise sunless planet the size of Mars. Lower power units incorporating larger singularities but much smaller graphite shells are also feasible. (Shell mass is proportional to system power.) These are illustrated in Table 1.

The high-powered units listed in Table 1 with singularity masses in the 1e8 to 1e9 kg range are suitable to serve as mini-suns orbiting planets, moons or asteroids, with the characteristic radius of such terraforming candidates being about the same as the indicated orbital altitude. The larger units, with lower power and singularity masses above 1e10 kg are more appropriate for space colonies.

Consider an ASP mini-sun with a singularity mass of 3.16e10 kg positioned in the center of a cylinder with a radius of 10 km and a length of 20 km. The cylinder is rotating at a rate of 0.0316 radians per second, which provides it with 1 g of artificial gravity. Let’s say the cylinder is made of material with an areal density of 1000 kg per square meter. In this case it will experience an outward pressure of 10^4 pascals, or about 1.47 psi, due to outward acceleration. If the cylinder were made of solid Kevlar (density = 1000 kg/m^3) it would be about 1 m thick. So the hoop stress on it would be 1.47*(10,000)/1 = 14,700 psi, which is less than a tenth the yield stress of Kevlar. Or put another way, 10 cm of Kevlar would do the job of carrying the hoop stress, and the rest of the mass load could be anything, including habitations. If the whole interior of the cylinder were covered with photovoltaic panels with an efficiency of 10 percent, 100 GWe of power would be available for use of the inhabitants of the space colony, which would have an area of 1,256 square kilometers. The mini-sun powering it would have a lifetime of 84 million years, without refueling. Much larger space colonies (i.e., with radii over ~100 km) would not be possible, however, unless stronger materials become available, as the hoop stress would become too great.
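The colony numbers can be reproduced step by step. A sketch, assuming 9.8 m/s^2 for 1 g and 6895 Pa per psi:

```python
import math

R, L = 10e3, 20e3                      # cylinder radius and length, m
omega = math.sqrt(9.8 / R)             # spin for 1 g at the rim, ~0.0316 rad/s
areal_density = 1000                   # hull mass per unit area, kg/m^2
p_out = areal_density * omega**2 * R   # outward pressure from rotation, ~1e4 Pa
p_psi = p_out / 6895                   # ~1.45 psi
hoop_psi = p_psi * R / 1.0             # thin-shell hoop stress p*r/t, 1 m wall
area_km2 = 2 * math.pi * R * L / 1e6   # interior area, ~1256 km^2
P_sing = 1.08e33 / (3.16e10) ** 2      # equation (4) for the 3.16e10 kg singularity, W
P_elec_GWe = 0.10 * P_sing / 1e9       # 10% efficient photovoltaics, ~100 GWe
```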

Both of these approaches seem potentially viable in principle. However, we note that the space colony approach cited requires a singularity some 300 times more massive than the approach of putting a 1e8 kg mini-sun in orbit around a planet, which yields 4π(3000)^2 ≈ 100 million square kilometers of habitable area, or about 80,000 times as much land. Furthermore, the planet comes with vast supplies of matter of every type, whereas the space colony needs to import everything.

Building Singularities

Reducing the size of the required singularity by a factor of 10 from 1e9 to 1e8 kg improves feasibility of the ASP concept somewhat, but we need to do much better. Fortunately there is a way to do so.

If we examine equation (3), we can see that the expected lifetime of a 1000 kg singularity would be about 8.37e-8 s. In this amount of time, light can travel about 25 m, and an object traveling at half the speed of light about 12.5 m. A sphere of that radius filled with steel would contain several times 10^7 kg, comparable to the 1e8 kg we need for our ASP singularity. In fact, it turns out that if the initial singularity is as small as about 200 kg and fired into a mass of steel, it will gain mass much faster than it loses it, and eventually grow into a singularity as massive as the steel provided.

By using this technique we can reduce the amount of energy required to form the required singularity by about 7 orders of magnitude compared to Crane and Westmoreland’s estimate. So instead of needing a 10^6 TW system, a 100 GW gamma ray laser array might do the trick. Alternatively, accelerating two 200 kg masses to near light speed would require 3.6e7 TJ, or 10,000 TW-hours of energy. This is about the energy humanity currently uses in 20 days. We still don’t know how to do it, but reducing the scale of the required operation by a factor of 10 million certainly helps.
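The energy budget for the collision route works out as follows. A sketch; the ~18 TW figure for current civilization power is an assumed round value:

```python
C = 3e8                              # speed of light, m/s
seed_mass = 200                      # kg, each of the two colliding masses
E = 2 * seed_mass * C**2             # rest-mass energy of both projectiles, J
E_TJ = E / 1e12                      # ~3.6e7 TJ
E_TWh = E / 3.6e15                   # ~10,000 TW-hours
days_of_civilization = E / (18e12 * 86400)   # at ~18 TW -> about 20 days
```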

ASP Starships

We now return to the subject of ASP starships. In the absence of a gamma ray reflector, we are left with using solid material to absorb the gamma rays and other energetic particles and re-radiate their energy as heat. (Using magnetic fields to try to contain and reflect GeV-class charged particles that form a portion of the Hawking radiation won’t work because the required fields would be too strong and too extensive, and the magnets to generate them would be exposed to massive heating by gamma radiation.)

Fortunately, we don’t need to absorb all the radiation in the absorber/reflector, we only need to absorb enough to get it hot. So let’s say that we position a graphite hemispherical screen to one side of a 1e8 kg ASP singularity, but instead of making it 1.5 m thick, we make it 0.75 mm thick. At that thickness it will only absorb about 5 percent of the radiation that hits it; the rest will pass right through. So we have 5e6 GW of useful energy, which we want to reduce to 5 MW/m^2 in order for the graphite to be kept at ~3000 K, where it can survive. The radius will be about 9 km, and the mass of the graphite hemisphere will be about 6e8 kg. A thin, solar-sail-like parabolic reflector with an area 50 times as great as the carbon hemisphere but a thickness 1/500th as great (i.e. 1.5 microns) would be positioned in front of the hemisphere, adding another 0.6e8 kg to the system, which with the singularity and the 1e8 kg ship might total 7.6e8 kg in all. Thrust will be 0.67e8 N, so the ship would accelerate at a rate of 0.67/7.6 = 0.09 m/s^2, allowing it to reach 10 percent of the speed of light in about 11 years.

Going much faster would become increasingly difficult, because using only 5% of the energy of the singularity mass would give the system an effective exhaust velocity of about 0.22 c. Higher efficiencies might be possible if a significant fraction of the Hawking radiation came off as charged particles, allowing a thin thermal screen to capture a larger fraction of the total available energy. In this case, effective exhaust velocity would go as c times the square root of the achieved energy efficiency. But sticking with our 5% efficiency, if we wanted to reach 0.22 c we could, but we would require a mass ratio of 2.7, meaning we would need about 1.5e9 kg of propellant to feed into the ASP engine, whose mass would decrease our average acceleration by about a factor of two over the burn, meaning we would take about 40 years to reach 20 percent the speed of light.
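The 5 percent efficiency figure and the mass ratio of 2.7 are linked through the rocket equation. A sketch:

```python
import math

C = 3e8
efficiency = 0.05                           # fraction of fed mass-energy usefully absorbed
v_exhaust = C * math.sqrt(efficiency)       # effective exhaust velocity, ~0.22 c
delta_v = 0.22 * C                          # target cruise velocity
mass_ratio = math.exp(delta_v / v_exhaust)  # Tsiolkovsky rocket equation: ~e^1 = 2.7
propellant = (mass_ratio - 1) * 7.6e8       # for the 7.6e8 kg ship above, roughly
                                            # the ~1.5e9 kg quoted in the text
```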

Detecting ET

The above analysis suggests that if ASP technology is possible, using it to terraform cold planets with orbital mini-suns will be the preferred approach. Orbiting (possibly isolated) cold worlds at altitudes of thousands of kilometers, and possessing the 3000 K spectra of type M red dwarf stars, potentially with gamma radiation in excess of normal stellar expectations, such objects could well be detectable.

Indeed, one of the primary reasons to speculate on the design of ASP engines right now is to try to identify their likely signature. We are far away from being able to build such things. But the human race is only a few hundred thousand years old, and human civilization just a few thousand. In 1905 the revolutionary HMS Dreadnought was launched, displacing 18,000 tons. Today ships 5 times that size are common. So it is hardly unthinkable that in a century or two we will have spacecraft in the million ton (10^9 kg) class. Advanced extraterrestrial civilizations may have reached our current technological level millions or even billions of years ago. So they have had plenty of time to develop every conceivable technology. If we can think it, they can build it, and if doing so would offer them major advantages, they probably have. Thus, looking for large energetic artifacts such as Dyson Spheres [6], starships [7,8], or terraformed planets [9] is potentially a promising way to carry out the SETI search, as unlike radio SETI, it requires no mutual understanding of communication conventions. Given the capabilities ASP technology would offer any species seeking to expand its prospects by illuminating and terraforming numerous new worlds, such systems may actually be quite common.

ASP starships are also feasible and might be detectable as well. However the durations of starship flights would be measured in decades or centuries, while terraformed worlds could be perpetual. Furthermore, once settled, trade between solar systems could much more readily be accomplished by the exchange of intellectual property via radio than by physical transport. As a result, the amount of flight traffic will be limited. In addition, there could be opportunities for employment of many ASP terraforming engines within a single solar system. For example, within our own solar system there are seven worlds of planetary size (Mars, Ceres, Ganymede, Callisto, Titan, Triton, and Pluto) whose terraforming could be enhanced or enabled by ASP systems, not to mention hundreds of smaller but still considerable moons and asteroids, and potentially thousands of artificial space colonies as well. Therefore the number of ASP terraforming engines in operation in the universe at any one time most likely far exceeds those being used for starship propulsion. It would therefore appear advantageous to focus the ASP SETI search effort on such systems.

Proxima Centauri is a type M red dwarf with a surface temperature of 3000 K. It therefore has a black body spectrum similar to that of the 3000 K graphite shell of our proposed ASP mini-sun discussed above. The difference however is that it has about 1 million times the power, so that an ASP engine placed 4.2 light years (Proxima Centauri’s distance) from Earth would have the same visual brightness as a star like Proxima Centauri positioned 4,200 light years away. Put another way, Proxima Centauri has a visual magnitude of 11. A difference of 5 magnitudes corresponds to a factor of 100 in brightness, so our ASP engine would have a visual magnitude of 26 at 4.2 light years, and magnitude 31 at 42 light years. The limit of optical detection of the Hubble Space Telescope is magnitude 31. So HST would be able to see our proposed ASP engine out to a distance of about 50 light years, within which there are some 1,500 stellar systems.
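The magnitude arithmetic is simple enough to spell out, using the standard 2.5·log10 magnitude scale:

```python
import math

mag_proxima = 11          # visual magnitude of Proxima Centauri at 4.2 ly
power_ratio = 1e6         # Proxima's output vs. the 3000 K ASP mini-sun

dm = 2.5 * math.log10(power_ratio)   # 15 magnitudes fainter at equal distance
mag_asp_4ly = mag_proxima + dm       # ~26 at 4.2 light years
mag_asp_42ly = mag_asp_4ly + 5       # 10x the distance adds 5 magnitudes -> ~31
# Hubble's optical limit is ~31, hence a detection range of roughly 50 light years.
```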

Consequently ASP engines may already have been imaged by Hubble, appearing on photographs as unremarkable dim objects assumed to be far away. These should be subjected to study to see if any of them exhibit parallax. If they do, this would show that they are actually nearby objects of much lower power than stars. Further evidence of artificial origin could be provided if they were found to exhibit a periodic Doppler shift, as would occur if they were in orbit around a planetary body. An anomalous gamma ray signature could be present as well.

I suggest we have a look.

Cosmological Implications

One of the great mysteries of science is why the laws of the universe are so friendly to life. Indeed, it can readily be shown that if any one of the twenty or so apparently arbitrary fundamental constants of nature differed from its actual value by even a small amount, life would be impossible [9]. Some have attempted to answer this conundrum by claiming that there is nothing to be explained because there are an infinite number of universes; we just happen to live in the odd one where life is possible. This multiverse answer is absurd, as it could just as well be used to avoid explaining anything. For example, take the questions: why did the Titanic sink/it snow heavily last winter/the sun rise this morning/the moon form/the chicken cross the road? These can all be answered by saying “no reason, in other universes they didn’t.” The Anthropic Principle reply, to the effect of “clearly they had to, or you wouldn’t be asking the question,” is equally useless.

Clearly a better explanation is required. One attempt at such an actual causal theory was put forth circa 1992 by physicist Lee Smolin [10], who says that daughter universes are formed by black holes created within mother universes. This has a ring of truth to it, because a universe, like a black hole, is something that you can’t leave. Well, says Smolin, in that case, since black holes are formed from collapsed stars, the universes that have the most stars will have the most progeny. So to have progeny a universe must have physical laws that allow for the creation of stars. This would narrow the permissible range of the fundamental constants by quite a bit. Furthermore, let’s say that daughter universes have physical laws that are close to, but slightly varied from that of their mother universes. In that case, a kind of statistical natural selection would occur, overwhelmingly favoring the prevalence of star-friendly physical laws as one generation of universes follows another.

But the laws of the universe don’t merely favor stars, they favor life, which certainly requires stars, but also planets, water, organic and redox chemistry, and a whole lot more. Smolin’s theory gets us physical laws friendly to stars. How do we get to life?

Reviewing an early draft of Smolin’s book in 1994, Crane offered the suggestion [11] that if advanced civilizations make black holes, they also make universes, and therefore universes that create advanced civilizations would have much more progeny than those that merely make stars. Thus the black hole origin theory would explain why the laws of the universe are friendly not only to life, but to the development of intelligence and advanced technology as well. Universes create life because life creates universes. This result is consistent with complexity theory, which holds that if A is necessary to B, then B has a role in causing A.

These are very interesting speculations. So let us ask, what would we see if our universe was created as a Smolin black hole, and how might we differentiate between a natural star collapse or ASP engine origin? From the above discussion, it should be clear that if someone created an ASP engine, it would be advantageous for them to initially create a small singularity, then grow it to its design size by adding mass at a faster rate than it evaporates, and then, once it reaches its design size, maintain it by continuing to add mass at a constant rate equal to the evaporation rate. In contrast, if it were formed via the natural collapse of a star it would start out with a given amount of mass that would remain fixed thereafter.

So let’s say our universe is, as Smolin says, a black hole. Available astronomical observations show that it is expanding, at a velocity that appears to be close to the speed of light. Certainly the observable universe is expanding at the speed of light.

Now a black hole has an escape velocity equal to the speed of light. So for such a universe

c^2/2 = GM/R (5)

where G is the universal gravitational constant, c is the speed of light in vacuum, M is the mass of the universe, and R is the radius of the universe.

If we assume that G and c are constant, R is expanding at the speed of light, and τ is the age of the universe, then:

R = cτ (6)

Combining (5) and (6), we have:

M/τ = (Rc^2/2G)(c/R) = c^3/2G (7)

This implies that the mass of such a universe would be growing at a constant rate. Contrary to the classic Hoyle continuous creation theory, however, which postulated that mass creation would lead to a steady state universe featuring constant density for all eternity, this universe would have a big bang event with density decreasing afterwards inversely with the square of time.

Now the Planck mass, mp, is given by:

mp = (hc/2πG)^(1/2) (8)

And the Planck time, tp, is given by:

tp = (hG/2πc^5)^(1/2) (9)

If we divide equation (8) by equation (9) we find:

mp/tp = c^3/G (10)

If we compare equation (10) to equation (7) we see that:

M/τ = ½(mp/tp) (11)

So the rate at which the mass of such a universe would increase equals exactly ½ Planck mass per Planck time.

Comparison with Observational Astronomy

In MKS units, G = 6.674e-11, c= 3e+8, so:

M/τ = c^3/2G = 2.02277e+35 kg/s (12)

For comparison, the mass of the Sun is 1.989e+30 kg. So this is saying that the mass of the universe would be increasing at a rate of about 100,000 Suns per second.

Our universe is believed to be about 13 billion years, or 4e+17 seconds old. The Milky Way galaxy has a mass of about 1 trillion Suns. So this is saying that the mass of the universe should be about 40 billion Milky Way galaxies. Astronomers estimate that there are 100 to 200 billion galaxies, but most are smaller than the Milky Way. So this number is in general agreement with what we see.

According to this estimate, the total mass of the universe M, is given by:

M = (2e+35 kg/s)(4e+17 s) = 8e+52 kg (13)

This number is well known. It is the critical mass required to make our universe “flat.” It should be clear, however, that when the universe was half as old, with half its current diameter, this number would have needed to be half as great. Therefore, if the criterion is that such a universe’s mass always be critical for flatness, and not just critical right now, then its mass must be increasing linearly with time.
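The arithmetic behind equations (12) and (13) can be reproduced in a few lines; a sketch, using the same rounded values for c, G, the solar mass, and the age of the universe that appear above:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8          # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
tau = 4.0e17       # age of the universe, s (~13 billion years)

rate = c**3 / (2 * G)            # equation (12): ~2.02e+35 kg/s
suns_per_second = rate / M_sun   # ~100,000 Suns of mass per second

M_universe = rate * tau          # equation (13): ~8e+52 kg

# ~40 billion Milky Way equivalents, at 1e12 solar masses apiece
milky_ways = M_universe / (1e12 * M_sun)
print(rate, suns_per_second, M_universe, milky_ways)
```

The result, about 4e+10 Milky Way equivalents, is the “40 billion galaxies” figure quoted above.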

These are very curious results. Black holes, the expanding universe, and the constancy of the speed of light are results of relativity theory. Planck masses and Planck times relate to quantum mechanics. Observational astronomy provides data from telescopes. It is striking that these three separate approaches to knowledge should provide convergent results.

This analysis does require that mass be continually added to the universe at a constant rate, exactly as would occur in the case of an ASP engine during steady-state operation. It differs however in that in an ASP engine, the total mass only increases during the singularity’s buildup period. During steady state operation mass addition would be balanced by mass evaporation. How these processes would appear to the inhabitants of an ASP universe is unclear. Also unclear is how the inhabitants of any Smolinian black hole universe could perceive it as rapidly expanding. Perhaps the distance, mass, time, and other metrics inside a black hole universe could be very different from those of its parent universe, allowing it to appear vast and expanding to its inhabitants while looking small and finite to outside observers. One possibility is that space inside a black hole is transformed, in a three dimensional manner analogous to a ω = 1/z transformation in the complex plane, so that the point at the center becomes a sphere at infinity. In this case mass coming into the singularity universe from its perimeter would appear to the singularity’s inhabitants as matter/energy radiating outward from its center.

Is there a model that can reconcile all the observations of modern astronomy with those that would be obtained by observers inside either a natural black hole or ASP universe? Speculation on this matter by scientists and science fiction writers with the required physics background would be welcome [13].

Conclusions

We find that ASP engines appear to be theoretically possible, and could offer great benefits to advanced spacefaring civilizations. Particularly interesting is their potential use as artificial suns to enable terraforming of unlimited numbers of cold worlds. ASP engines could also be used to enable interstellar colonization missions. However, the number of ASP terraforming engines in operation in the universe at any one time most likely far exceeds the number being used for starship propulsion. Such engines would have optical signatures similar to M-dwarfs, but would be much smaller in power than any natural M star, and hence would have to be much closer to exhibit the same apparent luminosity. In addition, they would move in orbit around a planetary body, thereby displaying a periodic Doppler shift, and could have an anomalous additional gamma-ray component in their spectra. An ASP engine of the type discussed would be detectable by the Hubble Space Telescope at distances of as much as 50 light years, within which there are approximately 1,500 stellar systems. Their images may therefore already be present in libraries of telescopic images as unremarkable dim objects, whose artificial nature would be indicated if they were found to display parallax. It is therefore recommended that such a study be implemented.

As for cosmological implications, the combination of the attractiveness of ASP engines with Smolinian natural selection theory does provide a potential causal mechanism that could explain the fine tuning of the universe for life. Whether our own universe could have been created in such a manner remains a subject for further investigation.

References

1. Hawking, S. W. (1974). “Black hole explosions?” Nature 248(5443): 30–31. https://ui.adsabs.harvard.edu/abs/1974Natur.248...30H/abstract

2. Hawking Radiation, Wikipedia https://en.wikipedia.org/wiki/Hawking_radiation accessed September 22, 2019.

3. Arthur C. Clarke, Imperial Earth, Harcourt Brace and Jovanovich, New York, 1976.

4. Charles Sheffield, “Killing Vector,” in Galaxy, March 1978.

5. Louis Crane and Shawn Westmoreland, “Are Black Hole Starships Possible?” 2009, revised 2019. https://arxiv.org/pdf/0908.1803.pdf accessed September 24, 2019.

6. Freeman Dyson, “The Search for Extraterrestrial Technology,” in Selected Papers of Freeman Dyson with Commentary, Providence, American Mathematical Society, pp. 557-571, 1996.

7. Robert Zubrin, “Detection of Extraterrestrial Civilizations via the Spectral Signature of Advanced Interstellar Spacecraft,” in Progress in the Search for Extraterrestrial Life: Proceedings of the 1993 Bioastronomy Symposium, Santa Cruz, CA, August 16-20 1993.

8. Louis Crane, “Searching for Extraterrestrial Civilizations Using Gamma Ray Telescopes,” available at https://arxiv.org/abs/1902.09985.

9. Robert Zubrin, The Case for Space: How the Revolution in Spaceflight Opens Up a Future of Limitless Possibility, Prometheus Books, Amherst, NY, 2019.

10. Paul Davies, The Accidental Universe, Cambridge University Press, Cambridge, 1982.

11. Lee Smolin, The Life of the Cosmos, Oxford University Press, NY, 1997.

12. Louis Crane, “Possible Implications of the Quantum Theory of Gravity: An Introduction to the Meduso-Anthropic principle,” 1994. https://arxiv.org/PS_cache/hep-th/pdf/9402/9402104v1.pdf

13. I provided a lighthearted explanation in my science fiction satire The Holy Land (Polaris Books, 2003), where the advanced extraterrestrial priestess (3rd Class) Aurora mocks the theory of the expanding universe held by the Earthling Hamilton. “Don’t be ridiculous. The universe isn’t expanding. That’s obviously physically impossible. It only appears to be expanding because everything in it is shrinking. What silly ideas you Earthlings have.” In a more serious vein, the late physicist Robert Forward worked out what life might be like on a neutron star in his extraordinary novel Dragon’s Egg (Ballantine Books, 1980). A similar effort to describe life on the inside of a black hole universe could be well worthwhile. Any takers?

Robert Zubrin
Pioneer Astronautics
11111 W. 8th Ave, unit A
Lakewood, CO 80215
Zubrin@aol.com


Remembering Alexei Leonov (1934-2019)

The Russian space agency Roscosmos, as most of you know, has announced the death of cosmonaut Alexei Leonov, who died last Friday at Moscow’s Burdenko Hospital following a long illness. He was 85.

If handling stress under extreme conditions is a prerequisite for someone who is going to the Moon, Leonov had already proven his mettle when the Soviet Union chose him as the man to pilot its lunar lander to the surface. The failure of the N-1 rocket put an end to that plan, but Leonov will always be associated with the 1965 mission aboard Voskhod 2 shared with Pavel Belyayev. This was the spacewalk mission, conducted successfully before NASA could manage the feat 10 weeks later.

Image: A man and his art. Alexei Leonov was as attracted to drawing and painting as he was to flying, creating some work while in orbit. Credit: Roscosmos.

The problems Leonov had with his bulky spacesuit as it ballooned out of shape are widely known, making his re-entry into the capsule a dicey affair, though one he managed with skilful use of a bleed valve in the suit’s lining. But consider everything else that went wrong: The charge fired to remove the airlock caused Voskhod 2 to tumble, while oxygen levels in the cabin began to rise and took several hours to correct.

Add to this the failure of the automated re-entry system, and the tangled separation of the orbital module from the landing capsule, caused by a communications cable that had failed to separate. It was the heat of re-entry itself that finally burned through the cable and allowed the landing module to bring both cosmonauts home. Then they landed so far downrange from their intended target that two days passed before they could be reached in a taiga forest in the Ural Mountains.

Image: Alexei Leonov in 1965. Credit: Associated Press.

This was indeed the kind of man the Soviet Union would have liked to place on the Moon ahead of the Apollo astronauts, but it was not to be. Leonov, who went on to be given the title of Chief Cosmonaut, is also remembered for the Soyuz he piloted and docked with an Apollo capsule in 1975 in the Apollo Soyuz Test Project, which was the first mission conducted jointly between the US and the Soviet Union. A self-taught artist, Leonov created what as far as I know is humanity’s first artwork in space when he did a colored pencil drawing of a sunrise as seen from Voskhod 2.

Stanley Kubrick’s 1968 film 2001: A Space Odyssey included recordings of Leonov’s breathing in space, a great move by a great director, but the later 2010 (directed not by Kubrick but Peter Hyams) would provoke an official response. Twice made Hero of the Soviet Union, but later suspected of disloyalty to the government, Leonov had been brought before the Central Committee of the Communist Party to explain the fact that the spaceship named ‘Leonov’ in the film (nice touch by Clarke) had been filled with Soviet dissidents. Leonov is said to have responded that the committee “was not worth the nail on Arthur C. Clarke’s little finger.”

A quote from Leonov’s Two Sides of the Moon: Our Story of the Cold War Space Race (Thomas Dunne Books, 2004), written with NASA astronaut David Scott, may be the best way to end as we consider how close this man came to being first on the Moon. The time is early 1966, and Soviet Luna probes had completed a series of missions in lunar orbit. Leonov recalls:

I was undergoing intensive training for a lunar mission by this time. In order to focus attention and resources our cosmonaut corps had been divided into two groups. One group, which included Yuri Gagarin and Vladimir Komarov, was training to fly our latest spacecraft — Soyuz — in Earth orbit…

The second group, of which I was commander, was training for circumlunar missions in a modified version of Soyuz known as the L-1, or Zond, and also for lunar-landing missions in another modified Soyuz known as the L-3. Vasily Mishin’s cautious plan called for three circumlunar missions to be carried out with three different two-man crews, one of which would then be chosen to make the first lunar landing.

The initial plan was for me to command the first circumlunar mission, together with Oleg Makarov, in June or July 1967. We then expected to be able to accomplish the first Moon landing — ahead of the Americans — in September 1968.

Alternate histories are fascinating to consider. But no matter which timeline we dwell in, Alexei Leonov was a courageous, generous man who carried the spirit of that insanely adventurous era. It’s a spirit we can strive to recover.


Voyager: Pressure at the Edge of the System

One of these days we’ll have a spacecraft on a dedicated mission into the interstellar medium, carrying an instrument package explicitly designed to study what lies beyond the heliosphere. For now, of course, we rely on the Voyagers, both of which move through this realm, with Voyager 1 having exited the heliosphere in August of 2012 and Voyager 2, on a much different trajectory, making the crossing in late 2018. Data from both spacecraft are filling in our knowledge of the heliosheath, where the solar wind is roiled by the interstellar medium.

A new study of this transitional region has just appeared, led by Jamie Rankin (Princeton University), using comparative data from the time when Voyager 2 was still in the heliosheath and Voyager 1 had already moved into interstellar space. As the solar wind leaves the heliosheath, its pressure is increasingly shaped by particles from other stars, and the magnetic influence of our own star effectively ends. What the scientists found is that the combined pressure of plasma, magnetic fields, ions, electrons and cosmic rays is greater than expected at the boundary.

“In adding up the pieces known from previous studies, we found our new value is still larger than what’s been measured so far,” said Rankin. “It says that there are some other parts to the pressure that aren’t being considered right now that could contribute.”

Image: This is an illustration depicting the layers of the heliosphere. Credit: NASA/IBEX/Adler Planetarium.

Thus the Voyager data continue to be robust, giving us a look into a dynamic and turbulent region through which future missions will have to pass. The particular area that the study’s authors focused on is called a global merged interaction region, a wave of outrushing plasma produced by bursts of particles from the Sun in events like coronal mass ejections. Such an event is visible in Voyager 2 data from 2012, causing a decrease in the number of galactic cosmic rays, one that Voyager 1 would go on to detect four months later.

Traveling at nearly the speed of light, galactic cosmic rays are atomic nuclei from which all of the surrounding electrons have been stripped away. The difference between how this change in their numbers was detected by the two spacecraft is instructive. Still within the heliosheath at the time, Voyager 2 saw a decrease of galactic cosmic rays in all directions around the spacecraft, whereas at Voyager 1’s vantage beyond the heliosphere, only those galactic cosmic rays traveling perpendicular to the magnetic fields in the region decreased.

This intriguing asymmetry flags the crossing of the heliosheath, though the study’s authors are quick to point out that why this directional change in cosmic rays occurs remains unknown. They were able to calculate the larger-than-expected total pressure in the heliosheath, and to determine that the speed of sound there is roughly 300 kilometers per second (remember that the speed of sound in any medium is simply the speed at which disturbances in pressure propagate, in this case the result of interactions in the solar wind).

Image: The Voyager spacecraft, one in the heliosheath and the other just beyond in interstellar space, took measurements as a solar event known as a global merged interaction region passed by each spacecraft four months apart. These measurements allowed scientists to calculate the total pressure in the heliosheath, as well as the speed of sound in the region. Credit: NASA’s Goddard Space Flight Center/Mary Pat Hrybyk-Keith.

“There was really unique timing for this event because we saw it right after Voyager 1 crossed into the local interstellar space,” Rankin said. “And while this is the first event that Voyager saw, there are more in the data that we can continue to look at to see how things in the heliosheath and interstellar space are changing over time.”

The paper is Rankin et al., “Heliosheath Properties Measured from a Voyager 2 to Voyager 1 Transient,” Astrophysical Journal Vol. 883, No. 1 (25 September 2019). Abstract.


Enceladus: New Organic Compounds via Cassini Data

While I’m working on the project I discussed the other day, I’m trying to keep my hand in with the occasional article here, looking forward to when I can get back to a more regular schedule. Things are going to remain sporadic for a bit longer this month, and then again in mid-November, but I’ll do my best to follow events and report in when I can. I did want to take the opportunity to use an all too brief break to get to the Enceladus news, which has been receiving attention from the space media and, to an extent, the more general outlets.

We always track Enceladus news with interest given those remarkable geysers associated with its south pole, and now we return to the Cassini data pool, which should be producing robust research papers for many years. In this case, Nozair Khawaja (University of Berlin) and colleagues have tapped data from the spacecraft’s Cosmic Dust Analyzer (CDA) to study the ice grains Enceladus emits into Saturn’s E ring, finding nitrogen- and oxygen-bearing compounds. These are similar to compounds found on Earth that can produce amino acids. Says Khawaja:

“If the conditions are right, these molecules coming from the deep ocean of Enceladus could be on the same reaction pathway as we see here on Earth. We don’t yet know if amino acids are needed for life beyond Earth, but finding the molecules that form amino acids is an important piece of the puzzle.”

Image: This illustration shows how newly discovered organic compounds — the ingredients of amino acids — were detected by NASA’s Cassini spacecraft in the ice grains emitted from Saturn’s moon Enceladus. Powerful hydrothermal vents eject material from Enceladus’ core into the moon’s massive subsurface ocean. After mixing with the water, the material is released into space as water vapor and ice grains. Condensed onto the ice grains are nitrogen- and oxygen-bearing organic compounds. Credit: NASA/JPL-Caltech.

So let’s clarify the process. What the Cosmic Dust Analyzer is looking at appears to be organics that would have been dissolved in the ocean beneath Enceladus’ surface. These would have evaporated from the ocean and then condensed, freezing on ice grains inside fractures in the crust. Rising plumes would have accounted for these materials being blown into space.

We begin to get a window into what might be produced within the ocean, though the view is preliminary. In the excerpt below, note that the scientists classify various types of ice grains on Enceladus according to a taxonomy: Type 1 represents grains of almost pure water ice, Type 2 shows features consistent with grains containing significant amounts of organic material, and Type 3 is indicative of salt-rich water ice grains. The study homes in on Type 2:

It is highly likely that there are many more dissolved organic compounds in the Enceladean ocean than reported here… In this investigation of Type 2 grains, the initial constraints, in particular the choice of salt-poor spectra, favoured the identification of compounds with high vapour pressures. Despite the expected solubility of potential synthesized intermediate- or high-mass compounds, their low vapour pressures mean that they will not efficiently evaporate at the water surface and thus remain undetectable not only in the vapour, but also those Type 2 grains forming from it. Potential soluble biosignatures with higher masses might therefore be found in spectra from Type 3 grains, which are thought to form from oceanic spray (Postberg et al. 2009a, 2011). Finding and identifying such biosignatures will be the main goal of future work.

Image: With Enceladus nearly in front of the Sun from Cassini’s viewpoint, its icy jets become clearly visible against the background. The view here is roughly perpendicular to the direction of the linear “tiger stripe” fractures, or sulci, from which the jets emanate. The jets here provide the extra glow at the bottom of the moon. The general brightness of the sky around the moon is the diffuse glow of Saturn’s E ring, which is an end product of the jets’ material being spread into a torus, or doughnut shape, around Saturn. North on Enceladus (505 kilometers, or 314 miles across) is up and rotated 20 degrees to the left. Credit: NASA/JPL/Space Science Institute.

The researchers believe that similarities between the hydrothermal environment found on Enceladus and what we see on Earth prioritizes the exploration of the Saturnian moon for life. After all, we know of many places on our planet where life develops without sunlight, with the vents supplying the energy that fuels reactions leading to the production of amino acids. Despite the remarkable strides made by Cassini, its Cosmic Dust Analyzer was not, the authors say, designed for deep probing of this question. That makes high-resolution mass spectrometers a key component of any dedicated mission designed to explore the organic chemistry beneath the ice.

The paper is Khawaja et al., “Low-mass nitrogen-, oxygen-bearing, and aromatic compounds in Enceladean ice grains,” Monthly Notices of the Royal Astronomical Society Vol. 489, Issue 4 (November 2019), pp. 5231–5243 (full text).


Alan Boss: The Gas Giants We Have Yet to Find

The news of a gas giant of half Jupiter’s mass around a small red dwarf, GJ 3512 b, continues to resonate. It speaks to what has become a well-entrenched controversy among those who follow planet formation models. While core accretion is widely accepted as a way of building planets, gravitational instability has remained an option. We are not talking about replacing one model with another, but rather saying that there may be various roads to planet formation among the gas giants. In any case, GJ 3512 b makes a strong case that we have much to learn.

When I think about gravitational instability, I go back to the work of Alan Boss (Carnegie Institution for Science), as he has long investigated the concept. I learned about it from his papers and his subsequent book The Crowded Universe (Basic Books, 2009). Here’s how Boss describes it there:

Proponents of the top-down mechanism… envision clumps of gas and dust forming directly out of the planet-forming disk as a result of the self-gravity of the disk gas. The clumps would result from the intersections of random waves sloshing around the disk, waves that look much like the arms in spiral galaxies such as the Milky Way. When two spiral arms pass through each other, they momentarily merge to form a wave with their combined heights, just as waves do on the surface of an ocean. Such a rogue wave might rapidly lead to the formation of a clump massive enough to be self-gravitating, and so hold itself together against the forces trying to pull it apart. Once such a self-gravitating clump forms, the dust grains within the clumps settle down to the center of the protoplanet and form a core…

What emerges, then, is a solid core with a gaseous envelope. In other words, this ‘disk instability’ model produces a planet with the same structure as one derived from core accretion. What makes GJ 3512 b so interesting is that a planet of its mass around a star as small as its host is hard to explain without a truly massive disk, and a disk that massive would be gravitationally unstable, bringing the disk instability model into play. The originator of this model, by the way, is Alastair Cameron (Harvard University), who proposed the notion in 1972. Other key players were Gerard Kuiper and Soviet scientist Victor Safronov, though it was Boss who revived the idea in 1997 and began developing computer models showing how it could occur.
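The “truly massive disk” requirement can be made concrete with the standard Toomre criterion for a Keplerian gas disk, Q = c_s·Ω / (π·G·Σ), with Q < 1 signaling gravitational instability. The article does not invoke this criterion explicitly, and the disk parameters below (a 0.12-solar-mass star, a 20 K disk at 10 AU) are illustrative assumptions, not figures from the GJ 3512 b study:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23  # Boltzmann constant, J/K
m_H = 1.6726e-27    # hydrogen atom mass, kg
M_sun = 1.989e30    # solar mass, kg
AU = 1.496e11       # astronomical unit, m

# Illustrative values: a red dwarf like GJ 3512, a cold disk at 10 AU
M_star = 0.12 * M_sun
r = 10 * AU
T = 20.0            # disk temperature, K
mu = 2.3            # mean molecular weight of H2/He gas

c_s = math.sqrt(k_B * T / (mu * m_H))  # isothermal sound speed, ~270 m/s
omega = math.sqrt(G * M_star / r**3)   # Keplerian angular frequency, 1/s

def toomre_q(sigma):
    """Toomre Q for gas surface density sigma (kg/m^2); Q < 1 => unstable."""
    return c_s * omega / (math.pi * G * sigma)

# Surface density needed to push the disk to the edge of instability
sigma_crit = c_s * omega / (math.pi * G)  # ~2.8e3 kg/m^2

print(toomre_q(100.0))  # a modest 100 kg/m^2 disk is very stable (Q ~ 28)
print(sigma_crit)
```

A critical surface density of a few thousand kg/m² at 10 AU implies a disk mass that is a substantial fraction of the star’s own mass, which is the sense in which a gravitationally unstable disk around a red dwarf must be “truly massive.”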

Image: Simulation of the disk of gas and dust surrounding a young star. Credit: Alan Boss.

So what does Boss think of GJ 3512 b? As you might guess, he’s energized by the result:

“My new models show that disk instability can form dense clumps at distances similar to those of the Solar System’s giant planets. The exoplanet census is still very much underway, and this work suggests that there are many more gas giants out there waiting to be counted.”

The work he is referring to is a new paper in press at The Astrophysical Journal that suggests there is a likelihood that gas giants in Jupiter-like orbits may be plentiful, with the inherent biases built into our observational techniques making them hard to find. As for how many of these may be formed from disk instability, Boss is computing various protoplanetary disk models to continue the investigation. As he notes in the new paper:

These models are intended to be first steps toward creating a hybrid model for exoplanet population synthesis, where a combination of core accretion and disk instability works in tandem to try to reproduce the exoplanet demographics emerging from numerous large surveys using ground-based Doppler spectroscopy and gravitational microlensing or space-based transit photometry (e.g., Kepler, TESS).

Image: The black box encapsulating Jupiter denotes the approximate region of exoplanet discovery space where Alan Boss’ new models of gas giant planet formation suggest significant numbers of exoplanets remain to be found by direct imaging surveys of nearby stars. NASA’s WFIRST mission, slated for launch in 2025, will test the technology for a coronagraph (CGI) that would be capable of detecting these putative exoplanets. Top Right: This simulation of the disk of gas and dust surrounding a young star shows dense clumps forming in the material. According to the proposed disk instability method of planet formation, they will contract and coalesce into a baby gas giant planet. Credit: Alan Boss.

We’re slowly teasing out planets in wider orbits, which are tricky for radial velocity, where their signals are more difficult to extract than those of large planets close to their star, and also for transit work, because a transit would recur only once per orbital period of five years or more. Boss continues to champion the idea that one size may not fit all when it comes to planet formation, with GJ 3512 b a striking case in point. Given a sufficiently massive protoplanetary disk, giant planets form in his models within 20 AU. Another targeted investigation for WFIRST will come out of all this when the mission launches some time in the next decade.

The paper is Boss, “The Effect of the Approach to Gas Disk Gravitational Instability on the Rapid Formation of Gas Giant Planets,” in press at The Astrophysical Journal (preprint).
