Centauri Dreams

Imagining and Planning Interstellar Exploration

A Longer, Heavier Bombardment

We know that the early Earth was a violent place, but just how violent? The so-called Late Heavy Bombardment is thought to have occurred from 4.1 billion to 3.8 billion years ago, likely the result of asteroids being destabilized in their orbits by shifts in the orbits of the outer planets. That model is self-limiting, with the unstable asteroids being depleted over time and the Late Heavy Bombardment winding down, and it matches the dating of rocks from the lunar basins that show vivid evidence of the battering both Earth and Moon took.

But as I mentioned last week, the question of the length of the Late Heavy Bombardment is in play, with two papers in Nature suggesting that heavy impacts may have continued for a much longer time, perhaps half of the Earth’s history. William Bottke (Southwest Research Institute) and team are suggesting that during this early period, the inner edge of the asteroid belt was just 1.7 AU from the Sun — in a region called the E-belt, a largely extinct portion of the asteroid belt between 1.7 and 2.1 AU — rather than at today’s main belt inner edge distance of 2.1 AU. Asteroids dislodged from orbits here would have been ten times more likely to strike the Earth than asteroids from the region of today’s main belt. From the paper:

The main asteroid belt’s inner boundary is currently set by the ν6 secular resonance at 2.1 AU (one astronomical unit is approximately the Earth-Sun distance); objects entering this resonance have their eccentricities pumped up to planet-crossing values in less than a million years. Before the LHB, the giant planets and their associated secular resonances were in different locations, with the only remaining natural inner boundary being the Mars-crossing zone. Accordingly, the main asteroid belt may have once stretched into the E-belt zone as far as 1.7 AU.

Destabilizing the E-belt asteroids, then, caused nearly all of them to be driven onto planet-crossing orbits over the next four billion years, with a small percentage of survivors winding up in the ‘Hungaria’ population of high-inclination asteroids. The record of major impacts is preserved in the melted droplets of debris called ‘impact spherules,’ which would have been scattered around the planet in the case of major events like the one causing the Chicxulub crater in Mexico, thought to have occurred 65 million years ago and potentially the event that caused the demise of the dinosaurs. You’ll recall it was Chicxulub that Tetsuya Hara (Kyoto Sangyo University) and colleagues used in their study of impact debris, as discussed here on Friday.

Image: Researchers are learning details about asteroid impacts going back to the Earth’s early history by using a new method for extracting precise information from tiny “spherules” embedded in layers of rock. The spherules were created when asteroids crashed into Earth, vaporizing rock that expanded as a giant vapor plume. Small droplets of molten rock in the plume condensed and solidified, falling back to the surface as a thin layer. This sample was found in Western Australia and formed 2.63 billion years ago in the aftermath of a large impact. Credit: Oberlin College/Bruce M. Simonson.

What emerges from the study of spherules is a period of bombardment that continued long past the time older theories assumed it had ended, making impact events a continuing player in the evolution of life. Brandon Johnson and Jay Melosh (Purdue University) add to this picture in the same issue of Nature by looking at similar spherules in rock formed between 2 billion and 3.5 billion years ago, finding evidence that the theory of an extended Late Heavy Bombardment makes sense. Johnson told Nature‘s Helen Thompson in a review of his work that it “… shows that a lot more big asteroids — meaning dinosaur-killer or larger — were hitting Earth well after the current idea of when it ended.” The paper supports Bottke’s analysis, with impacts continuing to be prolific in the Archaean eon, when photosynthesizing cyanobacteria were on the rise.

Are major impacts always killer events, or do they spur evolution by continually shaping the environment, introducing organics and other key materials as life is taking hold? From Nature‘s coverage of the story:

How life would have responded to a sustained barrage throughout this period is unclear. A giant impact would have come as a severe blow for some forms of early life, but it need not have been all bad news, says Steve Mojzsis, a geologist at the University of Colorado Boulder. That is because the energy deposited by the ongoing impacts could have created hot zones like those found near hydrothermal vents today. “These are great places for microbes,” says Mojzsis, who notes that some phylogenetic evidence suggests that the last common ancestors of all present-day life were heat-loving organisms.

The papers are Bottke et al., “An Archaean heavy bombardment from a destabilized extension of the asteroid belt,” Nature 485 (03 May 2012), 78-81 (abstract) and Johnson and Melosh, “Impact spherules as a record of an ancient heavy bombardment of Earth,” Nature 485 (03 May 2012), 75-77 (abstract). Be aware, too, of the Impact: Earth! calculator, available online from Purdue to allow calculations of asteroid or comet impact damage based on the impacting object’s parameters. The new work should allow its engine to be fine-tuned.


Impacts Spreading Life through the Cosmos?

Still catching up after the recent series on antimatter propulsion, I want to move into some intriguing work on panspermia, the idea that life may spread throughout a Solar System, and perhaps from star to star, because of massive impacts on a planetary surface. Catching up with older stories means leaving some things unsaid about antimatter — in particular, I want to return to the question of antimatter storage, which to my mind is an even more significant problem than antimatter production. But there’s time for that next week, and as I said yesterday, interesting stories keep accumulating and deserve our attention.

Planetary Ejecta and Trapped Microorganisms

In a recent paper, Tetsuya Hara (Kyoto Sangyo University) and colleagues calculate how much life-bearing rock and water would be ejected into space by events like the possible ‘dinosaur killer’ asteroid impact some 65 million years ago, which involved an asteroid 10 kilometers in diameter. It’s a remarkable fact that materials can be knocked off one planetary surface and wind up on another, and in some quantity. Consider, for example, the 100 or so meteorites identified by their isotopic composition as being from Mars. They show marked similarity in chemical composition to Viking’s analysis of Martian surface rocks in 1976, and trapped gases in some closely resemble the Martian atmosphere.

So planets in the same system can exchange materials. The Allan Hills meteorite found in Antarctica (ALH 84001), thought to have been ejected from Mars about 16 million years ago, caused quite a stir back in 1996 when scientists thought they had found evidence for microscopic fossils within it, an analysis that remains controversial. But whether or not ALH 84001 contained life, the discovery of various kinds of extremophiles here on Earth, and the possibility that they could survive for long periods trapped in rocky debris, leads to the idea that one world can seed another. As we’ve seen in earlier posts on the topic, the idea goes back as far as the Greek philosopher Anaxagoras, with a revival of interest in the 19th Century.

Image: Would an asteroid impact like this drive life-bearing materials to other planets? Possibly so, but the bigger question is, would microorganisms be able to survive the trip? Credit: NASA.

Fred Hoyle and Chandra Wickramasinghe, who were proponents of continuing panspermia and the idea that life entering Earth’s atmosphere from outside could be a driver for evolution, would doubtless find Hara and team’s work fascinating. While Hara and colleagues argue that solar storms could eject microbes from the upper atmosphere into space, they concede that bolide impacts would be the major driver:

Naturally, those meteors, asteroids or comets which strike with the strong force, would eject the most material into space. Thus it could be predicted that the asteroid or meteor which struck this planet 65 million years ago, and which created the Chicxulub crater (Alvarez. et al. 1979) would have ejected substantial amount of rock, soil, and water into space, some of which would have fallen onto other planets and moons, including stellar bodies outside our stellar system, Kuiper belt objects, Oort cloud objects, and possibly extrasolar planets…

And there’s the issue — just how much material would actually be transferred not just into the outer Solar System, but to nearby stars? Hara uses the Chicxulub crater event as the model for the kind of collisions that drive Earth materials into space, estimating that about the same amount of mass would be ejected from Earth as arrived in the original asteroid impactor.
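To get a feel for the scale, here’s a quick back-of-the-envelope sketch in Python of the Chicxulub impactor’s mass, using the 10-kilometer diameter mentioned above and an assumed rocky density of 3,000 kg/m³ (the density is my assumption for illustration, not a figure from the paper). Under Hara’s rough parity between impactor mass and ejected mass, this is also the order of magnitude of material lofted into space:

import math

# Chicxulub-class impactor: 10 km diameter (from the text);
# a density of ~3000 kg/m^3 is an assumed value for rocky material.
diameter_m = 10e3
density = 3000.0  # kg/m^3, assumption

volume = (4.0 / 3.0) * math.pi * (diameter_m / 2) ** 3  # spherical volume
impactor_mass = density * volume

# Hara and team take ejected mass ~ impactor mass for such events.
print(f"Impactor (and ejected) mass ~ {impactor_mass:.1e} kg")  # ~1.6e15 kg

Something like 10¹⁵ kilograms of rock and water, in other words, scattered across the Solar System and beyond.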

Long Journey into the Dark

Remarkably, almost as much of the ejected material makes the journey to Europa as to our own Moon, an interesting outcome explained by Jupiter’s deep gravitational well, which in this case takes possibly biologically laden material to a moon that contains an under-ice ocean. Another place of high astrobiological interest is Saturn’s moon Enceladus, with an internal body of water of its own, as evidenced by the geysers Cassini continues to monitor around its south pole. Here the numbers drop considerably, but as many as 500-2000 Earth rocks may have reached Enceladus. Even distant Eris in the Kuiper Belt scarfs up as many as 4 × 10⁷ Earthly objects in one scenario.

These numbers vary according to which of two models the authors use, but in either case their figures show significant movement of Earth materials into the outer Solar System. Extending the model to Gliese 581 becomes a fascinating exercise because we have reason to believe that the ‘super-Earth’ Gl 581d may orbit at the outer edge of the habitable zone there. The result: As many as 1000 rocks may have made the million-year journey to fall upon a planet in the Gl 581 system. Thus we have the possibility, however remote, of dormant microorganisms moving between one stellar system and another, to fall upon a planet that conceivably can support life.
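As a sanity check on that million-year figure, here is a short sketch of the implied transfer speed, taking Gl 581’s distance as roughly 20.3 light years (the travel time is from the paper; the distance is a standard catalog value):

LIGHT_YEAR_M = 9.461e15  # meters per light year
YEAR_S = 3.156e7         # seconds per year

distance_m = 20.3 * LIGHT_YEAR_M   # Gl 581 is ~20.3 light years away
travel_time_s = 1e6 * YEAR_S       # the million-year journey in the text

v = distance_m / travel_time_s
print(f"Mean speed ~ {v / 1e3:.0f} km/s, or {v / 3e8:.1e} c")  # ~6 km/s

A mean speed of about 6 kilometers per second is a modest hyperbolic excess for high-velocity ejecta, which is why the scenario, however improbable, can’t simply be dismissed on kinematic grounds.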

Hara and team acknowledge the uncertainties in their calculations but insist that “…the probability of rocks originated from Earth to reach nearby star system is not so small.” If this is the case, their conclusion points to the possibility that life did not originate on Earth at all:

We estimate the transfer velocity of the microorganisms among the stellar systems. Under some assumptions, it could be estimated that if origin of life has begun 10¹⁰ years ago in one stellar system as estimated by Joseph and Schild (2010a, b), it could propagate throughout our Galaxy by 10¹⁰ years, and could certainly have reached Earth by 4.6 billion years ago (Joseph 2009), thereby explaining the origin of life on Earth.

This assumes that there are 25 sites where life began 10¹⁰ years ago, with biological materials spread through the galaxy by the same kind of impact events that caused the Chicxulub crater. Add to this recent work by Brandon Johnson (Purdue) and colleagues. They’ve been investigating layers of rock droplets called spherules, which may tell us more than craters can about ancient impacts, including the size and velocity of the impacting object. An initial reading of their work shows that the Late Heavy Bombardment, thought to have occurred from 4.1 billion to 3.8 billion years ago as huge numbers of asteroids and comets hit the Earth, may have lasted longer than we previously believed.
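Returning to the propagation claim in the excerpt above, it’s easy to see why 10¹⁰ years is a comfortable budget on this picture. A deliberately naive sketch, assuming ejecta cross interstellar space at the ~6 km/s transfer speed implied earlier and treating propagation as a straight run across a 100,000 light-year galactic disk (both numbers are my illustrative assumptions, not the authors’):

v_frac_c = 2e-5                # ~6 km/s as a fraction of lightspeed
galaxy_diameter_ly = 1e5       # assumed size of the galactic disk

# Light covers one light year per year, so a body moving at a
# fraction f of c covers f light years per year.
crossing_time_yr = galaxy_diameter_ly / v_frac_c
print(f"Naive crossing time ~ {crossing_time_yr:.0e} years")  # ~5e9 years

With 25 independent origin sites the typical distances shrink considerably, so even this crude estimate leaves room within the 10¹⁰-year timescale the authors invoke.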

Was Chicxulub relatively minor compared to the size of some of these impacts? Evidently so, which would account for even more materials from our planet being pushed out into nearby space. The wild card in all this is the ability of microorganisms to survive not just the impact but the journey, and when it comes to interstellar panspermia, my own credulity is pushed to the breaking point. Although I’m running out of time this morning, I want to return to two papers in Nature that examine the Late Heavy Bombardment and the history of later impacts on our planet. We’ll home in on evidence for a longer bombardment era when Centauri Dreams returns next week.

The Hara paper is “Transfer of Life-Bearing Meteorites from Earth to Other Planets,” Journal of Cosmology 7 (2010), p. 1731 (preprint). Thanks to John Kilis for the original pointer to Hara and an update on the Nature work.


Disruptive Planets and their Consequences

One of the joys of writing a site like Centauri Dreams is that I can choose my own topics and devote as much or as little time as I want to each. The downside is that when I’m covering something in greater depth, as with the four articles on antimatter that ran in the last six days, I invariably fall behind on other interesting work. That means a couple of days of catch-up, which is what we’ll now see, starting with some thoughts on a possible planet beyond Neptune, a full-sized world as opposed to an ice dwarf like Pluto or Eris. This story is actually making the rounds right now, but it triggered thoughts on older exoplanet work I’ll describe in a minute.

It’s inevitable that we call such a world Planet X, in my case because of my love for the wonderful Edgar Ulmer film The Man from Planet X (1951), in which a planet from the deeps wanders into the Solar System and all manner of trouble — including the landing of an extraterrestrial on a foggy Scottish moor — breaks out. Of course, Planet X was also the name for the world Percival Lowell was searching for in the 1920s, a hunt that resulted in Clyde Tombaugh’s discovery of Pluto, although the latter occurred more or less by chance since Pluto/Charon isn’t big enough to cause the gravitational effects Lowell was examining.

So is there a real Planet X? Rodney Gomes (National Observatory of Brazil) has run simulations on the ‘scattered disk’ beyond Neptune and, factoring in oddities like the highly elliptical orbit of Sedna and other data points on these distant objects, Gomes believes a Neptune-class planet about four times the size of Earth may be lurking in the outer system. Sedna, you may recall, has a perihelion of 76 AU but an aphelion fully 975 AU out — it’s on a 12,000-year orbital period! As for Gomes, his team has been looking at what they call ‘true inner Oort cloud objects’ for some time, seeing objects like Sedna as markers for the existence of a planet.

Gomes ran through the results of his simulations at a meeting of the American Astronomical Society’s Division on Dynamical Astronomy in Oregon in May, keeping the Planet X hunt alive, and it’s worth noting that a Jupiter-class planet at about 5000 AU may also fit the bill (see Finding the Real Planet X). For that matter, the orbits of scattered disk objects may have another explanation besides an undiscovered planet. But thinking about Gomes’ work brought me around to Jason Steffen and team, whose new paper goes to work on a much different kind of gravitational effect, the disruption caused by a ‘hot Jupiter’ as it moves through a young Solar System and scatters smaller planets.

Realm of the Wandering Planets

Steffen (Fermilab Center for Particle Astrophysics) is digging into exactly what makes ‘hot Jupiters’ take up such extreme orbits. These are planets of Jupiter’s size and larger that whip around their stars in periods of just a few days. The question is how they got to their present position, for the assumption is that planets of this size had to form much further out in their system and then move inward. There are two mechanisms that could make this happen, one of which — a slow migration through a gas disk that would allow low-mass planets to likewise migrate inward, where they can be captured into mean-motion resonance with the gas giant — seems benign. These models suggest the presence of smaller worlds near the hot Jupiter.

Image: Artist’s concept of a hot Jupiter, likely a disrupter of any planets that encounter it. Credit: NASA.

The other model is lethal to the inner system. Here, the giant planet’s migration is caused by gravitational interactions with another gas giant that result in one of the worlds being flung into interstellar space, while the other migrates inward and disrupts the orbits of any inner-system worlds. This scenario is what the Steffen paper is suggesting, for the team’s analysis of 63 Kepler planets around solar-type stars in orbits of 6.3 days or less shows no evidence at all for nearby planets. If such worlds were there, they ought to be detectable through transit timing variations (TTV) unless they are smaller than the Earth, or much further out in the system.

To compare and contrast environments, the researchers took another sample of 31 Kepler planets with ‘warm Jupiters’ — planets of Jupiter size around the same kind of star, but with longer orbital periods of between 6.3 and 15.8 days. They also checked 222 Kepler ‘hot Neptunes.’ The result: Three of the 31 ‘warm Jupiter’ systems showed companion planets in the inner system, and fully one-third of the hot Neptune systems showed the presence of inner system planets. Finally, the team looked at 52 ‘hot Earths’ in the Kepler data for TTVs, testing whether hot Jupiters and smaller worlds like these might co-exist in mutually inclined orbits. They found no evidence for high-mass companions on inclined orbits in this scenario.
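The contrast among the samples is easy to make concrete. Here is a small sketch comparing the observed companion fractions, along with the classical 95 percent upper limit on a fraction when zero detections turn up in a sample (the statistical framing is mine, for illustration; I also take ‘one-third’ literally as 74 of the 222 hot Neptune systems):

# Companion counts from the Kepler samples described in the text.
samples = {
    "hot Jupiters": (0, 63),
    "warm Jupiters": (3, 31),
    "hot Neptunes": (74, 222),   # 'fully one-third', taken literally
}

for name, (k, n) in samples.items():
    print(f"{name}: {k}/{n} = {k / n:.1%}")

# With zero detections in n systems, the 95% upper limit on the true
# companion fraction p satisfies (1 - p)^n = 0.05.
n = 63
p_upper = 1 - 0.05 ** (1 / n)
print(f"95% upper limit for hot Jupiters: {p_upper:.1%}")  # ~4.6%

Even allowing for small-number statistics, a hot Jupiter companion fraction below five percent sits well apart from the roughly ten percent seen around warm Jupiters and the one-third seen around hot Neptunes.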

The authors see this as a boost to the ‘scattering’ model, the study suggesting that hot Jupiters are migrating worlds on initially highly elliptical orbits that scattered other planets out of the inner system before their orbits became circularized close to their stars. Short period, low-mass planets would seem to have a different formation history than hot Jupiters. From the paper:

Hot Jupiter systems where planet-planet scattering is important are unlikely to form or maintain terrestrial planets interior to or within the habitable zone of their parent star. Thus, theories that predict the formation or existence of such planets (Raymond et al. 2006; Mandell et al. 2007) can only apply to a small fraction of systems. Future population studies of planet candidates, such as this, that are enabled by the Kepler mission will yield valuable refinements to planet formation theories — giving important insights into the range of probable contemporary planetary system architectures and the possible existence of habitable planets within them.

If hot Jupiter systems have a different dynamical history than other planetary systems, as this work suggests, then we have a useful filter to apply to exoplanet studies. If it can be firmly established that the presence of a hot Jupiter means no planets in the habitable zone, we know our resources are best focused elsewhere when it comes to looking for terrestrial worlds. It’s too early to make that call now, but the evidence is mounting that in most cases hot Jupiters are killer worlds when it comes to young planets in the warm regions where life may occur.

The paper is Steffen et al., “Kepler constraints on planets near hot Jupiters,” Proceedings of the National Academy of Sciences 109 (21), 7982-7987 (2012). Abstract available.


Losing Our Cosmology

Long-time Centauri Dreams readers know I love the idea of ‘deep time,’ an interest that cosmology provokes on a regular basis these days. Avi Loeb’s new work at Harvard strikes these chords nicely as the theorist examines what we know and when we won’t be able to study it any longer. For an accelerating universe means that galaxies are moving outside our light horizon, to become forever unknown to us. Using tools like the Wilkinson Microwave Anisotropy Probe, we’ve been able to learn how density perturbations in the early universe, thought to have been caused by quantum fluctuations writ large by a period of cosmic inflation, emerged as the structures we see today. But are there limits to cosmological surveys?

Start with that period of inflation after the Big Bang, which would have boosted the scale of things by more than 26 orders of magnitude, helping to account for the fact that the cosmic microwave background (CMB) appears so uniform in all directions. We can tease out the irregularities that have grown into galaxies today. Loeb refers to these density perturbations as a ‘Rosetta stone’ that has unlocked our understanding of the largest cosmological parameters.

Even so, he concludes in his new paper that our ability to understand the earliest period of the universe deteriorates over time. In fact, the optimum time to study the cosmos turns out to have been more than 13 billion years ago, with information being lost along the way in the eras since. From the paper:

… the most accurate statistical constraints on the primordial density perturbations are accessible at z ≈ 10, when the age of the Universe was a few percent of its current value (i.e., hundreds of Myr after the Big Bang). The best tool for tracing the matter distribution at this epoch involves intensity mapping of the 21-cm line of atomic hydrogen… Although the present time (a = 1) is still adequate for retrieving cosmological information with sub-percent precision, the prospects for precision cosmology will deteriorate considerably within a few Hubble times into the future.

The era Loeb picks as optimum for observation is just 500 million years after the Big Bang, when cosmic perturbations were still emerging in the form of the first stars and galaxies. Hubble time, as defined by the standard cosmological model, is 13.8 billion years, giving us a sense of the scale he is working with. Today we can use 21-centimeter surveys to map the early distribution of matter, but such observations will be impossible in the far future. We have to reckon with the fact that dark energy is still an open question — does it evolve with time? Loeb’s work assumes a universe with a true cosmological constant, but he notes that the ultimate loss of information will occur whether or not an accelerated expansion varies as the universe ages.

Image: New research finds that the ideal time to study the cosmos was more than 13 billion years ago, just about 500 million years after the Big Bang – the era (shown in this artist’s conception) when the first stars and galaxies began to form. Since information about the early universe is lost when the first galaxies are made, the best time to view cosmic perturbations is right when stars began to form. Modern observers can still access this nascent era from a distance by using surveys designed to detect 21-cm radio emission from hydrogen gas at those early times. Credit: Harvard-Smithsonian Center for Astrophysics.

This work takes us into deep time indeed, with Loeb’s calculations showing that beyond 100 Hubble times (a trillion years in the future), the wavelength of the CMB and other extragalactic photons will be stretched by a factor of 10²⁹ or more, making current cosmological sources unobservable:

While the amount of information available now from observations of our cosmological past at z ≈ 10 is limited by systematic uncertainties that could potentially be circumvented through technological advances, the loss of information in our future is unavoidable as long as cosmic acceleration will persist.
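The exponential character of the expansion is what drives these numbers. A minimal sketch, assuming a pure cosmological-constant (de Sitter) phase in which the scale factor grows by a factor of e every Hubble time:

import math

# In a Lambda-dominated universe a(t) ~ exp(t / t_H), so after
# N Hubble times wavelengths are stretched by a factor exp(N).
def stretch(n_hubble_times):
    return math.exp(n_hubble_times)

print(f"Stretch after 100 Hubble times: {stretch(100):.1e}")        # ~2.7e43
print(f"e-folds needed for a 1e29 stretch: {math.log(1e29):.1f}")   # ~66.8

On this naive accounting a stretch of 10²⁹ arrives after roughly 67 Hubble times, and by 100 Hubble times the factor is vastly larger still, consistent with Loeb’s ‘10²⁹ or more.’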

It’s always interesting to speculate about what any observers in this remote future would make of the universe without the ability to recover evidence of the early cosmological perturbations. Clearly our observations have a fundamental limit as a function of cosmic time, and even our existing data, which Loeb sees as far from optimal, will one day be unrecoverable from new observations. If there are future advances that would somehow surmount these limitations to open up the universe’s past, we can only speculate on what they might be, and how they would be developed by a civilization living in this unimaginably remote future cosmos.

The paper is Loeb, “The Optimal Cosmic Epoch for Precision Cosmology,” accepted for publication in the Journal of Cosmology and AstroParticle Physics (preprint).


Toward a Beamed Core Drive

If you didn’t see this morning’s spectacular launch of the SpaceX Falcon 9, be sure to check out the video (and it would be a good day to follow @elonmusk on Twitter, too). As we open the era of private launches to resupply the International Space Station, it’s humbling to contrast how exhilarating this morning feels with the great distances we have to traverse before missions to another star become a serious possibility. We’ve been talking the last few days about the promise of antimatter, but while the potential for liberating massive amounts of energy is undeniable, the problems of achieving antimatter propulsion are huge.

So we have to make a lot of leaps when speculating about what might happen. But let’s assume just for the sake of argument that the problem analyzed yesterday — how to produce antimatter in quantity — is solved. What kind of antimatter engine would we build? If everything else were optimum, we’d surely try to master a beamed core drive, the pure product of the matter/antimatter annihilation sequence. Protons and antiprotons are injected into a magnetic nozzle, blowing out the back at a substantial percentage of the speed of light. This is the kind of rocket analyzed by Ronan Keane (Western Reserve Academy) and Wei-Ming Zhang (Kent State University) in the paper I’ve been skirting around the edges of these past few days.

Channeling Antimatter’s Energies

The paper, headed for publication in the Journal of the British Interplanetary Society, has the provocative title “Beamed Core Antimatter Propulsion: Engine Design and Optimization,” and it deals with the particle stream emerging from proton/antiproton collisions. What you get when you put the two together are gamma rays and pions, some of the latter charged and some neutral. The neutral pions decay almost immediately into gamma rays, but the tens of nanoseconds the charged pions take to decay gives us long enough to channel them through a magnetic nozzle to produce the needed thrust.

Image: Antimatter promises fast transportation throughout the Solar System and the opportunity for interstellar probes, but only if we can master its production and storage. New work is explaining how efficient an antimatter engine might be. Credit: Positronics Research, LLC.

The beamed core engine, then, is all about channeling the pions into a focused flow. Get this right and you’ve got a lot of energy to work with. In fact, Keane and Zhang note that the energy released per kilogram of annihilating antimatter and matter is 9 × 10¹⁶ joules, which is two billion times more than the thermal energy from burning a kilogram of hydrocarbon, and over a thousand times larger than burning a kilogram of fuel in a nuclear fission reactor.
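Those comparisons fall straight out of E = mc². A quick check, using round figures of about 4.5 × 10⁷ J/kg for hydrocarbon combustion and 8 × 10¹³ J/kg for fission fuel burn-up (the two comparison values are my assumed round numbers, not figures from the paper):

c = 2.998e8  # speed of light, m/s

# One kilogram of annihilating matter+antimatter releases its rest energy.
E_annihilation = 1.0 * c**2              # ~9e16 J

E_hydrocarbon = 4.5e7                    # J/kg, assumed round figure
E_fission = 8e13                         # J/kg, assumed round figure

print(f"Annihilation: {E_annihilation:.1e} J/kg")
print(f"vs hydrocarbon: {E_annihilation / E_hydrocarbon:.1e}x")  # ~2e9
print(f"vs fission: {E_annihilation / E_fission:.0f}x")          # ~1100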

But while the beamed core engine is attractive because of the high relativistic velocities of the charged particles produced by the annihilation reactions, the situation is not ideal. For one thing, much of the energy of the reaction goes into producing electrically neutral particles, which are impervious to the workings of a magnetic nozzle and thus cannot contribute to thrust. The other problem is that the nozzles we’ve been able to analyze have efficiency problems of their own in terms of creating the tight beam of thrust we’d like to produce. Keane and Zhang use Geant4, simulation software from CERN, to model the interactions of particles with matter and fields. They want to bring previous studies of beamed core concepts up to date, especially in terms of magnetic nozzles.

Robert Frisbee has performed rigorous studies of beamed core concepts in which magnetic nozzle efficiency is only about 36 percent, which means that while you’re dealing with pions that are initially moving at 90 percent of light speed and above, the exhaust velocity of the rocket would be just a third of that amount. Keane and Zhang derive an efficiency better than twice that, and manage to reach charged pion exhaust speeds of 69 percent of c. They also show that the initial speed of charged pions in a beamed core engine is actually closer to 0.81c than Frisbee’s 90 percent-plus. Despite the lower initial speed, the nozzle efficiencies make quite a difference depending on the kind of mission being attempted:

Frisbee’s papers explain in depth the needed generalization to account for emission of uncharged particles… When loss of propellant is taken into account, Frisbee has shown that ve ~ 0.3c leads to a beamed core rocket facing daunting challenges in reaching a true relativistic cruise speed on a one-way interstellar mission where deceleration at the destination (a “rendezvous” mission) would be involved.
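Both sets of numbers above are consistent with a simple reading in which the effective exhaust velocity is just the nozzle efficiency times the mean charged-pion speed; the proportionality is my simplification of the paper’s more detailed treatment, with the 85 percent figure inferred from the ratio 0.69c/0.81c:

def exhaust_velocity(nozzle_efficiency, pion_speed_c):
    # Simplified model: v_e ~ nozzle efficiency * mean pion speed.
    return nozzle_efficiency * pion_speed_c

print(f"Frisbee: {exhaust_velocity(0.36, 0.90):.2f} c")      # ~0.32 c
print(f"Keane/Zhang: {exhaust_velocity(0.85, 0.81):.2f} c")  # ~0.69 c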

Fuel requirements become critical with lower nozzle efficiencies:

… with a payload of 100 metric tons, a 4-stage beamed core rocket designed for a cruise speed of 0.42c on a 40 light-year rendezvous mission would require 40 million tons of antimatter fuel. If the cruise speed were limited to 0.25c or less, only two stages might be needed, and Frisbee envisaged viable interstellar missions with as few as one beamed core stage; in such scenarios, fuel requirements would be dramatically lower.

All of this at 36 percent nozzle efficiency. The new numbers change the picture, with Keane and Zhang stating “With the new reference point of ve = 0.69c provided by the present Geant-based simulation, true relativistic speeds once more become a possibility using the highest performance beamed core propulsion in the distant future.”

Note the ‘distant future’ caveat, highly significant when you consider our problems in producing antimatter (or harvesting same) and the perhaps even more intractable issues involved in storage.
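Still, the difference those exhaust velocities make shows up starkly in the relativistic rocket equation, Δv/c = tanh((ve/c) ln(m0/m1)). Here is a sketch of the single-stage mass ratios for a 0.42c cruise, with and without deceleration at the target (this simple calculation ignores the mass lost to neutral particles, the very point Adam Crowl raises in the next section):

import math

def mass_ratio(delta_v_c, v_e_c):
    # Relativistic rocket equation, inverted for the mass ratio m0/m1.
    return math.exp(math.atanh(delta_v_c) / v_e_c)

for v_e in (0.30, 0.69):
    boost = mass_ratio(0.42, v_e)   # accelerate to 0.42c
    rendezvous = boost ** 2         # then shed the same rapidity again
    print(f"v_e = {v_e:.2f}c: flyby {boost:.1f}, rendezvous {rendezvous:.1f}")

At ve = 0.3c a rendezvous demands a mass ratio near 20 per stage, which is where Frisbee’s staggering fuel totals come from; at 0.69c it drops below four.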

On Software and Methodology

But even if we can’t put a timeframe on something as futuristic as a beamed core rocket, we can continue to study the concept, and it’s heartening to see Keane and Zhang’s conclusion that the simulation software at CERN has proven robust in meeting this challenge and updating our numbers. Whether or not Keane and Zhang’s methodology is on target may be another issue, as Adam Crowl noted recently in a post to a private mailing list of aerospace engineers. Crowl hastens to add that his computations are provisional, but let me quote (with his permission) where he is right now on the magnetic nozzle efficiency issue:

There’s a problem with using just the exhaust velocity given to *part* of the fuel/propellant. It means the actual mass-ratio for a given delta-vee is quite different to a naive computation using the classic Tsiolkovskii equation. A more useful figure of merit for rockets with mass-loss in addition to reaction mass is specific impulse – momentum change per unit mass of fuel/propellant. Using the equations derived by Shawn Westmoreland and the rather vague particle energies in Zhang & Keane, the effective specific impulse is ~0.28c. Even with a perfect jet efficiency the Isp is just 0.31c.

The antimatter reaction, then, may not offer as much as we hoped:

The 0.81c average particle speed quoted in the paper isn’t as useful as the spread of kinetic energies in the particles produced, or the total kinetic energy in the distribution, but they don’t report either figure. What it does imply is that an antimatter-matter reaction puts about 11% of the mass-energy into the charged particles. Not exactly spectacular.

The chance to go to work on concepts through papers in the preprint process is invaluable, and we’ll see how Crowl’s thinking, as well as Keane and Zhang’s, evolves with further study of the issues in this paper. One thing is for sure: Given the manifest problems of antimatter production and storage, we’ll have no shortage of time in which to consider these matters before the question of actually producing this kind of antimatter rocket becomes pressing.

The paper is Keane and Zhang, “Beamed Core Antimatter Propulsion: Engine Design and Optimization,” accepted by JBIS (preprint).


Antimatter: The Production Problem

Antimatter is so tantalizing a prospect for propulsion that every time a new slant on using it appears, I try to figure out its implications for long-haul missions. But the news, however interesting, is inevitably balanced by the reality of production problems. There’s no question that antimatter is potent stuff, with the potential for dealing out a thousand times the energy of a nuclear fission reaction. Use hydrogen as a working fluid heated up by antimatter and 10 milligrams of antimatter can give you the kick of 120 tonnes of conventional rocket fuel. If we could get the cost down to $10 million per milligram, antimatter propulsion would be less expensive than nuclear fission methods, depending on the efficiency of the design.
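That ‘kick’ comparison holds up on the back of an envelope, assuming the 10 milligrams of antimatter annihilates with an equal mass of ordinary matter and taking chemical propellant at roughly 1.4 × 10⁷ J/kg (an assumed round figure for the comparison):

c = 2.998e8  # speed of light, m/s

antimatter_kg = 10e-6                      # 10 milligrams
E = 2 * antimatter_kg * c**2               # annihilates with equal matter mass

E_chemical = 1.4e7                         # J/kg, assumed round figure
equivalent_tonnes = E / E_chemical / 1e3   # tonnes of chemical propellant

print(f"{E:.1e} J, roughly {equivalent_tonnes:.0f} tonnes of fuel")  # ~128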

But how to reduce the cost? Current estimates show that producing antimatter in today’s accelerator laboratories runs the total up to $100 trillion per gram. But when I was researching my Centauri Dreams book, I spent some time going through the collection of Robert Forward’s papers at the University of Alabama-Huntsville, where several boxes of materials are stored in Salmon Library. Forward was constantly working in a number of different fields, always keeping his eye on the latest research, and as part of that effort he produced a series of newsletters on antimatter developments that he circulated among colleagues.

Image: A Penn State artist’s concept of an antimatter-powered Mars ship with equipment and crew landers at the right, and the engine, with magnetic nozzles, at left. Credit: PSU.

Reading through these materials, I came to see that when we quote the $100 trillion per gram figure, we’re talking about antimatter as produced more or less as a byproduct. Forward understood and appreciated the science requirements of particle accelerator labs but also saw that they were hardly the most efficient place to produce antimatter in any quantity. They were not, after all, in the propulsion business. He proceeded to do a study for the US Air Force looking at what might happen if an antimatter facility were actually designed for no other purpose than the creation of antiprotons, finding that the energy efficiency could be raised from one part in 60 million to a part in 10,000, or 0.01 percent.

The cost of building the factory, meanwhile, could be lowered dramatically, to the point where Forward believed our $10 million per milligram would be within reach. This is an interesting figure in several ways. As noted above, it makes antimatter feasible for certain kinds of space missions (assuming equivalent advances in our methods of antimatter storage). But as the price begins to drop, we can expect to find new applications in other areas of research, which should drive demand and spur further work on efficient production. It’s worth remembering that even at today’s prices, antimatter has proven its worth in scientific research and medical uses.
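The $10 million figure is easy to motivate from the improved efficiency alone. A back-of-envelope sketch, assuming electricity at about $0.04 per kilowatt-hour (the electricity price is my assumption; Forward’s figure folds in capital and operating costs as well):

c = 2.998e8  # speed of light, m/s

mg = 1e-6                          # one milligram, in kg
E_rest = mg * c**2                 # ~9e10 J of antiproton rest energy

efficiency = 1e-4                  # Forward's 0.01 percent
E_input = E_rest / efficiency      # ~9e14 J of wall-plug energy

kwh = E_input / 3.6e6              # joules per kilowatt-hour
print(f"Energy cost per milligram: ${kwh * 0.04:,.0f}")  # ~$10 million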

What about other ways of lowering the cost? One possibility is to look beyond slamming high-energy protons into heavy-nuclei targets. Writing with Joel Davis in a book called Mirror Matter: Pioneering Antimatter Physics (Wiley, 1988), Forward looked at options like heavy ion beam colliders, in which beams of heavy ions like uranium could be collided to produce 10¹⁸ antiprotons per second (with acknowledged problems in creating large amounts of nuclear debris). He also considered new generations of superconducting magnets to create magnetic focusing fields near the region where the beams collide, which should make tighter beams and greater antimatter production possible.

I bring all this up because the possibility of harvesting antimatter from natural sources in space, which we talked about last week, has to be weighed against boosting production here on Earth. But Forward’s ideas actually coupled the two notions. He wanted to move antimatter production by humans into space in the form of huge factories. Here’s what he has to say on this in an essay in his book Indistinguishable from Magic (Baen, 1995):

Where will we get the energy to run these magic matter factories? Some of the prototype factories will be built on Earth, but for large scale production we certainly don’t want to power these machines by burning fossil fuels on Earth. There is plenty of energy in space. At the distance of the Earth from the Sun, the Sun delivers over a kilowatt of energy for each square meter of collector, or a gigawatt (1,000,000,000 watts) per square kilometer. A collector array of one hundred kilometers on a side would provide a power input of ten terawatts (10,000,000,000,000), enough to run a number of antimatter factories at full power, producing a gram of antimatter a day.
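Forward’s gram-a-day figure is consistent with his own efficiency estimate. A quick check of the arithmetic, using the 0.01 percent production efficiency from the Air Force study mentioned earlier:

c = 2.998e8  # speed of light, m/s

gram = 1e-3                        # kg
E_rest = gram * c**2               # ~9e13 J per gram of antimatter

efficiency = 1e-4                  # 0.01 percent, from Forward's study
E_needed = E_rest / efficiency     # ~9e17 J of input energy

power = 10e12                      # the 10 TW collector array
E_per_day = power * 86400          # ~8.6e17 J per day

print(f"Needed {E_needed:.1e} J; collected {E_per_day:.1e} J per day")

The two numbers agree to within a few percent, suggesting this is exactly the sizing Forward had in mind for the collector array.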

We’re a long, long way from producing a gram of antimatter a day, of course, which is why studies like the recent one performed by Ronan Keane (Western Reserve Academy) and Wei-Ming Zhang (Kent State University) have such a futuristic air. But it’s important to learn the theoretical constraints on propulsion systems even if the required antimatter isn’t available, and on that score, Keane and Zhang are thinking ahead to the most advanced kind of antimatter propulsion of them all, a beamed core drive. To make it work, assuming you have the antimatter available, you need to inject protons and antiprotons into a magnetic nozzle, one that channels charged pions from the matter/antimatter annihilation into a focused beam of powerful thrust.

Although charged pions decay quickly, they can start out at 90 percent of the speed of light. Unfortunately, earlier magnetic nozzle calculations have proven inefficient at channeling these energies, dropping the exhaust velocity down to a third of this value. Tomorrow we’ll look at how a more efficient magnetic nozzle can produce better results, as Keane and Zhang have analyzed using CERN software to simulate what would go on in the hellish interior of a beamed core antimatter engine. But we also need to consider other ways of using antimatter for propulsion, assuming that Forward’s space-borne factories aren’t going to be coming online any time soon.

