Modeling Black Hole Mergers

Any guesses as to what the most powerful event in the universe is? According to a team of NASA scientists working with breakthrough computer modeling techniques, it’s the merger of two massive black holes. When the event occurs, gravitational waves muscle out from the collision site at the speed of light. For a brief moment, such a merger radiates more power than all the stars in the universe combined. John Baker at Goddard Space Flight Center is lead author of the paper, which appears in the March 26 issue of Physical Review Letters.

Of course, the modeling works (via the largest astrophysics calculations ever performed on a NASA supercomputer), but one problem remains: gravitational waves have yet to be detected directly. There is reasonable expectation that this will change through observations from the Laser Interferometer Gravitational-Wave Observatory and the joint NASA/ESA Laser Interferometer Space Antenna, a proposed mission that should be able to do the job. Einstein’s theory of general relativity predicts that a massive black hole merger would cause spacetime itself to jiggle, in the words of a NASA news release, “…like a bowl of Jell-O.”

Merging black holes

Image: Scientists are watching two supermassive black holes spiral towards each other near the center of a galaxy cluster named Abell 400. Shown in this X-ray/radio composite image are the multimillion-degree gas and the radio jets emanating from the black holes. Credit: X-ray: NASA/CXC/AIfA/D.Hudson & T.Reiprich et al.; Radio: NRAO/VLA/NRL

Also on the black hole front is a new study using Chandra data that shows most of the energy released by matter as it falls toward a supermassive black hole appears in the form of high-energy jets traveling away from the object at speeds near that of light. So powerful are these jets that they could disrupt gas clouds that might otherwise cool into new stars.

“We see enough energy coming out of these black holes to completely stifle star formation,” said study co-author Christopher Reynolds of the University of Maryland. “This is an exciting finding because it demonstrates a direct interaction between black holes and galaxy formation, which is a hot topic in astrophysics right now.”

Performed by scientists at the University of Maryland, Stanford University, the University of Cambridge and New Mexico State University, the study will appear in an upcoming issue of the Monthly Notices of the Royal Astronomical Society. So much for old, ‘boring’ black holes, the kind that receive short shrift next to the more crowd-pleasing quasars. This work is a window into galaxy formation in the presence of black holes, another reminder (when we’re not talking about dark matter) of how much we have to learn about how the largest objects in the universe take shape.

Eberhardt Rechtin: Death of a Pioneer

Centauri Dreams is saddened to learn of the death of Eberhardt Rechtin, a pioneer of deep space exploration. The list of his accomplishments is long: Rechtin served as CEO of Aerospace Corporation (El Segundo, CA), as chief engineer of Hewlett-Packard, and as director of the Defense Advanced Research Projects Agency. But his career is best remembered for his work on the Deep Space Network that today allows us to track probes at the edge of the heliosphere.

Building that network was a towering achievement, and one that seemed unlikely in the 1950s, when Rechtin developed Microlock, a system of receivers that, deployed at different ground stations, could compensate for the shifting frequencies produced by rockets in flight. Those of us who remember the days right after Sputnik may recall the intense effort to expand America’s ground tracking capabilities; Microlock was at the heart of it, growing into the system that tracked the first US satellites. The Deep Space Instrumentation Facility would emerge as its successor.

DSIF grew as early robotic flights to the Moon became feasible and the first parabolic dish went up at Goldstone in California’s Mojave Desert. The system got an early shakeout with the launch of Pioneer 3, tracking the spacecraft throughout its failed attempt to reach the Moon. Rechtin could see that a worldwide network of receivers would be needed for future space work, one that could provide round-the-clock coverage through stations at Woomera (Australia), north of Johannesburg in South Africa, and at Goldstone itself.

Goldstone visitors in 1963

Image: Eberhardt Rechtin (far left) at the Goldstone facility in 1963. Credit: NASA/JPL.

By late 1963, DSIF had become the Deep Space Network. The big 210-foot dish that followed at Goldstone was complemented by changes of venue elsewhere, from Woomera to a site near Canberra, and out of South Africa to Robledo, near Madrid. An article on the DSN by Mark Wolverton quotes Rechtin on the challenges involved:

“I was told by Nobel Prize winners that it would not be possible really to communicate to the edge of the solar system,” Rechtin recalled in a 1995 oral-history interview with the Institute of Electrical and Electronics Engineers. “If you could, you couldn’t send back enough interesting information. You would have to have bandwidths of your receiving system wide enough to account for the Doppler shifts. If you did that you had to send megawatts of power back from enormous antennas at the edge of the solar system. Nobody knew how to do that…. Well, I sat down and figured out, ‘Wait a minute. We could track the Doppler.’”

Rechtin’s work at perfecting the phase-locked loop receiver helped the network compensate for the Doppler effect: the changes in frequency and wavelength caused by the relative motion between spacecraft and receiver on Earth. And his labors on solid-state ruby crystal masers (maser = microwave amplification by stimulated emission of radiation) allowed the creation of the world’s most sensitive radio receivers.
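
The principle is easy to sketch in code. What follows is a toy digital phase-locked loop in Python, not Rechtin’s receiver design but an illustration of the idea: the receiver runs its own oscillator, measures the phase error against the incoming carrier, and keeps nudging its frequency estimate so that lock holds even as Doppler drags the carrier around. Every parameter here is an arbitrary illustration value.

```python
import numpy as np

# Toy second-order PLL; all numbers are illustration values.
fs = 10_000.0          # sample rate (Hz)
f_carrier = 1_000.0    # nominal carrier frequency (Hz)
doppler_rate = 2.0     # carrier drift (Hz/s), standing in for Doppler shift

t = np.arange(0.0, 2.0, 1.0 / fs)
f_true = f_carrier + doppler_rate * t            # drifting carrier
phase_true = 2 * np.pi * np.cumsum(f_true) / fs  # integrated phase
signal = np.exp(1j * phase_true)                 # complex carrier samples

kp, ki = 0.05, 0.0005          # proportional and integral loop gains
phase_est, freq_est = 0.0, f_carrier
for s in signal:
    err = np.angle(s * np.exp(-1j * phase_est))  # phase error (radians)
    freq_est += ki * err * fs / (2 * np.pi)      # integrator tracks the drift
    inst = freq_est + kp * err * fs / (2 * np.pi)
    phase_est += 2 * np.pi * inst / fs           # advance the local oscillator

print(f"carrier now at {f_true[-1]:.1f} Hz, loop estimate {freq_est:.1f} Hz")
```

The integral term is what “tracks the Doppler”: it absorbs the steady frequency ramp so the loop bandwidth can stay narrow, which is exactly the sensitivity argument Rechtin was making.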

To say that sensitivity is critical for deep space work is a gross understatement. Ponder this: by the time Voyager’s 23-watt signal reaches us today, its beam has spread to a thousand times Earth’s diameter, and the power we collect is some twenty billion times weaker than that of a digital wristwatch. But we’re still tracking the Voyagers, and plan to continue at least until 2020.
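
The inverse-square arithmetic behind that statement is easy to run. Here is a minimal link-budget sketch; the dish sizes, efficiency, frequency, and distance are assumed round numbers for illustration, not official DSN figures.

```python
import math

P_tx = 23.0                 # Voyager transmitter power (W)
f_hz = 8.4e9                # X-band downlink (assumed round value)
d_sc, d_dsn = 3.7, 70.0     # spacecraft / ground dish diameters (m)
eff = 0.55                  # assumed aperture efficiency for both dishes
dist = 100 * 1.496e11       # ~100 AU expressed in meters

lam = 3.0e8 / f_hz                           # wavelength (m)
G_tx = eff * (math.pi * d_sc / lam) ** 2     # transmit antenna gain
A_rx = eff * math.pi * (d_dsn / 2) ** 2      # receive effective area (m^2)

# Friis-style budget: the transmitted power spreads over a sphere of radius dist.
P_rx = P_tx * G_tx * A_rx / (4 * math.pi * dist ** 2)
print(f"received power: {P_rx:.1e} W")       # on the order of 1e-18 W
```

An attowatt-scale signal is why maser amplifiers and enormous apertures matter so much.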

You can read more about the fascinating early days of the DSN in the Wolverton article, but do read Rechtin’s obituary in the Los Angeles Times as well. And let me throw in a plug for Wolverton’s outstanding book The Depths of Space: The Pioneer Planetary Probes (Joseph Henry Press, 2004), an essential resource for anyone tracking deep space research.

A Boost for Innovative Interstellar Explorer

The Innovative Interstellar Explorer mission discussed recently in these pages has received new support in a study of alternative propulsion concepts. IIE, you may remember, would use radioisotope electric propulsion (REP), tapping xenon as propellant. The mission’s goal is to deliver a scientific payload to 150-200 AU within a 15 to 20 year time frame; the concept thus parallels earlier mission designs built around solar sails and allows useful comparisons between the various propulsion methods that have been proposed for such deep space work.

In a paper to be published as a chapter in a book on NASA ‘Vision’ missions this summer, Thomas Zurbuchen (University of Michigan) and a team of researchers discuss the specifics of powering such a probe by nuclear methods and find them wanting. The paper is so rich that I want to discuss several issues from it in coming weeks. For now, though, let’s consider the propulsion dilemma as seen by scientists running the numbers using existing technologies.

A solar sail gets you to the interstellar medium more quickly than the kind of chemical propulsion with gravity assists used by Voyager, but even so the task is daunting, requiring the probe’s escape velocity to be a factor of 3 greater than Voyager 1’s. And existing sail designs deliver speed but at a cost in payload weight.
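
That factor of 3 falls straight out of the mission goal, as a quick sketch shows (Voyager 1’s roughly 17 km/s escape speed is a round number here, not a figure from the paper):

```python
AU_KM = 1.496e8      # kilometers per astronomical unit
YEAR_S = 3.156e7     # seconds per year

# 200 AU in 20 years (equivalently 150 AU in 15) is 10 AU per year.
v_needed = 200 * AU_KM / (20 * YEAR_S)   # average speed in km/s
v_voyager1 = 17.0                        # rough Voyager 1 speed, km/s

print(f"required: {v_needed:.1f} km/s, "
      f"or {v_needed / v_voyager1:.1f}x Voyager 1")   # ~47 km/s, ~2.8x
```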

NASA’s now defunct Prometheus project would have created a spacecraft too heavy and slow for a mission to the nearby interstellar medium; the Zurbuchen paper centers on a nuclear power source in conjunction with electric propulsion fine-tuned for the mission. For a variety of reasons, the best compromise among the various proposals seems to be the radioisotope electric propulsion advocated by Ralph McNutt and the IIE team.

One reason, of course, is the end of Prometheus funding, which affects all thinking on nuclear systems for an interstellar probe (ISP). From the paper:

This chapter details the results of one of two highly complementary technical approaches for an ISP that were funded under NASA’s Vision Mission initiative. The study described here focuses on an ISP that utilizes nuclear reactor technology that is not currently available for use in space. The study explores the utility of nuclear technology, its challenges, and its effects on scientific instruments. When this study was initiated, NASA was aggressively pursuing such technologies, but those efforts have been reined in. The second study, led by McNutt et al., seeks to address this challenge using an entirely different and perhaps more promising approach given the current political and technological conditions. These two reports should be read, in conjunction with a previous study exploring solar sailing technologies discussed later in detail, as attempts of an enthusiastic and unified science community to find a way to make ISP a reality in our lifetimes.

“Leaving the Heliosphere: A Nuclear-Powered Interstellar Probe” points out this key fact: nuclear electric methods deliver substantially more power, but at a considerable cost in technical complexity. The study assumes an active nuclear reactor as its power source and envisions a primary spacecraft with two ‘daughter’ probes. The payload is 1500 kg, accounting for these probes as well as the suite of scientific instruments aboard the mothercraft. Thrusting begins after launch and continues for 7.5 years, after which the spacecraft coasts; a Jupiter flyby allows the team to reduce propellant and time of flight.

Some problems:

  • The large mass involved makes current launch vehicles inadequate. In fact, a Saturn-class launch vehicle seems needed. Conceivably, heavy lift technology of the sort being studied for manned moon missions could meet this need. The paper explores a dual-launch scenario with the spacecraft assembled in orbit.
  • What nuclear methods make possible is a relative abundance of onboard power, affecting the amount of data that can be returned to Earth. In fact, the nuclear option provides 125 kWe as opposed to the IIE’s 1 kWe. But look at mass: the nuclear option has a total dry mass of 19,000 kg as opposed to IIE’s 600 kg (payload mass is, respectively, 1500 kg and 35 kg). It is, as Zurbuchen told me in an e-mail, ‘a tank, not a race car.’
  • The mass-to-power ratio involved in the nuclear design is too high to be viable when compared both to solar sail and other electric propulsion designs.

None of which is to say that nuclear options are forever discounted in deep space work. But it is to say that the best we can do with existing technologies seems to favor the Innovative Interstellar Explorer’s radioisotope electric propulsion system. If, that is, we are intent on getting a dedicated mission into the interstellar medium, which Centauri Dreams argues is the essential next step to follow up the extraordinary discoveries of the Pioneers and Voyagers that have gone ahead. As Zurbuchen and team sum it up:

The task at hand remains exciting and the motivation of the scientific community has not diminished. We are more committed than ever to determine how we can be involved, during our lifetimes, in one of the most historic missions of exploration that will ever happen: Interstellar Probe. Exploration is not easy—reports of the perils and the challenges faced by those who do it fill libraries. Sometimes the reason for failure to explore can be tracked to human weaknesses in the political arena. But always, humanity overcomes these obstacles, and finally, there is the day when the ship leaves the harbor and moves into a new, unexplored world. Now, the anchor lifts and the eyes are firmly set at the horizon, where surely the most exciting worlds are going to emerge.

The paper is Zurbuchen, Patel, Fisk et al., “Leaving the Heliosphere: A Nuclear-Powered Interstellar Probe,” to be published this summer under the auspices of the AIAA.

Three Planets, and a New Category

Three new planets have just been announced, but contrary to expectations, the number of planetary detections has not been picking up in recent times. The peak rate was 34 planetary discoveries in 2002, with the years since showing about 25 planets per year. There are a number of reasons for the slowdown, among them the fact that the readily detectable short-period planets around the most obvious candidate stars have already been found, and it will take years for enough data to accumulate to snare the presumably numerous outer planets in these systems. Greg Laughlin’s Systemic site provides the details.

But we do have a nice set of three new worlds delivered to us courtesy of the radial velocity method, which remains the primary detection scheme as we tune up transit and microlensing searches. HD 224693 offers a 0.7 Jupiter mass world in a 27-day orbit and HD 33283 shows a Saturn-class world in an 18-day orbit. HD 86081 is the most intriguing; it seems to have a 1.5 Jupiter mass planet in a tight two-day orbit. That extremely short orbital period sets up a high 17 percent transit probability, but no evidence for a transit has yet been found by the California-Carnegie Planet Search Team.
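
That 17 percent figure follows from simple geometry: for a circular orbit viewed at a random inclination, the transit probability is roughly the stellar radius divided by the orbital distance. Here is a sketch with assumed, roughly solar parameters for the host star (the measured values are in the discovery paper):

```python
# Transit probability ~ R_star / a for a circular orbit.
M_star = 1.2     # stellar mass in solar masses (assumed)
R_star = 1.2     # stellar radius in solar radii (assumed)
P_days = 2.14    # approximate orbital period of the HD 86081 planet

R_SUN_AU = 0.00465                    # solar radius in AU
P_yr = P_days / 365.25
a = (M_star * P_yr ** 2) ** (1 / 3)   # Kepler's third law, a in AU

prob = R_star * R_SUN_AU / a
print(f"a = {a:.3f} AU, transit probability ~ {prob:.0%}")   # ~16%
```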

Note this, too, about the HD 86081 planet: the Optical Gravitational Lensing Experiment has found three of its five observed transiting planets to be in this category, with orbital periods of two days or less. Coupled with this latest find and accounting for selection bias between the two detection methods, we seem to have identified what the California-Carnegie team calls ‘very hot Jupiters,’ an apparently rare class of exoplanet occurring perhaps an eighth to a tenth as often as the far more commonly observed ‘hot Jupiters’ in somewhat wider orbits.

The paper is Johnson, Marcy, Laughlin et al., “The N2K Consortium VI: Doppler Shifts Without Templates and Three New Short-Period Planets,” available here.

Antimatter’s Advantages (and the Catch)

One of the beauties of antimatter is its efficiency. A fission reaction converts about a tenth of one percent of its mass into energy, whereas the annihilation of antimatter and matter converts 100 percent of the mass into energy. No wonder tiny amounts of antimatter can have such powerful effects. Put a gram of matter together with a gram of antimatter and you release some 1.8 × 10^14 joules, the equivalent of a blast of more than 40 kilotons, roughly three times the bomb that destroyed Hiroshima.
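
The arithmetic is worth running for yourself; annihilation converts both the gram of antimatter and the gram of ordinary matter it meets:

```python
C = 2.998e8            # speed of light (m/s)
KT_TNT_J = 4.184e12    # joules per kiloton of TNT

m_kg = 2e-3            # 1 g antimatter + 1 g matter, fully converted
E = m_kg * C ** 2      # E = mc^2

print(f"E = {E:.2e} J, about {E / KT_TNT_J:.0f} kilotons of TNT")
```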

And if you really want to see antimatter’s potential, consider what it does to mass ratios, which compare the mass of a fully fueled spacecraft with that of an empty one. In his book Mirror Matter: Pioneering Antimatter Physics (New York: John Wiley & Sons, 1988), Robert Forward spoke of antimatter-driven spacecraft with mass ratios of 5 to 1 (by contrast, the Apollo missions operated with a ratio of 600 to 1). Indeed, Forward believed that a 1-ton probe to Alpha Centauri would require roughly four tons of liquid hydrogen and forty pounds of antimatter.
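
The Tsiolkovsky rocket equation shows why a mass ratio of 5 is so striking. The comparison below is a sketch; the 10 percent of lightspeed exhaust velocity is an assumed illustration value, not Forward’s figure:

```python
import math

def delta_v(v_exhaust_kms, mass_ratio):
    """Tsiolkovsky rocket equation: delta-v = v_e * ln(m_full / m_empty)."""
    return v_exhaust_kms * math.log(mass_ratio)

MR = 5.0   # Forward's antimatter mass ratio
# Chemical exhaust ~4.4 km/s; the antimatter value is assumed.
for name, v_e in [("chemical", 4.4), ("antimatter", 0.10 * 3.0e5)]:
    print(f"{name}: delta-v at mass ratio {MR:.0f} "
          f"= {delta_v(v_e, MR):,.0f} km/s")
# chemical -> ~7 km/s; antimatter -> ~48,000 km/s (0.16 c)
```

The same mass ratio that barely gets a chemical rocket to Earth orbit yields a respectable fraction of lightspeed once the exhaust velocity is high enough.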

All of which takes us back to Gerald Smith’s intriguing work with positrons. I first ran into Smith’s ideas when he collaborated with Steven Howe, Raymond Lewis, and Kirby Meyer on a project called AIMStar (Antimatter Initiated Microfusion Starship). Here’s a link, but beware: it’s a PDF file. With a mission goal of reaching 10,000 AU — the domain of the Oort Cloud’s comets — in fifty years, AIMStar would use antimatter to ignite a fusion reaction. Building on an earlier design that used antiprotons to trigger fission, AIMStar would tap tens of micrograms of antimatter to create the reaction, injecting deuterium and helium-3 into a cloud of antiprotons.

Note: I made an error in Centauri Dreams (the book) when describing AIMStar: I mentioned it using 30 to 130 milligrams of antimatter, rather than the correct 30 to 130 micrograms. This whopper has now been added to the Errata page.

Another design to look at as you consider Smith’s ‘positron rocket’ idea also comes from Penn State, where AIMStar originated. The Ion Compressed Antimatter Nuclear Rocket (ICAN-II) would use pellets of uranium and liquid hydrogen, with antiprotons triggering the nuclear reaction. In both cases, a critical factor was working within the limits of available antimatter. Note that Smith’s new design calls for antimatter in the milligram range (Smith thinks he’ll need tens of milligrams, roughly a thousandfold step up from the AIMStar design).

The key to all our antimatter hopes is increased antimatter production. The last time I looked, the cost of producing antimatter was about $62.5 trillion per gram, or $1.75 quadrillion per ounce. Get this: making antiprotons in particle beam collisions takes ten billion times more energy than is stored in their mass, and a few years ago, CERN pointed out that the amount of antimatter an accelerator laboratory can produce in a year is enough to make a 100-watt lightbulb shine for fifteen minutes.
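
Running CERN’s comparison backwards through E = mc² shows just how small that yield is (remembering that annihilation consumes equal parts matter and antimatter):

```python
C = 2.998e8               # speed of light (m/s)

E = 100 * 15 * 60         # 100 W for 15 minutes = 90,000 J
m_total = E / C ** 2      # total mass annihilated (kg)
m_antimatter = m_total / 2

print(f"{E} J implies roughly {m_antimatter * 1e12:.1f} ng "
      "of antimatter per year")    # about half a nanogram
```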

So there’s the crux: we have to find ways to ramp up production and lower cost until we’re in the range Gerald Smith talks about. How to proceed? Robert Forward recommended the development of antimatter factories, dedicated facilities that would spare us from relying for our antimatter on particle accelerator labs with larger agendas. Here’s what Forward says about this in his Indistinguishable from Magic (New York: Baen Books, 1995), 25-26:

In a study I carried out for the Air Force Rocket Propulsion Laboratory, I showed that if an antiproton factory were designed properly by engineers, instead of by scientists with limited budgets and in a hurry to win a Nobel prize, the present energy efficiency (electrical energy in compared to antimatter annihilation energy out) could be raised from a part in sixty million to a part in ten thousand, or 0.01%, while at the same time, the cost of building the factory could be substantially lowered compared to the cost of the high precision scientific machines. From these studies, I estimated the cost of the antimatter at ten million dollars per milligram.

Now we’re getting somewhere: at $10 million per milligram, antimatter’s enormous energy density begins to make it cost competitive with chemical propulsion. We won’t have Forward-style factories to rely on for the foreseeable future, but designs that maximize the antimatter we can produce (and Steve Howe’s antimatter sail is certainly high on that list) will help us get antimatter into the rocket business. Both Smith’s Positronics Research and Howe’s Hbar Technologies are companies to watch as antimatter research continues.