A Practical Positron Rocket?

Antimatter seems the boldest — and newest — of propulsion concepts, but in fact Eugen Sänger’s work on the uses of antimatter in rocketry goes back to the 1930s. The German scientist thought it would be possible to reflect gamma rays produced by the annihilation of electrons and positrons to produce thrust. His work wowed the Fourth International Astronautical Congress in 1952, but there was a catch: the gamma rays created by this reaction seemed too energetic to use the way Sänger hoped — they penetrated all known materials and could not be channeled effectively into a rocket exhaust.

Which is why most antimatter designs since have focused on antiprotons. When antiprotons and protons annihilate each other, they produce not only gamma rays but pi-mesons, short-lived particles also known as pions. Many of these are charged as they emerge from the proton/antiproton annihilation, and can therefore be controlled by sending them through a strong magnetic field. Early designs by Robert Forward and David Morgan (Lawrence Livermore National Laboratory) took advantage of these traits even though the technology to produce sufficient antimatter lagged far behind their visionary concepts.

But antimatter researcher Gerald Smith and colleagues have been working on a study for NASA’s Institute for Advanced Concepts that takes us back to positrons, one that could power a human mission to Mars with tens of milligrams of antimatter. Not only would such a design be far lighter than competing chemical and nuclear options, but it would be fast enough to dramatically shorten flight time to the Red Planet; advanced versions might make the trip in as little as 45 days.

Smith’s background in antimatter research needs little elaboration; he is a towering figure in the field. While at Pennsylvania State University, he oversaw two key hybrid designs, ICAN-II and AIMStar, that used antimatter as a catalyst to induce nuclear reactions. The author of hundreds of research papers, he is also the designer of both the Mark I portable antimatter trap and the current state of the art, the High Performance Antimatter Trap (HiPAT).

He is, in other words, a key figure when it comes to wedding powerful antimatter technologies to practical spacecraft designs. Now head of Positronics Research LLC in New Mexico, Smith has built on the lessons learned from these earlier concepts to promote a new design that seems to offer enormous benefits, if we can produce the antimatter needed to make it fly.

One advantage of positrons is that the gamma rays they generate are about 400 times less energetic than those created by antiprotons, making the spacecraft a far safer place for human crews. According to a description of Smith’s work posted on the NIAC Web site (PDF warning), the positron/electron annihilation “…results in the creation of two soft 511 keV gamma rays. These gamma rays can be easily absorbed to heat a working fluid in a closed, high-efficiency thermodynamic power system, or directly into a propellant.” The NIAC work is ongoing — Smith’s Phase I study was completed in March and he is now making the case for an advanced Phase II project that will examine design variants like the positron reactor shown below.
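The “400 times less energetic” figure is easy to sanity-check. Each positron/electron annihilation converts the rest energy of both particles into two 511 keV gamma rays, while proton/antiproton annihilation produces gammas of roughly 200 MeV (a typical textbook value for the neutral-pion decay products, not a number taken from Smith’s study):

```python
# Back-of-the-envelope comparison of gamma-ray energy scales in
# positron vs. antiproton annihilation. The ~200 MeV figure is a
# typical textbook value for pi0-decay gammas, not from Smith's study.
M_E_KEV = 511.0           # electron rest energy m_e * c^2, in keV
PBAR_GAMMA_KEV = 200e3    # typical gamma energy from pi0 decay, in keV

positron_gamma = M_E_KEV  # each of the two gammas carries m_e * c^2
ratio = PBAR_GAMMA_KEV / positron_gamma
print(f"positron annihilation gamma: {positron_gamma:.0f} keV")
print(f"antiproton/positron gamma energy ratio: ~{ratio:.0f}x")
```

The ratio comes out near 400, matching the figure quoted above.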

Smith's positron engine concept

Image: A diagram of a rocket powered by a positron reactor. Positrons are directed from the storage unit to the attenuating matrix, where they interact with the material and release heat. Liquid hydrogen (H2) circulates through the attenuating matrix and picks up the heat. The hydrogen then flows to the nozzle exit (bell-shaped area in yellow and blue), where it expands into space, producing thrust. Credit: Positronics Research, LLC.

The high cost of antimatter is always an issue, but one that may become manageable. Smith is now estimating that the 10 milligrams of positrons a human Mars mission would require could be produced for roughly $250 million. It seems a reasonable assumption that antimatter production costs will continue to go down, just as it is also reasonable to question the wisdom of using staged chemical rockets with launch costs of $10,000 per pound when designs that could undertake far more sophisticated missions are waiting to be developed. Let’s talk more about these notions tomorrow and dig into antimatter’s advantages when it comes to deep space work.
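For a sense of scale, here is a quick sketch of the energy locked up in those 10 milligrams of positrons, assuming complete annihilation with an equal mass of electrons:

```python
# Energy released when 10 mg of positrons annihilate with an equal
# mass of electrons: E = (m_positrons + m_electrons) * c^2.
# Illustrative arithmetic only; mission numbers are Smith's, not mine.
C = 2.998e8               # speed of light, m/s
m_positrons = 10e-6       # 10 milligrams, in kg
total_mass = 2 * m_positrons
energy_j = total_mass * C**2
tnt_tons = energy_j / 4.184e9   # 1 ton of TNT = 4.184e9 J
print(f"{energy_j:.2e} J, roughly {tnt_tons:.0f} tons of TNT equivalent")
```

Around 1.8 terajoules from a speck of matter lighter than a grain of rice, which is the whole appeal.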

A Targeting Strategy for Optical SETI

Optical SETI has generally adopted the conventions of traditional radio SETI by targeting nearby, Sun-like stars. It’s a strategy that makes sense, but given the number of potential transmitting stars and the need for broader surveys, what we’d ultimately like is a way of optimizing our chances: a strategy for finding optical signals that both we and any transmitting civilization could independently deduce. That’s the challenge Seth Shostak (SETI Institute) and Ray Villard (Space Telescope Science Institute) take on in a paper called “A Scheme for Targeting SETI Observations.”

So what makes immediate sense as a method of star targeting? Something sufficiently repetitive to be used as a kind of pointer. Shostak and Villard argue for planetary transits as a way of providing temporal synchronization between distant civilizations. A transmitting society could time its signal to be sent during the transit as observed from the transmitter, or timed to arrive when the transit occurs in the target system. What makes these events propitious is that the precise alignments involved would be perceived as likely times for the receiving planet’s astronomers to look for a signal.

From the paper:

Synchronizing with transits requires either the transmitting or receiving party to accurately know the distance between the two star systems. Since the number of targets will be far smaller than the number of potential senders, the lesser burden for acquiring this information is on the transmitting end (and might be possibly concomitant to the discovery and evaluation of targets). Ergo, the reasonable strategy for receivers (us) is to search for signals now from star systems in the Sun’s anti-direction, stars that are able to see us in transit, rather than trying to account for the propagation time to all possible senders. [italics mine]

Thus an efficient OSETI search is one that monitors the sky within a 0.5-degree swath of the ecliptic corresponding to the direction away from the Sun. And the optimum observing time is near the solstices, “…when the anti-Sun direction coincides with the position at which the ecliptic crosses the plane of the Milky Way.” That’s a six-week window that should be logical and compelling to sender and receiver alike.
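The anti-Sun direction is straightforward to compute for any date. A minimal sketch, using the standard low-precision solar-position formula from the Astronomical Almanac (the function names and sample date are my own illustration, not from the paper):

```python
import math
from datetime import datetime

def sun_ecliptic_longitude(date):
    """Approximate ecliptic longitude of the Sun in degrees, using the
    Astronomical Almanac's low-precision formula (good to ~0.01 deg)."""
    n = (date - datetime(2000, 1, 1, 12)).total_seconds() / 86400.0
    mean_lon = (280.460 + 0.9856474 * n) % 360.0
    mean_anom = math.radians((357.528 + 0.9856003 * n) % 360.0)
    return (mean_lon + 1.915 * math.sin(mean_anom)
            + 0.020 * math.sin(2 * mean_anom)) % 360.0

def anti_sun_longitude(date):
    """Ecliptic longitude of the anti-Sun point, the direction Shostak
    and Villard suggest an OSETI search should monitor."""
    return (sun_ecliptic_longitude(date) + 180.0) % 360.0

# Near the December solstice the anti-Sun point sits at ecliptic
# longitude ~90 degrees, close to one of the two spots where the
# ecliptic crosses the galactic plane.
print(anti_sun_longitude(datetime(2006, 12, 21)))
```

At the June solstice the anti-Sun point sits near longitude 270 degrees, toward the galactic center, which is why the solstice windows line up with the Milky Way crossings the paper describes.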

Centauri Dreams’ take: Targeted searches make maximum use of our resources, with more potential for success than undifferentiated surveys. What makes papers like this one exciting is that we are trying to work out not only what makes sense to our own species but what would be apparent to civilizations whose parameters we cannot even guess. Shostak and Villard are compelling because the transits they recommend are an obvious marker, assuming that there are societies that choose to communicate through such beacons — and only an ongoing OSETI program will be able to draw conclusions about that.

The Riches of an ‘Empty’ Field

The image below is merely a marker — it leads to something far grander. For what you’re looking at is a small part of a vast image of ‘empty’ space, made with over 64 hours of observations using the Wide-Field Camera on the 2.2-meter La Silla telescope in Chile. Rather than linking to a simple enlargement of this fragment, I’ve linked instead to a zoomable imaging tool set up by the European Southern Observatory, where you can roam the galaxies in any direction you please in what is called the Deep 3 Field.

A portion of Deep Field 3

Image (click to use the zoom tool): Part of the Deep 3 field of the Deep Public Survey, showing the brightest galaxy in the field, ESO 570-19 (upper left), and the brightest star, UW Crateris (upper right). This red giant is a variable star about 8 times fainter than what the unaided eye can see. An ‘S’-shaped ensemble of galaxies is also visible in the lower part of the picture. Credit: European Southern Observatory.

The Deep 3 field lies in what appears to the naked eye as an empty patch of sky in the southern constellation Crater, where the brightest star is of 4th magnitude. The region of sky covered is approximately five times the size of the full moon; within the image are objects 100 million times fainter than what the unaided eye can see. No field is truly empty; this one is actually an open window, taking us back to the most distant regions, and times, of the universe.
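That “100 million times fainter” translates directly into astronomical magnitudes, since each factor of 100 in brightness is 5 magnitudes. A quick sketch, taking magnitude 6 as the conventional naked-eye limit:

```python
import math

# How deep is "100 million times fainter than the naked eye"?
# Magnitude difference: delta_m = 2.5 * log10(brightness ratio).
faintness_factor = 1e8
delta_m = 2.5 * math.log10(faintness_factor)

NAKED_EYE_LIMIT = 6.0          # conventional dark-sky limit
limiting_mag = NAKED_EYE_LIMIT + delta_m
print(f"{delta_m:.0f} magnitudes deeper, limiting magnitude ~{limiting_mag:.0f}")
```

Twenty magnitudes beyond the naked eye, or a limiting magnitude around 26, which is what lets an ‘empty’ field fill up with galaxies.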

A Key Paper from an Astounding Source

Most papers about interstellar flight appear in serious venues like Acta Astronautica or the Journal of the British Interplanetary Society. The latter, in fact, has emerged as the leading arena for such discussions, and the growth of the arXiv site has brought many new ideas to light in the digital realm. It may be surprising, then, to find that the popular Astounding Science Fiction was once a key player in interstellar theory with the publication of an article that brought solar sails to the attention of the public — and to many scientists — for the first time. But the magazine, in the hands of the capable John Campbell, was often home to science essays, and none more prescient than this one.

May 1951 Astounding

“Clipper Ships of Space” appeared under the byline Russell Saunders in Astounding’s issue of May, 1951. ‘Saunders’ was in reality an engineer named Carl Wiley, who, we may speculate, wrote under a pseudonym to avoid any damage to his reputation — many scientists and engineers read science fiction, but some have a more tolerant view of it than others. And to be sure, the idea of a solar sail wasn’t Wiley’s own invention; indeed, the British physicist and socialist J.D. Bernal had written about ‘space sailing’ in 1929 (in his The World, the Flesh & the Devil: An Enquiry into the Future of the Three Enemies of the Rational Soul).

And so, even earlier, had Konstantin Tsiolkovsky, while his colleague Friderikh Arturovich Tsander had investigated huge, thin reflecting surfaces for such purposes in the 1920s, important work but limited in circulation. Some trace the notion of solar sailing as far back as Kepler. But it was left to Wiley to get the idea out to a much broader audience, and he did so convincingly in a short, seven-page article that began like this:

It is becoming more and more taken for granted that the only possible method of propulsion in a vacuum is the rocket. It is true that science fiction is full of various types of space-warp drives. However, even the fertile imaginations of writers in this field have not challenged the rocket as the only practical, or even possible interplanetary drive in the foreseeable future.

I intend to propose another method of propulsion in a vacuum which is based on present day physics. I will show that in many ways this drive is more practical than the rocket. In order to prove my point I will have to use a certain amount of mathematics. This will permit those who wish to, a chance to check my assertions. The rest may follow my verbal argument which I hope will be fairly coherent without the mathematics.

And with those straightforward lines, solar sailing was launched into the public consciousness, although it would be seven years before Richard Garwin delivered the first look at solar sails in a technical journal. Garwin’s paper was “Solar Sailing: A Practical Method of Propulsion within the Solar System,” Jet Propulsion 28 (March 1958): 188-90. It’s a short, solid look at the topic, but lacks the charm of the Wiley essay, and the fact that solar sailing would continue to take shape through classic science fiction stories like Arthur C. Clarke’s “The Wind from the Sun” and Poul Anderson’s “Sunjammer” reminds us that the interplay between science and fiction can be productive indeed.

Microlensing and Its Limits

Recent exoplanet detections like the ‘super Earth’ found orbiting a red dwarf 9000 light years away have put the spotlight on gravitational microlensing. The phenomenon occurs when light from a background star is deflected by the gravity of an intervening object; in other words, one star passing quite near or in front of a far more distant one (as seen from Earth) will cause a lensing effect that can be studied. We’ve seen that a distant quasar can be lensed by a foreground galaxy, producing eerie, multiple images of the same quasar.

But things get trickier when it comes to microlensing within our own galaxy using individual stars. We can’t resolve the images created by these events with current telescopes, but the lensing does produce a measurable amplification of the distant star’s light. And any planets in orbit around the intervening star can perturb that lensing effect enough to signal their presence. The beauty of this is that microlensing is sensitive to planets down to terrestrial size.
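The amplification follows the standard point-source, point-lens light curve — textbook microlensing rather than anything specific to the new study. A minimal sketch:

```python
import math

def magnification(u):
    """Point-source, point-lens microlensing magnification, where u is
    the lens-source angular separation in units of the Einstein radius.
    A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4))."""
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

# The closer the alignment (smaller u), the larger the brightening:
for u in (1.0, 0.5, 0.1):
    print(f"u = {u}: A = {magnification(u):.2f}")
```

At u = 1 (source on the Einstein ring) the star brightens by about 34 percent; at u = 0.1 it brightens tenfold. A planet orbiting the lens star perturbs this smooth curve, which is the signal the surveys watch for.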

But for microlensing to work, the two stars involved must be precisely aligned. The method thus demands keeping a close watch on millions of stars to catch these rare events, which is why microlensing observations for exoplanets are focused on the galactic bulge and the Magellanic Clouds. And even there the wait can be long, for the chances of a background star being microlensed at any given time are roughly 10⁻⁶, according to a useful new study by Nicholas James Rattenbury (Jodrell Bank Observatory, University of Manchester).

“Microlensing is currently detecting planets in a previously unreachable region of the planetary mass-radius space,” Rattenbury writes. In fact, “Microlensing is returning detections of planets with masses approaching that of Earth.” But he goes on to note that most of the lens stars are M-class dwarfs, and the planets thus discovered would, because of the limits of the current method, be well removed from their habitable zones. The clear implication is that we are still several technological steps away from being able to detect habitable Earth-class planets around such stars.

One way to improve our capabilities is through proposed space telescopes like the Microlensing Planet Finder, which Rattenbury believes would be capable of detecting Earth-mass planets in the habitable zones of G and K-type stars. In fact, MPF could, according to its proponents, detect planets down to 0.1 Earth masses, and at separations from 0.7 AU to infinity. For more on MPF, see Bennett, Bond, Cheng et al., “The Microlensing Planet Finder: Completing the Census of Extrasolar Planets in the Milky Way,” available here (PDF warning).

But back to Rattenbury, who comments here on microlensing’s bright future:

Improvements in survey and follow-up instrumentation and operation will increase the discovery rate of low-mass planets, leading to estimates of the Galactic planetary mass-function. Many more planets will be discovered via microlensing. New ground and space telescopes will have higher sensitivity to low-mass planets than current instrumentation. In particular, a space telescope such as the proposed MPF mission will be sensitive to habitable Earth-like planets around Sun-like stars. Within the next ten years we can expect that dozens of extra-solar planets will have been discovered via microlensing, possibly some very similar to Earth.

Centauri Dreams’ take: As for habitable worlds around red dwarfs, an exciting and rapidly developing subspecialty of the exoplanet hunt, we seem unlikely to find them through microlensing even via missions like MPF, but promising projects like Transitsearch.org hold out the real possibility of detecting Earth-size worlds and smaller around relatively nearby red dwarfs. The exoplanet hunt thus continues as a cluster of techniques are fine-tuned, from ever more accurate radial velocity measurements to space-based microlensing proposals and transit searches that will one day snare us a terrestrial world.