Centauri Dreams
Imagining and Planning Interstellar Exploration
Thoughts on Antihydrogen and Propulsion
Normally when we talk about interstellar sail concepts, we’re looking at some kind of microwave or laser beaming technology of the kind Robert Forward wrote about, in which the sail is driven by a beam produced by an installation in the Solar System. Greg and Jim Benford have carried out sail experiments in the laboratory showing that microwave beaming could indeed drive such a sail. But Steven Howe’s concept, developed in reports for NASA’s Institute for Advanced Concepts, involves antimatter released from within the spacecraft. The antimatter would strike a sail enriched with uranium-235, driving the craft to velocities of well over 100 kilometers per second.
That’s fast enough to make missions to the nearby interstellar medium feasible, and it points the way to longer journeys once the technology has proven itself. But everything depends upon storing antihydrogen, which is an antimatter atom — an antiproton orbited by a positron. Howe thinks the antihydrogen could be stored in the form of frozen pellets, these to be kept in micro-traps built on integrated circuit chips that would contain the antihydrogen in wells spaced at periodic intervals, allowing pellets to be discharged to the sail on demand. The storage method alone makes for fascinating reading, and you can find it among the NIAC reports online.
Of course, we have to create the antihydrogen first, a feat achieved back in 2002 at CERN through the mixing of cold clouds of positrons and antiprotons. And it goes without saying that before we get to the propulsion aspect of antihydrogen, we have to go to work on the differences between hydrogen and antihydrogen, while investigating the various kinds of long-term storage options that might be used for antimatter. Does antihydrogen have the same basic properties as hydrogen? CERN is moving on to study the matter, with new work showing the amount of energy needed to change the spin of antihydrogen’s positrons.
The report comes from CERN’s Antihydrogen Laser Physics Apparatus (ALPHA) experiment, the same team that trapped antihydrogen for over 1000 seconds last year. Successful trapping now allows the analysis of the antihydrogen itself, applying microwave pulses to affect the magnetic moment of the anti-atoms. This BBC story quotes ALPHA scientist Jeffrey Hangst:
“When that happens, it goes from being trapped like a marble in a bowl to being repelled, like a marble on top of a hill,” Dr Hangst explained.
“It wants to ‘roll away’, and when it does that, it encounters some matter and annihilates, and we detect the fact that it disappears.”
Image: The ALPHA experiment facility at CERN. Credit: Jeffrey Hangst/CERN.
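Hangst’s marble analogy describes a positron spin flip in the trap’s magnetic field, and a quick back-of-envelope calculation (my own numbers, assuming a trap field of roughly 1 tesla rather than ALPHA’s published parameters) shows why microwaves are the tool of choice: the resonance frequency falls in the tens of gigahertz.

```python
# Rough estimate (not ALPHA's published values): the frequency needed to
# flip a positron spin in a magnetic field B follows the electron-spin
# resonance condition f = g * mu_B * B / h.

G_E = 2.0023        # electron (and positron) g-factor
MU_B = 9.274e-24    # Bohr magneton, J/T
H = 6.626e-34       # Planck constant, J*s

def spin_flip_frequency_hz(b_tesla):
    """Resonance frequency for a positron spin flip in a field of b_tesla."""
    return G_E * MU_B * b_tesla / H

f = spin_flip_frequency_hz(1.0)   # assumed ~1 T trap field
print(f"{f / 1e9:.1f} GHz")       # tens of GHz: the microwave band
```

Run in reverse, the same relation is what lets a measured flip frequency probe the magnetic moment of the anti-atom.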
The work is part of a much larger program that will probe antihydrogen with laser light, the goal being to explore the energy levels within antihydrogen. What the work may eventually uncover, perhaps in addition to tuning up our methods of antihydrogen storage along the way, is whether there are clues in the makeup of antihydrogen that explain why the universe is filled with matter and not its opposite, given that both matter and antimatter existed in equal amounts in the earliest moments of the universe. The light emitted as an excited electron returns to its ground state is well studied in hydrogen and assumed to be identical in its antihydrogen counterpart.
These are early results that promise much, but the important thing is that the ALPHA team has demonstrated that their apparatus has the capability of making these measurements on antihydrogen. Uncovering the antihydrogen spectrum will take further work but could prove immensely useful in our understanding of the simplest anti-atom. We’re a long way from the antimatter sail concept, but Howe’s Phase II report at NIAC covered his own experiments with antiprotons and uranium-laden foils, critical work for fleshing out the architecture for a mission that may one day fly once we’ve mastered antihydrogen storage and learned to produce the needed milligrams of antimatter (current global production is measured in nanograms per year).
Antimatter’s promise has always been bright, given that 10 milligrams of the stuff used in an antiproton engine (not Howe’s sail) heating hydrogen through antimatter annihilation would produce the equivalent of 120 tons of hydrogen/liquid oxygen chemical fuel. But as soon as you start talking about the energy involved, the difficulty in producing and storing antimatter puts a damper on the entire conversation. That’s one reason why, at a time when antimatter costs in the neighborhood of $100 trillion per gram, finding natural antimatter sources in space is such an interesting possibility. It was just last year that we learned about the inner Van Allen belts’ role in trapping natural antimatter, and James Bickford (Draper Laboratory, Cambridge MA) has been examining more abundant sources farther out in the Solar System.
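That equivalence holds up on a napkin (my own arithmetic, using an assumed 13 MJ/kg for hydrogen/oxygen combustion; the exact figure depends on the mixture ratio):

```python
# Back-of-envelope check, not from Howe's report: annihilating 10 mg of
# antimatter with an equal mass of ordinary matter releases E = m * c^2
# for the combined mass, which we compare against the chemical energy
# of ~120 metric tons of H2/LOX propellant.

C = 2.998e8             # speed of light, m/s
ANTIMATTER_KG = 10e-6   # 10 milligrams
CHEM_J_PER_KG = 13e6    # assumed H2/LOX specific energy, ~13 MJ/kg

e_annihilation = 2 * ANTIMATTER_KG * C**2    # matter + antimatter both convert
e_chemical = 120_000 * CHEM_J_PER_KG         # 120 tons of chemical propellant

ratio = e_annihilation / e_chemical
print(f"Annihilation energy: {e_annihilation:.2e} J")
print(f"Chemical energy:     {e_chemical:.2e} J")
print(f"Ratio: {ratio:.2f}")                 # close to 1, as advertised
```

The two figures come out within about 15 percent of each other, which is why the 10-milligram comparison keeps showing up in the literature.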
The CERN work is reported in Amole et al., “Resonant quantum transitions in trapped antihydrogen atoms,” published online in Nature 9 January 2012 (abstract). For more on antimatter sources in nearby space, see Adriani et al., “The discovery of geomagnetically trapped cosmic ray antiprotons,” Astrophysical Journal Letters Vol. 737, No. 2, L29 (abstract / preprint). I discuss the recent results from the Pamela satellite (Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics) and provide sources for Bickford’s continuing work on naturally occurring antimatter in Antimatter Source Near the Earth.
Looking Into Kepler’s Latest
I’ve held off a bit on the latest Kepler data release because I wanted some time to ponder what we’re looking at. The list of candidate planets here is based on data from the first sixteen months of the mission, and at first blush it seems encouraging in terms of our search for Earth-class planets. But dig deeper and you realize how much we still have to learn. Not all the trends point to the near ubiquity of rocky worlds in the habitable zone that some have hoped for. You might remember, for example, Carl Sagan famously saying (on ‘Cosmos’) that one out of every four stars may have planets, with two in each such system likely to be in the habitable zone.
Kepler’s Candidates and Some Qualifications
I remember being suitably agog at that statement, but we’ve learned more since. John Rehling, writing an essay for SpaceDaily, doesn’t miss the Sagan quote, using it as a contrast with his own analysis of the new Kepler material, which suggests that Earth-like planets may be considerably harder to find. Let’s talk about what’s going on here. A Kepler news release from February 28 breaks down the highlights. We find that the total count of Kepler planet candidates has reached 2321, with 1091 emerging in the new data analysis. Here we are dealing with 1790 host stars, and what caught everyone’s attention was this:
A clear trend toward smaller planets at longer orbital periods is evident with each new catalog release. This suggests that Earth-size planets in the habitable zone are forthcoming if, indeed, such planets are abundant.
Indeed, the Kepler catalog now holds over 200 Earth-size planet candidates and over 900 that are smaller than twice the size of Earth, a 197 percent increase in this type of planet candidate (candidates larger than 2 Earth radii grew by about 52 percent). Ten planets in the habitable zone (out of a total of 46 planet candidates there) are near Earth in size. We also learn that the fraction of host stars with multiple candidates has grown from 17 to 20 percent, and that improvements in the Kepler data analysis software are helping us identify smaller and longer-period candidates at a faster-than-expected clip. So far, so good.
What John Rehling did was to go to work on two biases that affect the Kepler data set: 1) The Kepler data is more complete in regions close to the host star, which is reflected in the fact that over 90 percent of the observed candidates have shorter periods than Mercury; and 2) Because of the transit methodology used, larger planets are more readily observed than small ones. And here we note (see diagram) that most observed candidates are considerably larger than Earth.
Rehling uses the two forms of bias to calculate a numerical de-bias factor, having put the observed candidates into bins based on radius and orbital period. From his essay:
Where the positive observations are significant in number, we can calculate the universal abundance of such planets. Where there have been few or no observations, we can use the de-bias factor to infer probabilistically a ceiling on the number of such worlds.
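To make the method concrete, here is a toy version of that de-biasing step (all numbers invented for illustration; Rehling’s actual bins, counts and factors are in his essay). The two biases he corrects are geometric (the chance that a randomly oriented orbit transits at all scales as the stellar radius over the orbital distance) and instrumental (small planets produce shallow transits the pipeline misses more often).

```python
# Illustrative de-biasing sketch. Bin counts, orbital distances and the
# detection efficiency below are made-up placeholders, not Kepler data.

R_SUN_AU = 0.00465   # solar radius expressed in astronomical units

def transit_probability(a_au, r_star_au=R_SUN_AU):
    """Geometric probability that a randomly oriented orbit transits."""
    return r_star_au / a_au

def debias(observed_count, a_au, detection_efficiency):
    """Infer the underlying per-bin population from an observed count."""
    p = transit_probability(a_au) * detection_efficiency
    return observed_count / p

# Hypothetical bin: 50 Earth-size candidates at a = 0.1 AU, with an
# assumed 50% pipeline detection efficiency for planets this small.
true_count = debias(observed_count=50, a_au=0.1, detection_efficiency=0.5)
print(f"Implied underlying population: ~{true_count:.0f} planets")
```

The same division, applied bin by bin across radius and period, is what turns raw candidate counts into the abundance estimates discussed below; where a bin is empty, the de-bias factor instead sets a ceiling on how many such planets could be hiding.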
In a similar approach, Wesley Traub used the earlier four-month data release to calculate absolute frequencies, extrapolating trends in planet radius and orbital period to project that about 34 percent of stars host an Earth-sized planet in the habitable zone, a happy speculation for the future goal of finding truly Earth-like planets as possible abodes for life.
A happy speculation indeed, but adjusted for the new data release (Traub was working with bins containing nothing longer than 50-day orbital periods), we can begin to tune up the accuracy. Even so, we should note that 16 months of observation isn’t enough to flag an Earth-like planet (remember, we need three transits, so detecting a true Earth analogue requires 24 months of observation or more). We’re extrapolating, then, based on trends, and Rehling finds two at work in the new data: first, that we see more Earth-size planets close to their stars; second, that we see more giant planets farther away from their stars.
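The three-transit requirement is easy to quantify (my own arithmetic, ignoring the phase of the first transit, which in practice pushes the baseline out even further):

```python
# With transits at t = 0, P, 2P, catching three of them needs an observing
# baseline of at least twice the orbital period. Phase matters in practice:
# the first transit rarely falls at the very start of the data.

DAYS_PER_MONTH = 30.44   # average month length

def min_baseline_months(period_days, n_transits=3):
    """Shortest baseline (months) that can contain n_transits transits."""
    return (n_transits - 1) * period_days / DAYS_PER_MONTH

earth_analogue = min_baseline_months(365.25)
print(f"Earth analogue needs ~{earth_analogue:.0f} months")

# Conversely, 16 months of data can only guarantee three transits for
# periods up to about eight months:
max_period_days = 16 * DAYS_PER_MONTH / 2
print(f"16-month dataset covers periods up to ~{max_period_days:.0f} days")
```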
The upshot: These trends are not favorable, because only the larger planets continue to increase in frequency as we move into the longer dataset, while the ‘super-Earth’ and Neptune-class candidates peak in frequency “at orbital periods roughly corresponding to the upper end of the four-month release’s window.” From the essay (italics mine):
Overall, we see that our solar system is qualitatively typical in placing larger planets farther out than smaller planets. However, it is quantitatively atypical: While Kepler shows us the happy result that there are almost certainly several planets for every star, it shows us that our solar system is distributed freakishly outwards, in comparison to more typical planetary systems.
In Rehling’s estimate (and you should read the entire essay, where he backs up his analysis with useful graphics), the frequency of Earth-like worlds is not Traub’s robust 34 percent but something closer to 0.7 percent. We can raise that a bit by extending the parameters (including bins surrounding Earth’s bin) and by including somewhat smaller planets, in which case the estimate may climb as high as 9 percent. And at the lower end, if Earth analogues are less abundant than 3 percent, it’s possible we may not find a single one with Kepler.
What to make of this? The most obvious point is that the Kepler mission is ongoing, and that we need to see what the next data release brings. We’re still extrapolating as we gradually move the zone of detection outwards, gradually filling up the relevant bins. The second point is that given the vast number of stars in the galaxy, even with the much lower assessments of Rehling’s analysis, we may still be looking at hundreds of millions of habitable terrestrial planets.
Ramifications of ‘Rarer Earths’
But Rehling’s case is highly interesting in two directions. First, the kind of spectroscopic follow-ups we need to make on planet candidates are rendered more difficult by the distance of the Kepler stars from us. As we look toward future missions to characterize the atmospheres of terrestrial worlds, we’re going to need planets that are relatively close, but rarer ‘Earths’ means that such planets are farther apart than we’d like them to be. That has obvious implications as well for our favorite Centauri Dreams subject, future probes sent to nearby solar systems.
So perhaps we have to stay creative when it comes to habitable zones and astrobiology. We already know that M-class stars are the most common in the Milky Way by a huge margin (as many as 80 percent of all stars may fit this class). Here we’re not talking about an Earth analogue, but planets at the right distance from M-dwarfs may be habitable despite the problems of tidal lock and stellar radiation. Then too, we can consider that if most solar systems really are compressed toward the star, there may be many gas giants in the habitable zone of stars like our Sun and into the K-class. Here we have the possibility of habitable conditions on moons.
G-class stars like our Sun are not themselves all that common — I believe that about 3.5 percent of all stars fit the bill. But K-class stars like Centauri B are also in the picture (8 percent) along with the above-mentioned red dwarfs, and we are steadily finding out more about the variety of planetary system configurations around such stars. Rehling notes, too, that as more Kepler data become available, the frequency of planets as a function of orbital period may show a second peak. No one is saying that we are finished with Kepler, not by a long shot. What we are trying to do is to draw the maximum amount of information out of what we do have. What will the terrestrial planet outlook be after Kepler’s next release?
Upcoming Interstellar Sessions
It’s shaping up to be an interesting week. I want to get to the recent Kepler data release and the antimatter news from CERN, and I also want to talk about everything from decelerating an interstellar craft to models of expansion into the galaxy a la Frank Tipler. [And thanks to Centauri Dreams reader Eric Goldstein for reminding me of the upcoming WISE data release on the 14th!] For today, though, let’s look at two upcoming conferences, especially since I’m running behind in getting to the first of them, the CONTACT 2012 gathering, which is coming up right away.
The full title of this one is CONTACT: Cultures of the Imagination, and it’s a meeting with a rich history. Back in 1979, Jim Funaro was teaching a course in anthropology at Cabrillo College (Aptos, CA) that used science fiction as a vector into the scientific issues his course raised. The course allowed students to go to work creating cultures and, in a game-like simulation, to explore how the fictional societies interacted with each other. By 1983, Funaro was able to use this ‘laboratory experiment’ in anthropology as the main event of the first CONTACT meeting, set up to be a national academic conference bringing scientists, artists and writers together.
Interdisciplinary Insights into ETI
When Funaro founded CONTACT, his goal was to encourage interdisciplinary thinking, which must have been much in the air back in 1983, considering that this was also the year of the storied Interstellar Migration and the Human Experience conference held at Los Alamos. The latter ranged from astrophysics to sociology, psychology and history and probed how emerging technologies would affect future human expansion into the cosmos. Meanwhile, CONTACT had been energized by Funaro’s interactions with science fiction writer Frank Herbert, whose classic novel Dune was one of the books used in his class as an example of a credible created culture.
How Funaro lured other writers and scientists into CONTACT is told on the conference’s website. In any case, writers like Michael Bishop, Larry Niven, John Brunner and C.J. Cherryh soon became involved, and Funaro worked with artist Joel Hagen to launch the first world-building project. The first CONTACT conference ran in April of 1983 (in Santa Cruz), and the culture-building simulations of what Funaro called the ‘Bateson Project’ (after anthropologist Gregory Bateson) were a success; anthropology as simulation/performance art was established, and the original simulation idea was renamed “Cultures of the Imagination.”
Funaro calls CONTACT III “the first time it worked,” noting that this was the conference where lessons learned from the first two conferences were first implemented. The cross-disciplinary nature of the proceedings is easily seen in his account of building the pre-conference package:
Poul Anderson gave us a planet, Ophelia, with its primary and solar system… We then sent the planetary specifications to C. J. Cherryh, who suggested the Mossback [the resident alien of the planet] and provided us with its basic design. Next, Larry Niven elaborated on this alien, contributed other species for the ecology and explained the conditions that the human team would face on this world. Finally, Joel Hagen produced some sketches of the critters. This “homework” was then distributed to all the guests several weeks before the conference.
Specialized teams at the conference then went to work to develop the world and its culture, and sequential workshops developed the key issues. Role-playing developed and became a major tool. Funaro acknowledges that such simulations are artificial and limited:
But, like the real intercultural contacts that anthropologists have been participating in for more than a century here on our home planet, the interaction was unrehearsed, proceeded carefully from known behavioral and ethnographic methodologies towards consistent and ethical choices of action, and provided at least a possible model for developing a protocol for an extraterrestrial encounter. And the value of spontaneous role-playing in enhancing the effectiveness of the simulation was convincingly (however unexpectedly) demonstrated. It has been an essential part of COTI forever after.
Frank Drake will be the keynote speaker at CONTACT 2012 at the Domain Hotel in Sunnyvale (CA), with conference sessions running from March 30 to April 1. You can see the full schedule along with abstracts of the talks here. Among the offerings I note in particular Kathryn Denning on our expectations in interstellar contact (“Unboxing Pandora”), Albert Harrison on Russian cosmism, a philosophical movement that emerged around 1900 and influenced our modern views of space exploration, and Seth Shostak’s sure-to-be-controversial “Broadcasting into Space: Recipe for Catastrophe?” If that last one doesn’t raise the temperature in the room, nothing will.
Searching for Life Signatures
The call for papers for the Fourth IAA Symposium on Searching for Life Signatures is available online. The conference, to be held at the Kursaal Congress Centre in San Marino (Italy) runs from September 25-28 of this year, ranging over traditional SETI and so-called Active SETI (Messaging to ETI), along with studies of biosignatures and exoplanet discovery. For those intending to be at the 63rd International Astronautical Congress (IAC), note that Searching for Life Signatures will take place in the week just prior to the IAC, which runs from October 1-5.
Image: Northern Cross radio telescope at Medicina (Bologna, Italy), 564 by 640 m, 30000 square meter multi-element, centered at 408 MHz. Credit: Simona Righini/INAF.
The SETI Permanent Committee of the International Academy of Astronautics (IAA) invites abstracts to be submitted to the Symposium. The deadline for abstract submission is Sunday June 24. Information about travel possibilities is available at the site — I notice that Rimini and San Marino airport is closest to the venue (about 25 kilometers), but Bologna is the more likely option for those coming in from overseas (132 kilometers from San Marino), while Milan is a good 300 kilometers out, though with hourly train connections to Rimini and thence by bus to San Marino.
I’ll have more details about the San Marino conference as abstracts become available.
Science Fiction and the Probe
Physicist Al Jackson, who is the world’s greatest dinner companion, holds that title because, alongside his scientific accomplishments, he is a fountainhead of information about science fiction. No matter which writer you bring up, he knows something you never heard of that illuminates that writer’s work. So it was no surprise that when the subject of self-replicating probes came up in these pages, Al would take note in the comments of Philip K. Dick’s story “Autofac,” which ran in the November, 1955 issue of H. L. Gold’s Galaxy. A copy of that issue sits, I am happy to say, not six feet away from me on my shelves.
This is actually the first time I ever anticipated Al — like him, I had noticed “Autofac” as one of the earliest science fiction treatments of the ideas of self-replication and nanotechnology, and had written about it in my Centauri Dreams book back in 2004. If any readers know of earlier SF stories on the topic, please let me know in the comments. In the story, the two protagonists encounter an automated factory that is spewing pellets into the air. A close examination of one of the pellets reveals what we might today consider to be nanotech assemblers at work:
The pellet was a smashed container of machinery, tiny metallic elements too minute to be analyzed without a microscope…
The cylinder had split. At first he couldn’t tell if it had been the impact or deliberate internal mechanisms at work. From the rent, an ooze of metal bits was sliding. Squatting down, O’Neill examined them.
The bits were in motion. Microscopic machinery, smaller than ants, smaller than pins, working energetically, purposefully — constructing something that looked like a tiny rectangle of steel.
Further examination of the site shows that the pellets are building a replica of the original factory. O’Neill then has an interesting thought:
“Maybe some of them are geared for escape velocity. That would be neat — autofac networks throughout the whole universe.”
The Ethically Challenged Probe
Just how ‘neat’ it would be remains to be seen, of course, assuming nanotech assemblers can ever be built and self-replication made a reality (an assumption many disagree with). Carl Sagan, working with William Newman, published a paper called “The Solipsist Approach to Extraterrestrial Intelligence” (reference below) that made a reasonable argument: Self-reproducing probes are too viral-like to be built. Their existence would endanger their creators and any species they encountered as the probes spread unchecked through the galaxy.
Sagan and Newman were responding to Frank Tipler’s argument that extraterrestrial civilizations do not exist because we have no evidence of von Neumann probes (it was von Neumann whose studies of self-replication in the form of ‘universal assemblers’ would lead to the idea of self-replicating probes, though von Neumann himself didn’t apply his ideas to space technologies). Given the time frames involved, Sagan and Newman argued, probes like this should have become blindingly obvious as they would have begun to consume a huge fraction of the raw materials available in the galaxy.
The reason we have seen no von Neumann probes? Contra Tipler, Sagan and Newman say this points to extraterrestrials reaching the same conclusion we will once we have such technologies — we can’t build them because to do so would be suicidal. Freeman Dyson, writing as far back as 1964, referred to a “cancer of purposeless technological exploitation” that should be observable. All of this depends, of course, on the assumed replication rates of the probes involved (Sagan and Newman chose a much higher rate than Tipler to reach their conclusion). Robert Freitas picked up on the cancer theme in a 1983 paper, but saw a different outcome:
A well-ordered galaxy could imply an intelligence at work, but the absence of such order is insufficient evidence to rule out the existence of galactic ETI. The incomplete observational record at best can exclude only a certain limited class of extraterrestrial civilisation – the kind that employs rapacious, cancer-like, exploitative, highly-observable technology. Most other galactic-type civilisations are either invisible to or unrecognisable by current human observational methods, as are most if not all of expansionist interstellar cultures and Type I or Type II societies. Thus millions of extraterrestrial civilisations may exist and still not be directly observable by us.
Self-Replication Close to Home
We need to talk (though not today because of time constraints) about what mechanisms might be used to put the brakes on self-replicating interstellar probes. For now, though, I promised a look at what the human encounter with such a technology would look like. Fortunately, Gregory Benford has considered the matter and produced a little gem of a story called “Dark Sanctuary,” which originally ran in OMNI in May of 1979 but is now accessible online. Greg told me in a recent email that Ben Bova (then OMNI‘s editor) had asked him for a hard science fiction story, and von Neumann’s ideas on self-replicating probes were what came to mind.
Mix this in with some of Michael Papagiannis’ notions about looking for unusual infrared signatures in the asteroid belt and you wind up with quite a tale. In the story, humans are well established in the asteroid main belt, which is now home to the ‘Belters,’ people who mine the resources there to boost ice into orbits that will take it to the inner system to feed the O’Neill habitats that are increasingly being built there. One of the Belters is struck by a laser and assumes an attack is underway, probably some kind of piracy on the part of rogue elements nearby. A chase ensues, but what the Belter eventually sees does not seem to be human:
I sat there, not breathing. A long tube, turning. Towers jutted out at odd places — twisted columns, with curved faces and sudden jagged struts. A fretwork of blue. Patches of strange, moving yellow. A jumble of complex structures. It was a cylinder, decorated almost beyond recognition. I checked the ranging figures, shook my head, checked again. The inboard computer overlaid a perspective grid on the image, to convince me.
I sat very still. The cylinder was pointing nearly away from me, so radar had reported a cross section much smaller than its real size. The thing was seven goddamn kilometers long.
“Dark Sanctuary” is a short piece and I won’t spoil the pleasure of reading it for you by revealing its ending, but suffice it to say that the issues we’ve been raising about tight-beam laser communications between probes come into play here, as does the question of what beings (biological or robotic) might do after generations on a starship — would they want to re-adapt to living on a planetary surface even if they had the opportunity? Would we be able to detect their technology if they had a presumed vested interest in keeping their existence unknown? Something tells me we’re not through with the discussion of these issues, not by a long shot.
The papers I’ve been talking about today are Sagan and Newman, “The Solipsist Approach to Extraterrestrial Intelligence”, Quarterly Journal of the Royal Astronomical Society, Vol. 24 (1983), p. 113, and Tipler, “Extraterrestrial Intelligent Beings Do Not Exist”, Quarterly Journal of the Royal Astronomical Society, Vol. 21 (1980), p. 267. I’ve just received a copy of Richard Burke-Ward’s paper “Possible Existence of Extraterrestrial Technology in the Solar System,” JBIS Vol. 53, No 1/2, Jan/Feb 2000 — this one may also have a bearing on the self-replicating probe question, and I’ll try to get to it in the near future.
Intelligent Probes: The Spread-Spectrum Challenge
Let’s imagine for a moment that John Mathews (Pennsylvania State University) is right in theorizing that space-faring civilizations will use self-reproducing probes to expand into the galaxy. We’ve been kicking the issues around most of this week, but the SETI question continues to hang in the background. For if there really are extraterrestrial civilizations in the nearby galaxy, how would we track down their signals if they used the kind of communications network Mathews envisions, one in which individual probes talked to each other through tight-beam laser communications designed only for reception by the network itself?
One problem is that the evidence we’re looking for would most likely come in the form of spread-spectrum signals, a fact Jim Benford pointed out in a comment on my original post about Mathews, a comment that also pointed to recent work by David Messerschmitt (UC-Berkeley). Messerschmitt makes a compelling case for spread-spectrum methods as the basis for interstellar communication because such signals are more robust in handling radio-frequency interference (RFI) of technological origin. In SETI terms, RFI is a major issue because it mimics the interstellar signal we are hoping to find, and Messerschmitt assumes an advanced civilization, having experienced RFI issues in its own past, will use the best tools to minimize them.
Image: 3D map of all known stellar systems in the solar neighbourhood within a radius of 12.5 light-years. Can we build self-reproducing probes that could explore these systems over the course of millennia? If other civilizations did the same, could we detect them? Credit: ESO/R.-D.Scholz et al. (AIP).
Spread-spectrum techniques spread what would have been narrowband information signals over a wide band of frequencies. Think of the kind of ‘frequency hopping’ deployed in World War II, where a transmitter would work at multiple frequencies and the receiver would need to tune in to each of the transmitted frequencies. In addition to being resistant to interference, the method allows you to resist enemy jamming of your communications or to conceal communications in what would otherwise seem to be white noise. Actress Hedy Lamarr and composer George Antheil developed a frequency hopping technique that made radio-guided munitions much harder for enemy forces to jam, a spread-spectrum story entertainingly told in a recent book (Richard Rhodes’ Hedy’s Folly: The Life and Breakthrough Inventions of Hedy Lamarr, the Most Beautiful Woman in the World, 2011). Lamarr and Antheil’s system used 88 different carrier frequencies.
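The frequency-hopping idea is simple to sketch (a toy illustration, not the Lamarr/Antheil mechanism, which synchronized its 88 channels with piano-roll style sequences): both ends derive the same hop sequence from a shared secret, so the receiver always knows where to listen while an eavesdropper sees only scatter across the band.

```python
# Toy frequency-hopping sketch: a shared seed stands in for the shared
# hop schedule. N_CHANNELS = 88 nods to Lamarr and Antheil's 88 carriers.
import random

N_CHANNELS = 88

def hop_sequence(seed, n_hops):
    """Pseudo-random channel schedule derived from a shared seed."""
    rng = random.Random(seed)
    return [rng.randrange(N_CHANNELS) for _ in range(n_hops)]

tx_hops = hop_sequence(seed=1234, n_hops=10)   # transmitter's schedule
rx_hops = hop_sequence(seed=1234, n_hops=10)   # receiver's schedule

# Same seed, same sequence: the receiver stays synchronized.
print(tx_hops == rx_hops)   # True
```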
Messerschmitt isn’t talking about probes but about one civilization trying to reach another — he works from the perspective of the transmitter designer trying to reach a receiver about which little can be known. From the paper:
The transmitter can…explicitly design a transmit signal that minimizes the effect of RFI on the receiver’s discovery and detection probabilities in a robust way; that is, in a way that provides a constant immunity regardless of the nature of the RFI. It is shown that the resulting immunity increases with the product of time duration and bandwidth, and that the signal should resemble statistically a burst of white noise. Intuitively this is advantageous because RFI resembles such a signal with a likelihood that decreases exponentially with time-bandwidth product. Both a transmitter and receiver designer using this optimization criterion and employing the tools of elementary probability theory will arrive at this same conclusion. Although the context is different, variations on this principle inform the design of many modern widely deployed terrestrial digital wireless communication systems, so this has been extensively tested in practice and is likely to have a prominent place in the technology portfolio of an extraterrestrial civilization as well.
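A toy direct-sequence example (my own sketch, not Messerschmitt’s construction) shows both halves of the argument: the transmitted chips look statistically like noise, yet a receiver that knows the spreading sequence recovers the data by correlation, with immunity growing with the number of chips per bit, which stands in here for the time-bandwidth product.

```python
# Minimal direct-sequence spread-spectrum sketch. Each data bit is spread
# over a long pseudo-random chip sequence; the receiver correlates the
# received chips against the same sequence to recover the bit.
import random

CHIPS_PER_BIT = 128   # crude stand-in for the time-bandwidth product

rng = random.Random(42)
pn = [rng.choice((-1, 1)) for _ in range(CHIPS_PER_BIT)]  # shared PN code

def spread(bit):
    """Map bit in {0, 1} to a +/-1 chip stream modulated by the PN code."""
    sign = 1 if bit else -1
    return [sign * c for c in pn]

def despread(chips):
    """Correlate against the known PN code and threshold at zero."""
    corr = sum(x * c for x, c in zip(chips, pn))
    return 1 if corr > 0 else 0

# Channel adds noise stronger than the per-chip signal amplitude;
# correlation over all 128 chips still digs the bit out.
noisy = [x + rng.gauss(0, 2.0) for x in spread(1)]
print(despread(noisy))
```

Without the code, the chip stream is statistically indistinguishable from the noise it rides on, which is exactly why accidental interception of such a signal is so hard.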
We’ve learned a great deal about dealing with RFI, especially given the rapid growth of wireless communications here on Earth, and we’ve also learned more about how radio signals propagate in the interstellar medium, thanks in large part, Messerschmitt notes, to advances in pulsar astronomy. Couple this with the ever-quickening pace of electronics and computer development and the search technologies in play are expanded so that we can accommodate the problem of natural noise sources as well as our own RFI. And we would have to assume that any extraterrestrial civilization would employ RFI mitigation techniques in its own communications.
In the case of accidental interception of a signal beamed between two intelligent probes, we can also look at the issue in terms of our detection algorithms. Early SETI work used the Fourier Transform to search for comparatively narrowband signals, adopting the Fast Fourier Transform as the tool of choice in the 1960s. But as François Biraud noted as early as 1982, our terrestrial move from narrowband to broader-bandwidth communications presents a new challenge: information broken into chunks carried by frequency-shifting carrier waves. Claudio Maccone has long argued that FFT methods are inappropriate for this kind of signal.
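The contrast is easy to demonstrate numerically. In this Python sketch (all parameters invented for illustration), a steady narrowband tone produces a towering peak in an FFT power spectrum, while a spread, noise-like signal of comparable strength leaves the spectrum essentially flat:

```python
import numpy as np

N = 4096
rng = np.random.default_rng(0)
t = np.arange(N)

noise = rng.normal(size=N)
tone = 0.5 * np.sin(2 * np.pi * 400 * t / N)  # steady narrowband carrier
narrowband = noise + tone                     # the classic SETI target
# A noise-like spread signal of similar strength (random chip sequence):
spread = noise + 0.5 * rng.choice([-1.0, 1.0], size=N)

def peak_to_mean(x):
    """Ratio of the strongest FFT bin to the mean bin power."""
    p = np.abs(np.fft.rfft(x)) ** 2
    return p[1:].max() / p[1:].mean()

print(peak_to_mean(narrowband))  # large: the tone piles up in one bin
print(peak_to_mean(spread))      # small: power smeared across all bins
```

The FFT concentrates a drifting-free carrier into a single bin, which is why it served early SETI so well; a spread-spectrum signal deliberately avoids that concentration, so the same test sees nothing.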
Enter the Karhunen-Loève Transform (KLT) that Maccone continues to champion, a way of improving our sensitivity to an artificial signal by digging tricky spread-spectrum signals out of the background noise. Whether or not KLT algorithms will be put to work with new installations like the Square Kilometre Array remains to be seen, but arguments like Messerschmitt’s point to the viability of spread-spectrum methods as a prime choice for interstellar communications. The point, then, is that spread-spectrum modulation is a factor we can deal with, allowing us to incorporate Messerschmitt’s ideas into our SETI toolkit even as we ponder the circumstances that might lead an extraterrestrial civilization to deploy a network of self-reproducing probes.
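A minimal sketch of the KLT idea, with all numbers invented for illustration: unlike the FFT, whose sinusoidal basis is fixed in advance, the KLT derives its basis functions from the data itself, so a correlated waveform hidden in noise surfaces as the dominant eigenvector of the snapshot covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 64, 200                   # snapshot length, number of snapshots

# A wideband but correlated waveform, hard to spot in any single snapshot:
template = np.sin(2 * np.pi * 3 * np.arange(M) / M) * np.hanning(M)
amplitudes = rng.normal(size=N)  # signal strength varies snapshot to snapshot
data = np.outer(amplitudes, template) + rng.normal(size=(N, M))

# KLT: eigendecompose the empirical covariance of the snapshots.
cov = data.T @ data / N
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

# The dominant eigenvector recovers the hidden waveform's shape,
# something a fixed Fourier basis is not guaranteed to do.
dominant = eigvecs[:, -1]
match = abs(dominant @ template) / np.linalg.norm(template)
print(match)  # close to 1 when the signal subspace is recovered
```

The price is computational: building and diagonalizing covariance matrices is far costlier than an FFT, which is one reason KLT methods have waited on faster hardware.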
The Messerschmitt paper is “Interstellar Communication: The Case for Spread Spectrum” (preprint), while the Mathews paper is “From Here to ET,” Journal of the British Interplanetary Society 64 (2011), pp. 234-241. I have more to say about all this, and particularly about the ethical issues raised by self-reproducing technologies, but I’m running out of time this morning. The discussion continues tomorrow, when I’ll ponder how a civilization like ours might accidentally run into a network of extraterrestrial probes, and what that encounter might look like.
SETI and Self-Reproducing Probes
It was back in 1980 that Robert Freitas came up with a self-reproducing probe concept based on the British Interplanetary Society’s Project Daedalus, extending it in completely new directions. Like Daedalus, Freitas’ REPRO probe would be fusion-based and would mine the atmosphere of Jupiter to acquire the necessary helium-3. Unlike Daedalus, REPRO would devote half its payload to what Freitas called its SEED package, which would use resources in a target solar system to produce a new REPRO probe every 500 years. Probes like this could spread through the galaxy over the course of a million years without further human intervention.
A Vision of Technological Propagation
I leave to wiser heads than mine the question of whether self-reproducing technologies like these will ever be feasible, or when. My thought is that I wouldn’t want to rule out the possibility for cultures significantly more advanced than ours, but the question is a lively one, as is the issue of whether artificial intelligence will ever take us to a ‘Singularity,’ beyond which robotic generations move in ways we cannot fathom. John Mathews discusses self-reproducing probes, as we saw yesterday, as natural extensions of our early planetary explorer craft, eventually being modified to carry out inspections of the vast array of objects in the Kuiper Belt and Oort Cloud.
Image: The Kuiper Belt and much larger Oort Cloud offer billions of targets for self-reproducing space probes, if we can figure out how to build them. Credit: Donald Yeomans/NASA/JPL.
Here is Mathews’ vision, operating under a System-of-Systems paradigm in which the many separate systems needed to make a self-reproducing probe (he calls them Explorer roBots, or EBs) are examined separately, and conceding that all of them must be functional for the EB to emerge (the approach thus includes not only the technological questions but also the ethical and economic issues involved in the production of such probes). Witness the probes in operation:
Once the 1st generation proto-EBs arrive in, say, the asteroid belt, they would evolve and manufacture the 2nd generation per the outline above. The 2nd generation proto-EBs would be launched outward toward appropriate asteroids and the Kuiper/Oort objects as determined by observations of the parent proto-EB and, as communication delays are relatively small, human/ET operators. A few generations of the proto-EBs would likely suffice to evolve and produce EBs capable of traversing interstellar distances either in a single “leap” or, more likely, by jumping from Oort Cloud to Oort Cloud. Again, it is clear that early generation proto-EBs would trail a communications network.
The data network — what Mathews calls the Explorer Network, or ENET — has clear SETI implications if you buy the idea that self-reproducing probes are not only possible (someday) but also likely to be how intelligent cultures explore the galaxy. Here the assumption is that extraterrestrials are likely, as we have been thus far, to be limited to speeds far below the speed of light, and in fact Mathews works with 0.01c as a baseline. If EBs are an economical and efficient way of exploring huge volumes of space, then the possibility of picking up the transmissions linking them into a network cannot be ruled out. Mathews envisages them building a library of their activities and accumulated knowledge that would eventually propagate back to the parent species.
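The 0.01c baseline puts the timescales in perspective. A back-of-the-envelope check in Python (the stellar distance is the standard value for Proxima Centauri; the Oort Cloud hop distance is my own notional figure):

```python
SPEED_C = 0.01   # Mathews' baseline: one percent of lightspeed

def travel_years(distance_ly, speed_c=SPEED_C):
    """Crossing time in years at a constant fraction of c (coasting, no
    acceleration or deceleration phases included)."""
    return distance_ly / speed_c

# Even the nearest star system is a multi-century trip at this speed:
print(round(travel_years(4.24)))  # Proxima Centauri: 424 years
print(round(travel_years(1.5)))   # a notional outer-Oort-Cloud hop: 150 years
```

Centuries per leg is exactly why Mathews' probes must be autonomous and self-reproducing: no mission controller waits that long, and the network itself has to carry the findings home.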
A Celestial Network’s Detectability
Here we can give a nod to the existing work on extending Internet protocols into space, the intent being to connect remote space probes to each other, making the download of mission data far more efficient. Rather than pointing an enormous dish at each spacecraft in turn, we point at a spacecraft serving as the communications hub, downloading information from, say, landers and atmospheric explorers and orbiters in turn. Perhaps this early interplanetary networking is a precursor to the kind of networks that might one day communicate the findings of interstellar probes. Mathews notes the MESSENGER mission to Mercury, which used a near-infrared laser ranging system to link the vehicle with the NASA Goddard Geophysical and Astronomical Observatory at a distance of 24 million kilometers (0.16 AU), as an example of what is feasible today.
Tomorrow’s ENET would be, in the author’s view, a tight-beam communications network. In SETI terms, such networks would be not beacons but highly directed communications, greatly compromising but not eliminating our ability to detect them. Self-reproducing probes propagating from star to star — conceivably with many stops along the way — would in his estimation use mm-wave or far-IR lasers, communicating through highly efficient and highly directive beams. From the paper:
The solar system and local galaxy is relatively unobscured at these wavelengths and so these signaling lasers would readily enable communications links spanning up to a few hundred AUs each. It is also clear that successive generations of EBs would establish a communications network forming multiple paths to each other and to “home” thus serving to update all generations on time scales small compared with physical transit times. These various generations of EBs would identify the locations of “nearby” EBs, establish links with them, and thus complete the communications net in all directions.
Working the math, Mathews finds that current technologies for laser communications yield reasonable photon counts out to the near edge of the Oort Cloud, given optimistic assumptions about receiver noise levels. It is enough, in any case, to indicate that future technologies will allow networked probes to communicate from one probe to another over time, eventually returning data to the source civilization. An extraterrestrial Explorer Network like this one thus becomes a SETI target, though not one whose wavelengths have received much SETI attention.
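Mathews' actual link budget is in the paper, but a rough sketch conveys the flavor. This Python estimate (all hardware parameters are my own illustrative choices, not his) counts photons collected over a 1,000 AU path by a diffraction-limited infrared laser link:

```python
import math

h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
AU = 1.496e11   # astronomical unit, m

def received_photon_rate(power_w, wavelength_m, tx_aperture_m,
                         rx_aperture_m, distance_m):
    """Photons per second collected by the receiver, assuming a
    diffraction-limited transmit beam and a uniformly filled spot
    (a crude far-field approximation, ignoring pointing losses)."""
    divergence = 1.22 * wavelength_m / tx_aperture_m   # beam half-angle, rad
    spot_radius = divergence * distance_m              # beam radius at target
    fraction = min((rx_aperture_m / 2) ** 2 / spot_radius ** 2, 1.0)
    photon_energy = h * c / wavelength_m               # J per photon
    return power_w / photon_energy * fraction

# 10 W laser at 10 microns, 1 m apertures both ends, 1000 AU apart:
rate = received_photon_rate(10, 10e-6, 1.0, 1.0, 1000 * AU)
print(rate)  # of order tens of photons per second in this toy setup
```

Even this crude estimate lands in photon-counting territory rather than zero, which is the essential point: with plausible hardware, probe-to-probe laser links across Oort Cloud distances are a matter of engineering, not magic.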
On Ethics and Possibilities
In any case, there is no reason why an exploring extraterrestrial culture would necessarily want its activities to be noticed. Rather than eavesdropping on leakage from an extremely efficient communications network, a more likely SETI outcome would involve human expansion through gradually more autonomous probes, with the chances of finding evidence for ET expanding as our own sphere of exploration widens. Getting a positive SETI result might thus involve centuries if not millennia.
It may also be the case that self-reproducing probes are severely restricted out of ethical concerns: runaway propagation poses dilemmas enough that few if any cultures ever build them. A null result might also indicate that their development is more difficult and expensive than anticipated, particularly when it comes to securing the needed energy sources.
How would we track narrow-beam communications systems in the mm-wave/IR region? As some commenters on yesterday’s post have already noted, they would likely be spread-spectrum, but there are tools for handling such signals. More on this, and on the ethics issues as well, tomorrow. Here again is the citation for the Mathews paper: “From Here to ET,” Journal of the British Interplanetary Society 64 (2011), pp. 234-241. For more on Robert Freitas’ REPRO ideas, see his paper “A Self-Reproducing Interstellar Probe,” JBIS 33 (July 1980), pp. 251-264.