The Winds of Deep Space

If we can use solar photons to drive a sail, and perhaps use their momentum to stabilize a threatened observatory like Kepler, what about that other great push from the Sun, the solar wind? Unlike the stream of massless photons that exert a minute but cumulative push on a surface like a sail, the solar wind is a stream of charged particles moving at speeds of 500 kilometers per second and more, a flow that has captured the interest of those hoping to create a magnetic sail to ride it. A ‘magsail’ interacts with the solar wind’s plasma. The sailing metaphor remains, but solar sails and magsails get their push from fundamentally different processes.

Create a magnetic field around your spacecraft and interesting things begin to happen. The electrons and positively charged ions flowing from the Sun experience a force as they move through the field, one that varies with the direction of their motion relative to the field. As the field deflects these particles, the sail feels an equal and opposite reaction, producing acceleration. The magsail concept envisions large loops of superconducting wire that generate a strong magnetic field when current flows through them, taking advantage of the solar wind’s ‘push.’
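To get a feel for how gentle this push is, here’s a rough order-of-magnitude sketch in Python (the solar wind values are typical 1 AU figures assumed for illustration, not numbers drawn from the passage above); it estimates the wind’s ram pressure and the thrust on each square kilometer of effective sail area, as if the field simply intercepted the flow:

PROTON_MASS = 1.67e-27   # kg
n = 5e6                  # protons per cubic meter (about 5 per cubic centimeter, a typical 1 AU value)
v = 500e3                # solar wind speed in m/s (the 500 km/s figure cited above)

rho = n * PROTON_MASS            # mass density of the wind, kg/m^3
ram_pressure = rho * v**2        # momentum flux of the wind, in pascals (about 2 nPa)
area = 1e6                       # one square kilometer of effective sail area, in m^2

print(f"ram pressure    : {ram_pressure * 1e9:.1f} nPa")
print(f"thrust per km^2 : {ram_pressure * area * 1e3:.1f} mN")

A couple of millinewtons per square kilometer is why magsail concepts call for effective field areas of enormous size.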

A magsail sounds like a natural way to get to the outer Solar System or beyond, but the solar wind introduces problems that compromise it. One is that it’s a variable wind indeed, weakening and regaining strength; although I cited 500 kilometers per second in the opening paragraph, the solar wind can range anywhere from 350 to 800 kilometers per second. An inconstant wind raises questions of spacecraft control, an issue Gregory Matloff, Les Johnson and Giovanni Vulpetti are careful to note in Solar Sails: A Novel Approach to Interplanetary Travel (Copernicus, 2008). Here’s the relevant passage:

While technically interesting and somewhat elegant, magsails have significant disadvantages when compared to solar sails. First of all, we don’t (yet) have the materials required to build them. Second, the solar wind is neither constant nor uniform. Combining the spurious nature of the solar wind flux with the fact that controlled reflection of solar wind ions is a technique we have not yet mastered, the notion of sailing in this manner becomes akin to tossing a bottle into the surf at high tide, hoping the currents will carry the bottle to where you want it to go.

Interstellar Tradewinds and the Local Cloud

We have much to learn about the solar wind, but missions like Ulysses and the Advanced Composition Explorer have helped us understand its weakenings and strengthenings and their effect upon the boundaries of the heliosphere, that vast bubble whose size depends upon the strength of the solar wind and the pressures exerted by interstellar space. For we’re not just talking about a wind from the Sun. Particles are also streaming into the Solar System from outside, and data from four decades and eleven different spacecraft have given us a better idea of how these interactions work.

A paper from Priscilla Frisch (University of Chicago) and colleagues notes that the heliosphere itself is located near the inside edge of an interstellar cloud, with the two in motion past each other at some 22 kilometers per second. The result is an interstellar ‘wind,’ says Frisch:

“Because the sun is moving through this cloud, interstellar atoms penetrate into the solar system. The charged particles in the interstellar wind don’t do a good job of reaching the inner solar system, but many of the atoms in the wind are neutral. These can penetrate close to Earth and can be measured.”


Image: The solar system moves through a local galactic cloud at a speed of 50,000 miles per hour, creating an interstellar wind of particles, some of which can travel all the way toward Earth to provide information about our neighborhood. Credit: NASA/Adler/U. Chicago/Wesleyan.

We’re learning that the interstellar wind has been changing direction over the years. Data on the matter go back to the 1970s, and this NASA news release mentions the U.S. Department of Defense’s Space Test Program 72-1 and SOLRAD 11B, NASA’s Mariner, and the Soviet Prognoz 6 as sources of information. We also have datasets from Ulysses, IBEX (Interstellar Boundary Explorer), STEREO (Solar Terrestrial Relations Observatory), Japan’s Nozomi observatory and others, including the MESSENGER mission now in orbit around Mercury.

Usefully, we’re looking at data gathered using different methods, but the flow of neutral helium atoms is apparent with each, and the cumulative picture is clear: The direction of the interstellar wind has changed by some 4 to 9 degrees over the past forty years. The idea of the interstellar medium as a constant gives way to a dynamic, interactive area that varies as the heliosphere moves through it. What we don’t know yet is why these changes occur when they do, but our local interstellar cloud may experience a turbulence of its own that affects our neighborhood.

The interstellar winds show us a kind of galactic turbulence that can inform us not only about the local interstellar medium but the lesser known features of our own heliosphere. Ultimately we may learn how to harness stellar winds, perhaps using advanced forms of magnetic sails to act as brakes when future probes enter a destination planetary system. As with solar sails, magsails give us the possibility of accelerating or decelerating without carrying huge stores of propellant, an enticing prospect indeed as we sort through how these winds blow.

The paper is Frisch et al., “Decades-Long Changes of the Interstellar Wind Through Our Solar System,” Science Vol. 341, No. 6150 (2013), pp. 1080-1082 (abstract)


Can Kepler be Revived?

Never give up on a spacecraft. That seems to be the lesson Kepler is teaching us, though it’s one we should have learned by now anyway. One outstanding example of working with what you’ve got is the Galileo mission, which had to adjust to the failure of its high-gain antenna. The spacecraft’s low-gain antenna came to the rescue, aided by data compression techniques that raised its effective data rate, and sensitivity upgrades to the listening receivers on Earth. Galileo achieved 70 percent of its science goals despite a failure that had appeared catastrophic, and much of what we’ve learned about Europa and the other Galilean satellites comes from it.


Image: Galileo at Jupiter, still functioning despite the incomplete deployment of its high gain antenna (visible on the left side of the spacecraft). The blue dots represent transmissions from Galileo’s atmospheric probe. Credit: NASA/JPL.

Can we tease more data out of Kepler? The problem has been that two of its four reaction wheels, which function like gyroscopes to keep the spacecraft precisely pointed, have failed. Kepler needs three functioning wheels to maintain its pointing accuracy because it is constantly being bathed in solar photons that can alter its orientation. But mission scientists and Ball Aerospace engineers have been trying to use that same issue — solar photons and the momentum they impart — to come up with a mission plan that can still operate.

You can see the result in the image below — be sure to click to enlarge it for readability. By changing Kepler’s orientation so that the spacecraft is nearly parallel to its orbital path around the Sun, mission controllers hope to keep the sunlight striking its solar panels symmetric across its long axis. We may not have that third reaction wheel, but we do have the possibility of this constant force acting as a surrogate. Re-orientation of the probe four times during its orbit will be necessary to keep the Sun out of its field of view. And the original field of view in the constellations of Cygnus, Lyra and Draco gives way to new regions of the sky.

Testing these methods, mission scientists have been able to collect data from a distant star field at a quality within five percent of the primary mission’s image standards. That’s a promising result and an ingenious use of the same photon-imparted momentum that figures in the design of solar sails like JAXA’s IKAROS and NASA’s upcoming Sunjammer. Testing now continues to find out whether the method will work not just for hours but for days and weeks.


Image (click to enlarge): This concept illustration depicts how solar pressure can be used to balance NASA’s Kepler spacecraft, keeping the telescope stable enough to continue searching for transiting planets around distant stars. Credit: NASA Ames/W Stenzel.

If a decision is made to proceed, the revised mission concept, called K2, will need to make it through the 2014 Senior Review, in which operating missions are assessed. We should expect further news by the end of 2013 even as data from the original mission continue to be analyzed.

All of this may take you back to the Mariner 10 mission to Mercury, launched in 1973 and itself plagued by problems. Like Galileo, Mariner 10 suffered a high-gain antenna malfunction early in the flight, though the antenna would later come back to life, and like Kepler, the spacecraft proved difficult to stabilize. Problems in its star tracker caused the spacecraft to roll, costing it critical attitude control gas as it tried to stabilize itself. The guidance and control team at the Jet Propulsion Laboratory was able to adjust the orientation of Mariner 10’s solar panels, testing various tilt angles to counter the spacecraft’s roll. It was an early demonstration of the forces at work in solar sails.

We can hope that ingenuity and judicious use of solar photons can also bring Kepler back to life in an extended mission no one would have conceived when the spacecraft was designed. What we’ll wind up with is about 4.5 viewing periods (‘campaigns’) per orbit of the Sun, each with its own field of view and the capability of studying it for approximately 83 days. As the diagram shows, the proper positioning of the spacecraft to keep sunlight balanced on its solar panels is crucial. It’s a tricky challenge but one that could provide new discoveries ahead.

Addendum, from a news release just issued by NASA:

Based on an independent science and technical review of the Kepler project’s concept for a Kepler two-wheel mission extension, Paul Hertz, NASA’s Astrophysics Division director, has decided to invite Kepler to the Senior Review for astrophysics operating missions in early 2014.

The Kepler team’s proposal, dubbed K2, demonstrated a clever and feasible methodology for accurately controlling the Kepler spacecraft at the level of precision required for scientifically valuable data collection. The team must now further validate the concept and submit a Senior Review proposal that requests the funding necessary to continue the Kepler mission, with sufficient scientific justification to make it a viable option for the use of NASA’s limited resources.

To be clear, this is not a decision to continue operating the Kepler spacecraft or to conduct a two-wheel extended mission; it is merely an opportunity to write another proposal and compete against the Astrophysics Division’s other projects for the limited funding available for astrophysics operating missions.


Putting the Solar System in Context

Yesterday I mentioned that we don’t know yet where New Horizons will ultimately end up on a map of the night sky like the ones used in a recent IEEE Spectrum article to illustrate the journeys of the Voyagers and Pioneers. We’ll know more once future encounters with Kuiper Belt objects are taken into account. But the thought of New Horizons reminds me that Jon Lomberg will be talking about the New Horizons Message Initiative, as well as the Galaxy Garden he has created in Hawaii, today at the Arthur C. Clarke Center at UC San Diego. The talk will be streamed live at: http://calit2.net/webcasting/jwplayer/index.php, with the webcast slated to begin at approximately 2045 EST, or 0145 UTC.

While both the Voyagers and the Pioneers carried physical artifacts representing humanity, New Horizons may have its message uploaded to the spacecraft’s memory after the encounter with Pluto/Charon, its images and perhaps sounds ‘crowdsourced’ from people around the world. That, at least, is the plan, but we need your signature on the New Horizons petition to make it happen. The first 10,000 to sign will have their names uploaded to the spacecraft, assuming all goes well and NASA approval is forthcoming. Please help by signing. In backing the New Horizons Message Initiative, principal investigator Alan Stern has said that it will “inspire and engage people to think about SETI and New Horizons in new ways.”

Artifacts, whether in computer memory or physical form like Voyager’s Golden Record, are really about how we see ourselves and our place in the universe. On that score, it’s heartening to see the kind of article I talked about yesterday in IEEE Spectrum, discussing where our probes are heading. When the Voyagers finished their planetary flybys, many people thought their missions were over. But even beyond their continued delivery of data as they cross the heliopause, the Voyagers are now awakening a larger interest in what lies beyond the Solar System. Even if they take tens of thousands of years to come remotely close to another star, the fact is that they are still traveling, and we’re seeing our system in this broader context.

The primary Alpha Centauri stars — Centauri A and B — are about 4.35 light years away. Proxima Centauri is actually a bit closer, at 4.22 light years. It’s easy enough to work out, using Voyager’s 17.3 kilometers per second velocity, that it would take over 73,000 years to travel the 4.22 light years that separate us from Proxima, but as we saw yesterday, we have to do more than take distance into account. Motion is significant, and the Alpha Centauri stars (I am assuming Proxima Centauri is gravitationally bound to A and B, which seems likely) are moving with a mean radial velocity of 25.1 ± 0.3 km/s towards the Solar System.

We’re talking about long time frames, to be sure. In about 28,000 years, having moved into the constellation Hydra as seen from Earth, Alpha Centauri will close to within 3.26 light years of the Solar System before beginning to recede. So while we can say that Voyager 1 would take 73,000 years to cross the 4.22 light years that currently separate us from Proxima Centauri, the question of how long it would take Voyager 1 to reach Alpha Centauri, given the relative motion of each, remains to be solved. I leave this exercise to those more mathematically inclined than myself, but hope one or more readers will share their results in the comments.
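For anyone wanting a starting point, here is a crude sketch in Python. It uses only the numbers quoted above (4.35 light years today, a 3.26 light-year closest approach 28,000 years from now, Voyager 1’s 17.3 kilometers per second), assumes straight-line motion at constant speed, and ignores Proxima itself and the full three-dimensional geometry, so treat the output as illustrative rather than definitive:

import math

LY_KM = 9.461e12   # kilometers per light year
YR_S  = 3.156e7    # seconds per year

d_now   = 4.35     # current distance to Alpha Centauri A/B, light years
d_min   = 3.26     # quoted closest approach, light years
t_min   = 28_000   # years until that closest approach
v_probe = 17.3 * YR_S / LY_KM   # Voyager 1's speed in light years per year

# Fit a straight-line, constant-speed track to the two quoted distances.
s_now  = math.sqrt(d_now**2 - d_min**2)   # along-track offset today, light years
v_star = s_now / t_min                    # the star's speed along that track, ly/yr

def star_distance(t):
    # Sun-to-star separation t years from now under the straight-line model.
    return math.sqrt(d_min**2 + (v_star * (t - t_min))**2)

# How close could a probe launched now at Voyager speed ever get, if aimed optimally?
shortfall, t_best = min((star_distance(t) - v_probe * t, t)
                        for t in range(0, 200_001, 10))

print(f"star's along-track speed : {v_star * LY_KM / YR_S:.1f} km/s")
print(f"smallest gap             : {shortfall:.2f} light years, about {t_best:,} years out")

Under this simplified model the star’s own motion, roughly 30 kilometers per second along its track, outpaces Voyager’s 17.3, which is exactly why dividing distance by speed doesn’t settle the question.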


Image: A Hubble image of Proxima Centauri taken with the observatory’s Wide Field and Planetary Camera 2. Centauri A and B are out of the frame. Credit: ESA/Hubble & NASA.

We saw yesterday that both Voyagers are moving toward stars that are themselves moving in our direction, Voyager 1 toward Gliese 445 and Voyager 2 toward Ross 248. When travel times run to tens of thousands of years, it helps to be heading toward something that is coming even faster toward you, which is why Voyager 1 closes to within 1.6 light years of Gl 445 in 40,000 years. But these are hardly the only stars moving our way. Barnard’s Star, which shows the largest known proper motion of any star relative to the Solar System, is moving at around 140 kilometers per second relative to the Sun. Its closest approach should come roughly 9,800 years from now, when it will close to 3.75 light years. By then, of course, Alpha Centauri will have moved even closer to the Sun.

When we talk about interstellar probes, we’re obviously hoping to move a good deal faster, but it’s interesting to realize that our motion through the galaxy sets up a wide variety of stellar encounters. Epsilon Indi, currently some 11.8 light years away, is moving at about 90 kilometers per second relative to the Sun and will close to 10.6 light years in about 17,000 years, roughly the distance Tau Ceti will have closed to in the sky of 43,000 years from now.

And as I learned from Erik Anderson’s splendid Vistas of Many Worlds, the star Gliese 710 is one of the most interesting in terms of close encounters. It’s currently 64 light years away in the constellation Serpens, but give it 1.4 million years and Gl 710 will move within 50,000 AU of the Sun. That’s clearly in our wheelhouse, for 50,000 AU is the realm of the Oort Cloud comets, and we can only imagine what effect the passage of a star this close will have on the cometary cloud. If humans are around that far in the future, Gl 710 will offer an interstellar destination right on our doorstep as it swings by on its galactic journey.


The Stars in their Courses

Here’s hoping Centauri Dreams readers in the States enjoyed a restful Thanksgiving holiday, though with travel problems being what they are, I often find holidays can turn into high-stress drama unless spent at home. Fortunately, I was able to do that and, in addition to a wonderful meal with my daughter’s family, spent the rest of the time on long-neglected odds and ends, like switching to a new Linux distribution (Mint 16 RC) and fine-tuning it as the platform from which I run this site and do other work (I’ve run various Linux flavors for years and always enjoy trying out the latest releases).

Leafing through incoming tweets over the weekend, I ran across a link to Stephen Cass’ article in IEEE Spectrum on Plotting the Destinations of 4 Interstellar Probes. We always want to know where things are going, and I can remember digging up this information with a sense of awe when working on my Centauri Dreams book back around 2002-2003. After all, the Voyagers and the Pioneers we’ve sent on their journeys aren’t going to be coming back, but represent the first spacecraft we’ve sent on interstellar trajectories.

And if the Pioneers have fallen silent, we do have the two Voyagers still sending back data, as they will for another decade or so, giving us a first look at the nearby interstellar medium. For true star traveling, of course, even these doughty probes will fall silent at the very beginning of their journeys. Voyager 1 is headed on a course that takes it in the rough direction of the star AC+79 3888 (Gliese 445), a red dwarf 17.6 light years away in the constellation Camelopardalis. Here’s the star chart the IEEE Spectrum ran to show Voyager 1’s path — the article offers charts for the other interstellar destinations as well, but I’ll send you directly to it for those.

Image: The star chart IEEE Spectrum ran to show Voyager 1’s path toward Gliese 445. Credit: IEEE Spectrum.

Notice anything interesting about Voyager 1? The IEEE Spectrum article says the spacecraft will pass within 1.6 light years of the star in 40,000 years, and indeed, this is a figure you can find mentioned in two authoritative papers: Rudd et al., “The Voyager Interstellar Mission” (Acta Astronautica 40, pp. 383-396) and Cesarone et al., “Prospects for the Voyager Extra-Planetary and Interstellar Mission” (JBIS 36, pp. 99-116). NASA used the 40,000 year figure as recently as September, when Voyager project manager Suzanne Dodd (JPL) spoke of the star to Space.com: “Voyager’s on its way to a close approach with it in about 40,000 years.” I used the figure as well in my Centauri Dreams book.

But wait: I’ve often cited how long it would take Voyager 1 to get to Alpha Centauri if it happened to be aimed in that direction. The answer, well over 70,000 years, is obviously much longer than the 40,000 years NASA is citing to get to the neighborhood of Gliese 445. How can this be? After all, Voyager 1 is moving at 17 kilometers per second, and should take 17,600 years to travel one light year. In 40,000 years, it should be a little over 2 light years from Earth.

I’m drawing these numbers from Voyager 1 and Gliese 445, a fascinating piece that ran in the Math Encounters Blog in September of this year. The author, listed as ‘an engineer trying to figure out how the world works,’ became interested in Voyager 1’s journey and decided to work out the math. It turns out NASA is right because the star Gliese 445 is itself moving toward the Solar System at a considerable clip, about 119 kilometers per second. Gliese 445 should be making its closest approach to the Solar System approximately 46,000 years from now, closing to within 3.485 light years, a good deal closer than the Alpha Centauri stars are today.
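A quick sanity check in Python makes the same point using nothing but the figures already quoted; this is a back-of-envelope comparison along the line of sight, not the Math Encounters analysis:

KM_PER_LY = 9.461e12   # kilometers per light year
S_PER_YR  = 3.156e7    # seconds per year

def light_years_covered(speed_km_s, years):
    # Distance covered at a constant speed, expressed in light years.
    return speed_km_s * S_PER_YR * years / KM_PER_LY

years = 40_000
probe = light_years_covered(17.0, years)    # Voyager 1
star  = light_years_covered(119.0, years)   # Gliese 445, heading our way

print(f"Voyager 1 covers about {probe:.1f} light years in {years:,} years")
print(f"Gliese 445 covers about {star:.1f} light years in the same time")
print(f"combined: about {probe + star:.1f} light years against today's 17.6 light-year gap")

Most of the closing, in other words, is being done by the star rather than the probe.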

Never forget, in other words, that we’re dealing with a galaxy in constant motion! Voyager 1’s sister craft makes the same point. It’s headed toward the red dwarf Ross 248, 10 light years away but also moving briskly in our direction. In fact, within 36,000 years the star should have approached to within 3.024 light years, much closer than Proxima Centauri is today. Voyager 2, headed outbound at roughly 15 kilometers per second, thus closes on Ross 248 in about the same time as Voyager 1 nears Gliese 445: 40,000 years. Voyager 2 then presses on in the direction of Sirius, but some 296,000 years will have gone by before it makes its closest approach.

As for the Pioneers, Pioneer 10 is moving in the direction of Aldebaran, some 68 light years out, and will make its closest approach in two million years, while Pioneer 11 heads toward Lambda Aquilae, an infant star (160 million years old) some 125 light years away, approaching it four million years from now. We could throw New Horizons into the mix, but remember that after the Pluto/Charon encounter, mission controllers hope to re-direct the craft toward one or more Kuiper Belt objects, so we don’t know exactly where its final course will take it. Doubtless I’ll be able to update this article with a final New Horizons trajectory a few years from now. Until then, be sure to check out the IEEE star charts for a graphic look at these distant destinations.


Is Energy a Key to Interstellar Communication?

I first ran across David Messerschmitt’s work in his paper “Interstellar Communication: The Case for Spread Spectrum,” and was delighted to meet him in person at Starship Congress in Dallas last summer. Dr. Messerschmitt has been working on communications methods designed for interstellar distances for some time now, with results that are changing the paradigm for how such signals would be transmitted, and hence what SETI scientists should be looking for. At the SETI Institute he is proposing the expansion of the types of signals being searched for in the new Allen Telescope Array. His rich discussion on these matters follows.

By way of background, Messerschmitt is the Roger A. Strauch Professor Emeritus of Electrical Engineering and Computer Sciences at the University of California at Berkeley. For the past five years he has collaborated with the SETI Institute and other SETI researchers in the study of the new domain of “broadband SETI”, hoping to influence the direction of SETI observation programs as well as future METI transmission efforts. He is the co-author of Software Ecosystem: Understanding an Indispensable Technology and Industry (MIT Press, 2003), author of Understanding Networked Applications (Morgan-Kaufmann, 1999), and co-author of the widely used textbook Digital Communications (Kluwer, 1993). Prior to 1977 he was with AT&T Bell Laboratories as a researcher in digital communications. He is a Fellow of the IEEE, a Member of the National Academy of Engineering, and a recipient of the IEEE Alexander Graham Bell Medal recognizing “exceptional contributions to the advancement of communication sciences and engineering.”

by David G. Messerschmitt


We all know that generating sufficient energy is a key to interstellar travel. Could energy also be a key to successful interstellar communication?

One manifestation of the Fermi paradox is our lack of success in detecting artificial signals originating outside our solar system, despite five decades of SETI observations at radio wavelengths. This could be because our search is incomplete, or because such signals do not exist, or because we haven’t looked for the right kind of signal. Here we explore the third possibility.

A small (but enthusiastic and growing) cadre of researchers is proposing that energy may be the key to unlocking new signal structures more appropriate for interstellar communication, yet not visible to current and past searches. Terrestrial communication may be a poor example for interstellar communication, because it emphasizes minimization of bandwidth at the expense of greater radiated energy. This prioritization is due to an artificial scarcity of spectrum created by regulatory authorities, who divide the spectrum among various uses. If interstellar communication were to reverse these priorities, then the resulting signals would be very different from the familiar signals we have been searching for.

Starships vs. civilizations

There are two distinct applications of interstellar communication: communication with starships and communication with extraterrestrial civilizations. These two applications invoke very different requirements, and thus should be addressed independently.

Starship communication. Starship communication will be two-way, and the two ends can be designed as a unit. We will communicate control information to a starship, and return performance parameters and scientific data. Effectiveness in the control function is enhanced if the round-trip delay is minimized. The only component of this round-trip delay over which we have influence is the time it takes to transmit and receive each message, and our only handle for reducing it is a higher information rate. High information rates also allow more scientific information to be collected and returned to Earth. The accuracy of control and the integrity of scientific data demand reliability, or a low error rate.

Communication with a civilization. In our preliminary phase where we are not even sure other civilizations exist, communication with a civilization (or they with us) will be one way, and the transmitter and receiver must be designed independently. This lack of coordination in design is a difficult challenge. It also implies that discovery of the signal by a receiver, absent any prior information about its structure, is a critical issue.

We (or they) are likely to carefully compose a message revealing something about our (or their) culture and state of knowledge. Composition of such a message should be a careful deliberative process, and changes to that message will probably occur infrequently, on timeframes of years or decades. Because we (or they) don’t know when and where such a message will be received, we (or they) are forced to transmit the message repeatedly. In this case, reliable reception (low error rate) for each instance of the message need not be a requirement because the receiving civilization can monitor multiple repetitions and stitch them together over time to recover a reliable rendition. In one-way communication, there is no possibility of eliminating errors entirely, but very low rates of error can be achieved. For example, if an average of one out of a thousand bits is in error for a single reception, after observing and combining five (seven) replicas of a message only one out of 100 megabits (28 gigabits) will still be in error.
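One simple way to stitch replicas together is bitwise majority voting; a few lines of Python show how the numbers above come about under that assumption (majority voting is just one possible combining rule, used here for illustration):

from math import comb

def residual_error(p, copies):
    # Probability a bit is still wrong after bitwise majority voting over an
    # odd number of independently received copies of the message.
    majority = copies // 2 + 1
    return sum(comb(copies, k) * p**k * (1 - p)**(copies - k)
               for k in range(majority, copies + 1))

p = 1e-3   # one bit in a thousand wrong on any single reception
for copies in (1, 5, 7):
    err = residual_error(p, copies)
    print(f"{copies} replica(s): about 1 error per {1/err:,.0f} bits")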

Message transmission time is also not critical. Even after two-way communication is established, transmission time won’t be a big component of the round-trip delay in comparison to large one-way propagation delays. For example, at a rate of one bit per second, we can transmit 40 megabytes of message data per decade, and a decade is not particularly significant in the context of a delay of centuries or millennia required for speed-of-light propagation alone.

At interstellar distances of hundreds or thousands of light years, there are additional impairments to overcome at radio wavelengths, in the form of interstellar dispersion and scattering due to clouds of partially ionized gases. Fortunately these impairments have been discovered and “reverse engineered” by pulsar astronomers and astrophysicists, so that we can design our signals taking these impairments into account, even though there is no possibility of experimentation.

Propagation losses are proportional to the square of distance, so large antennas and/or large radiated energies are necessary to deliver sufficient signal flux at the receiver. This makes energy a considerable economic factor, manifested either in the cost of massive antennas or in energy utility costs.

The remainder of this article addresses communication with civilizations rather than starships.

Compatibility without coordination

Even though one civilization is designing a transmitter and the other a receiver, the only hope of compatibility is for each to design an end-to-end system. That way, each fully contemplates and accounts for the challenges of the other. Even then there remains a lot of design freedom and a world (and maybe a galaxy) full of clever ideas, with many possibilities. I believe there is no hope of finding common ground unless a) we (and they) keep things very simple, b) we (and they) fall back on fundamental principles, and c) we (and they) base the design on physical characteristics of the medium observable by both of us. This “implicit coordination” strategy is illustrated in Fig. 1. Let’s briefly review all three elements of this three-pronged strategy.

Image: Fig. 1. The “implicit coordination” strategy described above: simplicity, reliance on fundamental principles, and a design based on physical characteristics of the medium that both parties can observe.

The simplicity argument is perhaps the most interesting. It postulates that complexity is an obstacle to finding common ground in the absence of coordination. Similar to Occam’s razor in philosophy, it can be stated as “the simplest design that meets the needs and requirements of interstellar communication is the best design”. Stated in a negative way, as designers we should avoid any gratuitous requirements that increase the complexity of the solution and fail to produce substantive advantage.

Regarding fundamental principles, thanks to some amazing theorems due to Claude Shannon in 1948, communications is blessed with mathematically provable fundamental limits on our ability to communicate. Those limits, as well as ways of approaching them, depend on the nature of impairments introduced in the physical environment. Since 1948, communications has been dominated by an unceasing effort to approach those fundamental limits, and with good success based on advancing technology and conceptual advances. If both the transmitter and receiver designers seek to approach fundamental limits, they will arrive at similar design principles even as they glean the performance advantages that result.

We also have to presume that other civilizations have observed the interstellar medium, and arrived at similar models of impairments to radio propagation originating there. As we will see, both the energy requirements and interstellar impairments are helpful, because they drastically narrow the characteristics of signals that make sense.

Prioritizing energy simplifies the design

Ordinarily it is notoriously difficult and complex to approach the Shannon limit, and that complexity would be the enemy of uncoordinated design. However, if we ask “limit with respect to what?”, there are two resources that govern the information rate that can be achieved and the reliability with which that information can be extracted from the signal. These are the bandwidth which is occupied by the signal and the “size” of the signal, usually quantified by its energy. Most complexity arises from forcing a limit on bandwidth. If any constraint on bandwidth is avoided, the solution becomes much simpler.

Harry Jones of NASA observed in a paper published in 1995 that there is a large window of microwave frequencies over which the interstellar medium and atmosphere are relatively transparent. Why not, Jones asked, make use of this wide bandwidth, assuming there are other benefits to be gained? In other words, we can argue that any bandwidth constraint is a gratuitous requirement in the context of interstellar communication. Removing that constraint does simplify the design. But another important benefit emphasized by Jones is reducing the signal energy that must be delivered to the receiver. At the altar of Occam’s razor, constraining bandwidth to be narrow causes harm (an increase in required signal energy) with no identifiable advantage. Peter Fridman of the Netherlands Institute for Radio Astronomy recently published a paper following up with a specific end-to-end characterization of the energy requirements using techniques similar to Jones’s proposal.

I would add to Jones’s argument that the information rates are likely to be low, which implies a small bandwidth to start with. For example, starting at one bit per second, the minimum bandwidth is about one Hz. A million-fold increase in bandwidth is still only a megahertz, which is tiny when compared to the available microwave window. Even a billion-fold increase should be quite feasible with our technology.

Why, you may be asking, does increasing bandwidth allow the delivered energy to be smaller? After all, a wide bandwidth allows more total noise into the receiver. The reason has to do with the geometry of higher dimensional Euclidean spaces, since permitting more bandwidth allows more degrees of freedom in the signal, and a higher dimensional space has a greater volume in which to position signals farther apart and thus less likely to be confused by noise. I suggest you use this example to motivate your kids to pay better attention in geometry class.
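For readers who like to see it written down, this is the standard Shannon tradeoff (textbook notation, with B the bandwidth, P the received signal power and N_0 the noise power spectral density):

C = B \log_2\left(1 + \frac{P}{N_0 B}\right) \;\longrightarrow\; \frac{P}{N_0 \ln 2} \quad (B \to \infty),
\qquad\text{so}\qquad
\frac{E_b}{N_0} \;\ge\; \ln 2 \approx -1.6\ \text{dB}.

Letting the bandwidth grow pushes the required energy per bit down toward its floor of N_0 ln 2; squeezing the bandwidth pushes it up.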

Another requirement that we have argued is gratuitous is high reliability in the extraction of information from the signal. Very low bit error rates can be achieved by error-control coding schemes, but these add considerable complexity and are unnecessary when the receiver has multiple replicas of a message to work with. Allowing higher error rates, moreover, further reduces the energy requirement.

The minimum delivered energy

For a message, the absolute minimum energy that must be delivered to the receiver baseband processing while still recovering information from that signal can be inferred from the Shannon limit. The cosmic background noise is the ultimate limiting factor, after all other impairments are eliminated by technological means. In particular the minimum energy must be larger than the product of three factors: (1) the power spectral density of the cosmic background radiation, (2) the number of bits in the message, and (3) the natural logarithm of two.
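Written out (with M the number of message bits and N_0 the power spectral density of the background, roughly k_B T_{\mathrm{CMB}} at these wavelengths):

E_{\min} \;>\; N_0 \, M \, \ln 2 \;\approx\; k_B \, T_{\mathrm{CMB}} \, M \, \ln 2 .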

Even at this lower limit, the energy requirements are substantial. For example, at a carrier frequency of 5 GHz at least eight photons must arrive at the receiver baseband processing for each bit of information. Between two Arecibo antennas with 100% efficiency at 1000 light years, this corresponds to a radiated energy of 0.4 watt-hours for each bit in our message, or 3.7 megawatt-hours per megabyte. To Earthlings today, this would create a utility bill of roughly $400 per megabyte. (This energy and cost scale quadratically with distance.) This doesn’t take into account various non-idealities (like antenna inefficiency, noise in the receiver, etc.) or any gap to the fundamental limit due to using practical modulation techniques. You can increase the energy by an order of magnitude or two for these effects. This energy and cost per message is multiplied by repeated transmission of the message in multiple directions simultaneously (perhaps thousands!), allowing that the transmitter may not know in advance where the message will be monitored. Pretty soon there will be real money involved, at least at our Earthly energy prices.
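These numbers are easy to sanity-check. The short Python calculation below assumes a 2.7 K background, two 305-meter dishes at 100 percent efficiency and the standard Friis relation between transmitted and received power; it lands close to the figures quoted above, though not exactly on them, since the underlying assumptions surely differ in detail:

import math

k_B   = 1.380649e-23      # Boltzmann constant, J/K
h     = 6.62607e-34       # Planck constant, J*s
c     = 2.998e8           # speed of light, m/s
T_cmb = 2.725             # cosmic background temperature, K
f     = 5e9               # carrier frequency, Hz
d     = 1000 * 9.461e15   # 1000 light years, in meters
A     = math.pi * (305 / 2)**2   # effective aperture of a 305 m dish, m^2

E_bit_rx = k_B * T_cmb * math.log(2)     # minimum received energy per bit (Shannon limit)
photons  = E_bit_rx / (h * f)            # photons per bit at 5 GHz
capture  = (A * A) / (d**2 * (c / f)**2) # Friis: fraction of radiated energy captured
E_bit_tx = E_bit_rx / capture            # radiated energy per bit, joules

print(f"photons per bit at the receiver : {photons:.1f}")
print(f"radiated energy per bit         : {E_bit_tx / 3600:.2f} Wh")
print(f"radiated energy per megabyte    : {E_bit_tx * 8e6 / 3.6e9:.1f} MWh")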

Two aspects of the fundamental limit are worth noting. First, we didn’t mention bandwidth. In fact, the stated fundamental limit assumes that bandwidth is unconstrained. If we do constrain bandwidth and start to reduce it, then the requirement on delivered energy increases, and rapidly at that. Thus both simplicity and minimizing energy consumption or reducing antenna area at the transmitter are aligned with using a large bandwidth in relation to the information rate. Second, this minimum energy per message does not depend on the rate at which the message is transmitted and received. Reducing the transmission time for the message (by increasing the information rate) does not affect the total energy, but does increase the average power correspondingly. Thus there is an economic incentive to slow down the information rate and increase the message transmission time, which should be quite okay.

What do energy-limited signals actually look like?

A question of considerable importance is the degree to which we can or cannot infer enough characteristics of a signal to significantly constrain the design space. Combined with Occam’s razor and jointly observable physical effects, the structure of an energy-limited transmitted signal is narrowed considerably.

Based on models of the interstellar medium developed in pulsar astronomy, I have shown that there is an “interstellar coherence hole” consisting of an upper bound on the time duration and bandwidth of a waveform such that the waveform is for all practical purposes unaffected by these impairments. Further, I have shown that structuring a signal around simple on-off patterns of energy, where each “bundle” of energy is based on a waveform that falls within the interstellar coherence hole, does not compromise our ability to approach the fundamental limit. In this fashion, the transmit signal can be designed to completely circumvent impairments, without a compromise in energy. (This is the reason that the fundamental limit stated above is determined by noise, and noise alone.) Both the transmitter and receiver can observe the impairments and thereby arrive at similar estimates of the coherence hole parameters.

The interstellar medium and motion are not completely removed from the picture by this simple trick, because they still announce their presence through scintillation, which is a fluctuation of arriving signal flux similar to the twinkling of the stars (radio engineers call this same phenomenon “fading”). Fortunately we know of ways to counter scintillation without affecting the energy requirement, because it does not affect the average signal flux. The minimum energy required for reliable communication in the presence of noise and fading was established by Robert Kennedy of MIT (a professor sharing a name with a famous politician) in 1964. My recent contribution has been to extend his models and results to the interstellar case.

Signals designed to minimize delivered energy based on these energy bundles have a very different character from what we are accustomed to in terrestrial radio communication. This is an advantage in itself, because another big challenge I haven’t yet mentioned is confusion with artificial signals of terrestrial or near-space origin. This is less of a problem if the signals (local and interstellar) are quite distinctive.

A typical example of an energy-limited signal is illustrated in Fig. 2. The idea behind energy-limited communication is to embed information in the locations of energy bundles, rather than in other (energy-wasting but bandwidth-conserving) parameters like magnitude or phase. In the example of Fig. 2, each rectangle includes 2048 locations where an energy bundle might occur (256 frequencies and 8 time locations), but an actual energy bundle arrives in only one of these locations. When the receiver observes this unique location, eleven bits of information have been conveyed from transmitter to receiver (because 2^11 = 2048). This location-based scheme is energy-efficient because a single energy bundle conveys eleven bits.

Image: Fig. 2. A typical energy-limited signal: sparse energy bundles placed at discrete locations in time and frequency.
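As a toy illustration of the location-based mapping in Fig. 2 (256 frequency slots by 8 time slots, hence 2048 locations and 11 bits per bundle), consider the following sketch; the slot counts come from the example above, while the code itself is purely illustrative:

N_FREQ, N_TIME = 256, 8
BITS_PER_BUNDLE = (N_FREQ * N_TIME).bit_length() - 1   # log2(2048) = 11

def encode(bits):
    # Map an 11-bit string to a (frequency slot, time slot) location.
    assert len(bits) == BITS_PER_BUNDLE
    return divmod(int(bits, 2), N_TIME)

def decode(freq_slot, time_slot):
    # Recover the 11 bits from the detected bundle location.
    return format(freq_slot * N_TIME + time_slot, f"0{BITS_PER_BUNDLE}b")

location = encode("10110011101")
print(location)            # (179, 5): transmit one energy bundle at this slot
print(decode(*location))   # '10110011101' recovered at the receiver

All of the information rides in where the bundle lands, not in its amplitude or phase, which is what makes the scheme frugal with energy.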

The singular characteristic of Fig. 2 is energy located in discrete but sparse locations in time and frequency. Each bundle has to be sufficiently energetic to overwhelm the noise at the receiver, so that its location can be detected reliably. This is pretty much how a lighthouse works: Discrete flashes of light are each energetic enough to overcome loss and noise, but they are sparse in time (in any one direction) to conserve energy. This is also how optical SETI is usually conceived, because optical designers usually don’t concern themselves with bandwidth either. Energy-limited radio communication thus resembles optical, except that the individual “pulses” of energy must be consciously chosen to avoid dispersive impairments at radio wavelengths.

This scheme (which is called frequency-division keying combined with pulse-position modulation) is extremely simple compared to the complicated bandwidth-limited designs we typically see terrestrially and in near space, and yet (as long as we don’t attempt to violate the minimum energy requirement) it can achieve an error probability approaching zero as the number of locations grows. (Some additional measures are needed to account for scintillation, although I won’t discuss this further.) We can’t do better than this in terms of the delivered energy, and neither can another civilization, no matter how advanced their technology. This scheme does consume voluminous bandwidth, especially as we attempt to approach the fundamental limit, and Ian S. Morrison of the Australian Centre for Astrobiology is actively looking for simple approaches to achieve similar ends with less bandwidth.

What do “they” know about energy-limited communication?

Our own psyche is blinded by bandwidth-limited communication based on our experience with terrestrial wireless. Some might reasonably argue that “they” must surely suffer the same myopic view and gravitate toward bandwidth conservation. I disagree, for several reasons.

Because energy-limited communication is simpler than bandwidth-limited communication, the basic design methodology was well understood much earlier, back in the 1950s and 1960s. It was the 1990s before bandwidth-limited communication was equally well understood.

Have you ever wondered why the modulation techniques used in optical communications are usually so distinctive from radio? One of the main differences is this bandwidth- vs energy-limited issue. Bandwidth has never been considered a limiting resource at the shorter optical wavelengths, and thus minimizing energy rather than bandwidth has been emphasized. We have considerable practical experience with energy-limited communication, albeit mostly at optical wavelengths.

If another civilization has more plentiful and cheaper energy sources or a bigger budget than us, there are plenty of beneficial ways to consume more energy other than being deliberately inefficient. They could increase message length, or reduce the message transmission time, or transmit in more directions simultaneously, or transmit a signal that can be received at greater distances.

Based on our Earthly experience, it is reasonable to expect that both a transmitting and a receiving civilization would be acutely aware of energy-limited communication, and I expect that they would choose to exploit it for interstellar communication.

Discovery

Communication isn’t possible until the receiver discovers the signal in the first place. Discovery of an energy-limited signal as illustrated in Fig. 2 is easy in one respect, since the signal is sparse in both time and frequency (making it relatively easy to distinguish from natural phenomena as well as artificial signals of terrestrial origin) and individual energy bundles are energetic (making them easier to detect reliably). Discovery is hard in another respect, since due to that same sparsity we must be patient and conduct multiple observations in any particular range of frequencies to confidently rule out the presence of a signal with this character.

Criticisms of this approach

What are some possible shortcomings or criticisms of this approach? None of us has yet studied the possible issues in designing a high-power radio transmitter that generates a signal of this type. Some say that bandwidth does need to be conserved, for reasons such as interference with terrestrial services. Others say that we should expect a “beacon”, which is a signal designed to attract attention but simplified because it carries no information. Still others say that an extraterrestrial signal might be deliberately disguised to look identical to typical terrestrial signals (and hence emphasize narrow bandwidth rather than low energy) so that it might be discovered accidentally.

What do you think? In your comments on this post, the Centauri Dreams community can be helpful in critiquing and second-guessing my assumptions and conclusions. If you want to delve further, I have posted a report at http://arxiv.org/abs/1305.4684 that includes references to the foundational work.
