Is Energy a Key to Interstellar Communication?

I first ran across David Messerschmitt’s work in his paper “Interstellar Communication: The Case for Spread Spectrum,” and was delighted to meet him in person at Starship Congress in Dallas last summer. Dr. Messerschmitt has been working on communications methods designed for interstellar distances for some time now, with results that are changing the paradigm for how such signals would be transmitted, and hence what SETI scientists should be looking for. At the SETI Institute he is proposing the expansion of the types of signals being searched for in the new Allen Telescope Array. His rich discussion on these matters follows.

By way of background, Messerschmitt is the Roger A. Strauch Professor Emeritus of Electrical Engineering and Computer Sciences at the University of California at Berkeley. For the past five years he has collaborated with the SETI Institute and other SETI researchers in the study of the new domain of “broadband SETI”, hoping to influence the direction of SETI observation programs as well as future METI transmission efforts. He is the co-author of Software Ecosystem: Understanding an Indispensable Technology and Industry (MIT Press, 2003), author of Understanding Networked Applications (Morgan-Kaufmann, 1999), and co-author of the widely used textbook Digital Communications (Kluwer, 1993). Prior to 1977 he was with AT&T Bell Laboratories as a researcher in digital communications. He is a Fellow of the IEEE, a Member of the National Academy of Engineering, and a recipient of the IEEE Alexander Graham Bell Medal recognizing “exceptional contributions to the advancement of communication sciences and engineering.”

by David G. Messerschmitt


We all know that generating sufficient energy is a key to interstellar travel. Could energy also be a key to successful interstellar communication?

One manifestation of the Fermi paradox is our lack of success in detecting artificial signals originating outside our solar system, despite five decades of SETI observations at radio wavelengths. This could be because our search is incomplete, or because such signals do not exist, or because we haven’t looked for the right kind of signal. Here we explore the third possibility.

A small (but enthusiastic and growing) cadre of researchers is proposing that energy may be the key to unlocking new signal structures more appropriate for interstellar communication, yet not visible to current and past searches. Terrestrial communication may be a poor example for interstellar communication, because it emphasizes minimization of bandwidth at the expense of greater radiated energy. This prioritization is due to an artificial scarcity of spectrum created by regulatory authorities, who divide the spectrum among various uses. If interstellar communication were to reverse these priorities, then the resulting signals would be very different from the familiar signals we have been searching for.

Starships vs. civilizations

There are two distinct applications of interstellar communication: communication with starships and communication with extraterrestrial civilizations. These two applications invoke very different requirements, and thus should be addressed independently.

Starship communication. Starship communication will be two-way, and the two ends can be designed as a unit. We will communicate control information to a starship, and return performance parameters and scientific data. Effectiveness in the control function is enhanced if the round-trip delay is minimized. The only parameter of this round-trip delay over which we have influence is the time it takes to transmit and receive each message, and our only handle to reduce this is a higher information rate. High information rates also allow more scientific information to be collected and returned to Earth. The accuracy of control and the integrity of scientific data demand reliability, or a low error rate.

Communication with a civilization. In our preliminary phase where we are not even sure other civilizations exist, communication with a civilization (or they with us) will be one way, and the transmitter and receiver must be designed independently. This lack of coordination in design is a difficult challenge. It also implies that discovery of the signal by a receiver, absent any prior information about its structure, is a critical issue.

We (or they) are likely to carefully compose a message revealing something about our (or their) culture and state of knowledge. Composition of such a message should be a careful deliberative process, and changes to that message will probably occur infrequently, on timeframes of years or decades. Because we (or they) don’t know when and where such a message will be received, we (or they) are forced to transmit the message repeatedly. In this case, reliable reception (low error rate) for each instance of the message need not be a requirement because the receiving civilization can monitor multiple repetitions and stitch them together over time to recover a reliable rendition. In one-way communication, there is no possibility of eliminating errors entirely, but very low rates of error can be achieved. For example, if an average of one out of a thousand bits is in error for a single reception, after observing and combining five (seven) replicas of a message only one out of 100 megabits (28 gigabits) will still be in error.
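To make the arithmetic concrete, here is a short Python sketch. It assumes simple bit-by-bit majority voting across the received replicas — one plausible combining rule, chosen here only for illustration — and it reproduces the error rates just quoted.

```python
from math import comb

def combined_error_rate(p: float, n: int) -> float:
    """Probability a bit is still wrong after majority voting over n replicas."""
    k_min = n // 2 + 1  # number of corrupted copies needed to outvote the rest
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

p = 1e-3  # one bit in a thousand received in error on any single reception
for n in (5, 7):
    err = combined_error_rate(p, n)
    print(f"{n} replicas: residual error rate {err:.1e} (about one bit in {1/err:.1e})")
# 5 replicas -> ~1e-8 (one in ~100 megabits); 7 replicas -> ~3.5e-11 (one in ~28 gigabits)
```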

Message transmission time is also not critical. Even after two-way communication is established, transmission time won’t be a big component of the round-trip delay in comparison to large one-way propagation delays. For example, at a rate of one bit per second, we can transmit 40 megabytes of message data per decade, and a decade is not particularly significant in the context of a delay of centuries or millennia required for speed-of-light propagation alone.

At interstellar distances of hundreds or thousands of light years, there are additional impairments to overcome at radio wavelengths, in the form of interstellar dispersion and scattering due to clouds of partially ionized gases. Fortunately these impairments have been discovered and “reverse engineered” by pulsar astronomers and astrophysicists, so that we can design our signals taking these impairments into account, even though there is no possibility of experimentation.

Propagation losses are proportional to distance-squared, so large antennas and/or large radiated energies are necessary to deliver sufficient signal flux at the receiver. This places energy as a considerable economic factor, manifested either in the cost of massive antennas or in energy utility costs.

The remainder of this article addresses communication with civilizations rather than starships.

Compatibility without coordination

Even though one civilization is designing a transmitter and the other a receiver, the only hope of compatibility is for each to design an end-to-end system. That way, each fully contemplates and accounts for the challenges of the other. Even then there remains a lot of design freedom and a world (and maybe a galaxy) full of clever ideas, with many possibilities. I believe there is no hope of finding common ground unless a) we (and they) keep things very simple, b) we (and they) fall back on fundamental principles, and c) we (and they) base the design on physical characteristics of the medium observable by both of us. This “implicit coordination” strategy is illustrated in Fig. 1. Let’s briefly review all three elements of this three-pronged strategy.

[Fig. 1: The three-pronged “implicit coordination” strategy]

The simplicity argument is perhaps the most interesting. It postulates that complexity is an obstacle to finding common ground in the absence of coordination. Similar to Occam’s razor in philosophy, it can be stated as “the simplest design that meets the needs and requirements of interstellar communication is the best design”. Stated in a negative way, as designers we should avoid any gratuitous requirements that increase the complexity of the solution and fail to produce substantive advantage.

Regarding fundamental principles, thanks to some amazing theorems due to Claude Shannon in 1948, communications is blessed with mathematically provable fundamental limits on our ability to communicate. Those limits, as well as ways of approaching them, depend on the nature of impairments introduced in the physical environment. Since 1948, communications has been dominated by an unceasing effort to approach those fundamental limits, and with good success based on advancing technology and conceptual advances. If both the transmitter and receiver designers seek to approach fundamental limits, they will arrive at similar design principles even as they glean the performance advantages that result.

We also have to presume that other civilizations have observed the interstellar medium, and arrived at similar models of impairments to radio propagation originating there. As we will see, both the energy requirements and interstellar impairments are helpful, because they drastically narrow the characteristics of signals that make sense.

Prioritizing energy simplifies the design

Ordinarily it is notoriously difficult and complex to approach the Shannon limit, and that complexity would be the enemy of uncoordinated design. However, if we ask “limit with respect to what?”, the answer is that two resources govern the information rate that can be achieved and the reliability with which that information can be extracted from the signal: the bandwidth occupied by the signal and the “size” of the signal, usually quantified by its energy. Most complexity arises from forcing a limit on bandwidth. If any constraint on bandwidth is avoided, the solution becomes much simpler.

Harry Jones of NASA observed in a paper published in 1995 that there is a large window of microwave frequencies over which the interstellar medium and atmosphere are relatively transparent. Why not, Jones asked, make use of this wide bandwidth, assuming there are other benefits to be gained? In other words, we can argue that any bandwidth constraint is a gratuitous requirement in the context of interstellar communication. Removing that constraint does simplify the design. But another important benefit emphasized by Jones is reducing the signal energy that must be delivered to the receiver. At the altar of Occam’s razor, constraining bandwidth to be narrow causes harm (an increase in required signal energy) with no identifiable advantage. Peter Fridman of the Netherlands Institute for Radio Astronomy recently published a paper following up with a specific end-to-end characterization of the energy requirements using techniques similar to Jones’s proposal.

I would add to Jones’s argument that the information rates are likely to be low, which implies a small bandwidth to start with. For example, starting at one bit per second, the minimum bandwidth is about one Hz. A million-fold increase in bandwidth is still only a megahertz, which is tiny when compared to the available microwave window. Even a billion-fold increase should be quite feasible with our technology.

Why, you may be asking, does increasing bandwidth allow the delivered energy to be smaller? After all, a wide bandwidth allows more total noise into the receiver. The reason has to do with the geometry of higher dimensional Euclidean spaces, since permitting more bandwidth allows more degrees of freedom in the signal, and a higher dimensional space has a greater volume in which to position signals farther apart and thus less likely to be confused by noise. I suggest you use this example to motivate your kids to pay better attention in geometry class.
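The effect can be quantified with Shannon’s capacity formula. The sketch below is a standard textbook calculation, not anything specific to a proposed interstellar signal: it computes the minimum energy per bit, relative to the noise spectral density, as a function of spectral efficiency (bits per second per hertz), and shows it falling toward the floor of ln 2 as the bandwidth grows large compared to the information rate.

```python
import math

# From C = B*log2(1 + P/(N0*B)): at spectral efficiency eta = R/B bits/s/Hz,
# the minimum energy per bit satisfies Eb/N0 = (2**eta - 1)/eta.
# As eta -> 0 (bandwidth >> bit rate), this approaches ln(2), about -1.6 dB.

def min_eb_over_n0(eta: float) -> float:
    """Minimum Eb/N0 (linear scale) at spectral efficiency eta in bits/s/Hz."""
    return (2.0 ** eta - 1.0) / eta

for eta in (4.0, 1.0, 0.1, 0.01):
    ratio = min_eb_over_n0(eta)
    print(f"eta = {eta:5.2f} b/s/Hz: Eb/N0 >= {ratio:.3f} ({10 * math.log10(ratio):+.2f} dB)")

print(f"limit as eta -> 0: ln(2) = {math.log(2):.3f} ({10 * math.log10(math.log(2)):+.2f} dB)")
```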

Another requirement that we have argued is gratuitous is high reliability in the extraction of information from the signal. Very low bit error rates can be achieved with error-control coding schemes, but these add considerable complexity and are unnecessary when the receiver has multiple replicas of a message to work with. Allowing higher error rates also reduces the energy requirement further.

The minimum delivered energy

For a message, the absolute minimum energy that must be delivered to the receiver baseband processing while still recovering information from that signal can be inferred from the Shannon limit. The cosmic background noise is the ultimate limiting factor, after all other impairments are eliminated by technological means. In particular the minimum energy must be larger than the product of three factors: (1) the power spectral density of the cosmic background radiation, (2) the number of bits in the message, and (3) the natural logarithm of two.

Even at this lower limit, the energy requirements are substantial. For example, at a carrier frequency of 5 GHz at least eight photons must arrive at the receiver baseband processing for each bit of information. Between two Arecibo antennas with 100% efficiency at 1000 light years, this corresponds to a radiated energy of 0.4 watt-hours for each bit in our message, or 3.7 megawatt-hours per megabyte. To Earthlings today, this would create a utility bill of roughly $400 per megabyte. (This energy and cost scale quadratically with distance.) This doesn’t take into account various non-idealities (like antenna inefficiency, noise in the receiver, etc.) or any gap to the fundamental limit due to using practical modulation techniques. You can increase the energy by an order of magnitude or two for these effects. This energy and cost per message is multiplied by repeated transmission of the message in multiple directions simultaneously (perhaps thousands!), allowing that the transmitter may not know in advance where the message will be monitored. Pretty soon there will be real money involved, at least at our Earthly energy prices.
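The figures above can be roughly reproduced with a few lines of Python. The constants below are my assumptions for illustration — a 2.73 K background, 305-meter dishes treated as ideal apertures, the textbook Friis relation for the link, and an electricity price of ten cents per kilowatt-hour — so treat the output as order-of-magnitude estimates rather than exact values.

```python
import math

# Assumed constants and parameters (illustrative only)
h = 6.626e-34        # Planck constant, J*s
k_B = 1.381e-23      # Boltzmann constant, J/K
c = 2.998e8          # speed of light, m/s
T_bg = 2.73          # cosmic background temperature, K (assumed)
f = 5e9              # carrier frequency, Hz (from the text)
D = 305.0            # dish diameter, m (Arecibo-class, 100% efficiency assumed)
d = 1000 * 9.461e15  # 1000 light years, in meters
price = 0.10         # assumed electricity price, $ per kWh

N0 = k_B * T_bg                 # factor 1: noise power spectral density, J
E_rx_bit = N0 * math.log(2)     # factors 2 and 3, per bit: N0 * ln(2)
print(f"photons per bit at 5 GHz: {E_rx_bit / (h * f):.1f}")          # ~8

A = math.pi * (D / 2) ** 2      # aperture area of each dish, m^2
lam = c / f                     # wavelength, m
link = A * A / (lam * d) ** 2   # Friis: fraction of radiated energy captured
E_tx_bit = E_rx_bit / link      # radiated energy per bit, J

Wh_per_bit = E_tx_bit / 3600.0
MWh_per_MB = Wh_per_bit * 8e6 / 1e6
print(f"radiated energy per bit: {Wh_per_bit:.2f} Wh")                # ~0.4
print(f"radiated energy per megabyte: {MWh_per_MB:.1f} MWh")          # ~3.5
print(f"utility cost per megabyte: ${MWh_per_MB * 1000 * price:.0f}") # a few hundred
```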

Two aspects of the fundamental limit are worth noting. First, we didn’t mention bandwidth. In fact, the stated fundamental limit assumes that bandwidth is unconstrained. If we do constrain bandwidth and start to reduce it, then the requirement on delivered energy increases, and rapidly at that. Thus both simplicity and minimizing energy consumption or reducing antenna area at the transmitter are aligned with using a large bandwidth in relation to the information rate. Second, this minimum energy per message does not depend on the rate at which the message is transmitted and received. Reducing the transmission time for the message (by increasing the information rate) does not affect the total energy, but does increase the average power correspondingly. Thus there is an economic incentive to slow down the information rate and increase the message transmission time, which should be quite okay.

What do energy-limited signals actually look like?

A question of considerable importance is the degree to which we can or cannot infer enough characteristics of a signal to significantly constrain the design space. Combined with Occam’s razor and jointly observable physical effects, the structure of an energy-limited transmitted signal is narrowed considerably.

Based on models of the interstellar medium developed in pulsar astronomy, I have shown that there is an “interstellar coherence hole” consisting of an upper bound on the time duration and bandwidth of a waveform such that the waveform is for all practical purposes unaffected by these impairments. Further, I have shown that structuring a signal around simple on-off patterns of energy, where each “bundle” of energy is based on a waveform that falls within the interstellar coherence hole, does not compromise our ability to approach the fundamental limit. In this fashion, the transmit signal can be designed to completely circumvent impairments, without a compromise in energy. (This is the reason that the fundamental limit stated above is determined by noise, and noise alone.) Both the transmitter and receiver can observe the impairments and thereby arrive at similar estimates of the coherence hole parameters.

The interstellar medium and motion are not completely removed from the picture by this simple trick, because they still announce their presence through scintillation, which is a fluctuation of arriving signal flux similar to the twinkling of the stars (radio engineers call this same phenomenon “fading”). Fortunately we know of ways to counter scintillation without affecting the energy requirement, because it does not affect the average signal flux. The minimum energy required for reliable communication in the presence of noise and fading was established by Robert Kennedy of MIT (a professor sharing a name with a famous politician) in 1964. My recent contribution has been to extend his models and results to the interstellar case.

Signals designed to minimize delivered energy based on these energy bundles have a very different character from what we are accustomed to in terrestrial radio communication. This is an advantage in itself, because another big challenge I haven’t yet mentioned is confusion with artificial signals of terrestrial or near-space origin. This is less of a problem if the signals (local and interstellar) are quite distinctive.

A typical example of an energy-limited signal is illustrated in Fig. 2. The idea behind energy-limited communication is to embed information in the locations of energy bundles, rather than other (energy-wasting but bandwidth-conserving) parameters like magnitude or phase. In the example of Fig. 2, each rectangle includes 2048 locations where an energy bundle might occur (256 frequencies and 8 time locations), but an actual energy bundle arrives in only one of these locations. When the receiver observes this unique location, eleven bits of information have been conveyed from transmitter to receiver (because 2^11 = 2048). This location-based scheme is energy-efficient because a single energy bundle conveys eleven bits.

[Fig. 2: A typical energy-limited signal, with energy bundles at discrete, sparse locations in time and frequency]

The singular characteristic of Fig. 2 is energy located in discrete but sparse locations in time and frequency. Each bundle has to be sufficiently energetic to overwhelm the noise at the receiver, so that its location can be detected reliably. This is pretty much how a lighthouse works: Discrete flashes of light are each energetic enough to overcome loss and noise, but they are sparse in time (in any one direction) to conserve energy. This is also how optical SETI is usually conceived, because optical designers usually don’t concern themselves with bandwidth either. Energy-limited radio communication thus resembles optical, except that the individual “pulses” of energy must be consciously chosen to avoid dispersive impairments at radio wavelengths.

This scheme (which is called frequency-division keying combined with pulse-position modulation) is extremely simple compared to the complicated bandwidth-limited designs we typically see terrestrially and in near space, and yet (as long as we don’t attempt to violate the minimum energy requirement) it can achieve an error probability approaching zero as the number of locations grows. (Some additional measures are needed to account for scintillation, although I won’t discuss this further.) We can’t do better than this in terms of the delivered energy, and neither can another civilization, no matter how advanced their technology. This scheme does consume voluminous bandwidth, especially as we attempt to approach the fundamental limit, and Ian S. Morrison of the Australian Centre for Astrobiology is actively looking for simple approaches to achieve similar ends with less bandwidth.
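To see how little machinery this modulation really needs, here is a toy Python encoder and decoder for the Fig. 2 example — my own illustrative sketch, not the signal design from the paper — mapping eleven bits to one of 2048 time-frequency locations and back.

```python
# Toy model of the location-based scheme in Fig. 2: 256 frequency slots times
# 8 time slots gives 2048 = 2**11 possible bundle locations, so the location
# of a single energy bundle conveys eleven bits.

N_FREQ, N_TIME = 256, 8
BITS_PER_BUNDLE = (N_FREQ * N_TIME).bit_length() - 1  # 11

def encode(bits: str) -> tuple[int, int]:
    """Map an 11-bit string to the (frequency index, time index) of one bundle."""
    assert len(bits) == BITS_PER_BUNDLE
    location = int(bits, 2)            # 0 .. 2047
    return divmod(location, N_TIME)    # (which of 256 frequencies, which of 8 times)

def decode(freq_idx: int, time_idx: int) -> str:
    """Recover the eleven bits from the detected bundle location."""
    return format(freq_idx * N_TIME + time_idx, f"0{BITS_PER_BUNDLE}b")

f_idx, t_idx = encode("10110011101")
print(f_idx, t_idx)            # the single location where the bundle is transmitted
print(decode(f_idx, t_idx))    # -> "10110011101"
```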

What do “they” know about energy-limited communication?

Our own psyche is blinded by bandwidth-limited communication based on our experience with terrestrial wireless. Some might reasonably argue that “they” must surely suffer the same myopic view and gravitate toward bandwidth conservation. I disagree, for several reasons.

Because energy-limited communication is simpler than bandwidth-limited communication, the basic design methodology was well understood much earlier, back in the 1950’s and 1960’s. It was the 1990’s before bandwidth-limited communication was equally well understood.

Have you ever wondered why the modulation techniques used in optical communications are usually so distinctive from radio? One of the main differences is this bandwidth- vs energy-limited issue. Bandwidth has never been considered a limiting resource at the shorter optical wavelengths, and thus minimizing energy rather than bandwidth has been emphasized. We have considerable practical experience with energy-limited communication, albeit mostly at optical wavelengths.

If another civilization has more plentiful and cheaper energy sources or a bigger budget than us, there are plenty of beneficial ways to consume more energy other than being deliberately inefficient. They could increase message length, or reduce the message transmission time, or transmit in more directions simultaneously, or transmit a signal that can be received at greater distances.

Based on our Earthly experience, it is reasonable to expect that both a transmitting and a receiving civilization would be acutely aware of energy-limited communication, and I expect they would choose to exploit it for interstellar communication.

Discovery

Communication isn’t possible until the receiver discovers the signal in the first place. Discovery of an energy-limited signal as illustrated in Fig. 2 is easy in one respect, since the signal is sparse in both time and frequency (making it relatively easy to distinguish from natural phenomena as well as artificial signals of terrestrial origin) and individual energy bundles are energetic (making them easier to detect reliably). Discovery is hard in another respect: that same sparsity means we must be patient and conduct multiple observations in any particular range of frequencies to confidently rule out the presence of a signal with this character.

Criticisms of this approach

What are some possible shortcomings or criticisms of this approach? None of us have yet studied possible issues in the design of a high-power radio transmitter generating a signal of this type. Some say that bandwidth does need to be conserved, for some reason such as interference with terrestrial services. Others say that we should expect a “beacon”, which is a signal designed to attract attention, but simplified because it carries no information. Still others say that an extraterrestrial signal might be deliberately disguised to look identical to typical terrestrial signals (and hence emphasize narrow bandwidth rather than low energy) so that it might be discovered accidentally.

What do you think? In your comments to this post, the Centauri Dreams community can be helpful in critiquing and second guessing my assumptions and conclusions. If you want to delve further into this, I have posted a report at http://arxiv.org/abs/1305.4684 that includes references to the foundational work.


Moving Stars: The Shkadov Thruster

Although I didn’t write about the so-called ‘Shkadov thruster’ yesterday, it has been on my mind as one mega-engineering project that an advanced civilization might attempt. The most recent post was all about moving entire stars to travel the galaxy, with reference to Gregory Benford and Larry Niven’s Bowl of Heaven (Tor, 2012), where humans encounter an object that extends and modifies Shkadov’s ideas in mind-boggling ways. I also turned to a recent Keith Cooper article on Fritz Zwicky, who speculated that inducing asymmetrical flares on the Sun could set the whole Solar System into new motion, placing our star under our directional control.

The physicist Leonid Shkadov described what we now call a Shkadov thruster in a 1987 paper called “Possibility of Controlling Solar System Motion in the Galaxy” (reference at the end). Imagine an enormous mirror constructed in space so as to reflect a fraction of the star’s radiation back toward it. You wind up with an asymmetrical force that exerts a thrust upon the star, one that Shkadov believed could move the star (with accompanying planets) in case of danger, such as a close approach by another star. Shkadov thrusters fall into the category of ‘stellar engines,’ devices that extract significant resources from the star in order to generate their effect.


Image: A Shkadov thruster as conceived by the artist Steve Bowers.

There are various forms of stellar engines that I’ll be writing about in future posts. But to learn more about the ideas of Leonid Shkadov, I turned to a recent paper by the always interesting Duncan Forgan (University of Edinburgh). Forgan points out that Shkadov thrusters are not in the same class as Dyson spheres, for the latter are spherical shells built so that radiation pressure from the star and the gravitational force on the sphere remain balanced, the purpose being to collect solar energy, with the additional benefit of providing vast amounts of living space.

Where Shkadov thrusters do remind us of Dyson spheres, as Forgan notes, is in their need for huge amounts of construction material. The scale becomes apparent in his description, which is clarified in the diagram below:

A spherical arc mirror (of semi-angle ψ) is placed such that the radiation pressure force generated by the stellar radiation field on its surface is matched by the gravitational force of the star on the mirror. Radiation impinging on the mirror is reflected back towards the star, preventing it from escaping. This force imbalance produces a thrust…

Here I’m skipping some of the math, for which I’ll send you to the preprint. But here is his diagram of the Shkadov thruster:


Figure 1: Diagram of a Class A Stellar Engine, or Shkadov thruster. The star is viewed from the pole – the thruster is a spherical arc mirror (solid line), spanning a sector of total angular extent 2ψ. This produces an imbalance in the radiation pressure force produced by the star, resulting in a net thrust in the direction of the arrow.
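To get a feel for the scale of construction material involved, here is a back-of-the-envelope sketch of my own (not a calculation from Forgan’s paper). The balance he describes — radiation pressure on the mirror matching the star’s gravity — fixes the mirror’s allowed surface density regardless of its distance from the star, and from that we can estimate the mass of a modest arc.

```python
import math

# Radiation pressure on a patch reflecting the light straight back is
# 2 * L / (4*pi*r^2*c) per unit area; gravity per unit area is G*M*sigma/r^2.
# Both scale as 1/r^2, so the balance fixes sigma = L / (2*pi*G*M*c).

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
L_sun = 3.828e26  # solar luminosity, W
M_sun = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m

sigma = L_sun / (2 * math.pi * G * M_sun * c)
print(f"allowed surface density: {sigma * 1e3:.1f} g per square meter")       # ~1.5 g/m^2

psi = math.radians(30)                                 # example semi-angle
cap_area = 2 * math.pi * AU**2 * (1 - math.cos(psi))   # spherical-arc area at 1 AU
print(f"mirror area (psi = 30 deg at 1 AU): {cap_area:.1e} m^2")
print(f"mirror mass at that density: {sigma * cap_area:.1e} kg")              # ~3e19 kg
```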

Forgan goes on to discuss the effects of the thruster upon the star:

In reality, the reflected radiation will alter the thermal equilibrium of the star, raising its temperature and producing the above dependence on semi-angle. Increasing ψ increases the thrust, as expected, with the maximum thrust being generated at ψ = π radians. However, if the thruster is part of a multi-component megastructure that includes concentric Dyson spheres forming a thermal engine, having a large ψ can result in the concentric spheres possessing poorer thermal efficiency.
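And for the thrust itself, here is a simplified momentum budget — again my own sketch rather than the paper’s full expression, which includes the thermal effects Forgan mentions. If the star must still radiate its full luminosity, all of it escaping through the uncovered part of the sky, the asymmetry of the escaping radiation gives a thrust that grows with the semi-angle and peaks as ψ approaches π, consistent with the behavior described above.

```python
import math

L_sun = 3.828e26  # solar luminosity, W
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # solar mass, kg

def shkadov_thrust(psi_rad: float, luminosity: float = L_sun) -> float:
    """Idealized thrust (N): net momentum flux of radiation escaping the
    uncovered sky, F = (L / 2c) * (1 - cos(psi)); no thermal corrections."""
    return luminosity / (2 * c) * (1 - math.cos(psi_rad))

gyr = 1e9 * 3.156e7  # one billion years in seconds
for deg in (30, 90, 150):
    F = shkadov_thrust(math.radians(deg))
    a = F / M_sun                           # acceleration of a Sun-like star
    drift_ly = 0.5 * a * gyr**2 / 9.461e15  # distance covered from rest in 1 Gyr, ly
    print(f"psi = {deg:3d} deg: F ~ {F:.1e} N, a ~ {a:.1e} m/s^2, "
          f"~{drift_ly:,.0f} ly per Gyr")
```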

The sheer size of Dyson spheres, Shkadov thrusters and other stellar engines inevitably makes us think about such constructions in the context of SETI, and whether we might be able to pick up the signature of such an object by looking at exoplanet transits. Richard Carrigan is among those who have conducted searches for Dyson spheres (see Archaeology on an Interstellar Scale), but Forgan thinks a Shkadov thruster should also be detectable. For the light curve produced by an exoplanet during transit would show particular characteristics if a Shkadov thruster were near the star, a signature that could be untangled by follow-up radial velocity measurements.

The chances that we might pick up a transit showing clear signs of extraterrestrial engineering seem remote, but Forgan’s point is that we have numerous exoplanet surveys in progress, ranging from analysis of the Kepler data (with a recent SETI component factored in) to future surveys using the TESS and PLATO instruments, each intended to undergo radial velocity scrutiny as a follow-up to any detections. The GAIA satellite will also provide useful data for possible follow-ups of transit candidates. With all this in the mix, Forgan wants to clarify what a Shkadov thruster would look like if by whatever chance we do find one in our data.

The presence of a Shkadov thruster, he demonstrates, can be flagged by study of the lightcurve of both transiting planet and thruster, with the possibility that both the primary and secondary eclipses can be affected. It would be a tricky catch even so, for transient phenomena like starspots can mask features in the lightcurve, and Forgan thinks that further radial velocity studies, along with interferometric imaging and asteroseismology, would have to come into play to tease out the features of such a thruster. Missions designed to study exoplanet atmospheres — he mentions CHEOPS or EChO — could be used to confirm the thruster’s presence.

A long shot indeed, but it’s good to have this study of those features that would flag a lightcurve as anomalous and indicative of advanced engineering. For while the probabilities of finding a Shkadov thruster are remote, we’ll have a growing number of datasets from various exoplanet missions to draw on. Interstellar archaeology is all about digging into this rich stratum to see whether any observed events fit models that suggest the presence of artificial objects. And today’s exoplanet catalogue only hints at the volumes of information still to come.

The paper is Forgan, “On the Possibility of Detecting Class A Stellar Engines Using Exoplanet Transit Curves,” accepted for publication in the Journal of the British Interplanetary Society (preprint). Leonid Shkadov’s paper on Shkadov thrusters is “Possibility of controlling solar system motion in the galaxy,” 38th Congress of the IAF, October 10-17, 1987, Brighton, UK, paper IAA-87-613. More on stellar engines in coming weeks.


The Star as Starship

Moving entire stars rather than building spaceships would have certain benefits as a way of traveling through the galaxy. After all, it would mean taking your local environment with you on a millennial journey. Some have suggested it might therefore be an observable sign of highly advanced civilizations at work. But how would you move a star in the first place?

In Bowl of Heaven (Tor, 2012), Gregory Benford and Larry Niven conceive of a vast bowl — think of one-half of a Dyson sphere — wrapped around a star whose energies are directed into a propulsive plasma jet that, over aeons, moves the structure forward. Thus this snippet of dialogue, said aboard a starship by the humans who discover the alien artifact:

“…You caught how the jet bulges out near the star.”

More hand waving. “Looks to me like the magnetic fields in it are getting control, slimming it down into a slowly expanding straw…”

“A wok with a neon jet shooting out the back…and living room on the inside, more territory than you could get on the planets of a thousand solar systems. Pinned to it with centrifugal grav…”

“They don’t live on the whole bowl. Just the rim. Most of it is just mirrors. Even so, it’s more than a habitat,” said Cliff. “It’s accelerating. That jet? This whole thing is going somewhere. A ship that is a star. A ship star…”

The Benford/Niven excursion into mega-engineering came to mind over the weekend when I read Keith Cooper’s recent article for the Institute for Interstellar Studies on the ideas of Fritz Zwicky. The wildly creative astrophysicist (1898-1974) once imagined a scheme that would use the Sun as an engine that could propel us — and I do mean all of us — to Alpha Centauri. The notion was to induce ‘hot spots’ in the solar photosphere that would lead to asymmetrical flares, nudging the Sun in a new direction. Zwicky imagined that the recoil of these directed exhaust jets would make an interstellar crossing in about fifty centuries possible, pulling the Earth along with our parent star.

In a lecture in 1948, Zwicky referred to ideas like these as ‘morphological astronomy,’ which he would go on to discuss in detail in his 1969 book Discovery, Invention, Research Through the Morphological Approach, a title that ranges over everything from telescope design to aerodynamics and the concept of justice. But as Cooper notes, a number of questions are left unanswered, including details of the asymmetric thrust mechanism itself, and the always interesting question of deceleration. Just how does the Sun approach Alpha Centauri, and what effects would the move have on solar planets as well as those around the destination stars?

If a highly advanced civilization did have the ability to engineer stellar acceleration, we might spot its efforts through unusually high proper motions of particular stars. So-called ‘hypervelocity stars’ have been observed that appear to be gravitationally unbound to the Milky Way. In fact, Kelly Holley-Bockelmann and Lauren Palladino (Vanderbilt University) have identified 675 stars that were probably ejected from the galactic core, presumably because of gravitational interactions with the supermassive black hole at galactic center. Moving at velocities as high as 900 kilometers per second, stars like these would take 10 million years to travel from the core to the outer edge of the spiral. The stars in question tend to be red giants with high metallicity. Later work at Ohio State has identified a small number of hypervelocity stars with masses closer to that of the Sun.


Image: Hypervelocity stars zoom around the center of the Milky Way, where a supermassive black hole lurks. Credit: ESO/MPE.

Cooper asks whether stars with anomalous proper motion might not be worth investigating in a SETI context. From his essay:

Perhaps some advanced extraterrestrial civilisation out there has that power. Maybe we should look to stars with anomalously large proper motions. Is Barnard’s Star, which has the highest proper motion of any star in the sky at 10.3 arcseconds per year, at its distance of 5.98 light years, simply a refugee ejected from a binary star system or is it being deliberately driven? Astronomers even observed a giant flare on the star in 1998, which reached temperatures as high as 8,000 degrees Celsius, which is 2,500 degrees Celsius hotter than the Sun’s surface, or photosphere. Given that Barnard’s Star is a red dwarf with an average surface temperature typically languishing at 2,860 degrees Celsius, that’s a heck of an increase. Now, I’m not suggesting that extraterrestrials are using Barnard’s Star as a spacecraft, only that should such a feat be possible (and that’s a big if), we might expect it to look something akin to the speeding red dwarf.

The idea of moving entire stars as a means of interstellar travel is intriguing and might fall into our toolbox of ideas on ‘interstellar archaeology,’ the search for unusual artifacts in our astronomical data. After all, moving a star simply ramps up an already existing process. We’re all on a grand tour through the Milky Way as the Sun moves at a brisk 220 kilometers per second in its orbit. An advanced civilization with clearly defined destinations in mind might find random encounters with other stars less interesting than targeted travel.

The paper on hypervelocity stars is Palladino et al., “Identifying High Metallicity M Giants at Intragroup Distances with SDSS,” The Astronomical Journal Vol. 143, No. 6 (May, 2012), p. 128 (abstract / preprint).


Leafing Through Early Interstellar Ideas

Although John Jacob Astor IV did many things in his life — as a businessman, builder, Spanish-American War veteran and financier — his place in history was secured with his death on the Titanic in 1912. His body was one of the three hundred or so later recovered out of the more than 1,500 who died, and he is buried in Trinity Church Cemetery in New York City. Less well known is the fact that Astor was a writer who, in 1894, produced a science fiction novel called A Journey to Other Worlds in which people travel to the outer planets.


I’ve been digging around in this curious novel and discovered an interstellar reference that was entirely new to me. Using a form of repulsive energy called ‘apergy,’ variants of which were much in vogue in the scientific romances of this era (think H. G. Wells’ The First Men in the Moon, which uses gravity-negating ‘cavorite’), Astor’s crew sets out on the Callisto to explore Jupiter and Saturn, where various adventures ensue. Along the way, his travelers see Mars’ moons Deimos and Phobos and begin to speculate:

“Either of those,” said Bearwarden, looking back at the little satellites, “would be a nice yacht for a man to explore space on. He would also, of course, need a sun to warm him, if he wished to go beyond this system, but that would not have to be a large affair–in fact, it might be smaller than the planet, and could revolve about it like a moon.”

Image: John Jacob Astor (1864-1912) in August of 1909. Credit: Wikimedia Commons.

Bearwarden’s companion Cortlandt thinks this over and decides an object as small as Deimos couldn’t maintain an atmosphere. Better, then, to create a small sun and travel along with it in a spacecraft rather than a moon or asteroid:

“It would be better, therefore, to have such a sun as you describe and accompany it in a yacht or private car like this, well stocked with oxygen and provisions. When passing through meteoric swarms or masses of solid matter, collision with which is the most serious risk we run, the car could follow behind its sun instead of revolving around it, and be kept from falling into it by partially reversing the attraction. As the gravitation of so small a sun would be slight, counteracting it for even a considerable time would take but little from the batteries.”

Conspicuously left out of the discussion is how such a small ‘sun’ would be powered, but we can forgive Astor for not anticipating Hans Bethe’s 1939 work on how fusion powers stars. Instead, he has a third character muse about creating a violent collision between two asteroids that would cause the new-made object to become luminous, which pleases an enraptured Bearwarden: “Bravo!” said Bearwarden. “There is no limit to what can be done. The idea of our present trip would have seemed more chimerical to people a hundred years ago than this new scheme appears now.”

Science fiction author and critic Richard Lupoff has speculated that Astor drew ‘apergy’ from Percy Greg’s 1880 novel Across the Zodiac, in which an anonymous narrator uses a kind of anti-gravity to travel to Mars aboard a spectacularly large spacecraft. In any case, Astor’s novel is fun to dip into in places, offering in addition to space travel a look at the world of 2000, a time in which the great project on Earth is to shift the planet’s axis, a job undertaken by the Terrestrial Axis Straightening Company to eliminate the extremes of Earth’s climate.

An Early Look at Suspended Animation

Lupoff’s book Master of Adventure is the source of his thinking on Astor. It’s also a delightful read for anyone interested in the life and works of Edgar Rice Burroughs, who came startlingly to mind after I wrote last Monday’s entry on Carl Sagan and Iosif S. Shklovskii (see Two Ways to the Stars). The duo had speculated in their book Intelligent Life in the Universe that high pressures and controlled temperatures could be the key to freezing humans for long transit times, taking advantage of ice II, which has about the same density as liquid water, as opposed to normal ice, which could disrupt human cells in the freezing and thawing process.


I suddenly recalled an old Burroughs story called “The Resurrection of Jimber Jaw,” which ran in Argosy in 1937 — it was later reprinted by Lupoff himself for Canaveral Press in Tales of Three Planets. Burroughs’ characters, forced to land their plane in Siberia after mechanical trouble, find the frozen body of a caveman who is eventually returned to life. His adventures in America — he becomes a professional wrestler! — lead to an unhappy romance and he winds up re-freezing himself in a meat locker, asking not to be re-awakened. But before then, he gets off a number of comments about modern life that would not sit well with today’s sensibilities.

Various forms of freezing and suspended animation populate early science fiction, going back to 18th Century romances like L. S. Mercier’s Memoirs of the Year Two Thousand Five Hundred and moving forward to H. G. Wells’ When the Sleeper Wakes (1899). Robert Heinlein would use suspended animation to great effect in one of my favorites among his novels, The Door Into Summer (1957), but magazine science fiction is packed with stories using the trope. There is Laurence Manning’s “The Man Who Awoke” (Wonder Stories, 1933) and A. E. van Vogt’s classic “Far Centaurus” (Astounding, 1944), where the protagonists survive a long voyage only to learn that faster-than-light travel has been invented while they were en route.

Recently Adam Crowl looked back to rocket pioneer Robert Goddard’s thoughts on suspended animation. In a short note titled “The Ultimate Migration,” written in 1918, Goddard had seen two ways for humans to reach the stars, the first being the use of atomic energy to accelerate an asteroid that had been hollowed out to serve as a spaceship. But if this didn’t work, tinkering with human cells might do the trick:

…will it be possible to reduce the protoplasm in the human body to the granular state, so that it can withstand the intense cold of interstellar space? It would probably be necessary to dessicate the body, more or less, before this state could be produced. Awakening may have to be done very slowly. It might be necessary to have people evolve, through a number of generations, for this purpose.


Goddard goes on to speculate about an immense journey in which the pilot is ‘animated’ once every 10,000 years, or at even longer intervals for longer journeys, so he can correct the spacecraft’s course. He also addresses the issue of how to build a clock that would survive such long time-frames and control the re-awakening of the pilot. The question of long-lived time-pieces is, of course, one that’s under intense investigation at the Long Now Foundation, which is engaged in the process of building a 10,000 Year Clock in the mountains of west Texas, a project conceived by Danny Hillis in which Amazon’s Jeff Bezos has already invested some $42 million.

Image: Robert Goddard (1882-1945). Credit: Wikimedia Commons.

Even 10,000 years might not seem long-term compared to some of the journeys Goddard contemplated. And what if we aren’t the only intelligence that sets about creating such missions? Adam Crowl closes his post with this intriguing thought:

…the idea of flying between the stars as mummified cryogenic life-forms has a strange allure. To travel the stars so, we would needs become like human-sized ‘tardigrades’ or ‘brine-shrimp’, both of which can undergo reversible cryptobiosis in a mostly dessicated state. Even if we can’t do so (reversibly – it’s not too difficult to make it permanent), might there not be intelligences “Out There” who have done so? What if we found one of their slow sail-ships? Would it seem like a funerary barge, filled with strange freeze-dried corpses?

There’s fodder here for more than a few science fiction stories, the creation of which is an impulse that runs through the entirety of our post-Enlightenment encounter with technology. We plug in the science of our own time in making the attempt to sketch out a future, knowing all too well that we’re only guessing at the discoveries that could change all our assumptions. That’s why I love the rich history of science fiction. It’s a genre that lets us try ideas on for size and work them through to their consequences, all the while reminding us of how much we have to learn.


Ancient Brown Dwarfs Discovered

How many brown dwarfs should we expect in the Milky Way? I can recall estimates that there could be as many brown dwarfs as main sequence stars back when people started speculating about this, but we have to go by the data, and what we have so far tells another tale. The WISE (Wide-field Infrared Survey Explorer) mission can only come up with one brown dwarf for every six stars, leading Davy Kirkpatrick (Caltech), who is part of the WISE science team, to say “Now that we’re finally seeing the solar neighborhood with keener, infrared vision, the little guys aren’t as prevalent as we once thought” (see Brown Dwarfs Sparser than Expected).


Image: Brown dwarfs in relation to the Sun and planets. Credit: NASA/WISE mission.

This is true, at least, in the Sun’s vicinity, where WISE identifies about 200 brown dwarfs, with 33 measured within 26 light years. In the latter volume, some 211 other stars can be found. If we extrapolated this to the entire galaxy, we would get about 33 billion brown dwarfs, assuming a galactic population of 200 billion stars, but extrapolating from our own backyard may be highly unreliable. We just need to keep accumulating data to get an accurate read on these ‘failed stars,’ which are too cool to ignite hydrogen burning in their cores. Extremely low temperature Y-dwarfs are hard to spot, and it’s possible a few more will be teased out in WISE data.
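For what it’s worth, that extrapolation is nothing more than the local ratio scaled up, as this trivial sketch shows (with the caveat, as noted, that our own backyard may not be representative):

```python
brown_dwarfs_local = 33   # measured within 26 light years
stars_local = 211         # other stars in the same volume
galaxy_stars = 200e9      # assumed total stellar population of the Milky Way

ratio = brown_dwarfs_local / stars_local          # roughly one per six stars
print(f"local ratio: one brown dwarf per {1 / ratio:.1f} stars")
print(f"naive galactic estimate: {ratio * galaxy_stars:.1e} brown dwarfs")  # ~3e10
```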

I try to keep up with brown dwarf studies in the probably vain hope that we might find a Y-dwarf close enough to serve as a target for a future probe — by ‘close,’ I mean still undetected and within a few light years, though the hopes for such a find seem remote. Meanwhile, new work out of the University of Hertfordshire has uncovered two of the oldest brown dwarfs yet observed, thought to go back to the early days of the galaxy some ten billion years ago. Old brown dwarfs really up the ante for detection: because they cannot ignite internal fusion, they fade with time. The new brown dwarfs have temperatures of 250-600 degrees Celsius. You can contrast that with the surface temperature of the Sun, about 5500 degrees Celsius.

A team working under David Pinfield found the objects in the Pisces and Hydra constellations using WISE data, with additional measurements from the Magellan, Gemini, VISTA and UKIRT instruments on the ground. WISE 0013+0634 and WISE 0833+0052 are moving at speeds of between 100 and 200 kilometers per second, a good deal faster than normal stars, and a marker for their age, which is also flagged by ancient atmospheres made up almost entirely of hydrogen.

This Royal Astronomical Society news release goes on to speculate about the implications of the new work for brown dwarf proliferation. Almost all local stars — about 97 percent — are members of the galactic ‘thin disk,’ a grouping much younger than the ‘thick disk’ in which stars move up and down in relation to galactic center at higher velocities. With only 3 percent of stars in our local volume being from the ‘thick disk’ or the ‘halo’ containing remnants of the earliest stars, it’s no surprise that these are the first brown dwarfs we’ve found from that population.


Image: A brown dwarf from the thick-disk or halo is shown. Although astronomers observe these objects as they pass near to the solar system, they spend much of their time away from the busiest part of the Galaxy. The Milky Way’s disk can be seen in the background. Credit: John Pinfield.

Given that the thick disk and halo occupy much larger volumes than the thin disk, finding a small number of brown dwarfs in the local thick disk/halo population implies a high number of brown dwarfs in the galaxy. Says Pinfield: “These two brown dwarfs may be the tip of an iceberg and are an intriguing piece of astronomical archaeology.” True enough, but we’re still working the numbers as we try to find these faint objects against a background of infrared sources ranging from distant galaxies to clouds of gas and dust. More data, and more insight, lie ahead.

The paper is Pinfield et al., “A deep WISE search for very late type objects and the discovery of two halo/thick-disk T dwarfs: WISE 0013+0634 and WISE 0833+0052,” in press at Monthly Notices of the Royal Astronomical Society (abstract / preprint).
