Are ‘Waterworlds’ Planets in Transition?

Ponder how our planet got its water. The current view is that objects beyond the ‘snow line,’ where water ice is available in the protoplanetary disk, were eventually pushed into highly eccentric orbits by their encounters with massive young planets like Jupiter. Eventually some of these water-bearing objects would have impacted the Earth. The same analysis works for exoplanetary systems, but the amount of water delivered to a potentially habitable planet depends, in this scenario, on the presence of giant planets and their orbits.

Dorian Abbot (University of Chicago) and colleagues Nicolas Cowan (Northwestern University) and Fred Ciesla (also at the University of Chicago) note the consequences of this theory of water delivery. One is that because low-mass stars are thought to have low-mass disks, they would have fewer gas giants and would produce less gravitational scattering. In other words, we may find that small planets around M-dwarfs are dry. On the other hand, solar-mass stars and above could easily have habitable planets with amounts of water similar to the Earth’s.

Waterworlds and their Future

‘Waterworlds’ are planets that may have formed outside the snow line and then migrated to a position in the habitable zone. A planet like this could be completely covered in ocean. In any case, we can expect habitable zone planets to have a wide range of water mass fractions, i.e., the fraction of a planet’s mass that is water, and hence a wide range of ocean coverage relative to land. The Abbot paper studies how variable land surfaces could influence planetary habitability, and the authors attack the question using a computer model for weathering and global climate, assuming an Earth-like planet with silicate rocks, a large reservoir of carbon in carbonate rocks, and at least some surface ocean.

Image: A waterworld may be a planet in transition, moving from all ocean to a mixture of land and sea. Credit: ESA – AOES Medialab.

Interestingly, the researchers found that for partially ocean-covered planets like the Earth, the weathering behavior does not depend on the particular fraction of land coverage, as long as the land fraction is greater than about 0.01:

We will find that the weathering behavior is fairly insensitive to land fraction when there is partial ocean coverage. For example, we will find that weathering feedbacks function similarly, yielding a habitable zone of similar width, if a planet has a land fraction of 0.3 (like modern Earth) or 0.01 (equivalent to the combined size of Greenland and Mexico). In contrast, we will find that the weathering behavior of a waterworld is drastically different from a planet with partial ocean coverage.

What that means is that planets with some continent and some ocean should have habitable zones of about the same width, no matter what the ratio of land to water. The conclusion is based upon the fact that silicate weathering feedback helped to maintain habitable conditions through Earth’s own history. The weathering of surface silicate rocks is the main removal process for carbon dioxide from the atmosphere, and it is temperature dependent, thus helping to buffer climate changes and widen the habitable zone around a star.

Seafloor weathering also occurs, but the authors point out that it is thought to be weaker than continental weathering and to depend on ocean chemistry and seawater circulation more than surface climate. That would mean carbon dioxide would be removed less efficiently from the atmosphere of a waterworld, which would produce higher CO2 levels and a warmer climate. A planet like this would be less able to buffer any changes in received solar radiation (insolation) and would thus have a smaller habitable zone.
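
It may help to see how such a feedback is usually written down. The sketch below shows the kind of parameterization climate-weathering models employ: a continental weathering rate that rises with both temperature and CO2, set against a seafloor weathering rate that responds only to CO2. The functional forms and constants are illustrative assumptions of mine, not the model actually used in the Abbot paper.

```python
# Minimal sketch of temperature-dependent (continental) vs. temperature-blind
# (seafloor) weathering. Constants are illustrative assumptions only.
import math

T0 = 288.0      # reference surface temperature (K)
P0 = 3.3e-4     # reference CO2 partial pressure (bar)
BETA = 0.5      # assumed CO2 exponent, continental weathering
T_E = 13.7      # assumed e-folding temperature for weathering (K)
ALPHA = 0.25    # assumed CO2 exponent, seafloor weathering

def continental_weathering(T, pCO2):
    """Relative continental silicate weathering rate: speeds up with both
    warming and higher CO2, which is what stabilizes the climate."""
    return (pCO2 / P0) ** BETA * math.exp((T - T0) / T_E)

def seafloor_weathering(pCO2):
    """Relative seafloor weathering rate: assumed to track ocean chemistry
    (via CO2) but not surface temperature, so it buffers climate poorly."""
    return (pCO2 / P0) ** ALPHA

print(continental_weathering(298.0, P0))  # a 10 K warming roughly doubles it (~2.1)
print(seafloor_weathering(P0))            # the seafloor term does not respond (1.0)
```

The temperature term does the buffering: warm the planet and continental weathering accelerates, drawing down CO2 and pulling the temperature back. A waterworld relying on the temperature-blind seafloor term has no such thermostat.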

Planetary Evolution at Work

All this is leading up to an absorbing conclusion about waterworlds. Assuming that seafloor weathering does not depend on surface temperature, planets that are completely covered by water can have no climate-weathering feedback. Hence the conclusion that a waterworld has a smaller habitable zone than a planet with even a few small continents. But a waterworld may be, depending on its position in its solar system, a planet in a state of transition. Abbot and company posit a mechanism that would put a waterworld through a ‘moist greenhouse’ stage that would turn it into a planet with only partial ocean coverage, much like the Earth. Here what would otherwise be complete loss of water is stopped by the exposure of even a small amount of land:

We find… that weathering could operate quickly enough that a waterworld could “self-arrest” while undergoing a moist greenhouse and the planet would be left with partial ocean coverage and a clement climate. If this result holds up to more detailed kinetic weathering modeling, it would be profound, because it implies that waterworlds that form in the habitable zone have a pathway to evolve into a planet with partial ocean coverage that is more resistant to changes in stellar luminosity.

A waterworld thus becomes an Earth-like planet after going through a ‘moist greenhouse’ phase, which occurs when a planet gets hot enough that large amounts of water vapor are broken apart by photolysis high in the atmosphere and the hydrogen escapes into space. As water is lost and land begins to be exposed, the moist greenhouse phase can then be halted by drawing down carbon dioxide through silicate weathering. This is the process the authors call ‘waterworld self-arrest.’

Although we have not performed a full analysis of the kinetic (non-equilibrium) effects, the order-of-magnitude analysis we have done indicates that a habitable zone waterworld could stop a moist greenhouse through weathering and become a habitable partially ocean-covered planet. We note that this process would not occur if the initial water complement of the planet is so large that continent is not exposed even after billions of years in the moist greenhouse state…

It’s also true that waterworlds at the outer edge of the habitable zone would not be in a moist greenhouse state in the first place. We’re likely to find waterworlds, then, but some of them may be in the process of transformation, becoming planets of continents and oceans. And any Earth-sized planet discovered near the habitable zone would be a good candidate to have a wide habitable zone and a stable climate if it has at least a small area of exposed land. That makes discovering the land fraction of any Earth-class planet we observe through future planet-finder missions a priority. The authors believe that missions of the Terrestrial Planet Finder class should be able to determine the land fraction by measuring reflected visible light.

The paper is Abbot et al., “Indication of insensitivity of planetary weathering behavior and habitable zone to surface land fraction,” accepted at The Astrophysical Journal (preprint). Thanks to Andrew Tribick for the pointer.


Barnard’s Star: No Sign of Planets

Barnard’s Star has always gotten its share of attention, and deservedly so. It was in 1916 that the American astronomer Edward Emerson Barnard measured the motion of this M-class dwarf in Ophiuchus and found it to have the largest proper motion of any known star. That meant the star soon to be named for him was close to us, and unless we’re surprised by a hitherto unobserved brown dwarf, Barnard’s Star remains the closest star to our Sun after the Alpha Centauri triple system. Stick around long enough and Barnard’s Star will close to within 3.75 light years, but even if you make it to 10,000 AD or so, the star will still be too faint to be a naked eye object.

Image: Barnard’s Star, with proper motion demonstrated, part of an ongoing project to track the star. This image shows motion between 2004 and 2008. Credit: Paul Mortfield & Stefano Cancelli/The Backyard Astronomer.

Peter van de Kamp, working at Swarthmore College, had been looking for wobbles in the position of Barnard’s Star going all the way back to 1938, and for a time his results indicated at least one Jupiter-class planet there and possibly two. But other astronomers failed to find evidence for planets, and later work raised the likelihood that the changes in the star fields van de Kamp was looking at were caused by issues related to the refractor he was using. Now we have a new paper from Jieun Choi (UC Berkeley), whose team went to work on Doppler monitoring of Barnard’s Star and concluded that van de Kamp’s findings were erroneous.

The paper, however, gives a generous nod to van de Kamp’s work:

The two planets claimed by Peter van de Kamp are extremely unlikely by these 25 years of precise RVs. We frankly pursued this quarter-century program of precise RVs for Barnard’s Star with the goal of examining anew the existence of these historic planets. Indeed, Peter van de Kamp remains one of the most respected astrometrists of all time for his observational care, persistence, and ingenuity. But there can be little doubt now that van de Kamp’s two putative planets do not exist.

The one-planet model fares no better when tested against radial velocity data from both Lick and Keck:

Even van de Kamp’s model of a single-planet having 1.6 MJup orbiting at 4.4 AU (van de Kamp 1963) can be securely ruled out. The RVs from the Lick and Keck Observatories that impose limits on the stellar reflex velocity of only a few meters per second simply leave no possibility of Jupiter-mass planets within 5 AU, save for unlikely face-on orbits.
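
To get a feel for the numbers, the standard radial velocity semi-amplitude relation shows what van de Kamp’s proposed planet would do to its star. This is a minimal sketch, assuming a circular, edge-on orbit and a mass of roughly 0.16 solar masses for Barnard’s Star; the assumptions are mine, not figures from the Choi et al. paper.

```python
# Rough reflex velocity for van de Kamp's single-planet model, using the
# standard RV semi-amplitude relation for a circular orbit.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_JUP = 1.898e27     # kg
AU = 1.496e11        # m

def rv_semi_amplitude(m_planet_mjup, a_au, m_star_msun, sin_i=1.0):
    """Stellar reflex velocity (m/s) induced by a planet on a circular orbit."""
    m_p = m_planet_mjup * M_JUP
    m_s = m_star_msun * M_SUN
    a = a_au * AU
    period = 2 * math.pi * math.sqrt(a**3 / (G * (m_s + m_p)))
    return (2 * math.pi * G / period) ** (1 / 3) * m_p * sin_i / (m_s + m_p) ** (2 / 3)

# 1.6 Jupiter masses at 4.4 AU around an assumed 0.16 solar-mass star
print(rv_semi_amplitude(1.6, 4.4, 0.16))  # roughly 50 m/s
```

A signal of tens of meters per second would be unmissable in data precise to a few meters per second, which is why the quoted limits are so decisive.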

The paper goes on to drill down to planets of roughly Earth mass, finding no evidence for such worlds. The result is interesting on a number of levels. In data from the Kepler mission we are regularly finding planets with radii two to four times that of Earth around M-dwarfs, in the region close to the star where this new study of Barnard’s Star is most sensitive to Earth-mass planets. The transit data back up radial velocity measurements from the HARPS spectrograph, which have shown numerous planets a few times more massive than Earth around M-dwarfs. A 2011 study put the occurrence of super-Earths in the habitable zones of M-dwarfs at around 41 percent, leading to what Jieun Choi and colleagues describe as ‘a lovely moment in science.’

After all, our two major planet-hunting techniques — Doppler measurements to detect planets by their effect on the host star, and brightness measurements for transit detection — both indicate that small planets are apparently common around M-dwarfs. By contrast:

…the non-detection of planets above a few Earth masses around Barnard’s Star remains remarkable as the detection limits here are as tight or tighter than was possible for the Kepler and HARPS surveys. The lack of planetary companions around Barnard’s Star is interesting because of its low metallicity. This non-detection of nearly Earth-mass planets around Barnard’s Star is surely unfortunate, as its distance of only 1.8 parsecs would render any Earth-size planets valuable targets for imaging and spectroscopy, as well as compelling destinations for robotic probes by the end of the century.

Let’s not forget that the early work of Peter van de Kamp had energized speculation about missions to Barnard’s Star. The British Interplanetary Society’s Project Daedalus chose it as a destination largely because of its supposed planetary system even though Alpha Centauri was considerably closer (4.3 light years vs. 6). Robert Forward toyed with Barnard’s Star in his fiction, writing Flight of the Dragonfly, later expanded as Rocheworld, to depict both the planetary system there and the technology needed to reach it.

Now we have 248 precise Doppler measurements of Barnard’s Star from the Lick and Keck Observatories saying that the habitable zone of this conveniently nearby star appears to be empty of planets of roughly Earth mass or larger. Let’s hope the Alpha Centauri stars yield a better result. The paper is Choi et al., “Precise Doppler Monitoring of Barnard’s Star,” available online.


100 Year Starship Public Symposium

“The future never just happened, it was created.” The quote is from Will and Ariel Durant, the husband and wife team who collaborated on an eleven-volume history of civilization that always used to be included in Book of the Month deals, which is how many of us got our copies. I’m glad to see the Durants’ quotation brought into play by the 100 Year Starship organization in the service of energizing space exploration. It’s a call to create, to work, to push our ideas.

100 Year Starship (100YSS) puts the Durants’ thinking into practice at the second 100 Year Starship Public Symposium, September 13-16 at the Hyatt Regency in Houston. The event promises academic presentations, science fiction panels, workshops, classes and networking possibilities for those in the aerospace community and the public at large. My hope is that the gathering will kindle some of the same enthusiasm we saw last October in Orlando, when the grant from DARPA that created the 100 Year Starship had yet to be assigned and the halls of the Orlando Hilton filled up with starship aficionados. For more on the event, check the 100YSS symposium page.

Image: The track chair panel from last year’s symposium in Orlando. Credit: 100YSS.

DARPA (the Defense Advanced Research Projects Agency) provided the seed money, but 100 Year Starship is now in the hands of Mae Jemison, whose Dorothy Jemison Foundation for Excellence will develop the idea with partners Icarus Interstellar (the people behind Project Icarus, the re-envisioning of the Project Daedalus starship) and the Foundation for Enterprise Development. And as we’ve discussed before, the 100 Year Starship refers not to a century-long star mission but to an organization that can survive for a century to nurture the starship idea, the thinking being that a century from now we will have made major progress on the interstellar front. It’s a gutsy and optimistic time frame and I hope it’s proven right.

The upcoming symposium will take place the week of the 50th anniversary of President Kennedy’s famous speech at Rice University exhorting Americans to land on the Moon, so it’s fitting that former President Bill Clinton has agreed to serve as honorary chair for the event. “This important effort helps advance the knowledge and technologies required to explore space,” said Clinton, “all while generating the necessary tools that enhance our quality of life on Earth.” 100YSS is collaborating with Rice University to integrate activities, which will include a salute to fifty years of human space flight at the Johnson Space Center.

The goal is for a multidisciplinary gathering, as a 100YSS news release makes clear:

100 Year Starship will bring in experts from myriad fields to help achieve its goal — utilizing not only scientists, engineers, doctors, technologists, researchers, sociologists and computer experts, but also architects, writers, artists, entertainers and leaders in government, business, economics, ethics and public policy. 100YSS will also collaborate with existing space exploration and advocacy efforts from both private enterprise and the government. In addition, 100YSS will establish a scientific research institute, The Way, whose major emphasis will be speculative, long-term science and technology.

The 2012 symposium is titled ‘Transition to Transformation…The Journey Begins.’ According to the organization, the goals for the gathering include:

  • Identifying research directions and priorities
  • Understanding methods to assess, transform and deploy space-related technologies to improve daily life
  • Fostering ways to identify and integrate partnerships and partnering opportunities, social structures, cultural awareness and global momentum essential to the 100 year challenge

I see that three track chairs have already been announced. David Alexander (Rice University) will be in charge of a special session on Future Visions, while Eric Davis (Institute for Advanced Studies – Austin) is to be track chair for Time-Distance Solutions. Amy Millman (Springboard Enterprises) chairs a special session called Interstellar Aspiration – Commercial Perspiration. More news on the other track chairs and session topics and papers as it becomes available.

The opportunity before us is to keep the Durants’ quotation in mind: “The future never just happened, it was created.” It’s true on the level of civilizations and on the level of individuals. I’m hoping you can create the opportunity to make it to Houston to see how 100 Year Starship is evolving and to join the scientists, engineers, public policy experts, entertainers and the rest who will be focusing on the issues of interstellar flight. These go well beyond propulsion to include life support, robotics, economics, intelligent systems, communications and more. If it’s anything like last year’s event in Orlando, this second symposium should help move interstellar studies forward.


Starships: ‘Skylark’ vs. the Long Haul

Centauri Dreams readers will remember Adam Frank’s recent op-ed Alone in the Void in the New York Times arguing that given the difficulty involved in traveling to the stars, humans had better get used to living on and improving this planet. ‘We will have no other choice,’ wrote Frank. ‘There will be nowhere else to go for a very long time.’ I responded to Frank’s essay in Defending the Interstellar Vision, to which Frank replied on his NPR blog.

Dr. Frank is an astrophysicist at the University of Rochester and author of the highly regarded About Time: Cosmology and Culture at the Twilight of the Big Bang (Free Press, 2011), a study of our changing conception of time that is now nearing the top of my reading stack. In his short NPR post, he makes a compelling point:

Even if we could get a starship up to 10% of light speed (which would be an epoch-making achievement) then the round trip to the nearest known star with a planet would still take 300 years (it’s Gliese 876d for all you exoplanet fans). It’s hard to imagine a culture driving significant changes in a significant fraction of humanity based on three-century long shipping delays! As much as I support moving forward in interstellar research, I can’t escape the conclusion that the theater of our future — for at least a few thousand years — will be here in the solar system. Not forever, perhaps, but millennia at least. And that is a long, long time.

On balance, what we are really disagreeing about is time, my own view being that interstellar flight may be a matter of several centuries, while Frank takes a longer view. In any case, it’s interesting to speculate on what a society would look like if that 10 percent of lightspeed turned out to be attainable but remained more or less a maximum for space travel. We’ll doubtless find closer worlds than those around Gl 876 (even now the evidence for at least one planet around Epsilon Eridani seems strong, and we’ll see about Alpha Centauri). But a nearby world around Alpha Centauri B is still forty-plus years away at 10 percent of c.
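
The arithmetic behind those travel times is as simple as it is sobering. Here is a minimal sketch using round-number distances I am assuming (roughly 15 light years to Gliese 876, 4.4 to Alpha Centauri) and ignoring acceleration, deceleration and relativistic effects.

```python
# Constant-speed travel times at a fraction of lightspeed; the distances
# below are approximate round numbers assumed for illustration.

def travel_years(distance_ly, fraction_of_c, round_trip=False):
    """Travel time in years at a constant fraction of c."""
    trips = 2 if round_trip else 1
    return trips * distance_ly / fraction_of_c

print(travel_years(15.0, 0.10, round_trip=True))  # ~300 years, Gliese 876 and back
print(travel_years(4.4, 0.10))                    # ~44 years, one way to Alpha Centauri
```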

Making Starflight Too Easy

From a cultural perspective, the discussion of interstellar flight has suffered from extremes. I think about something Geoff Landis once told me while he, Marc Millis and I were having lunch at an Indian restaurant near Glenn Research Center in Cleveland. We had come over from GRC after I interviewed the two there and we spent an enjoyable meal talking lightsails and Bussard ramjets and science fiction. A skilled science fiction writer himself, Landis felt that the genre hadn’t always been kind to the serious study of interstellar topics. I quoted him on this in my Centauri Dreams book:

“Science fiction has made work on interstellar flight harder to sell because in the stories, it’s always so easy… Somebody comes up with a breakthrough and you can make interstellar ships that are just like passenger liners. In a way, that spoiled people, because they don’t understand how much work is going to be involved in traveling to the stars. It’s going to be hard. And it’s going to take a long time.”

In Astounding Wonder (Univ. of Pennsylvania Press, 2012), John Cheng’s superb study of science fiction in the era between the two world wars, the author introduces Hugo Gernsback’s strategy of mingling science fact with fiction as a way into wonders that went beyond conventional physics. A case in point was Edward E. Smith’s ‘Skylark of Space’ series, published in Gernsback’s Amazing Stories in three installments in 1928. The tale, soon followed by popular sequels like ‘Skylark Three’ and ‘The Skylark of Valeron,’ went well beyond the science fiction of the period in its scale, invoking fast interstellar travel and a coming galactic civilization. Star travel mingled with romance and adventure to form ‘space opera.’

Image: The August, 1930 cover of Amazing Stories, containing the opening installment of Edward E. Smith’s ‘Skylark Three.’

Here we can see Landis’ point at work, just as we can see it on the bridge of the Enterprise in the Star Trek franchise. Gernsback wanted to keep science in the forefront but he accepted a vehicle — the starship — that defied all understanding of physics at the time. To do this, Skylark author Smith had to come up with an explanation for how his ships flew, and Gernsback’s readers critiqued it even though the device in question was pure fantasy.

Cheng explains:

…Smith provided a detailed discussion in his original story about the ‘intra-atomic energy’ within copper that powered the Skylark. Nonetheless attempts at more contemporary or realistic extrapolations of science were not distinctive features of his narratives. Their operatic sensibilities came from their adventure, the immense scale of their universe, and their moral and ethical consideration, drawing on their scope and perspective, of universal value. In the broader context of achieving universal civilization throughout the galaxy, full knowledge of the details of Einstein’s special theory of relativity might have seemed relatively unimportant. Readers, however, criticized Smith’s stories for their improper science or lack of scientific consideration in purely imagined devices, however small, but not for their grandiose social extrapolations.

The Starship as Plot Device

Even if Smith’s science didn’t stand up, readers like future science fiction critic P. Schuyler Miller (who explained the Lorentz-Fitzgerald equations in Amazing’s letter column as a way of critiquing Smith) agreed that the tale was a thundering good read. In the same way, James T. Kirk and his descendants gave television viewers a feel for interstellar flight at warp speed, even though many of those enjoying the ride believed that such journeys could be nothing more than plot devices. It was only in the mid-20th century, and really not until the 1960s, that a case began to be made that actual interstellar flight might be something more than a fantasy.

Of course, the kind of travel the early interstellar pioneers were talking about was nothing like Smith’s or Gene Roddenberry’s. I think Landis’ point stands up well — the reflex is to dismiss interstellar travel because our cultural representations of it have made it look absurdly easy. We’ve now learned that it is possible in principle to send a payload to another star within a human lifetime, but we also see that such journeys would take huge amounts of power and decades of time. Adam Frank is surely right that a society making this kind of journey wouldn’t be working with a cohesive set of colonies with convenient re-supply, but with outposts of humans that would be completely self-sustaining, a new and deeply isolated branch of the human family.

It’s an open question whether our species will choose to make journeys at 10 percent of c, or whether we’ll find ways to ramp up the speed to shorten travel times. Another open question is whether, if we really do find that a low percentage of lightspeed is the best we can attain, humans rather than artificial intelligence — ‘artilects’ — would be the likely crew of such vessels. One thing that is happening in the public perception of interstellar flight, though, is that the gradually more visible study of these topics is opening people’s eyes to the possibilities, and the difficulties, of interstellar journeys. A science fictional movie treatment of just how challenging a journey at 10 percent of lightspeed would be is a project worth exploring.


Into the Uncanny Valley

After our recent exchange of ideas on SETI, Michael Chorost went out and read the Strugatsky brothers’ novel Roadside Picnic, a book I had cited as an example of contact with extraterrestrials that turns out to be enigmatic and far beyond human understanding. I’ve enjoyed the back and forth with Michael between Centauri Dreams and his World Wide Mind blog because I learn something new from him each time. In his latest post, Michael explains why incomprehensible technology isn’t really his thing.

A Sense of the Weird

Why? Michael grants the possibility that extraterrestrial intelligence may be far beyond our understanding. But in terms of science fiction and speculation in general, he favors what he calls ‘the uncanny valley,’ the sense of weirdness we get from a technology that is halfway between the incomprehensible and the known. A case in point is Piers Anthony’s novel Macroscope, in which an alien message overwhelms the minds of those who can understand it (people with IQs in the 150 range), causing some to go into a coma, some to die. Those of average intellect, unable to understand the message, remain unharmed by it.

Chorost explains:

The idea of a mind-destroying concept falls into the uncanny valley. It’s analogous to something we have, but it’s qualitatively, ungraspably better. It echoes Godel’s Theorem, which proves that it, the theorem itself, is unprovable. (I’m oversimplifying here.) A theorem that proves its own unprovability is a fascinating, mindbending thing. It upended mathematics when Godel published it in 1931. I have never understood it myself in a whole and complete moment of insight, and there’s a reason for that; it is fundamentally paradoxical. I can understand the pieces one at a time, but not the pieces put together. It gives me the feeling that if I ever did fully grasp all of it, my mind would be both much smarter, and broken. (Godel in fact went insane toward the end of his life.) The point is, we already know of ideas that probably exceed the mental capacity of most human beings. Macroscope invites us to consider the possibility that even higher-octane ideas would break our minds.

All of which brings to my mind Robert W. Chambers’ 1895 book The King in Yellow. Chambers was an artist and writer of considerable power. Indeed, Lovecraft biographer S. T. Joshi described The King in Yellow as a classic of supernatural literature, a sentiment echoed by science fiction bibliographer E. F. Bleiler. The book is a collection of unusual tales named after a play that becomes a theme in some, but not all, of the stories. With Chambers you have truly entered the uncanny valley, for when characters in his tales read the fictional play called The King in Yellow, they often go mad. In one story the narrator reads the first act, throws the book into his fireplace and then, seeing the opening words of the second act, snatches it back and reads the entire volume, becoming possessed by its bizarre imagery.

When the French Government seized the translated copies which had just arrived in Paris, London, of course, became eager to read it. It is well known how the book spread like an infectious disease, from city to city, from continent to continent, barred out here, confiscated there, denounced by press and pulpit, censured even by the most advanced of literary anarchists. No definite principles had been violated in those wicked pages, no doctrine promulgated, no convictions outraged. It could not be judged by any known standard, yet, although it was acknowledged that the supreme note of art had been struck in The King in Yellow, all felt that human nature could not bear the strain, nor thrive on words in which the essence of purest poison lurked. The very banality and innocence of the first act only allowed the blow to fall afterward with more awful effect.

Image (above): A print by Robert Chambers illustrating his book The King in Yellow. Credit: New York Public Library/Art and Architecture Collection, Miriam and Ira D. Wallach Division of Art, Prints and Photographs.

Edgar Allan Poe’s ‘The Masque of the Red Death,’ with its decadent masquerade and spreading plague, was surely in Chambers’ mind when he wrote this. It’s chilling stuff, made all the more mysterious because the reader is left to fit the more prosaic tales in the volume into the larger themes of contact with a mighty force that can destroy the intellect. Not all the stories can be described as macabre but the feeling of strangeness persists, one that influenced H. P. Lovecraft and numerous later writers from James Blish, who set about writing a text for the mysterious play, to Raymond Chandler, who wrote a story of the same title using a narrator familiar with Chambers’ work.

Snaring the Mind in Language

But back to the extraterrestrial question, and the idea that contact may involve a deeply imperfect understanding of alien ideas that could be so powerful as to overwhelm our minds. What is it that could disrupt a human intellect? Michael Chorost talks about Shannon entropy as a way of working out the complexity of a message. That has me thinking about a Stephen Baxter story called “Turing’s Apples,” in which a signal has been received from the Eagle Nebula. We often think of such a signal being made as simple as possible so the beings behind it can communicate, but would they necessarily be communicating with us? What if we don’t get something simple, like a string of prime numbers, but an intercepted message intended for minds greater than our own?

In Baxter’s story, a SETI team has six years’ worth of data to work with. The signal technique is similar to terrestrial wavelength division multiplexing, with the signal divided into sections each roughly a kilohertz wide. Information theory says it is far more than just noise, but it defeats analysis. One character describes it as ‘More like a garden growing on fast-forward than any human data stream.’ The team has applied Shannon entropy analysis, which looks for relationships between signal elements. The method is straightforward:

You work out conditional probabilities: Given pairs of elements, how likely is it that you’ll see U following Q? Then you go on to higher-order ‘entropy levels,’ in the jargon, starting with triples: How likely is it to find G following I and N?

We can use mathematics, in other words, to work out the complexity of a language even if we can’t understand the language itself. That makes it possible to peg the languages dolphins use at third or fourth-order entropy, whereas human language gets up to about nine. In Baxter’s story, the mind-boggling SETI message weighs in with an entropy level around order thirty.

“It is information, but much more complex than any human language. It might be like English sentences with a fantastically convoluted structure — triple or quadruple negatives, overlapping clauses, tense changes.” He grinned. “Or triple entendres. Or quadruples.”

“They’re smarter than us.”

“Oh, yes. And this is proof, if we needed it, that the message isn’t meant specifically for us…. [T]he Eaglets are a new category of being for us. This isn’t like the Incas meeting the Spaniards, a mere technological gap. They had a basic humanity in common. We may find the gap between us and the Eaglets is forever unbridgeable…”
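
The entropy-order measure Baxter’s characters invoke can be made concrete in a few lines. Here is a minimal sketch of order-n conditional Shannon entropy estimated from n-gram counts; the test string and the generic estimator are my own illustration, not anything taken from the story or from Chorost’s post.

```python
# Order-n conditional Shannon entropy: how surprising each symbol is given
# the (n - 1) symbols before it, estimated from n-gram counts.
import math
from collections import Counter

def conditional_entropy(sequence, order):
    """Entropy in bits per symbol of each symbol given the previous
    `order - 1` symbols."""
    if order < 1 or len(sequence) < order:
        raise ValueError("sequence too short for this order")
    ngrams = Counter(tuple(sequence[i:i + order])
                     for i in range(len(sequence) - order + 1))
    contexts = Counter(ng[:-1] for ng in ngrams.elements())
    total = sum(ngrams.values())
    h = 0.0
    for ng, count in ngrams.items():
        p_joint = count / total             # probability of context + symbol
        p_cond = count / contexts[ng[:-1]]  # probability of symbol given context
        h -= p_joint * math.log2(p_cond)
    return h

text = "the quick brown fox jumps over the lazy dog " * 20
for n in (1, 2, 3):
    print(n, round(conditional_entropy(text, n), 3))
# For ordinary text the per-symbol entropy falls as the order rises, because
# longer contexts make the next symbol more predictable.
```

The fictional Eaglet signal, by contrast, is described as carrying meaningful structure all the way out to order thirty, far beyond anything human language exhibits.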

Chorost mentions Robert Sawyer’s WWW: Wake as containing a good discussion of Shannon entropy, which is why I’ve just acquired a copy. Meanwhile, the idea of a SETI message so layered with meaning that we couldn’t possibly understand it does indeed send the chill down the spine that Chorost talks about, the same chill that Robert Chambers so effectively summons up in The King in Yellow, where language can suggest complexities that entangle the mind until merely human intellect overloads.
