Rosetta and the Language of Hope

There are several reasons to keep an eye on Rosetta, the European Space Agency’s mission to comet 67P/Churyumov-Gerasimenko. In 2014, the spacecraft will go into orbit around the comet before deploying a lander to the nucleus. Watching changes as the comet heads toward the Sun should prove interesting indeed, but these short-term effects take place within a provocative longer-term context. For aboard Rosetta is a 2.8-inch diameter disc, housed in a small glass sphere, that contains some 6,000 pages of information. The subject: the languages of planet Earth, many of which will disappear before century’s end.

The synergy here is fascinating. The Rosetta Stone, one of the most impressive objects in the British Museum when you realize what you’re looking at, contains inscriptions in Egyptian hieroglyphics, Demotic and classical Greek. The Greek, readily understood by linguists, helped researchers unravel the meaning of the hieroglyphics, a pioneering task performed nearly two hundred years ago at the hands of Jean-François Champollion and Thomas Young, the latter (in the spirit of his time) both physicist and physician. Ever since, the term ‘Rosetta Stone’ has referred to a key that unlocks a previously undecipherable mystery. Placing a disc from the Rosetta Project, which is building an archive of all documented human languages, aboard a spacecraft headed for a comet dovetails nicely with this theme.

Image: The Rosetta Stone on display at the British Museum in a somewhat less frenetic era. We can hope (but not assume) that future generations need no comparable find to decipher languages that are dying in our own century. Credit: Getty Images.

Now Rosetta (the spacecraft) appears to be doing quite well. A trajectory correction maneuver was carried out on the 14th of August using optical tracking data of asteroid 2867 Steins. The vehicle will fly past the asteroid next month at a distance now estimated to be 800 kilometers, although we’re learning as we go, according to Trevor Morley, who leads the Rosetta Flight Dynamics Orbit team at ESOC, Darmstadt, Germany:

“The closer we get to Steins, the more accurate our knowledge of the asteroid’s position relative to Rosetta will be. Thanks to Rosetta’s cameras, we will obtain increasingly precise measurements that will allow us to adjust again, if necessary, Rosetta’s orbit for an optimal asteroid encounter.”

Thus more trajectory maneuvers are possible in late August and early September. Closest approach to Steins is expected to be on September 5. As to the linguistic aspects of Rosetta, I see that the first version of the complete Rosetta Disk (the one aboard the spacecraft being an earlier iteration) has now been shown to the public. It packs information on over 1,500 human languages, some 14,000 pages of text and images, onto a 3-inch diameter nickel disk, all of it readable at an optical magnification of 750x. More in this San Francisco Chronicle story.

The Rosetta Project involves not only the Long Now Foundation but also the National Science Foundation, Stanford University Libraries and the National Science Digital Library. And as you would expect, the spheres created by Rosetta are designed to last for a long time, at least 2000 years. As languages, particularly tribal dialects, are increasingly lost in the modern era of advanced telecommunications, we also lose the ways of looking at the world expressed in their idioms and metaphors, not to mention the linguistic structures through which a culture makes sense of its world.

Why, in an era when we are landing on comets, do we need to be concerned about securing information for 2000 years and more? The assumptions behind the question are their own answer: We have grown so accustomed to believing that the arrow of progress points ever upward that we forget how tenuous a thing civilization can be. Yet in the larger scheme of things, the 1600 years that have passed since Alaric’s Visigoths entered Rome is but the blink of an eye. What wouldn’t we give for a preserved Library of Alexandria, or for the survival of the abundant medieval European literature that never came down to us in manuscript form?

We must hope that our own civilization never falls victim to a future dark age, but the danger signs, particularly nuclear proliferation, are all around us. The view from here is that civilizations walk a very narrow path in making their way up to Kardashev Type 1 status, a dangerous trek that can witness the death of all our aspirations or see us emerge on the other side with a power and mastery of Solar System resources we can hardly imagine today. Projects like Rosetta — both spacecraft and disk — remind us that we live within a historical context that shows us both outcomes, and contains both warning and hope.

Star Formation Near Black Holes

Simulations showing how giant gas clouds evolve — clouds as large as 100,000 times the mass of the Sun — have demonstrated that stars can form in the neighborhood of supermassive black holes, the kind found at the centers of galaxies. As you would expect, the clouds are disrupted when they move close to the black hole, but only part of each cloud is captured, with the rest contributing to the formation of massive stars that move about the black hole in eccentric orbits. Usefully, the results match what we see near the center of the Milky Way.
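The disruption itself is ordinary tidal physics. Here is a back-of-envelope sketch, with illustrative values rather than the simulation’s actual parameters, of roughly where the black hole’s differential pull overwhelms a cloud’s self-gravity:

```python
# Rough tidal-disruption distance for a self-gravitating cloud:
#   d ~ R_cloud * (2 * M_bh / M_cloud) ** (1/3)
# Assumed, illustrative values; the paper's setup may differ.
M_BH = 4.0e6      # a Sgr A*-like black hole, solar masses
M_CLOUD = 1.0e5   # a giant cloud of the kind simulated, solar masses

d_over_r = (2 * M_BH / M_CLOUD) ** (1 / 3)
print(f"Disruption sets in near {d_over_r:.1f} cloud radii")  # ~4.3
# A cloud plunging within a few of its own radii is shredded: part
# is captured by the hole, the rest forms stars on eccentric orbits.
```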

These are short-lived stars, says Ian Bonnell (St Andrews University), which in itself may be telling us something:

“That the stars currently present around the Galaxy’s supermassive black hole have relatively short lifetimes of ~10 million years, suggests that this process is likely to be repetitive. Such a steady supply of stars into the vicinity of the black hole, and a diet of gas directly accreted by the black hole, may help us understand the origin of supermassive black holes in our and other galaxies in the Universe.”

The disc that shapes the nascent stars, made up of surviving material from the original gas cloud, is itself elliptical in shape. The spiral patterns formed in it by gravitational disruption transfer energy to gas further away from the black hole. Crucial to this work is the modeling of the heating and cooling processes within the gas that allow star formation. The work, performed on the Scottish Universities Physics Alliance (SUPA) SGI Altix supercomputer, took over a year of computer time as it modeled the gas clouds in their movement toward the black hole.

Thus we wind up with highly massive stars on eccentric orbits around the huge black hole thought to exist at galactic center. It would take a science fiction writer par excellence (a veritable Benford!) to describe what that scenario would look like from a safe but still relatively close distance. In fact, I’d be interested in hearing from readers about any writers who have tried to model this deeply mysterious part of the Milky Way.

The paper is Bonnell and Rice, “Star Formation Around Supermassive Black Holes,” Science Vol. 321, No. 5892 (22 August 2008), pp. 1060-1062 (abstract).

Talking about astrophysics in Scotland reminds me that this October 8-10, the Royal Observatory Edinburgh will play host to a workshop titled Habitability in our Galaxy, discussing (among other things) exoplanet habitability, possible venues for life in the Solar System, and prospects for SETI. Should be interesting, especially as our ideas of a galactic habitable zone have been undergoing fruitful growth in recent years.

The Interstellar Conundrum Reconsidered

Just how hard would it be to build a true interstellar craft? I’m not talking about a spacecraft that might, in tens of thousands of years, drift past a star by happenstance, but about a true, dedicated interstellar mission. Those of you who’ve been following my bet with Tibor Pacher on Long Bets (now active, with terms available for scrutiny on the site) know that I think such a mission will happen, but not any time soon. And the proceedings of the Joint Propulsion Conference, held last month in Hartford, go a long way toward explaining why the problem is so difficult.

Wired looked at the conference results in a just-published article, the most interesting part of which contained Robert Frisbee’s speculations about antimatter rocketry. Two things have been clear about antimatter for a long time. The first is that producing sufficient antimatter is a problem in and of itself, one that may keep us working with tiny amounts of the stuff for some time to come. Even so, interesting mission concepts, like Steve Howe’s antimatter-energized sail, have grown out of the studies that have been performed on possible hybrid systems.

The second is that, while the annihilation of matter and antimatter releases vast amounts of energy, controlling the result is even more difficult than producing antimatter in quantity in the first place. Proton/antiproton annihilation is preferable to electron/positron annihilation because the gamma rays produced by the latter can’t be directed to produce thrust, a problem Eugen Sänger wrestled with fifty years ago. The former is workable because its reaction products, charged pions, can be directed and confined electromagnetically. The idea here is to transfer some of that vast energy of annihilation to a propellant working fluid.
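Just how vast is vast? A quick worked number using E = mc², ignoring for simplicity the large fraction of annihilation energy carried off by neutrinos and gamma rays:

```python
C = 299_792_458.0          # speed of light, m/s

m_reacting = 2.0           # 1 kg of antihydrogen plus 1 kg of hydrogen
energy_joules = m_reacting * C**2
megatons = energy_joules / 4.184e15   # 1 megaton TNT = 4.184e15 J
print(f"{energy_joules:.2e} J, about {megatons:.0f} megatons")
# ~1.8e17 J, roughly a 43-megaton blast from a single kilogram of antimatter
```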

Even so, our rocket still has problems. Check our friend Adam Crowl’s recent piece on antimatter for several good links and some musing on why antimatter performs less well than one might expect (an exhaust velocity of 0.33 c may itself be a surprise, but take a look at this Frisbee presentation). Frisbee (NASA, Jet Propulsion Laboratory) has been studying the interstellar conundrum for a long time, with particular attention to antimatter. The design he presented at the conference, a stack of linked components designed to keep radiation away from crew or payload, is summed up by Wired this way:

At the rocket end, a large superconducting magnet would direct the stream of particles created by annihilating hydrogen and antihydrogen. A regular nozzle could not be used, even if made of exotic materials, because it could not withstand exposure to the high-energy particles… A heavy shield would protect the rest of the ship from the radiation produced by the reaction.

A large radiator would be placed next in line to dissipate all the heat produced by the engine, followed by the storage compartments for the hydrogen and antihydrogen. Because antihydrogen would be annihilated if it touched the walls of any vessel, Frisbee’s design stores the two components as ice at one degree above absolute zero.

So far, so good. We then include basic spacecraft systems in front of the propellant tanks, and then our payload. But theory meets a grim reality in the numbers: Frisbee is talking about an 80 million metric ton starship (the Space Shuttle stack weighs in at about 2,000 metric tons at launch), with another 40 million metric tons each of hydrogen and antihydrogen. The payoff is a forty-year mission to Alpha Centauri.
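Those figures hang together if you run them through the ideal rocket equation with the 0.33 c exhaust velocity mentioned above. A minimal sketch, ignoring relativistic corrections, staging, and acceleration time:

```python
import math

C = 299_792_458.0          # speed of light, m/s
V_EX = 0.33 * C            # pion exhaust velocity cited above
M_DRY = 80e6               # starship, metric tons
M_PROP = 40e6 + 40e6       # hydrogen + antihydrogen, metric tons

# Ideal rocket equation: delta-v = v_ex * ln(m0 / m1)
dv = V_EX * math.log((M_DRY + M_PROP) / M_DRY)

# A rendezvous spends the budget twice: accelerate out, decelerate in.
v_cruise = dv / 2
years = 4.37 / (v_cruise / C)     # Alpha Centauri lies ~4.37 light years away

print(f"delta-v ~ {dv / C:.2f} c")                # ~0.23 c
print(f"cruise ~ {v_cruise / C:.2f} c, ~{years:.0f} years")  # ~0.11 c, ~38 yr
```

In other words, even with 80 million tons of perfectly annihilating propellant, the mass ratio of two buys only about a tenth of lightspeed each way, which is why the mission clocks in at forty years.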

At least it’s designed as a rendezvous mission. A forty-year flyby of the Centauri stars would be moving at something better than a tenth of lightspeed once it got up to cruise. Even if exquisitely targeted, such a probe would operate within 1 AU of the target star (let’s say Centauri B) for something less than three hours. Ponder the challenge of collecting imagery and data from Centauri planets in such a scenario.
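The arithmetic behind that sub-three-hour window is simple enough to run yourself. A minimal sketch, assuming the best case of a trajectory passing straight through a sphere of 1 AU radius centered on the star:

```python
AU = 1.495978707e11        # meters
C = 299_792_458.0
v = 0.1 * C                # flyby cruise speed, a tenth of lightspeed

# Best case: the flyby path is a 2 AU chord through the 1 AU sphere.
hours = (2 * AU / v) / 3600
print(f"{hours:.1f} hours inside 1 AU of Centauri B")   # ~2.8 hours
```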

What to do? These results reinforce much that we already knew about the difficulty of coming up with an interstellar mission design that is remotely affordable, and everything comes down to energy. As noted by Wired, interstellar theorist Brice Cassenti (Rensselaer Polytechnic Institute) calculates that sending a probe to the Centauri system would require, at minimum, the current energy output of the entire world, a figure Cassenti is quick to note could easily swell to 100 times that value.

It’s useful to ponder the size of the challenge as we continue to scout for concepts that can overcome these problems. Interstellar studies continues to work along a dual track: 1) Push concepts constructed under the parameters of known physics to their utmost, to see where they might lead. Antimatter rockets, laser sails, pulsed fusion and their ilk all fall under this category. 2) Investigate potential concepts that might extend our knowledge of known physics. Here we turn to studies like those sponsored by, among others, NASA’s now defunct Breakthrough Propulsion Physics project. The Tau Zero Foundation hopes to bring philanthropic support to both approaches.

No one can say whether interstellar missions will ever be feasible. What we can insist on is that studying physics from the standpoint of propulsion science may tell us a great deal about how the universe works, whether or not we ever find ways of extracting propulsive effects from such futuristic means as dark matter or dark energy. And if our hoped-for breakthroughs fail to materialize, the potential of multi-generational missions supported by human crews still exists. They will be almost inconceivably demanding, but nothing in known physics says that a thousand-year mission to Centauri is beyond the reach of human technology within a future we can still recognize.

How big would an interstellar mission be? Let me close by quoting Robert Frisbee himself, from a presentation he gave at the 2003 iteration of the Joint Propulsion Conference:

In the long term, it will represent a Solar System civilization’s defining accomplishment in much the same way we look to the past accomplishments of humanity, like the Pyramids, Stonehenge, the great medieval Cathedrals of Europe, the Great Wall of China and, not so long ago, a space program called Apollo.

NanoSail-D: Duplicate Exists, Needs to Fly

Remember the great scene in Contact, when the fabulously rich S. R. Hadden (John Hurt), who funded the stargate device destroyed by sabotage, says “Why build one when you can build two for twice the price?” He then reveals the existence of a second facility off the coast of Japan, which is what Ellie Arroway uses for her interstellar trip. So is solar sail expert Greg Matloff a ringer for S. R. Hadden? Read on.

Greg’s recent phone call may not have been as dramatic as that scene in Contact, but he was able to tell me that although NanoSail-D did perish in the SpaceX Falcon explosion, there is a second sail. Marshall Space Flight Center built two. So now we’re in the energizing position of having a second chance at a sail deployment in space, and it could be done soon via the next Falcon launch, if SpaceX will cooperate in the enterprise.

And here’s why they should: Launching a payload on the Space Shuttle costs approximately $10,000 per pound. That’s pricey, and the whole premise behind the Falcon is that it can cut launch costs to as little as a tenth of this. Now the NanoSail-D package is a scant ten pounds (it can be carried around in a suitcase!). At full Space Shuttle prices, and factoring in that the sail has to be delivered to Kwajalein for launch, that still works out to something not terribly far over $100,000. Call it $150,000 to be safe.

But SpaceX aims to achieve a tenth of that cost. So let’s be extravagant and build in some margins, and we still arrive at no more than $15,000 to put NanoSail-D into space on the next Falcon.
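For what it’s worth, the whole estimate fits in a few lines, using the round numbers above:

```python
shuttle_cost_per_lb = 10_000       # rough Shuttle figure, USD per pound
payload_lb = 10                    # NanoSail-D fits in a suitcase

shuttle_price = shuttle_cost_per_lb * payload_lb   # $100,000
padded = 150_000                   # margin for delivery to Kwajalein, etc.
falcon_price = padded // 10        # Falcon targets a tenth of Shuttle costs
print(f"${falcon_price:,}")        # $15,000
```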

Will SpaceX be willing to help out the doughty team of solar sail researchers at Marshall and elsewhere, especially in light of the $20 million infusion it has recently received from the Founders Fund? This is a chance for the commercial space business to contribute hugely to our solar sail effort, one that has stalled not because of technology — far from it — but because of funding issues. We need to get a sail into space for deployment tests and NanoSail-D is our best shot. A Shuttle launch may not be in the cards, but a new Falcon is going to be flying soon. Let’s hope SpaceX and NASA can get NanoSail-D’s twin aboard that rocket.

An Icy Wanderer from the Oort Cloud

A symposium called Sloan Digital Sky Survey: Asteroids to Cosmology, held in Chicago this past weekend, is producing interesting news, not the least of which is the discovery of a ‘minor planet’ that is currently inside the orbit of Neptune. 2006 SQ372 is only in the neighborhood briefly, already setting out on a journey that will take it 150 billion miles from the Sun. Its orbit is an ellipse four times longer than it is wide, not dissimilar from that of the dwarf world Sedna, which was discovered in 2003. But SQ372 strays even further out and takes twice as long to complete its orbit. You’ll need to click to enlarge the image below to see the details.

Image (click to enlarge): The orbit of the newly discovered solar system object SQ372 (blue), in comparison to the orbits of Neptune, Pluto, and Sedna (white, green, red). The location of the Sun is marked by the yellow dot at the center. The inset panel shows an expanded view, including the orbits of Uranus, Saturn, and Jupiter inside the orbit of Neptune. Even on this expanded scale, the size of Earth’s orbit would be barely distinguishable from the central dot. Credit: N. Kaib.

Is the new object actually an unusual comet? Andrew Becker (University of Washington), leader of the discovery team, calls it such, adding that 2006 SQ372 simply never gets close enough to the Sun to develop a bright tail. Thirty to sixty miles across, the chunk of rock and ice was discovered, and its existence confirmed, through data originally taken to study supernovae, and the team behind the discovery has since been running computer simulations to make sense of the object’s orbit. Says graduate student Nathan Kaib (also at UW):

“It could have formed, like Pluto, in the belt of icy debris beyond Neptune, then been kicked to large distance by a gravitational encounter with Neptune or Uranus. However, we think it is more probable that SQ372 comes from the inner edge of the Oort Cloud.”

The latter is an intriguing prospect. SQ372 is ten times closer to the Sun than the main body of the Oort Cloud, in which most objects are thought to orbit at distances of several trillion miles. The discovery of Sedna may have pointed to an ‘inner’ Oort region, and SQ372 could fit into that same picture. What’s likely to happen as we tune up next-generation surveys is that we’ll discover more and more objects of this class, giving us a better read on the population and distribution of this reservoir of icy materials.
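A back-of-envelope check with Kepler’s third law shows how an orbit like this translates into an enormous period. A minimal sketch, assuming an aphelion of roughly 150 billion miles and a perihelion just inside Neptune’s orbit:

```python
# Kepler's third law for solar orbits: P(years) = a(AU) ** 1.5
MILES_PER_AU = 92_955_807

aphelion = 150e9 / MILES_PER_AU    # ~1,600 AU, the figure quoted above
perihelion = 24                    # assumed: currently inside Neptune's ~30 AU orbit
a = (aphelion + perihelion) / 2    # semi-major axis of the ellipse

print(f"{a ** 1.5:,.0f} years")    # ~23,000, in line with the ~22,000-year period
```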

Oort Cloud objects may in some cases be interstellar wanderers. We can imagine a process in the period of planetary formation four and a half billion years ago in which rocky materials were ejected from the inner system by the gravitational effects of the giant planets. A passing star could eventually dislodge some of these objects, causing them to move back into the inner system, where they would be visible as comets. Others might be ejected into interstellar regions, perhaps eventually falling into the gravitational grip of another star, cometary starships exchanging materials between distant solar systems.

It’s a pleasing thought, but we have much to learn before we can consider it definitive. As to 2006 SQ372 itself, its orbit ensures that a long time, 22,000 years in fact, will pass before we have a closer look, and by then let’s hope we will have found ways to move among Oort Cloud objects at will, sampling, learning and exploiting their resources as we support a civilization with interstellar reach.