The View from Outside the Galaxy

by Paul Gilster on June 5, 2015

The Russian Federal Space Agency (Roscosmos) has recently released a video (viewable here on YouTube) showing how a number of celestial objects might look if they were substantially closer to Earth than they are. The image of the Andromeda galaxy and its trillion stars projected against an apparent Earthscape is below. Unfortunately, this seems to be an astronomical image inserted into a view that purports to show what we would see in visible light. What we would actually see if we were standing in such a location is much different. After all, astronomical images are teased out of lengthy exposures in carefully chosen wavelengths.


In reality the Andromeda galaxy is gigantic even when viewed from 2.5 million light years, but I doubt the average person has any idea where it is in the sky. Although considerably wider than the Moon as seen from Earth, M31 is visually faint, a fact that reminds us of the importance of photographs and charge-coupled devices (CCDs) in light gathering as we probe the universe. The human eye is a very limited instrument. As it happens, I’m currently reading Alan Hirshfeld’s Starlight Detectives: How Astronomers, Inventors, and Eccentrics Discovered the Modern Universe (2014), which examines the transition between sketches of visual observations and the steady advances in photography in the 19th Century that brought so much more of the galaxy into view.
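That angular size is easy to sanity-check with the small-angle approximation. The numbers here are assumptions for illustration: a visible disk of roughly 130,000 light years (estimates of M31’s diameter vary with how much of the faint outskirts you count) and a mean lunar diameter of about half a degree.

```python
import math

DISK_LY = 130_000       # assumed visible disk diameter of M31, light years
DISTANCE_LY = 2_500_000 # distance to M31, light years
MOON_DEG = 0.52         # mean angular diameter of the Moon, degrees

# Small-angle approximation: theta (radians) ~ diameter / distance
theta_deg = math.degrees(DISK_LY / DISTANCE_LY)
print(f"M31 spans about {theta_deg:.1f} degrees, "
      f"roughly {theta_deg / MOON_DEG:.0f} Moon-widths")
```

The result, about three degrees, is the commonly quoted optical extent: half a dozen full Moons laid end to end, yet too faint for the eye to register more than its core.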

About a year ago I looked at Poul Anderson’s novel World Without Stars, which tells of a starship crew dispatched to a planet far outside the Milky Way, a place where from 200,000 light years, galaxy-rise is a rather muted event (see The Milky Way from a Distance). Anderson depicts a galaxy filling 22 degrees along its major axis, but one that appears ‘ghostly pale across seventy thousand parsecs.’ In the same issue of Analog (June, 1966) in which the novel was originally serialized, John Campbell published Anderson’s letter describing the calculations that went into that description, and explaining why the galaxy from outside would be so dim.

One point came up which may interest you. Though the galaxy would be a huge object in the sky, covering some 20° of arc, it would not be bright. In fact, I make its luminosity, as far as this planet is concerned, somewhere between 1% and 0.1% of the total sky-glow (stars, zodiacal light, and permanent aurora) on a clear moonless Earth night. Sure, there are a lot of stars there — but they’re an awfully long ways off!

But it’s not just the distance. Galaxies, as UC-Santa Cruz astrophysicist Greg Laughlin has explained on his systemic site, are mostly empty space, or as he puts it, “To zeroth, to first, to second approximation, a galaxy is nothing at all.”
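Anderson’s point can be checked on the back of an envelope. Taking an integrated absolute V magnitude of about −20.9 for the Milky Way (an assumed, commonly quoted figure; estimates vary) and applying the distance modulus gives a total apparent brightness comparable to Jupiter, but smeared across some twenty degrees of sky:

```python
import math

M_ABS = -20.9            # assumed integrated absolute V magnitude of the Milky Way
LY_PER_PC = 3.2616       # light years per parsec
d_pc = 200_000 / LY_PER_PC  # Anderson's 200,000 light years, in parsecs

# Distance modulus: m = M + 5 * log10(d / 10 pc)
m_app = M_ABS + 5 * math.log10(d_pc / 10)
print(f"Integrated apparent magnitude from 200,000 ly: {m_app:.1f}")
# Roughly magnitude -2 in total: bright as a point source, but spread
# over ~20 degrees it yields a very low surface brightness.
```

A magnitude −2 object diluted over hundreds of square degrees is exactly the ‘ghostly pale’ apparition Anderson describes.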

Building the Galactic Map

Interesting, then, to think about recent work on mapping the galaxy. The WISE mission (Wide-Field Infrared Survey Explorer) produced the data used in this effort to improve our understanding of the Milky Way’s spiral arm structure. Precisely because we don’t have the kind of overview of the galactic disk referred to above, we’re trying to sense the shape of our galaxy from within a disk obscured by dust and seen from our vantage two-thirds of the way out from the galaxy’s center. The new work supports a four-arm model for the Milky Way.

To derive this result, Denilso Camargo (Federal University of Rio Grande do Sul, Brazil) and team worked with WISE data revealing over 400 embedded star clusters, stellar nurseries of the kind that form in the dust- and gas-packed spiral arms, where most stars originate. Usefully for the purposes of the study, young clusters like these have not had the chance to drift out of the arms, thus providing a powerful tool for visualizing spiral arm structure.

In the paper, Camargo and team investigated 18 embedded clusters, seven of which are newly discovered in the WISE images — this complements a list of 437 new clusters Camargo recently published. The work supports the hypothesis that embedded clusters are predominantly found in the galactic thin disk and along its spiral arms. The Perseus and Scutum-Centaurus arms are the most prominent, while the Sagittarius and Outer arms show fewer stars but appear to have the same amount of gas as the other two arms.


Image: This illustration shows where WISE data revealed clusters of young stars shrouded in dust, called embedded clusters, which are known to reside in spiral arms. The bars represent uncertainties in the data. The nearly 100 clusters shown here were found in the arms called Perseus, Sagittarius-Carina, and Outer — three of the galaxy’s four proposed primary arms. Credit: NASA/JPL-Caltech/Federal University of Rio Grande do Sul.

Our Sun is in a minor arm called Orion-Cygnus, marked on the image. You can see what a mapping challenge it is to work out, from within the disk itself, what kind of spiral arm structure exists. What we wouldn’t give for a perspective like the one above…

Supernovae in the Deep

Backing out to see the galaxy whole would take us far into intergalactic space, where we’re also learning about three supernovae found between galaxies in large galactic clusters. Melissa Graham, a UC Berkeley postdoc, leads a study of these objects. She also turns out to be a science fiction fan, one whose reference to the intergalactic deep isn’t the Anderson story I cited above, but Iain Banks’ novel Against a Dark Background (Orbit, 2009). There, the planet Golter lies a million light years from the nearest star.

We can imagine Anderson’s pale galaxy in an otherwise starless night sky, and Banks’ as well. Any planets around the progenitor stars of the three supernovae in Graham’s study would have been destroyed in the explosions, but before that, their night skies would likewise have been depleted of stars. “It would have been a fairly dark background indeed,” Graham adds, “populated only by the occasional faint and fuzzy blobs of the nearest and brightest cluster galaxies.”

Graham’s work using Hubble Space Telescope imaging confirms the earlier discovery (at the Canada-France-Hawaii Telescope on Mauna Kea) of the three supernovae and shows that they reside in a population of solitary stars in regions where the density of stars is about a million times less than we see from Earth. Gravitational interactions in massive galactic clusters can sometimes fling as much as 15 percent of a single galaxy’s stars out of the main disk, though the stars remain gravitationally bound within the cluster itself. Stars like these are going to be all but invisible unless they explode as Type Ia supernovae. Their explosions thus prove useful as a way to study the broader population of intracluster stars.

And this is intriguing: A fourth exploding star was found by the same observatory, one that may well be inside a globular cluster. If this is the case, we have an unusual event, the first time that a supernova has been found inside a globular cluster (GC). From the paper:

We have shown that the SN Ia in Abell 399 was very likely hosted by a faint red point-like source that has a magnitude and color consistent with both dwarf red sequence galaxies and red GCs. Our statistical analysis of the expected surface densities has shown that a dwarf galaxy is less likely at that location than a GC, due to the presence of a nearby elliptical galaxy. We have demonstrated that the rate enhancements in dwarfs or GCs implied by this new faint host are plausible under current observational constraints, and we do not reject either hypothesis.


Image: One of the four supernovae (top, 2009) may be part of a dwarf galaxy or globular cluster visible on the 2013 HST image (bottom). Credit: Melissa Graham, CFHT and HST.

The supernova paper is Graham et al., “Confirmation of Hostless Type Ia Supernovae Using Hubble Space Telescope Imaging,” accepted at The Astrophysical Journal (preprint). The paper on galactic mapping is Camargo et al., “Tracing the Galactic spiral structure with embedded clusters,” Monthly Notices of the Royal Astronomical Society Vol. 450, Issue 4 (20 May, 2015), pp. 4150-4160 (full text).



Science Fiction: An Updated Solar System

by Paul Gilster on June 4, 2015

Having written yesterday about the constellation of missions now returning data from deep space, I found Geoffrey Landis’ essay “Spaceflight and Science Fiction” timely. The essay is freely available in the inaugural issue of The Journal of Astrosociology, the publication of the Astrosociology Research Institute (downloadable here). And while it covers some familiar ground — Jules Verne’s moon cannon, Frau im Mond, etc. — it also highlights Landis’ insights into the relationship between the space program and the genre that helped inspire it.


My friend Al Jackson has written in various comments here (and in a number of back-channel emails) about Wernher von Braun’s ideas and their relation to science fiction. As Landis notes, von Braun was himself a science fiction reader who credited an 1897 novel called Auf zwei Planeten (Two Planets) by Kurd Lasswitz with inspiring his interest in rocketry. So, by the way, did Walter Hohmann, the German engineer who helped develop the area of orbital dynamics and demonstrated fuel-efficient ways to move between two orbits.
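The maneuver that bears Hohmann’s name connects two circular, coplanar orbits with an ellipse tangent to both, requiring one burn at each tangent point. A minimal sketch of the standard delta-v formulas, using illustrative round-number radii for a low-Earth-orbit to geostationary transfer:

```python
import math

MU_EARTH = 398_600.4418  # Earth's gravitational parameter, km^3/s^2

def hohmann_delta_v(r1_km: float, r2_km: float) -> tuple[float, float]:
    """Delta-v (km/s) for the two burns of a Hohmann transfer
    between circular, coplanar orbits of radii r1 and r2."""
    a = (r1_km + r2_km) / 2  # semi-major axis of the transfer ellipse
    dv1 = math.sqrt(MU_EARTH / r1_km) * (math.sqrt(r2_km / a) - 1)
    dv2 = math.sqrt(MU_EARTH / r2_km) * (1 - math.sqrt(r1_km / a))
    return dv1, dv2

# Example: ~300 km altitude orbit (r = 6678 km) up to geostationary (r = 42164 km)
dv1, dv2 = hohmann_delta_v(6678.0, 42_164.0)
print(f"burn 1: {dv1:.2f} km/s, burn 2: {dv2:.2f} km/s, total: {dv1 + dv2:.2f} km/s")
```

The total, a bit under 4 km/s for this example, is why the Hohmann ellipse remains the workhorse of mission design when travel time matters less than propellant.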

Although there have been numerous editions of the Lasswitz book since its original publication, it would not be until 1971 that an English translation (badly abridged) was published. The story depicts the discovery of a Martian base at the Earth’s north pole, with humans being taken back to Mars for a look at its canals. Lasswitz followed Schiaparelli and Percival Lowell in his fascination with a thriving, fertile Mars and the ancient race that lived there. The science fiction historian Everett F. Bleiler believes Lasswitz was a major influence on Hugo Gernsback and hence on the shape of science fiction in the 1920s and ‘30s.

Image: Wernher von Braun with Walt Disney, with whom he collaborated on a series of three films. Credit: Wikimedia Commons.

But back to von Braun, who lived in a Germany in which Fritz Lang made the 1929 film Frau im Mond (‘Woman in the Moon’) with the help of rocket scientists Hermann Oberth and Willy Ley, who were hired to build a real rocket to launch in sync with the film’s opening. That stunt didn’t happen, but Oberth, Ley and von Braun would work together in the early 1930s as part of the Verein für Raumschiffahrt, a rocket club created by amateurs that would go on to influence the development of the deadly V-2.

In his early years in the United States, von Braun wrote a short science fiction novel in German about a trip to Mars, one that describes intelligent Martians in the context of a carefully designed mission. This is where things get tricky for the bibliographer. The technical appendix for this novel was published as Das Marsprojekt in 1952, appearing as The Mars Project the following year. The novel that had included it was not published until a 2006 edition from a Canadian publisher, who offered it as something of a historical curiosity (available as Project Mars: A Technical Tale, from Collector’s Guide Publishing).

Science fiction, meanwhile, had entered a robust post-war era in which spaceflight seized the public imagination. Landis comments:

The V-2 brought the reality of rockets public in a highly visible way; rockets were no longer comic-strip stuff, but real and highly-visible tools of warfare and, presumably, spaceflight. Following the end of the war, the rockets on science fiction magazine covers now all looked remarkably like the V-2, and science fiction entered a golden age, with spaceflight stories written by a number of classic writers such as Robert Heinlein, Arthur C. Clarke (who was also noted for inventing the concept of a geosynchronous communications satellite), Isaac Asimov, and Andre Norton reaching new audiences.

It was in this same era that Collier’s ran its highly popular series of articles on von Braun’s ideas, with eight issues illustrated by Chesley Bonestell and other artists between 1952 and 1954. Soon von Braun was a household name thanks not only to Collier’s but also Walt Disney’s TV programs, on which he appeared three times. By 1956, von Braun had scaled down his Mars mission and published his later thinking in The Exploration of Mars, written with co-author Willy Ley.


What effect did the space program von Braun did so much to launch have on the science fiction of its day? It’s an interesting question, and one that Landis is ambivalent about, for as we began to probe the planets, we learned that they differed sharply from what writers had imagined:

In some respects the space program was a disappointment to science fiction. Spaceflight has not become as simple and ubiquitous (nor as cheap) as science fiction predicted. The cratered Mars revealed by the Mariner and Viking missions was not nearly as colorful a setting for science fiction as the Mars of Percival Lowell, with its canals and ancient, dying civilization; the furnace of the Venus surface revealed by Russian and American probes was not nearly as picturesque a setting for science fiction as the earlier swampy or even ocean-covered Venus hypothesized by astronomers when all that could be seen were clouds. Even the moon, dry and grey and mostly lacking in resources, was a disappointment.

Image: The April 30, 1954 issue of Collier’s, part of a series that explored von Braun’s ideas.

What grows out of this is a turn in the field’s direction. If the Solar System was the great venue of exploration for the science fiction of the Gernsback and later Campbell eras, by the mid-1960s many of its destinations had been revealed as barren places (think Mariner 4 and its images of a cratered, evidently lifeless Mars). Interstellar destinations, Landis notes, became the new terra incognita, but so did an entirely different kind of exploration into social and psychological realms (Bradbury becomes an interesting bridge between these two worlds). Landis doesn’t say it but I assume he’s thinking that trends like science fiction’s ‘New Wave’ grew directly out of this impulse.

Think back, then, to some of science fiction’s precursors. Johannes Kepler could write in his Somnium (written around 1608, published posthumously in 1634) about space travel by non-technological means in a book that used the Moon as a place where basic ideas of astronomy could be discussed. Edgar Allan Poe would develop his 1835 story “The Unparalleled Adventure of One Hans Pfaall” with a lunar trip using actual technology, in his case a balloon, finding ways to make Earth’s atmosphere extend high enough for a balloon filled with a new kind of gas to get there.

From Voltaire (Micromegas, 1752) to Verne, science fiction shaped spaceflight around technologies available at the time, like Verne’s 274-meter cannon driven by 200 tons of gun cotton. What could be envisioned drove the narrative, but so did the desire for an exotic destination on which humans could walk. It’s interesting that we’re again seeing Solar System destinations as revealed by the space program as settings for modern SF — I think of tales like Gerald Nordley’s “Into the Miranda Rift” as a classic in this vein — but the interstellar impulse has never been stronger as SF continues its quest for alien, habitable worlds.



Mission Data: An Early Summer Harvest

by Paul Gilster on June 3, 2015

What a time for space missions, with data returning from far places and a nail-biter close at hand. On the latter, be advised that the LightSail mission team has decided to divide sail deployment into two operations, one of them starting today as the CubeSat’s solar panels are released and an imaging session verifies the craft is ready for sail deployment. The actual deployment will then follow on Friday, and is currently scheduled for 1647 UTC (1247 EDT).

From Jason Davis:

The first indication the sail sequence has started should come from the spacecraft’s automated telemetry signals, which include a motor revolution count for the boom system. The next few orbits will be used to check LightSail’s health and status, transfer imagery from the cameras to the flight computer, and begin sending it home to Earth. The last contact of the day comes during a Cal Poly ground pass at 4:16 p.m. EDT (20:16 UTC). By then, the team hopes to have at least part of a sail image on the ground. If not, the next series of ground passes begins at 2:45 a.m. EDT Saturday.

On to Pluto

At the outer edge of the system (or, if you prefer, the inner edge of the Kuiper Belt), New Horizons pushes on toward Pluto/Charon. As we wait with great anticipation for the views that lie ahead, a new study published in Nature looks at what we have learned about Pluto’s moons, with interesting findings regarding Nix and Hydra. Evidently the gravitational interactions in this system are complex, with the smaller moons tumbling unpredictably. But if the rotational motions are odd, the Plutonian moons’ orbits are governed by resonance.

“The resonant relationship between Nix, Styx and Hydra makes their orbits more regular and predictable, which prevents them from crashing into one another,” says Douglas Hamilton (University of Maryland). “This is one reason why tiny Pluto is able to have so many moons.”

We also learn that tiny Kerberos, discovered in 2011, is distinctively dark compared with the other moons. We can hope that New Horizons helps to solve the riddle of this variation. The study in Nature is based on a new analysis of Hubble Space Telescope data on the four smaller Plutonian moons. Hamilton notes that because Pluto and Charon comprise a binary system, what we learn about the orbits of moons here may help us understand how planets could behave orbiting a binary star, useful information for the exoplanet hunt.

Meanwhile, the New Horizons team has reported that the first set of hazard-search images of Pluto/Charon show, at least so far, no signs of trouble for the approaching spacecraft.


Image: This image shows the results of the New Horizons team’s first search for potentially hazardous material around Pluto, conducted May 11-12, 2015, from a range of 76 million kilometers. The image combines 48 10-second exposures, taken with the spacecraft’s Long Range Reconnaissance Imager (LORRI), to offer the most sensitive view yet of the Pluto system. Credit: New Horizons / JHU/APL.
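The 48 co-added exposures in that LORRI image illustrate a standard technique: under a simple photon- and read-noise assumption, averaging N frames shrinks the random noise by roughly a factor of the square root of N while the signal stays fixed. A toy demonstration with synthetic data (all values here are arbitrary illustrative units, not LORRI numbers):

```python
import numpy as np

rng = np.random.default_rng(42)
N_FRAMES, SIGNAL, NOISE_SIGMA = 48, 1.0, 5.0  # arbitrary illustrative units

# Each simulated frame: a constant faint signal buried in Gaussian noise
frames = SIGNAL + rng.normal(0.0, NOISE_SIGMA, size=(N_FRAMES, 10_000))

snr_single = SIGNAL / NOISE_SIGMA   # far below 1: invisible in any one frame
stacked = frames.mean(axis=0)       # co-add (average) all frames pixel by pixel
snr_stacked = SIGNAL / stacked.std()  # residual noise shrinks ~ 1/sqrt(N)

print(f"single-frame SNR ~ {snr_single:.2f}, stacked SNR ~ {snr_stacked:.2f}")
```

With 48 frames the improvement is close to sevenfold, which is how a hazard search can pull faint moonlets or rings out of individually unremarkable ten-second exposures.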

Off on a Comet

You would think New Horizons principal investigator Alan Stern would have enough on his plate just now, but Stern is also principal investigator for the Alice instrument at the Southwest Research Institute in Colorado. Alice has just made the news again with its findings aboard the European Space Agency’s Rosetta spacecraft, which show that it is electrons near the surface of comet 67P/Churyumov-Gerasimenko, not photons from the Sun as previously thought, that cause the swift breakup of water and carbon dioxide there.

“The discovery we’re reporting is quite unexpected,” said Stern. “It shows us the value of going to comets to observe them up close, since this discovery simply could not have been made from Earth or Earth orbit with any existing or planned observatory. And, it is fundamentally transforming our knowledge of comets.”

Rosetta has been orbiting within about 160 kilometers of the comet since last August. The Alice spectrograph studies the far-ultraviolet wavelength band to reveal the chemistry of the gases in the cometary coma. Much of the water and carbon dioxide found in the coma comes from eruptions on the surface. Analysis of the Alice data shows that water and carbon dioxide are being broken up about one kilometer from the cometary nucleus by electrons produced by solar radiation.


Image: This composite is a mosaic comprising four individual NAVCAM images taken 31 kilometers from the center of comet 67P/Churyumov-Gerasimenko on Nov. 20, 2014. The image resolution is 3 meters per pixel. Credit: ESA/Rosetta/NAVCAM.

And on to Ceres

The Dawn spacecraft sent the image below back to Earth on May 23, after which it moved toward its second mapping orbit, which it is scheduled to enter today. The spacecraft will spend the rest of June observing Ceres from roughly 2400 kilometers above the surface. What we see below is part of a sequence of images from 5100 kilometers, with resolution of about 480 meters per pixel. This image is part of OpNav9, the final set of Ceres imagery taken by Dawn for navigation purposes. Note the numerous secondary craters now visible, caused by the impact of debris from the larger impact sites.


Image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA.

And Finally, Hyperion

Cassini has been doing yeoman work for years now, but the image below shows its final close approach to Saturn’s moon Hyperion. Given the wide range of moons we’ve found at both Jupiter and Saturn, an oddity like the dark surface of Pluto’s moon Kerberos falls into context. When has a new encounter in deep space failed to offer up a few surprises? Hyperion’s odd, sponge-like appearance fits right in, an indication that the moon has an unusually low density and is porous, so that impactors compress the surface.


Image: Cassini’s view of Hyperion was acquired at a distance of approximately 38,000 kilometers from Hyperion and at a Sun-Hyperion-spacecraft, or phase, angle of 46 degrees. Image scale is 230 meters per pixel.

Cassini does have several more close flybys of Saturnian moons scheduled for 2015, but after that it will depart the planet’s equatorial plane as controllers prepare the craft for its final events. The ‘Grand Finale’ plunge, closing to within 4000 kilometers of Saturn’s cloud tops, is part of this, as are maneuvers close enough to the F ring to do radar backscatter measurements for the first time. Cassini will ultimately burn up in Saturn’s atmosphere, and this seems like a good time to quote The Planetary Society’s Emily Lakdawalla on the matter:

When Cassini finally flies low enough to fall into Saturn’s atmosphere on September 15, 2017, it will be a day not to mourn it, but rather to celebrate its achievements. The day for mourning will come a month or few later, when Juno’s mission likewise comes to an end. On that day, for the first time since the 1970s, Earth will have no active missions exploring any of the giant planets. There won’t even be a mission on the way. The Voyagers and New Horizons will (hopefully) still be active way beyond Neptune, but Jupiter, Saturn, Uranus, and Neptune will only be visible through telescopes on Earth.

All of which makes the constellation of space achievements we’re celebrating this summer both dazzling and a bit poignant. At the very least, an exploration hiatus approaches.



A Kuiper Belt in the Making

by Paul Gilster on June 2, 2015

The Scorpius-Centaurus OB association is a collection of several hundred O- and B-class stars some 470 light years from the Sun. Although the stars are not gravitationally bound, they are roughly the same age — 10 to 20 million years — their formation triggered by a series of supernova explosions in large molecular clouds. Now the Gemini Planet Imager on the Gemini South telescope in Chile has uncovered a young planetary system within the association, one with solid similarities to our own Solar System in its infancy.

In fact, says lead author Thayne Currie (Subaru Telescope), the ring orbiting the star HD 115600 could be a Solar System clone. “It’s kind of like looking at [our] outer solar system when it was a toddler,” the astronomer adds, noting that the ring is about the same distance from its host star as the Kuiper Belt is from the Sun, receiving about the same amount of light from an F-class star that is about fifty percent more massive than our own G-class Sol.


Image: The F-class star HD 115600 showing a bright debris ring viewed nearly edge-on and located just beyond a Pluto-like distance to its star. One or more unseen planets are causing the disk center to be offset from the star’s position (cross). Credit: Thayne Currie/NAOJ.

Distortions in the observed debris disk indicate that it is interacting with at least one thus far unseen planet, creating an eccentricity in the disk itself that is among the largest yet observed. A gas giant of Jupiter or Saturn class could explain the distortions, but there are also other possibilities depending on the models used — the researchers explored ‘gap opening’ models as well as ‘planet stirring’ explanations. From the paper:

Interior to ∼ 30 AU only a superjovian-mass planet can sculpt the ring by gap opening… Planets with masses and semimajor axes comparable to the outer solar system planets could stir the disk to appear as a bright debris ring with an eccentricity of 0.1–0.2 (e.g. a Saturn with e = 0.2). In both cases, Super-Earths just interior to the ring edge could sculpt the ring.

So we have a range of planetary possibilities here amidst a disk whose spectrum shows similarities to the Kuiper Belt, with a composition of ice, silicates and dust, although the researchers note that HD 115600’s disk is, in comparison with other debris disks, more efficient at scattering starlight, implying a higher percentage of ices:

HD 115600’s disk is reflecting light more efficiently than HR 4796A’s disk while having less thermal emission, a result explicable if HD 115600’s disk is dominated by higher albedo species like water-ice. Multi-wavelength photometry/spectroscopy is needed to more decisively assess the composition of HD 115600’s disk.

Even so, it’s clear that HD 115600 is a promising source for further information about the early development of disks like the Kuiper Belt, with clear evidence for the sculpting effects of at least one planet. This work was conducted with the Gemini Planet Imager, a new generation of adaptive optics instruments whose numbers are soon to grow. The HD 115600 system thus becomes a useful reference as we tighten our focus on planet-disk interactions.

The paper is Currie et al., “Direct Imaging and Spectroscopy of a Young Extrasolar Kuiper Belt in the Nearest OB Association,” in press at The Astrophysical Journal Letters (preprint). A Subaru Telescope news release is available.



LightSail Reboots: Sail Deployment Soon

by Paul Gilster on June 1, 2015

It was a worrisome eight days, but LightSail has broken its silence with an evident reboot and return to operations, sending telemetry to ground stations and taking test images. We now have sail deployment possibly as early as Tuesday morning EDT (15:44 UTC), but according to The Planetary Society’s Jason Davis, much will depend on today’s intensive checkout.

Planetary Society CEO Bill Nye issued this statement on the spacecraft’s reawakening:

“Our LightSail spacecraft has rebooted itself, just as our engineers predicted. Everyone is delighted. We were ready for three more weeks of anxiety. In the meantime, the team has a software patch coded and ready to upload. After we are confident in the data packets regarding our orbit, we will make decisions about uploading the patch and deploying our sails — and we’ll make that decision very soon. This has been a rollercoaster for us down here on Earth, all the while our capable little spacecraft has been on orbit going about its business. In the coming two days, we will have more news, and I am hopeful now that it will be very good.”


Image: LightSail-A back in August of 2014 during a testing period at Cal Poly. The craft is a three-unit CubeSat no larger than a loaf of bread, but it packs 32 square meters of sail inside. Four metal booms will, if all goes well, unfold the craft’s four triangular sails. Credit: The Planetary Society.

What we get from Davis is largely positive (see LightSail Team Prepares for Possible Tuesday Sail Deployment). The diminutive craft made twelve passes over the Cal Poly and Georgia Tech ground stations on Sunday, returning 102 data packets. You’ll recall that errant software was suspected in LightSail’s problems, with engineers crafting a software patch that they tried unsuccessfully to upload. The problem is that the satellite is tumbling, able to receive some commands and transmit data, but lacking the kind of stable communications link that would allow the software changes to be made. For that reason, the patch idea is now being abandoned.

Instead, a series of reboots will keep the beacon.csv file reset so that it doesn’t fill up and crash the system. Understandably, the LightSail team wants to begin sail deployment as soon as it is safe to do so. We should have a final decision on a possible Tuesday deployment by Monday night. We’re going to find out what effect the spacecraft’s tumble has on sail deployment the hard way. As Davis noted in an earlier post:

… the rotation rate has increased from -7, -0.1 and -0.3 degrees per second about the X, Y, and Z axes to 10.8, -7.3 and 2.9 degrees per second. The cause of the tumbling uptick is currently unknown, but with the spacecraft’s attitude control system offline [see What Images Will We Get Back from the LightSail Test Mission?], sail deployment is likely to be a wild ride.

The best Tuesday ground pass window begins at 11:44 EDT. To keep up with the latest, follow Jason’s Twitter account @jasonrdavis, and we’ll see just how wild a ride it is. Remember, this mission is not designed to demonstrate controlled solar sailing, but serves as a test of the craft’s attitude control and sail deployment systems before it is pulled back into the Earth’s atmosphere. The tumbling we’re seeing now is obviously a concern because the next LightSail mission, scheduled for next year, is to perform controlled sail flight in Earth orbit. We’re going to need to find the bugs in this spacecraft’s attitude control system long before then.



Transhumanism and Adaptive Radiation

by Paul Gilster on May 29, 2015

Centauri Dreams regular Nick Nielsen here tackles transhumanism, probing its philosophical underpinnings and its practical consequences as civilization spreads outward from the Solar System. In a sense, transhumanism is what humans have always done, the act of transcendence through technology being a continuing theme of our existence. But accelerating technologies demand answers about human freedom in the context of a species that will inevitably bifurcate as it takes to the stars. Think of the ‘Cambrian explosion’ as a model as we consider what is to come. The author’s philosophy often takes him into mathematics (hence a digression on Georg Cantor and set theory), but the prolific Nielsen (Grand Strategy: The View from Oregon and Grand Strategy Annex) always has the long result in mind, a human future that grows and changes with us in a galactic diaspora and beyond.

by J. N. Nielsen

0. Introduction: Synchronic and Diachronic Historiography
1. Planetary Constraints upon Civilization
2. Transcending Human Limitations
3. Transhumanism and Reflection Principles: A Technical Digression
4. The Thin End of the Wedge
5. Transhumanism Is Not One, but Many
6. Transhumanism Implies Transspeciesism
7. Existential Ends Are Not Indifferent to Technological Means
8. The Great Voluntaristic Divergence: Peopling the Future
9. Conclusion: Transcending Planetary Limitations
0. Introduction: Synchronic and Diachronic Historiography


One of the most difficult aspects of thinking about the future is to see it not as Balkanized fragments, but to capture a glimpse of the whole, the big picture, and how the parts of the whole are related to each other. Complex wholes composed of diverse and originally separate elements are not always so elusive. In the experience of the individual, the separate deliverances of our senses, and the stimulation of thousands upon thousands of nerve endings, are synthesized in a single, unified narrative of conscious experience. [1] In so far as history aspires to be a single, unified narrative of the world entire, we would ideally like to see a conception of history as seamless as individual experience. Yet we routinely fail to see the past whole—few have the requisite knowledge to possess anything like a holistic perspective—so it should be no surprise that we fail to see the future whole. Nevertheless, we can make the attempt.

Historians make a distinction between the diachronic and the synchronic that can be helpful here—if we understand what these terms signify. It is often said that the diachronic perspective is through time while the synchronic perspective is across time. I have never found this very enlightening, so I explain it like this: diachrony is succession in time and synchrony is interaction in time. For example, the history of a single technology—say, chemical rockets—exhibits a sequence of developments, accidents, setbacks, triumphs, and the like that can be told as a unified and linear narrative. But no single technology or its history occurs in a vacuum; it interacts not only with other technologies, but also with economics, politics, diplomacy, and a myriad of other factors in its development. A synchronic history of the development of a single chemical rocket—say, the Saturn V booster for the Apollo program—would contextualize the technological effort within the many lives of the individuals involved in the program, these lives in the context of the Space Race, the Space Race within the Cold War, and the Cold War within post-WWII economic, political, and diplomatic superpower competition between former WWII allies. And it doesn’t end there.

A distinction between diachronic and synchronic historiography implies a parallel distinction between diachronic and synchronic futurism. Indeed, I would go farther and assert that within big history the two are one: we should not distinguish in principle between past and future, but should attempt to construct a framework of historical understanding that incorporates both. A big history of civilization that includes the ten thousand years of development up to the present must also look forward to at least the next ten thousand years, which will see the transition to spacefaring civilization if civilization does not stagnate or self-destruct.

In the spirit of a synchronic big history of civilization, then, I would like to develop some ideas regarding what might loosely be called “transhumanism” in order to better understand the transition to spacefaring civilization in the context of developments that will be simultaneously occurring with the agents of this transition. [2]

1. Planetary Constraints upon Civilization

Technological developments in the next one to two hundred years offer the potential of selectively eliminating certain constraints on human life and civilization that have defined the human condition since its emergence. Longevity technologies may alter or eliminate (for all practical purposes) temporal constraints on human life; leaving Earth may alter or eliminate spatial constraints of planetary life, as well as the temporal constraint of the limited habitable period of Earth; genetic engineering and technological enhancement may alter or eliminate the limitations intrinsic to the human mind and human body. The joint elimination of spatial and human limitations suggests the possibility of individuals assuming any form they may choose in order to live in any environment that they choose. Energy technologies may alter or eliminate the limitations on the production and consumption of energy that have shaped the development of civilization.

It is not likely that all human constraints that can be eliminated will be eliminated, or that this will happen simultaneously, or that it will happen completely (i.e., absolutely). The development of technological enhancement cannot be predicted. It is one of the distinctive properties of industrial-technological civilization that it is shaped by unpredictable technological developments, much as agrarian-ecclesiastical civilization is shaped by unpredictable weather and theological controversies. However, the closer a technological enhancement is to the stage of engineering an industrial application, the more predictable that development becomes. At our present state of knowledge, longevity is less of a scientific problem than a problem of particular technologies and engineering the application of these technologies; we can expect these unfolding technological advances to continue to incrementally contribute to longevity. However, the problem of artificial intelligence (not to mention machine consciousness) still lacks an adequate scientific basis. An adequate science of intelligence and consciousness is a necessary prerequisite of technologies of AI (or, rather, machine consciousness), and technologies must be engineered into practical solutions before they can pass on to industrial application.

It is the fact of pervasive (and sometimes even petty) constraints on human freedom that makes this freedom open to changes in scope as a result of technological change. If human freedom were something ideal and absolute, it would not be subject to revision as a consequence of technological change, or any change in contingent circumstances. But while we often think of freedom as an ideal, it is rather grounded in pragmatic realities of action. If a lever or an inclined plane make it possible for you to do something that it was impossible to do without them, then these machines have expanded the scope of human agency; more choices are available as a result, and the degrees of human freedom are multiplied.

Consistent and predictable constraints result in forced choices and mutually exclusive alternatives; when constraints are weakened or eliminated, the forced choice between mutually exclusive alternatives is removed, history no longer converges according to the predictable patterns of the past, and history bifurcates repeatedly as different individuals and different social groups make different choices. Technologies of consciousness may even eliminate perennial existential dilemmas, allowing an individual to be in two places at the same time, or to pursue both of two mutually exclusive alternatives (if embodiment is eventually rendered entirely voluntaristic).

What we have experienced to date of history might be framed as sentient-intelligent beings and their civilizations under conditions of severe constraint — what I have called terrestrial conflation, as the constraints of our planetary existence force the efforts of historical agents to converge upon actions consistent with planetary constraints, therefore conflating forms of civilization that would be distinct under conditions of constraint less severe than those imposed by planetary habitation. Before we consider these planetary constraints in more detail, however, we must first consider human constraints and their amelioration.

2. Transcending Human Limitations

Transhumanism means many different things to many different people. There is a simple explanation for this. Human being admits of many limitations and many forms of finitude. If we define transhumanism as the lessening or elimination of some one limitation or of several limitations (i.e., the transcendence of finitude and limitations), then there are as many forms of transhumanism as there are limitations and forms of finitude (as well as combinations and permutations thereof) that a human being might transcend. [3]

It is arguable that transhumanism is simply a perennial expression of human nature as magnified by the lens of contemporary technology, and that there is nothing at all radically new about transhumanism, hence it ought not to pose any kind of threat, nor be perceived as a threat—moral, emotional, or intellectual. Recently, in Astrobiology is island biogeography writ large, I expressed this by saying that technology is the pursuit of biology by other means: the pursuit of biological ends, accelerated by technology. If the tempo of this transcendence alone counts as a threat, then one can understand the perceived threat of transhumanism.

In other words, transhumanism is simply humanism. With the advent of cognitive modernity, when human beings began to make use of their generous encephalization endowment, they began a series of unprecedented innovations not previously explored or exploited by any other organism on our planet, and, in so doing, transcended limitations never before transcended in the history of terrestrial life. It is, then, human nature to continually transcend the human condition.

From this point of view of continual transcendence, there is no human nature. Human beings are whatever they make of themselves. This is a position prefigured in philosophy. Sartre denied that there is any such thing as human nature, but (at least in his early years) insisted upon radical human freedom. If human nature is conceived as an eternal essence that cannot be transcended, then Sartre is right and there is no human nature. But if freedom itself is human nature, the ability to transcend any fixed and static nature, then there is a human nature, but not a nature expressible in terms of essence or necessity. Human nature, under this interpretation, confounds any attempts to assign it limitations on the basis of essentialist definitions or necessary constraints.

Despite the fact that human beings have created a unique world for themselves by transforming (i.e., transcending) their environment, and in so doing have transcended limitations for hundreds of thousands of years (and perhaps for millions of years, if we count our bipedal tool-making pre-human ancestors), transhumanism in its contemporary formulations—i.e., contemporary transcendence of limitations never before transcended in the history of our planet—is viewed as a threat by many, and as an existential threat by some.

Transhumanism as an existential threat (which is, at the same time, an existential opportunity) defines the contemporary debate over transhumanism, such as it is, which is often found to deteriorate into the assertion and counter-assertion of conflicting points of view that concede nothing to subtle shades of meaning, feeling, and intention that are intrinsic dimensions of human life. And this debate has not been clarified by the wide diversity of meanings that have been attached to the term “transhumanism.” The two operative assumptions that define each party to the conflict are, on the one hand, that everyone would, if given the opportunity, choose to transcend their limitations, or to transcend some particular limitation, and, on the other hand, that no one would really want to transcend their limitations, because our limitations define who and what we are, however loath we are to admit this.

To make this conflict more concrete, let us take a particular human limitation: mortality. Some assume that everyone would want to live forever, if only the means were available, while, on the other hand, some assume that no one would really want to live forever, if truly faced with that prospect. We know that the quest for eternal youth and eternal life are ancient themes in our civilization, that alchemists sought to formulate the elixir of life and that some of the conquistadors sought to find the Fountain of Youth in the wild exoticism of the New World. There is no question that some men are greedy of life, and if given the opportunity would seize the possibility of life everlasting without hesitation. The thoughtful critics of this attitude assume that most individuals eventually rise to a level of maturity that allows them to accept their mortality, and assert that if human beings were given a life freed of biological necessities that they would not know what to do with themselves, and, like the Cumaean Sibyl, would wish for a death that does not come.

One of the most familiar critiques of transhumanism is that we would eventually find the ennui of immortality unbearable (which says nothing as to whether we might put two or three hundred years to good use), and this is an objection that has been around since the heretically-minded have questioned the joys of an eternity of heaven, such as Milton put in the mouth of the rebel angel Mammon:

“Suppose he should relent
And publish Grace to all, on promise made
Of new Subjection; with what eyes could we
Stand in his presence humble, and receive
Strict Laws impos’d, to celebrate his Throne
With warbl’d Hymns, and to his Godhead sing
Forc’t Halleluiah’s; while he Lordly sits
Our envied Sovran, and his Altar breathes
Ambrosial Odours and Ambrosial Flowers,
Our servile offerings.” [4]

It is difficult to imagine anyone enjoying a heaven so described, and it would only become more hateful over time. Presumably some parallel to ennui holds for other forms of human enhancement (distinct from longevity) with which we might become bored or fatigued once we had our fill of the exercise of some new capacity.

It is an obvious, and unforgivable, oversimplification to assume that everyone wants to live forever or that no one really wants to live forever. Even without the prospect of living forever, we know that there are suicides, who choose not to live out even their natural term on Earth. That, too, is part of the human condition. We have always had the opportunity to shorten our own lives, or indeed to shorten the lives of others. We have not, to date, had the wherewithal to extend our own lives or to extend the lives of others at will (except incrementally through the interventions of scientific medicine), though, as I have observed in Unfinished Business: Finitude, Contingency, and Openness, the finitude of human being is a contingency, and it is a contingency subject to change in the light of future contingencies. The determination of human being by what is perceived as biological necessity is an accident of history.

This dialectic of transhumanism — will we accept or reject enhancement? — we can see, is a non-constructive approach, and when we generalize beyond any particular limitation of human being (whether it be lifespan or anything else) we immediately understand the necessity of patiently surveying human limitations in detail and asking of each limitation, “Is this desirable or is this not desirable?” This question in turn, formulated in tertium non datur form, is another non-constructive conception, and we again see the need to adopt a more subtle approach. Each human limitation, then, must be weighed and considered on its own merits, precisely because the human condition is defined by a range of limitations that each bear differently upon life.

One way to do this would be through a thought experiment in which one can imagine presenting a human being of today with a distant human descendant, some millions of years in the future, having been subjected to technological interventions intended to ameliorate limitations, and ask whether this descendant is to be counted within the charmed circle of humanity. Better, a human being of today could be presented with a range of distant descendants, some of which would be attractive and readily claimed as one of our own, and some of which would be repulsive. [5]

3. Transhumanism and Reflection Principles: a Technical Digression

The transcendence of human finitude and limitation, whether in regard to any one particular limitation (such as our finite life span) or in regard to all human limitations, raises the question of which human limitations or how many human limitations might be ameliorated or eliminated. As an exercise, i.e., as a thought experiment, we could attempt to imagine human being having infinitely transcended all human limitations. We may characterize the removal of all human finitude and limitation as absolute transhumanism. What would absolute transhumanism look like? There may be a way to approach this question, though at first sight it would seem to transcend any finite human conception by definition.

In order to formulate a coherent conception of absolute transhumanism we will need to call upon the theoretical resources of set theory, and firstly we will refer to a distinction that Georg Cantor, the founder of set theory, made between the transfinite and the absolutely infinite. In one passage Cantor formulated this distinction in the following terms: “Totalities that cannot be regarded as sets (an example is the totality of all alephs as is shown above), I have already many years ago called absolute infinite totalities [absolut unendliche Totalitäten], which I sharply distinguish from transfinite sets.” [6]

Elsewhere Cantor makes the distinction that the totalities of infinite sets are increasable while the absolute infinite cannot be increased in size. This is profoundly counter-intuitive, but in studying the set theoretical conception of infinity, i.e., the transfinite, one must become familiar with the idea that there are infinite sets of different powers (i.e., infinite sets can be smaller or larger), and indeed an infinite hierarchy of infinitudes. [7] Cantor’s distinction between the increasable transfinite and the unincreasable absolute infinite is particularly suggestive for transhumanism, as human enhancement can be characterized as the increase of human capabilities and human faculties (which is logically equivalent to the reduction or elimination of limitations). Absolute transhumanism, if it could be realized, would then be an unincreasable enhancement of humanity—an unattainable and inconsistent ideal—while transhumanism simpliciter admits of increase, i.e., consists of human enhancements that admit of further enhancement.

In so far as a realized transhumanist property is the transcendence of some limitation, the telos of this transcendence is to pass from the finitude defined by limitation to an infinitude defined by no limitation. Of course, human enhancement will begin with the transcendence of limitations that nevertheless remain finite, and it may well be impossible for any human faculty to converge onto the infinite; this infinite transcendence is, however, the telos of the transcendence, whether or not that telos is realized. [8]

How do we distinguish the removal of all human finitude from the absolute infinite? Here we must go a bit deeper into set theory. The absolute infinite of human being suggests a transhuman reflection principle: any property that holds of absolute transhumanism holds of some lesser transhumanism, i.e., the properties of absolute transhumanism are reflected in lesser transhumanisms.

Reflection principles are employed in set theory in the study of infinite sets. First explicitly discussed and formalized by Azriel Lévy [9], reflection principles (there are many of them) are set theoretical implementations of the idea that the properties of higher infinite sets are “reflected downward” and can be found also as properties of smaller infinite sets. Intuitive glosses on the reflection principle are legion, as are proposed formalizations; as in all formal thought, the idea behind reflection principles is open to variation and to alternative formulations. [10] Lévy himself wrote that reflection is “the idea of the impossibility of distinguishing, by specified means, the universe from partial universes,” and that reflection principles “…state the existence of standard models… which reflect in some sense the state of the universe.”

Since the idea behind reflection principles is not widely familiar and can be difficult to grasp at first, I will give a couple of additional explanations. Here is how Hao Wang expressed the intuitive basis of reflection principles:

“The universe of all sets is structurally indefinable. One possible way to make this statement precise is the following: The universe of sets cannot be uniquely characterized (i.e., distinguished from all its initial segments) by any internal structural property of the membership relation in it which is expressible in any logic of finite or transfinite type, including infinitary logics of any cardinal number. This principle may be considered a generalization of the closure principle. Further generalizations and refinements are in the making in recent literature. The totality of all sets is, in some sense, indescribable. When you have any structural property that is supposed to apply to all sets, you know you have not got all sets. There must be some sets that contain as members all sets that have that property.” [11]

For another example, here is how Mary Tiles explains the reflection principle:

“…there is no way, within the language of ZF, to characterize the whole set-theoretic universe, as opposed to some member of it (some Va). Every formula in the language of ZF, if true at all, will be true within some set which falls short of being the whole universe. So there is no way, from within ZF, of insisting that one is talking about all sets rather than about all sets up to a given rank.” [12]

It follows from the reflection principle that every property of the absolute infinite is to be found in some lesser infinitude. Absolute transhumanism could be understood as some lesser absolute infinite—more than a mathematical infinite, i.e., the transfinite, but less than the absolute infinite, which excludes nothing—or indeed as an infinitude that is provided for by a reflection principle.

As we saw above, Cantor defined the absolute infinite as an inconsistent totality that cannot be a set and that cannot be increased in size (it lies beyond the hierarchy of transfinite numbers). Transfinite numbers are infinite sets that can be increased in size; the size of an infinite set is established by one-to-one correspondence. Equipollent sets are of the same size. The definition of an infinite set as a set that can be put in one-to-one correspondence with a proper subset of itself already prefigures the idea of reflection: the infinite set of even numbers “reflects” the infinitude of the infinite set of all natural numbers, with which even numbers can be put in one-to-one correspondence (the set of natural numbers and the set of even numbers are equipollent).
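Cantor’s one-to-one correspondence between the naturals and the evens can be made concrete, if only over a finite prefix of either set. A minimal sketch in Python (the function names are mine, not standard terminology):

```python
# A finite illustration of equipollence: the map n -> 2n pairs every
# natural number with a unique even number, so the evens, though a
# proper subset of the naturals, "reflect" their infinitude. Only a
# finite prefix can be sampled, but the pairing never runs out.

def pair(n: int) -> int:
    """Bijection from the naturals into the evens."""
    return 2 * n

def unpair(m: int) -> int:
    """Inverse map: every even number comes from exactly one natural."""
    assert m % 2 == 0
    return m // 2

naturals = range(10)
evens = [pair(n) for n in naturals]

# Each natural is matched with exactly one even, and the round trip is
# exact, which is what one-to-one correspondence demands.
assert all(unpair(pair(n)) == n for n in naturals)
print(list(zip(naturals, evens)))
```

No finite program exhibits an infinite set, of course; the point is only that the pairing rule n → 2n never exhausts either side of the correspondence.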

Perhaps this isn’t very helpful, so let me try another approach. The absolute infinite is a conception of the infinite that is infinite in every respect. It is not uncommon that naïve conceptions of the infinite implicitly invoke the absolute infinite, which is sometimes framed as an objection to the mathematical theory of the infinite, which is not concerned with the absolute infinite. As we have seen, Cantor called the absolute infinite an inconsistent multiplicity (and did so long before Russell’s paradox was known), and distinguished it from the transfinite. Transhumanism simpliciter, then, is parallel to the transfinite, while absolute transhumanism is different in its essential nature from transhumanism because its admittedly inconsistent properties cannot be increased.

Just as the unrestricted comprehension principle [13] results in contradictions, so too do unrestricted reflection principles result in contradictions, and so more subtle formulations of reflection principles have proved to be necessary to translate the informal and intuitive conception into rigorous and formal contexts. One way of rescuing comprehension is to relativize it to a property that marks off elements of the set already given, so that instead of asserting that any property whatsoever defines a set, it is asserted that any property defines a set within another set. (Cf. the text of note [13] above) A similar approach can be taken with reflection principles, so that a reflection principle is relativized to properties marked off from a set already given. This, finally, is how we distinguish the removal of all human finitude (absolute transhumanism) from the absolute infinite. Absolute transhumanism is a set of infinite properties potentially applicable to human beings nested within the infinite properties of the absolute infinite. In this way, we have a glimpse of absolute transhumanism.

With the idea of absolute transhumanism we find ourselves on the verge of theological problems, and here I am even more wary to tread than in set theory, but in all honesty I cannot discuss absolute transhumanism without reference to related theological concepts, however much I would prefer to avoid the subject. In so far as absolute transhumanism would be characterized by omniscience, omnipresence, omnipotence, and eternity (among other “omni-” properties), the infinite personhood of absolute transhumanism is, in a sense, the culmination of the Feuerbachian idea that humanity, in worshipping the divine, is worshipping what is best in itself, and that which humanity may, in the fullness of time, come into as its own birthright. This theme is pervasive throughout Feuerbach’s works [14] and it is a testament to the tolerance of nineteenth century civilization that Feuerbach was able to develop these themes as extensively as he did without experiencing persecution. People in our own time have been persecuted for less.

For contingent reasons derived from the terrestrial origins of human being, human being could never ascend to absolute transhumanism—in some cases because omni-properties are inconsistent, and in some cases because they conflict with the physical structure of the world. For example, human being can never be eternal, because humanity has a beginning in time, and therefore is limited to future sempiternity. Human being cannot ascend to omnipotence, because omnipotence is contradictory: could the omnipotent transhuman being create a rock so heavy that the same being could not lift it? Human being cannot aspire to omnipresence, because omnipresence is not possible in a relativistic universe: the relativity of simultaneity entails that no being could be present simultaneously at all times and all places. These contradictions actually show us how fruitful the definition of transhumanism in terms of a reflection principle is, since we can clearly see that absolute transhumanism, like the absolute infinite, is an inconsistent totality. Thus the parallelism of infinitude and transhumanism holds on several levels.

4. The Thin End of the Wedge

We have already accepted, if not embraced, psychosocial and emotional enhancement, and a range of psychotropic medications are available for this purpose, although enhancement is conceptualized in terms of normativity. There is a similar acceptance of physical enhancement through performance enhancing drugs, though a certain opprobrium attaches to this—perhaps due to the fact that physical enhancement is more obviously enhancement that exceeds normative levels of performance. In terms of emotional and cognitive enhancement, our measures are more subjective, and the efficacy of drugs (e.g., modafinil) is openly debated. There is as yet no consensus regarding whether the efficacy of such drugs is a placebo effect or genuinely nootropic, but the time will come when cognitive enhancement in the form of a pill is no longer debatable, except as regards the desirability of taking such a royal road to cognitive performance.

Human enhancement has already begun; the thin end of the wedge of transhumanism has already been driven into the crooked timber of humanity. [15] Any attempt to circumscribe further development cannot but appear arbitrary—at least to some—and the more that political regimes or social pressure are mobilized to enforce an arbitrary interdiction of development, the stronger the reaction against interdiction and prohibition will be. In this context it will be easy to frame human enhancement as an expression of individual liberty, though moral arguments will also be made for interdiction and prohibition—as we well know from the range of prohibitions presently maintained by our society.

While we can already see the first signs of an emerging transhumanist ideology that views the elimination of human limitations as a moral good, we can also see the first signs of an anti-transhumanist ideology that celebrates human limitations and views their elimination with a kind of moral horror. While we have accepted the use of glasses to correct vision, pacemakers to correct irregular heartbeat, cochlear implants to restore hearing, at some point we cross a threshold that the greater part of humanity is unwilling to cross, and human enhancements (or even restorations to full functionality) are viewed as morally unacceptable. The implied sorites paradox as to where humanity ends and transhumanity begins is, however, a temporary stage in our moral evolution.

Well-intentioned opposition to transhumanism will eventually paint itself into too narrow a corner from which it will not be able to extricate itself, first by creating a kind of secular theology that debates the minutiae of what it means to be “truly” human—it is easy to imagine a scholasticism that parses the human condition down to its finest details and debates what constitutes a “real” or “genuine” human being (absolute humanism in contradistinction to absolute transhumanism)—and secondly by attempting to fix human biology at its present stage of development. [16] One might even imagine a dystopian totalitarian political regime attempting to enforce the Hardy-Weinberg equilibrium and so bring human evolution to an end, institutionalizing allele frequencies at their present level in the human population and by selective breeding attempting not to produce a “better” human being but rather to produce a “genuine” human being, utterly free of the taint of having been the result of engineering—a result paradoxically only possible through population engineering.
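The Hardy-Weinberg equilibrium invoked here is easily stated: absent selection, mutation, migration, and drift, random mating leaves allele frequencies unchanged from one generation to the next, since an allele at frequency p reappears at frequency p² + ½(2pq) = p. A brief sketch in Python under those idealizations (the starting frequency below is purely hypothetical):

```python
# A minimal sketch of the Hardy-Weinberg equilibrium: with allele
# frequencies p (for A) and q = 1 - p (for a), random mating yields
# genotype frequencies p^2 (AA), 2pq (Aa), and q^2 (aa), and the
# frequency of A in the next generation is p^2 + (1/2)(2pq) = p.
# The usual idealizations (no selection, mutation, migration, or
# drift) are assumed throughout.

def next_generation(p: float) -> float:
    """Frequency of allele A after one round of random mating."""
    q = 1.0 - p
    freq_AA = p * p      # homozygotes contribute two A alleles each
    freq_Aa = 2 * p * q  # heterozygotes contribute one A allele each
    return freq_AA + 0.5 * freq_Aa

p = 0.3  # hypothetical starting frequency of allele A
for generation in range(5):
    p = next_generation(p)

# The frequency is unchanged (up to floating-point error): this is the
# stasis such a regime would have to enforce to freeze evolution.
assert abs(p - 0.3) < 1e-9
print(round(p, 10))
```

The irony the text points to is visible in the algebra: the equilibrium holds automatically under the stated idealizations, so a regime could only “enforce” it by engineering away every departure from them.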

5. Transhumanism is not One, but Many

When the future contingencies of history yet-to-be eventually compromise the contingency of human finitude, the human response to this change in the human condition will be subtle, sophisticated, and, above all, it will be manifold. Transhumanism is not one, but many. There will not be one response, but many different responses, and each opportunity to confront a new dilemma will mean that a new bifurcation opens up, with some taking one horn of the dilemma and the rest taking the other horn.

Transhumanism will initially be something only to be had by the very few at a very high price, but in the fullness of time it will be a choice that every individual will be forced to make. When every individual must choose for or against transhumanism, many individuals will make many different choices. Here we must distinguish between the specific limitations of individuals and classes of limitations (including their characteristic parameters) that limit all human beings. The variation of limitation across individuals has been a social problem throughout human history. Some individuals are more limited than others, some less limited than others, and these natural differences will invite both technological redress (social leveling) and technological facilitation (the maintenance of traditional social hierarchy).

In the meantime, the earliest bifurcations in our species will come to us in more intimate and familiar forms. Parents who, in their doting solicitude for their children (including future children not yet born), imagine the enhancement of those children through genetic engineering and biotechnology are typically as naïve as futurists who rely solely on diachronic extrapolation for their predictions. A child who has been greatly enhanced, and who experiences disproportionate opportunity as a result, is very likely to utterly disregard his family, not to mention the traditions and beliefs of that family (which likely provided the family with its staple meanings and reason for existence, and ultimately its reasons for enhancement), because he begins life so far in advance of them and develops from that point forward, so that there is very little in common between the enhanced child and the family of origin. Sometimes the apple not only falls far from the tree, but also continues rolling once it hits the ground. Parents and children in these circumstances would likely be a profound disappointment to each other, and this familial disenchantment will open further social rifts.

6. Transhumanism implies Transspeciesism

In the spirit of synchrony, in the attempt to see history whole, it should be obvious that we cannot treat transhumanism in isolation, i.e., as affecting human beings only. That transhumanism is not one, but many, means that the technology of enhancement will not be confined to human beings. The enhancement of other species began with their domestication, although in this context “enhancement” means the selective breeding of species to produce qualities desired by human beings, which might not be considered enhancement by the species in question. Enhancement is in the eye of the beholder.

Transhumanism implies transspeciesism [17], i.e., that any and all species might be enhanced through the reduction or elimination of limitations. We would expect this for our companion animals (some time ago I had already written a post on transcanidism), but we might also wish to enhance the species we hunt, in the interest of better sport—we could produce for ourselves the most dangerous game, more dangerous than anything nature has to offer.

As each species possesses a unique set of endowments, we can conceptualize the enhancement of other species along the same lines as those I have outlined for transhumanism, in terms of reflection principles: each species implies its trans-being absolute as an inaccessible telos, itself a reflection of the absolute infinite, marking off a distinctive parameter space of being that is further reflected downward in embodied trans-beings.

In the consideration of the technological facilitation of our species and our future it is not uncommon to skip ahead to the point at which embodiment becomes irrelevant, or at least marginal, but I want to linger for a moment on our relationship to other terrestrial species. The bare possibility of making the transition from biocentric civilization to post-biological civilization does not entail the actualization of that possibility—at least, not immediately. We are likely to maintain a great many naturally occurring biological species simply because we want them to remain a part of our civilization, whether for companionship or for food. While technological advances may eventually make food irrelevant in theory, in fact almost no one will want to give up the pleasures of cooking and eating. Food has a ritual if not a sacral place in human experience (much as does the concretely embodied home) and will be maintained in some contexts simply for this ritual comfort, while initially only a small minority adopts new rituals and displaces sacrality elsewhere. In so far as a ritual is an opportunity to participate in a myth [18], ritual meals will allow our distant descendents to participate in the traditional mythology of terrestrial humanity.

Each biome into which human beings inserted themselves during our planetary diaspora out of our African origins has made available a unique cohort of species, some of which have been domesticated and the fates of which have thus become tied to human beings and their civilization (no less than our fate is joined to theirs). Terrestrial food production involves this tightly-coupled cohort of co-evolving species dependent upon one another as a consequence of domestication (which latter formulation would constitute a biologically minimalist conception of civilization). This species cohort varies according to endemic species, topography, and climatic conditions—in the Americas we find the “three sisters,” potatoes, and bison, in southeast Asia tropical fruit, rice, and water buffalo, in the Arabian peninsula fava beans, dates, and chicken, in Africa sorghum, taro, yam, and Ankole longhorn—with the invariant being the participation of human beings.

Thus each region of Earth not only possesses a cultural diversity of civilizations, but also a biological diversity of civilizations, each of which may be defined in terms of the unique cohort of tightly-coupled co-evolving species. To date, this process has been an exclusively terrestrial one, but when cohorts of species representative of terrestrial civilizations leave Earth and establish themselves in other environments, the same principles will be iterated at higher orders of magnitude. And when human beings become transhuman this coevolutionary cohort will not simply vanish, but will change even as human beings change.

While the industrial revolution marginalized food production as a sector of the economy, even a biotechnological revolution that would, in theory, allow biology to be entirely marginalized in human experience will not bring that marginalization about automatically, because human beings are biologically embodied, many identify with this embodiment, and many will continue to maintain a strong sentimental relationship to embodiment long after embodiment is no longer strictly necessary and has become entirely voluntaristic.

An extraordinarily long period of time will likely be required for Earth-originating intelligence to largely (much less entirely) separate itself from its biological origins, initially by making the transition from a biocentric civilization to a biologically marginal civilization, and then eventually taking leave even of that biologically marginalized fragment. It may require the entirety of the Stelliferous Era for the development of biocentric civilization to run its course, leaving us at the beginning of the Degenerate Era only when our descendents are at long last prepared to make the transition to post-biological civilization. Even after experience can be exhaustively emulated in a virtual context, and ordinary experience cannot be differentiated from virtual experience by any objective, quantifiable measure, there will yet remain those who not only insist upon the authenticity of embodied experience, but who also insist that there is a subtle qualitative difference between embodied and virtual experience.

The advent of post-biocentric civilization would mean the end of civilization defined in terms of a co-evolutionary cohort of species, but the advent of transhumanism would mean the adaptive radiation, and therefore the speciation, of humanity, and with it a potentially dramatic expansion of the coevolutionary cohort of species, to include not only many post-human descendents, but also post-natural species of every phylum. The technology that would enable transhumanism would also enable the technologically-driven adaptive radiation of other species. Much as the industrial revolution expanded biocentric civilization to include extinct species (in the form of fossil fuels), biotechnology may extend biocentric civilization to artificial species. Before the narrowing of our civilization down to a sole technological implementation, there will come a great expansion of biocentric civilization facilitated by transhuman technologies.


7. Existential ends are not indifferent to technological means

Transhumanism will have a strongly selective effect. After a few generations (and generations will take much longer to elapse, given the technologies of transhumanism) those who choose to avoid all enhancement will represent an increasingly small proportion of the population, and their genes will consequently be less frequently represented in the gene pool. The human condition will repeatedly bifurcate as individuals face more choices, and more sophisticated choices: whether, to what degree, and in what way an individual might change his mind or body. Depending upon the resources that are available, and depending upon which technologies of transhumanism prove to be robust and effective—that is to say, depending upon contingencies not yet known—the human condition may do considerably more than bifurcate; it may embark upon a technologically enabled and accelerated adaptive radiation throughout the cosmos.

The argument that I have made in regard to the unknowns of technological development when it comes to the engineering success or failure of particular spacefaring technologies (cf. How We Get There Matters) also applies to the engineering success or failure of particular transhuman technologies, i.e., technologies of human enhancement. How exactly we become transhuman matters. Just as it matters how we travel in space, because the design, production, and use of particular technologies will shape the economic infrastructure and industrial base of the civilization that employs these technologies, so too it matters how we become transhuman, because the technologies of transhumanism will shape the civilization that employs them, and that civilization will come to shape its technologies in turn. Again, existential ends are not indifferent to the technological means by which these ends are attained.

We can make broad distinctions in classes of technologies and consequent distinctions in the civilizations integral with these technologies. It should be noted at this point that only industrial-technological civilization can be integral with technologies in this way—driving technologies and being driven in turn by them. Other forms of civilization, if they have a relation to technology, do not have a relationship to technology that defines the civilization, whereas industrial-technological civilization is defined by its relation to technology.

In so far as civilization is a social technology emergent from human intelligence (a principle that I have called civilization-intelligence covariance), the adaptive radiation that will be driven by the technologies of transhumanism will result in different forms of intelligence, which will mean in turn the adaptive radiation of civilization, as novel forms of intelligence give rise to novel forms of civilization. The particular social technologies of civilization—which among them can and which cannot be engineered into large-scale social organization and institutions—will matter. And, again, in so far as civilization is a co-evolutionary cohort of species, the proliferation of new species in the wake of transhumanism again points to an adaptive radiation of civilization.

8. The Great Voluntaristic Divergence: Peopling the Future

The Great Voluntaristic Divergence will come about when both biological and planetary constraints are overcome and agents have a free hand to shape their history as they will. When history arrives at a point of complexity and sophistication at which pathways can diverge significantly, and is no longer subject to the constraints that have defined it since its planetary origins and prevented its divergence, convergent evolution will begin to decrease while divergent evolution increases.

Though the Great Voluntaristic Divergence represents the overtaking of contingent limitations by conscious choice (hence voluntaristic), transhuman adaptive radiation through the cosmos will not fly in the face of established evolutionary principles, but rather will exemplify what we know of adaptive radiation, albeit at an order of magnitude not previously realized in terrestrial history—a macroevolutionary adaptive radiation, if you will. Beyond the terrestrial biosphere, the principles of biology will continue to hold good, but in unprecedented ways. That we have at times mistaken contingent limitations on our biology for a kind of biological necessity may show up some of our initial misapprehensions, but once the fallacy has been identified as such, we will see adaptive radiation unfold on a cosmological scale.

For a sense of what biologists mean by adaptive radiation, here is a brief definition:

adaptive radiation: biologic evolution in a group of related species that is characterized by spreading into different environments and by divergence of structure. [19]

A longer and more detailed definition of adaptive radiation is to be found at Paleos as follows:

Adaptive radiation: the rapid expansion and diversification of a group of organisms as they fill unoccupied ecological niches, evolving into new species or sub-species; the classic example being Darwin’s finches. This occurs as a result of different populations becoming reproductively isolated from each other, usually by adapting to different environments. Radiations specifically to increase in taxonomic diversity or morphological disparity, due to adaptive change or the opening of ecospace, may affect one clade or many, and be rapid or gradual. The term can also be applied to larger groups of organisms, as in “the adaptive radiation of mammals” (see diagram below), although in this context it is perhaps better referred to as evolutionary radiation. Evolutionary radiation in this context refers to a larger scale radiation; whereas rapid radiation driven by a single lineage‘s adaptation to their environment is adaptive radiation proper. Adaptive and evolutionary radiations in this latter context follow mass-extinctions, as when during the early Cenozoic mammals and large flightless birds filled ecological roles previously occupied in the Mesozoic by dinosaurs.

Elsewhere I have said that astrobiology is island biogeography writ large. As we know from the example of Darwin’s finches, islands provide a particularly effective setting for speciation. The mechanisms of adaptive radiation are facilitated by an archipelago. On a much larger scale, the adaptive radiation of life in the cosmos will be driven by the range of conditions on an archipelago of habitable worlds, isolated like islands in the vast sea of interstellar space, more sterile even than the pelagic zones of terrestrial oceans.

Transhuman adaptive radiation will be an aspect of extraterrestrial dispersal vectors, both serving as a vector for the extraterrestrial dispersal of terrestrial species (which will radiate adaptively no less than humanity), and being driven further in turn by this dispersal of other terrestrial species. Each cohort of species that remains in biological proximity will coevolve while simultaneously radiating outward into the universe.

J. B. S. Haldane already explored some of these possibilities in his essay “Man’s Destiny,” in his book Possible Worlds and Other Papers (published in 1928), and in his 1963 lecture “Biological Possibilities for the Human Species in the Next Ten Thousand Years.” [20] A more recent treatment of the same theme is to be found in Christopher Wills:

“As we spread to worlds that are very different from our own, the consequences for our evolution will be at least as profound as when our remote ancestors first ventured out of Africa. Will people living on other planets evolve into new species? Given sufficient isolation from the rest of us, we have no reason to suppose otherwise. Indeed, challenged by very different environments, they will probably become new species much more quickly than the millions of years that were required to engender the timid beginnings of speciation in the orangutans of Borneo and Sumatra. In short, the powers of natural selection that Darwin was the first to understand will certainly continue to shape our species. Here it is our task to try to glimpse the accelerating ways in which biological and cultural evolution will reinforce each other in the future, and to understand how this mutual interaction will allow us to survive the evolutionary challenges that we will face as we begin our spread through the universe.” [21]

Wills’ book is all about how human evolution, biological and cultural, far from having been arrested by the advent of civilization, has been accelerating as a direct result of civilization. To Wills’ concern for the accelerating evolution of human beings I would add the accelerating evolution of all species tied to human civilization, and to the biological and cultural evolution noted in the above quote we might wish to add technological evolution. When technological evolution becomes indistinguishable from biological and cultural evolution, i.e., after transhumanism, we will understand that biology and technology are coextensive, without any need to make note of it.

I noted above that transhuman adaptive radiation will have a selective effect. I previously addressed this in The Human Future in Space, where I wrote:

“Other planetary bodies can be adapted for human habitation, and we can build structures in space that imitate gravity. However, it is worth noting that health will be an important factor for those leaving Earth to live elsewhere in the cosmos. Just as the individuals who go into space will be self-selected by their desire to live away from Earth, they will also be physically selected. Living in micro-gravity, low gravity, high gravity or simulated gravity environments will be strongly selective. There may be individuals who desire to live in these other environments, but who find that they are physically incapacitated by them. But due to human genetic variability, there will be some individuals who happen to be well-adapted to differing gravity environments. Such individuals will be favored in the expansion into space and they will pass on their genetic legacy to their children. Humanity will evolve under extraterrestrial selection pressures.”

These extraterrestrial selection pressures will be distinct in distinct environments, and so will lead to adaptive radiation among isolated populations, and populations will be more isolated across the gulfs of space than they were ever isolated by geographical barriers on Earth. To the technologically-driven selection pressures of transhumanism, then, can be added the unprecedented selection pressures of alien worlds and artificial environments. And it doesn’t end there.

Because human beings are not only physical beings but also cultural beings, there are cultural forms of adaptation. These cultural forms of adaptation are more familiar and more obvious to us from our terrestrial experience than biological adaptation, which takes place over a biological time scale that conceals in plain sight the change that cannot be observed over a period as short as a human life span. Cultural adaptation is what we today call diversity, and an extreme diversity of cultural adaptations characterizes humanity’s inhabitation of Earth, as it was by way of behavioral adaptation that human beings were able to expand on a planetary scale and to inhabit every biome.

The sociocultural radiation of transhuman beings will also take place on an unprecedented scale coinciding with the possibilities opened up by the energy and material resources of a practically inexhaustible universe. Thus there is, in addition, the possibility of an abstract radiation into artificial environments—a radiation that transhuman beings would be especially well placed to effect. The technium is an artificial environment into which human beings could radiate, and which would expand and proliferate in that radiation.

9. Conclusion: Transcending Planetary Limitations

A viable spacefaring civilization, as we see, cannot be separated from the evolution of the agents of that civilization; both are mechanisms of history considered on the largest scale. Transhumanism, like spacefaring, is not an end in itself. Like any evolutionary development, in technology no less than in biology, everything is a transitional form. In a famous passage of Thus Spake Zarathustra Nietzsche had his prophet deliver himself of the utterance: “Man is a rope stretched between animal and superman—a rope stretched over an abyss.” [22] Today we might rather say, “Man is a rope stretched between animal and transhumanity—a rope stretched over an abyss,” and then go on to note that the abyss is existential risk.

This does not go far enough. Transhumanity is a paradigm of sentient-intelligent being of the Stelliferous Era, still ensconced within the paradigm of civilizations of the Stelliferous Era. We must transcend this paradigm as well. Earth and its biosphere are a rope stretched over a cosmological abyss. Civilization is a rope stretched between prehistory and our life in the cosmos. Civilization is perhaps the highest stage of terrestrial development, but that does not mean there are no further developments to come—those further developments being what I have elsewhere called “post-civilizational successor institutions.” What the Cambrian explosion was to life on Earth, the Great Voluntaristic Divergence will be to the cosmos at large. And if it is not terrestrial life that goes on to this destiny, we can hope that life originating from some other biosphere fulfills this function, whether according to evolutionary or technological time scales.

The transcending of planetary limitations is of a piece with the transcending of our biological limitations: it is a necessary step toward cosmic viability, but it is not the final step. On the contrary, the transcendence of human limitations and of planetary limitations together may be considered the first step toward the cosmic viability of life originating in a planetary biosphere. Moreover, these two forms of transcendence are essentially one, as the peculiar limitations of human beings are a function of the terrestrial history of our species; human limitation is simply another planetary limitation to overcome, one limitation among many. This process of transcendence admits of no boundary and converges on infinite time.

In Who will read the Encyclopedia Galactica? I already began to suggest these themes of infinite history, and the possibility of beings with an infinite lifespan who could quite literally think the infinite thoughts that are denied human beings as we exist (and think) today. Dyson’s eternal intelligences are the distant descendents of the transhuman beings we see in our own future; or, to see it from a different perspective, the transhuman beings of our future, that some may perhaps see as the fulfillment of technological maturity, are a bridge over an abyss, and on the other side of the abyss are Dyson’s eternal intelligences.

Finally, in regard to an infinite future, it should be noted that the relation of absolute transhumanism to the absolute infinite by way of the reflection principle will also hold of infinitistically conceived history and cosmology in relation to the absolute infinite by way of the reflection principle: all that I have suggested here in relation to transhumanism and the reflection principle applies, mutatis mutandis, to infinitistic historiography and infinitistic cosmology. We will of course ask ourselves whether this infinite historiography is a reflection of a temporal absolute (an absolute temporal infinity, itself a reflection of the absolute infinite), and, if you like to think of a temporal absolute as the eternal, you have arrived at the Platonic formulation that time is the moving image of eternity. [23]

– – – – – – – – – – – – – – – – – – – – – – – – – – – –


[1] It has become fashionable in some quarters to dismiss human experience and its apparent unity as a “user illusion,” but if it is an illusion, it is a robust and persistent illusion that can be studied in some degree of detail without the loss of its unity. Indeed, it is more difficult for most of us to imagine our experience as fragmented than as unified, simply because we have never experienced anything other than the unified stream of conscious experience, however heavy-handedly our brains must intervene in order to impose this order on experience—but this cognitive component, too, is another contributing element to the final synthesis of experience. There are, of course, many forms of pathological experience, and the books of Oliver Sacks are replete with remarkable anecdotes of individuals who fail to experience the world as most of us do. The existence of these exceptions in no sense disproves the rule.

[2] If the transition is a transition to the realization of some existential risk—extinction, permanent stagnation, flawed realization, or subsequent ruination—rather than to a viable spacefaring future, the role of the agents in this transition is no less interesting. Indeed, it could be argued that it will be the experience of the agents of history, collectively and individually, that will determine the overall course of history when it comes to the bifurcation between the viability or ultimate non-viability of civilization.

[3] I am not making the claim that this is the only way to define transhumanism, but only that it is an interesting and potentially fruitful way of defining transhumanism, as I will attempt to demonstrate below. Transhumanism might also be defined in terms of humanity beyond the prediction wall. This, too, might be a fruitful approach to transhumanism, but I will reserve its exposition for another time.

[4] Milton, Paradise Lost, Book II, lines 237-246.

[5] A slightly different thought experiment could consist in being presented with distant descendents modified by different selection pressures rather than through technological intervention.

[6] Quoted in “Cantor’s Views on the Foundations of Mathematics” by Walter Purkert, in Ideas and Their Reception: Proceedings of the Symposium on the History of Mathematics, edited by David E. Rowe and John McCleary, Boston et al.: Academic Press, 1989, p. 56.

[7] The reader who wishes to learn more about the theory of transfinite numbers may profitably consult any number of textbooks on set theory. A good introduction is Everything and More: A Compact History of Infinity by David Foster Wallace—yes, the novelist.

[8] In regard to any human faculty converging upon the infinite, there is a passage from Gödel that I have quoted many times (cf. Gödel’s Lesson for Geopolitics, Evolutionary Transcendence, and Addendum on Technological Unemployment): “Turing… gives an argument which is supposed to show that mental procedures cannot go beyond mechanical procedures. However, this argument is inconclusive. What Turing disregards completely is the fact that mind, in its use, is not static, but is constantly developing, i.e., that we understand abstract terms more and more precisely as we go on using them, and that more and more abstract terms enter the sphere of our understanding. There may exist systematic methods of actualizing this development, which could form part of the procedure. Therefore, although at each stage the number and precision of the abstract terms at our disposal may be finite, both (and, therefore, also Turing’s number of distinguishable states of mind) may converge toward infinity in the course of the application of the procedure.” (“Some remarks on the undecidability results,” Italics in original, in Gödel, Kurt, Collected Works, Volume II, Publications 1938-1974, New York and Oxford: Oxford University Press, 1990, p. 306) In this way, any human faculty that transcends its limitations might be said to converge toward infinity.

[9] Azriel Lévy “Principles of reflection in axiomatic set theory” Fundamenta Mathematicae, T. XLIX, 1960, pp. 1-10. Lévy cites a paper by Tarski and Vaught, “Arithmetical extensions of relational systems,” Compositio Mathematica, 13 (1957), pp. 81-102, noting that reflection principles are related to arithmetical equivalence and arithmetical extension. This line of thought emerges out of model theory, and the Löwenheim-Skolem theorem is a continual point of reference. On the Löwenheim-Skolem theorem and model theory cf. Badesa’s book The Birth of Model Theory: “Today, we use the name Löwenheim-Skolem theorem to refer to all the theorems that guarantee that if a set of formulas has a model of a particular cardinality, it also has a model of some other cardinality.” (Calixto Badesa, The Birth of Model Theory: Löwenheim’s Theorem in the Frame of the Theory of Relatives, Princeton and Oxford: Princeton University Press, 2004, p. 143)
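For readers who want the formal statement behind this note, the kind of reflection principle Lévy studies can be put compactly. The following is the standard first-order reflection schema from the set-theoretic literature (my gloss for reference, not a quotation from Lévy's paper): for each formula there is a level of the cumulative hierarchy that already "reflects" its truth.

```latex
% Reflection schema: one instance for each formula
% \varphi(x_1, \dots, x_n) of the language of set theory.
% ZF proves that for any parameters there is an ordinal \alpha,
% with the parameters in V_\alpha, such that \varphi holds in the
% universe iff its relativization to V_\alpha holds:
\forall x_1 \dots \forall x_n \, \exists \alpha \,
  \Bigl( \{ x_1, \dots, x_n \} \subseteq V_\alpha \;\wedge\;
    \bigl( \varphi(x_1, \dots, x_n) \leftrightarrow
           \varphi^{V_\alpha}(x_1, \dots, x_n) \bigr) \Bigr)
```

This is the precise sense in which properties of the universe as a whole are reflected downward into a set-sized domain, the formal analogue of the essay's use of "reflection."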

[10] On alternative formulations cf. Hilary Putnam, Philosophy of Logic, New York: Harper & Row, 1971: “One group of questions which I might have considered has to do with the existence of what I might call ‘equivalent constructions’ in mathematics. For example, numbers can be constructed from sets in more than one way. Moreover, the notion of set is not the only notion which can be taken as basic; we have already indicated that predicative set theory, at least, is in some sense intertranslatable with talk of formulas and truth; and even the impredicative notion of set admits of various equivalents: for example, instead of identifying functions with certain sets, as I did, I might have identified sets with certain functions. My own view is that none of these approaches should be regarded as ‘more true’ than any other; the realm of mathematical fact admits of many ‘equivalent descriptions’: but clearly a whole essay could have been devoted to this.” (pp. 75-76)

[11] Hao Wang, A Logical Journey: From Gödel to Philosophy, p. 280. In Wang’s books it is nearly impossible to tell where Wang leaves off and Gödel begins, as Wang’s books are assembled from notes he took based on conversations with Gödel, so that many of the ideas are Gödel’s, but expressed in Wang’s terms (much like Ludwig Landgrebe’s reworking of Husserl’s notes in the manuscript that became Experience and Judgment). The closure principle, in turn, is defined by Wang as follows: “If the universe of sets is closed with respect to certain operations, there exists a set that is similarly closed. This implies, for example, the existence of inaccessible cardinals and of inaccessible cardinals equal to their index as inaccessible cardinals.” (Wang, Ibid.)
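Since Wang's gloss on the closure principle turns on inaccessible cardinals, the standard definition may be useful for reference (the definition is textbook material, not drawn from Wang):

```latex
% \kappa is (strongly) inaccessible iff it is uncountable, regular,
% and a strong limit:
\kappa > \aleph_0, \qquad
\operatorname{cf}(\kappa) = \kappa, \qquad
\forall \lambda < \kappa \;\; 2^{\lambda} < \kappa .
```

If such a cardinal exists, then the level of the cumulative hierarchy it indexes is itself a model of ZFC, which is the sense in which a universe closed under certain operations yields a set that is similarly closed.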

[12] Tiles, Mary, The Philosophy of Set Theory: An Historical Introduction to Cantor’s Paradise, Oxford: Basil Blackwell, 1991, p. 158. “ZF” is an abbreviation for “Zermelo-Fraenkel,” which identifies by the names of the responsible mathematicians the most widely familiar axiomatization of set theory.

[13] The comprehension principle is an idea from naïve set theory that any property defines a set. The Russell paradox is that it is impossible to form a set that is the set of all sets not members of themselves. In order to avoid such contradictions, limitations are placed on the comprehension principle, as, for example, with the separation axiom (also called the subset axiom). D. C. Makinson characterizes the separation axiom as follows: “…every property marks off a subset of any given set. In other words, given any property expressible in the notation of our system of set theory, and any given set, there is a set whose elements are exactly the elements of the first set that possess the property.” (Topics in Modern Logic, London: Methuen & Co Ltd, 1973, p. 81) Different formalizations of set theory achieve this end by different means. (One way to narrate the history of set theory would be to present it as the many and different attempts to limit comprehension while retaining as many sets as possible).
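The contrast this note describes can be put in symbols; these are the textbook formulations, added here for reference (not quoted from Makinson):

```latex
% Unrestricted (naive) comprehension: any property \varphi yields a set.
\exists y \, \forall x \, \bigl( x \in y \leftrightarrow \varphi(x) \bigr)

% Russell's paradox: instantiate \varphi(x) as x \notin x. For the
% resulting set R = \{ x : x \notin x \} we obtain the contradiction
%   R \in R \leftrightarrow R \notin R.

% Separation (subset axiom): \varphi may only carve a subset out of an
% already given set z, which blocks the paradox:
\forall z \, \exists y \, \forall x \,
  \bigl( x \in y \leftrightarrow ( x \in z \wedge \varphi(x) ) \bigr)
```

With separation in place, running Russell's argument no longer yields a contradiction; it merely shows that there is no set of all sets.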

[14] This is the same Feuerbach attacked with characteristic violence by Marx in the famous “Theses on Feuerbach.” Max Stirner was also a critic of Feuerbach.

[15] “Out of the crooked timber of humanity, no straight thing was ever made.” Immanuel Kant, Idea for a General History with a Cosmopolitan Purpose (1784), Proposition 6.

[16] If human beings were to arrest human evolution while the remainder of the biosphere continued to evolve, eventually these two biologies would diverge and human beings would no longer be fit to live on Earth. Hence the dystopian scenario in the next sentence would have to be extended to the whole of the biosphere if it were to function over the long term.

[17] A more felicitous rendering might employ a single “s”: transpeciesism, as in “wherever,” in which one “e” is eliminated when the words are joined. Alternatively, one might employ a hyphenated form: trans-speciesism.

[18] This is a formulation frequently employed by Joseph Campbell; I do not know if the idea is original to Campbell, but cf. my Myth, Ritual, and Social Consensus.

[19] The Cambridge Dictionary of Human Biology and Evolution, by Larry L. Mai and Marcus Young Owl; the authors might also have noted that adaptive radiation can also comprise diversity of behavior, i.e., behavioral adaptation.

[20] A speech given in 1963, reprinted in Man and His Future edited by Gordon Wolstenholme, with 8 illustrations. Little, Brown and Company, Boston. 1963.

[21] Christopher Wills, Children of Prometheus: The Accelerating Pace of Human Evolution, Perseus Books, 1998, pp. 53-54.

[22] Nietzsche, Thus Spake Zarathustra, Prologue, 4. In the original German: “Der Mensch ist ein Seil, geknüpft zwischen Thier und Übermensch, —ein Seil über einem Abgrunde.”

[23] Cf. Plato, Timaeus, 37.


Into Plutonian Depths

by Paul Gilster on May 28, 2015


The image of Pluto on the right — an artist’s impression, to be sure (credit: NASA, ESA and G. Bacon, STScI) — suggests Ganymede to me more than Pluto, but we’ll have to wait and see what New Horizons turns up as it continues to close on its target. It’s worth thinking about how our views of this place have changed over time. The world Clyde Tombaugh found in 1930 seemed small enough, but a fraction of its light was actually coming from its still smaller moon, which wouldn’t be discovered until USNO astronomer James Christy nailed it in 1978.

Gregory Benford depicted Pluto with a nitrogen sea in a 2006 novel called The Sunborn, one in which he explored the possibility of life at -185 degrees Celsius, the lifeforms themselves the result of an experiment by heliopause beings who drew energy from magnetic interactions far from the Sun. Even more speculative is Stephen Baxter’s story “Goose Summer” (from the Vacuum Diagrams collection of 2001), in which Plutonian life physically interacts with Charon, the latter ‘seeding’ the Plutonian surface.

But the real thing beckons. We now have images taken between May 8 and 12, downlinked last week. Here we’re looking at Pluto from a distance of 77 million kilometers using the now familiar Long-Range Reconnaissance Imager (LORRI) aboard New Horizons. The differences between this view and what we saw in April are striking. We’re now 35 million kilometers closer and have twice the pixels to work with, aided by image deconvolution techniques to tease out detail.

Here’s the May 12 imagery as contrasted with April 16 — a click on the New Horizons link will show you two other photo sets contrasting the earlier and later views.


Image: These images show Pluto in the latest series of New Horizons Long Range Reconnaissance Imager (LORRI) photos, taken May 8-12, 2015, compared to LORRI images taken one month earlier. All images have been rotated to align Pluto’s rotational axis with the vertical direction (up-down), as depicted schematically in the center panel. Between April and May, Pluto appears to get larger as the spacecraft gets closer, with Pluto’s apparent size increasing by approximately 50 percent. Pluto rotates around its axis every 6.4 Earth days, and these images show the variations in Pluto’s surface features during its rotation. All of the images are displayed using the same linear brightness scale.

The deconvolution method used to sharpen the images can, the New Horizons team reminds us, sometimes create artifacts, meaning that we’ll need to have the smaller details of these images confirmed as New Horizons gets closer. According to mission project scientist Hal Weaver (JHU/APL), as quoted in the New Horizons update linked to above, we’ll be seeing images with 5000 times better resolution when we reach closest approach during the July 14 flyby.

Are we looking at a polar cap in this imagery? Mission principal investigator Alan Stern comments:

“These new images show us that Pluto’s differing faces are each distinct; likely hinting at what may be very complex surface geology or variations in surface composition from place to place. These images also continue to support the hypothesis that Pluto has a polar cap whose extent varies with longitude; we’ll be able to make a definitive determination of the polar bright region’s iciness when we get compositional spectroscopy of that region in July.”

We’re just seven weeks away from the flyby, with Stern now moving to the east coast for the encounter operations through late July. As excitement builds, a Pluto Safari smartphone app has appeared, available for iOS as well as Android. Produced by Simulation Curriculum Corp., the free app offers interactive views of the locations of Pluto and New Horizons, along with a timeline of New Horizons mission milestones and the latest news about the spacecraft. Pluto Safari is available here in its iOS version and here for the Android iteration.


Meanwhile, looking over my collection of old science fiction magazines, I enjoyed tracking down Stanton A. Coblentz’ story “Into Plutonian Depths,” which ran in Wonder Stories Quarterly in the Spring 1931 issue. To my knowledge, this was the first story written with knowledge of Clyde Tombaugh’s discovery. Using a ‘gravity insulator,’ our protagonist and Stark, his science mentor, set out for the most distant planet known. Coblentz refers to ‘the trans-Neptunian planet’ found by Tombaugh and conjures up mystery in the name Pluto. As Stark exclaims:

“Think of it, a billion miles or so beyond Neptune, a globe perhaps no larger than the earth, lost in the blackness of the outer void, its years longer than our centuries, its seasons longer than our lives! What stories it would be able to tell! Are there any living creatures there? Were any living beings ever able to endure the terror of its sunless, frozen plains? Would we find the imprint of lost races upon its shores? — races that flourished while the planet was heated from within, but that have long ago fallen in the struggle with the cold?”

And so on. It’s a lively tale, a bit mesmerizing in its day (though it goes on far too long), and it mimics the approach of New Horizons as it describes the travelers’ view of “silvery white plains and its broken and enormous mountain ranges, whose snowy summits were offset by sheer black escarpments and ravines as hideous to contemplate as the craters of the moon…” The inhabitants of the ninth planet turn out to be nothing like Benford’s or Baxter’s creations, and the tale turns into something closer to Rider Haggard than modern SF as it winds its way to a conclusion.

But still, what fun to think about Pluto as it was first envisioned in fiction while we have a spacecraft moving in on the Pluto/Charon system at 1.2 million kilometers per day. Each new set of New Horizons images is going to sharpen our view of a world. If only Stanton Coblentz, and for that matter Clyde Tombaugh himself, were able to watch this encounter unfold.



LightSail Glitch: Hoping for a Reboot

by Paul Gilster on May 27, 2015

The Planetary Society’s LightSail won’t stay in orbit long once its sail deploys, a victim of inexorable atmospheric drag. But we’re all lucky that in un-deployed form — as a CubeSat — LightSail can maintain its orbit for about six months. Some of that extended period may be necessary given the problem the spacecraft has encountered: After returning a healthy stream of data packets over its first two days of operations, the solar sail mission has fallen silent.

Jason Davis continues his reporting on LightSail, with the latest update on the communications problem now online. We learn that the suspected culprit for LightSail’s silence is a simple software glitch. Everything else looked good when communications ceased, with power and temperature readings stable. Davis explains that during normal operations, LightSail transmits a telemetry beacon every 15 seconds. The Linux-based flight software writes data on each transmission to a .csv file, a spreadsheet-like record of ongoing procedures.

This file continues to grow, and when it reaches a certain size, trouble can happen:

As more beacons are transmitted, the file grows in size. When it reaches 32 megabytes—roughly the size of ten compressed music files—it can crash the flight system. The manufacturer of the avionics board corrected this glitch in later software revisions. But alas, LightSail’s software version doesn’t include the update.

Late Friday, the team received a heads-up warning them of the vulnerability. A fix was quickly devised to prevent the spacecraft from crashing, and it was scheduled to be uploaded during the next ground station pass. But before that happened, LightSail fell silent. The last data packet received from the spacecraft was May 22 at 21:31 UTC (5:31 p.m. EDT).


Let’s hope we’ll still see a deployed LightSail, as in the image above. But anyone who has stared at a PC frozen into immobility knows the feeling that LightSail’s ground controllers must have experienced. The machine is not responding, which means it’s time for a reboot. A manual reboot being out of the question, a reboot command from the ground has to be used, and more than one has been sent. In fact, Cal Poly has been transmitting a new reboot command every few ground station passes. So far, no luck.

A fix may still be in the works from a natural source, but first, the situation led to a bit of humor, in the form of an email Davis received, as recorded in this tweet:


Davis also suggests a LightSail successor to be called BourbonSat, a flight spare that sits in each team member’s kitchen to offer quick stress relief. The humor is edgy, but that’s because we may now be reliant on a hands-off fix: charged particles striking an electronic component in just the right way to cause a reboot. If that sounds extreme, be aware that the phenomenon is not unusual in CubeSats. In fact, Cal Poly’s experience is that most CubeSats reboot within their first three weeks of operations. You can place this in the context of the 28-day sail deployment timeline and see that we might come out just fine.

What happens next depends upon when — and if — that reboot occurs, assuming the continued reboot commands from the ground are not effective. Various software fixes are being tested to see which could be inserted after contact is restored, so that the troublesome .csv file doesn’t cause further problems. Davis also says that when LightSail comes back online, the team will probably begin a manual sail deployment as soon as possible. Let’s make sure, in other words, that when we have a communicating spacecraft, we do what we sent it out there to do.



Exoplanet Exploration Organization Proposed

by Paul Gilster on May 26, 2015

We’ve recently looked at the role of small spacecraft, inspired in part by The Planetary Society’s LightSail, a CubeSat-based sail mission that launched last week. It’s interesting in that regard to consider small missions in the exoplanet realm. ExoplanetSat, for example, is a 3-unit CubeSat designed at MIT as a mission to discover Earth-sized exoplanets around nearby stars. Here the beauty of the CubeSat is obvious: The platform is low-cost, the development time is relatively short, and there are frequent launch opportunities. Up to 100 ExoplanetSats are planned.

Pulling big benefits from small packages is not new, as the example of the Canadian MOST mission (Microvariability and Oscillations of STars) reminds us. MOST was the first mission dedicated to asteroseismology, to be followed by CoRoT (COnvection ROtation and planetary Transits) and then Kepler. Now we have a proposal for what is being called the United Quest for Exoplanets (UniQuE), which grows out of work performed by an interdisciplinary team hosted by the International Space University. Meeting in Montreal last summer, the group produced a report giving its recommendations to enhance the entire field of exoplanet research.


Michael Michaud passed along an article on this work called “Going Global with Exoplanets” that ran in Space Times, which is the newsletter of the American Astronautical Society. I want to dig into it a bit because one of the larger questions here is how to generate international support for planet-hunting, and as Michaud reminds me, this fits in with what could be a useful collaboration between the exoplanet community and advocates of interstellar flight.

The Montreal team, which was sponsored by NASA and Lockheed Martin, consisted of 28 participants from twelve countries. Its report proposes the creation of the Exoplanet eXploration Organization (EXO), drawing on the fact that while other organizations include exoplanets as part of their mission, most have wider agendas, or are focused solely on a specific group of people. EXO would from the outset take an international perspective that crosses numerous areas of expertise. One part of its charter, which I’ll return to in a moment, is the above-mentioned UniQuE, the creation of small satellites to conduct exoplanet work, just the kind of thing MIT has in mind for ExoplanetSat.

Image: Kepler-16b envisioned in a JPL ‘travel poster’ showing a sky with two suns from the planet that orbits both. Part of a modest but effective public outreach effort from JPL. The EXO proposal similarly looks at ways to heighten public interest in exoplanets and deep space.

But characterizing habitable planets is a multidisciplinary endeavor whose practitioners are located around the globe. If we are to move to characterizing exoplanet atmospheres and pushing into increasingly detailed observations of these worlds, we’re probably going to be in a cost range where funding by a single nation or agency is problematic. One way to encourage international cooperation is through better information exchange and the development of better databases.

Data availability is, the article argues, an area that needs improvement. We do have rich databases like the Exoplanet Data Explorer (California Planet Survey), the Extrasolar Planets Encyclopedia (L’Observatoire de Paris), MIT’s Open Exoplanet Catalogue and NASA’s Exoplanet Archive, but the Montreal team finds the presentation of data across these groups to be inconsistent, both in the data provided and in the metadata used to search the material. It argues that EXO could be of service:

One EXO initiative would be to support developing and promoting a common data and metadata format for publicly available data. The Extended Extrasolar Planets Encyclopedia (EEPE) aims at making new data-fields, links to other databases, and raw data easier to add to the database. The rationale behind proposing EEPE and a structured format is to maximize the use of existing infrastructure, and make it easier to access and update data.

Other possibilities include providing consulting services for exoplanet projects and pooling fundraising expertise, as well as helping researchers promote exoplanet projects to government agencies. Like The Planetary Society, EXO is envisioned as an advocate for its areas of interest, one that also serves as a bridge between the public and the scientific community. Thus educational projects to reach non-specialized audiences are a factor, and so are crowdsourcing efforts to consolidate, sort and analyze the vast amount of incoming data:

While much of it can be handled by computer algorithms, there are various phenomena that need human investigation. At present, raw data is gathered much faster than it is studied. One example is a website that provides a crowdsourcing platform: after completing a brief tutorial, participants analyze Kepler data by marking potential candidates while discarding uninteresting light curves. Another example is NASA-sponsored OSCAAR (Open Source differential photometry Code for Accelerating Amateur Research), which allows amateur astronomers to contribute light curves from their own observations. These participative initiatives provide people with the ability to engage in the research community. Crowdsourcing, however, is only as powerful as its participants, which emphasizes the need for effective international outreach programs.


What is being proposed is cross-pollination between the exoplanet community and numerous other areas of public interest in space. I like the idea, for example, of offering massive open online courses (MOOCs) as a way not only of reaching the public but also of developing open source tools like lesson plans for teachers and software for amateur astronomers, who would use the EXO platform to connect with each other’s work across borders and disciplines.

But back to small satellites. The low cost of CubeSats, and the broad involvement they invite from space agencies, universities and small laboratories, can be leveraged here. Think of CubeSats as a route into space for countries that lack the resources for more costly missions. The UniQuE mission proposed here has much in common with ExoplanetSat in that it involves the creation not of single satellites but of a constellation of low-cost small spacecraft whose mission would be the characterization of the atmospheres of previously detected nearby planets:

The UniQuE team envisioned a standardized design based on a 15 kg 12 unit CubeSat layout carrying a space-proven near-IR mini-spectrometer covering the aforementioned waveband with a sensitivity range compatible with this mission. EXO would provide an overall baseline design and participating entities would have the freedom to customize and size all relevant subsystems so long as overall mission requirements are met, thereby allowing freedom for the inclusion of innovative concepts. The ideal constellation is composed of three to six pairs of satellites on a dawn-dusk sun synchronous orbit, which are launched as piggyback or secondary payloads.

What the Montreal team envisions is that all the UniQuE satellites would be required to observe an approaching planetary transit of a nearby star, while during the remaining time the owners of the individual satellites could use them for their own scientific purposes. Results would be disclosed across the range of participating countries to provide a testbed for collaborative research and the development of ever more sophisticated spacecraft.

To read more about the thinking behind EXO, you can access the final report here. It could be argued that we have many of the tools the report recommends already in place, and that researchers currently interact through a variety of networked venues. But I think the development of an exoplanet organization with an international focus and a determined public outreach could consolidate some of these gains while providing useful collaborative tools. Moreover, the public engagement built into this kind of organization could benefit the spread of deep space ideas as we ponder future programs of exploration.



A Mass-Radius Relationship for ‘Sub-Neptunes’

by Paul Gilster on May 22, 2015

The cascading numbers of exoplanet discoveries raise questions about how to interpret our data. In particular, what do we do about all those transit finds where we can work out a planet’s radius and need to determine its mass? Andrew LePage returns to Centauri Dreams with a look at a new attempt to derive the relationship between mass and radius. Getting this right will be useful as we analyze statistical data to understand how planets form and evolve. LePage is the author of an excellent blog on exoplanetary science called Drew ex Machina, and a senior project scientist at Visidyne, Inc. specializing in the processing and analysis of remote sensing data.

By Andrew LePage


As anyone with even a passing interest in planetary studies can tell you, we are witnessing an age of planetary discovery unrivaled in the long history of astronomy. Over the last two decades, thousands of extrasolar planets have been discovered using a variety of techniques. The most successful of these to date in terms of sheer number of finds is the transit method – the use of precision photometric measurements to spot the tiny decrease in a star’s brightness as an orbiting planet passes directly between us and the star. The change in the star’s brightness during the transit allows astronomers to estimate the size of the planet relative to the star while the time between successive transits allows the orbital period of the planet to be determined. Combined with information about the properties of the star being observed, other characteristics can be calculated such as the actual size of the planet and its orbit. The most successful campaign to date to search for planets using the transit method has been performed using NASA’s Kepler spacecraft, launched in 2009.
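The geometry behind that size estimate is simple: to first order, the fractional dip in brightness equals the square of the planet-to-star radius ratio, so the planet’s radius follows directly from the measured depth and the star’s radius. A quick sketch of the arithmetic, ignoring limb darkening and grazing transits:

```python
import math

R_SUN_KM = 695_700.0   # nominal solar radius
R_EARTH_KM = 6_371.0   # mean Earth radius

def planet_radius_km(transit_depth, stellar_radius_km):
    """Planet radius from transit depth: depth ~ (Rp / R*)^2, so Rp = R* * sqrt(depth)."""
    return stellar_radius_km * math.sqrt(transit_depth)

# An Earth-sized planet crossing a Sun-sized star dims it by only ~84 parts per million,
# which is why photometry as precise as Kepler's is needed for such detections.
depth = (R_EARTH_KM / R_SUN_KM) ** 2
```

Feeding that depth back through `planet_radius_km(depth, R_SUN_KM)` recovers Earth’s radius by construction; with real data, the star’s radius is the dominant source of uncertainty.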

One of the other important bulk properties of a planet that is of interest to scientists is its mass. Unfortunately, the transit method is typically unable to supply us with this information except in special circumstances where planets in a system strongly interact with each other to produce measurable variations in the timing or duration of their transits. The transit timing variation (TTV) or transit duration variation (TDV) methods can be used to estimate the masses of the planets of a system including non-transiting planets that might be present. Based on an analysis of Kepler results to date, however, this method can be used in only about 6% of planetary systems that produce transits.

A more widely applicable method to determine the mass of an extrasolar planet is through the precision measurement of a star’s radial velocity to detect the reflex motion caused by the orbiting planet. Combined with information from transit observations as well as the star’s properties, it is possible to calculate the actual mass of a planet and further refine its orbital properties. Unfortunately, Kepler has discovered thousands of planets, and making precision radial velocity measurements takes a lot of time on the limited number of busy telescopes equipped to make the required observations. In addition, many of the stars observed by Kepler are too dim, or their planets too small, for the current generation of instruments to detect radial velocity variations above the noise. This is especially a problem for sub-Neptune size planets, including Earth-size terrestrial planets. Taken as a whole, only a small minority of Kepler’s finds currently have had their masses measured.
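The detection problem described above is easy to quantify with the standard textbook approximation for the radial velocity semi-amplitude of a circular, edge-on orbit — a back-of-the-envelope sketch, not any survey’s actual pipeline:

```python
def rv_semi_amplitude_ms(planet_mass_me, period_years, star_mass_msun):
    """Approximate stellar reflex semi-amplitude (m/s) for a circular, edge-on orbit:
    K ~ 28.43 m/s * (Mp / MJup) * (P / 1 yr)^(-1/3) * (M* / Msun)^(-2/3)."""
    M_JUP_IN_ME = 317.8  # Jupiter's mass in Earth masses
    return (28.4329 * (planet_mass_me / M_JUP_IN_ME)
            * period_years ** (-1.0 / 3.0) * star_mass_msun ** (-2.0 / 3.0))

# An Earth-mass planet in a one-year orbit around a Sun-like star tugs the star
# at only ~9 cm/s, far below the noise floor of most current spectrographs.
k_earth = rv_semi_amplitude_ms(1.0, 1.0, 1.0)
```

The approximation neglects the planet’s mass relative to the star’s, which is negligible for the sub-Neptune worlds at issue here.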

Puzzling Out a Planetary Mass

While astronomers continue to struggle to measure the masses of thousands of individual extrasolar planets found by Kepler, there have been efforts to derive a mass-radius relationship so that the mass of a planet with a known radius can at least be estimated. In addition to being useful for evaluating the level of accuracy required for detection using radial velocity measurements or other methods, such mass estimates are also valuable for scientists wishing to use Kepler radius and orbit data in statistical studies of planetary properties, dynamics, formation and evolution. Over the past few years, there have been various investigators who have attempted to derive a planetary mass-radius relationship as information on the mass and radius of known planets has expanded. These relationships have taken a mathematical form known as a power law, M = C·R^γ, where M is the mass of the planet (in terms of Earth masses, ME), R is its radius (in terms of Earth radii, RE) and C and γ are constants determined by analysis.
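In code, estimating a mass from such a power law is a one-liner; the constants C and γ are simply whatever a particular fit to the mass-radius data produces:

```python
def mass_from_radius(radius_re, c, gamma):
    """Deterministic power-law estimate: mass in Earth masses from radius in Earth radii."""
    return c * radius_re ** gamma
```

With the best-fit values discussed later in this article (C = 2.7, γ = 1.3), a 2 RE planet comes out near 6.6 ME.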

The latest work to derive a mass-radius relationship for sub-Neptune size planets (i.e. planets whose radii are less than 4RE) is a paper by Angie Wolfgang (University of California – Santa Cruz), Leslie A. Rogers (California Institute of Technology), and Eric B. Ford (Pennsylvania State University), which they recently submitted for publication in The Astrophysical Journal. These sub-Neptune size worlds are of particular interest to the scientific community since they span the size range between the Earth and Neptune where no Solar System analogs exist to provide guidance for deriving a mass-radius relationship.

Earlier work over the last few years on the planetary mass-radius relationship relied on least squares regression analysis of a set of planetary radius and mass measurements – a fairly straightforward mathematical method used to determine the constants of an equation that provides the best fit to a set of data points. Unfortunately, this classic method has some drawbacks. It does not properly take into account the uncertainty in the independent variable (i.e. the planet radius, in this case) or instances where the planet has not been detected using precision radial velocity measurements and only an upper limit of the mass can be derived. Another issue is that the least squares regression method assumes a deterministic relationship where a particular planetary radius value corresponds to a unique mass value. In reality, planets with a given radius can have a range of different mass values, in part reflecting the variation in planetary composition running from massive rocky planets with large iron-nickel cores to less massive, volatile-rich planets with deep atmospheres. These variations are expected to be especially important in sub-Neptune-class worlds.

A Bayesian Approach to the Mass/Radius Problem

Instead of using the least squares regression method, Wolfgang, Rogers and Ford evaluated their data using a hierarchical Bayesian technique which allowed them not only to derive the parameters for a best fit of the available data, but also to quantify the uncertainty in those parameters as well as the distribution of actual planetary mass values. Using their approach, they have derived a probabilistic mass-radius relationship where the most likely mass and the distribution of those values are determined. The team considered a total of 90 extrasolar planets with known radii less than 4 RE whose masses have been measured or constrained using radial velocity or TTV methods. Neither unconfirmed planets nor circumbinary planets were considered, to keep the sample as homogeneous as possible. The team also truncated the mass distribution to physically plausible values that were no less than zero (since it is physically impossible to have a negative mass) and no greater than the mass of a planet composed of iron (since it is unlikely for a planet to have a composition dominated by any element denser than iron).


Image: This plot shows the available mass and radius data (and associated error bars) used in the latest analysis of the mass-radius relationship for sub-Neptune size planets. Various fits to these data are shown, including an earlier analysis by Lauren Weiss and Geoffrey Marcy (black dashed line) as well as fits for radii <8 RE, <4 RE and <1.6 RE (solid colored lines). (credit: Wolfgang et al.)

The detailed analysis of the dataset by Wolfgang, Rogers and Ford found that the subset of extrasolar planets whose masses were measured using the TTV method has a definite bias towards lower density planets. This bias had been suspected since a low density planet will have a larger radius than a denser planet with the same mass. And all else being equal, a larger planet is more likely to be detected using the transit method than a smaller planet. When only considering the sample of extrasolar planets with masses determined using precision radial velocity measurements, this team found that the best fit for the data set was a power law with C = 2.7 and γ = 1.3 (i.e. M = 2.7 R^1.3). Based on their statistical analysis, Wolfgang, Rogers and Ford found that the data were consistent with a Gaussian or bell-curve distribution of actual planet masses with a sigma of 1.9 ME at any given radius value. Just as had been suspected, planets with radii less than 4 RE display a range of compositions that is reflected as a fairly broad distribution of actual mass values.
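To see what a probabilistic (rather than deterministic) relationship means in practice, here is a sketch that draws a plausible mass for a given radius: a Gaussian centered on the best-fit power law with the 1.9 ME scatter, rejecting unphysical negative draws. The paper’s additional upper truncation at the mass of a pure-iron planet is omitted for brevity, and the function name is illustrative:

```python
import random

def draw_mass_me(radius_re, c=2.7, gamma=1.3, sigma_me=1.9, rng=None):
    """Sample one mass (Earth masses) from the probabilistic relation: a Gaussian
    centered on C * R^gamma with scatter sigma, truncated at zero mass."""
    rng = rng or random.Random()
    mean = c * radius_re ** gamma
    while True:
        m = rng.gauss(mean, sigma_me)
        if m > 0:  # rejection step enforces the physical lower bound
            return m
```

Averaging many draws for a 2 RE planet recovers the power-law mean near 6.6 ME, while individual draws scatter across the compositional range the fit describes.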

In earlier work by Rogers, it was found that there seems to be a transition in planet composition at a radius no larger than 1.6 RE, above which planets are unlikely to be dense, rocky worlds like the Earth and much more likely to be less dense, volatile-rich planets like Neptune (see The Transition from Rocky to Non-Rocky Planets in Centauri Dreams for a full discussion of this work). For the sample of planets considered here with radii less than 1.6 RE, the team found that C = 1.4 and γ = 2.3. Unfortunately, the sample considered by Wolfgang, Rogers and Ford has little good data for planets in this size range, and the masses with their large uncertainties tend to span the full range of physically plausible values. As a result, this analysis cannot rule out the possibility of a deterministic mass-radius relationship where there is only a very narrow range of actual planet masses for any particular radius value. Recent work by others suggests that these smaller planets tend to have a more Earth-like, rocky composition, which could be characterized with a more deterministic mass-radius relationship (see The Composition of Super-Earths in Drew Ex Machina for a discussion of this work).

This new work by Wolfgang, Rogers and Ford represents the best attempt to date to determine the mass-radius relationship for planets smaller than Neptune. While more data of better quality for planets in this size range are needed, it does appear that sub-Neptunes can have a range of different compositions and therefore possess a range of mass values at any given radius. This new relation will be most useful to scientists hoping to get the maximum benefit out of the ever-growing list of Kepler planetary finds where only the radius is known. Much more data will be required to determine more accurately the mass-radius distribution of planets with radii less than 1.6 RE and more precisely characterize the transition from large, rocky Earth-like planets to larger, volatile-rich planets like Neptune.

The preprint of the paper by Wolfgang, Rogers and Ford, “Probabilistic Mass-Radius Relationship for Sub-Neptune-Sized Planets”, can be found here.