Interstellar Travel and Stellar Evolution

The stars move ever on. What seems, given the limitations of our own longevity, like a fixed arrangement of distances morphs over time into an evolving maze of galactic orbits as stars draw closer to and then farther away from each other. If we were truly long-lived, we might ask why anyone would be in such a hurry to mount an expedition to Alpha Centauri. Right now we’d have to travel 4.2 light years to get to Proxima Centauri and its interesting habitable zone planet. But 28,000 years from now, Alpha Centauri — all three stars — will have drawn to within 3.2 light years of us.

But we can do a lot better than that. Gliese 710 is an M-dwarf about 64 light years away in the constellation Serpens Cauda. For the patient among us, it will move in about 1.3 million years to within 14,000 AU, placing it well within the Oort Cloud and making it an obvious candidate for worst cometary orbit disruptor of all time. But read on. Stars have come much closer than this. [Addendum: A reader points out that some sources list this star as a K-dwarf, rather than class M. Point taken: My NASA source describes it as “orange-red or red dwarf star of spectral and luminosity K5-M1 V.” So Gliese 710 is a close call in more ways than one].

In any case, imagine another star being 14,000 AU away, 20 times closer than Proxima Centauri is right now. Suddenly interstellar flight looks a bit more plausible, just as it would if we could, by some miracle, find ourselves in a globular cluster like M80, where stellar distances, at the densest point, can be something on the order of the size of the Solar System.

Image: This stellar swarm is M80 (NGC 6093), one of the densest of the 147 known globular star clusters in the Milky Way galaxy. Located about 28,000 light-years from Earth, M80 contains hundreds of thousands of stars, all held together by their mutual gravitational attraction. Globular clusters are particularly useful for studying stellar evolution, since all of the stars in the cluster have the same age (about 12 billion years), but cover a range of stellar masses. Every star visible in this image is either more highly evolved than, or in a few rare cases more massive than, our own Sun. Especially obvious are the bright red giants, which are stars similar to the Sun in mass that are nearing the ends of their lives. Credit: NASA, The Hubble Heritage Team, STScI, AURA.

These thoughts are triggered by a paper from Bradley Hansen and Ben Zuckerman, both at UCLA, with the interesting title “Minimal Conditions for Survival of Technological Civilizations in the Face of Stellar Evolution.” The authors note the long-haul perspective: The physical barriers we associate with interstellar travel are eased dramatically if species attempt such journeys only in times of close stellar passage. Put another star within 1500 AU, dramatically closer than even Gliese 710 will one day be, and the travel time is reduced by perhaps two orders of magnitude compared with the times needed to travel under average stellar separations near the Sun today.

I find this an interesting thought experiment, because it helps me visualize the galaxy in motion and our place within it in the time of our civilization (whether or not our civilization will last is Frank Drake’s L factor in his famous equation, and for today I posit no answer). All depends upon the density of stars in our corner of the Orion Arm and their kinematics, so location in the galaxy is the key. Just how far apart are stars in Sol’s neighborhood right now?

Drawing on research from Gaia data as well as the stellar census of the local 10-parsec volume compiled by the REsearch Consortium On Nearby Stars (RECONS), we find that 81 percent of the main-sequence stars in this volume have masses below half that of the Sun, meaning most of the close passages we would experience will be with M-dwarfs. The average distance between stars in our neck of the woods is 3.85 light years, pretty close to what separates us from Alpha Centauri. RECONS counts 232 single-star systems and 85 multiple-star systems in this space.
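
For a rough sense of where a number like 3.85 light years comes from, here is a back-of-the-envelope estimate in Python using the classic mean nearest-neighbor distance for randomly distributed points, 0.554 n^(-1/3). Treating the 317 RECONS systems (232 single plus 85 multiple) as single points and ignoring census incompleteness are my simplifications, so the answer lands near, rather than exactly on, the quoted figure.

```python
import math

# RECONS census of the 10-parsec volume (numbers quoted above)
systems = 232 + 85                             # single + multiple systems
volume_pc3 = (4.0 / 3.0) * math.pi * 10.0**3   # sphere of radius 10 pc

n = systems / volume_pc3                       # number density, systems per pc^3

# Mean nearest-neighbor distance for a random (Poisson) distribution:
# <d> = 0.554 * n^(-1/3)
d_pc = 0.554 * n ** (-1.0 / 3.0)
d_ly = d_pc * 3.2616                           # 1 parsec = 3.2616 light years

print(f"number density: {n:.3f} systems per cubic parsec")
print(f"mean nearest-neighbor separation: {d_ly:.1f} light years")
# ~4.3 ly when systems are treated as points; counting individual stars
# pushes the figure down toward the ~3.85 ly quoted in the text.
```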

Hansen and Zuckerman are intrigued. They ask what a truly patient civilization might do to make interstellar travel happen only at times when a star is close by. We can’t know whether a given civilization would necessarily expand to other stars, but the authors think there is one reason that would compel even the most recalcitrant into attempting the journey. That would be the swelling of the parent star to red giant status. Here’s the question:

As mentioned above, this stellar number density yields an average nearest neighbor distance between stars of 3.85 light years. However, such estimates rely on the standard snapshot picture of interstellar migration: that a civilization decides to embark instantaneously (at least, in cosmological terms) and must simply accept the local interstellar geography as is. If one were prepared to wait for the opportune moment, then how much could one reduce the travel distance, and thus the travel time?

Maybe advanced civilizations don’t tend to make interstellar journeys until they have to, meaning when problems arise with their central star. If so, we might expect stars in close proximity at any given era — not close binaries, but stars that are merely passing and not gravitationally bound — to be the pairs between which we could spot signs of activity, perhaps artifacts in our data implying migration away from a star whose gradual expansion toward the red giant phase is making its planets less and less livable.

Here we might keep in mind that in our part of the galaxy, about 8.5 kiloparsecs out from galactic center, the density of stars is what the authors describe as only ‘modest.’ Higher encounter rates occur depending on how close we want to approach galactic center.

Reading this paper reminds me why I wish I had the talent to be a science fiction writer. Stepping back to take the ‘deep time’ view of galactic evolution fires the imagination as little else can. But I leave fiction to others. What Hansen and Zuckerman point out is that we can look at our own Solar System in these same terms. Their research shows that if we take the encounter rate they derive for our Sun and multiply it by the 4.6 billion year age of our system, it becomes likely that at some point in that span a star passed within a breathtaking 780 AU.
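
That 780 AU figure can be roughly reproduced with a simple flux argument: the rate of stellar passages within a distance d scales as π d² n v (neglecting gravitational focusing), and the expected closest approach over a time t is the d for which that rate times t equals one. A minimal sketch, taking a local density of ~0.1 stars per cubic parsec and a relative velocity of 50 km/s as assumptions consistent with the numbers quoted elsewhere in this post (the paper’s own calculation is more careful):

```python
import math

PC_M = 3.086e16          # meters per parsec
AU_M = 1.496e11          # meters per astronomical unit
YR_S = 3.156e7           # seconds per year

n = 0.1 / PC_M**3        # assumed local stellar density, ~0.1 per cubic parsec
v = 50e3                 # assumed relative velocity, 50 km/s

def closest_approach_au(t_years):
    """Expected closest approach: solve pi * d^2 * n * v * t = 1 for d."""
    t = t_years * YR_S
    d = math.sqrt(1.0 / (math.pi * n * v * t))
    return d / AU_M

print(f"over 4.6 Gyr: ~{closest_approach_au(4.6e9):.0f} AU")   # ~760 AU
print(f"over 1.0 Gyr: ~{closest_approach_au(1.0e9):.0f} AU")   # ~1600 AU
```

The one-billion-year value comes out close to the 1500 AU median quoted below, which suggests the scaling captures the essentials even though it skips gravitational focusing.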

Image: A passing star could dislodge comets from otherwise stable orbits so that they enter the inner system, with huge implications for habitable worlds. Is this a driver for travel between stars? Credit: NASA/JPL-Caltech.

Now let’s look forward. A gradually brightening Sun eventually pushes us — our descendants, perhaps, or whatever species might be on Earth then — to consider leaving the Solar System. Recent work sees this occurring when the Sun reaches an age of about 5.7 billion years. Thus the estimate for remaining habitability on Earth is about a billion years. The paper’s calculations show that within this timeframe, the median distance of closest stellar approach to the Sun is 1500 AU, with an 81 percent chance that a star will pass within 5000 AU. From the paper:

Thus, an attempt to migrate enough of a terrestrial civilization to ensure longevity can be met within the minimum requirement of travel between 1500 and 5000 AU. This is two orders of magnitude smaller than the current distance to Proxima Cen. The duration of an encounter, with the closest approach at 1500 AU, assuming stellar relative velocities of 50 km/s, is 143 years. In the spirit of minimum requirements, we note that our current interstellar travel capabilities are represented by the Voyager missions (Stone et al. 2005); these, which rely on gravity assists off the giant planets, have achieved effective terminal velocities of ~20 km/s. The escape velocity from the surface of Jupiter is ~61 km/s, so it is likely one can increase these speeds by a factor of 2 and achieve rendezvous on timescales of order a century.
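
The arithmetic in that passage is easy to check: a 1500 AU span crossed at 50 km/s relative velocity takes about 143 years, and a probe launched at twice the Voyagers’ ~20 km/s covers the same distance in under two centuries. A quick sketch (the 40 km/s probe speed is my reading of the ‘factor of 2’ above, not a figure from the paper):

```python
AU_M = 1.496e11      # meters per AU
YR_S = 3.156e7       # seconds per year

def crossing_years(distance_au, speed_km_s):
    """Time to cover a straight-line distance at constant speed."""
    return distance_au * AU_M / (speed_km_s * 1e3) / YR_S

# Encounter duration: closest approach of 1500 AU at 50 km/s relative velocity
print(f"encounter timescale: {crossing_years(1500, 50):.0f} years")   # ~142

# Travel time at roughly twice Voyager speed (~40 km/s) to 1500 and 5000 AU
for d in (1500, 5000):
    print(f"travel to {d} AU at 40 km/s: {crossing_years(d, 40):.0f} years")
```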

My takeaway on this parallels what the authors say: We can conceive of an interstellar journey in this distant era that relies on technologies not terribly advanced beyond where we are today, with travel times on the order of a century. The odds on such a journey being feasible for other civilizations rise as we move closer to galactic center. At 2.2 kiloparsecs from the center, where peak density seems to occur, the characteristic encounter distance is 250 AU over the course of 10 billion years, or an average 800 AU during a single one billion year period.

You might ask, as the authors do, how binary star systems would affect these outcomes, and it’s an interesting point. Perhaps 80 percent of all G-class star binaries will have separations of 1000 AU or less, which the authors consider disruptive to planet formation. Where technological civilizations do arise in binary systems, having a companion star is an obvious driver for interstellar travel. But single stars like ours would demand migration to another system.

We can plug Hansen and Zuckerman’s work into the ongoing discussion of interstellar migration. From the paper:

Our hypothesis bears resemblance to the slow limit in models of interstellar expansion (Wright et al. 2014; Carroll-Nellenback et al. 2019). In a model in which civilizations diffuse away from their original locations with a range of possible speeds, the behavior at low speeds is no longer a diffusion wave but rather a random seeding dominated by the interstellar dispersion. Even in this limit, the large age of the Galaxy allows for widespread colonization unless the migration speeds are sufficiently small. In this sense our treatment converges with prior work, but our focus is very different. We are primarily interested in how a long-lived technological civilization may respond to stellar evolution and not how such civilizations may pursue expansion as a goal in and of itself. Thus our discussion demonstrates the requirements for technological civilizations to survive the evolution of their host star, even in the event that widespread colonization is physically infeasible.

It’s interesting that the close passage of a second star is a way to reduce the search space for SETI purposes if we go looking for the technological signature of a civilization in motion. Separating out stars undergoing close passage from truly bound binaries is another matter, and one that would, the authors suggest, demand a solid program for eliminating false positives.

Ingenious. An imaginative exercise like this, or Greg Laughlin and Fred Adams’ recent work on ‘black cloud’ computing, offers us perspectives on the galactic scale, a good way to stretch mental muscles that can sometimes atrophy when limited to the near-term. Which is one reason I read science fiction and pursue papers from people working the far edge of astrophysics.

The paper is Hansen and Zuckerman, “Minimal conditions for survival of technological civilizations in the face of stellar evolution,” in process at the Astronomical Journal (preprint). Thanks to Antonio Tavani for the pointer on a paper I hadn’t yet discovered.


‘Farfarout’ Confirmed Far Beyond Pluto

One thing is certain about the now confirmed object that is being described as the most distant ever observed in our Solar System. We’ll just be getting used to using the official designation of 2018 AG37 (bestowed by the Minor Planet Center according to IAU protocol) when it will be given an official name, just as 2003 VB12 was transformed into Sedna and 2003 UB313 became Eris. It’s got a charming nickname, though, the jesting title “Farfarout.”

I assume the latter comes straight from the discovery team, and it’s a natural because the previous most distant object, found in 2018, was dubbed “Farout” by the same team of astronomers. That team includes Scott Sheppard (Carnegie Institution for Science), Chad Trujillo (Northern Arizona University) and David Tholen (University of Hawaiʻi). Farout, by the way, has the IAU designation 2018 VG18, but has not to my knowledge received an official name. Trans-Neptunian objects can be useful for investigating the gravitational effects of possible larger objects — like the putative Planet 9 — deep in the reaches of the system.

Image: Solar System distances to scale, showing the newly discovered planetoid, nicknamed “Farfarout,” compared to other known Solar System objects, including the previous record holder 2018 VG18 “Farout,” also found by the same team. Credit: Roberto Molar Candanosa, Scott S. Sheppard (Carnegie Institution for Science) and Brooks Bays (University of Hawaiʻi).

As to Farfarout, it turned up in data collected at the Subaru 8-meter telescope at Maunakea (Hawaiʻi) in 2018, with observations at Gemini North and the Magellan telescopes (Las Campanas Observatory, Chile) helping to constrain its orbit. Its average distance from the Sun appears to be 101 AU, but the orbit is elliptical, reaching 175 AU at aphelion and closing to 27 AU (inside the orbit of Neptune) at its closest approach to the Sun. That makes for a single revolution about the Sun that lasts a thousand years, and a long history of gravitational interactions with Neptune.
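
The thousand-year orbit follows directly from Kepler’s third law: with the semi-major axis in AU and the period in years, P = a^(3/2) for anything orbiting the Sun. A quick check, taking the quoted 101 AU average distance as the semi-major axis (which the 27 and 175 AU extremes support):

```python
a_au = 101.0                         # semi-major axis in AU (quoted average distance)
perihelion, aphelion = 27.0, 175.0   # quoted orbital extremes, in AU

# Consistency check: the semi-major axis is the mean of perihelion and aphelion
print((perihelion + aphelion) / 2)   # 101.0 AU

# Kepler's third law for solar orbits: P [years] = a [AU] ** 1.5
period_years = a_au ** 1.5
print(f"orbital period: ~{period_years:.0f} years")   # ~1015 years
```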

Farfarout is thought to be about 400 kilometers in diameter, making it a very small dwarf planet, though this would depend on interpretations of its albedo and the assumption that it is an icy object. In any case, its gravitational dealings with Neptune over the course of the Solar System’s history affect its usefulness as a marker for detecting massive objects further out. For that, we turn to objects like Sedna and 2012 VP113, which do not approach Neptune.

On the other hand, the Neptune interactions can be useful, as Chad Trujillo points out:

“Farfarout’s orbital dynamics can help us understand how Neptune formed and evolved, as Farfarout was likely thrown into the outer solar system by getting too close to Neptune in the distant past. Farfarout will likely strongly interact with Neptune again since their orbits continue to intersect.”

Image: An early estimate of Farfarout’s orbit. Credit: Tomruen/JPL, CC BY-SA 4.0.

We’re at the early stages of our explorations of the outer system, and it’s safe to assume that a windfall of such objects awaits astronomers as our cameras and telescopes continue to improve. Sheppard, Tholen and Trujillo will doubtless turn up more as they continue the hunt for Planet 9.


Imaging Alpha Centauri’s Habitable Zones

We may or may not have imaged a planet around Alpha Centauri A, possibly a ‘warm Neptune’ at an orbital distance of roughly 1 AU, the distance between Earth and the Sun. Let’s quickly move to the caveat: This finding is not a verified planet, and may in fact be an exozodiacal disk detection or even a glitch within the equipment used to see it.

But as the paper notes, the finding, called C1, “is not a known systematic artifact, and is consistent with being either a Neptune-to-Saturn-sized planet or an exozodiacal dust disk.” So this is interesting.

As it may be some time before we can make the call on C1, I want to emphasize not so much the possible planet as the method used to investigate it. What the team behind a new paper in Nature Communications has demonstrated is a system for imaging in the mid-infrared, coupled with long observing times, that can extend the capabilities of ground-based telescopes to capture planets in the habitable zones of nearby stars.

Lead author Kevin Wagner (University of Arizona Steward Observatory) and colleagues describe a method showing a tenfold improvement over existing direct imaging solutions. Wavelength is important here, for exoplanet imaging usually works at infrared wavelengths below the optimum. Wagner points to the nature of observations from a warm planetary surface to explain why the wavelengths where planets are brightest can be problematic:

“There is a good reason for that because the Earth itself is shining at you at those wavelengths. Infrared emissions from the sky, the camera and the telescope itself are essentially drowning out your signal. But the good reason to focus on these wavelengths is that’s where an Earthlike planet in the habitable zone around a sun-like star is going to shine brightest.”

With exoplanet imaging up to now operating below 5 microns, where background noise is low, the planets we’ve been successful at imaging have been young, hot worlds of Jupiter class in wide orbits. Let me quote from the paper on this as well:

Their high temperatures are a remnant of formation and reflect their youth (~1–100 Myr, compared to the Gyr ages of typical stars). Imaging potentially habitable planets will require imaging colder exoplanets on shorter orbits around mature stars. This leads to an opportunity in the mid-infrared (~10 µm), in which temperate planets are brightest. However, mid-infrared imaging introduces significant challenges. These are primarily related to the much higher thermal background—that saturates even sub-second exposures—and also the ~2–5× coarser spatial resolution due to the diffraction limit scaling with wavelength. With current state-of-the-art telescopes, mid-infrared imaging can resolve the habitable zones of roughly a dozen nearby stars, but it remains to be shown whether sensitivity to detect low-mass planets can be achieved.
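
The resolution side of that trade is easy to quantify. At 10 µm on an 8.2-meter aperture like the VLT’s, the diffraction limit (1.22 λ/D) is about 0.3 arcseconds, while 1 AU at Alpha Centauri’s roughly 1.34 parsec distance subtends about 0.75 arcseconds, so the habitable zones remain resolvable even at the coarser mid-infrared resolution. A quick check (the 8.2 m aperture and 1.34 pc distance are standard values I am supplying, not numbers from the paper):

```python
import math

RAD_TO_ARCSEC = 206265.0

wavelength_m = 10e-6        # mid-infrared, ~10 microns
aperture_m = 8.2            # VLT unit telescope primary mirror

# Diffraction-limited angular resolution
theta_arcsec = 1.22 * wavelength_m / aperture_m * RAD_TO_ARCSEC
print(f"diffraction limit at 10 um: {theta_arcsec:.2f} arcsec")   # ~0.31"

# Angular size of 1 AU at Alpha Centauri (distance ~1.34 pc):
# by definition, 1 AU at 1 pc subtends 1 arcsecond
hz_arcsec = 1.0 / 1.34
print(f"1 AU at Alpha Centauri: {hz_arcsec:.2f} arcsec")          # ~0.75"
```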

Getting around these challenges is part of what Breakthrough Watch is trying to do via its NEAR (New Earths in the Alpha Centauri Region) experiment, which focuses on the technologies needed to directly image low-mass habitable-zone exoplanets. The telescope in question is the European Southern Observatory’s Very Large Telescope in Chile, where Wagner and company are working with an adaptive secondary telescope mirror designed to minimize atmospheric distortion. That effort works in combination with a light-blocking mask optimized for the mid-infrared to block the light of Centauri A and then Centauri B in sequence.

Remember that stable habitable zone orbits have been calculated for both of these stars. Switching between Centauri A and B rapidly — as fast as every 50 milliseconds, in a method called ‘chopping’ — allows both habitable zones to be scrutinized simultaneously. Background light is further reduced by image stacking and specialized software.

“We’re moving one star on and one star off the coronagraph every tenth of a second,” adds Wagner. “That allows us to observe each star for half of the time, and, importantly, it also allows us to subtract one frame from the subsequent frame, which removes everything that is essentially just noise from the camera and the telescope.”
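
As an illustration only, and not the team’s actual pipeline, the core of the chopping idea is pairwise subtraction of frames taken in rapid alternation: the slowly varying thermal background from sky, camera and telescope is common to both frames and cancels, while anything that moves with the chop survives the subtraction. A minimal NumPy sketch, with fake_frame and all of the toy numbers invented for the purpose:

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_frame(star_on, shape=(64, 64), background=1000.0):
    """Toy detector frame: bright thermal background plus an optional faint source."""
    frame = background + rng.normal(0.0, 5.0, shape)   # background + noise
    if star_on:
        frame[32, 40] += 50.0                          # faint off-axis source
    return frame

# Alternate 'A on coronagraph' / 'B on coronagraph' frames every chop cycle
frames_a = [fake_frame(star_on=True) for _ in range(200)]
frames_b = [fake_frame(star_on=False) for _ in range(200)]

# Chop subtraction removes the common background; stacking beats down the noise
diff = np.mean([a - b for a, b in zip(frames_a, frames_b)], axis=0)

print(f"residual background: {diff.mean():+.2f} counts (should be near zero)")
print(f"source pixel:        {diff[32, 40]:+.2f} counts (should be near +50)")
```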

Among possible systematic artifacts, the paper notes the presence of ‘negative arcs’ due to reflections that are introduced within the system and must be eliminated. The image below shows the view before the artifacts have been removed and a second view after that process is complete.

Image: This is Figure 2 from the paper. Caption: (a) High-pass filtered image without PSF subtraction or artifact removal. The α Centauri B on-coronagraph images have been subtracted from the α Centauri A on-coronagraph images, resulting in a central residual and two off-axis PSFs to the SE and NW of α Centauri A and B, respectively. Systematic artifacts labeled 1–3 correspond to detector persistence from α Centauri A, α Centauri B, and an optical ghost of α Centauri A. (b) Zoom-in on the inner regions following artifact removal and PSF subtraction. Regions impacted by detector persistence are masked for clarity. The approximate inner edge of the habitable zone of α Centauri A is indicated by the dashed circle. A candidate detection is labeled as ‘C1’. Credit: Wagner et al.

Over the years, we’ve seen the size of possible planetary companions of Centauri A and B gradually constrained, and as the paper notes, radial velocity work has excluded planets more massive than 53 Earth masses in the habitable zone of Centauri A (by comparison, Jupiter is 318 Earth masses). The constraint at Centauri B is 8.4 Earth masses, meaning that in both cases, lower-mass planets could still be present and in stable orbits. We already know of two worlds orbiting the M-dwarf Proxima Centauri.

You can find the results of the team’s nearly 100 hours of observations (enough to collect more than 5 million images) in the 7 terabytes of data now made available at http://archive.eso.org. Wagner is forthcoming about the likelihood of the Centauri A finding being a planet:

“There is one point source that looks like what we would expect a planet to look like, that we can’t explain with any of the systematic error corrections. We are not at the level of confidence to say we discovered a planet around Alpha Centauri, but there is a signal there that could be that with some subsequent verification.”

A second imaging campaign is planned for a few years from now, which could reveal the same possible exoplanet at a different point along its modeled orbit, with potential confirmation via radial velocity methods. From the paper:

The habitable zones of α Centauri and other nearby stars could host multiple rocky planets, some of which may host suitable conditions for life. With a factor of two improvement in radius sensitivity (or a factor of four in brightness), habitable-zone super-Earths could be directly imaged within α Centauri. An independent experiment (e.g., a second mid-infrared imaging campaign, as well as RV, astrometry, or reflected light observations) could also clarify the nature of C1 as an exoplanet, exozodiacal disk, or instrumental artifact. If confirmed as a planet or disk, C1 would have implications for the presence of other habitable zone planets. Mid-infrared imaging of the habitable zones of other nearby stars, such as ε Eridani, ε Indi, and τ Ceti is also possible.

It’s worth keeping in mind that the coming extremely large telescopes will bring significant new capabilities to ground-based imaging of planets around nearby stars. Whether or not we have a new planet in this nearest of all stellar systems to Earth, we do have significant progress at pushing the limits of ground-based observation, with positive implications for the ELTs.

The paper is Wagner et al., “Imaging low-mass planets within the habitable zone of α Centauri,” Nature Communications 12: 922 (2021). Abstract / full text.


A Black Cloud of Computation

Moore’s Law, first stated all the way back in 1965, came out of Gordon Moore’s observation that the number of transistors per silicon chip was doubling every year (it would later be revised to doubling every 18-24 months). While it’s been cited countless times to explain our exponential growth in computation, Greg Laughlin, Fred Adams and team, whose work we discussed in the last post, focus not on Moore’s Law but on a less publicly visible statement known as Landauer’s Principle. Drawing on Rolf Landauer’s 1961 work at IBM, the principle defines a lower limit on the energy consumed by computation.

You can find the equation here, or in the Laughlin/Adams paper cited below, where the authors note that for an operating temperature of 300 K (a fine summer day on Earth), the maximum efficiency works out to about 3.5 × 10¹³ bit operations per erg. As we saw in the last post, a computational energy crisis emerges when exponentially increasing power requirements for computing exceed the total power input to our planet. Given current computational growth, the saturation point is on the order of a century away.
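
The Landauer figure is straightforward to reproduce: the minimum energy to erase one bit at temperature T is k_B T ln 2, and its inverse, expressed in bit operations per erg, comes out to about 3.5 × 10¹³ at 300 K. A minimal calculation:

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
ERG = 1e-7                # 1 erg = 1e-7 joules

def landauer_bits_per_erg(temperature_k):
    """Maximum bit operations per erg at the Landauer limit of k_B * T * ln 2 per bit."""
    energy_per_bit_j = K_B * temperature_k * math.log(2)
    return ERG / energy_per_bit_j

print(f"{landauer_bits_per_erg(300):.2e} bit operations per erg")   # ~3.5e13
```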

Thus Landauer’s limit becomes a tool for predicting a problem ahead, given the linkage between computation and economic and technological growth. The working paper that Laughlin and Adams produced looks at the numbers in terms of current computational throughput and sketches out a problem that a culture deeply reliant on computation must overcome. How might civilizations far more advanced than our own go about satisfying their own energy needs?

Into the Clouds

We’re familiar with Freeman Dyson’s interest in enclosing stars with technologies that can exploit the great bulk of their energy output, with the result that there is little to mark their location to distant astronomers other than an infrared signature. Searches for such megastructures have already been made, but thus far with no detections. Laughlin and Adams ponder exploiting the winds generated by Asymptotic Giant Branch stars, which might be tapped to produce what they call a ‘dynamical computer.’ Here again there is an infrared signature.

Let’s see what they have in mind:

In this scenario, the central AGB star provides the energy, the raw material (in the form of carbon-rich macromolecules and silicate-rich dust), and places the material in the proper location. The dust grains condense within the outflow from the AGB star and are composed of both graphite and silicates (Draine and Lee 1984), and are thus useful materials for the catalyzed assembly of computational components (in the form of nanomolecular devices communicating wirelessly at frequencies (e.g. sub-mm) where absorption is negligible in comparison to required path lengths).

What we get is a computational device surrounding the AGB star that is roughly the size of our Solar System. In terms of observational signatures, it would be detectable as a blackbody with temperature in the range of 100 K. It’s important to realize that in natural astrophysical systems, objects with these temperatures show a spectral energy distribution that, the authors note, is much wider than a blackbody. The paper cites molecular clouds and protostellar envelopes as examples; these should be readily distinguishable from what the authors call Black Clouds of computation.

It seems odd to call this structure a ‘device,’ but that is how Laughlin and Adams envision it. We’re dealing with computational layers in the form of radial shells within the cloud of dust being produced by the AGB star in its relatively short lifetime. It is a cloud in an environment that subjects it to the laws of hydrodynamics, which the paper tackles by way of characterizing its operations. The computer, in order to function, has to be able to communicate with itself via operations that the authors assume occur at the speed of light. The calculated minimum temperature corresponds to an optimal radius of about 220 AU: an astronomical computing engine.
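
One way to see where a radius of a couple hundred AU comes from is simple energy balance: a shell that reprocesses the star’s luminosity L and radiates as a blackbody at temperature T sits at roughly R = sqrt(L / (16 π σ T⁴)). Taking an AGB luminosity of ~10⁴ solar luminosities as my own assumption (the paper’s optimization, which also folds in the communication and entropy constraints mentioned below, is more involved), the 150–200 K range brackets the quoted 220 AU:

```python
import math

SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26       # solar luminosity, W
AU_M = 1.496e11        # meters per AU

L = 1e4 * L_SUN        # assumed AGB luminosity, ~10^4 L_sun

def equilibrium_radius_au(temperature_k):
    """Radius at which a spherical shell re-radiating L settles at the given temperature."""
    r = math.sqrt(L / (16.0 * math.pi * SIGMA * temperature_k**4))
    return r / AU_M

for t in (150, 175, 200):
    print(f"T = {t} K  ->  R ~ {equilibrium_radius_au(t):.0f} AU")
```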

And what a device it is. The maximum computational rate works out to 3 × 10⁵⁰ bits s⁻¹ for a single AGB star. That rate is slowed by considerations of entropy and rate of communication, but we can optimize the structure at the above size constraint and a temperature between 150 and 200 K, with a mass roughly comparable to that of the Earth. This is a device that is in need of refurbishment on a regular timescale because it is dependent upon the outflow from the star. The authors calculate that the computational structure would need to be rebuilt on a timescale of 300 years, comparable to infrastructure timescales on Earth.

Thus we have what Laughlin, in a related blog post, describes as “a dynamically evolving wind-like structure that carries out computation.” And as he goes on to note, AGB stars in their pre-planetary nebula phase have lifetimes on the order of 10,000 years, during which time they produce vast amounts of graphene suitable for use in computation, with photospheres not far off room temperature on Earth. Finding such a renewable megastructure in astronomical data could be approached by consulting the WISE source catalog with its 563,921,584 objects. A number of candidates are identified in the paper, along with metrics for their analysis.

These types of structures would appear from the outside as luminous astrophysical sources, where the spectral energy distributions have a nearly blackbody form with effective temperature T ≈ 150–200 K. Astronomical objects with these properties are readily observable within the Galaxy. Current infrared surveys (the WISE Mission) include about 200 candidate objects with these basic characteristics…

And a second method of detection, looking for nano-scale hardware in meteorites, is rather fascinating:

Carbonaceous chondrites (Mason 1963) preserve unaltered source material that predates the solar system, much of which was ejected by carbon stars (Ott 1993). Many unusual materials have been identified within carbonaceous chondrites, including, for example, nucleobases, the informational sub-units of RNA and DNA (see Nuevo et al. 2014). Most carbonaceous chondrites have been subject to processing, including thermal metamorphism and aqueous alteration (McSween 1979). Graphite and highly aromatic material survives to higher temperatures, however, maintaining structure when heated transiently to temperatures of order T ≈ 700 K (Pearson et al. 2006). It would thus potentially be of interest to analyze carbonaceous chondrites to check for the presence of (for example) devices resembling carbon nanotube field-effect transistors (Shulaker et al. 2013).

Meanwhile, Back in 2021

But back to the opening issue, the crisis posited by the rate of increase in computation vs. the energy available to our society. Should we tie Earth’s future economic growth to computation? Will a culture invariably find ways to produce the needed computational energies, or are other growth paradigms possible? Or is growth itself a problem that has to be surmounted?

At the present, the growth of computation is fundamentally tied to the growth of the economy as a whole. Barring the near-term development of practical reversible computing (see, e.g., Frank 2018), the forthcoming computational energy crisis can be avoided in two ways. One alternative involves transition to another economic model, in contrast to the current regime of information-driven growth, so that computational demand need not grow exponentially in order to support the economy. The other option is for the economy as a whole to cease its exponential growth. Both alternatives involve a profound departure from the current economic paradigm.

We can wonder as well whether what many are already seeing as the slowdown of Moore’s Law will lead to new forms of exponential growth via quantum computing, carbon nanotube transistors or other emerging technologies. One thing is for sure: Our planet is not at the technological level to exploit the kind of megastructures that Freeman Dyson and Greg Laughlin have been writing about, so whatever computational crisis we face is one we’ll have to surmount without astronomical clouds. Is this an aspect of the L term in Drake’s famous equation? It referred to the lifetime of technological civilizations, and on this matter we have no data at all.

The working paper is Laughlin et al., “On the Energetics of Large-Scale Computation using Astronomical Resources.” Full text. Laughlin also writes about the concept on his oklo.org site.


Cloud Computing at Astronomical Scales

Interesting things happen to stars after they’ve left the main sequence. So-called Asymptotic Giant Branch (AGB) stars are those less than nine times the mass of the Sun that have already moved through their red giant phase. They’re burning an inner layer of helium and an outer layer of hydrogen, multiple zones surrounding an inert carbon-oxygen core. Some of these stars, cooling and expanding, begin to condense dust in their outer envelopes and to pulsate, producing a ‘wind’ off the surface of the star that effectively brings an end to hydrogen burning.

Image: Hubble image of the asymptotic giant branch star U Camelopardalis. This star, nearing the end of its life, is losing mass as it coughs out shells of gas. Credit: ESA/Hubble, NASA and H. Olofsson (Onsala Space Observatory).

We’re on the way to a planetary nebula centered on a white dwarf now, but along the way, in this short pre-planetary nebula phase, we have the potential for interesting things to happen. It’s a potential that depends upon the development of intelligence and technologies that can exploit the situation, just the sort of scenario that would attract Greg Laughlin (Yale University) and Fred Adams (University of Michigan). Working with Darryl Seligman and Khaya Klanot (Yale), the authors of the brilliant The Five Ages of the Universe (Free Press, 2000) are used to thinking long-term, and here they ponder technologies far beyond our own.

For a sufficiently advanced civilization might want to put that dusty wind from a late Asymptotic Giant Branch star to work on what the authors call “a dynamically evolving wind-like structure that carries out computation.” It’s a fascinating idea because it causes us to reflect on what might motivate such an action, and as we learn from the paper, the energetics of computation must inevitably become a problem for any technological civilization facing rapid growth. A civilization going the AGB route would also produce an observable signature, a kind of megastructure that is to my knowledge considered here for the first time.

Laughlin refers to the document that grows out of these calculations (link below) as a ‘working paper,’ a place for throwing off ideas and, I assume, a work in progress as those ideas are explored further. It operates on two levels. The first is to describe the computational crisis facing our own civilization, assuming that exponential growth in computing continues. Here the numbers are stark, as we’ll see below, and the options problematic. The second level is the speculation about cloud computing at the astronomical level, which takes us far beyond any solution that would resolve our near-term problem, but offers fodder for thought about the possible behaviors of civilizations other than our own.

Powering Up Advanced Computation

The kind of speculation that puts megastructures on the table is productive because we are the only example of technological civilization that we have to study. We have to ask ourselves what we might do to cope with problems as they scale up to planetary size and beyond. Solar energy is one thing when considered in terms of small-scale panels on buildings, but Freeman Dyson wanted to know how to exploit not just our allotted sliver of the energy being put out by the Sun but all of it. Thus the concept of enclosing a star with a shell of technologies, with the observable result of gradually dimming the star while producing a signature in the infrared.

Laughlin, Adams and colleagues have a new power source that homes in on the drive for computation. It exploits the carbon-rich materials that condense over periods of thousands of years and are pushed outward by AGB winds, offering the potential for computers at astronomical scales. It’s wonderful to see them described here as ‘black clouds,’ a nod the authors acknowledge to Fred Hoyle’s engaging 1957 novel The Black Cloud, in which a huge cloud of gas and dust approaches the Solar System. Like the authors’ cloud, Hoyle’s possesses intelligence, and learning how to deal with it powers the plot of the novel.

Hoyle wasn’t thinking in terms of computation in 1957, but the problem is increasingly apparent today. We can measure the capabilities of our computers and project the resource requirements they will demand if exponential rates of growth continue. The authors work through calculations to a straightforward conclusion: The power we need to support computation will exceed that of the biosphere in roughly a century. This resource bottleneck likewise applies to data storage capabilities.

Just how much computation does the biosphere support? The answer merges artificial computation with what Laughlin and team call “Earth’s 4 Gyr-old information economy.” According to the paper’s calculations, today’s artificial computing uses about 2 × 10⁻⁷ of the Earth’s insolation energy budget. The biosphere in these terms can be considered an infrastructure for copying the digital information encoded in strands of DNA. Interestingly, the authors find that this biological computation runs roughly a factor of 10 more efficiently than today’s artificial computation. They also provide the mathematical framework for the idea that artificial computing efficiencies can be improved by a factor of more than 10⁷.

Artificial computing is, of course, a rapidly moving target. From the paper:

At present, the “computation” performed by Earth’s biosphere exceeds the current burden of artificial computation by a factor of order 10⁶. The biosphere, however, has carried out its computation in a relatively stable manner over geologic time, whereas artificial computation by devices, as well as separately, the computation stemming from human neural processing, are both increasing exponentially – the former through Moore’s Law-driven improvement in devices and increases in the installed base of hardware, and the latter through world population growth. Environmental change due to human activity can, in a sense, be interpreted as a computational “crisis,” a situation that will be increasingly augmented by the energy demands of computation.

We get to the computational energy crisis as we approach the point where the exponential growth in the need for power exceeds the total power input to the planet, which should occur in roughly a century. We’re getting roughly 10¹⁷ watts from the Sun here on the surface and in something on the order of 100 years we will need to use every bit of that to support our computational infrastructure. And as mentioned above, the same applies to data storage, even conceding the vast improvements in storage efficiency that continue to occur.
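
The ‘roughly a century’ estimate follows from the two numbers in this discussion: computation currently draws about 2 × 10⁻⁷ of the ~10¹⁷ watts Earth receives, so it takes about 22 doublings for it to consume all of that input. The doubling time is the uncertain ingredient; the few-year values below are my assumptions in the spirit of the paper’s exponential-growth extrapolation, not figures from the paper.

```python
import math

current_fraction = 2e-7            # share of Earth's insolation used by computation today
doublings_needed = math.log2(1.0 / current_fraction)
print(f"doublings to consume all insolation: {doublings_needed:.1f}")   # ~22.3

# Assumed doubling times for computational power demand
for doubling_time_years in (3, 4, 5):
    print(f"doubling every {doubling_time_years} yr -> crisis in "
          f"~{doublings_needed * doubling_time_years:.0f} years")
```

A doubling time of four to five years puts the crunch about a century out, in line with the paper’s estimate.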

Civilizations don’t necessarily have to build megastructures of one form or another to meet such challenges, but we face the problem that the growth of the economy is tied to the growth of computation, meaning it would take a transition to a different economic model — abandoning information-driven energy growth — to reverse the trend of exponential growth in computing.

In any case, it’s exceedingly unlikely that we’ll be able to build megastructures within a century, but can we look toward the so-called ‘singularity’ to achieve a solution as artificial intelligence suddenly eclipses its biological precursor? This one is likewise intriguing:

The hypothesis of a future technological singularity is that continued growth in computing capabilities will lead to corresponding progress in the development of artificial intelligence. At some point, after the capabilities of AI far exceed those of humanity, the AI system could drive some type of runaway technological growth, which would in turn lead to drastic changes in civilization… This scenario is not without its critics, but the considerations of this paper highlight one additional difficulty. In order to develop any type of technological singularity, the AI must reach the level of superhuman intelligence, and implement its civilization-changing effects, before the onset of the computational energy crisis discussed herein. In other words, limits on the energy available for computation will place significant limits on the development of AI and its ability to instigate a singularity.

So the issues of computational growth and the energy to supply it are thorny. Let’s look in the next post at the ‘black cloud’ option that a civilization far more advanced than ours might deploy, assuming it figured out a way to get past the computing bottleneck our own civilization faces in the not all that distant future. There we’ll be in an entirely speculative realm that doesn’t offer solutions to the immediate crisis, but makes a leap past that point to consider computational solutions at the scale of the solar system that can sustain a powerful advanced culture.

The paper is Laughlin et al., “On the Energetics of Large-Scale Computation using Astronomical Resources.” Full text. Laughlin also writes about the concept on his oklo.org site.
