
Starflight: Engagement with Risk

How we’ll go to the stars is often a question we answer with propulsion options. But of course the issue is larger than that. Will we, for example, go as biological beings or in the form of artificial intelligence? For that matter, if we start thinking about post-human intelligence, as Martin Rees does in the recently published Starship Century, are we talking about reality or a simulation? As the Swedish philosopher Nick Bostrom has speculated, a supremely advanced culture could create computer simulations capable of modeling the entire universe.

Rees recapitulates the argument in his essay “To the Ends of the Universe”: A culture that could create simulations as complex as the universe we live in might create virtual universes in the billions as a ripe domain for study or pure entertainment, allowing a kind of ‘time travel’ in which the past is reconstructed and the simulation masters can explore their history. So there we have a play on the nature of reality itself and the notion that our perceptions are constricted — as limited, Rees says, “as the perspective of Earth available to a plankton whose ‘universe’ is a spoonful of water.”


It’s a mind-blowing thought and Rees uses it to point out that our concepts of physical reality may need adjustment. What engages him about post-human evolution, though, isn’t the simulation speculation as much as the idea that genetic modifications and a simultaneous advance in machine intelligence will allow the kind of ‘directed’ evolution we’ll need if we are to expand into colonies beyond Earth. Here’s the argument compacted into a paragraph:

Darwin himself realised that ‘No living species will preserve its unaltered likeness into a distant futurity.’ We now know that ‘futurity’ extends much farther, and alterations can occur far faster than Darwin envisioned. And we know that the cosmos, through which life could spread, offers a far more extensive and varied habitat than he ever imagined. So humans are surely not the terminal branch of an evolutionary tree, but a species that emerged early in the overall roll-call of species, with special promise for diverse evolution — and perhaps of cosmic significance for jump-starting the transition to silicon-based (and potentially immortal) entities that can more readily transcend human limitations.

Humanity’s Special Moment?

Rees thinks we are at a special moment in time, a century that is the first occasion when the fate of the entire planet is in the hands of a single species. It’s also the century when we have the capability of starting the expansion of human life into the rest of the Solar System and beyond. This thought will recall Rees’ 2003 book Our Final Century (published in the US as Our Final Hour), whose subtitle says it all: ‘A Scientist’s Warning: How Terror, Error, and Environmental Disaster Threaten Humankind’s Future In This Century – On Earth and Beyond.’ There follows a study of the risk factors that hang over our culture’s head.

Which do you think would be the most likely cause of our demise? The possibilities are legion, ranging from environmental collapse to biological terrorism or the inadvertent results of using new technology, perhaps in the release of nanotechnology that goes out of control. The argument in the essay is similar to that of the book: we have the ability to surmount these problems if we are wise enough to expand the frontiers of science and of exploration. The outcome is by no means certain, but I think the picture may look darker than it really is, for while we can see the problems Rees outlines emerging, we cannot know what new knowledge we will gain as we confront them that will make solutions possible. Put me, then, on the more optimistic side of Rees’ speculations, a space the Astronomer Royal dwells on in this essay.

Let’s assume, then, that the outcome is indeed human survival and movement into space. That calls for near-term solutions to re-energizing the global space effort. How do we avoid the blunders of the past, the failure to follow up the Apollo landings with a credible and sustained program to continue manned exploration? The answer may well lie in private funding:

Unless motivated by pure prestige and bankrolled by superpowers, manned missions beyond the moon will need perforce to be cut-price ventures, accepting high risks — perhaps even ‘one-way tickets.’ These missions will be privately funded; no Western governmental agency would expose civilians to such hazards.

Here, of course, we think of Inspiration Mars, the plan to send a crew of two on a Mars flyby with launch in 2018, or the Dutch Mars One program, in which settlers would go to Mars knowing they would not be returning. Rees continues:

There would, despite the risks, be many volunteers — driven by the same motives as early explorers, mountaineers, and the like. Private companies already offer orbital flights. Maybe within a decade adventurers will be able to sign up for a week-long trip round the far side of the Moon — voyaging farther from Earth than anyone has been before (but avoiding the greater challenge of a Moon landing and blast-off). And by mid-century the most intrepid (and wealthy) will be going farther.

On Risk as Necessity

That scenario winds up with human outposts scattered through the Solar System, beginning with Mars or, perhaps, the asteroids. The post-human era may begin as we bring genetic engineering to bear on making ourselves more adaptable to such radically alien environments. A parallel development in artificial intelligence will make it an open question whether our forays beyond the Solar System are conducted by human or machine or a mixture of both. Given the time-scales involved in interstellar journeys, any human crews would be heavily modified, but it’s more likely that they will be non-biological entities altogether, adapted for long and solitary exploration.

Is interstellar travel possible? The answer may well be yes, but for whom? Rees’ point is that the options will grow as we gain experience with the new environments we explore. SETI may help us find extraterrestrial intelligence, but maybe not. Either way, evolution into the post-human era will be driven by the imperative to protect — and extend — the species. Whether or not such extension leads in the course of millennia to a Kardashev Type III civilization, actively shaping the fate of its entire galaxy, will depend at least in part upon our ability to transcend the kind of existential risks Rees discusses in Our Final Century.

Where I disagree with Rees is in the implication that, the risks of our century being solved, we will have transcended them. Risk survival is a continual process activated by the very breakthroughs in technology and exploration the astrophysicist here explores. We can only imagine what kind of dangers a rapidly growing starfaring culture would expose itself to purely in terms of its own engineering, not to mention its possible encounters with other civilizations.

Our encounter with risk has always been a precondition for our encounter with life, making the 21st Century a bit less unique than Rees would have it, but no less significant for the opportunity it offers to re-evaluate and once again engage the inevitable risks of growing starward.


Comments on this entry are closed.

  • Richard July 8, 2013, 10:55

    Elon Musk made the point earlier this year that we should try to establish a colony on Mars as soon as possible since there was no guarantee that the current technical window of opportunity would last.

    I’d never considered that possibility before. You get used to the continued (if slow) advance. To think we might actually fall back a few rungs due to some disaster or cultural lethargy is chilling.

  • James Benford July 8, 2013, 13:43

    John Cramer gives a refutation of the Bostrom argument in the current issue of Analog, in his Alternate View column, ‘Is Our World Just a Computer Simulation?’.

    Jim Benford

  • Eniac July 8, 2013, 20:37

    Paul:

    Where I disagree with Rees is in the implication that, the risks of our century being solved, we will have transcended them. Risk survival is a continual process activated by the very breakthroughs in technology and exploration the astrophysicist here explores. We can only imagine what kind of dangers a rapidly growing starfaring culture would expose itself to purely in terms of its own engineering, not to mention its possible encounters with other civilizations.

    I have to say I am with Rees, here. A starfaring species without FTL could never be one “culture”. It would by necessity be a large collection of many cultures. Ultimately, at least as many as, likely many more than, there are inhabitable star systems in the galaxy. It is hard to come up with a realistic scenario that would allow for the simultaneous extermination of every single one of them.

    A superior race of ETI might do it, but then the galaxy would still be inhabited, just not by our own descendants.

  • Eniac July 8, 2013, 21:04

    On the reality/simulation issue: In my view a simulation is not necessary or useful. A mathematical model and its solutions exist, whether someone simulates/calculates them or not. We are part of such a model, and we have very nearly figured out its fundamental equations (or so we think). Whether the true model turns out to be continuous or discrete is quite irrelevant.

    Occam’s razor would dictate that the model be as simple as possible and still explain all we see. This is the very essence and beauty of theoretical physics, first truly grasped by Einstein.

    I am pretty sure that a machine can never be self-simulating, so the postulate that we are a simulation implies that there is a much “bigger” reality in which the simulator exists. That is just kicking the problem one turtle down, an exercise in futility that would have Occam rotate in his grave.

  • A. A. Jackson July 8, 2013, 21:20

    “A culture that could create simulations as complex as the universe we live in might create virtual universes …..”
    I have to wonder whether Rees is a science fiction reader.
    I wonder if he ever read “Microcosmic God”? (1941!)
    A reclusive biochemist named Kidder creates a microscopic life form that lives quickly and, through the creative invention of changing environments, evolves so fast that it becomes an advanced civilization. To protect himself from malevolent outside forces, Kidder asks this synthetic life form, which he calls the ‘neoterics,’ to create an invincible dome over the island he lives on; the reader is left to speculate on what further goes on under the dome.
    No need to guess who wrote this story in 1941: the Grand Master of all SF Grand Masters, Theodore Sturgeon.
    Maybe not on as grand a scale as The Matrix or Rees’s speculation (I do believe other SF writers precede Rees on this), but it does anticipate him by about 70 years!

    “Could I but rotate my arm out of the limits set to it,” … “I could thrust it into a thousand dimensions.” — H.G. Wells

  • Kudzu Bob July 9, 2013, 1:34

    Which do you think would be the most likely cause of our demise?

    The difference in birth rates between the smart and the stupid. Once the number of high-IQ people drops below a certain critical level, our dumbed-down society will blunder into some sort of catastrophe that could have been averted, such as a global nuclear war, failure to cope with the onset of another ice age, or perhaps just using Brawndo instead of water to irrigate the crops.

    The future belongs to those who show up, something that smart people, with their one-point-nada Total Fertility Rates, do not seem to realize. All the Head Start programs and Baby Einstein DVDs in the world will never change that.

  • Joy July 9, 2013, 2:56

    I apologize in advance for paraphrasing the noted prevaricator Bill Clinton, but one cannot discuss “our demise” without clarifying the intended meaning of “our” and the intended meaning of “demise”.

    If one considers “our” to mean genus Homo, I have no worries on that account. This genus has blossomed, been trimmed, gone through population bottlenecks and effloresced before and can do so again. It is too versatile and widespread to be threatened with any great risk of extinction for millions of years to come. Of note, even the famous “extinction” of the dinosaurs is now known to be utterly false as any clade broad enough to include all of the famous large dinosaurs includes all living birds as well.

    Whether our descendants will be the multitudes of godlike entities of Kurzweilian fantasy or smaller numbers of more human beings living at a much lower level of complexity remains to be seen. What is certain is that the current phenomenon of over 7 billion increasingly obese, dysgenic descendants of the larger-brained, early modern Homo sapiens, living in fossil-fueled ease and splendor (and ruining their life support system in the process) will NOT persist.

    PS: I am well aware of wealth inequality, however even the poorest contemporary people enjoy a material wealth undreamed of by our stone age ancestors. Regarding Captain Cook’s (fatal) visit to Hawai’i: On 21 January 1779, wrote Samwell, “People are so eager after our Iron that they pick the Sheathing Nails out of the Ship’s bottom, & our Men pull as many as they can conveniently on the inside to give to the Girls, so that between them both was there not a strict Eye kept over them we should have the Ships pulled to pieces at this place.” Not even in the meanest slums of Haiti today would a simple iron nail have such value! Everyone living today is (materially) fabulously wealthy by any reasonable historical standard. Contrariwise, to the proposition that too many people today are culturally, intellectually, physically, morally, and spiritually impoverished I would agree.

  • Sean M. Brooks July 9, 2013, 3:17

    Hmmm, Paul Gilster’s citing of Nick Bostrom’s thought that “a supremely advanced culture could create computer simulations capable of modeling the entire universe” reminded me of Poul Anderson’s novel GENESIS and his four HARVESTS OF STARS books. Both have AI computers said to be capable of modeling the entire cosmos. Both have AIs which include downloads of human beings, a kind of immortality, in a way, for those human beings.

    Sean M. Brooks

  • Dmitri July 9, 2013, 6:36

    Based on John Cramer’s article in Analog, it’s now shown beyond a shadow of a doubt that the fine-structure constant alpha is indeed constant.

    http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.111.010801

    http://newsroom.unsw.edu.au/news/science/white-dwarf-star-throws-light-constant-nature

  • David Cummings July 9, 2013, 7:25

    This article starts out talking about simulations, so it’s appropriate to link to a recent story about the TianHe-2, China’s newest super-computer:

    http://singularityhub.com/2013/07/01/chinas-tianhe-2-doubles-worlds-top-supercomputing-speed-two-years-ahead-of-schedule/

    That article talks about a list of the top 500 supercomputers in the world and ends with this paragraph:

    “If supercomputers continue along their current trajectory, sometime around 2018—maybe a little later—the world’s fastest computer will breach the exaflop/s barrier. That’s five times faster than the entire June 2013 list combined.”

    That’s a lot of computing power. And along with all that growth in computing power at the top end — in the list of supercomputers — there is similar growth in computing power at the consumer level.

    Moore’s Law is still in effect, at least as of now, though there is talk of it hitting a wall in 2014:

    http://news.cnet.com/8301-13924_3-10265373-64.html

    Regardless of that, other technologies — quantum computing is on the horizon — will continue to push the capabilities of hardware ever onward and upward so that the simulations talked about in the beginning of this piece here in Centauri Dreams may definitely become a reality.

    If energy and economic problems are solved so that it’s easy to have a free apartment, free food, free travel and free medical care for life, then what is to prevent a large minority of people — even a majority — from disappearing into a virtual reality tank and living their entire lives inside simulations?

    Everyone can be their own Starship Captain in their own encapsulating universe. And of course there will be networking of these universes, so you can play Battle Starship with your friends in the next tank, or in tanks in the next town, or in tanks in China.

    If that sounds like a depressing thought — humanity disappearing into VR tanks, like a race of slugs living inside a kind of high-tech pond-scum — rest assured that there will always be a minority of us on the outside, trying to get to the real moon, the real Mars, and the real stars. And the technologies that allow everyone else to exist inside their simulations will certainly help.

    I’m going to close my rambling thoughts here with the mention of the word Singularity, a word that has fallen out of fashion these days, but a word which represents something that, like Quantum Computing, is definitely on the horizon.

    The Singularity will change everything and is absolutely unpredictable. It seems far away now (many thought we should have reached it already), but to say that because it hasn’t happened yet it never can happen is to put your head in the sand… which is a kind of ancient simulation of its own.
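The back-of-the-envelope extrapolation behind the quoted exaflop-by-2018 projection is easy to check. The figures below (Tianhe-2's June 2013 Linpack score of roughly 33.9 petaflop/s, and a doubling period of about 13-14 months for the Top500 leader) are assumptions of mine for illustration, not numbers taken from the linked articles:

```python
import math

# Assumed figures: Tianhe-2's June 2013 Linpack score (~33.9 Pflop/s)
# and a rough historical doubling period for the Top500 #1 machine.
current_flops = 33.9e15      # flop/s, June 2013
target_flops = 1e18          # one exaflop/s
doubling_months = 13.5       # assumed doubling period, in months

# Number of doublings needed, then convert to calendar time.
doublings = math.log2(target_flops / current_flops)
years = doublings * doubling_months / 12
print(f"~{doublings:.1f} doublings, ~{years:.1f} years from mid-2013")
```

With these assumed inputs the crossing lands around five and a half years out, i.e. late 2018 or 2019, consistent with the projection quoted above.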

  • stephen July 9, 2013, 12:55

    Our conclusions about whether we live in a simulation are based on our measurements–but our measurements could also be a simulation, unrelated to the dimensions and features of the real universe.

    Our VR simulations could still be useful in anticipating the problems and solutions we might find in our real space travels.

  • spaceman July 9, 2013, 15:46

    Paul,

    In the last part of this post you mentioned that even advanced civilizations who manage to establish themselves in deep space–perhaps even on an interstellar scale–will face their own set of existential risks. In other words, just because a civilization is well established across the light years, there may still be things that could cause its demise, even if it is hard to imagine what those things might be from our very limited current perspective. The first demise-inducing thing could be their own advanced technology: perhaps the same advancements that allow them to spread out could be used in a nefarious (or otherwise) manner to threaten their very existence (antimatter weapons, drones, nanotech, etc.?). The second demise-inducing thing could be a competitive hostile encounter with another advanced civilization, as hinted at in the film “Prometheus”. The third demise-inducing thing, and the one I find the most troubling, could be advanced surveillance technologies used to ferret out those interstellar travelers who chose to remain hidden to avoid the first two demise-inducing things.

    Frankly, these are notions that I’ve either avoided thinking about because I’ve wanted to maintain a sense of hope/optimism, or, I just assumed that a spacefaring civilization would become immortal in line with Carl Sagan’s ideas. Correct me if I am wrong, but it seems like at the end of this post you seem to be challenging a long-held assumption in the interstellar travel community–namely, that unforeseen existential risks could bring down even the most well-established galactic civilizations?? Is (effective) immortality even possible since if it does not exist for a galactic civilization established on multiple worlds, then what sentient entity could attain it?

  • Paul Gilster July 9, 2013, 16:32

    spaceman writes:

    Correct me if I am wrong, but it seems like at the end of this post you seem to be challenging a long-held assumption in the interstellar travel community–namely, that unforeseen existential risks could bring down even the most well-established galactic civilizations?? Is (effective) immortality even possible since if it does not exist for a galactic civilization established on multiple worlds, then what sentient entity could attain it?

    Let’s say I’m playing around with the idea more than advocating it. Clearly, the more widely dispersed a civilization becomes among the stars, the more likely its survival. But I don’t think we would necessarily know the kind of existential threats such a culture might face through factors like those Rees mentions; i.e., its own technologies, perhaps new tools or experiments being misapplied or misunderstood. On the broader level, though, what I’m getting at is that just as we cannot know what knowledge we will gain to apply to emerging problems, we can’t necessarily know what risks we will encounter as we expand into the galaxy. Some of them are bound to surprise us.

  • Geoff July 9, 2013, 21:38

    What could take down an interstellar civilization? Information viruses, that’s what. Imagine something that gets into a solar system’s data networks and persuades the entire local culture to self-immolate – but not before sending copies of itself to other stars at the speed of light.

  • Rob Henry July 10, 2013, 7:06

    Eniac writes
    “I am pretty sure that a machine can never be self-simulating, so the postulate that we are a simulation implies that there is a much “bigger” reality in which the simulator exists. That is just kicking the problem one turtle down, an exercise in futility that would have Occam rotate in his grave.”

    And that very much sums up what I used to believe. My faith in such orthodoxy was shaken to the core by an isolated comment in a book that, if memory serves me rightly, was by Lee Smolin. After previously mentioning the joke that our world was supported by “turtles all the way down,” that incredible remark came without any further detail. I paraphrase it below.

    ‘It is even possible that there was no real physical world anywhere and we were part of an infinite series of nested simulations.’

    At first I thought it easily disproved, in that a simulating computer with infinite time still has a finite size, and thus a finite number of states that it can be in. The simulated world has fewer such states (= Eniac’s “I am pretty sure that a machine can never be self-simulating” remark), but the mistake we both made was assuming that the size of that machine was static. If it were (slowly) growing without limit, then that conclusion would be wrong. Here I also note that, like the equally weird “final anthropic principle,” this would also necessitate certain characteristics of the large-scale structure and expansion of our universe if we were destined to continue the cycle.

    So, we can’t rule out that possibility by logic or mathematics, but it would need a new type of enforcing “God.” Here, instead of enforcing laws that are invariant in our time and space, she would prevent anyone tripping over the cords of those simulators at each of the infinite levels that preceded us.

    Also, if you narrow the problem just to us, then the simulation-of-sentience aspect doesn’t need to go any turtles down at all. It simply states that if there is inevitably far more information processing in the computing world than in the biological world, then there will end up being millions of times (Bostrom’s figures) more simulated intelligences than real ones, and if our current interests are any indication, many (most?) of these will be simulations of our ancestors. Thus we are more likely to be one than to be real.

    Fair enough about Occam’s razor though.
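The arithmetic behind that last step is worth making explicit. A minimal sketch, with purely illustrative numbers (Bostrom argues only that simulated minds would vastly outnumber real ones, so the million-to-one ratio below is an assumption):

```python
# Purely illustrative: "millions of times more" simulated minds than
# real ones is the commenter's paraphrase, not a measured quantity.
real_minds = 1.0
simulated_per_real = 1_000_000

# If a mind cannot tell from the inside which kind it is, its chance of
# being real is just the real share of the total mind population.
p_real = real_minds / (real_minds + simulated_per_real)
print(f"P(we are real) = {p_real:.2e}")
```

Under the stated premise the probability of being one of the rare "real" minds is vanishingly small; the whole force of the argument rests on accepting that population ratio.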

  • Peter Chapin July 10, 2013, 19:27

    I recently read some material online about the issue of vacuum stability. I’m not enough of a physicist to understand the details, but apparently there is some possibility that the vacuum is “metastable” and could decay to an even more stable state. How real this is depends on the precise masses of the Higgs boson and the top quark, neither of which is known exactly enough right now to be sure.

    If the vacuum is metastable then there is apparently a chance that some region of our universe could revert to a more stable configuration. The affected region would, apparently, expand at the speed of light converting an ever increasing volume of space. Unfortunately for us, and everything else, the “new” laws of physics in this volume of space preclude the existence of protons and other conventional particles.

    What would it take to push a volume of space into the more stable configuration (assuming it’s even an issue)? I certainly do not know. But I wonder if it’s something a sufficiently advanced civilization worries about. Perhaps an accident with some ultra-advanced technology could literally destroy the universe (as we know it). Perhaps such an accident has already occurred in a galaxy a billion light years away and the transformed region of space is rushing at us now at the speed of light.

  • Eniac July 10, 2013, 23:04

    @Paul, spaceman, Geoff: You are still talking about “a galactic civilization” as if such a thing could exist. I have asserted previously that in the absence of FTL there can only be many scattered, independent civilizations. Simultaneous self-destruction of all is extremely implausible.

    Do you disagree with my assertion? If so, please tell.

  • Paul Gilster July 11, 2013, 9:32

    Eniac writes:

    I have asserted previously that in the absence of FTL there can only be many scattered, independent civilizations. Simultaneous self-destruction of all is extremely implausible.

    I also find simultaneous self-destruction implausible at this scale. But I’m asking whether we’re missing factors that could make such a thing (however unlikely) possible, the point being that with this many unknowns, predicting all the possible risk factors is beyond our capabilities. New technologies can bring existential threats that we haven’t yet imagined.

  • Eniac July 11, 2013, 20:48

    You are right, there could be dangers we cannot imagine, but it is kind of hard to talk about them if we can’t imagine them…

    My point, though, was directed less at the notion of extinction than at the notion of a “galactic civilization”. That notion is commonly bandied about, but it is deeply flawed. Because of the light-speed lag, even the most powerful super-intelligences would, in aggregate, cooperate in the galaxy no more than bacteria in a Petri dish. In fact, allowing that said super-intelligences probably think super-fast, cooperative groups may even be confined to individual planetary bodies (or habitats), for all practical purposes.

    This Earth is the last place we can ever all be together in the same boat.

  • Paul Gilster July 12, 2013, 9:48

    Like you, Eniac, I’m deeply skeptical of any kind of unified galactic civilization, though I wouldn’t go so far as to limit a unified culture just to our one planet. There may be advantages in trade and cooperation that would keep an interplanetary civilization more or less unified. But expansion even to the nearby stars makes keeping a common governance extremely problematic!

  • Eniac July 12, 2013, 23:38

    True, I should have said “This solar system is the last place we can ever all be together in the same boat”. BUT, if our descendants are really going to be ultra-fast, ultra-smart AI living in superconducting circuits, they may perceive seconds as we do hours or even years, making both the communication lag and interstellar travel times much more of a barrier, rather than less, as is commonly held.
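That scaling point can be made concrete with a toy calculation. The distance and the thousand-fold speed-up below are illustrative assumptions, not figures from the comment:

```python
# Assumed figures: Alpha Centauri at ~4.37 light years, and an AI mind
# running 1000x faster than a human one (both purely illustrative).
distance_ly = 4.37
speedup = 1000

round_trip_years = 2 * distance_ly           # light-speed signal lag
subjective_years = round_trip_years * speedup
print(f"{round_trip_years:.1f} objective years feels like "
      f"~{subjective_years:,.0f} subjective years at {speedup}x")
```

An 8.7-year round trip to the nearest star, already long for us, would stretch into millennia of subjective waiting for such a mind, which is the sense in which faster thought makes the galaxy bigger rather than smaller.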

  • Geoff July 15, 2013, 20:52

    @Eniac: I agree that an “interstellar civilization” cannot really be much more than a scattered collection of separate civilizations based around individual stars. Think of the medieval Old World: civilizations in Europe, China, India, and Africa were vaguely aware of each other’s existence, but with round-trip communication lags measured in years, there wasn’t much cultural intercourse.

    But I disagree with the idea of such civilizations being truly “independent” — unless they had independent origins, or have somehow lost the interstellar travel/communication technologies they used to set themselves up in the first place. As long as there are information-carrying photons being exchanged between civilizations, one can imagine threats (parasitic AIs, information viruses) that could spread from one to another, and so destroy them all.

    Perhaps the only way to be safe is to choose not to communicate with one’s interstellar neighbours; a possible explanation for Fermi’s paradox there.