As the Thanksgiving holiday approaches here in the US, I’m looking at a new paper in the journal Anthropocene that calls the attention of those studying sustainability to the discipline of astrobiology. At work here is a long-term perspective on planetary life that takes into account what a robust technological society can do to affect it. Authors Woodruff Sullivan (University of Washington) and Adam Frank (University of Rochester) make the case that our era may not be the first time “…where the primary agent of causation is knowingly watching it all happen and pondering options for its own future.”
How so? The answer calls for a look at the Drake Equation, the well-known synthesis by Frank Drake of the factors that determine the number of intelligent civilizations in the galaxy. What exactly is the average lifetime of a technological civilization? 500 years? 50,000 years? Much depends upon the answer, for it helps us calculate the likelihood that other civilizations are out there, some of them perhaps making it through the challenges of adapting to technology and using it to spread into the cosmos. A high number would also imply that we too can make it through the tricky transition and hope for a future among the stars.
Sullivan and Frank believe that even if the chances of a technological society emerging are as low as one in 1000 trillion, there will still have been 1000 instances of such societies undergoing transitions like ours in “our local region of the cosmos.” The authors refer to extraterrestrial civilizations as Species with Energy-Intensive Technology (SWEIT) and discuss issues of sustainability that begin with planetary habitability and extend to mass extinctions and their relation to today’s Anthropocene epoch, as well as changes in atmospheric chemistry. Here they compare what we see today with previous eras of climate alteration, such as the so-called Great Oxidation Event, the dramatic increase in oxygen levels (by a factor of at least 10,000) that occurred some 2.4 billion years ago.
Out of this comes a suggested research program that models SWEIT evolution and the evolution of the planet on which it arises, using dynamical systems theory as a theoretical methodology. As with our own culture, these ‘trajectories’ (development paths) are tied to the interactions between the species and the planet on which it emerges. From the paper:
Each SWEIT’s history defines a trajectory in a multi-dimensional solution space with axes representing quantities such as energy consumption rates, population and planetary systems forcing from causes both “natural” and driven by the SWEIT itself. Using dynamical systems theory, these trajectories can be mathematically modeled in order to understand, on the astrobiology side, the histories and mean properties of the ensemble of SWEITs, as well as, on the sustainability science side, our own options today to achieve a viable and desirable future.
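The kind of trajectory the authors describe can be sketched with a deliberately simple dynamical system. The equations below are my own toy illustration, not the paper’s model: population grows logistically while planetary forcing, driven by the population’s energy use, feeds back as added mortality.

```python
# A toy SWEIT trajectory (my own illustration, not the paper's equations):
# population N grows logistically; planetary forcing F rises with energy
# use and feeds back on the population as extra mortality.

def simulate(steps=5000, dt=0.01):
    N, F = 0.01, 0.0      # population and accumulated forcing (arbitrary units)
    r, d = 1.0, 0.5       # growth rate; mortality per unit forcing
    e, decay = 1.0, 0.1   # forcing per unit population; forcing relaxation rate
    path = []
    for _ in range(steps):
        dN = r * N * (1 - N) - d * F * N   # logistic growth minus forcing-driven losses
        dF = e * N - decay * F             # forcing rises with energy use, relaxes slowly
        N += dt * dN
        F += dt * dF
        path.append((N, F))
    return path

path = simulate()
print(path[-1])   # the trajectory settles toward a reduced steady state
```

Even this crude system traces out the solution-space behavior the paper has in mind: an early boom, a forcing-driven correction, and a settling toward (or collapse away from) a sustainable state, depending on the parameters chosen.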
Image: Schematic of two classes of trajectories in SWEIT solution space. Red line shows a trajectory representing population collapse. Blue line shows a trajectory representing sustainability. Credit: Michael Osadciw/University of Rochester.
The authors point out that other methodologies could also be called into play, in particular network theory, which may help illuminate the many routes that can lead to system failures. Using these modeling techniques could allow us to explore whether the atmospheric changes our own civilization is seeing are an expected outcome for technological societies based on the likely energy sources being used in the early era of technological development. Rapid changes to Earth systems are, the authors note, not a new phenomenon, but as far as we know, this is the first time where the primary agent of causation is watching it happen.
Sustainability emerges as a subset of the larger frame of habitability. Finding the best pathways forward involves the perspectives of astrobiology and planetary evolution, both of which imply planetary survival but no necessary survival for a particular species. Although we know of no extraterrestrial life forms at this time, this does not stop us from proceeding, because any civilization using energy to work with technology is also generating entropy, a fact that creates feedback effects on the habitability of the home planet that can be modeled.
Image: Plot of human population, total energy consumption and atmospheric CO2 concentration from 10,000 BCE to today as trajectory in SWEIT solution space. Note the coupled increase in all 3 quantities over the last century. Credit: Michael Osadciw/University of Rochester.
Modeling evolutionary paths may help us understand which of these are most likely to point to long-term survival. In this view, our current transition is a phase forced by the evolutionary route we have taken, demanding that we learn how a biosphere adapts once a species with an energy-intensive technology like ours emerges. The authors argue that this perspective “…allows the opportunities and crises occurring along the trajectory of human culture to be seen more broadly as, perhaps, critical junctures facing any species whose activity reaches significant level of feedback on its host planet (whether Earth or another planet).”
The paper is Frank and Sullivan, “Sustainability and the astrobiological perspective: Framing human futures in a planetary context,” Anthropocene Vol. 5 (March 2014), pp. 32-41 (full text). This news release from the University of Washington is helpful, as is this release from the University of Rochester.
Apropos of yesterday’s post questioning what missions would follow up the current wave of planetary exploration, the Jet Propulsion Laboratory has released a new view of Jupiter’s intriguing moon Europa. The image, shown below, looks familiar because it was published in 2001, though at lower resolution and with considerable color enhancement. The new mosaic gives us the largest portion of the moon’s surface at the highest resolution, and without the color enhancement, so that it approximates what the human eye would see.
The mosaic of images that go into this view was put together in the late 1990s using imagery from the Galileo spacecraft, which again makes me thankful for Galileo, a mission that succeeded despite all its high-gain antenna problems, and anxious for renewed data from this moon. The original data for the mosaic were acquired by the Galileo Solid-State Imaging experiment on two different orbits through the system of Jovian moons, the first in 1995, the second in 1998.
NASA is also offering a new video explaining why the interesting fracture features merit investigation, given the evidence for a salty subsurface ocean and the potential for at least simple forms of life within. It’s a vivid reminder of why Europa is a priority target.
Image (click to enlarge): The puzzling, fascinating surface of Jupiter’s icy moon Europa looms large in this newly-reprocessed color view, made from images taken by NASA’s Galileo spacecraft in the late 1990s. This is the color view of Europa from Galileo that shows the largest portion of the moon’s surface at the highest resolution. Credit: NASA/Jet Propulsion Laboratory.
Areas that appear blue or white are thought to be relatively pure water ice, with the polar regions (left and right in the image — north is to the right) bluer than the equatorial latitudes, which are more white. This JPL news release notes that the variation is thought to be due to differences in ice grain size in the two areas. The long cracks and ridges on the surface are interrupted by disrupted terrain that indicates broken crust that has re-frozen. Just what do the reddish-brown fractures and markings have to tell us about the chemistry of the Europan ocean, and the possibility of materials cycling between that ocean and the ice shell?
Even though the Philae lander’s arrival on the surface of comet 67P/Churyumov-Gerasimenko did not go as planned, the accomplishment of the Rosetta mission is immense. We have a probe on the surface that was able to collect 57 hours’ worth of data before going into hibernation, and a mother ship that will stay with the comet as it moves ever closer to the Sun (the comet’s closest approach will be on August 13 of next year).
What a shame the lander’s ‘docking’ system, involving reverse thrusters and harpoons to fasten it to the surface, malfunctioned, leaving it to bounce twice before it landed with solar panels largely shaded. But we do know that the Philae lander was able to detect organic molecules on the cometary surface, with analysis of the spectra and identification of the molecules said to be continuing. The comet appears to be composed of water ice covered in a thin layer of dust. There is some possibility the lander will revive as the comet moves closer to the Sun, according to Stephan Ulamec (DLR German Aerospace Center), the mission’s Philae Lander Manager, and we can look forward to reams of data from the still functioning Rosetta.
What an audacious and inspiring mission this first soft landing on a comet has been. Congratulations to all involved at the European Space Agency as we look forward to continuing data return as late as December 2015, four months after the comet’s closest approach to the Sun.
Image: The travels of the Philae lander as it rebounds from its touchdown on Comet 67P/Churyumov Gerasimenko. Credit: ESA/Rosetta/Philae/ROLIS/DLR.
A Wave of Discoveries Pending
Rosetta used gravitational assists around both Earth and Mars to make its way to the target, hibernating for two and a half years to conserve power during the long journey. Now we wait for the wake-up call to another distant probe, New Horizons, as it comes out of hibernation for the last time on December 6. Since its January 2006 launch, the Pluto-bound spacecraft has spent 1,873 days in hibernation, fully two-thirds of its flight time, in eighteen hibernation periods ranging from 36 days to 202 days, a way to reduce wear on the spacecraft’s electronics and to free up an overloaded Deep Space Network for other missions.
When New Horizons transmits a confirmation that it is again in active mode, the signal will take four hours and 25 minutes to reach controllers on Earth, at a time when the spacecraft will be more than 2.9 billion miles from the Earth, and less than twice the Earth-Sun distance from Pluto/Charon. According to the latest report from the New Horizons team, direct observations of the target begin on January 15, with closest approach on July 14.
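A quick check of that signal delay is straightforward: divide the distance by the speed of light. At exactly 2.9 billion miles the one-way time comes out a bit over four hours, consistent with the quoted 4 hours 25 minutes at the spacecraft’s slightly greater actual distance:

```python
# One-way light travel time at the quoted New Horizons distance.
miles_to_km = 1.609344
distance_km = 2.9e9 * miles_to_km   # "more than 2.9 billion miles"
c_km_s = 299792.458                 # speed of light

delay_s = distance_km / c_km_s
hours, rem = divmod(delay_s, 3600)
print(f"{int(hours)} h {rem/60:.0f} min")   # -> 4 h 19 min
```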
Nor is exploration slowing down in the asteroid belt, with the Dawn mission on its way to Ceres. Arrival is scheduled for March of 2015. Eleven scientific papers were published last week in the journal Icarus, including a series of high-resolution geological maps of Vesta, which the spacecraft visited between July of 2011 and September of 2012.
Image (click to enlarge): This high-resolution geological map of Vesta is derived from Dawn spacecraft data. Brown colors represent the oldest, most heavily cratered surface. Purple colors in the north and light blue represent terrains modified by the Veneneia and Rheasilvia impacts, respectively. Light purples and dark blue colors below the equator represent the interior of the Rheasilvia and Veneneia basins. Greens and yellows represent relatively young landslides or other downhill movement and crater impact materials, respectively. This map unifies 15 individual quadrangle maps published this week in a special issue of Icarus. Credit: NASA/JPL.
Geological mapping develops the history of the surface from analysis of factors like topography, color and brightness, a process that took two and a half years to complete. We learn that several large impacts, particularly the Veneneia and Rheasilvia impacts in Vesta’s early history and the much later Marcia impact, have been transformative in the development of the small world. Panchromatic images and seven bands of color-filtered images from the spacecraft’s framing camera, provided by the Max Planck Society and the German Aerospace Center, helped to create topographic models of the surface that could be used to interpret Vesta’s geology. Crater statistics fill out the timescale as scientists date the surface.
With a comet under active investigation, an asteroid thoroughly mapped, a spacecraft on its way to the largest object in the asteroid belt, and an outer system encounter coming up for mid-summer of 2015, we’re living in an exciting time for planetary discovery. But we need to keep looking ahead. What follows New Horizons to the edge of the Solar System and beyond? What assets should we be hoping to position around Jupiter’s compelling moons? Is a sample-return mission through the geysers of Enceladus feasible, and what about Titan? Let’s hope Rosetta and upcoming events help us build momentum for following up our current wave of deep space exploration.
Back in the 1970s, Peter Glaser patented a solar power satellite that would supply energy from space to the Earth, one involving space platforms whose cost was one of many issues that put the brakes on the idea, although NASA did revisit the concept in the 1980s and ’90s. But changing technologies may help us make space-based power more manageable, as John Mankins (Artemis Innovations) told his audience at the Tennessee Valley Interstellar Workshop.
What Mankins has in mind is SPS-ALPHA (Solar Power Satellite by means of Arbitrarily Large Phased Array), a system of his devising that uses modular and reconfigurable components to create large space systems in the same way that ants and bees form elegant and long-lived ecosystems on Earth. The goal is to harvest sunlight using thin-film reflector surfaces as part of an ambitious roadmap for solar power. Starting small — using small satellites and beginning with propulsion stabilization modules — we begin scaling up, one step at a time, to full-sized solar power installations. The energies harvested are beamed to a receiver on the ground.
Image: An artist’s impression of SPS-ALPHA at work. Credit: John Mankins.
All this is quite a change from space-based solar power concepts from earlier decades, which demanded orbital factories to construct and later maintain the huge platforms needed to harvest sunlight. But since the late 1990s, intelligent modular systems have come to the fore as the tools of choice. Self-assembly involving modular 10 kg units possessed of their own artificial intelligence, Mankins believes, will one day allow us to create structures of sufficient size to essentially maintain themselves. Thin-film mirrors to collect sunlight keep the mass down, as does the use of carbon nanotubes in composite structures.
There is no question that we need the energy if we’re thinking in terms of interstellar missions, though some would argue that fusion may eventually resolve the problem (I’m as dubious as ever on that idea). Mankins harked back to the Daedalus design, estimating its cost at $4 trillion and noting that it would require an in-space infrastructure of huge complexity. Likewise Starwisp, a Robert Forward beamed-sail design, which would need to power up beamers in close solar orbit to impart energy to the spacecraft. Distance and time translate into energy and power.
Growing out of the vast resources of space-based solar power is a Mankins idea called Star Sling, in which SPS-ALPHA feeds power to a huge maglev ring as a future starship accelerates. Unlike a fusion engine or a sail, the Star Sling allows acceleration times of weeks, months or even years, its primary limitation being the tensile strength of the material in the radial acceleration direction (a fraction of what would be needed in a space elevator, Mankins argues). The goal is not a single starship but a stream of 50 or 100 one to ten ton objects sent one after another to the same star, elements that could coalesce and self-assemble into a larger starship along the way.
Like SPS-ALPHA itself, Star Sling also scales up, beginning with an inner Solar System launcher that helps us build the infrastructure we’ll need. Also like SPS-ALPHA, a Star Sling can ultimately become self-sustaining, Mankins believes, perhaps within the century:
“As systems grow, they become more capable. Consider this a living mechanism, insect-class intelligences that recycle materials and print new versions of themselves as needed. The analog is a coral atoll in the South Pacific. Our systems are immortal as we hope our species will be.”
All of this draws from a 2011-2012 Phase 1 project for the NASA Innovative Advanced Concepts program on SPS-ALPHA, one that envisions “…the construction of huge platforms from tens of thousands of small elements that can deliver remotely and affordably 10s to 1000s of megawatts using wireless power transmission to markets on Earth and missions in space.” The NIAC report is available here. SPS-ALPHA is developed in much greater detail in Mankins’ book The Case for Space Solar Power.
Ultra-Lightweight Probes to the Stars
Knowing of John Rather’s background in interstellar technologies (he examined Robert Forward’s beamed sail concepts in important work in the 1970s, and has worked with laser ideas for travel and interstellar beacons in later papers), I was anxious to hear his current thoughts on deep space missions. I won’t go into the details of Rather’s long and highly productive career at Oak Ridge, Lawrence Livermore and the NRAO, but you can find a synopsis here, where you’ll also see how active this kind and energetic scientist remains.
Like Mankins, Rather (Rather Creative Innovations Group) is interested in structures that can build and sustain themselves. He invoked self-replicating von Neumann machines as a way we might work close to the Sun while building the laser installations needed for beamed sails. But of course self-replication plays out across the whole spectrum of space-based infrastructure. As Rather noted:
“Tiny von Neumann machines can beget giant projects. Our first generation projects can include asteroid capture and industrialization, giving us the materials to construct lunar colonies and expand to Mars and the Jovian satellites. We can see some of the implementing technologies now in the form of MEMS – micro electro-mechanical systems – along with 3D printers. As we continue to explore tiny devices that build subsequent machines, we can look toward expanding from colonization of our own Solar System into the problems of interstellar transfer.”
Building our system infrastructure requires cheap access to space. Rather’s concept is called StarTram, an electromagnetic accelerator that can launch unmanned payloads at Mach 10 (pulling 30 g’s at launch). The key here is to drop launch costs from roughly $20,000 per kilogram to $100 per kilogram. Using these methods, we can turn our attention to asteroid materials that can, via self-replicating von Neumann technologies, build solar concentrators, lightsails and enormous telescope apertures (imagine a Forward-class lens 1000 meters in radius). 100-meter solar concentrators could change asteroid orbits for subsequent mining.
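The Mach 10, 30 g figures imply a track of considerable but not absurd length. A quick constant-acceleration sketch (taking Mach 10 at the sea-level sound speed, an approximation on my part) gives roughly 20 kilometers:

```python
# Rough kinematics for a Mach-10, 30 g electromagnetic launcher.
# Mach 10 is taken at the sea-level sound speed (~340 m/s), an approximation.
v = 10 * 340.0   # launch velocity, m/s
a = 30 * 9.81    # 30 g in m/s^2

track_length = v**2 / (2 * a)   # constant-acceleration run-up distance
launch_time = v / a
print(f"{track_length/1000:.1f} km in {launch_time:.1f} s")  # -> 19.6 km in 11.6 s
```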
This is an expansive vision that comprises a blueprint for an eventual interstellar crossing. With reference to John Mankins’ Star Sling, Rather mused that a superconducting magnetically inflated cable 50,000 kilometers in radius could be spun around the Earth, allowing the kind of solar power concentrator just described to power up the launcher. Accelerating gradually, a 30 kg payload could reach three percent of lightspeed within 300 days on its way to the stars. The macro-engineering envisioned by Robert Forward still lives, to judge from both Rather’s and Mankins’ presentations, transformed by what may one day be our ability to create the largest of structures from tiny self-replicating machines.
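How gentle is that run-up? Assuming constant acceleration over the 300 days (my simplification), the probe pulls only a few hundredths of a g, though it travels hundreds of AU before reaching cruise speed:

```python
# A 300-day run-up to 3% of lightspeed, assuming constant acceleration.
c = 299_792_458.0
v = 0.03 * c        # target speed, m/s
t = 300 * 86_400    # 300 days in seconds

a = v / t                            # required acceleration
distance_au = 0.5 * a * t**2 / 1.496e11
print(f"{a:.3f} m/s^2 (~{a/9.81:.3f} g), covering ~{distance_au:.0f} AU")
# -> 0.347 m/s^2 (~0.035 g), covering ~779 AU
```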
The Solar Power Pipeline
Back when I was writing Centauri Dreams in 2004, I spent some time at Marshall Space Flight Center in Huntsville interviewing people like Les Johnson and Sandy Montgomery, who were both in the midst of the center’s work on advanced propulsion. A major player in the effort that brought us NanoSail-D, Sandy has been interstellar-minded all along, as I discovered the first time I talked to him. I had asked whether people would be willing to turn their back on everything they ever knew to embark on a journey to another star, and he reminded me of how many people had left their homes in our own history to voyage to and live at the other side of the world.
Image: Edward “Sandy” Montgomery, NanoSail-D payload manager at Marshall (in the red shirt), and Charlie Adams, NanoSail-D deputy payload manager, Gray Research, Huntsville, Ala., look on as Ron Burwell and Rocky Stephens, test engineers at Marshall, attach the NanoSail-D satellite to the vibration test table. In addition to characterizing the satellite’s structural dynamic behavior, a successful vibration test also verifies the structural integrity of the satellite, and gauges how the satellite will endure the harsh launch environment. Credit: NASA/MSFC/D. Higginbotham.
We’re a long way from making such decisions, of course, but Montgomery’s interest in Robert Forward’s work has stayed active, and in Oak Ridge he described a way to power up a departing starship that didn’t have to rely on Forward’s 1000-kilometer Fresnel lens in the outer Solar System. Instead, Montgomery points to building a power collector in Mercury orbit that would use optical and spatial filtering to turn sunlight into a coherent light source and stream it out into the Solar System through a series of relays built out of lightweight gossamer structures.
Work the calculations as Montgomery has and you wind up with 23 relays between Earth orbit and the Sun, with more extending deeper into the Solar System. Sandy calls this a ‘solar power pipeline’ that would give us maximum power for a departing sailcraft. The relaying of coherent light has been demonstrated already in experiments conducted by the Department of Defense, in a collector and re-transmitter system developed by Boeing and the US Air Force. Although some loss occurs because of jitter and imperfect coatings, the concept is robust enough to warrant further study. I suspect Forward would have been eager to run the calculations on this idea.
Wrapping Up TVIW
Les Johnson closed the formal proceedings at TVIW late on the afternoon of the 11th, and that night held a public outreach session, where I gave a talk running through the evolution of interstellar propulsion concepts in the last sixty years. Following that was a panel with science fiction writers Sarah Hoyt, Tony Daniel, Baen Books’ Toni Weisskopf and Les Johnson on which I, a hapless non-fiction writer, was allowed to have a seat. A book signing after the event made for good conversations with a number of Centauri Dreams readers.
All told, this was an enthusiastic and energizing conference. I’m looking forward to TVIW 2016 in Chattanooga. What a pleasure to spend time with these people.
People keep asking what I think about Christopher Nolan’s new film ‘Interstellar.’ The answer is that I haven’t seen it yet, but plan to early next week. Some of the attendees of the Tennessee Valley Interstellar Workshop were planning to see the film on the event’s third day, but I couldn’t stick around long enough to join them. I’ve already got Kip Thorne’s The Science of Interstellar queued up, but I don’t want to get into it before actually seeing the film. I’m hoping to get Larry Klaes, our resident film critic, to review Nolan’s work in these pages.
Through the Wormhole
Wormholes are familiar turf to Al Jackson, who spoke at TVIW on the development of our ideas on the subject in science and in fiction. Al’s background in general relativity is strong, and because I usually manage to get him aside for conversation at these events, I get to take advantage of his good humor by asking what must seem like simplistic questions that he always answers with clarity. Even so, I’ve asked both Al and Marc Millis to write up their talks in Oak Ridge, because both of them get into areas of physics that push beyond my skillset.
Al’s opening slide was what he described as a ‘traversable wormhole,’ and indeed it was, a shiny red apple with a wormhole on its face. What we really want to do, of course, is to connect two pieces of spacetime, an idea that has percolated through Einstein’s General Relativity down through Schwarzschild, Wheeler, Morris and Thorne. The science fiction precedents are rich, with a classic appearance in Robert Heinlein’s Starman Jones (1953), the best of his juveniles, in my opinion. Thus our hero Max explains how to get around the universe:
You can’t go faster than light, not in our space. If you do, you burst out of it. But if you do it where space is folded back and congruent, you pop right back into our space again but it’s a long way off. How far off depends on how it’s folded. And that depends on the mass in the space, in a complicated fashion that can’t be described in words but can be calculated.
I chuckled when Al showed this slide because the night before we had talked about Heinlein over a beer in the hotel bar and discovered our common admiration for Starman Jones, whose description of ‘astrogators’ — a profession I dearly wanted to achieve when I read this book as a boy — shows how important it is to be precisely where you need to be before you go “poking through anomalies that have been calculated but never tried.” Great read.
If natural wormholes exist, we do have at least one paper on how they might be located, a team effort from John Cramer, Robert Forward, Michael Morris, Matt Visser, Gregory Benford and Geoffrey Landis. As opposed to gravitational lensing, where the image of a distant galaxy has been magnified by the gravitational influence of an intervening galaxy, a wormhole should show a negative mass signature, which means that it defocuses light instead of focusing it.
Al described what an interesting signature this would be to look for. If the wormhole moves between the observer and another star, the light would suddenly defocus, but as it continues to cross in front of the star, a spike of light would occur. So there’s your wormhole detection: Two spikes of light with a dip in the middle, an anomalous and intriguing observation! It’s also one, I’ll hasten to add, that’s never been found. Maybe we can manufacture wormholes? Al described plucking a tiny wormhole from the quantum Planck foam, the math of which implies we’d have to be way up the Kardashev scale to pull off any such feat. For now, about the best we can manage is to keep our eyes open for that astronomical signature, which would at least indicate wormholes actually exist. The paper cited above, by the way, is “Natural Wormholes as Gravitational Lenses,” Physical Review D (March 15, 1995): pp. 3124–27.
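That double-spike-and-dip signature is easy to sketch numerically. The toy light curve below uses the negative-mass point-lens magnification from the Cramer et al. paper as I understand it: zero flux for impact parameters inside two Einstein radii, with diverging spikes at the edge of that zone.

```python
# Toy light curve for a negative-mass lens transit (a sketch based on the
# point-lens magnification in the Cramer et al. paper: zero flux inside
# 2 Einstein radii, diverging spikes at the boundary).
import math

def magnification(u):
    """Total magnification at dimensionless impact parameter u (Einstein radii)."""
    if abs(u) <= 2.0:
        return 0.0   # source fully defocused: the central dip
    u = abs(u)
    return (u * u - 2) / (u * math.sqrt(u * u - 4))

# Sample the source track: far away -> spike -> dip -> spike -> far away
curve = [magnification(u) for u in (-6, -2.05, 0, 2.05, 6)]
print([round(m, 2) for m in curve])   # -> [1.0, 2.39, 0.0, 2.39, 1.0]
```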
Enter the Space Drive
To dig into wormholes, the new Thorne book would probably be a good starter, though I base this only on reviews, as I haven’t gotten into it yet. Frontiers of Propulsion Science (2009) also offers a look into the previous scholarship on wormhole physics and if you really want to dig deep, there’s Matt Visser’s Lorentzian Wormholes: From Einstein to Hawking (American Institute of Physics, 1996). I wanted to talk wormholes with Marc Millis, who co-edited the Frontiers of Propulsion Science book with Eric Davis, but the tight schedule in Oak Ridge and Marc’s need to return to Ohio forced a delay.
In any event, Millis has been working on space drives rather than wormholes, the former being ways of moving a spacecraft without rockets or sails. Is it possible to make something move without expelling any reaction mass (rockets) or in some way beaming momentum to it (lightsails)? We don’t know, but the topic gets us into the subject of inertial frames — frames of reference defined by the fact that the law of inertia holds within them, so that objects observed from this frame will resist changes to their velocity. Juggling balls on a train moving at a constant speed (and absent visual or sound cues), you could not determine whether the train was in motion or parked. The constant-velocity train is considered an inertial frame of reference.
Within the inertial frame, in other words, Newton’s laws of motion hold. An accelerating frame of reference is considered a non-inertial frame because the law of inertia is not maintained in it. If the conductor pulls the emergency brake on the train, you are pushed forward suddenly in this decelerating frame of reference. From the standpoint of the ground (an inertial frame), you aboard the train simply continue with your forward motion when the brake is applied.
We have no good answers on what causes an inertial frame to exist, an area where unsolved physics regarding the coupling of gravitation and inertia to other fundamental forces leaves open the possibility that one could be used to manipulate the other. We’re at the early stages of such investigations, asking whether an inertial frame is an intrinsic property of space itself, or whether it somehow involves, as Ernst Mach believed, a relationship with all matter in the universe. That leaves us in the domain of thought experiments, which Millis illustrated in a series of slides that I hope he will discuss further in an article here.
Fusion’s Interstellar Prospects
Rob Swinney, who is the head of Project Icarus, used his time at TVIW to look at a subject that would seem to be far less theoretical than wormholes and space drives, but which still has defeated our best efforts at making it happen. The subject is fusion and how to drive a starship with it. The Daedalus design of the 1970s was based on inertial confinement fusion, using electron beams to ignite fusion in fuel pellets of deuterium and helium-3. Icarus is the ongoing attempt to re-think that early Daedalus work in light of advances in technology since.
But like Daedalus, Icarus will need to use fusion to push the starship to interstellar speeds. Robert Freeland and Andreas Hein, both active players in Icarus, were also in Oak Ridge, and although Andreas was involved with a different topic entirely (see yesterday’s post), Robert was able to update us on the current status of the Icarus work. He illustrated one possibility using Z-pinch methods that can confine and heat a plasma to fusion conditions.
Three designs are still in play at Icarus, with the Z-pinch version (Freeland dubbed it ‘Firefly’ because of the intense glow of waste heat that would be generated) relying on the same Z-pinch phenomenon we see in lightning. The trick with Z-pinch is to get the plasma moving fast enough to create a pinch that is free of hydrodynamic instabilities, but Icarus is tracking ongoing work at the University of Washington on the matter. As to fuel, the team has abandoned deuterium/helium-3 in favor of deuterium/deuterium fusion, a choice that must flow from the problem of obtaining the helium-3, which Daedalus assumed would be mined at Jupiter.
Freeland described the Firefly design as having an exhaust velocity of 10,000 kilometers per second, with a 25 year acceleration period to reach cruise speed. The cost: $35 billion a year spread out over 15 years. I noted in Rob Swinney’s talk that the Icarus team is also designing interstellar precursor missions, with the idea of building a roadmap. All told, 35,000 hours of volunteer research are expected to go into this project (I believe Daedalus was 10,000), with the goal of not just reaching another star but decelerating at the target to allow close study.
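A 10,000 kilometer per second exhaust velocity tells you a lot by itself via the Tsiolkovsky rocket equation. The cruise speed below is an assumed figure for illustration (the talk’s target speed isn’t quoted above), and the deceleration requirement doubles the total delta-v:

```python
# Mass ratios implied by a 10,000 km/s exhaust velocity (Tsiolkovsky).
# The 4.5%-of-c cruise speed is an assumed figure for illustration only.
import math

v_exhaust = 10_000.0                  # km/s, from Freeland's Firefly numbers
v_cruise = 0.045 * 299_792.458        # assumed cruise speed, km/s

ratio_accel = math.exp(v_cruise / v_exhaust)       # boost only
ratio_full = math.exp(2 * v_cruise / v_exhaust)    # boost plus deceleration at target
print(f"{ratio_accel:.1f}x for boost, {ratio_full:.1f}x with deceleration")
# -> 3.9x for boost, 14.9x with deceleration
```

The exponential is the whole story: decelerating at the target, as Icarus intends, squares the mass ratio, which is why the fuel load dominates these designs.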
Image: Artist’s conception of Icarus Pathfinder. Credit: Adrian Mann.
Let me also mention a design from the past that antedates Daedalus (itself begun in 1973). Brent Ziarnick, a major in the US Air Force, described the ARPA-funded work on nuclear pulse propulsion that grew into Orion, pursued at General Atomics from 1958 to 1965. Orion was designed around the idea of setting off nuclear charges behind the spacecraft, which would be protected by an ablation shield and a shock absorber system to cushion the blasts.
We’ve discussed Orion often in these pages as a project that might have opened up the outer Solar System, and conceivably produced an interstellar prototype if Freeman Dyson’s 1968 paper on a long-haul Orion driven by fusion charges had been followed up. Ziarnick’s fascinating talk explained how the military had viewed Orion. Think of an enormous ‘battleship’ of a spacecraft that could house a nuclear deterrent in a place that Soviet weaponry couldn’t reach. At least, that was how some saw the Cold War possibilities in the early years of the 1960s.
The military was at this time looking at stretch goals that went way beyond the current state of the art in Project Mercury, and had considered systems like Dyna-Soar, an early spaceplane design. With a variety of manned space ideas in motion and nuclear thermal rocket engines under investigation, a strategic space base that would be invulnerable to a first strike won support all the way up the command chain to Thomas Power at the Strategic Air Command and Curtis LeMay, who was then Chief of Staff of the USAF. Ziarnick followed Orion’s budget fortunes as it ran into opposition from Robert McNamara and ultimately Harold Brown, who worked under McNamara as director of defense research and engineering from 1961 to 1965.
Orion would eventually be derailed by the Limited Test Ban Treaty of 1963, which banned nuclear detonations in the atmosphere and in space, but the idea still has its proponents as a way of pushing huge payloads to deep space. Ziarnick called Orion ‘Starfleet Deferred’ rather than ‘Starflight Denied,’ and noted the possibility of renewed testing of pulse propulsion without nuclear pulse units. The military lesson from Orion:
“The military is not against high tech and will support interstellar research if they can find a defense reason to justify it. We learn from Orion that junior officers can convince senior leaders, that operational commanders like revolutionary tech. Budget hawks distrust revolutionary tech. Interstellar development will be decided by political, international, defense and other concerns.”
Several other novel propulsion ideas, as well as a book signing event, will wrap up my coverage of the Tennessee Valley Interstellar Workshop tomorrow.