A Triton Lander Mission

What would be our next step in the exploration of the outer system once New Horizons has visited one or more Kuiper Belt objects (KBOs)? One intriguing target, with a nearby ice giant to recommend it, is Triton, Neptune’s unusual moon, which was imaged up close only once, by Voyager 2 in 1989. The views were spectacular, but at the time of the encounter most of Triton’s northern hemisphere remained unseen because it was in darkness. Only one hemisphere showed up clearly as the spacecraft passed the moon at a distance of 40,000 kilometers.

Our next visit should tell us much more, but we’re still working out the concept. Thus Steven Oleson’s Phase II grant from NASA’s Innovative Advanced Concepts (NIAC) office. Oleson (NASA GRC) calls the idea Triton Hopper. In his Phase I study, he identified the various risks of the mission, analyzing its performance and its ability to collect propellant. For Triton Hopper, moving from point to point, would rely on a radioisotope engine that collects nitrogen ice and uses it as propellant, mining the moon’s surface to keep the mission viable.

Image: Graphic depiction of Triton Hopper: Exploring Neptune’s Captured Kuiper Belt Object. Credit: S. Oleson.

Triton is interesting on a number of levels, one of which has received recent examination. As with other outer system moons, we’re learning that there may be a liquid ocean beneath the crust. Let me quote a short presentation from Terry Hurford (NASA GSFC) on this:

There is compelling evidence that Triton should be considered an ocean world. Fractures observed on Triton’s surface are consistent in location and orientation with tidal stresses produced by the decay of Triton’s orbit as it migrates toward Neptune. Tidal stresses can only reach levels to fracture the surface if a subsurface ocean exists; a solid interior will result in smaller tidal stress and likely no tectonic activity. Tidal stresses therefore provide a mechanism for fracturing and volcanism analogous to activity observed on Enceladus and, possibly, Europa. Given that Triton’s interior has dissipated a tremendous amount of energy as heat, which likely drove differentiation, and that this heat may remain until the present day, an energy source likely exists to drive geologically recent activity. Moreover, it is possible that tidal volcanism has facilitated, if not dictated, the expression of this activity on Triton’s surface.

Triton’s surface seems to be in geological motion, given how few craters show up in the Voyager views. We can also factor in that this is the only large moon in the Solar System with a retrograde orbit, leading to the view that it is a captured dwarf planet from the Kuiper Belt. That nitrogen that Steven Oleson wants to use should be abundant at the surface, with a mostly water-ice crust to be found below. Also of considerable interest: Triton’s surface deposits of tholins, organic compounds that may be precursor chemicals to the origin of life.

Image: Triton’s south polar terrain photographed by the Voyager 2 spacecraft. About 50 dark plumes mark what may be ice volcanoes. This version has been rotated 90 degrees counterclockwise and artificially colorized based on another Voyager 2 image. Credit: NASA/JPL.

Geologically active places like Triton are intriguing — think of Io and Europa, Enceladus and Titan — and we can add Triton’s nitrogen gas geysers into the mix, along with its tenuous nitrogen atmosphere. No question a lander here would offer abundant science return. Oleson proposes heating that surface nitrogen ice under pressure and using it as a propellant, allowing a continuing series of ‘hops’ as high as 1 kilometer and as far as 5 kilometers downrange. Thus we would get images and videos while aloft, and surface analysis while on the ground.
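To get a rough feel for what a hop like that demands, here is a back-of-the-envelope sketch (my own arithmetic, not from the NIAC study), assuming a simple ballistic arc under Triton’s surface gravity of roughly 0.78 m/s² and ignoring the tenuous atmosphere:

```python
# Back-of-the-envelope ballistic hop on Triton (illustrative only, not from the study).
import math

g = 0.78            # Triton surface gravity, m/s^2 (approximate)
apogee = 1000.0     # peak altitude of the hop, m
downrange = 5000.0  # downrange distance of the hop, m

# For a ballistic arc: apogee = v^2 sin^2(theta) / (2g), range = v^2 sin(2*theta) / g.
# Dividing the two relations gives tan(theta) = 4 * apogee / downrange.
theta = math.atan(4.0 * apogee / downrange)
v = math.sqrt(2.0 * g * apogee) / math.sin(theta)

print(f"launch angle ~{math.degrees(theta):.0f} deg, launch speed ~{v:.0f} m/s")
# About 39 degrees and 63 m/s to launch each hop, roughly double that if the lander
# also brakes for a soft touchdown. Modest numbers, which is why locally mined
# nitrogen propellant looks so attractive.
```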

Intriguing. The thin atmosphere and even the geysers could be sampled by a Triton Hopper in the same way we have looked at the Enceladus plumes, by passing directly through them.

Working with GRC colleague Geoffrey Landis, Oleson presented Triton Hopper last year at the Planetary Science Vision 2050 Workshop in Washington DC. The thinking is to land near the south pole in 2040, in the area where geysers have already been detected. The surface can then be explored in as many as 60 hops, covering some 300 kilometers. Using in situ ices as propellants offers a uniquely renewable potential for mobility.

Oleson’s Phase II work will cover, in addition to mission options to reach Triton and descend to the surface in about 15 years, details of safe landing and takeoff of the hopper. Propellant gathering is obviously a major issue, one that will be explored through a bevameter experiment on frozen nitrogen (a bevameter can measure the properties of a surface in terms of interaction with wheeled or tracked vehicles). Also in play: How to collect and heat the nitrogen propellant and find ways to increase hop distance, solutions that could play into other icy moon missions.

Be aware, too, of a Phase II grant to Michael VanWoerkom (ExoTerra Resource), who will be studying in situ resource utilization (ISRU) and miniaturization. VanWoerkom’s NIMPH project (Nano Icy Moons Propellant Harvester) will deepen his investigation into mission refueling at destination, producing return propellant on site. The work thus complements Triton Hopper and deepens our catalog of strategies for sample return from a variety of surfaces.

The NASA precis for Oleson’s Phase II study is here. The NIMPH precis is here. The Hurford presentation is available as Hurford et al., “Triton’s Fractures as Evidence for a Subsurface Ocean,” Lunar and Planetary Science XLVIII (2017) (full text), but see as well Should we reconsider our view on Neptune’s largest moon?, which ran at Astronomy.com.


Laser Beaming and Infrastructure

Looking at John Brophy’s Phase II NIAC award reminds us how useful the two-step process can be at clarifying and re-configuring deep space concepts. Brophy (Jet Propulsion Laboratory) had gone to work in Phase I with a study called “A Breakthrough Propulsion Architecture for Interstellar Precursor Missions.” The work studied a lithium-fueled ion thruster with a specific impulse of 58,000 seconds. If that didn’t get your attention, consider that the Dawn spacecraft’s specific impulse is 3,000 seconds, and think about what we might be able to do with that higher figure.
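To see why that specific impulse matters so much, here is a quick comparison of my own (purely illustrative) using the standard relation between specific impulse and exhaust velocity, plus the rocket equation; the Isp figures are the ones quoted above:

```python
# Why specific impulse matters: exhaust velocity and propellant mass fractions.
import math

g0 = 9.81  # standard gravity, m/s^2

def exhaust_velocity(isp_seconds):
    return isp_seconds * g0  # m/s

def mass_ratio(delta_v, isp_seconds):
    # Tsiolkovsky rocket equation: m0/mf = exp(delta_v / v_e)
    return math.exp(delta_v / exhaust_velocity(isp_seconds))

for name, isp in [("Dawn-class thruster", 3000), ("Brophy Phase I concept", 58000)]:
    ve_km_s = exhaust_velocity(isp) / 1000.0
    print(f"{name}: exhaust velocity ~{ve_km_s:.0f} km/s, "
          f"mass ratio for 200 km/s delta-v ~{mass_ratio(200e3, isp):.1f}")

# A Dawn-class Isp would need a mass ratio near 900 to reach 200 km/s, which is hopeless,
# while at 58,000 seconds the same delta-v needs a mass ratio of only about 1.4.
```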

I think about ideas like this in terms of infrastructure. The relation to interstellar flight is this: While we may well get robotic nano-probes off on interstellar missions (think Breakthrough Starshot) some time this century, the idea of human expansion into the cosmos awaits the growth of our civilization into the rest of the Solar System. Along the way, we will learn the huge lessons of closed-loop life support, means of planetary protection and propulsion technologies to shorten trip times. Building that infrastructure is an early phase of the interstellar project.

What Brophy has been working on with his Phase I study is the question of how to provide power to craft at the outer edge of our planetary system. When I looked at the 2017 NIAC awards, a stumbling block for Brophy seemed to be the need for a 10-kilometer laser array.

Brophy’s idea was to beam power to a lightweight photovoltaic array aboard the actual spacecraft — this is how he generates the power needed to drive those ion thrusters. Closer to home we would use solar power to do the trick today, while looking at nuclear power when far from the Sun. Brophy is talking about ion thruster operation in deep space without the need for a heavy onboard power source. This gets around the problem of increasingly inefficient solar panels as we go further from the Sun, as well as their alternative, a bulky nuclear reactor.

Image: John Brophy (JPL) initiated the NSTAR Project that provided the ion propulsion system for Deep Space 1, delivered the ion propulsion system for the Dawn mission, and co-led the study at Caltech’s Keck Institute that resulted in the Asteroid Redirect Mission. His Phase II NIAC award will be used to investigate beamed power delivery to a powerful ion engine. Credit: JPL.

If we can power up the lithium-fueled ion engine, mission velocities of 100 to 200 km/s would be possible. This is exciting stuff: The Phase I study talked about a 12-year flight time to 500 AU, which gets us into gravitational lensing territory, while flight time to distant Pluto/Charon could be reduced to 3.6 years. For that matter, imagine a Jupiter mission in roughly a year.
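Those flight times hang together with the quoted cruise speeds. A quick sanity check of my own, taking Pluto at roughly 33 AU:

```python
# Sanity check on the quoted flight times versus the 100-200 km/s cruise velocities.
AU = 1.496e11    # meters
YEAR = 3.156e7   # seconds

def avg_speed_km_s(distance_au, years):
    return distance_au * AU / (years * YEAR) / 1000.0

print(f"500 AU in 12 years          -> average {avg_speed_km_s(500, 12):.0f} km/s")
print(f"~33 AU (Pluto) in 3.6 years -> average {avg_speed_km_s(33, 3.6):.0f} km/s")
# About 200 km/s for the 500 AU trip, right at the top of the quoted range, and a
# bit over 40 km/s for the shorter Pluto run, which spends a larger fraction of its
# journey still accelerating.
```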

A 10-kilometer array is daunting indeed, but now we’re going into Phase II, which significantly refines the concept based on the results of the Phase I study. Phase I assumed a 100 MW output power at a laser wavelength of 1064 nm, feeding a 70 MW electric propulsion spacecraft with a 175-meter photovoltaic array coupled to its lithium-fueled ion thrusters.

What the Phase I study revealed was a better set of parameters: Brophy will begin the Phase II study with a laser array of 2-kilometer diameter aperture and an output power of 400 MW at a laser wavelength of 300 nm. Aboard the vehicle, a 110-meter diameter photovoltaic array is now considered. It will power a 10 MW electric propulsion system.

Results of the Phase I work brought the specific impulse down to 40,000 seconds, which is the figure of merit for Phase II, all in the service of a specific mission outcome: A journey to the solar gravity focus at 550 AU. The changes in configuration came from feasibility analysis — could we develop the required photovoltaic arrays with the required areal density (200 g/m²)? Could we achieve photovoltaic cells tuned to the laser wavelength with efficiencies greater than 50%? And what about pointing the laser array with the needed accuracy, not to mention the stability to supply the reference mission at the gravity focus?
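A couple of those Phase II numbers can be cross-checked directly. This sketch (my arithmetic, not the study’s) derives the array mass and the laser flux required at the spacecraft from the figures quoted above, taking the cell efficiency at the 50 percent threshold mentioned in the text:

```python
# Cross-checking the Phase II photovoltaic array figures quoted above (illustrative).
import math

diameter = 110.0        # m, onboard photovoltaic array
areal_density = 0.200   # kg/m^2 (the 200 g/m^2 target)
electric_power = 10e6   # W, electric propulsion system
efficiency = 0.50       # laser-tuned cell efficiency, lower bound from the text

area = math.pi * (diameter / 2.0) ** 2
mass = area * areal_density
incident_power = electric_power / efficiency
flux = incident_power / area

solar_constant = 1361.0  # W/m^2 of sunlight at 1 AU, for comparison
print(f"array area ~{area:,.0f} m^2, mass ~{mass:,.0f} kg")
print(f"required laser flux ~{flux/1000:.1f} kW/m^2 (~{flux/solar_constant:.1f} x sunlight at 1 AU)")
# Roughly 9,500 square meters and 1,900 kg, illuminated at about 2 kW/m^2: the laser
# delivers the equivalent of some 1.5 Suns to the array, far from home.
```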

In terms of beamed propulsion, Brophy’s laser array should remind us of Breakthrough Starshot’s own plans for such an array, though Starshot does not plan on building the array in space. Starshot’s array is to be on the ground, avoiding the huge issue of constructing and controlling a gigantic laser outside our planet, while creating a host of other questions, such as atmospheric attenuation of the laser signal. In both instances, however, we deal with a power source separate from the spacecraft, producing real benefits in terms of weight and efficiency, and creating a reusable driver for missions throughout the system.

Image: Graphic depiction of A Breakthrough Propulsion Architecture for Interstellar Precursor Missions. Credit: J. Brophy.

And what about the power developed in the onboard photovoltaic array? The output voltage in the Phase I study was expected to be 12 kV, which goes a long way beyond the best solar arrays available today. Those on the International Space Station, for example, produce 160 volts. One part of Phase II, then, will be to show that the photovoltaic system can be operated at more than 6 kV in the plasma environment produced by the lithium-fueled ion propulsion system.

Have a look at the precis to see what Brophy considers to be the outstanding technical issues, which include modeling the lithium plasma plume created by the engine and demonstrating a small aperture phased array that can become scalable to larger apertures. Depending on the outcome of these investigations, a roadmap should emerge showing how we might develop demonstrator missions along the way to finalizing a workable system architecture.

Let’s put all this into perspective. The challenge of operating beyond 5 AU is the rapid dropoff in solar power available to a spacecraft, despite recent advances in photovoltaic array technology. A laser system like Brophy’s could increase the power density of photons available to a spacecraft bound for the outer Solar System by several orders of magnitude, giving us an ion propulsion system of considerable power. Moreover, this would be a spacecraft that does not need to create its own power but receives it, doing away with power processing hardware.

I come back to infrastructure when evaluating concepts like this. A workable laser array of this magnitude becomes a way to solve power issues far from the Sun that could drive missions throughout the system. Like the combined laser / neutral particle beam concept we’ve looked at over the last two days, it assumes a hybrid meshing of technologies that could go a long way toward enabling robotic and manned missions deep into the Solar System. That makes laser beaming to an ion engine an alternative well worth the continued scrutiny of NIAC Phase II.


Tightening the Beam: Correspondence on PROCSIMA

Yesterday’s post on PROCSIMA (Photon-paRticle Optically Coupled Soliton Interstellar Mission Accelerator) has been drawing a good deal of comment, and I wanted to dig deeper into the concept this morning by presenting some correspondence between plasma physicist Jim Benford, a familiar face on Centauri Dreams, and PROCSIMA’s creator, Chris Limbach (Texas A&M Engineering Experiment Station). As we saw yesterday, PROCSIMA goes to work on the problem of beam spread in both laser and particle beam propulsion concepts.

In my own email exchange with Dr. Limbach, he took note of the comments to yesterday’s Centauri Dreams article, with a useful nod to a concept called ‘optical tweezers’ that may be helpful. So let me start with his message of April 4, excerpting directly from the text:

I took a quick glance at the comments, and I see that the laser guiding (i.e. waveguide) effect is fairly well understood, but the guiding of the particles is less clear. I admit this is the less intuitive aspect and the weak interaction requires special consideration in the combined beam design. But to give a general sense, we are taking advantage of the same effect as optical tweezers (https://en.wikipedia.org/wiki/Optical_tweezers) except applied on the level of atoms instead of nanoparticles. That is, the atoms in our neutral beam are drawn to the high intensity region because they can be polarized.

I hope your readers are as excited about this project as I am!

Personally, I do find the project exciting because I’ve been writing about the problems of keeping a laser beam collimated for an interstellar mission ever since I began digging into Robert Forward’s papers back around the turn of this century. You may remember the vast Fresnel lens that Forward proposed in the outer Solar System as a way of collimating the laser beam for interstellar use. Avoiding such colossal feats of engineering would be a welcome outcome!

We’ve examined the pros and cons of particle beams in these pages as well, learning that there is controversy over the question of whether neutral particle beams would likewise be subject to beam spread. Geoff Landis has argued that “…beam spread due to diffraction is not a problem,” while Jim Benford has offered a strong disagreement. See yesterday’s post, as well as Beaming to a Magnetic Sail.

The PROCSIMA idea combines a neutral particle beam and a laser beam to eliminate beam spread and diffraction in both. If it can be made to work, it seems to offer long periods of acceleration for beamed interstellar sails and high delta-V. An Alpha Centauri mission with a flight time of about 40 years becomes possible with a spacecraft reaching 10 percent of the speed of light. Dr. Limbach had been discussing the idea with plasma physicist Benford before the NIAC Phase I award was granted, and they engaged in further correspondence about the idea shortly after.

Here is an excerpt of a Benford message from last August with regard to PROCSIMA. The paper he refers to is a fleshed-out and much more detailed version of Jim’s Sails Driven by Diverging Neutral Particle Beams, which ran in these pages in 2014. It has been accepted at the Journal of the British Interplanetary Society, where publication is expected this fall:

Chris: I made more revisions on my paper than I had expected, and submitted it to JBIS last night. It is attached.

On your laser tweezers idea, I assume the wavelength of the laser will be much much larger than the size of the atoms. So you will treat their interaction as electric dipoles in the electric field of the laser. What intensities of laser would you need in order to defeat the divergence of such a beam? The beams themselves will probably be on the order of 10 cm to 1 m in diameter and so the laser beam will be of comparable size, I suppose.

Of course, the introduction of a powerful laser adds a complexity to the overall system, but the remarkable focus that you are expecting would be very interesting to see.

I will keep your idea to myself, but I’m sure that the community, in particular Gerry Nordley, Adam Crowl and Geoff Landis, would be very interested to hear about it.

By the way, there is a meeting that’s entirely relevant to this, in October in Huntsville, Alabama. It’s the Tennessee Valley Interstellar Workshop, which expects to have about 200 people attending. I attach their newsletter. Unfortunately, they don’t do streaming, so one has to attend to hear the talks!

Image: Beamed propulsion leaves propellant behind, a key advantage. Coupled with very small probes, it could provide a path for flyby missions to the nearest stars. PROCSIMA studies the possibility that the problem of beam spread can be resolved. Credit: Adrian Mann.

Chris Limbach was unable to attend the TVIW meeting, but he replied to Benford in a message on August 15:

Thank you for your quick reply. My timing was fortuitous! Also, thank you for offering to send an updated version of your forthcoming paper.

I would like to hold this concept closely until I submit the full proposal, but I will describe the general outline. I only ask that you do not share with anyone in the near-term.

Essentially, I have discovered that a neutral particle beam and high intensity laser beam can be combined in such a way as to simultaneously eliminate the problems of diffraction and beam divergence. This is possible because of physical mechanisms that 1) attract atoms into regions of high optical intensity (i.e. toward the center of the laser beam) and 2) provide an optical focusing effect in regions of high atom density (i.e. toward the center of the neutral beam). If these two effects can be balanced then both the neutral beam and laser beam will propagate, together, without any divergence. After running the numbers, I believe a spot size of 5 meters could be maintained over several astronomical units (!).

I am still concerned that higher-order effects will cause problems, but I believe the basic numbers work out and the concept warrants further investigation/optimization. I am interested in your paper because the neutral beam divergence will place fundamental constraints on certain parameters (e.g. particle density, laser beam intensity, …) for this concept.

After the August messages, the correspondence ended until news of the recent NIAC funding, about which Dr. Limbach informed Jim Benford, leading to my own conversation with Jim and agreement with both scientists that this correspondence could be reproduced to help clarify aspects of the PROCSIMA project. As I mentioned yesterday, there are two levels of funding at NIAC, with PROCSIMA currently receiving Phase I funding. After Phase I’s initial definition and analysis, a Phase II grant can be applied for to develop the concept further.

We’ll await the completion of the Phase I study with great interest, given that a successful PROCSIMA would deliver the best of both the laser and neutral particle beam ideas, while removing one of their biggest problems. If it works, this idea should be readily scalable, pointing to its uses in fast missions throughout the Solar System and interstellar precursors far beyond the heliosphere. The idea has to be shaken out through this initial NIAC work, but it is certainly gaining the attention of the beamed propulsion community.


PROCSIMA: Wedding Two Beam Concepts

The name Proxima will always have resonance with interstellar theorists given that our nearest target — and one with a potentially life-bearing planet at that — is Proxima Centauri. Thus an acronym with the same pronunciation is bound to catch the attention. PROCSIMA stands for Photon-paRticle Optically Coupled Soliton Interstellar Mission Accelerator, one of 25 early-stage technology proposals selected for Phase I funding by the NASA Innovative Advanced Concepts (NIAC) office. A number of Phase II proposals selected for funding were also announced.

These awards are always fascinating to watch because they’re chosen from a host of bleeding edge ideas, helping us keep a finger on the pulse of deep space thinking even if many of them end with their Phase I funding, $125,000 over nine months to produce an initial definition and analysis. Should the results be encouraging, Phase II funding becomes a possibility, ramping the money up to $500,000 over two years to encourage further development.

The 2018 Phase I competition involved over 230 proposals and just 25 winners, a tough selection process that yielded a number of intriguing concepts. NIAC works by fostering ideas from a wide range of scientists working outside NASA’s umbrella, as Jim Reuter, acting associate administrator of NASA’s Space Technology Mission Directorate, notes:

“The NIAC program gives NASA the opportunity to explore visionary ideas that could transform future NASA missions by creating radically better or entirely new concepts while engaging America’s innovators and entrepreneurs as partners in the journey. The concepts can then be evaluated for potential inclusion into our early stage technology portfolio.”

Creating a Tight Beam

We’ll have plenty to work with over the next few days, but I’ll start with PROCSIMA, which comes from Chris Limbach (Texas A&M Engineering Experiment Station), and points to the possibility of solving a tricky problem in beamed propulsion. Specifically, if you’re using a laser beam to push a sail, how can you reduce the spread of the beam, keeping it collimated so that it will disperse as little as possible with distance? A perfectly collimated beam seems impossible because of diffraction, thus limiting the length of time our sail can remain under acceleration.

Image: Texas A&M’s Christopher Limbach. Credit: Texas A&M.

Particle beams, which actually offer more momentum per unit energy than laser beams, likewise tend to diverge, although as we’ve seen in earlier articles, how serious that divergence must be is a matter of dispute (see the contrasting views of Jim Benford and Geoff Landis on the matter, as in Beaming to a Magnetic Sail). Particle beams might turn out to be just the ticket for fast in-system transportation as far out as the Oort Cloud, while being limited because of beam spread when it comes to interstellar applications. That makes divergence an issue for both types of beam.

But I should quote Geoff Landis (NASA GRC) first, because he thinks the neutral particle beam problem can be surmounted. Landis works with mercury in his example:

[Thermal beam divergence] could be reduced if the particles in the beam condense to larger particles after acceleration. To reduce the beam spread by a factor of a thousand, the number of mercury atoms per condensed droplet needs to be at least a million. This is an extremely small droplet (10⁻¹⁶ g) by macroscopic terms, and it is not unreasonable to believe that such condensation could take place in the beam. As the droplet size increases, this propulsion concept approaches that of momentum transfer by use of pellet streams, considered for interstellar propulsion by Singer and Nordley.
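The factor of a million follows from a simple scaling argument (my gloss, not Landis’s own derivation): the transverse thermal velocity of a droplet at a given temperature falls as the square root of its mass, and so does the divergence angle it produces.

```python
# Why a thousandfold reduction in beam spread calls for ~a million atoms per droplet.
# Thermal velocity spread goes as v_perp ~ sqrt(k_B * T / m). Condensing N atoms into
# one droplet multiplies the mass by N, so the divergence angle (v_perp / v_beam)
# shrinks by sqrt(N). Illustrative scaling only.
reduction_factor = 1000.0
atoms_per_droplet = reduction_factor ** 2
print(f"atoms needed per droplet: {atoms_per_droplet:.0e}")  # 1e+06, matching the quote
```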

Benford sees the divergence problem as fundamental. Charged beams would interact, spiraling around each other to produce transverse motion that creates beam divergence. Neutral particle beams would seem to be the ticket if Landis is right, but Benford sees three problems. Let me quote him (from Sails Driven by Diverging Neutral Particle Beams; a JBIS paper on these matters has been accepted for publication but is not yet available):

First, the acceleration process can give the ions a slight transverse motion as well as propelling them forward. Second, focusing magnets bend low-energy ions more than high-energy ions, so slight differences in energy among the accelerated ions lead to divergence (unless compensated by more complicated bending systems).

Third, and quite fundamentally, the divergence angle introduced by stripping electrons from a beam of negative hydrogen or tritium ions to produce a neutral beam gives the atom a sideways motion. (To produce a neutral hydrogen beam, negative hydrogen atoms with an extra electron are accelerated; the extra electron is removed as the beam emerges from the accelerator.)

Reducing the first two causes of beam divergence, Benford believes, is theoretically possible, but he sees the last source of divergence as unavoidable; nor does he accept Gerald Nordley’s idea of reducing neutral particle beam divergence through laser cooling. And he finds Geoff Landis’ idea of having neutral atoms in the particle beam condense (see Landis citation below) to be unlikely to succeed. Are our beaming strategies hopelessly compromised by all this?

PROCSIMA tries to get around the problem by combining a neutral particle beam and a laser beam, a technique that, according to Chris Limbach, could prevent spread and diffraction in both kinds of beam. Let me quote him from the NIAC description:

The elimination of both diffraction and thermal spreading is achieved by tailoring the mutual interaction of the laser and particle beams so that (1) refractive index variations produced by the particle beam generate a waveguide effect (thereby eliminating laser diffraction) and (2) the particle beam is trapped in regions of high electric field strength near the center of the laser beam. By exploiting these phenomena simultaneously, we can produce a combined beam that propagates with a constant spatial profile, also known as a soliton.
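Schematically, and only as my gloss on the NIAC description rather than Limbach’s actual analysis, the two effects can be written in the simplest dilute-gas, far-off-resonance limit as a dipole potential that pulls polarizable atoms toward high laser intensity, and a refractive index perturbation from the atom density that guides the light in turn:

$$ U_{\rm dipole} \approx -\tfrac{1}{2}\,\alpha\,\langle E^2 \rangle \quad\text{(atoms drawn toward the beam axis)}, \qquad n(r) \approx 1 + \frac{N(r)\,\alpha}{2\,\varepsilon_0} \quad\text{(atom density acts as a waveguide)}. $$

When the index profile created by the particle beam just cancels the diffraction of the laser, and the intensity profile of the laser just confines the particles, the coupled pair can in principle propagate with a fixed cross-section, which is what ‘soliton’ means here.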

Image: Graphic depiction of PROCSIMA: Diffractionless Beamed Propulsion for Breakthrough Interstellar Missions. Credit: C. Limbach.

An interesting concept because it draws from recent work in high-energy lasers as well as high-energy neutral particle beams, producing a hybrid notion that seems worth exploring. In his precis on PROCSIMA, Limbach says he believes this beamed propulsion architecture would increase the probe acceleration distance by a factor of ~10,000, allowing us to send a 1 kg payload to Proxima Centauri at 10 percent of lightspeed, making for a 42-year mission.
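Those numbers hang together. A quick check of my own, taking Proxima Centauri at 4.24 light years, of the quoted flight time and of the kinetic energy a 1 kg payload carries at that speed:

```python
# Quick check of the quoted PROCSIMA mission figures (illustrative arithmetic only).
C = 2.998e8    # speed of light, m/s
LY = 9.461e15  # light year, m

distance = 4.24 * LY  # Proxima Centauri
v = 0.10 * C          # 10 percent of lightspeed
mass = 1.0            # kg payload

cruise_years = distance / v / 3.156e7
gamma = 1.0 / (1.0 - (v / C) ** 2) ** 0.5
kinetic_energy = (gamma - 1.0) * mass * C ** 2

print(f"cruise time ~{cruise_years:.1f} years")       # ~42 years, matching the text
print(f"kinetic energy ~{kinetic_energy:.2e} J")      # ~4.5e14 J for the 1 kg payload
# That is on the order of a hundred kilotons of TNT equivalent carried as payload
# kinetic energy, a reminder of how much beam energy any 0.1c mission must supply.
```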

We watch laser developments with interest, particularly with regard to Breakthrough Starshot, which assumes a similar high-energy laser capability in the 50 GW range. Starshot is an investigation into nano-scale payloads carried by small beamed sails to nearby stars. Can we tap neutral particle beam technology to achieve increased delta-V, solving the diffraction problem at the same time? As Limbach points out, such technologies are a hot topic within the nuclear fusion community, where neutral beams are used to heat magnetically confined fusion plasmas.

Expect more on this concept in short order. The Landis paper is “Interstellar flight by particle beam,” Acta Astronautica 55 (2004), 931-934. I’ll have a full citation on the Benford paper as soon as it is published.


2001: A Space Odyssey – 50 Years Later

Fifty years ago today, 2001: A Space Odyssey was all the buzz, and I was preparing to see it within days on a spectacular screen at the Loew’s State Theater in St. Louis. The memory of that first viewing will always be bright, but now we have seasoned perspective from Centauri Dreams regular Al Jackson, working with Bob Mahoney and Jon Rogers, to put the film in context. The author of numerous scientific papers, Al’s service to the space program included his time on the Lunar Module Simulator for Apollo, described below, and his many years at Johnson Space Center, mostly for Lockheed working the Shuttle and ISS programs. But let me get to Al’s foreword — I’ll introduce Bob Mahoney and Jon Rogers in the captions to their photos. Interest in 2001 is as robust as ever — be aware that a new 513-page book about the film is about to be published. It’s Michael Benson’s Space Odyssey: Stanley Kubrick, Arthur C. Clarke, and the Making of a Masterpiece. Let’s now return to the magic of Kubrick’s great film.

By Al Jackson, Bob Mahoney and Jon Rogers

Foreword

By the 1st of April, 1968 I had been working in the Apollo program as an astronaut trainer on the Lunar Module Simulator, LMS, for two and a half years. I had also been a science fiction fan for 15 years at that time, so I had kept up with, as best as one could, Stanley Kubrick’s production of 2001: A Space Odyssey. The film premiered in Washington, DC on April 2, 1968 (it had an earlier test showing in New York). I think it premiered in Houston on a Friday, April 5 (a day after the Martin Luther King assassination). Boy! I sure tried to wangle a ticket for that but could not. However I did see the film on April 6, at the Windsor Theater in Houston on a gorgeous 70mm Cinerama screen. It was a stunning film; I had been nuts about space flight since I was 10 years old. A transcendental moment! A few things:

(1) Having read everything Arthur C. Clarke had written, I grasped the essence of the story the first time through. I saw the film six more times at the Windsor in 1968, and once in March of 1969 on another big screen in Houston. That was the last time I saw it in true 70mm. All was confirmed when I read Clarke’s novel a few weeks later.

(2) On Monday morning, the 8th of April, 1968 Neil Armstrong and Buzz Aldrin were scheduled in the LMS for training. I remember James Lovell, Bill Anders and Fred Haise standing around the coffee pot talking to Buzz. He surprised me by holding forth on how 2001’s narrative seemed to him a rework of ideas by Clarke in Childhood’s End, as well as Clarke’s thoughts in essays. In all the time he spent in the simulator, I don’t think we ever talked science fiction.

(3) Having been ‘embedded’ in space flight first as a ‘space cadet’ and then plunged, as a NASA civil servant, into the whirlwind that was Apollo, I had a jaundiced eye for manned space flight. I was thrilled to see a nuclear powered exploration spacecraft, a monstrous space station and a huge base on the Moon, all technically realizable in 30 or so years. But I was getting pretty seasoned on the realities of manned spaceflight, so a little voice in the back of my brain said No Way, No How is that going to be there in 30 years. It all passed into an alternate universe in 2001. Yet I didn’t imagine that no manned flight would leave low Earth orbit for 50 years!

Half a Century of 2001

April of 2018 marks the fiftieth anniversary of the release of Stanley Kubrick and Arthur C. Clarke’s science fiction film 2001: A Space Odyssey and the novel of the same name. The narrative structure of the film is a transcendent philosophical meditation on extraterrestrial civilizations and biological evolution, a theme known in science fiction prose from H.G. Wells to the present as BIG THINKS. Books, articles, and even doctoral dissertations have been written about the film. Framing these deeper speculations was a ‘future history’ constructed on a foundation of rigorously researched, then-current scientific and engineering knowledge. We address this technology backdrop and assess its accuracy. We’ll leave any commentary on the film’s philosophy to film critics and buffs.

Hailed at the time as a bold vision of our future in space, it presented both Kubrick’s and Clarke’s predictions of spaceflight three decades beyond the then-current events of the Gemini and early Apollo programs. One’s impression now is that Kubrick and Clarke were overly optimistic. We certainly don’t have 1000-foot diameter space stations (the one shown in the film was Space Station Five; we never see the other four) spinning in Earth orbit and multiple large bases on the lunar surface supporting hundreds of people. And the Galileo and Cassini probes were a far cry from nuclear-powered manned missions to the outer planets.

But those extrapolations are programmatic in nature, not technology-related. One must recall that the film was conceived in early 1965, when the space programs of both the U.S. and U.S.S.R. were racing ahead at full speed. Prose science fiction of the late 1930s through 1965 is an indicator that many writers assumed extensive spaceflight exploits were perfectly feasible (even inevitable) by the turn of the millennium. (It should be noted that many prose science fiction writers, Robert Heinlein, Isaac Asimov and Arthur C. Clarke among them, hedged their bets and put the technological developments depicted in the film more than 100 years beyond 2001.) The many social, economic and political circumstances that would place pressures on space program funding were not fully understood at the time of the film’s production. This was also a time when individual countries unilaterally pursued their own space programs.

Yet when one looks beyond the obvious programmatic “overshoot” of the film to the technical and operational details of its portrayal of spaceflight, Kubrick and Clarke’s cinematic glance into the crystal ball seems much more remarkable. There are few sources of technical material about the spacecraft in the film, although we know that Marshall engineers Frederick Ordway and Harry Lange (7,8) spent nearly three years designing the technical details for the film with the help of both the American and British aerospace industries. Their considerable efforts are evident even down to the details.

Image: Discovery in Jupiter space. Credit: Jon Rogers.

Space Transport to Earth Orbit

Take the Orion III space plane. Excluding the Pan Am logo (no one in 1968 would have expected Pan Am to go bankrupt before the year 2001!), the space plane ferrying Dr. Floyd into Earth orbit shares an amazing number of features with the once-active space plane, the Space Shuttle. Not only does it have a double-delta wing, but the three sweep angles (both leading and trailing edges) are within 10 degrees of those on NASA’s shuttle. One also finds it intriguing (as superficial a matter as this might be) that the back ends of both vehicles have double bulges to accommodate their propulsion systems.

Watching the docking sequence cockpit view in the film is like sitting in the flight engineer’s seat on a shuttle’s flight deck. Three primary computer screens, data meters spanning the panel over the windshield, a computer system by IBM, and dynamic graphics of the docking profile could all be found in the Orbiters. The real Shuttle, however, approached not nose-first but top-first, and the dynamic images of space station approaches were displayed on laptop computers, not the primary computer screens. The laptop imagery in the Shuttles was due to the evolution in computer technology, which did not exist when the 2001 space planes were conceived.

Another implied technical feature of the Orion III docking operation is that the space plane’s crew is not doing the flying; the space plane’s computers are. While this isn’t quite the way the Shuttle docked to the space station (the crews flew most of the approach and docking manually, though their stick and engine-firing commands passed through the computers), fully automatic dockings have been the norm for the Russian Soyuz and Progress for many years and for the ESA Automated Transfer Vehicle. The premise of the space plane flying itself as the crew monitors its progress was standard operating procedure for Shuttle ascent and entry.

Speaking of ascent, the film leaves us to speculate on how the Orion III achieved orbit. The book 2001: A Space Odyssey [1] fills this in—and here we find a serious divergence from the stage-and-a-half vertically launched shuttle. It is now known that the Orion III was a ‘III’ because the first stage, Orion I, was the booster, while Orion II was a cargo carrier [2a] (see note 1). In the novel, Clarke clearly describes the Orion space liner configuration as a piggy-back Two-Stage-to-Orbit (TSTO), Horizontal Take Off and Horizontal Landing (HTOHL) tandem vehicle launched on some form of railed accelerator sled [2a].

Image: Orion 1 and 3 mated for flight. Credit: Ian Walsh, who in addition to being a key player in designing and building the largest aperture telescope in the southwest of England is also a builder of scale models both factual and fictional.

This mid-’60s speculation for an ascent/entry transportation system is remarkably in line with (minus the rail sled) many of the European Space Agency’s extensive ’80s/’90s design studies of their Sänger II/Horus-3b shuttle [9]. (A multitude of TSTO studies in the Future European Space Transportation Investigations Programme [9] echoed the space transportation system suggested in the film 2001.) Perhaps Clarke’s prescience (and ESA’s design inspiration) stemmed not from looking forward but from looking back. Amazingly, basic physics had guided Eugen Sänger and Irene Bredt—in 1938!—to define this fundamental configuration (including the rail sled) for their proposed ‘orbital space plane’, the Silbervogel.

Image: The 2001 space plane going for orbit. Credit: Jon Rogers.

It is interesting that Arthur C. Clarke wrote a novel in 1947, Prelude to Space, featuring a horizontal-takeoff two-stage-to-orbit spacecraft; twenty-one years later a similar configuration reached the screen, though one has to read the novel of 2001 to find this out. The second stage in Prelude to Space is nuclear powered, while the Orion III uses liquid oxygen/liquid hydrogen propulsion.

In recent years viewers have noticed that there was another shuttle serving Space Station V, the Russian ‘Titov’, but it can only be seen inside an office on the station.

Image: Model of the Russian Titov shuttle, a tough catch unless you’re watching the movie extremely closely. Credit: Ian Walsh.

Zero G

One of the best aspects of 2001, and certainly a significant reason most knowledgeable space enthusiasts admire it, is its artistic use of true physics. Nearly every spaceflight scene shown in 2001 conforms to the way the real universe works. (And talk about a compliment to the special effects crew of the 1968 film — to achieve comparable realism, the Apollo 13 team had to film most of its zero-g effects inside the NASA KC-135 training aircraft as it flew parabolic arcs.) 2001 has had a lasting influence on using facts in the storytelling: in recent years, Gravity, Interstellar and The Martian used reality as a canvas.

If you are in an orbiting spacecraft that’s not undergoing continuous acceleration due to its own propulsion, you can’t just walk around like you’re heading into the kitchen from the living room. Those bastions of “science” fiction pop culture, Star Trek and Star Wars, conveniently used an old prose science fiction ploy, ‘super-science’ ‘field-effect’ gravity, to permit walking (due to F/X budgets or artistic license). However, careful comparison of the 1968 film scenes to those of crew members operating in spacecraft today quickly reveals that Kubrick dealt with the technical (and potentially F/X-budget-busting) challenge of faking zero gravity by blending scientifically legitimate speculation, real physics on the soundstage, and a touch of artistic license that collectively helped to produce visually compelling aesthetics.

In 2001, when the crew move about in non-rotating parts of their spacecraft, they walk (and even climb up and down ladders) on Velcro (or some similar material) with special footwear. We first see this in the Orion III ‘shuttle’, when a stewardess walks in zero g using grip-shoes. In fact, one of the more visually interesting sequences (the stewardess heading up to the cockpit in the Aries IB moon shuttle) gains its impact from this idea: the stewardess calmly walks her way up a curving wall until she’s upside down. Station astronauts must fidget anxiously when they watch this scene, since they would accomplish a similar trip today in seconds with just a few pushes. In today’s spacecraft, you don’t walk anywhere; you float.


Image: Jon Rogers on the right, with Jack Hagerty, who along with Ian Walsh added comments and suggestions for this essay. Some background on Jon Rogers: An A.I.A.A. member for many years, Jon started his career as a QA inspector on the Apollo high-gain antenna system, built microcircuits for the Space Shuttles, and was Sr. Mfg. Engineer on the GOES, INTELSAT-V and SCS1-4 satellites. Mr. Rogers has written articles, co-authored and illustrated the Spaceship Handbook, and presented to the A.A.S. national convention on the early history of spaceships. He received his degree from SJSU in 2000. Credit: Al Jackson.

Kubrick adopted the concept (not necessarily an unreasonable one for the mid-60s) that routine space flyers would insist on retaining the norms of earth operations, including walking, while in zero gravity. (In fact, most crewmembers do prefer at least a visual sense of a consistent up-and-down in spacecraft cabins.) While this helped him out of a major cinematic challenge, it predated the early 70’s Skylab program, when astronauts finally had enough room to really move around and learn the true freedom of zero gravity. You’ll note too that the flight attendant’s cushioned headgear also helped avoid the likely impossible task of cinematically creating freely floating long hair, a common sight in today’s downlink video.

2001 was probably the first space flight film to actually use zero g to depict zero g. In the scene where astronaut Dave Bowman re-enters the Discovery through the emergency hatch, the movie set was built vertically. This allowed actor Keir Dullea to be dropped, and thus undergo a second or two of freefall, before the wire harness arrested his plunge. (Note 5)

Of course, today’s space flyers use Velcro to secure just about everything else. Cameras, checklists, pens, food containers — you can tell the space items apart from their earthbound cousins by their extensive strips of fuzzy tape. Unfortunately, this convenient fastener’s days in the space program may be numbered for long-duration flight. Velcro, composed of tiny plastic hooks, eventually wears out, and small pieces break off and can become airborne hazards to equipment and crew. Consequently, long-stay crews keep equipment and themselves in place with other fastening techniques: magnets, bungee cords, plastic clips, or even just simple foot straps.

Image: Bob Mahoney. Passion for spaceflight propelled Bob Mahoney through bachelor’s and master’s degree programs in aerospace engineering at the University of Notre Dame and the University of Texas at Austin, respectively. Love of writing carried him into lead editorships of his high school’s literary magazine and Notre Dame Engineering’s Technical Review. Bob discovered an outlet for both of these passions while serving nearly ten years as a spaceflight instructor in the Mission Operations Directorate at Johnson Space Center. While working at JSC, he taught astronauts, flight controllers, and fellow instructors in the disciplines of orbital mechanics, computers, navigation, rendezvous, and proximity operations. His duties included development of simulation scripts for both crew-specific and mission control team training. Bob supported many missions, including STS 35, the first flight of Spacelab post-Challenger, and STS 71, the first shuttle docking to Mir. As Lead Rendezvous Instructor for STS 63, the first shuttle-Mir rendezvous, and STS 80, the first dual free-flyer deploy-and-retrieve, he ensured both crew and flight control team preparedness in rendezvous and proximity operations.

Artificial Gravity

Kubrick and Clarke’s other method of fighting zero g was well-established in the literature of the time: centrifugal force. The physiological effects of zero g on humans had been a worry from the early days of theoretical thinking about spaceflight. Some thought it might be beneficial, but many worried that since the human body evolved in one g, long exposure to no or reduced gravity might be detrimental. In the film, both the large space station orbiting Earth and the habitation deck of the Jupiter mission’s Discovery spacecraft rotate to create artificial gravity for the inhabitants. While Gemini 11 achieved this during an experiment, the general trend in space operations has been to live with zero gravity (properly termed microgravity) while combating its effects on the human body through exercise. This path was chosen for two reasons: a rotating spacecraft’s structure must be significantly sturdier (and thus more massive, and thus more expensive to launch) to handle the stresses of spinning, and the utility of zero g seems to outweigh its negative aspects. Yet one must note that research on the ISS has indicated there are limits to how much exercise and other similar countermeasures can counteract physical deterioration. Living in zero g for the extended periods an interplanetary flight would require now looks far more problematic than it once did. This is one aspect of space medicine research that makes the ISS such an important laboratory.

Ordway and Lange designed the Discovery‘s crew quarters centrifuge realistically to simulate/generate 0.3 g while counteracting Coriolis forces, but a 300-foot diameter wheel was just not feasible as a set. Nevertheless, Vickers built the fully working prop with remarkable accuracy. (6) The space station interior set did not rotate, but consisted of a fixed curved structure nearly 300 feet long and nearly 40 feet high. (2) The curve was gentle enough to permit the actors to walk smoothly down the sloping floor and maintain the desired illusion.
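For a sense of the numbers involved (our own back-of-the-envelope figures, not Ordway and Lange’s), the spin rate needed for a given level of artificial gravity follows from a = ω²r. The 38-foot figure used below for the set actually built is a commonly cited production number, not one from the essay above:

```python
# Spin rate required for artificial gravity: a = omega^2 * r. Illustrative figures.
import math

def rpm_for_gravity(g_level, radius_m):
    accel = g_level * 9.81               # target centripetal acceleration, m/s^2
    omega = math.sqrt(accel / radius_m)  # rad/s
    return omega * 60.0 / (2.0 * math.pi)

# A full 300-foot (about 91 m) diameter wheel at 0.3 g:
print(f"300 ft wheel, 0.3 g: ~{rpm_for_gravity(0.3, 91.4 / 2):.1f} rpm")
# The roughly 38-foot set reportedly built for the film, spun for the same 0.3 g:
print(f"38 ft wheel, 0.3 g:  ~{rpm_for_gravity(0.3, 11.6 / 2):.1f} rpm")
# Roughly 2.4 rpm versus 6.8 rpm: the slower spin of a large wheel is what keeps
# Coriolis effects tolerable for the crew, which is why the designers specified
# a diameter far larger than any practical movie set.
```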

Food

A bit of a miss here, more dictated by the design of ‘space food’ in the 1960s than anything else. Whereas the Council of Astronautics’ Chief Heywood Floyd sips liquid peas and carrots through straws on the way to the Moon and the Jupiter-bound Discovery crew eats what could best be described as colored paste, today’s astronauts get to eat shrimp cocktail and Thanksgiving turkey dinners. Of course, these are either dehydrated or military-style MREs (Meals Ready to Eat), but they beat the zero-g mess problem simply by being sticky via sauce or gravy. The most accurate culinary prediction of Kubrick and Clarke was the lunar shuttle bus meal: sandwiches. However, when an ISS crewmember prepares that old staple peanut butter and jelly, he or she uses tortillas in place of bread. Like worn-out Velcro, bread makes too many crumbs, and crumbs can get into the electronics.

The galley on Discovery, however, has its counterpart on the Shuttle. While it didn’t automatically dole out an entire five-course meal based on crewmember selection (astronauts did this manually before liftoff back in Houston, and their meals then got packed in storage lockers), the shuttle galley did let them heat up items that were supposed to be hot and rehydrate that shrimp cocktail. The zero-gravity toilet instructions shown in the film, an intentional Kubrick joke, are much longer than the shuttle’s Waste Collection System crew checklist.

Propulsion

While the Orion III space plane’s external propulsion elements hint at systems a few years beyond even today’s state-of-the-art (possible air intakes for a scramjet and a sloping aerospike-like exhaust nozzle), the spherical Aries 1B moon shuttle has rocket nozzles which would look perfectly at home on the old lunar module. Kubrick was smart not to show exhaust as they fired, only lunar dust being blown off the landing pad. Ordway has indicated that the propellants were LOX/LH2, and thus the exhaust would be extremely difficult to see in a vacuum in sunlight. Even the MMH/N2O4 shuttle jet firings washed out during orbital day.

One particular propulsion depiction potentially unique to 2001, which Kubrick likely chose as much for cinematic aesthetics as for the sake of realism, is the absence of any roar, whine, or boom from engines firing in the space vacuum. Even Apollo 13 fell down here, going for the rock ‘n’ roll excitement of the service module’s jets pounding away with bangs and rumbles in external views. (Arguably Kubrick’s most inspired move ever was to overlay Johann Strauss’s ‘The Blue Danube’.) Something that almost all fictional space TV and movies miss is the cabin noise of those jets firing, however. Unlike the vacuum outside, a spacecraft’s structure can carry sound, and the Shuttle crews did hear their jet firings, at least the ones up front near the cabin. They are quite loud — crewmembers have compared them to howitzers going off. (The 2013 film Gravity did use ‘interior’ sounds well, subtle enough that one does not catch it at first. Sounds transmitted through space suits and ship structures, and a clever use of the vacuum!)

2001’s one big cinematic overshoot propulsion-wise is the nuclear rockets of the interplanetary Discovery. While deep-space probes such as Voyager, Galileo, and Cassini employ RTG units to generate electricity with the radioactive heat of their plutonium, no nuclear rocket propulsion has ever flown in space, primarily because the development work begun in the 1960s was never carried through to flight and has not been resumed. (Note: The U.S. NERVA nuclear thermal rocket program was not canceled until 1972, a full four years after the movie’s release.)

The Discovery was powered by a gaseous fission reactor for rocket propulsion. The highest reactor core temperature in a nuclear rocket can be achieved by using gaseous fissionable material. In the gas-core rocket concept, radiant energy is transferred from a high-temperature fissioning plasma to a hydrogen propellant. In this concept, the propellant temperature can be significantly higher than the engine structural temperature.

Regarding the depiction of that nuclear propulsion in the film, Discovery was actually missing a major component: massive thermal radiators. As any nuclear engineer could point out (and described properly in the novel), these huge panels would have dominated the otherwise vertebrae-like Discovery‘s structure like giant butterfly wings. Even the decidedly non-nuclear fuel-cell-powered Shuttle and solar-cell-powered ISS sport sizable radiators to dump the heat of their electricity-powered hardware. Ordway and Lange were quite aware of the need for such radiators and appropriate models were built, but in the end aesthetics carried the day, so the cinematic Discovery coasted along (silently) somewhat sleeker than known physics demanded. (One interesting tidbit here: some Glenn Research Center engineers redesigned the Discovery recently as an engineering exercise.(10))

Image Credit: Jon Rogers.

Cabin Interior

Speaking of sound, 2001 may be the only fictional film to convey the significant background noise in a spacecraft cabin. Every interior scene in Discovery is colored with a background hum, most certainly meant to be the many spacecraft systems running continuously, including air circulation fans. Crews have reported that the Shuttle cabin is a very noisy workplace, and some portions of ISS were once rumored to merit earplugs.

As already noted, the Orion III cockpit is remarkably similar to the Shuttle cockpit. One notes that all of the cockpits in the film are “glass” cockpits, where all information is displayed on computer screens (versus the dials and meters typical of 1960s technology). But this time it was the real-world Shuttles (and a significant portion of the world’s airliners) that caught up with the film. During the 1990s the orbiter’s cockpit displays (including many 1972-era dials and meters) were entirely replaced with glass-cockpit technology.

Hibernation

Nope. Still can’t do that today. Suspended animation is an old story device in prose science fiction, not seen as much these days. About the only progress there is therapeutic hypothermia, which may be a step towards ‘hypersleep’.

In fact, the Salyut, Mir, and ISS programs were geared toward keeping crew members active for longer and longer durations, not asleep. The concept of conserving crew supplies is reasonable enough, but even in 1968 multiyear mission concepts envisioned years-long expeditions carrying enough self-contained logistical support. The sleep stations shown on Discovery, however, appear to offer crewmembers the same small volume as those on the Shuttle or ISS.

Communications

This is a technology that really tends to hide behind the flashier and more obvious equipment but is so critical that it should never be taken for granted. Unfortunately, by using it as a device serving the subplot involving HAL and the Discovery crew, Kubrick and Clarke committed a serious misstep in predicting the technology of today — er, yesterday. If you recall, the first sign of HAL’s neurosis is his false report that the AE-35 unit (the electronic black box responsible for keeping Discovery‘s antenna suite pointed at Earth) is going to fail. Mission Commander Dave Bowman must take a spacewalk to haul it inside after replacing it with a substitute. After finding nothing wrong with it, the crew (acting on the ominous suggestion of the erroneous HAL computer) decides to put the original unit back.

Here’s the problem: a system as critical as the communications pointing system would not be designed with a single point of failure, especially in a manned spacecraft flying all the way to Jupiter! In fact, a Shuttle launch was once scrubbed because one of two communications black boxes was not working properly. The Shuttle was designed with fail-operational fail-safe redundancy. In other words, if a critical unit fails, the shuttle can still support mission operations. If a second, similar unit fails, the shuttle can get home safely. Realistically, such a failure in a sophisticated Jupiter-bound manned spacecraft would call for simple rerouting of the commands through a backup unit, with at least one more unit waiting in reserve beyond that. This wouldn’t be a very dramatic turn, but such a sequence would better parallel the occasional Shuttle and ISS systems failures that have thus far been irritating but not showstoppers. (Of course, not so many years ago Mir lost all attitude control when its one computer failed, so perhaps the premise isn’t too far-fetched…) Once again, though, this technical ‘glitch’ was dictated by the cinematic narrative. Is it not interesting, however, that our premier unmanned Jupiter probe, Galileo, suffered a crippling failure of its primary communications antenna?
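To see why fail-operational fail-safe redundancy changes the picture so dramatically, here is a toy calculation, entirely illustrative, with a made-up per-unit failure probability rather than any real flight-hardware number:

```python
# Toy illustration of why redundant black boxes beat a single AE-35-style unit.
# The per-unit failure probability is invented for illustration only.
p_unit = 0.01  # assumed probability that one unit fails over the mission

def prob_total_loss(n_units, p=p_unit):
    # Probability that every one of n independent, identical units fails.
    return p ** n_units

for n in (1, 2, 3):
    print(f"{n} unit(s): probability of losing the function ~{prob_total_loss(n):.0e}")
# One unit loses the function with probability 1e-2; two independent units, 1e-4;
# three, 1e-6. That is why a Jupiter-class crewed vehicle would simply route around
# a bad AE-35 rather than stake the mission, and the plot, on one box.
```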

On the spinning space station, Dr. Floyd makes an AT&T videophone call to his daughter. Today’s space station inhabitants do, in fact, converse with their families over a video link, but it’s reasonably certain that their calls don’t cost $1.70 and get charged to their calling cards.

Spacesuits & EVA

This is the issue wherein 2010, that bastard child, really makes “hard” science fiction fans hang their heads low. 2001 presented spacesuits (especially those on the Jupiter mission) that were sophisticated, logical in design, quite impressive in capability, and perfectly believable for an advanced space program. 2010‘s American spacesuits look like they rolled off the EMU rack at Johnson Space Center!

The single component of the 2001 suits worn by Dave Bowman and Frank Poole most analogous to the EMUs today is the built-in jet maneuvering pack. Small, unobtrusive, minimal — that pretty much sums up the SAFER unit developed for station EVA. The 2001 suit does reflect the modularity concept of the Shuttle-era suit as well: helmet and gloves attach to the main suit with ring seals, but Ordway and Lange did not anticipate the move towards “hard shell” design (the American suit’s rigid torso, the Russian back-door-entry Orlan). The cinematic suits appear much more akin to the old Mercury and Gemini configurations, or even the recent advanced design of Dr. Dava Newman at MIT. (Note 3)

One notes there is a glaring design failure in the Discovery EVA suits: an external oxygen hose running from helmet to backpack. This hose does not appear in Harry Lange’s initial suit designs, nor on Heywood Floyd’s suit or the other suits seen during the visit to the lunar monolith. It was a bit of cinematic drama that could have been handled with a suit tear instead, a rare oversight by Kubrick. Real space suits just don’t have a vulnerable oxygen supply tube running from the backpack life-support unit to the suit helmet (we see Frank Poole fighting to reattach his). Even in 1965, Apollo space suits had much more secure fittings.

While not a matter of technical prognostication, we can’t help but mention that the EVAs shown in 2001 remain the most realistic fictional depiction of spacewalking ever put on film. (That is, until the 2013 film Gravity.) The fluidity of motion and the free-floating grace of the crewmembers as they move completely in line with real physical laws are nearly identical to what you see downlinked on NASA TV. Not impressed? Compare Bowman’s approach and arrival at the Discovery‘s antenna in 2001. How’d Kubrick pull it off? Skilled stuntmen suspended from cables filmed from below—that was the key. Filming at high speed and then slowing the footage for incorporation in the film also helped. (In Apollo 13 extensive use was made of flying parabolic arcs in a Boeing KC-135 — that was real zero g. Alfonso Cuarón used ‘motion capture’ and computer generated imagery in Gravity.)

One EVA item in 2001 that today’s engineers and astronauts would love to have but don’t is the space pod: that ball of a spacecraft with the pair of multi-jointed arms out front. Such a vehicle would eliminate the need for some suited EVAs (the crewmember could just stay inside the pod) and would make others much easier (since the pod, under the control of the ship’s highly intelligent computer, could help out). But if you think about it, we’re really not too far off with the Shuttle and ISS remote manipulator systems (the Canadarms). These now (especially with the recent addition of DEXTRE, with its two multi-jointed arms) permit the accomplishment of some tasks outside the spacecraft without EVA, and they have proven themselves capable EVA assistants under the control of a highly intelligent computer: namely, a crew member back inside the spacecraft cabin!

The most glaring difference in EVA, however, lies in protocol. In 2001, each EVA is conducted by a single crewmember. This is just not done today: both Americans and Russians always leave their spacecraft in pairs (and in rare circumstances as a trio) for safety’s sake, so that if one crewmember gets in trouble the other can come immediately to their aid. Practically speaking, though, Bowman and Poole did each have a companion: HAL the computer, controlling the space pod. But, of course, the solo EVA, the single-point communications failure necessitating the EVA, and HAL’s control of the space pod collectively set up the greatest drama in the film: HAL trying to kill off all his crewmates.

Image Credit: Jon Rogers.

Artificial Intelligence

Given the many different levels at which the space program (and society as a whole) uses computers today, it is in this area that the film’s accuracy is most difficult to gauge.

Certainly, we have computers that can talk, and Shuttle and station crews have experimented with voice-activated controls of some systems, but these are superficial similarities. More importantly, we find a better comparison in the command and control realm: during large portions of a Shuttle mission, the fail-operational fail-safe primary and backup suite of five general-purpose computers (GPCs) did control the vehicle, just as HAL completely controlled Discovery. (In fact, as a particularly curious side note, the programming language for the Shuttle’s primary computers is actually termed HAL/S, for High-order Assembly Language/Shuttle, but this just might be a not-so-subtle homage to the film by the software development team.) And unlike a lot of prose science fiction, 2001 did anticipate flat-screen TVs and what look very much like iPads!
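As a toy illustration of the redundant-set philosophy, here is a short Python sketch of majority voting among several machines computing the same command. It is a simplification of my own, with invented machine names and values; it is not actual GPC logic and certainly not HAL/S.

# Toy majority-vote scheme for a redundant computer set: each machine computes
# the same command, outputs that agree within a tolerance form a bucket, the
# largest bucket wins, and any dissenting machine is flagged for exclusion.
# Machine names and values below are invented for illustration.

from typing import Dict, List, Tuple


def vote(outputs: Dict[str, float], tolerance: float = 1e-6) -> Tuple[float, List[str]]:
    """Return the majority value and the names of any dissenting machines."""
    buckets: List[List[str]] = []
    for name, value in outputs.items():
        for bucket in buckets:
            if abs(outputs[bucket[0]] - value) <= tolerance:
                bucket.append(name)
                break
        else:
            buckets.append([name])
    majority = max(buckets, key=len)
    dissenters = [name for name in outputs if name not in majority]
    return outputs[majority[0]], dissenters


if __name__ == "__main__":
    # Four machines produce the same command; one of them disagrees.
    commands = {"GPC-1": 2.137, "GPC-2": 2.137, "GPC-3": 2.137, "GPC-4": 2.410}
    value, voted_out = vote(commands)
    print(f"commanded value: {value}, voted out: {voted_out}")

The contrast with HAL is one of design philosophy: no single box is trusted outright, so no single box can quietly take the ship hostage.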

The film was created at a point in history when the computer’s invasion of our society (including spaceflight) might have taken one of two paths: bigger and more powerful mainframe computers that would interface everywhere through an extensive but centrally controlled communications network, OR smaller and smaller special-purpose computers that each would do a little bit of the work. HAL certainly represents the pinnacle of achievement for the former — an artificial intelligence that had control of every single aspect of the Discovery‘s operations. The distributed PC network controlling the ISS today reflects the latter.

Yet the depiction of the HAL 9000 (Heuristically programmed Algorithmic Computer) in 2001 remains one of the film’s most eerie elements. For their description of artificial intelligence, Kubrick and Clarke had only the terminology and the vision of the mid-1960s as their guide. At that time the prevailing concept expected ‘AI’ to mean a programmed computer. Thus the term ‘computer’, with all its implications of being a machine, occurs repeatedly.

Today’s term for what HAL represents would be ‘strong AI’ (11), and in the last 50 years no true strong AI has emerged. The mid-1960s terminology obscures the fact that Kubrick and Clarke constructed an AI that is unmistakably ‘strong’, that is, capable of “general intelligent action.” How this would have been achieved they left to the imagination of the viewer and the reader.

As HAL seems to be a ‘strong AI’, capable of feeling, independent thought, emotions, and almost all attributes of human intelligence, anyone viewing the film today should set aside the film’s and novel’s use of the terms ‘computer’ and ‘programming’. HAL seemed able to reason, use strategy, solve puzzles, make judgments, understand uncertainty, represent knowledge (including common-sense knowledge), plan, learn, communicate in natural language, perceive and especially see, display social intelligence, move and manipulate objects (robotics), and integrate all these skills toward common goals. Such attributes would arise not so much through programming as through ‘evolving’ or ‘growing-learning’ … a ‘solid-state intelligence’. That is why it is amazing to watch the film today (despite its use of clunky, ill-suited words like computer and program) and realize that HAL was a TRUE AI. An AI like HAL may well exist in a future we have yet to realize, but one has no idea when! (See note 4.)

Some Reverse Engineering

From the moment we meet HAL we are given to believe that this particular AI has total control of everything in the Discovery. He can take action (open pod doors, open the pod bay doors, even adjust couch cushions!) at a crewmember’s spoken word. Yet after HAL kills Frank Poole and the hibernating crewmembers, and Dave Bowman makes his way back to the Discovery, what do we see? A manual emergency airlock entrance.

What is that doing there? Directly, to provide the film with an ‘action scene’, but the implications are deeper. Ordway and Lange, the designers of the Discovery, knew their spaceships! A ship that substantial, on a mission that important, must have redundancy; if not in the communications system, then at least to back up the onboard AI! Ordway wrote a memo to Kubrick about ship redundancy (7a). What if HAL had been ‘holed’ by a freak meteoroid hit? What if an ultra-high-energy cosmic ray had bored a damaging track through one of HAL’s solid-state modules? Any number of unpredictable second- and third-order failures might occur, and the crew might then be forced to take care of the ship and mission ‘on their own.’

And it is here, in the consideration of backup systems, that we catch Kubrick and Clarke the storytellers at odds with Kubrick and Clarke the prognosticators of realistic spacecraft design. We find Bowman and Poole discussing how to partially shut HAL down, leaving only primitive functions operating. That could only be an option if the human crew could control the Discovery manually (or, more practically, semi-manually) with a lot of help from still-working automated systems. This issue is explicit in the novel and only implicit in the film.

In fact Kubrick mildly trumps Clarke, technically and dramatically, in the film’s narrative structure (wherein Bowman leaves the ship to rescue Poole, setting up the emergency airlock action scene). In the novel, Clarke merely has HAL ‘blow down’ the Discovery by opening the pod bay doors. However, examining Lange and Ordway’s drawings of the Discovery‘s living quarters reveals at least two airlocks between the pod bay and the centrifuge (7, 7a, 8, 9). Independent double and triple overrides, over which HAL had no control, would have come into play to prevent this very scenario, whether mechanically triggered or instigated by an insane AI.

How about HAL’s control of Dave’s pod? One can actually capture a frame inside the pod (‘N/A HAL COMLK’) showing that Bowman, even though he left his helmet behind, had the sense to cut HAL’s control of the pod. It is impressive, though not surprising, that Kubrick and his team thought to include such a detail.

When Comes the Future?

While Kubrick and Clarke’s iconic 1968 vision of spaceflight’s future may have been far off the mark in terms of how much we would have accomplished by the turn of the millennium, its accurate anticipation of so many operational and technological details remains a fitting testament to the engineering talent of their supporting players, especially Fred Ordway and Harry Lange. The astounding prescience in their projections of the specifics of space operations decades beyond the then-current real spaceflight of Gemini and Apollo, even when constrained by storytelling aesthetics, offers the promise that their spectacular rendering of a spacefaring society may still come to pass.

With the United States and other nations now finally developing systems to return human crews to the Moon and enable travel beyond, and with commercial entities actively pursuing private spaceflight across a spectrum of opportunities long considered a matter of fantasy, perhaps we can take heart in the possibility that by the time another fifty years have passed, Kubrick and Clarke’s brilliant, expansive, and yet convincingly authentic future may finally become real in both its details and its scope.

Selected Bibliography

(1) 2001: A Space Odyssey, by Arthur C. Clarke, based on a screenplay by Stanley Kubrick and Arthur C. Clarke. Copyright 1968. The New American Library, Inc.

(2) The Making of Kubrick’s 2001, edited by Jerome Agel. Copyright 1970 The Agel Publishing Company, Inc. The New American Library, Inc.

(3) 2001: filming the future, by Piers Bizony. Copyright 1994. Aurum Press Limited.

(4) The Lost Worlds of 2001, by Arthur C. Clarke. Copyright 1972. The New American Library, Inc.

(5) The Odyssey File, by Arthur C. Clarke and Peter Hyams. Copyright 1984. Ballantine Books.

(6) “2001: A Space Odyssey,” F.I. Ordway, Spaceflight, Vol. 12, No. 3, Mar. 1970, pp. 110-117. (Publisher: The British Interplanetary Society).

(7) “Part B: 2001: A Space Odyssey in Retrospect,” Frederick I. Ordway III, in Science Fiction and Space Futures: Past and Present, edited by Eugene M. Emme, American Astronautical Society History Series, Volume 5, 1982, pages 47-105. ISBN 0-87703-173-8.

(7a) Johnson, Adam (2012). 2001: The Lost Science. Burlington, Canada: Apogee Prime.

(7b) Johnson, Adam (2016). 2001: The Lost Science Volume 2. Burlington, Canada: Apogee Prime.

(8) Jack Hagerty and Jon C. Rogers, Spaceship Handbook: Rocket and Spacecraft Designs of the 20th Century, ARA Press, Published 2001, pages 322-351, ISBN 097076040X.

(9) Dieter Jacob, G. Sachs, Siegfried Wagner, Basic Research and Technologies for Two-Stage-to-Orbit Vehicles: Final Report of the Collaborative Research Centres 253, 255 and 259 (Sonderforschungsbereiche der Deutschen Forschungsgemeinschaft). Wiley-VCH, August 19, 2005.

(9) “Realizing 2001: A Space Odyssey: Piloted Spherical Torus Nuclear Fusion Propulsion,” NASA/TM-2005-213559, March 2005; AIAA-2001-3805.

(10) Searle, J. (1997). The Mystery of Consciousness. New York: New York Review of Books.

(11) The film 2001: A Space Odyssey, premiere date April 6, 1968.

Notes

(1) Acknowledgments: Thanks to Ian Walsh, Jack Hagerty and Wes Kelly for personal communications. Special thanks also to Douglas Yazell and the Houston Section of the American Institute of Aeronautics and Astronautics for hosting the first edition of this article in 2008.

(2) In the mid-1960s many of the SETI pioneers feared that revelation of the existence of an advanced extraterrestrial civilization might cause social disruption; many others disagreed. Kubrick and Clarke kept this notion as a plot device.

(3) There is an amusing bit of homage to George Pal in the film. In Pal’s 1950 movie Destination Moon the commander’s suit is red, the second-in-command’s is yellow, and all the rest are blue. The same is true in 2001! (8)

(4) The novel 2010 explains HAL’s ‘insanity’ in terms of his being required to keep the discovery of the TMA-1 monolith secret for reasons of national security (see note 2), whatever that means. The contradiction with his programming never to report erroneous information created a “Hofstadter-Moebius loop,” which reduced HAL to paranoia. Since nothing explicit is presented in the original film, and taking the characterization of HAL as a strong AI (for all intents and purposes making him ‘human’), HAL could just as well have gone bonkers for no good reason at all!

(5) A technical point about the emergency entry into the Discovery: where did the pod hatch go? One notes that the pod doors slide ‘transversely’, i.e., they don’t swing in or out. In the airlock entrance scene Dave launches himself from frame right; normally the pod door slides open toward frame right (we’re seeing the rear of the pod in the scene). The door’s guide track therefore ran along both sides of the pod’s hatchway, so the normal open/close mechanism wouldn’t have to be retracted out of the way in an emergency. The pyros would be on the attach points where the door joins the mechanism, and in an emergency they’d blow the door further around the track, toward frame left and out of the way, while the regular mechanism stays put. (Thanks to Jack Hagerty for this observation.)

(6) 2018 is also the 30th anniversary of the viable ‘traversable wormhole’ proposed by Morris and Thorne, which gives the ‘Star Gate’ in 2001 some physics it did not have in 1968. M. S. Morris and K. S. Thorne, “Wormholes in spacetime and their use for interstellar travel: A tool for teaching General Relativity”, Am. J. Phys. 56, 395 (1988).
