Centauri Dreams
Imagining and Planning Interstellar Exploration
New Exomoon Project Will Use Kepler Data
Exomoons are drawing more interest all the time. It may seem fantastic that we should be able to find moons around planets circling other stars, but the methods are under active investigation and may well yield results soon. Now David Kipping (Harvard-Smithsonian Center for Astrophysics) and colleagues have formed a new project called HEK — the Hunt for Exomoons with Kepler. We thus move into fertile hunting ground, for there has never been a systematic search for exomoons despite the work of ground-breaking researchers like Kipping, Gaspar Bakos (Princeton) and Jean Schneider (Paris Observatory). It’s definitely time for HEK as Kepler’s exoplanet candidate list grows.
Kepler, of course, works with transit methods, noting the dip in starlight as an exoplanet passes in front of the star under observation. HEK will use Kepler photometry to look for perturbations in the motion of the host planet that could flag the presence of a moon. Variations in transit timing (TTV) and duration (TDV) should be the most observable effects: the former are shifts in when each transit occurs, as the planet circles the planet-moon barycenter, while the latter are changes in how long each transit lasts, caused by the velocity changes induced by that same motion about the common center of mass.
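To get a feel for the size of these signals, here is a back-of-the-envelope sketch in Python. The numbers are illustrative assumptions, not values from the HEK paper: a Saturn-mass planet orbiting a Sun-like star at 1 AU, carrying a 0.2 Earth-mass moon (the detection floor from the 2009 feasibility study) at an assumed one million kilometers from the planet:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
M_saturn = 5.683e26      # kg
M_moon = 0.2 * 5.972e24  # a 0.2 Earth-mass moon (the quoted detection floor)

a_planet = 1.496e11      # planet at 1 AU (assumed habitable-zone orbit)
a_moon = 1.0e9           # moon 1 million km from the planet (assumed)

# The planet circles the planet-moon barycenter; its displacement from
# that barycenter sets the amplitude of the transit timing variation.
wobble = a_moon * M_moon / (M_saturn + M_moon)   # metres
v_orbit = math.sqrt(G * M_sun / a_planet)        # planet's orbital speed, m/s

ttv_seconds = wobble / v_orbit                   # ≈ 70 s
print(f"barycentric wobble ≈ {wobble / 1e3:.0f} km")
print(f"TTV amplitude      ≈ {ttv_seconds:.0f} s")
```

The tens-of-seconds amplitude that falls out is the kind of perturbation HEK must dig out of Kepler's photometry.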
The team will also look for eclipse features, where the moon might occult the planet during a planet-star eclipse. Back in 2009, Kipping and team ran a feasibility study on Kepler’s ability to find the moon of a gas giant in the habitable zone of a star (see Habitable Moons and Kepler). Assuming moons on circular, coplanar orbits around the host planet, the results showed that Kepler could detect exomoons down to 0.2 Earth masses. This is a large moon indeed, for as Kipping’s new paper on this work points out, the most massive moon in our Solar System, Ganymede, is 0.025 Earth masses (our own Moon is 0.0123 Earth masses). No question, then, that HEK will be looking for large moons, moons bigger than any we see in our own system.
Image: The view from a large exomoon would be like nothing we’ve seen in our own system, especially if that world proved suitable for life. Credit: Dan Durda.
Of course, binary planets also fall within the scope of this study — Kipping draws the line between a binary planet and a true planet-moon pair at the point where the center of mass of the two bodies is outside the radius of both bodies, but HEK can work comfortably with both scenarios. The paper runs through the likelihood that such large objects might exist, forming either around the host planet as it undergoes planetary growth, or (more likely) being captured by the host — here we think of moons like Triton in our own system, or of impact scenarios between planetesimals or young planets like that thought to have produced our own Moon.
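Kipping's dividing line is simple enough to state in code. The sketch below implements the center-of-mass criterion described above; the constants are standard Solar System values:

```python
def is_binary_planet(m1, r1, m2, r2, separation):
    """Criterion described above: the pair is a binary planet if the
    center of mass lies outside the radius of both bodies; otherwise
    the smaller body is a moon.  Masses in kg, lengths in metres."""
    # Distance of each body from the pair's center of mass.
    d1 = separation * m2 / (m1 + m2)
    d2 = separation * m1 / (m1 + m2)
    return d1 > r1 and d2 > r2

M_earth, R_earth = 5.972e24, 6.371e6
M_moon, R_moon = 7.342e22, 1.737e6

# Earth-Moon: the barycenter sits about 4,700 km from Earth's center,
# inside Earth's 6,371 km radius, so this is a planet-moon pair.
print(is_binary_planet(M_earth, R_earth, M_moon, R_moon, 3.844e8))
```

By this test Pluto and Charon, whose barycenter lies in open space between the two bodies, would qualify as a binary.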
Other scenarios are also possible, as the paper announcing HEK notes:
For planets which do not migrate through a proto-Kuiper belt or under the assumption that such objects will never reach sufficient mass to qualify as large moons, an alternative source of terrestrial mass objects is required. This object could be an inner terrestrial planet encountered during the gas giant’s inward migration or even a large, unstable Trojan which librates too close to the planet. Indeed, Eberle et al. (2010) have shown that a gas giant planet (in their case HD 23079b) can capture an Earth-mass Trojan into a stable satellite orbit, occurring in 1 out of the 37 simulations they ran.
How long would such a system be stable? The capture process would produce what the paper describes as ‘very loosely-bound initial orbits,’ but there has been work showing that captured moons have relatively high survival rates, as high as 50 percent in various configurations. Producing binary planets through the same methods is plausible, and the paper notes that a Jupiter orbited by an Earth-class planet could be considered an example of an extreme binary.
Examining these origin scenarios as well as the evolution of large moons in detail, the paper goes on to note the project’s objectives:
1. The primary objective of HEK is to search for signatures of extrasolar moons in transiting systems.
2. The secondary objective of HEK will be to derive posterior distributions, marginalised over the entire prior volume, for a putative exomoon’s mass and radius, which may be used to place upper limits on such terms (where conditions permit such a deduction).
3. The tertiary objective of HEK is to determine… the frequency of large moons bound to the Kepler planetary candidates which could feasibly host such an object (in an analogous manner to η⊕ – the frequency of Earth-like planets).
We know that in our own system, Europa, Titan and even tiny Enceladus are possible candidates for life. The Hunt for Exomoons with Kepler project won’t be able to tell us anything about astrobiology on an exoplanet’s moon, but if we begin to find Earth-sized objects orbiting gas giants in the habitable zone, we’ll have taken a first step toward learning whether exomoons could be just as viable a place for life as a planetary surface. The HEK home page goes so far as to speculate that planet-based life could actually be outnumbered by life on habitable moons. Step one, of course, is to find out if such moons actually exist, using Kepler’s crucial data.
The paper is Kipping et al., “The Hunt for Exomoons with Kepler (HEK): I. Description of a New Observational Project,” submitted to the Astrophysical Journal (preprint). The 2009 study is Kipping et al., “On the detectability of habitable exomoons with Kepler-class photometry,” Monthly Notices of the Royal Astronomical Society, published online 24 September, 2009 (abstract).
Three Exoplanets Smaller than Earth
It’s always gratifying to note the contributions of amateur astronomers to front-line science. In the case of three small planets discovered around the Kepler star KOI-961, the kudos go to Kevin Apps, now a co-author of a paper on the new work. It was Apps who put postdoc Philip Muirhead (Caltech) onto the idea that KOI-961, a red dwarf, was quite similar to another red dwarf, the well-characterized Barnard’s Star, some six light years away in the constellation Ophiuchus. It was a useful idea, because we do have accurate estimates of the size of Barnard’s Star, and the size of the star becomes a key factor in exoplanet detections.
For the depth of a light curve — the dimming of the star over time due to the passage of planets across its surface as seen from Kepler — reveals the size of the respective planets. Researchers from Vanderbilt University aided the Caltech team in determining KOI-961’s size, a difficult call because while Kepler offers data about a star’s diameter, that data is considered unreliable for red dwarfs. Detailed spectra from both Palomar and Keck showed that Kevin Apps was right. “When we compared its fingerprint with those of the best known M dwarfs we found that Barnard’s Star was the best match,” says Vanderbilt astronomer Keivan Stassun.
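The link between transit depth and planet size is the simple area ratio depth = (Rp/Rs)², which is why any error in the stellar radius propagates directly into the planetary radius. A minimal sketch, using a stellar radius of roughly 0.17 solar radii for KOI-961 and a hypothetical 0.15 percent transit depth:

```python
import math

def planet_radius(depth, r_star):
    # Transit depth is the area ratio: depth = (R_planet / R_star)**2,
    # so the planet's radius follows once the star's radius is known.
    return r_star * math.sqrt(depth)

R_sun = 6.957e8                # metres
R_earth = 6.371e6              # metres
r_star = 0.17 * R_sun          # KOI-961 is roughly a sixth of a solar radius

# Hypothetical 0.15% dip: on so small a star, even a modest dip
# corresponds to a sub-Earth-sized planet.
rp = planet_radius(0.0015, r_star)
print(f"planet radius ≈ {rp / R_earth:.2f} Earth radii")  # ≈ 0.72
```

The same dip on a Sun-sized star would imply a planet more than five times larger, which is exactly why nailing down KOI-961’s size mattered so much.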
Having established the size, mass and luminosity of KOI-961, the team could calculate the size and characteristics of the planets around it. Confirming three planets that are actually smaller than Earth took intensive follow-up, using photographs taken with the Palomar Observatory’s Samuel Oschin Telescope in 1951. KOI-961 is in Cygnus about 130 light years away, close enough to show motion in the time period involved, and it was clear from studying the photographs that no background stars could have accounted for the light curves.
We wind up with three small exoplanets that range in size from 0.57 to 0.78 times the radius of Earth. The primary is only about 70 percent bigger than Jupiter, but all three of the planets are so close to it that their temperatures are expected to range from 200 degrees Celsius for the outermost planet up to 500 degrees Celsius for the innermost. The entire system is small enough that Caltech astronomer John Johnson compares it to Jupiter and its moons, adding “This is causing me to have to fully recalibrate my notion of planetary and stellar systems.”
Image: This artist’s conception compares the KOI-961 planetary system to Jupiter and the largest four of its moons. The KOI-961 system hosts the three smallest planets known to orbit a star beyond our sun (called KOI-961.01, KOI-961.02 and KOI-961.03). The planet and moon orbits are drawn to the same scale. The sizes of the stars, planets and moons have been increased for visibility. Credit: Caltech.
All three planets are thought to be rocky, so small that only such a composition would allow them to hold together. And while the three are obviously not habitable, the fact that we’re now finding small worlds around red dwarfs has undeniable implications. Johnson again:
“Red dwarfs make up eight out of every ten stars in the galaxy. That boosts the chances of other life being in the universe—that’s the ultimate result here. If these planets are as common as they appear—and because red dwarfs themselves are so common—then the whole galaxy must be just swarming with little habitable planets around faint red dwarfs.”
After all, while Kepler reported 900 exoplanet candidates in February, only 85 or so were in red dwarf systems, a small sample but one that is already producing planets. The vast numbers of red dwarfs in the Milky Way thus gain further interest as possible sites for life, although as we’ve discussed many times in these pages, huge issues remain, such as tidal locking and the flare activity often found in younger red dwarfs. We can expect continuing study of the question of whether a rocky world in the habitable zone of such a star could offer life a viable foothold.
Ongoing red dwarf studies will add to our picture of the prevalence of such planets, as the paper on this work notes:
Combined with the low probability of a planetary system being geometrically aligned such that a transit is observed (only 13% in the case of KOI 961.01), Kepler’s discovery of planets around KOI 961 could be an indication that planets are common around mid-to-late M dwarfs, or at least not rare. This would be consistent with the results of Howard et al. (2011), who used the Kepler detections to show that the frequency of sub-Neptune-size planets (RP = 2-4 R⊕) increases with decreasing stellar effective temperature for stars earlier than M0. Results from on-going exoplanet surveys of M dwarfs such as MEarth (e.g. Irwin et al. 2011b), and future programs such as the Habitable Zone Planet Finder (e.g. Mahadevan et al. 2010; Ramsey et al. 2008) and CARMENES (Quirrenbach et al. 2010), will shed light on the statistics of low-mass planets around mid and later M dwarfs.
And what of Barnard’s Star itself? This Vanderbilt news release discusses the star’s appearance in televised science fiction but misses the wonderful use Robert Forward made of it in Rocheworld, a novel he wrote to illustrate his concept of ‘staged’ laser sails carrying a manned mission to the stars. In any case, the idea of planets around Barnard’s Star is not new. It was in the late 1960s that Peter van de Kamp announced what he thought were two gas giants in orbit around the star, but neither planet could be confirmed and the detection is now thought to have been the result of a systematic error produced by van de Kamp’s equipment.
Nonetheless, while we’ve been able to exclude planets larger than about five Earth masses within 1.8 AU of the star, Barnard’s Star could still have smaller undetected planets. Doubtless the steadily growing interest in red dwarfs will result in further studies of this nearby target.
The paper is Muirhead et al., “Characterizing the Cool KOIs III. KOI-961: A Small Star with Large Proper Motion and Three Small Planets,” accepted at the Astrophysical Journal (abstract).
Planets by the Billions in Milky Way
People sometimes ask why we are spending so much time searching for planets that are so far away. The question refers to the Kepler mission and the fact that the distance to its target stars is generally 600 to 3,000 light years. In fact, fewer than one percent of the stars Kepler is examining out along the Orion arm are closer than 600 light years. The reason: Kepler is all about statistics, and our ability to learn how common exoplanets and in particular terrestrial planets are in the aggregate. The last thing the Kepler team is thinking about is targets for a future interstellar probe.
Studies of closer stars continue — we have three ongoing searches for planets around the Alpha Centauri stars, for example. But there is so much we still have to learn about the overall disposition of planets in our galaxy. New work by an international team of astronomers uses gravitational microlensing to answer some of these questions, and the results suggest that planets — even warm, terrestrial ones — are out there in vast numbers. Here again statistical analysis plays a crucial role, in conjunction with other forms of exoplanet detection. Arnaud Cassan (Institut d’Astrophysique de Paris) is lead author of the paper on this work in Nature:
“We have searched for evidence for exoplanets in six years of microlensing observations. Remarkably, these data show that planets are more common than stars in our galaxy. We also found that lighter planets, such as super-Earths or cool Neptunes, must be more common than heavier ones.”
Gravitational microlensing is yet another tool in the exoplanet hunt, and an extremely useful one because it gets around some of the limitations of the other major methods. Radial velocity studies tend to favor large planets that are close to their star, although with time and improving techniques, we’re using RV to learn about smaller and more distant worlds. Transit studies like Kepler’s are powerful but take time, as we wait for lengthy planetary orbits to be completed and confirm the presence of planets suggested by slight dips in starlight. But microlensing can detect planets over a wide mass range and also spot planets much further from their stars.
Image: The Milky Way above the dome of the Danish 1.54-metre telescope at ESO’s La Silla Observatory in Chile. The central part of the Milky Way is visible behind the dome of the ESO 3.6-metre telescope in the distance. On the right the Magellanic Clouds can be seen. This telescope was a major contributor to the PLANET project to search for exoplanets using microlensing. The picture was taken using a normal digital camera with a total exposure time of 15 seconds. Credit: ESO/Z. Bardon/ProjectSoft.
The current work uses data from the PLANET and OGLE microlensing teams, two studies that rely on a foreground star magnifying the light of a much more distant star lined up behind it. If the lensing star also has an orbiting planet, the planet’s effect in brightening the background star is measurable. The method gives us the chance to look for planets at a wide range of distances from the Earth, but it also relies on purely chance alignments that are obviously rare. In fact, from 2002 to 2007, only 3247 such events were identified, with 500 studied at high resolution. All this from a microlensing search that involved millions of stars.
The researchers combined the PLANET and OGLE data with detections from earlier microlensing work and weighed these against non-detections during the six-year period of study. They then analyzed these data in conjunction with radial velocity and transit findings. The result: given the odds against finding planets through these chance celestial alignments, planets must be abundant in the Milky Way. In fact, the researchers conclude that one in six of the stars studied hosts a Jupiter-class companion, half have planets of Neptune’s mass and two-thirds are likely to have super-Earths. Note that the survey was sensitive to planets with masses ranging from five times the Earth’s up to ten times the mass of Jupiter.
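The arithmetic behind the paper’s title is worth making explicit: simply summing the per-star frequencies quoted above already yields more than one planet per star on average.

```python
# Per-star frequencies quoted from the microlensing analysis.
f_jupiter = 1 / 6        # stars with a Jupiter-class companion
f_neptune = 1 / 2        # stars with a Neptune-mass planet
f_super_earth = 2 / 3    # stars likely to have a super-Earth

# Treating the three mass bins as independent counts, the expected
# number of such planets per star is just the sum of the frequencies.
planets_per_star = f_jupiter + f_neptune + f_super_earth
print(f"average planets per star ≈ {planets_per_star:.2f}")  # ≈ 1.33
```

That total, well above one, is what justifies the phrase “one or more bound planets per Milky Way star” — and it covers only the 5 Earth-mass to 10 Jupiter-mass window the survey could see.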
Uffe Gråe Jørgensen is head of the research group in Astrophysics and Planetary Science at the Niels Bohr Institute at the University of Copenhagen:
“Our microlensing data complements the other two methods by identifying small and large planets in the area midway between the transit and radial velocity measurements. Together, the three methods are, for the first time, able to say something about how common our own solar system is, as well as how many stars appear to have Earth-size planets in the orbital area where liquid water could, in principle, exist as lakes, rivers and oceans — that is to say, where life as we know it from Earth could exist in principle.”
Jørgensen goes on to conclude that out of the Milky Way’s 100 billion stars, there are about 10 billion with planets in the habitable zone, “…billions of planets with orbits like Earth and of comparable size to the Earth.” Daniel Kubas (ESO, and co-lead author of the paper) takes all this into account and concludes: “We used to think that the Earth might be unique in our galaxy. But now it seems that there are literally billions of planets with masses similar to Earth orbiting stars in the Milky Way.” Statistics tell the tale, one that will be refined with each new exoplanet detection, but one that points increasingly to a galaxy where Earth-sized planets are common.
The paper is Cassan, Kubas et al., “One or more bound planets per Milky Way star from microlensing observations,” Nature 481, 167–169 (12 January 2012). Abstract available.
Kepler-16b: Inside a Chilly Habitable Zone?
The annual meeting of the American Astronomical Society is now in session in Austin, sure to provide us with interesting fodder for discussion in coming days. Just coming off embargo yesterday was news of further study of the interesting Kepler-16 system. This one made quite a splash last fall when the planet known as Kepler-16b was discovered to orbit two stars, with the inevitable echoes of Star Wars and the twin suns that warmed the planet Tatooine. This planet, though, was a gas giant more reminiscent of chilly Saturn than a cozily terrestrial world.
Image: An artist’s conception of the Kepler-16 system (white) from an overhead view, showing the planet Kepler-16b and the eccentric orbits of the two stars it circles (labeled A and B). For reference, the orbits of our own solar system’s planets Mercury and Earth are shown in blue. New work out of the University of Texas at Arlington explores the question of habitability in a system like this. Credit: NASA/Ames/JPL-Caltech.
You’ll recall, too, that Kepler-16b circles both a K-dwarf with about 70 percent of the Sun’s mass and a red dwarf of about a fifth of a solar mass. Although the planet’s orbit takes it within Venus-like distances of the pair, the central stars are dim enough that temperatures at Kepler-16b would appear to be too cold for life. At the Austin meeting, however, researchers from the University of Texas at Arlington have made the case that an Earth-class planet could exist here as an exomoon orbiting the gas giant. They have no indication that such a planet actually exists, as Zdzislaw Musielak (UT-Arlington) is quick to point out, but the work is interesting nonetheless:
“This is an assessment of the possibilities,” said Musielak. “We’re telling them where a planet has to be in the system to be habitable. We’re hoping they will look there.”
Making conditions on such a moon habitable would require an atmosphere with a strong warming effect that could be provided by high levels of greenhouse gases like carbon dioxide or methane. Such an atmosphere would widen what we would normally consider to be the habitable zone around the two stars. Al Jackson noted today in an email from Austin that all kinds of new ways to study Kepler candidates are coming to the fore, and remember that Kepler still has a long way to go before its primary mission is accomplished. As to exomoons, we’ve yet to identify one, but so much good work has been accomplished on how to achieve such a detection that it’s surely not going to be long before we have candidate exomoons to focus in on.
The paper on this work is not yet out, but I’ll announce it here when it’s available. Meanwhile, thoughts on how many habitable worlds are out there continue to be expansive. More on this tomorrow, as we return to news coming out of the Austin conference.
Innovative Interstellar Explorer: A Response to Questions
Ralph McNutt’s recent update on the progress of the Innovative Interstellar Explorer concept elicited plenty of comments, enough that Dr. McNutt wanted to answer them in a new post. Now at Johns Hopkins University Applied Physics Laboratory, McNutt is Project Scientist and a Co-Investigator on NASA’s MESSENGER mission to Mercury, Co-Investigator on NASA’s Solar Probe Plus mission to the solar corona, Principal Investigator on the PEPSSI investigation on the New Horizons mission to Pluto, a Co-Investigator for the Voyager PLS and LECP instruments, and a Member of the Ion Neutral Mass Spectrometer Team on the Cassini Orbiter spacecraft. With all that on his plate, it’s hard to see how he has time for anything else, but McNutt also continues his work as a consultant on the Project Icarus interstellar design study. His Innovative Interstellar Explorer is a precursor mission designed to push our technologies hard.
by Ralph McNutt
I typically do not get involved with commenting on comments just because of the time constraints of protracted discussions, but some of the questions raised by your readers are, I think, very good and deserving of a response. [The original post is Update on Innovative Interstellar Explorer — readers may want to skim through the comments there to get up to speed — PG].
Let me try to take the comments, for the most part, in order. At one point we did take a look at Sedna and the other large trans-Neptunian Objects (TNOs). The orbit of Sedna (which can be found here) will move through ~60° of arc and through its perihelion between now and 2100 (just prior to the aphelion of Pluto) — this is out of an orbital period of ~12,600 years. All of this motion is within 90 AU of the Sun, the orbital inclination is ~12°, and the object is certainly accessible with the appropriate “tweak” at a Jupiter gravity assist. Such an aim point also puts constraints on exactly where one is aiming with respect to the direction of the incoming interstellar wind. To exit the solar system rapidly, one wants a speed as high as possible. Traveling at “only” ~17 km/s (about the flyby speed of Voyager 1 past Titan and faster than the ~13 km/s speed of New Horizons past Pluto), close imaging is problematic (with a radius of 1500 km, the spacecraft covers one object radius in ~100 s). Several months of high resolution imaging are possible with a large camera such as LORRI on New Horizons, but not with a cell phone camera (which would die rapidly in the space radiation environment shortly after launch anyway).
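The crossing-time arithmetic above is quick to check: at flyby speed v, the probe covers one object radius R in R/v seconds.

```python
def radius_crossing_time(radius_km, speed_km_s):
    # Time for the spacecraft to cover one object radius at flyby speed.
    return radius_km / speed_km_s

sedna_radius = 1500      # km, the approximate figure quoted above
flyby_speed = 17         # km/s, a Voyager-1-class flyby speed

t = radius_crossing_time(sedna_radius, flyby_speed)
print(f"one object radius crossed in ≈ {t:.0f} s")  # ≈ 88 s
```

A minute and a half per object radius leaves very little time for close imaging, which is why a long-range instrument like LORRI, imaging over months of approach, is the workable alternative.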
Eris, with an orbital inclination approaching 45°, is currently about 97 AU from the Sun (orbit here) and is inbound to perihelion, crossing the plane of the ecliptic at ~90 AU in the early 2070’s, but still outside of 83 AU in 2100. Makemake (orbit) is currently well above the plane of the ecliptic, passing through it just after the end of this century and just inside of 50 AU; its orbit has a relatively small eccentricity of ~0.16 and an inclination of ~29°. The real problem is that doing a flyby of a TNO really is a different mission.
It is perhaps also worth noting that nuclear electric propulsion has been looked at — and in some detail under NASA’s Project Prometheus. The problem is that the power system needs to have a specific mass no greater than ~30 kg/kW (something noted by Ernst Stuhlinger back in the 1960’s — Stuhlinger literally wrote the book on ion propulsion) for nuclear electric propulsion (NEP) to have an advantage in delivered speed. But that has to include the mass of the system for dumping the waste heat of the reactor (a consequence of the second law of thermodynamics) as well as its mechanical supports. The Prometheus architecture came in at over twice that, and that is the problem. To date all NEP designs come in underpowered when engineering closure on the system as a whole is examined. Think of Hiram Maxim’s steam-powered airplane versus the gasoline-powered airplane of the Wright Brothers. This is ultimately the problem with VASIMR as well — a more mass-efficient means of providing the wall-plug electricity is needed if it is ever to become a real system.
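The closure problem is easy to see numerically. Taking an illustrative 200 kW reactor (an assumed figure for the sketch, not a Prometheus specification), the ~30 kg/kW break-even specific mass, and “over twice that” for the Prometheus-class design:

```python
power_kw = 200                        # assumed reactor output for illustration
alpha_target = 30                     # kg/kW, the break-even specific mass
alpha_prometheus = 2 * alpha_target   # "over twice" the target figure

# Power-system mass scales linearly with specific mass, so doubling
# alpha doubles the dead mass the propulsion system must push.
mass_target_t = power_kw * alpha_target / 1000       # tonnes
mass_prometheus_t = power_kw * alpha_prometheus / 1000

print(f"at break-even alpha:    {mass_target_t:.1f} t")
print(f"Prometheus-class: over  {mass_prometheus_t:.1f} t")
```

Six tonnes of power system is already a heavy burden for a small probe; double it and the design no longer beats simpler alternatives, which is the closure failure described above.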
The spacecraft mass question is a good one as well. We tried pushing that on the precursor to IIE that was funded by NIAC – an “all the stops pulled out” approach that reduces the spacecraft mass to ~150 kg including a payload. Again the problem is engineering closure. Even if I miniaturize the electronics to microminiaturized solid state items, I need communications, guidance and control, power, thermal control, and a payload. The payload sensors have to be a finite size just to collect the data if all I am fighting is Poisson statistics – which can be traded against integration time (but it makes no sense to spend 10 years to make one measurement). Even with an iPad or equivalent that is radiation hardened, one cannot reduce the mass arbitrarily and then still make the measurements that are the raison d’etre for the effort in the first place. Ultimately, one runs into physical limits set by the properties of the materials from which one constructs components.
Part of this is manufacturing and part is the physics of the material itself. Practicalities are also involved. For the NIAC effort, we looked at the idea (and not a new one) of using ultra-low power (ULP) electronics running at liquid nitrogen temperatures. But now I have a real problem in testing such devices, as the coefficients of thermal expansion of the materials as well as the Johnson noise can preclude operation at room temperature. I could fix that with a lab and facilities on the Moon, but now that infrastructure is required, and the technicians would have to work in space suits – and I have a scenario that does not close economically (and may not technically either). Everyone in the deep-space robotic business has mass reduction as a primary goal – on everything. One can build *something* for less than ~250 kg, but the indications are that to build the desired functionality, that type of mass limit will be “sporty.”
Image: IIE Initial Concept Closeup. Credit: JHU/APL.
With respect to launch vehicles, the use of “really big” vehicles for robotic missions has always been problematic because of the cost. There was a Voyager Mars Program in the late 1960’s which envisioned using a Saturn V to send two large rovers to Mars. Similarly, we looked at implementing IIE with an Ares V combined with either a Centaur or NERVA upper stage. While flyout times are reduced, the decrease is not a factor of two.
With respect to communications, in the NIAC work we looked at an IR optical communications system running at about 890 nm (see this paper). That was not the problem. The problem was holding spacecraft pointing well enough to keep the laser spot on the Earth from 1000 AU (the requirement for that more aggressive mission). One can certainly do the pointing with a sufficiently capable guidance system — but that drove the mass even more. We found that the best trade was a high gain antenna (HGA) of just under 3 meters diameter (about what is on Pioneer 10 and 11 and New Horizons). One driver is holding tolerances during manufacture and another is holding them under the vibration environment imposed by the launch. Materials are not infinitely stiff (which is good, because otherwise they would break), but that means corrections and feedbacks as required. The other interesting thing about a laser com system running from ~5 light days out is that closed-loop operation is not credible, and the beam is sufficiently small and the distance sufficiently large that, to minimize power, I need a clock with an ephemeris that can be used to point the transmitter to where the Earth will be when the modulated laser carrier arrives there.
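The pointing numbers can be sketched directly: the angle Earth subtends from 1000 AU sets the open-loop aiming requirement, and the one-way light time shows why closed-loop correction is out of the question.

```python
AU = 1.496e11               # metres
EARTH_DIAMETER = 1.2742e7   # metres
C = 2.998e8                 # speed of light, m/s

distance = 1000 * AU

# Small-angle approximation: the angle Earth subtends at the spacecraft.
angle_rad = EARTH_DIAMETER / distance
angle_mas = angle_rad * 206265 * 1000   # radians -> milliarcseconds

one_way_days = distance / C / 86400

print(f"Earth subtends ≈ {angle_mas:.0f} mas from 1000 AU")  # ≈ 18 mas
print(f"one-way light time ≈ {one_way_days:.1f} days")       # ≈ 5.8 days
```

Holding a transmitter to roughly an 18 milliarcsecond target, with any correction signal more than eleven days round-trip away, makes clear why the trade settled on a conventional high gain antenna.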
Our Meaning-Stuffed Dreams
Gregory Benford’s work is so widely known that it almost seems absurd to introduce him, but for any Centauri Dreams readers who have somehow missed it, I challenge you to read In the Ocean of Night and not become obsessed with reading this author’s entire output. This week has been a science fictional time for Centauri Dreams, with discussion of SF precedents to modern discoveries in the comments for stories like Marc Millis’ ‘Future History.’ So it seems appropriate to end the week with an essay Greg published yesterday on his own site, one that appealed to me so much that I immediately asked him for permission to run it again here.
In the essay, Greg takes a look at science fiction writer Thomas Disch and in particular the way his thoughts on SF illuminate not just the genre but the world we live in. It’s insightful stuff, and makes me reflect on how our ideas of the future shape our upcoming realities. I will also admit to a fascination with science fiction’s history that never wanes, a passion that is reignited whenever I see serious thought being given to the intricate machinery of modern prose.
by Gregory Benford
I recently reread The Dreams Our Stuff Is Made Of by my old friend Thomas Disch (Free Press, 1998, $25). Tom is now gone, but his ideas seem fresh as ever about science fiction and where it’s gone.
Here are some thoughts on the book, which still bears consideration. This sadly sardonic survey of science fiction worries its subject from many angles: historical, literary, sociological. Science fiction (sf) is perhaps the defining genre of the twentieth century, its conquering armies still camped outside the Rome of the literary citadels.
It’s an old story. Throughout the twentieth century, conventional literature persistently avoided thinking about conceptually altered tomorrows, and retreated into a realist posture of fiction of ever-smaller compass. By foregrounding personal relations, the novel of character came — especially in a classic debate around World War I between Henry James and H.G. Wells — to claim the pinnacle of orthodox fiction. James won that argument, surrendering the future to the genre that would later increasingly set the terms of social debate.
Disch underpins his wryly witty observations with poet Delmore Schwartz’s resonant title from 1938, *In Dreams Begin Responsibilities*. This “pregnant truth” is his clarion call to the genre that once fascinated him but plainly calls to him less since the mid-1980s. Sf takes up Big Ideas, but does not always treat them well. This unfulfilled promise vexes Disch, and he rummages among the cranks, fakes and crazies that often camped near the Legions of the Future. He treats us to tours of mesmerism from the time of Poe, to UFOs and their exploiters (Whitley Strieber, a flagrant example), to the huge religion invented in an sf magazine, Scientology. These unseemly neighbors of the genre betray America’s great historical trouble: high dreams, ready gullibility. Some skepticism is quite in order, particularly in the New Age.
The persistence of cranks and fools in the ranks of sf is sobering. We’ll scarcely be invited to tea if we keep such companions. This blends with Disch’s class analysis of literature.
Still, “The difference between highbrow and low — between Eliot and Poe, between mainstream and scifi — is not one that can be mapped by the conventional criteria of criticism.” He supports this by showing that Poe is more a formalist than Eliot, and less given to overt lecturing and preachiness. Instead, “The essential difference is not one of aesthetics or of some subtler metaphysical nature, but of the two writers’ antithetical social and economic positions.” Poe was a popular, market-driven writer, a “magazinist,” while Eliot was supported by a high culture with subtle patronage.
Sf is best seen as the voice of a rising class that sprang from the burgeoning American masses: hopeful, middle-class technological types. Their very earnestness carried their arguments and visions into the souls of the one country most responsible for our visions of the future; sf is notably an American creation, since the great era of Wells.
Predictably, its grandiose dreams lead to its worst faults. Sf’s greatest vice is lecturing. In the face of such large ideas, many authors became the “School Teacher Absolute, a fate that would befall so many later sf writers — Heinlein, Asimov, Bradbury, Le Guin, Delany — that it must be considered an occupational hazard.” It can carry a writer away. Disch sees the later work of Philip K. Dick, particularly the important *Valis*, as “madness recollected in a state of borderline lucidity.”
Such faults go with the territory, but they do not dominate. The true strength of the genre lies in its power to convince by imagining. “A theory can be controverted; a myth persuades at gut level.”
We sf writers were often great makers of myth, some lifted from written sf and tarted up for media consumption. *Star Trek* is notorious for looting the more thoughtful work of writers for their striking effects, leaving behind most of the thought and subtlety. Of the show’s huge global audience, he observes, “few audiences like to be challenged,” for after all, “it is traditionally the prelude to a duel, not to a half-hour of light entertainment. Any artist’s first order of business is not to challenge but to entice.”
He views this most persistent of TV shows from a fashion angle: actors in pajamas. Their starship looks much like an office from the inside, with lookalike uniforms: “the same parables of success-through-team effort that can be found on such later workplace-centered sitcoms as The Mary Tyler Moore Show and Designing Women.”
Trek was thus the prophet of the politically correct multicultural future just ahead of us, with workplace equality conspicuously displayed. Disch wrings much humor from this insight, yet surely the crucial nature of both Star Trek and Star Wars lies in their invocation of family. Outer-space futures had previously seemed so daunting to audiences that they typically served as the backdrop for horror (the Alien series, etc.).
Star Trek’s insight lay in the promise of going to the stars together, with well-defined stereotypes who could supply the emotional frame for the potentially jarring truths of these distant places. That is why the cultures they meet proved so boring: “Blandness and repetition can be comforting, and comfort is a major desideratum in bedtime stories.” Alas, the genre set out to do more than rock us to sleep.
The market now mirrors his withering analysis. Despite his assertion that “three or four slots on the best-seller lists are occupied by SF titles,” the occupants are in fact fantasy tomes and Michael Crichton clones, not actual sf at all. Only one true sf novel I can recall from the 1990s made the lists for long, Arthur C. Clarke’s 3001, a media-driven sequel to a sequel to a sequel. Instead, fantasy reigns supreme.
Indeed, Disch believes that once space travel, sf’s grand metaphor, proved to mean long voyages to inhospitable places, the genre reverted to fantasy-like motifs. There is truth in this, both in the rise of genre fantasy in books (now plagued with a numbing sameness and endless trilogies) and in the triumph of Joseph Campbell (savant of the mythic-archetype theory of storytelling, as used by George Lucas in Star Wars) over John W. Campbell (tough-minded editor of Astounding magazine, the font of sf’s Golden Age, yet also the crucible of Scientology and crank ideas like the infamous Dean Drive).
This retreat from the observable fact (that the moon is indeed a harsh mistress) signals to Disch the end of sf’s best days. Though he scorns the Heinlein-Pournelle wing of hard sf (“Space is like Texas, only larger.”), he confesses a fondness for that seminal work of physical exploration, Hal Clement’s Heavy Planet.
Certainly, “hardness” in the sense of scrupulous concern for the facts and methods of science remains for many the core of the field, and its always hopeful promise. Hardness has been appropriated by some for political hard-nosed analysis, often with a libertarian bias, sometimes even for a conservative one — a seeming contradiction, for a “literature of change.”
Clement’s seminal world-building took us to far exotica, to meet the strange face to face. Indeed, aliens are the most pointed sf motif. “If God can’t be coerced into breaking his silence, at least he can send emissaries,” a neat compression of science’s failure to reveal the holy, and sf’s literary attempt to find it metaphorically in the alien. Aliens are only passingly interesting to see; what one wants to do is talk to them, sense the strangeness of another mind.
Yet this is not the focus of the movies and TV, which have turned sf’s aliens into horror shows or neat parables. “Screenwriters do not have the luxury that novelists enjoy of taking the time to explain things, to pose riddles and work them out, to think. Such bemusements can be the glory of sf (as of the deductive mystery, another genre poorly served by film)” and we see it seldom in the torrent of special effects circuses pouring from our screens.
In the late 1990s we have entered an era when special effects can show us just about anything, sometimes at surprisingly little cost. This could liberate sf in the arena by which it is increasingly judged, the visual.
I believe this to be the great challenge to the genre: to use its insights and methods to reach the great potential audience with more than simple spectacle. The western made such a transition in the 1950s, producing its highest works (High Noon, The Searchers, Shane) before running out of conceptual gas.
Written sf may have lesser prospects. Media tie-in work fills a (thankfully) separate section of the sf division in the larger book stores. In the rising tide of media spinoff novels and “sharecropping” of imaginative territories pioneered by early greats, Disch sees the genre’s probable fate: “more of the same and more of the sameness.”
Need this be so? I find the quantity of fine written sf has never been higher, counter-balancing the media tie-in clones. This goes little noticed in the windy passageways of the literary castles, for the division of that Wells-James debate persists. There is a curious mismatch between the reviewing media and the reading public. One would expect an efficient market to shape book reviewing to the great strengths of contemporary America: genres, from the hardboiled detective to cutting-edge sf to wispy, traditional fantasy.
In the end, Disch seems saddened because the promise of the New Wave, just breaking when he entered the field in the 1960s, hissed away into the sands of time. But the legacy of his generation is deeper, raising the net in the genre’s perpetual tennis match between conventional literature’s subtle, stylish stamina and sf’s blunt, intellectual energies. True, Disch’s fellow marchers have largely fallen silent, but the advance of hard sf after them used weaponry they had devised. From Clement’s beginning, hard sf has fashioned a whole armament of methods, from which mainstream mavens like Tom Clancy, and savvy insiders like Larry Niven and Jerry Pournelle, have built rich provinces of their own. Neal Stephenson’s cultural insights and technoriffs too have found a huge audience.
Genres are best seen as constrained conversations, and sf is the leader and innovator in this. Constraint is essential, defining the rules and assumptions open to an author. If hard sf occupies the center of science fiction, that is probably because hardness gives the firmest boundary.
Genres are also like immense discussions, with ideas developed, traded, and variations spun down through time. Players ring changes on each other: a steppin’-out jazz band, not a solo concert in a plush auditorium. Contrast “serious” fiction (more accurately described, I believe, as merely self-consciously solemn), which proceeds from canonical classics that supposedly stand outside of time, deserving awe, looming great and intact by themselves.
Disch seems to sense the central draw of sf, but because he has been so isolated from it for so long, his expedition never reaches the core. Genre pleasures are many, but the quality of shared values within an on-going discussion may be the most powerful, enlisting lifelong devotion in its fans. In contrast to the Grand Canon view, genre reading satisfactions are a striking facet of modern democratic (“pop”) culture.
Disch does deplore the recent razoring of literature by critics: the tribes of structuralists, post-modernists, deconstructionists. To many sf writers, “post-modern” is simply a signature of exhaustion. Its typical apparatus (self-reference, heavy dollops of obligatory irony, self-conscious use of older genre devices, pastiche and parody) betrays a lack of invention, of the crucial coin of sf, imagination. Some deconstructionists have attacked science itself as mere rhetoric, not an ordering of nature, seeking to reduce it to the status of the ultimately arbitrary humanities. Most sf types find this attack on empiricism a worn old song with new lyrics, quite retro.
At the core of sf lies the experience of science. This makes the genre finally hostile to such fashions in criticism, for it values its empirical ground. Deconstructionism’s stress on contradictory or self-contained internal differences in texts, rather than on their link to reality, often merely leads to literature seen as empty word games.
Sf novels give us worlds which are not to be taken as metaphors, but as real. We are asked to participate in wrenchingly strange events, not merely watch them for clues to what they’re really talking about. Sf pursues a “realism of the future” and so does not take its surrealism neat, unlike much avant-garde work which is easily confused with it. These followers of James have yet to fathom this. The Mars and stars and digital deserts of our best novels are, finally, to be taken as real, as if to say: life isn’t like this, it is this.
The best journeys can go to fresh places, not merely return us to ourselves. Despite Disch’s sad eulogy for the genre’s past, which he considers its high point, I suspect there are great trips yet to be taken.