Laniakea: Milky Way’s Address in the Cosmos

by Paul Gilster on September 4, 2014

Science fiction writers have a new challenge this morning: to come up with a plot that takes in not just the galaxy, and not just the Local Group in which the Milky Way resides, but the far larger home of both. Laniakea is the name of this supercluster, from a Hawaiian word meaning ‘immense heaven.’ And immense it is. Superclusters are made up of groups like the Local Group, each of which contains dozens of galaxies, and clusters that contain hundreds more, all interconnected by a filamentary web whose boundaries have proven hard to define.

Where does one supercluster begin and another end? As explained in a cover story in the September 4 issue of Nature, an emerging way to tune up our cosmic maps is to look at the effect of large-scale structures on the movements of galaxies. A team under R. Brent Tully (University of Hawaii at Manoa) has been using data from radio telescopes to study the velocities of 8000 galaxies, adjusting for the universe’s accelerating expansion to create a map of the cosmic flow of these galaxies as determined by gravitational effects.
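The bookkeeping behind such flow maps is straightforward: a galaxy’s observed recession velocity is the sum of the uniform Hubble expansion and a gravitationally induced peculiar velocity, and it is the latter that traces the flows. A minimal sketch in Python, using a round Hubble constant and an invented example galaxy rather than anything from the team’s actual catalog:

```python
# A galaxy's observed velocity splits into uniform Hubble flow plus a
# peculiar velocity driven by gravity: v_obs = H0 * d + v_pec.
# H0 and the example galaxy below are illustrative, not from the survey.

H0 = 70.0  # Hubble constant, km/s per Mpc (assumed round value)

def peculiar_velocity(v_obs_km_s, distance_mpc, h0=H0):
    """Line-of-sight peculiar velocity in km/s."""
    return v_obs_km_s - h0 * distance_mpc

# A hypothetical galaxy at 100 Mpc observed receding at 7300 km/s:
v_pec = peculiar_velocity(7300.0, 100.0)
print(v_pec)  # 300.0 -- the gravitationally induced part of its motion
```

Mapping thousands of these residual velocities is what reveals where the flows converge on one basin of attraction rather than another.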

The boundaries between superclusters, such as those between Laniakea and the Perseus-Pisces Supercluster, are where the galactic flows diverge and neighboring structures shear apart. As this National Radio Astronomy Observatory news release points out, within the boundaries of the Laniakea Supercluster, the motions of galaxies are directed inward. In other superclusters, the flow of galaxies goes toward a different gravitational center.

This is how our horizons get adjusted. Previously we thought of the Milky Way as part of the Virgo Supercluster, but now we see even this region as just part of the far larger Laniakea Supercluster. We’re talking about a structure some 520 million light years in diameter that contains the mass of one hundred million billion suns across a staggering 100,000 galaxies. And just as the Sun is in the galactic ‘suburbs’ of the Milky Way, a long way from the galaxy’s teeming center, so the Milky Way itself lies on the outskirts of the Laniakea Supercluster.


Image: A slice of the Laniakea Supercluster in the supergalactic equatorial plane — an imaginary plane containing many of the most massive clusters in this structure. The colors represent density within this slice, with red for high densities and blue for voids — areas with relatively little matter. Individual galaxies are shown as white dots. Velocity flow streams within the region gravitationally dominated by Laniakea are shown in white, while dark blue flow lines lead away from the Laniakea local basin of attraction. The orange contour encloses the outer limits of these streams, a diameter of about 160 Mpc. This region contains the mass of about 100 million billion suns. Credit: SDvision interactive visualization software by DP at CEA/Saclay, France.
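As a quick sanity check, the two size figures given here are consistent: at roughly 3.26 million light years per megaparsec, the 160 Mpc in the caption matches the 520 million light years in the text above.

```python
# Convert the 160 Mpc diameter in the caption to millions of light years.
MLY_PER_MPC = 3.262  # million light years per megaparsec

diameter_mly = 160 * MLY_PER_MPC
print(round(diameter_mly))  # 522 -- consistent with the ~520 Mly in the text
```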

Those of us with an interest in Polynesia will love the name Laniakea, which was chosen to honor the Polynesian sailors who used their deep knowledge of the night sky to navigate across the Pacific. If you look through the essays in Interstellar Migration and the Human Experience (University of California Press, 1985), you’ll find several that dwell on the historical example of the Polynesian navigators as a way of examining future migration into the stars. The theme resonates and I invariably hear it mentioned at the various conferences on interstellar flight.

The diagrams below offer another way of viewing the gravitational interactions that pull together the immense supercluster. You’ll notice the Great Attractor, a gravitational focal point that influences the motion of galaxy clusters including our own Local Group. The NRAO refers to it as a ‘gravitational valley’ whose effects can be felt across the Laniakea Supercluster.


Image: Two views of the Laniakea Supercluster. The outer surface shows the region dominated by Laniakea’s gravity. The streamlines shown in black trace the paths along which galaxies flow as they are pulled closer inside the supercluster. Individual galaxies’ colors distinguish major components within the Laniakea Supercluster: the historical Local Supercluster in green, the Great Attractor region in orange, the Pavo-Indus filament in purple, and structures including the Antlia Wall and Fornax-Eridanus cloud in magenta. Credit: SDvision interactive visualization software by DP at CEA/Saclay, France.

Have a look at this video from Nature to see the whole supercluster set in motion.

So now we know that our home supercluster is actually 100 times larger in volume and mass than we previously thought. In an article summarizing these findings in Nature, Elizabeth Gibney points out that a somewhat different definition of a supercluster is being used by Gayoung Chon (Max Planck Institute for Extraterrestrial Physics, Germany) and colleagues, who base their definition on structures that will one day collapse into a single object, something that cannot be said for Laniakea because some of its galaxies will always move away from each other. Clearly, the definition of a supercluster is a work in progress, but let’s hope the name sticks.

The paper is Tully et al., “The Laniakea supercluster of galaxies,” Nature 513 (4 September 2014), 71-73 (abstract).



Red Dwarf Planets: Weeding Out the False Positives

by Paul Gilster on September 3, 2014

For those of you who, like me, are fascinated with red dwarf stars and the prospects for life around them, I want to mention David Stevenson’s Under a Crimson Sun (Springer, 2013), with the caveat that although it’s on my reading list, I haven’t gotten to it yet. More about this title after I’ve gone through it, but for now, notice that the interesting planet news around stars like Gliese 581 and GJ 667C is catching the eye of publishers and awakening interest in the public. It’s easy to see why. Planets in the habitable zone of such stars would be exotic places, far different from Earth, but possibly bearing life.

At the same time, we’re learning a good deal more about both of the above-mentioned stars. A new paper by Paul Robertson and Suvrath Mahadevan (both at Pennsylvania State) looks at GJ 667C with encouraging — and cautionary — results. The encouraging news is that GJ 667Cc, a super-Earth in the habitable zone of the star, is confirmed by their work. The cautionary note is that stellar activity can produce signals we may wrongly interpret as exoplanets, and not every planet thought to be in this system may actually be there.


Image: The view from GJ 667Cc as presented in an artist’s impression. Note the distant binary to the right of the parent red dwarf. New work confirms the existence of this interesting world in the habitable zone. Credit: ESO/L. Calçada.

Remember that this is a system that was originally thought to have two super-Earths: GJ 667Cb and GJ 667Cc. The complicated designation is forced by the fact that the red dwarf in question, GJ 667C, is part of a triple star system, a distant companion to the binary pair GJ 667AB. It was just last year that the first two planets around GJ 667C were announced, followed by results from a different team showing five more super-Earths around the same star. See Gliese 667C: Three Habitable Zone Planets for my discussion of the apparent result, which at the time seemed spectacular.

The new paper from Robertson and Mahadevan takes a critical look at this system, examining the level of activity in the host star and studying the average width of the star’s spectral absorption lines, which should flag changes in the spectrum produced by magnetic features like starspots. Using these methods, the team was able to remove the stellar activity component from the observed signals, letting the signatures of the real planets stand while raising problems for the other candidates.

GJ 667Cc survives the test, a happy outcome for those interested in the astrobiological prospects here. GJ 667Cb also makes the cut, but Robertson and Mahadevan believe that planet d in this system, originally thought to be near the outer edge of the habitable zone, is a false positive created by stellar activity and the rotation of the star. As for the other planet candidates in this system, Paul Robertson has this to say in an online post:

The signals associated with them are so small that they cannot be seen with “industry-standard” analysis techniques, regardless of whether we have corrected for activity. However, considering how successful our activity correction has been at boosting the signals of real planets, the fact that we see no sign of any of these planet candidates after the activity correction leads us to strongly doubt their existence.

All of this should remind us not to jump too swiftly to conclusions about planet candidates, particularly given the sensitivity involved with the spectrographs used in our planet-finding work. Radial velocity data that looks strong can actually be the result of magnetic events on the surface of the star being observed. When Robertson and Mahadevan looked at Gliese 581, another highly interesting system because of planets possibly in the habitable zone, they found no sign of Gliese 581g, a controversial candidate whose existence is still being debated. In Gliese 581 and the Stellar Activity Problem, Robertson has this to say:

With an orbital period of 33 days, the controversial “planet g” also lies at an integer ratio of the stellar rotation period. Sure enough, no sign of g remains after our activity correction, revealing that it too was an artifact of magnetic activity. While this outcome is certainly disappointing for anyone hoping to find signs of life in the GJ 581 system, it is heartening to finally put the confusion and dispute surrounding this system to rest.

Moreover, another candidate potentially in the habitable zone, Gl 581d, falls back into the measurement noise, meaning that it was another signature of stellar activity rather than an actual planet. The red dwarf Gliese 581 is thus reduced to three planets in its system, and we’ve lost the best candidates for life. That may sound discouraging, but I think we can take heart from the fact that work like this shows we’re getting much better at eliminating false positives. Using these methods, real planets stand out in the data, which means we can more readily identify habitable zone planets as our spectrographic instrumentation improves.
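The ‘integer ratio’ Robertson mentions for the 33-day signal is easy to check against the roughly 130-day stellar rotation period his team reports for Gliese 581 (a sketch with those two round numbers):

```python
# The disputed 'planet g' signal at 33 days vs. the ~130-day stellar
# rotation period reported by Robertson et al. for Gliese 581.
p_rot = 130.0  # stellar rotation period, days (reported value, approximate)
p_sig = 33.0   # period of the 'planet g' radial-velocity signal, days

ratio = p_rot / p_sig
print(round(ratio, 2))  # 3.94 -- close to the integer 4, the signature of a
# harmonic of rotating magnetic surface features rather than of a planet
```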

The paper is Robertson and Mahadevan, “Disentangling Planets and Stellar Activity for Gliese 667C,” accepted for publication at Astrophysical Journal Letters (preprint). For Gliese 581, see Robertson and Mahadevan, “Stellar Activity Masquerading as Planets in the Habitable Zone of the M dwarf Gliese 581,” published in Science Express (3 July 2014).



Streamers of Gravel near Orion Nebula?

by Paul Gilster on September 2, 2014

I have a soft spot in my heart for the Green Bank Telescope in West Virginia. It’s not just that Frank Drake started Project Ozma on the site in 1960, or that Benjamin Zuckerman and Patrick Palmer ran an Ozma follow-up there in the mid-1970s. I was tracking SETI closely by 1980 or so and knew of these observations, but it was my friend Mike Gingell whose yearly trips to Green Bank kept the place firmly in mind. Like me, Mike was a member of the Society of Amateur Radio Astronomers, and unlike me, he was a highly qualified engineer.

Mike died just last year and I went out to his house to look through a collection of old radio books his wife thought I might be interested in. There in the back yard were three radio dishes, all tuned not for television but for the radio astronomy work Mike was so engaged in. Seeing them already beginning to succumb to foliage — Mike had been ill for some time and couldn’t keep up with them — reminded me strongly of some of J.G. Ballard’s fiction, like 1968’s “The Dead Astronaut,” in which the launch gantries and control rooms of Cape Canaveral have all been abandoned, succumbing to time and spreading sawgrass.

Fortunately for us, the Green Bank Telescope is vibrantly alive, and has just reported on findings that would have had Mike poring over the Monthly Notices of the Royal Astronomical Society, in which they will shortly appear. The big radio dish, the world’s largest fully steerable radio telescope, has been turned to the Orion Molecular Cloud Complex, a star-forming region that is home to the famous Orion Nebula. Star-forming material here is found to be filled with planetary building blocks the size of pebbles.


Image: Radio/optical composite of the Orion Molecular Cloud Complex showing the OMC-2/3 star-forming filament. GBT data is shown in orange. Uncommonly large dust grains there may kick-start planet formation. Credit: S. Schnee, et al.; B. Saxton, B. Kent (NRAO/AUI/NSF), acknowledging the use of NASA’s SkyView Facility located at NASA Goddard Space Flight Center.

If the finding can be confirmed, it will show that objects up to a thousand times larger than the dust grains normally found around protostars may be a new class of mid-sized particles that could give planet formation an easier start. What astronomer Scott Schnee (National Radio Astronomy Observatory) and team have found is an unusually nurturing environment for planets in which some protostars evidently form. The star-forming material here exists in the form of dust-rich filaments dotted with dense knots known as cores.

We’re looking at what, in a million years and perhaps less, will begin to evolve into a star cluster, all within a region called OMC-2/3 in the northern part of the Orion Molecular Cloud Complex. The Green Bank Telescope revealed that the region was shining much brighter than expected in millimeter-wavelength light, based on earlier studies at the IRAM 30 meter radio telescope in Spain. Says Schnee:


“This means that the material in this region has different properties than would be expected for normal interstellar dust. In particular, since the particles are more efficient than expected at emitting at millimeter wavelengths, the grains are very likely to be at least a millimeter, and possibly as large as a centimeter across, or roughly the size of a small Lego-style building block.”

But just how unusual is the finding? The paper on the work comments:

…it will be important to determine if OMC-2/3 is unique in exhibiting large grains or if this is a common feature of star-forming filaments. Although OMC-2/3 is unique in that it has a higher density of starless and protostellar cores than other regions within ~500 pc of the Sun, there are no other properties (mass, density, temperature, etc.) that would lead one to suspect that the dust grains in OMC-2/3 ought to have properties significantly different than those found in other nearby molecular clouds.

Image: Zoom in of the OMC-2/3 region. Credit: S. Schnee, et al.; B. Saxton, B. Kent (NRAO/AUI/NSF), acknowledging the use of NASA’s SkyView Facility located at NASA Goddard Space Flight Center.

If the star-forming filaments, by virtue of their lower temperatures, higher densities and lower velocities (compared to molecular clouds), are themselves the cause of the formation of these large grains, then we may be looking at a population of rocky particles not previously identified at this stage of stellar evolution: what NRAO astronomer Jay Lockman calls a ‘vast streamer of gravel.’

The other possibility, noted in this NRAO news release, is that the rocky particles observed in OMC-2/3 may have emerged inside earlier protoplanetary disks and have simply escaped back into the surrounding molecular cloud. Whatever the case, the paper points out that there may be other explanations for the bright signature of the OMC emissions, which is why the work continues. This region contains a high concentration of protostars that serve as a laboratory for our study of star formation and the molecular clouds from which they emerge.

The paper is Schnee et al., “Evidence for Large Grains in the Star-forming Filament OMC-2/3,” accepted at Monthly Notices of the Royal Astronomical Society (preprint).



Remembering Voyager: Triton’s New Map

by Paul Gilster on August 29, 2014

I’m glad to see Ralph McNutt quoted in a recent news release from the Johns Hopkins Applied Physics Laboratory. McNutt has been working on interstellar concepts for a long time, including the Innovative Interstellar Explorer mission that could become a follow-up to New Horizons. But he’s in the news in late August because of Voyager, and in particular Voyager 2, which made its flyby of Neptune on August 25, 1989, some 25 years ago. McNutt recalls those days, when he was a member of the Voyager plasma-analysis team:

“The feeling 25 years ago was that this was really cool, because we’re going to see Neptune and Triton up-close for the first time. The same is happening for New Horizons. Even this summer, when we’re still a year out and our cameras can only spot Pluto and its largest moon as dots, we know we’re in for something incredible ahead.”

I can only envy someone who was up close with the Voyager outer planet flybys and is now a key player on New Horizons, for which McNutt leads the energetic-particle investigation team. The image below is a long way from the much closer views Voyager gave us of Neptune, but it’s what New Horizons could make out with its Long-Range Reconnaissance Imager in mid-July. It’s what NASA’s Jim Green calls a ‘cosmic coincidence’ that New Horizons crossed the orbit of Neptune on the 25th anniversary of the Voyager flyby.


Image: The New Horizons spacecraft captured this view of the giant planet Neptune and its large moon Triton on July 10, 2014, from a distance of about 3.96 billion kilometers — more than 26 times the distance between the Earth and sun. The 967-millisecond exposure was taken with the New Horizons telescopic Long-Range Reconnaissance Imager (LORRI). New Horizons traversed the orbit of Neptune on Aug. 25, 2014 — its last planetary orbit crossing before beginning an encounter with Pluto in January 2015. In fact, at the time of the orbit crossing, New Horizons was much closer to its target planet — just about 440 million kilometers — than to Neptune.

I can remember staying up late the night of the Neptune encounter, being most curious not about Neptune itself but its moon Triton. We had already learned to expect surprises from Voyager — Io alone made that point — and Triton did not disappoint us with its unanticipated plumes, signs that the frozen world was active, and its odd ‘cantaloupe’ terrain. A bit larger than Pluto, Triton serves as a rough guide for what to expect at Pluto/Charon, but it’s also a point of departure, given its evident capture by Neptune and the resulting tidal heating.

Remember, this is a world that follows a retrograde orbit, moving opposite to Neptune’s rotation. The odds are strong that we’re looking at an object captured from the Kuiper Belt. Gravitational stresses would account for melting within this ice world, and explain the fractures and plume activity, evidently geysers of nitrogen, that Voyager saw. A newly restored Triton map, produced by Paul Schenk (Lunar and Planetary Institute), has a resolution of 600 meters per pixel and has been enhanced for contrast.


Image: The best-ever global color map of Neptune’s large moon Triton, produced by Paul Schenk. This map has a resolution of 600 meters per pixel. The colors have been enhanced to bring out the contrast but are a close approximation to Triton’s natural colors. Voyager’s “eyes” saw in colors slightly different from human eyes, and this map was produced using orange, green and blue filter images. Credit: Paul Schenk/LPI.

The video made from the same data is quite breathtaking. Have a look.

Keep in mind the limitations of the imagery. In 1989, the year of the Voyager flyby, Triton’s northern hemisphere was swathed in darkness, allowing the spacecraft to have a clear view of only one hemisphere during its closest approach. Now we wait to see what views New Horizons will generate of Pluto/Charon next summer. Given that Triton and Pluto are similar in density and composition, with carbon monoxide, carbon dioxide, nitrogen and methane ices on the surface, we may see some similar features. Will there be plumes on Pluto?



Thinking about Magnetic Sails

by Paul Gilster on August 28, 2014

Magnetic sails — ‘magsails’ — are a relative newcomer on the interstellar propulsion scene, having been first analyzed by Dana Andrews and Robert Zubrin in 1988. We saw that the particle beam concept advanced by Alan Mole and discussed this week by Jim Benford would use a magsail in which the payload and spacecraft were encircled by a superconducting loop 270 meters in diameter. The idea is to use the magnetic field to interact with the particle beam fired from an installation in the Solar System toward the departing interstellar craft.

Within our own system, we can also take advantage of the solar wind, the plasma stream flowing outward from the Sun at velocities as high as 600 kilometers per second. A spacecraft attempting to catch this wind runs into the problem that sunlight carries far more momentum than the wind does, which means a magnetic sail has to deflect a great deal more solar wind than a solar sail needs to deflect sunlight to achieve the same thrust. A physical sail, though, is more massive than a spacecraft whose ‘sail’ is actually a magnetic field, so the magsail spacecraft can be the less massive of the two.
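The momentum mismatch is easy to quantify with typical textbook values for conditions at 1 AU; the solar wind varies widely, so treat the ratio as order-of-magnitude only:

```python
# Compare the momentum flux of sunlight with the dynamic pressure of the
# solar wind at 1 AU. Typical textbook values; the wind varies widely,
# so treat the result as order-of-magnitude only.
S   = 1361.0     # solar constant, W/m^2
c   = 3.0e8      # speed of light, m/s
n   = 7.0e6      # solar wind proton density, m^-3 (~7 per cm^3)
m_p = 1.67e-27   # proton mass, kg
v   = 4.5e5      # solar wind speed, m/s (~450 km/s)

p_light = S / c            # radiation pressure on an absorber, Pa
p_wind  = n * m_p * v**2   # solar wind dynamic pressure, Pa

print(p_light / p_wind)    # sunlight wins by a factor of ~2000
```

That factor of roughly a thousand or more is why a magsail’s effective area has to be so much larger than a light sail’s to deliver comparable thrust.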

Science fiction began exploring basic solar sails in the 1960s through stories like Clarke’s “Sunjammer” and Cordwainer Smith’s “The Lady Who Sailed the Soul.” In fact, SF writers have done an excellent job in acquainting the public with how solar sails would operate and what their capabilities might be. But magsails are hard to find in science fiction, and the only novel that springs readily to mind is Michael Flynn’s The Wreck of the River of Stars, whose haunting title refers to a magsail passenger liner at the end of its lifetime.

Here’s Flynn in ‘Golden Age’ Heinlein style introducing the tale:

They called her The River of Stars and she spread her superconducting sails to the solar wind in 2051. She must have made a glorious sight then: her fuselage new and gleaming, her sails shimmering in a rainbow aurora, her white-gloved crew sharply creased in black-and-silver uniforms, her passengers rich and deliciously decadent. There were morphy stars and jeweled matriarchs, sports heroes and prostitutes, gangsters and geeks and soi-disant royalty. Those were the glamour years, when magsails ruled the skies, and The River of Stars was the grandest and most glorious of that beautiful fleet.


Image: There are few science fiction stories involving magsails, and even fewer visual depictions. The cover art for Michael Flynn’s book, by the artist Stephan Martiniere, is a striking exception.

The novel takes place, though, many years later, when the grand passenger liner has become no more than an obsolete freighter whose superconducting sail structure has been decommissioned in favor of newly developed fusion drives. What happens when she needs to power up the sail again because of a fusion emergency makes up the bulk of the tale. The Wreck of the River of Stars is not about an interstellar journey but a highly developed infrastructure within the Solar System that, for a time, used the solar wind. It will be interesting to see what science fiction tales grow out of the current interstellar thinking.

For magsails emerged in an interstellar context, and if it was Robert Zubrin and Dana Andrews who worked through the equations of what we conceive today as a magsail, it was Robert Bussard who first brought life to the idea through his notion of an interstellar ramjet that would use magnetic fields to scoop up fuel between the stars. Both Zubrin and Andrews saw the potential uses of a magsail for deceleration against a stellar wind. If beam dispersal cannot be prevented to allow an interstellar magsail to be accelerated by particle beam, we might still consider equipping a beamed laser sailcraft with magsail capabilities for use upon arrival.

And when it comes to magsails closer to home, one cautionary note is provided by a 1994 paper from the Italian physicist Giovanni Vulpetti, who describes the problems we may have operating superconductors within the orbit of Mars. The paper notes that superconductivity can be lost this close to the Sun unless massive thermal shielding is applied and that, of course, ramps up the spacecraft mass. This evidently does not preclude outer system work, but it could serve as a brake on using magsails near the Earth, at least until we make considerable advances in superconductor technology.

The Vulpetti paper is “A Critical Review on the Viability of Space Propulsion Based on the Solar Wind Momentum Flux,” Acta Astronautica 37 (1994), 641-642.



Jim Benford’s article on particle beam propulsion, published here last Friday and discussed in the days since, draws from the paper he will soon be submitting to one of the journals. I like the process: By running through the ideas here, we can see how they play before this scientifically literate audience, with responses that Jim can use in tweaking the final draft of the paper. Particle beam propulsion raises many issues, not surprising given the disagreements among the few papers that have tackled the subject. Are there ways of keeping the beam spread low that we haven’t thought of yet? Does a particle beam require shielding for the payload? Does interplanetary particle beam work require a fully built infrastructure in the Solar System? We have much to consider as the analysis of this interesting propulsion concept continues. Dr. Benford is President of Microwave Sciences in Lafayette, California, which deals with high power microwave systems from conceptual designs to hardware.

by James Benford


Let me first say that I appreciate the many comments on my piece on neutral particle beam propulsion. With so many comments, I can respond to only a few of them here. I appreciate in particular the many comments and suggestions by Alex Tolley, swage, Peter Popov, Dana Andrews, Michael, Greg (of course), Project Studio and David Lewis.

Galacsi: The launch system as envisioned by Dana Andrews and Alan Mole would be affixed to an asteroid that would provide sufficient mass to prevent the reaction from the launch of the beam from altering the orbit of the Beamer and changing the direction of the beam itself. No quantitative evaluation of this has been provided to date.

James Messick says we can have thrusters to maintain the Beamer in place, but the thrusters must have the same thrust as the Beamer in order to prevent some serious motion.

Rangel is entirely right; one has to start at lower power nearer objectives, as we have to do for all interstellar concepts.

Alex Tolley is quite correct that what is envisioned here is a series of beam generators at each end of the journey for interplanetary missions, which means a big and mature Solar System economy. That’s why I placed this in future centuries. And I agree with him that in the short term beamed electromagnetic or electric sails are going to be much more economic because they don’t require deceleration at the destination.

Adam: the Beamer requirement, if the magsail expands as the beam pressure falls off, probably doesn’t scale well, as B falls off very quickly. I don’t think the scaling justifies any optimism.

There are certainly a lot of questions about the solar wind’s embedded magnetic field. All these requirements would benefit from a higher magnetic field from the magsail, which unfortunately also increases the mass of the probe.

Alex Tolley correctly points out that deflecting high-energy particles produces synchrotron radiation, which will require some shielding of the payload. Shielded payloads are available now, due to DOD requirements. [Jim adds in an email: “Shielding is needed for the payload while the beam is on. Keep it, don’t discard it, as there are cosmic rays to shield against on all flights.”]

Swage is correct in saying that we need to start small, meaning interplanetary, before we think large. Lasers are indeed far less efficient than the neutral beam concept: material particles carry much more momentum per unit of beam energy than photons do, so deflecting them is a far more efficient process. Swage is completely correct about the economics of using beam propulsion.

And using multiple smaller beams doesn’t reduce divergence. ‘Would self focusing beams be an option?’ No. Charged beams don’t self-focus in a vacuum, they need a medium for that and it isn’t easy to make happen. Charged particle beams can be focused using their self-generated magnetic field only when some neutralization of charges is provided. There is also a large set of instabilities that can occur in such regimes. That’s a basic reason why charged particle beams are not being seriously considered as weapons and neutral beams are the only option.


Image: The divergence problem. A charged-particle beam will tend naturally to spread apart, due to the mutually repulsive forces between the like-charged particles constituting the beam. The electric current created by the moving charges will generate a surrounding magnetic field, which will tend to bind the beam together. However, unless there is some neutralization of the charge, the mutually repulsive force will always be the stronger force and the beam will blow itself apart. Even when the beam is neutralized, the methods used to neutralize it can still lead to unavoidable beam divergence over the distances needed for interstellar work. Image credit: Richard Roberds/Air University Review.
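The caption’s claim that repulsion always wins for an unneutralized beam can be made quantitative with a standard beam-physics result: the magnetic pinch force on a beam moving at speed βc is β² times the electrostatic repulsion, so the net outward force scales as 1 − β² = 1/γ². A small illustration:

```python
# Net radial force on an unneutralized charged beam: the magnetic pinch
# is beta^2 times the electrostatic repulsion, leaving a net outward
# force proportional to 1 - beta^2 = 1/gamma^2. It never reaches zero
# for beta < 1, so the bare beam always blows itself apart.

def net_force_factor(beta):
    """Fraction of the electrostatic repulsion left after the pinch."""
    return 1.0 - beta**2

for beta in (0.2, 0.9, 0.99):
    print(beta, net_force_factor(beta))
# Even at 0.99c about 2% of the repulsion survives: small, but never zero.
```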

Peter Popov asked whether you could focus sunlight directly. You can’t focus sunlight to a smaller angular size than it fills in your sky. (That is because the sun is an incoherent source. The focusability of sunlight is limited by its incoherence, meaning that the radiation from the sun comes from a vast number of radiating elements which are not related to one another in a coherent way.) Therefore the ability to focus sunlight is limited, and is in no way related to the focusing of coherent light. You can increase the focusing aperture, collecting more light and making the power density higher, but the spot size doesn’t change.
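The geometric limit here is that the focal spot can never be smaller than the image of the solar disk, roughly the focal length times the Sun’s ~0.53° angular diameter. A sketch with an assumed 10-meter focal length:

```python
import math

# Minimum focal spot for sunlight is the image of the solar disk:
# spot diameter ~ focal length x the Sun's angular diameter (~0.53 deg),
# independent of how large the collecting aperture is.
theta_sun = math.radians(0.53)  # full angular diameter of the Sun, radians

def spot_diameter(focal_length_m):
    """Smallest achievable sunlight spot for a given focal length, meters."""
    return focal_length_m * theta_sun

print(round(spot_diameter(10.0), 3))  # 0.093 -- about 9 cm for f = 10 m
# A bigger mirror at this focal length pours more power into the same
# ~9 cm spot: the power density rises, but the spot does not shrink.
```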

Dana Andrews’ comment that the neutral “atoms with any transverse velocity are eliminated before they are accelerated” means that you throw away all but one part in a million of the initial beam. Suppose this device, which filters particles by angle, reduces the divergence by three orders of magnitude. For a beam uniform in angular distribution, that implies a reduction in intensity by a factor of one million, because the solid angle scales with the square of the opening angle. Such a vast inefficiency is unaffordable.
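The arithmetic behind that one-in-a-million figure (solid angle going as the square of the opening angle):

```python
# Filtering a beam down to a narrower opening angle: the solid angle
# scales as the square of the angle, so cutting the divergence by a
# factor k keeps only 1/k^2 of a beam uniform in angular distribution.

def surviving_fraction(divergence_reduction):
    return 1.0 / divergence_reduction**2

print(surviving_fraction(1000))  # 1e-06 -- one part in a million
```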

For Dana & Alex Tolley, re-ionizing the beam as it reaches the magsail will not be difficult. The reason is that they are in relativistically separated frames so that the magnetic field of the magsail will appear as an electric field in the frame of the atoms, a field sufficient to ionize the atom. No on-board ionizer is required.

Michael suggests going to ultrarelativistic beams, but that means much more synchrotron radiation when the beam deflects from the magsail. Consequently, very much higher fields are necessary for deflection. That would mean either much more current or much larger diameter in the magsail. My instinct is that that does not scale well. And the divergence I described is not changed by going ultrarelativistic, as it just depends on ratios of mass and energies of electron to ion. Also, using heavier atoms helps but, with a square root dependence, not enough.

ProjectStudio also advocates that an ultrarelativistic neutral beam would have a reduced divergence, for which see above. I note again the enormous amount of radiation they produce whenever they are either deflected by the magnetic field or collide with matter. In fact, going in the Andrews/Mole concept from 0.2 c to 0.9c means the synchrotron radiation increases by a factor of 2300! That bathes the payload, as the ions swing round.

Alex Tolley is also correct in saying that we need to look into the development of beam power infrastructure. Once it’s in place, economics drives down the price of transportation; the same was true for the railroads.

David Lewis seems to get the concept entirely.



Beaming to a Magnetic Sail

by Paul Gilster on August 26, 2014

Jim Benford’s work on particle beam propulsion concepts, and in particular on the recent proposal by Alan Mole for a 1 kg beam-driven interstellar probe, has demonstrated the problem with using neutral particle beams for interstellar work. What we would like to do is use a large superconducting loop (Mole envisions a loop 270 meters in diameter) to create a magnetic field that will interact with the particle beam being fired at it. Benford’s numbers show that significant divergence of the beam is unavoidable, no matter what technology we bring to bear.

That means that the particle stream being fired at the receding starship is grossly inefficient. In the case of Mole’s proposal, the beam size will reach 411 kilometers by the end of the acceleration period. We have only a fraction of the beam actually striking the spacecraft.

This is an important finding and one that has not been anticipated in the earlier literature. In fact, Geoffrey Landis’ 2004 paper “Interstellar Flight by Particle Beam” makes the opposite statement, arguing that “For a particle beam, beam spread due to diffraction is not a problem…” Jim Benford and I had been talking about the Landis paper — in fact, it was Jim who forwarded me the revised version of it — and he strongly disagrees with Landis’ conclusion. Let me quote what Landis has to say first; he uses mercury as an example in making his point:

[Thermal beam divergence] could be reduced if the particles in the beam condense to larger particles after acceleration. To reduce the beam spread by a factor of a thousand, the number of mercury atoms per condensed droplet needs to be at least a million. This is an extremely small droplet (10⁻¹⁶ g) by macroscopic terms, and it is not unreasonable to believe that such condensation could take place in the beam. As the droplet size increases, this propulsion concept approaches that of momentum transfer by use of pellet streams, considered for interstellar propulsion by Singer and Nordley.

We’ve talked about Cliff Singer’s ideas on pellet propulsion and Gerald Nordley’s notion of using nanotechnology to create ‘smart’ pellets that can navigate on their own (see ‘Smart Pellets’ and Interstellar Propulsion for more, and on Singer’s ideas specifically, Clifford Singer: Propulsion by Pellet Stream). The problem with the Landis condensed droplets, though, is that we are dealing with beam temperatures that are extremely high — these particles have a lot of energy. Tomorrow, Jim Benford will be replying to many of the reader comments that have come in, but this morning he passed along this quick response to the condensation idea:

Geoff Landis’ proposal to reduce beam divergence, by having neutral atoms in the particle beam condense, is unlikely to succeed. Just because the transverse energy in the relativistic beam is only one millionth of the axial energy does not mean that it is cool. Doing the numbers, one finds that the characteristic temperature is very high, so that condensation won’t occur. The concepts described are far from cool beams.

Where there is little disagreement, however, is in the idea that particle beam propulsion has major advantages for deep space work. If it can be made to work, and remember that Benford believes it is impractical for interstellar uses but highly promising for interplanetary transit, then we are looking at a system that is extremely light in weight. The magsail itself is not a physical object, so we can produce a large field to interact with the incoming particle stream without the hazards of deploying a physical sail, as would be needed with Forward’s laser concepts.


Image: The magsail as diagrammed by Robert Zubrin in a NIAC report in 2000. Note that Zubrin was looking at the idea in relation to the solar wind (hence the reference to ‘wind direction’), but deep space concepts involve using a particle stream to drive the sail. Credit: Robert Zubrin.

Another bit of good news: We can achieve high accelerations because unlike the physical sail, we do not have to worry about the temperature limits of the sail material. The magnetic field is not going to melt. Although Landis is talking about a different kind of magsail technology than envisioned by Alan Mole, the point is that higher accelerations come from increasing the beam power density on the sail, and that means cruise velocity is reached in a shorter distance. That at least helps with the beam divergence problem and also with the aiming of the beam.

Two other points bear repeating. A particle beam, Landis notes, offers much more momentum per unit energy than a laser beam, so we have a more efficient transfer of force to the sail. Landis also points to the low efficiency of lasers at converting electrical energy, “typically less than 25% for lasers of the beam quality required.” Even assuming future laser efficiency in the fifty percent range, this contrasts with a particle beam that can achieve over 90 percent efficiency, which reduces the input power requirements and lowers the waste heat.

But all of this depends upon getting the beam on the target efficiently, and Benford’s calculations show that this is going to be a problem because of beam divergence. However, the possibility of fast travel times within the Solar System and out as far as the inner Oort Cloud make neutral particle beams a topic for further study. And certainly magsail concepts retain their viability for interstellar missions as a way of slowing the probe by interacting with the stellar wind of the target star.

I’ll aim at wrapping up the current discussion of particle beam propulsion tomorrow. The image in today’s article was taken from Robert Zubrin and Andrew Martin’s “The Magnetic Sail,” a Final Report for the NASA Institute of Advanced Concepts in 2000 (full text). The Landis paper is “Interstellar flight by particle beam,” Acta Astronautica 55 (2004), 931-934.



Beamed Sails: The Problem with Lasers

by Paul Gilster on August 25, 2014

We saw on Friday through Jim Benford’s work that pushing a large sail with a neutral particle beam is a promising way to get around the Solar System, although it presents difficulties for interstellar work. Benford was analyzing an earlier paper by Alan Mole, which had in turn drawn on issues Dana Andrews raised about beamed sails. Benford found that the difficulty lies in keeping a neutral particle beam from diverging, so that the spot size of the beam does not quickly become much larger than the diameter of the sail. By his calculations, only a fraction of the particle beam Mole envisaged would actually strike the sail, and even laser cooling methods were ineffective at preventing this.


It seems a good time to look back at Geoffrey Landis’ paper on particle beam propulsion. I’m hoping to discuss some of these ideas with him at the upcoming Tennessee Valley Interstellar Workshop sessions in Oak Ridge, given that Jim Benford will also be there. The paper is “Interstellar Flight by Particle Beam” (citation below), published in 2004 in Acta Astronautica, a key reference in an area that has not been widely studied. In fact, the work of Mole, Andrews and Benford, along with Landis and Gerald Nordley, is actively refining particle beam propulsion concepts, and what I’m hoping to do here is to get this work into a broader context.

Image: Physicist and science fiction writer Geoffrey Landis (Glenn Research Center), whose ideas on particle beam propulsion have helped bring the idea into greater scrutiny.

Particle beams are appealing because they solve many of the evident limitations of laser beaming methods. To understand these problems, let’s look at their background. The man most associated with the development of the laser sail concept is Robert Forward. Working at the Hughes Aircraft Company and using a Hughes fellowship to assist his quest for degrees in engineering (at UCLA) and then physics (University of Maryland), Forward became aware of Theodore Maiman’s work on lasers at Hughes Research Laboratories. The prospect filled him with enthusiasm, as he wrote in an unfinished autobiographical essay near the end of his life:

“I knew a lot about solar sails, and how, if you shine sunlight on them, the sunlight will push on the sail and make it go faster. Normal sunlight spreads out with distance, so after the solar sail has reached Jupiter, the sunlight is too weak to push well anymore. But if you can turn the sunlight into laser light, the laser beam will not spread. You can send on the laser light, and ride the laser beam all the way to the stars!”

The idea of a laser sail was a natural. Forward wrote it up as an internal memo within Hughes in 1961 and published it in a 1962 article in Missiles and Rockets that was later reprinted in Galaxy Science Fiction. George Marx picked up on Forward’s concepts and studied laser-driven sails in a 1966 paper in Nature. Remember that Forward’s love of physical possibility was accompanied by an almost whimsical attitude toward the kind of engineering that would be needed to make his projects possible. But the constraints are there, and they’re formidable.

Landis, in fact, finds three liabilities for beamed laser propulsion:

  • The energy efficiency of a laser-beamed lightsail infrastructure is extremely low. Landis notes that the force produced by reflecting a light beam is no more than 6.7 N/GW, and that means that you need epically large sources of power, ranging in some of Forward’s designs all the way up to 7.2 TW. We would have to imagine power stations built and operated in an inner system orbit that would produce the energy needed to drive these mammoth lasers.
  • Because light diffracts over interstellar distances, even a laser has to be focused through a large lens to keep the beam on the sail without wasteful loss. In Forward’s smaller missions, this involved lenses hundreds of kilometers in diameter, and as much as a thousand kilometers in diameter for the proposed manned mission to Epsilon Eridani with return capability. This seems highly impractical in the near term, though as I’ve noted before, it may be that a sufficiently developed nanotechnology mining local materials could construct large apertures like this. The time frame for this kind of capability is obviously unclear.
  • Finally, Landis saw that a laser-pushed sail would demand ultra-thin films that would need to be manufactured in space. The sail has to be as light as possible given its large size because we have to keep the mass low to achieve the highest possible mission velocities. Moreover, that low mass requires that we do away with any polymer substrate so that the sail is made only of an extremely thin metal or dielectric reflecting layer, something that cannot be folded for deployment, but must be manufactured in space. We’re a long way from these technologies.

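Landis’ thrust figure in the first point follows directly from the momentum of light: a perfectly reflecting sail feels a force F = 2P/c. A quick sketch of the arithmetic (the 7.2 TW value is Forward’s design figure quoted above):

```python
C = 2.998e8  # speed of light, m/s

def photon_thrust(power_watts: float, reflective: bool = True) -> float:
    """Force on a sail from a light beam: F = 2P/c for a perfect
    reflector, P/c for a perfect absorber."""
    return (2.0 if reflective else 1.0) * power_watts / C

print(photon_thrust(1e9))     # ~6.7 N per GW, Landis' figure
print(photon_thrust(7.2e12))  # ~48,000 N even at Forward's 7.2 TW
```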
This is why the particle beam interests Landis, who also looked at the concept in a 1989 paper, and why Dana Andrews was drawn to do a cost analysis of the idea that fed into Alan Mole’s paper. Gerald Nordley also discussed the use of relativistic particle beams in a 1993 paper in the Journal of the British Interplanetary Society. Here is Landis’ description of the idea as of 2004:

In this propulsion system, a charged particle beam is accelerated, focused, and directed at the target; the charge is then neutralized to avoid beam expansion due to electrostatic repulsion. The particles are then re-ionized at the target and reflected by a magnetic sail, resulting in a net momentum transfer to the sail equal to twice the momentum of the beam. This magnetic sail was originally proposed to be in the form of a large superconducting loop with a diameter of many tens of kilometers, or “magsail” [7].

The reference at the end of the quotation is to a paper by Dana Andrews and Robert Zubrin discussing magnetic sails and their application to interstellar flight, a paper in which we learn that some of the limitations of Robert Bussard’s interstellar ramjet concept — especially drag, which may invalidate the concept because of the effects of the huge ramscoop field — could be turned around and used to our advantage, either for propulsion or for braking while entering a destination solar system. Tomorrow I’ll continue with this look at the Landis paper with Jim Benford’s findings on beam divergence in mind as the critical limiting factor for the technology.

The Landis paper is “Interstellar flight by particle beam,” Acta Astronautica 55 (2004), 931-934. The Dana Andrews paper is “Cost considerations for interstellar missions,” Paper IAA-93-706, 1993. Gerald Nordley’s 1993 paper is “Relativistic particle beams for interstellar propulsion,” Journal of the British Interplanetary Society 46 (1993) 145–150.



Sails Driven by Diverging Neutral Particle Beams

by Paul Gilster on August 22, 2014

Is it possible to use a particle beam to push a sail to interstellar velocities? Back in the spring I looked at aerospace engineer Alan Mole’s ideas on the subject (see Interstellar Probe: The 1 KG Mission and the posts immediately following). Mole had described a one-kilogram interstellar payload delivered by particle beam in a paper in JBIS, and told Centauri Dreams that he was looking for an expert to produce cost estimates for the necessary beam generator. Jim Benford, CEO of Microwave Sciences, took up the challenge, with results that call interstellar missions into doubt while highlighting what may become a robust interplanetary technology. Benford’s analysis, to be submitted in somewhat different form to JBIS, follows.

by James Benford


Alan Mole and Dana Andrews have described light interstellar probes accelerated by a neutral particle beam. I’ve looked into whether that particle beam can be generated with the required properties. I find that unavoidable beam divergence, caused by the neutralization process, makes the beam spot size much larger than the sail diameter. While the neutral beam driven method can’t reach interstellar speeds, fast interplanetary missions are more credible, enabling fast travel of small payloads around the Solar System.

Neutral-Particle-Beam-Driven Sail

Dana Andrews proposed propulsion of an interstellar probe by a neutral particle beam, and Alan Mole later proposed using it to propel a lightweight 1 kg probe [1,2]. The probe is accelerated to 0.1 c at 1,000 g by a neutral particle beam with 300 GW of power, 16 kA of current, and 18.8 MeV per particle. The particle beam intercepts a spacecraft that is a magsail: payload and structure encircled by a magnetic loop. The loop’s magnetic field deflects the particle beam around it, imparting momentum to the sail and accelerating it.

Intense particle beams have been studied for 50 years. One of the key features is that the intense electric and magnetic fields required to generate such beams determine many features of the beam and whether it can propagate at all. For example, intense charged beams injected into a vacuum would explode. Intense magnetic fields can make beam particles ‘pinch’ toward the axis and even reverse their trajectories and go backwards. Managing these intense fields is a great deal of the art of using intense beams.

In particular, a key feature of such intense beams is the transverse velocity of beam particles. Even though the bulk of the energy propagates in the axial direction, there are always transverse motions caused by the means of generation of beams. For example, most beams are created in a diode and the self-fields in that diode produce some transverse energy. Therefore one cannot simply assume that there is a divergence-less beam.

What I will deal with here is how small that transverse energy can be made to be. This matters because the beam must propagate over large distances: acceleration of the probe continues out to 0.3 AU, about 45 million km. That requires that the beam divergence be very small. In the original paper on the subject, Dana Andrews [2] simply stated the beam divergence to be 3 nanoradians. This very small divergence was assumed because without it the beam spreads much too far and its energy is not coupled to the magsail. (Note that at 0.3 AU this divergence results in a 270 m beam cross-section, about the size of the magsail capture area.)

Just what are a microradian and a nanoradian? A beam from Earth to the Moon with microradian divergence would hit the Moon with a spot size of about 400 m. With nanoradian divergence the spot would be a very small 0.4 m, about 15 inches.
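
These spot sizes are just distance times divergence angle; a sketch using the mean Earth-Moon distance reproduces them:

```python
EARTH_MOON_M = 3.84e8  # mean Earth-Moon distance, meters

def spot_size(distance_m: float, divergence_rad: float) -> float:
    """Linear beam spread after propagating distance_m at a small
    divergence angle (spot ~ distance * angle)."""
    return distance_m * divergence_rad

print(spot_size(EARTH_MOON_M, 1e-6))  # ~384 m for a microradian
print(spot_size(EARTH_MOON_M, 1e-9))  # ~0.38 m (~15 in) for a nanoradian
```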

One method of getting a neutral particle beam might be to generate separate ion and electron beams and combine them. But two nearby charged beams would need to be propagated on magnetic field lines or they would simply explode due to the electrostatic force. If they are propagating parallel to each other along magnetic field lines, they will interact through their currents as well as their charges. The two beams will experience a JxB force, which causes them to spiral about each other. This produces substantial transverse motion before they merge. This example shows why the intense fields of particle beams create beam divergence no matter how carefully one can design them. But what about divergence of neutral particle beams?

Sailship V3

Image: A beamed sail mission as visualized by the artist Adrian Mann.

Neutral Beam Divergence

The divergence angle of a neutral beam is determined by three factors. First, the acceleration process can give the ions a slight transverse motion as well as propelling them forward. Second, focusing magnets bend low-energy ions more than high-energy ions, so slight differences in energy among the accelerated ions lead to divergence (unless compensated by more complicated bending systems).

Third, and quite fundamentally, the divergence angle introduced by stripping electrons from a beam of negative hydrogen or tritium ions to produce a neutral beam gives the atom a sideways motion. (To produce a neutral hydrogen beam, negative hydrogen atoms with an extra electron are accelerated; the extra electron is removed as the beam emerges from the accelerator.)

Although the first two causes of divergence can in principle be reduced, the last source of divergence is unavoidable.

In calculations I will submit to JBIS, the divergence angle introduced by stripping electrons from a beam of negative ions to produce a neutral beam, giving the resulting atom a sideways motion, produces a fundamental divergence. It is the square root of the product of two small ratios: the electron-to-ion mass ratio (≤10⁻³) and the ratio of neutralization energy to beam particle energy (≤10⁻⁷ for interstellar missions). The resulting divergence is small, typically of order 10 microradians, but far larger than the nanoradians assumed by Andrews and Mole. Furthermore, the square-root dependence makes the divergence insensitive to changes in ion mass and ionization energy.

In Alan Mole’s example, the beam velocity is highest at the end of acceleration, 0.2 c, twice the ship final velocity. Particle energy for neutral hydrogen is 18.8 MeV. The energy imparted to the electron to drive it out of the beam, resulting in a neutral, is 0.7 eV for hydrogen. Evaluation of Eq. 3 gives beam divergence of 4.5 microradians.
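
Eq. 3 itself is not reproduced in this post, but the physics described above fixes its form: the ejected electron carries transverse momentum of order √(2mₑEₙ) while the atom keeps axial momentum of order √(2MEᵦ), giving θ ≈ √((mₑ/M)(Eₙ/Eᵦ)). A non-relativistic sketch (an approximation, since the 0.2 c beam is mildly relativistic) reproduces the 4.5 microradian figure:

```python
import math

ME_OVER_MPROTON = 1.0 / 1836.0  # electron-to-proton mass ratio

def stripping_divergence(neutralization_ev, beam_ev, mass_ratio=ME_OVER_MPROTON):
    """Divergence from electron stripping (non-relativistic estimate):
    theta = sqrt((m_e / M_ion) * (E_neutralization / E_beam))."""
    return math.sqrt(mass_ratio * neutralization_ev / beam_ev)

# Mole's case: 18.8 MeV neutral hydrogen, 0.7 eV to strip the electron
theta = stripping_divergence(0.7, 18.8e6)
print(f"{theta * 1e6:.1f} microradians")  # -> 4.5 microradians
```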

This agrees with experimental data from the Strategic Defense Initiative (SDI). The observed divergence of a 100 MeV neutral beam was 3.6 microradians; for a triton beam (atomic weight 3), 2 microradians.

The beam size at the end of acceleration will be 411 km, while Alan Mole’s magnetic hoop is 270 m in diameter. The ratio of the area of the beam to the area of the sail is therefore 2.3 × 10⁶: only a small fraction of the beam impinges on the spacecraft. To reduce the beam divergence one could use heavier particles, but no nucleus is heavy enough to reduce the beam spot size to the sail diameter.
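
The area ratio follows from the quoted numbers, since the intercepted fraction is the square of the diameter ratio; a sketch of the arithmetic:

```python
beam_diameter_m = 411e3  # beam size at end of acceleration
sail_diameter_m = 270.0  # Mole's magnetic hoop

# Ratio of beam area to sail area scales as the square of the diameters
area_ratio = (beam_diameter_m / sail_diameter_m) ** 2
print(f"beam/sail area ratio: {area_ratio:.1e}")     # -> 2.3e+06
print(f"fraction intercepted: {1 / area_ratio:.1e}")
```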

Laser Cooling of Divergence?

Gerry Nordley has suggested that neutral particle divergence could be reduced by laser cooling. This method uses narrowband photons to selectively reduce the transverse velocity component of an atom, so the lasers must be precisely tunable; the technique is typically used in low-temperature trapping experiments. The lasers would be injected transversely to the beam, right after the beam is cleaned up as it comes out of the injector. They would need substantial power to act on the beam as it passes at a fraction of the speed of light. The coupling between the laser light and the neutral beam is extraordinarily poor, about 10⁻⁵ of the laser power, so this highly inefficient means of limiting divergence is impractical.

Fast Interplanetary Sailing

Beam divergence rules out acceleration to interstellar speeds, but fast interplanetary missions using the neutral beam/magsail concept look credible, enabling quick transit to the planets.

Given that the beam divergence is fundamentally limited to microradians, I used that constraint to make rough examples of missions. A neutral beam accelerates a sail, after which it coasts to its target, where a similar system decelerates it to its final destination. Typically the accelerator would be in high Earth orbit, perhaps at a Lagrange point. The decelerating system is in a similar location about another planet such as Mars or Saturn.

From the equations of motion we can get a feeling for the quantities. Here are the parameters of missions with sail probes at microradian divergence and increasing acceleration, driven by increasingly powerful beams.

Table 1. Beam/Sail Parameters

                    Fast Interplanetary    Faster Interplanetary    Interstellar Precursor
θ                   1 microradian          1 microradian            1 microradian
acceleration        100 m/s²               1,000 m/s²               10,000 m/s²
Ds                  270 m                  270 m                    540 m
V0                  163 km/s               515 km/s                 2,300 km/s
R                   135,000 km             135,000 km               270,000 km
t0                  27 minutes             9 minutes                4 minutes
mass                3,000 kg               3,000 kg                 3,000 kg
EK                  4 × 10¹³ J             4 × 10¹⁴ J               8 × 10¹⁵ J
P                   24 GW                  780 GW                   34 TW
particle energy     50 MeV                 50 MeV                   50 MeV
beam current        490 A                  15 kA                    676 kA
time to Mars        8.7 days               34 hours                 8 hours
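
The table entries follow from the beam geometry: acceleration is useful only while the beam spot stays within the sail, so the acceleration distance is R ≈ Ds/(2θ) for half-angle divergence θ, and the rest is kinematics. A sketch assuming perfect beam-to-sail coupling (a simplification, not Benford’s full model) reproduces the Fast Interplanetary column:

```python
import math

def mission(theta_rad, accel, sail_diameter_m, mass_kg):
    """Rough beam-driven magsail mission parameters. Acceleration ends
    when the beam spot outgrows the sail: R = Ds / (2 * theta)."""
    R = sail_diameter_m / (2.0 * theta_rad)  # acceleration distance, m
    v0 = math.sqrt(2.0 * accel * R)          # final velocity, m/s
    t0 = v0 / accel                          # acceleration time, s
    ek = 0.5 * mass_kg * v0**2               # sail kinetic energy, J
    power = ek / t0                          # beam power at perfect coupling, W
    return R, v0, t0, ek, power

# Fast Interplanetary column: 1 microradian, 10 gees, 270 m sail, 3,000 kg
R, v0, t0, ek, power = mission(1e-6, 100.0, 270.0, 3000.0)
print(f"R  = {R / 1e3:,.0f} km")     # -> 135,000 km
print(f"V0 = {v0 / 1e3:.0f} km/s")   # ~164 km/s (163 in the table)
print(f"t0 = {t0 / 60:.0f} min")     # -> 27 min
print(f"EK = {ek:.2e} J")            # ~4e13 J
print(f"P  = {power / 1e9:.0f} GW")  # ~25 GW, close to the table's 24 GW
```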

The first column shows a fast interplanetary probe with high interplanetary-scale velocity: acceleration of 100 m/s², about 10 gees, which a nonhuman cargo can sustain. The time required to reach this velocity is 27 minutes, by which point the sail has flown 135,000 km. The power required for the accelerator is 24 GW. If the particle energy is 50 MeV, well within the state of the art, the required current is 490 A. How long would an interplanetary trip take? If we take the average distance to Mars as 1.5 AU, the probe will be there in 8.7 days. This qualifies as a Mars Fast Track accelerator.

An advanced probe, at 100 gees acceleration, requires 0.78 TW of power and a current of 15 kA. It takes only 34 hours to reach Mars. At such speeds the outer solar system is accessible in a matter of weeks; Saturn, for example, can be reached by direct ascent in as little as 43 days.

A very advanced probe, an Interstellar Precursor, at 1,000 gees acceleration, reaches 0.8% of light speed. It has a power requirement of 34 TW and a current of 676 kA. It takes only 8 hours to reach Mars. At such speeds the outer solar system is accessible in a matter of days; Saturn, for example, can be reached by direct ascent in as little as a day. The Oort Cloud, at 2,000 AU, can be reached in 6 years.


The rough concepts that have been developed by Andrews, Mole and myself show that neutral beam-driven magnetic sails deserve more attention. But the simple mission scenarios described in the literature to date don’t come to grips with many of the realities. In particular, the efficiency of momentum transfer to the sail should be modeled accurately. Credible concepts for the construction of the sail itself, and especially including the mass of the superconducting hoop, should be assembled. As addressed above, concepts for using laser cooling to reduce divergence are not promising but should be looked into further.

A key missing element is that there is no conceptual design for the beam generator itself. Neutral beam generators thus far have been charged particle beam generators with a last stage for neutralization of the charge. As I have shown, this neutralization process produces a fundamentally limiting divergence.

Neutral particle beam generators have so far been operated in pulsed mode, with pulses of at most a microsecond, using pulsed power equipment at high voltage. Going to continuous beams, which would be necessary for the minutes of beam operation required as a minimum for useful missions, would mean rethinking the construction and operation of the generator. The average power requirement is quite high, and any adequate cost estimate would have to include substantial prime power and pulsed power (voltage multiplication) equipment, a major cost item in the system. It will vastly exceed the cost of the magnetic sails.

The Fast Interplanetary example in Table 1 requires 24 GW of power for 27 minutes, an energy of about 11 GWh. This is within today’s capability: the Three Gorges Dam, with a 22.5 GW capacity, generates some 92 TWh in a year. The other two examples cannot be powered directly off the grid today, so the energy would be stored prior to launch, and such storage, perhaps in superconducting magnets, would be massive.
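
The energy requirement is just power times acceleration time, E = P t; a quick check of the arithmetic, in joules and gigawatt-hours:

```python
power_w = 24e9        # 24 GW beam power
duration_s = 27 * 60  # 27 minutes of acceleration

energy_j = power_w * duration_s
energy_gwh = energy_j / 3.6e12  # 1 GWh = 3.6e12 J

print(f"{energy_j:.1e} J = {energy_gwh:.1f} GWh")  # -> 3.9e+13 J = 10.8 GWh
```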

Furthermore, if it were to be space-based, the heavy mass of the high-average-power equipment would mean a substantial System mass in orbit. The concept needs economic analysis to see where the cost optimum would actually lie. Such analysis would take into account the economies of scale of a large system as well as the cost of launch into space.

We can see in Table 1 an implied development path: the System starts with lower-speed, lower-mass sails for fast missions in the inner solar system. The neutral beam driver grows as technology improves. Economies of scale lead to faster missions with larger payloads. As interplanetary commerce begins to develop, these factors can be very important to making commerce operate efficiently, counteracting the long transit times between the planets and asteroids. The System evolves.

We’re now talking about matters in the 22nd and 23rd centuries. On that time scale, neutral beam-driven sails can address interstellar precursor missions, and eventually interstellar missions themselves, from the standpoint of a beam divergence technology much more advanced than we have today.


[1] Alan Mole, “One Kilogram Interstellar Colony Mission”, JBIS 66 (2013), pp. 381-387.

[2] Dana Andrews, “Cost Considerations for Interstellar Missions”, Acta Astronautica 34 (1994), pp. 357-365.

[3] Ashton Carter, Directed Energy Missile Defense in Space–A Background Paper, Office of Technology Assessment, OTA-BP-ISC-26, 1984.

[4] G. A. Landis, “Interstellar Flight by Particle Beam”, Acta Astronautica 55 (2004), pp. 931-934.

[5] G. Nordley, “Jupiter Station Transport By Particle Beam Propulsion”, NASA/OAC, 1994.


Mapping the Interstellar Medium

by Paul Gilster on August 21, 2014

The recent news that the Stardust probe returned particles that may prove to be interstellar in origin is exciting because it would represent our first chance to study such materials. But Stardust also reminds us how little we know about the interstellar medium, the space beyond our Solar System’s heliosphere through which a true interstellar probe would one day travel. Another angle into the interstellar medium is being provided by new maps of what may prove to be large, complex molecules, maps that will help us understand their distribution in the galaxy.

The heart of the new work, reported by a team of 23 scientists in the August 15 issue of Science, is a dataset collected over ten years by the Radial Velocity Experiment (RAVE). Working with the light of up to 150 stars at a time, the project used the UK Schmidt Telescope in Australia to collect spectroscopic information about them. The resulting maps eventually drew on data from 500,000 stars, allowing researchers to determine the distances of the complex molecules flagged by the absorption of their light in the interstellar medium.

About 400 of the spectroscopic features referred to as ‘diffuse interstellar bands’ (DIBs) — these are absorption lines that show up in the visual and near-infrared spectra of stars — have been identified. They appear to be caused by unusually large, complex molecules, but no proof has existed as to their composition, and they’ve represented an ongoing problem in astronomical spectroscopy since 1922, when they were first observed by Mary Lea Heger. Because objects with widely different radial velocities showed absorption bands that were not affected by Doppler shifting, it became clear that the absorption was not associated with the objects themselves.

That pointed to an interstellar origin for features that are much broader than the absorption lines in stellar spectra. We need to learn more about their cause because the physical conditions and chemistry between the stars are clues to how stars and galaxies formed in the first place. Says Rosemary Wyse (Johns Hopkins), one of the researchers on the project:

“There’s an old saying that ‘We are all stardust,’ since all chemical elements heavier than helium are produced in stars. But we still don’t know why stars form where they do. This study is giving us new clues about the interstellar medium out of which the stars form.”


Image courtesy of Petrus Jenniskens and François-Xavier Désert. See reference below.

But the paper makes clear how little we know about the origins of the diffuse interstellar bands:

Their origin and chemistry are thus unknown, a unique situation given the distinctive family of many absorption lines within a limited spectral range. Like most molecules in the ISM [interstellar medium] that have an interlaced chemistry, DIBs may play an important role in the life-cycle of the ISM species and are the last step to fully understanding the basic components of the ISM. The problem of their identity is more intriguing given the possibility that the DIB carriers are organic molecules. DIBs remain a puzzle for astronomers studying the ISM, physicists interested in molecular spectra, and chemists studying possible carriers in the laboratories.

The researchers have begun the mapping process by producing a map showing the strength of one diffuse interstellar band at 8620 Angstroms, covering the nearest 3 kiloparsecs from the Sun. Further maps assembled from the RAVE data should provide information on the distances of the material causing a wider range of DIBs, helping us understand how it is distributed in the galaxy. What stands out in the work so far is that the complex molecules assumed to be responsible for these dark bands are distributed differently from the dust particles that RAVE also maps. The paper notes two options for explaining this:

…either the DIB carriers migrate to their observed distances from the Galactic plane, or they are created at these large distances, from components of the ISM having a similar distribution. The latter is simpler to discuss, as it does not require knowledge of the chemistry of the DIB carrier or processes in which the carriers are involved. [Khoperskov and Shchekinov] showed that mechanisms responsible for dust migration to high altitudes above the Galactic plane segregate small dust particles from large ones, so the small ones form a thicker disk. This is also consistent with the observations of the extinction and reddening at high Galactic latitudes.

Working with just one DIB, we are only beginning the necessary study, but the current paper presents the techniques needed to map other diffuse bands that future surveys will assemble.

The paper is Kos et al., “Pseudo–three-dimensional maps of the diffuse interstellar band at 862 nm,” Science Vol. 345, No. 6198 (15 August 2014), pp. 791-795 (abstract / preprint). See also Jenniskens and Désert, “Complex Structure in Two Diffuse Interstellar Bands,” Astronomy & Astrophysics 274 (1993), 465-477 (full text).