Did Stardust Sample Interstellar Materials?

by Paul Gilster on August 18, 2014

Space dust collected by NASA’s Stardust mission, returned to Earth in 2006, may be interstellar in origin. We can hope that it is, because the Solar System we live in ultimately derives from a cloud of interstellar gas and dust, so finding particles from outside our system takes us back to our origins. It’s also a first measure — as I don’t have to tell this audience — of the kind of particles a true interstellar probe will encounter after it has left the heliosphere, the ‘bubble’ in deep space blown out by the solar wind.


Image: Artist’s rendering of the Stardust spacecraft. The spacecraft was launched on February 7, 1999, from Cape Canaveral Air Station, Florida, aboard a Delta II rocket. It collected cometary dust and suspected interstellar dust and sent the samples back to Earth in 2006. Credit: NASA JPL.

The cometary material has been widely studied in the years since its return, but how to handle the seven potentially interstellar grains found thus far, and verify their origin? It’s not an easy task. Stardust exposed its collector on the way to comet Wild 2 between 2000 and 2002. Aboard the spacecraft, sample collection trays made up of aerogel and separated by aluminum foil trapped three of the potentially interstellar particles, each only about a tenth the size of Wild 2’s comet dust, within the aerogel, while four other particles of interest left pits and rim residue in the aluminum foil. At Berkeley, synchrotron radiation from the lab’s Advanced Light Source, along with scanning transmission x-ray and Fourier transform infrared microscopes, has ruled out many candidate interstellar dust particles because they are contaminated with aluminum.

That aluminum may have been knocked off the spacecraft to become embedded in the aerogel, but we’ll learn more as the work continues. The grains are more than a thousand times smaller than a grain of sand. To confirm their interstellar nature it will be necessary to measure the relative abundances of three stable isotopes of oxygen, says Andrew Westphal (UC-Berkeley), lead author of a paper published last week in Science. In this news release from Lawrence Berkeley National Laboratory, Westphal notes that while the analysis would confirm the dust’s origin, the process would destroy the samples, which is why the team is hunting for more particles in the Stardust collectors even as it practices isotope analysis on artificial dust particles.


Image: The bulbous impact from the vaporized dust particle called Sorok can barely be seen as the thin black line in this section of aerogel in the upper right corner. Credit: Westphal et al. 2014, Science/AAAS.

So far the analysis has been entirely non-destructive and the results have been in some ways surprising. Twelve papers being published in Meteoritics & Planetary Science outline the methods now being deployed. Finding the grains has meant probing the aerogel panels by photographing tiny ‘slices’ at different visual depths, producing a sequence of millions of images that was turned into video. A citizen science project called Stardust@home played a key role in the analysis, harnessing a distributed network of volunteers who scanned the video for tracks caused by the dust. So far, more than 100 tracks have been found but not all have been analyzed, and only 77 of the 132 aerogel panels have been scanned.


So we have the potential for further finds. What we’re learning is that if this dust is indeed interstellar, it’s surprisingly diverse. Says Westphal:

“Almost everything we’ve known about interstellar dust has previously come from astronomical observations—either ground-based or space-based telescopes. The analysis of these particles captured by Stardust is our first glimpse into the complexity of interstellar dust, and the surprise is that each of the particles are quite different from each other.”

Image: The dust speck called Orion contained the crystalline minerals olivine and spinel, as well as an amorphous material containing magnesium and iron. Credit: Westphal et al. 2014, Science/AAAS.

Two of the larger particles have a fluffy composition that Westphal compares to a snowflake, a structure not anticipated from earlier models of interstellar dust. Interestingly, they contain olivine, a mineral composed of magnesium, iron and silicon, which implicates disk material or outflows from other stars, modified by their time in the interstellar deep. The fact that three of the particles found in the aluminum foil between tiles on the collector tray also contained sulfur compounds is striking, as sulfur was not expected in interstellar particles. The ongoing analysis of the remaining 95 percent of the foils in the collector may help clarify the situation.

The paper is Westphal et al., “Evidence for Interstellar Origin of Seven Dust Particles Collected by the Stardust Spacecraft,” Science Vol. 345, No. 6198 (2014), pp. 786-791 (abstract).


A Dramatic Upgrade for Interferometry

by Paul Gilster on August 15, 2014

What can we do to make telescopes better both on Earth and in space? Ashley Baldwin has some thoughts on the matter, with reference to a new paper that explores interferometry and advocates an approach that can drastically improve its uses at optical wavelengths. Baldwin, a regular Centauri Dreams commenter, is a consultant psychiatrist at the 5 Boroughs Partnership NHS Trust in Warrington, UK and a former lecturer at Liverpool and Manchester Universities. He is also a seriously equipped amateur astronomer — one who lives a tempting 30 minutes from the Jodrell Bank radio telescope — with a keen interest in astrophysics and astronomical imaging. His extensive reading takes in the latest papers describing optical breakthroughs, making him a key information source on these matters. His latest find could have major ramifications for exoplanet detection and characterization.

by Ashley Baldwin


An innocuous looking article by Michael J. Ireland (Australian National University, Canberra) and John D. Monnier (University of Michigan) may represent a big step towards one of the greatest astronomical instrument breakthroughs since the invention of the telescope. In true Monnier style it is downplayed. But I think you should pay attention to “A Dispersed Heterodyne Design for the Planet Formation Imager (PFI),” available on the arXiv site. The Planet Formation Imager is a future world facility that will image the process of planetary formation, especially the formation of giant planets. What Ireland and Monnier are advocating is a genuine advance in interferometry.

An interferometer essentially combines the light of several different telescopes, all in the same phase, so it adds together “constructively,” or coherently, to create an image via a rather complex mathematical process called a Fourier transform (no need to go into detail but suffice to say it works). We wind up with detail, or angular resolution, equivalent to that of a single telescope with an aperture as wide as the distance, or “baseline,” between the two. If you combine several telescopes, this creates more baselines, which in effect fill in more detail across the virtual telescope’s “diluted aperture.” The equation for baseline number is n(n−1)/2, where n is the number of telescopes. If you have 30 telescopes this gives an impressive 435 baselines, with angular resolution orders of magnitude beyond the biggest single telescope. So far so easy? Wrong.
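For readers who like to check such numbers, here is a minimal sketch in Python of the two scalings at work: the pair count n(n−1)/2, and the diffraction-limited resolution, roughly λ/B for wavelength λ and baseline B. The 1 km baseline and 550 nm wavelength in the example are illustrative assumptions, not parameters of any particular array.

```python
import math

def baseline_count(n_telescopes: int) -> int:
    """Number of unique telescope pairs (baselines): n(n-1)/2."""
    return n_telescopes * (n_telescopes - 1) // 2

def angular_resolution_rad(wavelength_m: float, baseline_m: float) -> float:
    """Diffraction-limited resolution, roughly lambda / B, in radians."""
    return wavelength_m / baseline_m

RAD_TO_MAS = (180 / math.pi) * 3600 * 1000  # radians -> milliarcseconds

print(baseline_count(30))  # 435 baselines for 30 telescopes, as above

# Illustrative: a 1 km baseline observed at 550 nm resolves about
# a tenth of a milliarcsecond, far beyond any single mirror ever built.
theta = angular_resolution_rad(550e-9, 1000.0)
print(round(theta * RAD_TO_MAS, 3))  # ~0.113 mas
```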

The principle was originally envisaged in the 1950s for optical/infrared telescopes. The problem is the coherent mixing of the individual wavelengths of light. It must be accurate to a tiny fraction of a wavelength, and for visible light a wavelength is only about half a millionth of a metre. Worse still, how do you arrange for light, each signal at a slightly different phase, to be mixed from telescopes a large distance apart?

Radio interferometers do this via optical fibres. Easy. Remember, you have to allow for the different times at which a given wavefront arrives at each telescope, so the signals can be combined in the phase at which they reached each antenna. This is done electronically. The radio waves are converted into electrical signals at the source, each preserving the phase at which the wave hit the telescope. They can then be delayed to restore the correct relative phase, mixed at leisure by a computer, and the Fourier transform used to create an image.
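The heart of that electronic trick is the correlator: find the geometric delay between two recorded signals, then re-align them so they combine in phase. Here is a toy sketch in Python/numpy — real correlators are vastly more sophisticated, and the sample counts and delay below are arbitrary assumptions:

```python
import numpy as np

# Toy correlator: the "sky" signal is broadband noise (as real radio
# sources are). It reaches antenna 2 a few samples later than antenna 1.
# Cross-correlation recovers that delay, so the two recorded streams
# can be shifted back into phase before being combined.
rng = np.random.default_rng(1)
n = 4096
sky = rng.standard_normal(n)        # the common wavefront
true_delay = 37                     # samples (arbitrary)

ant1 = sky + 0.5 * rng.standard_normal(n)               # receiver noise
ant2 = np.roll(sky, true_delay) + 0.5 * rng.standard_normal(n)

corr = np.correlate(ant2, ant1, mode="full")
lag = int(corr.argmax()) - (n - 1)
print(lag)  # 37: the geometric delay, recovered electronically
```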

The more telescopes, the more baselines, and the longer they are, the greater the resolution. This has been done in the UK by connecting seven large radio telescopes by fibre optic cable to create an interferometer, e-Merlin, with 21 baselines, the longest of which is 217 kilometers. Wow! This has been connected with radio telescopes across Europe to make an even bigger device. The US radio telescopes have been connected into the Very Long Baseline Array, from Hawaii to the mainland US to the Virgin Islands, to create maximum baselines of thousands of kilometers. The European and US devices can be connected for even bigger baselines, and even linked to space radio telescopes to give baselines wider than the planet itself. Truly awesome resolution results.


Image: e-Merlin is an array of seven radio telescopes, spanning 217 km, connected by a new optical fibre network to Jodrell Bank Observatory. Credit: Jodrell Bank Observatory/University of Manchester.

Where does all this leave optical/infrared interferometry, I hear you say? Well, a long way behind, so far. Optical/infrared light is at too high a frequency to convert into stable equivalent electrical proxies as with radio, and current optical cable, good as it is, loses and smears too much of the transmitted signal (attenuation and dispersion) to be of any use for carrying it over distance as in a radio interferometer (although optical cables are rapidly improving in quality). There are optical/infrared interferometers, involving the Keck telescopes and the Very Large Telescope in Chile. There is also the CHARA (Center for High Angular Resolution Astronomy) array of Georgia State University and the Australian SUSI (Sydney University Stellar Interferometer). Amongst others.

These arrays transmit the actual telescope light itself before mixing it, a supercomputer providing the accuracy needed to keep the light in the phase it had at the aperture. They all use multiple vacuum-filled tunnels with complex mirror arrays, “the optical train,” to reflect the light to the beam mixer. It works, but at a cost. Even over the hundred metres or so between telescopes, up to 95% of the light is lost, meaning only small but bright targets such as the star Betelgeuse can be observed. Fantastic angular resolution, though. The star is 500 light years away, yet CHARA (just six one-metre telescopes) can resolve it into a disc! No single telescope, even one of the new super-large ELTs currently being built, could get close! This gives some idea of the sheer power of interferometry. Imagine a device in space with no nasty wobbly atmosphere to spoil things.
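A quick sanity check on the Betelgeuse claim, with rough numbers: CHARA’s longest baseline is about 330 metres, Betelgeuse subtends roughly 40-50 milliarcseconds, and the near-infrared observing wavelength is an assumption on my part.

```python
import math

wavelength = 1.6e-6   # metres, near-infrared (assumed observing band)
baseline = 330.0      # metres, roughly CHARA's longest baseline

theta_mas = (wavelength / baseline) * (180 / math.pi) * 3600 * 1000
print(round(theta_mas, 2))   # ~1 milliarcsecond resolution element

betelgeuse_mas = 42          # approximate angular diameter
print(round(betelgeuse_mas / theta_mas))  # ~40 resolution elements
# across the disc -- easily resolved, as the text says
```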

But the Ireland and Monnier paper represents hope and shows the way to the future of astronomical imaging. What the researchers are advocating is heterodyne interferometry, an old-fashioned idea, again like interferometry itself. Basically it involves generating a local signal as near in frequency as possible to the light entering the telescope, and then mixing it with the incoming light to produce a lower-frequency “intermediate frequency” signal. This signal still holds the phase information of the incoming light, but in a stable electrical proxy that can be combined electronically with the corresponding signals from the other telescopes in the interferometer to create an image. This avoids most of the complex, light-losing “optical train.”
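A toy numerical sketch may make the heterodyne step concrete: multiply an incoming wave by a local oscillator slightly offset in frequency, low-pass filter the product, and the surviving intermediate-frequency signal still carries the original phase. All the frequencies here are arbitrary stand-ins, far below optical, chosen only to make the arithmetic visible:

```python
import numpy as np

fs = 1e9                          # sample rate, Hz (toy value)
t = np.arange(0, 2e-5, 1 / fs)    # 20 microseconds of data
f_sig, f_lo = 100e6, 99e6         # signal and local oscillator -> 1 MHz IF
phase_in = 0.7                    # the phase we need to preserve, radians

# Mixing = multiplication; the product contains sum and difference
# frequencies, and the difference (the IF) keeps the input phase.
mixed = np.cos(2 * np.pi * f_sig * t + phase_in) * np.cos(2 * np.pi * f_lo * t)

# Crude low-pass filter: zero all spectral content above 10 MHz.
spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum[freqs > 10e6] = 0

k = int(np.argmax(np.abs(spectrum[1:]))) + 1
print(freqs[k])                                 # 1000000.0: the IF survives
print(round(float(np.angle(spectrum[k])), 2))   # ~0.7: phase preserved
```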

Unfortunately, the technique cannot be used for the beam combiner itself or the all-important delay lines, whereby light from different telescopes is diverted so it all arrives at the combiner in phase to be mixed constructively. Both these processes still lose large amounts of light, although much less than before. The interferometer also needs a supercomputer to combine the source light accurately. Hence the delay till now. The light loss can be compensated for with lots of big telescopes in the interferometer — 4-8 meters is the ideal, as suggested in the paper. This allows baselines of up to 7 km, with the associated massive increase in angular resolution. Bear in mind that a few hundred metres was the previous best — you see the extent of the improvement.

The problem is obvious, though. Lots of big telescopes and a supercomputer add up to a lot of money. A billion dollars or more. It’s a big step in the right direction nonetheless. Extend the heterodyne concept to remove the beam combiner and delay line losses and the loss of light approaches that of a radio interferometer. Imagine what could be seen. If the concept ends up in space, then one day we will actually “see” exoplanets. This is another reason why “formation flying” for a telescope/star-shade combination (as explored in various NASA concepts) is so important, as it is a crucial element of a future space interferometer. The Planet Formation Imager discussed in the Ireland and Monnier paper is seen as a joint international effort to manage costs. The best viewing would be in Antarctica. One for the future, but a clearer and more positive future.


What Io Can Teach Us

by Paul Gilster on August 14, 2014

Io doesn’t come into play very much on Centauri Dreams, probably because of the high astrobiological interest in the other Galilean satellites of Jupiter — Europa, Callisto and Ganymede — each of which may have an internal ocean and one, Europa, a surface that occasionally releases material from below. Io seems like a volcanic hell, as indeed it is, but we saw yesterday that its intense geological activity produces interactions with Jupiter’s powerful magnetosphere, leading to radio emissions that might be a marker for exomoon detection.

The exoplanet hunt has diverse tools to work with, from the transits that result from chance planetary alignments to radial velocity methods that measure the motion of a host star in response to objects around it. Neither is as effective for planets in the outer parts of a solar system as we’d like, so we turn to direct imaging for large outer objects and sometimes luck out with gravitational microlensing, finding a planetary signature in the lensing of light from a more distant star. All these methods work together in fleshing out our knowledge of exoplanets, and it will be helpful indeed if electromagnetic detection gives us a second way, beyond transits, of looking for an exomoon.

That first exomoon detection will be a major event. But in studying Io’s interactions with Jupiter, the paper from Zdzislaw Musielak’s team at the University of Texas at Arlington (see yesterday’s post) leaves open the question of just how common such moons are, and of course we don’t know the answer, other than to say that we do have the example of Titan as a large moon with a thick, stable atmosphere. Clearly Io rewards study in and of itself, and its recent intense activity reminds us what can happen to an object this close to a gas giant’s enormous gravity well. With Musielak’s work in mind, then, let’s have a run at recent Io findings.

What we learn from Imke de Pater (UC-Berkeley) and colleagues is that a year ago, Io went through a two-week period of massive volcanic eruptions sending material hundreds of kilometers above the surface, a pattern that may be more common than we once thought. Io is small enough (about 3700 kilometers across) that its weak gravity lets erupted material climb high above the surface, and in the case of the most recent events, pelt hundreds of square kilometers with molten slag.

Never a quiet place, Io is usually home to a large outburst every few years, but the scale here was surprising. Says de Pater colleague Ashley Davies (JPL/Caltech):

“These new events are in a relatively rare class of eruptions on Io because of their size and astonishingly high thermal emission. The amount of energy being emitted by these eruptions implies lava fountains gushing out of fissures at a very large volume per second, forming lava flows that quickly spread over the surface of Io.”


Image: Images of Io obtained at different infrared wavelengths (in microns, μm, or millionths of a meter) with the W. M. Keck Observatory’s 10-meter Keck II telescope on Aug. 15, 2013 (a-c) and the Gemini North telescope on Aug. 29, 2013 (d). The bar on the right of each image indicates the intensity of the infrared emission. Note that emissions from the large volcanic outbursts on Aug. 15 at Rarog and Heno Paterae have substantially faded by Aug. 29. A second bright spot is visible to the north of the Rarog and Heno eruptions in c and to the west of the outburst in d. This hot spot was identified as Loki Patera, a lava lake that appeared to be particularly active at the same time. Image by Imke de Pater and Katherine de Kleer, UC Berkeley.

De Pater discovered the first two outbursts on August 15, 2013, with the brightest at a caldera called Rarog Patera. The other occurred at the Heno Patera caldera (a caldera is not so much a crater as a collapse of the surface after a volcanic eruption, leaving a large, bowl-shaped depression with surrounding scarps). According to observations with the Keck II telescope in Hawaii, the Rarog Patera event produced a lava flow 9 meters thick covering 80 square kilometers; the Heno Patera flow covered almost 200 square kilometers.

But the main event was on August 29, revealed in observations led by Berkeley grad student Katherine de Kleer at the Gemini North telescope on Mauna Kea and the nearby Infrared Telescope Facility (IRTF). The actual thermal source of the eruption had an area of 50 square kilometers in an event apparently dominated by lava fountains. Usefully, the de Pater team tracked the third outburst for almost two weeks, providing data that will help us understand how such volcanic activity influences Io’s atmosphere. That, in turn, will give us insights into how eruptions support the torus of ionized gas that circles Jupiter in the region of Io’s orbit.


Image: The Aug. 29, 2013, outburst on Io was among the largest ever observed on the most volcanically active body in the solar system. Infrared image taken by Gemini North telescope, courtesy of Katherine de Kleer, UC Berkeley.

Here again we have helpful synergies between different tools, in this case the Japanese HISAKI (SPRINT-A) spacecraft, whose own observations of the Io plasma torus supplement what de Kleer observed in Hawaii. The correlation of the data sets may provide new insights into the process and, if Musielak’s methods of exomoon detection pay off through future radio observations, may help us interpret those results. The gravitational tugs of Jupiter, Europa and Ganymede feed Io’s volcanic activity, surely a scenario that is repeated around gas giants elsewhere. If so, the Io ‘laboratory’ will turn out to have surprising exomoon implications.

Three papers came out of this work, the first being de Pater et al., “Two new, rare, high-effusion outburst eruptions at Rarog and Heno Paterae on Io,” published online in Icarus 26 July 2014 (abstract). We also have de Kleer et al., “Near-infrared monitoring of Io and detection of a violent outburst on 29 August 2013,” published online in Icarus 24 June, 2014 (abstract) and de Pater, “Global near-IR maps from Gemini-N and Keck in 2010, with a special focus on Janus Patera and Kanehekili Fluctus,” published online in Icarus 10 July 2014 (abstract). This UC-Berkeley news release is also helpful.


Radio Emissions: An Exomoon Detection Technique?

by Paul Gilster on August 13, 2014

Here’s an interesting notion: Put future radio telescopes like the Long Wavelength Array, now under construction in the American southwest, to work looking for exomoons. The rationale is straightforward and I’ll examine it in a minute, but a new paper advocating the idea homes in on two planets of unusual interest from the exomoon angle. Gliese 876b and Epsilon Eridani b are both nearby (15 light years and 10.5 light years respectively), both are gas giants, and each should offer a recognizable electromagnetic signature if it has a large moon.

The study in question comes out of the University of Texas at Arlington, where a research group led by Zdzislaw Musielak is looking at how large moons interact with a gas giant’s magnetosphere. The obvious local analogue is Io, the innermost of Jupiter’s Galilean moons, whose upper atmosphere (presumably created by the active volcanic eruptions on the surface) encounters the charged plasma of the magnetosphere, creating currents and radio emissions.

The researchers call these “Io-controlled decametric emissions,” and they could be the key to an exomoon detection if we can find something similar around a nearby gas giant like those named above. Io’s atmosphere may be volcanic in origin, but we know from the example of Titan that moons in greatly different configurations can also have an atmosphere. The interactions with the magnetosphere are what matter. “We said, ‘What if this mechanism happens outside of our solar system?’” says Musielak. “Then, we did the calculations and they show that actually there are some star systems that if they have moons, it could be discovered in this way.”


Image: Schematic of a plasma torus around an exoplanet, which is created by the ions injected from an exomoon’s ionosphere into the planet’s magnetosphere. Credit: UT Arlington.

We’ve often speculated about the habitability of a moon orbiting a gas giant, but neither of the planets named above, Gliese 876b and Epsilon Eridani b, is within the habitable zone of its respective star. The former has a semimajor axis of 0.208 AU, beyond the HZ outer edge for this M4V-class red dwarf. Epsilon Eridani b is likewise a gas giant (about 1.5 times Jupiter mass) with an orbital distance of approximately 3.4 AU, again outside the K2V primary’s habitable zone. So early work on these two planets would not be related to the habitability question but would serve as a useful test of our ability to detect exomoons using electromagnetic interactions.

I wrote David Kipping (Harvard-Smithsonian Center for Astrophysics) this morning to ask for his reaction to the electromagnetic approach to exomoon detection. Kipping heads The Hunt for Exomoons with Kepler, which uses techniques involving planetary transits and the signature of exomoons within. He called this work “…an inventive idea which could discover exomoons not detectable with any other technique,” and went on to point out just where electromagnetic methods might be the most effective.

Magnetospheres are more extended for gas giants on wide orbits, like Jupiter. So I would expect this technique to be most fruitful for cold Jupiters, whereas the transit technique is better suited for planets at the habitable-zone distance or closer. The complementary nature of these detection techniques will allow us to find moons around planets at a range of orbital separations.

Adding more tools to our inventory can only help as we proceed in our search for the first exomoon. Let me quote Kipping’s further thoughts on the method:

In order to make a detection with this method, the moon must possess an ionosphere and so some kind of atmosphere. Io has a tenuous atmosphere because of intense tidal friction leading to volcanism and subsequent sulphur dioxide outgassing, but we don’t really know how common such a scenario is. Alternatively, a moon may be able to retain an atmosphere much like the Earth does, but in the Solar System only Titan satisfies this criteria.

The host planet must have a strong magnetosphere. For Jupiter-sized planets, this is reasonable but Neptunes and mini-Neptunes dominate the planet census and if such objects have moons, their magnetospheres are unlikely to be strong enough to produce an observable radio signal via interaction with a moon’s ionosphere.

For these reasons, an absence of a radio signal would not necessarily mean that there were no moons, unlike the transit technique which can make more definitive statements.

The technique is most useful for nearby planetary systems, within a few parsecs, but then again these are likely the most interesting systems to explore!

Unlike the transit method, this technique does not require the orbital inclination of the planetary system to be nearly aligned to our line of sight – a significant advantage.

The best-case quoted sensitivities, 0.25 to 0.75 Earth radii, are comparable to the best-case sensitivities with the transit method.

This new exomoon work reminds me of Jonathan Nichols’ thinking on radio telescopes and exoplanet detection. An astronomer at the University of Leicester, Nichols proposed at a Royal Astronomical Society meeting in 2011 that a radio telescope like the Low Frequency Array (LOFAR), now under construction across France, Sweden, the Netherlands, Great Britain and Germany, could detect the radio waves generated by the aurorae of gas giants, emissions that we can detect from Jupiter and Saturn in our own system. Nichols believes we might use such methods to find planets up to 150 light years away. See Exoplanet Aurora as Detection Tool for more.

The paper is Noyola et al., “Detection of Exomoons Through Observation of Radio Emissions,” The Astrophysical Journal Vol. 791, No. 1 (2014), p. 25 (abstract). The paper on aurora detection is Nichols, “Magnetosphere-ionosphere coupling at Jupiter-like exoplanets with internal plasma sources: implications for detectability of auroral radio emissions,” Monthly Notices of the Royal Astronomical Society, published online July 1, 2011 (abstract / preprint).


‘Aragoscope’ Offers High Resolution Optics in Space

by Paul Gilster on August 12, 2014

Our recent discussions of the latest awards from the NASA Innovative Advanced Concepts office remind me that you can easily browse through the older NIAC awards online. But first a word about the organization’s history. NIAC operated as the NASA Institute for Advanced Concepts until 2007 under the capable leadership of Robert Cassanova, who shepherded through numerous studies of interest to the interstellar-minded, from James Bickford’s work on antimatter extraction in planetary magnetic fields to Geoffrey Landis’ study of advanced solar and laser lightsail concepts. The NIAC Funded Studies page is a gold mine of ideas.

NIAC has been the NASA Innovative Advanced Concepts office since 2011, when the program re-emerged under a modified name. NASA’s return to NIAC in whatever form was a welcome development. Remember that we had lost the Breakthrough Propulsion Physics project in 2002, and there was a time when the encouragement of ideas from outside the agency seemed moribund. Now we’re seeing opportunities for new space concepts that have ramifications for how NASA conducts operations, a welcome platform for experimentation and discovery.

Over the years I’ve written a number of times about Webster Cash’s ideas on ‘starshades,’ which came together under the New Worlds concept that has itself been through various levels of NASA funding. Starshades are large occulters that are used to block out the light of a central star to reveal the planets orbiting around it. Properly shaped, a starshade placed in front of a space telescope can overcome the diffraction of light (where light bends around the edges, reducing the occulter’s effectiveness). Cash’s New Worlds pages provide an overview of his starshade concepts and link to NASA study documents presenting the idea in detail.

With this background, it’s interesting to see that NIAC awarded Cash a new NIAC grant in June to study what he calls an Aragoscope, named after Dominique-François-Jean Arago, who carried out a key experiment demonstrating the wave-like nature of light in 1818. Rather than overcoming the diffraction of light, as the starshade is designed to do, the Aragoscope would take advantage of it, blocking the front of the telescope with a large disk but allowing the diffracted light to converge to form an image behind the disk. This ‘Arago Spot’ (also called the ‘Poisson Spot’) was what Arago had demonstrated, a bright point that appears at the center of a circular object’s shadow.


Image: Arago spot experiment. A point source illuminates a circular object, casting a shadow on a screen. At the shadow’s center a bright spot appears due to diffraction, contradicting the prediction of geometric optics. Credit: Wikimedia Commons.

How to put this effect to work in the design of a space telescope? Unlike a starshade, the Aragoscope would be circular in shape, an opaque disk whose diffracted light converges along the axis behind the disk, where a telescope collects it to provide extremely high resolution views of stellar objects. Cash sees the method as a way to dramatically lower the cost of large optical systems otherwise limited by diffraction effects. Rather than being defeated by such effects, his instrument would gather diffracted light and refocus it. From the NASA announcement last June:

The diagram in the summary chart shows a conventional telescope pointed at an opaque disk along an axis to a distant target. Rather than block the view, the disk boosts the resolution of the system with no loss of collecting area. This architecture, dubbed the “Aragoscope” in honor of the scientist who first detected the diffracted waves, can be used to achieve the diffraction limit based on the size of the low cost disk, rather than the high cost telescope mirror. One can envision affordable telescopes that could provide 7cm resolution of the ground from geosynchronous orbit or images of the sky with one thousand times the resolution of the Hubble Space Telescope.
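The “7cm resolution of the ground from geosynchronous orbit” figure is easy to sanity-check with the ordinary diffraction limit, treating the disk as the effective aperture. The wavelength and GEO altitude below are my assumptions, not numbers from the NIAC summary:

```python
# Back-of-envelope: what disk diameter gives 7 cm ground resolution
# from geosynchronous altitude, using the Rayleigh criterion?
wavelength = 550e-9          # metres, visible light (assumed)
altitude = 35_800e3          # metres, approximate GEO altitude
ground_resolution = 0.07     # metres, the quoted 7 cm

theta = ground_resolution / altitude        # required angular resolution
disk_diameter = 1.22 * wavelength / theta
print(round(disk_diameter))  # ~343 m: a disk a few hundred metres wide
```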


Image: A view of the Aragoscope, an opaque disk with associated telescope. Credit: Webster Cash.

An article in Popular Mechanics this July notes that a 1000-kilometer Aragoscope could study the event horizons of black holes in the X-ray spectrum, making for highly detailed views of interesting galactic nuclei like that of M87. But Cash also talks about picking out features like sunspots and plasma ejections on nearby stars, and says an early target, once a space-based version of the Aragoscope could be launched, would be Alpha Centauri A and B. It’s intriguing that the Aragoscope, in some ways, turns Cash’s earlier starshade concepts on their head, aiming for high resolution rather than high contrast. “I spent a lot of time understanding the physics of destroying diffractive waves very efficiently,” he told the magazine. “In the process, it’s not hard to see that you can use those diffractive waves to create images.”


Electric Sail Concept Moves Forward

by Paul Gilster on August 11, 2014

Just how we follow up on the investigations of New Horizons remains an open question. But we need to be thinking about how we can push past the outer planets to continue our study of the heliopause and the larger interstellar environment in which the Sun moves. I notice that Bruce Wiegmann, writing a precis of a mission concept called the Heliopause Electrostatic Rapid Transit System (HERTS), has drawn inspiration from the Heliophysics Decadal Survey, which cites the need for in situ measurements of the outer heliosphere and beyond.

It’s good to see a bit more momentum building for continuing the grand voyages of exploration exemplified by the Pioneers, the Voyagers and New Horizons. I often cite the Innovative Interstellar Explorer concept developed at Johns Hopkins (APL), which targets nearby interstellar space at a distance of over 200 AU, but whether we’re talking about IIE or Claudio Maccone’s FOCAL mission or any other design aimed at exiting the Solar System, the key problem is propulsion. Wiegmann’s team at Marshall Space Flight Center has been awarded a Phase I grant from NASA’s Innovative Advanced Concepts office to work on a dramatic solution.

The Heliopause Electrostatic Rapid Transit System involves a sail and thus propellant-less propulsion, but it’s not the conventional solar sail that uses the momentum provided by solar photons. The nomenclature is confusing, because the electric sail that HERTS is designed around would interact with the solar ‘wind,’ which is not made up of photons at all but is a stream of charged particles flowing constantly though erratically from the Sun at high velocity. A spacecraft riding the solar wind could, by some calculations, move between five and ten times faster than our best outer-system result so far, the 17.1 km/sec Voyager 1.

Wiegmann explains the principle at play in the precis:

The basic principle on which the HERTS operates is the exchange of momentum between an array of long electrically biased wires and the solar wind protons, which flow radially away from the sun at speeds ranging from 300 to 700 km/s. A high-voltage, positive bias on the wires, which are oriented normal to the solar wind flow, deflects the streaming protons, resulting in a reaction force on the wires—also directed radially away from the sun. Over periods of months, this small force can accelerate the spacecraft to enormous speeds—on the order of 100-150 km/s (~ 20 to 30 AU/year). The proposed HERTS can provide the unique ability to explore the Heliopause and the extreme outer solar system on timescales of less than a decade.
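Those velocity figures convert cleanly into trip times. A quick check, taking the heliopause at roughly 120 AU (close to where Voyager 1 crossed it) — an assumption adopted here for the sake of the arithmetic:

```python
AU_KM = 1.496e8     # kilometres per astronomical unit
YEAR_S = 3.156e7    # seconds per year

def au_per_year(v_km_s: float) -> float:
    """Convert a cruise speed in km/s into AU covered per year."""
    return v_km_s * YEAR_S / AU_KM

for v in (100, 150):
    print(v, "km/s =", round(au_per_year(v), 1), "AU/yr")
# 100 km/s = ~21.1 AU/yr and 150 km/s = ~31.6 AU/yr,
# matching the precis' 20-30 AU/year figure.

# Heliopause at ~120 AU, ignoring the months-long acceleration phase:
print(round(120 / au_per_year(120), 1), "years at a steady 120 km/s")
# ~4.7 years: comfortably "less than a decade"
```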

If you’re an old Centauri Dreams hand, you’ll recognize the HERTS sail as the offspring of Pekka Janhunen (Finnish Meteorological Institute), whose concept involves long tethers (perhaps reaching 20 kilometers in length) extended from the spacecraft, each maintaining a steady electric potential with the help of a solar-powered electron gun aboard the vehicle. As many as a hundred tethers — these are thinner than a human hair — could be deployed to achieve maximum effect. While the solar wind is far weaker than solar photon pressure, an electric sail with tethers in place is still efficient, according to Janhunen’s calculations, and can create an effective solar wind sail area of several square kilometers.


Image: A full-scale electric sail consists of a number (50-100) of long (e.g., 20 km), thin (e.g., 25 microns) conducting tethers (wires). The spacecraft contains a solar-powered electron gun (typical power a few hundred watts) which is used to keep the spacecraft and the wires in a high (typically 20 kV) positive potential. The electric field of the wires extends a few tens of metres into the surrounding solar wind plasma. Therefore the solar wind ions “see” the wires as rather thick, about 100 m wide obstacles. A technical concept exists for deploying (opening) the wires in a relatively simple way and guiding or “flying” the resulting spacecraft electrically. Credit: Artwork by Alexandre Szames. Caption via Pekka Janhunen/Kumpula Space Centre.

MSFC’s Advanced Concepts Office has been studying the feasibility of the Janhunen sail during the past year, finding that the electric sail can reach velocities three to four times greater than those of any realistic current technology, including solar (photon) sails and solar electric propulsion systems. Because we are dealing with a stream of particles flowing outward from the Sun (and because the electric sail can, like a solar sail, be ‘tacked’ for maneuvering), we are looking at a fast interplanetary propulsion system that avoids the deployment issues faced by large solar sails using photon momentum for their push. Deploying reels of tethers is, by comparison, straightforward.

Both photon-pushed sails and those riding the solar wind are limited by distance from the Sun, but the electric sail may have applications in future interstellar missions nonetheless. If we accelerate a (non-electric) sail by the use of a laser or microwave beam up to a small percentage of the speed of light, we could slow it down upon arrival by using the solar wind from the destination star, interacting with a tether system deployed as the spacecraft enters the new system. Having decelerated, the spacecraft could then use electric sail technology for exploration. Janhunen has explored the concept for electric sails (though not yet in detail), but an idea like this was also broached by Robert Zubrin and Dana Andrews for magnetic sail deceleration in 1990.

A key paper on electric sails is Janhunen and Sandroos, “Simulation study of solar wind push on a charged wire: solar wind electric sail propulsion,” Annales Geophysicae 25, (2007), pp. 755-767. For background, see Electric Solar Wind Sail Spacecraft Propulsion, which provides diagrams, a FAQ and various links to published papers.


NIAC: An Orbiting Rainbow

by Paul Gilster on August 8, 2014

Remember Robert Forward’s beamed sail concepts designed for travel to another star? Forward was the master of thinking big, addressing questions of physics which, once solved, left it up to the engineers to actually build the enormous infrastructure needed. Thus his crewed mission to Epsilon Eridani, which would demand not only a large power station in the inner system but a huge Fresnel lens out between the orbits of Saturn and Uranus. A 75,000 TW laser system was involved, a ‘staged’ sail for deceleration at the destination, and as for that lens, it would mass 560,000 tons and be a structure at least a third the diameter of the Moon.

In addition to being a highly regarded physicist, Forward was also a science fiction writer who detailed his beamed sail concepts in Rocheworld (Baen, 1990), which grew out of a previous version in Analog. I always thought of the Epsilon Eridani mission as his greatest attempt to confound human engineering, but later came to think that vast structures like his outer system lens might be possible. Rather than legions of space-suited workers building the thing, perhaps nanotechnology could come to the rescue, so that one technology builds another. It’s another reason to include the possibility of vast structures in our SETI thinking.

Creating Apertures in Space

All of this is inspired by looking at the recent announcement from the NASA Innovative Advanced Concepts (NIAC) program, which has named twelve projects for Phase I awards and five for Phase II. The latter receive up to $500,000 each over a two-year period, often growing out of ideas previously broached in a Phase I study and refining the work explored there. The Jet Propulsion Laboratory is leading one of the Phase II projects, ‘Orbiting Rainbows,’ which inevitably calls Forward to mind because it involves clouds of dust-like matter being shaped into the primary element of an ultra-large space aperture. In other words, a kind of lens.

Here’s a snippet from JPL’s Marco Quadrelli describing the principle:

Our objective is to investigate the conditions to manipulate and maintain the shape of an orbiting cloud of dust-like matter so that it can function as an ultra-lightweight surface with useful and adaptable electromagnetic characteristics, for instance, in the optical, RF, or microwave bands. Inspired by the light scattering and focusing properties of distributed optical assemblies in Nature, such as rainbows and aerosols, and by recent laboratory successes in optical trapping and manipulation, we propose a unique combination of space optics and autonomous robotic system technology, to enable a new vision of space system architecture with applications to ultra-lightweight space optics and, ultimately, in-situ space system fabrication.

Quadrelli points out that the cost of any optical system is always driven by the size of the primary aperture, which is why Forward’s vast lens seems so out of reach. Have a look at the image below for what Quadrelli proposes. Here we’re seeing a cloud of what he refers to as ‘dust-like objects’ that can be optically manipulated. The particles are shaped by light pressure into a surface that can be tuned to act coherently in specific frequencies. The idea seems to have grown out of recent work in the physics of optically manipulating small particles in the laboratory — think ‘optical tweezers’ of the kind that have made it possible to work at the nanotech scale.


Image: Creation of lenses out of clouds of tiny objects. Credit: Marco Quadrelli/JPL.
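The light pressure doing this shaping is minute, which is part of why it works only on grains this small. A toy estimate, with every number an assumption (a perfectly reflecting one-micron grain of modest density at Earth’s distance from the Sun):

```python
import math

SOLAR_FLUX = 1361.0    # W/m^2 at 1 AU
C = 2.998e8            # speed of light, m/s
radius = 0.5e-6        # metres: a one-micron grain
density = 2000.0       # kg/m^3 (assumed)

area = math.pi * radius**2
mass = density * (4 / 3) * math.pi * radius**3
force = 2 * SOLAR_FLUX * area / C   # factor 2 for perfect reflection
accel = force / mass

print(f"{force:.1e} N")      # ~7e-18 newtons
print(f"{accel:.1e} m/s^2")  # ~7e-3 m/s^2: tiny, but over hours it can
                             # herd micron-sized grains into a shape
```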

The Orbiting Rainbows study is all about the feasibility of making a single aperture out of a cloud of particles, but the implications are intriguing and Quadrelli names them in his short description. Multiple ‘aerosol lenses’ could be combined to create powerful tools for exoplanet research, all the while teaching us much about remote manipulation of clouds of matter in space. The goal is a completely reconfigurable, fault-tolerant lensing system of huge size and low cost. This would complement the next generation of extremely large telescopes on Earth and bring an entirely new approach to the operation of telescopes in space. Quadrelli’s description continues:

A cloud of highly reflective particles of micron size acting coherently in a specific electromagnetic band, just like an aerosol in suspension in the atmosphere, would reflect the Sun’s light much like a rainbow. The only difference with an atmospheric or industrial aerosol is the absence of the supporting fluid medium. This new concept is based on recent understandings in the physics of optical manipulation of small particles in the laboratory and the engineering of distributed ensembles of spacecraft swarms to shape an orbiting cloud of micron-sized objects.

I dwell on this JPL concept because we’ve recently been talking about the potential of tiny spacecraft operating in swarms, and considered them even in terms of propulsion, moving from Clifford Singer’s pellet ideas to Gerald Nordley’s intelligent ‘snowflake’ designs (see, for example, ‘Smart Pellets’ and Interstellar Propulsion, and the sequence of articles around it). These pellets would constitute a propellant stream for a departing spacecraft, but we’ve also seen Mason Peck’s ideas about satellites the size of a microchip that can be manipulated through interactions with the magnetic fields of planets, and perhaps accelerated to interstellar velocities (see Sprites: A Chip-Sized Spacecraft Solution).

So miniaturization, swarm operations and propulsion through natural interactions (Peck’s Sprites take advantage of the Lorentz force that affects charged particles moving through a magnetic field) all factor into evolving thinking about deep space. Nanotechnology enables new operations at both ends of the scale spectrum, perhaps constructing the vast structures of the science fictional imagination (Dyson spheres come to mind) and potentially enabling tiny spacecraft whose low mass makes getting them up to speed a much easier matter than heavy rockets.

Back to Quadrelli. His reflective particles aren’t ‘intelligent,’ but the principle of shaping them into apertures using autonomous robotic technology shares some of the same premises. We’ll need to see, of course, just how this shaping works, what kinds of dust-like matter it can manipulate, and just how adaptable the resultant clouds are to changes in configuration, but that is what a Phase II study is all about. The point is, when we need something huge, like Clifford Singer’s 10^5 kilometer particle accelerator, nature can sometimes offer a better solution, like the acceleration of Mason Peck’s Sprites within Jupiter’s powerful magnetic fields.

Finding ways to assemble huge lenses is a project cut from the same cloth. And at the Phase I level at NIAC, several studies cry out for later analysis here, including Webster Cash’s Aragoscope, which the University of Colorado scientist hopes will ‘shatter the cost barrier for large, diffraction-limited optics,’ producing the possibility of telescopes with a thousand times the resolution of the Hubble instrument. Also impressive is the Heliopause Electrostatic Rapid Transit System from Marshall Space Flight Center (Janhunen’s electric sails!) and Justin Atchison’s work at Johns Hopkins on Swarm Flyby Gravimetry, which again attracts me by its use of miniaturization and swarm technologies. More on these Phase I NIAC studies next week.


Rosetta: Arrival at a Comet

by Paul Gilster on August 7, 2014

How do you close on a comet? Very carefully, as the Rosetta spacecraft has periodically reminded us ever since late January, when it was awakened from hibernation and its various instruments reactivated in preparation for operations at comet 67P/Churyumov–Gerasimenko. The spacecraft carried out ten orbital correction maneuvers between May and early August as its velocity with respect to the comet was reduced from 775 meters per second down to 1 m/s, which is about as fast as I was moving moments ago on my just completed morning walk.

What a mission this is. When I wrote about the January de-hibernation procedures (see Waking Up Rosetta), I focused on two things of particular interest to the interstellar-minded. Rosetta’s Philae lander will attempt a landing on the comet this November even as the primary spacecraft, now orbiting 67P/Churyumov–Gerasimenko, continues its operations. We’re going to see the landscape of a comet as if we were standing on it, giving Hollywood special effects people legions of new ideas and scientists a chance to sample an ancient piece of the Solar System.

You’ll want to bookmark the Rosetta Blog to keep up. But keep in mind the other piece of the puzzle for future space operations. Rosetta will be looking closely at the interactions between the solar wind — that stream of charged particles constantly flowing from the Sun — and cometary gases. We’ll learn a great deal about the composition of the particles in the solar wind and probably get new insights into solar storms.

Remember that this ‘solar wind’ isn’t what drives the typical solar sail, which gets its kick from the momentum imparted by solar photons. But there are other kinds of sail. The Finnish researcher Pekka Janhunen has discussed electric sail possibilities, craft that might use the charged particles of the solar wind instead of photons to reach speeds of 100 kilometers per second (by contrast, Voyager 1 is moving at about 17 km/s). Rosetta results may help us understand how feasible this concept is.


Image: Comet 67P/Churyumov-Gerasimenko by Rosetta’s OSIRIS narrow-angle camera on 3 August from a distance of 285 km. The image resolution is 5.3 metres/pixel. Credit & Copyright: ESA / Rosetta / MPS for OSIRIS Team MPS / UPD / LAM /IAA / SSO / INTA / UPM / DASP / IDA.

That image is a stunner, no? Now that Rosetta has rendezvoused with 67P/Churyumov-Gerasimenko, we can think back not only to the orbital correction maneuvers but also to the three gravity-assist flybys of Earth and one at Mars, a trajectory that produced data about asteroids Steins and Lutetia along the way. It’s been a long haul since 2004, and you can see why Jean-Jacques Dordain, the European Space Agency’s director general, is delighted:

“After ten years, five months and four days travelling towards our destination, looping around the Sun five times and clocking up 6.4 billion kilometres, we are delighted to announce finally ‘we are here.’ Europe’s Rosetta is now the first spacecraft in history to rendezvous with a comet, a major highlight in exploring our origins. Discoveries can start.”

Getting the hang of operations around the comet is going to be a fascinating process to watch. Right now the spacecraft is approximately 100 kilometers from the comet’s surface, and over the course of the coming six weeks, while close-up studies from its instrument suite proceed, it will nudge closer, down to 50 kilometers, and eventually closer still depending on comet activity. Remember that images from the OSIRIS camera showed a dramatic variation in activity between late April and early June as the comet’s gas and dust envelope — its ‘coma’ — brightened and then dimmed within the course of six weeks. These are quirky, lively objects, and we now proceed to teach ourselves the art of flying a spacecraft near them for extended periods.

This ESA news release tells us that the plan is to identify five landing sites by late August, with the primary site being chosen in mid-September. The landing is currently planned for November 11, after which we’ll have both lander and orbiter in operation at the comet until its closest solar approach in August of 2015. Comets are ancient pieces of the Solar System that may well have delivered the bulk of Earth’s oceans. Now we’ll see up close what happens to a comet as it approaches the Sun. Congratulations and Champagne are due all around for the planners, designers, builders and controllers of this extraordinary mission. Onward to the surface.


Keeping a Planet Alive

by Paul Gilster on August 6, 2014

I’ve made no secret of my interest in red dwarf stars as possible hosts of life-bearing planets, and this is partially because these long-lived stars excite visions of civilizations that could have a stable environment for many billions of years. I admit it, the interest is science fictional, growing out of my imagination working on the possibility of life under the light of a class of stars that out-live all others. What might emerge in such settings, in places where tidal lock could keep the planet’s star fixed at one point in the sky and all shadows would be permanent?

Some of this interest grows out of an early reading of Olaf Stapledon’s 1937 novel Star Maker, in which the author describes life in the form of intelligent plants that live on such a tidally locked world. For that matter, Larry Niven developed an alien race called the Chirpsithra, natives of a red dwarf who have a yen for good drink and socializing with other species (you can sample Niven’s lively tales of these creatures in The Draco Tavern, a 2006 title from Tor). I tend to imagine red dwarf planet dwellers as something more like philosophers and sages than intelligent carrots or Niven’s incredibly tall barflies.

But no matter. A new paper from Christa Van Laerhoven and Rory Barnes (University of Washington) and Richard Greenberg (University of Arizona) has me absorbed in matters such as how close an Earth-class planet would need to be to stay habitable around a red dwarf. There’s no one answer because of the range of stellar temperatures between different types of red dwarf, but Van Laerhoven and company are looking at a star with a mass of 0.1 solar masses and a luminosity 1.15 x 10^-3 times that of the Sun. Here it turns out that to receive the same incident flux as the Earth, the planet would need to orbit at 0.034 AU.
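Both numbers follow from textbook relations, and it’s worth seeing how. The inverse-square law puts the Earth-equivalent flux distance at sqrt(L) AU, and Kepler’s third law then gives the orbital period:

```python
import math

L_star = 1.15e-3   # luminosity, solar units (value used in the paper)
M_star = 0.1       # mass, solar units

# Same flux as Earth receives when d = sqrt(L/Lsun) AU:
d_au = math.sqrt(L_star)
print(round(d_au, 3))              # 0.034 AU, the quoted distance

# Kepler's third law (P in years, a in AU, M in solar masses):
P_days = math.sqrt(d_au**3 / M_star) * 365.25
print(round(P_days, 1))            # ~7.2 days: a very tight orbit
```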

Now Mercury is about 0.38 AU from the Sun, which gives us a feel for how much cooler such a star must be. We can also note that because of their long lifetimes, many red dwarfs are much older than our Solar System, on the order of twice as old in some cases, and because a transiting Earth-class planet around such a star should be detectable (the transit depth would be huge), it’s possible that the first Earth-like habitable planet we find will be billions of years older than our own. Thus my visions of ancient races of philosophers under a darkened sky.
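As an aside, just how huge is that transit depth? A rough comparison, assuming a 0.1 solar-mass dwarf has a radius near 0.12 solar radii (a typical value for such stars, and an assumption here):

```python
R_SUN_KM = 696_000
R_EARTH_KM = 6_371

# Transit depth = (planet radius / star radius)^2
for r_star, label in ((0.12, "0.1 Msun red dwarf"), (1.0, "Sun-like star")):
    depth = (R_EARTH_KM / (r_star * R_SUN_KM)) ** 2
    print(f"{label}: {depth:.3%}")
# ~0.58% for the red dwarf versus ~0.008% for the Sun:
# roughly seventy times deeper, hence far easier to detect.
```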

But maybe not. The Van Laerhoven paper makes the case that planets like these are going to cool internally as they age, enough so to cause problems. Plate tectonics is driven by heat, and we’re learning how necessary it is to the carbon cycle that allows a planet to avoid greenhouse overheating. Here’s the issue (internal citations omitted):

On an Earth-like body, long before reaching twice Earth’s age, plate tectonics would probably have turned off as the planet cooled, primarily because solidification of the core would terminate the release of latent heat that drives mantle convection. While plate tectonics may not be essential for life on all habitable planets, an equivalent tectonic process to drive geochemical exchange between the interior and the atmosphere is a likely requirement. The necessary amount of internal heat for such activity is uncertain (even the mechanisms that govern the onset and demise of terrestrial plate tectonics are still poorly understood and controversial), but it seems likely that a planet ~10 Gyr would have cooled too much…

So my race of philosophers and poets may have a much shorter time to thrive than the ten trillion years its dim star will live. What we need is an additional heat source, and the possibility in play in this paper is tidal heating, which the paper argues calls for either non-synchronous rotation or an eccentric orbit. Even these are a problem because tidal effects gradually synchronize the rotation and circularize the orbit, but we need them to help us on geological timescales.

The solution may be another planet in the same system, an outer companion that can keep the inner planet’s orbit from circularizing and thus maintain the tidal stresses that heat the planet. The computer models the researchers used show that this effect can keep the inner world habitable for billions of years even when other internal sources of heat have long faded. The paper argues that this effect, while studied here only in terms of two-planet systems, can also come into play in systems with a larger number of planets. From the paper:

…a reasonable fraction of terrestrial-scale planets in the HZ of very old, low-mass stars may be able to sustain life, even though without a satisfactory companion they would have cooled off by now. The requirements on the outer planet are not extremely stringent. For example, one could well imagine a Neptune-size outer planet a few times farther out than the rocky planet with an orbital eccentricity ~0.01-0.02. Not only would such an outer planet yield an appropriate amount of tidal heating to allow life, but the heating would be at a steady rate for at least tens of Gyr.


Image: For certain ancient planets orbiting smaller, older stars, the gravitational influence of an outer companion planet might generate enough energy through tidal heating to keep the closer-in world habitable even when its own internal fires burn out. But what would such a planet look like on its surface? Here, UW astronomer Rory Barnes provides a speculative illustration of a planet in the habitable zone of a red dwarf. “The star would appear about 10 times larger in the sky than our sun, and the crescent is not a moon but a nearby Saturn-sized planet that maintains the tidal heating,” Barnes notes. “The sky is mostly dark because cool stars don’t emit much blue light, so the atmosphere doesn’t scatter it.” Credit: Rory Barnes / University of Washington.

It could be, then, that a planet in this configuration — a terrestrial world like the Earth orbiting a 0.1 solar mass star with an outer companion — could experience enough tidal heating to make it the longest lived surface habitat in the galaxy. Is such a world, as the authors speculate, a possible home for humanity in the remote future, when our own Earth becomes uninhabitable? For that matter, given that such worlds seem made to order for ancient civilizations, shouldn’t we consider them as good SETI candidates? The paper recommends that any search for habitable Earth-scale planets should include a search for outer system companions.

The paper is Van Laerhoven et al., “Tides, planetary companions, and habitability: Habitability in the habitable zone of low-mass stars,” Monthly Notices of the Royal Astronomical Society, published online 12 May, 2014 (abstract / preprint).


What We Want to Hear

by Paul Gilster on August 5, 2014

“A man hears what he wants to hear and disregards the rest.”

So sang Simon & Garfunkel in their 1968 ballad “The Boxer.” Human nature seems to drive us to look for what we most want to happen. It’s a tendency, though, that people who write about science have to avoid because it can lead to seriously mistaken conclusions. In science itself there is a robust system of peer review to evaluate ideas. It’s not perfect but it’s a serious attempt to filter out our preconceptions. As with the flap about ‘faster than light’ neutrinos at CERN, we want as many qualified eyes as possible on the problem.

Journalists come in all stripes, but of late there has been a disheartening tendency to prove Paul Simon’s axiom. Not long ago we went through a spate of news stories to the effect that NASA was investigating warp drive. True enough — the Eagleworks team at Johnson Space Center, under the direction of Harold “Sonny” White, has been looking at warp drive possibilities for some time, though it could hardly be said to be a well-funded priority of the space agency. The budget for the Eagleworks effort has been small, and Eagleworks is only a small part of Dr. White’s job description, which focuses mostly on his acknowledged expertise in ion thrusters and related technologies.

ram-scoop-manchu

But many of the recent stories went well beyond the facts, implying that warp drive is a major project at NASA. Numerous sites featured images of what the purported ship would look like, and the implication was that NASA had already produced designs for the vessel, meaning that breakthroughs that would allow faster than light propulsion were in the works. Anyone involved with the breakthrough propulsion community can tell you that this is not the case despite the exultant nature of some of the Internet postings. Dr. White himself has always criticized media hype and has done everything he can to distance himself from it.

Science proceeds through careful experimentation and theorizing. We also need well-developed analysis of any experimental apparatus that produces anomalous results, so we can verify what is going on. If the apparatus has a flaw, those operating it may not realize that effects apparently generated by their theory are actually artifacts of the equipment. Such a result may be developing with regard to the White-Juday interferometer, the key tool in the JSC studies of warp drive physics.

It’s not making any headlines, but a new study from Jeff Lee and Gerald Cleaver (both affiliated with the Early Universe Cosmology & Strings Group, Baylor University) has appeared, bearing a title that makes the paper’s case: “The Inability of the White-Juday Warp Field Interferometer to Spectrally Resolve Spacetime Distortions.” You can find it here. The tool in question is the one being used at Eagleworks to study possible space-time distortions of the sort that might lead one day to a warp drive. About it, the paper has this to say:

The White-Juday Warp Field Interferometer has been demonstrated to be incapable of resolving the minute distortions of spacetime created by both 10⁶ V·m⁻¹ electric fields and a 1 kg mass.

And this:

Variations in temperature were shown to produce potentially detectable changes in the refractive index of air, which could result in occasional spurious interference fringes. Although a more rigorous model, which considers a time-changing index of refraction gradient along the interferometer arm, would result in a smaller lateral beam deviation, the purpose for which the WJWFI is intended has been shown to be unachievable.

And this:

…were any signals to appear in the White-Juday Warp Field Interferometer, they would most often be attributable to either electronic noise or the classical electrodynamics interaction between the ionized air between the plates and the electromagnetic radiation of the laser.

Note that last point: Noise within the experimental equipment may be what is being observed.
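A rough order-of-magnitude comparison shows why claims like these are plausible. The sketch below is my own illustration, not the Lee-Cleaver analysis; the arm length, laser wavelength, test-mass position and temperature drift are all assumed values:

```python
# Rough scale comparison: the optical path change from the spacetime
# curvature of a 1 kg test mass versus the path change from a small
# drift in the refractive index of air along one interferometer arm.
# All apparatus parameters here are assumed for illustration.
G, c = 6.674e-11, 2.998e8
L    = 0.5            # assumed arm length, m
lam  = 633e-9         # assumed HeNe laser wavelength, m

# Weak-field metric perturbation h ~ 2GM/(r c^2) near a 1 kg mass at 10 cm
M, r = 1.0, 0.1
h = 2 * G * M / (r * c**2)
dL_gravity = h * L                 # optical path change from the mass

# Air's dn/dT is roughly -1e-6 per kelvin at room conditions
dT = 0.1                           # assumed temperature drift, K
dL_thermal = 1e-6 * dT * L         # path change from the air alone

print(f"strain from 1 kg mass at 10 cm: h ~ {h:.1e}")
print(f"path shift, gravity:       {dL_gravity:.1e} m "
      f"(~{dL_gravity / lam:.0e} of a fringe)")
print(f"path shift, 0.1 K air drift: {dL_thermal:.1e} m "
      f"(~{dL_thermal / lam:.2f} of a fringe)")
```

On these assumptions the path shift from the 1 kg mass is some twenty orders of magnitude smaller than a single fringe, while a 0.1 K drift in the air along one arm already produces close to a tenth of a fringe, exactly the kind of spurious signal the paper warns about.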

What to make of this? Two things. First, we are trying to learn whether a particular experimental setup can do what its builders hope. Examining the apparatus is key to science, and it's something that both the experimenters and those reviewing the work take as a solemn responsibility. If the White-Juday interferometer doesn't work as originally expected, the finding gives the experimenters knowledge they can draw on in refining future experimental efforts in this area.

Second, this entirely natural process of studying the apparatus and working out the implications doesn't fare well when journalists jump to conclusions. It is entirely normal for ideas to be advanced in the give and take of conferences and scientific papers as researchers proceed with the dogged task of finding the truth. Journalism likes a good story, however, and the temptation to take tentative conclusions and make them sound definitive is hard to resist. Thus we get headlines like The Washington Post's This is the amazing design for NASA's Star Trek-style space ship, the IXS Enterprise.

Sonny White, who is the kindest of men, is a friend, and every time I've talked to him about these matters he has told me how much he deplores the hype that accompanies work in these areas. Sonny would like there to be a way to get to a warp drive and so would I, and he may well want to rebut the paper above with a new analysis of his own. So the work proceeds, but it should always do so with the understanding that ideas can be blown far out of proportion in the era of a global Internet, where the temptation is to go for the big story rather than the considered truth. The truth here is that we are in a process of learning what works and what does not.

Enter the Quantum Vacuum Thruster

So we need to calm down. Over the past few days there has been a flare-up about so-called quantum vacuum thrusters, following a story in Wired that made several bold statements, beginning with its headline: NASA Validates 'Impossible' Space Drive. It is true that Eagleworks tested a quantum vacuum thruster device, a 'propellant-less microwave thruster' developed by Guido Fetta. The work on what Fetta calls the 'Cannae Drive' was presented in late July at the 50th Joint Propulsion Conference in Cleveland. Independent of this effort, British scientist Roger Shawyer has been working on a similar thruster for years, one recently tested by a team in China.

I always appreciate it when people send me interesting links, and a number of readers passed the Wired story along. I can certainly understand their interest! A propellantless thruster would seem to violate the principle of conservation of momentum, a very big thing if true, and a drive that could do this would open the way to entirely new propulsion designs. There is no sense, however, in which NASA could be said to have 'validated' this device.
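One way to see why 'validated' is such a strong word here: even light, the limiting case of thrust without expelling reaction mass, pushes with only F = P/c when perfectly collimated. The quick sketch below uses round power levels for illustration, not figures from any of the tests under discussion:

```python
# Thrust from a perfectly collimated photon beam: F = P / c.
# This sets the momentum budget any 'propellantless' drive must beat.
c = 2.998e8  # speed of light, m/s

for watts in (10.0, 1_000.0, 1_000_000.0):
    thrust_uN = watts / c * 1e6  # thrust in micronewtons
    print(f"{watts:>12,.0f} W of light -> {thrust_uN:10.3f} uN of thrust")
```

Micronewtons of thrust from tens of watts, with no exhaust of any kind, would beat this photon benchmark by orders of magnitude, which is exactly why the conservation-of-momentum objection carries so much weight and why the claim demands extraordinary evidence.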

Gizmodo popped up with a headline of its own, making the bald statement: NASA: New “impossible” engine works, could change space travel forever. The article also tells us: “the fact is that the quantum vacuum plasma thruster works and scientists can’t explain why.”

But does it work? To know, we would need to study the experimental apparatus carefully to make sure no effects within it could mimic the minute signal that was perceived. In other words, we may be looking at equipment noise. My sources, which I consider highly reliable, tell me that a review of the equipment used in the JSC quantum vacuum thruster tests has been completed, but because it has not yet been released, I can't comment on it beyond saying that it will likewise upgrade our understanding of the kind of experiment that was run, and of how valid the results might be.

I would love to see the emergence of a genuine 'impulse' engine of the sort the media have written about, and I would rejoice in its implications. But we are only part way into a complicated story that has reached no conclusion. Fortunately, several media stories have begun to take a more probing look at these matters, such as A New Thruster Pushes Against Virtual Particles! …Or Is It a Lab Error? at io9. Mika McKinnon noted that the testing of the Cannae drive was reported in a conference paper and presentation, a setting where preliminary results on ongoing work are often announced. Quoting McKinnon:

As someone who has done my fair share of novel research that didn’t go exactly as expected, this conference abstract reads like the researchers were looking for extra eyeballs to figure out what about their testing rig might be flawed — not a grand announcement of a spectacular breakthrough. This has the potential to be cool, but at the moment, about the strongest thing that it’s scientifically responsible to say about these test results is that the researchers need to revise their testing setup.

We also have sound advice in an article called Don’t buy stock in impossible space drives just yet from Ars Technica, and an essay in Popular Science quoting Michael Baine, chief of engineering at Intuitive Machines:

“Whenever you get results that have extraordinary implications, you have to be cautious and somewhat skeptical that they can be repeated before you can accept them as a new theory,” Baine says. “Really, it’s got to come down to peer review and getting that done before you can get any kind of acceptance that something exotic is going on here.”

The Chinese team in Xi'an claims results that back the quantum vacuum thruster idea. Let's put their analysis under the same level of scrutiny. We have no choice in this: finding a hole in conservation of momentum would be so unexpected that any laboratory producing such results should expect close examination of its methodology. We can also expect peer-reviewed papers defending the findings. All of that would jibe with a scientific method aimed at ferreting out the truth. But getting ahead of ourselves when we're only part way into the story can only lead to confusion. As I said above, other shoes are about to drop on the quantum vacuum thruster story, and when they do, we'll look at them with equal interest.

I love “The Boxer.” And when I think about how some in the media react to advanced propulsion stories, its lyrics keep coming to mind. Here’s the complete first verse:

I am just a poor boy.
Though my story’s seldom told,
I have squandered my resistance
For a pocketful of mumbles,
Such are promises
All lies and jest
Still, a man hears what he wants to hear
And disregards the rest.

I’m a writer and journalist, not a scientist. But the researchers I talk to are taken aback by the wave of hype that has accompanied many recent advanced propulsion stories. Let’s hope a bit of caution seeps in, for scientific breakthroughs do not come easily. If we are on the edge of one, which I seriously doubt, the matter will resolve itself because more and more data will be accumulated, subjected to review, and put through rigorous testing. What we want to hear is not what’s important. The universe parcels out its answers according to what is true.
