Inconstant Moons: A New Lunar Origin Scenario

by Paul Gilster on January 13, 2017

A recent snowfall followed by warming temperatures produced a foggy night, one in which I was out for my usual walk and noticed a beautiful Moon trying to break through the fog layers. The scene was silvery, almost surreal, the kind of thing my wife would write a poem about. For my part, I was thinking about the effect of the Moon on life, and the theory that a large single moon might have an effect on our planet’s habitability. Perhaps its presence helps to keep Earth’s obliquity within tolerable bounds, allowing for a more stable climate.

But that assumes we’ve had a single moon all along, or at least since the ‘big whack’ the Earth sustained from a Mars-sized protoplanet that may have caused the Moon’s formation. Is it possible the Earth has had more than one moon in its past? It’s an intriguing question, as witness a new paper in Nature Geoscience from researchers at the Technion-Israel Institute of Technology and the Weizmann Institute of Science. The paper suggests the Moon we see today is the last of a series of moons that once orbited the Earth.

“Our model suggests that the ancient Earth once hosted a series of moons, each one formed from a different collision with the proto-Earth,” says co-author Assistant Prof. Perets (Technion). “It’s likely that such moonlets were later ejected, or collided with the Earth or with each other to form bigger moons.”

To explore alternatives to giant impact theories, the researchers have produced simulations of early Earth impacts, varying the values for the impactor’s velocity, mass, angle of impact and the initial rotation of the target. The process that emerges involves multiple impacts producing small moons, whose gravitational interactions eventually lead to collisions and mergers, building the Moon we see today. Here’s how the paper describes the process:

… we consider a multi-impact hypothesis for the Moon’s formation. In this scenario, the proto-Earth experiences a sequence of collisions by medium- to large-size bodies (0.01–0.1 M⊕). Small satellites form from the impact-generated disks and migrate outward controlled by tidal interactions, faster at first, and slower as the body retreats away from the proto-Earth. The slowing migration causes the satellites to enter their mutual Hill radii and eventually coalesce to form the final Moon. In this fashion, the Moon forms as a consequence of a variety of multiple impacts in contrast to a more precisely tuned single impact.
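The “mutual Hill radii” criterion in the passage above can be made concrete with a short calculation. Here is a minimal sketch in Python, using illustrative moonlet masses and orbital distances that are my assumptions rather than values from the paper:

```python
# Mutual Hill radius of two moonlets orbiting the proto-Earth.
# Migrating moonlets begin to interact strongly once their orbital
# separation shrinks to within a few mutual Hill radii.

M_EARTH = 5.97e24   # kg, proto-Earth mass (present-day value, assumed)
M_MOON = 7.35e22    # kg, lunar mass
R_EARTH = 6.371e6   # m, Earth radius

def mutual_hill_radius(m1, m2, a1, a2, m_central=M_EARTH):
    """R_H = ((m1 + m2) / (3 M))**(1/3) * (a1 + a2) / 2."""
    return ((m1 + m2) / (3.0 * m_central)) ** (1.0 / 3.0) * (a1 + a2) / 2.0

# Two 0.01-lunar-mass moonlets at 5 and 5.5 Earth radii (illustrative)
r_h = mutual_hill_radius(0.01 * M_MOON, 0.01 * M_MOON,
                         5 * R_EARTH, 5.5 * R_EARTH)
separation = 0.5 * R_EARTH
print(f"mutual Hill radius: {r_h / R_EARTH:.2f} Earth radii")
print(f"separation is {separation / r_h:.1f} mutual Hill radii")
```

With these illustrative numbers the pair sits only a couple of mutual Hill radii apart, the regime in which close encounters and mergers become likely.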

Here’s a graphic from the paper (listed as Figure 1) that shows the process at work:


Image (click to enlarge): a,b, Moon- to Mars-sized bodies impact the proto-Earth (a) forming a debris disk (b). c, Due to tidal interaction, accreted moonlets migrate outward. d,e, Moonlets reach distant orbits before the next collision (d) and the subsequent debris disk generation (e). As the moonlet–proto-Earth distance grows, the tidal acceleration slows and moonlets enter their mutual Hill radii. f, The moonlet interactions can eventually lead to moonlet loss or merger. The timescale between these stages is estimated from previous works.

The Hill radius mentioned above describes the gravitational sphere of influence of an object; in this case, meshing Hill radii can produce interactions that sometimes lead to mergers. The paper notes that in head-on impacts, the rotation of the planet is important because the disk needs angular momentum resulting from the rotation to stay stable. With increased rates of rotation, the angular momentum of the disks increases. Moons like ours emerge from many of the simulations:

We find that debris disks resulting from medium- to large-size impactors (0.01–0.1 M⊕) have sufficient angular momentum and mass to accrete a sub-lunar-size moonlet. We performed 1,000 Monte Carlo simulations of sequences of N = 10, 20 and 30 impacts each, to estimate the ability of multiple impacts to produce a Moon-like satellite. The impact parameters were drawn from distributions previously found in terrestrial formation dynamical studies. With perfect accretionary mergers, approximately half the simulations result in a moon mass that grows to its present value after ~20 impacts.
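The Monte Carlo experiment described in the quote can be caricatured in a few lines. In this toy version, the impactor-to-moonlet mass efficiency and the uniform impactor distribution are my guesses, tuned only so that roughly half of 20-impact sequences succeed, echoing the paper’s result:

```python
import random

# Toy Monte Carlo of the multi-impact scenario: each impact spawns a
# moonlet carrying a fixed fraction of the impactor's mass, and all
# moonlets merge perfectly into one growing moon.

M_MOON_EARTHS = 0.0123   # lunar mass in Earth masses
EFFICIENCY = 0.0112      # assumed impactor-mass -> moonlet-mass fraction

def run_sequence(n_impacts, rng):
    total = 0.0
    for _ in range(n_impacts):
        impactor = rng.uniform(0.01, 0.1)  # impactor mass, Earth masses
        total += EFFICIENCY * impactor     # perfect accretionary merger
    return total

rng = random.Random(42)
trials = 1000
successes = sum(run_sequence(20, rng) >= M_MOON_EARTHS
                for _ in range(trials))
rate = successes / trials
print(f"{rate:.0%} of 20-impact sequences reach a lunar mass")
```

The paper’s actual simulations track disk formation and tidal migration in detail; this sketch only shows how sequence-to-sequence scatter in impactor masses yields the roughly fifty-fifty outcome.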

If the multi-moon hypothesis proves credible, how would it affect the larger astrobiology question? In Ward and Brownlee’s Rare Earth (Copernicus, 2000), after a discussion of obliquity and the Moon’s effect on the Earth’s early history, the authors say this:

If the Earth’s formation could be replayed 100 times, how many times would it have such a large moon? If the great impactor had resulted in a retrograde orbit, it would have decayed. It has been suggested that this may have happened for Venus and may explain that planet’s slow rotation and lack of any moon. If the great impact had occurred at a later stage in Earth’s formation, the higher mass and gravity of the planet would not have allowed enough mass to be ejected to form a large moon. If the impact had occurred earlier, much of the debris would have been lost to space, and the resulting moon would have been too small to stabilize the obliquity of Earth’s spin axis. If the giant impact had not occurred at all, the Earth might have retained a much higher inventory of water, carbon and nitrogen, perhaps leading to a Runaway Greenhouse atmosphere.

The idea of a series of impacts eventually leading to a larger moon significantly muddies the waters here. It is true that in our Solar System, the inner planets are nearly devoid of moons, but we have no way of extending this situation to exoplanets without collecting the necessary data, which will begin with our first exomoon detections. Certainly if numerous collisions in an early planetary system can produce a large moon, as this paper argues, then we can expect similar collisional scenarios in many systems, making such moons a frequent outcome.

The paper is Rufu, Aharonson & Perets, “A Multiple Impact Hypothesis for Moon Formation,” published online by Nature Geoscience 9 January 2017 (abstract).



A New Look at ‘Exocomets’

by Paul Gilster on January 12, 2017

Moving groups are collections of stars that share a common origin, useful to us because we can study a group of stars that are all close to each other in age. Among these, the Beta Pictoris moving group is turning out to be quite productive for the study of planet formation. These are young stars, aged in the tens of millions of years (Beta Pictoris itself is between 20 and 26 million years old). Within the moving group, we’ve detected planets around 51 Eridani and Beta Pictoris, while infalling, star-grazing objects have been found around Beta Pictoris.

Evidence of comet activity around another of these stars was discussed at the American Astronomical Society meeting in Texas. The star HD 172555, 23 million years old and about 95 light years from Earth, shows the presence of the vaporized remnants of cometary nuclei, marking the third extrasolar system where such activity has been traced. All the stars involved are under 40 million years old, giving us a glimpse of the kind of activity that happens during the era when young terrestrial planets have begun to emerge in their systems.


Image: This illustration shows several comets speeding across a vast protoplanetary disk of gas and dust and heading straight for the youthful, central star. The comets will eventually plunge into the star and vaporize. The comets are too small to photograph, but their gaseous spectral “fingerprints” on the star’s light were detected by NASA’s Hubble Space Telescope. The gravitational influence of a suspected Jupiter-sized planet in the foreground may have catapulted the comets into the star. This star, called HD 172555, represents the third extrasolar system where astronomers have detected doomed, wayward comets. The star resides 95 light-years from Earth. Credit: NASA, ESA, and A. Feild and G. Bacon (STScI).

Carol Grady (Eureka Scientific/NASA GSFC) led the study reported on at the AAS. Her thoughts:

“Seeing these sun-grazing comets in our solar system and in three extrasolar systems means that this activity may be common in young star systems. This activity at its peak represents a star’s active teenage years. Watching these events gives us insight into what probably went on in the early days of our solar system, when comets were pelting the inner solar system bodies, including Earth. In fact, these star-grazing comets may make life possible, because they carry water and other life-forming elements, such as carbon, to terrestrial planets.”

The deflection of comets by the gravitational influence of a massive gas giant in an emerging planetary system is a vivid picture, one clarified by Grady and team’s 2015 work with Hubble’s Space Telescope Imaging Spectrograph (STIS) and Cosmic Origins Spectrograph (COS). The team’s spectrographic analysis, using Hubble data collected from two observing runs separated by six days, detected gaseous carbon and silicon in the light of HD 172555, moving across the face of the star at 160 kilometers per second.
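A 160 km/s flow is easy to pick out spectroscopically. Here’s a back-of-envelope Doppler estimate; the Si IV 1393.8 Å ultraviolet line is my example choice, since the reports don’t specify which lines were measured:

```python
C_KM_S = 299_792.458  # speed of light, km/s

def doppler_shift(rest_wavelength, velocity_km_s):
    """Non-relativistic Doppler shift: delta_lambda = lambda * v / c."""
    return rest_wavelength * velocity_km_s / C_KM_S

# Si IV resonance line at 1393.8 Angstroms, gas crossing at 160 km/s
shift = doppler_shift(1393.8, 160.0)
print(f"Doppler shift: {shift:.2f} Angstroms")
```

A shift of roughly three-quarters of an angstrom is comfortably within the resolving power of Hubble’s ultraviolet spectrographs.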

This work follows up a French study that first found exocomets transiting the same star in archival data from the HARPS spectrograph. That work detected signs of calcium. Grady and team have extended the work into the ultraviolet. They believe they are seeing gaseous debris left behind as the comets disintegrated, vaporized material still tracing large chunks of the original nuclei. Helpfully, the disk around HD 172555 is seen almost edge-on from Earth, offering Hubble a clear view of the widely dispersed activity.

“As transiting features go, this vaporized material is easy to see because it contains very large structures,” Grady said. “This is in marked contrast to trying to find a small, transiting exoplanet, where you’re looking for tiny dips in the star’s light.”

To confirm that they are seeing the disintegration of icy comets rather than rocky asteroids, Grady’s team hopes to use STIS again to search for oxygen and hydrogen, a composition that would add further weight to these conclusions.



Hubble Looks at Voyager’s Future

by Paul Gilster on January 11, 2017

Nothing built by humans has ever gotten as far from our planet as Voyager 1, which is now almost 21 billion kilometers from Earth. We’ve talked about the future of both Voyagers before in these pages — Voyager 1 passes within about 1.6 light years of the star Gliese 445 in some 40,000 years, its closest approach to a neighboring star. Voyager 2, which is now almost 17 billion kilometers out, closes to within 1.7 light years of Ross 248 in the same 40,000 years.
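The scale of these numbers is worth a quick sanity check. A sketch in Python; the ~17 km/s cruise speed for Voyager 1 is a commonly quoted figure, not taken from this article:

```python
# Put the Voyager distances in perspective: light-travel time from the
# spacecraft, and how long one light year takes at cruise speed.

C_KM_S = 299_792.458       # speed of light, km/s
KM_PER_LY = 9.4607e12      # kilometers per light year
SECONDS_PER_YEAR = 3.156e7

d_v1_km = 21e9  # Voyager 1 distance from Earth, km (from the article)
light_hours = d_v1_km / C_KM_S / 3600
print(f"Voyager 1 signals take about {light_hours:.1f} hours to reach Earth")

v1_speed = 17.0  # km/s, assumed cruise speed
years_per_ly = KM_PER_LY / v1_speed / SECONDS_PER_YEAR
print(f"One light year at {v1_speed} km/s: about {years_per_ly:,.0f} years")
```

At nearly 20 light-hours out, and needing well over ten millennia per light year, the 40,000-year figures for closest stellar approaches follow naturally, since neither spacecraft is aimed directly at its flyby star.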

My case for doing what Carl Sagan once discussed, giving each Voyager a final kick with its remaining hydrazine, so that those closing distances could be reduced, can be found in Voyager to a Star. It would be a symbolic and philosophical act rather than a scientific one, as both Voyagers are losing their ability to transmit data and will be silent in about a decade. And nothing can reduce those huge timeframes, which means that any such symbolic statement would be made to the future, a way of saying we are learning to be a starfaring species.


Image: In this artist’s conception, NASA’s Voyager 1 spacecraft has a bird’s-eye view of the solar system. The circles represent the orbits of the major outer planets: Jupiter, Saturn, Uranus, and Neptune. Launched in 1977, Voyager 1 visited the planets Jupiter and Saturn. The spacecraft is now 21 billion kilometers from Earth, making it the farthest and fastest-moving human-made object ever built. In fact, Voyager 1 is now zooming through interstellar space, the region between the stars that is filled with gas, dust, and material recycled from dying stars. Credit: NASA, ESA, and J. Zachary and S. Redfield (Wesleyan University); Artist’s Illustration Credit: NASA, ESA, and G. Bacon (STScI).

Meanwhile, we still have two viable spacecraft in the outer reaches of our Solar System, taking data on interstellar material, magnetic fields and cosmic ray hits and giving us a sense of what the local interstellar medium (LISM) is like. That’s crucial information, of course, for one day we hope to have not just a few but many spacecraft operating on the edge of interstellar space, and going beyond our system will require us to know the nature of the medium through which they move. On that score, the best book I know is Bruce Draine’s Physics of the Interstellar and Intergalactic Medium (Princeton, 2010). I enjoyed talking to Draine (Princeton University) at the latest Breakthrough Starshot sessions.

As you can imagine, learning more about the interstellar medium is a prerequisite if you’re thinking of pushing something up to 20 percent of lightspeed, as Breakthrough Starshot is, so the topic was a lively one at those meetings. At the recent American Astronomical Society meetings in Texas, we learned that astronomers have been using Hubble data to supplement what Voyager has been giving us, charting the hydrogen clouds and other elements of the LISM. Seth Redfield (Wesleyan University), who leads the study, offers this comment:

“This is a great opportunity to compare data from in situ measurements of the space environment by the Voyager spacecraft and telescopic measurements by Hubble. The Voyagers are sampling tiny regions as they plow through space at roughly 38,000 miles per hour [61,000 km/h]. But we have no idea if these small areas are typical or rare. The Hubble observations give us a broader view because the telescope is looking along a longer and wider path. So Hubble gives context to what each Voyager is passing through.”


Image: In this illustration, NASA’s Hubble Space Telescope is looking along the paths of NASA’s Voyager 1 and 2 spacecraft as they journey through the solar system and into interstellar space. Hubble is gazing at two sight lines (the twin cone-shaped features) along each spacecraft’s path. The telescope’s goal is to help astronomers map interstellar structure along each spacecraft’s star-bound route. Each sight line stretches several light-years to nearby stars. Credit: NASA, ESA, and Z. Levy (STScI).

The Hubble work makes it clear that in two thousand years or so, Voyager 2 will move out of the interstellar cloud that surrounds the Solar System before moving into another cloud, in which it will remain for as much as 90,000 years. The astronomers find slight variations in the abundances of the chemical elements in these clouds, which could chart a history involving different paths to formation. We do know that as the solar wind pushes against the interstellar medium, the heliosphere can be compressed, only to expand again when the Sun moves through lower-density matter. For more, see this Hubblesite news release.

We still haven’t built the next generation LISM explorer, one crafted from the outset as an interstellar data gatherer. As much as the Voyagers continue to give us, we have to remember that they were designed as planetary probes, their survival to this point being an amazing and unexpected gift, but one that has to be adapted to the medium through which the spacecraft move. A spacecraft fine-tuned for exploration beyond the heliopause is a goal that continues to see its share of study (more on this soon), but when it will fly remains an open question.



Upgraded Search for Alpha Centauri Planets

by Paul Gilster on January 10, 2017

Breakthrough Starshot, the research and engineering effort to lay the groundwork for the launch of nanocraft to Alpha Centauri within a generation, is now investing in an attempt to learn a great deal more about possible planets around these stars. We already know about Proxima b, the highly interesting world orbiting the red dwarf in the system, but we also have a K- and G-class star here, either of which might have planets of its own.


Image: The Alpha Centauri system. The combined light of Centauri A (G-class) and Centauri B (K-class) appears here as a single overwhelmingly bright ‘star.’ Proxima Centauri can be seen circled at bottom right. Credit: European Southern Observatory.

To learn more, Breakthrough Initiatives is working with the European Southern Observatory on modifications to the VISIR instrument (VLT Imager and Spectrometer for mid-Infrared) mounted at ESO’s Very Large Telescope (VLT). Observing in the infrared has advantages for detecting an exoplanet because the contrast between the light of the star and the light of the planet is diminished at these wavelengths, although the star is still millions of times brighter.
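The contrast argument can be quantified with the Planck function. Here is a rough estimate using generic Sun-like and Earth-like values; the temperatures and radii are my assumptions, not Alpha Centauri specifics:

```python
import math

# Thermal contrast between a Sun-like star and an Earth-like planet at
# 10 microns, via the Planck function B_lambda(T).

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
K_B = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Spectral radiance B_lambda(T)."""
    x = H * C / (wavelength_m * K_B * temp_k)
    return (2 * H * C**2 / wavelength_m**5) / math.expm1(x)

wavelength = 10e-6             # 10 microns, mid-infrared
t_star, t_planet = 5800.0, 300.0
r_ratio_sq = (6.957e8 / 6.371e6) ** 2   # (R_sun / R_earth)^2

contrast = planck(wavelength, t_star) / planck(wavelength, t_planet) * r_ratio_sq
print(f"star/planet flux ratio at 10 microns: {contrast:.2e}")
```

The ratio comes out at a few million, matching the “millions of times brighter” figure; at visible wavelengths the thermal contrast would be vastly worse, which is the case for observing in the mid-infrared.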

To surmount the problem, VISIR will be fitted out for adaptive optics. In addition, Kampf Telescope Optics of Munich will deliver a wavefront sensor and calibration device, while the University of Liège (Belgium) and Uppsala University (Sweden) will jointly develop a coronagraph that will mask the light of the star enough to reveal terrestrial planets.


Image: Paranal at sunset. This panoramic photograph captures the ESO Very Large Telescope (VLT) as twilight comes to Cerro Paranal. The enclosures of the VLT stand out in the picture as the telescopes in them are readied for the night. The VLT is the world’s most powerful advanced optical telescope, consisting of four Unit Telescopes with primary mirrors 8.2 metres in diameter and four movable 1.8-metre Auxiliary Telescopes (ATs), which can be seen in the left corner of the image. Credit: ESO.

According to the agreement signed by Breakthrough Initiatives executive director Pete Worden and European Southern Observatory director general Tim de Zeeuw, Breakthrough Initiatives will pay for a large part of the technology and development costs for the VISIR modifications. Meanwhile, the ESO will provide the necessary telescope time for a search program that will be conducted in 2019. The VISIR work, according to this ESO news release, should provide a proof of concept for the METIS instrument (Mid-infrared E-ELT Imager and Spectrograph), the third instrument on the upcoming European Extremely Large Telescope.



Garnet World: Stellar Composition & Planetary Outcomes

by Paul Gilster on January 9, 2017

What effect does the composition of a star have on the planets that form around it? Enough of one that we need to take it into account as we assess exoplanets in terms of astrobiology. So says a study that was presented at the American Astronomical Society meeting in Texas last week, looking at ninety specific stars identified by Kepler as having evidence of rocky planets.

We know about the composition of these stars because they are part of the 200,000-star dataset compiled by APOGEE, the Apache Point Observatory Galactic Evolution Experiment spectrograph mounted on the 2.5-meter Sloan Foundation telescope in New Mexico. APOGEE lets us examine the spectra of stellar atmospheres to identify the elements within them.

Modeling the formation of planets around these stars shows us the implications for astrobiology. Johana Teske (Carnegie Observatories) explains:

“Our study combines new observations of stars with new models of planetary interiors. We want to better understand the diversity of small, rocky exoplanet composition and structure — how likely are they to have plate tectonics or magnetic fields?”

At the AAS meeting, Teske described how the team of astronomers and geoscientists she is working with focused on Kepler 102 and Kepler 407, the former a star slightly less luminous than the Sun hosting five known planets, the latter hosting two planets orbiting a star of roughly the Sun’s mass. The APOGEE data show that in terms of chemical composition, Kepler 102 is similar to the Sun, while Kepler 407 is much richer in silicon.

Geophysicist Cayman Unterborn (Arizona State) ran computer simulations of planet formation incorporating the APOGEE data. The result:

“We took the star compositions found by APOGEE and modeled how the elements condensed into planets in our models. We found that the planet around Kepler 407, which we called ‘Janet,’ would likely be rich in the mineral garnet. The planet around Kepler 102, which we called ‘Olive,’ is probably rich in olivine, like Earth.”


Image: The picture shows what minerals are likely to occur at several different depths. Kepler 102 is Earth-like, dominated by olivine minerals, whereas Kepler 407 is dominated by garnet, so less likely to have plate tectonics. Credit: Robin Dienel, Carnegie DTM.

In Unterborn’s view, the difference is significant because garnet, a far stiffer mineral than olivine, flows more slowly, implying that a garnet planet would be unlikely to sustain long-term plate tectonics. Like the Earth, the planet around Kepler 102 could sustain tectonics, thought to be essential for life because recycling through geological processes like volcanism and ocean ridge formation regulates the composition of the atmosphere. Without such recycling, life would not necessarily have the chance to evolve.

Centauri Dreams’ take: The interplay of the two datasets — APOGEE and Kepler — is deeply productive, but we’re only at the beginning of the analysis. APOGEE’s 200,000 stars include others known to host small planets, so similar methods can now be put to work on the mineral content of these worlds. Those most Earth-like in their mineral content would rank higher on our list for further astrobiological study, helping us refine our targets for future observation.



NASA Selects Two Asteroid Missions

by Paul Gilster on January 6, 2017

Among the five finalists for NASA’s Discovery program, I had become attached to the Near Earth Object Camera (NEOCam), whose purpose was to expand our catalog greatly, with the potential, according to mission backers, of finding ten times more NEOs than we’ve found to date. We’ll see if NEOCam has a future (I’ve just learned that it has been given extended funding for an additional year by NASA), but for now NASA has announced two other Discovery-class missions, both of which have objectives among the asteroids.

Lucy, scheduled for a launch in the fall of 2021, is to be a robotic mission with the goal of exploring six of the Jupiter Trojan asteroids. The Trojans share Jupiter’s orbit while moving swarm-like around the planet’s L4 and L5 Lagrangian points. Over 6000 Jupiter Trojans are now known, but the population is thought to be vast, with as many as 1 million Trojans larger than 1 kilometer in diameter. As to their origin, there is much to learn. They may be captured asteroids or comets, or as this short NASA video explains, even Kuiper Belt Objects.

From the standpoint of Solar System evolution, the Trojans make for interesting science. They’re relics of the primordial material of the outer system, and I see that principal investigator Harold F. Levison cites the mission’s name in connection with another Lucy, the fossil fragments that have been so significant in our understanding of human development. We’ll see if this Lucy gets as much public attention as its namesake, which acquired its name from the Beatles song ‘Lucy in the Sky with Diamonds,’ played at the recovery site in Ethiopia. Breaking out the Sgt. Pepper album on this Lucy’s arrival at its first target seems a natural.

There are connections between the Lucy effort and the highly successful New Horizons mission, in the form of later versions of the familiar Ralph and LORRI science instruments, and several members of the Lucy mission team worked on New Horizons as well. Lucy also benefits from the contributions of several members of the OSIRIS-REx team, the latter a robotic spacecraft now on its way to rendezvous with asteroid Bennu.


Image: (Left) An artist’s conception of the Lucy spacecraft flying by the Trojan Eurybates – one of the six diverse and scientifically important Trojans to be studied. Trojans are fossils of planet formation and so will supply important clues to the earliest history of the solar system. (Right) Psyche, the first mission to the metal world 16 Psyche, will map the asteroid’s features, structure, composition, and magnetic field, and examine a landscape unlike anything explored before. Psyche will teach us about the hidden cores of the Earth, Mars, Mercury and Venus.
Credit: SwRI and SSL/Peter Rubin.

The other mission is Psyche, dedicated to a single asteroid of that name that appears to be the survivor of an early collision with another object that violently disrupted a protoplanet. About 210 kilometers in diameter, 16 Psyche is thought to be composed mostly of metallic iron and nickel, a composition similar to the Earth’s core. We seem to be looking at what would have become the core of a Mars-sized planet, now without its outer rocky layers. Thomas H. Prettyman, a co-investigator on the Psyche mission, explains:

“Psyche is thought to be the exposed core of a planetary embryo – perhaps like Vesta – that initially melted and later cooled to form a central metallic core, silicate mantle, and basaltic crust. The outer layers may have been removed in a violent collision, leaving the core exposed. Psyche will provide a close-up look at a planetary core, providing new insights into the evolution and inner workings of terrestrial planets.”

The robotic Psyche mission will launch in the fall of 2023, with arrival at 16 Psyche in 2030 after two gravity assists, one from an Earth flyby, the second from a flyby of Mars. Both missions have this in common: They target the development of the early Solar System, one by observing the remnants of formation among the Jupiter Trojans, the other by seeing the interior of what might have become a planet. Let’s hope for the kind of success for both that we saw in earlier Discovery missions like MESSENGER and Dawn. OSIRIS-REx, meanwhile, is on course for a 2018 rendezvous with asteroid Bennu, with sample return to follow.



Pinpointing a Fast Radio Burst

by Paul Gilster on January 5, 2017

Fast Radio Bursts (FRBs) are problematic. Since their discovery about a decade ago, the question has been their place of origin. These transient pulses last no more than milliseconds, yet they emit enormous energies, and we’ve had only the sketchiest idea where they came from. Now we learn, from an announcement at the 229th meeting of the American Astronomical Society in Grapevine, Texas, that a repeating source of FRBs has been spotted. That makes tracing the burst back to its source and characterizing it an ongoing proposition.

“We now know that this particular burst comes from a dwarf galaxy more than three billion light-years from Earth,” says Shami Chatterjee, of Cornell University. “That simple fact is a huge advance in our understanding of these events.” Papers on the work are being presented in Nature as well as Astrophysical Journal Letters.

Research behind the investigation of FRB 121102 has been mounted by an international team of astronomers, representing a spread of instruments that is important because a single-dish detection cannot target the object’s location. Because it repeats, this burst allows telescopes separated by large distances to home in on it and investigate it at various wavelengths.

The FRB was discovered at Arecibo, but observations with the Very Large Array in New Mexico have found a total of nine radio bursts from this source. Observations using the 8-meter Gemini North telescope on Maunakea have been able to pinpoint the host galaxy, which comes in at a redshift value that puts its distance at over 3 billion light years. Between Arecibo, the VLA and the European VLBI Network (EVN), astronomers have now been able to determine the position of the burst to a fraction of an arcsecond, more than 200 times more accurate than previous measurements. An ongoing, persistent source of weak radio emission is also found in the same region.
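The redshift-to-distance step can be sketched with a linear Hubble-law estimate. The host redshift z = 0.193 comes from the Tendulkar et al. paper cited below; the choice of H0 and the low-redshift approximation are mine, so treat the result as a sanity check rather than a cosmological measurement:

```python
# Naive redshift-to-distance estimate for the FRB 121102 host galaxy,
# ignoring the full cosmological integration needed at higher redshift.

C_KM_S = 299_792.458
H0 = 70.0               # km/s/Mpc, assumed Hubble constant
LY_PER_MPC = 3.2616e6   # light years per megaparsec

z = 0.193
d_comoving_mpc = C_KM_S * z / H0       # low-z Hubble-law approximation
d_lum_mpc = d_comoving_mpc * (1 + z)   # luminosity distance

d_gly = d_lum_mpc * LY_PER_MPC / 1e9
print(f"luminosity distance: {d_gly:.1f} billion light years")
```

Even this crude estimate lands just above 3 billion light years, consistent with the distance quoted above.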


Image: Gemini composite image of the field around FRB 121102 (indicated). The dwarf host galaxy was imaged, and spectroscopy performed, using the Gemini Multi-Object Spectrograph (GMOS) on the Gemini North telescope on Maunakea in Hawai’i. Data were obtained on October 24-25 and November 2, 2016. Credit: Gemini Observatory/AURA/NSF/NRC.

Remember that before this source turned up, only a small number of FRBs had been detected, most of them by the Parkes Radio Telescope in Australia. Now we are talking not only about locating the source in visible light but associating it with a radio source. Benito Marcote works at JIVE (Joint Institute for VLBI in Europe), whose network includes a 100-meter radio telescope in Effelsberg, Germany.

“With a bit of luck,” says Marcote, “we were able to detect bursts from FRB 121102 with the EVN and now we know that the origin of the bursts is right on top of the persistent radio source… We think that the bursts and the continuous source are likely to be either the same object or that they are somehow physically associated with each other.”

This FRB, at least, is now known incontrovertibly to have an origin far outside our own galaxy, although the galaxy itself is a surprise. It’s a small dwarf galaxy younger than ours, one that may be able to produce more massive stars than we see in the Milky Way. One possibility is that FRB 121102 is from the collapsed remnant of such a star. Shriharsh Tendulkar (McGill University) is lead author of one of the papers studying the event.

“The host galaxy for this FRB appears to be a very humble and unassuming dwarf galaxy, which is less than 1% of the mass of our Milky Way galaxy. That’s surprising. One would generally expect most FRBs to come from large galaxies which have the largest numbers of stars and neutron stars — remnants of massive stars. This dwarf galaxy has fewer stars, but is forming stars at a high rate, which may suggest that FRBs are linked to young neutron stars. There are also two other classes of extreme events — long duration gamma-ray bursts and superluminous supernovae — that frequently occur in dwarf galaxies, as well. This discovery may hint at links between FRBs and those two kinds of events.”

A burst originating from the region near a massive black hole in the galaxy’s core — an active galactic nucleus emitting jets of material — is a candidate for FRB 121102. And as data continue to accumulate, any periodicity found in future observations may point to the involvement of a rotating neutron star. Further entangling the story is a key question: Can we assume that all FRBs we’ve thus far detected have the same origins, or are we actually detecting more than one kind of cosmic event? Given that FRB 121102 is the only one of 18 known FRBs that repeats, we may be looking at different physical processes at work.

The papers are Chatterjee et al., “A direct localization of a fast radio burst and its host,” Nature 541 (5 January 2017), 58-61 (abstract); Tendulkar et al., “The Host Galaxy and Redshift of the Repeating Fast Radio Burst FRB 121102,” Astrophysical Journal Letters Vol. 834, No. 2 (4 January 2017) (abstract); B. Marcote et al., “The Repeating Fast Radio Burst FRB 121102 as Seen on Milliarcsecond Angular Scales,” Astrophysical Journal Letters Vol. 834, No. 2 (4 January 2017)(abstract).



Hitchhiker to the Outer System?

by Paul Gilster on January 4, 2017

Years ago at the Aosta conference on interstellar studies, Greg Matloff told attendees about an interesting way to travel the Solar System. If the goal is to get to Mars, for example, it turns out that there are two objects — 1999 YR14 and 2007 EE26 — that pass close to both Earth and Mars, each with a transit time of about a year. Let me quote from Greg’s paper:

Since orbital characteristics are known for a few thousand NEOs, it is reasonable to assume that about 0.1% of the total NEO population could be applied for Earth-Mars or Mars-Earth transfers during the time period 2020-2100. Because a few hundred thousand NEOs must exist that are greater in dimension than 10m, hundreds of small NEOs must travel near-Hohmann trajectories between Earth and Mars or Mars and Earth. It seems likely that a concerted search will find one or more candidate NEOs for shielding application during any opposition of the two planets.

The notion is provocative. Could we somehow hitch a ride on one of these objects, taking advantage of its capabilities as a radiation shield by digging into its surface and exploiting its resources along the way? And maybe we can look further than Mars. In 2014, a NEO called 2000 WO148 swung by the Earth en route to a 2043 encounter with the main belt asteroid Vesta. The question becomes: are there other NEOs on interesting trajectories that might be of use in our explorations?

I was reminded of the NEO hitchhike idea this morning while reading about another interesting object. NEOWISE detected 2016 WF9 in late November of 2016. Here we have a true sightseer. 2016 WF9 approaches the orbit of Jupiter at its furthest point from the Sun, and then, over just under five years, swings inward, coming in past the main asteroid belt and the orbit of Mars to move just inside the orbit of the Earth before heading back out.

We get closest approach to Earth’s orbit on February 25th of this year, although at 51 million kilometers, this object hardly poses a danger to our planet, nor will it in the foreseeable future. Whether 2016 WF9 is an asteroid or a comet is not known. What we know is that it is between 0.5 and 1 kilometer across, and has low reflectivity, as do many dark objects in the main asteroid belt. Although in a comet-like orbit, 2016 WF9 lacks the dust and gas we normally associate with a comet. James ‘Gerbs’ Bauer (JPL) is deputy chief investigator for NEOWISE:

“2016 WF9 could have cometary origins. This object illustrates that the boundary between asteroids and comets is a blurry one; perhaps over time this object has lost the majority of the volatiles that linger on or just under its surface.”


Image: An artist’s rendition of 2016 WF9 as it passes Jupiter’s orbit inbound toward the sun. Credit: NASA/JPL-Caltech.

Another object recently spotted by NEOWISE is indeed thought to be a comet, releasing dust as it nears the Sun. In the first week of the new year, C/2016 U1 NEOWISE will be in the southeastern sky shortly before dawn as seen from the northern hemisphere, reaching perihelion on January 14 inside the orbit of Mercury. Although it’s impossible to say for sure, it may become bright enough to be visible in binoculars, according to this JPL news release.

Since NEOWISE was reactivated in December of 2013, it has discovered either 9 or 10 comets, depending on what 2016 WF9 turns out to be. If 2016 WF9 is found to be an asteroid, it would be the 100th discovered since reactivation. The original mission, the asteroid- and comet-hunting part of the Wide-field Infrared Survey Explorer (WISE) mission, discovered 34,000 asteroids. 31 of its discoveries pass within 20 lunar distances, and 19 are thought to be more than 140 meters in size but reflect less than 10 percent of incident sunlight. They are objects as dark as new asphalt, absorbing most visible light but re-emitting energy at infrared wavelengths that the NEOWISE detectors can readily study.

For those interested in digging into these matters further, the NEOWISE data release, with access instructions and supporting documentation, is here. And on the fictional side, Kim Stanley Robinson’s novel 2312 looks at terraformed asteroids in terms of both habitats and intra-system transportation in an evolving space infrastructure.



Close Look at Recent EmDrive Paper

by Paul Gilster on January 3, 2017

The concluding part of the Tau Zero Foundation’s examination of what is being called the ‘EmDrive’ appears today. It’s a close analysis of the recent paper by Harold ‘Sonny’ White and Paul March in the Journal of Propulsion and Power. Electrical engineer George Hathaway runs Hathaway Consulting Services, which has worked with inventors and investors since 1979 via an experimental physics laboratory near Toronto, Canada. Hathaway’s concentration is on novel propulsion and energy technologies. He has authored dozens of technical papers as well as a book, is a patent-holder and has hosted and lectured at various international symposia.

Hathaway Consulting maintains close associations with advanced physics institutions and universities in the US and Europe. Those familiar with our Frontiers of Propulsion Science book will know his paper on gravitational experiments with superconductors, which closely examined past methods and cast a skeptical eye on early claims of anomalous forces (an earlier paper, “Gravity Modification Experiment using a Rotating Superconducting Disk and Radio Frequency Fields,” appeared in Physica C). Like Marc Millis, Hathaway calls for continued testing of EmDrive concepts and increased rigor in experimental procedures.

By George Hathaway

Comments on “Measurement of Impulsive Thrust from a Closed Radio Frequency Cavity in Vacuum” (White, March et al., published online by the Journal of Propulsion and Power, November 17, 2016).


White et al are to be congratulated for attempting to measure the small thrusts allegedly produced by a novel thruster whose operating mechanism is not only not understood but purportedly violates fundamental physical laws. They have made considerable effort to reduce the possibility of measurement artifacts. However, it appears that there are some fundamental problems with the interpretation of the measurement data produced by their thrust balance. This document will analyse the measurement procedure and comment on the interpretation.

The following comments roughly follow the order of the original text by White et al.

Analysis and Comments

1. Null Test Orientation

Tests were performed in both the “Forward” and “Reverse” direction as well as in a “Null” direction where the alleged force vector pointed towards the rotational axis of the balance (pg 23). Apparently no Null tests were performed with the force vector pointing away from the balance axis nor were any tests performed with the “test article” force vector pointing up or down. These additional orientations would have provided much needed control data given the magnitude of the allegedly purely thermal signal seen in their “Null” test.

In addition, the Forward and Reverse tests should also have been performed by just re-orienting the test article whilst keeping all other rotating components untouched. In this type of control experiment, the spurious effect of the rest of the components is largely eliminated.

2. Axis Verticality

An optical bench was used as a platform to mount the vacuum chamber containing the balance. It is not stated whether the optical bench was itself mounted on pneumatic legs; however, this is usually the case with optical benches. The correct operation of any balance of this geometry requires that the pivots around which the balance arm rotates must be perfectly aligned vertically one above the other (for a 2-pivot system). When the pneumatic legs of the table are inflated, the axis of the balance typically cannot be kept perfectly vertical, as required to obtain the maximum balance sensitivity and repeatability. There is no indication in the text stating how such verticality was assured throughout the test campaign, especially since the balance was housed in a large vacuum chamber.

3. Flexural Bearings

There is no information presented to indicate whether the linear flexure bearings were operating within the manufacturer’s axial loading specification, especially when additional ballast weight was required for the non-“split configuration” tests. It would also have been useful to see data on the natural frequency of the balance when loaded with the equivalent weights used in the thrust tests, given the damping method described. Also missing is an explanation of why none of the traces of the optical displacement sensor return to starting baseline after the calibration and “thrust” pulses. There seems to be an inherent bearing stiction problem preventing the balance from returning to its original baseline after a test. This is not due to general balance drift and is typical for overloaded bearings of this type. Long-term balance stability/drift plots would be useful.

4. Electrostatic Calibrator

Evidently the calibration of the electrostatic “fin” method of applying calibration pulses was performed using an electronic balance (Scientech SA-210). Unfortunately, no data were provided to show exactly how this calibration was performed; in particular, nothing demonstrates that there was no electrostatic interaction between the high-voltage calibration signals and the operation of the balance. Since the Scientech balance properly reports vertical forces only, was care taken to translate these vertical forces into the horizontal calibration forces required by the thrust balance? It would have been useful for the authors to have employed a second, independent horizontal force calibration, such as a strain gauge-type force gauge with interpolation, to verify the Scientech method.

5. Vacuum System

The authors note that although turbomolecular pumps were used to evacuate the vacuum chamber, they caused no artificial vibrational signals. Turbo pumps require mechanical backing pumps to evacuate them to atmosphere. These mechanical pumps are connected to the turbo pumps typically via thick and stiff vacuum hoses. These hoses can transmit backing pump vibrations to the turbo pumps which are usually rigidly connected to the vacuum chamber. Was this source of vibration taken into account as well?

Additionally, no evidence is provided to show how the interior of the test article was evacuated coincidentally with the chamber evacuation. This is a different concern to that stated in the paper (pp 27, 28) regarding outgassing of the dielectric. The concern here is that if the test article cannot be fully evacuated coincidentally with the chamber evacuation, residual gas inside the test article can possibly escape during the time of a test, causing spurious force signals. Moreover, if the test article is rather well-sealed, the shell of the test article, especially the end plates, could expand upon evacuation of the chamber due to air trapped inside prior to chamber pump-down. This would alter the center of gravity (COG) of the balance causing a spurious signal, especially if the trapped air is heated upon application of RF power of tens of watts.

6. Liquid Metal Connections

“Galinstan screw and socket” rotary connections were employed to prevent any unwanted torques from upsetting the balance due to hard-wire connections between the rotating test article and the power supplies, analytical instruments etc fixed to the lab frame. There must have been quite a few of these connections for DC power, Forward and Reverse RF power, various tuning and drive signals etc. The authors failed to indicate how these connections were arranged geometrically. The ideal mounting arrangement is for such liquid metal connections to be stacked one on top of the other exactly coaxial with the main rotational axis of the balance. It seems unlikely that the design constraints of the balance within the chamber shown would accommodate this tall a stack of connections. Thus it is assumed that these connections were not arranged coaxially with the balance axis. If so, there could be spurious side thrusts generated by Ampère currents set up within the galinstan. This should have been tested and reported.
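For a sense of scale, the force between parallel DC current paths follows the standard formula F/L = μ0·I1·I2/(2πd). The numbers below are invented for illustration (the paper gives neither lead currents nor geometry), but they show that plausible supply currents in closely spaced conductors can produce forces of the same order as the reported ~100 uN thrusts:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, N/A^2

def ampere_force_per_meter(i1_amps, i2_amps, separation_m):
    """Force per unit length between two long parallel current paths."""
    return MU0 * i1_amps * i2_amps / (2.0 * math.pi * separation_m)

# Hypothetical values: a 2 A DC supply current in two leads 1 cm apart.
f = ampere_force_per_meter(2.0, 2.0, 0.01)
print(f"{f * 1e6:.0f} uN per meter of conductor")  # 80 uN per meter
```

Even this crude estimate lands within a factor of a few of the claimed thrust, which is why the geometry of the liquid metal connections matters.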

7. Thermal Expansion and Control Tests

The White et al paper contains considerable information on the effects of thermal expansion of the various test article components. It would be beneficial to see control experiments in which the test article is replaced by a suitable control article, such as a purely cylindrical cavity of approximately the same dimensions, materials and construction, and which supports similar RF modes as the frustum-shaped test article.

According to pg 10, the heat sink unsurprisingly is the greatest source of heat during operation. It would be useful to perform control tests by separating the heat sink mechanically from the rest of the rotating components in such a way as to allow it to be oriented in any direction relative to the rest of the components to see the effect on the optical displacement signal.

Evidently, the test article assembly produces a relatively large thermal “thrust” signal as measured by the optical displacement sensor. The only explanation given is that the change in center of gravity (COG) due to thermal expansion of various components causes a spurious torque on the balance. In fact the presence of a thrust signal due to thermal effects is only inferred, not proven. Moreover, it is stated (pg 10) that this thermal effect causes the balance arm to shift “with the same polarity as the impulsive signal” in Forward or Reverse tests. Here also it is implied but not proven that an “impulsive thrust” signal is even present (see below). The authors need to perform such control tests as to ascertain with certainty that there is indeed a “thermal thrust” before assuming without proof that it causes the balance arm to shift “with the same polarity”. One such test would be to construct a “control article” of the same shape, material and weight as the test article but with guaranteed no “impulsive thrust” and substitute it for the test article. Instead of powering it with an RF signal, put a resistor or light bulb inside to simulate the thermal characteristics.

This lack of proof of the presence of either a thermal thrust or an impulsive thrust thus precludes statements such as “the thermal signal in the vacuum runs is slightly larger than the magnitude of the impulsive signal [due to convective issues]”.

8. Confirmation Bias in Thrust Analysis

The entire edifice of the analysis of the signals from the optical displacement sensor rests on the assumption of the correctness and correct application of Fig. 5 to the present test situation. Fig. 5 shows an ad-hoc superposition of two assumed signals, namely a thermal signal and a pulse (impulse) signal. This is presented initially as a “conceptual simulation” and is reasonable in its own right. However, it then takes on the value of an accepted fact throughout the rest of the paper. Fig. 5 represents what the authors expect to see in the signal from the optical displacement sensor. When they see signals from this sensor which vaguely look like the expected superposition signal as represented in Fig 5, they assume that Fig 5 must actually represent what is going on in their system under test. This is a clear inductive reasoning fallacy called Confirmation Bias. This problem leads to baseless assumptions about the timing of the onset of expected effects after application of the stimulus (RF power), their proper shapes, and the joint amplitudes and thus the individual (impulse vs thermal) magnitudes.

In particular, the authors assume that the “true” impulse signal from the test article will look just like the assumed signal shown in Fig. 5, namely that it will look just like their calibration signal. This will include an initial fast-rising but well-behaved exponential slope up to a flat-topped constant thrust followed by a slower exponential falling section back to baseline. Next they assume that the thermal signal will be a well-behaved double exponential starting exactly at the same time as the impulse signal, also as shown in idealized form in Fig. 5. An additional assumption made by the authors is that there are no other spurious effects which might be represented as additional curves in Fig.5. The simple addition of the amplitudes of the thermal and impulse signals produces the resulting superposition signal. This signal is used as a template against which the actual sensor signal is compared. By stretching the imagination, the sensor signal can be force-fit onto the idealized superposition signal and, voila, the simple analysis can proceed to extract the magnitude of the true impulse signal.
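To make the superposition assumption concrete, here is a minimal sketch of a Fig. 5-style “conceptual simulation”: an idealized impulse trace added to an assumed double-exponential thermal drift. All time constants and amplitudes below are invented; the point is that decomposing a measured trace into these two components is only as trustworthy as the assumed shapes.

```python
import numpy as np

T_ON, T_OFF = 10.0, 40.0  # RF power on/off times in seconds (invented)

def impulse(t, amp, tau=4.0):
    """Idealized impulsive-thrust trace: exponential rise toward a flat
    top while RF is on, exponential return to baseline after RF-off."""
    rise = amp * (1.0 - np.exp(-(np.clip(t, T_ON, T_OFF) - T_ON) / tau))
    top = amp * (1.0 - np.exp(-(T_OFF - T_ON) / tau))
    return np.where(t < T_ON, 0.0,
                    np.where(t < T_OFF, rise,
                             top * np.exp(-(t - T_OFF) / tau)))

def thermal(t, amp, tau_fast=6.0, tau_slow=80.0):
    """Assumed double-exponential thermal drift, starting exactly at
    RF-on (one of the assumptions questioned above)."""
    dt = np.clip(t - T_ON, 0.0, None)
    return amp * (2.0 - np.exp(-dt / tau_fast) - np.exp(-dt / tau_slow)) / 2.0

t = np.linspace(0.0, 80.0, 801)
trace = impulse(t, 100.0) + thermal(t, 250.0)  # amplitudes in uN, invented
```

With fixed shapes like these, extracting the two amplitudes from a trace is well posed; but if the real thermal response is, say, linear rather than double-exponential, amplitudes extracted this way mean little.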

This method is applied to all the sensor signals except that in Fig. 10 showing the “split configuration”.

There are additional problems with this force-fitting routine. For example, in Fig. 7, which is analysed in some detail, the initial rising slope of the displacement sensor signal should be an asymptotically flattening exponential according to Fig. 5. But it is clearly an asymptotically rising signal, perhaps exponential in shape. About half-way through the RF power application period, this rising slope suddenly breaks into a markedly linear (rising) slope. According to Fig. 5, this part of the signal should show an asymptotically decreasing (flattening) exponential slope, definitely not a linear slope. The authors even use linear curve fitting in this region, evidence that even they do not consider this part of the slope exponential. All the optical displacement signals shown in the other relevant figures (Figs. 13, 16) show this characteristic as well.

Then a sleight-of-hand is used to tease out the contributions of the assumed thermal vs the impulsive signal. According to pg. 11, “the characteristics of the curve [superposition curve in Fig. 5] after this discontinuity [the break in slope of the rising exponential due to the onset of steady thrust] are used as the baseline to be shifted down so that the line projects back to the “origin” or moment when RF power is activated.” The amount of this baseline shift is taken to represent the “true” impulse signal. Naturally, this assumes that the onset of thrust (and the thermal signal) are all coincident exactly with the application of RF power (and are all of the ideal shape according to Fig. 5). According to Fig. 7, it also assumes that a straight line can be used as this “baseline shift” rather than the more likely broken exponential shaped line depicted in Fig. 5. This has the added bonus of arbitrarily increasing the “calculated” impulsive thrust.

Pg. 13 introduces a “Slope Filtering: Alternate Approach” to the force-fitting approach discussed above, whereby the time derivative of the displacement sensor signal is plotted. This procedure produces a curve of magnitudes of slopes (Fig. 9). Sadly, this method starts off with the same assumptions as the above approach. It compounds these problems by invoking an arcane procedure whereby the parts of the original displacement sensor curve with slopes lower than a particular arbitrary (and unstated) value are removed, and what’s left of the curve allegedly represents the “true” impulse curve. None of this procedure is shown in detail and only the final result is shown which, conveniently for the authors, is within ~20% of the previous analysis method. Of course, this convenient coincidence is entirely dependent on the arbitrary slope magnitude removal value.
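One way to see the arbitrariness of the slope-filtering step is to apply it to a synthetic trace. The sketch below (all numbers invented; `np.gradient` as the time derivative) implements one plausible reading of the removal criterion and shows that the fraction of the curve retained, and hence any thrust extracted from it, swings with the unstated threshold:

```python
import numpy as np

# Synthetic displacement trace: a linear thermal ramp after RF-on plus
# an idealized rectangular impulse. All values are illustrative only.
t = np.linspace(0.0, 80.0, 801)                         # seconds
trace = np.where(t > 10.0, 2.0 * (t - 10.0), 0.0)       # thermal ramp
trace += np.where((t > 10.0) & (t < 40.0), 100.0, 0.0)  # impulse "step"

slope = np.gradient(trace, t)  # numerical time derivative of the trace

def slope_filter(threshold):
    """Keep only samples whose |slope| exceeds the chosen threshold
    (one reading of the paper's unstated criterion); return the
    fraction of the curve retained."""
    return (np.abs(slope) > threshold).mean()

# A low threshold keeps most of the curve; a higher one keeps almost
# nothing but the step edges. The "true impulse curve" is whatever the
# analyst's threshold says it is.
print(slope_filter(1.0), slope_filter(5.0))
```

Without the threshold value and the intermediate curves, a reader cannot reproduce the ~20% agreement claimed for this method.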

9. Split Configuration

On pg. 15 we learn that by splitting the test article from the rest of the electronics (one on each end of the balance arm), the response time is reduced as expected due to the reduction in ballast weight required, and that the “true” thrust amplitude has been reduced from 106 uN to 63 uN, all other things being equal! Additionally, the displacement sensor curve (Fig. 10) is completely different in shape from the non-split configuration tests. The only explanation proffered for this discrepancy is that “the thermal contribution…is smaller in magnitude compared to the impulsive signal.” No proof of the correctness of this statement is provided. Since the split and non-split configuration curves are so radically different, the authors chose not to apply either of the analysis methods discussed above. They arbitrarily take the amplitude of the displacement signal at the instant it starts an exponentially asymptotic downward slope as the correct point. Why not use a variant of this method and apply it to the non-split configuration? Because it would result in apparently unacceptably huge thrusts (e.g. ~260 uN at 60 W)!

10. Difference between Forward and Reverse Thrusts

Tables 2 and 3 allow us to compare “calculated” thrusts (using the ideal curve force-fitting method discussed above) from Forward and Reverse non-split configurations. The Reverse thrusts are consistently lower than their Forward thrust counterparts. For example for 60 W, average Forward thrusts are 108 uN vs 60 uN for Reverse thrusts. For 80 W, these numbers are 104 uN vs 71 uN. No explanation is given for these differences, nor for the fact that in the Forward configuration, the 80 W thrust is lower than the 60 W thrust.

11. Null Thrust Test

It is stated on pg. 23 that “The [COG] shift from thermal expansion causes a downward drift in the optical displacement sensor.” Why not an upward drift? There is no justification given for this statement as no control tests were performed to ascertain what the result of a purely thermal effect might be, expansion or otherwise.

Further, the authors state “The results from the null thrust testing show no impulsive element…only the thermal signal.” This is also an unproven statement since no purely impulsive or purely thermal signal has been positively identified in shape or amplitude. The authors appear to have forgotten the thermal curve they used in Fig. 5, namely a double exponential. There is no evidence for any exponential part of the supposedly “thermal only” curve of the Null Test in Fig. 18. It appears completely linear and if there is a slight hint of an exponential, it is in the wrong sense (asymptotically falling, not flattening)! Another hint as to the problem of assigning a purely thermal explanation of the curve in Fig. 18 is the fact that exactly at the time of shutting off the RF power, there is no thermal lag or overshoot: the linear slope breaks suddenly to become essentially flat.

The implication of the Null Thrust test is that the thermal signal apparently seen in the Null Test would be the same as that seen in the Forward and Reverse tests. If so, then the curve force-fitting routine discussed above is invalid as it assumes a double exponential thermal curve (Fig. 5).

The Null Thrust test depicted in Fig. 18 was run at 80 W RF power. The Reverse Thrust test in Fig. 16 run at 80 W shows an apparent thermal signal of approx. 70 uN using the force-fitting routine. For the same period, the Null Thrust test shows an apparent thermal signal of approx. 275 uN. This is a huge discrepancy begging for detailed explanation.


In addition to mechanical and related considerations, the authors’ methods of analysis of sensor data to derive thrusts rest on untenable grounds. Not only is there an assumption of the presence of only a “true” impulse signal as well as a thermal signal, there is an assumption that the observed signal can be broken down into just these two components and that amplitudes can be calculated based on an idealized superposition assumption. Therefore, until more control tests are performed allowing a more accurate method for estimation of thrusts, no faith can be placed in the thrust magnitudes reported in the paper.



Uncertain Propulsion Breakthroughs?

by Paul Gilster on December 30, 2016

Now that the EmDrive has made its way into the peer-reviewed literature, it falls in range of Tau Zero’s network of scientist reviewers. Marc Millis, former head of NASA’s Breakthrough Propulsion Physics project and founding architect of the Tau Zero Foundation, has spent the last two months reviewing the relevant papers. Although he is the primary author of what follows, he has enlisted the help of scientists with expertise in experimental issues, all of whom also contributed to BPP, and all of whom remain active in experimental work. The revisions and insertions of George Hathaway (Hathaway Consulting), Martin Tajmar (Dresden University), Eric Davis (EarthTech) and Jordan Maclay (Quantum Fields, LLC) have been discussed through frequent email exchanges as the final text began to emerge. Next week I’ll also be presenting a supplemental report from George Hathaway. So is EmDrive new physics or the result of experimental error? The answer turns out to be surprisingly complex.

by Marc Millis, George Hathaway, Martin Tajmar, Eric Davis, & Jordan Maclay

It’s time to weigh in about the controversial EmDrive. I say controversial because of its profound implications if genuine, plus the lack of enough information with which to determine if it is genuine. A peer-reviewed article about experimental tests of an EmDrive was just published in the AIAA Journal of Propulsion and Power by Harold (Sonny) White and colleagues: White, H., March, P., Lawrence, J., Vera, J., Sylvester, A., Brady, D., & Bailey, P. (2016), “Measurement of Impulsive Thrust from a Closed Radio-Frequency Cavity in Vacuum,” Journal of Propulsion and Power (print version pending; online version here).

That new article and related peer-reviewed articles were reviewed by colleagues in our Tau Zero network, including two who operate similar low-thrust propulsion test stands. From our reviews and discussions, I have reached the following professional opinions – summarized in the list below and then detailed in the body of this article. I regret that I can only offer opinions instead of definitive conclusions. That ambiguity is a significant part of this story that also merits discussion.



(1) The experimental methods and resulting data indicate a possible new force-producing effect, but do not yet satisfy the threshold of “extraordinary evidence for extraordinary claims” – especially since this is a measurement of small effects.

(2) The propulsion physics explanations offered, which already assume that the measured force is real, are not sound.

(3) Experiments have been conducted on other anomalous forces, whose fidelity and implications merit comparable scrutiny, specifically Jim Woodward’s “Mach Effect Thruster.”


Ramifications

(1) If either the EmDrive or Mach Effect Thrusters are indeed genuine, then new physics is being discovered – the ramifications of which cannot be assessed until after those effects are sufficiently modeled. Even if it turns out that the effects are of minor utility, having new experimental approaches to explore unfinished physics would be valuable.

(2) Even if genuine, it is premature to assess the potential utility of these devices. Existing data only addresses some of the characteristics necessary to compare with other technologies. At this point, it is best to withhold judgment, either pro or con.

Pitfalls to Avoid

(1) The repeated tactic of attempting fast and cheap experimental tests has turned out to be neither fast nor cheap. It’s been at least 14 years since the EmDrive first emerged (2002) and, despite numerous tests, we still lack a definitive conclusion.

(2) In much the same way that thermal and chamber effects are obscuring the force measurements, our ability to reach accurate conclusions is impeded by our natural human behavior of jumping to conclusions, confirmation biases, sensationalism, and pedantic reflexes. This is part of the reality that also needs understanding so that we can separate those influences from the underlying physics.


Next Steps

(1) Continue scrutinizing the existing experimental investigations on both the EmDrive and Mach Effect Thrusters.

(2) To break the cycle of endlessly not doing the right things to get a definitive answer, begin a more in-depth experimental program using qualified and impartial labs, plus qualified and impartial analysts. The Tau Zero Foundation stands ready to make arrangements with suitable labs and analysts to produce reliable findings, pro or con.

(3) If it turns out that the effects are genuine, then continue with separate (a) engineering and (b) physics research, where the engineers focus on creating viable devices and the physicists focus on deciphering nature. In both cases:

  • Characterize the parameters that affect the effects.
  • Deduce mathematical models.
  • Apply those models to (a) assess scalability to practical levels, and (b) understand the new phenomena and their relation to other fundamental physics.
  • On all of the above, conduct and publish the research with a focus on the reliability of the findings rather than on their implications.


Pitfall 1 – The Fog of Want

Our decisions about this physics are influenced by behaviors that have nothing to do with physics. To ignore this human element would be a disservice to our readers. To get to the real story, we need to reveal that human element so that we can separate it from the rest of the data, like any good experiment. I’m starting off with this issue so that you are alert to its influences before you read the rest of this article.

As much as I strive to be impartial, I know I bring a preexisting negative bias from the EmDrive’s history. To create a review that reflects reality, rather than echoing my biases, I had to acknowledge and put aside those biases. Similarly, if you wish to extract the most from this article, you might want to check your perspectives. Ask yourself these three questions: (1) Do you already have an opinion about this effect and are now reading this article to see if we’ll confirm your expectation? (2) Do you want to know our conclusions without any regard to how we reached those conclusions? (3) Are you only interested in this EmDrive assessment, without regard to other comparable approaches?

If you answered “yes” to any of those questions, then you, like me, have natural human cognitive dysfunctions. To get past those reflexes, start by at least noticing that they exist. Then, take the time to notice both the pros and cons of the article, not just the parts you want to be true. Deciphering reality takes time; it cannot be done by simply deferring to reflexive beliefs. It requires a mind open to the possibility that you might be right and equally open to the possibility that you might be wrong.

EmDrive History

This history is a recurring theme of incredible claims with non-credible evidence for those claims. In all cases, the effect is assumed to be real before the tests – which reflects a blinding bias. This dates back to at least 2002, when Roger Shawyer claimed to invent a device that “provides direct conversion from electrical energy to thrust, without expelling propellant.” I was still at NASA and vaguely remember reviewing it then. Regardless of the claims, the fidelity of the methods was below average. Over the years I heard about several other tests, but never saw any data. Eventually there was a press story about tests in China, along with this photo. It turns out that this photo is not a Chinese rig, but one of Shawyer’s:


Shawyer’s device and supporting equipment are on a rotating frame, where that rotation is used to determine if the device is thrusting. Note, however, the radiator and coolant lines. Any variation in the coolant flow would induce a torque that would obscure any real force measurements. Knowing the claimed thrusting effect is small and having enough experience to guess the likely variations in coolant flow, I considered this test set-up flawed.

Regarding the Chinese tests, I did not previously know they are described in peer-reviewed articles. Since many of us did not know either, I’m listing them here along with cursory impressions:

Juan, Y., et al, (2012). Net thrust measurement of propellantless microwave thrusters. Acta Physica Sinica, Chinese Physical Society.

Due to all of the impressions below, I do not have any confidence in their data:

  • Assumes first that the EmDrive is genuine.
  • Verbally describes theory, but without predicting experimental findings.
  • The experiment is not described in enough detail to assess its fidelity, but is similar to the one in the photo. Regardless, there is absolutely no discussion of possible influences on the rotation from tilting, power lead forces, vibration effects, thermal effects, or others.
  • The behavior of the thrust stand was not characterized before installing the EmDrive. Testing the two together without first having characterized the thrust stand separately prevents separating their distinct characteristics from the data.
  • The data plots lack error bands.

Juan, Y., et al (2013). Prediction and experimental measurement of the electromagnetic thrust generated by a microwave thruster system. Chinese Physics B, 22(5), 050301.

Based on all of the impressions below, I do not have any confidence in their data:

  • The description of the experiment is improved from the 2012 paper and appears to be the same configuration. This time possible effects from tilting and the power lead forces are mentioned, but they still do not address vibration, thermal, coolant loop, or other effects.
  • Again, they fail to characterize the thrust stand separately from the EmDrive.
  • Unlike the 2012 paper, they attempt to make numerical predictions. Details are provided for their physics derivations (which I did not scrutinize). That theory is then applied to make predictions for their specific hardware, but the application is only described verbally rather than shown as an explicit derivation. They show plots of the predicted force versus power, but only up to 200 W, whereas the experimental runs span about 100 W to 2400 W.
  • The experimental results do not match their linear predictions for the ratio of force-to-power. These differences are then evasively dismissed.

Juan, Y., et al. (2016), “Thrust Measurement of an Independent Microwave Thruster Propulsion Device with Three-Wire Torsion Pendulum Thrust Measurement System,” Journal of Propulsion Technology, vol. 37, no. 2, pp 362-371.

The text is in Chinese, which I did not translate, but the figures and plots are captioned in English. Therefore I comment only on those diagrams. Again, what is shown is not enough to support claims of anomalous forces:

  • From figures 2, 3, 6, 7, 16, and 19, it appears the prior apparatus is now hung from torsion wires instead of a rotating support from below. This time the coolant loop is explicitly shown, but in a conceptual drawing instead of showing specifics. Again, the influence of the coolant loop is ignored.
  • The only “measurement results” plot is “force versus serial number” – which conveys no meaningful information (without being able to read associated text).
  • I learned later from Martin Tajmar that the observed thrust drops by more than an order of magnitude when the device is powered by batteries instead of the external cables (cables whose currents can induce forces).

I chose not to cite and comment on the many non-peer-reviewed articles on Shawyer’s website and related AIAA conference papers.

Shawyer eventually published a peer-reviewed article, specifically: Shawyer, R. (2015), “Second generation EmDrive propulsion applied to SSTO launcher and interstellar probe,” Acta Astronautica, vol. 116, pp 166-174. Shawyer states: “Theoretical and experimental work in the UK, China and the US has confirmed the basic principles of producing thrust from an asymmetric resonant microwave cavity.” That assertion has not held up to scrutiny, so all related assertions are equally unfounded. Instead of offering substantive evidence, the article predicts the performance of three variations of EmDrives that now claim to use superconductivity. From these, he presents conceptual diagrams for their respective spacecraft. He also mentions the “Cannae Drive,” by Guido Fetta, as another embodiment of his device.

Latest EmDrive Paper

The latest paper, in the AIAA Journal of Propulsion and Power, is an improvement in fidelity over the prior tests and may be indicative of a new propulsive effect. However, the methods and data still do not cross the threshold of “extraordinary evidence for extraordinary claims” – especially since this is a measurement of small effects. With the improved fidelity of the reporting and of the data traces themselves, I have to question my earlier bias that the prior data was entirely due to experimental artifacts and proponent biases.

The assessment offered below is a summary of discussions with the coauthors of this report plus a few other colleagues. Both Martin Tajmar and George Hathaway operate similar low-thrust propulsion test stands and thus are familiar with such details. George Hathaway’s more focused analysis will be posted in a future Centauri Dreams article.

The major problems with the paper are (1) lack of impartiality, (2) the test hardware is not sufficiently characterized to separate spurious effects from the test article’s effects, (3) the data analysis is marred by the use of subjective techniques, and (4) the data can be interpreted in more than one way – where one’s bias will affect one’s conclusions.

The first shortcoming of the paper is that it is biased. It assumes that the propulsion effect is genuine and then goes on to invent an explanation for that unverified effect. This bias skews how they collect and analyze the data. To be more useful, the paper should have reported impartially on its experimental and analytical methods to isolate a potential new force-producing effect from other contaminating influences.

The next shortcoming is insufficient testing for how spurious causes can affect the thrust stand. While this new paper is a significant improvement over the previous publications, it falls short of providing the information needed to reach a definitive conclusion. The authors use techniques comparable to engineering tests of conventional low-thrust electric propulsion. While such engineering techniques might be passable for checking electric propulsion design changes, they are not sufficient to demonstrate that a new physics effect exists. The specific shortcomings include:

  • Thrust stand tilting: The thrust stand has a vertical axis, where even slight changes of that alignment will affect how the thrust stand behaves. There are three parts to this, none of which are quantified: the fidelity of the thrust stand flexures and pivots, the alignment fidelity of that structure to the vacuum chamber, and the sustained levelness of the “optical bench” upon which the vacuum chamber is mounted.
  • Thrust stand characterization: The thrust stand does not return to its original position after tests, even for most calibration events. Additionally, the thrust stand is over-damped, meaning that it is slow to respond to changes, including the calibration events. Those characteristics (time for the thrust stand to respond to a known force and the difference between its before/after positions) are important to understand so that those artifacts can be separated from the data. These facets are largely ignored in the paper. The report does mention that the location of the masses on the thrust stand affects its response rate (“split configuration” versus “non-split”), but this difference is not quantified. The thrust stand uses magnetic dampers. Similar dampers used on one of Martin Tajmar’s thrust stands were found to cause spurious effects (subsequently replaced with oil dampers). Given the irregular behavior, it is fair to suspect that other causes are interfering with the motion of the thrust stand. The flexural bearings might be operated beyond their load capacity or might be affected by temperature.
  • Forces from power cables: To reduce the influence of electromagnetic forces from the power leads, Galinstan liquid metal screw and socket connections are used. While encouraging, it is not specified if these connections (several needed) are all coaxially aligned with the stand’s rotation axis (as required to minimize spurious forces). Also, there are no tests with power into a dummy load to characterize these possible influences.
  • Chamber wall interactions: Though mentioned as a possible source of error, the electromagnetic forces between the test device and the vacuum chamber walls are dismissed without quantitative estimates or tests. One way that this could have been explored is by using more variations in the position and orientation of the test device relative to the chamber. For example, in the “null thrust” configuration, only one of four possibilities is used (the device pointed toward the pivot axis). If also pointed up, down, and away from the pivot, more information would have been collected to help assess such effects.
  • Thermal effects: The paper acknowledges the possible contributions from thermal effects, but does not quantify that contribution. For example, there are no measurements of temperature over time compared to the thrust stand’s deflection. Such measurements should have been made during operation of the device and when running power through a dummy load. Absent that data, the paper resorts to subjectively determining which parts of the data are thermal effects. For example, without any validation, the paper assumes that the displacement measured during the “null thrust” configuration is entirely a thermal effect. It does not consider chamber wall interactions or any other possible sources. The paper does speculate that temperature changes might shift the center of gravity of the test article in a way that affects the thrust stand, but no diagrams are offered showing how a slight change in one of those dimensions would affect the thrust stand.
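To see why the thrust stand’s dynamic behavior matters so much, here is a minimal sketch of an over-damped torsion balance responding to a step force. All parameter values are hypothetical choices for illustration; none come from the paper:

```python
# Hypothetical over-damped thrust-stand model (illustrative only;
# none of these parameter values come from the EmDrive paper).
import numpy as np

def simulate(F_applied, t_on, t_off, m=1.0, c=10.0, k=2.0,
             dt=0.001, t_end=20.0):
    """Integrate m*x'' + c*x' + k*x = F(t) with a semi-implicit Euler step."""
    n = int(t_end / dt)
    x, v = 0.0, 0.0
    xs = np.empty(n)
    for i in range(n):
        t = i * dt
        F = F_applied if t_on <= t < t_off else 0.0
        a = (F - c * v - k * x) / m   # acceleration from force balance
        v += a * dt
        x += v * dt
        xs[i] = x
    return xs

xs = simulate(F_applied=1.0, t_on=1.0, t_off=10.0)
# Over-damped (c**2 > 4*m*k): the deflection creeps toward the steady
# state F/k without ringing, so a slow thermal drift is hard to
# distinguish from a genuine step force.
print(round(xs[int(9.9 / 0.001)], 3))  # deflection just before force-off
```

Because this slow creep toward equilibrium looks much like thermal drift, the stand’s response time and damping need to be characterized independently (with known calibration forces and dummy loads) before any “thrust” can be separated from the data.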

The third and most egregious shortcoming of the report is that they apply a vaguely described “conceptual simulation” (which is never mathematically detailed) as their primary tool to deduce which part of the data is attributable to their device and which is due to thermal effects. They assume a priori the shapes of both the “impulsive thrust” (their device) and the thermal effects, and how those signals will superimpose. There is no consideration of chamber wall effects, power lead forces, tilting, etc. As a reflection of how poorly defined this assumed superposition is, the ‘magnitude’ and ‘time’ axes on the chart showing this relation (Fig. 5) are labeled in “arbitrary units.” Another problem is that their assumed impulsive thrust curve does not match the shape of most of the data that they attribute to impulsive thrust: instead of the predicted smooth curve, the data shows deviations about halfway through the thrusting time. They then apply this subjective and arbitrary tool to reach their conclusions. Because they are biased toward the effect being genuine, and because their methods overlook critical measurements, I cannot trust the authors’ interpretations of their results.

Absent an adequate accounting of the magnitude and characteristics of secondary causes, and of how to remove those possible influences from the data, the fourth major problem with the report is that its data can be interpreted in more than one way.

Rather than invoking subjective techniques here, the comments that follow are based only on examining their data plots as a whole. To illustrate how this data can be interpreted in more than one way, both dismissive and supportive interpretations are offered. In particular, we compare the traces from the “forward,” “null,” and “reverse” thrust configurations, and then the force-versus-power compilation of the runs.

The data for the 80 W operation of the device in the “forward,” “null,” and “reverse” thrust configurations is presented in Figures 9c, 18, and 10c, respectively. Recall from the discussions above that this data includes all the uncharacterized spurious causes (thermal, chamber wall interactions, power lead forces, tilting of the thrust stand, and seismic effects), plus any real force from the test device. The values shown in the table below were read from enlarged versions of the figures.


Table of Noteworthy Data Comparisons Between Forward, Null, and Reverse Thrust Orientations

For a genuine thrusting effect, one would expect the results to show near-matching magnitudes for forward and reverse thrust and a zero magnitude for the null-thrust orientation. If one looks only at the “Total deflection,” all the magnitudes are roughly the same, including the null-thrust. Pessimistically, one could then infer that the spurious effects are great enough to be easily misinterpreted as a genuine thrust.


Conversely, if one considers how quickly the deflections occur, then the attention would be on the “Rate of deflection.” In that case, the thrusting configurations are roughly twice as large as the null-thrust configuration. From only that, one might infer that a new force-producing effect is larger than spurious causes.

To infer conclusions based on the deflection rates, one must also examine the rate of deflection for the calibration events, which should be the same in all configurations. The calibration deflection rate appears roughly the same in the forward and reverse thrust configurations, but more than 2.5 times larger in the null-thrust configuration. That there is a difference compounds the difficulty of reaching conclusions. There are also significant inconsistencies in how the thrust stand rebounds once the power is turned off between the thrusting and null-thrust configurations, again compounding the difficulty of reaching conclusions.

Because a possible positive interpretation exists within those different perspectives, I cannot rule out the possibility that the data reflects a new force-producing effect. But as stated earlier, given all the uncharacterized secondary effects and the questionable subjective techniques used in the report, this is not sufficient evidence. Given the prominent role played by the rates of deflection, the dynamic behavior of the thrust stand must be more thoroughly understood before reaching firm conclusions.

Next, let’s examine the compilation of runs, namely Fig. 19. Based on a linear fit through the origin, they conclude a thrust-to-power ratio of 1.2 ± 0.1 mN/kW (= µN/W). While such a fit is possible, the data can be interpreted in more than one way. Note that the averages for the 60 and 80 watt operations are the same, so a linear fit is not strictly defensible. One could just as easily infer that increasing power yields decreasing thrust, a constant 50 µN force, or an exponential curve that flattens out to a constant (saturated) thrust of about 100 µN. Note too that the null-thrust data (which could be interpreted to be as high as 211 µN) is not shown on this chart.


Recall too that they did not quantify the potential spurious effects, so their presumed error band of only ±6 µN does not stand up to scrutiny. Note, for example, that the span in the 40 W data is about ±17 µN, the 60 W about ±50 µN, and the 80 W about ±32 µN. What is not clear is whether these 40, 60, and 80 watt runs represent different operating parameters (Q-factor?), or whether these are the natural variations with fixed settings.

The pessimistic interpretation is that the deviations in the data represent variations under the same operating conditions, in which case the data are too varied to support any correlation. Conversely, the optimistic interpretation is to assume the variations are due to changes in operating parameters, but then that additional information should be made available and be an explicit part of the analysis.
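To make the ambiguity concrete, here is a sketch comparing alternative fits. The (power, force) points are hypothetical stand-ins consistent with the description above (equal averages at 60 and 80 W), not the paper’s actual measurements:

```python
# Illustrative fit comparison; the (power, force) points are
# hypothetical stand-ins, not the paper's actual data.
import numpy as np

P = np.array([40.0, 60.0, 80.0])   # watts
F = np.array([55.0, 90.0, 90.0])   # micronewtons; 60 W and 80 W averages equal

def rss(pred):
    """Residual sum of squares against the force data."""
    return float(np.sum((F - pred) ** 2))

# Model 1: linear through the origin, F = a * P (the paper's choice)
a = np.sum(P * F) / np.sum(P * P)
# Model 2: constant force, F = c
c = F.mean()
# Model 3: saturating curve, F = Fmax * (1 - exp(-P/tau)), coarse grid search
best = min(((rss(Fm * (1 - np.exp(-P / t))), Fm, t)
            for Fm in np.arange(80, 121, 1.0)
            for t in np.arange(5, 61, 1.0)))

print(f"linear:     a = {a:.2f} uN/W, rss = {rss(a * P):.0f}")
print(f"constant:   c = {c:.1f} uN,  rss = {rss(np.full(3, c)):.0f}")
print(f"saturating: Fmax = {best[1]:.0f} uN, tau = {best[2]:.0f}, rss = {best[0]:.0f}")
```

On data like this, the saturating curve fits at least as well as the linear one, which is exactly why a linear-through-origin fit is not uniquely defensible from so few averaged points.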

In summary, this most recent report is a significant improvement, but it has many shortcomings. Questionable subjective techniques are used to infer the “thrust” from the data, and other likely influences are not quantified. Yet despite those inadequacies, the possibility of a new force-producing effect cannot be irrefutably ruled out. This is intriguing, but it still falls short of defensible evidence.

EmDrive and Other Space Drive Theories

First, I cannot stress enough that there is no new EmDrive “effect” yet about which to theorize. The physical evidence on the EmDrive is neither defensible nor does it include enough operating parameters to characterize a new effect. The data is not even reliable enough to deduce the force-per-power relationship, let alone any other important correlations. What about the effects of changing the dimensions or geometry, changing the materials, or changing the microwave frequencies or modulation? And then there is the unanswered question, what are the propulsion forces pushing on?

Assuming for the moment that the EmDrive is a new force-producing effect, we know at least two things: (1) it is not a photon rocket, because the claimed forces are 360 times greater than the photon rocket effect, and (2) a force without an “equal and opposite force” goes beyond Newton’s laws. Note that I did not invoke the more familiar “violating conservation of momentum” point. That is because these experiments are still trying to figure out if there is a force at all. We won’t get to conservation of momentum until those forces are applied to accelerate an object. If that happens, then we must ask what reaction mass is being accelerated in the opposite direction. If the effects are indeed genuine, then new physics is being discovered or old physics is being applied in a new, unfamiliar context.
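The photon-rocket comparison is easy to verify: an ideal photon rocket produces a thrust of F = P/c, so a kilowatt of radiated light yields only a few micronewtons, far below the claimed thrust-to-power ratio:

```python
# Verify the photon-rocket comparison: ideal photon thrust is F = P / c.
c = 299_792_458.0                    # speed of light, m/s

photon_thrust_per_kW = 1000.0 / c    # newtons of thrust per kilowatt of light
claimed_thrust_per_kW = 1.2e-3       # the paper's 1.2 mN/kW, in N/kW

print(f"photon rocket: {photon_thrust_per_kW * 1e6:.2f} uN/kW")
print(f"ratio: {claimed_thrust_per_kW / photon_thrust_per_kW:.0f}x")  # prints "ratio: 360x"
```

The claimed 1.2 mN/kW is indeed about 360 times the 3.34 µN/kW of an ideal photon rocket, which is why the photon-rocket explanation is ruled out.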

For those claiming to have a theory to predict a new propulsion effect, it is necessary that those theories make testable numeric predictions. The predictions in Juan’s 2013 paper did not match its results. The analytical discussions in White’s 2016 experimental paper do not make theoretical predictions. The same is true with his 2015 theoretical paper: White (2015), “A discussion on characteristics of the quantum vacuum,” Physics Essays, vol. 28, no. 4, 496-502.

Short of having a self-consistent theory, any speculations should at least accurately echo the physics they cite. The explanations in White’s 2016 experimental paper, White’s 2015 theory paper, and even White’s 2013 report on the self-named “White-Juday Warp Field Interferometer” (White (2013), “Warp Field Mechanics 101,” Journal of the British Interplanetary Society, vol. 66, pp. 242-247) did not pass this threshold. I’ll leave it to other authors to elaborate on the 2015 and 2016 papers, while a review of the 2013 warp drive claims is available here: Lee & Cleaver (2014), “The Inability of the White-Juday Warp Field Interferometer to Spectrally Resolve Spacetime Distortions,” [physics.gen-ph].

In contrast, it is also important to avoid pedantic reflexes – summarily dismissing anything that does not fit what we already know, or assuming all of our existing theories are completely correct. For example, the observations that led to the Dark Matter and Dark Energy hypotheses do not match existing theories, but that evidence has been reliably documented. Using that data, many different theories are being hypothesized and tested. The distinction here is that both the proponents and challengers make sure they are accurately representing what is, and is not yet, known.

If a propulsion physics breakthrough is to be found, it will likely be discovered by examining relevant open questions in physics. A theoretical question relevant to non-rocket propulsion concepts (including the EmDrive) is ensuring conservation of momentum. One way to approach this is to look for phenomena in space that might serve as a reaction mass in lieu of propellant, perhaps like the quantum vacuum. Another approach is to dig deeper into the nature of inertial frames. Inertial frames are the reference frames upon which the laws of motion and the conservation laws are defined, yet it is still unknown what causes inertial frames to exist or whether they have any deeper properties that might prove useful.

Woodward Tests and Theory

In addition to the overtly touted EmDrive, there are about two dozen other space drive concepts of varying degrees of substance. One of them started out as a theoretical investigation into the physics of inertial frames and then advanced to make testable numeric predictions. Specifically, I’m referring to what is now called the “Mach Effect Thruster” concept of James F. Woodward, which dates back at least to this article:

Woodward, James F. (1990), “A new experimental approach to Mach’s principle and relativistic gravitation,” Foundations of Physics Letters, vol. 3, no. 5, pp. 497-506.

A more in-depth and recent publication on these concepts is available as:

Woodward, James F. (2013) Making Starships and Stargates: The Science of Interstellar Transport and Absurdly Benign Wormholes. Springer Praxis Books.

Experiments have been modestly underway for years, including three recent independent replication attempts by George Hathaway in Toronto, Canada; Martin Tajmar in Dresden, Germany; and Nembo Buldrini in Wiener Neustadt, Austria. A workshop was held on September 20-23, 2016, in Estes Park, Colorado, to review these findings. I understand from an email conversation with Jim Woodward that these reports and workshop proceedings are now undergoing peer review for likely publication early in 2017.

The main point here, by citing just this one other example, is that there are other approaches beyond the highly publicized EmDrive claims. It would be a disservice to our readers to let a media fixation with one theme blind us to alternatives.


If either the EmDrive or the Mach Effect Thruster is indeed genuine, then new physics is being discovered or old physics is being applied in a new, unfamiliar context. Either would be profound. Today it is premature to assert that any of these effects are genuine, or conversely, to flatly dismiss such propulsion ambitions as impossible. When the discussions exclude pedantic disdain and wishful interpretations, and are limited to people who have either the education or the experience in related fields, one encounters multiple, even divergent, perspectives.

Next, even if new physics-to-engineering is emerging, it is premature to assess its utility. The number of factors that go into deciding whether one technology has an advantage over another far exceeds the data yet available. Recall that the performance of the first aircraft, jet engine, transistor, etc., were all tiny examples of what those breakthroughs evolved to become. Reciprocally, we tend to forget all the failed claims that have faded into obscurity. We just do not know enough today, pro or con, to judge.

I realize the urge within human behavior for fast, definitive answers that we can act on. This lingering uncertainty is aggravating, even more so when peppered with distracting hype or dismissive disdain. To get to the underlying reality, we must continue with a focus on the fidelity of the methods to produce reliable results, rather than jumping to conclusions on the implications.

What to Do About It

If you want definitive answers, then we must improve the reliability of the methods and data, and remain patiently open for the results to be as they are, good news or bad news. I alluded earlier to the broken tactic of trying to get answers with fast and cheap experiments. How many inadequate experiments, over how many years, does it take before we change our tactics? I’ve had this debate more than once with potential funding sources, and I hope they are reading now to see… “I told ya so!” Sorry, I could not resist that human urge to emotionally amplify a well-reasoned point. To break the cycle of endlessly not doing the right things to get a definitive answer, we must begin a more in-depth experimental program using qualified and impartial labs, plus qualified and impartial analysts. Granted, those types of service providers are not easy to find, with impartiality the hardest to come by. Also, it might take three years to get a reliable answer, which is at least better than 14 years. And the trustworthy experiments will not be cheap, but they will quite likely cost far less than the aggregate spent on the repeated ‘cheap’ experiments. If any of those prior funding sources (or new ones) are reading this and finally want trustworthy answers, contact us. Tau Zero stands ready to make arrangements with suitable labs and analysts to conduct such a program.

And what if we do discover a breakthrough? In that case, we recommend distinguishing two themes of research, one from an engineering point of view to nudge the effect into a useful embodiment, and another from an academic point of view, to fully decipher and compare the new effects to physics in general. In both those cases we need to:

1. Characterize the parameters that affect the effects. Instead of just testing one design, vary the parameters of the device and the test conditions to get enough information to work with.

2. Deduce mathematical models from that more complete set of information.

3. Apply those models to (a) assess scalability to practical levels, and (b) explore the new phenomena and their relation to other fundamental physics.

4. On all of the above, conduct and publish the research with a focus on the reliability of the findings rather than on their implications.

For those of you who are neither researchers nor funding sources, what should you do? First, before reposting an article, take the time to see if it offers new and substantive information. If it turns out to be hollow click-bait, do not share it. If it has both new information and meaningful details, then share it. Next, as you read various articles, notice which sources provide the kind of information that helps you understand the situation. Spend more time with those sources and avoid those that do not.

Regarding questionable press stories, I’m not sure yet what to make of this: “The China Academy of Space Technology (CAST), a subsidiary of the Chinese Aerospace Science and Technology Corporation (CASC) and the manufacturer of the Dong Fang Hong satellites, has held a press conference in Beijing explaining the importance of the EmDrive research and summarizing what China is doing to move the technology forward.” Some stories claim there is a prototype device in orbit. If true, I would expect to see at least one photo of the device being tested in space. But we’ll see…

When faced with uncertain situations and where the data is unreliable, the technique I use to minimize my biases is to simultaneously entertain conflicting hypotheses, both the pro and con. Then, as new reliable information is revealed, I see which of those hypotheses are consistent with that new data. Eventually, after enough reliable data has accrued, the reality becomes easier to see.


The cited devices have gone by multiple names (e.g. EmDrive, EM Space Drive; Mach Effect Thruster, Mach-Lorentz Thruster), and the versions used in this article are the ones with the greatest number of Google search hits.