NASA Selects Two Asteroid Missions

by Paul Gilster on January 6, 2017

Among the five finalists for NASA’s Discovery program, I had become attached to the Near Earth Object Camera (NEOCam), whose purpose was to expand our catalog greatly, with the potential, according to mission backers, of finding ten times more NEOs than we’ve found to date. We’ll see if NEOCam has a future (I’ve just learned that it has been given extended funding for an additional year by NASA), but for now NASA has announced two other Discovery-class missions, both of which have objectives among the asteroids.

Lucy, scheduled for a launch in the fall of 2021, is to be a robotic mission with the goal of exploring six of the Jupiter Trojan asteroids. The Trojans share Jupiter’s orbit, clustering in two swarms around the planet’s L4 and L5 Lagrangian points. Over 6000 Jupiter Trojans are now known, but the population is thought to be vast, with as many as 1 million Trojans larger than 1 kilometer in diameter. As to their origin, there is much to learn. They may be captured asteroids or comets, or as this short NASA video explains, even Kuiper Belt Objects.

From the standpoint of Solar System evolution, the Trojans make for interesting science. They’re relics of the primordial material of the outer system, and I see that principal investigator Harold F. Levison cites the mission’s name in connection with another Lucy, the fossil fragments that have been so significant in our understanding of human development. We’ll see if this Lucy gets as much public attention as its namesake, which acquired its name from the Beatles song ‘Lucy in the Sky with Diamonds,’ played at the recovery site in Ethiopia. Breaking out the Sgt. Pepper album on this Lucy’s arrival at its first target seems a natural.

There are connections between the Lucy effort and the highly successful New Horizons mission, in the form of later versions of the familiar Ralph and LORRI science instruments, and several members of the Lucy mission team worked on New Horizons as well. Lucy also benefits from the contributions of several members of the OSIRIS-REx team, the latter a robotic spacecraft now on its way to a rendezvous with asteroid Bennu.


Image: (Left) An artist’s conception of the Lucy spacecraft flying by the Trojan Eurybates – one of the six diverse and scientifically important Trojans to be studied. Trojans are fossils of planet formation and so will supply important clues to the earliest history of the solar system. (Right) Psyche, the first mission to the metal world 16 Psyche, will map the asteroid’s features, structure, composition, and magnetic field, and examine a landscape unlike anything explored before. Psyche will teach us about the hidden cores of the Earth, Mars, Mercury and Venus.
Credit: SwRI and SSL/Peter Rubin.

The other mission is Psyche, dedicated to a single asteroid of that name that appears to be the survivor of an early collision with another object that violently disrupted a protoplanet. About 210 kilometers in diameter, 16 Psyche is thought to be composed mostly of metallic iron and nickel, a composition similar to the Earth’s core. We seem to be looking at what would have become the core of a Mars-sized planet, now without its outer rocky layers. Thomas H. Prettyman, a co-investigator on the Psyche mission, explains:

“Psyche is thought to be the exposed core of a planetary embryo – perhaps like Vesta – that initially melted and later cooled to form a central metallic core, silicate mantle, and basaltic crust. The outer layers may have been removed in a violent collision, leaving the core exposed. Psyche will provide a close-up look at a planetary core, providing new insights into the evolution and inner workings of terrestrial planets.”

The robotic Psyche mission will launch in the fall of 2023, with arrival at 16 Psyche in 2030 after two gravity assists, one from an Earth flyby, the second from a flyby of Mars. Both missions have this in common: They target the development of the early Solar System, one by observing the remnants of formation among the Jupiter Trojans, the other by examining the interior of what might have become a planet. Let’s hope for the kind of success for both that we saw in earlier Discovery missions like MESSENGER and Dawn. OSIRIS-REx, meanwhile, is on course for a 2018 rendezvous with asteroid Bennu, with sample return to follow.



Pinpointing a Fast Radio Burst

by Paul Gilster on January 5, 2017

Fast Radio Bursts (FRBs) are problematic. Since their discovery about a decade ago, the question has been their place of origin. These transient pulses last no more than milliseconds, yet they emit enormous energies, and we’ve had only the sketchiest idea where they came from. Now we learn, from an announcement at the 229th meeting of the American Astronomical Society in Grapevine, Texas, that a repeating source of FRBs has been spotted. That makes tracing the burst back to its source and characterizing it an ongoing proposition.

“We now know that this particular burst comes from a dwarf galaxy more than three billion light-years from Earth,” says Shami Chatterjee, of Cornell University. “That simple fact is a huge advance in our understanding of these events.” Papers on the work are being presented in Nature as well as Astrophysical Journal Letters.

Research behind the investigation of FRB 121102 has been mounted by an international team of astronomers drawing on a spread of instruments, which matters because a single-dish detection cannot pinpoint the object’s location. Because it repeats, this burst allows telescopes separated by large distances to home in on it and investigate it at various wavelengths.

The FRB was discovered at Arecibo, but observations with the Very Large Array in New Mexico have found a total of nine radio bursts from this source. Observations using the 8-meter Gemini North instrument on Mauna Kea have pinpointed the host galaxy, which comes in at a redshift value that puts its distance at over 3 billion light years. Between Arecibo, the VLA and the European VLBI Network (EVN), astronomers have now been able to determine the position of the burst to a fraction of an arcsecond, more than 200 times more precise than previous measurements. A persistent source of weak radio emission has also been found in the same region.
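A back-of-envelope sketch can make both numbers concrete: the milliarcsecond-scale positions VLBI baselines allow, and the distance implied by the host’s redshift. The observing frequency, baseline length and redshift below are rough assumptions for illustration, not values taken from the papers:

```python
# Rough check, not from the papers: the angular scale a VLBI network can
# resolve, and the distance implied by the host galaxy's redshift.
# Assumed values: ~1.7 GHz observing frequency, ~8000 km longest baseline,
# and a host redshift of roughly 0.19.
import math
from astropy.cosmology import Planck15
import astropy.units as u

C = 3.0e8                          # speed of light, m/s
freq_hz = 1.7e9                    # assumed observing frequency
baseline_m = 8.0e6                 # assumed longest EVN baseline

theta_rad = (C / freq_hz) / baseline_m          # diffraction-limited scale
theta_mas = math.degrees(theta_rad) * 3.6e6     # radians -> milliarcseconds
print(f"fringe spacing ~ {theta_mas:.1f} mas")  # a few milliarcseconds

z = 0.19                                        # approximate host redshift
d_gly = Planck15.luminosity_distance(z).to_value(u.lyr) / 1e9
print(f"luminosity distance ~ {d_gly:.1f} billion light years")  # > 3
```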


Image: Gemini composite image of the field around FRB 121102 (indicated). The dwarf host galaxy was imaged, and spectroscopy performed, using the Gemini Multi-Object Spectrograph (GMOS) on the Gemini North telescope on Maunakea in Hawai’i. Data were obtained on October 24-25 and November 2, 2016. Credit: Gemini Observatory/AURA/NSF/NRC.

Remember that until this event, only the Parkes Radio Telescope in Australia had detected FRBs, and only a small number at that. Now we are talking not only about locating the source in visible light but associating it with a radio source. Benito Marcote works at JIVE (Joint Institute for VLBI in Europe), which processes data for the European VLBI Network, a network whose stations include the 100-meter radio telescope at Effelsberg, Germany.

“With a bit of luck,” says Marcote, “we were able to detect bursts from FRB 121102 with the EVN and now we know that the origin of the bursts is right on top of the persistent radio source… We think that the bursts and the continuous source are likely to be either the same object or that they are somehow physically associated with each other.”

This FRB, at least, is now known incontrovertibly to have an origin far outside our own galaxy, although the galaxy itself is a surprise. It’s a small dwarf galaxy younger than ours, one that may be able to produce more massive stars than we see in the Milky Way. One possibility is that FRB 121102 is from the collapsed remnant of such a star. Shriharsh Tendulkar (McGill University) is lead author of one of the papers studying the event.

“The host galaxy for this FRB appears to be a very humble and unassuming dwarf galaxy, which is less than 1% of the mass of our Milky Way galaxy. That’s surprising. One would generally expect most FRBs to come from large galaxies which have the largest numbers of stars and neutron stars — remnants of massive stars. This dwarf galaxy has fewer stars, but is forming stars at a high rate, which may suggest that FRBs are linked to young neutron stars. There are also two other classes of extreme events — long duration gamma-ray bursts and superluminous supernovae — that frequently occur in dwarf galaxies, as well. This discovery may hint at links between FRBs and those two kinds of events.”

A burst originating from the region near a massive black hole in the galaxy’s core — an active galactic nucleus emitting jets of material — is a candidate for FRB 121102. And as data continue to accumulate, any periodicity found in future observations may point to the involvement of a rotating neutron star. Further entangling the story is a key question: Can we assume that all FRBs we’ve thus far detected have the same origins, or are we actually detecting more than one kind of cosmic event? Given that FRB 121102 is the only one of 18 known FRBs that repeats, we may be looking at different physical processes at work.

The papers are Chatterjee et al., “A direct localization of a fast radio burst and its host,” Nature 541 (5 January 2017), 58-61 (abstract); Tendulkar et al., “The Host Galaxy and Redshift of the Repeating Fast Radio Burst FRB 121102,” Astrophysical Journal Letters Vol. 834, No. 2 (4 January 2017) (abstract); Marcote et al., “The Repeating Fast Radio Burst FRB 121102 as Seen on Milliarcsecond Angular Scales,” Astrophysical Journal Letters Vol. 834, No. 2 (4 January 2017) (abstract).



Hitchhiker to the Outer System?

by Paul Gilster on January 4, 2017

Years ago at the Aosta conference on interstellar studies, Greg Matloff told attendees about an interesting way to travel the Solar System. If the goal is to get to Mars, for example, it turns out that there are two objects — 1999 YR14 and 2007 EE26 — that pass close to both Earth and Mars, each with a transit time of about a year. Let me quote from Greg’s paper:

Since orbital characteristics are known for a few thousand NEOs, it is reasonable to assume that about 0.1% of the total NEO population could be applied for Earth-Mars or Mars-Earth transfers during the time period 2020-2100. Because a few hundred thousand NEOs must exist that are greater in dimension than 10m, hundreds of small NEOs must travel near-Hohmann trajectories between Earth and Mars or Mars and Earth. It seems likely that a concerted search will find one or more candidate NEOs for shielding application during any opposition of the two planets.

The notion is provocative. Could we somehow hitch a ride on one of these objects, taking advantage of its capabilities as a radiation shield by digging into its surface and exploiting its resources along the way? And maybe we can look further than Mars. A NEO called 2000 WO148 swung by the Earth in 2014 en route to an encounter with the main belt asteroid Vesta in 2043. The question becomes, are there other NEOs on interesting trajectories that might be of use in our explorations?
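To see why “about a year” is the natural transit time, consider a minimal two-body sketch (my own arithmetic, using standard constants) of the Earth-Mars Hohmann transfer that these near-Hohmann NEO trajectories approximate:

```python
# Minimal sketch of the Earth-Mars Hohmann transfer time that a
# "near-Hohmann" NEO trajectory approximates. Standard two-body formula;
# circular, coplanar orbits assumed.
import math

MU_SUN = 1.327e20                 # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11                     # astronomical unit, m

r_earth = 1.000 * AU
r_mars = 1.524 * AU
a_transfer = (r_earth + r_mars) / 2            # transfer-ellipse semi-major axis
t_transfer = math.pi * math.sqrt(a_transfer**3 / MU_SUN)  # half an orbital period

days = t_transfer / 86400
print(f"transit time ~ {days:.0f} days (~{days / 365.25:.2f} yr)")  # ~259 days
```

A real NEO ride would deviate from this ideal, but the order-of-a-year timescale is set by the transfer ellipse.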

I was reminded of the NEO hitchhike idea this morning while reading about another interesting object. NEOWISE detected 2016 WF9 in late November of 2016. Here we have a true sightseer. 2016 WF9 approaches the orbit of Jupiter at its furthest point from the Sun, and then, over a period of just under five years, swings inward, coming in past the main asteroid belt and the orbit of Mars to move just inside the orbit of the Earth before heading back out.

We get closest approach to Earth’s orbit on February 25th of this year, although at 51 million kilometers, this object hardly poses a danger to our planet, nor will it in the foreseeable future. Whether 2016 WF9 is an asteroid or a comet is not known. What we do know is that it is between 0.5 and 1 kilometer across and has low reflectivity, as do many dark objects in the main asteroid belt. Although in a comet-like orbit, 2016 WF9 lacks the dust and gas we normally associate with a comet. James ‘Gerbs’ Bauer (JPL) is deputy principal investigator for NEOWISE:

“2016 WF9 could have cometary origins. This object illustrates that the boundary between asteroids and comets is a blurry one; perhaps over time this object has lost the majority of the volatiles that linger on or just under its surface.”


Image: An artist’s rendition of 2016 WF9 as it passes Jupiter’s orbit inbound toward the sun. Credit: NASA/JPL-Caltech.

Another object recently spotted by NEOWISE is indeed thought to be a comet, releasing dust as it nears the Sun. In the first week of the new year, C/2016 U1 NEOWISE will be in the southeastern sky shortly before dawn as seen from the northern hemisphere, reaching perihelion on January 14 inside the orbit of Mercury. Although it’s impossible to say for sure, it may become bright enough to be visible in binoculars, according to this JPL news release.

Since NEOWISE was reactivated in December of 2013, it has discovered either 9 or 10 comets, depending on what 2016 WF9 turns out to be. If 2016 WF9 is found to be an asteroid, it would be the 100th discovered since reactivation. The original mission, the asteroid- and comet-hunting portion of the Wide-Field Infrared Survey Explorer (WISE) mission, discovered 34,000 asteroids. Thirty-one of NEOWISE’s discoveries pass within 20 lunar distances of Earth, and 19 are thought to be more than 140 meters in size but reflect less than 10 percent of incident sunlight. These are objects as dark as new asphalt, absorbing most visible light but re-emitting energy at infrared wavelengths that the NEOWISE detectors can readily study.
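The sizing logic behind figures like these can be illustrated with the standard relation between an asteroid’s visible absolute magnitude, geometric albedo, and diameter. The magnitude in this sketch is a hypothetical example, not a measured value for 2016 WF9; the point is that a dark surface implies a larger body for the same brightness:

```python
# Illustrative use of the standard asteroid size relation:
#   D(km) = 1329 / sqrt(p_V) * 10^(-H/5)
# where H is the visible absolute magnitude and p_V the geometric albedo.
# The H value below is a made-up example, not 2016 WF9's actual magnitude.
import math

def diameter_km(H, p_V):
    """Diameter in km from absolute magnitude and geometric albedo."""
    return 1329.0 / math.sqrt(p_V) * 10 ** (-H / 5.0)

H = 19.0  # hypothetical absolute magnitude
for p_V in (0.05, 0.25):  # asphalt-dark vs. moderately bright surface
    print(f"p_V = {p_V:.2f}: D ~ {diameter_km(H, p_V):.2f} km")
```

For the same H, the dark case comes out near a kilometer and the bright case near half that, which is why albedo uncertainty translates directly into the quoted 0.5 to 1 kilometer range.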

For those interested in digging into these matters further, the NEOWISE data release, with access instructions and supporting documentation, is here. And on the fictional side, Kim Stanley Robinson’s novel 2312 looks at terraformed asteroids in terms of both habitats and intra-system transportation in an evolving space infrastructure.



Close Look at Recent EmDrive Paper

by Paul Gilster on January 3, 2017

The concluding part of the Tau Zero Foundation’s examination of what is being called the ‘EmDrive’ appears today. It’s a close analysis of the recent paper by Harold ‘Sonny’ White and Paul March in the Journal of Propulsion and Power. Electrical engineer George Hathaway runs Hathaway Consulting Services, which has worked with inventors and investors since 1979 via an experimental physics laboratory near Toronto, Canada. Hathaway’s concentration is on novel propulsion and energy technologies. He has authored dozens of technical papers as well as a book, is a patent-holder and has hosted and lectured at various international symposia.

Hathaway Consulting maintains close associations with advanced physics institutions and universities in the US and Europe. Those familiar with our Frontiers of Propulsion Science book will know his paper on gravitational experiments with superconductors, which closely examined past methods and cast a skeptical eye on early claims of anomalous forces (an earlier paper, “Gravity Modification Experiment using a Rotating Superconducting Disk and Radio Frequency Fields,” appeared in Physica C). Like Marc Millis, Hathaway calls for continued testing of EmDrive concepts and increased rigor in experimental procedures.

By George Hathaway

Comments on “Measurement of Impulsive Thrust from a Closed Radio Frequency Cavity in Vacuum” (White, March et al., published online in the Journal of Propulsion and Power, November 17, 2016).

Introduction

White et al are to be congratulated for attempting to measure the small thrusts allegedly produced by a novel thruster whose operating mechanism is not only not understood but purportedly violates fundamental physical laws. They have made considerable effort to reduce the possibility of measurement artifacts. However, it appears that there are some fundamental problems with the interpretation of the measurement data produced by their thrust balance. This document will analyse the measurement procedure and comment on the interpretation.

The following comments roughly follow the order of the original text by White et al.

Analysis and Comments

1. Null Test Orientation

Tests were performed in both the “Forward” and “Reverse” direction as well as in a “Null” direction where the alleged force vector pointed towards the rotational axis of the balance (pg 23). Apparently no Null tests were performed with the force vector pointing away from the balance axis nor were any tests performed with the “test article” force vector pointing up or down. These additional orientations would have provided much needed control data given the magnitude of the allegedly purely thermal signal seen in their “Null” test.

In addition, the Forward and Reverse tests should also have been performed by just re-orienting the test article whilst keeping all other rotating components untouched. In this type of control experiment, the spurious effect of the rest of the components is largely eliminated.

2. Axis Verticality

An optical bench was used as a platform to mount the vacuum chamber containing the balance. It is not stated whether the optical bench was itself mounted on pneumatic legs; however, this is usually the case with optical benches. The correct operation of any balance of this geometry requires that the pivots around which the balance arm rotates be perfectly aligned vertically one above the other (for a 2-pivot system). When the pneumatic legs of the table are inflated, the axis of the balance typically cannot be kept perfectly vertical, as required to obtain the maximum balance sensitivity and repeatability. There is no indication in the text of how such verticality was assured throughout the test campaign, especially since the balance was housed in a large vacuum chamber.
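A rough estimate shows why this matters. All numbers below are assumptions for illustration, not values from the paper; the point is the order of magnitude of tilt that can mimic a microNewton-scale thrust:

```python
# Illustrative estimate (all values assumed, not the paper's): compare the
# gravitational torque from a slightly tilted rotation axis against the
# torque produced by a ~100 uN thrust on the balance arm.
import math

m_imbalance = 1.0    # kg, effective mass imbalance on the arm (assumption)
r = 0.3              # m, effective radius of that imbalance (assumption)
g = 9.81             # m/s^2
F_thrust = 100e-6    # N, the claimed thrust scale
L_arm = 0.3          # m, thrust moment arm (assumption)

tau_thrust = F_thrust * L_arm                  # ~3e-5 N*m
# Tilt angle at which gravity on the imbalance produces the same torque:
tilt = math.asin(tau_thrust / (m_imbalance * g * r))
print(f"equivalent tilt ~ {math.degrees(tilt) * 3600:.1f} arcseconds")
```

Under these assumptions a tilt of only a couple of arcseconds produces a torque equal to the entire claimed signal, which is why verticality needs to be demonstrated, not assumed.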

3. Flexural Bearings

There is no information presented to indicate whether the linear flexure bearings were operating within the manufacturer’s axial loading specification, especially when additional ballast weight was required for the non-“split configuration” tests. It would also have been useful to see data on the natural frequency of the balance when loaded with the equivalent weights used in the thrust tests, given the damping method described. Also missing is an explanation of why none of the traces of the optical displacement sensor return to starting baseline after the calibration and “thrust” pulses. There seems to be an inherent bearing stiction problem preventing the balance from returning to its original baseline after a test. This is not due to general balance drift and is typical for overloaded bearings of this type. Long-term balance stability/drift plots would be useful.
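For reference, the loaded natural frequency follows from the usual torsion-balance relation f_n = (1/2π)√(k/I). The stiffness and inertia values in this sketch are assumptions, not the paper’s; it simply shows how ballast slows the balance’s response:

```python
# Illustrative calculation (assumed stiffness and inertia, not the paper's
# values) of how added ballast lowers a torsion balance's natural frequency:
#   f_n = (1 / (2*pi)) * sqrt(k / I)
import math

k = 0.02           # N*m/rad, combined flexure stiffness (assumption)
I_bare = 0.5       # kg*m^2, balance arm alone (assumption)
I_ballasted = 2.0  # kg*m^2, with ballast added (assumption)

for label, I in (("bare", I_bare), ("ballasted", I_ballasted)):
    f_n = math.sqrt(k / I) / (2 * math.pi)
    print(f"{label}: f_n ~ {f_n * 1000:.0f} mHz, period ~ {1 / f_n:.0f} s")
```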

4. Electrostatic Calibrator

Evidently the calibration of the electrostatic “fin” method of applying calibration pulses was performed using an electronic balance (Scientech SA-210). Unfortunately no data was provided to show exactly how this calibration was performed. In particular, no data was provided to show that there was no electrostatic interaction between the high-voltage calibration voltages and the operation of the balance. Since the Scientech balance properly reports vertical forces only, was care taken to translate these vertical forces into the horizontal calibration forces required by the thrust balance? It would have been useful for the authors to have employed a second, independent horizontal force calibration to verify the Scientech method such as a strain gauge-type force gauge with interpolation.

5. Vacuum System

The authors note that although turbomolecular pumps were used to evacuate the vacuum chamber, they caused no artificial vibrational signals. Turbo pumps require mechanical backing pumps to exhaust them to atmosphere. These mechanical pumps are typically connected to the turbo pumps via thick, stiff vacuum hoses, which can transmit backing pump vibrations to the turbo pumps, which in turn are usually rigidly connected to the vacuum chamber. Was this source of vibration taken into account as well?

Additionally, no evidence is provided to show how the interior of the test article was evacuated coincidentally with the chamber evacuation. This is a different concern to that stated in the paper (pp 27, 28) regarding outgassing of the dielectric. The concern here is that if the test article cannot be fully evacuated coincidentally with the chamber evacuation, residual gas inside the test article can possibly escape during the time of a test, causing spurious force signals. Moreover, if the test article is rather well-sealed, the shell of the test article, especially the end plates, could expand upon evacuation of the chamber due to air trapped inside prior to chamber pump-down. This would alter the center of gravity (COG) of the balance causing a spurious signal, especially if the trapped air is heated upon application of RF power of tens of watts.

6. Liquid Metal Connections

“Galinstan screw and socket” rotary connections were employed to prevent any unwanted torques from upsetting the balance due to hard-wire connections between the rotating test article and the power supplies, analytical instruments, etc. fixed to the lab frame. There must have been quite a few of these connections for DC power, Forward and Reverse RF power, various tuning and drive signals, etc. The authors failed to indicate how these connections were arranged geometrically. The ideal mounting arrangement is for such liquid metal connections to be stacked one on top of the other, exactly coaxial with the main rotational axis of the balance. It seems unlikely that the design constraints of the balance within the chamber shown would accommodate this tall a stack of connections. Thus it is assumed that these connections were not arranged coaxially with the balance axis. If so, there could be spurious side thrusts generated by Ampère forces on the currents set up within the galinstan. This should have been tested and reported.

7. Thermal Expansion and Control Tests

The White et al paper contains considerable information on the effects of thermal expansion of the various test article components. It would be beneficial to see control experiments in which the test article is replaced by a suitable control article, such as a purely cylindrical cavity of approximately the same dimensions, materials and construction, which supports similar RF modes as the frustum-shaped (truncated conical) test article.

According to pg 10, the heat sink unsurprisingly is the greatest source of heat during operation. It would be useful to perform control tests by separating the heat sink mechanically from the rest of the rotating components in such a way as to allow it to be oriented in any direction relative to the rest of the components to see the effect on the optical displacement signal.

Evidently, the test article assembly produces a relatively large thermal “thrust” signal as measured by the optical displacement sensor. The only explanation given is that a change in center of gravity (COG) due to thermal expansion of various components causes a spurious torque on the balance. In fact, the presence of a thrust signal due to thermal effects is only inferred, not proven. Not only that, but it is stated (pg 10) that this thermal effect causes the balance arm to shift “with the same polarity as the impulsive signal” in Forward or Reverse tests. Here also it is implied but not proven that an “impulsive thrust” signal is even present (see below). The authors need to perform control tests to ascertain with certainty that there is indeed a “thermal thrust” before assuming without proof that it causes the balance arm to shift “with the same polarity”. One such test would be to construct a “control article” of the same shape, material and weight as the test article but with guaranteed zero “impulsive thrust” and substitute it for the test article. Instead of powering it with an RF signal, put a resistor or light bulb inside to simulate the thermal characteristics.

This lack of proof of the presence of either a thermal thrust or an impulsive thrust thus precludes statements such as “the thermal signal in the vacuum runs is slightly larger than the magnitude of the impulsive signal [due to convective issues]”.

8. Confirmation Bias in Thrust Analysis

The entire edifice of the analysis of the signals from the optical displacement sensor rests on the assumption of the correctness and correct application of Fig. 5 to the present test situation. Fig. 5 shows an ad-hoc superposition of two assumed signals, namely a thermal signal and a pulse (impulse) signal. This is presented initially as a “conceptual simulation” and is reasonable in its own right. However, it then takes on the value of an accepted fact throughout the rest of the paper. Fig. 5 represents what the authors expect to see in the signal from the optical displacement sensor. When they see signals from this sensor which vaguely look like the expected superposition signal as represented in Fig 5, they assume that Fig 5 must actually represent what is going on in their system under test. This is a clear inductive reasoning fallacy called Confirmation Bias. This problem leads to baseless assumptions about the timing of the onset of expected effects after application of the stimulus (RF power), their proper shapes, and the joint amplitudes and thus the individual (impulse vs thermal) magnitudes.

In particular, the authors assume that the “true” impulse signal from the test article will look just like the assumed signal shown in Fig. 5, namely that it will look just like their calibration signal. This will include an initial fast-rising but well-behaved exponential slope up to a flat-topped constant thrust followed by a slower exponential falling section back to baseline. Next they assume that the thermal signal will be a well-behaved double exponential starting exactly at the same time as the impulse signal, also as shown in idealized form in Fig. 5. An additional assumption made by the authors is that there are no other spurious effects which might be represented as additional curves in Fig.5. The simple addition of the amplitudes of the thermal and impulse signals produces the resulting superposition signal. This signal is used as a template against which the actual sensor signal is compared. By stretching the imagination, the sensor signal can be force-fit onto the idealized superposition signal and, voila, the simple analysis can proceed to extract the magnitude of the true impulse signal.
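To make the template concrete, here is a sketch of the kind of idealized superposition Fig. 5 appears to describe. The amplitudes, time constants and power-on window are assumptions for illustration only, not the paper’s values:

```python
# Sketch of the idealized Fig. 5-style superposition: a fast-exponential
# "impulse" plateau plus a slower "thermal" drift, both switched on with
# RF power at t = 0. All shapes and constants are assumed for illustration.
import numpy as np

t = np.linspace(0, 100, 1001)          # seconds, arbitrary span
on = (t >= 0) & (t <= 60)              # RF power on for the first 60 s

impulse = 100e-6 * (1 - np.exp(-t / 3)) * on    # fast rise to a flat top
thermal = 200e-6 * (1 - np.exp(-t / 40)) * on   # slower thermal drift
superposition = impulse + thermal

# idealized decay back toward baseline after power-off
off = t > 60
i_off = np.searchsorted(t, 60)
superposition[off] = superposition[i_off] * np.exp(-(t[off] - 60) / 20)

# The paper's analysis amounts to force-fitting the measured trace onto a
# template like this and reading the "impulse" amplitude off a baseline shift.
print(f"peak deflection ~ {superposition.max() * 1e6:.0f} (uN-equivalent)")
```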

This method is applied to all the sensor signals except that in Fig. 10 showing the “split configuration”.

There are additional problems with this force-fitting routine. For example, in Fig. 7, which is analysed in some detail, the initial rising slope of the displacement sensor signal should be an asymptotically flattening exponential according to Fig. 5. But it is clearly an asymptotically rising signal, perhaps exponential in shape. About half-way through the RF power application period, this rising slope suddenly breaks into a markedly linear (rising) slope. According to Fig. 5, this part of the signal should show an asymptotically decreasing (flattening) exponential slope, definitely not a linear slope. The authors even use linear curve fitting in this region, evidence that even they do not consider this part of the slope exponential. All the optical displacement signals shown in the other relevant figures (Figs. 13, 16) show this characteristic as well.

Then a sleight-of-hand is used to tease out the contributions of the assumed thermal vs the impulsive signal. According to pg. 11, “the characteristics of the curve [superposition curve in Fig. 5] after this discontinuity [the break in slope of the rising exponential due to the onset of steady thrust] are used as the baseline to be shifted down so that the line projects back to the “origin” or moment when RF power is activated.” The amount of this baseline shift is taken to represent the “true” impulse signal. Naturally, this assumes that the onset of thrust (and the thermal signal) are all coincident exactly with the application of RF power (and are all of the ideal shape according to Fig. 5). According to Fig. 7, it also assumes that a straight line can be used as this “baseline shift” rather than the more likely broken exponential shaped line depicted in Fig. 5. This has the added bonus of arbitrarily increasing the “calculated” impulsive thrust.

Pg. 13 introduces a “Slope Filtering: Alternate Approach” to the force-fitting approach discussed above, whereby the time derivative of the displacement sensor signal is plotted. This procedure produces a curve of magnitudes of slopes (Fig. 9). Sadly, this method starts off with the same assumptions as the above approach. It compounds these problems by invoking an arcane procedure whereby the parts of the original displacement sensor curve with slopes lower than a particular arbitrary (and unstated) value are removed, and what’s left of the curve allegedly represents the “true” impulse curve. None of this procedure is shown in detail and only the final result is given which, conveniently for the authors, is within ~20% of the previous analysis method. Of course, this convenient coincidence is entirely dependent on the arbitrary slope magnitude removal value.
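For concreteness, here is a sketch of what such slope filtering does on a synthetic trace, and how the unstated cutoff drives the answer. The signal parameters are invented for illustration:

```python
# Sketch of "slope filtering" on a synthetic displacement trace:
# differentiate, then discard regions whose slope magnitude falls below an
# arbitrary cutoff. Changing the cutoff changes the recovered "thrust".
import numpy as np

t = np.linspace(0, 100, 1001)
# synthetic trace: fast "impulse" rise plus slow "thermal" drift (invented)
trace = 100e-6 * (1 - np.exp(-t / 3)) + 200e-6 * (1 - np.exp(-t / 40))

slope = np.gradient(trace, t)
for cutoff in (1e-6, 5e-6):            # per-second slope cutoffs (arbitrary)
    kept = np.abs(slope) >= cutoff
    recovered = trace[kept].max() - trace[kept].min() if kept.any() else 0.0
    print(f"cutoff {cutoff:.0e}: recovered amplitude ~ {recovered * 1e6:.0f} uN")
```

On this synthetic example the two cutoffs return amplitudes differing by nearly a factor of two, which is exactly the objection: the result is a function of an arbitrary, unreported parameter.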

9. Split Configuration

On pg. 15 we learn that by splitting the test article from the rest of the electronics (one on each end of the balance arm), the response time is reduced as expected due to the reduction in ballast weight required, and that the “true” thrust amplitude has been reduced from 106 uN to 63 uN, all other things being equal! Additionally, the displacement sensor curve (Fig. 10) is completely different in shape from the non-split configuration tests. The only explanation proffered for this discrepancy is that “the thermal contribution…is smaller in magnitude compared to the impulsive signal.” No proof of the correctness of this statement is provided. Since the split and non-split configuration curves are so radically different, the authors chose not to apply either of the analysis methods discussed above. They arbitrarily take the amplitude of the displacement signal at the instant it starts an exponentially asymptotic downward slope as the correct point. Why not use a variant of this method and apply it to the non-split configuration? Because it would result in apparently unacceptably huge thrusts (e.g. ~260 uN at 60 W)!

10. Difference between Forward and Reverse Thrusts

Tables 2 and 3 allow us to compare “calculated” thrusts (using the ideal-curve force-fitting method discussed above) from Forward and Reverse non-split configurations. The Reverse thrusts are consistently lower than their Forward thrust counterparts. For example, for 60 W, average Forward thrusts are 108 uN vs 60 uN for Reverse thrusts. For 80 W, these numbers are 104 uN vs 71 uN. No explanation is given for these differences, nor for the fact that in the Forward configuration, the 80 W thrust is lower than the 60 W thrust.

11. Null Thrust Test

It is stated on pg. 23 that “The [COG] shift from thermal expansion causes a downward drift in the optical displacement sensor.” Why not an upward drift? There is no justification given for this statement as no control tests were performed to ascertain what the result of a purely thermal effect might be, expansion or otherwise.

Further, the authors state “The results from the null thrust testing show no impulsive element…only the thermal signal.” This is also an unproven statement since no purely impulsive or purely thermal signal has been positively identified in shape or amplitude. The authors appear to have forgotten the thermal curve they used in Fig. 5, namely a double exponential. There is no evidence for any exponential part of the supposedly “thermal only” curve of the Null Test in Fig. 18. It appears completely linear and if there is a slight hint of an exponential, it is in the wrong sense (asymptotically falling, not flattening)! Another hint as to the problem of assigning a purely thermal explanation of the curve in Fig. 18 is the fact that exactly at the time of shutting off the RF power, there is no thermal lag or overshoot: the linear slope breaks suddenly to become essentially flat.

The implication of the Null Thrust test is that the thermal signal apparently seen in the Null Test would be the same as that seen in the Forward and Reverse tests. If so, then the curve force-fitting routine discussed above is invalid as it assumes a double exponential thermal curve (Fig. 5).

The Null Thrust test depicted in Fig. 18 was run at 80 W RF power. The Reverse Thrust test in Fig. 16 run at 80 W shows an apparent thermal signal of approx. 70 uN using the force-fitting routine. For the same period, the Null Thrust test shows an apparent thermal signal of approx. 275 uN. This is a huge discrepancy begging for detailed explanation.

Conclusion

In addition to mechanical and related considerations, the authors’ methods of analysis of sensor data to derive thrusts rest on untenable grounds. Not only is there an assumption of the presence of only a “true” impulse signal as well as a thermal signal, there is an assumption that the observed signal can be broken down into just these two components and that amplitudes can be calculated based on an idealized superposition assumption. Therefore, until more control tests are performed allowing a more accurate method of estimating thrusts, no faith can be placed in the thrust magnitudes reported in the paper.



Uncertain Propulsion Breakthroughs?

by Paul Gilster on December 30, 2016

Now that the EmDrive has made its way into the peer-reviewed literature, it falls in range of Tau Zero’s network of scientist reviewers. Marc Millis, former head of NASA’s Breakthrough Propulsion Physics project and founding architect of the Tau Zero Foundation, has spent the last two months reviewing the relevant papers. Although he is the primary author of what follows, he has enlisted the help of scientists with expertise in experimental issues, all of whom also contributed to BPP, and all of whom remain active in experimental work. The revisions and insertions of George Hathaway (Hathaway Consulting), Martin Tajmar (Dresden University), Eric Davis (EarthTech) and Jordan Maclay (Quantum Fields, LLC) have been discussed through frequent email exchanges as the final text began to emerge. Next week I’ll also be presenting a supplemental report from George Hathaway. So is EmDrive new physics or the result of experimental error? The answer turns out to be surprisingly complex.

by Marc Millis, George Hathaway, Martin Tajmar, Eric Davis, & Jordan Maclay

It’s time to weigh in about the controversial EmDrive. I say “controversial” because of its profound implications if genuine, plus the lack of enough information with which to determine if it is genuine. A peer-reviewed article about experimental tests of an EmDrive was just published in the AIAA Journal of Propulsion and Power by Harold (Sonny) White and colleagues: White, H., March, P., Lawrence, J., Vera, J., Sylvester, A., Brady, D., & Bailey, P. (2016), “Measurement of Impulsive Thrust from a Closed Radio-Frequency Cavity in Vacuum,” Journal of Propulsion and Power (print version pending, online version here).

That new article, plus related peer-reviewed articles, were reviewed by colleagues in our Tau Zero network, including two who operate similar low-thrust propulsion test stands. From our reviews and discussions, I have reached the following professional opinions – summarized in the list below and then detailed in the body of this article. I regret that I can only offer opinions instead of definitive conclusions. That ambiguity is a significant part of this story and also merits discussion.

Overview

Technical

(1) The experimental methods and resulting data indicate a possible new force-producing effect, but not yet satisfying the threshold of “extraordinary evidence for extraordinary claims” – especially since this is a measurement of small effects.

(2) The propulsion physics explanations offered, which already assume that the measured force is real, are not sound.

(3) Experiments have been conducted on other anomalous forces, whose fidelity and implications merit comparable scrutiny, specifically Jim Woodward’s “Mach Effect Thruster.”

Implications

(1) If either the EmDrive or Mach Effect Thrusters are indeed genuine, then new physics is being discovered – the ramifications of which cannot be assessed until after those effects are sufficiently modeled. Even if it turns out that the effects are of minor utility, having new experimental approaches to explore unfinished physics would be valuable.

(2) Even if genuine, it is premature to assess the potential utility of these devices. Existing data only addresses some of the characteristics necessary to compare with other technologies. At this point, it is best to withhold judgment, either pro or con.

Pitfalls to Avoid

(1) The earlier repeated tactic, to attempt fast and cheap experimental tests, has turned out to be neither fast nor cheap. It’s been at least 14 years since the EmDrive first emerged (2002) and despite numerous tests, we still lack a definitive conclusion.

(2) In much the same way that thermal and chamber effects are obscuring the force measurements, our ability to reach accurate conclusions is impeded by our natural human behavior of jumping to conclusions, confirmation biases, sensationalism, and pedantic reflexes. This is part of the reality that also needs understanding so that we can separate those influences from the underlying physics.

Recommendations

(1) Continue scrutinizing the existing experimental investigations on both the EmDrive and Mach Effect Thrusters.

(2) To break the cycle of endlessly not doing the right things to get a definitive answer, begin a more in-depth experimental program using qualified and impartial labs, plus qualified and impartial analysts. The Tau Zero Foundation stands ready to make arrangements with suitable labs and analysts to produce reliable findings, pro or con.

(3) If it turns out that the effects are genuine, then continue with separate (a) engineering and (b) physics research, where the engineers focus on creating viable devices and the physicists focus on deciphering nature. In both cases:

  • Characterize the parameters that affect the effects.
  • Deduce mathematical models.
  • Apply those models to (a) assess scalability to practical levels, and (b) understand the new phenomenon and its relation to other fundamental physics.
  • On all of the above, conduct and publish the research with a focus on the reliability of the findings rather than on their implications.

Details

Pitfall 1 – The Fog of Want

Our decisions about this physics are influenced by behaviors that have nothing to do with physics. To ignore this human element would be a disservice to our readers. To get to the real story, we need to reveal that human element so that we can separate it from the rest of the data, like any good experiment. I’m starting off with this issue so that you are alert to its influences before you read the rest of this article.

As much as I strive to be impartial, I know I have an in-going negative bias on the EmDrive history. To create a review that reflects reality, rather than echoing my biases, I had to acknowledge and put aside my biases. Similarly, if you wish to extract the most from this article, you might want to check your perspectives. Ask yourself these three questions: (1) Do you already have an opinion about this effect and are now reading this article to see if we’ll confirm your expectation? (2) Do you want to know our conclusions without any regard to how we reached those conclusions? (3) Are you only interested in this EmDrive assessment, without regard to other comparable approaches?

If you answered “yes” to any of those questions, then you, like me, have natural human cognitive dysfunctions. To get past those reflexes, start by at least noticing that they exist. Then, take the time to notice both the pros and cons of the article, not just the parts you want to be true. Deciphering reality takes time instead of just listening to reflexive beliefs. It requires that one’s mind be open to the possibility you might be right and equally open to the possibility you might be wrong.

EmDrive History

This history is a recurring theme of incredible claims with non-credible evidence for those claims. In all cases, the effect is assumed to be real before the tests – which reflects a blinding bias. This dates back to at least 2002 when Roger Shawyer claimed to invent a device that “provides direct conversion from electrical energy to thrust, without expelling propellant.” I was still at NASA and vaguely remember reviewing it then. Regardless of the claims, the fidelity of the methods was below average. Over the years I heard about several other tests, but never saw any data. Eventually there was a press story about tests in China, along with this photo. It turns out that this photo is not a Chinese rig, but one of Shawyer’s:

[Image: Shawyer’s EmDrive test rig on its rotating frame, with radiator and coolant lines visible.]

Shawyer’s device and supporting equipment are on a rotating frame, where that rotation is used to determine if the device is thrusting. Note, however, the radiator and coolant lines. Any variation in the coolant flow would induce a torque that would obscure any real force measurements. Knowing the claimed thrusting effect is small and having enough experience to guess the likely variations in coolant flow, I considered this test set-up flawed.

Regarding the Chinese tests, I did not previously know they are described in peer-reviewed articles. Since many of us did not know either, I’m listing them here along with cursory impressions:

Juan, Y., et al. (2012), “Net thrust measurement of propellantless microwave thrusters,” Acta Physica Sinica, Chinese Physical Society.

Due to all of the impressions below, I do not have any confidence in their data:

  • Assumes first that the EmDrive is genuine.
  • Verbally describes theory, but without predicting experimental findings.
  • The experiment is not described in enough detail to assess its fidelity, but is similar to the one in the photo. Regardless, there is absolutely no discussion of possible influences on the rotation from tilting, power lead forces, vibration effects, thermal effects, or others.
  • The behavior of the thrust stand was not characterized before installing the EmDrive. Testing the two together without first having characterized the thrust stand separately prevents separating their distinct characteristics from the data.
  • The data plots lack error bands.

Juan, Y., et al. (2013), “Prediction and experimental measurement of the electromagnetic thrust generated by a microwave thruster system,” Chinese Physics B, 22(5), 050301.

Due to all of the impressions below, I do not have any confidence in their data:

  • The description of the experiment is improved from the 2012 paper and appears to be the same configuration. This time possible effects from tilting and the power lead forces are mentioned, but they still do not address vibration, thermal, coolant loop, or other effects.
  • Again, they fail to characterize the thrust stand separately from the EmDrive.
  • Unlike the 2012 paper, they attempt to make numerical predictions. Details are provided for their physics derivations (which I did not scrutinize). That theory is then applied to make predictions for their specific hardware, but the application is only described verbally rather than shown as an explicit derivation. They show plots of the predicted force versus power, but only up to 200 W, where the experimental runs span about 100 W to 2400 W.
  • The experimental results do not match their linear predictions for the ratio of force-to-power. These differences are then evasively dismissed.

Juan, Y., et al. (2016), “Thrust Measurement of an Independent Microwave Thruster Propulsion Device with Three-Wire Torsion Pendulum Thrust Measurement System,” Journal of Propulsion Technology, vol. 37, no. 2, pp 362-371.

The text is in Chinese, which I did not translate, but the figures and plots are captioned in English. Therefore I comment only on those diagrams. Again, what is shown is not enough to support claims of anomalous forces:

  • From figures 2, 3, 6, 7, 16, and 19, it appears the prior apparatus is now hung from torsion wires instead of a rotating support from below. This time the coolant loop is explicitly shown, but in a conceptual drawing instead of showing specifics. Again, the influence of the coolant loop is ignored.
  • The only “measurement results” plot is “force versus serial number” – which conveys no meaningful information (without being able to read associated text).
  • I learned later from Martin Tajmar that the observed thrust drops by more than an order of magnitude when the device is powered by batteries instead of the external cables (cables whose currents can induce forces).

I chose not to cite and comment on the many non-peer-reviewed articles on Shawyer’s website and related AIAA conference papers.

Shawyer eventually published a peer-reviewed article, specifically: Shawyer, R. (2015), “Second generation EmDrive propulsion applied to SSTO launcher and interstellar probe,” Acta Astronautica, vol. 116, pp 166-174. Shawyer states: “Theoretical and experimental work in the UK, China and the US has confirmed the basic principles of producing thrust from an asymmetric resonant microwave cavity.” That assertion has not held up to scrutiny. Therefore, all related assertions are equally unfounded. Instead of offering substantive evidence, this article instead predicts the performance for three variations of EmDrives that now claim to use superconductivity. From these, he presents conceptual diagrams for their respective spacecraft. He also mentions the “Cannae Drive,” by Guido Fetta, as another embodiment of his device.

Latest EmDrive Paper

The latest paper, in the AIAA Journal of Propulsion and Power, is an improvement in fidelity on the prior tests and may be indicative of a new propulsive effect. However, the methods and data are still not crossing the threshold of “extraordinary evidence for extraordinary claims” – especially since this is a measurement of small effects. With the improved fidelity of the reporting and the data traces themselves, I have to question my earlier bias that the prior data was entirely due to experimental artifacts and proponent biases.

The assessment offered below is a summary of discussions with the coauthors of this report plus a few other colleagues. Both Martin Tajmar and George Hathaway operate similar low-thrust propulsion test stands and thus are familiar with such details. George Hathaway’s more focused analysis will be posted in a future Centauri Dreams article.

The major problems with the paper are (1) lack of impartiality, (2) the test hardware is not sufficiently characterized to separate spurious effects from the test article’s effects, (3) the data analysis is marred by the use of subjective techniques, and (4) the data can be interpreted in more than one way – where one’s bias will affect one’s conclusions.

The first shortcoming of the paper is that it is biased. It assumes that the propulsion effect is genuine and then goes on to invent an explanation for that unverified effect. This bias skews how they collect and analyze the data. To be more useful, the paper should have reported impartially on its experimental and analytical methods to isolate a potential new force-producing effect from other contaminating influences.

The next shortcoming is insufficient testing for how spurious causes can affect the thrust stand. While this new paper is a significant improvement over the previous publications, it falls short of providing the needed information to reach a definitive conclusion. They use techniques comparable to engineering tests of conventional low-thrust electric propulsion. While such engineering techniques might be passable for checking electric propulsion design changes, they are not sufficient to demonstrate that a new physics effect exists. The specific shortcomings include:

  • Thrust stand tilting: The thrust stand has a vertical axis, where even slight changes of that alignment will affect how the thrust stand behaves. There are three parts to this, none of which are quantified: the fidelity of the thrust stand flexures and pivots, the alignment fidelity of that structure to the vacuum chamber, and the sustained levelness of the “optical bench” upon which the vacuum chamber is mounted.
  • Thrust stand characterization: The thrust stand does not return to its original position after tests, even for most calibration events. Additionally, the thrust stand is over-damped, meaning that it is slow to respond to changes, including the calibration events. Those characteristics (time for the thrust stand to respond to a known force and the difference between its before/after positions) are important to understand so that those artifacts can be separated from the data. These facets are largely ignored in the paper. The report does mention that the location of the masses on the thrust stand affects its response rate (“split configuration” versus “non-split”), but this difference is not quantified. The thrust stand uses magnetic dampers. Similar dampers used on one of Martin Tajmar’s thrust stands were found to cause spurious effects (subsequently replaced with oil dampers). Given the irregular behavior, it is fair to suspect that other causes are interfering with the motion of the thrust stand. The flexural bearings might be operated beyond their load capacity or might be affected by temperature.
  • Forces from power cables: To reduce the influence of electromagnetic forces from the power leads, Galinstan liquid metal screw and socket connections are used. While encouraging, it is not specified if these connections (several needed) are all coaxially aligned with the stand’s rotation axis (as required to minimize spurious forces). Also, there are no tests with power into a dummy load to characterize these possible influences.
  • Chamber wall interactions: Though mentioned as a possible source of error, the electromagnetic forces between the test device and the vacuum chamber walls are dismissed without quantitative estimates or tests. One way that this could have been explored is by using more variations in the position and orientation of the test device relative to the chamber. For example, in the “null thrust” configuration, only one of four possibilities is used (the device pointed toward the pivot axis). If also pointed up, down, and away from the pivot, more information would have been collected to help assess such effects.
  • Thermal effects: The paper acknowledges the possible contributions from thermal effects, but does not quantify that contribution. For example, there are no measurements of temperature over time compared to the thrust stand’s deflection. Such measurements should have been made during operation of the device and when running power through a dummy load. Absent that data, the paper resorts to subjectively determining which parts of the data are thermal effects. For example, without any validation, the paper assumes that the displacement measured during the “null thrust” configuration is entirely a thermal effect. It does not consider chamber wall interactions or any other possible sources. The paper does speculate that temperature changes might shift the center of gravity of the test article in a way that affects the thrust stand, but no diagrams are offered showing how a slight change in one of those dimensions would affect the thrust stand.

The third and most egregious shortcoming in the report is that they apply a vaguely described “conceptual simulation” (which is never mathematically detailed) as their primary tool to deduce which part of the data is attributable to their device and which is due to thermal effects. They assume a priori the shapes of both the “impulsive thrust” (their device) and thermal effects and how those signals will superimpose. There is no consideration of chamber wall effects, power lead forces, tilting, etc. As a reflection of how poorly defined this assumed superposition is, the ‘magnitude’ and ‘time’ axes on the chart showing this relation (Fig. 5) are labeled as “arbitrary units.” Another problem is that their assumed impulsive thrust curve does not match the shape of most of the data that they attribute to impulsive thrust. Instead of the predicted smooth curve, the data shows deviations about halfway through the thrusting time. They then apply this subjective and arbitrary tool to reach their conclusions. Because they are biased toward the effect being genuine and because their methods overlook critical measurements, I cannot trust the authors’ interpretations of their results.

Absent an adequate accounting for the magnitude and characteristics of secondary causes and how to remove those possible influences from the data, the fourth major problem with the report is that its data can then be interpreted more than one way.

Rather than evoking subjective techniques here, the comments that follow are based only on examining their data plots as a whole. To illustrate how this data can then be interpreted in more than one way, both dismissive and supportive interpretations are offered. In particular, we compare the traces from the “forward,” “null,” and “reverse” thrust configurations and then the force versus power compilation of the runs.

The data for the 80 W operation of the device in the “forward,” “null,” and “reverse” thrust configurations is presented in Figures 9c, 18, and 10c, respectively. Recall from the above discussions that this data includes all the uncharacterized spurious causes (thermal, chamber wall interactions, power lead forces, tilting of the thrust stand, and seismic effects), plus any real force from the test device. The values shown in the table below were read from enlarged versions of the figures.


Table of Noteworthy Data Comparisons Between Forward, Null, and Reverse Thrust Orientations

For a genuine thrusting effect, one would expect the results to show near-matching magnitudes for forward and reverse thrust and a zero magnitude for the null-thrust orientation. If one looks only at the “Total deflection,” all the magnitudes are roughly the same, including the null-thrust. Pessimistically, one could then infer that the spurious effects are great enough to be easily misinterpreted as a genuine thrust.


Conversely, if one considers how quickly the deflections occur, then the attention would be on the “Rate of deflection.” In that case, the thrusting configurations are roughly twice as large as the null-thrust configuration. From only that, one might infer that a new force-producing effect is larger than spurious causes.

To infer conclusions based on the deflection rates, one must also examine the rate of deflection for the calibration events, which should be the same in all configurations. The calibration deflection rate appears roughly the same in the forward and reverse thrust configuration, but more than 2.5 times larger in the null thrust configuration. That there is a difference compounds the difficulty of reaching conclusions. There are also significant inconsistencies with how the thrust stand rebounds once the power is turned off between the thrusting and null-thrust configurations, again compounding the difficulty of reaching conclusions.

Because a possible positive interpretation exists within those different perspectives, I cannot rule out the possibility that the data reflects a new force-producing effect. But as stated earlier, given all the uncharacterized secondary effects and the questionable subjective techniques used in the report, this is not sufficient evidence. Given the prominent role played by the rate of deflections, the dynamic behavior of the thrust stand must be more thouroughly understood before reaching firm conclusions.

Next, let’s examine the compilation of runs, namely Fig. 19. Based on a linear fit of the data through the origin, they conclude a thrust-to-power ratio of 1.2 ± 0.1 mN/kW (= µN/W). While such a fit is possible, the data can be interpreted in more than one way. Note that the averages for the 60 and 80 watt operations are the same, so a linear fit is not strictly defensible. One could just as easily infer that increasing power yields decreasing thrust, a constant 50 µN force, or an exponential curve that flattens out to a constant (saturated) thrust of about 100 µN. Note too that the null-thrust data (which could be interpreted to be as high as 211 µN) is not shown on this chart.


Recall too that they did not quantify the potential spurious effects, so their presumed error band of only ±6 µN does not stand up to scrutiny. Note, for example, that the span in the 40 W data is about ±17 µN, in the 60 W data about ±50 µN, and in the 80 W data about ±32 µN. What is not clear is whether these 40, 60, and 80 W runs represent different operating parameters (Q-factor?) or natural variations with fixed settings.

The pessimistic interpretation is that the deviations in the data represent variations under the same operating conditions, in which case the data are too scattered to support any correlation. Conversely, the optimistic interpretation is to assume the variations are due to changes in operating parameters, but then that additional information should be made available and become an explicit part of the analysis.

In summary, this most recent report is a significant improvement, but it has many shortcomings. Questionable subjective techniques are used to infer the “thrust” from the data. Other likely influences are not quantified. And yet, despite those inadequacies, the possibility of a new force-producing effect cannot be irrefutably ruled out. This is intriguing, but it still falls short of defensible evidence.

EmDrive and Other Space Drive Theories

First, I cannot stress enough that there is no new EmDrive “effect” yet about which to theorize. The physical evidence on the EmDrive is not defensible, nor does it span enough operating parameters to characterize a new effect. The data is not even reliable enough to deduce the force-per-power relationship, let alone any other important correlations. What about the effects of changing the dimensions or geometry, changing the materials, or changing the microwave frequencies or modulation? And then there is the unanswered question: what are the propulsion forces pushing on?

Assuming for the moment that the EmDrive is a new force-producing effect, we know at least two things: (1) it is not a photon rocket, because the claimed forces are 360 times greater than the photon rocket effect, and (2) a force without an “equal and opposite force” goes beyond Newton’s laws. Note that I did not invoke the more familiar “violating conservation of momentum” point. That is because these experiments are still trying to figure out if there is a force. We won’t get to conservation of momentum until after those forces are applied to accelerate an object. If that happens, then we must ask what reaction mass is being accelerated in the opposite direction. If the effects are indeed genuine, then new physics is being discovered or old physics is being applied in a new, unfamiliar context.
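That factor of 360 is easy to check. A perfectly collimated photon beam produces thrust F = P/c, so its thrust-to-power ratio is fixed at 1/c; a short sketch of the arithmetic:

```python
c = 299_792_458.0                       # speed of light, m/s

# Photon rocket: F/P = 1/c, converted from N/W to µN/kW
photon_ratio = (1.0 / c) * 1e6 * 1e3    # ≈ 3.34 µN/kW

claimed_ratio = 1.2e3                   # 1.2 mN/kW from the report, in µN/kW
print(claimed_ratio / photon_ratio)     # ≈ 360
```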

For those claiming to have a theory to predict a new propulsion effect, it is necessary that those theories make testable numeric predictions. The predictions in Juan’s 2013 paper did not match its results. The analytical discussions in White’s 2016 experimental paper do not make theoretical predictions. The same is true with his 2015 theoretical paper: White (2015), “A discussion on characteristics of the quantum vacuum,” Physics Essays, vol. 28, no. 4, 496-502.

Short of having a self-consistent theory, any speculations should at least accurately echo the physics they cite. The explanations in White’s 2016 experimental paper, White’s 2015 theory paper, and even White’s 2013 report on the self-named “White-Juday Warp Field Interferometer” (White (2013), “Warp Field Mechanics 101,” Journal of the British Interplanetary Society, vol. 66, pp. 242-247) did not pass this threshold. I’ll leave it to other authors to elaborate on the 2015 and 2016 papers, while a review of the 2013 warp drive claims is available here: Lee & Cleaver (2014), “The Inability of the White-Juday Warp Field Interferometer to Spectrally Resolve Spacetime Distortions,” [physics.gen-ph].

In contrast, it is also important to avoid pedantic reflexes: summarily dismissing anything that does not fit what we already know, or assuming all of our existing theories are completely correct. For example, the observations that led to the Dark Matter and Dark Energy hypotheses do not match existing theories, but that evidence has been reliably documented. Using that data, many different theories are being hypothesized and tested. The distinction here is that both the proponents and challengers make sure they are accurately representing what is, and is not yet, known.

If a propulsion physics breakthrough is to be found, it will likely be discovered by examining relevant open questions in physics. A theoretical question relevant to non-rocket propulsion concepts (including the EmDrive) is ensuring conservation of momentum. One way to approach this is to look for phenomena in space that might serve as a reaction mass in lieu of propellant, perhaps like the quantum vacuum. Another approach is to dig deeper into the nature of inertial frames. Inertial frames are the reference frames upon which the laws of motion and the conservation laws are defined, yet it is still unknown what causes inertial frames to exist or whether they have any deeper properties that might prove useful.

Woodward Tests and Theory

In addition to the heavily touted EmDrive, there are about two dozen other space drive concepts of varying degrees of substance. One of them started out as a theoretical investigation into the physics of inertial frames and then advanced to make testable numeric predictions. Specifically, I’m referring to what is now called the “Mach Effect Thruster” concept of James F. Woodward, which dates back at least to this article:

Woodward, James F. (1990), “A new experimental approach to Mach’s principle and relativistic gravitation,” Foundations of Physics Letters, vol. 3, no. 5, pp. 497-506.

A more in-depth and recent publication on these concepts is available as:

Woodward, James F. (2013) Making Starships and Stargates: The Science of Interstellar Transport and Absurdly Benign Wormholes. Springer Praxis Books.

Experiments have been modestly underway for years, including three recent independent replication attempts by George Hathaway in Toronto, Canada; Martin Tajmar in Dresden, Germany; and Nembo Buldrini in Wiener Neustadt, Austria. A workshop was held to review these findings on September 20-23, 2016, in Estes Park, Colorado. I understand from an email conversation with Jim Woodward that these reports and workshop proceedings are now undergoing peer review for likely publication early in 2017.

The main point here, by citing just this one other example, is that there are other approaches beyond the highly publicized EmDrive claims. It would be a disservice to our readers to let a media fixation with one theme blind us to alternatives.

Implications

If either the EmDrive or the Mach Effect Thruster is indeed genuine, then new physics is being discovered or old physics is being applied in a new, unfamiliar context. Either would be profound. Today it is premature to assert that any of these effects are genuine, or conversely, to flatly rule out that such propulsion ambitions are impossible. When the discussions are constrained to exclude pedantic disdain and wishful interpretations, and limited to people who have either the education or experience in related fields, one encounters multiple, even divergent, perspectives.

Next, even if new physics-to-engineering is emerging, it is premature to assess its utility. The number of factors that go into deciding whether one technology has an advantage over another far exceeds the data yet available. Recall that the performance of the first aircraft, jet engine, transistor, etc., were all tiny examples of what those breakthroughs evolved to become. Reciprocally, we tend to forget all the failed claims that have faded into obscurity. We just do not know enough today, pro or con, to judge.

I understand the human urge for fast, definitive answers that we can act on. This lingering uncertainty is aggravating, even more so when peppered with distracting hype or dismissive disdain. To get to the underlying reality, we must keep the focus on the fidelity of the methods to produce reliable results, rather than jumping to conclusions about the implications.

What to Do About It

If we want definitive answers, then we must improve the reliability of the methods and data, and remain patiently open for the results to be as they are, good news or bad news. I alluded earlier to the broken tactic of trying to get answers with fast and cheap experiments. How many inadequate experiments, and over how many years, does it take before we change our tactics? I’ve had this debate more than once with potential funding sources and I hope they are reading now to see… “I told ya so!” Sorry, I could not resist that human urge to emotionally amplify a well-reasoned point. To break the cycle of endlessly not doing the right things to get a definitive answer, we must begin a more in-depth experimental program using qualified and impartial labs, plus qualified and impartial analysts. Granted, those types of service providers are not easy to find, and impartiality is the hardest to come by. Also, it might take three years to get a reliable answer, which is at least better than 14 years. And the trustworthy experiments will not be cheap, but they will quite likely cost far less than the aggregate spent on the repeated ‘cheap’ experiments. If any of those prior funding sources (or new ones) are reading this and finally want trustworthy answers, contact us. Tau Zero stands ready to make arrangements with suitable labs and analysts to conduct such a program.

And what if we do discover a breakthrough? In that case, we recommend distinguishing two themes of research, one from an engineering point of view to nudge the effect into a useful embodiment, and another from an academic point of view, to fully decipher and compare the new effects to physics in general. In both those cases we need to:

1. Characterize the parameters that affect the effects. Instead of just testing one design, vary the parameters of the device and the test conditions to get enough information to work with.

2. Deduce mathematical models from that more complete set of information.

3. Apply those models to (a) assess scalability to practical levels, and (b) explore the new phenomena and its relation to other fundamental physics.

4. On all of the above, conduct and publish the research with a focus on the reliability of the findings rather than on their implications.

For those of you who are neither researchers nor funding sources, what should you do? First, before reposting an article, take the time to see if it offers new and substantive information. If it turns out to be hollow click-bait, do not share it. If it has new information with meaningful details, share it. Next, as you read various articles, notice which sources provide the kind of information that helps you understand the situation. Spend more time with those sources and avoid those that do not.

Regarding questionable press stories, I’m not sure yet what to make of this: “The China Academy of Space Technology (CAST), a subsidiary of the Chinese Aerospace Science and Technology Corporation (CASC) and the manufacturer of the Dong Fang Hong satellites, has held a press conference in Beijing explaining the importance of the EmDrive research and summarizing what China is doing to move the technology forward.” Some stories claim there is a prototype device in orbit. If true, I would expect to see at least one photo of the device being tested in space. But we’ll see…

When faced with uncertain situations and where the data is unreliable, the technique I use to minimize my biases is to simultaneously entertain conflicting hypotheses, both the pro and con. Then, as new reliable information is revealed, I see which of those hypotheses are consistent with that new data. Eventually, after enough reliable data has accrued, the reality becomes easier to see.

Note

The cited devices have gone by multiple names (e.g. EmDrive, EM Space Drive; Mach Effect Thruster, Mach-Lorentz Thruster), and the versions used in this article are the ones with the greatest number of Google search hits.



New Horizons: Going Deep in the Kuiper Belt

by Paul Gilster on December 29, 2016

We’ve retrieved all the data from New Horizons’ flyby of Pluto/Charon in 2015, the last of it being acquired on October 25 of this year. But data analysis is a long and fascinating process, with papers emerging in the journals and new discoveries peppering their pages. The New Horizons science team submitted almost 50 scientific papers in 2016, and we can expect that stream of publication to continue in high gear as we move deeper into the Kuiper Belt.

For New Horizons is very much an ongoing enterprise, as Alan Stern’s latest PI’s Perspective makes clear. We have an encounter with a small Kuiper Belt object (KBO) called 2014 MU69 to think about, and the symmetry that Stern points to in his essay is striking. Two years ago New Horizons had just emerged from cruise hibernation as preparations for the Pluto/Charon encounter began. And exactly two years from now, we’ll be again following the incoming datastream as the last of the New Horizons targets comes into breathtaking proximity.


But 2014 MU69 isn’t the only KBO in the cards. Stern describes what’s next:

The year ahead will begin with observations of a half-dozen KBOs by our LORRI [LOng Range Reconnaissance Imager] telescope/imager in January. Those observations, like the ones we made in 2016 of another half-dozen KBOs, are designed to better understand the orbits, surface properties, shapes, satellite systems and frequency of rings around these objects. These observations can’t be done from any ground-based telescope, the Hubble Space Telescope, or any other spacecraft – because all of those other resources are either too far away or viewing from the wrong angles to accomplish this science. So this work is something that only New Horizons can accomplish.

Ponder that 2014 MU69 is almost 6.5 billion kilometers out and you have to wonder when we’ll next get a spacecraft this deep into the system. The answer has as much to do with funding as with our technologies, since a wide variety of outer system probes have been under discussion in the last forty years. Each new deep space mission that actually flies fuels public interest, but the phenomenon is all too brief, and although space exploration seems to be flourishing in the movies these days, it’s hard to see a New Horizons sequel any time soon.
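To get a feel for that distance, a quick back-of-envelope sketch of the one-way light travel time (the 6.5 billion kilometer figure is approximate):

```python
c_km_s = 299_792.458      # speed of light, km/s
dist_km = 6.5e9           # approximate distance to 2014 MU69, km

light_time_h = dist_km / c_km_s / 3600
print(f"one-way light time ≈ {light_time_h:.1f} hours")   # ≈ 6 hours
```

Every command and every bit of telemetry takes roughly six hours each way.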

Even so, I’m optimistic in the longer term because the process in view in a mission of this complexity works its own kind of magic. Yesterday I wrote about Vera Rubin’s ability to communicate not just her love of the stars to young scientists she worked with but her values of tenacity, curiosity and exploration. In the same way, a mission executed with the precision of New Horizons has to set an example for budding scientists everywhere, a young population out of which will surely grow at least a few future principal investigators like Alan Stern.

New Horizons is a long way from home, and January will be used to take advantage of its position through measurements of hydrogen gas in the heliosphere (using the Alice ultraviolet spectrometer) and charged particles and dust (through the SWAP, PEPSSI and SDC instruments). What’s different today about space missions is that a student with a PC can drill deep into all this, following tweets from researchers, exploring ideas on blogs and reading direct communications from the mission team. The Apollo days were fantastic, but we could never get as close to a mission as we can get to a New Horizons, or a Dawn or a Rosetta.

There are going to be things to track in early 2017 as New Horizons makes yet another course correction, but the spacecraft will then enter hibernation until September, when a new round of KBO observations begins. As Stern points out, flyby operations for 2014 MU69 are set to commence in July of 2018, meaning that while the vehicle hibernates, the mission teams will be writing and testing the command sequences for the January 1, 2019 flyby.

So the absorbing process of a deep space mission continues to play out in space and on Earth. The first observations ever made of a Kuiper Belt Object from within the Kuiper Belt itself are now out in the form of a paper in Astrophysical Journal Letters. The object is 1994 JR1, studied through the LORRI instrument after the Pluto/Charon flyby from a distance of 1.85 AU and then about six months later from a distance of 0.71 AU. The earlier observations were supplemented with simultaneous Hubble studies of the same object. We learn quite a lot from these early observations. From the paper’s summary:

(15810) 1994 JR1 has a V-R of 0.76, making it a very red KBO [V-R refers to spectral slope, a measure of reflectance vs wavelength]. Unique New Horizons observations showed that JR1 has a high surface roughness of 37±5°, indicating that it is potentially very cratered. They also showed that the rotational period of JR1 is 5.47±0.33 hours, faster than most similar-sized KBOs, and enabled a reduction of radial uncertainty of JR1’s position from 10⁵ to 10³ km.

Moreover, New Horizons has revealed this KBO’s interactions with the objects around it:

Neptune perturbations bring Pluto and JR1 close together every 2.4 million years, when Pluto can perturb JR1’s orbit. Future ground-based photometry of JR1 would be useful to better constrain the period and opposition surge, and to allow preliminary estimates of JR1’s shape and pole. These proof of concept distant KBO observations demonstrate that the New Horizons extended mission will indeed be capable of observing dozens of distant KBOs during its flight through the Kuiper Belt.

Ongoing science like this from the outer Solar System will snare the attention of people making their initial way into astronomy and astronautics. Who knows what careers will be shaped by these studies, and what new targets that next generation will explore?

The paper is Porter et al., “The First High-Phase Observations of a KBO: New Horizons Imaging of (15810) 1994 JR1 from the Kuiper Belt,” Astrophysical Journal Letters Vol. 828, No. 2 (2016). Preprint available.



Vera Rubin (1928-2016)

by Paul Gilster on December 28, 2016

When Vera Rubin went to Cornell University to earn a master’s degree, she quickly found herself immersed in galaxy dynamics, lured to the topic by Martha Stahr Carpenter. The interest, though, was a natural one; it drew on Rubin’s childhood fascination with the motion of stars across the sky. You could say that motion captivated her from her earliest days. At Cornell, she studied physics under such luminaries as Richard Feynman, Philip Morrison and Hans Bethe. She would complete the degree in 1951 and head on to Georgetown.

Rubin, who died on Christmas day, was possessed of a curiosity that made her ask questions others hadn’t thought of. In Bright Galaxies, Dark Matters (1997), a collection of her papers, the astronomer recalls writing to Milton Humason in 1949, asking him about the redshifts he and his colleagues were compiling. Rubin had heard that many had yet to be published, and she would use those she had to look for systematic motion among the galaxies, motion that would show up if you removed the Hubble expansion from the data.

“I found that many of these galaxies defined a great circle on the sky, or roughly a circle, and that there were large regions of positive and negative values of residual velocity,” Rubin told editor Sally Stephens in a 1992 interview. “What in fact I really found was the supergalactic plane, although I entitled the paper ‘Rotation of the Universe.’”

Rubin tended to dismiss this early work in later life (“I presume none of this work would hold up today”), and her paper was rejected by the Astrophysical Journal as well as the Astronomical Journal, though later presented at a 1950 AAS meeting. Even so, the questions she raised were hugely significant, and at the time under study by Kurt Gödel at Princeton, the school that turned down her graduate application because of her gender. What Rubin was homing in on was the presence of large-scale galactic motion.


Today, we talk about the Rubin-Ford effect, the observation that describes the motion of our galaxy relative to a sample of galaxies at varying distances and compares this to its motion relative to the cosmic microwave background (the Ford here is Kent Ford, an astronomer whose spectrometer became critical for Rubin’s studies of stellar motion in spiral galaxies). Early work that had been flawed by insufficient data would eventually grow into this result.

I always tend to link Rubin and Fritz Zwicky in my thinking. Way back in 1933, the Swiss astronomer who taught most of his career at Caltech was noting discrepancies between the apparent mass of galaxies in the Coma cluster and the amount of light they produced, leading him to coin the phrase ‘dunkle Materie’ (dark matter) to explain the effect. Both Zwicky and Rubin had an uncanny knack for seeing places where the universe was posing questions. For Rubin, it would become the motion of spiral galaxies that defined her career.

The problem leaped out at astronomers once Rubin put her finger on it. You would expect galaxies to spin in fairly conventional ways, with stars nearer the center moving faster than those on the outskirts, just as, in our own Solar System, the inner planets orbit the Sun much faster than the outer worlds. But by 1974 Rubin was able to show that the outer stars in spiral galaxies move much faster than could be explained by the mass of the visible matter in the galaxies. Dark matter again reared its head, and became the subject of intense investigation.
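The expectation Rubin overturned is easy to quantify. If a galaxy’s luminous mass dominated and sat mostly toward its center, orbital speed beyond the bright disk would fall off as v ∝ 1/√r. A rough sketch with an assumed luminous mass (the 10¹¹ solar masses below is a placeholder, not a measured value):

```python
import math

G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30               # kg
KPC = 3.086e19                 # meters per kiloparsec

M_lum = 1e11 * M_SUN           # assumed luminous mass, treated as central

for r_kpc in (5, 10, 20, 40):
    v = math.sqrt(G * M_lum / (r_kpc * KPC)) / 1000   # Keplerian speed, km/s
    print(f"r = {r_kpc:>2} kpc: v ≈ {v:.0f} km/s")
```

This predicts speeds falling from roughly 290 km/s at 5 kpc to about 100 km/s at 40 kpc. Rubin instead measured curves that stay flat far beyond the disk, implying enclosed mass that keeps growing with radius: the dark matter signature.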

We still haven’t observed dark matter directly, though the current calculation is that about 27 percent of the universe is made up of the stuff, with only 5 percent being the normal matter we had until recently assumed was all there was. By 1998, we had learned, too, of dark energy and the continuing expansion of the universe, yet another mystery demanding an explanation. The dark energy work would produce a Nobel Prize; dark matter has yet to do so. Rubin’s exclusion from the Nobel occupies much of the media commentary on her death. I think Phil Plait’s discussion is on the money.

Rubin would put her painstaking methods to work on over 200 galaxies in her career. Finishing her PhD in 1954 (her thesis advisor at Georgetown was George Gamow, a science popularizer and early advocate of Big Bang theory), Rubin taught at Georgetown for eleven years before joining the Carnegie Institution for Science in 1965, where she began her collaboration with Ford. She would become the second female astronomer elected to the National Academy of Sciences and would receive the National Medal of Science in 1993.

Rubin’s loss resonates through the world of astronomy and is keenly felt by the many she influenced, especially women who were inspired by her example to tackle a career in the physical sciences. We can measure careers by papers published and ideas propagated, but it’s all too easy to miss the more intangible factors like lives touched and careers launched. On all these scores Vera Rubin deserves the thanks of the field she did so much to shape.



Orbital Determination for Proxima Centauri

by Paul Gilster on December 27, 2016

Let’s talk this morning about the relationship of Proxima Centauri to nearby Centauri A and B, because it’s an important issue in our investigations of Proxima b, not to mention the evolution of the entire system. Have a look at the image below, which shows Proxima Centauri’s orbit as determined by Pierre Kervella (CNRS/Universidad de Chile), Frédéric Thévenin (Observatoire de la Côte d’Azur) and Christophe Lovis (Observatoire astronomique de l’Université de Genève). The three astronomers have demonstrated that all three stars — Proxima Centauri as well as Centauri A and B — form a single, gravitationally bound system.


Image: Proxima Centauri’s orbit (shown in yellow) around the Centauri A and B binary. Credit: Kervella, Thévenin and Lovis.

A couple of things to point out here, the first being the overall image. You’ll see Alpha Centauri clearly labeled within the yellow ellipse of Proxima’s orbit. Off to the right of the ellipse, you’ll see Beta Centauri. I often see the image of these two stars identified as Centauri A and B, but Kervella et al have it right. The single bright ‘star’ within the ellipse is the combined light of Centauri A and B. Beta Centauri, at the right, is an entirely different star, itself a triple system in the constellation Centaurus, at a distance of about 400 light years.

Now as to that orbit — 550,000 years for a single revolution — things get interesting. One reason it has been important to firm up Proxima’s orbit is that while a bound star would have affected the development of the entire system, the question has until now been unresolved. Was Proxima Centauri actually bound to Centauri A and B, or could it simply be passing by, associated with A and B only by happenstance? Back in 1993 Robert Matthews and Gerard Gilmore found this to be a borderline case, calling for further kinematic data to clarify the issue.

When Jeremy Wertheimer and Gregory Laughlin (UC-Santa Cruz) attacked the problem in 2006, they found it ‘quite likely’ that Proxima Centauri was bound to the A/B pair. If this were the case, it would mean that the trio probably formed together out of the same nearby material, with the result that we could expect them to have the same age and metallicity. Laughlin and Wertheimer assumed that future, yet more accurate kinematic measurements would make it clear ‘that Proxima Cen is currently near the apastron of an eccentric orbit…’

And now we have Kervella and team, who have used the HARPS instrument (High Accuracy Radial Velocity Planet Searcher) on ESO’s 3.6-meter telescope at La Silla to make the call. Using radial velocity and astrometry, the researchers have surmounted the main obstacle to determining Proxima’s bound state. High-precision radial velocity measurements had been lacking because of Proxima’s relative faintness, but drilling down into HARPS data has produced a new radial velocity of −21.700 ± 0.027 km s−1, which tracks nicely with the prediction of Wertheimer and Laughlin, and is low enough to indicate a bound state.
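The underlying test is simple: is Proxima’s velocity relative to the α Centauri AB barycenter below the local escape velocity? A back-of-envelope sketch, with approximate masses and separation that are my assumptions rather than the paper’s fitted values:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
AU = 1.496e11          # m

M_ab = 2.0 * M_SUN     # alpha Cen A + B, roughly 1.1 + 0.9 solar masses
r = 13_000 * AU        # Proxima's approximate distance from the AB pair

v_esc = math.sqrt(2 * G * M_ab / r)
print(f"escape velocity at Proxima ≈ {v_esc:.0f} m/s")   # ≈ 500 m/s
```

A relative velocity of a few hundred meters per second, as the new measurements indicate, sits safely below that threshold, hence a bound orbit.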

As we consider that interesting planet around Proxima Centauri, we can now ponder that its star is the same age as Centauri A and B, roughly 6 billion years, making the planet about a billion years older than our Earth. Exactly how the planet formed becomes an interesting issue as well, because we have interactions between three stars to think about. From the paper:

The orbital motion of Proxima could have played a significant role in the formation and evolution of its planet. Barnes et al. (2016) proposed that a passage of Proxima close to α Cen may have destabilized the original orbit(s) of Proxima’s planet(s), resulting in the current position of Proxima b. Conversely, it may also have influenced circumbinary planet formation around α Cen (Worth & Sigurdsson 2016). Alternatively, Proxima b may also have formed as a distant circumbinary planet of α Cen, and was subsequently captured by Proxima. In these scenarios, it could be an ocean planet resulting from the meltdown of an icy body (Brugger et al. 2016). Proxima b may therefore not have been located in the habitable zone (Ribas et al. 2016) for as long as the age of the α Cen system (5 to 7 Ga; Miglio & Montalbán 2005; Eggenberger et al. 2004; Kervella et al. 2003; Thévenin et al. 2002).

So there we are. Plenty of alternatives to ponder as we look into the origins of the nearest known planet to our Solar System. Just how the researchers tuned up the radial velocity data to avoid the problem of convective blueshift – where the star’s unstable surface can shift the observed wavelength of spectral lines – and gravitational redshift, which can likewise be misleading, is covered in the paper’s appendix. The selection of four strong, very high signal-to-noise emission lines made the difference in this exquisitely tight measurement.

The paper is Kervella, Thévenin & Lovis, “Proxima’s orbit around α Centauri,” accepted at Astronomy & Astrophysics (preprint).



Seasonal Break

by Paul Gilster on December 22, 2016

The other day on the hugely enjoyable Galactic Journey site, I ran into an interesting historical tidbit. Here, from the 1753 Cyclopædia: or, An Universal Dictionary of Arts and Sciences by Ephraim Chambers, is a definition of the word ‘interstellar.’


And with a modernized presentation:

“Interstellar, is a word used by some authors to express those parts of the universe that are without and beyond our Solar system; in which are supposed to be several other systems of planets moving around the fixed stars as the centers of their respective motions: and if it be true, as it is not improbable, that each fixed star is thus a sun to some habitable orbs, that move round it, the interstellar world will be infinitely the greater part of the universe.”

Another instance of the idea of planetary systems around other stars circulating widely at an early date. Chambers was working for John Senex, a London-based globe-maker, when he conceived the plan for his Cyclopædia, a project to which he soon devoted his entire attention. The first edition appeared by subscription in 1728 in a two-volume, 2466-page folio, but the work, one of the first general encyclopedias to be published in English, would see numerous further editions, including one in Ireland as well as an Italian translation.


Those of you who are not yet familiar with Galactic Journey will want to remedy the lack, especially if you enjoy science fiction as much as most Centauri Dreams readers do. The site is something of a time machine, written from the perspective of science fiction magazines and events of over 50 years ago, and what’s delightful to me is that I often find issues of Analog or Fantastic discussed that I bought off the newsstand when they appeared. And because I love magazine fiction, every one of those issues is still here on my shelves, approximately ten feet from where I’m now writing.

We’re pushing into holiday travel time, so I’m going to close up shop until next week. Let me wish all of you a happy season and thank you for the comments and suggestions with which you’ve always enlivened the site. We have much to talk about in coming days, but for now, safe journey to all of you on the road.



Citizen SETI

by Paul Gilster on December 21, 2016

I love watching people who have a passion for science constructing projects in ways that benefit the community. I once dabbled in radio astronomy through the Society of Amateur Radio Astronomers, and I could also point to the SETI League, with 1500 members on all seven continents engaged in one way or another with local SETI projects. And these days most everyone has heard the story of Planet Hunters, the citizen science project that identified the unusual Boyajian’s Star (KIC 8462852). When I heard from Roger Guay and Scott Guerin, who have been making their own theoretical contributions to SETI, I knew I wanted to tell their story here. The post that follows lays out an alien civilization detection simulation and a tool for visualizing how technological cultures might interact, with an entertaining coda about an unusual construct called a ‘Dyson shutter.’ I’m going to let Roger and Scott introduce themselves as they explain how their ideas developed.

by Roger Guay and Scott Guerin

Citizen Science plays an increasingly important role across several scientific disciplines, especially in the fields of astronomy and SETI. Tabby’s Star, discovered by members of the Planet Hunters project, and the SETI@home project are recent examples of massively parallel citizen-science efforts. Those large-scale projects are counterbalanced by individuals whose near obsession with a subject compels them to study, write, code, draw, design, talk about, or build artifacts that help them understand the ideas that excite them.

Roger Guay and Scott Guerin, working in isolation, recently discovered parallel evolution in their thinking about SETI and the challenges of interstellar detection and communication. Guay has programmed a 10,000 x 8,000 light year swath of a typical galaxy and populated it with randomly appearing, radiating, communicating civilizations. His model allows users to tweak basic parameters to see how frequently potential detections occur. Guerin is more interested in a galaxy-wide model and has used worksheets and animations to bring his thoughts to light. His ultimate goal is to develop a parametric civilization model so that interactions, if any, can be studied. At the core, however, both efforts are attempts at visualizing the Fermi Paradox across space-time, and both experimenters show how fading electromagnetic halos may be all that’s left for us to discover of an extraterrestrial civilization, if we listen hard enough.

The backgrounds, mindsets, and tool kits available to Roger and Scott play an important role in their path to this blog.

Roger Guay

I am a retired Physicist and Technical Fellow Emeritus from Boeing in Seattle. I can’t remember when I first became interested in being a scientist (it was in grade school) but I do remember when I first became obsessed with the Fermi paradox. It was during a discussion while on a road trip with a colleague. At first, this discussion mainly revolved around the almost unfathomable vastness of space and time in our galaxy, but then turned to parameters of the Drake equation. The one that was the most controversial was L, the lifetime of an Intelligent Civilization or IC.

The casual newcomer to the Drake equation will tend to assume a relatively long lifetime for an IC, but when considering detection methods such as SETI uses, one must adjust L to reflect the lifetime of the technology of the detection method. For example, SETI is listening for electromagnetic transmissions in the microwave to radio and TV range, so L has to be the estimated lifetime of that technology. For SETI’s technology, we’ll call this the Radio Age. On Earth, the Radio Age started about 100 years ago and has already fallen off due to technological advances such as the internet and satellite communication. So, I argued, an L of 150 ± 50 years might be a more reasonable assumption for the Drake equation when the detection method is listening for radio signals.
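The leverage L has on the result is easy to see in the equation itself, N = R* · fp · ne · fl · fi · fc · L. A small sketch, with every factor other than L set to an arbitrary placeholder value:

```python
# All factors below are placeholders chosen only to show L's leverage
R_star, f_p, n_e = 1.5, 1.0, 0.2    # star formation rate; planet fractions
f_l, f_i, f_c = 0.5, 0.1, 0.1       # life, intelligence, communication

for L in (150, 1_000, 10_000):      # candidate "Radio Age" lifetimes, years
    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(f"L = {L:>6} yr  ->  N ≈ {N:.1f} detectable civilizations")
```

With a short Radio Age, N drops below one: a galaxy that may be inhabited but, at any given moment, is silent in the radio bands.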

At this point the discussion was quite intense! When I thought about an L equal to a few hundred years in a galaxy that continues to evolve over a 13-billion-year lifespan, the image that came to my mind was that of fireflies in the night. And that was the precursor for my Alien Civilization Detection or ACD simulation.

One can imagine electromagnetic or “radio” bubbles appearing randomly in time and space and growing in size over time. At any instant in time the bubble from an IC will have a radius equal to the speed of light times the amount of time since that IC first began broadcasting. These bubbles will continue to grow at the speed of light. When the IC stops broadcasting for whatever reason, the bubble will become hollow and the shell thickness will reflect the time duration of that IC’s Radio Age lifetime.

If the age of our galaxy is compressed into one year, we on Earth have been “leaking” radio and television signals into space for only a small fraction of a second. And, considering the enormity of space and the fact that our “leakage” radiation has only made it to a few hundred stars out of the two to four hundred billion in our galaxy, one inevitably realizes there must be a significant synchronization problem that arises when ICs attempt to detect one another. So what does this synchronicity problem look like visually?

To answer this question my tasks became clear: dynamically generate and animate radio bubbles randomly in space and time, grow them at the speed of light at a greatly accelerated rate in a highly compressed region of the galaxy, fade them over time for inverse-square-law decay, and then analyze the scene for detection. No Problem!!!

Using LiveCode, a modern derivative of HyperCard on steroids, I began my 5-year project to simulate this problem scientifically. Using the Monte Carlo method, whereby randomly generated rings denoting EM radiation from ICs pop into existence in an 8,000 x 10,000 LY region of the galaxy* centered on our solar system at a rate of about 100 years per second, the firefly analogy came to life. And the key to determining detection potential is to recognize that it can only occur when a radiation bubble is passing over another IC that is actively listening. This is the synchronicity problem that is dramatically apparent when the simulation is run!

To be scientifically accurate and meaningful, some basic assumptions were required:

1. ICs will appear not only randomly in space, but also randomly in time.

2. ICs will inevitably transition into (and probably out of) a Radio/TV age where they too will “leak” electromagnetic radiation into space.

3. The radio bubbles are assumed to be spherically homogeneous**.

To use the ACD simulation, the user chooses and adjusts parameters such as Max Range, Transmit and Listen times*** and N, the Drake equation estimate of the number of ICs in the galaxy at any given instant. During a simulation run, potential detections are tallied and the overall probability of detection is displayed.
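For readers who want to experiment with the idea, here is a minimal, independent Python sketch of the same Monte Carlo logic (this is not Guay’s LiveCode source, and every parameter below is an arbitrary assumption): civilizations appear at random places and times, each transmits for a while and then listens, and a potential detection is counted whenever one civilization’s expanding radio shell sweeps over another while the latter is listening.

```python
import math
import random

C = 1.0             # light years per year
REGION = 10_000.0   # side of the square region, LY
SPAN = 100_000.0    # years of history simulated
N_CIV = 500         # civilizations appearing during that span
T_TX = 150.0        # years each civilization transmits ("Radio Age")
T_RX = 300.0        # years each civilization then listens

random.seed(1)
civs = [(random.uniform(0, REGION), random.uniform(0, REGION),
         random.uniform(0, SPAN)) for _ in range(N_CIV)]

detections = 0
for i, (xi, yi, bi) in enumerate(civs):        # i transmits
    for j, (xj, yj, bj) in enumerate(civs):    # j listens
        if i == j:
            continue
        d = math.hypot(xi - xj, yi - yj)
        arrive = bi + d / C                    # shell's leading edge reaches j
        depart = arrive + T_TX                 # trailing edge passes j
        listen0 = bj + T_TX                    # j listens after its own Radio Age
        listen1 = listen0 + T_RX
        if arrive < listen1 and listen0 < depart:   # time windows overlap
            detections += 1

pairs = N_CIV * (N_CIV - 1)
print(f"{detections} potential detections out of {pairs} ordered pairs")
```

Only a small fraction of pairs ever line up, and the count scales directly with the transmit and listen windows: shorten the Radio Age and the detections evaporate.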

About two years ago, as the project continued to evolve, I became aware of Stephen Webb’s encyclopedic book on the Fermi Paradox, If the Universe is Teeming with Aliens … Where is Everybody? This book was most influential in my thinking and in the way I shaped the existing version of the ACD simulation.


A snapshot of the main screen of the ACD simulation midway through a 10,000 year run.

A Webb review of the ACD simulation is available here: http://stephenwebb.info/category/fermi-paradox/

And you can download it here at this Dropbox link:

https://www.dropbox.com/sh/dlkx24shyfjsoax/AADeFd2wZyZxvLYHU2f4jJ0ha?dl=0

Conclusions? The ACD simulation dramatically demonstrates that there is indeed a synchronicity problem that automatically arises when ICs attempt to detect one another. And for reasonable (based on Earth’s specifications) Drake equation parameter selections, detection potentials are shown to be typically hundreds of years apart. In other words, we can expect to search for a few hundred years before finding another IC in our section of the galaxy. When you consider Occam’s razor, is not this synchronicity problem the most logical resolution to the Fermi Paradox?

Footnotes:

* The thickness of the Milky Way is small compared to its diameter. So for regions close to the center of the thickness, we can approximate with a 2-dimensional model.

** Careful consideration has to be given to this last assumption. Of course, it is not accurate, in that the radiation from a typical IC is composed of many different sources with widely varying parameters, as it is on Earth. But the bottom line is that the homogeneous distribution gives the best-case scenario of detection potential. An example of when to apply this thinking is to consider laser transmission vs radio broadcast. Since a laser would presumably be highly directed and therefore more intense at greater distances, the user of the ACD simulation might choose a higher Max Range but at the same time realize that pointing problems will make detection potential much smaller than the ACD indicates. The ACD does not take this directly into consideration. Room for the ACD to grow?

*** One of the features of this simulation is that the user can make independent selections of both the transmit and listening times of ICs, whereas the Drake equation lumps them together in the lifetime parameter.

Scott Guerin

I grew up north of Milwaukee, Wisconsin and was the kid in 5th grade who would draw a nuclear reactor on the classroom’s chalkboard. My youthful designs were influenced by Voyage to the Bottom of the Sea, Lost in Space, everything NASA, and 2001: A Space Odyssey. In the mid-70s, I was a technical illustrator at the molecular biology laboratory at UW Madison and, after graduating with a fine arts degree, I went on to a 30-year career as an interpretive designer of permanent exhibits in science and history museums.

I began visually exploring SETI over two years ago in order to answer three questions: First, why is such a thought-provoking subject so often presented only in math and graphs, thereby limiting the information to experts? Secondly, why is the Fermi Paradox a paradox? Thirdly, what form might an interstellar “we are here” signaling technology take?

Using Sketchup, I built a simple galactic model to see what scenarios matched the current state of affairs: silence and absence. At a scale of 1 meter = 1 light year, I positioned Sol appropriately, and randomly “dropped” representations of civilizations (I refer to them as CivObjects) into the model. Imagine dropping a cup full of old washers, nails, wires, and screws onto a flat, 10” plate and seeing if any happen to overlap with a grain-of-salt-sized solar system (and that speck is still ~10⁵ too large).

The short answer is that they didn’t overlap, and I’ve concluded that the synchronicity issue, combined with weak listening and looking protocols, is a strong answer to the paradox. When synchronicity is considered along with the sheer rarity of emitting civilizations (my personal stance), the silence makes even more sense.


For scale, the green area at lower right represents the Kepler star field if it were a ~6,000 LY diameter sphere. The solid discs represent currently emitting civilizations, the halos represent civilizations that have stopped emissions over time, and the lines and wedges represent directed communications. I sent this diagram to Paul and Marc at Centauri Dreams who were kind enough to pass it on to several leading scientists and they graciously, and quickly, replied with encouragement.

Curtis Charles Mead’s 2013 Harvard dissertation “A Configurable Terasample-per-second Imaging System for Optical SETI,” George Greenstein’s Understanding the Universe, and Tarter’s and the Benfords’ papers, among others, were influential in my next steps. I realized the halos were unrealistic representations of a civilization’s electromagnetic emissions and that if you could see them from afar, they could be visualized as prickly, 3-dimensional sea-urchin-like artifacts, with tight beams of powerful radar, microwave, and laser emanating from a mushy sphere of less directional, weaker electromagnetic radiation.


From afar, Earth’s EM halo is a lumpy, flattened sphere some 120 LY in radius, dating to the first radio experiments in the late 1890s. The 1974 Arecibo message toward M13 is shown being emitted at the 10 o’clock position.

From Tarter’s 2001 paper: “At current levels of sensitivity, targeted microwave searches could detect the equivalent power of strong TV transmitters at a distance of 1 light year (the red sphere at center in the diagram), or the equivalent power of strong military radars to 300 ly, and the strongest signal generated on Earth (Arecibo planetary radar) to 3000 ly, whereas sky surveys are typically two orders of magnitude less sensitive. The sensitivity of current optical searches could detect megajoule pulses focused with a 10-m telescope out to a distance of 200 ly.”
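Those ranges follow from inverse-square bookkeeping: for a fixed receiver sensitivity, detection distance scales as the square root of the effective radiated power. A small sketch (the power ratio is inferred from Tarter’s quoted distances, not an independent figure):

```python
import math

def detection_range(base_range_ly: float, power_ratio: float) -> float:
    """Range if the source is power_ratio times stronger (r scales as sqrt(P))."""
    return base_range_ly * math.sqrt(power_ratio)

tv_range_ly = 1.0                           # strong TV transmitter baseline
print(detection_range(tv_range_ly, 9.0e6))  # ≈ 3000 ly, since 3000² = 9 million
```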


In this speculative diagram, two civilizations “converse” across 70 LY. Mead’s paper confirms that the aiming accuracy needed to correct for the proper motion of the stars, given a laser beam just a handful of AU wide at the distance illustrated, is within human grasp. The civilizations shown would most likely have been emitting EM for hundreds of years, so their raw EM halos are so large and diffuse that they cannot be shown in the diagram. The magenta blob represents the elemental EM “hum” of a civilization within a couple of LY, the green spikes represent tightly beamed microwaves for typical communications and radar, while the yellow spikes are lasers reaching out to probes, being used as light-sail boosters, and fostering long-distance high-bandwidth communications. Each civilization has an EM fingerprint, affected by its system’s ecliptic angle and rotation, persistence of ability, and types of technologies deployed — these equate to a unique CivObject.

In advance of achieving the goal of a fully parametric 3D model, I manually animated several kinds of civilizations and their interactions by imagining a CivObject as a variant of a Minkowski space-time cone. I move the cone’s Z axis (time) through a galactic hypersurface to illustrate a civilization’s history of passive and intentional transmission, as well as probes at sub-lightspeed. A CivObject’s anatomy reveals the course of a civilization’s history and I like to think of them as distant cousins of Hari Seldon’s prime radiant. https://vimeo.com/195239607 password: setiwow!

The anatomy of a CivObject allows arbitrary time scales to be visualized as a function of xy directionality, EM strength, and type of emission. Below is Earth’s, as a reference. Increasing transmission power is suggested by color.


I found it easy to animate transmissions but continue to struggle with visualizing periods of listening and the strength of receivers. Like Guay, I concluded that a potential detection can occur only when a transmission passes through a listening civilization. A “Conversing” model designed to actually simulate communication interactions needs to address both ends of “the line” with a full matrix of transmitter/receiver power ratios as well as sending/listening durations, directions, sensitivities, and intensities. In addition, a more realistic galactic model including 3D star locations, the galactic habitable zone (GHZ), and interstellar extinction/absorption rates is needed.

And now for some sci-fi

A few months before KIC 8462852 was announced and Dyson Swarms became all the rage, I noticed one of those old ventilators on top of a barn roof and thought that if a Kardashev II civilization scaled it up to roughly 1 AU in diameter, it would become a solar-powered, omni-directional signalling device capable of sending an “Intelligence was here” message across interstellar space. I called it a Dyson Shutter.

Imagine a star surrounded by a number of ribbon-like light sails connected at their poles. Each vane’s stability, movement, and position are controlled by the angle of the sail relative to incoming photons from the central star. The shutter would be a high-tech, ultra-low-bandwidth, scalable construct. I have imagined that each sail, at the equator, would be no less than one Earth diameter wide, which is at the lower end of Kepler-grade detection.
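That “one Earth diameter” floor is easy to check against transit photometry: an opaque Earth-sized patch crossing a Sun-like star blocks a fraction (R_Earth/R_Sun)² of its light. A quick sketch (treating the vane as a circular disc, which a ribbon is not, so this is order-of-magnitude only):

```python
R_EARTH = 6.371e6    # m
R_SUN = 6.957e8      # m

depth = (R_EARTH / R_SUN) ** 2
print(f"transit depth ≈ {depth * 1e6:.0f} ppm")   # ≈ 84 ppm
```

Kepler could reach dips of this order on quiet stars, which is why an Earth-width vane marks a plausible detection floor.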

Depending on the number constructed, the vanes could be programmed to shift into simple configurations such as Fibonacci and prime number sequences.



I imagine the Dyson Shutter remains in a stable message period for hundreds of rotations. Perhaps there are “services” for the occasional visitor, perhaps it has defenses against comets, incoming asteroids, or inter-galactic graffiti artists. Perhaps it is an intelligent being itself but is it a lure, a trap, a collector, or colleague? Is it possible Tabby’s star is a Dyson Shutter undergoing a multi-year message reconfiguration?


The shutter’s poles are imagined to be filled with command and control systems, manufacturing facilities, spaceports, etc.

Wrap

We hope that our work as presented here might inspire some of you to join the ranks of the Citizen Scientist. There are many opportunities and science needs the help. With today’s access to information and digital tools, anyone with a little passion for their ideas and a lot of imagination and persistence can help communicate complex issues to the public and make contributions to science. We hope that our stories resonate with at least some of you. Please let us know what you think and let’s all push back on the frontiers of ignorance!

