In what spirit do we pursue experimentation, and with what criteria do we judge the results? Marc Millis has been thinking and writing about such questions in the context of new propulsion concepts for a long time. As head of NASA’s Breakthrough Propulsion Physics program, he looked for methodologies by which to push the propulsion envelope in productive ways. As founding architect of the Tau Zero Foundation, he continues the effort through books like Frontiers of Propulsion Science, travel and conferences, and new work for NASA through TZF. Today he reports on a recent event that gathered people who build equipment and test for exotic effects. A key issue: Ways forward that retain scientific rigor and a skeptical but open mind. A quote from Galileo seems appropriate: “I deem it of more value to find out a truth about however light a matter than to engage in long disputes about the greatest questions without achieving any truth.”
by Marc G Millis
A workshop on propellantless propulsion was held at a sprawling YMCA campus of classy rusticity in Estes Park, Colorado, from September 10 to 14. These are becoming annual events; the prior ones were held in Los Angeles in November 2017 and in Estes Park in September 2016. It is a fairly small gathering of only about 30 people.
It was at the 2016 event that three other labs reported the same thrust that Jim Woodward and his team had been reporting for some time with the “Mach Effect Thruster” (which also goes by the name “Mach Effect Gravity Assist” device). Backed by those independent replications, NASA awarded Woodward’s team NIAC grants. Updates on this work and several other concepts were discussed at this workshop. Proceedings will be published after all the individual reports are rounded up and edited.
Before I go on to describe these updates, I feel it would be helpful to share a technique that I regularly use when trying to assess potential breakthrough concepts. I began using this technique when I ran NASA’s Breakthrough Propulsion Physics project, to help decide which concepts to watch and which to skip.
When faced with research that delves into potential breakthroughs, one faces the challenge of distinguishing which of those crazy ideas might be the seeds of breakthroughs and which are merely crazy. In retrospect, it is easy to tell the difference. After years of continued work, the genuine breakthroughs survive, along with infamous quotes from their naysayers. Meanwhile the more numerous crazy ideas are largely forgotten. Making that distinction before the fact, however, is difficult.
So how do I tell that difference? Frankly, I can’t. I’m not clairvoyant nor brilliant enough to tell which idea is right (though it is easy to spot flagrantly wrong ideas). What I can judge, and what needs to be judged, is the reliability of the research. Regardless of whether the research reports supportive or dismissive evidence of a new concept, those findings mean nothing unless they are trustworthy. The most trustworthy results come from competent, rigorous researchers who are impartial – meaning they are equally open to positive or negative findings. Therefore, I first look for the impartiality of the source – where I will ignore “believers” or pedantic pundits. Next, I look to see if their efforts are focused on the integrity of the findings. If experimenters are systematically checking for false positives, then I have more trust in their findings. If theoreticians go beyond just their theory to consider conflicting viewpoints, then I pay more attention. And lastly, I look to see if they are testing a critical make-break issue or just some less revealing detail. If they won’t focus on a critical issue, then the work is less relevant.
Consider the consequences of that tactic: If a reliable researcher is testing a bad idea, you will end up with a trustworthy refutation of that idea. Null results are progress – knowing which ideas to set aside. Reciprocally, if a sloppy or biased researcher is testing a genuine breakthrough, then you won’t get the information you need to take that idea forward. Sloppy or biased work is useless (even if from otherwise reputable organizations). The ideal situation is to have impartial and reliable researchers studying a span of possibilities, where any latent breakthrough in that suite will eventually reveal itself (the “pony in the pile”).
Now, back to the workshop. I’ll start with the easiest topic, the infamous EmDrive. I use the term “infamous” to remind you that (1) I have a negative bias that can skew my impartiality, and (2) there are a large number of “believers” whose experiments never passed muster (which led to my negative bias and overt frustration).
Three different tests of the EmDrive, of varying degrees of rigor, were reported. All of the tests indicated that the claimed thrust is probably attributable to false positives. The most thorough tests were from the Technical University of Dresden, Germany, led by Martin Tajmar, where his student Marcel Weikert presented the EmDrive tests and Matthias Kößling the details of their thrust stand. They are testing more than one version of the EmDrive, under multiple conditions, and all with alertness for false positives. Their interim results show that thrusts are measured when the device is not in a thrusting mode – meaning that something else is creating the appearance of a thrust. They are not yet fully satisfied with the reliability of their findings and tests continue. They want to trace the apparent thrust to its specific cause.
The next big topic was Woodward’s Mach Effect Thruster – determining if the previous positive results are indeed genuine, and then determining if they are scalable to practical levels. In short – it is still not certain whether the Mach Effect Thruster is demonstrating a genuine new phenomenon or a common experimental false positive. In addition to the work of Woodward’s team, led by Heidi Fearn, the Dresden team also had substantial progress to report, with Maxime Monette covering the Mach Effect thruster details in addition to the thrust stand details from Matthias Kößling. There was also an analytical assessment based on conventional harmonic oscillators, plus more than one presentation related to the underlying theory.
One of the complications that developed over the years is that the original traceability between Woodward’s theory and the current thruster hardware has thinned. The thruster has become a “black box,” where the emphasis is now on the empirical evidence and less on the theory.
Originally, the thruster hardware closely followed the 1994 patent which itself was a direct application of Woodward’s 1990 hypothesized fluctuating inertia. It involved two capacitors at opposite ends of a piezoelectric separator, where the capacitors experience the inertial fluctuations (during charging and discharging cycles) and where the piezoelectric separator cyclically changes length between these capacitors.
Its basic operation is as follows: While the rear capacitor’s inertia is higher and the forward capacitor lower, the piezoelectric separator is extended. The front capacitor moves forward more than the rear one moves rearward. Then, while the rear capacitor’s inertia is lower and the forward capacitor higher, the piezoelectric separator is contracted. The front capacitor moves backward less than the rear one moves forward. Repeating this cycle shifts the center of mass of the system forward – apparently violating conservation of momentum.
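The bookkeeping of that cycle can be sketched numerically. What follows is a toy kinematic model of my own, with made-up masses and stroke lengths: it simply assumes momentum is conserved using whatever the instantaneous inertias happen to be, so each stroke is split in inverse proportion to the two masses. It illustrates why the cycle appears to shift the center of mass, not whether inertia actually fluctuates.

```python
# Toy kinematic bookkeeping of the described cycle. All numbers are
# invented; the only rule applied is that each piezo stroke is shared
# between the two capacitors in inverse proportion to their
# instantaneous (hypothetically fluctuating) inertias.
m0 = 1.0      # nominal capacitor mass (arbitrary units)
dm = 0.1      # hypothetical inertia fluctuation amplitude
stroke = 1.0  # piezoelectric stroke length (arbitrary units)

x_rear, x_front = 0.0, 1.0  # positions along the thrust axis

for cycle in range(3):
    # Extension: rear capacitor heavy (m0 + dm), front light (m0 - dm),
    # so the light front end moves forward more than the rear moves back.
    m_r, m_f = m0 + dm, m0 - dm
    x_front += stroke * m_r / (m_r + m_f)
    x_rear -= stroke * m_f / (m_r + m_f)
    # Contraction: fluctuations swap, front heavy, rear light,
    # so the heavy front end moves back less than the rear moves forward.
    m_r, m_f = m0 - dm, m0 + dm
    x_front -= stroke * m_r / (m_r + m_f)
    x_rear += stroke * m_f / (m_r + m_f)
    com = 0.5 * (x_rear + x_front)  # rest masses equal between strokes
    print(f"cycle {cycle + 1}: center of mass = {com:.3f}")
```

With these invented numbers the center of mass creeps forward by dm · stroke / m0 each cycle, which is exactly the apparent conservation-of-momentum puzzle described above.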
The actual conservation of momentum is more difficult to assess. The original conservation laws are anchored to the idea of an immutable connection between inertia and an inertial frame. The theory behind this device deals with open questions in physics about the origins and properties of inertial frames, specifically invoking “Mach’s Principle.” In short, that principle is ‘inertia here because of all the matter out there.’ Another related physics term is “Inertial Induction.” Skipping past all the open issues, the upshot is that variations in inertia would require revisions to the conservation laws. It’s an open question.
Back to the tale of the evolved hardware. Over the years, the hardware configuration changed. As Woodward and his team tried different ways to increase the observed thrust, the ‘fluctuating inertia’ components and the ‘motion’ components were merged. Both the motions and mass fluctuations now occur in a stack of piezoelectric disks. Thereafter, the emphasis shifted to the empirical observations. There were no analyses to show how to connect the original theory to this new device. The Dresden team did develop a model to link the theory to the current hardware, but determining its viability is part of the tests that are still unfinished [Tajmar, M. (2017). Mach-Effect thruster model. Acta Astronautica, 141, 8-16.].
Even with the disconnect between the original theory and the hardware now under test, there were a couple of presentations about the theory, one by Lance Williams and the other by José Rodal. Lance, reporting on discussions he had when attending the April 2018 meeting of the American Physical Society, Division of Gravitational Physics, suggested how to engage the broader physics community about this theory, such as using the more common term “Inertial Induction” instead of “Mach’s Principle.” Lance elaborated on the prevailing views (such as the absence of Maxwellian gravitation) that would need to be brought into the discussion – facing the constructive skepticism to make further advances. José Rodal elaborated on the possible applicability of “dilatons” from the Kaluza–Klein theory of compactified dimensions. Amid these and other presentations, there was lively discussion involving multiple interpretations of well-established physics.
An additional provocative model for the Mach Effect Thruster came from an interested software engineer, Jamie Ciomperlik, who dabbles in these topics for recreation. In addition to his null tests of the EmDrive, he created a numerical simulation for the Mach Effect using conventional harmonic oscillators. The resulting complex simulations showed that, with the right parameters, a false positive thrust could result from vibrational effects. After lengthy discussions, it was agreed to examine this more closely, both experimentally and analytically. Though the experimentalists already knew of possible false positives from vibration, they did not previously have an analytical model to help hunt for these effects. One of the next steps is to check how closely the analysis parameters match the actual hardware.
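To see how vibration alone can mimic thrust, here is a minimal sketch in the same spirit (my own construction, not Ciomperlik’s actual simulation; every parameter is invented): a mass on a spring with a small quadratic nonlinearity, shaken by a zero-mean sinusoidal drive. The nonlinearity rectifies the vibration into a steady offset that a thrust balance could misread as a DC thrust.

```python
# Vibration false-positive toy model (assumed parameters throughout).
# A driven, damped oscillator with a slightly asymmetric (quadratic)
# spring develops a nonzero mean deflection even though the drive
# force averages to zero over each cycle.
import math

m, k, c = 1.0, 1.0, 0.1     # mass, spring constant, damping (arbitrary)
eps = 0.05                  # small quadratic spring nonlinearity (assumed)
F0, w = 0.2, 1.0            # drive amplitude and frequency (at resonance)
dt, steps = 1e-3, 400_000   # semi-implicit Euler integration

x, v, xsum, nsum = 0.0, 0.0, 0.0, 0
for i in range(steps):
    a = (F0 * math.sin(w * i * dt) - c * v - k * x - eps * x * x) / m
    v += a * dt
    x += v * dt
    if i >= steps // 2:     # average only after transients decay
        xsum += x
        nsum += 1
mean_x = xsum / nsum
print(f"mean deflection = {mean_x:.4f}")  # nonzero despite zero-mean drive
```

For oscillation amplitude A the steady offset is roughly −eps·A²/(2k), growing with the square of the vibration amplitude; one reason such artifacts are worth hunting for analytically rather than by inspection alone.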
Quantum approaches were also briefly covered, where Raymond Chiao discussed the negative energy densities of Casimir cavities and Jonathan Thompson (a prior student of Chiao’s) gave an update on experiments to demonstrate the “Dynamical Casimir effect” – a method to create a photon rocket using photons extracted from the quantum vacuum.
There were several other presentations too, spanning topics of varying relevance and fidelity. Some of these were very speculative works, whose usefulness can be compared to the thought-provoking effect of good science fiction. They don’t have to be right to be enlightening. One was from retired physicist and science fiction writer, John Cramer, who described the assumptions needed to induce a wormhole using the Large Hadron Collider (LHC) that could cover 1200 light-years in 59 days.
Representing NASA’s Innovative Advanced Concepts (NIAC), Ron Turner gave an overview of the scope and how to propose for NIAC awards.
A closing thought about consequences. By this time next year, we will have definitive results on the Mach Effect Thruster, and the findings on the EmDrive will likely arrive sooner. Depending on whether the results are positive or negative, here are my recommendations on how to proceed in a sane and productive manner. These recommendations are based on history repeating itself, using both the good and bad lessons:
If It Does Work:
- Let the critical reviews and deeper scrutiny run their course. If this is real, a lot of people will need to repeat it for themselves to discover what it’s about. This takes time, and not all of it will be useful or pleasant. Pay more attention to those who are attempting to be impartial, rather than those trying to “prove” or “disprove.” Because divisiveness sells stories, expect press stories focusing on the controversy or hype, rather than reporting the blander facts.
- Don’t fall for the hype of exaggerated expectations that are sure to follow. If you’ve never heard of the “Gartner Hype Cycle,” then now’s the time to look it up. Be patient, and track the real test results more than the news stories. The next progress will still be slow. It will take a while and a few more iterations before the effects start to get unambiguously interesting.
- Conversely, don’t fall for the pedantic disdain (typically from those whose ideas are more conventional and less exciting). You’ll likely hear dismissals like, “Ok, so it works, but it’s not useful,” or “We don’t need it to do the mission.” Those dismissals only have a kernel of truth in a very narrow, near-sighted manner.
- Look out for the sharks and those riding the coattails of the bandwagon. Sorry to mix metaphors, but it seemed expedient. There will be a lot of people coming out of the woodwork in search of their own piece of the action. Some will be making outrageous claims (hype) and selling how their version is better than the original. Again, let the test results, not the sales pitches, help you decide.
If It Does Not Work:
- Expect some to dismiss the entire goal of “spacedrives” based on the failure of one or two approaches. This is a “generalization error” which might make some feel better, but serves no useful purpose.
- Expect others to chime in with their alternative new ideas to fill the void, the weakest of which will be evident by their hyped sales pitches.
- Follow the advice given earlier: When trying to figure out which idea to listen to, check their impartiality and rigor. Listen to those who are trying neither to sell nor to dismiss, but rather to honestly investigate and report. When you find those service providers, keep tuned in to them.
- To seek new approaches toward the breakthrough goals, look for the intersection of open questions in physics to the critical make-break issues of those desired breakthroughs. Those intersections are listed in our book Frontiers of Propulsion Science.
The Sagittarius Dwarf Galaxy is a satellite of the Milky Way, about 70,000 light years from Earth and on a trajectory that has it currently passing over the Milky Way’s galactic poles; i.e., perpendicular to the galactic plane. What’s intriguing about this satellite is that its path has taken it through the plane of our galaxy multiple times in the past, passages whose effects may still be traceable today. A team of scientists led by Teresa Antoja (Universitat de Barcelona) is now using Gaia data to trace evidence of its effects between 300 and 900 million years ago.
Image: The Sagittarius dwarf galaxy, a small satellite of the Milky Way that is leaving a stream of stars behind as an effect of our Galaxy’s gravitational tug, is visible as an elongated feature below the Galactic centre and pointing in the downwards direction in the all-sky map of the density of stars observed by ESA’s Gaia mission between July 2014 and May 2016. Credit: ESA/Gaia/DPAC.
This story gets my attention because of my interest in the Gaia data and the uses to which they can be put. We just looked at interstellar interloper ‘Oumuamua and saw preliminary work on tracing it back to a parent star. No origin could be determined, but the selection of early candidates was an indication of an evolving method in using the Gaia dataset, which will expand again with the 2021 release. The Sagittarius Dwarf galaxy compels a different method, and we’ll be seeing quite a few new investigations with methods of their own growing out of this attempt to begin a three-dimensional map of the Milky Way. A kinematic census of over one billion stars will come out of Gaia.
A billion stars represents less than 1 percent of the galactic population, so you can see how far we have to go, but we’re already finding innovative ways to put the Gaia data to use, as witness Antoja’s new paper in Nature. As we saw in ‘Oumuamua’s Origin: A Work in Progress, Gaia uses astrometric methods to measure not just the position but the velocity of stars on the plane of the sky. We also get a subset of a few million stars for which the mission will include radial velocity, producing stellar motion in a three-dimensional ‘phase space.’
From the Antoja paper:
By exploring the phase space of more than 6 million stars (positions and velocities) in the disk of the Galaxy in the first kiloparsecs around the Sun from the Gaia Data Release 2 (DR2, see Methods), we find that certain phase space projections show plenty of substructures that are new and that had not been predicted by existing models. These have remained blurred until now due to the limitations on the number of stars and the precision of the previously available datasets.
Antoja’s team found that these unique data revealed an unexpected pattern when stellar positions were plotted against velocity. The pattern is a snail shell shape that emerges when plotting the stars’ altitude above or below the plane of the galaxy against their velocity in the same direction. Nothing like this had been noted before, nor could it have been without Gaia.
“At the beginning the features were very weird to us,” says Antoja. “I was a bit shocked and I thought there could be a problem with the data because the shapes are so clear. It looks like suddenly you have put the right glasses on and you see all the things that were not possible to see before.”
Image: This graph shows the altitude of stars in our Galaxy above or below the plane of the Milky Way against their velocity in the same direction, based on a simulation of a near collision that set millions of stars moving like ripples on a pond. The snail shell-like shape of the pattern reproduces a feature that was first seen in the movement of stars in the Milky Way disc using data from the second release of ESA’s Gaia mission, and interpreted as an imprint of a galactic encounter. Credit: T. Antoja et al. 2018.
Stellar motions, we are learning, produce ripples that may no longer show up in the stars’ visible distribution, but do emerge when their velocities are taken into consideration. Antoja and colleagues believe the cause of this motion was the Sagittarius Dwarf Galaxy, whose last close pass would have perturbed many stars in the Milky Way. The timing is the crux, for estimates of when the snail shell pattern began fit with the timing of the last dwarf galaxy pass.
As with the ‘Oumuamua study, we’re at the beginning of teasing out newly available information from the trove that Gaia is giving us. To firm up the connection with the Sagittarius Dwarf Galaxy, Antoja’s team has much to do as it moves beyond early computer modeling and analysis, but the evidence for perturbation, whatever the source, is clear. From the paper:
…an ensemble of stars will stretch out in phase space, with the range of frequencies causing a spiral shape in this projection. The detailed time evolution of stars in this toy model is described in Methods and shown in Extended Data Fig. 3. As time goes by, the spiral gets more tightly wound, and eventually, this process of phase mixing leads to a spiral that is so wound that the coarse-grained distribution appears to be smooth. The clarity of the spiral shape in the Z-VZ [vertical position and velocity] plane revealed by the Gaia DR2 data, implies that this time has not yet arrived and thus provides unique evidence that phase mixing is currently taking place in the disk of the Galaxy.
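The winding process the authors describe can be illustrated with a toy model (my own construction, in arbitrary units, not the paper’s actual calculation): give each star a vertical oscillation whose frequency decreases with amplitude, and an ensemble kicked at the same moment spreads in phase linearly with time, wrapping its (Z, Vz) distribution into an ever-tighter spiral.

```python
# Toy phase-mixing sketch. Amplitudes, the base frequency omega0, and
# the anharmonicity alpha are all assumed values for illustration.
import math

amps = [0.2 + 0.2 * i for i in range(5)]  # vertical amplitudes (assumed)
omega0, alpha = 1.0, 0.3                  # base frequency, anharmonicity (assumed)

def state(A, t):
    """(Z, Vz) of a star of amplitude A at time t."""
    w = omega0 - alpha * A                # larger excursions oscillate slower
    return A * math.sin(w * t), A * w * math.cos(w * t)

def phase_spread(t):
    """Range of oscillation phases across the ensemble; grows linearly in t."""
    phases = [(omega0 - alpha * A) * t for A in amps]
    return max(phases) - min(phases)

for t in (0.0, 5.0, 20.0):
    z, vz = state(amps[-1], t)
    print(f"t={t:5.1f}  spread={phase_spread(t):.2f} rad  "
          f"outer star (Z,Vz)=({z:+.2f},{vz:+.2f})")
```

Once the spread reaches many multiples of 2π, the coarse-grained distribution looks smooth again, which is the paper’s point: the spiral’s continued visibility in the DR2 data dates the perturbation.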
The shell-like pattern thus contains information about the distribution of matter in the Milky Way and the nature of stellar encounters. The bigger picture is that untangling the evolution of the galaxy and explaining its structure is what Gaia was designed for, a process that is now gathering momentum. We’re only beginning to see what options this mission is opening up.
The paper is Antoja et al., “A Dynamically Young and Perturbed Milky Way Disk,” Nature 561 (2018), 360-362 (abstract / preprint).
I always imagined Titan’s surface as a relatively calm place, perhaps thinking of the Huygens probe in an exotic, frigid landing zone that I saw as preternaturally still. Then, prompted by an analysis of what may be dust storms on Titan, I revisited what Huygens found. It turns out the probe experienced maximum winds about ten minutes after beginning its descent, at an altitude of some 120 kilometers. It was below 60 kilometers that the wind dropped. And during the final 7 kilometers, the winds were down to a few meters per second. At the surface, according to the European Space Agency, Huygens found a light breeze of 0.3 meters per second.
But is Titan’s surface always that quiet? The Cassini probe has shown us that Titan experiences interesting weather driven by a methane cycle that operates at temperatures far below those of Earth’s water cycle, filling its lakes and seas with methane and ethane. The evaporation of hydrocarbon molecules produces clouds that lead to rain, with conditions varying according to season. Conditions at the time of the equinox, with the Sun crossing Titan’s equator, are particularly lively, producing massive clouds and storms in the tropical regions.
So a lot can happen here depending on where and when we sample. Sebastien Rodriguez (Université Paris Diderot, France) and colleagues noticed unusual brightenings in infrared images made by Cassini near the moon’s 2009-2010 northern spring equinox. The paper refers to these as “three distinctive and short-lived spectral brightenings close to the equator.”
The first assumption was that these were clouds, but that idea was quickly discounted. Says Rodriguez:
“From what we know about cloud formation on Titan, we can say that such methane clouds in this area and in this time of the year are not physically possible. The convective methane clouds that can develop in this area and during this period of time would contain huge droplets and must be at a very high altitude — much higher than the 6 miles (10 kilometers) that modeling tells us the new features are located.”
Image: This compilation of images from nine Cassini flybys of Titan in 2009 and 2010 captures three instances when clear bright spots suddenly appeared in images taken by the spacecraft’s Visual and Infrared Mapping Spectrometer. The brightenings were visible only for a short period of time — between 11 hours and five Earth weeks — and cannot be seen in previous or subsequent images. Credit: NASA/JPL-Caltech/University of Arizona/University Paris Diderot/IPGP.
In a paper just published in Nature Geoscience, the researchers likewise discount the possibility that Cassini had detected surface features, areas of frozen methane or lava flows of ice. The problem here is that the bright features in the infrared were visible for relatively short periods — 11 hours to 5 weeks — while surface spots should have remained visible for longer. Nor do they bear the chemical signature expected from such formations at the surface.
Image: This animation — based on images captured by the Visual and Infrared Mapping Spectrometer on NASA’s Cassini mission during several Titan flybys in 2009 and 2010 — shows clear bright spots appearing close to the equator around the equinox that have been interpreted as evidence of dust storms. Credit: NASA/JPL-Caltech/University of Arizona/University Paris Diderot/IPGP.
Rodriguez and team used computer modeling to show that the brightened features were atmospheric but extremely low, forming what is in all likelihood a thin layer of solid organic particles. Such particles form because of the interaction between methane and sunlight. Because the bright features occurred over known dune fields at Titan’s equator, Rodriguez believes that they are clouds of dust kicked up by wind hitting the dunes.
“We believe that the Huygens Probe, which landed on the surface of Titan in January 2005, raised a small amount of organic dust upon arrival due to its powerful aerodynamic wake,” says Rodriguez. “But what we spotted here with Cassini is at a much larger scale. The near-surface wind speeds required to raise such an amount of dust as we see in these dust storms would have to be very strong — about five times as strong as the average wind speeds estimated by the Huygens measurements near the surface and with climate models.”
Image: Artist’s concept of a dust storm on Titan. Researchers believe that huge amounts of dust can be raised on Titan, Saturn’s largest moon, by strong wind gusts that arise in powerful methane storms. Such methane storms, previously observed in images from the international Cassini spacecraft, can form above dune fields that cover the equatorial regions of this moon especially around the equinox, the time of the year when the Sun crosses the equator. Credit: NASA/ESA/IPGP/Labex UnivEarthS/University Paris Diderot.
In reaching this conclusion, the researchers analyzed Cassini spectral data and deployed atmospheric models and simulations to show that micrometer-sized solid organic particles from the dunes below were responsible, an indication of dust in the atmosphere that far exceeds what Huygens found at the surface. The winds associated with the phenomenon would be unusually strong, but could be explained by downbursts in the equinoctial methane storms.
If dust storms can be created by such winds, then Titan’s equatorial regions are still active, with the dunes undergoing constant change. We have a world that is active not only in its hydrocarbon cycle and its geology, but also in what we can call its ‘dust cycle.’ The only moon in the Solar System with a dense atmosphere and surface liquid offers yet another analogy with Earth, a similarity that highlights the complexity of this frigid, hydrocarbon-rich world.
The paper is Rodriguez et al., “Observational evidence for active dust storms on Titan at equinox,” Nature Geoscience 24 September 2018 (abstract).
The much discussed interstellar wanderer called ‘Oumuamua made but a brief pass through our Solar System, and was only discovered on the way out in October of last year. Since then, the question of where the intriguing interloper comes from has been the object of considerable study. This is, after all, the first object known to be from another star observed in our system. Today we learn that a team of astronomers led by Coryn Bailer-Jones (Max Planck Institute for Astronomy) has been able to put Gaia data and other resources to work on the problem.
The result: Four candidate stars identified as possible home systems for ‘Oumuamua. None of these identifications is remotely conclusive, as the researchers make clear. The significance of the work is in the process, which will be expanded as still more data become available from the Gaia mission. So in a way this is a preview of a much larger search to come.
What we are dealing with is the reconstruction of ‘Oumuamua’s motion before it encountered our Solar System, and here the backtracking becomes tangled with the object’s trajectory once we actually observed it. Its passage through the system, as well as the stars it encountered before it reached us, all factor into determining its origin.
What the Bailer-Jones team brings to the table is something missing in earlier attempts to solve the riddle of ‘Oumuamua’s home. We learned in June of 2018 that ‘Oumuamua’s orbit was not solely the result of gravitational influences, but that a tiny additional acceleration had been added when the object was close to the Sun. That brought comets into the discussion: Was ‘Oumuamua laden with ice that, sufficiently heated, produced gases that accelerated it?
The problem with that idea was that no such outgassing was visible on images of the object, the way it would be with comets imaged close to the Sun. Whatever the source of the exceedingly weak acceleration, though, it had to be factored into any attempt to extrapolate the object’s previous trajectory. Bailer-Jones and team manage to do this, offering a more precise idea of the direction from which the object came.
Image: This artist’s impression shows the first interstellar asteroid: `Oumuamua. This unique object was discovered on 19 October 2017 by the Pan-STARRS 1 telescope in Hawai`i. Subsequent observations from ESO’s Very Large Telescope in Chile and other observatories around the world show that it was travelling through space for millions of years before its chance encounter with our star system. `Oumuamua seems to be a dark red object, either elongated, as in this image, or else shaped like a pancake. Credit: ESO/M. Kornmesser.
At the heart of this work are the abundant data being gathered by the Gaia mission, whose Data Release 2 (DR2) includes position, on-sky motion and parallax information on 1.3 billion stars. As this MPIA news release explains, we also have radial velocity data — motion directly away from or towards the Sun — of 7 million of these Gaia stars. The researchers then added in Simbad data on an additional 220,000 stars to retrieve further radial velocity information.
To say this gets complicated is a serious understatement. 4500 stars turn up as potential homes for ‘Oumuamua, assuming both the object and the stars under consideration all moved along straight lines and at constant speeds. Then the researchers had to take into consideration the gravitational influence of all the matter in the galaxy. The likelihood is that ‘Oumuamua was ejected from a planetary system during the era of planet formation, and that it would have been sent on its journey by gravitational interactions with giant planets in the infant system.
Calculating its trajectory, then, could lead us back to ‘Oumuamua’s home star, or at least to a place close to it. Another assumption is that the relative speed of ‘Oumuamua and its parent star is comparatively slow, because objects are not typically ejected from planetary systems at high speed. Given all this, Bailer-Jones and team come down from 4500 candidates to four that they consider the best possibilities. None of these stars is currently known to have planets at all, much less giant planets, but none has been seriously examined for planets to this point.
Let’s pause on this issue, because it’s an interesting one. Digging around in the paper, I learned that systems with unstable gas giants would be more likely to eject planetesimals than systems with stable giant planets, a consequence of the eccentric orbits of multiple gas giants during an early phase of system instability. It also turns out that there are ways to achieve higher ejection velocities. Does ‘Oumuamua come from a binary star? Let me quote from the paper on this:
Higher ejection velocities can occur for planetesimals scattered in a binary star system. To demonstrate this, we performed a simple dynamical experiment on a system comprising a 0.1 M⊙ star in a 10 au circular orbit about a 1.0 M⊙ star. (This is just an illustration; a full parameter study is beyond the scope of this work.) Planetesimals were randomly placed between 3 au and 20 au from the primary, enveloping the orbit of the secondary… Once again most (80%) of the ejections occur at velocities lower than 10 km s⁻¹, but a small fraction is ejected at higher velocities in the range of those we observe (and even exceeding 100 km s⁻¹).
So keep this in mind in evaluating the candidate stars. One of these is the M-dwarf HIP 3757, which can serve as an example of how much remains to be done before we can claim to know ‘Oumuamua’s origin. Approximately 77 light years from Earth, the star as considered by these methods would have been within 1.96 light years of ‘Oumuamua about 1 million years ago. This is close enough to make the star a candidate given how much play there is in the numbers.
But the authors are reluctant to claim HIP 3757 as ‘Oumuamua’s home star because the relative speed between the object and the star is about 25 kilometers per second, making ejection by a giant planet in the home system less likely. More plausible on these grounds is HD 292249, which would have been within a slightly larger distance some 3.8 million years ago. Here we get a relative speed of 10 kilometers per second. Two other stars also fit the bill, one with an encounter 1.1 million years ago, the other at its closest 6.3 million years ago. Both are in the DR2 dataset and have been catalogued by previous surveys, but little is known about them.
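The numbers quoted above hang together in a satisfying way, and a rough consistency check is easy to run. This is my own back-of-the-envelope arithmetic, not anything from the paper: a star now roughly 77 light years away that passed close to ‘Oumuamua about 1 million years ago implies a mean separation speed of a few tens of kilometers per second, the same order as the ~25 km/s relative speed the authors report.

```python
# Rough consistency check (mine, not the paper's) of the HIP 3757 numbers:
# covering ~77 light years of present-day separation in ~1 million years
# implies a mean speed comparable to the quoted ~25 km/s relative velocity.
LY_KM = 9.4607e12    # kilometers per light year
MYR_S = 3.156e13     # seconds per million years

distance_ly = 77.0   # current distance to HIP 3757
time_myr = 1.0       # time since closest encounter

v_mean = distance_ly * LY_KM / (time_myr * MYR_S)
print(f"mean separation speed ~ {v_mean:.0f} km/s")
```

This ignores the star’s own motion through the galaxy, of course, so it is only an order-of-magnitude check, but it lands within a few km/s of the quoted value.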
Now note another point: None of the candidate stars in the paper are known to have giant planets, but higher speed ejections can still be managed in a binary star system, or for that matter in a system undergoing a close pass by another star. None of the candidates is known to be a binary. Thus the very mechanism of ejection remains unknown, and the authors are quick to add that they are working at this point with no more than a small percentage of the stars that could have been ‘Oumuamua’s home system.
Given that the 7 million stars in Gaia DR2 with 6D phase space information is just a small fraction of all stars for which we can eventually reconstruct orbits, it is a priori unlikely that our current search would find ‘Oumuamua’s home star system.
Yes, and bear in mind too that ‘Oumuamua is expected to pass within 1 parsec of about 20 stars and brown dwarfs every million years. Given all of this, the paper serves as a valuable tightening of our methods in light of the latest data we have about ‘Oumuamua, and points the way toward future work. The third Gaia data release is to occur in 2021, offering a sample of stars with radial velocity data ten times larger than DR2 [see the comments for a correction on this]. No one is claiming that ‘Oumuamua’s home star has been identified, but the process for making this identification is advancing, an effort that will doubtless pay off as we begin to catalog future interstellar objects making their way into our system.
The paper is Bailer-Jones et al., “Plausible home stars of the interstellar object ‘Oumuamua found in Gaia DR2,” accepted for publication in The Astrophysical Journal (preprint).
That small spacecraft can become game-changers, our topic last Friday, is nowhere more evident than in the success of Rover 1A and 1B, diminutive robot explorers that separated from the Hayabusa2 spacecraft at 0406 UTC on September 21 and landed soon after. Their target, the asteroid Ryugu, will be the site of detailed investigation not only by these two rovers, but also by two other landers, the German-built Mobile Asteroid Surface Scout (MASCOT) and Rover 2, the first of which is to begin operations early in October. Congratulations to JAXA, Japan’s space agency, for these early successes delivered by its Hayabusa2 mission.
Surface operations will be interesting indeed. Both rovers were released at an altitude of 55 meters above the surface, their successful deployment marking an advance over the original Hayabusa mission, which was unable to land its rover on the asteroid Itokawa in 2005. Assuming all goes well, the mission should gather three different samples of surface material for return to Earth in 2020. The third sample collection is to take advantage of Hayabusa2’s Small Carry-on Impactor (SCI), which will create a crater, exposing subsurface material for retrieval.
Why Ryugu? The object is a carbonaceous asteroid that has likely changed little since the Solar System’s early days, rich in organic material and offering us insight into the kind of objects that would have struck the Earth in the era when life’s raw materials, along with water, could have been delivered. It has also proven, as the JAXA team knew it would, a difficult landing site, with an uneven distribution of mass that produces variations in the gravitational pull over the surface.
On that score, it’s interesting to note that the Hayabusa2 controllers are sharing data with NASA’s OSIRIS-REx mission to asteroid Bennu. Likewise a sample return effort, OSIRIS-REx will face the same gravitational issues inherent in such small, irregular objects, which can be ameliorated by producing maps of each asteroid’s gravity. The three-dimensional models produced for the Dawn spacecraft at Ceres are the kind of software tools that will help both mission teams understand their targets better and ensure successful operations on the surface.
But back to Rover 1A and 1B, which have landed successfully and are both taking photographs and sending data, the first time we have landed and moved a probe autonomously on an asteroid surface. Although the first image was blurred because of the rover’s spin, it did display the receding Hayabusa2 spacecraft and the bright swath of the asteroid just below. Here’s JAXA’s mission tweet of that first image.
Says Tetsuo Yoshimitsu, who leads the MINERVA-II1 rover team:
Although I was disappointed with the blurred image that first came from the rover, it was good to be able to capture this shot, which shows the Hayabusa2 spacecraft as recorded by the rover. Moreover, with the image taken during the hop on the asteroid surface, I was able to confirm the effectiveness of this movement mechanism on a small celestial body and see the result of many years of research.
The ‘hop’ Yoshimitsu refers to is the rovers’ means of locomotion on the surface. Remember that these vehicles are no more than 18 centimeters wide and 7 centimeters high, weighing on the order of 1 kilogram. In Ryugu’s light gravity, the rovers will make small jumps across the surface, a motion carefully constrained so as not to reach the object’s escape velocity. Below is the first Rover-1A image taken during a hop.
Image: Captured by Rover-1A on September 22 at around 11:44 JST. Color image captured while moving (during a hop) on the surface of Ryugu. The left-half of the image is the asteroid surface. The bright white region is due to sunlight. Credit: JAXA.
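Why the hops must be so gentle is clear from a little arithmetic. The sketch below is my own illustration, using commonly quoted approximate values for Ryugu’s mass and radius (they do not appear in the article above): the escape velocity of so small a body comes out well under normal walking pace, so even a modest push could loft a rover off the asteroid for good.

```python
import math

# Back-of-the-envelope illustration (not mission code): escape velocity
# from Ryugu, using commonly quoted approximate values for its mass and
# mean radius. v_esc = sqrt(2GM/R).
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_ryugu = 4.5e11  # approximate mass of Ryugu, kg
R_ryugu = 450.0   # approximate mean radius of Ryugu, m

v_escape = math.sqrt(2 * G * M_ryugu / R_ryugu)
print(f"escape velocity ~ {v_escape:.2f} m/s")
```

With an escape velocity of only a few tenths of a meter per second, the hopping mechanism Yoshimitsu’s team spent years developing has to impart remarkably delicate impulses.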
And have a look at an image taken during landing operations before Rover-1B reached the surface. Here the asteroid terrain is clearly defined.
Image: Captured by Rover-1B on September 21 at around 13:07 JST. This color image was taken immediately after separation from the spacecraft. The surface of Ryugu is in the lower right. The coloured blur in the top left is due to the reflection of sunlight when the image was taken. Credit: JAXA.
Yuichi Tsuda is Hayabusa2 project manager:
I cannot find words to express how happy I am that we were able to realize mobile exploration on the surface of an asteroid. I am proud that Hayabusa2 was able to contribute to the creation of this technology for a new method of space exploration by surface movement on small bodies.
I would say Tsuda’s pride in his team and his hardware is more than justified. As we go forward with surface operations, let me commend Elizabeth Tasker’s fine work in spreading JAXA news in English. Even as JAXA offers live updates from Hayabusa2 in English and the official Hayabusa2 site offers its own coverage, Tasker, a British astrophysicist working at JAXA, has provided useful mission backgrounders like this one, as well as running the English-language Hayabusa2 Twitter account @haya2e_jaxa, and keeping up with her own Twitter account @girlandkat. There will be no shortage of Ryugu news in days ahead.