DE-STAR and Breakthrough Starshot: A Short History

Last Monday’s article on the Trillion Planet Survey led to an email conversation with Phil Lubin, its founder, in which the topic of Breakthrough Starshot inevitably came up. When I’ve spoken to Dr. Lubin before, it’s been at meetings related to Starshot or presentations on his DE-STAR concept. Standing for Directed Energy System for Targeting of Asteroids and exploRation, DE-STAR is a phased laser array that could drive a small payload to high velocities. We’ve often looked in these pages at the rich history of beamed propulsion, but how did the DE-STAR concept evolve in Lubin’s work for NASA’s Innovative Advanced Concepts office, and what was the path that led it to the Breakthrough Starshot team?

The timeline below gives the answer, and it’s timely because a number of readers have asked me about this connection. Dr. Lubin is a professor of physics at UC-Santa Barbara whose primary research beyond DE-STAR has involved the early universe in millimeter wavelength bands; he is a co-investigator on the Planck mission with more than 400 papers to his credit, and co-recipient of the 2006 Gruber Prize in Cosmology, shared with the COBE science team for its groundbreaking work. Below, he tells us how DE-STAR emerged.

By Philip Lubin

June 2009: Philip Lubin begins work on large-scale directed energy systems at UC Santa Barbara. Baseline developed is a laser phased array using MOPA [master oscillator power amplifier, a configuration consisting of a master laser (or seed laser) and an optical amplifier to boost the output power] topology. The DE system using this topology is named DE-STAR (Directed Energy System for Targeting of Asteroids and exploRation). Initial focus is on planetary defense and relativistic propulsion. Development program begins. More than 250 students have been involved in DE R&D at UCSB since.

February 14, 2013: UCSB group issues press release about DE-STAR program to generate public discussion about applications of DE to planetary defense, in anticipation of the February 15 close pass of asteroid 2012 DA14, which was to come within geosynchronous orbit. On February 15 the Chelyabinsk meteor hit. This singular coincidence of a press release and a strike the next day generated a significant change in interest in the possible use of large-scale DE for space applications. This “pushed the DE ball over the hill”.

August 2013: Philip Lubin and group begin publication of detailed technical papers in multiple journals. DE-STAR program is introduced in an invited SPIE [Society of Photo-optical Instrumentation Engineers] plenary talk at the annual Photonics meeting in San Diego. More than 50 technical papers and nearly 100 colloquia from his group have emerged since then. List of DE-STAR papers can be found here.

August 2013: 1st Interstellar Congress held in Dallas, Texas by Icarus Interstellar. Eric Malroy introduces concepts for the use of nanomaterials in sails.

August 2013: First proposal submitted to NASA for DE-STAR system from UC Santa Barbara.

January 2014: Work begins on extending previous UCSB paper to much longer “roadmap” paper which becomes “A Roadmap to Interstellar Flight” (see below).

February 11, 2014: Lubin gives colloquium on DE-STAR at the SETI Institute in Mountain View, CA. Summarizes UCSB DE program for planetary defense, relativistic propulsion and implications for SETI. SETI Institute researchers suggest Lubin speak with NASA Ames director Pete Worden, who was not at the talk. Worden eventually leaves Ames a year later, on March 31, 2015, to go to the Breakthrough Foundation. Lubin and Worden do not meet until 18 months later, at the Santa Clara 100YSS meeting (see below).

August 2014: Second proposal submitted to NASA for DE-STAR driven relativistic spacecraft. Known as DEEP-IN (Directed Energy Propulsion for Interstellar Exploration). Accepted and funded by NASA NIAC program as Phase I program. Program includes directed energy phased array driving wafer scale spacecraft as one option [Phase 1 report “A Roadmap to Interstellar Flight” available here].

April 2015: Lubin submits the “roadmap” paper to the Journal of the British Interplanetary Society.

June 2015: Lubin presents DE driven relativistic flight at Caltech Keck Institute meeting. Meets with Emmett and Glady W Technology Fund.

August 31, 2015: Lubin and Pete Worden attend 100YSS (100 Year Star Ship) conference in Santa Clara, CA [Worden is now executive director, Breakthrough Starshot, and former director of NASA Ames Research Center]. Lubin is invited by Mae Jemison (director of 100YSS) to give a talk about the UCSB NASA DE program as a viable path to interstellar flight. Worden has to leave before Lubin’s talk, but in a hallway meeting Lubin informs Worden of the UCSB NASA Phase I program for DE driven relativistic flight. This meeting takes place as Lubin recalls the Feb 2014 SETI Institute colloquium, where a discussion with Worden had been suggested. Worden asks for further information about the NASA program and Lubin sends him the paper “A Roadmap to Interstellar Flight” summarizing the NASA DEEP-IN program. Worden subsequently forwards the paper to Yuri Milner.

December 16, 2015: Lubin, Worden and Pete Klupar [chief engineer at Breakthrough Prize Foundation] meet at NASA Ames to discuss DEEP-IN program and “roadmap” paper.

December 2015: Milner calls for meeting with Lubin to discuss DEEP-IN program, “roadmap” paper and the prospects for relativistic flight.

January 2016: Private sector funding of UCSB DE for relativistic flight effort by Emmett and Glady W Technology Fund begins. Unknown to public – anonymous investor greatly enhances UCSB DE effort.

January 2016: First meeting with Milner in Palo Alto. Present are Lubin, Milner, Avi Loeb (Harvard University), Worden and Klupar. Milner sends “roadmap” paper to be reviewed by other physicists. A long series of calls and meetings ensues. This is the birth of the Breakthrough Starshot program.

March 2016: NASA Phase II proposal for DEEP-IN submitted; subsequently renamed Starlight. Accepted and funded by NASA.

March 2016: After multiple reviews of Lubin “roadmap” paper by independent scientists, Breakthrough Initiatives endorses idea of DE driven relativistic flight.

April 12, 2016: Public release of Breakthrough Starshot. Hawking endorses idea at NY public announcement.

To keep up with developments, the following websites are useful:

NASA Starlight (DE-STAR for interstellar relativistic flight):
http://www.deepspace.ucsb.edu/projects/starlight

Planetary Defense Application of DE-STAR:
http://www.deepspace.ucsb.edu/projects/directed-energy-planetary-defense

Implications for SETI:
http://www.deepspace.ucsb.edu/projects/implications-of-directed-energy-for-seti

2015 TG387: A New Inner Oort Object & Its Implications

Whether or not there is an undiscovered planet lurking in the farthest reaches of the Solar System, the search for unknown dwarf planets and other objects continues. Extreme Trans-Neptunian objects (ETNOs) are of particular interest. The closest they come to the Sun is well beyond the orbit of Neptune, with the result that they have little gravitational interaction with the giant planets. Consider them as gravitational probes of what lies beyond the Kuiper Belt.

Among the population of ETNOs are the most distant subclass, known as Inner Oort Cloud objects (IOCs), of which we now have three. Added to Sedna and 2012 VP113 comes 2015 TG387, discovered by Scott Sheppard (Carnegie Institution for Science), Chad Trujillo (Northern Arizona University) and David Tholen (University of Hawaiʻi). The object was first observed in 2015, leading to several years of follow-up observations necessary to obtain a good orbital fit.

For 2015 TG387 is a challenging catch, discovered at about 80 AU from the Sun but normally at far greater distance:

“We think there could be thousands of small bodies like 2015 TG387 out on the Solar System’s fringes, but their distance makes finding them very difficult,” Tholen said. “Currently we would only detect 2015 TG387 when it is near its closest approach to the sun. For some 99 percent of its 40,000-year orbit, it would be too faint to see, even with today’s largest telescopes.”

Image: The orbits of the new extreme dwarf planet 2015 TG387 and its fellow Inner Oort Cloud objects 2012 VP113 and Sedna, as compared with the rest of the Solar System. 2015 TG387 was nicknamed “The Goblin” by its discoverers, since its provisional designation contains “TG”, and the object was first seen around Halloween. Its orbit has a larger semi-major axis than both 2012 VP113 and Sedna, so it travels much farther from the Sun, out to 2300 AU. Credit: Roberto Molar Candanosa and Scott Sheppard / Carnegie Institution for Science.

Perihelion, the closest distance this object gets to the Sun, is now calculated at roughly 65 AU, so we are dealing with an extremely elongated orbit. 2015 TG387 has, after 2012 VP113 and Sedna (80 and 76 AU respectively), the third most distant perihelion known, but it has a larger orbital semi-major axis, so its orbit carries it much farther from the Sun than either, out to about 2,300 AU. At these distances, Inner Oort Cloud objects are all but isolated from the bulk of the Solar System’s mass.
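
The numbers hang together under simple two-body dynamics. Here is a quick back-of-envelope sketch (my own, using only the perihelion and aphelion figures quoted above) showing that Kepler’s third law recovers the roughly 40,000-year period mentioned by the discovery team:

```python
# Consistency check on the reported orbit, assuming simple two-body
# Keplerian motion around the Sun. Perihelion q and aphelion Q give the
# semi-major axis a = (q + Q) / 2 and eccentricity e = (Q - q) / (Q + q);
# Kepler's third law (a in AU, P in years) then gives the period.

q = 65.0     # perihelion, AU (reported)
Q = 2300.0   # aphelion, AU (reported)

a = (q + Q) / 2.0        # semi-major axis, AU
e = (Q - q) / (Q + q)    # eccentricity
P = a ** 1.5             # orbital period in years, from P^2 = a^3

print(f"a ~ {a:.0f} AU, e ~ {e:.2f}, P ~ {P:,.0f} yr")
# -> a ~ 1182 AU, e ~ 0.95, P ~ 40,700 yr, matching the ~40,000-year
#    orbit quoted by the discovery team.
```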

You may recall that it was Sheppard and Trujillo who discovered 2012 VP113 as well, triggering a flurry of investigation into the orbits of such worlds. The gravitational story is made clear by the fact that Sedna, 2012 VP113 and 2015 TG387 all approach perihelion in the same part of the sky, as do most known Extreme Trans-Neptunian objects, an indication that their orbits are being shaped by something in the outer system. Thus the continuing interest in so-called Planet X, a hypothetical world whose possible orbits were recently modeled by Trujillo and Nathan Kaib (University of Oklahoma).

The simulations show the effect of different Planet X orbits on Extreme Trans-Neptunian objects. In 2016, drawing on previous work from Sheppard and Trujillo, Konstantin Batygin and Michael Brown derived orbital constraints for a super-Earth at several hundred AU from the Sun in an elliptical orbit. Using those constraints in his simulations, Trujillo identified several planetary orbits that would keep the known Extreme Trans-Neptunian objects stable. Let’s go to the paper to see how 2015 TG387 fits into the picture:

…Trujillo (2018) ran thousands of simulations of a possible distant planet using the orbital constraints put on this planet by Batygin and Brown (2016a). The simulations varied the orbital parameters of the planet to identify orbits where known ETNOs were most stable. Trujillo (2018) found several planet orbits that would keep most of the ETNOs stable for the age of the solar system.

So the simulated planetary orbits have been tested against Sedna, 2012 VP113 and other ETNOs. The next step was obvious:

To see if 2015 TG387 would also be stable to a distant planet when the other ETNOs are stable, we used several of the best planet parameters found by Trujillo (2018). In most simulations involving a distant planet, we found 2015 TG387 is stable for the age of the solar system when the other ETNOs are stable. This is further evidence the planet exists, as 2015 TG387 was not used in the original Trujillo (2018) analysis, but appears to behave similarly as the other ETNOs to a possible very distant massive planet on an eccentric orbit.

Image: Movie of the discovery images of 2015 TG387. Two images were taken about 3 hours apart on October 13, 2015 at the Subaru Telescope on Maunakea, Hawaiʻi. 2015 TG387 can be seen moving between the images near the center, while the more distant background stars and galaxies remain stationary. Credits: Dave Tholen, Chad Trujillo, Scott Sheppard.

We know very little about 2015 TG387 itself, though the paper, assuming a moderate albedo, finds a likely diameter in the range of 300 kilometers. The stability of this small object’s orbit, keeping it aligned and stable in relation to the eccentric orbit of the hypothesized Planet X, supports the existence of the planet, especially since the derived orbit of 2015 TG387 was determined after the Planet X orbital simulations. Indeed, notes the paper in conclusion, “…2015 TG387 reacts with the planet very similarly to the other known IOCs and ETNOs.”

Another interesting bit: There is a suggestion that ETNOs in retrograde orbit are stable. Given this, the authors do not rule out the idea that the planet itself might be on a retrograde orbit.

The paper is Sheppard et al., “A New High Perihelion Inner Oort Cloud Object,” submitted to The Astronomical Journal (preprint).

Kepler 1625b: Orbited by an Exomoon?

8,000 light years from Earth in the constellation Cygnus, the star designated Kepler 1625 may be harboring a planet with a moon. The planet, Kepler 1625b, is a gas giant several times the mass of Jupiter. What David Kipping (Columbia University) and graduate student Alex Teachey have found is compelling though not definitive evidence of a moon orbiting the confirmed planet.

If we do indeed have a moon here, and upcoming work should be able to resolve the question, we are dealing, at least in part, with the intriguing scenario many scientists (and science fiction writers) have speculated about. Although a gas giant, Kepler 1625b orbits close to or within the habitable zone of its star. A large, rocky moon around it could be a venue for life, but the moon posited for this planet doesn’t qualify. It’s quite large — roughly the size of Neptune — and like its putative parent, a gaseous body. If we can confirm the first exomoon, we’ll have made a major advance, but the quest for habitable exomoons does not begin around Kepler 1625b.

Image: Columbia’s Alex Teachey, lead author of the paper on the detection of a potential exomoon. Credit: Columbia University.

None of this should take away from the importance of the detection, for exploring moons around exoplanets will doubtless teach us a great deal about how such moons form. Unlike the Earth-Moon system, or the Pluto/Charon binary in our own Solar System, Kepler-1625b’s candidate moon would not have formed through a collision between two rocky bodies early in the history of planetary development. We’d like to learn how it got there, if indeed it is there. Far larger than any Solar System moon, it is estimated to be but 1.5% of its companion’s mass.

The methods David Kipping has long espoused through the Hunt for Exomoons with Kepler (HEK) project have come to fruition here. Working with data on 284 planets identified through the Kepler mission, each of them with orbital periods greater than 30 days, Kipping and Teachey found interesting anomalies at Kepler-1625b, where Kepler had recorded three transits. The lightcurves produced by these transits across the face of the star as seen from the spacecraft showed deviations that demanded explanation.

Image: Exomoon hunter David Kipping. Credit: Columbia University.

Their interest heightened, the researchers requested and were awarded 40 hours of time on the Hubble Space Telescope, whose larger aperture could produce data four times more precise than that available from Kepler. On October 28 and 29 of 2017, the scientists took data through 26 Hubble orbits. Examining the lightcurve of a 19-hour long transit of Kepler-1625b, they noted a second, much smaller decrease in the star’s light some 3.5 hours later.

Kipping refers to it as being consistent with “…a moon trailing the planet like a dog following its owner on a leash,” but adds that the Hubble observation window closed before the complete transit of the candidate moon could be measured. The paper addresses this second dip:

The most compelling piece of evidence for an exomoon would be an exomoon transit, in addition to the observed TTV [transit timing variation]. If Kepler-1625b’s early transit were indeed due to an exomoon, then we should expect the moon to transit late on the opposite side of the barycenter. The previously mentioned existence of an apparent flux decrease toward the end of our observations is therefore where we would expect it to be under this hypothesis. Although we have established that this dip is most likely astrophysical, we have yet to discuss its significance or its compatibility with a self-consistent moon model.

In and of itself, this is exciting information, but as noted above, we also learn in this morning’s paper in Science Advances that transit timing variations are apparent here. The planet itself began its transit 77.8 minutes earlier than predicted. One way to account for this is by the pull of a moon on the planet, resulting in their both orbiting a common center of gravity and thus throwing the transit calculation (based on an unaccompanied planet) off. What Kipping and Teachey will need to eliminate is the possibility that a second planet, yet undetected, could have caused the timing variation. There is thus far no evidence from Kepler of such a planet.
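
To get a feel for the numbers, here is a rough order-of-magnitude sketch of my own, not the authors’ analysis. The stellar mass (about one solar mass) and the planet’s orbital period (about 287 days) are assumptions for illustration, not figures from this article; only the 77.8-minute offset and the 1.5% mass ratio come from the text:

```python
import math

# Order-of-magnitude sketch: how large a planet-moon barycentric wobble
# does a 77.8-minute transit timing offset imply?

G     = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30           # kg
P     = 287.0 * 86400.0    # planet's orbital period, s (assumed value)
dt    = 77.8 * 60.0        # observed timing offset, s (from the paper)

a_p = (G * M_sun * P**2 / (4 * math.pi**2)) ** (1 / 3)  # planet's semi-major axis
v_p = 2 * math.pi * a_p / P                             # planet's orbital speed

# A timing offset dt corresponds to the planet sitting at most roughly
# v_p * dt away from the planet-moon barycenter along its orbit.
wobble = v_p * dt
print(f"orbital speed ~ {v_p / 1e3:.0f} km/s, wobble ~ {wobble / 1e6:.0f} thousand km")

# With the moon at ~1.5% of the planet's mass (quoted above), the moon's
# separation is of order wobble / 0.015 -- an upper-bound estimate only,
# since geometry and the moon's orbital phase reduce the displacement.
print(f"implied moon separation ~ {wobble / 0.015 / 1e9:.0f} million km")
```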

“A companion moon is the simplest and most natural explanation for the second dip in the light curve and the orbit-timing deviation,” said lead author Teachey. “It was a shocking moment to see that light curve, my heart started beating a little faster and I just kept looking at that signature. But we knew our job was to keep a level head testing every conceivable way in which the data could be tricking us until we were left with no other explanation.”

Image: This is Figure 4 from the paper. Caption: Moon solutions. The three transits in Kepler (top) and the October 2017 transit observed with HST (bottom) for the three trend model solutions. The three colored lines show the corresponding trend model solutions for model M, our favored transit model. The shape of the HST transit differs from that of the Kepler transits owing to limb darkening differences between the bandpasses. Credit: David Kipping, Alex Teachey.

One problem with exomoon hunting is that the ideal candidate planets are those in wide orbits, but this makes for long periods between transits. Even so, the number of large planets in orbits farther from their star than 1 AU is growing, and such worlds should be useful targets for the upcoming James Webb Space Telescope. Although we still have to confirm Kepler-1625b’s moon, such a confirmation could prove only the beginning of a growing exomoon census.

We’ll know more as we make more detections, but for now, I think Kipping and Teachey’s caution is commendable. Noting that confirmation will involve long scrutiny, observations and skepticism from within the community, they point out:

…it is difficult to assign a precise probability to the reality of Kepler-1625b-i. Formally, the preference for the moon model over the planet-only model is very high, with a Bayes factor exceeding 400,000. On the other hand, this is a complicated and involved analysis where a minor effect unaccounted for, or an anomalous artifact, could potentially change our interpretation. In short, it is the unknown unknowns that we cannot quantify. These reservations exist because this would be a first-of-its-kind detection — the first exomoon.

A final thought: The paper points out that the original Kepler data that flagged Kepler-1625b as interesting in exomoon terms were actually stronger than the Kepler data Kipping and Teachey added into the mix for this work. They are now working with the most recent release, which had to be revisited for all factors that could affect the analysis. It turns out that this most recent release “only modestly favors that hypothesis when treated in isolation.” It is the HST data that make the strongest case for an exomoon. The authors believe that this shows the need to pursue similar Kepler planets for exomoons with HST and other facilities, even in cases where the Kepler data themselves do not show a large exomoon-like signature.

The paper is Teachey & Kipping, “Evidence for a large exomoon orbiting Kepler-1625b,” Science Advances 3 October 2018 (complete citation when I have it).

Into the Cosmic Haystack

A new paper from Jason Wright (Penn State) and colleagues Shubham Kanodia and Emily Lubar deals with SETI and the ‘parameter space’ within which we search, with interesting implications. For the researchers show that despite searching for decades through a variety of projects and surveys, SETI is in early days indeed. Those who would draw conclusions about its lack of success to this point fail to understand the true dimensions of the challenge.

But before getting into the meat of the paper, let’s talk about a few items in its introduction. For Wright and team contextualize SETI in relation to broader statements about our place in the cosmos. We can ask questions about what we see and what we don’t see, but we have to avoid being too facile in our interpretation of what some consider to be an ‘eerie silence’ (the reference is to a wonderful book by Paul Davies of the same name).

Image: Penn State’s Jason Wright. Credit: Jody Barshinger.

Back in the 1970s, Michael Hart argued that even with very slow interstellar travel, the Milky Way should have been well settled by now. If, that is, there were civilizations out there to settle it. Frank Tipler made the same point, deducing from the lack of evidence that SETI itself was pointless, because if other civilizations existed, they would have already shown up.

In their new paper, Wright and team take a different tack, looking at the same argument as applied to more terrestrial concerns. Travel widely (Google Earth will do) and you’ll notice that most of the places you select at random show no obvious signs of humans or, in a great many cases, our technology. Why is this? After all, it takes but a small amount of time to fly across the globe when compared to the age of the technology that makes this possible. Shouldn’t we, then, expect that by now, most parts of the Earth’s surface should bear signs of our presence?

It’s a canny argument in particular because we are the only example of a technological species we have, and the Hart-style argument fails for us. If, despite the huge swaths of Earth’s surface that show no evidence of us, the Earth is still home to a technological civilization, then perhaps the same can be said for the galaxy. Or, for that matter, the Solar System, so much of which we have yet to explore. Could there be, for example, a billion year old Bracewell probe awaiting activation among the Trans-Neptunian objects?

Maybe, then, there is no such thing as an ‘eerie silence,’ or at least not one whose existence has been shown to be plausible. The matter seems theoretical until you realize it impacts practical concerns like SETI funding. If we assume that extraterrestrial civilizations do not exist because they have not visited us, then SETI is a wasteful exercise, its money better spent elsewhere.

By the same token, some argue that because we have not yet had a SETI detection of an alien culture, we can rule out their existence, at least anywhere near us in the galaxy. What Wright wants to do is show that the conclusion is false, because given the size of the search space, SETI has barely begun. We need, then, to examine just how much of a search we have actually been able to mount. What interstellar beacons, for example, might we have missed because we lacked the resources to keep a constant eye on the same patch of sky?

The Wright paper is about the parameter space within which we hope to find so-called ‘technosignatures.’ Jill Tarter has described a ‘cosmic haystack’ existing in three spatial dimensions, one temporal dimension, two polarization dimensions, central frequency, sensitivity and modulation — a haystack, then, of nine dimensions. Wright’s team likes this approach:

This “needle in a haystack” metaphor is especially appropriate in a SETI context because it emphasizes the vastness of the space to be searched, and it nicely captures how we seek an obvious product of intelligence and technology amidst a much larger set of purely natural products. SETI optimists hope that there are many alien needles to be found, presumably reducing the time to find the first one. Note that in this metaphor the needles are the detectable signatures of alien technology, meaning that a single alien species might be represented by many needles.

Image: Coming to terms with the search space as SETI proceeds, in this case at Green Bank, WV. Credit: Walter Bibikow/JAI/Corbis/Green Bank Observatory.

The Wright paper shows how our search haystacks can be defined even as we calculate the fraction of them already examined for our hypothetical needles. A quantitative, eight-dimensional model is developed to make the calculation, with a number of differences between the model haystack and the one developed by Tarter, and factoring in recent SETI searches like Breakthrough Listen’s ongoing work. The assumption here, necessary for the calculation, is that SETI surveys have similar search strategies and sensitivities.

This assumption allows the calculation to proceed, and it is given support when we learn that its results align fairly well with the previous calculation Jill Tarter made in a 2010 paper. Thus Wright: “…our current search completeness is extremely low, akin to having searched something like a large hot tub or small swimming pool’s worth of water out of all of Earth’s oceans.”

And then Tarter, whose result for the size of our search is a bit smaller. Let me just quote her (from an NPR interview in 2012) on the point:

“We’ve hardly begun to search… The space that we’re looking through is nine-dimensional. If you build a mathematical model, the amount of searching that we’ve done in 50 years is equivalent to scooping one 8-ounce glass out of the Earth’s ocean, looking and seeing if you caught a fish. No, no fish in that glass? Well, I don’t think you’re going to conclude that there are no fish in the ocean. You just haven’t searched very well yet. That’s where we are.”

This being the case, the idea that a lack of success for SETI to date is a compelling reason to abandon the search is shown for what it is, a misreading of the enormity of the search space. SETI cannot be said to have failed. But this leads to a different challenge. Wright again:

We should be careful, however, not to let this result swing the pendulum of public perceptions of SETI too far the other way by suggesting that the SETI haystack is so large that we can never hope to find a needle. The whole haystack need only be searched if one needs to prove that there are zero needles—because technological life might spread through the Galaxy, or because technological species might arise independently in many places, we might expect there to be a great number of needles to be found.

The paper also points out that the haystack model includes regions of interstellar space between the stars, where no transmitters are assumed. Transmissions from nearby stars are but a subset of the haystack, and they rank correspondingly higher in the calculation of detection likelihood.
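
To make the bookkeeping concrete, here is a deliberately crude sketch of the idea, not the paper’s actual eight-dimensional integrals: treat completeness as a product of per-dimension coverage fractions. Every number below is an illustrative placeholder, not a value from the paper:

```python
import numpy as np

# Crude completeness bookkeeping: a survey's searched fraction is the
# product of the fraction it covers along each haystack dimension.
# (The real model integrates over eight dimensions; distance, for one,
# should enter as a volume. All numbers here are placeholders.)

haystack = {                        # (searched, total) per dimension
    "sky (sr)":        (1e-2, 4 * np.pi),
    "frequency (Hz)":  (1e9, 1e11),    # band covered vs. plausible band
    "distance (pc)":   (1e2, 1e4),     # sensitivity-limited range
    "time (fraction)": (1e-6, 1.0),    # duty cycle actually observed
}

fraction = np.prod([s / t for s, t in haystack.values()])
print(f"fraction of haystack searched ~ {fraction:.0e}")
# Tiny per-axis fractions multiply fast: even generous coverage on each
# axis yields the hot-tub-out-of-the-ocean completeness Wright describes.
```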

So we keep looking, wary of drawing conclusions too swiftly when we have searched such a small part of the available parameter space, and we look toward the kind of searches that can accelerate the process. These would include “…surveys with large bandwidth, wide fields of view, long exposures, repeat visits, and good sensitivity,” according to the paper. The ultimate survey? All sky, all the time, the kind of all-out stare that would flag repeating signals that today could only register as one-off phenomena, and who knows what other data of interest not just to SETI but to the entire community of deep-sky astronomers and astrophysicists.

The paper is Wright et al., “How Much SETI Has Been Done? Finding Needles in the n-Dimensional Cosmic Haystack,” accepted at The Astronomical Journal (preprint).

Trillion Planet Survey Targets M31

Can rapidly advancing laser technology and optics augment the way we do SETI? At the University of California, Santa Barbara, Phil Lubin believes they can, and he’s behind a project called the Trillion Planet Survey to put the idea into practice for the benefit of students. As an incentive for looking into a career in physics, an entire galaxy may be just the ticket.

For the target is the nearest large galaxy to our own. The Trillion Planet Survey will use a suite of meter-class telescopes to search for continuous wave (CW) laser beacons from M31, the Andromeda galaxy. But TPS is more than a student exercise. The work builds on Lubin’s 2016 paper called “The Search for Directed Intelligence,” which makes the case that laser technology foreseen today could be seen across the universe. And that issue deserves further comment.

Centauri Dreams readers are familiar with Lubin’s work with DE-STAR (Directed Energy System for Targeting of Asteroids and exploRation), a scalable technology that involves phased arrays of lasers. DE-STAR installations could be used for purposes ranging from asteroid deflection (DE-STAR 2-3) to propelling an interstellar spacecraft to a substantial fraction of the speed of light (DE-STAR 3-4). The work led to NIAC funding (NASA Starlight) in 2015 examining beamed energy systems for propulsion in the context of miniature probes using wafer-scale photonics, and is also the basis for Breakthrough Starshot.

Image: UC-Santa Barbara physicist Philip Lubin. Credit: Paul Wellman/Santa Barbara Independent.

A bit more background here: Lubin’s Phase I study, “Directed Energy Propulsion for Interstellar Exploration (DEEP-IN),” produced the report “A Roadmap to Interstellar Flight,” which is available online; Phase II work followed under the name Starlight. Lubin’s discussions with Pete Worden on these ideas led to talks with Yuri Milner in late 2015. The Breakthrough Starshot program draws on the DE-STAR work, particularly in its reliance on miniaturized payloads and, of course, a laser array for beamed propulsion, the latter an idea that had largely been associated with large sails rather than chip-sized payloads. Mason Peck and team’s work on ‘sprites’ is also a huge factor.

But let’s get back to the Trillion Planet Survey — if I start talking about the history of beamed propulsion concepts, I could spend days, and anyway, Jim Benford has already undertaken the task in these pages in his A Photon Beam Propulsion Timeline. What occupies us this morning is the range of ideas that play around the edges of beamed propulsion, one of them being the beam itself, and how it might be detected at substantial distances. Lubin’s DE-STAR 4, capable of hitting an asteroid with 1.4 megatons of energy per day, would stand out in many a sky.

In fact, according to Lubin’s calculations, such a system — if directed at another star — would be seen in systems as distant as 1000 light years as, briefly, the brightest star in the sky. Suddenly we’re talking SETI, because if we can build such systems in the foreseeable future, so can the kind of advanced civilizations we may one day discover among the stars. Indeed, directed energy systems might announce themselves with remarkable intensity.

Image: M31, the Andromeda Galaxy, the target of the largely student-led Trillion Planet Survey. Credit & Copyright: Robert Gendler.

Lubin makes this point in his 2016 paper, in which he states “… even modest directed energy systems can be ‘seen’ as the brightest objects in the universe within a narrow laser linewidth.” Amplifying on this from the paper, he shows that stellar light in a narrow bandwidth would be very small in comparison to the beamed energy source:

In case 1) we treat the Sun as a prototype for a distant star, one that is unresolved in our telescope (due to seeing or diffraction limits) but one where the stellar light ends up in ~ one pixel of our detector. Clearly the laser is vastly brighter in this sense. Indeed for the narrower linewidth the laser is much brighter than an entire galaxy in this sense. For very narrow linewidth lasers (~ 1 Hz) the laser can be nearly as bright as the sum of all stars in the universe within the linewidth. Even modest directed energy systems can stand out as the brightest objects in the universe within the laser linewidth.

And again (and note here that the reference to ‘class 4’ is not to an extended Kardashev scale, but rather to a civilization transmitting at DE-STAR 4 levels, as defined in the paper):

As can be seen at the distance of the typical Kepler planets (~ 1 kly distant) a class 4 civilization… appears as the equivalent of a mag~0 star (ie the brightest star in the Earth’s nighttime sky), at 10 kly it would appear as about mag ~ 5, while the same civilization at the distance of the nearest large galaxy (Andromeda) would appear as the equivalent of a m~17 star. The former is easily seen with the naked eye (assuming the wavelength is in our detection band) while the latter is easily seen in a modest consumer level telescope.
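
It is easy to verify the order of magnitude. The sketch below is my own, with assumed DE-STAR 4-like parameters (tens of gigawatts through a 10-kilometer array at about a micron) rather than figures taken from the paper; it computes the apparent magnitude of a diffraction-limited beam aimed our way from 1,000 light years:

```python
import math

# Apparent brightness of a distant directed-energy beam pointed at us.
# All laser parameters below are assumptions for illustration.

P_laser = 70e9                # transmitted power, W
D_array = 10e3                # phased array aperture, m
lam     = 1.06e-6             # wavelength, m
d       = 1000 * 9.461e15     # 1,000 light years, in m

theta = lam / D_array                         # diffraction-limited divergence, rad
spot_radius = theta * d / 2                   # beam footprint radius at distance d
flux = P_laser / (math.pi * spot_radius**2)   # received flux, W/m^2

# IAU zero point: apparent bolometric magnitude 0 corresponds to a flux
# of 2.518e-8 W/m^2.
m = -2.5 * math.log10(flux / 2.518e-8)
print(f"flux ~ {flux:.1e} W/m^2 -> apparent magnitude ~ {m:+.1f}")
# -> about magnitude -1.4: the same order as the paper's "mag ~0 at 1 kly."
```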

Out of this emerges the idea that a powerful civilization could be detected with modest ground-based telescopes if it happened to be transmitting in our direction when we were observing. Hence the Trillion Planet Survey, which looks at using small telescopes such as those in the Las Cumbres Observatory’s robotic global network to make such a detection.

With M31 as the target, the students in the Trillion Planet Survey are conducting a survey of the galaxy as TPS gets its software pipeline into gear. Developed by Emory University student Andrew Stewart, the pipeline processes images under a set of assumptions. Says Stewart:

“First and foremost, we are assuming there is a civilization out there of similar or higher class than ours trying to broadcast their presence using an optical beam, perhaps of the ‘directed energy’ arrayed-type currently being developed here on Earth. Second, we assume the transmission wavelength of this beam to be one that we can detect. Lastly, we assume that this beacon has been left on long enough for the light to be detected by us. If these requirements are met and the extraterrestrial intelligence’s beam power and diameter are consistent with an Earth-type civilization class, our system will detect this signal.”

Screening transient signals from its M31 images, the team will then submit them to further processing in the software pipeline to eliminate false positives. The TPS website offers links to background information, including Lubin’s 2016 paper, but as yet has little about the actual image processing, so I’ll simply quote from a UCSB news release on the matter:

“We’re in the process of surveying (Andromeda) right now and getting what’s called ‘the pipeline’ up and running,” said researcher Alex Polanski, a UC Santa Barbara undergraduate in Lubin’s group. A set of photos taken by the telescopes, each of which takes a 1/30th slice of Andromeda, will be knit together to create a single image, he explained. That one photograph will then be compared to a more pristine image in which there are no known transient signals — interfering signals from, say, satellites or spacecraft — in addition to the optical signals emanating from the stellar systems themselves. The survey photo would be expected to have the same signal values as the pristine “control” photo, leading to a difference of zero. But a difference greater than zero could indicate a transient signal source, Polanski explained. Those transient signals would then be further processed in the software pipeline developed by Stewart to kick out false positives. In the future the team plans to use simultaneous multiple color imaging to help remove false positives as well.
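
The differencing step itself is conceptually simple. Here is a minimal toy version, my own illustration rather than the TPS pipeline code, which subtracts a reference image from a survey image and flags statistically significant brightenings:

```python
import numpy as np

# Toy difference-imaging transient detector.

rng = np.random.default_rng(0)

reference = rng.normal(100.0, 3.0, size=(512, 512))        # static sky + noise
survey = reference + rng.normal(0.0, 3.0, size=(512, 512))
survey[200, 300] += 80.0                                   # injected transient

diff = survey - reference     # near zero except for noise and transients
noise = diff.std()
candidates = np.argwhere(diff > 6 * noise)                 # 6-sigma brightenings

print(f"{len(candidates)} candidate transient(s) at {candidates.tolist()}")
# A real pipeline must also register the images, match their point-spread
# functions, and reject satellites and asteroids -- the false-positive
# screening described above.
```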

Why Andromeda? The Trillion Planet Survey website notes that the galaxy is home to at least one trillion stars, a stellar density higher than the Milky Way’s, and thus represents “…an unprecedented number of targets relative to other past SETI searches.” The project gets the students who largely run it into the SETI business, juggling the variables as we consider strategies for detecting other civilizations and upgrading existing search techniques, particularly as we take into account the progress of exponentially accelerating photonic technologies.

Projects like these can exert a powerful incentive for students anxious to make a career out of physics. Thus Caitlin Gainey, now a freshman in physics at UC Santa Barbara:

“In the Trillion Planet Survey especially, we experience something very inspiring: We have the opportunity to look out of our earthly bubble at entire galaxies, which could potentially have other beings looking right back at us. The mere possibility of extraterrestrial intelligence is something very new and incredibly intriguing, so I’m excited to really delve into the search this coming year.”

And considering that any signal arriving from M31 would have been en route for well over 2 million years, the TPS also offers the chance to involve students in the concept of SETI as a form of archaeology. We could discover evidence of a civilization long dead through signals sent well before civilization arose on Earth. A ‘funeral beacon’ announcing the demise of a once-great civilization is a possibility. In terms of artifacts, the search for Dyson Spheres or other megastructures is another. The larger picture is that evidence of extraterrestrial intelligence can come in various forms, including optical or radio signals as well as artifacts detectable through astronomy. It’s a field we continue to examine here, because that search has just begun.

Phil Lubin’s 2016 paper is “The Search for Directed Intelligence,” REACH – Reviews in Human Space Exploration, Vol. 1 (March 2016), pp. 20-45. (Preprint / full text).

Small Provocative Workshop on Propellantless Propulsion

In what spirit do we pursue experimentation, and with what criteria do we judge the results? Marc Millis has been thinking and writing about such questions in the context of new propulsion concepts for a long time. As head of NASA’s Breakthrough Propulsion Physics program, he looked for methodologies by which to push the propulsion envelope in productive ways. As founding architect of the Tau Zero Foundation, he continues the effort through books like Frontiers of Propulsion Science, travel and conferences, and new work for NASA through TZF. Today he reports on a recent event that gathered people who build equipment and test for exotic effects. A key issue: Ways forward that retain scientific rigor and a skeptical but open mind. A quote from Galileo seems appropriate: “I deem it of more value to find out a truth about however light a matter than to engage in long disputes about the greatest questions without achieving any truth.”

by Marc G Millis

A workshop on propellantless propulsion was held at a sprawling YMCA campus of classy rusticity in Estes Park, Colorado, from Sept 10 to 14. These are becoming annual events, with the prior ones being in LA in Nov 2017, and in Estes Park in Sep 2016. This is a fairly small event of only about 30 people.

It was at the 2016 event that three other labs reported the same thrust that Jim Woodward and his team had been reporting for some time – with the “Mach Effect Thruster” (which also goes by the name “Mach Effect Gravity Assist” device). Backed by those independent replications, Woodward’s team was awarded NASA NIAC grants. Updates on this work and several other concepts were discussed at this workshop. Proceedings will be published once all the individual reports are rounded up and edited.

Before I go on to describe these updates, I feel it would be helpful to share a technique that I regularly use when trying to assess potential breakthrough concepts. I began using this technique when I ran NASA’s Breakthrough Propulsion Physics project to help decide which concepts to watch and which to skip.

When faced with research that delves into potential breakthroughs, one faces the challenge of distinguishing which of those crazy ideas might be the seeds of breakthroughs and which are merely crazy. In retrospect, it is easy to tell the difference. After years of continued work, the genuine breakthroughs survive, along with infamous quotes from their naysayers. Meanwhile the more numerous crazy ideas are largely forgotten. Making that distinction before the fact, however, is difficult.

So how do I tell that difference? Frankly, I can’t. I’m not clairvoyant nor brilliant enough to tell which idea is right (though it is easy to spot flagrantly wrong ideas). What I can judge, and what needs to be judged, is the reliability of the research. Regardless of whether the research reports supportive or dismissive evidence of a new concept, those findings mean nothing unless they are trustworthy. The most trustworthy results come from competent, rigorous researchers who are impartial – meaning they are equally open to positive or negative findings. Therefore, I first look for the impartiality of the source – where I will ignore “believers” or pedantic pundits. Next, I look to see if their efforts are focused on the integrity of the findings. If experimenters are systematically checking for false positives, then I have more trust in their findings. If theoreticians go beyond just their theory to consider conflicting viewpoints, then I pay more attention. And lastly, I look to see if they are testing a critical make-break issue or just some less revealing detail. If they won’t focus on a critical issue, then the work is less relevant.

Consider the consequences of that tactic: If a reliable researcher is testing a bad idea, you will end up with a trustworthy refutation of that idea. Null results are progress – knowing which ideas to set aside. Reciprocally, if a sloppy or biased researcher is testing a genuine breakthrough, then you won’t get the information you need to take that idea forward. Sloppy or biased work is useless (even if from otherwise reputable organizations). The ideal situation is to have impartial and reliable researchers studying a span of possibilities, where any latent breakthrough in that suite will eventually reveal itself (the “pony in the pile”).

Now, back to the workshop. I’ll start with the easiest topic, the infamous EmDrive. I use the term “infamous” to remind you that (1) I have a negative bias that can skew my impartiality, and (2) there are a large number of “believers” whose experiments never passed muster (which lead to my negative bias and overt frustration).

Three different tests of the EmDrive were reported, of varying degrees of rigor. All of the tests indicated that the claimed thrust is probably attributable to false positives. The most thorough tests were from the Technical University of Dresden, Germany, led by Martin Tajmar, whose student Marcel Weikert presented the EmDrive tests, with Matthias Kößling covering the details of their thrust stand. They are testing more than one version of the EmDrive, under multiple conditions, and all with alertness for false positives. Their interim results show that thrusts are measured when the device is not in a thrusting mode – meaning that something else is creating the appearance of a thrust. They are not yet fully satisfied with the reliability of their findings and tests continue. They want to trace the apparent thrust to its specific cause.

The next big topic was Woodward’s Mach Effect Thruster – determining if the previous positive results are indeed genuine, and then determining if they are scalable to practical levels. In short – it is still not certain if the Mach Effect Thruster is demonstrating a genuine new phenomenon or if it is a case of a common experimental false positive. In addition to the work of Woodward’s team, led by Heidi Fearn, the Dresden team also had substantial progress to report, with Maxime Monette covering the Mach Effect thruster details in addition to the thrust stand details from Matthias Kößling. There was also an analytical assessment based on conventional harmonic oscillators, plus more than one presentation related to the underlying theory.

One of the complications that developed over the years is that the original traceability between Woodward’s theory and the current thruster hardware has thinned. The thruster has become a “black box” where the emphasis is now on the empirical evidence and less on the theory.

Originally, the thruster hardware closely followed the 1994 patent which itself was a direct application of Woodward’s 1990 hypothesized fluctuating inertia. It involved two capacitors at opposite ends of a piezoelectric separator, where the capacitors experience the inertial fluctuations (during charging and discharging cycles) and where the piezoelectric separator cyclically changes length between these capacitors.

Its basic operation is as follows: While the rear capacitor’s inertia is higher and the forward capacitor lower, the piezoelectric separator is extended. The front capacitor moves forward more than the rear one moves rearward. Then, while the rear capacitor’s inertia is lower and the forward capacitor higher, the piezoelectric separator is contracted. The front capacitor moves backward less than the rear one moves forward. Repeating this cycle shifts the center of mass of the system forward – apparently violating conservation of momentum.
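
A toy bookkeeping model makes the claimed effect easy to see. The sketch below is my own construction, not a workshop model: it simply imposes the hypothesized inertia modulation and splits each length change between the two end masses by momentum balance:

```python
# Two end masses exchange a *hypothesized* inertia modulation +/- delta
# while the separator extends and contracts by dL each half-cycle.
# Momentum balance splits every length change inversely by inertia.

m0, delta, dL = 1.0, 0.1, 1e-3    # kg, kg, m -- illustrative values only
x_rear, x_front = 0.0, 0.1        # initial positions, m

for cycle in range(3):
    # half-cycle 1: rear heavy, front light, separator extends by dL
    m_r, m_f = m0 + delta, m0 - delta
    x_front += dL * m_r / (m_r + m_f)
    x_rear  -= dL * m_f / (m_r + m_f)
    # half-cycle 2: inertias swapped, separator contracts by dL
    m_r, m_f = m0 - delta, m0 + delta
    x_front -= dL * m_r / (m_r + m_f)
    x_rear  += dL * m_f / (m_r + m_f)
    com = (x_rear + x_front) / 2  # time-averaged masses are equal
    print(f"cycle {cycle + 1}: center of mass at {com:.6e} m")

# Each cycle walks the pair forward by dL * delta / m0 -- but only *if*
# inertia really fluctuates as hypothesized. With delta = 0 the center
# of mass stays put, which is the crux of the controversy.
```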

The actual conservation of momentum is more difficult to assess. The original conservation laws are anchored to the idea of an immutable connection between inertia and an inertial frame. The theory behind this device deals with open questions in physics about the origins and properties of inertial frames, specifically evoking “Mach’s Principle.” In short, that principle is ‘inertia here because of all the matter out there.’ Another related physics term is “Inertial Induction.” Skipping through all the open issues, the upshot is that variations in inertia would require revisions to the conservation laws. It’s an open question.

Back to the tale of the evolved hardware. Over the years, the hardware configuration changed. While Woodward and his team tried different ways to increase the observed thrust, the ‘fluctuating inertia’ components and the ‘motion’ components were merged. Both the motions and mass fluctuations now occur in a stack of piezoelectric disks. Thereafter, the emphasis shifted to the empirical observations. There were no analyses to show how to connect the original theory to this new device. The Dresden team did develop a model to link the theory to the current hardware, but determining its viability is part of the tests that are still unfinished [Tajmar, M. (2017). Mach-Effect thruster model. Acta Astronautica, 141, 8-16.].

Even with the disconnect between the original theory and the hardware now under test, there were a couple of presentations about the theory, one by Lance Williams and the other by José Rodal. Lance, reporting on discussions from the April 2018 meeting of the American Physical Society’s Division of Gravitational Physics, suggested how to engage the broader physics community about this theory, such as using the more common term “Inertial Induction” instead of “Mach’s Principle.” Lance elaborated on the prevailing views (such as the absence of Maxwellian gravitation) that would need to be brought into the discussion – facing the constructive skepticism to make further advances. José Rodal elaborated on the possible applicability of “dilatons” from the Kaluza–Klein theory of compactified dimensions. Amid these and other presentations, there was lively discussion involving multiple interpretations of well-established physics.

An additional provocative model for the Mach Effect Thruster came from an interested software engineer, Jamie Ciomperlik, who dabbles in these topics for recreation. In addition to his null tests of the EmDrive, he created a numerical simulation for the Mach Effect using conventional harmonic oscillators. The resulting complex simulations showed that, with the right parameters, a false positive thrust could result from vibrational effects. After lengthy discussions, it was agreed to examine this more closely, both experimentally and analytically. Though the experimentalists already knew of possible false positives from vibration, they did not previously have an analytical model to help hunt for these effects. One of the next steps is to check how closely the analysis parameters match the actual hardware.

Quantum approaches were also briefly covered, where Raymond Chiao discussed the negative energy densities of Casimir cavities and Jonathan Thompson (a prior student of Chiao’s) gave an update on experiments to demonstrate the “Dynamical Casimir effect” – a method to create a photon rocket using photons extracted from the quantum vacuum.

There were several other presentations too, spanning topics of varying relevance and fidelity. Some of these were very speculative works, whose usefulness can be compared to the thought-provoking effect of good science fiction. They don’t have to be right to be enlightening. One was from retired physicist and science fiction writer John Cramer, who described the assumptions needed to induce a wormhole using the Large Hadron Collider (LHC) that could cover 1200 light-years in 59 days.

Representing NASA’s Innovative Advanced Concepts (NIAC) program, Ron Turner gave an overview of its scope and how to propose for NIAC awards.

A closing thought about consequences. By this time next year, we will have definitive results on the Mach Effect Thruster, and the findings on the EmDrive will likely arrive sooner. Depending on whether the results are positive or negative, here are my recommendations on how to proceed in a sane and productive manner. These recommendations are based on history repeating itself, using both the good and bad lessons:

If It Does Work:

  • Let the critical reviews and deeper scrutiny run their course. If this is real, a lot of people will need to repeat it for themselves to discover what it’s about. This takes time, and not all of it will be useful or pleasant. Pay more attention to those who are attempting to be impartial, rather than those trying to “prove” or “disprove.” Because divisiveness sells stories, expect press stories focusing on the controversy or hype, rather than reporting the blander facts.
  • Don’t fall for the hype of exaggerated expectations that are sure to follow. If you’ve never heard of the “Gartner Hype Cycle,” then now’s the time to look it up. Be patient, and track the real test results more than the news stories. The next progress will still be slow. It will take a while and a few more iterations before the effects start to get unambiguously interesting.
  • Conversely, don’t fall for the pedantic disdain (typically from those whose ideas are more conventional and less exciting). You’ll likely hear dismissals like, “Ok, so it works, but it’s not useful,” or “We don’t need it to do the mission.” Those dismissals only have a kernel of truth in a very narrow, near-sighted manner.
  • Look out for the sharks and those riding the coattails of the bandwagon. Sorry to mix metaphors, but it seemed expedient. There will be a lot of people coming out of the woodwork in search of their own piece of the action. Some will be making outrageous claims (hype) and selling how their version is better than the original. Again, let the test results, not the sales pitches, help you decide.

If It Does Not Work:

  • Expect some to dismiss the entire goal of “spacedrives” based on the failure of one or two approaches. This is a “generalization error” which might make some feel better, but serves no useful purpose.
  • Expect others to chime in with their alternative new ideas to fill the void, the weakest of which will be evident by their hyped sales pitches.
  • Follow the advice given earlier: When trying to figure out which idea to listen to, check their impartiality and rigor. Listen to those that are not trying to sell nor dismiss, but rather to honestly investigate and report. When you find those service providers, keep tuned in to them.
  • To seek new approaches toward the breakthrough goals, look for the intersection of open questions in physics to the critical make-break issues of those desired breakthroughs. Those intersections are listed in our book Frontiers of Propulsion Science.

Gaia Data Hint at Galactic Encounter

The Sagittarius Dwarf Galaxy is a satellite of the Milky Way, about 70,000 light years from Earth and in a trajectory that currently takes it over the Milky Way’s galactic poles; i.e., perpendicular to the galactic plane. What’s intriguing about this satellite is that its path has taken it through the plane of our galaxy multiple times in the past, passages whose effects may still be traceable today. A team of scientists led by Teresa Antoja (Universitat de Barcelona) is now using Gaia data to trace evidence of its effects between 300 and 900 million years ago.

Image: The Sagittarius dwarf galaxy, a small satellite of the Milky Way that is leaving a stream of stars behind as an effect of our Galaxy’s gravitational tug, is visible as an elongated feature below the Galactic centre and pointing in the downwards direction in the all-sky map of the density of stars observed by ESA’s Gaia mission between July 2014 and May 2016. Credit: ESA/Gaia/DPAC.

This story gets my attention because of my interest in the Gaia data and the uses to which they can be put. We just looked at interstellar interloper ‘Oumuamua and saw preliminary work on tracing it back to a parent star. No origin could be determined, but the selection of early candidates was an indication of an evolving method in using the Gaia dataset, which will expand again with the 2021 release. The Sagittarius Dwarf galaxy compels a different method, and we’ll be seeing quite a few new investigations with methods of their own growing out of this attempt to begin a three-dimensional map of the Milky Way. A kinematic census of over one billion stars will come out of Gaia.

A billion stars represents less than 1 percent of the galactic population, so you can see how far we have to go, but we’re already finding innovative ways to put the Gaia data to use, as witness Antoja’s new paper in Nature. As we saw in ‘Oumuamua’s Origin: A Work in Progress, Gaia uses astrometric methods to measure not just the position but the velocity of stars on the plane of the sky. We also get a subset of a few million stars for which the mission measures radial velocity as well, yielding full three-dimensional motions that, combined with positions, make up the stars’ six-dimensional ‘phase space.’

From the Antoja paper:

By exploring the phase space of more than 6 million stars (positions and velocities) in the disk of the Galaxy in the first kiloparsecs around the Sun from the Gaia Data Release 2 (DR2, see Methods), we find that certain phase space projections show plenty of substructures that are new and that had not been predicted by existing models. These have remained blurred until now due to the limitations on the number of stars and the precision of the previously available datasets.

Antoja’s team found that these unique data revealed an unexpected pattern when stellar positions were plotted against velocity. The pattern is a snail shell shape that emerges when plotting the stars’ altitude above or below the plane of the galaxy against their velocity in the same direction. Nothing like this had been noted before, nor could it have been without Gaia.

“At the beginning the features were very weird to us,” says Antoja. “I was a bit shocked and I thought there could be a problem with the data because the shapes are so clear. It looks like suddenly you have put the right glasses on and you see all the things that were not possible to see before.”

Image: This graph shows the altitude of stars in our Galaxy above or below the plane of the Milky Way against their velocity in the same direction, based on a simulation of a near collision that set millions of stars moving like ripples on a pond. The snail shell-like shape of the pattern reproduces a feature that was first seen in the movement of stars in the Milky Way disc using data from the second release of ESA’s Gaia mission, and interpreted as an imprint of a galactic encounter. Credit: T. Antoja et al. 2018.

Stellar motions, we are learning, produce ripples that may no longer show up in the stars’ visible distribution, but do emerge when their velocities are taken into consideration. Antoja and colleagues believe the cause of this motion was the Sagittarius Dwarf Galaxy, whose last close pass would have perturbed many stars in the Milky Way. The timing is the crux, for estimates of when the snail shell pattern began fit with the timing of the last dwarf galaxy pass.

As with the ‘Oumuamua study, we’re at the beginning of teasing out newly available information from the trove that Gaia is giving us. To firm up the connection with the Sagittarius dwarf galaxy, Antoja’s team has much to do as it moves beyond early computer modeling and analysis, but the evidence for perturbation, whatever the source, is clear. From the paper:

…an ensemble of stars will stretch out in phase space, with the range of frequencies causing a spiral shape in this projection. The detailed time evolution of stars in this toy model is described in Methods and shown in Extended Data Fig. 3. As time goes by, the spiral gets more tightly wound, and eventually, this process of phase mixing leads to a spiral that is so wound that the coarse-grained distribution appears to be smooth. The clarity of the spiral shape in the Z-VZ [vertical position and velocity] plane revealed by the Gaia DR2 data, implies that this time has not yet arrived and thus provides unique evidence that phase mixing is currently taking place in the disk of the Galaxy.
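The phase-mixing idea is easy to illustrate numerically. The toy model below is my own sketch, not the team’s code: each star rotates in a scaled vertical phase plane at a rate that falls off with amplitude, so an initially displaced clump shears into just this kind of spiral. All parameters are arbitrary choices for illustration:

```python
# Toy model of vertical phase mixing: each star rotates in the scaled
# (Z, V_Z/omega0) plane at a rate that decreases with its amplitude, so a
# displaced clump of stars shears into a spiral. All numbers are arbitrary.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
n = 20_000

z0 = rng.normal(0.5, 0.15, n)          # initial heights (kpc): an offset clump
p0 = rng.normal(0.0, 0.15, n)          # scaled vertical velocity, V_Z / omega0

omega0, k = 70.0, 2.0                  # base frequency and its falloff with amplitude
amp = np.hypot(z0, p0)                 # oscillation amplitude of each star
omega = omega0 / (1.0 + k * amp**2)    # larger orbits oscillate more slowly

t = 0.5                                # evolve; larger t = more tightly wound spiral
theta = omega * t
z = z0 * np.cos(theta) + p0 * np.sin(theta)      # rotate each star by its own angle
p = -z0 * np.sin(theta) + p0 * np.cos(theta)

plt.scatter(z, p * omega0, s=0.5, alpha=0.3)     # back to V_Z in km/s
plt.xlabel("Z [kpc]")
plt.ylabel("V_Z [km/s]")
plt.show()
```

Run it with a larger value of t and the spiral winds more and more tightly, eventually blurring into apparent smoothness — exactly the phase mixing the authors describe, and the reason the spiral’s clarity dates the perturbation.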

The shell-like pattern thus contains information about the distribution of matter in the Milky Way and the nature of stellar encounters. The bigger picture is that untangling the evolution of the galaxy and explaining its structure is what Gaia was designed for, a process that is now gathering momentum. We’re only beginning to see what options this mission is opening up.

The paper is Antoja et al., “A Dynamically Young and Perturbed Milky Way Disk,” Nature 561 (2018), 360-362 (abstract / preprint).



Of Storms on Titan

I always imagined Titan’s surface as a relatively calm place, perhaps thinking of the Huygens probe in an exotic, frigid landing zone that I saw as preternaturally still. Then, prompted by an analysis of what may be dust storms on Titan, I revisited what Huygens found. It turns out the probe experienced maximum winds about ten minutes after beginning its descent, at an altitude of some 120 kilometers. It was below 60 kilometers that the wind dropped. And during the final 7 kilometers, the winds were down to a few meters per second. At the surface, according to the European Space Agency, Huygens found a light breeze of 0.3 meters per second.

But is Titan’s surface always that quiet? The Cassini probe has shown us that Titan experiences interesting weather driven by a methane cycle that operates at temperatures far below those of Earth’s water cycle, filling its lakes and seas with methane and ethane. The evaporation of hydrocarbon molecules produces clouds that lead to rain, with conditions varying according to season. Conditions at the time of the equinox, with the Sun crossing Titan’s equator, are particularly lively, producing massive clouds and storms in the tropical regions.

So a lot can happen here depending on where and when we sample. Sebastien Rodriguez (Université Paris Diderot, France) and colleagues noticed unusual brightenings in infrared images made by Cassini near the moon’s 2009-2010 northern spring equinox. The paper refers to these as “three distinctive and short-lived spectral brightenings close to the equator.”

The first assumption was that these were clouds, but that idea was quickly discounted. Says Rodriguez:

“From what we know about cloud formation on Titan, we can say that such methane clouds in this area and in this time of the year are not physically possible. The convective methane clouds that can develop in this area and during this period of time would contain huge droplets and must be at a very high altitude — much higher than the 6 miles (10 kilometers) that modeling tells us the new features are located.”

Image: This compilation of images from nine Cassini flybys of Titan in 2009 and 2010 captures three instances when clear bright spots suddenly appeared in images taken by the spacecraft’s Visual and Infrared Mapping Spectrometer. The brightenings were visible only for a short period of time — between 11 hours and five Earth weeks — and cannot be seen in previous or subsequent images. Credit: NASA/JPL-Caltech/University of Arizona/University Paris Diderot/IPGP.

In a paper just published in Nature Geoscience, the researchers likewise discount the possibility that Cassini had detected surface features, whether patches of frozen methane or icy lava flows. The problem here is that the bright features in the infrared were visible for relatively short periods — 11 hours to 5 weeks — while surface spots should have remained visible for longer. Nor do they bear the chemical signature expected from such formations at the surface.

Image: This animation — based on images captured by the Visual and Infrared Mapping Spectrometer on NASA’s Cassini mission during several Titan flybys in 2009 and 2010 — shows clear bright spots appearing close to the equator around the equinox that have been interpreted as evidence of dust storms. Credit: NASA/JPL-Caltech/University of Arizona/University Paris Diderot/IPGP.

Rodriguez and team used computer modeling to show that the brightened features were atmospheric but extremely low, forming what is in all likelihood a thin layer of solid organic particles. Such particles form because of the interaction between methane and sunlight. Because the bright features occurred over known dune fields at Titan’s equator, Rodriguez believes that they are clouds of dust kicked up by wind hitting the dunes.

“We believe that the Huygens Probe, which landed on the surface of Titan in January 2005, raised a small amount of organic dust upon arrival due to its powerful aerodynamic wake,” says Rodriguez. “But what we spotted here with Cassini is at a much larger scale. The near-surface wind speeds required to raise such an amount of dust as we see in these dust storms would have to be very strong — about five times as strong as the average wind speeds estimated by the Huygens measurements near the surface and with climate models.”

Image: Artist’s concept of a dust storm on Titan. Researchers believe that huge amounts of dust can be raised on Titan, Saturn’s largest moon, by strong wind gusts that arise in powerful methane storms. Such methane storms, previously observed in images from the international Cassini spacecraft, can form above dune fields that cover the equatorial regions of this moon especially around the equinox, the time of the year when the Sun crosses the equator. Credit: NASA/ESA/IPGP/Labex UnivEarthS/University Paris Diderot.

In reaching this conclusion, the researchers analyzed Cassini spectral data and deployed atmospheric models and simulations to show that micrometer-sized solid organic particles from the dunes below were responsible — an amount of dust in the atmosphere far exceeding what Huygens found at the surface. The winds associated with the phenomenon would be unusually strong, but could be explained by downbursts in the equinoctial methane storms.

If dust storms can be created by such winds, then Titan’s equatorial regions are still active, with the dunes undergoing constant change. We have a world that is active not only in its hydrocarbon cycle and its geology, but also in what we can call its ‘dust cycle.’ The only moon in the Solar System with a dense atmosphere and surface liquid offers yet another analogy with Earth, a similarity that highlights the complexity of this frigid, hydrocarbon-rich world.

The paper is Rodriguez et al., “Observational evidence for active dust storms on Titan at equinox,” Nature Geoscience 24 September 2018 (abstract).



‘Oumuamua’s Origin: A Work in Progress

The much discussed interstellar wanderer called ‘Oumuamua made but a brief pass through our Solar System, and was only discovered on the way out in October of last year. Since then, the question of where the intriguing interloper comes from has been the object of considerable study. This is, after all, the first object from another star known to have been observed in our system. Today we learn that a team of astronomers led by Coryn Bailer-Jones (Max Planck Institute for Astronomy) has been able to put Gaia data and other resources to work on the problem.

The result: Four candidate stars identified as possible home systems for ‘Oumuamua. None of these identifications is remotely conclusive, as the researchers make clear. The significance of the work is in the process, which will be expanded as still more data become available from the Gaia mission. So in a way this is a preview of a much larger search to come.

What we are dealing with is the reconstruction of ‘Oumuamua’s motion before it encountered our Solar System, and here the backtracking becomes entangled with the object’s trajectory once we actually observed it. Its passage through the system, as well as the stars it encountered before it reached us, all factor into determining its origin.

What the Bailer-Jones team brings to the table is something missing in earlier attempts to solve the riddle of ‘Oumuamua’s home. We learned in June of 2018 that ‘Oumuamua’s orbit was not solely the result of gravitational influences: a tiny additional acceleration had appeared when the object was close to the Sun. That brought comets into the discussion: Was ‘Oumuamua laden with ice that, sufficiently heated, produced gases that accelerated it?

The problem with that idea was that no such outgassing was visible on images of the object, the way it would be with comets imaged close to the Sun. Whatever the source of the exceedingly weak acceleration, though, it had to be factored into any attempt to extrapolate the object’s previous trajectory. Bailer-Jones and team manage to do this, offering a more precise idea of the direction from which the object came.

Image: This artist’s impression shows the first interstellar asteroid: `Oumuamua. This unique object was discovered on 19 October 2017 by the Pan-STARRS 1 telescope in Hawai`i. Subsequent observations from ESO’s Very Large Telescope in Chile and other observatories around the world show that it was travelling through space for millions of years before its chance encounter with our star system. `Oumuamua seems to be a dark red object, either elongated, as in this image, or else shaped like a pancake. Credit: ESO/M. Kornmesser.

At the heart of this work are the abundant data being gathered by the Gaia mission, whose Data Release 2 (DR2) includes position, on-sky motion and parallax information for 1.3 billion stars. As this MPIA news release explains, we also have radial velocity data — motion directly away from or towards the Sun — for 7 million of these Gaia stars. The researchers then added Simbad data on an additional 220,000 stars to retrieve further radial velocity information.

To say this gets complicated is a serious understatement. Some 4500 stars turn up as potential homes for ‘Oumuamua, assuming the object and the stars under consideration all moved along straight lines at constant speeds (a simplification sketched in the snippet below). The researchers then had to take into consideration the gravitational influence of all the matter in the galaxy. The likelihood is that ‘Oumuamua was ejected from a planetary system during the era of planet formation, and that it would have been sent on its journey by gravitational interactions with giant planets in the infant system.
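That first cut — straight-line motion at constant speed — reduces to a closed-form closest-approach calculation. The snippet below is my own illustration of that step, with invented vectors rather than anything from the team’s pipeline:

```python
# Closest approach between 'Oumuamua and a candidate star, both assumed to
# move on straight lines at constant velocity (the paper's first-pass cut).
# Positions in parsecs, velocities in pc/Myr; the vectors here are invented.
import numpy as np

def closest_approach(r_obj, v_obj, r_star, v_star):
    """Return (t_min, d_min): epoch (Myr) and distance (pc) of minimum separation."""
    dr = np.asarray(r_star) - np.asarray(r_obj)
    dv = np.asarray(v_star) - np.asarray(v_obj)
    t_min = -np.dot(dr, dv) / np.dot(dv, dv)   # minimizes |dr + dv * t|
    d_min = np.linalg.norm(dr + dv * t_min)
    return t_min, d_min

t, d = closest_approach(
    r_obj=[0.0, 0.0, 0.0],       v_obj=[-5.3, -2.0, 1.5],
    r_star=[-8.0, -3.0, 2.0],    v_star=[-6.6, -2.5, 1.8],
)
print(f"closest approach: {d:.2f} pc at t = {t:.1f} Myr (negative = in the past)")
```

The real analysis replaces these straight lines with orbits integrated through the Galaxy’s gravitational potential, but the same “how close, and when?” question sits at the core.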

Calculating its trajectory, then, could lead us back to ‘Oumuamua’s home star, or at least to a place close to it. Another assumption is that the relative speed of ‘Oumuamua and its parent star is comparatively slow, because objects are not typically ejected from planetary systems at high speed. Given all this, Bailer-Jones and team come down from 4500 candidates to four that they consider the best possibilities. None of these stars is currently known to have planets at all, much less giant planets, but none has been seriously examined for planets to this point.

Let’s pause on this issue, because it’s an interesting one. Digging around in the paper, I learned that systems with unstable gas giants would be more likely to eject planetesimals than systems with stable giant planets, a consequence of the eccentric orbits of multiple gas giants during an early phase of system instability. It also turns out that there are ways to achieve higher ejection velocities. Does ‘Oumuamua come from a binary star? Let me quote from the paper on this:

Higher ejection velocities can occur for planetesimals scattered in a binary star system. To demonstrate this, we performed a simple dynamical experiment on a system comprising a 0.1 M⊙ star in a 10 au circular orbit about a 1.0 M⊙ star. (This is just an illustration; a full parameter study is beyond the scope of this work.) Planetesimals were randomly placed between 3 au and 20 au from the primary, enveloping the orbit of the secondary… Once again most (80%) of the ejections occur at velocities lower than 10 km s⁻¹, but a small fraction is ejected at higher velocities in the range of those we observe (and even exceeding 100 km s⁻¹).

So keep this in mind in evaluating the candidate stars. One of these is the M-dwarf HIP 3757, which can serve as an example of how much remains to be done before we can claim to know ‘Oumuamua’s origin. Approximately 77 light years from Earth, the star as considered by these methods would have been within 1.96 light years of ‘Oumuamua about 1 million years ago. This is close enough to make the star a candidate given how much play there is in the numbers.

But the authors are reluctant to claim HIP 3757 as ‘Oumuamua’s home star because the relative speed between the object and the star is about 25 kilometers per second, making ejection by a giant planet in the home system less likely. More plausible on these grounds is HD 292249, which would have been within a slightly larger distance some 3.8 million years ago. Here we get a relative speed of 10 kilometers per second. Two other stars also fit the bill, one with an encounter 1.1 million years ago, the other at its closest 6.3 million years ago. Both are in the DR2 dataset and have been catalogued by previous surveys, but little is known about them.

Now note another point: None of the candidate stars in the paper are known to have giant planets, but higher speed ejections can still be managed in a binary star system, or for that matter in a system undergoing a close pass by another star. None of the candidates is known to be a binary. Thus the very mechanism of ejection remains unknown, and the authors are quick to add that they are working at this point with no more than a small percentage of the stars that could have been ‘Oumuamua’s home system.

Given that the 7 million stars in Gaia DR2 with 6D phase space information is just a small fraction of all stars for which we can eventually reconstruct orbits, it is a priori unlikely that our current search would find ‘Oumuamua’s home star system.

Yes, and bear in mind too that ‘Oumuamua is expected to pass within 1 parsec of about 20 stars and brown dwarfs every million years. Given all of this, the paper serves as a valuable tightening of our methods in light of the latest data we have about ‘Oumuamua, and points the way toward future work. The third Gaia data release is to occur in 2021, offering a sample of stars with radial velocity data ten times larger than DR2 [see the comments for a correction on this]. No one is claiming that ‘Oumuamua’s home star has been identified, but the process for making this identification is advancing, an effort that will doubtless pay off as we begin to catalog future interstellar objects making their way into our system.
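That figure of roughly 20 encounters per million years is easy to sanity-check with a back-of-envelope sweep-up calculation. The density and speed below are rough textbook values of my own choosing, not numbers from the paper:

```python
# Back-of-envelope check on the encounter rate: a body moving at speed v
# through a field of number density n sweeps a cylinder of radius b, giving
# n * pi * b**2 * v encounters per unit time. Density and speed are rough
# textbook values near the Sun, not figures taken from the paper.
import math

n = 0.1                              # stars per cubic parsec (rough local value)
b = 1.0                              # encounter radius, pc
v_kms = 30.0                         # typical relative speed, km/s

KM_PER_PC = 3.0857e13
SEC_PER_MYR = 3.156e13
v = v_kms * SEC_PER_MYR / KM_PER_PC  # ~30.7 pc per Myr

rate = n * math.pi * b**2 * v
print(f"~{rate:.0f} passes within {b:.0f} pc per Myr")
# ~10 per Myr from stars alone -- the same order as the ~20 quoted above,
# which also counts brown dwarfs.
```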

The paper is Bailer-Jones et al., “Plausible home stars of the interstellar object ‘Oumuamua found in Gaia DR2,” accepted for publication in The Astrophysical Journal (preprint).



Hayabusa2: Successful Rover Deployment at Asteroid Ryugu

That small spacecraft can become game-changers, our topic last Friday, is nowhere more evident than in the success of Rover 1A and 1B, diminutive robot explorers that separated from the Hayabusa2 spacecraft at 0406 UTC on September 21 and landed soon after. Their target, the asteroid Ryugu, will be the site of detailed investigation not only by these two rovers, but also by two other landers, the German-built Mobile Asteroid Surface Scout (MASCOT) and Rover 2, the first of which is to begin operations early in October. Congratulations to JAXA, Japan’s space agency, for these early successes delivered by its Hayabusa2 mission.

Surface operations will be interesting indeed. Both rovers were released at an altitude of 55 meters above the surface, their successful deployment marking an advance over the original Hayabusa mission, which was unable to land its rover on the asteroid Itokawa in 2005. Assuming all goes well, the mission should gather three different samples of surface material for return to Earth in 2020. The third sample collection is to take advantage of Hayabusa2’s Small Carry-on Impactor (SCI), which will blast out a crater so that subsurface material can be retrieved.

Why Ryugu? The object is a carbonaceous asteroid that has likely changed little since the Solar System’s early days, rich in organic material and offering us insight into the kind of objects that would have struck the Earth in the era when life’s raw materials, along with water, could have been delivered. It has also proven, as the JAXA team knew it would, a difficult landing site, with an uneven distribution of mass that produces variations in the gravitational pull over the surface.

On that score, it’s interesting to note that the Hayabusa2 controllers are sharing data with NASA’s OSIRIS-REx mission to asteroid Bennu. Likewise a sample return effort, OSIRIS-REx will face the same gravitational issues inherent in such small, irregular objects, which can be ameliorated by producing maps of each asteroid’s gravity. The three-dimensional models produced for the Dawn spacecraft at Ceres are the kind of software tools that will help both mission teams understand their targets better and ensure successful operations on the surface.
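To see why an uneven mass distribution matters, it helps to think in terms of a toy ‘mascon’ model, which approximates an irregular body as a handful of point masses. Real missions use far finer models (polyhedral shapes, spherical-harmonic fields); the sketch below, with entirely invented numbers, simply shows surface gravity varying from place to place:

```python
# Toy 'mascon' gravity model: approximate an irregular body as a few point
# masses and sum their pulls. Real missions use far finer models; this only
# shows why a lumpy mass distribution makes surface gravity vary with
# location. All positions and masses below are invented.
import numpy as np

G = 6.674e-11                                     # m^3 kg^-1 s^-2

mascon_pos = np.array([[150.0, 0.0, 0.0],         # positions in meters
                       [-150.0, 0.0, 0.0],
                       [0.0, 100.0, 0.0]])
mascon_mass = np.array([2.0e11, 1.5e11, 1.0e11])  # kg, ~Ryugu-scale in total

def gravity(point):
    """Acceleration vector (m/s^2) at `point` from all mascons."""
    d = mascon_pos - point                        # vectors toward each mascon
    r = np.linalg.norm(d, axis=1)
    return (G * mascon_mass / r**3) @ d           # sum of G * m * d / |d|^3

for p in ([450.0, 0.0, 0.0], [0.0, 450.0, 0.0]):
    print(p, "->", np.linalg.norm(gravity(np.array(p))), "m/s^2")
```

Two points at the same distance from the center feel measurably different pulls, which is exactly the effect the mission teams must map before attempting surface operations.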

But back to Rover 1A and 1B, which have landed successfully and are both taking photographs and sending data — the first time we have landed and moved a probe autonomously on an asteroid surface. Although the first image was blurred because of the rover’s spin, it did display the receding Hayabusa2 spacecraft and the bright swath of the asteroid just below. Here’s JAXA’s mission tweet of that first image.

Says Tetsuo Yoshimitsu, who leads the MINERVA-II1 rover team:

Although I was disappointed with the blurred image that first came from the rover, it was good to be able to capture this shot as it was recorded by the rover as the Hayabusa2 spacecraft is shown. Moreover, with the image taken during the hop on the asteroid surface, I was able to confirm the effectiveness of this movement mechanism on the small celestial body and see the result of many years of research.

The ‘hop’ Yoshimitsu mentions is the rovers’ means of locomotion on the surface. Remember that these vehicles are no more than 18 centimeters wide and 7 centimeters high, weighing on the order of 1 kilogram. In Ryugu’s light gravity, the rovers will make small jumps across the surface, a motion carefully constrained so as not to reach the object’s escape velocity — a constraint worth quantifying, as in the sketch below.
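A rough calculation of that constraint, using approximate published estimates for Ryugu’s mass and radius rather than anything from JAXA’s engineering data:

```python
# Why hops must be gentle: Ryugu's escape velocity, from v_esc = sqrt(2GM/r).
# Mass and radius are approximate published estimates, not mission data.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M = 4.5e11           # kg, approximate mass of Ryugu
r = 450.0            # m, approximate mean radius

v_esc = math.sqrt(2 * G * M / r)   # ~0.37 m/s
g = G * M / r**2                   # ~1.5e-4 m/s^2, surface gravity
print(f"escape velocity ~{100 * v_esc:.0f} cm/s, surface gravity ~{g:.1e} m/s^2")
# Any hop faster than a few tens of cm/s risks drifting off into space,
# hence the carefully limited jumps.
```

With numbers like these, a walking-pace push would be more than enough to leave Ryugu behind. Below, then, is the first Rover-1A image taken during a hop.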

Image: Captured by Rover-1A on September 22 at around 11:44 JST. Color image captured while moving (during a hop) on the surface of Ryugu. The left-half of the image is the asteroid surface. The bright white region is due to sunlight. Credit: JAXA.

And have a look at an image taken during landing operations before Rover-1B reached the surface. Here the asteroid terrain is clearly defined.

Image: Captured by Rover-1B on September 21 at around 13:07 JST. This color image was taken immediately after separation from the spacecraft. The surface of Ryugu is in the lower right. The coloured blur in the top left is due to the reflection of sunlight when the image was taken. Credit: JAXA.

Yuichi Tsuda is Hayabusa2 project manager:

I cannot find words to express how happy I am that we were able to realize mobile exploration on the surface of an asteroid. I am proud that Hayabusa2 was able to contribute to the creation of this technology for a new method of space exploration by surface movement on small bodies.

I would say Tsuda’s pride in his team and his hardware is more than justified. As we go forward with surface operations, let me commend Elizabeth Tasker’s fine work in spreading JAXA news in English. Even as JAXA offers live updates from Hayabusa2 in English and the official Hayabusa2 site offers its own coverage, Tasker, a British astrophysicist working at JAXA, has provided useful mission backgrounders like this one, as well as running the English-language Hayabusa2 Twitter account @haya2e_jaxa, and keeping up with her own Twitter account @girlandkat. There will be no shortage of Ryugu news in days ahead.

