Centauri Dreams
Imagining and Planning Interstellar Exploration
SETI: The Casino Perspective
I like George Johnson’s approach toward SETI. In The Intelligent-Life Lottery, he talks about playing the odds in various ways, and that of course gets us into the subject of gambling. What are the odds you’ll hit the right number combination when you buy a lottery ticket? Whenever I think about the topic, I always remember walking into a casino one long ago summer on the Côte d’Azur. I’ve never had the remotest interest in gambling, and neither did the couple we were with, but my friend pulled a single coin out of his pocket and said he was going to play the slots.
“This is it,” he said, holding up the coin, a simple 5 franc disk (this was before the conversion to the Euro). “No matter what happens, this is all I play.”
He went up to the nearest slot machine and dropped the coin in. Immediately lights flashed and bells rang, and what we later calculated as the equivalent of about $225 came pouring out. Surely, I thought, he’ll take at least one of these coins and play it again — it’s how gambling works. But instead, he headed for the door and we turned the money into a nice meal. $225 isn’t a huge hit, to be sure (not in the vicinity of Monte Carlo!), but calculating the value of the 5 franc coin at about a dollar, he did OK. As far as I know, none of us has ever gone back into a casino.
Image: The Palais de la Méditerranée in Nice. It’s possible to drop a lot of money in here fast, but we got out unscathed.
The odds on winning the grand prize in a lottery are formidable, and Johnson notes that a Powerball prize of $90 million, the result of hitting an arbitrary combination of numbers, went recently to someone who picked up a ticket at a convenience store in Colorado. The odds on that win were, according to Powerball’s own statistics, something like one in 175 million.
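That “one in 175 million” follows directly from the game’s combinatorics. A quick sketch in Python, assuming the Powerball format in use at the time (five white balls drawn from 59, plus one red “Powerball” from 35):

```python
from math import comb

# Powerball as configured in 2014 (an assumption for this sketch):
# 5 white balls drawn from 59, plus 1 red "Powerball" from 35.
white = comb(59, 5)        # ways to choose the five white balls
jackpot_odds = white * 35  # each white combination pairs with 35 red balls

print(f"{jackpot_odds:,}")  # → 175,223,510 -- the "one in 175 million" figure
```

The jackpot odds are simply the number of equally likely tickets, since exactly one combination wins.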
Evolutionary biologist Ernst Mayr probably didn’t play the slots, but he used his own calculations of the odds to argue against Carl Sagan’s ideas on extraterrestrial civilizations. No way, said Mayr: intelligence is vanishingly rare. It took several billion years of evolution to produce a species that could build cities and write sonnets. If you’re thinking of the other inhabitants of spaceship Earth, consider that we are one out of billions of species that have evolved in this time. What slight tug on the evolutionary chain might have canceled us out altogether?
Johnson likewise quotes Stephen Jay Gould, who argued that so many chance coincidences put us where we are today that we should be awash in wonder at our very existence. We not only hit the Powerball numbers, but we kept buying tickets, and with each new ticket, we won again and got an even larger prize. Some odds!
For Gould, the fact that any of our ancestral species might easily have been nipped in the bud should fill us “with a new kind of amazement” and “a frisson for the improbability of the event” — a fellow agnostic’s version of an epiphany.
“We came this close (put your thumb about a millimeter away from your index finger), thousands and thousands of times, to erasure by the veering of history down another sensible channel,” he wrote. “Replay the tape a million times,” he proposed, “and I doubt that anything like Homo sapiens would ever evolve again. It is, indeed, a wonderful life.”
A universe filled with planets on which nothing more than algae and slime have evolved? Perhaps, but of course we can’t know this until we look, and I think Seth Shostak gets it right in an essay on The Conversation called We Could Find Alien Life, but Politicians Don’t Have the Will. Seth draws the distinction between searching for life per se, as we are engaged in on places like Mars, and searching for intelligent beings who use technologies to communicate. He’s weighing evolution’s long odds against the sheer numbers of stellar systems we’re discovering, and saying the idea of other intelligence in the universe is at least plausible.
And here the numbers come back into play because, despite my experience in the Nice casino, we’re unlikely to hit a SETI winner with only a few coins. Shostak points out that the proposed 2015 NASA budget allocates $2.5 billion for planetary science, astrophysics and related work including JWST — this encompasses spectroscopy to study the atmospheres of exoplanets, another way we might find traces of living things on other worlds, though not necessarily intelligent species. And while this figure is less than 1/1000th of the total federal budget in the US, the combined budgets for the SETI effort are a thousand times less than what NASA will spend.
“Of course, if you don’t ante up, you will never win the jackpot,” Shostak concludes, yet another gambling reference in a field that is used to astronomical odds and how we might defeat them. I have to say that Mayr’s analysis makes a great deal of sense to me, and so does Gould’s, but I’m with Shostak anyway. The reason is simple: We have no higher calling than to discover our place in the universe, and to do that, the question of whether or not other intelligent species exist is paramount. I’m one of those people who want to be proven wrong, and the way to do that is with a robust SETI effort working across a wide range of wavelengths.
And working, I might add, across a full spectrum of ideas. Optical SETI complements radio SETI, but we can broaden our SETI hunt to include the vast troves of astronomical data our telescopes are producing day after day. We have no notion of how an alien intelligence might behave, but we can look for evidence not only in transmissions but in the composition of stellar atmospheres and asteroid belts, all places we might find clues of advanced species modifying their environment. It is not inconceivable that we might one day find signs of structures, Dyson spheres or swarms or other manipulations of a solar system’s available resources.
So I’m with the gamblers on this. We may have worked out the Powerball odds, but figuring out the odds on intelligent life is an exercise that needs more than a single example to be credible. I’ll add that SETI can teach us a great deal even if we never find evidence of ETI. If we are alone in the galaxy, what would that say about our prospects as we ponder interstellar expansion? Would we, as Michael Michaud says, go on from this to ‘impose intention on chance?’ I think so, for weighing against our destructive impulses, we have a dogged need to explore. SETI is part of our search for meaning in the cosmos, a meaning we can help to create, nurture and sustain.
Did Stardust Sample Interstellar Materials?
Space dust collected by NASA’s Stardust mission, returned to Earth in 2006, may be interstellar in origin. We can hope that it is, because the Solar System we live in ultimately derives from a cloud of interstellar gas and dust, so finding particles from outside our system takes us back to our origins. It’s also a first measure — as I don’t have to tell this audience — of the kind of particles a true interstellar probe will encounter after it has left our system’s heliosphere, the ‘bubble’ in deep space blown out by the effects of the Sun’s solar wind.
Image: Artist’s rendering of the Stardust spacecraft. The spacecraft was launched on February 7, 1999, from Cape Canaveral Air Station, Florida, aboard a Delta II rocket. It collected cometary dust and suspected interstellar dust and sent the samples back to Earth in 2006. Credit: NASA JPL.
The cometary material has been widely studied in the years since its return, but how do we handle the seven potentially interstellar grains found thus far, and verify their origin? It’s not an easy task. Stardust exposed its collector on the way to comet Wild 2 between 2000 and 2002. Aboard the spacecraft, sample collection trays of aerogel, separated by aluminum foil, did the trapping: three of the potentially interstellar particles, only a tenth as large as Wild 2’s comet dust, lodged in the aerogel, while four other particles of interest left pits and rim residue in the aluminum foil. At Berkeley, synchrotron radiation from the lab’s Advanced Light Source, along with scanning transmission X-ray and Fourier transform infrared microscopes, has ruled out many candidate interstellar dust particles because they are contaminated with aluminum.
The latter may have been knocked off the spacecraft to become embedded in the aerogel, but we’ll learn more as the work continues. The grains are more than a thousand times smaller than a grain of sand. To confirm their interstellar nature it will be necessary to measure the relative abundance of three stable isotopes of oxygen, says Andrew Westphal (UC-Berkeley), lead author of a paper published last week in Science. In this news release from Lawrence Berkeley National Laboratory, Westphal says that while the analysis would confirm the dust’s origin, the process would destroy the samples, which is why the team is hunting for more particles in the Stardust collectors even as it practices isotope analysis on artificial dust particles.
Image: The bulbous impact from the vaporized dust particle called Sorok can barely be seen as the thin black line in this section of aerogel in the upper right corner. Credit: Westphal et al. 2014, Science/AAAS.
So far the analysis has been entirely non-destructive and the results have been in some ways surprising. Twelve papers being published in Meteoritics & Planetary Science are outlining the methods now being deployed. Finding the grains has meant probing the aerogel panels by studying tiny photographic ‘slices’ at different visual depths, producing a sequence of millions of images that was turned into a video. A citizen science project called Stardust@home was a player in the analysis, using distributed computing and the eyes of volunteers to study the video to look for tracks caused by the dust. So far, more than 100 tracks have been found but not all have been analyzed, and only 77 of the 132 aerogel panels have been scanned.
So we have the potential for further finds. What we’re learning is that if this dust is indeed interstellar, it’s surprisingly diverse. Says Westphal:
“Almost everything we’ve known about interstellar dust has previously come from astronomical observations—either ground-based or space-based telescopes. The analysis of these particles captured by Stardust is our first glimpse into the complexity of interstellar dust, and the surprise is that each of the particles are quite different from each other.”
Image: The dust speck called Orion contained the crystalline minerals olivine and spinel, as well as an amorphous material containing magnesium and iron. Credit: Westphal et al. 2014, Science/AAAS.
Two of the larger particles have a fluffy composition that Westphal compares to a snowflake, a structure not anticipated from earlier models of interstellar dust. Interestingly, they contain olivine, a mineral composed of magnesium, iron and silicon, which implicates disk material or outflows from other stars, modified by their time in the interstellar deep. The fact that three of the particles found in the aluminum foil between tiles on the collector tray also contained sulfur compounds is striking, as sulfur was not expected in interstellar particles. The ongoing analysis of the remaining 95 percent of the foils in the collector may help clarify the situation.
The paper is Westphal et al., “Evidence for Interstellar Origin of Seven Dust Particles Collected by the Stardust Spacecraft,” Science Vol. 345, No. 6198 (2014), pp. 786-791 (abstract).
A Dramatic Upgrade for Interferometry
What can we do to make telescopes better both on Earth and in space? Ashley Baldwin has some thoughts on the matter, with reference to a new paper that explores interferometry and advocates an approach that can drastically improve its uses at optical wavelengths. Baldwin, a regular Centauri Dreams commenter, is a consultant psychiatrist at the 5 Boroughs Partnership NHS Trust in Warrington, UK and a former lecturer at Liverpool and Manchester Universities. He is also a seriously equipped amateur astronomer — one who lives a tempting 30 minutes from the Jodrell Bank radio telescope — with a keen interest in astrophysics and astronomical imaging. His extensive reading takes in the latest papers describing optical breakthroughs, making him a key information source on these matters. His latest find could have major ramifications for exoplanet detection and characterization.
by Ashley Baldwin
An innocuous-looking article by Michael J. Ireland (Australian National University, Canberra) and John D. Monnier (University of Michigan) may represent a big step towards one of the greatest astronomical instrument breakthroughs since the invention of the telescope. In true Monnier style it is downplayed. But I think you should pay attention to “A Dispersed Heterodyne Design for the Planet Formation Imager (PFI),” available on the arXiv site. The Planet Formation Imager is a future world facility that will image the process of planetary formation, especially the formation of giant planets. What Ireland and Monnier are advocating is a genuine advance in interferometry.
An interferometer essentially combines the light of several different telescopes, all in the same phase, so that it adds together “constructively,” or coherently, to create an image via a rather complex mathematical process called a Fourier transform (no need to go into detail, but suffice to say it works). We wind up with detail, or angular resolution, equivalent to that of a single telescope with an aperture as large as the distance, or “baseline,” between the two telescopes. Combining several telescopes creates more baselines, which in effect fill in more detail in the virtual single telescope’s “diluted aperture.” The equation for baseline number is n(n−1)/2, where n is the number of telescopes. If you have 30 telescopes, this gives an impressive 435 baselines, with angular resolution orders of magnitude beyond the biggest single telescope. So far so easy? Wrong.
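The pair-counting arithmetic is easy to check for yourself; a minimal sketch in Python:

```python
def baselines(n: int) -> int:
    """Number of unique telescope pairs (baselines) in an n-element array: n(n-1)/2."""
    return n * (n - 1) // 2

print(baselines(2))   # → 1: two telescopes give a single baseline
print(baselines(30))  # → 435, the figure quoted for a 30-telescope array
```

The count grows roughly as the square of the number of telescopes, which is why adding elements to an array pays off so quickly in image detail.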
The principle was originally envisaged in the 1950s for optical/infrared telescopes. The problem is the coherent mixing of the individual wavelengths of light. It must be accurate to a tiny fraction of a wavelength, which for optical light is a few billionths of a metre. Worse still, how do you arrange for light, each signal at a slightly different phase, to be mixed from telescopes a large distance apart?
Radio interferometers do this via optical fibres. Easy. Remember, you have to allow for the different times at which waves from different sources each arrive at the “beam combining” mirror by mixing them in the phase they left the original scope. This is done electronically. The radio waves are converted into electrical impulses at source, each representing the phase at which they hit the telescope. They can then be converted back to the correct phase radio wave later, to be mixed at leisure by a computer and the Fourier transform used to create an image.
The more telescopes, the more baselines, and the longer the baselines, the greater the resolution. This has been done in the UK by connecting seven large radio telescopes by fibre optic cable to create an interferometer, e-MERLIN, with 21 baselines, the longest of which is over 200 kilometers. Wow! This has been connected with radio telescopes across Europe to make an even bigger device. The US radio telescopes have been connected into the Very Long Baseline Array, from Hawaii to the mainland US to the Virgin Islands, to create a maximum baseline of thousands of kilometers. The European and US devices can be connected for even bigger baselines, and even connected to space radio telescopes to give baselines wider than the Earth itself. Truly awesome resolution results.
Image: e-Merlin is an array of seven radio telescopes, spanning 217 km, connected by a new optical fibre network to Jodrell Bank Observatory. Credit: Jodrell Bank Observatory/University of Manchester.
Where does all this leave optical/infrared interferometry, I hear you say? Well, a long way behind, so far. Optical/infrared light is at too high a frequency to convert to stable equivalent electrical pulse proxies as with radio, and current optical cable, despite being good, loses too much of its transmitted signal (so called dispersion) to be of any use for transferral over distance as with the radio interferometer (although optical cables are rapidly improving in quality). There are optical/infrared interferometers, involving the Keck telescopes and the Very Large Telescope in Chile. There is also the CHARA (Center for High Angular Resolution Astronomy) array of Georgia State University and the Australian SUSI (Sydney University Stellar Interferometer). Amongst others.
These arrays transmit the actual telescope light itself before mixing it, a supercomputer providing the accuracy needed to keep the light in the phase it had at the aperture. They all use multiple vacuum-filled tunnels with complex mirror arrays, “the optical train,” to reflect the light to the beam mixer. It works, but at a cost. Even over the hundred metres or so of distance between telescopes, up to 95% of the light is lost, meaning only small but bright targets such as the star Betelgeuse can be observed. Fantastic angular resolution, though. The star is 500 light years away, yet CHARA (just six one-metre telescopes) can resolve it into a disc! No single telescope, even one of the new super-large ELTs currently being built, could get close! This gives some idea of the sheer power of interferometry. Imagine a device in space with no nasty wobbly atmosphere to spoil things.
But the Ireland and Monnier paper represents hope and shows the way to the future of astronomical imaging. What the researchers are advocating is heterodyne interferometry, an old-fashioned idea, again like interferometry itself. Basically it involves creating an electrical signal as near in frequency as possible to the light entering the telescope, and then mixing it with the incoming light to produce an “intermediate frequency” signal. This signal still holds the phase information of the incoming light, but in a stable electrical proxy that can later be converted back to the original source light and mixed with light from the other telescopes in the interferometer to create an image. This avoids most of the complex, light-losing “optical train.”
Unfortunately, the technique cannot be used for the beam combiner itself or the all-important delay lines, whereby light from different telescopes is diverted so it all arrives at the combiner in phase to be mixed constructively. Both these processes still lose large amounts of light, although much less. The interferometer also needs a supercomputer to combine the source light accurately. Hence the delay till now. The light loss can be compensated for with lots of big telescopes in the interferometer, 4-8 meters being the ideal, as suggested in the paper. This allows baselines of up to 7 km, with the associated massive increase in angular resolution. Bear in mind that a few hundred metres was the previous best, and you see the extent of the improvement.
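The jump from a few hundred metres to 7 km is easy to quantify: the diffraction-limited angular resolution scales as the observing wavelength divided by the baseline. A rough sketch in Python, with illustrative wavelength and baseline figures that are my own assumptions rather than numbers from the paper:

```python
import math

RAD_TO_MAS = 180 / math.pi * 3600 * 1000  # radians to milliarcseconds

def resolution_mas(wavelength_m: float, baseline_m: float) -> float:
    """Diffraction-limited angular resolution, roughly wavelength/baseline, in mas."""
    return wavelength_m / baseline_m * RAD_TO_MAS

# Illustrative values: near-infrared light at ~1.65 microns, a CHARA-class
# ~330 m baseline versus a 7 km baseline of the kind discussed above.
print(resolution_mas(1.65e-6, 330))   # about 1 mas: enough to resolve Betelgeuse's disc
print(resolution_mas(1.65e-6, 7000))  # about 0.05 mas: a ~20x sharper view
```

The scaling is linear in baseline, so every extra kilometre between telescopes buys resolution that no feasible single mirror could match.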
The problem is obvious, though. Lots of big telescopes and a supercomputer add up to a lot of money. A billion dollars or more. Still, it’s a big step in the right direction. Extend the heterodyne concept to exclude the beam combiner and delay line loss and the loss of light approaches that of a radio interferometer. Imagine what could be seen. If the concept ends up in space, then one day we will actually “see” exoplanets. This is another reason why “formation flying” for a telescope/star-shade combination (as explored in various NASA concepts) is so important, as it is a crucial element of a future space interferometer. The Planet Formation Imager discussed in the Ireland and Monnier paper is seen as a joint international effort to manage costs. The best viewing would be in Antarctica. One for the future, but a clearer and more positive future.
What Io Can Teach Us
Io doesn’t come into play very much on Centauri Dreams, probably because of the high astrobiological interest in the other Galilean satellites of Jupiter — Europa, Callisto and Ganymede — each of which may have an internal ocean and one, Europa, a surface that occasionally releases material from below. Io seems like a volcanic hell, as indeed it is, but we saw yesterday that its intense geological activity produces interactions with Jupiter’s powerful magnetosphere, leading to radio emissions that might be a marker for exomoon detection.
The exoplanet hunt has diverse tools to work with, from the transits that result from chance planetary alignments to radial velocity methods that measure the motion of a host star in response to objects around it. Neither is as effective at finding planets in the outer parts of a solar system as we’d like, so we turn to direct imaging for large outer objects and sometimes luck out with gravitational microlensing, finding a planetary signature in the occultation of a distant star. All these methods work together in fleshing out our knowledge of exoplanets, and it will be helpful indeed if electromagnetic detection proves to be a second way, beyond transits, of looking for an exomoon.
That first exomoon detection will be a major event. But in studying Io’s interactions with Jupiter, the paper from Zdzislaw Musielak’s team at the University of Texas at Arlington (see yesterday’s post) leaves open the question of just how common such moons are, and of course we don’t know the answer, other than to say that we do have the example of Titan as a large moon with a thick, stable atmosphere. Clearly Io rewards study in and of itself, and its recent intense activity reminds us what can happen to an object this close to a gas giant’s enormous gravity well. With Musielak’s work in mind, then, let’s have a run at recent Io findings.
What we learn from Imke de Pater (UC-Berkeley) and colleagues is that a year ago, Io went through a two-week period of massive volcanic eruptions sending material hundreds of kilometers above the surface, a pattern that may be more common than we once thought. Io is small enough (about 3700 kilometers across) that hot lava rises high above the surface, and in the case of the most recent events, pelted hundreds of square kilometers with molten slag.
Never a quiet place, Io is usually home to a large outburst every few years, but the scale here was surprising. Says de Pater colleague Ashley Davis (JPL/Caltech):
“These new events are in a relatively rare class of eruptions on Io because of their size and astonishingly high thermal emission. The amount of energy being emitted by these eruptions implies lava fountains gushing out of fissures at a very large volume per second, forming lava flows that quickly spread over the surface of Io.”
Image: Images of Io obtained at different infrared wavelengths (in microns, μm, or millionths of a meter) with the W. M. Keck Observatory’s 10-meter Keck II telescope on Aug. 15, 2013 (a-c) and the Gemini North telescope on Aug. 29, 2013 (d). The bar on the right of each image indicates the intensity of the infrared emission. Note that emissions from the large volcanic outbursts on Aug. 15 at Rarog and Heno Paterae have substantially faded by Aug. 29. A second bright spot is visible to the north of the Rarog and Heno eruptions in c and to the west of the outburst in d. This hot spot was identified as Loki Patera, a lava lake that appeared to be particularly active at the same time. Image by Imke de Pater and Katherine de Kleer, UC Berkeley.
De Pater discovered the first two outbursts on August 15, 2013, with the brightest at a caldera called Rarog Patera; the other occurred at the Heno Patera caldera (a caldera is not so much a crater as a large, bowl-shaped depression with surrounding scarps, left when the surface collapses after a volcanic eruption). The Rarog Patera event produced, according to observations conducted with the Keck II instrument in Hawaii, a 9-meter-thick lava flow covering 80 square kilometers. The Heno Patera flow covered almost 200 square kilometers.
But the main event was on August 29, revealed in observations led by Berkeley grad student Katherine de Kleer at the Gemini North telescope on Mauna Kea and the nearby Infrared Telescope Facility (IRTF). The actual thermal source of the eruption had an area of 50 square kilometers in an event apparently dominated by lava fountains. Usefully, the de Pater team tracked the third outburst for almost two weeks, providing data that will help us understand how such volcanic activity influences Io’s atmosphere. That, in turn, will give us insights into how eruptions support the torus of ionized gas that circles Jupiter in the region of Io’s orbit.
Image: The Aug. 29, 2013, outburst on Io was among the largest ever observed on the most volcanically active body in the solar system. Infrared image taken by Gemini North telescope, courtesy of Katherine de Kleer, UC Berkeley.
Here again we have helpful synergies between different tools, in this case the Japanese HISAKI (SPRINT-A) spacecraft, whose own observations of the Io plasma torus supplement what de Kleer observed in Hawaii. The correlation of the data sets may provide new insights into the process and, if Musielak’s methods at exomoon detection pay off through future radio observations, may help us interpret those results. The gravitational tugs of Jupiter, Europa and Ganymede feed Io’s volcanic activity, surely a scenario that is repeated around gas giants elsewhere. If so, the Io ‘laboratory’ will turn out to have surprising exomoon implications.
Three papers came out of this work, the first being de Pater et al., “Two new, rare, high-effusion outburst eruptions at Rarog and Heno Paterae on Io,” published online in Icarus 26 July 2014 (abstract). We also have de Kleer et al., “Near-infrared monitoring of Io and detection of a violent outburst on 29 August 2013,” published online in Icarus 24 June 2014 (abstract) and de Pater, “Global near-IR maps from Gemini-N and Keck in 2010, with a special focus on Janus Patera and Kanehekili Fluctus,” published online in Icarus 10 July 2014 (abstract). This UC-Berkeley news release is also helpful.
Radio Emissions: An Exomoon Detection Technique?
Here’s an interesting notion: Put future radio telescopes like the Long Wavelength Array, now under construction in the American southwest, to work looking for exomoons. The rationale is straightforward and I’ll examine it in a minute, but a new paper advocating the idea homes in on two planets of unusual interest from the exomoon angle. Gliese 876b and Epsilon Eridani b are both nearby (15 light years and 10.5 light years respectively), both are gas giants, and both should offer a recognizable electromagnetic signature if indeed either of them has a moon.
The study in question comes out of the University of Texas at Arlington, where a research group led by Zdzislaw Musielak is looking at how large moons interact with a gas giant’s magnetosphere. The obvious local analogue is Io, Jupiter’s closest moon, whose upper atmosphere (presumably created by the active volcanic eruptions on the surface) encounters the charged plasma of the magnetosphere, creating current and radio emissions.
The researchers call these “Io-controlled decametric emissions,” and they could be the key to an exomoon detection if we can find something similar around a nearby gas giant like those named above. Io’s atmosphere may be volcanic in origin, but we know from the example of Titan that moons in greatly different configurations can also have an atmosphere. The interactions with the magnetosphere are what is important. “We said, ‘What if this mechanism happens outside of our solar system?'” says Musielak. “Then, we did the calculations and they show that actually there are some star systems that if they have moons, it could be discovered in this way.”
Image: Schematic of a plasma torus around an exoplanet, which is created by the ions injected from an exomoon’s ionosphere into the planet’s magnetosphere. Credit: UT Arlington.
We’ve often speculated about the habitability of a moon orbiting a gas giant, but neither of the planets named above, Gliese 876b and Epsilon Eridani b, is within the habitable zone of its respective star. The former has a semimajor axis of 0.208 AU, beyond the HZ outer edge for this M4V-class red dwarf. Epsilon Eridani b is likewise a gas giant (about 1.5 times Jupiter mass) with an orbital distance of approximately 3.4 AU, again outside the K2V primary’s habitable zone. So early work on these two planets would not be related to the habitability question but would serve as a useful test of our ability to detect exomoons using electromagnetic interactions.
I wrote David Kipping (Harvard-Smithsonian Center for Astrophysics) this morning to ask for his reaction to the electromagnetic approach to exomoon detection. Kipping heads The Hunt for Exomoons with Kepler, which uses techniques involving planetary transits and the signature of exomoons within. He called this work “…an inventive idea which could discover exomoons not detectable with any other technique,” and went on to point out just where electromagnetic methods might be the most effective.
Magnetospheres are more extended for gas giants on wide orbits, like Jupiter. So I would expect this technique to be most fruitful for cold Jupiters, whereas the transit technique is better suited for planets at the habitable-zone distance or closer. The complementary nature of these detection techniques will allow us to find moons around planets at a range of orbital separations.
Adding more tools to our inventory can only help as we proceed in our search for the first exomoon. Let me quote Kipping’s further thoughts on the method:
In order to make a detection with this method, the moon must possess an ionosphere and so some kind of atmosphere. Io has a tenuous atmosphere because of intense tidal friction leading to volcanism and subsequent sulphur dioxide outgassing, but we don’t really know how common such a scenario is. Alternatively, a moon may be able to retain an atmosphere much like the Earth does, but in the Solar System only Titan satisfies this criteria.
The host planet must have a strong magnetosphere. For Jupiter-sized planets, this is reasonable but Neptunes and mini-Neptunes dominate the planet census and if such objects have moons, their magnetospheres are unlikely to be strong enough to produce an observable radio signal via interaction with a moon’s ionosphere.
For these reasons, an absence of a radio signal would not necessarily mean that there were no moons, unlike the transit technique which can make more definitive statements.
The technique is most useful for nearby planetary systems, within a few parsecs, but then again these are likely the most interesting systems to explore!
Unlike the transit method, this technique does not require the orbital inclination of the planetary system to be nearly aligned to our line of sight – a significant advantage.
The best-case quoted sensitivities, 0.25 to 0.75 Earth radii, are comparable to the best-case sensitivities with the transit method.
This new exomoon work reminds me of Jonathan Nichols’ thinking on radio telescopes and exoplanet detection. An astronomer at the University of Leicester, Nichols proposed at a Royal Astronomical Society meeting in 2011 that a radio telescope like the Low Frequency Array (LOFAR), now under construction across France, Sweden, the Netherlands, Great Britain and Germany, could detect the radio waves generated by the aurorae of gas giants, emissions that we can detect from Jupiter and Saturn in our own system. Nichols believes we might use such methods to find planets up to 150 light years away. See Exoplanet Aurora as Detection Tool for more.
The paper is Noyola et al., “Detection of Exomoons Through Observation of Radio Emissions,” The Astrophysical Journal Vol. 791, No. 1 (2014), p. 25 (abstract). The paper on aurora detection is Nichols, “Magnetosphere-ionosphere coupling at Jupiter-like exoplanets with internal plasma sources: implications for detectability of auroral radio emissions,” Monthly Notices of the Royal Astronomical Society, published online July 1, 2011 (abstract / preprint).
‘Aragoscope’ Offers High Resolution Optics in Space
Our recent discussions of the latest awards from the NASA Innovative Advanced Concepts office remind me that you can easily browse through the older NIAC awards online. But first a word about the organization’s history. NIAC operated as the NASA Institute for Advanced Concepts until 2007 under the capable leadership of Robert Cassanova, who shepherded through numerous studies of interest to the interstellar-minded, from James Bickford’s work on antimatter extraction in planetary magnetic fields to Geoffrey Landis’ study of advanced solar and laser lightsail concepts. The NIAC Funded Studies page is a gold mine of ideas.
NIAC has been the NASA Innovative Advanced Concepts office ever since 2011, when the program re-emerged under a modified name. NASA’s return to NIAC in whatever form was a welcome development. Remember that we had lost the Breakthrough Propulsion Physics project in 2002, and there was a period when the encouragement of ideas from outside the agency seemed moribund. Now we’re seeing opportunities for new space concepts that have ramifications for how NASA conducts operations, a welcome platform for experimentation and discovery.
Over the years I’ve written a number of times about Webster Cash’s ideas on ‘starshades,’ which came together under the New Worlds concept that has itself been through various levels of NASA funding. Starshades are large occulters that are used to block out the light of a central star to reveal the planets orbiting around it. Properly shaped, a starshade placed in front of a space telescope can overcome the diffraction of light (where light bends around the edges, reducing the occulter’s effectiveness). Cash’s New Worlds pages provide an overview of his starshade concepts and link to NASA study documents presenting the idea in detail.
With this background, it’s interesting to see that NIAC awarded Cash a new grant in June to study what he calls an Aragoscope, named after Dominique-François-Jean Arago, who carried out a key experiment demonstrating the wave-like nature of light in 1818. Rather than overcoming the diffraction of light, as the starshade is designed to do, the Aragoscope would take advantage of it, blocking the front of the telescope with a large disk but allowing the diffracted light to converge to form an image behind the disk. This ‘Arago Spot’ (also called the ‘Poisson Spot’) was what Arago had demonstrated, a bright point that appears at the center of a circular object’s shadow.
Image: Arago spot experiment. A point source illuminates a circular object, casting a shadow on a screen. At the shadow’s center a bright spot appears due to diffraction, contradicting the prediction of geometric optics. Credit: Wikimedia Commons.
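The physics behind the spot is a standard Fresnel-diffraction result (a textbook derivation, not a figure taken from Cash’s study): for an ideal opaque circular disc illuminated by a point source, the waves diffracted around the rim arrive at the center of the shadow in phase, so the on-axis intensity is essentially what it would be with no disc there at all:

```latex
% On-axis Fresnel diffraction behind an ideal opaque disc:
% rim-diffracted waves arrive at the shadow centre in phase, so
I_{\mathrm{spot}} \approx I_{0},
% where I_0 is the intensity the point source would produce
% on axis with the disc removed.
```

In other words, the shadow’s center is not dark but about as bright as the unobstructed beam, which is exactly the light the Aragoscope proposes to collect and refocus.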
How to put this effect to work in the design of a space telescope? Unlike a starshade, the Aragoscope would be circular in shape, an opaque disk whose diffracted light is directed toward a pinhole camera at its center, then to a telescope that provides extremely high resolution views of stellar objects. Cash sees the method as a way to dramatically lower the cost of large optical systems limited by diffraction effects. Rather than being overcome by such effects, his instrument would gather diffracted light and refocus it. From the NASA announcement last June:
The diagram in the summary chart shows a conventional telescope pointed at an opaque disk along an axis to a distant target. Rather than block the view, the disk boosts the resolution of the system with no loss of collecting area. This architecture, dubbed the “Aragoscope” in honor of the scientist who first detected the diffracted waves, can be used to achieve the diffraction limit based on the size of the low cost disk, rather than the high cost telescope mirror. One can envision affordable telescopes that could provide 7cm resolution of the ground from geosynchronous orbit or images of the sky with one thousand times the resolution of the Hubble Space Telescope.
Image: A view of the Aragoscope, an opaque disk with associated telescope. Credit: Webster Cash.
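It’s worth a back-of-the-envelope check on that 7-centimeter claim. Using the Rayleigh criterion, θ ≈ 1.22 λ/D, we can estimate the effective aperture (here, disk diameter) needed to resolve 7 cm from geosynchronous altitude. The wavelength below is an assumed visible-light value, not a figure from the NIAC study:

```python
# Rough check of the quoted "7 cm from geosynchronous orbit" resolution,
# using the Rayleigh criterion theta ~ 1.22 * lambda / D.
# The wavelength is an illustrative assumption (green visible light).

wavelength = 550e-9          # metres (assumed observing wavelength)
geo_altitude = 35_786_000.0  # geosynchronous altitude, metres
ground_resolution = 0.07     # 7 cm target, metres

theta = ground_resolution / geo_altitude   # required angular resolution, radians
disk_diameter = 1.22 * wavelength / theta  # effective aperture needed, metres

print(f"required angular resolution: {theta:.2e} rad")
print(f"implied disk diameter: {disk_diameter:.0f} m")
```

The answer comes out to a disk a few hundred meters across, which is far beyond any mirror we could build but plausible for a lightweight opaque membrane, and that is the economic argument behind the concept.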
An article in Popular Mechanics this July notes that a 1000-kilometer Aragoscope could study the event horizon of black holes in the X-ray spectrum, making for highly detailed views of interesting galactic nuclei like that of M87. But Cash also talks about picking out features like sunspots and plasma ejections on nearby stars, and says an early target, once a space-based version of the Aragoscope could be launched, would be Alpha Centauri A and B. It’s intriguing that the Aragoscope, in some ways, turns Cash’s earlier starshade concepts on their head, aiming for high resolution rather than high contrast. “I spent a lot of time understanding the physics of destroying diffractive waves very efficiently,” he told the magazine. “In the process, it’s not hard to see that you can use those diffractive waves to create images.”