Transient Listening: A Caution

by Paul Gilster on April 20, 2015

by James Benford

Searching for the faintest of signals in hopes of detecting an extraterrestrial civilization demands that we understand the local environment and potential sources of spurious signals. But we’ve also got to consider how signals might be transmitted, the burden falling on SETI researchers to make sense out of the physics (and economics) that constrain distant beacon builders. James Benford, CEO of Microwave Sciences and a frequent Centauri Dreams contributor, now looks at the problem in light of recent transients and discusses how we should move forward.

Jim Benford

The recent activity on Perytons teaches us a major lesson: we are surrounded by a vast microwave network that can interfere with transient radio astronomy. Our cell phones, though not powerful themselves, trigger transmissions from the far stronger transmitters and antennas of the cell phone towers. Add to that the many Internet hubs, microwave ovens, wireless equipment and extensive communication webs. All of these may produce fast transients with features that are largely unreported.

Filtering out extraneous sources is very important in the broader context of radio telescope astronomy, especially for the growing field of transient radio astronomy. There are many types of possible astronomical transients: of course pulsars, but also magnetars, rotating radio transients (RRATs), gamma ray burst afterglow in radio, etc.

And there may be ETI beacons, which are likely to be transient as well. This may apply especially for the beacons I, and my coauthors Gregory and Dominic Benford, have explored in earlier papers. We argued on economic grounds (both capital costs and operating costs) that they are likely to be transient. [See SETI: Figuring Out the Beacon Builders and A Beacon-Oriented Strategy for SETI, as well as the citations listed at the end of this article].

Traditional SETI research takes the point of view of receivers, not transmitters. This neglects the implications of a simple fact: a receiver does not pay for the transmitter; the sender determines what to build.

But most radio astronomy observers are unfamiliar with the technologies and techniques of transmitters, whether it’s commercial electronic equipment, military equipment or SETI beacon builders.

Image: Magnetar SGR 0418+5729 with a magnetic loop. Magnetars, a potential source of transients, are peculiar pulsars – the spinning remnants of massive stars – that are characterized by unusually intense magnetic fields. Astronomers discovered them through their exceptional behavior at X-ray wavelengths, including sudden outbursts of radiation and occasional giant flares. Credit: ESA/ATG medialab.

A Missing Piece to the Puzzle

A specific example: identifying the source of Perytons at the Parkes radio telescope – the microwave ovens – is incomplete. It does not explain why the observed frequencies follow a descending curve, flattening a little at later times. That is approximately the signature of the ‘dispersion measure’ (DM) produced by interstellar plasma. [See Perytons: A Microwave Solution].

How can that happen in a microwave oven? A jerked-open oven door cuts off the voltage V to the magnetron. The frequency emitted by a magnetron scales as V/B, with the magnetic field B fixed by a permanent magnet. Since frequency is proportional to voltage (f ~ V) at fixed B, the emitted frequency falls as the voltage declines and turns off. The magnetron passes through several lower-frequency modes, which explains why the intensity of the radiation varies up and down as modes wax and wane.

Electron emission in the magnetron also declines as the cathode cools. The emitted frequency thus mimics a dispersion measure. (Note that amplifiers, such as the klystrons within transmitters used in the Deep Space Network and planetary radars, won’t show such behavior: their output frequency is set by a fixed-frequency driving signal. Oscillators such as magnetrons, whose frequency depends on the operating voltage, can all behave this way.)
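As a rough illustration (not a model of any particular oven), the sketch below compares a toy shutdown chirp, with the emitted frequency tracking an exponentially decaying voltage, against the frequency-versus-time sweep that cold-plasma dispersion would produce. The starting frequency, decay constant, and dispersion measure are all assumed round numbers:

```python
import numpy as np

# Illustrative sketch only: compares a magnetron shutdown chirp (f ~ V,
# with V decaying after the door opens) to an interstellar dispersion
# sweep. F0_GHZ, TAU_MS and the DM of 375 pc cm^-3 are assumed values.
F0_GHZ = 1.4    # assumed starting frequency, GHz
TAU_MS = 100.0  # assumed voltage decay time constant, ms
K_DM = 4.149    # dispersion constant: ms GHz^2 per (pc cm^-3)

def magnetron_sweep(t_ms):
    """Frequency tracking an exponentially decaying magnetron voltage."""
    return F0_GHZ * np.exp(-t_ms / TAU_MS)

def dispersion_sweep(t_ms, dm=375.0):
    """Frequency whose plasma delay t = K_DM * dm / f**2 equals t_ms."""
    return np.sqrt(K_DM * dm / t_ms)

t = np.linspace(1.0, 300.0, 300)
chirp, dm_curve = magnetron_sweep(t), dispersion_sweep(t)

# Both curves descend steeply, then flatten at later times -- the
# resemblance that let an oven chirp masquerade as a dispersed signal.
assert np.all(np.diff(chirp) < 0) and np.all(np.diff(dm_curve) < 0)
```

The two sweeps differ in detail, of course, which is one handle observers retain for separating genuine astrophysical bursts from local chirps.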

A Parallel

Therefore there is a parallel between what has now been found about Perytons and what we found about SETI beacons: Five years ago we looked at them from the point of view of those who would build beacons, as opposed to those observers who presume that beacons are all designed around the observers’ convenience or requirements. Similarly, the searchers after Perytons didn’t understand microwave sources – such as microwave ovens – and therefore missed that possibility for some years. People who are doing radio astronomy are usually not conversant with microwave radiating technologies. Alas, in the paper the Swinburne group just published, they still say that the cause of the emissions is “obscure.” It isn’t. Knowledge of how magnetrons work would have led to understanding their “dispersion measure.”

Because of the increasing emphasis in radio astronomy on searching for transients of many varieties, the point of view has to change. Radio astronomers had best study the transient background in detail, to eliminate false positives in their search for unusual astrophysical events. And the Peryton episode is a reminder for SETI that every possible alternative has to be explored before fingers are pointed at an extraterrestrial explanation.

This could be a bit tedious, but it’s essential. And it could avoid future embarrassments. If it’s any consolation, any transmitting aliens hailing us may have considered this as well. Possible implications for our search strategies should be explored.

For more on the Benfords’ papers on beacons, see James Benford, Gregory Benford and Dominic Benford, “Messaging with Cost-Optimized Interstellar Beacons,” Astrobiology Vol. 10, No. 5 (2010), pp. 475-490 (abstract / preprint), and the same authors’ “Searching for Cost-Optimized Interstellar Beacons,” Astrobiology Vol. 10, No. 5 (2010), pp. 491-498 (abstract / preprint).


Exomoons continue to elude us, though they’re under intense study. One detection strategy is called the Orbital Sampling Effect, as explained in the article below. I’ll let Michael Hippke describe it, but the intriguing fact is that we can work with these methods using existing datasets to refine our techniques and actively hunt for candidates. Michael is a researcher based in Düsseldorf, Germany. With a background in econometrics, statistics and IT, he mastered data analysis at McKinsey & Company, a multinational management consulting firm. These days he puts his expertise to work in various areas of astrophysics, and most recently appeared here in our discussion of his paper on Fast Radio Bursts (see Fast Radio Bursts: SETI Implications?).

by Michael Hippke

Our own Solar System hosts 8 planets (plus Pluto and other “dwarf planets”), but 16 large moons with radii over 1,000 km. And we have detected thousands of exoplanets – planets orbiting other stars – but not a single exomoon. The question of their existence is interesting, as some exomoons might in fact be habitable. Lately, there has been speculation that, overall, there might be more habitable moons than planets in the universe. Consequently, we really want to know more about moons!

Moons are, by definition, smaller than their host planets, and thus harder to detect. Various search methods have been proposed, with the HEK project (Hunt for Exomoons with Kepler), led by David Kipping, being the most prominent effort. A novel, promising method was developed by René Heller in 2014, dubbed the “orbital sampling effect” (OSE). As with exoplanet transits, this method stacks many (dozens or ideally hundreds of) planet transits, and searches for the signature of a moon in this stack. While planet transit shapes are rather simple, the moon curves turn out to be very complex.

Image: A star with a transiting planet and its moon. The angled area shows the inclination of the moon orbit. Orbit positions beyond the dashed line are not undergoing transit, and are thus not observable.

In my recent work, I have processed data from the Kepler space telescope to search for this effect. I also worked with the “scatter peak,” an exomoon detection method described by Attila Simon (Konkoly Observatory, Hungary) and team in 2012. It is based on the fact that the geometrical exomoon configuration is very likely different during every exoplanet transit: on some transits, the moon might be ahead of the planet, on others behind it. When stacking many transits, at a given phase-folded time one gets a flux loss in some cases and not in others. This results in increased scatter (photometric noise) when compared to out-of-transit times.
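The scatter-peak idea is easy to demonstrate on synthetic data. The sketch below is a toy model, with all depths, counts and noise levels invented for illustration: it stacks simulated transits in which a small moon dip lands at a random phase each time, then compares the per-phase scatter in and out of transit:

```python
import numpy as np

# Toy demonstration of the "scatter peak": all depths, counts and noise
# levels below are invented for illustration, not taken from the paper.
rng = np.random.default_rng(42)
n_transits, n_points = 200, 100
planet_depth, moon_depth, noise = 0.01, 0.001, 2e-4

flux = np.ones((n_transits, n_points))
flux[:, 40:60] -= planet_depth            # planet dip: same phase every transit
for i in range(n_transits):
    start = rng.integers(20, 70)          # moon dip: random phase each transit
    flux[i, start:start + 10] -= moon_depth
flux += rng.normal(0.0, noise, flux.shape)

scatter = flux.std(axis=0)                # per-phase scatter across the stack
in_transit = scatter[20:80].mean()
out_of_transit = np.concatenate([scatter[:20], scatter[80:]]).mean()
assert in_transit > out_of_transit        # elevated scatter marks the moon
```

Note that the constant-phase planet dip adds no scatter of its own; only the wandering moon dip inflates the per-phase standard deviation, which is exactly what makes the statistic useful.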

While the sole use of the scatter peak is problematic due to stellar noise, it can be used to confirm or reject certain signals. Not surprisingly, the struggle against stellar noise, instrumental jitter and other glitches has required the development of a complex statistical framework. While the Kepler data quality is at the very limit for exomoon hunting, a few very interesting results could be achieved.

The first result is sensitivity. What moons can we detect with Kepler and the OSE? Learning the answer will be useful for assessing future time-series photometry space missions, such as TESS or PLATO 2.0. With Kepler, the limit seems to be about 0.3–0.4 Earth radii for a moon to be detected, which is about the size of Ganymede. In many cases, where the host stars are dimmer or noisier, only larger moons can be detected. Despite these limitations, my work shows that the OSE is a promising method, which will one day, with better data quality and/or processing, likely succeed and find moons!

Image: The smallest radii detectable with the OSE in Kepler data are ~0.4 Earth radii. In many cases, the data and method only allows for the detection of larger moons. These are calculated limits, not real observations.

The second result is the ‘average moon’ effect. While no single moon could be detected, it is possible to “super-stack” a larger sample of planet-OSEs to estimate the average moon size in different samples. For very short-period planets with orbits shorter than about 15 days, no moons are seen. This is in agreement with stability arguments: the closer the planet is to the star, the more the star “pulls” on the moon and tries to swallow it. The critical distance is not perfectly clear, but is believed to lie at ~15-day orbits. In my analysis, I find that the average moon signal comes up for periods over 35 days. In the sample of 35- to 80-day orbits, I find an average moon radius of about 2,000 km (roughly the size of our own Moon). This estimate doesn’t tell us how many planets actually have moons, or how many multiple-moon systems are included in this average. It is for future studies (and telescopes) to determine this. But it is exciting that one can try.

The third result is about individual candidates. A small sample of planets shows prominent OSE-like signals justifying an in-depth analysis. It must be clearly said that, very likely, all of these will turn out to be false-positives. For some cases, it might even be possible to show that they cannot be moons, for example because some configurations are not stable over longer time frames. But this is not a bad result, for when we find false-positives, we can add the detection mechanism for these to our algorithm, and improve future searches.

Image: Planet transit (straight line), moon effect due to the OSE (dashed line) and real datapoints (dots with error bars). In this case of Kepler-264b, the data are in favour of a moon interpretation, although this cannot be considered a detection, as detailed in the paper.

Personally, I would expect that the first moon(s) that will be found will be at the long (large/massive) end of exomoon distribution, as was the case for exoplanets. This comes from a selection bias: Large things are easier to see, and will thus be detected first. It will not mean that all moons are giants, as not all planets are Hot Jupiters (which were the first planets detected). Interesting times are ahead!

For more information, the paper is Hippke, “On the detection of Exomoons: A search in Kepler data for the orbital sampling effect and the scatter peak.” It has been accepted by the Astrophysical Journal for publication. A preprint is available.


G-HAT: Searching For Kardashev Type III

by Paul Gilster on April 16, 2015

A new paper out of the Glimpsing Heat from Alien Technologies search (G-HAT) at Penn State is packed with fascinating reading, and I’m delighted to send you in its direction (citation below) for a further dose of the energizing concepts of ‘Dysonian SETI.’ Supported by a New Frontiers in Astronomy and Cosmology grant funded by the John Templeton Foundation, G-HAT has been studying whether highly advanced civilizations are detectable through their waste heat at mid-infrared wavelengths, a tell-tale signature posited by Freeman Dyson in the 1960s.

We now have the highly useful dataset of some 100 million entries gathered by WISE, the Wide-field Infrared Survey Explorer mission, to work with. G-HAT researcher Roger Griffith, lead author of the paper on this work, went through these data, culling out 100,000 galaxies that could be seen with sufficient detail, and searching for any that produced an unusually strong mid-infrared signature. Fifty galaxies do show higher levels of mid-infrared than expected, necessitating follow-up studies to determine whether natural processes are at work or, indeed, the functions of an extraterrestrial civilization. But no clear signs of alien technology appear.

G-HAT’s founder, Jason Wright, points to the significance of this new scientific result:

“Our results mean that, out of the 100,000 galaxies that WISE could see in sufficient detail, none of them is widely populated by an alien civilization using most of the starlight in its galaxy for its own purposes. That’s interesting because these galaxies are billions of years old, which should have been plenty of time for them to have been filled with alien civilizations, if they exist. Either they don’t exist, or they don’t yet use enough energy for us to recognize them.”

Image: A false-color image of the mid-infrared emission from the Great Galaxy in Andromeda, as seen by NASA’s WISE space telescope. The orange color represents emission from the heat of stars forming in the galaxy’s spiral arms. The G-HAT team used images such as these to search 100,000 nearby galaxies for unusually large amounts of this mid-infrared emission that might arise from alien civilizations. Credit: NASA/JPL-Caltech/WISE Team.

Wright refers to the work as a ‘pilot study’ that will help the researchers tune up their methodologies to separate natural astronomical sources from what could be waste heat from an alien civilization. The team found some curiosities within our own Milky Way, including one cluster of objects that WISE registers strongly, but that appears black in visible light telescopes. Co-investigator Matthew Povich (Cal Poly Pomona) believes this to be a cluster of young stars inside a hitherto undiscovered molecular cloud. Another local find: A bright nebula around the star 48 Librae apparently flagging a huge dust cloud. Both will doubtless receive scrutiny from astronomers even as G-HAT itself moves on to study its own best galactic candidates.

Griffith and Cal Poly Pomona undergraduate Jessica Maldonado studied the astronomical literature to find which of the most interesting objects turned up by the survey had previously been studied. About half a dozen had received no previous scrutiny, a fact that does not surprise G-HAT co-investigator Steinn Sigurðsson:

“When you’re looking for extreme phenomena with the newest, most sensitive technology, you expect to discover the unexpected, even if it’s not what you were looking for. Sure enough, Roger and Jessica did find some puzzling new objects. They are almost certainly natural astronomical phenomena, but we need to study them more carefully before we can say for sure exactly what’s going on.”

That there is much for G-HAT to do should be obvious from the fact that using the abundant WISE data in the mid-infrared is a new direction for SETI. There are those who would argue, as Michael Hart did back in 1975, that given the age of the Milky Way, we should be aware of other civilizations if they indeed existed. Hart believed that any galaxy will become colonized in a relatively short time when compared to the overall age of the galaxy. Either there is no spacefaring species in a galaxy, or the galaxy in question quickly fills up with them.

Nikolai Kardashev classified civilizations in 1964 on the basis of the energies they could use, a Type II civilization being one capable of using the power of its host star’s entire luminosity. This is the now familiar realm of the Dyson sphere, intercepting the entire output of its star. We then move to a Type III civilization, one capable of using the stellar luminosity of an entire galaxy. Hart’s argument was that the emergence of a Type II culture is shortly followed by Type III. If interstellar flight is common, we should expect many Type III civilizations. The paper notes:

If Hart’s reasoning is sound, then we should expect that, unless intelligent, spacefaring life is unique to Earth in the local universe, other galaxies should have galaxy-spanning supercivilizations, and a search for K3’s may be fruitful. If there is a flaw in it, then intelligent, spacefaring life may be endemic to the Milky Way in the form of many K2’s, in which case a search within the Milky Way would be more likely to succeed. It is prudent, therefore, to pursue both routes.

Type II civilizations in our own galaxy should be detectable, and we’ve looked before at searches for the waste heat of the Dyson spheres such cultures might create. Richard Carrigan’s work on this score has been pioneering (see Toward an Interstellar Archaeology as an entry into other articles on the topic that are in the archives). Carrigan used data from IRAS (the Infrared Astronomical Satellite, launched in 1983), deciding out of 11,000 sources that the best of the Dyson sphere candidates he found were reddened and dusty astronomical objects.

The G-HAT paper, the third produced by the group, also points to the searches for Dyson spheres by Jun Jugaku and Shiro Nishimura, whose work in the 1990s found no Dyson spheres around the roughly 550 stars they surveyed within 25 parsecs. James Annis conducted a search for Kardashev Type III civilizations in the late 1990s and found no evidence for them, but mid-infrared surveys like WISE give us far more data within which to conduct such a search.

Thus these early results from G-HAT point to deepening study as the team refines its methods to make them sensitive to lower waste heat levels from extraterrestrial technologies. The currently reported work tells us that none of the galaxies resolved by WISE in this study contain Type III civilizations that are reprocessing 85 percent or more of the starlight of their galaxy into the mid-infrared. And as mentioned above, out of 100,000 galaxies, only fifty show a mid-infrared signature that could be considered consistent with reprocessing more than 50 percent of the starlight.

These fifty point to the further investigations ahead. As does this:

We also identify 93 sources with γ > 0.25 but very little study in the scientific literature. Three of these sources are MIR-bright [mid-infrared] and red galaxies that are essentially new to science, having little or no literature presence beyond bare mentions of a detection by IRAS or other surveys.

Here γ refers to thermal waste heat emitted by an object (waste heat luminosity), which is expressed as a fraction of the starlight available to the civilization. The paper explains that for waste heat temperatures in the 100-600 K range, values of γ near 1 would indicate that most of the galaxy’s luminosity was in the mid-infrared (now we are talking about the waste heat of a Type III technology). Values near 0 would imply that the value of any alien waste heat was small compared to the total output of the stars in the galaxy. The paper continues:

Verification that the MIR flux in all of these galaxies is predominantly from natural sources (e.g., through SED [Spectral Energy Distribution] modeling across many more bands than WISE offers or spectroscopy) will push our upper limit on galaxy-spanning alien energy supplies in our sample of 1 × 10⁵ galaxies down to 50% of the available starlight. In the meantime, these are the best candidates in the Local Universe for Type III Kardashev civilizations.
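As a quick sanity check on that temperature range, Wien's displacement law puts the emission peak of 100-600 K waste heat squarely in the mid-infrared, the band WISE surveyed:

```python
# Wien's displacement law: peak wavelength (microns) = 2897.8 / T (kelvin).
# A quick check that waste heat at 100-600 K peaks in the mid-infrared.
WIEN_UM_K = 2897.8  # Wien displacement constant, micron-kelvin

def peak_wavelength_um(temp_k):
    return WIEN_UM_K / temp_k

cool, warm = peak_wavelength_um(100.0), peak_wavelength_um(600.0)
# ~29 microns at 100 K and ~4.8 microns at 600 K
assert 25 < cool < 32 and 4 < warm < 6
```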

The paper is Griffith et al., “The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. III. The Reddest Extended Sources in WISE,” Astrophysical Journal Supplement Series Vol. 217, No. 2, published 15 April 2015 (abstract / preprint). A Penn State news release is also available.


Perytons: A Microwave Solution

by Paul Gilster on April 15, 2015

Radio bursts scant milliseconds long that have been reported at the Parkes radio telescope in New South Wales — so-called ‘perytons’ — turn out to be the product of microwave ovens. The Case of the Puzzling Perytons, as Erle Stanley Gardner might have titled it, appeared in these pages earlier, with alliteration intact, when Jim Benford tackled it in Puzzling Out the Perytons. You’ll recall that Benford thought microwave ovens were involved, and now we learn that the Parkes team had independently reached the same conclusion before he arrived. Moreover, the authors of the Parkes paper had already embarked upon an investigation that now yields positive results.

Make no mistake, this is a useful finding, even if it has generated a certain degree of understandable banter. After all, we’re looking for emissions from deep space but fending off spurious signals generated by staff lunching on the grounds of the observatory itself. The larger picture, though, is that the kind of signals our radio telescopes work with are so infinitesimally small that they’re subject to myriad incursions from the sea of radiating objects all around us.

We need to know what these sources of interference are, and although it doesn’t have the same cachet as discovering a new astrophysical phenomenon, much less detecting the signature of an extraterrestrial civilization, something as mundane as tracing interference to a microwave oven is a step forward. Perytons have been reported as far back as the 1990s, in some ways acting like a signal from deep space, but in others (their persistent appearance in different fields of view) looking like the result of nearby interference from, perhaps, an aircraft or a telescope component.

In fact, that characteristic wide-field detectability is a known screen that radio astronomers use to filter out local interference. Also looking suspiciously human was the fact that the 25 perytons reported in the literature occurred only during office hours and primarily on weekdays. The perytons, though, had something that set them apart from more normal sources of interference. They showed dispersion measures (DMs) that were roughly similar to genuine astrophysical signals, their swept frequencies mimicking signals that have moved through deep space.
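For a concrete sense of the sweep a dispersion measure implies, the standard cold-plasma delay is t ≈ 4.149 ms × DM × f⁻², with f in GHz and DM in pc cm⁻³. A short sketch (the DM of 375 pc cm⁻³ used below is an assumed round figure in the range reported for perytons, not a value taken from the Parkes paper):

```python
# Cold-plasma dispersion delay: t(ms) = 4.149 * DM * f**-2, with f in GHz
# and DM in pc cm^-3. The DM of 375 below is an assumed round figure in
# the range reported for perytons, not a value from the Parkes paper.
K_DM_MS = 4.149

def delay_ms(dm, f_ghz):
    return K_DM_MS * dm * f_ghz ** -2

# Sweep across a 1.2-1.5 GHz band: the band bottom lags the top by ~0.4 s.
sweep_s = (delay_ms(375.0, 1.2) - delay_ms(375.0, 1.5)) / 1000.0
assert 0.2 < sweep_s < 0.5
```

A genuine celestial burst must show exactly this f⁻² sweep; a local source that merely approximates it is what made the perytons so confusing.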

For more on dispersion measures and their uses, see Fast Radio Bursts: SETI Implications?, which ran ten days ago in these pages. But as that article notes, we have to make a distinction between fast radio bursts (FRBs) and perytons, the former still enigmatic and evidently from non-local sources, the latter now understood to be the product of human technology. The Parkes authors are careful to make this distinction in the concluding paragraphs of their paper.

Image: The 64-meter radio telescope at the Parkes Observatory in New South Wales, Australia. Credit: CSIRO.

The paper on the Parkes work is “Identifying the source of perytons at the Parkes radio telescope” (citation below). Lead author Emily Petroff and team describe their use of radio frequency interference monitoring equipment at Parkes as well as the Australia Telescope Compact Array (about 400 kilometers north of Parkes) to chase down the culprit. In January, the team detected three perytons, syncing with the detection of the signals by the telescope itself.

What the RFI equipment detected were emissions in the 2.3–2.5 GHz range, coincident in time with the 1.4 GHz peryton events, indicating to the researchers that the 1.4 GHz millisecond bursts were associated with the episodes of 2.4 GHz emission. From the paper:

The 2.3 – 2.5 GHz range of the spectrum is allocated to “fixed”, “mobile” and “broadcasting” uses by the Australian Communications and Media Authority, and includes use by industrial, scientific and medical applications, which encompasses microwave ovens, wireless internet, and other electrical items. This suggests that the perytons may be associated with equipment operating at 2.3 ∼ 2.5 GHz, but that some intermittent event or malfunctioning, for example, from the equipment’s power supply, is resulting in sporadic emission at 1.4 GHz.

We also learn that although peryton detections at 1.4 GHz coincide with episodes of higher-frequency emission, the higher-frequency emission can occur without an associated peryton. The RFI monitor data tell the tale: several hundred spikes of emission at 2.3–2.5 GHz occurred during the investigation period. The events clustered by time of day and tended to occur during the daytime, a finding consistent with the use of microwave ovens or other electrical equipment. It remained to attempt the ‘recreation’ of a peryton to test the theory.

Tests in February and March involved running the observatory’s three microwave ovens in close proximity to the telescope at both high and low power. Details are in the paper and I won’t go further into them other than to note that the decisive moment evidently came on the 17th of March, when the microwave cycle was interrupted by opening the door. The paper describes three bright perytons produced from the staff kitchen microwave at times exactly coinciding with the opening of the microwave oven door.

Looking at earlier peryton detections, the authors found that of the 46 perytons detected at Parkes since 1998, more than a third of the total occurred within a short period on a particular day in 1998. Again, the microwave seems clearly implicated in a specific usage scenario:

…we find that indeed eight of the 15 intervals between consecutive events fall within the range 22.0+/-0.3 seconds, which is exceedingly unlikely to have been produced by manually opening the oven. Rather, we believe that the operator had selected a power level of less than 100%, causing the magnetron power to cycle on and off on a 22-second cycle, the period specified in the manufacturer’s service manual and confirmed by measurement. It appears likely that over this seven-minute period the oven produced a peryton on all or most completions of this 22-second cycle but that the operator stopped the oven manually several times by opening the door, each time restarting the 22-second cycle.

In other words, the oven’s magnetron can push out a signal when the door is opened prematurely — the magnetron is still in its shut-down phase and is producing emissions. The staff kitchen and visitors center at the observatory are the sites implicated in peryton production.
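The interval test the authors describe is straightforward to reproduce in outline. In this sketch the event times are synthetic stand-ins (the real 1998 timestamps are in the Petroff et al. paper); the helper simply counts consecutive gaps falling within 22.0 ± 0.3 seconds:

```python
# Sketch of the interval test described above. The event times here are
# synthetic stand-ins (the real 1998 timestamps are in the Petroff et al.
# paper); the helper counts consecutive gaps within 22.0 +/- 0.3 s.
def gaps_near_cycle(times_s, cycle=22.0, tol=0.3):
    gaps = [b - a for a, b in zip(times_s, times_s[1:])]
    hits = sum(abs(g - cycle) <= tol for g in gaps)
    return hits, len(gaps)

# Simulated run: mostly 22 s cycles, with one manual door-opening restart
# (the 80.5 s event) breaking the rhythm, as described in the paper.
events = [0.0, 22.1, 44.0, 80.5, 102.4, 124.5]
hits, total = gaps_near_cycle(events)
assert (hits, total) == (4, 5)
```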

So we have three new peryton detections, allowing the researchers to correlate them with strong local emissions at 2.3–2.5 GHz, and further tests showing that perytons can be generated with microwave ovens on the site. The authors end the paper by stressing that the strong indications that perytons are man-made do not apply to fast radio bursts (FRBs), which they believe are of astronomical origin. In particular, FRB 010724, the so-called ‘Lorimer Burst,’ is not consistent with the peryton data. The authors declare it a genuine FRB, not a peryton.

The paper is Petroff et al., “Identifying the source of perytons at the Parkes radio telescope,” submitted to Monthly Notices of the Royal Astronomical Society (preprint). Nadia Drake does a fine job with the story for National Geographic in Rogue Microwave Ovens Are the Culprits Behind Mysterious Radio Signals.


New Horizons Message Update

by Paul Gilster on April 14, 2015

If you want to send a message to the stars, Jon Lomberg is the man to consult. A gifted artist and creator of the gorgeous Galaxy Garden in Kona, Hawaii, Lomberg may be most famous for his frequent work with Carl Sagan, including the celebrated Cosmos series. But it’s his involvement with the Voyager Interstellar Record, a project for which he served as design director, that makes him so uniquely qualified to embark on a new messaging effort, the One Earth: New Horizons Message project.

Let’s talk about Voyager and how the new message differs. The Voyager record carried 115 images and 27 musical selections, along with abundant audio of the life and natural sounds of our planet. The 12-inch gold-plated copper disk included spoken greetings in fifty-five languages, beginning with Akkadian (a language of ancient Sumer) and ending with Wu, a modern Chinese dialect. The ninety minutes of music can be played at 16 ⅔ revolutions per minute using the cartridge and needle enclosed within the record’s protective jacket.

We knew that the Voyagers would eventually leave the Solar System, which was why putting a message from humanity on them suggested itself. In the absence of a current mission being built to do the same thing, reasoned Lomberg, why not upload a message to the one we already have in progress, the New Horizons probe to Pluto/Charon? The plan is to create the message not through a small committee but a worldwide crowd-sourced effort in which people will build a self-portrait of Earth. Over 10,000 people from more than 140 countries signed the original petition urging NASA to send the Message. The team hopes that thousands more will submit ideas for its content.

Image: Jon Lomberg, project director for the One Earth: New Horizons Message effort.

Just as the Voyager record carried pictures and sound, we could do the same for New Horizons, sending all of our content to the spacecraft once it has completed its encounter with Pluto/Charon and any later Kuiper Belt Object the spacecraft may be able to study. Software and 3D files are another possibility, limited only by the memory constraints of the New Horizons onboard computers, which have been under active study by the One Earth technical team. Unlike the Voyager and earlier Pioneer messages, this is a message that can be updated and improved as long as the spacecraft is in communication with the Earth, which could be a matter of decades.

A news release dated today announces the start of an online fundraising campaign designed by Fiat Physica, which specializes in raising money for science projects. Readers are encouraged to visit the site to view a video on the project. From the release:

We have determined that the message is technically feasible. We are confident of a message lifetime of tens of millennia in the computer memory, and can take steps to increase the message’s durability beyond that. Our submission website will be constructed by the iScience group at the University of Koblenz, under the direction of Ulf-Dietrich Reips. One main goal of the fundraising is to gather the funds required to build this site. We hope to have the submission site ready before the July encounter. We will announce soon when the project is ready to receive submissions.

Messages like those aboard Voyager and New Horizons are designed for the people of Earth as much as any extraterrestrial beings that may, in some remote future, encounter them. Having completed its mission, New Horizons will leave the Solar System in the general direction of Sagittarius, toward the center of the Milky Way, though not in the direction of any specific star. The odds of interception are obviously minute, and the data will be sent to the spacecraft on standard radio links connecting Earth with all NASA spacecraft, representing no increase in the risk of detection beyond messaging traffic that is already in progress. In other words, the One Earth Message, although designed to be found by aliens, almost certainly will not be.

Why, then, send it in the first place? Lomberg and his eighty-strong advisory board believe that creating a message to represent all of us is a priceless educational and cultural opportunity, allowing us to step back and view our ‘pale blue dot’ in a whole new way. Shaping such a message so as to be intelligible to non-humans is itself a challenge with abundant learning opportunities. Alan Stern, principal investigator for New Horizons, is a strong supporter of the One Earth Message, as is the New Horizons team at Johns Hopkins Applied Physics Laboratory. Please have a look at the One Earth Message site and consider getting involved.

“This will be a message from and to the Earth,” says Jon Lomberg. “The very act of creating it will be a powerful reminder that we all share the same, small planet. We truly are one Earth.”

Call for Participation: TVIW 2016

by Paul Gilster on April 13, 2015

I like the theme of the just announced 2016 iteration of the Tennessee Valley Interstellar Workshop. Set in Chattanooga, TN, the meeting will convene at a local landmark, the Chattanooga Choo-Choo Hotel, which is actually built around the old railroad station made famous in the Glenn Miller tune of the same name. What better way to describe the upcoming event than what the group has chosen: “From Iron Horse to Worldship: Becoming an Interstellar Civilization.” The Chattanooga event follows two previous meetings in Oak Ridge and one in Huntsville, AL, all of which I’ve had the pleasure of attending.

The level of engagement I’ve found at TVIW has made all the meetings a success, beginning with the first, in Oak Ridge, back in 2011. That one sticks in my mind because of the intense fog that hung over the mountains as I drove in the evening before. The discussions and presentations were stimulating throughout, with an emphasis on more engagement with audience members than in a formal conference. Last year, this included breakout sessions on specific topics, including SETI, worldship ecosystems and their biological implications, reverse engineering starships from science fiction, and safety issues in deep space systems.

“We are setting the table for serious discussion about humanity’s future as an interstellar civilization,” says John Preston, TVIW’s president. “We want to provide fresh insights into the early challenges that we face while developing technology and prepare for long-term space travel.”

TVIW is now seeking proposals for interdisciplinary working tracks and plenary talks, along with other content such as posters, displays of art and models, demonstrations and panel discussions. Its news release about the upcoming event says that the organization ‘prefers content that is well-grounded, near-term, and practical, and that promotes future collaboration amongst its participants.’ What follows is TVIW’s Call for Participation, reproduced verbatim and with links to guide those interested in getting involved. The Tau Zero Foundation is pleased to be one of the sponsors of the Chattanooga meeting.

“From Iron Horse to Worldship: Becoming an Interstellar Civilization”

The Tennessee Valley Interstellar Workshop (TVIW) hereby invites participation in its 2016 Meeting, to be held from Sunday, February 28 through Wednesday, March 2 in the historic Chattanooga Choo Choo Hotel in Chattanooga, Tennessee. We are seeking proposals for working tracks, plenary talks, and any other content.

Working Tracks are collaborative, small-group discussions about a set of interdisciplinary questions on an interstellar subject. Proposals for Working Tracks are due on June 15, 2015. Once proposed, development of candidate tracks will continue until August 16, 2015. See the web site for full details.

We assume the proposer(s) will be the Working Track’s technical lead and need not act as facilitator or moderator. Each Track will comprise a few hours of structured discussion and work on Monday and Tuesday during the Meeting. We expect participants in each Track to have prepared in advance for the Meeting, so that we will have productive conversations during our short time together. We anticipate that Tracks will extend existing collaborations or become the start of new collaborations after the Meeting. As with our 2014 Meeting, we expect reports from our Tracks to appear as Comments in JBIS.

Submit a Working Track proposal
https://www.tviw.us/event/tviw-2016/propose-working-track

Plenary Talks include traditional papers, lectures, and presentations. Proposals are due on August 16, 2015. Papers should be for original work that has not been previously published. The schedule allocates 20 minutes for each talk followed by 5 minutes of Q&A. Talks will be recorded on video for later distribution. Certain papers may be selected for submission to professional publications, such as JBIS.

Submit a Plenary Talk proposal
https://www.tviw.us/event/tviw-2016/propose-plenary-talk

Other Content includes anything not explicitly mentioned above. This category may include posters, displays of art or models, demonstrations, seminars, panel discussions, interviews, or public outreach events. Proposals are due on August 16, 2015. Be creative!

Submit an Other Content proposal
https://www.tviw.us/event/tviw-2016/propose-other-content

The Program Committee will send out all acceptances and rejections no later than November 1, 2015. Some decisions may occur sooner than this deadline.

TVIW is a group of interstellar enthusiasts who work together to take practical, near-term steps to advance humanity’s progress toward becoming an interstellar civilization. Each TVIW Meeting is an opportunity for face-to-face collaboration about interstellar topics such as exploration, communication, and human expansion. The field of interstellar studies has been heavy with science and technology, yet going to the stars must involve and engage all aspects of human society. TVIW is interested in all fields that can contribute toward this goal and therefore encourages proposals from the social sciences, humanities, and arts. Participants are encouraged not only to present original concepts but to develop these into active projects and continuing research. The Meeting is a time for thought-provoking ideas, stimulating discussions, and boundless optimism about a future that one day will be within humanity’s reach.

Let’s have a great Meeting!

Les Johnson
General Chairman, 2016 Tennessee Valley Interstellar Workshop

Contact the Program Committee
program-committee-2016@tviw.us

TVIW Web Site
https://www.tviw.us

JBIS is the Journal of the British Interplanetary Society.

The TVIW is incorporated in the State of Tennessee as a public-benefit, non-profit scientific education organization, recognized as tax-exempt under section 501(c)(3) of the US Internal Revenue Code. Contributions to TVIW are tax-deductible to the extent provided by law.

Near-Term Missions: What the Future Holds

by Paul Gilster on April 10, 2015

Discussing the state of space mission planning recently with Centauri Dreams contributor Ashley Baldwin, I mentioned my concerns about follow-up missions to the outer planets once New Horizons has done its job at Pluto/Charon. No one is as plugged into mission concepts as Dr. Baldwin, and as he discussed what’s coming up both in exoplanet research as well as future planetary missions, I realized we needed to pull it all together in a single place. What follows is Ashley’s look at what’s coming down the road in exoplanetary research as well as planetary investigation in our own Solar System, an overview I hope you’ll find as useful as I have. Dr. Baldwin is a consultant psychiatrist at the 5 Boroughs Partnership NHS Trust (Warrington, UK) and a former lecturer at Liverpool and Manchester Universities. He is also, as his latest essay makes clear, a man with a passion for what we can do in space.

by Ashley Baldwin

We’ve come a long way since the discovery of the first “conventional” exoplanets in 1995 (pulsar planets, of course, had been discovered several years earlier). Since then, ground-based radial velocity (RV) and transit surveys have discovered several hundred planets, supplemented by a thousand confirmed finds plus many more “candidates” from the miracle that is Kepler (both the original mission and K2), ably aided and abetted by CoRoT and the re-purposed Hubble and Spitzer. Several dozen of the larger examples have been “characterised” by a combination of RV, transit and transit spectroscopy measurements. We stand at the beginning of a golden age, despite the austere times in which we live. But as they say, necessity is the mother of invention, and the innovation arising from lack of funds has led to a versatility in hardware use unimaginable twenty years ago (along with 1800-plus exoplanets!).

So what next? Lots of things. Before talking about obvious things like space telescopes and spectroscopy, I must make space for asteroseismology. A new science and still little known, but without it hardly any of the recent exoplanet advances would have been possible. It is the study of pulsations inside stars, kinetic energy converted into vibrations, in effect sound waves. Originating with helioseismology, the study of the Sun, asteroseismology works on the same principle by which the seismic waves of earthquakes inform us about the details of the Earth’s interior.

Vibrations from different parts of stellar interiors expand outwards to the star’s surface where their nature and origin can be determined with increasing accuracy. There are three types of these vibrations:

  • Gravity or ‘g’ waves originate in stellar cores and have been implicated in the movement of stellar material into other regions, as well as contributing to the uniform rotation of the core, thus linking it with the outer convective zone of solar-mass stars. These waves reach the surface only in special circumstances.
  • Pressure or ‘p’ waves arise in the outer convective zone and, because they do reach the surface, are the main source of information on this crucial region of a star’s interior.
  • Finally, fundamental or ‘f’ waves are surface ‘ripples’. Together these vibrations help astronomers accurately calculate the mass, age, diameter and temperature of a star, with age in particular determined to an accuracy of 10%. Why so important? Apart from determining the nature of the stars themselves, these values underpin the description of any orbiting exoplanets.

Understand the star and you understand the planet. The more stars that are subject to this analysis, the more precise it becomes. This is a little known but crucial element of Kepler (and CoRoT) and will also be central to PLATO (Planetary Transits and Oscillations of stars) in the 2020s. TESS (Transiting Exoplanet Survey Satellite), sadly, is too small and its pointing times per star too brief to add a lot to the process despite its huge capabilities for such a low budget. Kepler has a dedicated committee that oversees the correlation of all asteroseismological data as will PLATO, and the huge amounts of stellar information collected will provide precision detail on exoplanets by the end of the next decade (and indeed before).

Near-Term Developments

Things start hotting up next year with “first light” on the revolutionary RV spectrograph ESPRESSO (Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations) at the VLT in Chile. This device is an order of magnitude more powerful than predecessors like HARPS and, in combination with the large telescopes of the VLT, can discover planets the size of Earth in the habitable zones of Sun-like stars, the first instrument of its kind able to do so. Apart from flagging such planets for potential direct imaging and spectroscopic characterisation, it will also provide mass estimates with varying degrees of accuracy.

Meanwhile, nearby ALMA (Atacama Large Millimeter/submillimeter Array) will provide unprecedented images of, and detail on, the all-important protoplanetary disks from which planets form and which inform the nature of our own system’s evolutionary history. The Square Kilometer Array (SKA), due to become operational next decade in South Africa and Australia, will do the same at longer, radio wavelengths. Its enormous collecting area (quite literally a square kilometer) and moveable unit telescopes (as with ALMA) will create a synthetic “filled” aperture on a par with a solid telescope of similar dimensions, with consequently exquisite resolution.

Image: ALMA antennae on the Chajnantor Plateau. Credit: ALMA (ESO/NAOJ/NRAO), O. Dessibourg.

Submillimetre astronomy is often referred to as molecular imaging: the wavelengths used are perfect, given their low energy and the related cool temperatures, for picking up chemical molecules in the interstellar medium. These observations have been instrumental in showing the ubiquity of many of the materials needed for life, like amino acids, the building blocks of proteins, and PAHs (polycyclic aromatic hydrocarbons), which are key constituents of cell membranes as well as of the long-chain amines in the goo on Titan. ALMA has identified hydrogen cyanide and methyl cyanide, poisonous compounds on Earth but critical progenitor molecules for the protein-building, life-building amino acids. No one has discovered life off Earth to date, but the commonest elements created by stars, carbon, hydrogen, oxygen and nitrogen (CHON), are amongst the main constituents of life and, in conjunction with molecules like amino acids and PAHs, suggest that the key components of life are ubiquitous.

Even greater accuracy can be achieved by combining all the radio telescopes dotted across the Earth, and even in space, to create a “diluted” aperture (not completely filled in, but with the equivalent width of its most remote elements) wider than the Earth itself. It isn’t difficult to guess the extent of such a device’s resolution! The SKA and ALMA, as with any new and sophisticated astronomical hardware, have a planned-out mission itinerary, but given their extreme capabilities they also have the ability to make unexpected and exciting discoveries.

Returning to shorter wavelengths, ground-based telescopes are being equipped with increasingly sophisticated adaptive optics (AO), which, in conjunction with high-altitude sites, allow them to image with increasing detail at wavelengths from optical to mid-infrared and to bring to bear their large light-gathering capacity without the huge expense of launch to, and maintenance in, space. This will culminate in the completion of three Extremely Large Telescopes (ELTs) between 2020 and 2024: work is underway on 25-40 m apertures that, in combination with AO, will capture sufficient light to discover, image and characterise planets down even to Earth size.

Space-Based Observation

In the shorter term, hot on the heels of ESPRESSO and reliant on its discoveries is TESS, a small satellite with multiple telescope/cameras and sensors, due for launch into a specially designed, highly elliptical orbit to maximise imaging of exoplanet transits around “bright” nearby stars, largely M dwarfs. These small stars’ planets orbit close in, and their transits eclipse a larger portion of the star, creating so-called “deep transits” on a regular basis (including in the habitable zones, which lie within 0.25 AU even for the largest M-class stars). These transits can be added together, or “binned”, to produce a potent signal.
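
The appeal of “deep transits” is simple arithmetic: the fractional dimming scales as (Rp/Rs)², and stacking N transits improves the signal-to-noise ratio by roughly √N. A quick sketch in Python (the M-dwarf radius and the unit noise level are illustrative assumptions, not TESS specifics):

```python
import math

def transit_depth(r_planet_km, r_star_km):
    """Fractional dimming during a transit: (Rp / Rs)^2."""
    return (r_planet_km / r_star_km) ** 2

def binned_snr(single_transit_snr, n_transits):
    """Stacking N transits: signal adds linearly, noise only as sqrt(N)."""
    return single_transit_snr * math.sqrt(n_transits)

R_EARTH = 6371.0          # km
R_SUN = 696_000.0         # km
R_MDWARF = 0.2 * R_SUN    # a small M dwarf, roughly

# An Earth-sized planet dims a Sun-like star by ~0.008%, but a small
# M dwarf by ~0.2%: a 25x "deeper" transit for the same planet.
print(transit_depth(R_EARTH, R_SUN))     # ~8.4e-5
print(transit_depth(R_EARTH, R_MDWARF))  # ~2.1e-3

# Binning 16 transits together quadruples the signal-to-noise ratio.
print(binned_snr(1.0, 16))               # 4.0
```

The 25x depth advantage, combined with the short orbital periods of M-dwarf planets (many transits to bin), is exactly why TESS concentrates on these stars.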

Better still, TESS will work in concert with the James Webb Space Telescope following its launch a year later. JWST is largely an infrared telescope designed to look at extragalactic objects and cosmological questions. Although, sadly, not an exoplanet imager, it has been optimised to analyse exoplanets spectroscopically, and with a 6.5m aperture it should do very well. It will image a transit and analyse the small amount of starlight passing through the outer atmosphere of the transiting planet in order to characterise it: a “transmission” spectrum. Alternatively, since transit times can be calculated precisely, a spectrum of the combined planet and star light can be taken when they sit side by side; subtracting the spectrum of the star alone, recorded while the planet is eclipsed behind it, yields a net planetary spectrum.
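
The subtraction described above can be sketched in a few lines (the “wavelength bins” and flux values are invented toy numbers; real pipelines operate on calibrated spectra with careful error propagation):

```python
# Toy secondary-eclipse spectroscopy: the spectrum taken while the planet
# hides behind the star (star alone) is subtracted from the spectrum taken
# just before or after the eclipse (star + planet), leaving the planet's
# own contribution. All values below are invented for illustration.

star_plus_planet = [1.00105, 1.00212, 1.00158]  # out of eclipse
star_only        = [1.00000, 1.00000, 1.00000]  # in eclipse

planet_spectrum = [sp - s for sp, s in zip(star_plus_planet, star_only)]
print(planet_spectrum)  # the planet's tiny net emission, bin by bin
```

Note how small the residuals are relative to the stellar flux, roughly parts in a thousand at best, which is why aperture (photon count) matters so much for this technique.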

Image: The principal goal of the TESS mission is to detect small planets with bright host stars in the solar neighborhood, so that detailed characterizations of the planets and their atmospheres can be performed. Credit: MIT.

It’s likely that given the huge workload of JWST, its exoplanetary characterisation work will be limited to premier targets. Smaller mission funding pools will be utilised to produce small but dedicated exoplanet transit spectroscopy telescopes to characterise larger exoplanets, as suggested in the previously unsuccessful ESA and NASA concepts EChO and FINESSE.

TESS itself will look at half a billion stars across the whole sky over a two-year period and, if it holds together, should get a much longer mission extension. Parts of its imaging field around the ecliptic poles are designed to overlap with the JWST’s area of operation to maximise their synergy. The longest periods of “staring” also occur there, to allow analysis of planets with the longest orbital periods in the habitable zones of the largest possible (most Sun-like) stars. Ordinarily three proven transits are required for proof of discovery, but given the nearby target stars, any discoveries can be followed up by ESPRESSO for confirmation, reducing the required transits to just two. There is growing optimism that with JWST, TESS might make the ultimate discovery!

Launched in a similar timeframe to TESS, the small ESA telescope CHEOPS will look for transits predicted by RV discoveries, allowing accurate mass and density calculations for up to 500 planets of gas-giant to mini-Neptune size, adding to the growing list of planets characterised this way and helping build up a picture of planetary nature and distribution. At present, planets in this category have been grouped by Marcy et al., and the data suggest that Earth-like planets (rocky with a thin atmosphere) exist up to about 1.6 Earth radii or 5 Earth masses, with larger objects more likely to have a thick atmosphere and be more akin to “mini-Neptunes”. The larger the sample, the greater the accuracy; hence the wider importance of CHEOPS, TESS and ESPRESSO in characterisation, which will also inform efficient future imaging searches.

Image: CHEOPS – CHaracterising ExOPlanet Satellite – is the first mission dedicated to searching for exoplanetary transits by performing ultra-high precision photometry on bright stars already known to host planets. Credit: ESA.

Into the Next Decade

Crunch time arrives in the 2020s. The beginning of the decade brings the routine Decadal Survey that lays out NASA’s plans and priorities for the following ten years. It will determine the priority that exoplanets (and Solar System planets) are given. JWST has left a huge hole in the budget that must be balanced, at a time when manned space flight, never cheap, is reappearing after its post-Space Shuttle hiatus. There is room for plenty of optimism, though.

Unlike the budget of my dear old National Health Service here in Britain, NASA’s year-on-year funding can be stored for use at a later date. Any ATLAST (Advanced Technology Large Aperture Space Telescope), Terrestrial Planet Finder telescope or High Definition Space Telescope in the proportions necessary for detailed exoplanetary characterisation will cost upward of $15 billion. Huge, but not insurmountable if funds are hoarded over the 15 years ahead of a 2035 launch. Imagine a 16m monster like that! Quite a supposition.

Image: The Advanced Technology Large Aperture Space Telescope (ATLAST) is a NASA strategic mission concept study for the next generation of UVOIR [near-UV to far-infrared] space observatory. ATLAST will have a primary mirror diameter in the 8m to 16m range that will allow us to perform some of the most challenging observations to answer some of our most compelling astrophysical questions. Credit: Space Telescope Science Institute.

What is more definite is the Wide-Field Infrared Survey Telescope, WFIRST, due for launch circa 2024, maybe a bit earlier. Originally planned as a dark energy mission, it has grown enormously thanks to the NRO’s donation of a Hubble-sized, high-quality wide-field mirror. At the same time, Princeton’s David Spergel made a compelling and successful case for the inclusion of an internal starlight occulter, or coronagraph, at about half a billion dollars extra (much of which was covered by partner space agencies). This would be a “technology demonstrator” instrument. Large funds were released to advance this largely theoretical technology to a usable level through experimental “Probe” concepts, which also developed an external occulter technology.

The coronagraph has already massively exceeded all expectations and has at least five more years of development time before telescope construction begins. That’s a lot of useful time. Its aim is to allow direct imaging of Jupiter-to-Neptune-mass planets about as far out from their star as Mars is from the Sun. The coronagraph blocks out the much brighter starlight that swamps the feeble planet light. Already the technology has improved to the point where a few super-Earths, or even smaller planets, might be visualised. Sadly not quite in the habitable zone, but the wider orbits will allow more accurate categorisation of the planetary cohort, telling us what to look for in the future. The final orbit of this telescope is yet to be decided, and is crucial.

Given the long gap until ATLAST (envisioned as a flagship mission of the 2025-2035 period) and the finite life expectancy of Hubble and JWST, WFIRST is obviously intended to bridge the gap, and thus will need servicing like Hubble. To this end it was felt necessary to keep it near the Earth (for convenience of data download, too), but rapid advances in robotic servicing mean it could now be stationed as far afield as the ideal viewing spot, the Sun/Earth Lagrange point L2. It could possibly even be moved nearer the Moon for servicing. This locale would allow the addition of an external occulter if funding were available; that technology allows imaging closer to the star than the coronagraph, even into the habitable zone. Whether WFIRST ends up with both internal and external occulters remains to be seen, and will likely be decided by the 2020 Decadal Survey according to the political and financial climate of the day. Meantime, it’s great to know that such a useful planet hunter will be operational for a long time post-2024.

WFIRST does other useful exoplanetary work. It too will discover exoplanets by the transit method, and also by the often forgotten microlensing principle. This involves a nearby star sitting in front of a more distant star and effectively focusing the background star’s light via gravity, as described by Einstein in his work on relativity. Exoplanets orbiting the nearer star stand out during this process and can have their radius and mass determined accurately. As this method works for planets further out from their stars, it provides a way of populating the full range of planetary orbits and characteristics, which, as we have seen, is critical to establishing the nature of alien star system architecture. The downside of microlensing is that it is a one-off event and can’t be revisited. Direct imaging, transits and microlensing together make WFIRST one potent exoplanet observatory. What more can it do? The answer is a lot.
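
The scale of a microlensing event is set by the Einstein ring radius, θ_E = sqrt(4GM/c² × (D_s − D_l)/(D_s·D_l)), where D_l and D_s are the distances to lens and source. A rough Python sketch, with an illustrative solar-mass lens halfway to a source in the Galactic bulge (the masses and distances are assumptions for illustration only):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
PC = 3.086e16      # metres per parsec

def einstein_radius_mas(lens_mass_kg, d_lens_pc, d_source_pc):
    """Angular Einstein radius of a point lens, in milliarcseconds."""
    d_l = d_lens_pc * PC
    d_s = d_source_pc * PC
    theta_rad = math.sqrt(4 * G * lens_mass_kg / C**2 * (d_s - d_l) / (d_s * d_l))
    return theta_rad * (180 / math.pi) * 3600 * 1000  # radians -> mas

# A solar-mass lens star 4 kpc away, source star at 8 kpc: the ring is
# about a milliarcsecond across, far too small to resolve directly,
# which is why microlensing is observed as a brightening, not an image.
print(einstein_radius_mas(M_SUN, 4000, 8000))  # ~1 mas
```

A planet orbiting the lens star perturbs this tiny ring, producing the short-lived spike in the light curve that WFIRST will look for.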

Image: The Wide-Field Infrared Survey Telescope (WFIRST) is a NASA observatory designed to perform wide-field imaging and slitless spectroscopic surveys of the near infrared (NIR) sky for the community. The current Astrophysics Focused Telescope Assets (AFTA) design of the mission makes use of an existing 2.4m telescope to enhance sensitivity and imaging performance. Credit: NASA.

Consider astrometry. Like asteroseismology it is a little-known science but a rapidly expanding one, and like asteroseismology it is the shape of things to come. Astrometry measures the sideways movement of a star due to the gravitational pull of orbiting bodies. It is somewhat similar to the RV method, but better in that it accurately determines the mass of planets and also their location, with pinpoint accuracy. Meanwhile, the ESA telescope Gaia is currently in the process of staring for extended periods at over a billion Milky Way stars in order to determine their positions to within 1% error. In the process, as with Kepler and PLATO, it will carry out detailed asteroseismology, which will advance this critical field even further. Know the star and you know the planet.

Astrometry will give Gaia the added benefit of single-handedly and accurately positioning several thousand gas giant planets. Combining its results with WFIRST, however, should allow accurate positioning and mass/radius determination for nearby planets down to Earth size, including planets in the habitable zone, helping develop an effective search strategy for WFIRST’s direct imaging technology, whether by internal occulter, external occulter or both. Critically, astrometry helps discover and characterise planets around the M dwarfs that make up the large part of the stellar neighbourhood. As the habitable zone for even the largest of these stars (and for many of the next class up, K-class stars) lies inside 0.4 AU, it is unlikely that even an advanced internal or external occulting device would allow direct imaging so close to the star, so any orbiting planets could only be classified by astrometry, and by transit spectroscopy if they transit the star.
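
The astrometric signal is easy to estimate: the star’s wobble about the system barycentre, seen from a distance d, is roughly α ≈ (M_planet/M_star) × a/d, with a in AU and d in parsecs giving α in arcseconds. A sketch with illustrative values (not Gaia or WFIRST specifications):

```python
def astrometric_signal_uas(m_planet_mjup, m_star_msun, a_au, dist_pc):
    """Rough astrometric wobble amplitude in microarcseconds.

    alpha [arcsec] ~ (Mp / Ms) * a[AU] / d[pc], taking Jupiter
    as ~1/1047 of a solar mass.
    """
    mjup_in_msun = 1.0 / 1047.0
    alpha_arcsec = (m_planet_mjup * mjup_in_msun / m_star_msun) * a_au / dist_pc
    return alpha_arcsec * 1e6

# Jupiter around a Sun-like star at 10 pc: a wobble of ~500 microarcseconds,
# the kind of gas-giant signal well suited to astrometry.
print(astrometric_signal_uas(1.0, 1.0, 5.2, 10.0))

# An Earth-mass planet (~0.00315 Mjup) at 1 AU, same star: well under a
# microarcsecond, which is why Earth analogues are so much harder.
print(astrometric_signal_uas(0.00315, 1.0, 1.0, 10.0))
```

Note that, unlike RV, the signal grows with orbital distance a, which is why astrometry complements the transit and RV methods that favour close-in planets.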

Image: Gaia is an ambitious mission to chart a three-dimensional map of our Galaxy, the Milky Way, in the process revealing the composition, formation and evolution of the Galaxy. Gaia will provide unprecedented positional and radial velocity measurements with the accuracies needed to produce a stereoscopic and kinematic census of about one billion stars in our Galaxy and throughout the Local Group. This amounts to about 1 per cent of the Galactic stellar population. Credit: ESA.

Generally, the closer a planet is to its star, the greater the likelihood of a transit, and the nature of planetary formation around M dwarfs also leads to protoplanetary disks oriented so as to create transiting planets. This, ironically, was the approach proposed for the now defunct TPF-I. If Gaia goes beyond its initial 5-year mission, WFIRST should be able to find and characterise up to 20,000 Jupiter- or Neptune-sized planets! All that for just $2.5 billion means that WFIRST will likely be one of the greatest observatories, anywhere, of all time.

PLATO is a cross between Kepler and TESS. Like Kepler, it is designed to find Earth-sized planets in the habitable zone of Sun-like stars, and as with TESS, these stars will be close enough to characterise and confirm from ground-based telescopes and spectroscopes. PLATO, too, will carry out extensive asteroseismology, which along with Kepler and Gaia will give unprecedented knowledge of most star types by 2030.

Image: PLAnetary Transits and Oscillations of stars (PLATO) is the third medium-class mission in ESA’s Cosmic Vision programme. Its objective is to find and study a large number of extrasolar planetary systems, with emphasis on the properties of terrestrial planets in the habitable zone around solar-like stars. PLATO has also been designed to investigate seismic activity in stars, enabling the precise characterisation of the planet host star, including its age. Credit: ESA.

Meanwhile, Hubble has been given a clean bill of health until at least 2020. The aim is for as much overlap with JWST as possible, bridging the gap to WFIRST. Spitzer will likely be phased out once JWST is up. WFIRST, if serviced and upgraded regularly like Hubble, could also last twenty years or more, certainly until the ATLAST telescope is operational and well after the Extremely Large Telescopes are fully functional on the ground. Given the huge cost of building, launching and maintaining space telescopes (not least the $8.5 billion for JWST), NASA has now made it clear that future designs will be multi-purpose and modular for ease of servicing and upgrade.

Imaging an Exoplanet

In terms of resolving and imaging an exoplanet, we move into the realm of science fiction, for now. To produce even a ten-pixel spatial image of a nearby planet would require a space telescope with an aperture equivalent to 200 miles. Clearly impossible for one telescope, but a thirty-minute exposure employing 150 mirrors of 3m diameter, with varying separations of up to 150 km, linked together as a “hypertelescope”, would be sufficient to act as an ‘Exo-Earth Imager’ able to detect several-pixel “green spots” similar to the Amazon basin on a planet within ten light years. The short exposure time is a necessity for spatial imaging, to avoid blurring caused by clouds or planetary rotation. This is why it may be important to give WFIRST an external occulter: not just for potent imaging, but to develop the “formation flying” necessary to link two such devices together. A small step, but a necessary one on the way to direct spatial imaging.
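
The aperture figure quoted above follows from the diffraction limit, θ ≈ 1.22 λ/D: to resolve N pixels across a planetary disk you need a baseline D large enough that 1.22 λ/D is N times smaller than the planet’s angular size. A back-of-envelope sketch (wavelength, distance and pixel count are illustrative; the required baseline scales linearly with each):

```python
LY = 9.461e15        # metres per light year
R_EARTH_M = 6.371e6  # Earth radius, metres

def baseline_for_pixels(n_pixels, dist_ly, target_diameter_m, wavelength_m=550e-9):
    """Baseline D such that the diffraction limit 1.22*lambda/D resolves
    the target into n_pixels elements across its diameter."""
    angular_size = target_diameter_m / (dist_ly * LY)  # small-angle approximation
    needed_resolution = angular_size / n_pixels
    return 1.22 * wavelength_m / needed_resolution

# Ten pixels across an Earth-sized planet at ten light years, visible light:
d = baseline_for_pixels(10, 10, 2 * R_EARTH_M)
print(d / 1000, "km")  # roughly 50 km
```

Even this modest target demands a baseline of tens of kilometres; add pixels, move to longer wavelengths or reach for more distant stars, and the requirement climbs rapidly toward the 150 km hypertelescope separations and the 200-mile scale mentioned above.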

Meanwhile, everything we learn from direct imaging will come via increasingly sensitive spectroscopy of O2, O3, CH4 and H2O (liquid in the habitable zone, as determined by astrometry), and of photosynthetic pigments like the chlorophyll “red edge”, all biosignatures drawn from “point sources”. The bigger the telescope, the better the signal-to-noise ratio (SNR) and the better the spectroscopic resolution. WFIRST has a three-dimensional “integral field spectrograph” with a maximum resolution of 70. When you consider that high resolution runs to the hundreds of thousands, it shows we are only just scratching the surface. Apart from that, and until spatial imaging arrives (if ever), SETI, infrared heat emission or spacecraft exhausts might be the only ways to separate intelligent life from life per se.

That said, things are going to happen that would have been inconceivable even in 1995. Twenty thousand plus exoplanets by 2030, hundreds characterised. Exciting if crude and controversial spectroscopic findings, and just five years perhaps from launching a 16m segmented telescope into an orbit 1.5 million kms from Earth where it will be regularly serviced by astronauts and robots practising for Mars missions.

Missions and Their Development Paths

Closer to home, we have the ESA JUICE (Jupiter Icy Moons Explorer) mission to Jupiter, with flybys of Europa and Callisto and a Ganymede orbiter. NASA will hopefully get its act together for a cost-effective Europa Clipper, and we may yet find signs of life closer to home, though my money is on biosignatures coming first from an exoplanet, possibly as early as TESS but more certainly with ATLAST. The key for me is that life (as we know it) is made from elements and molecules that are common.

This is why infrared astronomy is so important: infrared light travels long distances and isn’t easily absorbed, and when it is, it soon gets re-emitted. The missions to icy bodies in our system, like Rosetta, Dawn, OSIRIS-REx and New Horizons, are as critical to life discovery as TESS, WFIRST or even ATLAST, as they illustrate the ubiquity of all the necessary ingredients of life (including water) as well as the violent formation of our Solar System. In the absence of “flagship” missions, the highest-funded NASA missions, which were suspended after the JWST overspend, most planetary-style missions are now funded at smaller levels, such as Explorer (various cash levels up to $220 million plus a launch), Discovery ($500 million plus a launch) and New Frontiers ($1 billion plus a launch). NASA even prescribes a list of launch vehicles, and savings made by fitting a mission into a smaller launcher can be fed back into the mission itself: up to $16 million for a Discovery concept, for example. Applications are invited according to how often each class flies, with the cheapest, Explorer, launching every three years.

Image: JUICE (JUpiter ICy moons Explorer) is the first large-class mission in ESA’s Cosmic Vision 2015-2025 programme. Planned for launch in 2022 and arrival at Jupiter in 2030, it will spend at least three years making detailed observations of the giant gaseous planet Jupiter and three of its largest moons, Ganymede, Callisto and Europa. Credit: ESA.

The limited funding has had the advantage, however, of inspiring great innovation and hugely successful mission concepts: TESS at just $200 million, Kepler at about $700 million, and Juno, a $1 billion New Frontiers mission currently en route to Jupiter. Without going into detail, mission costs are made up of numerous elements, with hardware like telescopes and spacecraft the biggest contributor, but missions also require constant engineering and operations support throughout their lifetimes, which builds up and has to be factored into the initial budget.

Juno hasn’t reached Jupiter yet, but its science and engineering teams are already at work keeping it operating. So although budgets of hundreds of millions sound like a lot, they are in fact fairly small, especially compared to “flagship” missions like Cassini, Galileo and the Voyagers, which in current money would run into many billions of dollars. This leaves a big hole in exploration, preventing follow-up intermediate telescopes and interplanetary missions. The lack of any mission to Uranus or Neptune is a classic example, with no plan for even getting close since Voyager 2. Even a heavily cut-back Europa Clipper is still estimated at $2 billion for a 3.5 year multiple-flyby mission (cheaper than an orbiter). The heavy contribution of running costs is bizarrely demonstrated by the fact that a Europa lander was once considered over an orbiter simply because, in that hostile environment, it wouldn’t last long and would therefore cost less to operate! The next round of New Frontiers bids is just starting for a 2021 launch, with just one outer Solar System concept: a Saturn atmospheric probe and relay spacecraft, expected to transmit for 50 minutes after an eight year journey, at a cost of $1 billion.

All of this illustrates the problem mission planners face and the huge cost of such missions. Europa Clipper is actually a very good value mission and might just fly. In conjunction with the ESA JUICE mission, Europa Clipper will drive forward our knowledge of Jupiter’s inner moons, confirming or disproving the idea of sub-surface oceans beyond all doubt and maybe finding some interesting things leaking out from the depths! ESA faces the same situation with JUICE, funded through its large or “L” programme scheme with a budget of just over a billion Euros, or about $1.5 billion. Its lesser funds have forced even greater innovation than at NASA, and the low cost of JUICE is due to innovative lightweight and cheap materials like silicon carbide, for a mission concept very similar to Europa Clipper.

Returning to Uranus and Neptune, these planets always appear in both NASA and ESA discovery “road maps”, but always with other things further ahead which, with limited funds, ultimately take precedence. There is constant pressure for visible results, a need well served by the ESA Rosetta mission and, we can assume, by Dawn at Ceres as well. Out of necessity, such mission concepts tend to be favoured over a mission to Uranus that with conventional rockets would take as long as 13 years.

Remember that throughout that time the spacecraft needs looking after remotely, from both an engineering and an operations perspective, requiring near full-time staff whose costs eat into a limited budget of $1 billion, the New Frontiers maximum, unless alternative funding sources or partnerships are used. This is one of the reasons I welcome the Falcon Heavy launcher so much. At about $100 million it is much cheaper than any comparable launcher and can lift bigger loads off Earth into orbit. What isn’t as well known is that it can send missions direct to their targets rather than needing gravitational assists from Earth, Venus and Jupiter, as with previous outer Solar System missions since Voyager.

Falcon Heavy could lift a payload of about 5 tonnes to Uranus in well under ten years, and reducing the mission length would of course lower its cost, allowing more “mission proper”, perhaps even fitting within a New Frontiers cost envelope. ESA was certainly able to produce a stripped-down “straw man” dual Uranus/Neptune concept, ODINUS, within an L-class budget. New ion propulsion systems like NEXT (or its descendants) require far less propellant than conventional chemical rockets and could ultimately be used to slow a Uranus probe into orbit without taking too much mass away from the spacecraft and its critical instrument suite.

Image: Neptune, a compelling target but one without a current mission in the pipeline. Credit: NASA.

That just leaves one big obstacle: power. So far out from the Sun, even huge versions of today’s efficient solar cells would be inadequate to power even basic spacecraft functions, never mind complex scientific equipment. Traditionally power comes from converting the heat of radioactive decay of the isotope plutonium-238 (not to be confused with its deadly bomb-making cousin plutonium-239) to electricity in a “Radioisotope Thermoelectric Generator”, or RTG. Cassini uses such a device, as does Voyager 2, and with a half-life of nearly 88 years the isotope is ideally suited to such extended missions.
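A few lines make the long-mission suitability concrete. This is a minimal sketch using Pu-238’s published 87.7-year half-life; the 470 W starting figure is my own illustrative round number (roughly Voyager-class at launch), and real generators also lose thermocouple efficiency over time, which the decay law alone ignores:

```python
# RTG electrical power decay from Pu-238 radioactive decay alone.
# 470 W at launch is an assumed illustrative figure, not a quoted spec.
HALF_LIFE_YEARS = 87.7

def rtg_power(p0_watts: float, years: float) -> float:
    """Power remaining after `years`, ignoring thermocouple degradation."""
    return p0_watts * 0.5 ** (years / HALF_LIFE_YEARS)

for t in (0, 10, 40, 80):
    print(f"after {t:2d} years: {rtg_power(470.0, t):.0f} W")
```

Even after a Voyager-length mission of 40 years, roughly three quarters of the original power remains, which is why these devices keep decades-old spacecraft alive.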

This isotope is a byproduct of nuclear weapons production, so post-Cold War it is in increasingly short supply, and what is available is earmarked for other projects well in advance. This situation faces every mission mooted to go beyond Jupiter, like an Enceladus or Titan orbiter/lander, and is a real deal-breaker that needs addressing. Uranus and Neptune in particular need to be explored in detail, not least because the exoplanet categorisation described above shows that planets of their type, in varying sizes, are the most ubiquitous in the galaxy, and we have two on our doorstep.

It’s impossible to talk observational astronomy and not mention Mars: undoubtedly the most popular mission target, given its proximity, its solid surface on which to land, and the chance, slim but real, of finding life, if not now then at some time in the distant past. The next tranche of missions starts in 2016 with an ESA orbiter and stationary lander and a NASA lander, InSight. Both landers are intended to last two Earth years and prepare the way for rovers.

The NASA mission was part of the Discovery programme and was chosen just ahead of the TiME concept, Titan Mare Explorer – a floating lander that would analyse the Titanian methane lakes whilst Earth was above the horizon so it could transmit direct to Earth without the need for an expensive orbiter. What a mission that would have been, and for just half a billion dollars. That chance is now gone for twenty years or so, and with it any hope of a near-term Saturn mission after Cassini, given the expense of a more complex mission profile to either Titan or Enceladus.

Meanwhile, InSight will dig some holes, do more analysis, and help prepare the way for the Mars 2020 rover, a beefed-up version of Curiosity with more sophisticated instruments, including drills, that will look specifically for life rather than just water, as Curiosity does. Crucially, it will use one of the few remaining RTGs rather than solar panels like those on Opportunity (still going after running a marathon in ten years), thereby removing the possibility of using the device for outer Solar System exploration. Plutonium-238 production has begun again at Oak Ridge, but yearly output is tiny and it will take years to produce the kilogram quantities needed to power space missions. ESA is also sending a rover to Mars, in 2018, funding and partner Roscosmos permitting. It will be solar powered and launched by a Russian Proton rocket, whose success rate isn’t the best.

For all the potential deficiencies in exploration, what has been achieved over the last twenty years is immense. What is planned over the next twenty is not fully clear yet, but it is likely to culminate in a huge multi-purpose space telescope that will pull all the previous work together and, in concert with other space and ground telescopes, and hopefully multiple interplanetary missions, discover signs of life, if not in the present then certainly in the past. I think ultimately we will find that life is common, but much as I would like to disagree with Fermi and “Rare Earth”, I think finding intelligent life is going to be a whole lot harder. As the Hitchhiker’s Guide to the Galaxy reminds us, space is a very big place, both in size and in time.

Considering we are living in times of austerity, though, I think what we have done so far isn’t at all bad!

Enter ‘Galactic Archaeology’

by Paul Gilster on April 9, 2015

I’ve used the term ‘interstellar archaeology’ enough for readers to know that I’m talking about new forms of SETI that look for technological civilizations through their artifacts, as perhaps discoverable in astronomical data. But there is another kind of star-based archaeology that is specifically invoked by the scientists behind GALAH, as becomes visible when you unpack the acronym — Galactic Archaeology with HERMES. A new $13 million instrument on the Anglo-Australian Telescope at Siding Spring Observatory, HERMES is a high resolution spectrograph that is about to be put to work.

Image: I can’t resist running this beautiful 1899 photograph of M31, then known as the Great Andromeda Nebula, when talking about our evolving conception of how galaxies form. Credit: Isaac Roberts (d. 1904), A Selection of Photographs of Stars, Star-clusters and Nebulae, Volume II, The Universal Press, London, 1899. Via Wikimedia Commons.

And what an instrument HERMES is, capable of providing spectra in four passbands for 392 stars simultaneously over a two degree field of view. What the project’s leaders intend is to survey one million stars by way of exploring how the Milky Way formed and evolved. The idea is to uncover stellar histories through the study of their chemistry, as Joss Bland-Hawthorn (University of Sydney) explains:

“Stars formed very early in our galaxy only have a small amount of heavy elements such as iron, titanium and nickel. Stars formed more recently have a greater proportion because they have recycled elements from other stars. We reach back to capture this chemical state – by analysing the mixture of gases from which the star formed. You could think of it as its chemical fingerprint – or a type of stellar DNA from which we can unravel the construction of the Milky Way and other galaxies.”

Determining the histories of these stars with reference to 29 chemical signatures as well as stellar temperatures, mass and velocity should help the researchers create a map of their movements over time. This should be a fascinating process, for views of galaxy formation have changed fundamentally since the days when Allan Sandage and colleagues proposed (in 1962) that a protogalactic gas cloud that settled into a disk could explain galaxies like the Milky Way.

That concept suggested that the oldest stars in the galaxy were formed from gas that was being drawn toward the galactic center, collapsing from the halo to the plane, and in Sandage’s view, this collapse was relatively rapid (on the order of 100 million years), with the initial contraction beginning roughly ten billion years ago. Later we begin to see a different model developing, one in which the galaxy formed through the agglomeration of smaller elements like satellite galaxies. Both these processes are now believed to play a role, with infalling satellite systems affecting not just the galactic halo but also the disk and bulge.

Image: Structure of the Milky Way, showing the inner and outer halo. Credit: NASA, ESA, and A. Feild (STScI).

Galactic archaeology is all about detecting the debris of these components, making it possible to reconstruct a plausible view of the proto-galaxy. How the galactic disk and bulge were built up is the focus, determined by using what the researchers call ‘the stellar relics of ancient in situ star formation and accretion events…’ The authors explain the challenges they face:

… unraveling the history of disc formation is likely to be challenging as much of the dynamical information such as the integrals of motion is lost due to heating. We need to examine the detailed chemical abundance patterns in the disc components to reconstruct [the] substructure of the protogalactic disc. Pioneering studies on the chemodynamical evolution of the Galactic disc by Edvardsson et al. (1993) followed by many other such works (e.g. Reddy et al. 2003; Bensby, Feltzing & Oey 2014), show how trends in various chemical elements can be used to resolve disc structure and obtain information on the formation and evolution of the Galactic disc, e.g. the abundances of thick disc stars relative to the thin disc. The effort to detect relics of ancient star formation and the progenitors of accretion events will require gathering kinematic and chemical composition information for vast numbers of Galactic field stars.

Two days ago we looked briefly at globular clusters and speculated on what the view from a planetary surface deep inside one of these clusters might look like. The globular clusters, part of the galaxy’s halo, contain some of its oldest stars, and the entire halo is poor in metals. Going into the GALAH survey, the researchers believe that a large fraction of the halo stars are remnants of early satellite galaxies that evolved independently before being acquired by the Milky Way, a process that seems to be continuing as we discover more dwarf satellites and so-called ‘stellar streams,’ associations of stars that have been disrupted by tidal forces.

Seventy astronomers from seventeen institutions in eight countries are involved in GALAH, which is led by Bland-Hawthorn, Gayandhi De Silva (University of Sydney) and Ken Freeman (Australian National University). Their work should give us much new information not just about the halo and globular clusters but the interactions of stars throughout the disk and central bulge. The paper on the project is De Silva et al., “The GALAH survey: scientific motivation,” Monthly Notices of the Royal Astronomical Society Vol. 449, Issue 3 (2015), pp. 2604-2617 (abstract). A University of Sydney news release is also available.

And in case you’re interested, the classic paper by Sandage et al. is “Evidence from the motions of old stars that the Galaxy collapsed,” Astrophysical Journal, Vol. 136 (1962), p. 748 (abstract).

Ganymede Bulge: Evidence for Its Ocean?

by Paul Gilster on April 8, 2015

What to make of the latest news about Ganymede, which seems to have a bulge of considerable size on its equator? William McKinnon (Washington University, St. Louis) and Paul Schenk (Lunar and Planetary Institute) have been examining old images of the Jovian moon taken by the Voyager spacecraft back in the 1970s, along with later imagery from the Galileo mission, in the process of global mapping. The duo discovered the striking feature that Schenk described on March 20 at the 46th Lunar and Planetary Science Conference in Texas. Says McKinnon:

“We were basically very surprised. It’s like looking at old art or an old sculpture. We looked at old images of Ganymede taken by the Voyager spacecraft in the 1970s that had been completely overlooked, an enormous ice plateau, hundreds of miles across and a couple miles high… It’s like somebody came to you and said, ‘I have found a thousand mile wide plateau in Australia that was six miles high.’ You’d probably think they were out of their minds or spent too much time in the Outback.”

The bulge is about 600 kilometers across and 3 kilometers tall, and the researchers believe that it may be an indication of the moon’s sub-surface ocean. The going theory is that the bulge emerged at one of Ganymede’s poles and slid along the top of the ocean in a motion called true polar wander (TPW). The find sets us up for future mapping of Ganymede, for the polar wander theory leads McKinnon and Schenk to believe that a similar bulge should exist opposite this one. If current mission planning holds, we may learn the answer early in the 2030s.

Image: Jupiter’s moon Ganymede probably has a sub-surface ocean, as recent work suggests. Credit: NASA/JPL.

If it takes a global sub-surface ocean to produce true polar wander, then we should expect the same thing on Europa, and indeed, the evidence points to the phenomenon, though here the signs of TPW are much clearer presumably because the ice crust is thinner. Strain produced by the shell’s rotation forces concentric grooves — the researchers call them ‘crop circles’ — to emerge. McKinnon and Schenk found that the ‘two incomplete sets of concentric arcuate trough-like depressions’ previously identified on Europa are offset in a pattern that fits the stresses of true polar wander. Moreover, additional features show evidence of TPW, including fissure-like fractures and smaller subsidiary fractures that seem to be associated with the concentric features and their patterning on the surface. “The TPW deformation pattern on Europa,” the authors add, “is thus more complex than the original features reported.” Here again it will take future mapping to provide the higher resolution needed to explore these issues.

The bulge now identified on Ganymede indicates that at some point in the past, the moon’s surface ice rotated, with what had been thicker polar shell material now being found at the equator. McKinnon’s surprise at the finding is understandable given that there is no other surface sign of true polar wander on Ganymede, as the LPSC proceedings paper makes clear:

Extensive search of the entire Voyager and Galileo image library, including all terminator image sequences and all high resolution images reveals no trace of any of the features currently associated with TPW on Europa. No arcuate troughs, no irregular depressions, no raised plateaus, no crosscutting en-echelon fractures. This may be consistent with a thicker ice shell on Ganymede.

That thicker ice shell would have made true polar wander less likely, but the phenomenon could have occurred in the past over a thinner surface crust. The paper goes on:

A thicker ice shell (and lithosphere) will not deform as easily, and will resist polar wander in the first place; a thinner icy shell, more plausible in Ganymede’s past, may have undergone polar wander, but the resultant stresses will be lower by a factor of 3 compared with those on Europa and may not have created such a distinctive tectonic signature. We are engaged in a global search for other manifestations of TPW on Ganymede and will report on our findings.

The question, then, is how a bulge the size of the one the researchers have identified on Ganymede can still be in place. In an article on this work in National Geographic (Bizarre Bulge Found on Ganymede, Solar System’s Largest Moon), McKinnon had this to say:

“Any ideas about how you support a three-kilometer-high [two-mile] ice bulge, hundreds of kilometers wide, over the long term on Ganymede are welcome… We’ve never seen anything like it before; we don’t know what it is.”

As we’ve seen recently, Ganymede’s sub-surface ocean has been confirmed by Joachim Saur and colleagues (University of Cologne) through the study of auroral activity (see Evidence Mounts for Ganymede’s Ocean). Now we have further indication that crustal slippage has occurred on the moon, all but requiring an ocean separating surface materials from the deeper core. We can expect to learn much more, including whether or not there is a corresponding bulge opposite to this one, when the Jupiter Icy Moons Explorer mission arrives in 2030. If the proposed timeline is met, JUICE will begin orbital operations around Ganymede in 2033.

A St. Louis Public Radio news release on McKinnon and Schenk’s work is also available.

In Search of Colliding Stars

by Paul Gilster on April 7, 2015

How often do two stars collide? When you think about the odds here, the likelihood of stellar collisions seems remote. You can visualize the distance between the stars in our galaxy using a method that Rich Terrile came up with at the Jet Propulsion Laboratory. The average box of salt that you might buy at the grocery store holds on the order of five million grains of salt. Two hundred boxes of salt, then, make a billion grains, while 20,000 boxes give us 100 billion. That’s now considered a low estimate of the number of stars in our galaxy, which these days tends to be cited at about 200 billion, but let’s go with the low figure because it’s still mind-boggling.

So figure you have 20,000 boxes of salt and you spread the grains out to mimic the actual separation of stars in the part of the galaxy we live in. Each grain of salt would have to be eleven kilometers away from any of its neighbors. These are considerable distances, to say the least, but of course there are places in the galaxy where stars are far closer to each other than here.
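The scaling behind Terrile’s analogy is easy to check in a few lines. The grain size and the five light year local separation below are my own round numbers for illustration, not figures from the article, but they land close to the eleven kilometers quoted:

```python
# Back-of-the-envelope check of the salt-grain analogy.
# Assumptions (mine): a 0.3 mm salt grain stands in for the Sun
# (~1.39e9 m across); nearby stars average ~5 light years apart.
GRAIN_M = 0.3e-3
SUN_DIAMETER_M = 1.39e9
LIGHT_YEAR_M = 9.461e15

scale = GRAIN_M / SUN_DIAMETER_M            # star -> grain shrink factor
sep_m = 5.0 * LIGHT_YEAR_M * scale          # local separation at model scale
boxes = 100e9 / 5e6                         # 100 billion stars, 5M grains/box

print(f"scaled separation: {sep_m / 1000:.0f} km")   # ~10 km
print(f"boxes of salt: {boxes:.0f}")                 # 20000
```

Shrink the Sun to a grain of salt and its nearest neighbours really do end up roughly ten kilometers away.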

I was reminded of this recently while reading Jack McDevitt’s novel Seeker (Ace, 2005), which Centauri Dreams reader Rob Flores had mentioned in our discussion of Scholz’s Star and its close pass by the Solar System some 70,000 years ago (see Scholz’s Star: A Close Flyby). Seeker is one of Jack’s tales about Alex Benedict, a far future dealer in antiquities, some of which are thousands of years in our own future. Here’s a bit of dialogue that brings up stellar collisions. On the distant world of the McDevitt universe, Benedict’s aide Chase Kolpath is discussing the matter with an astrophysicist and asks how frequent such collisions are:

“They happen all the time, Chase. We don’t see much of it around here because we’re pretty spread out. Thank God. Stars never get close to one another. But go out into some of the clusters—” She stopped and thought about it. “If you draw a sphere around the sun, with a radius of one parsec, you know how many other stars will fall within that space?”

“Zero,” I said. “Nothing’s close.” In fact the nearest star was Formega Ti, six light-years out.

“Right. But you go out to one of the clusters, like maybe the Colizoid, and you’d find a half million stars crowded into that same sphere.”

“You’re kidding.”

“I never kid, Chase. They bump into one another all the time.” I tried to imagine it. Wondered what the night sky would look like in such a place. Probably never got dark.

I’ve always had the same thought, and tried to imagine myself in a globular cluster like 47 Tucanae or the ancient Messier 5. Another place that (almost) never got dark was the planet in Isaac Asimov’s story “Nightfall” (Astounding Science Fiction, September 1941). Asimov took us to a world where six stars kept the sky continually illuminated except for a brief night every 2049 years. The story, so Asimov’s autobiography tells us, grew out of John Campbell’s asking Asimov to write something based on a famous quotation from Ralph Waldo Emerson:

If the stars should appear one night in a thousand years, how would men believe and adore, and preserve for many generations the remembrance of the city of God!

Of course, the Asimov tale involves a six-star system in which the inhabitants know nothing beyond the stars that keep them illuminated. In a globular cluster, things get incredibly tight, with typical star distances in the range of one light year and distances in the core much closer to the size of the Solar System. Planetary orbits in such tight regions would surely be unstable, but we can still try to imagine a sky spangled with stars this close in all directions, if only as an exercise for the imagination. Keep in mind, too, that clusters like Omega Centauri can hold several million solar masses’ worth of stars. An environment like this is surely ripe for stellar collisions.
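Taking the novel’s half-million-stars-in-a-parsec-sphere figure at face value (a fictional number, of course), the implied typical spacing follows from the density. A quick sketch, using the usual rough estimate that mean spacing scales as density to the minus one-third:

```python
import math

# Half a million stars inside a sphere of radius one parsec,
# per the fictional astrophysicist in Seeker.
PC_IN_AU = 206265.0                       # astronomical units per parsec

n_stars = 5.0e5
volume_pc3 = 4.0 / 3.0 * math.pi * 1.0 ** 3   # sphere of radius 1 pc
density = n_stars / volume_pc3                # stars per cubic parsec
sep_au = density ** (-1.0 / 3.0) * PC_IN_AU   # mean spacing ~ n**(-1/3)

print(f"{density:,.0f} stars per cubic parsec")
print(f"typical separation ~{sep_au:,.0f} AU")
```

Even at that extraordinary density the typical spacing works out to a few thousand AU, wide compared with planetary orbits but thousands of times tighter than our own neighborhood.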

Image: This sparkling jumble is Messier 5 — a globular cluster consisting of hundreds of thousands of stars bound together by their collective gravity. But Messier 5 is no normal globular cluster. At 13 billion years old it is incredibly old, dating back to close to the beginning of the Universe, which is some 13.8 billion years of age. It is also one of the biggest clusters known, and at only 24,500 light-years away, it is no wonder that Messier 5 is a popular site for astronomers to train their telescopes on. Messier 5 also presents a puzzle. Stars in globular clusters grow old and wise together. So Messier 5 should, by now, consist of old, low-mass red giants and other ancient stars. But it is actually teeming with young blue stars known as blue stragglers. These incongruous stars spring to life when stars collide, or rip material from one another. Credit: Cosmic fairy lights by ESA/Hubble & NASA. Via Wikimedia Commons.

17th Century Detection of a Collision?

Now we have word that astronomers have found evidence that the nova known as Nova Vulpeculae 1670 was actually the result of a stellar collision. Appearing in 1670 and recorded by both Giovanni Domenico Cassini and Johannes Hevelius, great figures in the astronomy of their day, the ‘new star’ was a naked eye object that varied in brightness over the course of two years. It vanished, reappeared, vanished, reappeared and finally disappeared for good.

We’ve known since the 1980s that a faint nebula in the suspected location of the event was probably all that was left of the star, but new work using APEX (Atacama Pathfinder Experiment telescope), the Submillimeter Array (SMA) and the Effelsberg radio telescope has allowed us to study the chemical composition of the nebula and measure the ratios of different isotopes in the gas. We learn that these ratios do not correspond to what we would expect from a nova.

So what was Nova Vul 1670? The mass of material was too great to be the product of a nova explosion. The paper on this work argues that we are looking at what is left after a collision between two stars, which leaves us with what is known as a red transient, in which material from the stellar interiors is blown into space, leaving only a cool, dusty remnant.

Only a few such objects, also known as luminous red novae, have been detected, with the first confirmed instance being the object M85 OT2006-1 in Messier 85. We also have the case of V1309 Scorpii, detected in 2008, which appears to be an interesting instance of the merger of a contact binary. With so few examples to work with, we have much to learn about the frequency and nature of these phenomena. But globular clusters do appear to be the best place to look for collisions. Back in 2000, Michael Shara (American Museum of Natural History) told a symposium that several hundred collisions per hour could be expected throughout the visible universe, almost all of which we will never detect (see Two Stars Collide; A New Star Is Born).

Shara estimated as well that in the ten billion year lifetime of the Milky Way, about one million collisions have occurred within globular clusters, or about one every 10,000 years. So Jack McDevitt’s astrophysicist seems to have it about right. If the entire universe is your stage, then stars collide all the time. When Hevelius described Nova Vul 1670 as ‘nova sub capite Cygni’ — a new star below the head of the Swan — he could have no idea how rare his observation was in terms of a human lifetime, but how common on a cosmic time scale.
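The arithmetic behind those rates is worth a quick sketch. Shara’s figures give the per-galaxy rate; the galaxy count below is my own assumption (estimates for the observable universe vary widely), and the universe-wide rate simply scales with it:

```python
# Per-galaxy collision interval from Shara's figures, then a naive
# scaling to the whole visible universe (galaxy count is assumed).
MILKY_WAY_AGE_YR = 1.0e10
COLLISIONS_PER_GALAXY = 1.0e6
N_GALAXIES = 2.0e11                 # assumed; estimates vary widely
HOURS_PER_YEAR = 24 * 365.25

interval_yr = MILKY_WAY_AGE_YR / COLLISIONS_PER_GALAXY
rate_per_hour = N_GALAXIES / interval_yr / HOURS_PER_YEAR

print(f"one collision per galaxy every {interval_yr:,.0f} years")
print(f"~{rate_per_hour:,.0f} collisions per hour universe-wide")
```

With this galaxy count the naive scaling gives a couple of thousand collisions per hour; drop the count by an order of magnitude and you land on Shara’s “several hundred.” Either way, on a cosmic stage stars really do collide all the time.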

The paper on Nova Vul 1670 is Kamiński et al., “Nuclear ashes and outflow in the oldest known eruptive star Nova Vul 1670,” published online in Nature 23 March 2015. The link to the abstract is broken as of this morning, but I’ll post it when it’s functional.
