Astrobiology: A Cautionary Tale

by Paul Gilster on February 27, 2015

We’re discovering planets around other stars at such a clip that moving to the next step — studying their atmospheres for markers of life — has become a priority. But what techniques will we use and, more to the point, how certain can we be of their results? Centauri Dreams columnist Andrew LePage has been mulling these matters over in the context of how we’ve approached life on a much closer world. Before the Viking landers ever touched down on Mars, a case was being made for life there that seemed compelling. LePage’s account of that period offers a cautionary tale about astrobiology, and a ringing endorsement of the scientific method. A senior project scientist at Visidyne, Inc., Drew is also the voice behind Drew ex Machina.

by Andrew LePage

Every time I read an article in the popular astronomy press about how some new proposed instrument will allow signs of life to be detected on a distant extrasolar planet, I cannot help but be just a little skeptical. For those of us with long memories, we have already been down this road of using remote sensing techniques to “prove” life existed on some distant, unreachable world, only to be disappointed when new observations became available. But instead of a distant extrasolar planet, over half a century ago that planet was our next door neighbor, Mars.

Back when I was in high school in the late 1970s, I enjoyed spending time during study hall going through science books and magazines, old as well as new, in the school library. Among the interesting tidbits I read about were spectral features known as “Sinton bands” and how in the early 1960s these were considered the latest evidence of life on Mars. Of course by the time I was reading this, I knew from the then-recent results from the Viking missions that the explanation for these and other observations was simply incorrect. So whatever happened to these Sinton bands and the interpretation that they were evidence of life on Mars?

In the years leading up to the beginning of the Space Age, the general consensus of the scientific community was that Mars was a smaller and colder version of the Earth that supported primitive plant life akin to lichen. This view was based on a large body of observational evidence gathered over the first half of the 20th century. A firmly established wave of darkening was observed spreading over the spring hemisphere of Mars each Martian year which was widely seen as being the result of plants coming out of their winter slumber much as happens on Earth each spring. This interpretation was bolstered by visual observations that the dark regions of Mars appeared to have a distinct green hue just as one would expect from widespread plant life.

Other observations of Mars during this period lent further support to the view that the Red Planet could support simple life forms. The general consensus of the astronomical community at this time based on analyses of decades of photometric and polarimetric measurements of Mars indicated that the surface pressure of the Martian atmosphere was about 85 millibars or about 8.4% of Earth’s surface pressure. Carbon dioxide and water vapor were detected and nitrogen was widely expected to be the major atmospheric constituent just as it was on Earth. No large bodies of water were visible on the surface and the climate was certainly colder than on Earth as a whole owing to Mars’ greater distance from the Sun, but the surface temperatures at the equator easily exceeded the freezing point of water during the summer so that liquid water was expected to be available. While not an ideal environment by terrestrial standards, it seemed that Mars had conditions that would be expected to support life much like the high arctic here on Earth.

Image: The best photograph of Mars available before the Space Age, taken at the Mt. Wilson Observatory in 1956 – the same year the Sinton bands were discovered. Credit: Mt. Wilson Observatory.

To further test this view, American astronomer William Sinton (1925-2004) decided to use the latest technological advancements in infrared (IR) spectroscopy to obtain observations of Mars during its especially favorable 1956 opposition. On seven nights during the fall of 1956, Dr. Sinton used the 1.55-meter Wyeth Reflector at the Harvard College Observatory to make IR spectral measurements using a lead sulfide detector cooled with liquid nitrogen to vastly improve its sensitivity. He made repeated measurements between the wavelengths of 3.3 and 3.6 μm in order to sample the spectral region where resonances from the C-H bonds of various organic molecules would create distinctive absorption features. His analysis found a dip in the IR spectrum of Mars near 3.46 μm which resembled his IR spectrum of lichen. This finding and his conclusions were published in the highly respected, peer-reviewed astronomical journal The Astrophysical Journal.

Encouraged by these initial results, Dr. Sinton repeated his measurements using an improved IR detector on the 5-meter Hale Telescope at the Mt. Palomar Observatory (then, the largest telescope in the world) during the following opposition of Mars in October 1958. His new observations had ten times the sensitivity of his original measurements and now covered wavelengths from as short as 2.7 μm out to 3.8 μm. In addition to absorption features attributable to methane and water vapor in Earth’s atmosphere, Dr. Sinton identified absorption features centered at 3.43, 3.56 and 3.67 μm that appeared to be weaker or absent in the brighter areas of Mars. Dr. Sinton concluded that inorganic compounds like carbonates could not produce the observed features. Instead they must be produced by organic compounds selectively concentrated in the dark areas of Mars that were already known to be greener. While the features he observed were not a perfect match for any known plant life on Earth, he concluded that they were due to organic compounds such as carbohydrates produced by plants on the surface of Mars. These findings and conclusions were again published in a well-regarded, peer-reviewed scientific journal, Science.

While there was naturally some healthy skepticism about the findings, they were seen by many as supporting the generally held view that Mars was the home of simple, lichen-like plant life. In order to better observe what became known as “Sinton bands”, the Soviet Union even planned to include IR instrumentation to measure these spectral features from close range on the first pair of spacecraft they launched towards Mars in October 1960. Unfortunately, both Mars probes succumbed to launch vehicle failures during ascent and never even made it into Earth orbit. Soviet engineers attempted it again with a pair of much more capable flyby probes of which only Mars 1 survived launch on November 1, 1962. Unfortunately, Mars 1 suffered a major failure in its attitude control system during its cruise and contact was lost three months before its encounter with Mars on June 21, 1963. As a result, there were no close-up IR observations of the Sinton bands at this time.

Image: The earliest Soviet Mars probes carried IR instrumentation to observe Sinton bands at close range including Mars 1 launched in November 1962. Credit: RKK Energia.

But even as the Soviet Union was struggling to reach Mars with their first interplanetary probes, the case for there being plant life on Mars and the Sinton bands being evidence for it was already beginning to unravel. Donald Rea, leading a team of scientists at the University of California – Berkeley, published the results of their work on Sinton bands in September 1963. They examined the IR spectra of a large number of inorganic and organic samples in the laboratory and could not find a match for the observed Sinton bands. While they could not find a satisfactory explanation for the bands, they found that the presence of carbohydrates as proposed by Dr. Sinton was not a required conclusion.

Another major blow was landed in a paper by another University of California – Berkeley team headed by chemist James Shirk which was published on New Year’s Day 1965. Their laboratory work suggested that the Sinton bands could be caused by deuterated water vapor – water where one or both of the normal hydrogen atoms, H, in H2O are replaced with the heavy isotope of hydrogen known as deuterium, D, to form HDO or D2O. Shirk and his team speculated that the deuterated water vapor was present in the Martian atmosphere with the implication that the D:H ratio of Mars greatly exceeded that of the Earth.

The final explanation for the Sinton bands came in a paper coauthored by Donald Rea and B.T. O’Leary of the University of California – Berkeley as well as William Sinton himself published in March of 1965. Based on a new analysis of Dr. Sinton’s data from 1958, observations of the solar IR spectrum from Earth’s surface and the latest laboratory results, it was found that the absorption features in the Martian spectrum now identified as being at 3.58 and 3.69 μm were the result of HDO in Earth’s atmosphere. The feature at 3.43 μm was, in retrospect, a marginal detection in noisy data and was probably spurious. The mystery of the Sinton bands was solved and, unfortunately, it had nothing to do with life on Mars.

Sinton bands were not the only casualty of advances in technology and remote sensing techniques at this time. As more detailed ground-based observations of Mars were made during the 1960s and the first spacecraft reached this world, it was eventually found that all of the earlier observations that had been taken as evidence of life on Mars were either inaccurate or had non-biological explanations. After a half century of observations from space and on the surface, we now know that the Martian environment is simply too hostile to support even hardy lichen-like plants as had been widely believed before the Space Age.

This story about the rise and fall of the view that Mars harbors plant-like life forms should not be taken as an example of the failure of science. Instead, it is a perfect example of how the self-correcting scientific process is supposed to work. Observations are made, hypotheses are formulated to explain the observations and those hypotheses are then tested by new observations. In this case, the pre-Space Age view that Mars supported lichen-like plants was disproved when new data no longer supported that view. And our subsequent experience with the in situ search for life on Mars by the Viking landers in 1976 is further evidence not that Mars is necessarily lifeless, but that detecting extraterrestrial life is much more difficult than had been previously believed. These lessons need to be remembered as future instruments start to scan distant extrasolar planets and claims are made that life has been found because of the alleged presence of one compound or another. Past experience has shown that such interpretations can easily be incorrect especially when dealing with new observing techniques of distant worlds with unfamiliar environments.


A Laser ‘Comb’ for Exoplanet Work

by Paul Gilster on February 25, 2015

It’s been years since I’ve written about laser frequency comb (LFC) technology, and recent work out of the Max Planck Institute of Quantum Optics, the Kiepenheuer Institute for Solar Physics and the University Observatory Munich tells me it’s time to revisit the topic. At stake here are ways to fine-tune the spectral analysis of starlight to an unprecedented degree, obviously a significant issue when you’re dealing with radial velocity readings of stars that are as tiny as those we use to find exoplanets.

Remember what’s happening in radial velocity work. A star moves slightly when it is orbited by a planet, a tiny change in speed that can be traced by studying the Doppler shift of the incoming starlight. That light appears blue-shifted as the star moves, however slightly, towards us, while shifting to the red as it moves away. The calibration techniques announced in the team’s paper show us that it’s possible to measure a change of speed of roughly 3 cm/s with their methods, whereas with conventional calibration techniques, the best measurement is roughly 1 m/s (although see the citations below for HARPS calibration of an LFC that reaches 2.5 cm/s). Detecting an Earth-mass planet in an Earth-like orbit around a solar-type star involves observing velocity changes of 10 cm/s or less, so we’re clearly entering the right range here.
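A minimal sketch puts these numbers in perspective, using the non-relativistic Doppler relation (fractional wavelength shift = v/c):

```python
# Fractional Doppler shift corresponding to the radial-velocity
# precisions quoted above (non-relativistic approximation).
C = 299_792_458.0  # speed of light, m/s

def fractional_shift(v_mps):
    """Fractional wavelength shift for a line-of-sight speed v (m/s)."""
    return v_mps / C

# A 10 cm/s Earth-like signal versus the ~1 m/s conventional floor:
earth_like = fractional_shift(0.10)    # ~3.3e-10
conventional = fractional_shift(1.0)   # ~3.3e-9

# At 550 nm, the 10 cm/s signal displaces a spectral line by under
# a femtometre, which is why calibration is everything:
shift_m = 550e-9 * earth_like          # ~1.8e-16 m
```

The shift to be measured is some ten parts per hundred billion of the wavelength itself, far below what any spectrograph can resolve directly; only against an ultra-stable reference can the centroid of a line be tracked at that level.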

Let’s back up and consider how a laser frequency comb works. Below is an image from the European Southern Observatory explaining the ‘comb’ analogy — as you can see, the graph resembles a fine-toothed comb, one built around short, equally spaced pulses of light created by a laser. The different colors of the pulsed laser light are separated based on their individual frequencies. Combining an ultrafast laser as a calibration tool with an external source of light allows scientists to measure the frequency of the external light to a high degree of precision.

Image: This picture illustrates part of a spectrum of a star obtained using the HARPS instrument on the ESO 3.6-metre telescope at the La Silla Observatory in Chile. The lines are the light from the star spread out in great detail into its component colours. The dark gaps in the lines are absorption features from different elements in the star. The regularly spaced bright spots just above the lines are the spectrum of the laser frequency comb that is used for comparison. The very stable nature and regular spacing of the frequency comb make it an ideal comparison, allowing the detection of minute shifts in the star’s spectrum that are induced by the motion of orbiting planets. Note that in this image, the colour range is for illustrative purposes only, as the real changes are much more subtle. Credit: ESO.
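The tooth structure that makes the comb a ruler can be sketched numerically. Each tooth sits at f_n = f_ceo + n · f_rep, so two radio-frequency quantities (the repetition rate and the carrier-envelope offset) pin down every optical tooth; the specific numbers below are illustrative, not those of the HARPS or ChroTel combs:

```python
# Sketch of the frequency-comb "ruler": tooth n sits at
# f_n = f_ceo + n * f_rep. Both f_rep and f_ceo are radio
# frequencies that can be measured and stabilized precisely.
F_REP = 250e6   # pulse repetition rate, Hz (illustrative)
F_CEO = 35e6    # carrier-envelope offset frequency, Hz (illustrative)

def tooth_frequency(n):
    """Absolute optical frequency of comb tooth n."""
    return F_CEO + n * F_REP

def nearest_tooth(f_optical):
    """Index of the comb tooth closest to an unknown optical frequency."""
    return round((f_optical - F_CEO) / F_REP)

# An unknown stellar line near 474 THz (~633 nm) is located by beating
# it against its nearest tooth; the residual is a radio frequency:
f_line = 473.613e12
n = nearest_tooth(f_line)
beat = f_line - tooth_frequency(n)   # measurable in the radio band
```

This is the sense in which the comb transfers radio-frequency precision into the optical domain: the unknown optical frequency is recovered as a tooth index plus a small, directly countable beat note.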

The laser frequency comb is, then, a standard ‘ruler’ that can measure the frequency of light to extreme precision. In the case of the recently announced findings, the researchers worked with sunlight averaged over the complete solar disk, as captured by the ChroTel solar telescope (located at the Vacuum Tower Telescope installation in Tenerife, Canary Islands). They combined this light with the light from the laser frequency comb, injecting both into a single optical fiber. The result was sent on to a spectrograph for analysis, with striking results. Lead author Rafael Probst (Max Planck Institute of Quantum Optics) comments:

“Our results show that if the LFC light and the sunlight are simultaneously fed through the same single-mode fibre, the obtained calibration precision improves by about a factor of 100 over a temporally separated fibre transmission. We then obtain a calibration precision that keeps up with the best calibration precision ever obtained on an astrophysical spectrograph, and we even see considerable potential for further improvement.”

Probst goes on to say that although the technique is currently restricted to solar spectroscopy, it should be workable even for faint astronomical targets as it is perfected. He comments in this news release from the Institute of Physics that a key aspect of the work is the clean and stable beam at the output that results from using single-mode fiber, a kind of fiber common in laser applications but relatively little used thus far in astronomy. The LFC at the Vacuum Tower Telescope is the first installation for astronomical use based on single-mode fiber.

These refinements of laser frequency comb technique point toward future measurements of Doppler shifts that will make detecting Earth-sized planets with radial velocity methods more likely. The laser frequency comb seems poised to become a major tool. “In astronomy, frequency combs are still a novelty and non-standard equipment at observatories,” the authors write in their conclusion. “This however, is about to change, and LFC-assisted spectroscopy is envisioned to have a flourishing future in astronomy.”

The paper is Probst et al., “Comb-calibrated solar spectroscopy through a multiplexed single-mode fiber channel,” New Journal of Physics Vol. 17 (February 2015) 023048 (abstract). See also this video abstract of the work. Laser frequency comb work at HARPS reaching into the cm/s range is reported in Wilken et al., “A spectrograph for exoplanet observations calibrated at the centimetre-per-second level,” Nature Vol. 485, Issue 7400 (May, 2012), 611-614 (abstract).


Soft Robotics for a Europa Rover

by Paul Gilster on February 23, 2015

Approaching problems from new directions can be unusually productive, something I always think of in terms of Mason Peck’s ideas on using Jupiter as a vast accelerator to drive a stream of micro-spacecraft (Sprites) on an interstellar mission. Now Peck, working with Robert Shepherd (both are at Cornell University) is proposing a new kind of rover, one ideally suited for Europa. The idea, up for consideration at the NASA Innovative Advanced Concepts (NIAC) program, is once again to exploit a natural phenomenon in place of a more conventional technology. What Peck and Shepherd have in mind is the use of ‘soft robotics’ — autonomous machines made of low-stiffness polymers or other such material — to exploit local energy beneath Europa’s ice.

We’re at the edge of a new field here, with soft robotics advocates using principles imported from more conventional rigid robot designs to work with pliable materials in a wide range of applications, some of which tie in with the growth in 3D printing. The people working in this area are developing applications for everything from physical therapy to minimally invasive surgery, with energizing inputs from organic chemistry and soft materials science. If the average robot is modeled around metallic structures with joints based on conventional bearings, soft robotics looks to the natural world for models of locomotion through terrain and innovative methods of energy production.

Peck and Shepherd are proposing what they call a ‘soft-robotic rover with electromagnetic power scavenging,’ a device capable of moving in the seas of Europa that is anything but the submarine-like craft some have envisioned to do the job. The closest analog in the natural world is the lamprey eel, the soft robotics version of which would use electrodynamic tethers to scavenge energy. The rover moves by swimming, powered not by solar or nuclear power but by the use of expanding gases. The rover under the ice sends data to a Europa orbiter using VLF wavelengths, like a submarine. Let me quote from the NIAC proposal on this mechanism:

The electrical energy scavenged from the environment powers all rover subsystems, including one that electrolyzes H2O. Electrolysis produces a mixture of H2 and O2 gas, which is stored internally in the body and limbs of this rover. Igniting this gas expands these internal chambers, causing shape change to propel the rover through fluid or perhaps along the surface of a planetary body.
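A back-of-envelope sketch of the electrolysis step described above, assuming standard electrochemistry (two electrons per water molecule split, reversible cell potential of about 1.23 V); these are textbook figures, not numbers from the proposal:

```python
# Minimum electrical energy to electrolyze water: 2 H2O -> 2 H2 + O2.
# Each H2O split transfers 2 electrons, so charge follows from
# Faraday's constant and energy from the reversible cell potential.
FARADAY = 96485.0   # coulombs per mole of electrons
E_CELL = 1.23       # volts, reversible electrolysis potential

def electrolysis_energy_joules(moles_h2o):
    """Minimum (thermodynamic) energy to split the given water."""
    moles_electrons = 2 * moles_h2o        # 2 e- per H2O molecule
    charge = moles_electrons * FARADAY     # coulombs
    return charge * E_CELL                 # joules

# Splitting one mole of water (producing one mole of H2) takes ~237 kJ,
# matching the Gibbs free energy of formation of liquid water:
e = electrolysis_energy_joules(1.0)
```

The point of the calculation is that a scavenged trickle of watts could be banked slowly as H2/O2 and then released in short, energetic combustion pulses, which is the energy-storage logic behind the rover's gas-driven actuation.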

Power Beneath the Ice

This is the first time I have encountered locomotion based on electromagnetic power scavenging, with accompanying reliance on soft robotic structures that could change the way we look at designing probes that will operate in ocean environments like that on Europa. It’s interesting to take this one step further, as the proposal itself notes, and remember that a rover inspired by biology may here point toward astrobiology, in that electromagnetic energy is considered to be a possible source of energy for any native life under Europa’s ice.

Image: Water electrolyzer and gas generation subsystem. Credit: Mason Peck, Robert Shepherd.

We know from experience in Earth orbit that tethers work, a result of the fact that a conductor moving through a magnetic field experiences an induced current. The power available at Jupiter gives us interesting options:

In Jupiter’s orbit, where the magnetic field can be up to 10,000 times more powerful and the ionosphere denser than Earth’s, the power can be even higher. A NASA/MSFC study characterized the power available for an EDT [electrodynamic tether] near Jupiter. The authors report that at least 1W would be available in Europa’s orbit but did not consider the much higher conductivity of the ocean or whatever tenuous atmosphere may exist near the surface.

Because higher conductivity produces greater current, the authors argue that far more power should be available on Europa, and that tethers no more than meters long should be sufficient to power the rover. According to the NIAC proposal, one end of the tether would be attached to the rover’s power systems while the other would be kept above the rover by a gas-filled balloon to maintain the necessary configuration as the rover conducted operations, and to ensure a predictable level of current. The tether itself also serves as an antenna to relay science data (possibly through an umbilical) to the surface. The proposed Phase I study would investigate tether configurations and the ability of an EDT to produce the power for transmission.
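The scaling behind "tethers no more than meters long" can be sketched with the idealized motional-EMF relation EMF = v · B · L for perpendicular geometry. The field and flow values below are rough, illustrative guesses for the Europa environment, not figures from the NIAC proposal:

```python
# Idealized electrodynamic-tether sketch: a conductor of length L moving
# at velocity v through magnetic field B develops EMF = v * B * L
# (perpendicular geometry assumed throughout).
def tether_emf(v_mps, b_tesla, length_m):
    """Motional EMF in volts for an idealized perpendicular geometry."""
    return v_mps * b_tesla * length_m

def tether_power(emf_volts, current_amps):
    """Power drawn if the circuit closes through the ambient medium."""
    return emf_volts * current_amps

# Jupiter's co-rotating magnetosphere sweeps past Europa at very roughly
# 100 km/s, and the field near Europa is a few hundred nanotesla
# (both values illustrative):
emf = tether_emf(100e3, 400e-9, 10.0)   # ~0.4 V for a 10 m tether
```

Note that the usable power depends on closing the circuit through the surrounding medium, which is exactly why the proposal emphasizes the high conductivity of Europa's ocean: a given EMF drives far more current through salty water than through a tenuous ionosphere.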

So we have energy harvested (or scavenged) by electrodynamic tethers being used to power up an electrolyzer that can split water into gaseous H2 and O2, an efficient way to use local resources in a domain where solar and perhaps nuclear power would be unusable (remember in relation to nuclear options that NASA has cancelled development of the Advanced Stirling Radioisotope Generator – ASRG – technologies, although some testing at NASA Glenn is to continue). The gases produced by the electrolysis would then be stored for energy usage, tapped as a combustible fuel/oxidizer mixture, and as a pressurant.

A Natural Model for Locomotion

This last point deserves a second look. In previous work, Robert Shepherd has created pneumatically powered silicon-based robots that can move and navigate obstacles, using onboard air compressors and lithium battery packs. Work at MIT has demonstrated a pneumatically powered swimming robot with a soft tail using onboard compressed CO2. Shepherd has also demonstrated the use of hydrogen combustion to increase the range and speed of soft robots, a model he and Peck propose for further study in the Europa concept, one that might be used both below the ice and on the surface.

Image: Water jetting actuated by ignition of H2/O2 gas and subsequent shape change. Credit: Mason Peck, Robert Shepherd.

Here again the model from nature is instructive, for as the report notes, “[t]he design of the combustion powered hydro-jetting mechanism is analogous to the morphology of an octopus’ mantle cavity.” The report anticipates using jetting methods to allow the rover to range widely at long distances and also for precision operations over short distances. Peck and Shepherd are also hoping to study grasping operations that would be modeled on biology, using ‘an array of teeth-like grippers positioned around the water jet area (mouth) of the synthetic lamprey.’

This is fascinating work that offers us solutions for powering an underwater robot but also provides mechanisms for movement in this environment (one that we may find in other gas giant moons) through the use of a form of robotics that mimics the natural world. Solutions inspired by biology help us move beyond the use of solar arrays, nuclear power or batteries to keep our rover operational and to give it what would seem to be a robust and lengthy lifetime. I would say that getting soft robotics into the picture for future space operations is a very wise idea, certainly one that justifies continued study and investment as we look toward the outer planets.


Beta Pictoris: New Analysis of Circumstellar Disk

by Paul Gilster on February 20, 2015

Our discovery of the interesting disk around Beta Pictoris dates back all the way to 1984, marking the first time a star was known to host a circumstellar ring of dust and debris. But it’s interesting how far back thinking on such disks extends. Immanuel Kant’s Universal Natural History and Theory of the Heavens (1755) proposed a model of rotating gas clouds that condensed and flattened because of gravity, one that would explain how planets form around stars. Pierre-Simon Laplace developed a similar model independently, proposing it in 1796, after which the idea of gaseous clouds in the plane of the disk continued to be debated as alternative theories on planet formation emerged.

Today we can view debris disks directly and learn from their interactions. Out of the Beta Pictoris discovery have grown numerous observations including the new visible-light Hubble images shown below. The beauty of this disk is that we see it edge-on and, because of the large amount of light-scattering dust here, we see it very well indeed. Beta Pictoris, a young star about twenty million years old, is a relatively close 63 light years from Earth. It also offers the only directly imaged debris disk that is known to have a giant planet, imaged in infrared wavelengths by the European Southern Observatory’s Very Large Telescope in 2009.

Image: The photo at the bottom is the most detailed picture to date of a large, edge-on, gas-and-dust disk encircling the 20-million-year-old star Beta Pictoris. The new visible-light Hubble image traces the disk closer to the star than before, to within about 1 billion kilometers (inside the radius of Saturn’s orbit about the Sun). When comparing the latest images to Hubble images taken in 1997 (top), astronomers find that the disk’s dust distribution has barely changed over 15 years despite the fact that the entire structure is orbiting the star like a carousel. The Hubble Space Telescope photo has been artificially colored to bring out detail in the disk’s structure. Credit: NASA, ESA, and D. Apai and G. Schneider (University of Arizona).

As the paper on this work notes, the new images give us the most detailed view of the disk at optical wavelengths that we’ve ever had, with the opportunity as in the image above to study its characteristics over a fifteen-year period. This is helpful because the estimated orbital period of the planet here is between 18 and 22 years, giving astronomers the ability to study a large degree of planetary and disk motion in a relatively small timeframe. The comparison shows that the dust distribution has changed little over the past fifteen years despite the disk’s rotation. A key issue is how the disk is distorted by the presence of the massive planet embedded within it.
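A quick sketch shows how small these angles are on the sky. At Beta Pictoris' distance, the small-angle relation gives the separation in arcseconds as separation in AU divided by distance in parsecs:

```python
# Converting the quoted physical scales at Beta Pictoris into angles
# on the sky, using the small-angle relation:
#   angle["] = separation[AU] / distance[pc]
AU_PER_LY = 63241.1    # astronomical units per light year
KM_PER_AU = 1.496e8    # kilometers per astronomical unit

def arcsec(separation_au, distance_pc):
    """Angular separation in arcseconds (small-angle approximation)."""
    return separation_au / distance_pc

distance_pc = 63.0 * AU_PER_LY / 206265.0   # 63 ly -> ~19.3 pc
inner_au = 1e9 / KM_PER_AU                  # "1 billion km" from the caption
theta = arcsec(inner_au, distance_pc)       # ~0.35 arcsec from the star
```

Tracing the disk to within roughly a third of an arcsecond of a bright star is what the "inner working angle" improvement buys: the coronagraphic images now probe the region where the giant planet itself orbits.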

The new high-contrast images provide an inner working angle that is smaller than earlier images by a factor of 2, allowing astronomers to image the disk at the location where the gas giant planet was located in 2012. Changes in brightness within the disk indicate an asymmetry that may be the mark of an inner inclined disk projecting into the outer disk material. The image below combines data from Hubble and the ALMA array, highlighting the dust and gas asymmetry.

Image: This is a color composite image of the disk encircling Beta Pictoris. The image shows a curious asymmetry in the dust and gas distribution. This may be due to a planetary collision within the disk, which may have pulverized the bodies. Radio data from the Atacama Large Millimeter/submillimeter Array (ALMA) shows the dust (1.3 millimeter is colored green) and carbon monoxide gas (colored red). Credit for Hubble Data: NASA, ESA, D. Apai and G. Schneider (University of Arizona). Credit for ALMA Data: NRAO and W.R.F. Dent (ALMA, Santiago, Chile).

The disk asymmetry issue will be explored by future Hubble work as well as by observations from the James Webb Space Telescope, which should give us a better indication of planet/disk interactions in the system. There is much to learn here in relation to disk warping and the origins of the planet’s orbital inclination, a tilt from the main disk that previous work had estimated at about 1 degree. The paper refines this estimate:

The warped disk – as seen in projection – subtends an angle larger than the best-fit orbital inclination of β Pic b, suggesting that planetesimals may be perturbed to higher inclinations than that of the perturbing giant planet. This finding is consistent with the predictions of dynamical simulations of a planetesimal system influenced by secular perturbations of a planet on inclined orbits… The fact that the warp is seen at angles ≳ 4° would, taken on face value, then suggest that β Pic b’s inclination is – within uncertainties – underestimated by current measurements and it may be close to ∼ 2°.

Image: Key structures in the β Pic system, as derived from multi-wavelength imaging. Credit: Daniel Apai et al. (Figure 15 from the paper).

We need to learn how the massive gas giant in the Beta Pictoris system ended up with an inclined orbit, and what has caused the asymmetry in the disk itself. Learning how the process works around this nearby star will help us detect the evidence of exoplanets in other circumstellar disks. While we continue to use the dusty Beta Pictoris system as the model for debris disks around young stars, we’re learning that the structure of disks may be intimately related to the planets moving within them. Thus each circumstellar disk will likely have its own signature. “The Beta Pictoris disk is the prototype for circumstellar debris systems, but it may not be a good archetype,” says co-author Glenn Schneider (University of Arizona).

The paper is Apai et al., “The Inner Disk Structure, Disk-Planet Interactions, and Temporal Evolution in the β Pictoris System: A Two-Epoch HST/STIS Coronagraphic Study,” in press at the Astrophysical Journal (preprint).


Scholz’s Star: A Close Flyby

by Paul Gilster on February 19, 2015

The star HIP 85605 until recently seemed more interesting than it may now turn out to be. In a recent paper, Coryn Bailer-Jones (Max Planck Institute for Astronomy, Heidelberg) noted that the star in the constellation Hercules had a high probability of coming close enough to our Solar System in the far future (240,000 to 470,000 years from now) that it would pass through the Oort Cloud, potentially disrupting comets there. The possibility of a pass as close as 0.13 light years (8200 AU) was there, but Bailer-Jones cautioned that distance measurements of this star could be incorrect. His paper on nearby stellar passes thus leaves the HIP 85605 issue unresolved.

Enter Eric Mamajek (University of Rochester) and company. Working with data from the Southern African Large Telescope (SALT) and the Magellan telescope at Las Campanas Observatory in Chile, Mamajek showed that the distance to HIP 85605 has been underestimated by a factor of ten. As Bailer-Jones seems to have suspected, the new measurement takes the star on a trajectory that does not bring it within the Oort Cloud. But in the same paper, the team names an interesting system called Scholz’s Star as a candidate for a close pass in the past.

Studying the star’s tangential velocity (motion across the sky) as well as radial velocity data, the team found that despite being relatively close at 20 light years, Scholz’s Star shows little tangential velocity. That would imply an interesting encounter ahead, or one that had already happened. Mamajek explains:

“Most stars this nearby show much larger tangential motion. The small tangential motion and proximity initially indicated that the star was most likely either moving towards a future close encounter with the solar system, or it had ‘recently’ come close to the solar system and was moving away. Sure enough, the radial velocity measurements were consistent with it running away from the Sun’s vicinity – and we realized it must have had a close flyby in the past.”
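The relation behind this reasoning is the standard conversion from proper motion to tangential velocity, v_t [km/s] = 4.74 · μ ["/yr] · d [pc]. The proper-motion value below is an illustrative round number, not the paper's measurement:

```python
# Tangential velocity from proper motion and distance:
#   v_t [km/s] = 4.74 * proper_motion ["/yr] * distance [pc]
# The 4.74 factor converts AU/yr to km/s.
def tangential_velocity(pm_arcsec_per_yr, distance_pc):
    """Tangential (sky-plane) velocity in km/s."""
    return 4.74 * pm_arcsec_per_yr * distance_pc

# Scholz's star: ~20 light years -> ~6.1 pc. Many stars this close show
# proper motions on the order of 1"/yr; a star showing only ~0.1"/yr
# (illustrative value) has a strikingly small sky-plane velocity:
d_pc = 20 / 3.2616
v_t = tangential_velocity(0.1, d_pc)   # only a few km/s
```

A nearby star with a tiny tangential velocity must be moving almost directly along the line of sight, which is why the radial velocity measurement settled the question of a past (rather than future) close approach.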

Image: Artist’s conception of Scholz’s star and its brown dwarf companion (foreground) during its flyby of the solar system 70,000 years ago. The Sun (left, background) would have appeared as a brilliant star. The pair is now about 20 light years away. Credit: Michael Osadciw/University of Rochester.

The paper on this work, recently published in the Astrophysical Journal, determines the star’s trajectory, one that shows that about 70,000 years ago, it would have passed some 52,000 AU from the Sun. This works out to about 0.82 light years, or 7.8 trillion kilometers, quite a bit closer than Proxima Centauri, and probably close enough to pass through the outer Oort Cloud. The star was within 100,000 AU of the Sun for a period of roughly 10,000 years.
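The distance conversions quoted above are easy to verify. Here is a quick sketch using standard values for the astronomical unit and the light year (the constants are mine, not from the paper):

```python
# Check the flyby-distance conversions: 52,000 AU in light years and kilometers.
AU_KM = 1.495978707e8   # kilometers per astronomical unit
LY_AU = 63241.077       # astronomical units per light year

d_au = 52_000                     # Scholz's star at closest approach
d_ly = d_au / LY_AU               # comes out near 0.82 light years
d_km = d_au * AU_KM               # comes out near 7.8 trillion kilometers

print(f"{d_ly:.2f} light years, {d_km:.2e} km")
```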

Scholz’s star (W0720) is a low-mass object in the constellation Monoceros also tagged WISE J072003.20-084651.2 and only recently discovered (by Ralf-Dieter Scholz in 2014) thanks to its dimness in optical wavelengths, its proximity to the galactic plane and its low proper motion. Adaptive optics imaging and high resolution spectroscopy have demonstrated that the star is actually a binary, an M-dwarf with a companion at 0.8 AU that is probably a brown dwarf.

The question that immediately comes to mind is what kind of object the Scholz’s star system would have presented in the night sky some 70,000 years ago. The answer is not dramatic, for at its closest approach the binary would have had an apparent magnitude of roughly 11.4 (note: there is a typo in the paper, as noted here, which had specified an apparent magnitude of 10.3). This is five magnitudes, or a factor of 100, fainter than the faintest naked eye stars. But the paper notes that M-dwarfs like this one are often given to flare activity that might have made Scholz’s star a brighter object. From the paper:

If W0720 experienced occasional flares similar to those of the active M8 star SDSS J022116.84+194020.4 (Schmidt et al. 2014), then the star may have been rarely visible with the naked eye from Earth (V < 6; ∆V < −4) for minutes or hours during the flare events. Hence, while the binary system was too dim to see with the naked eye in its quiescent state during its flyby of the solar system ∼70 kya, flares by the M9.5 primary may have provided visible short-lived transients visible to our ancestors.
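The magnitude arithmetic here follows from the standard astronomical brightness scale, in which every five magnitudes corresponds to a factor of 100 in flux. A minimal illustration (the helper function is mine):

```python
# Convert a magnitude difference into a brightness (flux) ratio.
def brightness_ratio(delta_m):
    """Each 5 magnitudes of difference is a factor of 100 in brightness."""
    return 100 ** (delta_m / 5)

# Scholz's star at closest approach (V ~ 11.4) vs. the naked-eye limit (V ~ 6):
print(brightness_ratio(11.4 - 6.0))   # well over 100 times too faint to see

# A flare brightening of more than 4 magnitudes (delta V < -4) boosts the
# star's flux by a factor of roughly 40, enough to cross the visibility line:
print(brightness_ratio(4.0))
```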

And take a look at this graph, which Eric Mamajek published on Twitter yesterday.

As you can see, Scholz’s Star was moving out. If it had been visible, what would ancient skywatchers have made of it? We also have to wonder what other close encounters our Solar System may have had with other stars. Note this point from the paper about M-dwarfs:

Past systematic searches for stars with close flybys to the solar system have been understandably focused on the Hipparcos astrometric catalog (García Sánchez et al. 1999; Bailer-Jones 2014), however it contains relatively few M dwarfs relative to their cosmic abundance. Searches in the Gaia astrometric catalog for nearby M dwarfs with small proper motions and large parallaxes (i.e. with small tangential velocities) will likely yield additional candidates.

So much still to learn about M-dwarfs!

The paper is Mamajek et al., “The Closest Known Flyby of a Star to the Solar System,” Astrophysical Journal Letters 800 (2015), L17 (preprint). The Bailer-Jones paper discussed above is “Close Encounters of the Stellar Kind,” in press at Astronomy & Astrophysics (preprint). For more on Bailer-Jones, see Stars Passing Close to the Sun.


Ceres: Past and Future

by Paul Gilster on February 18, 2015

Now it’s really getting interesting. Here are the two views of Ceres that the Dawn spacecraft acquired on February 12. The distance here is about 83,000 kilometers, the images taken ten hours apart and magnified. As has been true each time we’ve talked about Ceres in recent weeks, these views are the best ever attained, with arrival at the dwarf planet slated for March 6.

What I notice and really enjoy about watching Dawn in action is the pace of the encounter. Dawn is currently moving at a speed of 0.08 kilometers per second relative to Ceres, which works out to 288 kilometers per hour. The distance of 83,000 kilometers on the 12th of February has now closed (as of 1325 UTC today, the 18th) to 50,330 kilometers. It’s quite a change of pace from the days when we used to watch Voyager homing in on a planetary encounter. Voyager 2 reached about 34 kilometers per second as it approached Saturn, for example, then slowed dramatically as it climbed out of the giant planet’s gravitational well. The same profile held with each encounter.
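To put those approach speeds side by side, here is the simple unit arithmetic (the comparison framing is mine, using the figures quoted above):

```python
# Dawn's leisurely approach to Ceres versus Voyager 2's dash past Saturn.
dawn_kms = 0.08            # km/s relative to Ceres
voyager_saturn_kms = 34.0  # km/s approaching Saturn

dawn_kmh = dawn_kms * 3600                   # 288 km/h: highway speed
speed_ratio = voyager_saturn_kms / dawn_kms  # Voyager was over 400x faster

print(f"Dawn: {dawn_kmh:.0f} km/h; Voyager 2 approached Saturn {speed_ratio:.0f}x faster")
```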

Have a look at this JPL graph of Voyager 2 assist velocity changes and you’ll see the profile for each planetary flyby. In each case, the spacecraft comes into the gravitational influence of the planet and falls towards it, increasing its speed to a maximum at the time of closest approach. The numbers at closest approach to Saturn are actually over twice what Voyager 2 is making right now — 15.4 kilometers per second — but it’s the climb out of the gravity well that slows the vehicle down, something that is clearly shown in the graph at each encounter. Voyager 2, as noted below, actually left the Earth moving at 36 kilometers per second relative to the Sun.

Image: Voyager 2 leaves Earth at about 36 km/s relative to the sun. Climbing out, it loses much of the initial velocity the launch vehicle provided. Nearing Jupiter, its speed is increased by the planet’s gravity, and the spacecraft’s velocity exceeds Solar System escape velocity. Voyager departs Jupiter with more sun-relative velocity than it had on arrival. The same is seen at Saturn and Uranus. The Neptune flyby was designed to put Voyager close by Neptune’s moon Triton rather than to attain more speed. Diagram courtesy Steve Matousek, JPL.

The result for those of us watching spellbound as the Voyager mission progressed was that the planetary flybys were over quickly, the newly seen planetary details and moons captured for later analysis. With the benefit of supple ion propulsion, Dawn approaches at a far more leisurely pace as it moves toward not flyby but orbit around its latest target. To judge from what we’re seeing in the latest imagery, Ceres is going to be a fascinating place to explore. Note the craters and bright spots that have people talking throughout the space community.

“As we slowly approach the stage, our eyes transfixed on Ceres and her planetary dance, we find she has beguiled us but left us none the wiser,” said Chris Russell, principal investigator of the Dawn mission, based at UCLA. “We expected to be surprised; we did not expect to be this puzzled.”

Given how much we learned about Vesta during Dawn’s fourteen-month exploration of the asteroid, we can hope that a comparative analysis of Ceres will teach us much about the formation of these objects and the Solar System itself. The best resolution in the images above is 7.8 kilometers per pixel. But day by day a small world is opening up, slowly, majestically. We may be looking at a place that becomes a significant resource for an expanding interplanetary economy in some much-to-be-hoped-for future. I’m reminded of a work by the Irish poet Eavan Boland, drawn back to it this morning because of the title: ‘Ceres Looks at the Morning.’ Boland’s reference is to the classical goddess Ceres, but this excerpt seems apropos as we approach and reveal a new land:

Beautiful morning
look at me as a daughter would
look: with that love and that curiosity:
as to what she came from.
And what she will become.


A Black Hole of Information?

by Paul Gilster on February 17, 2015

A couple of interesting posts to talk about in relation to yesterday’s essay on the Encyclopedia Galactica. At UC-Santa Cruz, Greg Laughlin writes entertainingly about The Machine Epoch, an idea that suggested itself because of the spam his systemic site continually draws from “robots, harvesters, spamdexing scripts, and viral entities,” all of which continually fill up his site’s activity logs as they try to insert links.

Anyone who attempts any kind of online publishing knows exactly what Laughlin is talking about, and while I hate to see his attention drawn even momentarily from his ongoing work, I always appreciate his insights on systemic, a blog whose range includes his exoplanet analyses as well as his speculations on the far future (as I mentioned yesterday, Laughlin and Fred Adams are the authors behind the 1999 title The Five Ages of the Universe, as mind-bending an exercise in extrapolating the future as anything I have ever read). I’ve learned on systemic that he can take something seemingly mundane and open it into a rich venue for speculation.

So naturally when I see Laughlin dealing with blog spam, I realize he’s got much bigger game in mind. In his latest, spam triggers his recall of a conversation with John McCarthy, whose credentials in artificial intelligence are of the highest order. McCarthy is deeply optimistic about the human future, believing that there are sustainable paths forward. He goes so far as to say “There are no apparent obstacles even to billion year sustainability.” Laughlin comments:

Optimistic is definitely the operative word. It’s also possible that the computational innovations that McCarthy had a hand in ushering in will consign the Anthropocene epoch to be the shortest — rather than one of the longest — periods in Earth’s geological history. Hazarding a guess, the Anthropocene might end not with the bang with which it began, but rather with the seemingly far more mundane moment when it is no longer possible to draw a distinction between the real visitors and the machine visitors to a web site.

Long or short-term? A billion years or a human future that more or less peters out as we yield to increasingly powerful artificial intelligence? We can’t know, though I admit that by temperament, I’m in the McCarthy camp. Either way, an Encyclopedia Galactica could get compiled by whatever kind of intelligences arise and communicate with each other in the galaxy.

Into the Rift

Of course, one problem is that any encyclopedia needs material to work with, and we are seeing some signs that huge amounts of information may now be falling into what Google’s Vinton Cerf, co-inventor of the TCP/IP protocols that drive the Internet, calls “an information black hole.”

Speaking at the American Association for the Advancement of Science’s annual meeting in San Jose (CA), Cerf asked whether we are facing a century’s worth of ‘forgotten data,’ noting how much of our lives — our letters, music, family photos — is now locked into digital formats. Cerf’s warning, reported by The Guardian in an article called Google boss warns of ‘forgotten century’ with email and photos at risk, is stark:

“We are nonchalantly throwing all of our data into what could become an information black hole without realising it. We digitise things because we think we will preserve them, but what we don’t understand is that unless we take other steps, those digital versions may not be any better, and may even be worse, than the artefacts that we digitised… If there are photos you really care about, print them out.”

If that seems extreme, consider that no special technology was needed to read stored information from our past, from the clay tablets of ancient Mesopotamia to the scrolls and codices on which early historians and later medieval monks recorded their civilization. A great library is a grand thing that does not demand dedicated hardware and software to use. ‘Bit rot’ occurs when the equipment on which we record our data becomes obsolete — a box of floppy disks sitting on my file cabinet mutely reminds me that I currently have no way to read them.

Cerf’s point isn’t that information can’t be migrated from one format to another. We’ll surely preserve documents, photos, audio and video considered to be significant in many formats. But what’s surprising when you start prowling a big academic library is how often the most insignificant things can act as pointers to useful information. A bus schedule, a note dashed off by the acquaintance of an artist, a manuscript filed under the wrong heading — all these can unlock parts of our history, and we can assume the same about many a casual email.

Historians have learned how the greatest mathematician of antiquity considered the concept of infinity and anticipated calculus in the third century BC after the Archimedes palimpsest was found hidden under the words of a Byzantine prayer book from the 13th century. “We’ve been surprised by what we’ve learned from objects that have been preserved purely by happenstance that give us insights into an earlier civilisation,” [Cerf] said.

Will we preserve what we need to so that we have the kind of record of our time that is so tightly locked up in the books and artifacts of previous eras? The article mentions work at Carnegie Mellon University in Pittsburgh, where a project called Olive is archiving early versions of software. The project allows a computer to mimic the device the original software ran on, a technology that seems a natural solution as we try to record the early part of the desktop computer era. We’ll be refining such solutions as we continue to address this problem.

Cerf talks about preserving information for hundreds or thousands of years, but of course what we’d like to see is a seamless way to migrate our cultural output through advancing levels of technology so that if John McCarthy’s instincts are right, our remote descendants will still have access to the things we did and said, both meaningful and seemingly inconsequential. That’s a long way from an Encyclopedia Galactica, but learning how to do this could teach us principles of archiving and preservation that could eventually feed such a compilation.


Information and Cosmic Evolution

by Paul Gilster on February 16, 2015

Keeping information viable is something that has to be on the mind of a culture that continually changes its data formats. After all, preserving information is a fundamental part of what we do as a species — it’s what gives us our history. We’ve managed to preserve the accounts of battles and migrations and changes in culture through a wide range of media, from clay tablets to compact disks, but the last century has seen swift changes in everyday products like the things we use to encode music and video. How can we keep all this readable by future generations?

The question is challenging enough when we consider the short term, needing to read, for example, data tapes for our Pioneer spacecraft when we’ve all but lost the equipment needed to manage the task. But think, as we like to do in these pages, of the long-term future. You’ll recall Nick Nielsen’s recent essay Who Will Read the Encyclopedia Galactica, which looks at a future so remote that we have left the ‘stelliferous’ era itself, going beyond the time of stars collected into galaxies, which is itself, Nick points out, only a small sliver of the universe’s history.

Can we create what Isaac Asimov called an Encyclopedia Galactica? If you go back and read Asimov’s Foundation books, you’ll find that the Encyclopedia Galactica appears frequently, often quoted as the author introduced new chapters. From its beginnings in a short story called “Foundation” (Astounding Science Fiction, May 1942), the Encyclopedia Galactica was seen as the entirety of knowledge throughout a widespread galactic empire. Carl Sagan introduced the idea to a new audience in his television series Cosmos as a cache of all information and knowledge.

Image: The May, 1942 issue of Astounding Science Fiction, containing the first of the short stories that would eventually be incorporated into the Foundation novels.

Now we have an interesting new paper from Stefano Mancini and Roberto Pierini (University of Camerino, Italy) and Mark Wilde (Louisiana State University) that looks at the question of information viability. An Encyclopedia Galactica is going to need to survive in whatever medium it is published in, which means preserving the information against noise or disruption. The paper, published in the New Journal of Physics, argues that even if we find a technology that allows for the perfect encoding of information, there are limitations that grow out of the evolution of the universe itself that cause information to become degraded.

Remember, we’re talking long-term here, and while an Encyclopedia Galactica might serve a galactic population robustly for millions, perhaps billions of years, what happens as we study the fate of information from the beginning to the end of time? Mancini and team looked at the question with relation to what is called a Robertson-Walker spacetime, which as Mark Wilde explains in this video abstract of the work, is a solution of Einstein’s field equations of general relativity that describes a homogeneous, isotropic, expanding or contracting universe.

In addition to Wilde’s video, let me point you to The Cosmological Limits of Information Storage, a recent entry on the Long Now Foundation’s blog. As for the team’s method using the Robertson-Walker spacetime as the background for its development of communication theory models, it is explained this way in the paper:

… we can consider any quantum state of the matter field before the expansion of the universe begins and define, without ambiguity, its particle content. We then let the universe expand and check how the state looks once the expansion is over. The overall picture can be thought of as a noisy channel into which some quantum state is fed. Once we have defined the quantum channel emerging from the physical model, we will be looking at the usual communication task as information transmission over the channel. Since we are interested in the preservation of any kind of information, we shall consider the trade-off between the different resources of classical and quantum information.

In other words, encoded information is modeled against an expanding universe to see what happens to it. The result challenges the idea that there is any such thing as perfectly stored information, for the paper finds that over time, as the universe expands and evolves, a transformation inevitably occurs in the quantum space in which it is encoded. Noise is the result, what the authors refer to as an ‘amplitude damping channel.’ Information that is encoded into a storage medium is inevitably corrupted by changes to the quantum state of the medium.
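For readers who want to see what an ‘amplitude damping channel’ looks like concretely, the single-qubit textbook version can be written with two Kraus operators. This is a sketch of the generic channel only, not the paper’s cosmological construction:

```python
import numpy as np

def amplitude_damping(rho, gamma):
    """Apply the standard single-qubit amplitude damping channel.

    gamma is the decay probability; the two Kraus operators satisfy
    K0^dag K0 + K1^dag K1 = I, so the channel preserves the trace of rho.
    """
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# Start in the excited state |1><1| and watch population 'damp' toward |0><0|,
# the qubit analogue of stored information degrading into noise:
excited = np.array([[0.0, 0.0], [0.0, 1.0]])
for gamma in (0.1, 0.5, 0.9):
    out = amplitude_damping(excited, gamma)
    print(f"gamma={gamma}: surviving excited population {out[1, 1]:.2f}")
```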

Cosmology comes into play here because the paper argues that faster expansion of the cosmos creates more noise. We can encode our information in the form of bits or we can use information stored and encoded by the quantum state of particular particles, but in both cases noise continues to mount as the universe continues to expand. Collecting all the material for our Encyclopedia Galactica may, then, be a smaller challenge than preserving it in the face of cosmic evolution. On the time scales envisioned in Nick Nielsen’s essay (and studied at length in Fred Adams’ and Greg Laughlin’s book The Five Ages of the Universe: Inside the Physics of Eternity), the keepers of the encyclopedia have their work cut out for them.

Preserving the Encyclopedia Galactica will demand, it seems, continual fixes and long-term vigilance. But consider the researchers’ thoughts on the direction for future work:

…one could also cope with the degradation of the stored information by intervening from time to time and actively correcting the contents of the memory during the evolution of the universe. In this direction, channel capacities taking into account this possibility have been introduced… In another direction, and much more speculatively, one might attempt to find a meaningful notion for entanglement assisted communication in our physical scenario by considering Einstein-Rosen bridges… or entanglement between different universe’s eras, related to dark energy.

Now we’re getting speculative indeed! I’ve left out the references to papers on each of the possibilities within that quotation — see the paper, which is available on arXiv, for more. Mark Wilde notes in the video referenced above that another step forward would be to look at more general models of the universe in which information could be encoded in such exotic scenarios as the spacetime near a black hole. The latter is an interesting thought, and my guess is that we’ll have new work from these researchers delving into such models in the near future, a time close enough to our own that the data should still be viable.

The paper is Mancini et al., “Preserving Information from the beginning to the end of time in a Robertson-Walker spacetime,” New Journal of Physics 16 (2014), 123049 (abstract / full text).


A Full Day at Pluto/Charon

by Paul Gilster on February 13, 2015

Have a look at the latest imagery from the New Horizons spacecraft to get an idea of how center of mass — barycenter — works in astronomy. When two objects orbit each other, the barycenter is the point where they are in balance. A planet orbiting a star may look as if it orbits without influencing the much larger object, but in actuality both bodies orbit around a point that is offset from the center of the larger body. A good thing, too, because this is one of the ways we can spot exoplanets, by the observed ‘wobble’ in the stars they orbit.

The phenomenon is really evident in what the New Horizons team describes as the ‘Pluto-Charon dance.’ Here we have a case where the two objects are close enough in size — unlike planet and star, or the Moon and the Earth — so that the barycenter actually falls outside both of them. The time-lapse frames in the movie below show Pluto and Charon orbiting a barycenter above Pluto’s surface, where Pluto and Charon’s gravity effectively cancel each other. Each frame here has an exposure time of one-tenth of a second.

Charon is about one-eighth as massive as Pluto. The images in play here come from New Horizons’ Long-Range Reconnaissance Imager (LORRI), made between January 25th and 31st of this year. The New Horizons team is in the midst of an optical navigation (Opnav) campaign to nail down the locations of Pluto and Charon as preparations continue for the July 14th flyby. None of the other four moons of Pluto are visible here because of the short exposure times, but focus in on Charon. We’re looking at an object about the size of Texas.
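That one-eighth mass ratio is what pushes the barycenter off Pluto’s surface. A back-of-the-envelope check, using approximate figures for the mass ratio, separation, and Pluto’s radius (these numbers are my assumptions, not from the New Horizons release):

```python
# Locate the Pluto-Charon barycenter relative to Pluto's surface.
mass_ratio = 0.122        # Charon/Pluto mass ratio, roughly one-eighth
separation_km = 19_600    # center-to-center distance, approximate
pluto_radius_km = 1_188   # approximate

# The barycenter sits at d * m2 / (m1 + m2) from the primary's center.
r_bary = separation_km * mass_ratio / (1.0 + mass_ratio)

print(f"Barycenter is {r_bary:.0f} km from Pluto's center")
print("Outside Pluto's surface:", r_bary > pluto_radius_km)
```

With these figures the balance point lands roughly 2,100 km out, well clear of Pluto’s radius, which is why the two bodies visibly circle a point in empty space.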

Now take a look at Pluto/Charon back in 1978, when James Christy, an astronomer at the U.S. Naval Observatory, observed it using the 1.55-m (61-inch) Kaj Strand Astrometric Reflector at the USNO Flagstaff Station in Arizona. Christy was studying what was then considered to be a solitary ‘planet’ (since demoted) when he noticed that in a number of the images, Pluto seemed to be elongated, a distortion in shape that varied with respect to background stars over time. The discovery of a moon was formally announced in early July of that year by the International Astronomical Union. Charon received its official name in 1985.

Image: What Pluto/Charon looked like to James Christy in 1978. Credit: U.S. Naval Observatory.

The New Horizons time-lapse movie shows an entire rotation of each body, the first of the images being taken when the spacecraft was 203 million kilometers from Pluto. The last frame, six and a half days later, was taken when New Horizons was 8 million kilometers closer. Alan Stern (Southwest Research Institute), principal investigator for New Horizons, notes the significance of the latest imagery:

“These images allow the New Horizons navigators to refine the positions of Pluto and Charon, and they have the additional benefit of allowing the mission scientists to study the variations in brightness of Pluto and Charon as they rotate, providing a preview of what to expect during the close encounter in July.”

That’s an encounter that will close an early chapter in space exploration — all nine objects classically counted as planets will have had close-up examination — but of course it opens up yet another, as New Horizons looks toward an encounter with a Kuiper Belt object as it moves ever outward. Just as our Voyagers are still communicating long after Voyager 2 left Neptune, New Horizons gives us much to look forward to.


What Comets Are Made Of

by Paul Gilster on February 12, 2015

When the Rosetta spacecraft’s Philae lander bounced while landing on comet 67P/Churyumov-Gerasimenko last November, it was a reminder that comets have a hard outer shell, a black coating of organic molecules and dust that previous missions, like Deep Impact, have also observed. What we’d like to learn is what that crust is made of, and just as interesting, what is inside it. A study out of JPL is now suggesting possible answers.

Antti Lignell is lead author on a recent paper reporting the team’s use of a cryostat device called Himalaya to flash-freeze material much like that found in comets. The procedure flash-froze water vapor molecules at temperatures around 30 Kelvin (minus 243 degrees Celsius). What results is something called ‘amorphous ice,’ as explained in this JPL news release. Proposed as a key ingredient not only of comets but of icy moons, amorphous ice preserves the mix of water with organics along with pockets of space.

JPL’s Murthy Gudipati, a co-author of the paper on this work, compares amorphous ice to cotton candy, while pointing out that on places with much more moderate temperatures, like the Earth, all ice is in crystalline form. But comets, as we know, can change drastically as they approach the Sun. To mimic this scenario, the researchers used their cryostat instrument to gradually warm the amorphous ice they had created to 150 Kelvin (minus 123 degrees Celsius).

What happened next involved the kind of organics called polycyclic aromatic hydrocarbons (PAHs) common in deep space. These had been infused in the ice mixture that Lignell and Gudipati created. Lignell describes the result:

“The PAHs stuck together and were expelled from the ice host as it crystallized. This may be the first observation of molecules clustering together due to a phase transition of ice, and this certainly has many important consequences for the chemistry and physics of ice.”

Expelling the PAHs meant that the water molecules could then form the tighter structures of crystalline ice. The lab had produced, in other words, a ‘comet’ nucleus of its own, similar to what we have observed so far in space. Gudipati likens the lab’s ‘comet’ to deep fried ice cream — an extremely cold interior marked by porous, amorphous ice with a crystalline crust on top that is laced with organics. What we could use next, he notes, is a mission to bring back cold samples from comets to compare to these results.

Image: Rosetta NAVCAM image of Comet 67P/C-G taken on 6 February from a distance of 124 km to the comet centre. In this orientation, the small comet lobe is to the left of the image and the large lobe is to the right. The image has been processed to bring out the details of the comet’s activity. The exposure time of the image is 6 seconds. Credits: ESA/Rosetta/NAVCAM – CC BY-SA IGO 3.0.

Meanwhile, we have the spectacular imagery above from Rosetta at comet 67P/Churyumov-Gerasimenko. What the European Space Agency refers to as ‘a nebulous glow of activity’ emanates from all over the sunlit surface, but note in particular the jets coming out of the neck region and extending toward the upper right. In this year of celestial marvels (Ceres, Pluto/Charon, and Rosetta at 67P/Churyumov-Gerasimenko), we’re seeing what happens to a comet as it warms and begins to vent gases from all over its surface. We’re now going to be able to follow Rosetta as it studies the comet from a range of distances, a view that until this year was solely in the domain of science fiction writers. What a spectacle lies ahead!

The paper is Lignell and Gudipati, “Mixing of the Immiscible: Hydrocarbons in Water-Ice near the Ice Crystallization Temperature,” published online by the Journal of Physical Chemistry on October 10, 2014 (abstract).
