Citizen SETI

I love watching people who have a passion for science constructing projects in ways that benefit the community. I once dabbled in radio astronomy through the Society of Amateur Radio Astronomers, and I could also point to the SETI League, with 1500 members on all seven continents engaged in one way or another with local SETI projects. And these days most everyone has heard the story of Planet Hunters, the citizen science project that identified the unusual Boyajian’s Star (KIC 8462852). When I heard from Roger Guay and Scott Guerin, who have been making their own theoretical contributions to SETI, I knew I wanted to tell their story here. The post that follows lays out an alien civilization detection simulation and a tool for visualizing how technological cultures might interact, with an entertaining coda about an unusual construct called a ‘Dyson shutter.’ I’m going to let Roger and Scott introduce themselves as they explain how their ideas developed.

by Roger Guay and Scott Guerin

Citizen science plays an increasingly important role across several scientific disciplines, especially in astronomy and SETI. Tabby's Star, discovered by members of the Planet Hunters project, and the SETI@home project are recent examples of massively parallel citizen-science efforts. Those large-scale projects are counterbalanced by individuals whose near obsession with a subject compels them to study, write, code, draw, design, talk about, or build artifacts that help them understand the ideas that excite them.

Roger Guay and Scott Guerin, working in isolation, recently discovered parallel evolution in their thinking about SETI and the challenges of interstellar detection and communication. Guay has programmed a 10,000 x 8,000 light year swath of a typical galaxy and populated it with randomly appearing, radiating civilizations. His model allows users to tweak basic parameters to see how frequently potential detections occur. Guerin is more interested in a galaxy-wide model and has used worksheets and animations to bring his thoughts to light. His ultimate goal is to develop a parametric civilization model so that interactions, if any, can be studied. At their core, however, both efforts were attempts at visualizing the Fermi Paradox across space-time, and both experimenters show how fading electromagnetic halos may be all that's left for us to discover of an extraterrestrial civilization, if we listen hard enough.

The backgrounds, mindsets, and tool kits available to Roger and Scott play an important role in their path to this blog.

Roger Guay

I am a retired physicist and Technical Fellow Emeritus from Boeing in Seattle. I can't remember when I first became interested in being a scientist (it was in grade school), but I do remember when I first became obsessed with the Fermi Paradox. It was during a discussion on a road trip with a colleague. At first, this discussion mainly revolved around the almost unfathomable vastness of space and time in our galaxy, but it then turned to the parameters of the Drake equation. The most controversial was L, the lifetime of an Intelligent Civilization, or IC.

The casual newcomer to the Drake equation will tend to assume a relatively long lifetime for an IC, but when considering detection methods such as SETI uses, one must adjust L to reflect the lifetime of the technology of the detection method. For example, SETI is listening for electromagnetic transmissions in the microwave to radio and TV range. So, L has to be the estimated lifetime of that technology. For SETI’s technology, we’ll call this the Radio Age. On Earth, the Radio Age started about 100 years ago and has already fallen off due to technological advances such as the internet and satellite communication. So I argued, an L = 150 ± 50 years might be a more reasonable assumption for the Drake equation when considering the detection method of listening for radio signals.
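The point is easy to see numerically, since N in the Drake equation scales linearly with L. Here is a quick Python illustration; every parameter value below is an arbitrary guess chosen for the example, not a figure endorsed in this post:

```python
# Illustrative Drake-equation arithmetic. All parameter values are
# assumptions chosen for this example, not estimates from the article.
def drake_n(r_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* x fp x ne x fl x fi x fc x L (communicating ICs at any instant)."""
    return r_star * f_p * n_e * f_l * f_i * f_c * L

guesses = dict(r_star=2, f_p=0.9, n_e=0.5, f_l=0.5, f_i=0.1, f_c=0.5)

n_short = drake_n(**guesses, L=150)         # a ~150-year Radio Age
n_long = drake_n(**guesses, L=1_000_000)    # the casual newcomer's long L
print(n_short, n_long)  # N scales linearly with L
```

With identical guesses for every other factor, shrinking L from a million years to 150 collapses N by the same factor, which is the whole argument in miniature.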

At this point the discussion was quite intense! When I thought about an L equal to a few hundred years in a galaxy that continues to evolve over a 13-billion-year lifespan, the image that came to my mind was that of fireflies in the night. And that was the precursor for my Alien Civilization Detection or ACD simulation.

One can imagine electromagnetic or “radio” bubbles appearing randomly in time and space and growing in size over time. At any instant in time the bubble from an IC will have a radius equal to the speed of light times the amount of time since that IC first began broadcasting. These bubbles will continue to grow at the speed of light. When the IC stops broadcasting for whatever reason, the bubble will become hollow and the shell thickness will reflect the time duration of that IC’s Radio Age lifetime.
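In code, the bubble geometry just described reduces to a couple of lines. A minimal sketch, using units of years and light years so that the speed of light is 1 LY/yr by construction:

```python
# Minimal sketch of the radio-bubble geometry: outer radius grows at c
# from first broadcast; the bubble hollows out once broadcasting stops.
def bubble_radii(t_now, t_start, t_stop=None):
    """Return (inner_radius, outer_radius) in LY of an IC's radio bubble.

    t_start: year the IC began broadcasting
    t_stop:  year it stopped broadcasting (None while still on the air)
    """
    outer = max(0.0, t_now - t_start)                        # leading edge at c
    inner = 0.0 if t_stop is None else max(0.0, t_now - t_stop)
    return inner, outer

# A 150-year Radio Age viewed 1,000 years after first broadcast:
inner, outer = bubble_radii(t_now=1000, t_start=0, t_stop=150)
print(outer - inner)  # shell thickness equals the 150-year Radio Age
```

The shell thickness is always the broadcast duration, no matter how long ago the IC fell silent; only the shell's distance from home changes.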

If the age of our galaxy is compressed into one year, we on Earth have been “leaking” radio and television signals into space for only a small fraction of a second. And, considering the enormity of space and the fact that our “leakage” radiation has only made it to a few hundred stars out of the two to four hundred billion in our galaxy, one inevitably realizes there must be a significant synchronization problem that arises when ICs attempt to detect one another. So what does this synchronicity problem look like visually?

To answer this question my tasks became clear: dynamically generate and animate radio bubbles randomly in space and time, grow them at the speed of light at a highly accelerated rate in a compressed region of the galaxy, fade them over time to reflect inverse-square-law decay, and then analyze the scene for detections. No problem!!!

Using LiveCode, a modern derivative of HyperCard on steroids, I began my five-year project to simulate this problem scientifically. Using the Monte Carlo method, whereby randomly generated rings denoting EM radiation from ICs pop into existence in an 8,000 x 10,000 LY region of the galaxy* centered on our solar system at a rate of about 100 years per second, the firefly analogy came to life. The key to determining detection potential is to recognize that detection can only occur while a radiation bubble is passing over another IC that is actively listening. This is the synchronicity problem, and it is dramatically apparent when the simulation runs!

To be scientifically accurate and meaningful, some basic assumptions were required:

1. ICs will appear not only randomly in space, but also randomly in time.
2. ICs will inevitably transition into (and probably out of) a Radio/TV age during which they too will “leak” electromagnetic radiation into space.
3. The radio bubbles are assumed to be spherically homogeneous**.

To use the ACD simulation, the user chooses and adjusts parameters such as Max Range, Transmit and Listen times*** and N, the Drake equation estimate of the number of ICs in the galaxy at any given instant. During a simulation run, potential detections are tallied and the overall probability of detection is displayed.
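The core of such a simulation can be sketched in a few lines of Python. This is a toy re-implementation of the idea, not Guay's LiveCode program, and every parameter value below is an illustrative placeholder:

```python
import random

# Toy Monte Carlo sketch of the ACD concept. Units: years and light
# years, so c = 1 LY/yr. All parameter values are placeholders.
REGION_W, REGION_H = 10_000, 8_000    # LY, as in the ACD region
SIM_YEARS = 100_000                   # span over which ICs may appear
N_CIVS = 200                          # ICs appearing during that span
TRANSMIT, LISTEN = 150, 150           # Radio Age and listening durations

random.seed(1)  # reproducible run
civs = [(random.uniform(0, REGION_W), random.uniform(0, REGION_H),
         random.uniform(0, SIM_YEARS)) for _ in range(N_CIVS)]

detections = 0
for i, (ax, ay, a_t) in enumerate(civs):          # A transmits
    for j, (bx, by, b_t) in enumerate(civs):      # B listens
        if i == j:
            continue
        d = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
        # A's shell sweeps over B during [a_t + d, a_t + d + TRANSMIT];
        # B listens during [b_t, b_t + LISTEN]. Detection needs overlap:
        if a_t + d < b_t + LISTEN and b_t < a_t + d + TRANSMIT:
            detections += 1

print(detections, "potential detections among", N_CIVS, "civilizations")
```

Even this crude version makes the synchronicity problem visible: most transmitter/listener pairs simply miss each other in time, however many civilizations you scatter across the region.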

About two years ago, as the project continued to evolve, I became aware of Stephen Webb's encyclopedic book on the Fermi Paradox, If the Universe Is Teeming with Aliens … Where Is Everybody? This book was most influential in my thinking and in the way I shaped the existing version of the ACD simulation.


A snapshot of the main screen of the ACD simulation midway through a 10,000 year run.

A Webb review of the ACD simulation is available here:

And you can download it here at this Dropbox link:

Conclusions? The ACD simulation dramatically demonstrates that there is indeed a synchronicity problem that automatically arises when ICs attempt to detect one another. And for reasonable (based on Earth’s specifications) Drake equation parameter selections, detection potentials are shown to be typically hundreds of years apart. In other words, we can expect to search for a few hundred years before finding another IC in our section of the galaxy. When you consider Occam’s razor, is not this synchronicity problem the most logical resolution to the Fermi Paradox?


* The thickness of the Milky Way is small compared to its diameter, so for regions close to the galactic midplane we can approximate with a 2-dimensional model.

** Careful consideration has to be given to this last assumption. It is of course not accurate, in that the radiation from a typical IC will be composed of many different sources with widely varying parameters, as it is on Earth. But the bottom line is that a homogeneous distribution gives the best-case scenario for detection potential. An example of when to apply this thinking is to compare laser transmission with radio broadcast. Since a laser would presumably be highly directed and therefore more intense at greater distances, the user of the ACD simulation might choose a higher Max Range while realizing that pointing problems will make the detection potential much smaller than the ACD indicates. The ACD does not take this directly into consideration. Room for the ACD to grow?

*** One of the features of this simulation is that the user can make independent selections of both the transmit and listening times of ICs, whereas the Drake equation lumps them together in the lifetime parameter.

Scott Guerin

I grew up north of Milwaukee, Wisconsin, and was the kid in 5th grade who would draw a nuclear reactor on the classroom's chalkboard. My youthful designs were influenced by Voyage to the Bottom of the Sea, Lost in Space, everything NASA, and 2001: A Space Odyssey. In the mid-70s, I was a technical illustrator at the molecular biology laboratory at UW Madison and, after graduating with a fine arts degree, I went on to a 30-year career as an interpretive designer of permanent exhibits in science and history museums.

I began visually exploring SETI over two years ago in order to answer three questions: First, why is such a thought-provoking subject so often presented only in math and graphs, thereby limiting information to experts? Second, why is the Fermi Paradox a paradox? Third, what form might an interstellar “we are here” signaling technology take?

Using Sketchup, I built a simple galactic model to see what scenarios matched the current state of affairs: silence and absence. At a scale of 1 meter = 1 light year, I positioned Sol appropriately and randomly “dropped” representations of civilizations (I refer to them as CivObjects) into the model. Imagine dropping a cup full of old washers, nails, wires, and screws onto a flat, 10” plate and seeing if any happen to overlap with a grain-of-salt-sized solar system (and that speck is still ~10^5 too large).

The short answer is that they didn't overlap, and I've concluded that the synchronicity issue, combined with weak listening and looking protocols, is a strong answer to the paradox. When synchronicity is considered along with the sheer rarity of emitting civilizations (my personal stance), the silence makes even more sense.


For scale, the green area at lower right represents the Kepler star field if it were a ~6,000 LY diameter sphere. The solid discs represent currently emitting civilizations, the halos represent civilizations that have stopped emitting over time, and the lines and wedges represent directed communications. I sent this diagram to Paul and Marc at Centauri Dreams, who were kind enough to pass it on to several leading scientists, and they graciously, and quickly, replied with encouragement.

Curtis Charles Mead's 2013 Harvard dissertation “A Configurable Terasample-per-second Imaging System for Optical SETI,” George Greenstein's Understanding the Universe, and papers by Tarter and the Benfords, among others, were influential in my next steps. I realized the halos were unrealistic representations of a civilization's electromagnetic emissions, and that if you could see them from afar, they could be visualized as prickly, 3-dimensional sea-urchin-like artifacts with tight beams of powerful radar, microwave, and laser emissions emanating from a mushy sphere of less directional, weaker electromagnetic radiation.


From afar, Earth's EM halo is a lumpy, flattened sphere some 120 LY in radius, dating to the first radio experiments in the late 1890s. The 1974 Arecibo message toward M13 is shown being emitted at the 10 o'clock position.

From Tarter's 2001 paper: “At current levels of sensitivity, targeted microwave searches could detect the equivalent power of strong TV transmitters at a distance of 1 light year (the red sphere at center in the diagram), or the equivalent power of strong military radars to 300 ly, and the strongest signal generated on Earth (Arecibo planetary radar) to 3000 ly, whereas sky surveys are typically two orders of magnitude less sensitive. The sensitivity of current optical searches could detect megajoule pulses focused with a 10-m telescope out to a distance of 200 ly.”
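The spread in those distances is just the inverse-square law at work: for a fixed receiver sensitivity, detection range scales as the square root of the transmitter's effective radiated power. A quick sketch, anchoring on the TV-class transmitter detectable at ~1 LY from the quote above (the power ratio in the example is illustrative):

```python
import math

# Inverse-square law scaling: at fixed receiver sensitivity, detection
# range grows as the square root of effective radiated power (EIRP).
def detection_range(eirp_ratio, ref_range_ly=1.0):
    """Range (LY) at which a source eirp_ratio times the reference power
    is detectable, given the reference is detectable at ref_range_ly."""
    return ref_range_ly * math.sqrt(eirp_ratio)

# A transmitter radiating 90,000x the reference power is detectable
# at 300x the distance (the power ratio here is illustrative):
print(detection_range(9e4))  # 300.0
```

This is why a military radar reaches hundreds of light years while ordinary broadcast leakage fades within a handful: range grows only as the square root of power, so each factor of ten in reach costs a factor of a hundred in EIRP.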


In this speculative diagram, two civilizations “converse” across 70 LY. Mead's paper confirms that the aiming accuracy needed to correct for the proper motion of the stars, given a laser beam just a handful of AU wide at the distance illustrated, is within human grasp. The civilizations shown would most likely have been emitting EM for hundreds of years, so their raw EM halos are too large and diffuse to be shown in the diagram. The magenta blob represents the elemental EM “hum” of a civilization within a couple of LY, the green spikes represent tightly beamed microwaves for typical communications and radar, while the yellow spikes are lasers reaching out to probes, being used as light-sail boosters, and fostering long-distance, high-bandwidth communications. Each civilization has an EM fingerprint, affected by its system's ecliptic angle and rotation, persistence of ability, and types of technologies deployed — these equate to a unique CivObject.

In advance of achieving the goal of a fully parametric 3D model, I manually animated several kinds of civilizations and their interactions by imagining a CivObject as a variant of a Minkowski space-time cone. I move the cone's Z axis (time) through a galactic hypersurface to illustrate a civilization's history of passive and intentional transmission, as well as probes at sub-lightspeed. A CivObject's anatomy reveals the course of a civilization's history, and I like to think of them as distant cousins of Hari Seldon's prime radiant.

The anatomy of a CivObject allows arbitrary time scales to be visualized as a function of xy directionality, EM strength, and type of emission. Below is Earth's as a reference. Increasing transmission power is suggested by color.


I found it easy to animate transmissions but continue to struggle with visualizing periods of listening and the strength of receivers. Like Guay, I concluded that a potential detection can occur only when a transmission passes through a listening civilization. A “Conversing” model designed to actually simulate communication interactions needs to address both ends of “the line” with a full matrix of transmitter/receiver power ratios as well as sending/listening durations, directions, sensitivities, and intensities. In addition, a more realistic galactic model including 3d star locations, the GHZ, and interstellar extinction/absorption rates is needed.
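A parametric model of the kind described might start with one record per civilization. The sketch below is my own illustration of what such a structure could look like; the field names and the crude one-way link test are hypothetical, not Guerin's actual data model:

```python
import math
from dataclasses import dataclass, field

# Hypothetical parameter record for one end of a "Conversing" model link.
# Field names and the link test are illustrative, not Guerin's model.
@dataclass
class CivObject:
    x_ly: float                  # position in a flat 2D galaxy slice
    y_ly: float
    t_emerge: float              # year the civilization starts emitting
    transmit_years: float        # duration of intentional transmission
    listen_years: float          # duration of active listening
    tx_power_w: float            # effective radiated power
    rx_sensitivity_w_m2: float   # weakest detectable flux
    beam_directions: list = field(default_factory=list)  # aimed emissions

M_PER_LY = 9.461e15

def link_possible(a: CivObject, b: CivObject) -> bool:
    """Crude one-way check: can B's receivers detect A's transmitters?"""
    d_m = math.hypot(a.x_ly - b.x_ly, a.y_ly - b.y_ly) * M_PER_LY
    flux = a.tx_power_w / (4 * math.pi * d_m ** 2)  # inverse-square law
    return flux >= b.rx_sensitivity_w_m2
```

A full Conversing model would then evaluate this check in both directions, gated by the timing overlaps and beam geometries the text describes, plus extinction along the line of sight.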

And now for some sci-fi

A few months before KIC 8462852 was announced and Dyson swarms became all the rage, I noticed one of those old ventilators on top of a barn roof and thought that if a Kardashev II civilization scaled it up to roughly 1 AU in diameter, it would become a solar-powered, omni-directional signaling device capable of sending an “Intelligence was here” message across interstellar space. I called it a Dyson Shutter.

Imagine a star surrounded by a number of ribbon-like light sails connected at their poles. Each vane's stability, movement, and position are controlled by the angle of the sail relative to incoming photons from the central star. The shutter would be a high-tech, ultra-low-bandwidth, scalable construct. I have imagined that each sail, at the equator, would be no less than one Earth diameter wide, which is at the lower end of Kepler-grade detection.

Depending on the number constructed, the vanes could be programmed to shift into simple configurations such as Fibonacci and prime number sequences.



I imagine the Dyson Shutter remains in a stable message period for hundreds of rotations. Perhaps there are “services” for the occasional visitor; perhaps it has defenses against comets, incoming asteroids, or inter-galactic graffiti artists. Perhaps it is an intelligent being itself, but is it a lure, a trap, a collector, or a colleague? Is it possible Tabby's star is a Dyson Shutter undergoing a multi-year message reconfiguration?


The shutter’s poles are imagined to be filled with command and control systems, manufacturing facilities, spaceports, etc.


We hope that our work as presented here might inspire some of you to join the ranks of citizen scientists. There are many opportunities, and science needs the help. With today's access to information and digital tools, anyone with a little passion for their ideas and a lot of imagination and persistence can help communicate complex issues to the public and make contributions to science. We hope that our stories resonate with at least some of you. Please let us know what you think, and let's all push back on the frontiers of ignorance!


PanSTARRS: Digital Sky Survey Data Release

A 1.8-meter telescope at the summit of Haleakalā on Maui is the first instrument in use at the Pan-STARRS (Panoramic Survey Telescope & Rapid Response System) observatory. Pan-STARRS recently completed a digital survey of the sky in visible and infrared wavelengths that began in May of 2010, a project that surveyed the entire sky visible from Hawaii over a period of four years, scanning it 12 times in each of five filters. The result is a collection of 3 billion separate sources, including not just stars and galaxies but numerous transient, moving and variable objects. All told, we're dealing with about 2 petabytes of data.

Now we learn that data from the survey is being made available worldwide. Ken Chambers, director of the Pan-STARRS observatories, comments:

“The Pan-STARRS1 Surveys allow anyone to access millions of images and use the database and catalogs containing precision measurements of billions of stars and galaxies. Pan-STARRS has made discoveries from Near Earth Objects and Kuiper Belt Objects in the Solar System to lonely planets between the stars; it has mapped the dust in three dimensions in our galaxy and found new streams of stars; and it has found new kinds of exploding stars and distant quasars in the early universe.”

How heartening it is to see extensive information on more than 3 billion sources now becoming publicly available. The catalog of four years of observations is being rolled out in two phases, the first being the release of the ‘Static Sky,’ which presents the average of each of the observing epochs. For every object, in other words, we get an average value for its position, brightness and colors — it will also be possible to get the stack image in each of the observed colors, while galaxies will include further information. Next year, the plan is to release the full database giving information and images for each individual epoch.


Image: This compressed view of the entire sky visible from Hawaii by the Pan-STARRS1 Observatory is the result of half a million exposures, each about 45 seconds in length, taken over a period of 4 years. The shape comes from making a map of the celestial sphere, like a map of the Earth, but leaving out the southern quarter. The disk of the Milky Way looks like a yellow arc, and the dust lanes show up as reddish brown filaments. The background is made up of billions of faint stars and galaxies. If printed at full resolution, the image would be 2.4 kilometers long, and you would have to get close and squint to see the detail. Credit: Danny Farrow, Pan-STARRS1 Science Consortium and Max Planck Institute for Extraterrestrial Physics.

As Centauri Dreams readers know, our focus here has been predominantly on objects relatively near to the Sun — one of our purposes, after all, is to consider interstellar probes and their potential targets. On that score, we learn this from Thomas Henning (Max Planck Institutes for Astronomy, Heidelberg):

“Based on Pan-STARRS, researchers are able to measure distances, motions and special characteristics such as the multiplicity fraction of all nearby stars, brown dwarfs, and of stellar remnants like, for example white dwarfs. This will expand the census of almost all objects in the solar neighbourhood to distances of about 300 light-years.”

There is definitely an exoplanet component here. Henning continues:

“The Pan-STARRS data will also allow a much better characterization of low-mass star formation in stellar clusters. Furthermore, we gathered about 4 million stellar light curves to identify Jupiter-like planets in close orbits around cool dwarf stars in order to constrain the fraction of such extrasolar planetary systems.”

In terms of protecting our planet, Pan-STARRS devoted part of its four-year survey to searching for hazardous asteroids, an effort that proved so successful that following the end of the survey, NASA began using the telescope and its 1.4 Gigapixel camera (GPC1) for further asteroid investigations. Researchers believe that over 90 percent of near-Earth objects larger than 1 kilometer have already been found, making the current focus of the Near Earth Object program the discovery of objects larger than 140 meters. Clearly, Pan-STARRS can help.


Image: Contributions of various astronomical surveys to the discovery of near-Earth asteroids, showing totals for NEAs of all sizes. Credit: Alan B. Chamberlin/Jet Propulsion Laboratory/NASA.

From a much broader perspective, Pan-STARRS has mapped our galaxy at a level of detail that the MPIA says has never been achieved before, offering ‘a deep and global view’ of a significant fraction of the Milky Way’s plane and disk, areas that surveys generally avoid because of the complexity of these dense and dusty regions. In M31, the closest large neighboring galaxy, the survey has detected several microlensing events and numerous Cepheid variables. The next step, according to this MPIA news release, is to measure redshifts of galaxies and other cosmological objects to analyze the distribution of galaxies in three dimensions, data that can provide constraints on our standard cosmological model.

The data from the first part of the Pan-STARRS survey is being archived at the Space Telescope Science Institute (STScI) in Baltimore and can be accessed through MAST (Mikulski Archive for Space Telescopes) using the Pan-STARRS1 link.


Learning More about Outer System Planets

What kind of planets are most common in the outer reaches of a planetary system? It’s a tricky question because most of the data we’ve gathered on exoplanets has to do with the inner regions. Both transit and radial velocity studies work best with large planets near their stars. But a new gravitational microlensing study looks hard at outer system planets, finding that planets of Neptune’s mass are those most likely to be found in these icy regions.

It should be no surprise that gravitational microlensing has produced few planets, about 50 so far, compared to the thousands detected through transit studies and radial velocity methods. After all, microlensing relies upon alignments that are far more unusual than even the transit method, in which a planet crosses the face of its star as seen from Earth. In microlensing, astronomers look for rare alignments between a distant star and one much nearer.

Given the right alignment, the ‘bending’ of spacetime caused by the nearer star’s mass allows researchers to study changes in the brightness of the background star, which can be clues to the existence of a planet. Microlensing can see not just planets close to their host stars but those far distant from the primary. Moreover, as the new work points out, we can use microlensing to figure out the mass ratio of the planet to the host star, and in about 40 percent of events, we can measure the mass of the host star and planet themselves.
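The scale of such an event is set by the angular Einstein radius of the lens, and the planet reveals itself as a short perturbation at the planet-to-star mass ratio. As a back-of-envelope sketch using the standard point-lens formula (the lens mass and distances below are typical bulge-survey values chosen for illustration, not numbers from the paper discussed here):

```python
import math

# Back-of-envelope angular Einstein radius from the standard point-lens
# formula: theta_E = sqrt(4GM/c^2 * (D_s - D_l) / (D_l * D_s)).
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # m
RAD_TO_MAS = 206_264.8 * 1000   # radians to milliarcseconds

def einstein_radius_mas(m_lens_msun, d_lens_kpc, d_source_kpc):
    """Angular Einstein radius theta_E in milliarcseconds."""
    m = m_lens_msun * M_SUN
    d_l, d_s = d_lens_kpc * KPC, d_source_kpc * KPC
    theta_e = math.sqrt(4 * G * m / C ** 2 * (d_s - d_l) / (d_l * d_s))
    return theta_e * RAD_TO_MAS

# A 0.6 solar-mass lens at 4 kpc in front of a bulge source at 8 kpc:
print(round(einstein_radius_mas(0.6, 4, 8), 2), "mas")  # ~0.78 mas
```

A ring well under a milliarcsecond is far too small to resolve directly, which is why microlensing surveys work from the light curve of the magnified background star rather than from images of the ring itself.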

A team led by Daisuke Suzuki (NASA GSFC) identified 1474 microlensing events between 2007 and 2012, drawing on data from the Microlensing Observations in Astrophysics (MOA) project, a collaboration between Japanese and New Zealand researchers that alerted astronomers to 3300 potential microlensing events in this time period. The analysis also incorporates data from the Optical Gravitational Lensing Experiment (OGLE).

Suzuki and colleagues homed in on the frequency of planets compared to the mass ratio of planet and star and the distances between them. A typical planet-hosting star is about 60 percent of the mass of the Sun. Its typical planet is between 10 and 40 times the mass of the Earth. By comparison, Neptune is about 17 Earth masses, while Jupiter is 318 times as massive as the Earth. Cold Neptune-mass worlds are thus identified as the most common kinds of planets beyond the ‘snow line,’ the point in a planetary system beyond which water remained frozen during planetary formation. In our Solar System, the snow line is at about 2.7 AU, roughly the middle of the main asteroid belt.

The paper surveys stars toward the galactic bulge, where the chances of a microlensing alignment are highest. Says Suzuki:

“We’ve found the apparent sweet spot in the sizes of cold planets. Contrary to some theoretical predictions, we infer from current detections that the most numerous have masses similar to Neptune, and there doesn’t seem to be the expected increase in number at lower masses. We conclude that Neptune-mass planets in these outer orbits are about 10 times more common than Jupiter-mass planets in Jupiter-like orbits.”


Image: This graph plots 4,769 exoplanets and planet candidates according to their masses and relative distances from the snow line, the point where water and other materials freeze solid (vertical cyan line). Gravitational microlensing is particularly sensitive to planets in this region. Planets are shaded according to the discovery technique listed at right. Masses for unconfirmed planetary candidates from NASA’s Kepler mission are calculated based on their sizes. For comparison, the graph also includes the planets of our solar system. Credit: NASA’s Goddard Space Flight Center

So based on the MOA data, planets forming in the outer reaches of a planetary system are likely to be Neptunes. But remember the limitations of the data here — we have relatively few detected exoplanets, and in fact, only 22 planets (with a possible 23rd) show up in the 1474 MOA events. What’s heartening is how we are going to go about expanding that dataset.

Tightening up the constraints on mass and distance to the lens systems will ultimately allow us to measure what the paper calls the ‘microlensing parallax effect,’ determining the distance of the system with the help of space telescopes far from the Earth. From the paper:

The ultimate word on the statistical properties of planetary systems will be achieved from the space based exoplanet survey (Bennett & Rhie 2002) of the WFIRST (Spergel et al. 2015) mission, and hopefully also the Euclid (Penny et al. 2013) mission. The high angular resolution of these space telescopes will allow mass and distance determinations of thousands of exoplanets because it will be possible to detect the lens star and measure the lens-source relative proper motion with the high resolution survey data itself. This will give us the same comprehensive picture of the properties of cold exoplanets that Kepler is providing for hot planets.

WFIRST (Wide Field Infrared Survey Telescope) was formally designated as a NASA mission at the beginning of this year. To be launched in the mid-2020s, it will carry a 288 megapixel multi-band near-infrared camera and a coronagraph for the suppression of starlight. ESA’s Euclid mission, like WFIRST, has a gravitational microlensing component, with a launch date in late 2020. If we can use space-based resources like these to enrich our microlensing catalog, our understanding of the outer precincts of exoplanetary systems will surge.

The paper is Suzuki et al., “The Exoplanet Mass-Ratio Function from the MOA-II Survey: Discovery of a Break and Likely Peak at a Neptune Mass,” Astrophysical Journal Vol. 833, No. 2 (13 December 2016). Abstract / preprint.


A New Look at Ice on Ceres

Ceres, that interesting dwarf planet in the asteroid belt, is confirmed to be just as icy as we had assumed. In fact, a new study of the world, led by Thomas Prettyman (Planetary Science Institute), was the subject of a press conference yesterday at the American Geophysical Union fall meeting in San Francisco. Prettyman and team used data from the Dawn spacecraft’s Gamma Ray and Neutron Detector (GRaND) instrument to measure the concentrations of iron, hydrogen and potassium in the uppermost meter of Ceres’ surface.

Prettyman, who is principal investigator on GRaND, oversees an instrument that works by measuring the number and energy of gamma rays and neutrons coming from Ceres. The neutrons are the result of galactic cosmic rays interacting with the surface, some of them being absorbed while others escape. The number and kind of these interactions allows researchers to investigate surface composition. Hydrogen on Ceres is thought to be in the form of frozen water, allowing the researchers to study the global distribution of ice.

The result of the GRaND study: The elemental data show that the materials were processed by liquid water within the interior. The top layer of Ceres’ surface is hydrogen rich, with the higher concentrations found at mid- to high latitudes, a finding consistent with near surface water ice, with the ice table closest to the surface at the higher latitudes. Says Prettyman:

“On Ceres, ice is not just localized to a few craters. It’s everywhere, and nearer to the surface with higher latitudes. These results confirm predictions made nearly three decades ago that ice can survive for billions of years within a meter of the surface of Ceres. The evidence strengthens the case for the presence of near-surface water ice on other main belt asteroids.”


Image: This image shows dwarf planet Ceres overlaid with the concentration of hydrogen determined from data acquired by the gamma ray and neutron detector (GRaND) instrument aboard NASA’s Dawn spacecraft. The hydrogen is in the upper yard (or meter) of regolith, the loose surface material on Ceres. The color scale gives hydrogen content in water-equivalent units, which assumes all of the hydrogen is in the form of H2O. Blue indicates where hydrogen content is higher, near the poles, while red indicates lower content at lower latitudes. In reality, some of the hydrogen is in the form of water ice, while a portion of the hydrogen is in the form of hydrated minerals (such as OH, in serpentine group minerals). The color information is superimposed on shaded relief map for context. Credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/PSI.

But we have no solid ice layer here. Instead, Ceres’ surface appears to be a porous mixture of rocky materials, with ice filling the pores, as this Institute for Astronomy (University of Hawaii) news release makes clear. The GRaND findings show about 10 percent ice by weight.

Also interesting is that the elemental composition of Ceres differs from CI and CM carbonaceous chondrite meteorites, which represent some of the most primitive, undifferentiated meteorites we know (CI and CM are two of several different subgroupings within the carbonaceous chondrite family). These meteorites were also altered by water, but the GRaND data tell us that their parent body would have differed markedly from Ceres.

The researchers offer two explanations, the first being that large scale convection occurring within Ceres may have separated ice and rock components, leaving the surface with a different composition than the bulk of the object. The other possibility is that Ceres formed in a different location in the Solar System than the parent object of this class of meteorite.

A second paper on Ceres has also appeared, this one in Nature. It is the work of Thomas Platz (Max Planck Institute for Solar System Research, Göttingen) and colleagues, who focus on craters that are found in persistently shadowed regions. These ‘cold traps’ are cold enough (about 110 K) that little of their ice turns into vapor. Bright material found in some of these craters is thought to be ice, and Dawn’s infrared mapping spectrometer has indeed confirmed ice in at least one.

As is the case with the Moon and Mercury, ice in such cold traps is thought to be the result of impacting bodies, although solar wind interactions are also a possibility. Each of these bodies has only a small axial tilt, producing numerous permanently shadowed craters. “We are interested in how this ice got there and how it managed to last so long,” said co-author Norbert Schörghofer (University of Hawaii at Manoa). “It could have come from Ceres’ ice-rich crust, or it could have been delivered from space.”

But the comparison between what we find on Ceres and elsewhere in the Solar System reminds us how much we still have to learn about the process. From the paper:

The direct identification of water-ice deposits in PSRs [permanently shadowed regions] on Ceres builds on mounting evidence from Mercury and the Moon that PSRs are able to trap and preserve water ice. For the Moon, the abundance and distribution of cold-trapped ice is little understood. On Mercury, the cold traps are filled with ice, and the planet traps about the same fraction of exospheric water as Ceres, so either the PSRs on Ceres are not able to retain as much water ice as those on Mercury or the amount of available water is much lower.

The Prettyman paper is “Extensive water ice within Ceres’ aqueously altered regolith: Evidence from nuclear spectroscopy,” published online by Science 15 December 2016 (abstract). The Platz paper is “Surface water-ice deposits in the northern shadowed regions of Ceres,” published online by Nature Astronomy 15 December 2016 (abstract). Video of the press briefing at the AGU meeting can be accessed here.


Surviving the Journey: Spacecraft on a Chip

If Breakthrough Starshot can achieve its goal of delivering small silicon chip payloads to Proxima Centauri or other nearby stars, it will be because we’ve solved any number of daunting problems in the next 30 years. That’s the length of time the project’s leaders currently sketch out to get the mission designed, built and launched, assuming it survives its current phase of intense scrutiny. The $100 million that currently funds the project will go into several years of feasibility analysis and design to see what is possible.

That means scientists will work a wide range of issues, from the huge ground-based array that will propel the payload-bearing sails to the methods of communications each will use to return data to the Earth. Also looming is the matter of how to develop a chip that can act as all-purpose controller for the numerous observations we would like to make in the target system.

If the idea of a spacecraft on a chip is familiar, it’s doubtless because you’ve come across the work of Mason Peck (Cornell University), whose work on the craft he calls ‘sprites’ has appeared many times in these pages (see, for example, Sprites: A Chip-Sized Spacecraft Solution). Both Peck and Harvard’s Zac Manchester, who worked in Peck’s lab at Cornell, have been active players in Breakthrough Starshot’s choice of single-chip payloads and continue to advise the project.


Image: A small fleet of ‘sprites,’ satellites on a chip, as envisioned in low Earth orbit. Can single-chip spacecraft designs now be developed into payloads for an interstellar mission? Credit: Space Systems Design Studio.

Meanwhile, NASA itself has been working with the Korea Institute of Science and Technology (KAIST) on the design of single-chip spacecraft. A key issue, discussed at the International Electron Devices Meeting in San Francisco in early December, is how to keep such a chip healthy given the hazards of deep space. For Starshot, the matter involves not just the few minutes of massive acceleration (over 60,000 g’s) of launch from Earth orbit, but the 20 years of cruise time at 20 percent of the speed of light before reaching the target star.

The first part of the question seems manageable, as hardening electronics against huge accelerations is an area well studied by the military, so data are abundant. The cruise phase, though, opens up concerns about radiation. According to KAIST’s Yang-Kyu Choi, interstellar radiation can degrade performance through the accumulation of positively charged defects in the silicon dioxide depths of the chip. Such defects can produce anomalous current flow and changes to the operation of critical transistors. The matter of malfunctioning chips is discussed in this recent story in IEEE Spectrum.

At the San Francisco meeting, self-healing chips were the theme, drawing on work from the 1990s showing that heating could help radiation sensors recover their functionality. Combining this with work on flash memory from Taiwan’s Macronix International, an integrated device manufacturer in the Non-Volatile Memory (NVM) market, the new NASA study uses concepts developed at KAIST to make on-chip healing more efficient. From the IEEE story:

This study uses KAIST’s experimental “gate-all-around” nanowire transistor. Gate-all-around nanowire transistors use nanoscale wires as the transistor channel instead of today’s fin-shaped channels. The gate, the electrode that turns on or off the flow of charge through the channel, completely surrounds the nanowire. Adding an extra contact to the gate allows you to pass current through it. That current heats the gate and the channel it surrounds, fixing any radiation-induced defects.

It might seem natural to simply provide more shielding for the chip during the two decades of interstellar cruise, but shielding adds mass, a critical issue when trying to drive a payload to a significant fraction of the speed of light. Thus the self-healing alternative, which assumes potential damage but provides self-analysis of the problem and heat inside the chip to work the healing magic. We also gain from the standpoint of further miniaturization — at scales of tens of nanometers, nanowire transistors are significantly smaller than the kind of transistors on chips currently used in spacecraft, adding savings in chip size and weight.

According to the IEEE report, KAIST’s “gate-all-around” device is likely to see wide production in the early 2020s as it begins to replace the older FinFET (Fin Field Effect Transistor) technologies. From the standpoint of single-chip spacecraft, it’s heartening to learn that radiation repairs can be made over and over, with flash memory recovered up to 10,000 times. A scenario emerges in which a chip on an interstellar flight can be powered down, heated internally to restore full performance, and then restored to service.

Pondering interstellar performance for chips that weigh no more than a gram is cause for reflection. Within just a few years we’ve gone from the idea of massive fusion-driven designs like Project Daedalus to payloads smaller than smartphones. The idea invariably brings to mind Robert Freitas’ concept of a ‘needle’ probe that could be sent in swarms to nearby stars, loaded with nanotech assemblers that would construct scientific instruments and communications devices out of material they found in the destination system.

It wasn’t so long ago that former NASA administrator Dan Goldin was speaking of a probe as light as a Coke can, but the Freitas probe and Breakthrough Starshot go well beyond that. The trick here is not getting too far ahead of the curve of technological development. With a 30-year window, Starshot can anticipate breakthroughs that will solve some of its key challenges, but relying on the future to plug in a solution doesn’t always go as planned. Thus it’s heartening to see potential answers to the cruise problem already beginning to emerge.