Six decades of SETI have yet to produce a detection. Are there strategies we have missed? In today’s essay, Michael Hippke takes us into the realm of quantum communication, explaining how phenomena like ‘squeezed light’ can flag an artificial signal with no ambiguity. Quantum coherence, he argues, can be maintained over interstellar distances, and quantum methods offer advantages in efficiency and security that are compelling. Moreover, searches for such communications can be conducted with commercially available equipment. Hippke is a familiar face on Centauri Dreams, having explored topics from the unusual dimming of Boyajian’s Star to the detection of exomoons using what is known as the orbital sampling effect. He is best known for his Transit Least Squares (TLS) exoplanet detection method, which is now in wide use and has accounted for the discovery of ~100 new worlds. An astrophysics researcher at Sonneberg Observatory and visiting scholar for Breakthrough Listen at UC-Berkeley, Michael now introduces Quantum SETI.

by Michael Hippke

Almost all of today’s searches for extraterrestrial intelligence (SETI) are focused on radio waves. It would be possible to extend our search to include interstellar quantum communications.

Quite possibly, our Neanderthal ancestors around the bonfires of the Stone Age marveled at the night sky and scratched their heads. What are all these stars about? Are there other worlds out there which have equally delicious woolly mammoths? Much later, about 200 years ago, the great mathematician Carl Friedrich Gauß proposed to cut down large areas of Siberian forest, in the form of a triangle, to send a message to the inhabitants of the Moon. In the late 19th and early 20th centuries, great canals were built, including the Suez and Panama canals. Inspired by these engineering masterpieces, astronomers searched for similar signs of technology on other planets. The logic was clear: whatever the great human civilization could build, other civilizations would inevitably build as well.

Clearly, Martians must equally be in need of canals. Indeed, the Italian astronomer Giovanni Schiaparelli discovered “canali” on Mars in 1877. Other observers joined the effort, and Percival Lowell asserted that the canals exist and must be artificial in origin.

Something similar happened again a short time later when Guglielmo Marconi put the first radio into operation in December 1894. Just a few years later, Nikola Tesla searched for radio waves from Mars, and believed he had made a detection. It turned out to be a mistake, but the search for radio signals from space continued. The “Search for Extraterrestrial Intelligence,” or SETI for short, received a boost in 1960 from two publications in the prestigious journal Nature. For the first time, precise scientific descriptions were given for the frequencies and limits of interstellar communication using radio waves [https://www.nature.com/articles/184844a0] and optical light [https://www.nature.com/articles/190205a0]. Between 1960 and 2018, the SETI Institute recorded at least 104 experiments with radio telescopes [https://technosearch.seti.org/]. All have been unsuccessful so far, as have searches in the optical, X-ray, and infrared domains.

Photons? Neutrinos? Higgs bosons?

Particle physics radically changed our view of the world in the 20th century: It was only through the understanding of elementary particles that discoveries such as nuclear fission (atomic weapons, nuclear power plants) became possible. Of the 37 elementary particles known today in the Standard Model, several are suitable for an interstellar communication link. I examined the pros and cons of all relevant particles in a 2018 research paper [https://arxiv.org/abs/1711.07962]. The known photons (light particles) were the “winners”, because they are massless and therefore energetically favorable. In addition, they travel at light speed, can be focused very well, and can carry several bits of information per particle.

Photons are not only known as particles of visible light – they also appear across the electromagnetic spectrum, as radio waves at low energies and as X-rays or gamma rays at higher particle energies. In addition, there are other particles that can be more or less reasonably used for communication. For example, it has been demonstrated that neutrinos can be used to transmit data [https://arxiv.org/abs/1203.2847]. Neutrinos have the advantage that they effortlessly penetrate kilometer-thick rock. However, this is also one of their disadvantages: they are extremely difficult to detect, because they also penetrate (almost) every detector.

Incidentally, the particle that is the least suitable of all for long-distance communication is the Higgs boson. It was predicted by Peter Higgs in 1964, but was not observed until 2012 at the Large Hadron Collider (LHC) at CERN – a discovery that also earned a Nobel Prize.

The Higgs boson decays after only 10^-22 seconds. To keep it alive long enough to travel to the next star, it would have to be accelerated very strongly. Due to the Lorentz factor, its subjective time would then pass more slowly. In practice, however, this is impossible to achieve, because one would have to pump so much energy into the Higgs particle that it would become a black hole. It thus disqualifies itself as a data carrier.

Photons and quanta

Quanta, simply put, are discrete particles in a system that all have the same energy. For example, in 1905 Albert Einstein postulated that particles of light (photons) always carry multiples of a smallest amount of energy. This gave rise to the field of quantum mechanics, which describes effects at the smallest scales. The transition to the macroscopic, classical world is a grey area – quantum effects have also been demonstrated in fullerenes, which are spheres of 60 carbon atoms. So although quantum effects occur in all particles, it makes sense to focus on photons for interstellar communication because they are superior to other particles for this purpose.

Four advantages of quantum communication

1. Information efficiency

Classical communication with photons, over interstellar distances, can be well illustrated in the particle model. The transmitter generates a pulse of particles, and focuses them through a parabolic mirror into a beam whose minimum diameter is limited by diffraction. This means that the light beam expands over large distances.

For example, if an optical laser beam is focused through a one-meter telescope and sent across the four light years to Alpha Centauri, the light cone there is already about as wide as the distance from the Earth to the Sun. A receiver on a planet around Alpha Centauri therefore catches only a small fraction of the emitted photons; the rest fly past into the depths of space. On the other hand, photons are quite cheap to buy: you already get about 10^19 photons from a laser that shines with one watt for one second.
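
To make these numbers concrete, here is a small back-of-the-envelope sketch in Python (the one-meter aperture and the red laser wavelength come from the example above; everything else is an assumed round number):

```python
import math

# Illustrative assumed values (only the 1 m aperture and the "red" laser come from the text)
wavelength = 650e-9            # m, optical laser light
aperture = 1.0                 # m, transmitting telescope diameter
distance = 4.37 * 9.461e15     # m, roughly the distance to Alpha Centauri
au = 1.496e11                  # m, Earth-Sun distance

# Diffraction-limited beam divergence, half-angle ~ 1.22 * lambda / D
theta = 1.22 * wavelength / aperture
beam_width = 2 * theta * distance
print(f"Beam width at Alpha Centauri: ~{beam_width / au:.1f} AU")   # of order the Earth-Sun distance

# Photons in a 1 watt, 1 second laser pulse
h, c = 6.626e-34, 2.998e8
photons = 1.0 / (h * c / wavelength)
print(f"Photons per watt-second: ~{photons:.1e}")                   # a few times 10^18, of order 10^19
```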

Taken together, these effects make every photon precious in interstellar communication. One therefore wants to encode as many bits of information as possible into each transmitted photon. How can that be done?

Photons (without directional information) have three degrees of freedom: their arrival time, their energy (= wavelength or frequency), and their polarization. Based on this, an alphabet can be agreed upon, so that, for example, a photon arriving at time 11:37 with wavelength 650 nm (“red”) and polarization “left” corresponds to the letter “A”. Unfortunately, the number of bits that can be encoded per degree of freedom scales only logarithmically: 1024 modes result in 10 bits per photon. In practice, one still has to account for losses and noise, so that with this classical scheme it is rarely possible to transmit more than on the order of 10 bits per photon.

Quantum communication, however, offers the possibility to increase the information density. There are several ways to realize this, but a good illustration is based on the fact that one can “squeeze” light (more on this later). Then, for example, the time of arrival can be measured more accurately (at the expense of other parameters). Analytical models, and already some practical demonstrations, show that the information content can be increased by up to 50 percent. In our simple example, about 15 bits per photon could be encoded instead of only 10 in the classical case.
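
A toy calculation makes the scaling explicit (the mode count and the 50 percent figure are simply the illustrative numbers quoted above):

```python
import math

def bits_per_photon(n_modes: int) -> float:
    """Bits per photon if each photon can occupy one of n_modes distinguishable
    states (combinations of arrival time, wavelength and polarization)."""
    return math.log2(n_modes)

classical = bits_per_photon(1024)   # 1024 modes -> 10 bits, as in the text
squeezed = 1.5 * classical          # hypothetical ~50 percent gain from squeezing (upper end quoted above)
print(classical, squeezed)          # 10.0 15.0
```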

2. Information security

Encryption of sensitive data during data transmission is an important issue for us humans. Of course, we don’t know if this is the case for other civilizations. But it is plausible that future colonies on Mars (or Alpha Centauri…) will also want to encrypt their communications with each other and with Earth. In this respect, encryption is quite relevant for transmissions through space.

Today’s encryption methods are mostly based on mathematical one-way functions. For example, it is easy to multiply two large prime numbers. Without the secret key, however, you have to go the other way and calculate the two prime factors from the large number, which is much more difficult. Yet the security of this and similar methods rests “only” on the fact that no one has yet found an efficient method of calculation. In no case do we have a mathematical proof that such a calculation is impossible. There is always the danger that a clever algorithm will be found which cracks the encryption. Quantum computers could also be used in the future to attack some encryption methods.

In contrast, there is quantum cryptography. The best-known method uses quantum key distribution, which has already been demonstrated in practice over long distances, for example via satellite. This is based on quantum mechanics and is unbreakable as long as no mistake is made during transmission – and as long as no one disproves quantum mechanics.

3. Gate-keeping

If there really is a galactic Internet, how do we protect it from being spammed by uneducated civilizations? This problem already occupied Mieczysław Subotowicz, a Polish professor of astrophysics, who wrote in a 1979 technical paper that neutrino communication is “so difficult that an advanced civilization could intentionally communicate only through it with aliens of its own level of development”.

Now, as mentioned above, neutrino communications are very inefficient. It would be much more elegant and energy efficient to use photons instead. As an entry barrier, it seems plausible not to allow classical photons, but to require quantum communications. This would leave out young technological civilizations like ours, though we would have a good chance of joining in the next few decades.

4. Quantum computing

Konrad Zuse built the Zuse Z3, the first Turing-complete computer, in his Berlin apartment in 1941. This was a single computing machine. It took several decades until the first computers were connected (networked together) in 1969 with the ARPANET. This gave rise to the Internet, in which billions of computers of all kinds are connected today: PCs, cell phones, washing machines, etc. All these devices are classical computers exchanging classical information (bits) on classical paths (for example via photons in optical fibers).

In the future, quantum computers may gain importance because they can solve a certain class of problems much more efficiently. This could give rise to a “quantum Internet” in which quantum computers exchange “qubits,” or entangled quantum bits. These could be intermediate results of simulations, or even observational data that are later superimposed on each other [https://arxiv.org/abs/2103.07590].

Likewise, it is conceivable that quantum-based observational data and intermediate results will be exchanged over larger distances. This is when interstellar quantum communication comes into play. If distant civilizations also use quantum computers, their communications will consist of entangled particles.

Excursus: The (im)possible magic Pandora quantum box

The idea of using quantum entanglement to transmit information instantaneously (without loss of time) over long distances is a frequent motif in science fiction literature. For example, in the famous novel The Three-Body Problem by Chinese author Liu Cixin, the “Trisolarans” use quantum entangled protons to communicate instantaneously.

This method sounds too good to be true – and unfortunately it runs into three fundamental problems. The first is the impossibility of exchanging information faster than the speed of light. If that were possible, there would be a causality violation: one could transmit information before an event happens, creating paradoxes (“grandfather paradox” [https://arxiv.org/abs/1505.07489]). Second, quantum entanglement does not work this way: one cannot manipulate one of two entangled particles and thereby influence the state of its partner. As soon as one of the particles is manipulated, the entanglement is destroyed (the “no-communication theorem”).

Third, an information transfer without particles (no particle flies from A to B) is impossible. Information is always bound to mass (or energy) in our universe, and does not exist detached from it. There are still open questions here, for example when and how information that flew in with matter comes out of a black hole again. But this does not change the fact that the communication by quantum entanglement, and without particle exchange, is impossible.

But wait a minute – before we throw away the “magic box of the entangled photons”, we should examine the idea once more. For despite all the nonsense that is written about it, there is one genuinely sensible and physically undisputed application, known as “pre-shared entanglement” [https://arxiv.org/abs/quant-ph/0106052].

To perform this operation, we must first assume that we can entangle and store a large number of photons. This is not so easy: the current world record for a quantum memory preserves entanglement for only six hours. And even that requires considerable effort: it uses a ground-state hyperfine transition of europium ion dopants in yttrium orthosilicate, probed with optically detected nuclear magnetic resonance techniques [https://www.nature.com/articles/nature14025]. But it is conceivable that technological advances will make longer storage possible. Conditions during the interstellar journey are particularly favorable, because space is dark and cold, which slows decoherence caused by particle interactions.

So let’s assume such a quantum memory is available – what do we do with it? We take one half of the magic box on board a spaceship! The counterpart remains on Earth. Now the spaceship flies far away and wants to communicate home. The trick is not simply to send the bits of the message on photons to Earth, but first to superpose each classical signal photon with one (or more) of the stored entangled photons. The result is one classical photon per superposition, which is then sent “totally normally” to the receiver (for example, Earth). Upon arrival, the receiver opens their own magic box and brings their share of the entangled particles into superposition with it. This allows the original message to be reconstructed.

The advantage of this procedure is increased information content: The amount of information (in bits per photon) increases by the factor log2(M), where M is the ratio of the entangled to the signal photons. Even a very large magic box is therefore of limited use, because unfortunately log2(1024), for example, is only 10. Losses and interference (due to noise, for example) also have a negative effect on the amount of encodable information. Nevertheless, “pre-shared entanglement” is a method that can be considered, because it is physically accepted – in contrast to most other ideas in popular literature.
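
The diminishing returns are easy to see numerically; here is a quick sketch of the log2(M) gain quoted above, with arbitrary values of M:

```python
import math

def entanglement_gain(m: int) -> float:
    """The log2(M) gain per signal photon quoted in the text, where M is the
    ratio of stored entangled photons to signal photons. Losses and noise,
    which reduce this further, are ignored here."""
    return math.log2(m)

for m in (2, 1024, 2**20):
    print(m, entanglement_gain(m))   # even a million-to-one ratio yields only ~20
```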

Quantum communication in practice

But what does quantum communication look like in practice? Is there even a light source for it on Earth? Yes, for a few years now this has actually been the case! The Laser Interferometer Gravitational-wave Observatory (LIGO), famous for the first detection of gravitational waves from merging black holes, now uses “squeezed light” to sharpen its measurements. This is laser light traveling through a very precisely controlled crystal (an “OPO,” for Optical Parametric Oscillator), which converts one green photon into two entangled red photons, producing what is called a squeezed vacuum. This reduces phase uncertainty at the expense of amplitude fluctuations. And it is the former that matters: one would like to measure the arrival time of the photons very precisely in order to compare the length of the path with and without gravitational waves. The brightness of the photons is not important.
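
For orientation, the textbook description of a squeezed vacuum is that one quadrature’s noise drops below the vacuum level by a factor of e^(-2r) while the conjugate quadrature rises by e^(+2r), preserving the uncertainty product. A minimal sketch with arbitrary squeeze parameters (these are not LIGO’s operating values):

```python
import math

def squeezed_vacuum_variances(r: float):
    """Quadrature variances of a squeezed vacuum state relative to the vacuum
    (shot-noise) level: one quadrature is reduced by exp(-2r), its conjugate
    increased by exp(+2r), so the uncertainty product is preserved."""
    return math.exp(-2 * r), math.exp(2 * r)

for r in (0.35, 0.7):                       # arbitrary illustrative squeeze parameters
    low, high = squeezed_vacuum_variances(r)
    print(f"r={r}: {10 * math.log10(low):+.1f} dB / {10 * math.log10(high):+.1f} dB")
```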

Such squeezed light, with lower fluctuations than classical light, can also improve interstellar communication. How best to modulate the actual data onto it remains unresolved. Signal strength is also still low, with just a few watts of squeezed light in use at LIGO. By comparison, there are classical lasers in the megawatt range, so the development of quantum light sources lags several decades behind classical ones. But more powerful quantum light sources in the kilowatt range are already planned for next-generation gravitational wave detectors. This would also mark the entry threshold for meaningful interstellar quantum communication.

Detection of quantum communication

Entangled photons are also just photons – shouldn’t they already be detectable in optical SETI experiments anyway? In principle, yes – but for a single photon it is impossible to determine who or what generated it. If it falls on the detector at 11:37 a.m. with a wavelength of 650 nm (red), we cannot possibly say whether it came from a star or from the laser cannon of the Death Star.

However, a photon rarely comes alone. If we receive one thousand photons with 650 nm within one nanosecond from the direction of Alpha Centauri in our one-meter mirror telescope, then we can be sure that they do not come from the star itself (the star sends only about 32 photons of all wavelengths per nanosecond into our telescope). Classical optical SETI is based on this search assumption. It is thus very sensitive to strong laser pulses, but also very insensitive to broadband sources.
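
That order-of-magnitude figure for the stellar background can be checked with a rough estimate; the numbers below are assumed round values for a Sun-like star at the distance of Alpha Centauri, not those used in the text:

```python
import math

# Assumed round numbers: a roughly Sun-like star at the distance of Alpha
# Centauri, observed with a 1-meter telescope.
L_star = 3.8e26                      # W, about one solar luminosity
d = 4.3 * 9.461e15                   # m
area = math.pi * (1.0 / 2) ** 2      # m^2, collecting area of a 1 m aperture
E_photon = 3.6e-19                   # J, a typical visible-light photon

power = L_star / (4 * math.pi * d ** 2) * area
print(f"~{power / E_photon * 1e-9:.0f} stellar photons per nanosecond")   # a few tens
```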

Quantum SETI extends the search horizon by additional features. If we receive a group of photons, they no longer have to correspond to a specific wavelength, or arrive in a narrow time interval, for us to assume an artificial origin. Instead, we can check for quantum properties, such as the presence (or absence) of squeezed light. Indeed, there is no (known) natural process that produces squeezed light. If we were to receive it, that would be extremely interesting in any case. And there are indeed tests for squeezed light that can be done with existing telescopes and detectors. In the simplest case, one tests the intensity and its variance for a nonlinear (squared) correlation, which requires only a good CCD sensor [https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.113602].
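
In the same spirit, here is a minimal sketch of such a statistical test, fitting the variance of CCD photon counts against their mean and inspecting the quadratic term; this is an illustration only, not the analysis of the cited paper:

```python
import numpy as np

def quadratic_variance_term(counts, n_bins=20):
    """Bin a series of per-frame photon counts, then fit
    variance = a + b*mean + c*mean**2 across the bins. For coherent
    (Poissonian) light b ~ 1 and c ~ 0; a significant quadratic term would
    flag unusual photon statistics worth a closer look."""
    chunks = np.array_split(np.asarray(counts, dtype=float), n_bins)
    means = np.array([c.mean() for c in chunks])
    variances = np.array([c.var(ddof=1) for c in chunks])
    c2, c1, c0 = np.polyfit(means, variances, deg=2)
    return c2

# Simulated laser-like (Poissonian) frames with a slowly drifting intensity
rng = np.random.default_rng(1)
frames = rng.poisson(lam=np.linspace(50, 400, 20000))
print(quadratic_variance_term(frames))   # close to zero, as expected for classical coherent light
```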

There are numerous other tests for quantum properties of light that are applicable to starlight. For faint sources from which only a few photons are received, one can measure their temporal separation. Chaotic starlight is temporally clustered (“bunched”), so it is very likely to reach us in small groups. Classical coherent light, i.e. laser light, is much more uniform. For light with photon “antibunching,” the spacing between successive photons is, in the extreme case, identical – a perfectly regular stream. This quantum mechanical effect can never occur in natural light sources, and is thus a sure sign of a technical origin. The technique is used from time to time because it is useful for determining stellar diameters (“intensity interferometry”).
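
The standard statistic behind such measurements is the second-order correlation g2(0): roughly 2 for chaotic (thermal) light, 1 for coherent laser light, and below 1 for antibunched light. A minimal estimator from photon arrival times might look like the sketch below (in a real measurement the bins must be short compared to the coherence time):

```python
import numpy as np

def g2_zero(arrival_times, bin_width):
    """Estimate g2(0) = <n(n-1)> / <n>^2 from photon arrival timestamps
    counted in bins of width bin_width. Chaotic (thermal) light gives ~2,
    coherent laser light ~1, antibunched light falls below 1."""
    t = np.asarray(arrival_times, dtype=float)
    edges = np.arange(t.min(), t.max() + bin_width, bin_width)
    n, _ = np.histogram(t, bins=edges)
    return np.mean(n * (n - 1.0)) / np.mean(n) ** 2

# Poissonian (laser-like) arrivals as a sanity check: g2(0) ~ 1
rng = np.random.default_rng(0)
times = np.cumsum(rng.exponential(scale=1.0, size=100_000))
print(g2_zero(times, bin_width=5.0))
```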

For a few stars we can already deduce from existing data that their light is of natural origin: Arcturus, Procyon and Pollux [https://academic.oup.com/mnras/article/472/4/4126/4344853]. In the future, however, the method can be applied to a large number of “strange” objects to test them for an artificial origin: impossible triple stars [https://academic.oup.com/mnras/article/445/1/309/988488], hypervelocity globular clusters [https://iopscience.iop.org/article/10.1088/2041-8205/787/1/L11], or generally all interesting objects listed in the “Exotica” catalog by Brian Lacki (Breakthrough Listen) [https://arxiv.org/abs/2006.11304].

Current status and outlook

The idea of extending SETI to quantum effects is still quite new. However, we can fall back on known search procedures and need adapt them only slightly. Questionable light sources can thus be checked effectively for an artificial origin in the future. It will be interesting to see what the next observations show when we ask the question: “Dear photon, are you artificially produced?”

The paper is Hippke, “Searching for interstellar quantum communications,” in press at the Astronomical Journal (preprint). See also the video “Searching for Interstellar Quantum Communications,” available at https://www.youtube.com/watch?v=Kwue4L8m2Vs.


Mapping the Boundary of the Heliosphere

Between the Solar System and interstellar space is a boundary layer called the heliosheath. Or maybe I should define this boundary as being between the inner, planetary part of the Solar System and interstellar space. After all, we consider the Oort Cloud as part of our own system, yet it begins much further out. Both Voyagers have crossed the region where the Sun’s heliosphere ends and interstellar space begins, while they won’t reach the Oort Cloud, by some estimates, for another 300 years.

The outer boundary itself is called the heliopause, the place where the outflowing solar wind of protons, electrons and alpha particles (two protons and two neutrons tightly bound) encounters what we can call the interstellar wind, itself pushing up against the heliosphere and confining the solar wind-dominated region to a bubble. We now learn that this boundary region has been mapped, showing interactions at the interface.

A paper describing this feat has now appeared, with Dan Reisenfeld (Los Alamos National Laboratory) as lead author. Says Reisenfeld:

“Physics models have theorized this boundary for years. But this is the first time we’ve actually been able to measure it and make a three-dimensional map of it.”

Image: A diagram of our heliosphere. For the first time, scientists have mapped the heliopause, which is the boundary between the heliosphere (brown) and interstellar space (dark blue). Credit: NASA/IBEX/Adler Planetarium.

Reisenfeld and team used data from IBEX, the Interstellar Boundary Explorer satellite, which orbits the Earth but detects energetic neutral atoms (ENAs) from the zone where solar wind particles collide with those of the interstellar wind. Reisenfeld likens the process to bats using sonar, with IBEX using the solar wind as the outgoing signal and mapping the return signal, which varies depending on the intensity of the solar wind striking the heliosheath. Changes in the ENA count trigger the IBEX detectors.

“The solar wind ‘signal’ sent out by the Sun varies in strength, forming a unique pattern,” adds Reisenfeld. “IBEX will see that same pattern in the returning ENA signal, two to six years later, depending on ENA energy and the direction IBEX is looking through the heliosphere. This time difference is how we found the distance to the ENA-source region in a particular direction.”
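
The geometry behind this ‘sonar’ can be sketched with a toy calculation: the measured lag is the outbound solar wind travel time plus the ENA return time, so assumed speeds convert a lag into a distance. The speeds below are illustrative round numbers, not values from the paper:

```python
AU_KM = 1.496e8       # kilometers per astronomical unit
YEAR_S = 3.156e7      # seconds per year

def ena_source_distance_au(lag_years, v_solar_wind_kms=450.0, v_ena_kms=600.0):
    """Distance to the ENA source region from the lag between the outgoing
    solar-wind 'signal' and the returning ENA signal:
    lag = d / v_solar_wind + d / v_ena  =>  d = lag / (1/v_sw + 1/v_ena).
    The speeds are illustrative; real ENA speeds depend on energy."""
    d_km = lag_years * YEAR_S / (1.0 / v_solar_wind_kms + 1.0 / v_ena_kms)
    return d_km / AU_KM

for lag in (2.0, 4.0, 6.0):
    print(f"{lag:.0f} yr lag -> ~{ena_source_distance_au(lag):.0f} AU")
```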

The IBEX data cover a complete solar cycle from 2009 through 2019. We learn that the minimum distance from the Sun to the heliopause is about 120 AU in the direction facing the interstellar wind, while in the opposite direction, we see a tail that extends to at least 350 AU, which the paper notes is the distance limit of the measurement technique. The asymmetric shape is striking. From the paper’s abstract:

As each point in the sky is sampled once every 6 months, this gives us a time series of 22 points macropixel^-1 on which to time-correlate. Consistent with prior studies and heliospheric models, we find that the shortest distance to the heliopause, d_HP, is slightly south of the nose direction (d_HP ~ 110–120 au), with a flaring toward the flanks and poles (d_HP ~ 160–180 au).

Animation: The first three-dimensional map of the boundary between our solar system and interstellar space—a region known as the heliopause. Credit: Reisenfeld et al

The data make it clear that interactions between the solar wind and the interstellar medium occur over distances much larger than the size of the Solar System. It’s also clear that because the solar wind is not steady, the shape of the heliosphere is ever changing. A ‘gust’ of solar wind causes the heliosphere to inflate, with surges of neutral particles along its outer boundary, while lower levels of solar wind cause a contraction that is detected as a concurrent diminution in the number of neutral particles.

IBEX has been a remarkably successful mission, with a whole solar cycle of observations now under its belt. As we assimilate its data, we can look forward to IMAP — the Interstellar Mapping and Acceleration Probe, which is scheduled to launch in late 2024 and should enable scientists to extend the solid work IBEX has begun.

The paper is Reisenfeld et al., “A Three-dimensional Map of the Heliosphere from IBEX,” Astrophysical Journal Supplement Series Vol. 254, No. 2 (2021) Abstract. The paper is part of a trio of contributions entitled A Full Solar Cycle of Interstellar Boundary Explorer (IBEX) Observations, available here.


Brown Dwarfs & Rogue Planets as JWST Targets

About 1,000 light years away in the constellation Perseus, the stellar nursery designated NGC 1333 is emerging as a priority target for astronomers planning to use the James Webb Space Telescope. Brown dwarfs come into play in the planned work, as do the free-floating ‘rogue’ planets we discussed recently. For NGC 1333 is a compact, relatively nearby target, positioned at the edge of a star-forming molecular cloud. It’s packed with hundreds of young stars, many of them hidden from view by dust, a venue in which to observe star formation in action.

Hoping to learn more about very low mass objects, Aleks Scholz (University of St Andrews, UK) lays out plans for using JWST to chart the distinctions between objects that emerge out of gravitational collapse of gas and dust clouds, and objects that grow through accretion inside a circumstellar disk. Says Scholz:

“The least massive brown dwarfs identified so far are only five to 10 times heftier than the planet Jupiter. We don’t yet know whether even lower mass objects form in stellar nurseries. With Webb, we expect to identify cluster members as puny as Jupiter for the first time ever. Their numbers relative to heftier brown dwarfs and stars will shed light on their origins and also give us important clues about the star formation process more broadly.”

Image: Scientists will use Webb to search the nearby stellar nursery NGC 1333 for its smallest, faintest residents. It is an ideal place to look for very dim, free-floating objects, including those with planetary masses. Credit: NASA/JPL-Caltech/R. A. Gutermuth (Harvard-Smithsonian CfA).

Flying aboard JWST is an instrument called the Near Infrared Imager and Slitless Spectrograph (NIRISS), which Scholz and colleagues will use to analyze the temperature and composition of low-mass objects like these. It is the absorption signatures of particular molecules, especially water and methane, that will be critical for the work. The advantage of the NIRISS instrument is that it can provide simultaneous spectrographic data on dozens of objects, shortening and simplifying the observational task. One of Scholz’ team, Ray Jayawardhana (Cornell University), has been involved in JWST instrumentation since 2004, and was active in the design and development of NIRISS.

Unable to sustain hydrogen fusion, a brown dwarf may have a mass between 1% and 8% that of the Sun. Most light emitted by these objects is in the infrared, and the already tricky targets are at the top of the size range in this study. Investigating free-floating planets takes us to another level, and even with that in mind, the distinction between a brown dwarf and a giant planet can be blurry. Koraljka Muzic (University of Lisbon), also on Scholz’ team, explains:

“There are some objects with masses below the 10-Jupiter mark freely floating through the cluster. As they don’t orbit any particular star, we may call them brown dwarfs, or planetary-mass objects, since we don’t know better. On the other hand, some massive giant planets may have fusion reactions. And some brown dwarfs may form in a disk.”

Looking through Scholz’ publication list, I noticed a recent paper (“Size and structures of disks around very low mass stars in the Taurus star-forming region” — citation below) that notes the challenge to planet formation models posed by the structure of disks around such stars.

In particular, several giant planets have been found around brown dwarfs, leaving open the question of whether they formed as binary companions or as planets. If the latter, models of planetesimal accretion are hard pressed to explain the process in this environment. The movement of dust presents a problem:

Different physical processes lead to collisions of particles and their potential growth, such as Brownian motion, turbulence, dust settling, and radial drift… All of these processes have a direct or indirect dependency on the properties of the hosting star, such as the temperature and mass. For instance, from theoretical calculations, settling and radial drift are expected to be more efficient in disks around VLMS [Very Low Mass Stars] and BDs [Brown Dwarfs], with BD disks being 15–20% flatter and with radial drift velocities being twice as high or even more in these disks compared to T-Tauri disks…. With radial drift being a more pronounced problem in disks around BDs and VLMS, it is still unknown how this barrier of planet formation is overcome in these environments where the disks are more compact, colder, and have a lower mass.

The paper on the Taurus star-forming region draws on data from ALMA (Atacama Large Millimeter/submillimeter Array), and notes the problems that we can hope JWST will alleviate:

Detection rate of substructures: millimeter dust substructures were directly detected in only 50% of the targets in our sample. Our results suggest that the detection of substructures in disks around VLMS is limited by angular resolution and sensitivity, since the dust radial extent is very small and these disks are also very faint. Deep, high angular resolution observations over a non-brightness biased sample of VLMS should confirm the ubiquity of substructures in these disks.

This is going to be an exciting area of research. As the paper points out, for every ten stars that form in our galaxy, somewhere between two and five brown dwarfs also form, and we already know that low-mass M-dwarfs account for as much as 75 percent of the Milky Way’s stars. When massive objects form around or near brown dwarfs, we are challenged to adjust our models of interactions within the disk and re-consider models of gravitational collapse. Interesting brown dwarf issues await JWST if we can just get it into operation.

The Scholz paper cited above is “Size and structures of disks around very low mass stars in the Taurus star-forming region,” Astronomy & Astrophysics Vol. 645, A139 (January 2021). Abstract.


NEO Surveyor: Proposed Asteroid Surveillance Mission

Near-Earth Object Surveyor is a proposed space telescope working at infrared wavelengths, an instrument that just completed a successful mission review and now moves on to the next phase of mission development. In NASA parlance, the upcoming Key Decision Point-B moves into Preliminary Design territory. Getting a spacecraft from concept to flight is a long process, but let’s back out to the broader picture.

Planetary defense is all about finding objects that could impact the Earth with serious consequences. That means setting size targets, and on that score, we’re making progress. In 2010, NASA announced that it had identified 90 percent of all Near Earth Objects larger than 1,000 meters. That moved us to the next target, NEOs larger than 140 meters in size, a goal set by the National Aeronautics and Space Administration Act of 2005. JPL now says about 40% of NEOs within this size range have been identified.

So with this work in progress, what does NEO Surveyor bring to the table? For one thing, it makes it possible to discover asteroids on dangerous trajectories much faster than current methods allow, by including objects that could approach the Earth from directions close to the Sun, a blind spot for ground-based observatories. Amy Mainzer is survey director for NEO Surveyor at the University of Arizona:

“By searching for NEOs closer to the direction of the Sun, NEO Surveyor would help astronomers discover impact hazards that could approach Earth from the daytime sky. NEO Surveyor would also significantly enhance NASA’s ability to determine the specific sizes and characteristics of newly discovered NEOs by using infrared light, complementing ongoing observations being conducted by ground-based observatories and radar.”

Image: NEO Surveyor is a new mission proposal designed to discover and characterize most of the potentially hazardous asteroids that are near the Earth. Credit: NASA/JPL-Caltech.

It’s worth remembering that while there are currently no impact threats in the catalog for this century, unknown objects still pose problems. Nobody tracked the Chelyabinsk impactor of 2013, reminding us of the dangers of complacency and the need for better sensors, like those NEO Surveyor would deploy in the infrared. The Chelyabinsk object was about 17 meters in size, well below what we are currently cataloging.

But we continue to make progress. Mike Kelley, a NEO Surveyor program scientist at NASA headquarters, believes the spacecraft could bring the catalog of 140-meter objects to 90 percent completion within ten years of launch (in 2026, if NEO Surveyor continues to move on track).

Meanwhile, we should keep in mind missions further along in the pipeline. The Double Asteroid Redirection Test (DART) mission is up for launch later this year. This one is about active planetary defense, with the plan of using a kinetic impactor to change an asteroid’s trajectory. The target is a binary near-Earth asteroid called (65803) Didymos; more specifically, DART will hit Didymos’ moon Dimorphos head on in the fall of 2022.

Image: Illustration of how DART’s impact will alter the orbit of Dimorphos (formerly called “Didymos B”) about Didymos. Telescopes on Earth will be able to measure the change in the orbit of Dimorphos to evaluate the effectiveness of the DART impact. Credit: NASA/JPL.

Interestingly, about one sixth of the known near-Earth asteroid (NEA) population are binary or multiple-body systems. Didymos and Dimorphos are separated by about one kilometer, with the 160-meter moon tidally locked to the 780-meter primary. Let’s also note the international aspects of DART, for the mission will work hand in glove with an Italian CubeSat called LICIACube (Light Italian CubeSat for Imaging of Asteroids) that will observe the impact ejecta, while the European Space Agency’s Hera mission will make a post-impact survey several years after the event.

Asteroid threat mitigation is indeed a global concern, but we’re beginning to experiment with deflection strategies using actual missions. The mission page for DART explains the plan this way:

The DART demonstration has been carefully designed. The impulse of energy that DART delivers to the Didymos binary asteroid system is low and cannot disrupt the asteroid, and Didymos’s orbit does not intersect Earth’s at any point in current predictions. Furthermore, the change in Dimorphos’s orbit is designed to bring its orbit closer to Didymos. The DART mission is a demonstration of capability to respond to a potential asteroid impact threat, should one ever be discovered.

We can hope we’ll never have to use the DART strategy — or others that are under active consideration — to adjust the trajectory of a major impactor, but we obviously need to have the tools available just in case. The need to conduct such tests and to maintain active surveillance as a means of planetary defense is a driver for space technologies we shouldn’t overlook. The capability of adjusting orbits much further from home is a spur toward exploration and surveillance throughout the system.


A Visualization of Galactic Settlement

When the question of technosignatures at Alpha Centauri came up at the recent Breakthrough Discuss conference, the natural response was to question the likelihood of a civilization emerging around the nearest stars to our own. We kicked that around in Alpha Centauri and the Search for Technosignatures, focusing on ideas presented by Brian Lacki (UC-Berkeley) at the meeting. But as we saw in that discussion, we don’t have to assume that abiogenesis has to occur in order to find a technosignature around any particular star.

Ask Jason Wright (Penn State) and colleagues Jonathan Carroll-Nellenback and Adam Frank (University of Rochester) as well as Caleb Scharf (Columbia University), whose analysis of galaxies in transition has now produced a fine visual aid. Described in a short paper in Research Notes of the AAS, the simulation makes a major point: If civilizations last long enough to produce star-crossing technologies, then technosignatures may be widespread, found in venues across the galaxy.

The simulation depicts the expansion of a technological civilization through the Milky Way, created along lines previously described in the literature by the authors (citation below). What we are looking at is the transition between a Kardashev Type II civilization (here defined as a species using its technology in a significant fraction of the space around the host star), and a Type III, which has spread throughout the galaxy. Wright has argued in earlier work that, contra Sagan and others, this might be a fast process considering the motions of stars themselves, which would overcome the inertia of slower growing settlements and boost expansion rates.

Image: This is Figure 1 from the paper. Caption: A snapshot of the animation showing the settlement of the galaxy. White points are unsettled stars, magenta spheres are settled stars, and white cubes represent a settlement ship in transit. The spiral structure is due to galactic shear as the settlement wave expands. The full, low-resolution video is available in the HTML version of this research note, and a high resolution version can be found archived at ScholarSphere (see footnote 7). Credit: Wright et al.

And here is the animation, also available at https://youtu.be/hNMgtRf0GOg.

Issues like starship capabilities and the lifetime of colonies come into play, but the striking thing is how fast galactic settlement occurs and how the motions of stars factor into the settlement wave. Naturally, the parameters are everything, and they’re interesting:

  • Ships are launched no more frequently (from both the home system and all settlements) than every 0.1 Myr — every 100,000 years;
  • Technology persists in a given settlement for 100 million years before dying out;
  • Ship range is roughly 3 parsecs, on the order of 10 light years;
  • Ship speeds are on the order of 10 kilometers per second; in other words, Voyager-class speeds. “We have chosen,” the authors say, “ship parameters at the very conservative end of the range that allows for a transition to Type iii.”

All told, the simulation covers 1 billion years, about which the authors say:

…it shows how rapidly expansion occurs once the settlement front reaches the galactic bulge and center. The speed of the settlement front depends strongly on the ratio of the maximum ship range to the average stellar separation. Here, we deliberately set this ratio to near unity at the stellar density of the first settlement, so the time constant on the settlement growth starts out small but positive. Eventually, the inward-moving part of the front encounters exponentially increasing stellar densities and accelerates, while the outward-moving part stalls in the rarer parts of the galaxy. Note that at around 0:33 a halo star becomes settled, and at 0:35 it settles a disk star near the top of the movie and far from the other settlements. This creates a second settlement front that merges with the first…
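
A back-of-the-envelope version of that settlement-front speed, using the parameters listed above, is easy to write down (a toy estimate only; it ignores the stellar motions and density gradient that drive the behavior described in the quote):

```python
PC_KM = 3.086e13      # kilometers per parsec
MYR_S = 3.156e13      # seconds per million years

def front_speed_pc_per_myr(ship_range_pc=3.0, ship_speed_kms=10.0, wait_myr=0.1):
    """Crude settlement-front speed: each hop of length ship_range takes the
    travel time at ship_speed plus the wait for the next launch."""
    travel_myr = ship_range_pc * PC_KM / ship_speed_kms / MYR_S
    return ship_range_pc / (travel_myr + wait_myr)

v = front_speed_pc_per_myr()
print(f"~{v:.1f} pc/Myr, so ~{15_000 / v / 1_000:.1f} Gyr to cross ~15 kpc unaided")
```

Even this crude hop-by-hop estimate lands within a factor of a few of the simulation’s billion-year timescale; stellar motions and the dense galactic center make up the difference.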

It comes as no surprise that the central regions of galaxies, thick with stars, are places that favor interstellar migration. Can a technological culture survive against ambient conditions in a galactic bulge? If so, these regions are logical SETI targets, and perhaps the most likely to yield a technosignature. The idea has synergy with other observations we are already interested in making, as for example studies of the supermassive black hole at galactic center.

So even slow — very slow — ships will fill a galaxy.

The paper is Wright et al., “The Dynamics of the Transition from Kardashev Type II to Type III Galaxies Favor Technosignature Searches in the Central Regions of Galaxies,” Research Notes of the AAS Vol. 5, No. 6 (June 2021). Abstract. The 2019 paper is Carroll-Nellenback et al., “The Fermi Paradox and the Aurora Effect: Exo-civilization Settlement, Expansion, and Steady States,” Astronomical Journal Vol. 158, No. 3 (2019). Abstract. This earlier paper is a storehouse of references and insights into the likelihood of interstellar settlement and spread.


Liquid Water on a Free Floating Planet’s Moon?

As we learn more about how planetary systems form, it’s becoming accepted that a large number of planets are being ejected from young systems because of their interactions with more massive worlds. I always referred to these as ‘rogue planets’ in previous articles on the subject, but a new paper from Patricio Javier Ávila (University of Concepción, Chile) and colleagues makes it clear that the term Free Floating Planet (FFP) is now widespread. A new acronym for us to master!

There have been searches to try to constrain the number of free floating planets, though the suggested ranges are wide. Microlensing seems the best technique, as it can spot masses we cannot otherwise see through their effect on background starlight. The estimates from such surveys come in at around 2 Jupiter-mass planets and 2.5 terrestrial-class rocky worlds per star that have been flung into the darkness. This is a vast number of planets, but we have to be wary of mass uncertainties, as the cut-off between planet and brown dwarf (usually around 13 Jupiter masses) comes into play.
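
To put “a vast number” in rough perspective (the star count is an assumed round figure for the Milky Way; the per-star rates are the estimates just quoted):

```python
stars = 2e11                          # assumed round number of stars in the Milky Way
free_floaters = stars * (2.0 + 2.5)   # ~2 Jupiter-mass + ~2.5 terrestrial worlds per star
print(f"~{free_floaters:.0e} free floating planets")   # of order a trillion
```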

Image: An artist’s conception of a free floating planet. Credit: JPL/Caltech.

Any chance for life on a world like this? It’s hard to see how unless it’s something exotic indeed, but it’s Friday, so let’s play around with the idea. A major paper on rogue worlds is a 1999 discussion in Nature by David Stevenson (Caltech), which assumes a hydrogen-rich atmosphere. I’m just going to pull this out of the abstract before moving on to the Ávila paper:

Pressure-induced far-infrared opacity of H2 may prevent these bodies from eliminating internal radioactive heat except by developing an extensive adiabatic (with no loss or gain of heat) convective atmosphere. This means that, although the effective temperature of the body is around 30 K, its surface temperature can exceed the melting point of water. Such bodies may therefore have water oceans whose surface pressure and temperature are like those found at the base of Earth’s oceans. Such potential homes for life will be difficult to detect.

To say the least. Let’s also note a later paper by Steinn Sigurðsson and John Debes showing that among terrestrial-class planets ejected from their stars, a good number may retain a lunar-sized moon. Citations for both these papers are below.

But let’s think bigger. Ávila and colleagues go after Jupiter-sized worlds with large, terrestrial-sized moons (far larger than any we see in our Solar System, where Ganymede, larger than Mercury but much smaller than Earth, reigns supreme). They model the chemical composition and evolution of CO2 and water in an attempt to discover the kind of atmosphere that would allow liquid water on the surface. CO2 is found to produce more effective atmospheric opacity (governing atmospheric absorption) than Stevenson’s choice of molecular hydrogen.

From the paper:

…to the best of our knowledge, there are no detailed models of the chemical evolution of the atmosphere of a moon orbiting an FFP. Within this context, we introduce here an atmospheric model to tackle this limitation. We assume that in the absence of radiation from a companion star, the tidal and the radiogenic heating mechanisms represent the main sources of energy to maintain and produce an optimal range of surface temperatures.

The authors simulate the atmosphere of an Earth-sized moon in an eccentric orbit around a gas giant, analyzing its thermal structure and determining the mechanisms that can keep it warm. The assumption is that carbon dioxide accounts for 90% of the moon’s atmosphere. The model relies on radiogenic heating along with tidal factors as the main energy sources while invoking an atmosphere under changing conditions of cosmic ray ionization, chemistry, pressure and temperatures.

In a setting like this, the cosmic-ray ionization rate (CRIR) drives chemistry in the atmosphere. A bit more on this:

Due to the absence of impinging radiation, the time-scale of water production is driven by the efficiency of cosmic rays in penetrating the atmosphere. Higher CRIRs reduce the water formation time-scale when compared to low-CRIR models, implying that they play a key role in the chemical evolution, by enhancing the chemical kinetics. However, due to the attenuation of cosmic rays, in the lower layers of the atmosphere, the water production is also affected by the density structure, that determines the integrated column density through the atmosphere. This causes an altitude-dependent abundance of water as well as of some of the other chemical species, as CO, H2 and O2.

The authors’ model assumes an initial 10% molecular hydrogen and measures changes depending on atmospheric pressure, semi-major axis and eccentricity, the latter generating tidal heating. In the best scenario, we wind up with an amount of water on the surface of the moon that is about 10,000 times smaller than the volume of Earth’s oceans, but 100 times larger than found in Earth’s atmosphere. Thus we have a conceivable way to keep water a liquid on the surface, offering the possibility of prebiotic chemistry:

“Under these conditions, if the orbital parameters are stable to guarantee a constant tidal heating, once water is formed, it remains liquid over the entire system evolution, and therefore providing favourable conditions for the emergence of life.”

Keeping that orbit eccentric enough to produce the needed tidal forces is a challenge. The authors’ research indicates that while moons around ejected gas giants may exist up to 0.1 AU from the planet, closer orbits in the range of ≲ 0.01 AU are more probable (Jupiter’s largest moons are within 0.01 AU). Is a single moon in this configuration not going to circularize its orbit, or can earlier orbital resonances survive the ejection? A good science fiction writer should have a go at this scenario to see what’s possible.

The paper is Avila et al., “Presence of water on exomoons orbiting free-floating planets: a case study,” International Journal of Astrobiology published online 08 June 2021 (full text). The Sigurðsson and Debes paper is Debes & Sigurðsson, ”The Survival Rate of Ejected Terrestrial Planets with Moons,” Astrophysical Journal Vol. 668, No. 2 (2 October 2007) L 167 (full text). The Stevenson paper is “Life-sustaining planets in interstellar space?” Nature 400 (6739):32 (1999). Abstract.


A Rapidly Growing Catalog of Fast Radio Bursts

Hard to believe that Fast Radio Bursts (FRBs) were only discovered in 2007, as it seems we’ve been puzzled by them for a lot longer. Thus far about 140 FRBs have been detected, but now we have news that the Canadian Hydrogen Intensity Mapping Experiment (CHIME) has pulled in a total of 535 new fast radio bursts in its first year of operation between 2018 and 2019. The catalog growing from this work was presented this week at the annual meeting of the American Astronomical Society.

“Before CHIME, there were less than 100 total discovered FRBs; now, after one year of observation, we’ve discovered hundreds more,” says CHIME member Kaitlyn Shin, a graduate student in MIT’s Department of Physics. “With all these sources, we can really start getting a picture of what FRBs look like as a whole, what astrophysics might be driving these events, and how they can be used to study the universe going forward.”

Image: The large radio telescope CHIME, pictured here, has detected more than 500 mysterious fast radio bursts in its first year of operation, MIT researchers report. Credit: Courtesy of CHIME.

CHIME involves four cylindrical radio antennas that MIT describes as “roughly the size and shape of snowboarding half-pipes” located in British Columbia, and operated by the National Research Council of Canada. A correlator instrument — a digital signal processor — digs through data from the stationary array at a rate of 7 terabits per second, allowing it to detect FRBs at a thousand times the pace of conventional radio telescopes.

We learn, for one thing, that FRBs are common, and frequent. Kiyoshi Masui (MIT) presented the catalog to conference goers on Wednesday the 9th:

“That’s kind of the beautiful thing about this field — FRBs are really hard to see, but they’re not uncommon. If your eyes could see radio flashes the way you can see camera flashes, you would see them all the time if you just looked up.”

It becomes clear from these data that the FRBs detected in the first year were evenly distributed in space, appearing in all parts of the sky. Their rate is thus far calculated to be 800 per day across the entire sky, a figure that is considered the most precise estimate of the phenomenon’s occurrence that has yet been presented. Most bursts appear to have originated within distant galaxies, meaning they were highly energetic.

Two categories of FRB also emerge: those that repeat and those that do not. Eighteen of the CHIME sources repeat, with the rest one-time events. Among the repeating sources, each burst lasts slightly longer and emits more focused radio frequencies than bursts from single, non-repeating FRBs. We seem to be looking at two different kinds of astrophysical sources, or at least separate mechanisms, and it will be a goal of future data collection to clarify the differences between the two.

Image: The first 13 FRBs found by CHIME/FRB (from CHIME/FRB Collaboration, 2019, Nature, 566, 230). In this plot, the effects of dispersion have been removed from each source. Credit: CHIME.

While researchers work to learn what could cause such bright, fast signals, it’s fascinating to compare the FRB work with the use of supernovae as ‘standard candles.’ Evidence for the accelerating expansion of the universe was found by such measurements. Can FRBs be used as standard candles for other kinds of detections? Each FRB yields information about its propagation in terms of how gas and matter are distributed along the way to us. Kaitlyn Shin refers to the possibility of using them as “cosmological probes,” a potential enhanced by this new and growing catalog.


TOI 1231b: A Useful Temperate Sub-Neptune

The beauty of nearby M-dwarf stars for exoplanet research is the depth of transits. If we are fortunate enough to find a planet crossing the face of the star as seen from our observatory, the star’s small size means a larger portion of its light will be attenuated. As you would imagine, this makes planets easier to spot, but the other significant advantage is that we have greater capability at analyzing the planet’s atmosphere.
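
The advantage is easy to quantify, since the transit depth scales as the square of the planet-to-star radius ratio; the radii below are illustrative, with the planet chosen to be roughly the size reported for TOI-1231b:

```python
def transit_depth(r_planet_earth, r_star_sun):
    """Fractional flux drop during transit: (R_planet / R_star)^2."""
    R_EARTH_PER_R_SUN = 6371.0 / 696_000.0    # ~0.0092
    return (r_planet_earth * R_EARTH_PER_R_SUN / r_star_sun) ** 2

# A ~3.6 Earth-radius sub-Neptune around an M dwarf of ~0.5 solar radii,
# versus the same planet around a Sun-like star (all radii illustrative)
print(f"M dwarf:  {transit_depth(3.6, 0.48):.2%}")
print(f"Sun-like: {transit_depth(3.6, 1.00):.2%}")
```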

TOI-1231b certainly fits the bill, although it’s a bit of an anomaly in the TESS universe. The space observatory operates with a built-in observational bias, because the Science Processing Operations Center (SPOC) pipeline and the Quick Look Pipeline (QLP) that comb through TESS data on 2-minute and 30-minute cadences respectively have to detect two transits before a planet’s period can be determined. Factor in that most of the TESS sky coverage is observed for 28 days and, in the majority of cases, you wind up with detections of planets with orbital periods of less than 14 days.

TOI-1231b’s period is 24 days, a nice catch given these constraints. The planet is a temperate sub-Neptune whose host star, NLTT 24399, is roughly 88 light years from the Sun. Already lead author Jennifer Burt (JPL) and team have been able to measure both the radius and mass of the planet, with followup data from the Planet Finder Spectrograph (PFS) on the Magellan Clay telescope at Las Campanas Observatory (Chile), as well as Las Cumbres Observatory and the Antarctica Search for Transiting ExoPlanets. From these parameters it was possible to calculate the planet’s density.

The temperatures on this world are calculated at 330 K (60 degrees Celsius), making TOI-1231b one of the lowest temperature exoplanets yet found whose atmosphere can be studied through transmission spectroscopy. The star is bright in the near-infrared (NIR), suggesting it will be a useful target for the James Webb Space Telescope as well as Hubble. One of the paper’s co-authors will be using the latter to mount a new series of observations within the month. Co-author Diana Dragomir (University of New Mexico) describes the team’s findings thus far:
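
For context, a temperature like the quoted 330 K is essentially an equilibrium-temperature estimate. A sketch with illustrative stellar and orbital values (approximations in the right range for an M3 dwarf and a 24-day orbit, not numbers taken from the paper):

```python
def equilibrium_temperature(t_star_k, r_star_rsun, a_au, albedo=0.0):
    """Planetary equilibrium temperature with full heat redistribution:
    T_eq = T_star * sqrt(R_star / (2 a)) * (1 - albedo)**0.25."""
    R_SUN_AU = 0.00465          # solar radius in AU
    return t_star_k * (r_star_rsun * R_SUN_AU / (2.0 * a_au)) ** 0.5 * (1.0 - albedo) ** 0.25

# Illustrative values for a mid-M dwarf host and a 24-day orbit
print(f"~{equilibrium_temperature(3550.0, 0.48, 0.13):.0f} K")
```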

“The low density of TOI 1231b indicates that it is surrounded by a substantial atmosphere rather than being a rocky planet. But the composition and extent of this atmosphere are unknown. TOI1231b could have a large hydrogen or hydrogen-helium atmosphere, or a denser water vapor atmosphere. Each of these would point to a different origin, allowing astronomers to understand whether and how planets form differently around M dwarfs when compared to the planets around our Sun, for example. Our upcoming HST observations will begin to answer these questions, and JWST promises an even more thorough look into the planet’s atmosphere.”

Image: An artist’s rendering of TOI-1231 b, a Neptune-like planet about 88 light years from Earth. Credit: NASA/JPL-Caltech.

One interesting aspect of this detection is the possibility of observing hydrogen and helium surrounding the planet because of its relatively shallow gravitational well and expected exposure to X-ray and ultraviolet radiation from the star. Moreover, there is only one other low-density temperate sub-Neptune, K2-18 b, currently in our catalog. It has temperatures in the 250-350 K range and a transmission spectrum that allows us to analyze its atmosphere, where evidence for water vapor has been found. Thus TOI 1231b should be useful as a check on how common water cloud formation in temperate sub-Neptunes may be.

All told, say the authors, “TOI 1231 b appears to be one of the most promising small exoplanets for transmission spectroscopy with HST and JWST detected by the TESS mission thus far.” A valuable find as we keep drilling down to analyze the atmospheres of ever smaller worlds, moving toward Earth-mass planets in the habitable zone.

The paper is Burt et al., “TOI-1231 b: A Temperate, Neptune-Sized Planet Transiting the Nearby M3 Dwarf NLTT 24399,” in process at The Astronomical Journal (preprint).


When Will We See an Ice Giant Orbiter?

With NASA announcing that its Discovery program would fund both Davinci and Veritas, two missions to Venus, it’s worth pausing to consider where we are in the realm of Solar System exploration. This is not to knock the Venus decisions; this is a target that has been neglected compared to, obviously, Mars, and we’ve kept it on the back burner while exploring Jupiter, Saturn and, with a fast flyby, Pluto/Charon. With budgets always tight, the axe must fall, and fall it has on the promising Trident.

Discovery-class involves small-scale missions that cost less than $500 million to develop. The Trident mission would have delivered imagery from Triton that upgraded the 1989 images from Voyager 2, useful indeed given the moon’s active surface, and we might have learned about the presence of a subsurface ocean. I should also mention that we lost IVO when the four candidate missions were pared down to two. IVO (Io Volcano Observer) had a strong case of its own, with close flybys of the tortured geology on the most volcanically active body in the Solar System.

So on to Venus, but let’s consider how the next few decades are shaping up. We have flown orbital missions to every planet in the Solar System other than the two ice giants, and it’s worth considering how many questions about those worlds were suggested by the Voyager 2 flybys of Uranus and Neptune. Imagine if all we had of Saturn were flyby images, conceivably missing the active plume activity on Enceladus. What kind of startling data might an ice giant orbiter return that Voyager 2 didn’t see in its brief encounters?

The ice giants are, as the 2013 Planetary Science Decadal Survey put it, “…one of the great remaining unknowns in the solar system, the only class of planet that has never been explored in detail.” A Uranus Orbiter and Probe was, in fact, the third-highest priority large-class mission named by the report, but it’s clear that we won’t have such a mission in time for the 2030-2034 launch window needed (more on this in a moment). Despite that, let’s switch the focus to Uranus because of a short report from the 2020 Lunar and Planetary Science Conference that Ashley Baldwin forwarded.

There are all kinds of reasons why Uranus makes an interesting target. In addition to its status as an ice giant, Uranus has both a ring system and unusual moons, with five major satellites that may be ocean worlds and in any case show dramatic surface features. The seventh planet also sports a major tilt in both rotational and magnetic axes, and a wind circulation structure that is little understood. In the absence of a major orbiter mission, the brief paper Ashley sent examines the issues involved in sending a much smaller New Frontiers class orbiter with faster turnaround.

Image: Uranus’ moon Miranda sports one of the strangest and most varied landscapes among extraterrestrial bodies, including three large features known as “coronae,” which are unique among known objects in our solar system. They are lightly cratered collections of ridges and valleys, separated from the more heavily cratered (and presumably older) terrain by sharp boundaries like mismatched patches on a moth-eaten coat. Miranda’s giant fault canyons are as much as 12 times as deep as the Grand Canyon. This image was acquired by Voyager 2 on Jan. 24, 1986, around its close approach to the Uranian moon. Credit: JPL.

Back to that launch window I mentioned earlier. The 2030-2034 timeframe for Uranus would allow the needed Jupiter gravity assist that would get the payload to its target before the planet reaches equinox in 2049. This is an important point: We’d like to see the northern hemispheres of the satellites, which Voyager 2 could not see, and after equinox they will once again fall into darkness. A New Frontiers-class orbiter might just make the deadline, but it’s hard to see such a mission being funded in time. NASA now says the next opportunity to propose for the fifth round of New Frontiers missions will come no later than the fall of 2024.

New Horizons is a New Frontiers-class mission, as are OSIRIS-REx and Juno, all selected competitively through the program, which focuses on medium-scale missions that cost less than $850 million to develop. Within that cost envelope, a Uranus orbiter is a tricky proposition. The total mission duration cited in the paper is fourteen years because of the flight design life of the needed Multi-Mission Radioisotope Thermoelectric Generators (MMRTGs). Thus the baseline is a two-year mission in orbit at Uranus with mapping of the entire system, all completed by Uranus spring equinox in 2049, “enabling different illuminations of the satellites and seasonal orientation of the planet and magnetosphere than observed by Voyager 2.”

Other issues: How to achieve orbital insertion at Uranus? Aerocapture seems a reasonable possibility and would have to be considered. The paper cites a 60-kg payload including five instruments along with radio science capabilities, and goes on to note that power is the most limiting constraint on a mission like this under New Frontiers cost limits. Here’s what the paper says about the power question:

…addressing power within cost is the primary obstacle to the feasibility of a NF Uranus orbiter mission. Previous Ice Giant mission studies have resulted in architectures requiring >350 W-e end-of-life power, which requires six MMRTGs. Owing to the relative inefficiency and significant cost of MMRTGs, any design should attempt to reduce the needed end-of-life power; this will have significant impact on both the spacecraft and orbit design as well as the communication subsystem and payload.
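
To get a feel for the arithmetic behind that six-MMRTG figure, here is a back-of-the-envelope sketch. The beginning-of-life output and degradation rate below are rough assumptions on my part, not numbers taken from the paper:

```python
# Back-of-the-envelope sketch of why an ice giant orbiter needs several
# MMRTGs: each unit's electrical output decays over the mission. The
# beginning-of-life power and decay rate are rough assumptions here,
# not figures from the Cohen et al. paper.

def mmrtg_power_w(bol_watts: float, decay_per_year: float, years: float) -> float:
    """Electrical output of one MMRTG after `years`, assuming exponential decay."""
    return bol_watts * (1.0 - decay_per_year) ** years

bol = 110.0          # assumed beginning-of-life output per unit, watts
decay = 0.04         # assumed fractional power loss per year
mission_years = 14   # flight design life cited in the paper

per_unit_eol = mmrtg_power_w(bol, decay, mission_years)
print(f"one MMRTG after {mission_years} yr: ~{per_unit_eol:.0f} W")
print(f"six MMRTGs: ~{6 * per_unit_eol:.0f} W (vs. the >350 W-e requirement)")
```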

And of course we have this:

Other design considerations that place significant constraints on the feasibility of a NF Uranus orbiter include deep-space communications (specifically the power required for downlink) and radiation shielding mass.

Not an easy task. But this is what we face as we look beyond the current selections in the Discovery program. We’d all like to see an orbiter around both ice giants, but given the realities of time and budget, the likelihood of getting one around either before mid-century is slim. Eventually it will get done, and new technologies will make for a more efficient design and a more comprehensive mission. Sadly, the timeframe for seeing all this happen stretches a long way ahead.

Many of us find this frustrating. But the larger picture is that the exploration of the Solar System and the push beyond is a civilizational project that dwarfs human lifetimes. The things we can accomplish today build the basis for projects our children will complete. We push the limits of what we have, drive technology forward, and refuse to stop trying.

The paper is Cohen et al., “New Frontiers-class Uranus Orbiter: A Case For Exploring The Feasibility of Achieving Multidisciplinary Science With a Mid-scale Mission,” 51st Lunar and Planetary Science Conference (2020). Full text.


Juno: Close Pass by Ganymede

The Juno spacecraft swings by Ganymede today, coming within 1,038 kilometers of the largest moon of Jupiter. We have to look back more than twenty years to find such a close approach to Ganymede, the last one conducted by the Galileo probe in 2000. Juno seems to be one of those gifts that keep on giving, rewarding us now with new data on Ganymede’s composition, tenuous ionosphere, magnetosphere and icy surface, likely a shell over an underlying ocean.

Scott Bolton (SwRI) is Juno’s principal investigator:

“Juno carries a suite of sensitive instruments capable of seeing Ganymede in ways never before possible. By flying so close, we will bring the exploration of Ganymede into the 21st century, both complementing future missions with our unique sensors and helping prepare for the next generation of missions to the Jovian system – NASA’s Europa Clipper and ESA’s [European Space Agency’s] JUpiter ICy moons Explorer [JUICE] mission.”

Image: Left to right: The mosaic and geologic maps of Jupiter’s moon Ganymede were assembled incorporating the best available imagery from NASA’s Voyager 1 and 2 spacecraft and NASA’s Galileo spacecraft. Credits: USGS Astrogeology Science Center/Wheaton/NASA/JPL-Caltech.

The closest approach occurs at 13:35 EDT (17:35 UTC), with the craft’s science instruments beginning to collect data some three hours earlier. Especially useful here will be the Ultraviolet Spectrograph (UVS) and Jovian Infrared Auroral Mapper (JIRAM) instruments, while the Microwave Radiometer (MWR) will gather information on the composition and temperature of the ice crust. Bolton again:

“Ganymede’s ice shell has some light and dark regions, suggesting that some areas may be pure ice while other areas contain dirty ice. MWR will provide the first in-depth investigation of how the composition and structure of the ice varies with depth, leading to a better understanding of how the ice shell forms and the ongoing processes that resurface the ice over time.”

The animation below gives a sense of Ganymede’s geology:

Animation: A rotating globe of Ganymede, with a geologic map superimposed over a global color mosaic. Credit: USGS Astrogeology Science Center/Wheaton/ASU/NASA/JPL-Caltech.

Remember that JUICE is scheduled for launch in 2022, with multiple flybys of Europa and Callisto planned before the spacecraft moves into orbit around Ganymede, so everything we learn today with Juno will provide additional planning fodder for that mission. JUICE will fly with ice-penetrating radar, an instrument called RIME (Radar for Icy Moons Exploration) that is capable of penetrating up to 10 kilometers of ice and reflecting off subsurface features. That should obviously come in handy not just at Ganymede but at Europa and Callisto as well.

A radio occultation experiment is planned for the Juno flyby, using the spacecraft’s X-band and Ka-band wavelengths as a probe of the moon’s ionosphere, where solar radiation ionizes stray gases to produce detectable charged particles. When Juno moves behind Ganymede, the changes in frequency observed as the radio signals pass through the ionosphere will be detected at the Deep Space Network Canberra complex in Australia. A solid reading here would be useful as scientists try to learn more not only about Ganymede’s ionosphere but also its magnetosphere. Larger than the planet Mercury, Ganymede is the only moon in the Solar System to have its own magnetosphere, a region of charged particles that forms a bubble around the small world.
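
The reason two bands help: plasma along the line of sight perturbs a radio signal in proportion to 1/f², so comparing the X-band and Ka-band links separates the ionospheric contribution from everything else. Here is a minimal sketch of that idea, with assumed frequencies and an assumed delay value rather than numbers from the mission:

```python
# Sketch of the dual-frequency idea behind a radio occultation: the plasma
# along the line of sight delays a radio signal in proportion to 1/f^2,
# so the difference between the X-band and Ka-band links isolates the
# ionospheric contribution and yields the column electron content (TEC).
# Frequencies and the delay value below are assumed, illustrative numbers.

F_X  = 8.4e9    # X-band downlink frequency, Hz (typical deep-space value)
F_KA = 32.0e9   # Ka-band downlink frequency, Hz (typical deep-space value)
K    = 40.3     # plasma dispersion constant, m^3/s^2 per electron/m^2

def tec_from_differential_delay(delta_path_m: float) -> float:
    """Column electron content (electrons/m^2) from the X/Ka path-length difference."""
    return delta_path_m / (K * (1.0 / F_X**2 - 1.0 / F_KA**2))

# A hypothetical 1 mm differential path delay:
print(f"TEC ~ {tec_from_differential_delay(1e-3):.2e} electrons/m^2")
```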

This will be a busy week for Juno mission scientists, with the close pass of Ganymede followed less than 24 hours later by the 33rd science pass of Jupiter. Juno’s JunoCam imager should be able to take images of Ganymede at a resolution equivalent to the best returned by Voyager and Galileo, a set of imagery that can be scanned for changes over the intervening four decades. Of particular relevance will be any new craters, as we try to learn more about impacts in the outer system.

Meanwhile, the Juno Stellar Reference Unit (SRU) navigation camera should collect images of particular import, telling us something about the radiation environment at Ganymede. Heidi Becker (JPL) is Juno’s radiation monitoring lead:

“The signatures from penetrating high-energy particles in Jupiter’s extreme radiation environment appear as dots, squiggles, and streaks in the images – like static on a television screen. We extract these radiation-induced noise signatures from SRU images to obtain diagnostic snapshots of the radiation levels encountered by Juno.”

A third camera is the Advanced Stellar Compass, which will measure energetic electrons penetrating its shielding every quarter of a second. NASA points out that from the perspective of Juno, Ganymede will go from being a point of light to a visible disk and back to a point of light in a mere 25 minutes, moving past the moon at about 19 kilometers per second. The ensuing Jupiter close pass will occur at 58 kilometers per second. Fast times above the cloud tops.
