WFIRST: Ready for Construction

With the James Webb Space Telescope now declared ‘a fully assembled observatory’ by NASA, environmental tests loom for the instrument, which is now slated for launch in March of 2021. Within that context, we need to place WFIRST (the Wide Field Infrared Survey Telescope), whose development was delayed for several years because of cost overruns on JWST. Recall that WFIRST was the top priority for a flagship mission in the last astrophysics Decadal Survey.

The good news is that NASA has just announced that WFIRST has passed what it is calling ‘a critical programmatic and technical milestone,’ which opens the path to hardware development and testing. With a field of view 100 times larger than Hubble’s, WFIRST will be able to investigate dark energy and dark matter while at the same time examining exoplanets by using microlensing techniques applied to the inner Milky Way. Its exoplanet capabilities could be significantly extended if additional budgeting for a coronagraph — which would allow direct imaging of exoplanets — winds up being approved.

And that is a big ‘if.’ No one doubts the power of a coronagraph onboard WFIRST to block the light of a central star in order to examine any planets found around it, but this telescope has already suffered considerable budget anxiety, leading NASA to separate the coronagraph, now described as a ‘technology demonstration,’ from the $3.2 billion budget estimate. Adding the coronagraph and subsequent operations would take the total WFIRST tally to $3.9 billion.

Image: This graphic shows a simulation of a WFIRST observation of M31, also known as the Andromeda galaxy. Hubble used more than 650 hours to image areas outlined in blue. Using WFIRST, covering the entire galaxy would take only three hours. Credits: DSS, R. Gendler, NASA, GSFC, ASU, STScI, B. F. Williams.

Will Congress approve funding for the coronagraph, or will WFIRST fly without it? Will WFIRST fly at all, given that Congress has already had to save the telescope twice from cancellation? The current FY2021 budget request proposes terminating the telescope, but it continues to receive congressional support and remains on schedule for a 2025 launch. It should be noted that a coronagraph was not part of the Decadal Survey’s recommendations, which factors into the discussion and may put pressure on those hoping to raise the needed additional funding.

Note this from NASA’s March 2 statement:

The FY2020 Consolidated Appropriations Act funds the WFIRST program through September 2020. The FY2021 budget request proposes to terminate funding for the WFIRST mission and focus on the completion of the James Webb Space Telescope, now planned for launch in March 2021. The Administration is not ready to proceed with another multi-billion-dollar telescope until Webb has been successfully launched and deployed.

The expectation is that Congress will keep WFIRST in the budget, but the coronagraph remains an open question. So the first priority is keeping the mission alive, and it’s clear that the cost overruns that have so exasperated astronomers and politicians alike with the James Webb instrument have played a role in keeping the brakes on WFIRST spending. As we saw recently in these pages, the Decadal Surveys (from the National Academies of Sciences, Engineering, and Medicine) set science priorities for NASA and other science agencies. The absence of a coronagraph from the last astrophysics Decadal Survey doesn’t help its chances now.


Astronomy relies on so-called ‘standard candles’ to make crucial measurements of distance. Cepheid variables, for example, perhaps the most famous stars in this category, were examined by Henrietta Swan Leavitt in 1908 as part of her study of variable stars in the Magellanic Clouds, revealing the relationship between this type of star’s period and luminosity. Edwin Hubble would use distance calculations based on this relationship to estimate how far what was then called the ‘Andromeda Nebula’ was from our galaxy, revealing the true nature of the ‘nebula.’
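The logic of the period-luminosity relation can be sketched in a few lines: the pulsation period yields an intrinsic brightness, and the distance modulus then converts apparent brightness into distance. The coefficients below are illustrative representative values, not a specific published calibration, which varies by passband.

```python
import math

def cepheid_distance_pc(period_days, apparent_mag):
    """Estimate a Cepheid's distance in parsecs from period and apparent magnitude."""
    # Illustrative Leavitt-law coefficients (real calibrations vary by band):
    # absolute magnitude M ~ -2.43 * (log10(P) - 1) - 4.05
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus: m - M = 5 * log10(d / 10 pc)
    return 10.0 ** ((apparent_mag - abs_mag + 5.0) / 5.0)
```

A 10-day Cepheid seen at tenth magnitude comes out at several kiloparsecs; longer-period (hence intrinsically brighter) Cepheids at the same apparent magnitude must lie farther away.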

In recent times, astronomers have used type Ia supernovae in much the same way, since comparing a source’s intrinsic brightness with its observed brightness likewise determines distance. The most commonly described type Ia supernova model occurs in binary systems where one of the stars is a white dwarf, and the assumption among astronomers has been that this category of supernova produces a consistent peak luminosity that can be used to measure interstellar, and intergalactic, distances.

It was through the study of type Ia supernovae that the idea of dark energy arose to explain the apparent acceleration of the universe’s expansion, but we can also point to our methods for measuring the Hubble constant, which helps us gauge the current expansion rate of the cosmos.
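The expansion-rate measurement itself is simple in outline: at modest distances, recession velocity scales linearly with distance through the Hubble constant. The two values below are approximate round numbers standing in for the supernova-ladder and CMB-based estimates, quoted only to show how the discrepancy propagates.

```python
# Two commonly quoted present-day values of the Hubble constant (km/s/Mpc);
# both are approximate and subject to revision:
H0_LOCAL = 73.0  # roughly the value from the type Ia supernova distance ladder
H0_CMB = 67.4    # roughly the value inferred from the cosmic microwave background

def recession_velocity_km_s(distance_mpc, h0=H0_LOCAL):
    # Hubble-Lemaitre law, v = H0 * d (valid at modest distances)
    return h0 * distance_mpc
```

The same galaxy at 100 Mpc recedes at about 7300 or 6740 km/s depending on which value of H0 one adopts; that gap is the heart of the current tension.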

Given the importance of standard candles to astronomy, we have to get them right. Now we have new work out of the Max Planck Institute for Astronomy in Heidelberg. A team led by Maria Bergemann draws our assumptions about these supernovae into question, and that could force a reassessment of the rate of cosmic expansion. At issue: Are all type Ia supernovae the same?

Bergemann’s work on stellar atmospheres has, since 2005, focused on new models to examine the spectral lines observed there, the crucial measurements that lead to data on a star’s temperature, surface pressure and chemical composition. Computer simulations of convection within a star and the interactions of plasma with the star’s radiation have been producing and reinforcing so-called non-LTE models, which drop the assumption of local thermodynamic equilibrium, leading to new ways to explore chemical abundances that alter our previous findings on some elements.

The team at MPIA has zeroed in on the element manganese, using observational data in the near-ultraviolet and extending the analysis beyond single stars to work with the combined light of numerous stars in a stellar cluster, which allows the examination of other galaxies. It takes a supernova explosion to produce manganese, and different types of supernova produce iron and manganese in different ratios. Thus a massive star going supernova, a ‘core collapse supernova,’ produces manganese and iron differently than a type Ia supernova.

Image: By examining the abundance of the element manganese, a group of astronomers has revised our best estimates for the processes behind supernovae of type Ia. Credit: R. Hurt/Caltech-JPL, Composition: MPIA graphics department.

Working with a core of 42 stars within the Milky Way, the team has essentially been reconstructing the evolution of iron and manganese as produced through type Ia supernova explosions. The researchers used iron abundance as an indicator of each star’s age relative to the others; these findings allow them to track the history of manganese in the Milky Way. What they are uncovering is that the ratio of manganese to iron has been constant over the age of our galaxy. The same constant ratio between manganese and iron is found in other galaxies of the Local Group, emerging as what appears to be a universal chemical constant.
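Abundance ratios in this kind of work are conventionally quoted in logarithmic ‘bracket’ notation relative to the Sun, which is what makes a ‘constant ratio’ claim precise. A minimal sketch of the convention (the input number densities here are arbitrary illustrative values):

```python
import math

def bracket_mn_fe(n_mn_star, n_fe_star, n_mn_sun, n_fe_sun):
    # Standard spectroscopic notation:
    # [Mn/Fe] = log10(N_Mn / N_Fe)_star - log10(N_Mn / N_Fe)_sun
    return math.log10(n_mn_star / n_fe_star) - math.log10(n_mn_sun / n_fe_sun)
```

A star with the same manganese-to-iron number ratio as the Sun has [Mn/Fe] = 0; a flat [Mn/Fe] trend across stars of all ages is what the team reports.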

This is a result that differs from earlier findings. Previous manganese measurements used the older LTE approach, which assumes local thermodynamic equilibrium and typically models stars as perfect spheres, with pressure and gravitational force in balance. Such work helped reinforce the idea that type Ia supernovae most often occurred with a white dwarf drawing material from a giant companion. The data in Bergemann’s work, analyzed with non-LTE models, are drawn from ESO’s Very Large Telescope and the Keck Observatory. A different conclusion emerges about how a type Ia occurs.

The assumption has been that these supernovae happen when a white dwarf orbiting a giant star pulls hydrogen onto its own surface and becomes unstable, having hit the limiting mass discovered by Subrahmanyan Chandrasekhar in 1930 (the “Chandrasekhar limit”). This limiting mass means that the total mass of the exploding star is the same from one Type Ia supernova to another, which governs the brightness of the supernova and produces our ‘standard candle.’ The 2011 Nobel Prize in Physics for Saul Perlmutter, Brian Schmidt, and Adam Riess comes out of using Type Ia as distance markers, with readings showing that the expansion of the universe is accelerating, out of which we get ‘dark energy.’
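The limit itself depends only weakly on composition. A common approximate form, sketched here with a coefficient that is an approximation rather than an exact constant, shows why carbon-oxygen white dwarfs all explode near the same mass:

```python
def chandrasekhar_mass_msun(mu_e=2.0):
    # Approximate form of the limit: M_Ch ~ 1.456 * (2 / mu_e)**2 solar masses,
    # where mu_e is the mean molecular weight per electron
    # (close to 2 for a carbon-oxygen white dwarf).
    return 1.456 * (2.0 / mu_e) ** 2
```

For mu_e = 2 this gives the familiar figure of a bit over 1.4 solar masses, which is what underwrites the uniform peak brightness of the standard scenario.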

But the work of Bergemann and team shows that other ways to produce a type Ia supernova may better fit the manganese/iron ratio results. These mechanisms may appear the same as the white dwarf/red giant scenario, but because they operate differently, their brightness varies. Two white dwarfs may orbit each other, producing a merger with a resulting explosion, or double detonations can occur when matter accreting onto the white dwarf ignites on its surface, triggering a second explosion in the carbon-oxygen core. In both cases, we are looking at a different scenario than the standard type Ia.

The problem: These alternative supernova scenarios do not necessarily follow the standard candle model. Double detonation explosions do not require a star to reach the Chandrasekhar mass limit. Explosions below this limit will not be as bright as the standard type Ia scenario, meaning the well-defined intrinsic brightness we are looking for in these events is not a reliable measure. And the constant ratio of manganese to iron that the researchers have found implies that non-standard type Ia supernovae are not the exception but the rule. As many as three out of four type Ia supernovae may be of this sort.
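The consequence for distances follows from the inverse-square law alone. If an explosion is intrinsically dimmer than the standard peak luminosity we assume, the distance we infer from its observed flux comes out too large, as this sketch shows:

```python
import math

def inferred_over_true_distance(assumed_luminosity, true_luminosity):
    # Observed flux falls as 1/d**2, so d = sqrt(L / (4 * pi * F)). Assuming
    # the standard peak luminosity for an intrinsically dimmer event biases
    # the inferred distance upward by sqrt(L_assumed / L_true).
    return math.sqrt(assumed_luminosity / true_luminosity)
```

An event at half the assumed luminosity, for instance, appears roughly 41 percent farther away than it really is.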

This is the third paper in a series that is designed to provide observational constraints on the origin of elements and their evolution within the galaxy. The paper notes that the evolution of manganese relative to iron is “a powerful probe of the epoch when SNe Ia started contributing to the chemical enrichment and, therefore, of star formation of the galactic populations.” The models of non-local thermodynamic equilibrium produce a fundamentally different result from earlier modeling and raise questions about the reliability of at least some type Ia measurements to gauge distance.

Given existing discrepancies between the Hubble constant as measured by type Ia supernovae and other methods, Bergemann and team have nudged the cosmological consensus in a sensitive place, showing the need to re-examine our standard candles, not all of which may be standard. Upcoming observational data from the gravitational wave detector LISA (due for a launch in the 2030s) may offer a check on the prevalence of white dwarf binaries that could confirm or refute this work. Even sooner, we should have the next data release (DR3) of ESA’s Gaia mission as a valuable reference.

The paper is Eitner et al., “Observational constraints on the origin of the elements III. Evidence for the dominant role of sub-Chandrasekhar SN Ia in the chemical evolution of Mn and Fe in the Galaxy,” in press at Astronomy & Astrophysics (preprint).


Voyager and the Deep Space Network Upgrade

The fault protection routines programmed into Voyager 1 and 2 were designed to protect the spacecraft in the event of unforeseen circumstances. Such an event occurred in late January, when a rotation maneuver planned to calibrate Voyager 2’s onboard magnetic field instrument failed to occur because an unexpected delay in its execution left two systems consuming high levels of power (in Voyager terms) at the same time, overdrawing the available power supply.

We looked at this event not long after it happened, and noted that within a couple of days, the Voyager team was able to turn off one of the systems and turn the science instruments back on. Normal operations aboard Voyager 2 were announced on March 3, with five operating science instruments that had been turned off once again returning their data. Such autonomous operation is reassuring because Voyager 2 is now going to lose the ability to receive commands from Earth, owing to upgrades to the Deep Space Network in Australia. This is a temporary situation but one that will last the entire 11 months of the upgrade period.

Fortunately, scientists will still be able to receive science data from the craft, which is now 17 billion kilometers from Earth, but they will not be able to send commands to it during this period. The Canberra site is critical to the Voyager interstellar mission because its 70-meter-wide antenna is the only one of the DSN’s three 70-meter antennae that can communicate with Voyager 2, which is moving relative to the Earth’s orbital plane in such a way that it can only be seen from the southern hemisphere. Thus the California (Goldstone) and Spain (Robledo de Chavela) sites are ruled out, and there is no southern hemisphere antenna other than Canberra’s DSS43 capable of sending S-band signals powerful enough to communicate with Voyager 2.
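The distance alone dictates how slow this conversation is. Radio signals travel at lightspeed, so a back-of-envelope calculation gives the one-way delay at Voyager 2’s quoted distance:

```python
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_light_time_hours(distance_km):
    # Light (and radio) travel time from Earth to the spacecraft, in hours
    return distance_km / SPEED_OF_LIGHT_KM_S / 3600.0
```

At 17 billion kilometers, a command takes nearly sixteen hours to arrive, and any acknowledgment another sixteen to return, which is part of why the spacecraft’s autonomous fault protection matters so much.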

Image: DSS43 is a 70-meter-wide (230-feet-wide) radio antenna at the Deep Space Network’s Canberra facility in Australia. It is the only antenna that can send commands to the Voyager 2 spacecraft. Credit: NASA/Canberra Deep Space Communication Complex.

The maintenance at DSS43 is essential because of communication and navigation needs for missions like the Mars 2020 rover and future exploration plans for both the Moon and Mars, including, at some point, crewed missions to the Moon under the Artemis program. Canberra has, in addition to the 70-meter dish, three 34-meter antennae that can receive the Voyager 2 signal but are unable to transmit commands. During the period in question, Voyager 2 will continue to return data, according to Voyager project manager Suzanne Dodd:

“We put the spacecraft back into a state where it will be just fine, assuming that everything goes normally with it during the time that the antenna is down. If things don’t go normally – which is always a possibility, especially with an aging spacecraft – then the onboard fault protection that’s there can handle the situation.”

Expect the work at Canberra to be completed by January of 2021, placing an updated and more reliable antenna back into service and, presumably, continuing the active work managing Voyager 2’s ongoing mission. Better this, engineers reason, than dealing with future unplanned outages as DSS43 ages, while the upgrades will add state-of-the-art technology to the site. Putting all this in perspective is the fact that the dish has been in service for fully 48 years.


Calculating Life’s Possibilities on Titan

With surface temperatures around -180° C, Titan presents problems for astrobiology, even if its seasonal rainfall, lakes and seas, and nitrogen-rich atmosphere bear similarities to Earth. Specifically, what kind of cell membrane can form and function in an environment this cold? Five years ago, researchers at Cornell used molecular simulations to screen for the possibilities, suggesting a membrane the scientists called an azotosome, which would be made out of the nitrogen, carbon and hydrogen molecules known to exist in Titan’s seas.

The azotosome was a useful construct because the phospholipid bilayer membranes giving rise to liposomes on Earth need an analog that can survive Titan’s conditions, a membrane able to form at cryogenic temperatures in Titan’s liquid methane. And the Cornell work suggested that azotosomes would show flexibility similar to that of cell membranes found on Earth. Titan’s seas of methane and ethane, then, might offer us the chance for a novel form of life to emerge.

Now we have new work out of Chalmers University of Technology in Gothenburg, Sweden that raises serious doubts about whether azotosomes could develop on Titan. The Cornell work examined the liquid organic compound acrylonitrile, found in Titan’s atmosphere, and built the azotosome idea around it, but the Swedish team’s calculations show that azotosomes are unlikely to be able to self-assemble in Titan’s conditions, for the acrylonitrile would crystallize into its molecular ice.
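The feasibility question comes down to thermodynamics: a structure self-assembles only if doing so lowers the Gibbs free energy, and at Titan’s roughly 94 K surface temperature the crystalline ice wins. A minimal sketch of the criterion (the enthalpy and entropy numbers in any real application would come from quantum chemical calculations like the team’s; those used here are purely hypothetical):

```python
def gibbs_free_energy(delta_h, temperature_k, delta_s):
    # Delta G = Delta H - T * Delta S; a process such as membrane
    # self-assembly is thermodynamically favorable only when Delta G < 0.
    return delta_h - temperature_k * delta_s

TITAN_SURFACE_K = 94.0  # roughly -179 C, matching Titan's surface

def self_assembly_favorable(delta_h, delta_s, temperature_k=TITAN_SURFACE_K):
    return gibbs_free_energy(delta_h, temperature_k, delta_s) < 0.0
```

The paper’s conclusion, in these terms, is that azotosome formation carries a positive free energy change relative to crystalline acrylonitrile, so the membrane never forms in the first place.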

Martin Rahm (Department of Chemistry and Chemical Engineering, Chalmers University of Technology) is co-author of the paper:

“Titan is a fascinating place to test our understanding of the limits of prebiotic chemistry – the chemistry that precedes life. What chemical, or possibly biological, structures might form, given enough time under such different conditions? The suggestion of azotosomes was a really interesting proposal for an alternative to cell membranes as we understand them. But our new research paper shows that, unfortunately, although the structure could indeed tolerate the extremes of Titan, it would not form in the first place.”

This is interesting work, and not only because we are on track to launch Dragonfly in 2026, a mission to investigate the surface and sample different locations around the moon in an assessment of prebiotic chemistry. What we’re seeing is the emergence of computational astrobiology, the necessary follow-on to studies like the predictive work of 2015. The idea is to model the properties and formation routes of the materials proposed as supporting possible biological processes. In this case, we learn that the azotosome structure that looked so promising is not thermodynamically feasible.

But this work hardly eliminates the possibility of life on Titan. What if, the authors speculate, the cell structure itself is not critical? From the paper:

…on Titan, any hypothetical life-bearing macromolecule or crucial machinery of a life form will exist in the solid state and never risk destruction by dissolution. The question is then whether these biomolecules would benefit from a cell membrane. Already rendered immobile by the low temperature, biological macromolecules on Titan would need to rely on the diffusion of small energetic molecules, such as H2, C2H2, or HCN, to reach them in order for growth or replication to ensue. Transport of these molecules might proceed in the atmosphere or through the surrounding methane/ethane environment. A membrane would likely hinder this beneficial diffusion. Similarly, a membrane would likely hinder necessary removal of waste products of metabolism, such as methane and nitrogen, in the opposite direction.

Image: Researchers looking for life on Titan, Saturn’s largest moon, used quantum mechanical calculations to investigate the viability of azotosomes, a potential form of cell membrane. Credit: NASA / Yen Strandqvist / Chalmers.

At this stage, as the authors note, the limits of prebiotic chemistry and biology on Titan will have to stay in the realm of speculation, but computations like these can inform the choice of sites for Dragonfly as it explores the moon, helping us to match the reality on the ground with theory.

The paper is Sandström & Rahm, “Can polarity-inverted membranes self-assemble on Titan?” Science Advances Vol. 6, No. 4 (24 January 2020). Full text. The 2015 paper on azotosomes is Stevenson, Lunine & Clancy, “Membrane alternatives in worlds without oxygen: Creation of an azotosome,” Science Advances Vol. 1, No. 1 (27 February 2015), e1400067 (full text).


On Freeman Dyson

Freeman Dyson’s response to the perplexity of our existence was not purely scientific. A polymath by nature, he responded deeply to art and literature and often framed life’s dilemmas through their lens. Always thinking of himself as a mathematician first, he unified the rival formulations of quantum electrodynamics and saw the Nobel Prize go to the three who had each, in different ways, created the theory, but he would cast himself as the Ben Jonson to Richard Feynman’s Shakespeare, a comparison noted by Gregory Benford in his review of Phillip F. Schewe’s recent biography. That would be a typical allusion for a man whose restless intellect chafed at smug over-specialization, something neither he nor Feynman could ever be accused of.

Feynman, Julian Schwinger and Shinichiro Tomonaga each came up with ways to describe how electrons and photons interrelate, but it was Dyson, on one of his long cross-continental bus trips, who worked out the equivalence of their theories, giving us QED. He would publish the unifying paper in Physical Review in 1949. A year later, he met Tomonaga at Princeton, describing him in a June 24, 1950 letter to his parents as “a charming man, like so many of the really good ones. He talked with me for three hours with much humour and common sense… I have the impression that he is an exceptionally unselfish person.”

Which is exactly the impression I had of Dyson in the one interaction (other than email) I had with him, back in 2003, when I was pulling together material for Centauri Dreams and called the Institute for Advanced Study, his scholarly home since 1953, to schedule an interview. It was a spring day and, unfortunately for my purposes, a loud lawn mower was moving up and down outside Dyson’s window. I had to shout to be heard and had trouble hearing him, but we persisted with much repetition and his unfailing good humor.

Always associated with Project Orion, the dramatic concept to propel a spacecraft by exploding nuclear charges behind it, Dyson had moved away from the idea, and indeed from nuclear energy entirely. He wanted to talk about microwave and laser propulsion, and expressed an interest in Clifford Singer’s ideas on pellet streams, an idea he liked because of the lack of diffraction. Over a close pass by the outside lawnmower, I heard him clearly: “Nuclear energy doesn’t cut it! Nuclear energy is too small. You’re using less than one percent of the mass with any kind of nuclear reaction so you’re limited to less than a tenth of lightspeed. Nuclear is great inside the Solar System, but not very interesting outside of it.”
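Dyson’s “less than one percent” point can be made quantitative with a back-of-envelope estimate. If only a small fraction of the propellant’s rest mass becomes kinetic energy of the exhaust, the achievable exhaust velocity, and hence the practical cruise speed, is capped well below lightspeed. The sketch below is a rough nonrelativistic estimate, not a mission calculation:

```python
import math

def exhaust_velocity_over_c(mass_fraction_to_energy):
    # If a fraction eps of the propellant's rest mass is converted to
    # kinetic energy of the exhaust, a nonrelativistic estimate gives
    # v_e / c ~ sqrt(2 * eps). Fusion yields eps well under 0.01.
    return math.sqrt(2.0 * mass_fraction_to_energy)
```

With eps = 0.005, the exhaust velocity comes out near a tenth of lightspeed, and since the rocket equation only allows a delta-v of a few times the exhaust velocity before mass ratios become absurd, Dyson’s ceiling follows directly.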

If you would know something of this man, of his values and his conception of life, I direct you to the splendid Maker of Patterns: An Autobiography Through Letters, published in 2018. The concept is daring, for by eschewing standard autobiography to present himself largely through letters he wrote at the time, Dyson gives up the opportunity to edit his persona. None of us can point to a lifetime without contradiction, which is just another way of describing growth. Dyson was willing for that growth to be in full view. Thus the Dyson of 1958, writing about the Project Orion work he would later discount:

The basic idea is absurdly simple. One is amazed that nobody thought of it before. But the only man who could think of it was somebody who had been working and thinking for years with bombs, so that he could know exactly what a bomb of a given size will do. It was not an accident that this man happened to be Ted [Taylor]. The problem is to convince oneself that one can sit on top of a bomb without being fried… Ted’s genius led him to question the obvious impossibility. For the last six months Ted has spent his time talking to people in the government and trying to convince them that this idea is not crazy. He has had a hard time. But it seems we have now a lot of influential people on our side… Ted and I will fly to Los Alamos this evening. We travel like Paul and Barnabas.

Nothing would come of these travels, of course, because of the signing of the Limited Nuclear Test Ban Treaty of 1963, though Dyson would later support the treaty amid his deep concern over nuclear destruction. The idea of Orion still tantalizes many interstellar advocates today.

The lack of self-justifying ego — so rare in all too many quarters — that marks Dyson’s writings informs his wide reach into non-scientific markets, where he became the eloquent explainer of concepts he worked with in the course of his long life. I doubt there are many Centauri Dreams readers who do not have at least a few of his titles, books like Disturbing the Universe (1981) and Infinite in All Directions (1988). So many concepts sprang from his insistence on seeing things from a cosmological perspective, including, for our interstellar purposes, the Dyson sphere and the biological, self-replicating probe he called ‘astrochicken,’ enabled by artificial intelligence.

Image: Around the table clockwise are Dyson, Gregory Benford, Jim Benford and David Brin. Taken Jan. 30, 2019, before a discussion between Greg and Dyson at the Clarke Center (available here on YouTube).

All of these concepts he could relate to the general public through a style that was at once clear and enabling, so that the reader would, like this one, often look up from his or her reading to take in the audacity of ideas that were as logical as they were innovative. The archives of this site are awash with references to Dyson’s contributions, a tribute to his range and his reach. Remarkably, that intellect never deserted him even as his physical strength began to fail. Jim Benford, who had known Dyson since the 1960s, told me on the day of Dyson’s death that he had continued his yearly trips across the country to his La Jolla (CA) residence up until last year. This time around, at 96, he told Jim his doctors had argued against it. He would die a week later, a loss as deep to this field as his contribution was rich.

We shall know what we go to Mars for only after we get there. The study of whatever forms of life exist on Mars is likely to lead to better understanding of life in general. This may well be of more benefit to humanity than irrigating ten Saharas. But that is only one of many reasons for going. The main purpose is a general enlargement of human horizons.

Thus Dyson in a letter from La Jolla in 1958. Really, you must read Maker of Patterns. And from my 2003 interview with him:

Look at how people spread around the Earth. It’s not clear why we want to travel so much, but we do. It seems to be characteristic of humans from the time we left Africa. Why do people leave Africa to spread out to all these desolate places, to Siberia and across the Pacific? We know that people just do this. It’s part of human nature…

I think of him foremost as a deeply sane man, one who saw both the aspirations of the human mind and its limitations, and who took on the challenge of explaining life’s mysteries with a fierce joy. No one who reads, and re-reads, his essays and papers can miss this affirmation of mind at work, always building in new directions, unifying, shaping, questioning. It would be superfluous to try to summarize his many accomplishments in one post, for we will, inevitably, be turning his ideas over in our discussions for the rest of the lifetime of Centauri Dreams.


Exploring the Contact Paradox

Keith Cooper is a familiar face on Centauri Dreams, both through his own essays and the dialogues he and I have engaged in on interstellar topics. Keith is the editor of Astronomy Now and the author of both The Contact Paradox: Challenging Assumptions in the Search for Extraterrestrial Intelligence (Bloomsbury Sigma), and Origins of the Universe: The Cosmic Microwave Background and the Search for Quantum Gravity (Icon Books) to be published later this year. The Contact Paradox is a richly detailed examination of the history and core concepts of SETI, inspiring a new set of conversations, of which this is the first. With the recent expansion of the search through Breakthrough Listen, where does SETI stand both in terms of its likelihood of success and its perception among the general public?

  • Paul Gilster

Keith, we’re 60 years into SETI and no contact yet, though there are a few tantalizing things like the Wow! signal to hold our attention. Now that you have given us an exhaustive study of the field and mined its philosophical implications, what’s your take on how this lack of results is playing with the general public? Are we more or less ready today than we were in the days of Project Ozma to receive news of a true contact signal?

And despite what we saw in the film Contact, do you think the resultant clamor would be as widespread and insistent? Because to me, one of the great paradoxes about the whole idea of contact is that the public seems to get fired up for the idea in film and books, but relatively uninterested in the actual work that’s going on. Or am I misjudging this?

  • Keith Cooper

What a lot of people don’t realise is just how big space is. Our Galaxy is home to somewhere between 100 billion and 200 billion stars. Yet, until Yuri Milner’s $100 million Breakthrough Listen project, we had looked and listened, in detail, at about a thousand of those stars. And when I say listened closely, I mean we pointed a telescope at each of those stars for half an hour or so. Even Breakthrough Listen, which will survey a million stars in detail, finds the odds stacked against it. Let’s imagine there are 10,000 technological species in our Galaxy. That sounds like a lot, but on average we’d have to search between 10 million and 20 million stars just to find one of those species.

And remember, we’re only listening for a short time. If they’re not transmitting during that time frame, then we won’t detect them, at least not with a radio telescope. Coupled with the fact that incidental radio leakage will be much harder to detect than we thought, then it’s little wonder that we’ve not found anyone out there yet. Of course, the public doesn’t see these nuances – they just see that we’ve been searching for 60 years and all we’ve found is negative or null results. So I’m not surprised that the public are often uninspired by SETI.

Some of this dissatisfaction might stem from the assumptions made in the early days of SETI, when it was assumed that ETI would be blasting out messages through powerful beacons that would be pretty obvious and easy to detect. Clearly, that doesn’t seem to be the case. Maybe that’s because they’re not out there, or maybe it’s because the pure, selfless altruism required to build such a huge, energy-hungry transmitter to beam messages to unknown species is not very common in nature. Certainly on Earth, in the animal kingdom, altruism usually operates either on the basis of protecting one’s kin, or via quid pro quo, neither of which lend themselves to encouraging interstellar communication.

So I think we – that is, both the public and the SETI scientific community – need to readjust our expectations a little bit.

Are we ready to receive a contact signal? I suspect that we think we are, but that’s different from truly being ready. Of course, it depends upon a number of variables, such as the nature of the contact, whether we can understand the message if one is sent, and whether the senders are located close to us in space or on the other side of the Galaxy. A signal detected from thousands of light years away, whose message content we cannot decode, will have much less impact than one from, say, 20 or 30 light years away, whose content we can decode and whose senders we might even begin communicating with on a regular basis.

  • Paul Gilster

I’ll go further than that. To me, the optimum SETI signal to receive first would be one from an ancient civilization, maybe one way toward galactic center, which by virtue of its extreme distance would make for a non-threatening experience. Or at least it would if we quickly went to work on expanding public understanding of the size of the Galaxy and the Universe itself, as you point out. An even more ancient signal from a different galaxy would be better still, as even the most rabid conspiracy theorist would have little sense of immediate threat.

I suppose the best scenario of all would be a detection that demonstrated other intelligent life somewhere far away in the cosmos, and then a century or so for humanity to digest the idea, working it not only into popular culture, but also into philosophy, art, so that it becomes a given in our school textbooks (or whatever we’ll use in the future in place of school textbooks). Then, if we’re going to receive a signal from a relatively nearby system, let it come after this period of acclimatization.

Great idea, right? As if we could script what happens when we’re talking about something as unknowable as SETI contact. I don’t even think we’d have to have a message we could decode at first, because the important thing would be the simple recognition of the fact that other civilizations are out there. On that score, maybe Dysonian SETI turns the trick with the demonstration of a technology at work around another star. The fact of its existence is what we have to get into our basic assumptions about the universe. I used to assume this would be easy and come soon, and while I do understand about all those stars out there, I’m still a bit puzzled that we haven’t turned up something. I’d call that no more than a personal bias, but there it is.

Image: The Parkes 64m radio telescope in Parkes, New South Wales, Australia with the Milky Way overhead. Breakthrough Listen is now conducting a survey of the Milky Way galactic plane over 1.2 to 1.5 GHz and a targeted search of approximately 1000 nearby stars over the frequency range 0.7 to 4 GHz. Credit: Wikimedia Commons / Daniel John Reardon.

  • Keith Cooper

It’s the greatest puzzle that there is. Radio SETI approaches things from the assumption that ET just sat at home belting out radio signals, and yet, as we know, the Universe is so old that ET has had ample time to reach us, or to build some kind of Dysonian artefact, or to do something to make their presence more obvious. And over the years we’ve all drawn our own conclusions as to why this does not seem to be the case – maybe they are here but hidden, watching us like we’re in some kind of cosmic zoo. Or maybe interstellar travel and building megastructures are more difficult than we envision. Perhaps they are all dead, or technological intelligence is rare, or they were never out there in the first place. We just don’t know. All we can do is look.

I think science fiction has also trained us to expect alien life to be out there – and I don’t mean that as a criticism of the genre. Indeed, in The Contact Paradox, I often use science fiction as allegory, largely because that’s where discussions about what form alien life may take and what might happen during contact have already taken place. So let me ask you this, Paul: From all the sf that you’ve read, are there any particular stories that stand out as a warning about the subtleties of contact?

  • Paul Gilster

I suppose my favorite of all the ‘first contact through SETI’ stories is James Gunn’s The Listeners (1972). Here we have multiple narrators working a text that is laden with interesting quotations. Gunn’s narrative methods go all the way back to Dos Passos and anticipate John Brunner (think Stand on Zanzibar, for example). It’s fascinating methodology, but beyond that, the tumult that greets the decoding of an image from Capella transforms into acceptance as we learn more about a culture that seems to be dying and await what may be the reply to a message humanity had finally decided to send in response. So The Listeners isn’t really a warning as much as an exploration of this tangled issue in all its complexity.

Of course, if we widen the topic to go beyond SETI and treat other forms of contact, I love what Stanislaw Lem did with Solaris (1961). A sentient ocean! I also have to say that I found David Brin’s Existence (2012) compelling. Here competing messages are delivered by something akin to Bracewell probes, reactivated after long dormancy. Which one do you believe, and how do you resolve deeply contradictory information? Very interesting stuff! I mean, how do we respond if we get a message, and then a second one saying “Don’t pay any attention to that first message?”

What are some of your choices? I could go on for a bit about favorite science fiction but I’d like to hear from you. I assume Sagan’s Contact (1985) is on your list, but how about dazzling ‘artifact’ contact, as in the Strugatsky brothers’ Roadside Picnic (1972)? And how do we fit in Cixin Liu’s The Three Body Problem (2008)? At first glance, I thought we were talking about Alpha Centauri, but the novel shows no familiarity with the actual Centauri system, while still being evocative and exotic. Here the consequences of contact are deeply disturbing.

  • Keith Cooper

I wish I were as well read as you are, Paul! I did read The Three Body Problem, but it didn’t strike a chord with me, which is a shame. For artefact contact, however, I have to mention the Arthur C. Clarke classic, Rendezvous with Rama (1973). One of the things I liked about that story is that it removed us from the purpose of Rama. We just happened to be bystanders, oblivious to Rama’s true intent and destination (at least until the sequel novels).

Clarke’s story feels relevant to SETI today, in which embracing the search for ‘technosignatures’ has allowed researchers to consider wider forms of detection than just radio signals. In particular, we’ve seen more speculation about finding alien spacecraft in our own Solar System – see Avi Loeb pondering whether 1I/‘Oumuamua was a spacecraft (I don’t think it was), or Jim Benford’s paper about looking for lurkers.

I’ve got mixed feelings about this. On the one hand, although it’s speculative and I really don’t expect us to find anything, I see no reason why we shouldn’t look for probes in the Solar System, just in case, and it would be done in a scientific manner. On the other hand, it sets SETI on a collision course with ufology, and I’d be interested to see how that would play out in the media and with the public.

It could also change how we think about contact. Communication over many light years via radio waves or optical signals is one thing, but if the SETI community agrees that it’s possible that there could be a probe in our Solar System, then that would bring things into the arena of direct contact. As a species, I don’t think we’re ready to produce a coherent response to a radio signal, and we are certainly not ready for direct contact.

Contact raises ethical dilemmas. There’s the obvious stuff, such as who has the right to speak for Earth, and indeed whether we should respond at all, or stay silent. I think there are other issues though. There may be information content in the detected signal, for example a message containing details of new technology, or new science, or new cultural artefacts.

However, we live in a world in which resources are not shared equally. Would the information contained within the signal be shared with the whole world, or would governments covet it? If the technological secrets learned from the signal could change the world, for good or ill, who should we trust to manage those secrets?

These issues become amplified if contact is direct, such as finding one of Benford’s lurkers. Would we all agree that the probe should have its own sovereignty and keep our distance? Or would one or more nations or organisations seek to capture the probe for their own ends? How could we disseminate what we learn from the probe so that it benefits all humankind? And what if the probe doesn’t want to be captured, and defends itself?

My frustration with SETI is that we devote our efforts to trying to make contact, but then shun any serious discussion of what could happen during contact. The search and the discussion should be happening in tandem, so that we are ready should SETI find success, and I’m frankly puzzled that we don’t really do this. Paul, do you have any insight into why this might be?

  • Paul Gilster

You’ve got me. You and I are on a slightly different page when it comes to METI, for example (Messaging to Extraterrestrial Intelligence). But we both agree that while we search for possible evidence of ETI, we should be having this broad discussion about the implications of success. And if we’re talking about actually sending a signal without any knowledge whatsoever of what might be out there, then that discussion really should take priority, as far as I’m concerned. I’d be much more willing to accept the idea of sending signals if we came to an international consensus on the goal of METI and its possible consequences.

As to why we don’t do this, I hear a lot of things. Most people from the METI side argue that the cat is already out of the bag anyway, with various private attempts to send signals proliferating, and the assumption that ever more sophisticated technology will allow everyone from university scientists to the kid in the basement to send signals whenever they want. I can’t argue with that. But I don’t think the fact that we have sent messages means we should give up on the idea of discussing why we’re doing it and why it may or may not be a sound idea. I’m not convinced anyway that any signals yet sent have the likelihood of being received at interstellar distances.

But let’s leave METI alone for a moment. On the general matter of SETI and implications of receiving a signal or finding ETI in astronomical data, I think we’re a bit schizophrenic. When I talk about ‘we,’ I mean western societies, as I have no insights into how other traditions now view the implications of such knowledge. But in the post-Enlightenment tradition of places like my country and yours, contacting ETI is on one level accepted (I think this can be demonstrated in recent polling) while at the same time it is viewed as a mere plot device in movies.

This isn’t skepticism, because that implies an effort to analyze the issue. This is just a holdover of old paradigms. Changing them might take a silver disc touching down and Michael Rennie strolling out. On the day that happens, the world really would stand still.

Let’s add in the fact that we’re short-sighted in terms of working for results beyond the next dividend check (or episode of a favorite show). With long-term thinking in such perilously short supply (and let’s acknowledge the Long Now Foundation’s heroic efforts at changing this), we have trouble thinking about how societies change over time with the influx of new knowledge.

Our own experience says that superior technologies arriving in places without warning can lead to calamity, whether intentional or not, which in and of itself should be a lesson as we ponder signals from the stars. A long view of civilization would recognize how fragile its assumptions can be when faced with sudden intervention, as any 500-year-old Aztec might remind us.

Image: A 17th century CE oil painting depicting the Spanish Conquistadores led by Hernan Cortes besieging the Aztec capital of Tenochtitlan in 1521 CE. (Jay I. Kislak Collection).

Keith, what’s your take on the ‘cat out of the bag’ argument with regard to METI? It seems to me to ignore the real prospect that we can change policy and shape behavior if we find it counterproductive, instead focusing on human powerlessness to control our impulses. Don’t we on the species level have agency here? How naive do you think I am on this topic?

  • Keith Cooper

That is the ‘contact paradox’ in a nutshell, isn’t it? This idea that we’re actively reaching out to ETI, yet we can’t agree on whether it’s safe to do so or not. That’s the purpose of my book, to try and put the discussion regarding contact in front of a wider audience.

In The Contact Paradox, I’m trying not to tell people what they should think about contact, although of course I give my own opinions on the matter. What I am asking is that people take the time to think more carefully about this issue, and about our assumptions, by embarking on the broader debate.

Readers of Centauri Dreams might point out that they have that very debate in the comments section of this website on a frequent basis. And while that’s true to an extent, I think the debate, whether on this site or among researchers at conferences or even in the pages of science fiction, has barely scratched the surface. There are so many nuances and details to examine, so many assumptions to challenge, and it’s all too easy to slip back into the will they/won’t they invade discussion, which to me is a total straw-man argument.

To compound this, while the few reviews that The Contact Paradox has received so far have been nice, I am seeing a misunderstanding arise in those reviews that once again brings the debate back down to the question of whether ETI will be hostile or not. Yet the point I am making in the book is that even if ETI is benign, contact could potentially still go badly, through misunderstandings, or through the introduction of disruptive technology or culture.

Let me give you a hypothetical example based on a science-fiction technology. Imagine we made contact with ETI, and they saw the problems we currently face on Earth, such as poverty, disease and climate change. So they give us some of their technology – a replicator, like that in Star Trek, capable of making anything from the raw materials of atoms. Let’s also assume that the quandaries I mentioned earlier, about who takes possession of that technology and whether they hoard it, don’t apply. Instead, for the purpose of this argument, let’s assume that soon enough the technology is patented by a company on Earth and rolled out into society to the point that replicators become as common a sight in people’s homes as microwave ovens.

Just imagine what that could do! There would be no need for people to starve or suffer from drought – the replicators could make all the food and water we’d ever need. Medicine could be created on the spot, helping people in less wealthy countries who can’t ordinarily get access to life-saving drugs. And by taking away the need for industry and farming, we’d cut down our carbon emissions drastically. So all good, right?

But let’s flip the coin and look at the other side. All those people all across the world who work in manufacturing and farming would suddenly be out of a job, and with people wanting for nothing, the economy would crash completely, and international trade would become non-existent – after all, why import cocoa beans when you can just make them in your replicator at home? We’d have a sudden obesity crisis, because when faced with an abundance of resources, history tells us that it is often human nature to take too much. We’d see a drugs epidemic like never before, and people with malicious intent would be able to replicate weapons out of thin air. Readers could probably imagine other disruptive consequences of such a technology.

It’s only a thought experiment, but it’s a useful allegory showing that there are pros and cons to the consequences of contact. What we as a society have to do is decide whether the pros outweigh the cons, and to be prepared for the disruptive consequences. We can get some idea of what to expect by looking at contact between different societies on Earth throughout history. Instead of the replicator, consider historical contact events where gunpowder, or fast food, or religion, or the combustion engine have been given to societies that lacked them. What were the consequences in those situations?

This is the discussion that we’re not currently having when we do METI. There’s no risk assessment, just a bunch of ill-thought-out assumptions masquerading as a rationale for attempting contact before we’re ready.

There’s still time though. ETI would really have to be scrutinising us closely to detect our leakage or deliberate signals so far, and if they’re doing that then they would surely already know we are here. So I don’t think the ‘cat is out of the bag’ just yet, which means there is still time to have this discussion, and more importantly to prepare. Because long-term I don’t think we should stay silent, although I do think we need to be cautious, and learn what is out there first, and get ready for it, before we raise our voice. And if it turns out that no one is out there, then we’ve not wasted our time, because I think this discussion can teach us much about ourselves too.

  • Paul Gilster

We’re on the same wavelength there, Keith. I’m not against the idea of communicating with ETI if we receive a signal, but only within the context you suggest, which means thinking long and hard about what we want to do, making a decision based on international consultation, and realizing that any such contact would have ramifications that have to be carefully considered. On balance, we might just decide to stay silent until we gathered further information.

I do think many people have simply not considered this realistically. I was talking to a friend the other day whose reaction was typical. He had been asking me about SETI from a layman’s perspective, and I was telling him a bit about current efforts like Breakthrough Listen. But when I added that we needed to be cautious about how we responded, if we responded, to any reception, he was incredulous, then thoughtful. “I’ve just never thought about that,” he said. “I guess it just seems like science fiction. But of course I realize it isn’t.”

So we’re right back to paradox. If we have knowledge of the size of the galaxy — indeed, of the visible cosmos — why do we not see more public understanding of the implications? I think people could absorb the idea of a SETI reception without huge disruption, but it will force a cultural shift that turns what had been fiction into the realm of possibility.

But maybe we should now identify the broad context within which this shift can occur. In the beginning of your book, Keith, you say this: “Understanding altruism may ultimately be the single most significant factor in our quest to make contact with other intelligent life in the Universe.”

I think this is exactly right, and the next time we talk, I’d like us to dig into why this statement is true, and its ramifications for how we deal with not only extraterrestrial contact but our own civilization. Along with this, let’s get into that thorny question of ‘deep time’ and how our species sees itself in the cosmos.


G 9-40b: Confirming a Planet Candidate

M-class dwarfs within 100 light years are highly sought-after objects these days, given that any transiting worlds around such stars present unusually useful opportunities for atmospheric analysis. That’s because these stars are small, producing a large transit depth — in other words, a substantial fraction of the star’s light is blocked by the planet. Studying a star’s light as it filters through a planetary atmosphere — transmission spectroscopy — can tell us much about the chemical constituents involved. We’ll soon extend that with space-based direct imaging.
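The size advantage is worth quantifying. To first order, transit depth is simply the square of the planet-to-star radius ratio, which is why the same planet blocks far more of a small star’s light. A minimal sketch, using illustrative round-number radii rather than values from any particular system:

```python
# Transit depth to first order (ignoring limb darkening) is the
# fraction of the stellar disk covered: depth = (R_planet / R_star)**2.

R_NEPTUNE = 0.0346   # Neptune's radius in solar radii (~24,600 km / ~696,000 km)
R_M_DWARF = 0.35     # an illustrative mid-M dwarf radius, in solar radii
R_SUN = 1.0

def transit_depth(r_planet: float, r_star: float) -> float:
    """Fractional dip in stellar flux during transit."""
    return (r_planet / r_star) ** 2

depth_m = transit_depth(R_NEPTUNE, R_M_DWARF)
depth_sun = transit_depth(R_NEPTUNE, R_SUN)
print(f"Neptune-size planet, M dwarf:  {depth_m:.2%}")   # ~1% dip
print(f"Neptune-size planet, Sun-like: {depth_sun:.2%}") # ~0.1% dip
print(f"Depth ratio: {depth_m / depth_sun:.1f}x")
```

The ratio scales as the inverse square of the stellar radius, so a mid-M dwarf yields roughly an eight-fold deeper transit for the same planet, boosting the signal available to transmission spectroscopy.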

While the discoveries we’re making today are exciting in their own right, bear in mind that we’re also building the catalog of objects that next-generation ground telescopes (the Extremely Large Telescope, or ELT, class instruments on the way) and their space-based cousins can examine in far greater depth. And it’s also true that we are tuning up our methods for making sure that our planet candidates are real and not products of data contamination.

Thus a planet called G 9-40b, orbiting its red dwarf host about 90 light years out, is significant not so much for the planet itself as for the methods used to confirm it. Probably the size of Neptune or somewhat smaller, G 9-40b was first flagged by Kepler (in its K2 phase) as a candidate when it was seen transiting the star every six days. Confirmation that this is an actual planet has been achieved through three instruments. The first is the Habitable-zone Planet Finder (HPF), a spectrograph developed at Penn State and installed on the 10m Hobby-Eberly Telescope at McDonald Observatory in Texas.

HPF provides high precision Doppler readings in the infrared, allowing astronomers to exclude possible signals that might have mimicked a transiting world — we now know that G 9-40b is not a close stellar or substellar binary companion. HPF is distinguished by its spectral calibration using a laser frequency comb built by scientists at the National Institute of Standards and Technology and the University of Colorado. The instrument was able to achieve high precision in its radial velocity study of this planet while also observing the world’s transits across the star.

A post on the Habitable Zone Planet Finder blog notes that the brightness of the host star (given its proximity) and the large transit depth of the planet makes G 9-40b “…one of the most favorable sub-Neptune-sized planets orbiting an M-dwarf for transmission spectroscopy with the James Webb Space Telescope (JWST) in the future…”

But the thing to note about this work is the collaborative nature of the validation process, putting different techniques into play. High contrast adaptive optics imaging at Lick Observatory showed no stellar companions near the target, helping researchers confirm that the transits detected in the K2 mission were indeed coming from the star G 9-40. The Apache Point observations using high-precision diffuser-assisted photometry (see the blog entry for details on this technique) produced a transit plot that agreed with the K2 observations and allowed the team to tighten the timing of the transit. The Apache Point observations grew out of lead author Guðmundur Stefánsson’s doctoral work at Penn State. Says Stefánsson:

“G 9-40b is amongst the top twenty closest transiting planets known, which makes this discovery really exciting. Further, due to its large transit depth, G 9-40b is an excellent candidate exoplanet to study its atmospheric composition with future space telescopes.”

Image: Drawn from the HPF blog. Caption: Precise radial velocities from HPF (left) on the 10m Hobby-Eberly Telescope (right) allowed us to place an upper limit on the mass of the planet of 12 Earth masses. We hope to get a further precise mass constraint by continuing to observe G 9-40 in the future. Image credit: Figure 11a from the paper (left), Gudmundur Stefansson (right).

Near-infrared radial velocities from HPF produced the 12 Earth-mass upper limit on the planet’s mass; tightening that limit through future work will allow the planet’s composition to be constrained. All of this is by way of feeding a space-based instrument like the James Webb Space Telescope with the data it will need to study the planet’s atmosphere. In such ways do we pool the results of our instruments, with HPF continuing its survey of the nearest low-mass stars in search of other planets in the Sun’s immediate neighborhood.
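For a sense of the Doppler signal involved, the standard circular-orbit, edge-on radial-velocity semi-amplitude is K ≈ 28.43 m/s × (m_p/M_Jup)(P/1 yr)^(−1/3)(M★/M_☉)^(−2/3). A rough sketch follows; the 0.3 solar-mass stellar value is my own assumption for a typical M2.5 dwarf, not a figure taken from the paper:

```python
# RV semi-amplitude for a circular, edge-on orbit with m_p << M_star:
#   K [m/s] = 28.4329 * (m_p/M_Jup) * (P/1 yr)**(-1/3) * (M_star/M_Sun)**(-2/3)
EARTH_MASSES_PER_JUPITER = 317.8

def rv_semi_amplitude(m_planet_earth: float, period_days: float,
                      m_star_solar: float) -> float:
    """Stellar reflex velocity semi-amplitude in m/s."""
    m_jup = m_planet_earth / EARTH_MASSES_PER_JUPITER
    period_yr = period_days / 365.25
    return 28.4329 * m_jup * period_yr ** (-1 / 3) * m_star_solar ** (-2 / 3)

# A 12 Earth-mass planet (the HPF upper limit) on a ~6-day orbit around
# an assumed 0.3 solar-mass star induces a wobble of order 10 m/s.
print(f"K = {rv_semi_amplitude(12, 6.0, 0.3):.1f} m/s")
```

A signal near 10 m/s sits comfortably within reach of a precision spectrograph, which is why the absence of anything larger in the HPF data translates directly into a mass upper limit.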

The paper is Stefansson et al., “A Sub-Neptune-sized Planet Transiting the M2.5 Dwarf G 9-40: Validation with the Habitable-zone Planet Finder,” Astronomical Journal Vol. 159, No. 3 (12 February 2020). Abstract / preprint.


How NASA Approaches Deep Space Missions

Centauri Dreams reader Charley Howard recently wrote to ask how NASA goes about setting its mission priorities and analyzing mission concepts like potential orbiter missions to the ice giants. It’s such a good question that I floated it past Ashley Baldwin, who is immersed in the evolution of deep space missions and moves easily within the NASA structure to extract relevant information. Dr. Baldwin had recently commented on ice giant mission analysis by the Outer Planets Assessment Group. But what is this group, and where does it fit within the NASA hierarchy? Here is Ashley’s explanation, along with links to excellent sources of information on the various mission concepts under analysis for various targets, and a bit of trenchant commentary.

By Ashley Baldwin

Each of the relevant NASA advisory groups has its own page on the NASA site with archives stuffed full of great presentations. The most germane to our discussion here is the Outer Planets Assessment Group (OPAG). My own focus has been on the products OPAG and the other PAGs produce, though OPAG produces the most elegant presentations with interesting subject matter. Product more than process is my focus, along with politics with a little ‘p’ within the NASA administration, and ‘high’ politics with a big P.

There are a number of such “advisory groups” feeding into NASA through its Planetary Science Advisory Committee (PAC), some of them of direct interest to Centauri Dreams readers:

Exoplanet Exploration Program Analysis Group (ExoPAG);

Mars Exploration Program Analysis Group (MEPAG);

Venus Exploration Analysis Group (VEXAG);

Lunar Exploration Analysis Group (LEAG);

Small Bodies Assessment Group (SBAG)

The relative influence of these groups doubtless waxes and wanes over time, with Mars in the ascendancy for a long time and Venus in inferior conjunction for ages. Most were formed in 2004, with the exoplanet group unfortunately a year later (see * below for my thoughts on why and how this happened).

These groups are essentially panels of relevant experts/academics — astronomers, astrophysicists, geophysicists, planetary scientists, astronautical engineers, astrobiologists etc — from within the various NASA centers (JPL, Glenn, Goddard et al.), along with universities and related institutions. The chairpersons are elected and serve a term of three years. James Kasting, for instance, chaired the exoplanetary advisory group ExoPAG during the first decade of this century.

Each group has two to three full member meetings per year which are open to the public. They have set agendas and take the form of plenary sessions discussing presentations – all of which are made available in the meeting archives, which over the years tell the story of what is being prioritised as well as offering a great deal on planetary science. There are also more frequent policy committee meetings, some of which I have attended via Skype. The PAGs also work in collaboration with other space agencies, the European Space Agency (ESA) and Japan Aerospace Exploration Agency (JAXA) in particular. This all creates technological advice that informs and is informed by NASA policy, which is in turn informed politically, as you would imagine. All of this leads to the missions under consideration, such as Europa Clipper, the Space Launch System (SLS), the James Webb Space Telescope (JWST), the International Space Station (ISS) and the planning for future manned Lunar/Martian landings.

NASA can task the advisory groups to produce work relating to particular areas, such as ice giant missions, and to contribute towards the Decadal studies via a report due in March of 2023. On the Decadals: The National Research Council (NRC) conducts studies that provide a science community consensus on key questions being examined by NASA and other agencies. The broadest of these studies in NASA’s areas of research are the Decadal surveys.

So once each decade NASA and its partners ask the NRC to project 10 or more years into the future in order to prioritize research areas and develop mission concepts needed to make the relevant observations. You can find links to the most recent Decadal surveys here.

There is obviously jostling and internal competition for each group to get its priorities as high up the Decadal priority list as possible, bearing in mind that there is a similar and equally competitive pyramid of lobbying for astrophysics, earth science and heliophysics.

Each PAG is encouraged to get its members to submit ‘white papers’, both individually and collectively, championing research areas they feel are relevant. That adds up to thousands of papers, so no wonder they need some serious and time-consuming collation to produce the final document. This time around it will be Mars sample return versus the ice giants vying for the all-important top spot (anything less than this and you are unlikely to receive a once-a-decade flagship mission).

The Planetary Science Advisory Committee, in turn, advises the central NASA Advisory Council (NAC). Its members are appointed at the discretion of and are directly advisory to the NASA administrator on all science matters within NASA’s purview. NAC was formed from the merger of several related groups in 1977, though its origins predate NASA’s formation in 1958.

The Discovery (small) and New Frontiers (medium) Planetary Science programmes (with “flagship” missions like Clipper effectively being “large,” occurring generally once per decade) each run over a five-year cycle, with one New Frontiers mission picked each round and up to two Discovery missions chosen. This follows short-listing from all concepts submitted in response to an “announcement of opportunity” – the formal NASA application process. The Discovery and New Frontiers programmes are staggered, as are the missions chosen under them, with the aim of having a mission launch roughly every 24 months, presumably to help spread out operational costs.

Both Discovery and New Frontiers come with a set budget cap: $850-1,000 million for New Frontiers and $500 million for Discovery. On top of this, however, they receive a free launcher (from a preselected list) and some or all operational costs for the duration of the primary mission (which without extensions is about two years for a Discovery mission like InSight and three to four years for a New Frontiers mission). There is also varying additional government-furnished equipment (GFE) on offer, consisting of equipment, special tooling, or special test equipment.

Sometimes other additional-cost technology is included, such as multi-mission radioisotope thermoelectric generators (MMRTGs). Two have been slotted this time around for Discovery, which is very unusual, as MMRTGs are at a premium and generally limited to New Frontiers missions or bigger. There were three on offer for last year’s New Frontiers round, but as Dragonfly to Titan only needs one, there were two left over, and they have only a limited shelf life.

This Discovery round has also broken with former policy in that ALL operations costs are being covered, including those outside the mission proper (i.e., whilst in transit to the target), thus removing cost penalties for missions with long transit times, like Trident to Triton. Even in hibernation there are system engineering costs, and maintaining a science team adds up to several million dollars per year on top of that. A big clue as to NASA’s Planetary Science Division’s priorities? I hope so!

The Explorer programme is the Astrophysics Division’s parallel process, run in similar fashion with one Medium Explorer and one Small Explorer (budget cap $170 million) picked every five years, though each programme is again staggered, effectively pushing out a mission about every two and a half years. There is some talk of the next Decadal study creating a funded “Probe” programme. Such programmes are generally only conceptual, but there is talk of a $1 billion budget for some sort of astrophysics mission, hopefully exoplanet related. No more than gossip at this point, though.

* And here is the ExoPAG bone of contention I mentioned above. Kepler was selected as a Discovery mission in 2003, prior to the formation of ExoPAG, and the rest of the planetary science groups went ballistic. This led to NASA excluding exoplanet missions from future Discovery and New Frontiers rounds. Despite the tremendous success of Kepler, this limited ExoPAG to analogous but smaller Astrophysics Explorer funding. These are small- and medium-class, PI-led astrophysics missions, as well as astrophysics missions of opportunity.

Imagine what could have been produced, for instance, if the ESA’s ARIEL (or EChO) transit telescope had been done in conjunction with a New Frontiers budget instead of Astrophysics Explorer. The Medium Explorer budget reaches $200 million plus; New Frontiers gets up to $850-1000 million.


Juno: Looking Deep into Jupiter’s Atmosphere

We’re learning more about the composition of Jupiter’s atmosphere, and in particular, the amount of water therein, as a result of data from the Juno mission. The data come in the 1.25 to 22 GHz range from Juno’s microwave radiometer (MWR), depicting the deep atmosphere in the equatorial region. Here, water (considered in terms of its component oxygen and hydrogen) makes up about 0.25 percent of the molecules in Jupiter’s atmosphere, almost three times the percentage found in the Sun. All of this gets intriguing when compared to the results from Galileo.

You’ll recall that the Galileo probe descended into the Jovian atmosphere back in 1995, sending back spectrometer measurements of the amount of water it found down to almost 120 kilometers, where atmospheric pressure reached 320 pounds per square inch (22 bar). Unlike Juno, Galileo showed that Jupiter might be dry compared to the Sun — there was in fact ten times less water than expected — but it also found water content increasing even as it reached its greatest depth, an oddity given the assumption that mixing in the atmosphere would create a constant water content. Did Galileo run into some kind of meteorological anomaly?

A new paper in Nature Astronomy looks at the matter as part of its analysis of the Juno results, which also depict an atmosphere not well mixed:

The findings of the Galileo probe were puzzling because they showed that where ammonia and hydrogen sulfide become uniformly mixed occurs at a level much deeper (~10 bar) than what was predicted by an equilibrium thermochemical model. The concentration of water was subsolar and still increasing at 22 bar, where radio contact with the probe was lost, although the concentrations of nitrogen and sulfur stabilized at ~3 times solar at ~10 bar. The depletion of water was proposed to be caused by meteorological effects at the probe location. The observed water abundance was assumed not to represent the global mean water abundance on Jupiter, which is an important quantity that distinguishes planetary formation models and affects atmospheric thermal structure.

Now Juno has found water content greater than what Galileo measured. But the fact that Galileo showed a water concentration that was still increasing when the probe could no longer send data makes its results inconclusive. The matter is important for those interested in planet formation because as the likely first planet to form, Jupiter would have contained the great bulk of gas and dust that did not go into the composition of the Sun. Thus planet formation models are keyed to factors like the amount of water the young planet would have assimilated. Scott Bolton, Juno principal investigator at the Southwest Research Institute in San Antonio, comments:

“Just when we think we have things figured out, Jupiter reminds us how much we still have to learn. Juno’s surprise discovery that the atmosphere was not well mixed even well below the cloud tops is a puzzle that we are still trying to figure out. No one would have guessed that water might be so variable across the planet.”

Image: The JunoCam imager aboard NASA’s Juno spacecraft captured this image of Jupiter’s southern equatorial region on Sept. 1, 2017. The image is oriented so Jupiter’s poles (not visible) run left-to-right of frame. Credit: NASA/JPL-Caltech/SwRI/MSSS/Kevin M. Gill.

The research team, led by Cheng Li (JPL/Caltech), used data from Juno's first eight science flybys, focusing on the equatorial region first because the atmosphere appears to be better mixed there than in other regions. Juno's microwave radiometer can measure the absorption of microwave radiation by water at multiple depths at the same time. Using these methods, Juno could collect data from deeper in the atmosphere than Galileo, down to pressures of about 480 psi (33 bar). The next move will be to compare this with other regions, giving us a picture of water abundance as Juno coverage extends deeper into Jupiter's northern hemisphere. Of particular interest will be what Juno will find at the planet's poles.
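For readers who want to check the unit conversions quoted here (Galileo's 22 bar against roughly 320 psi, Juno's 33 bar against roughly 480 psi), a quick back-of-the-envelope sketch in Python, using the standard conversion factor of 14.5038 psi per bar:

```python
# Sanity-check the pressure figures quoted in the article.
# Standard conversion: 1 bar = 14.5038 pounds per square inch.
PSI_PER_BAR = 14.5038

def bar_to_psi(bar):
    """Convert a pressure in bar to pounds per square inch."""
    return bar * PSI_PER_BAR

# Galileo's deepest measurement, ~22 bar (quoted as ~320 psi):
print(round(bar_to_psi(22)))  # -> 319
# Juno's microwave radiometer reach, ~33 bar (quoted as ~480 psi):
print(round(bar_to_psi(33)))  # -> 479
```

Both come out within a psi or two of the rounded figures in the text.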

From the paper:

We have shown that the structure of Jupiter's EZ [equatorial zone] is steady, relatively uniform vertically and close to a moist adiabat [a temperature profile along which rising saturated air neither gains nor loses heat to its surroundings]; from this we have derived its water abundance. The thermal structure outside of the equator is still ambiguous owing to the non-uniform distribution of ammonia gas, for which we do not know the physical origin. Deriving the thermal structure outside of the equator in the future not only hints about the water abundance on Jupiter at other latitudes but also places constraints on the atmospheric circulation model for giant planets in the Solar System and beyond.

Image: Thick white clouds are present in this JunoCam image of Jupiter’s equatorial zone. These clouds complicate the interpretation of infrared measurements of water. At microwave frequencies, the same clouds are transparent, allowing Juno’s Microwave Radiometer to measure water deep into Jupiter’s atmosphere. The image was acquired during Juno’s flyby of the gas giant on Dec. 16, 2017. Credit: NASA/JPL-Caltech/SwRI/MSSS/Kevin M. Gill.

The authors add that Juno has already revealed a deep atmosphere that is surprisingly variable as a function of latitude, highlighting the need to tread cautiously before making any assumptions about the planet's overall water abundance. Extending these observations into other regions of the planet will be useful because oxygen is the most common element after hydrogen and helium in Jupiter's atmosphere, and water ice may thus have been the primary condensable in the protoplanetary disk. Consider this a deep probe into planet formation.

The paper is Li et al., “The water abundance in Jupiter’s equatorial zone,” Nature Astronomy 10 February 2020 (abstract).


Trident: Firming up the Triton Flyby

It’s not a Triton, or even a Neptune orbiter, but Trident is still an exciting mission, a Triton flyby that would take a close look at the active resurfacing underway on this remarkable moon. Trident has recently been selected by NASA’s Discovery Program as one of four science investigations, of which one or two will be chosen at the end of the study for development and launch in the 2020s.

These are nine-month studies, and they include, speaking of young and constantly changing surfaces, the Io Volcanic Observer (IVO). The other two missions are the Venus Emissivity, Radio Science, InSAR, Topography, and Spectroscopy (VERITAS) mission, and DAVINCI+ (Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging Plus).

Each of these studies will receive $3 million to bring its concepts to fruition, concluding with a Concept Study Report, at which point we’ll get word on the one or two that have made it to further development and flight. The NASA Discovery program has been in place since 1992, dedicated to supporting smaller missions with lower cost and shorter development times than the larger flagship missions. That these missions can have serious clout is obvious from some of the past selections: Kepler, Dawn, Deep Impact, MESSENGER, Stardust and NEAR.

Active missions at the moment include Lunar Reconnaissance Orbiter and InSight, but we leave the inner system with Lucy, a Discovery mission visiting a main belt asteroid as well as six Jupiter trojans, and Psyche, which will explore the unusual metal asteroid 16 Psyche. Discovery missions carry a $500 million cost-cap excluding the launch vehicle, operations, data analysis and partner contributions. The next step up in size is New Frontiers, now with a $1 billion cost-cap — here we can mention New Horizons, OSIRIS-REx and Juno as well as Dragonfly.

I assume that New Horizons’ success at Pluto/Charon helped Trident along, showing how much good science can be collected from a flyby. Triton makes for a target of high interest because of its atmosphere and erupting plumes, along with the potential for an interior ocean. The goal of Trident is to characterize the processes at work while mapping a large swath of Triton and learning whether in fact the putative ocean beneath the surface exists. A mid-2020s launch takes advantage of a rare and efficient gravity assist alignment to make the mission feasible. Louise Prockter, director of the Lunar and Planetary Institute in Houston, is principal investigator.

Image: Dr. Louise Prockter, program director for the Universities Space Research Association, as well as director of the Lunar and Planetary Institute, is now principal investigator for Trident. Credit: USRA.

We can thank Voyager 2 for providing our only close-up images of Triton, which was revealed to be a place where explosive venting blows dark material from beneath the ice into the air, material which falls back onto the surface to create new features. The terrain is varied and notable for the striking ‘cantaloupe’ pattern covering large areas. With its distinctive retrograde orbit, circling opposite to Neptune’s rotation at high inclination, Triton may well be an object captured from the Kuiper Belt, in an orbit where tidal forces likely lead to interior heating that could maintain an ocean. What we learn here could inform our understanding not just of KBOs, but also giant moons like Titan and Europa, and smaller ocean worlds like Enceladus.

This would be a flyby with abundant opportunities for data collection, as this precis from the 2019 Lunar and Planetary Science Conference makes clear:

An active-redundant operational sequence ensures unique observations during an eclipse of Triton – and another of Neptune itself – and includes redundant data collection throughout the flyby… High-resolution imaging and broad-spectrum IR imaging spectroscopy, together with high-capacity onboard storage, allow near-full-body mapping over the course of one Triton orbit… Trident passes through Triton’s thin atmosphere, within 500 km of the surface, sampling its ionosphere with a plasma spectrometer and performing magnetic induction measurements to verify the existence of an extant ocean. Trident’s passage through a total eclipse allows observations through two atmospheric radio occultations for mapping electron and neutral atmospheric density, Neptune-shine illuminated eclipse imaging for change detection since the 1989 Voyager 2 flyby, and high-phase angle atmospheric imaging for mapping haze layers and plumes.

Image: Global color mosaic of Triton, taken in 1989 by Voyager 2 during its flyby of the Neptune system. Color was synthesized by combining high-resolution images taken through orange, violet, and ultraviolet filters; these images were displayed as red, green, and blue images and combined to create this color version. With a radius of 1,350 kilometers (839 mi), about 22% smaller than Earth’s moon, Triton is by far the largest satellite of Neptune. It is one of only three objects in the Solar System known to have a nitrogen-dominated atmosphere (the others are Earth and Saturn’s giant moon, Titan). Triton has the coldest surface known anywhere in the Solar System (38 K, about -391 degrees Fahrenheit); it is so cold that most of Triton’s nitrogen is condensed as frost, making it the only satellite in the Solar System known to have a surface made mainly of nitrogen ice. The pinkish deposits constitute a vast south polar cap believed to contain methane ice, which would have reacted under sunlight to form pink or red compounds. The dark streaks overlying these pink ices are believed to be an icy and perhaps carbonaceous dust deposited from huge geyser-like plumes, some of which were found to be active during the Voyager 2 flyby. The bluish-green band visible in this image extends all the way around Triton near the equator; it may consist of relatively fresh nitrogen frost deposits. The greenish areas include what is called the cantaloupe terrain, whose origin is unknown, and a set of “cryovolcanic” landscapes apparently produced by icy-cold liquids (now frozen) erupted from Triton’s interior.
Credit: NASA/JPL/USGS.

If it flies, Trident would launch in 2026 and reach Triton in 2038, using gravity assists at Venus, the Earth and, finally, Jupiter for a final course deflection toward Neptune. The current thinking is to bring the spacecraft, which will weigh about twice New Horizons’ 478 kg, within 500 kilometers of Triton, a close pass indeed compared to New Horizons’ 12,500 kilometer pass by Pluto. This is close enough for the spacecraft to sample Triton’s ionosphere and conduct the needed magnetic induction measurements to confirm or refute the existence of its ocean. As this mission firms up, we’ll be keeping a close eye on its prospects in the outer system. Remember, too, the 2017 workshop in Houston examining a possible Pluto orbiter, still a long way from being anything more than a concept, but interesting enough to make the pulse race.

My friend Ashley Baldwin, who sent along some good references re Trident, also noted that Trident’s trajectory is such that the gravity assist around Jupiter could, at 1.24 Jupiter radii, provide a close flyby of Io. Interesting in terms of the competing Io Volcanic Observer entry.
