Where Do ‘Hot Neptunes’ Come From?

Learning about the orbital tilt of a distant exoplanet may help us understand how young planets evolve, and especially how they interact with both their star and other nearby planets. Thus the question of ‘hot Neptunes’ and the mechanisms that put them in place. The issue has been under study since 2004. Are we looking at planets laden with frozen ices that have somehow migrated to the inner system, or are these worlds that formed in place, so that their heavy elements are highly refractory materials that can withstand high disk temperatures?

Among the exoplanets that can give us guidance here is DS Tuc Ab, discovered in 2019 in data from the TESS mission (Transiting Exoplanet Survey Satellite). Here we have a young world in a binary system in the constellation Tucana, whose host star is conveniently part of the 45 million year old Tucana-Horologium moving group (allowing us to establish its age). The binary pairs a G-class star with a K-class star, with DS Tuc Ab orbiting the G-class primary.

A team at the Center for Astrophysics | Harvard & Smithsonian has developed a new modeling tool that is described in a paper to be published in The Astrophysical Journal Letters, one that allows them to measure the orbital tilt of DS Tuc Ab, the first time the tilt of a planet this young has been determined. Systems evolve over billions of years, making analysis of the formation and orbital configuration of their planets difficult. It’s clear from the team’s work that DS Tuc Ab did not, in the words of the CfA’s George Zhou, “get flung into its star system. That opens up many other possibilities for other, similar young exoplanets…”

The work was complicated by the fact that the host star, DS Tuc A, had up to 40% of its surface covered in star spots. David Latham (CfA) describes the situation:

“We had to infer how many spots there were, their size, and their color. Each time we’d add a star spot, we’d check its consistency with everything we already knew about the planet. As TESS finds more young stars like DS Tuc A, where the shadow of a transiting planet is hidden by variations due to star spots, this new technique for uncovering the signal of the planet will lead to a better understanding of the early history of planets in their infancy.”

The work proceeded using the Planet Finder Spectrograph on the Magellan Clay Telescope at Las Campanas Observatory in Chile, with the goal of finding out whether this newly formed world had experienced chaotic interactions in its past that could account for its current orbital position. The analysis involved modeling how the planet blocked light as it crossed the surface of the star, folding in the team’s projections of how the star spots changed the emitted stellar light. A well-aligned orbit would block a consistent amount of light as the planet passed across the star’s surface. The method should aid the study of other young ‘hot Neptunes.’
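To see why spots matter, consider that the dip in brightness at any instant during a transit scales with the surface brightness of the patch the planet happens to cover, so a planet crossing a dark spot blocks less light than one crossing unspotted photosphere. A toy sketch (not the CfA team’s actual model, and with made-up numbers chosen only to illustrate the point):

```python
# Toy illustration (not the CfA team's model): the instantaneous transit depth
# scales with the local surface brightness of the patch being covered.
# All numbers below are illustrative placeholders.

def instantaneous_depth(rp_over_rstar, local_intensity, mean_intensity):
    """Fractional flux blocked when the planet covers a patch of given intensity."""
    return (rp_over_rstar ** 2) * (local_intensity / mean_intensity)

rp_over_rstar = 0.05     # planet-to-star radius ratio (placeholder)
mean_intensity = 1.0     # disk-averaged surface brightness (normalized)
spot_intensity = 0.7     # a cool star spot is dimmer than the mean (placeholder)

print(f"Depth over unspotted photosphere: {instantaneous_depth(rp_over_rstar, 1.0, mean_intensity):.5f}")
print(f"Depth while crossing a spot:      {instantaneous_depth(rp_over_rstar, spot_intensity, mean_intensity):.5f}")
```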

Image: Animation courtesy of George Zhou, CfA. Top right illustration of DS Tuc AB by M. Weiss, CfA.

Benjamin Montet (University of New South Wales) is lead author of a companion paper on which the CfA team were co-authors (citation below). Montet’s team used a different technique, the Rossiter-McLaughlin effect, to study the planet. Here the researchers measure slight blueshifts and redshifts in the star’s spectrum while simultaneously modeling both transit and stellar activity signals. Says Montet:

“DS Tuc Ab is at an interesting age. We can see the planet, but we thought it was still too young for the orbit of other distant stars to manipulate its path.”

What the combined work suggests is that DS Tuc Ab, because of its youth, probably did not form further out and migrate in. Its low orbital tilt also indicates that the second star in the binary did not produce interactions that pulled the planet into its current position. The authors of the Montet paper consider this work “a first data point” in an analysis that may eventually confirm or rule out the hypothesis that wide binary companions can tilt protoplanetary disks to produce high inclination orbits in the planets that form within them.

A good deal of work is ahead to understand such young systems, but the methods the Montet team used are promising, for the Rossiter-McLaughlin (R-M) effect proves to be potent. From the already published companion paper:

DS Tuc Ab is one of a small number of planets to be confirmed by a detection of its R-M signal rather than its spectroscopic orbit. This approach may be the optimal strategy for future confirmation of young planets orbiting rapidly-rotating stars. While the RV [radial velocity] of the star varies on rotational period timescales at the 300 m s⁻¹ level, it does so relatively smoothly over transit timescales, enabling us to cleanly disentangle the stellar and planetary signals. While this planet would require a dedicated series of many spectra and a detailed data-driven analysis to measure a spectroscopic orbit, the R-M signal is visible by eye in observations from a single night. For certain systems, in addition to a more amenable noise profile, the amplitude of the R-M signal can be larger than the Doppler amplitude. Similar observations to these should be achievable for more young planets as they are discovered, which will shed light onto the end states of planet formation in protoplanetary disks.
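To get a feel for why the R-M signal can dwarf the orbital Doppler signal for a young, rapidly rotating star, here is a rough back-of-the-envelope comparison using the standard approximations. All parameter values are illustrative placeholders for a Neptune-sized planet on a roughly week-long orbit around a Sun-like fast rotator, not figures taken from either paper:

```python
import math

# Back-of-the-envelope comparison of the Rossiter-McLaughlin (R-M) amplitude
# with the Doppler (orbital radial-velocity) semi-amplitude.
# All parameter values are illustrative placeholders, not from the papers.

G       = 6.674e-11          # gravitational constant, SI
M_sun   = 1.989e30           # kg
M_earth = 5.972e24           # kg
R_sun   = 6.957e8            # m
R_earth = 6.371e6            # m

M_star  = 1.0 * M_sun        # host star mass (placeholder)
R_star  = 1.0 * R_sun        # host star radius (placeholder)
M_p     = 20 * M_earth       # planet mass (placeholder; Neptune-class)
R_p     = 5.7 * R_earth      # planet radius (placeholder)
P       = 8.1 * 86400        # orbital period, seconds (placeholder)
vsini   = 18e3               # projected stellar rotation speed, m/s (fast young rotator)
b       = 0.2                # transit impact parameter (placeholder)

# Approximate R-M amplitude: ~ (2/3) * (Rp/Rs)^2 * v sin i * sqrt(1 - b^2)
rm_amp = (2.0 / 3.0) * (R_p / R_star) ** 2 * vsini * math.sqrt(1 - b ** 2)

# Doppler semi-amplitude for a circular, edge-on orbit
K = (2 * math.pi * G / P) ** (1.0 / 3.0) * M_p / (M_star + M_p) ** (2.0 / 3.0)

print(f"R-M amplitude     ~ {rm_amp:.0f} m/s")   # ~30 m/s with these placeholders
print(f"Doppler amplitude ~ {K:.1f} m/s")        # ~6 m/s with these placeholders
```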

So now we’re seeing two complementary methods for studying young planetary systems, both of which have turned in useful data on how one ‘hot Neptune’ must have formed.

Results of the CfA study will be published in The Astrophysical Journal Letters. The companion study is Montet et al., “The Young Planet DS Tuc Ab Has a Low Obliquity,” The Astronomical Journal Vol. 159, No. 3 (20 February 2020). Abstract / Preprint.


WFIRST: Ready for Construction

With the James Webb Space Telescope now declared ‘a fully assembled observatory’ by NASA, environmental tests loom for the instrument, which is now slated for launch in March of 2021. Within that context, we need to place WFIRST (Wide-Field Infrared Space Telescope), whose development was delayed for several years because of cost overruns on JWST. Recall that WFIRST was the top priority for a flagship mission in the last astrophysics Decadal Survey.

The good news is that NASA has just announced that WFIRST has passed what it is calling ‘a critical programmatic and technical milestone,’ which opens the path to hardware development and testing. With a viewing area 100 times larger than Hubble’s, WFIRST will be able to investigate dark energy and dark matter while at the same time examining exoplanets by applying microlensing techniques to the inner Milky Way. Its exoplanet capabilities could be significantly extended if additional budgeting for a coronagraph, which would allow direct imaging of exoplanets, winds up being approved.

And that is a big ‘if.’ No one doubts the power of a coronagraph onboard WFIRST to block the light of a central star in order to examine any planets found around it, but this telescope has already suffered considerable budget anxiety, leading NASA to separate the coronagraph, now described as a ‘technology demonstration,’ from the $3.2 billion budget estimate. Adding the coronagraph and subsequent operations would take the total WFIRST tally to $3.9 billion.

Image: This graphic shows a simulation of a WFIRST observation of M31, also known as the Andromeda galaxy. Hubble used more than 650 hours to image areas outlined in blue. Using WFIRST, covering the entire galaxy would take only three hours. Credits: DSS, R. Gendler, NASA, GSFC, ASU, STScI, B. F. Williams.

Will Congress approve funding for the coronagraph, or will WFIRST fly without it? Will WFIRST fly at all, given that Congress has already had to save the telescope twice from cancellation? The current FY2021 budget request proposes terminating the telescope, but it continues to receive congressional support and remains on schedule for a 2025 launch. It should be noted that a coronagraph was not part of the Decadal Survey’s recommendations, which factors into the discussion and may put pressure on those hoping to raise the needed additional funding.

Note this from NASA’s March 2 statement:

The FY2020 Consolidated Appropriations Act funds the WFIRST program through September 2020. The FY2021 budget request proposes to terminate funding for the WFIRST mission and focus on the completion of the James Webb Space Telescope, now planned for launch in March 2021. The Administration is not ready to proceed with another multi-billion-dollar telescope until Webb has been successfully launched and deployed.

The expectation is that Congress will keep WFIRST in the budget, but the coronagraph remains an open question. So the first priority is keeping the mission alive, and it’s clear that the cost overruns on the James Webb instrument, which have so exasperated astronomers and politicians alike, have played a role in keeping the brakes on WFIRST spending. As we saw recently in these pages, the Decadal Surveys (from the National Academies of Sciences, Engineering, and Medicine) set science priorities for NASA and other science agencies. The absence of a coronagraph from the last astrophysics Decadal Survey doesn’t help its chances now.


Cosmic Expansion: A Close Look at a ‘Standard Candle’

Astronomy relies on so-called ‘standard candles’ to make crucial measurements of distance. Cepheid variables, perhaps the most famous stars in this category, were examined by Henrietta Swan Leavitt beginning in 1908 as part of her study of variable stars in the Magellanic Clouds, work that revealed the relationship between this type of star’s period and its luminosity. Edwin Hubble would use distance calculations based on this relationship to estimate how far what was then called the ‘Andromeda Nebula’ was from our galaxy, revealing the true nature of the ‘nebula.’

In recent times, astronomers have used type Ia supernovae in much the same way, for comparing a source’s intrinsic brightness with what is observed in the sky likewise determines distance. The most commonly described type Ia supernova model involves a binary system in which one of the stars is a white dwarf, and the assumption among astronomers has been that this category of supernova produces a consistent peak luminosity that can be used to measure interstellar, and intergalactic, distances.

It was through the study of type Ia supernovae that the idea of dark energy arose to explain the apparent acceleration of the universe’s expansion, and these supernovae also underpin our methods for measuring the Hubble constant, which gauges the current expansion rate of the cosmos.
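As a refresher on how a standard candle feeds into the expansion rate, here is a minimal sketch of the chain of inference: an assumed peak absolute magnitude gives a distance via the distance modulus, and pairing that distance with a recession velocity yields a value for the Hubble constant. The supernova in the example is hypothetical and the numbers are placeholders:

```python
# Minimal sketch of the standard-candle chain: absolute magnitude -> distance
# modulus -> distance -> Hubble constant. All numbers are illustrative.

def distance_pc(apparent_mag, absolute_mag):
    """Distance in parsecs from the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

M_ia  = -19.3      # assumed peak absolute magnitude of a type Ia supernova
m_obs = 16.9       # apparent peak magnitude of a hypothetical supernova (placeholder)
v_rec = 12000.0    # recession velocity of its host galaxy, km/s (placeholder)

d_mpc = distance_pc(m_obs, M_ia) / 1e6   # distance in megaparsecs
H0 = v_rec / d_mpc                       # Hubble constant, km/s per Mpc

print(f"Distance ~ {d_mpc:.0f} Mpc, implying H0 ~ {H0:.0f} km/s/Mpc")
```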

Given the importance of standard candles to astronomy, we have to get them right. Now we have new work out of the Max Planck Institute for Astronomy in Heidelberg. A team led by Maria Bergemann calls our assumptions about these supernovae into question, and that could force a reassessment of the rate of cosmic expansion. At issue: Are all type Ia supernovae the same?

Bergemann’s work on stellar atmospheres has, since 2005, focused on new models for examining the spectral lines observed there, the crucial measurements that yield a star’s temperature, surface pressure and chemical composition. Computer simulations of convection within a star and the interactions of its plasma with the stellar radiation field have been producing and reinforcing so-called non-LTE models, which drop the assumption of local thermodynamic equilibrium, leading to new ways of deriving chemical abundances that alter our previous findings for some elements.

The team at MPIA has zeroed in on the element manganese, using observational data in the near-ultraviolet and extending the analysis beyond single stars to the combined light of numerous stars in a stellar cluster, which allows the examination of other galaxies. It takes a supernova explosion to produce manganese, and different types of supernova produce iron and manganese in different ratios. Thus a massive star going supernova, a ‘core collapse supernova,’ produces manganese and iron in different proportions than a type Ia supernova does.

Image: By examining the abundance of the element manganese, a group of astronomers has revised our best estimates for the processes behind supernovae of type Ia. Credit: R. Hurt/Caltech-JPL, Composition: MPIA graphics department.

Working with a sample of 42 stars within the Milky Way, the team has essentially been reconstructing the evolution of iron and manganese as produced through type Ia supernova explosions. The researchers used iron abundance as an indicator of each star’s age relative to the others; these findings allow them to track the history of manganese in the Milky Way. What they are uncovering is that the ratio of manganese to iron has been constant over the age of our galaxy. The same constant ratio between manganese and iron is found in other galaxies of the Local Group, emerging as what appears to be a universal chemical constant.
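For reference, astronomers express such ratios in a logarithmic bracket notation measured relative to the Sun (a standard convention, not something specific to this paper):

$$[\mathrm{Mn/Fe}] = \log_{10}\left(\frac{N_{\mathrm{Mn}}}{N_{\mathrm{Fe}}}\right)_{\star} - \log_{10}\left(\frac{N_{\mathrm{Mn}}}{N_{\mathrm{Fe}}}\right)_{\odot}$$

A value near zero means a star’s manganese-to-iron ratio matches the Sun’s, and a value that stays flat across stars of different iron abundance (and hence different ages) is what ‘constant over the age of our galaxy’ means here.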

This is a result that differs from earlier findings. Previous manganese measurements relied on the older LTE models, which assume local thermodynamic equilibrium and treat stars as static, spherically symmetric atmospheres with pressure and gravity in balance. Such work helped reinforce the idea that type Ia supernovae most often occur when a white dwarf draws material from a giant companion. The data in Bergemann’s work, analyzed with non-LTE models, come from ESO’s Very Large Telescope and the Keck Observatory. A different conclusion emerges about how type Ia supernovae occur.

The assumption has been that these supernovae happen when a white dwarf orbiting a giant star pulls hydrogen onto its own surface and becomes unstable, having hit the limiting mass derived by Subrahmanyan Chandrasekhar in 1930 (the “Chandrasekhar limit”). This limiting mass means that the total mass of the exploding star is the same from one type Ia supernova to another, which governs the brightness of the supernova and produces our ‘standard candle.’ The 2011 Nobel Prize in Physics awarded to Saul Perlmutter, Brian Schmidt, and Adam Riess grew out of using type Ia supernovae as distance markers, with readings showing that the expansion of the universe is accelerating, out of which we get ‘dark energy.’
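As an aside, the limiting mass itself can be estimated from fundamental constants. Here is a minimal sketch of the standard textbook (polytrope) formula, included purely for illustration; it is not drawn from the Bergemann paper:

```python
import math

# Textbook estimate of the Chandrasekhar mass from fundamental constants.
hbar  = 1.0546e-34   # reduced Planck constant, J s
c     = 2.998e8      # speed of light, m/s
G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
m_H   = 1.6735e-27   # mass of a hydrogen atom, kg
M_sun = 1.989e30     # solar mass, kg

mu_e  = 2.0          # mean molecular weight per electron (carbon-oxygen white dwarf)
omega = 2.018        # Lane-Emden constant for an n = 3 polytrope

M_ch = omega * (math.sqrt(3 * math.pi) / 2) * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2
print(f"Chandrasekhar mass ~ {M_ch / M_sun:.2f} solar masses")   # ~1.4
```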

But the work of Bergemann and team shows that other ways of producing a type Ia supernova may better fit the manganese/iron ratio results. These mechanisms may look the same as the white dwarf/red giant scenario, but because they operate differently, their brightness varies. Two white dwarfs may orbit each other and merge in an explosion, or a double detonation can occur when material accreted onto the white dwarf detonates on its surface and triggers a second explosion in the carbon-oxygen core. In both cases, we are looking at a different scenario than the standard type Ia.

The problem: These alternative supernova scenarios do not necessarily follow the standard candle model. Double detonation explosions do not require a star to reach the Chandrasekhar mass limit. Explosions below this limit will not be as bright as those in the standard type Ia scenario, meaning the well-defined intrinsic brightness we are looking for in these events is not a reliable measure. And the constant ratio of manganese to iron that the researchers have found implies that non-standard type Ia supernovae are not the exception but the rule. As many as three out of four type Ia supernovae may be of this sort.

This is the third paper in a series that is designed to provide observational constraints on the origin of elements and their evolution within the galaxy. The paper notes that the evolution of manganese relative to iron is “a powerful probe of the epoch when SNe Ia started contributing to the chemical enrichment and, therefore, of star formation of the galactic populations.” The models of non-local thermodynamic equilibrium produce a fundamentally different result from earlier modeling and raise questions about the reliability of at least some type Ia measurements to gauge distance.

Given existing discrepancies between the Hubble constant as measured by type Ia supernovae and by other methods, Bergemann and team have nudged the cosmological consensus in a sensitive place, showing the need to re-examine our standard candles, not all of which may be standard. Upcoming observational data from the gravitational wave detector LISA (due for launch in the 2030s) may offer a check on the prevalence of white dwarf binaries that could confirm or refute this work. Even sooner, we should have the next data release (DR3) of ESA’s Gaia mission as a valuable reference.

The paper is Eitner et al., “Observational constraints on the origin of the elements III. Evidence for the dominant role of sub-Chandrasekhar SN Ia in the chemical evolution of Mn and Fe in the Galaxy,” in press at Astronomy & Astrophysics (preprint).


Voyager and the Deep Space Network Upgrade

The fault protection routines programmed into Voyager 1 and 2 were designed to protect the spacecraft in the event of unforeseen circumstances. Such an event occurred in late January, when a rotation maneuver planned to calibrate Voyager 2’s onboard magnetic field instrument failed to execute on schedule. The delay left two systems that consume high levels of power (in Voyager terms) running at the same time, overdrawing the available power supply.

We looked at this event not long after it happened, and noted that within a couple of days the Voyager team was able to turn off one of the systems and turn the science instruments back on. Normal operations aboard Voyager 2 were announced on March 3, with the five operating science instruments that had been turned off once again returning data. Such autonomous operation is reassuring because Voyager 2 is about to lose the ability to receive commands from Earth, owing to upgrades to the Deep Space Network in Australia. The situation is temporary, but it will last the entire 11 months of the upgrade period.

Fortunately, scientists will still be able to receive science data from the craft, which is now more than 17 billion kilometers from Earth, but they will not be able to send commands to it during this period. The Canberra site is critical to the Voyager interstellar mission because its 70-meter antenna is the only one of the DSN’s three 70-meter dishes that can communicate with Voyager 2, whose trajectory carries it below the Earth’s orbital plane in such a way that it can only be seen from the southern hemisphere. Thus the California (Goldstone) and Spain (Robledo de Chavela) sites are ruled out, and there is no southern hemisphere antenna other than Canberra’s DSS43 capable of sending S-band signals powerful enough to command Voyager 2.
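A quick sense of scale helps here: at that distance, even a radio signal traveling at the speed of light needs close to 16 hours one way, and more than 30 hours for a command-and-confirmation round trip. A minimal calculation:

```python
# One-way and round-trip light time to Voyager 2 at roughly 17 billion kilometers.
distance_km = 17e9            # approximate Earth-Voyager 2 distance, km
c_km_s = 299_792.458          # speed of light, km/s

one_way_hours = distance_km / c_km_s / 3600
print(f"One-way light time: {one_way_hours:.1f} hours")      # ~15.7 hours
print(f"Round trip:         {2 * one_way_hours:.1f} hours")  # ~31.5 hours
```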

Image: DSS43 is a 70-meter-wide (230-feet-wide) radio antenna at the Deep Space Network’s Canberra facility in Australia. It is the only antenna that can send commands to the Voyager 2 spacecraft. Credit: NASA/Canberra Deep Space Communication Complex.

The maintenance at DSS43 is essential because the antenna supports communication and navigation for missions like the Mars 2020 rover and for future exploration of the Moon and Mars, including, at some point, the crewed Artemis missions to the Moon. Canberra has, in addition to the 70-meter dish, three 34-meter antennae that can receive the Voyager 2 signal but are unable to transmit commands. During the period in question, Voyager 2 will continue to return data, according to Voyager project manager Suzanne Dodd:

“We put the spacecraft back into a state where it will be just fine, assuming that everything goes normally with it during the time that the antenna is down. If things don’t go normally – which is always a possibility, especially with an aging spacecraft – then the onboard fault protection that’s there can handle the situation.”

Expect the work at Canberra to be completed by January of 2021, placing an updated and more reliable antenna back into service and, presumably, continuing the active work of managing Voyager 2’s ongoing mission. Better this, engineers reason, than dealing with future unplanned outages as DSS43 ages, and the upgrades will add state-of-the-art technology to the site. Putting all this in perspective is the fact that the dish has been in service for fully 48 years.


Calculating Life’s Possibilities on Titan

With surface temperatures around -180°C, Titan presents problems for astrobiology, even if its seasonal rainfall, lakes and seas, and nitrogen-rich atmosphere bear similarities to Earth. Specifically, what kind of cell membrane can form and function in an environment this cold? Five years ago, researchers at Cornell used molecular simulations to screen the possibilities, suggesting a membrane the scientists called an azotosome, which would be built from the nitrogen-, carbon- and hydrogen-bearing molecules known to exist in Titan’s seas.

The azotosome was a useful construct because the phospholipid bilayer membranes that give rise to liposomes on Earth need an analog that can survive Titan’s conditions, a membrane that can form at cryogenic temperatures in liquid methane. The Cornell work suggested that azotosomes would offer flexibility similar to that of cell membranes found on Earth. Titan’s seas of methane and ethane, then, might offer us the chance for a novel form of life to emerge.

Now we have new work out of Chalmers University of Technology in Gothenburg, Sweden that raises serious doubts about whether azotosomes could develop on Titan. The Cornell work examined acrylonitrile, a liquid organic compound found in Titan’s atmosphere, and built the azotosome idea around it, but the Swedish team’s calculations show that azotosomes are unlikely to be able to self-assemble under Titan’s conditions, because the acrylonitrile would instead crystallize into its molecular ice.

Martin Rahm (Department of Chemistry and Chemical Engineering, Chalmers University of Technology) is co-author of the paper:

“Titan is a fascinating place to test our understanding of the limits of prebiotic chemistry – the chemistry that precedes life. What chemical, or possibly biological, structures might form, given enough time under such different conditions? The suggestion of azotosomes was a really interesting proposal for an alternative to cell membranes as we understand them. But our new research paper shows that, unfortunately, although the structure could indeed tolerate the extremes of Titan, it would not form in the first place.”

This is interesting work, and not only because we are on track to launch Dragonfly in 2026, a mission to investigate the surface and sample different locations around the moon in an assessment of prebiotic chemistry. What we’re seeing is the emergence of computational astrobiology, the necessary follow-on to studies like the predictive work of 2015. The idea is to model the properties and formation routes of the materials proposed as supporting possible biological processes. In this case, we learn that the azotosome structure that looked so promising is not thermodynamically feasible.
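The feasibility test at the heart of such work is conceptually simple, even if the underlying quantum-chemical calculations are not: compare the computed energy of the assembled membrane with that of the competing molecular ice, and self-assembly is ruled out if the ice is the lower-energy state. A toy sketch of that comparison, with made-up numbers chosen only to illustrate the logic, follows:

```python
# Toy illustration of the thermodynamic feasibility argument: self-assembly is
# viable only if the assembled membrane is lower in energy than the competing
# molecular ice. Energy values below are made up for illustration.

def favored_state(e_membrane, e_ice):
    """Return the thermodynamically favored state (lower energy wins)."""
    return "membrane" if e_membrane < e_ice else "molecular ice"

# Hypothetical energies per molecule relative to the dissolved state (kJ/mol)
e_membrane = -20.0   # placeholder: energy gained by joining an azotosome membrane
e_ice      = -35.0   # placeholder: energy gained by crystallizing into ice

print("Favored state at Titan temperatures:", favored_state(e_membrane, e_ice))
```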

But this work hardly eliminates the possibility of life on Titan. What if, the authors speculate, the cell structure itself is not critical? From the paper:

…on Titan, any hypothetical life-bearing macromolecule or crucial machinery of a life form will exist in the solid state and never risk destruction by dissolution. The question is then whether these biomolecules would benefit from a cell membrane. Already rendered immobile by the low temperature, biological macromolecules on Titan would need to rely on the diffusion of small energetic molecules, such as H2, C2H2, or HCN, to reach them in order for growth or replication to ensue. Transport of these molecules might proceed in the atmosphere or through the surrounding methane/ethane environment. A membrane would likely hinder this beneficial diffusion. Similarly, a membrane would likely hinder necessary removal of waste products of metabolism, such as methane and nitrogen, in the opposite direction.

Image: Researchers looking for life on Titan, Saturn’s largest moon, used quantum mechanical calculations to investigate the viability of azotosomes, a potential form of cell membrane. Credit: NASA / Yen Strandqvist / Chalmers.

At this stage, as the authors note, the limits of prebiotic chemistry and biology on Titan will have to stay in the realm of speculation, but computations like these can inform the choice of sites for Dragonfly as it explores the moon, helping us to match the reality on the ground with theory.

The paper is Sandström & Rahm, “Can polarity-inverted membranes self-assemble on Titan?” Science Advances Vol. 6, No. 4 (24 January 2020). Full text. The 2015 paper on azotosomes is Stevenson, Lunine & Clancy, “Membrane alternatives in worlds without oxygen: Creation of an azotosome,” Science Advances Vol. 1, No. 1 (27 February 2015), e1400067 (full text).
