Finding Neptune’s Smallest Moon

What a lively place Neptune used to be, at least back in the days when the planet captured Triton, doubtless a Kuiper Belt Object, now in a retrograde orbit around its primary. Recent work led by Mark Showalter (SETI Institute) puts the Hubble Space Telescope to work in studying one result of the sudden acquisition of so massive an object. A first generation of small satellites was likely scattered and rearranged, its debris becoming the Neptunian moons we see today.

Among them is Hippocamp, once known as S/2004 N 1, which appears to be a fragment of Neptune’s second-largest moon, Proteus. What an interesting set of observations we have here. Discovered in 2013, Hippocamp is the outermost of the planet’s inner moons, and it orbits a scant 12,000 kilometers from Proteus. We can relate the 2013 discovery to what Voyager 2 found at Neptune in 1989: a large impact crater on Proteus.

“The first thing we realised was that you wouldn’t expect to find such a tiny moon right next to Neptune’s biggest inner moon,” says Mark Showalter. “In 1989, we thought the crater was the end of the story. With Hubble, now we know that a little piece of Proteus got left behind and we see it today as Hippocamp.”

Image: This artist’s impression shows the outermost planet of the Solar System, Neptune, and its small moon Hippocamp. Hippocamp was discovered in images taken with the NASA/ESA Hubble Space Telescope. Whilst the images taken with Hubble allowed astronomers to discover the moon and also to measure its diameter, about 34 kilometres, these images do not allow us to see surface structures. Credit: ESA/Hubble, NASA, L. Calçada.

The likely cause of the Proteus impact is a comet, striking long after the havoc created by Triton’s appearance. Jack Lissauer (NASA Ames) is a co-author of the new work:

“Based on estimates of comet populations, we know that other moons in the outer Solar System have been hit by comets, smashed apart, and re-accreted multiple times. This pair of satellites provides a dramatic illustration that moons are sometimes broken apart by comets.”

Image: This composite image shows the location of Neptune’s moon Hippocamp, formerly known just as S/2004 N 1, orbiting the giant planet Neptune, about 4.8 billion kilometres from Earth. The moon is only about 34 kilometres in diameter and dim, and was therefore missed by NASA’s Voyager 2 spacecraft cameras when the probe flew by Neptune in 1989. Several other moons that were discovered by Voyager appear in this 2009 image, along with a circumplanetary structure known as ring arcs. Mark Showalter of the SETI Institute discovered Hippocamp in July 2013 when analysing over 150 archival images of Neptune taken by Hubble from 2004 to 2009. The black-and-white image was taken in 2009 with Hubble’s Wide Field Camera 3 in visible light. Hubble took the colour inset of Neptune on August 19, 2009. Credit: NASA, ESA, and M. Showalter (SETI Institute).

Just 1/1000th the mass of Proteus, Hippocamp shouldn’t be where we see it, but that large impact crater Voyager found on Proteus tells the tale. The crater explains why Proteus never assimilated or swept aside Hippocamp long ago: an impact powerful enough to have all but shattered Proteus left a fragment behind, and that fragment survives today as Hippocamp. Such cometary bombardment makes Hippocamp a third-generation satellite.

Image: This diagram shows the orbital positions of Neptune’s inner moons, which range in size from 17 to 420 kilometres in diameter. The outer moon Triton was captured from the Kuiper belt many billions of years ago. This tore apart Neptune’s original satellite system. After Triton settled into a circular orbit the debris from shattered moons re-coalesced into the second generation of inner satellites seen today. However, comet bombardment continued, leading to the birth of Hippocamp, which is a broken-off piece of Proteus. Therefore, Hippocamp is considered to be a third-generation satellite. Neither the sizes of the moons and Neptune nor their orbits are to scale. Credit: NASA, ESA, and A. Feild (STScI).

Yesterday we looked at an exoplanet scenario for a massive planetary collision. The collision of Proteus with a comet is much smaller in scale, but powerful in its effects. From the paper:

We cannot rule out the possibility that Hippocamp formed in situ and has no connection to Proteus. However, its tiny size and peculiar location lead us to favour the proposed formation scenario, which illustrates the roles that collisions and orbital migration have played in shaping the Neptune system that we see today.

The paper is Showalter et al., “The Seventh Inner Moon of Neptune,” Nature 566, pp. 350-353 (20 February 2019). Abstract.


Kepler 107: Collision of Worlds

It seems increasingly clear that the factors that govern what kind of a planet emerges where in a given stellar system are numerous and not always well understood. Beyond the snowline, planets draw themselves together from the ice and other volatiles available in these cold regions, so that we wind up with low-density gas or ice giants in the outer parts of a stellar system. Sometimes. Rocky worlds are made of silicates and iron, materials that, unlike ice, can withstand the much warmer temperatures inside the snowline. But consider:

While we now have 2,000 confirmed exoplanets smaller than three Earth radii, the spread in their densities is all over the map. We’re finding that other processes must be in play, and at no insubstantial level. Low-density giant planets can turn up orbiting close to their stars. Planets not so dissimilar from Earth in terms of their radius may be found with strikingly different densities in the same system, and at no great distance from each other.

Which takes us to a new paper from Aldo S. Bonomo and Mario Damasso (Istituto Nazionale di Astrofisica), working with an international team including astrophysicist Li Zeng (CfA). The collaboration’s paper in Nature Astronomy uses the planetary system around the star Kepler-107 to probe another possible formative influence: planetary collisions. Kepler-107 may be flagging a process that occurs in many young systems.

Image: The figure shows one frame from the middle of a hydrodynamical simulation of a high-speed head-on collision between two 10 Earth-mass planets. The temperature range of the material is represented by four colors: grey, orange, yellow and red, where grey is the coolest and red is the hottest. Such collisions eject a large amount of the silicate mantle material, leaving a high-iron content, high-density remnant planet similar to the observed characteristics of Kepler-107c. Credit: Zoe Leinhardt and Thomas Denman, University of Bristol.

Let’s take a deeper look at this curious system. The two innermost planets at Kepler-107 have radii that are nearly identical — 1.536 and 1.597 Earth-radii, respectively. They both orbit close to the host, a G2 star in Cygnus of about 1.25 solar masses, with orbital periods of 3.18 and 4.90 days. The scientists used the HARPS-N spectrograph at the Telescopio Nazionale Galileo in La Palma to determine the planets’ masses, and because they were working with known radii (thanks to Kepler’s observations of these transiting worlds), they were able to determine their densities. And now things get interesting.

For the innermost planet shows a density of 5.3 grams per cubic centimeter, while the second world comes in at 12.65 grams per cubic centimeter. The inner world, Kepler-107b, is thus about the same density as the Earth (5.5 grams per cubic centimeter), while Kepler-107c shows a much higher number (for comparison, water’s density is 1 gram per cubic centimeter).
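For readers who want to check the arithmetic, bulk density follows directly from the measured mass and radius. Here is a minimal sketch in Python; the masses plugged in below (roughly 3.5 and 9.4 Earth masses) are not quoted in the text above and serve only as illustrative values consistent with the densities cited:

```python
import math

M_EARTH_G = 5.972e27    # Earth mass in grams
R_EARTH_CM = 6.371e8    # Earth radius in centimeters

def bulk_density(mass_earths, radius_earths):
    """Return density in g/cm^3 given a mass in Earth masses and a radius in Earth radii."""
    mass_g = mass_earths * M_EARTH_G
    radius_cm = radius_earths * R_EARTH_CM
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    return mass_g / volume_cm3

# Radii from the Kepler transits; masses are illustrative assumptions, not from the text above.
print(f"Kepler-107b: {bulk_density(3.5, 1.536):.1f} g/cm^3")   # ~5.3, Earth-like
print(f"Kepler-107c: {bulk_density(9.4, 1.597):.1f} g/cm^3")   # ~12.7, iron-rich remnant
```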

The outer of the two is the denser world, and by more than a factor of two. Given the proximity of their orbits, it is difficult to invoke stellar radiation as the cause of the difference: radiation-driven mass loss should, if anything, have stripped the more strongly irradiated inner planet first. The remaining strong possibility is that a collision between planets played a role in this system.

“This is one out of many interesting exoplanet systems that the Kepler space telescope has discovered and characterized,” says Li Zeng. “This discovery has confirmed earlier theoretical work suggesting that giant impact between planets has played a role during planet formation. The TESS mission is expected to find more of such examples.”

And this from the paper, which notes the possibility of extreme X-ray and ultraviolet flux in the young system, but dismisses it as the operative factor here, at least as an explanation for the density discrepancy:

This imbalance cannot be explained by the stellar XUV irradiation, which would conversely make the more-irradiated and less-massive planet Kepler-107b denser than Kepler-107c. Instead, the dissimilar densities are consistent with a giant impact event on Kepler-107c that would have stripped off part of its silicate mantle.

Image: The video shows a hydrodynamical simulation of a high-speed head-on collision between two 10 Earth-mass planets. The temperature range of the material is represented by four colors: grey, orange, yellow and red, where grey is the coolest and red is the hottest. Watch Video. Credit: Zoe Leinhardt and Thomas Denman, University of Bristol.

We can produce Kepler-107c, then, by a collision that results in a high-density remnant. We have apparent evidence for collisions even in our own Solar System. The composition of Mercury, with a dense metallic core and relatively thin silicate mantle, may point to this; only Earth is denser in our system. The emergence of Earth’s Moon through a planet-sized impactor striking our planet may point the same way. We can also see in the obliquity of Uranus — the planet’s axis of rotation is skewed by 98 degrees from what we would expect — the possibility that, as advanced in some theories, a large object struck the planet long ago.

The paper gives us some of the details about Kepler-107:

…the difference in density of the two inner planets can be explained by a giant impact on Kepler-107c that removed part of its mantle, significantly reducing its fraction of silicates with respect to an Earth-like composition. The radius and mass of Kepler-107c, indeed, lie on the empirically derived collisional mantle stripping curve for differentiated rocky/iron planets… Smoothed particle hydrodynamics simulations show that a head-on high-speed giant impact between two ~10 M⊕ exoplanets in the disruption regime would result in a planet like Kepler-107c with approximately the same mass and interior composition… Such an impact may destabilize the current resonant configuration of Kepler-107 and thus it likely occurred before the system reached resonance. Multiple less-energetic collisions may also lead to a similar outcome.

Another possibility discussed by the authors: Planet c may have formed closer to the parent star and later crossed Kepler-107b’s orbit. But the authors note that there has probably not been enough time to damp the orbital eccentricities such a crossing would have produced, which makes the scenario unlikely.

The paper is Bonomo et al., “A giant impact as the likely origin of different twins in the Kepler-107 exoplanet system,” Nature Astronomy 04 February 2019 (abstract).


Gravitational Wave Astronomy: Enter the ‘Standard Siren’

Recently we’ve talked about ‘standard candles,’ these being understood as objects in the sky about which we know the total luminosity because of some innate characteristic. Thus the Type Ia supernova, produced in a binary star system as material from a red giant falls onto a white dwarf, causing the smaller star to reach a mass limit and explode. These explosions reach roughly the same peak brightness, allowing astronomers to calculate their distance.

Even better known are Cepheid variables, stars whose luminosity and variable pulsation period are linked, so we can measure the pulsation and know the true luminosity. This in turn lets us calculate the distance to the star by comparing what we see against the known true luminosity.
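The reasoning behind a standard candle can be written out in a few lines. Below is a minimal sketch; the Leavitt-law slope and intercept are illustrative round numbers rather than a calibration from any particular survey:

```python
import math

def cepheid_absolute_magnitude(period_days, slope=-2.43, intercept=-4.05):
    """Approximate V-band absolute magnitude from the pulsation period.

    The slope and intercept are illustrative period-luminosity numbers,
    not a calibration from any particular survey.
    """
    return slope * (math.log10(period_days) - 1.0) + intercept

def distance_parsecs(apparent_mag, absolute_mag):
    """Distance from the distance modulus: m - M = 5 log10(d / 10 pc)."""
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Example: a Cepheid with a 30-day period observed at apparent magnitude 19.5.
M = cepheid_absolute_magnitude(30.0)
d = distance_parsecs(19.5, M)
print(f"M = {M:.2f}, distance = {d / 1e6:.2f} Mpc")
```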

This is helpful stuff, and Edwin Hubble used Cepheids in making the calculations that helped him figure out the distance between the Milky Way and Andromeda, which in turn put us on the road to understanding that Andromeda was not a nebula but a galaxy. Hubble’s early distance estimates were low, but we now know that Andromeda is 2.5 million light years away.

I love distance comparisons and just ran across this one in Richard Gott’s The Cosmic Web: “If our galaxy were the size of a standard dinner plate (10 inches across), the Andromeda Galaxy (M31) would be another dinner plate 21 feet away.”

Galaxies of hundreds of billions of stars each, reduced to the size of dinner plates and a scant 21 feet apart… Hubble would go on to study galactic redshifts en route to determining that the universe is expanding. He would produce the Hubble Constant as a measure of the rate of expansion, and that controversial figure takes us into the realm of today’s article.

For we’re moving into the exciting world of gravitational wave astronomy, and a paper in Physical Review Letters now tells us that a new standard candle is emerging. Using it, we may be able to refine the value of the Hubble Constant (H0), the present rate of expansion of the cosmos. This would be helpful indeed because right now, the value of the constant is not absolutely pinned down. Hubble’s initial take was on the high side, and controversy has continued as different methodologies yield different values for H0.

Hiranya Peiris (University College London) is a co-author on the paper on this work:

“The Hubble Constant is one of the most important numbers in cosmology because it is essential for estimating the curvature of space and the age of the universe, as well as exploring its fate.

“We can measure the Hubble Constant by using two methods – one observing Cepheid stars and supernovae in the local universe, and a second using measurements of cosmic background radiation from the early universe – but these methods don’t give the same values, which means our standard cosmological model might be flawed.”

Binary neutron stars appear to offer a solution. As they spiral towards each other before colliding, they produce powerful gravitational waves, ripples in spacetime that can be flagged by gravitational wave detectors like the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo. Astronomers can then combine the gravitational wave observation, which yields the distance to the merger, with the detected light of the binary neutron star merger, which yields the system’s recession velocity. Distance and velocity together give the expansion rate.
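In schematic form the measurement is simple: the gravitational wave signal supplies the distance, the electromagnetic counterpart supplies the redshift and hence the recession velocity, and their ratio gives the expansion rate. A toy sketch, with numbers loosely modeled on the GW170817 event rather than taken from the paper discussed here:

```python
C_KM_S = 299_792.458   # speed of light in km/s

def hubble_constant(redshift, distance_mpc):
    """Crude H0 estimate in km/s/Mpc for a nearby source, where v is roughly c * z."""
    recession_velocity = C_KM_S * redshift   # valid only for small redshift
    return recession_velocity / distance_mpc

# Illustrative values roughly in the neighborhood of GW170817:
# host-galaxy redshift ~0.01, gravitational-wave distance ~43 Mpc.
print(f"H0 = {hubble_constant(0.01, 43.0):.0f} km/s/Mpc")   # ~70
```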

Image: Artist’s vision of the death spiral of the remarkable J0806 system. Credit: NASA/Tod Strohmayer (GSFC)/Dana Berry (Chandra X-Ray Observatory).

We’re closing in on a refined Hubble Constant, and researchers on the case, led by Stephen Feeney (Center for Computational Astrophysics at the Flatiron Institute, NY), think that observations of no more than 50 binary neutron stars should produce sufficient data to pin down H0. We’ll then have an independent measurement of this critical figure. And, says Feeney, “[w]e should be able to detect enough mergers to answer this question within 5-10 years.”

Nailing down the Hubble Constant will refine our estimates of the curvature of space and the age of the universe. Just a few years ago, who would have thought we’d be doing this by using neutron star mergers as what the researchers are now delightfully calling ‘standard sirens,’ yielding measurements that should settle the H0 discrepancy once and for all?

A final impulse takes me back to Hubble himself. The man had a bit of the poet in him, as witness his own outside perspective of our galaxy: “Our stellar system is a swarm of stars isolated in space. It drifts through the universe as a swarm of bees drifts through the summer air.” Found in The Realm of the Nebulae (1936).

The paper is Feeney et al., “Prospects for Resolving the Hubble Constant Tension with Standard Sirens,” Physical Review Letters 122 (14 February 2019), 061105 (abstract).


Breakthrough Propulsion Study

Ideas on interstellar propulsion are legion, from fusion drives to antimatter engines, beamed lightsails and deep space ramjets, not to mention Orion-class fusion-bomb devices. We’re starting to experiment with sails, though beaming energy to a space sail remains an unrealized, if near-term, project. But given the sheer range of concepts out there and the fact that almost all are at the earliest stages of research, how do we prioritize our work so as to move toward a true interstellar capability? Marc Millis, former head of NASA’s Breakthrough Propulsion Physics project and founder of the Tau Zero Foundation, has been delving into the question in new work for NASA. In the essay below, Marc describes a developing methodology for making decisions and allocating resources wisely.

by Marc G Millis

In February 2017, NASA awarded a grant to the Tau Zero Foundation to compare propulsion options for interstellar flight. To be clear, this is not about picking a mission and its technology – a common misconception – but rather about identifying which research paths might have the most leverage for increasing NASA’s ability to travel farther, faster, and with more capability.

The first report was completed in June 2018 and is now available on the NASA Technical Report Server, entitled “Breakthrough Propulsion Study: Assessing Interstellar Flight Challenges and Prospects.” (4MB file at: http://hdl.handle.net/2060/20180006480).

This report is about how to compare the diverse propulsion options in an equitable, revealing manner. Future plans include creating a database of the key aspects and issues of those options. Thereafter comparisons can be run to determine which of their research paths might be the most impactive and under what circumstances.

This study does not address technologies that are on the verge of fruition, like those being considered for a probe to reach 1000 AU with a 50 year flight time. Instead, this study is about the advancements needed to reach exoplanets, where the nearest is 270 times farther (Proxima Centauri b). These more ambitious concepts span different operating principles and levels of technological maturity, and their original mission assumptions are so different that equitable comparisons have been impossible.

Furthermore, all of these concepts require significant additional research before their performance predictions are ready for traditional trade studies. Right now their values are more akin to goals than specifications.

To make fair comparisons that are consistent with the varied and provisional information, the following tactics are used: (1) all propulsion concepts will be compared to the same mission profiles in addition to their original mission context; (2) the performance of the disparate propulsion methods will be quantified using common, fundamental measures; (3) the analysis methods will be consistent with fidelity of the data; and (4) the figures of merit by which concepts will be judged will finally be explicit.

Regarding the figures of merit – this was one of the least specified details of prior interstellar studies. It is easy to understand why there are so many differing opinions about which concept is “best” when there are no common criteria with which to measure goodness. The criteria now include quantifiable factors spanning: (1) the value of the mission, (2) the time to complete the mission, and (3) the cost of the mission.

The value of a mission includes subjective criteria and objective values. The intent is to allow the subjective factors to be variables so that the user can see how their interests affect which technologies appear more valuable. One of those subjective judgments is the importance of the destination. For example, some might think that Proxima Centauri b is less interesting than the ‘Oumuamua object. Another subjective factor is motive. The prior dominant – and often implicit – figure of merit was “who can get there first.” While that has merit, it can only happen once. The full suite of motives continues beyond that first event, including gathering science about the destinations, accelerating technological progress, and ultimately, ensuring the survival of humanity.

Examples of the objective factors include: (1) time within range of target; (2) closeness to target (better data fidelity); and (3) the amount of data acquired. A mission that gets closer to the destination, stays there longer, and sends back more data, is more valuable. Virtually all mission concepts have been limited to fly-bys. Table 1 shows how long a probe would be within different ranges for different fly-by speeds. To shift attention toward improving capabilities, the added value (and difficulty) of slowing at the destination – and even entering orbit – will now be part of the comparisons.

Table 1: Time on target for different fly-by speeds and instrumentation ranges
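The table itself appears as a figure in the report, but the arithmetic behind it is straightforward: a probe coasting past its target at speed v spends roughly 2R/v within instrument range R. A sketch of how such a table could be generated, using example speeds and ranges rather than the report’s actual rows:

```python
AU_KM = 1.496e8          # kilometers per astronomical unit
C_KM_S = 299_792.458     # speed of light in km/s

def time_on_target_hours(speed_fraction_of_c, range_au):
    """Hours spent within range_au of the target, approximating the encounter
    as a straight-line pass of length 2 * range_au at constant speed."""
    speed_km_s = speed_fraction_of_c * C_KM_S
    path_km = 2.0 * range_au * AU_KM
    return path_km / speed_km_s / 3600.0

for v in (0.001, 0.01, 0.1, 0.2):        # example fly-by speeds, fractions of c
    for r in (0.5, 1.0, 5.0):            # example instrument ranges in AU
        print(f"v = {v:5.3f} c, range = {r:3.1f} AU -> {time_on_target_hours(v, r):8.1f} h")
```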

Quantifying the time to complete a mission involves more than just travel time. Now, instead of the completion point being when the probe arrives, it is defined as when its data arrive back at Earth. This shift is because the time needed to send the data back has a greater impact than often realized. For example, even though Breakthrough StarShot aims to get there the quickest, in just 22 years, that comes at the expense of making the spacecraft so small that it takes an additional 20 years to finish transmitting the data. Hence, the time from launch to data return is about a half century, comparable to other concepts (46 yrs = 22 trip + 4 signal + 20 to transmit data). The tradeoffs of using a larger payload with a faster data rate, but longer transit time, will be considered.
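The bookkeeping is easy to make explicit. A small sketch using StarShot-like numbers (4.24 light years to Proxima Centauri and a cruise at 20% of lightspeed; the data volume and downlink rate are chosen only so that transmission takes about 20 years, since those last two figures are not specified above):

```python
SECONDS_PER_YEAR = 3.156e7

def mission_completion_years(distance_ly, cruise_fraction_of_c,
                             data_volume_bits, downlink_bits_per_s):
    """Years from launch until the last bit of data arrives back at Earth."""
    trip = distance_ly / cruise_fraction_of_c       # cruise time in years
    signal = distance_ly                            # light-travel time of the downlink, years
    transmit = data_volume_bits / downlink_bits_per_s / SECONDS_PER_YEAR
    return trip + signal + transmit

# Illustrative inputs: ~630 megabits returned at ~1 bit/s takes about 20 years to downlink.
total = mission_completion_years(4.24, 0.20, 6.3e8, 1.0)
print(f"{total:.1f} years")   # roughly 22 trip + 4 signal + 20 transmit, the ~46 years cited above
```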

Regarding the total time to complete the mission, the beginning point is now. The analysis includes considerations for the remaining research and the subsequent work to design and build the mission hardware. Further, the mission hardware, now by definition, includes its infrastructure. While the 1000 AU precursor missions do not need new infrastructure, most everything beyond that will.

Recall that the laser lightsail concepts of Robert Forward required a 26 TW laser, firing through a 1,000 km diameter Fresnel lens placed beyond Saturn (around 10 AU), aimed at a 1,000 km diameter sail with a mass of 800 tonnes. Project Daedalus envisioned needing 50,000 tonnes of helium-3 mined from the atmospheres of the gas giant planets. This not only requires the infrastructure for mining those propellants, but also processing and transporting that propellant to the assembly area of the spacecraft. Even the more modest Earth-based infrastructure of StarShot is beyond precedent. StarShot will require one million synchronized 100 kW lasers spread over an area of 1 km² to get it up to the required 100 GW.

While predicting these durations in the absolute sense is dubious (predicting what year concept A might be ready), it is easier to make relative predictions (if concept A will be ready before B) by applying the same predictive models to all concepts. For example, the infrastructure rates are considered proportional to the mass and energy required for the mission – where a smaller and less energetic probe is assumed to be ready sooner than a larger, energy-intensive probe.

The most difficult duration to estimate, even when relaxed to relative instead of absolute comparisons, is the pace of research. Provisional comparative methods have been outlined, but this is an area needing further attention. The reason that this must be included – even if difficult – is because the timescales for interstellar flight are comparable to breakthrough advancements.

The fastest mission concepts (from launch to data return) are 5 decades, even for StarShot (not including research and infrastructure). Compare this to the 7 decades it took to advance from the rocket equation to having astronauts on the Moon (1903-1969), or the 6 decades to go from the discovery of radioactivity to having a nuclear power plant tied to the grid (1896-1954).

So, do you pursue a lesser technology that can be ready sooner, a revolutionary technology that will take longer, or both? For example, what if technology A is estimated to need just 10 more years of research but 25 years to build its infrastructure, while technology B is estimated to take 25 more years of research but will require no infrastructure? In that case, if all other factors are equal, option B is quicker.

To measure the cost of missions, a more fundamental currency than dollars is used – energy. Energy is the most fundamental commodity of all physical transactions, and one whose values are not affected by debatable economic models. Again, this is anchoring the comparisons in relative, rather than the more difficult, absolute terms. The energy cost includes the aforementioned infrastructure creation plus the energy required for propulsion.

Comparing the divergent propulsion methods requires converting their method-specific measures to common factors. Laser-sail performance is typically stated in terms of beam power, beam divergence, etc. Rocket performance in terms of thrust, specific impulse, etc. And warp drives in terms of stress-energy-tensors, bubble thickness, etc. All these type-specific terms can be converted to the more fundamental and common measures of energy, mass, and time.

To make these conversions, the propulsion options are divided into 4 analysis groups, where the distinction is if power is received from an external source or internally, and if their reaction mass is onboard or external. Further, as a measure of propulsion efficiency (or in NASA parlance, “bang for buck”) the ratio of the kinetic energy imparted to the payload, to the total energy consumed by the propulsion method, can be compared.

The other reason that energy is used as the anchoring measure is that it is a dominant factor with interstellar flight. Naively, the greatest challenge is thought to be speed. The gap between the achieved speeds of chemical rockets and the target goal of 10% lightspeed is a factor of 400. But, increasing speed by a factor of 400 requires a minimum of 160,000 times more energy. That minimum only covers the kinetic energy of the payload, not the added energy for propulsion and inefficiencies. Hence, energy is a bigger deal than speed.

As an example, consider the 1-gram StarShot spacecraft traveling at 20% lightspeed. Just its kinetic energy is approximately 2 TJ. When calculating the propulsive energy in terms of the laser power and beam duration (100 GW for minutes), the required energy spans 18 to 66 TJ, for just a 1-gram probe. For comparison, the energy for a suite of 1,000 probes is roughly the same as 1-4 years of the total energy consumption of New York City (NYC @ 500 MW).
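These figures are easy to reproduce to first order. A back-of-the-envelope sketch (classical kinetic energy only, ignoring the modest relativistic correction at 0.2 c; the 3-to-11-minute beam durations are inferred from the 18-66 TJ range quoted above rather than stated in the study):

```python
C_M_S = 2.998e8   # speed of light in m/s

# Speed up by a factor of 400 and the minimum (kinetic) energy grows as the square:
print(400 ** 2)   # 160,000

# Kinetic energy of a 1-gram probe at 20% of lightspeed, classical approximation.
ke = 0.5 * 0.001 * (0.2 * C_M_S) ** 2
print(f"{ke / 1e12:.1f} TJ")   # ~1.8 TJ, i.e. roughly 2 TJ

# Propulsive energy for a 100 GW beam firing for about 3 to 11 minutes.
print(f"{100e9 * 3 * 60 / 1e12:.0f} to {100e9 * 11 * 60 / 1e12:.0f} TJ")   # 18 to 66 TJ

# A suite of 1,000 launches versus a city drawing 500 MW around the clock.
nyc_joules_per_year = 500e6 * 3.156e7
print(f"{1000 * 18e12 / nyc_joules_per_year:.1f} to {1000 * 66e12 / nyc_joules_per_year:.1f} NYC-years")
```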

Delivering more energy faster requires more power. By launching only 1 gram at a time, StarShot keeps the power requirement at 100 GW. If they launched the full suite of 1,000 grams at once, that would require 1,000 times more power (100 TW). Power is relevant to another under-addressed issue – the challenge of getting rid of excess heat. Hypothetically, if that 100 GW system has a 50% efficiency, that leaves 50 GW of heat to radiate. On Earth, with atmosphere and convection, that’s relatively easy. If it were a space-based laser, however, that gets far more dicey. To run fair comparisons, it is desired that each concept uses the same performance assumptions for their radiators.
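Power and waste heat follow the same bookkeeping; the sketch below simply restates the scaling described above, with the 50% conversion efficiency kept as the hypothetical value used in the text:

```python
def required_beam_power_gw(probes_per_launch, power_per_probe_gw=100.0):
    """Beam power scales with how many probes are pushed at the same time."""
    return probes_per_launch * power_per_probe_gw

def waste_heat_gw(system_power_gw, efficiency):
    """Heat to radiate if only a fraction `efficiency` of the system's power ends up in the beam."""
    return system_power_gw * (1.0 - efficiency)

print(required_beam_power_gw(1))      # 100 GW: one 1-gram probe at a time
print(required_beam_power_gw(1000))   # 100,000 GW = 100 TW: the whole 1,000-gram suite at once
print(waste_heat_gw(100.0, 0.5))      # 50 GW of heat to reject at 50% efficiency
```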

Knowing how to compare the options is one thing. The other need is knowing which problems to solve. In the general sense, the entire span of interstellar challenges has been distilled into this “top 10” list. It is too soon to rank these until after running some test cases:

  • Communication – Reasonable data rates with minimum power and mass.
  • Navigation – Aiming well from the start and acquiring the target upon arrival, with minimum power and mass. (The ratio of the distance traversed to a ½ AU closest approach is about a million).
  • Maneuvering upon reaching the destination (at least attitude control to aim the science instruments, if not the added benefit of braking).
  • Instrumentation – Measure what cannot be determined by astronomy, with minimum power and mass.
  • High density and long-term energy storage for powering the probe after decades in flight, with minimum mass.
  • Long duration and fully autonomous spacecraft operations (includes surviving the environment).
  • Propulsion that can achieve 400 times the speed of chemical rockets.
  • Energy production at least 160,000 times chemical rockets and the power capacity to enable that high-speed propulsion.
  • Highly efficient energy conversion to minimize waste heat from that much power.
  • Infrastructure creation in affordable, durable increments.

While those are the general challenges common to all interstellar missions, each propulsion option will have its own make-or-break issues and associated research goals. At this stage, none of the ideas are ready for mission trade studies. All require further research, but which of those research paths might be the most impactive, and under what circumstances? It is important to repeat that this study is not about picking “one solution” for a mission. Instead, it is a process for continually making the most impactive advances that will not only enable that first mission, but the continually improving missions after that.

Ad astra incrementis.


Planet Formation: How Ocean Worlds Happen

It’s hard to fathom when we look at a globe, but our planet Earth’s substantial covering of ocean is relatively modest. Alternative scenarios involving ‘water worlds’ include rocky planets whose silicate mantle is covered in a deep, global ocean, with no land in sight. In these models, kilometer after kilometer of water overlies a layer of high-pressure ice at the ocean floor, making it unlikely that the processes that sustain life here could develop — how likely is a carbon cycle in such a scenario, and without it, how do we stabilize climate and make a habitable world?

These are challenging issues as we build the catalog of exoplanets and try to figure out local conditions. But it’s also intriguing to ask what made Earth turn out as dry as it is. Tim Lichtenberg developed a theory while doing his thesis at the Eidgenössische Technische Hochschule in Zürich (he is now at Oxford), and now presents it in a paper in collaboration with colleagues at Bayreuth and Bern, as well as the University of Michigan. Lichtenberg thinks we should be looking hard at the radioactive element Aluminium-26 (26Al).

Go back far enough in the evolution of the Solar System and kilometer-sized planetesimals made of rock and ice moved in a circumstellar disk around the young Sun, eventually through the process of accretion growing into planetary embryos. In this era a supernova evidently occurred in the astronomical neighborhood, depositing 26Al and other elements into the mix. Using computer simulations of the formation of thousands of planets, the researchers argue that two distinct populations emerge, water worlds and drier worlds like Earth.

“The results of our simulations suggest that there are two qualitatively different types of planetary systems,” says Lichtenberg: “There are those similar to our Solar System, whose planets have little water. In contrast, there are those in which primarily ocean worlds are created because no massive star, and so no Al-26, was around when their host system formed. The presence of Al-26 during planetesimal formation can make an order-of-magnitude difference in planetary water budgets between these two species of planetary systems.”

Image: Planetary systems born in dense and massive star-forming regions inherit substantial amounts of Aluminium-26, which dries out their building blocks before accretion (left). Planets formed in low-mass star-forming regions accrete many water-rich bodies and emerge as ocean worlds (right). Credit: Thibaut Roger.

Because planets grow from these early planetesimals, their composition is critical. If a great part of a planet’s water comes from them, then the danger of accreting too much water is always present when many of the constituent materials come from the icy regions beyond the snowline. But radioactive constituents like 26Al inside the planetesimals can create heat that evaporates much of the initial water ice content before accretion occurs. Dense star-forming regions, where massive stars inject 26Al, are more likely to produce the drier outcome.
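To get a feel for why a short-lived isotope matters so much, here is a rough sketch of the radiogenic heating 26Al provides in chondritic rock. The inputs (aluminium mass fraction, initial 26Al/27Al ratio, decay energy) are typical literature values supplied for illustration, not parameters taken from the Lichtenberg et al. models:

```python
import math

# Illustrative, order-of-magnitude inputs (typical literature values, not from the paper).
AL_MASS_FRACTION = 0.0113        # kg of aluminium per kg of chondritic rock
AL26_AL27_INITIAL = 5.2e-5       # canonical early-Solar-System 26Al/27Al ratio (used as a mass ratio)
HALF_LIFE_YR = 7.17e5            # 26Al half-life in years
DECAY_ENERGY_J = 5.1e-13         # ~3.2 MeV deposited per decay
AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

def al26_heating_w_per_kg(time_yr):
    """Approximate radiogenic heating, in watts per kilogram of rock, from 26Al decay."""
    al26_kg_per_kg = AL_MASS_FRACTION * AL26_AL27_INITIAL        # 26Al per kg of rock at t = 0
    atoms = al26_kg_per_kg / 0.026 * AVOGADRO                    # 26Al atoms per kg of rock
    decay_const = math.log(2) / (HALF_LIFE_YR * SECONDS_PER_YEAR)
    decays_per_s = atoms * decay_const * math.exp(-math.log(2) * time_yr / HALF_LIFE_YR)
    return decays_per_s * DECAY_ENERGY_J

for t_myr in (0.0, 1.0, 3.0, 5.0):
    print(f"t = {t_myr:3.1f} Myr -> {al26_heating_w_per_kg(t_myr * 1e6):.1e} W/kg")
```

Integrated over the isotope’s roughly million-year lifetime, this works out to several megajoules per kilogram of rock, which in a well-insulated planetesimal is ample to melt silicates and drive off water ice long before the body is swept up into a planet.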

Lichtenberg and team examined the decay heat from 26Al in terms of this early planetesimal evolution, which would have led to silicate melting and degassing of primordial water abundances. Their simulations of planet populations explored internal structures that varied with disk properties, planetary composition, and the initial location of planetary embryos, producing statistical distributions of incorporated water across planets of differing radius and initial 26Al abundance. In all, the authors achieved what they believe to be a statistically representative set of 540,000 individual simulations over 18 parameter sets.

Image: This is Figure 3 from the paper. Caption: Fig. 3 | Qualitative sketch of the effects of 26Al enrichment on planetary accretion. Left, 26Al-poor planetary systems; right, 26Al-rich planetary systems. RP, planetary radius. Arrows indicate proceeding accretion (middle), planetesimal water content (bottom right, blue-brown) and live 26Al (bottom right, red-white). Credit: Lichtenberg et al.

We wind up with planetary systems with 26Al abundances similar to or higher than the Solar System’s forming terrestrial planets with lower amounts of water, an effect that grows more pronounced with distance from the host star, since embryos forming there are likely to be richer in water. Systems poor in 26Al are thus far more likely to produce water worlds. A remaining question involves the actual growth of rocky planets, as the paper notes:

If rocky planets grow primarily from the accumulation of planetesimals, then the suggested deviation between planetary systems should be clearly distinguishable among the rocky exoplanet census. If, however, the main growth of rocky planets proceeds from the accumulation of small particles, such as pebbles, then the deviation between 26Al-rich and 26Al-poor systems may become less clear, and the composition of the accreting pebbles needs to be taken into account.

The direction of future work to explore the question is clear:

… models of water delivery and planet growth need to synchronize the timing of earliest planetesimal formation, the mutual influence of collisions and 26Al dehydration, the potential growth by pebble accretion, and the partitioning of volatile species between the interior and atmosphere of growing protoplanets in order to further constrain the perspectives for rocky (exo-)planet evolution.

The paper is Lichtenberg et al., “A Water Budget Dichotomy of Rocky Protoplanets from 26Al-Heating,” Nature Astronomy Letters, 11 February 2019 (abstract). Thanks to John Walker for helpful information regarding this story.
