Massive elements can build up in celestial catastrophes like supernovae through the rapid neutron-capture process, or r-process, in which nuclei absorb neutrons faster than they can decay, building elements as heavy as lead and even uranium. But we’re learning that such nucleosynthesis happens not just in supernovae but also in neutron star mergers, which are thought to occur only a few times per million years in the Milky Way. A new paper looks at meteorites from the early Solar System to study what the decay of their radioactive isotopes can tell us about the period in which they were created.
Such isotopes have half-lives shorter than 100 million years, but we can determine their abundances in the early Solar System through meteorite studies like these. What Szabolcs Márka (Columbia University) and Imre Bartos (University of Florida) have done is to study how two of the short-lived r-process isotopes, curium-247 and iodine-129, were produced, using simulations of neutron star mergers in the Milky Way to calculate the abundances of specific radioactive elements. The simulations show that about 100 million years before the Earth formed, a neutron star merger occurred some 1,000 light years from the gas cloud that would become the Solar System.
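As a back-of-the-envelope illustration (my own sketch, not the authors’ code), radioactive decay lets these short-lived isotopes act as clocks. Using the commonly quoted half-life of about 15.6 million years for curium-247 and the roughly 80-100 million year gap between the merger and the Solar System’s formation:

```python
# Surviving fraction of a radioactive isotope after time t:
#   N(t) / N0 = 0.5 ** (t / half_life)
# Values are illustrative: Cm-247 half-life ~15.6 Myr (literature value),
# with ~80 Myr between a nearby merger and Solar System formation.

def surviving_fraction(t_myr, half_life_myr):
    """Fraction of the initial abundance left after t_myr."""
    return 0.5 ** (t_myr / half_life_myr)

CM247_HALF_LIFE_MYR = 15.6  # curium-247
elapsed_myr = 80.0          # time since the putative merger

frac = surviving_fraction(elapsed_myr, CM247_HALF_LIFE_MYR)
print(f"Cm-247 surviving after {elapsed_myr:.0f} Myr: {frac:.1%}")
```

Only a few percent of the original curium-247 would remain by the time the Solar System formed, which is why its measured abundance is such a sensitive probe of when and where it was made.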
This would have been a spectacular event, says Márka:
“If a comparable event happened today at a similar distance from the solar system, the ensuing radiation could outshine the entire night sky.”
Image: This is Figure 1 from the paper, which appeared in Nature. Caption: The path of r-process elements. When neutron stars merge, they create an accreting black hole (the accretion disk is shown in red). Tidal (dynamical) forces and winds from the accretion disk eject neutron-rich matter. This ejected matter (ejecta, shown in grey) undergoes rapid neutron capture, producing heavy r-process elements, including actinides. The ejecta reach the pre-solar nebula and inject the heavy elements that will remain in the Solar System. Credit: Szabolcs Márka / Imre Bartos.
Because a supernova can likewise produce actinides (the elements from actinium to lawrencium, atomic numbers 89-103, all of them radioactive), the authors use the same methods to analyze supernova production as well. The evidence points to a neutron star merger rather than supernova explosions near the early Solar System, for the abundance ratio found via meteorite studies “…is well below the uniform production model’s prediction…” By ‘uniform production model’ the authors mean that supernovae are orders of magnitude more frequent in the Milky Way than neutron star mergers, so their production rate can be approximated as uniform in time:
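To make the uniform production idea concrete, here is a hedged sketch of the standard result (mine, not the paper’s calculation): if a species is produced continuously at rate P and decays with constant λ, its abundance climbs toward a steady-state value of P/λ within a few half-lives. An observed ratio falling “well below” that steady-state prediction is what disfavors frequent producers like supernovae.

```python
import math

# Under continuous (uniform-in-time) production at rate P and decay
# constant lam, abundance follows N(t) = (P / lam) * (1 - exp(-lam * t)),
# saturating at the steady-state value P / lam.
# Numbers here are illustrative, not taken from the paper.

def abundance(t, production_rate, decay_const):
    """Abundance at time t under continuous production with decay."""
    return (production_rate / decay_const) * (1.0 - math.exp(-decay_const * t))

half_life = 15.6               # Myr, e.g. Cm-247
lam = math.log(2) / half_life  # decay constant per Myr
P = 1.0                        # arbitrary production rate

steady_state = P / lam
for t in (10, 50, 200):
    print(f"t = {t:4d} Myr: N / N_steady = {abundance(t, P, lam) / steady_state:.3f}")
```

After a couple of hundred million years of steady production the abundance is indistinguishable from its saturation value, so a measured shortfall signals that production was episodic, not continuous.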
By comparing numerical simulations with the early Solar System abundance ratios of actinides produced exclusively through the r-process, we constrain the rate of occurrence of their Galactic production sites to within about 1−100 per million years. This is consistent with observational estimates of neutron-star merger rates, but rules out supernovae and stellar sources. We further find that there was probably a single nearby merger that produced much of the curium and a substantial fraction of the plutonium present in the early Solar System. Such an event may have occurred about 300 parsecs away from the pre-solar nebula, approximately 80 million years before the formation of the Solar System.
The authors point out that working backward from meteorite data to reconstruct the early abundance of another of the short-lived r-process isotopes would help to produce a more complete picture of the neutron star merger.
Because I was having trouble with the distinction between actinides produced by supernovae and those from neutron star mergers, I asked Dr. Bartos for a clarification. He was kind enough to provide the following, explaining how supernovae can be ruled out (and many thanks to Dr. Bartos for his quick response!):
The difference between supernovae and neutron star collisions that we take advantage of is their relative rates. Supernovae occur a thousand times more frequently than neutron star collisions in the Milky Way. This means that the shortest-lived isotopes would be regularly replenished if produced by supernovae, making them certain to be present at the time of the Solar System’s formation. For the less common neutron star collisions, the shortest-lived isotopes are depleted soon after a merger, and stay depleted until the next. This means that it is probable that at the time of the Solar System’s formation, this isotope will be depleted. The observed abundances of the short-lived Curium-247 and Iodine-129 isotopes in the early Solar System show this depletion, ruling out supernovae.
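Dr. Bartos’s replenishment argument can be sketched numerically. The toy model below is my own illustration (the intervals are made up, not the Galactic rates the paper simulates): events deposit a fixed yield at regular intervals and the isotope decays in between. When the interval between events is much shorter than the half-life, the abundance stays high; when it is much longer, almost nothing is left by the time the next event arrives.

```python
# Deterministic toy model: events deposit yield 1 at fixed intervals; the
# isotope decays in between. We sample the abundance just before the next
# event, i.e. at maximum depletion. Intervals are illustrative only.

def abundance_before_next_event(interval_myr, half_life_myr, n_events=5000):
    """Abundance just before an event, after many regular injections of yield 1."""
    decay_per_interval = 0.5 ** (interval_myr / half_life_myr)
    n = 0.0
    for _ in range(n_events):
        n = (n + 1.0) * decay_per_interval  # inject, then decay one interval
    return n

HALF_LIFE = 15.6  # Myr, Cm-247

frequent = abundance_before_next_event(0.1, HALF_LIFE)    # supernova-like cadence
rare = abundance_before_next_event(100.0, HALF_LIFE)      # merger-like cadence

print(f"frequent events: {frequent:.1f} yields' worth still present")
print(f"rare events:     {rare:.4f} yields' worth still present")
```

With closely spaced events the reservoir holds hundreds of yields’ worth of the isotope at all times; with widely spaced events, barely one percent of a single yield survives between injections, which is the depletion signature seen in the meteorite data.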
Dr. Bartos went on to explain the steps he and co-author Márka took to clarify the result:
An extra step is that we normalize the Curium and Iodine amounts found in the early Solar System with the amount of longer-lived r-process elements (Thorium-232 and Iodine-127, respectively). This is important because this way our results don’t depend on how much r-process a single event produces. The ratios stay the same.
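The normalization step can be illustrated with a toy calculation (hypothetical yield fractions, not the paper’s numbers): because a single event’s output of both the short-lived and the long-lived isotope scales with the same unknown total r-process yield, that overall scale cancels in the ratio.

```python
# Toy demonstration: per-event yields of a short-lived and a long-lived
# isotope both scale with the (unknown) total r-process output of the
# event, so their ratio is independent of that overall scale.
# The yield fractions below are made up for illustration.

def isotope_ratio(total_yield, frac_short=1e-6, frac_long=2e-4):
    short = total_yield * frac_short  # e.g. Cm-247
    long_ = total_yield * frac_long   # e.g. a long-lived normalizer
    return short / long_

r1 = isotope_ratio(total_yield=1.0)
r2 = isotope_ratio(total_yield=50.0)  # a 50x more productive event
print(r1, r2)  # same ratio either way
```

This is why the authors can constrain event rates without knowing how much material any individual merger ejects.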
We’re in the early era of gravitational wave astronomy, in which observations from the LIGO and Virgo collaborations let us pin down the rate of spectacular events like the merger of two neutron stars, a rate that will be tightened further with continuing observation. Hence the Abadie et al. paper I reference below, drawn on by the authors, which points to the rich observational fields gravitational waves help us explore. The first gravitational wave detection came a scant five years after that paper, and in the short time since, we have begun producing catalogs of black hole and neutron star mergers.
The paper is Bartos & Márka, “A nearby neutron-star merger explains the actinide abundances in the early Solar System,” Nature 569 (2 May 2019), pp. 85-87 (abstract). The Abadie paper referenced above is Abadie et al., “Topical review: predictions for the rates of compact binary coalescences observable by ground-based gravitational-wave detectors,” Classical and Quantum Gravity 27, 173001 (2010) (abstract).