SETI and Gravitational Lensing

Radio and optical SETI look for evidence of extraterrestrial civilizations even though we have no proof that any exist. The search is eminently worthwhile and opens up the ancillary question: How would a transmitting civilization produce a signal strong enough for us to detect it at interstellar distances? Beacons of various kinds have been considered and search strategies honed to find them. But we’ve also begun to consider new approaches to SETI, such as detecting technosignatures in our astronomical data (Dyson spheres, etc.). To this mix we can now add a consideration of gravitational lensing, and the magnifications possible when electromagnetic radiation is focused by a star’s mass. For a star like our Sun, this focal effect becomes useful at distances beginning around 550 AU.
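
That 550 AU figure follows from simple geometry. Here is a minimal back-of-the-envelope sketch (my own illustration, not drawn from any of the papers discussed below): light grazing the solar limb is deflected by the general-relativistic angle 4GM/(Rc²), and rays bent by that angle cross the optical axis at roughly R divided by the angle.

```python
# Back-of-the-envelope estimate of where the Sun's gravitational focus begins.
# Light grazing the solar limb is bent by theta = 4*G*M/(R*c^2); rays from the
# limb converge at roughly d = R / theta = R^2 * c^2 / (4*G*M).

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
c = 2.998e8          # speed of light, m/s
AU = 1.496e11        # astronomical unit, m

theta = 4 * G * M_sun / (R_sun * c**2)   # deflection at the limb, ~1.75 arcsec
d_focus = R_sun / theta                  # distance where grazing rays cross

print(f"deflection angle: {theta:.3e} rad")
print(f"focal distance:   {d_focus / AU:.0f} AU")   # ~548 AU
```

Light passing farther from the limb is bent less and focuses farther out, which is why the focal region extends outward from roughly 550 AU rather than ending there.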

Theoretical work and actual mission design for using this phenomenon began in the 1990s and continues, although most work has centered on observing exoplanets. Here the possibilities are remarkable, including seeing oceans, continents, weather patterns, even surface vegetation on a world circling another star. But it’s interesting to consider how another civilization might see gravitational lensing as a way of signaling to us. Indeed, doing so could conceivably open up a communications channel if the alien civilization is close enough, for if we detect lensing being used in this way, we would be wise to consider using our own lens to reply.

Or maybe not, considering what happens in The Three Body Problem. But let’s leave METI for another day. A new paper from Slava Turyshev (Jet Propulsion Laboratory) makes the case that we should be pursuing not just conventional optical SETI but also the search for gravitationally lensed signals. The chances of finding one might seem remote, but then, we don’t know what the chances of any SETI detection are, and we proceed in hopes of learning more. Turyshev argues that with the level of technology available to us today, a lensed signal could be detected with the right strategy.

Image: Slava Turyshev (Jet Propulsion Laboratory). Credit: Asteroid Foundation.

“Search for Gravitationally Lensed Interstellar Transmissions,” now available on the arXiv site, posits a configuration involving a transmitter, receiver and gravitational lens in alignment, something we cannot currently manage. But recall that the effort to design a solar gravity lens (SGL) mission has been in progress for some years now at JPL. As we push into the physics involved, we learn not only about possible future space missions but also better strategies for using gravitational lensing in SETI itself. We are now in the realm of advanced photonics and optical engineering, where we define and put to work the theoretical tools to describe how light propagates in a gravity field.

And while we lack the technologies to transmit using these methods ourselves (at least for now), we do have the ability to detect extraterrestrial signals using gravitational lensing. In an email yesterday, Dr. Turyshev offered an overview of what his analysis showed:

Many factors influence the effectiveness of interstellar power transmission. Our analysis, based on realistic assumptions about the transmitter, shows that substantial laser power can be effectively transmitted over vast distances. Gravitational lensing plays a crucial role in this process, amplifying and broadening these signals, thereby increasing their brightness and making them more distinguishable from background noise. We have also demonstrated that modern space- and ground-based telescopes are well-equipped to detect lensed laser signals from nearby stars. Although individual telescopes cannot yet resolve the Einstein rings formed around many of these stars, a coordinated network can effectively monitor the evolving morphology of these rings as it traces the beam’s path through the solar system. This network, equipped with advanced photometric and spectroscopic capabilities, would enable not only the detection but also continuous monitoring and detailed analysis of these signals.

We’re imagining, then, an extraterrestrial civilization placing a transmitter in the focal region of its own star’s gravitational lens, on the side of that star opposite the direction of our Solar System. The physics involved – and the mathematics here is quite complex, as you can imagine – describes what happens when light from an optical transmitter is sent toward the star: encountering the warped spacetime induced by the star’s mass, the diffracted rays converge to form what scientists call a ‘caustic,’ the focused pattern produced where the bent rays overlap.

In the case of a targeted signal, the lensing effect appears as a so-called ‘Einstein ring’ around the distant star as seen from Earth. The signal is brightened by its passage through warped spacetime and, if targeted with exquisite precision, could be detected and untangled by Earth’s technologies. Turyshev asks in this paper how such a signal appears over interstellar distances.
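
To put a rough scale on that ring, here is a sketch using the standard microlensing formula for the Einstein angular radius; the lens mass, distance and transmitter offset below are illustrative assumptions of mine, not values from Turyshev’s paper.

```python
# Angular radius of the Einstein ring: theta_E = sqrt(4GM/c^2 * d_LS / (d_L * d_S)),
# where d_L is the observer-lens distance, d_S the observer-source distance and
# d_LS the lens-source distance. Values below are illustrative assumptions only.

import math

G, c = 6.674e-11, 2.998e8
M = 1.989e30                      # assume a solar-mass lens star
ly, AU = 9.461e15, 1.496e11
d_L = 10 * ly                     # assumed distance to the lens star
d_LS = 1000 * AU                  # assumed transmitter distance behind that star
d_S = d_L + d_LS

theta_E = math.sqrt(4 * G * M / c**2 * d_LS / (d_L * d_S))
mas = theta_E / 4.848e-9          # radians to milliarcseconds
print(f"Einstein ring radius ~ {mas:.1f} mas")   # ~2 mas for these assumptions
```

A couple of milliarcseconds is far below the resolving power of any single current telescope, which is why the passage quoted above emphasizes networks monitoring the ring’s brightness and spectrum rather than resolving its shape.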

The answer should help us understand how to search for transmissions that use gravitational lensing, developing the best strategies for detection. We’ve pondered possible interstellar networks of communication in these pages, using the lensing properties of participating stellar systems. Such signals would be far more powerful than the faint and transient signals detectable through conventional optical SETI.

Unlike radio waves, laser transmissions are inherently directional, their beams narrow and tightly focused. An interstellar laser signal would have to be aimed precisely towards us, an alignment that in and of itself does not resolve all the issues involved. We can take into account the brightness of the transmitting location, working out the parameters for each nearby star and factoring in optical background noise, but we would have no advance knowledge of the power, aperture and pointing characteristics of a transmitted signal. But if we’re searching for a signal boosted by gravitational lensing, we have a much brighter beam, one that will have been enhanced for best reception.
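
Even a ‘tightly focused’ optical beam spreads substantially over light years. As a quick sketch, assume a hypothetical one-meter transmitting aperture at 1064 nm and a range of ten light years; these figures are my own choices for illustration, not values from the paper.

```python
# Diffraction-limited beam divergence and spot size over interstellar distance.
# theta ~ 1.22 * lambda / D (half-angle), spot radius ~ theta * range.

lam = 1.064e-6        # assumed laser wavelength, m
D = 1.0               # assumed transmitter aperture, m
ly = 9.461e15         # light year, m
AU = 1.496e11         # astronomical unit, m
rng = 10 * ly         # assumed range to the receiver

theta = 1.22 * lam / D            # ~1.3 microradian divergence
spot_radius = theta * rng         # beam radius at the receiver

print(f"divergence:  {theta*1e6:.2f} microradian")
print(f"spot radius: {spot_radius/AU:.2f} AU")   # ~0.8 AU at 10 light years
```

A spot well over an AU across explains both sides of the problem: the beam can be made to wash over much of a planetary system, but the flux at any one receiver is tiny unless something like lensing gain boosts it.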

Image: Communications across interstellar distances could take advantage of a star’s ability to focus and magnify communication signals through gravitational lensing. A signal from—or passing through—a relay probe would bend due to gravity as it passes by the star. The warped space around the object acts somewhat like a lens of a telescope, focusing and magnifying the light. Pictured here is a message from our Sun to another stellar system. Possible signals from other stars using these methods could become SETI targets. Image credit: Dani Zemba / Penn State. CC BY-NC-ND 4.0 DEED.

Mathematics at this level is something I admire and find beautiful much in the same way I appreciate Bach, or a stunning Charlie Parker solo. I have nowhere near the skill to untangle it, but take it in almost as a form of art. So I send those more mathematically literate than I am to the paper, while relying on Turyshev’s explanation of the import of these calculations, which seek to determine the shape and dimensions of the lensed caustic, using the results to show how the lens geometry affects the propagating beam and changes the density of the received EM field.

It’s interesting to speculate on the requirements of any effort to reach another star with a lensed signal. Not only does the civilization in question have to be able to operate within the focal region of its stellar lens, but it has to provide propulsion for its transmitter, given the relative motion between the lens and the target star (our own). In other words, it would need advanced propulsion just to point toward a target, and obviously navigational strategies of the highest order within the transmitter itself. As you can imagine, the same issues emerge within the context of exoplanet imaging. From the paper:

…we find that in optical communications utilizing gravitational lenses, precise aiming of the signal transmissions is also crucial. There could be multiple strategies for initiating transmission. For instance, in one scenario, the transmission could be so precisely directed that Earth passes through the targeted spot. Consequently, it’s reasonable to assume that the transmitter would have the capability to track Earth’s movement. Given this precision, one might question whether a deliberately wider beam, capable of encompassing the entire Earth, would be employed instead. This is just [a] few of many scenarios that merit thorough exploration.

Detecting a lensed signal would demand a telescope network optimized to search for transients involving nearby stars. Such a network would be capable of a broad spectrum of measurements which could be analyzed to monitor the event and study its properties as it develops. Current and near-future instruments from the James Webb Space Telescope and Nancy Grace Roman Space Telescope to the Vera C. Rubin Observatory’s LSST, the Thirty Meter Telescope and the Extremely Large Telescope could be complemented by a constellation of small instruments.

Because the lens parameters are known for each target star, a search can be constructed from combinations of possible transmitter parameters. For each specific laser wavelength, a search space emerges that can be covered with current technology. According to Turyshev’s calculations, a signal targeting a specific spot 1 AU from the Sun would be detectable by such a network using the current generation of optical instruments. Again from the paper:

Once the signal is detected, the spatial distribution of receivers is invaluable, as each will capture a distinct dataset by traveling through the signal along a different path… Correlating the photometric and spectral data from each path enables the reconstruction of the beam’s full profile as it [is] projected onto the solar system. Integrating this information with spectral data from multiple channels reveals the transmitter’s specific features encoded in the beam, such as its power, shape, design, and propulsion capabilities. Additionally, if the optical signal contains encoded information, transmitted via a set of particular patterns, this information will become accessible as well.

While microlensing events created by a signal transmitted through another star’s gravitational lens would be inherently transient, they would also be strikingly bright and should, according to these calculations, be detectable with the current generation of instruments making photometric and spectroscopic observations. Using what Turyshev calls “a spatially dispersed network of collaborative astronomical facilities,” it may be possible not only to detect such a signal but to learn if message data are within. The structure of the point spread function (PSF) of the transmitting lens could be determined through coordinated ground- and space-based telescope observations.

We are within decades of being able to travel to the focal region of the Sun’s gravitational lens to conduct high-resolution imaging of exoplanets around nearby stars, assuming we commit the needed resources to the effort. Turyshev advocates a SETI survey along the lines described to find out whether gravitationally lensed signals exist around these stars, pointing out that such a discovery would open up the possibility of studying an exoplanet’s surface as well as initiating a dialogue. “[W]e have demonstrated the feasibility of establishing interstellar power transmission links via gravitational lensing, while also confirming our technological readiness to receive such signals. It’s time to develop and launch a search campaign.”

The paper is Turyshev, “Search for gravitationally lensed interstellar transmissions,” now available as a preprint. You might also be interested in another recent take on detecting technosignatures using gravitational lensing. It’s Tusay et al., “A Search for Radio Technosignatures at the Solar Gravitational Lens Targeting Alpha Centauri,” Astronomical Journal Vol. 164, No. 3 (31 August 2022), 116 (full text), which led to a Penn State press release from which the image I used above was taken.

Fusion Pellets and the ‘Bussard Buzz Bomb’

Fusion runways remind me of the propulsion methods using pellets that have been suggested over the years in the literature. Before the runway concept emerged, the idea of firing pellets at a departing spacecraft was developed by Clifford Singer. Aware of the limitations of chemical propulsion, Singer first studied charged particle beams but quickly realized that the spread of the beam as it travels bedevils the concept. A stream of macro-pellets, each several grams in size, would offer a better collimated ‘beam’ that would vaporize to create a hot plasma thrust when it reaches the spacecraft.

Even a macro-pellet stream does ‘bloom’ over time – i.e., it loses its tight coherency because of collisions with interstellar dust grains – but Singer was able to show through papers in The Journal of the British Interplanetary Society that particles over one gram in weight would be sufficiently massive to minimize this. In any case, collimation could also be ensured by electromagnetic fields sustained by facilities along the route that would measure the particles’ trajectory and adjust it.

Image: Clifford Singer, whose work on pellet propulsion in the late 1970s has led to interesting hybrid concepts involving on-board intelligence and autonomy. Credit: University of Illinois.

Well, it was a big concept. Not only did Singer figure out that it would take a series of these ‘facilities’ spaced 340 AU apart to keep the beam tight (he saw them as being deployed by the spacecraft itself as it accelerated), but it would also take an accelerator 10⁵ kilometers (100,000 kilometers) long somewhere in the outer Solar System. That sounds crazy, but pushing concepts forward often means working out what the physics will allow and thus putting the problem into sharper definition. I’ve mentioned before in these pages that we have such a particle accelerator in the form of Jupiter’s magnetic field, which is fully 20,000 times stronger than Earth’s.

We don’t have to build Jupiter, and Mason Peck (Cornell University) has explored how we could use its magnetic field to accelerate thousands of ‘sprites’ – chip-sized spacecraft – to thousands of kilometers per second. Greg Matloff has always said how easy it is to overlook interstellar concepts that are ‘obvious’ once suggested, but it takes that first person to suggest them. Going from Singer’s pellets to Peck’s sprites is a natural progression. Sometimes nature steps in where engineering flinches.

The Singer concept is germane here because the question of fusion runways depends in part upon whether we can lead our departing starship along so precise a trajectory that it will intercept the fuel pellets placed along its route. Gerald Nordley would expand upon Singer’s ideas to produce a particle stream enlivened with artificial intelligence, allowing course correction and ‘awareness’ at the pellet level. Now we have a pellet that is in a sense both propellant and payload, highlighting the options that miniaturization and the growth of AI have provided the interstellar theorist.

Image: Pushing pellets to a starship, where the resulting plasma is mirrored as thrust. Nordley talks about nanotech-enabled pellets in the shape of snowflakes capable of carrying their own sensors and thrusters, tiny craft that can home in on the starship’s beacon. Problems with beam collimation thus vanish and there is no need for spacecraft maneuvering to stay under power. Credit: Gerald Nordley.

Jordin Kare’s contributions in this realm were striking. A physicist and aerospace engineer, Kare spent years at Lawrence Livermore National Laboratory working on early laser propulsion concepts and, in the 1980s, laser-launch to orbit, which caught the attention of scientists working in the Strategic Defense Initiative. He would go on to become a spacecraft design consultant whose work for the NASA Institute for Advanced Concepts (as it was then called) analyzed laser sail concepts and the best methods for launching such sails using various laser array designs.

Kare saw ‘smart pellets’ in a different light than previous researchers, thinking that the way to accelerate a sail was to miniaturize it and bring it up to a substantial fraction of c while it was still close to the beamer. This notion reminds me of the Breakthrough Starshot sail concept, where the meter-class sails are blasted up to 20 percent of lightspeed within minutes by a vast laser array. But Kare would have nothing to do with meter-class sails. His notion was to make the sails tiny, craft them out of artificial diamond (he drew this idea from Robert Forward) and use them not as payload but as propulsion. His ‘SailBeam’, then, is a stream of sails that, like Singer’s pellets, would be vaporized for propulsion as they arrived at a departing interstellar probe.

Kare was a character, to put it mildly. Brilliant at what he studied, he was also a songwriter well known for his ‘filksongs,’ science fiction fandom’s term for SF-inspired folk songs, which he would perform at conventions. His sense of humor was as infectious as his optimism. Thus his DIHYAN, a space launch concept involving reusable rockets (if he could only see SpaceX’s boosters returning after launch!). DIHYAN, in typical Kare fashion, stood for “Do I Have Your Attention Now?” Kare’s role in the study of macro-scale matter beamed for propulsion is secure in the interstellar literature.

And by the way, when I write about Kare, I’m always the recipient of email from well-meaning people who tell me that I’ve misspelled his name. But no, ‘Jordin’ is correct.

We need to talk about SailBeam at greater length one day soon. Kare saw it as “the most engineering-practical way to get up to a tenth of the speed of light.” It makes sense that a mind so charged with ideas should also come up with a fusion runway that drew on his SailBeam thinking. Following on from the work of Al Jackson, Daniel Whitmire and Greg Matloff, Kare saw that if you could place pellets of deuterium and tritium carefully enough, a vehicle initially moving at several hundred kilometers per second would begin encountering them with enough velocity to fire up its engines. He presented the idea at a Jet Propulsion Laboratory workshop in the late 1980s.

We’re talking about an unusual craft, and it’s one that will resonate not only with Johndale Solem’s Medusa, which we’ll examine in the next post, but also with the design shown in the Netflix version of Liu Cixin’s The Three Body Problem. This was not the sleek design familiar from cinema starships but a vehicle shaped more or less like a doughnut, although a cylindrical design was also possible. Each craft would have its own fusion pellet supply, dropping a pellet into the central ‘hole’ as one of the fusion runway pellets was about to be encountered. Kare worked out a runway that would produce fusion explosions at the rate of thirty per second.

Like Gerald Nordley, Kare worried about accuracy, because each of the runway pellets has to make a precise encounter with the pellet offered up by the starship. When I interviewed him many years back, he told me that he envisioned laser pulses guiding ‘smart’ pellets. Figure that you can extract 500 kilometers per second from a close solar pass to get the spacecraft moving outward at sufficient velocity (a very optimistic assumption, relying on materials technologies that are beyond our grasp at the moment, among other things), and you have the fusion runway ahead of you.

Initial velocity is problematic. Kare believed the vehicle would need to be moving at several hundred kilometers per second before its encounters with the runway’s fusion pellets could ignite the main engines. Geoff Landis would tell me he thought that figure was far too low to achieve deuterium/tritium ignition. But if it can be attained, Kare’s calculations produced velocities of 30,000 kilometers per second, fully one-tenth the speed of light. The fusion runway would extend about half a light day in length, with the track running from near Earth to beyond Pluto’s orbit.
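
Those figures allow a little arithmetic of our own. Assuming constant acceleration from 500 kilometers per second to 30,000 kilometers per second over half a light day (my own simplification of Kare’s scheme, purely to get a feel for the scale):

```python
# Quick feasibility arithmetic for Kare's fusion runway, using the figures quoted
# above (500 km/s after the solar pass, 30,000 km/s final, runway ~0.5 light day).
# Constant acceleration is assumed purely for illustration.

c = 299_792_458.0                  # m/s
light_day = c * 86_400             # ~2.59e13 m
v0 = 5.0e5                         # initial velocity after solar pass, m/s
vf = 3.0e7                         # final velocity (0.1 c), m/s
d = 0.5 * light_day                # runway length, m

a = (vf**2 - v0**2) / (2 * d)      # required mean acceleration
t = (vf - v0) / a                  # time spent on the runway
n = 30 * t                         # pellet count if detonations average 30 per second

print(f"mean acceleration: {a:.1f} m/s^2 (~{a/9.81:.1f} g)")   # ~35 m/s^2, ~3.5 g
print(f"time on runway:    {t/86_400:.1f} days")               # ~10 days
print(f"pellets consumed:  {n:.2e}")                           # tens of millions
```

Roughly three and a half g sustained for about ten days, and tens of millions of precisely placed pellets, which gives a sense of why Landis’ concern about ignition at the low-speed end and the placement problem dominate the discussion.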

And there you have the Bussard Buzz Bomb, as Kare styled it. The reference is of course to the German V-1, which made a buzzing, staccato sound as it moved through English skies, a sound those who heard it came to dread, because when it stopped you never knew where the bomb would fall. You can’t hear anything in space, but if you could, Kare told me, his starship design would sound much like the V-1, hence the name.

In my next post, I’ll be talking about Johndale Solem’s Medusa design, which uses nuclear pulse propulsion in combination with a sail in startling ways. Medusa didn’t rely on a fusion runway, but the coupling of this technology with a runway is what started our discussion. The Netflix ‘3 Body Problem’ raised more than a few eyebrows. I’m not the only one surprised to see the wedding of nuclear pulse propulsion, sails and runways in a single design.

Clifford Singer’s key paper is “Interstellar Propulsion Using a Pellet Stream for Momentum Transfer,” JBIS 33 (1980), pp. 107-115. He followed this up with “Questions Concerning Pellet-Stream Propulsion,” JBIS 34 (1981), pp. 117-119. Gerald Nordley’s “Interstellar Probes Propelled by Self-steering Momentum Transfer Particles” (IAA-01-IAA.4.1.05, 52nd International Astronautical Congress, Toulouse, France, 1-5 Oct 2001) offers his take on self-guided pellets. Jordin Kare’s report on SailBeam concepts is “High-Acceleration Micro-Scale Laser Sails for Interstellar Propulsion,” Final Report, NIAC Research Grant #07600-070, revised February 15, 2002 and available here. You might also enjoy my SailBeam: A Conversation with Jordin Kare.

Interstellar Propulsion in ‘3 Body Problem’

You never know when a new interstellar propulsion concept is going to pop up. Some of us have been kicking around fusion runway ideas, motivated by Netflix’s streaming presentation of the Liu Cixin novel The Three Body Problem. There Earth is faced with invasion from an extraterrestrial civilization, but with centuries to solve the problem because it will take that long for the fleet to arrive. Needing to get as much information as possible about the invaders, scientists desperately search for a way to get human technology up to 1.2 percent of lightspeed to intercept the fleet.

Image: 20 different examples of periodic solutions to the three body problem. Credit: Perosello/Wikimedia Commons. CC BY-SA 4.0.

So how would you do that with technology not much more advanced than today’s? The Netflix show’s solution is ingenious, though confusing for those who assume that the Netflix ‘3 Body Problem’ is based solely on the first of the Cixin novels. Actually it edges into the rest of the trilogy, which includes 2008’s The Dark Forest and 2010’s Death’s End. The whole sequence is known as Remembrance of Earth’s Past, and I had to dig into not just The Three Body Problem but The Dark Forest to find much discussion of any kind of propulsion.

Now we’re in a dark wood indeed. For in The Dark Forest (the title is an allusion to the Fermi paradox, usually linked with concerns over METI), the idea of sending a probe ahead to scout the alien invasion fleet does not appear, nor does it appear in the first novel. What we do get is a lot of confusing discussion, such as this:

“If controlled nuclear fusion is achieved, spacecraft research will begin immediately. Doctor, you know about the two current research forks: media-propelled spacecraft and non-media radiation-drive spacecraft. Two opposing factions have formed around these two directions of research: the aerospace faction advocates research into media-propelled spacecraft, while the space force is pushing radiation-drive spacecraft… The fusion people and I are in favor of the radiation drive. For my part, I feel that it’s the only plan that enables interstellar cosmic voyages.”

The book’s many references to a ‘radiation drive’ seem to be referring to antimatter. What Cixin calls ‘media-propelled spacecraft’ is opaque to me, and I’d welcome reader comments on what it represents. Then there is a ‘curvature drive’ that appears in the final volume of the trilogy, but let’s leave that out of the discussion today. Perhaps it’s a kind of Alcubierre concept, but in any case I want to focus on fusion runways and sails for now, because the Netflix eight-part video presents the idea of sending a relatively small payload toward the invasion fleet using a form of nuclear pulse propulsion.

Here the presentation is accurate if rudimentary, but the idea is fascinating. Because I don’t find this in the novels, I am wondering where, along the route to production, the show acquired a technology made famous originally by Project Orion, with its sequence of nuclear explosions visualized as occurring behind a spacecraft’s huge shock absorbers. Wonderfully, the idea opens up to multiple interstellar propulsion ideas in the literature, including Johndale Solem’s Medusa concept and various fusion runway notions that emerged decades ago, one by my friend Al Jackson and Daniel Whitmire, another by Jordin Kare, who christened his concept the ‘Bussard buzz bomb.’

So we’ve got a lot to talk about. And out of the blue, Adam Crowl wrote to remind me of something Martyn Fogg pointed out in 2017, the last time I wrote about Medusa. Here’s Martyn’s comment:

Suppose these Solem sails were to have a small hole in their centre, they could be steered accurately, and that nuclear propulsion charges could be lined up perfectly in space, perhaps by laser guidance. Then one might imagine an ‘Interstellar Solem Sail Runway’ which would impart a jolt of pulse propulsion each time its sail overtook each charge, thereby accelerating the outgoing ship as a whole up to interstellar cruise velocity. The vessel would only need fuel to decelerate at the target system: a considerable reduction in the mass it would need to carry.

Talk about prescient! Because this is what shows up on the Netflix series.

I’m slammed for time this morning and have way too many ideas floating around as well as tabs open in various screens, so I’m going to break here and pick up this discussion next week, when I want to get into the details of fusion runways, and then I want to relate all this to Solem’s Medusa work by way of illustrating not only how ingenious all these ideas are, but how striking the design in the screen version of the Three Body Problem turns out to be. The designs we’ll be discussing are some of the most innovative that have come out of the interstellar effort thus far.

Free-Floating Planets as Interstellar Targets

Just a few weeks ago I wrote about stellar interactions, taking note of a concept advanced by scientists including Ben Zuckerman and Greg Matloff: stars that pass close to the Sun would make for easier interstellar travel. After all, if a star in its orbit around the Milky Way closes to within half a light year of the Sun, it’s a more feasible destination than Alpha Centauri. Of course, you have to wait for the star to come around, and that takes time. Zuckerman (UCLA), working with Bradley Hansen, has written about the possibility that close encounters are when a civilization will attempt such voyages.
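
To put rough numbers on ‘more feasible,’ here is a quick comparison assuming an illustrative cruise speed of 300 kilometers per second, far beyond anything we have flown but well short of relativistic:

```python
# Trip-time comparison: a star passing at 0.5 light years vs. Alpha Centauri,
# assuming an illustrative cruise speed of 300 km/s (about 0.001 c).

c_kms = 299_792.458
v_kms = 300.0                          # assumed cruise speed, km/s
frac_c = v_kms / c_kms                 # fraction of lightspeed = light years per year

for name, dist_ly in [("Star passing at 0.5 ly", 0.5), ("Alpha Centauri", 4.37)]:
    years = dist_ly / frac_c
    print(f"{name}: {years:,.0f} years")

# ~500 years vs ~4,400 years at the same cruise speed
```

Nearly an order of magnitude less travel time, with the caveat already noted: you may wait a very long time for a star to make that close a pass.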

I have a further idea along the lines of motion through the galaxy and its advantages to explorers, and it’s one that may not require tens of thousands of years of waiting. We’d like to get to another star system because we’re interested in the planets there, so what if an interstellar planet nudges into nearby space? I’ll ignore Oort Cloud perturbations and the rest to focus on a ‘rogue’ or ‘free-floating’ planet as the target of a probe, and ask whether we may not already have some of these in nearby space.

After all, free-floating planets – and I’m now going to start calling them FFPs, because that’s the term that appears in scientific papers on the matter – are hard to find. There being no reflected starlight to look for, the most productive way is to pick them out by their infrared signature, which means finding them when they’re relatively young. This is what Núria Miret-Roig (University of Vienna) and team did a couple of years ago, working with data from the Very Large Telescope and other sources. Lo and behold, over one hundred FFPs turned up, all of them infants and still warm.

Image: The locations of 115 potential FFPs [free-floating planets] in the direction of the Upper Scorpius and Ophiuchus constellations, highlighted with red circles. The exact number of rogue planets found by the team is between 70 and 170, depending on the age assumed for the study region. This image was created assuming an intermediate age, resulting in a number of planet candidates in between the two extremes of the study. Credit: ESO/N. Risinger (skysurvey.org).

But young FFPs are most likely to be found in star-forming regions, and two of these (in Scorpius and Ophiuchus) are where Miret-Roig and team searched. What’s likely to amble along in our rather more sedate region is an FFP with enough years on it to have cooled down. The WISE survey (Wide-field Infrared Survey Explorer) showed how difficult it is to pin down red dwarfs in the neighborhood, although it can be done. Even there, when you get down to L- and T-class brown dwarfs, uncertainty persists about whether we can find them all. With planets the challenge is even greater.

Sometimes FFPs are found through microlensing toward the galactic core, but I don’t think we can rely on that method for finding a population of such worlds within, say, half a light year. Nonetheless, Miret-Roig is not alone in pointing out that “there could be several billions of these free-floating planets roaming freely in the Milky Way without a host star.” Indeed, that number could be on the low side given what we’re learning about how these objects form. Given the excitement over ‘Oumuamua and other interstellar interlopers that may appear, I’m surprised that there hasn’t been more attention paid to how we might detect planet-sized objects near our system.

The ongoing search for Planet 9 demonstrates how difficult finding a planet outside the ecliptic can be right here at home. While pondering the best way to proceed, I’ll divert the discussion to rogue planet formation, which has always been central to the debate. Are the processes rare or common, and if the latter, do most stellar systems, including our own, have the potential for ejecting planets? The last two decades of study have been productive, as we have refined our methods for modeling this process.

Recent work on the Trapezium Cluster in the Orion Nebula shows us how the catalog of FFPs is growing. The Trapezium Cluster is helpfully located out of the galactic plane, and there is a molecular cloud behind it that reduces the problems posed by field stars. I was startled to learn about this study (conducted at the European Space Agency’s ESTEC facility in the Netherlands by Samuel Pearson and Mark J McCaughrean) because of the sheer number of FFPs it turned up. Some 540 FFP candidates are identified here, ranging in mass from 0.6 to 13 Jupiter masses, although the range is an estimate based on the age of the cluster and our current models of gas giant evolution.

Image: A total of 712 individual images from the Near Infrared Camera on the James Webb Space Telescope were combined to make this composite view of the Orion Nebula and the Trapezium Cluster. Credit: NASA, ESA, CSA/Science leads and image processing: M. McCaughrean, S. Pearson, CC BY-SA 3.0 IGO.

What stopped me cold about this work is that among the 540 candidate FFPs, 40 are binaries. Two free-floating planets moving together without a star, and enough of them that we have to learn a new term: JuMBOs, for Jupiter-mass binary objects. How does that happen? There are even two triple systems in the data. Digging into the paper:

…we can compare their statistical properties…with higher-mass systems. The JuMBOs span the full mass range of our PMO [planetary-mass object] candidates, from 13 MJup down to 0.7 MJup. They have evenly distributed separations between ∼25–390 au, which is significantly wider than the average separation of brown dwarf-brown dwarf binaries which peaks at ∼ 4 au [42, 43]. However, as our imaging survey is only sensitive to visual binaries with separations > 25 au, we can not rule out an additional population of JuMBOs with closer orbits. For this reason we take 9% as a lower bound for the PMO multiplicity fraction. The average mass ratio of the JuMBOs is q = 0.66. While there are a significant number of roughly equal-mass JuMBOs, only 40% of them have q ≥ 0.8. This is much lower than the typical mass ratios for brown dwarfs, which very strongly favour equal masses.

That last line is interesting. Our FFP binary systems tend to have planets of distinctly different masses, which implies, according to the authors, that if the JuMBOs formed through core collapse and fragmentation – like a star – “then there must be some fundamental extra ingredient involved at these very low masses.” But the binary systems here go well below the mass where this formation method was thought to work. That opens up the ‘ejection’ hypothesis, with the planets forming in a circumstellar disk only to be ejected by gravitational interactions. So note this:

In either case, however, how pairs of young planets can be ejected simultaneously and remain bound, albeit weakly at relatively wide separations, remains quite unclear. The ensemble of PMOs and JuMBOs that we see in the Trapezium Cluster might arise from a mix of both of these “classical” scenarios, even if both have significant caveats, or perhaps a new, quite separate formation mechanism, such as a fragmentation of a star-less disk is required.

Ejection is a rational thing to look at considering that gravitational scattering is a well-studied process and may well have occurred in the early days of our own system. On the other hand, in star-forming regions like Trapezium the nascent systems are so young that this scenario may be less likely than the core-collapse model, in which the process is similar to star formation as a molecular cloud collapses and fragments. The open question is whether a scenario like this, which seems to work for brown dwarfs, is also applicable to considerably smaller FFPs in the Jupiter-mass range.

In any case, it seems unlikely that binary planets could survive ejection from a host system. As co-author Pearson puts it, “Nine percent is massively more than what you’d expect for the planetary-mass regime. You’d really struggle to explain that from a star formation perspective…. That’s really quite puzzling.”

All of which triggered a new paper from Fangyuan Yu (Shanghai Jiao Tong University) and Dong Lai (Cornell University), which takes an entirely different tack when it comes to formation of binary FFPs:

The claimed detection of a large fraction (9 percent) of JuMBOs among FFPs (Pearson & McCaughrean 2023) seems to suggest that core collapse and fragmentation (i.e. scaled-down star formation) channel plays an important role in producing FFPs down to Jupiter masses, since we do not expect the ejection channel to produce binary planets. On the other hand, (Miret-Roig et al. 2022) suggested that the observed abundance of FFPs in young star clusters significantly exceeds the core collapse model predictions, indicating that ejections of giant planets must be frequent within the first 10 Myr of a planetary system’s life.

Yu and Lai look at close stellar flybys as a contributing factor to FFP binary formation. If we’re talking about dense young star clusters, encounters between stars should be frequent, and there has been at least one study advancing the idea that bound binary planets could be the result of such flybys. Yu and Lai model two-planet systems to study the effects of a flyby on single and double-planet systems. Will an FFP result from a close flyby? A binary FFP? Or will the flyby star contribute a planet to the system it encounters?

These numerical experiments yield interesting results: The production rate of binary pairs of FFPs caused by stellar flybys is always less than 1 percent in their modeling, even when parameters are adjusted to make for tightly packed stellar systems. Directly addressing the JWST work in Trapezium and the large number of JuMBOs found there, Yu and Lai deduce that they cannot be caused by flybys, and because ejection scenarios are so unlikely, they see “a scaled-down version of star formation” at work “via fragmentation of molecular cloud cores or weakly-bound disks or pseudo-disks in the early stages of star formation.”

The matter remains unresolved, producing much fodder for future observations and debate. And while we figure out how to detect free-floating planets that may already be far closer than Proxima Centauri, we can create science fictional scenarios of journeys not just to a single rogue planet, but to a binary or even a triple system cohering despite the absence of a central star. I can only imagine how much Robert Forward, the man who gave us Rocheworld, would have enjoyed working with that.

The paper is Pearson & McCaughrean, “Jupiter Mass Binary Objects in the Trapezium Cluster” (preprint). The Miret-Roig paper is “A rich population of free-floating planets in the Upper Scorpius young stellar association,” published online at Nature Astronomy 22 December 2021 (abstract). The Fangyuan Yu & Dong Lai paper is “Free-Floating Planets, Survivor Planets, Captured Planets and Binary Planets from Stellar Flybys,” submitted to The Astrophysical Journal (preprint).

Building the Heavy Elements

A kilonova at the wrong place and time would spell trouble for any lifeforms emerging on a planetary surface. Just how we found out about kilonovae and the conditions that create them, not to mention their hypothesized effects, is the subject of Don Wilkins’ latest, a look at Cold War era surveillance that wound up pushing astronomy’s frontiers. That work now causes us to ponder the formation of an ‘island of stability’ containing superheavy element isotopes with unique properties. It also raises interesting questions about our Solar System’s history and possible exposure to a nearby event. Based at Washington University in St. Louis, Don here turns his interest in deep space exploration to the formation and structure of matter in processes we’re only beginning to unlock.

by Don Wilkins

Setting out to discover something on Earth can sometimes reveal an unexpected result from a far more interesting source. As a case in point, consider what happened in August of 1963, when Great Britain, the US and the USSR signed a nuclear test ban treaty forbidding nuclear detonations in space or the Earth’s atmosphere. For the older space nerds, this is the same treaty that ended the Orion program. Given the Soviets’ history of violating treaties, the US launched, within two months of the signing, the Vela series of satellites (the name derived from the Spanish verb “velar,” to watch), designed to monitor compliance with the treaty. What they found was a bit of a surprise.

The satellites were heavily instrumented with x-ray, gamma-ray, neutron, optical and electromagnetic pulse (EMP) detectors along with other sensors designed to monitor the space environment. The satellites operated in pairs on opposite sides of a circular orbit some 250,000 kilometers in diameter, Figure 1.

Figure 1. A Pair of Vela Satellites Readying for Launch. Los Angeles Air Force Base, U.S. Air Force Photo.

X-ray detectors directly sense a nuclear blast. Gamma-ray and neutron detector activations would confirm the nuclear event and prompt a stiffly worded diplomatic note to the Soviets. Vela satellites were positioned to monitor the Earth and the far side of the Moon. The latter involved detecting gamma radiation from radioactive debris scattered by a clandestine explosion. From the separation of the satellites and the difference in time between sensor triggers on each, the angle to the event could be determined to about one-fifth of a radian, or roughly ten degrees. Angles to a single event observed by multiple pairs of satellites could provide a more precise direction to the source.
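
The direction finding here is time-difference-of-arrival triangulation. A minimal sketch of the geometry, with an assumed timing uncertainty rather than the actual Vela instrument specifications, shows how a fraction-of-a-radian figure can arise:

```python
# Rough time-difference-of-arrival (TDOA) geometry for a Vela pair.
# A wavefront from direction theta (measured from the baseline) reaches the two
# satellites with delay dt = (d / c) * cos(theta), so a timing error sigma_t
# maps to an angular error of roughly c * sigma_t / (d * sin(theta)).

import math

c = 3.0e8          # speed of light, m/s
d = 2.5e8          # baseline: satellites on opposite sides of a 250,000 km orbit, m
sigma_t = 0.1      # assumed timing uncertainty, s (illustrative, not a Vela spec)

theta = math.radians(90)             # best case: source broadside to the baseline
dtheta = c * sigma_t / (d * math.sin(theta))
print(f"angular error ~ {dtheta:.2f} rad ({math.degrees(dtheta):.0f} deg)")
# ~0.12 rad (~7 deg), the same order as the one-fifth radian quoted above
```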

No diplomatic note concerning illegal nuclear tests was ever sent to the Soviets. Fortunately, events which triggered the detectors but were clearly not signatures of nuclear detonations were not discarded. These formed a database which eventually led to the discovery of enormous but short-lived gamma-ray bursts (GRBs) originating in deep space. GRBs typically last just a few seconds (although a recent discovery lasted an astounding 200 seconds), yet they are as luminous as 100 million galaxies, the equivalent of 1,000 novae. Gamma-ray sources have temperatures of approximately 10⁹ K and are among the hottest objects ever observed. Compounding the mystery, researchers had only a line pointing toward the origin of each burst, with no way to know its distance.

GRBs occur daily and are uniformly distributed across the observable Universe. Initially, no counterpart to the GRBs could be found in the visual spectrum. Then, in 1997, Italian astronomers caught the fading light of an object that could be linked with a GRB, Figure 2.

Figure 2. Left: Arrow points at the GRB optical counterpart. Right: An IR image of the tilted box area in the left image. The optical source is gone, and only a faint image of a very distant galaxy remains. The other two bright sources on the right side are spiral galaxies. Credit: W. M. Keck Observatory / NASA.

The favored explanation for GRBs is the collision of two neutron stars, or of a neutron star and a black hole. Astronomers named the neutron star mergers kilonovae (KN). In addition to GRBs, these collisions emit high-frequency gravitational waves (GW) and are, through rapid neutron capture (the r-process) nucleosynthesis, likely production sites of heavy elements. [1] A team led by Andrew Levan examined spectroscopy of GRB 230307A, a long-duration GRB associated with a kilonova. A 2.15 micron emission line in that analysis is associated with tellurium (atomic mass 130), and a mid-infrared peak with lanthanide production. GRB nucleosynthesis creates a wide range of atomic masses, including heavy elements (mass above iron). [2]

These observations and others support the hypothesis that heavy elements within the Solar System are the remnants of a kilonova.[3-4]

Figure 3 depicts the evolution of a neutron star merger over the course of millennia. The drawing on the left shows the aftermath a few years after the merger, at dimensions below a parsec. Gamma-rays are emitted in the dynamic ejecta and the hot cocoon. The gamma-ray jet and cocoon emissions are short-lived; the afterglow they produce emits across a broad band of frequencies for several years. The dynamic ejecta include heavy elements which decay in less than a month to produce the UV, optical and IR displays. X-ray emissions, at potentially lethal levels, result from the interaction between the jet and the interstellar medium (ISM).

Figure 3. Structures Resulting from Neutron Star Merger

On the right hand side, a powerful shock wave from the merger produces a bubble in the ISM. Potentially lethal cosmic rays result.

Initial analyses of GRBs focused on the on-axis gamma-ray bursts. M.L. Perkins’ team analyzed the data to understand the threat posed by off-axis emissions and how it relates to other cosmic threats. [5]

According to the team:

For baseline kilonova parameters, … the X-ray emission from the afterglow may be lethal out to ∼ 5 pc and the off-axis gamma-ray emission may threaten a range out to ∼ 4 pc, whereas the greatest threat comes years after the explosion, from the cosmic rays accelerated by the kilonova blast, which can be lethal out to distances up to ∼ 11 pc. … . Based on the frequency and potential damage done, the threats in order of most to least harmful are: solar flares, impactors, supernovae, on-axis GRBs, and lastly off-axis BNS mergers.

One question concerns how close to Earth a kilonova may have occurred. The presence of two isotopes, iron-60 (Fe-60) and plutonium-244 (Pu-244), found in ocean sediments deposited 3 to 4 million years ago, offers clues. These isotopes are formed only in very energetic processes.

Fe-60 can, in theory, be created in a standard supernova. Pu-244 is created only in specific classes of supernovae or in a kilonova, the merger of a neutron star with another compact object.

Figure 4. Artist’s impression of a neutron star merger. Credit: University of Warwick / Mark Garlick.

One of the problems was the ratio between the isotopes. Researchers at the Università di Trento found that, with a specific debris ejection pattern and a certain tilt of the merger event, the observed ratio of iron to plutonium isotopes could be explained by a kilonova. [6] The scientists examined rare types of supernovae, such as magneto-rotational supernovae and collapsars, but concluded that a kilonova was the source of the isotopes.

To determine how far from Earth the kilonova occurred, the researchers calculated the different spreads for each element based on the wind speed created by the kilonova. The answer was about 150 to 200 parsecs or about 500 to 600 light years away.

Hydrogen and helium were created in the Big Bang; heavier elements were forged within the interiors of stars and in supernovae and kilonovae. Data provided by astronomer Jennifer Johnson of Ohio State University were used to produce the periodic table depicting the origins of the elements, shown in Figure 5 below.

Researchers have examined the heavy element composition of a number of stars, finding that some of these elements are the product of the radioactive decay of previously unobserved elements. [7] These predecessor elements form in a theorized “island of stability” with atomic numbers centered around 126. Isotopes in this region, beyond the fleeting transuranics, are hypothesized to possess “magic numbers” of protons and neutrons that allow them lifespans of thousands or millions of years. The rapid neutron-capture process that occurs in neutron-rich environments of neutron star mergers and supernovae appears inadequate to form the elements in the island of stability. How these transuranics were produced is a mystery.

Figure 5. Origins of Elements – Courtesy NASA’s Goddard Space Flight Center.

The effects of neutron star mergers, like rain, depend on timing. In the early stages of star formation, the collisions shower the clouds of hydrogen and helium with heavy metals necessary for life. Yet after life gained its foothold, an improperly timed – and ill-placed – kilonova could severely damage or erase what a predecessor started.

References

1. B. D. Metzger, G. Martínez-Pinedo, S. Darbha, E. Quataert, A. Arcones, D. Kasen, R. Thomas, P. Nugent, I. V. Panov, N. T. Zinner, “Electromagnetic counterparts of compact object mergers powered by the radioactive decay of r-process nuclei,” Monthly Notices of the Royal Astronomical Society, Volume 406, Issue 4, August 2010, Pages 2650–2662, https://doi.org/10.1111/j.1365-2966.2010.16864.x

2. Levan, A., Gompertz, B.P., Salafia, O.S. et al. “Heavy element production in a compact object merger observed by JWST.” Nature (2023). https://doi.org/10.1038/s41586-023-06759-1

3. Bartos, I., Marka, S. “A nearby neutron-star merger explains the actinide abundances in the early Solar System.” Nature 569, 85–88 (2019). https://doi.org/10.1038/s41586-019-1113-7

4. Watson, Darach, Hansen, Camilla J., Selsing, Jonatan, et al., “Identification of strontium in the merger of two neutron stars,” arXiv:1910.10510 [astro-ph.HE], 23 Oct 2019.

5. Perkins, M.L., Ellis, John, Fields, B.D., et al., “Could a Kilonova Kill: a Threat Assessment,” arXiv:2310.11627v1, 17 October 2023.

6. Leonardo Chiesa, et al., “Did a kilonova set off in our Galactic backyard 3.5 Myr ago?,” arXiv (2023). DOI: 10.48550/arxiv.2311.17159

7. Ian U. Roederer, et al., “Element abundance patterns in stars indicate fission of nuclei heavier than uranium,” Science, 7 Dec 2023, Vol 382, Issue 6675, pp. 1177-1180, DOI: 10.1126/science.adf1341