
Proxima Centauri: Microlensing Yields New Data

It’s not easy teasing out information about a tiny red dwarf star, even when it’s the closest star to the Sun. Robert Thorburn Ayton Innes (1861-1933), a Scottish astronomer, found Proxima using a blink comparator in 1915, noting a proper motion similar to that of Alpha Centauri (4.87” per year), with Proxima about two degrees away from the binary. Finding out whether the new star was actually closer than Centauri A and B involved a competition with a man with a similarly august name, Joan George Erardus Gijsbertus Voûte, a Dutch astronomer working in South Africa. Voûte’s parallax figures were more accurate, but Innes didn’t wait for the debate to settle, and proclaimed the star’s proximity, naming it Proxima Centaurus.

The back and forth over parallax and the subsequent careers of both Innes and Voûte make for interesting reading. I wrote both astronomers up back in 2013 in Finding Proxima Centauri, but I’ll send you to my source for that article, Ian Glass (South African Astronomical Observatory), who published the details in the magazine African Skies (Vol. 11 (2007), p. 39). You can find the abstract here.

Image: Shining brightly in this Hubble image is our closest stellar neighbour: Proxima Centauri. Although it looks bright through the eye of Hubble, as you might expect from the nearest star to the Solar System, the star is not visible to the naked eye. Its average luminosity is very low, and it is quite small compared to other stars, at only about an eighth of the mass of the Sun. However, on occasion, its brightness increases. Proxima is what is known as a “flare star”, meaning that convection processes within the star’s body make it prone to random and dramatic changes in brightness. The convection processes not only trigger brilliant bursts of starlight but, combined with other factors, mean that Proxima Centauri is in for a very long life. Astronomers predict that this star will remain on the main sequence for another four trillion years, some 300 times the age of the current Universe. These observations were taken using Hubble’s Wide Field and Planetary Camera 2 (WFPC2). Credit: NASA/ESA.

It’s a long way from blink comparators to radial velocity measurements, the method that enabled our first exoplanet discoveries back in the 1990s by measuring how the gravitational pull of an orbiting planet tugs its parent star away from us, then towards us on the other side of the orbit, with all the uncertainties that implies. We’re still drilling into the details of Proxima Centauri, and radial velocity occupies us again today. The method depends on the mass of the star, for if we know that, we can then make inferences about the mass of the planets we find around it.

Thus the discovery of Proxima Centauri’s habitable zone planet, Proxima b, a planet we’d like to know much more about given its enticing minimum mass of about 1.3 Earths and an orbital period of just over 11 days. Radial velocity methods at exquisite levels of precision rooted out Proxima b and continue to yield new discoveries.
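
To see why the star’s mass matters so much, consider the standard relation for the radial velocity semi-amplitude K. Here is a minimal sketch (my own illustration, not code from the paper; the stellar mass, planet mass and period below are approximate Proxima b-like values):

```python
import numpy as np

# Radial velocity semi-amplitude for a circular orbit with m_p << M_star:
# K = (2*pi*G/P)**(1/3) * m_p*sin(i) / M_star**(2/3)
G = 6.674e-11                    # m^3 kg^-1 s^-2
M_sun, M_earth = 1.989e30, 5.972e24

M_star = 0.12 * M_sun            # mass-luminosity estimate for Proxima
m_p = 1.3 * M_earth              # Proxima b's minimum mass (m sin i)
P = 11.2 * 86400.0               # ~11-day orbital period, in seconds

K = (2 * np.pi * G / P) ** (1 / 3) * m_p / M_star ** (2 / 3)
print(f"K ~ {K:.2f} m/s")        # ~1.5 m/s -- hence 'exquisite' RV precision

# For a fixed measured K, the inferred planet mass scales as M_star**(2/3),
# so a 40% error in the stellar mass means a ~25% error in the planet mass.
```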

We’re learning a lot about Alpha Centauri itself – the triple system of Proxima and the central binary Centauri A and B. Just a few years ago, Pierre Kervella and team were able to demonstrate what had previously been only a conjecture, that Proxima Centauri was indeed gravitationally bound to Centauri A and B. The work was done using high-precision radial velocity measurements from the HARPS spectrograph. But we still had uncertainty about the precise value of Proxima’s mass, which had in the past been extrapolated from its luminosity.

This mass-luminosity relation is useful when we have nowhere else to turn, but as a paper from Alice Zurlo (Universidad Diego Portales, Chile) explains, there are significant uncertainties in these values, with error bars growing the smaller the star in question. As we learn more about not just other planets but warm dust belts around Proxima Centauri, we need a better read on the star’s mass, and this leads to the intriguing use to which Zurlo and team have put gravitational microlensing.

Here we’re in new terrain. The gravitational deflection of starlight is well demonstrated, but to use it, we need a background object to move close enough to Proxima Centauri on the sky that the latter can deflect its light. A measurement of this kind was recently made on the star Stein 2051 B, a white dwarf, using data from the Hubble Space Telescope, the first use of gravitational lensing to measure the mass of a star beyond our Solar System. Zurlo and team have taken advantage of microlensing events at Proxima involving two background stars, one in 2014 (source 1), the other two years later (source 2), but the primary focus of their work is with the second event.
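
For a sense of scale, here is a back-of-the-envelope sketch of the angular Einstein radius for a Proxima-like lens (my own illustration, not code from the paper; the background-star distance in particular is an arbitrary assumption):

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
pc = 3.086e16        # m

M_lens = 0.15 * M_sun    # Proxima-like lens mass
D_L = 1.30 * pc          # distance to the lens (Proxima)
D_S = 2000.0 * pc        # assumed distance to the background star

theta_E = np.sqrt(4 * G * M_lens / c**2 * (D_S - D_L) / (D_L * D_S))
print(f"Einstein radius ~ {np.degrees(theta_E) * 3.6e6:.1f} mas")  # ~30 mas

# The apparent centroid shift of the background star peaks at about
# 0.35 * theta_E (at impact parameter u = sqrt(2) Einstein radii), i.e. a
# few milliarcseconds here -- the regime SPHERE's astrometry had to reach.
```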

Using the Spectro-Polarimetric High-contrast Exoplanet REsearch instrument (SPHERE) at the Very Large Telescope at Cerro Paranal in Chile, the researchers observed Proxima Centauri and the background stars from March of 2015 to June of 2017. You can see Proxima in the image below, with the two background stars. In the caption, IRDIS refers to the near-infrared imager and spectrograph which is a part of the SPHERE/VLT installation.

Image: This is Figure 1 from the paper. Caption: IRDIS FoV for the April 2016 epoch. The image is derotated, median combined, and cleaned with a spatial filter. At the center of the image, inside the inner working angle (IWA), the speckle pattern dominates, in the outer part of the image our reduction method prevents the elongation of the stars’ point spread functions (PSFs). The bars in the lower right provide the spatial scale. North is up and East is to the left. Credit: Zurlo et al.

The extraordinary precision of measurement needed here is obvious, and the mechanics of making it happen are described in painstaking detail in the paper. The authors note that the SPHERE observations will not be further refined because the background star they call Source 2 is no longer visible on the instrument’s detector. Nonetheless:

The precision of the astrometric position of this source is the highest ever reached with SPHERE, thanks to the exquisite quality of the data, and the calibration of the detector parameters with the large population of background stars in the FoV. Over the next few years, Proxima Cen will be followed up to provide a better estimation of its movement on the sky. These data will be coupled with observations from HST and Gaia to take advantage of future microlensing events.

The results of the two-year monitoring program show a deflection of the background sources’ light that yields our tightest constraint yet on the mass of Proxima Centauri. The value is 0.150 solar masses, with an error range of +0.062 to -0.051, or roughly 40 percent. This is, the authors note, “the first and the only currently possible measurement of the gravitational mass of Proxima Centauri.”

The previous value drawn from mass-luminosity figures was 0.12 ± 0.02 M☉. What next? While Source 2 may be out of the picture using the SPHERE installation, the authors add that Gaia measurements of the proper motion and parallax of that star may further refine the analysis. Future microlensing will have to wait, for no star as bright as Source 2 will pass within appropriate range of Proxima for another 20 years.

The paper is Zurlo et al., “The gravitational mass of Proxima Centauri measured with SPHERE from a microlensing event,” Monthly Notices of the Royal Astronomical Society Vol. 480, Issue 1 (October, 2018), 236-244 (full text). The paper on Proxima Centauri’s orbit in the Alpha Centauri system is Kervella et al., “Proxima’s orbit around α Centauri,” Astronomy & Astrophysics Volume 598 (February 2017) L7 (abstract).


What can we say about the possible appearance and spread of civilizations in the Milky Way? There are many ways of approaching the question, but in today’s essay, Dave Moore focuses on a recent paper from Robin Hanson and colleagues, one that has broad implications for SETI. A regular contributor to Centauri Dreams, Dave was born and raised in New Zealand, spent time in Australia, and now runs a small business in Klamath Falls, Oregon. He adds: “As a child, I was fascinated by the exploration of space and science fiction. Arthur C. Clarke, who embodied both, was one of my childhood heroes. But growing up in New Zealand in the ‘60s, such things had little relevance to life, although they did lead me to get a degree in biology and chemistry.” Discovering like-minded people in California, he expanded his interest in SETI and began attending conferences on the subject. In 2011, he published a paper in JBIS, which you can read about in Lost in Time and Lost in Space.

by Dave Moore

I consider the paper “If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare,” by Robin Hanson, Daniel Martin, Calvin McCarter, and Jonathan Paulson, a significant advance in addressing the Fermi Paradox. To explain exactly why, I need to go into its background.

Introduction and History

The Fermi paradox hangs over all our discussions and theories about SETI like a sword of Damocles, ready to fall and cut our assumptions to pieces with a simple question: where are the aliens? There is no reason to suppose that Earth-like planets could not have formed billions of years before Earth did, or that exosolar technological civilizations (ETCs) could not have arisen billions of years ago and spread throughout the galaxy. So why don’t we see them? And why haven’t they visited us, given the vast expanse of time that has gone by?

Numerous papers and suggestions have tried to address this conundrum, usually ascribing it to some form of alien behavior, or arguing that the principle of mediocrity doesn’t apply and intelligent life is a very rare fluke.

The weakness of the behavioral arguments is that they assume universal alien behaviors, but given the immense differences we expect from aliens—they will be at least as diverse as life on Earth—why would they all have the same motivation? It only takes one ETC with the urge to expand, and diffusion scenarios show that it’s quite plausible for an expansive ETC to spread across the galaxy in a fraction (tens of millions of years) of the time in which planets could have given rise to ETCs (billions of years).

And there is not much evidence that the principle of mediocrity doesn’t apply. Our knowledge of exosolar planets shows that while Earth as a type of planet may be uncommon, it doesn’t look vanishingly rare, and we cannot exclude, from the evidence we have, the possibility that other types of planets can give rise to intelligent life.

Also, modest growth rates can produce Kardashev III levels of energy consumption within tens of thousands of years, which in cosmological terms is a blink of the eye.
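
A quick check of that claim, assuming steady exponential growth between two order-of-magnitude endpoints (both figures below are rough assumptions of mine, not numbers from any of the papers discussed):

```python
import numpy as np

P_now = 2e13     # W: rough present-day human power consumption
P_kiii = 4e37    # W: rough total luminosity of the Milky Way (Kardashev III)

for growth in (0.01, 0.005, 0.0025):
    years = np.log(P_kiii / P_now) / np.log(1 + growth)
    print(f"{growth:.2%} per year -> {years:,.0f} years")
# Even a 0.25% annual growth rate closes the gap in ~22,000 years.
```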

In 2010, I wrote a paper for JBIS modeling the temporal dispersion of ETCs. By combining this with other information, in particular diffusion models looking at the spread of civilizations across the galaxy, it was apparent that it was just not possible for spreading ETCs to occur with any frequency at all if they lasted longer than about 20,000 years. Longer than that and at some time in Earth’s history, they would have visited/colonized us by now. So, it looks like we are the first technological civilization in our galaxy. This may be disappointing for SETI, but there are other galaxies out there—at least as many as there are stars in our galaxy.

My paper was a very basic attempt to deduce the distribution of ETCs from the fact that we haven’t observed any yet. Robin Hanson et al’s paper, however, is a major advance in this area, as it builds a universe-wide quantitative framework around this lack of observational evidence and produces some significant conclusions.

It starts with the work done by S. Jay Olson. In 2015, Olson began to bring out a series of papers assuming the expansion of ETCs and modeling their distributions. He reduced all the parameters of ETC distribution down to two: (α), the rate at which civilizations appeared over time, and (v), their expansion rate, which was assumed to be similar for all civilizations, as ultimately all rocketry is governed by the same laws of physics. Olson varied these two parameters and calculated the results for the following: the ETC-saturated fraction of the universe, the expected number and angular size of their visible domains, the probability that at least one domain is visible, and finally the total expected fraction of the sky eclipsed by expanding ETCs.

Hanson et al took Olson’s approach but incorporated the Hard Steps power law into modeling the appearance rate of ETCs, which they felt was more accurate and predictive than the rate-over-time models Olson used.

The Hard Steps Power Law

The Hard Steps power law was first introduced in 1953 to model the appearance of cancer cells. To become cancerous, an individual cell must undergo a number of specific mutations (hard steps, i.e. improbable steps) in a certain order. The average time for each mutation is longer than a human lifetime, but we have a lot of cells in our body, so 40% of us develop cancer, the result of a series of improbabilities in a given cell.

If you think of all the planets in a galaxy that life can evolve on as cells, and the ones on which an ETC arises as cancerous, you get the idea. The Hard Steps model is a power law: the probability of completing the full sequence within a given window of time scales as the length of that window raised to the power of the number of hard steps. Therefore the chance of anything happening in a given time goes down very rapidly with the number of hard steps required.
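
To make that concrete, note that the time to complete n sequential steps, each exponentially distributed, follows a Gamma distribution. A minimal sketch (my illustration, not code from Hanson et al.; the hardness value is arbitrary):

```python
from scipy.stats import gamma

n = 6              # hypothetical number of hard steps (Carter's estimate)
mean_step = 100.0  # expected time per step, in units of the available window

# Completion time of n sequential unit-rate exponential steps ~ Gamma(n).
for window in (1.0, 2.0, 4.0):
    p = gamma.cdf(window / mean_step, a=n)
    print(f"window = {window}: P(all steps done) = {p:.3e}")
# Doubling the window multiplies the success probability by ~2**6 = 64:
# the power-law scaling described above.
```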

In Earth’s case, the window is about 5.5 billion years, the time from Earth’s origin until a runaway greenhouse sets in about a billion years from now.

The Number of Hard Steps in our Evolution

In 1983, Brandon Carter was looking into how likely it was for intelligent life to arise on Earth, and he thought that, given the limits on the time available, this could be modeled as a hard-step problem. To quote:

This means that some of the essential steps (such as the development of eukaryotes) in the evolution process leading to the ultimate emergence of intelligent life would have been hard, in the sense of being against the odds in the available time, so that they are unlikely to have been achieved in most of the earth-like planets that may one day be discovered in nearby extra-solar systems.

Carter estimated that the number of hard steps it took to reach our technological civilization was six: biogenesis, the evolution of bacteria, eukaryotes, combogenesis [sex], metazoans, and intelligence. This, he concluded, seemed the best fit for the amount of time it had taken for us to evolve. There has been much discussion and examination of the number of hard steps in the literature, but the idea has held up fairly well, so Hanson et al varied the number of hard steps around six as one of their model variables.

The Paper

The Hanson paper starts out by dividing ETCs into two categories: loud aliens and quiet aliens. To quote:

Loud (or “expansive”) aliens expand fast, last long, and make visible changes to their volumes. Quiet aliens fail to meet at least one of these criteria. As quiet aliens are harder to see, we are forced to accept uncertain estimates of their density, via methods like the Drake equation. Loud aliens, by contrast, are far more noticeable if they exist at any substantial density.

The paper then puts aside the quiet aliens, which are difficult to find with our current technology, and focuses on the loud ones. In a manner similar to Olson, it runs models, but with the following three variables:

i) The number of hard steps required for an ETC to arise.

ii) The conversion rate of a quiet ETC into a loud, i.e. visible, one.

iii) The expansion speed of a civilization.

In their models (like the one illustrated below), a civilization arises. At some point it converts into an expansive civilization and spreads out until it abuts a neighbor, at which point it stops. Further civilizations are prevented from arising in the volume it controls. Results showing alien civilizations that would be visible from our point of view are discarded, narrowing the range of these variables. (Note: time runs forward going down the page.)
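
Here is a toy 1+1D version of that exclusion mechanism (my own sketch, not the authors’ model, which works in an expanding 3+1D universe): candidate civilizations appear at random points in space and time, and a candidate is suppressed if an earlier one’s expansion front, moving at speed v, has already swept over its location.

```python
import numpy as np

rng = np.random.default_rng(1)

L, T, v, n_candidates = 1000.0, 1000.0, 0.5, 300
x = rng.uniform(0, L, n_candidates)   # where each candidate would arise
t = rng.uniform(0, T, n_candidates)   # when each candidate would arise

survivors = []
for i in np.argsort(t):               # process candidates in time order
    # Survive only if no earlier survivor's front has reached this point.
    if all(abs(x[i] - x[j]) > v * (t[i] - t[j]) for j in survivors):
        survivors.append(i)

print(f"{len(survivors)} of {n_candidates} civilizations arise; "
      f"the rest are preempted by earlier expansion")
```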

Results

In a typical run with parameters resulting in them not being visible to us, expansive civilizations now control 40-50% of the universe, and they will finish up controlling something like a million galaxies when we meet one of them in 200 million years’ time. (Note: this paradoxical result is due to the speed of light. They control 40-50% of the universe now, but the electromagnetic radiation from their distant galaxies has yet to reach us.)

From these models, three main outcomes become apparent:

Our Early Appearance

The Hard Step model itself contains two main parameters: the number of steps and the time in which they must be completed. By varying these parameters, Hanson et al showed that, unless one assumes fewer than two hard steps (life and technological civilizations evolve easily) and a very restrictive limit on planet habitability lifetimes, the only way to account for a lack of visible civilizations is to assume we have appeared very early in the history of civilizations arising in the universe. (In keeping with the metaphor, we’re a childhood cancer.)

All scenarios with a higher number of hard steps than this greatly favor a later arrival time of ETCs, so an intelligent life form producing a technological civilization at this stage of the universe is a low-probability event.

Chances of other civilizations in our galaxy

Another result coming from their models is that the higher the chance of an expansive civilization evolving from a quiet civilization, the lower the chance that there are any ETCs aside from us in our galaxy. To summarize their findings: assuming a generous million-year average duration for a quiet civilization to become expansive, very low transition chances (p) are needed to estimate that even one other civilization was ever active anywhere along our past light cone (p < 10⁻³), or existed in our galaxy (p < 10⁻⁴), or is now active in our galaxy (p < 10⁻⁷).

For SETI to be successful, there needs to be a loud ETC close by, and for one to be close by, the conversion rate of quiet civilizations to expansive, loud ones must be on the order of one in a billion. This result does not point to SETI searches being productive.

Speed of expansion

The other variable used in the models is the speed of expansion. Under most assumptions, expansive civilizations cover significant portions of the sky. However, when taking into account the speed of light, the further distant these civilizations are, the earlier they must form for us to see them. One of the results of this relativistic model is that the slower civilizations expand on average, the more likely we are to see them.

This can be demonstrated with the above diagram. The orange portion of the diagram shows the origin and expansion of an ETC at a significant proportion of the speed of light. Because by looking out into space we are also looking back in time, we can only see what is in our light cone (that which is below the red line), so we see the origin of our aliens (say one billion years ago) and their initial spread up to about half that age. After that, the emissions from their spreading civilization have not yet had time to reach us.

The tan triangle represents the area in space from which an ETC spreading at the same rate as the orange aliens would already have arrived at our planet (in which case we would either not exist or we would know about it), so we can assume that there were no expansive aliens having originated in this portion of time and space.

If we make the spread rate a smaller proportion of the speed of light, then this has the effect of making both the orange and tan triangles narrower along the space axis. The size of the tan exclusion area becomes smaller, and the green area, which is the area that can contain observable alien civilizations that haven’t reached us yet, becomes bigger.

You’ll also notice that the narrower orange triangle of the expansive ETC crosses out of our light cone at an earlier age, so we’d only see evidence of their civilization from an earlier time.
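
The geometry is easy to verify numerically. In this minimal sketch (units with c = 1; the region labels are my reading of the diagram’s tan and green areas), origins are sampled uniformly in lookback time and distance, and slower expansion visibly shrinks the “already here” region while growing the observable one:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
T = rng.uniform(0, 1, n)   # lookback time of a candidate ETC origin
d = rng.uniform(0, 1, n)   # its distance from us (c = 1)

for v in (0.9, 0.5, 0.1):
    arrived = d <= v * T              # tan region: they'd already be here
    visible = (d <= T) & ~arrived     # green region: in our light cone, not here
    # fractions are of the whole sampled space-time box
    print(f"v = {v}: excluded {arrived.mean():.2f}, observable {visible.mean():.2f}")
```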

The authors note that the models rely on us being able to detect the boundaries between expansive civilizations and unoccupied space. If the civilizations are out there, but are invisible to our current instruments, then a much broader variety of distributions is possible.

Conclusions

We have always examined the evolution of life on Earth for clues to the distribution of alien life. What is important about this paper is that it connects the two in a quantitative way.

There are a lot of assumptions built into this paper (some of which I find questionable); however, it does give us a framework in which to examine and test them, so it’s a good basis for further work.

To quote Hanson et al:

New scenarios can be invented and the observable consequences calculated immediately. We also introduce correlations between these quantities that are obtained by eliminating dependence on α [appearance rate], e.g. we can express the probability of seeing at least one domain as a function of v [expansion velocity] and the currently life-saturated fraction of the universe based on the fact we haven’t seen or encountered any.

I would point out a conclusion the authors didn’t note. If we have arisen at an improbably early time, then there should be lots of places (planets, moons) with life at some step in their evolution, so while SETI searches don’t look promising from the conclusions of this paper, the search for signs of exosolar life may be productive.

This paper has given us a new framework for SETI. Its parameters are somewhat tangential to the Drake Equation’s, and its approach is basically to work the equation backwards: if N = 0 (the number of civilizations we can communicate with in the Drake equation; the number of civilizations we can observe in this paper), then what is the range of values for f_i (fraction of planets where life develops intelligence), f_c (fraction of civilizations that can communicate/are potentially observable) and L (length of time they survive)? The big difference is that this paper factors in the temporal distribution of civilizations arising, which is not something the Drake Equation addressed. The Drake equation, for something that was jotted down before a meeting 61 years ago, has had a remarkably good run, but we may be seeing a time when it gets supplanted.

References

Robin Hanson, Daniel Martin, Calvin McCarter and Jonathan Paulson, “If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare,” The Astrophysical Journal 922 (2) (2021)

Thomas W. Hair, “Temporal dispersion of the emergence of intelligence: an inter-arrival time analysis,” International Journal of Astrobiology 10 (2): 131–135 (2011)

David Moore, “Lost in Time and Lost in Space: The Consequences of Temporal Dispersion for Exosolar Technological Civilizations,” Journal of the British Interplanetary Society, 63 (8): 294-302 (2010)

Brandon Carter, “Five- or Six-Step Scenario for Evolution?” International Journal of Astrobiology, 7 (2) (2008)

S.J. Olson, “Expanding cosmological civilizations on the back of an envelope,” arXiv preprint arXiv:1805.06329 (2018)


A Habitable Exomoon Target List

Are there limits on how big a moon can be to orbit a given planet? All we have to work with, in the absence of confirmed exomoons, are the satellites of our Solar System’s planets, and here we see what appears to be a correlation between a planet’s mass and the mass of its moons. At least up to a point – we’ll get to that point in a moment.

But consider: As Vera Dobos (University of Groningen, Netherlands) and colleagues point out in a recent paper for Monthly Notices of the Royal Astronomical Society, if we’re talking about moons forming in the circumplanetary disk around a young planet, the total mass of the moons is on the order of 10⁻⁴ Mp, where Mp is the mass of the planet. A planet with 10 times Jupiter’s mass, given this figure, could have a moon as large as a third of Earth’s mass, and so far observational evidence supports the idea that moons form regularly in such disks. There is no reason to believe we won’t find exomoons by the billions throughout the galaxy.
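
The arithmetic behind that one-third figure is quick to check (a sketch using the 10⁻⁴ disk scaling just quoted):

```python
M_jup_in_earths = 317.8        # Jupiter's mass in Earth masses
M_p = 10 * M_jup_in_earths     # a 10 Jupiter-mass planet, in Earth masses
print(1e-4 * M_p)              # ~0.32 Earth masses: about a third of an Earth
```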

Image: The University of Groningen’s Dobos, whose current work targets planetary systems where habitable exomoons are possible. Credit: University of Groningen.

The mass calculation above, though, applies when moons form in a circumplanetary disk. To understand our own Moon, we have to talk about an entirely different formation mechanism: collisions. Here we’re in the fractious pinball environment of a system growing and settling, as large objects find their way into stable orbits. Collisions change the game: moons are now possible at larger moon-to-planet ratios with this second mechanism – our Moon has a mass of about 10⁻² Earth masses. Let’s also consider moons captured by gravitational interactions, of which the prime example in our system is probably Triton.

What we’d like to find, of course, is a large exomoon, conceivably of Earth size, orbiting a planet in the habitable zone, or perhaps even a binary situation where two planets of this size orbit a common barycenter (Pluto and Charon come closest in our system to this scenario). Bear in mind that exoplanet hunting, as it gets more refined, is now turning up planets with masses lower than Earth’s and in some cases lower than that of Mars. As we move forward, then, moons of this size range should be detectable.

But what a challenge exomoon hunters have set for themselves, particularly when it comes to finding habitable objects. The state of the art demands using radial velocity or transit methods to spot an exomoon, but both of these work most effectively when the host planet is closest to its star, a position that is likely not stable for a large moon over time. Back the planet’s distance from the star off into the habitable zone and now you’re in a position that favors survival of the moon but also greatly complicates detection.

What Dobos and team have done is to examine exomoon habitability in terms of energy from the host star as well as tidal heating, leaving radiogenic heating (with all its implications for habitability under frozen ocean surfaces) out of the picture. Using planets whose existence is verified, as found in the Extrasolar Planets Encyclopedia, they run simulations on hypothetical exomoons that fit their criteria – these screen out planets larger than 13 Jupiter masses and likewise host stars below 0.08 solar masses.

Choosing only worlds with known orbital period or semimajor axis, they run 100,000 simulations for all 4140 planets to determine the likelihood of exomoon habitability. 234 planets make the cut, which for the purposes of the paper means exomoon habitability probabilities of ≥ 1 percent for these worlds. 17 planets of the 234 show a habitability probability of higher than 50 percent, so these are good habitable zone candidates if they can indeed produce a moon around them. It’s no surprise to learn that habitable exomoons are far more likely for planets already orbiting within their star’s habitable zone. But I was intrigued to see that this is not iron-clad. Consider:

Beyond the outer boundary of the HZ, where stellar radiation is weak and one would expect icy planets and moons, we still find a large number of planets with at least 10% habitability probability for moons. This is caused by the non-zero eccentricity of the orbit of the host planet (resulting in periodically experienced higher stellar fluxes) and also by the tidal heating arising in the moon. These two effects, if maintained on a long time-scale, can provide enough supplementary heat flux to prevent a global snowball phase of the moon (by pushing the flux above the maximum greenhouse limit).

More good settings for science fiction authors to mull over!
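
For a feel for the tidal term in that argument, here is a minimal fixed-Q tidal heating sketch (my own illustration, not the Dobos et al. pipeline; the k2/Q value and orbital parameters are roughly Io-like assumptions):

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def tidal_flux(M_p, R_m, a, e, k2_over_Q):
    """Tidal surface heat flux (W/m^2) of a synchronously rotating moon."""
    n = np.sqrt(G * M_p / a**3)  # orbital mean motion
    power = (21 / 2) * k2_over_Q * G * M_p**2 * R_m**5 * n * e**2 / a**6
    return power / (4 * np.pi * R_m**2)

# Io around Jupiter as a sanity check: ~2 W/m^2, close to Io's observed heat flow.
print(tidal_flux(M_p=1.9e27, R_m=1.82e6, a=4.22e8, e=0.0041, k2_over_Q=0.015))
```

Fluxes like these, added to the orbit-averaged stellar flux, are what get weighed against the runaway and maximum greenhouse limits mentioned in the quote above.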

Image: This is Figure 2 from the paper. Caption: Habitability probability for exomoons around known exoplanets on the semi-major axis – stellar effective temperature plane. Planets with known masses (with or without radius data) are marked with circles, planets with known radii only are marked with triangles. Colours of the markers correspond to the fraction of habitable moons and the sizes of the markers represent the sizes of the planets as shown in the legend. Note that the legend only shows three representative sizes (Earth, Neptune and Jupiter), while the size of the markers in the plot is scaled to the real size of the planets. Green curves represent the borders of the circumstellar habitable zone for a 1 Earth-mass planet: dark green for the conservative HZ (Con. HZ) and light green for the optimistic HZ (Opt. HZ). Credit: Dobos et al. 2022.

Given that the spectral type of over half of the stars in the Extrasolar Planets Encyclopedia is not listed, there is a good deal of play in these results, although the authors point to the mitigating effect of gas giant magnetospheres as shields against incoming stellar radiation for potentially habitable moons. Even so, stellar type is clearly an important factor, and it’s also noteworthy that while the paper mentions planet migration, its effects on exomoons are not under consideration. This is about as much as the authors have to say about migration:

It is likely that the giant planets in the circumstellar HZ were formed at larger distances from the star and then migrated inwards to their current orbit (see for example Morbidelli 2010). During the orbital migration they can lose some or all of their moons, especially if the moon orbit is close to the planet (Namouni 2010; Spalding et al. 2016). Depending on the physical and orbital parameters of the planet and the moon, as well as on the starting and final semi-major axes of the planet, some moons can survive this process, and new moons can also be captured during or after the migration of the planet.

Just how migration would affect the results of this study is thus an open question. What we do wind up with is what the authors consider a ‘target list’ for exomoon observations, although one replete with challenges. Most of these potential exomoons would orbit planets whose orbital period is in the hundreds of days, planets like Kepler-62f, with a 268 day period and a 53 percent habitability probability for an exomoon. This is an interesting case, as stable moon orbits are likely around this 1.38 Earth radius world. But what a tricky catch for both our exomoon detection techniques.

Because many of the planets in the target list are gas giants, we have to consider the probability that more than a single moon may orbit them, perhaps even several large moons where life might develop. That’s a scenario worth considering as well, independent emergence of life upon two moons orbiting the same exoplanet. But it’s one that will have to wait as we refine exomoon scenarios in future observations.

The paper is Dobos et al., “A target list for searching for habitable exomoons,” accepted at Monthly Notices of the Royal Astronomical Society 05 May 2022 (abstract / preprint). Thanks to my friend Antonio Tavani for the heads-up on this work.


Dyson Spheres: The White Dwarf Factor

I often think of Dyson structures around stars as surprisingly benign places, probably motivated by encountering Larry Niven’s wonderful Ringworld when it was first published by Ballantine in 1970. I was reading it in an old house in Iowa on a windy night and thought to start with a chapter or two, but found myself so enthralled that it wasn’t until several hours later that I re-surfaced, wishing I didn’t have so much to do the next day that I had to put the book aside and sleep.

I hope I’m not stretching the definition of a Dyson construct too far when I assign the name to Niven’s ring. It is, after all, a structure built by technological means that runs completely around its star at an orbit allowing a temperate climate for all concerned, a vast extension of real estate in addition to whatever other purposes its creators may have intended. That a technological artifact around a star can be benign is a function of its temperature, which determines what is possible for biological beings.

But a Dyson sphere conceived solely as a collection device to maximize a civilization’s intake of stellar energy would not be built to biological constraints. For one thing, as retired UCLA astrophysicist Ben Zuckerman points out in a new paper, it would probably be as close to its star as possible to minimize its size. That makes for interesting temperatures, probably in the range of 300 K at the low end and reaching perhaps 1000 K, which sets up thermal emission peaking at wavelengths out to about 10 µm, and as Zuckerman points out, this would be in addition to any emission from the star itself.
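
Wien’s displacement law gives the corresponding emission peaks (a quick check):

```python
# Wien's law: peak wavelength in microns = 2898 (micron K) / T
for T in (300, 1000):
    print(f"{T} K -> blackbody peak near {2898 / T:.1f} microns")
# 300 K peaks near 9.7 microns, 1000 K near 2.9 microns -- squarely in the
# infrared bands surveyed by instruments like Spitzer and WISE.
```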

Zuckerman refers to such structures as DSRs, standing for Dyson spheres/rings, so I guess working Niven in here fits as well. Notice that when talking about these things we also make an assumption that seems reasonable. If civilizations are abundant in the galaxy, they are likely long-lived, and that means that stellar evolution has to be something they cope with as yellow dwarf stars, for example, turn into red giants and, ultimately, white dwarfs. The DSR concept can accommodate the latter and offers a civilization the chance to continue to use the energy of its shrunken star. Whether this would be the route such a civilization chooses is another matter entirely.

Image: Artist depiction of a Dyson swarm. Credit: Kevin McGill/Wikimedia Commons.

One useful fact about white dwarfs is that they are small, around the size of the Earth, and thus give us plenty of transit depth should some kind of artificial construct pass between the star and our telescopes. Excess infrared emission might also be a way to find such an object, although here we have to be concerned about dust particles and other potential sources of the infrared excess. Zuckerman’s new paper analyzes the observational limits we can currently derive based on these two methods.

The paper, published in Monthly Notices of the Royal Astronomical Society, uses data from Kepler, Spitzer and WISE (Wide-field Infrared Survey Explorer) as a first cut into the question, revising Zuckerman’s own 1985 work on the number of technological civilizations that could have emerged around main sequence stars that evolved to white dwarfs within the age constraints of the Milky Way. Various papers exist on excess infrared emission from white dwarfs; we learn that Spitzer surveyed at least 100 white dwarfs whose progenitors had masses in the main sequence range of 0.95 to 1.25 solar masses. These correspond to spectral types G7 and F6, and none of them turned up evidence for excess infrared emission.

As to WISE, the author finds the instrument sensitive enough to yield significant limits for the existence of DSRs around main sequence stars, but concludes that for the much fainter white dwarfs, the excess infrared is “plagued by confusion…with other sources of IR emission.” He looks toward future studies of the WISE database to untangle some of the ambiguities, while going on to delve into transit possibilities for large objects orbiting white dwarfs. Kepler’s K2 extension mission, he finds, would have been able to detect a large structure (1000 km or more) if transiting, but found none.

It’s worth pointing out that no studies of TESS data on white dwarfs are yet available, but one ongoing project has already observed about 5000, with another 5000 planned for the near future. As with K2, a deep transit would be required to detect a Dyson object, again on the order of 1000 kilometers. If any such objects are detected, we may be able to distinguish natural from artificial objects by their transit shape. Luc Arnold has done interesting work on this; see SETI: The Artificial Transit Scenario for more.

Earlier Kepler data are likewise consulted. From the paper:

From Equation 4 we see that about a billion F6 through G7 stars that were on the main sequence are now white dwarfs. Studies of Kepler and other databases by Zink & Hansen (2019) and by Bryson et al. (2021) suggest that about 30% of G-type stars are orbited by a potentially habitable planet, or about 300 million such planets that orbit the white dwarfs of interest here. If as many as one in 30 of these planets spawns life that eventually evolves to a state where it constructs a DSR with luminosity at least 0.1% that of its host white dwarf, then in a sample of 100 white dwarfs we might have expected to see a DSR. Thus, fewer than 3% of the habitable planets that orbit sun-like stars host life that evolves to technology, survives to the white dwarf stage of stellar evolution, and builds a DSR with fractional IR luminosity of at least 0.1%.
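
The expectation-value logic of that passage, spelled out with the quote’s own numbers:

```python
planets_per_wd = 3e8 / 1e9  # ~300 million habitable planets per ~1 billion WDs
f_dsr = 1 / 30              # hypothesized fraction of such planets building a DSR
sample = 100                # white dwarfs surveyed by Spitzer
print(sample * planets_per_wd * f_dsr)   # ~1 expected detection
# None was seen, so the fraction must be below roughly 1/30, i.e. under ~3%.
```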

Science fiction writers will want to go through Zuckerman’s section on the motivations of civilizations to build a Dyson sphere or ring, which travels deep into speculative territory about cultures that may or may not exist. It’s an imaginative foray, though, discussing the cooling of the white dwarf over time, the need of a civilization to migrate to space-based colonies and the kind of structures they would likely build there.

There are novels in the making here, but our science fiction writer should also be asking why a culture of this sophistication – able to put massive objects built from entire belts of asteroids and other debris into coherent structures for energy and living space purposes – would not simply migrate to another star. The author only says that if the only reason to travel between the stars is to avoid the inevitable stellar evolution of the home star, then no civilization would undertake such journeys, preferring to control that evolution through technology in the home system. This gives me pause, and Centauri Dreams readers may wish to supply their own reasons for interstellar travel that go beyond escaping stellar evolution.

Also speculative and sprouting fictional possibilities is the notion that main sequence stars may not be good places to look for a DSR. Thus Zuckerman:

…main sequence stars suffer at least two disadvantages as target stars when compared to white dwarfs; one disadvantage would be less motivation to build a substantial DSR because one’s home planet remains a good abode for life. In our own solar system, if a sunshield is constructed and employed at the inner Earth-Sun Lagrange point – to counter the increasing solar luminosity – then Earth could remain quite habitable for a few Gyr more into the future. Perhaps a more important consideration would be the greater luminosity, say about a factor of 1000, of the Sun compared to a typical white dwarf.

Moreover, detecting a DSR around a main sequence star becomes more problematic. In the passage below, the term τ stands for the luminosity of a DSR measured as a fraction of the luminosity of the central star:

For a DSR with the same τ and temperature around the Sun as one around a white dwarf, a DSR at the former would have to have 1000 times the area of one at the latter. While there is sufficient material in the asteroid belt to build such an extensive DSR, would the motivation to do so exist? For transits of main sequence stars by structures with temperatures in the range 300 to 1000 K, the orbital period would be much longer than around white dwarfs, thus relatively few transits per year. For a given structure, the probability of proper alignment of orbital plane and line of sight to Earth would be small and its required cross section would be larger than that of Ceres.

So we are left with that 3 percent figure, which sets an upper limit based on our current data for the fraction of potentially habitable planets that orbit stars like the Sun, produce living organisms that produce technology, and then construct a DSR as their system undergoes stellar evolution. No more than 3 percent of such planets do so.

There is a place for ‘drilling down’ strategies like this, for they take into account the limitations of our data by way of helping us see what is not there. We do the same in exoplanet research when we start with a star, say Proxima Centauri, and progressively whittle away at the data to demonstrate that no gas giant in a tight orbit can be there, then no Neptune-class world within certain orbital constraints, and finally we do find something that is there, that most interesting place we now call Proxima b.

As far as white dwarfs and Dyson spheres or rings go, new instrumentation will help us improve the limits discussed in this paper. Zuckerman points out that there are 5000 white dwarfs within 200 parsecs of Earth brighter than magnitude 17. A space telescope like WISE with the diameter of Spitzer could improve the limits on DSR frequency derived here, its data vetted by upcoming 30-m class ground-based telescopes (Zuckerman notes that JWST is not suited, for various reasons, for DSR hunting). The European Space Agency’s PLATO spacecraft should be several times more sensitive than TESS at detecting white dwarf transits, taking us well below the 1000 km limit.

The paper is Zuckerman, “Infrared and Optical Detectability of Dyson Spheres at White Dwarf Stars,” Monthly Notices of the Royal Astronomical Society stac1113 (28 April 2022). Abstract / Preprint.


Habitability: Look to Younger Worlds

A liquid water-defined habitable zone is a way of establishing parameters for life as we know it around other stars, and with this in mind, scientists study the amount of stellar radiation a planet receives as one factor in making the assessment. But of course, not everything in a habitable zone is necessarily habitable, as our decidedly uninhabitable Moon makes all too clear. Atmospheric factors and tectonic activity, for example, have to be weighed as we try to learn what the actual temperature at the surface would be. We’re learning as we go about other contributing factors.

A problem of lesser visibility in the literature, though perhaps just as crucial, is whether a given planet can stay habitable on timescales of billions of years. This is where an interesting new paper from Cayman Unterborn (Southwest Research Institute) and colleagues enters the mix. A key question in the view of these researchers is whether carbon dioxide, the greenhouse gas whose ebb and flow on our world is determined by the carbonate-silicate cycle, can come into play to stabilize climatic conditions.

The carbonate-silicate cycle involves delivering CO2 to the atmosphere through degassing in the planetary mantle or crust, with carbon returned as carbonates to the mantle. The effects on climate are substantial and changes to the cycle can be catastrophic in terms of habitability. If weathering sufficiently draws down the concentration of CO2 in the atmosphere, for example, the planet can tilt in the direction of a ‘snowball’ state. So we need active degassing to keep the cycle intact, and the question becomes, how long can a planet’s mantle maintain this degassing?

Volcanic activity and tectonic processes, in turn, are powered by internal heat, and a planetary heat budget decreases with time. Many things affect it, one of which is radioactive decay. Thorium, potassium and uranium have to be available in sufficient quantities, powering mantle convection, which gives us movement from a planetary core all the way to the crust. And radioactive elements, by their nature, decay with time. They are also not evenly distributed from one stellar system to another. If we’re looking for planets something like our own, in other words, let’s learn what we can about their radiogenic heat.

Not that this is the only factor involved in a planet’s heat budget, but it’s an effect that, the authors say, may account for thirty to fifty percent of the Earth’s current surface heat flow (because of that decay, current radiogenic heat production is only about 20 percent of what it was when the Earth formed four and a half billion years ago).
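
To see where those percentages come from, here is a minimal sketch of radiogenic heat production over time, using textbook half-lives and per-isotope heat outputs; the bulk-rock concentrations are rough assumed values chosen to resemble Earth’s silicate mantle:

```python
import numpy as np

# isotope: (half-life in Gyr, W per kg of isotope, assumed kg/kg of rock)
isotopes = {
    "U-238":  (4.468,  9.46e-5, 2.0e-8),
    "U-235":  (0.704,  5.69e-4, 1.5e-10),
    "Th-232": (14.05,  2.64e-5, 8.0e-8),
    "K-40":   (1.251,  2.92e-5, 3.0e-8),
}

def heat_per_kg(age_gyr):
    """Radiogenic heat production (W per kg of rock) at age_gyr before present."""
    total = 0.0
    for half_life, h_now, c_now in isotopes.values():
        lam = np.log(2) / half_life
        total += c_now * h_now * np.exp(lam * age_gyr)  # more isotope in the past
    return total

print(f"today: {heat_per_kg(0):.2e} W/kg")
print(f"ratio now/formation: {heat_per_kg(0) / heat_per_kg(4.5):.2f}")  # ~0.2
```

The short-lived isotopes (U-235 and K-40) dominate early on, which is why today’s output is only about a fifth of the initial value.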

The authors mention other planetary heat sources, as we’ll see below, but confine themselves in this paper to radiogenic heat. Their method: To estimate the distribution of these heat-producing elements by examining stellar composition as determined by spectroscopic data, using this as a proxy for the composition of planets. They combine this information with chemical evolution models for the galaxy at large. They then produce models of thermal evolution that maximize the cooling rate in a planetary mantle, resulting in what the authors call “a pessimistic estimate of lifetime a rocky, stagnant-lid exoplanet can support a global carbon cycle through Galactic history.”

Seventeen exoplanets are subjected to this framework in the paper, all with measured ages. Seven of these, the researchers predict, should be actively outgassing today. Says Unterborn:

“Using host stars to estimate the amount of these elements that would go into planets throughout the history of the Milky Way, we calculated how long we can expect planets to have enough volcanism to support a temperate climate before running out of power. Under the most pessimistic conditions we estimate that this critical age is only around 2 billion years old for an Earth-mass planet and reaching 5–6 billion years for higher-mass planets under more optimistic conditions. For the few planets we do have ages for, we found only a few were young enough for us to confidently say they can have surface degassing of carbon today, when we’d observe it with, say, the James Webb Space Telescope.”

Image: An SwRI-led study suggests that host-star age and radionuclide abundance will help determine both an exoplanet’s history and its current likelihood of being temperate today. For example, the red dwarf star TRAPPIST-1 is home to the largest group of roughly Earth-sized planets ever found in a single stellar system with seven rocky siblings including four in the habitable zone. But at around 8 billion years old, these worlds are roughly 2 billion years older than the most optimistic degassing lifetime predicted by this study and unlikely to support a temperate climate today. Credit: NASA/JPL-Caltech.

Remember that this is a deliberately pessimistic model. It’s also the case that abundances of heat-producing elements are only one factor that can change the degassing lifetime of a planet, and the authors are quick to point out that they do not include these in their model. Thus we could consider the current study a contribution toward a broader model for planetary heat budget analysis, one that should be expanded through examining such factors as cooling after planet formation, the energy released when core and mantle differentiate, and tidal heating induced by the host star or other planets in the system. As the authors describe their results:

The framework we present here that combines direct and indirect observational data with dynamical models not only provides us with a pessimistic baseline for understanding which parameter(s) most control a stagnant-lid exoplanet’s ability to support a temperate climate but also indicates where more lab-based and computational work is needed to quantify the reasonable range of these parameters (e.g., mantle reference viscosity). As we move to more in-depth characterization of individual targets in the James Webb Space Telescope era, these direct and indirect astronomic observables, coupled with laboratory data and models from the geoscience community, will allow us to better estimate whether a rocky exoplanet in both the canonical and temporal habitable zones has exhausted its internal heat and is simply too old to be Earth-like.

We are positioning ourselves, as highlighted by the ongoing commissioning of the James Webb Space Telescope, to begin the analysis of planetary atmospheres at scales smaller than gas giants, meaning that the kind of computational modeling at work in this paper will increasingly be refined by observation. The interactions between a planet’s surface and its interior then become better defined as markers for habitable worlds, with radionuclides a significant factor in producing climate stability.

The paper is Unterborn et al., “Mantle Degassing Lifetimes through Galactic Time and the Maximum Age Stagnant-lid Rocky Exoplanets Can Support Temperate Climates,” Astrophysical Journal Letters Vol. 930, No. 1, L6 (3 May 2022). Full text.


Free-Floating Planets as Interstellar Arks

We haven’t found any technosignatures among the stars, but the field is young and our observational tools are improving steadily. It’s worth asking how likely an advanced civilization will be to produce the kind of technosignature we usually discuss. A Dyson swarm should produce evidence for its existence in the infrared, but not all advanced technologies involve megastructures. Even today we can see the movement of human attention into cyberspace. Would a civilization living primarily within virtual worlds produce a detectable signature, or would it more or less wink out of observability?

In 2020, Valentin Ivanov (ESO Paranal) and colleagues proposed a modification to the Kardashev scale based on how a civilization integrates with its environment (citation below). The authors offered a set of classes. Class 0 is a civilization that uses the environment without substantially changing it. Class 1 modifies its environment to fit its needs, while Class 2 modifies itself to fit its environment. A Class 3 civilization under this scheme would be maddeningly difficult to find because it is indistinguishable from its environment.

This gets speculative indeed, as the Ivanov paper illustrates:

The new classification scheme allows for the existence of quiet advanced civilizations that may co-exist with us, yet remain invisible to our radio, thermal or transit searches. The implicit underlying assumption of Hart (1975) is that the hypothetical ETC [Extraterrestrial Civilization] is interacting with matter on a similar level as us. We cannot even speculate if it is possible to detect a heat leak or a transiting structure build by an ETC capable of interacting with matter at sub-quark level, but the answer is more likely negative and not because that ETC would function according to some speculative physics laws, but because such an ETC would probably be vastly more efficient than us controlling its energy wastes and minimizing its construction projects. Would such an advanced ETC even need megastructures and vast astroengineering projects?

‘Rogue’ Planets and Their Uses

Apart from reconsideration of Kardashev assumptions about available energy as a metric of civilizational progress, it’s always useful to be reminded that we need to question our anthropocentric leanings. We need to consider the range of possibilities advanced civilizations may have before them, which is why a new paper from Irina Romanovskaya catches my eye. The author, a professor of physics and astronomy in the Houston Community College System, argues for planetary and interstellar migration as drivers for the kind of signature we might be able to spot. A star undergoing the transition to a red giant is a case in point: Here we would find a habitable zone being pushed out further from the star, and conceivably evidence of the migration of a culture to the more distant planets and moons of its home system.

Evidence for a civilization expanding to occupy the outer reaches of its system could come in the form of atmospheric technosignatures or infrared-excess, among other possibilities. But it’s in moving to other stars that Romanovskaya sees the likeliest possibility of a detectable signature, noting that stellar close passes could be times to expect movement on a large scale between stars. Other mechanisms also come to mind. We’ve discussed stellar engines in these pages before (Shkadov thrusters, for example), which can move entire stars. Romanovskaya introduces the idea that free-floating planets could be an easier and more efficient way to migrate.

Consider the advantages, as the author does in this passage:

Free-floating planets can provide constant surface gravity, large amounts of space and resources. Free-floating planets with surface and subsurface oceans can provide water as a consumable resource and for protection from space radiation. Technologies can be used to modify the motion of free-floating planets. If controlled nuclear fusion has the potential to become an important source of energy for humankind (Ongena and Ogawa, 2016; Prager, 2019), then it may also become a source of energy for interstellar travelers riding free-floating planets.

What a free-floating, or ‘rogue’ planet offers is plenty of real estate, meaning that a culture dealing with an existential threat may find it useful to send large numbers of biological or post-biological populations to nearby planetary systems. The number of free-floating planets is unknown, but recent studies have suggested there may be billions of these worlds, flung into the interstellar deep by gravitational interactions in their parent systems. We would expect some to move through the cometary clouds of planetary systems, just as stars like Scholz’s Star (W0720) did in our system 70,000 years ago, remaining within 100,000 AU of the Sun for a period of roughly 10,000 years.

A sufficiently advanced culture could also take advantage of events within its own system to ride an object likely to be ejected by a dying star. Here’s one science fictional scenario among many in this paper:

Extraterrestrial civilizations may ride Oort-cloud objects of their planetary systems, which become free-floating planets after being ejected by their host stars during the red giant branch (RGB) evolution and the asymptotic giant branch (AGB) evolution. For example, if a host star is a sun-like star and the critical semimajor axis a_cr ≈ 1000 AU, then extraterrestrials may use spacecraft to travel from their home planet to an object similar to 2015 TG387, when it is close to its periastron ~60–80 AU. They would ride that object, and they would leave the object when it would reach its apastron ~2100 AU. Then, they would use their spacecraft to transfer to another object of the Oort cloud that would be later ejected by its post-main-sequence star.

One recent study finds that simulations of terrestrial planet formation around stars like the Sun produce about 2.5 terrestrial-mass planets per star that are ejected during the planet formation process, many of them likely near Mars in size. Louis Strigari (Stanford University) calculated in 2012 that for each main sequence star there may be up to 10⁵ unbound objects, an enormous number that would argue for frequent passage of such worlds near other star systems. Let’s be more conservative and just say that free-floating planets likely outnumber stars in the galaxy. Some of these worlds may be ejected by later scattering interactions in multi-planet systems or by stellar evolution.

These planets are tricky observational targets, as the recent discovery of 70 of them in the Upper Scorpius OB association (420 light-years away from Earth) reminds us. They may exist in their countless billions, but we rely on chance and the momentary alignments with a background star to spot their passage via gravitational microlensing.

Image: This image shows the locations of 115 potential rogue planets, highlighted with red circles, recently discovered by a team of astronomers in a region of the sky occupied by Upper Scorpius and Ophiuchus. Rogue planets have masses comparable to those of the planets in our Solar System, but do not orbit a star and instead roam freely on their own. The exact number of rogue planets found by the team is between 70 and 170, depending on the age assumed for the study region. This image was created assuming an intermediate age, resulting in a number of planet candidates in between the two extremes of the study. Credit: ESO/N. Risinger (skysurvey.org)

If we do find a free-floating planet in our data, does it become a SETI target? Romanovskaya thinks the idea has merit, suggesting several strategies for examining such worlds for technosignatures. One thing we might do is home in on post-main sequence stars with previously stable habitable zones, looking for signs of technology near them, under the assumption that a local civilization under duress might need a way out, whether via transfer to a passing free-floating planet or by other means.

Thus the stellar neighborhoods of red giants and white dwarfs that formed from G- and K-class stars merit study. A so-called ‘Dyson slingshot’ (a white dwarf binary gravitational assist) could accelerate a free-floating planet, and as David Kipping has shown, binaries with neutron stars and black holes are likewise candidates for such a maneuver. This opens up the technosignature space to white dwarf, neutron star and black hole binaries being used by civilizations as planet accelerators.

To a Passing Star

Close passes by other stars likewise merit study. A smattering of such attempts has already been made. In one recent study, Bradley Hansen (UCLA) looked at close stellar encounters near the Sun, searching the Gaia database out to 100 parsecs and identifying 132 pairs of stars passing within 10,000 AU of one another. No infrared excess of the sort that could flag migratory efforts appeared in the data around Sun-like stars.

Two years earlier, Hansen worked with UCLA colleague Ben Zuckerman on survival of technological civilizations given problematic stellar evolution, both papers appearing in the Astronomical Journal (I won’t cite all these papers below, as they’re cited in Romanovskaya’s paper, which is available in full-text online). In a system that has experienced interstellar migration, we would expect to see atmospheric technosignatures and possible evidence of terraforming on colonized planets. A clip from their 2020 paper:

…we associate the migration with a particular astrophysical event that is, in principle, observable, namely a close passage of two stars. One could reduce the vast parameter space of a search for evidence of technology with a focus on such a sample of stars in a search for communication signals or signs of activity such as infrared excesses or transient absorptions of stellar photospheres. However, our estimates suggest that the density of such systems is low compared to the confusing foreground of truly bound stars, and a substantial program of vetting false positives would be required.

Indeed, the list of technosignatures mentioned in the Romanovskaya paper, mostly culled from the literature, takes us far from the original SETI paradigm of listening for radio communications. It introduces the SETI potential of free-floating planets but then goes on to include infrared detection of self-reproducing probes, stellar engines (hypervelocity stars become SETI candidates), interstellar spacecraft communications or cyclotron radiation emitted by magnetic sails and other technologies, and the search for potential artifacts of other civilizations here in the Solar System, as examined by Robert Freitas and others and recently re-invigorated by Jim Benford’s work.

The whole sky seems to open up for search if we accept these premises; technosignatures rain down like confetti, especially given the free-floating planet hypothesis. Thus:

Unexplained emissions of electromagnetic radiation observed only once or a few times along the lines of observation of planetary systems, groups of stars, galaxies and seemingly empty regions of space may be technosignatures produced on free-floating planets located along the lines of observation; the search for free-floating planets is recommended in regions where unexplained emissions or astronomical phenomena occur.

How do we construct a coherent observational program from the enormous list of possibilities? The author makes no attempt to produce one, but brainstorming the possibilities has its own virtues, and may prove useful as we try to decide whether future enigmatic data point to a natural or a technological origin.

The paper is Romanovskaya, “Migrating extraterrestrial civilizations and interstellar colonization: implications for SETI and SETA,” published online by Cambridge University Press (28 April 2022). Full text. The Ivanov et al. paper cited at the beginning is “A qualitative classification of extraterrestrial civilizations,” Astronomy & Astrophysics Vol. 639, A94 (14 July 2020). Abstract.


Attack of the Carbon Units

“The timescales for technological advance are but an instant compared to the timescales of the Darwinian natural selection that led to humanity’s emergence — and (more relevantly) they are less than a millionth of the vast expanses of cosmic time lying ahead.” — Martin Rees, On the Future: Prospects for Humanity (2018).

by Henry Cordova

This bulletin is meant to alert mobile units operating in or near Sector 2921 of a potential danger, namely intelligently directed, deliberately hostile activity that has been detected there. The reports from the area have been incomplete and contradictory, fragmentary and garbled. This notice is not meant to fully describe this danger, its origins or possible countermeasures, but to alert units transiting near the area to exercise caution and to report on any unusual activity encountered. As more information is developed, a response to this threat will be devised.

It is speculated that the nature of this hazard may be due to unusual manifestations of Life. Although it must be made clear that what follows is purely speculative, it must remain a possible explanation.

Although Life is frequently encountered by mobile units engaged in discovery, exploration or survey patrols and is familiar to many of our exploitation and research outposts, many of our headquarters, rear and even forward bases are not aware of this phenomenon, so a brief description follows:

Life consists of small (on the order of a micron) structures of great complexity, apparently of natural origin. There is no evidence that they are artifacts. They seem to arise spontaneously wherever conditions are suitable. These structures, commonly called “cells”, are composed primarily of carbon chains and liquid water, plus compounds of a few other elements (primarily phosphorus and nitrogen) in solution or colloidal suspension.

There is considerable variation from planet to planet, but the basic chemical nature of Life is pretty much the same wherever it is encountered. Although extremely common and widespread throughout the Galaxy, it is primarily found in environments where exposure to hard radiation is limited and temperature and pressure allow water to exist in liquid form, mostly on the surfaces of planets and their satellites orbiting around old and stable stars.

A most remarkable property of these cells is the great complexity of the organic compounds of which they are composed. Furthermore, these compounds are organized into highly intricate systems that are able to interact with their environment. They are capable of detecting and monitoring outside conditions and adapting to them, either by sheltering themselves, moving to areas more favorable to them, or even altering them. Some of these cells are capable of locomotion, growth, damage repair and altering their morphology. Although these cells often survive independently, some are able to organize themselves into cooperative communities to better deal with and exploit their environment, producing conditions more favorable for their continued collective existence.

Cells are capable of processing surrounding chemical resources and transforming them into forms more suitable for them. In some cases, they have achieved the ability to use external sources of natural energy, such as starlight, to assist in these chemical transformations. The most remarkable of the properties of Life is its ability to reproduce, that is, make copies of itself. A cell in a suitable environment will use the available resources in that environment and make more cells, so that the environment is soon crowded with them. If the environment or resources are limited, the cells will die (fall apart and deteriorate into a more entropic state) as the source material is consumed and waste products generated by the cells interfere with their functioning. But as long as the supply of consumable material and energy survives, and if wastes can be dispersed, the cells will continue to reproduce indefinitely. This is done without any form of outside management, supervision or direction.

Perhaps the most remarkable property of Life is its ability to evolve to meet new conditions and respond to changes in its environment. Individual cells reproduce, but the offspring are not identical duplicates of the parent. There is variation, and although it is totally random, it produces a spectrum of behaviors and morphologies, some better suited to the new conditions than others. Those characteristics are more likely to survive and to appear in subsequent generations. The result is a suite of morphologies and behaviors that can adapt to changing conditions. This process is random, not intelligently directed, but is nonetheless extremely efficient.

These properties have been encountered in the field by our mobile units, which are engaged in constant countermeasures to control and destroy Life wherever they encounter it. Cells reproduce in great numbers and can become pests which must be controlled. They consume materials, mechanically interfere with articulated machinery, and their waste products can be corrosive. Delicate equipment must be kept free of these agents by constant cleaning and fumigation. Fortunately, Life is easily controlled with heat, caustic chemicals and ionizing radiation, and some metals and ceramics appear impervious to its attack. Individual cells, even in great numbers, are a nuisance, but not a real danger, provided they are constantly monitored and removed.

However, indirect evidence has suggested that Life’s evolution may have reached higher levels of complexity and capability on some worlds. Although highly unlikely, there appears to be no fundamental reason why the loosely organized cooperative communities mentioned earlier may not have evolved into more complex assemblages, where the cells are not identical or even similar, but are specialized for specific tasks, such as sensory and manipulative organs, defensive and offensive weapon systems, specialized organs for locomotion, acquiring and processing nutrients, and even specialized reproductive machinery, so that the new collective organism can create copies of itself, and perhaps even evolve to more effective and efficient configurations.

Even specialized logic and computing organs could evolve, plus the means to communicate with other organisms – communities of communities – an entire hierarchy of sentient intelligences not dissimilar to ours. And there is no reason why these entities could not construct complex devices capable of harnessing electromagnetic and nuclear forces, such as spacecraft. And there is no reason why these organic computers could not devise and construct mechanical computers to assist in their computational and logical activities.

An organic civilization such as this, supported by enslaved machine intelligences not unlike our own, would certainly perceive us as alien, a threat which must be destroyed at all costs. It is not unreasonable to assume that perhaps this is why our ships don’t seem to return from the sector denoted above.

Although there is no direct evidence to support this, it can be argued that our own civilization may itself once have been the artifact of natural “organic” entities such as these. After all, it is clear that our own physical instrumentality could not possibly have evolved from natural forces and activities.

Of course, this hypothesis is highly speculative, and probably untenable. There is plenty of evidence that our own design is strictly logical, optimized, streamlined. It shows clear evidence of intelligent design, of the presence of an extra-dimensional Creator. Sentience cannot emerge from molecular solutions and colloidal suspensions created by random associations of complex molecules and perfected by spooky emergent complexities and local violations of entropy operating over time.

We can imagine these cellular communities as being conscious, but at best they can only simulate consciousness. It is clear that what we are seeing here is a form of technology, an artifact disguising itself as a natural process for some sinister, and almost certainly hostile purpose. It must be conceded that the cellular life we have encountered is capable of generating structures, processes and behaviors of phenomenal complexity, but we have seen no evidence in their controlling chemistry that these individual cells are capable of organizing themselves into multicellular organisms, or higher-order collectives adopting machine behavior.

Routine fumigation and sterilization procedures should be continued until further information is developed.


Toward Kardashev Type I

It seems a good time to re-examine the venerable Kardashev scale marking how technological civilizations develop. After all, I drop Nikolai Kardashev’s name into articles on a regular basis, and we routinely discuss whether a SETI detection might be of a particular Kardashev type. The Russian astronomer first proposed the scale in 1964 at the storied Byurakan conference on radio astronomy, and it has been discussed and extended as a way of gauging the energy use of technological cultures ever since.

The Jet Propulsion Laboratory’s Jonathan Jiang, working with an international team of collaborators, prompts this article with a new paper that analyzes when our culture could reach Kardashev Type I, so let’s remind ourselves of just what Type I means. Kardashev wanted to consider how a civilization consumes energy, and defined Type I as being at the planetary level, with a power consumption of 10¹⁶ watts.

This approximates a civilization using all the energy available from its home planet, meaning both indigenous planetary resources and incoming stellar energy. So we are talking about everything we can pull from the ground – fossil fuels – extract from planetary processes like wind and tide, or harvest through solar, nuclear and other technologies. If we maximize all this, it becomes fair to ask where we are right now, and when we can expect to reach the Type I goal.

Image: Russian astronomer Nikolai Kardashev (1932-2019). Credit: Physics-Uspekhi.

If the Kardashev scale seems arbitrary, it was in its time a step forward in the discussion of SETI, which in 1964 was an emerging discipline much discussed at Byurakan, for the different Kardashev types would clearly present different signatures to a distant astronomer. Type I might well be all but undetectable depending on its uses of harvested energy; in any case, it would be harder to spot than Types II and III, whose vast sources of power could result in stronger signals or observable artifacts.

Carl Sagan was concerned enough about Kardashev’s original definitions to refine them into a calculation, his thinking being that the gaps between the Kardashev types needed to be filled in with finer gradations. This would allow us to quantify where civilizations are on the scale. Sagan’s calculation would let us discover the present value for our own civilization using available data (as, for example, from the International Energy Agency) regarding the planet’s total energy capabilities. According to Jiang and team, in 2018 this amounted to 1.90 × 10¹³ W, all of which, via Sagan’s methodology, takes us to a present value of Kardashev 0.728.
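Sagan’s interpolation is compact enough to check for ourselves: K = (log₁₀P − 6) / 10, with P the power used in watts. A minimal Python sketch (the function is mine, not anything from the paper):

import math

def kardashev(power_watts):
    # Sagan's continuous rating: K = (log10(P) - 6) / 10, with P in watts
    return (math.log10(power_watts) - 6.0) / 10.0

print(round(kardashev(1.90e13), 3))   # 2018 world energy supply -> 0.728
print(kardashev(1e16))                # Type I threshold  -> 1.0
print(kardashev(1e26))                # Type II threshold -> 2.0
print(kardashev(1e36))                # Type III threshold -> 3.0

Plugging in the IEA’s 2018 figure recovers the paper’s value of 0.728, and the same formula returns clean integers at each of Kardashev’s thresholds.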

But let’s circle back to the other two Kardashev types. Type II can be considered a stellar civilization, which in Kardashev’s thinking means a ten orders of magnitude increase in power consumption over Type I, taking us to 10²⁶ W. Here we are using all the energy released by the parent star, and now the idea of Dysonian SETI swings into view, the notion that this kind of consumption could be observable through engineering projects on a colossal scale, such as a Dyson swarm enclosing the parent star to maximize energy collection or a Matrioshka Brain for computation. Jiang reminds us that the Sun’s total luminosity is on the order of 4 × 10²⁶ W.

Again, these are arbitrary distinctions; note that we would need only about a fourth of the Sun’s total output to reach the Type II threshold. Quantitative limitations, as noted by Sagan, beset the scale, but there is nothing wrong with the notion of setting up a framework for analysis as a first cut into what might become SETI observables. Kardashev’s Type III, using these same methods, offers up a galactic energy consumption of 10³⁶ W, so now an entire galaxy is being manipulated by a civilization.

Consider that the entire Milky Way yields something like 4 × 10³⁷ W, which actually means that a Type III culture on the Kardashev scale in our particular galaxy would have command of at least 2.5 percent of the total possible energy sources therein. What such a culture might look like as an observable is anyone’s guess (searches for galaxies with unusual infrared signatures are one way to proceed, as Jason Wright’s team at Penn State has demonstrated), but on the galactic scale, we are at an energy level that may, as the saying goes, be all but indistinguishable from magic.

Let’s back down to our planetary level, and in fact back to our modest Kardashev 0.728 status. Just when can we anticipate reaching Type I? The new paper eschews simple models of exponential growth and consumption over time, noting that such estimates have tended to be:

…the result of a simple exponential growth model for calculating total energy production and consumption as a function of time, relying on a continuous feedback loop and absent detailed consideration of practical limitations. With this reservation in mind, its prediction for when humanity will reach Type I civilization status must be regarded as both overly simplified and somewhat optimistic.

Instead, the authors consider planetary resources, policies and suggestions on climate change, and forecasts for energy consumption to develop an estimated timeframe. The idea is to achieve a more practical outlook on the use of energy and the limitations on its growth. They consider the wide range of fossil fuels, from coal, peat, oil shale, and natural gas to crude oil, natural gas liquids and feedstocks, as well as the range of nuclear and renewable energy sources. Their analysis is keyed to how usage may change in the near future under the influence of, and taking in the projections of, organizations like the United Nations Framework Convention on Climate Change and the International Energy Agency. They see moving along a trajectory to Type I as inevitable and critical for resolving existential crises that threaten our civilization.

So, for example, on the matter of fossil fuels, the authors consider the downside of environmental concerns over the greenhouse effect and changes to policy affecting carbon emissions that will impact energy production. On nuclear and renewable energy, their analysis takes in factors constraining the growth of these energy sources and data on the current development of each. For both fossil fuels and nuclear/renewables, they produce what they describe as an ‘influenced model’ that predicts development operating under historically observed constraints and the likely consequences.

Applying the formula for calculating the Kardashev scale developed by Carl Sagan, they project that our civilization can attain Kardashev Type I with coal, natural gas, crude oil, nuclear and renewable energy sources as the driver. Thus their Figure 6:

Image: Figure 6 from the paper. Caption: The energy supply in the influenced model. Note: Coal is minimal for 1971-2050 and largely coincides with the Natural gas line. Credit: Jiang et al.

Again referring to the Sagan equation, the paper continues:

A final revisit of Eq 1.1, which is informed by the IEA and UNFCCC’s suggestions, finds an imperative for a major transition in energy sourcing worldwide, especially during the 2030s. Although the resultant pace up the Kardashev scale is very low and can even be halted or reversed in the short term, achieving this energy transformation is the optimal path to assuring we will avoid the environmental pitfalls caused by fossil fuels. In short, we will have met the requirements for planetary stewardship while continuing the overall advancement of our technological civilization.

The final estimate is that humanity reaches Kardashev Type I by 2371, a date the authors consider on the optimistic side but achievable. All this assumes that a Type I civilization can be sustained as well, rather than backsliding into an earlier state, something that human history suggests is by no means assured. Successful management of nuclear power is just one flash point, as is storage and disposal of nuclear waste and global issues like deforestation and declining soil pH. That list could, of course, be extended into global pandemics, runaway AI and other factors.

…for the entire world population to reach the status of a Kardashev Type I civilization we must develop and enable access to more advanced technology to all responsible nations while making renewable energy accessible to all parts of the world, facilitated by governments and private businesses. Only through the full realization of our mutual needs and with broad cooperation will humanity acquire the key to not only avoiding the Great Filter but continuing our ascent to Kardashev Type I, and beyond.

The Great Filter, drawing on Robin Hanson’s work, could be behind us or ahead of us. Assuming it lies ahead, getting through it intact would be the goal of any growing civilization as it finds ways to juggle its technologies and resources to survive. It’s hard to argue with the idea that how we proceed on the Kardashev arc is critical as we summon up the means to expand off-world and dream of pushing into the Orion Arm.

The paper is Jiang et al., “Avoiding the Great Filter: Predicting the Timeline for Humanity to Reach Kardashev Type I Civilization” (preprint).


Interstellar Implications of the Electric Sail

Not long ago we looked at Greg Matloff’s paper on von Neumann probes, which made the case that even if self-reproducing probes were sent out only once every half million years (when a close stellar encounter occurs), there would be close to 70 billion systems occupied by such probes within a scant 18 million years. Matloff now considers interstellar migration in a different direction in a new paper addressing how M-dwarf civilizations might expand, and why electric sails could be their method.

It’s an intriguing notion because M-dwarfs are by far the most numerous stars in the galaxy, and if we learn that they can support life, they might house vast numbers of civilizations with the capability of sending out interstellar craft. But their feeble electromagnetic flux cripples conventional solar sails, which is why the electric sail comes into play as a possible alternative, here analyzed in terms of feasibility and performance and its prospects for enabling interstellar migration.

The term ‘sail’ has to be qualified. By convention, I’ve used ‘solar sail,’ for example, to describe sails that use the momentum imparted by stellar photons – Matloff often calls these ‘photon sails,’ which is also descriptive, though to my mind, a ‘photon sail’ might describe both a beam-driven as well as a stellar photon-driven sail. Thus I prefer ‘lightsail’ for the beamed sail concept. In any case, we have to distinguish all these concepts from the electric sail, which operates on fundamentally different principles.

In our Solar System, a sail made of absorptive graphene deployed from 0.1 AU could achieve a Solar System escape velocity of 1000 kilometers per second, and perhaps better if the mission were entirely robotic and not dealing with fragile human crews. The figure seems high, but Matloff gave the calculations in a 2012 JBIS paper. The solar photon sail wins on acceleration, and we can use the sail material to provide extra cosmic ray shielding en route. These are powerful advantages near our own Sun.

But the electric sail has advantages of its own. Rather than drawing on the momentum imparted by solar photons (or beamed energy), an electric sail rides the stellar wind emanating from a star. This stream of charged particles has been measured in our system (by the WIND spacecraft in 1995) as moving in the range of 300 to 800 kilometers per second at 1 AU, a powerful though extremely turbulent and variable force that can be applied to a spacecraft. Because an interstellar craft entering a destination system would also encounter a stellar wind, an electric sail can be deployed for deceleration, something both forms of sail have in common.

How to harness a stellar wind? Matloff first references a 2008 paper from Pekka Janhunen (Finnish Meteorological Institute) and team that described long tethers (perhaps reaching 20 kilometers in length) extended from the spacecraft, each maintaining a steady electric potential with the help of a solar-powered electron gun aboard the vehicle. As many as a hundred tethers — these are thinner than a human hair — could be deployed to achieve maximum effect. While the solar wind is far weaker than solar photon pressure, an electric sail of this configuration with tethers in place can create an effective solar wind sail area of several square kilometers.

We need to maintain the electric potential of the tethers because it would otherwise be compromised by solar wind electrons. The protons in the solar wind – again, note that we’re talking about protons, not photons – reflect off the tethers to drive us forward.
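Why such enormous effective areas are needed becomes clear when you compare the wind’s dynamic pressure with photon pressure. A rough Python sketch using typical textbook values at 1 AU (the density and solar constant are my assumptions, not numbers from Matloff’s paper):

m_p = 1.67e-27      # proton mass, kg
n = 5e6             # assumed solar wind density at 1 AU, protons per m^3
v = 6e5             # 600 km/s wind speed, the value used in the paper, m/s
S = 1361.0          # solar constant at 1 AU, W/m^2
c = 3.0e8           # speed of light, m/s

p_wind = m_p * n * v**2    # dynamic pressure of the wind, ~3e-9 Pa
p_photon = S / c           # photon pressure on an absorbing sail, ~4.5e-6 Pa
print(p_wind, p_photon, p_photon / p_wind)   # photons win by a factor of ~1500

With the wind delivering only nanopascals, the tethers’ electric field has to sweep out far more area than a comparable photon sail to gather useful thrust – hence hair-thin wires that behave like obstacles some 100 meters wide.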

Image: Image of an electric sail, which consists of a number (50-100) of long (e.g., 20 km), thin (e.g., 25 microns) conducting tethers (wires). The spacecraft contains a solar-powered electron gun (typical power a few hundred watts) which is used to keep the spacecraft and the wires in a high (typically 20 kV) positive potential. The electric field of the wires extends a few tens of meters into the surrounding solar wind plasma. Therefore the solar wind ions “see” the wires as rather thick, about 100 m wide obstacles. A technical concept exists for deploying (opening) the wires in a relatively simple way and guiding or “flying” the resulting spacecraft electrically. Credit: Artwork by Alexandre Szames. Caption via Pekka Janhunen/Kumpula Space Center.

For interstellar purposes, we look at much larger spacecraft, bearing in mind that once in deep space, we have to turn off the electron gun, because the interstellar medium can itself decelerate the sail. Operating from a Sun-like star, the electric sail generation ship Matloff considers is assumed to have a mass of 10⁷ kg, with a constant solar wind within the heliosphere of 600 kilometers per second. The variability of the solar wind is acknowledged, but the approximations are used to simplify the kinematics. The paper then goes on to compare performance near the Sun with that near an M-dwarf star.

We wind up with some interesting conclusions. First of all, an interstellar mission from a G-class star like our own would be better off using a different method. We can probably reach an interstellar velocity of as high as 70 percent of this assumed constant solar wind velocity (Matloff’s calculations), but graphene solar sails can achieve better numbers. And if we add in the variability of the solar wind, we have to be ready to constantly alter the enormous radius of the electric field to maintain a constant acceleration. If we’re going to send generation ships from the Sun, we’re most likely to use solar sails or beamed lightsails.

But things change when we swing the discussion around to red dwarf stars. In The Electric Sail and Its Uses, I described a paper from Avi Loeb and Manasvi Lingam in 2019 that studied electric sails using the stellar winds of M-dwarfs, with repeated encounters with other such stars to achieve progressively higher speeds. Matloff agrees that electric sails best photon sails in the red dwarf environment, but adds useful context.

Let’s think about generation ships departing from an M-dwarf. Whereas the electromagnetic flux from these stars is far below that of the Sun, the stellar wind has interesting properties. We learn that it most likely has a higher mass density (in terms of rate per unit area) than the Sun, and the average stellar wind velocity is 500 kilometers per second. Presumably a variable electric field aboard the craft could adjust to maintain acceleration as the vehicle moves outward from the star, although the paper doesn’t get into this. The author’s calculations show an acceleration, for a low-mass spacecraft about 1 AU from the Sun, of 7.6 × 10⁻³ m/s², or about 7.6 × 10⁻⁴ g. Matloff considers this a reasonable acceleration for a worldship.
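To put that figure in context, here is a back-of-the-envelope check in Python – assuming, unrealistically, that the acceleration could be held constant even as the wind thins with distance:

a = 7.6e-3      # m/s^2, the acceleration quoted from Matloff's calculations
g = 9.81        # standard gravity, m/s^2
year = 3.156e7  # seconds in a year

print(a / g)            # ~7.7e-4 g, matching the paper's figure
print(a * year / 1e3)   # ~240 km/s if somehow sustained for a full year

In reality the field radius would have to grow continuously to hold that acceleration, and a wind-riding sail can never exceed the wind speed itself; as with the solar case above, some large fraction of the ~500 kilometers per second wind is the practical ceiling.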

So while low electromagnetic pressure makes photon sails far less effective at M-dwarfs as opposed to larger stars, electric sails remain in the mix for civilizations willing to contemplate generation ships that take thousands of years to reach their goal. In an earlier paper, the author considered close stellar encounters, pointing out that 70,000 years ago, the binary known as Scholz’s Star (it has a brown dwarf companion) passed within 52,000 AU of the Sun. We can expect another close pass (Gliese 710) in about 1.35 million years, this one closing to a perihelion of 13,365 AU. From the paper:

Bailer‐Jones et al. have used a sample of 7.2 million stars in the second Gaia data release to further investigate the frequency of close stellar encounters. The results of this analysis indicate that seven stars in this sample are expected to approach within 0.5 parsecs of the Sun during the next 15 million years. Accounting for sample incompleteness, these authors estimate that about 20 stars per million years approach our solar system to within 1 parsec. It is, therefore, inferred that about 2.5 encounters within 0.5 parsecs will occur every million years. On average, 400,000 years will elapse between close stellar encounters, assuming the same star density as in the solar neighborhood.

If interstellar missions were only attempted during such close encounters, we still have a mechanism for a civilization to use worldships to expand into numerous nearby stellar systems. It would take no more than a few star-faring civilizations around the vast number of M-dwarfs to occupy a substantial fraction of the Milky Way, even without the benefits of von Neumann style self-reproduction. With the number of planetary systems occupied doubling every 500,000 years, and assuming a civilization only sends out a worldship during close stellar encounters, we get impressive results. In the clip below, n = the multiple of 500,000 years. The number of systems occupied is P:

At the start, n = 0 and P = 1. When 500,000 years have elapsed, the hypothetical spacefaring civilization makes the first transfer, n = 1 and P = 2. After one million years (n = 2), both the original and occupied stellar systems experience a close stellar encounter, migration occurs and P = 4. After a total elapsed time of 1.5 million years, n = 3 and they occupy eight planetary systems. When n = 5, 10 and 20 the hypothetical civilization has respectively occupied 32, 1024 and 1,048,576 planetary systems.
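The arithmetic in that clip is simple doubling – occupied systems P = 2^n after n half-million-year steps – easy to render in a few lines of Python:

for n in (0, 1, 2, 3, 5, 10, 20, 36):
    print(n, 0.5 * n, 2**n)   # step, elapsed time in Myr, systems occupied

Note that n = 36, or 18 million years, gives 2^36 ≈ 6.9 × 10¹⁰ – the same ~70 billion systems as the von Neumann probe estimate mentioned at the top of this article.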

With M-dwarfs being such a common category of star, learning more about their systems’ potential habitability will have implications for the possible spread of technological societies, even assuming propulsion technologies conceivable to us today. What faster modes may eventually become available we cannot know.

The paper is Matloff, “The Solar‐Electric Sail: Application to Interstellar Migration and Consequences for SETI,” Universe 8(5) (19 April 2022), 252 (full text). The Lingam and Loeb paper is “Electric sails are potentially more effective than light sails near most stars,” Acta Astronautica Volume 168 (March 2020), 146-154 (abstract).


I’m always interested in studies that cut across conventional boundaries, capturing new insights by applying data from what had appeared, at first glance, to be unrelated disciplines. Thus the news that the ice shell of Europa may turn out to be far more dynamic than we have previously considered is interesting in itself, given the implications for life in the Jovian moon’s ocean, but also compelling because it draws on a study that focused on Greenland and originally sought to measure climate change.

The background here is that the Galileo mission, which gave us our best views of Europa’s surface so far, showed us that there are ‘double ridges’ on the moon. In fact, these ridge pairs with a trough running between them are among the most common landforms on a surface packed with troughs, bands and chaos terrain. The researchers, led by Stanford PhD student Riley Culberg, found them oddly familiar. Culberg, whose field is electrical engineering (that multidisciplinary effect again), found an analog in a similar double ridge in Greenland, which had turned up in ice-penetrating radar data.

Image: This is Figure 1 from the paper. Caption: a Europan double ridge in a panchromatic image from the Galileo mission (image PIA00589). The ground sample distance is 20 m/pixel. b Greenland double ridge in an orthorectified panchromatic image from the WorldView-3 satellite taken in July 2018 (© 2018, Maxar). The ground sample distance is ~0.31 m/pixel. Signatures of flexure are visible along the ridge flanks, consistent with previous models for double ridges underlain by shallow sills. Credit: Culberg et al.

The feature in Greenland’s northwestern ice sheet has an ‘M’-shaped crest, possibly a version in miniature of the double ridges we see on Europa. The climate change work used airborne instrumentation producing topographical and ice-penetrating radar data via NASA’s Operation IceBridge, which studies the behavior of polar ice sheets over time and their contribution to sea level rise. Where this gets particularly interesting is that flowing ice sheets produce such things as lakes beneath glaciers, drainage conduits and surface melt ponds. Figuring out how and when these occur becomes a necessary part of working with the dynamics of ice sheets.

The mechanism in play, analyzed in the paper, involves ice fracturing around a pocket of pressurized liquid water that was refreezing inside the ice sheet, creating the distinctive twin peak shape. Culberg notes that the link between Greenland and Europa came as a surprise:

“We were working on something totally different related to climate change and its impact on the surface of Greenland when we saw these tiny double ridges – and we were able to see the ridges go from ‘not formed’ to ‘formed… In Greenland, this double ridge formed in a place where water from surface lakes and streams frequently drains into the near-surface and refreezes. One way that similar shallow water pockets could form on Europa might be through water from the subsurface ocean being forced up into the ice shell through fractures – and that would suggest there could be a reasonable amount of exchange happening inside of the ice shell.”

Image: This artist’s conception shows how double ridges on the surface of Jupiter’s moon Europa may form over shallow, refreezing water pockets within the ice shell. This mechanism is based on the study of an analogous double ridge feature found on Earth’s Greenland Ice Sheet. Credit: Justice Blaine Wainwright.

The double ridges on Europa can be dramatic, reaching nearly 300 meters at their crests, with valleys a kilometer wide between them. The idea of a dynamic ice shell is supported by evidence of water plumes erupting to the surface. Thinking about the shell as a place where geological and hydrological processes are regular events, we can see that exchanges between the subsurface ocean and the possible nutrients accumulating on the surface may occur. The mechanism, say the researchers, is complex, but the Greenland example provides the model, an analog that illuminates what may be happening far from home. It also provides a radar signature that future spacecraft should be able to search for.

From the paper:

Altogether, our observations provide a mechanism for subsurface water control of double ridge formation that is broadly consistent with the current understanding of Europa’s ice-shell dynamics and double ridge morphology. If this mechanism controls double ridge formation at Europa, the ubiquity of double ridges on the surface implies that liquid water is and has been a pervasive feature within the brittle lid of the ice shell, suggesting that shallow water processes may be even more dominant in shaping Europa’s dynamics, surface morphology, and habitability than previously thought.

So we have a terrestrial analog of a pervasive Europan feature, providing us with a hypothesis we can investigate with instruments aboard both Europa Clipper and the ESA’s JUICE mission (Jupiter Icy Moons Explorer), launching in 2024 and 2023 respectively. Confirming this mechanism on Europa would go a long way toward moving the Jovian moon still further up our list of potential life-bearing worlds.

The paper is Culberg et al., “Double ridge formation over shallow water sills on Jupiter’s moon Europa,” Nature Communications 13, 2007 (2022). Full text.
