Centauri Dreams
Imagining and Planning Interstellar Exploration
Optimal Strategies for Exploring Nearby Stars
We’ve spoken recently about civilizations expanding throughout the galaxy in a matter of hundreds of thousands of years, a thought that led Frank Tipler to doubt the existence of extraterrestrials, given the lack of evidence of such expansion. But let’s turn the issue around. What would the very beginning of our own interstellar exploration look like, if we reach the point where probes are feasible and economically viable? This is the question Johannes Lebert examines today. Johannes obtained his Master’s degree in Aerospace at the Technische Universität München (TUM) this summer. He likewise did his Bachelor’s in Mechanical Engineering at TUM and was a visiting student in the field of Aerospace Engineering at the Universitat Politècnica de València (UPV), Spain. He has worked at Starburst Aerospace (a global aerospace & defense startup accelerator and strategic advisory company) and AMDC GmbH (a consultancy with focus on defense located in Munich). Today’s essay is based upon his Master’s thesis “Optimal Strategies for Exploring Near-by Stars,” which was supervised by Martin Dziura (Institute of Astronautics, TUM) and Andreas Hein (Initiative for Interstellar Studies).
by Johannes Lebert
1. Introduction
Last year, when everything was shut down and people were advised to stay at home instead of going out or traveling, I ignored those recommendations by dedicating my master thesis to the topic of interstellar travel. More precisely, I tried to derive optimal strategies for exploring near-by stars. As a very early-stage researcher I was really honored when Paul asked me to contribute to Centauri Dreams and want to thank him for this opportunity to share my thoughts on planning interstellar exploration from a strategic perspective.
Figure 1: Me, last year (symbolic image). Credit: hippopx.com.
As you are an experienced and interested reader of Centauri Dreams, I think it is not necessary to make you aware of the challenges and fascination of interstellar travel and exploration. I am sure you’ve already heard a lot about interstellar probe concepts, from gram-scale nanoprobes such as Breakthrough Starshot to huge spaceships like Project Icarus. Probably you are also familiar with suitable propulsion technologies, be it solar sails or fusion-based engines. I guess you could also name at least a handful of promising exploration targets off the cuff, perhaps with a focus on star systems that are known to host exoplanets. But have you ever thought of ways to bring everything together by finding optimal strategies for interstellar exploration? As a concrete example, what could be the advantages of deploying a fleet of small probes vs. launching only a few probes, with respect to the exploration targets? And, more fundamentally, what method can be used to find answers to this question?
In particular the last question has been the main driver for this article: Before I started writing, I wondered what would be the most exciting result I could present to you, and concluded that the methodology as such is the most valuable contribution on the way towards interstellar exploration: Once the idea is understood, you are equipped with all the relevant tools to generate your own results and answer similar questions. That is why I decided to present a summary of my work here, addressing more directly the original idea of Centauri Dreams (“Planning […] Interstellar Exploration”), instead of picking a single result.
Below you’ll find an overview of this article’s structure to give you an impression of what to expect. Of course, there is no time to go into detail for each step, but I hope it’s enough to make you familiar with the basic components and concepts.
Figure 2: Article content and chapters
I’ll start from scratch by defining interstellar exploration as an optimization problem (chapter 2). Then, we’ll set up a model of the solar neighborhood and specify probe and mission parameters (chapter 3), before selecting a suitable optimization algorithm (chapter 4). Finally, we apply the algorithm to our problem and analyze the results (more generally in chapter 5, with implications for planning interstellar exploration in chapter 6).
But let’s start from the real beginning.
2. Defining and Classifying the Problem of Interstellar Exploration
We’ll start by stating our goal: We want to explore stars. Actually, it is star systems, because typically we are more interested in the planets potentially hosted by a star than in the star as such. From a more abstract perspective, we can look at the stars (or star systems) as a set of destinations that can be visited and explored. As we said before, in most cases we are interested in planets orbiting the target star, even more so if they might be habitable. Hence, there are star systems which are more interesting to visit (e. g. those with a high probability of hosting habitable planets) and others that are less attractive. Based on these considerations, we can assign each star system an “earnable profit” or “stellar score” from 0 to 1. The value 0 refers to the most boring star systems (though I am not sure if there are any boring star systems out there, so maybe it’s better to say “least fascinating”) and 1 to the most fascinating ones. The scoring can be adjusted depending on one’s preferences, of course, and extended by additional considerations and requirements. However, to keep it simple, let’s assume for now that each star system provides a score of 1, hence we don’t distinguish between different star systems. Having this in mind, we can draw a sketch of our problem as shown in Figure 3.
Figure 3: Solar system (orange dot) as starting point, possible star systems for exploration (destinations with score si) represented by blue dots
To earn the profit by visiting and exploring those destinations, we can deploy a fleet of space probes, which are launched simultaneously from Earth. However, as there are many stars to be explored and we can only launch a limited number of probes, one needs to decide which stars to include and which ones to skip – otherwise, mission timeframes will explode. This decision will be based on two criteria: Mission return and mission duration. The mission return is simply the sum of the stellar scores of all visited stars. As we assume a stellar score of 1 for each star, the mission return is equal to the number of stars visited by all our probes. The mission duration is the time needed to finish the exploration mission.
In case we deploy several probes, which carry out the exploration mission simultaneously, the mission is assumed to be finished when the last probe reaches the last star on its route – even if other probes have finished their route earlier. Hence, the mission duration is equal to the travel time of the probe with the longest trip. Note that the probes do not need to return to the solar system after finishing their route, as they are assumed to send the data gained during exploration immediately back to Earth.
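To make these two objectives concrete, here is a minimal Python sketch of how a candidate mission could be evaluated. The star positions, the routes and the probe speed of 10% of lightspeed (introduced in chapter 3) are illustrative placeholders, not data or code from the thesis.

```python
import math

PROBE_SPEED = 0.1   # fraction of lightspeed -> distances in ly give times in years

def travel_time(route, positions, start=(0.0, 0.0, 0.0)):
    """Flight time (years) of one probe along a straight-line route of star names."""
    t, current = 0.0, start
    for star in route:
        t += math.dist(current, positions[star]) / PROBE_SPEED
        current = positions[star]
    return t

def evaluate_mission(routes, positions, scores=None):
    """Mission return J1 (sum of stellar scores, here 1 per star) and
    mission duration J2 (travel time of the slowest probe)."""
    visited = [s for route in routes for s in route]
    j1 = len(visited) if scores is None else sum(scores[s] for s in visited)
    j2 = max(travel_time(route, positions) for route in routes)
    return j1, j2

# Toy example with made-up star positions in light years:
positions = {"A": (4.2, 0.0, 0.0), "B": (5.9, 1.0, 0.0), "C": (-7.8, 2.0, 1.0)}
print(evaluate_mission([["A", "B"], ["C"]], positions))   # -> (3, ~81 years)
```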
Based on these considerations we can classify our problem as a bi-objective multi-vehicle open routing problem with profits. Admittedly quite a cumbersome term, but it contains all relevant information:
- Bi-objective: There are two objectives, mission return and mission duration. Note that we want to maximize the return while keeping the duration minimal. Hence, from intuition we can expect that both objectives are competing: The more time, the more stars can be visited.
- Multi-vehicle: Not only one, but several probes are used for simultaneous exploration.
- Open: Probes are free to choose where to end their route and are not forced to return back to Earth after finishing their exploration mission.
- Routing problem with profits: We consider the stars as a set of destinations with each providing a certain score si. From this set, we need to select several subsets, which are arranged as routes and assigned to different probes (see Figure 4).
Figure 4: Problem illustration: Identify subsets of possible destinations si, find the best sequences and assign them to probes
Even though it appears a bit stiff, the classification of our problem is very useful for identifying suitable solution methods: Before, we were talking about the problem of optimizing interstellar exploration, which is quite unknown territory with limited research. Now, thanks to our abstraction, we are facing a so-called Routing Problem, a well-known class of optimization problems with applications across many fields, and therefore exhaustively investigated. As a result, we now have access to a large pool of established algorithms which have already been tested successfully against these kinds of problems, or against closely related problems such as the Traveling Salesman Problem (probably the most popular one) or the Team Orienteering Problem (a subclass of the Routing Problem).
3. Model of the Solar Neighborhood and Assumptions on Probe & Mission Architecture
Obviously, we’ll also need some kind of galactic model of our region of interest, which provides us with the relevant star characteristics and, most importantly, the star positions. There are plenty of star catalogues with different focus and historical background (e.g. Hipparcos, Tycho, RECONS). One of the latest, still ongoing surveys is the Gaia Mission, whose observations are incorporated in the Gaia Archive, which is currently considered to be the most complete and accurate star database.
However, the Gaia Archive – more precisely the Gaia Data Release 2 (DR2), which will be used here* (accessible online [1] together with Gaia-based distance estimations by Bailer-Jones et al. [2]) – provides only raw observation data, which include some spurious results. For instance, it lists more than 50 stars closer than Proxima Centauri, which would be quite a surprise to all the astronomers out there.
*Note that there is now an updated data release (Gaia DR3), which was not yet available at the time of the thesis.
Hence, a filtering is required to obtain a clean data set. The filtering procedure applied here, which consists of several steps, is illustrated in Figure 5 and follows the suggestions from Lindegren et al. [3]. For instance, data entries are eliminated based on parallax errors and uncertainties in BP and RP fluxes. The resulting model (after filtering) includes 10,000 stars and represents a spherical domain with a radius of roughly 110 light years around the solar system.
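As an illustration of what such a filtering step can look like in practice, here is a minimal pandas sketch. The column names follow the Gaia DR2 archive schema, but the thresholds and the distance cut are placeholders of my own choosing, not the exact cuts from the thesis or from Lindegren et al. [3].

```python
import pandas as pd

def filter_gaia_dr2(df: pd.DataFrame, max_dist_ly: float = 110.0) -> pd.DataFrame:
    """Keep nearby stars with reliable astrometry and photometry.

    Thresholds are illustrative placeholders; see Lindegren et al. (2018)
    for the quality cuts actually recommended for Gaia DR2.
    """
    ly_per_pc = 3.2616
    dist_ly = ly_per_pc * 1000.0 / df["parallax"]            # parallax given in mas
    good = (
        (df["parallax"] > 0)
        & (dist_ly <= max_dist_ly)                           # spherical domain around the Sun
        & (df["parallax_over_error"] > 10)                   # precise parallaxes only
        & (df["phot_bp_mean_flux_over_error"] > 10)          # clean BP flux
        & (df["phot_rp_mean_flux_over_error"] > 10)          # clean RP flux
    )
    out = df.loc[good].copy()
    out["distance_ly"] = dist_ly[good]
    return out
```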
Figure 5: Setting up the star model based on Gaia DR2 and filtering (animated figure from [9])
To reduce the complexity of the model, we assume all stars to maintain fixed positions – which is of course not true (see Figure 5, upper right) but can be shown to be a valid simplification for our purposes – and we limit the mission time frames to 7,000 years. 7,000 years? Yes, unfortunately, the enormous stellar distances, which are probably the biggest challenge we encounter when planning interstellar travel, result in very high travel times – even if we are optimistic concerning the travel speed of our probes, which we define as follows.
We’ll use a rather simplistic probe model based on literature suggestions, which has the advantage that the results are valid across a large range of probe concepts. We assume the probes to travel along straight-line trajectories (in line with Fantino & Casotto [4]) at an average velocity of 10% of the speed of light (in line with Bjørk [5]). They are not capable of self-replicating; hence, the probe number remains constant during a mission. Furthermore, the probes are restricted to performing flybys instead of rendezvous, which limits the scientific return of the mission but is still good enough to detect planets (as reported by Crawford [6]). Hence, the considered mission can be interpreted as a reconnaissance or scouting mission, which serves to identify suitable targets for a follow-up mission, which then will include rendezvous and deorbiting for further, more sophisticated exploration.
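With straight-line trajectories at a constant 10% of lightspeed, all the routing algorithm really needs from the probe model is a table of pairwise transfer times. A minimal sketch (the coordinates below are made-up placeholders):

```python
import numpy as np

PROBE_SPEED_C = 0.1   # fraction of the speed of light

def transfer_time_matrix(positions_ly: np.ndarray) -> np.ndarray:
    """Pairwise straight-line travel times in years between all stars.

    positions_ly: (n, 3) array of star coordinates in light years,
    with the solar system at the origin (row 0 by convention here).
    """
    diff = positions_ly[:, None, :] - positions_ly[None, :, :]
    dist_ly = np.linalg.norm(diff, axis=-1)
    return dist_ly / PROBE_SPEED_C        # light years / (fraction of c) = years

# Example: Sun at the origin plus two hypothetical stars
pos = np.array([[0.0, 0.0, 0.0], [4.2, 0.0, 0.0], [0.0, 8.6, 0.0]])
print(transfer_time_matrix(pos).round(1))
```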
Disclaimer: I am well aware of the weaknesses of the probe and mission model, which does not allow for more advanced mission design (e. g. slingshot maneuvers) and assumes a very long-term operability of the probes, just to name two of them. However, to keep the model and results comprehensive, I tried to derive the minimum set of parameters which is required to describe interstellar exploration as an optimization problem. Any extensions of the model, such as a probe failure probability or deorbiting maneuvers (which could increase the scientific return tremendously), are left to further research.
4. Optimization Method
Having modeled the solar neighborhood and defined an admittedly rather simplistic probe and mission model, we finally need to select a suitable algorithm for solving our problem, or, in other words, to suggest “good” exploration missions (good means optimal with respect to both our objectives). In fact, the algorithm has the sole task of assigning each probe the best star sequences (so-called decision variables). But which algorithm could be a good choice?
Optimization or, more generally, operations research is a huge research field which has spawned countless more or less sophisticated solution approaches and algorithms over the years. However, there is no optimization method (not yet) which works perfectly for all problems (“no free lunch theorem”) – which is probably the main reason why there are so many different algorithms out there. To navigate through this jungle, it helps to recall our problem class and focus on the algorithms which are used to solve equal or similar problems. Starting from there, we can further exclude some methods a priori by means of a first analysis of our problem structure: Considering n stars, there are n! possibilities to arrange them into one route, which can be quite a lot (just to give you a number: for n = 50 we obtain 50! ≈ 3 × 10^64 possibilities).
Given that our model contains up to 10,000 stars, we cannot simply try out each possibility and take the best one (the so-called enumeration method). Instead, we need to find another approach that is more suitable for problems with a very large search space, as an operations researcher would say. Maybe you have already heard about (meta-)heuristics, which allow for more time-efficient solving but do not guarantee finding the true optimum. Even if you’ve never heard of them, I am sure that you know at least one representative of a metaheuristic-based solution, as it is sitting in front of your screen right now as you are reading this article… Indeed, each of us is the result of a still ongoing optimization procedure that has been running for billions of years: evolution. Wouldn’t it be cool if we could adopt the mechanisms that brought us here to take the next big step for mankind and find ways to leave the solar system and explore unknown star systems?
Those kinds of algorithms, which try to imitate the process of natural evolution, are referred to as Genetic Algorithms. Maybe you remember the biology classes at school, where you learned about chromosomes, genes and how they are shared between parents and their children. We’ll use the same concept and also the wording here, which is why we need to encode our optimization problem (illustrated in Figure 6): One single chromosome will represent one exploration mission and as such one possible solution for our optimization problem. The genes of the chromosome are equivalent to the probes. And the gene sequences embody the star sequences, which in turn define the travel routes of each probe.
If we are talking about a set of chromosomes, we will use the term “population”, which is why a single chromosome is sometimes referred to as an individual. Furthermore, as the population will evolve over time, we will speak of different generations (just like for us humans).
Figure 6. Genetic encoding of the problem: Chromosomes embody exploration missions; genes represent probes and gene sequences are equivalent to star sequences.
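In code, this encoding is little more than a list of routes per mission. Here is a minimal sketch; the class and function names and the route sizes are illustrative choices of mine, not the thesis implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Chromosome:
    """One candidate exploration mission.

    routes[i] is the gene of probe i: the ordered sequence of star indices
    that probe visits, i.e. its travel route.
    """
    routes: list = field(default_factory=list)

def random_chromosome(n_stars: int, n_probes: int, stars_per_probe: int) -> Chromosome:
    """Build a random individual: disjoint star sequences, one per probe."""
    pool = random.sample(range(n_stars), n_probes * stars_per_probe)
    routes = [pool[i * stars_per_probe:(i + 1) * stars_per_probe]
              for i in range(n_probes)]
    return Chromosome(routes)

# A randomly created initial population of 50 missions, 4 probes each:
population = [random_chromosome(n_stars=10_000, n_probes=4, stars_per_probe=5)
              for _ in range(50)]
```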
The algorithm as such is pretty much straightforward; the basic working principle of the Genetic Algorithm is illustrated below (Figure 7). Starting from a randomly created initial population, we enter an evolution loop, which stops either when a maximum number of generations is reached (one loop represents one generation) or when the population stops evolving and remains stable (convergence is reached).
Figure 7: High level working procedure of the Genetic Algorithm
I don’t want to go into too much detail on the procedure – interested readers are encouraged to go through my thesis [7] and look for the corresponding chapter, or to see relevant papers (particularly Bederina and Hifi [8], from which I took most of the algorithm concept). To summarize the idea: Just like in real life, chromosomes are grouped into pairs (parents) and create children (representing new exploration missions) by sharing their best genes (which are routes in our case). For higher variety, a mutation procedure is applied to a few children, such as a partial swap of different route segments. Finally, the worst chromosomes are eliminated (evolve population = “survival of the fittest”) to keep the population size constant.
Side note: Currently, we have the chance to observe this optimization procedure when looking at the coronavirus. It started almost two years ago with the alpha variant; right now the population is dominated by the delta variant, with omicron emerging. From the virus perspective, it has improved over time through replication and mutation, which is supported by large populations (i.e., a high number of cases).
Note that the genetic algorithm is extended by a so-called local search, which comprises a set of methods to improve routes locally (e. g. by inverting segments or swapping two random stars within one route). That is why this method is referred to as Hybrid Genetic Algorithm.
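Putting the pieces together, the loop of Figure 7 plus the local search can be sketched in a few dozen lines. Here a chromosome is represented simply as a list of routes (one list of star indices per probe), and the crossover, mutation and local-search operators are deliberately crude stand-ins of my own so that the sketch stays short and self-contained; the operators actually used follow Bederina and Hifi [8], and the thesis keeps a Pareto front over both objectives rather than the single scalar fitness used here.

```python
import random

def crossover(a, b):
    """Placeholder crossover: merge routes from both parents, dropping stars
    that are already used so each star appears at most once in the child."""
    child, seen = [], set()
    for route in a + b:
        cleaned = [s for s in route if s not in seen]
        seen.update(cleaned)
        if cleaned:
            child.append(cleaned)
    return child[:max(len(a), 1)]

def mutate(routes):
    """Placeholder mutation: swap two random stars within one route."""
    route = random.choice(routes)
    if len(route) > 1:
        i, j = random.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]
    return routes

def local_search(routes):
    """Local improvement step of the hybrid variant: invert a random segment."""
    route = random.choice(routes)
    if len(route) > 2:
        i, j = sorted(random.sample(range(len(route)), 2))
        route[i:j + 1] = reversed(route[i:j + 1])
    return routes

def evolve(population, fitness, generations=200, mutation_rate=0.2):
    """Schematic loop of Figure 7. `fitness` ranks a chromosome (lower is better),
    e.g. the mission duration from the earlier sketch."""
    for _ in range(generations):
        random.shuffle(population)
        children = [crossover(a, b) for a, b in zip(population[::2], population[1::2])]
        children = [mutate(c) if random.random() < mutation_rate else c for c in children]
        children = [local_search(c) for c in children]        # the "hybrid" step
        # survival of the fittest: keep the population size constant
        population = sorted(population + children, key=fitness)[:len(population)]
    return population
```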
Now let’s see how the algorithm is operating when applied to our problem. In the animated figure below, we can observe the ongoing optimization procedure. Each individual is evaluated “live” with respect to our objectives (mission return and duration). The result is plotted in a chart, where one dot refers to one individual and thus represents one possible exploration mission. The color indicates the corresponding generation.
Figure 8: Animation of the ongoing optimization procedure: Each individual (represented by a dot) is evaluated with respect to the objectives, one color indicates one generation
As shown in this animated figure, the algorithm seems to work properly: With increasing generations, it generates better solutions, as it optimizes towards higher mission return and lower mission duration (towards the upper left in Figure 8). Poor-quality solutions from earlier generations are subsequently replaced by better individuals.
5. Optimization Results
As a result of the optimization, we obtain a set of solutions (representing the surviving individuals from the final generation), which form a curve when evaluated with respect to our twin objectives of mission duration and return (see Figure 9). Obviously, we’ll get different curves when we change the probe number m between two optimization runs. In total, 9 optimization runs are performed; after each run the probe number is doubled, starting with m=2. As in the animated Figure 8, one dot represents one chromosome and thus one possible exploration mission (one mission is illustrated as an example).
Figure 9: Resulting solutions for different probe numbers and mission example represented by one dot
Already from this plot, we can make some first observations: The mission return (which we assume equal to the number of explored stars, just as a reminder) increases with mission duration. More precisely, there appears to be an approximately linear increase of star number with time, at least in most instances. This means that when doubling the mission duration, we can expect more or less twice the mission return. An exception to this behavior is the 512-probe curve, which flattens beyond roughly 8,000 explored stars due to the model limits: In this region, only a few unexplored stars are left, which may require unfavorable transfers.
Furthermore, we see that for a given mission duration the number of explored stars can be increased by launching more probes, which is not surprising. We will elaborate a bit more on the impact of the probe number and on how it is linked with the mission return in a minute.
For now, let’s keep this in our mind and take a closer look at the missions suggested by the algorithm. In the figure below (Figure 10), routes for two missions with different probe number m but similar mission return J1 (nearly 300 explored stars) are visualized (x, y, z-axes dimensions in light years). One color indicates one route that is assigned to one probe.
Figure 10: Visualization of two selected exploration missions with similar mission return J1 but different probe number m – left: 256 available probes, right: 4 available probes (J2 is the mission duration in years)
Even though the mission return is similar, the route structures are very different: The higher probe number mission (left in Figure 10) is built mainly from very dense single-target routes and thus focuses more on the immediate solar neighborhood. The mission with only 4 probes (right in Figure 10), by contrast, contains more distant stars, as it consists of comparatively long, chain-like routes with several targets included. This is quite intuitive: While in the right case (few probes available) mission return is added by “hopping” from star to star, in the left case (many probes available) simply another probe is launched from Earth. Needless to say, the overall mission duration J2 is significantly higher when we launch only 4 probes (> 6,000 years compared to 500 years).
Now let’s look a bit closer at the corresponding transfers. As before, we’ll pick two solutions with different probe number (4 and 64 probes) and similar mission return (about 230 explored stars). But now, we’ll analyze the individual transfer distances along the routes instead of simply visualizing the routes. This is done by means of a histogram (shown in Figure 11), where simply the number of transfers with a certain distance is counted.
Figure 11: Histogram with transfer distances for two different solutions – orange bars belong to a solution with 4 probes, blue bars to a solution with 64 probes; both provide a mission return of roughly 230 explored stars.
The orange bars belong to a solution with 4 probes, the blue ones to a solution with 64 probes. To give an example on how to read the histogram: We can say that the solution with 4 probes includes 27 transfers with a distance of 9 light years, while the solution with 64 probes contains only 8 transfers of this distance. What we should take from this figure is that with higher probe numbers apparently more distant transfers are required to provide the same mission return.
Based on this result we can now concretize earlier observations regarding the probe number impact: From Figure 9 we already found that the mission return increases with probe number, without being more specific. Now, we discovered that the efficiency of the exploration mission w. r. t. routing decreases with increasing probe number, as there are more distant transfers required. We can even quantify this effect: After doing some further analysis on the result curve and a bit of math, we’ll find that the mission return J1 scales with probe number m according to ~m^0.6 (at least in most instances). By incorporating the observations on linearity between mission return and duration (J2), we obtain the following relation: J1 ~ J2 · m^0.6.
As J1 grows only with m^0.6 (remember that m^1 indicates linear growth), the mission return for a given mission duration does not simply double when we launch twice as many probes. Instead, it’s less; moreover, it depends on the current probe number – in fact, the contribution of additional probes to the overall mission return diminishes with increasing probe numbers.
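A quick numerical illustration of what this sub-linear scaling means in practice (using the fitted exponent from above):

```python
# Doubling the probe count multiplies the mission return by 2**0.6, not by 2:
print(round(2 ** 0.6, 2))   # ~1.52 -> twice the probes, only ~52% more stars
print(round(4 ** 0.6, 2))   # ~2.30 -> four times the probes, ~2.3x the stars
```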
This phenomenon is similar to the concept of diminishing returns in economics, which denotes the effect that increasing the input yields a progressively smaller increase in output. How does that fit with earlier observations, e. g. on route structure? Apparently, we are running into some kind of crowding effect when we launch many probes from the same spot (namely our solar system): Long initial transfers are required to assign each probe an unexplored star. Obviously, this effect intensifies with each additional probe being launched.
6. Conclusions and Implications for Planning Interstellar Exploration
What can we take from all this effort and the results of the optimization? First, let’s recap the methodology and tools which we developed for planning interstellar exploration (see Figure 12).
Figure 12: Methodology – main steps
Beside the methodology, which of course can be extended and adapted, we can give some recommendations for interstellar mission design considerations, in particular regarding the probe number impact:
- High probe numbers are favorable when we want to explore many stars in the immediate solar neighborhood. A further advantage of high probe numbers is that mostly single-target missions are performed, which allows each probe to be customized for its target star (e. g. regarding scientific instrumentation).
- If the number of available probes is limited (e. g. due to high production costs), it is recommended to include more distant stars, as it enables a more efficient routing. The aspect of higher routing efficiency needs to be considered in particular when fuel costs are relevant (i. e. when fuel needs to be transported aboard). For other, remotely propelled concepts (such as laser driven probes, e. g. Breakthrough Starshot) this issue is less relevant, which is why those concepts could be deployed in larger numbers, allowing for shorter overall mission duration at the expense of more distant transfers.
- When planning to launch a high number of probes from Earth, however, one should be aware of crowding effects. This effect sets in even for a few probes and intensifies with each additional probe. One option to counter this issue and thus support a more efficient probe deployment could be swarm-based concepts, as indicated by the sketch in Figure 13.
The swarm-based concept includes a mother ship, which transports a fleet of smaller explorer probes to a more distant star. After arrival, the probes are released and start their actual exploration mission. As a result, the very dense, crowded route structures, which are obtained when many probes are launched from the same spot (see again Figure 10, left plot), are broken up.
Figure 13: Sketch illustrating the beneficial effect of swarm concepts for high probe numbers.
Obviously, the results and derived implications for interstellar exploration are not mind-blowing, as they are mostly in line with what one would expect. However, this in turn indicates that our methodology seems to work properly, which of course does not serve as a full verification but is at least a small hint. A more reliable verification result can be obtained by setting up a test problem with known optimum (not shown here, but this was also done for this approach, showing that the algorithm’s results deviate by about 10% from the ideal solution).
Given the very early-stage level of this work, there is still a lot of potential for further research and refinement of the simplistic models. Just to pick one example: As a next step, one could start to distinguish between different star systems by varying the reward si of each star system based on a stellar metric that incorporates more information about the star (such as spectral class, metallicity, data quality, …). In the end it’s up to you which questions you want to answer – there is more than enough inspiration up there in the night sky.
Figure 14: More people, now
Assuming that you are not only an interested reader of Centauri Dreams but also familiar with other popular literature on the topic, you may have heard of Clarke’s three laws. I would like to close this article by taking up his second one: The only way of discovering the limits of the possible is to venture a little way past them into the impossible. As said before, I hope that the introduced methodology can help to answer further questions concerning interstellar exploration from a strategic perspective. The more we know, the better we are capable of planning and imagining interstellar exploration, thus gradually pushing the limits of what is considered possible today.
References
[1] ESA, “Gaia Archive,” [Online]. Available: https://gea.esac.esa.int/archive/.
[2] C. A. L. Bailer-Jones et al., “Estimating Distances from Parallaxes IV: Distances to 1.33 Billion Stars in Gaia Data Release 2,” The Astronomical Journal, vol. 156, 2018.
https://iopscience.iop.org/article/10.3847/1538-3881/aacb21
[3] L. Lindegren et al., “Gaia Data Release 2 – The astrometric solution,” Astronomy & Astrophysics, vol. 616, 2018.
https://doi.org/10.1051/0004-6361/201832727
[4] E. Fantino and S. Casotto, “Study on Libration Points of the Sun and the Interstellar Medium for Interstellar Travel,” Università di Padova/ESA, 2004.
[5] R. Bjørk, “Exploring the Galaxy using space probes,” International Journal of Astrobiology, vol. 6, 2007.
https://doi.org/10.1017/S1473550407003709
[6] I. A. Crawford, “The Astronomical, Astrobiological and Planetary Science Case for Interstellar Spaceflight,” Journal of the British Interplanetary Society, vol. 62, 2009. https://arxiv.org/abs/1008.4893
[7] J. Lebert, “Optimal Strategies for Exploring Near-by Stars,” Technische Universität München, 2021.
https://mediatum.ub.tum.de/1613180
[8] H. Bederina and M. Hifi, “A Hybrid Multi-Objective Evolutionary Algorithm for the Team Orienteering Problem,” 4th International Conference on Control, Decision and Information Technologies, Barcelona, 2017.
https://ieeexplore.ieee.org/document/8102710
[9] University of California – Berkeley, “New Map of Solar Neighborhood Reveals That Binary Stars Are All Around Us,” SciTech Daily, 22 February 2021.
https://scitechdaily.com/new-map-of-solar-neighborhood-reveals-that-binary-stars-are-all-around-us/
HD 137496 b: A Rare ‘Hot Mercury’
We haven’t had many examples of so-called ‘hot Mercury’ planets to work with, or in this case, what might be termed a ‘hot super-Mercury’ because of its size. For HD 137496 b actually fits the ‘super-Earth’ category, at roughly 30 percent larger in radius than the Earth. What makes it stand out, of course, is the fact that as a ‘Mercury,’ it is primarily made up of iron, with its core carrying over 70 percent of the planet’s mass. It’s also a scorched world, with an orbital radius of 0.027 AU and a period of 1.6 days.
Another planet, non-transiting, turns up at HD 137496 as well. It’s a ‘cold Jupiter’ with a minimum mass calculated at 7.66 Jupiter masses, an eccentric orbit of 480 days, and an orbital distance of 1.21 AU from the host star. HD 137496 c is thus representative of the Jupiter-class worlds we’ll be finding more of as our detection methods are fine-tuned for planets on longer, slower orbits than the ‘hot Jupiters’ that were so useful in the early days of radial velocity exoplanet discovery.
The discoverers of the planetary system at HD 137496, an international group led by Tomas Silva (University of Porto, Portugal), found HD 137496 b, the hot Mercury, in K2 data, its transits apparent in the star’s light curve. The gas giant HD 137496 c was then identified in radial velocity work using the reliable HARPS and CORALIE spectrographs.
The primary is a G-class star a good bit older than the Sun, its age calculated at 8.3 billion years, but with a comparable mass (1.03 solar masses), and a radius of approximately 1.50 solar radii.
Image: HARPS (orange) and CORALIE (blue) radial velocities. In this figure, we present our RV time series. As is clearly seen, the data show a long-term and high-amplitude trend (semiamplitude of ~200 m s⁻¹), typical of the signature of a long period giant planet. Credit: Silva et al.
A hot Mercury should turn out to be a useful find in a variety of ways. As the paper notes:
HD 137496 b (K2-364 b) joins the small sample of well characterized dense planets, making it an interesting target for testing planet formation theories, density enhancing mechanisms, and even the possible presence of an extended cometlike mineral rich exosphere. Together with HD 137496 c (K2-364 c), a high-mass (mass ratio…, high-eccentricity planet, this system presents an interesting architecture for planetary evolution studies. Future astrometric observations could also provide significant constraints on the relative inclination of the planetary orbits, unraveling new opportunities to discover the system’s dynamical history.
Keep in mind that most of the planets we now know about have radii somewhere between that of Earth and Neptune. In this range, numerous different system architectures are in play, and a wide variety of possible formation scenarios. As the authors note, high-density planets like HD 137496 b are distinctly under-sampled, which has been a check on theories of planet formation that would accommodate them.
And the theorists are going to have their hands full with this one. HD 137496 b’s parent star shows too little iron to form a planet with this density. I’m going to quote Sasha Warren on this. Working on a PhD at the University of Chicago, Warren focuses on how planetary atmospheres have evolved, particularly those of Mars and Venus. Of HD 137496 b, she has this to say in a recent article on astrobites about how such planets can become more iron-rich:
Firstly, the protoplanetary disks of dust and gas within which planets form around young stars can change in composition as a function of distance from the star. So, it is possible that a combination of high temperatures and magnetic interactions between the host star and the protoplanetary disk concentrated iron-rich materials where HD 137496 b originally formed. This could mean star compositions might not be very useful to help understand what short period rocky planets are made of. Secondly, planets close to their stars like HD 137496 b are so hot that their rocky surfaces can sometimes just evaporate away!
It will be fascinating to see how our theories evolve as we begin to expand the catalog of hot Mercury planets. HD 137496 b is only the fifth world in this category discovered so far.
The paper is Silva et al., “The HD 137496 system: A dense, hot super-Mercury and a cold Jupiter,” in process at Astronomy & Astrophysics (preprint).
Wolf 359: Of Gravitational Lensing and Galactic Networks
If self-reproducing probes have ever been turned loose in the Milky Way, they may well have spread throughout the galaxy. Our planet is 4.6 billion years old, but the galaxy is some 13 billion years old, offering plenty of time for this spread. A number of papers have explored the concept, including work by Frank Tipler, who in 1980 argued that even at the speed of current spacecraft, the galaxy could be completely explored within 300 million years. Because we had found no evidence of such probes, Tipler concluded that extraterrestrial technological civilizations did not exist.
Robert Freitas also explored the consequences of self-reproducing probes in that same year, reaching similar conclusions about how quickly they would spread, although not buying Tipler’s ultimate conclusion. It’s interesting that Freitas went to work on looking for evidence, reasoning that halo orbits around the Lagrangian points might be one place to search. He was, to my knowledge, the first to use the term SETA — Search for Extraterrestrial Artifacts — which has now come into common use, and is currently under examination by Jim Benford in his work on ‘lurkers.’
A new paper from Michaël Gillon (University of Liège) and Artem Burdanov (Massachusetts Institute of Technology) has now appeared that follows the implications of self-reproduction and technology, tying them to a more specific search regimen. Conversant with the work of Von Eshleman as well as Claudio Maccone, the authors ask whether using the gravitational lens offered by a star wouldn’t make the most reasonable method for ETI communications. The Sun’s huge magnifications, bending light from objects behind it as seen from a relay somewhere beyond its 550 AU lensing distance, could enable participation in a network that functioned on a galactic scale.
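That 550 AU figure follows directly from how strongly light grazing the Sun is bent. As a quick back-of-the-envelope check (my own sketch, using the standard focal-distance formula F = b²c²/4GM with impact parameter b equal to one solar radius):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
C = 2.998e8          # speed of light, m/s
AU = 1.496e11        # astronomical unit, m

# A ray grazing the Sun is deflected by 4GM/(c^2 b); rays with impact parameter b
# therefore converge on the axis at a distance F = b^2 c^2 / (4 G M).
focal_distance = R_SUN**2 * C**2 / (4 * G * M_SUN)
print(focal_distance / AU)   # ~547 AU: the start of the focal line, the ~550 AU quoted above
```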
You probably remember Gillon as the man who led the team that discovered TRAPPIST-1’s planets. Back in 2014, he began his exploration of gravitational lensing and communications with the publication of a paper titled “A novel SETI strategy targeting the solar focal regions of the most nearby stars.” Accepting the idea that self-reproducing probes could spread through the galaxy in a span of hundreds of millions of years, the author opened the question of detectability. He drew on Maccone’s insight that links enabled by gravitational lensing could allow data-rich communications between two stars at extremely low power. It is in this 2014 paper that Gillon first proposes looking for leakage in traffic between star systems.
A civilization that has spread throughout the galaxy might set up such relays around any stars useful as network nodes. This would turn conventional SETI on its head. Rather than scanning for radio or optical signals from other stellar systems, we consider intercepting ongoing traffic between another star and the relay in our own system. A fully colonized galaxy, so the thinking goes, should have a relay around at least one nearby star.
Thus the term Focal Interstellar Communication Devices (FICDs), examples of which could be present in our own Solar System and perhaps in the focal regions of nearby stars. Several studies have already appeared on a strategy of performing intense multi-spectral monitoring of these focal regions in the hopes of snagging communication leakage from such a network. Gillon and Burdanov focus on a specific FICD. They identify Wolf 359, an M-dwarf that is the third closest stellar system to our own, as a prime candidate to receive a signal from a local FICD, and implement an optical search.
Why Wolf 359? Ponder this:
…detecting the FICD emission to a nearby star can only be done if the observer is within one of these narrow beams, putting a stringent geometrical constraint on the project concept. For an Earth-based observer, this means that the Earth’s minimum impact parameter has to be close to 1 as seen from the FICD, and thus also from the targeted nearby star. In other words, the Earth has to be a transiting (or nearly transiting) planet for one of the nearest stars to give this SETI concept a chance of success, so the target star has to be very close to the ecliptic plane. With its nearly circular orbit and its semi-major axis 215 times larger than the solar radius, the Earth has a mean transit probability < 0.5% for any random star of the solar neighborhood.
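The geometric constraint quoted above is easy to verify with a one-line estimate (my own arithmetic, not from the paper): for a circular orbit, the chance that a randomly placed observer sees Earth transit is roughly the solar radius divided by Earth's orbital radius.

```python
# Geometric transit probability of Earth as seen from a random nearby star:
transit_probability = 1.0 / 215.0      # R_sun / a, with a ~ 215 solar radii
print(f"{transit_probability:.2%}")    # ~0.47%, consistent with the quoted < 0.5%
```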
Image: An artist’s depiction of an active red dwarf star like Wolf 359 orbited by a planet. Credit: David A. Aguilar.
In other words, because the Earth transits the Sun as seen from Wolf 359, our planet would pass through any communication beam between the star and a local probe once per orbit. Thus a signal to Wolf 359 from an FICD in our Sun’s gravitational lensing region could in principle be detected. Gillon and Burdanov put the idea to the test using the TRAPPIST-South and SPECULOOS Southern Observatory in Chile, in a search “sensitive enough to detect constant emission with emitting power as small as 1W.”
The result: No detections. This could indicate that no probes exist within the Solar System using these methods, or at least that such a probe did not transmit during the observations. Indeed, the list of hypotheses to explain a null result is so large that no conclusion can be drawn. No detection simply means no detection.
But the observations lead us further to consider the spectral range of possible emissions from FICD to star. This is going to change depending on the star. Remember that using gravitational lensing to enable communications forces the receiver to face the host star, blocking its light with some kind of occulter (or perhaps a coronagraph) while enabling the signal to be received. Gillon and Burdanov note that Wolf 359 is a flare star with strong coronal activity, one with significant emission of X-ray and extreme ultraviolet light. The authors determine ‘a spectral zone of minimal emission’ that becomes interesting as a communications channel. Here let’s turn back to the paper, for this zone may be a better place to look:
While the very low emission of late-type M-dwarfs in this spectral range could be an issue for prebiotic chemistry on habitable planets (Rimmer et al. 2018), it could represent a nice spectral ‘sweet spot’ for a GL-based communication to a late M-dwarf like Wolf 359 or TRAPPIST-1. Another advantage of using this wavelength range instead of the optical range is the improved emission rate, thanks to the narrower laser beams… These considerations suggest that the spectral ranges 300-920nm and 400-950nm probed by the TRAPPIST-South and SPECULOOS South observations could not correspond to the optimal spectral range for a GL-based communication [gravitational lensing] from the solar system to Wolf 359. The 150-250 nm spectral range could represent a more optimal spectral range for such GL-based interstellar communication to a cold and active late-type M-dwarf like Wolf-359.
Image: This is Figure 2 from the paper. Caption: Illustration showing the geometry of the hypothesized communication link from the solar system to the Wolf 359 system. The distances and stellar sizes are not to scale. Wolf 359 is shown at 3 different positions. Position 1 corresponds to the time of the emission of the photons that we receive from it now. Position 2 corresponds to its current position. Position 3 corresponds to the time it will receive the photons emitted now by the FICD. Credit: Gillon & Burdanov.
Probing this spectral range would require a space-based instrument, but it would be interesting to target these frequencies in a reproduction of the Wolf 359 observations. This paper recounts the first attempt to detect optical messages emitted from the Solar System to this star, and as such seems intended primarily as a way to shake out observing methods and explore how gravitational lens-based networking could be observed.
The paper is Gillon & Burdanov, “Search for an alien communication from the Solar System to a neighbor star,” submitted to Monthly Notices of the Royal Astronomical Society (preprint). Gillon’s 2014 paper is “A novel SETI strategy targeting the solar focal regions of the most nearby stars,” Acta Astronautica Vol. 94, Issue 2 (February 2014), 629-633 (abstract).
Deep Learning Methods Flag 301 New Planets
It’s no small matter to add 301 newly validated planets to an exoplanet tally already totalling 4,569. But it’s even more interesting to learn that the new planets are drawn out of previously collected data, as analyzed by a deep neural network. The ‘classifier’ in question is called ExoMiner, describing machine learning methods that learn by examining large amounts of data. With the help of the NASA supercomputer called Pleiades, ExoMiner seems to be a wizard at separating actual planetary signatures from the false positives that plague researchers.
ExoMiner is described in a paper slated for The Astrophysical Journal, where the results of an experimental study are presented, using data from the Kepler and K2 missions. The data give the machine learning tools plenty to work with, considering that Kepler observed 112,046 stars in its roughly 115 square degree field of view, identifying over 4000 candidates. More than 2300 of these have been confirmed. The Kepler extended mission K2 detected more than 2300 candidate worlds, with over 400 subsequently confirmed or validated. The latest 301 validated planets indicate that ExoMiner is more accurate than existing transit signal classifiers.
How much more accurate? According to the paper, ExoMiner retrieved 93.6% of all exoplanets in its test run, as compared to a rate of 76.3% for the best existing transit classifier.
We see many more candidate planets than can be readily confirmed or identified as false positives in all our large survey missions. TESS, the Transiting Exoplanet Survey Satellite, for example, working with an area 300 times larger than Kepler’s, has detected 2241 candidates thus far, with about 130 confirmed. Obviously, pulling false positives out of the mix is difficult using our present approaches, which is why the ExoMiner methods are so welcome.
Hamed Valizadegan is ExoMiner project lead and machine learning manager with the Universities Space Research Association at NASA Ames:
“When ExoMiner says something is a planet, you can be sure it’s a planet. ExoMiner is highly accurate and in some ways more reliable than both existing machine classifiers and the human experts it’s meant to emulate because of the biases that come with human labeling…Now that we’ve trained ExoMiner using Kepler data, with a little fine-tuning, we can transfer that learning to other missions, including TESS, which we’re currently working on. There’s room to grow.”
Image: Over 4,500 planets have been found around other stars, but scientists expect that our galaxy contains millions of planets. There are multiple methods for detecting these small, faint bodies around much larger, brighter stars. The challenge then becomes to confirm or validate these new worlds. Credit: NASA/JPL-Caltech.
The paper describes the most common approach to detecting exoplanet candidates and vetting them. Imaging data are processed to identify ‘threshold crossing events’, after which a transit model is fitted to each signal, with diagnostic tests applied to subtract non-exoplanet effects. This produces data validation reports for these crossing events, which in turn are filtered to identify likely exoplanets. The data validation reports for the most likely events are then reviewed by vetting teams and released as objects of interest for follow-up work.
Machine learning (ML) methods speed the process. As described in the paper:
ML methods are ideally suited for probing these massive datasets, relieving experts from the time-consuming task of sifting through the data and interpreting each DV report, or comparable diagnostic material, manually. When utilized properly, ML methods also allow us to train models that potentially reduce the inevitable biases of experts. Among many different ML techniques, Deep Neural Networks (DNNs) have achieved state-of-the-art performance (LeCun et al. 2015) in areas such as computer vision, speech recognition, and text analysis and, in some cases, have even exceeded human performance. DNNs are especially powerful and effective in these domains because of their ability to automatically extract features that may be previously unknown or highly unlikely for human experts in the field to grasp…
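For readers who want a feel for what such a network looks like in code, here is a toy 1-D convolutional classifier that maps a phase-folded light curve to a planet probability. This is emphatically not the ExoMiner architecture, which is a much larger multi-branch network that also ingests the DV diagnostic data; the layer sizes and names here are arbitrary illustrative choices (PyTorch).

```python
import torch
import torch.nn as nn

class ToyTransitClassifier(nn.Module):
    """Toy 1-D CNN: phase-folded light curve in, planet probability out.
    Purely illustrative; ExoMiner itself is a far larger multi-branch model."""
    def __init__(self, n_bins: int = 201):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_bins // 4), 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):                        # x: (batch, 1, n_bins)
        return torch.sigmoid(self.head(self.features(x)))

model = ToyTransitClassifier()
fake_light_curves = torch.randn(8, 1, 201)       # a batch of 8 folded light curves
print(model(fake_light_curves).shape)            # torch.Size([8, 1])
```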
The ExoMiner software learns by using data on exoplanets that have been confirmed in the past, and also by examining the false positives thus far generated. Given the sheer numbers of threshold crossing events Kepler and K2 have produced, automated tools to examine these massive datasets greatly facilitate the confirmation process. Remember that two goals are defined here. A ‘confirmed’ planet is one that is detected via other observational techniques, as when radial velocity methods, for example, are applied to identify the same planet.
A planet is ‘validated’ statistically when it can be shown how likely the find is to be a planet based on the data. The 301 new exoplanets are considered machine-validated. They have been in candidate status until ExoMiner went to work on them to rule out false positives. As with the analysis we examined yesterday, refining filtering techniques at Proxima Centauri to screen out flare activity, this work will be applied to future catalogs from TESS and the ESA’s PLATO mission. According to Valizadegan, the team is already at work using ExoMiner with TESS data.
Usefully, ExoMiner offers what the authors call “a simple explainability framework” that provides feedback on the classifications it makes. It isn’t a ‘black box,’ according to exoplanet scientist Jon Jenkins (NASA Ames), who goes on to say: “We can easily explain which features in the data lead ExoMiner to reject or confirm a planet.”
Looking forward, the authors explain the keys to ExoMiner’s performance. The reference to Kepler Objects of Interest (KOIs) below refers to a subset defined within the paper:
[S]ince the general concept behind vetting transit signals is the same for both Kepler and TESS data, and ExoMiner utilizes the same diagnostic metrics as expert vetters do, we expect an adapted version of this model to perform well on TESS data. Our preliminary results on TESS data verify this hypothesis. Using ExoMiner, we also demonstrate that there are hundreds of new exoplanets hidden in the 1922 KOIs that require further follow-up analysis. Out of these, 301 new exoplanets are validated with confidence using ExoMiner.
The paper is Valizadegan et al., “ExoMiner: A Highly Accurate and Explainable Deep Learning Classifier to Mine Exoplanets,” accepted at The Astrophysical Journal (preprint).
Proxima Centauri: Transits Amidst the Flares?
Discovered in 1915, Proxima Centauri has been a subject of considerable interest ever since, as you would expect of the star nearest to our own. But I had no idea research into planets around Proxima went all the way back to the 1930s. Nonetheless, a new paper from Emily Gilbert (University of Chicago) and colleagues mentions a 1938 attempt by Swedish astronomer Erik Holmberg to use astrometric methods to search for one or more Proxima planets. The abstract of the Holmberg paper (citation below) reads in part:
Many parallax stars show periodic displacements. These effects probably are to be explained as perturbations caused by invisible companions. Since the amplitudes of the orbital motion are very small, the masses of the companions will generally be very small, too. Thus Proxima Centauri probably has a companion, the mass of which is only some few times larger than the mass of Jupiter. A preliminary investigation gives the result that 25% of the total number of parallax stars may have invisible companions.
Holmberg (1908-2000) seems to be best known for his work on galaxy interactions, but the movement of nearby stars was clearly a lively interest. I think we can assume that the Proxima ‘detection’ was due to systematic factors; i.e., noise in the data. But because I was fascinated by this early flurry of exoplanet hunting, I dug around a bit in Michael Perryman’s The Exoplanet Handbook (Cambridge University Press, 2011) to learn that Holmberg came back in 1943 to use long-term time-series photographic plates in another astrometric hunt, this one at 70 Ophiuchi (he thought he detected a gas giant), while in the same year, the Danish astronomer Kaj Aage Gunnar Strand (1907-2000) found evidence for a 16-Jupiter mass planet around 61 Cygni. Neither of these worlds turned out to be any more real than Holmberg’s putative planet at Proxima Centauri.
So exoplanet hunting is replete with false positives. We’ve talked at some length in these pages about the work of Peter van de Kamp (1901-1995), a Dutch astronomer living in the US, on possible planets at Barnard’s Star. His detections were ultimately shown to have resulted from systematic errors in his equipment (though unless I am mistaken, he never accepted this conclusion). Van de Kamp’s work in the 1960s and later made the point that a small number of astronomers have been actively searching for exoplanets long before the detection of 51 Pegasi b, but the Holmberg paper, taking us back prior to World War II, came as a surprise I wanted to share.
Searching for Transits
On to today’s work on Proxima Centauri, which as we know has no gas giant of the sort Holmberg deduced, but does host at least two planets, among which is the fascinating Proxima Centauri b, the latter in the habitable zone of the star. Habitability, however, is problematic. Proxima is a flare star, so active that the atmosphere of a habitable zone planet may be threatened by the intense radiation, with obvious implications for surface life.
Proxima’s flares can dominate observation, as this passage from today’s paper makes clear:
We see 2-3+ large flares every day in the 2-minute cadence TESS light curve (Vida et al., 2019), and even with TESS 2-minute cadence, optical photometry, it can be hard to fully resolve flare morphology in order to search for transits. Davenport et al. (2016) even suggest that the visible-light light curve of Proxima Centauri may be so dominated by flares that the time series can be thought of as primarily a superposition of many flares.
Image: Proxima Centauri is a “flare star,” meaning that convection processes within the star’s body make it prone to random and dramatic changes in brightness. The convection processes not only trigger brilliant bursts of starlight but, combined with other factors, mean that Proxima Centauri is in for a very long life. Astronomers predict that this star will remain middle-aged — or a “main sequence” star in astronomical terms — for another four trillion years, some 300 times the age of the current Universe. These observations were taken using Hubble’s Wide Field and Planetary Camera 2 (WFPC2). Its two companions, Alpha Centauri A and B, lie out of frame. Credit: NASA/ESA.
Indeed, Proxima’s flares appear nearly continuous, occurring at a range of wavelengths. Flares can induce shifts in radial velocity measurements, making observations noisy and burying a potential planetary signal in a sea of misleading data. The same problem occurs for transit detection, where flare-induced variations in the lightcurve may mask an actual planetary signature. All this makes clear what fine work Guillem Anglada-Escudé and team performed at unpacking the radial velocity data that first revealed the existence of Proxima b in 2016.
Because flare activity may be masking transits at Proxima Centauri, Gilbert and team modeled the stellar activity in their planet search algorithm, refining the result. Previous flare detection algorithms have tried to identify the flares and remove them, revealing the more stable stellar signal beneath. The authors take a different approach, using their own algorithm to first identify flares, then modeling them using a template, subtracting them from the data and running a transit search on the result. The unique flare modeling they apply to Proxima, painstakingly presented in the paper’s section on methods, involves a multi-step process of filtering and fitting the flare data. The scientists injected transits into the light curves before modeling as a way of determining how sensitive their method was, with results that boosted the planet signal.
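To make the general idea tangible, here is a heavily simplified sketch of the flag-model-subtract-then-search sequence, using numpy and astropy's BoxLeastSquares. The flare template, the sigma-clipping threshold and the period grid are crude placeholders of my own; Gilbert et al.'s multi-step flare model and injection-recovery machinery are far more careful than this.

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

def simple_flare_template(t, t_peak, amplitude, decay=0.01):
    """Crude flare shape: instantaneous rise, exponential decay (time in days)."""
    f = np.zeros_like(t)
    after = t >= t_peak
    f[after] = amplitude * np.exp(-(t[after] - t_peak) / decay)
    return f

def subtract_flares_and_search(time, flux, flux_err, sigma=5.0):
    """Flag flare-like outliers, subtract a crude flare model, then run a
    box-least-squares transit search on the cleaned light curve."""
    median, scatter = np.median(flux), np.std(flux)
    flare_points = flux > median + sigma * scatter          # naive flare flagging
    cleaned = flux.copy()
    for t_peak in time[flare_points]:
        amp = flux[time == t_peak][0] - median
        cleaned -= simple_flare_template(time, t_peak, amp)
    bls = BoxLeastSquares(time, cleaned, flux_err)
    periods = np.linspace(1.0, 20.0, 5000)                  # trial periods, days
    result = bls.power(periods, 0.05)                       # 0.05-day transit duration
    return periods[np.argmax(result.power)]                 # best-fit period
```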
Image: This is Figure 3 from the paper. Caption: By subtracting a flare model from the light curve of Proxima Centauri, we are able to significantly increase the probability of recovering small planets. We are able to reliably recover planets down to around the radius of Mars across the period range searched, effectively ruling out any transit of Proxima Centauri b. Credit: Gilbert et al.
A transit at Proxima Centauri would be a huge boon to astronomers, allowing a precise radius and accurate determination of the composition of Proxima Centauri b (the same is true, of course, of Proxima c). Moreover, Proxima b’s short orbital period would mean frequent transits, and thus would elevate its status as a place to look for biosignatures in the atmosphere.
Alas, we have no transits. Gilbert’s team used TESS observations of Proxima Centauri to make this determination. The conclusion seems tight. From the paper:
We find no evidence for Proxima Centauri b in TESS data. This is not surprising because previous efforts using different telescopes have been similarly fruitless.
Using the known minimum mass of Proxima Centauri b (Msini = 1.27 M⊕), we used the relationship from Chen and Kipping (2017) to derive an expected planet radius to be R = 1.08 ± 0.14 R⊕. A 100% Iron planet would have an expected radius of 0.88 R⊕ (Zeng et al., 2019). Therefore, given our injection and recovery tests show that no planets larger than 0.4 R⊕ transit Proxima Centauri at periods between 10–12 days, we are confident that we would recover the signal from Proxima Centauri b if it were to transit.
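As a rough sanity check on the quoted radius (my own arithmetic, using the approximate Terran-regime slope of the Chen & Kipping forecaster, R ∝ M^0.28, and ignoring its normalization and scatter):

```python
# Expected radius of Proxima b from its minimum mass, Terran-regime power law:
m_sin_i = 1.27                  # Earth masses
radius = m_sin_i ** 0.28        # Earth radii
print(round(radius, 2))         # ~1.07, consistent with the quoted 1.08 ± 0.14
```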
The team’s flare modeling is a big part of the story here, improving the sensitivity to transiting planets from 0.6 Earth radii to 0.4, and allowing the team to put rigorous limits on the probability of transits. According to the authors, this kind of flare modeling is a technique that should be applicable to all active stars. This is good news for the continuing work of TESS and the future work of Plato (ESA’s PLAnetary Transits and Oscillations of stars mission). Our sensitivity to small planets transiting low-mass, nearby active stars receives a boost from these methods, even though the search for transits at Proxima Centauri comes up empty.
The paper is Gilbert et al., “No Transits of Proxima Centauri Planets in High-Cadence TESS Data,” accepted at Frontiers in Astronomy and Space Sciences (preprint). The Holmberg paper is Holmberg, E., “Invisible Companions of parallax stars revealed by means of modern trigonometric parallax observations,” Meddelanden fran Lunds Astronomiska Observatorium, Series II 92, 5–25.
Wind Rider: A High Performance Magsail
Can you imagine the science we could do if we had the capability of sending a probe to Jupiter with a travel time of less than a month? How about Neptune in 18 weeks? Alex Tolley has been running the numbers on a concept called Wind Rider, which derives from the plasma magnet sail he has analyzed in these pages before (see, for example, The Plasma Magnet Drive: A Simple, Cheap Drive for the Solar System and Beyond). The numbers are dramatic, but only testing in space will tell us whether they are achievable, and whether the highly variable solar wind can be stably harnessed to drive the craft. A long-time contributor to Centauri Dreams, Alex is co-author (with Brian McConnell) of A Design for a Reusable Water-Based Spacecraft Known as the Spacecoach (Springer, 2016), focusing on a new technology for Solar System expansion.
by Alex Tolley
In 2017 I outlined a proposed magnetic sail propulsion system called the Plasma Magnet that was presented by Jeff Greason at an interstellar conference [6]. It caught my attention because of its simplicity and potential high performance compared to other propulsion approaches. For example, the Breakthrough Starshot beamed sail required hugely powerful and expensive phased-array lasers to propel a sail into interstellar space. By contrast, the Plasma Magnet [PM] required relatively little energy and yet was capable of propelling a much larger mass at a velocity exceeding any current propulsion system, including advanced solar sails.
The Plasma Magnet was proposed by Slough [5] and involved an arrangement of coils that co-opts solar wind ions to induce a very large magnetosphere, which is then propelled by the solar wind. Unlike earlier proposals for magnetic sails, which required an electric coil kilometers in diameter to create the magnetic field, using the solar wind ions themselves to create the field meant that the structure was low in mass and that the resulting magnetosphere grew as the surrounding particle density declined. This allowed for a constant acceleration as the PM was propelled away from the sun, very different from solar sails and even magsails with fixed collecting areas.
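That scaling argument can be captured in a few lines. Solar-wind dynamic pressure falls off roughly as 1/r², so a fixed-area sail loses thrust with distance, whereas a magnetosphere whose effective area grows as r² (as Slough argues the plasma magnet's does) keeps its thrust roughly constant. The numbers in this sketch are normalized placeholders meant only to show the scaling:

```python
import numpy as np

# Normalized illustration of the scaling claim above: solar-wind dynamic
# pressure falls roughly as 1/r^2, so a fixed-area sail loses thrust with
# distance, while a plasma magnet whose magnetosphere area grows as r^2
# (Slough's argument) keeps its thrust roughly constant.
r = np.array([1.0, 2.0, 5.0, 10.0])     # heliocentric distance, AU
p_dyn = 1.0 / r**2                      # dynamic pressure relative to 1 AU
thrust_fixed_sail = p_dyn * 1.0         # fixed collecting area
thrust_plasma_magnet = p_dyn * r**2     # area expanding as r^2
print("fixed-area sail: ", thrust_fixed_sail)     # falls as 1/r^2
print("plasma magnet:   ", thrust_plasma_magnet)  # stays constant
```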
The PM concept has been developed further with a much sexier name: the Wind Rider, and missions to use this updated magsail vehicle are being defined.
Wind Rider was presented at the 2021 Division of Planetary Sciences (DPS) meeting by a team led by Brent Freeze, showing their concept design for a Jupiter mission they call JOVE. The December meeting of the American Geophysical Union was the venue for a different Wind Rider concept mission, called Pathfinder, targeting the solar gravitational lens (SGL).
The main upgrade from the earlier PM to the Wind Rider is the substitution of superconducting coils. These allow the craft to maintain the magnetic field without constant power to sustain the electric current, reducing the size of the required power source. Because the superconducting coils would quickly heat up in the inner system and lose their superconductivity, a gold foil reflective sun shield is deployed to protect them from the sun’s radiation. This is shown in the image above, with the shield facing the sun to keep the coils in shadow. The shield is also expected to do double duty as a radio antenna, reducing the net parasitic mass on the vehicle.
The performance of the Wind Rider is very impressive. Calculations show that it will accelerate very rapidly and reach the velocity of the solar wind, about 400 km/s. This has implications for the flight trajectory of the vehicle and the mission time.
The first mission proposal is a flyby of Jupiter – the Jupiter Observing Velocity Experiment (JOVE) – much like the flyby New Horizons performed at Pluto.
Figure 1. The Wind Rider on a flyby of Jupiter. The solar panels are hidden behind the sun shield facing the sun. The 16U CubeSat chassis is at the intersection of the 2 coils and sun shield.
The JOVE mission proposal is for an instrumented flyby of Jupiter [2]. The chassis is a 16U CubeSat. The scientific payload is primarily intended to measure the magnetic field and ion density around Jupiter. The sail is powered by 4 solar panels that also double as struts supporting the sun shield; they generate about 1300 W at 1 AU, falling to about 50 W at Jupiter.
Figure 2. Trajectory of the Wind Rider from Earth to Jupiter
The flight trajectory is effectively a beeline directly to Jupiter, starting the flight almost at opposition. No gravity assists from Earth or Venus are required, nor a long arcing trajectory to intercept Jupiter. Figure 2 shows the trajectory, which is almost a straight-line course with the average velocity close to that of the solar wind.
Although the mission is planned as a flyby, a future mission could allow for orbital insertion if the craft approaches Jupiter’s rotating magnetosphere to maximize the impinging field velocity. Although not mentioned by the authors, it should be noted that Slough has also proposed using a PM as an aerobraking shield that decelerates the craft as it creates a plasma in the upper atmosphere of planets.
How does the performance of the Wind Rider compare to other comparable missions?
The Juno space probe to Jupiter had a maximum velocity of about 73 km/s as Jupiter’s gravity accelerated the craft towards the planet. The required gravity assists and long flight path, about 63 AU or over 9 billion km, meant that its average velocity was about 60 km/s. This is not the fairest comparison, as the Juno probe had to attain orbital insertion at Jupiter.
A fairer comparison is the fastest probe we have flown – the New Horizons mission to Pluto – which reached 45 km/s as it left Earth but had slowed to 14 km/s by the time it flew by Pluto. New Horizons took a year to reach Jupiter for a gravity assist on its 9-year journey to Pluto, giving an average velocity of about 19 km/s between Earth and Jupiter.
Wind Rider can reach Jupiter in less than a month. Figure 2 shows the almost straight-line trajectory to Jupiter. Launched just before opposition, Wind Rider reaches Jupiter in just over 3 weeks. Because Jupiter comes to opposition roughly every 13 months, a new mission could be launched about once a year.
Since the Wind Rider quickly reaches its terminal velocity, matching that of the solar wind, it can reach the outer planets in comparably short times using the same kind of trajectory and roughly annual launch windows.
The Wind Rider can fly by Saturn in just 6 weeks, and Neptune in 18 weeks. Compare that to the Voyager 2 probe, launched in 1977, which took 4 years and 12 years, respectively, to fly by those same planets. Pluto could be reached by Wind Rider in just 6 months.
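Those figures are easy to sanity-check with a straight-line, constant-velocity estimate at the full 400 km/s solar-wind speed. The distances below are rough Earth-to-target values near opposition (my own assumptions, not the authors' numbers), and the results come out a little shorter than the quoted times because the brief acceleration phase and the planets' actual positions are ignored:

```python
# Straight-line, constant-velocity estimate of the flyby times quoted above,
# at the full solar-wind speed of 400 km/s. Distances are rough
# Earth-to-target values near opposition; the initial acceleration phase
# is ignored, so these come out a little faster than the quoted figures.
AU_KM, V_WIND_KMS = 1.496e8, 400.0
targets_au = {"Jupiter": 4.2, "Saturn": 8.5, "Neptune": 29.0, "Pluto": 33.0}
for name, dist_au in targets_au.items():
    days = dist_au * AU_KM / V_WIND_KMS / 86400.0
    print(f"{name}: {days:.0f} days (~{days / 7:.1f} weeks)")
```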
Because of its high terminal velocity that does not reduce during its mission, the Wind Rider is also ideally suited for precursor interstellar missions.
The second proposed mission is called Pathfinder [1], intended to ultimately reach the solar gravity focal line around 550 AU from the sun. Flight time is less than 7 years, making this a viable project for a single science and engineering team, rather than the multi-generation effort it would be with existing rocket propulsion technology. As the flight trajectory is a straight line, the craft is well suited to follow the focal line while imaging a target star or exoplanet, using the Sun itself as an enormous lens to increase the resolving power.
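The same back-of-envelope arithmetic is consistent with that flight time:

```python
# Quick check of the Pathfinder flight time, again assuming a constant
# 400 km/s cruise and ignoring the brief acceleration phase.
AU_KM, V_WIND_KMS, YEAR_S = 1.496e8, 400.0, 3.156e7
print(f"{550 * AU_KM / V_WIND_KMS / YEAR_S:.1f} years")  # about 6.5 years
```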
As the Wind Rider reaches the solar wind velocity, it may even be able to ride the gusts of higher solar wind velocities, perhaps reaching closer to 550 km/s.
While solar sails have been considered the more likely means to reach high velocities, especially when making sun-diver maneuvers, even advanced sails with proposed areal densities well below anything available today would reach solar system escape velocities in the range of 80-120 km/s [3]. If the Wind Rider can indeed reach the velocity of the solar wind, it would prove a far faster vehicle than any solar sail being planned, and would not need a boost from large laser arrays, nor risky sun-diver maneuvers.
I would inject some caution at this point regarding the performance, which is based entirely on theoretical work and a small-scale laboratory experiment. What is needed is a prototype launched into cis-lunar space to test the performance of actual hardware and confirm that the technology can operate as theorized.
It should also be noted that despite its theoretical high performance, there is a potential issue with propelling a probe with a magnetic sail. Compared to a solar sail or a vehicle with reaction thrusters, the Wind Rider as described so far has no crosswind capability. It simply runs before the solar wind like a dandelion seed in a breeze. This means it would have to be aimed very accurately at its target, and it would be subject to the vagaries of the solar wind, which is far less stable than the sun’s photon output. Like the dandelion, if the Wind Rider were inexpensive enough, many could be launched in the expectation that at least one would reach its target.
However, some crosswind capability may be possible. This is based on modelling by Nishida [4], a paper recommended by Dr. Freeze [7].
The study modeled the effect of the angle of attack of a coil’s magnetic field against the solar wind. The coil in this case stands in for the ring current induced in the solar wind plasma by the primary Wind Rider/PM coils.
Theoretically, the angle of attack affects the total force exerted on the magnetic field by the solar wind flowing past it.
Figure 3 shows the pressure on the field as the coil is rotated from 0 through 45 to 90 degrees relative to the solar wind. The force experienced is maximal at 90 degrees; this is shown visually in Figure 3 and graphically in Figure 4.
Figure 4. Force on the coil as affected by angle of attack. A near-90-degree angle of attack increases the force by about 50%.
The angle of attack also induces a change in the thrust vector experienced by the coil, which would act as a crosswind maneuvering capability, allowing for trajectory adjustments as well as a longer launch window for the Wind Rider.
Figure 5. The angle of attack affects the thrust vector. But note the countervailing torque on the coil.
If the coil can maintain an angle of attack with respect to the solar wind, then the Wind Rider can steer across the solar wind to some extent.
Figure 6. (Left) Angle of attack and steering angle. (Right) Angle of attack and the torque on the coil.
Figure 6 shows that the craft could steer up to 12 degrees away from the solar wind direction. However, maintaining that angle of attack requires a constant force to oppose the torque restoring it to zero or 90 degrees. The coil therefore acts like a weather vane, always trying to align itself with the solar wind, and holding the angle of attack would be difficult. Reaction wheels like those on the Kepler telescope could only act in a transient manner. Another suggested possibility is to shift the craft’s center of gravity in some way. Adding booms with coils might be another solution, albeit at the cost of added mass and complexity, undesirable for this first-generation probe. Jeff Greason has a paper to be published in 2022 on theoretical navigation and the possible range of steering capability.
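For a sense of scale, if a 12-degree steering angle could somehow be held for an entire Jupiter-bound run, simple geometry gives the lateral offset it would buy. This is purely illustrative; as noted above, holding that angle is precisely the unsolved problem:

```python
import math

# If a 12-degree steering angle could be held over an entire Jupiter-bound
# run (roughly 4.2 AU), simple geometry gives the lateral offset gained.
steer_deg = 12.0
path_au = 4.2
lateral_au = path_au * math.tan(math.radians(steer_deg))
print(f"Lateral offset: {lateral_au:.2f} AU")   # about 0.9 AU
```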
In summary, the Wind Rider is an upgraded version of the Plasma Magnet propulsion concept, now applied to a reference design for two missions: a fast flyby of Jupiter and an interstellar precursor mission that could reach the solar gravity lens focus. The performance of the design is based primarily on modelling, and as yet there is no experimental evidence to support a finite lift/drag ratio for the craft.
Having said that, the propulsion principle and the necessary hardware are not expensive, and there seems to be considerable interest from the AIAA. Maybe this propulsion method can finally be built, flown and evaluated. If it works as advertised, it would open up the solar system to exploration by fast, cheap robotic probes and eventually crewed ships.
References
1. Freeze, B. et al., “Wind Rider Pathfinder Mission to Trappist-1 Solar Gravitational Lens Focal Region in 8 Years” (poster at AGU, December 13, 2021). https://agu.confex.com/agu/fm21/meetingapp.cgi/Paper/796237
2. Freeze, B. et al., “Jupiter Observing Velocity Experiment (JOVE): Introduction to Wind Rider Solar Electric Propulsion Demonstrator and Science Objective.” https://baas.aas.org/pub/2021n7i314p05/release/1
3. Vulpetti, Giovanni, et al., Solar Sails: A Novel Approach to Interplanetary Travel. New York: Springer, 2008.
4. Nishida, Hiroyuki, et al. “Verification of Momentum Transfer Process on Magnetic Sail Using MHD Model.” 41st AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit, 2005.
https://doi.org/10.2514/6.2005-4463
5. Slough, J., “Plasma Magnet,” NASA Institute for Advanced Concepts Phase I Final Report, 2004. http://www.niac.usra.edu/files/studies/final_report/860Slough.pdf. See Figure 2.
6. Tolley, A “The Plasma Magnet Drive: A Simple, Cheap Drive for the Solar System and Beyond” (2017).
https://www.centauri-dreams.org/2017/12/29/the-plasma-magnet-drive-a-simple-cheap-drive-for-the-solar-system-and-beyond/
7. Generous email communications with Dr. Brent Freeze in preparation of this article.