Optimal Strategies for Exploring Nearby Stars

We’ve spoken recently about civilizations expanding throughout the galaxy in a matter of hundreds of thousands of years, a thought that led Frank Tipler to doubt the existence of extraterrestrials, given the lack of evidence of such expansion. But let’s turn the issue around. What would the very beginning of our own interstellar exploration look like, if we reach the point where probes are feasible and economically viable? This is the question Johannes Lebert examines today. Johannes obtained his Master’s degree in Aerospace at the Technische Universität München (TUM) this summer. He likewise did his Bachelor’s in Mechanical Engineering at TUM and was a visiting student in the field of Aerospace Engineering at the Universitat Politècnica de València (UPV), Spain. He has worked at Starburst Aerospace (a global aerospace & defense startup accelerator and strategic advisory company) and AMDC GmbH (a consultancy with focus on defense located in Munich). Today’s essay is based upon his Master thesis “Optimal Strategies for Exploring Nearby-Stars,” which was supervised by Martin Dziura (Institute of Astronautics, TUM) and Andreas Hein (Initiative for Interstellar Studies).

by Johannes Lebert

1. Introduction

Last year, when everything was shut down and people were advised to stay at home instead of going out or traveling, I ignored those recommendations by dedicating my master thesis to the topic of interstellar travel. More precisely, I tried to derive optimal strategies for exploring nearby stars. As a very early-stage researcher, I was really honored when Paul asked me to contribute to Centauri Dreams, and I want to thank him for this opportunity to share my thoughts on planning interstellar exploration from a strategic perspective.

Figure 1: Me, last year (symbolic image). Credit: hippopx.com.

As you are an experienced and interested reader of Centauri Dreams, I think it is not necessary to make you aware of the challenges and fascination of interstellar travel and exploration. I am sure you’ve already heard a lot about interstellar probe concepts, from gram-scale nanoprobes such as Breakthrough Starshot to huge spaceships like Project Icarus. Probably you are also familiar with suitable propulsion technologies, be it solar sails or fusion-based engines. I guess, you could also name at least a handful of promising exploration targets off the cuff, perhaps with focus on star systems that are known to host exoplanets. But have you ever thought of ways to bring everything together by finding optimal strategies for interstellar exploration? As a concrete example, what could be the advantages of deploying a fleet of small probes vs. launching only few probes with respect to the exploration targets? And, more fundamentally, what method can be used to find answers to this question?

In particular the last question has been the main driver for this article: Before starting with writing, I was wondering a lot what could be the most exciting result I could present to you and found that the methodology as such is the most valuable contribution on the way towards interstellar exploration: Once the idea is understood, you are equipped with all relevant tools to generate your own results and answer similar questions. That is why I decided to present you a summary of my work here, addressing more directly the original idea of Centauri Dreams (“Planning […] Interstellar Exploration”), instead of picking a single result.

Below you’ll find an overview of this article’s structure to give you an impression of what to expect. Of course, there is no time to go into detail for each step, but I hope it’s enough to make you familiar with the basic components and concepts.

Figure 2: Article content and chapters

I’ll start from scratch by defining interstellar exploration as an optimization problem (chapter 2). Then, we’ll set up a model of the solar neighborhood and specify probe and mission parameters (chapter 3), before selecting a suitable optimization algorithm (chapter 4). Finally, we apply the algorithm to our problem and analyze the results (more generally in chapter 5, with implications for planning interstellar exploration in chapter 6).

But let’s start from the real beginning.

2. Defining and Classifying the Problem of Interstellar Exploration

We’ll start by stating our goal: We want to explore stars. Actually, it is star systems, because typically we are more interested in the planets potentially hosted by a star than in the star as such. From a more abstract perspective, we can look at the stars (or star systems) as a set of destinations that can be visited and explored. As we said before, in most cases we are interested in planets orbiting the target star, even more so if they might be habitable. Hence, there are star systems which are more interesting to visit (e. g. those with a high probability of hosting habitable planets) and others which are less attractive. Based on these considerations, we can assign each star system an “earnable profit” or “stellar score” from 0 to 1. The value 0 refers to the most boring star systems (though I am not sure if there are any boring star systems out there, so maybe it’s better to say “least fascinating”) and 1 to the most fascinating ones. The scoring can of course be adjusted depending on one’s preferences and extended by additional considerations and requirements. However, to keep it simple, let’s assume for now that each star system provides a score of 1, hence we don’t distinguish between different star systems. With this in mind, we can draw a sketch of our problem as shown in Figure 3.

Figure 3: Solar system (orange dot) as starting point, possible star systems for exploration (destinations with score si) represented by blue dots

To earn the profit by visiting and exploring those destinations, we can deploy a fleet of space probes, which are launched simultaneously from Earth. However, as there are many stars to be explored and we can only launch a limited number of probes, one needs to decide which stars to include and which ones to skip – otherwise, mission timeframes will explode. This decision will be based on two criteria: mission return and mission duration. The mission return is simply the sum of the stellar scores of the visited stars. As we assume a stellar score of 1 for each star, the mission return is equal to the total number of stars visited by all our probes. The mission duration is the time needed to finish the exploration mission.

In case we deploy several probes, which carry out the exploration mission simultaneously, the mission is assumed to be finished when the last probe reaches the last star on its route – even if other probes have finished their route earlier. Hence, the mission duration is equal to the travel time of the probe with the longest trip. Note that the probes do not need to return to the solar system after finishing their route, as they are assumed to send the data gained during exploration immediately back to Earth.
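Both objectives are easy to evaluate for any candidate mission. Here is a minimal sketch in Python (not from the thesis; the routes and per-leg travel times are invented for illustration):

```python
# A candidate mission: one list of per-leg travel times (in years) per probe.
# Values are purely illustrative.

def mission_return(routes, score_per_star=1.0):
    """Sum of stellar scores over all visited stars (here: score 1 each)."""
    return score_per_star * sum(len(route) for route in routes)

def mission_duration(routes):
    """Time until the slowest probe reaches the last star on its route."""
    return max(sum(route) for route in routes)

routes = [
    [43.0, 12.5, 20.0],  # probe 1 visits 3 stars, arriving after 75.5 years
    [60.1, 9.9],         # probe 2 visits 2 stars, arriving after 70.0 years
]
print(mission_return(routes))    # 5.0 stars visited in total
print(mission_duration(routes))  # 75.5 years (set by the slower probe)
```

Note that the duration is set by the slowest probe, matching the definition above: probes finishing early do not shorten the mission.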

Based on these considerations we can classify our problem as a bi-objective multi-vehicle open routing problem with profits. Admittedly quite a cumbersome term, but it contains all relevant information:

  • Bi-objective: There are two objectives, mission return and mission duration. Note that we want to maximize the return while keeping the duration minimal. Hence, from intuition we can expect that both objectives are competing: The more time, the more stars can be visited.
  • Multi-vehicle: Not only one, but several probes are used for simultaneous exploration.
  • Open: Probes are free to choose where to end their route and are not forced to return back to Earth after finishing their exploration mission.
  • Routing problem with profits: We consider the stars as a set of destinations with each providing a certain score si. From this set, we need to select several subsets, which are arranged as routes and assigned to different probes (see Figure 4).

Figure 4: Problem illustration: Identify subsets of possible destinations si, find the best sequences and assign them to probes

Even though it appears a bit stiff, the classification of our problem is very useful for identifying suitable solution methods: Before, we were talking about the problem of optimizing interstellar exploration, which is quite unknown territory with limited research. Now, thanks to our abstraction, we are facing a so-called Routing Problem, a well-known optimization problem class with applications across various fields, and therefore exhaustively investigated. As a result, we now have access to a large pool of established algorithms, which have already been tested successfully against these kinds of problems or closely related ones such as the Traveling Salesman Problem (probably the most popular one) or the Team Orienteering Problem (a subclass of the Routing Problem).

3. Model of the Solar Neighborhood and Assumptions on Probe & Mission Architecture

Obviously, we’ll also need some kind of galactic model of our region of interest, which provides us with the relevant star characteristics and, most importantly, the star positions. There are plenty of star catalogues with different focus and historical background (e.g. Hipparcos, Tycho, RECONS). One of the latest, still ongoing surveys is the Gaia Mission, whose observations are incorporated in the Gaia Archive, which is currently considered to be the most complete and accurate star database.

However, the Gaia Archive – more precisely the Gaia Data Release 2 (DR2), which will be used here* (accessible online [1] together with Gaia-based distance estimations by Bailer-Jones et al. [2]) – provides only raw observation data, which includes some spurious results. For instance, it lists more than 50 stars closer than Proxima Centauri, which would be quite a surprise to all the astronomers out there.

* Note that there is already an updated Data Release (Gaia DR3), which was not yet available at the time of the thesis.

Hence, filtering is required to obtain a clean data set. The filtering procedure applied here, which consists of several steps, is illustrated in Figure 5 and follows the suggestions of Lindegren et al. [3]. For instance, data entries are eliminated based on parallax errors and uncertainties in BP and RP fluxes. The resulting model (after filtering) includes 10,000 stars and represents a spherical domain with a radius of roughly 110 light years around the solar system.
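In code, such a quality filter is just a set of cuts on catalogue columns. The sketch below uses real Gaia DR2 column names, but the threshold values are illustrative placeholders, not the exact cuts from Lindegren et al. [3] or the thesis:

```python
# Sketch of the catalogue-cleaning step. Column names follow Gaia DR2;
# the thresholds are illustrative placeholders only.

def filter_catalogue(rows, min_plx_snr=10.0, min_flux_snr=10.0):
    """Keep only entries with reliable parallaxes and BP/RP fluxes."""
    return [
        r for r in rows
        if r["parallax_over_error"] > min_plx_snr
        and r["phot_bp_mean_flux_over_error"] > min_flux_snr
        and r["phot_rp_mean_flux_over_error"] > min_flux_snr
    ]

# Tiny mock catalogue: the second entry fails the parallax cut.
stars = [
    {"parallax_over_error": 25.0, "phot_bp_mean_flux_over_error": 30.0,
     "phot_rp_mean_flux_over_error": 28.0},
    {"parallax_over_error": 4.0, "phot_bp_mean_flux_over_error": 30.0,
     "phot_rp_mean_flux_over_error": 15.0},
    {"parallax_over_error": 18.0, "phot_bp_mean_flux_over_error": 12.0,
     "phot_rp_mean_flux_over_error": 11.0},
]
print(len(filter_catalogue(stars)))  # 2 entries survive
```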

Figure 5: Setting up the star model based on Gaia DR2 and filtering (animated figure from [9])

To reduce the complexity of the model, we assume all stars to maintain fixed positions – which is of course not true (see Figure 5, upper right) but can be shown to be a valid simplification for our purposes – and we limit the mission time frames to 7,000 years. 7,000 years? Yes, unfortunately, the enormous stellar distances, which are probably the biggest challenge we encounter when planning interstellar travel, result in very high travel times – even if we are optimistic concerning the travel speed of our probes, which is defined as follows.

We’ll use a rather simplistic probe model based on literature suggestions, which has the advantage that the results are valid across a large range of probe concepts. We assume the probes to travel along straight-line trajectories (in line with Fantino & Casotto [4]) at an average velocity of 10% of the speed of light (in line with Bjørk [5]). They are not capable of self-replication; hence, the probe number remains constant during a mission. Furthermore, the probes are restricted to performing flybys instead of rendezvous, which limits the scientific return of the mission but is still good enough to detect planets (as reported by Crawford [6]). Hence, the considered mission can be interpreted as a reconnaissance or scouting mission, which serves to identify suitable targets for a follow-up mission that would include rendezvous and orbit insertion for further, more sophisticated exploration.
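Under these assumptions, transfer times follow directly from the geometry: at 10% of the speed of light, a probe covers one light year in ten years. A small sketch (the star position is a placeholder, simplified to a point on one axis):

```python
import math

CRUISE_SPEED_C = 0.1  # average probe velocity as a fraction of light speed

def travel_time_years(pos_a, pos_b):
    """Straight-line flight time in years between two stars whose
    Cartesian positions are given in light years."""
    distance_ly = math.dist(pos_a, pos_b)  # Euclidean distance
    return distance_ly / CRUISE_SPEED_C    # 1 ly at 0.1 c takes 10 years

# Sun at the origin, a star 4.24 ly away (roughly Proxima's distance):
print(travel_time_years((0.0, 0.0, 0.0), (4.24, 0.0, 0.0)))  # ~42.4 years
```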

Disclaimer: I am well aware of the weaknesses of the probe and mission model, which does not allow for more advanced mission design (e. g. slingshot maneuvers) and assumes a very long-term operability of the probes, just to name two of them. However, to keep the model and results comprehensive, I tried to derive the minimum set of parameters which is required to describe interstellar exploration as an optimization problem. Any extensions of the model, such as a probe failure probability or deorbiting maneuvers (which could increase the scientific return tremendously), are left to further research.

4. Optimization Method

Having modeled the solar neighborhood and defined an admittedly rather simplistic probe and mission model, we finally need to select a suitable algorithm for solving our problem, or, in other words, to suggest “good” exploration missions (good means optimal with respect to both our objectives). In fact, the algorithm has the sole task of assigning each probe the best star sequences (so-called decision variables). But which algorithm could be a good choice?

Optimization or, more generally, operations research is a huge research field which has spawned countless more or less sophisticated solution approaches and algorithms over the years. However, there is no optimization method (not yet) which works perfectly for all problems (“no free lunch theorem”) – which is probably the main reason why there are so many different algorithms out there. To navigate through this jungle, it helps to recall our problem class and focus on the algorithms which are used to solve equal or similar problems. Starting from there, we can further exclude some methods a priori by means of a first analysis of our problem structure: Considering n stars, there are n! possibilities to arrange them into one route, which can be quite a lot (just to give you a number: for n = 50 we obtain 50! ≈ 10^64 possibilities).
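The factorial growth of the search space is easy to verify:

```python
import math

# Number of ways to arrange n stars into a single route:
for n in (10, 25, 50):
    print(n, math.factorial(n))

# 50! is on the order of 3e64 -- even evaluating one route per nanosecond,
# enumerating all arrangements would take vastly longer than the age of
# the universe.
```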

Given that our model contains up to 10,000 stars, we cannot simply try out each possibility and take the best one (the so-called enumeration method). Instead, we need to find another approach, one more suitable for problems with a very large search space, as an operations researcher would say. Maybe you have already heard about (meta-)heuristics, which allow for more time-efficient solving but do not guarantee finding the true optimum. Even if you’ve never heard of them, I am sure that you know at least one representative of a metaheuristic-based solution, as it is sitting in front of your screen right now as you are reading this article… Indeed, each of us is the result of a still ongoing optimization procedure that has been running for thousands of years: evolution. Wouldn’t it be cool if we could adopt the mechanisms that brought us here to take the next big step for mankind and find ways to leave the solar system and explore unknown star systems?

Those kinds of algorithms, which try to imitate the process of natural evolution, are referred to as Genetic Algorithms. Maybe you remember the biology classes at school, where you learned about chromosomes, genes and how they are shared between parents and their children. We’ll use the same concept and also the wording here, which is why we need to encode our optimization problem (illustrated in Figure 6): One single chromosome will represent one exploration mission and as such one possible solution for our optimization problem. The genes of the chromosome are equivalent to the probes. And the gene sequences embody the star sequences, which in turn define the travel routes of each probe.

If we are talking about a set of chromosomes, we will use the term “population”; a single chromosome is therefore sometimes referred to as an “individual”. Furthermore, as the population will evolve over time, we will speak of different generations (just like for us humans).

Figure 6. Genetic encoding of the problem: Chromosomes embody exploration missions; genes represent probes and gene sequences are equivalent to star sequences.
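In code, this encoding is just a nested list (the star indices below are made up for illustration):

```python
# One chromosome = one exploration mission.
# Each gene = one probe; the gene's sequence = that probe's star route,
# stored as indices into the star catalogue (illustrative values).
chromosome = [
    [17, 4, 92],  # probe 0 visits stars 17 -> 4 -> 92
    [55],         # probe 1 flies a single-target mission to star 55
    [3, 8],       # probe 2 visits stars 3 -> 8
]

def visited_stars(chromosome):
    """All stars explored by the mission; a star may appear in one route only."""
    flat = [star for route in chromosome for star in route]
    assert len(flat) == len(set(flat)), "star assigned to more than one probe"
    return set(flat)

print(sorted(visited_stars(chromosome)))  # [3, 4, 8, 17, 55, 92]
```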

The algorithm as such is pretty straightforward; the basic working principle of the Genetic Algorithm is illustrated below (Figure 7). Starting from a randomly created initial population, we enter an evolution loop, which stops either when a maximum number of generations is reached (one loop represents one generation) or when the population stops evolving and remains stable (convergence is reached).

Figure 7: High level working procedure of the Genetic Algorithm

I don’t want to go into too much detail on the procedure – interested readers are encouraged to go through my thesis [7] and look for the corresponding chapter or see relevant papers (particularly Bederina and Hifi [8], from where I took most of the algorithm concept). To summarize the idea: Just like in real life, chromosomes are grouped into pairs (parents) and create children (representing new exploration missions) by sharing their best genes (which are routes in our case). For higher variety, a mutation procedure is applied to a few children, such as a partial swap of different route segments. Finally, the worst chromosomes are eliminated (evolve population = “survival of the fittest”) to keep the population size constant.
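To make the loop concrete, here is a deliberately stripped-down sketch: a single probe, a single objective (route length), and a swap mutation only. The actual algorithm from the thesis additionally recombines routes between parent missions and ranks individuals against both objectives, so treat this purely as an illustration of the generational create/evaluate/select cycle:

```python
import random

def route_length(route, dist):
    """Total travel distance of an open route (no return to start)."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def evolve_route(stars, dist, pop_size=20, generations=200, seed=1):
    """Bare-bones evolution loop: create children by mutation, evaluate,
    keep the fittest. Crossover and the second objective are omitted."""
    rng = random.Random(seed)
    population = [rng.sample(stars, len(stars)) for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for parent in population:
            child = parent[:]                        # copy, then mutate:
            i, j = rng.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]  # swap two stars
            children.append(child)
        # "Survival of the fittest": keep the pop_size shortest routes.
        population = sorted(population + children,
                            key=lambda r: route_length(r, dist))[:pop_size]
    return population[0]

# Toy instance: five stars on a line, dist[i][j] = |i - j|. The best open
# route simply sweeps along the line, with total length 4.
dist = [[abs(i - j) for j in range(5)] for i in range(5)]
best = evolve_route(list(range(5)), dist)
print(route_length(best, dist))
```

On this tiny instance the loop reliably recovers the optimal sweep; the point is the structure of the generational cycle, not the toy result.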

Side note: Currently, we have the chance to observe this optimization procedure when looking at the coronavirus. It started almost two years ago with the Alpha variant; right now the population is dominated by the Delta variant, with Omicron emerging. From the virus’s perspective, it has improved over time through replication and mutation, which is supported by large populations (i.e., a high number of cases).

Note that the genetic algorithm is extended by a so-called local search, which comprises a set of methods to improve routes locally (e. g. by inverting segments or swapping two random stars within one route). That is why this method is referred to as a Hybrid Genetic Algorithm.
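Segment inversion is better known as the 2-opt move from the routing literature; a minimal sketch (toy distance matrix, not thesis code):

```python
def route_length(route, dist):
    """Total travel distance of an open route (no return to start)."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def two_opt(route, dist):
    """Local search: keep inverting route segments as long as an
    inversion shortens the route (the star-swap move works analogously)."""
    best = route[:]
    improved = True
    while improved:
        improved = False
        for i in range(len(best) - 1):
            for j in range(i + 2, len(best) + 1):
                candidate = best[:i] + best[i:j][::-1] + best[j:]
                if route_length(candidate, dist) < route_length(best, dist):
                    best, improved = candidate, True
    return best

# Stars on a line again: the tangled route 0 -> 3 -> 1 -> 2 -> 4 (length 8)
# is untangled into the sweep 0 -> 1 -> 2 -> 3 -> 4 (length 4).
dist = [[abs(i - j) for j in range(5)] for i in range(5)]
print(two_opt([0, 3, 1, 2, 4], dist))  # [0, 1, 2, 3, 4]
```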

Now let’s see how the algorithm is operating when applied to our problem. In the animated figure below, we can observe the ongoing optimization procedure. Each individual is evaluated “live” with respect to our objectives (mission return and duration). The result is plotted in a chart, where one dot refers to one individual and thus represents one possible exploration mission. The color indicates the corresponding generation.

Figure 8: Animation of the ongoing optimization procedure: Each individual (represented by a dot) is evaluated with respect to the objectives, one color indicates one generation

As shown in this animated figure, the algorithm seems to work properly: With increasing generations, it generates better solutions, optimizing towards higher mission return and lower mission duration (towards the upper left in Figure 8). Poor-quality solutions from earlier generations are subsequently replaced by better individuals.

5. Optimization Results

As a result of the optimization, we obtain a set of solutions (representing the surviving individuals from the final generation), which form a curve when evaluated with respect to our twin objectives of mission duration and return (see Figure 9). Obviously, we’ll get different curves when we change the probe number m between two optimization runs. In total, 9 optimization runs are performed; after each run the probe number is doubled, starting with m=2. As in the animated Figure 8, one dot represents one chromosome and thus one possible exploration mission (one mission is illustrated as an example).

Figure 9: Resulting solutions for different probe numbers and mission example represented by one dot

Already from this plot, we can make some first observations: The mission return (which, as a reminder, we assume equal to the number of explored stars) increases with mission duration. More precisely, there appears to be an approximately linear increase of star number with time, at least in most instances. This means that when doubling the mission duration, we can expect more or less twice the mission return. An exception to this behavior is the 512-probe curve, which flattens when reaching > 8,000 stars due to the model limits: In this region, only few unexplored stars are left, which may require unfavorable transfers.

Furthermore, we see that for a given mission duration the number of explored stars can be increased by launching more probes, which is not surprising. We will elaborate a bit more on the impact of the probe number and on how it is linked with the mission return in a minute.

For now, let’s keep this in mind and take a closer look at the missions suggested by the algorithm. In the figure below (Figure 10), routes for two missions with different probe number m but similar mission return J1 (nearly 300 explored stars) are visualized (x, y, z axes in light years). One color indicates one route that is assigned to one probe.

Figure 10: Visualization of two selected exploration missions with similar mission return J1 but different probe number m – left: 256 available probes, right: 4 available probes (J2 is the mission duration in years)

Even though the mission return is similar, the route structures are very different: The higher probe number mission (left in Figure 10) is built mainly from very dense single-target routes and thus focuses more on the immediate solar neighborhood. The mission with only 4 probes (right in Figure 10), by contrast, contains more distant stars, as it consists of comparatively long, chain-like routes with several targets included. This is quite intuitive: While in the right case (few probes available) mission return is added by “hopping” from star to star, in the left case (many probes available) simply another probe is launched from Earth. Needless to say, the overall mission duration J2 is significantly higher when we launch only 4 probes (> 6,000 years compared to 500 years).

Now let’s look a bit closer at the corresponding transfers. As before, we’ll pick two solutions with different probe number (4 and 64 probes) and similar mission return (about 230 explored stars). But now, we’ll analyze the individual transfer distances along the routes instead of simply visualizing the routes. This is done by means of a histogram (shown in Figure 11), where simply the number of transfers with a certain distance is counted.

Figure 11: Histogram of transfer distances for two different solutions – orange bars belong to a solution with 4 probes, blue bars to a solution with 64 probes; both provide a mission return of roughly 230 explored stars.

The orange bars belong to a solution with 4 probes, the blue ones to a solution with 64 probes. To give an example on how to read the histogram: We can say that the solution with 4 probes includes 27 transfers with a distance of 9 light years, while the solution with 64 probes contains only 8 transfers of this distance. What we should take from this figure is that with higher probe numbers apparently more distant transfers are required to provide the same mission return.

Based on this result we can now concretize earlier observations regarding the probe number impact: From Figure 9 we already found that the mission return increases with probe number, without being more specific. Now, we discovered that the efficiency of the exploration mission w. r. t. routing decreases with increasing probe number, as more distant transfers are required. We can even quantify this effect: After doing some further analysis on the result curve and a bit of math, we’ll find that the mission return J1 scales with probe number m according to ~m^0.6 (at least in most instances). By incorporating the observations on linearity between mission return and duration (J2), we obtain the following relation: J1 ~ J2 · m^0.6.

As J1 grows only with m^0.6 (remember that m^1 would indicate linear growth), the mission return for a given mission duration does not simply double when we launch twice as many probes. Instead, it’s less; moreover, it depends on the current probe number – in fact, the contribution of additional probes to the overall mission return diminishes with increasing probe numbers.
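A quick numerical illustration of this sublinear scaling (taking the fitted exponent 0.6 at face value):

```python
# J1 ~ J2 * m**0.6: doubling the probe number m at fixed mission duration
# multiplies the return by 2**0.6, not by 2.
print(round(2 ** 0.6, 3))  # ~1.516: twice the probes, only ~52% more stars

# Per-probe efficiency m**0.6 / m = m**-0.4 drops as the fleet grows:
for m in (2, 8, 64, 512):
    print(m, round(m ** 0.6 / m, 3))
```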

This phenomenon is similar to the concept of diminishing returns in economics, which denotes the effect that an increase of the input yields a progressively smaller increase in output. How does that fit with earlier observations, e. g. on route structure? Apparently, we are running into some kind of crowding effect when we launch many probes from the same spot (namely our solar system): Long initial transfers are required to assign each probe an unexplored star. Obviously, this effect intensifies with each additional probe being launched.

6. Conclusions and Implications for Planning Interstellar Exploration

What can we take from all this effort and the results of the optimization? First, let’s recap the methodology and tools which we developed for planning interstellar exploration (see Figure 12).

Figure 12: Methodology – main steps

Beside the methodology, which of course can be extended and adapted, we can give some recommendations for interstellar mission design considerations, in particular regarding the probe number impact:

  • High probe numbers are favorable when we want to explore many stars in the immediate solar neighborhood. As a further advantage of high probe numbers, mostly single-target missions are performed, which allows the customization of each probe according to its target star (e. g. regarding scientific instrumentation).
  • If the number of available probes is limited (e. g. due to high production costs), it is recommended to include more distant stars, as this enables a more efficient routing. The aspect of higher routing efficiency needs to be considered in particular when fuel costs are relevant (i. e. when fuel needs to be transported aboard). For other, remotely propelled concepts (such as laser-driven probes, e. g. Breakthrough Starshot) this issue is less relevant, which is why those concepts could be deployed in larger numbers, allowing for shorter overall mission duration at the expense of more distant transfers.
  • When planning to launch a high number of probes from Earth, however, one should be aware of crowding effects. This effect sets in already with few probes and intensifies with each additional probe. One option to counter this issue, and thus support a more efficient probe deployment, could be swarm-based concepts, as indicated by the sketch in Figure 13.

    The swarm-based concept includes a mother ship, which transports a fleet of smaller explorer probes to a more distant star. After arrival, the probes are released and start their actual exploration mission. As a result, the very dense, crowded route structures, which are obtained when many probes are launched from the same spot (see again Figure 10, left plot), are broken up.

Figure 13: Sketch illustrating the beneficial effect of swarm concepts for high probe numbers.

Obviously, the results and derived implications for interstellar exploration are not mind-blowing, as they are mostly in line with what one would expect. However, this in turn indicates that our methodology works properly, which of course does not serve as a full verification but is at least a small hint. A more reliable verification can be obtained by setting up a test problem with known optimum (not shown here, but this was also done for this approach, showing that the algorithm’s results deviate by about 10% from the ideal solution).

Given the very early-stage level of this work, there is still a lot of potential for further research and refinement of the simplistic models. Just to pick one example: As a next step, one could start to distinguish between different star systems by varying the reward si of each star system based on a stellar metric that incorporates more information about the star (such as spectral class, metallicity, data quality, …). In the end it’s up to each of us which questions to answer – there is more than enough inspiration up there in the night sky.

Figure 14: More people, now

Assuming that you are not only an interested reader of Centauri Dreams but also familiar with other popular literature on that topic, you may have heard about Clarke’s three laws. I would like to close this article by taking up his second one: The only way of discovering the limits of the possible is to venture a little way past them into the impossible. As said before, I hope that the introduced methodology can help to answer further questions concerning interstellar exploration from a strategic perspective. The more we know, the better we are capable of planning and imagining interstellar exploration, thus gradually pushing the limits of what is considered possible today.

References

[1] ESA, “Gaia Archive,” [Online]. Available: https://gea.esac.esa.int/archive/.

[2] C. A. L. Bailer-Jones et al., “Estimating Distances from Parallaxes IV: Distances to 1.33 Billion Stars in Gaia Data Release 2,” The Astronomical Journal, vol. 156, 2018.
https://iopscience.iop.org/article/10.3847/1538-3881/aacb21

[3] L. Lindegren et al., “Gaia Data Release 2 – The astrometric solution,” Astronomy & Astrophysics, vol. 616, 2018.
https://doi.org/10.1051/0004-6361/201832727

[4] E. Fantino and S. Casotto, “Study on Libration Points of the Sun and the Interstellar Medium for Interstellar Travel,” Università di Padova/ESA, 2004.

[5] R. Bjørk, “Exploring the Galaxy using space probes,” International Journal of Astrobiology, vol. 6, 2007.
https://doi.org/10.1017/S1473550407003709

[6] I. A. Crawford, “The Astronomical, Astrobiological and Planetary Science Case for Interstellar Spaceflight,” Journal of the British Interplanetary Society, vol. 62, 2009. https://arxiv.org/abs/1008.4893

[7] J. Lebert, “Optimal Strategies for Exploring Near-by Stars,” Technische Universität München, 2021.
https://mediatum.ub.tum.de/1613180

[8] H. Bederina and M. Hifi, “A Hybrid Multi-Objective Evolutionary Algorithm for the Team Orienteering Problem,” 4th International Conference on Control, Decision and Information Technologies, Barcelona, 2017.
https://ieeexplore.ieee.org/document/8102710

[9] University of California – Berkeley, “New Map of Solar Neighborhood Reveals That Binary Stars Are All Around Us,” SciTech Daily, 22 February 2021.
https://scitechdaily.com/new-map-of-solar-neighborhood-reveals-that-binary-stars-are-all-around-us/


Reaching an Interstellar Interloper

The ongoing Interstellar Probe study at the Johns Hopkins University Applied Physics Laboratory reminds us of the great contribution of the Voyager spacecraft, but also of the need to develop their successors. Interstellar flight is a dazzling long-term goal, but present technologies develop incrementally, and missions to other stars are a multi-generational undertaking. But as we continue that essential effort with projects like Interstellar Probe, we can also make plans to explore objects from other stellar systems (ISOs) closer to home.

I refer of course to the appearance in the last three years of two such objects, 1I/’Oumuamua and 2I/Borisov, the ‘I’ in their names referencing the exciting fact that these are interstellar in nature, passing briefly through our system before moving on. Papers have begun to appear examining missions to one or the other of these objects, or planning how, with sufficiently early discovery, we could get a spacecraft to the next one. And keep in mind the ESA’s Comet Interceptor mission, which sets its sights on a long-period comet but could be used for an ISO.

Are missions to interstellar objects possible with near-term technology? A new paper from lead author Andreas Hein (Initiative for Interstellar Studies) and an international team of researchers answers the question in the affirmative. The paper characterizes such missions by the resources required to perform them, which in turn relates to the ISO’s trajectory. Unbound ISOs — those that pass through our system only once — can be contrasted with bound objects that have remained in the Solar System after their entry. If the ISO is unbound, a mission launched before perihelion would have the best chance of producing data and perhaps sample return.

Image: An artist’s impression of 2I/Borisov, an interstellar comet. Credit: NRAO/AUI/NSF, S. Dagnello.

In previous papers, Hein and team have considered chemical propulsion complemented by a reverse gravity assist at Jupiter and a Solar Oberth Maneuver to reach 1I/’Oumuamua, although they have also looked at thermal nuclear propulsion with gravity assist at Jupiter. Uncertainties in the object’s orbit are challenging but, the authors believe, surmountable through the use of a telescope like that of New Horizons (LORRI) or, a highly speculative idea, a swarm of chipsats that could be launched ahead of the probe to refine navigational data. This approach goes well beyond existing technology, though, as the authors acknowledge by citing the work on Breakthrough Starshot’s laser architecture, which is a long way from realization.

I’m also concerned about that notion of a Solar Oberth Maneuver, given what we’ve learned recently in connection with the research on Interstellar Probe. The kind of spacecraft described here to intercept 1I/’Oumuamua would carry the needed upper stage kick engine, along with the heat-shield technology Interstellar Probe has been investigating. All this adds to mass. The authors believe a Falcon Heavy (or, less likely, a future SLS) would be up to the challenge, but I think the proposed Solar Oberth Maneuver at 6 solar radii is a problematic goal in the near term.

The authors echo these sentiments in terms of the perihelion burn itself, as well as the navigation issues that will ensue in reaching the ISO. A propulsive burn at perihelion for a probe trying to intercept an interstellar object is a long way from proven technology, particularly when we’re hoping to deliver a substantial instrument package to the ISO for science return. The authors call for developing nuclear thermal propulsion in order to make a wider range of ISOs reachable without relying on the Oberth maneuver.

The paper usefully offers a taxonomy of interstellar objects, matched to their associated science and conceivable mission types. Objects with low inclinations, low hyperbolic escape velocity (v∞), and those discovered well before perihelion are the most reachable targets. Of course, this survey of options for reaching an ISO isn’t intended to be specific to a given object but applicable to many, suggesting what is possible with present and near-term technologies. In the discussion of a mission to 1I/’Oumuamua, the authors also note the wide range of details that need to be considered:

Our brief analysis (and its attendant caveats) should not be regarded as exhaustive. Other issues that we have not delineated include the difficulties posed by long CCD exposure times (11 hours in our scenario) such as the cumulative impact of cosmic rays and the necessity of accounting for parallax motion of the object during this period. Obstacles with respect to measuring the position of the object, calculating offsets, and relaying it to the spacecraft may also arise. Hence, we acknowledge that there are significant (but not necessarily insurmountable) and outstanding challenges that are not tackled herein, as they fall outside the scope of this particular paper.

In any event, 1I/’Oumuamua may be quite a tricky object to catch at this juncture even for this kind of fast flyby. Objects detected earlier in their entry into our system should present a much more workable challenge, and with the Vera Rubin Observatory coming into play, we are probably going to be finding many more of them, some well before perihelion. Hence the need to know what is possible for future operations at ISOs, ensuring we have a plan and resources available to fly when we next have the opportunity.

A rendezvous mission may one day be in the cards, with the authors relying on electric or magnetic sail propulsion schemes to allow the spacecraft to slow down and study the target at close hand. But it may be more reasonable to consider rendezvous with captured interstellar objects in bound elliptical orbits. Such missions are examined here in relation to two potential ISOs (not yet confirmed as such): (514107) Ka’epaoka’awela, a Jupiter co-orbital in retrograde orbit, and the Centaur 2008 KV42. The paper examines rendezvous strategies and provides trajectories for multiple launch years. 2008 KV42, for example, should be reachable for rendezvous with launch in 2029 and a flight duration of 15 years.

Finally, nuclear thermal technologies should allow sample return from some interstellar objects using a pre-positioned interceptor at the Sun/Earth L2 point. The paper considers an interceptor mission to comet C/2020 N1, serving as a surrogate for particular types of ISOs. The spacecraft, using nuclear thermal or solar electric propulsion, would deploy an impactor on approach to the object and travel through the plume, perhaps using swarm subprobes to return samples to the main craft depending on whether or not the plume is thought likely to be hazardous.

Even without nuclear thermal capability, though, missions can be flown to some types of interstellar objects with technologies that are currently in use. From the paper:

Our results indicate that most mission types elucidated herein, except for sample return, could be realized with existing technologies or modified versions of existing technologies, such as chemical propulsion and a Parker Solar Probe-type heat shield (Hein et al., 2019; Hibberd et al., 2020). Collisions with dust, gas, and cosmic rays and spacecraft charging in the interplanetary or interstellar medium will engender deflection of the spacecraft trajectory and cause material damage to it, but both effects are likely minimal even at high speeds (Hoang et al., 2017; Hoang & Loeb, 2017; Lingam & Loeb, 2020, 2021), and the former can be corrected by onboard thrusters.

So we learn that missions to interstellar objects are feasible, with some fast flyby scenarios capable of being accomplished with today’s technologies. Rendezvous and sample return missions await the maturation of solar electric and nuclear thermal propulsion. Here the notion of ‘near-term’ becomes speculative. When will we have nuclear thermal engines available for this kind of mission? I am speaking in a practical sense — we know a great deal about nuclear thermal methods, but when will we deploy workable engines at a high enough Technology Readiness Level to use?

There is much we could learn from an ISO intercept, whether a flyby, a rendezvous or a sample return. Given that we are a long way from being able to sample interstellar objects in other stellar systems (I doubt seriously we’ll have this capability in a century’s time), ISOs represent our best bet to discover the structure and composition of extrasolar objects. This and the capability of doing interplanetary dust and plasma science along the way should be enough to keep such missions under active study as our new generation telescopes come online.

The paper is Hein et al., “Interstellar Now! Missions to Explore Nearby Interstellar Objects,” in press at Advances in Space Research (abstract / preprint).


Assessing the Oberth Maneuver for Interstellar Probe

I notice that the question of ‘when to launch’ has surfaced in comments to my first piece on Interstellar Probe, the APL study to design a spacecraft that would be, in effect, the successor to Voyager. It’s a natural question, because if a craft takes 50 years to reach 1000 AU, there will likely be faster spacecraft designed later that will pass it in flight. I’m going to come down on the side of launching as soon as possible rather than anticipating future developments.

Two reasons: The research effort involved in stretching what we can do today to reach as high a velocity as possible inevitably moves the ball forward. We learn as we go, and ideas arise in the effort that can hasten the day of faster spacecraft. The second reason is that a vehicle like Interstellar Probe is hardly passive. It does science all along its route. By the time it reaches 1000 AU, it has returned massive amounts of information about the interstellar medium, our Sun’s passage through it, and the heliosphere that protects the Solar System.

All of that is germane to follow-on missions, and we have useful science data all the way. So I’m much in favor of pushing current technology into stretch missions even as we examine how to go faster still with the next iteration, the one that would succeed Interstellar Probe.

Getting Up to Speed

How fast can we travel now, as compared to 1977, when we launched Voyagers 1 and 2? We know we can reach 17 kilometers per second with 1977 technology because that is what Voyager 1 is doing right now. Interstellar Probe advocates would like to see something in the range of 95 kilometers per second as a way of making the 1000 AU journey in 50 years. That’s still, I suppose, within the lifetime of a researcher, but not by much, and it’s heartening to me that we’re extending the boundaries into a frank admission of the fact that some missions may be launched by one generation, maintained by another, and brought home by a third.

I always assumed we had an ace up our sleeves when it came to ramping up Voyager speed levels. Moving close to the Sun and making a propulsive burn at just the right moment seemed a sure way to exploit that deep gravity well and fling a probe outward at high velocity. The idea first appeared in Hermann Oberth’s Wege zur Raumschiffahrt (Paths to Spaceflight), which was published in 1929 in Germany. At the time, Oberth was also working as a consultant on the Fritz Lang film Frau im Mond (The Woman in the Moon), which would popularize the idea of rocketry and space travel. In fact, Oberth would dedicate Wege zur Raumschiffahrt to Lang and actress and screenwriter Thea von Harbou.

The authors of the Interstellar Probe 2019 report note in their extremely useful appendices that Oberth’s thinking on the maneuver that would be named after him anticipated in many ways the idea of a gravity assist, which was developed in the 1960s by Michael Minovitch. Oberth’s thought experiment involved an astronaut on an asteroid 900 AU from the Sun. The astronaut, apparently quite long-lived, wants to go to a star some 10^15 kilometers away (roughly the distance of Regulus). His asteroid has an orbital speed of 1 km/s and an orbital period of 27,000 years.

I won’t go into this in huge detail because it’s laid out so well in the report’s appendix (available here). But Oberth’s setup is that the target star is in the orbital plane of the asteroid, and he assumes the astronaut has a rocket that can produce a velocity change of 6 km/s. Sun, asteroid and target star are in a line in that order. He asks: What is the fastest way to reach the star?

Using the rocket alone reaches it in 5,555,000 years. Waiting for 20,000 years to add the asteroid’s orbital velocity to the velocity of the ship reduces that to 4,760,000 years. But Oberth realizes that the best answer is to use the rocket to move opposite to the asteroid’s motion, falling in toward the Sun to reach 500 km/s at perihelion, then using the remaining rocket fuel to boost the speed a bit further. He ultimately gets 70.9 km/s moving out of the Solar System, and his transit time is now reduced to 470,000 years. Thus the ‘Oberth maneuver’ enters the literature.
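A rough sanity check of Oberth's arithmetic follows from energy conservation: a burn at perihelion yields a hyperbolic excess speed of v∞ = sqrt((v_peri + Δv)² − v_esc²). The sketch below uses my own variable names; the ~500 km/s perihelion speed and the ~5 km/s perihelion burn are approximations implied by the text, and the results land close to (not exactly on) Oberth's quoted figures:

```python
import math

DIST_KM = 1e15                 # distance to the target star (from the text)
YEAR_S = 3.156e7               # seconds per year

def transit_years(speed_km_s):
    """Transit time in years at a constant cruise speed."""
    return DIST_KM / speed_km_s / YEAR_S

# Rocket alone, spending all 6 km/s of delta-v directly:
print(f"{transit_years(6.0):.2e} years")   # ~5.3 million years, near the quoted figure

# Oberth case: fall to perihelion at ~500 km/s (roughly the local escape
# speed, since the craft fell from ~900 AU), then burn the remaining ~5 km/s.
v_peri, dv = 500.0, 5.0
v_inf = math.sqrt((v_peri + dv) ** 2 - v_peri ** 2)
print(f"{v_inf:.1f} km/s")                 # ~70.9 km/s, matching the text
print(f"{transit_years(v_inf):.2e} years") # ~4.5e5 years, near the quoted 470,000
```

The key point the numbers make is that a mere 5 km/s, spent deep in the Sun's gravity well, buys about 70 km/s of hyperbolic excess speed, which is the essence of the maneuver.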

A spacecraft launched from Earth has to lose the heliocentric angular momentum of Earth’s orbit to fall toward the Sun in order to make the Oberth maneuver possible, the most efficient method being a direct trajectory from Earth to Jupiter, a retrograde gravity assist at Jupiter, and a long fall back to perihelion, at which point a kick-stage provides the further propulsive burn. All of this, including of course the thermal issues raised by putting the payload into such proximity to the Sun, has to be weighed against a straight gravity assist at Jupiter, with no close solar pass, when contemplating how best to accelerate the Interstellar Probe for the journey.

Image: This is an image of Parker Solar Probe as envisioned by Goddard Media Studios at NASA’s Goddard Space Flight Center in Maryland. It’s the closest I could come to what a close solar pass would look like, though it lacks the propulsive element of the Oberth maneuver. Credit: NASA GSFC.

Oberth in Today’s Terms

When I contacted Interstellar Probe principal investigator Ralph McNutt (JHU/APL) about these issues, he pointed out that the Mission Concept Report for the entire project would be made available on the probe website in the first week of December. Putting what it describes as the Solar Oberth Maneuver (SOM) through the severe filter of engineering capabilities with today’s technologies is a major priority of that report, and the results McNutt conveyed make it clear that my enthusiasm for the concept has been unjustified.

Unjustified, that is, in terms of a spacecraft being designed, as this one must be, around current technologies. Remember that we’re talking about a mission with a specific timeframe, one with a launch in the early 2030s, meaning that the materials and techniques to build and fly it have to be within range today. The Oberth maneuver at the Sun may have possibilities for us down the road. But today’s engineering constraints make the issues stark. As McNutt told me in an email:

…after a very careful look and relying on the same people, including the mission system engineer, who worked the thermal protection system (TSA) for Parker Solar Probe (PSP) we have concluded (1) the SOM offers no advantage over prograde gravity assists in rapid escape from the solar system for a “technology horizon” in the 2030’s and (2) there is no obvious “path” to changing this conclusion for the foreseeable future.

Image: Ralph L. McNutt Jr., chief scientist for Space Science at the Johns Hopkins University Applied Physics Laboratory and principal investigator for Interstellar Probe. Credit: Johns Hopkins University.

In other words, going to Jupiter straightaway, with no Oberth maneuver, is just as workable, and as we’ll see, avoids a series of thorny problems. One issue is the need for thermal protection; another is the demand of launching a sufficiently large payload, one incorporating not only the propulsive stage for operations at perihelion preceding the long cruise, but also the science instrument package and the high gain antenna needed for data downlink at the distances the probe is envisioned to reach. We have to work within the constraints of present-day launch systems as well as existing engines for the kick.

On thermal issues, the Interstellar Probe team worked with Advanced Ceramic Fibers, an Idaho-based company, on ultra-high temperature material studies, the question being how one could take existing thermal protection as found on the current Parker Solar Probe mission and extend it into the range needed for the Solar Oberth Maneuver. But shield mass, said McNutt, is only one consideration. A ‘ballast’ mass is also required to keep the center of gravity moving along the engine centerline as the propellant burns down during the maneuver.

These issues of mass are critical. Let me quote McNutt again:

The real problem is the mass of the thermal shield assembly – multiple shields plus the supporting structure – to shield just the kick stage itself, even with no Interstellar Probe spacecraft. We’ve adopted solid rocket motors (SRMs) with specific impulses approaching 300s with masses of up to ~4,000 kg (Orion 50XL). In that case, we have an engineering solution that closes on paper, has all of the design margins included, would require specialized design work (> ~10’s of millions and multiple years of dedicated effort) and ends up with about the same performance (flight distance after 50 years) as a prograde Jupiter gravity assist, but with significantly more inherent risk, both in development and in the actual execution of the burn at the Sun itself. Bottom line: it may be doable with an investment of significantly more time and money, but it offers no advantage, and, therefore, we have concluded it would be a poor trade.

Within the upcoming report will be the 181 staging scenarios the team examined by way of reaching its conclusions about the Solar Oberth Maneuver. It becomes clear from the synopsis that McNutt gave me that existing technologies are simply not up to speed to realize the potential of the SOM, and even extending the technologies forward to nuclear rocket engines and greatly enhancing the performance of today’s launch vehicles would not change this fact. To make the Oberth maneuver at the Sun into a viable option, it appears, would take decades of work and demand billions of dollars in new investment. Best to shelve Oberth’s concept for this mission, though I suspect that future technologies will keep the concept in play.

Where to next with Interstellar Probe? If we rule out Oberth, then the two scenarios involving a Jupiter gravity assist remain, the team having considered other options including solar sails and finding them not ready within the needed timeframe. The first is a ‘passive’ flyby, in which every rocket stage is fired in an optimized launch sequence. The second is a powered gravity assist, in which a final kick-stage is reserved for use at Jupiter. We will see what the upcoming report has to say about these options, balancing among outbound speed, complexity, and mass.


Interstellar Probe: Pushing Beyond Voyager

Our doughty Voyager 1 and 2, their operations enabled by radioisotope power systems that convert heat produced by the decay of plutonium-238 into electricity, have been pushing outward through and beyond the Solar System since 1977. Designed for a four-and-a-half-year mission, they have become, more or less by accident and good fortune, our first active probes of nearby interstellar space. But not for long. At some point before the end of this decade, both craft will lack the power to keep any of their scientific instruments functioning, and one great chapter in exploration will close.

What will the successor to the Voyagers look like? The Johns Hopkins University Applied Physics Laboratory (JHU/APL) has been working on a probe of the local interstellar medium. We’re talking about a robotic venture that would be humanity’s first dedicated mission to push into regions that future, longer-range interstellar craft will have to cross as they move far beyond the Sun. If it flies, Interstellar Probe would be our first mission designed from the start to be an interstellar craft.

Pontus Brandt is an Interstellar Probe Concept Study project scientist, in addition to being principal investigator for two instruments aboard the European Space Agency’s Jupiter Icy Moon Explorer (JUICE) Mission. Brandt puts the ongoing work in context in a recent email:

Interstellar Probe would represent Humanity’s first deliberate step into interstellar space and go farther and faster than any spacecraft before. By using conventional propulsion, Interstellar Probe would travel through the boundaries of the protective heliosphere into the unknown interstellar cloud for the first time. Within its lifetime, it would push far beyond the Voyager mission to explore the heliospheric boundary and interstellar space so that we can ultimately understand where our home came from, and where we are going.

Image: A possible operation scenario, divided into phases and indicating science goals along the way. Credit: JHU/APL, from the Interstellar Probe 2019 Report.

The nature of the interstellar cloud Brandt refers to is significant. But before examining it, a bit of background. APL’s role in Interstellar Probe has roots in principal investigator Ralph McNutt’s tireless advocacy of what was once called Innovative Interstellar Explorer, a mission concept originally funded by NASA in 2003 and often discussed in these pages. The current study began in 2018 and will continue through early 2022, examining the technologies that would make Interstellar Probe possible, with an eye on the coming Decadal Survey within NASA’s Heliophysics Science Division. Bear in mind as well that the space community has been discussing what we can call ‘interstellar precursor’ missions all the way back to the 1960s — an interesting story in itself! — and the Interstellar Probe concept appeared in the 2003 and 2013 Heliophysics Decadal Surveys.

About those Decadals: Every ten years, Decadal Surveys appear for the four NASA science mission divisions: Planetary Science, Astrophysics, Heliophysics and Earth Science, the idea being to provide guidance for the agency’s science program going forward. So the immediate context of the current effort at APL is that it is being conducted to provide technical input that can feed into the next Heliophysics Decadal Survey, which will cover the years 2023 to 2032. But the implications for science across all four divisions are part of APL’s remit, affecting specific targets and payloads.

What can realistically be done within the 2023-2032 time frame? And what kind of science could a mission like this, launching perhaps in 2030, hope to accomplish? Workshops began in June of 2018 and continue to refine science goals and support engineering trade studies for what the team calls “a ‘pragmatic’ interstellar probe mission.” The most recent of these, the fourth, just concluded on October 1. You can see its agenda here.

A launch in the early 2030s demands not futuristic technologies now in their infancy but proven methods that can be pushed hard in new directions. This is, you might say, ‘Voyager Plus’ rather than the starship Enterprise, but you build interstellar capability incrementally absent unexpected breakthroughs. That calls for a certain brute force determination to keep pushing boundaries, something Ralph McNutt and team have been doing at APL, to their great credit, for many years now. A spacecraft like this would be a flagship mission (now known as a Large Strategic Science Mission) — these are the most ambitious missions the agency will fly, a class that has included the Voyagers themselves, Cassini, Hubble and the James Webb Space Telescope.

A variety of methods for reaching beyond the heliosphere in the shortest possible time have been under consideration, including an “Oberth maneuver” (named after scientist Hermann Oberth, who documented it in 1929), where a propulsive burn is performed during a close solar pass that has itself been enabled by a retrograde Jupiter gravity assist. Other Jupiter flyby options, with or without a propulsive burn via a possible upper stage, remain on the table. The plan is to drive the probe out of the Solar System at speeds sufficient to reach the heliopause in 15 years. The participating scientists talk in terms of a flyout speed of 20 AU/year, which translates to 95 kilometers per second. Voyager 1, by comparison, is currently moving at roughly 17.1 kilometers per second.
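The speeds quoted above are easy to verify: one AU per year is about 4.74 km/s, so 20 AU/year comes out just under 95 km/s, and 1000 AU in 50 years is exactly that flyout rate. A quick sketch (the constants are standard values; the variable and function names are my own):

```python
AU_KM = 1.495978707e8      # kilometers per astronomical unit
YEAR_S = 3.15576e7         # seconds per Julian year

def au_per_year_to_km_s(au_per_year):
    """Convert a flyout speed in AU/year to km/s."""
    return au_per_year * AU_KM / YEAR_S

print(round(au_per_year_to_km_s(20), 1))    # 94.8, i.e. the ~95 km/s target
# 1000 AU in 50 years is exactly the 20 AU/year flyout rate:
print(1000 / 50)                            # 20.0
# Voyager 1's ~17.1 km/s corresponds to roughly 3.6 AU/year:
print(round(17.1 * YEAR_S / AU_KM, 1))      # 3.6
```

Put another way, Interstellar Probe's target pace is more than five times Voyager 1's, which is what turns a centuries-long drift to 1000 AU into a 50-year mission.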

The Voyagers hold our current distance records, with Voyager 1 now at 154 AU and Voyager 2 at 128 AU. Interstellar Probe would still be returning science at 1000 AU, meaning it would be capable of looking back and seeing not just the Earth in the context of the Solar System, as in Voyager’s ‘pale blue dot’ image, but also taking measurements of the heliosphere from well outside it, helping us understand both the interstellar medium and how the heliosphere responds as our stellar system moves through it.

There is much to be learned about the protective magnetic bubble called the heliosphere in which the entire Solar System is embedded. We have to understand that it is anything but static, as Pontus Brandt explains:

During its evolutionary journey around the galaxy, [the Sun] has plowed through widely different environments, witnessed supernova explosions on its path, that have all shaped the system that we live in today. The vast differences in interstellar densities, speeds, charge fractions have been responsible for an extreme range of sizes and shapes of the global heliosphere throughout its history – from many times bigger than today, to a tiny heliosphere below even the orbit of Earth. This, in turn, has had dramatic consequences for the penetration of the primordial soup of interstellar material that have affected several crucial aspects of elemental and isotopic abundances, atmospheric evolution, conditions for habitability and perhaps even biological evolution. Only some 60,000 years ago, the Sun entered the vast Local Interstellar Cloud (some 30 light years across), and in just a few thousand years the solar system will enter a completely different interstellar cloud that will continue to shape its evolution.

Image: The Sun is on the way to exiting the Local Interstellar Cloud and entering another unexplored interstellar region. Credit: NASA/Goddard/Adler/U. Chicago/Wesleyan.

An interstellar precursor mission can examine energetic neutral atoms (ENAs) to provide data on the overall shape of the heliosphere. Major issues include how plasma from the Sun’s solar wind interacts with interstellar dust to form and continue to shape the heliosphere.

But a mission like this also shapes our views of time, as the Voyagers have done as we have watched their progress through the Solar System, the heliosphere and beyond. Mission scientists turned the 4.5 year mission into a surprising 45 year one solely on the strength of their design and the quality of their components, not to mention the unflagging efforts of the team that operates them. A mission designed from the start for 50 years, as Interstellar Probe would be, will likely have a lifetime far beyond that. Its components are meant to be functional when our grandchildren are in their dotage. Most of its controllers in 2080 have yet to be born.

So this is a multi-generational challenge, a reach beyond individual lifetimes. Let me quote from the Interstellar Probe Study 2019 Report, which is now available online.

It is important to note that the study does not purport to center on “the one and only” interstellar probe but rather on this mission as a first step to more advanced missions and capabilities… In addition to promising historically groundbreaking discoveries, the Interstellar Probe necessitates a transformation in the programmatics needed to accommodate lifetime, reliability, and funding requirements for this new type of multigenerational, multi-decade operational mission. Paving the way for longer journeys utilizing future propulsion technologies, such as those not invoked here, the Interstellar Probe is the first explicit step we take today on the much longer path to the stars.

Principal investigator Ralph McNutt tells me that the Interstellar Probe team is finishing up a Mission Concept Report for NASA on the progress thus far, incorporating results of the recent workshop. This report should be available on the Interstellar Probe website in early December, with a number of items clarifying aspects of the currently available 2019 report. We need to dig into some of the issues that will appear there, for the concept is changing as new studies emerge. In particular, let’s look next time at the ‘Oberth maneuver’ idea, what it means, and whether it is in fact a practical option. I’m surprised at what’s emerging on this.


NEA Scout: Sail Mission to an Asteroid

Near-Earth Asteroid Scout (NEA Scout) is a CubeSat mission designed and developed at NASA’s Marshall Space Flight Center in Huntsville and the Jet Propulsion Laboratory in Pasadena. I’m always interested in miniaturization, allowing us to get more out of a given payload mass, but this CubeSat also demands attention because it is a solar sail, a technology whose development has been a constant theme on Centauri Dreams.

And while NASA has launched solar sails before (NanoSail-D was deployed in 2010), NEA Scout moves the ball forward by going beyond the demonstrator stage to perform scientific investigations of an asteroid. As Japan did with its IKAROS sail, the technology goes interplanetary. Les Johnson (MSFC) is principal technology investigator for the mission:

“NEA Scout will be America’s first interplanetary mission using solar sail propulsion. There have been several sail tests in Earth orbit, and we are now ready to show we can use this new type of spacecraft propulsion to go new places and perform important science. This type of propulsion is especially useful for small, lightweight spacecraft that cannot carry large amounts of conventional rocket propellant.”

Image: Engineers prepare NEA Scout for integration and shipping at NASA’s Marshall Space Flight Center in Huntsville, Alabama. Credit: NASA.

The spacecraft, one of several secondary payloads, has been moved inside the Space Launch System (SLS) rocket that will take it into space on the Artemis 1 mission, an uncrewed test flight. Artemis 1 will be the first time the SLS and Orion spacecraft have flown together (the previous launch was via a Delta IV Heavy). NEA Scout, which will deploy after Orion separates, has been packaged and attached to an adapter ring connecting the SLS rocket and Orion.

Once separated from the launch vehicle, NEA Scout will deploy a thin aluminized polymer sail measuring 85 square meters (910 square feet). In terms of sail deployment, we can think of the mission as part of a continuum leading to Solar Cruiser, which will feature a sail 16 times larger when it launches in 2025. Deployment will be via stainless steel alloy booms. Near the Moon, the spacecraft will perform imaging instrument calibration and use cold gas thrusters to adjust its trajectory toward a near-Earth asteroid. The solar sail will provide extended propulsion during the approximately two-year cruise to its destination. The final target asteroid has yet to be selected.

Image: NASA’s NEA Scout spacecraft in Gravity Off-load Fixture, System Test configuration at NASA’s Marshall Space Flight Center in Huntsville, AL. Credit: NASA.

The pace of innovation in miniaturization is heartening. I note this from a 2019 conference paper describing the final design and the challenges in perfecting the hardware (citation below):

The figurative explosion in CubeSat components for low earth orbital (LEO) missions proved that spacecraft components could be made small enough to accomplish missions with real and demanding science and engineering objectives. Unfortunately, these almost-off-the-shelf LEO components were not readily usable or extensible to the more demanding deep space environment. However, they served as an existence proof and allowed the NEA Scout spacecraft engineering team to innovate ways to reduce the size, mass, and cost of deep space spacecraft components and systems for use in a CubeSat form factor.

Image: Illustration of NEA Scout with the solar sail deployed as it flies by its asteroid destination. Credit: NASA.

At destination, NEA Scout is to perform a sail-enabled low-velocity flyby at less than 30 meters per second, with imaging down to less than 10 centimeters per pixel, which should enlarge our datasets on small asteroids, those measuring less than 100 meters across. Says principal science investigator Julie Castillo-Rogez (JPL):

“The images gathered by NEA Scout will provide critical information on the asteroid’s physical properties such as orbit, shape, volume, rotation, the dust and debris field surrounding it, plus its surface properties.”

The more we learn about small asteroids, the better, given our need to track trajectories and potentially change them if we ever find an object on course to a possible impact on Earth.

The presentation on NEA Scout is Lockett et al., “Lessons Learned from the Flight Unit Testing of the Near Earth Asteroid Scout Flight System,” available here.


When Will We See an Ice Giant Orbiter?

With NASA announcing that its Discovery program would fund both DAVINCI+ and VERITAS, two missions to Venus, it’s worth pausing to consider where we are in the realm of Solar System exploration. This is not to knock the Venus decisions; this is a target that has been neglected compared to, obviously, Mars, and we’ve kept it on the back burner while exploring Jupiter, Saturn and, with a fast flyby, Pluto/Charon. With budgets always tight, the axe must fall, and fall it has on the promising Trident.

Discovery-class involves small-scale missions that cost less than $500 million to develop. The Trident mission would have delivered imagery of Triton far sharper than the 1989 Voyager 2 views, useful indeed given the moon’s active surface, and we might have learned about the presence of a subsurface ocean. I should also mention that we lost IVO when the four candidate missions were pared down to two. IVO (Io Volcano Observer) had a strong case of its own, with close flybys of the tortured geology on the most volcanically active body in the Solar System.

So on to Venus, but let’s consider how the next few decades are shaping up. We have flown orbital missions to every planet in the Solar System other than the two ice giants, and it’s worth considering how many questions about those worlds were raised by the Voyager 2 flybys of Uranus and Neptune. Imagine if all we had of Saturn were flyby images, conceivably missing the active plume activity on Enceladus. What kind of startling data might an ice giant orbiter return that Voyager 2 didn’t see in its brief encounters?

The ice giants are a class of planet that, as the 2013 Planetary Science Decadal Survey stated, “are… one of the great remaining unknowns in the solar system, the only class of planet that has never been explored in detail.” A Uranus Orbiter and Probe was, in fact, the third-highest priority large-class mission named by the report, but it’s clear that we won’t have such a mission in time for the 2030-2034 launch window needed (more on this in a moment). Despite that, let’s switch the focus to Uranus because of a short report from the 2020 Lunar and Planetary Science Conference that Ashley Baldwin forwarded.

There are all kinds of reasons why Uranus makes an interesting target. In addition to its status as an ice giant, Uranus has both a ring system and unusual moons, with five major satellites that may be ocean worlds and in any case show dramatic surface features. The seventh planet also sports a major tilt in both rotational and magnetic axes, and a wind circulation structure that is little understood. In the absence of a major orbiter mission, the brief paper Ashley sent examines the issues involved in sending a much smaller New Frontiers class orbiter with faster turnaround.

Image: Uranus’ moon Miranda sports one of the strangest and most varied landscapes among extraterrestrial bodies, including three large features known as “coronae,” which are unique among known objects in our solar system. They are lightly cratered collections of ridges and valleys, separated from the more heavily cratered (and presumably older) terrain by sharp boundaries like mismatched patches on a moth-eaten coat. Miranda’s giant fault canyons are as much as 12 times as deep as the Grand Canyon. This image was acquired by Voyager 2 on Jan. 24, 1986, around its close approach to the Uranian moon. Credit: JPL.

Back to that launch window I mentioned earlier. The 2030-2034 timeframe for Uranus would allow the needed Jupiter gravity assist that would get the payload to target before it reaches equinox in 2049. This is an important point: We’d like to see the northern hemispheres of the satellites — Voyager 2 could not see these — and after equinox they will once again become dark. A New Frontiers-class orbiter might just make the deadline, but it’s hard to see such a mission being funded in time. NASA now says the next opportunity to propose for the fifth round of New Frontiers missions will be no later than the fall of 2024.

New Horizons is a New Frontiers-class mission, as are OSIRIS-REx and Juno, all the subject of competitive selection through the program, which focuses on medium-scale missions that cost less than $850 million to develop. Within that cost envelope, a Uranus orbiter is a tricky proposition. The total mission duration cited in the paper is fourteen years because of the flight design life of the needed Multi-Mission Radioisotope Thermoelectric Generators (MMRTGs). Thus the baseline is a two year mission in orbit at Uranus with mapping of the entire system, all completed by Uranus spring equinox in 2049, “enabling different illuminations of the satellites and seasonal orientation of the planet and magnetosphere than observed by Voyager 2.”

Other issues: How to achieve orbital insertion at Uranus? Aerocapture seems a reasonable possibility and would have to be considered. The paper cites a 60-kg payload including five instruments along with radio science capabilities, and goes on to note that power is the most limiting constraint on a mission like this under New Frontiers cost limits. Here’s what the paper says about the power question:

…addressing power within cost is the primary obstacle to the feasibility of a NF Uranus orbiter mission. Previous Ice Giant mission studies have resulted in architectures requiring >350 W-e end-of-life power, which requires six MMRTGs. Owing to the relative inefficiency and significant cost of MMRTGs, any design should attempt to reduce the needed end-of-life power; this will have significant impact on both the spacecraft and orbit design as well as the communication subsystem and payload.
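To see why power dominates the design, a back-of-the-envelope estimate helps. The MMRTG figures below are rough public ballpark numbers I am assuming for illustration (roughly 110 watts electrical per unit at beginning of life, declining a few percent per year as the fuel decays and the thermocouples age); they are not taken from the paper.

```python
# Rough end-of-life (EOL) power estimate for a stack of MMRTGs.
# Assumed (illustrative) figures:
#   bol_w        - electrical watts per unit at beginning of life (~110 W)
#   loss_per_year - fractional power decline per year (~4.8%)
#   years        - flight time before the science phase ends (14 years here)

def mmrtg_eol_power(n_units: int, bol_w: float = 110.0,
                    loss_per_year: float = 0.048, years: float = 14.0) -> float:
    """Total electrical watts delivered by n_units MMRTGs after `years` years."""
    return n_units * bol_w * (1.0 - loss_per_year) ** years

print(f"Six MMRTGs after 14 years: {mmrtg_eol_power(6):.0f} W")
```

Under these assumptions, six units deliver on the order of 330 watts after fourteen years, having lost roughly half their launch output. That illustrates why a design requiring more than 350 W-e at end of life pushes the unit count so high, and why trimming end-of-life power demand ripples through the spacecraft, orbit, communications, and payload design.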

And of course we have this:

Other design considerations that place significant constraints on the feasibility of a NF Uranus orbiter include deep-space communications (specifically the power required for downlink) and radiation shielding mass.

Not an easy task. But this is what we face as we look beyond the current selections in the Discovery program. We’d all like to see an orbiter around both ice giants, but given the realities of time and budget, the likelihood of getting one around either before mid-century is slim. Eventually it will get done, and new technologies will make for a more efficient design and a more comprehensive mission. Sadly, the timeframe for seeing all this happen stretches a long way ahead.

Many of us find this frustrating. But the overview is that the exploration of the Solar System and the push beyond is a civilizational project that dwarfs human lifetimes. The things we can accomplish today build the basis for projects our children will complete. We push the limits of what we have, drive technology forward, and refuse to stop trying.

The paper is Cohen et al., “New Frontiers-class Uranus Orbiter: A Case For Exploring The Feasibility of Achieving Multidisciplinary Science With a Mid-scale Mission,” 51st Lunar and Planetary Science Conference (2020). Full text.
