Centauri Dreams
Imagining and Planning Interstellar Exploration
The Shape of Space Telescopes to Come
Planning and implementing space missions is a long-term process, which is why we’re already talking about successors to the James Webb Space Telescope, itself a Hubble successor that has yet to be launched. Ashley Baldwin, who tracks telescope technologies deployed on the exoplanet hunt, here looks at the prospects not just for WFIRST (Wide-Field InfraRed Survey Telescope) but also for a recently proposed High Definition Space Telescope (HDST) that could be a major factor in studying exoplanet atmospheres in the 2030s. When he is not pursuing amateur astronomy at a very serious level, Dr. Baldwin serves as a consultant psychiatrist at the 5 Boroughs Partnership NHS Trust (Warrington, UK).
by Ashley Baldwin
“It was the best of times, it was the worst of times…” Dickens aside, the future of exoplanet imaging could be about two telescopes rather than two cities. Consider the James Webb Space Telescope (JWST) and the Wide-Field InfraRed Survey Telescope (WFIRST), which as we shall see have the power not just to see a long way but also to determine any big-telescope future. JWST, or rather its performance, will determine whether there is even to be a big-telescope future. What a big telescope should be, and what it should do, are matters of increasing debate as the next NASA ten-year roadmap, the Decadal Survey for 2020, approaches.
NASA will form Science Definition “focus” groups from the full range of its astrophysics community to determine the shape of this map. The Exoplanet Program Analysis Group (ExoPAG) is a dedicated group of exoplanetary specialists tasked with soliciting and coordinating community input to NASA’s exoplanet exploration programme through missions like Kepler, the Hubble Space Telescope (HST) and, more recently, Spitzer. The group has produced an outline of its vision in response to NASA’s solicitation of ideas, which is addressed here in conjunction with a detailed look at some of the central elements, by way of explaining some of the complex features that exoplanet science requires.
Various members of ExoPAG have been involved in the exoplanet arm of the JWST and most recently in the NASA dark energy mission, which with the adoption of the “free” NRO 2.4m mirror assembly and a coronagraph is increasingly becoming an ad hoc exoplanet mission too. This mission has also been renamed: Wide-Field InfraRed Survey Telescope (WFIRST), a name that will hopefully go down in history! More about that later.
The Decadal Survey and Beyond
As we build towards the turn of the decade, though, the next Decadal Survey looms. This is effectively a road map of NASA’s plans for the coming decade. Never has a decade been as important for exoplanet science, which must build on Kepler’s enormous legacy. To date, over 4000 “candidate” planets have been identified and are awaiting confirmation by other means, such as the radial velocity technique. Recently twelve new planets were identified in the habitable zones of their parent stars, all close to Earth in size. Why so many now? New sophisticated software has been developed to automate the screening of the vast number of signals returned by Kepler, increasing the number of potential targets but, more importantly, becoming more sensitive to the smaller signals of Earth-sized planets.
So what is next? In general these days NASA can afford one “Flagship” mission. This will be WFIRST for the 2020s. It is not a dedicated exoplanet mission, but as Kepler and ground-based surveys return increasingly exciting data, WFIRST continues to evolve. In terms of the Decadal Survey, the exoplanet fraternity has been asked to develop mission concepts within the still-available funds.
Three “Probe” class concepts — up to and above current Discovery mission cost caps but smaller than flagship-class missions — have been mooted, the first of which is a star-shade to accompany WFIRST. This, if you recall, is an external occulting device that blocks out starlight by sitting tens of thousands of kilometers from the telescope, between it and the parent star, allowing through the much dimmer accompanying planetary light and making characterisation possible. A recent Probe concept, Exo-S, addressed this very issue and proposed either a small 1.1m dedicated telescope with its own star-shade, or the addition of a star-shade to a pre-existing mission like WFIRST. At that time, the “add on” option wasn’t deemed possible, as it was proposed to put WFIRST into a geosynchronous orbit where a star-shade could not function.
The ExoPAG committee have recently produced a consensus statement of intent in response to a NASA request for guidance on an exoplanet roadmap, for incorporation into NASA’s overall submission to Decadal Survey 2020. As stated above, this group consists of a mixture of different professionals and amateurs (astrophysicists, geophysicists, astronomers, etc.) who advise on all things exoplanet, including strategy and results. They have been asked to create two science definition teams representing the two schools of exoplanet thinking to contribute to the survey.
One suggestion involved placing WFIRST at the star-shade friendly Earth/Sun Lagrange 2 point (about 1.5 million kilometers from Earth, where the combined gravity of the Sun and Earth allows a spacecraft to hold a relatively stable orbit). This, if it happens, represents a major policy change from the original geosynchronous orbit, and is very exciting because, unlike the current exoplanet coronagraph on the telescope, a star-shade of 34m diameter could image Earth-mass planets in the habitable zones of Sun-like stars. More on that below.
WFIRST at 2.4m will be limited in how much atmospheric characterisation it can perform given its relatively small aperture and time-limited observation period (it is not a dedicated exoplanet mission and still has to do dark energy science). The mission can be expected to locate several thousand planets via conventional transit photometry as well as micro-lensing, and possibly even a few new Earth-like planets by combining its results with the ESA Gaia mission to produce accurate astrometry (position in three dimensions, from which mass can be derived) within 30 light years or so. There has even been a recent suggestion that exoplanet science, or at least the coronagraph, actually drives the WFIRST mission. That would be a total turnaround, and a very welcome one.
The second Probe mission is a dedicated transmission spectroscopy telescope. It would be a telescope of around 1.5m with a spectroscope, fine guidance system and mechanical cooler, used to analyse spectroscopically the light of a distant star as it passes through the atmosphere of a transiting exoplanet. There is no image of the planet here, but the spectrum of its atmosphere tells us almost as much as seeing it. The bigger the telescope aperture, the better for probing smaller planets with thinner atmospheric envelopes. Planets circling M-dwarfs make the best targets, as the planet-to-star size ratio is highest there. The upcoming TESS mission is intended to provide such targets for the JWST, although even its 6.5m aperture will struggle to characterise the atmospheres of all but the largest planets or perhaps, if lucky, a small number of “super-terrestrial” planets around M-dwarfs. It will be further limited by general astrophysics demand on its time. A Probe telescope would pick up where JWST left off and although smaller, could compensate by being a dedicated instrument with greater imaging time.
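A rough calculation shows why that planet-to-star ratio matters so much. The sketch below (round illustrative values, not mission figures) compares the transit depth of an Earth-sized planet crossing a Sun-like star with the same planet crossing a 0.2 solar-radius M-dwarf:

```python
# Rough illustration of why M-dwarf planets suit transmission
# spectroscopy: transit depth scales as (R_planet / R_star)^2, so a
# smaller star yields a proportionally deeper signal. Round numbers,
# for illustration only.

R_SUN_KM = 696_000
R_EARTH_KM = 6_371

def transit_depth(r_planet_km, r_star_km):
    """Fraction of starlight blocked by a transiting planet."""
    return (r_planet_km / r_star_km) ** 2

depth_sun = transit_depth(R_EARTH_KM, R_SUN_KM)           # ~8.4e-05
depth_mdwarf = transit_depth(R_EARTH_KM, 0.2 * R_SUN_KM)  # ~2.1e-03

print(f"Earth transiting a Sun-like star: {depth_sun:.1e}")
print(f"Earth transiting a 0.2 R_sun M-dwarf: {depth_mdwarf:.1e}")
print(f"Gain from the smaller star: {depth_mdwarf / depth_sun:.0f}x")  # 25x
```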
The final Probe concept links to WFIRST and Gaia. It would involve a circa 1.5m class telescope as part of a mission that, like Gaia, observes multiple stars on multiple occasions to measure subtle variations in their position over time, determining the presence of orbiting planets by their effect on the star. Unlike radial velocity methods, it can accurately determine mass and orbital period down to Earth-sized planets around neighbouring stars. A similar concept called NEAT was proposed for ESA funding and rejected despite being technically robust; a good summary is available through a Google search.
These parameters are obviously useful in their own right but, more importantly, they provide targets for direct imaging telescopes like WFIRST rather than leaving the telescope to search star systems “blindly,” wasting limited time. At present the plan for WFIRST is to image pre-existing radial velocity (RV) planets to make the most of its search time, but nearby RV planets are largely limited to the larger range of gas giants, and although important to exoplanetary science, they are not the targets that are going to excite the public or, importantly, Congress.
All of these concepts occur against the backdrop of the ESA transit-photometry mission PLATO and the new generation of super telescopes, the ELTs. Though ground based and limited by atmospheric interference, these will synergize perfectly with space telescopes, as their huge light-gathering capacity will allow high-resolution spectroscopy of suitable exoplanet targets identified by their space-based peers, especially if also combined with high quality coronagraphs.
Image: A direct, to-scale comparison between the primary mirrors of the Hubble Space Telescope, James Webb Space Telescope, and the proposed High Definition Space Telescope (HDST). In this concept, the HDST primary is composed of 36 1.7-meter segments. Smaller segments could also be used. An 11 meter class aperture could be made from 54 1.3-meter segments. Credit: C. Godfrey (STScI).
Moving Beyond JWST
So the 2020s have the potential to be hugely exciting. But simultaneously we are fighting a holding battle to keep exoplanet science at the top of the agenda and make a successful case for a large telescope in the 2030s. It should be noted that there is still an element within NASA unsure as to what the reaction to the discovery of Earth-like planets would be!
A series of “Probe” class missions will run in parallel with or before any flagship mission. No specific plans have been made for a flagship mission, but an outline review of its necessary requirements has been commissioned by the Association of Universities for Research in Astronomy (AURA) and released under the descriptive title “High Definition Space Telescope” (HDST). A smaller review has produced an outline for a dedicated exoplanet flagship telescope called HabEX. These have been proposed for the end of the next decade but have met resistance as coming too soon after the expensive JWST. As WFIRST is in effect a flagship mission (although never publicly announced as such), and NASA generally can afford one such mission per decade, any big telescope will have to wait until the 2030s at the earliest. Decadal 2020 and the exoplanet consensus and science definition groups contributing to it will basically have to play a “holding” role, keeping up the exoplanet case throughout the decade and using evidence from available resources to build a case for a subsequent large HDST.
The issue then becomes the width of the launch vehicle’s upper stage “shroud,” or payload fairing. On the first version of the Space Launch System (SLS) this is only about 8.5m. Ideally the shroud should be at least a meter wider than the payload to allow “give” under launch stresses, which is especially important for a monolithic mirror, whose best orientation is “face on.” Given the large stresses of launch, lightweight “honeycomb” versions of traditional mirrors cannot be used, and solid versions weigh in at 56 tonnes even before the rest of the telescope. For the biggest possible monolithic telescopes, at least, we will have to wait for the 10m-plus shroud and heavier lifting ability of the SLS or another large launcher.
A star-shade on WFIRST via one of these Probe missions seems the best bet for a short term driver of change. Internal coronagraphs on 2m class telescopes allow too little light through for eta-Earth spectroscopic characterisation, but star-shades will do the job, provided their light enters the telescope optical train high enough up (relevant if, as with WFIRST, the plan is to have both internal and external occulters). A star-shade also offers a smaller inner working angle, reaching the habitable zones of later spectral type (K) stars. That’s if WFIRST ends up at L2, though L2 is talked about more and more.
The astrometry mission would be a dedicated version of the WFIRST/Gaia synergy, saving lots of eta-Earth searching time. It should be doable within Probe funding, as the ESA NEAT mission concept came in at under that. NEAT fell through due to its formation flying element, but after PROBA-3 (a European solar coronagraphy mission that will in effect be the first dedicated “precision” formation flying mission) that issue should be resolved.
A billion dollars probably gets a decent transmission spectroscopy mission with enough resolution to follow up some of the more promising TESS discoveries. Put these together and that’s a lot of exoplanet science, with a tantalising amount of habitability material too. WFIRST’s status seems to be increasing all the time, and at one recent exoplanet meeting led by Gary Blackwood it was even stated (and highlighted) publicly that the coronagraph should LEAD the mission science. That’s totally at odds with previous statements that emphasised the opposite.
Other Probe concepts consider high-energy radiation such as X-rays, and though less relevant to exoplanets, the idea acknowledges the fact that any future telescopes will need to look at all facets of the cosmos and not just exoplanets. Indeed, competition for time on telescopes will become even more intense. Given the very faint targets that exoplanets present it must be remembered that collecting adequate photons takes a lot of precious telescope time, especially for small, close-in habitable zone planetary targets.
The ExoPAG consensus represents a compromise between two schools of thought: those who wish to prioritise habitable target planets for maximum impact, and those favouring a methodical analysis of all exoplanets and planetary system architectures to build up a detailed picture of what is out there and where our own system fits in. Both bear on the factors likely to determine the prevalence of life, and both approaches are robust. I would recommend that readers consult this article and related material and reach their own conclusions.
Image: A simulated image of a solar system twin as seen with the proposed High Definition Space Telescope (HDST). The star and its planetary system are shown as they would be seen from a distance of 45 light years. The image here shows the expected data that HDST would produce in a 40-hour exposure in three filters (blue, green, and red). Three planets in this simulated twin solar system – Venus, Earth, and Jupiter – are readily detected. The Earth’s blue color is clearly detected. The color of Venus is distorted slightly because the planet is not seen in the reddest image. The image is based on a state-of-the-art design for a high-performance coronagraph (that blocks out starlight) that is compatible for use with a segmented aperture space telescope. Credit: L. Pueyo, M. N’Diaye (STScI).
Defining a High Definition Space Telescope
What of the next generation of “Super Space Telescope”? The options are all closely related and fall under the broad heading of High Definition Space Telescope (HDST). Such a telescope requires an aperture of between 10 and 12 metres minimum to have adequate light-gathering ability and resolution to carry out both exoplanet imaging and wider astrophysics, such as viewing extragalactic phenomena like quasars and related supermassive black holes. Regardless of specifics, these requirements demand absolute stability, at picometre (10⁻¹² metre) levels, for the telescope to function.
The telescope is diffraction limited at 500nm, right in the middle of the visible spectrum. The diffraction limit is effectively the wavelength at which a given circular mirror delivers its best angular resolution, the ability to discern detail. Angular resolution is governed by the ratio λ/D, where λ (lambda) is the observing wavelength expressed in metres and D is the telescope aperture in metres; for HDST, optimum functioning at its “diffraction limit” corresponds to a wavelength of 500nm (500 × 10⁻⁹ m) over a 12m aperture.
The larger the aperture of a telescope, the more detail it can see at any given wavelength; conversely, the longer the wavelength, the less detail it can see. That holds under the perfect conditions experienced in space, as opposed to the constantly moving atmosphere through which ground-based scopes will rarely approach the diffraction limit. So the HDST will not have the same degree of resolution at infrared wavelengths as at visible wavelengths, which is relevant because several potential biosignatures appear in spectra at longer wavelengths.
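As a worked example of the formula above (here including the conventional 1.22 prefactor for a circular aperture; purely illustrative):

```python
# Diffraction-limited angular resolution: theta ~ 1.22 * lambda / D
# (in radians). 1 radian = 206,265 arcseconds = 2.06265e8 milliarcseconds.
RAD_TO_MAS = 206_265 * 1_000

def resolution_mas(wavelength_m, aperture_m):
    return 1.22 * wavelength_m / aperture_m * RAD_TO_MAS

print(f"12 m at 500 nm: {resolution_mas(500e-9, 12):.1f} mas")    # ~10.5 mas
print(f"12 m at 5000 nm: {resolution_mas(5000e-9, 12):.1f} mas")  # ~105 mas, 10x coarser
```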
Approaching the diffraction limit is possible on the ground with the use of laser-produced guide stars and modern “deformable mirrors” or “adaptive optics,” which help compensate. This technique of deformable primary and especially secondary mirrors will be important in space as well, in order to achieve the incredible stability required for any telescope observing distant and dim exoplanets. This is especially true of coronagraphs, though much less so with star-shades, a difference that could be important in determining which starlight suppression technique to employ.
Additionally, the polishing “finish” of the mirror itself requires incredible precision. As a telescope becomes larger, the quality of its mirror needs to improve, given the minute wavelengths being worked with. The degree of polish or “finish” required is defined as a fraction of a wavelength, the wavefront error (WFE). For the HDST this is as low as 1/10 or even 1/20 of the wavelength in question. In its case, working generally in visible light around 500nm, the error must be below 50nm, a tiny margin that illustrates the ultra-high quality of telescope mirror required.
A large 12m HDST would require a WFE of about 1/20 lambda and possibly even lower, which works out to 25nm or less. The telescope would also require a huge giga-pixel array of sensors to capture any exoplanet detail: Electron Multiplying CCDs (EMCCDs), or their Mercury Cadmium Telluride-based near infrared equivalent, which would need passive cooling to prevent heat generated by the sensors themselves producing “dark current,” creating a false digital image and background “noise.”
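The arithmetic behind those wavefront error figures is simple enough to check directly (illustrative only):

```python
# Wavefront error expressed as a fraction of the working wavelength:
# WFE = lambda / N. At 500 nm, lambda/10 and lambda/20 give the 50 nm
# and 25 nm figures quoted above.
wavelength_nm = 500
for n in (10, 20):
    print(f"lambda/{n} at {wavelength_nm} nm -> {wavelength_nm / n:.0f} nm WFE")
```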
Such arrays already exist in space telescopes like ESA’s Gaia, and producing larger versions would be one of the easier design requirements. For an UltraViolet-Optical-InfraRed (UVOIR) telescope an operating temperature of about -100 C would suffice for the sensors, while the telescope itself could remain near room temperature.
All of the above is difficult but not impossible even today, and certainly possible in the near future: conventional materials like ultra-low expansion glass (ULE) can meet this requirement, as, more recently, can silicon carbide composites. The latter have the added advantage of a very low coefficient of thermal expansion. This last feature can be crucial depending on the telescope sensors’ operating temperature range, as excessive expansion in a “warm” telescope operating around 0-20 degrees C could disturb the telescope’s stability. It was for this reason that silicon carbide was chosen for the structural frame of the astrometry telescope Gaia, where stability was also key to accurately positioning one billion stars.
A “warm” operating temperature of around room temperature helps reduce telescope cost significantly, as illustrated by the $8 billion cost of the JWST, with an operating temperature of a few tens of Kelvin requiring an expensive and finite volume of liquid helium. Think how sad it was seeing the otherwise operational 3.5m ESA Herschel space telescope drifting off to oblivion when its supply of helium ran out.
The operating temperature of a telescope’s sensors determines its wavelength sensitivity range or “bandpass.” For wavelengths longer than about 5 micrometers (5000 nm), the sensors require cooling to prevent the heat of the telescope apparatus itself from swamping the incoming signal. Bandpass is also influenced, generally made much smaller, by passing through a coronagraph. The longer the wavelength, the greater the cooling required. Passive cooling involves attaching the sensors to a metal plate that radiates heat out to space. This is useful for a large telescope that requires precision stability, as it has no moving parts that can vibrate. Cooler temperatures can be reached by mechanical “cryocoolers,” which can get down as low as a few tens of Kelvin (seriously cold) but at the price of vibration.
This was one of the two main reasons why the JWST telescope was so expensive. It required liquid helium to achieve, without vibration, an operating temperature just a few Kelvin above absolute zero (the point at which a body has no thermal energy and therefore the lowest reachable temperature), in order to reach longer infrared wavelengths and look back further in time.
Remember, the further light has travelled since the Big Bang, the more it is stretched or “red-shifted,” and seeing back as far as possible was a big driver for JWST. The problem is that liquid helium only lasts so long before boiling off, with the large volumes required for ten years of service presenting a large mass and also requiring extensive, expensive testing, all of which contributed to the telescope’s cost and time overrun.
The other issue with large telescopes is whether they are made from one single mirror, like Hubble, or are segmented like the Keck telescopes and JWST. The largest monolithic mirrors currently manufacturable are off-axis (unobstructed) designs 8.4m in diameter, bigger than JWST and perfected in ground scopes like the LBT and GMT. Off-axis means that the focal plane of the telescope is offset from its aperture, so that the focusing secondary mirror, sensor array, spectroscope or coronagraph doesn’t obstruct the incoming light, an obstruction that can cost up to 20% of the aperture. A big attraction of this design is that the unobstructed 8.4m mirror thus collects roughly the equivalent of a 9.2m on-axis mirror, ironically near the minimum requirement for the ideal exoplanet telescope.
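A quick collecting-area check of that equivalence, assuming the on-axis design loses about 20% of its light to the secondary and its supports (my assumption for the comparison):

```python
import math

def mirror_area_m2(diameter_m):
    """Geometric collecting area of a circular primary."""
    return math.pi * (diameter_m / 2) ** 2

unobstructed = mirror_area_m2(8.4)       # ~55.4 m^2, nothing in the way
obstructed = mirror_area_m2(9.2) * 0.80  # ~53.2 m^2 after a 20% loss

print(f"8.4 m off-axis: {unobstructed:.1f} m^2")
print(f"9.2 m on-axis (20% obstructed): {obstructed:.1f} m^2")
```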
Given the construction of six such mirrors for the GMT, this mirror is now almost “mass produced,” and thus very reasonably priced. The off-axis design allows sensor arrays, spectroscopes and especially large coronagraphs to sit outside the telescope without being suspended within it on “spider” attachments, the supports that create the familiar “star” shaped diffraction patterns in images from conventional telescope designs. Despite being cheaper to manufacture and already tested extensively on the ground, such a mirror runs into the problem that there are currently no launchers big and powerful enough to lift what would in effect be a 50 tonne-plus telescope into orbit (a solid rather than lightweight honeycomb design being required to survive the high “g” and acoustic vibration forces at launch).
In general, a segmented telescope can be “folded” up inside a launcher fairing fairly efficiently, to a maximum aperture of roughly 2.5 times the fairing width. The Delta IV Heavy launcher has a fairing width of about 5.5m, so in theory a segmented telescope of up to 14m could be launched, provided it was below the maximum weight capacity of about 21 tonnes to geosynchronous transfer orbit. So it could be launched tomorrow! It was this novel segmentation that, along with cooling, added to the cost and construction time of the JWST, though hopefully once successfully launched it will have demonstrated its technological readiness and be cheaper next time round.
By the time an HDST variant is ready to launch, it is hoped that there will be launchers with the fairing widths and power to lift such telescopes, and they will be segmented because at 12m they exceed the monolithic limit. With a wavelength operating range from circa 90nm to 5000nm, they will require passive cooling only, and the segmentation design will already have been tested, both of which will help reduce cost, leaving it more simply dependent on size and launcher cost. This sort of bandpass, though not so large as that of a helium cooled telescope, is more than adequate for looking for key biosignatures of life such as ozone (O3), methane, water vapour and CO2 under suitable conditions and with a good “signal to noise ratio,” the degree to which the required signal stands out from background noise.
Separating Planets from their Stars
Ideally the signal to noise ratio should be better than ten. In terms of instrumentation, all exoplanet scientists will want a large telescope of the future to have starlight suppression systems to help directly image exoplanets as near to their parent stars as possible, with a contrast reduction of 10⁻¹⁰ in order to view Earth-sized planets in the liquid water “habitable zone.” The more Earth-like planets and biosignatures the better. Some biosignature-like features in a spectrum can be produced abiotically, so a larger sample of such signatures strengthens the case for a biological origin over a coincidental non-biological one.
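Where does the 10⁻¹⁰ requirement come from? A standard back-of-the-envelope estimate of reflected-light contrast, roughly albedo/4 times (planet radius / orbital distance) squared, lands right at that level for an Earth twin (numbers illustrative):

```python
# Back-of-the-envelope reflected-light contrast for an Earth twin:
# contrast ~ (albedo / 4) * (R_planet / a)^2, with radius and orbital
# distance in the same units. Illustrative values only.
AU_KM = 1.496e8
R_EARTH_KM = 6_371

def reflected_contrast(albedo, r_planet_km, a_au):
    return (albedo / 4) * (r_planet_km / (a_au * AU_KM)) ** 2

print(f"Earth twin at 1 AU: {reflected_contrast(0.3, R_EARTH_KM, 1.0):.1e}")
# ~1.4e-10, hence the 1e-10 contrast requirement
```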
As has been previously discussed, there are two ways of doing this, with internal and external occulting devices. Internal coronagraphs are a series of masks and mirrors that help “shave off” the offending starlight, leaving only the light of the orbiting planets. The race is on as to how close to the star this can be done. NASA’s WFIRST will tantalisingly achieve contrast reductions between 10⁻⁹ and 10⁻¹⁰, which shows how far this technology has come since the mission was conceived three years ago, when such levels were pure fantasy.
How close to the parent star a planet can be imaged, the inner working angle (IWA), is measured in milliarcseconds (mas); for WFIRST this is slightly more than 100, corresponding to a separation between those of Earth and Mars in the solar system. A future HDST coronagraph would hope to get as low as 10 mas, thus allowing habitable zone planets around smaller, cooler (and more common) stars. That said, coronagraphs are an order of magnitude more difficult to design for segmented scopes than for monolithic designs, and little research has yet gone into this area. An external occulter or star-shade achieves the same goals as a coronagraph but does so by sitting way out in front of a telescope, between it and the target star, casting a shadow that excludes starlight. The recent Probe class concept explored the use of a 34m shade with WFIRST, stationed up to 35,000 km from the telescope. The throughput of light is 100%, versus 20-30% maximum for most coronagraph designs, in an application where photons are at a premium: perhaps just one photon per second or less might hit a sensor array from an exoplanet.
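Those IWA figures translate directly into physical separations: by the definition of the parsec, an angle in arcseconds multiplied by the distance in parsecs gives the projected separation in AU. A minimal sketch:

```python
# Projected separation probed at a given inner working angle:
# separation [AU] = angle [arcsec] * distance [pc].
def separation_au(iwa_mas, distance_pc):
    return (iwa_mas / 1000.0) * distance_pc

print(f"WFIRST-like 100 mas at 10 pc: {separation_au(100, 10):.1f} AU")  # 1.0 AU
print(f"HDST goal of 10 mas at 10 pc: {separation_au(10, 10):.2f} AU")   # 0.1 AU
```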
A quick word on coronagraph type might be useful. Most coronagraphs consist of a “mask” that sits at the telescope’s focal plane and blocks out the central parent starlight whilst allowing the fainter peripheral exoplanet light to pass and be imaged. Some starlight will diffract around the mask (especially at longer wavelengths like infrared) but can be removed by shaping the entry pupil or by subsequent apodization (an optical filtering technique), a process utilising a series of mirrors to “shave” off additional starlight till just the planet light is left.
For WFIRST the coronagraph is a combination of a “Lyot” mask and shaped pupil. This is efficient at blocking starlight to within 100 mas of the star, but at the cost of losing 70-80% of the planet light, as previously stipulated. Such is the state of the technology ahead of proposals for the HDST. The reserve design utilises apodization, which has the advantage of removing starlight efficiently without losing planet light; indeed, as much as 95% gets through. The design has not yet been tested to the same degree as the WFIRST primary coronagraph, though, as the necessary additional mirrors are very hard to manufacture. Its high “throughput” of light is very appealing where light is so precious, and thus the design is likely to see action at a later date. A coronagraph throughput of 95% on an off-axis 8.4m telescope, compared to 20-30% for an alternative on even a 12m, would allow more light to be analysed.
The advantage of a star-shade is that the even more stringent stability requirements of a coronagraph are very much relaxed, and the amount of useful light reaching the focal plane of the telescope is near 100%. No place for waste. Star-shades offer deeper spectroscopic analysis than coronagraphs, too. The disadvantage is that a star-shade needs two separate spacecraft engaged in precision “formation flying” to keep its shadow in the right place, and the shade must move into a new position every time a new target is selected, taking days or weeks to do so. It also carries a finite propellant supply, limiting its lifespan to a maximum of about 5 years and perhaps thirty or so premium-target exoplanets. Thus it may be that preliminary exoplanet discovery and related target mapping is done rapidly via a coronagraph before atmospheric characterisation is done later by a star-shade, with its greater throughput of light and greater spectroscopic range.
The good news is that the recent NASA ExoPAG consensus recommends an additional Probe class ($1 billion) star-shade mission for WFIRST as well as a coronagraph. This would need the telescope to be at the stable Sun/Earth Lagrange point, but would make the mission in effect a technological demonstration of both types of starlight suppression, saving development costs for any future HDST while imaging up to 30 habitable zone Earth-like planets and locating many more within ten parsecs in combination with the Gaia astrometry results.
The drawback is that WFIRST has a monolithic mirror, and coronagraph development to date has focused on this mode rather than on the segmented mirrors of larger telescopes. Star-shades are less affected by mirror type or quality, but a 12m telescope — compared to WFIRST’s 2.4m — would only achieve maximum results with a huge 80m shade. Building and launching a 34m shade is no mean feat, but building and launching an enormous 80-100m version might even require fabrication in orbit. It would also need to be 160,000-200,000 km from its telescope, making formation flying no easy achievement, especially as full-scale star-shade technology can be tested only in computer simulations or at reduced scale in practice.
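The scaling between shade size and separation follows from simple geometry: the shade must subtend at least the desired inner working angle as seen from the telescope, so the IWA is roughly the shade radius divided by the separation. A sketch using the figures above:

```python
# Geometric inner working angle of a star-shade: roughly
# (shade radius) / (shade-telescope separation), in radians.
RAD_TO_MAS = 206_265 * 1_000

def shade_iwa_mas(shade_diameter_m, separation_km):
    return (shade_diameter_m / 2) / (separation_km * 1_000) * RAD_TO_MAS

print(f"34 m shade at 35,000 km: {shade_iwa_mas(34, 35_000):.0f} mas")    # ~100 mas
print(f"80 m shade at 180,000 km: {shade_iwa_mas(80, 180_000):.0f} mas")  # ~46 mas
```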
HDST Beyond Exoplanets
So that’s the exoplanet element. Exciting as such science is, it only represents a small portion of all astrophysics and any such HDST is going to be a costly venture, probably in excess of JWST. It will need to have utility across astrophysics, and herein lies the problem. What sort of compromise can be reached amongst different schools of astrophysics in terms of telescope function and also access time? Observing distant exoplanets can take days, and characterising their atmospheres even longer.
Given JWST’s huge cost and time overrun, any Congress will be skeptical of being drawn into a bottomless financial commitment. It is for this reason that the focus increasingly falls on both JWST and WFIRST. The first has absolutely GOT to work, well and for a long time, so that all its faults (as with Hubble, ironically) can be forgotten amid the celebration of its achievements. WFIRST must illustrate how a flagship level mission can work at a reasonable cost (circa $2.5 billion) and also show that all the exoplanet technology required for a future large telescope can work, and work well.
The HABX2 telescope is in effect a variable-aperture variant of HDST (its size determined by funds), with the maximum possible passively cooled sensor bandpass described above and a larger version of the starlight suppression technology added to WFIRST. In effect, a dedicated exoplanet telescope. It, too, would use a coronagraph or star-shade.
The overarching terms for all these telescope variants are determined by wavelength; thus the instrument would be referred to as Large UltraViolet Optical InfraRed (LUVOIR), with the specific wavelength range to be determined as necessary. Such a telescope is not a dedicated exoplanet scope and would obviously require suitable hardware. This loose definition is important, as there are other telescope types — high energy, for instance, looking at X-rays. The NASA Chandra telescope doesn’t image the highest energy X-rays emitted by quasars or black holes. Beyond JWST’s range, between it and ALMA (the Atacama Large Millimeter/submillimeter Array), lies the far infrared, which can be served by dedicated telescopes and has not been explored extensively. There are astrophysicist groups lobbying for all these telescope types.
Here WFIRST is again key. It will locate thousands of planets through conventional transit photometry and micro-lensing as well as astrometry, but the directly-imaged planets found via its coronagraph, and better still its star-shade, should, if characterised (with the JWST?), speak for themselves, and if they cannot guarantee a dedicated exoplanet HDST, at least provide NASA and Congress with the confidence to back a large space “ELT” with suitable bandpass and starlight suppression hardware, and time to investigate further. The HDST is an outline of what a future space telescope, be it HABX2 or a more generic instrument, might be.
Image: A simulated spiral galaxy as viewed by Hubble, and the proposed High Definition Space Telescope (HDST) at a lookback time of approximately 10 billion years (z = 2). The renderings show a one-hour observation for each space observatory. Hubble detects the bulge and disk, but only the high image quality of HDST resolves the galaxy’s star-forming regions and its dwarf satellite. The zoom shows the inner disk region, where only HDST can resolve the star-forming regions and separate them from the redder, more distributed old stellar population.
Credit: D. Ceverino, C. Moody, and G. Snyder, and Z. Levay (STScI).
Challenges to Overcome
The concern is that although much of its technology will hopefully be proven through the success of JWST and WFIRST, the step up in size in itself requires a huge technological advance, not least because of the exquisite accuracy required at all levels of its functioning, from observing exoplanets via a star-shade or coronagraph to the actual design, construction and operation of these devices. A big caveat is that it was this technological uncertainty that contributed to the time and cost overrun of JWST, something both the NASA executive and Congress are aware of. It is highly unlikely that such a telescope will launch before the mid-2030s at an optimistic estimate. There has already been pushback on an HDST telescope from NASA. What might be more likely is a compromise, one which delivers a LUVOIR telescope as opposed to an X-Ray or far-infrared alternative, but at more reasonable cost and budgeted for over an extended time prior to a 2030s launch.
Congress is keen to drive forward high profile manned spaceflight. Whatever your thoughts on that, it is likely to lead to the evolution of the SLS and private equivalents like SpaceX launchers. Should these have a fairing of around 10m, it would be possible to launch the largest monolithic mirror in an off-axis format, allowing easier and more efficient use of a coronagraph or an intermediate (50m) star-shade, with minimal technology development and at substantially less cost. Such a telescope would not present such a big technological advance and would be a relatively straightforward design. Negotiation over telescope usage could lead to greater time devoted to exoplanet science, compensating further for the “descoping” from the 12m HDST ideal (only 15% of JWST observing time is allocated for exoplanet use). Thus the future of manned and robotic spaceflight is intertwined.
A final interesting point concerns the “other” forgotten NRO telescope. It is essentially identical to its high profile sibling, albeit with “imperfections” from its manufacturing, and a recent NASA executive interview conceded it could still be used for space missions. At present logic would have it as a backup for WFIRST. Could it, too, be the centrepiece of an exoplanet mission, one of the Probe concepts perhaps, especially the transit spectroscopy mission, where mirror quality is less important?
As with WFIRST, its large aperture would dramatically increase the potency of any mission over a bespoke mirror and deliver a flagship mission at Probe costs. It would be a bonus if, like WFIRST, it too were launched next decade; a bit of overlap with JWST, as between Hubble and JWST, would provide great synergy, with the combined light-gathering capacity of the two telescopes allowing greater spectroscopic characterisation of interesting targets provided by missions like TESS. The JWST workload could also be relieved, critically extending its active lifespan. This is supposition only at this point. I don’t think NASA are sure what to do with it, though Probe funding could represent a way of using it without diverting additional funds from elsewhere.
When all is said and done, the deciding factors are likely to be JWST and evidence collected from exoplanet Probe missions. JWST was five years overdue, five billion dollars overspent and laden with 162 moving parts, yet it will be placed almost a million km away. It has simply got to work, and work well, if there is to be any chance of any other big space telescopes. Be nervous and cross fingers when it launches in late 2018. Meantime, enjoy TESS and hopefully WFIRST and other Probe missions, which should be more than enough to keep everyone interested even before the arrival of the ELT ground-based reinforcements with their high dispersion spectroscopy, which in combination with their own coronagraphs may also characterise habitable exoplanets. These planets and the success of the technology that finds them will be key to the development of the next big space telescope, if there is to be one.
Capturing public interest will be central to this, and we have seen just how much astrophysics missions can achieve in this regard with the recent high-profile successes of Rosetta and New Horizons. With ongoing innovation and the exoplanet missions of the next decade, this could usher in a golden era of exoplanet science. A final, often forgotten facet of space telescopes, central to HDST use, is observing solar system bodies from Mars out to the Kuiper belt. Given the success of New Horizons, it wouldn’t be a surprise to see a similar future flyby of Uranus, but it gives some idea of the sheer potency of an HDST that it could resolve features down to just 300 km. It could clearly image the icy “plumes” of Europa and Enceladus, especially in the UV, where the shorter wavelength allows its best resolving power, which illustrates the need for an ultraviolet capacity on the telescope.
By 2030 we are likely to know of several tens of thousands of exoplanets, many characterised and even imaged, and, who knows, maybe some exciting hints of biosignatures warranting the kind of detailed examination only a large space telescope can deliver.
Plenty to keep Centauri Dreams going, for sure, and maybe help us realise our position in the Universe.
——-
Further reading
Dalcanton, Seager et al., “From Cosmic Birth to Living Earths: The Future of UVOIR Space Astronomy.” Full text.
“HABX2: A 2020 mission concept for a flagship at modest cost,” Swain, Redfield et al. A white paper response to the Cosmic Origins Program Analysis Group call for Decadal 2020 Science and Mission concepts. Full text.
Equinox at Saturn: Puzzling Out the A Ring
I’m really going to miss Cassini when it takes its plunge into Saturn’s atmosphere in 2017. Having an orbiter in the outer system means that periodically we’ve been handed spectacular imagery and vast amounts of data for present and future analysis. Each new encounter now, such as the recent one with Dione, is a poignant reminder of how successful this mission has been, and how much we could gain with similar instrumentation around the ice giants.
Meanwhile, I look at this striking view of Saturn and its rings from 20 degrees above the ring plane, a mosaic built from 75 exposures using Cassini’s wide angle camera, and marvel at the view. The images were made in August of 2009, a day and a half after Saturn equinox, when the Sun was exactly overhead at the planet’s equator. The result is a darkening of the rings from this perspective because of the Sun’s lower angle to the ring plane, with shadows cast across the ring structure. It will be a while before we see this view again — even if we had another spacecraft somehow in place, equinox on Saturn occurs only once every 15 Earth years.
Image: This close to equinox, illumination of Saturn’s rings by sunlight reflected off the planet vastly dominates any meager sunlight falling on the rings. Hence, the half of the rings on the left illuminated by planetshine is, before processing, much brighter than the half of the rings on the right. On the right, it is only the vertically extended parts of the rings that catch any substantial sunlight. With no enhancement, the rings would be essentially invisible in this mosaic. To improve their visibility, the dark (right) half of the rings has been brightened relative to the brighter (left) half by a factor of three, and then the whole ring system has been brightened by a factor of 20 relative to the planet. So the dark half of the rings is 60 times brighter, and the bright half 20 times brighter, than they would have appeared if the entire system, planet included, could have been captured in a single image. Credit: NASA/JPL/Space Science Institute.
What the equinox event gave Cassini scientists was the opportunity to see unusual shadows and wavy structures that appeared during a time when the temperature of the rings’ icy particles began to drop because of the Sun’s position. A recent study in Icarus shows that during the equinox, Cassini’s Composite Infrared Spectrometer found temperatures that matched models of ring particle cooling over much of the expanse of the rings. But the outermost section — the A ring — turned out to be an anomaly, much warmer than the models predicted, with a temperature spike particularly evident in the middle of the A ring.
The JPL researchers believe that differences in the structure of Saturn’s ring particles account for the variation. As this JPL news release explains, most ring particles are thought to have a fluffy exterior something like fresh snow. The A ring seems to be different, however, composed of particles roughly 1 meter wide made up of solid ice, with only a thin outer layer (regolith). Thus we seem to have a concentration of what JPL’s Ryuji Morishima, who led the recent work, calls “solid ice chunks,” a finding the researcher finds unusual. “Ring particles,” he adds, “usually spread out and become evenly distributed on a timescale of about 100 million years.”
“This particular result is fascinating because it suggests that the middle of Saturn’s A ring may be much younger than the rest of the rings,” says Linda Spilker, Cassini project scientist at JPL and a co-author of the study. “Other parts of the rings may be as old as Saturn itself.”
Another possibility: The particles in question are being confined to their present location, perhaps by a moon that existed in the region within the past hundred million years, to be destroyed later by an impact. In this scenario, ring debris might not have had time to diffuse evenly throughout the ring. The researchers also suggest that small ‘rubble-pile’ moonlets could be breaking up in the A ring under the gravitational influence of Saturn and its larger moons.
Cassini is far from finished, despite my musings about its fate. In fact, the spacecraft will measure the mass of the main rings during its last orbits, with the hope that the data will constrain the rings’ age. All of this will give us further insights into how ring structures like these work, with the equinox data showing how short-lived changes can occur that reveal the rings’ deep structure. Until now, we’ve been unable to probe more than a millimeter below the surface of these countless particles, but we’re learning to build models of what must be there.
The paper is Morishima et al., “Incomplete cooling down of Saturn’s A ring at solar equinox: Implication for seasonal thermal inertia and internal structure of ring particles,” Icarus 23 June 2015 (abstract).
Hitchhiker to the Outer System
At the Jet Propulsion Laboratory in Pasadena, Masahiro Ono has been using supercomputer simulations to model a new way of moving between small bodies in the Solar System. We’ve had a demonstration in the last few years of what ion propulsion can do to enable orbital operations at one asteroid (Vesta) followed by a journey to another (Ceres) and orbital insertion there. But Ono is looking at ways to simplify the process of asteroid and comet rendezvous that replaces the need for propellant during the orbital insertion and landing phases.
Call it Comet Hitchhiker. “Hitchhiking a celestial body is not as simple as sticking out your thumb, because it flies at an astronomical speed and it won’t stop to pick you up. Instead of a thumb, our idea is to use a harpoon and a tether,” says Ono, who presents the idea today at the American Institute of Aeronautics and Astronautics SPACE conference in Pasadena. The work has intriguing implications for our investigations of the Kuiper Belt and outer system.
Ono’s method uses reusable tethers that, as this JPL news release explains, are best understood by means of a deep sea fishing analogy. As when a fisherman, having hooked a fish, releases more line under moderate tension, letting the line play out but gradually braking, so a spacecraft would sling an extendable tether toward a comet or asteroid, with a harpoon attached to the tether acting as the ‘hook.’ The spacecraft then releases tether even as it applies a regenerative brake to harvest kinetic energy as the craft accelerates.
Image: Comet Hitchhiker, shown in this artist rendering, is a concept for orbiting and landing on small bodies. Credit: NASA/JPL-Caltech/Cornelius Dammrich.
According to Ono’s simulations, a long enough line allows Comet Hitchhiker to match velocity with the target gradually, after which it begins to reel in the tether for a gentle descent. The idea, Ono believes, works in both directions. After conducting its investigations at the object in question, the spacecraft uses harvested energy to retrieve the tether and accelerate away to another object. The potential for studying five to ten asteroids in a single mission is on the table.
Image: The Comet Hitchhiker concept. Credit: Masahiro Ono.
The question, of course, is what kind of tethers could handle both the harpoon impact on the surface and the demands of the subsequent maneuver. To study the prospects, Ono and colleagues have been using a mathematical formulation they call the Space Hitchhike Equation, relating the specific strength of the tether, the mass ratio between spacecraft and tether, and the needed velocity change to make the operation work. Enormous tension will be placed on the tether and heat will be generated by the rapid decrease in speed for orbit and landing.
The hitchhike maneuver would require a tether anywhere from 100 to 1000 kilometers long. Ono is reporting that a velocity change of 1.5 kilometers per second can be managed with existing materials like Zylon and Kevlar, but a 10 kilometer per second velocity change would be possible with future technologies like carbon nanotubes and a diamond harpoon. The latter would give us sufficient delta-V to land on or orbit long-period comets or Kuiper Belt Objects, objects for which current methods only offer us flyby options.
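A simplified energy argument, not Ono’s actual Space Hitchhike Equation (which is not reproduced here), gives a feel for why those material limits appear: the braking tether must handle of order ΔV²/2 joules per kilogram of spacecraft, while a fibre can plausibly store or survive only of order its specific strength (strength over density) per kilogram of tether. The material figures below are rough textbook values, assumed for illustration:

```python
# Simplified energy comparison (a sketch, NOT Ono's Space Hitchhike
# Equation): braking through delta-v deposits ~dv^2/2 J per kg of
# spacecraft; a fibre handles of order its specific strength per kg.
# Material values are rough textbook figures, assumed for illustration.
MATERIALS_MJ_PER_KG = {
    "Kevlar": 2.5,                   # ~3.6 GPa / 1440 kg/m^3
    "Zylon": 3.7,                    # ~5.8 GPa / 1560 kg/m^3
    "carbon nanotube (est.)": 60.0,  # theoretical estimate
}

def braking_energy_mj_per_kg(dv_km_s):
    return 0.5 * (dv_km_s * 1_000) ** 2 / 1e6

for dv in (1.5, 10.0):
    need = braking_energy_mj_per_kg(dv)
    candidates = [m for m, s in MATERIALS_MJ_PER_KG.items() if s > need]
    print(f"dv = {dv:4.1f} km/s needs ~{need:4.1f} MJ/kg; plausible: {candidates}")
# 1.5 km/s (~1.1 MJ/kg) is within reach of today's fibres; 10 km/s
# (~50 MJ/kg) points to future materials, matching the text above.
```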
Consider, too, the prospects for non-gravitational slingshots around small bodies, as Ono reports in a description of this work:
A comet hitchhiker can obtain ~5 km/s of additional delta-V by utilizing just 25% of the harvested energy for reeling in the tether and/or driving electric propulsion engines. The tether is detached from the target after the desired delta-V is obtained. Our concept enables to design a fast trajectory to a wide range of destinations in the Solar System by taking full advantage of the high relative velocity, abundance, and orbital diversity of small bodies. For example, by hitching a comet with q=0.5 AU, a comet hitchhiker can reach the current orbital distance of Pluto (32.6 AU) in 5.6 years and that of Haumea (50.8 AU) in 8.8 years.
With the European Space Agency’s Rosetta spacecraft continuing operations around Comet 67P/Churyumov-Gerasimenko and ongoing work with Dawn at Ceres, will Comet Hitchhiker open up future options for studying multiple small bodies in a single mission? Much depends on what happens next. Ono and team plan to do further modeling and experiments using materials simulating a comet or asteroid surface. Their work is being supported by a Phase 1 study from the NASA Innovative Advanced Concepts (NIAC) program. Ono’s description of Comet Hitchhiker is available at the NIAC site.
A Statistical Look at Panspermia
Would panspermia, the idea that primitive life can spread from star to star, be theoretically observable? Henry Lin and Abraham Loeb (both associated with the Harvard-Smithsonian Center for Astrophysics) believe the answer is yes. In a paper accepted for publication in Astrophysical Journal Letters, the duo make the case that panspermia would create statistical correlations regarding the distribution of life. Detecting biosignatures in the atmospheres of exoplanets may eventually allow us to apply statistical tests in search of these clustering patterns. If panspermia occurs, the paper argues, we can in principle detect it.
“In our theory,” says Lin, “clusters of life form, grow, and overlap like bubbles in a pot of boiling water.” The paper argues that future surveys like TESS (Transiting Exoplanet Survey Satellite) could be an early step in building the statistical database needed. TESS could detect hundreds of terrestrial-class exoplanets, some of whose atmospheres will be subject to study by ground-based observatories and instruments like the James Webb Space Telescope. Next-generation instruments will do more, allowing us to look for detailed spectral signatures like the ‘red edge’ of chlorophyll or, conceivably, the pollution of a technological society.
Moreover, SETI searches at radio or optical wavelengths could produce detections that eventually allow us to test for clustering, the point being that life that arises by spreading through panspermia should exhibit more clustering than life that arises spontaneously. The statistical models that Lin and Loeb develop in the paper have observable consequences that could begin to turn up as we expand our investigations into astrobiology. Think of Lin’s ‘bubbles’ of life that grow and overlap, or consider the spread of life from host to host in terms of the spread of an epidemic. We may eventually have the data to confirm the idea. From the paper:
In a favorable scenario, our solar system could be on the edge of a bubble, in which case a survey of nearby stars would reveal that ∼ 1/2 of the sky is inhabited while the other half is uninhabited. In this favorable scenario, ∼ 25 targets confirmed to have biosignatures (supplemented with 25 null detections) would correspond to a 5σ deviation from the Poisson case [a probability distribution], and would constitute a smoking gun detection of panspermia.
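A toy version of such a test, a sketch rather than the statistical machinery of the actual paper: under the null hypothesis of independent, spontaneous origins, biosignature detections should split roughly evenly between any two halves of the sky, so finding all 25 detections on one side is astronomically unlikely by chance.

```python
# Toy clustering test (a sketch, not Lin & Loeb's actual analysis):
# if origins are independent, each detection lands in either half of
# the sky with probability 1/2. How unlikely is a 25-0 split?
from math import comb

def lopsided_split_p_value(k, n):
    """Two-sided probability of at least k of n detections
    falling in one pre-chosen half of the sky."""
    tail = sum(comb(n, i) for i in range(k, n + 1))
    return 2 * tail / 2 ** n

print(f"p = {lopsided_split_p_value(25, 25):.1e}")
# ~6e-8, beyond the ~5.7e-7 two-sided threshold usually quoted for 5 sigma
```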
Image: The center of the Milky Way as seen from the mountains of West Virginia. Is there life out there, and if so, does it arise spontaneously, or spread from star to star? Credit: Forest Wander.
The transition from an uninhabited to an inhabited galaxy occurs much faster through panspermia than through a gradual buildup of life arising spontaneously in random areas. There is even a Fermi implication here — if life started at roughly the same time everywhere, then we would expect fewer advanced civilizations at the present time than if life started at random times throughout the universe (the authors note that the Drake equation is based on the assumption that life arises independently everywhere, which contradicts efficient panspermia).
The paper continues:
A more generic placement would increase the number of required detections by a factor of a few, though an unusual bubble configuration could potentially reduce the number of required detections. It should be noted that the local environment of our solar system does not reflect the local environment ∼ 4 Gyr ago when life arose on earth, so the discovery of a bubble surrounding earth should be interpreted as the solar system “drifting” into a bubble which has already formed, or perhaps the earth seeding its environment with life.
The paper notes that any species capable of panspermia will have enormous fitness advantages as it can move from one stellar host to another. Lin and Loeb believe that if panspermia is not viable and Earth is the only inhabited planet, interstellar travel may lead to colonization of the galaxy. In this case panspermia models may still be relevant: A culture using starships may provide the opportunity for primitive forms of life including disease and viruses to spread efficiently, with the same processes of growth and diffusion occurring throughout space.
Can primitive life, then, spread on its own, or does it need intelligent life to create the conditions for its growth outward? Either way, we see life expanding in all directions, producing what this CfA news release calls “a series of life-bearing oases dotting the galactic landscape.” If such patterns exist, finding them will depend upon how quickly life spreads, for any ‘bubbles’ or ‘oases’ could be lost in the regular flow of stellar motion and redistribution about the galaxy. But whatever the biological mechanisms of panspermia might be, it is in principle detectable.
The paper is Lin and Loeb, “Statistical Signatures of Panspermia in Exoplanet Surveys,” accepted for publication at Astrophysical Journal Letters (preprint).
A KBO Target for New Horizons
What we’ll eventually want is a good name. 2014 MU69 is the current designation for the Kuiper Belt Object now selected as the next destination for New Horizons, one of two identified as possibilities, and the one the New Horizons team itself recommended. Thus we have a target — a billion and a half kilometers beyond Pluto/Charon — for the much anticipated extended mission, but whether that mission will actually occur depends upon NASA review processes that are not yet complete. Still, the logic of putting the spacecraft to future use is hard to miss, as John Grunsfeld, chief of the agency’s Science Mission Directorate, is the first to note:
“Even as the New Horizons spacecraft speeds away from Pluto out into the Kuiper Belt, and the data from the exciting encounter with this new world is being streamed back to Earth, we are looking outward to the next destination for this intrepid explorer. While discussions whether to approve this extended mission will take place in the larger context of the planetary science portfolio, we expect it to be much less expensive than the prime mission while still providing new and exciting science.”
Image: Path of NASA’s New Horizons spacecraft toward its next potential target, the Kuiper Belt object 2014 MU69, nicknamed “PT1” (for “Potential Target 1”) by the New Horizons team. Although NASA has selected 2014 MU69 as the target, as part of its normal review process the agency will conduct a detailed assessment before officially approving the mission extension to conduct additional science. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute/Alex Parker.
We wind up with a situation where action precedes future decision. While the extended mission proposal will not be turned in to NASA until next year, the spacecraft can’t delay its preparations for a rendezvous with 2014 MU69 — trajectory changes factor into the equation. New Horizons, as this JHU/APL news release points out, will perform four maneuvers in late October and early November to make the necessary course changes for a January 1, 2019 flyby.
In anticipation of probable work beyond Pluto/Charon, New Horizons has the necessary hydrazine for a KBO intercept, and we’ll be able to monitor its communications and data return for years to come. Researchers had their eye on the kind of primitive object out of which dwarf planets like Pluto themselves may have been made, and the new target fits the bill.
“2014 MU69 is a great choice because it is just the kind of ancient KBO, formed where it orbits now, that the Decadal Survey desired us to fly by,” said New Horizons Principal Investigator Alan Stern, of the Southwest Research Institute (SwRI) in Boulder, Colorado. “Moreover, this KBO costs less fuel to reach [than other candidate targets], leaving more fuel for the flyby, for ancillary science, and greater fuel reserves to protect against the unforeseen.”
As to that new name, 2014 MU69 is already being called PT1, for ‘Potential Target 1,’ but will want something a bit more muscular, and certainly more poetic. You’ll recall how tricky it was to find a KBO for this encounter in the first place (see, for example, New Horizons: Potential KBO Targets Identified). Among those found after the search began in 2011, none were within range of the craft’s fuel supply. It took the Hubble Space Telescope to discover, in the summer of 2014, the two prime candidates. And it’s easy to understand Alan Stern’s enthusiasm. 2014 MU69, at about 45 kilometers across, is ten times bigger than the average comet and a thousand times more massive, even if it’s about 1/10,000th the mass of Pluto.
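That size-to-mass jump is just the cube law at work: assuming the KBO and an average comet have comparable densities (an assumption for illustration), mass scales with the cube of diameter, so a tenfold size advantage implies roughly a thousandfold mass advantage:

$$\frac{m_{\mathrm{MU69}}}{m_{\mathrm{comet}}} \approx \left(\frac{d_{\mathrm{MU69}}}{d_{\mathrm{comet}}}\right)^{3} \approx 10^{3} = 1000$$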
It wasn’t that long ago — in August of 1992, to be specific — that David Jewitt and Jane Luu discovered the first trans-Neptunian object beyond Pluto/Charon, one that gave rise to the term ‘cubewano,’ named after the latter part of its designation, (15760) 1992 QB1. Jewitt and Luu liked the name ‘Smiley’ for the KBO, but there is already an asteroid with that name (1613 Smiley), so like 2014 MU69, even the first identified KBO could use a new moniker. Whatever we call it, 2014 MU69 should give us a look at the early days of Solar System formation some 4.6 billion years ago, preserved by distance and the outer system deep freeze.
The Prime Directive – A Real World Case
Trying to observe but not harm another civilization can be tricky business, as Michael Michaud explains in the article below. While Star Trek gave us a model for non-interference when new cultures are encountered, even its fictional world was rife with departures from its stated principles. We can see the problem in microcosm in ongoing events in Peru, where a tribal culture coming into contact with its modern counterparts raises deeply ambiguous questions about its intentions. Michaud, author of Contact with Alien Civilizations (Copernicus, 2007), draws on his lengthy career in the U.S. Foreign Service to frame the issue of disruptive cultural encounter.
By Michael A.G. Michaud
Science fiction fans all know of the Prime Directive, usually described as avoiding contact with a less technologically advanced civilization to prevent disruption of that society’s development. In a 1968 Star Trek episode, the directive was explicitly defined: “No identification of self or mission. No interference with the social development of said planet. No references to space or the fact that there are other worlds or civilizations.” Another version of the Prime Directive forbade introducing superior knowledge or technology to a society that is incapable of handling such advantages wisely.
Commentators have pointed out many violations of the directive in the Star Trek series (and in other science fiction programs). The Enterprise crew sometimes intervened to prevent tragedy or promote positive outcomes. De facto, observance of the Prime Directive was scenario-dependent.
Star Fleet personnel sometimes used hidden observation posts or disguises to watch or interact with natives. In one episode, Captain Kirk left behind a team of sociologists to help restore an alien society to a “human form.” At the other extreme, the Prime Directive was interpreted as allowing an alien civilization to die.
Star Trek was not the first source of a prime directive. In Olaf Stapledon’s 1937 novel Star Maker, superior beings take great care to keep their existence hidden from “pre-utopian” primitives so that the less advanced beings will not lose their independence of mind.
A recent article in Science reminds us that the practical application of such a principle in a real contact situation on Earth is riddled with complications and uncertainties. The government of Peru has been debating whether or not to make formal contact with a tribal people living in the Peruvian Amazon, sighted frequently over the past year.
Peruvian policy has been to avoid contact with isolated tribes and to protect them from intruders in their reserves. In practice, this policy has been difficult to enforce. Tour operators sell tickets for “human safaris”; some tribespeople loiter on the river bank, letting themselves be seen. One anthropologist said that they were deliberately seeking to interact with people on the river.
There is a darker side to this behavior: some of the tribespeople raided a nearby village for supplies, killing two villagers. These conflicting actions have left their desires unclear. Though some have sought goods, shooting arrows at Peruvians suggests that they do not want contact.
Peru’s government wants to train local people to avoid isolated tribes unless those tribes make the first move. The plan is to increase patrols, discourage raids, and make contact with the tribespeople only if they show a willingness for conversation.
This is termed “controlled contact.” Two anthropologists proposed in a Science editorial that “a well-designed contact can be quite safe,” but another group accused them of advocating a dangerous and misleading idea.
One of the proposed explanations for our non-detection of alien intelligences is the Zoo Hypothesis, which claims that more advanced civilizations deliberately avoid making themselves known to us so as not to disturb humankind’s autonomous development. Others suggest practical reasons for such apparently altruistic behavior. As Robert Rood put it, the only thing we could offer them is new ideas. Their intervention would stop our development.
Much of this debate has been driven by guilt over the impact of Western colonial powers on other Earthly societies. Star Trek and other science fiction treatments used interactions with aliens as allegories for our own world.
Some argue that external cultural influences can be positive. What we call Western Civilization was the product of many forces that came from outside. Europe’s major religions came from the Middle East. Others see Westernization as a threat that must be resisted, notably in the Islamic world.
If we ever find ourselves in contact with an alien civilization, one of the parties is likely to be more scientifically and technologically advanced than the other. Will the more powerful intelligences observe some sort of Prime Directive? That may be more complicated than many humans believe.
——-
References
Andrew Lawler, “Mashco Piro tribe emerges from isolation in Peru,” Science 349 (14 August 2015), 679.
“Prime Directive,” Wikipedia, accessed 21 August 2015.
Michael A.G. Michaud, Contact with Alien Civilizations, Copernicus (Springer), 2007, 237.