
Heliophysics with Interstellar Implications

You might think that heading toward the Sun, rather than away from it, would fall outside Centauri Dreams’ purview, but missions like the Parker Solar Probe remind us that extreme environments are ideal testing grounds for future missions. Build a heat shield that can take you to within 10 solar radii of our star and you’re also exploring possibilities in ‘sundiver’ missions that all but brush the Sun in a tight gravity assist.

Or consider the two proposals NASA has just selected in the area of small satellite technologies, which grow directly out of its heliophysics program. Here, the study of the Sun’s interactions with the Solar System, and the consideration of Sun, planets and heliosphere as a deeply interconnected system, takes pride of place. Let’s start with a mission called SETH — Science-Enabling Technologies for Heliophysics. One of its two technology demonstrators, called the HELio Energetic Neutral Atom (HELENA) detector, involves solar energetic neutral atoms, which can provide advanced warnings of potential radiation threats to astronauts.

The other demonstrator aboard SETH is an optical communications technology expressly designed for CubeSats and other small satellites, one that could allow a hundred-fold increase in the return of deep space data. Building out a robotic infrastructure in the Solar System will involve increasingly miniaturized technologies. We can envision small satellite constellations that can network and operate one day in ‘swarm’ fashion to create a continuous presence around targets ranging from asteroids to the gas and ice giants that can shape their orbits.

Image: NASA has selected two proposals to demonstrate technologies to improve science observations in deep space. The proposals could help NASA develop better models to predict space weather events that can affect astronauts and spacecraft, such as coronal mass ejections (CMEs). In this image, taken by the Solar and Heliospheric Observatory on Feb. 27, 2000, a CME is seen erupting from the Sun, which is hidden by the disk in the middle, so the fainter material around it can be seen. Credit: ESA/NASA/SOHO.

Toward a Large Solar Sail

But if you’re looking for a mission with real interstellar punch, consider Solar Cruiser, whose two technology demonstrations involve measurements of the Sun’s magnetic field structure and the velocity of coronal mass ejections (CMEs), those vast explosions of plasma that can create space weather nightmares for utility grids on Earth. Making this mission possible will be a solar sail of almost 1,700 square meters. The timing on this proposal seems propitious given The Planetary Society’s recent success at raising the orbit of LightSail-2 using sunlight. Pushing toward much larger designs is the next step.

“This is the first time that our heliophysics program has funded this kind of technology demonstration,” said Peg Luce, deputy director of the Heliophysics Division at NASA Headquarters. “Providing the opportunity to mature and test technologies in deep space is a crucial step towards incorporating new techniques into future missions.”

Lots to work with here, and I’m drawing together more information about Solar Cruiser, which would not only be by far the largest solar sail yet deployed, but would also experiment with using the momentum of sunlight to continuously modify its orbit. This would allow us to obtain views of the Sun that orbits involving gravity alone would not make possible. Robert Forward explored the original concept and introduced it to the public first in the pages of Analog and then in his book Indistinguishable from Magic (1995), where he considered how we might use such spacecraft near the Earth. He called a spacecraft that uses a solar sail to hover over a region rather than orbiting the Earth a ‘statite,’ and explained it this way:

…I have the patent on it — U.S. Patent 5,183,225 “Statite: Spacecraft That Utilizes Light Pressure and Method of Use”… The unique concept described in the patent is to attach a television broadcast or weather surveillance spacecraft to a large highly reflective lightsail, and place the spacecraft over the polar regions of the Earth with the sail tilted so the light pressure from the sunlight reflecting off the lightsail is exactly equal and opposite to the gravity pull of the Earth.

Here we are using a solar sail for station-keeping rather than transport, and Solar Cruiser may turn out to be the first time we experiment with the technique, which offers options that other kinds of satellite do not:

With the gravity pull nullified, the spacecraft will just hover over the polar region, while the Earth spins around underneath it. Since the spacecraft is not in orbit around the Earth, it is technically not a satellite, so I coined the generic term ‘statite’ or ‘-stat’ to describe any sort of non-orbiting spacecraft (such as a ‘weatherstat’ or ‘videostat’ or ‘datastat’).
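Forward’s balance condition is easy to check with rough numbers: the maximum areal density of sail plus payload is set by equating solar radiation pressure on a reflective sail against Earth’s gravitational pull at the hover point. A minimal sketch, with the 250,000 km hover distance chosen purely for illustration (not Forward’s own figure):

```python
# Rough force balance for Forward's 'statite': solar radiation pressure
# on an ideal reflecting sail must cancel Earth's gravity at the hover point.
# The hover distance below is an illustrative assumption.

SOLAR_CONST = 1361.0        # W/m^2 at 1 AU
C = 299_792_458.0           # speed of light, m/s
GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2

def max_areal_density(hover_dist_m):
    """Maximum sail+payload mass per m^2 (kg/m^2) that sunlight can hold
    against Earth's gravity at the given distance from Earth's center."""
    pressure = 2.0 * SOLAR_CONST / C          # N/m^2, perfect reflector
    g_local = GM_EARTH / hover_dist_m**2      # local gravity, m/s^2
    return pressure / g_local

sigma = max_areal_density(2.5e8)  # hovering 250,000 km over the pole
print(f"{sigma * 1e3:.2f} g/m^2")  # on the order of a gram per square meter
```

The answer — roughly a gram and a half per square meter, thinner than household aluminum foil — shows why statites demand the ultra-lightweight sail materials missions like Solar Cruiser will help mature.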

Image: Analog‘s December, 1990 issue contained an article by Robert Forward describing the ‘polesitter’ concept, one of many innovative ideas the scientist introduced to a broad audience. Credit: Condé Nast.

Can Solar Cruiser push these ideas forward in orbits near the Sun? Forward called orbits that are non-Keplerian ‘displaced orbits’ and also referred to such satellites as ‘polesitters.’ It will be fascinating to see how far Solar Cruiser will explore such capabilities as part of its larger mission, which should also teach us much about large sail materials and deployment.

What will follow is a nine-month study period, with both proposals funded at $400,000 for concept studies, after which one of the two will be selected to go into space. Launch will take place in October of 2024 as a secondary payload along with the Interstellar Mapping and Acceleration Probe (IMAP), another mission we’ll be following closely as it investigates the interactions of the solar wind with the local interstellar medium (the spacecraft will orbit the Sun-Earth L1 Lagrangian point and will also be used to monitor space weather).

Also of interest: Baig and McInnes, “Light-Levitated Geostationary Cylindrical Orbits are Feasible,” Journal of Guidance, Control and Dynamics, Vol. 33, No. 3 (2010), pp. 782-793 (abstract).


Looking for Life Under Flaring Skies

The faint glow of a directly imaged planet will one day have much to tell us, once we’ve acquired equipment like the next generation of extremely large telescopes (ELTs), with their apertures measuring in the tens of meters. Discovering the makeup of planetary atmospheres is an obvious deep dive for biosignatures, but there is another. Biofluorescence, a kind of reflective glow from life under stress, could be detectable in some conditions at astronomical distances.

New work on the matter is now available from Jack O’Malley-James and Lisa Kaltenegger, at Cornell University’s Carl Sagan Institute. The duo have been on the trail of biofluorescence for some time now, and in fact their paper in Monthly Notices of the Royal Astronomical Society picks up on a 2018 foray into biosignatures involving the phenomenon (citation below). Here the question is detectability in the context of biofluorescence as a protective mechanism, an ‘upshift’ of damaging ultraviolet into longer, safer wavelengths.

“On Earth, there are some undersea corals that use biofluorescence to render the sun’s harmful ultraviolet radiation into harmless visible wavelengths, creating a beautiful radiance,” says Kaltenegger. “Maybe such life forms can exist on other worlds too, leaving us a telltale sign to spot them.”

Image: An example of coral fluorescence. Coral fluorescent proteins absorb near-UV and blue light and re-emit it at longer wavelengths. Credit: Available under Creative Commons CC0 1.0 Universal Public Domain Dedication.

Biofluorescence in vegetation is an effect that is detectable from Earth orbit. Here the effect is comparatively small, accounting for 1-2 percent of the vegetation reflection signal, but of course it is also widespread given the coverage of vegetation over much of Earth’s surface. The phenomenon is also seen in corals, which produce a higher degree of fluorescence. Earth levels of biofluorescence are clearly too small to act as useful biosignatures for exoplanets, but higher levels may well occur elsewhere.

That’s because our early work on the atmospheres of Earth-sized planets will delve into systems around small M-dwarf stars. Such worlds are plentiful and the Transiting Exoplanet Survey Satellite (TESS) is expected to add to our inventory of habitable zone examples nearby. Even now, we have interesting targets: Proxima Centauri b, Ross 128b, TRAPPIST-1e, -f, -g, LHS 1140-b, for example, all of which orbit M-dwarf stars. And M-dwarf stars are known to flare.

This, in fact, is one of the cases originally made against life in such systems, for extreme X-ray and UV radiation would create a challenging environment for even the simplest lifeforms. M-dwarfs vary in terms of the amount of radiation they produce; indeed, a planet around an inactive M-dwarf receives a lower dose of ultraviolet than Earth. But planets around active stars, particularly given the fact that the habitable zone is so close to the small star, receive much higher flux, and such flaring remains active for longer on M-dwarfs than on stars like the Sun.

Life could conceivably flourish underground on such worlds, or within oceans, but even on Earth, the authors note, we see biological responses like protective pigments and DNA repair pathways that are ways of mitigating radiation damage. The corals mentioned above use biofluorescence to reduce the risk of damage to symbiotic algae, absorbing blue and ultraviolet photons and re-emitting them at longer wavelengths.

Corals that display this phenomenon cover a mere 0.2 percent of the ocean floor, making for only a tiny change in our planet’s visible flux. But the situation could be different in actively flaring M-dwarf systems. To study the matter, O’Malley-James and Kaltenegger begin with a biofluorescent surface biosphere in a shallow, transparent ocean, adjusting the variables to simulate different ocean conditions. They vary fluorescent protein absorption and emission to produce values for reflected and emitted light. The assumption is that life evolving in conditions of extreme UV flux will produce ever more efficient fluorescence, in terms of absorbed vs. emitted photons (at maximum efficiency, all photons are absorbed and re-emitted).
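The efficiency bookkeeping here reduces to photon counting: each absorbed UV photon re-emitted at a longer wavelength carries proportionally less energy. A toy version of that accounting — the function and the example wavelengths are illustrative, not the authors’ actual model:

```python
H_PLANCK = 6.62607015e-34  # Planck constant, J*s
C_LIGHT = 299_792_458.0    # speed of light, m/s

def fluoresced_flux(uv_flux, lam_abs_nm, lam_em_nm, efficiency):
    """Energy flux (W/m^2) re-emitted by a fluorescent surface absorbing
    at lam_abs_nm and re-emitting at lam_em_nm, where efficiency is the
    fraction of absorbed photons that are re-emitted."""
    # photons absorbed per m^2 per second
    photons = uv_flux * (lam_abs_nm * 1e-9) / (H_PLANCK * C_LIGHT)
    # each re-emitted photon carries h*c / lam_em of energy
    return efficiency * photons * H_PLANCK * C_LIGHT / (lam_em_nm * 1e-9)

# Even at maximum photon efficiency, the energy returned is reduced by the
# wavelength ratio: 400 nm absorbed, 500 nm emitted -> ~80% of the energy.
print(fluoresced_flux(1.0, 400, 500, 1.0))  # approximately 0.8
```

The point of the exercise: the re-emitted energy scales linearly with the incident UV, which is exactly why a flare — a sudden spike in UV flux — would produce a correspondingly sudden visible glow.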

The authors calculate the UV flux for different classes of M-dwarfs and quantify the outgoing emissions of common pigments during fluorescence. The model also includes atmospheric effects with varying cloud coverage, generating spectra and colors for hypothetical planetary conditions. False positives from mineral fluorescence are considered, as are signals produced by surface vegetation, with different fractions of surface coverings and biofluorescent life.

But let’s cut to the chase. A biosphere otherwise hidden from us could be revealed through the temporary glow resulting from the flare of an M-dwarf. From the paper:

Depending on the efficiency of the fluorescence, biofluorescence can increase the visible flux of a planet at the peak emission wavelength by over an order of magnitude during a flare event. For comparison, the change in brightness at peak emission wavelengths caused by biofluorescence could increase the visible flux of an Earth-like planet by two orders of magnitude for a widespread biofluorescent biosphere and clear skies, with low-cloud scenarios being more likely for eroded atmospheres. In an M star system, the reflected visible flux from a planet will be low due to the host star’s low flux at these wavelengths; however, the proposed biofluorescent flux is dependent on the host star’s UV flux, resulting in additional visible flux that is independent of the low stellar flux at visible wavelengths. This suggests that exoplanets in the HZ of active M stars are interesting targets in the search for signs of life beyond Earth.

Video: Lisa Kaltenegger, director of Cornell University’s Carl Sagan Institute, explains why studying biofluorescence on Earth can guide the way humans search for life on other planets.

If biofluorescence can evolve on the planets of active M-dwarf stars, it may turn out that high ultraviolet flux could be the key to reveal its existence. We need to quantify such effects because our ground- and space-based assets in the hunt for biosignatures are going to be homing in on the most readily studied planets first, and that means nearby worlds around red dwarfs. What O’Malley-James and Kaltenegger are doing is charting one possible signature that we may or may not find, but one which we now know to include in our investigative toolkit.

The paper is O’Malley-James and Kaltenegger, “Biofluorescent Worlds – II. Biological fluorescence induced by stellar UV flares, a new temporal biosignature,” Monthly Notices of the Royal Astronomical Society 13 August 2019 (abstract/full text). The earlier paper on this issue is O’Malley-James and Kaltenegger, “Biofluorescent Worlds: Biological fluorescence as a temporal biosignature for flare star worlds,” accepted at MNRAS (preprint).


Modeling Early JWST Work on TRAPPIST-1

So much rides on the successful launch and deployment of the James Webb Space Telescope that I never want to take its capabilities for granted. But assuming that we do see JWST safely orbiting the L2 Lagrange point, the massive instrument will stay in alignment with Earth as it moves around the Sun, allowing its sunshield to protect it from sunlight and solar heating.

Thus deployed, JWST may be able to give us information more quickly than we had thought possible about the intriguing system at TRAPPIST-1. In fact, according to new work out of the University of Washington’s Virtual Planetary Laboratory, we might within a single year be able to detect the presence of atmospheres for all seven of the TRAPPIST-1 planets in 10 or fewer transits, if their atmospheres turn out to be cloud-free. Right now, we have no way of knowing whether any of these worlds have atmospheres at all. A thick, global cloud pattern like that of Venus would take longer, perhaps 30 transits, to detect, but is definitely in range.

“There is a big question in the field right now whether these planets even have atmospheres, especially the innermost planets,” says Jacob Lustig-Yaeger, a UW doctoral student who is lead author of the paper on this work. “Once we have confirmed that there are atmospheres, then what can we learn about each planet’s atmosphere — the molecules that make it up?”

Image: New research from UW astronomers models how telescopes such as the James Webb Space Telescope will be able to study the planets of the intriguing TRAPPIST-1 system. Credit: NASA.

Working with Lustig-Yaeger are UW’s Victoria Meadows, principal investigator for the Virtual Planetary Laboratory, and doctoral student Andrew Lincowski. The latter should be a familiar name if you’ve been following TRAPPIST-1 studies, because back in November of 2018 he was lead author on a paper on climate models for this fascinating system (see Modeling Climates at TRAPPIST-1).

We’ll now be hoping to follow up that work with early JWST data. Briefly, Lincowski and team pointed to the extremely hot and bright early history of TRAPPIST-1, a tiny M-dwarf 39 light years out with a radius not much bigger than Jupiter (although with considerably more mass — the star is about 9 percent the mass of the Sun). These early conditions could produce planetary evolution much like Venus, with evaporating oceans and dense, uninhabitable atmospheres. The Lincowski paper, though, did point to TRAPPIST-1 e as a potential ocean world.

These findings were in the context of a system among whose seven transiting worlds are three — e, f and g — that are positioned near or in the habitable zone, where liquid water might exist on the surface. Now we have Lustig-Yaeger and company modeling our early JWST capabilities. The paper finds that beyond the presence of an atmosphere, we may be able to draw further conclusions, particularly with regard to the evolution of what gas envelopes we find.

Although oxygen as a biosignature may not be detectable for the potentially habitable TRAPPIST-1 planets, oxygen as a remnant of pre-main-sequence water loss may be easily detected or ruled out… the 1.06 and 1.27 µm O2-O2 CIA [collisionally-induced absorption] features are key discriminants of a planet that has an oxygen abundance greatly exceeding biogenic oxygen production on Earth and may therefore indicate a planet that has undergone vigorous water photolysis and subsequent loss during the protracted super-luminous pre-main-sequence phase faced by late M dwarfs…

Such features could be detected fairly quickly:

… in as few as 7-9, 15, 8, 49-67, 55-82, 79-100, and 62-89 transits of TRAPPIST-1b, c, d, e, f, g, and h, respectively, should they possess such an atmosphere. These quoted number of transits may be sufficient to rule out the existence of oxygen-dominated atmospheres in the TRAPPIST-1 system. Additional evidence of ocean loss could be provided by detection of isotope fractionation, which may also be possible in as few as 11 transits with JWST.

Moreover, the authors find that water detection could help to pare down various evolutionary scenarios on these worlds, particularly for TRAPPIST-1 b, c and d, assuming atmospheres high in oxygen content that have not been completely desiccated by the star’s early history. Thus we are probing planetary evolution, but assessments of habitability are going to be tricky, and it seems clear that we will need to turn such analysis over to future direct-imaging missions.

On balance, we are talking about getting useful results with a fairly low number of transits. JWST’s onboard Near-Infrared Spectrograph will use transmission spectroscopy — where the star’s light passes through a planet’s atmosphere to reveal its spectral ‘fingerprint’ — to detect the presence of an atmosphere via the absorption of CO2. Such analysis can likewise either detect or rule out oxygen-dominated atmospheres, while constraining the extent of water loss through measurements of H2O abundance. All of this provides fodder for other, still evolving observing strategies using the JWST instrument package that can begin the characterization of these compelling worlds.
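The reason transit counts matter so much is that the transmission signal is tiny: the atmosphere adds an annulus of roughly one scale height to the planet’s silhouette. A back-of-the-envelope estimate for a TRAPPIST-1e-like case — the radii, temperature, and gravity below are rounded assumptions, not values from the paper:

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6726e-27     # mass of hydrogen atom, kg

def transit_signal_per_scale_height(r_planet_m, r_star_m, t_k, mu, g):
    """Fractional change in transit depth contributed by one atmospheric
    scale height of an atmosphere with mean molecular weight mu."""
    h = K_B * t_k / (mu * M_H * g)          # scale height, m
    return 2.0 * h * r_planet_m / r_star_m**2

# Rounded TRAPPIST-1e-like inputs: 0.92 R_earth planet, 0.12 R_sun star,
# a 250 K N2-dominated (mu = 28) atmosphere, surface gravity ~9.1 m/s^2.
sig = transit_signal_per_scale_height(5.86e6, 8.4e7, 250.0, 28.0, 9.1)
print(f"{sig * 1e6:.0f} ppm")  # on the order of 10 ppm per scale height
```

Signals of tens of parts per million are why stacking multiple transits is unavoidable, and why the small stellar disk of an M-dwarf — which boosts the ratio — makes TRAPPIST-1 such an attractive early target.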

The paper is Lustig-Yaeger et al., “The Detectability and Characterization of the TRAPPIST-1 Exoplanet Atmospheres with JWST,” Astronomical Journal Vol. 158, No. 1 (21 June 2019). Abstract / preprint. The Lincowski paper referenced above is “Evolved Climates and Observational Discriminants for the TRAPPIST-1 Planetary System,” Astrophysical Journal Vol. 867, No. 1 (1 November 2018). Abstract / Preprint.


Nautilus: New Lens Concept for Space-based Array

As we’ve been talking about the limitations of giant telescopes in recent days — and a possible solution in David Kipping’s idea of a ‘terrascope’ — it pays to keep in mind how our ability to collect light has changed over the years. Thus the figure below, which is drawn from a new paper from Daniel Apai and Tom Milster (both at the University of Arizona) and colleagues. Here we see four centuries of evolution for light-collecting power through refracting and reflecting telescopes, with the introduction of segmented mirrors making larger apertures possible.

Image: This is Figure 1 from the paper (click to enlarge). Caption: Evolution of light-collecting area of ground-based (blue, green) and space-based (red) telescopes. The evolution is characterized by alternating stages of slow growth (when existing technology is scalable) and pauses (when existing technology cannot be scaled up). The data points represent the installation of the largest telescopes in their era and are connected to highlight general trends. Gray regions mark the approximate stages in the evolution when lenses, monolithic mirrors, and then segmented mirrors become too massive to be viable with existing technology. Telescopes used multiple different technological solutions to collect light. Large jumps in diameter are due to technological breakthroughs. Credit: Daniel Apai / Tom Milster / UA.

Anticipated designs for space telescopes take us from Hubble’s 2.4-meter mirror up to the segmented mirrors of the James Webb Space Telescope (6.5 meters) and future concepts like the Large UV/Optical/IR Surveyor (LUVOIR), with a 15-m primary telescope. The gradual growth of aperture sizes is, Apai and Milster argue in their paper, a bottleneck in the design of astronomical telescopes, and one we’ll have to overcome to study Earth-like planets and their potential biosignatures.

What the scientists are offering is a telescope concept in which the primary mirror is replaced by what the authors describe as multiorder diffractive engineered (MODE) material lens technology. Here we move from refraction, which involves light changing direction as it moves from one medium to another, to diffraction, where the waves of light change direction as they encounter barriers and openings in their path — think Fresnel lenses, where waves can interact constructively or destructively depending on wavelength. The MODE lens developed at UA by the authors is in effect a hybrid between refractive and diffractive lens technologies.

In their paper, Apai and Milster propose a space telescope called Nautilus, which would operate as a fleet of 35 14-meter wide spherical telescopes, each of them more powerful than the Hubble instrument. Within each Nautilus unit, an 8.5-meter diameter MODE lens would be used for high-precision transit spectroscopy, along with a 2.5 meter lens optimized for wide-field imaging and transit searches. The combined array would be powerful enough to characterize 1,000 exoplanets from a distance as far as 1,000 light years. The light-collecting power Nautilus achieves, by the authors’ calculations, is the equivalent of a 50-meter diameter telescope.

Image: Each individual Nautilus lens is 8.5 meters in diameter, larger than the mirrors of the Hubble Space Telescope and James Webb Space Telescope. Credit: Daniel Apai.

The authors argue that “…the concept described here offers a pathway to break away from the cost and risk growth curves defined currently by mirror technology and has the potential to enable very large and very lightweight, replicable technology for space telescopes.” Such lenses are less sensitive to misalignments and deformations than conventional refractive lenses and are readily replicated through processes of optical molding. Says Apai:

“Telescope mirrors collect light – the larger the surface, the more starlight they can catch. But no one can build a 50-meter mirror. So we came up with Nautilus, which relies on lenses, and instead of building an impossibly huge 50-meter mirror, we plan on building a whole bunch of identical smaller lenses to collect the same amount of light.”

These lenses work via a design the authors have patented, one that allows them to be both large and far lighter than monolithic mirrors. Large aperture lenses using diffractive methods can be produced without the mass and volume of material required by refractive designs. They are much thinner, formed around a series of separate sections mounted in a frame. The easiest analogy is with lighthouse lenses, which are built around concentric annular sections.

That makes them less expensive to launch, but the design also takes into account the practicalities of the space launch business, with the capability of stacking individual modules on top of each other. The lenses are 10 times lighter in areal density and 100 times less sensitive to misalignments. Thus the potential is here to sharply reduce launch costs as well as the cost of initial fabrication through a series of instruments using replicated components and identical telescopes — with a fleet of 35 in play, Nautilus collects light efficiently and distributes risk among multiple instruments. That latter point resonates given how much rides on a successful launch and deployment of missions like the James Webb Space Telescope.

“Currently, mirrors are expensive because it takes years to grind, polish, coat and test,” adds Apai. “Their weight also makes them expensive to launch. But our Nautilus technology starts with a mold, and often it takes just hours to make a lens. We also have more control over the process, so if we make a mistake, we don’t need to start all over again like you may need to with a mirror.”

Image: A labeled illustration of an individual Nautilus unit. It is designed to stack many units on top of each other in a rocket before inflating in space. Credit: Daniel Apai.

The paper takes note of the scalability of the concept:

Due to its relatively low production and launch costs and the identical multispacecraft model that is relatively new to astrophysical space telescopes, the general Nautilus system proposed here provides an easily scalable approach. Such multispacecraft models (multiple identical units) are used commercially (Iridium system) and for geo- and planetary sciences (Voyagers, Mariners, Mars exploration rovers, etc.) to reduce per-unit costs and risks and to extend capabilities. Furthermore, telescopes utilizing similar architecture but increasing in size could demonstrate feasibility and mitigate risks, while producing scientific data.

If the Nautilus model works, the prospect of low-cost space telescopes far more powerful than anything now in space or planned for it opens up, bringing such capabilities to the level of individual universities, which could launch their own small instruments. That would be quite a switch from the current model of having to compete for time on overbooked space observatories like Hubble. And Nautilus itself gives us the potential for observing 1,000 Earth-like planets out to the 1,000 light-year range it is designed to investigate.

The paper is Apai et al., “A Thousand Earths: A Very Large Aperture, Ultralight Space Telescope Array for Atmospheric Biosignature Surveys,” Astronomical Journal Vol. 158, No. 2 (29 July 2019). Abstract.


The Terrascope: Challenges Going Forward

Yesterday I renewed our acquaintance with the idea that large natural objects can stand in for technologies we have previously been engineering into existence. The progression is a natural one. The early telescope work of Hans Lippershey and Galileo Galilei began with small instruments, but both refractor and later reflector designs would grow to enormous size, so that today, even with the best adaptive optics and segmented mirrors working together, we are pushing hard on what can be done. Not to mention the fact that controversies over land use can come into play with gigantic observatory installations, as we’ve seen recently in Hawaii.

The fascination is that there is nothing in physical law to preclude ever increasing segmented mirror instruments, but we have to question their economic realities and their practicality. I think it’s a nod to the sheer ingenuity involved in linking seemingly disparate phenomena that David Kipping could turn work on the ‘green flash’ seen at sunrise and sunset into, first, the realization that the refractive flash would be a globe-circling ring as seen from space when the Earth occulted the Sun, and second, into the idea that it marked a potentially usable lensing phenomenon.

Here we’re letting nature do the work. Using the Earth as a lens would allow us to extend our powers remarkably, and at comparatively low cost. Working with a one-meter detector of the sort we already know how to fly (Hubble uses a 2.4-meter mirror), we could achieve results we would expect from a 150-meter space telescope. Assuming, of course, that Kipping’s recent analysis is correct, or more accurately, that the issues he identifies in the paper are solvable and our imaging technologies are up to the task.

For we start digging into the challenging bits almost as soon as the concept is explained. What Kipping wants to exploit is the fact that light from a star behind the Earth would undergo refractive lensing as it entered and then exited the atmosphere. Crucially, the focal point this would create as light rays bent around the Earth and converged on the other side would vary in distance depending on the altitude in the atmosphere that the light passed. This would give us a focal line, so that as we moved further away from the Earth, we would be experiencing the lensing of light at higher and higher altitudes.
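The distances involved can be estimated with a simple exponential-atmosphere model: a ray grazing the limb at altitude h is bent through an angle of roughly ν(h)·√(2πR/H), where ν is the refractivity at that altitude, and converges at a distance of about R divided by that angle. A rough sketch — the surface refractivity and scale height below are standard round numbers, not Kipping’s exact inputs:

```python
import math

R_EARTH = 6.371e6    # Earth radius, m
NU0 = 2.8e-4         # surface refractivity of air (n - 1), round number
H_SCALE = 8.5e3      # atmospheric density scale height, m

def focus_distance(altitude_m):
    """Approximate distance at which rays grazing the atmosphere at the
    given altitude converge, for an exponential refractivity profile."""
    nu = NU0 * math.exp(-altitude_m / H_SCALE)                # refractivity at altitude
    bend = nu * math.sqrt(2.0 * math.pi * R_EARTH / H_SCALE)  # bending angle, rad
    return (R_EARTH + altitude_m) / bend                      # m

# Rays passing 14 km up are bent less than surface-grazing rays,
# so they focus much farther out -- near the Hill radius.
print(f"{focus_distance(14e3) / 1e9:.1f} million km")
```

With these round numbers the 14-kilometer rays focus around 1.7 million kilometers out, in line with Kipping’s result that a detector near the Hill radius samples light that cleared the cloud deck.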

The Perils of Extinction

I hope by now the point is clear that because we are talking about light in the atmosphere, we have to be aware of all the extinction events — loss of light — that could occur, from low-level light being obscured by topographical features to weather much higher in the atmosphere disrupting our seeing. Extinction is dependent on the color, or wavelength, of light in question, and we also have to take scattering effects into consideration.

But Kipping believes that light lensed from about 14 kilometers would be relatively free of these effects, potentially allowing a robust image to be acquired. The focal line sampled at roughly the distance of the Hill sphere (about four times the Earth-Moon distance) is thus what we’re after. Moreover, the alignment doesn’t have to be exact; it could be offset by about an Earth diameter.
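That “about four times the Earth-Moon distance” figure is just Earth’s Hill radius, the distance out to which Earth’s gravity dominates the Sun’s: r_H ≈ a·(m/3M)^(1/3). A quick check:

```python
AU = 1.496e11          # Earth-Sun distance, m
MASS_RATIO = 3.003e-6  # Earth mass / Sun mass

def hill_radius_m():
    """Earth's Hill radius: r_H = a * (m / (3 M))**(1/3)."""
    return AU * (MASS_RATIO / 3.0) ** (1.0 / 3.0)

r_h = hill_radius_m()
print(f"{r_h / 1e9:.2f} million km")                # ~1.50 million km
print(f"{r_h / 3.844e8:.1f} Earth-Moon distances")  # ~3.9
```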

Have a look at the figure below, drawn from Kipping’s paper, which shows the extinction expected at various distances from the Earth, and you’ll see that the most robust acquisition of signal is going to be at the Hill sphere distance.

Image: This is Figure 11 from the paper. Caption: Amplification after extinction expected for a 1 metre diameter telescope at the Earth’s Hill radius (top), half the Hill radius (middle) and the Moon’s separation (bottom). Six atmosphere models are shown… [the color coding is provided in the paper, q.v.], which control temperature-pressure profiles (and thus refractivity profile) as well as the extinction computed using lowtran7 [a propagation model and computer code for predicting atmospheric transmittance and sky thermal and scattered radiance]. All models assume no clouds. Standard photometric filters highlighted in gray, except for L, M and N which are slightly offset to encompass the optimal regions. Credit: David Kipping.

As you see, these models assume no clouds, but we must take into account the fact that at any given time, clouds cover approximately three-quarters of the Earth, meaning they can block out the desired signal. The Hill sphere distance is, in Kipping’s estimation, sufficient to bypass the bulk of these. His calculations show that light passing through the atmosphere at an altitude of 14 kilometers should display a net loss of about eight percent, small enough to make the concept of the Terrascope viable.

Image: This is Figure 13 from the paper. Estimated transmission through the Earth’s atmosphere due to clouds for a Terrascope detector at a distance L. At one Hill radius (1,500,000 km), lensed rays travel no deeper than 13.7 km and thus largely avoid clouds, thereby losing less than 10% of the lensed light. Credit: David Kipping.

Parts of the Earth which are illuminated are a problem, because they insert a background component to the desired signal. The author notes that “background suppression strategies, such as leveraging polarization, wavelength information, and temporal light curve variations” may be of value in reducing some of this loss, but he proceeds to account for the loss with a conservative halving of the original amplification figure to 22,500. An actual Terrascope may be able to improve on this figure, though the paper does not explore the idea further.
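Amplification and equivalent aperture are two views of the same number: a lens that boosts collected light by a factor A makes a detector of diameter D perform like one of diameter D·√A. The halved figure maps directly onto the 150-meter equivalence quoted earlier:

```python
import math

def equivalent_diameter(detector_diameter_m, amplification):
    """Diameter of an aperture with the same collecting power as a
    detector of the given diameter boosted by the amplification factor."""
    return detector_diameter_m * math.sqrt(amplification)

# A 1-meter detector with Kipping's conservative amplification of 22,500
print(equivalent_diameter(1.0, 22_500))  # 150.0
```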

Given the extinction problems that clouds introduce, it’s interesting to consider the Terrascope at other wavelengths. Kipping’s primary thrust is in optical and infrared light, but the paper moves on to the intriguing possibility of working outside this range. From the paper:

Moving further out into the radio offers two major advantages… First, extinction due to clouds can be largely ignored, allowing for detectors much closer including on the lunar surface. Second, Solar scattering is far less problematic in the radio and indeed it is typical for radio telescopes to operate during daylight phases. The simple refraction model of this work was extended to the radio and indeed the amplification was estimated to be largely achromatic beyond a micron. Nevertheless, the model did not correctly account for the radio refractivity as a function of humidity, nor the impact of the ionosphere on lensed rays. Accordingly, a radio terrascope may be an excellent topic for further investigation.

A Path for Research

Limited to those targets that pass behind the Earth as seen from the detector, we might seem boxed in, but consider the possibility, which Kipping discusses in his video on the Terrascope, of creating a fleet of small detectors around the Hill sphere. Now the target list widens as we gain pointing control, and as we build out such a fleet, we can choose our early targets while anticipating the next high-priority items to be served by future detectors. The important first step is to find out whether a working Terrascope delivers on its promise.

Those with a long memory may recall my Centauri Dreams articles on Claudio Maccone’s FOCAL ideas in relation to creating not just a telescope but a radio bridge between distant targets for interstellar exploration [see, for example, The Gravitational Lens and Communications]. Could we do something similar to create a radio Terrascope? The paper gives a nod to the notion, and I asked Kipping if he could expand upon the idea, which he was kind enough to do yesterday.

“Any system like this can also be used in reverse, we can switch out the detector for a transmitter and suddenly we have an antenna with an amplification of 45,000 instead,” he wrote in an email. “In fact, it’s even easier because you don’t need an Earth-shade anymore. To realize this in the radio, there would need to be some further work on how the ionosphere affects the lensing but certainly the optical and infrared are already viable according to my study. One could imagine using this as the bedrock for an interplanetary high speed internet, enabling missions like Cassini and Juno to send back very high fidelity data.”

Given current limitations on data return from deep in the Solar System, the term ‘interplanetary high-speed internet’ seizes the imagination, which is why I’m going to make sure that Vinton Cerf, inventor of the TCP/IP protocols and a man who has done extensive work on extending these methods into forms adapted for Solar System distances, sees the Terrascope idea. Looking down the road, I can only imagine the effect on public support for robotic exploration of the outer system if we were able to offer high-speed data and virtual reality capabilities for our targets.

The benefit of exploring David Kipping’s ideas plays hand in glove with research into gravitational lensing. We’re talking in both cases about placing a detector at the right position to acquire data while minimizing background contamination, developing the needed occulter and creating the software to untangle what does get through from the lensed photons we want. But with the Terrascope, we are working close to home, and can experiment at relatively low cost with strategies for maximizing the acquisition of our imagery and the untangling of it. Much of this work would play directly into the development of the longer-term FOCAL mission.

We need to investigate these ideas. The collecting power of a 150-meter telescope, if it can be achieved as per this concept, takes us into the possibility of detecting topography on exoplanets, not to mention its potential for biosignatures as well as deep space astronomy. Moving further along will involve looking deeper into the question of atmospheric effects and occulter design, which is where Kipping believes the next research effort should be mounted.

The paper is Kipping, “The ‘Terrascope’: On the Possibility of Using the Earth as an Atmospheric Lens,” accepted at Publications of the Astronomical Society of the Pacific (abstract).


Planetary Lensing: Enter the ‘Terrascope’

I’m always fascinated with ideas that do not disrupt the known laws of physics but imply an engineering so vast that it seems to defy practical deployment. Centauri Dreams readers are well aware by now of some of Robert Forward’s vast mental constructions, including lightsails in the hundreds of kilometers and enormous lenses in the outer Solar System as big as some US states. But such notions abound in the realm of interstellar thinking. Thus Clifford Singer’s ideas on pellet propulsion to a receding starship, whose mathematical analysis requires an accelerator some 100,000 kilometers long, an engineering nightmare.

But then, when we reach sizes like these, we might ask ourselves whether we’re not overlooking the obvious. When Cornell’s Mason Peck went to work on wafer-scale spacecraft, one futuristic notion that occurred to him was to charge swarms of tiny ‘sprites’ through plasma interactions and use Jupiter’s magnetic field as a particle accelerator, pushing the chips to thousands of kilometers per second. That gets you to Proxima Centauri at, conceivably, a tenth of lightspeed. And instead of building a vast accelerator, you use the one the Solar System already has.

Columbia University’s David Kipping likewise investigates what we can do with natural objects. Years ago, he became fascinated with Claudio Maccone’s ideas for a FOCAL mission, which would use the Sun’s own mass as the instrument for ‘bending’ starlight to a focus at about 550 AU, one that might be exploited by future deep space telescopes. I put ‘bending’ in quotes because the light never actually bends, but flies straight and true through spacetime curved by the presence of mass. The Sun has over 300,000 times the mass of the Earth, a useful object!

Image: Writing about Italian physicist Claudio Maccone gives me the chance to tap a favorite memory. Here I’m at the left, having lunch with Claudio ten years ago in the Italian Alps. This was one of many sessions in which Claudio helped me understand the implications of gravitational lensing. Credit: Roman Kezerashvili (City University of New York).

No one in the rich history of gravitational lensing concepts from Einstein through Von Eshleman and on to Maccone, Geoff Landis and Slava Turyshev has done more for the field than Maccone himself, having submitted a proposal to the European Space Agency for FOCAL as far back as 1993, and having authored the key text, Deep Space Flight and Communications: Exploiting the Sun as a Gravitational Lens (Springer/Praxis 2009).

A Lens the Size of the Earth

Of course, in our current stage of technological development, FOCAL itself is quite a reach, given our problems in getting to the outer Solar System — our farthest-flung craft even now are barely a quarter of the way to where the lensing phenomenon could begin to be exploited. Well aware of this, David Kipping wondered if there wasn’t a way to explore the ‘bending’ of light in a different way, one that could help us learn how to untangle complex lensed images and develop near-term technologies at distances much closer to home. And it turns out there is, although the proposal relies not on gravitational lensing but on the refraction of light.

Image: Columbia University physicist David Kipping.

Kipping wants us to consider the Earth itself as the lens in a concept he calls the Terrascope. Although Maccone has considered the gravitational lensing properties of individual planets, the effect is small. Kipping reminds us that bending light through refraction has been used since the earliest telescopes. Refraction happens when light moves from a medium like air into a denser medium like glass. The result: magnification as well as amplification, depending on the size of the lens.

The problem: Keep making larger and larger refracting telescopes and the lenses begin to deform under gravity. Reflector telescopes solve many of the problems of refractors, but we can see how far we’ve pushed their limits when we consider how we’ve had to move to segmented mirrors like the 36-mirror Gran Telescopio Canarias. Segmented mirrors cope with the deformation problems of a single large mirror but demand powerful computing resources. As their size continues to increase, costs skyrocket.

Kipping was originally inspired by the ‘Green Flash’ phenomenon that is the result of the refraction of the Sun’s light at sunset or sunrise. It lasts no more than a second or two, and appears because blue light is attenuated by scattering in the atmosphere as sunlight is spread into its constituent colors. The Columbia physicist, working thirteen years ago on a master’s thesis at Cambridge, realized that at a certain distance from Earth, if the Sun were directly behind our planet, a global green flash would appear, forming a green ring around Earth.

Image: The green flash as seen in Santa Cruz, CA. Credit: Brocken Inaglory CC BY-SA 3.0.

And what happens with more distant starlight? If the Earth is in front of a star, light from the star is deflected by the Earth’s atmosphere by about half a degree as it enters, and another half a degree as it exits — for those light rays that skim the surface. This sets up a focal point at a distance less than the Earth’s distance from the Moon. Even more significantly, a focal line is created as we consider light rays that enter the atmosphere higher up. Here the bending effect is somewhat less, but we also begin to mitigate atmospheric effects that would put noise in our data.

Image: Light from a distant star is deflected by the atmosphere of the Earth by half a degree. After skimming the surface, it is bent again as it exits the atmosphere by one-half a degree. Light entering the opposite hemisphere does the same, creating a focal point. Rays entering the atmosphere higher up bend less because the atmosphere is thinner with altitude, so the result is a focal line. The trick is knowing which light can be effectively sampled. Image credit: David Kipping.

As you can infer from the image above, light that skims the surface of the Earth would encounter too many obstacles to be helpful, but light rays encountering the atmosphere at higher altitudes can give us a focal point at the distance of the Moon. Even here, though, we run into scattering and extinction effects produced by the atmosphere, which depend upon the wavelength of the light we’re looking at.
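
The geometry lends itself to a quick back-of-envelope check. Here is a minimal sketch in Python, assuming a total deflection of about one degree (half a degree on entry plus half on exit) for a surface-grazing ray, and ignoring the altitude dependence of refraction:

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius

def focal_distance_km(total_deflection_deg: float) -> float:
    """Distance at which a grazing ray, bent through the given
    total angle, crosses the optical axis behind the Earth."""
    return R_EARTH_KM / math.tan(math.radians(total_deflection_deg))

# ~0.5 deg on entry plus ~0.5 deg on exit for a surface-grazing ray
print(f"{focal_distance_km(1.0):,.0f} km")  # inside the Moon's ~384,400 km orbit
```

Rays entering the atmosphere higher up bend less, pushing their crossing point farther out — which is exactly the focal line described above.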

Moreover, clouds are a factor, meaning we have to choose light that enters the atmosphere higher still. In his paper, Kipping argues that a detector placed at the Hill sphere distance, about four times the distance between the Earth and the Moon, is positioned to intercept light that would have passed through the Earth’s atmosphere at an altitude of 14 kilometers. As he notes in a recent email: “So put a detector at the Lagrange point, look back at the Earth, block out the disk of the Earth itself, and around the rim you should see light from distant stars lensed into ring-like structures.”
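
The Hill sphere figure is easy to verify with the standard approximation r_H ≈ a (m/3M)^(1/3), sketched here in Python:

```python
AU_KM = 1.496e8          # Earth-Sun distance
MASS_RATIO = 3.003e-6    # Earth mass / Sun mass
LUNAR_DISTANCE_KM = 384_400

# Standard Hill-radius approximation: r_H = a * (m / 3M)^(1/3)
hill_radius_km = AU_KM * (MASS_RATIO / 3) ** (1 / 3)

print(f"{hill_radius_km / 1e6:.2f} million km, "
      f"{hill_radius_km / LUNAR_DISTANCE_KM:.1f} lunar distances")
```

The result, about 1.5 million kilometers, is close to four lunar distances, matching the figure in the paper.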

Image: Extinction effects for a terrascope at the Hill sphere distance, roughly four lunar distances from Earth. Here, most of the infrared spectrum becomes usable for lensing. Image credit: David Kipping.

The Hill Sphere distance is benignly close compared to the gravitational lens at 550 AU, which means it offers an interesting observing possibility at a distance we’re experienced at reaching — the James Webb Space Telescope will work within this range. If we can truly exploit this phenomenon, we can get amplifications in the range of 22,500, Kipping estimates. This is actually a conservative estimate, but probably necessarily so, as he explains in his email:

“Now the Earth is much smaller than the Sun, so the amplification is not comparable with FOCAL but still impressive. I compute it is 45,000 after accounting for clouds and extinction losses, and is fairly wavelength independent beyond a micron (refraction doesn’t change much beyond 1micron). I think you might lose up to half of that due to half of the Earth being in daylight, which obviously would be a bright noise source. So removing that, I think 22,500 amplification is more realistic. That means you would turn a 1 meter detector into a ~150 meter effective aperture, amplifying sources by nearly 11 magnitudes.”

A one-meter detector thus gathers light equivalent to a 150-meter space telescope, without, of course, the vast cost of the latter (remember that even the JWST, a 6.5-meter instrument, has already incurred costs in the range of $10 billion).
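
The arithmetic behind those figures is straightforward: amplification is a ratio of collecting areas, so the effective aperture scales as its square root, and the gain in magnitudes is 2.5 log₁₀ of the amplification. A quick check in Python:

```python
import math

amplification = 22_500  # Kipping's conservative figure after daylight losses
detector_diameter_m = 1.0

# Amplification is a ratio of collecting areas, so diameter scales as sqrt
effective_aperture_m = detector_diameter_m * math.sqrt(amplification)
gain_magnitudes = 2.5 * math.log10(amplification)

print(effective_aperture_m)        # 150.0
print(round(gain_magnitudes, 1))   # 10.9, i.e. "nearly 11 magnitudes"
```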

But how would we use an effect that allowed us to look only at whatever happens to be behind the Earth during the observing period? I want to talk about this paper more tomorrow, and also alert you to Kipping’s video description of the idea. There are also obvious issues having to do with atmospheric effects and questions about occultation methods in this work. But there are enough serious advantages — we’ll look at several more tomorrow — to make us delve deeper.

The paper is Kipping, “The ‘Terrascope’: On the Possibility of Using the Earth as an Atmospheric Lens,” accepted at Publications of the Astronomical Society of the Pacific (abstract).


Summer Break

And boy do I need it! See you in two weeks.


TESS: Concluding First Year of Observations

If it seemed amazing to me that 50 years had gone by since Apollo 11, it surprises me as well to realize that, on a much shorter scale, the Transiting Exoplanet Survey Satellite (TESS) has been at work for a full year. In a recent news release, NASA is calling this “the most comprehensive planet-hunting expedition ever undertaken,” presumably a nod to the mission’s broad sky coverage as opposed to the sharply confined field of view of the Kepler mission.

Whereas Kepler took a ‘long stare’ at its starfield in Cygnus and Lyra, TESS keeps shifting its view, looking at a 24-by-96 degree section of sky for 27 days at a time. Moreover, TESS scientists are homing in on stars much closer to our Solar System. While Kepler was looking along the Orion arm of the galaxy at stars generally between 600 and 3,000 light years out (more distant stars were too faint to observe transit lightcurves), TESS puts the emphasis on stars closer than 300 light years, though with a similar method of looking for transits. The mission will wind up studying 85 percent of the sky, an area 350 times greater than Kepler monitored.
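
Those coverage numbers check out on the back of an envelope. Here is a sketch in Python, assuming a Kepler field of view of roughly 100 square degrees (figures between about 100 and 115 are commonly quoted):

```python
import math

WHOLE_SKY_SQ_DEG = 129600 / math.pi   # 4*pi steradians ≈ 41,253 square degrees
KEPLER_FIELD_SQ_DEG = 100             # assumed; ~100-115 deg^2 commonly quoted

tess_coverage = 0.85 * WHOLE_SKY_SQ_DEG
ratio = tess_coverage / KEPLER_FIELD_SQ_DEG

print(f"TESS covers {tess_coverage:,.0f} sq deg, ~{ratio:.0f}x Kepler's field")
```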

George Ricker, TESS principal investigator at MIT, is thinking the mission he leads has had an outstanding first year:

“The pace and productivity of TESS in its first year of operations has far exceeded our most optimistic hopes for the mission. In addition to finding a diverse set of exoplanets, TESS has discovered a treasure trove of astrophysical phenomena, including thousands of violently variable stellar objects.”

That last bit is a nod to the fact that even as TESS hunts exoplanets, beginning with the southern sky in July of 2018, it also has been on the lookout for supernovae and other deep sky objects within its line of sight. The exoplanet haul in the past year includes 21 planets, with 959 candidates still waiting for confirmation by ground-based telescopes (the candidate list will swell enormously as the voluminous data yet to be analyzed comes into play). Its first year concluded, TESS is now looking at the northern sky. Bear in mind that the sections of sky TESS looks at can overlap — some parts of the sky thus wind up being observed for almost a year.

This is helpful, because an area near the poles in its observational ‘sphere’ will be under constant observation, producing targets for follow-up with the James Webb Space Telescope. The video below is useful for illustrating the TESS sky-coverage technique. Have a look, while pondering the words of Padi Boyd, a TESS project scientist at NASA GSFC:

“Kepler discovered the amazing result that, on average, every star system has a planet or planets around it. TESS takes the next step. If planets are everywhere, let’s find those orbiting bright, nearby stars because they’ll be the ones we can now follow up with existing ground and space-based telescopes, and the next generation of instruments for decades to come.”

Among the early TESS catches:

  • HD 21749c, the first Earth-size planet the mission has found. The world orbits a K-class star with about 70 percent of the mass of the Sun, located 53 light years away in the constellation Reticulum, one of two planets identified in this system;
  • A number of multi-planet systems, like that around L98-59, which includes a planet (L98-59b) between the size of Earth and Mars, the smallest yet found by TESS. Here the host star is an M-dwarf about a third the mass of the Sun, 35 light years away in the constellation Volans;
  • Three exocomets identified in the Beta Pictoris system. A comet’s lightcurve differs significantly from that of a transiting planet because of the extended cometary tail. These discoveries demonstrate the ability of TESS to identify tiny objects around young, bright stars, and should lead to future exocomet detections that can supply information about planet formation;
  • Six supernovae occurring in other galaxies, among them ASASSN-18rn, ASASSN-18tb and ATLAS18tne, found before ground-based surveys could identify them.

Image: Astronomers have found clear observational evidence of exocomets around the bright star Beta Pictoris, located some 65 light-years from Earth. At just 20 million years old, Beta Pictoris is relatively young, meaning it’s still surrounded by a disk of gas and dust known as a protoplanetary disk, seen here in this artist’s concept. Credit: NASA/FUSE/Lynette Cook.

We’re extremely early in the analysis of TESS data, considering that an object must make three transits to be considered an exoplanet candidate, after which a number of additional checks remain before the object is submitted for study by ground-based telescopes. When the dust settles, TESS is expected to deliver more than 20,000 exoplanets, dozens of them the size of Earth and up to 500 planets less than twice the size of Earth. Of the total haul, scientists anticipate that the observatory will find about 17,000 planets larger than Neptune.

“The team is currently focused on finding the best candidates to confirm by ground-based follow-up,” said Natalia Guerrero, who manages the team in charge of identifying exoplanet candidates at MIT. “But there are many more potential exoplanet candidates in the data yet to be analyzed, so we’re really just seeing the tip of the iceberg here. TESS has only scratched the surface.”


VERITAS: Strengthening the Optical SETI Search

Breakthrough Listen has just announced a new optical SETI effort in partnership with the VERITAS Collaboration. The news took me by surprise, for VERITAS (Very Energetic Radiation Imaging Telescope Array System) generally deals in high-energy astrophysics, with a focus on gamma rays, which signal their presence through flashes of Cherenkov radiation when they strike the Earth’s atmosphere. Here, the array is being used to look for technosignatures, as Andrew Siemion (UC Berkeley SETI Research Center) explains:

“Breakthrough Listen is already the most powerful, comprehensive, and intensive search yet undertaken for signs of intelligent life beyond Earth. Now, with the addition of VERITAS, we’re sensitive to an important new class of signals: fast optical pulses. Optical communication has already been used by NASA to transmit high definition images to Earth from the Moon, so there’s reason to believe that an advanced civilization might use a scaled-up version of this technology for interstellar communication.”

Image: View of the Fred Lawrence Whipple Observatory basecamp and the VERITAS array. Credit: VERITAS.

So the search for faint optical flashes that could signal the presence of an extraterrestrial civilization deepens, complementing the optical SETI work currently underway at Breakthrough Listen as well as its ongoing survey at radio frequencies. VERITAS brings four 12-meter telescopes located at the basecamp of the Fred Lawrence Whipple Observatory on Mount Hopkins in Arizona into the mix. This is quite an exoplanet venue: The observatory has facilities at different elevations, including exoplanet arrays for HAT (Hungarian-made Automated Telescope), the MEarth project and MINERVA, all three of these robotic.

In the Breakthrough Listen effort, VERITAS will be looking for pulsed optical beacons with durations as short as several nanoseconds, for, at timescales like these, an artificial beacon could outshine any stars located in the same region of sky. All four telescopes will be used simultaneously, which should help screen out false positive detections.

Although I hadn’t realized it until looking further into VERITAS, the array has already seen use in a search of Boyajian’s Star for such pulses (see Abeysekara et al., “A Search for Brief Optical Flashes Associated with the SETI Target KIC 8462852,” abstract here). You’ll recall that this star has received intense scrutiny because of its unusual pattern of dimming, which did not correspond to planetary transits and raised questions about the source of the lightcurve variations.

Now VERITAS goes to work on stars not already found on Breakthrough Listen’s primary star list. The numbers are striking: Breakthrough Listen calculates that if a laser delivering 500 terawatts in a pulse lasting a few nanoseconds were located at the same distance as Boyajian’s Star (an F3V-class object in Cygnus approximately 1470 light years away) and pointed in our direction, VERITAS would be able to detect it.

Most stars in the Breakthrough Listen target list, however, are considerably closer. Hence the VERITAS search will be sensitive to pulses a factor of 100 to 10,000 fainter still. Thus an array built to study very-high-energy gamma rays proves adaptable to a search for technosignatures, with UC-Santa Cruz physicist David Williams, one of the effort’s leaders, saying “It is impressive how well-suited the VERITAS telescopes are for this project.” Williams will work in collaboration with Jamie Holder (University of Delaware) and Andrew Siemion’s Breakthrough Listen team at UC Berkeley’s SETI Research Center (BSRC).

While we’re on the topic of SETI, let me also call your attention to a new resource that Penn State’s Jason Wright and Alan Reyes have created. Go to the NASA ADS site and include in your search terms ‘bibgroup:SETI’. I just searched, for example, using ‘author:”maccone” bibgroup:SETI’ and came up with 56 hits. SETI has been short on bibliographical resources, so this is promising stuff. You’ll need to familiarize yourself with the search syntax, but it’s not at all difficult, and will reward those looking to firm up a citation or check on the status of a particular scientist’s work. Wright and Reyes have submitted a paper on the bibliography to JBIS. For more, see Towards a Comprehensive Bibliography for SETI.
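
For those who prefer scripting to the web interface, the same bibgroup syntax works against the ADS search API at api.adsabs.harvard.edu. The sketch below only builds the request URL rather than executing it — the endpoint and the `q`, `fl` and `rows` parameters are real, but actually running the query requires a personal API token in an Authorization header:

```python
from urllib.parse import urlencode

ADS_SEARCH_URL = "https://api.adsabs.harvard.edu/v1/search/query"

def seti_bibgroup_query_url(author: str, rows: int = 100) -> str:
    """Build an ADS API URL restricting results to the SETI bibgroup."""
    params = {
        "q": f'author:"{author}" bibgroup:SETI',
        "fl": "bibcode,title,year",  # fields to return
        "rows": rows,
    }
    return f"{ADS_SEARCH_URL}?{urlencode(params)}"

print(seti_bibgroup_query_url("maccone"))
```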

Image: Penn State’s Jason Wright. Credit: PSU.


Sail Deployment: Reflections on LightSail 2

One thing that James E. Webb insisted on during his tenure as NASA administrator was that the space program was larger than an attempt to get humans to the Moon. The man who did so much to ensure that Apollo would succeed, and who will be rightfully honored in the form of the James Webb Space Telescope, was a proponent of exploration throughout the Solar System through robotic craft, and weather and communications satellites that would become part of a permanent reliance on a growing space infrastructure. Marc Millis noted some of the results in his recent essay.

For while the frustration of abandoning the Moon in the 1970s lingers, we do have over 4600 spacecraft in Earth orbit, many of them doing the kind of work Webb envisioned. We’ve completed the initial reconnaissance of the Solar System and made our first tentative ventures into the Kuiper Belt and out past the heliopause. We’re charting exoplanets and looking to explore Saturn’s largest moon. So these are things to keep in mind when frustration begins to build.

The anniversary of Apollo’s first landing was a time for looking back, but The Planetary Society has just reminded us that we have to keep looking ahead as well. With the successful deployment of the solar sail aboard LightSail 2, we are seeing the kind of change Millis talked about in action. Much of the space business has turned commercial, while we continue to sort out how to handle the change (SLS or Falcon Heavy?), and in the midst of this, private organizations using off-the-shelf hardware can produce crowd-funded missions of real value.

We’re in that confusing time — one that future historians will be able to sort out more readily than we can define it today — when the pace of technological change drives new models for the accomplishment of grand goals. How we interrelate big government projects with corporate space activities and private contributions will define an era that will one day have its own name, just as the 1960s could be partially defined by the term ‘Space Race.’ And it’s understandable that Planetary Society CEO Bill Nye should be proud of what his organization has accomplished: “We are advancing space science and exploration,” says Nye. “We are democratizing space. We are innovating.”

Apollo 8 gave us gorgeous photographs of our planet seen entire, a blue and brown crystal filling the frame. Now we have LightSail 2’s view of Earth, showing vast portions of the Pacific Ocean and part of the North American landmass. CubeSats in their various configurations make it possible for organizations like The Planetary Society as well as universities and other private groups to contemplate missions that move the ball forward. In the case of LightSail 2, we will have learned more about sail deployment, and orbit raising by the pressure of sunlight alone.

Image: This image of Earth, which shows the Pacific Ocean with Baja California and Mexico on the right, was captured by LightSail 2 on 18 July 2019 at 21:45 UTC while the spacecraft was in range of its ground station at Cal Poly San Luis Obispo in California. Though LightSail 2’s altitude is only 720 kilometers, its 185-degree, wide-angle camera lenses allow it to capture horizon-to-horizon Earth imagery. Credit: The Planetary Society.

The deployment command to LightSail 2 went out at about 1445 EDT (1845 UTC) on the 23rd, with the momentum wheel, responsible for orienting the sail in relation to the Sun, spinning up successfully. Now we wait for images of the deployed sail, which should be downloaded today.

Here’s a photo of the LightSail 2 team set up for sail deployment. It’s impossible to ignore the contrast between the Mission Control operations we see at NASA (think of the operation Chris Kraft ran!) and the ad hoc effort at Cal Poly. What makes this possible is vision, public financial support and technology trends working in favor of small, light spacecraft, not to mention gritty persistence. I’ll feel better when I’ve seen actual sail images, but for right now, things look good, and the entire Planetary Society team deserves our congratulations.

Image: This image shows The Planetary Society’s LightSail 2 team on console prior to sail deployment on 23 July 2019 at the Cal Poly CubeSat lab in San Luis Obispo, California. From left: Barbara Plante, Founder and President, Boreal Space; Alex Diaz, Avionics Engineer, Ecliptic Enterprises Corporation; John Bellardo, Associate Professor, Cal Poly San Luis Obispo; Dave Spencer, LightSail Project Manager, Associate Professor at Purdue University; Bruce Betts, LightSail Program Manager, Planetary Society Chief Scientist. Credit: The Planetary Society.

You can track LightSail 2’s condition on its own Mission Control page, which offers data on temperature, degree of rotation, control mode and location over the Earth. 40,000 private donations (totalling in the region of $7 million) went into LightSail 2 over 10 years. The results of the effort will also feed a NASA project called Near Earth Asteroid Scout, which will likewise employ CubeSat technology to visit an asteroid early in the 2020s. Launched by a Falcon Heavy, LightSail 2 emphasizes today’s mix of commercial, corporate and private effort.

I fall back on what Marc Millis said in these pages on Monday:

It is my hope that progress will continue along all these fronts and improve the human condition. The next steps toward the Moon, Mars and recreational spaceflight will usher in a new era, a suitable name for which will probably be conceived years later. It’s certainly no longer a “space race” with only one finish line. It is the beginning of a new stage of humanity.