Habitable Zones: A Moving Target

Habitable zones are always easy enough to explain when you invoke the ‘Goldilocks’ principle, but every time I talk about these matters there’s always someone who wants to know how we can speak about places being ‘not too hot, not too cold, but just right.’ After all, we’re a sample of one, and why shouldn’t there be living creatures beneath icy ocean crusts or on worlds hotter than we could tolerate? I always point out that we have to work with what we know, that water and carbon-based life are what we’re likely to be able to detect, and that we need to fund the missions to find it.

The last word on habitable zone models has for years been the Kasting, Whitmire and Reynolds paper “Habitable Zones around Main Sequence Stars.” Now Ravi Kopparapu (Penn State) has worked with Kasting and a team of researchers to tune up the older model, giving us new boundaries based on more recent insights into how water and carbon dioxide absorb light. Both models work with well-defined boundaries, the inner edge of the habitable zone being determined by a ‘moist greenhouse effect,’ where the stratosphere becomes saturated with water and hydrogen begins to escape into space.

The outer boundary is defined by the ‘maximum greenhouse limit,’ where the greenhouse effect fails as CO2 begins to condense out of the atmosphere and the surface becomes too cold for liquid water. When worked out for our own Solar System in terms of astronomical units, the 1993 model showed the habitable zone parameters extending from 0.95 to 1.67 AU. Earth was thus near the inner edge.

The new work improves the underlying climate model and provides revised estimates for the habitable zones around not just Sun-like G-class stars but F, K and M stars as well. The definition draws on the atmospheric databases HITRAN (high-resolution transmission molecular absorption) and HITEMP (high-temperature spectroscopic absorption parameters), which characterize how carbon dioxide and water vapor absorb radiation in planetary atmospheres. Revisions to these databases allow the authors to move the HZ boundaries farther from their stars than before.


Image: An artist’s conception of Kepler-22b, once thought to be positioned in its star’s habitable zone. New work on habitable zones suggests the planet is actually too hot to be habitable. Credit: NASA/Ames/JPL-Caltech.

This looks to be an important revision, one that people like Rory Barnes (University of Washington) are already calling ‘the new gold standard for the habitable zone’ (see Earth and others lose status as Goldilocks worlds in New Scientist). In Solar System terms, the limits now become 0.99 AU and 1.70 AU. The Earth thus moves closer to the inner edge of the habitable zone, prompting the authors to point out an important caveat of their analysis, namely that it does not factor in the effect of clouds:

…this apparent instability is deceptive, because the calculations do not take into account the likely increase in Earth’s albedo that would be caused by water clouds on a warmer Earth. Furthermore, these calculations assume a fully saturated troposphere that maximizes the greenhouse effect. For both reasons, it is likely that the actual HZ inner edge is closer to the Sun than our moist greenhouse limit indicates. Note that the moist greenhouse in our model occurs at a surface temperature of 340 K. The current average surface temperature of the Earth is only 288 K. Even a modest (5-10 degree) increase in the current surface temperature could have devastating affects on the habitability of Earth from a human standpoint. Consequently, though we identify the moist greenhouse limit as the inner edge of the habitable zone, habitable conditions for humans could disappear well before Earth reaches this limit.
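For a rough feel of how limits like these translate to other stars, here is a minimal sketch that simply scales the revised solar-system boundaries by the square root of stellar luminosity. The full Kopparapu et al. parameterization also corrects the stellar flux limits for effective temperature, which matters most for the cool stars discussed below; that term is omitted here, and the 0.013 solar-luminosity example star is an assumed value, not one drawn from the paper.

```python
# Minimal sketch: scale the revised solar-system HZ limits by luminosity alone,
# d = d_sun * sqrt(L / L_sun). The full Kopparapu et al. model also corrects
# the stellar flux limits for effective temperature; that term is omitted here.
import math

INNER_AU, OUTER_AU = 0.99, 1.70   # revised solar-system limits quoted above

def hz_limits(luminosity_lsun):
    """Approximate habitable zone boundaries in AU for a star of given luminosity."""
    scale = math.sqrt(luminosity_lsun)
    return INNER_AU * scale, OUTER_AU * scale

print(hz_limits(1.0))     # the Sun: (0.99, 1.70)
print(hz_limits(0.013))   # an assumed 0.013 L_sun M-dwarf: roughly (0.11, 0.19)
```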

While the small change to the Earth’s position in the habitable zone is getting most of the press attention, I’m more interested in what the new numbers say about M-dwarfs. These small red stars would have habitable zones close enough to the star that the likelihood of a transit increases. The 1993 habitable zone work did not model M-dwarfs with effective temperatures lower than 3700 K whereas the new work takes effective temperatures down to 2600 K. In an article run by NBC News, Abel Mendez (University of Puerto Rico at Arecibo) mentions that Gliese 581d, thought to skirt the outer limits of its star’s habitable zone, may now move toward the habitable zone’s center, increasing the possibility of life emerging there. Other planets catalogued by the Planetary Habitability Laboratory at UPR will be affected as some thought to have been in the habitable zone may move out of it. See A New Habitable Zone for more.

There are other factors to consider about M-dwarfs, especially the fact that planets close enough to these stars to be in the habitable zone are most likely tidally locked, presenting the same face to the star at all times. Neither the 1993 model nor this revised one does well at representing a tidally locked world, and the authors say they have not tried to explore synchronously rotating planets in different parts of the habitable zone around M-dwarfs. The paper does note that a planet near the outer edge of the HZ with a dense CO2 atmosphere should be more effective at moving heat to the night side, perhaps increasing the chances of habitability.

The overall effect of adjusting our parameters for habitable zones around the various stellar classes will be to improve our accuracy as we look toward producing lists of targets for future space-based observatories. The authors note that the James Webb Space Telescope, for example, is thought to be marginally capable of taking a transit spectrum of an Earth-like planet orbiting an M-dwarf. We’ll need the maximum chance for success before committing resources to specific planets once we get into the business of trying to identify biomarkers on possibly habitable worlds.

The paper is Kopparapu et al., “Habitable Zones Around Main-Sequence Stars: New Estimates,” accepted at The Astrophysical Journal (preprint). Note that a habitable zone calculator based on this work is available online. The 1993 paper is Kasting, Whitmire and Reynolds, “Habitable Zones around Main Sequence Stars,” Icarus 101 (1993), pp. 108-128 (full text).


TW Hydrae: An Infant Planetary System Analyzed

You have to like the attitude of Thomas Henning (Max-Planck-Institut für Astronomie). The scientist is a member of a team of astronomers whose recent work on planet formation around TW Hydrae was announced this afternoon. Their work used data from ESA’s Herschel space observatory, which has the sensitivity at the needed wavelengths for scanning TW Hydrae’s protoplanetary disk, along with the capability of taking spectra for the telltale molecules they were looking for. But getting observing time on a mission like Herschel is not easy and funding committees expect results, a fact that didn’t daunt the researcher. Says Henning, “If there’s no chance your project can fail, you’re probably not doing very interesting science. TW Hydrae is a good example of how a calculated scientific gamble can pay off.”

I would guess the relevant powers that be are happy with this team’s gamble. The situation is this: TW Hydrae is a young star of about 0.6 Solar masses some 176 light years away. The proximity is significant: This is the closest protoplanetary disk to Earth with strong gas emission lines, some two and a half times closer than the next possible subjects, and thus intensely studied for the insights it offers into planet formation. Out of the dense gas and dust here we can assume that tiny grains of ice and dust are aggregating into larger objects and one day planets.


Image: Artist’s impression of the gas and dust disk around the young star TW Hydrae. New measurements using the Herschel space telescope have shown that the mass of the disk is greater than previously thought. Credit: Axel M. Quetz (MPIA).

The challenge of TW Hydrae, though, has been that the total mass of the molecular hydrogen gas in its disk has remained unclear, leaving us without a good idea of the particulars of how this infant system might produce planets. Molecular hydrogen does not emit detectable radiation, while basing a mass estimate on carbon monoxide is hampered by the opacity of the disk. For that matter, basing a mass estimate on the thermal emissions of dust grains forces astronomers to make guesses about the opacity of the dust, so that we’re left with uncertainty — mass values have been estimated anywhere between 0.5 and 63 Jupiter masses, and that’s a lot of play.

Error bars like these have left us guessing about the properties of this disk. The new work takes a different tack. While ordinary hydrogen molecules don’t emit measurable radiation, hydrogen deuteride molecules, in which one of the two nuclei contains a neutron as well as a proton, emit significant amounts of radiation, with an intensity that depends upon the temperature of the gas. Because the ratio of deuterium to hydrogen is relatively constant in the solar neighborhood, a detection of hydrogen deuteride can be multiplied out to produce a solid estimate of the amount of molecular hydrogen in the disk.
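As a back-of-the-envelope illustration of that ‘multiplying out,’ consider the sketch below. The deuterium abundance and the example molecule count are assumed, representative values chosen for illustration, not numbers taken from the paper.

```python
# Sketch of the scaling step: infer the number of HD molecules from the line
# emission, then convert to an H2 gas mass using an assumed deuterium abundance.
D_TO_H = 1.5e-5            # assumed interstellar deuterium-to-hydrogen ratio
HD_PER_H2 = 2.0 * D_TO_H   # each H2 carries two H atoms, so n(HD)/n(H2) ~ 2 D/H
M_H2_AMU = 2.016           # mass of an H2 molecule in atomic mass units
AMU_KG = 1.6605e-27        # kilograms per atomic mass unit
M_JUP_KG = 1.898e27        # Jupiter mass in kilograms

def h2_mass_from_hd(n_hd_molecules):
    """Convert a count of HD molecules into an H2 gas mass in Jupiter masses."""
    n_h2 = n_hd_molecules / HD_PER_H2
    return n_h2 * M_H2_AMU * AMU_KG / M_JUP_KG

# An illustrative count of ~1e51 HD molecules implies roughly 60 Jupiter masses of H2.
print(h2_mass_from_hd(1e51))
```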

The Herschel data allow the astronomers to set a lower limit for the disk mass at 52 Jupiter masses, the most useful part of this being that this estimate has an uncertainty ten times lower than the previous results. A disk this massive should be able to produce a planetary system larger than the Solar System, which scientists believe was produced by a much lighter disk. When Henning spoke about taking risks, he doubtless referred to the fact that this was only the second time hydrogen deuteride has been detected outside the Solar System. The pitch to the Herschel committee had to be persuasive to get them to sign off on so tricky a detection.

But 36 Herschel observations (with a total exposure time of almost seven hours) allowed the team to find the hydrogen deuteride they were looking for in the far-infrared. Water vapor in the atmosphere absorbs this kind of radiation, which is why a space-based detection is the only reasonable choice, although the team evidently considered the flying observatory SOFIA, a platform on which they were unlikely to get approval given the problematic nature of the observation. Now we have much better insight into a budding planetary system that is taking the same route our own system did over four billion years ago. What further gains this will help us achieve in testing current models of planet formation will be played out in coming years.

The paper is Bergin et al., “An Old Disk That Can Still Form a Planetary System,” Nature 493 (31 January 2013), pp. 644–646 (preprint). Be aware as well of Hogerheijde et al., “Detection of the Water Reservoir in a Forming Planetary System,” Science 334 (2011), p. 338. The latter, many of whose co-authors also worked on the Bergin paper, used Herschel data to detect cold water vapor in the TW Hydrae disk, with this result:

Our Herschel detection of cold water vapor in the outer disk of TW Hya demonstrates the presence of a considerable reservoir of water ice in this protoplanetary disk, sufficient to form several thousand Earth oceans worth of icy bodies. Our observations only directly trace the tip of the iceberg of 0.005 Earth oceans in the form of water vapor.

Clearly, TW Hydrae has much to teach us.

Addendum: This JPL news release notes that although a young star, TW Hydrae had been thought to be past the stage of making giant planets:

“We didn’t expect to see so much gas around this star,” said Edwin Bergin of the University of Michigan in Ann Arbor. Bergin led the new study appearing in the journal Nature. “Typically stars of this age have cleared out their surrounding material, but this star still has enough mass to make the equivalent of 50 Jupiters,” Bergin said.


Explaining Retrograde Orbits

While radial velocity and transit methods seem to get most of the headlines in exoplanet work, there are times when direct imaging can clarify things found by the other techniques. A case in point is the HAT-P-7 planetary system, some 1000 light years from Earth in the constellation Cygnus. HAT-P-7b was interesting enough to begin with given its retrograde orbit, meaning it moves around the primary in the direction opposite to its star’s spin. Learning how a planet can emerge in a retrograde orbit demands learning more about the system at large, which is why scientists from the University of Tokyo began taking high-contrast images of the HAT-P-7 system.

It had been Norio Narita (National Astronomical Observatory of Japan) who, in 2008, discovered evidence of HAT-P-7b’s retrograde orbit. Narita’s team has now used adaptive optics at the Subaru Telescope to measure the proper motion of what turns out to be a small companion star now designated HAT-P-7B. The team was also able to confirm a second planet candidate that had been first reported in 2009. The latter, a gas giant dubbed HAT-P-7c, orbits between the orbits of the retrograde planet (HAT-P-7b) and the newfound companion star.


Image: HAT-P-7 and its companion star in images obtained with the Subaru Telescope. IRCS (Infrared Camera and Spectrograph) captured the images in J band (1.25 micron), K band (2.20 micron), and L’ band (3.77 micron) in August 2011, and HiCIAO captured the image in H band (1.63 micron) in July 2012. North is up and east is left. The star in the middle is the central star HAT-P-7, and the one on the east (left) side is the companion star HAT-P-7B, which is separated from HAT-P-7 by more than about 1200 AU. The companion is a low-mass star with only about a quarter of the mass of the Sun. The object on the west (right) side is a very distant, unrelated background star. (Credit: NAOJ)

This Subaru Telescope news release notes the current thinking of Narita’s team on how the retrograde orbit emerged in this system. Key to the puzzle is the Kozai mechanism, first described in the 1960s, which has been invoked to explain the orbits of everything from irregular planetary moons to trans-Neptunian objects, and has been applied to various exoplanets. The Kozai mechanism allows orbital eccentricity and orbital inclination to be traded against each other, with perturbations leading to the periodic exchange of the two. In other words, what had been a circular but highly inclined orbit can become an eccentric orbit at a lower inclination.
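In its simplest textbook form (a distant perturber and quadrupole-order dynamics), the mechanism conserves the quantity sqrt(1 - e^2) cos i, so an orbit that starts circular but inclined by more than about 39 degrees is driven to a predictable maximum eccentricity. The sketch below illustrates that classical relation; it is a generic illustration, not a model of the HAT-P-7 system.

```python
# Classical Kozai relation (test-particle, quadrupole order): an initially
# circular orbit inclined by i0 to the perturber's plane reaches a maximum
# eccentricity e_max = sqrt(1 - (5/3) cos^2 i0), provided i0 exceeds ~39.2 deg.
import math

def kozai_e_max(i0_deg):
    """Maximum eccentricity reached from an initially circular, inclined orbit."""
    c = math.cos(math.radians(i0_deg))
    val = 1.0 - (5.0 / 3.0) * c * c
    return math.sqrt(val) if val > 0 else 0.0   # below the critical angle, no cycles

for i0 in (40, 50, 60, 70, 80):
    print(f"i0 = {i0} deg  ->  e_max = {kozai_e_max(i0):.2f}")
```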

Ultimately, the effects can be far-reaching as planetary orbits change over time. Can the process be sequential? In the HAT-P-7 system, the researchers suggest that the companion star (HAT-P-7B) affected the orbit of the newly confirmed planet HAT-P-7c through the Kozai mechanism, causing orbital eccentricity to be exchanged for inclination. With its orbit now significantly inclined, HAT-P-7c then affected the inner planet (HAT-P-7b) through the same mechanism, causing its orbit to become retrograde.

The researchers go on to make the case for direct imaging to check for stellar companions that can have a significant effect on planetary orbits. From the paper:

Thus far, the existence of possible faint outer companions around planetary systems has not been checked and is often overlooked, even though the Kozai migration models assume the presence of an outer companion. To further discuss planetary migration using the information of the RM [Rossiter-McLaughlin] effect / spot-crossing events as well as significant orbital eccentricities, it is important to incorporate information regarding the possible or known existence of binary companions. This is also because a large fraction of the stars in the universe form binary systems… Thus it would be important to check the presence of faint binary companions by high-contrast direct imaging. In addition, if any outer binary companion is found, it is also necessary to consider the possibility of sequential Kozai migration in the system, since planet-planet scattering, if it occurs, is likely to form the initial condition of such planetary migration.

We have much to learn about retrograde orbits, and the rippling effects of the Kozai mechanism are one possibility that will have to be weighed against other observations. Whether the researchers can make this case stick or not, the fact that so many ‘hot Jupiters’ are themselves in highly inclined or even retrograde orbits tells us how important it will be to work these findings into our theories of planet formation and migration. The SEEDS project (Strategic Exploration of Exoplanets and Disks with Subaru Telescope) should prove useful as we continue to work on direct imaging of exoplanets around hundreds of nearby stars.

The paper is Narita et al., “A Common Proper Motion Stellar Companion to HAT-P-7,” Publications of the Astronomical Society of Japan, Vol. 64, L7 (preprint).


Data Storage: The DNA Option

One of the benefits of constantly proliferating information is that we’re getting better and better at storing lots of stuff in small spaces. I love the fact that when I travel, I can carry hundreds of books with me on my Kindle, and to those who say you can only read one book at a time, I respond that I like the choice of books always at hand, and the ability to keep key reference sources in my briefcase. Try lugging Webster’s 3rd New International Dictionary around with you and you’ll see why putting it on a Palm III was so delightful about a decade ago. There is, alas, no Kindle or Nook version.

Did I say information was proliferating? Dave Turek, a designer of supercomputers for IBM (the chess machine Deep Blue, which defeated world champion Garry Kasparov, is among his creations), wrote last May that from the beginning of recorded time until 2003, humans had created five billion gigabytes of information (five exabytes). In 2011, that amount of information was being created every two days. Turek’s article says that by 2013, IBM expects that interval to shrink to every ten minutes, which calls for new computing designs that can handle data densities of all but unfathomable proportions.
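The arithmetic behind those intervals is easy to check; here is a quick sketch of the implied yearly totals, using the figures quoted above.

```python
# Quick check of the growth figures quoted above: 5 exabytes per interval.
CHUNK_EB = 5.0
minutes_per_year = 365 * 24 * 60
eb_per_year_2011 = CHUNK_EB * minutes_per_year / (2 * 24 * 60)   # one chunk every 2 days
eb_per_year_2013 = CHUNK_EB * minutes_per_year / 10              # one chunk every 10 minutes
print(eb_per_year_2011)   # ~913 exabytes per year
print(eb_per_year_2013)   # ~263,000 exabytes (roughly 263 zettabytes) per year
```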

A recent post on Smithsonian.com’s Innovations blog captures the essence of what’s happening:

But how is this possible? How did data become such digital kudzu? Put simply, every time your cell phone sends out its GPS location, every time you buy something online, every time you click the Like button on Facebook, you’re putting another digital message in a bottle. And now the oceans are pretty much covered with them.

And that’s only part of the story. Text messages, customer records, ATM transactions, security camera images…the list goes on and on. The buzzword to describe this is “Big Data,” though that hardly does justice to the scale of the monster we’ve created.

The article rightly notes that we haven’t begun to catch up with our ability to capture information, which is why, for example, so much fertile ground for exploration can be found inside the data sets from astronomical surveys and other projects that have been making observations faster than scientists can analyze them. Learning how to work our way through gigantic databases is the premise of Google’s BigQuery software, which is designed to comb terabytes of information in seconds. Even so, the challenge is immense. Consider that the algorithms used by the Kepler team, sharp as they are, have been usefully supplemented by human volunteers working with the Planet Hunters project, who sometimes see things that computers do not.


But as we work to draw value out of the data influx, we’re also finding ways to translate data into even denser media, a prerequisite for future deep space probes that will, we hope, be gathering information at faster clips than ever before. Consider work at the European Bioinformatics Institute in the UK, where researchers Nick Goldman and Ewan Birney have managed to code Shakespeare’s 154 sonnets into DNA, in which form a single sonnet weighs 0.3 millionths of a millionth of a gram. You can read about this in Shakespeare and Martin Luther King demonstrate potential of DNA storage, an article on their paper in Nature which just ran in The Guardian.

Image: Coding The Bard into DNA makes for intriguing data storage prospects. This portrait, possibly by John Taylor, is one of the few images we have of the playwright (now on display at the National Portrait Gallery in London).

Goldman and Birney are talking about DNA as an alternative to spinning hard disks and newer methods of solid-state storage. Their work is given punch by the calculation that a gram of DNA could hold as much information as more than a million CDs. Here’s how The Guardian describes their method:

The scientists developed a code that used the four molecular letters or “bases” of genetic material – known as G, T, C and A – to store information.

Digital files store data as strings of 1s and 0s. The Cambridge team’s code turns every block of eight numbers in a digital code into five letters of DNA. For example, the eight digit binary code for the letter “T” becomes TAGAT. To store words, the scientists simply run the strands of five DNA letters together. So the first word in “Thou art more lovely and more temperate” from Shakespeare’s sonnet 18, becomes TAGATGTGTACAGACTACGC.
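To make the idea concrete, here is a toy version of this kind of encoding. It is not the authors’ actual scheme, which uses a Huffman code to map bytes into base-3 digits of varying length, but it captures the two essential moves: write the data in base 3, then choose each DNA letter so that it differs from the one before it, avoiding the repeated bases that complicate synthesis and sequencing.

```python
# Toy DNA encoding sketch (not the Goldman/Birney scheme, which uses a Huffman
# code): write each byte as six base-3 digits, then map each digit to one of
# the three bases that differ from the previous base, so no letter repeats.

def byte_to_trits(b):
    """Return a byte (0-255) as six base-3 digits, most significant first."""
    trits = []
    for _ in range(6):
        trits.append(b % 3)
        b //= 3
    return trits[::-1]

def encode(data, prev='A'):
    """Encode bytes into a DNA string with no repeated adjacent bases."""
    out = []
    for byte in data:
        for t in byte_to_trits(byte):
            choices = [base for base in 'ACGT' if base != prev]
            prev = choices[t]
            out.append(prev)
    return ''.join(out)

print(encode(b'Thou art'))
```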

The converted sonnets, along with DNA codings of Martin Luther King’s ‘I Have a Dream’ speech and the famous double helix paper by Francis Crick and James Watson, were sent to Agilent, a US firm that makes physical strands of DNA for researchers. The test tube Goldman and Birney got back held just a speck of DNA, but running it through a gene sequencing machine, the researchers were able to read the files again. This parallels work by George Church (Harvard University), who last year preserved his own book Regenesis via DNA storage.

The differences between DNA and conventional storage are striking. From the paper in Nature (thanks to Eric Davis for passing along a copy):

The DNA-based storage medium has different properties from traditional tape- or disk-based storage. As DNA is the basis of life on Earth, methods for manipulating, storing and reading it will remain the subject of continual technological innovation. As with any storage system, a large-scale DNA archive would need stable DNA management and physical indexing of depositions. But whereas current digital schemes for archiving require active and continuing maintenance and regular transferring between storage media, the DNA-based storage medium requires no active maintenance other than a cold, dry and dark environment (such as the Global Crop Diversity Trust’s Svalbard Global Seed Vault, which has no permanent on-site staff) yet remains viable for thousands of years even by conservative estimates.

The paper goes on to describe DNA as ‘an excellent medium for the creation of copies of any archive for transportation, sharing or security.’ The problem today is the high cost of DNA production, but the trends are moving in the right direction. Couple this with DNA’s incredible storage possibilities — one of the Harvard researchers working with George Church estimates that the total of the world’s information could one day be stored in about four grams of the stuff — and you have a storage medium that could handle vast data-gathering projects like those that will spring from the next generation of telescope technology both here on Earth and aboard space platforms.
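Claims like these are easy to sanity-check against the sonnet figure quoted earlier. Assuming a sonnet amounts to roughly 600 bytes of plain text (my assumption, not a number from the paper), the arithmetic looks like this:

```python
# Rough consistency check of the storage-density claims, using the quoted
# sonnet mass of 0.3 millionths of a millionth of a gram (0.3 picograms).
sonnet_bytes = 600.0        # assumed size of one sonnet as plain text
sonnet_grams = 0.3e-12
bytes_per_gram = sonnet_bytes / sonnet_grams
cd_bytes = 700e6            # ~700 MB per CD
print(bytes_per_gram / cd_bytes)   # ~2.9 million CDs' worth of data per gram
```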

The paper is Goldman et al., “Towards practical, high-capacity, low-maintenance information storage in synthesized DNA,” Nature, published online 23 January 2013.


The Velocity of Thought

How fast we go affects how we perceive time. That lesson was implicit in the mathematics of Special Relativity, but at the speed most of us live our lives, easily describable in Newtonian terms, we could hardly recognize it. Get going at a substantial percentage of the speed of light, though, and everything changes. The occupants of a starship moving at close to 90 percent of the speed of light age at half the rate of their counterparts back on Earth. Push them up to 99.999 percent of c and 223 years go by on Earth for every year they experience.
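Those figures fall straight out of the Lorentz factor, as a quick calculation shows:

```python
# Lorentz factor gamma = 1 / sqrt(1 - (v/c)^2): Earth-years per shipboard year.
import math

def gamma(beta):
    """Time-dilation factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

print(gamma(0.866))     # ~2.0: at ~87% of c the crew ages at half the Earth rate
print(gamma(0.99999))   # ~223.6: 223 Earth years pass for each shipboard year
```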

Thus the ‘twin paradox,’ where the starfaring member of the family returns considerably younger than the sibling left behind. Carl Sagan played around with the numbers in the 1960s to show that a spacecraft under a constant acceleration of one g would be able to reach the center of the galaxy in 21 years (ship-time), while tens of thousands of years passed on Earth. Indeed, keep the acceleration constant and our crew can reach the Andromeda galaxy in 28 years, a notion Poul Anderson dealt with memorably in the novel Tau Zero.


Image: A Bussard ramjet in flight, as imagined for ESA’s Innovative Technologies from Science Fiction project. Credit: ESA/Manchu.

Not long after Monday’s post on fast spacecraft I received an email from a young reader who wanted to know a bit more about humans and speed. He had been interested to learn that the fastest man-made object thus far was the Helios II solar probe, while Voyager 1’s 17 kilometers per second make it the fastest probe now leaving the system, well above New Horizons’ anticipated 14 kilometers per second at Pluto/Charon. But that being the case for automated probes, what was the fastest speed ever attained by a human being?

Speeds like this are well below those that cause noticeable relativistic effects, of course, but it’s an interesting question because of how much it changed at the beginning of the 20th Century, so let’s talk about it. Lee Billings recently looked into speed in a fine essay called Incredible Journey: Can We Reach the Stars Without Breaking the Bank? and found that in 1906, a man named Fred Marriott managed to surpass 200 kilometers per hour in (the mind boggles) a steam-powered car at Daytona Beach, Florida. This is worth thinking about because Lee points out that before this time, the fastest anyone could have traveled was 200 kilometers per hour, which happens to be the terminal velocity of the human body as it is slowed by air resistance.
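That 200 kilometers per hour is roughly the terminal velocity of a person falling belly-down, and the standard drag-balance formula gets you into the right neighborhood. The mass, drag coefficient and frontal area below are assumed representative values, not measurements:

```python
# Rough check of the ~200 km/h terminal-velocity figure using the drag balance
# v_t = sqrt(2 m g / (rho * Cd * A)), with assumed body and air parameters.
import math

def terminal_velocity_kmh(mass_kg=80.0, cd=0.7, area_m2=0.7, rho=1.2, g=9.81):
    """Terminal velocity in km/h for the given body and air parameters."""
    v = math.sqrt(2.0 * mass_kg * g / (rho * cd * area_m2))
    return v * 3.6

print(terminal_velocity_kmh())   # ~186 km/h, in the neighborhood of 200 km/h
```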

So the advent of fast machines finally changed the speed record in 1906, and it would be a scant forty years later that Chuck Yeager pushed the X-1 up past 1000 kilometers per hour, faster than the speed of sound. I can remember checking out a library book back in the 1950s called The Fastest Man Alive. Before I re-checked the reference so I could write this post, I was assuming that the book had been about X-15 pilot Scott Crossfield, but I discovered that this 1958 title was actually the story of Frank Kendall Everest, Jr., known as ‘Pete’ to his buddies.


Everest flew in North Africa, Sicily and Italy and went on to complete 67 combat missions in the Pacific theater, ending the war as a prisoner of the Japanese in 1945. If there was an experimental aircraft he didn’t fly in the subsequent decade, I don’t know what it was, but if memory serves, the bulk of The Fastest Man Alive was about his work with the X-2, in which he reached Mach 2.9 in 1956. Everest was one of the foremost of that remarkable breed of test pilots who pushed winged craft close to space in the era before Gagarin.

But to get back to my friend’s question. Lee Billings identifies the fastest humans alive today as ‘three elderly Americans, all of whom Usain Bolt could demolish in a footrace.’ These are the Apollo 10 astronauts, whose fiery re-entry into the Earth’s atmosphere began at 39,897 kilometers per hour, a speed that would take you from New York to Los Angeles in less than six minutes. No one involved with the mission would have experienced relativistic effects that were noticeable, but in the tiniest way the three could be said to be slightly younger than the rest of us thanks to the workings of Special Relativity.

Sometimes time slows in the way we consider our relation to it. I noticed an interesting piece called Time and the End of History Illusion, written for the Long Now Foundation. The essay focuses on a paper recently published in Science that asked participants to evaluate how their lives — their values, ideas, personality traits — had changed over the past decade, and how much they expected to see them change in the next. Out of a statistical analysis of the findings came what the researchers are calling an ‘End of History Illusion.’

The illusion works like this: We tend to look back at our early lives and marvel at our naïveté. How could we not, seeing with a certain embarrassment all the mistakes we made, and knowing how much we have changed, and grown, over the years. One of the study authors, Daniel Gilbert, tells The New York Times, “What we never seem to realize is that our future selves will look back and think the very same thing about us. At every age we think we’re having the last laugh, and at every age we’re wrong.”

The older we get, in other words, the wiser we think we are in relation to our younger selves. We always think that we have finally arrived, that now we see what we couldn’t see before, and assume that we can announce our final judgment about various aspects of our lives. The process seems to be at work not only in our personal lives but in how we evaluate the world around us. How else to explain the certitude behind some of the great gaffes of intellectual history? Think of US patent commissioner Charles Duell, who is supposed to have said in 1899: “Everything that can be invented has been invented.” Or the blunt words of Harry Warner: “Who the hell wants to hear actors talk?”

The Long Now essay quotes Francis Fukuyama, who wrote memorably about the ‘end of history,’ and French philosopher Jean Baudrillard, who saw such ideas as nothing more than an illusion, one made possible by what he called ‘the acceleration of modernity.’ Long Now adds:

Illusion or not, the Harvard study shows that a sense of being at the end of history has real-world consequences: underestimating how differently we’ll feel about things in the future, we sometimes make decisions we later come to regret. In other words, the end of history illusion could be thought of as a lack of long-term thinking. It’s when we fail to consider the future impact of our choices (and imagine alternatives) that we lose all sense of meaning, and perhaps even lose touch with time itself.

We’ve come a long way from my reader’s innocent question about the fastest human being. But I think Long Now is on to something in talking about the dangers of misunderstanding how we may think, and act, in the future. By assuming we have reached some fixed goal of insight, we grant ourselves too many powers, thinking in our hubris that we are wiser than we really are. Time is elastic and can be bent around in interesting ways, as Einstein showed. Time is also deceptive and leads us as we age to become more doctrinaire than can be warranted.

Sometimes, of course, time and memory mingle inseparably. I’m remembering how my mother used to sit on the deck behind her house when I would go over there to make her coffee. We would look into the tangle of undergrowth and trees up the hill as the morning sun sent bright shafts through the foliage, and as Alzheimer’s gradually took her, she would often remark on how tangled the hillside had become. I always assumed she meant that it had become such because she was no longer maintaining it with the steady pruning of her more youthful years.

Then, not long before her death, I suddenly realized that she was not seeing the same hill that I was. At the end of her life, she was seeing the hill in front of her house in a small river town in Illinois. Like her current hill, it rose into the east so that while the house stood in shadow, sunlight would blaze across the Mississippi to paint the farmlands of Missouri on the bright mornings when she would get up to walk to school. When I went back there after her funeral, the hill was still open as she had remembered it, grassy, free of brush, though the house was gone. It was the hill she had returned to in her mind after 94 years, as vividly hers in 2011 as it had been in 1916. In such ways are we all time travelers, moving inexorably at the velocity of thought.
