by Paul Gilster | Jan 31, 2008 | Culture and Society
Do technological cultures survive their growing pains? Species extinction through war or unintended environmental consequences — a cap upon the growth of civilizations — could be one solution to the Fermi question. They’re not here because they’re not there, having left ruined cities and devastated planets in their wake, just as we will. It’s a stark picture whether true or not, one that makes us ponder how the things we do with technology affect our future. Consider the question in terms of time.
The Holocene epoch, in which we live, began about 10,000 BC, incorporating early periods of human technology back to the rise of farming and the growing use of metals. And while there has been scant time for true evolutionary change in animal and plant life during this short period, it is certainly true that the extinctions of many large animals in the transition from the late Pleistocene to the early Holocene not only changed the world through which humans moved but may have been at least partially caused by human activity. Symbolic of the process are mammoths, mastodons, giant sloths and saber-toothed tigers.
Even so, until recent recorded history human actions had not fundamentally changed global environmental conditions. How much of an effect has our species had on the planet, and is that effect now different enough to change our terminology? Geologists at the University of Leicester, picking up on a proposal first made by chemist Paul Crutzen in 2002, now suggest that the Holocene epoch has ended. The new epoch, which they dub the Anthropocene, is the result of significant human actions. Its markers include disturbances to the carbon cycle and global temperature, ocean acidification, changes to sediment erosion and deposition, and species extinctions like those mentioned above.
The Anthropocene formally recognizes a widely perceived reality, the sharp line between the pre-industrial world and the technology-laden planet we now call home, awash with digital tools and freighted with the after-effects of industrial activity. And indeed, the cover of GSA Today (a publication of the Geological Society of America) in which this work appears makes the case rather strongly, showing the high-rise buildings of Shanghai fading out into the distance. It’s a stark reminder of how megacities like this one are transforming the planet.
Image credit: Geological Society of America.
But how do you choose a marker for the onset of an epoch? Specialists call such a marker a Global Stratigraphic Section and Point, or GSSP. With China and India in the midst of their own industrial revolutions, homing in solely on Western history may not be accurate. CO2 levels could be helpful, but global changes in their levels are gradual. The authors’ musing takes in a range of possibilities:
From a practical viewpoint, a globally identifiable level is provided by the global spread of radioactive isotopes created by the atomic bomb tests of the 1960s; however, this postdates the major inflection in global human activity. Perhaps the best stratigraphic marker near the beginning of the nineteenth century has a natural cause: the eruption of Mount Tambora in April 1815, which produced the “year without a summer” in the Northern Hemisphere and left a marked aerosol sulfate “spike” in ice layers in both Greenland and Antarctica and a distinct signal in the dendrochronological record…
Or perhaps, the paper continues, picking a marker at this level of detail is not necessary, as long as the realities of what Paul Crutzen argued in 2002 are acknowledged. The current global environment is dominated by human activity to the point where reconsideration of our terminology must take it into account. The rapid rise of population from the beginning of the industrial revolution in Europe and the exploitation of coal, oil and gas have fueled a planetary transformation. And note this:
The combination of extinctions, global species migrations…, and the widespread replacement of natural vegetation with agricultural monocultures is producing a distinctive contemporary biostratigraphic signal. These effects are permanent, as future evolution will take place from surviving (and frequently anthropogenically relocated) stocks.
A new epoch indeed, the case for which is made powerfully in this paper. The assumption here is that this inevitable transformation is survivable, but then Centauri Dreams is sometimes accused of excess optimism. The reference is Zalasiewicz et al., “Are we now living in the Anthropocene?” GSA Today Vol. 18, Issue 2 (February 2008), pp. 4-8 (abstract).
by Paul Gilster | Jan 30, 2008 | Exotic Physics
Can measuring the positions and velocities of thousands of galaxies provide insight into the nature of dark energy? If so, we may have found a way to study what is perhaps the most puzzling question in astrophysics, the discovery that the expansion of the universe is proceeding faster today than it did in the past. Armchair theorists love dark energy because we know so little about it, and I routinely get e-mails offering to tell me exactly what dark energy is, few of which have any bearing on current observation or theory.
But that’s the way of mysteries — they incite comment — and as mysteries go, dark energy is a big one, perhaps the biggest now stirring the astrophysical cauldron. If dark energy acts as a check on the gravitational pull of all the matter in the cosmos, we’ve got the attention not just of cosmologists but propulsion theorists, who would love to find out how such a repulsive force might work. And if there is no such thing as dark energy, then determining why should tell us much about where and how we need to tweak current theories of gravitation, which also may have propulsion implications.
The European Southern Observatory’s Very Large Telescope array is at the heart of the latest dark energy work, looking at redshift distortions of distant galaxies by the thousands. The work relies on the fact that the expanding universe pushes galaxies away from each other, even as gravity tries to pull them together. Olivier Le Fevre, a member of the large international team behind this study, focuses on its technique:
“By measuring the apparent velocities of large samples of galaxies over the last thirty years, astronomers have been able to reconstruct a three-dimensional map of the distribution of galaxies over large volumes of the Universe. This map revealed large-scale structures such as clusters of galaxies and filamentary superclusters. But the measured velocities also contain information about the local motions of galaxies; these introduce small but significant distortions in the reconstructed maps of the Universe. We have shown that measuring this distortion at different epochs of the Universe’s history is a way to test the nature of dark energy.”
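The distortion Le Fevre describes can be illustrated with a toy model: along the line of sight, a galaxy’s apparent (redshift-space) position is shifted from its true position by its peculiar velocity divided by the Hubble constant. A minimal sketch, with all numbers illustrative rather than drawn from the survey:

```python
# Toy illustration of redshift-space distortion: a galaxy's peculiar
# velocity along the line of sight shifts its apparent position.
# All values here are illustrative, not survey data.

H0 = 70.0  # Hubble constant, km/s per Mpc (assumed value)

def redshift_space_position(true_dist_mpc, v_peculiar_kms):
    """Apparent line-of-sight distance in Mpc, shifted by peculiar velocity."""
    return true_dist_mpc + v_peculiar_kms / H0

# A galaxy 100 Mpc away, falling toward us at 500 km/s (say, into a
# cluster), appears roughly 7 Mpc closer than it really is:
apparent = redshift_space_position(100.0, -500.0)
print(f"apparent distance: {apparent:.1f} Mpc")  # ~92.9 Mpc
```

Mapping many such galaxies squashes or stretches structures along the line of sight, and the strength of that squashing at different epochs is what the team uses as a probe of dark energy.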
With thirteen thousand spectra in a field of view twenty times the size of the full Moon now available, the team can compare its result to the 2dF Galaxy Redshift Survey of the ‘local’ universe, assessing what the comparison tells us about dark energy. What seems to be emerging thus far is a confirmation of the technique, which will now require a set of future measurements to be extended over an area ten times larger than the current field. We are, in other words, still at the point of shaping our tools, and unable to make the definitive call on dark energy vs. competing explanations for what we observe. But shaping our tools is simply part of the necessary and painstaking preliminaries that make all science work.
The paper is Guzzo et al. (and I do mean ‘et al.,’ as there are 51 authors listed!), “A test of the nature of cosmic acceleration using galaxy redshift distortions,” Nature 451 (31 January 2008), pp. 541-544 (abstract). Also be aware of Strauss, “Cosmology: An ancient view of acceleration,” in the same issue (more on this one when I have time to study it).
by Paul Gilster | Jan 29, 2008 | Advanced Rocketry
By Larry Klaes
Tau Zero journalist Larry Klaes takes a look at Mason Peck’s work with reconfigurable space structures. Anyone who ponders the future of large structures in the Solar System — and this might include space-based telescopes, O’Neill habitats or perhaps one day enormous lenses of the sort Robert Forward envisioned — will wonder how such creations can be assembled. Potential solutions may one day grow out of Peck’s work, until recently funded by NIAC. Centauri Dreams also wonders how such theories will be supplemented by nanotechnological techniques that may one day return us to the era of thinking big in environments far from home.
Space is a promising but often difficult environment to work in. A typical spacecraft has to deal with a near vacuum, extreme temperatures, radiation fields, and micrometeoroids. With space ‘starting’ at one hundred kilometers above Earth’s surface, a region attainable at present only with expensive rockets, sending up numerous vehicles that must work autonomously for years while carrying all their own resources merely adds to the daunting task of utilizing the Final Frontier.
One potential solution to some of these issues is being explored by Dr. Mason Peck, an assistant professor of mechanical and aerospace engineering at Cornell and director of the Space Systems Design Studio. Peck and his university team have been experimenting with the remarkable properties of superconductivity, the ability of certain materials to conduct electricity indefinitely, with no resistance, under the right conditions.
Peck is investigating an unusual property of type II superconductors called magnetic flux pinning, which may provide an ideal technology for in-orbit self-assembly of modular spacecraft and satellite formations. He calls these ‘non-contacting modular reconfigurable spacecraft.’
“A magnet on one spacecraft module interacts with a superconductor on another module. One becomes pinned to the other, but across some distance of empty space,” explains Peck. “At the moment that distance is on the order of centimeters. We would like to extend it by a factor of 10 or 100. We have some ideas about how to do that [in ways] more interesting than simply using bigger magnets and bigger superconductors.”
As for transferring data and power from one spacecraft module to another in space, Peck envisions infrared lasers being the key to making this process a reality. He and his team were able to demonstrate this principle in the laboratory in 2005 with infrared LEDs and specially designed photovoltaic cells.
Image: A composite of various lab demonstrations, computer models, and artwork related to modular spacecraft and the methods used in creating them. Credit: Mason Peck.
Among the applications Peck sees for this technology are spacecraft docking, sparse-aperture telescopes with individual mirror segments flux-pinned together, and the assembly or reconfiguration of space structures without mechanical hardware.
“In this project we are investigating spacecraft modules that would be constructed with interfaces consisting of combinations of magnets and superconductors, establishing a non-contacting interaction between the modules,” says Peck. “This action-at-a-distance interaction overcomes the limitations of other spacecraft-positioning strategies that involve electromagnetic fields. It allows fractionated or modular spacecraft to fix their relative positions and orientations without any mechanical connection, active control, or power expenditure. Power is in short supply in space, and robustness to power failures is an important objective of any spacecraft; so this technique is perfect for space applications.”
Last summer Peck and his team completed two experiments investigating their concept of operating multiple craft together in space.
“Our results indicate that flux pinning is promising for modular spacecraft assembly and station-keeping applications, providing mechanical stiffnesses over 200 Newtons per meter (N/m) at small (5 millimeter) magnet-superconductor separations and potentially useful non-zero stiffnesses at larger (over 3 centimeter) separations, with significant damping,” Peck adds. “We find that increasing the magnetic flux density at the superconductor surface strengthens the flux pinning forces, suggesting the possibility that higher stiffness can be obtained over larger distances by increasing or focusing the magnetostatic field.”
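The stiffness Peck quotes can be read as an effective spring constant: to first order, a flux-pinned module pair behaves like a linear spring, so a small relative displacement produces a Hooke’s-law restoring force. A rough sketch using the quoted 200 N/m figure (the displacement chosen here is illustrative, not from the experiments):

```python
# Treat a flux-pinned module pair as a linear spring with the quoted
# stiffness. The displacement value below is illustrative only.

def restoring_force(stiffness_n_per_m, displacement_m):
    """Hooke's-law restoring force (N) for a small displacement (m)."""
    return -stiffness_n_per_m * displacement_m

k = 200.0   # N/m, from the quoted lab results at 5 mm separation
x = 0.001   # 1 mm relative displacement (assumed)
print(f"restoring force: {restoring_force(k, x):+.2f} N")  # -0.20 N
```

A fifth of a newton is small, but for station-keeping of free-floating modules it needs only to exceed the tiny disturbance forces of the orbital environment, and it costs no power.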
The basic idea of modularizing spacecraft is not new, going back to the first decades of the Space Age with the Apollo Command/Service Module docking with the Lunar Module. Later years would see manned craft from both the United States and Soviet Union docking with the first space stations in Earth orbit. Certain collections of space satellites have been conducting formation flights since the 1980s.
Peck’s project was funded by NASA’s Institute for Advanced Concepts (NIAC) until last year, when the space agency shut down the institution. Since then Northrop Grumman Space Technologies (NGST) has sponsored some of the work in the form of a gift to Cornell University. Peck also hopes to receive funding from DARPA, the Defense Advanced Research Projects Agency, in the near future.
“We anticipate that this technology will be very relevant to DARPA’s System F6 program, also known as the ‘Future Fast, Flexible, Fractionated, Free-Flying Spacecraft united by Information exchange.'” The primary goal of the System F6 program is to replace a more traditional ‘monolithic’ satellite with a cluster of modular satellites that can act as well as, if not better than, a single vehicle.
Cornell graduate student Joe Shoer hopes to launch a CubeSat demonstration of Peck’s non-contacting modular reconfigurable spacecraft design in a few years.
“CubeSat launches are relatively inexpensive: $50,000 to launch, probably $15,000 for the hardware we have in mind. So, we don’t need much,” says Peck. “We had also hoped to try a demo on NASA’s zero-gravity aircraft, but some other projects have taken more time and we find ourselves without enough resources (people and funds) at the moment. There is some possibility that DARPA’s F6 flight demo will include something along these lines.”
Among the innovative ideas Peck hopes will one day become reality are space telescopes composed of many discrete mirrors, large enough to resolve planets circling distant stars. The Cornell professor also sees such modules as safe places for astronauts to store their equipment when working outside their vessels in the space environment.
“No longer are we required to distinguish among spacecraft subsystems, individual spacecraft, and constellations of spacecraft. Instead, the proposed concept blurs the distinction between modular spacecraft and formation flying, between spacecraft bus and payload, and to some extent between empty space and solid matter. Articulated payloads, reconfigurable space stations, and adaptable satellite architectures are possible without the mass and power typically associated with maintaining relative position and mechanically rebuilding structures.”
More information on non-contacting modular reconfigurable spacecraft is available here.
by Paul Gilster | Jan 28, 2008 | Deep Sky Astronomy & Telescopes
Stars being kicked out of the Milky Way — so-called ‘hypervelocity stars’ — are ejected by a mechanism that seems well understood. We know there is a supermassive black hole at galactic center, and it is likely the cause of the ejection of such stars from our galaxy. Nine stars have been found that fit this description, all of them over 50,000 parsecs from Earth. But the tenth is an anomaly, a young star ejected not from the Milky Way but from the Large Magellanic Cloud. A black hole is assumed to be the cause here as well, although the culprit has yet to be identified.
Image: A ‘hypervelocity star,’ shown flung from the Milky Way’s center. Now a similar star has been found exiting the Large Magellanic Cloud. Credit: Ruth Bazinet/Harvard-Smithsonian Center for Astrophysics.
One thing that assists researchers in identifying stellar origins is the fact that stars in the Large Magellanic Cloud (LMC) have their own particular characteristics. Alceste Bonanos (Carnegie Institution) was on the team that noted differences between this star — HE 0437-5439 — and the other galactic refugees. It couldn’t, for example, have come from the center of the Milky Way because getting it to its present location would have taken 100 million years, and the star is only 35 million years old. And then there is the matter of stellar elements. Says Bonanos:
“We were intrigued by the conundrum and decided to take up the challenge to solve this. Stars in the LMC are known to have lower elemental abundances than most stars in our galaxy, so we could determine if its chemistry was more like that galaxy’s or our own.”
The Carnegie team was able to verify the star’s age, and the fact that HE 0437-5439 is nine times the mass of the Sun. Its velocity was pegged at a scorching 722 kilometers per second. That’s fast indeed (ordinary stars in the Milky Way average out at around 100 km/s), but consistent with the velocities of other hypervelocity stars. And as for the relative abundance of various elements, the star shows a concentration about half that of the Sun, consistent with an origin in the LMC.
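The age argument comes down to simple kinematics: even at 722 kilometers per second, crossing tens of kiloparsecs takes tens of millions of years. A back-of-envelope check, assuming straight-line travel at constant speed (which understates the true travel time, since the star would have been decelerating against the galaxy’s gravity all along):

```python
# Minimum straight-line travel time for a hypervelocity star,
# ignoring gravitational deceleration (the real time is longer).

KM_PER_PARSEC = 3.086e13
SECONDS_PER_YEAR = 3.156e7

def travel_time_myr(distance_pc, speed_kms):
    """Travel time in millions of years at constant speed."""
    seconds = distance_pc * KM_PER_PARSEC / speed_kms
    return seconds / SECONDS_PER_YEAR / 1e6

# At its current 722 km/s, covering ~50,000 parsecs takes roughly 68
# million years -- about twice the star's 35-million-year age, in line
# with the argument that it cannot have come from the Milky Way's center.
print(f"{travel_time_myr(50_000, 722):.0f} Myr")
```

The ~50,000-parsec distance here is taken from the figure quoted above for the known hypervelocity stars; with deceleration folded in, the required travel time stretches toward the 100 million years the team cites.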
The ‘launch’ mechanism? Assume that the star was originally part of a binary system and you can set up a close pass by a massive black hole, with one star captured by the exotic object, the other flung out of the galaxy. This scenario was first proposed by Jack Hills (Los Alamos National Laboratory) in 1988, in a paper proposing that hypervelocity stars might exist and provide evidence of the black hole at galactic center. The first observational evidence for such stars came in 2005, and the assumption is that many more such stars are waiting to be found.
What we have here is the first observational evidence of a black hole in the Large Magellanic Cloud, surely a priority target for future Magellanic watchers. This is also the first time the abundance of key elements has been measured in a hypervelocity star. Thus does a rare find (only ten known hypervelocity stars) become rarer yet, and we measure the motions of a star whose origins lie in a satellite galaxy to our own.
The paper is Bonanos et al., “Low Metallicity Indicates that the Hypervelocity Star HE 0437-5439 was Ejected from the LMC,” scheduled to run in the Astrophysical Journal Letters and available online. And for you completists out there, Jack Hills’ paper is “Hyper-velocity and tidal stars from binaries disrupted by a massive Galactic black hole,” Nature 331 (25 February 1988), pp. 687-689 (abstract).
by Paul Gilster | Jan 26, 2008 | Asteroid and Comet Deflection
The 70-meter Goldstone antenna in the Mojave Desert has begun observations of 2007 TU24, the asteroid that will pass 538,000 kilometers from the Earth on January 27-28. Early indications are that the object is asymmetrical, with a diameter of approximately 250 meters. Close pass by the Earth is to occur on January 29 at 0833 UTC, with no chance of a strike. Says JPL’s Steve Ostro:
“With these first radar observations finished, we can guarantee that next week’s 1.4-lunar-distance approach is the closest until at least the end of the next century. It is also the asteroid’s closest Earth approach for more than 2,000 years.”
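The ‘1.4-lunar-distance’ figure in Ostro’s remark is easy to verify against the 538,000-kilometer miss distance quoted above:

```python
# Express the asteroid's miss distance in mean lunar distances (LD).
MEAN_LUNAR_DISTANCE_KM = 384_400

miss_distance_km = 538_000  # 2007 TU24's closest approach to Earth
print(f"{miss_distance_km / MEAN_LUNAR_DISTANCE_KM:.1f} LD")  # 1.4 LD
```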
Image: These low-resolution radar images of asteroid 2007 TU24 were taken over a few hours by the Goldstone Solar System Radar Telescope in California’s Mojave Desert. Image resolution is approximately 20-meters per pixel. Next week, the plan is to have a combination of several telescopes provide higher resolution images. Credit: NASA/JPL-Caltech.
Now we can anticipate more detailed observations from the Arecibo Observatory, whose radar is the most sensitive in the world, allowing still closer readings on the asteroid’s size. With an imaging mode that offers resolution to 7.5 meters, scientists there should also be able to map the object’s surface in some detail. That’s useful information as we wait for the kind of detailed exploration a dedicated asteroid mission will one day make possible. Arecibo’s Mike Nolan looks forward to an observing session of the kind rarely afforded to those who study asteroids:
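At Arecibo’s quoted 7.5-meter resolution, a simple division suggests how much surface detail to expect on an object of roughly the size Goldstone measured:

```python
# How many radar-image pixels span the asteroid at a given resolution.
asteroid_diameter_m = 250.0   # Goldstone's early size estimate
arecibo_resolution_m = 7.5    # finest Arecibo imaging mode quoted

pixels_across = asteroid_diameter_m / arecibo_resolution_m
print(f"~{pixels_across:.0f} pixels across")  # ~33 pixels across
```

A few dozen pixels across is enough to pick out large-scale shape and surface features, which is why the Arecibo session is so eagerly anticipated.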
“Because it’s coming so close, we’ll get our highest quality imaging…We have good images of a couple dozen objects like this, and for about one in 10, we see something we’ve never seen before. We really haven’t sampled the population enough to know what’s out there.”
And this is worth thinking about: Nolan points out that although objects like 2007 TU24 pass near the Earth every five years on average, astronomers rarely get enough advance notice to run rigorous observing sessions on them. Finding, classifying and cataloguing such objects is obviously vital, as is continuation of Arecibo’s planetary radar program. Far less helpful is the e-mail hoax making the rounds claiming yet another NASA cover-up, this one of 2007 TU24’s supposedly impending impact. Astronomy.com’s Daniel Pendick has seen that one, but also a far more interesting new video on planetary defense.