On Neutrinos and the Speed of Light

If you’re tracking the interesting news from CERN on neutrinos moving slightly faster than the speed of light, be advised that there is an upcoming CERN webcast on the matter at 1400 UTC later today (the 23rd). Meanwhile, evidence that the story is making waves is not hard to find. I woke up to find that my local newspaper had a headline — “Scientists Find Signs of Particles Faster than Light” — on the front page. This was Dennis Overbye’s story, which originally ran in the New York Times, but everyone from the BBC to Science Now is hot on the trail of this one.

The basics are these: A team of European physicists has measured neutrinos traveling from the particle accelerator at CERN to the facility beneath the Gran Sasso in Italy, a distance of about 730 kilometers, and found them arriving about 60 nanoseconds earlier than light would have taken to make the journey. The inferred speed is about 0.0025 percent (2.5 parts in a hundred thousand) greater than the speed of light, a tiny deviation, but one of obvious significance if confirmed. The results are being reported by OPERA (Oscillation Project with Emulsion-Tracking Apparatus), a group led by physicist Antonio Ereditato (University of Bern).

Neutrinos are nearly massless subatomic particles that definitely should not, according to Einstein’s theory of special relativity, be able to travel faster than light, which accounts for the explosion of interest. According to this account in Science Now, the OPERA team measured roughly 16,000 neutrinos that made the trip from CERN to the detector, and Ereditato is quoted as saying that the measurement itself is straightforward: “We measure the distance and we measure the time, and we take the ratio to get the velocity, just as you learned to do in high school.” The measurement has an uncertainty of 10 nanoseconds.
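Ereditato's "distance over time" description invites a back-of-the-envelope check. The sketch below reproduces the quoted fractional excess from the figures in the story (a roughly 730 km baseline and a 60 ns early arrival); the variable names are mine, not OPERA's.

```python
# Back-of-envelope check of the quoted OPERA numbers.
c = 299_792_458.0       # speed of light, m/s (exact)
baseline = 730e3        # CERN to Gran Sasso baseline, m (approximate)
early = 60e-9           # neutrinos arrive ~60 ns "early", s

t_light = baseline / c  # time light needs to cover the baseline
excess = early / t_light  # fractional excess speed, (v - c)/c

print(f"light travel time: {t_light * 1e3:.3f} ms")
print(f"(v - c)/c        : {excess:.2e}")  # ~2.5e-5, i.e. 0.0025 percent
```

Sixty nanoseconds against a light travel time of about 2.4 milliseconds is indeed 2.5 parts in a hundred thousand, which is why the claimed 10-nanosecond timing uncertainty is the crux of the whole result.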

It’s hard to do any better than Ereditato himself when bringing caution to these findings. Let me quote the Science Now story again:

…even Ereditato says it’s way too early to declare relativity wrong. “I would never say that,” he says. Rather, OPERA researchers are simply presenting a curious result that they cannot explain and asking the community to scrutinize it. “We are forced to say something,” he says. “We could not sweep it under the carpet because that would be dishonest.”

And the BBC quotes Ereditato to this effect: “My dream would be that another, independent experiment finds the same thing. Then I would be relieved.” One reason for the relief would be that other attempts to measure neutrino speeds have come up with results consistent with the speed of light. Is it possible there was a systematic error in the OPERA analysis that gives the appearance of neutrinos moving faster than light? The timing is obviously exquisitely precise and critical for these results, and a host of possibilities will now be investigated.

This paragraph from a NatureNews story is to the point:

At least one other experiment has seen a similar effect before, albeit with a much lower confidence level. In 2007, the Main Injector Neutrino Oscillation Search (MINOS) experiment in Minnesota saw neutrinos from the particle-physics facility Fermilab in Illinois arriving slightly ahead of schedule. At the time, the MINOS team downplayed the result, in part because there was too much uncertainty in the detector’s exact position to be sure of its significance, says Jenny Thomas, a spokeswoman for the experiment. Thomas says that MINOS was already planning more accurate follow-up experiments before the latest OPERA result. “I’m hoping that we could get that going and make a measurement in a year or two,” she says.

Unusual results are wonderful things, particularly when handled responsibly. The OPERA team is making no extravagant claims. It is simply putting before the scientific community a finding that even Ereditato calls a ‘crazy result,’ the idea being that the community can bring further resources to bear to figure out whether this result can be confirmed. Both the currently inactive T2K experiment in Japan, which directs neutrinos from its facility to a detector 295 kilometers away, and a neutrino experiment at Fermilab may be able to run tests to confirm or reject OPERA’s result. A confirmation would be, as CERN physicist Alvaro de Rujula says, ‘flabbergasting,’ but one way or another, going to work on these findings is going to take time, and patience.

The paper “Measurement of the neutrino velocity with the OPERA detector in the CNGS beam” is now up on the arXiv server (preprint).

Addendum: For an excellent backgrounder on neutrino detection and the latest measurements, replete with useful visuals, see Starts With a Bang. Thanks to @caleb_scharf for the tip.

And this comment from a new Athena Andreadis post is quite interesting:

If it proves true, it won’t give us hyperdrives nor invalidate relativity. What it will do is place relativity in an even larger frame, as Einsteinian theory did to its Newtonian counterpart. It may also (finally!) give us a way to experimentally test string theory… and, just maybe, open the path to creating a fast information transmitter like the Hainish ansible, proving that “soft” SF writers like Le Guin may be better predictors of the future than the sciency practitioners of “hard” SF.


Progress Toward the Dream of Space Drives and Stargates

by James F. Woodward

I first wrote about James Woodward’s work in my 2004 book Centauri Dreams: Imagining and Planning Interstellar Exploration, and have often been asked since to comment further on his research. But it’s best to leave that to the man himself, and I’m pleased to turn today’s post over to him. A bit of biography: Jim Woodward earned bachelor’s and master’s degrees in physics at Middlebury College and New York University (respectively) in the 1960s. From his undergraduate days, his chief interest was in gravitation, a field then not very popular. So, for his Ph.D., he changed to the history of science, writing a dissertation on the history of attempts to deal with the problem of “action-at-a-distance” in gravity theory from the 17th to the early 20th centuries (Ph.D., University of Denver, 1972).

On completion of his graduate studies, Jim took a teaching job in the history of science at California State University Fullerton (CSUF), where he has been ever since. Shortly after his arrival at CSUF, he established friendships with colleagues in the Physics Department who helped him set up a small-scale, table-top experimental research program doing offbeat experiments related to gravitation – experiments which continue to this day. In 1980, the faculty of the Physics Department elected Jim to an adjunct professorship in the department in recognition of his ongoing research.

In 1989, the detection of an algebraic error in a calculation done a decade earlier led Jim to realize that an effect he had been exploring proceeded from standard gravity theory (general relativity), as long as one were willing to admit the correctness of something called “Mach’s principle” – the proposition enunciated by Mach and Einstein that the inertial properties of matter should proceed from the gravitational interaction of local bodies with the (chiefly distant) bulk of the matter in the universe. Since that time, Jim’s research efforts have been devoted to exploring “Mach effects”, trying to manipulate them so that practical effects can be produced. He has secured several patents on the methods involved.

Jim retired from teaching in 2005. Shortly thereafter, he was diagnosed with some inconvenient medical problems, problems that have necessitated ongoing care. But, notwithstanding these medical issues, he passes along the good news that he remains “in pretty good health” and continues to be active in his chosen area of research. Herewith a look at the current thinking of this innovative researcher.

Travel to even the nearest stars has long been known to require a propulsion system capable of accelerating a starship to a significant fraction of the speed of light if the trip is to be done in less than a human lifetime. And if such travel is to be seriously practical – that is, you want to get back before all of your stay-behind friends and family have passed on – faster-than-light transit speeds will be needed. That means “warp drives” are required. Better yet would be technology that would permit the formation of “absurdly benign wormholes” or “stargates”: short-cuts through “hyperspace” with dimensions on the order of at most a few tens of meters that leave the spacetime surrounding them flat. Like the wormholes in the movie and TV series “Stargate” (though not nearly so long, and without “event horizons,” which traversable wormholes don’t have). With stargates you can dispense with all of the claptrap attendant to starships and get where you want to go (and back) in literally no time at all. Indeed, you can get back before you left (if Stephen Hawking’s “chronology protection conjecture” is wrong) – but you can’t kill yourself before you leave.

Starships and stargates were the merest science fiction until 1988, when the issue of rapid spacetime transport became part of serious science: Kip Thorne and some of his graduate students posed the question, what restriction does general relativity theory (GRT) place on the activities of arbitrarily advanced aliens who putatively travel immense distances in essentially no time at all? The question was famously instigated by Carl Sagan’s request that Thorne vet his novel Contact, where travel to and from the center of the galaxy (more than 20,000 light years distant) is accomplished in almost no time at all. Thorne’s answer was wormholes – spacetime distortions that connect arbitrarily distant events through a tunnel-like structure in hyperspace – held open by “exotic” matter. Exotic matter is self-repulsive, and for the aforementioned “absurdly benign” wormholes, this stuff must have negative rest mass. Not only does the rest mass have to be negative, to make a wormhole large enough to be traversable, you need a Jupiter mass (2 × 10^27 kg) of the stuff. This is almost exactly one-thousandth of the mass of the Sun and hundreds of times the mass of the Earth. In your living room, or on your patio. Warp drives, in this connection at least, are no better than wormholes. Miguel Alcubierre, in 1994, wrote out the “metric” for a warp drive; and it places the same exotic matter requirement on would-be builders.
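The scale of the exotic-matter requirement is easy to verify against standard astronomical figures (the masses below are textbook values, not numbers from the wormhole papers themselves):

```python
# Quick sanity check on the mass comparisons quoted above.
M_jupiter = 1.9e27   # kg, roughly the "Jupiter mass" of exotic matter needed
M_sun = 1.99e30      # kg
M_earth = 5.97e24    # kg

print(f"Jupiter / Sun  : {M_jupiter / M_sun:.2e}")   # about one-thousandth
print(f"Jupiter / Earth: {M_jupiter / M_earth:.0f}")  # about 300 Earth masses
```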

Long before Thorne and Alcubierre laid out the requirements of GRT for rapid spacetime transport, it was obvious that finding a way to manipulate gravity and inertia was prerequisite to any scheme that hoped to approach, much less vastly surpass, the speed of light. Indeed, in the late 1950s and early 1960s the US Air Force sponsored research in gravitational physics at Wright Field in Ohio. As a purely academic exercise, the Air Force could not have cared less about GRT. Evidently, they hoped that such research might lead to insights that would prove of practical value. It seems that such hopes were not realized.

If you read through the serious scientific literature of the 20th century, until Thorne’s work in the late ’80s at any rate, you will find almost nothing ostensibly relating to rapid spacetime transport. The crackpot literature of this era, however, is replete with all sorts of wild claims and deeply dubious schemes, none of which are accompanied by anything resembling serious science. But the serious (peer reviewed) scientific literature is not devoid of anything of interest.

If you hope to manipulate gravity and inertia to the end of rapid spacetime transport, the “obvious conjecture” is that you need a way to get some purchase on gravity and inertia. Standard physics, embodied in the field equations of Einstein (GRT) and Maxwell (electrodynamics), seems to preclude such a possibility. So that “obvious conjecture” suggests that some “coupling” beyond that contained in the Einstein-Maxwell equations needs to be found. And if we are lucky, such a coupling, when found, will lead to a way to do the desired manipulations. As it turns out, there are at least two instances of such proposed couplings advanced by physicists of impeccable credentials. The first was made by Michael Faraday – arguably the pre-eminent experimental physicist of all time – in the 1840s. He wanted to kill the action-at-a-distance character of Newtonian gravity (that is, its purported instantaneous propagation) by inductively coupling it to electromagnetism (which he had successfully shown to not be an action-at-a-distance interaction by demonstrating the inductive coupling of electricity and magnetism). He did experiments intended to reveal such coupling. He failed.

The second proposal was first made by Arthur Schuster (President of the Royal Society in the 1890s) and later by Patrick M.S. Blackett (1947 Nobel laureate for physics). They speculated that planetary and stellar magnetic fields might be generated by the rotational motion of the matter that makes them up. That is, electrically neutral matter in rotation might generate a magnetic field. Maxwell’s electrodynamics, of course, makes no such prediction. There were other proposals. In the 1930s and ’40s Wolfgang Pauli and then Erwin Schrödinger constructed five-dimensional “unified” field theories of gravity and electromagnetism that predicted small coupling effects not present in the Einstein-Maxwell equations. But the Schuster-Blackett conjecture is more promising as the effects there are much larger – large enough for experimental investigation. And George Luchak, a Canadian graduate student (at the time), had written down a set of coupled field equations for Blackett’s proposal.

Some worthwhile experiments can be done with limited means in a short time, but only a fool tries to do serious experiments without having a plausible theory as a guide. Plausible theory does not mean Joe Doak’s unified field theory. It means theory that only deviates from standard physics in explicit, precise ways that are transparent to inspection and evaluation. (The contrapositive, by the way, is also true.) So, armed with Faraday’s conjecture and then the Schuster-Blackett conjecture and Luchak’s field equations, in the late 1960s I set out to investigate whether they might lead to some purchase on gravity and inertia. The better part of 25 years passed doing table-top experiments and poking around in pulsar astrophysics (with its rapidly rotating neutron stars with enormous magnetic fields, pulsars are the ultimate test bed for Blackett’s conjecture) to see whether anything was there. Suggestive, but not convincing, results kept turning up. In the end, nothing could be demonstrated beyond a reasonable doubt – the criterion of merit in this business. However, as this investigation was drawing to a close, about the time that Thorne and others got serious about traversable wormholes, detection of an algebraic error in a calculation led to serious re-examination of Luchak’s formalism for the Blackett effect.

Luchak, when he wrote down his coupled field equations, had been chiefly interested in getting the terms to be added to Maxwell’s electrodynamic equations that would account for Blackett’s conjecture. So, instead of invoking the full formal apparatus of GRT, he wrote down Maxwell’s equations using the usual four dimensions of spacetime, and included a Newtonian approximation for gravity using the variables made available by invoking a fifth dimension. He wanted a relativistically correct formalism, so his gravity field equations included some terms involving time. They were required because of the assumed speed of light propagation velocity of the gravity field – where Newton’s gravity theory has no time-dependent terms as gravity “propagates” instantaneously. You might think all of this not particularly interesting, because it is well-known that special relativity theory (SRT) hasn’t really got anything to do with gravity – notwithstanding that you can write down modified Newtonian gravity field equations that are relativistically correct (technospeak: “Lorentz invariant”).

But this isn’t quite right. Special relativity has inertia implicitly built right into the foundations of the theory. Indeed, SRT is only valid in “inertial” frames of reference.[ref]Inertial reference frames are those in which Newton’s first law is valid for objects that do not experience external forces. They are not an inherent part of spacetime per se unless you adhere to the view that spacetime itself has inertial properties that cause inertial reaction forces. This is not the common view of the content of SRT where inertial reaction forces are attributed to material objects themselves, not the spacetime in which they reside.[/ref] So, consider the most famous equation in all physics (that Einstein published as an afterthought to SRT): E = mc². But write it as Einstein first did: m = E/c². The mass of an object – that is, its inertia – is equal to its total energy divided by the square of the speed of light. [Frank Wilczek has written a very good book about this: The Lightness of Being.] If inertia and gravity are intimately connected, then since inertia is an integral part of SRT, gravity suffuses SRT, notwithstanding that it does not appear explicitly anywhere in the theory.[ref] As a technical note it is worth mentioning that in GRT inertial reaction forces arise from gravity if space is globally flat, as in fact it is measured to be, and global flatness is the distinctive feature of space in SRT. This, however, does not mean that spacetime has inherent inertial properties.[/ref] Are gravity and inertia intimately connected? Einstein thought they were. A well-known part of this connection is the “Equivalence Principle” (that inertial and gravitational forces are the same thing) but there is an even deeper notion needing attention.
He gave this notion its name: Mach’s principle, for Einstein attributed the idea to Ernst Mach (of Mach number fame).[ref] Another technical note: spacetime figures into the connection between gravity and inertia as the structure of spacetime is determined by the distribution of all gravitating stuff in the universe in the field equations of GRT. So if gravity and inertia are the obverse and reverse of the same coin, the structure of spacetime is automatically encompassed. Spacetime per se only acquires inertial properties if it is ascribed material properties – that is, it gravitates. Interestingly, if “dark energy” is an inherent property of spacetime, it gravitates.[/ref]
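The claim that inertia tracks total energy can be made concrete with a toy calculation (an illustration of m = E/c² using textbook values, not part of the argument above): raising an object's internal energy, say by heating it, raises its mass by an immeasurably small amount.

```python
# Mass equivalent of a modest energy change, via m = E / c^2.
c = 299_792_458.0              # speed of light, m/s
specific_heat_water = 4186.0   # J/(kg*K), standard value

# Heat 1 kg of water by 100 K: its internal energy rises...
dE = 1.0 * specific_heat_water * 100.0  # ~4.2e5 J
dm = dE / c**2                          # ...and so, per SRT, does its inertia
print(f"mass increase: {dm:.2e} kg")    # a few trillionths of a gram per kg
```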

What is Mach’s principle? Well, lots of people have given lots of versions of this principle, and protracted debates have taken place about it. Its simplest expression is: Inertial reaction forces are produced by the gravitational action of everything that gravitates in the universe. But back in 1997 Hermann Bondi and Joseph Samuel, answering an argument by Wolfgang Rindler, listed a dozen different formulations of the principle. Generally, they fall into one of two categories: “relationalist” or “physical”. In the relationalist view, the motion of things can only be related to other things, but not to spacetime itself. Nothing is said about the interaction (via fields that produce forces) of matter on other matter. The physical view is different and more robust, as it asserts that inertial reaction forces must be caused by the action of other matter, an action that depends on that matter’s quantity, distribution, and relative motion, and that is transmitted by forces, gravity in particular. [Brian Greene not long ago wrote a very good book about Mach’s principle called The Fabric of the Cosmos. Alas, he settled for the “relationalist” version of the principle, which turns out to be useless as far as rapid spacetime transport is concerned.]

The simplest “physical” statement of the principle, endorsed by Einstein and some others, says that all inertial reaction forces are produced by the gravitational action of chiefly the distant matter in the universe. Note that this goes a good deal farther than Einstein’s Equivalence Principle which merely states that the inertial and gravitational masses of things are the same (and, as a result, that all objects “fall” with the same acceleration in a gravity field), but says nothing about why this might be the case. Mach’s principle provides the answer to: why?

Guided by Mach’s principle and Luchak’s Newtonian approximation for gravity – and a simple calculation done by Dennis Sciama in his doctoral work for Paul Dirac in the early 1950s – it is possible to show that when extended massive objects are accelerated, if their “internal” energies change during the accelerations, fluctuations in their masses should occur. That’s the purchase on gravity and inertia you need. (Ironically, though these effects are not obviously present in the field equations of GRT or electrodynamics, they do not depend on any novel coupling of those fields. So, no “new physics” is required.) But that alone is not enough. You need two more things. First, you need experimental results that show that this theorizing actually corresponds to reality. And second, you need to show how “Mach effects” can be used to make the Jupiter masses of exotic matter needed for stargates and warp drives. This can only be done with a theory of matter that includes gravity. The Standard Model of serious physics, alas, does not include gravity. A model for matter that includes gravity was constructed in 1960 by three physicists of impeccable credentials: Richard Arnowitt (Texas A&M), Stanley Deser (Brandeis), and Charles Misner (U. of Maryland). Their “ADM” model can be adapted to answer the question: Does some hideously large amount of exotic matter lie shrouded in the normal matter we deal with every day? Were the answer to this question “no”, you probably wouldn’t be reading this. Happily, the argument about the nature of matter and the ADM model that bears on the wormhole problem can be followed with little more than high school algebra. And it may be that shrouded in everyday stuff all around us, including us, is the Jupiter mass of exotic matter we want. Should it be possible to expose the exotic bare masses of the elementary particles that make up normal matter, then stargates may lie in our future – and if in our future, perhaps our present and past as well.

The physics that deals with the origin of inertia and its relation to gravitation is at least not widely appreciated, and may be incomplete. Therein lie opportunities to seek new propulsion physics. Mach’s principle and Mach effects is an active area of research into such possibilities. Whether these will lead to propulsion breakthroughs cannot be predicted, but we will certainly learn more about unfinished physics questions along the way.

REFERENCES

More technical and extensive discussions of some of the issues mentioned above are available in the peer reviewed literature and other sources. A select bibliography of some of this material is provided below. Here at Centauri Dreams you will shortly find more recent and less technical treatments available. They will be broken down into three parts. One will deal with the issues surrounding the origin of inertia and the prediction of Mach effects [tentatively titled “Mach’s principle and Mach effects”]. The second will present recent experimental results [tentatively titled “Mach effects: Recent experimental results”]. And the third will be an elaboration of the modifications of the ADM model that suggest exotic matter may be hiding in plain sight all around us [tentatively titled “Stargates, Mach’s principle, and the structure of elementary particles”]. The first two pieces will not involve very much explicit mathematics. The third will have some math, but not much beyond quadratic equations and high school algebra.

  • Books:

Greene, Brian, The Fabric of the Cosmos: Space, Time, and the Texture of Reality (Knopf, New York, 2004).

Sagan, Carl, Contact (Simon and Schuster, New York, 1985).

Wilczek, Frank, The Lightness of Being: Mass, Ether, and the Unification of Forces (Basic Books, New York, 2008).

  • Articles:

Alcubierre, M., “The warp drive: hyper-fast travel within general relativity,” Class. Quant. Grav. 11 (1994) L73 – L77. The paper where Alcubierre writes out the metric for warp drives.

Arnowitt, R., Deser, S., and Misner, C.W., “Gravitational-Electromagnetic Coupling and the Classical Self-Energy Problem,” Phys. Rev. 120 (1960a) 313 – 320. The first of the ADM papers on general relativistic “electrons”.

Arnowitt, R., Deser, S., and Misner, C.W., “Interior Schwarzschild Solutions and Interpretation of Source Terms,” Phys. Rev. 120 (1960b) 321 – 324. The second of the ADM papers.

Bondi, H. and Samuel, J., “The Lense-Thirring effect and Mach’s principle,” Phys. Lett. A 228 (1997) 121 – 126. One of the best papers on Mach’s principle.

Luchak, George, “A Fundamental Theory of the Magnetism of Massive Rotating Bodies,” Canadian J. Phys. 29 (1953) 470 – 479. The paper with the formalism for the Schuster-Blackett effect.

Morris, M.S. and Thorne, K. S., “Wormholes in spacetime and their use for interstellar travel: A tool for teaching general relativity,” Am. J. Phys. 56 (1988) 395 – 412. The paper where Kip Thorne and his then grad student Michael Morris spelled out the restrictions set by general relativity for interstellar travel. Their “absurdly benign wormhole” solution is found in the appendix on page 410.

Sciama, D. “On the Origin of Inertia,” Monthly Notices of the Royal Astronomical Society 113 (1953) 34 – 42. The paper where Sciama shows that a vector theory of gravity (that turns out to be an approximation to general relativity) can account for inertial reaction forces when certain conditions are met.

Woodward, J.F., “Making the Universe Safe for Historians: Time Travel and the Laws of Physics,” Found. Phys. Lett. 8 (1995) 1 – 39. The first paper where essentially all of the physics of Mach effects and their application to wormhole physics is laid out.

  • Other sources for Mach effects and related issues

“Flux Capacitors and the Origin of Inertia,” Foundations of Physics 34, 1475 – 1514 (2004). [Appendices give a line-by-line elaboration of the derivation of Mach effects, and a careful evaluation of how Newton’s second law applies to systems in which Mach effects are present.]

“The Technical End of Mach’s Principle,” in: eds. M. Sachs and A.R. Roy, Mach’s Principle and the Origin of Inertia (Apeiron, Montreal, 2003), pp. 19 – 36. [Contributed paper for a commemorative volume for the 50th anniversary of the founding of the Kharagpur campus of the Indian Institute of Technology. It is the only published paper where the wormhole term in Mach effects was sought.]

“Are the Past and Future Really Out There,” Annales de la Fondation Louis de Broglie 28, 549 – 568 (2003). [Contributed paper for a commemorative issue honoring the 60th anniversary of the completion of Olivier Costa de Beauregard’s doctoral work with Prince Louis de Broglie. The instantaneity of inertial reaction forces, combined with the lightspeed restriction on signal propagation of SRT, suggest that the Wheeler-Feynman “action-at-a-distance” picture of long range interactions is correct. This picture suggests that the past and future have some meaningful objective physical existence. This is explored in this paper, for Olivier Costa de Beauregard was one of the early proposers of the appropriateness of the action-at-a-distance picture in quantum phenomena.]

  • Presentations

Presentations at STAIF and SPESIF (most with accompanying papers in the conference proceedings) yearly since 2000.

Presentation at the Society for Scientific Exploration meeting in June, 2010, now available in video format on the SSE website.

Presentation: Why Science Fiction has little to fear from Science, at the 75th Birthday Symposium for John Cramer, University of Washington, September 2009.

  • Radio Interviews

The Space Show [3/20/2007]

The Space Show [3/3/2009]


Support for Dark Energy

The far future may be a lonely place, at least in extragalactic terms. Scientists studying gravity’s interactions with so-called dark energy — thought to be the cause of the universe’s accelerating expansion — can work out a scenario in which gravity dominated in the early universe. But somewhere around eight billion years after the Big Bang, the continuing expansion and consequent dilution of matter caused gravity to fall behind dark energy in its effects. We’re left with what we see today, a universe whose expansion will one day spread galaxies so far apart that any civilizations living in them won’t be able to see any other galaxies.
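When did dark energy win out? A rough sketch using standard flat-ΛCDM numbers (Ω_m ≈ 0.3, Ω_Λ ≈ 0.7, H0 ≈ 70 km/s/Mpc are my assumed values, not figures from the article): matter density dilutes as the cube of the scale factor while dark-energy density stays constant, and acceleration begins once matter density falls below twice the dark-energy density.

```python
import math

Om, OL = 0.3, 0.7    # assumed matter and dark-energy fractions today
H0_inv_gyr = 13.97   # 1/H0 in Gyr for H0 = 70 km/s/Mpc

# Acceleration begins when rho_matter = 2 * rho_lambda, i.e. Om/a^3 = 2*OL
a_acc = (Om / (2 * OL)) ** (1 / 3)

def age_gyr(a):
    """Age of a flat matter + Lambda universe at scale factor a (analytic)."""
    x = math.sqrt(OL / Om) * a ** 1.5
    return (2 / 3) / math.sqrt(OL) * math.asinh(x) * H0_inv_gyr

print(f"scale factor at acceleration onset: {a_acc:.2f}")
print(f"cosmic age then: {age_gyr(a_acc):.1f} Gyr")  # roughly 7-8 Gyr
```

With these parameters the crossover lands in the seven-to-eight-billion-year range quoted above; the exact figure depends on the density fractions assumed.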

The initial dark energy findings, released in 1998, were based on Type Ia supernovae, used as ‘standard candles’ that allowed us to calculate their distance from Earth. Now we have new data from both the Galaxy Evolution Explorer satellite (drawing on a three-dimensional map containing hundreds of millions of galaxies in the distant universe) and the Anglo-Australian Telescope (Siding Spring Mountain, Australia). Using this information, scientists are studying the pattern of distances between individual galaxies. Here we have not a ‘standard candle’ but a ‘standard ruler,’ based on the tendency of pairs of galaxies to be separated by roughly 490 million light years.

A standard ruler is an astronomical object whose size is known to an approximate degree, one that can be used to determine its distance from the Earth by measuring its apparent size in the sky. The new dark energy investigations used a standard ruler based on galactic separation. Scientists believe that acoustic pressure waves ‘frozen’ in place approximately 370,000 years after the Big Bang (the result of electrons and protons combining to form neutral hydrogen) define the separation of galaxies we see. The pressure waves, known as baryon acoustic oscillations, left their imprint in the patterns of galaxies, accounting for the separation of galactic pairs. This provides a standard ruler that can be used to measure the distance of galaxy pairs from the Earth — closer galaxies appear farther apart from each other in the sky.
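The standard-ruler logic reduces to the small-angle relation: if the ruler's physical size L is known, the angle θ it subtends on the sky gives its distance as roughly D = L/θ. A toy illustration (the angles below are invented for demonstration; real BAO angles are far smaller and the cosmological distance measure is more subtle):

```python
import math

ruler_mly = 490.0  # BAO separation scale from the text, million light-years

# Hypothetical measured angles for two galaxy-pair samples (degrees)
for theta_deg in (8.0, 4.0):
    theta = math.radians(theta_deg)
    d_mly = ruler_mly / theta  # small-angle distance estimate
    print(f"angle {theta_deg:4.1f} deg -> distance ~{d_mly:,.0f} Mly")
```

The sample subtending the smaller angle is the more distant one, which is the sense in which closer galaxy pairs appear farther apart on the sky.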

We’re looking, then, at patterns of distance between galaxies, using bright young galaxies of the kind most useful in such work. Galaxy Evolution Explorer identified the galaxies to be studied, while the Anglo-Australian Telescope was used to study the pattern of distance between them. Folding distance data into information about the speeds at which galaxy pairs are receding confirms what the supernovae studies have been telling us, that the universe’s expansion is accelerating. GALEX’s ultraviolet map also shows how galactic clusters draw in new galaxies through gravity while experiencing the counterweight of dark energy, which acts to tug the clusters apart, slowing the process.

Chris Blake (Swinburne University of Technology, Melbourne), lead author of recent papers on this work, says that theories that gravity is repulsive when acting at great distances (an alternative to dark energy) fail in light of the new data:

“The action of dark energy is as if you threw a ball up in the air, and it kept speeding upward into the sky faster and faster. The results tell us that dark energy is a cosmological constant, as Einstein proposed. If gravity were the culprit, then we wouldn’t be seeing these constant effects of dark energy throughout time.”

Image: This diagram illustrates two ways to measure how fast the universe is expanding. In the past, distant supernovae, or exploded stars, have been used as “standard candles” to measure distances in the universe, and to determine that its expansion is actually speeding up. The supernovae glow with the same intrinsic brightness, so by measuring how bright they appear on the sky, astronomers can tell how far away they are, just as a standard candle appears fainter at greater distances (left-hand illustration). In the new survey, the distances to galaxies were measured using a “standard ruler” (right-hand illustration). This method is based on the preference for pairs of galaxies to be separated by a distance of 490 million light-years today. The separation appears to get smaller as the galaxies move farther away, just like a ruler of fixed length viewed from a greater distance. Credit: NASA/JPL-Caltech.

Dark energy is still a huge unknown, but Jon Morse, astrophysics division director at NASA Headquarters in Washington, thinks the new work provides useful confirmation:

“Observations by astronomers over the last 15 years have produced one of the most startling discoveries in physical science: the expansion of the universe, triggered by the Big Bang, is speeding up. Using entirely independent methods, data from the Galaxy Evolution Explorer have helped increase our confidence in the existence of dark energy.”

For more, see Blake et al., “The WiggleZ Dark Energy Survey: testing the cosmological model with baryon acoustic oscillations at z=0.6,” accepted for publication in Monthly Notices of the Royal Astronomical Society (preprint) and Blake et al., “The WiggleZ Dark Energy Survey: the growth rate of cosmic structure since redshift z=0.9,” also accepted at MNRAS (preprint).


Key Effects of General Relativity Confirmed

Gravity Probe B has confirmed two of the most interesting effects predicted by Einstein’s General Theory of Relativity. The geodetic effect, which describes the warping of spacetime by the mass of the Earth, has been confirmed to an accuracy of 0.28 percent. The frame-dragging effect, in which the Earth’s rotation drags or stirs local spacetime, is confirmed to 19 percent accuracy. All this from a project that drew on 34 years of research and development, 10 years of flight preparation and 5 years of analysis of the data returned from a 1.5-year mission. They were a long time coming, but these results are as much milestones in the history of physics as Sir Arthur Eddington’s 1919 measurements supporting Einstein’s newly published theory.

Subtle Effects, Fantastic Precision

Measuring known effects to higher levels of accuracy is key to physics. It has taken so long to achieve these results because General Relativity is hard to test in the vicinity of the Earth, where its effects are so slight as to fade into the background noise. Gravity Probe B (GP-B) was built to surmount this problem by performing its experiments in an orbiting laboratory where the background noise could be effectively muted. At the heart of the effort were four gyroscopes held in a flight probe that was inserted into a dewar — a vacuum flask — that held 613 gallons of superfluid, supercold helium, keeping the instrument chilled to 2 Kelvin (-271 degrees Celsius).

Launched into a 645-kilometer orbit, Gravity Probe B’s science instrument measured the changes in the directions of spin of the gyroscopes, using a telescope referencing the star IM Pegasi as a fixed measuring point. The gyros were so free of background disturbances that they constituted an ideal measuring system. According to General Relativity, the direction of the spin axis of the gyroscopes should change over time because of the mass and rotation of the Earth, and it is these effects that have now been confirmed, showing that spacetime is indeed warped by the presence of the Earth, and twisted or dragged by its rotation as well.
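For a sense of scale, the geodetic precession General Relativity predicts for a circular orbit can be estimated from the de Sitter formula, Ω = (3/2)(GM)^{3/2}/(c²r^{5/2}). A back-of-the-envelope sketch, using Earth’s mean radius plus the ~645 km altitude quoted above (a rough circular-orbit estimate, not the mission’s precise prediction):

```python
import math

G_M_EARTH = 3.986004e14   # Earth's gravitational parameter GM, m^3/s^2
C = 2.99792458e8          # speed of light, m/s
R_EARTH = 6.371e6         # mean Earth radius, m
ALTITUDE = 6.45e5         # GP-B orbital altitude, m (~645 km)

r = R_EARTH + ALTITUDE    # orbital radius from Earth's center

# de Sitter (geodetic) precession rate for a circular orbit, in rad/s
omega = 1.5 * G_M_EARTH**1.5 / (C**2 * r**2.5)

# Convert to milliarcseconds per year
SECONDS_PER_YEAR = 3.156e7
MAS_PER_RAD = math.degrees(1) * 3600 * 1000
print(f"geodetic precession ~ {omega * SECONDS_PER_YEAR * MAS_PER_RAD:.0f} mas/yr")
```

This works out to roughly 6,600 milliarcseconds per year, close to the ~6,606 mas/yr GP-B was built to measure; the frame-dragging signal is smaller still by a factor of more than a hundred, which is why such extraordinary precision was needed.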

Kip Thorne (Caltech) once noted how subtle the effects under study by Gravity Probe B really are: “In the realm of black holes and the universe, the language of general relativity is spoken, and it is spoken loudly. But in our tiny solar system, the effects of general relativity are but whispers.” Hearing those whispers allows us to take earlier confirmations of General Relativity to a new level of precision, providing a measurement of the shape of local spacetime.

Envisioning Warped Spacetime

The geodetic warping of spacetime is usually explained through the image of a sheet of rubber on which a ball has been dropped, thus deforming the rubber. But I prefer the ‘missing inch’ explanation. It’s a bit of a thought experiment, though it can be replicated with a sheet of paper and a pair of scissors. Imagine a circle with the same circumference as the Earth, about 40,000 kilometers. A gyroscope that follows this path in empty space should always point in the same direction, as the diagram below illustrates. But General Relativity says that if the Earth were placed inside the circle, its mass would warp spacetime just enough to shrink the circumference of the circle by 1.1 inches.

Cut a small wedge out of a circular piece of paper and then tape the edges back together. The result is not a circle but a cone, a representation of what happens to spacetime as the result of mass (Thorne did a demonstration of this at the Gravity Probe B pre-launch news conference, video of which is available here). The gyroscope following this slightly altered path now shows a measurable shift as it moves around the edge of the cone, as shown in the second panel of the diagram.

Image: Left panel: A circle with Earth’s equatorial diameter (~12,755 km) in empty space has a circumference of roughly 40,074 km. A gyroscope following this circular path in empty space will always point in the same direction, as indicated by the arrows. Right panel: Earth’s mass warps spacetime inside the circle into a cone, formed by removing a pie-shaped wedge (dotted lines). This reduces the circle’s circumference by 1.1 inches. A gyroscope will now change its orientation while tracing the conical path. Credit: Stanford University/NASA.
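The 1.1-inch figure is consistent with a quick order-of-magnitude check: in the weak-field regime, the circumference deficit comes out to roughly 2π times Earth’s ‘gravitational length’ GM/c². A sketch of the arithmetic (a back-of-the-envelope estimate, not the full Schwarzschild calculation):

```python
import math

G_M_EARTH = 3.986004e14  # Earth's gravitational parameter GM, m^3/s^2
C = 2.99792458e8         # speed of light, m/s

# Earth's 'gravitational length' GM/c^2 is about 4.4 millimeters
grav_length_m = G_M_EARTH / C**2

# Circumference deficit ~ 2 * pi * GM / c^2
deficit_m = 2 * math.pi * grav_length_m
print(f"deficit ~ {deficit_m * 1000:.1f} mm (~{deficit_m / 0.0254:.1f} inches)")
```

The result is about 28 millimeters, or close to 1.1 inches, matching the figure in the caption.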

Frame dragging, the other effect measured by Gravity Probe B, refers to the way a rotating body drags spacetime around with it. Because of the parallel between this effect and the way an electrically charged body generates magnetism as it rotates, the frame dragging effect is often called the ‘gravitomagnetic effect.’ Gravity Probe B’s researchers measured the spin-axis alignment of each of its gyroscopes as the spacecraft orbited the Earth, looking at change both in the plane of the orbit (this measured geodetic precession) and orthogonally in the plane of the Earth’s rotation (this measured frame-dragging precession). You can read more about Gravity Probe B’s design and how it made these measurements at this Stanford website.

Image: Frame dragging as measured by Gravity Probe B. Credit and copyright: James Overduin, Pancho Eekels, Bob Kahn.

We’ve already had measurements of the geodetic effect, and GP-B has refined them considerably, but the new work on frame-dragging marks the first time that effect has been measured directly. Having confirmed that frame-dragging exists, the plan is to extrapolate what we’ve found around the Earth to more exotic objects, like black holes. “Einstein survives,” said Francis Everitt, who led the Gravity Probe B effort, and that survival should help us tighten our understanding of extreme astrophysical environments. Moreover, it will help us work toward a still more comprehensive theory. After all, the breakdown of General Relativity in singularities, where the curvature of spacetime becomes infinite and the field equations fail, points to how much we have yet to learn about the cosmos.


Visualizing Warped Spacetime

What on Earth — or off it — could inspire a physicist with the credentials of Caltech’s Kip Thorne to say “I’ve never before coauthored a paper where essentially everything is new. But that’s the case here”? If Thorne couldn’t say that about some of his earlier work with wormholes (!), he feels safe in saying it about the new tools for visualizing warped space and time discussed in a paper he and his colleagues have just published. Imagine space and time undulating in hitherto unfathomable patterns as objects like black holes run into each other.

How do we visualize such effects in a credible way? The new tools help us do just that. They are the result of powerful computer simulations that bring to visual life the complex equations of black hole mergers and other extreme events, and they should help us with problems like this one: Manuela Campanelli (University of Texas at Brownsville) and team used simulations a few years ago to show that colliding black holes can produce a directed burst of gravitational waves. The merged black hole itself then recoils, with a force strong enough that the newly merged object can be thrown entirely out of its own galaxy. When this work was done in 2007, nobody could explain how a directional burst of gravitational waves could be produced.

Thorne’s team can now produce an explanation by working with spacetime analogues to the electric and magnetic field lines that describe those two forces. A tendex line describes the stretching force that warped spacetime exerts, while a vortex line describes the twisting of space. Run enough tendex lines together and you create a region — a tendex — of strong stretching. Merge a bundle of vortex lines and the result is a whirling region of space — a vortex.

Image: Two doughnut-shaped vortexes ejected by a pulsating black hole. Also shown at the center are two red and two blue vortex lines attached to the hole, which will be ejected as a third doughnut-shaped vortex in the next pulsation. Credit: The Caltech/Cornell SXS Collaboration.

Let me quote the Caltech news release on the result:

Using these tools, [the researchers] have discovered that black-hole collisions can produce vortex lines that form a doughnut-shaped pattern, flying away from the merged black hole like smoke rings. The researchers also found that these bundles of vortex lines—called vortexes—can spiral out of the black hole like water from a rotating sprinkler.

The computer tools now show how these distortions of spacetime are produced, and can explain things as complex as the black hole collisions and ejections discussed by Campanelli. So what accounts for the gravitational kick experienced by the merged black hole at galactic center? On one side of the hole, gravitational waves from spiraling vortexes add to waves from spiraling tendexes; on the other side, the vortex and tendex waves cancel out. The newly merged black hole experiences a recoil. The new conceptual tools are useful not just for this black hole scenario but for a wide range of possibilities:

“Though we’ve developed these tools for black-hole collisions, they can be applied wherever space-time is warped,” says Dr. Geoffrey Lovelace, a member of the team from Cornell. “For instance, I expect that people will apply vortex and tendex lines to cosmology, to black holes ripping stars apart, and to the singularities that live inside black holes. They’ll become standard tools throughout general relativity.”

Various black hole merger scenarios suggest themselves, including two spinning black holes colliding head-on or spiraling toward each other before merging. Each can be explained through the use of tendexes and vortexes, and in either case, outward-moving vortexes and tendexes become gravitational waves. Usefully, those are just the kind of waves the Laser Interferometer Gravitational-Wave Observatory (LIGO) was created to detect. LIGO, which began its search in 2002, looks for gravitational wave emissions from collisions of neutron stars or black holes as well as from supernovae. Tendexes and vortexes may help researchers predict the waveforms LIGO is looking for.

The paper is Owen et al., “Frame-dragging vortexes and tidal tendexes attached to colliding black holes: Visualizing the curvature of spacetime,” Physical Review Letters 106, 151101 (2011). Abstract / Preprint.


The Pioneer Anomaly Resolved?

The fascination of the so-called ‘Pioneer anomaly’ is that it offers the possibility of new physics: an apparently constant acceleration on the Pioneer 10 and 11 probes, with a value of (8.74 ± 1.33) × 10⁻¹⁰ m/s², that we cannot easily explain. Equally useful is the chance the Pioneer anomaly gives us to validate current physical models by figuring out how this acceleration might arise from hitherto unsuspected processes, perhaps aboard the spacecraft itself. Either way you look at it, the Pioneer anomaly has deserved the attention it has received, and now a new paper emerges to take a crack at resolving the issue once and for all.

Frederico Francisco (Instituto Superior Técnico, Lisbon) and colleagues have revisited the question of whether heat emitted and reflected aboard the spacecraft could account for the anomalous acceleration. In a thermal model developed in 2008, Francisco’s team accounted for between 33% and 67% of the acceleration. The new paper builds on that work with a methodology based on a distribution of point-like radiation sources that models the thermal emissions of the spacecraft. The authors then deploy a method called ‘Phong shading,’ commonly used to render the illumination of surfaces in 3D computer graphics, which allows them to study how thermal radiation is reflected off the various parts of the spacecraft.
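An order-of-magnitude argument shows why onboard heat is a plausible culprit in the first place. The Pioneers’ radioisotope generators produced on the order of two kilowatts of thermal power, and photons carry momentum; if even a few percent of that power were radiated preferentially in one direction, the recoil on a spacecraft of a few hundred kilograms lands in the right range. A sketch (the power and mass figures here are rough, commonly quoted values, not numbers from the paper):

```python
C = 2.99792458e8      # speed of light, m/s
POWER_W = 2000.0      # rough RTG thermal output, watts
MASS_KG = 250.0       # rough Pioneer spacecraft mass, kg

# If ALL thermal power were radiated in one direction,
# the photon recoil acceleration would be a = P / (m * c)
a_max = POWER_W / (MASS_KG * C)

# Anomalous acceleration reported for the Pioneer probes
A_ANOMALY = 8.74e-10  # m/s^2

print(f"max thermal recoil: {a_max:.2e} m/s^2")
print(f"anisotropic fraction needed: {A_ANOMALY / a_max:.1%}")
```

The maximum recoil comes out to a few times 10⁻⁸ m/s², so only about three percent of the thermal output, emitted anisotropically, would suffice to produce the anomaly.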

Image: An artist’s rendition of one of the Pioneer probes. Credit: NASA.

I referred to the acceleration as ‘apparently constant’ above, but the authors take pains to note that the acceleration has not been fully characterized. In fact, one analysis of the flight data shows that both a constant acceleration and one decaying linearly over a period greater than fifty years are compatible with the data. This comes into play as the team tests for the constancy of the acceleration, as discussed in the paper:

… a so-called “jerk term” is found to be consistent with the expected temporal variation of a recoil force due to heat generated on board… This is essential if the hypothesis of a thermal origin for the Pioneer anomaly is to be considered, as such [a] source would inevitably lead to a decay with at least the same rate as the power available onboard. Possible causes for an enhanced decay include e.g. degradation of thermocouples, stepwise shutdown of some systems and instruments, etc.
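The baseline decay rate of the power available onboard is easy to check: the Pioneers’ generators were fueled by plutonium-238, whose half-life of about 87.7 years sets the minimum rate at which any thermally generated force must fade (the half-life is a standard figure; the ‘enhanced decay’ mechanisms the paper mentions would come on top of this):

```python
import math

HALF_LIFE_YR = 87.7  # plutonium-238 half-life, years

# Exponential decay constant gives the fractional power loss per year
decay_const = math.log(2) / HALF_LIFE_YR
print(f"thermal power declines ~{decay_const:.1%} per year")

# Power remaining roughly 30 years after the 1972 launch
print(f"power remaining after 30 yr: {math.exp(-decay_const * 30):.0%}")
```

A decline of just under one percent per year leaves roughly 80 percent of the original thermal output after three decades of flight.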

With this in mind, the authors go to work on thermal radiation and the force it can exert on a surface, using Phong shading to model the reflection of this radiation off the various surfaces of the Pioneer probes. Radiation emitted outward, for example, goes directly into space with an effect that cancels out. But radiation emitted toward the center of the spacecraft is reflected by the high-gain antenna and the main equipment compartment. The trick is to tally these effects in terms of the acceleration they produce. The method gives “a simple and straightforward way of modeling the various components of reflection…,” according to the paper, one that accounts for the effect of thermal radiation on different parts of the spacecraft.
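Phong shading itself is simple to sketch. In its usual graphics form, the reflected intensity at a surface point combines a diffuse term, which depends only on the angle of incidence, with a specular term concentrated around the mirror direction; the paper adapts this split to apportion reflected thermal radiation. A minimal illustration of the classic model (the coefficients and geometry are invented for the example, not taken from the paper):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_reflect(normal, to_source, to_viewer, k_d=0.6, k_s=0.4, shininess=10):
    """Phong model: diffuse term ~ cos(incidence angle);
    specular term ~ cos(angle between mirror direction and viewer)^shininess."""
    n, l, v = normalize(normal), normalize(to_source), normalize(to_viewer)
    diffuse = k_d * max(dot(n, l), 0.0)
    # Mirror reflection of the source direction about the surface normal
    r = tuple(2 * dot(n, l) * ni - li for ni, li in zip(n, l))
    specular = k_s * max(dot(r, v), 0.0) ** shininess
    return diffuse + specular

# Radiation arriving head-on, viewed along the mirror direction:
# the strongest possible reflection, k_d + k_s
print(phong_reflect((0, 0, 1), (0, 0, 1), (0, 0, 1)))
```

Varying the diffuse and specular weights per surface is what lets a model like this capture both matte multilayer insulation and the more mirror-like antenna dish.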

The result: The Phong shading method confirms earlier work suggesting that the Pioneer anomaly results from heat effects aboard the spacecraft. It also offers a method with which to study similar effects aboard other spacecraft. The authors explain:

…the acceleration arising from thermal radiation effects has a similar order of magnitude to the constant anomalous acceleration reported [in a study of the anomaly published in 2002]. We believe that the chosen approach is most adequate for the study of this particular problem, taking into account all its specific characteristics. Moreover, this Phong shading method is well suited for future studies of radiation momentum transfer in other spacecraft.

And the paper concludes:

With the results presented here it becomes increasingly apparent that, unless new data arises, the puzzle of the anomalous acceleration of the Pioneer probes can finally be put to rest.

This is a useful result, and one that will now be scrutinized by the wider community. If its conclusions are accepted, we will have made a step forward in identifying an effect that may need to be taken into account in future spacecraft operations. Just as important, we’ll have been able to rule out a line of investigation that seemed to open a door into new physics, meaning that the analysis of the Pioneer anomaly, now more than a decade old, has borne fruit. This is exactly what good science should do, and while we might hope for breakthroughs into new theories, anomalies like these are just as valuable as ways of testing and verifying accepted physical laws.

The paper is Francisco et al., “Modelling the reflective thermal contribution to the acceleration of the Pioneer spacecraft” (preprint).
