A Dramatic Upgrade for Interferometry

by Paul Gilster on August 15, 2014

What can we do to make telescopes better both on Earth and in space? Ashley Baldwin has some thoughts on the matter, with reference to a new paper that explores interferometry and advocates an approach that can drastically improve its uses at optical wavelengths. Baldwin, a regular Centauri Dreams commenter, is a consultant psychiatrist at the 5 Boroughs Partnership NHS Trust in Warrington, UK and a former lecturer at Liverpool and Manchester Universities. He is also a seriously equipped amateur astronomer — one who lives a tempting 30 minutes from the Jodrell Bank radio telescope — with a keen interest in astrophysics and astronomical imaging. His extensive reading takes in the latest papers describing optical breakthroughs, making him a key information source on these matters. His latest find could have major ramifications for exoplanet detection and characterization.

by Ashley Baldwin


An innocuous-looking article by Michael J. Ireland (Australian National University, Canberra) and John D. Monnier (University of Michigan) may represent a big step towards one of the greatest astronomical instrument breakthroughs since the invention of the telescope. In true Monnier style it is downplayed. But I think you should pay attention to “A Dispersed Heterodyne Design for the Planet Formation Imager (PFI),” available on the arXiv site. The Planet Formation Imager is a future world facility that will image the process of planetary formation, especially the formation of giant planets. What Ireland and Monnier are advocating is a genuine advance in interferometry.

An interferometer essentially combines the light of several different telescopes, all in the same phase, so it adds together “constructively,” or coherently, to create an image via a rather complex mathematical process called a Fourier transform (no need to go into detail, but suffice to say it works). We wind up with detail, or angular resolution, equivalent to that of a single telescope whose aperture equals the distance, or “baseline,” between the two telescopes. If you combine several telescopes, this creates more baselines, which in effect fill in more detail across the virtual single telescope’s “diluted aperture.” The equation for baseline number is n(n-1)/2, where n is the number of telescopes. If you have 30 telescopes, this gives an impressive 435 baselines, with angular resolution orders of magnitude beyond the biggest single telescope. So far so easy? Wrong.
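The pair-counting formula is simple enough to check directly; a minimal Python sketch:

```python
def baseline_count(n: int) -> int:
    """Unique telescope pairs (baselines) in an n-element array: n(n-1)/2."""
    return n * (n - 1) // 2

# The article's example: 30 telescopes yield 435 baselines.
print(baseline_count(30))                           # 435
# The count grows roughly with the square of the number of telescopes:
print([baseline_count(n) for n in (2, 3, 7, 30)])   # [1, 3, 21, 435]
```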

The principle was originally envisaged in the 1950s for optical/infrared telescopes. The problem is the coherent mixing of the individual wavelengths of light. It must be accurate to a tiny fraction of a wavelength; for optical light, that means a tolerance of a few billionths of a metre. Worse still, how do you arrange for light, each signal at a slightly different phase, to be mixed from telescopes a large distance apart?

Radio interferometers do this via optical fibres. Easy. Remember, you have to allow for the different times at which waves from the source arrive at each telescope, combining them in the phase they had when they reached the original scope. This is done electronically. The radio waves are converted into electrical signals at each telescope, each preserving the phase at which the waves arrived. They can then be restored to correctly phased radio waves later, mixed at leisure by a computer, and the Fourier transform used to create an image.

The more telescopes, the more baselines, and the longer they are, the greater the resolution. This has been done in the UK by connecting seven large radio telescopes by fibre-optic cable to create an interferometer, e-MERLIN, with 21 baselines, the longest of which is over 200 kilometres. Wow! This has been connected with radio telescopes across Europe to make an even bigger device. The US radio telescopes have been connected into the Very Long Baseline Array, from Hawaii across the mainland US to the Virgin Islands, to create a maximum baseline of thousands of kilometres. The European and US devices can be connected for even bigger baselines, and even linked to space radio telescopes to give baselines wider than our planet itself. Truly awesome resolution results.


Image: e-MERLIN is an array of seven radio telescopes, spanning 217 km, connected by a new optical fibre network to Jodrell Bank Observatory. Credit: Jodrell Bank Observatory/University of Manchester.

Where does all this leave optical/infrared interferometry, I hear you say? Well, a long way behind, so far. Optical/infrared light is at too high a frequency to convert into stable electrical proxies as with radio, and current optical fibre, despite being good, loses too much signal to attenuation, and smears the phase through dispersion, to be of any use for carrying light over the distances a radio interferometer spans (although optical cables are rapidly improving in quality). There are optical/infrared interferometers, involving the Keck telescopes and the Very Large Telescope in Chile. There is also the CHARA (Center for High Angular Resolution Astronomy) array of Georgia State University and the Australian SUSI (Sydney University Stellar Interferometer). Amongst others.

These arrays transmit the actual telescope light itself before mixing it, a supercomputer providing the accuracy needed to keep the light in the phase it had at the aperture. They all use evacuated tunnels with complex mirror arrays, “the optical train,” to reflect the light to the beam mixer. It works, but at a cost. Even over the hundred metres or so between telescopes, up to 95% of the light is lost, meaning only small but bright targets such as the star Betelgeuse can be observed. Fantastic angular resolution, though. The star is 500 light years away, yet CHARA (just six one-metre telescopes) can resolve it into a disc! No single telescope, even one of the new super-large ELTs currently being built, could get close! This gives some idea of the sheer power of interferometry. Imagine a device in space with no nasty wobbly atmosphere to spoil things.

But the Ireland and Monnier paper represents hope and shows the way to the future of astronomical imaging. What the researchers are advocating is heterodyne interferometry, an old-fashioned idea, again like interferometry itself. Basically it involves generating a local reference signal as near in frequency as possible to the light entering the telescope, and then mixing it with the incoming light to produce a much lower “intermediate frequency” signal. This signal still holds the phase information of the incoming light, but in a stable electrical proxy that can later be combined with the corresponding signals from the other telescopes in the interferometer to create an image. This avoids most of the complex, light-losing “optical train.”
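The mixing step can be illustrated with a toy numerical experiment. The frequencies below are radio-range stand-ins for the actual optical frequencies (which differ only in scale), and the crude cosine/sine correlation stands in for a real correlator; the point is that the low-frequency mixing product still carries the original phase:

```python
import math

# A "signal" at 1.00 MHz with an unknown phase, and a local oscillator
# at 0.99 MHz; their difference is a 10 kHz intermediate frequency (IF).
f_sig, f_lo, phase = 1.00e6, 0.99e6, 0.7   # phase in radians
f_if = f_sig - f_lo

fs = 8e6                                    # sample rate, well above f_sig
n = 4000                                    # 0.5 ms record: 5 whole IF cycles
t = [i / fs for i in range(n)]

# Mixing = multiplication: the product contains 0.5*cos(2*pi*f_if*t + phase)
# plus a fast sum-frequency term that averages away.
mixed = [math.cos(2*math.pi*f_sig*ti + phase) * math.cos(2*math.pi*f_lo*ti)
         for ti in t]

# Recover the IF component's phase by correlating against cos/sin at f_if.
c = sum(m * math.cos(2*math.pi*f_if*ti) for m, ti in zip(mixed, t))
s = sum(m * math.sin(2*math.pi*f_if*ti) for m, ti in zip(mixed, t))
recovered = math.atan2(-s, c)
print(round(recovered, 2))   # 0.7 -- the IF keeps the original phase
```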

Unfortunately, the technique cannot be used for the beam combiner itself or the all-important delay lines, whereby light from different telescopes is diverted so it can all arrive at the combiner in phase to be mixed constructively. Both these processes still lose large amounts of light, although much less than before. The interferometer also needs a supercomputer to combine the source signals accurately. Hence the delay till now. The light loss can be compensated for with lots of big telescopes in the interferometer; 4-8 metres is the ideal, as suggested in the paper. This allows baselines of up to 7 km, with the associated massive increase in angular resolution. Bear in mind that a few hundred metres was the previous best; you see the extent of the improvement.
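To put rough numbers on that improvement (the 10-micron wavelength here is an assumption for illustration, chosen as the mid-infrared region where warm planet-forming dust emits; the baselines are the figures above):

```python
import math

MAS_PER_RAD = math.degrees(1) * 3600 * 1000   # milliarcseconds per radian

def resolution_mas(wavelength_m: float, baseline_m: float) -> float:
    """Diffraction-limited angular resolution ~ wavelength / baseline."""
    return wavelength_m / baseline_m * MAS_PER_RAD

old = resolution_mas(10e-6, 330)    # a few-hundred-metre baseline (today's best)
new = resolution_mas(10e-6, 7000)   # the 7 km baseline proposed in the paper
print(round(old, 2), round(new, 2))   # 6.25 0.29 (milliarcseconds)
print(round(old / new))               # 21: roughly twentyfold sharper
```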

The problem is obvious, though. Lots of big telescopes and a supercomputer add up to a lot of money: a billion dollars or more. Still, it’s a big step in the right direction. Extend the heterodyne concept to eliminate the beam-combiner and delay-line losses, and the light loss approaches that of a radio interferometer. Imagine what could be seen. If the concept ends up in space, then one day we will actually “see” exoplanets. This is another reason why “formation flying” for a telescope/star-shade combination (as explored in various NASA concepts) is so important, as it is a crucial element of a future space interferometer. The Planet Formation Imager discussed in the Ireland and Monnier paper is seen as a joint international effort to manage costs. The best viewing would be in Antarctica. One for the future, but a clearer and more positive future.


12 comments

NS August 15, 2014 at 16:38

Apologies for a completely ignorant question.

Is there a way of recording images at individual telescopes, with extremely accurate timestamps, so that the images could be combined later? Of course this would not be real-time but it would remove the need to transmit the light over long distances.

Randy Chung August 16, 2014 at 1:21

Maybe. It looks like the design has 2000 photodiodes, each with 2 GHz bandwidth. The fastest A/D I know of is the TI ADC12D1800, which can sample 12 bits at 3.6 GSPS (it’s also expensive, $2500 each). So you’d need 2000 of those. Then you’d have to store the A/D output, perhaps into a bunch of SSD drives. The Intel 910 has a write speed of 1.5 GB/sec, so you’d need 5 or 6 for each A/D. With all that, you could store about a second’s worth of data. That’s probably $100 million in hardware, which is a lot, but they estimate their telescope to cost $.5B-$1B.
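A quick sanity check of the storage arithmetic above, using only the figures quoted in the comment; the 16-bit-word padding is an assumption, but it would account for the “5 or 6” drive estimate:

```python
GSPS = 3.6e9       # samples/sec, per the TI ADC12D1800 figure above
BITS = 12          # bits per sample
SSD_BPS = 1.5e9    # SSD write speed in bytes/sec, per the Intel 910 figure

packed = GSPS * BITS / 8   # bytes/sec if 12-bit samples are packed tightly
padded = GSPS * 2          # bytes/sec if each sample occupies a 16-bit word

print(packed / SSD_BPS)    # 3.6 drives per A/D if tightly packed
print(padded / SSD_BPS)    # 4.8 drives per A/D if padded -- hence "5 or 6"
```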

Michael August 16, 2014 at 2:20

@NS

‘Is there a way of recording images at individual telescopes, with extremely accurate timestamps, so that the images could be combined later?…’

There is no reason why not. The NIST Boulder clocks have an instability of one part in 10^18; combined with the speed of light, that corresponds to only about 0.3 nm of light-travel variance per second. Infrared light has a wavelength of around 900 nm, so it is doable. I thought of taking advantage of the ‘time recorded’ technique with a solar lens (multiple craft) to view stellar objects long before we get to the focal point, taking advantage of the huge resolution advantage the Sun would offer.

Michael Spencer August 16, 2014 at 6:36

@NS: I had exactly the same ‘inspiration’! Let’s hope for an answer.

Eniac August 16, 2014 at 10:39

Right, formation flying is the key. With thousands or even millions of cheap, lightweight, identical elements dispersed across a large area of space, truly amazing telescopic feats could be achieved. For radio, a very simple microsat with a dipole antenna might suffice for an element. Optical is trickier, but the heterodyne concept offers hope.

Eniac August 16, 2014 at 10:40

@NS: With radio, that is precisely what is being done. With optical, it is not (yet?) possible.

simon August 16, 2014 at 18:10

Is there a way of recording images at individual telescopes, with extremely accurate timestamps, so that the images could be combined later?

Not with classical information storage I believe (you need interference) but in principle it should be possible with quantum mechanical storage (qubits) if you could manage to develop a qm storage system with enough capacity and longevity.

simon August 16, 2014 at 18:18

Err, I think I was too hasty. There has to be ambiguity on which aperture a photon entered but the electrical signal here should be storable classically just fine, a hologram could also work I think.

Jean-Pierre Le Rouzic August 17, 2014 at 10:33

@NS:
What is explained in this article is how to separate two close points in an astronomical image, one being a planet and the other its star. There are many other challenges not described in the article, for example contrast.

Pictures have been stacked and enhanced in amateur astronomy for years. However, recording the phase of a whole picture wouldn’t be very easy.

I am not sure you need to record the phase of the star and planet light. Maybe creating an interference pattern of the two sources, with a time stamp as you describe, to make it possible to classify them with Bayesian statistics and stack them by category to remove noise, is enough.

Maybe it could provide another way for amateurs to detect exoplanets (the best of them can do it already). Once amateurs widely published detections of exoplanets, the general public would pay more attention to interstellar travel, because suddenly other Earths would exist for ordinary people. That would have more impact than looking at beautiful but artificial, otherworldly images in magazines and on TV.

Opto August 18, 2014 at 0:39

Actually, A/D converters exist commercially at rates above 32 GSamples/sec. This work is entirely possible with existing commercial technology developed for optical fibre communication. Sorry, but you ain’t buying this ADC from TI, dude.

Eniac August 19, 2014 at 14:07

To sample 900 nm light directly, you’d need a sampling interval of 10^-14 seconds or so. Seems to me 32 GSamples/sec is woefully inadequate.

Ron S August 19, 2014 at 16:54

Eniac: “To sample 900 nm light directly, you’d need a sampling interval of 10^-14 seconds or so. Seems to me 32 GSamples/sec is woefully inadequate.”

This is true, by several orders of magnitude. But ways can be found. Several years ago an acquaintance of mine was doing fundamental (non-commercial) research into down-converting visible light (optical heterodyne methods) to a frequency where electronics could work with it. Being able to do this has a variety of applications. I know their team and others globally were having some degree of success. But I haven’t spoken to him in a while, and other than speaking to him I don’t follow this field.
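A back-of-envelope version of the sampling argument in this exchange, with round numbers only:

```python
C = 3.0e8              # speed of light, m/s
WAVELENGTH = 900e-9    # near-infrared light, m

optical_freq = C / WAVELENGTH     # ~3.3e14 Hz optical carrier frequency
nyquist_rate = 2 * optical_freq   # minimum rate to sample it directly

print(f"{optical_freq:.1e}")        # 3.3e+14
# Comparing against a 32 GS/s commercial converter:
print(round(nyquist_rate / 32e9))   # 20833 -- short by about four orders
```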
