Measuring phase

At an image processing seminar last week, I saw a startling presentation. The speaker noted that we cannot measure phase at optical wavelengths, which costs us a major loss of information. To illustrate, he displayed two face photos, Fourier-transformed them, and swapped their phase spectra while leaving the magnitudes untouched. The faces switched places, with only slight distortion! He's researching algorithms to recover phase.

Which made me wonder: what's the highest frequency at which we can measure phase, and with what accuracy?

Also, a theoretical question arises: why is most of the information in the phase? You'd expect a 50-50 split.

--
Rich
Reply to
RichD

Your speaker was wrong, at least as far as relative phase goes. Optical interferometry depends on it, and laser interferometers routinely divide the wavelength of visible light into small fractions - the He-Ne wavelength is 632.8 nm, and it was routinely used to measure displacements to 10 nm.
--
Bill Sloman, Sydney
Reply to
Bill Sloman

That's a crock. A camera can't measure phase, and that's all your average s/w guy understands. Optical phase can be measured in at least 100 ways, and Fienup phase retrieval can often do a good job just from an intensity image.

I've built a lot of phase-sensitive optical things, e.g. ISICL (which I'm revisiting now,
formatting link
) and heterodyne interference microscopes,
formatting link
which I've also been revisiting this year. I'm far from unique there--interferometry goes back to the 19th Century.
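Fienup-style retrieval, for the curious, fits in a dozen lines of numpy. Here's a minimal sketch of the classic error-reduction iteration (a simpler cousin of Fienup's hybrid input-output), assuming you know the object's support; the function name and parameters are illustrative:

```python
import numpy as np

def error_reduction(fourier_mag, support, n_iter=200, seed=0):
    """Estimate an image whose Fourier magnitude matches fourier_mag
    and which is zero outside the boolean support mask."""
    rng = np.random.default_rng(seed)
    g = rng.random(fourier_mag.shape) * support  # random starting guess
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        # Keep the current phase estimate, impose the measured magnitude
        G = fourier_mag * np.exp(1j * np.angle(G))
        g = np.real(np.fft.ifft2(G))
        # Impose the object-domain constraints: support and non-negativity
        g = np.where(support & (g > 0), g, 0.0)
    return g
```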

Cheers

Phil Hobbs

Reply to
Phil Hobbs

There ARE ways to measure phase, and holography is one familiar way to capture phase information in visible light.

The amplitude and phase of a wave (a one-dimensional one, like a pressure wave in a string telephone) are not commensurate quantities, nor are their measurement uncertainties and ranges similar. Two 'components' with different signal/noise ratios generally carry different amounts of information.

A light plane wave in three dimensions has vertical-polarization phase and amplitude as well as horizontal-polarization phase and amplitude. Its "amplitude" is a root-sum-square that collapses two degrees of freedom into one number, while 'phase' keeps two independent components. Even if the signal/noise ratios were the same, there'd be more phase info.
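A toy numpy illustration of that counting, with arbitrary field values: the Jones vector holds four real numbers, but the root-sum-square "amplitude" collapses two of them into one, while the two phases stay independent:

```python
import numpy as np

# Complex amplitudes of the horizontal and vertical polarization
# components of a plane wave (arbitrary illustrative values)
E = np.array([0.6 * np.exp(1j * 0.3), 0.8 * np.exp(1j * 1.2)])

amplitude = np.sqrt(np.sum(np.abs(E) ** 2))  # one root-sum-square number
phases = np.angle(E)                         # two independent phase components
print(amplitude, phases)
```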

Reply to
whit3rd

That is more than a bit out of date in astronomy - at least if you allow for the near infrared, where there are already several new optical aperture synthesis instruments operating with closure phase observables.

The first off the blocks was COAST at MRAO Cambridge in 1995

formatting link

formatting link

The thing that prevents measuring the phase is the atmosphere, and there is a trick that allows you to obtain stable observables from a network of telescopes even when the atmosphere above them is corrupting the absolute phase. The closure phase observables technique comes from the work of Jennison at Jodrell Bank in 1958. It became widely used in the 70's as the enabling technology for VLBI and was moved up to the near optical band by Baldwin's group at Cambridge in the mid 1990's.

It is messy and temperamental, and temperature stability and signal to noise are a major headache, but it can and has been done.

In the radio world it is responsible for some damage to big dish radio telescopes, since keeping the number of scopes N in the VLBI network simultaneously observing as high as possible vastly improves the number of good phase observables, N(N-1)(N-2)/6, available to determine the pattern of the source.

N=3: 1 phase; N=4: 4 phases; N=5: 10 phases;

etc. So no one really wants to be the first to drop out, and they sometimes leave it too late to stow for an incoming storm.
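The cancellation is easy to check numerically. A minimal Python sketch with made-up baseline phases: any phase error sitting above a single telescope drops out of the sum around a closed triangle, and math.comb reproduces the N(N-1)(N-2)/6 count:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)

# True source phases on the three baselines of a 3-telescope triangle
phi = {(0, 1): 0.7, (1, 2): -1.1, (0, 2): -0.4}
# Unknown atmospheric phase error above each telescope
err = rng.uniform(-np.pi, np.pi, size=3)

def measured(i, j):
    # Each baseline picks up the difference of the two antenna errors
    return phi[(i, j)] + err[i] - err[j]

closure = measured(0, 1) + measured(1, 2) - measured(0, 2)
ideal = phi[(0, 1)] + phi[(1, 2)] - phi[(0, 2)]
print(np.isclose(closure, ideal))       # True: the antenna errors cancel
print([comb(n, 3) for n in (3, 4, 5)])  # [1, 4, 10] closure triangles
```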

It isn't. Half the information is in each of the real and imaginary components. If you have only the visibility amplitudes, you end up with a symmetric image, since you don't know any of the phases.

Phase tells you how to place the Fourier components on the image.
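The symmetry claim is easy to verify: keep only the amplitudes of a random real image and the reconstruction is forced to be centrosymmetric. A quick numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))

# Keep only the Fourier amplitudes (zero all the phases)
mag_only = np.real(np.fft.ifft2(np.abs(np.fft.fft2(img))))

# The result is even-symmetric: f(x, y) == f(-x, -y), indices mod N
flipped = np.roll(np.flip(mag_only), 1, axis=(0, 1))
print(np.allclose(mag_only, flipped))  # True
```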

Here is a relatively recent review of the sorts of things they are doing with these early instruments, and it's not behind a paywall.

formatting link

--
Regards, 
Martin Brown
Reply to
Martin Brown

Iff you have sufficient SNR. Do people still use that method?

Though only because Michelson was such an incredibly good optical experimentalist. The earliest stellar interferometer, by Michelson & Pease, graced the Mt Wilson 100" telescope to measure stellar diameters - and it worked!

formatting link

Pease was later never able to better those early optical interferometer measurements, and the early successes were all but forgotten until Hanbury-Brown & Twiss resurrected the idea in the optical at Jodrell Bank, this time using closure amplitudes as their chosen observables, to some considerable scepticism at the time.

--
Regards, 
Martin Brown
Reply to
Martin Brown

Well, if you don't have a decent SNR you can't even unwrap, so phase measurement is less than useful. (I've been going through Pritt & Ghiglia's 2-D phase unwrapping book, which is interesting.)
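For anyone who hasn't played with unwrapping: the clean 1-D case is a one-liner in numpy, and the hard part (the subject of that book) is exactly what happens when noise or undersampling breaks it. Illustrative values:

```python
import numpy as np

true_phase = np.linspace(0, 20, 500)         # smooth ramp through ~3 cycles
wrapped = np.angle(np.exp(1j * true_phase))  # what you actually measure
recovered = np.unwrap(wrapped)               # fine for clean, well-sampled data
print(np.allclose(recovered, true_phase))    # True here; noise ruins it
```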

The notion that most of the information is in the phase comes from a fairly ancient piece of image processing kewlness. If you Fourier-transform a photograph and zero out the phase, all you have left is essentially the autocorrelation of the image (the power spectrum and the autocorrelation are a Fourier pair). This will always show a large central peak, and usually not much structure elsewhere. The usual image processing demo is to FFT two intensity images, e.g. Einstein and the Mona Lisa, combine Albert's power spectrum with Mona's phase spectrum, transform back, and show that the result reminds you more of Mona than Albert, though it doesn't really look much like either.
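For reference, the whole demo is a few lines of numpy (image loading omitted; img_a and img_b stand for same-shape grayscale float arrays):

```python
import numpy as np

def swap_phase(img_a, img_b):
    """Combine the magnitude spectrum of img_a with the phase
    spectrum of img_b and transform back to an image."""
    A = np.fft.fft2(img_a)
    B = np.fft.fft2(img_b)
    hybrid = np.abs(A) * np.exp(1j * np.angle(B))
    return np.real(np.fft.ifft2(hybrid))

# swap_phase(einstein, mona_lisa) looks more like the Mona Lisa
# (the phase donor) than like Einstein (the magnitude donor).
```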

That's the phase of the transform of the real-valued intensity image, _not_ the optical phase.

Genuine optical phase images are pretty interesting, though, especially at high numerical aperture, where you don't get any amplitude contrast from scattering the way you do at lower NA. (This is because the high-NA lens collects all the scattered light and reassembles it in the image, so there's no loss of brightness.)

The other nice thing about optical phase is that it can get you a clean factor of 2 resolution improvement if you do it right, without importing any assumptions into the data whatsoever.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

  • "cannot measure phase at optical wavelengths". Horsemanure. Ever heard of a phase contrast microscope (which existed before holograms)? And there is polarized light that can be used. Probably a dozen different ways...
  • Optical microscope: 0.25 wavelength, easily and repeatably; I did it about 45 years ago and it was "old hat" then...
Reply to
Robert Baer

Well, neutron interferometers work, and so do dispersive X-ray diffractometers with energies in the 200 keV range. 1 eV corresponds to 1.24 microns, so that's about 5E19 Hz. Enough range to be getting on with, for sure.
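Working the numbers as a sanity check (hc is about 1.24 eV.um):

```python
# Photon energy -> wavelength -> frequency, using hc ~= 1.24 eV*um
E_eV = 200e3                    # 200 keV
wavelength_m = 1.24e-6 / E_eV   # ~6.2e-12 m, i.e. about 6 pm
freq_Hz = 3.0e8 / wavelength_m  # ~4.8e19 Hz
print(wavelength_m, freq_Hz)
```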

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

This is only "true" for image, not so for audio. An intuitive explanation could be along these lines: Consider a horizontal line of an image consisting of a black strip on white background. A linear phase change will move the strip to the right or the left!

So phase relates to position, probably the most important element in image perception!
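A one-dimensional numpy sketch of exactly that: multiply the spectrum by a linear phase ramp and the strip moves (here by 30 samples) with nothing else changed:

```python
import numpy as np

N, shift = 256, 30
strip = np.zeros(N)
strip[100:120] = 1.0                    # the "black strip", as a 1-D profile

k = np.fft.fftfreq(N)                   # normalized frequency of each bin
ramp = np.exp(-2j * np.pi * k * shift)  # a linear phase change
moved = np.real(np.fft.ifft(np.fft.fft(strip) * ramp))

print(np.allclose(moved, np.roll(strip, shift), atol=1e-12))  # True
```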

Pere

Reply to
o pere o
