Hi, all,
I have a gig coming in that will have me revisiting my thesis research from nearly 30 years ago, on interferometric laser microscopes. (Fun.)
Back in the day, I made a nulling-type phase digitizer at 60 MHz by driving a phase shifter with a 12-bit DAC (AD-DAC80), and wrapping a
13-bit successive approximation loop round it (AM2904 with an extra flip-flop). With quite a lot of calibration, that got me a 13-bit, 2-pi, 50 ks/s phase measurement that I was pretty happy with. (The extra bit came from deciding which null to head for, which is why I needed the extra FF.) It was all interfaced to an HP 9816 computer via a GPIO card, and (eventually) worked great. I published one of my only two instruments papers on it (this was before I realized the total futility of almost all instruments papers).

The advantage of nulling detection is that you only need 1-D calibration tables for phase shift and amplitude, whereas getting that sort of accuracy with I/Q techniques requires a 2-D calibration table, which is a gigantic pain.
I need to do this again, 2015 style. The speed requirements are set by the acoustic delay in the AO scanner, so 50-100 ks/s is about all I can use. Rather than all that squishy analogue stuff, I'm planning to do the SAR in software and use a pair of AD9951 DDS chips, one to generate the desired signal and one to be the phase-shifted comparison signal.
So far so straightforward.
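Concretely, the SAR part would be just a binary search on the 9951's 14-bit phase offset word. Here's a rough sketch of what I have in mind; dds_set_pow(), det_positive(), and upper_null() are stand-ins for whatever the real register writes and comparator reads turn out to be, not actual driver calls.

#include <stdint.h>
#include <stdbool.h>

/* Stand-ins for the real hardware access -- placeholders, not a real API. */
extern void dds_set_pow(uint16_t pow14);  /* write the 14-bit phase offset
                                             word and strobe IO_UPDATE */
extern bool det_positive(void);           /* sign of the phase detector
                                             output, read after settling */
extern bool upper_null(void);             /* which null to head for -- the
                                             old "extra bit" decision */

uint16_t sar_phase(void)
{
    /* Top bit picks the half-cycle (which null); the remaining 13 bits
       are a plain successive approximation within it. */
    uint16_t code = upper_null() ? (1u << 13) : 0;

    for (uint16_t bit = 1u << 12; bit != 0; bit >>= 1) {
        dds_set_pow(code | bit);          /* trial step */
        /* wait out the DDS pipeline delay and detector settling here */
        if (det_positive())               /* still short of the null */
            code |= bit;
    }
    return code;                          /* LSB = 2*pi/2^14 ~ 0.38 mrad */
}

One thing that jumps out even at sketch level: that's 13 serial writes to the POW per conversion, which could eat a good fraction of the 10-20 us conversion budget, depending on how fast the 9951's serial port can be clocked.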
What I'm less sure about is keeping the two channels sufficiently isolated to maintain 12 or ideally 14 bits of phase accuracy. Even with a full-scale input, I'll need 85 dB of isolation to get 14 bits, and it gets harder with weaker signals. (There'll be a DLVA/limiter ahead of the phase detector, which will help.)
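A quick sanity check on that number, assuming the worst case of a quadrature interferer against a full-scale wanted signal (back-of-envelope only):

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI   = 3.14159265358979323846;
    const int    bits = 14;

    /* An interfering vector of relative amplitude eps pulls the measured
       phase by up to about asin(eps) ~ eps radians, worst case. */
    double eps = pow(2.0, -bits);            /* crosstalk, relative amplitude */
    double iso = -20.0 * log10(eps);         /* required isolation, dB */
    double lsb = 2.0 * PI / pow(2.0, bits);  /* phase LSB over 2*pi, rad */

    printf("%.1f dB isolation -> %.2e rad error (%.2f LSB)\n",
           iso, eps, eps / lsb);
    /* prints: 84.3 dB isolation -> 6.10e-05 rad error (0.16 LSB) */
    return 0;
}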
I've never used DDSes before, and I'd appreciate some wisdom from folks who have. How hard is that likely to be, and what should I particularly watch out for?
Thanks
Phil Hobbs