Hello,
I'm in the middle of a project which involves digitizing and decoding baseband NTSC composite video. Right off the top, I'll let everybody know that this is an educational project (part of it is for a university project, though it's largely a hobbyist-type project). I realize that the project will be useless in a couple of years, and that there are pre-made devices out there, but I still want to do it.
That being said, I think the hardest part of the whole project (for me) is just getting the data into the FPGA (cleanly)! I know very little about clock management, and I'm worried that I'm pushing the limits of my setup. Let me briefly describe what I'm doing.
The traditional way to sample NTSC video, as I understand it, is to use dedicated chips to derive a "pixel clock" off of the hsync. This clock then feeds the ADC, and perhaps the FPGA. I am not doing this. I am using a fixed, free-running crystal oscillator clock (50 MHz Epson SG-8002JF). For the record, that clock came on my Digilent Spartan 3 starter board, which I'm using for the project. I plan on sampling at the full 50 MSPS, even though the video signal is band-limited to about
4.2 MHz.

Now -- I'm posting in this group for help on getting the ADC data into the FPGA. If you don't know anything about video, you can skip the next paragraph.
But for those who are interested: I use an algorithm I've developed to look for the hsync pulses. The time between hsync pulses is not exactly constant in the real world, but since I have many more samples than I intend to keep, I use an adaptive algorithm to maintain a fixed number of pixels per line. I'm still working on the gritty details, but it will probably involve some type of polyphase resampling.

I then use a DPLL to lock onto the chroma subcarrier burst. The DPLLs use an efficient sinusoid lookup table built from a block RAM and an interpolator -- I think I've seen this referred to as DDS (direct digital synthesis). Basically, you keep a phase index that contains more bits than are necessary to address the table, then use the extra low bits for interpolation. Each DPLL has two outputs, one of which is 90 degrees out of phase with the other. In practice I use two DPLLs, each of which locks onto every other line; since the chroma subcarrier phase is reversed on alternate lines, I think this helps the DPLLs track. I use the DPLL outputs to coherently demodulate the chroma signal into I and Q, which involves a digital multiplier and an FIR windowed-sinc filter. I might change this to a CIC filter, thanks to a helpful suggestion from another newsgroup poster. I won't go into details of anything past that here, unless anyone is interested. There are some things I've left out, like a 2D adaptive comb filter, the VGA generator, etc.
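In case it helps anyone picture what I mean, here is a rough software model of that DDS lookup and the coherent demodulation step. This is just a Python sketch, not my actual HDL; the table size, phase-accumulator width, and fixed phase step are made-up values (in the real thing the DPLL adjusts the step each line based on the burst), so treat it as an illustration only:

import math

TABLE_BITS = 10                        # 1024-entry sine table (block RAM in the FPGA)
PHASE_BITS = 24                        # width of the phase accumulator / index
FRAC_BITS  = PHASE_BITS - TABLE_BITS   # the "extra low bits" used for interpolation

SINE_TABLE = [math.sin(2 * math.pi * i / (1 << TABLE_BITS))
              for i in range(1 << TABLE_BITS)]

def dds_sample(phase):
    """Look up sin(phase), linearly interpolating with the low phase bits."""
    idx  = (phase >> FRAC_BITS) & ((1 << TABLE_BITS) - 1)
    frac = (phase & ((1 << FRAC_BITS) - 1)) / (1 << FRAC_BITS)
    a = SINE_TABLE[idx]
    b = SINE_TABLE[(idx + 1) & ((1 << TABLE_BITS) - 1)]
    return a + (b - a) * frac

FS   = 50e6                            # free-running sample clock
FSC  = 3579545.0                       # approximate NTSC chroma subcarrier
STEP = int(round(FSC / FS * (1 << PHASE_BITS)))   # phase increment per sample

def demodulate(samples):
    """Multiply the composite samples by two carriers 90 degrees apart."""
    phase = 0
    i_raw, q_raw = [], []
    for s in samples:
        i_raw.append(s * dds_sample(phase))                           # in-phase product
        q_raw.append(s * dds_sample(phase + (1 << PHASE_BITS) // 4))  # 90 degrees shifted
        phase = (phase + STEP) & ((1 << PHASE_BITS) - 1)
    return i_raw, q_raw    # these then go through the low-pass (FIR or CIC) filter

Again, the phase increment isn't really fixed in my design -- the DPLL nudges it -- but the table/interpolation part is the same idea.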
OK, now on to the issue at hand!
My Digilent starter board has some headers conveniently soldered onto the PCB. These connect to available digital I/O pins on the Spartan 3. I was previously going to naively output my clock onto any old pin on one of these headers and feed it into the ADC. The ADC will be on a separate breadboard; I would attach a header to that breadboard as well, and use a ribbon cable to connect the clock and (parallel) data pins. I knew (and still know) very little about how jittery that clock output would be. So... I would love to get some help from this group.
I did a Google search for "fpga adc jitter" and found some Usenet comments, which I've pasted at the very end of this post. Sorry if it's considered rude to include such long clips; I'm not sure what the netiquette is for that. Some of them were helpful, but I still feel shaky about the subject.
I now plan to connect my crystal oscillator *directly* to the ADC (by soldering a wire straight from the oscillator on the PCB, without going through the FPGA at all). Will jitter still be a problem? In one of my threads on a different newsgroup, a poster said that I might want to use a buffer register (or a FIFO) to separate the "clean" clock domain from the "dirty" clock domain. That way, the data is delayed a tiny amount, but it will always be ready when the FPGA tries to read it (with its jittered clock). However, when I searched for "adc fifo", I found that lots of people are using an FPGA *as* the FIFO. So is an external buffer needed? Could I perhaps use Xilinx's DCM to phase-shift my FPGA clock by 90 or 180 degrees, so that the ADC data is sure to be ready and waiting when I read it?
One of the posts (clipped below) says that some high-speed ADCs have their own clock *OUTPUT*, which I can feed *TO* the FPGA. Can someone suggest an example of a 50 MSPS video ADC (by "video ADC" I mean one with built-in clamping, sample-and-hold, and AGC)? I hadn't even thought of doing it this way, but would it be easier?
Finally, another one of the clippings below says that commercial FPGA + ADC boards often feed the ADC clock directly from the FPGA (as I had intended to do). So, the obvious question is, how bad is it in reality? I only need 8 bits of precision (10 might be nice) - some loss of color detail is acceptable for this educational project. Can I get that by driving the ADC from the FPGA's clock? Remember that my signal is band-limited to 4.2 MHz, so it isn't very fast-changing relative to what the ADC can handle.
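For what it's worth, here is the back-of-the-envelope check I've been using, based on the standard jitter-limited SNR formula SNR_max = -20*log10(2*pi*f_in*t_jitter). The jitter values in this sketch are pure guesses on my part (I have no idea what the FPGA output actually produces), so please correct me if they're unrealistic:

import math

def jitter_snr_db(f_in_hz, t_jitter_s):
    """Best-case SNR when limited only by rms sampling-clock jitter."""
    return -20 * math.log10(2 * math.pi * f_in_hz * t_jitter_s)

def effective_bits(snr_db):
    """Standard ENOB approximation: SNR = 6.02*N + 1.76 dB."""
    return (snr_db - 1.76) / 6.02

f_in = 4.2e6                           # my video is band-limited to about 4.2 MHz
for t_j in (10e-12, 100e-12, 1e-9):    # 10 ps, 100 ps, 1 ns rms -- assumed values
    snr = jitter_snr_db(f_in, t_j)
    print(f"{t_j * 1e12:6.0f} ps jitter -> SNR limit {snr:5.1f} dB (~{effective_bits(snr):.1f} bits)")

If I've applied the formula correctly, it looks like I'd need to keep the total rms jitter somewhere around 100 ps or better to preserve a full 8 bits at 4.2 MHz -- does that sound about right?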
Is there a way (in the Xilinx tools) that I can be *sure* the clock passes directly from the FPGA input pin to the output pin *without* going through a DCM?
Any completely different ideas? Suggestions for web sites or books with explanations?
Thanks for the help, and for reading this long post!
Regards,
Sean
------------------------------------
1) The issue is not whether or not it works; rather, it is an issue of noise added to your sampled signal due to sampling jitter. A better set-up is to run the clock directly from your oscillator to the ADC and to the FPGA, then use the DLL in the FPGA to solve the I/O timing issues at the ADC. Frequently, the ADC data is brought through a discrete register such as an 'LS374 before going into the FPGA to improve the timing relationship, at the expense of an additional cycle of clock latency.
Amazingly, several of the commercial boards out there that have an FPGA and a decent speed ADC are set up with the ADC clock supplied through the FPGA, an arrangement that doesn't make sense for real DSP applications.
2) I'm not sure what you mean by drive the ADC with the FPGA. The signal flow is the other way around: the ADC will drive the FPGA. You should not drive your ADC clocks from the FPGA. The jitter introduced by the FPGA will absolutely kill the noise performance of the ADC at 800 MHz. At 100 MHz it will reduce the SNR to considerably less than the 10 bits. Use a clean external clock to clock the ADC. Most high speed ADCs have a clock output that can be fed to the FPGA to get a clean transfer of the ADC data into the FPGA.
3) BTW, the jitter tolerance for the ADC is based on your input frequencies rather than on the ADC clock itself. The problem comes about from the sampling time uncertainty: the faster your signal changes, the less jitter you can handle at a given noise level. It is not trivial to design an ADC circuit that meets the part specs even at 40 MHz; jitter can make the effective size of the converter quite a bit smaller if you are not careful. There are a number of commercial boards out with FPGAs and ADCs on them. I can't think of even one that isn't driving the ADC clock from the FPGA. That doesn't mean it is the right way to do it, just that the designer either didn't know better or didn't want to give up the 'flexibility'.

4) The major caution to using PLL-generated clocks is to watch the clock jitter, which can be significant at 200 MHz. This can be accounted for in the FPGA design by subtracting the max jitter from the period constraint. It is a bit more problematic if it is used to clock an ADC or DAC, as then you introduce aperture jitter, which in turn translates to a degradation of the SNR.