50 MSPS ADC with Spartan 3 FPGA - clock issues

Hello,

I'm in the middle of a project which involves digitizing and decoding baseband NTSC composite video. Right off the top, I'll let everybody know that this is part of an educational project (part of it for a university project, though it's largely a hobbyist type project). I realize that the project will be useless in a couple years, and that there are pre-made devices out there, but I still want to do it.

That being said, I think the hardest part of the whole project (for me) is just getting the data into the FPGA (cleanly)! I know very little about clock management, and I'm worried that I'm pushing the limits of my setup. Let me briefly describe what I'm doing.

The traditional way to sample NTSC video, as I understand it, is to use dedicated chips to derive a "pixel clock" off of the hsync. This clock then feeds the ADC, and perhaps the FPGA. I am not doing this. I am using a fixed, free-running crystal oscillator clock (50 MHz Epson SG-8002JF). For the record, that clock came on my Digilent Spartan 3 starter board, which I'm using for the project. I plan on sampling at the full 50 MSPS, even though the video signal is band-limited to about 4.2 MHz.

Now -- I'm posting in this group for help on getting the ADC data into the FPGA. If you don't know anything about video, you can ignore this next paragraph.

But for those who are interested, I then use an algorithm I've developed to look for the hsync pulses. The time between hsync pulses is not exactly constant in the real world. However, since I have many more samples than I intend on keeping, I use an adaptive algorithm to keep a fixed number of pixels per line. I'm still working on the gritty details, but it'll probably involve some type of polyphase resampling.

I then use a DPLL to lock onto the chroma subcarrier burst. The DPLLs use an efficient sinusoid table, which is made of a block RAM and an interpolator. I think I've seen this referred to as DDS (direct digital synthesis). Basically, you keep an index which contains more bits than are necessary to index the table. Then you use the extra low bits for interpolation. The DPLLs have two outputs, one of which is 90 degrees out of phase. In practice, I use two DPLLs, which each lock onto every other line. Since the chroma subcarrier phase is reversed every other line, I think this helps the DPLLs to track. I use the DPLLs to coherently demodulate the chroma signal into I and Q. This involves a digital multiplier and a FIR windowed-sinc filter. I might change this to a CIC filter, due to a helpful suggestion from another newsgroup poster. I won't go into details of anything past that on this newsgroup, unless anyone is interested. There are some things I left out, like a 2D adaptive comb filter, VGA generator, etc.
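
To make the DDS idea concrete, here's a rough Python model of what I mean (the table size, fractional width, and subcarrier numbers are just illustrative placeholders, not my final parameters):

import math

# Phase-accumulator DDS sketch: the accumulator has more bits than the
# table index, and the extra low bits drive a linear interpolation.
TABLE_BITS = 10                       # 1024-entry sine table (block RAM in hardware)
FRAC_BITS = 8                         # extra low bits used for interpolation
ACC_BITS = TABLE_BITS + FRAC_BITS

SINE_TABLE = [math.sin(2 * math.pi * i / (1 << TABLE_BITS))
              for i in range(1 << TABLE_BITS)]

def dds_step(acc, phase_inc):
    """Advance the phase accumulator; return (new accumulator, output sample)."""
    acc = (acc + phase_inc) & ((1 << ACC_BITS) - 1)
    idx = acc >> FRAC_BITS                           # high bits index the table
    frac = (acc & ((1 << FRAC_BITS) - 1)) / (1 << FRAC_BITS)
    a = SINE_TABLE[idx]
    b = SINE_TABLE[(idx + 1) % (1 << TABLE_BITS)]
    return acc, a + (b - a) * frac                   # linear interpolation

# Example: synthesize the ~3.58 MHz chroma subcarrier from a 50 MHz clock.
phase_inc = round(3.579545e6 / 50e6 * (1 << ACC_BITS))
acc, samples = 0, []
for _ in range(100):
    acc, s = dds_step(acc, phase_inc)
    samples.append(s)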

OK, now onto the issue at hand !

My Digilent starter board has some headers conveniently soldered onto the PCB. These connect to available digital I/O ports on the Spartan 3. I was previously going to naively output my clock onto any old pin on one of these headers, and feed it into the ADC. The ADC will be on a separate breadboard. I would also attach a header on that breadboard, and would use a ribbon cable to connect the clock and (parallel) data pins. I knew (know) very little about how jittery the clock output would be. So... I would love to get some help from this group.

I did a Google search for "fpga adc jitter", and found some Usenet comments. I posted them at the very end of this post. Sorry if it's considered rude to include such long clips, I'm not sure what the netiquette is for that. Some of them were helpful, but I still feel shaky about the subject.

I now plan to connect my crystal oscillator *directly* to the ADC (by soldering a wire straight from the PCB, without going through the FPGA at all). Will jitter be a problem? In one of my threads from a different newsgroup, a poster said that I might want to use a buffer register (or a FIFO) to separate the "clean" clock domain from the "dirty" clock domain. That way, the data will be delayed a tiny amount, but it will always be ready when the FPGA tries to read it (with its jittered clock). However, when I did a search for "adc fifo", I found that lots of people are using an FPGA *as* the FIFO. So is an external buffer needed? Could I perhaps use Xilinx's DCM to just phase-shift my FPGA clock by 90 or 180 degrees, so that the ADC data is sure to be ready and waiting when I read it?

One of the posts (which I clipped to this post) says that some high-speed ADC's have their own clock *OUTPUT*, which I can feed *TO* the FPGA. Can someone suggest an example of a 50 MSPS video ADC (ie, by "video ADC", I mean: with built-in clamping, sample-and-hold, and AGC)? I hadn't even thought of doing it this way, but would it be easier?

Finally, another one of the clippings below says that commercial FPGA + ADC boards often feed the ADC clock directly from the FPGA (as I had intended to do). So, the obvious question is, how bad is it in reality? I only need 8 bits of precision (10 might be nice) - some loss of color detail is acceptable for this educational project. Can I get that by driving the ADC from the FPGA's clock? Remember that my signal is band-limited to 4.2 MHz, so it isn't very fast-changing relative to what the ADC can handle.

Is there a way (in Xilinx) that I can be *sure* that the clock passes directly from the FPGA input pin to the output pin *without* going through a DPLL?

Any completely alternative ideas? Suggestions to web sites or books with explanations?

Thanks for the help, and for reading this long post !!

Regards,

Sean

------------------------------------

1)

The issue is not whether or not it works, rather it is an issue of noise added to your sampled signal due to sampling jitter. A better set-up is to run the clock directly from your oscillator to the ADC and to the FPGA, then use the DLL in the FPGA to solve the I/O timing issues at the ADC. Frequently, the ADC data is brought through discrete registers such as an 'LS374 before going into the FPGA to improve the timing relationship at the expense of an additional cycle of clock latency.

Amazingly, several of the commercial boards out there that have an FPGA and a decent speed ADC are set up with the ADC clock supplied through the FPGA, an arrangement that doesn't make sense for real DSP applications.

2)

I'm not sure what you mean by drive the ADC with the FPGA. The signal flow is the other way around: the ADC will drive the FPGA. You should not drive your ADC clocks from the FPGA. The jitter introduced by the FPGA will absolutely kill the noise performance of the ADC at 800MHz. At 100 MHz it will reduce the SNR to considerably less than the 10 bits. Use a clean external clock to clock the ADC. Most high speed ADCs have a clock output that can be fed to the FPGA to get a clean transfer of the ADC data into the FPGA.

3)

BTW, the jitter tolerance for the ADC is based on your input frequencies rather than on the ADC clock itself. The problem comes about by the sampling time uncertainty. The faster your signal changes, the less jitter you can handle at a given noise level. It is not trivial to design an ADC circuit that meets the part specs even at 40 MHz; jitter can make the effective size of the converter quite a bit smaller if you are not careful. There are a number of commercial boards out with FPGAs and ADCs on them. I can't think of even one that isn't driving the ADC clock from the FPGA. That doesn't mean it is the right way to do it, just that the designer either didn't know better or didn't want to give up the 'flexibility'.

4)

The major caution to using PLL generated clocks is to watch the clock jitter, which can be significant at 200 MHz. This can be accounted for in the FPGA design by subtracting the max jitter from the period constraint. It is a bit more problematic if it is used to clock an ADC or DAC, as then you introduce aperture jitter which in turn translates to a degradation of the SNR.

--
The above were all posted by Ray Andraka, I believe. Hopefully he's
still around on these newsgroups, he seems to know the subject very
well!
Reply to
sp_mclaugh

Quick calculation, using a 4.2 MHz full-scale (of the ADC input range) sine wave:

- 4.2 MHz is about 26 Mradians/s
- the ADC input range corresponds to -1 to +1 of a normalized sine
- 1 LSB of an 8-bit ADC is therefore 1/128 (normalized)
- 1 / (26M * 128) is about 0.3 ns

So for a 1 LSB sampling error, you could live with 300 pSec of sampling jitter. My guess is that the threads you looked at were concerned about significantly smaller acceptable jitter, as would be the case in most networking applications where the sampling rate and bandwidth are closer to the same frequency.
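
The same arithmetic as a few lines of Python, if you want to rerun it with your own numbers (same assumptions as above: full-scale 4.2 MHz sine, 8-bit ADC, error held to roughly 1 LSB):

import math

omega = 2 * math.pi * 4.2e6          # peak slew rate of a unit sine, ~26 Mrad/s
lsb = 1.0 / 128                      # 1 LSB of an 8-bit ADC, normalized to +/-1
t_jitter = lsb / omega               # jitter that moves a sample by ~1 LSB
print("allowable jitter: %.0f ps" % (t_jitter * 1e12))   # ~300 ps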

I would guess that your clock oscillator should have much less than 300 pS jitter unless it is poorly bypassed (power supply decoupling). You can run this through the FPGA without a DCM. Additional jitter would then only come from threshold jitter and ground bounce at the FPGA input, which can be minimized by not using the adjacent IOB's or driving the adjacent IOB's to ground.

I would worry more about accounting for off-board routing and ground returns. Using a differential clock from the FPGA to the ADC board would help. If you don't have an ADC that directly takes a differential clock you'll need to add a receiver of some sort. By this time you'll have a significant delay built up on the data clock, so running the clock back to the FPGA along with the data will help you to properly sample the ADC data.

HTH, Gabor

Reply to
Gabor

o You wonder about the jitter on the clock.

I liked Gabor's calculations that showed you wouldn't have much of a problem in data accuracy for your situation. The differential clock approach would make things cleaner overall.

o You were worried about bypassing the DCM.

Your FPGA won't use a DCM unless you explicitly include it.

o You're concerned about getting the right sampling point for the data.

The clock-to-out times should be well specified for the ADC you choose. At 50 MHz, you'll probably have no issues but if your timing adds up to be tight, you might run a DCM from the same clock feeding the ADC or improve your timing budget through other means. If you can use a global clock I/O pair on the S3 part on your headers (I don't think I/O is available for S3E global clocks, just input) you could even use the clock as it appears on the FPGA pad/ball feeding the ADC as the input to your global clock buffer with a little care.

Put together a timing budget that shows what your times are from the clock edge until data is ready at the FPGA pins and compare that with what the FPGA needs in setup time relative to the 20 ns clock period. It's the amount of slack in the budget that tells you if your implementation is a breeze.
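
A budget can be as simple as a few lines of arithmetic. Every number below is a placeholder to show the shape of the calculation, not a real ADC or Spartan-3 figure; pull the actual values from the ADC data sheet, your cable estimate, and the FPGA data sheet:

t_period       = 20.0   # ns, one 50 MHz clock cycle
t_clk_to_adc   = 2.0    # ns, clock path from FPGA header to the ADC (assumed)
t_adc_co       = 8.0    # ns, ADC clock-to-data-valid (from its datasheet)
t_data_return  = 1.5    # ns, data path back over the ribbon cable (assumed)
t_fpga_setup   = 2.5    # ns, setup at the FPGA input register (assumed)

slack = t_period - (t_clk_to_adc + t_adc_co + t_data_return + t_fpga_setup)
print("setup slack: %.1f ns" % slack)   # positive slack = comfortable budget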

o Polyphase filtering is only part of what you can do.

Since the noise you'll see from the clock jitter will be spread across the full 25 MHz bandwidth of your 50 MS/s data stream, you could either subsample your signal (aliasing all the noise into your slower rate baseband) or you can actively filter the signal before decimating with an FIR or other filter without taking up excessive resources. Good execution on this aspect of a video design is superb experience for digital design.

____
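
To make the filter-then-decimate idea concrete, a rough Python sketch (the decimation factor and tap count are arbitrary illustrations; a polyphase implementation computes only the kept outputs but gives the same result):

import numpy as np
from scipy import signal

fs, cutoff, decim = 50e6, 4.2e6, 4

taps = signal.firwin(63, cutoff, fs=fs)       # windowed-sinc low-pass
adc_samples = np.random.randn(10000)          # stand-in for the ADC stream
kept = signal.lfilter(taps, 1.0, adc_samples)[::decim]   # filter, then decimate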

You seem on target with knowing much of what to look for in the design. I hope it's fun.

- John_H

Reply to
John_H

Thanks, it's nice to have a concrete figure like that. I hadn't thought to work backwards and calculate what jitter I can live with (not yet knowing how much jitter I have).

OK, after spending *far* too long on Epson's web site (the eea.epson.com site is poorly organized, though epsontoyocom.co.jp is better), I found some jitter figures. It says that for a 15pF load and a 3.3V power source, I should expect 200 ps maximum cycle-to-cycle jitter, or 250 ps maximum peak-to-peak jitter. As you said, that is assuming a clean (isolated) power source. I'll describe the power source in a second. But first, let me paste two lines from Epson's data sheet that sound a bit ominous:

"Because we use a PLL technology, there are a few cases that the jitter value will increase when SG-8002 is connected to another PLL-oscillator. In our experience, we are unable to recommend these products for applications such as telecom carrier use or analog video clock use. Please be careful checking in advance for these applications (jitter specification is max 250 ps / CL = 5 pF."

Perhaps they recommend against it because most commercial applications would need more than 8 bits of resolution (10 is usually used, I think, maybe 12 for professional video equipment). After reading that, do you still think that my application will be OK? And even if I run the clock through the FPGA?

I don't mind spending $20 or whatever on getting a better clock, if it sounds like the best solution. I want this system to perform reasonably well, and I'm willing to pay for it. The starter board even has an optional 8-pin clock socket, so it would be exceptionally easy to do. After reading the specs on that Epson clock, I know *why* they included that socket! :-)

Anyway, I'll now quickly describe the power supply (and decoupling of clock power input) on the Digilent starter board:

- All power first comes from a 5V AC-DC wall plug

- All power then goes through a LM1086CS-ADJ 3.3V regulator

- For the FPGA, 2.5V and 1.2V are generated from the 3.3V

- The 3.3V is used directly (shared) by a number of on-board components, including the crystal oscillator clock

- There appear to be 35 parallel 47nF capacitors between 3.3V and ground

- The only other isolation provided to the oscillator's power pin is another locally placed 47nF capacitor between power and ground

Does it sound like the clock power input is adequately isolated (clean)? I don't have a "gut feeling" one way or the other.

What do you think about the previous plan I mentioned? I'd use about 6" of standard ribbon cable (about the same grade as ATA cabling) to connect from a header on the Digilent starter board to the ADC breadboard.

I've never used a differential clock before. I wonder if my Spartan can do that... Some initial searching did turn up some mention of differential output pins (being used mostly for DDR memory clocks). If I can't do it on-chip though, there's no point, because I have to get to the breadboard to mount any discrete chips. There's no extra space on the starter board. And I don't intend to build a custom PCB (with the FPGA) to replace the starter board.

I understand why there would be delay, but can you explain the part about running the clock back to the FPGA? Since it's a fixed delay, couldn't I just use the DCM to delay the Spartan's clock by a fixed amount?

Thanks very much!!

Reply to
sp_mclaugh

As did I ! I'm looking into the differential clock approach now, though I fear that it won't be do-able. I *think* the Spartan 3 can do differential output, using special features of the IOB's, but it seems that some external setup/calibration components (resistors) are required. It would be up to Digilent (producer of my starter board) to have properly implemented these. There appear to be quite a few "special" output modes (ie, LVPECL, etc) and I would be lucky for them to have implemented exactly the one I need. Building my own PCB for the Spartan is out of the question at this time (it would take me a year or more to learn all the necessary skills). I could be mistaken - maybe there is an easy way. That's just my current best-guess after a few hours of research.

That's good to know. I wonder if I should still worry about routing the clock through the FPGA's output header to drive the ADC. Perhaps there would be added jitter due to other factors, such as actively switching flip-flops near the driving IOB...? I'm basically repeating this from another post I've read; I don't know what order of noise we're talking about here, or whether it's negligible compared to my poor oscillator.

I think you're talking about the same thing I say a bit further down (offsetting the FPGA clock by the clock-to-out time), but correct me if I'm wrong.

As of even yesterday, anything about the internal clock distribution in the FPGA would have flown right over my head. However, earlier this afternoon, I was reading a bit about the global clock buffers, etc. It'll take me awhile to digest all the literature I've read from Xilinx, plus what you wrote. So I'll get back to you on that one. Though if you're in the spoon-feeding type of mood, my mouth is open.

Ah yes, a timing budget is something I will be doing. Of course, the rest of my design isn't finished yet, so I don't yet know what type of max setup times I'll need. I guess if I use input buffers (using IOB's), the setup time to get the data into the FPGA will be independent of the rest of my design, right? I've never touched any IOB features before, but it seems easy (just set a single attribute, I think...?).

On the other hand, couldn't I avoid the issue altogether by using a DCM to adjust my FPGA clock by the clock-to-out time of the ADC? That way, the data is ready right on the rising edge of my FPGA clock. It seems that I can make adjustments in increments of 1/256 of my clock period.

Good point! On my first reading, I got caught up on the "subsample" part for awhile, and kept thinking thoughts about running an ADC below the center frequency of a narrow band-pass signal. Then I realized that you were referring to the method I use to choose which samples to keep (ie, decimation, etc), and the "aliasing noise into..." part became clear.

Now, it turns out that I *was* going to include a low-pass block in my polyphase resampler, but I must confess, I wasn't thinking of cutting out noise due to clock jitter in my ADC. I knew that I had to band-limit my signal before decimation, but I figured that the only high-frequency information would be noise coming directly from the video source. Cutting out a large chunk of the noise caused by jitter in my sampling clock is a very welcome bonus!

So in essence, by sampling at 50 MSPS rather than the minimum of 8.4 MSPS, and then applying a low pass with cutoff around 4.2 MHz, I'm getting rid of about (25-4.2)/25 * 100% = 83% of the noise due to jitter on the ADC clock (assuming the noise content is uniformly distributed from 0 to 25 MHz)... Does that calculation sound right (assumes ideal filters, etc)? If so, what a pleasant surprise!
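
Spelling out my arithmetic (assuming an ideal brick-wall filter and jitter noise spread uniformly out to 25 MHz):

nyquist, cutoff = 25.0, 4.2                   # MHz
removed = (nyquist - cutoff) / nyquist
print("fraction of jitter noise removed: %.0f%%" % (removed * 100))   # ~83%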

____

I appreciate the kind words, though I think I'm right on the borderline capability-wise. Let's hope I'm not right below that line - close enough to waste a lot of time, but just too far to ever get it working! But yes, it should be a fun project.

The info you gave was very helpful, thanks!

Regards,

Sean

Reply to
sp_mclaugh

Regarding the frequency range of noise due to sample clock jitter (sampling using an ADC much faster than required for a given band-limited signal):

On a second reading, I was wondering if you could explain this a bit further. In the worst-case scenario, we would have an input signal with a purely 4.2 MHz frequency component (would never happen for video, but just for the argument). If two samples were taken, each experiencing maximum sample clock jitter, but in opposite directions, then they would be separated by (sample time + 2 * jitter). However, we would treat them as if they were separated by only (sample time).

Wouldn't this only introduce noise up to a frequency of:

4.2 MHz * (sample time + 2 * jitter) / (sample time) ?

ie, for 250 ps of jitter on a 20 ns clock, with a 4.2 MHz signal being sampled, I could expect to see noise up to 4.305 MHz...?

Or, instead of assuming an input with a purely 4.2 MHz component, go to the other extreme. Assume the input is a constant DC signal. The jitter on the sampling clock wouldn't cause any noise at all here, would it?

Please excuse the simple question, this is probably something elementary, but it's new to me!

Sean

Reply to
sp_mclaugh

Driven differential signals don't need the resistor networks in the Spartan3. You can generate an LVDS signal from pins marked as complementary pairs without any passives involved; a 100 ohm differential termination at the differential ADC clock is still important. The ideal situation would have these signals routed next to each other with specific differential impedances but I expect your best bet will be to find the complementary signals that don't have anything else routed between and are roughly the same length. There might not be a lot to choose from.

If I recall, the Digilent Spartan3 board has a 40-pin header with one power and one ground (or a similarly abysmal path for return currents). The header you connect to might be responsible for introducing most of your system jitter per Gabor's comments on return current. If you have many unused signals on that connector, driving them to output logic low with a strong IOSTANDARD will help. Changing them to hard wired grounds would be better still. I believe the ribbon cable adds to the crosstalk effects, so keeping it short will also help. But the differential clock is that much more attractive.

You might consider using a "dead bug" addition to your Digilent board. There are small differential drivers available. If you tack the chip upside down by the oscillator (imagine a bug with its legs in the air) you can wire the oscillator output right to the discrete differential driver input. Use a twisted pair to deliver this clock directly to a 2-pin header on your ADC board. If you're not designing the board and it already has only a single-ended input, you can tack a differential receiver to your ADC board in the same way. If you use this approach to deliver a very clean clock (making up for a poorly designed signal header) consider hot-gluing or epoxying the twisted pair to the board so you have a mechanical strain relief that keeps the wires from ripping off your tacked-on chip.

If you're using "mild" I/O switching strengths, you'll be better off than using strong drives. If you look at the data sheet for SSO recommendations, you'll see which standards tend to be nasty and which "play nice." If you're dealing with inputs rather than outputs, things will be much better - it's the current surge from driving the outputs that causes the majority of the jitter-inducing crosstalk.

If you arrange the design to register the ADC outputs directly in the FPGA's IOBs, you can find the setup and hold times in the Spartan3 data sheet without having to look at the Timing Analyzer report. Even when I specify register packing in IOBs and use input registers, I still use OFFSET IN (BEFORE) constraints on my input signals to get a very big warning if something didn't end up in the IOB like I planned.

The DCM gives you flexibility. But when you do your timing budget, you might find there's a better way to reduce the uncertainties rather than just shifting the clock by the reported delay. The shift might be close to optimal but the delay is specified as a worst case, not typical. When you have a "best clock scheme" figured out and the DCM isn't *between* the oscillator and the ADC, you might get better results with the DCM but not necessarily with any added phase shift.

It *sounds* right but I haven't been performing these calculations myself recently so my view from 20,000 feet says it's pretty reasonable.

Reply to
John_H

The jitter introduces amplitude errors, not frequency errors. Any amplitude or frequency error can induce problems in the other domain (which is why the ADC frequency error - phase, actually - induces the amplitude error). You're analyzing the signal as if it's in an ideal sampling domain so the errors will show up as amplitude noise.

The jitter won't induce noise on the DC signal, correct. Great observation. You still get the benefit of the ADC noise being reduced at DC.

If you were to only sample at 8.4 MS/s, your 4.2 MHz sinewave would have maximum sample errors at the highest slew of the signal with maximum deviations that constructively add to produce the maximum error. When you have a 50 MS/s stream looking at the 4.2 MHz signal, your maximum values are still the maximums but you throw many other samples in with that same period. Each sample point will have similar noise power, but weighted by the signal slew rate; the top and bottom of the sinusoid are closer to DC for jitter analysis reasons so the noise power isn't constant for all sample points but significantly reduced in the slower slew regions. Filtering over the wider bandwidth allows the worst sample errors to be filtered with the smallest sample errors leading to an overall reduction in jitter-induced noise.
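
If you want to convince yourself numerically, a quick simulation along these lines (the 250 ps RMS jitter figure and the filter details are assumptions for illustration, not measurements of your board) shows the in-band error coming out well below the raw sample error:

import numpy as np
from scipy import signal

# Sample a 4.2 MHz sine at 50 MS/s with jittered sample instants, then see
# how much of the jitter-induced error survives a low-pass at the video band.
rng = np.random.default_rng(0)
fs, f0, sigma_j, n = 50e6, 4.2e6, 250e-12, 1 << 16
t = np.arange(n) / fs
ideal = np.sin(2 * np.pi * f0 * t)
jittered = np.sin(2 * np.pi * f0 * (t + rng.normal(0, sigma_j, n)))

err = jittered - ideal                            # error from jitter alone
taps = signal.firwin(255, 4.5e6, fs=fs)           # keep roughly the video band
err_inband = signal.lfilter(taps, 1.0, err)

print("raw error RMS:     %.2e" % err.std())
print("in-band error RMS: %.2e" % err_inband.std())   # noticeably smaller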

I would expect most of your jitter to be high-frequency since you're coming from a crystal source with the induced noise coming from that "ideal" signal getting phase distortions through various buffer stages from the slight induced shifts of threshold point. Higher frequency jitter is easier to remove from your overall system noise than low frequency jitter that induces real phase shifts in your observed data.

Reply to
John_H

Isn't this calculation a bit crude? I suppose the spectrum of the jitter is also important.

Reply to
Nico Coesel

Yes, but assume that we have a pure 4.2 MHz sine wave, and we sample where the slew rate is fastest (at the zero crossings, if the sinusoid goes from -1 to +1). Call the difference between two such samples max_change. Then, with worst-case jitter, instead of seeing max_change between two samples, we see max_change * (t_sample + 2*t_jitter) / (t_sample). This assumes a first-order expansion around the fast-slew area. In other words, treat that area as having a constant slope (good approx for a sinusoid), so the amplitude between samples is linearly related to the time between samples. But, once we read the values into the FPGA, we treat them as if they were only separated by t_sample. If the change-per-unit-time increases, doesn't that directly translate to a change in maximum frequency? So... is my 4.305 MHz cutoff above correct?

So what happens between these two extremes (signal being either completely DC or completely high frequency - 4.2 MHz)? Surely if the signal was completely 1 Hz, we wouldn't expect to see jitter uniformly distributed from 0 to 25 MHz, correct? Shouldn't the maximum frequency of jitter-induced noise be a percent (>100%) of the maximum frequency of the input signal?

Yes, I think we are talking about the same thing (compare to what I mentioned above). ie, the first sample is jittered so that it occurs too early, while the second occurs too late -- and all of this happening where slew is the highest.

Ah, now that does make sense to me. If my signal really *was* just a sinusoid (ie, a single tone), then maybe I could even develop some algorithm to pick out the min and max samples (where slew was lowest). Of course, that's not possible with my (real) video signal.

The source of the jitter is beyond my knowledge, but this is certainly good to hear. I will definitely low-pass my signal as close as I can to 4.2 MHz (depending on how steep my filter is, which depends on how much FPGA real estate I have to spare).

One last question/comment. Wouldn't this be an ideal example of when to use dithering? ie, my LSB isn't really significant, so I shouldn't treat it as if it was. I've never used dithering before, but maybe I can use an LFSR (linear feedback shift register) or some other technique to add one LSB of randomness to the samples... ?
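
For concreteness, here's the sort of thing I had in mind, as a Python sketch (the 16-bit Galois LFSR and its taps are just parameters I picked for illustration, not a recommendation):

def lfsr16_step(state):
    """One step of a 16-bit Galois LFSR; returns (new state, output bit)."""
    bit = state & 1
    state >>= 1
    if bit:
        state ^= 0xB400                 # tap mask for x^16 + x^14 + x^13 + x^11 + 1
    return state, bit

state = 0xACE1                          # any nonzero seed works
dithered = []
for code in [100, 101, 102, 103]:       # stand-in 8-bit ADC codes
    state, bit = lfsr16_step(state)
    dithered.append(code + (bit - 0.5)) # add +/- 0.5 LSB of pseudo-random dither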

Reply to
sp_mclaugh

The 4.2 MHz cutoff is the right cutoff to design for because 1) these are based on ideal-time samples in your filter space and 2) you probably won't have a "brick wall" filter. You should have an analog filter on the front end if your input isn't guaranteed to be cleanly band-limited (such as the steps from a 27 MHz DAC) to help reduce any initial aliasing but the analog filter doesn't need to be extreme, just to have a good block between 45 and 55 MHz since that range would alias back down to your ~5 MHz range of interest. A digital filter can clean up what's left but you don't need to design for 4.305 MHz rather than your desired 4.2 MHz in the digital realm though the difference is rather minor.

Again, the jitter has an effect on the 1 Hz measurement - a very small amount - but you will see a noise floor all the way out to 25 MHz from the jitter if the other system noise (including measurement noise) didn't swamp out those extremely small values. Imagine a .01% random noise source added to your signal. You will see that entire noise source in your spectrum. It's just very small and not worth worrying about in this application.

You will have more jitter-induced error at higher frequencies than at lower frequencies. Happily, the higher frequencies for video produce less noticeable artifacts. If your noise floor for low frequencies was -40 dB, you might have objectionable results, especially if you're trying to process single frames. If the -40 dB noise floor is at the higher frequencies, you have the perceived color getting off track a bit in a composite signal or loss of precision in fast intensity changes for component video. The main luminance content is still very clean.

If you just picked out the min and max, you wouldn't gain any noise averaging from the other samples. If you have two independent jitter sources that individually induce 100 ps of RMS jitter, what would the two jitter sources do to your signal? You wouldn't end up with 200 ps RMS jitter; you'd end up with about 140 ps. Jitter is statistical in nature. If RMS jitter is based on 1 standard deviation, getting both jitter values to add at full magnitude is an event out at 2 standard deviations, not 1.
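
In numbers, that's just the root-sum-square of the two assumed 100 ps sources:

import math

j1, j2 = 100e-12, 100e-12
print("%.0f ps RMS" % (math.hypot(j1, j2) * 1e12))   # ~141 ps, not 200 ps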

If you average more samples with random distributions, the overall noise is reduced by the same statistical reasoning, even if the samples at the slower slew rates didn't reduce the jitter-induced noise on their own.

There's no need to over-design. A "clean" signal can still have some noise (or some alias) and meet all your needs. If you could experiment with different cutoff frequencies or steepness, you might gain better insight into what qualities deliver "better" results at what cost. Superb opportunity for learning experience.

Dithering is useful if you're trying to avoid frequency spurs typically related to the nonlinearity of the ADC you're using. If you want to get a 3 MHz sinewave and a 100 kHz sinewave superimposed without 2.9 and 3.1 MHz components 80 dB below the main sinewave, then yes - dithering is helpful. For video you shouldn't notice any problems from the slight non-linearity of today's converters. You'll already have noise in your system from the amplifiers, the converter, and the jitter-induced effects. This is another aspect that could add nicely to the learning experience but keep in mind that the added dither has to be kept out of the frequency range of interest, such as feeding it through a bandpass that has good bandstops up to 5 MHz and 45-55 MHz (for aliasing) as well as a good rolloff by the time you reach 95 MHz; I wouldn't recommend it because of the stringent analog filter design needs, but seeing the difference is informative.

Reply to
John_H

The calculation is crude, sure. But what DO we know about the jitter from the oscillator and the jitter induced by switching within an FPGA? Almost nothing. There's little chance to "count on" any kind of jitter spectrum for doing anything beyond a first order approximation. If the first order effects are considered, the secondary issues are... secondary.

Reply to
John_H

One more thing: If you're doing your own ADC board (leaving the Spartan3 board to the "experts") you would do yourself the best service by including your oscillator there and supplying that clock to the FPGA. If you don't have a global clock pin on the ribbon cable header, you can still use a twisted pair (signal and ground) to route the clock independently to the unused DIP header on the Digilent board.

Reply to
John_H
