Delay Generators in FPGAs

Hi:

I have been tasked to build several custom laboratory instruments. One is a "safety system" for some engine combustion research labs that prevents the "engine controller" computer (a PC with a DIO board implementing a word generator driven by a shaft encoder on the engine) from issuing attempts to fire the engine for too long or at bad timings. This safety system has been delayed for several years due to other priorities.

The engine controller that we have now has problems with signal integrity and so its "patch panel" (the panel that buffers the DIO board signals to BNC connectors for connection to lab stuff) needs to be redone.

Additionally, the engine controller runs on MS-DOS, so I'd like to update it to a modern architecture. Trouble is, this really isn't necessary, because "if it works, don't fix it." There are also no desired capabilities that it doesn't already have.

However, the engine controller does need to be connected to various SRS and Berkeley Nucleonics delay generator boxes for generating pulses in the time domain, as the engine controller only works in the shaft encoder's angular domain.

In summary then:

  1. Need a safety system.
  2. Need a new version of the engine controller patch panel.
  3. Would like but don't need a new engine controller.
  4. A future engine controller upgrade would be even better if it incorporated the safety system and time-domain delay generators.

I have decided that the logic required to implement the safety system (uses an absolute encoder to check timings against the quadrature encoder that drives the engine controller computer) is complicated enough that it should be based on a large CPLD or small FPGA. But if I base it on an FPGA, then it is only some additional HDL to implement the engine controller functions anyway.

So why not just make an engine controller with integrated safety features?

That's where I am heading.

One missing link: the time-domain delay generators. Some tasks need only microsecond resolution, so my present elementary logic skills can handle those designs. But there are also nanosecond and 100 ps resolution delays, which we employ for some laser-timing and intensified-camera-gating situations.

It is my understanding that an FPGA with DLLs and other clock timing/phasing features can be set up to give much finer delay resolution than could be achieved merely by counters running at its maximum clock rate. But the methods used to do this, and even an understanding of the finer clock-management features of modern FPGAs, are well beyond me. (I have just about mastered the much simpler world of CPLDs.)
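For a rough sense of the scales involved, here is a back-of-envelope sketch in plain Python. The 200 MHz counter clock and the 1/256-of-a-period phase step are illustrative assumptions (the latter is in the ballpark of what a Spartan-3-class DCM's fixed phase shift offers), not guaranteed device specs:

```python
def counter_resolution_ns(clock_hz):
    """Delay resolution of a simple counter: one clock period."""
    return 1e9 / clock_hz

def dcm_phase_step_ps(clock_hz, steps_per_period=256):
    """Fine phase-shift step if one clock period is divided into
    equal steps (assumed here: 256 steps per period)."""
    return 1e12 / clock_hz / steps_per_period

# A counter clocked at a modest 200 MHz:
print(counter_resolution_ns(200e6))   # 5 ns per count
# The same clock with a 1/256-period phase shift:
print(dcm_phase_step_ps(200e6))       # ~19.5 ps per step
```

This is why clock-phasing features can reach well below the plain counter granularity, though jitter (discussed below in the thread) still limits what is usable in practice.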

So my plan here is to think ahead. I want to design in an FPGA that is approachable with my present skills, but with enough surplus resources that I can expand the initial functionality (the safety system) into a full-blown engine controller with delay generators at a later time.

What FPGA architecture should I employ to implement a gadget which can do what I want (engine controller and safety system) while leaving a considerable amount of free logic resources for future advancement of its capabilities into the realm of delay generation to ns and sub-ns resolution?

I am looking at Spartan 3 development boards now. The Virtex stuff just seems way too overkill. There is no way I'd ever touch a million gates. But a few 10s of thousands or even a 200k FPGA might be suitable.

Would like to stick with Xilinx.

I know my logic demands are vaguely specified at this point. The idea is to get a rough idea of whether incorporating the fine resolution delays is worth planning for or not.

Thanks for comments.

Good day!

--
_______________________________________________________________________
Christopher R. Carlen
Reply to
Chris Carlen

see comments below

"Chris Carlen" wrote in message news: snipped-for-privacy@news3.newsguy.com...


This may be a hard nut to crack, depending on the actual requirements.

The DCM phase-adjust step can be very small, around 50-60 ps, but there can be jitter on the order of 300 ps, so depending on your requirements the DCM may or may not be a usable option.

Since you are not targeting a 'lowest price, high volume' product but rather a 'mission critical, all performance may come in handy, single-unit price not so important' one, the overkill may actually prove useful.

I understand that the 'leftover' 1M+ gates look like too much overkill, but there are other features in V4 that are not in the low-cost Spartan-3, and it looks like those features may be needed in your case.

1) V4 can reconfigure the DCM parameters at runtime. The phase delay can be adjusted in S3 too, but the V4 DCM can be fully reprogrammed while the FPGA is running.
2) The V4 fabric is WAY faster. The LUT logic propagation delay is many times smaller than in S3. This may be needed as well.
3) There are other 'adjustable' delay options for I/O, etc., and other architecture enhancements, which could also come in handy.

So to be on the safe side I would suggest a low-end V4.

Yes, from your specs it is not possible to give 100% assurance that some FPGA would do all the things you mentioned, e.g. these 100 ps accurate delays. It would, however, be possible to make a delay generator with a 100 ps tuning step.

Hmm, sounds like a fun task. The V4 fabric can switch at 1 GHz, which gives 1 ns resolution; with some more tricks a 100 ps step could be achieved as well. But I would not try that in S3, whose fabric already gets flaky below 500 MHz.

Reply to
Antti Lukats

Sounds like you can do all the engine-control and safety stuff in the FPGA at a modest clock rate, maybe 10 or 20 MHz. An external dumb-logic watchdog or two might be a good idea, too.

You could use an SRS or BNC delay generator for just the camera stuff, since you have those already.

I know of no obvious way to get decent ps-resolution delays from an FPGA alone. We make a small OEM delay generator that's Spartan-3-based, 50 MHz clock, but we use external analog stuff for the fine delays...

formatting link

John

Reply to
John Larkin

You can get "picosecond" pulses out of the Multi-Gigabit Transceivers in Virtex-II Pro and Virtex-4 FX. At 3 Gbps one data bit = 333 ps, at 6.4 Gbps it's 156 ps, and when you push it to 10 Gbps, it's 100 ps. So you can modulate an outgoing data stream with that kind of resolution. For example, you can generate a 1 MHz signal with 200 ps adjustment accuracy, which is one part in 5,000. And all of this with crystal-frequency accuracy.
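The arithmetic behind those numbers is simple: edge-placement granularity is one serializer bit time. A quick sketch (the line rates are just the ones discussed, not an exhaustive list of supported rates):

```python
def bit_period_ps(line_rate_bps):
    """One serializer bit time at a given line rate."""
    return 1e12 / line_rate_bps

for rate in (3e9, 6.4e9, 10e9):
    print(rate, bit_period_ps(rate))
# ~333 ps at 3 Gbps, 156.25 ps at 6.4 Gbps, 100 ps at 10 Gbps

# Edge-placement positions for a 1 MHz output with 200 ps steps:
period_ps = 1e12 / 1e6      # 1 us period = 1,000,000 ps
print(period_ps / 200)      # 5000 positions, i.e. one part in 5,000
```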

Peter Alfke, Xilinx Applications

Reply to
Peter Alfke

Peter Alfke wrote: [...]

Peter (and Austin),

Speaking of crystal-frequency accuracy, you reminded me of a question I've had for a little while.

How much can the CLKIN input to a DCM vary before you have to worry about it losing lock? Assume the output drives a BUFG, which feeds directly back to CLKFB. I know Austin answered this type of question at least once before

formatting link
, but I don't see in the V2Pro, V4, or S3 datasheets where the 100 ppm is spec'ed. Is the spec still the same for the above parts?

Thank you,

Marc

Reply to
Marc Randolph

Marc,

It turns out that if you are going only to the DFS, and you do not move the frequency very fast, you can sweep from the minimum to the maximum input (output) frequency without losing lock.

The DLL is fussier, as it arranges its six delay lines based on the options, the range, and where it locks. So in the DLL, if you start sweeping the frequency, you may get an overflow or underflow on one of the delay lines and lose lock.

We typically spec +/- 100 ppm, because just about any trashy crystal can do that. In reality, +/- .01 is probably safe.

Austin

Reply to
austin

Austin,

Suppose the clock starts from 'any trashy crystal' but is then fed through another Xilinx DLL: is there a limit to the jitter degradation when chaining DLLs in such a system? This will become a more common scenario...

-jg

Reply to
Jim Granville

Thank you for the response, Austin.

Howdy Jim,

Indeed it is. Not to put words in his mouth, but I suspect Austin did not mean to imply that crystal oscillators have a large amount of jitter. I took "trashy" to refer to cheap XOs that typically have wide ppm tolerances (+/- 100 ppm, for example). Even cheap XOs should have relatively low jitter (compared with a DLL), so meeting the jitter tolerance requirement in the datasheet shouldn't generally be a problem (at least for the first DLL).

Have fun,

Marc

Reply to
Marc Randolph
