moving data from a faster to a slower clock domain


What is the best way of moving data from a faster clock domain (100 MHz) to a slower one (75 MHz)?


Reply to

There are two issues: How do you throttle the faster data stream, so that it does not overwhelm the slower receiver? How do you interface between two inherently asynchronous clock domains?

A FIFO is a popular device, but it may be overkill, depending on your answer to my first question.

Peter Alfke, Xilinx
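For the second question, the standard building block is a two-flop synchronizer. Here is a minimal sketch (the module and signal names are mine, not from the thread); note it is only safe for a single-bit, slowly-changing signal — a multi-bit bus needs a Gray-coded index, a handshake, or a FIFO on top of it:

```verilog
// Two-flop synchronizer: brings one bit into the destination domain.
module sync_2ff (
    input  wire clk_dst,   // destination-domain clock
    input  wire d_async,   // level signal from the other clock domain
    output reg  q_sync     // synchronized copy, safe to use in clk_dst
);
    (* ASYNC_REG = "TRUE" *) reg meta;  // first flop may go metastable

    always @(posedge clk_dst) begin
        meta   <= d_async;  // given a full clock cycle to settle
        q_sync <= meta;     // second flop presents a clean value
    end
endmodule
```

The ASYNC_REG attribute is a Xilinx synthesis hint that keeps the two flops together and marks them as a synchronizer chain.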

Reply to
Peter Alfke

If speed isn't an issue (i.e. one-off or infrequent transfers), you need to do two things:

1) Synchronize your signals from one domain to the other (there are well-known techniques for this).
2) Borrow a 2-phase or 4-phase handshake protocol from the asynchronous world and implement it in each clock domain.

As is evident, this is substantially slower than the async FIFO option. However, depending on the application, it might be more convenient, e.g. for non-streaming transfers.
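A best-effort sketch of point 2 — a 4-phase (req/ack) handshake — assuming each control signal is passed through a two-flop synchronizer (not shown) before it reaches the other domain. Module names, signal names, and the omitted resets are my assumptions:

```verilog
// Sender side (fast domain). Resets omitted for brevity.
module cdc_handshake_tx #(parameter W = 8) (
    input  wire         clk_fast,
    input  wire         send,       // pulse: request a transfer
    input  wire [W-1:0] din,
    input  wire         ack_sync,   // ack, already synchronized to clk_fast
    output reg          req,
    output reg  [W-1:0] data,       // must stay stable while req is high
    output wire         busy        // high until the 4-phase cycle completes
);
    assign busy = req | ack_sync;

    always @(posedge clk_fast) begin
        if (send && !busy) begin
            data <= din;            // phase 1: capture data, raise request
            req  <= 1'b1;
        end else if (req && ack_sync) begin
            req  <= 1'b0;           // phase 3: drop request after ack
        end
    end
endmodule

// Receiver side (slow domain).
module cdc_handshake_rx #(parameter W = 8) (
    input  wire         clk_slow,
    input  wire         req_sync,   // req, synchronized to clk_slow
    input  wire [W-1:0] data,       // stable while req is high
    output reg          ack,
    output reg  [W-1:0] dout,
    output reg          valid       // one-cycle strobe when dout updates
);
    always @(posedge clk_slow) begin
        valid <= 1'b0;
        if (req_sync && !ack) begin
            dout  <= data;          // safe: sender holds data during req
            valid <= 1'b1;
            ack   <= 1'b1;          // phase 2: acknowledge
        end else if (!req_sync && ack) begin
            ack   <= 1'b0;          // phase 4: return to idle
        end
    end
endmodule
```

Because each transfer waits for the full request/acknowledge round trip through both synchronizers, throughput is low, which is why this suits one-off rather than streaming transfers.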



Reply to

I liked the way some code came together for a similar asynchronous transfer. Targeted specifically at a distributed-RAM architecture like the Xilinx families, the footprint is rather small.

Use a 4-entry, dual-port distributed CLB SelectRAM (or equivalent). Use two 2-bit Gray-coded indices for the write and read time domains. Increment the write index for each write. Generate a new-value flag by registering, in the read domain, whether the read and write indices differ. Increment the read index with the new-value flag as you read the new value.

Because the one read-domain flag deals only with the Gray-coded write index, appropriate timing constraints can be applied to overconstrain the path from the flag to where it's used, reducing metastability to effectively "never" being a problem.

The Gray coding on the write index keeps the new-value flag honest.

The write is in its own timing domain with no concern for the read.

The new-value flag should use a combination of the read index and the current new-value flag to generate the next state.

Total footprint is 3 slices of overhead with 1 slice per bit-width for the intermediate buffer.

It's simple and it gets the job done. There will always be at least one cycle of latency in the read timing domain to let the new-value flag settle and get used. If one tried to generate the flag as a qualifier at the same time as an unqualified read, there's a chance the write-index change kicks off the new-value flag before the written value has stabilized, so the cycle of delay is helpful on both the read and write sides.
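A best-effort sketch of the scheme described above; the signal names, synchronizer depth, and constraint details are my assumptions, not the poster's actual code:

```verilog
// 4-entry Gray-indexed CDC buffer. Relies on FPGA register
// initialization rather than explicit resets.
module cdc_gray_buf #(parameter W = 8) (
    input  wire         wclk,
    input  wire         wen,
    input  wire [W-1:0] wdata,
    input  wire         rclk,
    output reg  [W-1:0] rdata,
    output reg          new_value   // registered flag; qualifies rdata
);
    reg [W-1:0] mem [0:3];          // maps to dual-port distributed RAM
    reg [1:0]   widx = 2'b00;       // Gray-coded write index
    reg [1:0]   ridx = 2'b00;       // Gray-coded read index

    // 2-bit Gray increment: 00 -> 01 -> 11 -> 10 -> 00
    function [1:0] gray_inc(input [1:0] g);
        case (g)
            2'b00: gray_inc = 2'b01;
            2'b01: gray_inc = 2'b11;
            2'b11: gray_inc = 2'b10;
            2'b10: gray_inc = 2'b00;
        endcase
    endfunction

    // Write domain: increment the Gray index per write, with no
    // concern for the read side.
    always @(posedge wclk)
        if (wen) begin
            mem[widx] <= wdata;
            widx      <= gray_inc(widx);
        end

    // Read domain: synchronize the Gray write index, then compare.
    // Constrain the wclk->rclk paths (e.g. set_max_delay -datapath_only)
    // to overconstrain the flag's timing to where it's used.
    (* ASYNC_REG = "TRUE" *) reg [1:0] widx_meta = 2'b00,
                                       widx_sync = 2'b00;

    // The flag's next state uses the current flag and the read index,
    // as the post describes, so it deasserts once the reader catches up.
    wire [1:0] ridx_next = new_value ? gray_inc(ridx) : ridx;

    always @(posedge rclk) begin
        {widx_sync, widx_meta} <= {widx_meta, widx};
        new_value <= (widx_sync != ridx_next);
        if (new_value) begin
            rdata <= mem[ridx];     // read and advance on the flag
            ridx  <= gray_inc(ridx);
        end
    end
endmodule
```

Because the write index is Gray-coded, only one bit of it changes per write, so even if the synchronizer samples it mid-transition the read domain sees either the old or the new index — never a bogus intermediate value — which is what keeps the new-value flag honest.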

This is the fun stuff.

Reply to

ElectronDepot website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.