A spectre is haunting this newsgroup, the spectre of metastability

To paraphrase Karl Marx: A spectre is haunting this newsgroup, the spectre of metastability. Whenever something works unreliably, metastability gets the blame. But the problem is usually elsewhere.

Metastability causes a non-deterministic extra output delay when a flip-flop's D input changes asynchronously and happens to change within an extremely narrow capture window (a tiny fraction of a femtosecond!). This capture window is located at an unknown (and unstable) point somewhere within the set-up time window specified in the data sheet. The capture window is millions of times smaller than the specified set-up time window. The likelihood of a flip-flop going metastable is thus extremely small, and the likelihood of a metastable delay longer than 3 ns is even smaller. As an example, a close-to 1 MHz asynchronous signal, synchronized by a 100 MHz clock, causes an extra 3 ns metastable delay statistically once every billion years. If the asynchronous event is at ~10 MHz, the 3 ns delay occurs ten times more often, once every 100 million years. But a 2.5 ns delay happens a million times more often! See the Xilinx application note XAPP094.

You should worry about metastability only when the clock frequency is so high that a few ns of extra delay out of the synchronizer flip-flop might cause failure. The recommended standard procedure, double-synchronizing in two close-by flip-flops, solves those cases. Try to avoid clocking synchronizers at 500 MHz or more... So much for metastability.
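To put rough numbers on that model, here is a back-of-the-envelope sketch in Python of the standard MTBF formula, MTBF = exp(t_slack/tau) / (T0 * f_clk * f_data). The constants tau and T0 below are illustrative assumptions, not the measured XAPP094 values, so the absolute figures differ from the ones above; what matters is the exponential scaling with slack and the linear scaling with the two frequencies.

```python
import math

# Standard synchronizer failure model:
#   MTBF = exp(t_slack / tau) / (T0 * f_clk * f_data)
# tau and T0 are ILLUSTRATIVE assumptions, not measured device data.
TAU = 0.04e-9    # s, metastability resolution time constant (assumed)
T0 = 0.5e-9      # s, effective capture-window constant (assumed)
YEAR = 365.25 * 24 * 3600.0

def mtbf_years(f_clk, f_data, t_slack):
    """Mean time between unresolved metastable events, in years."""
    return math.exp(t_slack / TAU) / (T0 * f_clk * f_data) / YEAR

print(mtbf_years(100e6, 1e6, 3.0e-9))   # 1 MHz data, 3 ns of slack
print(mtbf_years(100e6, 10e6, 3.0e-9))  # 10x the data rate -> 10x more often
print(mtbf_years(100e6, 1e6, 2.5e-9))   # 0.5 ns less slack -> orders of magnitude more often
```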

The real cause of problems is often the typical mistake of feeding an asynchronous input signal to more than one synchronizer flip-flop in parallel (or feeding an asynchronous byte into a register without any additional handshake), in the mistaken belief that all these flip-flops will synchronize the input on the same identical clock edge. This might work occasionally, but sooner or later subtle differences in routing delay or set-up times will make one flip-flop use one clock edge, and another flip-flop use the next clock edge. Depending on the specific design, this might cause a severe malfunction. Rule #1: Never feed an asynchronous input into more than one synchronizer flip-flop. Never ever.
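A minimal simulation sketch of that failure mode, in Python, with an assumed 0.3 ns routing-delay difference between the two flip-flops (all numbers are illustrative). Note that no metastability is needed at all: plain skew is enough to make the two flip-flops capture the same asynchronous edge on different clock edges.

```python
import random

CLK_PERIOD = 10.0          # ns (100 MHz clock)
SKEW_A, SKEW_B = 0.0, 0.3  # ns, assumed routing-delay difference between the flops

def captures_new_value(edge_t, skew, clk_edge_t):
    """1 if a flop whose data path adds `skew` sees the new value at clk_edge_t."""
    return 1 if edge_t + skew <= clk_edge_t else 0

trials, disagreements = 100_000, 0
for _ in range(trials):
    edge_t = random.uniform(0.0, CLK_PERIOD)  # async edge lands anywhere in the period
    if (captures_new_value(edge_t, SKEW_A, CLK_PERIOD)
            != captures_new_value(edge_t, SKEW_B, CLK_PERIOD)):
        disagreements += 1

# Roughly (skew difference / clock period), here ~3%, of the edges land in the
# hazard window where the two "synchronizers" disagree for one clock cycle.
print(f"{disagreements} of {trials} async edges were captured on different clock edges")
```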

Peter Alfke

Reply to
Peter Alfke

Applause!!!

Reply to
Ben Twijnstra

First, I fully agree that metastability is a very rare issue, and if it _does_ occur, it is because the designer has not guarded against it. Yes - the designer has a responsibility to meet the setup and hold times (which will help prevent such issues).

Second, metastability, as a potential problem, _does_ exist in any FF-based system, by definition. See point number 1.

So I agree with Peter Alfke on this. I understand why the thread was named that way too :)

I've noticed the large number of recent threads 'blaming' purported metastability issues for problems, but at (for instance) 10 MHz, a metastability issue would show up only once in a few million or so transactions (at an absolute maximum, and then only a few times at that rate).

So if there's a problem with your design throwing bad data at a rate of 1 in a million or so, check your timing. It's highly unlikely to be metastability.

I have had *true* metastable problems (where an output would float, hover, oscillate and eventually settle after some 10s of *milliseconds*), but those I have seen recently don't qualify :)

Cheers

PeteS

Reply to
PeteS

In my experience, ground bounce is a bigger problem. Especially in a device that is nearly 'full', it is wise to invest in a few fabricated grounds (dedicate a pin at a strategic location, i.e. as far away as possible from the other ground pins, drive it to ground, and tie it to ground externally).

When you find that moving cells around alleviates or intensifies observed instabilities, you may want to look into ground bounce problems.

I've found that one synchronizing flip-flop was not enough in one particular case (from a 4-ish to a 50-ish MHz domain). Two were. Does one ever work reliably? Or has the 'window' become smaller in the past few years?

John Kortink
Windfall Engineering

--

Email    : kortink@inter.nl.net
Homepage : http://www.windfall.nl

Your hardware/software designs realised!
Reply to
John Kortink


Can you clarify the device/process/circumstances?

-jg

Reply to
Jim Granville

This was a discrete design with FETs that I was asked to test (at a customer site). The feedback loop was not particularly well done, so when metastability did occur, it was spectacular.

Cheers

PeteS

Reply to
PeteS

Our server did strange things, like filing my (long) posting twice...

The numbers I quoted are from experiments with Virtex-II Pro, a few years ago (XAPP094). So it's modern, but not very modern. Old horror stories of millisecond-long settling date from much older technology.

Reply to
Peter Alfke

Do you mean they built a D-FF using discrete FETs?!

I have seen transition oscillations (slow edges) cause very strange effects in digital devices, but I'd not call that effect metastability.

-jg

Reply to
Jim Granville


> D-FF, using discrete FETs?!

Amusing

I too have made flip-flops from discrete parts in the distant past. The metastable problem I encountered was due to slow-rising inputs on pure CMOS (a well-known issue) and was indeed part of the feedback path.

I remember making a D-FF using discrete parts only a few years ago because it had to operate at up to 30 V DC. I had to put all the usual warnings on the schematic page about setup/hold times etc.

There are times when the knowledge of just what an FF (be it JK, D or M/S) is comes in _real_ handy.

Cheers

PeteS

Reply to
PeteS


> D-FF, using discrete FETs?!

Well, I am not an IC designer (well, not regularly). Perhaps the answer is education - real education. The new crowd doesn't seem to understand the fundamentals that are key to successful design of any type, be it IC, board-level or any other.

Like the other dinosaurs, I've seen and done things most youngsters don't even consider, but the youngsters who were around when *I did them* were awed, and they wanted to learn, so I think there's hope.

I am sure the youngsters who were around when *you* did things astounding (to them) were awed too. Perhaps it's a matter of making sure they understand the limitations of their current knowledge :)

It's different in a way - we were *figuring out* what made things work; nowadays it's taken for granted. We need to make sure the kids understand that this knowledge is key to successful design.

Cheers

PeteS

Reply to
PeteS

> D-FF, using discrete FETs?!

There was a TV show perhaps 20 years ago whose name I do not remember. In it, the computer that ran the spacecraft (named Mentor, because the female of the group had thought of the name) refused to give information about using the transporter system.

It said 'Wisdom is earned, not given'

Cheers

PeteS

Reply to
PeteS

There is a difference: 60 years ago, a curious kid could at least try to understand the world around him/her. Clocks, carburetors, telephones, radios, typewriters, etc. Nowadays, these functions are black boxes that few people really understand, let alone are able to repair. Youngsters today can breathe life into a PC by hitting buttons in mysterious sequences... Do they really understand what they are doing or what's going on? "If the engine stalls, roll down the window" :-)

Here is a simple test, flunked by many engineers: How can everybody smoothly adjust the heat of an electric stove, or a steam iron? Hint: It is super-cheap, no Variac, no electronics. Smoke and mirrors? Answer: it's slow pulse-width modulation, controlled by a self-heating bimetal strip. Cost: pennies...
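In other words, the averaged power is just the fraction of each slow on/off cycle during which the bimetal contact is closed, times the element's full rating. A tiny illustration (the 1500 W rating is an assumption):

```python
FULL_POWER_W = 1500.0  # assumed element rating, for illustration
for duty in (0.25, 0.50, 0.75, 1.00):  # fraction of the cycle the contact is closed
    print(f"contact closed {duty:.0%} of the time -> ~{duty * FULL_POWER_W:.0f} W average")
```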

Well, the older generation has bemoaned the superficiality of the younger generation, ever since Socrates did so, a hundred generations ago. Maybe there is hope...

Peter Alfke


Reply to
Peter Alfke

Peter,

you're right that youngsters (my friends, for example) don't know how the computer really works, despite the fact that they use it almost every day. This secret is revealed only to the most curious youngsters who devote their time to it. I am sure that there are still youngsters who are willing to understand the secrets of a silicon brain. Now, in the age of the internet, information is easily available to anyone connected, be it a 6-year-old kid or a 100-year-old grandpa (no offence, Peter). The people with knowledge should be willing to give their knowledge to the masses (e.g. publish it on the net); there will always be someone who will accept it.

The problem is that a life is too short to explore all the interesting things. When you explore things, you usually begin at the surface of the problem. Then you remove it layer by layer, like peeling an onion. What to do when there are too many layers? Computer technology gains many new layers every year (an exponential number, according to Moore's law). I think that nobody can keep pace with these layers. So at some point you give up and study only the things you prefer. Youngsters are familiar with games, so some learn how to make one. Others prefer the secrets of the operating system - they build an OS. Some of them are interested in HW - like most of us in this newsgroup.

I, for example, am a very curious mechanical engineer. When I got bored in mechanical engineering, where the pace of development is nothing in comparison to the electronics industry, I also studied electronics. Now I prefer electronics for a simple reason - it is far more complex, hence gives me much more satisfaction when learning. Sometimes I realise that I am very weak in fundamentals, because I am missing the lectures in fundamentals of electronics. To be honest, I don't have a clue what "FF metastability" is and what the cause of it is. BTW: What is an FF? I imagine it like a memory cell. Despite my poor knowledge of fundamentals I am able to build very complex computer systems and write software for them... How can that be? Well, some people are devoted to fundamentals, some to the layer above that, others to the layer above that layer, and so on.

At the end there are "normal" computer users who do not want to know how the computer works; they just want to use it for Word, games, watching movies... I wouldn't worry about passing the knowledge to youngsters. If there is a need for that knowledge, they will learn it. So specific knowledge is learned by a small group of people, but Word usage and Email sending is learned by almost every youngster (in the Western world!). That's evolution.

Cheers,

Guru


Reply to
Guru

Moore's Law says that the transistor density of integrated circuits doubles every 24 months.


I think layers are added more linearly, maybe one every 5 years - like when upgrading from DOS with direct hardware access to Windows, with an intermediate layer abstracting the hardware access, or from the Intel 8086 to the Intel 386, with many virtual 8086 machines. So you can keep pace with the layers. You don't need to be an expert for every layer, but it is easy to learn the basics of which layers exist, what they do and how they interact with the other layers.

It is more difficult to keep pace with all the new components, like PCIe, new WiFi standards etc., but usually they don't change the layers or introduce new concepts. If PCs were built with FPGAs instead of CPUs, and if starting a game reconfigured part of an FPGA at runtime to implement special 3D shading algorithms in hardware, this would change many concepts, because then you wouldn't need to buy a new graphics card; you could install a new IP core to enhance the functionality and speed of your graphics subsystem. If it were too slow, just plug in some more FPGAs and the power is available for higher-performance graphics; but when you need OCR, the same FPGAs could be reconfigured with neural-net algorithms to do this at high speed.

There are already some serious applications that use the computational power of the shader engines of graphics cards, but most of the time those engines are idle, when you are not playing games. Implementing a CPU optimized for the current task in FPGAs would be much better.

--
Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
Reply to
Frank Buss

Frank, I was addressing a different issue: that the knowledge base of the fundamental technology is inevitably supported by fewer and fewer engineers, so that soon (now!) people will manipulate and use technology that they really do not understand. And that is a drastic change from 40 years ago. I think you understand German, to appreciate Goethe's words:

Was du ererbt von deinen Vätern hast, erwirb es, um es zu besitzen! ("What you have inherited from your fathers, earn it, in order to possess it.")

Cheers Peter


Reply to
Peter Alfke

The good news: the fewer people know the basics, the more you can earn when a customer needs it :-)

--
Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
Reply to
Frank Buss

But with a capitalistic economy one would expect that it will be supported by the proper number of engineers, and if that number is 'small' then those few will earn more money than if there were a 'lot' of those engineers. In any case, the appropriate amount of money will be spent on those functions....but of course nowhere is there a true capitalistic economy. Still, I expect that the knowledge/skill will in fact be transferred if there still exists a market for it.

Ummm.....the true 'fundamental' knowledge underpinning all electronics as we understand it today is contained in Maxwell's equations and quantum mechanics. I'd hazard a guess that engineers have been designing without true knowledge of both for far longer than 40 years.

What you're considering fundamental seems to be the things that you started your career with and that were thought to be 'fundamental' back then, like how flip-flops are constructed from transistors, why delays are important, bandwidths of transistors, etc. But even those things are abstractions of Maxwell and quantum....is that a 'bad' thing? History would indicate that it's not. The electronics industry has done quite well designing many things without having to hark back to Maxwell and quantum. There is the quote about seeing farther by standing on the shoulders of giants that comes to mind.

How far away one can get without knowledge of what is 'fundamental', though, is where catastrophes can happen. But productivity improvements over time are driven by having this knowledge somehow 'encoded', moving away from direct application of that fundamental knowledge so that the designers of the future do not need to understand it all....as stated earlier, there are many layers to the onion, too many to be fully grasped by someone who also needs to be economically productive to society (translation: employable).

There is a real danger of not passing along the new 'fundamentals' to the next generation, so that lack of knowledge of the old does not result in failures in the future. What exactly the new 'fundamental' things are is subjective....but in any case they won't truly be 'fundamental' unless they are a replacement for Maxwell's equations and the theory of quantum mechanics.

KJ

Reply to
KJ
