A spectre is haunting this newsgroup: the spectre of metastability

Peter Alfke wrote:

Strangely enough, this newsgroup is the only place where I have seen this aforementioned ghost in 10 years of digital design. (Yeah, I know Peter, these 10 years make me look like a baby compared to your experience ;o)

[...]

Got bitten once there. Never twice.

Nicolas

Reply to
Nicolas Matringe

Well, there's fundamentals and there's fundamentals :)

One I see missing is an intuitive feel for transmission lines. For years, new engineers were churned out with the mantra of 'everything's going digital and we don't need that analog crap', but when edge rates are significantly sub-microsecond, everything's a transmission line.

Certainly it has enhanced my employability that I learned those things both in theory and in hard-earned practice, but far more people need to learn them in a world of ultra-high-speed interconnects. One cannot always trust software simulations,[1] quite apart from the issue of setting up a layout.[2]

This is a fundamental, at least imo, and it doesn't seem to be getting the attention it deserves.[3]

Other things could be cited, of course. Using a technology one does not understand is all well and good while it works. When it doesn't, the person is stumped because they don't understand the underlying principles.

[1] The most amusing software bug I ever had was an updated release of Altera's HDL tools that synthesised a SCSI controller down to a single wire. That was the predecessor to Quartus, and it happened in '98.
[2] Really high-speed systems have the layout defined for the interconnect first (for best signal integrity and EMC), which then determines part placement; that is almost 180 degrees out from standard layouts.
[3] This is a huge growth industry for those with the requisite knowledge; see e.g. Howard Johnson et al. They'll give a one-day seminar at a company, plus printed materials, for a few tens of thousands of dollars.

Cheers

PeteS

Reply to
PeteS

This was EXACTLY the point I wanted to make...

There are two issues here - one is a valid point about it being useful to understand one or two layers below the abstraction you're working at, and a tangential one which is merely a placeholder for the vague regret we all feel that very much younger people are capable of doing something like our jobs, using a changing set of skills.

Technological progression, in pushing down what's fundamental and up what's possible, RELIES on people being able to concentrate on only a limited number of layers in the stack.

The guys at CERN can't spend their time worrying about how you would use a boson to make a better automobile door handle any more than people programming desktop computers should worry about electronics.

One can endlessly and enjoyably debate which particular things are 'fundamental' to solving a particular task, but one shouldn't fool oneself that there's a right answer.

Will

Reply to
Will Dean

I agree, and I was not making a moral statement. Just that the ranks of engineers who can debug low-level (fundamental) problems are shrinking. Soon only IC designers will understand these things (because they are still their livelihood), since everybody else has "moved up". (I have a son who works in software R&D, and we have very limited common ground in electronic things.)

I was, however, bemoaning the fact that so many things in our lives have become black mystery boxes that defy "healthy curiosity". And that phenomenon is new, within the last 50 years, a short time in the evolution of technology.

Peter Alfke


Reply to
Peter Alfke

I consider myself to still be a youngster. I'm only 24 years old and relatively recently out of college, but I find nothing you mention here foreign. This stuff is still being taught in schools (though I might argue my school didn't do a great job of it). The reality of it all is that low-level electronics remains useful. I have never once regretted understanding how a transistor works. I have recently been looking at flip-flop designs since my company was having a hard time meeting timing. While I'm not an expert, others are, and I've yet to meet a person who knows this kind of stuff and doesn't want to share that knowledge.

The kinds of things I deal with on perhaps a monthly basis are:
* What are the costs of a transmission-gate input flip-flop versus a CMOS input?
* Can Astro synthesize a 4 GHz clock tree?
* How much drive would it take to overpower the drive of another cell (multiple outputs tied together)?
* What are the possible resolve states when you have a race on an async set/reset flop?

People still have to solve these problems. They aren't going away. The younger engineers still face these.

Now I admit that I do work as an IC designer, but ICs are here to stay. They may become fewer, but as long as they exist and get more complicated, plenty of people will be employed in that industry.

My point to add to this is that many older engineers have difficulty grasping new ways of operating. Convincing experienced engineers that synthesis tools actually work can be like pulling teeth sometimes. Just the other day, some engineers were ranting about some code that a contractor wrote that was very very behavioral. They were complaining about how that was killing timing and adding 10s to 100s of levels of logic. They hadn't tried it out. I ran it through the synthesizer and it was *faster* than the low level code.

I don't see knowledge of the really low-level stuff going away. In fact I see it increasing. Things like quantum physics and Maxwell's equations are getting used more and more to make electronics work. TCAD engineers live in this realm, and TCAD is getting used more and more for things like process definition and modeling. What I see happening is the rift between the low-level process/cell designers and the logic designers growing, as the logic designers get more high level and the process/cell designers have to get closer to the real physics of the system. Not all of the knowledge is necessary for all parties. The fact is that if a good library is present (and nothing super funky is in the design), a logic designer doesn't need to know electronics. They simply need to know how to work with the models that are employed by the tools.

-Arlen

Reply to
gallen


But you haven't established that this is a 'problem'. Shrinking to fit the number of slots that the world needs is economically sound. Shrinking below that will cause a shortage which will cause the price of people who have those skills to go up.

But maybe they will be the only ones who use this on a day-to-day basis and have an actual need to know it. Can you say that you understand the operation of flip-flops and can demonstrate this using equations at the quantum mechanical level, or can you even compute the fields that will be produced by that changing flip-flop using Maxwell's equations? Maybe you can, but I'll hazard to say that if you were forced to do this in front of someone skilled in either or both of these theories, probably not.

But maybe he is very skilled in other areas....keep in mind Adam Smith and the division of labor in economic theory.

My point in the earlier post was that it has gone on for much longer. The true fundamentals of electronics haven't changed in roughly a century (Maxwell and quantum), and yet I would hazard to say that the number of engineers designing electrical or electronic equipment who directly use these theories is pretty close to zero... and would speculate that that is also roughly the number of engineers who directly used them 10, 20, 30, etc. years ago as well.

I don't think many things defy healthy curiosity, but as designers get more and more productive, there is more and more knowledge one must accumulate to satisfy that curiosity completely: not because the fundamentals are changing, but because the approximations and shortcuts used above those fundamentals to realize those productivity improvements grow every year. One can still do it, one can still specialize in any of those areas if one chooses to, and there will still generally be a market for people who have accumulated more of that specialized skill... if it is still relevant to the world at large.

KJ

Reply to
KJ

Kind of missed how you factored in the stress associated with the interview process. It's easy to sit on the side of the questioner in that situation, not so easy the other way around. Sometimes the weak answers have nothing to do with the skills of that person but reflect how that person tenses up in stressful situations. If they do, then maybe they're not appropriate for a client-facing position, but maybe you're not interviewing someone for that type of position either.

Being able to think quickly on your feet is a skill that can help land that job offer. It can also come in handy when on the job...but along with other skills too.

What about them 'people skills'? The arrogant ones who know the nuts and bolts and flew through your test might be rather disruptive in the work place. Generally they become sidelined because of their arrogance...or work their way up the management chain to become CEO.

This made me ponder why most of the posts about problems with tools are centered around brand X. 'Most' here meaning that the percentage of brand X questions/complaints appears to be far above brand X's market share. It could just be my perception of the posts though.

KJ

Reply to
KJ

KJ wrote: Please ignore my previous post in the section after....

and accept my apologies for the inference.

KJ

Reply to
KJ

Is it possible that this situation could occur if register duplication is enabled (to improve timing) in the tools (e.g. XST)?

If so, is there a method to mark the synchronizer in HDL to ensure it is never automatically duplicated?

Tom

Reply to
Tom

The first synchronizing flip-flop on an async input will experience metastability, sooner or later. Whether that metastability lasts long enough to cause a functional problem depends on how the output is used. If it becomes a causal input (i.e. clk or async rst) to something else, it can become a problem very quickly (read: "don't do that"). If there is very little timing margin to the next non-causal (i.e. D, CE, or sync rst) input(s), then it can also cause problems fairly quickly.

The admonishment to "add a second flop" is usually an attempt to create a high-slack/margin path to the next clocked element, but it may not be sufficient. Ideally, that path (or any path out of the first synchronizing flop) should be constrained to be faster than the clock period would indicate, to force the synthesis/P&R process to provide extra timing margin (slack), in case metastability delays the output a bit. The more slack/margin, the more immunity to metastability a design has.

Also, the first synchronizing flop on an input should have a no-replicate constraint on it, just in case the synth/P&R tool wants to replicate it to solve fanout problems from that first flop.
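To make that concrete, here is a minimal sketch of the two-flop synchronizer in Verilog. The module and signal names are just illustrative, and the ASYNC_REG attribute shown is the Xilinx-style spelling of a "keep these flops together, don't touch/replicate" hint; the exact attribute names and the way to attach a no-replicate constraint vary by tool and version, so check your own tool's documentation.

    // Minimal two-flop synchronizer sketch (Verilog). Attribute is
    // Xilinx-style and illustrative only; verify against your tool.
    module sync_2ff (
        input  wire clk,       // destination clock domain
        input  wire async_in,  // asynchronous input signal
        output reg  sync_out   // version of async_in safe to use in the clk domain
    );
        // First flop: the one that may go metastable. Keep its output path
        // to the second flop as short (high-slack) as possible.
        (* ASYNC_REG = "TRUE" *) reg meta;

        always @(posedge clk) begin
            meta     <= async_in;  // may go metastable; gets a full cycle to settle
            sync_out <= meta;      // no logic between the two flops
        end
    endmodule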

Also recognize that even async rst/prst inputs to flops must be properly synchronized with respect to the deasserting edge, since that edge is effectively a "synchronous" input, subject to setup/hold requirements too.
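For the reset case specifically, a common pattern is to assert the reset asynchronously but release it synchronously through a couple of flops. A minimal sketch, with illustrative signal names (active-low reset assumed):

    // Reset synchronizer sketch: async assertion, synchronous deassertion.
    module reset_sync (
        input  wire clk,
        input  wire arst_n,      // raw asynchronous reset, active low
        output wire rst_n_sync   // reset for the clk domain
    );
        reg [1:0] r;

        always @(posedge clk or negedge arst_n) begin
            if (!arst_n)
                r <= 2'b00;          // assertion takes effect immediately
            else
                r <= {r[0], 1'b1};   // deassertion ripples through two flops on clk
        end

        assign rst_n_sync = r[1];
    endmodule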

Whether a problem is caused by metastability or by improper synchronization, it is solved by the same proper synchronization techniques. It is true that metastability has been reduced significantly in the newer, faster FPGA devices, but it is not totally eliminated, and the higher speeds and tighter timing margins of designs implemented in these FPGAs at least partially offset the improvements in the flops themselves.

Follow the guidelines in the app notes for simultaneous switching outputs, and properly ground/bypass the on-board PDS, and ground bounce will not be an issue. Once it becomes an issue, there are numerous "creative" solutions to the problem, but they are best avoided up front.

Andy


Reply to
Andy

In theory it could.

I am sure there is but I haven't used a Xilinx part for quite a long time. Austin or Peter (or many other readers) could give you a more accurate answer.

Nicolas

Reply to
Nicolas Matringe

You may be right, but I think it's unlikely. If you're re-synchronizing properly then there are two levels of FFs, and the front-end FFs have a fanout of exactly one net each. So there is nothing to be gained by duplicating them, even if the back-end stage has a high fanout (the second stage would be duplicated instead).

In fact, register duplication rarely makes timing better; in many high-performance pipelined designs it can make it much worse (explanation available on demand).

Yes. You can use the REGISTER_DUPLICATION constraint in your source code or XCF file to specifically turn this feature on or off for a specific entity or module.
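For reference, a sketch of what that might look like, assuming XST-era syntax from memory (verify the exact spelling and legal values against your version's constraints guide; the module and signal names are just placeholders). Duplication is disabled only on the front-end flop, so the tool is still free to replicate elsewhere to fix fanout:

    module sync_nodup (
        input  wire clk,
        input  wire d_async,
        output reg  q
    );
        // Attribute form, applied directly to the first synchronizer flop:
        (* register_duplication = "no" *) reg meta;

        always @(posedge clk) begin
            meta <= d_async;
            q    <= meta;
        end
    endmodule

    // XCF-file alternative, scoped to the module (again, check exact syntax):
    //   MODEL "sync_nodup" register_duplication = no;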

Cheers,

-Ben-

Reply to
Ben Jones

I have no problem using synthesis tools, although I have a healthy skepticism of *any* software-based tool (which does not mean I won't use it or its conclusions, merely that all non-trivial software has bugs).

As to a logic designer not needing to know the electronics: that's only true if said designer is only designing where synthesis (or the models that will be used) happens to be available. I've yet to see an LSI or larger device where the I/O pins could be directly attached to 48V (and be cheaper than the discrete alternative), yet that is a pretty standard logic design issue in some industries.

A POR indicator circuit for a 24V vehicle, for instance, could be constructed from standard cells, but at some point we meet the (very nasty) 24V system (which can go up to 80V during load dump and droop regularly during engine cranking). Of course, when designing power supplies (which I also do quite regularly) I expect those sorts of challenges.

Incidentally, the 'logic designer' syndrome you mention is precisely what I was railing against in an earlier post; it's shortsighted and foolish. A logic designer that can do logic but not electronics is _not_ an electrical/electronics engineer - they are either a software engineer or a mathematician.

Kudos to you for learning the low level parts.

As with Peter Alfke, I too am very particular about who we hire. Generally I would prefer not to hire anyone rather than hire someone who doesn't have the urge to seek out answers and think for themselves. I am fully aware that such people _will_ make mistakes (it's an occupational hazard), but I would prefer that to hand-holding.

I worry that too few people who call themselves electrical/electronic engineers actually know enough about physical-layer engineering.

Cheers

PeteS

Reply to
PeteS

I guess I'll bite and see if my understanding is close to what you have in mind:

My feeling is that register duplication could worsen a design with combinatorial logic followed by a flip-flop. This means either that the combinatorial logic has to be duplicated (which would enlarge the design and perhaps slow down the circuit due to extra routing), or that only the flip-flop is duplicated, which will certainly demand extra routing, since it is normally possible to place an FF directly after a LUT using only high-speed dedicated routing.

On the other hand, I can't really see that register duplication will make the performance much worse (unless the synthesizer makes very bad choices of course) so you might have something else in mind.

/Andreas

Reply to
Andreas Ehliar


Got it in one. The "enlargement" problem isn't much of a problem, since in FPGA technology if you need to allocate a new register then you basically get the preceding LUT for free. However, it's the "extra routing" problem that's the killer.

Say your design is supposed to run at 400MHz (2.5ns clock period). The extra route from the combinatorial output of the LUT to the input of the "extra" register added by the replication process may be 500ps. That's 20% of your cycle budget! Often, it's more like 800ps... of course if your clock speed is only 100MHz, this is much less of an issue.

There may be a few scenarios in which register duplication really is a good thing, but in my experience synthesis tools don't always find them. So I tend to just leave this "feature" turned off.

Cheers,

-Ben-

(Whoops, off topic...)

Reply to
Ben Jones


In a talk at the Computer History Museum that Gordon Moore gave on the history of Moore's Law, he was asked about what influenced his interest in science. His answer included playing with chemistry sets and blowing things up.

He was later asked about some of the reasons for the decline in US science education. His answer was that nowadays kids couldn't buy a "real" chemistry set and blow things up.

IMHO. YMMV.

--
rhn A.T nicholson d.0.t C-o-M
Reply to
Ron N.

I think you'd normally use duplication to reduce routing congestion. On a chip I was on recently, the vendor wouldn't take a netlist that had any fanout cones with more than 2500 endpoints, and register duplication was the only practical fix. I used Teraform (deceased?) to measure the cones.

Evan

Reply to
Evan Lavelle
