About back annotated simulations...

...what are they really used for? For several years I have developed and successfully used FPGAs (Xilinx, Altera, Actel) designed with rigorous static timing analysis, good constraint coverage, and only a few light back-annotated simulations. After some recent discussions, I'm looking for outside experience that can show me that back-annotated simulations in min/typ/max timings are not only better for design verification - I'm already convinced of that - but actually necessary, even when static timing analysis has been done. Is there something I missed? Thank you for your attention, and for your remarks.

Frederic.


Evaluating synthesis tools. Debugging synthesis problems.

If you're already convinced, why pose the question?

STA provides 100% timing verification for synchronous designs in much less time than a back-annotated simulation requires.
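Mike's claim rests on the fact that, for synchronous logic, STA reduces to an exhaustive per-path arithmetic check. A toy sketch of that check follows; all register names, delays, and the clock period are invented for illustration, and no real tool's API is implied:

```python
# Toy illustration of the setup check a static timing analyzer applies to
# every register-to-register path. All names and numbers are invented.

CLOCK_PERIOD_NS = 10.0  # assumed 100 MHz clock
T_CO_NS = 0.5           # clock-to-output delay of the source register
T_SU_NS = 0.3           # setup time of the destination register

# (source register, destination register, worst-case logic+routing delay, ns)
paths = [
    ("ctrl_reg",  "state_reg", 7.9),
    ("state_reg", "out_reg",   9.4),
    ("cnt_reg",   "cmp_reg",   8.8),
]

def setup_slack(data_delay_ns):
    """Margin left after clock-to-out, data delay, and setup are subtracted."""
    return CLOCK_PERIOD_NS - (T_CO_NS + data_delay_ns + T_SU_NS)

# STA flags every path with negative slack -- no test vectors required.
failing = [(src, dst, setup_slack(d)) for (src, dst, d) in paths
           if setup_slack(d) < 0.0]
for src, dst, slack in failing:
    print(f"setup violation: {src} -> {dst}, slack {slack:.2f} ns")
```

Because the analyzer enumerates every constrained path from the netlist rather than relying on stimulus, the coverage is complete for synchronous paths, which is what makes it so much cheaper than a back-annotated simulation.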

-- Mike Treseler


Suppose you were designing an FPGA that would control some part of an oil refinery. Or the landing of a probe to Mars. Or the engines on an airliner. Or the dosage of radiation given to a cancer patient.

Wouldn't you want to verify the design as correct in as many ways as you could?

After all, do you really know that the static timing analysis is correct and complete? Sure, you can check it twice, but what if you made the same mistake twice? Do you really know that synthesis and place-and-route were all done correctly? I've only once known this to fail, but once might be one too many.

A back-annotated simulation isn't foolproof; there are still plenty of issues, like crossing clock boundaries, that can't be checked this way. But it is a way to decrease the risk of something critical being missed, and sometimes that matters a lot.

-- Phil Hays Phil-hays at posting domain (- .net + .com) should work for email


Mike,

I definitely agree that STA provides 100% coverage, if you get all the constraints correct. But there are many designs where the constraints are incomplete, or may not be completely correct, and in those cases timing simulation provides some level of back-up test.

In designs with multicycle and cut (false) paths set, I think there's value in simulating to double-check that you didn't cut or multicycle something you shouldn't have. Such constraints are generally entered manually, and hence there is the possibility of an error in entering them. Of course, a timing simulation is not guaranteed to catch the problem even if you did make a mistake, but if you can afford the time, it may catch an issue.
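The manually entered constraints being described here are typically SDC commands like the following (a hedged sketch; the register name patterns are invented, and exact syntax varies slightly by tool):

```tcl
# Assumed example: a datapath given two clock cycles to settle.
# A typo in either pattern silently relaxes the wrong paths.
set_multicycle_path -setup 2 -from [get_registers crc_accum*] -to [get_registers crc_out*]
set_multicycle_path -hold  1 -from [get_registers crc_accum*] -to [get_registers crc_out*]

# Assumed example: a quasi-static configuration register cut from analysis.
# If the pattern matches more than intended, real paths go unchecked.
set_false_path -from [get_registers cfg_mode*]
```

A timing simulation exercises these paths with real delays, so a constraint that cuts or multicycles the wrong path has at least a chance of showing up as a functional failure.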

Most designs I see use asynchronous clears, and don't set a recovery and removal timing constraint to make sure the reset release occurs sufficiently far from a clock edge for the design to come out of reset cleanly. You can design things such that everything works even if you wouldn't pass a recovery & removal timing analysis, but again, if you don't know exactly what you're doing, or you make a mistake, you can get into trouble without this constraint. Timing simulation provides some level of back-up, although it's definitely a poor substitute for a disciplined reset strategy and a static timing analysis on it.

Regards,

Vaughn Altera [v b e t z (at) altera.com]


I agree a timing sim can be a useful diagnostic when design rules go wrong. However, it is not an alternative to design rules. A timing sim covers function and timing too poorly to make it an economical part of my standard development loop.

One of my design rules is no multicycle paths. I prefer pipelining.

True, but in a design composed of clocked processes and synchronized inputs, fmax is often the only constraint needed. It is impossible to verify synchronizers in simulation, so I don't try.

It might. When a design does not operate in the lab as it did in simulation, either I violated a design rule or I need to add one. In that rare event, a timing sim is a useful tool.

That's a case of a design rule violation. I'd rather fix the design than see if it happens to work this time.

I agree.

-- Mike Treseler


Hi,

I want to thank you for this discussion. It highlights how critical static timing analysis with strong constraint coverage is. The design I'm working on requires a high level of safety, as Phil Hays mentioned. That means the cleanest possible design, well constrained. But even with 100% constraint coverage I will also use back-annotated simulations, mostly to cross-check for a bug in the timing analyzer tool, or a miss in the timing constraints, as you mentioned from your own experiences.

Regards,

Fred Cezilly

Mike Treseler wrote:


One other thing worth doing, if you're using Quartus -- run the design assistant. You can access it from Processing->Start->Start Design Assistant.

It will check for things like an undisciplined reset strategy, missing timing constraints, a lack of synchronizers/FIFOs where you should think about having them, etc. It's a useful check on a design, a bit like lint for hardware. It will flag many of the things Mike lists as poor design practice, and the more of those you get rid of, the more you can trust that meeting your static timing requirements means your design will work without problems.

Regards,

Vaughn [v b e t z (at) altera.com]


Sure. But as a practical matter, there's never enough time or money to verify a design in as many ways as I could think of, even if the reliability of the product is super, super important. And were I to come up with a prioritized list of things I could do to ensure the correctness and reliability of a design, functional simulation with back-annotated timing would be near the bottom.

It's not that it's bad; it's just that I can almost always find a better payoff for my time.

Bob Perlman Cambrian Design Works


I concur with everything Mike has said here. I can count on one hand the number of times I've done a timing simulation on an FPGA design, and I have done literally hundreds of FPGA designs. It is too slow, and too easy to miss a timing parameter. Do a thorough static timing analysis and a functional simulation instead, and inspect any clock domain crossings very carefully.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com  
http://www.andraka.com  

 "They that give up essential liberty to obtain a little 
  temporary safety deserve neither liberty nor safety."
                                          -Benjamin Franklin, 1759

I agree that functional simulation with back-annotated timing is not at the top of the list. Whether I'd be happy leaving it off a high-reliability project would depend on the details of the project. There are other ways to improve the testing, and the best answer isn't always the same for every project. That keeps this stuff fun.

My list for a high reliability project would start looking something like this:

1) Design specification (I know, boring, but important).
2) Full static timing constraints, fully documented.
3) Good functional simulation with a self-checking test bench.
4) Good board or system level diagnostics.
5) As much run time in a realistic test environment as practical.
6) Run time with temperature and voltage extremes (stress testing).
7) Design documentation (really boring, I know, but it needs to be done).
8) Simulation reviews.
9) Code reviews.
10) Specification and documentation reviews (triple boring. I suggest frequent lattes with triple shots of espresso, if that helps).
11) Back-annotated simulations without timings.
12) Back-annotated simulations with timings.

I didn't add in board-level checks for signal quality, thermal design, clocking and the like. Just as important, but I'm focusing on the FPGA here.

The project details matter a lot. Would it be better to improve functional simulation, or to run a back-annotated simulation with/without timing? I think that depends on how well the other verification methods cover the design.

What a back annotated simulation attempts to verify is that:

1) All of the tools did their jobs correctly.
2) There were no subtle errors in coding.
3) If timing is included, that the static timing constraints were correct and complete, and that the tools applied them correctly.

A good board or system level test can find many of the first two types of issues. A margin test (temperature and voltage extremes) can find some of the third type.

Does my list miss any method you find important? Would you put things in different order?

-- Phil Hays Phil-hays at posting domain (- .net + .com) should work for email


Right you are, Bob. Functional simulation plus static timing analysis generally finishes faster and provides a more comprehensive result than depending on a timing sim. A timing sim is next to useless for actual timing analysis, because it only checks the particular timing scenario represented by the gate delays used. Chances are the gate delays in simulation have only a loose correlation to the actual delays in the actual device. While timing simulation can give you the same functional verification with a little added comfort that no grossly long delay path is killing the design (at least for the vectors your test uses), it doesn't provide as complete a timing analysis as a good static analysis does, and it runs more slowly.

--
--Ray Andraka, P.E.
