Sporadic simulation results with ModelSim

Hello all,

I'm struggling with an issue whose root cause I can't understand.

When simulating my back-annotated design with ModelSim, I get unexpected behavior when using a simulation resolution of 1 ns, but no errors when using a resolution of 1 ps.

My design runs at 1 MHz (so I would expect a 1 ns resolution to be amply sufficient). The part causing trouble is a wrapper around an SRAM instance (an Actel RAM512x18 component on an Actel ProASIC3 FPGA).

I've got the exact same component instantiated in the exact same wrapper simulating fine on an Actel IGLOO FPGA. I am aware that place and route may have produced significantly different results on the two FPGAs, and that having the design run smoothly on one FPGA doesn't prove anything.

Still, I can't figure out why ModelSim would not simulate identically with a 1 ns and a 1 ps resolution.

Last but not least, I get no warnings from ModelSim (no glitches found).

If any of you has an idea of what could be happening here, I would be glad to hear it.

Regards

Reply to
JB

The selected technology has delays below 1 ns; your clock frequency is not the only thing to take into account. Suppose the clock tree between two registers differs by two buffers of 100 ps each, giving a skew of 200 ps, while the datapath contains one cell with a 500 ps delay. At 1 ps resolution the clock arrives 300 ps before the data, as intended. At 1 ns resolution those sub-nanosecond delays get pushed onto the 1 ns grid: the skew can become 2 ns against 1 ns of data delay, so the data now arrives 1 ns before the clock, a violation that does not exist in the real circuit.
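Thomas's numbers side by side (assuming, as in his example, that each sub-nanosecond delay ends up as 1 ns on the coarse grid):

```
                     1 ps resolution           1 ns resolution
clock-tree skew      2 x 100 ps = 200 ps       2 x 1 ns = 2 ns
datapath delay       500 ps                    1 ns
result               clock leads data          data leads clock
                     by 300 ps (correct)       by 1 ns (violation)
```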

regards Thomas

Reply to
Thomas Stanka

Thanks, I will use 1 ps resolution then. I still find it weird that ModelSim (or the VITAL libraries) does not warn in such cases.

Regards

Reply to
JB

Recheck the transcript right at the start, before running, for the following type of message:

The minimum time resolution limit (1ps) in the Verilog source is smaller than the one chosen for SystemC or VHDL units in the design. Use the vsim -t option to specify the desired resolution.
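If that message does appear, the resolution can be forced when loading the design. A sketch of the vsim invocation ("work.tb" is a placeholder for your top-level unit):

```tcl
# Force a 1 ps simulator resolution regardless of the timescales
# compiled into the individual design units:
vsim -t 1ps work.tb
```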

KJ

Reply to
KJ

I don't see evidence that this is your problem, but a classic "back-annotated sim time-resolution problem" that often arises is mixing a testbench environment made up only of idealistic delays (i.e. delays based only on the temporal ordering of assignments made by your simulator) with your back-annotated, "real-world" worst-case delays. For instance, if you attempt to clock a signal from a simple clocked assignment statement into a back-annotated reg with a real setup time assigned to it, it will always be captured one cycle behind. A typical fix is to add an artificial delay to the testbench assignment statements that interact directly with the back-annotated code, so that the setup times are satisfied.
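A minimal Verilog sketch of the fix John describes (all signal names are hypothetical; the 2 ns figure assumes the annotated setup time is below 2 ns and the clock period is well above it):

```verilog
`timescale 1ns / 1ps

// Before: driving the DUT input exactly on the clock edge races against
// the back-annotated setup time, so the DUT captures it one cycle late.
//   always @(posedge clk) data_to_dut <= next_value;

// After: an artificial intra-assignment delay gives the back-annotated
// register its setup margin (2 ns here, assuming setup < 2 ns << Tclk):
always @(posedge clk)
  data_to_dut <= #2 next_value;
```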

- John

Reply to
jc

You should probably use correct `timescale directives.
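For reference, a `timescale directive pairs a time unit with a time precision; the choice of 1 ns / 1 ps below is just an example:

```verilog
// `timescale <time_unit> / <time_precision>
// A 1 ns unit keeps delay expressions readable, while a 1 ps precision
// keeps sub-nanosecond back-annotated delays from being quantised away.
`timescale 1ns / 1ps
```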

Thanks Shyam

Reply to
shyam

Well, I re-checked and there is no such warning in the transcript... Maybe ModelSim does not warn if a timing is extracted from an SDF back-annotation rather than "hard-coded" into the libraries, which I guess is the case here.

Reply to
JB

I simulate my design as a black box, with delays that can be quite random and are never synchronized with the clock. ModelSim may then detect timing violations on the anti-metastability flip-flop inputs, but the design is meant to be robust to that.

Anyway, simulating at 1 ps solved my problem.

I called Actel to ask what the optimal time resolution was to gain simulation performance without trading away accuracy, and the only answer I got was "generate a simulation through Libero and look in the generated .do file for the requested time resolution", which I did, and it was 1 ps.

I find it weird that this advised resolution can't be found in any Actel documentation I've searched, except one application note which requires 1 ps only when using a PLL (which was not my case).

Thanks anyway for your advice.

Reply to
JB


Do you have all messages enabled? Under simulation run-time options there are choices to disable certain levels of messages ('note', 'warning', 'error').

KJ

Reply to
KJ


I launch the simulation from a command line; the only disabled messages are the VITAL glitch messages (+no_glitch_msg). From what I've read, (mostly) all messages are enabled by default.

Reply to
JB

The timescale in your SDF is most likely 100 ps. If you consider that values with two digits after the decimal point are used within the SDF, you can see that you need 1 ps resolution to model the timings provided. I would not expect any vendor to state a minimum timescale explicitly, but you need to use a decent resolution when simulating in order to get correct results. It is basic engineering that you must justify every simplification you apply to a model, and setting the resolution of a netlist simulation to 1 ns is a major simplification with today's technologies.
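A hypothetical SDF fragment illustrating Thomas's point: with a 100 ps TIMESCALE, a delay written with two digits after the decimal point only lands exactly on a 1 ps grid:

```
(DELAYFILE
  (TIMESCALE 100 ps)               // values below are multiples of 100 ps
  ...
  (IOPATH A Y (2.34:2.34:2.34))    // 2.34 x 100 ps = 234 ps -> needs 1 ps
  ...
)
```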

You should ask yourself on what assumption you came to the conclusion that 1 ns resolution is good enough for netlist simulation.

best regards Thomas

Reply to
Thomas Stanka

You're probably being overly harsh on JB, as well as being guilty of the same thing you say he is lax about. After all, what is your definition of "decent resolution", if you yourself don't expect a vendor to give you a minimum timescale... which they did, by the way, in this case, by telling JB the method to determine it.

KJ

Reply to
KJ

Do you need to do back annotated simulation?

If a functional simulation works, and the design is properly constrained and passing static timing analysis, then I'd expect it to work solidly.


Nial.

Reply to
Nial Stewart


This is probably worthy of its own thread, but here goes. Gate-level sim does do its part in verifying the translation of RTL into gates, a phase in the EDA flow not covered by functional sim or STA. There are a number of things that can go wrong down the physical path, from quirky RTL coding that doesn't get synthesized as intended (e.g. inferred latches) to inconsistencies in core (RAMs, FIFOs, DSP units, clock managers, etc.) configurations. Having said that, gate-level sim is slow, painful, and hard to bring up, and many designers elect not to do it unless there is a compelling reason, such as strange lab results. Even then, one might cook up a ChipScope session and perhaps focus on RTL vs. gate schematics to isolate the fault.

I personally depended on it for years for all initial spins, then didn't bother for subsequent code mods because I was confident that I had already verified the physical translation. Since I've been using the same proven HDL and core integration methodologies for years, I'm confident enough not to spend the time for gate-level sim (and delay time to market even further).

Maybe this goes without saying, but should a designer elect not to go down the gate-level sim path, that designer should pay close attention to all warnings logged by the synthesis and P&R tools. The goal should be to eliminate those warnings if possible. The tools have become very adept at recognizing questionable design practices.

- John

Reply to
jc

At some point you have to trust the tools.

I haven't done a gate-level sim for 15 odd years (showing my age) when the group in which I was working (Nortel, Belfast) decided it was a waste of time. I later worked in Nortel Harlow then Agilent in South Queensferry (both big professional, experienced R&D groups) and AFAIK no-body was doing this as a matter of practice.

I don't think I've ever seen an instance where the synthesised and P&R'd results didn't match what I was expecting.

Having written this I'm probably going to get stung on my next project!

Indeed.

Nial.

Reply to
Nial Stewart

Maybe to a certain extent. There are many instances where I think it's actually good practice NOT to trust the tools so much. The hardware designer who wants to reap the benefit of a "high-level descriptive language" by quickly patching together some code using wide comparators and counters while liberally sprinkling it with "x", "+", and "-" operators, then expects to press the button and get the tools to run at 300+ MHz at 80% capacity inside a large die, is in for a rude awakening. In the end, tools are just that: tools.

I hear what you are saying about general design flow. A seasoned designer shouldn't need gate-level sim if he simply heeds the tool warnings; I too ceased using gate-level sim a while back. But sloppy design practices (there are plenty of novices out there), such as those related to multiple clock domains and async interfaces, can lead to a set of gates that functions differently from the RTL (and hence from functional sim). Knowing that this can happen is golden when trying to isolate such bugs. (Of course, asynchronous vulnerabilities aren't guaranteed to be exposed via gate-level sim either, another argument against using it!)

- John

Reply to
jc

Well, if I could avoid post-P&R simulation, I would.

But the thing is that in aeronautics, tools must be assessed. And there is no way vendors will assess their synthesis/P&R tools (if that is even possible). That said, even gate-level simulation is not sufficient to check the result of P&R, as:

- The simulator itself is not assessed and can produce incorrect simulations (this is sometimes addressed by using two simulators)

- The back-annotated design may not reflect the programming file content

To address those flaws, hardware tests have to be performed, but even those might prove wrong (due to tester distraction, for example).

The goal of the aeronautics processes is to secure each step and catch most errors at each level.

I hope I might one day avoid gate-level sim, as it really is a pain, but it is still required by the aviation certification authorities.

Reply to
JB

I'm set up to easily (if time-consumingly) do "gate-level" (post-PAR and post-synth) simulations.

They come in handy for first spins, for making sure that the reset logic works OK and the DCMs come up.

Also useful for gathering power consumption data.

And our process is to do them before a "final release", which is fine as they're easy to do.

Having said that, the last two "bugs" I've found at the final release stage have been in the *VHDL netlister*. Not my code. Not the synthesis. The netlister - the bit I'm not really wanting to test! Grrr :)

Cheers, Martin

--
martin.j.thompson@trw.com 
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.co.uk/capabilities/39-electronic-hardware
Reply to
Martin Thompson

Yes, a good check-off item for release, but not for the code/sim/edit loop.

Gate-level is more a test of the tools than of the design.

There can also be an error in the sim code or the sim configuration.

-- Mike Treseler

Reply to
Mike Treseler

Absolutely. Fast design loops are the key to productivity (IMHO).

Agreed.

Indeed it can. One day I hope for it to be so, it's an awful lot easier to fix when it's my code :)

Cheers, Martin

--
martin.j.thompson@trw.com 
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.co.uk/capabilities/39-electronic-hardware
Reply to
Martin Thompson
