Why do VHDL gate level models simulate slower than verilog

Hi Group, I am not sure if this is the right group to post my query on, but I would appreciate any kind of information. I am relatively new to VHDL and am trying to understand why VHDL gate-level descriptions simulate slower than Verilog models. I was told that it was because of how the VHDL model gets evaluated (delay models), which makes it slower. I didn't quite follow this, and if somebody in the group could point me towards a more detailed explanation, it would be great. I would also like to know why we see better VHDL performance for behavioural descriptions (as compared to Verilog behavioural descriptions). I am sorry if this has been discussed previously.

Thanks, Abilash.

Reply to
abilashreddy

Because the Verilog libraries are faster and more efficient for simulation, apart from the fact that the algorithms used for simulating Verilog netlists are faster.

Reply to
Neo

While some early VHDL simulators were way slower than early Verilog simulators, even up to 50x, there is no reason for this today.

Traditionally Verilog was/is used for the high-end ASIC flow, years before FPGAs were around, and chip guys insisted on and paid for the fastest simulators money could buy. Since VHDL was much less used for ASICs then, the EDA attention went to Verilog.

After synthesis and STA took over, it should be more of a moot point.

Look at ModelSim and others that support both languages: if you write an entity in either language, it is compiled into the same internal representation, so it simulates at the same speed.

If you are using a free simulator, you get what you pay for: free simulators often perform far slower than leading commercial ones.

Which simulators are you comparing?

johnjakson at usa dot com

Reply to
JJ

Just a random guess, but I would have thought that it could simply be the amount of logic - there are a lot more elements to simulate post-synthesis, as the design is broken down into LUTs and 'flops. If you simulate at a higher level, then you can skip simulating all those elements and simply simulate the behaviour of the language.

For instance (making a few assumptions) - if you have a 32-bit counter, you could model it in a simulator as a uint32_t, taking one instruction on a 32-bit processor to add a number. If you break that down into one-bit operations spread across 32 separate elements, then it would intuitively take longer, unless there was some optimisation going from netlist to simulation. A rough sketch of the difference is below.
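To make that counter example concrete, here is a small C sketch (hypothetical names, not how any real simulator is implemented) contrasting the single 32-bit add of a behavioural model with the 32 individual cells a post-synthesis netlist forces the simulator to evaluate. A real event-driven simulator would also schedule events and delays for each of those cells, which widens the gap further.

/*
 * Rough illustration only: behavioural vs gate-level work per clock edge
 * for the same 32-bit counter. Function names are made up for this sketch.
 */
#include <stdint.h>
#include <stdio.h>

/* Behavioural model: the counter is one signal, updated with one add. */
static uint32_t count_behav;
static void behavioral_tick(void)
{
    count_behav += 1;               /* one operation per clock edge */
}

/* Gate-level model: 32 one-bit storage elements plus increment logic,
 * each evaluated individually, the way a netlist of flip-flops and
 * LUTs would be after synthesis. */
static uint8_t q[32];               /* flip-flop outputs, one per bit */
static void gate_level_tick(void)
{
    uint8_t carry = 1;              /* incrementer carry-in */
    for (int i = 0; i < 32; i++) {  /* 32 cells evaluated, not 1 */
        uint8_t d = q[i] ^ carry;   /* half-adder sum feeding the D input */
        carry = (uint8_t)(q[i] & carry); /* ripple carry to the next cell */
        q[i] = d;                   /* flip-flop captures on the clock edge */
    }
}

int main(void)
{
    for (int cycle = 0; cycle < 5; cycle++) {
        behavioral_tick();
        gate_level_tick();
    }
    /* Reassemble the gate-level bits to show both models agree. */
    uint32_t count_gate = 0;
    for (int i = 0; i < 32; i++)
        count_gate |= (uint32_t)q[i] << i;
    printf("behavioural=%u gate-level=%u\n", count_behav, count_gate);
    return 0;
}

Both functions reach the same count, but the gate-level version does roughly 32 times the work per clock edge before any event scheduling is even considered, which is the intuition behind gate-level simulations being slower regardless of language.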

My 2c,

Jeremy

Reply to
Jeremy Stringer
