Xilinx Floating Point C Simulation aka VHDL/Verilog --> C Conversion?

Hi Guys,

I need a C simulation of some floating point cores from the Xilinx coregen. I'm thinking about automatically converting the behavioral VHDL code to C, e.g. with V2C or VHDL-2-C (found via the comp.lang.vhdl FAQ, part 3).

While I'm investigating this -- has anyone in this group already done something similar, or are there C simulations of the cores available somewhere?

Thanks, Simon

Reply to
Simon Heinzle

Hi Simon,

: I need a C simulation of some floating point cores from the Xilinx coregen.

YHM (you have mail).

At the risk of being deluged: if anyone else has a similar requirement, now would be a good time to ask. Reply here or email me directly with details of what you're looking for...

Cheers,

-Ben-

Reply to
Ben Jones

Hi Simon,

People simulate for two reasons ... to prove correctness, and to evaluate timings and performance. The translation from HDL to C is primarily simulation to verify correctness, especially where cores are involved. Timing/performance simulation requires a very tight integration with the target tool chain and architecture, something lost with generic C simulation of an HDL source.

Just what are you looking for?

John

Reply to
fpga_toys

Can't you co-simulate? That is, put a SystemC/FLI/VHPI wrapper around your C code and load that into your simulator? Alternatively, use shared memory/files/sockets to communicate between your C code and your simulator.
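If you go the file route, the C side can be as simple as dumping one hex word per line for the VHDL testbench to read back (e.g. with std.textio). A rough sketch; the file name and the one-word-per-line format are just my assumptions, not part of any standard flow:

/* Sketch only: dump single-precision stimulus values as 32-bit hex words,
 * one per line, for a VHDL testbench to read back with std.textio.
 * File name and format are assumptions, not part of any particular flow. */
#include <stdio.h>
#include <string.h>
#include <inttypes.h>

int main(void)
{
    const float stimuli[] = { 1.0f, 0.5f, 3.14159f, -2.0f };
    FILE *f = fopen("stimuli.vec", "w");
    if (!f)
        return 1;

    for (size_t i = 0; i < sizeof stimuli / sizeof stimuli[0]; i++) {
        uint32_t bits;
        memcpy(&bits, &stimuli[i], sizeof bits);   /* reinterpret the float as raw bits */
        fprintf(f, "%08" PRIX32 "\n", bits);       /* 8 hex digits = one 32-bit word    */
    }
    fclose(f);
    return 0;
}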

If you do translate, how are you planning to validate your translated model? Remember that you are converting from a concurrent to a sequential language, which might not be that easy...

Hans


Reply to
Hans

: People simulate for two reasons ... to prove correctness, and to
: evaluate timings and performance. The translation from HDL to C is
: primarily simulation to verify correctness, especially where cores are
: involved. Timing/performance simulation requires a very tight
: integration with the target tool chain and architecture, something lost
: with generic C simulation of an HDL source.

: Just what are you looking for?

I can think of a few things...

For example, you might want to create a model in C / Matlab / whatever of a wider system incorporating an FPGA pipeline, feed it lots of sample datasets or link it to a Monte Carlo simulation, etc., and look at the effect of precision/dynamic range in the context of the overall system.

A quick and dirty way of doing that is to do some bit masking etc. in C.
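Something like this, for instance (the choice of 16 kept mantissa bits is arbitrary, and plain masking only models truncation towards zero, not whatever rounding the actual core does):

#include <stdint.h>
#include <string.h>

/* Quick-and-dirty precision reduction: keep only the top 'kept_bits' of the
 * 23-bit mantissa of an IEEE-754 single by masking the rest to zero.
 * This truncates towards zero; it does not model any particular core. */
static float truncate_mantissa(float x, int kept_bits)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);               /* view the float as raw bits  */
    bits &= 0xFFFFFFFFu << (23 - kept_bits);      /* clear the low mantissa bits */
    memcpy(&x, &bits, sizeof x);
    return x;
}

/* e.g.  float p = truncate_mantissa(a * b, 16);  -- quantise each intermediate */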

--
cds
Reply to
c d saunter

Hi John and cds,

We have an arithmetic C model with standard IEEE single precision floats. However, in our FPGA implementation we use several different (custom) floating point formats, mainly due to the limited resources.

The C model is used to generate stimulus vectors for the HDL simulation. Of course we play tricks to quantize the intermediate results of the single-precision operations, but the results often differ (by a few bits) from the HDL simulation, which is quite annoying (and not practical for automated testing).
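To give an idea of the kind of trick I mean: masking truncates, whereas the hardware presumably rounds at the reduced width, and that is exactly where the few-bit differences tend to come from. A simplified round-to-nearest-even quantisation of the mantissa might look like the sketch below. This is an illustration only; custom exponent widths, subnormals, NaN/Inf and exponent overflow are ignored, and I don't claim it matches what the Xilinx cores actually do.

#include <stdint.h>
#include <string.h>

/* Illustration only: quantise the 23-bit mantissa of an IEEE-754 single down
 * to 'kept_bits' using round-to-nearest-even.  Custom exponent widths,
 * subnormals, NaN/Inf and exponent overflow are not handled, and this is not
 * claimed to match the rounding of the Xilinx operators. */
static float quantize_rne(float x, int kept_bits)
{
    uint32_t bits, drop, discarded, halfway, kept_lsb;

    if (kept_bits >= 23)
        return x;                                 /* nothing to discard */

    memcpy(&bits, &x, sizeof bits);
    drop      = 23 - kept_bits;                   /* mantissa bits to throw away   */
    discarded = bits & ((1u << drop) - 1u);       /* the bits being thrown away    */
    halfway   = 1u << (drop - 1);                 /* value of the first lost bit   */
    kept_lsb  = (bits >> drop) & 1u;              /* lowest kept bit (tie breaker) */

    bits &= ~((1u << drop) - 1u);                 /* truncate first                */
    if (discarded > halfway || (discarded == halfway && kept_lsb))
        bits += 1u << drop;                       /* round up (may carry into the exponent) */

    memcpy(&x, &bits, sizeof x);
    return x;
}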

In short: I'm just looking for a bit-accurate (non cycle-accurate) model of the Floating Point Operators from Xilinx.

Thanks,

Simon

Reply to
Simon Heinzle

Hi Hans,

Also see my other post.

I need floating point operators which produce the same result as the Xilinx FP Cores (bit-accurate, not cycle accurate). Shared memory/files/sockets sound like a lot of work.

I think the Xilinx cores don't include feedback paths, only a forward pipeline. That should not be too difficult to translate. Validation could be done by applying the same stimuli to both the HDL and the C simulation and comparing the outputs.
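The comparison step itself would then be trivial, e.g. something like this (the file names and the one-hex-word-per-line dump format are just assumptions about how the outputs get written out):

#include <stdio.h>
#include <inttypes.h>

/* Sketch of the comparison step: read two files of 32-bit hex words
 * (one result per line) and report the first mismatch, if any.
 * The file names and dump format are assumptions, not a Xilinx convention. */
int main(void)
{
    FILE *hdl = fopen("hdl_out.vec", "r");
    FILE *ref = fopen("c_model_out.vec", "r");
    uint32_t a, b;
    long n = 0;

    if (!hdl || !ref)
        return 1;

    while (fscanf(hdl, "%" SCNx32, &a) == 1 && fscanf(ref, "%" SCNx32, &b) == 1) {
        n++;
        if (a != b) {
            printf("mismatch at vector %ld: HDL %08" PRIX32 " vs C %08" PRIX32 "\n", n, a, b);
            return 2;
        }
    }
    printf("%ld vectors compared, no mismatches\n", n);
    return 0;
}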

Thanks, Simon

Reply to
Simon Heinzle

Simon Heinzle (snipped-for-privacy@inf.ethz.ch) wrote:
: Hi John and cds,

: We have an arithmetic C model with standard IEEE single precision floats.
: However, in our FPGA implementation we use several different (custom)
: floating point formats, mainly due to the limited resources.

: The C model is used to generate stimulus vectors for the HDL simulation.
: Of course we play tricks to quantize the intermediate results of the
: single-precision operations, but the results often differ (by a few bits)
: from the HDL simulation, which is quite annoying (and not practical for
: automated testing).

: In short: I'm just looking for a bit-accurate (non cycle-accurate) model of
: the Floating Point Operators from Xilinx.

I'll second that request in light of Ben Jones' posting.

Simon, I don't know if you are aware of GHDL. It's a VHDL front end for GCC that spits out object files, so you may be able to compile the Xilinx behavioural model into a form you can link C code against...

What sort of performance would you be happy with compared to machine native floats?

Cheers, Chris

Reply to
c d saunter

Hi Chris,

Ditto, I really hope Xilinx will develop the C models.

Thanks! I'll definitely have a look at that one.

Hmm, it doesn't need to be very fast, but it should be faster than the HDL simulation. Compared to native floats, I think there will be a huge performance drop (~10 to 100 times slower) when doing it bit-accurately.

Best, Simon

Reply to
Simon Heinzle
