Simon Heinzle (email@example.com) wrote:
: Hi John and cds,
: We have an arithmetic C model with standard IEEE single precision floats.
: However, in our FPGA implementation we use several different (custom)
: floating point formats, mainly due to the limited resources.
: The C model is used to generate stimuli vectors for the HDL simulation.
: Of course we play tricks to quantize the intermediate results of the single
: precision operations, but the results often differ (in a few bits) from the
: HDL simulation, which is quite annoying (and not practical for automated
: testing).
: In short: I'm just looking for a bit-accurate (non-cycle-accurate) model of
: the Floating Point Operators from Xilinx.
I'll second that request in light of Ben Jones' posting.
Simon, are you aware of GHDL? It's a VHDL front end for GCC that spits out ordinary object files, so you may be able to compile the Xilinx behavioural model into a form you can link C code against...
What sort of performance would you be happy with compared to machine native floats?