I'm working on a project with a number of non-EE, non-CE types. A CE mocked up a real-time simulation of the UI, which will be done in the FPGA. I am writing the HDL, which everyone is nervous about because they know nothing about FPGAs and have heard too many stories, I guess.
When the lead heard I was running things in a simulator he asked if it could be interacted with in real time. Of course, that isn't practical... or is it? This design is using a 33 MHz clock, but very little is being done at full clock rate, mostly counters to generate enables for the much slower circuitry. A PWM output is running at 32 kHz, the button press detect is running at 1 ms and other circuitry is running at 10 ms.
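As a sanity check on those rates, the divider ratios are easy to work out. A quick back-of-envelope sketch (the terminal counts here are illustrative; the real design's values depend on the exact clock and duty-cycle requirements):

```python
# Back-of-envelope enable-divider ratios for a 33 MHz master clock.
# These counts are illustrative, not the actual design's values.
CLK_HZ = 33_000_000

pwm_div = CLK_HZ // 32_000     # ~1031 clocks per 32 kHz PWM period
button_div = CLK_HZ // 1_000   # 33,000 clocks per 1 ms button-scan tick
slow_div = CLK_HZ // 100       # 330,000 clocks per 10 ms tick

print(pwm_div, button_div, slow_div)
```

The point is just that almost everything downstream of those counters toggles at kHz rates or slower, which is what makes real-time simulation plausible at all.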
So it seems this should be possible. They run simulations of 8-bit processors at much higher speeds than the real thing. It seems a small FPGA should be capable of simulation/emulation at real-world speeds if the works are not running at high speeds.
I seem to recall a simulator running compiled code. If we don't use the typical logic analyzer output (which has to be pretty slow, I would think), how fast can a simulation run? Does something like this exist?
Are you certain that is what he meant? In most instances you would have a hardware-in-the-loop form of simulation, where sensors and input devices are emulated, possibly using an FPGA, and then use this to ensure fault conditions are detected correctly.
He was asking if the simulation would produce the alarm sounds, etc. Yeah, I'm pretty sure he was talking about real time stuff.
I recall many years ago a company talked about a product that would connect to hardware and simulate the design interfaces to the real hardware. It worked by driving the outputs and sampling the inputs, but since the software could not respond in real time, it would essentially build up the output vectors one clock at a time and reset the hardware, calculate the next simulation stimulus vector and start the process, adding one clock cycle each time. So a simulation that required a million clock cycles to complete would take a million cycles of reset, stimulate and capture, each one getting one clock cycle longer. If the simulation covered much real time, by the end of the run the wall-clock time could be enormous.
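If I've understood that reset-and-replay scheme correctly, the cost grows quadratically: pass k replays cycles 1 through k, so an N-cycle run costs about N(N+1)/2 hardware cycles in total. A quick check of the numbers (the 33 MHz figure is just this thread's clock, used for scale):

```python
# Total hardware cycles for the reset-and-replay scheme: pass k replays
# cycles 1..k, so an N-cycle simulation costs N*(N+1)/2 cycles overall.
def total_replayed_cycles(n):
    return n * (n + 1) // 2

n = 1_000_000
total = total_replayed_cycles(n)
print(total)                        # ~5e11 replayed cycles

# Even clocking the replays at 33 MHz, that is hours of pure replay time,
# before counting any per-pass software overhead.
print(total / 33_000_000 / 3600)    # roughly 4 hours
```

Which matches the "wall clock time could be enormous" observation.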
I guess that is why they also use FPGAs to create a test system for an FPGA design, so the FPGA can be tested in real time and not just in simulation before the rest of the system is built.
I know they do all sorts of things to make sure ASIC designs are right these days, with mask sets costing millions of dollars.
Running real code or interacting with a real interface from simulators has been done for decades. I remember visiting a client something like 20 years ago who had developed an image compression algorithm. In order to see if it was working, the simulator would send pixels via a socket interface to a remote graphics display in the lab. The simulator produced about 1 line per second, which was slow, but it enabled the engineer to test the compressor quite easily. The actual interface was just a few lines of Tcl. I then wrote something similar using Modelsim's FLI to display a fern and was surprised how easy it was (note FLI has built-in socket calls so no fighting with winsock was required).
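The display end of that kind of link can be tiny. Here is a hypothetical sketch in Python of a lab-display receiver (the port number and the one-scan-line-per-newline framing are my assumptions, not the client's actual protocol; their real interface was Tcl):

```python
import socket

# Hypothetical receiver for pixel lines sent by a simulator over TCP.
# Framing assumption: one newline-terminated line of pixel data per scan line.
def receive_lines(host="0.0.0.0", port=50311, handler=print):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()              # wait for the simulator to connect
        with conn, conn.makefile() as f:
            for line in f:                  # one scan line per message
                handler(line.rstrip("\n"))  # e.g. push to a display widget
```

The simulator side just opens a socket and writes lines; at one line per second even a naive receiver like this keeps up trivially.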
There are lots of interfaces on a modern simulator; Modelsim has Tcl/FLI/DPI/PLI/VPI/SystemC to name a few. However, the biggest unknown, as you mentioned, is the simulation speed, and the only way to find out is to experiment.
To give you an idea, booting FreeDOS on a 486 processor in VHDL with some XT/AT peripherals (IDE, UART (com0com), 8254, 8259, RTC, PS2) takes about 40 minutes on a reasonably modern processor (i7-7820X). On an FPGA at 25 MHz it took just 20-30 seconds. Thus the simulation ran at a few hundred kHz. This is still quite impressive if you consider the number of events it is simulating. I am sure I can reduce the time significantly by preloading the hard disk image in memory, using some high-level code for the peripherals, etc.
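Those figures are self-consistent, by the way: the boot takes the same number of clock cycles either way, which pins down the effective simulation rate. A quick check, taking the midpoint of the quoted 20-30 s:

```python
# Effective simulation rate implied by the figures above.
FPGA_HZ = 25_000_000
fpga_boot_s = 25            # midpoint of the quoted 20-30 s boot time
sim_boot_s = 40 * 60        # 40 minutes in the simulator

cycles = FPGA_HZ * fpga_boot_s     # ~625 million clocks to boot FreeDOS
sim_rate_hz = cycles / sim_boot_s
print(sim_rate_hz)                 # ~260 kHz, i.e. "a few hundred kHz"
```

For the original poster's design, where the interesting activity is at 32 kHz and below, a rate in that ballpark is already at or near real time.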
So my suggestion would be to definitely try connecting your simulation to real interfaces; I am sure you will be surprised how fast it is, and it looks quite professional if you have to demonstrate it. It also gives you an extra level of confidence that the code will work.
If you do need more speed then the first port of call is a profiler: replace the most event-heavy parts with behavioural models. Sometimes simply changing code can make a huge difference. Try to avoid complex primitives like DCM/PLL etc., as they are a real performance hog; needing sub-ns time resolution is normally an indicator that you should swap in some behavioural code. If you have to read/write files, use a ramdisk. If you have to send data to another application/computer, use sockets. If you are using Linux then look into named pipes, as they are faster than sockets but limit communication to the same PC. You can use named pipes under Windows as well, but I found them difficult to use and quite flaky (this was on Win7).
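For the named-pipe route, a minimal same-machine handoff looks like this (Linux/Unix only; the `/tmp/sim_pipe_demo` path and the line-per-sample framing are just examples for illustration):

```python
import os
import threading

# Minimal producer/consumer handoff through a POSIX named pipe (Linux/Unix).
# The path is an example; a tmpfs-backed location keeps it fast.
FIFO = "/tmp/sim_pipe_demo"

def reader(out):
    with open(FIFO) as f:            # blocks until a writer opens the pipe
        out.extend(line.rstrip("\n") for line in f)

def demo():
    if os.path.exists(FIFO):
        os.remove(FIFO)
    os.mkfifo(FIFO)                  # create the pipe in the filesystem
    got = []
    t = threading.Thread(target=reader, args=(got,))
    t.start()
    with open(FIFO, "w") as f:       # the "simulator" side writes samples
        f.write("sample_1\nsample_2\n")
    t.join()
    os.remove(FIFO)
    return got
```

In a real setup the reader would be a separate process (a display, a logger, whatever); the thread here just keeps the sketch self-contained.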