Seem to get better results when using inferred data paths?
E.g. letting the synthesis tools insert the multiplexers where they see fit gives better Fmax than laying out the datapath in complete detail. You also don't need to remember and code all the control signals for the muxes. I still code intermediate adders and the like explicitly to keep the number of inferred carry chains down.
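A minimal sketch of what I mean (module and signal names are just illustrative): the result mux is inferred from a case statement, while the adder is still written out so there is exactly one carry chain.

```verilog
module alu_slice (
    input  wire        clk,
    input  wire [1:0]  op,       // the tools decode this into mux selects
    input  wire [15:0] a, b,
    output reg  [15:0] q
);
    // Intermediate adder coded explicitly: one carry chain, reused by the mux.
    wire [15:0] sum = a + b;

    always @(posedge clk) begin
        case (op)                // synthesis infers the 4:1 result mux
            2'b00: q <= sum;
            2'b01: q <= a & b;
            2'b10: q <= a | b;
            2'b11: q <= a ^ b;
        endcase
    end
endmodule
```

The packer is then free to place and order the mux inputs however suits the routing, instead of being locked to a hand-drawn structure.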
It depends if you structure your design to allow it to be mapped to the features of the underlying hardware.
If you do this well you can get optimal performance. If you do this poorly (e.g. putting a reset on an internal register that the underlying primitive can't reset) then you will be left scratching your head trying to work out why performance is poor.
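A concrete instance of that pitfall, as a hedged sketch (assuming a Xilinx-style target, where SRL shift-register primitives have no reset input): adding a reset to a delay line forces it out of SRLs and into discrete flip-flops.

```verilog
module delay_line #(parameter N = 16) (
    input  wire clk,
    input  wire rst,   // delete this port to let the tools pack into SRLs
    input  wire d,
    output wire q
);
    reg [N-1:0] sr;
    always @(posedge clk)
        if (rst) sr <= {N{1'b0}};        // the reset blocks SRL inference
        else     sr <= {sr[N-2:0], d};   // plain shift maps to one SRL per bit
    assign q = sr[N-1];
endmodule
```

The code is functionally fine either way; only the resource mapping (and hence area and timing) changes.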
I'm thinking there are two issues. First, mapping to FPGA resources, which is always an issue (and particular FPGA families differ on things such as resets).
Second, best Fmax, where the packer and router have the interconnect delay info. My thinking is that the location and ordering of the mux inputs makes a significant difference, and, unless you are a Jan Gray type, the tools (ISE, Quartus, etc.) can do a better job? (For the uninitiated, take a look at some real-world routing on an FPGA.)
I'll ask Jan what his magic is and let you know :-)
I feel it is usually better to work at the highest level of abstraction you can, while always being sympathetic to the lower levels.
That way you have to dive into the lower levels less frequently to resolve what in retrospect were trivial issues.
Of course, you have to dive to the low levels, or read and understand lots and lots of documentation, to get an appreciation of what is sympathetic to the lower levels. Experience has to be earned!
Sometimes it is the smallest changes that matter: using active-high vs active-low signalling, or registering a 'clear accumulator' flag so the register can be absorbed into a DSP block. If you are aware of them you can save a lot of time. You see which patterns do or don't work, and only use the ones that do.
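The 'clear accumulator' trick, as a sketch (names are illustrative, and the exact behaviour depends on the DSP primitive in your family): registering the clear flag and expressing the clear as a synchronous load lets the whole multiply-accumulate, including its control, fit the DSP block's internal registers.

```verilog
module mac (
    input  wire        clk,
    input  wire        clr,   // one cycle early, by design
    input  wire [17:0] a, b,
    output reg  [47:0] acc
);
    reg clr_r;
    always @(posedge clk) begin
        clr_r <= clr;                  // registered control flag
        if (clr_r) acc <= a * b;       // 'clear' = load first product
        else       acc <= acc + a * b; // normal accumulate
    end
endmodule
```

Written as an asynchronous clear instead, the accumulator register typically cannot be pulled inside the DSP block and ends up in fabric, costing both area and Fmax.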
But of course, at the sharp end where absolute performance is needed in complex designs, carefully engineered datapaths may be needed. The tools are good, but are only tools.
I asked:
"For a new moderately complex design, should I go for an engineered or inferred data path? (a.k.a. Should I initially trust the tools or not?)"
Jan's reply was....
"IMO you should have the technology mapped solution in mind before you start coding -- then do a min effort bottom up implementation to emit that, whether inferred or structurally instantiated.
formatting link g02.html#art ? :-)"
Better info than expected. Although it's from almost 16 years ago.
It would be good to hear from the IP vendors who offer customizable CPUs: RPMs will presumably work for a fixed CPU core. What about, say, Nios or MicroBlaze with their many configuration options?