Is FPGA code called firmware?

C is certainly linked fairly closely to VHDL/Verilog, but there are a few key differences I had to work through when learning HDLs to truly understand what was going on. Take the non-blocking signal assignments in a clocked sequential process in VHDL: I originally assumed that, as in software, the assignment would take effect as soon as the line executed, but I was wrong. A few minutes playing around with ModelSim revealed that signals update on the following clock edge (when the flip-flops sample their data inputs).
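
To make that concrete, here is the mental model that finally clicked for me, sketched as a toy C program. This is just the two-phase update idea, not real simulator code: each clocked process computes "next" values from the current ones, and only afterwards are the signals committed, like flip-flops sampling their D inputs.

#include <stdio.h>

int q_cur = 0, q_next;          /* one "signal": current and pending value */

void clocked_process(void)      /* runs on the rising edge */
{
    q_next = q_cur + 1;         /* like q <= q + 1; reads the OLD q */
}

void commit(void)               /* end of the cycle: signals update */
{
    q_cur = q_next;
}

int main(void)
{
    for (int clk = 0; clk < 3; clk++) {
        clocked_process();
        printf("during edge %d, q still reads %d\n", clk, q_cur);
        commit();
    }
    printf("after 3 edges, q = %d\n", q_cur);
    return 0;
}

The assignment never "happens instantly"; it is staged and committed, which is exactly what I saw in ModelSim.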

So there was a bit of a retraining process even though the syntax was somewhat familiar.

This is a fairly tough question; we wouldn't be discussing it if it were something we could all agree on. I believe both are hardware, and I will explain my reasoning:

FpgaC, for example, is a totally different ball game from VHDL/Verilog, but both ultimately result in a piece of hardware at the output.

FpgaC (going by the example posted on the TMCC website at U of Toronto, where I happen to live :) ) completely hides the hardware elements from the designer, allowing them to give a software-like *DESCRIPTION* (key word) of the hardware. What you get is ultimately hardware that implements your "program".

VHDL/Verilog, on the other hand, hide most of the grunt work of digital design, but some things are still left over, like the non-blocking signal assignments I pointed out above.

We have always progressed towards abstraction in the software world; similar pushes have also been made in the hardware world with EDA and CAD software packages like MATLAB, which automate most of the grunt work. Perhaps program like HDL's are the new progression.

All I can say, though, is that only time will tell. It depends on how well compilers like FpgaC can convert a program into a hardware description, and how well they can extract and find opportunities for concurrency.

-Isaac

Reply to
Isaac Bosompem

Sorry, I have a bad habit of not reading through my replies. I am using Google, so please spare me :)

I meant "Perhaps programs like FPGAC are the new progression"

Reply to
Isaac Bosompem

...

Does this include almost all ASIC design, where synthesis SOFTWARE is still used to generate gates from RTL, SOFTWARE is used to place those gates, and SOFTWARE is used to route the connections (not to mention software to run LVS/DRC, etc.)?

It must then also include even schematic-entry-based ASICs, because SOFTWARE is used to enter and netlist all the schematics.

Forcing software-handling discipline whenever software is in the path is not an easy requirement, in my opinion, unless you want to go back to paper-napkin diagrams and tape over transparencies.

Reply to
mk

Forcing software handling discipline on software teams isn't easy either.

Reply to
fpga_toys

"software-handling discipline" relates to the tools, as much as your own code. It is fairly common practice to archive the tools, when a design is passed to production, and then ALL MAINT changes are done with those tools. So just because what you ship the customer might look like HW, you still have to do risk-reduction in house.

For a live, and classic, example, look at the ISE v8 release. Some of the flaws that shipped in it are frankly amazing, and one wonders just what regression testing was done....

-jg

Reply to
Jim Granville

Actually, VHDL stuck its toes into this some 20 years back. By 1993 the 1076.2 Standard Mathematical Package was part of the standards process, then came 1076.3 Numeric Standard, not long after that IEEE 1076.3 floating point, and then discussions of supporting sparse arrays and other very high level concepts for pure mathematical processing rather than hardware logic from a traditional viewpoint. Interest in C-based HDLs/HLLs for hardware design predates even Dave's TMCC work, which is itself over a decade old.

So, I don't think it's all that new. Rather, it started with sequential high-level syntax, and automatic arithmetic/boolean expression processing was added to VHDL. When computers were expensive in the 1960's and 1970's, we traded design labor for microcode and assembly language designs (frequently done by EEs). As computers dropped drastically in price, that practice rapidly became not cost effective and was almost completely replaced by higher and higher levels of abstract language compilers, trading design productivity gains against inexpensive computer cycles. We see the same process with logic "hardware logic simulators" .... AKA FPGAs: as they have dropped rapidly in price, huge designs can be implemented on them that are no longer cost effective in schematic form. And we are seeing even larger designs implemented that are not even cost effective to design at the gate level using first-generation HDLs, which let the designer waste design labor on detailed gate-level design. Hardware development with second- and third-generation description languages is likely to follow the software model: using higher degrees of abstraction specifically to prevent designers from obsessing over a few gates and, in the process, creating non-verifiably correct designs which may break when ported to the next-generation FPGA or logic platform.

FpgaC/TMCC has a number of things that are less than optimal, but it rests on a process of expressing all aspects of the circuit as boolean expressions, then aggressively optimizing that netlist. The results are surprising to some, but hey, it's really not new, as VHDL has covered nearly the same high-level-language syntax to synthesis too. I think what is surprising to some is that low-level software design is long gone, and low-level hardware design is soon to be long gone, for all the same reasons of labor cost vs. hardware cost.
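
To give a flavor of what I mean by "boolean expressions, then optimize" (a toy sketch only, nothing like FpgaC's actual internals), picture the whole circuit as expression nodes and the optimizer folding constants away:

#include <stdio.h>

enum op { OP_CONST, OP_INPUT, OP_AND };

struct node {
    enum op op;
    int value;              /* for OP_CONST: 0 or 1        */
    struct node *a, *b;     /* for OP_AND: the two inputs  */
};

/* Fold x AND 1 -> x and x AND 0 -> 0, recursively. */
struct node *opt(struct node *n)
{
    if (n->op != OP_AND)
        return n;
    n->a = opt(n->a);
    n->b = opt(n->b);
    if (n->a->op == OP_CONST)
        return n->a->value ? n->b : n->a;
    if (n->b->op == OP_CONST)
        return n->b->value ? n->a : n->b;
    return n;
}

int main(void)
{
    struct node one      = { OP_CONST, 1, NULL, NULL };
    struct node x        = { OP_INPUT, 0, NULL, NULL };
    struct node and_node = { OP_AND,   0, &x, &one };
    printf("x AND 1 folds to %s\n", opt(&and_node) == &x ? "x" : "?");
    return 0;
}

A real synthesis back end does vastly more than this one rewrite, of course, but the shape is the same: everything becomes a boolean netlist, and the optimizer cleans up whatever the front end emits.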

Reply to
fpga_toys

Or, more importantly, why the select beta-list developers' designs didn't stumble into the same problems. In large-software land, alpha and beta pre-release cycles are the critical part of not slamming your complete customer base with critical bugs. The alpha and beta testers willing to do early-adoption testing are probably one of the most prized vendor assets, and most carefully controlled access resources, that any software vendor can develop. And for that privilege, and to build that relationship, it's frequently necessary to give your product away to those early adopters long term .... both the betas and the clean releases that follow.

Reply to
fpga_toys

snipping

It really depends on the style of design and the synthesis level. Traditionally, Synopsys-style DC synthesis performed on RTL code is hardware design, more or less, no matter how the RTL is prepared (even javascript, if that's possible). But Handel-C and the other new entrants are, to my knowledge, usually based on behavioural synthesis; the whole point of their existence is to raise productivity by letting the tools figure out how to construct the RTL dataflow code, so that mere mortal software engineers don't have to be familiar with RTL design. They still find out about real hardware issues sooner or later, though.

On the ASIC side, Synopsys BC didn't fare too well with hardcore ASIC guys, better with system guys. Since FPGAs let software and system guys into the party, BC-style synthesis is going to be far more acceptable and widespread there, the cost of failure being so much lower than with ASICs. As for whether it is HW or SW, I decline to answer, but the ASIC guys tended to call BC design too software-like given the early results, and I don't think their opinion of C-style behavioural design has changed any since.

If BC-style design produces results as good as those typically achieved by DC synthesis, then it is every bit as good as hardware. But does it produce such results? In hardcore hardware land we expect to drive upwards of 300MHz cycle rates and plan a hardware design onto a floorplan, and I wouldn't expect such performance or efficiency from these BC tools. Do they routinely produce results as good? I very much doubt it. Replacing RTL with behavioural design may raise productivity, but it is not the same thing as replacing assembler coding with HLL coding, IMHO, given the current nature of OoO cpus.

Reply to
JJ

Back in the old days, it was common to build FSMs using ROMs. That approach makes it natural to think of the problem as software - each word in the ROM holds the instruction you execute at that PC plus the right external conditions.

That still seems like a good approach to me. Seems pretty low level, too.
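
Sketched in C, with made-up field widths, the idea is just this: the ROM address is {state, inputs}, and the word you fetch is {next state, outputs}.

#include <stdint.h>
#include <stdio.h>

#define STATE_BITS 4
#define INPUT_BITS 2

/* The FSM "program": one word per {state, inputs} combination,
 * filled in at "assembly" time.  Word format: next_state in the
 * high bits, one output bit in the low bit (toy encoding).
 */
static uint8_t rom[1 << (STATE_BITS + INPUT_BITS)];

int main(void)
{
    /* Example entry: in state 0 with inputs == 3, go to state 5, output 1. */
    rom[(0 << INPUT_BITS) | 3] = (5 << 1) | 1;

    uint8_t state = 0, inputs = 3;
    uint8_t word = rom[(state << INPUT_BITS) | inputs];  /* one clock: fetch */
    state       = word >> 1;                             /* next state       */
    uint8_t out = word & 1;                              /* output bit       */
    printf("next state %u, output %u\n", state, out);
    return 0;
}

Each ROM word really is the instruction you execute at that PC under those external conditions, which is why the software view comes so naturally.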

I've done a reasonable amount of hack programming where I count every cycle to get the timing right. I could probably have done it in C, but I'm a bottom-up rather than top-down sort of person.

Reply to
Hal Murray

As a half-EE, half-CSc guy from the 1970's, I spent more than a few years doing hardcore assembly language systems programming and device driver work. The argument about C for systems programming was much louder then, and much more opinionated about what "real systems programmers" could do. As an early Unix evangelist and systems programmer, it didn't take long to discover that I could easily write C that produced exactly the asm I needed. As the DEC systems guys and the UNIX systems guys warred over what was best, it was more than fun to ask them for their rosetta assembly language and frequently knock out a faster C design in a few hours for a piece of code that had taken weeks to fine-tune in asm. It was almost always because they got fixated on micro-optimization of a few loops and missed the big-picture optimizations. Rewriting asm libraries in C as we ported away from PDP-11's to microprocessors was seldom a performance hit at all.

I see similar things happening with large ASIC and FPGA designs, as the real performance gains are in highly optimized but more complex architectures, and less in the performance of any particular FSM and data path. Doing the very best gate-level designs, just like the very best asm designs, at some point is just a distraction when you start looking at complex systems with high degrees of parallelism and specialized functional units, where the system design/architecture is the win, not a few cycles at the bottom of some subsystem.

The advantage of transferring optimization knowledge into HLL tools is that they then do it right EVERY time, whereas the same energy spent optimizing one low-level design is seldom leveraged into other designs. Because of this, HLL programming languages routinely deliver three or four nines of the performance hand coding would, and frequently better, since every known optimization is automatically taken and applied where a hand coder would not be able to.

We see the same evolution in bit-level boolean design for hardware engineers. A little over a decade ago, all equations were hand optimized .... today that is a lost art. As the tools do more of it, in probably another decade it will no longer be taught as a core subject to EEs, if not sooner. There are far more important things for them to learn that they WILL actually use and need. That will not stop the oldie moldies from lamenting how little the kids today know and claiming they don't even know their trade. The truth is that the kids will have new skills that leave the Dinos just that.

I believe, having seen this same technology war from the other side, that it is not only the same, but will actually evolve better, because the state of the art and the knowledge of how to capture and exploit optimizations are much better understood these days. The limits in software technology and machine performance that slowed extensive computational optimization in software compilers are much less of a problem with today's VERY fast cpus and large memory systems. Probably the hard part will be extracting the knowledge needed to do a good job from the minds of heavily protesting EEs worried about job security. I suspect a few will see the handwriting on the wall and become coders for the tools, just to have a job.

Reply to
fpga_toys

Any of us educated in engineering school in the 1970's probably has, more than a few times. On the other hand, I also built a DMA engine out of an M68008, using address-space microcoding, which saved a bunch of expensive PALs and board space, plus used the baby 68k to implement a SCSI protocol engine to emulate a WD1000 chipset. The whole design took a couple of months to production.

Having done it the hard way with bipolar bit slices just gives you the tools to take a more powerful piece of silicon and refine it better. That is the beauty of FPGAs as computational engines today: looking past what they're meant to do, and looking forward to what you can do with them tomorrow, by exploiting the parallelism and avoiding the sequential bottlenecks of cpu/memory designs. Designers who only know how to use a cpu plus memory, and lack the skills of designing with lower-level building blocks, miss the big picture, at both a software and a hardware level.

It's not about up or down, it's simply learning your tools. For 30 years I've written C thinking asm, coding C line for line with the asm it produced. For the last couple of years, after learning TMCC and taking a one-day Celoxica intro seminar, I started writing C thinking gates, just as a VHDL/Verilog engineer writes in that syntax thinking gates. Hacking on and extending TMCC as FpgaC has only widened my visualization of what we can do with the tool. The things TMCC/FpgaC does wrong are almost entirely masked by the back-end boolean optimizer, which comes very close to getting it right. Where it doesn't, it's because its synthesis rules are targeted at a generic device, and it lacks device-specific optimizations to target the available CLB/Slice implementation. That will come with time, but it really doesn't have that big an impact today.
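
For a taste of what I mean by "C thinking gates" (a hypothetical fragment, not FpgaC's exact dialect), you write ordinary C but keep the mapping in your head: each variable is a bank of flip-flops clocked once per loop pass, each expression a cloud of LUTs, each conditional a mux.

#include <stdio.h>

int main(void)
{
    unsigned counter = 0;           /* a bank of flip-flops          */
    unsigned shift   = 0;           /* another flip-flop bank        */
    unsigned load = 0, bus_in = 0;  /* "input pins", stubbed here    */

    for (int clk = 0; clk < 8; clk++) {          /* one pass == one clock */
        counter = counter + 1;                   /* adder -> carry chain  */
        unsigned d = (shift << 1) | (counter & 1);  /* LUTs plus wiring   */
        shift = load ? bus_in : d;               /* 2:1 mux on the D input */
    }
    printf("shift = %x\n", shift);
    return 0;
}

Run as software it prints shift = aa; thought of as gates, it is two registers, a carry chain, and a mux. That mental translation is the whole trick.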

Right now we are focused on implementing the rest of the C language that was left out of TMCC, which is mostly parser work and some utility-routine work inside the compiler. That will be completed March/April; then we can move on to back-end work and target what I call compile, load and go work, which will focus on targeting the back end to current devices. With that will come distributed arithmetic optimized for several platforms, as well as use of the carry chains and muxes available in the slices these days. At that point, FpgaC will be very close to fitting designs as current HDLs do .... if you can learn to write C thinking gates. FpgaC will over time hide most of that from less skilled programmers, requiring only modest retraining.

The focus will be computing with fpgas, not fpga designs to support computing hardware development. RC.

For the last couple of weeks we have been doing integration and cleanup after a number of major internal changes, mostly a symbol-table-manager and scoping-rules fix to bring TMCC in line with standard C scoping and naming, so that we can support structures, typedef and enum. We've also implemented a first crack at traditional typing in prep for enum/typedef, allowing unsigned and floating point now in the process. Both are likely to be in beta-2 this month. The work I checked in last night has the core code for FP using intrinsic functions and probably needs a few days to finish. It also now has do-while and for loops, along with structures and small LUT-based arrays or BRAM arrays. It currently regression-tests pretty well, with a couple of minor problems left, including one that causes some temp symbols to end up with the same names. That should be gone by this weekend, as FP gets finished and hopefully unsigned is done too.

svn co [formatting link] fpgac

alpha/beta testers and other developers welcome :)

Reply to
fpga_toys

At my current and previous jobs, FPGA "loads" are/were considered firmware, for the same reason that processor boot code and the lowest-level debug monitor were considered firmware: the images are stored on board in some kind of non-volatile memory.

-a

Reply to
Andy Peters

?????

No one ever programs in assembly language any more, then?

Where price, performance and power consumption don't matter, a higher-level language might become more prevalent. But I think we'll always need to be able to get down to a lower-level hardware description to get the best out of the latest devices, stretch the performance of a device, or squeeze what needs to be done into a smaller one.

I also wonder whether price/performance/power consumption will become much less important in the future, as they have with software. These days you can assume application software will run on a 'standard', sufficiently powerful PC. But it won't be the case that at the start of every hardware project you can assume you have a multi-million-gate FPGA (or whatever) at your disposal.

Nial.

Reply to
Nial Stewart

I was talking about the FPGA domain here, not SW.

Not every design needs million-gate device functionality; Altera's and Xilinx's low-cost families seem to be selling in big numbers. Sometimes it's important to push the performance of these lower-cost devices to keep costs down. Getting the same functionality into a smaller device can also be important if power consumption is critical (my original point).

How many power supplies do you need for your big devices?

This newsgroup and FPGAs were around long before some numpty at Google decided what their description should be. I don't think we should be taking this as a guiding pointer for the future.

That's probably true, and I expect to be using other tools as well as VHDL in 5 years. However, as John posted above, there's a lot more to implementing an FPGA design than the description used for the logic, and I think we'll still be using HDLs to get the most out of FPGAs for a long time to come (to a bigger extent than with C/asm).

Nial.

Reply to
Nial Stewart

Who said anything about C-based HLLs NEEDING a large FPGA? ... The only NEED for a large FPGA is if you are doing reconfigurable computing on a grand scale.

C-based HLLs work just fine for small devices too. Since devices get bigger in 100% size jumps across most product lines, and the cost penalty for using a C-based HLL is under a few percent, the window in which an HDL is justified purely on device fit is pretty small, or non-existent. Any project crammed into a device with zero headroom probably needs the next larger size anyway, just to make sure that minor fixes don't obsolete the board or force an expensive rework replacing the chip with the next larger device in the middle of a production run.

Probably the biggest change is that EEs will still be putting the chips on boards as they always have, while the FPGA programming shifts to systems programming staff, who are frequently Computer Engineering folks these days (half EE and half CSc, or CSc types with a minor in the digital side of EE). It is similar to the 70's transition, when EEs were doing most of the low-level software design and drivers and it shifted to a clearer hardware/software split over time. With that, tools that expect a designer to mentally do gate-level timing design become less important than higher-level tools which handle that transparently.

Reply to
fpga_toys

That may be the case for large multi-designer designs; for smaller devices, someone who understands the underlying architecture and what they're actually trying to design will still be needed.

This is a quote from a current thread, "Entering the embedded world... help?", on comp.arch.embedded. I don't know how accurate it is.

"If you meant to say "most everyone these days uses an HLL, and for embedded applications most people choose to use C whereas a significant minority choose to use C++" I would not have objected much - although, in terms of code volume on the shelf, assembly language is still at least 30% of all products. Consider that the really high-volume projects use the really cheap micros. I've seen numbers that say asm

40%, C 40%, C++ 10%, other 10%, and I'm quite prepared to believe them.

The problem is, people who talk about this stuff get into their niche and see everything else from that perspective. Few people routinely work with a broad spectrum of systems from 4-bit to 64-bit and code volumes from a few hundred bytes to a few dozen megabytes."

You seem to have a deeply entrenched view of the future of FPGA development. Only time will tell whether you are correct; I don't believe you are, and I'll leave it at that.

Nial.

Reply to
Nial Stewart

I think that has always been the case for embedded, and realtime, and any other tightly integrated hardware/software design of any size.

Certainly true. As a consultant, I can only view the diverse sample of my clients for perspective ... and that is certainly harder for W-2 employees who have lived inside the same company for the last 10 years. It would be interesting to take a survey at an embedded developers conference to get a better feel for the real numbers.

More like a recently converted evangelist, with a pragmatic view from my prior 35 years of systems programming experience, casting an eye on this new field and watching what is happening around me too.

I did have a little fun this evening writing a PCI target-mode core in FpgaC, as an example for the beta-2 release that is nearly at hand. It's not quite done, but it's checked in to subversion on sourceforge in the FpgaC examples directory. For something that is a bus-interface state machine, it is expressed in C pretty nicely, and will get better as unions/enums are added to FpgaC.
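
The general shape of it (heavily simplified here, with made-up signal names; this is not the actual core I checked in, nor anywhere near the full PCI protocol) is just a state machine walking the bus, one call per bus clock:

#include <stdio.h>

enum { IDLE, ADDR, DATA } state = IDLE;

/* One bus clock of a toy target: watch FRAME#, decode an address
 * window, assert TRDY# and capture data when the master is ready.
 */
void target_clock(int frame_n, int irdy_n, unsigned ad,
                  unsigned *reg, int *trdy_n)
{
    *trdy_n = 1;                          /* deasserted by default */
    switch (state) {
    case IDLE:
        if (!frame_n)                     /* master starts a cycle */
            state = ADDR;
        break;
    case ADDR:                            /* decode: is it ours?   */
        state = ((ad & 0xffff0000) == 0x10000000) ? DATA : IDLE;
        break;
    case DATA:
        *trdy_n = 0;                      /* target ready          */
        if (!irdy_n) {                    /* master ready: transfer */
            *reg = ad;
            state = IDLE;
        }
        break;
    }
}

int main(void)
{
    unsigned reg = 0;
    int trdy_n;
    target_clock(0, 1, 0x00000000, &reg, &trdy_n);  /* FRAME# low   */
    target_clock(0, 1, 0x10000004, &reg, &trdy_n);  /* address hits */
    target_clock(0, 0, 0xdeadbeef, &reg, &trdy_n);  /* data written */
    printf("reg = %x\n", reg);
    return 0;
}

Written this way it reads like a device driver turned inside out, which is exactly why it felt so natural.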

It brought out a couple of problems with using I/O ports as structure members that I need to fix in FpgaC tomorrow; then I'll finish the pci coding, along with a C test bench, before testing/installing on my Dini DN2K card.

Reply to
fpga_toys

In news: snipped-for-privacy@4ax.com, timestamped 23 Feb 2006 12:39:58 -0800, it was posted:

"[..]

The Google description for this group is: Field Programmable Gate Array based computing systems, [..]

[..]"

In news: snipped-for-privacy@individual.net, timestamped Fri, 24 Feb 2006 10:06:01 -0000, Nial Stewart replied:

"[..]

This newsgroup and FPGAs were around long before some numpty at Google decided what their description should be. [..]

[..]"

This has nothing to do with Google. Check your newsgroups file: the description of this group is "Field Programmable Gate Array based computing systems." See [formatting link] or [formatting link] or something similar.

Reply to
Colin Paul Gloster

Thanks for that nice tidbit. There has been a dream of reconfigurable computing using FPGAs as computing engines for quite some time, and it seems that dream fostered this newsgroup some time ago. It would be interesting to dig out the archives and see what the topics for this group were in its first year. I have to admit that I'm a newcomer, only taking up the dream some five or so years back, but I am very hopeful at this point.

"At first, dreams seem impossible, then improbable, and eventually inevitable." Christopher Reeve

Writing an example PCI bus interface this week in FpgaC has been something of an eye opener for me ... it feels just like writing device drivers again :) Diddling (creating) the "hardware registers" for the PCI bus interface in FpgaC really isn't that different from many other low-level device drivers I've written over 30 years. I'm pretty certain that as we train other device-driver and embedded guys in this, they will feel the same way about C and FPGAs.

My target is 64-bit at 66MHz for the XCV2000E-8's on the Dini DN2K boards I have. The first try with ISE PAR was pretty horrible: even with high effort selected, it was stumbling with horrible placement choices at less than 1% of the device in use, not even 33MHz performance. After hacking on Mike Dini's fpga-f ucf file to get the pin assignments to match his board and going through place and route again last night, it missed the 66MHz timing budget by about 2% on the second try.

Placement of some variables used at the top of the pci process created longer combinatorial chains than I had expected. Moving them to the bottom of the function will remove the deep combinatorials and leave that path much shorter from the registered versions of the variables.

I spent some time in FPGA-Editor again last night, only to discover that the pin locking in the ucf file brought the logic down near the bottom of the chip, near the pads, but ISE 6.1i par failed horribly to do a best-effort set of assignments WITH NOTHING in the way. Clearly, excessive routing was incurred due to poor placement. You would expect it to at least put the LUTs driving the pads in the CLB nearest the IOB, not a half dozen away, crossing others, and then place the read LUT and FF for the PAD into the same CLB, along with the associated access logic, not scattered halfway across the chip.

FPGA-Editor really is the schematic design tool for Xilinx parts.

Reply to
fpga_toys

You've had to understand the target architecture and what's causing your timing constraint to fail, then re-jig your HDL to reduce the number of levels of logic to achieve timing closure.

I thought one of the arguments for using a C-based HDL was that you can avoid this level of design implementation detail?

(Serious question, I'm not being facetious).

Nial.

Reply to
Nial Stewart
