FPGA C Compiler on sourceforge.net (TMCC derivative)

Here's a reference that was posted here a while ago; I'm only just following up on it now:

"Survey of C-based Application Mapping Tools for Reconfigurable Computing"

formatting link

On p14, the C-based implementation performs faster than the VHDL implementation, despite the VHDL being developed after a 'semester-long endeavor into the algorithm's parallelism'.

They point to one of their own references that describes the implementation, but I guess you'd probably need to ask them for the resulting source.

I guess Celoxica can probably give you some references to C-based examples too.

Note that this sort of example is a more likely application in the HPC community rather than the hardware design community per se.

Martin

Reply to
Martin Ellis

I think we have to accept that high-level languages are going to be the future for FPGAs. Not to say that HDLs will be replaced entirely, but they'll be largely supplanted by HLLs. Algorithms are easier to verify: testing can be done using a software compiler and, provided you can trust your tools and hardware infrastructure, you shouldn't need to do extensive hardware testing of the implemented algorithm. Why should we need to know what's going on inside the FPGA? Development time, too, is vastly reduced.

I think C has been selected as the starting point for most HLL-to-HDL tools not because of its eminent suitability for the task, but because it decreases the pain in switching to the new tool. C syntax is familiar, it's a good jumping-on point. However, my experience in using these tools tells me that hopes for massive re-use of legacy code are still very much a pipe-dream. You will still have to understand the underlying hardware. You will have to understand the spatial, temporal and memory tradeoffs, and understand how to infer the pipelining and parallelism that is most suitable. What HLLs free you up from is the need to fiddle about with the timing on pipelines and other such details. I can change the mix of ALUs in a complex pipelined algorithm easily and painlessly. I don't need to go and manually re-time my pipeline to account for the changes (and so know I won't make an off-by-one error, introducing a fiendish bug).

First generation tools are far from perfect, but they will see use because they significantly decrease development time. Your HLL-designed system may not be as efficient as the best possible VHDL design, but if it's good enough and you get to market months before the competition, you'll come out on top.

Once the user base has been built up, I see the tools maturing and becoming less and less C-like. New languages will be demanded to better express parallelism and pipelining and to account for heterogeneous processing units and memory structures.

Sorry if I've gone on a bit... :)

Robin

Reply to
Robin Bruce

OK, so you want to map GUIs, database engines, programming languages and so on directly onto an FPGA? Perhaps it makes sense to map 1% of all applications to an FPGA. FPGAs offer massive parallelism, so only applications/problems that exploit this parallelism should be implemented in FPGAs. They are all some kind of communication system or signal processing system.

Maybe, but it uses the hardware very efficiently.

Never, since most of them think in sequential algorithms and don't understand the advantages of hardware.

Bye Tom

Reply to
Thomas Reinemann

What are you saying? That people who don't understand hardware don't make good hardware designs? Why would that make VHDL better than a HLL-to-VHDL tool?

I could just as easily say that people who've never heard of algorithms don't understand the advantages of a microprocessor, therefore assembler is better than C. It's a non sequitur...

Reply to
Robin Bruce

All? Perhaps you should read these for other high-performance computing applications that can be accelerated using FPGAs:

@MISC{compton00reconfigurable,
  author = {K. Compton and S. Hauck},
  title  = {Reconfigurable Computing: A Survey of Systems and Software},
  year   = {2000},
  note   = {Submitted to ACM Computing Surveys},
  url    = {formatting link}
}

@ARTICLE{hauck98roles,
  author  = {Scott Hauck},
  title   = {The Roles of {FPGAs} in Reprogrammable Systems},
  journal = {Proceedings of the IEEE},
  year    = {1998},
  volume  = {86},
  number  = {4},
  pages   = {615--638},
  month   = {April},
  url     = {formatting link}
}

Isn't that what people said about assembly language? And GOTO statements?

Yawn. I wonder when people from traditional hardware design backgrounds will get over this kind of attitude.

So what if some hobbyists don't 'get' it at first? People aren't born hardware designers, nor software programmers. Are you really saying you've never made any mistakes while you were learning?

It's not like using a HLL for FPGA design is only useful for hobbyists anyway.

Martin

Reply to
Martin Ellis

Isn't that what people said about Schematic based designs when HDLs popped up?

The reality is that the HLLs targeting reconfigurable computing on FPGAs already get very good fits, just as HDLs do, simply because the back-end optimizers in the tool chains for space/time tradeoffs, partitioning, mapping, and routing yield the same benefits for HLLs. The biggest difference is that most HLLs hide implementation details that create design risks, details which are considered expert territory for HDL users, thus allowing coders with less hardware experience to realize functional designs with a small performance penalty. Given that the speed-ups obtained in moving from a RISC/CISC CPU to an FPGA, where there is parallelism to exploit, are often one to three orders of magnitude, this small efficiency loss is completely mouse nuts. Gaining that extra efficiency would require an experienced HDL coder and significant delays in the development schedule, each of which has marginal cost-benefit gains in comparison to the huge gains made by using reconfigurable computing with FPGAs.

Heck, Impulse C is said to use VHDL as the netlist technology to optimize the fit. FpgaC even has some experimental code that uses Verilog as the netlist technology instead of XNF. Even the XNF outputs allow for representing the output design as basic gates or as packed LUTs with equations, letting the user decide which will give the best technology mapping. Each of these choices gives the back-end tool chain considerable room to optimize the HLL-produced netlist for the target technology, just as VHDL and Verilog designs expect.

Celoxica C and Impulse C are becoming thriving products with expensive, high-value tool chains. Others are likely to become successful as well, and it's very likely that Xilinx, Altera, or another FPGA company will offer a C HLL/HDL as their flagship tool chain as reconfigurable computing takes off and drives high-end FPGA revenues. Some expect that may be in the form of SystemC, if that technology really takes off as a system-level design specification tool.

Doesn't matter. There are few low-cost C HLL/HDL tools available for students, hobbyists, and low-budget design shops. The total cost of the Celoxica tool chain for a modest-sized development team, with the multiple licenses needed, can easily run to the cost of several engineers. Small 1-10 man shops like mine simply cannot afford Celoxica licenses, so I've used a mix of TMCC and Verilog for several projects to stay within my budget.

There are several interesting specialty C HLL/HDL research tools that knock your socks off. Several data-flow C compilers have been presented at conferences that would be awesome for some projects if they ever became real, affordable products or were released under the GPL (search for the ROCCC and PiCoGA projects, and work by Oskar Mencer). Sarah's partial-evaluation C compiler, HarPE, generates some awesome logic optimizations which, coupled with her async work, would make a killer addition to one's tool chain for certain types of projects (see
formatting link
). There is also Mihai Budiu's research at CMU, which produced the ASH tool chain (see
formatting link
). Other projects like SA-C at ColoState.edu (see
formatting link
) and Spark at UCSD (see
formatting link
), plus a few dozen others, are all exploring and showing solid gains in how to map HLLs to FPGAs and win big.

C as an HLL-to-netlist tool chain is here to stay, and will probably only get better with time. C as an HDL (in the form of Handel-C from Celoxica) is clearly here.

Reply to
air_bits

That's essentially it. Let's define "good" hardware designs: best use of hardware resources with highest performance (clock speed).

Because VHDL and Verilog are designed from the ground up to take advantage of the parallelism inherent in hardware designs. Sequential programming languages such as C are not, and much hackery has to happen for C to map well to hardware.

As an example, think about how you would implement a FIR filter in C for a DSP, and then think about how you would implement the same filter on an FPGA. I suppose one could write a tool that's smart enough to translate the C description of a FIR filter into efficient hardware, but one presumes that sufficient constraints would need to be put into the "hardware C" for the tools to work well. But if the intent is to take high-level C developed by a software guy and have it map to hardware as well as it runs on a DSP, well, I just think you'll leave a lot of FPGA performance on the table.
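To make the contrast concrete, here's a rough sketch in plain C (nothing vendor-specific; the tap count, types, and function names are just for illustration):

    /* Plain sequential C: one multiply-accumulate per loop iteration.
       A DSP with a MAC unit eats this, but a naive translation to
       hardware gives you one multiplier reused N_TAPS times. */
    #define N_TAPS 8

    int fir_sequential(const int coeff[N_TAPS], const int sample[N_TAPS])
    {
        int acc = 0;
        for (int i = 0; i < N_TAPS; i++)
            acc += coeff[i] * sample[i];
        return acc;
    }

    /* The shape a C-to-gates tool wants to see: fixed bounds, no
       pointer walking, the loop fully unrolled so the tool can
       instantiate N_TAPS multipliers and an adder tree, giving one
       result per clock once pipelined. */
    int fir_unrolled(const int coeff[N_TAPS], const int sample[N_TAPS])
    {
        return coeff[0] * sample[0] + coeff[1] * sample[1]
             + coeff[2] * sample[2] + coeff[3] * sample[3]
             + coeff[4] * sample[4] + coeff[5] * sample[5]
             + coeff[6] * sample[6] + coeff[7] * sample[7];
    }

Both functions compute the same thing; the point is that getting the second form (or hints that produce it) out of the first is exactly where the "hardware C" restrictions come in.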

You could, but the statement is irrelevant.

-a

Reply to
Andy Peters

Thanks, very interesting link.

I like this oxymoron, on p5:
" * Companies create proprietary ANSI C-based language
* Languages do not have all ANSI C features
and this very important point
* Must adhere to specific programming 'style' for maximum optimization "

The benchmarks are useful - and show that the choice is very much a lottery. One benchmark they did not give was what the results would be if generic C, from a generic graduate, were thrown at these tools.

Source snippets are important, because these solutions are not C, but C-based. The devil is in the details....

-jg

Reply to
Jim Granville

I'm afraid if you throw generic C at these tools, it won't compile...

Reply to
Robin Bruce

Andy,

firstly, you've misunderstood my post. The previous poster seemed to suggest that there was some kind of link between people not understanding hardware and HLLs being inferior to VHDL. I didn't see what the level of experience of people who use HLLs has to do with whether or not HLLs are superior to HDLs. This was what led to my intentionally fatuous comment:

OK, as for your other comments:

well, much 'hackery' obviously has happened, as there are tools that map C well to hardware. We're not talking about what might happen, we're talking about what is happening.

Check slide 13 of Brian Holland's presentation from MAPLD; you'll find all three HLL tools tested beat the VHDL for the FIR implementation:

Who said that's what we're trying to do? We're talking about high-level languages not so we can compile legacy code. We're doing it so we can rapidly infer reliable hardware using a more concise expression than that achieved using HDLs while paying a minimal price in lost potential performance.

Reply to
Robin Bruce

I don't think that's an oxymoron. Just because a language is proprietary, it does not mean it can't be based on ANSI C.

Sure it might read a bit funny - but we've all tried to cram too much onto slides before.

Yes. That's a very important point.

I think you've got the wrong end of the stick there.

This isn't about taking arbitrary C code and compiling it to an FPGA. You simply can't do that.

Reason: It's possible to write architecture specific code (that is, code specific to a particular ISA) in C. Self-modifying code and dynamic code generation are examples of this.

For example, linkers and JIT compilers modify code which is then executed. You couldn't compile that efficiently to an FPGA - it would need re-synthesis every time the code was modified.

Another reason is that a compiler can't guess which inner loops are program 'hot-spots', and thus good candidates for synthesis. Such information is application-domain specific.

Concisely: the aim isn't to be able to take a program written by someone who knows nothing about hardware (at least, not yet). The aim is to be able to develop hardware acceleration for a given algorithm.

One advantage of C-based languages is that when trying to accelerate an algorithm, it might not be clear which parts to synthesise - this requires some trial and error for difficult problems and also depends on some rather arbitrary parameters. It's easier to move a computation unit from software to hardware, or vice versa, if the languages are similar. There's also a whole raft of software-based optimisations that can be applied before the hardware optimisations even get a look in.

Another is that for some projects a C simulation is developed to check the algorithms anyway. For example, Timothy Miller developed a software model for the OpenGraphics project. The practice isn't uncommon.

For the reasons above, it is - in general - necessary to provide a compiler with some pragmas or other hints that describe what code would be a good candidate for synthesis.

However, the solutions are often close enough to C that it's possible to execute the program entirely in software, as well as compile to an object code/bitstream target. That's useful for the intended applications.

Nobody's pretending C-based synthesis is a complete replacement for HDL, only that for some applications/projects it's a very compelling alternative.

Martin

Reply to
Martin Ellis

Actually, Handel-C (Celoxica), Impulse-C (a derivative of StreamsC), ASH, SA-C, HarPE, Spark, and even FpgaC all take a subset of the language and, to varying degrees, add extensions to it, producing a language optimized for hardware design - some much more rigorously than others.

Celoxica's long-term goal is to compete one-on-one with VHDL/Verilog, and they are doing pretty well at it so far.

Impulse-C, with its VHDL back end, is meant to take communication-based designs (AKA RPC or MPI or other cluster-based communication libraries) and use FPGAs as computing nodes with clearly defined streams, to build pipelined system designs. The use of the VHDL back end leaves lots of room to optimize the resulting design at a low level.

SA-C and ASH are clearly targeting high-performance designs - actually, large high-performance designs - with the intent to get as good a hardware fit as VHDL/Verilog or better, by picking a specification language higher than VHDL/Verilog and lower than full C/C++ that is highly optimizable, to give a better design yield than mid-level experienced coders would get with VHDL/Verilog. Actually, all the C HLL offerings pretty much share this goal of trying to do better than the average VHDL/Verilog coder.

ASH and HarPE go after optimizations that even a skilled VHDL/Verilog coder is likely to miss, or may even decide to avoid in the effort to keep the VHDL/Verilog code readable and maintainable.

Reply to
air_bits

Actually you could, and in the future, when a few million LUTs are cheaper than a fairly fast CPU, some people probably will, by using mixed technologies inside the FPGA ... a combination of application-specific CPU cores and generic netlists. Xilinx is already targeting that market with PPC cores and MicroBlaze cores as an addition to FPGA logic synthesis.

Actually, that is only partially true. It's been common for some time to use profiler input from actual runs to guide the compiler optimizations for later builds. This happens to be one sweet spot that lcc exploits to beat gcc and pcc executables. See

formatting link

Actually, it's very easy to write C to the subset implemented by a particular C-to-netlist HDL that, with a few #ifdefs, is usable in either environment, and this can accelerate development testing and debugging by doing most, if not all, of the high-level debugging in a well-structured source-code debugging environment.
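As a rough sketch of what I mean (FPGA_TARGET is just an illustrative macro name here, not any particular tool's define, and the kernel is deliberately trivial):

    /* One source file, two targets.  Compile with an ordinary C
       compiler for debugging on the workstation; feed the same file
       (minus the stdio harness) to the C-to-netlist tool for the
       FPGA build. */
    #ifndef FPGA_TARGET
    #include <stdio.h>
    #endif

    #define N 16

    int dot16(const int a[N], const int b[N])
    {
        int acc = 0;
        for (int i = 0; i < N; i++)   /* fixed bounds, no pointer math */
            acc += a[i] * b[i];
        return acc;
    }

    #ifndef FPGA_TARGET
    /* Host-only test harness: never synthesized, just used to debug
       the algorithm in a normal source-level debugger. */
    int main(void)
    {
        int a[N], b[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }
        printf("dot16 = %d\n", dot16(a, b));
        return 0;
    }
    #endif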

Others take it a step farther and use rigorous type checking combined with super "lint" tools that provide verifiably correct construction checking. In a lot of ways, the original C++ development was exactly that: it was implemented as a front-end preprocessor for standard C, which was specifically meant to be only lightly typed so that it was a productive low-level systems programming language only marginally higher level than PDP-11 assembly language.

It's actually fairly easy to code in C++ with abstract types that directly implement (i.e. emulate) the abstract hardware types that would result after synthesis, to yield a strict HLL development environment with strict typing and verifiable designs, and still translate to a subset dialect of C (or Verilog) for synthesis.

Choke, cough .... ummmm ... Celoxica, ASH, and a few other projects really have that goal. Celoxica has very clear guidelines, just as VHDL and Verilog have, to allow the coder to understand just what registers and logic will be instantiated.

When you stop and think about it ... there is very little difference between the coding syntax of Handel-C and a subset of Verilog.

Reply to
air_bits

Um. Go back to the self-modifying code example and you'll see that it can't always work. To some extent, I agree with you, but the programs *have* to be sensible and well behaved (type safe to some extent).

And that excludes *arbitrary* C.

Yes, I know about this technique, and I thought about mentioning it here, but didn't for brevity. I don't call that fully automatic though. You need to seed it with some appropriate test data.

That's exactly what I'm arguing. Why are you arguing with me?

As far as I know, Celoxica's products and similar offerings only target digital designs for FPGAs.

I'm not aware of any C-based language that was intended to cover ASIC manufacture, or that would cover, say, the 'Standard VHDL Analog and Mixed-Signal Extensions'.

I could be very wrong there, and I'd be interested to know if they do intend to target those aspects of HDLs though, if you can point me at any references.

Sure. The syntax is very similar, but syntax is normally the least interesting part of a language.

Martin

Reply to
Martin Ellis

Which has a lot in common with the ASM-HLL debates on microcontrollers.

The best solutions will come from a mix of tools

- but the sad reality is that the marketing departments' drive is to push the hot new thing as a silver bullet, and any suggestion or example of mixing HLL/HDL might be seen as admitting that their hot new thing is not actually the universal new tool....

There is another, more recent shift in FPGAs, which means a 'sea of DSP' deployed in the FPGA, and that is missing from this link: "Survey of C-based Application Mapping Tools for Reconfigurable Computing"

formatting link

The HLL -> HDL path misses the alternative of HLL -> FPGA running the HLL, and the best tool set will be one that allows a softer migration between opcodes and registers.

The next generation of FPGAs will be interesting to watch, as we are steadily getting more coarse and complex blocks, in BlockRAM and DSP-able blocks, with each release. This may outflank the efforts to create C -> registers?

-jg

Reply to
Jim Granville

Or cynically speaking, we may just get bigger, faster and cheaper FPGAs, so that it doesn't really *matter* how efficient you are, merely that you're in the ballpark. I think this has happened to a certain extent in the software world anyway...

Jeremy

Reply to
Jeremy Stringer

What tools would those be? I've yet to see a tool that will take C code that has not been so badly bastardized that it no longer looks much like C code and turn out even half-decent hardware. All of them require proprietary extensions to the C language to sufficiently describe hardware, as well as a very specific and stilted programming style that is as foreign to C programmers as VHDL or Verilog is.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com  
http://www.andraka.com  

 "They that give up essential liberty to obtain a little 
  temporary safety deserve neither liberty nor safety."
                                          -Benjamin Franklin, 1759
Reply to
Ray Andraka

Which is very true, Celoxica being a prime example, as code written for their target as an HDL would be very tough to get to run on a RISC/CISC machine and do anything meaningful.

You have to move up the food chain to a C++ design with heavy operator overloading before you can get close to having the same source target both environments, if you are going to introduce HDL features into C. Standard C just lacks the native types that get introduced with the HDL features in Handel-C.

So, that leaves two distinctly different camps, each trying to use the same or similar tools for two opposite goals ... the HDL guys designing hardware, and the reconfigurable computing guys just trying to gain a faster computing platform with FPGAs.

Personally, I'm comfortable using VHDL/Verilog for HDL work and a fairly generic C-to-netlist tool (FpgaC) for general reconfigurable computing, and a mix of tools for gluing projects together (SoCs).

The which HDL is better debate is pretty much preference and requirements based, and impossible to win as a general case.

I do think we will see HLLs that target particular technologies that are well defined and difficult to code easily ... the whole pipelined data path problem for distributed arithmetic and filters is already shaping up that way with core generators (which are in fact simple forms-based HLLs).

Reply to
air_bits

If by ASH you mean the application specific hardware project run by Seth Goldstein and Mihai Budiu (until he graduated) at CMU, then you have gotten the wrong impression. I spent a semester in that research group, and they most certainly do not intend to replace HDLs with C. They have developed some very interesting compiler technologies that can generate surprisingly efficient circuits from nearly arbitrary C code, but even they wouldn't claim that C is an appropriate replacement for HDLs in all cases. They are spinning their compiler as a tool that can be used by many more people than traditional HDL synthesis tools, but the quality of the circuits they produce is still far from optimized HDL-based designs.

Benjamin

Reply to
Benjamin Ylvisaker

Ray,

OK, maybe I should rephrase what I said, reading it again myself I don't quite agree with it :). What I meant was that there are tools out there that can map HLL well to hardware. I didn't mean to suggest that they are ANSI C. I realise it's a bit of an abuse of the language to describe these things as C, but I tend to describe the C-inspired languages of these tools as C, and talk about ANSI C when I want to make it clear I'm talking about canonical C.

It does seem that most of the tools out there are extensions to C. I should say at this point that I've never used any of the tools that have been discussed so far on this board, so I'll leave it to someone else to talk about how close they are to C. I'm a research engineer based in Nallatech, and I've been working with a tool being developed there, DIME-C. I can safely say that DIME-C is to all intents and purposes a subset of C, so everything you do in it can be compiled with a gcc compiler. You can't have pointers, and you have to go round the houses sometimes to avoid breaking your pipelines, but it's definitely recognisable as C. If anyone is particularly interested, I could send them some examples of the code. I don't want to bring DIME-C into the debate though, I'm interested in finding out more about what's out there, rather than in doing marketing :)
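To give a flavour of the style I mean - this is just plain C written in a restricted, pointer-free way, not literal DIME-C, and the function is invented purely for illustration:

    /* A moving average written without pointers and with fixed loop
       bounds, so it still compiles under gcc yet stays inside the kind
       of subset a C-to-hardware tool can pipeline.  Shifting the
       window element by element is the "going round the houses" part:
       in hardware it becomes a shift register. */
    #define WIN 4

    int moving_average(int window[WIN], int new_sample)
    {
        int sum = 0;
        for (int i = WIN - 1; i > 0; i--)   /* shift register in C */
            window[i] = window[i - 1];
        window[0] = new_sample;
        for (int i = 0; i < WIN; i++)
            sum += window[i];
        return sum / WIN;
    }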

Cheers,

Robin

Reply to
Robin Bruce
