Finally! A Completely Open Complete FPGA Toolchain

This is not really a FOSS-versus-closed-software issue (despite the thread title). Bitstream information in FPGAs is not really suitable for /any/ third party - it does not matter much whether their development is open or closed. When an FPGA company makes a new design, the details flow automatically from the FPGA design into the placer/router/generator software - the information content and level of detail are far too high to handle sensibly through documentation or any other interchange between significantly separated groups.

Though I have no "inside information" about how FPGA companies do their development, I would expect there is a great deal of back-and-forth between the hardware designers, the software designers, and the groups running simulations to figure out how well the devices work in practice. With a CPU design, the ISA is at least mostly fixed early in the design process, and the chip can be simulated and tested without compilers or anything more than a simple assembler; for FPGAs, the bitstream will not be solidified until the final hardware design is complete, and you are totally dependent on the placer/router/generator software while doing the design.

All this means that it is almost infeasible for anyone to make a sensible third-party generator, at least for large FPGAs. And the FPGA manufacturers have to make such tools anyway. At best, third parties (FOSS or not) can hope to make limited bitstream models of a few small FPGAs, and get something that works but is far from optimal for the device.

Of course, there are many interesting ideas that can come out of even such limited tools as this, so it is still worth making them and "opening" the bitstream models for a few small FPGAs. For some uses, it is an advantage that all software in the chain is open source, even if the result is not as speed- or space-optimal. For academic use, it makes research and study much easier, and can lead to new ideas or algorithms for improving the FPGA development process. And you can do weird things:

- I remember long ago reading of someone who used a genetic algorithm on bitstreams for a small FPGA to make a filter system without actually knowing /how/ it worked!
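That style of experiment - evolving raw bitstreams with fitness measured on the real device - can be sketched in software. Below is a toy genetic algorithm over bitstrings; the fitness function is a hypothetical stand-in for "program the device and measure the circuit", and all names and parameters here are illustrative, not from any real flow.

```python
import random

def evolve(fitness, n_bits=64, pop_size=20, generations=200, seed=1):
    """Toy genetic algorithm over raw bitstrings.

    `fitness` stands in for "load the bitstream into the device and
    measure how well the resulting circuit behaves" -- the GA itself
    never needs to understand what the bits mean.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_bits)            # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Stand-in fitness: count of set bits (a real run would measure hardware).
best = evolve(fitness=sum)
print(sum(best))  # converges toward all-ones
```

The point the anecdote makes survives in the sketch: selection and mutation only ever see opaque bit patterns, so the evolved result can work without anyone knowing /how/.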

Reply to
David Brown

You are replying to the wrong person. I was not saying GCC limited the instruction set used; I was positing a reason for Walter Banks's claim that this was true. My point is that there are different pressures in compiling for FPGAs and for CPUs.

--

Rick
Reply to
rickman

I'm not clear on what is being said about speed. It is my understanding that compiler writers often consider the speed of the output and try hard to optimize it for each particular generation of processor ISA, or even for versions of processors with the same ISA. So I don't see that as being particularly different from FPGAs.

Sure, FPGAs require a *lot* of work to get routing to meet timing. That is the primary purpose of one of the three steps in FPGA design tools, compile, place, route. I don't see this as fundamentally different from CPU compilers in a way that affects the FOSS issue.

I think that is not a useful distinction. If you include all aspects of writing compilers, the ISA has to be supplemented by other information to get good output code. If you only consider the ISA, your code will never be very good. In the end, the only useful distinction between the CPU tools and the FPGA tools is that FPGA users are, in general, not as capable of modifying the tools.

--

Rick
Reply to
rickman


One could make the analogy that an FPGA's ISA is the LUT, register, ALU and RAM primitives that the mapper generates from the EDIF netlist.
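To illustrate the "LUT as instruction" analogy: encoding a Boolean function into a K-input LUT amounts to filling in a truth-table bitmask. A minimal sketch, assuming a simple input-to-index ordering (real devices fix their own, device-specific bit ordering):

```python
def lut_init(f, k=4):
    """Compute a K-input LUT's init bitmask by exhaustive truth-table
    evaluation: bit i of the mask holds f applied to the bits of i.
    (The input-to-index ordering here is an assumption for
    illustration; real devices define their own.)"""
    mask = 0
    for i in range(1 << k):
        inputs = [(i >> j) & 1 for j in range(k)]
        if f(*inputs):
            mask |= 1 << i
    return mask

# Example: a 4-input AND gate -- only index 15 (all ones) is set.
print(hex(lut_init(lambda a, b, c, d: a & b & c & d)))  # 0x8000
```

In this view, technology mapping is "instruction selection": carve the netlist into K-input cones and emit one mask per LUT.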

There is no suitable analogy for the router phase of bitstream generation. The routing resources are a hierarchy of variable-length wires in an assortment of directions (horizontal, vertical, sometimes diagonal), with pass transistors used to connect wires, sources, and destinations.
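The core idea of a router can still be sketched in miniature, even though real routing graphs are segmented and sparsely switched rather than a uniform grid. A toy Lee-style maze router, with blocked cells standing in for wires already claimed by other nets (the grid model is an illustration, not any vendor's resource graph):

```python
from collections import deque

def maze_route(grid, src, dst):
    """Lee-style BFS maze router: find a shortest path of free cells
    from src to dst. grid[r][c] is True when the routing resource is
    free. Real FPGA routing graphs are far more irregular (segmented
    wires, sparse switch boxes), but the wavefront idea is the same."""
    rows, cols = len(grid), len(grid[0])
    prev = {src: None}
    q = deque([src])
    while q:
        r, c = q.popleft()
        if (r, c) == dst:
            path = []
            node = dst
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None  # net is unroutable with the remaining resources

free = [[True] * 5 for _ in range(5)]
free[1][1] = free[1][2] = free[1][3] = False   # congestion
path = maze_route(free, (0, 0), (2, 2))
print(len(path))  # 5: the wavefront routes around the blocked cells
```

Production routers layer negotiated congestion, timing costs, and rip-up-and-retry on top of this wavefront idea.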

Timing-driven place & route is easy to express, difficult to implement. Register and/or logic replication may be performed to improve timing.
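"Easy to express, difficult to implement" can be made concrete with a toy simulated-annealing placer that minimizes half-perimeter wirelength (a crude stand-in for a real timing cost). Everything a production tool must handle - legality rules, timing graphs, incremental cost updates - is deliberately omitted, and all parameters are illustrative:

```python
import math
import random

def anneal_placement(nets, n_cells, grid, seed=0, steps=20000):
    """'Easy to express': place cells on a grid to minimize a cost,
    accepting uphill moves with probability exp(-delta/T) while the
    temperature cools. 'Difficult to implement' is everything this
    toy skips (legality, timing, incremental cost evaluation)."""
    rng = random.Random(seed)
    sites = [(x, y) for x in range(grid) for y in range(grid)]
    rng.shuffle(sites)
    pos = {cell: sites[cell] for cell in range(n_cells)}

    def cost():
        total = 0
        for net in nets:  # half-perimeter bounding box per net
            xs = [pos[c][0] for c in net]
            ys = [pos[c][1] for c in net]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
        return total

    temp = float(grid)
    for _ in range(steps):
        a, b = rng.sample(range(n_cells), 2)
        old = cost()
        pos[a], pos[b] = pos[b], pos[a]          # propose a swap
        delta = cost() - old
        if delta > 0 and rng.random() >= math.exp(-delta / temp):
            pos[a], pos[b] = pos[b], pos[a]      # reject: swap back
        temp *= 0.9995                           # cool slowly
    return pos, cost()

# A chain of 8 cells; the optimum lays them out contiguously.
nets = [(i, i + 1) for i in range(7)]
placement, wirelength = anneal_placement(nets, n_cells=8, grid=4)
print(wirelength)  # approaches the optimum of 7 for a connected chain
```

The whole algorithm fits on a page; making it scale to millions of cells with timing-driven costs is where the implementation difficulty lives.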

There are some open(?) router tools from the University of Toronto:

formatting link

Jim Brakefield

Reply to
jim.brakefield

I'd say the SDCC situation is more complex, and it seems to do quite well compared to other compilers for the same architectures. On the one hand, SDCC has always had few developers. It has some quite advanced optimizations, but on the other hand it is lacking some standard optimizations and features (SDCC's pointer analysis is not that good, we don't have generalized constant propagation yet, and some standard C features are still missing - see below, after the discussion of the ports). IMO, the biggest weaknesses are there, and not in the use of exotic instructions.

The 8051 has many variants, and SDCC currently does not support some of the advanced features available in some of them, such as 4 dptrs, etc. I do not know how SDCC compares to non-free compilers in that respect.

The Z80 is already a bit different. We use the differences in the instruction sets of the Z80, Z180, LR35902, Rabbit, TLCS-90. SDCC does not use the undocumented instructions available in some Z80 variants, and does not use the alternate register set for code generation; there definitely is potential for further improvement, but: Last time I did a comparison of compilers for these architectures, IAR was the only one that did better than SDCC for some of them.

Newer architectures supported by SDCC are the Freescale HC08, S08 and the STMicroelectronics STM8. The non-free compilers for these targets seem to be able to often generate better code, but SDCC is not far behind.

The SDCC PIC backends are not up to the standard of the others.

In terms of standards compliance, IMO, SDCC is doing better than the non-free compilers, with the exception of IAR. Most non-free compilers support something resembling C90 with a few deviations from the standard; IAR seems to support mostly standard C99. SDCC has a few gaps, even in C90 (such as K&R functions and assignment of structs). On the other hand, SDCC supports most of the new features of C99 and C11 (the only missing feature introduced in C11 seems to be UTF-8 strings).

Philipp

Reply to
Philipp Klaus Krause

Is the PIC too much of an oddball to keep up with, or is there no future in 8-bit PICs? Or are 32-bit chips more fun?

If there is a better place to discuss this, please let me know.

Reply to
hamilton

I don't know tons about the 32 bit chips which are mostly ARMs. But the initialization is more complex. It is a good idea to let the tools handle that for you. All of the 8 bit chips I've used were very simple to get off the ground.

--

Rick
Reply to
rickman

I don't consider 32-bit chips more fun. I like CISC 8-bitters, but I prefer those that seem better suited for C. Again, SDCC has few developers, and at least recently the most active ones don't seem that interested in the PICs.

Also, the situation is quite different between the pic14 and pic16 backends. The pic16 backend is not that bad. If someone put a few weeks of work into it, it could probably be brought up to the standard of the other ports in terms of correctness; it already passes large parts of the regular regression test suite. The pic14 backend would require much more work.

The sdcc-user and sdcc-devel mailing lists seem a better place than comp.arch.fpga.

Philipp

Reply to
Philipp Klaus Krause

Back to the topic of the open FPGA tool chain, I think there would be many "PICs", i.e. topics which are addressed by no / too few developers.

But the whole discussion is quite theoretical as long as A & X do not open their bitstream formats. And I do not think that they will do anything that would support an open-source solution, as software is the main entry obstacle for FPGA startups. If there were a flexible open-source tool chain with a large developer and user base that could be ported to new architectures easily, this would make it much easier for new competition. (Think gcc...)

Also (as mentioned above) I think that with the good and free tool chains from the suppliers, there would not be much demand for such an open-source tool chain. There are other areas where I would see more motivation, and even there not much is happening:

- A good open-source Verilog/VHDL editor (yes, I have heard of Emacs...), as the integrated editors are average (Altera) or bad (Xilinx). (Currently I am evaluating two commercial VHDL editors...)

- A kind of graphical editor for VHDL and Verilog, as the top/higher levels of bigger projects are often a pain IMHO (like writing netlists by hand). I would even start such a project myself if I had the time...

But even with such things, where I think there would be quite some demand, the "critical mass" of the FPGA community is too low to get projects started and especially to keep them running.

Thomas

Reply to
thomas.entner99


One big factor against an open-source tool chain is that while the FPGA vendors describe the routing inside the devices in general terms, the precise details are not given, and I suspect that these details may be considered part of the "secret sauce" that makes the device work. The devices have gotten so big and complicated that it is impractical to use fully populated muxes, and how you choose what connects to what is important.

Processors can also have little details like this, but for processors they tend to affect only the execution speed, and a compiler that doesn't take them into account can still do a reasonable job. For an FPGA, without ALL of these details you can't even do the routing.

Reply to
Richard Damon

I'm not sure what details of routing aren't available. There may not be a document which details it all, but last I saw, there were chip level design tools which allow you to see all of the routing and interconnects. The delay info can be extracted from the timing analysis tools. As far as I am aware, there is no "secret sauce".

Timing data in an FPGA may be difficult to extract, but otherwise I think all the routing info is readily available.

--

Rick
Reply to
rickman

My experience is that you get to see what location a given piece of logic is placed at, and which channels its signals travel through. You do NOT see which particular wire in that channel is being used. In general, each logic cell does not have routing to every wire in that channel, and every wire does not have access to every cross wire. These details tend to be the secret sauce: when they do it well, you aren't supposed to notice the incomplete connections.

I have had to work with the factory on things like this. I had a very full FPGA and needed to make a small change. With the change I had some badly congested routing, but if I removed all internal constraints the fitter couldn't find a fit. Working with someone who did know the details, we were able to relax just a few internal constraints and get the design to fit. He did comment that my design was probably the fullest he had seen in the wild; we had grown to about 95% logic utilization.

Reply to
Richard Damon

Don't they still have the chip editor? That *must* show everything of importance.

Yeah, that's pretty full. I start to worry around 80%, but I've never actually had one fail to route other than the ones I tried to help by doing placement, lol.

--

Rick
Reply to
rickman

The chip editors tend to show just the LOGIC resources, not the details of the routing resources. The manufacturers tend to do a good job of documenting the logic blocks you are working with, as this is the part of the design you specify. Routing, on the other hand, tends not to be something you care about, beyond the fact that it 'works'. When they have done a good job designing the routing you don't notice it, but there have been cases where the routing turned out not quite flexible enough, and you notice that you can't fill the device as well before hitting routing issues.

They suggest that you consider 75-80% to be "full". This design started at the 70% level, but we were adding capability to the system and the density grew. (And we were already using the largest chip for the footprint.) Our next step was to redo the board and get the usage back down. When we hit the issue we had a mostly working design and were fixing the one last bug, and that was when the fitter threw its fit.

Reply to
Richard Damon

I'm not sure what details of the routing the chip editors leave out. You only need to know what is connected to what, through what and what the delays for all those cases are. Other than that, the routing does just "work".

The "full" utilization number is approximate because it depends on the details of the design. Some designs can get to higher utilization numbers, others less. As a way of pointing out that the routing is the part of the chip that uses the most space while the logic is smaller, Xilinx sales people used to say, "We sell you the routing and give you the logic for free." The point is that the routing usually limits your design rather than the logic. If you want to be upset about utilization numbers, ask them how much of your routing gets used! It's *way* below 80%.
--

Rick
Reply to
rickman

If you're trying to implement an open source toolchain you would likely need to know *how* to specify those connections via the programming bitstream.

Kevin

Reply to
KJ

Look closely. The chip editor will normally show you the exact logic element you are using, with a precise location. The output will then go out into a routing channel and on to the next logic cell(s). It may even show you the various rows and columns of routing it is going through. Those rows and columns are made up of a (large) number of distinct wires, with routing resources connecting outputs to select lines and select lines being brought into the next piece of routing/logic. Which wire is being used will not be indicated, nor are all the wires interchangeable, so which wire can matter for fitting. THIS is the missing information.

And this is why they keep the real details of the routing proprietary (not to keep you from getting upset). The serious design work goes into figuring out how much routing they really need per cell. If they could figure out a better allocation that let them cut the routing per cell by 10%, they could give you 10% more logic for free. If they goof and provide too little routing, you see the resources you were sold (since they advertise the logic capability) as being wasted by some 'dumb design limitation'. There have been families that got black eyes for having routing problems and were thus to be avoided for 'serious' work.
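The "which wire in the channel" problem can be phrased as bipartite matching: each net's driver reaches only a subset of the channel wires, and a fit exists only if every net can claim a distinct reachable wire. A sketch using classic augmenting-path matching, with a made-up connectivity pattern (the real switch patterns are exactly what the vendors keep proprietary):

```python
def max_matching(reach):
    """Augmenting-path bipartite matching: assign each net to one
    channel wire it can physically reach. reach[net] lists the wires
    that net's driver can connect to -- the sparse switch pattern the
    vendor doesn't publish. If the matching is smaller than the number
    of nets, the channel is full even though free wires remain."""
    wire_of = {}  # wire -> net currently assigned

    def assign(net, seen):
        for wire in reach[net]:
            if wire in seen:
                continue
            seen.add(wire)
            # Take a free wire, or try to rehome the current occupant.
            if wire not in wire_of or assign(wire_of[wire], seen):
                wire_of[wire] = net
                return True
        return False

    return sum(assign(net, set()) for net in range(len(reach)))

# Hypothetical channel: 4 wires exist, but each net reaches only 2.
reach = [[0, 1], [0, 1], [0, 1]]   # three nets contend for wires 0 and 1
print(max_matching(reach))  # 2 -- one net fails even though wires 2,3 are free
```

This is why a design can be "full" at well under 100% wire utilization: the limit is the sparse connectivity, not the raw wire count.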

Reply to
Richard Damon

Well... yeah. That's the sticky wicket, knowing how to generate the bitstream. I think you missed the point of this subthread.

--

Rick
Reply to
rickman

I can't speak with total authority since I have not used a chip editor in a decade. But when I have used them they showed sufficient detail that I could control every aspect of the routing. In fact, it showed every routing resource in sufficient detail that the logic components were rather small and were a bit hard to see.

When you say which wire is used is not shown, how would you be able to do manual routing if the details are not there? Manual routing and logic is the purpose of the chip editors, no?

I don't follow the logic. There are always designs that deviate from the typical utilization in both directions. What details you can or can't see in the chip editor has nothing to do with user satisfaction, since you can read the utilization numbers in the reports without needing to see any routing, etc.

--

Rick
Reply to
rickman

A comment:

All this information sounds like it can be teased out of the physical chips. But, I find there are other considerations. First, doing this might jeopardize the FPGA manufacturer. It's no good to have a FOSS toolchain but no FPGAs to use it on. Second, there is the problem of a FPGA manufacturer releasing a small tweak that would invalidate the entire work done. Third, and this is where it gets interesting, the time and effort spent reverse-engineering a great number of FPGA models is probably better spent engineering a FOSH ASIC toolchain together with the assorted manufacturing technology. Because - honestly - if you are willing to program FPGAs, you are really not very far away from forging ASICs, are you?

Speaking for myself, I'm working alone on FPGAs far away from silicon powerhouses and I have to jump through hoops and loops to get the chips. Jumping through hoops and loops to get my design forged into an ASIC is not really that different.

Reply to
Aleksandar Kuktin
