Finally! A Completely Open Complete FPGA Toolchain

Perhaps, or it was just a matter of time. Clearly the business model works, and I think it was inevitable. MCU vendors understand the importance and pay for tools to give away. Why not give away a $100 tool or even a $1000 tool if it will get you many thousands of dollars in sales? It's the tool vendors who I expect have the bigger problem with this model.

For FPGAs the funny part is I was told a long time ago that Xilinx spends more on the software than they do designing the hardware. The guy said they were a software company making money selling the hardware they support.

Free market? I'm talking about company internal management. It is so easy to track every penny, but hard to track your time to the same degree. Often this is penny wise, pound foolish, but that's the way it is. I'm clear of that now by working for myself, but I still am happier to spend my time than my money, lol.

--

Rick
Reply to
rickman

Here's one example: during development, I'm targeting an FPGA that's several times larger than it needs to be, and the design has plenty of timing margin. So why in the name of Woz do I have to cool my heels for 10 minutes every time I tweak a single line of Verilog?

If the tools were subject to community development, they probably wouldn't waste enormous amounts of time generating 99.9% of the same logic as last time. Incremental compilation and linking is ubiquitous in the software world, but as usual the FPGA tools are decades behind. That's the sort of improvement that could be expected with an open toolchain.

It's as if Intel had insisted on keeping the x86 ISA closed, and you couldn't get a C compiler or even an assembler from anyone else. How much farther behind would we be? Well, there's your answer.

-- john, KE5FX

Reply to
John Miles

Don't know about Intel, but I seem to recall that Xilinx tools have incremental compilation. Maybe they have dropped that. They dropped a number of things over the years such as modular compilation which at one point a Xilinx representative swore to me was in the works for the lower cost Spartan chips and would be out by year end. I think that was over a decade ago.

Even so, there are already FOSS HDL compilers available. Do any of them offer incremental compilation?

I believe the P&R tools can work incrementally, but again, maybe that is not available anymore. You used to be able to retain a portion of the routing and keep working on the rest over and over. I think the idea was to let you have a lot of control over a small part of the design and then let the tool handle the rest on autopilot.

--

Rick
Reply to
rickman

If there's a way to do it in the general case I haven't found it. :( I wouldn't be surprised if they could leverage *some* previous output files, but there are obviously numerous phases of the synthesis process that each take a long time, and they would all have to play ball.

Mostly what I want is an option to allocate extra logic resources beyond what's needed for a given build and use them to implement incremental changes to the design. No P&R time should be necessary in about 4 out of 5 builds, given the way my edit-compile-test cycles tend to work. I'm pretty sure there's no way to tell it to do that. It would be nice to be wrong.

-- john, KE5FX

Reply to
John Miles

I'm not sure what that means, "allocate extra logic resources" and use them with no P&R time...? Are you using the Xilinx tools?

--

Rick
Reply to
rickman

One key difference here is that gcc is written in C (and now some C++), and its main users program in C and C++. Although compiler design and coding is a different sort of programming from what most of gcc's users do, there is still a certain overlap and familiarity - the barrier for going from user to contributor is smaller with gcc than it would be for a graphics artist using GIMP, a writer using LibreOffice, or an FPGA designer using these new tools.

The key challenge for open source projects like this is to develop a community of people who understand the use of the tools, and understand (and can contribute to) the coding. Very often these are made by one or two people - university theses are common - and the project dies away when the original developers move on. To be serious contenders for real use, you need a bigger base of active developers and enthusiastic users who help with the non-development work (documentation, examples, testing, support on mailing lists) - MyHDL is an example of this in the programmable logic world.

Reply to
David Brown

I don't see the big difference from compilers targeting microcontrollers here. There are plenty of older FPGA types, such as the Xilinx XC9500, still in use. A free toolchain for them would be useful, and having advanced optimizations would be beneficial there as well. On the microcontroller side, SDCC also targets mostly older architectures, plus a few newer ones, such as the Freescale S08 and STMicroelectronics STM8. You don't need every user to become a developer. A few are enough.

Philipp

Reply to
Philipp Klaus Krause

Incremental synthesis/compilation is supported by both Xilinx (ISE and Vivado) and Altera (Quartus) tools, even in the latest versions. One needs to use the appropriate switches/options. Of course, their definition of incremental compile/synthesis may not match exactly with yours. They tend to support more at the block level using partitions etc.

Reply to
Sharad

Few matter. How many ISA designers are there? Yet, if they get good tools that let them creatively hack out the solution, we're all better off. Same with random dudes banging on some FPGA somewhere. You never know where the next thing you want will appear, and having good peer-reviewed tools creates more potential for good stuff to be made.

Maybe you just didn't try hard enough? Maybe you did but didn't notice you found a gaping bug in vendor tools.

Maybe you would be able to generate an FPGA handheld device that can reconfigure itself on the fly. Like a smartphone^H^H^H^H^H^H^H^H^H^H PDA^H^H^H trikoder that runs on some energy-efficient MIPS and that has a scriptable (meaning CLI) synthesizer that you can feed random Verilog sources, and then instantiate an Ethernet device so you can jack yourself in while at home, an FM radio to listen to while driving down the road, a TV receiver with HDMI output so you can view the news, and maybe a vibrator or something for the evening.

Anyway, that's what I want to have and can't right now but COULD have with FOSS tools (since I'm not gonna use QEMU to instantiate a VM so I could synthesize on my phone).

Okay, now we need to check out SDCC.

But they will happen nevertheless.

Reply to
Aleksandar Kuktin

With GCC, Linux and their ilk, it's actually the other way around. They add support for new CPUs before the new CPUs hit the market (x86_64 being one example). This is partially due to hardware producers understanding that they need toolchain support and working actively on getting that support. If even a single FOSS FPGA toolchain gets to a similar penetration, you can count on FPGA houses paying their own people to hack on those leading FOSS toolchains, for the benefit of all.

Reply to
Aleksandar Kuktin

How will any FPGA toolchain get "a similar penetration" if the vendors don't open the spec on the bitstream? Do you see lots of people coming together to reverse engineer the many brands and flavors of FPGA devices to make this even possible?

Remember that CPU makers have *always* released detailed info on their instruction sets because it was useful even if, no, *especially if* coding in assembly.

--

Rick
Reply to
rickman

(snip)

OK, but that is relatively (in the life of gcc) recent.

The early gcc were replacements for existing C compilers on systems that already had C compilers.

Only the final stage of processing needs to know the real details of the bitstream. I don't know the current tool chain so well, but it might be that you could replace most of the steps and use the vendor-supplied final step.

Remember the early gcc before glibc? They used the vendor supplied libc, which meant that it had to use the same call convention.

If FOSS tools were available, there would be reason to release those details. But note that you don't really need bit-level detail to write assembly code, only to write assemblers. You need to know how many bits there are (for example, in an address), but not which bits.

Now, most assemblers do print out the hex codes, but most often that isn't needed for actual programming, only sometimes for debugging.

-- glen

Reply to
glen herrmannsfeldt

We have had open source compilers and simulators for some time now. When will the "similar penetration" happen?

I'm not sure what your point is. You may disagree with details of what I wrote, but I don't get what you are trying to say about the topic of interest.

--

Rick
Reply to
rickman

This thread implies that a bitstream is like a processor ISA, and to some extent it is. FOSS tools have so far avoided the minor variations in processor ISAs, preferring to use a subset of the instruction set to support a broad base of processors in a family.

The problem in some FPGA devices is that a much larger set of implementation rules is required to produce effective implementations. This doesn't say that FOSS can't or won't do it, but it would require much closer attention to detail than the FOSS tools I have looked at provide.

w..

Reply to
Walter Banks

I don't know for sure, but I think this is carrying the analogy a bit too far. If FOSS compilers for CPUs have mostly limited their output to subsets of instructions to make the compiler easier to code and maintain, that's fine. Obviously the pressure to further optimize the output code just isn't there.

I have no reason to think the tools for FPGA development don't have their own set of tradeoffs and unique pressures for optimization. So it is hard to tell where they will end up if they become mainstream FPGA development tools which I don't believe they are currently, regardless of the issues of bitstream generation.

I can't say just how important users find the various optimizations possible with different FPGAs. I remember working for a test equipment maker who was using Xilinx in a particular product. They did not want us to code the unique HDL patterns required to utilize some of the architectural features because the code would not be very portable to other brands which they might use in other products in the future. In other words, they didn't feel the optimizations were worth limiting their choice of vendors in the future.
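
To make that portability trade-off concrete, here is a minimal Verilog sketch (the module and signal names are hypothetical, not from the product discussed above): generic RTL like this lets either vendor's synthesizer infer a block RAM on its own, whereas instantiating a vendor primitive directly (a Xilinx RAMB* block or an Altera altsyncram, for example) would tie the design to one family in exchange for control over family-specific features.

// Hypothetical portable single-port RAM, written so that both Xilinx and
// Altera synthesis tools can map it to block RAM by inference.
module generic_ram #(
    parameter ADDR_W = 10,              // 1K words
    parameter DATA_W = 8
) (
    input  wire              clk,
    input  wire              we,
    input  wire [ADDR_W-1:0] addr,
    input  wire [DATA_W-1:0] din,
    output reg  [DATA_W-1:0] dout
);
    reg [DATA_W-1:0] mem [0:(1<<ADDR_W)-1];

    always @(posedge clk) begin
        if (we)
            mem[addr] <= din;           // synchronous write
        dout <= mem[addr];              // registered read, needed for block RAM inference
    end
endmodule

The vendor primitive buys finer control over family-specific features, which is exactly the kind of optimization the company above decided was not worth the lock-in.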

I guess that is another reason why the FPGA vendors like having their own tools. They want to be able to control the optimizations for their architectural features. I think they could do this just fine with FOSS tools as well as proprietary, but they would have to share their code which the competition might be able to take advantage of.

--

Rick
Reply to
rickman

As one of the GCC maintainers, I can tell you that the opposite is true. We take advantage of everything the ISA offers.

Reply to
DJ Delorie

My guess is that Walter's experience here is with SDCC rather than gcc, since he writes compilers that - like SDCC - target small, awkward 8-bit architectures. In that world there are often many variants of the cpu - the 8051 is particularly notorious - and getting the best out of these devices often means making sure you use the extra architectural features your particular device provides. SDCC is an excellent tool, but as Walter says it works with various subsets of the ISAs provided by common 8051, Z80, etc., variants. The big commercial toolchains for such devices, such as those from Keil, IAR and Walter's own Bytecraft, provide better support for the range of commercially available parts.

gcc is in a different world - it is a much bigger compiler suite, with more developers than SDCC, and a great deal more support from the cpu manufacturers and other commercial groups. One does not need to dig further than the manual pages to see the huge range of options for optimising use of different variants of the many targets it supports - including not just use of differences in the ISA, but also differences in timings and instruction scheduling.

Reply to
David Brown

But the point is the ISA is the software-level API for the processor. There's a lot more fancy stuff in the microarchitecture that you don't get exposed to as a compiler writer[1]. The contract between programmers and the CPU vendor is that the vendor will implement the ISA API, and software authors can be confident their software will work.[2]

You don't get exposed to things like branch latency, pipeline hazards, control flow graph dependencies, and so on, because microarchitectural techniques like branch predictors, register renaming and out-of-order execution do a massive amount of work to hide those details from the software world.

The nearest we came was VLIW designs like Itanium, where more microarchitectural detail was exposed to the compiler - which turned out to be very painful for the compiler writer.

There is no such API for FPGAs - the compiler has to drive the raw transistors to set up the routing for the exact example of the chip being programmed. Not only that, there are no safeguards - if you drive those transistors wrong, your chip catches fire.

Theo

[1] There is a certain amount of performance tweaking you can do with knowledge of caching, prefetching, etc. - but you rarely have a problem of functional correctness; the ISA is not violated, even if the code runs slightly slower.

[2] To a greater or lesser degree - Intel takes this to extremes, supporting binary compatibility of OSes back to the 1970s; ARM requires the OS to co-evolve, but userland programs are (mostly) unchanged.

Reply to
Theo Markettos

As you note below, that is true regarding the functional execution behaviour - but not regarding the speed. For many targets, gcc can take such non-ISA details into account as well as a large proportion of the device-specific ISA (contrary to what Walter thought).

Indeed. The bitstream, and the match between configuration bits and functionality in an FPGA, do not really correspond to a cpu's ISA. They are at a level of detail and complexity that is /way/ beyond an ISA.

Reply to
David Brown

That frames the point I was making about bitstream information. My limited understanding of the issue is that getting the bitstream information correct for a specific part goes beyond making the internal interconnects functional, and extends to issues of timing, power, gate position and data loads.

This is not to say that FOSS couldn't or shouldn't do it, but it would change a lot of things in both the FOSS and FPGA worlds. The chip companies have traded speed for detail complexity, in the same way that speed has been traded for ISA use restrictions (specific instruction combinations) in many of the embedded-system processors we have supported.

w..

Reply to
Walter Banks
