Folks doing monetary stuff commonly use ~128-bit integers under the hood. Even when scaled 64-bit numbers are sufficient for all stored values, intermediate results are often bigger.
The COBOL folks, who, if nothing else, care about handling currency, have since the sixties specified ~64-bit integers (actually the requirement is that the user can define types with 18 decimal digits and a sign), with intermediates of twice that size; more recent standards have doubled those requirements.
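To make the intermediate-size point concrete, here's a minimal sketch (Python is used only because its integers are arbitrary-precision, so the overflow is easy to demonstrate):

```python
# Two 18-digit scaled-integer amounts (e.g. currency held as
# fixed-point integers with an implied decimal scale).
a = 10**18 - 1  # largest 18-digit value
b = 10**18 - 1

product = a * b  # an intermediate result, before rescaling

print(product.bit_length())         # 120: far beyond a signed 64-bit field
print(product > 2**63 - 1)          # True: would overflow int64
print(product.bit_length() <= 127)  # True: fits a signed 128-bit integer
```

So even though every *stored* value fits in 64 bits, one multiply of two maximal 18-digit values already needs roughly twice that width.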
Silly boy. You are so stuck in your rut. The point is you can change the contents of the LUT from the design without needing to reconfigure the device.
IF you were to spend some time with the existing FPGAs and the existing tools to actually learn something about them, then perhaps you might realize that.
Yes, and if pigs had wings they could fly. Do you want to solve a problem or daydream your time away?
I believe he can. Doesn't an inventor have a year beyond public disclosure to file? All he has to do is "prove" he invented it first, something like notes in a notebook... not hard to fabricate.
COBOL people often store numeric data as one ASCII/EBCDIC numeric character per byte or as two BCD digits per byte. Thus there is quite a bit of flexibility in allocating storage, for permanent data as well as for intermediate results.
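As an illustration of the two-BCD-digits-per-byte ("packed") form mentioned above, a minimal Python sketch (the function name is mine, not any COBOL runtime's; sign handling is omitted):

```python
def pack_bcd(digits: str) -> bytes:
    """Pack a decimal string two digits per byte (unsigned BCD)."""
    if len(digits) % 2:          # pad to an even digit count
        digits = "0" + digits
    return bytes(
        (int(hi) << 4) | int(lo)
        for hi, lo in zip(digits[::2], digits[1::2])
    )

print(pack_bcd("1234").hex())     # '1234' -- each nibble is one digit
print(len(pack_bcd("12345678")))  # 4 bytes for 8 digits
```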
That sounds like the equivalent of a table-driven filter interpreted by a fixed software program. Native compilation gives much higher performance. You'll always be able to synthesize a smaller circuit for a single function than a universal one. And since it's smaller, you can replicate more of them across the same sized device.
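The software side of that analogy, sketched in Python (the names and the rule format are mine, purely for illustration): a generic rule-table interpreter versus a function specialised to one fixed rule, the way a synthesized circuit is specialised to one function.

```python
# A "universal" filter: interprets a table of (field, value) rules
# at run time, like a fixed program walking a LUT.
def interpreted_match(packet: dict, rules: list) -> bool:
    return all(packet.get(field) == value for field, value in rules)

# The "synthesized" equivalent: the same single rule compiled into
# straight-line code, with no table walk at run time.
def specialised_match(packet: dict) -> bool:
    return packet.get("proto") == "tcp" and packet.get("port") == 80

rules = [("proto", "tcp"), ("port", 80)]
pkt = {"proto": "tcp", "port": 80}
print(interpreted_match(pkt, rules))  # True
print(specialised_match(pkt))         # True, without consulting a table
```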
I'm always interested in cool technology, but am I trying to build FPGA-based packet filters right this minute--no I'm not. If I were a manufacturer committed to shipping a particular product that needed FPGA's, I'd have no choice but to go along with the current vendor offerings. If I were a wannabe-entrepreneur casting around for product concepts without already being committed to anything in particular, I'd consider the need for closed-source software to be a significant disincentive against using a technology. So if I couldn't use FOSS FPGA tools, maybe I'd just choose to build something without FPGA's. The hardware product I'd like right now is a compact outlet strip with about
20 outlets instead of the usual 6 or 7. I'm not the guy to build it, but it doesn't exist right now, there's an obvious market for it, and making it doesn't need FPGA tools or any other closed source software.
In reality, FPGA's are not like super-CPU's in terms of complexity. They're more like memory, i.e. a sea of relatively simple, replicated structures. So my current theory is that FPGA vendors tenaciously try to prevent FPGA's from turning into commodity devices like memory, which would be a natural outcome of stuff being open. They instead want to sell them as super-powerful devices more complicated than CPU's. I can understand the business motivation for that, but it seems ripe for disruption.
There's sort of a similar situation with GPU's. For a long time the instruction sets for those were proprietary, but they've been opening up. So I'm holding out for the same thing to happen with FPGA's.
The workstation would have to be inside the firewall, in which case the installation's security policy might prohibit running non-FOSS software on it.
In that case you've added a layer of interpretation that kills the performance you were hoping to get by using an FPGA in the first place. You might as well program a real CPU with normal software.
Which are also integers, although clearly not the binary variety. But almost all Cobol implementations do support binary forms, and will pick 1/2/4/8/16 byte binary fields as needed to store the requested data (IOW, if you define a field as "PIC 9(7) BINARY", you'd usually get a four-byte binary field; "PIC 9(10) BINARY" would get you an eight-byte field*).
In any event the size requirements for intermediates are spelled out as part of the arithmetic performance requirements, and are not usually directly visible to the programmer.
*And those would get four and six byte "PACKED" and seven and ten byte "DISPLAY" format fields.
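The sizes in the footnote follow simple rules; here is a hedged Python sketch of the usual mapping (assuming signed binary fields, a trailing sign nibble for packed decimal, and an overpunched sign for DISPLAY, consistent with the examples above):

```python
def binary_bytes(digits: int) -> int:
    """Smallest 1/2/4/8/16-byte signed binary field holding 10**digits - 1."""
    for size in (1, 2, 4, 8, 16):
        if 10**digits - 1 <= 2**(8 * size - 1) - 1:
            return size
    raise ValueError("too many digits")

def packed_bytes(digits: int) -> int:
    """Packed decimal: one nibble per digit plus a sign nibble, rounded up."""
    return (digits + 2) // 2

def display_bytes(digits: int) -> int:
    """DISPLAY format: one character per digit (sign overpunched)."""
    return digits

# PIC 9(7):  4-byte binary, 4-byte packed, 7-byte display
print(binary_bytes(7), packed_bytes(7), display_bytes(7))     # 4 4 7
# PIC 9(10): 8-byte binary, 6-byte packed, 10-byte display
print(binary_bytes(10), packed_bytes(10), display_bytes(10))  # 8 6 10
```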
I thought int_least64_t was only required if the compiler has a "long long" of at least 64 bits, and that you could be C99-compliant without supporting anything bigger than 32 bits. But I'm not going to argue here (that's what comp.lang.c is for!), because I don't think it makes any practical difference.
Usually not. A modern Haswell processor will still run 8086 code. And FPGA's are not as complex as microprocessors in terms of internal architecture. They could just issue a new manual for each one.
VHDL and Verilog aren't anything like assembler. They're more like SQL in terms of the abstraction levels between the source code going into the compiler and the bits coming out.
Yes, and you can then compile the C with a FOSS compiler that makes machine code. So if there's a comparable FOSS VHDL or Verilog compiler that makes bit streams, I'm ok with your solution. But there is none, which is the whole issue here.
Some bit stream formats have in fact been reverse engineered, though apparently more with an eye towards cloning existing designs than compiling new ones.
Not necessarily. Probably /a/ workstation does, but that could VPN to one with the FOSS software. The whole environment has not been defined for us, so we are free to imagine anything.
I think there might be just a few other reasons why it might not work in practice!
You don't have enough information to say that. Particularly since "interpretation" can mean a very wide range of techniques.
Sure, VHDL is very different from asm; analogies are always dangerous.
Nonetheless the analogy can be useful and helpful if not pushed too far.
SQL is the abstraction providing stability across changes inside the DB.
ASM is the abstraction providing stability across changes inside the processor.
VHDL is the abstraction providing stability across changes inside the FPGA.
I'm more interested to learn about the extent to which FPGAs can modify their own internals. Hopefully rickman will provide pointers.
His point being that you can't achieve nearly the same level of complexity/usefulness if you just design the FPGA to do what you would otherwise do using "live" uploads. I think he is right.
It may very well be getting a bit silly for the FPGA market. I freely admit I don't know much about that market at the moment; it's just that I have seen the behaviour I describe in other, unrelated areas.
Oh, yes. :-)
Simon.
--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world