Small, fast, resource-rich processor

I'm just going to make one more point and then I'm done with this topic. Floating point is just fixed point with a range extending multiplier added. Anything you can do with fixed point you can do with floating point. The opposite is *not* true.

--

Rick
Reply to
rickman

Not sure what that means. It is hard to get revenue from free tools... but wait! That is exactly what they do. They give away free tools and get revenue from the chip sales.

I paid for my Lattice tools, so they appear to be obligated to renew the license for free every year. It is a PITA and has bitten me in the butt a number of times, but they always ship me a new license file... in fact, several license files. Every time I use a new computer they license the new one and still send me license files for the old ones too.

There may be problems with the licensing in the future if they decide to abandon all support for the tool. But that is not a concern to me as I don't wish to use a tool beyond its expiration date. That reminds me, I have a bunch of very old Xilinx software someone wanted me to ship to them.

Also if you have any ideas of what you would like to see in an FPGA board I would be interested in hearing about it. Making them would be right up my alley.

--

Rick
Reply to
rickman

Here is something you can do with fixed-point that you can't do with floating point: perform a multiply-accumulate with 64 bits of precision.

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com
Reply to
Randy Yates

(snip, someone wrote)

(then I wrote)

As long as you stay within range, that should be true for add, subtract, and multiply. On many machines, the product of fixed point multiply goes to a full double width, while a double precision floating point value has fewer significant bits. (Some are used for the exponent.)

But things change for divide. Many fixed point algorithms depend on divide to truncate, and often use the remainder (modulo). If you have floating point that rounds, it is not so easy to get the appropriate truncated quotient and remainder. If you have floating point with unknown rounding mode, it is even harder.

-- glen

Reply to
glen herrmannsfeldt

That is presuming a few things; for one, that the intermediate accumulation is done with more than 64 bits.

My point is that, for certain operations (i.e., MACs), if you want the best possible precision, use 64-bit fixed-point, not floating-point, presuming the range of a 64-bit integer suffices.

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com
Reply to
Randy Yates

Correction: "e.g., MACs".

-- Randy Yates Digital Signal Labs

formatting link

Reply to
Randy Yates

with apologies to Tim, who started a nice thread. but it's time, i think, to euthanize it.

maybe not. some threads go on.

rickman writes:

promise?

and neither is the non-opposite true. if i were Vlad, i might point out in some undiplomatic language that this is a naive belief.

that's processor specific, i think, ain't it, Randy? i mean, a SHArC can do 32x32 to 64 bits. does that count?

here's something that's not processor specific:

moving sum or moving average filter implemented as the simplest non-trivial TIIR (Truncated IIR filter, which is theoretically an FIR).

do it in float and you'll get little turds stuck in this critically-stable feedback loop. those turds accumulate, given enough time. do it in fixed and you *know* that you are subtracting exactly what you added N samples prior to that. and you know what you are adding now will eventually be perfectly subtracted later.

--

r b-j                  rbj@audioimagination.com 

"Imagination is more important than knowledge."
Reply to
robert bristow-johnson

well, if you are saying "in one cycle," then yes, it's processor specific. I just meant generally, using standard C data types (especially the stdint.h stuff like int16_t, int32_t, int64_t, etc.) that should be available on whatever processor.

another good example.

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com
Reply to
Randy Yates

(snip, someone wrote)

If you truncate enough low bits before the sum, then it should work, but yes, if you aren't careful adding and subtracting, it is easy to have something left over.

On many processors, it isn't so easy to truncate low bits on a floating point value.

-- glen

Reply to
glen herrmannsfeldt

Hmm, well I just learned something I didn't know. According to the C99 spec, the integer types I mentioned above are not required.

However,

int_least8_t, int_least16_t, int_least32_t, int_least64_t

are.

But to do a 64-bit MAC with reasonable precision you would need an int_least128_t, which is not required. So I guess this is processor specific.

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com
Reply to
Randy Yates

How on earth can you say it's wide open, if I can't figure out what bits to send into the chip to perform a given function? Why are they more protective than CPU vendors? CPU's have manuals that specify the instruction set in enough detail that I can write a compiler and generate my own binaries to load into the CPU. With FPGA's it sounds like I have to use the vendor compiler.

What if I want to write my own compiler, say to use the fpga as a reconfigurable processor?

If they were willing to share the info with me for the purposes I have in mind (maybe using Kansas Lava, a FOSS compiler for FPGA's), they would publish the info and just sell me the hardware.

They also may be dependent on specific cpu architectures. Maybe I want to build an embedded gadget that generates fpga code on the fly and loads it into the fpga. I can't run their Windows tools in the gadget.

Maybe there is a technical reason why they don't do that.

Reply to
Paul Rubin

I dunno. I tossed a nice little fishy into a pond.

Now its descendants are growing legs to crawl on land with, and teeth to hunt down cute little bunny rabbits with.

Isn't evolution supposed to be a good thing?

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

You're presuming that floating point ends at 64-bit. There are higher (even arbitrary) precision floating point standards out there.

Just sayin'

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

Well, you just cannot write your own tool to produce the bitstream; this has been the case for the last 25 years or so (for as long as Xilinx has existed).

Why they play it so closed I don't know; it must be about some sort of control. At which level, and over exactly what, I don't know either - I have not lost that much sleep thinking about it, really. But they certainly take no chances when it comes to control over the tools people use with their hardware. Here is an email exchange of mine from 13 years ago (when I was still naive enough to go into that, but then I must have got some amusement in return, as you can see - it does not get a lot more moronic than that....):

formatting link

Of course they would if they wanted to and of course they do not. I have been saying this in a number of threads also on comp.arch.fpga, anyone on that group will remember at least one of these.

Dimiter

------------------------------------------------------ Dimiter Popoff Transgalactic Instruments

formatting link

------------------------------------------------------

formatting link

Reply to
dp

(snip, someone wrote)

There are, and IEEE 754 includes both a binary and decimal format with 128 bits. Binary with 113 bits (including hidden bit) and decimal with 34 decimal digits. Other than IBM, though, it is hard to find hardware implementations (including microcode) of either of them.

The IBM 360/85, all S/370, and later (ESA/370, ESA/390, etc.) include a hexadecimal floating point format with 128 bits for all except divide. Somewhere in the ESA/390 days (around 2000) DXR (extended precision floating divide) was added. Previous to that, IBM statistics indicated it was rare enough that software emulation was good enough.

Some models of VAX have the microcode for H-float. Other models emulate it in software.

Some current processors have the instructions defined, but I believe they trap and emulate in software when one is executed.

Some compilers implement it in software through subroutine calls even if the hardware doesn't.

-- glen

Reply to
glen herrmannsfeldt

The size-specific types /are/ required if the target supports them - which means almost every system other than some DSP's with minimum data sizes greater than 8 bits.

int_least64_t is not required unless the compiler supports a suitable type.

The "int_fast8_t" and family are also required in the same way as the "int_least8_t" family.

Compilers don't seem to support 128-bit types very well, unless the target is at least 64-bit in the first place. There is no fundamental reason for this - having a 32-bit target support 128-bit "long long long ints" is no different from having an 8-bit target support 32-bit "long int". gcc for the 8-bit avr even supports 64-bit ints. I suppose there is not much demand for 128-bit integers.

Reply to
David Brown

The important parts of the design are there. Enough to understand how to *use* the chip. You don't need to know the bitstream file format to use it. You use the tools they give and get the work done.

Yep, an MCU would be pretty useless without a very detailed description of the instruction set. But an FPGA only needs to tell you the equivalent of how to use all the bits and pieces inside. They don't need to tell you how to format the configuration stream unless you want to write your own tools. Writing your own tools does nothing for the FPGA vendor and will likely cause them problems.

Why can't you see that? If they control the software they control the issues with it. Open source does nothing for them really and it could easily cause them a lot of bad press from some bad software the chip vendor then can do nothing about.

Like I said, you would be doing a lot of hard work for no reason. Why can't your reconfigurable processor be done in an existing HDL? Or why can't you write your own tools that then output an HDL?

Yes, and they *aren't* willing to share the info.

Hmmm... to date they haven't even been able to generate code that can be downloaded in a partial reconfiguration, although what you are talking about may be an unrelated task. How exactly would that be a useful thing? What does the "embedded gadget" use as inputs, and what variations would be made to the target FPGA code? I'm pretty sure all the vendors support Linux. Run a supported version of Linux on a BBB and see if you can run any CAD tools in 512 MB of RAM. It has been a long time since PCs came with only 512 MB of RAM and I expect that wouldn't even load the tools.

Yes, it's called market and profit. The market at the low end has little profit unless the volumes are really big. I only know of one vendor with chips aimed at this market and they mostly use packages that are undesirable for lower volume runs, tiny 0.4 mm pitch ball grid arrays. Not my cup of tea. They want to sell to cell phones and tablets where size and power is king and one design win will sell a million chips. But then so would I...

Chip they make...

--- | | ---

Chip I want,

|||||||||||||||| ----------------

-| |-

-| |-

-| |-

-| |-

-| |-

-| |-

-| |- ---------------- ||||||||||||||||

Well, ascii art isn't so good at this. Some of the chips they make are actually smaller than I drew and the chip I currently use is also smaller than I drew, 100 pin QFP. But the ratio may be similar. I don't want to deal with such tiny geometries and the boards I would have to build to use the tiny chips. Heck, I'd be happy with a 64 pin QFP too!

The GA144 comes in a good package. 88 pin, 0.4 mm pitch QFN with one row of pins, easy to route and will work with 6/6 space and trace.

--

Rick
Reply to
rickman

Really? You can't have a 64 bit mantissa with a floating point number? Wow, it must be one of those really subtle mathematical things...

--

Rick
Reply to
rickman

That's like saying a Windows PC is open because I can run Microsoft Word on it. No. I consider programming the PC (or FPGA), with my own code and my own compilers, to be part of "using" it.

I wonder why CPU vendors are always trying to get people to write tools. Could it be that the FPGA vendors are just plain short-sighted?

How do I load the generated HDL into the FPGA, in my embedded board? It doesn't run Windows and can't run the vendor tools.

I don't understand what you mean by that. I could see wanting to generate FPGA code on the fly, just like lots of programs generate CPU code on the fly.

An example might be an ethernet packet filter, something like a hardware version of BPF (Berkeley Packet Filter). BPF takes filtering rules and compiles them into machine code that runs in your computer's network stack. You can change the rules on the fly and it generates new code and loads it. You could imagine having automated intrusion detection generate new rules and compile them, as it sees attacks taking shape. Now it wants to load the new code into the FPGA. It is unattended so there is nobody around to click "accept the license agreement". What is your advice?

Including ARM linux? I doubt that. Don't they want dongles and crap like that? Are you suggesting having a separate dongle for every chip?

It's very hard to use that chip though, as you are well aware.

Reply to
Paul Rubin

Ok, I'm done with this part of the discussion. You have your ideas of what is needed and I have mine.

You clearly don't understand FPGAs, or more specifically the FPGA market. Until you are willing to open your mind to the idea that FPGAs aren't MCUs you won't be able to understand it.

You seem obsessed with Windows. As I have said, the tools run under other OSes. I think you vastly underestimate the horsepower required to run FPGA design tools as well. Do you really think they will run on an ARM9 effectively?

Yes, I can see that you don't understand.

Design a filter that can be configured and compile that once. I don't see anything in your description that requires a design to be recompiled. I think you are also underestimating the complexity of generating and compiling code for an FPGA. It ain't an MCU.

So run an Atom! No, the license doesn't use a dongle. I don't use batch mode myself, but I understand all the tools can be operated with a script, many people do that.

I'm only talking about the package... geeze.

--

Rick
Reply to
rickman
