Small, fast, resource-rich processor

It should be, but it isn't.

Well, for one, sometimes the range isn't so big for a given problem, but you don't know the order of magnitude well enough to use fixed point. If the uncertainty in the operands and result scales with their magnitude (relative error), then floating point is still a good choice.

Another reason is that many people don't know how to do scaled fixed point, or are too lazy to do it. The lack of support from many high-level languages doesn't help.

But the real distinction should be absolute vs. relative error. When the error (uncertainty) is independent of the magnitude, then fixed point should be used. That is often true for DSP problems.
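A tiny sketch of that distinction, using a hypothetical Q16.16 format (the numbers are mine, just for illustration):

#include <stdint.h>
#include <stdio.h>

typedef int32_t q16_16;              /* 16 integer bits, 16 fraction bits */
#define Q_ONE ((int32_t)1 << 16)

int main(void)
{
    /* single precision float: the step near 20000 is about 0.002, so a
       0.0001 increment is simply lost (absolute error grows with magnitude) */
    float f = 20000.0f;
    float g = f + 0.0001f;
    printf("float: %f\n", g);                  /* 20000.000000 */

    /* Q16.16 fixed point: the step is 1/65536 everywhere, so the same
       increment (7/65536, about 0.0001) survives */
    q16_16 q = 20000 * Q_ONE;
    q += 7;
    printf("fixed: %f\n", (double)q / Q_ONE);  /* 20000.000107 */
    return 0;
}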

-- glen

Reply to
glen herrmannsfeldt

You just don't know how this works then. Well, you may want to take the word of someone who does.

I already posted a photo of one of the boards which do in situ CPLD programming (again, formatting link). The processor is an MPC5200B, the OS it runs is DPS (the device comes with a 2.5" HDD on top of the board, formatting link).

There is no point going into the details of how this is done just for the sake of some party-talk level argument.

Dimiter

Reply to
dp

Have you ever looked at the GA144? It is 144 small, simple processors arranged somewhat like an FPGA. Some of the same concepts apply, others do not. I believe all of the software comes with source, although "free" (as in speech, not beer) may not apply; you may not be free to do with it as you wish, but I'm not sure. Certainly you can modify it.

If you order before midnight tonight from Schmartboard.com, they are only $20 per kit. They will solder the part to the board for $3 by using this link...

formatting link

The monthly special ...

formatting link

--

Rick
Reply to
rickman

That is not why the FPGA tools are closed source. It has nothing to do with stifling innovation. Just the opposite: the vendors compete heavily on the strength of their tools. In fact, I was once told that Xilinx spends more money on software development than they do on hardware development, lol. At one time a *lot* of people preferred Altera because their tools were perceived as being more user friendly. I think that perception waned a bit, but from what I've heard it is now picking up again. So there is a *lot* of incentive to improve the tools.

I only wish the tools were open. But in lieu of that, I'm happy with free (as in beer) tools. The question I have is what exactly do you want to do with FPGAs that you need open source tools? When you say you want to "play with" and "learn the technology", why can't you do that with the free beer tools?

One *big* issue I have with the current tools is that they *are* licensed even though they are free (as in beer). Many a time the licensing has gotten in the way of using them. I now have it on my calendar to renew my license key every year around my birthday.

--

Rick
Reply to
rickman

I very seldom take the "word" of anyone. If something is true, it can be shown. If it can't be shown then I can't consider it to be true. I'm certainly not going to take your word over that of the vendors or my own eyes. I have looked at the code they provide to see what it takes to interface it and it looks reasonable to me if you can run C code. If you don't want to explain why this won't work, fine. We are done.

Am I expected to reverse engineer your design from a photo? What is your point? You asked how many PCs I expected you to attach and I replied I expect the C code to run on an embedded processor. Do you just want to bicker? I thought you wanted to discuss this.

Ok, why did you start? If you don't want to discuss it then just let it drop.

--

Rick
Reply to
rickman

On this last point we agree!

Yes, there are occasions when the floating point implementation can be exact, such as 2.0 * 2.0. But in general, it is /not/ exact. If you write "a = b / 3.0; c = a + a + a;" you cannot expect b and c to be equal. Of course integer and fixed point arithmetic can also be inexact - but they have a clear set of rules for which they /are/ exact. (I know IEEE has plenty of rules for floating point too - but they are inevitably much more complex, and in real life there is a certain amount of implementation-dependent behaviour.) And my point is that floating point is always approximate - I was not making claims that alternative types of arithmetic are always accurate.
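Here is a sketch of the same effect, using /10.0 rather than /3.0 only because the mismatch is easier to show (IEEE double, round-to-nearest assumed):

#include <stdio.h>

int main(void)
{
    double b = 1.0;
    double a = b / 10.0;       /* the nearest double to 0.1, not exact */
    double c = 0.0;
    for (int i = 0; i < 10; i++)
        c += a;                /* ten rounded additions                */
    printf("b = %.17g\nc = %.17g\nequal: %d\n", b, c, b == c);
    return 0;
}

On a typical IEEE system c prints as 0.99999999999999989, so b == c is false.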

Again, I am not making /any/ claims that integers are the best for everything. (Though you might like to think about how floating point calculations are implemented in software - they use integer arithmetic. Apparently integer arithmetic /will/ do for /any/ calculation, if it can be done at all. Floating point just makes some calculations much easier to work with.)
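For instance, here is a rough sketch of a soft-float multiply (positive, normal single precision values only; no rounding, overflow, NaN or inf handling; the names are mine) - it is nothing but integer shifts and an integer multiply:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static float soft_mul(float a, float b)
{
    uint32_t ia, ib;
    memcpy(&ia, &a, sizeof ia);
    memcpy(&ib, &b, sizeof ib);

    uint32_t ea = (ia >> 23) & 0xFF;           /* biased exponents    */
    uint32_t eb = (ib >> 23) & 0xFF;
    uint64_t ma = (ia & 0x7FFFFF) | 0x800000;  /* significands with   */
    uint64_t mb = (ib & 0x7FFFFF) | 0x800000;  /* the implicit 1 bit  */

    uint64_t m = ma * mb;                      /* 46..48 bit product  */
    int32_t  e = (int32_t)ea + (int32_t)eb - 127;

    if (m & (1ULL << 47)) { m >>= 24; e += 1; }  /* renormalise       */
    else                  { m >>= 23; }

    uint32_t ir = ((uint32_t)e << 23) | ((uint32_t)m & 0x7FFFFF);
    float r;
    memcpy(&r, &ir, sizeof r);
    return r;                   /* truncated, not correctly rounded   */
}

int main(void)
{
    printf("%g\n", soft_mul(1.5f, 2.5f));      /* 3.75 */
    return 0;
}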

My argument is not against using floating point - it is against the mystical belief that IEEE compliance makes floating point somehow into exact mathematics rather than a numerical approximation, or that IEEE somehow makes a real-world difference in real-world embedded calculations. I am trying to point out to those that seem to have trouble understanding (not you, Rick) that all floating point is approximate. IEEE may give you marginally better tolerances in some circumstances than "-ffast-math" type floating point, but it is marginal, and does not affect the principle of picking the correct representation for the job in hand.
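As a concrete example of what "marginally better in some circumstances" can mean (my sketch, not something from the thread): compensated (Kahan) summation relies on strict evaluation order, and with "-ffast-math" a compiler may reassociate (t - s) - y into zero, losing the compensation:

#include <stdio.h>

float kahan_sum(const float *x, int n)
{
    float s = 0.0f, c = 0.0f;      /* running sum and compensation   */
    for (int i = 0; i < n; i++) {
        float y = x[i] - c;        /* apply the previous correction  */
        float t = s + y;           /* big + small: low bits are lost */
        c = (t - s) - y;           /* recover what was just lost     */
        s = t;
    }
    return s;
}

int main(void)
{
    enum { N = 1000000 };
    static float x[N];
    for (int i = 0; i < N; i++) x[i] = 0.1f;

    float naive = 0.0f;
    for (int i = 0; i < N; i++) naive += x[i];

    /* the naive float sum drifts well away from 100000;
       the compensated one stays close to it              */
    printf("naive %f  kahan %f\n", naive, kahan_sum(x, N));
    return 0;
}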

I don't know the details of what was tested. But the test bench involved thousands of the processors running flat out for 6 months, IIRC. The interesting thing is that a research student took the source code for the chip (or the part they were testing) and did a complete formal mathematical verification of the code. He found exactly the same flaws as the brute-force testing found, but he completed the job in 5 months!

Reply to
David Brown

What I tried was to tell you in a nice way that you are way out of your depth so you may want to drop it there. I am not interested in doing classes on jtag basics etc.

You may want to check your posts to see who started what; your memory seems to fail you. No problem with me dropping it, why can't you drop it?

Dimiter

Reply to
dp

Maybe I don't understand. I thought you were comparing fixed point and floating point. How would integer handle "a = b / 3; c = a + a + a;"? In the general case, not so well I expect.
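A trivial sketch of what I mean, with my own numbers - the error just becomes absolute instead of relative:

#include <stdio.h>

int main(void)
{
    int b = 10;
    int a = b / 3;       /* 3 - the remainder is simply discarded */
    int c = a + a + a;   /* 9, not 10                             */
    printf("b=%d c=%d\n", b, c);
    return 0;
}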

Your point about floating point being *always* approximate is no more right than to say fixed point is *always* approximate (in other words, wrong).

I really don't get what your point is at all.

I don't think anyone has said "IEEE compliance makes floating point somehow into exact mathematics rather than a numerical approximation". Where did you get this?

Wow, hardware running flat out for 6 months or a grad student running flat out for 5 months. I'm not sure which is more impressive. Maybe a post-doc could have done it in three months?

--

Rick
Reply to
rickman

Dimiter, if you can't discuss this, fine. I don't get your attitude, is all. If I disagree with you I am not "out of my depth".

Consider it dropped.

--

Rick
Reply to
rickman

It's not magic, it's just the properties of the arithmetic. It's just like if the specification of some CPU architecture says a certain combination of instructions will give result X, it's perfectly ok to use that combination if X is what you want. This is especially true if the architects have come out and said they designed the instruction set with that particular effect in mind, i.e. it's not a quirk or anomaly. Obviously if you want something different from X, you should not use that combination of instructions. And whether you want X depends on the very low level details of your application. Obviously you can't just close your eyes and ignore singularities because you think the machine arithmetic will take care of them. It is, however, ok to plan around them and arrange your code for the specific function domain, so that the right thing happens if you hit upon one.
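To make that less abstract, here is the sort of thing I mean - Kahan's well-worn parallel-resistance example (my transcription, so treat the details as a sketch). With IEEE semantics a shorted branch gives 1/0 = inf, inf propagates through the sum, and the final division collapses it back to 0, which is the physically right answer, with no special-case code:

#include <stdio.h>

double parallel(double r1, double r2, double r3)
{
    /* assumes non-negative resistances */
    return 1.0 / (1.0 / r1 + 1.0 / r2 + 1.0 / r3);
}

int main(void)
{
    printf("%g\n", parallel(10.0, 20.0, 30.0));  /* about 5.4545          */
    printf("%g\n", parallel(10.0, 0.0, 30.0));   /* 0: one branch shorted */
    return 0;
}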

If you are saying X (in this case the behavior of IEEE Infinity) is never a reasonable thing to want to rely on in the first place, you're going to have to take that up with the designers rather than with me. Since they were world-renowned numerical analysts with tons of experience implementing numerical algorithms that dealt with these issues, I am most comfortable assuming that they knew what they were doing when they designed the standard the way they did.

Reply to
Paul Rubin

Oh cool, I'm glad you found that. There is lots of other good stuff on his site too.

Reply to
Paul Rubin

You are jumping into a discussion (or argument) between other people here. This has become a huge thread with many branches - I think this branch has got crossed somehow.

Someone has been saying that IEEE compliance is important because it gives stricter control of rounding, operation ordering, etc., along with features like NaNs and infs. I have been saying that it is not important in most uses - especially in the embedded world - because you can't rely on NaNs and infs for anything other than "something has gone wrong", and avoiding mistakes early is usually a better strategy, and because floating point is inherently inaccurate so your code must already tolerate rounding issues and the like.

Integers or fixed point did not enter the discussion (in this argument, anyway) until you brought them up.

So I am not arguing with /you/ here - I suspect you agree with me (when you do floating point in your FPGA, do you implement full IEEE ?).

This was a /long/ time ago - 25 years or so. Back in the days when students had to work hard, and when chips and computers were expected to have a long market time, so you could do real long-term testing.

Reply to
David Brown

I haven't looked at the rest, yet. There's only so many minutes left in my remaining life :(

Not all numerical practitioners agree completely with everything Kahan says (surprise!), but they do listen and think about what he says.

Reply to
Tom Gardner

I'm sure the guys at IEEE were very smart. But it was in a different time, for different hardware, different software and different types of applications. They could have done the best possible job at the time - but still it is absurd to suggest that the choices made then are the ideal choices for the type of hardware, software and applications we have now - especially in the embedded world. While the mathematics hasn't changed, other things have.

There will be HPC folk that feel 64-bit IEEE is far too limited in range and resolution. Microcontroller producers feel full IEEE hardware implementations are too big, complex and power-hungry. Toolchain vendors feel software library implementations of full IEEE are too slow and bulky, and they restrict the optimiser too much. Embedded developers feel they don't care about irrelevant details of features they will never use, but they do care about code speed and size.

All I am saying is pick the right tool for the job. When you want simple floating point, the basic IEEE formats are quite a reasonable balance of range and precision - but most of the details beyond that are unnecessary costs with very little real-life benefits.

Reply to
David Brown

I suppose the honest answer is that you can for most practical purposes.

It's just that over the years I've always liked to learn how some new technology works at its lowest levels. I've never been happy with just writing code against some library/API without understanding how the library/API is implemented on (or uses) the underlying hardware.

And yes, I do realise that's not viable with the current state of the FPGA market place, but like I said previously, it doesn't stop one from wishing. :-)

This also leaves you vulnerable to a vendor changing their plans for those tools, or simply being bought out by a rival.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

I was simply responding to the statement about FP *always* being approximate. Unless it is qualified somehow it is an absurd statement. It is a given that FP will round if you do any number of non-simplistic calculations, but these same calcs would likely not fit a fixed point calculation either. So I don't follow the point. I don't see where anyone here is assuming FP is *always* exact either.

I didn't see a response to this.

I should have added a smiley at the end of that suggestion. In my day grad students were the final form of indentured servitude. I don't know about now.

--

Rick
Reply to
rickman

The problem you describe is not really the situation here. There may be aspects of the tools that are hidden. I'm sure the vendors are very protective of their tricks and techniques. But the hardware is wide open. They may not give you all the gory details of what connects to what in an explicit way, but the crux of FPGA functionality is there for you to see and use.

I guess what I am saying is that with the current approach to designing FPGAs you aren't missing anything except all the hard work. If you want to know more about how they work on the inside there are plenty of people with that knowledge. The vendors don't go to great lengths to publish it because 99.99% of the users don't need it. If you ask for it and have a valid reason for wanting it I can't imagine they wouldn't share it with you. Some 10 or 15 years ago all the experts in FPGAs *had* to have intimate knowledge of the devices to optimize their designs. But now that just isn't needed. For the most part it is like asking an MCU vendor about their microcode. You may be curious, but it doesn't really matter to your work.

Not sure what vulnerability that creates exactly. But yes, the whole licensing issue is a PITA. Some 5 years ago I *bought* what would now be the free tools from Lattice. Between the time I paid for the order and the time they shipped they changed the simulator from Modelsim, which I knew, to Aldec, which I didn't. I ranted and raved but they wouldn't ship me the Modelsim I ordered. In the end I ended up liking... actually preferring Aldec over Modelsim, but I didn't like the fact that I was stuck.

As I have said, this is the state of FPGA development and is unlikely to change anytime soon. There is just too much market force to keep things the way they are. Heck, I would just love to see the FPGA vendors come out with devices in packages like MCUs so that I can use FPGAs in more MCU-like applications. But they are entrenched in their thinking and won't be changing anytime soon in that regard either.

If you are interested in learning FPGA design I would be happy to help. Just let me know.

--

Rick
Reply to
rickman

A long term review of phdcomics.com would seem to suggest it's _still_ the current practice. :-)

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

The kind of thing I was thinking of was someone taking a short-term view and deciding to turn the free users into a profit source.

Other vulnerabilities include the existing free tools being scrapped in favour of a new set of different tools if a vendor decides to revamp their toolchain line or is taken over. If there's an annual license, you don't have the option of continuing to use the existing (and known) tools.

Thank you; I appreciate that. There are lots of other things on my outstanding list which I want to tackle/play with first, but if I actually do find time to play with a FPGA board, I will keep that in mind.

Thanks,

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

(snip, someone wrote)

Maybe it is better to say that floating point should always be assumed to be approximate. Yes, it often gives the exact result, but it is better not to assume that it does.

In the days before IEEE, there were many different floating point formats that rounded (and otherwise behaved) in different ways. Cray built machines with no divide; instead one computed a reciprocal and multiplied by it, either to full precision, or to less than full precision (but faster).

There was a Cray machine where multiply wasn't commutative. That is, where X*Y-Y*X might not give zero.

And there are a lot of scientific problems where such arithmetic is perfectly fine.

The IBM Stretch allowed one to select between shifting in zeros and shifting in ones during post-normalization. That is, more or less, round up or round down.

With IEEE, if you don't know the rounding mode in effect, then it is best to treat the result as approximate.
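A minimal sketch of that with C99 <fenv.h>, assuming the target actually supports the directed rounding modes (compile with something like gcc -std=c99 -frounding-math -lm):

#include <fenv.h>
#include <stdio.h>

int main(void)
{
    volatile double x = 1.0, y = 3.0;  /* volatile: keep the divisions at run time */

    fesetround(FE_DOWNWARD);
    double lo = x / y;
    fesetround(FE_UPWARD);
    double hi = x / y;
    fesetround(FE_TONEAREST);          /* restore the default */

    printf("down %.17g\nup   %.17g\n", lo, hi);  /* differ in the last bit */
    return 0;
}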

Pretty much all machines allow one to do multiple word fixed point addition and subtraction. That was more important on the eight-bit processors, where it was needed for anything wider than a byte. It isn't all that hard to do multiple word multiply. Divide is a little harder. It is, on most machines, somewhat harder to do higher precision floating point using the available operations. It can be done, is done, and is usually slow enough to discourage its use.
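For instance, a double-word add is just two single-word adds and a hand-carried carry bit - a sketch in C of the same thing the eight-bit parts did with their add-with-carry instruction:

#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t lo, hi; } u128;    /* ad-hoc two-word type */

static u128 add128(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);      /* carry out of the low word */
    return r;
}

int main(void)
{
    u128 a = { UINT64_MAX, 0 };              /* 2^64 - 1 */
    u128 b = { 1, 0 };
    u128 c = add128(a, b);                   /* 2^64: lo = 0, hi = 1 */
    printf("hi=%llu lo=%llu\n",
           (unsigned long long)c.hi, (unsigned long long)c.lo);
    return 0;
}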

(snip)

-- glen

Reply to
glen herrmannsfeldt
