Re: Intel details future Larrabee graphics chip

Of course.

Did your component have tristate pins defined as out or inout? The latter, indeed, had problems until relatively recently, but the former always worked as expected. Personally, I always prefer to have separate in and out ports on internal components, so I wasn't hit by the earlier bugs.

Why did they fix it in the end? I think the main reason was SOPC Builder. They wanted the same SOPC Builder output to work both as a top-level project and as a component, so they had little choice. Hopefully, by now, they have realized that except for toy problems no sane developer will use SOPC Builder output as a top level, but the inout fix is already done and there is no reason to go back.

As I said in another post, nobody here codes this way so I have no idea whether the compiler does the right thing.

P.S. I added comp.arch.fpga to the list. Let's see the opinion of real experts.

Reply to
already5chosen

IEEE doesn't make any requirements on how you implement languages, neither does C require a particular FP format (although C99 strongly recommends IEEE-754 with 32-bit float and 64-bit double).

The IEEE format is pretty well thought out. The exponent bias doesn't cause any issues - in most cases you don't have to worry about it. The format allows you to compare floating point numbers without decoding them into sign, exponent and mantissa. As a result, you can increment/decrement encoded FP numbers to get the next larger/smaller value.
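
A minimal sketch of the trick being described - not code from the thread, assuming 'float' is IEEE-754 binary32; the helper name next_up is made up for illustration:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* For non-negative finite values, the binary32 encodings sort like
   unsigned integers, so adding 1 to the bit pattern yields the next
   representable float. */
static float next_up(float f)          /* valid for 0.0f <= f < +Inf */
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);          /* well-defined type pun */
    u += 1;                            /* next larger encoding  */
    memcpy(&f, &u, sizeof f);
    return f;
}

int main(void)
{
    printf("%.9g\n", next_up(1.0f));   /* prints 1.00000012, i.e. 1 + 2^-23 */
    return 0;
}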

Wilco

Reply to
Wilco Dijkstra

In article , "Wilco Dijkstra" writes: |> |> The IEEE format is pretty well thought out.

That is a matter of opinion. A large number of experts, in many aspects of numerical computing, disagree.

|> The IEEE format is pretty well thought out. The exponent bias doesn't
|> cause any issues - in most cases you don't have to worry about it.

That is true.

|> The format allows you to compare floating point numbers without
|> decoding them into sign, exponent and mantissa. As a result, you can
|> increment/decrement encoded FP numbers to get the next larger/smaller
|> value.

That isn't. Quite apart from the fact that the encodings are sign-magnitude while most integers are two's complement, the trick won't work at either extreme or at zero.
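
A small illustration of the caveat (mine, not from the thread; binary32 assumed, and the helper name bits is made up): the raw encodings are sign-magnitude, so a plain unsigned comparison orders every negative number above every positive one, and the increment trick wraps +Inf into a NaN encoding.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t bits(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}

int main(void)
{
    printf("bits(-1.0f) = 0x%08X\n", bits(-1.0f));  /* 0xBF800000 */
    printf("bits(+2.0f) = 0x%08X\n", bits( 2.0f));  /* 0x40000000 */
    /* As unsigned, 0xBF800000 > 0x40000000, yet -1.0 < 2.0: a raw
       unsigned compare gives the wrong answer for mixed signs. */
    return 0;
}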

Regards, Nick Maclaren.

Reply to
Nick Maclaren

What is wrong in your opinion and how would you improve it?

You can't count through zero or past infinity, indeed. But you can count up from zero all the way to infinity without any checks. I use this fact to first encode the result and then round by just incrementing it - no special-case checking when rounding denormals or rounding up to infinity. Comparisons take just a few instructions despite the difference between sign-magnitude and two's complement.
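
A sketch of the scheme being described (my reconstruction, not Wilco's code; binary32 assumed): 'enc' holds the packed sign, exponent and truncated significand, with the first discarded bit in 'round_bit' and the OR of the rest in 'sticky'. A single increment then performs round-to-nearest-even: a carry out of the significand bumps the exponent, so the largest denormal rounds up into the normal range and the largest finite value rounds up to +Inf, with no special-case tests.

#include <stdint.h>

static uint32_t round_nearest_even(uint32_t enc, int round_bit, int sticky)
{
    /* round up on a tie only if the result would otherwise be odd */
    if (round_bit && (sticky || (enc & 1)))
        enc += 1;   /* carry may ripple from significand into exponent */
    return enc;
}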

Wilco

Reply to
Wilco Dijkstra

In article , "Wilco Dijkstra" writes: |> |> > |> The IEEE format is pretty well thought out. |> >

|> > That is a matter of opinion. A large number of experts, in many
|> > aspects of numerical computing, disagree.
|>
|> What is wrong in your opinion and how would you improve it?

This has been described ad tedium. I would start by defining a clear, consistent objective and work from there. It doesn't matter so much WHAT the objective is, provided that it HAS one.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

IIRC: Some were inout because they were bidirectional to the same part. Others were outputs of one part but inputs to another. This was a data bus situation.

I had two Cygnal F124 CPUs and a DMAed input sharing a 512K x 8 RAM. The result was that I needed two 8-bit true I/O buses and a MUXed bus at the RAM.

Reply to
MooseFET

IEEE achieved its objective a long time ago: just about all hardware and software uses it. So that is not the problem. However it doesn't explicitly allow a subset of the features to be supported, which is what almost all implementations do.

Flushing denormals to zero is one key feature that is missing, for example. Similarly, round-to-even is the only rounding mode ever used in practice. It would be easy to fix the standard to make the current de facto situation official.
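
For concreteness, a hedged sketch of what flushing denormals to zero means at the bit level (binary32 assumed; the function name ftz is made up): if the exponent field is zero and the significand is nonzero, the value is denormal, and FTZ replaces it with a zero of the same sign.

#include <stdint.h>

static uint32_t ftz(uint32_t enc)
{
    uint32_t exp  = (enc >> 23) & 0xFF;
    uint32_t frac = enc & 0x7FFFFF;
    if (exp == 0 && frac != 0)
        return enc & 0x80000000;   /* keep the sign bit only */
    return enc;
}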

Wilco

Reply to
Wilco Dijkstra

In article , "Wilco Dijkstra" writes: |> |> > |> > |> The IEEE format is pretty well thought out. |> > |> >

|> > |> > That is a matter of opinion. A large number of experts, in many
|> > |> > aspects of numerical computing, disagree.

|> > |> What is wrong in your opinion and how would you improve it?

|> > This has been described ad tedium. I would start by defining a
|> > clear, consistent objective and work from there. It doesn't
|> > matter so much WHAT the objective is, provided that it HAS one.
|>
|> IEEE achieved its objective a long time ago: just about all hardware
|> and software uses it.

That is a political objective, not a technical one; I was referring to a technical objective. If your objective is to eliminate all alternatives, whether or not they are better, as well as potential for future improvement, then you and I will never agree.

|> So that is not the problem. However it doesn't
|> explicitly allow a subset of the features to be supported, which is what
|> almost all implementations do.

In different ways, which means that it hasn't even achieved its political objective in full.

|> Flushing denormals to zero is one key feature that is missing, for example.
|> Similarly, round-to-even is the only rounding mode ever used in practice.
|> It would be easy to fix the standard to make the current de facto
|> situation official.

Which? I know of dozens of variants.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

I wouldn't call it political at all - the goals of standards are obvious and non-political (the *process* of establishing a standard is often political, but that is a different matter altogether).

It is true that standards end up targeting the lowest common denominator, and as such fall short of technical excellence. However, the IEEE format represents a major advance over other formats in almost all respects, so it deservedly killed many badly designed formats. If someone comes up with an even better format, then I am all for it - though I doubt it can be improved much.

Well the main goal was to create a common binary format which gives identical results on all implementations. That is exactly what we have today, so it is an example of a standard that worked well.

I know a compiler that supports 5 variants, but there aren't any useful variants beyond that. Even that is too much; I think just 1 or 2 commonly used subsets would be sufficient to capture 99% of implementations.

Wilco

Reply to
Wilco Dijkstra

In article , "Wilco Dijkstra" writes: |> |> > |> IEEE achieved its objective a long time ago: just about all hardware |> > |> and software uses it. |> >

|> > That is a political objective, not a technical one; I was referring
|> > to a technical objective.
|>
|> I wouldn't call it political at all - the goals of standards are obvious and
|> non-political (the *process* of establishing a standard is often political,
|> but that is a different matter altogether).

The technical goals of standards are obvious? The mind boggles. Clearly you haven't been involved with many of their committees.

|> > |> So that is not the problem. However it doesn't
|> > |> explicitly allow a subset of the features to be supported, which is what
|> > |> almost all implementations do.

|> > In different ways, which means that it hasn't even achieved its
|> > political objective in full.
|>
|> Well the main goal was to create a common binary format which gives
|> identical results on all implementations. That is exactly what we have
|> today, so it is an example of a standard that worked well.

That is factually false, as you yourself stated: not merely are some aspects of it left implementation-dependent, you said yourself that most implementations use hard underflow (actually, it's not that simple).

Also, you are ignoring the fact that almost all programs use languages other than assembler nowadays, and the IEEE 754 model is notoriously incompatible with the arithmetic models used by most programming languages. That, in turn, means that two compilers (or even options) on the same hardware usually give different results, and neither are broken.
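
A small example of the effect (mine, not from the thread): on x86, a compiler targeting the x87 unit may keep intermediates in 80-bit extended precision, while one targeting SSE2 rounds each operation to 64 bits; both are legal C, and FLT_EVAL_METHOD reports which is in effect.

#include <float.h>
#include <stdio.h>

int main(void)
{
    volatile double a = 1e308, b = 10.0;
    double r = a * b / b;   /* overflows to Inf if rounded to 64 bits,
                               but survives in an 80-bit intermediate */
    printf("FLT_EVAL_METHOD = %d, r = %g\n", FLT_EVAL_METHOD, r);
    return 0;
}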

|> > |> Flushing denormals to zero is one key feature that is missing, for example.
|> > |> Similarly, round-to-even is the only rounding mode ever used in practice.
|> > |> It would be easy to fix the standard to make the current de facto
|> > |> situation official.

|> > Which? I know of dozens of variants.
|>
|> I know a compiler that supports 5 variants, but there aren't any useful
|> variants beyond that. Even that is too much; I think just 1 or 2 commonly
|> used subsets would be sufficient to capture 99% of implementations.

There are a lot more than five variants in use today, even just at the hardware level. Actually, Intel has at least three, and quite likely more.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

Actually, they are not really in-band, nor denormal. They used codes that do not occur in the standard value encodings. And there are many more unused encodings than the 5 or so that they have used. Damn, 20 years since I last studied it, and I have forgotten a lot of detail.

Reply to
JosephKK

As I said, it's the process that is the problem. Design by committee rarely leads to something useful due to every member having their own agenda and axes to grind.

No, the implementation-defined aspects don't lead to different results. The choice of whether to support denormals or flush to zero doesn't affect most software, so it hardly matters. In the rare cases where it does matter, you still have the option to enable denormals and take the performance hit.

That's not true. It's not difficult to optimize based on the chosen floating-point model, so compilers give the same result even with full optimization (if they don't, they are broken). You might get different results only if you enable fast floating-point options that allow reordering of operations.
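
A two-line illustration of why reordering changes results (floating-point addition is not associative):

#include <stdio.h>

int main(void)
{
    double big = 1e16, small = 1.0;
    printf("%g\n", (big + small) - big);   /* 0: 'small' is absorbed   */
    printf("%g\n", (big - big) + small);   /* 1: same terms, reordered */
    return 0;
}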

You can create an infinite number of variants, but only a few make sense. There are only a few choices when flushing a denormal to zero, so defining the correct way of doing this would reduce the variation that exists today.

Wilco

Reply to
Wilco Dijkstra

Right.

Not correct.

Every single (int) cast of a fp variable in a C program must truncate, not round, which means that you absolutely have to have a directed rounding mode on top of the default.
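
In miniature (a sketch, not from the thread):

#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("%d\n", (int)2.9);         /*  2: C truncates toward zero   */
    printf("%d\n", (int)-2.9);        /* -2: also toward zero          */
    printf("%g\n", nearbyint(2.9));   /*  3: the default rounding mode */
    return 0;
}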

Fix C at the same time then?

Terje

--
- 
"almost all programming can be viewed as an exercise in caching"
Reply to
Terje Mathisen

The only thing I dislike (from a sw viewpoint) is the fact that denormals and zero share the same zero exponent; this makes it slightly slower to separate out regular numbers and zero from the special cases (Inf/NaN/denormal).
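
A sketch of the classification being described (mine; binary32 assumed, and the function name is made up): exponent all-ones marks Inf/NaN, while exponent zero marks both zero and denormals, so the zero/denormal split costs an extra test on the significand.

#include <stdint.h>

static int is_special(uint32_t enc)    /* Inf, NaN or denormal */
{
    uint32_t exp  = (enc >> 23) & 0xFF;
    uint32_t frac = enc & 0x7FFFFF;
    if (exp == 0xFF) return 1;             /* Inf or NaN         */
    if (exp == 0)    return frac != 0;     /* denormal, not zero */
    return 0;
}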

Terje

--
- 
"almost all programming can be viewed as an exercise in caching"
Reply to
Terje Mathisen

Make sense, please. I have built many systems with hardware that can count through zero. If you have built a system that can count to infinity, you need to publish. If you are talking about obscure properties of floating point itself, or of 754-compliant implementations, please be more clear.

Reply to
JosephKK

It has been some time since I have fussed with this. Where do I find discussion of the improvements you are talking about?

Reply to
JosephKK

I know of at least 5 different early hardware implementations of floating point and have written two software implementations myself. So what. It was a long time ago as well.

Reply to
JosephKK

Possibly not any. I did a short stint on IEEE 1219, and the politics of protecting profits and other reasons for entrenched positions were glaringly present.

This one is new to me. Where do I go to find backup? The magnitude of the claim does sound a bit extreme.

Some pointers to more details please.

Reply to
JosephKK

In article , JosephKK writes:
|> >In article ,
|> >"Wilco Dijkstra" writes:
|> >|> > |> The IEEE format is pretty well thought out.

|> >|> > That is a matter of opinion. A large number of experts, in many
|> >|> > aspects of numerical computing, disagree.
|> >|>
|> >|> What is wrong in your opinion and how would you improve it?

|> >This has been described ad tedium. I would start by defining a
|> >clear, consistent objective and work from there. It doesn't
|> >matter so much WHAT the objective is, provided that it HAS one.
|>
|> It has been some time since I have fussed with this. Where do I find
|> discussion of the improvements you are talking about?

I wasn't describing specific improvements. Anyway, probably in the archives of this group, or of comp.arch.arithmetic. Google Groups seems to be on the blink, as it gets only one hit for "IEEE 754" on the latter, and for "IEEE" and "floating-point" on the former, which I have difficulty believing!

There was also an IEEE 754R mailing list, which may be archived and accessible - see

formatting link

Regards, Nick Maclaren.

Reply to
Nick Maclaren

In article , JosephKK writes:
|> >Also, you are ignoring the fact that almost all programs use languages
|> >other than assembler nowadays, and the IEEE 754 model is notoriously
|> >incompatible with the arithmetic models used by most programming
|> >languages. That, in turn, means that two compilers (or even options)
|> >on the same hardware usually give different results, and neither are
|> >broken.
|>
|> This one is new to me. Where do I go to find backup? The magnitude
|> of the claim does sound a bit extreme.

The relevant language standards? Seriously. For 'clarification', you will need to read the archives of the SC22 mailing lists.

To save you time: in Fortran, look for the rules on expression and function evaluation and, in C/C++, the rules on side-effects and the INCREDIBLY arcane syntax and semantics of preprocessor versus compile-time versus run-time expressions. And remember that flag handling is NOT optional in IEEE 754, but a fundamental part of its design, as Kahan points out.
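
On the flag point: C99 does expose the IEEE flags through <fenv.h>, though compilers honor FENV_ACCESS to varying degrees. A minimal sketch (mine, not from the thread):

#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON    /* some compilers ignore this pragma */

int main(void)
{
    feclearexcept(FE_ALL_EXCEPT);
    volatile double x = 1e-308, y = 1e308;
    volatile double a = x * x;     /* tiny and inexact: underflows */
    volatile double b = y * y;     /* overflows                    */
    (void)a; (void)b;
    if (fetestexcept(FE_UNDERFLOW)) puts("underflow raised");
    if (fetestexcept(FE_OVERFLOW))  puts("overflow raised");
    return 0;
}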

|> >There are a lot more than five variants in use today, even just at
|> >the hardware level. Actually, Intel has at least three, and quite
|> >likely more.
|>
|> Some pointers to more details please.

Look at the IEEE 754 specification on the handling of underflow and NaNs, and then study Intel's architecture manuals VERY carefully, looking for x86 basic FP, MMX, SSE, IA64 and optional variants. Then look at the MIPS, SPARC, PowerPC and ARM architecture manuals.

Then laugh, cry or scream, according to taste.

Regards, Nick Maclaren.

Reply to
Nick Maclaren
