Re: Floating point format for Intel math coprocessors - Page 2

Re: Floating point format for Intel math coprocessors
[quoted text not shown]

Yeah, sometimes the ones who have the most details about a subject
fail to see the overall big picture (i.e. the old forest-for-the-trees
argument). I've sometimes had revelations when I go to teach someone
something I thought I already knew, and come away understanding it
better myself. :-)

    Yousuf Khan



Re: Floating point format for Intel math coprocessors
On Wed, 02 Jul 2003 03:39:51 GMT, "Yousuf Khan"

[quoted text not shown]

Teaching *is* one of the better ways to learn something.

Jon


Re: Floating point format for Intel math coprocessors
On Fri, 27 Jun 2003 19:14:09 GMT, Jonathan Kirwan


[quoted text not shown]

I had never heard of phantom bits before, but the PDP-11 processor
handbooks talked about hidden bit normalisation when describing the
floating point processor (FPP) instructions in the mid-'70s. It might
even be older, since the same format was used by the FIS instruction
set extension on some early PDP-11s.

Paul
  

Re: Floating point format for Intel math coprocessors
On Wed, 02 Jul 2003 09:11:38 +0300, Paul Keinanen

[quoted text not shown]

Thanks for that.  I think I still have a PDP-11 book or two
around here... yes!  There it is.  1976, PDP 11/70 Processor
Handbook, and yes... they talk about the hidden bit.

Yup, I was working on PDP-11's (and PDP-8's as well) from about
1972 on.  PDP-8's first, though.  So I'm pretty sure that's where
I picked it up, and my guess is it probably *was* circa 1974.

Damn, my memory is good in spots!

Thanks,
Jon


Re: Floating point format for Intel math coprocessors
On Fri, 27 Jun 2003 14:23:22 GMT, Jack Crenshaw

[quoted text not shown]

The sign bit is there, as the highest order bit -- just as you
note.  Working left to right, it's followed by the exponent, which
is in "excess 127" format.  It's not a signed field, but an excess
127 field.  A couple of special values, 0 and 255, are reserved for
the fancy stuff (zero and denormals at one end, infinities and NaNs
at the other).  Those correspond to exponent values of -127 and
+128, and no one is supposed to miss them much.  The mantissa is
quite simply *always* associated with a hidden bit, which always
leads the value.

Okay, the exception made to the above is the exact value of zero,
where the exponent field is 0, the mantissa is 0, and the hidden
bit is assumed to be 0 as well.
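
Here's a quick C sketch of pulling those fields apart, assuming a
host where "float" is a 32-bit IEEE single and <stdint.h> is
available (nothing here is specific to the coprocessor itself):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        float f = 1.0f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);          /* reinterpret the bit pattern */

        unsigned sign     = bits >> 31;          /* 1 sign bit                  */
        unsigned exp_raw  = (bits >> 23) & 0xFF; /* 8 bits, excess 127          */
        unsigned mantissa = bits & 0x7FFFFF;     /* 23 stored bits; the hidden  */
                                                 /* leading 1 isn't stored      */

        printf("0x%08X: sign=%u exp=%d (raw %u) mantissa=0x%06X\n",
               (unsigned)bits, sign, (int)exp_raw - 127, exp_raw, mantissa);
        return 0;
    }

For 1.0f that prints sign=0, a raw exponent of 127 (true exponent 0),
and an all-zero stored mantissa -- the leading 1 is the hidden bit.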

[quoted text not shown]

I don't agree.  At least, not yet in my experience.  And I
haven't seen an example below which makes your point.

[quoted text not shown]

Well, I know how to normalize.  After writing a few complex-FP
FFT routines for integer processors, it gets to be kind of
routine and hum-drum.  So I'll skip the explanation.

[quoted text not shown]

Interesting note about the 360.  I only had a few opportunities
to program in BAL and never got into the floating point formats.

[quoted text not shown]

The high bit is *always* a 1 after normalizing (except for zero).
But, as you know, it is thrown away.  Never kept.

[quoted text not shown]

Which doesn't make your point, because it's quite correct to use
those two values to represent 1 and 2.

3F800000 is:

              1 <-- hidden bit
   0 01111111 00000000000000000000000
   - -------- -----------------------
   S exponent mantissa

40000000 is:

              1 <-- hidden bit
   0 10000000 00000000000000000000000
   - -------- -----------------------
   S exponent mantissa

In those two cases, the only difference is that the exponents
are 1 apart from each other.  Which is exactly what you'd expect
for 1.0 and 2.0.  The mantissa is the same for both.
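
If you want to watch the hidden bit and the excess-127 exponent do
their jobs, here's a small C sketch (again assuming 32-bit IEEE
floats and <stdint.h>) that rebuilds the value from the three fields:

    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    /* value = (-1)^sign * (1 + mantissa/2^23) * 2^(exp_raw - 127)
       -- ignoring zero, denormals, infinities and NaNs for brevity */
    static double rebuild(uint32_t bits)
    {
        int    sign = (bits >> 31) & 1;
        int    exp  = (int)((bits >> 23) & 0xFF) - 127;
        double frac = 1.0 + (bits & 0x7FFFFF) / 8388608.0;  /* re-attach hidden 1 */
        return (sign ? -1.0 : 1.0) * frac * pow(2.0, exp);
    }

    int main(void)
    {
        printf("%g %g\n", rebuild(0x3F800000u), rebuild(0x40000000u));
        return 0;   /* prints: 1 2 */
    }

Same mantissa, exponents one apart, values a factor of two apart.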

[quoted text not shown]

I have, believe me.

[quoted text not shown]

Well, I hope that helps some.

Jon


Re: Floating point format for Intel math coprocessors

[quoted text not shown]

IIRC, the Navy's UYK-44 processor (probably the UYK-20 as well,
though I'm not sure it did FP) also used base 16 for the exponent,
so increasing the exponent by 1 shifted the mantissa by 4 bits. I
dare anybody to claim that's a useful bit of information to have
retained for 15+ years....
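
For anyone who hasn't run into hexadecimal floating point: with a
base-16 exponent the value is fraction * 16^exp, so bumping the
exponent by one scales by 16, i.e. shifts the fraction four bit
positions.  A throwaway C illustration of just that scaling rule
(not the UYK-44's or the 360's actual bit layout):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        unsigned frac = 0x123456;   /* some 24-bit fraction */
        int exp = 3;

        double bumped  = frac * pow(16.0, exp + 1);            /* exponent + 1  */
        double shifted = (double)(frac << 4) * pow(16.0, exp); /* fraction << 4 */

        printf("%.0f %.0f equal=%d\n", bumped, shifted, bumped == shifted);
        return 0;
    }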

--
Grant Edwards                   grante             Yow!  .. bleakness....
                                  at               desolation.... plastic

Re: Floating point format for Intel math coprocessors

[quoted text not shown]

Could be worse, I could've explained what BAM variables were...

--
Grant Edwards                   grante             Yow!  YOW!! I am having
                                  at               FUN!!

Re: Floating point format for Intel math coprocessors

[quoted text not shown]

I _think_ it was UYK, since everybody pronounced it "yuck".

The 44 was a small version of the same architecture, which was
done by, um, Sperry (I think).  Originally it was designed for
use on submarines (a '44 chassis would fit (barely) through a
submarine's loading hatch).  A '20, OTOH, was more of a
standard computer-room VAX-sized thing -- you'd have to build a
sub hull around it.

A '44 consisted of a backplane full of very expensive little
boards (about 3x6 inches).  It took several of the boards for
the CPU, and then there were memory and I/O modules.  The whole
thing, including the power supply, was the size of a small
suitcase.  The CPU was built out of AM2901 bit-slice processors
and executed a superset of the '20's instruction set.

The '44 was "standardized" as the Navy's official embedded
computer.  It was about as powerful as a decent 8086
single-board computer, only 100X larger and 1000X more
expensive.  It did have plug-in cards for all the oddball
USN-specific serial/parallel interfaces, which gave it a leg up
on commercial stuff.  The '44 had FP, and the ones I played with
used EEPROM/RAM instead of core (though core memory was
available for it, IIRC).  It was sort of cool that it could do
polar<->rectangular coordinate transforms in a single machine
instruction.

For the project I worked on, we would have embedded a couple of
8086's and done C programs, given our 'druthers, but NAVSEA
insisted that we use '44s and write in CMS/2 or CMS-2 or
whatever it was called.  They also wanted us to use some OS or
other from the '20, but there was no way it could deal with
the real-time requirements we had, so they let us write our own
simple kernel.

The whole project was cancelled after a couple years (never
even got a prototype working). A few years later it was revived
and redesigned using "commercial" processors before being
cancelled again.

Sure glad I'm out of defense work...  ;)

--
Grant Edwards                   grante             Yow!  My forehead feels
                                  at               like a PACKAGE of moist