Re: Floating point format for Intel math coprocessors


Which Intel math coprocessor? 8087? 486? Pentium?

Re: Floating point format for Intel math coprocessors

Unless IEEE 794 is something new, I think you mean IEEE 754.

With the proviso that the values of 0 and 255 for the exponent are
special cases reserved for 0, Inf, denormals, and NaN.

I don't think so.  Are you mistaking the lsb of the exponent for the
"visible" phantom bit?

seee eeee emmm mmmm mmmm mmmm mmmm mmmm
0011 1111 1000 0000 0000 0000 0000 0000

s = 0, e = 127, m = 0

(-1)^s * 2^(e-127) * (1+m/(2^23)) = 1*1*1 = 1.0

seee eeee emmm mmmm mmmm mmmm mmmm mmmm
0100 0000 0000 0000 0000 0000 0000 0000

s = 0, e = 128, m = 0

(-1)^s * 2^(e-127) * (1+m/(2^23)) = 1*2*1 = 2.0


I'm not finding any surprises.

1.5 -> 3fc00000
seee eeee emmm mmmm mmmm mmmm mmmm mmmm
0011 1111 1100 0000 0000 0000 0000 0000
s = 0, e = 127, m = 0x400000
(-1)^s * 2^(e-127) * (1+m/(2^23)) = 1*1*1.5 = 1.5


2.5 -> 40200000
seee eeee emmm mmmm mmmm mmmm mmmm mmmm
0100 0000 0010 0000 0000 0000 0000 0000
s = 0, e = 128, m = 0x200000
(-1)^s * 2^(e-127) * (1+m/(2^23)) = 1*2*1.25 = 2.5
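
For anyone who wants to grind these out by machine, here's a quick C
sketch of the same decoding (assuming a 32-bit IEEE 754 single; normal
numbers only, per the special cases noted above):

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

/* Pull an IEEE 754 single apart into s, e, m and rebuild the value
   as (-1)^s * 2^(e-127) * (1 + m/2^23).  Normals only: e = 0 and
   e = 255 are the special cases noted earlier in the thread.      */
static void decode(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);       /* assumes 32-bit float */

    unsigned s = bits >> 31;              /* 1 sign bit           */
    unsigned e = (bits >> 23) & 0xFFu;    /* 8 exponent bits      */
    uint32_t m = bits & 0x7FFFFFu;        /* 23 mantissa bits     */

    double value = (s ? -1.0 : 1.0)
                 * ldexp(1.0 + m / 8388608.0, (int)e - 127);  /* 2^23 */

    printf("%08lx: s=%u e=%u m=0x%06lx -> %g\n",
           (unsigned long)bits, s, e, (unsigned long)m, value);
}

int main(void)
{
    decode(1.0f);    /* 3f800000 */
    decode(2.0f);    /* 40000000 */
    decode(1.5f);    /* 3fc00000 */
    decode(2.5f);    /* 40200000 */
    return 0;
}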

Perhaps I'm misunderstanding your point?

Regards,

                               -=Dave
--
Change is inevitable, progress is not.

Re: Floating point format for Intel math coprocessors

If you're talking about the format in which IA32 FPUs store values
in memory, then I doubt it.  If you're talking about the 80-bit
internal format, I don't know.  I've never tried to use that
format externally.

I've been exchanging float data between IA32 systems and at
least a half-dozen other architectures since the 8086/8087
days. I never saw any format problems.

It is IEEE <something> though 794 doesn't sound right...

--
Grant Edwards
Re: Floating point format for Intel math coprocessors

What chips does this format appear in?  I expect the presence or
absence of normalization depends on the oddness of the exponent
byte.  It makes sense for byte addressed memory based systems,
since zero (ignoring denormalization) can be detected in a single
byte.

--
Chuck F ( snipped-for-privacy@yahoo.com) ( snipped-for-privacy@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
Re: Floating point format for Intel math coprocessors
On Fri, 27 Jun 2003 17:58:50 GMT, Jonathan Kirwan wrote:

Actually, I think I've answered my own question, here.  You
really *are* that Jack.

What clues me in is your use of "phantom" here:

The same term used on page 50 in "Math toolkit..."

In my own experience, even that predating the Intel 8087 or the
IEEE standardization, it was called a "hidden bit" notation.  I
don't know where "phantom" comes from, as my own reading managed
to completely miss it.

So, a hearty "Hello" from me!

Jon


Re: Floating point format for Intel math coprocessors

Grin!  Yep, I really am.
 
Hello.  Re the term, phantom bit:  I've been using that term since I
can remember -- and that's a looooonnnngggg time.  Then again, I still
sometimes catch myself saying "cycles" or "kilocycles," or "B+".  I
first heard the term in 1975.  Not sure when it became Politically
Incorrect.  Maybe someone objected to the implied occult nature of the
term, "phantom"?  Who knows?  But as far as I'm concerned, the term
"hidden bit" is a Johnny-come-lately on the scene.

Back to the point.  I want to thank you and everyone else who responded
(except the guy who said "stop it") for helping to straighten out my
warped brain.

It's nice that you have my book.  Thanks for buying it.  As a matter of
fact, I first ran across this "peculiarity" three years ago, when I was
writing it.  I needed to twiddle the components of the floating-point
number -- separate the exponent from the mantissa -- to write the
fp_hack structure for the square root algorithm.  I looked at the
formats for float, double, and long double, and found the second two
formats easy enough to grok.  But when I looked at the format for
floats, I sort of went, "Gag!" and quickly decided to use doubles for
the book.

It's funny how an idea, once formed, can persist.  Lo those many years
ago, I didn't have a lot of time to think about it -- had to get the
chapter done.  I just managed to convince myself that the format used
this peculiar convention, what with base-4 exponents and all.  I had no
more need of it at the time, so never went back and revisited the
impression.  It's persisted ever since.

All of the folks who responded are absolutely right.  Once I got my
head screwed on straight, it was quite obvious that the format has no
mysteries.  It is indeed the IEEE 754 format, plain and simple.  The
thing that had me confused was the exponents:  3f8, 400, 408, etc.
With one bit for the sign and eight for the exponent, it's perfectly
obvious that the exponent has to bleed down one bit into the next
lower hex digit.  That's what I was seeing, but somehow in my haste, I
didn't recognize it as such, and formed this "theory" that it was
using a base-4 exponent.

Wanna hear the funny part?  After tinkering with it for awhile, I
worked out rules for my imagined format that worked just fine.  At
work, I've got a Mathcad file that takes the hex number, shifts it two
bits at a time, diddles the "phantom" bit, and produces the right
results.  I can go from integer to float and back nicely, using this
cockamamie scheme.

Needless to say, the conversion is a whole lot easier if one uses the
real format!  My Mathcad file just got a lot shorter.

Thanks again to everyone who responded, and my apologies for bothering
y'all with this imaginary problem.

Jack

Re: Floating point format for Intel math coprocessors
On Tue, 01 Jul 2003 13:03:19 GMT, Jack Crenshaw wrote:

Hehe.  Nice to know one of my antennas is still sharp.

I think my first exposure to hidden-bit as a term dates to about
1974.  But I could be off, by a year, either way.

Hehe.  Now those terms aren't so "hidden" to me.  I learned my
early electronics on tube design manuals.  One sticking point I
remember bugging me for a long time was exactly, "How do they
size those darned grid leak resistors?"  I just couldn't figure
out where they got the current from which to figure their
magnitude.  So even B+ is old hat to me.

Well, that's about the time for "hidden bit," too.  Probably, at
that time the term was still in a state of flux.  I just got my
hands on different docs, I imagine.

Oh, it's fine to me, anyway.  I knew what was meant the moment I
saw the term.  It's pretty clear.  I just think time has settled
more on one term than another.

But to take your allusion and run with it a bit...  I don't know
of anyone who was part of some conspiracy to set the term -- in any
case, setting terms usually is propagandistic, designed for
setting agendas in peoples' minds and here is a case where
everyone would want the same agenda.

Oh, geez.  I've never known a geek to care about such things.  I
suppose they must exist, somewhere.  I've just never met one
willing to let me know they thought like that.  But that's an
interesting thought.  It would fit the weird times in the US we
live in, with about 30% aligning themselves as fundamentalists.

Nah... it just can't be.

I really think it was more the IEEE settling on a term.  But
then, this isn't my area so I could be dead wrong about that --
I'm only guessing.

Hehe.  I've no problem if that's true.

No problem.  It was really pretty easy to recall the details.
Like learning to ride a bicycle, I suppose.

Oh, there was no question.  I've a kindred interest in physics
and engineering, I imagine.  I'm currently struggling through
Robert Gilmore's books, one on Lie groups and algebras and the
other on catastrophe theory for engineers as well as polytropes,
packing spheres, and other delights.  There were some nice
insights in your book, which helped wind me on just enough of a
different path to stretch me without losing me.

By the way!!  I completely agree with you about MathCad!  What a
piece of *&!@&$^%$^ it is, now.  I went through several
iterations, loved at first the slant or approach in using it,
but absolutely hate it now because, frankly, I can't run it for
more than an hour before I don't have any memory left and it
crashes out.  Reboot time every hour is not my idea of a good
thing.  And that's only if I don't type and change things too
fast.  When I work quick on it, I can go through what's left
with Win98 on a 256Mb RAM machine in a half hour!  No help from
them and two versions later I've simply stopped using it.  I
don't even want to hear from them, again.  Hopefully, I'll be
able to find an old version somewhere.  For now, I'm doing
without.

Yes.  But that's fine, I suspect.  I've taught undergrad classes
and most folks just go "barf" when confronted with learning
floating point.  In class evaluations, I think having to learn
floating point was the bigger source of complaints about the
classes.  You probably addressed everything anyone "normal"
could reasonably care about and more.

No problem.


In any case, it's clear that your imagination is able to work
overtime, here!  Maybe that's a good thing.

Hmm.  Then you should be able to construct a function to map
between these, proving the consistent results.  I've a hard time
believing there is one.  But who knows?  Maybe this is the
beginning of a new facet of mathematics, like the investigation
into fractals or something!

Hehe!!  When you get things right, they *do* tend to become a
little more prosaic, too.  Good thing for those of us with
feeble minds, too.

hehe.  Best of luck.  In the process, I did notice that you are
entertaining thoughts on a revised "Let's build a compiler."
Best of luck on that and if you feel the desire for unloading
some of the work, I might could help a little.  I've written a
toy C compiler before, an assembler, several linkers, and a
not-so-toy BASIC interpreter.  I can, at least, be a little bit
dangerous.  Might be able to shoulder something, if it helps.

Jon


Re: Floating point format for Intel math coprocessors

Then you definitely ain't one of the young punks, are you? <g>

Re grid leak:  I think it must be pretty much trial and error.  No
doubt _SOMEONE_ has a theory for it, but I would think the grid current
must vary a lot from tube to tube.

FWIW, I sit here surrounded by old Heathkit tube electronics.  I
collect them.  Once I started buying them, I realized I couldn't just
drive down to the local drugstore and test the tubes.  Had to buy a
tube tester, VTVM, and all the other accoutrements to be able to work
on them.

I agree; I was mostly kidding about the PC aspects.  One never knows,
tho.  FYI, I have been known to be called a "fundie" on talk.origins
and others of those insightful and respectful sites.  I'm not, but
they are not noted for their discernment or subtleties of observation.

One of my favorite atheists is Stan Kelly-Bootle of "Devil's DP
Dictionary" fame.  Among his many and myriad talents, he's one of the
world's leading experts on matters religious.  He and I have had some
most stimulating and rewarding discussions, on the rare occasions when
we get together.  The trick is a little thing called mutual respect.
Most modern denizens of the 'net don't get the notion of respecting a
person's opinion, even while disagreeing with it.

Glad to help.

Don't get me started on Mathcad!

As some old-time readers might know, I used to recommend Mathcad to
everyone.  In my conference papers, I'd say, "If you are doing
engineering and don't have Mathcad, you're limiting your career."
After Version 7 came out, I had to say, "Don't buy Mathcad at any
price; it's broken."  Here at home I've stuck at Version 6.  Even 6
has its problems -- 5 was more stable -- but it's the oldest I could
get (from RecycledSoftware, a great source).  The main reason I talked
my company into getting Matlab was as a refuge from Mathcad.

Having said that, truth in advertising also requires me to say that I
use it almost every day.  The reason is simple:  it's the only game in
town.  It's the only Windows program that lets you write both math
equations and text, lets you generate graphics, and also does symbolic
algebra, in a WYSIWYG interface.  Pity it's so unstable.

Come to that, my relationship with Mathcad is very consistent, and
much the same as my relationship with Microsoft Office and Windows.  I
use it every day, and curse it every day.  I've learned to save early
and often.  Even that doesn't always help, but it's the best policy.
I had one case where saving broke the file, but the Mathcad support
people (who can be really nice, sometimes) managed to restore it.

I stay in pretty constant contact with the Mathcad people.  As near as
I can tell, they are trying hard to get the thing under control.
Their goal is to get the program to such a point that it's reasonable
to use as an Enterprise-level utility, and a means of sharing data
across organizations.  I'm also on their power users' group, and
theoretically supposed to be telling them where things aren't working.

Even so, when I report problems, which is often, the two most common
responses I get are:

1) It's not a bug, it's a feature, and
2) Sorry, we can't reproduce that problem.

I think Mathsoft went through a period where all the original authors
were replaced by maintenance programmers -- programmers with more
confidence than ability.  They seemingly had no qualms about changing
things around and redefining user interfaces, with little regard for
what they might break.  Mathsoft is trying to turn things around now,
but it's not going to be easy.  IMO.

See RecycledSoftware, as mentioned above.  BTW, have you _TOLD_
Mathsoft how you feel?  Sometimes I think I'm the only one complaining.

I'm using Version 11 with all the upgrades, and it's still thoroughly
broken.  Much less stable than Versions 7, 8, etc.

F.P. is going to be in my next book.  I have a format called "short
float" which uses a 24-bit form factor with a 16-bit mantissa.  I
first used it back in '76 for an embedded 8080 problem (a Kalman
filter on an 8080!).  Used it again, 20 years later, on a '486.
Needless to say, it's not very accurate, but 16 bits is about all we
can get out of an A/D converter anyway, so it's reasonable for
embedded use.
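
The field widths of that short float aren't spelled out here, so
purely as an illustration, here's how such a 24-bit format might pack
and unpack in C -- a hypothetical layout (1 sign bit, 7-bit exponent
with bias 63, 16-bit mantissa with a hidden leading 1), not
necessarily the book's:

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 24-bit "short float": sign(1) | exponent(7, bias 63) |
   mantissa(16, hidden leading 1).  Zero is all-zero bits.  No range
   or underflow handling -- a sketch, not the format from the book.  */

static uint32_t pack24(double x)
{
    if (x == 0.0) return 0;
    uint32_t s = x < 0.0;
    if (s) x = -x;
    int e;
    double f = frexp(x, &e);               /* x = f * 2^e, f in [0.5,1) */
    uint32_t m = (uint32_t)((2.0 * f - 1.0) * 65536.0 + 0.5);
    int be = (e - 1) + 63;                 /* biased exponent */
    if (m == 0x10000) { m = 0; be++; }     /* rounding carried out */
    return (s << 23) | ((uint32_t)be << 16) | m;
}

static double unpack24(uint32_t p)
{
    if (p == 0) return 0.0;
    double m = 1.0 + (p & 0xFFFFu) / 65536.0;
    int    e = (int)((p >> 16) & 0x7Fu) - 63;
    return ((p >> 23) & 1u) ? -ldexp(m, e) : ldexp(m, e);
}

int main(void)
{
    double x = 3.14159;
    uint32_t p = pack24(x);
    printf("%g -> %06lx -> %g\n", x, (unsigned long)p, unpack24(p));
    return 0;
}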

Grin!  I don't know about that, but there is indeed a connection.  I
suppose that, with enough effort, I could work out a scheme for using
base 16, and still get the same bit patterns.  Epicycles upon
epicycles, don'cha know.

Thanks for the offer.  I'm thinking that perhaps an open-source sort
of approach might be useful.  Several people have offered to help.  My
intent is to use Delphi, and there are lots of folks out there who
know it better than I.  Of course, I'll still have to do the prose,
but help with the software is always welcome.

Jack

Re: Floating point format for Intel math coprocessors
On Sat, 05 Jul 2003 15:47:56 GMT, Jack Crenshaw wrote:

I use PMTs in my work, so yes.

Excellent.  Of course, there might be some subtlety that they
didn't include in the physical model which becomes dominant in
the real McCoy.  But odds are the physical approximation will
get you far, if you are wise about modeling the dimensions (and
I don't just mean length here.)

I just like having the theory from which to make such deductions
to the specific, as well.  Sometimes, interesting ideas can
suggest themselves from that.  More, it's hard to apply your
dimensional analysis to the physical modeling without at least
some theory as your guide.

But one uses all the tools available at reasonable cost, I
imagine.  I'm sure they did, too.


It wasn't about money, for me.  So that's no incentive for me.
I just enjoyed the learning experience.

Yes, in general.

I think there may be a place for name calling if other
avenues of goading fail and it's perceived that there might be
some hope with a dash of cold water in the face.  Sometimes, it
just takes a slap to get a response.  If it's done with respect,
even if they don't really realize you honestly care about them
caring about themselves, then it may work out for the better.

Of course, there's always the risk of a broken relationship as a
result.  But sometimes it's already broken by that time and this
is the only remaining possibility for restoring it.  One takes
one's chances.


Hehe.  Looks like very interesting work with very interesting
people.  Excellent!

Yes, Kaypro did good and with reasonable pricing at the time.

I used CP/M a fair amount, too.  In general, my only problems
were with PerSci floppies.  Voice coil drive, fast, and I had a
few fail on me.  The Shugarts just kept working.

I rely on DOS, similarly.  Of course, if you've done *any*
assembly programming on a DOS x86 with .COM files or have
programmed with the early DOS 1.0 function calls, you *know*
about the many similarities (identical, sometimes) with CP/M.
But there are times when Windows won't boot and that DOS is
still ticking away just fine, that I can jump in and use it to
get Windows restarted.  Another reason I'm still on Win98, by
the way.


Okay.  It just makes me angry with them, having paid them well
and received absolutely NOTHING of value for it.  And I keep
seeing that carrot in front of my face that I can't quite get.
I just have to close my eyes, I guess.


Hehe.  Noted.  I'll remember this example when others rail at
the idea of open source and stuff this example into their face.
It's a classic, for sure.


Like that's anything new.  In the US, at least, it's not just
profit but short-term-profit-in-the-next-three-months which
drives almost all decisions.

What can I say??  I'm as frustrated, and I've not even been exposed
to it like you have.  It all just baffles me.

hehe.  Well, keep at it!  I used it for sensor fusion with
various phased-array radar and other sensor systems, all with
varying characteristics to model.  It's worth knowing about!

Of course.  I gave a swipe at that point above, in fact.  But I
added that even then, I've often found better ways -- even in
the face of 12 orders of mag.  The key to my point isn't that
floating point should be entirely avoided.  It's that it should
be applied with understanding -- and more particularly, in the
case of most embedded systems.  If Excel crashes out, "Oh, gee.
I guess I'll just reboot."  You live with it.  But in an
embedded system, subtle errors crop up if you aren't careful.

Or you can do the analysis to verify that it is impossible for
overflow to occur.  Which is what I've done in many cases.  One
should be careful, no matter, I suppose.

Well, you've given me a story early on, about modeling PMTs.
Let me tell you one.

Calculating a standard deviation is often done by "smarty pants"
programmers with a modified version of the "standard equation"
where only one pass through the data is required.  You know the
one, where you accumulate both a SUM(x) and SUM(x^2).  At the
end, a difference calculation is used.  But in this case, the
magnitude of both parts are often quite similar, leaving only
the least significant bits in the result.

When this happens, preserving those bits during accumulation can
be very important.  For example, what often isn't realized is
that it is important to pre-sort the data before accumulation so
that the smaller numbers can accumulate to larger values
*before* getting swamped out by the accumulation of one of the
larger values.  If the largest value, for example, were added
first, the smaller values might very well truncate out
completely as they are accumulated and never get a chance to
impact the least significant bits in the summed result, before
the difference is taken and they become crucial for the final
calculation.
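
A small C comparison makes the hazard concrete -- the textbook
contrast between the one-pass sum/sum-of-squares formula and a running
(Welford-style) accumulation.  Illustrative values, not data from any
project mentioned here:

#include <stdio.h>

#define N 3

int main(void)
{
    /* Large offset, tiny spread: the classic cancellation trap. */
    double x[N] = { 1e8 + 1.0, 1e8 + 2.0, 1e8 + 3.0 };

    /* One-pass formula: accumulate SUM(x) and SUM(x^2), then take a
       difference of two nearly equal large numbers at the end.  With
       an offset this big, the answer drowns in roundoff.            */
    double sum = 0.0, sumsq = 0.0;
    for (int i = 0; i < N; i++) { sum += x[i]; sumsq += x[i] * x[i]; }
    double var1 = (sumsq - sum * sum / N) / (N - 1);

    /* Welford's running update keeps the small differences intact. */
    double mean = 0.0, m2 = 0.0;
    for (int i = 0; i < N; i++) {
        double d = x[i] - mean;
        mean += d / (i + 1);
        m2   += d * (x[i] - mean);
    }
    double var2 = m2 / (N - 1);

    printf("one-pass: %g   Welford: %g   (exact: 1)\n", var1, var2);
    return 0;
}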

This is only one of many subtle examples.  And the analysis is
sometimes rather difficult to shepherd well, without training
and practice.  On the other hand, analyzing integer math is, by
comparison, much more of an "undergrad" kind of thing.  The
issues are more tractable to more people, as a rule.

And in the end, it *is* helpful to remember that it's integer
in, integer out, for many embedded systems and it's worth doing
to analyze the data flows throughout.  My belief is that
floating point should be justified by the proponents.  But so
should integer.

In other words, someone should be paying attention and it should
be clear from the record why either integer or floating point is
chosen for a particular application.  But to be honest, the
issues of floating point are more subtle and the skills required
to properly analyze it are greater, I think.

In any case, it's good to question someone and make them think
about it.

Tell me about it.  It's quite common for me to prepare a 10 or
20 page analysis, complete with timing diagrams and mathematical
derivations.  I'll include error budgets/tolerances in that
analysis and show how I got them.  Sometimes, people just want
me to roll up my sleeves and get the task out.  But I need
confidence, even if they don't.  So I do the work, anyway.

Originally, I'd hoped that others would actually take the chance
to point out my errors and help me improve the documents.  But
most of my target readers just ignore them, assuming I am
getting things right, or unable to challenge my points, or
unwilling to put in the time.  No matter.  Now, I just do it
mostly for my own sake -- just to help me be sure that I've
covered the issues and to provide myself with something to look
back on at a later time.  It's turned out to help me a lot to
get back into the right mindset, when having to return to a
project.

So the point often isn't anymore to get input from others.  It's
more for me.  I can live with that.

Sadly, too few programmers have learned numerical methods for
analysis -- for example, power functions applied to recurrences.
Who today reads through each page of Knuth's 3-vol set, as I did
when it came out?  (Or his "Concrete Mathematics," published
recently, or Chapra and Canale's "Numerical Methods for
Engineers" or your own book or a host of others worth studying.)

Times have changed, I suppose.

Jon


Re: Floating point format for Intel math coprocessors
Open-source alternatives to Mathcad include Rlab, Scilab, and
especially Octave.  I don't know of any open-source work-alikes
for Mathcad specifically, though; in any case, I suspect <URL:
http://phaseit.net/claird/comp.programming/open_source_science.html >
will interest you.
--
Business:  http://www.Phaseit.net
Re: Floating point format for Intel math coprocessors
Hi Jack!

I've always used the term "implied bit". I think I saw it in the 80186
programmer's reference section on using an 80187 coprocessor.

BTW: thanks for the articles (and the Math toolkit book) on interpolating
functions - I use that stuff over and over again. In fact, I'm simulating an
antilog interpolation routine (4 terms by 4 indices) right now that will
eventually run on a PIC (no hardware multiply; not even an add-with-carry
instruction). Those forward and backward difference operators make it all
pretty easy! With no hardware support from the PIC, it will end up taking
near 10 ms from 24-bit ADC to final LED output, but it will be better than
14-bit accurate over the 23-bit output dynamic range.
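
Bob's actual routine isn't shown here, but the difference-operator
approach reads roughly like this generic Newton forward-difference
sketch in C (four terms, as above; table and test values made up):

#include <stdio.h>

#define TERMS 4

/* Newton forward-difference interpolation over an equally spaced
   table:  f(x0 + p*h) ~ y0 + C(p,1)*D1 + C(p,2)*D2 + C(p,3)*D3,
   where Dk is the k-th forward difference of y0.                 */
double interp4(const double y[TERMS], double p)
{
    double d[TERMS];
    for (int i = 0; i < TERMS; i++) d[i] = y[i];

    /* forward differences in place: d[k] becomes Dk */
    for (int k = 1; k < TERMS; k++)
        for (int i = TERMS - 1; i >= k; i--)
            d[i] -= d[i - 1];

    double result = 0.0, coef = 1.0;
    for (int k = 0; k < TERMS; k++) {
        result += coef * d[k];
        coef *= (p - k) / (k + 1);   /* builds binomial(p, k+1) */
    }
    return result;
}

int main(void)
{
    /* made-up table: y = 2^x at x = 0, 1, 2, 3 */
    double y[TERMS] = { 1.0, 2.0, 4.0, 8.0 };
    printf("2^1.5 ~ %g\n", interp4(y, 1.5));   /* exact: 2.8284... */
    return 0;
}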

all the best,
Bob



Re: Floating point format for Intel math coprocessors

I think I got the term "phantom bit" from Intel's f.p. library for the
8080, ca. 1975.  Then again, I've been doing my own ASCII-binary and
binary-ASCII conversions since way before that, on big IBM iron.  We
pretty much had to, since the old Fortran I/O routines were so
incredibly confining.  It had to be around 1960-62.  But the old 7094
format didn't use the "phantom" bit, AIR.

You just haven't lived until you've twiddled F.P. bits in Fortran <g>.

Sounds neat. I'm glad I could help.

FWIW, there's a fellow in my office who has a Friden calculator
sitting on his credenza.  He's restoring it.

Jack

Re: Floating point format for Intel math coprocessors

Or any other HLL, for that matter!

He has a slide rule for backup?


Speaking of hardware math mysteries, Dr. Crenshaw, et al,
does anyone know how the (very few) computers that have
the capability perform BCD multiply and divide?  Surely
there's a better way than repeated adding/subtracting n
times per multiplier/divisor digit.  Converting arbitrary
precision BCD to binary, performing the operation, and
then converting back to BCD wouldn't seem to be the way
to go (in hardware).

Re: Floating point format for Intel math coprocessors
On Wed, 2 Jul 2003 07:48:49 PST, snipped-for-privacy@iwvisp.com (Everett M.

What is the problem?

IIRC, IAND and IOR are standard functions in FORTRAN, and in many
implementations the .AND. and .OR. operators between integers actually
produced bitwise AND and bitwise OR results.

Doing bit manipulation in COBOL is a bit messy :-).



If you have BCD hardware, why on earth should the values be converted
to binary for multiply or divide?  Do it just as if you are doing it
on paper.  To make a BCD computer (or a four-function calculator) you
just need a BCD adder and some circuitry to form the 9's complement of
a value.  Some hardware doing BCD x BCD, producing a two-digit BCD
value (or a table lookup), will speed up some operations quite a lot.
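
That pencil-and-paper method might look like this in C, with unpacked
BCD digits stored one per byte, least significant first (illustrative
only, not any particular machine's implementation):

#include <stdio.h>

#define DIGITS 8                  /* digits per operand */

/* Schoolbook BCD multiply: one digit-by-digit product at a time,
   with decimal carries -- exactly the on-paper method.          */
void bcd_mul(const unsigned char a[DIGITS],
             const unsigned char b[DIGITS],
             unsigned char prod[2 * DIGITS])
{
    int i, j;
    for (i = 0; i < 2 * DIGITS; i++) prod[i] = 0;

    for (i = 0; i < DIGITS; i++) {
        unsigned carry = 0;
        for (j = 0; j < DIGITS; j++) {
            unsigned t = prod[i + j] + a[i] * b[j] + carry;
            prod[i + j] = (unsigned char)(t % 10);   /* keep one digit */
            carry       = t / 10;                    /* carry the rest */
        }
        for (j = i + DIGITS; carry && j < 2 * DIGITS; j++) {
            unsigned t = prod[j] + carry;            /* ripple carry   */
            prod[j] = (unsigned char)(t % 10);
            carry   = t / 10;
        }
    }
}

int main(void)
{
    /* 1234 * 5678 = 7006652, digits least significant first */
    unsigned char a[DIGITS] = { 4, 3, 2, 1, 0, 0, 0, 0 };
    unsigned char b[DIGITS] = { 8, 7, 6, 5, 0, 0, 0, 0 };
    unsigned char p[2 * DIGITS];
    int i;

    bcd_mul(a, b, p);
    for (i = 2 * DIGITS - 1; i >= 0; i--) printf("%u", p[i]);
    printf("\n");                 /* prints 0000000007006652 */
    return 0;
}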
 
Paul


Re: Floating point format for Intel math coprocessors

Hmphh!  In Fortran II, we were lucky to get add and subtract.  No such
thing as .AND. and .OR. there.

Jack

Re: Floating point format for Intel math coprocessors

That's what happens when you punch "//JOB T"
--
Morris Dovey
West Des Moines, Iowa USA
Re: Floating point format for Intel math coprocessors

Grin!!  I did a _LOT_ of work on the 1130.  That's where I learned a
lot of my Fortran skills.  IMO the 1130 was one of IBM's very few
really good computers.  Ours had 16k of RAM <!>, and one 512k,
removable HD.  And it supported 100 engineers, plus the accounting
dept.

The 1130 OS provided all kinds of neat tricks (remember LOCAL?) to
save RAM.  I was generating trajectories to the Moon and Mars on it.
Its Fortran compiler was designed to be fully functional with 8k of
RAM, total.  Let's see Bill Gates try _THAT_!!!

Jack

Re: Floating point format for Intel math coprocessors
On Sat, 05 Jul 2003 15:00:37 GMT, Jack Crenshaw wrote:

provided timeshared BASIC for 32 users, by the way, and lived in
16k RAM -- 6k for the interpreter and 10k for the swapped user
page.  Included Chebyshev and minimax methods for the
transcendentals -- something Intel failed to use for their x87
floating point units until the advent of the Pentium, many years
later.
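
For the curious, here's a small self-contained C sketch of the
standard Chebyshev recipe -- sample the function at the Chebyshev
nodes, form the series coefficients, and evaluate with Clenshaw's
recurrence.  This is the generic textbook method, not that BASIC's
actual code:

#include <stdio.h>
#include <math.h>

#define N  8                        /* number of series terms */
#define PI 3.14159265358979323846

/* Chebyshev coefficients of f on [-1,1]:
   c[k] = (2/N) * sum_j f(cos t_j) * cos(k*t_j),  t_j = pi(j+0.5)/N */
static void cheb_coeffs(double (*f)(double), double c[N])
{
    for (int k = 0; k < N; k++) {
        double s = 0.0;
        for (int j = 0; j < N; j++) {
            double t = PI * (j + 0.5) / N;
            s += f(cos(t)) * cos(k * t);
        }
        c[k] = 2.0 * s / N;
    }
}

/* Clenshaw's recurrence for sum c[k]*T_k(x), with c[0] halved */
static double cheb_eval(const double c[N], double x)
{
    double b1 = 0.0, b2 = 0.0;
    for (int k = N - 1; k >= 1; k--) {
        double b0 = 2.0 * x * b1 - b2 + c[k];
        b2 = b1;
        b1 = b0;
    }
    return x * b1 - b2 + 0.5 * c[0];
}

int main(void)
{
    double c[N];
    cheb_coeffs(sin, c);            /* approximate sin on [-1,1] */
    for (double x = -1.0; x <= 1.001; x += 0.5)
        printf("x=%5.2f  series=% .10f  sin=% .10f\n",
               x, cheb_eval(c, x), sin(x));
    return 0;
}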

Oh, well.

Jon


Re: Floating point format for Intel math coprocessors

I was just making a light-hearted comment.  I just guessed that
someone who has an interest in older technology would have some
even older things.  Does he have an abacus or two -- just in case?

Ah, yes.  I'd forgotten about the ol' divide by zero on the
electro-mechanical calculators.  And the only "reset" was to
pull the plug.

I'll have to think about the use of multiplication tables,
especially for the case of packed values.

Addition and subtraction, as you say, are accommodated by the DAA
instruction.  Most of the micros only have a decimal-adjust for
addition, so you have to learn how to subtract by adding the
tens-complement of the value being subtracted.
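
In C, the complement-and-add trick looks something like this (unpacked
digits, least significant first; an 8080 would do the same thing two
packed digits at a time with ADD and DAA):

#include <stdio.h>

#define DIGITS 4

/* BCD subtract a - b by adding the ten's complement of b:
   a - b = a + (10^DIGITS - b), discarding the carry out.
   Digits are stored least significant first, one per byte. */
void bcd_sub(const unsigned char a[DIGITS],
             const unsigned char b[DIGITS],
             unsigned char diff[DIGITS])
{
    unsigned carry = 1;       /* +1 turns 9's complement into 10's */
    for (int i = 0; i < DIGITS; i++) {
        unsigned t = a[i] + (9 - b[i]) + carry;   /* add 9's complement */
        diff[i] = (unsigned char)(t % 10);
        carry   = t / 10;
    }
    /* carry == 1 here means the result is non-negative (a >= b) */
}

int main(void)
{
    unsigned char a[DIGITS] = { 2, 3, 4, 1 };   /* 1432 */
    unsigned char b[DIGITS] = { 7, 6, 5, 0 };   /*  567 */
    unsigned char d[DIGITS];

    bcd_sub(a, b, d);
    for (int i = DIGITS - 1; i >= 0; i--) printf("%u", d[i]);
    printf("\n");             /* prints 0865: 1432 - 567 = 865 */
    return 0;
}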

Re: Floating point format for Intel math coprocessors

Actually, he does -- several.  He seems to be collecting every
possible example of mechanical ways to do math.  His office wall is a
work of art.

Agreed, although some (I think the Z80 was one) worked for subtraction
as well.  As for multiplication and division, forget it.  But if you
do the Friden trick of shifting, successive subtraction works pretty
well.  Still not nearly as fast as binary, of course, but if you have
to convert back & forth for I/O anyway, it may still be more efficient
to do it in BCD.  Plus, you don't have to deal with the bother of a 1
that becomes 0.99999999.
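
And the Friden-style shift-and-subtract division might be sketched
like so -- again unpacked digits, least significant first,
illustrative only, and with no guard against a zero divisor:

#include <stdio.h>
#include <string.h>

#define DIGITS 8

/* Does rem >= d * 10^shift, within DIGITS digits? */
static int ge_shifted(const unsigned char rem[DIGITS],
                      const unsigned char d[DIGITS], int shift)
{
    for (int j = DIGITS - shift; j < DIGITS; j++)
        if (d[j]) return 0;          /* shifted divisor overflows */
    for (int i = DIGITS - 1; i >= 0; i--) {
        unsigned lhs = rem[i];
        unsigned rhs = (i >= shift) ? d[i - shift] : 0;
        if (lhs != rhs) return lhs > rhs;
    }
    return 1;
}

/* rem -= d * 10^shift (caller guarantees it fits) */
static void sub_shifted(unsigned char rem[DIGITS],
                        const unsigned char d[DIGITS], int shift)
{
    int borrow = 0;
    for (int i = shift; i < DIGITS; i++) {
        int t = rem[i] - d[i - shift] - borrow;
        borrow = t < 0;
        rem[i] = (unsigned char)(borrow ? t + 10 : t);
    }
}

/* Long division: at each digit position, subtract the shifted
   divisor until it no longer fits, counting the subtractions.  */
void bcd_div(const unsigned char a[DIGITS], const unsigned char b[DIGITS],
             unsigned char q[DIGITS], unsigned char rem[DIGITS])
{
    memcpy(rem, a, DIGITS);
    memset(q, 0, DIGITS);
    for (int shift = DIGITS - 1; shift >= 0; shift--)
        while (ge_shifted(rem, b, shift)) {
            sub_shifted(rem, b, shift);
            q[shift]++;
        }
}

int main(void)
{
    unsigned char a[DIGITS] = { 2, 5, 6, 6, 0, 0, 7, 0 };  /* 7006652 */
    unsigned char b[DIGITS] = { 4, 3, 2, 1, 0, 0, 0, 0 };  /* 1234    */
    unsigned char q[DIGITS], r[DIGITS];

    bcd_div(a, b, q, r);
    for (int i = DIGITS - 1; i >= 0; i--) printf("%u", q[i]);
    printf("\n");             /* prints 00005678 (no remainder) */
    return 0;
}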

Calculators do everything completely differently.  In a calculator (at
least the old ones -- modern things like PDA's have RAM to burn), time
isn't an issue.  The CPU is always waiting for the user anyway, so
efficiency of computation isn't required.  Saving ROM space is lots
more important.

Jack
