converting float to ascii w/o printf

I have worked on several projects where the "requirements spec" developed in parallel with the software. When the changes are big, it can mean rewriting major sections of code simply because the original software architecture cannot support the new features. It's a dangerous path and always adds expense and time to market for the client. Guess who gets the blame when the project is late, or the product unreliable, etc.?

It's up to engineers to hold the line against such sloppy project management, even if it means walking away from the job. You always have a choice. There are a lot of small to medium-sized companies that pay lip service to good development practice and quality, but it gets lost in the political noise, especially if marketing / finance / engineering are at war with each other (pick any two, or even all 3), which is not uncommon. Lack of understanding, ethics, communication, ego and mistrust etc. all contribute to this.

Agreed, and it is to be expected, but there are limits. There are more constructive ways to be challenged by your work, without being a hero or martyr :-)...

Chris

Reply to
ChrisQuayle

Although this is true, most do not support the infinities, (signalling/non-signalling) NaNs, and denormalized values of the full standard.

There are buggy users also. I once had a vendor complain that tan(pi/2 - one bit) didn't agree to seven decimal places with a value obtained from another source. I had to do a lot of explaining to convince them that at that point even the first decimal place is questionable, never mind the seventh.
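To put a number on that, here is an illustration of my own (assuming an ordinary IEEE double-precision library on a desktop, linked with -lm): moving the argument by a single ulp near pi/2 changes tan() by a huge relative amount, so agreement past the first digit is luck.

#include <math.h>
#include <stdio.h>

/* Illustration only: tan() one bit away from pi/2.  A one-ulp change
 * in the argument changes the result by a large factor, so expecting
 * seven matching decimal places from two different sources is hopeless. */
int main(void)
{
    double x  = acos(0.0);              /* the library's best pi/2 */
    double x1 = nextafter(x, 0.0);      /* one ulp below it        */
    double x2 = nextafter(x1, 0.0);     /* two ulps below it       */

    printf("tan(pi/2)          = %.17g\n", tan(x));
    printf("tan(pi/2 - 1 ulp)  = %.17g\n", tan(x1));
    printf("tan(pi/2 - 2 ulps) = %.17g\n", tan(x2));
    return 0;
}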

Reply to
Everett M. Greene

Unless strict conformance to IEEE-754 is required, at least for a floating point software emulation on an 8 bitter it would make sense to use some other floating point bit layout and interpretation than the messy IEEE format. For instance, the hidden-bit normalisation adds extra overhead, and it is not obvious that sign/magnitude representation for the mantissa (instead of 2's complement), excess notation for the exponent, or even base 2 itself is the only possible choice. It might even make sense to use a full 32 bit mantissa (depending on the availability of 8/16 bit multiply instructions) and 8-16 bits for the exponent, but this would make sizeof(float) 5 or 6, which would at least cause problems with unions. An example of such a software implementation was the Borland Turbo Pascal "real" data type, from the days before the 8087 floating point co-processor became widely available.
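Just to sketch what such a non-IEEE layout could look like, here is a hypothetical 6-byte format of my own invention, only loosely in the spirit of the Turbo Pascal real (it is not that format, and the names are made up):

#include <stdint.h>

/* Hypothetical 6-byte software float, no hidden bit, no INF/NAN,
 * no denormals:
 *   value = (-1)^sign * (mantissa / 2^31) * 2^exponent
 * with the mantissa normalised so bit 31 is set, or all fields 0
 * for zero.  sizeof == 6, so it no longer overlays a C float. */
typedef struct {
    uint32_t mantissa;   /* full 32-bit magnitude, top bit set unless zero */
    int8_t   exponent;   /* plain two's complement, no excess notation     */
    uint8_t  sign;       /* 0 = positive, 1 = negative (sign/magnitude)    */
} sfloat6;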

Paul

Reply to
Paul Keinanen

Amazingly, sign-magnitude notation works quite well. I once questioned its use as well, but experience implementing interpreted float operations on small processors finds it to be a help, not a hindrance.

I once worked with a machine that had a 32/32 hardware float format. It gave something between single- and double-precision for the mantissa and severe overkill for the exponent range.

Reply to
Everett M. Greene

Going to floating point because one does not properly define system limits and the required ranges is just asking for trouble. If one ends up with NAN or INF, this tends to propagate. On a system with an OS, this is trapped, and one gets an error. On a typical embedded system one ends up with total garbage. Then there is the fact that with integers one can do variable+1 over the full range and always have the correct value. With floating point one quickly gets to a stage where the variable does not have the right value at all.

Trying to do scaled integer implementations in the minimum theoretical bit size needed is difficult. gcc supporting 64-bit integers even on something like the AVR makes life much easier. There are few applications where scaled 64-bit integers do not provide enough precision and range.
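As a rough illustration of what is meant (a sketch of my own; fix16_t, FIX16 and the rest are made-up names): carry values in an int64_t with, say, 16 fractional bits, and the arithmetic stays exact integer arithmetic with plenty of headroom.

#include <stdint.h>

/* Hypothetical scaled-integer sketch: values held in int64_t in units
 * of 1/65536 (16 fractional bits).  Addition is exact, and a multiply
 * by a small integer gain stays inside 64 bits, so no floating point
 * is needed. */
typedef int64_t fix16_t;                 /* stored value = real value * 65536 */

#define FIX16(x)  ((fix16_t)((x) * 65536))

static inline fix16_t fix16_from_int(int32_t v) { return (fix16_t)v << 16; }
static inline int32_t fix16_to_int(fix16_t v)   { return (int32_t)(v >> 16); }

/* e.g. out = in * 3 + offset, all exact integer arithmetic */
static inline fix16_t scale_and_offset(fix16_t in, fix16_t offset)
{
    return in * 3 + offset;
}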

Doing a robust floating point implementation that handles all the exceptions properly is non-trivial. Typical 8-bit implementations take shortcuts.

Regards Anton Erasmus

Reply to
Anton Erasmus

With integers, it is possible to verify the implementation of an algorithm against the model by bit-to-bit comparison. It is not so trivial with floating point because of implementation-dependent precision issues.

I know one practical application where it may be necessary to use integers of more than 64 bits: CIC filters. Can you give another example?
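For a back-of-the-envelope feel for why (my illustration, using the usual Hogenauer register-growth bound of roughly N*log2(R*M) + input bits): even a modest decimator blows past 64 bits.

#include <math.h>
#include <stdio.h>

/* Rough CIC register growth: bits needed ~= N * log2(R * M) + input_bits.
 * Example: 5 stages, decimate by 4096, differential delay 1, 16-bit input. */
int main(void)
{
    int    N = 5, R = 4096, M = 1, in_bits = 16;
    double bits = N * log2((double)R * M) + in_bits;

    printf("accumulator width needed: about %.0f bits\n", bits);  /* ~76 */
    return 0;
}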

This is more of an application problem than a float library problem. It is difficult to foresee and handle all special cases even for a moderately complicated system of equations. However, it is impossible to approach a task of that level of complexity with integer arithmetic.

Vladimir Vassilevsky

DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

I know, and that's one of the best features of IEEE floating point. It prevents you from having bogus (but apparently valid) outputs when one of the inputs is missing or invalid.

No, you end up with INF or NAN.

How do you represent infinity or an invalid value in an integer?

No, you _do_ end up with the right value: INF or NAN. That's the whole point. If an output is being calculated from a set of inputs where one of the inputs is invalid (NAN), then the output is invalid (NAN). That's very difficult to do with integer representations.
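A minimal illustration of the point (assuming an IEEE-style float library with NAN support; my sketch, not anything from a real product):

#include <math.h>
#include <stdio.h>

/* A missing input marked as NAN poisons every result derived from it,
 * and one isnan() test at the output catches the whole chain. */
int main(void)
{
    float missing = NAN;                       /* sensor value not available */
    float good    = 21.5f;

    float average = (missing + good) / 2.0f;   /* NAN propagates */

    if (isnan(average))
        printf("output invalid - an input was missing\n");
    else
        printf("average = %f\n", average);
    return 0;
}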

I've been using floating point on 8-bit platforms for 15 years, and I've got no complaints.

--
Grant Edwards                   grante             Yow!  I smell like a wet
                                  at               reducing clinic on Columbus
                               visi.com            Day!
Reply to
Grant Edwards

Yes, but if this is not checked, and the INF or NAN is cast to an integer to drive a DAC for example, then one is driving the DAC with garbage. Typical embedded floating point implementations will happily give a result if one casts NAN or INF to an integer, and AFAIK there is no guarantee that INF cast to an integer will be the maximum integer value. Even if one uses an OS which actually provides floating point exception handling, it is quite difficult to keep things under control if one suddenly gets a NAN or INF because one did a number/(big_number - (big_number - delta)) calculation that ended up as a division by zero, which in turn made the value INF or NAN. To avoid this one has to scale the whole algorithm in any case, so one can just as well use fixed point maths. On some systems one has to reset the floating point co-processor and basically restart the system. For some control systems this is a VERY bad idea.
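In other words the conversion has to be guarded explicitly, something along these lines (a hypothetical sketch; DAC_MAX and dac_write() are made-up names, not any particular driver):

#include <math.h>
#include <stdint.h>

#define DAC_MAX 4095u                    /* hypothetical 12-bit DAC */

extern void dac_write(uint16_t code);    /* hypothetical driver call */

/* Guarded float-to-DAC conversion: NAN falls back to a safe code,
 * INF and out-of-range values saturate, so the DAC never sees the
 * garbage a raw (uint16_t)value cast could produce. */
static void dac_write_guarded(float value, uint16_t safe_code)
{
    if (isnan(value)) {
        dac_write(safe_code);            /* invalid input: go to safe state */
    } else if (value <= 0.0f) {
        dac_write(0);                    /* covers -INF as well */
    } else if (value >= (float)DAC_MAX) {
        dac_write(DAC_MAX);              /* covers +INF as well */
    } else {
        dac_write((uint16_t)value);
    }
}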

Make sure that one has enough bits to represent the range, and then saturate at the maximum or minimum. For many control systems this is good enough. If one gets a huge error, commanding the maximum corrective response is all one can do in any case. The only other invalid case then is normally divide by zero, and this can be tested for and handled, often again by commanding the maximum corrective response.
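A saturating add in plain C looks something like this (a generic sketch of my own, not tied to any particular toolchain; some cores also offer native saturating instructions):

#include <stdint.h>

/* Saturating 32-bit add: overflow clips to INT32_MAX / INT32_MIN
 * instead of wrapping, which for a control loop usually just means
 * "command the maximum corrective response". */
static int32_t sat_add32(int32_t a, int32_t b)
{
    int64_t sum = (int64_t)a + (int64_t)b;   /* widened, cannot overflow */

    if (sum > INT32_MAX) return INT32_MAX;
    if (sum < INT32_MIN) return INT32_MIN;
    return (int32_t)sum;
}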

In a typical control system handling INF is problematical, and handling NAN is a nightmare. I have worked on a system where floating point was used in the control algorithms as well as in a background debugging task that displayed various floating point parameters. The system had hardware floating point support using a co-processor. The control task ran in a timer interrupt, while the debug task used all the left-over CPU time. Great care had to be taken to switch the co-processor state each time the timer interrupt routine was entered and exited. The debug task caused a NAN which propagated to the control task even though it was in a separate thread. One had to re-initialise the co-processor to get out of the NAN state.

Support for floating point on 8-bit platforms used to run major OSes tends to be robust and sorted out. On embedded micros, shortcuts are taken for speed and size reasons.

What was the main driving force for the development of floating point? (This is a serious question - not a troll.) One can represent the range and precision in fixed point, and AFAIK it is much easier to code this in assembler than to code a robust floating point implementation. The only reason I can think of was that memory was so expensive that people had to do everything possible to minimize memory usage. Today memory is orders of magnitude cheaper and one has relatively a lot even on quite small micros. On bigger systems, fixed point representations with hardware acceleration of 256-bit or 1024-bit or even wider words would be a lot less complex and should be a lot faster than floating point hardware. A pity that none of the mainstream languages has support for fixed point.

Regards Anton Erasmus

Reply to
Anton Erasmus

... snip ...

I think Anton is including omission of detection and propagation of INF and NAN as "short cuts". The other approach is to trap those operations as they occur.

--
Chuck F (cbfalconer at maineline dot net)
   Available for consulting/temporary embedded and systems.
Reply to
CBFalconer

All of the implementations I've used handled INF and NAN properly. That includes processors as small as a 6811 back in 1989, and about 8 others since then.

Perhaps others are less careful when choosing their tools?

--
Grant Edwards                   grante             Yow!  One FISHWICH coming
                                  at               up!!
                               visi.com
Reply to
Grant Edwards

In-band signalling of special conditions is useful in some cases, when there are a limited number of special cases. However, in more complex situations, when the signal is processed at various stages, getting any meaningful information through the whole chain is quite hard.

With out-of-band signalling, a status field is attached to the actual value. This status field can express many more situations, such as sensor open/short circuit, out of range etc. For instance, in the Profibus PA fieldbus, each signal is represented by 5 bytes: four bytes for the actual floating point value and one byte for the actual status ("quality") of the signal.

So if you are planning to use a floating point value with the sole purpose of using in-band signalling with NAN, +INF and -INF, it might make more sense to use a 16 or 24 bit integer value field and an 8 bit status field, which has the same or smaller memory footprint than the 32 bit float. On an 8 bitter, less memory access is required to check the status byte than to determine whether the float is a very large value, INF or NAN, which requires testing up to 4 bytes. This is significant, since the exception conditions should be checked every time before using the value.
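Sketched in C, the idea might look like this (a hypothetical layout of my own; the names and status codes are made up and this is not the actual Profibus PA encoding):

#include <stdint.h>

/* Hypothetical out-of-band signal representation: a 16-bit scaled
 * integer value plus an explicit status byte, 3 bytes in total.
 * A single byte compare tells whether the value may be used at all. */
enum sig_status {
    SIG_GOOD = 0,
    SIG_OPEN_CIRCUIT,
    SIG_SHORT_CIRCUIT,
    SIG_OUT_OF_RANGE,
    SIG_NOT_INITIALISED
};

typedef struct {
    int16_t value;       /* scaled engineering value */
    uint8_t status;      /* one of enum sig_status   */
} signal_t;

#define SIGNAL_USABLE(s)  ((s).status == SIG_GOOD)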

Paul

Reply to
Paul Keinanen

Ada supports fixed point. I consider it a "main stream language", but some perhaps don't...

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .
Reply to
Niklas Holsti

I didn't get it (or my spam filters got to it first). Try sending it to ron dot blancarte at benchtree dot net. That is my work address and would be where I was looking for this solution.

-R

Reply to
Ron Blancarte

So do COBOL and PL/1, but are these mainstream languages in the embedded world? ;-)

Paul

Reply to
Paul Keinanen

As far as I am aware, there is no mainstream language available from small 8-bit micros up to 64 bit processors that has support for fixed point. This is true whether one considers Ada a mainstream language or not. C, and maybe Forth, are probably the only languages supported across the full range of micros.

Regards Anton Erasmus

Reply to
Anton Erasmus

Ada is now available on the AVR.

What I don't know, because I don't use the AVR, is if Ada's fixed point packages are also available on the AVR.

See formatting link for further details if you are interested.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980's technology to a 21st century world
Reply to
Simon Clubley
