NIOS and ftoa()

Hi folks-

I'm in the beginning stages of crafting a high-speed measurement device which needs to output a floating point value, in ASCII string form, representing the measurement at a rate of 10 kHz. I'm using an Altera FPGA running a Nios II processor. Convenient standard library functions like sprintf() or ftoa() running as code will be too time-consuming to meet my high throughput requirements. What I need is an ultrafast float-to-ASCII conversion function that can run in code *or* a strategy for implementing a conversion function in HDL. Altera has a nice C-to-HDL tool (the Nios II C2H compiler) which I'm looking at now.

It seems to me that a fast float to ASCII conversion function would be a common function of many embedded systems. Rather than me reinventing the wheel, can anyone point me to a resource (example on the web or a product for sale) that I can use to achieve my goal?

Thanks, John Speth.

Reply to
John Speth

I've always avoided trying to do any sort of conversion in an embedded system /because/ it tends to be slow. Also there are other difficulties with things like ftoa in that they (and other library routines) often use static buffers.

Do you have to output the number as floating point? Could you consider outputting it as hexadecimal and conversion to floating point subsequently?

Do you have to output at 10kHz? Could you consider outputting at a lower rate or is the output device actually storing it? (I personally can't read characters at 10kHz!)

This raises the question: where, or to what, is this output going?

Andrew

Reply to
Andrew Jackson

What is the source of your data? Surely (?) your "sensor" isn't generating floating point data?? I.e., can you deal with the data in some other form that is more readily converted to ASCII? (e.g., if you have to manipulate the data somehow before "output", use decimal math -- this is trivial to implement unless you start getting into the transcendental functions, etc.)

Reply to
D Yuniskis

You are probably in far better position than most to know what features characterize your values. What is their dynamic range and magnitude, for example? How many ASCII characters will suffice? Is a radix point required? A sign?

A highly generic library routine is not going to be as fast as one that takes into account a priori information you know about your values. It's helpful to start by carefully delineating your boundary conditions. (Unless your values can truly span the entire dynamic range of the floating point representation itself. And if so, I would wonder how seriously you need to handle edge cases such as denormals, NaNs, INFs, and so on.)

Jon

Reply to
Jon Kirwan

Even if it is generating floating point (we'll assume IEEE 754 format), just output the IEEE fp value in binary (or ASCII-hex) rather than attempting to convert it to base-10 floating point in ASCII.
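A minimal sketch of that "just ship the raw bits" idea, assuming a 32-bit IEEE 754 float and a little scratch buffer (float_to_hex is a made-up name, not a standard routine):

```c
#include <stdint.h>
#include <string.h>

/* Render the raw IEEE-754 bit pattern of a float as 8 uppercase hex
   digits in buf (9 bytes including the NUL).  No FP math, no division:
   just a memcpy and some shifts, so it's about as cheap as it gets. */
static void float_to_hex(float f, char buf[9])
{
    static const char digits[] = "0123456789ABCDEF";
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* reinterpret the bits, no conversion */
    for (int i = 7; i >= 0; --i) {
        buf[i] = digits[bits & 0xFu]; /* peel off one nybble at a time */
        bits >>= 4;
    }
    buf[8] = '\0';
}
```

The receiving side reverses the process: parse the 8 hex digits back into a uint32_t and memcpy into a float.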

Nope. Reduce/process/buffer the data _then_ convert it to human-readable ASCII form.

--
Grant Edwards                   grante             Yow! Should I do my BOBBIE
                                  at               VINTON medley?
                               visi.com
Reply to
Grant Edwards

This is roughly what the others are saying, but in different words.

First convert your values into scaled integers of some sort. Then convert these to ASCII - that's just a series of divide-by-ten calculations, and a little care where you put your decimal point and zeros.
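As a sketch of that divide-by-ten approach, assuming the value has already been scaled to an integer count of thousandths (so 12345 prints as "12.345" -- the field widths and radix point position are assumptions you'd adjust for your own range):

```c
/* Format a scaled integer as a fixed-width "xx.xxx" string.
   buf must hold at least 7 bytes.  Each pass peels off one decimal
   digit with a divide-by-ten; leading zeros are kept so the output
   width is constant, which the receiver may find convenient. */
static void format_fixed(unsigned value, char buf[7])
{
    buf[6] = '\0';
    for (int i = 5; i >= 0; --i) {
        if (i == 2) {               /* radix point between digits 2 and 3 */
            buf[i] = '.';
            continue;
        }
        buf[i] = '0' + (char)(value % 10);
        value /= 10;
    }
}
```

On a core without a hardware divider, `value % 10` / `value / 10` is exactly the inner loop worth accelerating (or replacing with a multiply-by-reciprocal trick).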

Also make sure you are using a Nios with fast division, and consider building it with hardware floating point support (though it is better if you can avoid floating point entirely, and stick to scaled integers).

Don't bother trying to make hardware (either manually or with C to HDL) to support the entire conversion function. But it might be worth making a special instruction (or C to HDL conversion) for the inner loop.

Reply to
David Brown

I assume he knows what he needs (too often, we spend a lot of time with "Why do you want *that*?" instead of just assuming the OP is smart enough -- ? -- to *know* that he truly needs what he is asking for). While it is likely (?) that the device on the other end of his "measurement device" *probably* will convert that ASCII string back into some sort of "numeric" representation (perhaps even floating point!), there is no guarantee that this is the case. E.g., the device on the other end may be nothing more than a data logger that takes the ASCII and records it on some medium. Or, the device might *expect* data in this ASCII format.

I am increasingly moving away from "binary" formats to things like ASCII strings because it is so much more portable. When you have to interface lots of heterogeneous systems, it's just easier to pick *a* standard interchange format and live with that throughout than to have to deal with various idiosyncrasies of each individual device. E.g., does the FP implementation support subnormal/denormalized values? What range of exponents is supported? Does it support signed zeros? Is it a big- or little-endian representation? Etc.

Granted, this is a performance hit, often (not always). But, *FOR ME* it seems to be a worthwhile tradeoff (vs. adding complexity to devices so they know about each *other's* characteristics). I have little PIC-based sensor/control devices that emit/accept ASCII commands instead of pushing "proprietary bits" to/from them. So far, it has worked quite well (it also makes debugging a multiprocessor system considerably *easier* when you can just *read* messages as text -- instead of having to "decode" them :> )

I was advocating a middle-ground; get the data into a form that is *relatively* easy to process (decimal) and *much* easier to convert to that human readable form at the end.

E.g., there are times I will resort to using decimal counters in a piece of hardware (vs. binary counters) as it can eliminate the overhead of doing the conversion (in software) at the expense of a few gates in the feedback paths to the various stages of the counters.

Reply to
D Yuniskis

(I'm the OP)

Thanks for all the good suggestions that have come forth so far. Some will be useful and some won't be. I can envision considerable execution savings if I use some sort of fixed point math and special-purpose custom formatting instead of using the standard C library functions and built-in floating-point math.

I'll try to further explain my application:

As another respondent mentioned, we require ASCII because of its portability, simplicity in decoding, and immediate readability with simple tools (like HyperTerminal). The sensor will communicate via USB CDC to a PC. While the PC could in principle be swamped by continuous 10 kHz measurement transmission, we will typically see finite bursts of measurements, which is manageable. But we need to be prepared for the extreme case in which the burst length may be effectively infinite (continuous 10 kHz).

John Speth

Reply to
John Speth

Niklaus Wirth recently wrote "A Note on Division" describing strategies for implementing integer and real division on processors which lack a division instruction. While not providing a complete solution to your problem, the ideas might contribute towards one. The article can be downloaded from:

formatting link

-- Chris Burrows CFB Software Armaide: ARM Oberon-07 Development System

formatting link

Reply to
Chris Burrows

10 kHz output sample rate as raw binary: 4-byte float = 40 KBytes/sec !!! 10 kHz output as ASCII: "xx.xxx" = 6-byte ASCII = 60 KBytes/sec !!!??

That's 320 Kbits or 480 Kbits per second.

Will USB CDC handle this data rate (with all the overhead) ?

don

Reply to
don

So use hexadecimal ASCII floating point format (C99 *printf("%a",...) style). It's ASCII, standardized, human-readable of sorts, and way faster to construct than decimal.
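A quick sketch of what that looks like. Note the exact digit string "%a" produces is implementation-defined (e.g. glibc prints "0x1.8p+0" for 1.5), so the robust check is to parse it back with strtod, which accepts hex-float input:

```c
#include <stdio.h>
#include <stdlib.h>

/* Format a double with C99 "%a" (exact hex-float notation) and parse
   it back.  Because "%a" is exact, the round trip is lossless --
   something no short decimal string can guarantee. */
static double hexfloat_roundtrip(double x)
{
    char buf[64];
    snprintf(buf, sizeof buf, "%a", x);  /* e.g. "0x1.8p+0" for 1.5 */
    return strtod(buf, NULL);            /* strtod understands hex floats */
}
```

On the embedded side you wouldn't call snprintf at all, of course -- the point is that hex-float text can be assembled with shifts and table lookups, no decimal division anywhere.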

Sorry, but that makes no sense at all. To what limited extent it's even possible to "swamp" a current PC with 10000 data records per second, the only effect coding FP data to decimal and back can possibly have on that problem is making it _worse_. ASCII decimal notation consumes more CPU power on both ends, and more bandwidth on the wire. It's completely wasteful.

I would think that the most common function of this sort in embedded systems is "don't do it", closely followed by "if you really have to, at least do it on the high CPU power side of things".

Reply to
Hans-Bernhard Bröker

If the data being sent out starts as a 16-bit binary value from a sensor, you might be able to skip all the floating point math when sending the data:

If the processor has decimal addition instructions, this problem could be broken down into a combination of table lookups and decimal additions.

For a 16-bit binary value, set up four tables of 16 entries, each with the BCD value corresponding to the decimal weights within that particular nybble.

Start with the most significant nybble, and add up the BCD values for each of the subsequent table entries for the other nybbles:

unsigned int data;

BCDValue = TableHH[data >> 12];
BCDAdd(&BCDValue, &TableHL[(data >> 8) & 0x000F]);
BCDAdd(&BCDValue, &TableLH[(data >> 4) & 0x000F]);
BCDAdd(&BCDValue, &TableLL[data & 0x000F]);

After these steps, BCDValue should have the appropriate binary-coded decimal value and conversion to a string is simply a matter of adding the 0x30 character offset and inserting a decimal point at the appropriate place while sending the string.

If the data varies slowly, you could optimize the heck out of this algorithm, by caching the last weights of the upper nybbles and only doing the addition for the lowest nybble.

Setting up the tables would be a one-time operation based on the required decimal precision and the calibration coefficients for the sensor.

You could also modify this approach to use a single 16-entry table where each entry is the BCD weight corresponding to that bit in the input data. If the input data is well distributed over the 16-bit space, you will do 16 bit tests and about 8 BCD additions for each output.
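The single-table variant above can be sketched as follows. On a CPU without decimal-add instructions, the bcd_add below is a well-known software substitute for packed-BCD addition (all names here are made up for illustration, and the weights are plain powers of two -- a real table would fold in the sensor's calibration scaling):

```c
#include <stdint.h>

/* Software packed-BCD add: bias every nybble by 6, add in binary,
   then subtract the bias back out of any nybble that did NOT carry. */
static uint32_t bcd_add(uint32_t a, uint32_t b)
{
    uint32_t t1 = a + 0x06666666u;        /* bias each nybble by 6     */
    uint32_t t2 = t1 + b;                 /* plain binary add          */
    uint32_t t3 = t1 ^ b;
    uint32_t t4 = t2 ^ t3;                /* where nybble carries were */
    uint32_t t5 = ~t4 & 0x11111110u;      /* nybbles with NO carry     */
    uint32_t t6 = (t5 >> 2) | (t5 >> 3);  /* makes a 6 in each such nybble */
    return t2 - t6;                       /* remove the unused bias    */
}

/* weight[i] = 2^i as packed BCD; one 16-entry table, filled once. */
static uint32_t weight[16];

static void init_weights(void)
{
    weight[0] = 1;
    for (int i = 1; i < 16; ++i)
        weight[i] = bcd_add(weight[i - 1], weight[i - 1]);
}

/* 16 bit tests, ~8 bcd_adds on average for well-distributed input. */
static uint32_t bin16_to_bcd(uint16_t v)
{
    uint32_t bcd = 0;
    for (int i = 0; i < 16; ++i)
        if (v & (1u << i))
            bcd = bcd_add(bcd, weight[i]);
    return bcd;
}
```

Emitting the string from the packed-BCD result is then just the 0x30 offset per nybble, as described above.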

This is all 7AM pre-caffeine pseudocode, so there is no guarantee that the algorithm is either correct or suitable for the OPs problem.

Mark Borgerson

Reply to
Mark Borgerson

don ha escrito:

Sure. Those HSDPA 3G mobile internet sticks are CDC, and promise rates up to 3.6 Mbps.

Reply to
Marc Jet

From

formatting link

Disclaimer: Referenced speeds require an HSDPA 3.6Mbps / HSUPA capable device with Receive Diversity and/or Equalizer. BroadbandConnect speed claims based on our network tests without compression using 3MB data files. 3G devices not enabled with HSUPA support typical upload speeds of 220-320kbps based on our network tests without compression using 500KB data files for upload. Actual throughput speed varies.

I did not see where the OP was using HSDPA !!??

don

Reply to
don
