Unusual Floating-Point Format Remembered?

Embedded folk have stronger imperatives than that. Either it doesn't matter, because the embedded system in question is in a radar system or some such and can drink as much power as it likes (in which case the off-the-shelf chip will probably do the trick), or it matters so much that there's no way that it will get a look-in: the guy who does (whatever) in less silicon, with fewer watts/longer battery life will walk away with the business. Mostly that means fixed point, rather than even binary floating point. The other big "embedded" (i.e., non-user-programmed) user of floating point is gaming graphics, and I suspect that that's a field safe from the scourge of decimal arithmetic for the foreseeable future, too. (Most of that stuff doesn't even bother with the full binary IEEE FP spec, of course: just what it takes to get the job done.)

Also: if on-chip parallel processing ever gets useful, then there may well be significant pressure to spend the gates/area/power on more binary FUs rather than on fewer decimal FUs. Outside of vector co-processors and GPUs, I have my doubts about that.

So: Python and C# (both popular languages) are going to add 128-bit decimal float to their specs, and Java already has it? This works fine on binary computers, I expect. I also don't expect that there will be *any* applications that would get a significant performance boost from a hardware implementation, so customer pull will (IMO) be minimal. The only reason that this is being thought of at all is that no-one can think of anything better to do with the available transistor budget...
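(For readers wondering what decimal buys in the first place, here is the classic illustration in C of the representation error that binary floating point carries and decimal formats are designed to avoid. Nothing in it is specific to any of the proposals discussed here.)

#include <stdio.h>

int main(void)
{
    double a = 0.1, b = 0.2;
    /* 0.1 and 0.2 have no exact binary representation, so their
       sum is not exactly 0.3 -- the error decimal formats avoid. */
    printf("%d\n", a + b == 0.3);   /* prints 0 (false)            */
    printf("%.17g\n", a + b);       /* prints 0.30000000000000004  */
    return 0;
}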

--
Andrew

My apologies; I had posted the *original* article to comp.dsp and sci.electronics.design because mu-law and A-law are discussed in these groups.

Thus, I thought I would find people in these groups who could answer my original question (which appeared in the post after some background): had anyone ever decided to make a "more efficient" floating-point format by alternating between appending either 0 or 1/2 to a binary mantissa in the high part of the range, and either 0, 1/3, or 2/3 to the binary mantissa in the low part (approximated as binary fractions, such as 0, 5/15, and 11/15, perhaps), so as to represent quantities with a precision that varies by only *half* a bit?
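[Editorially: the half-bit claim is easy to check numerically. A minimal C sketch that enumerates one binade of such a format and measures how much the relative step size varies; the mantissa width and the use of exact thirds are assumptions for illustration, where the post suggests binary approximations such as 5/15 instead.]

#include <stdio.h>
#include <math.h>

int main(void)
{
    enum { K = 8 };                     /* assumed mantissa width */
    const double ulp = 1.0 / (1 << K);
    static double v[3 << K];            /* representable values in [1, 2) */
    int n = 0;

    for (int m = 0; m < (1 << K); m++) {
        double base = 1.0 + m * ulp;
        v[n++] = base;
        if (base < 1.5) {               /* low half: append thirds of an ulp */
            v[n++] = base + ulp / 3.0;
            v[n++] = base + 2.0 * ulp / 3.0;
        } else {                        /* high half: append half an ulp     */
            v[n++] = base + ulp / 2.0;
        }
    }

    double worst = 0.0, best = 1.0;
    for (int i = 1; i < n; i++) {
        double rel = (v[i] - v[i - 1]) / v[i - 1];
        if (rel > worst) worst = rel;
        if (rel < best)  best  = rel;
    }
    /* A plain K-bit binary mantissa gives about 1.0 bit of variation
       across the binade; this scheme gives about 0.58 of a bit.      */
    printf("relative step varies by %.3f bits\n", log2(worst / best));
    return 0;
}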

I thought perhaps people in these groups might recall something like that from an old issue of Electronics or, particularly, Electronic Design.

John Savard


In my opinion, and in the work I've done, I simply do not consider floating point for anything that has requirements for speed, power consumption, cost, board area, etc., since fixed-point implementation is an option and, while somewhat of an art, can be executed without all that much extra work.

So I would say no, it doesn't make sense to me to try to optimize floating point from a cost/board space/power consumption/etc. perspective, since it will always be less efficient than fixed point.
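[For readers new to the trade-off, a minimal sketch of the sort of fixed-point arithmetic involved, using the common Q15 convention; the names and the round-and-saturate policy are illustrative, not from any particular codebase.]

#include <stdint.h>

/* Q15 fixed point: a 16-bit word w represents the value w / 32768.0. */
typedef int16_t q15;

static q15 q15_mul(q15 a, q15 b)
{
    int32_t p = (int32_t)a * b;      /* exact Q30 product                 */
    p = (p + (1 << 14)) >> 15;       /* round, renormalize to Q15         */
                                     /* (assumes arithmetic right shift)  */
    if (p >  32767) p =  32767;      /* saturate: -1.0 * -1.0 overflows   */
    if (p < -32768) p = -32768;
    return (q15)p;
}

Addition is just integer addition (with saturation if needed); the "art" is mostly choosing where the radix point sits and proving that intermediate values cannot overflow.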

I think a more fruitful use of time would be to investigate improving the accuracy of floating point so that those applications that do use it could be more impervious to numerical issues.

--
%  Randy Yates                  % "She has an IQ of 1001, she has a jumpsuit
%% Fuquay-Varina, NC            %            on, and she's also a telephone."
%%% 919-577-9882                % 
%%%%            %        'Yours Truly, 2095', *Time*, ELO   
http://home.earthlink.net/~yatescr

Many embedded systems depend on low power. If cell phones using decimal arithmetic floating point units ran an hour on a recharge, with binary an hour and a quarter, and with integer (fixed point) an hour and a half, which would you want? For machine control, measurements must be made and response signals generated. Outside the system, the signals are analog. Will the system have superior performance with decimal D/As and A/Ds? How does it matter if the spark timing in your car is computed in binary or in decimal? I think your concern is orthogonal to the needs of signal processing and process control practitioners.

Jerry

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

C is probably overused too. Sometimes a few lines of assembly code are clearer to those fluent in the language, and execute more quickly (using less power) than an equivalent HLL routine.

As to floating point, the question here is whether it is better to compute in decimal, using BCD or a packed version of it, than in binary. I believe that the advantages of decimal calculation are worth little to most applications close to the hardware. Million-point FFTs used to predict stock futures may be a DSP exception, but I'm not sure of that.
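[For concreteness, packed-BCD addition can be done branch-free on an ordinary binary ALU with the old add-6 correction idiom; a sketch for seven packed digits in C, not code from any particular chip.]

#include <stdint.h>

/* Add two 7-digit packed-BCD numbers on a plain binary ALU.
   Pre-biasing every digit by 6 forces a nibble carry wherever the
   decimal digit sum is 10 or more; the 6 is then subtracted back
   out of the digits that did not produce a carry.                 */
static uint32_t bcd_add(uint32_t a, uint32_t b)
{
    uint32_t t1 = a + 0x06666666u;        /* bias every digit by 6     */
    uint32_t t2 = t1 + b;                 /* binary sum with bias      */
    uint32_t carries = (t1 ^ b) ^ t2;     /* carry-in bits per position */
    uint32_t nc = ~carries & 0x11111110u; /* digits that did NOT carry */
    return t2 - ((nc >> 2) | (nc >> 3));  /* subtract 6 where no carry */
}

Each 32-bit add handles seven digits this way, so software decimal is slower than binary arithmetic, but not catastrophically so.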

Jerry

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Please don't apologize. First, I represent only myself. Second, what you posted was interesting and opened my eyes to new possibilities. I suggest only that the details and controversies aren't really relevant here. It's certainly less annoying than some of our regular trolls.

Jerry

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

A lot of people do DSP on PCs these days, and floating point performance blows away anything you can do with integers on a PC.

I'm not sure I'd agree about FP being misused in other areas, although the bloat that started with PC software seems to be spreading across the face of the earth. The big volume DSP cores are all fixed point, and go into high volume applications where fine tuning is justified. A lot of FP DSP cores are used in low volume applications, where getting something working quickly is usually the goal. Floating point solutions can be a lot faster to implement.

Steve


(snip)

I agree. I am not so sure about hexadecimal, though. I was for a while working on FPGA implementations of algorithms, though mostly fixed point. In current FPGAs, a floating point adder is bigger (more logic cells needed) than a floating point multiplier, because of the barrel shifters needed for pre- and post-normalization. Hex floating point (HFP) requires a much smaller shifter when implemented as a hardware pipeline. Many FPGAs now have a fixed point multiplier block that can easily be used to build a floating point multiply unit.
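[To make the shifter-size point concrete, a sketch in C of the post-normalization step for a 64-bit fraction, binary vs. hex; the width is chosen arbitrarily. A barrel shifter for bit-granular shifts over 64 positions needs six mux stages (1, 2, 4, 8, 16, 32); if shifts are always multiples of a 4-bit digit, it needs only four.]

#include <stdint.h>

/* Post-normalization, binary: shift until the top BIT is set.
   The exponent counts bits.                                     */
static int normalize_bin(uint64_t *frac, int exp)
{
    while (*frac != 0 && !(*frac >> 63)) {
        *frac <<= 1;
        exp -= 1;
    }
    return exp;
}

/* Post-normalization, hexadecimal: shift until the top DIGIT is
   nonzero. Shifts are multiples of 4 bits, and the exponent
   counts 4-bit digits, so it needs fewer bits for the same range. */
static int normalize_hex(uint64_t *frac, int exp)
{
    while (*frac != 0 && (*frac >> 60) == 0) {
        *frac <<= 4;
        exp -= 1;
    }
    return exp;
}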

-- glen


In article , Andrew Reilly writes:
|> Embedded folk have stronger imperatives than that. Either it doesn't
|> matter, because the embedded system in question is in a radar system or
|> some such and can drink as much power as it likes (in which case the
|> off-the-shelf chip will probably do the trick), or it matters so much
|> that there's no way that it will get a look-in: the guy who does
|> (whatever) in less silicon, with fewer watts/longer battery life will
|> walk away with the business. ...

It's not that simple. Think machine control - a MASSIVELY expanding area, with cars following aircraft and even domestic equipment following on. The power requirements are serious, because active cooling is a REAL pain and more heat leads to a shorter lifetime, but the total power used is negligible compared to the machine's use. At least in many important cases.

In article , Jerry Avins writes:
|> Many embedded systems depend on low power. If cell phones using decimal
|> arithmetic floating point units ran an hour on a recharge, with binary
|> an hour and a quarter, and with integer (fixed point) an hour and a
|> half, which would you want? For machine control, measurements must be
|> made and response signals generated. Outside the system, the signals
|> are analog. Will the system have superior performance with decimal D/As
|> and A/Ds? How does it matter if the spark timing in your car is computed
|> in binary or in decimal? I think your concern is orthogonal to the needs
|> of signal processing and process control practitioners.

Signal processing, perhaps - machine control, not at all. See above for one issue. Another is that the 'major' process control (chemical plants etc.) needs real-time, highly reliable HPC - and that means as much computational performance as possible, as well as other things. I was and am thinking about process/machine control when I sound my trumpet about the problem of power (wattage).

Regards, Nick Maclaren.


If you have a multiplier, it can be used to do the bit aligning and the normalization steps of an add. Since one of the terms going into the multiplier has only a single bit high, you can route it through the FFT's bit-order reverser if you need to shift in the right (vs. left) direction.
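[In C terms the arithmetic identity reads like this sketch, with 32-bit words chosen for illustration; the bit-reverser routing is a hardware detail the C cannot show. A one-hot multiplicand turns the multiplier into a left shifter, and keeping the high half of the widening product gives the right shift.]

#include <stdint.h>

/* Left shift by n (0..31) using the multiplier: the one-hot
   operand 2^n makes the multiply act as a barrel shifter.       */
static uint32_t shl_via_mul(uint32_t x, unsigned n)
{
    return x * (UINT32_C(1) << n);
}

/* Right shift by n (1..31): multiply by 2^(32-n) and keep the
   HIGH word of the 64-bit product, which equals x >> n.         */
static uint32_t shr_via_mul(uint32_t x, unsigned n)
{
    uint64_t p = (uint64_t)x * (UINT64_C(1) << (32 - n));
    return (uint32_t)(p >> 32);
}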

MooseFET

My brother got his PhD on global control/optimization of chemical plants; according to him, the only way they even dream about implementing this is as a fault-tolerant layer on top of the existing low-level control loops, i.e. the individual PID regulators, catalytic converters, fractional distillation columns etc.
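[Those low-level loops are typically the textbook discrete PID update; a minimal sketch in C, with gains, state layout, and sample period as illustrative placeholders rather than plant values.]

/* Textbook discrete PID controller, updated once per sample. */
typedef struct {
    double kp, ki, kd;   /* proportional, integral, derivative gains */
    double integral;     /* accumulated error                        */
    double prev_err;     /* error from the previous sample           */
    double dt;           /* sample period, seconds                   */
} pid_state;

static double pid_update(pid_state *s, double setpoint, double measured)
{
    double err = setpoint - measured;
    s->integral += err * s->dt;                   /* I term accumulates   */
    double deriv = (err - s->prev_err) / s->dt;   /* D term differentiates */
    s->prev_err = err;
    return s->kp * err + s->ki * s->integral + s->kd * deriv;
}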

Ideally you want to make sure that the high-level control can fail safely at any point, the only downside being that your plant will run in a possibly non-optimal configuration.

OTOH, they really don't care at all about the number of watts used for the high-level control software; compared to the MWs going into the plant itself, CPU power isn't even at noise level.

Terje

--
"almost all programming can be viewed as an exercise in caching"

In article , Terje Mathisen writes:
|> Nick Maclaren wrote:
|> > Signal processing, perhaps - machine control, not at all. See above
|> > for one issue. Another is that the 'major' process control (chemical
|> > plants etc.) needs real-time, highly reliable HPC - and that means as
|>
|> My brother got his PhD on global control/optimization of chemical
|> plants; according to him, the only way they even dream about
|> implementing this is as a fault-tolerant layer on top of the existing
|> low-level control loops, i.e. the individual PID regulators, catalytic
|> converters, fractional distillation columns etc.

I have heard of other schemes. Whether they were being dreamt of or actually used, I am not sure, but I understood the latter.

|> Ideally you want to make sure that the high-level control can fail
|> safely at any point, the only downside being that your plant will run
|> in a possibly non-optimal configuration.

Unfortunately, there are some things that can't be controlled like that, because the process is inherently unstable at a high level. Some forms of nuclear reactor, for example, but I heard that there were also several such chemical processes.

|> OTOH, they really don't care at all about the number of watts used for
|> the high-level control software; compared to the MWs going into the
|> plant itself, CPU power isn't even at noise level.

For the high-power processes, assuredly. I don't know of any systems that currently use HPC for process control (as distinct from machine control) where power is a problem (outside the military, of course), but I wouldn't rule them out.

A more immediate point for HPC is that high power requirements prevent high packing densities, and the latter mean higher latencies. The larger HPCs are already having trouble with technologies and timing over that. But, even when such things are used 'embedded', they tend to be regarded as HPC more than embedded computing.

Regards, Nick Maclaren.


...

I intended to connect the issue of binary vs. decimal floating point to machine control, not power. Measurement and control requires converting between analog and digital. The usual practice uses binary converters and binary arithmetic now. How would decimal arithmetic be an improvement? Do you contemplate decimal converters?

Jerry

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

In many FPGA implementations, an adder or multiplier is used for one task only. Think hardware. A gate can be used for many different tasks before it is connected, but only one afterward.

I understand normalizing binary with a barrel shifter. How is decimal normalized?

FFTs in binary have reversed bit order addressing at one stage. What is the storage order of a decimal FFT?

Do decimal trees make as efficient use of time and space as binary trees? What does it mean to branch on a digit?

Jerry

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

In article , Jerry Avins writes:
|> I intended to connect the issue of binary vs. decimal floating point
|> to machine control, not power. Measurement and control requires
|> converting between analog and digital. The usual practice uses binary
|> converters and binary arithmetic now. How would decimal arithmetic be
|> an improvement? Do you contemplate decimal converters?

Decimal converters are the best part of a century old, perhaps older than that. I can imagine no way that they are better, except when they are for direct read-out or input by a human.

|> FFTs in binary have reversed bit order addressing at one stage. What
|> is the storage order of a decimal FFT?

Painful. But FFTs for any size are a solved problem. The use of decimal integers would slow down power-of-two FFTs marginally.

|> Do decimal trees make as efficient use of time and space as binary
|> trees? What does it mean to branch on a digit?

Irrelevant. Both of those are independent of the storage representation.

Regards, Nick Maclaren.


Jerry Avins wrote: (snip)

Yes. Similar to a pipelined processor, such as the ones Cray used to build, except that you build the pipeline to do exactly the problem that you want solved. If done right, every processing unit is running every cycle.

The suggestion was hex, but in all cases it is digit by digit. In either decimal or hex, each digit is four bits, meaning two fewer stages of barrel shifter. Also, fewer exponent bits are required for the same range.

The usual FFT is binary based, but it can be done for the prime factors of any length, less efficiently as the factors get larger. An FFT with a length that is a power of 10 would have stages that are binary (two-way butterfly) and base 5 (five-way). That is independent of the base of the arithmetic; it depends only on the length.
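[To make that concrete, a sketch in C of the generic Cooley-Tukey split for N = n1*n2; n1 = 2, n2 = 5 gives the length-10 case. The naive DFT stands in for the optimized short-radix kernels a real implementation would use, and the output ordering k1 + n1*k2 is the mixed-radix generalization of the bit-reversed order Jerry asked about.]

#include <complex.h>

static const double PI = 3.14159265358979323846;

/* Naive DFT, standing in for an optimized radix-2 or radix-5 kernel. */
static void dft(const double complex *in, double complex *out, int n)
{
    for (int k = 0; k < n; k++) {
        double complex acc = 0;
        for (int m = 0; m < n; m++)
            acc += in[m] * cexp(-2.0 * I * PI * (double)m * k / n);
        out[k] = acc;
    }
}

/* Mixed-radix Cooley-Tukey for N = n1*n2:
   n2 DFTs of length n1, twiddle factors, then n1 DFTs of length n2. */
static void mixed_radix_dft(const double complex *x, double complex *X,
                            int n1, int n2)
{
    int N = n1 * n2;
    double complex stage[n1][n2];                /* C99 VLAs */
    double complex in[n1], out[n2 > n1 ? n2 : n1];

    for (int m2 = 0; m2 < n2; m2++) {
        for (int m1 = 0; m1 < n1; m1++)
            in[m1] = x[n2 * m1 + m2];            /* gather a "column"    */
        dft(in, out, n1);                        /* short DFT, length n1 */
        for (int k1 = 0; k1 < n1; k1++)          /* apply twiddles       */
            stage[k1][m2] = out[k1] * cexp(-2.0 * I * PI * k1 * m2 / N);
    }
    for (int k1 = 0; k1 < n1; k1++) {
        dft(stage[k1], out, n2);                 /* short DFT, length n2 */
        for (int k2 = 0; k2 < n2; k2++)
            X[k1 + n1 * k2] = out[k2];           /* digit-reversed order */
    }
}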

Biquinary (alternating binary and quinary) would be a little better than decimal, and not all that much worse than binary. Again, it doesn't depend on the base that arithmetic is done in.

-- glen


(snip)

That sounds closer to what I might expect from comp.dsp. Machine control is likely done in fixed point. People in metric countries might be more likely to do it in decimal, but it probably doesn't make much difference either way.

-- glen


(snip)

It can, if you don't need it for the multiply. Note that you need both pre-normalization (align the radix point before add/subtract) and post-normalization (remove high-order zero digits).

-- glen


...

So memory addresses are to remain binary? How do you expect pointer arithmetic to be implemented?

Jerry

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

...

But in either case, the granularity is much coarser than binary. That was found to be unobjectionable in times past.

Jerry

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
