Unusual Floating-Point Format Remembered?

No.

Leave the banal comparisons to the journalists. It is fairly simple to run out of 64 bits when you are working with the cascaded CIC filter, for example.
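To make the bit growth concrete: by Hogenauer's classic analysis, the accumulators of an N-stage CIC decimator need roughly B_in + N*log2(R*M) bits, where R is the decimation ratio and M the differential delay. A minimal C sketch of that arithmetic (numbers purely illustrative, not from this thread):

#include <math.h>
#include <stdio.h>

/* Hogenauer register-growth bound for an N-stage CIC decimator:
 * each stage can grow the word by log2(R*M) bits. */
int cic_register_bits(int b_in, int n_stages, int rate, int delay)
{
    return b_in + (int)ceil(n_stages * log2((double)rate * delay));
}

int main(void)
{
    /* 16-bit input, 5 stages, decimate by 1024, M = 1:
     * 16 + 5*10 = 66 bits, already past a 64-bit word. */
    printf("%d bits\n", cic_register_bits(16, 5, 1024, 1));
    return 0;
}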

BTW, there is the MIRACL math library, which allows arithmetic at any given precision.

Vladimir Vassilevsky

DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky
[ Irrelevant newsgroups removed from follow-up. ]

In article , Vladimir Vassilevsky writes:
|>
|> Leave the banal comparisons to the journalists. It is fairly simple to
|> run out of 64 bits when you are working with the cascaded CIC filter,
|> for example.

Well, yes, in theory. But do you actually suffer from serious loss of accuracy in practice? And can you provide any useful references as to how and why?

We both know that FFTs 'lose' up to log_2(N) bits, which means that a cascaded series could lose M.log_2(N) - which could be a lot. But does this happen in practice, and do you know what difference the rounding method makes?
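For a concrete worst case: a 2^20-point FFT can lose up to 20 bits, so a cascade of M = 4 such transforms could lose about 80 bits, far more than the 53 significand bits of an IEEE double. (Illustrative numbers only; typical losses sit well below the worst-case bound.)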

I am interested in this because I have an unproven hypothesis that this could be a real case where probabilistic rounding is numerically superior to even the best nearest rounding. It would be interesting to see if that really is the case.
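For readers who haven't met the term: probabilistic (stochastic) rounding rounds up with probability equal to the fractional residue, so the rounding error is zero-mean and tends to cancel across a long cascade instead of accumulating. A minimal sketch, assuming a fixed-point target and using rand() only to stay self-contained:

#include <math.h>
#include <stdint.h>
#include <stdlib.h>

/* Quantize x to a grid with 'frac_bits' fractional bits, rounding up
 * with probability equal to the fractional residue (zero-mean error). */
int64_t round_stochastic(double x, int frac_bits)
{
    double scaled = ldexp(x, frac_bits);   /* x * 2^frac_bits */
    double lo = floor(scaled);
    double frac = scaled - lo;             /* residue in [0, 1) */
    double u = rand() / ((double)RAND_MAX + 1.0);
    return (int64_t)lo + (u < frac ? 1 : 0);
}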

Regards, Nick Maclaren.

Reply to
Nick Maclaren

OK; you're right. But what is a /cascaded/ CIC filter? Is it like AC voltage (alternating current voltage)?

There are several BigNum packages. I use one written in Forth.

Jerry

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Reply to
Jerry Avins


VLV

Reply to
Vladimir Vassilevsky

Vlad,

I know what a CIC is. What's a _cascaded_ CIC?

Jerry

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Reply to
Jerry Avins

Point taken.

VLV

Reply to
Vladimir Vassilevsky

Oh that problem.

If part of the address space is disk, it makes sense to stop and re-order the data part way through the butterfly process. I have never considered it for RAM speed issues.
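The "stop and re-order" step is essentially a mid-pass transpose: view the length rows*cols signal as a matrix and flip it, so the remaining butterfly passes stream through memory (or disk) contiguously instead of striding. A minimal sketch, not MooseFET's code:

#include <complex.h>
#include <stddef.h>

/* Out-of-place transpose: turns strided column access in later FFT
 * passes into contiguous row access. */
void reorder_transpose(const double complex *in, double complex *out,
                       size_t rows, size_t cols)
{
    for (size_t r = 0; r < rows; r++)
        for (size_t c = 0; c < cols; c++)
            out[c * rows + r] = in[r * cols + c];
}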

Reply to
MooseFET

In article , "MooseFET" writes: |>

|> > The problem is that all current memory technologies rely on the data |> > being accessed in contiguous 'blocks'; it requires a LOT more money |> > and watts to make true random access efficient. And there is no way |> > to assign arrays to blocks that doesn't cause some passes of the FFTs |> > to access the data in a very inefficient pattern. |> |> Oh that problem. |> |> If part of the address space is disk, it makes sense to stop and re- |> order the data part way through the bufferfly process. I have never |> considered it for RAM speed issues.

RAM is the new disk.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

Oh, kewl. You could etch a board with that.

No, thank you.

Yes, and I apologize to the s.m.n-a critters for drifting the thread.

But I'm glad I did trespass here. I learned about something I didn't know I didn't know. Thank you for spending your time on my query. :-)

/BAH

Reply to
jmfbahciv

Because 64-bit ints are enough for the vast majority of integer apps; for those that do require carries, it can be _very_ efficient to work in a redundant format like carry-save, in which case you'll handle full carry propagation only when delivering the final result.
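A minimal single-word sketch of that carry-save idea (illustrative, not Terje's code): each accumulation step uses only bitwise operations and a shift, and the one carry-propagating add happens when the result is delivered.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t vals[] = { 0x123456789abcdef0ULL, 42, 7, 0xffffffffULL };
    uint64_t s = 0, c = 0;  /* redundant pair: s + c == running sum */

    for (size_t i = 0; i < sizeof vals / sizeof vals[0]; i++) {
        uint64_t x = vals[i];
        uint64_t t = s ^ c ^ x;                  /* per-bit sum, no carry chain */
        c = ((s & c) | (s & x) | (c & x)) << 1;  /* carries, deferred */
        s = t;
    }
    printf("sum = %llu\n", (unsigned long long)(s + c)); /* one full add */
    return 0;
}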

As a longtime asm programmer, and author of about 5 bignum/arbitrary precision packages, I do like AddWithCarry, but I have absolutely no problem accepting that this has to be a slower operation than a normal ADD.

Terje

--
"almost all programming can be viewed as an exercise in caching"
Reply to
Terje Mathisen

That is the Forth primitive "*/". n1 n2 n3 */ gives (n1*n2)/n3 using a double-length intermediate product. It makes scaled-integer arithmetic fairly simple.
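A rough C analogue (not from Jerry's Forth package), showing why the double-length intermediate matters: the 64-bit product keeps n1*n2 from overflowing before the divide.

#include <stdint.h>

// C analogue of Forth's */ : (n1*n2)/n3 with a 64-bit intermediate.
int32_t star_slash(int32_t n1, int32_t n2, int32_t n3)
{
    return (int32_t)(((int64_t)n1 * n2) / n3);
}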

Jerry

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Reply to
Jerry Avins

(someone wrote)

Is the question still FFT?

Are there any actual uses where one would try to separate an earthquake signal from Brownian motion with an FFT? And even so, 64-bit floating point won't do any better.

-- glen

Reply to
glen herrmannsfeldt

(someone wrote)

Traditionally it was done by having the result of a multiply stored in two registers and using the high one. With a double-length shift (hopefully fast) one can select the desired bits.

The original TeX is in Pascal, with the suggestion that one routine be written in assembler for the host machine. That routine computes (A*B)/C where A, B, and C are 32-bit values and A*B has 64 bits. Easy on many machines, not so easy in Pascal.
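The double-length shift maps naturally onto C's 64-bit intermediates; a minimal sketch, where FRAC_BITS is an illustrative fixed-point choice rather than anything from TeX:

#include <stdint.h>

// Form the full 64-bit product (standing in for the hardware register
// pair) and shift to select the desired bits.
#define FRAC_BITS 16

int32_t fixmul(int32_t a, int32_t b)
{
    return (int32_t)(((int64_t)a * b) >> FRAC_BITS);
}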

-- glen

Reply to
glen herrmannsfeldt

Yes, I do that in assembly. Quite a few machines have a "multiply-add" instruction that does:

Y = Y + A*B

In some DSP-like processors, the instruction can be:

Y = X + A*B

where X and Y are both accumulator registers.

You can't use these if you need to shift the result down first. This means you need three instructions to do the operation. Just a little extra logic on the chip would allow for the shifted versions.
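The three-operation sequence in question, which a hypothetical MAC-with-shift instruction could fuse into one (k is an illustrative scale factor):

#include <stdint.h>

static inline void mac_shift(int64_t *acc, int32_t a, int32_t b, int k)
{
    *acc += ((int64_t)a * b) >> k;   /* multiply, shift, accumulate */
}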

Reply to
MooseFET
