[ Irrelevant newsgroups removed from follow-up. ]
In article , Vladimir Vassilevsky writes:
|> Leave the banal comparisons to the journalists. It is fairly simple to
|> run out of 64 bits when you are working with the cascaded CIC filter,
|> for example.
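For anyone following along, Hogenauer's bound on CIC accumulator width shows how quickly 64 bits can be exhausted. A minimal sketch (the 24-bit input, 6 stages, and decimation by 2048 are parameters I have picked for illustration, not anything quoted above):

```python
from math import ceil, log2

def cic_register_bits(b_in, n_stages, decimation, diff_delay=1):
    """Worst-case register width for an N-stage CIC decimator,
    per Hogenauer's bound: B = b_in + N * log2(R * M)."""
    return b_in + ceil(n_stages * log2(decimation * diff_delay))

# 24-bit input, 6 stages, decimation by 2048:
# the integrator registers must hold 24 + 6*11 = 90 bits.
print(cic_register_bits(24, 6, 2048))  # -> 90, well past 64
```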
Well, yes, in theory. But do you actually suffer from serious loss of accuracy in practice? And can you provide any useful references as to how and why?
We both know that FFTs 'lose' up to log_2(N) bits, which means that a cascade of M of them could lose M.log_2(N) bits - which could be a lot. But does this happen in practice, and do you know what difference the rounding method makes?
I am interested in this because I have an unproven hypothesis that this could be a real case where probabilistic rounding is numerically superior to even the best nearest rounding. It would be interesting to see if that really is the case.
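To make the hypothesis concrete, here is a toy sketch (pure Python; the fixed-point quantum and increment are chosen by me for illustration) in which round-to-nearest stagnates completely while probabilistic rounding tracks the exact sum in expectation. It is only a caricature of the FFT case, of course:

```python
import math
import random

def round_nearest(x, q):
    """Round x to the nearest multiple of the quantum q."""
    return q * round(x / q)

def round_stochastic(x, q, rng):
    """Round x down or up to a multiple of q, rounding up with
    probability equal to the fractional part - unbiased on average."""
    k = math.floor(x / q)
    p = x / q - k          # probability of rounding up
    return q * (k + 1) if rng.random() < p else q * k

def accumulate(rounder, n, d):
    """Add d to an accumulator n times, rounding after each add."""
    acc = 0.0
    for _ in range(n):
        acc = rounder(acc + d)
    return acc

q, d, n = 1 / 256, 1 / 1024, 100_000   # increment is a quarter of a quantum
rng = random.Random(1)
print(accumulate(lambda x: round_nearest(x, q), n, d))
# -> 0.0: every add rounds straight back down, so the sum never moves
print(accumulate(lambda x: round_stochastic(x, q, rng), n, d))
# close to the exact total of n*d = 97.65625
```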
Regards, Nick Maclaren.