As part of a signal processing design I'm working on (I was given C/Matlab code and told "go make it hardware"), I need to calculate log(1+B^d). So far I've found some information on CORDIC and two software approximations. I need to do this in 32-bit floating point, and the requirements call for "as much accuracy as you can get," since no one has been able to figure out how much we really need yet. There's also a large possible spread between the biggest and smallest numbers used in this calculation, so a fixed-point system is most likely more work than it's worth.
I'm thinking that I could calculate B^d as e^(d * ln(B)), since I'm hoping that whatever method I end up using to find log(x) can be reversed to find e^x as well.
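To pin that identity down, here's a quick C reference model of the whole computation (just a sketch; the function name is mine, I'm assuming B > 0, and the log1pf call is only there to keep the reference accurate when B^d is tiny):

    /* Reference model for log(1 + B^d), single precision.
     * Not the hardware design -- just the math written out. */
    #include <math.h>

    float log1p_pow(float B, float d)
    {
        /* B^d = e^(d * ln(B)), valid for B > 0 */
        float t = expf(d * logf(B));
        /* log1pf avoids cancellation when B^d << 1 */
        return log1pf(t);
    }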
The approximations I've found so far have all been software-based, and they didn't seem that accurate (only about 5 decimal digits when tested in Matlab). It also seems like an iterative/software-based approach is the wrong way to go, and could easily leave the log/exp unit needing over a hundred cycles to execute. The two approximations I've looked at are the one from glibc and the one used in the VHDL Real Math package.
CORDIC seemed more promising at first, but it doesn't look like it's widely used for floating point. However, I haven't been able to find any details on what other methods might be better.
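For what it's worth, here's roughly the kind of CORDIC datapath I had in mind: fixed-point hyperbolic vectoring mode, which gets ln(v) from 2*atanh((v-1)/(v+1)). The Q-format, iteration count, and names are all arbitrary choices of mine, just to show that the per-iteration work is only shifts and adds:

    /* Toy fixed-point CORDIC sketch: ln(v) via 2*atanh((v-1)/(v+1)),
     * hyperbolic vectoring mode. Q3.28 format and 24 iterations are
     * arbitrary; v should stay roughly in [0.25, 4] for convergence.
     * Assumes >> on negative values is an arithmetic shift. */
    #include <stdint.h>
    #include <math.h>

    #define FRAC 28
    #define ONE  (1 << FRAC)

    int32_t cordic_ln(double v)
    {
        int64_t x = (int64_t)((v + 1.0) * ONE);
        int64_t y = (int64_t)((v - 1.0) * ONE);
        int64_t z = 0;

        for (int i = 1; i <= 24; i++) {
            /* hyperbolic CORDIC must repeat iterations 4, 13, 40, ... */
            int reps = (i == 4 || i == 13) ? 2 : 1;
            for (int r = 0; r < reps; r++) {
                /* in hardware this is a small ROM of constants */
                int64_t e = (int64_t)(atanh(pow(2.0, -i)) * ONE);
                if (y < 0) {            /* rotate to drive y toward 0 */
                    int64_t xn = x + (y >> i);
                    y += x >> i;
                    x = xn;
                    z -= e;
                } else {
                    int64_t xn = x - (y >> i);
                    y -= x >> i;
                    x = xn;
                    z += e;
                }
            }
        }
        return (int32_t)(2 * z);        /* ln(v) in Q3.28 */
    }

In real hardware the atanh() calls would become a precomputed ROM, and the double-precision setup would be replaced by logic that unpacks the float's mantissa, which conveniently lands the argument in the convergence range (the exponent just contributes a multiple of ln(2)). The z result needs no gain correction; only x and y are scaled by the CORDIC gain.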
Can anyone suggest a better way? Are there any approximations that are better suited to hardware than to software? Or would I be better off just using one of those two approximations, or CORDIC?
Thanks, Mike
(Sorry if this is a re-post; I tried to post this an hour ago, but it didn't show up. I might have clicked the wrong button.)