In a current ARM7 project I need to do some calculations which include a logarithm. All other math is done with integers (i.e. fixed-point arithmetic), and I would like to calculate the log fast and with little code - so I'd rather avoid FP conversion and the standard math libraries.
Can anyone point me to fixed-point logarithm routines I could use (or tailor to my needs)?
Tilman, the integer part of the binary log of a number is mainly a count of how many bit positions the most significant bit lies above one. Then comes the conversion of the rest of the number itself.
A good book with an idea on that: Crenshaw, "Math Toolkit for Real Time Programming", CMP Books, 2000.
I adapted it to 6502/FORTH but
formatting link
Issue 9 is no longer free. But there was a preprint in
formatting link
(FORTH e.V. now has all issues of the magazine back to 1984 available for download,
formatting link
but no good index yet.) Both descriptions are in German.
Beyond that, one would have to dig out the literature on logarithmic number systems. The best-known published implementation for a controller was FOCUS for the Z80, in assembler. I think I have the description as a PDF (a scan from the book, bulky) somewhere, and the source typed in, but I didn't get around to porting it to 6502/FORTH yet.
Thanks for the link. The description sounds good; unfortunately the listing appears to be broken somehow (at least it's not readable to me...).
I think I remember that... But this time I just need a few single logarithms within a great mass of linear arithmetic, so I think a log number system wouldn't help me much.
However, the bitlog function appears to be useful here, so I would greatly appreciate more info on it.
I had a need for the log of an integer with a fixed-point result. For fixed-point input there is just a constant difference. I didn't need hugely accurate results, but the results are better than I needed. Input was a signed 32-bit integer and the result was ln(x) as a 32-bit fixed-point value with a 24-bit fraction. log10(x) was then obtained by scaling. There is no particular reason why the input can't be unsigned.
However, the method requires a 256-entry 32-bit int table, a 256-entry byte table and one integer divide :-(
I can't give out the source code because of copyright constraints but I'm happy to explain the algorithm.
The title is somewhat misleading: it's less "embedded" and more "numeric", but it's a good read, and I think it wasn't expensive.
Ah well: that's 6502 postfix assembler. I/O (to FORTH) is via a 16-bit stack simulated by the X register. I don't have the 6502 manual online, but the 68HC08 is similar (but big-endian):
Log is undefined for 0, too. My problem was that my input could be negative (the difference of a light signal from a dark signal), and it was easy to handle that case the same way, i.e. propagate the error through the log.
Sounds like a comparison operation to me, as you need to compare the values against your logarithm table. IMHO a hash table would yield the best performance, with index and key attributes.
First we shift the input left until a one is shifted out, while subtracting the fixed-point representation of ln(2) from the answer, which is initialised to 32*ln(2). The typical normalization step. Obviously this stage can be sped up with binary-search approaches.
So now we have the value 1+x+y in the form of a 32-bit fixed-point number x+y with the binary point at the left. x, here, is the top eight bits and y the next 24. So 0 <= x+y < 1.
It all depends on how accurate the log function should be. If you are fine with 10% accuracy, just count the significant bits and then do the simplest linear approximation.
The OP didn't give an input range or accuracy requirement, so it's hard to guess what is best. The thing we do know is that it's for an ARM7, which has a reasonable multiplier, so better-than-linear shouldn't be an issue.
You've gotten some responses already, but here's another you might consider looking at. It's one of the chapters of a book that Analog used to give away to customers (I don't know if they still do). Great book. Anyway, this chapter covers the logarithm for the ADSP-21xx DSP processor, and it includes a good description of the whys and wherefores as well as a coded example (which requires a little learning of the instruction set if you want to understand each and every step of the code -- but that isn't too hard). Here is the chapter:
formatting link
Also, if you do actually decide to dig more deeply into the algorithm described and then coded up in the above chapter, you could read up on the instructions with this ZIP:
formatting link
(I couldn't find just the chapters online, quickly. So that is the whole thing, I imagine.)
Jack Crenshaw explains the whole thing (sprinkled throughout the book, but the area that especially applies here is on page 324, equation 10.53). I've used it to write fast approximation functions for several projects over the years.
Log in a nutshell: Use log2() and scale the output as needed. The highest set bit tells you the left-of-radix value. The remaining bits represent the input in a range from 1.0 to 1.9999...
Choose a table size (2^N) and the number of terms in the interpolation polynomial (3 usually works very well).
Generate an A[] table of log2(x) where x goes from 1.0 to 1+((2^N-1)/(2^N)). Make a table of "forward differences" (call it F) where F[i]=A[i+1]-A[i]. Make another difference table, G[i]=F[i+1]-F[i]. Now you can make the B[] and C[] tables for the run-time code: C[i]=G[i]/2, B[i]=F[i]-G[i]. The run-time code is:

normalize x and remove the MSB
int i = x >> alphaBits                  // get table index
int alpha = ((1 << alphaBits) - 1) & x  // interpolation fraction
retval = (C[i] * alpha) >> shift1
retval += B[i]
retval *= alpha
retval >>= shift2
retval += A[i]
The shift1 and shift2 are a function of your table size (N) and any extra scaling you might have done to improve the resolution of B[] and C[]. Crenshaw explains it all with Taylor series and shows how easy it is to extend to D[] or higher order tables. It comes down to a trade-off between accuracy, speed, and table space.
I usually start by analyzing the error vs. table size in a spreadsheet, then use a C program so that I can test it over all possible input values. The C code might get hand converted to asm if I need the speed.
If you are using integers, the logs are also integers. The bit position of the most significant bit gives the binary log. If you wish, you can also round it up if the next two bits are 11. This is probably about as good as you can do in integers.
You can extract this with (untested):

while ((b = c & (c - 1)) != 0) c = b;
/* now c holds only the most significant bit of the original value;
   that bit's position is the binary log */
It may be worthwhile to test for a zero value before starting this.
Thank you very much, Peter, for that detailed explanation.
Meanwhile, I also had a closer look at a combination of table look up for say the higher 8 bits of the mantissa, combined with simple linear interpolation for the rest. Doing that all in binary (with the binary log as result), this seems to be very efficient since no division is needed (can be substituted with shifting, which is an advantage especially with ARM7). The resulting precision also is very good (i.e. good enough for this application) with this method. If not, I could simply increase the table up to some reasonable amount since flash memory is not an issue here...
However, I will archive your description for further reference. Thanks again for your help.
The input data is in fixed-point format, so the range can be chosen rather freely. However, the differences in the input data are already small (say about 1/1000), and the flatness of the log curve makes them even smaller in the log result. So the mantissa surely has to be taken into account...