Unless you require the absolute fastest performance (and whoever asked the original question clearly does not, or they would already have found the answer), do not write your own assembly code. It's just optimisation for optimisation's sake.
By all means, look at the generated assembly and see if it uses the ideal instruction. If it doesn't, then file a report or support request with the compiler supplier if you want.
Don't do inline assembly unless you really have a reason for it, especially if you are not used to it.
But the compiler should do the strength reduction for you - take note of the sign, do everything unsigned, then restore the sign. If it doesn't, then check your optimisation settings and/or complain to the supplier.
Yes - do everything unsigned. First think if the incoming data really is signed - in most cases it is not. But if you have signed data (say from a differential input), first note the sign then convert to a positive value if needed. Then do your scaling and division (and if the compiler can't convert an unsigned divide by 0x8000 to a shift, it's a poor compiler - and you can do the shift by hand). Then restore the sign.
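As a minimal sketch of that recipe (the function name and the Q15 gain format are my own illustration, not from the post):

    #include <stdint.h>

    /* Note the sign, do the multiply and the divide-by-0x8000 unsigned,
     * then restore the sign.  The unsigned right shift always shifts in
     * zeros, so the result is the same on every conforming compiler and
     * matches C's truncating division.  Assumes the scaled result fits
     * in an int32_t. */
    static int32_t scale_then_div_q15(int32_t x, uint32_t gain)
    {
        int negative = (x < 0);
        uint32_t mag = negative ? 0u - (uint32_t)x : (uint32_t)x;

        uint64_t scaled = (uint64_t)mag * gain;  /* widen so the multiply can't overflow */
        uint32_t q = (uint32_t)(scaled >> 15);   /* unsigned divide by 0x8000 */

        return negative ? -(int32_t)q : (int32_t)q;
    }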
But note that ANSI C leaves the contents of the most significant bits of a right-shift of a negative number up to the implementor -- it is equally valid within ANSI-C specs to shift in zeros (affecting both sign and magnitude) as it is to shift in ones.
The only reliable way to do this across compilers (and even major revisions) is to convert to unsigned, shift (unsigned right shifts always shift in zeros), then restore the sign as necessary.
It should be habit. Never right-shift a signed number when it might be negative.
--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
I agree that a right shift on a negative value is implementation defined, but it's very unlikely that a compiler for Cortex M3 would not use the arithmetic shift right instruction.
If you're really paranoid, you could build in a run-time check at program initialization for expected right shift behavior.
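Something like this hypothetical start-up self-test would do (trap() is a placeholder for whatever your system does on a failed check):

    #include <stdint.h>

    extern void trap(void);  /* placeholder: your failed-self-test hook */

    /* volatile defeats constant folding, so the shift really executes */
    static volatile int32_t shift_probe = -2;

    void check_right_shift_behavior(void)
    {
        /* An arithmetic right shift of -2 by 1 gives -1; a logical
         * (zero-filling) shift would give INT32_MAX instead. */
        if ((shift_probe >> 1) != -1)
            trap();
    }

Call it once from main() before any of the shift-based arithmetic runs.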
Why not do an explicit divide by 2 (or 4, or 8, ...)? Surely most modern compilers are smart enough to recognize that this can be done with an arithmetic right shift (plus a small fix-up for negative values, since C division truncates toward zero) if the processor supports such an instruction, or with a logical (zero-filled) right shift followed by stuffing ones up topside if the old MSB was set and the type was signed.
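To make the difference concrete (a small sketch, not from the post; the -5 results assume 32-bit two's complement):

    #include <stdint.h>

    /* The two forms differ for negative inputs: / truncates toward
     * zero, so the compiler emits a shift plus a fix-up; >> (where it
     * is an arithmetic shift at all) rounds toward minus infinity. */
    int32_t half_div(int32_t x)   { return x / 2; }   /* -5 / 2  == -2 */
    int32_t half_shift(int32_t x) { return x >> 1; }  /* -5 >> 1 == -3 on an ASR target */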
Except when the numbers really do get too ridiculously large or wide-ranging to handle, doing the calculations with long fractions will tend to be faster for most instrumentation needs.
It is always best to look closely at the various ways you can implement your calculations. Easy gains come from fractions (width-limited multiplies that discard the least significant bits) - a sketch follows the reference below. Calibration curves can often be done by polynomial approximations (see the Hastings reference).
"Approximations for Digital Computers" by Cecil Hastings Jr., T. Hayward, James P. Wong Jr. ISBN 0-691-07914-5
--
********************************************************************
Paul E. Bennett...............
Yes. DIV instructions take several cycles (I don't know how many for the M3), and will cause pipeline stalls which reduce the throughput of other instructions. Shifts are therefore faster, even if they need a few other instructions around them. It is also faster to multiply by a scaled pre-calculated reciprocal (case 2 above).
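A sketch of that reciprocal trick, with a divisor of 10 picked purely for illustration: 52429/2^19 sits just above 1/10, and the rounding error stays small enough that the result is exact for every 16-bit input.

    #include <stdint.h>

    static uint32_t div10_u16(uint16_t x)
    {
        /* 52429 == ceil(2^19 / 10); exact x / 10 for 0..65535 */
        return ((uint32_t)x * 52429u) >> 19;
    }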
Yes.
The easiest way to make sure you get signed division right is to separate out the sign, then use unsigned arithmetic. That way you can't go wrong, and the C code is portable.
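A minimal sketch of that pattern (the helper name is mine):

    #include <stdint.h>

    /* All the arithmetic is unsigned, so nothing here is
     * implementation-defined, and the result matches C's truncating
     * signed division.  Assumes the quotient fits in int32_t (true
     * except n == INT32_MIN with d == 1). */
    static int32_t sdiv_by_unsigned(int32_t n, uint32_t d)
    {
        int negative = (n < 0);
        uint32_t mag = negative ? 0u - (uint32_t)n : (uint32_t)n;
        uint32_t q = mag / d;
        return negative ? -(int32_t)q : (int32_t)q;
    }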
Without FPU support, assuming that the processor has basic integer multiplication instructions, integer operations are ALWAYS faster than floating-point operations. Usually _far_ faster. And always more precise.
The general nature of computers is that all data into the computer has to be quantized in some way (the machine can only accept digital data), and all data out has to be quantized in some way (again, the machine can only output digital data).
There is already quantization error coming in because it is entering a discrete system. How much error depends on the quality of the hardware, which usually depends on how much one was willing to spend on it.
One measure of "goodness" of calculations is whether, for a given set of inputs (all integers), one can prove analytically that one is able to select the best outputs (again, all integers). This confines any error to the hardware rather than the software.
It turns out that for many types of calculations, using integer operations, one can meet this measure of goodness. However, one usually requires larger integers than development tools support natively. Which means inline assembly, or large-integer libraries that were written in assembly language. Preferably the latter.
In the specific case of linearly scaling by a factor, generally what one wants to do is select a rational number h/k close to the real number to be multiplied by.
There are two subcases.
k may be a power of two, k = 2^q, in which case the scaling is an integer multiplication followed by a shift or a "byte pluck". It should be obvious why this is extremely efficient.
k may be something other than a power of two, which is the general case. In that case, you may find this web page helpful:
formatting link
Finding the best rational approximation when k is not a power of 2 is a topic from number theory, and all the information you are likely to need is at the page above. Software is included.
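Since the linked software isn't reproduced here, a hedged sketch of the standard number-theory construction (a Stern-Brocot walk; everything below is my own illustration):

    #include <math.h>
    #include <stdint.h>

    /* Walk the Stern-Brocot tree of mediants toward the target and
     * keep the closest fraction h/k whose denominator does not exceed
     * kmax.  Assumes target >= 0 and a modest kmax so the numerators
     * fit in 32 bits. */
    static void best_rational(double target, uint32_t kmax,
                              uint32_t *h_out, uint32_t *k_out)
    {
        uint32_t lh = 0, lk = 1;   /* left bound:  0/1 */
        uint32_t rh = 1, rk = 0;   /* right bound: 1/0 ("infinity") */
        uint32_t bh = 0, bk = 1;   /* best fraction found so far */

        for (;;) {
            uint32_t mh = lh + rh, mk = lk + rk;  /* mediant of the bounds */
            if (mk > kmax)
                break;
            if (fabs((double)mh / mk - target) <
                fabs((double)bh / bk - target)) {
                bh = mh;
                bk = mk;
            }
            if ((double)mh / mk < target) {
                lh = mh; lk = mk;  /* mediant too small: raise the left bound */
            } else {
                rh = mh; rk = mk;  /* mediant too large or exact: lower the right bound */
            }
        }
        *h_out = bh;
        *k_out = bk;
    }

For example, a target of 0.3183098862 (about 1/pi) with kmax = 1000 lands on 113/355.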