For an estimate of the required computing time, I would like to know roughly how long current controllers need for math operations (addition/subtraction, multiplication, division, and also logarithm) in single and/or double precision floating point format (assuming common compilers).
The MCUs in question are ARM7TDMI of NXP/Atmel flavour (LPC2000 or SAM7), and Texas MSP430.
Recently Philips/NXP made some noise about their core being 37 to 51 percent faster than other ARM7 cores, because of its wider memory paths.
Generally, the ASM opcodes will give some indication. Some uC lack division, others have it in HW, and that will make a huge difference to that corner of the benchmark.
E.g., recently we needed extended scaling, and we found the Zilog ZNEO has a 64/32=32 divide and a 32*32=64 multiply. To access them, we had to use inline ASM, but once we did that, the result was maybe 1000x faster than a library call to shift/subtract SW division on a uC without divide opcodes.
Also likely to be well resourced for maths are DSP uCs like the TMS320F2802.
I've encountered something similar on an ARM7TDMI. We needed the 32*32=64 multiply, but could not find a way to get the compiler to emit the smlal (IIRC) instruction. So we also ended up doing this in asm. Has anybody found a way to make the compiler do this (ADS or GCC)?
--
Stef (remove caps, dashes and .invalid from e-mail address to reply by mail)
Just as a general point: If you're considering software DSP applications, unless they're _INHERENTLY_ constrained and will never need to be scalable, ARM is strongly suggested IMHO. MSP430's address space is architecturally limited. Targeting ARM from the get-go will leave the door open for more complex algorithms, larger sample buffers, etc.
Looks almost optimal; I just don't see why the smull result is placed in r4/r5 and then moved to r0/r1, but on a 20-tap filter it wouldn't really be significant.
Last time I tried was years ago with an ADS compiler, and I couldn't get it to work. It may have been the wrong casts or just the old compiler.
The other optimization we did with this was to run the function out of the AT91's internal SRAM, by copying it from flash at startup and pointing a function pointer at the SRAM. We got the function address OK, but the length was (IIRC) fixed in code. Any tips on copying an entire function at run-time using GCC? Or how to get the length argument in this call:
memcpy(sram_loc, try_smlal, try_smlal_length);
--
Stef (remove caps, dashes and .invalid from e-mail address to reply by mail)
Thanks for the note - I already know. However, in this application there is neither much data nor much code. It's just a task that needs some amount of math operations, and I will have to trade power consumption against calculation time... I also tend toward using ARM, but I would like to see some figures.
Thanks for the link - it will probably provide at least some general figures (will have a closer look soon).
I fear that I will need double precision floating point math, for which assembler won't be much better than the RTL of a common compiler, I assume. (I'm well used to programming assembler, so that won't hurt me if it really makes sense.)
Too much power consumption for this application, I think.
I define this in a header:

    #define IRAM_CODE __attribute__((long_call,section(".icode")))

then

    IRAM_CODE void foo(void) { ... }
The linker script puts the section .icode into flash just like initialized data, something like this:

    __icode_rom__ = ADDR(.gcc_except_table) + SIZEOF(.gcc_except_table);
    .icode : AT(__icode_rom__)
    {
        __icode_start__ = . ;
        *(.icode);
        *(.idata);
        . = ALIGN(4);
    } > iram
    __data_rom__ = __icode_rom__ + SIZEOF(.icode);
Then the crt0.s init code copies it out, something like this:

    /* Copy data from ICODE to IRAM */
            ldr     r2,=__icode_start__
            ldr     r3,=__icode_rom__
            ldr     r4,=__data_rom__
            b       2f
    1:      ldr     r5,[r3],#4      /* load word from flash */
            str     r5,[r2],#4      /* store to internal RAM */
    2:      cmp     r3,r4
            blo     1b
Note that this assumes the code will stay permanently in RAM rather than being overlaid and loaded dynamically. A more dynamic version could be done by having multiple sections and then memcpy()ing the one you're interested in.
Note 2: GCC has a problem if you call an IRAM_CODE function from a non-IRAM_CODE function *in the same file* (it seems to lose the long_call attribute and uses a relative call that is typically out of range). So the best idea is to put the IRAM_CODE functions in a separate file.
If you are considering the MSP430 based on power consumption, be aware that the ARM parts are not hugely different once the clock rate is taken into account. I don't have good numbers for the MSP430, but they appear to be around 350 uA at 1 MHz. I don't know exactly how that varies with clock rate, but I'll assume it scales linearly with zero offset. The Atmel SAM7S parts are pretty much linear with almost no offset other than the bias for the internal LDO; the slope is about 650 uA per MHz. So between the MSP430 and the SAM7S it is about a 2:1 power difference. I can't say whether the processing power of the 32-bit device makes up for any of this or not.
I have several eval boards from Atmel and Philips and would like to run some benchmarks to see how the power and speed compare. If anyone would like to provide test code, I would be willing to run it in the next few weeks and make the results public.
None of these support floating point in hardware, so performance depends on the libraries you use. On ARM there are highly optimised FP libraries; the one I wrote takes about 25 cycles for fadd/fsub, 40 for fmul, and 70 for fdiv. Double precision takes almost twice as long. You would get 500 KFlops quite easily on a 50 MHz ARM7TDMI. Of course this is highly compiler/library specific; many are much slower than this, and 5-6x slower for an unoptimised implementation is fairly typical.
Doing floating point on the MSP, especially double precision, seems like a bad idea...
Sounds like a badly written library. If the instruction was available, the library should have used it in the first place. Even so, making the shift-and-subtract variant more than 10x slower requires you to really work hard at making it as slow as possible...
Later versions of ADS supported inlined S/UMULL, U/SMLAL was added in RVCT IIRC.
I was (maybe erroneously) assuming that the RTLs of common compiler packages have about equal performance...
Thanks, this is at least a rough figure I can use as a starting point.
Not all the math needs double precision - and hey, we've done DP floating point math on a Z80 as well. :-) I know that it will be much more work for the MSP than for an ARM. But from the overall application, it seems reasonable to me to also take the MSP into consideration.
Hey, that does all the work, we did it all by hand (memcpy, function pointer..). I have saved your article and will refer to it next time I need something like this, thanks.
-- Stef (remove caps, dashes and .invalid from e-mail address to reply by mail)
Of course this is true. Even when a given set of calculations has to be done, the consumed /energy/ may be roughly the same (ARM faster with more current, MSP with lower current but taking longer) - however it's not /only/ math that has to be done here. The overall current consumption, especially at those times when there's no math to do, is also relevant. To me these aspects seem easier to take care of with the MSP, so that's why I am also considering it.
That sounds interesting. But as Wilco mentioned, math performance can be expected to depend on the libraries used - so you'd have to take care with them. Also, at least for now, I can't provide test code. For the time being, I will look at the benchmarks that Jim pointed to, and consider the numbers given by Wilco (though I am really interested in how long a logarithm takes in a "good" [tm] library... :-) ).