I have a pretty elementary question (I'm an intern - it happens!). My boss asked me to run a timer to measure how long it takes to execute some code on a Motorola 32-bit microcontroller. Is there a built-in tool in Metrowerks CodeWarrior that can accomplish this? Or is there a coding technique or external tool I can use? Any help would be greatly appreciated!
Do you have an extra output pin you can take high when entering the code and low when exiting it? Do you have an oscilloscope (and know how to use it) to watch that pin with? I've used this technique innumerable times - it's good not just for timing, but for debugging, to make sure the routine executes at all.

Alternatively, you can save a timer count at the start of the routine and take the difference at the end (accounting for rollover) - see the sketch below. It sounds like this is the way your boss is suggesting you do it.

Do you have a simulator for the device? It should count cycles: just record the cycle count at the start, run to the end, and subtract.

Either way, you need to know the cycle frequency (not necessarily the crystal frequency) to convert a cycle count to time, and likewise, to use the timer, you need to know the timer's increment frequency.
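A minimal sketch of the timer-count version, assuming a free-running 16-bit timer at a made-up address (check your part's reference manual for the real register and its width). Unsigned subtraction wraps modulo 2^16, so a single rollover between the two reads comes out right:

    #include <stdint.h>

    /* Hypothetical free-running 16-bit timer count register; the
       actual address and width come from the MCU reference manual. */
    #define TIMER_CNT (*(volatile uint16_t *)0x40001000u)

    void routine_under_test(void);   /* the code being measured */

    uint16_t measure_ticks(void)
    {
        uint16_t start = TIMER_CNT;
        routine_under_test();
        uint16_t end = TIMER_CNT;

        /* Unsigned subtraction wraps, so one rollover between the
           two reads is handled automatically. */
        return (uint16_t)(end - start);
    }

Divide the returned tick count by the timer's increment frequency to get seconds.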
There are a number of ways, depending on the hardware & software tools at your disposal and the accuracy you require. I'm not familiar with CodeWarrior specifically, but you might check the user's manual for a code-profiling feature.
If you have to do it yourself, the general idea is to create some condition (raising or lowering a signal, outputting a value to an I/O port, etc.) that indicates entering/exiting the code section and watching for that condition with something that measures time. ICEs (In-Circuit Emulators) and Logic Analyzers are ideal for this.
Probably the minimal solution, requiring a single-channel storage scope and giving you a pretty good number, is to connect a scope probe to a line whose signal you can raise or lower at will. Start with the signal low and raise it just before entering the section being measured; lower it again when exiting the section. Set the scope to its largest time-per-division setting and have it trigger on the signal going from high to low (if it shows activity up to the trigger; trigger on low-to-high if it shows activity starting at the trigger). This should capture the duration of the code execution. Reduce the time per division until the entire event just fits on the screen, and read the time from the length of the pulse. A sketch of the software side is below.
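A minimal sketch of the pin raise/lower, assuming a memory-mapped output port (the register address and pin mask are invented; substitute whatever spare pin your board exposes):

    #include <stdint.h>

    /* Hypothetical GPIO output data register and pin mask. */
    #define GPIO_OUT  (*(volatile uint8_t *)0x40002000u)
    #define SCOPE_PIN 0x01u

    void section_to_measure(void);

    void timed_run(void)
    {
        GPIO_OUT |= SCOPE_PIN;            /* pin high: entering the section */
        section_to_measure();
        GPIO_OUT &= (uint8_t)~SCOPE_PIN;  /* pin low: leaving the section */
    }

The two port writes add a few cycles of overhead, which is usually negligible next to the section being measured.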
Something to consider - does the execution time depend on some external condition (amount of data to process, presence or absence of signals, etc.)? Also, is the clocking variable? (For example, the RCA 1802 processor's speed varied with the supply voltage.)
What you might want to do is compute the theoretical time the code section should take, by summing the cycle counts for each instruction (from the processor's manual) and dividing by the clock rate (cycles per second). That should give you a sanity check to use against whatever you measure.
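For example (numbers invented for illustration): a section that sums to 2,000 cycles on a 16 MHz instruction clock should take 2,000 / 16,000,000 = 125 microseconds.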
This is precisely the sort of situation where I'm vaguely frustrated by the lack of a trivial, 32-or-more-bit counter running at the same rate as, or some fixed fraction of, the machine clock. The x86 got one around the time of the Pentium (the time-stamp counter, read with RDTSC), presumably for just this purpose.
ANSI C gives you clock(), which counts up at CLOCKS_PER_SEC ticks per second. If your compiler and library are kind enough to implement the ANSI functions in an embedded environment, handle all the hardware for you, and happen to use a sensible value for CLOCKS_PER_SEC, then something like this should work:
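    #include <time.h>

    void code_to_measure(void);   /* placeholder for the section being timed */

    double measure_seconds(void)
    {
        clock_t start = clock();
        code_to_measure();
        clock_t end = clock();

        /* First method: force the tick difference into an integer
           type before dividing. */
        double by_integer = (unsigned long)(end - start) / (double)CLOCKS_PER_SEC;

        /* Second method: divide the clock_t difference directly. */
        double direct = (end - start) / (double)CLOCKS_PER_SEC;

        (void)by_integer;
        return direct;
    }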
If clock_t is an integer type, the two methods are equivalent because of integer promotion. If clock_t is a floating-point type, say with units of seconds, the first method will lose precision in the cast, while the second method carries full precision.

The Motorola part most likely has a 64-bit counter that ticks at some fraction of the processor clock rate. For a pure software solution, I've read that register at the start of the routine, read it again at the end, and calculated the difference; the difference between the two 64-bit values is the time required to execute the code. Note that this value will change with interrupt processing, logic branching, and cache updates.
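A minimal sketch of that read-and-subtract approach, assuming the 64-bit counter is exposed as two 32-bit halves at invented addresses (the double read of the high half guards against a carry between reading the two halves):

    #include <stdint.h>

    /* Hypothetical 64-bit free-running counter, split into two
       32-bit halves; the addresses are invented for illustration. */
    #define CNT_HI (*(volatile uint32_t *)0x40003000u)
    #define CNT_LO (*(volatile uint32_t *)0x40003004u)

    static uint64_t read_counter(void)
    {
        uint32_t hi, lo;
        do {              /* re-read if the low half wrapped mid-read */
            hi = CNT_HI;
            lo = CNT_LO;
        } while (hi != CNT_HI);
        return ((uint64_t)hi << 32) | lo;
    }

    uint64_t elapsed_ticks(void (*routine)(void))
    {
        uint64_t start = read_counter();
        routine();
        return read_counter() - start;
    }

Divide the tick difference by the counter's tick rate to convert to seconds.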
Thanks for all your help, everyone. Andy Sinclair won the prize... it's in the mail... However, I tried as many of the other suggestions as I could, and now I know several different ways of accomplishing this task, so thanks. And thanks to Ben Bradley for the tip with the scope - I'm sure that'll come in handy in the future.