Hello, I'm measuring execution time for an application running in user mode. (The system environment is Timesys Linux/RK on a Pentium IV, and I run the application at runlevel 1, which is single-user mode. Sorry, my question is not exactly about embedded Linux.)
Because I need to measure the execution time of even very small functions, I use the assembly instruction "rdtsc", which returns the CPU cycle counter. Since each measured function is executed hundreds of thousands of times, I record the minimum and maximum number of cycles elapsed in the function. But the gap between min and max is too big. The following is sample data for 7 different functions.
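For reference, here is roughly how I take a measurement (a minimal sketch; fn_under_test is a placeholder for the real function):

    #include <stdio.h>

    /* Read the 64-bit time-stamp counter; RDTSC returns the
     * low half in EAX and the high half in EDX. */
    static inline unsigned long long rdtsc(void)
    {
        unsigned int lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((unsigned long long)hi << 32) | lo;
    }

    int main(void)
    {
        unsigned long long start, end, delta;
        unsigned long long min = ~0ULL, max = 0;
        int i;

        for (i = 0; i < 100000; i++) {
            start = rdtsc();
            /* fn_under_test();  placeholder for the measured function */
            end = rdtsc();
            delta = end - start;
            if (delta < min) min = delta;
            if (delta > max) max = delta;
        }
        printf("MIN %llu  MAX %llu cycles\n", min, max);
        return 0;
    }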
         fn1    fn2    fn3     fn4     fn5    fn6    fn7
    MIN  400    92     124     112     8488   548
    MAX  1336   296    412960  15368   1256   392    9960
I'm not sure how to explain the variance. First, ISRs (interrupt service routines) may happen to run while these functions execute. Second, the VM subsystem may interfere through page faults or swapping. For the first case, I considered disabling interrupts at the starting point and re-enabling them at the ending point. Because my app runs in user mode, I would have to implement a system call for this. But I don't think a system call can work for this purpose, since the system call wrapper itself uses a software interrupt. (Am I right?)
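One workaround I came across, if I understand iopl(2) correctly, is to raise the process's I/O privilege level to 3 so it can execute CLI/STI directly in user mode, with no system call inside the timed region. A minimal sketch (requires root, and I'm not sure it's safe on my kernel):

    #include <stdio.h>
    #include <sys/io.h>   /* iopl() on glibc/x86 */

    int main(void)
    {
        /* IOPL 3 permits a user-mode x86 task to execute
         * CLI/STI itself. This needs root privileges. */
        if (iopl(3) < 0) {
            perror("iopl");
            return 1;
        }

        __asm__ __volatile__("cli");   /* disable maskable interrupts */
        /* ... timed code ... */
        __asm__ __volatile__("sti");   /* re-enable interrupts */

        return 0;
    }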
Does anybody have a suggestion for my problem? My questions are: 1) Why is the gap between the measured min and max so big? 2) How can I minimize the effect of kernel or other processes' activity? 3) Is there a way to disable interrupts while the measured function executes? 4) To avoid page swapping, is there a way to lock the program in memory?
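For questions 2 and 4, I'm considering something like the following, using mlockall(2) to pin the process's pages and the SCHED_FIFO real-time policy to keep ordinary processes from preempting the measurement (a sketch; I haven't verified it removes all the variance):

    #include <stdio.h>
    #include <sched.h>
    #include <sys/mman.h>

    int main(void)
    {
        struct sched_param sp;

        /* Question 4: lock all current and future pages into RAM
         * so the timed code cannot take page faults or be swapped. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0)
            perror("mlockall");

        /* Question 2: run under the real-time FIFO policy at the
         * highest priority so ordinary processes cannot preempt us.
         * Both calls require root. */
        sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
        if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0)
            perror("sched_setscheduler");

        /* ... run the rdtsc measurements here ... */
        return 0;
    }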
Thank you so much.