Hi,
Subject line gives the gist of the issue -- though the reality is a bit more "involved" (of course! :> )
I'm trying to figure out how to instrument (certain) "ISR's" to more accurately track resource usage in an application. "Time" is the easiest to envision but the concept applies to other resources as well. [I've been struggling to come up with application neutral examples that folks can wrap their heads around]
For "long running" applications (e.g., many found in embedded systems, instead of more transient load-and-run desktop applications), some tasks/threads/processes (sidestepping the formal definitions, here) have one-to-one relationships with other tasks (et al.) -- sometimes to the detriment of those other tasks.
E.g., a task may set up an I/O action, blocking on its completion. That action may cause an ISR to be triggered some (fixed) time later (e.g., the I/O action primes the Tx register in a UART which, one character time later, signals a Tx IRQ). The "cost" of that ISR is borne by whichever "task" happens to be running "one character time" AFTER the instigating task blocked.
[in an application that can often be synchronous in nature -- do this, do that, lather, rinse, repeat -- this can result in task A's operations *always* penalizing task B (because of the scheduling priorities, etc.)]

The same sort of "problem" exists with many network stack implementations -- task A initiates some traffic and task N(etwork) "suddenly" becomes ready to *process* that traffic... at the "expense" of task B (by either deferring task B's execution or by scheduling network I/O that will result in ISR's stealing from task B).
I can handle the latter case (my RTOS "charges" task A with task N's related costs) but the ISR issue slips through the cracks -- it's too small to easily track PER INSTANCE costs (though the overall time spent in the ISR can be significant).
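(For concreteness, here's a minimal sketch of the kind of "bill the instigator" accounting I mean for the task-level case. Everything here -- the `bill_to` override, the `charge()` hook -- is hypothetical naming, not any particular RTOS's API; the idea is just that the scheduler bills a worker task's cycles to whichever task asked for the work.)

```c
#include <assert.h>
#include <stdint.h>

#define N_TASKS 4

/* Hypothetical per-task accounting: cycles billed to each task, plus an
   optional "bill to" override set when a task (e.g., the network task)
   is doing work on another task's behalf. */
static uint32_t charged[N_TASKS];   /* cycles billed to each task        */
static int      bill_to[N_TASKS];   /* -1 => bill self, else beneficiary */

static void account_init(void)
{
    for (int t = 0; t < N_TASKS; t++) {
        charged[t] = 0;
        bill_to[t] = -1;
    }
}

/* Called by the scheduler when `task` has consumed `cycles` of CPU:
   the cost lands on the instigating task, not the worker. */
static void charge(int task, uint32_t cycles)
{
    int payer = (bill_to[task] >= 0) ? bill_to[task] : task;
    charged[payer] += cycles;
}
```

So when task A kicks off traffic, the scheduler sets `bill_to[N] = A` before dispatching the network task, and A (not B) pays for N's runtime.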
The naive workaround is just to "fudge" quanta for the affected tasks and "hope for the best". But, this isn't an *engineered* solution as much as a kludge. Or, if you have a surplus of resources, you can just choose to "ignore the problem" (i.e., "derate appropriately").
The best solution I've been able to come up with is to wrap each ISR in a preamble-postamble that takes snapshots of a high-speed timer at the start and end of each activation (you have to reset the timer to reflect the impact of nested ISRs, etc.). Of course, this doesn't completely solve the problem as ISRs may have causal relationships... :-/
Are there any other schemes I can employ to adjust for these issues -- preferably at design time (instead of run time)?
--don