Hi,
The subject of "software metrics" has limped across my desk.
I don't dislike metrics for what they may show (or fail to show) but, rather, because they usually don't have a stated *purpose*.
I.e., what are you trying to measure? Why are you trying to measure it? What do you plan to do with the result (besides filing it in a 3-ring binder)?
I can argue many points *against* the (seemingly arbitrary) practice of tracking metrics... but I'd rather approach this from the *other* side of the fence: defining *realistic* goals for which metrics can provide VALUABLE insight, choosing the appropriate metric(s) to measure progress toward (or away from) each goal, and ACKNOWLEDGING the shortcomings inherent in a particular measurement strategy.
E.g., you can use metrics to measure productivity, complexity, reliability, maintainability, cost, completion, etc. But often you get a snapshot of only *one* of these -- at the expense of all the *others* (a trivial illustration follows).
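To make that "single snapshot" point concrete, here's a minimal sketch (my own illustration, not anyone's production tool) that scores a rough cyclomatic complexity per function, assuming Python source and using only the standard library's ast module. Note what it *doesn't* tell you: reliability, cost, schedule, maintainability...

import ast
import sys

# Node types treated as branch points for this rough approximation.
# (Deliberately ignores comprehensions, match statements, etc. -- a
# real tool would make different choices, which is part of the point.)
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With,
                ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source):
    """Rough score: 1 + number of branch points in each function."""
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(n, BRANCH_NODES)
                           for n in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        print(cyclomatic_complexity(f.read()))

Even when the numbers it spits out are "right", they answer exactly ONE question about the code -- and say nothing about whether the measured functions are correct, useful, or worth their cost. Which is precisely my complaint about metrics without a stated purpose.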
So, rather than arguing on N "fronts" (and appearing "obstructionist"), what guidance (firsthand experience!) can folks offer to bend the debate into one that will produce meaningful results (instead of just "pages of numbers")?
Thx,
--don