Software Metrics (cat flame > /dev/null)

Good to see it in writing!

This comment can easily be expanded to apply to every other measure in business. It is, of course, worrisome that your testimony leads me to infer that you _never_ saw them well applied.

These are project management metrics, which also have their place and value.

This comment, too, could be applied to any other measurement in business in general, so I would say it will not move us any further along in the maturity of the organization.

I think everybody can remember cases where number cooking in accounting turned into scandals as big as Enron, etc.

If the teams feel 'rushed', it is either because they don't have the intellectual background to argue robustly that the pace being asked of them is unattainable, or because the 'work done' metric takes only one side of the equation and doesn't consider the others already mentioned elsewhere in the thread: maintainability attributes, testing results, quality, etc.

Embedded SW engineers should be able to understand the problems of their profession and see the correct goals in the correct time frame. The measures of success are to be settled between the engineers and the client, be it a boss or a contractor.

If you don't have clear ways to demonstrate the latter, the risk of spending 400% more on the project would make it very hard to get approved.

We have to break the vicious circle of delusional measures and offer good ones that make sense in both the business and technical realms.

Yes, of course, see my comments above.

This doesn't solve any problem... we have to start facing it and educating our clients.

--
Cesar Rabak
GNU/Linux User 52247.
Get counted: http://counter.li.org/
Reply to
Cesar Rabak

I think that relies on some critical assumptions that, IME, are not true for many organizations. Namely, that the organization has the resources and *time* to spend on thorough evaluations of potential projects *before* undertaking them.

It takes a *lot* of effort to come up with anything more than a back-of-the-napkin sketch of what a product might entail. At the very least, you need to flesh out a "product specification" (which can lack much detail but must cover all the "essentials") that "someone" can explore in greater depth to make an initial estimate of the "interior requirements" thereof.

I've worked for very *few* organizations that have the resources to spend on this sort of up-front effort. Many are "barely staffed" (a step above "under-staffed") and working in fast-moving markets where you can't invest calendar months *thinking* about whether or not to pursue a project. Indeed, some owe their existence to "lucky gambles" (intuiting The Right Projects to pursue) and often can't sit back on their laurels "milking" an old idea for their long-term survival -- their past successes can (or will) be too easily cloned and leave them as second rate competitors in their *own* market (!)

Sure, Apple, MS, etc. can afford to have folks sitting around *thinking* about the next product to push into the pipeline *while* the current product is working its way through production. But, most (?) firms don't have that luxury. Everyone is either working on a *current* product in preparation for release, a newly *released* product or maintaining a "mature" product. [I'll admit I have deliberately gravitated to these types of firms as the work -- and division of labor (or lack thereof) -- has tended to be more interesting. Others may have different experiences]

I would consider that consistency, not quality. Someone can consistently produce "bad product". :<

See above.

I guess our experiences have been very different. I can recall several instances where a boss was complaining because I was "behind (his) schedule" -- despite my being able to show that I was within man-days of my "initial estimate".

Sure, I can say, "See, *I* was right (with my initial estimate)!". But, if *he* has bid the job at a lower cost or shorter timescale, then that "pressure" has to build *somewhere*. Eventually, it manifests on The Bottom Line.

Let me be clear. I see nothing wrong with metrics. Whether they are used for productivity, planning, quality, etc. Rather, where *experience* has shown metrics to be A Bad Idea is in the lack of maturity of the consumers of those metrics.

There have been *billions* of humans born on this planet. Surely a statistically large enough sample from which to draw some solid statistics. So, we *know* gestation period is ~267 days (IIRC). But, if you had to "bet your life" (livelihood) on this figure, you'd pad it to account for the *expected* variation (~260 - ~290).

Yet, even *that* isn't a sure thing as a child could be born prematurely, etc.

My point is, this is a well documented process *governed* by biological "laws". Yet, you still can't "bet your life" on it with 100.00% certainty. How wide a range of values would you be comfortable with if you were *just* "betting your livelihood" on the outcome? :>
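As a rough illustration of that "padding" (a minimal sketch with made-up numbers, not anything formal): take your own history for comparable tasks and quote a range wide enough to cover the expected variation, rather than the single point estimate.

# Minimal sketch: padding a point estimate to cover expected variation.
# The historical figures below are made up for illustration.
from statistics import mean, stdev

history = [260, 266, 271, 280, 258, 290, 267, 273]  # durations (days) of comparable past efforts

def padded_estimate(samples, k=2.0):
    """Return (low, nominal, high) where low/high pad the mean by k standard deviations."""
    m = mean(samples)
    s = stdev(samples)
    return (m - k * s, m, m + k * s)

low, nominal, high = padded_estimate(history)
print(f"nominal ~{nominal:.0f} days; quote a range of {low:.0f}-{high:.0f} days")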

Shirley, any type of new product design is *less* well constrained than this. Yet, folks blissfully prepare charts with milestones laid out AS IF they were magically destined to occur at these points. And, *fret* when "reality" fails to coincide with "fantasy".

I didn't want this discussion to degenerate into other "non-metric" related issues -- as I've done. :< Rather, I want to point out what I suspect most folks would acknowledge from their own personal experiences -- most "planning" (even *with* "good data") ends up being an exercise in "wishful thinking" and, a few ohnoseconds after the planning has been "finalized", all the caveats that were taken into consideration *during* the planning ("*If* we can get X on day Y... AND the algorithms we have designed actually *work*...") are gleefully forgotten.

So, instead of a productive post-mortem on the planning *process* itself (i.e., what *were* the assumptions that were made? why were they faulty? were we overly optimistic or just naive? etc.) the "blame" is placed on "bad performance", "bad luck", "bad metrics", etc.

[this is not unique to our industry. It is sometimes fun to watch other folks going through the same contortions in other industries... and learning just as LITTLE about their failures]
Reply to
Don Y

Cesar Rabak wrote: [ ... ]

It may not be intellectual background. Alpha-dominance has a huge amount to do with management.

Mel.

Reply to
Mel

Understood.

I'm talking more about things like measures of code complexity, quality, etc. "Scheduling" has too many other issues that come into play.

Yes. Though I call this a "consumer" problem. Educate (or replace) the people using the data. I contend that this is "easier" to do than fabricating the data itself out of "nothingness".

Or, causes your firm to simply cease to exist!

Practice seems to indicate that the former is the track most often followed.

OTOH, you might not *have* the $5 to throw at the "right" solution (in which case, you're in the wrong *business*).

The most common delusion that I have encountered is the "We don't have time to do it right -- but, we'll have time to do it over!" mentality. This seems to be the tacit admission that the project should *not* be undertaken, "but we really *want* to undertake it!".

Consider:

- if you don't have the time/money to do it right, the product you are likely to come up with will probably be substandard and not fare well in the market (you will then blame something *else* for the monies and opportunities that were diverted to this failed project instead of putting the blame where it really belongs)

- if your product *doesn't* fail miserably, you will *still* need to spend those resources (and *more*) trying to finish/fix it to be the way it *should* (ideally) have been. So, your total investment will be increased *and* you will have exposed a product idea to your competitors who *may* have the resources to Do It Right and steal market from your INFERIOR product.

- if your product is wildly successful (sales quantities), you won't have the *time* to spend fixing it. You'll be struggling to ramp up production and deal with all the blemishes that you glossed over previously. Again, an opportunity for a competitor to come in with a (slightly?) better product -- but reliable AVAILABILITY -- and steal your thunder.

In each case, you have diverted your resources and attention from some OTHER project that could have been A Sure Thing -- fitting your resources and capabilities better.

I.e., the only winning scenario here is to hope the product *fails* and you just swallow your losses up front.

I actually find them useful "for myself" (note that I am self-employed). So, they are really only *relative* metrics, in my case. Used to tell me how a particular implementation compares to another/similar implementation. They help me decide when I need to rearrange the structure of a module ("refactor" being the term currently en vogue) to better manage its complexity, etc.
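A crude sketch of what that *relative* comparison might look like (hypothetical code, not any actual tooling): approximate the "busyness" of a module by counting decision points, and only act when a new version drifts well past your own baseline.

# Crude sketch: compare two versions of the *same* module by counting decision points,
# a rough stand-in for cyclomatic complexity. Thresholds and snippets are made up.
import re

BRANCH = re.compile(r"\b(?:if|for|while|case)\b|&&|\|\||\?")

def rough_complexity(c_source: str) -> int:
    """1 + number of decision points found in the source text."""
    return 1 + len(BRANCH.findall(c_source))

old_version = "if (x) { for (i = 0; i < n; i++) f(i); }"          # toy stand-ins for
new_version = "if (x && y) { while (n--) { if (g(n)) f(n); } }"   # real source files

ratio = rough_complexity(new_version) / rough_complexity(old_version)
if ratio > 1.5:  # arbitrary threshold; only the relative trend matters
    print(f"new version is {ratio:.1f}x 'busier' -- maybe time to restructure")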

My DTP tools give me metrics regarding the complexity of my writing. I use these to tell me when my sentence and paragraph structures are getting too complex for Joe Average to digest. (at which point, I insert a few paragraphs of "See Dick run. See Jane run. Run Dick, run!" until the "score" drops to something more acceptable :> )
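A minimal sketch of the kind of score such tools report (here, the standard Flesch reading-ease formula with a deliberately crude syllable counter; useful only as a relative trend):

# Minimal sketch of a readability score similar to what DTP tools report.
import re

def count_syllables(word: str) -> int:
    """Very rough: count groups of vowels -- good enough for a relative trend."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(flesch_reading_ease("See Dick run. See Jane run. Run Dick, run!"))  # higher = easier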

But, I am neither qualified, motivated nor *educated* enough to be able to compare my metrics to those of another (writer/developer) and come to any *defensible* conclusions based solely on those numbers. Instead, I compare to the only Standard that I have any intimate knowledge of -- myself. :-/

Reply to
Don Y

Hi Cesar,

[*much* elided as there is a lot of overlap with other posts]

I think folks in 9-to-5's have little recourse, here. They are at the mercy of their managers (who are at the mercy of *their* managers, etc.). It doesn't matter how accurate your assessment of a project is if the higher-ups refuse to be bound by physical laws. :>

Even working freelance (with a lot of latitude as to what jobs I am willing to undertake), you are still pressured by having to pay the bills, etc. Clients don't like it when you say "No (it can't be done for that money/time/size/etc.)".

People *know* "where babies come from" -- so why are there *any* "unplanned pregnancies"? :>

The "Just Say No" type of thinking fails to acknowledge Reality.

Having said all that, there is nothing that prevents you AS AN INDIVIDUAL from benefiting from tracking these sorts of metrics on your own (there are tools to do so for most of them) and using them to better understand *your* "process".

Regardless of the Fantasy that you are forced to work within ("We're going to have this baby in 3.5 months -- don't tell me it's going to take 9 months!"), Reality will, ultimately, prevail.

[I have *no* idea why all my analogies in this thread revolve around childbirth... perhaps the above "classic" comment has been underlying many of my arguments as something easy to relate to]
Reply to
Don Y

Yes, but you can compare to "yourself" (your other projects, etc.) just as well. And, the comparison is probably more appropriate since the types of products will tend to be similar (you won't be comparing a GUI design to an HRT control system), the staff similar (you won't be comparing experienced developers in a high budget shop to "college grads" at a small startup) and your familiarity with the "other side" of the comparison will be more valuable (you won't be comparing yourself to some random project undertaken at some obscure IBM division in the 1980's).

Metrics distill too much out of an experience (intentionally). Some familiarity with all of the things being compared helps put the numbers back in perspective.

E.g., my first commercial (software driven) product I could probably recreate, from scratch, in a few man-weeks *today*. Has *it* changed? No. Has its complexity changed? No. But, the tools and techniques that I would apply *today* (even if forced to use identical hardware) would make it an entirely different experience.

My point was that the types of applications stress different metrics in different ways. And, that those factors might not be reflected in the metrics -- or, not *accurately*/proportionately reflected!

E.g., you can write a graphic application that may be thousands of lines of code. It might have very high complexity measures. It *looks* (from the standpoint of a set of software metrics) to be much more complex than, for example, a PID loop. OTOH, in terms of *real* complexity, the PID loop might easily exceed that of the bulky graphic application because so much of its complexity is NOT manifest in attributes that can easily be *counted* (semicolons, operators, etc.).
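A toy illustration (made-up snippets, not from any real project): the branchy GUI handler out-scores the PID update on everything a naive metric can count, even though the hard part of the PID -- choosing gains, handling windup, sample-time effects -- never appears in anything countable.

# Toy illustration: "countable" attributes vs. conceptual difficulty.
pid_update = """
error = setpoint - measured;
integral += error * dt;
derivative = (error - prev_error) / dt;
output = kp * error + ki * integral + kd * derivative;
prev_error = error;
"""

gui_handler = """
if (event == CLICK) { w = find_widget(x, y); if (w) highlight(w); }
else if (event == DRAG) { move_widget(w, x, y); redraw(w); }
else if (event == KEY) { dispatch_key(focus, key); }
else if (event == SCROLL) { scroll_pane(pane, dy); redraw(pane); }
else { log_unknown(event); }
"""

def countable(src):  # the only things a naive metric can "see"
    return {"lines": len([l for l in src.splitlines() if l.strip()]),
            "semicolons": src.count(";"),
            "branches": src.count("if")}

print("PID :", countable(pid_update))
print("GUI :", countable(gui_handler))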

Agreed. Though I don't aspire to *manage* it as much as *understand* it (I consider the former to be a separate issue *dependent* on the latter)

I disagree. It is an intangible. Most businesses track *tangibles*. You can only measure (resulting) software aspects indirectly... how many hours to develop, how many hours to maintain SO FAR, how many dollars spent settling lawsuits, etc.

And, you never have a "final figure" to point to. Have *all* the bugs been uncovered? Or, will we see a whole sh*tload of new bugs pop up in 2038? :>

It's far too easy to enter uncharted water with a software design. Too many ways to arrange lines of code to come up with different products/results.

By contrast, there are only a relatively few number of ways that a gas pedal can be installed on a Toyota -- correctly and incorrectly. And, you can easily inspect every instance and know how much it will cost to fix each of them (worst case: replace the entire car. What's the worst case cost of fixing a bug on a Mars rover? :> )

No! If loopholos can be compared to other loopholos, your metric still has value!

I've worked in several industries that had wacky metrics to track things that were important to them. E.g., one used "buckets of alumina grit" (what's a "bucket"? what size grit? etc.) poured over the product to *abrade* the appearance (i.e., testing the "finish" on the product). If the number of buckets went down, they quickly stopped the manufacturing process to identify what was going wrong... "It's always been 7 buckets! Why is it now suddenly *6* buckets??" How much worse was '6' than '7'?

If your LoC/day figures start to change, you have to wonder if something in your process has changed (maybe too many meetings?) or if there is something inherently different about this *project* that bears closer examination. I.e., if one metric has changed, there is a chance that *others* may eventually also change (e.g., what if your bugs/day figure changes and you need to double your test/certification time?)
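A rough sketch of that sanity check (baseline figures made up): compare the current figure against your own historical spread and only start asking questions when it drifts well outside it.

# Sketch: flag a metric that has drifted outside its historical spread.
from statistics import mean, stdev

history_loc_per_day = [38, 42, 40, 45, 37, 41, 39, 44]   # made-up baseline
this_week = 22                                            # made-up observation

m, s = mean(history_loc_per_day), stdev(history_loc_per_day)
if abs(this_week - m) > 2 * s:
    print(f"LoC/day = {this_week} vs baseline {m:.0f} +/- {s:.0f}: "
          "something changed -- too many meetings, or an unusual project?")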

But we don't "sell software". We sell *products*. The consumer cares little about how many LoC/day our developers achieved. Nor how complex their code is. What they care about is cost and functionality. They might not even care about the number of (known + unknown) REMAINING BUGS in the product (i.e., if a bug never manifests for them, what do they care? How many folks born in 1900 worried about the Y2K bug(s)? :> )

I.e., if you priced your product in frodbelgs while everyone else used dollars, customers would probably be distressed because they couldn't gauge the relative cost (to them) of your frodbelgs.

Reply to
Don Y
[%X]

I think, Don, you have come to the crux of the matter. The fact is that not many industries do a post-mortem on the development they have just completed. If they did, they would be better educated and informed about the effectiveness of their planning/estimation or development assumptions.

--
********************************************************************
Paul E. Bennett...............
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Reply to
Paul E. Bennett

I think it might be better for them to read "Better Embedded System Software" by Phil Koopman. Highly recommended for every developer's desk and all management conference tables (open at all chapters simultaneously by preference).

--
********************************************************************
Paul E. Bennett...............
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Reply to
Paul E. Bennett

Many decades ago I worked for a well-known company that made testers. They had a nice bonus for the engineer who designed the board with the fewest field failures. The engineers regularly fought hard to design a memory board, which was a shoo-in to win the prize (compared to the tough analog front-ends exposed to regular customer abuse).

They also didn't count labor hours in the metrics for board cost. That led them to take a UART board using an *expensive* crystal out of production and reintroduce its predecessor, which had hand-tweaked-and-soldered RC frequency generation. They also discontinued a subsystem using ribbon cables to reintroduce hand-soldered cable bundles because it was *clearly* less expensive. BTW, labor was expensive even in the USA back then.

I could go on for hours...

Most metrics used don't reflect what most of us would consider reality. Software metrics in use today lead to outcomes just as silly as those listed above...

Hope this was entertaining and maybe even helpful, Best Regards, Dave

Reply to
Dave Nadler

Or, if they do, it's distilled to a couple of numbers:

- estimated cost: X

- actual cost: Y

suitable for bean-counting (but little else).

I've gravitated towards email only contact with clients (typically out-of-state, etc.). This evolved from the unavoidable hassles of "phone tag" (unlike the client -- who is salaried -- I don't get paid for the time I spend trying to contact someone on the phone!) coupled with my odd working hours.

Initially, it was a "win" because it cut down on a lot of silly "banter" ("How are the wife and kids? How's the weather?" etc.). But, it also saved me the trouble of transcribing/summarizing phone conversations (so I had a record of what was agreed to along with action items in each conversation).

But, I discovered that it also had benefit because it forced folks to *think* about what they wanted to ask instead of just "shooting from the hip"... "musing". This seems to keep a project more focused than random "digressions" that creep in informally during a conversation ("Hey, we could add some blue and green lights and use it for a XMAS decoration, too!")

*And*, it helps document how the project's scope may have changed along the way. Not that clients try to *deny* that there were changes but, rather, they tend to forget how *many* of them creep in if you don't exert some discipline *and* have a record of them!

One client made a casual statement once about my having found "some bugs" in their product. As if it was an inconsequential thing (i.e., hardly worth many billable hours). Since I had kept all the email and snail-mail that I generated during the project, I was able to point to a stack of paper over an inch thick *documenting* those bugs. I.e., a testament to the actual number of bugs as well as a graphic depiction of the amount of labor involved (just in *documenting* them!)

Again, metrics are A Good Thing (whether they describe the product or the process -- you have to have *some* quantifiable way of comparing X to Y). What's lacking is an understanding of how to interpret those metrics and *apply* them, productively.

This brings me back to my initial post: *what* to track and *why* to track it (acknowledging how easy it is for "metrics for the sake of metrics" to lead one astray).
Reply to
Don Y

Presumably, this was for semiconductor memory (and not core planes :> )...

I don't understand. Are there two different criteria at play, here (cost and failure rate)?

But that, I think, is because the metrics are being used for "business purposes" (cost accounting, etc.).

E.g., I just coded a "unified memory manager" to replace the various different *types* of memory management mechanisms used in many embedded systems. Once I've given it a thorough shake-down in an application, I will go back and write comparable "traditional" tools to provide the same functionality. *Then*, I will see what their "metrics" look like to help me evaluate the utility (or disutility?) of this new approach.

I.e., if the new approach is *conceptually* more complicated but "metrically" simpler/smaller/etc., then that speaks to reliability, maintainability, etc. in a way more readily defensible than some emotional "hand-waving".

Someday, someone will collect, catalog and publish all these anecdotes so we can relive the chuckles in our "declining years" :>

Reply to
Don Y

Bonus metric was simple failure rate. Not core memory, I'm not that ancient.

Manufacturing metric cost excluded labor, because "labor was same for all boards".

Reply to
Dave Nadler

I think it is something else. Please refer to my longer reply to Don's post.

--
Cesar Rabak
GNU/Linux User 52247.
Get counted: http://counter.li.org/
Reply to
Cesar Rabak

On my desk as well, but in the few pages he devotes to metrics the theme seems to be "this is a slippery slope" and "you have to compare against yourself"...

Best Regards, Dave

Reply to
Dave Nadler

My process, through the four forms and register that tracks the development and forms the audit trail, provides numbers on the function points in the system, the number of errors or issues raised, the number of errors or issues corrected or dealt with, and the time taken for each one. I do not worry about counting LOC as it is not really that meaningful in Forth.

There is no real effort expended in collecting that data, as it falls out of properly applying my process. A paper I gave at one of the Safety Systems Symposia has a description of my process (proceedings published by Springer-Verlag). The core of the process is applicable at all levels and to all technologies involved in a project and meshes hierarchically throughout. The only other aspect of the process is knowing the documentation that needs to be produced.
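As a rough illustration of the kind of register that yields those numbers (the field names and entries here are hypothetical; the actual forms will differ):

# Hypothetical sketch of a development register producing the figures mentioned above:
# issues raised, issues closed, and time spent per entry.
from dataclasses import dataclass

@dataclass
class Entry:
    item: str          # function point / issue identifier
    raised: bool       # recorded as an error or issue
    closed: bool       # corrected or otherwise dealt with
    hours: float       # time spent on this entry

register = [
    Entry("FP-017 input scaling",  raised=True,  closed=True,  hours=1.5),
    Entry("FP-018 limit check",    raised=True,  closed=False, hours=0.5),
    Entry("FP-019 display update", raised=False, closed=False, hours=2.0),
]

raised = sum(e.raised for e in register)
closed = sum(e.closed for e in register)
print(f"raised={raised}, closed={closed}, total hours={sum(e.hours for e in register)}")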

--
********************************************************************
Paul E. Bennett...............
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Reply to
Paul E. Bennett

He does mean that it is most valid within the organisation and not across different organisations. It is a slippery slope if the wrong sort of metrics are gathered. However, keeping metrics on sensible facets of the development process will help in current and future development management, especially if you have a post-mortem at the end of each development to see what went right and what was wrong.

--
********************************************************************
Paul E. Bennett...............
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Reply to
Paul E. Bennett

On 15/7/2011 14:54, Don Y wrote:

This way of thinking is akin to paralysis...

But you end up having to say it, don't you?

This is a non sequitur to our conversation. Is the answer 'for the same reason' that too many people drink and drive? Or do drugs? Or dare to do 'stunt'-like maneuvers for YouTube?

If "Reality" is not ingrained in the framework of thinking of the person, the other side of the consequences of you point of view applies as well.

It is this process of instilling the gathering and *correct* use of metrics that we have to put to work, making these instruments part of the correct perception of Reality.

Yes. See my comment on this on another reply to another post of yours.

--
Cesar Rabak
GNU/Linux User 52247.
Get counted: http://counter.li.org/
Reply to
Cesar Rabak

But that (IMHO) is The Way It Is. An "employee", when faced with a PHB who refuses to face reality, has only one avenue of recourse: to quit and find a new employer who (hopefully) isn't as delusional.

The difference, there, is that you *can* say "I told you so" when Reality bears witness to your assertions. A smart client will learn from that experience. A foolish client won't -- in which case, you "move on".

*Think* about the answers to the questions you posed (as well as my analogy). Despite overwhelming evidence (and, often, *explicit* acknowledgement of the problems that a "worker" points out "ahead of time") that "you guys are going to make a BIG mistake if you ignore what Time and Experience are telling you", why *do* organizations persist in this foolhardy behavior?

Do they think that *next* time Reality will be *different*? How many times do you have to shoot yourself in the foot before you realize you should MOVE YOUR FOOT??

Reply to
Don Y

I imagine computing the FP metric is relatively straight-forward? "Words" = operators (and ins and outs are easily enumerated)?
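As a rough sketch of the kind of counting I have in mind (purely a guess at the process, with made-up word definitions): scan the source for colon definitions and read the ins/outs straight from each stack comment.

# Guesswork sketch: enumerate Forth colon definitions and their stack-comment ins/outs.
import re

DEFINITION = re.compile(r":\s+(\S+)\s+\(\s*([^)]*?)--([^)]*)\)")

forth_src = """
: scale  ( n factor -- n' )  * 100 / ;
: clamp  ( n lo hi -- n' )   rot min max ;
"""

for name, ins, outs in DEFINITION.findall(forth_src):
    print(name, "inputs:", len(ins.split()), "outputs:", len(outs.split()))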

I tried googling last night but nothing turned up. (though I did happen across a photo of you? riding a unicycle with a chimpanzee atop your shoulders beating a toy drum :> )

Do you have a pointer to a {PDF,PS} -- or, a copy you can email to me?

Thx,

--don

Reply to
Don Y

I might ask for a copy, as well. So a link would be nice.

Jon

P.S. However, it is Springer-Verlag. Unless draft rights were retained or a slightly different version made, he may not retain the rights to do more than send a copy on request.

Reply to
Jon Kirwan
