Advice on books about software quality.

Hi.

I would appreciate having some advice on good books about software quality, preferably with a practical approach: how to implement a good development process, tools and techniques to support such a process.

I'd also appreciate your advice on books that focus on the tools for software testing, metrics etc.

After browsing through Amazon I got overwhelmed with so many titles and I am not sure how trustworthy the reviews are.

TIA for your hints and opinions.

Elder.

Reply to
Elder Costa

If you've never read it before, I'd start with "Code Complete" by Steve McConnell. It has a lot on the nuts and bolts of software design, but with an eye to improving quality and documentation. The section on "Quality Improvement" would be very useful for you to read. It has good coverage of doing code reviews, unit tests, and debugging. This is the kind of book I wish I had read 25 years ago when I was a beginning coder (of course, no one had written anything like it way back in the stone age).

I was a little leery of the book because it is published by Microsoft, a name I seldom equate with quality. But McConnell writes good stuff. He's got a handful of other books out; IMHO all are worth checking out.

Mark Hahn

PS and if Richard Ames who sat next to me at Oresis is still reading this newsgroup, drop me a line!

Reply to
mark hahn

You can easily become overwhelmed with all that is *written* about this "process". But why is it, despite the thousands of pages written, that software quality still *sucks*? It would be hard to imagine that no one is READING this material -- since each title sells many thousands of copies! And it's hard to dismiss those copies as being read by the wrong *audience* (I doubt many grocers, lawyers, pharmacists, athletes, etc. are snapping up these titles!). Perhaps they are written by folks who aren't active *practitioners*? (i.e. it's real easy to "solve world hunger", "provide universal peace", etc. -- ON PAPER...)

In my experience, the biggest thing you can do to improve your software quality is to HAVE A PROCESS. *Not* an ad hoc approach that varies with each project and from week to week. Rather, have *some* process that you stick to. (presumably, this process is RATIONAL!) This affords you a framework in which to operate. It lets you keep track of *where* you are in your process. Did you skip a step? Did you *forget* a step? etc.

My process is quite simple. I have coding guidelines for each language that I use. These are GUIDELINES, not *rules*. They remind me of the types of things that "people" (i.e. the folks who are going to look at this stuff AFTER it has been written) are likely to stumble over (precedence rules, punctuation, etc.) when they are preoccupied trying to understand the code (i.e. if you are busy trying to figure out what the code is doing, you probably won't be observant enough to catch some subtlety -- like a /* FALL THROUGH */ in a switch statement or a /* DO NOTHING */ in a for loop). These guidelines don't waste time with things that a pretty-printer can do for me. I let mechanised tools fix the *appearance* of my code.
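
A contrived fragment, just to show the sort of annotations those guidelines call for (the names here are invented; it's the comments that matter):

    #include <stdio.h>

    enum state { STARTING, RUNNING, STOPPED };

    static void initialize(void) { puts("init"); }
    static void step(void)       { puts("step"); }

    int main(void)
    {
        enum state s = STARTING;
        char buf[] = "hello";
        char *p;

        switch (s) {
        case STARTING:
            initialize();
            /* FALL THROUGH -- STARTING always proceeds straight to RUNNING */
        case RUNNING:
            step();
            break;
        case STOPPED:
            break;
        }

        for (p = buf; *p != '\0'; p++)
            ;           /* DO NOTHING -- just walking p to the NUL */

        printf("length = %d\n", (int)(p - buf));
        return 0;
    }

The next guy reading that switch doesn't have to wonder whether the missing "break" is a bug -- you've *told* him it isn't.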

I write test cases *while* I am writing the code. I.e. when I am writing a memory allocator I am more likely to be thinking of the "boundary conditions" that the code has to tolerate than I would be some weeks AFTER the code has been written! Every time I think of something that the code has to handle, I make sure I have a test case that will test for that. Of course, I also add other test cases in the hopes of catching something that I may have overlooked while coding the algorithm. "Hmmm... what happens if I try to strcat(foo, foo)?"
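
For instance, the test file for a (hypothetical) bounded-append routine might start out like this -- every case below occurred to me *while* writing the routine itself:

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Function under test (hypothetical): append src to dst, never
     * writing more than cap bytes total (including the NUL).
     * Returns 0 on success, -1 if the result would not fit. */
    static int buf_append(char *dst, size_t cap, const char *src)
    {
        size_t used = strlen(dst);
        size_t need = strlen(src);

        if (used + need + 1 > cap)
            return -1;          /* would overflow -- refuse, dst untouched */
        memcpy(dst + used, src, need + 1);
        return 0;
    }

    int main(void)
    {
        char buf[8] = "abc";

        /* boundary cases thought of *while* writing buf_append() */
        assert(buf_append(buf, sizeof buf, "") == 0);      /* empty src */
        assert(strcmp(buf, "abc") == 0);
        assert(buf_append(buf, sizeof buf, "defg") == 0);  /* exactly fills */
        assert(strcmp(buf, "abcdefg") == 0);
        assert(buf_append(buf, sizeof buf, "x") == -1);    /* no room left */
        assert(strcmp(buf, "abcdefg") == 0);               /* dst untouched */

        puts("all tests passed");
        return 0;
    }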

I use version control systems to track *everything*. I "commit" almost every hour in some cases. Having a fine grained log of all of the incremental changes to a code sample makes it easier for me to track *where* something went wrong -- rather than having to "back out" a week's worth of changes to get back to a stable release. Note that the test cases have to be tracked in the same way! And, used for *regression* testing (if a test case doesn't apply, it means you have changed the *interface* or the *specification* -- these are BIG DEALS and not to be treated casually!)
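
The regression driver can be dead simple. Something like this (the two tests here are trivial stand-ins, just to show the shape -- the point is the exit status, which a Makefile or pre-commit check can gate on):

    #include <stdio.h>
    #include <string.h>

    /* each test returns 0 on pass, nonzero on fail */
    static int test_strlen_empty(void) { return strlen("") != 0; }
    static int test_strcmp_equal(void) { return strcmp("abc", "abc") != 0; }

    static const struct {
        const char *name;
        int (*fn)(void);
    } tests[] = {
        { "strlen of empty string", test_strlen_empty },
        { "strcmp of equal strings", test_strcmp_equal },
    };

    int main(void)
    {
        size_t i;
        int failed = 0;

        for (i = 0; i < sizeof tests / sizeof tests[0]; i++) {
            if (tests[i].fn() != 0) {
                fprintf(stderr, "FAIL: %s\n", tests[i].name);
                failed++;
            }
        }
        return failed ? 1 : 0;   /* nonzero exit means: don't commit! */
    }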

That, of course, brings me to the most important aspect of writing *good* code -- SPECIFICATION. *Design* the code up front. Resist the temptation to start writing code. Write specs, instead. Write a user manual. Make sure you have thought of EVERYTHING that the code has to deal with -- discovering something while you are in the middle of coding it is one way to guarantee you will have bugs! (because you haven't accommodated this discovery in all the other code you have already written, nor do you know if your *approach*/design will accurately accommodate it!). The actual "code writing", realistically, is a *tiny* part of the job. And, there is very little skill involved in that activity (*think* about that... someone fresh out of school can write code if you tell him/her what the code has to do!). All of the "value" is in the design/specification of the algorithms -- and the design of the test cases to *verify* the code's compliance to those goals.
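
At the code level, a spec can be as mundane as a header that pins down the contract BEFORE a single line of implementation exists. (The pool_*() names below are invented for illustration.)

    /* pool.h -- fixed-size block allocator.  SPECIFICATION ONLY.
     *
     * pool_init(mem, size, blksz)
     *     Carves the caller-supplied region mem[0..size-1] into blocks
     *     of blksz bytes.  Returns the number of blocks available, or
     *     0 if the region is too small for even one block.
     *
     * pool_alloc()
     *     Returns a pointer to a free block, or NULL if none remain.
     *     Never blocks.  Safety from an ISR is NOT promised.
     *
     * pool_free(p)
     *     Returns p to the pool.  p MUST have come from pool_alloc()
     *     and MUST NOT be freed twice -- both are checked in the DEBUG
     *     build and trip an assertion.
     */
    #ifndef POOL_H
    #define POOL_H

    #include <stddef.h>

    size_t pool_init(void *mem, size_t size, size_t blksz);
    void  *pool_alloc(void);
    void   pool_free(void *p);

    #endif /* POOL_H */

Notice that every promise in that header is also a test case waiting to be written -- the spec and the regression suite are two views of the same contract.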

"Bug fixes are free" is part of my business model. This gives clients a sense of security. And, gives *me* a strong incentive to make sure there are NO bugs in released code! Folks who glibly remark, "Well, you EXPECT it to have SOME bugs..." are just lazy. Do they EXPECT their automobile to catch fire when they are driving down the highway? Do they EXPECT the bridge they are driving on to suddenly crumble and drop them into the river below? Do they expect their CD player to only play every other song on the disc?

*Think* about what you are setting out to design. Then, *do* it.

cat flames > /dev/null

HTH,

--don

Reply to
Don

AntiPatterns. Excellent book.


S.

Reply to
Stefan Arentz

Hmm, my experience is that

1) People follow commandment 3 until the last day of the project, so the standards and guidelines just become a documentation of how things were done.

2) Commandment 4 never happens, because tools change and customer demands change (i.e., the customer got burned on the last project with a certain standard, so they make sure you change it), so you're back to commandment 3.

3) Commandment 8 is usually implemented, but the reviewers tend to be people who know very little about the project, so the review is more of a superficial one. (The knowledgeable reviewers are too busy to review, or have been promoted to such a high level that they can refuse to do the review, or the schedule is so tight they simply don't have enough hours in the day to perform it.)

Commandments 7 and 10 are key, I believe: get smart people who enjoy their work, and document it, and the quality will naturally follow. By all means develop a set of standards and guidelines, but don't assume they will be static throughout the project -- unless, perhaps, the project is an upgrade or change to an existing one.

Reply to
steve

Elder,

I'd advise Jack Ganssle's "The Art of Designing Embedded Systems" for starters. Ganssle is an extremely readable author and the book has lots of good advice.

Dave Bardon, Avocet

Reply to
Avocet Systems, Inc

Elder,

I also can vouch for "Code Complete". Years ago when I first read it, it showed me how bad a programmer I was. This was a useful lesson to learn.

Dave Bardon, Avocet

Reply to
Avocet Systems, Inc
