Re: Towards better embedded software (long)(was: Re: Where does C++ fit in?)

But beware of the person at the extreme margins of "careful and thorough". I once had a co-worker who studied problems very completely and never produced a single line of code.

Reply to
Everett M. Greene

You are both correct, as it depends on how you, or the end user, define 'bug'. One can, and should, ship software that is free of defects, but to claim that it will never need to be updated or fixed is impossible.

Case study I heard of some years ago: a designer of process-control equipment was getting reports of strange system behaviour from the field. LOTS of bench testing was unable to reproduce it, and even swapping units and checking other installs was not revealing much. Finally a trip to the site was needed, and a session with their best operator. Apparently, he was able to get it to mess up almost at will; the field techie tried: nope, fine for him. Err?

More careful system cycles, HW swaps, and suddenly the light bulb goes on.

This plant operator was so good, and so experienced, that he KNEW when the machine's momentum and settings were enough to complete, so he was readying the next command right in that narrow time window. The problem was, the designers had never anticipated this, and they could prove the system worked 100% to the spec.

Q: Is this a bug? It depends on where you are standing - the plant managers would have called it that, and it was also 'fixed' with a software change.....

-jg

Reply to
Jim Granville

Unless the code is very small and has no conditional branches, I don't believe one could claim that their software is bug-free. The fact that the software may meet requirements & satisfy the end user only indicates that the software has conformed to known use cases.

Software is developed under certain constraints. These constraints may be architectural (due to software and hardware limitations), performance-related, or driven by the development schedule. For complex software, one would adopt a quality plan, where an appropriate "quality" of the software is obtained for the given schedule -- it's a trade-off. In many cases, the software developer is trading test time against delivery date.

One could get a "wet finger in the air" feeling for the state of the software by noting the:

  1. Completeness of the input requirements. Holes in the requirements lead to uncertain or unexpected behaviour of the software.
  2. Robustness of the software architectural design. A good (or appropriate) architecture leads to fewer bugs. Superfluous or complex interconnections between components and modules increase the probability of bugs being introduced.
  3. Test plan or strategy. Producing bug-free (or low-bug) software relies on uncovering the bugs during development. If one is producing software by iterative development, then it's important to have solid regression testing during all development phases. How often has one added a new feature, only to find that it has broken something else?
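Point 3 can be made concrete with a minimal sketch. The `clamp()` helper and its assertions are invented purely for illustration; the idea is simply that the same checks are re-run after every change, so a new feature that silently breaks old behaviour fails immediately.

```c
#include <assert.h>

/* Hypothetical unit under test: clamp a value into [lo, hi]. */
int clamp(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Regression suite: re-run after every change. If a new feature
 * alters any of this established behaviour, the build fails here
 * rather than in the field. */
void regression_suite(void)
{
    assert(clamp(5, 0, 10) == 5);    /* in range: unchanged */
    assert(clamp(-3, 0, 10) == 0);   /* below range: clamped to lo */
    assert(clamp(99, 0, 10) == 10);  /* above range: clamped to hi */
}
```

The suite only grows: every fixed bug earns a new assertion, so it cannot silently reappear.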

Ken.

--
I hate junk email. Please direct any genuine email to: kenlee at hotpop.com
Reply to
Ken Lee

No code, no bugs.

Reply to
Will

I like it. I'll put that in my status reports: "no bugs this week."

--
Darin Johnson
    I'm not a well adjusted person, but I play one on the net.
Reply to
Darin Johnson

I claim that my software is bug-free, regardless of complexity. Always. I guarantee it.

Paradigm shift: I was a hardware logic designer, with lots of async logic. Is it bug-free? Who knows? Convert to sync logic, and "is it bug-free?" becomes "how long is the longest propagation delay?".

Back at software: given a suitable design methodology, and (of course) an adequate spec, bug-free code is perfectly attainable. Every time.

It's the defeatist acceptance of failure that I'm on a mission to... err... rant about ;).

Steve


Reply to
Steve at fivetrees

So what is it about software that produces these extremes? Is it the lack of peer feedback & ridicule again?

Steve


Reply to
Steve at fivetrees

We're about a rat's whisker away from a sticky semantic argument about the definition of "bug-free". I can pretty much guarantee that some combination of circumstances will put your bug-free code into an undesirable state. But if you define the specification to exclude all such circumstances by inclusively listing all allowable ranges of inputs and combinations thereof, your code can be said to be bug-free up to the limits of the spec. The trick is therefore to know whether the spec includes all real-world possibilities, or if it is possible for the real world to violate the spec.

For instance (yes this is a gross and silly example) if the specification says "Button A will always be pressed before button B", your program is off the hook and free to misbehave if button B is pressed first.
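As a sketch of this example (the event names and handler are invented for illustration, not anyone's actual code): a spec-only implementation is free to assume A always comes first, while a defensive one explicitly refuses the out-of-spec order instead of misbehaving.

```c
/* Tiny event handler for the "A before B" example. */
enum event { BUTTON_A, BUTTON_B };

static int a_seen = 0;

/* Returns 0 on success, -1 if the event arrived out of spec.
 * The key point: pressing B first is *handled*, not undefined. */
int handle_event(enum event e)
{
    switch (e) {
    case BUTTON_A:
        a_seen = 1;
        return 0;
    case BUTTON_B:
        if (!a_seen)
            return -1;  /* out-of-spec order: refuse, don't misbehave */
        return 0;
    }
    return -1;          /* unknown event: also refused */
}
```

The spec-only version would omit the `a_seen` check entirely, and be "correct" right up until a user presses B first.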

Reply to
Lewin A.R.W. Edwards

Yep, we did that one a while back ;).

I take your point in general. However, possibly due to my hardware background, I'm hot on excluded states and I guard against 'em. I ensure (as part of my general methodology and emphasis on defensive design) that unforeseen circumstances won't result in undesirable states. Regardless of the spec, I would consider such a failure to be a bug.
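A minimal sketch of what guarding excluded states can look like in code (the state names are invented): every transition handles all states, and anything unexpected, such as a corrupted state variable, falls into a defined recovery path rather than undefined behaviour.

```c
/* Hardware-style excluded-state guarding in a state machine. */
enum state { IDLE = 0, RUNNING, STOPPING };

enum state next_state(enum state s, int start, int stop)
{
    switch (s) {
    case IDLE:     return start ? RUNNING : IDLE;
    case RUNNING:  return stop  ? STOPPING : RUNNING;
    case STOPPING: return IDLE;
    default:       return IDLE;  /* excluded state: recover to a known-safe
                                    state instead of wandering off */
    }
}
```

This mirrors the sync-logic discipline: an illegal state is reachable in theory (bit flip, wild pointer), so the design defines where it leads.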

This might be a pretty good example of what I was talking about earlier - that good engineering is a skill that should be learned independently of software coding/design issues.

Steve


Reply to
Steve at fivetrees

Lewin A.R.W. Edwards wrote: ... if you define the specification to exclude all

That's fine for a programmed logic device, but it doesn't come close for software of, say, 2000 lines of code or more. When the spec is not met, that's a 'priority 1'/defect/fire-somebody time.

I'd offer the reason that most "bugs" are genuinely unexpected, unspec'd interactions with other parts of the code and with the environment (including new users poking buttons when the programmer didn't expect it).

- RM

Reply to
Rick Merrill

Rick Merrill wrote in news:oj6mc.37451$I%1.2347225@attbi_s51:

One of the most valuable members of one software team I worked with wasn't even a software engineer - he was a student working in the lab who had a talent for "pressing the wrong buttons" and finding errors.

Someone with an unbounded curiosity who refuses to follow instructions is good to have around "just in case" you haven't foreseen every eventuality - it may be traumatising putting your baby in his hands, but better he finds a mistake than your customers :)

*Waiting for someone to tell me we shouldn't have needed this sort of testing ;)

Peter.

Reply to
CodeSprite

Heh ;). Nah, testing (especially this kind of insightless testing) is a Good Thing - if only to reassure you that your defensive excluded-state protection logic is working ;).

Someone (Pete, I think) mentioned "The Practice of Programming" by Kernighan and Pike (which I also wholeheartedly recommend). One of the stress tests mentioned therein, for stream input, is to fire garbage (and/or carefully contrived illegal data) at it and see what happens. I routinely do this - again I'm not expecting my code to fail, but I do want to test and exercise my defenses. Surely we all do?
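A sketch of that kind of garbage-firing harness (`parse_line()` here is a hypothetical stand-in for whatever stream handler is under test): the only pass criterion is that every input is either accepted or cleanly rejected, never anything else.

```c
#include <stdlib.h>
#include <string.h>

/* Dummy parser: accepts "CMD <digit>", rejects everything else.
 * The point of the exercise is the harness, not this parser. */
int parse_line(const char *buf, size_t len)
{
    if (len >= 5 && memcmp(buf, "CMD ", 4) == 0 &&
        buf[4] >= '0' && buf[4] <= '9')
        return 0;
    return -1;   /* clean reject: no crash, no lockup */
}

/* Fire random garbage at the parser. Returns the number of inputs
 * that produced something other than accept (0) or clean reject
 * (-1); the defensive claim is that this is always zero. */
int fuzz(unsigned iterations)
{
    char buf[64];
    int bad = 0;
    srand(12345);                        /* fixed seed: reproducible */
    for (unsigned i = 0; i < iterations; i++) {
        size_t len = (size_t)(rand() % (int)sizeof buf);
        for (size_t j = 0; j < len; j++)
            buf[j] = (char)(rand() & 0xff);
        int r = parse_line(buf, len);
        if (r != 0 && r != -1)
            bad++;
    }
    return bad;
}
```

In a real harness the checks would be stronger (no asserts tripped, no watchdog resets, state machine still sane afterwards), but the shape is the same: the test expects the defenses to hold, and exercises them.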

Back at button pressing: one system I worked on had up to 32 buttons on each of up to 64 attached outstations, and another set of buttons on the master panel. There was no realistic way of testing every combination ;). The code simply had to deal with whatever it was given, in any order, logically. (I always do this. I'm a little surprised at the implication that anyone might do otherwise.) It was a legacy system, and the original code did indeed fail when buttons were pressed in unexpected ways. I watched in calm amusement when the customer tried to make my new code fail.

I consider these kind of defenses an essential part of any embedded code. Sadly far too many embedded products from otherwise reputable companies lack them (example: the Nokia set-top box which my teenage daughter, whose button-pressing abilities are fearsome [presumably from texting practice], manages to crash daily). Hence my rant.

Steve


Reply to
Steve at fivetrees

I have a "War Story" on my web site about a similar case, where I jiggled a connector on a comm link to ensure my code and the hardware could handle all the garbage that was generated. I eventually won the argument with the hardware manager who thought I was crazy.


...Tom

--
Tom Sheppard
Real-Time Fundamentals instructor-led and web-based training courses
Reply to
Tom Sheppard

Perhaps they mean "testing". Rigorous unit, integration, and system test is essential, and is often referred to as "debugging" by many.

--
Chris Hills, Staffs, England
snipped-for-privacy@phaedsys.org

Reply to
Chris Hills

I do agree it's an independent skill, but I think actually it's easier to learn it in the context of software design - because it's cheaper. As a student you can create the software equivalent of a bridge that oscillates to death without actually having to go through the hassle of acquiring the funds, dealing with the bereaved, etc.

Reply to
Lewin A.R.W. Edwards

I've read and enjoyed your war stories before, Tom, but I've just enjoyed reading them again.

attitude with which they approach a problem.

Reply to
Steve at fivetrees

earlier -

Of course. But that's not quite what I mean. I learned the benefits of finding a problem early, first as a production test equipment designer, then as a product designer feeding that same production line. I really think that many software people have yet to learn this lesson.

Steve


Reply to
Steve at fivetrees

Thank you. I'd love to add to them if others are willing to write something up, anonymously or not.

...Tom

--
Tom Sheppard
Real-Time Fundamentals instructor-led and web-based training courses
Reply to
Tom Sheppard

On a similar "real man's hardware" test vein: way back in the days when I did SMPS designs, I recall a visiting designer from Philips who said they used a metal-working file to test SMPSs. How, you wonder?

They added croc clips to a mains power lead, and the bare, furry cable end was stroked along the file's teeth. => Sparks by the truckload! - but it did separate the wheat from the chaff among SMPS designs; some coughed on that....

-jg

Reply to
Jim Granville

Yes, I'm sorry but I didn't make myself clear - I chose too silly an example. Let me amplify a bit: The "genuinely unexpected and unspec'd interaction" is precisely the thing I'm saying lies outside the spec, and therefore the place where the "bug-free" (meets spec) code can fall over.

A tighter definition of "bug-free" might be "by design and intention, never, ever misbehaves under any combination of conditions that can be thrown at the device, regardless of whether these conditions were mentioned in the original specification", which I think is basically what Steve was claiming (that's how I read it, anyway). This essentially states that the code is so robust that it can be guaranteed not to act up even when operated in an out-of-spec environment. But we have merely shifted the burden from defining "bug-free" to defining "misbehaves" and placing limits on "out of spec". :)

And I don't think anything can be made to meet this definition of bug-free, because the real world is so malleable. For instance, I load your code onto a nominally compatible, but subtly different, mask revision of the microcontroller. This is out of spec. Can I expect your code to run correctly? Can you blame me for not guarding against this condition? What if you run my hardware 30 degrees Celsius above its absolute maximum rating, thereby altering numerous critical oscillator frequencies in just the right proportions to cause serious communications errors and total lockup? What if you run it at Vcc=4.1V when I designed it for Vcc=5V +/-10%?

In summary: The further the environment deviates from the specification, the greater the chance of failure. It is not possible to avoid this. We might say that the goal of a good engineer is to design the system to behave absolutely 100% identically and predictably right up to the outer border of input conditions beyond which predictable operation cannot be sustained, and more importantly to be able to define accurately where that border lies.

We might further say that a bad engineer designs code that follows some other kind of curve than a square-wave - it gradually (or more likely exponentially!) becomes more likely to fail as environmental conditions deviate from the nominal environment for which the device was designed.

Anyway, I'm basically saying the same thing as you, but approaching it from the other end.

Reply to
Lewin A.R.W. Edwards
