Modern debuggers cause bad code quality

Sometimes the overhead of TTY output ... Heisenbugs the system, but these days just hitting a breakpoint probably means a full system restart.

I'm biased because most of the bugs I chase these days are in released product and all the easy debug strategies have been used and failed. I spend most of my time developing resources to *reproduce* bugs; some are pretty obscure.

Debuggers tend to encourage people to run through a function once, then mentally cross it off as done. This, of course, varies. It's nice to have options.

--
Les Cargill
Reply to
Les Cargill

And you magically hope something has changed so the condition doesn't perpetuate? That's called a bug.

Different type of assertion.

static void foo(int x) { ... }

void bar() { ...

while (x > 0) { foo(x); x--; }

...

ASSERT(x == 0); }

Would you want a machine to generate that before having confidence in it?

Or, in the above:

void foo(int y) { ASSERT(y != 0); ... }

Or:

uint a, b, c; ... ASSERT(b > 0 && c > 0); ... a = b / c; ASSERT(a
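A minimal compilable sketch of the divide-guard idiom above, using the standard C assert() macro in place of ASSERT; the function name and the postcondition are illustrative additions, not from the post:

```c
#include <assert.h>

/* Illustrative sketch: precondition checks before a division,
 * and a postcondition check after it. */
static unsigned int checked_divide(unsigned int b, unsigned int c)
{
    assert(b > 0);      /* numerator must be positive */
    assert(c > 0);      /* divisor must be nonzero    */
    unsigned int a = b / c;
    assert(a <= b);     /* unsigned quotient can never exceed the numerator */
    return a;
}
```

Compiling with -DNDEBUG strips the checks from a release build, which is exactly the trade-off debated elsewhere in this thread.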

Reply to
Don Y

That depends on what you are debugging and whether or not the balance of the system can tolerate pausing the component under test. In my systems, the debugger is integrated with the "OS" so that it effectively suspends the process/task/thread that is being tested. As such, the rest of the system can be allowed to run -- albeit "waiting" on the suspended element(s).

Black boxes are cheap (unless you are in a severely resource constrained environment -- even there, you can glob on extra resources for the BB that need not be present in production). I find them invaluable to troubleshoot real-time activities (where the cost of gathering data can disrupt the activity that is being profiled). They are lightweight and can usually be independently sized/resized as you deem fit.

Reply to
Don Y

TDD is OK if you start the testing early enough. By early enough, I mean beginning your initial testing with the resilience and robustness of the Requirements Specification. Getting that bit right deals with where 44% of the errors are introduced.

--
******************************************************************** 
Paul E. Bennett IEng MIET..... 
Reply to
Paul E Bennett

I think I know some of the places where Jack might have obtained his numbers (and I do not necessarily disagree with them). The languages are not used in isolation from a development process so you have to look at the overall package for a proper comparison.

If you look at the development environments where each of those languages are used you will find that the reason for the differences are more to do with the development process users of those languages go through. The SPARK guys usually mathematically prove a lot about the Requirements Specification before they get to coding (usually auto-coding from within their modelling and proof tools).

If those who used the other languages applied a stronger development process the numbers across the selection of languages would not be that different. I would quote the Les Hatton paper on that if I could remember which one it was.

I have already outlined my own development process and considerations in a previous post and this stands me in quite good stead. I programme in Assembler, Forth and the IEC-61131 set for the most part (but have also coded some systems in D3 and S80 as well. Don't worry if you haven't heard of the last two; they were rather specialised).

The biggest benefit you will get is by making your process deal with Requirements Specification Improvements before you get to design and coding. That is tackling where the 44% of problems originate.

--
******************************************************************** 
Paul E. Bennett IEng MIET..... 
Reply to
Paul E Bennett

Yes. Letting a component continue to run in a known wrong state is not acceptable, as that may lead to spreading of the fault.

(And yes, there are of course a lot of "buts" to this.)

Yes - as I haven't seen what "..." does and how "x" is declared. If "x" is an integer type, and "..." doesn't touch "x", then a static analyzer should be able to prove the assertion for you.

Here I would have to know that "foo" is only called from the shown loop. As I typically program in Ada or SPARK, I would have written the specification of "foo" as

subtype Non_Zero_Integer is Integer with Static_Predicate => Non_Zero_Integer /= 0;

procedure Foo (Y : in Non_Zero_Integer);

and have the condition made explicit. An Ada compiler may or may not insert a check that "X" is different from 0 before the call to "Foo", but the SPARK tools would notice that "X" can't be 0 inside the loop and allow the (unchecked) call to "Foo".

A nice ideal, but I prefer code that shouldn't execute to systems that fail because "shouldn't" isn't the same as "can't ever".

I know perfectly well that for DO-178B, you can't have dead code in a system, but I certainly hope that you have valid confirmation that every single disabled assertion can't ever fail - for EVERY release of the system.

Using machine generated/checked proofs makes validating the checking much simpler. That is why I prefer machine generated/checked proofs to the manual kind.

Greetings,

Jacob

--
WTFM: Write the Fine Manual
Reply to
Jacob Sparre Andersen

Sometime in the future, medicine will discover that the proper diet for a long healthy life is sugared bacon with coffee.

I have restraining orders from both destinations preventing me from approaching the gates. Since I have no place to go, I figure that means I'll have to live forever 8-)

George

Reply to
George Neuner

Alas, not really -- just personal observation. The people that screw up are the people who are willing to settle back and declare success at the first splashy "it works" run of the device, and then ignore all the various details that might trip things up.

(And note -- I'm not just talking software here. Lazy engineering is lazy engineering, although the symptoms are different among the disciplines. We all know what buggy code does. Bad analog circuit design leads to devices that don't work when it's cold or hot or wet or really dry, or darker than a lab, or brighter than a lab, etc. Bad mechanical design leads to parts that wear out way too fast, or that only sometimes work, or that need excessive work on the manufacturing floor to assemble, etc.)

I look at TDD as a means to stay on the right track -- but just going through the motions of TDD without any attention to the underlying goal means that you're making bugs with more letters attached.

So it's not TDD by itself that helps me -- what mostly helps is the process of breaking my development down into tiny increments, and verifying each incremental change. TDD makes me pay more attention to keeping the increments tiny, and helps me to resist the temptation to write slews of code for expediency's sake. Having the built-in regression test, however, has saved me large amounts of effort from time to time, by finding bugs that would have been monstrously hard to dig out with a debugger, but that immediately pop up the next build cycle when using TDD.

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

This is like the developer who sees his code "misbehave" and, because he can't EASILY reproduce it nor identify an obvious cause, decides that it's "just a fluke" (implying that it will never happen again... "alpha particle" mentality) and dismisses the observation.

The point was to suggest that it is NOT (directly) referenced in either of those elided regions. But, there may be lots of code there that makes it tedious for a developer to determine those things -- WITHOUT SOMEONE HAVING PREVIOUSLY MADE THAT COMMITMENT/DECLARATION for them.

Great! When we all have those tools running on EVERY MCU that we'll ever encounter, life will be grand! And David's microwave oven will come with a 12" touchscreen (with rear facing camera to peek into the cooking chamber so the space occupied by the display can also mimic a traditional "window").

But, we don't have those tools. We often can't even do floating point math, work in protected address spaces, etc.

And, I wouldn't be willing to spend real consumer dollars to afford all of that hardware in every product that has benefited from MCUs.

If you can't *handle* the failed assertion in a manner appropriate for *its* (particular) instance, then you've now added another layer of complexity to your system: what are the effects of ANY of these assertions failing and causing the thread/process/application to be restarted (and "only" making a note of that fact)?

main() {
  ...
  ASSERT(foo);
  ...
  dispense_medication();
  ...
  ASSERT(something);
  ...
  monitor_vital_signs();
  ...
  ASSERT(other);
  ...
}

Clearly, the same recovery method for each of the three assertions will result in three different sets of side effects. The implication is that executing "dispense_medication()" multiple times is Not A Good Thing.

So, you're better off either crashing in the assertion *or* plowing straight through AS IF it had not failed (because you made sure of this AT TEST).

If you're going to *handle* a condition that you expected to be invariant, then write the LIVE code to handle it. And, be willing to explain why that code *is* live (i.e., why have you allowed this condition to go unchecked to this point?)
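To make the contrast concrete, here is a hedged C sketch (the names and dose limits are hypothetical, not from the post) of the two deliberate options being argued for: a test-time ASSERT that crashes loudly and is compiled out of release builds, versus a LIVE check that is never compiled out and has an explicit recovery path:

```c
#include <stdio.h>
#include <stdlib.h>

/* Option 1: test-time assertion -- crash loudly during test,
 * compiled out of the release build (when NDEBUG is defined). */
#ifdef NDEBUG
#define ASSERT(cond) ((void)0)
#else
#define ASSERT(cond) \
    do { if (!(cond)) { \
        fprintf(stderr, "ASSERT failed: %s (%s:%d)\n", \
                #cond, __FILE__, __LINE__); \
        abort(); } } while (0)
#endif

/* Option 2: a LIVE check -- if the condition can really occur in
 * the field, handle it with code that is never compiled out. */
static int dispense(unsigned int dose_mg)
{
    if (dose_mg == 0 || dose_mg > 500)  /* live check: refuse and let    */
        return -1;                      /* the caller recover explicitly */
    ASSERT(dose_mg % 5 == 0);           /* test-time invariant only      */
    return 0;                           /* dose accepted                 */
}
```

The point is that each check is a deliberate choice with a known recovery story, rather than a generic "log and restart" applied uniformly to every assertion.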

That's part of the job. That's why you invest in specifications. And, good test suites. It's why you don't shrug off an OBSERVED malfunction as a "fluke" and assume it won't happen again, etc.

I will be interested to see how you fare in an environment where that isn't available to you. Perhaps a new thread, "Machine generated proofs cause bad code quality" -- for the same reasons "modern debuggers" are alleged to! :>

Reply to
Don Y

That's good if it works out. But an FPGA waits for no man :)

Absolutely.

--
Les Cargill
Reply to
Les Cargill

That must have been quite a slow programmer, since usually those small EPROMs could be programmed in 5 minutes.

I used jump tables in the beginning of each EPROM, so a modification of the code within that EPROM only required burning that EPROM, not the whole EPROM set.

By allocating some unprogrammed areas at the end of each EPROM, you could insert a jump instruction in front of the code to be replaced and put the modified code at the end of the EPROM. Of course, you must find a few bytes with a suitable bit pattern to reburn a jump instruction. Reburning a few bytes from 1 (usually also the unprogrammed state) to 0 took only a few seconds instead of several minutes.
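The trick works because EPROM programming can only clear bits (1 to 0); setting them back to 1 means erasing the whole chip under UV. A small C sketch of the feasibility check a developer would otherwise do by eye (the helper names are mine, for illustration):

```c
#include <stdbool.h>
#include <stddef.h>

/* A byte can be reburned from 'old_byte' to 'new_byte' only if no
 * bit has to change from 0 back to 1: programming can only clear. */
static bool patchable(unsigned char old_byte, unsigned char new_byte)
{
    return (old_byte & new_byte) == new_byte;
}

/* A patch region fits only if every byte in it is patchable. */
static bool patch_fits(const unsigned char *old_bytes,
                       const unsigned char *new_bytes, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (!patchable(old_bytes[i], new_bytes[i]))
            return false;
    return true;
}
```

An unprogrammed 0xFF byte can be reburned to any value at all, which is why those unused areas at the end of the EPROM were so handy.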

Reply to
upsidedown
[...]

I agree. The comparison is misleading, especially if used in isolation, and Ganssle's talk focused on processes. Nevertheless, the slides showed the numbers above without a reference to "process".

Of course. BTW, the comparison was against "C without static analysis", because he also stated that

Reply to
Oliver Betz
[...]

SWD / JTAG / BDM / whatever debugging is usually "for free" if you use this interface also for production programming. Otherwise it has the same hardware cost as TTY. And it gives you extensive access to your system without the need of instrumenting your code.

Consider also automated testing with original binaries, no instrumentation.

Oliver

--
Oliver Betz, Munich http://oliverbetz.de/
Reply to
Oliver Betz

I can't recall if we were using an Intellec8 or had already "upgraded" to the MDS800. The programmer ("UPP") was a dog. I think there was a small squirrel cage inside that (big!) box with a hamster running inside it!

(remember, the "development system" is running off 8" floppies so *nothing* is fast -- not even "file not found"! I think they were like 800KB? six letter identifiers? everything in uppercase?? )

OTOH, it was light-years better than "hand assembling" i4004 code! :-/

You loaded individual "object" files (prepared elsewhere) into memory and then used a monitor, of sorts, to command the programmer to move those bytes into the device being programmed.

Then, you separately entered a corresponding "compare" command. If you were unlucky enough to have a device that hadn't completely erased -- or had developed a stuck bit (that wasn't stuck the way you needed it to be) -- you started over again.

[Eventually, we started discarding the EPROMs that caused us repeated problems; we'd put tick marks on a device each time it screwed one of us so we could keep track of how flakey it was. Get screwed and see lots of tick marks? Toss it out. Boss wasn't keen on $50 devices (at one point) going into the trash!]

Prior to that, the 1702's were even more of a nuisance to program!

That only works if you have room to spare. We actually wrote a small utility to grep(1) our sources and tabulate the number of each specific "CALL" instruction. The 7 most frequent ones were then assigned to the 7 restart vectors to allow us to save 2 bytes of EPROM for each of those invocations (i.e., if you "CALL foo" in 100 different places, you can save 200 bytes in the image -- handy when you have things like FADD, FSUB, FMUL, etc. littering your codebase!)
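The arithmetic behind the savings: on the 8080/8085, CALL addr is 3 bytes while RST n is 1 byte, so each converted call site saves 2 bytes. A hedged C sketch of the tabulation (the table layout and names are hypothetical; the original utility grepped the sources):

```c
#include <stddef.h>

/* Hypothetical call-frequency entry, as the grep-based utility
 * described above might tabulate it. */
struct call_count {
    const char *name;   /* CALL target          */
    unsigned    count;  /* number of call sites */
};

/* Bytes saved by assigning the most frequent targets to the
 * available RST vectors (7 were free on that system), assuming the
 * table is sorted by descending count: 2 bytes per converted site. */
static unsigned rst_savings(const struct call_count *t, size_t n,
                            size_t vectors)
{
    unsigned saved = 0;
    for (size_t i = 0; i < n && i < vectors; i++)
        saved += 2 * t[i].count;
    return saved;
}
```

So a routine called 100 times saves 200 bytes, matching the figure in the post.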

We each maintained our own set of sources so that we had some control over what was present in our links. E.g., when working on one subsystem, I might choose to elide huge portions of the user interface and just stub all of its actions. That gives me a bit more elbow room, frees me from having to deal with any of the bugs in that code (someone else's responsibility) AND lets me access a few extra bytes of RAM that I would otherwise have to *share* with that other subsystem.

With *planning*, you could insert things like:

TRY THIS
JP IT_WORKED

TRY THAT ; dead code
JP IT_WORKED

TRY AGAIN ; dead code

IT_WORKED:

Then, when the code was running, if "THIS" didn't work as expected, you could overwrite the bytes that it occupied PLUS the "JP IT_WORKED" that immediately followed with 0x00 (NOP) and get another shot at some other aspect of the problem without having to build a new image AND burn a new set of EPROMS.

I.e., you kept a "listing" of your code handy with absolute addresses penciled in for those key locations (obtained from the linkage editor's map)

Things were different, then. Software was seen as a special kind of hardware. We described our algorithms AS IF they were implemented with actual dedicated hardware (actually, this was the only practical way of dealing with the patentability of software issue that was just being addressed at the time).

So, you looked at debugging as you would debugging a piece of hardwired hardware: I'll try this on this portion of the design; and something else on some other portion; etc. -- before reworking the prototype (hardware) to accommodate the successful changes and elide the unsuccessful ones.

Reply to
Don Y

The old saying was that cc is half a compiler; the other half is lint.

Reply to
Tom Gardner

I am not sure we know all that well where north is w.r.t. development process. There is much to be humble about.

--
Les Cargill
Reply to
Les Cargill

It is the language, not the rest of the toolchain. "C" is the major contributor to the decline in software quality (where there was some quality to decline, of course). Nowadays people have no clue where the machine stack is, write IRQ handlers in C, etc. -- in a way not dissimilar to writing novels in a language for which they need a phrasebook. The thing is, their novels get sold simply because the general public can't even use a phrasebook. And this happened mainly because the x86 entered the scene widely and made assembly programming impractical with its messy programming model.

Dimiter

------------------------------------------------------ Dimiter Popoff, TGI


------------------------------------------------------


Reply to
Dimiter_Popoff

That's odd, since 'C' has been there since... well, the start. How can a thing-that-has-not-changed be the cause of decline? Some massive lag? Changes in the populations of practitioners?

I have always been a fan of C.A.R. "Tony" Hoare, but his online video of a talk about "The Billion Dollar Mistake" is perfect because someone stands up and notes that Haskell is perfectly safe until you invoke side effects -- like the I/O monad.

We all need phrasebooks.

I wrote more assembly language in x86 than in any other architecture. You want something to wreck things? Try assembly.

--
Les Cargill
Reply to
Les Cargill

It is the popularity growth, not the birth date. Then again, C does not prevent one from writing decent software; it only makes it more difficult -- and makes it much easier to write messy software. People who knew what their compiler does -- i.e., those who wrote the compiler -- must have been able to write some good code using it.

Not all of us. I don't, for example.

This explains why you see assembly as something impractical. There is no such thing as "assembly" language, really; there are worlds of difference between this or that assembly. And then there is my VPA (virtual processor assembly), which makes me more efficient by at least an order of magnitude than anyone who uses C when it comes to projects which take more than a month to program (before you ask: my code base is in the millions of lines, >50M of sources, over the past 20 years).

Dimiter

------------------------------------------------------ Dimiter Popoff, TGI


------------------------------------------------------


Reply to
Dimiter_Popoff

I don't think that's ... demonstrable in any reasonable fashion.

Sure. So don't do that :)

I think I probably spent a total of six months - call it 1000 hours - learning how to write good 'C' code.

I know people who use "more modern" toolchains who have ten times that invested in them and still have problems. By the time you learn all of C++, it will have morphed into something else.

Then I am not sure what to tell you - the idioms of 'C' are a pretty lengthy thing. I have committed many of the patterns to memory over 25 years but not all of them.

I don't.

They're all essentially the same. There is a narcissism of small differences.

Those projects are arguably too large. An old saying is "by the time you get a million lines of FORTRAN to compile, you no longer care what it was supposed to do."

There is an N (doubtless larger) for 'C'...

--
Les Cargill
Reply to
Les Cargill

ElectronDepot website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.