Regression testing for /* CAN'T HAPPEN */

Hi,

I use a *lot* of invariants in my code. Some are there just during development; others "ride shotgun" over the code at run-time.

So, a lot of my formal testing is concerned with verifying these things *can't* happen *and* verifying the intended remedy when they *do*!

I'm looking for a reasonably portable (in the sense of "not toolchain dependent") way of presenting these regression tests that won't require the particular scaffolding that I happen to use.

For example, some structures (not "structs") that I enforce may not be possible to create *in* a source file. For these, I create "initializers" that actively build a nonconforming image prior to unleashing the code-under-test. If written correctly, the code being tested *should* detect the "CAN'T HAPPEN" conditions represented in that image and react accordingly (of course, this differs from the production run-time behavior as I *want* to see its results).
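
To make that concrete, here's a bare-bones sketch of the pattern -- the structure, field names, and invariant are all invented for illustration, not lifted from any real codebase. A test-only "initializer" forges an image that the public API could never produce, and the regression test passes only if the run-time check catches it:

#include <stdio.h>
#include <string.h>

#define RB_SIZE 8

typedef struct {
    unsigned head;               /* next slot to write                  */
    unsigned tail;               /* next slot to read                   */
    unsigned count;              /* invariant: count <= RB_SIZE         */
    int      slots[RB_SIZE];
} rb_t;

/* The check that "rides shotgun" over the structure at run-time. */
static int rb_validate(const rb_t *rb)
{
    return rb->head < RB_SIZE
        && rb->tail < RB_SIZE
        && rb->count <= RB_SIZE;
}

/* Test-only "initializer": forge an image no legal API call would create. */
static void forge_corrupt_image(rb_t *rb)
{
    memset(rb, 0, sizeof *rb);
    rb->count = RB_SIZE + 1;     /* the CAN'T HAPPEN condition */
}

int main(void)
{
    rb_t rb;

    forge_corrupt_image(&rb);

    /* The regression test passes only if the code under test *detects*
       the impossible state; failure to detect is the real defect here. */
    if (rb_validate(&rb)) {
        fprintf(stderr, "FAIL: corrupt image not detected\n");
        return 1;
    }
    puts("PASS: CAN'T HAPPEN condition detected");
    return 0;
}

The same skeleton works for any invariant you can express as a predicate over the stored image; only the forging step is structure-specific.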

I can't see any other flexible way of doing this that wouldn't rely on knowing particulars of the compiler and target a priori.

Or, do folks just not *test* these sorts of things (formally)?

Reply to
D Yuniskis

Yes, they do get tested when "formal" testing is done. However, I think that you can appreciate that a lot of software is being sent out without formal testing.

For example, "formal" testing requires that every case in a switch statement be entered; even the default.

Most companies task the junior engineers with testing - the inexperienced, who really don't know where to look for errors, or how to expose them. These companies contend that the people experienced enough to do the job are "too expensive" for "just testing". IMnsHO this is why most of the software out there is pure cr@p.

To answer your last question, most software is not formally tested. I asked a developer at a large software company how the product was tested, and the reply was "we outsource that". When I asked how they determined if it was tested _properly_, the reply was "we'll outsource that too".

RK

Reply to
d_s_klein

Seems they follow that universal regression test rule, the one that realizes that within the words "THE CUSTOMER" you can always find these words too : "CHUM TESTER"

Using the customer is the ultimate outsourcing coup!!

-jg

Reply to
-jg

So, folks just write code and *claim* it works? And their employers are OK with that?? (sorry, I'm not THAT cynical... :< )

Yes. Hence the advantages of regression testing -- so you don't have to *keep* repeating these tests each time you commit changes to a file/module.
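
Such a suite needn't be elaborate, by the way. A minimal sketch (the test names and table are invented for illustration): capture each informal check as a function, list them in a table, and re-run the whole table after every change:

#include <stdio.h>

typedef int (*test_fn)(void);            /* each test returns 0 on pass */

/* Placeholder cases -- in real use, each one forges a condition and
   checks the code-under-test's reaction. */
static int test_checksum(void)    { return 0; }
static int test_overflow(void)    { return 0; }
static int test_cant_happen(void) { return 0; }

static const struct { const char *name; test_fn fn; } tests[] = {
    { "checksum",    test_checksum    },
    { "overflow",    test_overflow    },
    { "cant_happen", test_cant_happen },
};

int main(void)
{
    unsigned failures = 0;

    for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++) {
        int rc = tests[i].fn();
        printf("%-12s %s\n", tests[i].name, rc ? "FAIL" : "pass");
        failures += (rc != 0);
    }
    return failures ? 1 : 0;
}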

I would contend that those folks are "too expensive" to be writing *code*! That can easily be outsourced and/or automated. OTOH, good testing (the flip side of "specification") is something that you can *only* do if you have lots of experience and insight into how things *can* and *do* go wrong.

I refuse (?) to believe that is the case across the board. You just can't stay in business if you have no idea as to the quality of your product *or* mechanisms to control same!

Reply to
D Yuniskis

Unfortunately, this attitude seems to cause many firms TO CURSE THEM instead of embracing them.

Reply to
D Yuniskis

D Yuniskis wibbled on Thursday 01 April 2010 01:35

It's worse than that. I've worked in places where a significant project was dumped on the desk 3-4 weeks before the deadline, with instructions to produce anything that mostly seemed to work. Upon protestation, they glibly claimed that "any glitches could be sorted out later". Yeah... Needless to say I left that place fairly quickly, as it was a pervasive attitude that was spoiling what could have been quite a nice product.

I think you'll find that crap is produced by two types of company:

Ones with crap programmers who BS the bosses.

Ones with potentially pretty good programmers (or who at least aspire to be good) who are persistently rushed to unrealistic deadlines, not allowed to take training courses, and not even allowed a couple of hours here and there to improve themselves for the cost of an O'Reilly Safari subscription (though on that last point, many buy it themselves and read in the evenings). Testing? You'll be lucky to find time in some places to do the *design* properly.

I've met a few of the former case but directly or indirectly seen a great many examples of the latter case.

For as long as (at least in the UK) company management fails to include a significant number of people with engineering backgrounds, this situation is likely to remain the norm.

--
Tim Watts

Managers, politicians and environmentalists: Nature's carbon buffer.
Reply to
Tim Watts

Unless of course they are CUTE MOTHERS who CHEER UTMOST or MUTTER ECHOS.

--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
(remove the obvious prefix to reply by mail)
Reply to
Boudewijn Dijkstra

One might cynically say, that is the MS way.

For anyone interested in what true testing is about then I would recommend a very nice book by Gerald M. Weinberg (he who gave us the bible on Technical Reviews and Inspections). The book is called "Perfect Software and other illusions about testing" ISBN 978-0-932533-69-9. I'll say no more than every developer should read this and then make their management read it too.

One rule that all development companies should make hard and fast is: a developer shall only submit code that has passed the sanity checks (compiles cleanly without warnings, Lint, static analysis, test build, functional test...) performed by himself. That way he knows he has submitted a reasonable piece of work and the tester's role has been eased somewhat.

I consider that writing code for a project should always be just a small proportion of the development time. Most time should be spent in developing, testing and correcting the technical specification before the coding (or hardware build) is started.

--
********************************************************************
Paul E. Bennett...............
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Reply to
Paul E. Bennett


Please don't refuse to believe.

There are companies that are doing -quite- well that have no fancy idea how to test their product(s). People respond to the advertising, buy the product, and that's all that really matters.

"And it was my idea" is an excellent example. It's cheaper to find new customers than it is to fix the defects.

RK

Reply to
d_s_klein

Seems they follow that universal regression test rule, the one that realizes that within the words "THE CUSTOMER" you can always find these words too : "CHUM TESTER"

Using the customer is the ultimate outsourcing coup!!

----------------------

In the context of analytical instruments, testing was something you did to prove each instrument worked. The trick was to restrict the testing to only those tests that were likely to work.

Even very recently I found out that pass bands were being set to the instrument spec accuracy PLUS the reference material tolerance rather than MINUS. All reference material calibrations were traceable to NIST, so that's all right then! When challenged formally to explain themselves, the reply was "That's how we did it when we worked for ".
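
To put rough (invented) numbers on that: if the instrument spec is +/-0.5 units and the reference material is itself only certified to +/-0.2, the acceptance band has to be *narrowed* to +/-0.3, not widened to +/-0.7, or an out-of-spec instrument can still pass against a slightly-off reference. A minimal sketch of the correct check, with all figures and names made up for illustration:

#include <math.h>
#include <stdio.h>

#define SPEC_ACCURACY 0.5    /* instrument spec, +/- units (invented)      */
#define REF_TOLERANCE 0.2    /* reference material cert, +/- units (ditto) */

/* Correct: tighten the acceptance band by the reference uncertainty. */
static int in_pass_band(double measured, double reference)
{
    return fabs(measured - reference) <= (SPEC_ACCURACY - REF_TOLERANCE);
}

int main(void)
{
    /* A reading of 100.6 against a nominal 100.0 cannot be *guaranteed*
       within the 0.5 spec once the reference's own 0.2 uncertainty is
       allowed for, so it must fail; the PLUS version (0.5 + 0.2 = 0.7)
       would have passed it. */
    printf("%s\n", in_pass_band(100.6, 100.0) ? "pass" : "fail");
    return 0;
}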

Peter

Reply to
Peter Dickerson

Ha!

"The light in my office is burned out!" "Hmmm... *mine* works..."

I don't think much thought goes into testing. I think folks see it as "nit picking". Yet they don't seem to be bothered when some customer complains about something that doesn't work (AT ALL!).

"Well, we didn't *test* for that..."

(OK, so what *did* you test for? What product are we *really* making, since we obviously aren't making the product that we advertise!)
Reply to
D Yuniskis

Well, I think there are times when Management gets itself in a no-win situation. I can't see that as being a viable *long term* method of operation, though. You're ALWAYS waiting for the other shoe to fall...

Here, I *am* a bit more cynical. I don't see many folks doing any rigorous testing of their designs. Even less frequently do I see the development of formal test suites (i.e., are you going to *remember* to repeat all of the tests that you did INformally the next time you "touch" the code? Or, are you misguided enough to believe that the next changes you make CAN'T POSSIBLY break anything?)

As "proof" of this, I note how few open source projects exhibit this "high quality" (and evidence of formal test suites). Here, there is no "market pressure". There is no boss breathing over your shoulder. You don't even *have* to write the code at all!! So, if you are so "diligent", where is the evidence of the testing that you performed on your codebase?

I.e., *maybe* I'll see mention of explicit bugs in the CHANGES file. Or, commentary littered throughout the sources (/* Fixed divide by zero error, 12/15/1963 */). But, nothing that explains why this wasn't caught *previously* (i.e., document the bug that existed in your test cases!)

[to be fair, there *are* many FOSS projects that *do* include test suites. Some, quite thorough! Yet, despite their presence, often folks maintaining/patching those projects release patches *without* running the regression tests. Sheesh! It's as if folks assume they can't make mistakes -- or, don't want to know about them since that might mean they aren't yet *done* with the code or the patch]

This is more in line with my thought that you shouldn't waste skilled tradesmen "writing code". Instead, they should write specifications and test suites and design the overall structure of the code/system (i.e., any hardware ramifications thereto).

Let an up-and-coming "coder" *prove* himself capable of doing the more advanced things. Similarly, free the more experienced folks from the drudgery of writing code (chasing syntax errors, picking names for variables, etc.)

Reply to
D Yuniskis

Sure! But, those companies have opted to let their "quality" fall to the lowest level tolerated by their customers. Nothing wrong with that approach. E.g., you can design a product with components that have a limited service life -- as long as that life is long enough to not piss off a significant number of your customers!

The same applies to code (it's just a component).

But, that doesn't mean they don't have *some* mechanism to control that quality! I mean, even folks who build crappy consumer kit know if their quality is getting worse (even if they "know" this by counting the number of "returns")

When it comes to software, they must have some idea which of their employees are "writing crap" and which aren't. I.e., if "Ted" writes the code for a product and the product doesn't even power up, it sure looks like you've got a problem with The Quality of Ted! :>

Likewise, if other folks on Ted's team complain about having to fix Ted's code (because it doesn't do what it was contractually supposed to do), then you've got a problem with Ted.

So, how does *Ted* know he has a problem with the code he is producing if Ted doesn't do *some* form of testing?

Wow, I'm not sure about that. Most of the firms I've worked with take the opposite approach -- that it is much easier to *keep* a customer than to try to get a new one. I.e., an existing customer already has a relationship with you (assuming you don't blow it!). You have to *court* new customers (advertise, incentives, etc.).

E.g., I suspect Toyota has far more *repeat* customers than *new* ones...

Reply to
D Yuniskis
