Requesting critique of a C unit test environment

[First, I apologize for cross-posting. I just think a wider audience can
critique from different vantage points.]

Unit testing is an integral component of both "formal" and "agile"
models of development. Alas, it involves a significant amount of tedious
labor.

There are test automation tools out there, but from what limited exposure
I've had, they are pricey, reasonably buggy, and require compiler/target
adaptation.

Out of my frustration with two out of two of them came my own. Its
instrumentation approach is based solely on profound abuse of the C
preprocessor (and in this respect it is equally applicable to C++).
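(To give a flavour of the general idea - this is only a sketch with made-up
names, not the actual maestra macros:)

#include <stdio.h>

#ifdef UNIT_TEST
/* Hypothetical trace hook: each instrumented point reports where it was hit,
   which is what lets you argue about execution coverage afterwards. */
void test_trace(const char *file, int line)
{
    printf("reached %s:%d\n", file, line);
}
#define TRACE_POINT()  test_trace(__FILE__, __LINE__)
#else
#define TRACE_POINT()  ((void)0)   /* compiles away in the production build */
#endif

/* Code under test, instrumented through the macro. */
int clamp(int x, int lo, int hi)
{
    if (x < lo) { TRACE_POINT(); return lo; }
    if (x > hi) { TRACE_POINT(); return hi; }
    TRACE_POINT();
    return x;
}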

I would like to ask you to evaluate the approach:
- whether it has gaping holes in ideology or implementation
- whether in your opinion it has merits

A preliminary draft description is at
http://www.macroexpressions.com/dl/C%20code%20unit%20testing%20on%20a%20shoestring.pdf

A reference implementation (with a C99 accent) with a runnable example is at
http://www.macroexpressions.com/dl/maestra.zip

Please reply to a newsgroup or via email as you find convenient.
Thank you for your anticipated feedback,

-- Ark


Re: Requesting critique of a C unit test environment
Why not just use one of the free frameworks such as CppUnit?

It works well with both C (with a little fiddling like you do in your
paper for "static") and C++.  I'm sure the same applies for other
frameworks.
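(For readers who haven't seen it, the "fiddling" is typically something along
these lines - a sketch, not necessarily what the paper does:)

/* module.c -- code under test */
#ifdef UNIT_TEST
#define STATIC              /* test build: give internal functions external linkage */
#else
#define STATIC static       /* normal build: keep internal linkage */
#endif

STATIC int add_saturated(int a, int b)
{
    return a + b;           /* placeholder body */
}

/* A test file can then declare
       int add_saturated(int, int);
   and call the "static" function directly in the UNIT_TEST build. */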

--
Ian Collins.

Re: Requesting critique of a C unit test environment
<snip>
Ian,
Thank you for your response.

Please correct me if I am wrong, but AFAIK CppUnit doesn't provide a
code execution trace, so it's pretty darn hard to prove code coverage.
[There must be reasons why testing tools vendors command big money.]

Also, if I use C in a non-C++-compatible way (e.g. tentative definitions),
my source won't even compile for CppUnit.
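(For a concrete example of what I mean - this is legal C by way of tentative
definitions, but a C++ compiler rejects it as a redefinition:)

/* counters.c -- valid C, invalid C++ */
int error_count;          /* tentative definition */
int error_count;          /* still only one object in C; C++ flags a redefinition */
int error_count = 0;      /* the single external definition */

void note_error(void)
{
    ++error_count;
}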

And finally there is a port issue (it's an embedded type talking :)). I
am proposing something that requires only the compiler.

Regards,
Ark

Re: Requesting critique of a C unit test environment
If you develop your software test first, you get all the code coverage
you need.

If you mean K&R style prototypes, don't use them.  Write and compile
your tests in C++ and your code in C.  Don't attempt to compile your C
with a C++ compiler.
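(In practice that just means the C module's header carries the usual linkage
guard so the C++ test file can call into the C-compiled objects; a sketch with
invented names:)

/* widget.h -- header for the C module under test */
#ifndef WIDGET_H
#define WIDGET_H

#ifdef __cplusplus
extern "C" {                 /* let C++ test code link against the C objects */
#endif

int widget_init(void);
int widget_process(int input);

#ifdef __cplusplus
}
#endif

#endif /* WIDGET_H */

/* widget.c is compiled with the C compiler; the test source includes
   widget.h and links against widget.o. */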

Shouldn't matter for unit testing; develop and test on a hosted system.
If you require bits of the target environment, mock (simulate) them.
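(In plain C the usual trick is to put the hardware access behind a small
function and link a fake one into the hosted test build; a sketch with
made-up names:)

/* hw.h -- thin layer over the target hardware */
unsigned read_adc(void);

/* hw_target.c (target build) would read the real register, e.g.
       return *(volatile unsigned *)0x40001000u;                   */

/* hw_mock.c -- linked into the hosted test build instead */
static unsigned fake_adc_value;

void mock_set_adc(unsigned value)   /* the test fixture decides the "reading" */
{
    fake_adc_value = value;
}

unsigned read_adc(void)             /* the code under test sees this version */
{
    return fake_adc_value;
}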

--
Ian Collins.

Re: Requesting critique of a C unit test environment
<snip>
Test first is a nice model but not of universal applicability.
Besides, I need to demonstrate test coverage to the certifying/auditing
entity.
OTOH, I wonder if the proposed instrumentation can be made a part of
CppUnit. I think there is nothing in either that would prohibit it.
<snip>
 > Write and compile
Right. It just didn't occur to me :(

The farthest I can go away from the target is a software simulator of
the instruction set: same compiler, same version, perhaps with more
"memory". I think I am not alone in this...

-- Ark

Re: Requesting critique of a C unit test environment


It sounds like learning what their requirements are and meeting them is more
important than guessing if TDD will incidentally meet their requirements.


You can keep the TDD thing a secret...

--
 Phlip
 http://www.oreilly.com/catalog/9780596510657 /
Re: Requesting critique of a C unit test environment

You are not alone in that; I'd suggest you take this to a TDD list for
advice.

Why?

--
Ian Collins.

Re: Requesting critique of a C unit test environment
Ian Collins wrote, On 27/08/07 08:14:

<snip>


There are many possible *valid* reasons for this. One is that if you are
not using the same version of the same compiler with the same switches
then the code you are testing is not the same as the code that will be
run. Since compilers *do* have bugs it is possible that the bug will be
triggered in the real environment but not in the test environment unless
you ensure that they are the same.

If I was doing QA for a product I would insist that you either use the
same version of the same compiler or you provide testing to the same
level that the deliverable SW requires of *both* the compiler used for
test *and* the compiler used for the final SW. The more critical the SW,
the more insistent I would be on this, and the more testing you would
have to do on the compilers; for safety-critical SW this would probably
kill the project dead if you did not use identical SW to build for test
and build for delivery. BTW, I *have* rejected SW and documentation at
review, and even told developers that there was no point in putting it
in for review because I would fail it.
--
Flash Gordon

Re: Requesting critique of a C unit test environment
But one has to differentiate between developer unit testing (the subject
of this post) and QA (customer) acceptance testing.  The former can be
performed in any environment the developer chooses; the latter must be
run on the target.


Again, this is different from developer unit testing; I don't think
anyone would be daft enough to release a product that hadn't been
through acceptance testing on the target platform.

--
Ian Collins.

Re: Requesting critique of a C unit test environment
Ian Collins wrote, On 27/08/07 21:51:

You have missed out internal formal testing, which in many environments
is far more complete than acceptance testing. For example, I've worked
on projects where a formal test literally takes a week to complete but
the customer acceptance testing takes only a few hours.

Unit testing can also be formal, and in a lot of environments, including
the aforementioned safety-critical projects, you are *required* to
perform formal unit tests.


Informal testing can be run in any environment the developer has
available. Formal testing, which is the only sort of testing that you
can guarantee will be available and working for those maintaining the
code later, is another matter.


All formal testing, whether unit testing or testing at a higher level,
has to be run on code compiled with the correct compiler, although not
always on an identical target.


Acceptance testing has very little to do with proving whether the system
works; it is just to give the customer some confidence. The real
worthwhile formal testing has to be completed *before* doing customer
acceptance testing and done with the correct compiler. At least, this is
the case in many environments, including all the projects where I have
been involved in QA, and on the safety-critical project I was involved in.

If your customer acceptance testing is sufficient to prove the SW is
sufficiently correct then your customer has either very little trust in
your company or a lot of time to waste. If your customer acceptance
testing is the only testing done with the correct compiler and it is not
sufficient to prove your SW is sufficiently correct then your SW is not
tested properly. At least, not according to any standard of testing I
have come across.
--
Flash Gordon

Re: Requesting critique of a C unit test environment


What did y'all do if the "formal" test failed?

What I look for is this: Replicate the failure as a short unit test. Not a
proof - just a stupid test that fails because the code change needed to fix
that formal test isn't there.

The point is to make the fast tests higher value as you go...
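(Concretely, something this small is enough - the function and values here
are hypothetical - and it stays behind as a permanent regression guard once
the fix is in:)

#include <assert.h>

/* Hypothetical: the formal test showed scaling was wrong at full-scale input.
   Pin that one case down; it fails until the fix exists, then never regresses. */
extern long scale_reading(long raw);

void test_scale_reading_full_scale(void)
{
    assert(scale_reading(0x7FFFFFL) == 1000L);
}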

--
 Phlip
 http://www.oreilly.com/catalog/9780596510657 /

Re: Requesting critique of a C unit test environment
If performed, internal formal testing is still a step away from
developer testing.

How so?  A unit test suite doesn't just vanish when the code is
released; it is an essential part of the code base.

That depends on your definition of Acceptance tests.  In our case, they
are the automated suite of tests that have to pass before the product is
released to customers.

Again, that depends on your process.


Why?  Our acceptance tests are very comprehensive, written by
professional testers working with a product manager (the customer).

It sounds like you don't have fully automated acceptance tests.  Wherever
possible, all tests should be fully automated.

--
Ian Collins.

Re: Requesting critique of a C unit test environment
Ian Collins wrote, On 28/08/07 04:46:

Yes. However, above you said that it should not matter for unit testing
whether you use the same compiler or not. Since unit testing can be and
often *is* formal, such a statement is at least misleading. Had you said
that it did not matter for informal testing, and had the OP been asking
about informal testing, you might have a point, but it was never stated
that the unit testing was informal.


Simple. If it is not formal then you (the next developer) have no
guarantee that it is in a usable state. So you, the next developer, have
to fully validate any tests you will rely on during your development.


Yes, this could be a matter of definition. To me an acceptance test is
the customer coming in and witnessing some pre-agreed tests; if they
pass, the customer will accept the SW and/or HW (and pay for it). It
has nothing to do with whether the company is prepared to give the SW to
the customer.


I've not worked for a company where they would be prepared to try and
get a customer to accept SW before having a decent level of confidence
that it is correct *and* acceptable to the customer.


It is not possible, at a reasonable cost, to fully automate all testing.
On a number of projects I have worked on the formal testing included
deliberately connecting up the system incorrectly (and changing the
physical wiring whilst the SW is running), inducing faults in the HW
that the SW was intended to test, responding either correctly or
incorrectly to operator prompts, putting a plate in front of a camera so
that it could not see the correct image whilst the SW is looking at it,
swapping a card in the system for a card from a system with a different
specification, etc. It would literally require a robot to automate some
of this testing, and some of the rest of it would require considerable
investment to automate. Compared to the cost of the odd few man-weeks to
manually run through the formal testing with a competent witness, the
cost of automation would be stupid.

BTW, on the SW I am mainly thinking of there were so few bug reports
that on one occasion when the customer representative came to us for
acceptance testing, a few years after the previous version, both the
customer representative and I could remember all of the fault reports
and discuss why I knew none of them were present in the new version. The
customer representative was *not* a user (he worked for a "Procurement
Executive" and not for the organisation that used the kit), so he would
not have seen it for several years.

If you doubt the quality of the manual testing, then look at how many
50000 line pieces of SW have as few as 10 fault reports from customers
over a 15 year period. Most of those fault reports were in the early
years, and *none* were after the last few deliveries I was involved in.

BTW, if they are still using the SW at the start of 2028 we have a
problem, but that is documented and could easily be worked around.
--
Flash Gordon

Re: Requesting critique of a C unit test environment
We all work from our own point of reference; in mine, unit tests are a
developer tool, so that's why I answered as I did.

Again, as one who uses TDD, the tests are always up to date as they
document the workings of the code.  All down to process.

Ah, that explains a lot!

Neither have I.


True, but with care you can automate the majority of them.  The beauty
of automated tests is they cost next to nothing to run, so they can be
continuously run against your code repository.

There you have what I'd call integration testing, something we also do
with any software that interacts with other equipment.

I don't doubt it, I just prefer to send my resources elsewhere.  We go
through the full manual integration tests for major software releases
(adding acceptance and unit tests to reproduce any bugs found).  This
process of feeding back tests into the automated suites makes them
progressively more thorough, to the extent that minor updates can be
released without manual testing and the testing of major releases finds
few, if any, bugs.  Most of the bugs found by the manual testing are
differing interpretations of the specification.


--
Ian Collins.

Re: Requesting critique of a C unit test environment
Ian Collins wrote, On 28/08/07 22:03:

You should try to avoid assuming everyone works the same way. In the
defence industry at least it is very common for there to be a lot of
formal unit tests.


If the process is enforced then the testing is formal and, I would
expect, the results are recorded somewhere that the 10th developer after
you will be able to find them.


Acceptance tests are used to accept, simple :-)


So you do your acceptance tests before the customer sees the kit?


I fully understand the use of them. However, it is not always either
practical or cost effective. In this case there was no automated test
system available, so if we wanted one we would have had to design,
implement and test it, then write all the test harnesses...

Almost forgot, we would have had to generate and validate a *lot* of
test data instead of just using real kit either with or without faults.

At the end of the day we would also have had to do thorough integration
testing as well. So I still believe doing automated testing would have
been more expensive overall, and certainly would have been a significant
up-front cost.

Note that this SW does a *lot* of HW interaction, since it is actually
the main SW of a piece of 2nd line test equipment.


Yes and no. Each set of tests was focused on exercising a specific unit;
it was just using the rest of the SW as a test harness.


Obviously. We just killed multiple birds with the same high-tech
missile^W^W^Wstone.


I still don't believe it cost more time overall.


We also added tests to trap the few bugs that were found.


We started off by making the tests thorough, which is why the testing
takes so long. Due to this and the low bug count, almost all releases
whilst I worked at the company were major releases (adding support for
major variants of the kit it tested, testing major new features in new
versions of the kit it tested, etc.), with only a small number of
bug-fix releases.


Not on this SW. Reviews of requirements caught most of them and reviews
of design caught most of the remainder. I can only think of one
interpretation issue that was not caught before coding started on this SW.
--
Flash Gordon

Re: Requesting critique of a C unit test environment
The results are recorded every time the tests run - either "OK" or
failure messages :)

They are run as soon as the feature they test is complete.

If the project is long running, or a family of products is to be
maintained, it can be worth the effort.  I preferred to have my test
engineers developing innovative ways to build automatic tests rather
than have them running manual tests.  Provided they can produce the
tests at least as fast as the developers code the features, everyone is
happy.

I like to capture all of the data generated during manual tests and feed
it back through as part of the automated tests.

The examples I'm referring to were power system controllers.

This project has been running (the product has to continuously evolve to
meet the changing market) for 5 years, so the up-front cost has paid for
itself many times over.

--
Ian Collins.

Re: Requesting critique of a C unit test environment
Ian Collins wrote, On 30/08/07 02:42:

They are only recorded if they are put somewhere that someone can see
them after you have left the company. Otherwise they are only reported.


And all re-run after the final line of code is cut, I trust.


I started on it in the late '80s and the last I heard was a contract
signed giving an option of support until 2020; is that long enough for you?


Only half a dozen or so variants, all using over 90% common code.


Ah, but we did not spend vast amounts of time running the tests, not
compared to the time/effort involved in generating the required test
data, automating the tests, and then writing the integration tests
needed to prove it works as an entire system.


We did not have the luxury of dedicated test developers. Those
developing the tests were those analysing the requirements, designing
the SW and implementing it.


That would require writing a lot of SW to capture the data. All of which
would have to be tested.


I'm talking about 2nd line test equipment for *very* high end camera and
image processing systems. 2nd line is the kit the customer puts it on
when it has come back from operation broken.


Ah well, the SW I'm referring to changes only every few years due to new
customers or existing customers wanting enhancements to the kit it is to
test. The last set of updates I'm aware of will have started probably in
2001 (maybe 2000) but I had left the company by then. I know we had won
the contract. So definitely over twice as long a period. Requirements
changes also had minimal code impact because we had designed the system
to allow for changes.
--
Flash Gordon

Re: Requesting critique of a C unit test environment
The tests are part of the project, in the same source control.  Without
the tests, the project cannot build.  Building and running the tests is
an integral part of the build process.

The last sentence is important, so I'll repeat it - the unit tests are
built and run each time the module is compiled.
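(One way to wire that up - a sketch with an invented runner, not necessarily
how we do it - is a tiny test executable whose exit status gates the build,
so a failing test breaks the compile step:)

/* module_tests.c -- built and executed by the module's build rule;
   a non-zero exit status makes the build fail. */
#include <stdio.h>

extern int clamp(int x, int lo, int hi);   /* hypothetical module under test */

static int failures;

#define CHECK(cond) \
    do { if (!(cond)) { printf("FAILED: %s\n", #cond); ++failures; } } while (0)

int main(void)
{
    CHECK(clamp(5, 0, 10) == 5);
    CHECK(clamp(-1, 0, 10) == 0);
    CHECK(clamp(99, 0, 10) == 10);

    if (failures)
        printf("%d test(s) FAILED\n", failures);
    else
        printf("OK\n");
    return failures ? 1 : 0;
}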

Rerun every build, dozens of times a day for each developer or pair.

--
Ian Collins.

Re: Requesting critique of a C unit test environment

Assuming that the test code itself must be reasonably dumb (so that
/its/ errors immediately stand out), that's not terribly realistic:
imagine a sweep over, say, the "int24_t" range. One could only hope to
run automated tests overnight - on a long night :).
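(For scale - with a made-up function under test and a slow reference model -
an exhaustive sweep of a 24-bit input is about 16.8 million cases:)

#include <stdint.h>
#include <assert.h>

extern int32_t fast_func(int32_t x);    /* hypothetical optimised code under test */
extern int32_t model_func(int32_t x);   /* slow but obviously-correct reference */

void sweep_int24_range(void)
{
    /* 2^24 = 16,777,216 cases: trivial per call, a long while in total */
    for (int32_t x = -(INT32_C(1) << 23); x < INT32_C(1) << 23; ++x)
        assert(fast_func(x) == model_func(x));
}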
--
Ark

Re: Requesting critique of a C unit test environment

It may not appear that way, but it is the reality on any project I
manage.  In all (C++) cases, the tests take less time to run than the
code takes to build (somewhere between 50 and 100 tests per second,
unoptimised).

--
Ian Collins.
