I need some advice about how to perform unit testing and regression testing on embedded software.
- Do you perform unit testing on the host (PC) using a different compiler, or on the target with the actual compiler? Do you think an "on host" test is useful at all?
- In the case of "on target" tests, how do you interact with the system? Do you modify the code to perform only the function under test? Do you pre-load the test cases before compiling, or use a tool (emulator, debugger) to modify the variables and parameters at run time?
- How do you define the "unit" to test? Do you test functions independently, or test them in groups belonging to a given feature?
- How do you set up the test cases? Do you vary the variables and parameters through the whole range allowed by their types?
- How do you set the PASS/FAIL criteria if there are no documents describing the specific function's characteristics, only the whole feature?
- What about automating the tests? Do you use code-analysis software to collect data about each function (global and local variables, parameters, called functions, etc.), or do you do it manually?
- What about regression tests? Does anybody have experience with those?
Are you testing software or a device? The key is to start with a spec: what does it do? You cannot give a pass/fail verdict if you do not know what it is supposed to do. (It explodes -> iPod = Fail, Bomb = Pass.) Then come up with test cases to prove it works. Do not forget to include bad cases to prove its error checking works. Automation is good, and better still if you have to run the test several times. You have to ask: will automating save time, or is it quicker to just run the test?
Modifying the code to test functions is not as good: it is labor intensive and does not show true operation. It does have its place, though. Sometimes built-in debugger-type functions can be handy.
Like Chris, I had the feeling that this might be a homework question. In this particular instance, however, I think it is better to provide some responses that help guide the OP to find his own words for the answers. Excuse me for shuffling the questions around a bit, but the order of answering might be helpful.
If the development process were properly formulated, the answers would be obvious. However, the embedded world is far from perfect, so you will find that resolving PASS/FAIL criteria is a bit harder to do without decent, structured documentation of the design.
If, as you indicate, the whole feature is documented but the component functions that make it up are not, you have to determine the PASS/FAIL criteria at the level of the whole feature (whether or not it meets its published specification). If there is not enough information in the specification to do this, then you will need to seek further clarification from the spec writers. This costs extra money and time; better to have a decent, structured specification to start with.
This can depend on the level of the component. In a lot of cases an underlying operational model of the product can be simulated on the host. So long as you have a very clear specification of the interfaces of the simulation model, and these have been verified to conform to the real target, you should have little problem. For the very low-level components, and for final performance confirmation, testing on the target is often more useful.
I'll answer the above three together. As I run Forth, I quite often have "umbilical" or "host to target" communication links which enable me to load new code and test it on the target on a function-by-function basis. No special software is needed, and it runs at full operating speed. The link traffic is monitored and logged to a file to ease generation of automated scripts for future regression testing. In my development process, regression testing may be needed within minutes or within a few days.
Defining a "unit to test" should be based on the units described in the specification. A specification should also give you some clues about the overall system architecture and assist in determining what constitutes a unit. Units can exist at various levels; it is a matter of drawing the boundaries.
Do you have "clear box" knowledge of the unit, or can you only treat it as a "black box"? Whether you apply the full range of values for the parameters, or just the values close to the limits (straddling them), will depend on how good a view you have of the unit.
My Forth functions are so simple that manual techniques are all that is required. All code undergoes a visual code inspection, a function test and limits tests. The visual inspection ensures that the functional intention implemented in the code is as stated in its specification. The function test ensures that the function is carried out and that all logical paths through the code are executable. The limits test attempts to cause incorrect operation of the code by providing out-of-bounds parameters.
In other programming languages and environments, your mileage may vary.
Paul E. Bennett ....................
The OP appears to be very new to the business. Yes, "structured documentation to start with" sounds great. The reality may be a bit different. A project usually starts with a few wishes: "could we .....?". An iterative process of "could we ..?" and trials to actually do it drags on. Whatever the concept, whatever the structure of the program, it sooner or later turns to spaghetti. The product is sold long before the last of the spaghetti is untangled, and the wishes keep coming. Wait, those weren't really wishes, they were requirements. Just before you finally decide to rewrite everything from scratch, the lot has to be shipped, overdue. Testing ... oh, it does what it should. OK. That was the worst case.