How do I treat a "default" case that is never exercised?

I only need 5 of the 8 cases, but for completeness's sake a default case is needed to avoid unwanted latches. This default case is never hit in simulation, so it drags the coverage down from 100% to 99%, and it's kind of annoying to have to explain that 1% to the customer.

How do I treat this situation?

Reply to
Mr. Ken

Could you leave the default case blank (but still include the default) and get better coverage numbers? If there's a statement saying "clear the register" that will never be executed, it's still a statement that's not covered. If it's blank, will the coverage tool still complain that the empty branch was never entered?

Reply to
John_H

Thank you.

For safety's sake, I set the registers/variables to zero in the default case. The reason is that, without this assignment, Design Compiler will infer a latch if the case isn't in a clocked block.

I could have put "// synopsys full_case" on the case statement, but the client's FPGA/ASIC synthesizer might have problems with it.

Reply to
Mr. Ken

What about making the 5th case the default? That means having 4 explicit selects, with the default covering the 5th case as well as the 6th through 8th.

Martin

Reply to
Martin Schoeberl

Hi Ken, I know you're using Verilog, but in VHDL I might try declaring the selector as an integer with range 0 to 4 and see if that prevents the latch inference. HTH, Syms.
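A minimal VHDL sketch of that suggestion (signal names invented for illustration): because the selector's subtype admits exactly five values, the five explicit choices cover it completely, and there is no unreachable branch for the coverage tool to flag.

  -- Hypothetical fragment. The subtype has only the five legal values,
  -- so the case is complete without a 'when others' branch.
  -- (Declarations go in the architecture's declarative part.)
  signal sel           : integer range 0 to 4;
  signal a, b, c, d, e : std_logic;
  signal y             : std_logic;

  mux : process (sel, a, b, c, d, e)
  begin
    case sel is
      when 0 => y <= a;
      when 1 => y <= b;
      when 2 => y <= c;
      when 3 => y <= d;
      when 4 => y <= e;
    end case;
  end process;

Whether synthesis still infers a latch for the unused 3-bit encodings is tool-dependent, hence the "see if that prevents" hedge.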

Reply to
Symon

You can get the same effect as full_case by assigning X to all outputs in the default, but this again leaves the choice up to the synthesizer, which means simulation coverage still won't be 100%. I like the idea of setting the default-case outputs to one of the previously defined states better.
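In VHDL terms (Gabor describes the Verilog equivalent; the decoder below is invented), the don't-care default looks something like this:

  -- Hypothetical fragment: sel is std_logic_vector(2 downto 0) and
  -- y is std_logic_vector(3 downto 0). The 'X' assignment lets the
  -- synthesizer implement anything for the three unused selects, but
  -- the branch itself still shows up as uncovered in simulation.
  process (sel)
  begin
    case sel is
      when "000"  => y <= "0001";
      when "001"  => y <= "0010";
      when "010"  => y <= "0100";
      when "011"  => y <= "1000";
      when "100"  => y <= "1111";
      when others => y <= (others => 'X');  -- don't care
    end case;
  end process;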

Just my 2 cents, Gabor

Reply to
Gabor

In the real hardware anything can happen, and the other 3 cases could be hit. I think you should leave that 1% alone, because it simply indicates that your simulation doesn't handle error recovery. Any reasonable person won't make a big deal of it if it's well explained and documented. If you really want 100% coverage, I would suggest you think of a way to inject errors instead of playing with the semantics.

HTH,



Reply to
Jim Wu

Hi Jim, Yeah, that would work. You could force the simulation to set the case selector to the illegal values. This would give the 100% coverage needed. Cheers, Syms.
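A hedged sketch of that injection (assuming a 3-bit std_logic_vector selector driven directly by the testbench; if the selector is an internal signal, you'd use the simulator's force command instead):

  -- Testbench stimulus: step through the three unused encodings so the
  -- default/others branch is entered and counted as covered.
  error_inject : process
  begin
    wait for 100 ns;                -- after the normal stimulus
    sel <= "101"; wait for 10 ns;   -- unused 6th encoding
    sel <= "110"; wait for 10 ns;   -- unused 7th encoding
    sel <= "111"; wait for 10 ns;   -- unused 8th encoding
    wait;
  end process;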

Reply to
Symon

The proper method is to cover all cases explicitly. As mentioned earlier, folding case #5 together with the unused cases into the 'when others' branch is an acceptable approach that also guarantees 100% code coverage.
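A minimal sketch of that approach (encodings invented), along the lines Martin suggested earlier:

  -- Four explicit choices; the fifth legal value shares 'when others'
  -- with the three unused encodings, so the branch is exercised every
  -- time case #5 occurs in simulation.
  case sel is
    when "000"  => y <= a;
    when "001"  => y <= b;
    when "010"  => y <= c;
    when "011"  => y <= d;
    when others => y <= e;  -- case #5 plus the unused 6th..8th
  end case;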

Designing to cover that last 1% is not just an annoyance; it helps ensure that each line of code in the design has actually been exercised in simulation, which should be a bare minimum to strive for in test coverage anyway. It's not saying that each line has been tested under every condition it may encounter, but at least each line has been hit.

Depending on exactly what you have in your source code, there may well be good reasons for making sure your design has every base covered that you're not even considering. Not knowing your code, I'm guessing the signal that is the target of the case is probably one of the following forms:

  1. An integer subtype (e.g. signal X : natural range 0 to 7)
  2. A std_logic_vector subtype (e.g. signal X : std_logic_vector(2 downto 0))

Since you're using 5 of the 8 values explicitly in your design, what happens if it powers up in state #6, 7 or 8? Presumably you have a reset that will move it out of that state, but you can also have a 'when others' that takes you to a known state. Adding code to handle this doesn't necessarily hurt your code coverage. For example, if your code is of form #2, the simulator will initialize the vector to "UUU" and your 'when others' clause will in fact get executed. If your code is of form #1, the 'when others' code will not get executed, because X will be initialized to 0. But before you conclude that this is a 'good' thing, consider:

- The underlying real hardware may not give you that same initialization of X to 0.

- If you (or somebody else) reuse this code in some other design, the underlying hardware there may not give that same initialization either.

In both of these situations you're setting yourself up for simulation not matching reality, which can be difficult to debug. In that sense, code of form #1 is less robust than code of form #2, since it relies on a quirk of simulation that is not present in the underlying hardware.
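A side-by-side sketch of the two forms (declarations invented):

  -- Form #1: the simulator initializes an integer subtype to its
  -- leftmost value, 0 here, so a 'when others' branch is never entered
  -- at time zero, a quirk real hardware does not reproduce.
  signal x_int : natural range 0 to 7;

  -- Form #2: each std_logic element initializes to 'U', so the vector
  -- starts as "UUU" and a 'when others' branch runs at time zero,
  -- before any reset or assignment takes effect.
  signal x_slv : std_logic_vector(2 downto 0);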

Making your design and testbench code cover that last 1% may seem annoying, but it actually makes your design more robust, and possibly reusable, and it is good design practice. The fact that the tool is showing you a potential logic error you hadn't considered is a good thing.

KJ

Reply to
KJ

I like that idea! I wish I had thought of it...

To the OP: I am curious, what tool are you using to analyze the simulation coverage of your code? I'm also curious whether having 100% coverage of the HDL does anything more than say there are no statements that went unexercised. I don't think it actually verifies that the code will work, will it? For example, if the statements in cases 2 and 3 were swapped, the code would still be covered but would produce a wrong result. Is there a way to verify that the wrong result will be caught?

Reply to
rickman

Typically when people refer to 100% coverage, all they actually mean is that every statement has been hit. What it 'should' mean is that every statement has been hit under every possible condition. That kind of information is also available (at least Modelsim's code coverage tool has it; I'm guessing other simulators do as well). Even though most people only mean that every statement has at least been hit, and not the more inclusive 'under all conditions', knowing that you've hit every line is still a big step up from not knowing whether you've hit every line of code in your testing.

No, it doesn't verify that the code will work.

Make liberal use of the 'assert' statement throughout both the design code and the testbench code. Asserts make the simulation testbench self-verifying, to the extent that you've coded an assert for every functional point. In the particular case you describe of swapping cases, presumably 'some' output will misbehave and cause the design to 'not work' at some level. If there is an assert statement checking that the design is working correctly, it should catch it. If it doesn't, then this implies either that there was no 'assert' to check that the design was working under those conditions, or that the testbench didn't exercise things adequately to uncover the inadvertent 'case swap', or (somewhat unlikely) that cases 2 and 3 really are not distinct and cause no observable misbehaviour of the design.

Liberal use of assertions is a 'best practice' (in my opinion, and I've been using them for years). VHDL has had them since day 1; Verilog has recently added them. There are now even special languages to help ease the task of writing design assertions (google for PSL).
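A minimal example of the kind of self-checking assert meant here (signal names invented):

  -- Check an invariant on every clock edge, in the design or testbench.
  check_grants : process (clk)
  begin
    if rising_edge(clk) then
      assert not (grant_a = '1' and grant_b = '1')
        report "grant_a and grant_b asserted together"
        severity error;
    end if;
  end process;

If a bug like the case swap above ever drives both grants at once, the simulation reports it regardless of which statements were covered.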

KJ

Reply to
KJ

I took some Ada software courses to learn some of the "software engineering" approaches to design and coding, which carry over to the similar VHDL language.

One of the tenets they used was to never code anything but null in a 'when others' clause. 'When others' was strictly for language issues, and was not to be used for functional code. Anything that could happen was to be handled explicitly, and anything that could not happen was not to be handled, because it could not be tested anyway.

Now, in certain HW applications, things like SEUs (single-event upsets), reset values, etc. can make "impossible" things suddenly become very probable. But it usually takes far more than adding "others" clauses to handle them correctly, since most synthesis tools perform a reachability analysis and optimize out anything that is "impossible" anyway.

So, if the synthesis tool thinks that input values 6 through 8 are not possible, it will optimize out any logic that distinguishes between them and any possible other value. To reach 100% coverage, you'd be testing code that is going to get optimized out anyway, and your "100%" code coverage metric would be no more meaningful than "99%".

BTW, I believe Synplicity has stated that they will start honoring initial values (either explicit or implicit) in their FPGA synthesis tools, so booleans and integers will get proper initial values even if they are not explicitly reset. Now, transitioning out of the reset/config value on the first clock edge after reset/config can be problematic, but that's not an initialization problem (and not one that would be caught simply by using slv data types); it is a reset/config synchronization problem, and a whole other topic altogether.

Andy

Reply to
Andy

Just out of curiosity, was another one of the tenets to not use 'else' (or, equivalently, to only use 'else null;')? It seems like the same rationale would apply to the if/elsif/else/end if statement as well.

You're assuming that all tools will do this. It's better practice to have the 'when others' code for those that might not do such "in depth" analysis. In fact, one could argue that tools which disregard the 'when others' statements, because their optimization analysis decided such states could not possibly occur, should be used with suspicion. As you also pointed out, when such things really do matter (i.e. critical designs), the techniques required go far beyond a 'whoops, I'm in the wrong state, go back to Idle' response. For most other applications, though, that response is perfectly appropriate, and yet if it is optimized away it means the synthesized design does not match the source... which is almost always bad.

Depends. Not knowing any more about the design, I was simply suggesting that the tools were pointing to a possible design issue that should be looked into. In that spirit, the fact that the tool did not give the desired 100% should be seen as a 'good thing', in that it flags things like simple power-up oddities that may not have been considered in the design.

KJ

Reply to
KJ

That is what I am referring to. The fact that the statements were "exercised" is not the same thing as saying they were "tested" to work correctly. If a tool can tell you if a statement was exercised under "all conditions", what exactly does that mean, "all conditions"? I would assume it means all combinations of the signals that are inputs to the statement.

I think it is still up to the test designer to determine if the test distinguishes failures in the code from working code. I guess you could compare it to a spell checker. It will certainly be useful to catch errors and omissions, but it will not substitute for a pair of eyes going over the code just lick as pell checker kin miss at least too different kinds of mistakes. ;^)

Reply to
rickman

I don't follow this at all. I am pretty sure that putting NULL in a when others clause will create a latch, no? NULL is saying to do nothing which means hold your present state, ergo a latch.

Only if the tool can determine that the condition is not "possible". Often the condition is a function of how a design is used rather than how it is built. Given the constraints on the inputs, a condition may be "impossible" with no way for the tool to know that.

Reply to
rickman

No, they did not go that far, though you are right, at least when dealing with conditions having more than two possible values. I'm not even saying it is a great idea for hardware, just an interesting way to think about it. In the example above, where someone spoke of combining the last legal condition with all the remaining, illegal conditions by using "when others", I would probably prefer something like "when 5 | others"; even though it would be identical in operation, it would be a little more self-documenting.

It is certainly hard to "argue excellence" against covering all the bases, but where does one stop? If the conditions really are unreachable (excluding SEUs, invalid synchronization, etc.), regardless of whether the synthesis tool recognizes it or not, how are you going to test it (the RTL)? How do you know that you made the right choice of what to do in that case?

In many cases, "when others" is only required because of metavalues in the logic system (x, u, -, etc.) If I'm using enumerated types that only define a subset of the hardware representable states, I've already made a tradeoff. If I want to handle the remaining states, I have to tell the synthesis tool that they even exist (which in itself is based on what representation they use, binary or one hot, etc.) before I can tell it what to do with them.

My point about 100% vs. 99% coverage was more a stab at blindly thinking (even on the part of the customer, who we know is always right!) that 100% coverage is good and 99% coverage is bad. 'Tain't necessarily so, though you are right that uncovered code is a good alert to "go check it out". Perhaps the SW practice came about strictly because of coverage tool issues in the first place (i.e. all uncovered statements had to be null statements)!

Andy

Reply to
Andy

That's correct. The whole point of the assertion statements that get added to the design code and the testbench code is to embed the knowledge of what 'correct' behaviour is.

With Modelsim's Code Coverage the coverage options you get are:

- Statement coverage

- Branch coverage

- Condition coverage

- Expression coverage

- Finite state machine coverage

- 0/1 Toggle Coverage

- 0/1/Z Toggle Coverage

That is still somewhat short of 'all conditions', but I'd wager it would be darn difficult, though probably not impossible, to get 100% coverage on all of those items, have assertion checking on all of the outputs, and still slip a bug through.

I agree, but I think assertion checking and code coverage (meaning most or all of the above list, not just 'statement coverage') are pretty powerful tools at the designer's disposal for beating heavily on the design, without requiring the 'sharp eyes' or the 'smart guy' to get the job reasonably done. Not that such resources shouldn't also be brought in when available. The things that then get left on the table unchecked are the units-of-measure mistakes that end up crashing probes onto the surface of Mars (at least I thought that was the root cause of that little mishap).

Until someone comes up with an automagic way to convert a design specification into some sort of testbench, I would agree with you... which I'm guessing will be until my last dying breath.

KJ

Reply to
KJ

Latches only happen in a combinatorial process! I don't use those; I only use clocked processes (aka the single-process description) for FSMs, and this is one of the main reasons why. In a clocked process, a null would be implemented as a clock disable, assuming the tool determined the state was reachable in the first place.
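A sketch of that single-process style (state and signal names invented): in a clocked process, 'when others => null;' simply means hold the current state, which comes out as a clock disable rather than a latch.

  type state_t is (idle, run, done);
  signal state : state_t;

  fsm : process (clk)
  begin
    if rising_edge(clk) then
      case state is
        when idle   => if start = '1' then state <= run;  end if;
        when run    => if stop  = '1' then state <= done; end if;
        when done   => state <= idle;
        when others => null;  -- hold: a clock disable, not a latch
      end case;
    end if;
  end process;

Whether the others branch survives at all depends on the tool's reachability analysis, as noted above.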

On the other hand, they had their restrictions on Ada code for SW; I'm the one considering it for VHDL and HW, which has other issues to consider (one of which is the latch issue you raise for a combinatorial process).

Naturally, there are cases where any input condition can happen (e.g. a CPU can access any address representable in the available address bits), but most synthesis tools do simple reachability analysis on state machines and even on counters. With constrained integer types it is possible to state such information explicitly (e.g. natural range 0 to 5 excludes the 3-bit values 6 and 7, such that an equality comparison with 5 only takes two inputs, not three), no matter where the information came from (e.g. an external 3-bit port). To what extent tools use this depends on the provider.

You raise a very good point about unconstrained, loosely coupled inputs. You may know that two input conditions are mutually exclusive, but telling that to the tool is usually nontrivial. One application I have found is to use a tri-state bus style description and then tell the synthesis tool to convert the tristates to muxes; it automatically assumes that the tristate enables are mutually exclusive (otherwise the original circuit would not have worked). I wish we had a standard mutex function that we could call in an assertion statement, and that the synthesis tool would pick up on to realize that the inputs to the mutex function were in fact mutually exclusive.
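A sketch of that tri-state idiom (signal names invented): each source drives a resolved bus and releases it with 'Z'; a convert-tristates-to-muxes synthesis option then builds a mux whose select logic assumes the enables never collide.

  -- Two drivers on one resolved std_logic_vector bus; each releases
  -- the bus with 'Z' when its enable is low.
  bus_data <= data_a when en_a = '1' else (others => 'Z');
  bus_data <= data_b when en_b = '1' else (others => 'Z');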

Andy

Reply to
Andy
