Test Driven Design?

Interesting questions arise with FSMs implemented in software...
Which of the many implementation patterns should you choose?
My preference is anything that avoids deeply nested if/then/else/switch statements, since they rapidly become a maintenance nightmare. (I've seen nesting 10 deep!)
Also, design patterns that enable logging of events and states should be encouraged and left in the code at runtime. I've found them /excellent/ techniques for correctly deflecting blame onto the other party :)
Should you design in a proper FSM style/language and autogenerate the executable source code, or code directly in the source language? Difficult, but there are very useful OOP design patterns that make it easy.
And w.r.t. TDD, should your tests demonstrate the FSM's design is correct or that the implementation artefacts are correct?
Naive unit tests often end up testing the individual low-level implementation artefacts, not the design. Those are useful when refactoring, but otherwise are not sufficient.
Reply to
Tom Gardner
Personally, I custom design FSM code without worrying about what it would be called. There really are only two issues. The first is whether you can afford a clock delay in the output and how that impacts your output assignments. The second is the complexity of the code (maintenance).
Such deep layering likely indicates a poor problem decomposition, but it is hard to say without looking at the code.
Normally there is a switch for the state variable and conditionals within each case to evaluate inputs. Typically this is not so complex.
Designing in anything other than the HDL you are using increases the complexity of backing up your tools. In addition to source code, it can be important to be able to restore the development environment. I don't bother with FSM tools other than tools that help me think.
I'll have to say that is a new term to me, "implementation artefacts[sic]". Can you explain?
I test behavior. Behavior is what is specified for a design, so why would you test anything else?
--

Rick C
Reply to
rickman
The point is if both designs were built with the same misunderstanding of the requirements, they could both be wrong. While not common, this is not unheard of. It could be caused by cultural biases (each company is a culture) or a poorly written specification.
--

Rick C
Reply to
rickman
Yup. Although testing the real, obscure and complicated thing against the fake, easy to read and understand thing does sound like a viable test, too.
Prolly should both hit the thing with known test vectors written against the spec, and do the behavioral vs. actual sim, too.
--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott
If you do hardware design with an interpretive language, then test driven design is essential:
formatting link

My hobby project is long and slow, but I think this discipline is slowly improving my productivity.
Jan Coombs
Reply to
Jan Coombs
It was a combination of technical and personnel factors. The overriding business imperative was, at each stage, to make the smallest and /incrementally/ cheapest modification.
The road to hell is paved with good intentions.
This was an inherently complex task that was ineptly implemented. I'm not going to define how ineptly, because you wouldn't believe it. I only believe it because I saw it, and boggled.
Very true. I use that argument, and more, to caution people against inventing Domain Specific Languages when they should be inventing Domain Specific Libraries.
Guess which happened in the case I alluded to above.
Nothing non-obvious. An implementation artefact is something that is part of /a/ specific design implementation, as opposed to something that is an inherent part of /the/ problem.
Clearly you haven't practiced XP/Agile/Lean development practices.
You sound like a 20th century hardware engineer, rather than a 21st century software "engineer". You must learn to accept that all new things are, in every way, better than the old ways.
Excuse me while I go and wash my mouth out with soap.
Reply to
Tom Gardner
The prior question is whether the specification is correct.
Or more realistically, to what extent it is/isn't correct, and the best set of techniques and processes for reducing the imperfection.
And that leads to XP/Agile concepts, to deal with the suboptimal aspects of Waterfall Development.
Unfortunately the zealots can't accept that what you gain on the swings you lose on the roundabouts.
Reply to
Tom Gardner
It doesn't matter in the slightest whether or not the language is interpreted.
Consider that, for example, C is (usually) compiled to assembler. That assembler is then interpreted by microcode (or its more modern equivalent!) into RISC operations, which are then interpreted by hardware.
Reply to
Tom Gardner
I'm sure you know exactly what you meant. :)
--

Rick C
Reply to
rickman
If we are bandying about platitudes I will say, penny wise, pound foolish.
Good design is about simplifying the complex. Ineptitude is a separate issue and can ruin even simple designs.
An exception to that rule is programming in Forth. It is a language where programming *is* extending the language. There are many situations where the process ends up with programs written in what appears to be a domain specific language, but working quite well. So don't throw the baby out with the bathwater when trying to save designers from themselves.
Why would I want to test design artifacts? The tests in TDD are developed from the requirements, not the design, right?
Lol
--

Rick C
Reply to
rickman
I see why you are saying that, but I disagree. The Forth /language/ is pleasantly simple. The myriad Forth words (e.g. cmove, catch, canonical etc) in most Forth environments are part of the "standard library", not the language per se.
Forth words are more-or-less equivalent to functions in a trad language. Defining new words is therefore like defining a new function.
Just as defining new words "looks like" defining a DSL, so - at the "application level" - defining new functions also looks like defining a new DSL.
Most importantly, both new functions and new words automatically have the invaluable tools support without having to do anything. With a new DSL, all the tools (from parsers to browsers) also have to be built.
Ideally, but only to some extent. TDD is frequently used at a much lower level, where it is usually divorced from specs.
TDD is also frequently used with - and implemented in the form of - unit tests, which are definitely divorced from the spec.
Hence, in the real world, there is bountiful opportunity for diversion from the obvious pure sane course. And Murphy's Law definitely applies.
Having said that, both TDD and Unit Testing are valuable additions to the designer's toolchest. But they must be used intelligently[1], and are merely codifications of things most of us have been doing for decades.
No change there, then.
[1] be careful of external consultants proselytising the teaching courses they are selling. They have a hammer, and everything /does/ look like a nail.
Reply to
Tom Gardner
I can't find a definition for "trad language".
I have no idea what distinction you are trying to make. Why is making new tools a necessary part of defining a domain specific language?
If it walks like a duck...
FRONT LED ON TURN
That could be the domain specific language under Forth for turning on the front LED of some device. Sure looks like a language to me.
I have considered writing a parser for a type of XML file simply by defining the syntax as Forth words. So rather than "process" the file with an application program, the Forth compiler would "compile" the file. I'd call that a domain specific language.
There is a failure in the specification process. The projects I have worked on which required a formal requirements development process applied it to every level. So every piece of code that would be tested had requirements which defined the tests.
They are? How then are the tests generated?
--

Rick C
Reply to
rickman
snip
Other than occasional test fixtures, most of my FPGA work in recent years has been FPGA verification of the digital sections of mixed signal ASICs. Your description sounds exactly like the methodology used on both the product ASIC side and the verification FPGA side. After the FPGA is built and working, you test the hell out of the FPGA system and the product ASIC with completely separate tools and techniques. When problems are discovered, you often fall back to either the ASIC or FPGA simulation test benches to isolate the issue.
The importance of good, detailed, self-checking, top-level test benches cannot be over-stressed. For mid- and low-level blocks that are complex or likely to see significant iterations (due to design spec changes), self-checking test benches are worth the effort. My experience with manually checked test benches is that the first time you go through one you remember to examine all the important spots, but the thoroughness of the manual checking falls off fast on subsequent runs. Giving a manually checked test bench to someone else is a waste of both of your time.
BobH
Reply to
BobH
I've solved the problem of setting up a new project for each testbench by not using any projects. Vivado has a non-project mode where you write a simple tcl script which tells Vivado what sources to use and what to do with them.
I have a source directory with HDL files in our repository, and dozens of scripts. Each script takes sources from the same directory, creates its own temporary working directory and runs its test there. I also have a script which runs all the tests at once without the GUI. I run it right before going home. When I get to work the next morning I run a script which analyses the reports looking for errors. If there is an error somewhere, I run the corresponding test script with the GUI switched on to look at the waveforms.
Non-project mode not only allows me to run different tests simultaneously for the same sources, but also allows me to run multiple synthesis jobs for them.
I have used only this mode for more than two years and am absolutely happy with it. Highly recommended!
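As a rough sketch, a per-test script of that kind has this sort of shape. The directory layout, the testbench name and the choice of the stand-alone xvhdl/xelab/xsim tools here are just placeholders for illustration, not anyone's exact files:

  # run_tb_uart.tcl - per-test script, run with:  tclsh run_tb_uart.tcl
  set test tb_uart

  # each test gets its own scratch directory so parallel runs don't collide
  file mkdir work_$test
  cd work_$test

  # compile the shared sources plus this testbench, elaborate, then run
  # in batch mode (no GUI); all logs stay in the scratch directory
  exec xvhdl {*}[glob ../src/*.vhd] ../tb/$test.vhd >@stdout 2>@stderr
  exec xelab $test -s ${test}_sim -debug typical >@stdout 2>@stderr
  exec xsim ${test}_sim -runall >@stdout 2>@stderr

If a test fails, rerunning the same snapshot with -gui instead of -runall gives the waveform view.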
Reply to
Ilya Kalistru
Interesting. Vivado is what, Xilinx?
--

Rick C
Reply to
rickman
yes
Reply to
lasselangwadtchristensen
Yes. It is Xilinx Vivado.
Another important advantage of non-project mode is that it is fully compatible with source control systems. When you don't have projects, you don't have piles of junk files of unknown purpose that change every time you open a project or run a simulation. In non-project mode you have only HDL sources and tcl scripts. Therefore all information is stored in the source control system, and when you commit changes you commit only the changes you have made, not random changes to unknown project files.
In this situation working with IP cores is a bit trickier, but not much. Considering that you don't change IPs very often, it's not a problem at all.
I see that only a very small number of HDL designers know and use this mode. Maybe I should write an article about it. Where would it be appropriate to publish it?
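For what it's worth, the run-everything and morning-check scripts mentioned above are equally small tcl files living in the same repository. Roughly, and again only as an illustrative sketch with placeholder names and log patterns:

  # run_all.tcl - run every per-test script in batch, run with:  tclsh run_all.tcl
  foreach script [glob run_tb_*.tcl] {
      puts "=== running $script ==="
      # keep going even if one test blows up; just record the failure
      if {[catch {exec tclsh $script >@stdout 2>@stderr} err]} {
          puts "ERROR while running $script: $err"
      }
  }

  # check_reports.tcl - next morning, scan every log for anything suspicious
  foreach log [glob work_*/*.log] {
      set fh [open $log r]
      set text [read $fh]
      close $fh
      if {[regexp -nocase {error|fatal|failed} $text]} {
          puts "LOOK AT: $log"
      }
  }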
Reply to
Ilya Kalistru
Doesn't the tool still generate all the intermediate files? The Lattice tool (which uses Synplify for synthesis) creates a huge number of files that only the tools look at. They aren't really project files, they are various intermediate files. Living in the project main directory they really get in the way.
--

Rick C
Reply to
rickman
That would be useful; the project mode is initially appealing, but the splattered files and SCCS give me the jitters.
Publish it everywhere! Any blog and bulletin board you can find, not limited to those dedicated to Xilinx.
Reply to
Tom Gardner
Something similar is possible with Intel FPGA (Altera) Quartus. You need one tcl file for settings, and building is a few commands which we run from a Makefile.
All our builds run in continuous integration, which extracts logs and timing/area numbers. The bitfiles then get downloaded and booted on FPGA, then the test suite and benchmarks are run automatically to monitor performance. Numbers then come back to continuous integration for graphing.
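For concreteness, the settings file is roughly of this shape, run with quartus_sh -t settings.tcl. The device, entity and file names below are placeholders, not our real project:

  # settings.tcl - create the project and record every assignment in one place
  package require ::quartus::project

  project_new my_design -overwrite
  set_global_assignment -name FAMILY "Cyclone V"
  set_global_assignment -name DEVICE 5CEBA4F23C7
  set_global_assignment -name TOP_LEVEL_ENTITY my_design
  foreach f [glob ../src/*.sv] {
      set_global_assignment -name SYSTEMVERILOG_FILE $f
  }
  set_global_assignment -name SDC_FILE ../constr/my_design.sdc
  project_close

The Makefile then just chains the usual command-line tools (quartus_map, quartus_fit, quartus_asm, quartus_sta), and the CI job pulls the timing and utilisation numbers out of the report files afterwards.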
Theo
Reply to
Theo Markettos
