[OT] I got a JOB!!!

Er, that's not TDD, that's Unit Tests. To be overly simplistic, TDD is a strategy for generating Unit Tests.

Strong typing is very beneficial, but is completely insufficient. As one of an infinite number of trivial examples, consider testing for X>Y when you should be testing for Y>X.
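
To make that concrete, here's a minimal C sketch (max_of() and its test are made up for illustration): the comparison is written backwards, the test encodes the same backwards expectation, everything type-checks, and the test passes.

#include <assert.h>

/* Bug: the comparison is inverted, so this returns the *smaller* value.
 * It type-checks perfectly well. */
static int max_of(int x, int y)
{
    return (x < y) ? x : y;
}

/* The test was written from the same misunderstanding, so it encodes
 * the same inverted expectation -- and passes. */
static void test_max_of(void)
{
    assert(max_of(2, 7) == 2);
}

int main(void)
{
    test_max_of();
    return 0;
}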

Reply to
Tom Gardner

The simple point to bear in mind is that the results of TDD are only as good as the quality of the tests. Test the wrong/unimportant thing, or don't test important behaviour, and the outcome can be "suboptimal".

That's not a difficult point (to put it mildly!), but it is horrifying how often it is ignored by zealots and/or not appreciated by the inexperienced.

The best defense is, to quote one of the two mottoes worth a damn, "Think". No change there, then!

Reply to
Tom Gardner

Good code, in the best languages, can be read like a spec. When that happens, your code and your test are the same thing, expressed in the same way. Nothing is achieved by writing it twice.

If you got the code wrong, you'll get the test wrong too. That's what I mean by "another way to describe" expected behaviour.

Reply to
Clifford Heath

In article , snipped-for-privacy@blueyonder.co.uk says...

The Titanic sank, but I bet nearly all of its individual parts passed their unit tests.

Or the video I saw on Dev Humor a while back, of a sliding door with the locking bolt fitted the wrong way round. Each part passed its unit tests.

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk 
    PC Services 
 Logic Gate Education 
     Timing Diagram Font 
    For those web sites you hate
Reply to
Paul

Sigh.

Consider a spec such as "95th percentile latency of less than 10ms". Good luck expressing that in your code; testing it is difficult enough.
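
For what it's worth, a test for that spec ends up looking something like the C sketch below (collect_latency_us() is a stand-in for a real measurement probe, and the sample count is arbitrary) - and note that one passing run still proves nothing about the tail behaviour in general:

#include <assert.h>
#include <stdlib.h>

#define SAMPLES 1000

/* Stand-in for a real measurement hook; here it just fabricates
 * sub-8ms data so the sketch compiles, runs, and passes. */
static unsigned collect_latency_us(void)
{
    return (unsigned)(rand() % 8000);
}

static int cmp_unsigned(const void *a, const void *b)
{
    unsigned ua = *(const unsigned *)a, ub = *(const unsigned *)b;
    return (ua > ub) - (ua < ub);
}

static void test_p95_latency(void)
{
    unsigned lat[SAMPLES];

    for (int i = 0; i < SAMPLES; i++)
        lat[i] = collect_latency_us();

    qsort(lat, SAMPLES, sizeof lat[0], cmp_unsigned);

    /* 95th percentile: 95% of samples fall at or below this value */
    unsigned p95 = lat[(SAMPLES * 95) / 100];
    assert(p95 < 10000);   /* the spec: less than 10 ms */
}

int main(void)
{
    test_p95_latency();
    return 0;
}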

More generally, consider that specifications normally deal with what needs to be achieved, and shouldn't specify how it is to be achieved. That is particularly apparent in hardware/software/mechanical systems, where the required behaviour could be implemented in discrete transistors, HDL, software, or sheets of metal.

Having pointed that out, I know what you are trying to say and it is worth achieving. But in the real world it is never that simple.

Not necessarily. I suggest you read comp.risks for many many many examples where your presumptions are too simplistic in the real world.

I get the feeling your experience in this area is with academic problems - which are valuable pedagogical examples, but no more.

Reply to
Tom Gardner

Yup.

None of this is difficult, but it does seem to escape some people :(

Reply to
Tom Gardner

We had this discussion before. I don't want to do it again.

I personally wrote most of a million lines of code which performs daily core management functions on over ten million enterprise computers, including almost all the computers used in the armed forces of three OECD nations, as well as a number of banks, oil companies, national postal services, etc.

I'm well aware of the requirements that cannot even be expressed, let alone tested. I'm also well aware of the general state of ignorance about what *can* be achieved.

Any more ignorant accusations?

Reply to
Clifford Heath

Congratulations Tim. We'll have to exchange trade secrets someday...

BTW, does PS use a specific unit test framework? Have you used it yet? How do you like it?

--
Randy Yates, Embedded Firmware Developer 
Garner Underground, Inc. 
866-260-9040, x3901 
http://www.garnerundergroundinc.com
Reply to
Randy Yates

Some of the classic videos

formatting link

and

formatting link

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk 
    PC Services 
 Logic Gate Education 
     Timing Diagram Font 
    For those web sites you hate
Reply to
Paul

I start next week, so I don't know and no. It's my former coworker from FLIR who's pushing TDD, and the manager who hired me is kind of warily standing on the sidelines going "what's the deal here?"

The guy pushing it is extremely smart and capable, so whatever it is it's probably good.

--
www.wescottdesign.com
Reply to
Tim Wescott

Another stumbling block can be getting people to consider the definition of "unit" in a unit test. Too frequently they equate "unit" with a single class or method.

Sometimes they have difficulty accepting that a unit can be a subsystem - they think that /can't/ be unit testing because it is "integration testing".

If they argue, get them to define what they mean by a "unit test", and then point out that actually their "unit test" isn't testing a unit. It is testing the integration of their code and library code (and compiler and runtime).

The corollary is that TDD mentality and technology can - and often should - be applied to integration testing. All you have to do is increase the granularity of the UUT.
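
To illustrate, the sketch below (in C, with made-up fifo_* names) treats a whole FIFO module as the unit: the test states observable behaviour through the public interface and says nothing about the internals.

#include <assert.h>
#include <stdbool.h>

#define FIFO_SIZE 8

typedef struct {
    unsigned char buf[FIFO_SIZE];
    unsigned head, tail, count;
} fifo_t;

static void fifo_init(fifo_t *f) { f->head = f->tail = f->count = 0; }

static bool fifo_put(fifo_t *f, unsigned char c)
{
    if (f->count == FIFO_SIZE) return false;
    f->buf[f->head] = c;
    f->head = (f->head + 1) % FIFO_SIZE;
    f->count++;
    return true;
}

static bool fifo_get(fifo_t *f, unsigned char *c)
{
    if (f->count == 0) return false;
    *c = f->buf[f->tail];
    f->tail = (f->tail + 1) % FIFO_SIZE;
    f->count--;
    return true;
}

/* The "unit" here is the whole module: the test asserts observable
 * behaviour (emptiness, fullness, FIFO order), not implementation. */
static void test_fifo_subsystem(void)
{
    fifo_t f;
    unsigned char c;

    fifo_init(&f);
    assert(!fifo_get(&f, &c));        /* empty FIFO yields nothing */

    for (unsigned char i = 0; i < FIFO_SIZE; i++)
        assert(fifo_put(&f, i));
    assert(!fifo_put(&f, 99));        /* full FIFO rejects input */

    for (unsigned char i = 0; i < FIFO_SIZE; i++) {
        assert(fifo_get(&f, &c));
        assert(c == i);               /* first in, first out */
    }
}

int main(void)
{
    test_fifo_subsystem();
    return 0;
}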

Reply to
Tom Gardner

:)

Of course there's a good argument that much (most?) "integration testing" is really just "unit testing" with larger units.

People can get too stuck on the specific definition (of unit) they have been taught.

Reply to
Tom Gardner

I expect you'll find it is the codification of much of the development mentality and many of the practices that you have been using for a long time.

Be cautious about how TDD applies to bottom-up design (i.e. finding things that work and clagging them together) vs. top-down design. TDD works naturally with top-down design, where all the yet-to-be-implemented parts are well understood and feasible.

Anybody that /thinks/ will realise that sometimes it is beneficial to do a "spike investigation" to quickly validate key concepts from top-to-bottom, and then to use that experience to do it "properly" using full-blown TDD.

Reply to
Tom Gardner

It's not magic. I've been geeking out on the COSMAC 1802 lately, because it was the first microprocessor I ever owned (I had an ELF-II kit). The user's manual has an entire chapter extolling the virtue of SUBROUTINES (ooh, ahh) and how to implement them. It's quite gushy about how using subroutines makes your code better. And yet, I've worked on lots of crappy code that has subroutines.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

:)

The 1802's implementation of subroutines was, um, quirky to the point of being obtuse.

I hand-built my first computer using a 6800, after having thought long and hard about the 1802 and 8080. It was a mess, but it worked, I learned a heck of a lot, and prospective employers were duly impressed.

Reply to
Tom Gardner

The 1802 is neither a CISC processor nor a RISC processor -- it's a NHISC processor -- "Never Had Instruction Set Computer".

I wish I had the chops to organize a contest -- I think an annual "build the fastest 1802" contest would be fun to be involved in. Imagine what you could do if the only basic rule was that it had to execute 1802 machine code faithfully, with no constraints on how much happened per clock cycle. Ditch TDA and TDB, keep the I/O command lines, Q, and the flags, and go to town. Prefetch, pipelines, caches, parallel execution, predictive branching, everything -- all with that crazy 1802 instruction set.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

:)

I hate all those with a vengeance, since they prevent hard real-time guarantees in software.

I'm currently experimenting with a /small/ XMOS device which deliberately avoids all of those techniques so that it can guarantee timing. So far I've been able to get the /software/ to reliably count the edges on two 20Mb/s input pins, process the results and simultaneously shove them up a USB link to a host PC.

Now I've got to understand the algorithms in reciprocal and continuous timestamping frequency counters :)
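
As far as I currently understand it, the core arithmetic of a reciprocal counter is simple: count a whole number of input edges, and over the same interval count reference-clock ticks, so the resolution comes from the reference clock rather than from a fixed gate time. A C sketch of just the arithmetic (the 100MHz reference and the numbers are assumptions; all the real difficulty is in capturing the timestamps):

#include <stdint.h>
#include <stdio.h>

#define F_REF_HZ 100000000ULL   /* assumed 100 MHz reference clock */

/* frequency = input edges / elapsed time,
 * where elapsed time = reference ticks / reference frequency */
static double reciprocal_freq_hz(uint64_t input_edges, uint64_t ref_ticks)
{
    return (double)input_edges * (double)F_REF_HZ / (double)ref_ticks;
}

int main(void)
{
    /* e.g. 1234567 input edges seen during 100000000 reference ticks
     * (exactly one second at 100 MHz) -> prints 1234567.000000 Hz */
    printf("%.6f Hz\n", reciprocal_freq_hz(1234567, 100000000));
    return 0;
}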

Reply to
Tom Gardner

Oops: 50Mb/s, i.e. 100Mb/s total, and I might be able to get it to 100Mb/s per input.

Reply to
Tom Gardner

Some guy has Verilog code for an 1802 in which he claims a 60MHz clock, one clock per instruction (or perhaps fetch). That would be deterministic, and fast by some measures.

All the modern pipeline/predict/prefetch whiz-bang wouldn't prevent hard real time if only the processor manufacturers would publish the absolute maximum time it takes to execute any possible instruction, or (better) provide tools for finding the maximum time-from-interrupt for any given chunk of code. Then you could just add up all the critical stuff and make sure it works.

In my experience there isn't THAT much variation -- you just need to know how much variation to allow for to meet hard real time criteria.
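
In that world the whole schedulability argument for a critical ISR path can be a compile-time sum, something like this C11 sketch (all the figures are invented; the point is only the shape of the check):

#include <assert.h>

#define WCET_IRQ_ENTRY_NS      400   /* invented: max time-from-interrupt */
#define WCET_READ_SENSOR_NS   1200   /* invented: worst case for the read */
#define WCET_CONTROL_STEP_NS  2500   /* invented: worst case for the update */
#define WCET_IRQ_EXIT_NS       300

#define DEADLINE_NS          10000   /* the hard real-time budget */

/* If the published worst cases really are worst cases, adding them up
 * and comparing against the deadline is the entire argument. */
static_assert(WCET_IRQ_ENTRY_NS + WCET_READ_SENSOR_NS +
              WCET_CONTROL_STEP_NS + WCET_IRQ_EXIT_NS <= DEADLINE_NS,
              "worst-case ISR path exceeds the hard deadline");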

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

It complicates things enormously, doubly so when all the caches are involved. ISTR someone measuring a 486 with its tiny caches, and finding the mean:max ISR time was somewhere around 1:5. I expect I've still got a paper copy, /somewhere/.

The XMOS tools claim to indicate the exact loop/function times, assuming input is available and output can be delivered (Occam channel semantics).

The event-driven multicore hardware+software co-implementation looks rather nice. And the I/O is pleasantly high-level too: do I/O on a specific clock cycle, wait until there's a change, etc. It makes high-speed bit-bashing in software tractable.

Reply to
Tom Gardner
