Evaluating static analysis and dynamic analysis tools for C/C++

Dear all, what is the general norm people use, in your experience, to evaluate static code analysis and dynamic code analysis tools? I am confused about which tool to choose, given how many tools are available on the net. I believe the following are the criteria to test the usefulness of these tools:

1) User friendliness
2) Ability to detect bugs
3) Ability to enforce coding guidelines
4) Ability to generate user-friendly reports
5) Speed of detection
6) Should not report a lot of false defects
7) Easily customisable

Are there any test suites available to evaluate these analysis tools? How do the people who use them evaluate them? Are there any test programs available, e.g. a sample C program that can be run through an analysis tool to see how it reports?

Please let me know your thoughts. Are there any benchmarks available for these analysis tools?

Looking forward to your inputs, and thanks in advance. Regards, s.subbarayan

Reply to
ssubbarayan

Regarding static analysis tools:

The NIST SAMATE project has test suites for C, C++, Java, and more. You can reach it here:

formatting link

These are syntactic test cases, so they do not properly represent the results a tool will give on your code base; they just give you an idea of the tool's weakness coverage. Tools should also provide a list of the weaknesses they support; you can make sense of that list with the CWE -

formatting link

As for criteria to select a tool, I think it depends a lot on how you plan to use the tool. For example, if only a few people (software security folks) use the tool, then usability shouldn't be such a big deal; it is if many developers will use it.

Otherwise, I would recommend a few things:

- proper detection, with a low false-positive rate, on selected test cases

- take some of your own code (restrict the scope of the scan), compare the tools' results, and look for false negatives and false positives on your code (tools are sensitive to the code constructs and APIs used in the code)

- customization (especially if you see a high FP/FN rate) might be considered important too; again, it depends on how you want to use the tool

Romain

Reply to
romain.gaucher

On Thu, 29 Oct 2009 13:07:55 +0100, ssubbarayan wrote:

Most companies that evaluate such a tool use their own code base, or a representative, well-known portion of it.

1) You cannot benchmark user friendliness.
2) There is no list covering a respectable portion of all known bugs.
3) Coding guidelines have test suites, but interpretations sometimes differ.
4) You cannot benchmark report user-friendliness either.
5) Speed of detection depends on code structure and style, and is nearly irrelevant for overnight runs.
6) There is no list covering a respectable portion of all known false positives for all coding guidelines.
7) You cannot benchmark user experience.
--
Made with Opera's revolutionary e-mail program:  
http://www.opera.com/mail/
(remove the obvious prefix to reply by mail)
Reply to
Boudewijn Dijkstra

There isn't one. People ask different things of such tools, and thus value their properties differently. One person's strictly necessary feature is another's gratuitous gimmick. One person's show-stopping limitation is the next one's barely noticeable glitch.

As the saying goes: is extremely user-friendly. It's just a little picky who it makes friends with.

Depends on what kinds of bugs there are for the detecting.

Depends on the coding guideline in question.

There's no such thing as a user-friendly report.

Depends on usage pattern. Unless the processing time runs considerably north of one whole day, usage patterns will adapt to the tool, once its usage has been prescribed.

See answer to 2).

Depending on what you're trying to do, there's a point to be made that the tool should not _need_ any "customization": it should find any and all problems it can, period.

I rather much doubt it.

Initially, by throwing at it a significant amount of the worst, the best, and the most typical code they can find. The real evaluation only comes from long-term actual usage. The proof of the pudding, as they say, is in the eating.

Reply to
Hans-Bernhard Bröker


Hi, I love your quote: "As the saying goes: is extremely user-friendly. It's just a little picky who it makes friends with." Thanks for making me smile.

I realise, as others have said, that it depends on the targeted audience, and no single tool can satisfy everyone.

Regards, s.subbarayan

Reply to
ssubbarayan

First question: do you lint your code?

Some of the new static analysis tools are wondrous things that detect subtle bugs in convoluted code. But they tend to be expensive, require extensive setup and tuning, and can take hours to run.

Lint, on the other hand, is a wondrous thing that detects subtle bugs in convoluted code, and it is cheap, easy to use, and faster than your compiler.

Reply to
Dave Hansen
