[cross-post] verification vs design

Hi everyone,

I've recently had to argue why it is not 'sane' to budget 500 hours of development against 200 of verification.

If you ask the FPGA developer, he'd say a factor of 2/3 has to be applied to the design effort to estimate verification (a figure I tend to agree with).

I'd like to give some grounds to those estimates, so I asked the FPGA group leader to compare this ratio across several completed projects. We usually collect lots of data on the amount and type of work we do every day, and this data could be used to measure the verification effort relative to the design effort.

His counter-argument is that it is difficult to compare projects due to their peculiarities, implying that there's very little we can learn from the past (which I obviously do not buy!).

To your knowledge, is there any source of trusted data that I can point at? Is there really a ratio that can be 'generally' applied?

Any comment/opinion/pointer is appreciated.

Al

p.s.: this thread is intentionally crossposted to comp.lang.vhdl and comp.arch.fpga. Please use the Followup-To field in order to avoid breaking the thread.

--
A: Because it messes up the order in which people normally read text. 
Q: Why is top-posting such a bad thing? 
A: Top-posting. 
Q: What is the most annoying thing on usenet and in e-mail?
Reply to
alb

There's a regular industry survey carried out by Mentor that might help (it's a blind study across all industries/users/countries)

formatting link

That study found that the percentage of FPGA project time spent on verification grew from a mean of 49% in 2007 to 56% in 2012, which indicates that more time is now spent (on average) on verification than on design! Obviously various factors need to be taken into account, such as design size and complexity, amount of reuse, etc., but a figure of 20-30% is low by industry standards and borders on wishful thinking (the survey also found that 67% of projects were late!)
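For scale, the survey mean can be turned around on the budget from the original post. A quick back-of-the-envelope sketch (the 500/200 hours are the figures from the original post; the 56% is the survey mean, so the exact numbers are illustrative only):

```python
# Back-of-the-envelope: what a 56% verification share (Mentor 2012
# survey mean) would imply for the same 700-hour total, versus the
# proposed 500 dev / 200 verification split.
total_hours = 500 + 200          # proposed budget from the original post
verif_share = 0.56               # survey mean fraction spent on verification
verif_hours = total_hours * verif_share
dev_hours = total_hours - verif_hours
print(round(verif_hours), round(dev_hours))  # 392 308
```

So under the survey average, the same total effort would be split roughly 308/392, almost the reverse of the proposed 500/200.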

Hope that's useful. Regards,

- Nigel

Reply to
Nigel Elliot

The only opinion I can offer is a cynical one: if the PM is crazy enough not to plan on adequate verification, then he's too crazy to listen to reason.

Make your point, but don't expect to be listened to this time around -- sometimes when you make these arguments the person who ends up listening and taking action is a bystander today, but a PM two years from now.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

I have always heard that one hour of development meant four hours of verification. That gives you 800 hours of verification against 200 of development.

Does your boss like respins? Because that is how you get respins.

John Eaton


Reply to
jt_eaton

Hi Nigel,

In article you wrote: []

thanks a lot for the pointer. Indeed the data are quite interesting, and it seems the 70/30 ratio is (on average) far from reality. It'd be more like 55/45, which is the same ratio our client has for software development (testing vs. coding).

It would be nice to understand what made the rest of the projects finish on time. Interestingly enough, ~40% of the verification effort is debugging (according to the same study, see part 6:

formatting link

Interestingly, the amount of time spent coding the testbench is ~20%, and a similar amount of time is needed for running the tests. I guess these metrics could be interesting if applied to our completed projects in order to point out the strengths and weaknesses of our flow.

Additionally, I believe the number of lines of code may be a valuable metric for the complexity of a project, so we can classify similar projects together and see where we stand.
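The classification idea above can be sketched very simply: bucket projects by lines of code and compare the verification/design hour ratio within each bucket. All project names, LOC counts, hours, and bucket thresholds below are invented for illustration:

```python
# Hypothetical sketch: bucket projects by LOC and compare the
# verification/design hour ratio within each bucket. Numbers invented.
projects = [
    {"name": "uart", "loc": 2_000,  "dev_h": 300,  "verif_h": 150},
    {"name": "dsp",  "loc": 15_000, "dev_h": 900,  "verif_h": 800},
    {"name": "pcie", "loc": 40_000, "dev_h": 2000, "verif_h": 2400},
]

def bucket(loc):
    # Arbitrary illustrative thresholds, not industry figures.
    return "small" if loc < 5_000 else "medium" if loc < 25_000 else "large"

for p in projects:
    ratio = p["verif_h"] / p["dev_h"]    # verification hours per design hour
    print(bucket(p["loc"]), p["name"], round(ratio, 2))
# small uart 0.5
# medium dsp 0.89
# large pcie 1.2
```

With real project data, a trend of the ratio growing with the LOC bucket (as in this made-up sample) would be exactly the kind of evidence the original question is after.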

Al

Reply to
alb

Hi Tim,

Tim Wescott wrote: []

Unfortunately it's not only about planning. Budgets are completely out of target (systematically!), and the main reason (at least from what I hear) is that if we budget more we won't get the project. Here in Switzerland salaries are expensive, as is manufacturing. This burden is hidden in the offer phase, but then jumps up in the development phase, where you realize it takes 50% (or more) more time or money to do what you promised to do.

I agree, sooner or later somebody will make a difference, if (s)he doesn't quit too soon. But even in that case (s)he will make the right decision and my small contribution will finally be rewarded ;-).

Al

Reply to
alb

Hi John,

jt_eaton wrote: []

a ratio of 20/80 in favour of verification seems a bit exaggerated, but I can understand that the variance might be important. This is one of the reasons why the analysis shown in the link posted is missing a piece of information.

When it comes to large spreads in the data, the mean is not representative anymore, or at least cannot be used to draw many conclusions. I'd say an additional effort could be made to select different 'populations' according to different types of metrics (lines of code, requirements changes, turnover, ...) and see if there's really any correlation.
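The point about the mean is easy to demonstrate with toy numbers (the per-project ratios below are invented): a single outlier project drags the mean of the verification/design ratio far from where most projects actually sit, while the median stays put.

```python
import statistics

# Toy illustration with invented verification/design ratios: one
# outlier project (4.0) pulls the mean well above the typical value,
# while the median remains representative of the bulk of the projects.
ratios = [0.4, 0.5, 0.6, 0.7, 4.0]
print(round(statistics.mean(ratios), 2), statistics.median(ratios))  # 1.24 0.6
```

Which is why reporting only a survey-wide mean, without a spread or a breakdown into populations, hides more than it reveals.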

The only effort made to differentiate the projects is by gate count, which does not necessarily equal complexity.

Al

Reply to
alb

I used to work at a company like that. Worse, top management practically insisted that it happen -- if a project manager came to them with a realistic schedule, they'd say "trim it down to XXX!", but when it came back trimmed, they'd start up the project while complaining that engineering always lied about schedules!

So it's not just a Swiss thing.

And things will get better. Then some new guy will tell the board a pack of lies, get hired, and it'll all be in the dumpster again.

Which is why I'm now an independent consultant.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

At some sort of user conference - over 10 years ago now I'm sure - an engineer gave a "project management" style presentation. Its main thesis was tracking a project's life through check-ins to the revision control system. As I recall, he measured two metrics: the number of diffs checked in, and the number of new files checked in.

The paper was purely a backward look at a completed project. But graphing these two metrics against time turned out to be a pretty good indicator of where the project stood: an asymptotic line approaching (but never reaching) zero.
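A minimal sketch of the metric described above, assuming the dates come from some VCS log (e.g. the output of `git log --pretty=%ad`, although the original talk predates git): bin commit dates into ISO weeks and count them, then plot the counts against time.

```python
from collections import Counter
from datetime import date

# Hypothetical sketch: bin commit dates into (ISO year, ISO week) and
# count check-ins per week. Dates would come from the VCS log; here a
# small invented sample is passed in directly.
def commits_per_week(dates):
    return Counter((d.isocalendar()[0], d.isocalendar()[1]) for d in dates)

sample = [date(2014, 1, 6), date(2014, 1, 8), date(2014, 1, 20)]
print(commits_per_week(sample))  # Counter({(2014, 2): 2, (2014, 4): 1})
```

Plotting the weekly counts over the life of a project would reproduce the talk's graph: spikes at bug-fix bursts, and (per the anecdote below) no visible bump after pep talks.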

He highlighted some events on the graph: spikes when a big bug was found and fixed, and, more interesting to me, the times of the management 'rah-rah' speeches. You know, "We need to get this stuff done, put in the hours, it's crunch time..."

Those dates showed no noticeable change in the graphs' progression...

Thought it was funny, and interesting.

I don't have a reference and the details are murky, but it was a fun presentation for those (engineers) in the audience, on a normally (IMHO) dull subject for engineers. Lots of head-nodding from the audience.

Regards,

Mark

Reply to
Mark Curry
