Software's evil

An extract from "Embedded systems dictionary" by Jack Ganssle and Michael Barr:

"Unfortunately, as hardware design more and more resembles software development, the hardware inherits all of software's evils: late delivery, bugs and misinterpreted specs."

If that is the case, the problem of software doesn't lie in the intrinsic properties of software components but in the development process.

Hardware blocks developed as parts of FPGAs have the same functional characteristics as equivalent discrete hardware parts. If they suffer from software's evils, this is only due to the development process.

Is it possible to improve the development process of hardware blocks so that they don't suffer from these evils, and would it be possible to apply those techniques to software? Is it only a problem of costs?

Reply to
Lanarcam

I think it is simpler than that. As designs become more complex, you inevitably get more late deliveries, bugs and misinterpreted specs. This applies equally well to software and hardware.

--
Pertti
Reply to
Pertti Kellomäki

I think this is only part of the problem. A microcontroller is a complex design and generally without bugs. If a design is complex, you can decompose it into manageable units with well specified interfaces. This requires time and rigour and is not compatible with tight schedules and scarce resources.

Reply to
Lanarcam

Volumes and volumes have been written on this. But in general, consider that the tools available for software verification are not as mature as, and CONSIDERABLY more labor-intensive than, the tools available for hardware verification. Further consider that the costs to be invested up front in verifying something "hard" (silicon, spacecraft, etc.) vs. verifying something "soft" (which can be field-updated for free) represent very different value propositions.

Consider also that a company designing a microcontroller is designing a general-purpose device that must be precisely characterized in order to be saleable. Would you buy a micro if the datasheet said that every parameter was TBD? A product that uses the microcontroller, on the other hand, is going to have a limited range of use and will not, as a rule, be as completely characterized - in fact it's unlikely to have any characterization data at all outside the intended use cases.

Reply to
larwe

"Lanarcam" wrote in message news:47eb7ad1$0$27901$ snipped-for-privacy@news.free.fr...

You are right about that, though a 100% guarantee is only possible if you can manage to simulate every possible state the system can be in. That may be feasible for some simple combinatorial logic circuit, but it very rapidly becomes impossible as your design grows more complex.

And testing only manageable units is not always good enough either. Suppose one of them has an error that does not show up when you examine it, because it only alters some bit somewhere else in memory (while all the "replies" you get from your unit when testing it are exactly what you expect them to be). However, when you run the system as a whole, that bit might be part of a variable in another manageable unit, one that also turned out to be OK when you tested it in isolation. So you would have to test the units separately and together, creating every possible state your system might ever stumble upon, and that is impossible.

I do not know if it is true in all countries, but there are many countries in which certain electronic devices, like the anaesthesiological equipment used to keep a patient asleep during an operation, are not allowed to have software in the circuits that control vital functions.

This does not change the fact that many errors remain in software because testing is not done as thoroughly as it could be.

Yours sincerely, Rene

Reply to
Rene

One of the biggest differences between verifying software and verifying hardware is that software as a rule deals with dynamically allocated, complex data structures. Hardware, while complicated, is at least fixed.

One should also keep in mind the level of granularity. Verifying a microprocessor basically means verifying that if the device starts from a state within its specs and executes one instruction, then the processor ends up in the state that the spec prescribes. I have worked in formal verification myself, so I don't mean to imply that this is a trivial task. However, the software equivalent would be to verify that each function satisfies its postcondition if the arguments satisfy the precondition. But the properties one really wants to verify are much more abstract, such as "does this piece of software land my plane safely?".

--
Pertti
Reply to
Pertti Kellomäki

If the same quality of people were working on them as on the discrete parts, and they were subjected to equivalent testing and field evaluation by as many customers, there would still be more problems, because there are more variables between the VHDL and the final functionality.

I think it's mostly a matter of costs (including time to market). If you're willing to do a spacecraft level of documentation and design, then it can be pretty much perfect the first time it escapes out the door, but it will cost orders of magnitude more money and take many times longer.

In particular, the cost of doing a very high quality design seems to increase very rapidly with complexity, maybe as the square or the cube of complexity. Something 10x-50x more complex than a program that could be created bug-free in a few months by a single person might take 100-500 people and 3-10 years.

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Reply to
Spehro Pefhany

This is certainly an oft-discussed subject, but the fact is that, imho, the state of software development has not really improved over the years; it is probably worse now than it was a few years ago, due to more complexity, tighter schedules, etc. The software crisis is still with us. The fact that programmable hardware is going the same way can be an opportunity to discover new causes, even if solutions are far ahead.

There is also the fact that hardware engineers who knew how to do it right the first time now meet, with programmable components, the same sort of problems as software developers do. They could certainly tell what has changed in their process.

This can indeed be a problem if the product is later used for other applications, as reusable components are. The point is how to produce truly reusable components, fully characterized and free of side effects. Another point is deciding when to invest in producing reusable components.

Reply to
Lanarcam

This is indeed impossible to manage unless you find a way to fully test "manageable units" and make sure that they are free of side effects. Needless to say I don't have the solution.

Some industries are reluctant to admit software into their safety products, for instance railway signalling relays.

Reply to
Lanarcam

It certainly requires time to do a high quality design, but there are (must be) ways of separating parts so that the cost is less than what it is for the design as a single piece.

Reply to
Lanarcam

[...]

Yes, this is mainly a problem of resources. I put it this way: to be done perfectly, just about every complex project requires three times more time and effort than is allowed to be allocated for it.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

In other words, how to balance costs against potential damages, or simply short-term costs against long-term costs.

Reply to
Lanarcam

LOL

Since the days of the i286, all microcontrollers have been laden with bugs. Look into the silicon errata sheets.

Only the simple designs mass produced for many years can be relatively bug free.

Every textbook says that.

Consequently, there is no economical demand for the things to be perfect; good enough is good enough.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

It is enough if this piece of software lands the plane safely in 99.999% of cases. There is no point in further improvement if the insurance premiums and the other possible losses are already lower than the cost of the development and testing.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

There are multiple "evils" in development:

- schedules: just note how frequently cell phone models come out nowadays

- requirements: as systems can do more, customers expect the next generation to do even more

- problem complexity: some problems now being automated may be unsolvable

- problem size, or the tyranny of the numbers: processors may double in speed, but problem size may be growing faster, e.g. graphics processing, where doubling the resolution quadruples the data size. So any problem whose solution algorithm is worse than O(n) suffers.

These don't even touch the development issues: just communicating between developers changes as projects scale up (single developer -> small team -> large team -> multiple large teams), and more complex problems are tackled with development tools essentially the same as 30 years ago.

So it is not the development process alone. Managers need to know that all these factors (and more) influence the process.

I don't know when a breakthrough will come, but I don't think we have seen it yet. And it may not come as a singular event. We may muddle through and improve our processes, our solutions, and our tools in varying steps. Ed

Reply to
Ed Prochak

I tend to agree with all of that, but somehow they managed to build a "complex" rocket in 1969, which involved communication among many teams at a time when tools were scarce. Applying systems building techniques would certainly help.

Reply to
Lanarcam

In message , Vladimir Vassilevsky writes

In the real world it all comes down to money. Insurance premiums and liability.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
Reply to
Chris H

It's possible to improve the development process of software so it doesn't suffer from these evils. Unfortunately, the improvements have to have support of nearly everyone in the whole organization, and certainly majority support at every layer. This means that any manager in the chain from the developer up to the Big Boss can cut the whole thing off at the knees by choosing to believe an optimist who promises a shorter schedule.

I've seen big complex projects work, but I've seen it far less often than I've seen them fail. And I've seen them fail due to individual failures at just about any level.

--
Tim Wescott
Control systems and communications consulting
Reply to
Tim Wescott

My thought is that the first step is to improve the specification process. When was the last time you got a software specification that was actually complete and didn't require revisions?

Customer/Marketeer: It should have a two line ascii display.

Engineer: What should it say on this display?

C/M: You know, a menu or something? How long do you think it will take?

Apologies for the oversimplified example, but we've all been there.

Scott

Reply to
Not Really Me

True, but it pays not to be too cynical about the calculation. Obviously there are the difficulties of valuing damage to reputations and other 'soft' issues.

IANAL, but my understanding is that if it can be shown in court that you were aware of a substantial risk, but opted not to moderate that risk on the basis that it would simply be cheaper to settle any cases that arose as a consequence, there is a real possibility of an exemplary damages award.

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw
