And there is a *cost* associated with each "deviation" we encounter between the *expected* (and/or *documented*) behavior of those "chips" and their *actual* behavior. I suspect anyone who's had a development schedule ruined because of a vendor's screwup is *really* hesitant to climb back into bed with that same vendor!
There's a big difference between hardware and software.
First, most folks *can't* grow their own silicon. The bar is too high to acquire that set of skills/tools/personnel. In terms of *software*, it's HARDER to get a license to sell real estate (which is one of those "professions" that is "open to everyone, regardless of skills"... kinda like "used car salesman") than it is to call yourself a "programmer". I.e., for as little as $20 you can get a 2-year-old PC, a friend's old copy of Windows N-2, a "free" compiler and a "Learn C in 24 Hours" book. Now, you're a "programmer".
Be honest, how many of your colleagues are formally qualified to be writing code? How many just "picked it up" without any real "education"? (at school we used to laugh at all the physics majors who ended up writing code for a living... never having taken any of the associated courseware -- but, needing a *paycheck* from *something*...)
[granted, you still have to get someone to *hire* you... or, maybe just WRITE SHAREWARE and hope someone incorporates it into their product and sends you a royalty check?? :> ]

Second, there are lots of companies providing reliable hardware. If company A screws you, you limp through the project and then abandon company A on your next design! 30-40 years ago, it was a "necessary prerequisite" to announce multiple sources when you brought a new processor to the market. People *didn't* want to be at the mercy of a sole source supplier (pricing, availability, foundry problems, etc.). How many vendors of "reliable, ALL-PURPOSE software components" can you name?
Third, the flip side of the "low entry cost" argument applies. While it's easy for John Q. Public to call himself a programmer, it's also a lot easier for Joe Professional to *write* a piece of reliable code, "in house". You're not at the mercy of some vendor to provide you with a solution (or, an *alternative*, but WORKING, solution) for your "software componentry".
Fourth, it's a LOT harder to test software than hardware. The fact that so many software bugs get *through* testing attests to this. Hardware can be "put through its paces" in a lot more controlled way (e.g., by the vendor). There's a lot less "state" that affects the performance of the hardware.
Fifth, it's a lot easier to come up with (an accepted) "general purpose" hardware solution/(sub)system than a similarly "general purpose" software solution. E.g., I was burning a DVD-ROM in Nero earlier today with files named:
- foo
- bar
- baz ...
In the right "file explorer" pane, these files were listed in
*numerical* order. In the *left* "DVD content" pane, they were listed using a L-R alpha sort -- so, 100 appeared after 10 instead of after 99. IN THE SAME APPLICATION. Sort() is sort() is sort(), right? Obviously, there were two sorts in use and neither agreed with the other. Which is the *right* "sort" for you? Does your 3rd party library have a different notion?I've seen file sizes reported in a Windows Explorer window and an IE window that used different notions of how to round. "Hmmm... is this 17KB file really the same as this *other*
18KB file that has the same name and timestamp?"Will the GUI you use for the first part of your application use the same conventions as the GUI you use in some *other* part? Will you ever *notice* the discrepancies?
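To see how easily that happens, here's a contrived sketch -- the file names are made up, and neither comparator is necessarily what either pane actually uses -- of two perfectly reasonable sort()s disagreeing about the same list:

    /* Two "obvious" comparators; both defensible, both different. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* one pane: plain left-to-right character compare */
    static int lexical(const void *a, const void *b)
    {
        return strcmp(*(const char * const *)a, *(const char * const *)b);
    }

    /* the other pane: treat the names as numbers */
    static int numeric(const void *a, const void *b)
    {
        long x = strtol(*(const char * const *)a, NULL, 10);
        long y = strtol(*(const char * const *)b, NULL, 10);
        return (x > y) - (x < y);
    }

    int main(void)
    {
        const char *files[] = { "9", "10", "99", "100" };
        size_t n = sizeof files / sizeof files[0];
        size_t i;

        qsort(files, n, sizeof files[0], lexical);
        for (i = 0; i < n; i++) printf("%s ", files[i]);  /* 10 100 9 99 */
        printf("\n");

        qsort(files, n, sizeof files[0], numeric);
        for (i = 0; i < n; i++) printf("%s ", files[i]);  /* 9 10 99 100 */
        printf("\n");
        return 0;
    }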
Sixth, we (psychologically) more readily accept/embrace the *requirements* that a particular hardware solution imposes. "It needs a 16b data bus". "It only works with NAND flash." etc. But, for software solutions, we think we can magically *kludge* something that will "adapt" what we have to what we *need* instead of *fixing* (rewriting) it: "we'll take the existing memory/buffer management software designed for a 64KB memory space and *extend* it to handle our 16MB memory space by creating lots of 64K *pools* (each managed with the old management software) and we'll just add some glue logic on top to keep track of which *pool* the buffer came from!" (there's a sketch of that sort of "glue" after the next point) Would you use a *real* printf (OR ANY LIBRARY THAT RELIED ON THE PRESENCE OF SAME) in a small PIC deployment? You'd probably be miffed to discover the printf was being used *only* in:

    for (finger = 0; finger < 10; finger++) {
        printf("This is finger #%d\n", finger);
    }

Seventh, *who* assumes responsibility for testing (and fixing!) the third party software? How keen are *you* to do that for somebody else's code? Will there be any unspoken pressure on
*where* the fix gets made -- i.e., in the third party code or in an "adapter" that you develop in *your* code? When it comes to hardware, the folks involved in testing it are usually clearly identified. And, the range of solutions they have at their disposal is easily quantified: can the board be patched or redesigned? Or, is there something fundamentally flawed in the implementation that needs a complete rethink? (wanna bet that this results in major code rewrites when the problem is a software one... instead of abandoning the "component")
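Going back to that 64KB-pool "glue" from the sixth point, here's roughly the sort of wrapper I mean. (The old_pool_create()/old_alloc()/old_owns()/old_free() names are stand-ins for a hypothetical legacy 64KB-only allocator, not any real library.) Note the linear hunt for the owning pool on every free -- exactly the kind of "adapting" that should have been a rewrite:

    #include <stddef.h>

    #define NPOOLS  256                     /* 256 * 64KB = 16MB */

    extern void *old_pool_create(size_t);   /* legacy: manages <= 64KB */
    extern void *old_alloc(void *pool, size_t len);
    extern int   old_owns(void *pool, void *buf);
    extern void  old_free(void *pool, void *buf);

    static void *pool[NPOOLS];

    void glue_init(void)
    {
        for (int i = 0; i < NPOOLS; i++)
            pool[i] = old_pool_create(64 * 1024u);
    }

    void *glue_alloc(size_t len)
    {
        /* walk the pools until one can satisfy the request */
        for (int i = 0; i < NPOOLS; i++) {
            void *buf = old_alloc(pool[i], len);
            if (buf != NULL)
                return buf;
        }
        return NULL;
    }

    void glue_free(void *buf)
    {
        /* the "glue logic": figure out which pool the buffer came from */
        for (int i = 0; i < NPOOLS; i++) {
            if (old_owns(pool[i], buf)) {
                old_free(pool[i], buf);
                return;
            }
        }
    }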
Both are examples of "reusable". It's just that it is typically easier to "fit" code from that existing solution to the "new" problem -- the problems are similar, the platforms (tend to be) similar, the design constraints similar, the personnel similar, etc.
You wouldn't, for example, want to take the memory allocator out of my "network speaker" and use it in a generic application. It's a bad fit. OTOH, a generic memory allocator would give abysmal runtime performance in my application.
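To give a flavor of *why* (details invented here, not the actual speaker code): if everything in the audio path is a fixed-size frame, the allocator degenerates to pushing/popping a free list -- O(1), no fragmentation, no searching. Useless for arbitrary sizes and lifetimes; but a general-purpose malloc() makes this application pay for machinery it never needs:

    #include <stddef.h>

    #define FRAME_BYTES  1024
    #define NFRAMES      64

    static union frame {
        union frame   *next;                /* link while on the free list */
        unsigned char  payload[FRAME_BYTES];
    } frames[NFRAMES], *freelist;

    void frame_pool_init(void)
    {
        for (int i = 0; i < NFRAMES; i++) {
            frames[i].next = freelist;
            freelist = &frames[i];
        }
    }

    void *frame_alloc(void)                 /* O(1): pop the free list */
    {
        union frame *f = freelist;
        if (f != NULL)
            freelist = f->next;
        return f;
    }

    void frame_free(void *p)                /* O(1): push it back */
    {
        union frame *f = p;
        f->next = freelist;
        freelist = f;
    }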
IME, you have two fundamental "problems" that rear their heads when it comes to reuse:
- the guy (boss?) who thinks the problem can be greatly simplified by reusing existing code (while being clueless as to the actual details involved)
- the guy (gung-ho programmer?) who fails to see *any* similarity with existing solutions (NIH) and believes that only he/she can "save the day".
My approach is to reuse *designs* (which *could* result in lots of copy/paste from existing *codebases*) but fit them to the specific needs of the application. I have no desire to write yet another "sort" from scratch. I've long since forgotten the formal names for each of the various sorting techniques (bubble, shell, insertion, etc.). *But*, I will know (remember) that some other project had data organized in a manner similar to "this one". And, I'll go see which sorting algorithm was used there. And, tweak it to fit the needs of *this* application.
I.e., the "engineering" gets reused and I just have to do some "tidying up" to make it work right in this new use.
"Big companies" that can afford to "specialize" particular staff can benefit from this sort of approach by having "experts" in each "application sub-domain". E.g., an OS guy, a math guy, an I/O guy, a UI guy, etc. These people (resources) can accumulate knowledge as to the costs and benefits of the various approaches that they have used over the years (or, had to *maintain* on behalf of the company). So, they can be called upon to offer advice as to appropriate solutions (and the costs/rewards thereof) to staff making implementation decisions.