There's nothing wrong with component reuse. The problem lies with components that weren't designed with security, reliability, robustness, accuracy, etc. in mind. Would you use a semiconductor in a particular set of environmental conditions if you didn't KNOW that the device was designed for use *in* those conditions? At the very least, you'd look at the datasheet and see what "typ" performance is specified for your conditions. If you're a more robust developer, you'd also want to see worst case numbers (barring that, something that quantifies the distribution of values
*around* "typ").Show me *any* of those parameters for *any* piece of software! :>
Reliability and security are different animals entirely. An application can be VERY reliable -- and VERY insecure.
You can take measures during development to increase reliability, accuracy/correctness and/or security. Even things as simple as "best practices" can make a remarkable difference in each of these aspects of a design/implementation.
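To be concrete about the *sort* of practice I mean -- nothing exotic, just validating inputs, checking return values and asserting your own assumptions instead of trusting them. A toy sketch (names invented purely for illustration):

    #include <assert.h>
    #include <stddef.h>
    #include <string.h>

    /* Copy an untrusted, possibly oversized name into a fixed buffer.
     * Returns 0 on success, -1 if the input is missing or won't fit. */
    int store_name(char *dst, size_t dstlen, const char *src)
    {
        assert(dst != NULL && dstlen > 0);  /* programmer error, not runtime input */

        if (src == NULL)
            return -1;                      /* reject -- don't dereference */

        if (strlen(src) >= dstlen)
            return -1;                      /* refuse to truncate silently */

        memcpy(dst, src, strlen(src) + 1);
        return 0;
    }

Trivial? Sure. But apply it *uniformly* and a whole class of defects never makes it into the build.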
You use similar "best practices" when designing hardware. E.g., derating components, shake 'n' bake, etc.
But, too many software development and execution environments do not address these. Or, try to address them as afterthoughts. As if you can buy a big lock to put on the front door of your house to keep it "secure" -- while ignoring the fact that a large percentage of the exterior walls is *glass*!
Coding on bare metal will probably result in a more insecure, less reliable and more costly (to develop) implementation. The whole point of an operating system is to give you an enhanced execution environment that increases your chances of producing secure, reliable and cost-effective implementations.
In my current project, I put lots of "mechanism" into the OS and "support services". So, applications don't need to reimplement things that might be commonly used (e.g., BigRational numerics), as they stand a greater chance of implementing those things incorrectly -- or, cutting corners as an expedient (e.g., "I only need 72-bit Rationals") and later erroneously reapplying those "incomplete implementations".
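As a sketch of what that buys a client -- the names here are invented for illustration, NOT my actual API -- the application asks the service for the exact result instead of deciding how many bits are "enough":

    #include <stddef.h>
    #include "bigrational.h"    /* hypothetical binding to the shared service */

    /* Average a list of exact rationals.  All the arithmetic lives in the
     * service; the client never picks a precision or a representation. */
    int average(br_t *result, const br_t *samples, size_t count)
    {
        if (count == 0)
            return -1;                          /* nothing to average */

        br_t sum;
        br_init(&sum);                          /* arbitrary precision, service-managed */

        for (size_t i = 0; i < count; i++)
            br_add(&sum, &sum, &samples[i]);    /* exact -- no overflow, no rounding */

        int err = br_div_ui(result, &sum, count);   /* exact rational divide */
        br_clear(&sum);
        return err;
    }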
[Great example of this sort of corner-cutting: see how many buggy floating point libraries you could encounter over the years as folks "rolled their own" before these sorts of things were readily available/standardized. Heck, it's just simple arithmetic operations -- how could they possibly get them *wrong*?? :> ]

I provide individual protected execution spaces for each "job" so *you* can't "accidentally" alter or corrupt *my* data -- or, my execution. *Your* bugs affect *you* and no one else!
I provide authentication and authorization mechanisms for fine-grained resource control. E.g., I can let *you* "mute" the audio but never "increase" or "decrease" it. I can let you *set* a value but prevent you from *seeing* its current setting. At the same time, let someone else *see* it but not *set* it.
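Conceptually (illustrative names only, not my actual interfaces), the rights travel with the *handle* a client was issued, and the server checks the right on every operation:

    /* Illustrative sketch -- a handle carries exactly the rights it was
     * granted when issued, nothing more. */
    enum vol_rights {
        VOL_GET  = 1 << 0,      /* may read the current setting */
        VOL_SET  = 1 << 1,      /* may change the setting */
        VOL_MUTE = 1 << 2       /* may mute/unmute -- and nothing else */
    };

    struct vol_handle {
        unsigned rights;        /* fixed when the handle was issued */
    };

    static int level = 50;      /* the protected resource */

    int vol_set(const struct vol_handle *h, int new_level)
    {
        if (!(h->rights & VOL_SET))
            return -1;          /* not authorized; setting untouched */
        level = new_level;
        return 0;
    }

    int vol_get(const struct vol_handle *h, int *out)
    {
        if (!(h->rights & VOL_GET))
            return -1;          /* may not even *see* the value */
        *out = level;
        return 0;
    }

The bit mask isn't the point; the point is that the authorization is attached to what was *granted*, so an audit can see exactly who can do what to which resource.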
While none of these things GUARANTEE that the code will be more accurate/correct, reliable or secure, they provide a convenient framework for *cooperative* developers to do the sorts of things that they need to do and reap the benefits of the underlying mechanisms to make their lives easier *and* their code more predictable/inspectable.
[E.g., you could choose to share a value through some ad hoc mechanism. But, then *you* have to build that mechanism. In doing so, you make it harder for a code audit to see where you may be introducing a vulnerability. OTOH, if it is EASIER for you to just use some preexisting mechanism to share that value, then your sharing is more readily identified in the codebase. And, the criteria that you have chosen to apply to that sharing are more readily identified: "Why are you sharing this with EVERYONE? Don't you just want to share it with Foo?" (unnecessary sharing leads to exploits) "And, why, exactly, does Foo need to see this? Can't Foo do its job *without* it??"]

Silly boy. Why would you think it hasn't happened? People have hacked pacemakers, insulin pumps, cars, gaming device$, pay phones, banks, electronics/computer companies, etc. A little time with your favorite search engine should turn up enough documentation to get you thinking...
Note that a "hack" need not mean something was "commandeered". Rather, it represents an unexpected and undesired interference with the intended, normal operation of the device/system.
In the case of the hacked Jeep, you needn't be able to steer, accelerate, brake, etc. in order to have hacked it. Simply interfering with its ability to operate as intended constitutes a hack. E.g., jabbering on the interconnect network -- even if everything you are "saying" is total gibberish -- could easily prevent the system from operating as required. Whether the system handles that assault gracefully or catastrophically is up to the implementors.
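What "gracefully" might look like, in caricature (driver hooks invented for illustration): the node budgets how much traffic it will even *look* at each tick, so a babbler wastes bandwidth but can't starve everything else:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical driver/application hooks -- names invented for illustration. */
    extern bool   bus_rx_pending(void);
    extern size_t bus_rx(uint8_t *buf, size_t buflen);
    extern bool   frame_is_valid(const uint8_t *buf, size_t len);
    extern void   dispatch_frame(const uint8_t *buf, size_t len);

    #define MAX_FRAMES_PER_TICK 64      /* per-tick budget; tune for the application */

    /* Called once per scheduling tick.  A jabbering node can waste bus
     * bandwidth, but it can't monopolize this node's CPU or starve its
     * other tasks. */
    void service_bus(void)
    {
        unsigned handled = 0;
        uint8_t frame[16];

        while (handled < MAX_FRAMES_PER_TICK && bus_rx_pending()) {
            size_t len = bus_rx(frame, sizeof frame);
            handled++;
            if (!frame_is_valid(frame, len))
                continue;               /* counted against the budget, then dropped */
            dispatch_frame(frame, len);
        }
        /* Anything still queued waits for the next tick: degraded, not dead. */
    }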
When I designed the comms for my automation system, it would have been incredibly naive to think that someone wouldn't elect to "hack" it -- to gain entry to the residence, to "spy" on what I was doing within, to track my TV/radio habits (valuable to a commercial entity -- especially if they could be gleaned without criminal activity!), etc.
And, it's also possible that someone might want to simply *disrupt* the system's proper functioning. Either for some particular exploit or simply to deny those services to the occupants. Imagine the opportunities when accessing that system is possible remotely! (which greatly increases its value to the *owner* -- at the expense of downside hacking risk!)
While there is only *one* such system, the risk is effectively non-existent -- a special case of security by obscurity. OTOH, if I expect others to build upon my efforts -- opening up the design in intricate detail to all sorts of potential hacking experiments -- then failing to address those issues UP FRONT means they will NEVER be addressed adequately.