A group of colleagues and I meet up (somewhere) a few times a year to exchange ideas and libations. It was my turn to host this past week.
Lots of stuff gets discussed at these get-togethers -- which, of course, is the reason for the "inconvenience" of having to fly around the country to attend them!
One idea that developed over a dinner was the possibility of a *developer* (business/individual) being targeted prior to product release -- i.e., embedding malware in the product AS RELEASED ("infected from birth").

We're all "bare metal" developers (hardware and software). As a result, we INITIALLY were relatively confident in ASSUMING that we would present a less "accommodating" attack surface for a blind "pre-release" attack; the attacker would have no foreknowledge of the development language, target OS, or even the processor family in use!
OTOH, folks building on Windows, Linux or any other COTS platform could probably be easily identified (fingerprint the file names in their repository) and silently infected. Furthermore, as most of those folks would probably be relying on many prebuilt libraries, one of those binary packages could be infected and not be detected before linking (static or dynamic).
[Does your build system know if a file has been altered since the last make(1)? Or, does it simply rely on a timestamp to determine this??]

For COTS systems, the developer might not even have the sources available for the library that is targeted! I.e., "make world" leaves the library files untouched!
OTOH, most of us use toolchains that are publicly available (COTS or FOSS), so the tools themselves could be targeted to inject the specific malware into the objects -- regardless of whether or not the binaries are ever rebuilt! (e.g., in the spirit of Ken Thompson's "Reflections on Trusting Trust" hack)
Again, I suspect this is practically impossible for the GENERIC class of embedded apps -- the attacker has no way of knowing WHICH aspect(s) of a particular design to target!
The more interesting question is whether or not some "common" facility (e.g., some part of stdio/stdlib/math/etc.) could be compromised in a manner that yields an effective "in" for a system whose capabilities/functionality are UNKNOWN (to the attacker) -- while remaining innocuous enough that it doesn't prematurely reveal itself to developers of systems that CAN'T be compromised by that technique!
(I.e., if folks started noticing that, for example, strlen(3c) was "misbehaving" and explored the issue, they would discover such an attack before it had the opportunity to "bear fruit" -- in some OTHER system/product/application)
So, given NO knowledge of the targeted application domain, hardware, OS, etc., can you imagine a PRACTICAL exploit that would put designs at risk, "from birth"?