What is more distressing is the Blind Faith people put in some of these concepts: "Oh, we'll just use floats/doubles/long doubles/etc." as if this magically absolves them from having to understand how the data types *behave*.
"Gee, how did *that* happen?"
The flipside is also true. If you write *expecting* to port, you tend to think ahead (if you are prudent) to the stuff that *will* go wrong -- and not make those mistakes in the first place!
"Gee, aren't ALL ints (at least) 32 bits??" "What do you mean, *signed* chars???"
Then you find yourself grumbling because it's hard to interface to the hardware or write the scheduler or... Tricks have merit! :>
You would think things like buffer overruns would be the first thing folks would check! It's like rewiring a light fixture: once you discover the power is NOT off, you PERMANENTLY learn not to make that mistake again!
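The canonical case is only a few lines (a sketch -- strncpy() shown as *one* safer idiom, not *the* fix):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[8];
        const char *input = "too long for buf";   /* 16 chars + NUL */

        /* The mistake: strcpy(buf, input) copies until the NUL,
           silently trampling whatever sits past buf's 8 bytes */

        /* Bound the copy and terminate explicitly instead */
        strncpy(buf, input, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';

        printf("stored: \"%s\"\n", buf);   /* "too lon" -- truncated, not trampled */
        return 0;
    }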
Oh, I misread your comment. I thought you were claiming the XP machines were more *resistant* to this problem (and that the obsolescence of XP would mean we'd be left with *newer*, LESS RESISTANT machines).
What's his incentive to perform better? (rhetorical question) You've got everyone running around *believing* that "software (always) has bugs". So, it becomes a self-fulfilling prophecy!
"We don't need to test it -- we'll let the USERS do that!" (modern programming practice)
Do you ever *notice* when you encounter something bug-free? When was the last time your microwave oven turned itself on spontaneously? Or, ran for 3 hours instead of 3 minutes?
That is how it was sold to us (at the dawn of practical embedded systems) in the 70's! "Just change some code and you've got a new product!" Now, it seems the *hardware* is cheaper to change than the software (assuming a "software issue" *could* be alleviated with hardware).
Ha! Yes. "Should have been 1.2K.... but, 1.0K *seems* to work!"