We sometimes do present a customer with a cost/performance tradeoff, although once a gadget becomes a catalog item there's not much wiggle room. We have never, so far, offered a customer a price:reliability tradeoff. The image we'd like to convey is that we make things as reliable as we can.
In some cases, it's like never telling your mother that you have a term paper due; if you did, she'd nag you about it. Similarly, if we have an MTBF tradeoff of some sort, it's probably better to not let the customer get involved.
They kludged up x86 and DOS. Windows came years later.
Also, once they hit, they WERE mainstream items, as it cost a hell of a lot less to have a viable CAD workstation on a PC than on any other platform at the time.
Took us off the drafting boards with 4X and onto computers and CAD based layout.
As noted, we have decided that there are far too many system states that could hang us up, not the few in an obvious, official "state machine." So we try to make sure that the logic is solid enough that invalid states are never entered, without worrying much about what happens if they are. Even if the design provides for exiting bad states, just getting there is a system failure.
I'd rather have a system hang in a bad state, because that makes us aware of the problem so we can fix it. If it "fixes" itself, then it becomes a mysterious, occasional, transient malfunction, the worst kind of error.
For the LFSR case, one should pick a maximum-length equation so there is only one "illegal" state, and that is trivially tested for. It's also fairly simple to add illegal-state decoding to other state machines. One-hot state machines are often built from shift registers, which keeps the number of possible illegal states small.
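A minimal sketch of the point (my own illustration, not from the thread): an 8-bit Galois LFSR using the primitive polynomial x^8+x^6+x^5+x^4+1 (taps 8,6,5,4). Because the polynomial is maximal-length, every nonzero state is on the one 255-state cycle; the single illegal state is all-zeros, which locks up and is trivial to decode and escape.

```cpp
#include <cstdint>

// Tap mask for the maximal-length polynomial x^8 + x^6 + x^5 + x^4 + 1.
constexpr uint8_t kTaps = 0xB8;

// One step of a right-shifting Galois LFSR.
uint8_t lfsr_step(uint8_t s) {
    uint8_t lsb = s & 1u;
    s >>= 1;
    if (lsb) s ^= kTaps;
    return s;
}

// Illegal-state decode: zero is the only lock-up state, so one
// all-zeros detector (an 8-input NOR in hardware) escapes it.
uint8_t lfsr_step_safe(uint8_t s) {
    if (s == 0) return 1;  // re-seed out of the lock-up state
    return lfsr_step(s);
}

// Number of steps before the register returns to its seed.
int lfsr_period(uint8_t seed) {
    uint8_t s = seed;
    int n = 0;
    do { s = lfsr_step(s); ++n; } while (s != seed && n < 1000);
    return n;
}
```

With a maximal-length polynomial the period from any nonzero seed is 2^8 - 1 = 255, so the "is it zero?" test is the entire illegal-state decoder.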
There is something to be said for that decision. The original IBM PC hung on a parity error, under the assumption that bad data was worse than a hung system. Many customers didn't agree, so others skipped parity checking altogether (the only two real choices at the time).
On Sep 11, 4:58 pm, John Larkin wrote: [... buffer overflow ...]
No, the problem isn't really code mixed with data; it's data mixed with data. The return addresses etc. sit on the stack along with the local arrays, which means a routine can overwrite its own return address by walking off the end of an array. Once that happens, the return instruction jumps to the bad code.
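To make the mechanism concrete, here is a safe simulation (my own illustration, not a real exploit): a struct standing in for a stack frame, with a fixed-size buffer sitting just below the saved return address, as on a real call stack. An unchecked copy with an attacker-chosen length walks off the end of the buffer and lands on the return address.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// A fake stack frame: local array at low addresses, saved return
// address just above it, mirroring a real x86 call stack.
struct FakeFrame {
    char buffer[8];
    uintptr_t return_address;
};

// Unchecked copy, like strcpy()/memcpy() with attacker-sized input.
// (We copy via the whole frame to keep the simulation well-defined;
// a real overflow simply runs past the end of buffer[].)
uintptr_t overflow_demo(const char *input, size_t len) {
    FakeFrame frame;
    frame.return_address = 0x1111;  // the legitimate return target
    std::memcpy(reinterpret_cast<char *>(&frame), input, len);
    return frame.return_address;    // where "ret" would jump
}

// Build a payload: filler to cover buffer[], then a fake return address.
uintptr_t exploit_result(uintptr_t fake_target) {
    char payload[sizeof(FakeFrame)];
    std::memset(payload, 'A', sizeof payload);
    std::memcpy(payload + offsetof(FakeFrame, return_address),
                &fake_target, sizeof fake_target);
    return overflow_demo(payload, sizeof payload);
}
```

An in-bounds copy leaves the return address alone; a copy that spans the whole frame replaces it with whatever the attacker supplied.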
With the x86's MMU, you can make segments for code and stack and the like that have limits on their sizes. This partly mitigates the problem.
On most programs, the stack is just above the data segment in physical memory and the malloc() obtained memory is beyond that.
A C++ compiler could be created that inserted checking code in every operation that may overrun. Every buffer would have to have its length recorded somewhere.
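A sketch of what such inserted checks amount to (a hand-rolled illustration; `std::vector::at()` and `std::array::at()` already behave this way): the buffer records its own length, and every indexing operation is wrapped in a bounds test.

```cpp
#include <cstddef>
#include <stdexcept>

// A buffer that knows its own length and checks every access, as a
// bounds-checking compiler would do around each indexing operation.
template <size_t N>
class CheckedBuffer {
    char data_[N] = {};
public:
    char &operator[](size_t i) {
        if (i >= N) throw std::out_of_range("buffer overrun caught");
        return data_[i];
    }
    static constexpr size_t size() { return N; }
};

// True if an out-of-bounds write was caught instead of silently
// corrupting whatever sits next to the buffer.
bool write_is_checked(size_t index) {
    CheckedBuffer<8> buf;
    try {
        buf[index] = 'x';
        return false;   // write landed in bounds
    } catch (const std::out_of_range &) {
        return true;    // overrun detected before any memory was touched
    }
}
```

The cost is a compare-and-branch on every access, which is exactly the overhead such a compiler would have to accept.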
The OS can let your program single step and check what every instruction does.
The OS can always leave a dead page after every malloc() block so you get a segment fault on stepping off the end.
Doing it right isn't all that hard. XOR gates don't cost much. If you XOR a pseudo-random bit string with a slightly random bit string, you get a more random bit string.
If you clock everything with a good crystal, you can avoid having the clock rate vary depending on the state of the output. The ring oscillators can be sampled at the clock edge to make a slightly random bit stream.
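A quick simulation of the XOR whitening step (my own sketch; the "slightly random" source here is a software stand-in for a sampled ring oscillator, with its bias exaggerated to 0.8 so the effect is visible):

```cpp
#include <random>

// Fraction of ones after XOR-combining a biased bit source with an
// independent, balanced pseudo-random stream.
double xor_combined_ones(int n, double p_biased) {
    std::mt19937 raw(1), prng(2);                  // independent seeds
    std::bernoulli_distribution biased(p_biased);  // "slightly random" source
    std::bernoulli_distribution fair(0.5);         // PRNG whitener
    int ones = 0;
    for (int i = 0; i < n; ++i)
        ones += static_cast<int>(biased(raw)) ^ static_cast<int>(fair(prng));
    return static_cast<double>(ones) / n;
}

// Fraction of ones from the biased source alone, for comparison.
double ones_without_whitening(int n, double p_biased) {
    std::mt19937 raw(1);
    std::bernoulli_distribution biased(p_biased);
    int ones = 0;
    for (int i = 0; i < n; ++i) ones += biased(raw);
    return static_cast<double>(ones) / n;
}
```

The raw source runs near 80% ones; the XOR output sits at 50% to within sampling noise, which is the whole point of spending the XOR gate.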
For a bit of extra fun, you could make the Vcc with a hit and miss regulator.
At least on Linux/x86, the primary stack is at the top of memory (the 3GiB mark; the last GiB is reserved for the kernel, so it can combine its own address space and the process' address space into a single 32-bit space).
Everything which is mapped from files at start-up (i.e. the code, data, rodata and BSS segments of the executable and shared libraries) is at the bottom.
The heap (memory allocated by brk()) goes immediately above that. [brk() is used by traditional malloc() implementations; modern malloc()s use a combination of brk() and mmap(..., MAP_ANONYMOUS).]
Regions which are created dynamically (mmap(), shmat(), stacks for additional threads, etc) go somewhere in the middle.
Before life got complicated with mmap() etc, you typically had the heap at the bottom and the stack at the top, and "out of memory" occurred when one collided with the other; IOW, a combined limit for the two rather than a separate limit for each.
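The layout described above can be poked at directly (a sketch, not a spec: exact addresses vary with ASLR and the kernel version, but on Linux/x86 the stack sits high and grows downward toward the heap):

```cpp
#include <cstdint>

// Return the address of a local variable `depth` frames down.
// The volatile local and its use after the recursive call keep the
// compiler from flattening the recursion into a single frame.
uintptr_t deepest_local_addr(int depth) {
    volatile int local = depth;
    if (depth == 0) return reinterpret_cast<uintptr_t>(&local);
    uintptr_t a = deepest_local_addr(depth - 1);
    local = 0;  // blocks tail-call optimization
    return a;
}

// On Linux/x86 the stack grows down: a deeper call frame sits at a
// lower address, heading toward the heap below it.
bool stack_grows_down() {
    return deepest_local_addr(4) < deepest_local_addr(0);
}
```

This is what makes the old "heap up, stack down, out of memory when they meet" scheme work: the two regions grow toward each other from opposite ends.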
[Using the traditional definition (the kernel, i.e. the part that the process can't bypass), not Microsoft's definition (the OS plus a bunch of user-level applications which are bundled in order to kill the market for competing applications).]
The language can prevent overruns; the OS just sees machine code.
The OS doesn't know where one "buffer" ends and the next one begins; it's all just memory. And single-stepping programs would be out of the question from an efficiency perspective.
That prevents overflowing malloc()ed buffers, but it doesn't work for those on the stack ("auto" local variables) or in the data segment (global or "static" local variables).
Historically, stack overflows have caused the most problems, due to the ability to overwrite the function's return address.
Also, malloc() is part of the library, not the OS (kernel). Requesting individual blocks directly from the kernel would cause a substantial performance hit (not as bad as single-stepping, but still too high to be practical).
> Nothing the OS does can prevent machine code from overrunning a buffer.
There are lots of things which can be done to prevent overruns (and other things which can be done to prevent overruns from being exploitable), but most of them need to be done at a higher level than the OS.
The issue isn't about modifying code, related or otherwise. It's about either injecting new code or executing existing code with attacker-supplied data.
This isn't about protecting one process from another, but about protecting a process from itself. Most of the existing mechanisms for mitigating buffer overruns are implemented in either the compiler or libraries. The only OS-level mechanisms (things that work on any executable, however it was built) involve making it harder to exploit an overrun (e.g. randomising memory locations) rather than actually preventing the overrun.
No, the issues apply to any OS. But binary compatibility is much more important for Windows (and Mac) than for Linux.
If you try to run a 5-year old Linux binary on a current distribution, you'll probably find that a lot of the interfaces on which it depends have either disappeared or have changed in an incompatible manner. Lack of a stable ABI is a simple fact of life on Linux.
While at the same time conveying the image of not increasing reliability by 1% at the cost of 100X higher costs and a huge slip in delivery date...
That's usually the case. There are some customers who prefer to be kept in the loop on such issues[1], but most of them correctly assume that the person doing the work is in a better position to make such choices.
In the toy industry, the assumptions are just the opposite: the image you like to convey is that you make things as cheaply as you can, as long as the number of dead-on-arrival toys stays under a percent or so. This, of course, applies to the real customers: the toy buyers for Walmart and Toys-R-Us. The image they like to convey to their customers (parents, not kids) is quite different.
Note [1] This reminds me of the classic repair-shop sign:
LABOR RATES
If you let us do our job: $100/hour
If you want to watch: $120/hour
If you want to talk: $150/hour
If you want to help: $200/hour
If you worked on it yourself: $500/hour
If you worked on it yourself and still think you know how to fix it better than we do: $1000/hour
Assuming no correlation between two sources of data, XOR cannot decrease the randomness of the output stream below that of either input stream. If one of the streams is 100% true random, the output of the XOR will be 100% true random; the only exception is the case where the second input is derived from the first.
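The claim follows from a one-line calculation (my own illustration): if the two independent streams emit ones with probabilities p and q, the XOR emits ones with probability p(1-q) + (1-p)q, and its distance from 1/2 is 2|p-1/2||q-1/2|, which never exceeds the smaller input bias and is exactly zero whenever either input is balanced.

```cpp
#include <cmath>

// P(output = 1) for the XOR of two independent bit streams with
// P(1) = p and P(1) = q respectively.
double xor_p1(double p, double q) {
    return p * (1.0 - q) + (1.0 - p) * q;
}

// Bias: distance of a stream's P(1) from the ideal 1/2.
double bias(double p) {
    return std::fabs(p - 0.5);
}
```

For example, XORing an 80%-ones stream with a 70%-ones stream gives bias 2(0.3)(0.2) = 0.12, smaller than either input's bias; XORing anything with a truly balanced stream gives bias 0.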
That's not the usual definition of cryptographic "salt."