How to develop a random number generation device

No, a buffer overrun is overrunning the buffer. It doesn't matter what is in the memory you've overrun into.

The exploit that takes advantage of the buffer overrun causes an overrun onto the return address, or some other data that shouldn't be writable by this task.

If an application trashes its own variables via a buffer overrun, only that application is hurt in the process. That is exactly what Mr. Larkin said was the case, and he is correct in that.

Reply to
MooseFET

-- Service to my country? Been there, Done that, and I've got my DD214 to prove it. Member of DAV #85.

Michael A. Terrell Central Florida

Reply to
Michael A. Terrell

The 80286 and later have a built-in MMU, and Windows 3.1 and later make use of it (although the 95/98/ME branch doesn't make quite as much use of it as it should). With NT/2K/XP, processes don't trash each other's memory (one process might "persuade" another to trash its *own* memory, but that's a separate issue).

That feature is available at least on current Linux and BSD systems; I'm not sure about Windows. It can break certain programs, e.g. those written in languages which make extensive use of thunks (trampolines), as well as some emulators and languages which use just-in-time compilation.

That doesn't *prevent* buffer overflows, but it prevents exploitation via the "classic" mechanism (write "shellcode" into a stack variable and overwrite the return address to point into the shellcode).
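As a minimal sketch of that classic shape (the function name and buffer size here are made up for illustration):

#include <string.h>

/* Classic vulnerable pattern: attacker-controlled data copied into
   a fixed-size stack buffer with no length check. */
void greet(const char *input)
{
    char buf[64];           /* stack variable */
    strcpy(buf, input);     /* a long enough input runs past buf,
                               over the saved frame pointer and the
                               return address */
}

With an executable stack, the attacker puts machine code in the input and points the overwritten return address back into buf.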

However, there are other ways to exploit buffer overflows, e.g. overwriting function pointers, or data which affects control flow. If you can't inject your own code, you're limited to whatever code is already part of the process, but that may be more than enough (system() will almost certainly be available; if you're lucky, there may be a complete interpreted language available to use).
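For instance, here is a sketch (struct and names hypothetical) of a non-stack vector: a function pointer stored right after a buffer, which an overrun can redirect at code that is already in the process, such as system():

#include <string.h>

/* Hypothetical layout: a function pointer lives right after a buffer. */
struct command {
    char buf[64];
    void (*handler)(const char *);
};

void set_command(struct command *c, const char *input)
{
    strcpy(c->buf, input);   /* no bounds check: input longer than 63
                                bytes spills into c->handler */
    c->handler(c->buf);      /* if handler now points at system(), the
                                attacker runs a shell command without
                                injecting any code */
}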

There's no technical reason why it couldn't have happened under DOS.

But that almost certainly isn't due to the user-level process directly trashing OS memory. It's far more likely that the user process passes "bad" data to the OS and the OS trashes its own memory.

That will prevent code injection, limiting an attack to whatever code is already available in the process' address space.

That would eliminate the return address as a vector. There are still function pointers; C++ uses these extensively (virtual methods), COM even more so (all COM methods are virtual).
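A rough sketch of why that matters, with the C++ layout approximated in C (details vary by compiler): an object's first word is typically its vtable pointer, so an overrun that reaches it (e.g. from an adjacent heap allocation) hijacks every later virtual call:

/* C++ object layout approximated in C. */
struct vtable {
    void (*method)(void *self);
};

struct object {
    const struct vtable *vt;  /* overwrite this one pointer and the
                                 call below goes wherever the
                                 attacker chose */
    char data[32];
};

void call_method(struct object *o)
{
    o->vt->method(o);  /* indirect call through corruptible data */
}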

And there's still the case of overwriting data which affects control flow (the most extreme case is data which is "code" for a feature-rich interpreter).

Only sometimes.

In any case, none of this deals with *preventing* buffer overflows, but with mitigating the consequences.

Reply to
Nobody

"Patch Tuesday" is what he does to himself after he makes a stupid remark, and finds out he was full of shit. His brain has more scar tissue on it than Sammy Davis Jr.'s liver did, and it was the size of a basketball.

You guys are the same dopey crowd that spews horseshit about Vista.

Reply to
TheKraken

An idiot that applies patches "every Tuesday" is just as retarded as one that runs "defrag" more than once or twice a year when they have no apps that cause severe fragmentation, like a database app or such.

Reply to
ChairmanOfTheBored

But it knows what chunks of memory it has allocated to a particular process. As long as it's in your own memory space, who cares if you overwrite/overrun your own buffers?

That's the sort of catch-all "software that can catch the exception" part of my answer. :-)

Cheers! Rich

Reply to
Rich Grise

Evidently, you haven't done much compiling or linking or memory management.

Good Luck! Rich

Reply to
Rich Grise

Yes. Well, it matters in terms of what happens next, but not in terms of whether or not a "buffer overrun" has occurred.

No.

Leaving aside whether the return address "should" be writable (that's how the code which most compilers generate normally works; whether or not that's a good idea is a different matter), the term "buffer overrun" is normally used where a process overruns one variable or field (which is part of the memory which the process is allowed to modify) and corrupts another variable or field (which is also part of the memory which the process is allowed to modify).

E.g.:

char name[256];
int x;

x = foo();
strcpy(name, str);
bar(x);

If strlen(str)>255, and "x" follows "name" in memory, then the strcpy() will corrupt the contents of x. That is a buffer overrun; the buffer is "name", and the strcpy() overruns it.

Yes and no. Only that application is directly affected, but that application can then do anything which its owner can do, e.g. email sensitive files, connect to the owner's bank's website and request the transfer of funds, etc. If the owner has administrative privileges, it can install new software (e.g. a rootkit).

Mr. Larkin appeared to be talking about a different issue (process isolation).

The "problem" with buffer overruns is when they hijack the process which owns the buffer.

If the overrun tries to modify memory which doesn't belong to that process, the process will normally be terminated. That has been a "solved" problem ever since CPUs started having MMUs built in and OSes started to make use of them. For "wintel", that's 80286 and Windows 3.1; mainframes and minicomputers had this much earlier.
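A minimal illustration of the MMU doing its job: the wild store below faults, and the OS kills the process (SIGSEGV on Unix) before anything outside its address space is touched:

int main(void)
{
    char *p = (char *)0x1;  /* an address not mapped into this process */
    *p = 42;                /* the MMU faults the store; the OS
                               terminates the process */
    return 0;
}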

When the overrun corrupts the process' own memory, the process continues to run as if nothing untoward has happened, but it is now doing the bidding of the attacker.

Reply to
Nobody

You seem to be confusing whether it is possible to address an issue with whether a particular statement actually addresses the issue.

Please read my actual question, quoted above, and removed from any irrelevant context which might confuse the issue.

FWIW, I have no problem with either "An OS can surely make it impossible to write safe code" or "a real OS is required to make safe code possible". However, they don't appear to address the question which was actually being asked.

If it helps, that question can be rephrased as whether an OS (any OS) can "make unsafe code impossible", which is a different property to either of those given.

AFAICT, you cannot do this without sacrificing the ability to run arbitrary chunks of machine code, which appears to be a "must have" feature for any OS (if there are OSes which don't allow this, they have yet to escape from the lab).

Actually, even if you do sacrifice that ability, you can't truly eliminate buffer overruns. If the OS only allows you to run e.g. Java bytecode, you can write an x86 emulator in Java then feed it x86 code which contains buffer-overrun bugs. Requiring the use of a higher-level language simply means that a programmer has to make some effort to get buffer overruns.

All things considered, eliminating buffer overruns is something which should be the responsibility of the language. If you don't allow unbounded arrays (i.e. referring to an array by its start address and relying upon the programmer to keep track of where it ends), buffer overruns aren't an issue. Once the program has been compiled into machine code, the information which is required has been lost.
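Even in C you can approximate that discipline by always carrying the capacity alongside the array; a minimal sketch (the helper is hypothetical):

#include <stddef.h>
#include <string.h>

/* Every copy carries the destination's true capacity, so "where the
   array ends" is never lost. */
void copy_bounded(char *dst, size_t dst_size, const char *src)
{
    if (dst_size == 0)
        return;
    strncpy(dst, src, dst_size - 1);
    dst[dst_size - 1] = '\0';   /* strncpy doesn't always terminate */
}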

Reply to
Nobody

So you agree at this point.

Yes. Go back and re-read it carefully.

You seem to be confused about what we are talking about. We are talking about making an OS safe. If an application task commits an overrun that causes that task to fail, that is quite a different matter from a buffer-overrun-based exploit.

[...]

He is talking about process isolation, and about it not being violated by a buffer overrun if the OS is well written. He is correct in what he said.

You have assumed that by causing the overrun the attacker has gained control. As I explained earlier, this need not be the case.

Reply to
MooseFET

For a while, merely receiving an email, without even opening it, could infect a Windows machine!

Microsoft's design philosophy seems to be "when in doubt, execute it." So "data" files, like emails and Word docs, can contain embedded executables, and guess what Windows likes to do with them?

The other trick Microsoft does is to ignore filename extensions, examine the file headers themselves, and take action based on the content of the files, without bothering to point out to the users that something is fishy.

But AlwaysWrong, He Of Many Nyms, loves Windows, and gets huffy if anybody points out defects.

MS brought in the guy who wrote VMS, since even they knew they weren't competent to do NT themselves. But the combination of legacy structure and corporate culture limited what could be done.

Windows is a "big OS" in that thousands of modules make up the actual privileged runtime mess. A "small OS" would have a very tight kernel, maybe a few thousand lines of code, that was in charge of memory management and scheduling, and absolutely controlled the privileges of lower-level tasks, including the visible user interface, drivers, and the file systems. This was common practice in the 1970s, and even then a decent multiuser, time-share OS would run for months between power failures. You can totally debug a few thousand lines of code authored by one or two programmers; you will never debug a hundred million lines of code that has 2000 authors.

John

Reply to
John Larkin

Even a Von Neumann machine with memory management is in effect a Harvard machine. There's no excuse for executing data or stack spaces.
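On systems with POSIX mmap() (Linux and the BSDs, at least), you can see this split directly: a page can be mapped writable but not executable:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Map a page readable and writable but NOT executable. */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* Data can be stored here, but jumping into it faults: for this
       region the machine behaves like a Harvard machine. */
    munmap(buf, 4096);
    return 0;
}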

John

Reply to
John Larkin

Clearly the design of Windows can never be fixed; it was bungled from Day 1. I wonder what will be next?

I like the idea of a multicore CPU that has a processor per task, with no context switching at all. One CPU would do nothing but manage the system; it would be the "OS". Other CPUs would run known-secure device drivers and file systems. Finally, some mix of low-power and high-performance CPUs would be assigned to user tasks.

Microsoft's approach to multicore is incompatible with this architecture. In a few years we'll have, say, 1024 processors on a chip, and something new will be required to manage them. It will be a thousand times simpler and more reliable than Windows.

John

Reply to
John Larkin

Right. The first thing an OS should do is protect itself from a pathological process. But it should also manage data, code, and stack spaces such as to make it very, very difficult for anything internal or external to corrupt user-level processes.

John

Reply to
John Larkin

I used to run a PDP-11 timeshare system, under the RSTS/E OS. It would run, typically, a dozen or so local or remote users and a few more background processes, system management and print spooling and such. Each user could dynamically select a shell, a virtual OS, to run under, and could program in a number of languages, including assembly, and run and debug the resulting machine code. You could also run ODT (like "debug"), poke in machine instructions, and execute them. Non-privileged users could do all this, and crash their own jobs, but they absolutely could not damage the OS or other user tasks. In a hostile environment (we sold time to four rival high schools, who kept trying to hack one another) the system would run for months, essentially between power failures.

This OS, and RSX-11, and TOPS-10, and VMS, and UNIX, and no doubt many more, from other vendors, *did* escape from the lab.

John

Reply to
John Larkin

I don't think databases are common causes of fragmentation - serious databases often do their own file handling at a lower level precisely to avoid that sort of thing.

But other than that, I agree - blindly patching Windows is not a good idea, and defragmenting does not last long enough to make it worth the effort - it's more effective to invest in more RAM so your file caches are bigger.

Reply to
David Brown

David Brown snipped-for-privacy@hesbynett.removethisbit.no posted to sci.electronics.design:

Yes, and with the addition of dynamically linked modules there are now at least three pieces to the linker issue.

1st, a semi-static linker that ties together the loadable base modules of a single, possibly transitory, application program.

2nd, a link-loader that fixes up system and other resource calls external to the application at load time.

3rd, what are called dynamically linked libraries, which provide various less commonly used capabilities for the main application and provide an extensibility interface (API). The notable difference is that these libraries can be dynamically loaded and unloaded to make more room for other uses of memory.
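That third piece can be exercised directly from C. A minimal sketch using POSIX dlopen() (the library and symbol names are made up; link with -ldl on Linux):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load a shared library at run time. */
    void *lib = dlopen("libplugin.so", RTLD_NOW);
    if (!lib) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    /* Resolve a symbol from it and call through it. */
    void (*entry)(void) = (void (*)(void))dlsym(lib, "plugin_entry");
    if (entry)
        entry();
    dlclose(lib);   /* unload, freeing the memory for other uses */
    return 0;
}
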
Reply to
JosephKK

MooseFET snipped-for-privacy@rahul.net posted to sci.electronics.design:

OK. Thanks for the addition to my knowledge.

Reply to
JosephKK

You have a bad case of Windows-user denial.

Reply to
Richard Henry

Nobody snipped-for-privacy@nowhere.com posted to sci.electronics.design:

Mostly agreed; a lot more hardware help is needed: protection from user programs altering MMU data or the stack pointer itself, making I/O instructions privileged, and probably much more, all with careful OS support. So the answer is: possible, yes, but not without serious hardware support.

Reply to
JosephKK
