How to develop a random number generation device

I suppose neither of you has ever heard of a "chroot jail".

Cheers! Rich

Reply to
Rich Grise

You and I are talking about different issues. Process isolation has been solved, even on mainstream OSes.

I'm not talking about process isolation. I'm talking about the ability to make a program behave other than how its author intended by overrunning a buffer (e.g. by making some portion of its input larger than the buffer in which it will be stored).
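For anyone following along, here's a minimal sketch (function and buffer names are made up, not from any particular program) of the kind of intra-process overrun being described: the process never leaves its own address space, so memory protection never triggers, yet its behaviour is now dictated by the input.

    /* Minimal sketch of an intra-process buffer overrun: input is copied
     * into a fixed-size buffer with no length check, so input longer than
     * 15 characters overwrites whatever follows the buffer on the stack
     * (other locals, the saved return address, ...). */
    #include <stdio.h>
    #include <string.h>

    static void handle_request(const char *input)
    {
        char buf[16];
        strcpy(buf, input);          /* no bounds check: overruns buf if input is too long */
        printf("got: %s\n", buf);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            handle_request(argv[1]); /* whoever supplies argv[1] controls its length */
        return 0;
    }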

Reply to
Nobody

You might want to check my User-Agent header before you assume that I'm a Windows user.

You might also want to check that you are actually correct before you start making ad hominem attacks against anyone who contradicts your viewpoint.

Reply to
Nobody

A messed-up data segment is still the data segment. It shouldn't be possible to execute it as code.

Since the 286 there have been goodies like four privilege levels, separate LDTs for every process, and different segment rights for code, data and stack. In theory, that should allow for pretty solid protection; in practice, however, it was (and still is!) left unused for simplicity, software-compatibility and performance reasons.
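As a rough illustration (386-style layout per the x86 programming model; the struct and field names are mine), the protection information lives in each GDT/LDT segment descriptor: the DPL bits give the four privilege levels, and the type bits distinguish code from data and mark read/write/execute rights and expand-down (stack-style) segments.

    /* Sketch of a legacy x86 (386-style) 8-byte segment descriptor. */
    #include <stdint.h>
    #include <stdio.h>

    struct seg_descriptor {
        uint16_t limit_low;    /* segment limit, bits 0..15                 */
        uint16_t base_low;     /* segment base,  bits 0..15                 */
        uint8_t  base_mid;     /* segment base,  bits 16..23                */
        uint8_t  access;       /* P | DPL(2) | S | type(4):
                                  DPL = privilege level 0..3;
                                  type bits distinguish code vs. data,
                                  read/write/execute rights, expand-down
                                  (stack) segments, etc.                    */
        uint8_t  limit_flags;  /* G | D/B | L | AVL | limit bits 16..19     */
        uint8_t  base_high;    /* segment base,  bits 24..31                */
    };

    int main(void)
    {
        printf("descriptor size: %zu bytes\n", sizeof(struct seg_descriptor));
        return 0;
    }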

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

It is possible to declare every data object in a program as a separate segment. That is what the LDT was intended for. Of course, there would be a lot of overhead, and compatibility issues too.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

The answer is "No, regardless of hardware support".

I should ask the same of you.

Or, avoiding loaded terms like "right answer" ...

Why is it that, for three days now, you've resisted accepting that the problem with "buffer overruns" isn't about segfaults (those aren't a problem: the process misbehaves, gets SIGSEGV, and dies; good riddance), but about intra-process overruns?

Look at the Wikipedia article for "buffer overrun". Or search the BugTraq archives for that term. This isn't about process isolation, it's about a process trashing its own memory in response to "bad" input.

Checked my User-Agent headers? ;-)

FWIW, I use both Linux and XP; Linux if I can, XP if I have to.

I'm by no means enamoured of Windows, but suggesting that it lacks the process isolation found in Linux or MacOSX is incorrect (95/98/ME were partially lacking in this regard, but not the NT/2K/XP branch). Windows has plenty of problems, but lack of memory protection isn't one of them.

If you think that the "buffer overrun" problem is somehow specific to Windows, try searching for "buffer overrun" and "buffer overflow" along with "linux". Or take a glance at the GLSA list:

formatting link

Reply to
Nobody

Why would a *system* care about the latency of one processor accessing memory? The system only cares about net performance. As it stands now, only one *process* can access memory at a time (because all processes share a single CPU), and they all suffer from context-switching overhead. Multiple CPUs never context switch, so they *must* be faster overall.

It's the OSes that we have problems with. Hardware is cheap and reliable; software is expensive and buggy. So we should shift more of the burden to hardware.

The IBM Cell structure is a hint of the future.

Sure, but Moore's Law keeps going, in spite of a pattern of people refusing to accept its implications.

Far fewer than software bugs. Hardware design, parallel execution of state machines, is mature, solid stuff. Software just keeps bloating.

So let's make it simple, waste some billions of CMOS devices, and get computers that always work. We'll save a fortune.

My XP is currently running about 30 processes, with a browser, mail client, a PDF datasheet open, and Agent running. A number of them are not really necessary.

How many on yours?

John

Reply to
John Larkin

A decent OS, using decent hardware, should enforce isolation of code, stack, and data, in itself and in all lower-priority processes. It should be impossible for data to ever be executed, anywhere, or for code segments to be altered, and buffer overflow exploits should be impossible. This ain't even hard, except for the massive legacy issues.
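Mainstream OSes already expose pieces of this. As a hedged sketch (POSIX-style, illustrative rather than a full W^X policy), a process can ask for pages that are readable and writable but not executable, and on NX-capable hardware any attempt to run data from such a page faults:

    /* Sketch of the "data must never be executable" idea using POSIX
     * mmap() page protections. */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);

        /* Get a page that is readable and writable but NOT executable. */
        unsigned char *page = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }

        page[0] = 0xC3;   /* x86 'ret' instruction, written as data */

        /* Jumping here would be killed by the hardware/OS (SIGSEGV),
         * because the page lacks PROT_EXEC:                          */
        /* ((void (*)(void))page)();   -- would fault on an NX-capable CPU */

        munmap(page, pagesz);
        return 0;
    }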

Right. A good hardware+OS architecture should prevent this, too. Bad code should crash, terminated by the OS, not take over the world, or send all your email contacts to some guy in Poland.

John

Reply to
John Larkin

You're probably better off going quite a bit longer than that between replacements. Believe me, the bathtub curve has two ends. They gave me a new Dell (hmm, maybe there is a common thread here) on my first day. The disk drive died that afternoon and I lost a couple of days' work (couldn't get it replaced right away). It could have been far worse, though.

--
  Keith
Reply to
krw

I'm not at all happy with SuSE. I guess it's mutual because it doesn't like my hardware, even though it is rather vanilla. I have Ubuntu on the to-do list for a rainy day. I bought a new drive for my laptop so I'll try it on this system too.

Ick! I know XP has "issues" but I didn't think it was within a few orders of magnitude of that!

Understandable.

I don't believe 2K does. I rather liked Win2K. It was the only O$ I was willing to drop OS/2 for. I'd planned to move to Linux by now, but there are too many hardware issues.

--
  Keith
Reply to
krw

You're always off the wall, so you cannot even make that claim.

Reply to
ChairmanOfTheBored

The Cell BE processor beats it by a factor of at least three.

Reply to
ChairmanOfTheBored

The dirty panties on his head was a dead giveaway.

--
Service to my country? Been there, Done that, and I've got my DD214 to
prove it.
Member of DAV #85.

Michael A. Terrell
Central Florida
Reply to
Michael A. Terrell

And he still can't spell 'Crack Whore' properly. :)

--
Service to my country? Been there, Done that, and I've got my DD214 to
prove it.
Member of DAV #85.

Michael A. Terrell
Central Florida
Reply to
Michael A. Terrell

DOSBox is better. My 640x480 legacy apps are at 1280x1024 now. Beautiful upscaling capability. Tango PCB and OrCAD are great under DOSBox.

You should try the emu apps within windows then as well, especially DOSBox.

Vista's VDM is far better than XP's. Ever heard of the NET USE command, and its brethren?

Learn how to change the settings on the ports in device mangler then.

Sounds like more operator error to me.

XP works just fine on a machine that has no network capacity.

Reply to
ChairmanOfTheBored

There are good CPUs that are less than a cm square. And the one I refer to is scalable too.

Reply to
ChairmanOfTheBored

It's certainly insufficient. I'm not sure that it's even necessary. There are mechanisms which the OS could provide and which a language could use, but there are also mechanisms which don't require support from the OS.

Agreed. The fact that ANSI C provides e.g. strcpy() but doesn't provide a safe alternative (strncpy() won't NUL-terminate the string if it is truncated) has been responsible for innumerable buffer overrun bugs.
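To make the strncpy() pitfall concrete, a small sketch: when the source is longer than the destination, strncpy() fills the destination but does not write a terminating NUL, so the caller has to terminate by hand (or later string functions will run off the end).

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char dst[8];
        const char *src = "this is longer than eight bytes";

        strncpy(dst, src, sizeof dst);   /* dst is now NOT NUL-terminated */
        dst[sizeof dst - 1] = '\0';      /* caller must terminate explicitly */

        printf("%s\n", dst);             /* safe only because of the line above */
        return 0;
    }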

Compilers are constrained by the language. Not only does C support treating any pointer as an array, but arrays are automatically converted to pointers when used in an expression or as a function argument. Keeping track of the end (and ensuring that it isn't overrun) is the programmer's responsibility.
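A small sketch of that decay: inside the callee the "array" is just a pointer, so the length has to travel separately (or be known by convention), and nothing stops either side from getting it wrong.

    #include <stddef.h>
    #include <stdio.h>

    /* sum() receives only an address; the length must be passed alongside. */
    static int sum(const int *a, size_t n)   /* 'const int a[]' would mean the same */
    {
        int total = 0;
        for (size_t i = 0; i < n; i++)
            total += a[i];
        return total;
    }

    int main(void)
    {
        int v[4] = { 1, 2, 3, 4 };
        printf("%d\n", sum(v, sizeof v / sizeof v[0]));  /* v decays to &v[0] */
        /* sum(v, 10) would compile just as happily and read past the end. */
        return 0;
    }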

[OTOH, C doesn't require that negative indices be supported, yet every compiler I've ever used allows them.]

If arrays were a first-class data type, containing both their start address and length, many of the problems would go away.
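A hedged sketch of what that could look like even within C, carrying the length along with the pointer (the "slice" struct and names are made up for illustration):

    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    /* An array-like object that knows its own length, so every access
     * can be checked. */
    struct slice {
        int    *data;
        size_t  len;
    };

    static int slice_get(struct slice s, size_t i)
    {
        assert(i < s.len);          /* the bounds check travels with the array */
        return s.data[i];
    }

    int main(void)
    {
        int storage[4] = { 10, 20, 30, 40 };
        struct slice s = { storage, 4 };

        printf("%d\n", slice_get(s, 2));   /* ok */
        /* slice_get(s, 9) would trip the assert instead of silently
           reading someone else's memory. */
        return 0;
    }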

I didn't say that it can't do "something", although I'm not sure that it actually matters all that much.

Windows is no worse than any other OS when it comes to buffer overruns.

[And the NT/2K/XP branch is no worse when it comes to process isolation. That's a separate issue to what I was talking about, but there seems to be some confusion.]

The real problem is the widespread use of C/C++ as an application programming language.

C was designed as a systems programming language, one step up from assembler. In that area, you often need the flexibility of a language which will let programmers do whatever they want, including shooting themselves in the foot. You may also need the efficiency.

This isn't true for applications: the additional overhead and reduced flexibility of a higher-level language wouldn't be a problem for a word processor or a web browser.

IMHO, Windows' weak point is its extreme complexity (sometimes I'm convinced that Rube Goldberg is alive and well and working as a systems programmer at Microsoft).

Windows doesn't crash because applications are allowed to trash system memory. Windows crashes because Windows trashes system memory, because of occasional programming errors multiplied by the massive size of the code base.

[In this context, "system" memory doesn't have to be "kernel" memory. Windows has lots of auxiliary "services", which run as normal applications but without which the OS will effectively cease to function.]
Reply to
Nobody

On Sep 17, 4:01 pm, Nobody wrote: [...]

It also requires that the write cause a bad thing(tm) to happen.

I, however, am now bored. I think I'll go read some electronics-related ones.

Reply to
MooseFET

Actually, with RAID, I can just let one of the drives fail, and pop in a new one when it does. No panic.

John

Reply to
John Larkin

No, no, NO. You seem to be assuming that we'd use multiple cores the way Windows would use multiple cores. I'm not talking about solving big math problems; I'm talking about assigning one core to be a disk controller, one to do an Ethernet/stack interface, one to be a printer driver, one to be the GUI, one to run each user application, and one to be the system manager, the true tiny kernel and nothing else. Everything is dynamically loadable, unloadable, and restartable. If a core is underemployed, it sleeps or runs slower; who cares if transistors are wasted? This would not be a specialized system, it would be a perfectly general OS with applications, but no process would hog the machine, no process could crash anything else, and it would be fundamentally reliable.

This is not about performance; hardly anybody needs gigaflops. It's all about reliability.

Programmers have pretty much proven that they cannot write bug-free large systems. Unless there's some serious breakthrough - which is really prohibited by the culture - the answer is to have the hardware, which people *do* routinely get right, take over most of the functions that an OS now performs. One simple way to do that is to have a CPU per process. It's going to happen.

When I was just a sprout, my old mentor Melvin Goldstein told me "in these integrated circuit things, one day transistors could cost a penny each." I thought he was crazy. OK, one day CPUs will cost 5 cents each, and Windows is not the ultimate destiny of computing.

Hey, he wrote a book!

formatting link

John

Reply to
John Larkin
