x86 real mode

On the 8080, you could "snoop" on the internal actions of the processor. On the 8085, much of this "status" was covered up and replaced by a terser "read/write:data/code" summary. On the Z80, even less information was *easily* accessible (aside from "fetch/not-fetch").

OTOH, you could often get a glimpse of the internal *operations* by monitoring pins at "undocumented" times. E.g., IIRC, it was easy to see 16 bit increments (opcodes) "on the pins".

And, the encodings of many opcodes made it easy to snoop on "select" operations with a minimum of external hardware. E.g., you could use things like the RST's as an economical means of syncing/commanding external hardware without elaborate decoding logic.

[And, the fact that you could jam ANY instruction opcode onto the bus during an INTA cycle allowed for some really clever implementations effectively making chip internals accessible to external hardware "for low cost"]

The Z80 (and later 180 family parts) tended to be more *practical* in the signals that they presented to the pin drivers. E.g., extending the I/O space to 64K was a significant win (esp because you didn't have to *use* it that way!)

It would have been interesting to see what Zilog *could* have done with the Z80 had they not been so dysfunctional at that time.

Talk about throwing away your market... :-/

Reply to
Don Y

I think there's a clear need for at least 4 general protection zones: kernel, device driver, OS services and user applications. Beyond that you're getting into user politics.

IIRC NT 3.x and OS/2 1.x/2.x did use rings 0, 1 and 3. NT 4 retreated to using only rings 0 and 3, and I *think* OS/2 Warp versions did also. And x86 now has dropped segmentation altogether in 64-bit mode so the rings aren't even available.

George

Reply to
George Neuner

That should be quite sufficient.

The interrupt/device driver environment should have its own "ring", especially if there are a lot of third-party device drivers whose quality and compatibility with other kernel-mode services might be questionable.

Running OS services such as file systems (e.g. RMS on the VAX, running in executive mode) in some intermediate ring between kernel and user mode makes sense, since those services only need to address the user address space in addition to their own ring, while kernel/driver code will access the OS-service ring as well.

There are usually private user-mode address spaces for each process, so a single user-mode ring would be sufficient. If there is a need to share memory between two user processes, memory-mapped files etc. with process-specific (RW/RO/EXEC) access should be sufficient.
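
As a minimal sketch of that idea on a POSIX-style system: a named shared-memory object is mapped read/write by one process and read-only by another, so the per-process page tables enforce the access rights. The object name and size below are purely illustrative.

    /* Minimal POSIX sketch: share a page-aligned region between two
     * processes via a named shared-memory object, mapped read/write
     * by the producer and read-only by the consumer. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SHM_NAME "/example_region"   /* hypothetical name */
    #define SHM_SIZE 4096                /* one page */

    int main(void)
    {
        /* Producer side: create and size the object, map it read/write. */
        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, SHM_SIZE) < 0)
            return 1;

        char *rw = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
        if (rw == MAP_FAILED)
            return 1;
        strcpy(rw, "hello from producer");

        /* Consumer side (normally another process): map the same object
         * read-only, so the hardware enforces RO access for this mapping. */
        char *ro = mmap(NULL, SHM_SIZE, PROT_READ, MAP_SHARED, fd, 0);
        if (ro == MAP_FAILED)
            return 1;
        printf("%s\n", ro);

        munmap(ro, SHM_SIZE);
        munmap(rw, SHM_SIZE);
        close(fd);
        shm_unlink(SHM_NAME);
        return 0;
    }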

NT3.x had the GUI routines in user mode. Since this was a quite low-level interface, the frequent switching between user and kernel mode caused a lot of overhead.

With NT 4.0, those routines were moved to kernel mode. Apparently they did not validate the arguments at all, since in NT4 SP0 you could crash the whole system by mistakenly passing a NULL pointer where a valid pointer was required. The kernel-mode access to the unmapped zero page caused an unhandled page fault and a crash.

The first thing a kernel-mode API should do is check whether the requested memory access would have been allowed from the calling mode (here, user mode) and immediately reject the request before even attempting the access. With NT4 SP1, most of these checks were added, and that is the first more or less stable NT4 version.
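
Roughly, that is the "probe before use" pattern. The sketch below shows the idea in C; the helper name user_range_ok(), the status codes and the address-space limit are hypothetical stand-ins, not the actual NT routines.

    /* Sketch: a kernel-mode service validates that a user-supplied
     * buffer lies entirely in the user-mode portion of the address
     * space before it ever dereferences the pointer. */
    #include <stddef.h>
    #include <stdint.h>

    #define STATUS_SUCCESS            0
    #define STATUS_INVALID_PARAMETER  (-1)

    /* Hypothetical: top of the user-mode part of the address space. */
    #define USER_SPACE_LIMIT  0x7FFFFFFFu

    static int user_range_ok(const void *p, size_t len)
    {
        uintptr_t start = (uintptr_t)p;

        if (p == NULL || len == 0)
            return 0;
        if (start > USER_SPACE_LIMIT || len > USER_SPACE_LIMIT - start)
            return 0;           /* wraps around or reaches kernel space */
        return 1;
    }

    int sys_write_gadget(const void *user_buf, size_t len)
    {
        /* Reject the request up front, before touching the memory, so a
         * bad pointer fails the caller's request instead of faulting
         * the kernel. */
        if (!user_range_ok(user_buf, len))
            return STATUS_INVALID_PARAMETER;

        /* ... now safe to copy from user_buf via the kernel's
         * copy-in helper ... */
        return STATUS_SUCCESS;
    }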

Early 386+ processors only had some protection bits in the segment registers. If I understand correctly, later 32-bit and newer 64-bit processors have extra protection bits somewhere in the page-table hierarchy.

On decent virtual-memory hardware, these bits should be in each page-table entry.
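
For illustration, these are the protection-relevant bits in an x86 page-table entry as commonly documented: Present (bit 0), writable (bit 1), user/supervisor (bit 2), and on 64-bit parts the No-eXecute bit at bit 63 (when NX is enabled). The helper functions are just a sketch of how an OS might test them.

    #include <stdint.h>

    #define PTE_PRESENT   (1ull << 0)   /* page is mapped                 */
    #define PTE_WRITABLE  (1ull << 1)   /* writes allowed                 */
    #define PTE_USER      (1ull << 2)   /* accessible from user mode      */
    #define PTE_NX        (1ull << 63)  /* instruction fetches forbidden  */

    static inline int user_can_write(uint64_t pte)
    {
        return (pte & (PTE_PRESENT | PTE_USER | PTE_WRITABLE))
               == (PTE_PRESENT | PTE_USER | PTE_WRITABLE);
    }

    static inline int page_executable(uint64_t pte)
    {
        return (pte & PTE_PRESENT) && !(pte & PTE_NX);
    }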

Reply to
upsidedown

I think the multics folks looked at this with an attitude of "if these features were free, what would be the 'right/best' way of designing a complex system".

I.e., a *user program* could benefit from the same schemes that benefit the "system". So, let's extend the ring concept into user-land!

Of course, crossing protection boundaries is not free. And, designing with that sort of strict structure takes considerably more planning than the ad hoc approaches so common! :-/

I'm sure I've got a copy of Organick here, someplace... but, I'd wager few (if any!) user-land programs took advantage of any of this! [A possible exception *may* have been The Consistent System]

As we've discussed before, I really think just two execution levels are adequate for most systems **if** you have fine-grained control of access to peripherals (whether they are memory- or I/O-mapped).

Reply to
Don Y

x86-32 has both segmentation and paging, and the protections overlap somewhat, but the rings are part of the segment.

The only new thing (and not so new) is NX - No eXecute - bits on the pages.

There are important use cases - JIT compilation and GC prominently among them - that involve rapidly changing permissions on large blocks of addresses. It is inefficient to do this at page granularity and that is one reason for the introduction of "large" VMM pages into most OSes. However, large pages present their own set of problems such as protection granularity, lengthy page I/O, address space fragmentation and pollution[*], interoperability issues dealing with differently sized pages (allowed by most OSes) including frame allocation fragmentation, partial pages, etc.
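
To make the page-granularity point concrete, here is what the permission flipping looks like from user mode on a POSIX system, roughly as a JIT would do it: fill a writable buffer with code, then flip the whole range to read+execute. Every mprotect() call has to update (and eventually flush TLB entries for) each page in the range, which is the per-page cost referred to above. This is only a sketch; the stub bytes are illustrative x86-64 code.

    #include <stddef.h>
    #include <string.h>
    #include <sys/mman.h>

    typedef int (*jit_fn)(void);

    int run_jitted_stub(void)
    {
        const size_t len = 4096;

        /* RW first: the code buffer is filled like ordinary data. */
        unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return -1;

        /* x86-64 machine code for "mov eax, 42; ret". */
        static const unsigned char stub[] =
            { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };
        memcpy(buf, stub, sizeof stub);

        /* Flip the pages to RX before executing (W^X style). */
        if (mprotect(buf, len, PROT_READ | PROT_EXEC) != 0) {
            munmap(buf, len);
            return -1;
        }

        int result = ((jit_fn)buf)();
        munmap(buf, len);
        return result;
    }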

Segments done properly (which Don and I have discussed in the past) solve a lot of protection domain problems very nicely. They are _not_ a good solution for virtualizing memory, but they should be part of any general protection solution.

Intel's attempt to layer protection on top of the existing real mode base registers was not well thought out ... they should have left the existing base:offset addressing alone (or removed it entirely) and introduced a new, separate protection API. Unfortunately, Intel's is the only experience of protected segments that most programmers have, and so they are, IMO, too willing to dispense with something that can be very useful simply because they haven't seen it done well.

George

[*] even in the immense 56-bit address space of today's 64-bit processors, overly large VMM pages can be wasteful. Some systems now allow pages to be up to 1GB and most permit different processes to use different page sizes (some even permit a single process to use different page sizes). Physical memories, while large, still are tiny relative to the address space and though "allocated" addresses consume no physical memory, "mapped" addresses do and touching a page maps all the addresses it contains (and possibly also maps additional space in backing store).
Reply to
George Neuner
