PIC versus AVR

That's approximately my recollection. I specifically mentioned the PDP-11/20 because that is the one I recall as being "earliest." My memory could have been wrong on that point, and you have clarified some of the details in my own mind. Thanks.

I personally paid about $300 each for 4K dynamic RAM cards from MITS that didn't even work right because they were designed wrong. That was my first, and forced, serious foray into the wonderful world of electronic design. I fixed them, but it took a month of learning and doing. That was... I think the end of 1975, maybe early 1976. Not quite $1/byte, but not cheap, and it came out of my meager (very meager) pocketbook.

Jon

Reply to
Jonathan Kirwan

Actually it was almost identical to the PDP-11 that I used for a couple of years (from a user's point of view). Each process saw a 64KB text space and a 64KB data space. When the scheduler switched processes, both spaces were swapped for those of the new process. As long as everything fit in physical RAM, all that changed were the segment control registers, and it went quite smoothly. As soon as you started swapping to disk, things got very slow.
--
Grant Edwards                   grante at ...      Yow!  My DIGITAL WATCH has an automatic SNOOZE
Reply to
Grant Edwards

No, I meant that the 286 (I assumed you meant the 80286 when you said 80268) includes a protection scheme that has quite another purpose: code segments can be set to execute-only, which begins to look a little like a separate memory, but without the benefits of separate address and data buses.

I've worked with the x86 series for many years, of course - chipset register programming, MTRRs and MSRs, etc., and multi-CPU systems using the APIC - and it never felt anything like programming PDP-11s. The differences are manifest.

Let's just agree to disagree. How a scheduler handled things isn't what I was thinking about, I guess.

Jon

Reply to
Jonathan Kirwan

I pretty much believe you have the timing wrong. From today, the few years separating the early PDP-11s on which C was developed (and those did indeed have the MMU) from the later models that introduced separate I/D spaces definitely look like "the same time" - unfortunately for me, they are :( As I remember, the C compiler worked within the common I/D space, but even lint did not... So, no lint on the 11/34!

The 11/70 is a later model, with 22 bits of physical address; earlier ones had only 18 bits. And Unix did not work without the MMU, so the MMUs were already there when Unix and C were introduced (essential for Unix, though not for C). And the separate I/D spaces were there to extend the VIRTUAL address space, which was limited to 16 bits no matter how much physical memory you had.

I took Grant's point about the constants - but my words weren't about the micros, they were about the Harvard architecture in general - so no limitations on preloaded data in the data space!

You think I do?

Thanks, Arcady

Reply to
Arcady

If we look back to the PDP-11 years - I do not remember there being any programmatic way for a user program to read the I space - only to execute instructions from it. So on a "proper" Harvard architecture (the PDP-11 wasn't Harvard - just for clarification) there is no way for a program to implement such "smart pointers" - unless the architecture provides you with some tricks - definitely not the case for big (big? man, look around!) machines, but vital stuff for micros. Even the PIC's retlw trick will do (oh, so there WAS a way - pretty expensive, though - to read at least some data from the I space of the PDP-11!)
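
For anyone who hasn't met it, here is a minimal, hypothetical sketch of what the retlw trick amounts to, rendered in C. On a baseline PIC there is no instruction to read program memory as data, so a constant table is compiled as a computed jump into a run of retlw ("return with literal in W") instructions; the nearest C equivalent is a function that returns the constant, rather than an array you can point into. The 7-segment digit patterns are just illustrative values:

    /* Sketch only: how a 'const table' ends up on a baseline PIC.
     * There is no pointer into this data; the only access path is
     * calling the function, i.e. executing code in the I space.  */
    static unsigned char seven_seg(unsigned char digit)
    {
        switch (digit) {      /* on the PIC: add the index to the PC */
        case 0:  return 0x3F; /* each case is one retlw instruction  */
        case 1:  return 0x06;
        case 2:  return 0x5B;
        default: return 0x00;
        }
    }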

Reply to
Arcady

On such a pure Harvard architecture, the need for an abstraction between C pointers and hardware addresses doesn't arise in the first place. If consts can never be in code space, C only needs pointers into data space, so the "one kind of pointer fits all" approach just works.

The problem discussed in this subthread arises only if

a) the micro is Harvard, yet offers read-only access to code memory for storing constants

b) it does this by using different *instructions*, rather than e.g. a special 'magic' bit in the address or more generally by using a non-uniform memory mapping

c) the C compiler offers support for using this, but

d) its maker decided to do it by introducing non-standard pointer types, such as 'int __code_memory__ MyConstant;'

The decision between portability and efficiency happens only in step d). A C pointer can always be implemented to store the extra bit of information ('this is in code memory') in the pointer's representation, at the cost of making every access through such an expanded pointer more expensive than a plain address. Not surprisingly, such an architecture will expose quite a few bugs in code that's sloppy about the rules of pointer usage in C.
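
To make the trade-off concrete, here is a minimal sketch of such an expanded ("generic") pointer in portable C. The names and the 16-bit address width are assumptions for illustration, not any particular compiler's scheme:

    #include <stdint.h>

    /* One tag records which memory space the address refers to. */
    enum space { SPACE_DATA, SPACE_CODE };

    typedef struct {
        uint16_t addr;   /* address within the selected space */
        uint8_t  space;  /* the extra bit of information      */
    } generic_ptr;

    /* Stand-ins for the two distinct access paths a Harvard micro
     * would use (a RAM load vs. a code-memory read); plain arrays
     * here so the sketch runs anywhere.                           */
    static uint8_t       data_mem[256];
    static const uint8_t code_mem[256] = { 'h', 'i', 0 };

    static uint8_t deref(generic_ptr p)
    {
        /* the run-time cost: every access must test the tag */
        return (p.space == SPACE_CODE) ? code_mem[p.addr]
                                       : data_mem[p.addr];
    }

A qualifier like the 'int __code_memory__' above lets the compiler resolve that test at compile time instead of at every dereference, which is exactly where the portability is traded away.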

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

I don't think I'd consider a computer to have a Harvard architecture if its code and data memories differed only in what their addresses were.

Reply to
mc
