PIC vs ARM assembler (no flamewar please)

Have you a reference for that definition? All the definitions I've found refer to separate address spaces for code and data as being the key distinction of a Harvard architecture. None describe simply having multiple data paths to the same address space that way, much less call multiported RAM Harvard (as ST does).

Robert

--
Posted via a free Usenet account from http://www.teranews.com
Reply to
Robert Adsett

The "Harvard Architecture" I learned about in grad school meant separate address spaces for code and data. You claim it merely means multiple physical memory busses -- so a processor with 3 busses is "Harvard" even though there's only one address space and code/data can be fetched from any bus?

--
Grant Edwards                   grante             Yow!  TAILFINS!!...click...
                                  at               
                               visi.com
Reply to
Grant Edwards

That's the definition in all the books and articles I've ever seen.

--
Grant Edwards                   grante             Yow!  With this weapon
                                  at               I can expose fictional
                               visi.com            characters and bring about
                                                   sweeping reforms!!
Reply to
Grant Edwards

At least architectures with different instruction word and data word lengths must be Harvard architectures. It would be hard to efficiently put, e.g., 14-bit instruction words and 8-bit data words in the same address space, or even into a unified memory array.

Having a 32-bit instruction word with double-word addressability and byte addressability in the data space would also be a sign of a Harvard architecture. This makes efficient use of I-space bits in branch offsets and jump tables, but would still allow space-efficient transfers between I and D space with special instructions.

Any other system with equal addressability granularity in both spaces but with different logical or physical (memory bank) address spaces would be on the borderline (Harvard or not).

Why would instruction-space data access be required in the early systems (except for self-modifying code, which became more or less redundant after the addition of index registers)?

It was not a problem to have two separate program loads: one to load the code into instruction space and another to load the constant (or initial) values into data space. With core memory the constant data remained in memory even after a power failure, so at most, if you wanted to restart the program the next day, you had to reload the initial values of the impure data modified by the previous program execution.

Moving data from instruction space to data space has become an issue with non-volatile data memory and especially in microcontrollers, in which you do not have a program/data loader. The obvious solution would be to have a data-space initial image in a non-volatile memory with the same word length as the data space. However, this would not be cost-effective, since only a small part of data memory needs to be initialised with non-zero values, so a program loader reading from instruction space with some special instructions would be required. But after the initial data-space loading, there should not be much need to access the instruction space.

That would be quite stupid when one thinks about bit efficiency. If the I-space is 16 bits wide with word addressing and the D-space is 8 bits wide with byte addressing, 16-bit pointers could access 64 KiW (128 KiB) of I-space and 64 KiB of D-space without any special tricks, such as segment registers. The need for data access from I-space is rare, so this does not affect the overall efficiency.

The issue with higher-level languages is more the question of whether the architecture includes a call stack at all, where it is located (internal registers / data-space RAM), and whether it is accessible from the data space.

Paul

Reply to
Paul Keinanen

They would likely be Harvard indeed. They could have a unified address space; for example, it would be possible to allow 8-bit and 16-bit reads to a 14-bit instruction memory.

If there are different memory banks and buses to them, then it is a Harvard. If there is one bus to multiple memories then it is not.

You need access to instruction memory to program it. The early machines were programmed by hand...

That's not true. A significant proportion of data is const data, and there is often very little RAM on microcontrollers, so copying it from flash to RAM is simply not an option. Therefore efficient access to flash is required, and flash and SRAM need to share the same address space. The good thing is that you can still use a Harvard architecture.

Yes, that is why only a few small 8/16 bitters go for a pure Harvard architecture - being able to use twice as much memory matters. But for high level languages a unified memory is the way to go.

You don't need to access the call stack in C; it is perfectly possible to use separate stacks for data and return addresses. If the hardware doesn't allow any access to the call stack, the only issue is setjmp/longjmp, but they are not that useful anyway.

Wilco

Reply to
Wilco Dijkstra

I agree with that.

The number of ports for accessing the data spaces does not change a von Neumann architecture into a Harvard one, or vice versa. Historically, there was a match in the number of ports, since Harvard architectures have separate code and data spaces and have two ports, and v.N. architectures have a single address space and need only one port - multiple ports to the same address space were difficult and expensive in times past.

The great majority of modern processors have a unified address space, i.e., they are von Neumann. They might have a separate IO space, but that does not make them Harvard any more than a "core register space" does. They are likely to have more than one bus internally - that's implementation optimisation. Since cache memories became common, the speed advantages of Harvard designs (with their dual buses) disappeared, and Harvard is now only used for small micros (which have no cache, but may have distinct requirements for flash and ram areas) or specialised architectures.

Reply to
David Brown

See Hennessy & Patterson, Computer Architecture: A Quantitative Approach. Or Wikipedia (although their definition is a bit vague).

Harvard was revived by RISC, and the key goal was performance from using 2 ports to access code and data simultaneously. Some RISCs used traditional pure Harvard (e.g. SPARC), but most used a unified memory architecture with separate instruction and data caches.

Correct. They use a Harvard architecture with Von Neumann unified memory, so are neither pure Harvard nor pure Von Neumann.

My point was that a separate memory space is not the indicator of a Harvard; two or more memory buses for code and data are. Harvards may also have separate memory spaces, but that is not popular anymore.

Cached cores are typically Harvard...

Harvard is used for most architectures today. Few pure Harvards exist nowadays; the vast majority uses a unified address space.

Wilco

Reply to
Wilco Dijkstra

No, I'd call it Harvard when there are separate, dedicated instruction and data memories which can be accessed simultaneously via dedicated buses. There may be more buses to connect the memories (eg. to a unified main memory) but that doesn't stop it being a Harvard.

Wilco

Reply to
Wilco Dijkstra

If you want to pass function pointers as arguments to a subroutine and later execute the pointed-to routine, you need some method of loading the program counter with a computed value. One way is to put the address on the return-address stack and issue a return instruction.

Paul

Reply to
Paul Keinanen

Yes, if you don't have an indirect branch then that would be the only option (switch statements become a nightmare). If you do have an indirect branch then calling a helper function that does the branch is an alternative.

Wilco

Reply to
Wilco Dijkstra

We are using a significantly different definition of "Harvard" and "von Neumann". With your definition, i.e., that it depends entirely on whether the core has a single memory bus (von Neumann) or separate code and data buses (Harvard), then what you wrote above is correct.

With the definition that I use, and many others here agree on, it is a different matter. That is to say, the differentiator is whether the core uses a single address space for data and code, or whether it uses two separate spaces.

Historically, the definitions overlapped (as indicated by wikipedia). With modern designs, it's a different matter - most cpus have a single address space but may have more than one bus, while some small processors (or other more specialised cores) have separate address spaces.

The number of internal buses is, of course, totally irrelevant except for performance reasons, and is therefore a useless way to distinguish between architectures. The number of address spaces, on the other hand, is highly visible to the programmer, and often makes the difference between a C-friendly architecture and an unfriendly one.

Reply to
David Brown

How do you quantify "most"? Number of processor families? Number of processor variants? Number actually made? Are the numbers different for 4-bit, 8-bit,..., 64+ bit processors?

Reply to
Everett M. Greene
