As if the subject didn't give it away already, I'd like to point out that I am merely an interested amateur when it comes to embedded issues. My apologies if this question is OT.
Reading the interesting thread on "What's wrong with a PIC", I noticed a lot of comments regarding the C friendly/unfriendly architectures of various micros.
So, the question - what makes an architecture C friendly? Also, what are the other merits/demerits (C apart) of the various architectures?
In my opinion, one of the biggest factors is the development tools. By this I mean things like how efficient the output code is, how much support is provided by the compiler libraries, how compatible the compiler and its pre-processor are with standard C, how well the debugger or emulator handles high-level stepping and hierarchy, and so on.
Think about how you call and return from functions, especially recursive calls. Also, how do you assign and use local storage so that routines are re-entrant?
Only indirectly - a C friendly architecture makes it easier to make a good C toolchain, but there are some good toolchains for architectures that are by no means C friendly. Often the quality of the toolchains is more important than the C friendliness of the architecture, but that's not what the OP wanted to know about.
Being C friendly requires a number of things from the ISA - von Neumann memory (from the programmer's viewpoint, regardless of the implementation), good handling of data at least 16-bit wide, good pointer support, and a data and return stack (normally one stack, but can be separate) with fast access to stacked data. Multiple registers and an orthogonal instruction set are "C compiler writer friendly", but are not strictly necessary to be "C friendly".
Pretty much all 32-bit architectures are C friendly in these respects (the x86 is register-poor, but otherwise fine), as are most 16-bit architectures. The msp430 is a good example of a C friendly ISA.
8-bit architectures vary much more. For example, the AVR meets most of the criteria but not all - the different memory access to RAM and Flash being the biggest sticking point, followed by its limited pointer capabilities. The PIC (at least, the PIC16 family which I have used) fails on every point. The 8051 falls somewhere in between.
Why are PICs C unfriendly?
o limited hardware stack size (typically 8 levels)
o limited ability to use pointers (only one FSR, and it requires use of banking)
o banking of variables and code space
o lacks multiply and divide on the 16 series
o rudimentary instruction set
You forgot the lack of general purpose registers, everything has to go through the single W (for wretched) register. You spend all day swapping values in and out of it.
At the top level, 'C friendly' means that there's a good correspondence between what C needs in a processor and what the processor delivers.
C needs a processor with:
* A deep stack that is visible to code (for data into/out of routines)
* A good method for indirect data retrieval (i.e. good indexing)
Lots of registers are nice, but not necessary. If the processor has an underlying Harvard architecture, it should appear to be von Neumann.
Among the 8-bit processors that I know of, the PIC 16xx stinks (I don't know about the others), the AVR is very good, the Freescale 6811 is good but a bit slow because of its lack of registers, the 8051 is quite awkward, and the 8080/Z-80 (and therefore the Rabbit) is probably as good as the 6811, but at the cost of driving the compiler guys a bit mad.
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
For multi-threaded applications (which includes anything that uses interrupts), atomic read/write of as many of C's basic types as possible makes life way easier. The H8 (the high end, anyway) does atomic 8/16/32-bit accesses. A 68xx/Z80/MSP430 does atomic 8/16-bit accesses; an AVR, only 8-bit. That means that on an AVR, all inter-thread shared variables larger than 8 bits have to be protected by mutexes of some sort. On an H8, you only need mutexes for atomic operations on _sets_ of multiple variables.
--
Grant Edwards   grante at visi.com
Because C is inherently von Neumann. A void pointer has to be able to point to anything. memcpy() expects to be able to copy from read-only objects as well as writable ones[1].
Implementing those sort of things on a Harvard architecture sucks rocks.
[1] On embedded systems, one often puts one's read-only data in ROM along with the executable stuff. On Harvard machines, that means that read-only data occupy a different address space than read-write data. So you end up having to jump through hoops when you implement any function that accepts a pointer that may refer to either read-only or read-write data.
It means that you can have a single pointer type that can address different memory spaces without any special mechanisms (e.g. threading a linked list of objects through ROM and RAM).
You can make a pointer abstraction that works across different memory spaces on Harvard architectures, but it involves a lot of work (basically storing 'tags' in the pointer and dereferencing through library routines) and a massive efficiency hit (look at the IAR PIC18 compiler for a good example of how to do it) - it's much nicer if "code" and "data" spaces are addressable as different areas of the same address space.
(In fact, I remember having discussions with Microchip when the dsPIC was being designed -- we kept emphasising that having some mechanism for mapping code space into data space was key to making it properly C-friendly...)
pete
--
pete@fenelon.com
Further to what others have said, and just to expand a little about what I said about the H8 in the PIC-knocking thread: I started out with the H8 by checking what the compiler had produced, as you do. With other processors to that point, I was used to seeing recognisable assembler, but somewhat clumsy/inefficient, and with some noise - automation without human insight. With the H8, which was designed from the ground up to be C-friendly, I had a pleasant surprise - the generated assembler was alien ;). Closer inspection revealed that it was very dense, and inhuman - the compiler was not thinking like me at all, but was managing to translate the requirement (stated in C) into highly efficient, albeit not hugely human-readable, code. This, IMO, is as it should be.
As an exercise I wrote a small program in both C and assembler. The compiler beat me on code size and speed, hands down. (However I maintain I was no H8 assembler expert at that point. Leave me some dignity!)
[As an aside: the optimisation levels on the H8, however, were originally a bit weird (I forget which compiler - Hitachi changed from IAR to in-house and back again often enough for me to lose track - and I'm now using GCC via KPIT/GNU). I found that lower levels of optimisation would do weird things like push/pop parameter stacks even when no parameters were used, while higher levels would often break the code. Finding a level I had confidence in took quite a bit of trial and error.]
This is comp.arch.embedded, so I assumed we were talking about "C-friendly" in the context of embedded systems, and cost is always a factor in embedded systems. Some architectures simply don't provide a way to add a second ROM. If you don't have an external bus, then it's not a matter of wanting only one ROM. One ROM is all there is.
True. If there is a way to map part of ROM into data space, that makes things much more C friendly. Adding a second data-space ROM would also work, but in my experience that's pretty much a moot point.
--
Grant Edwards grante Yow! This is a NO-FRILLS
at flight -- hold th' CANADIAN
visi.com BACON!!
First, a stack for parameters and local variables, with efficient stack-relative addressing modes.
Second, available and efficient general indirect addressing modes.
The hoops C compilers have to jump through to make up for the lack of these features on processors like the PIC16 are frightening and often result in grossly inefficient code.
At one point, "C friendly" meant: does it have a stack, and is it a von Neumann memory model? C friendly required support for the seven basic arithmetic and logical operations and run-time resolution of pointer references. A lot of that has changed. The language, the applications and the support tools have all improved and changed.
Traditionally, though not a requirement, arguments and locals were passed on a data stack. Local variables allocated on the stack meant that functions could be re-entrant. The von Neumann memory model is a linear single address space.
The things that have changed: compilers are a lot smarter about doing as many things as possible at compile time, which untangles many of the run-time requirements.
The C language has gone through many changes in the last 15 years. The first change was a recognition that not all processors have the ideal resources to run applications on. The second was that all applications are not created equal, and the application goals may be met by specialized hardware - a PIC in an instrumented tennis ball, for example, is a better choice than my desktop. Some C code may run on both platforms, but UNIX is not a requirement in the tennis ball and will not run on a small PIC.
The specific C language changes that soften the C friendly requirements were driven by organizations like MISRA, which pointed out the dangers of some C capabilities and the then-current C restrictions on memory space and low-level processor access.
MISRA and organizations like it pointed out that algorithm- and size-specific data types are essential in moving code from one platform to another. C99 addressed that issue. ISO TR-18037 is a document that specifically addresses C on embedded systems and documents compiler vendors' practice over the last 15 years or so of getting around the von Neumann memory model requirements. This document essentially does three things.
1) Formally defines support for multiple memory spaces, including application-defined memory.
2) Adds C support for direct processor access - the biggest reason that asm was needed. This provides access to processor registers and condition codes.
3) Adds new fixed-point data types to the C language.
Recent compilers that have been written using these provisions can target C to most new processors with the 7 fundamental operators, conditional execution and run time pointer dereferencing.