Moving from 8051 to AVR

Ian Bell wrote: [...]

Ooooh yes! Most other microprocessors (at least the 8-bitters) look "mentally challenged" in comparison.

--
http://www.flexusergroup.com/
Reply to
Bjarne Bäckström

It's a general rule that people use higher-level constructs in higher-level languages. It's possible to do object oriented programming in C or even assembly (I did some in Z80 assembly as a kid), but it's a lot easier with C++ or a higher level language. You don't often find hash table structures in an assembly language program, but you use them all the time with Perl or Python. I fully agree that it does not mean they are better - you use the structures and the language that suits the task.
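To make the "OOP in C" point concrete, here is a minimal sketch of my own (not from the original post): the usual trick is to carry behaviour along with the data as a function pointer.

    struct shape {
        int (*area)(const struct shape *self);   /* "virtual" method slot */
        int w, h;
    };

    static int rect_area(const struct shape *self)
    {
        return self->w * self->h;
    }

    /* Callers dispatch through the pointer, much as C++ dispatches
       through a vtable. */
    static int area_of(const struct shape *s)
    {
        return s->area(s);
    }

A caller sets up e.g. struct shape r = { rect_area, 3, 4 }; and gets 12 back from area_of(&r).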

I didn't specify that I was meaning the specific use of a data stack, which explains the misunderstanding.

Yes, but then you use different instructions to access the code space data. The fact that the same register is used does not make it the same sort of pointer (just like the Z pointer in the AVR).

I guess there is no fixed terminology for address modes. By "indexed", I meant "register content plus an offset".

You started a little earlier than me - at that time, I was using a TI-99/4A, programmed in Basic (there was no access to assembly or machine code). I didn't start using assembly until 1984, when reading through an Open University course on cpu design, and a year or two later I got a Spectrum with a Z80A, which was a nice cpu.

Reply to
David Brown

Eek! If that's the case, then I fully agree with Ian Bell. If you program an AVR like it's a PC, using dynamic memory, floating point, printf, and all the rest, then you will have a terrible system. Even the typical "everything is an int" attitude learned from standard C programming will cost you hugely when you move to an 8-bitter.
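To illustrate the cost (a hedged sketch of my own, not from the thread): on an AVR an int is 16 bits, so even a simple loop index doubles the register pressure and the arithmetic compared with an 8-bit type.

    #include <stdint.h>

    uint16_t sum_int(const uint8_t *buf)
    {
        uint16_t total = 0;
        for (int i = 0; i < 100; i++)      /* 16-bit index: a register pair, multi-step compare */
            total += buf[i];
        return total;
    }

    uint16_t sum_u8(const uint8_t *buf)
    {
        uint16_t total = 0;
        for (uint8_t i = 0; i < 100; i++)  /* 8-bit index: one register, single-cycle ops */
            total += buf[i];
        return total;
    }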

Being C friendly means that you can write appropriate embedded C code for the chip and expect efficient code to be generated from it. It doesn't mean that anyone trained in C programming on PCs can write decent programs on the chip.

Getting rid of global variables is not "good programming practice". There are times when global variables are the right choice, and times when they are not. For some sorts of programming, it makes sense to encapsulate everything in structures or classes, and pass around pointers to data rather than directly accessing global variables. For other sorts of programming, global variables are the most appropriate tool.

For example, when programming on a PC, if you have a function that calculates two results and returns an error code, you might declare it as:

    int foo(int arg, int *pResA, int *pResB);

The advantages of avoiding globals are better control of your namespace, less interdependencies between modules, and re-entrant and thread-safe code.

On an AVR (assuming re-entrancy is not an issue - a valid assumption for most code), it makes far more sense to use:

    int resA;
    int resB;
    int foo(int arg);   // Sets resA and resB

The advantages here are smaller and faster code for both the caller and the callee, clearer generated assembly (checking the generated assembly is important), and easier debugging.
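The caller side shows where the savings come from (my sketch, assuming a typical AVR compiler): no pointer arguments to marshal, and foo can store straight to statically addressed SRAM instead of going through pointer registers.

    /* PC style: the caller must take addresses and pass them along */
    int a, b;
    int err = foo(x, &a, &b);      /* foo stores its results indirectly */

    /* AVR-friendly style: a plain call; results land in resA and resB,
       which the compiler can reach with direct addressing */
    int err2 = foo(x);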

I'll not dispute that - there are reasons why the AVR instruction set was designed like it is (though I think a few mistakes were made).

It won't excel on anything if you let PC-style C programmers anywhere near the chip without proper re-education, although I'll agree they would do even worse when faced with an 8051.

Reply to
David Brown

Another good thing about the msp430 is the quality of the open source tools - both gcc, and all the supporting tools (including the gnu assembler :-) For debugging, I'd recommend an Olimex parallel port debugger, priced at around $10.

There are a number of other tool options too (at least, on Windows) - ImageCraft does a nice compiler for greater newbie ease-of-use without an excessive cost.

Reply to
David Brown

Hence the past tense in my comment. ;)

IIRC, PLM wasn't "stack-dependent" like C is, so it fit much better on the 8051. I've also seen languages for the 8051 not entirely unlike C (these languages were usually called "C", a fact which would have caused fits among the denizens of comp.lang.c) which would generate code not dependent on having a stack for data. Like PLM, they would do a rather clever call-tree analysis and then allocate a block of "static" RAM onto which all local and dynamic variables were overlaid. Of course, that means you gave up things like re-entrancy.
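A hedged sketch of what such a stack-less compiler effectively does (my illustration, expressed back in C): locals and arguments become statics, overlaid with those of any function that can never be live at the same time.

    /* What you write: */
    int scale(int x)
    {
        int tmp = x * 3;
        return tmp + 1;
    }

    /* Roughly what the call-tree analysis turns it into: */
    static int scale_arg_x;     /* the caller stores the argument here */
    static int scale_tmp;       /* may share RAM with locals of other functions */

    int scale_overlaid(void)
    {
        scale_tmp = scale_arg_x * 3;
        return scale_tmp + 1;
    }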

--
Grant Edwards                   grante             Yow!  I feel like I am
                                  at               sharing a "CORN-DOG" with
Reply to
Grant Edwards

I always thought that Modula-2/3 would have been ideal languages for embedded systems: much safer than C. Actually Ada has its problems, but I've always wanted to give it a try as well.

--
Grant Edwards                   grante             Yow!  I'm working under
                                  at               the direct orders of WAYNE
Reply to
Grant Edwards

Actually I quoted Dennis Ritchie (lest I be accused of plagiarism) somewhere down the thread.

The VM sounds a lot like Forth, but the language is very C-like. UCSD Pascal's "P-code" was similar, but more of a byte-code instead of actual addresses of thunks.
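For anyone who hasn't met the distinction, a rough sketch (mine, in C rather than anything the original systems used): a byte-code VM decodes opcodes in a loop, while threaded code is just a list of addresses of thunks executed in order.

    #include <stdio.h>

    /* Byte-code style (P-code-like): decode-and-dispatch loop */
    enum { OP_INC, OP_PRINT, OP_HALT };

    static void run_bytecode(const unsigned char *code)
    {
        int acc = 0;
        for (;;) {
            switch (*code++) {
            case OP_INC:   acc++;               break;
            case OP_PRINT: printf("%d\n", acc); break;
            case OP_HALT:  return;
            }
        }
    }

    /* Threaded-code style (Forth-like): no decoding, just call each address */
    typedef void (*thunk)(int *acc);
    static void t_inc(int *acc)   { (*acc)++; }
    static void t_print(int *acc) { printf("%d\n", *acc); }

    static void run_threaded(const thunk *prog, int n)
    {
        int acc = 0;
        for (int i = 0; i < n; i++)
            prog[i](&acc);
    }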

--
Grant Edwards                   grante             Yow!  I want to read my new
                                  at               poem about pork brains and
Reply to
Grant Edwards

While that may be a worthwhile goal, the "PC Programmer" is going to have to change so many "programming manners" to overcome "non-CPU-core" differences in the target environment and OS, that it's not much extra work to deal with the fact that the CPU core is different.

--
Grant Edwards                   grante             Yow!  I want to read my new
                                  at               poem about pork brains and
Reply to
Grant Edwards

Global variables should only be used when necessary. They should not be the standard way of doing things.

I think a function which returns two results is in itself bad programming practice. It makes the code less readable.

And less readability.

Which is the point...
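For the two-results case discussed above, one common alternative (a sketch of my own, not Ulf's suggestion) is to return a small struct by value, which keeps the interface explicit without out-parameters - though on an 8-bitter the struct copy may cost more than the global-variable approach.

    struct foo_result {
        int resA;
        int resB;
        int err;
    };

    struct foo_result foo(int arg);   /* returns all three values at once */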

--
Best Regards,
Ulf Samuelsson
Reply to
Ulf Samuelsson

I am pretty sure it isn't a coincidence, as that is how array copying is done no matter what the underlying instructions say. The VAX carried it to the logical, if baroque, extreme, with

d[i++] = s1[j++] * s2[k++]

resolving to a single machine instruction.
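The classic copy idiom shows the pattern those auto-increment addressing modes were built for (my illustration, not from the post):

    void copy_bytes(char *dst, const char *src, unsigned n)
    {
        while (n--)
            *dst++ = *src++;   /* each access maps straight onto a post-increment mode */
    }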

Never read that one before.

At the time, there was a huge debate in academia about the possibility of using a high-level language to write systems code. C came from Bell Labs and Bliss came from CMU at about the same time. Some folks even tried to use PL/I for systems-level work; I think Multics had a fair amount of success with it. And of course, Unix is a pun on Multics, because Bell Labs couldn't afford to give them a good 36-bit machine like they wanted.

I much preferred Bliss, but C won out once AT&T released Unix and C to the academic world.

--
Pat
Reply to
Pat Farrell

I agree there is certainly a class of high volume applications that are using high level languages. I was involved in a set-top box development - it used Linux and a bunch of standard libs, so there was not a huge amount of code to write and only a few programmers were needed. I have also been involved in mobile phone development, and I can tell you that most of the complexity is in the silicon (including much of the protocol stack), and pretty much the only C code written is for the user interface. Even this is disappearing with the advent of Java. That said, I am certain there are other high volume apps that do use lots of high level code.

There's a whole bunch of other stuff in between like DVD players, printers, keyboards, alarm systems and so on. Like you I don't really know. I'm going to do some googling to see if there are any stats out there.

Ian

Reply to
Ian Bell

I did some work for a certain hole in the wall company about 20 years ago and they used a range of peripheral boards, all 8051 based and all programmed in PLM51.

Ian

Reply to
Ian Bell

I only meant to correct your statement that DPTR points 'only' to external RAM.

For my sins I designed the Dragon 32 ;-)

Ian

Reply to
Ian Bell

Thank you for making my point better than I could myself. That is *precisely* why a 'C friendly' 8 bit microcontroller is NOT an advantage.

The worst kind of programmer you could possibly imagine using on an 8 bit microcontroller development is a person educated at university on a PC. Heavens preserve us from the computer scientist - now there's an oxymoron.

Ian

Reply to
Ian Bell

Cool, I remember those! My first machine was a UK101 (kit). I guess the Dragon would have been a couple of years after that.

--

John Devereux
Reply to
John Devereux

I generally agree with a lot of what you write, but this seems a thin point if you are suggesting that it is Atmel's intent to make a chip that is friendly to C programmers who know as little about embedded programming as a PC programmer without such education might...

I haven't used C for the 8051 (always used assembly) nor for the AVR (always assembly as well -- which I enjoyed doing.) But with every C compiler I have used for small embedded CPUs, the vagaries of the language implementation (#pragmas, special qualifiers, etc.) were the least of the issues I worried over. Pretty much, it was everything else, and little to do with whether or not the CPU was C-friendly at the instruction level. I can't recall ever worrying much about that.
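For readers who haven't met those vagaries, a typical example (mine, not Jon's) is getting constant data placed in flash: Keil's C51 uses a non-standard memory-class keyword, while avr-gcc goes through avr-libc's PROGMEM and explicit read helpers.

    /* Keil C51 style (non-standard keyword, shown only as a comment):
       const char msg[] code = "hello";                                  */

    /* avr-gcc / avr-libc style: */
    #include <avr/pgmspace.h>

    static const char msg[] PROGMEM = "hello";

    char first_char(void)
    {
        return pgm_read_byte(&msg[0]);   /* flash needs its own load instruction (LPM) */
    }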

If I were a compiler vendor considering porting an existing compiler over to a CPU, then I'd probably care more. :)

Jon

Reply to
Jonathan Kirwan

I take the point, but there is a grain of truth in what Ulf is saying.

I started out using assembler, then "graduated" to C on a HC11. I had to use various nasty habits to get efficient code, due to limitations in the CPU architecture (or compiler, perhaps).

- prefer global to local variables

- avoid accessing structures through pointers

- prefer "char" sized variables to "int"s

Etc.

Then I moved on to AVR, and found that these were no longer needed. It took me a while to unlearn them.

I am now using ARM, and those habits are actually actively counterproductive.

ARM *likes* everything in local variables, since it can put them into registers (or an efficiently accessed stack frame). It *prefers* access via pointers over absolute addresses, and handles 32 bit ints at least as well as chars. The most "natural" C programming style is also the most efficient.
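A small before/after sketch of that point (mine, not John's): the "natural" style below is exactly what an ARM compiler likes - locals held in registers, data reached through a base pointer with small offsets, and 32-bit ints throughout.

    #include <stdint.h>

    struct sensor {
        int32_t raw;
        int32_t offset;
    };

    int32_t scaled(const struct sensor *s, int32_t gain)
    {
        int32_t v = s->raw - s->offset;   /* locals live in registers */
        return v * gain;                  /* fields reached via base register + offset */
    }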

Of course there are still things to be avoided, and one would still hesitate before using "malloc", floating point maths or a fully grown printf. But even relatively low-end single chippers *can* use these, and it can make things easier sometimes.

--

John Devereux
Reply to
John Devereux

On Thu, 09 Feb 2006 12:02:41 +0000, Ian Bell wrote:

I was pretty sure you'd cover this, and you did. I had written something then just held off.

In addition to cost, there are other not-uncommon reasons -- power consumption (and what that may yield in additional cost and footprint), size, breadth and depth of part-sourcing options, package types, etc.

There is an infrequently mentioned case -- that of the toolset's availability over long product lifetimes, where the continuity of a wide variety of factors (programmers, company ownership, and the very existence of the tool vendor, for example) is all in question.

If I select a C compiler vendor who uses a form of copy protection tied to the disk drive or cpu, for example, and issues a license for a particular machine -- what happens when a client calls me up 5 years later and wants a new modification made? Whatever machine that compiler was once on is likely long since gone, along with the unlock key for it. Even if the toolset's install package was saved away, there is little or no guarantee that it will be usable on a new machine without a new unlock key (the code may exceed the "free use" threshold, there may be no test-drive period at all, etc.). If the company is no longer supporting the old tool (having gone on to greener pastures, let's say) or is otherwise long gone (as, for example, is the case with Lattice C for the IBM PC), then this could leave the entire project in serious difficulties.

Typically, though, this does NOT occur with assembly, as the tools are usually unlimited and otherwise quite capable. Putting away the installation kit, when using assembly coding, is as good as making sure that you can get back to where you need to be -- so long as the necessary hardware can also be unearthed.

And -insensitive ones, as well. ;)

Jon

Reply to
Jonathan Kirwan

I just think that you don't like architectures where hard-earned skill in wringing out the best of the architecture is not necessary, because the obvious way of doing things is also the best way to do it.

Maybe, but any new programmer entering the industry is very likely to have that background. A good person educated at a university has written compilers and will understand what happens when he/she uses certain high level constructs. They can probably write good C code for an 8 bitter as well without having to adopt bad habits.

I have seen object-oriented C code for mobile phone accessories which was a pleasure to read, and it still fitted well within the 4kB flash available. Most of the few global variables used could be optimized into registers on the AVR for a 20% code reduction. The competition required 8 kB...
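One way that optimization is done with avr-gcc (a hedged sketch of mine - Ulf doesn't say which mechanism was used) is GCC's global register variable extension, which pins a global to a fixed register so every access avoids an SRAM load/store. It has to be declared consistently in every translation unit, and the chosen register must be kept away from library code.

    #include <stdint.h>

    register uint8_t sys_flags asm("r3");   /* global bound to r3 for the whole program */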

Sure beats the hacks I have seen 8051 programmers produce in any language.

--
Best Regards,
Ulf Samuelsson
Reply to
Ulf Samuelsson

I think this is more an argument for using gcc instead of commercial tools!

--

John Devereux
Reply to
John Devereux
