Moving from 8051 to AVR

Ah, that explains it! Thanks.

Meindert

Reply to
Meindert Sprang

How on earth is that an advantage to someone trained to code on a PC?

Ian

Reply to
Ian Bell

A programmer can continue to program as usual, and the code will be OK. With the 8051 the programmer has to check the generated code and rewrite, rewrite, rewrite to make it fit. The 8051 simply wastes the programmer's time.

--
Best Regards,
Ulf Samuelsson
ulf@a-t-m-e-l.com
This message is intended to be my own personal view and it
may or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson

Sorry, my mistake. Of course the AVR is a RISC machine with a fairly orthogonal instruction set. I still don't see how its being a register-based RISC makes it easier to program for a PC-taught coder.

Ian

Reply to
Ian Bell

With a regular architecture, the optimizer can do a better job and the user needs to spend less time manually trying to improve the output. The time saved can be spent on other projects.

--
Best Regards,
Ulf Samuelsson
ulf@a-t-m-e-l.com
This message is intended to be my own personal view and it
may or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson

I agree about coding the 8051 in any high-level language. The compiler may need to go through astonishing contortions to produce good code, and a subtle change in the source code may change the generated code drastically.
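
For example, a sketch of how drastic that can be (hypothetical buffer, assuming something like Keil C51; the exact output depends on the compiler and options). Merely changing the type of a loop index can swing the generated code from a tight DJNZ loop to a pile of 16-bit index arithmetic:

unsigned char buf[100];

void clear_fast(void)
{
    unsigned char i;              /* 8-bit index: native on the 8051 */
    for (i = 0; i < 100; i++)
        buf[i] = 0;
}

void clear_slow(void)
{
    int i;                        /* 16-bit index: multi-instruction add/compare */
    for (i = 0; i < 100; i++)
        buf[i] = 0;
}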

Any RISC processor is a PITA (pain in the ... lower back) to program in assembly language. The RISC designers' intention is that the tedium of code generation be left to the compiler.

The key to good embedded code on any small controller is to look at the generated assembly code and see how it changes when the source is rewritten as an alternative expression of the same algorithm. This quickly leads to an insight into how to write the code.
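
In practice that means generating assembly listings for each variant and diffing them. A minimal sketch, assuming the GNU AVR toolchain (any compiler with a -S or listing option works the same way):

/* sum.c -- generate a listing, rewrite the loop, regenerate, compare:
 *
 *   avr-gcc -Os -S sum.c -o sum_v1.s
 *   (rewrite the loop as an alternative expression)
 *   avr-gcc -Os -S sum.c -o sum_v2.s
 *   diff sum_v1.s sum_v2.s
 */
unsigned char sum(const unsigned char *p, unsigned char n)
{
    unsigned char s = 0;
    while (n--)                   /* try a for loop or a pointer-compare form */
        s += *p++;
    return s;
}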

I dropped the 8051 in favour of the AVR (or an ARM if the problem grows beyond a few tens of kilobytes).

--

Tauno Voipio
tauno voipio (at) iki fi
Reply to
Tauno Voipio

Here we agree: the AVR family member that appeals most to me is the Tiny24 series, i.e. small applications.

A couple of years ago I looked at porting a very simple C51 project to the 90S1200, but it simply had too few resources. It seems you cannot live by registers alone; you have to go to RAM eventually.

For larger embedded designs I use the C51, and for larger still, the ARM is the obvious candidate.

-jg

Reply to
Jim Granville

Further to this, it is interesting to see what they changed in the new 32-bit processor, called by Atmel marketing the AVR32 (it does rather reveal what is missing in the 8-bit AVR...) [If it wasn't broke, they would not have fixed it :)]

** ALL registers are now ~equal. The location dependence of registers has gone.
** Added register bank switching and better interrupt handling [just like the 80C51..]
** Added opcodes like XCH, which swaps REG-Memory [with some caveats!]
** Variable-length opcodes - selected 16-bit ones have 32-bit versions with more "reach".
** Register count dropped to 16.
** Added R/M/W for immediate memory, a la 80C51: you can now SET, CLR, CPL any bit in 64K => atomic access to a large SFR space [just like the 80C51].

-- BUT no opcodes to TEST a bit, or move bit->C, or move C->bit?! Surely a strange omission? Why do it half-baked? Did I miss something?
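
For reference, here is what the 80C51 bit operations being compared against look like from C (a sketch assuming the Keil C51 compiler and its sbit extension; each statement typically compiles to a single atomic bit instruction):

#include <reg51.h>

sbit STATUS = P1^0;               /* bit 0 of port P1, bit-addressable space */

void demo(void)
{
    STATUS = 1;                   /* SETB P1.0 - atomic set        */
    STATUS = 0;                   /* CLR  P1.0 - atomic clear      */
    STATUS = !STATUS;             /* CPL  P1.0 - atomic complement */
    if (STATUS) {                 /* JB   P1.0 - test bit + branch */
        /* react to the bit */
    }
}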

** Nice sign-extend opcodes
** Short compares
** Even a Boolean->Byte opcode, for the poor C programmers
** Heaps of optional DSP and JAVA support

-- NO immediate address, and even Mov Rn,#K has only options for 8-bit sign-extended [16b opcode] and 21-bit SE [32b]. So, no idea how you load a 32-bit constant into a register?! Maybe I missed something?

-- NO memory-to-memory move; it ALWAYS seems to be pointer-indexed? Though there is a tiny-page move, a 16-bit opcode that uses a 5-bit offset index, and a 32-bit version that allows 64K indexes, so these do need a predefined register as base...

-- No Register page selection...

AVR32 is also actually many cores, so making efficient tools will be difficult. [Atmel's Variable RISC?] The words "implementation defined" appear 37 times!!

Also, as you would expect, the benchmarks do not compare against their real competition, like the TriCore, Cortex, (PowerPC?), or something like this:

formatting link

Q: Will this come in FLASH versions, and at what speed?

-jg

Reply to
Jim Granville

I think AVR in this case could mean Audio-Video RISC.

I don't think the ST "Nomadik" is for the average Joe. Arrow has run a series of seminars inviting customers to listen to the ARM story told by 7 different vendors, and the ST guy never talks about Nomadik.

Those details are not released yet.

The demo at Nurnberg runs full Linux 2.6.14 (if I am not mistaken) (not uCLinux), and this definitely does not fit into internal flash. The coolest thing I have found is the full Nexus debug interface allowing debugging over JTAG while the application is running.

--
Best Regards,
Ulf Samuelsson
ulf@a-t-m-e-l.com
This message is intended to be my own personal view and it
may or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson

Correct me if I am wrong, but doesn't that defeat the purpose of the C language?

The C language (I'm sure you know) was designed so that the programmer wouldn't have to worry about the nuances of the underlying microprocessor architecture. If the programmer has to tune their code (I'm talking about methods suitable for embedded systems, not PC programming methods), then the compiler wouldn't be serving its purpose.

Reply to
Isaac Bosompem

And you seriously think that a PC-trained programmer will do any better simply because he uses a RISC part instead of a CISC one?

Ian

Reply to
Ian Bell

Damned if I know. C is not a baseline skill in an EE degree. It's possible to include it in the design electives part of the course, naturally. But the only programming skills that are mandatory are (a) some assembly language in the third? fourth? year, and (b) some generic computer science subjects, which are of course driven by the buzzwords du jour.

Reply to
larwe

Partially, yes.

Microprocessors weren't even around when C was designed. C was largely based on the "B" language, which didn't even compile to native machine instructions; it compiled into a low-level threaded interpreted language that implemented a stack-based virtual machine.

Whatever. Feel free to not use C compilers on microprocessors.

The rest of us will continue to use them even though they're imperfect and "not serving their purpose".

Your line of reasoning would seem to lead one to the following:

A Doctor's purpose is to keep his patients alive and healthy.

All his patients are going to get sick and die.

Therefore, doctors are useless.

--
Grant Edwards                   grante             Yow!  If elected, Zippy
                                  at               pledges to each and every
                               visi.com            American a 55-year-old
                                                   houseboy...
Reply to
Grant Edwards

It seems like I have struck a nerve :)

I don't really feel that way; I use C myself to write code for both the PC and MCUs. I do use a different set of rules for each, though.

I was just alluding to the fact that people like to sell "C" as the ultimate portable programming language and go on saying "C code from one machine will run fine on another machine unmodified", yet here we are talking about "tuning" code to a particular unit. I even recall an article saying that code tuning died out along with assembly language.

-Isaac

Reply to
Isaac Bosompem

Not really, I just don't see what your point is. Who claimed compilers were perfect?

And sometimes you need a different set of rules for a 6811 target and for an ARM target. That's just the reality of the situation.

People around here claim that?

For some definitions of "run fine", that's mostly true. For the one used in real-time embedded work, it's only partially true.

Perhaps to a particular target architecture. I've never heard of anybody tuning code on a unit-by-unit basis. Hopefully the manufacturing guys have things figured out well enough that code tuned for one unit will run the same way on the next one.

They've never done embedded system work with tight memory or processing time constraints.

You can't believe everything you read. :)

--
Grant Edwards                   grante             Yow!  I just put lots of
                                  at               the EGG SALAD in the SILK
                               visi.com            SOCKS --
Reply to
Grant Edwards

Well, not people here, but in general; for example, a lot of the introductory C books that I used to learn the language make this point.

Sorry about that, my stupid typos. I meant architecture there.

I did believe that from the general media; I guess I've learned my lesson.

I am a student myself but I would like to get into the industry.

Now, how much time do you guys spend tuning your code for a particular application?

Say you have your code ready for your application and find out it is a bit too big or runs too slowly on your MCU. When do you move to a more powerful MCU, and when do you decide to tweak?

Reply to
Isaac Bosompem

Depends what you mean by tuning.

It's poor engineering practice to push hard up against the limits unless there is a _real_ good reason. When I finish my code, it goes into a minimum two-week (usually six-week) intensive alpha test program. After passing this, it goes to a minimum eight-week beta in the hands of external testers.

If either of these test stages shakes out a problem that requires code additions to fix, I'm in a distinct pickle if I only have three bytes of code space or one nibble of RAM left.

When do I move to a more powerful MCU? Depends on the family. If a vendor sells a particular micro in code- and pin-compatible variants with 8K, 16K or 32K of flash, and my program hits 8.5K of code, I would probably migrate up to the 16K variant unless the cost difference is really critical.

On the other hand, if moving up to the next size involves a new PCB layout, I'd do some pretty strenuous things to avoid this.

Reply to
larwe

"Introductory" books usually contain over-simplifications, over-generalizations, and stuff that's just plain wrong.

I wondered if that's what you meant. ;)

It varies. Usually, once you get the hang of how a particular compiler/target combination works, you can just "do it right" the first time. Mostly, anyway.

That said, I once spent about a week working on a checksum routine that was probably no more than a couple dozen lines of code.

If you run out of data space before you run out of code space, you may end up spending days converting "int" or "char" struct members to bitfields, or converting a string of if-then-elses to a switch statement (or the other way around). Or you may switch to a table-driven scheme instead of either if-then-else or switch.
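
A sketch of that last trade-off (hypothetical command codes and handler names; actual sizes depend on compiler and target). The table-driven version keeps the dispatch code at a fixed size and pays for it with a small const table in ROM:

typedef void (*handler_t)(void);

void cmd_start(void);             /* hypothetical handlers */
void cmd_stop(void);
void cmd_reset(void);

/* if-then-else chain: code grows with every case added */
void dispatch_if(unsigned char cmd)
{
    if (cmd == 0)       cmd_start();
    else if (cmd == 1)  cmd_stop();
    else if (cmd == 2)  cmd_reset();
}

/* table-driven: fixed lookup code plus a const table */
static const handler_t handlers[] = { cmd_start, cmd_stop, cmd_reset };

void dispatch_table(unsigned char cmd)
{
    if (cmd < sizeof handlers / sizeof handlers[0])
        handlers[cmd]();
}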

You want to find that out as early as possible.

You usually run some tests on "evaluation boards" containing the processor you plan on using. That gives you a good idea of whether you're going to have enough throughput. You can usually get decent code/data size estimates from similar products, along with compiling a few chunks of code with the compiler you're going to use.

If you really have no experience or data, you make conservative guesses and try to find out the real answers before the board is laid out. Don't make the guesses _too_ conservative, though, or people will start to assume you're padding things to make life easy for yourself. ;)

If the hardware is already designed, you usually tweak. If the hardware is already in production, you almost always tweak. There often isn't a way to move to a more powerful MCU without a major redesign -- and you _don't_ want to be the cause of that. Sometimes you don't have the money to buy a better part or the mA to run one, so you tweak.

--
Grant Edwards                   grante             Yow!  ... or were you
                                  at               driving the PONTIAC that
                               visi.com            HONKED at me in MIAMI last
                                                   Tuesday?
Reply to
Grant Edwards

I think you need to ask yourself:

Why do you use different rules? Because you have to. That does not mean it is the ideal situation.

Would it not be a good thing if you could use more common rules and fewer target-specific rules? Does it not take longer to learn two sets of rules than one? If you need a set of rules per target processor, what is the likelihood that you can find someone trained to use those rules? To me it is obvious that, all other things being equal, I prefer the CPU which lets me learn fewer rules and still do the job.

It is easier to optimize for processors which do not have multiple strange ways of doing things. It is easier to optimize for processors with lots of general-purpose registers that do not require an accumulator. No use arguing this with me; ask your local compiler expert.
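
To make that concrete, a sketch (illustrative of the argument, not measured compiler output). In a loop like this, the two pointers, the count, and the running sum are all live at once; an orthogonal register file can hold them all, while an accumulator machine shuttles each operand through the accumulator:

unsigned mac(const unsigned *x, const unsigned *y, unsigned char n)
{
    unsigned acc = 0;
    while (n--)
        acc += *x++ * *y++;       /* four live values: easy with 16+ registers */
    return acc;
}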

--
Best Regards,
Ulf Samuelsson
ulf@a-t-m-e-l.com
This message is intended to be my own personal view and it
may or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson

The Russian SPARC clones came with a bug list applicable to that specific chip. Other chips on the wafer would have slightly different bug lists. The bug list was fed into the compiler generation system, which generated workarounds.

That is treating chips with RESPECT!!!

--
Best Regards,
Ulf Samuelsson
ulf@a-t-m-e-l.com
This message is intended to be my own personal view and it
may or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson
