AVR or 8051

Can anyone tell me when to use the AVR and when to use the 8051? We all know that the AVR has a fine instruction set and is much faster than the 8051. During product development, what makes you choose the 8051? Any comment is appreciated. Thanks

Reply to
Yukuan

No, I think not.

If it's a better fit for the job. Not just the core, but the available chips with their peripherals, prices, supplier(s), distributors etc. etc. Development costs may figure in there too.

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

"Yukuan" wrote

Do what you know: as in the catch phrase: "Building on core competencies." Master something new when you _have_ to.

Chasing after the uP of the month will leave you knowing a little bit of everything, but not knowing a lot about anything.

If you are picking a uP for you and your company to start using for the first time, then the AVR is a good choice.

If your firm has been using an 8051/68xx/H8 ..., and it will do the job, then you should stick to that standard.

If your old uP is not up to the task then the AVR can again be a good choice.

However, in general, anything an AVR can do an 8051 can do. 8051's have a long established record, the bugs are out (famous last words), lots of tools are available and many engineers are fluent with this processor.

For SOC/ASIC/FPGA an AVR has the advantage of needing less silicon.

--
Nicholas O. Lindan, Cleveland, Ohio
Consulting Engineer:  Electronics; Informatics; Photonics.
Remove spaces etc. to reply: n o lindan at net com dot com
psst.. want to buy an f-stop timer? nolindan.com/da/fstop/
Reply to
Nicholas O. Lindan

Perhaps a better question is why your choices are so limited - if your application could use AVR or 8051, then there are literally dozens of other possible candidates.

For my low-volume consulting projects where 8-bit cores are indicated, I use AVR because I'm familiar with the architecture, the costs are _reasonable_, and the tools are free and easy to use. Plus the commonality across the entire spectrum of feature sets is much greater than with 8051s (which need to come from different vendors if you're covering all possible performance/feature sets). E.g. one standard ISP header for all AVRs, close family resemblance of peripheral programming, ... ...

For day-job production projects, I use whatever makes sense, be that 8051, PIC, NEC 78K0, COP, MSP430, ...
Reply to
larwe

In article , Yukuan writes

No. You have to work that out for yourself.

Faster than which 8051? There are over 600 of them and some have single-clock cores. Also, the peripherals are often independent of the core.

Well, there are 600-odd variants from 30+ silicon vendors, plus another dozen or so IP cores and radiation-hardened parts, with ROM, EPROM, EEPROM, FLASH and external-memory versions, and memory from 1K to 16M.

A lot of free and cheap tools as well as high quality tools suitable for high end safety critical work.

There is masses of information and free software for the family as well as the fact that 90% of all embedded engineers have used the 51 at some time and are familiar with it.

That said, if you are well used to the AVR, have the tools, expertise and a code base, and the AVR will do the job, why use a 51? You will probably develop faster with the parts and tools you know.

Unless, of course, you are producing a lot of them and there is a suitable 51 that is a lot cheaper, so that the cost of buying a new tool set and learning a new part is worth the hassle.

All you need is Experience :-)

Some Chinese philosopher said: "Experience is what you get looking for something else." But I don't think he mentioned the 51 or the AVR.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ \/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\ /\/\/ snipped-for-privacy@phaedsys.org

\/\/ \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Reply to
Chris Hills

?! - you need to do more research, or is a smiley missing ?

Multi-sourcing, direct-memory opcodes, Boolean opcodes, interrupt structures, speed, power, analog performance, tool-chain performance, smallest packages, ...

-jg

Reply to
Jim Granville

I think you will find most 8051 chips to be single sourced except the very basic ones.

I think the RISC approach of load/store with many registers is well proven.

This is nice; the AVR can sort of do this in the I/O area, but so far there are not a lot of resources for it in the actual chips.

AVR interrupt processing is limited to 6 clocks, and the good compilers will not cause excessive pushes/pops. For really high performance you can keep global variables in registers, even in C, so no PUSH/POP is needed. Not even the PSR needs to be pushed if the instructions in the interrupt handler do not update the PSR.
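
For anyone who has not seen the "global variables in registers" trick, here is a minimal avr-gcc style sketch. The register choice and the timer/vector names are assumptions (they roughly match ATmega88-class parts), and the register must be reserved consistently across the whole project; treat it as an illustration, not a drop-in driver.

    #include <stdint.h>
    #include <avr/io.h>
    #include <avr/interrupt.h>

    /* Bind a global flag to a fixed register so the handler never touches
       SRAM for it. r16 is an arbitrary choice; every compilation unit (and
       any library code) must be kept away from it, e.g. by sharing this
       declaration everywhere or building with -ffixed-r16. */
    register uint8_t tick_flag asm("r16");

    /* The body compiles to little more than a single LDI. avr-gcc may still
       save SREG in the prologue, so this is not literally zero push/pop,
       but it is close to the minimum. */
    ISR(TIMER0_OVF_vect)
    {
        tick_flag = 1;
    }

    int main(void)
    {
        TIMSK0 |= (1 << TOIE0);   /* enable Timer0 overflow interrupt */
        TCCR0B |= (1 << CS01);    /* start Timer0 with a clk/8 prescaler */
        sei();

        for (;;) {
            if (tick_flag) {
                tick_flag = 0;
                /* ... periodic work ... */
            }
        }
    }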

Very few chips on the market have as fast interrupts as the AVR.

A 100 MHz 8051 will not be faster than a 48 MHz AVR, which is the current top of the line for standard micros. On an 8051 you need three instructions to add two bytes, and 16-bit performance (as required by ANSI C) is horrible. Most 8051s are a lot slower than most AVRs.
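
To make the instruction-count argument concrete, a tiny example; the sequences described in the comment are typical compiler output rather than guaranteed, and depend on compiler, memory model and options.

    #include <stdint.h>

    /* The same ANSI C addition costs very different instruction counts on
       the two cores. An AVR compiler can typically keep a and b in register
       pairs and emit an ADD/ADC pair; a classic 8051 must stage each byte
       through the single accumulator (MOV/ADD/MOV for the low byte,
       MOV/ADDC/MOV for the high byte), roughly tripling the sequence.
       Treat these counts as indicative, not exact. */
    uint16_t add16(uint16_t a, uint16_t b)
    {
        return (uint16_t)(a + b);
    }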

I find that there is quite a strong relation between memory size and performance needs, and very few projects are lost on AVR performance. The AVR core runs at 70-80 MIPS in ASICs.

When you go above 64 kB of code you are normally in trouble with an 8051, and with an AVR up to 8 MB is supported.

Most 8051s use a lot more power than the average AVR.

Customers have achieved 16.5-bit resolution with the AVR ADC. There is a lot of new analog stuff in the new AVRs, especially in the AT90PWMxxx parts.
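
For readers wondering how a 10-bit converter is pushed that far: the usual technique is oversampling and decimation (Atmel describes it in an application note on enhancing ADC resolution). Whether that is exactly what the customer did is not stated, so treat this as a generic sketch; read_adc10() is a hypothetical driver call.

    #include <stdint.h>

    /* Generic oversampling-and-decimation sketch: to gain n extra bits from
       a 10-bit ADC, accumulate 4^n samples and shift the sum right by n.
       This relies on the signal changing slowly and on some noise (dither)
       being present across the samples. */
    extern uint16_t read_adc10(void);     /* hypothetical 10-bit ADC read */

    uint32_t adc_oversampled(uint8_t extra_bits)
    {
        uint32_t sum = 0;
        uint32_t samples = 1UL << (2u * extra_bits);   /* 4^extra_bits */
        uint32_t i;

        for (i = 0; i < samples; i++)
            sum += read_adc10();

        return sum >> extra_bits;         /* 10 + extra_bits of resolution */
    }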

Hard to measure; 8051 toolchains are typically a lot more expensive than the AVR toolchain. The AVR normally generates fewer bytes of code for a large program.

4 x 4 mm available for the smallest AVR.

--
Best Regards
Ulf at atmel dot com
These comments are intended to be my own opinion and they
may, or may not be shared by my employer, Atmel Sweden.
Reply to
Ulf Samuelsson

Well proven for what - proven to consume code memory in the many index loads that are needed? RISC made sense when memory was off-chip and large, but in an 8-bit uC you can have a working product in 64-256 bytes of RAM, so it only makes sense to have opcodes that can directly access that, no? See the Z8 for an example of a register-to-register uC design that understands that.

The C51 has register-bank switching, and the direct-memory and Boolean opcodes mean you can write functioning interrupts without pushing anything. Most C51s now have FOUR interrupt priority levels as standard. AVR interrupt priority handling?
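
In Keil-style C51 those features look roughly like this; the vector number, register bank and flag usage are chosen only for illustration, and the bit/interrupt/using keywords are Keil extensions, not ANSI C.

    #include <reg51.h>           /* classic 8051 SFR and bit declarations */

    /* A bit-addressable flag: the Boolean opcodes (SETB/CLR/JB) operate on
       it directly, with no accumulator use and no push/pop. */
    bit rx_ready;

    /* "interrupt 4" is the serial-port vector on a classic 8051; "using 2"
       switches to register bank 2 on entry, so the handler does not have to
       save the interrupted code's working registers. */
    void serial_isr(void) interrupt 4 using 2
    {
        if (RI) {                /* receive-complete flag (SCON.0) */
            RI = 0;
            rx_ready = 1;
        }
    }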

Wow - I missed the release of the 48MHz Flash AVRs - where can I get some ?

...and sustained that across replacements, temperature, and production? [And all with a 10-bit result register that has a 4-bit typ (no max) error band?!]

ALL of the highest-performance analog 8-bit uCs out there (and there are many: ADI, BB, SiLabs, TDK) use the 80C51 core.

That puzzles me - if the AVR core is so tiny, then how come SiLabs can ship an 8K/256R+ADC device in 3 mm, and Philips now also has 3 mm C51 devices with 1K/128R+ADC (both in 10/11-pin MLF), yet the 8-pin Tiny13 (1K/64R) die has to go into the 4 mm MLF20 package?

-jg

Reply to
Jim Granville

Well, what you lose by having to load from memory, you gain by not having to write it back all the time. I think compiler output of real applications proves that. Odd benchmarks that focus on specific strengths of a microcontroller will always confuse the issue.

Productivity and maintainability should also be discussed. If you write code optimized for maintainability, you will do well on an AVR. On an 8051 you will have a problem with a lot of local variables and recursion. You have to be much more careful, i.e. spend more time to write good code on a '51.
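
A small illustration of that last point (factorial is only a stand-in for any recursive routine): the plain C version below compiles naturally on the AVR, which has a real data stack, whereas typical small-model 8051 compilers overlay locals statically, so the same function needs something like Keil's reentrant attribute and pays for a simulated stack.

    /* Plain, recursive ANSI C: fine on an AVR. */
    unsigned int factorial(unsigned char n)
    {
        return (n <= 1) ? 1u : (unsigned int)(n * factorial(n - 1u));
    }

    /* On Keil C51 the equivalent would need the reentrant extension, e.g.
         unsigned int factorial(unsigned char n) reentrant { ... }
       which trades speed and code size for a simulated stack. */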

If your application fits in 64 bytes of SRAM, you are probably much more efficient if you can use the 32 internal registers for the most-used data.

When you need to work with devices needing a large amount of SRAM, like USB host controllers, then you lose out, since the USB host stack requires 64 kB by itself, and then you have to add space for the application.

No one has made a mobile phone based on the 8051, AFAIK, since you need megabytes of code. Plenty use the AVR in cellular phones.

So you can write AVR code and 8051 code without pushing anything. While priority levels are nice, this has not been mentioned by a single customer. My conclusion is that most people do not need nested interrupts.

The AT76C713 runs at 48 MHz; it is an SRAM-based device, but it is also a standard micro. Actually it runs faster, but then the USB does not run at that speed.

Atmel uses 0.18 µm for the ARM7, where the flash will run at 48 MHz at room temperature; it needs to degrade to 30 MHz over the industrial temperature range. A 7-8 ns flash, if available, would be impressive.

They needed more than 10 bit resolution for their product and managed to do it with the AVR ADC.

Availability of third party IP allowing anyone to build a C51 controller strengthens the C51 market. I think anyone that could legally get their hands on the AVR core would prefer to use that in their chips.

The core is only a small part of the device. So far I have had only one significant customer asking for 3 x 3 mm, but when we showed them the pinout of the 4 x 4 mm Tiny13, they said that since it only needs routing in the east-west direction and not north-south, they decided to drop their current 3 x 3 mm micro because they like the AVR.

I think in this case some companies may be using a more aggressive process technology like 0.18 µm. This typically means dropping 5 V capabilities.

--
Best Regards
Ulf at atmel dot com
These comments are intended to be my own opinion and they
may, or may not be shared by my employer, Atmel Sweden.
Reply to
Ulf Samuelsson

"Ulf Samuelsson" wrote in news: snipped-for-privacy@individual.net:

Something else to consider is the life expectancy of the particular AVR one chooses to use. Some models only seem to survive for a couple of years in the marketplace before being obsoleted. The bog-standard '51, whilst a "Model T" performance-wise, is still available 25 years on. Don't get me wrong, I think the AVR is a great device, but will the one I used in my last design still be around in 2030? M

Reply to
Mike Diack

"Mike Diack" wrote

JC Whitney still carries new piston rings, valves, distributors and such for the Model T. If enough are made, for a long enough time, then parts will be around forever.

This is something _very_ important to consider in the embedded world.

This is all old-hat to those wearing old-hats, but for those starting out:

Any uP used in cell-phones seems to have a 1-year lifespan, just like the cell phone it is designed into. Motorola obsoleted several cell phone wonders before production quantities became available.

Industrial controls are produced for some 30 years. Upgraded all the time, but it's the same basic system with the same uP. It is the same with mature consumer goods: ranges, washing machines, mixers, etc..

Second sourcing is still important in industrial products. A uP without a viable producing second source won't (shouldn't) get designed in.

With a 25-year history, the 8051 was the right choice for designs in the '80s. Will it survive for another 25 years? Probably: _lots_ of people make it, and the chances are some niche manufacturer will still be making it in the year 2030. Will the AVR still be around? Will Atmel still be around?

Odds on that all-in-ones (A/D, D/A, CAN, USB, I2C...) will soon be gone, replaced with new whizzees with the latest in 3-D graphics, sat-com, 8Mpix vision, T1...

That Nokia designed the AVR9000TMXSW54 into the new cell phone doesn't count for squat. That Maytag or Bosch designed a PIC16C54 into a washing machine timer does.

--
Nicholas O. Lindan, Cleveland, Ohio
Consulting Engineer:  Electronics; Informatics; Photonics.
Remove spaces etc. to reply: n o lindan at net com dot com
psst.. want to buy an f-stop timer? nolindan.com/da/fstop/
Reply to
Nicholas O. Lindan

There are VERY few uCs with second sources. Even for a popular core like 8051, every vendor adds their own twists and niggles (at least) or massive add-on functionality slices. So this requirement restricts you to a tiny number of specific devices. If you're buying a 4K/256b OTP 8051 variant from vendor A with second source at vendor B, moving up to 8K/512b might turn the part into single-source again.

At my [Fortune 50] employer we regard all uCs as single-source products. We do not design in a uC unless we have a guaranteed 5-year buy life on the part. We use truckloads of COPs (though, we do not use them in new designs), specific 8051s, venerable HPCs, ...

Reply to
larwe

In article , snipped-for-privacy@larwe.com writes

However, as there are about 600-odd 8051s out there, if the one you use disappears there will be a very similar one from someone else that will do the job. You have the same basic architecture and tools. If the SW is well designed you will probably only have to adjust a few lines of code.

So whilst there are few pin-compatible second sources, there are a lot of near misses that require very little hassle to use.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ \/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\ /\/\/ snipped-for-privacy@phaedsys.org

\/\/ \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Reply to
Chris Hills

In my field, as in many others I think, the porting effort (SW/HW engineering time) is trivial compared to the length and expense of the testing and qualification process that follows it. If we change a part number and firmware, we need to go through a minimum of three months QA functional testing (this usually takes > 4 months due to resource availability) and in most cases also FCC and UL re-cert. UL leadtimes are being quoted as 16 weeks now. All in all, it takes at least six to seven months to pull this "trivial" change through the pipeline, not counting production ramp-up and component leadtimes (some of our mask parts have 15-18 week leadtimes and that clock often cannot start to tick until regulatory approval is finished).

Historically we have seen it takes our team about 2 weeks to take a C project and port it to an *entirely* different microcontroller, and this can almost always run in parallel with the HW respin. It might only take a day or two to port to a similar micro, such as the situation you describe, but the 8-9 days thus saved are irrelevant compared with the 168-196 days of process after that.

I know you have a hard-on for the 8051 ( :) ) and there are other issues where we disagree too, but in our situation the "somewhat retargetable" nature of 8051 code is lost in the noise compared to the other hell that surrounds changing parts. We were talking here about second-sourcing, where one can switch between suppliers with utter transparency. The engineering time required to port between parts that aren't pin-compatible, binary-compatible, feature-compatible drop-in replacements isn't part of the issue.

Reply to
larwe

Your scenario is a bit worse than most of the projects I've worked on. But I'd have to agree that unless the second part is pin-for-pin, drop-in, same-part-number-on-the-BOM compatible, changing to a different architecture just isn't much more work than changing to a "similar" part with the same ISA but different pinouts and peripheral interfaces.

Once you've got to re-layout the board and re-write the peripheral handling code, you can almost as easily switch to a different architecture.

The last time I worked on a project with an honest second-sourced CPU was 15 years ago, using an 8086 in a DIP package. IIRC, the signal thresholds on one of the interrupt pins weren't _quite_ the same on the two parts, and some component value changes actually did have to be made when purchasing decided to buy the parts from the second source. :(

--
Grant Edwards                   grante             Yow!  Of course, you
                                  at               UNDERSTAND about the PLAIDS
                               visi.com            in the SPIN CYCLE --
Reply to
Grant Edwards

Hi Grant,

What about buying and/or learning the new tools if switching architecture? If you are working as a one-man show that penalty might be minor; however, for a team of designers the pain and loss in productivity is significant.

As an attempt to answer the original question (why AVR, why 8051): if you are familiar with one and not the other, stick with that one as long as you can get the micro you are looking for. If you do a new design you might find more 51s with suitable peripherals than AVRs, simply because there are more out there. Asking multiple vendors for competitive bids has always been helpful to get a good price ;-) With the AVR it is one vendor, one more or less competitive bid. For the widest range of high-quality peripherals based on the 51 core, have a look at SiLabs; looking for the best value in 51-based small chips, comparable to the Tiny AVR, look at the LPC900 family from Philips.

If you are looking at the MEGA AVR, stop that! Do not start with an 8-bit engine; go to the "8051 of the 32-bit world" and start developing with an ARM microcontroller. You get a 32-bit micro for less than $5 that outperforms any AVR in features and performance while costing less. An Schwob in USA

Reply to
An Schwob in USA

That usually takes a few days -- though thanks to GNU tools you can switch from AVR to ARM to H8/300 to 6812 to 68K to IA32 to PPC to SPARC and never have to learn a new toolchain. :)

I guess I never found it to be that difficult to figure out the options for a new compiler or write a command file for a new linker. The ones I've worked with all worked fairly similarly.

That depends on volume. If the unfamiliar one will save you $1/unit on 10 million units, you better figure out how to get familiar.

There are some very cost effective ARM parts out there. Hitachi (now Renesas) has some very good, cheap H8/300 uControllers as well.

--
Grant Edwards                   grante             Yow!  Hey, LOOK!! A pair of
                                  at               SIZE 9 CAPRI PANTS!! They
                               visi.com            probably belong to SAMMY
                                                   DAVIS, JR.!!
Reply to
Grant Edwards

My guess is those might be the ones that will still be around in 25 years.

Harvard MBA's:

o Product differentiation;
o A belief America can't compete in commodity electronics.

Result:

o So many incompatible products that each vendor only gets a small slice of the pie - right back where they would have been if they made commodity 12th-sourced uPs;
o End Product Manufacturers faced with proprietary products with a short market life;
o EPMs unable to take advantage of advanced product features if they want long product lifetimes.

I have seen that fall flat on its face. If the manufacturer goes under, the contract may be hard to enforce.

I have a client who maintained a five-year inventory in lieu of a second source. The IC maker promptly went belly up after the client took delivery of the five years' worth. The client's product took off like a rocket - the five-year stockpile lasted less than a year. What a way to make a flop.

OffT:

Speaking of worthless contracts:

I once had an engineer try to sell me a patent: The firm where he once worked still owned the patent; But that was OK, he had an agreement with the President of the firm to privately sell the patent to non-competitors; No, the agreement wasn't in writing, it was a handshake contract; And it was really sad that said company President died last year; He then confessed that the new owners were really litigious, and of course didn't know about the agreement he had with 'The Old Man', it was "sort of secret".

A secret verbal contract with a dead man.

* * *

This has happened to me: at one time vanilla P8732s weren't available (or so the client's purchasing department said). A Philips part with a timer array worked just fine. No change in the code required.

And the generics are available again.

--
Nicholas O. Lindan, Cleveland, Ohio
Consulting Engineer:  Electronics; Informatics; Photonics.
Remove spaces etc. to reply: n o lindan at net com dot com
psst.. want to buy an f-stop timer? nolindan.com/da/fstop/
Reply to
Nicholas O. Lindan

... snip ...

Back then nobody in their right mind would design in a single-sourced component. The manufacturers were out looking for second sources to license long before their own product hit the market. What happened?

--
"If you want to post a followup via groups.google.com, don't use
 the broken "Reply" link at the bottom of the article.  Click on 
 "show options" at the top of the article, then click on the 
 "Reply" at the bottom of the article headers." - Keith Thompson
Reply to
CBFalconer

I cannot follow this, Jim. Most times, I think I find myself in near or complete agreement with you. But not here.

When the MIPS R2000 was first released into the market (and I consider it one of the first, if not precisely *the* first, commercially available true-RISC CPU), one of the truly mind-boggling problems was keeping the darned thing fed from memory. I used 8kx8 RAM chips for the caches that were single-source at the time (Performance Semi) and burned one watt apiece! I haven't even mentioned the difficulties with the connectors between the CPU board and the memory board! The required bandwidth to memory was one of the PROBLEMS, not one of the benefits, as you suggest above.

The reason was simple. To get a task done, more instructions were required. They were very fast, but you needed some 40% more memory to hold them. And that put pressure on memory channels. What I really wished to have was such a CPU with the RAM built into it, so that external chip-to-chip type drivers weren't needed and the speeds could be more easily maintained.

Anyway, the "RISC made sense when memory was off chip" just slaps me in the face, big time. I know different. From personal experience. It was CISC that had the advantage in terms of memory, because the code was denser. On the other hand, RISC was fantastic in the sense of the speed you could get by converting all that silicon real-estate (which at the time was at a premium, but is not nearly so these days) used for microcode and microcode sequencing logic (which on the 68020, for example, occupied some 70% of the total die space) and turning it into more (add new register space, multipliers, ALUs, etc.) and/or faster (less sequential and more combinatorial) functional units. At the time, the trade off made tremendous sense -- especially if you weren't Motorola or Intel and didn't have access to the very top of the line FABS and had to live in the cracks, so to speak, with fewer transistor equivalents and still outperform the competition who had access to 5-10X more available in their expensive FABs.)

Jon

Reply to
Jonathan Kirwan
