8051 to ARM - code size differences?

A move from 8051 to ARM7 (with Thumb) is being contemplated. This move
would also incorporate a move from asm to C. Has anyone published
metrics for code space increases in such a migration? I'm imagining a
40% size increase (wider instructions vs. optimize-friendly
architecture) but this is basically a guess.

I'd prefer a documented case study rather than simply anecdotes, but
I'll take anecdotes if that's all I can get :)

Performance is not a concern for this application; code size is.


Re: 8051 to ARM - code size differences?
It depends!

The two extremes are bit handling and arithmetic. Lots of bit handling
will blow up the code size on an ARM by a factor well over 2; lots of
16- or even 32-bit arithmetic will shrink the code.

For example, the size of the Dhrystone benchmark shrinks going from
the '51 to ARM (both in C).

My experience is that designs done with a '51 use the strengths of
that architecture: bit handling, byte variables, and the special
addressing modes of its Harvard architecture. If so, the code size will
almost certainly increase. The 40% you mentioned is actually at the
lower end of what I have seen; my typical number would be around 50%,
but then again, it depends. As soon as your data types are anything
other than chars, the ARM has a clear advantage even in code size,
because handling ints or longs takes a lot more code on a '51.
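The two extremes described here can be sketched in plain C (a host-runnable illustration, not code from the original thread; the size claims in the comments are the rules of thumb being discussed):

```c
#include <stdint.h>

/* Flag handling: on an 8051, setting a bit of a bit-addressable byte
 * compiles to a single SETB instruction; on ARM the same line becomes
 * a load/OR/store sequence of 32-bit instructions. */
static uint8_t flags;

void set_ready(void) { flags |= 0x04; }
int  is_ready(void)  { return (flags & 0x04) != 0; }

/* The opposite extreme: a 32-bit add is one instruction on ARM, but a
 * four-byte add-with-carry chain (plus register moves) on the 8051. */
uint32_t accumulate(uint32_t sum, uint32_t sample)
{
    return sum + sample;
}
```

The same source, two very different object sizes depending on which of these two styles dominates the application.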

Last but not least, it depends on the original code size. If you
convert a 1 kB program from a '51, it will most likely explode (>2x);
if you convert a program that was written in ASM and used 64 kB, it
might grow very little (typically 10-20%). It is difficult to really
optimize 64k by hand, while it is very feasible with 1k. Most '51
applications I have seen are in the 8k code size range, and that is
where my 50% comes from.

Hope this helps, Schwob



Re: 8051 to ARM - code size differences?
Hi,


This is very interesting information, thanks for the response. Although
I've programmed both architectures, I've never migrated a project from
one to the other, because the price gap between them was always too
wide. But these days it seems feasible...


Re: 8051 to ARM - code size differences?


  Since this will need a code re-write (ASM -> C), it will be very hard
to get simple metrics.
  You need to provide more info on the Code, Data Memory and data type
footprints of the application.
  If it is byte variables, with data variables fitting in the 80C51
memory, and not much XDATA access, then you could triple the code size.
  If there is lots of 32-bit maths, and large multi-dimensional XDATA
arrays, then you could be smaller on ARM - it all depends....
  Normally, such a move is made when you want to fundamentally change
some features.
  I have seen massive bloat occur in moves up from 80C51's, but that
usually involved a lot of 'creeping featurism' being heaped on board....

-jg



Re: 8051 to ARM - code size differences?


There's not much XDATA moving around. I can't provide application
details (sorry, it's a work thing) but the info you and An Schwob
provided is very useful.

By the way, the reason for the migration is standardization of tools
and skills. The specific projects that are being migrated don't
actually have any need to run on ARM, but the idea is to cut down the
number of different micros, toolchains, emulators, etc. that are being
used, and to make engineers more interchangeable amongst different
projects, plus to pave the way to start building new, more
complex/higher performance apps based on the old code.

The end goal is to have everything written in C and running on one of a
few different ARM variants. Currently it's a big mixture.


Re: 8051 to ARM - code size differences?

If that's the motivation, I'd do it in two steps.

  First, move the ASM to C, but stay on the C51.
Then, you can compare the migration more easily, and give the
bean counters the chip price delta.
  You can also verify the ARM can actually replace the uC.
The newer single cycle C51s (SiLabs, Atmel et al) have very nimble
interrupt handling, it would be a shame to do all the porting, only to
find the ARM is too slow..

  Also, a wholesale move to ARM will be less than price-optimal -
most designers I know are looking to support C51 _and_ ARM, and
the chip vendors themselves have drawn the line at ~32-64K code
and >= 48 pins.
  I.e., it makes little sense to replace an 85-cent C51 variant with a
much more expensive ARM, just to do the same task.

  C51's are also moving down in price and package size (~TSSOP14), so
they can handle many serial I/O and watchdog-type tasks.


-jg


Re: 8051 to ARM - code size differences?


Trouble with this is that the C migration would have to be done as a
skunkworks project. It's very hard to justify the engineering cost to
do it as an intermediate step (at a rough guess it would have a
budgetary cost of three quarters of a million dollars just to develop
the code, and perhaps a quarter of a million again to test and qualify
it, assuming that engineering delivered a bug-free product the first
time around). And then it probably wouldn't fit in the '51 variants we
have qualified, and then we have tens of thousands of dollars and six
months' delay in regulatory paperwork to get the new designs
approved... not something we can just do experimentally :)


The timing requirements are mild, though. I'm aware of latency issues;
they have been discussed at the vendor pow-wows. FIQ is good enough for
everything we do.


You're right /now/ but the quotes from ARM vendors, in the volumes we
use, are getting more attractive every week. Plus once the success has
been demo'ed on C51, we can eliminate about half a dozen radically
different micros (some quasi-obsolete, some expensive, most very
single-source) by porting those other projects.


Re: 8051 to ARM - code size differences?

Would you mind defining "too slow" in this context, in particular
interrupt latency? I am considering the Philips LPC213x family for some
applications and I may have missed something.

TIA.

Elder.

Re: 8051 to ARM - code size differences?

Traditional 8-bit MCUs like the 8051/AVR/6811 typically take minimal
processing from the hardware receiving an interrupt to reaching your
handler. Some micros provide vectored interrupts for the various IRQ
sources, so it is very fast even if you have multiple types of
interrupt sources.

The ARM architecture itself defines a single IRQ vector. Most ARM7
MCUs provide some sort of fast vectored interrupt controller so they
can handle multiple sources, but that still means it takes at least a
couple of hops before you get to your handler. Then of course you have
to save the volatile registers, which is also more expensive on the
ARM (32-bit registers times however many must be saved/restored)...
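The "couple of hops" can be modeled on a host machine with a function-pointer table standing in for the vectored interrupt controller (names like `vic_table` and `irq_entry` are illustrative, not any vendor's actual register layout or API):

```c
#include <stddef.h>

/* Host-runnable sketch of ARM7-style vectored dispatch: the single IRQ
 * entry point reads the active source from a (here simulated)
 * interrupt controller and makes a second jump through a handler
 * table. */
typedef void (*irq_handler_t)(void);

#define NUM_SOURCES 8
static irq_handler_t vic_table[NUM_SOURCES]; /* stands in for the VIC/AIC */
static int active_source;                    /* stands in for the status reg */
static int uart_hits;

static void uart_handler(void) { uart_hits++; }

/* First hop: the common IRQ vector at 0x18 branches here... */
void irq_entry(void)
{
    /* ...second hop: indirect call through the controller's vector. */
    irq_handler_t h = vic_table[active_source];
    if (h != NULL)
        h();
}
```

On an 8051 with per-source vectors, the hardware jumps straight to the equivalent of `uart_handler`; on the ARM7 both hops (plus the register saves) sit in the latency path.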

--
// richard
http://www.imagecraft.com

Re: 8051 to ARM - code size differences?

An interrupt on the ARM7 will jump to address 0x18. The IRQ vector is
immediately followed by the FIQ vector at 0x1C. Since you enter 32-bit
ARM mode immediately, you have precisely one instruction to handle the
interrupt, so it obviously has to be a jump/branch.

In the AT91SAM7S series you can do an indirect jump to the
highest-priority interrupt:

    pc := pc + displacement;

The displacement lets you select the interrupt vector from the AIC
(Advanced Interrupt Controller).

So you need two jumps to enter the interrupt routine.

At 48 MHz, that should be pretty quick.

If you need even faster interrupts, you can connect a single interrupt
source in the AIC to the FIQ (Fast Interrupt). This gives you 5 banked
registers for free, which do not need to be pushed or popped. A
load/store-multiple instruction (LDM/STM, the ARM counterpart of the
68k's MOVEM) can also cut the cost of the remaining pushes and pops.

Your trouble starts if you want to support nested interrupts.
This will cost some code.

--
Best Regards,
Ulf Samuelsson
Re: 8051 to ARM - code size differences?


  It's mostly an issue when you work very close to the iron...
The typical modern 80C51 allows 4 levels of interrupt priority, and has
direct data and boolean opcodes - which means you can get deterministic
direct (limited) action in interrupts, without any PUSH/POP.
  The jitter on the interrupt response time is also relatively low.

  ARMs tend to have better peripheral buffering, which helps tolerate
a more elastic response time, so the areas to watch are where the
better peripherals don't help, like SW DACs, SW current protection, or
SW modulation.

  Some 32-bit uPs/uCs have separate 'co-processors' for handling the
timers and critical I/O, so their main CPU response time (and,
importantly, how that changes over time with SW revisions) is
insulated from the real I/O. IIRC the TI ARMs have this?

  If you have special areas like that, their code is normally small,
so I'd suggest a tiny 80C51 as a real time co-processor/peripheral.

-jg


Re: 8051 to ARM - code size differences?
Hello Jim,


I like that phrase...


Or when there is something in the hardware around the uC that absolutely
has to have an interrupt handled in x clock cycles. There is stuff that
could blow up if this doesn't happen. I remember a big wall-to-wall
crack in a concrete floor that was the result of a phase synchronizer
not being synchronized in time. No idea who dunnit but this was expensive.

Before a dead-stick switch from one uC to another I'd carefully look at
all the hardware that it supports, in all designs that are current.


That is a great idea. But then you'd have to do what Lewin's company
wants to avoid: Maintain the 80C51 tools and local expertise. In that
case they might as well leave such designs to the 51 architecture
altogether.

Regards, Joerg

http://www.analogconsultants.com

Re: 8051 to ARM - code size differences?

I guess it's not my case. I am going to use the Philips LPC213x as a
replacement for PICs, AVRs and the 80C188EC, and latency on the order
of microseconds is more than adequate for all of my applications. We
are going to purchase IAR tools. I hope their MakeApp generates decent
code.

Regards.

Elder.

Re: 8051 to ARM - code size differences?

I don't know whether this is OT, but moving a project from 80C186EB
(Borland C) to ARM/Thumb (GCC) kept the code size roughly equal
(40064 vs. 40032 bytes).

The architectures of ARM and 8051 are so vastly different that the
result depends very much on the code involved (e.g. 16- or 32-bit
arithmetic, bit operations, etc.).

--

Tauno Voipio
tauno voipio (at) iki fi


Re: 8051 to ARM - code size differences?

If you ask this in two years, I bet you will get real-life figures
(well, if people would actually share such info). Your 40% is as good a
guess as any. OTOH, the 8051 addresses up to 64K without bank
switching, and the smallest ARM7 so far is the Atmel SAM7S32 with 32K
of flash, so any reasonably sized ARM7 part should handle any 8051 app
with room to spare :-)
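As a back-of-envelope sanity check (a sketch using the growth factors discussed in this thread as assumed rules of thumb, not measured data), flash sizing reduces to one multiply:

```c
/* Does an 8051 image of kb_on_8051 kilobytes, scaled by an assumed
 * growth factor (e.g. 1.5 for the ~50% typical case, 2.0 worst case),
 * still fit in an ARM part with arm_flash_kb of flash? */
int fits_in_flash(unsigned kb_on_8051, double growth_factor,
                  unsigned arm_flash_kb)
{
    return kb_on_8051 * growth_factor <= (double)arm_flash_kb;
}
```

So a typical 8 kB '51 application at 1.5x growth lands around 12 kB, inside even the smallest ARM7 flash parts, while a full 64 kB image at 2x growth needs a 128 kB part.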

--
// richard
http://www.imagecraft.com
