Techniques to reduce RAM/ROM usage in an embedded system

Hello,

Please share some of your experiences/techniques in reducing RAM/ROM usage in an embedded system. I know some C compilers provide memory optimization options, but I would like to know, as a programmer, some tips/techniques to reduce RAM/ROM consumption.

Thanks

Reply to
C Beginner

Look at the assembler code that the compiler produces and see if it is behaving sensibly. (If you don't understand assembler, learn it - you will never write really efficient C on a micro until you understand the underlying architecture in detail.)

Try making simple changes to expressions and compare the code size, e.g. instead of a=c*3+d, try a=c; a*=3; a+=d;. For things like longs on 8-bit architectures this can sometimes make a big difference to both code size and temporary RAM usage.
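As an illustration, here is a minimal sketch of the two forms (the variable names are just placeholders); whether the second version actually wins depends entirely on the compiler and target, so compare the generated assembler and the map file for both:

long a, c, d;                /* 32-bit values are expensive on an 8-bit micro */

void calc_one_expression(void)
{
    a = c * 3 + d;           /* one expression: may need extra 32-bit temporaries */
}

void calc_step_by_step(void)
{
    a = c;                   /* same result, written as single operations;   */
    a *= 3;                  /* some 8-bit compilers generate tighter code   */
    a += d;                  /* and use less temporary RAM this way          */
}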

Avoid floating point at all costs - it is very rarely necessary in lower-end embedded apps as real-world variables don't often need the range that FP provides.
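For example, a value that only ever needs 0.1-unit resolution can be held as a scaled integer instead. A minimal sketch (the names and the scale factor are just assumptions for illustration):

#include <stdint.h>

/* Store temperature in tenths of a degree C: 23.5 C is kept as 235,
   so no floating-point library is pulled into the image. */
int16_t temp_tenths = 235;

int16_t average_tenths(int16_t a, int16_t b)
{
    return (int16_t)(((int32_t)a + b) / 2);   /* integer arithmetic only */
}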

Reply to
Mike Harrison

Factor your code. This technique works in all languages. The one caveat: some small MCUs, like some PICs, have a very small return stack, so heavy factoring isn't appropriate there.

Noel

Reply to
Noel Henson

Please elaborate as to what 'factor your code' means---to you...

Bo

Reply to
Bo

Find common pieces of code and turn them into subroutines.

--

Tauno Voipio
tauno voipio (at) iki fi
Reply to
Tauno Voipio

Factoring your code is much like factoring a number. For example, 42 factors to 2 * 3 * 7, and 100 factors to 2 * 2 * 5 * 5.

When programming, many programmers will copy/paste/edit blocks of code that are similar but slightly different. All of this takes up space in ROM and RAM. To factor, duplicate code should be placed into functions (or subroutines). That way it is coded just once; it can be used many times and only takes up enough RAM and ROM for the function (or subroutine) call.

Suppose a program needs to log and output error messages. A programmer might code the following (please ignore minor syntactical errors, I don't normally program in C):

switch(errnum) {
    case 0:  break;
    case 1:  beep(); log(errnum,now()); printf("Data input error."); break;
    case 2:  beep(); log(errnum,now()); printf("Data processing error."); break;
    case 3:  beep(); log(errnum,now()); printf("Data storage error."); break;
    case 4:  beep(); log(errnum,now()); printf("Data output error."); break;
    default: beep(); log(errnum,now()); printf("Error! Unknown error number %i.",errnum); break;
}

To factor the code and have it take up less space, a programmer could code:

void handleError(int errnum, char *errtext)
{
    beep();
    log(errnum, now());
    printf("%s", errtext);
}

switch(errnum) {
    case 0:  break;
    case 1:  handleError(errnum, "Data input error."); break;
    case 2:  handleError(errnum, "Data processing error."); break;
    case 3:  handleError(errnum, "Data storage error."); break;
    case 4:  handleError(errnum, "Data output error."); break;
    default: handleError(errnum, "Error! Unknown error number."); break;
}

I know it doesn't seem like much of an improvement. But if the technique is applied liberally throughout the code a surprising amount of space can be saved.

I hope that's a little clearer,

Noel

Reply to
Noel Henson

It is possible to take it another step by pulling out all the constant data and putting it into a table of some kind. This method is also very useful for generating displays and menu systems.

Please note this code is completely untested.

char const * const errStrings[] = {
    "Data input error.",
    "Data processing error.",
    "Data storage error.",
    "Data output error."
};

if ( 0 != errnum ) {
    beep();
    log( errnum, now() );
    if ( errnum <= sizeof( errStrings ) / sizeof( *errStrings )) {
        printf( errStrings[ errnum - 1 ]);
    } else {
        printf( "Error! Unknown error number %i.", errnum );
    }
}

This method can be extremely good at reducing code size if function calls are relatively expensive.

Kevin Bagust.

Reply to
Kevin Bagust

[snip]

Yes. I use tables whenever I can. Many of the MCUs I program have only 1024 words of program space. Some of the embedded systems have only 128K available for program storage. Factoring, tables, and using a good preprocessor are important techniques.

Noel

Reply to
Noel Henson

Thanks for the clarification.


Reply to
Bo

  1. Don't return values unless it's necessary. In other words, use void whenever possible and don't return status flags unless you have to use them. Superfluous returns (even int1 / booleans) can increase the ROM footprint considerably.
  2. Use local variables as much as you can. The compiler will free and reuse the space when the function exits. This will free up a lot of RAM compared to all global variables. You'll have to have _some_ globals, but keep it to a minimum.
  3. A large number of calls to a small function yields a smaller footprint than a small number of calls to a large function. I think someone else was calling that "factoring".
  4. Using a more powerful chip can double the space you have to play in, but then you'll get feature creep and you'll have to change the chip again later.
  5. If your compiler has built-in functions, it might use inline substitution instead of calling the function. Try putting the built-in functions in a wrapper function to reduce space. It's stupid, but it works (see the sketch after this list).
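To illustrate point 5, a minimal sketch, assuming a compiler that expands memcpy() inline at every call site (whether this actually helps depends on the compiler and its options):

#include <string.h>

/* Route all copies through one small wrapper so the inline expansion
   of memcpy() appears only once in ROM. */
void copy_bytes(void *dst, const void *src, size_t n)
{
    memcpy(dst, src, n);
}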

YMMV.

--
Magnus McElroy
Electrical Engineer (EIT)
HABIT Research
(250) 381-9425
Reply to
Magnus McElroy

RAM use reduction is probably easier than program space reduction, at least it is for a machine to do: you 'just' keep track of where and when variables are used and aggressively re-use the space. Of course you can also minimize buffer sizes and the like by careful design and testing, and by making the routines more tightly suited to the application.
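A hand-made version of that re-use, as a sketch (the buffer names and sizes are hypothetical): if two buffers are never live at the same time, they can share the same RAM, but the design has to guarantee the overlap is safe because the compiler won't check it for you.

#include <stdint.h>

static union {
    uint8_t rx_line[64];     /* used only while receiving a command    */
    char    report[64];      /* used only while formatting the answer  */
} scratch;                   /* one 64-byte block instead of two       */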

Saving code space (or trading it off for speed) is a matter of recognizing code that can be put into routines (maybe routines can be combined). If you're doing it in assembly, you can spaghettify things by using multiple entry points, jumping from one subroutine into the middle of another routine, falling out of the end into another routine that would always be called afterward, etc. None of this is very pretty, but it can reduce code memory a bit and maybe even increase speed.

Make sure there's no useless code being linked in. Re-write overly general library functions such as printf to make them more limited and more application-specific, or eliminate them entirely.
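As a sketch of the printf replacement idea: if the application only ever prints plain strings and unsigned decimal numbers, two tiny routines can do the job (uart_putc() here stands in for whatever character-output routine the target already has):

extern void uart_putc(char c);

void put_str(const char *s)
{
    while (*s)
        uart_putc(*s++);
}

void put_udec(unsigned int v)
{
    char buf[10];                        /* enough digits for a 32-bit value */
    unsigned char i = 0;

    do {
        buf[i++] = (char)('0' + v % 10);
        v /= 10;
    } while (v);

    while (i)
        uart_putc(buf[--i]);             /* most significant digit first */
}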

Remove options, and even test code, and require them to be ISP'd individually from different binary images (this also solves the problem of the customer (or, more likely, distributor) who figures out how to turn a $300 device into a $500 device by changing an option).

Optimize data structures. Do you really need an array of 32-bit function pointers, or could 8-bit offsets do the trick? Do you need to use floating point, or should you write your own math routines, or can you use the native machine or compiler's math directly?
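A sketch of the function-pointer point, with made-up handler names: instead of a full-width function pointer per event, store one byte per event and index a short table of the few distinct handlers.

#include <stdint.h>

typedef void (*handler_t)(void);

extern void on_ignore(void);
extern void on_log(void);
extern void on_shutdown(void);

static const handler_t handlers[] = { on_ignore, on_log, on_shutdown };

/* One byte per event instead of one full pointer per event. */
static const uint8_t handler_index[256] = {
    0, 1, 1, 2          /* remaining entries default to 0 (on_ignore) */
};

void dispatch(uint8_t event)
{
    handlers[handler_index[event]]();
}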

Really it should start at the top with algorithm and data structure design, and then hopefully much of the above will not be necessary.

Oh, and always make sure you have a palatable path upward in memory capacity if things look like they might be close. In any program of a few K or larger, there's usually (obviously, not always) a way to reduce the size by a byte or two, but as you approach the absolute minimum size, the cost per byte saved is asymptotic to infinity.

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

On 13 Oct 2005 05:59:01 -0700, "C Beginner" wrote in comp.arch.embedded:

The biggest savings I have generally found are in programs that have a significant amount of text, for example messages output to a serial port or other interface.

There are two big areas for savings here. The first is to define true string constants properly. Consider:

char error_message [] = "Error %d\n";

In C terms, error_message is an array of ordinary writeable characters that is initialized with the contents of the string literal. A strictly conforming C compiler following the abstract semantics of the language must copy this string literal from ROM/EPROM/flash into 10 bytes of RAM.

If you will truly never change this string during program execution, define it that way:

const char error_message [] = "Error %d\n";

Now the compiler does not need to use any RAM for it.
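One compiler-specific caveat: on some Harvard-architecture toolchains 'const' alone still leaves the data in RAM, and a memory-space qualifier is needed as well; for example, with Keil C51 (the 'code' keyword is a Keil extension, not standard C):

const char code error_message [] = "Error %d\n";   /* placed in code space */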

A bigger savings is possible if you use arrays of string literals:

char mess [][25] = {
    "Oil Pressure Low",
    "Coolant Temperature High",
    "Fuel Low",
    "Alternator Discharging"
};

This uses 100 bytes, because each entry in the array uses as many bytes as the longest string requires. It uses just as much RAM, too: since it is not defined as 'const', the start-up code copies the whole array there.

Instead, do:

const char * const mess [] = { /* same four strings */ };

This adds four pointers (typically 8 or 16 bytes), but the strings themselves now take only 74 bytes instead of 100, because each one occupies only as much space as it actually needs, and the savings grow with every string that is shorter than the longest.

Wherever possible, define pointers to strings or substrings that are used in more than one place instead of repeating the string.

In applications with a lot of text, I have saved larger amounts of space by tokenizing repeated strings.

Something like this:

const char * const tokens [] = {
    "Error ",
    "Temperature ",
    "Voltage ",
    "Parameter Out Of Range"
};

This is matched by a table of macros:

#define ERROR_        0x80
#define TEMPERATURE_  0x81
#define VOLTAGE_      0x82
#define PARAMETER_    0x83

Then you define strings like:

const unsigned char param_error [] = "\x83" "In Calculation\n";

Of course, you need to write a special function to recognize the tokens and expand them on their way to the output device.
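A sketch of that expansion function, assuming the tokens[] table and the 0x80..0x83 values above, that only defined token values appear in the strings, and with put_char() standing in for the real output routine:

extern void put_char(char c);

void put_tokenized(const unsigned char *s)
{
    while (*s) {
        if (*s >= 0x80u) {
            const char *t = tokens[*s - 0x80u];   /* expand the token    */
            while (*t)
                put_char(*t++);
        } else {
            put_char((char)*s);                   /* ordinary character  */
        }
        s++;
    }
}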

Finally, there are compiler- and architecture-specific savings possible. For example, Keil's 8051 compiler provides a 3-byte generic pointer, the first byte containing a memory-space tag (data, idata, xdata, code) and the other two the 16-bit address. Wherever possible, define pointers to a specific memory space; they then take only one or two bytes.
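For example, with the Keil C51 memory-space keywords (these are Keil extensions, not standard C):

char *gptr;          /* generic pointer: 3 bytes (space tag + 16-bit address) */
char xdata *xptr;    /* external RAM only: 2 bytes                            */
char code  *cptr;    /* code space only:   2 bytes                            */
char idata *iptr;    /* internal RAM only: 1 byte                             */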

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ http://www.contrib.andrew.cmu.edu/~ajo/docs/FAQ-acllc.html
Reply to
Jack Klein

I'm still a C beginner, so this is probably a mistake....

Try to replace C functions that require inclusion of large chunks of library code with stand-alone code.

In a recent porting effort I found a few high-level C functions that belonged there given the context for which the code was originally written, but that could be replaced by stand-alone code that didn't need the otherwise necessary library. One example was the use of memset(), which resides in the string library...

#include <string.h>

memset(dirptr, (char)0, (s16_t)sizeof(struct Dentry));

was replaced with...

u8_t count = sizeof(struct Dentry);
u8_t xdata * cntptr = (u8_t xdata *)dirptr;

while(count--) {
    *cntptr++ = 0u;
}

Only saved 50 or so bytes of code space in this case, but you get the idea.

If you can get hold of the library code source, then maybe some of the library routines can be individually adapted without having to include all of them.

Regards, Murray R. Van Luyn.

Reply to
Murray R. Van Luyn

The most basic one is that for most compilers, if you have a string in your code it will be put in RAM. For example, printf("printing something"); will load "printing something" into RAM. Many systems get around this somehow, but I have seen it cause stack problems all the time on processors with very little RAM.

Reply to
DarkD

Not using printf is often also a good way to save space - it usually includes functionality that your app will never need, and a lot of space may be recoverable by using routines more suited to your application.

Reply to
Mike Harrison

That's a very good example of how supposed optimizations can easily make things worse than they started out being.

Yes, if you call 'memset' once in the entire program, a hand-coded memory initialization *might* possibly generate shorter code. But

1) Do this three times in one program (including all libraries used), and you're most likely making the entire thing bigger instead of smaller.

2) On a good many architectures, if the compiler authors weren't asleep at their desks while writing their implementation of memset(), there'll be no savings at all.

If you honestly believe that the C library is something that will be pulled in entirely, just because you use a single one of the functions it provides, you indeed have a lot to learn about C yet. Well, either that or the C compilers you've used so far were a waste of storage capacity.

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

Not on the majority of platforms it won't. String constants are called constants for a reason, and compilers that haven't been deliberately rendered stupid (e.g. by setting the wrong options) will not put them in RAM.

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

Hi Hans,

Maybe you can clear me up a bit on how I managed to shrink code by not using functions that require large libraries like the floating point library etc.

My meagre understanding is that if you do happen to do something silly like use memset(), as opposed to replacing it with stand alone code, the linker will link in the entire string library, even those parts you don't actually need. Could be I'm wrong about this...perhaps you could clear me up as to how the observed savings are actually made?

The compiler was the Keil C51 toolset. I replaced all the high level functions with stand alone code and got the same results...difficult to read but smaller code.

Regards, Murray R. Van Luyn.


Reply to
Murray R. Van Luyn

Hi Hans,

Looks like you're right. The linker doesn't seem to link in the whole #included library when you only use one of its functions. That makes sense. I suppose that what I am then doing is implementing 'simpler versions' of library code, and only obtaining an advantage by doing this when these simpler segments are only called the once, i.e. when I'm multiplying through just the once!

I'll still avoid using larger library code, but I have a finer understanding of how the advantage might be obtained, and indeed lost. Thanks.

Regards, Murray R. Van Luyn.


Reply to
Murray R. Van Luyn

Of course it doesn't --- because a library is hardly ever #include'd.

What you #include is a header file, which holds _declarations_ of functions. Then the linker checks your program against the library, which holds their _definitions_, and links in those that your program actually needs, plus those needed by the functions thus linked in, and so on, until the process is finished because all necessary functions have been found.
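A minimal sketch of the difference, with hypothetical file and function names (note that most linkers work at the granularity of whole object modules within the library, so how the library is split up matters too):

/* mylib.h -- declarations only; nothing here ends up in ROM */
int checksum(const unsigned char *p, int n);
void beep(void);

/* main.c -- references checksum() but never beep(), so the linker pulls
   in the object module that defines checksum() and leaves the rest of
   the library out of the final image. */
#include "mylib.h"

int main(void)
{
    static const unsigned char data[4] = { 1, 2, 3, 4 };
    return checksum(data, 4);
}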

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker
