Assigning data structures to certain memory locations during porting

I'm new to embedded programming. I've gone through the archives a bit
to find answers to the questions I have, but either I'm not doing a
good job of reading them or the answers just aren't present. I'm aiming
to port a C algorithm to a target machine, but I'm stuck. Specifically
I'm trying to dynamically allocate data structures and uniquely map
these data structures to different memory regions.

As an example of something I'm trying to accomplish, suppose I have a
target machine with a scratch pad, a DRAM, and a Flash Memory. I would
like to dynamically allocate a memory space for data structure A in the
scratch pad. Similarly then, I wish to allocate memory for data
structure B in the DRAM, and then again dynamically allocate data
structure C to Flash Memory. How do I accomplish this?

Again, my experience with memory management has only ever been for use
on desktop machines. If I understand correctly, the linker script is
where you specify the different available memory regions. This then
makes the program aware of the hardware available, but how then do I
make sure data structures are allocated in specific memories? I imagine
that globals and automatic variables can be assigned in the linker
script, right? Should I also use the linker script to state which
dynamically allocated data structures should belong to which region?

Any help would be greatly appreciated!

Thanks,
Pieter Arnout


Re: Assigning data structures to certain memory locations during porting

The big snag comes with freeing memory.  Dynamically allocating once
at startup is not a problem, and is common if there are run-time
configuration options.  The problems come when allocating memory
later on, and when freeing it.

When you allocate memory later on, there's no guarantee that you'll be
able to get enough.  The application has to be aware of this, which
means more error checking (always test whether the allocation
succeeded).  It's simpler to do that checking at startup than to deal
with a failure at an inconvenient time (such as a very high-priority,
critical task that can't continue because there's no memory).
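That startup-time checking might look something like this in C (the
buffer names and sizes here are made up for illustration):

```c
#include <stdlib.h>

/* Allocate everything the application will need up front, so a failed
 * allocation is caught once, at startup, instead of inside a critical
 * task later on. */
static unsigned char *frame_buf;
static unsigned char *work_buf;

int startup_alloc(size_t frame_bytes, size_t work_bytes)
{
    frame_buf = malloc(frame_bytes);
    work_buf  = malloc(work_bytes);
    if (frame_buf == NULL || work_buf == NULL) {
        free(frame_buf);           /* free(NULL) is a safe no-op */
        free(work_buf);
        return -1;                 /* refuse to start rather than fail later */
    }
    return 0;
}
```

If this returns -1 the system can refuse to boot (or log and halt),
which is far easier to debug than an allocation failure mid-run.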

When you free memory it can fragment the heap, making it even harder
to successfully allocate memory later.

These allocs and frees also take up a little bit of CPU time.  For
a real-time system it's often better to just have the memory
pre-allocated and waiting to be used, even if this means wasting
memory space (by allocating a max-sized string rather than waiting
to see how big the strings will actually be).
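A minimal sketch of that trade-off, assuming a message-passing
application with a known worst-case message size (all names and sizes
here are illustrative):

```c
/* Worst-case sizing: every slot is as big as the largest possible
 * message, so no allocation (and no allocation failure) can happen at
 * run time.  The cost is the wasted tail of each short message. */
#define MAX_MSG_LEN  128
#define MAX_MSGS      32

struct msg {
    unsigned len;                  /* actual length used, <= MAX_MSG_LEN */
    char text[MAX_MSG_LEN];        /* max-sized, even for short messages */
};

/* Reserved at build time, not via malloc() - it shows up in the map file
 * and can never fail at run time. */
static struct msg msg_pool[MAX_MSGS];
```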

You can pre-allocate a lot of frames at the start and use those.
You can also change the algorithms to be able to deal with less
memory.  If you need frame 1 and frame 1000 in memory at the same
time, then maybe something's wrong with the algorithm.  You should
also know the maximum frame size perhaps (640x480x8 or so).  From
a design point of view, start with the assumption that you have
only a small amount of memory to work with (say 1MB only).  Most
programs for unix or windows love to assume lots and lots of memory
and stack and a backing page store, which is the wrong assumption
for embedded systems (even in larger systems).  Then figure out
how you'll get the algorithm to work with only that limited supply
(and maybe you'll discover that 1.2MB is the absolute minimum
later on).  When that works, then you can think about using more
memory based upon a configuration option or what the device actually
has.

Another thing to do is think about how to break large chunks of memory
into smaller pieces.  Maybe you could have a set of 1024-byte chunks
that can be chained together when larger pieces of memory are needed.
This
sort of thing happens with networking code all the time (even in
desktop operating systems); part of the packet is in one chunk and
part is in another, and the network device knows how to combine them
together when transmitting.
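A rough sketch of that chunk-chaining idea in C (the chunk size and
the way the pool is handed in are simplifying assumptions, not how any
particular network stack does it):

```c
#include <stddef.h>
#include <string.h>

/* Fixed-size chunks chained into arbitrarily large buffers, loosely in
 * the style of BSD mbufs. */
#define CHUNK_DATA 1024

struct chunk {
    struct chunk *next;            /* next chunk in the chain, or NULL */
    size_t used;                   /* bytes of data[] actually in use */
    unsigned char data[CHUNK_DATA];
};

/* Copy a payload of any length into a chain built from `pool`.
 * Returns the number of bytes that fit. */
size_t chain_copy(struct chunk *pool, size_t pool_len,
                  const unsigned char *src, size_t len)
{
    size_t copied = 0, i = 0;
    while (copied < len && i < pool_len) {
        size_t n = len - copied;
        if (n > CHUNK_DATA)
            n = CHUNK_DATA;
        memcpy(pool[i].data, src + copied, n);
        pool[i].used = n;
        /* link to the next chunk only if more data remains */
        pool[i].next = (copied + n < len && i + 1 < pool_len)
                           ? &pool[i + 1] : NULL;
        copied += n;
        i++;
    }
    return copied;
}
```

The transmit side then just walks the `next` pointers, which is exactly
the scatter-gather style of operation the post describes.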

Actually there is more than one type of memory pool, and different
people have different needs.  I was talking about a fixed-size memory
pool, where only fixed-size objects can be allocated from any one
pool.  Its implementation is pretty simple.  If you want to have a
pool of N objects of M bytes each, you just alloc an N*M chunk of
memory (plus a little extra for bookkeeping or debugging).  Those N
blocks are then chained together into a linked list by reusing the
start of each block to hold a pointer.  This list is then the free
list.  To allocate a new block you just remove the head of the free
list and return it, and to free a block you just link it back onto
the list.  (You can add some extra debugging so that you keep track
of who allocated the memory in order to track down memory leaks, or
to make sure no one frees a block twice, etc.)
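A minimal C sketch of that fixed-size pool (N and M are arbitrary here,
and the bookkeeping/double-free checks mentioned above are left out):

```c
#include <stddef.h>

#define BLOCK_SIZE  64             /* M: must be >= sizeof(void *) */
#define NUM_BLOCKS  16             /* N */

/* One static N*M region; aligned so the free-list pointer trick is safe. */
static _Alignas(void *) unsigned char pool_mem[NUM_BLOCKS * BLOCK_SIZE];
static void *free_list;

void pool_init(void)
{
    size_t i;
    free_list = NULL;
    for (i = 0; i < NUM_BLOCKS; i++) {
        void *block = &pool_mem[i * BLOCK_SIZE];
        *(void **)block = free_list;   /* reuse block start as the link */
        free_list = block;
    }
}

void *pool_alloc(void)
{
    void *block = free_list;
    if (block != NULL)
        free_list = *(void **)block;   /* pop the head of the free list */
    return block;
}

void pool_free(void *block)
{
    *(void **)block = free_list;       /* push back onto the free list */
    free_list = block;
}
```

Both `pool_alloc` and `pool_free` are a couple of pointer moves, so the
cost is constant and there is no fragmentation by construction.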

Another type of pool is just like a generic malloc() routine,
except that it works on a limited set of memory instead of the
entire heap.  If a task uses only that pool for all of its memory
needs, then if it runs out of memory it won't cause problems for
other tasks and vice versa.
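One crude way to sketch that second kind of pool is an arena that only
hands out memory from a single task's private region (simplified: no
per-block free, and the 8-byte alignment rule is my assumption, not
something from the post):

```c
#include <stddef.h>

/* A per-task arena: allocations come only from this task's region, so
 * exhausting it cannot disturb any other task's memory. */
struct arena {
    unsigned char *base;           /* start of this task's region */
    size_t size;                   /* total bytes in the region */
    size_t used;                   /* bytes handed out so far */
};

void *arena_alloc(struct arena *a, size_t n)
{
    void *p;
    n = (n + 7u) & ~(size_t)7u;    /* keep allocations 8-byte aligned */
    if (a->size - a->used < n)
        return NULL;               /* this task is out; others unaffected */
    p = a->base + a->used;
    a->used += n;
    return p;
}
```

A real pool of this kind would sit a full malloc/free implementation on
top of the region, but the isolation property is the same.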

--
Darin Johnson
    "You used to be big."