Paper on dynamic memory allocation for real-time systems

"Dynamic memory allocation has been used for decades. However, it has seldom been used in real-time systems since the worst case of spatial and temporal requirements for allocation and deallocation operations is either unbounded or bounded but with a very large bound.

In this paper, a new allocator called TLSF (Two Level Segregated Fit) is presented. TLSF is designed and implemented to accommodate real-time constraints. The proposed allocator exhibits time-bounded behaviour, O(1), and maintains a very good execution time. This paper describes in detail the data structures and functions provided by TLSF. We also compare TLSF with a representative set of allocators regarding their temporal cost and fragmentation."

Reply to
bitrex

Never really understood why C had such a notorious issue with DMA (the free() function in particular), but I'm willing to be disabused of my outstanding ignorance of the matter if anyone knows.

Reply to
Cursitor Doom

What? DMA has nothing to do with memory allocation...no matter the language, it's purely hardware driven, once set up by software.

Reply to
Bill Martin

Dynamic Memory Allocation (DMA), not Direct Memory Access (DMA).

Reply to
bitrex

Oh nooo, more overloading.. :-)

Reply to
Bill Martin

AIUI the issue is heap fragmentation when you don't have abundant RAM. A memory allocator has conflicting requirements: it needs to make efficient use of the available memory as well as being fast and general. As memory is allocated in odd-sized chunks and then freed in whatever order, the allocator has to try to combine the freed blocks into larger ones. In languages containing pointers, the allocator can't move allocated blocks around, so the size of the combined chunks depends sensitively on the allocated sizes and the order in which they're freed.

When there's lots of memory available, this isn't a terrible problem, but seriously RAM-constrained systems such as MCUs are another story. If all the allocated blocks are the same size, it's easy.
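
For the equal-size case, the classic trick is a block pool with a free list threaded through the unused blocks. A rough sketch in C (sizes and names made up, purely to show why allocation and freeing are O(1) and fragmentation-free):

    #include <stddef.h>

    #define BLOCK_SIZE  32   /* illustrative */
    #define BLOCK_COUNT 64

    typedef union block {
        union block *next;              /* link used while the block is free   */
        unsigned char data[BLOCK_SIZE]; /* payload space while it is allocated */
    } block_t;

    static block_t pool[BLOCK_COUNT];
    static block_t *free_list;

    void pool_init(void)
    {
        free_list = NULL;
        for (size_t i = 0; i < BLOCK_COUNT; i++) {
            pool[i].next = free_list;   /* push every block onto the free list */
            free_list = &pool[i];
        }
    }

    void *pool_alloc(void)              /* O(1): pop the head of the free list */
    {
        block_t *b = free_list;
        if (b)
            free_list = b->next;
        return b;
    }

    void pool_free(void *p)             /* O(1): push the block back on the list */
    {
        block_t *b = p;
        b->next = free_list;
        free_list = b;
    }

Call pool_init() once at startup, then pool_alloc()/pool_free() wherever a fixed-size buffer is needed.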

Cheers

Phil Hobbs

Reply to
pcdhobbs

It's not just C; all non-garbage-collected languages rely on the user to either use the default dynamic memory allocation algorithm (often just an interface to the kernel's implementation on a desktop processor) or come up with their own.

The implementations of malloc and free that come with C's standard library are always compromises between portability, speed, size, and fragmentation rate and aren't terribly suited to applications that have small heap sizes but also need lots of data structures allocated and de-allocated during execution.

This was an issue even back in the DOS days when you only had 640k of conventional memory; I remember an old video game where the authors had set aside 64k of conventional memory with some custom implementation of malloc to dynamically load scripts and objects as the game progressed.

In theory, 64k should've been plenty to store every script the program needed at any given time, except that, being humans working on such a large project, they made mistakes and forgot to free memory in the pool they had allocated, causing a leak.

Nothing more fun than when you're about to fight the big bad boss and get an "Out of heap space" fatal error.

The idea with this whitepaper is to address the fact that many dynamic allocation algorithms are unsuitable for real-time systems (i.e. ones that don't run inside a managing OS and can't just, say, wait for the network card driver to handle sending packets in a timely manner when the app itself gets tied up) because their execution time is theoretically unbounded; _usually_ it works OK, but you can't say mathematically with certainty when it'll come up with a free block for your data, or whether it ever will.

Reply to
bitrex

The algorithm in the paper relies heavily on the "find first set bit in a word" CPU instruction, which unfortunately most 8-bit processors, at least, don't have.
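
Roughly what that instruction buys you: the two-level lookup described in the paper boils down to a couple of shifts plus one bit scan. A minimal sketch of my reading of it (not the reference code; SL_LOG2 and the function names are just illustrative, and it assumes GCC/Clang's __builtin_clz):

    #define SL_LOG2 4   /* 2^4 = 16 second-level lists per first-level class (illustrative) */

    /* Find-last-set: index of the highest set bit. On most 32-bit CPUs
       __builtin_clz compiles to a single CLZ/BSR instruction. x must be non-zero. */
    static inline int fls_u32(unsigned int x)
    {
        return 31 - __builtin_clz(x);
    }

    /* Map a request size to (first-level, second-level) free-list indices.
       Valid for size >= (1u << SL_LOG2); real allocators special-case tiny sizes. */
    static void mapping(unsigned int size, int *fl, int *sl)
    {
        *fl = fls_u32(size);                                       /* power-of-two class  */
        *sl = (int)(size >> (*fl - SL_LOG2)) - (1 << SL_LOG2);     /* linear subdivision  */
    }

    /* Example: size 460 gives fl = 8, sl = 12, i.e. the free list holding
       blocks of 448..463 bytes, found without any searching. */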

Reply to
bitrex

Forgot to add: becauuuuse the only software fallback that's going to have a deterministic execution time is probably a full lookup table, which is going to suck with a 16-bit pointer size.
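
The usual compromise, for what it's worth (not from the paper, just a sketch of the standard trick), is a 256-entry per-byte table plus a fixed test of the upper byte, which keeps the table at 256 bytes and the operation count constant:

    #include <stdint.h>

    /* fls_table[i] = index of the highest set bit in i (entry 0 is a don't-care). */
    #define REPEAT16(n) n,n,n,n, n,n,n,n, n,n,n,n, n,n,n,n
    static const uint8_t fls_table[256] = {
        0, 0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3,
        REPEAT16(4), REPEAT16(5), REPEAT16(5),
        REPEAT16(6), REPEAT16(6), REPEAT16(6), REPEAT16(6),
        REPEAT16(7), REPEAT16(7), REPEAT16(7), REPEAT16(7),
        REPEAT16(7), REPEAT16(7), REPEAT16(7), REPEAT16(7)
    };

    /* Find-last-set for a 16-bit word: one test of the high byte plus one
       table lookup, so the operation count is the same for every input. */
    static int fls16(uint16_t x)
    {
        return (x & 0xFF00u) ? 8 + fls_table[x >> 8] : fls_table[x & 0xFFu];
    }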

Reply to
bitrex

Wasn't it Bill Gates who once said 64k should be enough for anyone? Some futurist!

Reply to
Cursitor Doom

640k, to give the d. his due.

Cheers

Phil Hobbs

Reply to
pcdhobbs

I have often wondered why C programmers are so keen on using dynamic memory allocation instead of stack allocation. Is it because some early processors had a very limited stack, typically intended only for return addresses?

In Pascal, you could allocate short lived memory areas on the stack and pass a pointer to that stack area as a procedure parameter. Of course, this was simplified by having source level nested procedures, so you didn't have to explicitly pass these pointers to low level procedures, since these allocations were readily visible on the outer source code nesting level.
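
For comparison, the direct C equivalent of that Pascal pattern, as a rough sketch with made-up names: the caller owns a buffer in its own stack frame and just passes a pointer (plus the size) down the call chain.

    #include <stdio.h>

    static void fill_greeting(char *buf, size_t len)
    {
        snprintf(buf, len, "hello from the caller's stack frame");
    }

    int main(void)
    {
        char scratch[64];                        /* short-lived area in main's stack frame */
        fill_greeting(scratch, sizeof scratch);  /* callee just borrows it via a pointer   */
        puts(scratch);
        return 0;
    }                                            /* scratch vanishes when main returns */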

Reply to
upsidedown

The programming environment (language+libraries) is key, since they are the foundation which enables/limits what is possible.

There are such C-based environments, which attempt to force stack-based allocation by imposing a particular programming style, but in my experience they don't bother to /clearly/ say that is the driving force. That's annoying, since it makes /learning/ the style more difficult.

Two examples of such are

1) the old Apple iPhone environment Cocoa, which is an event-based library on top of Objective-C (the latter is Smalltalk-style messaging on top of C)

2) protothreads, which are extremely lightweight stackless threads designed for severely memory-constrained systems, such as small embedded systems

It is notable that Apple has finally given up on that approach, and is using a modern language, Swift.

Reply to
Tom Gardner

The two key points are memory fragmentation (which has already been mentioned) and non-deterministic timing. In embedded systems or other software with constraints on timing, dynamic memory allocation and deallocation can easily be a problem because it is very difficult to predict the timing - sometimes the functions run quickly, sometimes they can take a long time.

There is an additional challenge that has become more relevant in recent times - it gets really complicated when you have multiple threads or multiple processors trying to do such allocations.

Many C programmers are quite happy with memory allocated on the stack. There are two ways of doing it - VLAs and alloca(). These have the same uses as stack allocated memory in Pascal - and the same advantages and disadvantages regarding lifetime.
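
Something like this, as a hedged sketch (alloca()'s header varies by toolchain; <alloca.h> is the glibc/BSD location):

    #include <alloca.h>   /* not ISO C; other toolchains put alloca() in <malloc.h> */
    #include <stddef.h>
    #include <stdio.h>

    /* Both buffers live on the stack and vanish when demo() returns. */
    static void demo(size_t n)
    {
        int vla[n];                          /* C99 VLA: size fixed when the block is entered */
        int *tmp = alloca(n * sizeof *tmp);  /* alloca(): released automatically on return    */

        for (size_t i = 0; i < n; i++) {
            vla[i] = (int)i;
            tmp[i] = (int)(n - i);
        }
        printf("%d %d\n", vla[n - 1], tmp[n - 1]);
    }

    int main(void)
    {
        demo(8);
        return 0;
    }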

Reply to
David Brown

The stack on modern desktop machines is usually limited as well, to something like 1 or 8 MB. This is to protect the OS: if a heap allocation call goes awry, or you attempt to dereference a heap-allocated pointer outside the program's virtual address space, the worst that can happen is you get a bad alloc or a segfault and your program crashes.

There's no such intrinsic protection for the stack; without some kind of artificial limitation an AWOL recursive function could corrupt the OS kernel at best and be a huge security risk at worst (write arbitrary code into the OS' address space? nice.)

You can change the default stack limit from within C using a syscall, but then your code won't work in some other environments.
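
On a Posix system that would typically be setrlimit(); a rough sketch (Linux-flavoured and assuming the main thread's stack, so meaningless on a bare-metal target):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_STACK, &rl) == 0) {
            printf("stack soft limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);

            rl.rlim_cur = 64UL * 1024 * 1024;      /* ask for 64 MiB (illustrative)           */
            if (rl.rlim_cur > rl.rlim_max)
                rl.rlim_cur = rl.rlim_max;         /* soft limit may not exceed the hard one  */
            if (setrlimit(RLIMIT_STACK, &rl) != 0)
                perror("setrlimit");
        }
        return 0;
    }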

Reply to
bitrex

Large stack allocations are discouraged by the CERT-C standard, for security reasons:

"Stack overflow has been implicated in Toyota unintended acceleration cases, where Camry and other Toyota vehicles accelerated unexpectedly. Michael Barr testified at the trial that a stack overflow could corrupt the critical variables of the operating system, because they were located in memory adjacent to the top of the stack [Samek 2014]."

alloca() can be useful, but it isn't ISO C, so strictly speaking it's non-portable. I've also read that unfortunately even the non-portable versions included with compilers for some architectures are buggy.

Reply to
bitrex

In C++11:

    #include <memory>    // std::unique_ptr (the bracketed header names were lost; reconstructed)
    #include <utility>   // std::move

    struct Foo {
        int bar = 0;
        int baz = 1;
        int bloop = 2;
    };

    void my_func(const Foo* stuff) { /* etc... */ }

    void my_func2(std::unique_ptr<Foo> stuff) { /* etc... */ }

    int main()
    {
        /* Stack-allocated structure: instantiation calls the implicitly-declared
           constructor, the default values above are assigned to the fields, and
           the object is popped off the stack at the end of the function. */
        Foo foo1;

        // foo1 is a value

        /* Default-constructed heap-allocated Foo: the smart-pointer wrapper is
           constructed on the stack and will automatically destroy the wrapped
           heap object and free its memory at the end of the function. */
        auto foo2 = std::unique_ptr<Foo>(new Foo{});

        // foo2 can be used much like an ordinary "dumb pointer",
        // but the wrapper is non-copyable and cannot be passed by value.

        foo1.bar = 2;
        foo2->baz = 3;              // works as usual thanks to the operator overloads
        foo2->bloop = foo1.bar;

        // foo2 = new Foo{};        // ERROR, no way: can't assign a raw pointer directly
        // delete foo2;             // ERROR: foo2 is a stack-allocated wrapper class,
        //                          //        not a pointer

        my_func(foo2.get());        // OK: get() hands out a non-owning raw pointer,
                                    // which converts to const Foo* for read-only use
        my_func2(std::move(foo2));  // OK: passed as an rvalue reference; foo2 in "main"
                                    // releases control of the managed heap-allocated object
    }

Reply to
bitrex

Basically, the rule is "don't write buggy code". If you don't know it is safe to allocate something on the stack (with a VLA, an alloca, or just any old variable), then don't do it. The same applies to calling the function "foo()", or doing anything else in a program.

alloca() is available in practically every Posix environment, and thus is portable across a vast range of C implementations. It is also supported by most serious non-Posix compilers and libraries. Certainly a compiler can be standards-conforming and not support it, but such compilers will be rare. (There are certainly plenty of compilers that don't support alloca, but they are highly unlikely to support C11, C99 or even C90 fully.)

I have not heard of compilers with buggy alloca. But it is not a function I have used, as far as I can remember (I have used VLAs sometimes), and there are lots of compilers around. However, it is not a feature that is particularly hard to get right.

Reply to
David Brown

Very often one's using dynamic memory exactly because one wants it to persist after the function returns. Otherwise, unless the memory requirement is large, or of unknown size (at time of programming), one uses an automatic, stack-allocated variable.

Sylvia.

Reply to
Sylvia Else

Of course there is protection for the stack. A general-purpose OS that lets stack corruption in an application cause kernel corruption or other problems is a broken OS. On Linux, Windows, etc., stack corruption does no more harm than corruption anywhere else. And on embedded systems without memory protection where application code /can/ corrupt your kernel (if you have one), that applies equally to heaps and other memory as well as the stack.

Programs mainly use stacks for function calls, and small bits of data - and they need it to be fast. That is the typical usage pattern. So OS's (like Linux and Windows) will allocate a relatively small amount of memory, but make sure it is ready for use and not paged out virtual memory. It is likely that they will make it contiguous memory to reduce the number of indirections, MMU lookups, etc. Programs can ask for larger stack space if they need it - a program that unexpectedly requires lots of stack space most likely has a runaway recursion bug, so it is good to stop it.

Reply to
David Brown
