STL containers and managing memory allocation in embedded systems

[snip]

But we can run into a similar problem even if we are not using dynamic memory allocation.

For instance,

void entry_point()
{
    struct Foo f;
    // Stuff
}

The entry_point() function is an entry point for tasks that are created at run time. The number of tasks that exist at the same time depends on the situation in our embedded system. If that number is too large and sizeof(Foo) is big enough, the available memory can be exceeded.

If entry_point2() is used instead of entry_point(),

void entry_point2()
{
    struct Foo *f = new (std::nothrow) Foo;
    if (f == 0)
    {
        // On memory allocation failure
    }
    // Stuff
}

we can at least manage memory allocation failures.

Alex Vinokur email: alex DOT vinokur AT gmail DOT com


Reply to
Alex Vinokur

Alex Vinokur wrote: [snip]

delete f;

[snip]
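Putting the original post and the follow-up correction together, a minimal sketch of the fixed function could look like the following (the early return on allocation failure and the placeholder Foo definition are my additions; the original only marks the failure branch with a comment):

#include <new>      // std::nothrow

struct Foo { /* members elided */ };

void entry_point2()
{
    struct Foo *f = new (std::nothrow) Foo;
    if (f == 0)
    {
        // On memory allocation failure: report/handle it, then bail out
        return;
    }
    // Stuff
    delete f;
}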

Alex Vinokur email: alex DOT vinokur AT gmail DOT com


Reply to
Alex Vinokur

If that happens in a system requiring high reliability, you have either failed to provide enough memory to satisfy the absolute maximum memory requirement, or, if you cannot specify that absolute maximum value, the system design itself is faulty.

If the system must work with a predefined amount of memory, then the maximum number of such tasks can be calculated at startup, and during normal execution the system must _enforce_ that this number is not exceeded. If there is a risk that the limit might be exceeded, there must be a predefined policy (such as a higher-level protocol) for what to do when running low on resources: reject all requests, reject only the less important requests, or, as a last resort, put the system into safe mode.

IMHO, it is much simpler to have a task counter and compare it to a fixed limit :-).
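A minimal sketch of that counter idea, assuming hypothetical names (MAX_TASKS, g_task_count, the slot helpers) and leaving out the locking a real RTOS would need:

#include <cstddef>

const std::size_t MAX_TASKS = 8;        // derived from the worst-case memory budget

struct Foo { char data[512]; };         // stand-in for the real structure

static std::size_t g_task_count = 0;    // guard with a critical section or atomic in real code

// Returns true if a new task instance may be started.
static bool try_acquire_task_slot()
{
    if (g_task_count >= MAX_TASKS)
        return false;                   // limit reached: apply the predefined overload policy
    ++g_task_count;
    return true;
}

static void release_task_slot()
{
    if (g_task_count > 0)
        --g_task_count;
}

void entry_point()
{
    if (!try_acquire_task_slot())
        return;                         // e.g. reject the request or enter safe mode

    struct Foo f;                       // fixed-size, stack-allocated as in the original example
    (void)f;
    // Stuff

    release_task_slot();
}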

The suggested use of the dynamic allocation failure as an indication of a severe system overload might be realistic, provided that:

1.) the struct Foo is absolutely the largest allocation in the system (preferably by one or two orders of magnitude)
2.) this big allocation is done as the first (and preferably only) allocation in the task

If you first do some small allocations and only then the largest one, and several tasks are starting concurrently, the small allocations may succeed in all tasks while driving the free memory to dangerously low levels. That can make it impossible to recover, especially if the recovery routines themselves require some dynamic memory as work space. This would cause a deadlock situation.

If you instead put the big allocation first in the task, then when it fails you have less free memory than the big allocation requires, but quite likely only slightly less (since the smaller allocations should be at least one magnitude smaller), so recovery should still be possible.
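A rough sketch of that allocation ordering (the sizes, names, and the 64-byte "small" allocation are illustrative, not from the post):

#include <new>          // std::nothrow
#include <cstddef>

struct Foo { char payload[4096]; };     // stand-in for the dominant allocation

void task_body()
{
    // 1. The big allocation comes first: if it fails, free memory is low,
    //    but probably only slightly below sizeof(Foo), so recovery is still feasible.
    Foo *big = new (std::nothrow) Foo;
    if (big == 0)
    {
        // Overload detected early; apply the recovery/rejection policy here
        return;
    }

    // 2. Smaller allocations (at least an order of magnitude smaller) follow
    //    only after the dominant one has succeeded.
    char *small = new (std::nothrow) char[64];
    if (small == 0)
    {
        delete big;
        return;
    }

    // Stuff

    delete[] small;
    delete big;
}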

Paul

Reply to
Paul Keinanen

Huh? Unless I'm missing something, in this case you're using the stack rather than the heap. The stack will never be fragmented.

Yes, if your stack is too small you have a problem. But that's a rather different kettle of fish.

Steve


Reply to
Steve at fivetrees

In most small systems, stacks (and task control blocks) are usually preallocated into a fixed number of stack slots. Creating multiple instances of the same task consumes more stack slots, and when the slots are all used the system fails to create new tasks; thus there is a well-defined maximum number of concurrent task activations.
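A rough sketch of such a preallocated stack-slot scheme (slot count, stack size, and the linear free search are illustrative; a real kernel would also manage task control blocks and synchronization):

#include <cstddef>

const std::size_t STACK_SLOTS = 4;      // fixed at build time from the memory budget
const std::size_t STACK_SIZE  = 1024;   // bytes per task stack

static unsigned char g_stacks[STACK_SLOTS][STACK_SIZE];
static bool          g_slot_in_use[STACK_SLOTS] = { false };

// Returns a pointer to a free stack slot, or 0 when all slots are taken,
// i.e. the well-defined maximum number of concurrent tasks has been reached.
void *alloc_stack_slot()
{
    for (std::size_t i = 0; i < STACK_SLOTS; ++i)
    {
        if (!g_slot_in_use[i])
        {
            g_slot_in_use[i] = true;
            return g_stacks[i];
        }
    }
    return 0;       // task creation must fail here
}

void free_stack_slot(void *p)
{
    for (std::size_t i = 0; i < STACK_SLOTS; ++i)
    {
        if (static_cast<void *>(g_stacks[i]) == p)
        {
            g_slot_in_use[i] = false;
            return;
        }
    }
}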

In larger systems supporting dynamic task creation and deletion with different task sizes, it is quite common to allocate the stack for a new task from the global dynamic memory pool.

Paul

Reply to
Paul Keinanen

Right. Don't do that either.

--
	mac the naïf
Reply to
Alex Colvin
