Paper on dynamic memory allocation for real-time systems

Even so, if I'm designing a real-time system, I can allocate all the memory I'm going to need up front, either "dynamically" during initialization or statically. Then I don't have to concern myself with trying to figure out whether the worst-case execution time or worst-case fragmentation of an allocator may break my program.

Indeed, I don't have to determine what my worst case memory requirement is.

I may then be creating something that needs more physical memory than might otherwise have been enough, but memory isn't that expensive.
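In outline it amounts to something like this (sizes and names purely illustrative):

/* Sketch: grab everything at initialisation, never allocate afterwards.
   The pool size is deliberately generous rather than exact. */
#include <stdlib.h>

#define MAX_SAMPLES  4096        /* illustrative worst case */

static double *sample_buf;       /* allocated once, lives forever */

int init_memory(void)
{
    sample_buf = malloc(MAX_SAMPLES * sizeof *sample_buf);
    return sample_buf != NULL;   /* fail at start-up, not at run time */
}

/* After init_memory() succeeds, the real-time code only ever uses
   sample_buf; malloc() and free() are never called again. */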

I suppose there may be real-time systems out there where failure IS an option, but they'd be few and far between.

Sylvia.

Reply to
Sylvia Else

Without commenting on the rest of your post, that sentiment is unhelpful motherhood and apple pie.

I have a simple rule which is very useful at cutting through the spiel from salesmen and politicians: simply invert the statement, and if the result is manifest nonsense, then the original statement carries no useful information.

In this case "write buggy code" would be manifest nonsense, so "don't write buggy code", while true, is unhelpful.

Perhaps it would be better to choose a tool that gets the job done with reduced ways of introducing tool-specific bugs.

Reply to
Tom Gardner

I'm unconvinced. Persisting a variable after a function returns can be satisfactorily achieved either by allocating it in any of the calling functions "further up" the stack, or failing that as a global.

Dynamic memory is necessary when the number of such variables cannot be determined at compile time.
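For example (names illustrative), the caller owns the storage and passes it down, so nothing has to outlive the function that fills it in:

/* Sketch: the caller "further up" the stack owns the storage. */
struct result { int value; };

static void compute(struct result *out)   /* fills in caller's storage */
{
    out->value = 42;
}

void caller(void)
{
    struct result r;     /* lives in the caller's stack frame */
    compute(&r);
    /* use r here - no heap, no dangling pointer */
}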

Reply to
Tom Gardner

Yes - but that is /precisely/ the point.

There is nothing wrong with allocating memory on the stack - done properly, it is more efficient and safer (for some meanings of "safer") than allocating memory on the heap or other memory pools. There is no reason whatsoever to suppose that Toyota's software here would have worked correctly had the memory been allocated on the heap rather than the stack. Their problem was allocating memory of unknown size without due care - /not/ that they were allocating memory on the stack.

And the way to avoid the kinds of troubles they had is not by arbitrary rules such as "don't put big variables on the stack" - it is by using good development practices which hinder or catch all sorts of bugs. Code reviews, static analysis, testing methodologies, measurements, etc. It might well be that a code review spots a non-conformance with the style guide when they allocate a large object on the stack - it might well be that allocating a large object on the stack causes the static analysis to throw up an error. But those are consequences of solving the root issues. Banning large stack allocations is just putting a band-aid over a cancer tumour.
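To make the distinction concrete (a sketch only; the names and the 64-element limit are made up):

/* The problem is the unchecked, unbounded size, not the stack itself. */
#include <string.h>

#define MAX_ITEMS 64

void careless(size_t n, const int *src)
{
    int tmp[n];                     /* VLA: blows the stack if n is huge */
    memcpy(tmp, src, n * sizeof *tmp);
    /* ... */
}

void careful(size_t n, const int *src)
{
    if (n > MAX_ITEMS)
        return;                     /* reject, or fall back to a pool */
    int tmp[MAX_ITEMS];             /* bounded, known at compile time */
    memcpy(tmp, src, n * sizeof *tmp);
    /* ... */
}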

I was going to snip this, but it's a nice rule that I plan to copy :-)

Reply to
David Brown

On a 32 bit virtual memory machine, why on earth would the stack be limited to a few megabytes? Usually the stack is set at 2 GiB growing downwards, with the heap above the code growing upwards towards the stack, so there might be a gigabyte or more between heap and stack. Initially, most of this area in between is marked as unmapped pages. As the stack and heap grow, now and then a memory access hits a page slightly below the currently allocated stack or slightly above the currently allocated heap, and a page fault occurs. The OS then checks whether it is time to extend the stack or heap; if so, the inaccessible page is returned as a demand-zero page to the stack or heap and program execution continues.

The practical limitation is the size of the page file. With several GB of page file, this might not be an issue, but the OS can always determine whether there is a risk that the stack and heap areas will collide, and refuse the allocation.

The statement about 1 or 8 MiB sizes sounds more like the _initial_ stack size mapped to a created process; after that is consumed, further 1 to 8 MiB chunks are allocated to the process over and over again.

Things get hairy in 32 bit multithreaded processes, since each thread needs a private stack/heap, so the size of that area has to be known at thread creation time.
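For what it's worth, on a POSIX system both ends are visible from user code (numbers illustrative): getrlimit() reports the process stack limit, and a thread's stack size is fixed when the thread is created:

/* Sketch: query the main stack limit, then create a thread with an
   explicit 1 MiB stack chosen at creation time. */
#include <stdio.h>
#include <pthread.h>
#include <sys/resource.h>

static void *worker(void *arg) { return arg; }

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) == 0)
        printf("main stack soft limit: %ld bytes\n", (long)rl.rlim_cur);

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1024 * 1024);  /* fixed at creation */

    pthread_t tid;
    if (pthread_create(&tid, &attr, worker, NULL) == 0)
        pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}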

On 64 bit platforms, you could allocate stack/heap areas say every 100 GiB so no big problems.

The OS can protect itself, no big problem.

On any virtual memory system, the OS can easily protect itself simply by determining what to do with a stack page fault.

Can you really do that after the thread is created? Allocating virtual addresses at thread creation time is trivial.

Reply to
upsidedown

On Oct 23, 2017, David Brown wrote:

In the hard realtime embedded world circa 1980 to 1990 or so, for problems that did not lend themselves to simple static allocation schemes, we usually built our own dedicated memory allocator. The standard design was to have a linked-list of empty buffers (all the same size), and one would get an empty buffer as needed from the list, and when done with the buffer it would be linked back into the list of empty buffers. In many systems there were two such buffer lists, one with a large number of small buffers, the other with a small number of large buffers. The small buffers were used to pass access to the large buffers, which could be accessed by multiple threads in parallel (using reference counts and inversion-proof mutual exclusion mechanisms).

This approach has the advantage that it is constant time, and does not get slower as the system becomes more and more heavily loaded and free memory becomes scarce. This scheme is basically bulletproof.
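In outline, such a pool looks something like this (sizes illustrative; locking and reference counting left out):

/* Sketch of a constant-time fixed-size buffer pool. A real one would
   protect pool_get/pool_put with an inversion-safe mutex. */
#include <stddef.h>

#define NUM_BUFS  32
#define BUF_SIZE  256

union buf {
    union buf    *next;            /* valid only while on the free list */
    unsigned char data[BUF_SIZE];
};

static union buf  pool[NUM_BUFS];
static union buf *free_list;

void pool_init(void)
{
    for (size_t i = 0; i < NUM_BUFS; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

void *pool_get(void)               /* O(1): pop from the free list */
{
    union buf *b = free_list;
    if (b)
        free_list = b->next;
    return b;
}

void pool_put(void *p)             /* O(1): push back onto the free list */
{
    union buf *b = p;
    b->next = free_list;
    free_list = b;
}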

A better alternative for soft realtime applications is Knuth's Buddy-System allocator.
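The buddy idea, roughly: block sizes are powers of two, and a block's buddy is found by flipping one bit of its offset, which makes splitting and coalescing cheap. Illustratively:

/* Fragment only: with blocks sized and aligned to powers of two, a
   block's buddy is one bit-flip away, so merging free neighbours is a
   constant-time check. */
#include <stddef.h>

static size_t buddy_of(size_t offset, unsigned order)
{
    return offset ^ ((size_t)1 << order);   /* 2^order == block size */
}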

Joe Gwinn

Reply to
Joseph Gwinn

Walking a tightrope with a net tends to make you careless and prone to accidents, so remove the net and just learn to be more careful!

Reply to
bitrex

And also step back in time to the 1950s wrt how flexible your design is. Say you have two processes each of which executes 50% of the time non-concurrently, and yet if you rely on static initialization all their resources are initialized at startup and exist forever to sit around and do nothing half their life.

Reply to
bitrex

So remove airbags in cars, and replace them with spikes?

That would make things safer by slowing drivers down unnecessarily, just as the probability of tool-induced bugs slows down developers unnecessarily.

Reply to
Tom Gardner

That's the joke

Reply to
bitrex

A fairly typical scenario is parsing a hierarchical data stream, which may be something as relatively simple as XML, or may be the text of a program in a high-level language such as C.

The number and nature of the elements cannot be determined in advance of the parsing (which tends to be recursive), so it is not practical to allocate them further up the stack.
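For instance (names illustrative), each element ends up on the heap as the parser meets it:

/* Sketch: nodes are created on the heap as the recursive parser
   discovers them, because their number isn't known in advance. */
#include <stdlib.h>
#include <string.h>

struct node {
    char        *name;
    struct node *first_child;
    struct node *next_sibling;
};

struct node *new_node(const char *name)
{
    struct node *n = calloc(1, sizeof *n);
    if (n)
        n->name = strdup(name);    /* strdup is POSIX; or malloc+strcpy */
    return n;
}

void add_child(struct node *parent, struct node *child)
{
    child->next_sibling = parent->first_child;
    parent->first_child = child;
}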

Sylvia.

Reply to
Sylvia Else

yeah, 8086 had only a 16 bit stack pointer but I never tried alloca in a huge memory model.

in the 8 bit world

6800 had only an 8 bit stack pointer, and PIC16 even fewer, and stack outside of main memory.

AVR and Z80 have 16 bit stack pointers

In C too, there's "alloca()" - I think the "a" on the end stands for "auto".

The standard does not require that it be allocated on the stack, but if "longjmp()" is also provided, the only sensible implementation (that I am aware of) is to do it that way.
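Typical usage (glibc/BSD <alloca.h>, not standard C) - the storage disappears when the function returns:

/* alloca() sketch: the buffer is released on return, so it must not be
   returned or stored anywhere that outlives the call. */
#include <alloca.h>
#include <stdio.h>
#include <string.h>

void greet(const char *name)
{
    char *buf = alloca(strlen(name) + 8);   /* freed on return */
    sprintf(buf, "hello %s", name);
    puts(buf);
}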

GCC has that in C too, not sure if it's part of some C standard or purely a GCC extension.

--
This email has not been checked by half-arsed antivirus software
Reply to
Jasen Betts

Just so.

And the key point w.r.t. using a heap is that the number cannot be determined at compile time, not their lifetime.

Reply to
Tom Gardner

The 6800 had a full 16 bit PC, stack pointer and index register.

(The 6809 had two SPs and two index registers; it was the best of its class but came too late to be influential)

Reply to
Tom Gardner

On Tuesday, October 24, 2017 at 11:01:21 PM UTC-7, Jasen Betts wrote: ...

...

The 6800 and descendants had a 16-bit stack pointer - the 6502 and variants (used by Apple, BBC, Commodore etc) only had 8 bits.

kevin

Reply to
kevin93

On AVR 8 bit at least an out-of-control recursive function could in theory damage hardware, as the SRAM alias addresses for the I/O port registers are within the stack pointer's address space. Halt and Catch Fire

Reply to
bitrex

(Some AVRs have 16-bit stack pointers, but that's nit-picking.)

The C standard does not require a stack at all. A stack is, of course, the most efficient known mechanism for function calls, local variables, parameters, etc. - but it is not required by C. There are devices with no data stack and only a small dedicated hardware return stack (like small PIC's and some AVR Tiny devices), and plenty of devices with such inefficient access to data on the stack that compilers use static memory allocations for parameters and local variables (like many old-style 8-bit microcontrollers).

The C standard also does not have alloca(). It is a long-standing Unix/glibc extension rather than part of any standard - and it only makes sense where there /is/ a stack.

longjmp() is in the C standard, but is required only for "hosted" implementations and not for "freestanding" implementations. Devices like the AVR Tiny without a stack are unlikely to support longjmp(). (I haven't checked gcc's support here - as someone interested in clear, structured and reliable coding, I would not touch longjmp with a bargepole.)

There are a few C implementations for bigger systems that do not have a normal stack as such. AFAIUI, there are some mainframe systems where function frames are allocated in a linked list in a heap, rather than on a stack. And these should support longjmp. But whether it is an efficient implementation is another matter - the C standard only requires that it works according to spec, not that it is a "sensible" implementation!

It is not part of any C standard - it is a gcc extension. gcc supports a number of languages, not just C and C++. Amongst others it supports Ada, which /does/ have nested procedures. (It also has out-of-tree support for Pascal.) Since the machinery is all in place in the compiler, and there was a clear and obvious syntax, gcc developers decided to make it available in C also. But the implementation of nested functions in C requires trampolines on many architectures - executable code snippets on the stack. This breaks code safety restrictions about non-executable stacks.
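For anyone who hasn't seen it, the extension looks like this (illustrative only); passing the nested function's address to qsort() is what forces the trampoline:

/* gcc nested-function extension, not standard C: cmp captures
   "threshold" from the enclosing frame, so taking its address makes
   gcc build a trampoline on the stack on most targets. */
#include <stdlib.h>

void sort_by_distance(int *v, size_t n, int threshold)
{
    int cmp(const void *a, const void *b)      /* nested function */
    {
        int da = abs(*(const int *)a - threshold);
        int db = abs(*(const int *)b - threshold);
        return da - db;
    }
    qsort(v, n, sizeof *v, cmp);               /* address taken here */
}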

If you want nested functions, you would be better to use C++ and lambdas rather than gcc's extension to C here.

I wonder if your half-arsed antivirus software would have spotted the error in your signature delimiter - or that this is not an email? :-)

Reply to
David Brown

6502 that's the one I meant, thanks.
--
This email has not been checked by half-arsed antivirus software
Reply to
Jasen Betts
