Policy on rebooting?

... snip ...

The logical extension of that attitude is that using _any_ C construct can lead to problems, so don't use any of them. Among the worst are casts, pointers, and indexing. :-) It is analogous to the teenager who has learned to drive, and now can do anything.

"You gotta know the territory."

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

It's a logical extension that I never stated. The trick is weighing the benefit of using the construct against the possible downside (a judgment that is unfortunately lacking in a lot of developers, professional or otherwise).

-->Neil

Reply to
Neil Bradley

I disagree. Dynamic memory allocation allows the use of a more robust operating system with flexible message passing.

In a previous job, I worked on a major cellular phone product that is now in the hands of hundreds of thousands of people. It reboots all the time. But most of the time, the user will never know it. Before we reboot, we even set a bit indicating that the boot sequence should be modified _not_ to play startup sounds and such.

Reply to
Mike

This does not *prevent* the problem, it only limits/slows the problem if the application has a particular pattern of dynamic memory usage.

Imagine a limited supply of large fixed-size blocks. If these blocks are allocated and released, and then subsequently allocated and subdivided to fulfill the needs of another part of the system requiring a smaller block size (of which there are no more), then eventually such an allocation system will fail as well.

Of course, we're talking about general malloc/free and new/delete here, not private memory pools, which are pre-allocated for a particular part of the system with fixed-size blocks.

What's more, block memory allocation schemes reduce the "memory efficiency" advantages of dynamic memory allocation by wasting the unused portion of each allocated block.

BTW, allocation is not the issue. If memory is never freed, then fragmentation will not happen. Of course, then it's not *really* dynamic.
--
Michael N. Moran           (h) 770 516 7918
5009 Old Field Ct.         (c) 678 521 5460
Kennesaw, GA, USA 30144    http://mnmoran.org

"... abstractions save us time working, but they don't
  save us time learning."
Joel Spolsky, The Law of Leaky Abstractions

The Beatles were wrong: 1 & 1 & 1 is 1
Reply to
Michael N. Moran

It's quite possible to do "flexible" message passing without dynamic memory allocation (malloc/free, new/delete).
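For instance, a statically allocated ring of fixed-size message slots gives queue-style message passing with no heap at all. A minimal sketch (the names, sizes, and slot count are illustrative, not from any particular RTOS):

```cpp
#include <cstddef>

// A statically sized mailbox: messages are copied into fixed slots,
// so no allocation ever happens after startup.
struct Message {
    int type;
    unsigned char payload[32];
};

class Mailbox {
public:
    bool post(const Message& m) {              // returns false when full
        std::size_t next = (head_ + 1) % kSlots;
        if (next == tail_) return false;
        slots_[head_] = m;
        head_ = next;
        return true;
    }
    bool fetch(Message& out) {                 // returns false when empty
        if (tail_ == head_) return false;
        out = slots_[tail_];
        tail_ = (tail_ + 1) % kSlots;
        return true;
    }
private:
    static constexpr std::size_t kSlots = 16;
    Message slots_[kSlots];
    std::size_t head_ = 0, tail_ = 0;
};
```

Messages are passed by copy into pre-existing slots, so "allocation" is just advancing an index; the trade-off is a fixed maximum message size and queue depth.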

Yikes! I hope this is just a troll.

Reply to
Michael N. Moran

Well, using any C-like heap manager in embedded is nuts; I agree. But dynamic memory management using fixed-size pools is often done for various reasons. It depends what you mean by "dynamic memory allocation".

No troll. Our software was large and complex. We were very defensive, because a screwup could mean the phone starts spewing out garbage RF that could mess up the rest of the system and be a violation of the FCC. On any panic, we wrote the information to a flash block, set the powerdown reason (so boot wouldn't play sounds, etc.), and rebooted.

In fact, previous products for the Japanese PDC system were designed such that a reboot could occur during the middle of a phone call without the call being dropped! Unfortunately this can't be done with GSM and CDMA systems. Hence dropped calls are mostly due to RF and loss of signal, but I'm certain that some are due to software bugs and reboots.

Reply to
Mike

The fixed-size-block allocation scheme is foolproof unless some fool tries to circumvent its "shortcomings" by doing the kind of thing you describe above. The solution is to have the right number of right-sized pools containing the right number of right-sized blocks (as you say below) - but with *no poaching allowed*. If you can't get the proper-sized block, then there aren't enough, so back to the drawing board. Stealing a bigger block from another pool (transparently or not) is not acceptable; it just re-introduces fragmentation by the back door.
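A no-poaching pool of this kind is only a few dozen lines. The sketch below (illustrative names; a real one would add locking and usage statistics) fails an allocation outright when its pool is empty, rather than borrowing from a bigger pool:

```cpp
#include <cstddef>

// A fixed-size-block pool: NumBlocks blocks of BlockSize bytes, carved
// from static storage and threaded onto a free list. No poaching: when
// this pool is empty, alloc() fails instead of borrowing elsewhere.
template <std::size_t BlockSize, std::size_t NumBlocks>
class FixedPool {
public:
    FixedPool() : free_list_(nullptr) {
        for (std::size_t i = 0; i < NumBlocks; ++i) {
            Block* b = reinterpret_cast<Block*>(storage_ + i * sizeof(Block));
            b->next = free_list_;
            free_list_ = b;
        }
    }

    void* alloc() {
        if (!free_list_) return nullptr;   // exhausted: report, don't poach
        Block* b = free_list_;
        free_list_ = b->next;
        return b;
    }

    void free(void* p) {
        Block* b = static_cast<Block*>(p);
        b->next = free_list_;
        free_list_ = b;
    }

private:
    union Block {
        Block* next;                       // link while on the free list
        unsigned char payload[BlockSize];  // user data while allocated
    };
    alignas(Block) unsigned char storage_[sizeof(Block) * NumBlocks];
    Block* free_list_;
};
```

Since free() can never split or coalesce anything, fragmentation inside the pool is impossible; the only failure mode is exhaustion, which shows up immediately in testing rather than as a slow degradation in the field.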

I agree with eschewing malloc() and free(). However, if you want to use C++ effectively in an embedded system, you really have to customise new() and delete() to use a safe, fixed-size-block scheme tailored to your system. This is painful, but essential. It is very hard to use C++ effectively without using new() and delete().
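Concretely, that customisation can be done per class by overloading the class's own operator new/delete to draw from a private slot array. A hedged sketch (the class, slot count, and sizes are invented for illustration, and a real version would need thread safety):

```cpp
#include <cstddef>
#include <new>

// Hypothetical message class: every instance lives in a private pool of
// fixed-size slots, so new/delete never touch the global heap.
class Msg {
public:
    int id = 0;
    char body[28] = {};

    static void* operator new(std::size_t) {
        for (std::size_t i = 0; i < kSlots; ++i) {
            if (!used_[i]) {
                used_[i] = true;
                return pool_ + i * kSlotSize;
            }
        }
        throw std::bad_alloc();   // pool exhausted: back to the drawing board
    }

    static void operator delete(void* p) {
        if (!p) return;
        std::size_t i = static_cast<std::size_t>(
            static_cast<unsigned char*>(p) - pool_) / kSlotSize;
        used_[i] = false;
    }

private:
    static constexpr std::size_t kSlots = 8;
    static constexpr std::size_t kSlotSize = 32;   // must be >= sizeof(Msg)
    alignas(std::max_align_t) static inline unsigned char pool_[kSlots * kSlotSize];
    static inline bool used_[kSlots] = {};
};
```

With this in place, ordinary `new Msg` / `delete m` expressions all over the code base quietly become fixed-block pool operations.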

True, but better that than the indeterminacy of fragmentation.

Also true, and not as daft as it sounds. Using such a "heap" is quite a convenient way of initially allocating pools, stacks, etc.

--
Peter Bushell
http://www.software-integrity.com/
Reply to
Peter Bushell

A production system should not have what we called "memory block promotion" enabled. This is where a larger block is given out when no more blocks of the appropriate size are available. It is one of the hardest things to tune, but really necessary. I found that determining the appropriate sizes of the pools statically was impossible, so we did it with lots and lots of testing/tuning.

Reply to
Mike

Actually, it does prevent the problem. The reason is that the RTOS won't take (for example) a 256-byte space and re-allocate it in smaller chunks, then turn around and try to make a 256-byte allocation out of a 128-byte and two 64-byte blocks. Basically, there is a memory pool.

The downside is what you alluded to: it is not as memory efficient. For example, a structure that needs 200 bytes will get a 256-byte allocation. Life is not perfect.

Dennis,

--
If sending a reply you will need to remove "7UP".
Reply to
D. Zimmerman

Like malloc/free, new and delete get their resources from a pool that is *at least* global to a given thread, if not the entire system.

However, this is *not* true of "placement new", which I use to construct objects in statically/pre-allocated memory as required.
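For readers who haven't used it: placement new separates construction from allocation, so the storage can be a static buffer. A minimal sketch (the `Codec` type and buffer name are invented for illustration):

```cpp
#include <new>   // declares the placement form of operator new

struct Codec {
    explicit Codec(int rate) : sample_rate(rate) {}
    int sample_rate;
};

// Statically allocated, suitably aligned raw storage: no heap involved.
alignas(Codec) static unsigned char codec_buf[sizeof(Codec)];

Codec* init_codec() {
    // Placement new: run the constructor in pre-allocated memory.
    return new (codec_buf) Codec(48000);
}

void shutdown_codec(Codec* c) {
    // No delete: just run the destructor; the storage itself is static.
    c->~Codec();
}
```

The matching teardown is an explicit destructor call, not `delete`, since the storage was never heap-allocated.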

My point is that a general purpose new/delete (beyond the "placement" variety) is the problem due to heap fragmentation issues.

Note that I am not arguing against the use of fixed block pools, but against general purpose dynamic memory allocation, which allocates *dynamically sized* objects from the same source. IOW, I *am* arguing that such schemes introduce fragmentation and are therefore evil in any system that is to execute indefinitely.

Yep ... I pre-allocate at startup frequently.

Reply to
Michael N. Moran

It seems to me that such a system with "memory block promotion" disabled is prone to behavior changes due to run-time variations in the order in which blocks of various sizes are allocated. If several large blocks are allocated and released before the small blocks are, the number of small blocks available could be limited, wildly affecting memory usage efficiency. Thus, determinism is affected.

Further, the situation can still exist where there is plenty of memory available for a purpose, but it cannot be used because it exists in the wrong "size partition." That seems like a fragmentation issue, though perhaps of a different kind.

Reply to
Michael N. Moran

Such allocation algorithms certainly exist. However, while maintaining "ease of use" for the programmer, such schemes can lead to stability/determinism/efficiency issues due to fragmentation at a different level of granularity.

Life is not perfect. However, if a system is *required* to execute predictably for an indefinite period of time without reboot (most autonomous embedded systems), then these issues must be addressed directly, *by design*, and ad-hoc use of malloc/free and new/delete is inappropriate.

Reply to
Michael N. Moran

I have worked with major aircraft manufacturers and their suppliers (i.e., life-critical software), and some have a "run until you burn" policy while the others have a totally opposite "shut down on first fault" policy. These policies are consistent across the board within a supplier (generally set by the chief engineer).

The reasoning for the "run until you burn" policy can best be described by this scenario: "A fighter jet has just been shot up badly, has one engine left and half a flight control, and is flying over the ocean trying to get back to the aircraft carrier. You have one CPU left controlling the surfaces, and you're not going to shut down no matter what type of CPU faults you have, because shutting down means you're in the ocean."

The reasoning for the "shut down on first fault" policy goes something like this: "One CPU goes semi-bad on an otherwise healthy aircraft; basically the CPU goes partially insane and starts telling the others misinformation (valid data, but incorrect). Since the weird scenario hasn't been tested, the others get confused, everybody decides to shut down, and the aircraft drops into the ocean."

Since planes aren't falling from the sky, you can assume either policy will work as long as you have the proper redundancy management. However, each policy requires totally different solutions to the same problem, so it's best for upper management to pick one policy and stick with it to maximize your experience base (so now you know my answer: listen to your boss).

Reply to
steve

... snip ...

The problem is not the use of dynamic memory; the problem is fragmentation. Avoiding this doesn't require garbage collection and the like; in fact, that is harmful. What is required is a means of avoiding fragmentation while preserving efficiency.

What you can do (and I have in the past) is build an allocator that functions through indirect pointers, so that allocations can be moved. Then a routine can be called periodically, basically in the idle time of the process, that advances one step towards complete defragmentation. That means: find the lowest (or highest) portion, move it, and update what the indirect pointer points to. Sooner or later, barring continuous allocate/free, the memory will be compacted. You usually also have to be able to test whether there is sufficient memory when malloc fails, and just postpone something until it can succeed. My application timed out the use of the storage (it was a store and forward message system) and dumped the timed-out message to the console with an undeliverable tag attached. So we were guaranteed eventual freeing of the storage.
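Under stated assumptions (integer handles instead of raw pointers, one move per idle-time call), this scheme can be sketched as below. The class and names are invented, and `std::vector` stands in for what would be static tables in a real embedded build:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Clients hold integer handles, never raw pointers. Each handle indexes a
// table entry holding the block's current offset, so the compactor is free
// to move blocks and fix the table behind the clients' backs.
class MovingArena {
public:
    explicit MovingArena(std::size_t bytes) : arena_(bytes) {}

    // Allocate at the high-water mark; -1 means "no room yet, retry after
    // compaction has caught up".
    int alloc(std::size_t size) {
        if (top_ + size > arena_.size()) return -1;
        table_.push_back({top_, size, true});
        top_ += size;
        return static_cast<int>(table_.size() - 1);
    }

    void free(int h) { table_[static_cast<std::size_t>(h)].live = false; }

    // The one level of indirection: always re-fetch the address, never cache it.
    void* deref(int h) {
        return arena_.data() + table_[static_cast<std::size_t>(h)].offset;
    }

    // One idle-time step: slide the first out-of-place live block down into
    // the dead space below it. Returns false once fully compacted.
    bool compact_step() {
        std::size_t want = 0;
        for (Entry& e : table_) {
            if (!e.live) continue;
            if (e.offset != want) {
                std::memmove(arena_.data() + want, arena_.data() + e.offset, e.size);
                e.offset = want;
                return true;            // exactly one move per call
            }
            want += e.size;
        }
        top_ = want;                    // fully packed: reclaim the slack above
        return false;
    }

private:
    struct Entry { std::size_t offset, size; bool live; };
    std::vector<unsigned char> arena_;  // static buffers in a real system
    std::vector<Entry> table_;          // dead entries never recycled here
    std::size_t top_ = 0;
};
```

A failed alloc() here is the "postpone until it can succeed" case: the memory may well exist, but only after compact_step() has squeezed the dead gaps out.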

If you can guarantee that only one pointer to a particular block is in play at any time, you can, in the allocation supervisory block, replace the indirect table with a pointer back to the actual pointer. I hope that makes sense, but I am too lazy to make an ascii diagram.

I repeat, dynamic memory is not the problem. Fragmentation is.

Reply to
CBFalconer

While this would work nicely in a cooperative multitasking system, how do you handle the situation in a preemptive system?

Running the compacter in the idle task simplifies things, since each dynamic memory element access does not have to be protected in normal code, but you still have to disable interrupts when the segments are actually moved around in the compacter, to prevent any high priority task from activating during this move operation. If large amounts of data must be moved, this may disable the interrupts for an unacceptably long time.

Paul

Reply to
Paul Keinanen

Actually, those guys who have worked with TI DSP chips and read about XDA(I)S (the eXpress DSP Algorithm Standard) already know this. Those who haven't may come to it through experience (or may not). A few key points of XDA(I)S (the I is in parentheses 'coz TI changed the name by sticking in this additional letter) are:

- the algorithm instance (think of a C++ class realized as an object, with common code and as many "*this" things created as required) must never use any global data other than constants, touch any peripheral registers, or call OS functions; i.e., it must be the kind of software that can't break the system or any other instance of the same or another algorithm

- memory allocation/freeing is unified: when you create an algo instance (think of constructing a C++ object) the algorithm says what memory it wants, how much, and how aligned, but it does not allocate/free anything on its own. This information is then passed to whatever memory manager you have (standard or custom, it doesn't matter as long as you comply with the API)

- the algorithm functions (think of C++ class member functions) are always called with the 1st parameter being the pointer to the algorithm's data/object (think of the "this" pointer of C++)

- memory fragmentation can be reduced by calling the standard XDAIS moved() function (think of an abstract member whose interface is defined but whose implementation is up to the programmer). This moved() member is used to relocate the instance (its data) to a different location if needed; in this function you should update/relocate any pointers internal to the instance/data

- other nice things...

So, following a few simple rules and mechanisms you get a safe piece of code that can be used in multichannel and multithreaded systems and you can defragment memory whenever needed by calling the moved() methods of the instances.

IMO, this is a rather useful standard to follow when designing code.

Alex

P.S. I've referred to C++ because it's a very similar concept, but it's not C++. It's all pure C. Probably closer to Objective-C (correct me if I'm wrong).

Reply to
Alexei A. Frounze

The compactor moved only one allocation at a time, thus limiting any possible interrupt inhibition. This is relatively inefficient for compaction, since various things have to be recalculated on each call, but who cares - this is idle time anyhow. As far as large items are concerned, this application knew each live item was at most 100 or so bytes, and the (possibly) large free areas didn't have to be copied.

This was another application that ran undisturbed for years, with no reboots.

Reply to
CBFalconer

One way around this problem would be to disable interrupts, make the decision to move one block and temporarily reserve a block at the final destination. The interrupt can then be enabled and the actual copying started. After the copying, the interrupts are disabled and if the copying had been done without an interrupt (e.g. each interrupt routine sets a flag when executed), the movement is committed and the original block is freed. However, if there had been interrupts, the temporary copy result is discarded (rollback) and tried again at some later time. Anyway, the interrupts are enabled again.

Of course, the worst case copy time must be much less than the minimum time between any interrupts, so that the compactor will eventually be able to do the move.
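The commit/rollback dance can be reduced to a flag that every interrupt handler sets. A sketch under the stated assumption of a single compactor task; the names are invented, and the platform-specific interrupt enable/disable calls are shown only as comments:

```cpp
#include <atomic>
#include <cstddef>
#include <cstring>

// Set from every interrupt handler; inspected by the compactor.
static std::atomic<bool> irq_seen{false};

void on_interrupt() { irq_seen.store(true, std::memory_order_relaxed); }

// Attempt to move one block with interrupts enabled during the copy.
// Returns true if the move committed, false if it must be retried later.
bool try_move(unsigned char* dst, const unsigned char* src, std::size_t n) {
    // [interrupts disabled]  decide on the move, reserve dst, clear the flag
    irq_seen.store(false, std::memory_order_relaxed);
    // [interrupts enabled]   the possibly long copy runs preemptibly
    std::memcpy(dst, src, n);
    // [interrupts disabled]  commit only if nothing ran in between
    if (irq_seen.load(std::memory_order_relaxed))
        return false;          // rollback: an ISR may have touched src mid-copy
    // commit: dst is now the live block and src may be freed
    return true;
}
```

If an interrupt did land during the copy, the destination copy is simply discarded and the move retried later, so the long memcpy never has to run with interrupts masked.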

Paul

Reply to
Paul Keinanen
