Languages, is popularity dominating engineering?

Newlines cause your source to take up more vertical space (on a display device).
















Imagine if its contents included structural syntactic elements that affected its meaning (besides just a bunch of sequential words). E.g., without scrolling back up, how many *words* did I type? How many were on contiguous lines? Any punctuation/capitalization errors in that sentence?

(No, *this* isn't important. But, other "little details" of comparably simple complexity -- locations of braces, parens, etc. -- *do* affect source code's meaning.)

Reply to
Don Y

I wonder if Don's goal is to try and collapse his code so that more of it appears on the screen at the same time.

In my case, my personal brace style is the Whitesmiths brace style so I am pretty much the opposite of Don here. However, I still wrap braces around everything because I think it makes things clearer even at the expense of getting slightly less code on the screen at any one time.


Simon Clubley, 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

On 14.12.2014 at 10:38, … wrote:

Which is why I chose the wording "automatic variables", rather than "stack variables".

Because that's not the meaning of "static" that Les, Simon and I were talking about.

You're talking about the allocation schemes for _automatic_ variables employed by compilers for essentially stack-less machines (these days the 8051 may be the most prominent example).

We were talking about flagging those variables _static_, at source level.

Of course it will. If you allow the compiler to apply it. Making them "static" at source level, forbids this optimization, causing excess RAM usage.

Reply to
Hans-Bernhard Bröker

Exactly. As should have been clear from my (elided) comment:

"One of the criteria that (on initial exposure) seemed "arbitrary" in my first language design class was "be able to write functions/subroutines on a single page" (which was never formally defined). It doesn't take long to realize why this can be A Good Thing."

And, expounding on that to indicate that syntactic sugar that tries to cram *too* much onto a single line can be counterproductive. The example I gave in another reply, up-thread:

s := tokenize(s, "\t;, \n"); case hd str { "foo" => spawn do_foo(tl str); "baz" or "bar" => do_bar(str); "move" => x = int hd tl str; y = int hd tl tl str; rest = tl tl tl str; move(x,y); eval(rest); * => die(); }

is very expressive and "tight". But, intimidating and error prone (omit a '>' and "=>" becomes '=', etc.).

A similar question would be: "how much do braces cost on your machine?"

When writing (code or prose), I tend to use large screens ("windows") so I can opt for a "longer (and usually wider!) page size". Scrolling back and forth is just too easy to miss an indent level of a structure or a line of code at the "page crease", etc.

Reply to
Don Y

Don't forget that in the message which started this sub-thread I mentioned this was for 8 bit MCUs with limited memory resources available. Hence the goal, for me, is to use development techniques which expose problems as early in the process as possible, and to do so in as predictable and deterministic a way as possible.

In 32 bit MCUs with more resources available I go for a much more traditional stack based approach and it's only the big buffers I tend to keep as static.

In even larger 32 bit MCUs even the large buffers tend to get dynamically created at run-time in my code.

I am also aware that as a hobbyist I am not building thousands of devices so you may have to make tradeoffs I don't such as saving a few pence by using a more resource limited MCU; hence a technique which may increase memory consumption slightly may not be available to you.

However, there was a question I asked earlier: how do the costs of debugging a stack trashing .bss/.data compare with the costs of using a slightly larger MCU and different development techniques in the first place?

BTW, I am also very aware that this technique designed to produce reliable code on small resource constrained MCUs is the same technique which can produce hard to maintain code on much larger systems.


Simon Clubley, 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

Am 14.12.2014 um 14:55 schrieb Simon Clubley:

The problem remains that your approach doesn't just expose those problems: it makes them worse.

There's a common turn of phrase about a cure that's worse than the disease. In your case, even the diagnostic is worse than the disease.

I had answered that, but apparently that message didn't make it out into the net:

That's making the assumption that using automatic variables, where they can be used, has to increase development time. I strongly doubt that assumption.

Stack overflow is something you have to protect against, anyway. The actual amount of data in automatic variables doesn't change that in any meaningful way. Both the cost of stack checking and that of debugging a stack overflow remain essentially the same regardless of how much stuff is on the stack.
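The point that the cost of stack checking doesn't depend on how much lives on the stack can be sketched with a single canary word at the stack limit. This is a toy illustration of mine, not code from the thread: the stack is modeled as an array, where a real MCU build would take the bounds from the linker script.

```c
#include <stdint.h>

/* Hypothetical stack region -- on real hardware these bounds would
 * come from the linker script; here it's just an array. */
#define STACK_WORDS 256
uint32_t stack_region[STACK_WORDS];

#define CANARY 0xDEADBEEFu

/* Paint the stack limit with the canary at startup. */
void stack_paint(void) {
    for (int i = 0; i < STACK_WORDS; i++)
        stack_region[i] = CANARY;
}

/* Cheap periodic check: if the word at the stack limit has been
 * overwritten, the stack has overflowed (or is about to).  The cost
 * is the same no matter how much data the stack holds. */
int stack_overflowed(void) {
    return stack_region[0] != CANARY;
}
```

The check is one load and one compare, regardless of whether the frames above it hold 10 bytes of automatics or 10 kilobytes.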

That being said, let's just state that 50 million cents do indeed pay for quite a bit of effort.

Reply to
Hans-Bernhard Bröker

It really doesn't matter how big the processor is.

Do you only have to verify the brakes work on "fast cars" but not "slow cars"?

The stack can overflow on a big MCU just as easily as on a small MCU. (probably *more* likely as you may have many more stacks on that big MCU -- any of which can be problematic)

Let's be clear: there are three types of memory in play, here.

Dynamically allocated memory is created by explicit actions in your code. You call malloc/new and some part of (some) heap is (or is not) allocated for your needs.

Statics are durable pieces of memory that are present at all times. They may not always be *accessible* (e.g., a static inside a function can only be accessed when that function is executing) but they always consume a fixed amount of resources.

Automatic variables are "automatically" created on the stack when a function/block is entered. They can only be accessed within that function/block AND DISAPPEAR AUTOMATICALLY when the function terminates/exits. These are, in a sense, dynamically allocated *by* the compiler -- but ON THE STACK, not on the heap.
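The three storage classes can be seen side by side in a few lines of C -- a toy sketch of mine, not from the thread:

```c
#include <stdlib.h>

static int call_count;      /* static: one copy, lives for the whole run */

int observe(void) {
    int delta = 1;          /* automatic: created on the stack each call */
    call_count += delta;    /* the static survives between calls         */

    int *scratch = malloc(16 * sizeof *scratch);  /* dynamic: explicit, on the heap */
    if (scratch == NULL)
        return -1;          /* dynamic allocation can fail -- check it   */
    free(scratch);          /* ...and must be released by the programmer */

    return call_count;      /* delta has vanished by the time we return  */
}
```

Calling `observe()` repeatedly returns 1, 2, 3, ...: the static accumulates, the automatic is recreated each time, and the heap block lives exactly as long as the code says it does.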

There is a camp that frowns upon use of (true) dynamic memory allocation (because of the "run with scissors" argument: you can get hurt if you aren't careful). It, however, gives the programmer the most run-time flexibility over memory usage (you can create a persistent object *in* a function -- like a static would do; *or* an object with limited lifetime -- like an auto variable; you can control that object's visibility -- by only "telling" the folks you want to have access to it where it is located; etc.)

But, aside from forgetting to free() every allocation (memory leak), you can also forget to verify that each allocation succeeds and, when faced with a failed allocation, end up dereferencing a NULL pointer. Or, you could just fail to have given consideration to how you react/recover from this condition ("Ohmigod! The sky is falling!!")

OTOH, you tend to be far more aware of the amount of memory you are *expecting* to acquire in this way (from *a* heap). You wouldn't, for example, create a heap of size X if you *know* you will be requesting Y>X bytes from it!

Using statics, you can get the compiler (linkage editor) to tell you how much "data" you are consuming. If this ever exceeds the amount of "RAM" in your system, you're screwed.

But, what about when it *doesn't* exceed the amount of RAM? How do you decide how large the stack(s) and heap(s) should be? (I mean this seriously! Do you just make an arbitrary GUESS and see if the code runs? If it does, how confident are you that every possible combination of execution paths/orders will *still* yield a functional system?)

You still have to know what your maximum stack penetration will be (and, in many environments, stack and heap share a memory region; stack can grow if heap is shrinking -- retreating towards the opposite end of the region -- so this is a tougher metric to evaluate).

Assume you use statics exclusively! (no recursion, no reentrancy, single threaded, etc.) How do you "count" the stack space consumed by each function invocation? I.e., the "return address" silently pushed onto the stack? Do you have some metric that will TELL you the maximum *number* of nested subroutine/function invocations? Or, do you have to look at the code to determine that?
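One empirical way to answer "how deep did the stack actually go" is the classic fill-then-scan measurement: paint the stack with a pattern, run the workload, then find the first word that was disturbed. A minimal sketch, again with the stack modeled as an array rather than linker-provided bounds:

```c
#include <stdint.h>

#define STACK_WORDS 256
#define FILL 0xA5A5A5A5u

uint32_t stack[STACK_WORDS];    /* stand-in for the real stack region */

void stack_fill(void) {
    for (int i = 0; i < STACK_WORDS; i++)
        stack[i] = FILL;
}

/* Assuming a descending stack: scan up from the limit for the first
 * word no longer holding the fill pattern.  Everything above it was
 * touched at some point -- the high-water mark. */
int stack_high_water_words(void) {
    int i = 0;
    while (i < STACK_WORDS && stack[i] == FILL)
        i++;
    return STACK_WORDS - i;
}
```

Note this only tells you the deepest penetration *observed*; it cannot prove a worst case that your test runs never exercised.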

See, you need to understand where your code is *likely* to go in order to accurately assess its memory needs.

I have no idea what sort of code you write so I can't comment on how much it may increase your effective memory usage (well, let's call it "allocation" because that's what it is; the memory may not be "used much" but it is permanently "allocated" by the use of statics).

I write multithreaded code -- almost exclusively. So, virtually *every* function/procedure can be invoked multiple times, simultaneously. Anything that is static will get clobbered when another instance of the same function tries to access that SINGLE static (i.e., I would have to write every such function to actively *share* that object -- mutex -- in order to declare it as static)
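The clobbering hazard shows up even without real threads: two successive "instances" of a call already alias one static buffer. A small sketch of mine (the function names are invented) contrasting the static and caller-supplied styles:

```c
#include <stdio.h>
#include <stddef.h>

/* Non-reentrant: every caller shares ONE static buffer, so a second
 * invocation clobbers the first -- exactly the hazard with threads. */
char *format_bad(int n) {
    static char buf[16];
    snprintf(buf, sizeof buf, "#%d", n);
    return buf;                 /* all callers get the same pointer */
}

/* Reentrant: the caller supplies storage (an automatic on ITS stack),
 * so each thread gets a private copy for free. */
char *format_ok(char *buf, size_t len, int n) {
    snprintf(buf, len, "#%d", n);
    return buf;
}
```

With `format_bad`, the result of the first call is silently overwritten by the second; with `format_ok`, each caller's buffer is independent because it lives in that caller's own frame.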

[One of my OS's doesn't suffer from this constraint -- but it is much more heavyweight, creating *processes* instead of threads]

OTOH, if I create auto variables on the stack, then each thread has its own *private* copy of each such variable -- because each thread has its own stack! Sharing isn't inherent unless the thread explicitly takes action (and responsibility) for sharing an object.

[similarly, if I dynamically allocate objects on the heap -- even a *shared* heap! -- then I can control their visibility by just not sharing the reference to a particular object with anyone with whom I'm not prepared to explicitly share that access.]

Dynamic allocation (either via explicit heap actions *or* automatic allocation in stack frames) has a big advantage in that it allows your RAM to be used more efficiently. When X is done using all of the memory that it needs (i.e., when X exits), all of that memory *magically* becomes available for Y to use -- without any explicit action on Y's part!

Again, you're missing the point. Tell me how big your stack *needs* to be in order to GUARANTEE that it won't overflow (i.e., it will never, ever overflow regardless of ANY conditions that it encounters while you're not babysitting it). 1KB? 10KB? 10MB?

Are you *picking* a number based on how "comfortable" you feel with the unlikelihood of it being wrong? Or, do you have "science" to backup your assessment and the number reflects a true understanding of your code's design, operation and performance?

It's like clowns who design (electronic) forms and allow N characters for a first name (or last name, street name, etc.). How do they *know* that N is big enough? "Bob Smith" might think "4 or 5" is a good number for first name with "7 or 8" for a surname; "Esmerelda Humperdink-Ticonderoga" may think "10 or 12" for first and *30* for surname!

If you don't care about folks whose names "don't fit" (i.e., if the form doesn't HAVE TO WORK), this is an easy decision. OTOH, if the form HAS to work, what do you do? 80 characters for each? (i.e., the equivalent of "picking" 10MB as your stack size -- "to be safe")

It's really the same problem. Would you sleep well at night knowing your child/spouse was scheduled for a robotically assisted surgery in the morning and you had written some of the code that controls that robot and had just "picked a very big number, hoping it was big enough" for the stack size? Would you pick a "worst case" number for the current limit for the servo that will be driving the actuator arm based on "presumed overkill"?

"We're sorry, Mr Clubley -- we did all that we could! But, one of the tendons was just too thick for the robot to cut through. The servo kept faulting. And, by the time we recognized the problem and got the robot out of the way..."

[of course, I am exaggerating]

Or, would you study the problem and implementation and come up with hard numbers -- backed by data -- that indicate WHY each of your design decisions (coding decisions) are appropriate?

Reply to
Don Y

In real time control systems, I use some malloc() but try not to use free() and the system runs for years without reboots :-).

With small systems, there is always the risk of dynamic memory fragmentation. Frequently allocating and freeing variable sized objects, you can easily end up in a situation in which there is no single _contiguous_ block of memory for new allocations, even if there would be a lot of free heap bytes available.

For this reason frequent allocation/deallocation should be avoided.

Alternatively some form of garbage collection/memory compacting would be needed, but C doesn't provide it and if available, would harm the real-time performance with unpredictable latencies. In a HRT system, the program is faulty, if the computation result is not delivered at the specified time.

Except for large dynamic memory allocations, any failed dynamic memory allocation would be catastrophic.

Assuming you want to make a 100 byte dynamic memory allocation and it fails, how do you expect to continue from this? Any fprintf to stderr or crash dump routine would potentially use dynamic memory, which again will cause allocation failures etc., creating a vicious circle. Thus, the only safe thing to do, if a small dynamic memory allocation fails, is to halt or restart the processor.
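One way out of that vicious circle is to make the reporting path itself allocation-free: give it only static storage so it cannot fail for lack of heap. A sketch of mine (the names are invented):

```c
#include <string.h>
#include <stddef.h>

/* A reporting routine that is safe to call when the heap is exhausted:
 * it uses only static storage and its caller's already-committed stack
 * frame -- no malloc, no printf-family machinery that might allocate. */
static char panic_buf[64];

const char *panic_message(const char *what) {
    size_t n = strlen(what);
    if (n >= sizeof panic_buf)
        n = sizeof panic_buf - 1;   /* truncate rather than overflow */
    memcpy(panic_buf, what, n);
    panic_buf[n] = '\0';
    return panic_buf;
}
```

On a real target the string would then go out via a polled UART write or similar path that also performs no allocation.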

For larger allocations returning a NULL pointer makes sense, since it might be perfectly reasonable to try a smaller allocation in some cases.

Reply to

The RAII idiom has more to do with ensuring proper cleanup in all cases than with initialization. If it were just about initialization, RAII wouldn't be a big deal. In C++ the destructor of an object will always be called when that object goes out of scope, regardless of what caused it to go out of scope (return statement, exception...). This makes it possible to automate resource management, ensuring resources are never leaked, without having to rely on the caller to do the right thing in every possible flow. Since C has no destructors or similar mechanism, saying "RAII holds for C too" makes no sense.

Destructors have been a natural and integral part of the C++ language from pretty much the beginning, rather than a stopgap measure to "close a hole". When (much later) exceptions were added to the C++ language, the RAII idiom became rather essential. One might argue that without destructors there would be no point in having exceptions in C++, because resource management would become even more error prone, to the point of being impractical to get right in every possible flow.

Reply to

The devil is *always* in the details.

With more "modern" languages (where dynamically allocation is done "for you"), you tend to end up with lots of smaller alloc/free actions -- every object instantiation potentially poking a hole in the heap's freelist.

In C (explicit allocation/release), the programmer has more control over where these allocations are done. E.g., you almost assuredly wouldn't malloc 4 bytes for an int -- and then free it some time later.

Again, depends on the allocation pattern. I have a character-based UI that I frequently use in small products. It lets me create menus, list boxes, radio buttons, check boxes, etc. "on the cheap". It would be foolish to static allocate each POSSIBLE UI "control/widget" and just let *most* of them sit idle while the interface is running (and ALL of them sit idle while the interface is OFF!).

Each object is different size (as each menu, list, etc. can vary based on whatever the developer thinks appropriate for *this* control when invoked in *this* manner from *this* menu, etc.).

*BUT*, objects tend to be created and deleted (free'd) in complementary orders. So, you don't create 1, 2, 3, 4 and free 2, 4, 1, 3 (which could lead to the fragmentation problem you describe). Rather, 1, 2, 3, 4 are deleted as 4, 3, 2, 1. I.e., a LIFO/stack ordering.
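That LIFO create/free order is exactly what a mark/release arena exploits: allocation bumps a pointer, and "freeing" is just restoring a saved mark, so fragmentation cannot occur. A minimal sketch of mine, not Don's actual allocator:

```c
#include <stddef.h>

/* A tiny bump-pointer arena.  Because objects are released in the
 * reverse of their creation order, freeing is restoring a mark. */
#define ARENA_SIZE 1024
static unsigned char arena[ARENA_SIZE];
static size_t arena_top;

void *arena_alloc(size_t n) {
    n = (n + 7) & ~(size_t)7;              /* keep allocations aligned */
    if (arena_top + n > ARENA_SIZE)
        return NULL;                       /* out of arena space       */
    void *p = &arena[arena_top];
    arena_top += n;
    return p;
}

size_t arena_mark(void) { return arena_top; }

/* Releases, in one step, everything allocated since the mark. */
void arena_release(size_t mark) { arena_top = mark; }
```

Creating widgets 1..4 and then releasing back to the mark taken before widget 1 returns all of their memory in constant time, with no free list to fragment.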
[What's "large"? "small"?]

That depends on what the code "expects" and how willing it is to accommodate the failed allocation.

E.g., one of the allocation strategies I implement in my heaps is "get largest" (vs. "get smallest", "get at least", "get adjoining", etc.). So, an algorithm can issue a request for the largest contiguous block of memory in a particular heap. If this is sufficient, use it. If excessive, point to a portion of the allocated block (front or back) and free it (telling the memory manager what strategy to use when reintegrating that chunk into the heap's free list).

Note "sufficient" and "excessive" need not be the same value! This allows me to enhance an algorithm's *performance* by exploiting larger buffers (etc.) WHEN AVAILABLE without *forcing* them to be used ALWAYS.

What if you can get by with 90 bytes?

What if you can reschedule the operation to a later time? (as long as you meet your FINAL deadline, the fact that it doesn't get done "now" isn't necessarily fatal)

[This is why SRT is *harder* than HRT!]

That's why you have special routines for reporting errors! They *can't* fail due to resource issues.

I don't agree (devil, details). If an HRT *task* fails to meet its deadline, then the HRT *task* has failed. The "system" hasn't, necessarily.

[Incoming ballistic missile. Defensive intercept fails to destroy it (missed deadline). Silly to waste any more effort on that missile -- the deadline has passed so any additional effort is for naught. Incoming missile destroys intercept's launcher -- too bad, so sad. Incoming missile destroys some *other* target leaving launcher intact. In each case, the missed deadline doesn't mean the "system" has failed -- and should be rebooted!]

Or delay the attempt and try again. The language has no knowledge of how it will be applied.

Languages that silently manage dynamic objects leave the programmer either blissfully ignorant of the potential perils or litter the code with all sorts of exception handling ("what if this object can't be instantiated, *here*? how do I handle *this* case?"). IME, this results in the default exception handler(s) being used which, typically, just crash the app.

What assurances do you have that *restarting* the app won't result in the same failure? In the same place? Or, elsewhere?

E.g., if the stack overflows and stomps on your heap or your "data", what assurance do you have that restarting the app won't result in the same failure?

Reply to
Don Y

That might be sensible, if the attempted allocation is 20-40 % of the total dynamic memory, but if you are so low in memory that 1-10 % of dynamic memory will fail, you are just creating interesting deadlock situations :-).

Reply to

You don't know what other consumers (of that same resource) are doing AT THIS MOMENT. Your allocation could succeed if it had been requested a few microseconds hence -- and *fail* if many more microseconds later.

I allow requests to queue at the allocator in much the same way that you can queue up for any other resource. This allows "the most important" request to proceed *when* the resource (eventually) becomes available and, thus, avoids lots of "interesting" priority-inversion-type deadlocks (where the most important consumer didn't happen to make his request at the "opportune" time). A timer in each request allows you to decide how long you want to pend for that request to be satisfied. (i.e., it can return several different error *codes* -- not just "NULL" to indicate "failed")

The point is to allow the system *designer* (deliberate emphasis) to come up with an approach that "works" -- instead of throwing things together and hoping entropy is on your side...

Reply to
Don Y

Yes it is, if you have 8k of memory and your accounting says you're using 2872 bytes of stack, then maybe you allocate 3k to it in case you missed something. But that's not much of a safety factor. While if your accounting says you're using 82 bytes of stack (because you put those large objects in static regions instead), you can allocate 1k, giving yourself a safety factor of over 10x. It's much harder then for anything to go wrong.

How do they do that if the program is using callbacks?

[in other message]

I don't think that is right, see:

formatting link
formatting link

Reply to
Paul Rubin

Am 14.12.2014 um 22:05 schrieb Paul Rubin:

Or maybe I'll allocate the entire rest of RAM not used by other static data to it, because there's no sane reason to let any RAM go completely unused in such a tight situation.

Or I allocate 2875 bytes, depending on how precise my stack size determination is. Static size determination _can_ be perfectly accurate, depending on how clever the compiler is, and how strictly some coding guidelines are enforced.

Rigidly proven upper bounds don't need safety factors.

If I moved 1890 bytes of stack consumption's worth of large objects to static storage, it's practically guaranteed that the program would fail to link, and that would be the end of that. I would never have to worry about stack size again, because the RAM will not even be able to hold all those statics, let alone have any space for stack left.

It's not, because the underlying assumption that safety lies in factors as far as stack usage is concerned is false. Stacks fail by being one byte too small, not by being 10% too small.

Callbacks as such are trivial. It's calls through function pointers that are hard. And recursion is effectively impossible. If you avoid both (or supply the analysis tool with extra input), the problem becomes tractable.

Huh? Did you seriously just link two explanations of why protection against stack overflow is absolutely crucial, to back up a claim of "It's not needed"?

Reply to
Hans-Bernhard Bröker

Doesn't matter; if your estimate can be off at all, it can be off by enough to exceed the total remaining RAM, especially if that amount is small. You can reduce the likelihood if the amount is small.

Let me know when you manage to rigidly prove anything about a C program.

OK, then you're in a situation where you have to juggle memory more, which introduces hazards. I think that's what Les was getting at. If you can't avoid that situation then you have to deal with it, but if you can avoid it, there are sane arguments for doing so.

That makes no sense, do you have any examples? If they can be 1 byte too small, they can be 2 bytes too small, or 3 bytes, etc. In the Toyota case they apparently forgot to take library functions into account when accounting for stack space. That's much more than 1 byte.

How do you think callbacks are normally implemented? Maybe you're using that word to mean something different from what I thought it means.

Oh ok, I apparently misunderstood what you were saying, sorry. But, that type of problem can be very hard to find statically. Dynamic testing in a simulation environment might have had better chance of spotting the issue, but it can never be guaranteed to cover every possible input.

Reply to
Paul Rubin

That's absolutely true. I didn't want to insult people's intelligence and state the obvious - that if you *can't* do something, you probably shouldn't :)

Heh - pretty much.

Les Cargill
Reply to
Les Cargill

Hello Don,

That's true. I suppose what's really at play here in my mind is the percentage of memory resources used by my code and how I have much more headroom available in the 32 bit MCUs I use.

Yes there are and my apologies for confusing you. What I wrongly called dynamically allocated should have been called stack allocated in the examples I was thinking of. I do as little true dynamic (malloc style) allocation as possible.

The reason I don't like true dynamic memory allocation in an embedded system is the risk of fragmented memory if you are freeing the allocations during normal operations and using a malloc() style allocator. However, I am aware of the usage cases in which the memory is allocated at startup and never freed - I do this myself in a couple of cases.

One thing I have done with true dynamic memory allocation in some 32 bit projects is to use a simple allocator I wrote which allocates fixed size memory blocks from a pool so there can never be any memory fragmentation when the memory gets released.
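A minimal version of such a fixed-size block pool (a sketch of mine, not Simon's actual allocator) might look like this. Because every block is the same size, a released block simply rejoins the free list and can never fragment the pool:

```c
#include <stddef.h>

#define BLOCK_SIZE 32
#define NUM_BLOCKS 8

/* Each free block doubles as a free-list link. */
typedef union block {
    union block *next;
    unsigned char payload[BLOCK_SIZE];
} block_t;

static block_t pool[NUM_BLOCKS];
static block_t *free_list;

void pool_init(void) {
    free_list = NULL;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

void *pool_alloc(void) {
    if (free_list == NULL)
        return NULL;               /* pool exhausted: a bounded, known failure */
    block_t *b = free_list;
    free_list = b->next;
    return b;
}

void pool_free(void *p) {
    block_t *b = p;
    b->next = free_list;           /* constant time, no coalescing needed */
    free_list = b;
}
```

Both `pool_alloc` and `pool_free` are constant time, which also makes their cost easy to reason about in a real-time context.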

In my embedded projects, I control everything from the startup code to the library code to the application code so I can have a good feeling for how much stack space is required. I also don't really use recursion all that much in my embedded projects.

That's only an informed estimate however, but if I think I'm going to use somewhere in the region of (say) 10K-15K bytes of stack space and my linker map tells me I've got 50K of memory spare then that isn't something I need to worry about.

However, this breaks down with larger 8-bit MCU projects because of the much smaller resources available so I can't be 100% confident I will get the analysis right during design hence my tendency towards static allocation in that case.

Shock, horror, I actually like to do an initial design before writing any code and think about approximately what memory resources I will need versus what are available. :-)


I most certainly do _not_ pick a number at random and then use that.

My comments above show the kind of thinking I use and that thinking is geared towards correct functioning of the code and using the tools I have available to help me with that.

In situations where I can't be confident in my manual stack analysis I use design techniques designed to minimise that risk. Yes, I accept that may use more memory, but the technique means I can be more confident of the code.


Simon Clubley, 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

I think that misstates what RAII means, at least in the C++ world, as Dombo explained. For example, in C you might open a file and do stuff with the contents like this:

void foo(char *filename)
{
    FILE *fd = fopen(filename, "r");
    // compute stuff and read from the file
    fclose(fd);
}

but that leaves you with the issue of what happens in case of an abnormal return, a return from the middle of the function, etc. You have to carefully navigate all those possibilities to make sure the file gets closed instead of leaking the file descriptor. RAII style in C++ looks like this:

void foo(string &filename)
{
    std::ifstream fs(filename);
    // compute stuff and read from the stream
}

Note the absence of any call to explicitly close the stream before returning. That's because the ifstream object destructor automatically closes it when the ifstream goes out of scope. That means if the function returns from the middle or some lower level throws an exception, the file still gets closed. There's not an equivalent for this in C unless you build a bunch of special machinery into your application.
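The "special machinery" in C is usually the single-exit cleanup idiom: funnel every early return through one label so the resource is released on every path. A sketch of mine (the function name is invented):

```c
#include <stdio.h>

/* Without destructors, C code typically routes every exit through one
 * cleanup label so the file is always closed -- early error returns
 * included. */
int sum_file_bytes(const char *filename, long *out) {
    int rc = -1;
    FILE *f = fopen(filename, "rb");
    if (f == NULL)
        goto done;                 /* nothing opened, nothing to close */

    long sum = 0;
    int c;
    while ((c = fgetc(f)) != EOF)
        sum += c;
    if (ferror(f))
        goto done;                 /* read error: still falls through cleanup */

    *out = sum;
    rc = 0;
done:
    if (f != NULL)
        fclose(f);                 /* the ONE place the file is closed */
    return rc;
}
```

It works, but unlike a destructor it relies on every future editor of the function remembering to `goto done` rather than `return` directly.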

Sure, that's good style, resembling functional programming; but the term RAII usually means something different, described above.

The above is idiomatic C, I think.

It depends on what you're doing though yeah, dynamic structures are probably less important in MCU applications.

I'd be interested in seeing an example application in this style, if you've got one you can release.

It's easier with garbage collection, but those environments aren't well suited to small embedded systems.


Reply to
Paul Rubin

It depends on your understanding of how *you* will be using the heap. Along with your understanding of the strategies that the memory manager employs in satisfying alloc/free requests. (along with how many consumers are prodding that particular heap!)

My manager moves much of the policy decisions into the hands of the developer. It is just "mechanism" -- I decide how that mechanism is applied (on each invocation).

To create a "partition/buffer pool", I use the allocator (on the heap from which I want to allocate the pool) to first allocate a piece of memory big enough for the buffer pool. Then, I claim *this* is a heap and iteratively allocate N buffers of size M. These are then free()-d back into that heap (now a buffer pool -- but still managed with the same allocator!). But, in free()-ing them, I tell the memory manager to just insert each of them at the head of the free list (this is a fixed cost operation -- predictable performance) and NOT try to coalesce "segments" from the free list into larger contiguous blocks of memory. I.e., they *remain* as fixed size blocks linked together by the free list that the memory manager maintains *for* me!

Now, any operation requesting a block can use the same allocator, point it at that "buffer pool" (which, AFAICT, is just another heap!) and request the "first fitting block" from the pool -- knowing that they are all the same size -- in yet another constant time operation.

Change the allocation policy -- or free-ing policy -- and the same set of alloc/free routines can behave differently.

When done with the buffer pool, pass it back to the heap from which it was allocated via "free" with an appropriate release policy and now it can be used as "general memory" for other needs.

Having *one* set of routines to manage ALL my memory needs makes it easier. E.g., when I create a new process, the same routines create the initial stack and heap for that process from the "system heap". Funneling all memory management activities through one interface means I can make the same capabilities available to every "memory consumer".

E.g., if you want a (arbitrary size) chunk of memory off a particular heap, you can block waiting for such a chunk to become available. Because the same code is involved in doling out "buffers" from that buffer pool, you can just as easily block waiting for a buffer to become available! And, in each case, specify a timeout beyond which you no longer are willing to wait. (the alternative is to just "spin" constantly trying to get the allocator to grant your request... why spin when you can block and let other/lower priority tasks execute and free up those resources?)

[different mechanisms for OS's that support protection domains]

I design the algorithms to optimize whatever resource is most pertinent (in some cases space, others speed, others writeable store, etc.). Then, figure out what drives the algorithm's worst case performance. And, create a test case that will stress the algorithm in that manner. Finally, *measure* stack penetration (previous methods described) to verify my estimate.

As most of my projects are real-time, there's a certain amount of uncertainty involved in trying to collect data from a running system *without* altering the behavior of that system. So, I debug and characterize algorithms in simulators at "D.C." and then crank up the clock in the run-time environment.

I suspect you will find this is the exception and not the rule. Most folks (esp desktop coders) probably can't tell you *anything* about what their memory usage looks like with "hard numbers". Or, even an expression that they could "massage" to get those numbers!

Try some of the techniques I mentioned to see what sorts of numbers they yield vs. the numbers you get "on paper". If you don't have access to compiler sources, you can instrument function/procedure invocations (clumsily) by creating a macro for each function *definition* that essentially imposes a short preamble on each function call before dispatching *to* the specific function. That preamble can record the current stack pointer "somewhere" (compare against "worst thus far"). Matching postamble can scan the stack to see how much of it has been "altered" (from some previously "filled" pattern).
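The preamble idea can be approximated in portable C by sampling the address of an automatic variable on entry and keeping the deepest value seen. A rough sketch of mine that assumes a descending stack (the macro and function names are invented):

```c
#include <stdint.h>

/* Deepest (lowest, on a descending stack) stack pointer observed. */
static uintptr_t deepest_sp = UINTPTR_MAX;

/* Preamble to place at the top of each instrumented function: the
 * address of a local approximates the current stack pointer. */
#define STACK_PROBE()                              \
    do {                                           \
        volatile unsigned char probe;              \
        uintptr_t sp = (uintptr_t)&probe;          \
        if (sp < deepest_sp)                       \
            deepest_sp = sp;                       \
    } while (0)

/* A hypothetical instrumented function: recursion deepens the stack,
 * and each frame records its depth via the probe. */
int depth_demo(int n) {
    STACK_PROBE();
    if (n == 0)
        return 0;
    return 1 + depth_demo(n - 1);
}
```

A production version would hide `STACK_PROBE()` inside a function-definition macro, as described above, so no call site can forget it; the companion postamble would scan a pre-filled region to see how much was altered.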

Any time you get stuck using libraries (or any "foreign code"), you're at the mercy of the developer of those libraries/foreign code. Most of this stuff is rarely documented (because the language has no provisions for documenting internal behavior -- just "interfaces"!)

Back to my baking. Another 17 dozen tonight. Not enough days left! :<

Reply to
Don Y

If you are going to have such simple stack access, why don't you use automatic variables, possibly using alloca() ?

With multiple threads and a single heap, there are no guarantees in which order memory segments are allocated or released.

Reply to
