Languages, is popularity dominating engineering?

The problem with many small 8-bitters is that they do not have good stack-pointer-relative addressing modes. You either have to sacrifice an index register (if you have one) or use fixed address references calculated at compile/link time.

Reply to
upsidedown

Agreed. It is too easy to be tricked (e.g., by indent) into THINKING braces exist where they don't. Especially if your coding style tries to minimize newlines. E.g., (my preference):

if () {
    ...
}

or even

if () { ... }

vs.

if ()
{
    ...
}

One of the criteria that (on initial exposure) seemed "arbitrary" in my first language design class was "be able to write functions/subroutines on a single page" (which was never formally defined). It doesn't take long to realize why this can be A Good Thing.

However, languages can also use this goal to justify being unduly cryptic and overly reliant on syntactic sugar.

The same sort of thing can be said for parens in expressions. Being redundant isn't necessarily A Bad Thing.

I use lots of pointers -- they tend to let me make the code tighter and cut down on resource requirements (information is encoded in the pointer's *position* in addition to the thing it points *at*). So, a big issue (for me) is keeping track of what the pointer is currently referencing.
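A minimal sketch of what I mean (the table and names here are just for illustration):

    /* The cursor's *position* within the table tells you which
       threshold you are looking at; the value it points at is the
       datum itself.  No separate "state" variable needed. */
    static const int thresholds[] = { 10, 100, 1000 };  /* low, mid, high */

    const int *cursor = &thresholds[1];

    int which = cursor - thresholds;  /* position: 1 == "mid" */
    int value = *cursor;              /* datum:    100        */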

The biggest downside to adding documentation is there is no way for the compiler (or any other tool) to ensure it is kept up to date AND coincides with what the code is ACTUALLY doing.

Many folks get bit because they "debug the *comments*" (i.e., what those CLAIM the code is doing) and not "debug the *code*".

Documenting what each statement is trying to do gets to be clutter. Instead, document what the function/block of code's goal happens to be. Then, salt the code with details to anchor particular lines to that description (easier to maintain -- if someone changes the approach, they don't have to go through and remove/update/replace lots of individual comments but, rather, reformulate the description for the block. Or, elide it entirely if appropriate/lazy)

I now document algorithms in PDF's and *describe* the code (in the sources) in a manner consistent with that (external) documentation. This lets me be more thorough AND draw on alternative media in my presentation. E.g., graphs, illustrations, animations, sound clips, etc. -- things that just aren't practical in a "text" source file.

Yes (let the compiler produce a .s file that you can peruse. Many will annotate that file -- as much as is practical). I'm not saying you should *distrust* the compiler. Rather, use this as a means of verifying that what you *thought* was happening was, in fact, the case.

E.g., if adding a line of code suddenly makes a dramatic change in the size/complexity/speed of the resulting code, you should wonder "why WAS that the case?".

(it is always amusing to see embedded newbies add a printf() and watch the size of their code mushroom: "Yikes! printf() is THAT BIG???")

Similarly, when looking at the .s version, if lots of YOUR code has apparently been elided (optimized away), you should think about the reasons for that and its consequences -- did that code really NOT need to be present, here? Has the compiler seen something that I've missed (i.e., a "D'oh" moment)? Or, have I specified something in a way that allows the compiler to elide it WHEN IT SHOULDN'T HAVE (because I wrote something incorrectly).

Reply to
Don Y

Statics are a real downer if you're writing reentrant code. You have to ("manually") ensure (by design) that no two consumers access that static "at the same time". Even if the static isn't "required" to preserve data between function invocations (e.g., like strtok).

A developer then needs intimate familiarity with every "library" that he calls upon (i.e., code written by the guy in the next cubicle) to ensure he isn't exposing his code to one of those (typical) "intermittents" that you never manage to track down (because it's impractical to reproduce the EXACT conditions that caused it to manifest).

A cheap way of checking stack penetration is to use a "fence" on the stack and reexamine it, periodically.

E.g., during development, my "create_task()" fills the stack with a regular pattern. At each reschedule(), I look at the value of the stack pointer that my context switch will now preserve and:

- determine if it is deeper into the stack than any previously recorded instance

- if so, store that value (deepest_stack_pointer) and

- verify that it is within the range of valid addresses for the memory allocated for the stack (if not, fall into the debugger before the "contamination" spreads, obfuscating the underlying *cause*).

Then, periodically, explore the region *beyond* the stack pointer to see how much of this "regular pattern" has been obliterated *between* reschedule()'s.
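A minimal sketch of the fill/scan mechanics, assuming a downward-growing stack (the names and the pattern value are my own):

    #include <stdint.h>
    #include <stddef.h>

    #define STACK_PATTERN 0xA5A5A5A5u

    /* At create_task(): paint the whole stack with a known pattern. */
    void paint_stack(uint32_t *base, size_t words) {
        for (size_t i = 0; i < words; i++)
            base[i] = STACK_PATTERN;
    }

    /* At reschedule(): scan upward from the base; anything still
       holding the pattern has never been touched, so what's missing
       is the deepest penetration observed so far. */
    size_t stack_words_used(const uint32_t *base, size_t words) {
        size_t untouched = 0;
        while (untouched < words && base[untouched] == STACK_PATTERN)
            untouched++;
        return words - untouched;
    }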

One system that I designed allowed me to instrument every function call. So, I could perform these tests in a much finer-grained manner -- on entry and exit from each function.

In the larger systems I am working with currently, I track which memory is faulted in for the stack.

You also have to examine your algorithms to verify that their behavior is appropriately bounded. E.g., I rely on recursive algorithms a lot (simple, elegant). But, have to ensure the constraints governing the recursion are known a priori.

One of my speech synthesizers does a recursive pattern match. But, controls the match (and recursion) by the CONST TEMPLATE in the code and not the VARIABLE INPUT TEXT (that is completely unconstrained). So, I can guarantee the maximum recursion the code will ever experience at compile time regardless of the input that it may encounter "in use".
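As a sketch of the principle (a toy matcher, NOT the synthesizer's actual code): each recursive call consumes one character of the CONST pattern, so the recursion depth is bounded by the pattern's length -- a compile-time constant -- no matter what the input contains:

    /* '?' in the pattern matches any single input character.
       Recursion depth is bounded by the length of the (constant)
       pattern, never by the length of the input. */
    static int match(const char *pat, const char *input) {
        if (*pat == '\0')
            return 1;                               /* pattern exhausted */
        if (*pat == '?')
            return *input != '\0' && match(pat + 1, input + 1);
        return (*pat == *input) && match(pat + 1, input + 1);
    }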

The first time you have to chase down this sort of "problem" will pay for every precaution you ever take against it in future efforts! :-/

Reply to
Don Y

Sort of. I don't like the idea that RAII is only specific to C++ even though that's where it came from. The point of it is to make sure everything is properly initialized to a reasonable value.

I understand RAII was developed to close this hole in C++, but I think there's a more general principle inside that.

Using 'C' idioms:

const double numerator   = (x * z) + y;
const double denominator = (....);  // we may want range checking to return error codes or something here
const double ratio = (fabs(denominator) > t_epsi) ? (numerator / denominator)
                                                  : LARGENUM;

The point is to break calculations, especially those that involve division, into manageable chunks for clarity and to control divide-by-zero problems. Have the declarations tell the story of how the ratio is derived, one step at a time.

-- or --

char *thng(....)
{
    static char beast[n] = {0};

    sprintf(beast, ...);
    ...
    if (cond)
        return NULL;

    return beast;
}

The point is to use one-time rules to manage what would be exceptions in C++. What you want of this is to have all unhappy paths be completely covered by unit tests.

I am being very specific to 'C' here.

It's an iterator, so it goes well with the integer-index approach. What I've found is that "for every time you need to use allocated pointers, there is a cleaner implementation using static arrays and indexing into them."

You need good array bounds checking, but I find that less tiresome than exceptions.
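For instance (a hypothetical pool, just to show the shape of it):

    /* Fixed pool addressed by small integer handles instead of heap
       pointers.  Links are indices; INVALID marks "no node". */
    #define POOL_SIZE 32
    #define INVALID   (-1)

    struct node { int value; int next; };    /* next is an index */

    static struct node pool[POOL_SIZE];
    static int free_head;

    static void pool_init(void) {
        for (int i = 0; i < POOL_SIZE; i++)
            pool[i].next = (i + 1 < POOL_SIZE) ? i + 1 : INVALID;
        free_head = 0;
    }

    static int node_alloc(void) {             /* returns an index */
        int i = free_head;
        if (i != INVALID)
            free_head = pool[i].next;
        return i;                              /* INVALID when exhausted */
    }

Every access is pool[i] with i checked against POOL_SIZE -- that's the bounds checking mentioned above.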

It depends. My default function signature is a void return. I prefer to have function returns be used for a list of constraint violations until the last one, which is the happy path.
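Something like this (a hypothetical function; the enum names are mine):

    typedef enum {
        OK = 0,                /* the happy path */
        ERR_NULL_ARG,
        ERR_OUT_OF_RANGE,
        ERR_NOT_READY
    } status_t;

    static int motor_enabled;

    status_t set_speed(const int *rpm) {
        if (rpm == NULL)              return ERR_NULL_ARG;
        if (*rpm < 0 || *rpm > 6000)  return ERR_OUT_OF_RANGE;
        if (!motor_enabled)           return ERR_NOT_READY;
        /* ... do the actual work ... */
        return OK;                     /* last, after every check */
    }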

This is another fine approach, but it's not one I think you can use as much in 'C'. I don't automatically assume "stateful is bad"; it's just something that must be managed properly. That probably means "kept to a minimum."

Not so much.

Mostly; yes. But suppose you have a configuration option to use metric instead of English units. I find it somewhat cleaner to have the "metric" version of the calculation separated from the English version, and switched by a callback.
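E.g. (a contrived fuel-economy calculation, just to show the switch):

    /* One routine per unit system, installed once at configuration
       time, instead of if(metric) tests scattered through the math. */
    typedef double (*economy_fn)(double distance, double fuel);

    static double economy_metric(double km, double litres) {
        return 100.0 * litres / km;    /* L/100 km */
    }

    static double economy_english(double miles, double gallons) {
        return miles / gallons;        /* MPG */
    }

    static economy_fn compute_economy = economy_metric;

    void use_english_units(void) { compute_economy = economy_english; }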

I would also add "intermediate calculation values" to the list.

You can often allocate buffers and intermediate values statically and this helps with serializing for testing.

--
Les Cargill
Reply to
Les Cargill

This varies.

Mmmmm... maybe. I find that 8-bit programs lend themselves to using a small number of globals. It, of course, depends.

This being said, I haven't seen an 8 bit micro in some time. Even for PIC, they've been 16 or 32 bit and not all that memory constrained. And the programs on them are small enough that you can more or less keep the state of the thing in your head.

That's true. You may have to play a little game with yourself in memory-constrained environments.

--
Les Cargill
Reply to
Les Cargill

I've never had a lick of trouble with it. YMMV. Coroutines in particular are "run to completion". Likewise threads generally need to have as spare an interface as possible - the main program only writes to the buffer, the thread only reads. Semaphores may be necessary, but usually aren't.

Having interrupt routines have their own buffers is pretty good practice anyway. There are, of course, lots of approaches. Enforcing producer/consumer roles is pretty important if you use shared buffers.
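The classic shape of it, assuming a single producer (the ISR) and a single consumer (the main loop), with byte-wide indices that update atomically on the target:

    #include <stdint.h>

    #define RB_SIZE 64                    /* keep it a power of two */

    static volatile uint8_t rb_buf[RB_SIZE];
    static volatile uint8_t rb_head;      /* written ONLY by the ISR  */
    static volatile uint8_t rb_tail;      /* written ONLY by main code */

    void rx_isr(uint8_t byte) {           /* producer */
        uint8_t next = (uint8_t)((rb_head + 1) % RB_SIZE);
        if (next != rb_tail) {            /* drop the byte if full */
            rb_buf[rb_head] = byte;
            rb_head = next;
        }
    }

    int rb_get(uint8_t *out) {            /* consumer */
        if (rb_tail == rb_head)
            return 0;                     /* empty */
        *out = rb_buf[rb_tail];
        rb_tail = (uint8_t)((rb_tail + 1) % RB_SIZE);
        return 1;
    }

No semaphore needed, precisely because each index has exactly one writer.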

My default habit is to declare it static, then move to a malloc/free or stack regime if that's all that can work and I can prove out that it'll never leak.

The point is to not reach for dynamic memory right off; escalate to it.

--
Les Cargill
Reply to
Les Cargill

I just wonder what you guys are doing where you have these problems. :)

I wouldn't use too many statics in *library* code. This being said, the 'C' library uses them all over the place.

Obviously, if you have contention over a memory object/reentrancy issues, you can't do this. But mainly it's about each ... thread/routine suite having its own memory - and not using dynamic memory when you don't need it. And when in doubt, use a semaphore.

Arrange the larger structure of the piece to where things interact minimally and you'll have no problems. *This* buffer is only used for *this* purpose. That's part of the point.

If you're memory constrained, you just have to use stack or globals. But hopefully, the problem to be solved is enough smaller that this doesn't render the thing incomprehensible.

Systems I've seen lately, each "thread" is pretty much completely isolated from other "threads" except for a few variables that manage interaction. You can "prove out" the interface with grep and a piece of paper. When that gets too messy, you add interface routines.

That's somewhat unreliable.

Oy. I think you're explaining exactly why I prefer the way I do it :) I'll also make sure that plenty of stack is allocated; overdo it.

Ah, well. I only use recursion very sparingly, if at all.

I haven't seen a stack overflow in decades, excepting where I'm porting code.

--
Les Cargill
Reply to
Les Cargill

One should consider the expected lifetime of the software. If the software's expected lifetime is one or more decades, one must think about the number of competent programmers available at the end of that period.

Using some exotic language, or something that has rapidly gained popularity recently (and may fall off as quickly), would be a risk. Use some mainstream language (such as C/C++) and there will still be competent programmers for a few decades.

I haven't done COBOL since the Y2K issues, but I still encounter Fortran applications written two decades ago, with users wondering what to do during the next decade and when to rewrite them; thus the existing code base needs maintenance during the next 0-10 years. If those applications had been written in some exotic language, or using some special vendor-specific extensions, maintenance would become harder with each year.

Reply to
upsidedown

Simple: dealing with others that aren't as skilled/disciplined/motivated/etc. Usually, folks from desktop environments where they don't have to worry much about the code they are writing.

Over the years, I have learned to write my code so others can't break *it*. Tired of having to prove that *my* code is working properly -- by finding the bug in someone *else's* code (e.g., accessing a private struct in my code that isn't exported by the interface; failing to observe the contract for a particular piece of code, etc.)

Most functions with internal state (e.g., strtok, asctime/localtime, et al.) have obvious workarounds (foo_r, etc.).
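E.g., with the POSIX variant, the caller carries the state that strtok() would have hidden in a static:

    #include <string.h>
    #include <stdio.h>

    void list_fields(char *line) {        /* modifies line in place */
        char *save;                       /* per-caller state, not a static */
        char *tok = strtok_r(line, ",", &save);
        while (tok != NULL) {
            puts(tok);
            tok = strtok_r(NULL, ",", &save);
        }
    }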

Still others *obviously* need to be munged to support reentrancy/multithreaded use (errno, malloc, signal et al., etc.)

I've encountered things like printf() that choke badly (due to static buffers used for conversion). And, even floating point support ("helper routines") that precluded use in multithreaded environments (i.e., you had to treat the state of those helpers AS IF they were registers in an FPU).

If the language doesn't inherently (explicitly) support concurrency, those hooks can be costly. E.g., up to and including a trap.

If the developer doesn't *know* he's being screwed (because the interface for the library doesn't *disclose* this sort of detail!), he won't know to protect "shared objects" (and the objects won't know how to protect themselves).

Then why have it persistent? It only needs to be around for *that* purpose so let it go away afterwards.

Globals are The Root of All (most) Evil. You always want to control the exposure of every datum/object. E.g., if you (a function/subr) have to rely on me to give you a pointer (reference) to an object, then you won't stomp on it without my knowledge of that.

I am a huge fan of true isolation (protection domains). It makes coding and debugging *so* much easier -- step out of line and the OS brings down the hammer on you!

But, this is a more costly feature (e.g., processes vs. threads).

The "easy way out" is to adopt things like C-S relationships between producers and consumers. *Formalize* their interactions (at some cost for the interface).

In my current designs, most interfaces can (potentially) span processor boundaries. This is a blessing, of sorts, in that it forces the ENTIRE interface to be specified in the IDL. There are no "back doors" whereby data can leak *around* the interface (you simply don't have physical access to it unless made available via the IDL!)

It's not intended to be the cat's meow. It's meant to give you an idea of what your stack penetration is so you can verify it is on a par with what you expected *or* completely out of whack.

When I write ASM code, in addition to describing the call/return interface in terms of "changes to the machine's state" (registers going in vs. out, memory altered, etc.) I also quantify the maximum stack penetration (as a function of inputs, if thusly related)

I live with tightly constrained resources. It's important for me to size large objects (e.g., stacks, heaps, etc.) to fit their actual *needs* and not just "throw memory at them". E.g., my memory allocator lets me "trim" allocations (instead of being forced to release them in the same chunks that they were allocated) as well as *extend* existing allocations (the policy that the allocator uses can be specified in the allocation request -- e.g., find me a piece of memory that adjoins *this* piece).

This allows me to "move" memory between a consumer and producer. E.g., arrange for the producer and consumer's memory to be contiguous and "free" it from one while "alloc"ing it to the other.

The iterative/recursive duality implies you can always avoid it. But, often iterative solutions are much more difficult to "get right" than their equivalent recursive solutions (too much manual housekeeping).

Then you haven't been tasked with trimming your stack to fit the needs of the routines using it! Or your heaps. :>

One of the "problems" with "embedded" is it groups a wide variety of application domains and implementation constraints into a single subject. E.g., in the same application/codebase, I have devices that use Q10.14 while others use "Big Rationals" based on the resources available to each.

An unfortunate consequence of many languages is that you're pretty much "stuck" with the types that the language gives you. This forces you to express "other" things in unnatural ways -- that are more prone to error (e.g., it would be nice to be able to use infix notation on ints, BCD's, floats, Qs, bigrats, decimals, etc. -- even INTERCHANGEABLY!). If you're working in a resource rich environment, this isn't an issue. But, as you get more resource constrained, you tend to *need* to do these more often *and* have less capabilities to do them portably/safely/intuitively.
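E.g., a Q10.14 multiply in C ends up as a function call where you'd *like* to just write a * b (a standard fixed-point idiom, shown here for illustration):

    #include <stdint.h>

    typedef int32_t q10_14;        /* 10 integer bits, 14 fraction bits */
    #define Q_FRAC 14

    static q10_14 q_mul(q10_14 a, q10_14 b) {
        return (q10_14)(((int64_t)a * b) >> Q_FRAC);
    }

    static q10_14 q_div(q10_14 a, q10_14 b) {
        return (q10_14)(((int64_t)a << Q_FRAC) / b);  /* caller checks b != 0 */
    }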

Reply to
Don Y

On 13.12.2014 at 21:48, Les Cargill wrote:

It's really not that bad, actually. Yes, there are a few such things, but mostly in functions that I, for one, have never felt tempted to use in embedded code. And most of those have been deprecated by thread-safe alternatives since back then.

But it'll still occupy resources even while the program is doing *that*, which has nothing to do with *this* whatsoever. That's where this approach becomes wasteful.

I do wonder how you expect the distinction between statics and globals to have any effect at all regarding memory size constraints.

Reply to
Hans-Bernhard Bröker

On 13.12.2014 at 17:36, snipped-for-privacy@downunder.com wrote:

Or, like the 8051, none at all, which is why compilers for that platform default to not using the stack for automatic variables. There's not enough of it, and it's just too darn painful to use.

But that's for the compiler writer to figure out how to deal with. Even in worst case the resulting code will never be less memory efficient than a strategy of "just make everything static".

Reply to
Hans-Bernhard Bröker

On 13.12.2014 at 21:22, Les Cargill wrote:

Not really. Supposing you start out with a correct program and compiler, making previously automatic variables static can never reduce memory consumption. It can only increase it.

Careful you don't fall for Microchip's marketing ploys too easily, there. Those 32-bit "PICs" have less of a meaningful relation to the original PIC than the doomed Itanium had with the 8086.

Reply to
Hans-Bernhard Bröker

LOL, is this news ???

The PIC32 has a MIPS M4K Core.


Reply to
hamilton

Because - if you have the memory - it's simpler to just leave it in place. But mainly because a statically allocated buffer is measurable with sizeof().

If you can use malloc()/free() pairs such that you're guaranteed to never leak, then that works too.

Nah. Too many is too many. But for say an 8 or 16 bit PIC, you'll be using globals.

"globals" includes control block structs, which provides some degree of ... control.

Ideally, yes.

Yep.

Absolutely.

Understood - I've done it too.

Memory is cheap.

Newp - haven't run into that in ages. It's just time-intensive. I'm sitting on about 100K lines of code these days. Maybe more.

I'm just saying - there are $50 ARM systems - COTS - that sport gigs of memory these days.

--
Les Cargill
Reply to
Les Cargill

Obviously, if you don't have the RAM you have to do something else. But if you have it, a few buffers here and there in... I dunno, dozens of K aren't gonna hurt you on a target with multiple megs of SDRAM. It only has a cost if it *has* a cost.

But being able to do away with memory allocation fail is worth something.

True enough. static just controls access.

--
Les Cargill
Reply to
Les Cargill

First, you don't always have the memory.

As to sizeof:

static some_type_t file_scope[SOME_NUMBER];

foo()
{
    static another_type_t function_scope[ANOTHER_NUMBER];

    ...
}

You *know* the sizes of each of these: SOME_NUMBER * sizeof(some_type_t) and ANOTHER_NUMBER * sizeof(another_type_t).

Of course, you can also do: sizeof(file_scope) and sizeof(function_scope).

Or, derive the number of elements: sizeof(array_name)/sizeof(array_name[0])

I think you are using "static" in a different sense than intended in the standard. :>

Regardless, you also know how big each SUCCESSFUL allocation (minimum size thereof) will be just by the arguments to malloc().

I don't see how that follows.

main()
{
    some_type_t not_a_global[SOMETHING];
    ...
}

works just fine. Leave it to the compiler to put things where it can get at them. *You* decide what sorts of visibility objects should have. By using the narrowest scope possible.

No, it isn't. I have several designs, currently, where memory is *finite*. Adding memory means moving to a bigger SoC *or* moving to a multi-chip solution (when the range of SoC choices with "sufficient memory" falls below a threshold that makes the functionality I need "unavailable").

These aren't like "PC-based" designs where you can replace a 1G DIMM with a 4G DIMM, reboot and be fat happy and glorious.

Code size doesn't matter. I can write 10K lines of code that never use more than 200 *bytes* of RAM (been there, done that). I can write 500 lines of code that will gobble up gigabytes and still crave more! (in neither case being "pathological")

ROM is cheap. FLASH, less so. RAM, least of all. (then cache, registers, etc.)

E.g., I can unroll a loop to save an iterator. I can inline function calls to save stack frames. etc.

OTOH, once RAM is exhausted, I either come up with a new algorithm, move to a new platform *or* consider the problem as unsolvable given the space/power/cost constraints placed on it with today's choice of components.

Great! I'm designing an earpiece. How many gigs can I fit in that "ear canal"? *ASSUMING* cost is not an issue?? And, where will the user carry the battery for it all?? (in, perhaps, the *other* ear?? :> )

What if your price point is $9.95? (e.g., the price of many earpieces)

As I said (immediately above) "embedded" encompasses a very wide range of targets -- price points, physical sizes, operating power, etc.

Reply to
Don Y

I think the idea was not necessarily to minimize memory consumption, but rather to ensure that memory consumption stayed within the available bound. E.g. if you have 8k of memory available and your program uses 7k, there's no real benefit in shrinking it to 6k. But if you're allocating buffers or other large objects on the stack, you have to do very careful accounting of the possible call trees, to know how much stack memory can be consumed by nested allocations. But if you allocate those buffers statically, the linker can tell you their total consumption without your having to analyze too much. That just leaves a few dozen bytes per call level for some temporaries, so it's mainly a matter of preventing excessively deep nesting such as from runaway recursion.

It might be interesting to see MISRA C's recommendations for this.

Reply to
Paul Rubin

I have been working with computers that did not even have any concept of a stack. Many had some kind of page-0 addressing, in which a few bits (typically 5-12) were used to access variables in low memory, in page 0. That low-memory access is similar to the 680x direct addressing mode (1-byte opcode and 1-byte address to the low 256 memory locations).

Of course, these low memory locations can be reused as local variables in multiple functions, as long as they do not directly or indirectly call each other. I do not see why a static allocation would be any larger than stack allocation. In processors without stack-pointer-relative addressing modes, this will significantly reduce the code size.

Typically, a compiler intended for such primitive processors has means of defining the load address of variables or structures in the address space, making it possible to allocate variables at the same addresses, at least in separately compiled modules.
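One portable way to express that overlap in plain C is a union shared by routines that can never be active at the same time (vendor compilers often derive the same overlay automatically from the call graph):

    /* Valid ONLY because update() and report() never call each
       other, directly or indirectly. */
    static union {
        struct { int i; int sum; }      for_update;   /* update() only */
        struct { char buf[8]; int n; }  for_report;   /* report() only */
    } scratch;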

Reply to
upsidedown

Please explain

how much does a newline cost on your machine?

tim

Reply to
tim.....

On 14.12.2014 at 08:08, Paul Rubin wrote:

The idea I criticized was that making things static was a good strategy on memory-constrained systems. Which it's not.

And it fails that goal by _increasing_ memory consumption, moving it beyond that available bound with considerable likelihood. I.e. it creates the very problem that it's supposed to help avoid.

Not until you need another 1.5 KiB, that is.

As I said before, you need to do that accounting anyway, so that's not really much of an argument either way. Compilers for small-ish embedded systems worth their salt usually come with a static stack analyzer for that purpose.

And it will tell you that your program failed, long before it actually has to.

There are none.

Reply to
Hans-Bernhard Bröker
