Languages, is popularity dominating engineering?

They're not (simple) "variables" but, rather, (variable sized) structs. The UI code is *tiny* and invariant wrt the actual content of the UI. It parses a table and builds the structs that the interface needs on the fly. Then, "registers" them with the actual interface code.

So, the cost of the interface is basically just the text that will be displayed *in* it.

This means you don't have to debug each new menu, listbox, etc. but, rather, just "fill in the blanks" and let the existing code get it on the screen and interact with the user.
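A minimal sketch of the idea -- menu_entry_t, ui_register_item() and the action routines are hypothetical names invented for illustration, not the actual code being described:

    #include <stddef.h>

    typedef struct {
        const char *label;      /* the text displayed *in* the interface */
        void (*action)(void);   /* invoked when the entry is selected */
    } menu_entry_t;

    extern void do_open(void), do_save(void), do_quit(void);
    extern void ui_register_item(const char *label, void (*action)(void));

    /* The invariant UI code: walks any table, registers its entries. */
    static void build_menu(const menu_entry_t *tbl, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            ui_register_item(tbl[i].label, tbl[i].action);
    }

    /* A new menu costs only a new table -- no new UI code to debug. */
    static const menu_entry_t main_menu[] = {
        { "Open...", do_open },
        { "Save",    do_save },
        { "Quit",    do_quit },
    };

Adding a listbox or another menu is then just another table entry, not another code path.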

That isn't necessarily true, either. If the code imposes dependencies on what each thread does (and when), then you can ensure X has happened before Y based on those dependencies.

Sticking with the UI example, if the "action routine" associated with a menu selection is not invoked until the menu has been torn down, then the action routine *knows* that the memory allocated for that menu has already been freed before the first statement executes.

By contrast, languages that "hide" (well, lets say "make it more of an effort to be perpetually aware of *every* call to the memory management system") these actions (perform them "automatically" for the user and in a fine-grained manner) make it impractical for a developer to know what the state of a shared heap is likely to be at any point in the developer's code.

Reply to
Don Y

One thing I haven't seen in this thread is comments on the efficiency (code space and run-time) of using statics vs. auto data. Non-static locals are not necessarily allocated on the stack (or /a/ stack) - they can be in registers, or they can be eliminated or combined by the compiler.

Good compilers for brain-dead processors like the 8051 will automatically turn local variables into statics because the stack access is so slow - but they will do so cleverly, re-using the same static slots for different functions in order to reduce memory and/or banking.

And on decent processors, a large proportion of local variables stay in registers - giving the smallest and fastest code with the least memory usage (stack or static).

Using locals also means you can feel free to break complex expressions into smaller parts with local names, rather than writing it all as one line - the compiler will combine them.
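For instance (a trivial sketch), these two forms compile to the same code with any reasonable optimizer:

    double sum_sq_scaled(double a, double b, double scale)
    {
        /* One-liner form:
           return (a * a + b * b) * scale;                     */

        /* Named-steps form -- same generated code, easier to read: */
        double sum_sq = a * a + b * b;   /* likely lives in registers */
        double result = sum_sq * scale;  /* no memory traffic implied */
        return result;
    }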

Minimising the scope /and/ lifetimes of variables is always good programming practice, and gives the compiler the best chance at finding flaws and generating good code.

For large variables (e.g., arrays, and perhaps large structures), it can make a lot of sense to make them static - such data is usually accessed more efficiently as statics, and you can see the memory usage more clearly in the map files. But for everything else, use local (auto) storage when possible.

Reply to
David Brown

employ something at least similar to a 'standard' language. However, the language is only a small part of the learning curve, and generally contributes relatively little to the difficulties that often accompany a build. Much more impactful are the specifics of each environment, things like inbuilt data structures and methods, and the myriad platform-specific gotchas that lurk for the unwary. It can take years to become confident and fluent in a single environment; dealing with several can be a major career challenge.

Reply to
Bruce Varley

On 15.12.2014 09:26, David Brown wrote:

It's actually slightly misleading to state that the compiler turned those automatic variables into "statics", because they obtain only one of the two key properties the variable would get by flagging it "static": they will have a build-time fixed address, but no static storage duration, i.e. their value will usually _not_ be preserved from one entry into their scope to the next.

A better description of this kind of data overlaying in terms of C would be that the linker silently builds (a hierarchy of) static _unions_ out of automatic variables from separate branches of the call tree.
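As a contrived sketch (nothing the linker literally emits; f and g are hypothetical), the effect is as if one had written:

    /* f() and g() are on separate branches of the call tree and are
       never live at the same time, so their "autos" can share one
       fixed-address region: */
    static union {
        struct { int i;    char buf[16]; } f_locals;
        struct { long acc; unsigned n;   } g_locals;
    } overlay;
    /* Fixed address, yes -- but NOT static storage duration: values
       are not preserved from one activation to the next. */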

Reply to
Hans-Bernhard Bröker

That would be more accurate, yes.

Reply to
David Brown

That's not really a "static". Rather, more like a "register". It can *break* code unless the developer is aware of it and takes steps to treat these "locations" AS IF part of the processor's state.

It's comparable to "helper functions" used to implement floating point on processors without FP hardware (i.e., you can't interrupt the flow of execution and begin *another* FP operation without deliberately preserving the contents of those "hidden globals").

As I said up-thread, let the compiler do the optimizing. Concentrate on *clearly* writing what you intend. If the compiler is clever enough to extract some nugget of efficiency from what you've written and *equivalently* transform your code into something "better", let *it* do that -- instead of *you* trying to be cryptically clever in what you *write*.

Spend time finding good *algorithms* rather than micro-optimizing instructions. E.g., I haven't used "register" in ages -- figuring the compiler has a much better chance of figuring out what *should* go in registers than *I* do.

OTOH, a compiler isn't going to know that the accuracy of the algorithm can be satisfied with integers (or fixed point) instead of "floats".
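E.g. (a tiny sketch): a Q16.16 fixed-point multiply that a human can substitute for float when the dynamic range is known to allow it -- a substitution no compiler would infer on its own:

    #include <stdint.h>

    typedef int32_t q16_16;              /* Q16.16 fixed point */

    /* Multiply two Q16.16 values; widen the intermediate to 64 bits
       so the product doesn't overflow before the rescale. */
    static inline q16_16 q_mul(q16_16 a, q16_16 b)
    {
        return (q16_16)(((int64_t)a * b) >> 16);
    }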

+1. IME, "block scope" is too infrequently exploited.

IMO, "static" should be read as "persistent". The size of the object, frequency of access, etc. shouldn't drive its use but, rather, the

*required* lifetime (and, visibility).

But, then again, in my applications, the goal is *not* to keep things around (unnecessarily). YMMV.

Reply to
Don Y

So your top example does not manage the "can't open the file" case at all.

    void foo(char *filename)
    {
        FILE *fd = fopen(filename, "r");

        if (fd == NULL) {
            /* set things to indicate that the open failed */
            return;
        }

        /* compute stuff and read from the file */

        fclose(fd);    /* fclose(), not close(), for a FILE* */
    }

I think we have to separate this into two issues:

- Some language systems ( not 'C' ) provide for automatic calling of destructors.

- We wish to have the constraint that something is opened/allocated be managed as quickly as possible and in a nicely localized fashion.

Yes, dynamic allocation leads to states where partial allocation/opening occurs. RAII was specifically developed to combat this.

But there's an even more *general* case - one exemplified by perhaps a more esoteric ... thing - the "as if simultaneous" rule in SNMP agents: resolution of all varbinds (attribute-value pairs) within a single PDU must be atomic.

This forces separation of concern between "real work" and evaluation of constraints. Specifically, if anything cannot be done properly within an SNMP agent evaluating a single PDU containing multiple varbinds, the state of the agent is to be exactly as it was before the PDU was received.

So I think there's a way of doing the latter in 'C' by arranging things carefully, possibly using early return.

I expect it's simpler (and perhaps less simple) than it's made to be. I think it generalizes beyond C++.

I'd like to see them used less in general. If the set of constraints for an operation is complete and closed, then evaluate each one in order and do the "real work" at the bottom. You might even "bag up" all the constraint checking in a separate routine so the flow is better, as in the sketch below.
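A minimal sketch of that shape in C -- request_t, check_constraints() and do_real_work() are hypothetical names:

    typedef struct request request_t;

    extern int  check_constraints(const request_t *req); /* 0 if all pass */
    extern void do_real_work(const request_t *req);

    int handle(const request_t *req)
    {
        int err = check_constraints(req); /* evaluate every constraint... */
        if (err)
            return err;                   /* ...before any state is touched */

        do_real_work(req);                /* "real work" at the bottom */
        return 0;
    }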

I feel like - and this would take a long time to fully write out - that perhaps we can limit the number of times new() or malloc() are called within many software systems and improve reliability a smidge that way. This isn't a sore spot - the codebases I use now use nearly no dynamic allocations and then only when you're *guaranteed* not to leak memory, enforceable by inspection.

I feel like an example would make things worse right now. :)

That's one of the big wins here.

--
Les Cargill
Reply to
Les Cargill

Absolutely. Compilers are better able to optimise clear code than cryptic code - for example, they can do a better job with array expressions or multiplies than when someone has tried to "help" by using pointers or shifts.
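E.g., a trivial sketch - any current compiler emits the same instruction(s) for both, so the clear form costs nothing:

    /* Clear version -- the compiler will pick a shift or LEA by itself: */
    int scale(int x)
    {
        return x * 8;
    }

    /* "Helpful" version -- same generated code, harder to read (and
       technically undefined for negative x in standard C): */
    int scale_shift(int x)
    {
        return x << 3;
    }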

Not by me it isn't! I am also a fan of C99's mixing of declarations and statements, so that you don't have to declare variables until you actually need them.

Reply to
David Brown

I think a lot of this (programmer) behavior comes from the days of naive compilers -- folks got used to "being clever" with their source to try to force particular opcodes to be generated.

*Now*, the behavior just obfuscates what the programmer is *really* trying to "say" (do).

Again, I think this is a legacy behavior. People get used to declaring variables at the top of a function -- because you *had* to.

One of the hazards (inconveniences? inefficiencies?) of dealing with compilers of different vintage, different languages, etc. is the effort required to keep "best practices" in tune with the *current* environment.

E.g., I have to make a conscious effort to alter my coding style to exploit tuples or lists under Limbo; then suffer the opposite problem when I carry-over that same style to C and find the compiler "unreasonably" complaining.

"Whaddya mean, 'syntax error'???"

:-/

Reply to
Don Y

Which would be Java and Python.

But then how do you make yourself indispensable? 8-)

When time to market is paramount, there can be benefits to using the "great" language even if it is obscure. If the boss and/or the client doesn't object, why unduly burden yourself by using inferior languages? [he says, tongue planted firmly in cheek]

Outside the embedded and RBS (really big science) arenas, efficiency is hardly even a consideration any more. Witness the proliferation of bytecode interpreted languages and resource managed execution environments. If you find yourself making extensive use of C libraries from your oh!_so_wonderful high level language, you probably should have been writing in C to begin with.

OTOH, the majority of programmers today have never seen a malloc() and wouldn't know what to do with one if they did. For most programmers, GC is not a nicety but a necessity without which they cannot write a functioning program.

There is near consensus that a _safe_ language, by default, should provide arbitrary precision, base 10 arithmetic. The majority of programmers today have had little or no mathematics education and are unable to identify or fix potential problems caused by fixed precision and/or range in numerical calculations.

"Pascal is a voluntarily worn straight jacket. You use it precisely because it won't let you do certain things. It's a PITA when you're writing the code, but a godsend later when you're trying to debug it." -- Marvin Minsky

Management has a vested interest in not getting stuck with a useless pile of [code] if you leave, but that has to be weighed against the value of a developer's time. In many projects, software development is the major time expenditure and the major monetary expense. Anything which makes software developers more efficient should be welcome.

By that definition, few people *know* any language.

I certainly can read/understand more languages than I can sit down and immediately start to work with (there are only ~ a half dozen there). But generally what prevents me using a new language quickly is not its grammar but its vocabulary: i.e. before I can do anything useful I have to learn about its "standard library".

But so far as writing effective, bug free code the first time: that's a fine ideal, but it really isn't that important. What is important is that buggy code not escape from the development environment and that the final code be effective.

Obviously, the straighter the path, the better ... but the important thing is the result, not how you got to it.

Lisp is ok, but Prolog would be a more natural choice. However, naively written Prolog can be slower than a molasses popsicle and it can yield unexpected results if you don't cut appropriately. For best trade off of development time, program size and execution speed, you'd probably want to use OCaml.

The definition of "bug" is operation deviating from specification. When in doubt, change the specification. 8-)

George

Reply to
George Neuner

True, I was aiming to show how the two versions differ in how they close the file, rather than in how they open it. The C++ version throws an exception if it can't open the file.

Right, so I think this is a terminological quibble: RAII, as I've always seen that term used, refers explicitly to the style of relying on that automatic calling of destructors that is absent from C. So RAII is idiomatic in C++ and some other languages, but not in vanilla C.

See:

formatting link

where it says "RAII can be summarized as follows... the destructor always releases the resource".

Ok, that is basically the other part of RAII in the summary cited above. The RAII summary mentions exceptions but that doesn't seem important.

That sounds like a traditional transactional commit in a database. There are well-established techniques for implementing that, such as with a rollback log.
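For the single-threaded agent case, a minimal shadow-copy sketch in C -- agent_state_t, NVARS and apply_pdu() are invented for illustration; a real implementation would need locking or a proper rollback log:

    #define NVARS 16

    typedef struct { int vals[NVARS]; } agent_state_t; /* hypothetical */

    static agent_state_t live;

    /* Apply all varbinds or none: stage against a copy, commit on success. */
    int apply_pdu(const int *idx, const int *val, int n)
    {
        agent_state_t shadow = live;            /* work on a copy */

        for (int i = 0; i < n; i++) {
            if (idx[i] < 0 || idx[i] >= NVARS)  /* constraint fails: */
                return -1;                      /* 'live' is untouched */
            shadow.vals[idx[i]] = val[i];
        }
        live = shadow;                          /* commit in one step */
        return 0;
    }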

Maybe you'd like Ada or even SPARK/Ada (which I guess is now subsumed into Ada 2012?) better than C.

Yes, I think MISRA and SPARK both don't allow dynamic allocation.

Reply to
Paul Rubin

In the days with compilers hosted (running) in 64 KiB of core/RAM or less (with possibly overlay loading from slow floppies :-), you could not expect much optimization from most of the compilers. Some generated quite awful code without manual help (such as common subexpression extraction).

However, I have seen some Fortran compilers generating such good code that it was hard to beat with manual assembler, unless you skipped the Fortran parameter passing mechanism and used global register allocation etc.

These days with compilers running on big platforms, much optimization can be done and there is no point in trying to help the compiler. In practice the only need for manual assembler is to utilize some special target machine instructions that can't be expressed in HLL.

I very much doubt that a compiler could detect a section of C code doing FFT and generate a special instruction doing FFT, if such instruction is available.

One thing that I miss about Pascal is that you could declare procedures within procedures, and access the local variables of the outer procedure from an inner procedure's scope. In C you would have to declare them outside the function or pass a huge number of parameters. In modern C/C++, using inlining helps somewhat.
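The usual C workaround is to bundle the outer procedure's locals into a context struct and pass a single pointer - a sketch with hypothetical names:

    struct ctx {             /* the "outer procedure's" locals */
        int    count;
        double total;
    };

    static void accumulate(struct ctx *c, double x) /* the "inner" proc */
    {
        c->total += x;       /* reaches the outer locals via the pointer */
        c->count++;
    }

    double mean(const double *data, int n)
    {
        struct ctx c = { 0, 0.0 };

        for (int i = 0; i < n; i++)
            accumulate(&c, data[i]);
        return c.count ? c.total / c.count : 0.0;
    }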

Reply to
upsidedown

Yes. I have worked with compilers that needed such "clever help" in order to produce efficient code. Thankfully, I left such tools behind for the most part. (And when working with such tools, I always examined the generated code to see the results.)

For some people, that's the case. Others think it is somehow clearer, or better style, to declare variables at the top of the function. People have strange ideas about code style!

Yes.

I know the problem!

Reply to
David Brown

gcc supports nested functions, if you don't mind using such a gcc-specific extension. Personally, I never made much use of nested functions in Pascal - it made the outer function so inconveniently long.

Alternatively, C++ lambdas can act as local functions in some ways - and of course C++ classes also cover many use-cases for nested functions.
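For reference, a minimal sketch of the gcc extension (GNU C only; it will not compile as standard C or under clang) - count_below() is a made-up example:

    int count_below(const int *a, int n, int limit)
    {
        int hits = 0;

        void check(int v)    /* nested function: sees 'limit' and 'hits' */
        {
            if (v < limit)
                hits++;
        }

        for (int i = 0; i < n; i++)
            check(a[i]);
        return hits;
    }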

Reply to
David Brown

Yes, but it goes beyond "trying to out-think the compiler". Some folks seem to think that coming up with a "tight" way to express something in the *source* (i.e., use the fewest glyphs/newlines/etc.) will magically make the resulting object smaller!

E.g., as if:

a=b=c=5;

is somehow "better" than:

c=5; b=c; a=b;

(and variations thereon).

Many compilers are in that class. NatSemi's 32K C compiler was remarkably good in the mid *80's* (considering how relatively obscure the product was to have received that much "effort").

My point is: there is really *never* a point trying to out-think the compiler (if doing so comes at the expense of code clarity). The "big" optimizations come from choices of algorithms, etc. All these other things are "micro-optimizations". The time spent trying to sort out all those little things, debug them *and* their costs to those who "follow" could better be spent napping and waiting for technology to get faster without your effort.

The problem we tend to see in COTS devices is that others "spend" those technological improvements in ways that aren't always beneficial to their end user(s). E.g., "glitz" instead of "reliability"/accuracy.

The same was true of Algol -- "nested functions".

I miss nothing about Pascal! :-/

Reply to
Don Y

IME, that was the problem: the "cleverness" didn't typically result in more efficient code. Just "harder to read" or "easier to break". The developer, however, would typically be *so* convinced that his "tricks" were smarter than the compiler's "dumb" (hey, it's a machine, right?) approach that they *must* be more efficient.

And, they would be part of his/her *style* instead of applied where needed AND VERIFIED to achieve their intended goals. I.e., if you *really* think this operation needs to be expressed in this weird manner, where's the commentary justifying your action??

(What happens when the compiler is upgraded? Do you go back and remove all that cruft? Or, leave the cost of its presence there when it's not really improving anything??)

I would *prefer* to be able to find definitions in a fixed place. But, it's only a problem when you're dealing with some lengthy, complicated bit of code and have to go chase down *where* the definition might lie.

Solution: strive for simple functions and good "presentation" -- so you can more readily "perceive" where the declarations *should* be (and, surprise!, that's where they are!)

At times, it can be frustrating as it can add (some small amount of) typing to your effort -- e.g., when you decide to move the declaration (because a variable must be accessed in a larger scope; or, earlier).

Limbo allows declaration and assignment with slightly different syntax:

    foo: int;
    foo = 2;

vs.

    foo := 2;

Note that the latter is "encouraged" by the syntax -- it's easier to type than the former (I believe you should strive to make better practices the ones that require the least effort from the user... so laziness causes them to be adopted! :> )

I can't tell you the number of times I've had to change the ":=" to '=' and scroll up to insert the declaration a few lines earlier. This gets old *really* quick! It would save a lot of un-typing/re-typing to just put all the declarations in one common place...

My "pro bono" day (perhaps last of the year?? :> ) ...

Reply to
Don Y

It is perhaps true that compilers can be made that good, but I have yet to see HLL-written code that my VPA-written code won't beat by at least a factor of 10 when it comes to density (with VPA I control how high-level it gets while I write; lines with register operations alternate with actions on objects, etc.). Execution speed likely too, but this is harder to compare.

It is not just a compiler thing, it is about how the programmer thinks; high level languages simply constrain that, assuming he is too stupid to avoid mistakes which are known to be commonly made.

I realize I am probably the last person in the world still doing that, but I can say that once one becomes really good at writing with access to the low level - a good register model in the head all the time - and with all the facilities to go higher (plenty of argument/text processing abilities, partial word extraction, recursive macros, multidimensional text variables, etc. etc.), high level languages with their predefined "one size fits all" sentences look like tools from the stone age.

Dimiter

------------------------------------------------------ Dimiter Popoff, TGI

formatting link

------------------------------------------------------

formatting link

Reply to
Dimiter_Popoff

Hey George!

How they hangin'?

I'll take your word on that. I'm not around youngsters, much.

Polaroids of the boss engaged in unfortunate acts with livestock?

There is some merit to this. E.g., the goal of my "scripting language" is to allow folks who probably would never be able to write a line of code in any "modern" language to implement things with some degree of success *without* a big investment or high anxiety.

But, there, efficiency isn't the concern.

Joe Average *User* shouldn't be expected to do so -- hence my reason for adding that support. They're "just trying to get an ANSWER", not write a treatise that serves as a testament to their cleverness.

We've discussed this previously -- I suspect (modern) programmers are similarly motivated (by Manglement and training): just push it out the door, we'll fix it when the users tell us what's wrong! (look at all that FREE TESTING we'll get!)

Unfortunately, software seems NOT to be considered an "investment". I'm not sure if this is because time is not being allotted to create "reusable components"; or, if developers are loath to reuse software (NIH, etc.) and thus "train" management not to waste time on creating reusable components because they WON'T be reused!

I can "read" lots of languages (including things like Spanish, Greek, etc. -- despite never having learned any of them). But, could only

*guess* as to what they were saying, in many cases. If you don't have enough confidence to be able to spot errors *in* a piece of code, then I wouldn't consider that as "knowing".

I.e., something that separates the real code from "pseudo-code" in your mind (so, instead of "it looks like this is trying to...", you are confident saying "this code DOES...")

I guess it depends on the OP's intent behind the question. I see "know" and "familiarity" as two entirely different things. I can PROBABLY sit down and look at a huge number of different code fragments in many languages -- including imaginary ones -- and claim, with some amount of confidence, that "it looks like this is trying to..." (especially if the code was written by someone who "knows" the language so there is some inherently high degree of correctness to it).

But, I'd never claim/imply to be able to sit down and "make it work" (in some small number of attempts).

"Well, if we're going to *change* it, why waste time WRITING IT in the first place?? Let's wait until we're done -- and, hell, at that point, we won't NEED it!! :> "

Gotta go hide the last batch of cookies before C decides they're "breakfast"...

Reply to
Don Y

Maybe not a full FFT but they can do remarkable things. The following is some code that takes advantage of the ARM short vector instructions. The vectors a, b and c are 1024 elements of float. The loop is executed 256 times using 4 array elements at a time. I did have to specify in the compiler options the architecture level and that a vector unit was there - but that was a lot easier than writing it by hand using inline assembler or built-ins. The "big platform" was an original Beaglebone Black (about $50) for compilation and execution.

[compiler listing snipped: interleaved C source and generated assembly for the vector loop at .L3 in vectest.c]
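A sketch of the kind of source that triggers this. The exact loop body in the original post was lost to formatting, so an element-wise multiply is assumed here, and the flags are typical gcc usage (e.g. -O3 -mfpu=neon), not the poster's actual options:

    #define N 1024
    float a[N], b[N], c[N];

    /* gcc vectorizes this into NEON ops handling 4 floats per
       iteration, so the loop body executes 256 times. */
    void vec_loop(void)
    {
        for (int i = 0; i < N; i++)
            c[i] = a[i] * b[i];
    }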
Reply to
Dennis

That looks like x86 code.

Reply to
Paul Rubin
