Arduino / compilers on uC

On Saturday, February 11, 2017 at 10:31:33 AM UTC-5, snipped-for-privacy@gmail.com wrote:



ICE -- that's nice.

formatting link

I remember single-stepping in QuickBasic. That was a damn fine compiler.

Cheers, James Arthur

Reply to
dagmargoodboat

I used the Atmel Studio with AVR Dragon plus "jtag" for single stepping through C and the generated asm. It worked well, apart from being windows only.

That was equivalent to the enormous (size, price) Z80 etc. ICE systems I used in the early 80s. The only difference was a couple of orders of magnitude reduction in size and price of both the MCU and the dev kit.

I'm still schizophrenic about whether to be delighted or appalled by the lack of change in >30 years.

Reply to
Tom Gardner

One of the main points about abstraction and libraries is that they should allow you to
- ignore what it isn't necessary to think about in detail
- concentrate where you can uniquely add value

It has /always/ been the case that
- there is a learning curve with abstractions and libraries
- they can hide too much for your application

Those continue to be standard engineering tradeoffs, and the choice depends on the individual application and the individual.

My main objection is where the toolset adds to what I need to understand, rather than minimises it. C (and especially C++) is now in that category.

Reply to
Tom Gardner

formatting link

That's been available since before Arduino, which is why I find their burn-and-crash model so ridiculous.

Not nearly as good as HP Basic, which had all sorts of cool instrument control stuff.

Cheers

Phil Hobbs

I used to like Quick C too.

Reply to
pcdhobbs

All AVRs have hardware breakpoint support, and many have data breakpoints too.

Reply to
Clifford Heath

Tek BASIC was pretty nice too, with "waveform" variables and "primitives" like INT, DIF, FFT, etc. The hardware was on the expensive side ($1/4M in 1977 bought a pretty nice system ;-).

Reply to
krw

That is the point of "bool", yes. But you took something that you did not know to be a valid bool, and told the compiler that it /was/ a valid bool - that's a lie.

An alternative way to think about this is that the data in the eeprom is external data from an unknown source. You don't know it is valid until you have checked it.

If you are writing a web application, and it asks the user for a name, you don't assume that they have entered a valid name. You /check/ the entered data - you don't pass it on to a database query unless you want to create an SQL injection attack. The data in the eeprom is the same - it is unchecked unknown external data, and you have to confirm that it fits your requirements before using it. You can't simply /declare/ to the compiler that it fits your requirements - you have to check it.

The compiler /will/ take steps to make sure that a bool variable holds a legal value - it does so every time it stores something that is not necessarily legal. So if you have a uint8_t item that you are storing into a bool, the compiler will convert any non-zero value into 1 before storing it. But in your code, you have told the compiler that you already have a checked, valid bool value - so a simple copy is all the compiler needs to do to be absolutely sure the bool is valid.

It removed code that was clearly unnecessary based on the information you gave the compiler.

Variables (with program lifetime) are /always/ initialized to a known condition at the start of main(), as far as C is concerned. They are either initialised to an explicit value, or implicitly to 0. If the startup code (as run between reset and main()) does not do this, then that code is lying to the compiler. (I have seen tools that do this - it's a serious PITA.) Local stack data and heap data are not initialized, and the compiler knows this too.

You don't have to do anything of the sort. You simply have to treat external unknown data as having any value that can physically fit in the space. So if you are mapping data from an eeprom, then use uint8_t for the bytes. Then you can assign that to a bool type if you want.

Alternatively, you can have a mechanism that checks and corrects the eeprom data once, then you know it is correct. For example, at program startup you could read the eeprom data as a uint8_t, and if it is not a valid bool, then write a 0 or 1 to it. For my eeprom data, I will often have a checksum to be sure that the data is valid - when starting up, I verify the checksum. If it is correct, then my eeprom data is safe to use (bools, enums, and all). If it is incorrect, then I fill the eeprom structure with a valid default set of values.
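
A minimal sketch of that approach (untested, and read_eeprom_byte() here is just a stand-in for whatever accessor your target provides - avr-libc's eeprom_read_byte(), the Arduino EEPROM library, or the fake array used below):

#include <cstdint>
#include <cstdio>

// Stand-in for the real EEPROM; erased EEPROM typically reads back as 0xFF.
static uint8_t fake_eeprom[16] = { 0xFF };

static uint8_t read_eeprom_byte(uint16_t addr) { return fake_eeprom[addr]; }

// Read the flag as a raw byte first: any value 0..255 can come out of external
// storage.  The uint8_t -> bool conversion is well defined (any non-zero value
// becomes true), whereas declaring the stored byte itself as bool would promise
// the compiler that it already holds 0 or 1.
static bool read_flag(uint16_t addr)
{
    uint8_t raw = read_eeprom_byte(addr);
    return raw != 0;
}

int main()
{
    printf("flag = %d\n", read_flag(0) ? 1 : 0);   // prints 1 even though the byte is 0xFF
    return 0;
}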

You have told the compiler that U.dat.flg contains either 0 or 1 - you did this when you declared its type to be "bool". You are asking the compiler to take this value, and if it is 1 then it should store 1 in w, and if it is 0 then it should store 0 in w. Implicitly, you are also saying that if U.dat.flg is /not/ 0 or 1, then the compiler can do what it wants - including launching nasal daemons. That's how programming works - it is a contract between the programmer and the compiler. When you break your side of the bargain, don't expect the compiler to perform miracles.

Reply to
David Brown

No - "more defensive" means "less efficient". I would not want a compiler to generate slow and inefficient code on the basis that some programmer somewhere might have made a mistake.

Reply to
David Brown

No, porting C code is not necessarily difficult. C makes it quite clear what behaviour is fully specified by the C standards, what is implementation-defined (i.e., different compiler implementations can do things in different ways, but they have to tell you their choices in their documentation), and what is undefined (i.e., the standards make no comment on what happens).

If your code relies only on standards specified behaviour, and you use conforming compilers, "porting" is just a matter of re-compiling.

If your code relies on implementation-defined behaviour, then porting is a matter of checking that the new compiler fits your requirements.

If your code relies on undefined behaviour, you should fix the code before you even think about porting.

Now, it is certainly the case that a good deal of code relies on implementation-defined behaviour. And much of that does so in a /bad/ way (such as making assumptions about the size of an "int"), making code porting difficult, rather than in a /good/ way (such as deliberately using integer types whose sizes are implementation-dependent). That is a problem with the way C code is written, rather than a problem with C itself - and C99 included useful changes to make it easier to write good portable code.
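
To make the three categories concrete, a small (untested) illustration:

#include <climits>
#include <cstdint>
#include <cstdio>

int main()
{
    // Implementation-defined: each compiler documents its choice of int size
    // (16 bits with avr-gcc, 32 bits on most desktop compilers).
    printf("int is %u bits\n", (unsigned)(sizeof(int) * CHAR_BIT));

    // Portable: the C99/C++11 fixed-width types mean the same thing everywhere.
    int32_t counter = 100000;        // would overflow a 16-bit int
    printf("counter = %ld\n", (long)counter);

    // Undefined: signed overflow.  A conforming compiler may assume it never
    // happens, so code relying on wrap-around is not portable at all.
    // int x = INT_MAX; x = x + 1;   // don't do this
    return 0;
}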

C has been, and continues to be, remarkably successful for such an "unsuccessful" language.

That is why much of it is left undefined by the C standards. Nonetheless, people seem to manage.

It is resolvable in some cases, unresolvable in others. C++ takes an alternative approach in some ways, which makes it easier to be correct but sometimes needs duplication in the source code.

The flexibility of templates was unintentional, but it is not a "monstrosity" - it opened a new set of possibilities in the language. The fact that you can compute prime numbers at compile-time with templates does not mean that you /must/ compute prime numbers at compile-time with templates - it is not a disadvantage in any way. Many assemblers have powerful enough macro languages to let you generate primes at compile time - no one makes a big deal out of that.

With C++11, the language designers realised that powerful compile-time computation functionality was actually very useful, but templates were a cumbersome and inefficient way to do it - so they introduced constexpr to make it easier. C++14 made it even more flexible and convenient.
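
For instance, a C++14 constexpr function looks like ordinary code but is evaluated at compile time wherever a constant expression is required (untested sketch):

#include <cstdio>

constexpr bool is_prime(unsigned n)
{
    if (n < 2) return false;
    for (unsigned d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

constexpr unsigned nth_prime(unsigned n)
{
    unsigned count = 0, candidate = 1;
    while (count < n) {
        ++candidate;
        if (is_prime(candidate)) ++count;
    }
    return candidate;
}

// The array size must be a constant expression, which forces compile-time
// evaluation: the 10th prime is 29, so this is a 29-byte buffer.
static char buffer[nth_prime(10)];

int main()
{
    printf("10th prime = %u, buffer size = %u\n",
           nth_prime(10), (unsigned)sizeof buffer);
    return 0;
}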

Perhaps it is beyond /your/ comprehension. Certainly C++ is a large and complex language, and you can write code in it that is difficult to understand without a good deal of experience. But it is just a skill - you can study it and learn it, if you have enough natural aptitude and dedication. Kind of like most other tasks in life.

Reply to
David Brown

Certainly gdb debugging works fine on the AVR using other IDE's - I use Eclipse. (Of course there are limitations from the hardware, such as a limited number of hardware breakpoints.)

Reply to
David Brown

Why? Is it just because I explained things in a lot of detail, or do you think any of what I wrote is incorrect? If so, give me the details and references to the section numbers in a C standard (the N1570 draft of C11 is the modern one) and I will see if we can sort out the differences in understandings.

Reply to
David Brown

The compiler is fine - it is gcc, and certainly counts as a decent optimising compiler, and it is /not/ broken. (Code generation for the avr is not always optimal, but is usually at least "decent".) The OP had two problems here - one was the level of abstraction through inefficient Arduino libraries, the other was a misunderstanding of how "bool" works (aided by unclear documentation and information from Arduino).

Reply to
David Brown

While true, that argument has always struck me as unhelpfully splitting hairs.

Gurus are different, of course, but are in limited supply.

Strawman argument: I've never said C is "unsuccessful", and I didn't above.

Emphasis on "seem", especially with XP and TDD programming practices. Multithreading errors are notoriously difficult to reliably provoke during testing. As a hardware engineer, it has always surprised me how many such errors are made by most programmers.

That's a strawman argument that avoids the point, to wit: "... the designers *discovered* they'd created ..." and "The designers didn't believe it until someone demonstrated it"

I prefer languages to be designed, not uncovered.

Reply to
Tom Gardner

If you write code that assumes it is safe to store a pointer in an int, then you will have more problems porting it than if you store your pointer in an intptr_t. If you write code that assumes an int is 32 bits, then you will have more problems than if you use an int32_t (or your own home-made equivalent type pre-C99 - then you only need to change one definition in your code).
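
A quick (untested) illustration of the difference:

#include <cstdint>
#include <cstdio>

int main()
{
    int value = 42;

    // Portable: intptr_t (where the implementation provides it) is wide enough
    // to round-trip a pointer through an integer and back.
    intptr_t as_int = reinterpret_cast<intptr_t>(&value);
    int *back = reinterpret_cast<int *>(as_int);
    printf("%d\n", *back);

    // Not portable: "int i = (int)&value;" may compile, but silently loses bits
    // wherever pointers are wider than int (typical 64-bit targets).
    return 0;
}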

Fair enough. You said it "has been unsuccessfully straddling two stools". However, it has, in fact, been remarkably successful for both purposes. It has been pushed out from many "general purpose high-level language for application" uses - mainly because there are alternatives that give far better productivity rather than because C is so bad for the purpose. And C reigns supreme for "low-level near-the-silicon" code.

You can't get multi-threading correct by trial and error - you have to know what you are doing. That applies in all languages. C gives you no help there (though some C compilers do, with tools like gcc/clang's sanitizers), and certainly some other languages make it easier to get multi-threading right.
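
The gcc/clang sanitizers catch exactly this sort of thing at run time. A deliberately broken (untested) example - built with something like "g++ -std=c++11 -pthread -fsanitize=thread -g race.cpp", the thread sanitizer reports the race:

#include <cstdio>
#include <thread>

static int counter = 0;            // shared but unprotected: a data race

static void bump()
{
    for (int i = 0; i < 100000; ++i)
        ++counter;                 // unsynchronised read-modify-write
}

int main()
{
    std::thread a(bump), b(bump);
    a.join();
    b.join();
    printf("counter = %d (often not 200000)\n", counter);
    return 0;
}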

With any tool sufficiently complex, uses will be found beyond those conceived by the original designers.

Reply to
David Brown

Thread support is finally getting someplace reasonable, it looks like. BITD you just had to use 'volatile' and boatloads of mutexes, and hope the undefined behaviour you were forced to rely on stayed consistent from release to release.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

That's been a well-known error for >35 years, and is probably taught in schools :) Unfortunately there are far too many more subtle traps that continue to catch out experienced programmers.

Not just productivity.

More modern and more closely specified languages enable the correct construction of far more complex and ambitious systems, including by typical programmers.

Which enables them to make new classes of cockup :( (But as I taught my daughter, "let's make new mistakes")

Agreed. But there are others coming along in that domain; maybe some will succeed.

C will never disappear, just as Cobol and PDP11s are still around.

Of course.

But C goes further by actively stating that such considerations are outside the scope of C and are the libraries' responsibility. Which would be fine except that the libraries have to rely on things that C avoids specifying. (Caveat: maybe not the latest incarnations)

Consequence: either the compiler+libraries must be a unit, or there might be subtle errors lurking in corners. And "compiler" means the specific version of a compiler, since they change over time.

Sure, but that's not the point, is it!

Reply to
Tom Gardner

Yes, and let's hope so - but in this respect I'm a "kick the tyres" guy. I suspect it will be quite a few years before the problems are apparent and ironed out.

It is notable that Java, even though it had a much better starting point and decades more experience, had to revise its memory model after ~10 years. Very few developers noticed.

Reply to
Tom Gardner

That depends on the processor and the target architecture - with a simple enough system, "volatile" can be enough if you use it correctly. But usually you need some implementation-specific features - such as inline assembly for memory barriers, cache flushes, "sync" instructions, etc. Or you use an RTOS with this done for you.

With C11/C++11 there are atomics, which can make it easier to get things right. But you still need to know what you are doing, especially if your target is multi-core, super-scalar, with caches, buffers and the rest of it.
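
A small (untested) sketch of the C++11 flavour - the library inserts whatever barriers or special instructions the target needs, so the same source is correct on a single-core MCU or a multi-core machine with caches:

#include <atomic>
#include <cstdio>
#include <thread>

static std::atomic<int>  counter{0};
static std::atomic<bool> done{false};

static void producer()
{
    counter.fetch_add(1, std::memory_order_relaxed);
    done.store(true, std::memory_order_release);    // publishes the earlier write
}

int main()
{
    std::thread t(producer);
    while (!done.load(std::memory_order_acquire))   // pairs with the release store
        ;                                           // spin - fine for a demo
    printf("counter = %d\n", counter.load());
    t.join();
    return 0;
}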

Reply to
David Brown

Yes, but these go hand-in-hand. In C, I could write code for an efficient and leak-free string indexed hash table. But in Python, I can use "ht = dict()" and it's done. (In C++, I could look up the right headers and the convoluted template syntax, and it is done in half a dozen lines - it's a compromise.) The greater productivity means I can write more advanced systems.
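
For instance (untested), the C++ version really is only a handful of lines, and the destructor cleans up so there is nothing to leak:

#include <cstdio>
#include <string>
#include <unordered_map>

int main()
{
    std::unordered_map<std::string, int> ht;   // string-indexed hash table
    ht["apples"] = 3;
    ht["pears"]  = 5;

    for (const auto &kv : ht)
        printf("%s -> %d\n", kv.first.c_str(), kv.second);
    return 0;
}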

(I mostly use C on embedded systems, and mostly Python on PC's.)

It's called "learning by experience" :-) And it's more enjoyable than making the same mistakes again and again.

C++ is taking over more, and we are seeing more embedded systems written in higher-level code than before (use an 800 MHz ARM running Linux and Python instead of a 40 MHz ARM running C or C++). But it will be a long time before any of the newcomers like Go or Rust are realistic for the low-level code.

No - C is still popular, unlike Cobol and PDP11's. New stuff is written in C. Its longevity, portability and stability are some of its key selling points.

Yes. This has always been the way of C. The language is specifically designed for two different types of coding - portable coding written to match the standards-defined behaviour so that the same code can be used on a wide variety of systems, and implementation-specific coding written to get the best out of a particular target or to use features that cannot be portable.

Reply to
David Brown

Neatly and accurately snipped :)

I was spoiled by using Smalltalk when it came to container classes. I was dismayed, but not surprised, at how difficult they are to do well in C++ - and at how many years it took before the first (not very usable) implementations became available.

With Java I was pleased, but not /too/ surprised, at how quickly high quality libraries were available from many many sources. The contrast with C++ was compelling.

Choosing the right tool for the job - one of the key attributes of an engineer :)

I've used a bit of Python, and it seems OK, except that pretty print reformatting is unreliable /by design/.

I remember makefiles, where the invisible difference between tabs and spaces was semantically crucial :(

Just so :)

True, but has it reached an inflection point?

I wonder if INFOSEC will turn out to be a significant pressure. Time will tell.

A question is then whether an employer can obtain the relevant skills for design implementation /and/ /maintenance/.

Everything I've seen and heard indicates that while it /appears/ possible, in practice there are latent faults that will bite you.

Mind you, I've seen crap written in many languages. One project I've seen wasn't at all concerned about a liberal injection of try { ... } catch (NullPointerException e) { /* nothing here! */ } used to solve the problem of unit tests failing.

Reply to
Tom Gardner
