Fundamental C question about "if" statements

That was fixed (bar any bugs in needlessly complex bitfield definitions) in gcc 4.6, from 2011, with the "-fstrict-volatile-bitfields" option that is on by default in ARM compilers.

Reply to
David Brown

What do you expect them to do? Demote their existing programs (their "babies"), throw away their expert knowledge (sunk cost fallacy) and get out of their comfort zones? It takes a bit of wisdom to handle these discussions.

--
(Remove the obvious prefix to reply privately.) 
Made with Opera's e-mail program: http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

A few comments here.

Don't use "unsigned int" - use uint32_t. Make it all absolutely clear and fully specified.

You don't need to do this in two parts. Put it all together, using an anonymous struct (this is a very common extension to C99, which has existed in gcc since close to the dawn of time, and it is now part of C11).

I prefer to make typedefs of all my types, so that I don't need union or struct tags later. But that is a matter of style. And I strongly dislike systems Hungarian style, such as prefixing a type with U just because it happens to be a union (especially if you haven't used typedef, so that you have to have the "union" tag too).

Make your padding bits explicit with a name. The C++ memory model says that unnamed bitfields may not be changed by any write operations (C is more lenient), which can restrict how the compiler accesses the neighbouring named fields. It's better to make all your padding explicit.

I also like to add a static assertion to check that there are no mistakes when defining structures like these. By preference I also use "-Wpadded" in gcc to warn of any unexpected padding - but that does not play well with most manufacturer-supplied headers.

#include <stdint.h>
#include <assert.h>   /* for static_assert (C11) */

typedef union {
    struct {
        uint32_t bit0      : 1;
        uint32_t bits1_10  : 10;
        uint32_t bit11     : 1;
        uint32_t padding   : 16;
        uint32_t bits28_31 : 4;
    };
    uint32_t all;
} periphRegister_t;

static_assert(sizeof(periphRegister_t) == 4,
              "Check bitfield definition for PeriphRegister");

Having said all that, your definition will also work correctly (assuming you've got the volatile there somewhere, and are using gcc 4.6 or above which fixes the volatile bitfield problem).

I presume you mean "extern volatile const", and that's only for the read-only registers. It's common to define registers like this:

#define periphRegister (*((volatile periphRegister_t*) 0x12345678))

(Registers can, of course, be collected in larger structures first.)
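As a sketch of that (the register names and the base address here are made up, not from any real device):

#include <stdint.h>

typedef struct {
    volatile uint32_t CTRL;    /* offset 0x00 */
    volatile uint32_t STATUS;  /* offset 0x04 */
    volatile uint32_t DATA;    /* offset 0x08 */
} periphBlock_t;

#define periphBlock (*((periphBlock_t *) 0x12345000))

After which "periphBlock.CTRL = 1;" accesses the register directly.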

The advantage of that is that everything is defined in a C header file - you don't need to mess around with linker files too. And since the compiler knows exactly what addresses are used, it can generate more efficient code than if they have to be sorted out at link time.

If you write:

void test1(void)
{
    periphRegister.bit0 = 1;
    periphRegister.bit11 = 0;
}

then the compiler will (correctly) generate two independent RMW sequences, optimising only the address calculation.

If you want both assignments to be done in a single RMW operation, you must do so explicitly - that's what the "all" field is for.

void test2(void)
{
    periphRegister_t cache;

    cache.all = periphRegister.all;   /* one volatile read */
    cache.bit0 = 1;
    cache.bit11 = 0;
    periphRegister.all = cache.all;   /* one volatile write */
}

No, it will not (assuming gcc with the volatile bitfield fix) - access is always by the defined size. However, the C standards do not say anything about this, so compilers that do it differently (such as by using a minimum access size) are not breaking the standards even if they may be breaking the target's ABI.

The rambling is good - it is an excuse to discuss these things that can be difficult to get right. Hopefully lots of people will see these posts, and maybe someone will learn something. (And hopefully someone will correct me if I've written anything incorrect!)

Reply to
David Brown

A key issue with bitfields is that there is no way in C to find out or specify the ordering - LSB first or last. There is also no correspondence between bitfield ordering and the target endianness. (There is no standard way to find that out either, though compilers typically have predefined macros for the purpose, so that you can write "#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__".)
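For example, a minimal sketch of using those predefined macros as a compile-time guard (the struct is made up, and note this only checks byte order - it cannot check bitfield order):

#include <stdint.h>

#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
typedef struct {
    uint32_t low16  : 16;
    uint32_t high16 : 16;
} halves_t;
#else
#error "Bitfield layout only verified for little-endian targets"
#endif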

For some platforms, there is no standard for bit ordering. x86 is one such platform - different compilers use different orders (though LSB first is most common). Famously, MS changed the order from MSB first to LSB first in a point upgrade of their C compiler - that caused much wailing and gnashing of teeth.

However, on other platforms, the ABI is much clearer. On little-endian ARM, bit ordering is always LSB first. (I don't know about big-endian ARM.) So you can rely on that across different ARM compilers.

Reply to
David Brown

Changing languages has switching cost. At least when you're at work, there's nothing fallacious about that. I've little doubt that Ada is a "better" language; unfortunately, it's still harder to obtain toolchains and use them than 'C'.

'C' is sort of a "lingua franca". Ada is kind of an Esperanto.

--
Les Cargill
Reply to
Les Cargill

Yeah, I've not tried earlier PICs - I'd be surprised if they're in GCC.

GCC does have MIPS now that Imagination have taken it over.

My whole problem was that GNAT couldn't be built without hacking up the configure scripts and basically breaking them; this is how Brian does it. As of GCC 5 that is no longer the case, and I've included the same patch in free-Ada.

You should be fine with PIC32.

Luke

Reply to
Luke A. Guest

On 24-Sep-15 at 2:22 PM, Boudewijn Dijkstra wrote:

Choosing a language is not just about the language. It is also (and maybe even more) about the tool quality, price and support, about the literature, training and schooling that is available, about the existing body of users of that language, and about your guess as to how these factors will evolve in the future.

These factors are all hugely in favour of existing, well-established languages. A new language must either fill a niche that existing languages left barren, or have enormous benefits over the existing languages.

So a programmer who chooses to base his career on C (or C++, or Python, C#, Java, ...) should do so not just on the quality of the language, but also (and maybe even mainly) on the other factors I mentioned.

I think C as a language has HUGE problems, and C++ copied most of them. Yet C++ is my favourite language.

blog about this subject:

formatting link

Wouter

Reply to
Wouter van Ooijen

gcc has supported MIPS since the dawn of time - it was one of the earliest ports.

Reply to
David Brown


But not everything, as Microchip didn't give their changes back to GCC; they had their own toolchain.

Reply to
Luke A. Guest

Ah, yes - gcc supported MIPS and MIPS-based cores, but Microchip took a snapshot of gcc, made their own changes, and made their own headers and libraries. And that has AFAIK never made it back to mainline - Microchip doesn't really understand how open source works.

Reply to
David Brown

It is biased in many ways that have NOTHING to do with language. Some of this is based on my experience, over the years, of various computing projects where I mentor students or act as the "customer" for them.

1/ He freely admits that in the first year the hardware was not finished till a few weeks before the end of the project time.

2/ From the 2nd year onwards he is less wrapped up with getting the hardware sorted, and probably spends more time tutoring/mentoring projects. He now has data on what areas are difficult for students to understand and implement.

3/ From the 3rd year onwards he no doubt has a better idea of what working software to supply, and also how better to supply definition documents.

4/ By the time he starts the Ada version he has over 5 YEARS of previous experience in structuring the project for the students and knowing which areas the students take longer to grasp; he also has working examples of his software to port to Ada.

5/ Over TEN years the hardware and compilers may have improved, but not necessarily his coding styles. Compilers that take a tenth of the time to run and are better at detecting errors (whatever the language) will have some effect.

As with anything if you repeat it often enough you get more adept at doing the same task, whatever your field.

His analysis reminds me more of "I want the world structured in this perfect way, and I always do things this way, but I have to bend for the tools".

Very similar to some computing exam questions I see, where certain questions are done in various languages but ALWAYS start in Pascal, so all the other languages get kludges added to make things like arrays start at index 1, or to return values from functions in some old Pascal way rather than in the method that better suits the language. Let alone global and local variables and function parameters having the SAME name.

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk 
    PC Services 
  Raspberry Pi Add-ons 
 Timing Diagram Font 
 For those web sites you hate
Reply to
Paul

And those headers/libraries are restricted by Microchip so even though you can download the specific versions of the gcc/binutils toolchain Microchip are using and build them, you are not allowed to use the Microchip headers/libraries with your own toolchain.

I'm just using a generic toolchain from MIPS which builds and works just fine for me. I'm using libraries I wrote myself and ported to MIPS and I create my own headers from the public Microchip documentation as I need them.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

There's nothing gcc based for 8-bit PICs. The only open source compiler that I am aware of is SDCC.

Microchip have their own one-off port of the gcc/binutils toolchain for the PIC24 but have not added the changes back into the mainline toolchain.

Oh, now that's interesting. Thanks, Luke.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

They also practically never go away; design in MC parts and it keeps the supply people happy.

It's what you get used to. I've been using PICs since '93... and I can still buy the part (PIC16C56) - the product can still be made as-is 22 years later. I use PIC18F2525 processors a lot, for both hobby and commercial stuff. Instead of messing around with Microchip's compilers I use SDCC; it works for me, and it's open source, so if something goes wrong I have a fighting chance of fixing it myself. So far the only bug I've hit was a failure to note that the manual says to beware of --obanksel=2 - a known issue.

Yes. It works, it's reliable, plentiful supply, and cheap enough. I don't discount using other processors, but for anything that needs

Reply to
Terry Newton

AVRs (like the ATMEGA) have their quirks, but they are almost negligible in comparison to working with PIC devices. The only hindrance to programming AVRs in "normal" C is access to data in flash - that is slightly messy (though not nearly as bad as for PICs). Other than that, you get a solid CPU, albeit 8-bit, and a top-quality modern compiler and well-written library.
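To illustrate the "slightly messy" part, a minimal sketch using avr-libc's pgmspace API (the variable and function names are mine):

#include <avr/pgmspace.h>

static const char greeting[] PROGMEM = "hello";

char first_char(void)
{
    /* Flash is a separate address space on the AVR's Harvard core, so
       it cannot be dereferenced like RAM; pgm_read_byte() emits an
       LPM instruction to fetch the byte. */
    return pgm_read_byte(&greeting[0]);
}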

The compiler used by Arduino is the AVR port of gcc - this is the standard compiler for the AVR, and Atmel also distributes it as part of their tools. Although the compiler itself is under the GPL, it places no restrictions on your own code - nor does the standard AVR library (avrlibc).

The Arduino is a different matter - it is a hardware/software project aimed at prototyping and hobby use, with an unpleasant and unclear mix of licenses. So when you are making software with Arduino and its libraries, I am not convinced that there is /any/ license you can legally use to distribute your software if you use all the parts of the library - but there are certainly parts that require GPL or at least require that you release your source code or linkable object files.

But if you stick to more normal embedded development styles, then the AVR is absolutely fine. And once you have moved to gcc, you will not want to go back to SDCC (SDCC is a fine piece of work, but as a compiler it is not in the same league as gcc).

However, if you are moving away from PIC's it does not really make sense to take a small step to AVRs - take a bigger step to Cortex-M devices. Much as I like variety and competition, the Cortex's are now dominant and there are fewer and fewer reasons - other than legacy - to choose anything else for a microcontroller.

Reply to
David Brown


Yes, the Harvard architecture and accessing strings (or arrays of strings) in flash is "quirky", but it's a good, cheap base for breadboarding, proof-of-principle work, or a few-off in-house hardware/system test suite.

Personally, I would only use the ATMEGA (Arduino Mega) myself.

Good as a quick-and-dirty starting point, and their ARM offerings for hardware platforms (the Due etc.) are quite good too, though they have some quirks.

At least for proof of concept or small in-house runs they are usable, and with the Due and the like you get the same IDE, with the gcc compiler, for the M3 part.

Currently I've got an Arduino Due setup (Cortex M3) for some DMX (theatre lighting), using Arduino as proof of concept, with an 18-year-old student doing it as a project for his A level in computing. The advantage of the IDE and the Arduino libraries is that he actually spends more time at the higher levels rather than building UART functions and some of the other complexities that take longer - for a project he is doing alongside other subjects.

Don't want him lost in working out interrupts for the M3 from scratch.

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk 
    PC Services 
  Raspberry Pi Add-ons 
 Timing Diagram Font 
 For those web sites you hate
Reply to
Paul

The AVR is - IMHO - a thing of the past. We used to use them a good deal in my company, but there are /very/ few situations where you cannot get a Cortex device that is cheaper, smaller, lower power, and has more features and flexibility than an AVR. We choose them only for legacy uses (upgrades of old designs) or occasional very specialist uses. The same goes for other old favourites such as the msp430. It is a dead-end architecture - don't expect to see Atmel put much more effort into new AVR devices (and the AVR32 line is already dead).

There are hundreds of small Cortex-M boards available - including genuine Arduinos, if you like that platform, and boards that are compatible with Arduino add-ons. So why settle for "less quirky than a PIC" when you can get a nice, clean 32-bit core with uniform memory access?

(Note - all microcontrollers, and all tools, have their quirks - here I can only talk about those of the core. And the Cortex-M cores are about as non-quirky as it is possible to get.)

Arduino ARM boards might be a good choice for quick prototypes. But Arduino is a poor base for most commercial projects - with the questionable licensing being one of the reasons. As long as that is not a problem, and you work in a development environment where the boss understands the difference between a prototype and version one of the product, that's fine. (And obviously it's fine for hobby, educational and experimental purposes.)

Reply to
David Brown

With a PIC I just copy the code from the datasheet. I'm sure it's the same with the ATMEGA. The obstacle I ran into was that I could find no info in the datasheets about in-circuit programming (besides bootloaders). With the PIC I just had to run a few wires to a connector and add R's to the other stuff those pins were connected to, then a PICkit 2 or 3 can be connected to reflash the PIC. Maybe I can do the same with the ATMEGA, but all I saw was serial bootloaders and ATMEGAs with the Arduino bootloader pre-flashed, with no clear license - even if I did use the plain gcc compiler.

So I was faced with the possibility of having to write my own bootloader, and I just didn't have time to do that, or to hunt down something I could use. I'm sure there is something that would have worked fine, but it came down to either getting new tools and figuring out a bunch of new stuff, with roadblocks due to the hobby nature of the part, or porting it to a PIC18F2525 which I already knew would work just fine.

I probably spent about the same amount of time (a few days) porting to the PIC as I would have spent porting to generic AVR gcc - in both cases replacing the Arduino-specific libs/code with generic code - but most of it was writing code that emulated the Parallax LCD's positioning codes. The majority of the app remained the same.

The difference was that with the AVR route I had unknowns that could have taken an unknown amount of time to solve (managers hate that!), whereas with a PIC there was practically no time spent on hardware design and programming hardware (I already had it). I did have to learn to use SDCC, but that was easy - where I wasted time was trying to make MC's compilers work. Actually HiTech C did work, but the working version of the compiler was no longer available, so I moved it to SDCC to future-proof - almost identical code.

The ATMEGA328 doesn't impress me much, but the larger '644 and '1284 chips sure look nice. The main thing that worries me is EEPROM endurance, rated 100K cycles whereas the PICs are rated 1M. For most apps this won't make any difference, but for some it can mean 10x longer lifetime or 10x more reliability, depending on how you look at it. One really nice thing about AVRs is they're 1 cycle per instruction; PICs are 4, so a 20 MHz ATMEGA (20 MIPS vs. 8 MIPS) is probably 2-3 times faster than a 32 MHz PIC.

If I run into an app that needs that kind of power, the ATMEGA is definitely in the running. I know the PIC18F2525 and similar but not the larger parts, so if it comes down to figuring out a new chip it really makes no difference. But that's IF... most of the embedded stuff I do is quite simple - ranging from dumb (but vital) 8-pin apps that control power on/off and blink an LED if the voltage gets low, to 28-pin apps for testers that have to measure ohms and drive signal generators, LCDs and stuff like that.

Standard GCC exception. BTW SDCC is a port of GCC and has the same license; the only restriction is from the MC headers, which stipulate that the code has to be used on genuine MC parts... uh, no prob.

The main thing the Arduino had going for it was that I was able to get one from Radio Shack (oh, I miss them) along with a breadboard and a pile of parts, and have a working proof-of-concept in days. It looked like hell, but when our client plugged in their headset and it correctly identified the speaker ohms and mic type and simulated an aircraft intercom, they said go for it - exactly what they needed. But they don't care one bit about CPUs, licenses and all that; they want a product they can sell. Now. It would have been nice if I could have simply packaged what was on the breadboard, but the real world isn't so kind.

Elaborate? What can I do with gcc that I can't do with SDCC? Is it just better libs, or the core language itself?

Agreed. At least for anything needing more computing power than the simple bit-bang stuff I usually do. And in some cases that too - there are 8-pin versions for less than $1. But the 5V ones I see are lacking in EEPROM - up to 256 bytes. Not that that's a huge deal; external EEPROMs are cheap. Gotta have 5V though, that's what all the stuff I'm controlling and reading runs at - sure, I can use level shifters etc., but not if I don't have to.

Doesn't mean I'll be ditching PICs any time soon... I d/l'd a bunch of M0 docs to study; they do look nice, and perhaps one day I'll figure out what to do with them... vastly more complicated, which means many more things getting in the way at first. But they can potentially do more.

The point of all this is that it really doesn't matter what uC is used so long as it meets the needs of the app. If starting fresh with uCs then yeah, PICs can be a pain. But I do think the pain is exaggerated to a degree, and what someone is used to counts more than philosophical preferences. Time to market means a lot; I can code a PIC app in a day or three and know it's gonna work, and so long as MC doesn't fold (no evidence of that) I know supply will be there. Other parts... ???? Definitely worth learning, but it's gonna be a while before I can do the same stuff with them that I can do now with a PIC. There's more to it than just "this part is better than that".

Terry

Reply to
Terry Newton

AVR's can be programmed by JTAG, or using SPI - they work in much the same way as an SPI flash chip, with a little extra toggling of reset and another pin to put the chip in SPI programming mode. It's well documented (I've made my own programmers on occasion, using FTDI USB devices), and there are open source programmers - though most people use Atmel's own tools. It is also not hard to write a bootloader - you have easy access to writing to the flash from within a boot program. But I agree that it takes time. There are probably boot loaders available from different sources (maybe even from Atmel) with clearer and more usable licenses than the Arduino stuff.

The AVR is a good deal faster than that, relative to the PIC - not only are most instructions single-cycle or two cycle, but they do a good deal more useful things than PIC instructions. Pretty much everything on a PIC needs to pass through W, while on the AVR you have 32 registers to play with - that makes the AVR approximately twice as efficient. And if you need to use pointers, or data on a stack, you are in a different world - the PIC takes ten times the instructions that the AVR needs.

SDCC is a totally separate project from GCC, and is in no way a port of it. AFAIK it uses the same sort of license (the compiler itself is under the GPL, but all the library bits and pieces have exceptions to allow you to use the compiler with code of any licence - the GPL does not affect the source code you write or the object code you compile and link).

It is the Arduino libraries that have licensing complications.

SDCC supports much of C90, and a few bits of C99 - keeping up with modern C language support and features is not a priority of SDCC. gcc, on the other hand, aims for full support - and while the latest C11 doesn't have much extra of use for small embedded systems (though it has a few useful points), C99 is a significantly nicer language to work with. And with gcc you also have the option of C++ - apart from exceptions and RTTI, which are not implemented as yet in the avr C++ library, you get C++ all the way to C++14 on the latest avr-gcc.
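To give a flavour of what that means in practice, a small sketch (my own example, not from the thread) of two C99 conveniences - designated initializers and loop-scoped declarations:

#include <stdint.h>

typedef struct { uint8_t pin; uint8_t mode; } pin_config_t;

static const pin_config_t led = { .pin = 5, .mode = 1 };  /* C99 designated initializer */

uint16_t sum(const uint8_t *data, uint16_t n)
{
    uint16_t total = 0;
    for (uint16_t i = 0; i < n; i++) {  /* C99 loop-scoped declaration */
        total += data[i];
    }
    return total;
}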

For the AVR, gcc has support for multiple address spaces to handle data in flash, as well as a range of extension "attributes" for making it easy to write interrupt functions and other specialist code in normal C.
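For instance (a sketch - the __flash address space needs avr-gcc 4.7 or later in C mode, and the vector name depends on the device):

#include <stdint.h>
#include <avr/interrupt.h>

/* Named address space: the table is placed in, and read from, flash. */
const __flash uint8_t table[] = { 1, 2, 3 };

/* avr-libc's ISR() macro applies the right interrupt attributes. */
ISR(TIMER0_OVF_vect)
{
    /* handle the timer 0 overflow here */
}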

gcc is a highly optimising compiler. While speed is not everything, it means you can wave a fond goodbye to the sort of nonsense of manual optimisation needed with weaker compilers and weaker processors, in order to generate efficient code. Never again will you have to convert for loops to while loops to save cycles, or use pointers instead of arrays, or use shifts when you want to multiply.

Perhaps one of the biggest "selling" points of gcc is its extensive static error checking and warnings. It can catch a great many potential errors at compile time, if you give it the right options.

gcc has the best inline assembly system of any compiler I have used. It is not the easiest to use (far from it!), but it is very powerful and lets you fully integrate assembly and C code. It also has a variety of builtin intrinsic functions to improve critical code.
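A minimal sketch of that extended inline assembly, AVR-flavoured (the function is my own example):

#include <stdint.h>

static inline uint8_t swap_nibbles(uint8_t x)
{
    /* "+r": x is both input and output, in any general register;
       the AVR SWAP instruction exchanges the high and low nibbles. */
    __asm__ ("swap %0" : "+r" (x));
    return x;
}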

The cheapest Cortex device I know of is $0.40, with 16 pins IIRC. The smallest is about $0.70, has 20 pins (balls), and is 2 x 1.6 mm - perhaps not targeted at the hobby and prototype market!

There are very few modern microcontrollers at 5V now.

I fully agree there.

Reply to
David Brown

My understanding is SDCC is a completely different compiler that's much more primitive than GCC.

GCC is a far more serious optimizing compiler, and it wouldn't surprise me if the code it generates can be as much as 2x faster, smaller, etc. than SDCC's. GCC also implements the current ISO standard very completely, and has various language extensions, debugging features, etc. And it can do C++ if you want that (or, for that matter, Ada). On the other hand, some people complain about its optimizations being too aggressive, pushing the hairy edge of the standard: it can generate code that's correct under the ISO spec but different from what you expected, so your code can fail if you're not up on ISO's nuances. You can turn off some of those optimizations with command-line options.
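A classic example of the sort of surprise he means (my sketch, not from the thread): type-punning through a pointer cast breaks the ISO "strict aliasing" rules, which gcc's optimiser is entitled to exploit; -fno-strict-aliasing disables that optimisation.

#include <stdint.h>
#include <string.h>

float bits_to_float_bad(uint32_t u)
{
    return *(float *)&u;        /* undefined behaviour under ISO rules */
}

float bits_to_float_ok(uint32_t u)
{
    float f;
    memcpy(&f, &u, sizeof f);   /* well-defined; gcc optimises it to a move */
    return f;
}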

The TI MSP430FRxxxx series seem interesting to me, since you were also concerned about write wear in the EEPROM. The 430FR's have on-chip FRAM that is basically non-volatile ram, good for many billions of write cycles. At the moment there is no ARM part with that.

I have about the same impression (ARM vs AVR): the ARM is a lot more complicated to deal with at the lowest level, and people have also said the hardware interfacing is harder or has fewer amenities (like 5V). But I don't have experience with this and am mostly a software guy.
Reply to
Paul Rubin
