Are you sure? Have you checked the assembly output for this code? For non-volatile variables at least, I expect that these inline functions are optimized to the same instructions. I don't have an ARM compiler handy, but maybe you could check if it also works with "volatile uint32_t x".
If C++ is used, an inline function with a reference parameter is also easily optimized:
Yes, I have checked - though not for a wide variety of versions, optimisation flags, etc. The inline functions will result in the same ARM code in the non-volatile case (assuming at least -O1 optimisation). But in the volatile case, the compiler correctly generates different code for the structure access and the manual bitfield extraction.
It is still a mess compared to the bitfield struct access, and it is still different in the volatile case. Yes, the compiler can optimise the ugly, unmaintainable and error-prone source code into good object code - but why not just write clear, simple source code that gets optimised into good object code?
I would expect the compiler to manage this optimisation.
I think the OP should look first and foremost at the /source/ code, and decide from that. The code generation details are up to the compiler - enable optimisation, give it clear and correct source code, and let it do its job. It is only in very rare situations that the generated code should be the guide for a decision like this.
In both C and C++, you have to embed the item in a struct - "typedef" itself does not introduce a new type. Yes, this technique can make the code safer.
But you are going round in circles: writing lots of extra source code, writing access functions with potentially error-prone masks (imagine a case where the bitfields were not so simple, or where they were changed later), duplicating information in multiple places, and finally ending up with unnecessarily verbose and ugly usage.
In the end, all you have done is duplicate a feature that C and C++ support as part of the language. It is a pointless exercise.
I haven't looked at K&R for an older definition, but C11 certainly says that the order of allocation of bit-fields within a unit is implementation-defined. There are some rules about packing, but there is plenty of scope for weird implementations if the compiler writers so fancy.
As long as his compiler follows the ARM EABI (and all serious embedded ARM compilers will do so), there should be no problem - the EABI gives more details for this sort of thing than any of the C or C++ standards.
(He could make it a little easier by dropping the bitfields - they are not necessary when the struct fields are uint8_t and the bitfields are 8 bits.)
No, endianness is not remotely "undefined behaviour" - it is /implementation/ defined behaviour. There is a huge difference.
This means that the implementation has to document exactly what endianness is used - and in the case of ARM compilers for embedded systems, this is specified in the ARM EABI to be little endian.
Being implementation-defined behaviour, this also means that you /can/ test it - sometimes that is easier than trying to find the relevant documentation! Once you have tested it, you know exactly how it works for that implementation. And while in theory it could change in later versions of the compiler, making such a change between compiler versions would be astoundingly stupid and unhelpful - especially in an embedded compiler - so it is not going to happen. (I know of only one case where this happened - and that was on MS's x86 compiler, a long time ago, where they changed the bitfield endianness.)
If endianness (or bitfield endianness) really were undefined behaviour, you could not rely on it being the same between two runs of the same compiler on the same source code - testing would be useless.
We have software that is built for both big-endian and little-endian hosts (and more interestingly, it must internally simulate both big-endian and little-endian processors).
Yes, that looks like a reasonable solution if you need to handle different endiannesses. Another possibility will come with gcc 6, which supports attributes for structs with specific endiannesses (and I know at least one compiler that has had that feature for perhaps a couple of decades).
But if your code does not have to be portable - such as if you are modelling a hardware register on a particular device - you can just use the one fixed layout.
It happened to me on an 8051 C compiler in the '90s. The endianness of bitfields changed between one compiler version and the next. I learned a lesson, and have not used bitfields in C since then.
I believe the compiler vendor is still in business today (well, they were bought by a microcontroller manufacturer).
Thanks - now I know of /two/ cases where compilers have changed bit ordering!
You are throwing out the baby with the bathwater. If you avoid using a feature just because a small-time compiler writer for a brain-dead and C-unfriendly processor made an inconvenient decision two decades ago, you would have to give up programming altogether. Bitfields have portability issues, but can be extremely useful - they are perfectly safe to use once you understand them.