Delay Routine: Fully-portable C89 if possible

Tends not to be true for smaller micros.

I tend to use "int" for all "numbers" (integers), unless

- I need the specific features of unsigned arithmetic (overflow behaviour)
- I am interested in the bit pattern, rather than the numeric value
- I absolutely need the extra 1 bit of range and don't want to use "long"
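
For illustration, the first two cases look something like this (untested sketch, function names made up):

/* Case 1: unsigned overflow wraps modulo UINT_MAX+1 - well defined */
unsigned int next_seq(unsigned int seq)
{
    return seq + 1u;
}

/* Case 2: the bit pattern matters, not the numeric value */
unsigned int low_nibble(unsigned int v)
{
    return v & 0x0Fu;
}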

16 bits! You are really undermining your argument here... :)
[...]
--

John Devereux
Reply to
John Devereux

First off, that's not correct (consider the original Motorola 68000 - it was a 32-bit processor, so "int" has a natural size of 32 bits on that architecture, and yet it processed 16-bit data faster). Secondly, even if it were correct, there is still plenty of use for "int_fast16_t" as it says exactly what you want it to do, and is consistent with other types such as "int_fast8_t" which will *not* be the same as "int" on many architectures.
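
For example (a sketch only - the widths actually chosen are up to each implementation):

#include <stdint.h>

/* int_fast16_t says "at least 16 bits, whatever is fastest here":
   typically 32 bits on a 68000-class part, 16 bits on an 8- or
   16-bit micro */
int_fast16_t count_nonzero(const int16_t *p, int_fast16_t n)
{
    int_fast16_t count = 0;
    int_fast16_t i;
    for (i = 0; i < n; i++)
        if (p[i] != 0)
            count++;
    return count;
}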

Reply to
David Brown

That's only the case for some architectures - in particular, for 8-bit micros, it is far from true.

When writing code for small embedded systems, you are often interested in getting the code to do exactly what you ask - no more, and no less. You care at a level of detail unfamiliar to those used to programming on big systems - things like exact type sizes are important. For bigger systems, the tradeoffs for development are different - you don't have to worry so much about the minor details, and can afford to be sloppy about implementation efficiency in the name of developer efficiency (for example, you might use higher-level interpreted languages). For small systems, you are looking for something with a similar level of control to assembly programming, but faster development. Thus you should be aware of things like type sizes, library implementations, and the strengths and weaknesses of your target cpu. Development for larger embedded systems falls somewhere in between these two.

Reply to
David Brown

... snip ...

long long is a C99 feature. Most C compilers today do not implement that. However, they do implement C89/C90/C95, which does provide long and unsigned long. These, in turn, are guaranteed to provide at least 32 bits. So change the 'ull' above to 'ul' and things will probably work, provided the compiler is C89 or later compliant.
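
For instance, a minimal C89-portable busy-wait along those lines (a sketch only - the count that gives a particular delay is of course CPU and compiler dependent):

void delay(void)
{
    volatile unsigned long i;   /* volatile so the empty loop survives optimisation */
    for (i = 0ul; i < 1000000ul; i++)
        ;
}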

--
 Chuck F (cbfalconer at maineline dot net)
   Available for consulting/temporary embedded and systems.
Reply to
CBFalconer

... snip ...

There is no such thing as a 'sint16_t', except as a non-standard extension in some system or other. The following extracts are from N869.

7.8 Format conversion of integer types
[#1] The header <inttypes.h> includes the header <stdint.h> and extends it with additional facilities provided by hosted implementations.
[#2] It declares four functions for converting numeric character strings to greatest-width integers and, for each type declared in <stdint.h>, it defines corresponding macros for conversion specifiers for use with the formatted input/output functions.170)

Forward references: integer types (7.18).

and

7.18.1.1 Exact-width integer types
[#1] The typedef name intN_t designates a signed integer type with width N. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
[#2] The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.
[#3] These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, it shall define the corresponding typedef names.

Notice the above paragraph 3, stating these types are _optional_. Also note that all this is not present in the C90 (or C95) standard.

However the thing that is universally known, and not optional, is that an int (and a short int) has a minimum size of 16 bits, and that a long has a minimum size of 32 bits. If a compiler fails to meet these values it is non-compliant, and does not deserve to be called a C compiler.
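
One classic C89 trick to make the build fail if those minimums are not met (typedef names made up):

#include <limits.h>

/* the array size is -1 (a constraint violation) on a non-conforming
   compiler, so compilation stops right there */
typedef char int_has_16_bits[(INT_MAX >= 32767) ? 1 : -1];
typedef char long_has_32_bits[(LONG_MAX >= 2147483647L) ? 1 : -1];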

--
 Chuck F (cbfalconer at maineline dot net)
   Available for consulting/temporary embedded and systems.
Reply to
CBFalconer

... snip ...

No such guarantee exists. You need to check the various values in limits.h. If the C99 long long type exists it does guarantee 64 bits.

In general you use unsigned to avoid undefined behaviour (or implementation defined) on overflow. Remember you can always cast an int to unsigned int, but you cannot reliably do the reverse with a cast.
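
In code (sketch, hypothetical names):

/* int -> unsigned int: fully defined, negatives wrap modulo UINT_MAX+1 */
unsigned int as_unsigned(int x)
{
    return (unsigned int)x;
}

/* unsigned int -> int: implementation-defined once the value exceeds
   INT_MAX, so this is only reliable for x <= INT_MAX */
int as_int(unsigned int x)
{
    return (int)x;
}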

I suggest you read the standard. A useful text version of N869 exists, bzip2 compressed, at:

You can also get a completely up to date free draft at:

but that is in PDF format, and not nearly as useful.

The point of reading the standard is that you will then know where (if anywhere) your particular compiler and library system fails to meet it, and thus where to take especial care. You can also choose to use fully portable code as far as possible, with much less fuss over future porting.

--
 Chuck F (cbfalconer at maineline dot net)
   Available for consulting/temporary embedded and systems.
Reply to
CBFalconer

That may no longer be the case once you come across your first 128-bit CPU. short at 32 bits, int at 64, and long int at 128 might just be the right choice --- with int_least16_t still having 16 bits.

There are more things here than meet the eye. One of them is that the classical short/int/long arrangement runs out of steam sooner or later, as CPUs keep getting wider. We already got "long long" because of this.

... and doing so, you would be introducing the exact mistake that uint8_t was invented to avoid. uint8_t is exactly 8 bits, or it doesn't exist. unsigned char is _not_ synonymous to that --- there's no particular reason it couldn't be 11 bits wide.
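
A build-time guard makes that assumption explicit (sketch):

#include <limits.h>

/* unsigned char is CHAR_BIT wide - usually 8, but e.g. 16 on some DSPs */
#if CHAR_BIT != 8
#error "this module assumes 8-bit chars"
#endif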

So you'd rather bet your code's future on would-be probabilities and what you happen to be familiar with today, than get it right once and for all.

Reply to
Hans-Bernhard Bröker

I know... but are there actually any current embedded processors that have uint8_t, but do not have 8 bit chars?

It is far more likely that I will want to use a routine on a system without uint8_t, than one without 8 bit chars. YMMV.

--

John Devereux
Reply to
John Devereux

David:

Sounds exactly like a 16-Bit CPU to me.

I have a 32-Bit CPU that can do 64-Bit arithmetic at a slower rate... should I start calling it a 64-Bit CPU?

Martin

Reply to
Martin Wells

David:

Are you talking about machines that can do 8-Bit arithmetic faster than 16-Bit arithmetic... ? I hadn't considered that, on these particular systems, char will be faster than int. I think I'll bring this topic up in comp.lang.c.

Martin

Reply to
Martin Wells

No, it was a 32-bit cpu that had a 16-bit ALU. All the registers were 32-bit, all instructions could be done as 8-bit, 16-bit or 32-bit, and its internal address space was 32-bit. It used the same 32-bit ISA as modern ColdFire devices. The only limitations were that the ALU was 16-bit (to save space and money), so 32-bit ALU operations took twice as long as 16-bit operations, and a 16-bit external databus. There are modern derivatives such as the 68332 that have 16-bit external databuses (but a 32-bit ALU), and are thus also faster at working with 16-bit data if it is external to the CPU. The 68k architecture is nonetheless a full 32-bit architecture, and "ints" are 32 bits.

No, you should consider the cpu's "width" to be that of its internal general purpose integer registers and datapaths. If it makes it easier for you, the width of the cpu is the width of its "int".

Reply to
David Brown

Of course an 8-bit "char" will be faster than a 16-bit "int" on an 8-bit architecture! At the very least, 16-bit data takes twice as much code and twice as much time for basic arithmetic. The C language's rule of "promote everything to int" is a royal PITA for users and implementers of compilers for 8-bit micros, and requires a lot of optimisation to produce good code. For many small micros, the fastest type is specifically an "unsigned char" - some operations, such as compares, are faster done unsigned if the architecture does not have an overflow flag.
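
For example (untested sketch), this sort of loop typically compiles to a single-register decrement on an 8-bit part, where an int counter would need a 16-bit register pair:

void short_delay(unsigned char n)
{
    volatile unsigned char i = n;   /* volatile so the empty loop is kept */
    while (i--)
        ;
}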

What microcontrollers have you worked with (or are planning to work with), and what compilers? And have you tried looking at the generated assembly for the code you compile?

Reply to
David Brown

It was a 32-bit CPU that came packaged with bus interfaces of various widths. Using a type that was larger than the bus width was slower than using a type that was less than or equal to the bus width.

The 68K was a 32-bit CPU.

--
Grant Edwards                   grante             Yow! I'm meditating on
                                  at               the FORMALDEHYDE and the
                               visi.com            ASBESTOS leaking into my
                                                   PERSONAL SPACE!!
Reply to
Grant Edwards

I'm afraid you're still not getting it. You bet on assumptions about current processors, when the primary purpose of uint8_t is that it removes both the need to assume anything and the restriction to currently existing hardware.

Fixing a missing uint8_t in that case is a one-liner (fix or create a <stdint.h> for that system). Fixing the code is an open-ended job.

Reply to
Hans-Bernhard Bröker

uint8_t is not the same as an 8-bit char. Char may be implemented as signed or unsigned, and compilers often have switches to change the default. With C99, uint8_t is always 8 bits, unsigned, independent of implementation or compiler switches. (Thanks, Misra/C99, for size-specific types.)
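
A two-line illustration (values chosen arbitrarily):

#include <stdint.h>

char    c = (char)0xFF;     /* -1 if char is signed, 255 if unsigned */
uint8_t u = 0xFF;           /* always exactly 255, whatever the switches */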

w..

Reply to
Walter Banks

So that's a "no" then? :)

It still assumes the existence of an 8-bit type, which would likely not exist on a machine where "unsigned char" was not 8 bits. I.e., if I am worried about portability to that extent, I should use a mask instead (as discussed upthread).
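
I.e. something like this sketch (macro name made up), which stays in unsigned int and needs no exact-width type:

#define MASK8 0xFFu

unsigned int add_mod256(unsigned int a, unsigned int b)
{
    return (a + b) & MASK8;     /* confine the result to the low 8 bits */
}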

--

John Devereux
Reply to
John Devereux

I'm curious about how a storage type smaller than char could exist without knock-on effects.

For instance, what would sizeof(uint8) return when a char contained more than 8 bits? After all, by definition, sizeof(char) == 1.

--
Paul
Reply to
Paul Black

Likeliness is irrelevant. Either uint8_t exists, or it doesn't. Period.

A routine should use uint8_t if, and only if, the algorithm needs those variables to be exactly 8 bits wide and unsigned. So if uint8_t isn't available, the code won't compile --- but that's a good thing, since it wouldn't work even if compiled. If it is available, the code will compile and work exactly as designed.
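
A made-up example of such a routine - a checksum whose definition requires wraparound modulo 256, so uint8_t is exactly right:

#include <stdint.h>

uint8_t checksum8(const uint8_t *buf, unsigned long len)
{
    uint8_t sum = 0;
    while (len--)
        sum = (uint8_t)(sum + *buf++);  /* must wrap mod 256 */
    return sum;
}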

Reply to
Hans-Bernhard Bröker

I should imagine that if a compiler supports a "uint4_t", for example, then sizeof(uint4_t) will give an error, just as applying sizeof to a bitfield does.

Storage types smaller than a "char" (other than bit-fields) are not required or specified by the standards. Thus any such feature is an extra, and the compiler writer can quite reasonably have restrictions such as not allowing sizeof, or making it impossible to take the address of such a type.
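
Bit-fields illustrate the same restriction in standard C (sketch):

struct flags {
    unsigned int ready : 1;
    unsigned int fault : 1;
};
/* sizeof(struct flags) is fine; sizeof applied to the .ready member is a
   constraint violation, just as suggested above for a uint4_t. */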

Reply to
David Brown

2^32 requires 33 bits (it's a one with 32 zeros), which is why I specifically wrote ull. The only way a compiler could support 2^32 without supporting long long ints is if it is a 64-bit compiler with 64-bit long ints. There are very few 64-bit cpus in the embedded arena - MIPS and perhaps PPC are the only ones I know of, excluding amd64 cpus, and any practical compiler for these devices will support long long ints.
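
Spelled out (sketch):

/* 2^32 does not fit in a 32-bit unsigned long; writing it out needs a
   type of at least 33 bits, i.e. long long in practice: */
unsigned long long two_pow_32 = 1ull << 32;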

Reply to
David Brown
