Delay Routine: Fully-portable C89 if possible

That, presumably, would be why your application-specific integer types would often boil down to unsigned char or signed char.

That's why int exists. Its minimum size requirements cause an issue with 8-bit micros in that regard, but otherwise it will be the native integer type (and thus the most efficient). Any non-8-bit platform for which that's not true? In particular, any 16-bit platform that uses a 32-bit int? It's forbidden by what I understand to be the spirit of the language.

The int(size) definitions, however, select a type w/o regard for its efficiency.

The fast and least types do provide some advantages here, but I don't think I've ever seen them 'in the wild'.

The int(size) macros are, to coin a phrase, butt barnacles on the language.

In any case, none of the stdint types should IMO be used directly, but rather as the basis of application-specific types where appropriate. The application-specific types then give you additional type checking (beyond size).

At that point you are so specific that using the appropriate underlying type is no more difficult. There is a minor documentation advantage, but that's it. On a number of interfaces even that is misleading. There are interfaces where a 32-bit number must be broken into parts for writing, and others where an 8-bit number must be written as a 32-bit one, so the size of the I/O and the size of the value written (or read) may be somewhat independent.

But size only covers a small portion of that issue. You also have to deal with endianness and alignment at least. Size is actually the least troublesome IME.

Well if you are doing logic it's one bit by definition ;)

Bitwise operations are another question, but width is only an issue when going to the hardware. There's some modulo arithmetic that's used for checksums etc., but that's a small portion of the code I've dealt with. Modulo arithmetic on buffers is best done in other ways.

Now a standard way to introduce MAC operations and saturating arithmetic might be interesting, but I doubt there's enough experience with them to justify burdening the language with them.

This one rather left me open mouthed with astonishment.

Robert

--
Posted via a free Usenet account from http://www.teranews.com
Reply to
Robert Adsett

David:

Use fully-portable code to pick the most appropriately sized type provided by the implementation.

Again, fully portable

I dunno how important this is. I'm currently working with a microcontroller that has 6-bit ports but which has 8-bit registers.

Fair enough, it would be a hell of a lot more efficient if you were using an implementation that provided the exact types you're looking for, but it's still not impossible to do it portably.

Again simple to do with portable code.

Not quite sure how this applies here.

Martin

Reply to
Martin Wells

The first example is correct - you are using a specific-sized integer type, which will wrap as expected. It is not entirely portable (cpus without 8-bit support will not have a uint8_t, and it's conceivable that a cpu could be built with 8-bit support but not modulo arithmetic).

The second example uses a nonsensical type "unsigned char" (it may work like a uint8_t, but the name is silly), and requires an extra expression when index is used, thus complicating the code and adding scope for programmer errors. It is also not the case that the compiler *will* optimise to the same code - it perhaps *should* do so, but not every compiler is as good as it ought to be, and sometimes optimisations are disabled or limited (such as for some types of debugging, or examining generated assembly code).

So while the second example might be more useful for those writing code that must compile cleanly on an AVR and a PDP-6, back in the real world of embedded development, the first version is greatly preferred.

Reply to
David Brown

These macros of yours just roll off the tongue, don't they? The very existence of the typos (such as the "& 1" instead of "| 1" near the end) shows how ugly and error-prone they are. Good code is clearly understandable and easily readable, and does not contain unnecessary code (either source code, or instructions generated which the compiler must then optimise away).

Thus the correct way to get the lowest 32 bits of x, assuming x is not known to be 32-bit or less, is simply (x & 0xffffffff).

And if you need 2^32 as a constant (which you don't for 32-bit modulo addition and subtraction), the most practical way is to write

0x100000000ull, or perhaps (1ull << 32).
Reply to
David Brown

I don't know what you mean by this. If I want the smallest type that holds 16-bit signed integers, I use an int16_t - that's portable, and gives me exactly what I want.

And again, I don't know what you mean here. Getting the smallest and fastest code is *not* fully portable, because the types are different. If I want a datatype that can hold at least 8 bits, but is as fast as possible, it should be 8-bit on an AVR and 32-bit on a ColdFire. There is no way I'd want my AVR code cluttered up with junk to improve the speed on a ColdFire - I want my code to be readable, and would rather write two clear and optimal versions than one illegible and sub-optimal "portable" version. In a case like this, if portability is of real concern, then using "int_fast8_t" would be the answer.

Your 6-bit ports are probably accessible as 8-bit hardware registers. But trust me, getting the access size on hardware registers right is often (though not always) important.

But why on earth would you want to jump through hoops to get this mythical "portability" instead of just using size-specific types in the first place?

But clearer, simpler, faster to write, faster to debug, and perhaps smaller and faster to execute if you know the size of the types you are using.

Look at the code:

typedef struct {
    int x;
    short int sx;
} data_s;

data_s array[200];

int foo(int i)
{
    return array[i].x;
}

If "int" is 32 bits, and "short int" is 16 bits, the structure is 6 bytes long. This has two effects - the "foo" function has to multiply its index by 6, and the int elements will be misaligned. On many cpus, a multiply by 8 is faster than a multiply by 6, and on a cpu with a 32-bit bus, an aligned 32-bit access is normally significantly faster than an unaligned access. Thus the array could be faster (though larger) with a 16-bit padding element in the struct. Since I use types of known size in such declarations, I can make that sort of decision.
Reply to
David Brown

The thing is, this was a contrived example, like all the others I can think of, unfortunately. In *practice*, I would not code a circular buffer like this (modulo-N arithmetic). Even if I did, it is unlikely to be exactly 256 bytes long, so I would have to use the explicitly masked version anyway.

[...]
--

John Devereux
Reply to
John Devereux

I would use "short" for this.

What I find is that I might write code for, say, an AVR and perhaps use "short" to reduce the code space, run time and ram space. Then, I might want to use the code in an ARM version. It runs just fine, since I used portable code. Now it might be that if I used int_fast16_t, then the code could run microscopically more efficiently on the ARM. But it probably would make no difference. (And because the ARM is so much faster generally, it would not be noticeable).

I am not usually worried about portability for this, since such code is inherently portable. I can use my knowledge that e.g. unsigned chars are 8 bits and unsigned longs are 32 bits in this case.

I find if I want to be portable, I have to manually pack/unpack data anyway due to endianness issues. So the fixed length types don't buy me anything.

Maybe - but I can't think of a real-life example.

You mean padding the struct with extra elements, or adjusting the type of existing ones, to bring the total struct size to a power-of-two number of bytes? I have never thought of doing this, but it sounds very unportable. I suppose it could avoid doing a multiply on lookup.

--

John Devereux
Reply to
John Devereux

I agree that extra features should not cause problems with standard code. Just remember that these features were added for a reason - if they let you write better, or smaller, or faster, or clearer code, then use them if they provide more benefit than portability would (that's one reason I like gcc - I can use the same extra features on many targets).

You write the redundant information if it is useful. Sometimes (though not often) it is useful to include the plus sign on positive numbers. Some people think it is useful to use "auto". I am very much of the opinion that writing "int" in all cases is important, even if it is technically redundant.

The English you write here is full of redundant words and letters that are not necessary to communicate your message. Why not try writing in SMS-speak, which has much less redundancy, and see what people here think about your post's legibility? Redundancy is a *good* thing, when it adds to what you write, but a bad thing if it merely wastes space and distracts from the important parts of the code.

You can contest all you want. At best, you can argue that it is *possible* to write embedded code (portable or otherwise) without having specific sized types, but there is no doubt that people write better code by taking advantage of these types. You are clearly new to embedded development, at least on small micros (judging from your original post in particular) - those of us who have been developing on a wide range of cpus for years understand the benefits of size-specific types. That's why <stdint.h> was introduced, that's why the number 1 hit on lists of shortcomings of C is its inconsistent type sizes, and that's why embedded programmers always like to know the exact size of the int types on their target. Sure, it's possible to get away without it - just as it's possible to do embedded programming without C - but why *not* use size-specific types to your advantage?

You'll find that these are logically 8-bit registers, with only 6 bits implemented.

Reply to
David Brown

Well, perhaps that was a "straw man" argument - I am having trouble thinking of real examples that would actually *use* "modulo-N arithmetic based on type sizes".

--

John Devereux
Reply to
John Devereux

How exactly is that monster macro supposed to be used, and what result does it give?

Reply to
David Brown

Err..... "Highly Commended" in the Obfuscated C (Macro section) competition? :-)

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
/\/\/ chris@phaedsys.org      www.phaedsys.org \/\/\
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris Hills

Fair enough. I thought the point of Grant's post was that sometimes you would specifically want modulo-N arithmetic based on type sizes without needing extra bit masks.

mvh.,

David

Reply to
David Brown

But a "short int" may be bigger than 16 bits. Some 8-bit compilers may also implement "short int" as 8-bit - I know it's non-conforming, but it's far more likely that an embedded programmer will meet a non-standard compiler with 8-bit shorts than a standard compiler with 36-bit ints.

The speed difference will, of course, vary depending on the code in question, and the cpus in question - but some 32-bit cpus work twice as fast on 32-bit data as they do on smaller data, and 8-bit cpus clearly work much faster on the smaller data. As you say, one solution is to use int_fast16_t and similar types - you specify precisely the characteristics of the type that you are interested in.

Here you are implicitly using the size specifications of the types. I prefer to be explicit about such things.

Sometimes endianness is an issue. I'd love C to have types with specified endianness as well as size.

extern void sendChar(char c);

void sendHexNibble(uint8_t n)
{
    static const char hexChars[] = "0123456789abcdef";
    sendChar(hexChars[n]);
}

void sendHexByte(uint8_t n)
{
    sendHexNibble(n >> 4);
    sendHexNibble(n & 0xf);
}

If the uint8_t used by sendHexByte had extra non-zero bits, you would end up reading outside the hexChars[] array.

It's perfectly portable - as long as you use size-specific types. And yes, it avoids multiplies on lookups - multiplies can be costly on small cpus. Even on big cpus, avoiding multiplies can be beneficial - sometimes multiplies have longer latencies than shifts, and many have addressing modes that can include shifts in a single instruction. The other advantage is that you can be sure that accesses are aligned. The compiler should pad the struct to ensure that accesses are possible, but not necessarily optimal - if you know the exact sizes, you can add padding yourself to choose a balance between space and speed.

Reply to
David Brown

That's a good point (although "static" as a word makes sense when applied to a local variable). But when there is a choice, I prefer to make sense - "unsigned int" makes more sense than plain "unsigned". If "static" functions could alternatively be written as "private" functions, I'd use "private" for improved readability.

Reply to
David Brown

On this point we agree entirely! If you need to know whether your compiler's plain chars are signed or unsigned, you're probably doing something wrong.

mvh.,

David

Reply to
David Brown

[...]

If a "short" is bigger than 16 bits, surely the int_least16_t will be too? They both have the same rationale AFAIK.

[...]

I confess, I would probably just go ahead with the exact same code but using unsigned char as a synonym for uint8_t. They are always the same on the machines I am familiar with. (And if they were not, there would probably not be a uint8_t at all!)

Aren't compilers free to pad structures how they like?

[...]
--

John Devereux
Reply to
John Devereux

David:

No, it just proves that I was smart to write "(Unchecked, likely to contain an error or two)". My checked-over code is a different animal altogether.

Yes, good code is clearly understandable and easily readable *where possible*.

Yes, that's a hard-coded way of doing it.

Martin

Reply to
Martin Wells

David:

The C language is described in such a way that "int" should be the most efficient type (or at the very least tied for first place).

If I want to store a number, I use unsigned. If the number can be negative, I use int. If the number needs more than 32 bits, I use long. If I need to conserve memory, I use char if it has enough bits, otherwise short. If I *really* need to conserve memory, I make an array of raw bytes and do bit-shifting.

There's nothing wrong with the likes of uint_least8_t, it's just that they're not portable C89. I've heard there's a fairly efficient fully-portable C89 stdint.h header file going around, so maybe that would be useful.

Again I don't see much use for them. If you want efficiency, go with int. If you wanna save memory, go with char if possible, otherwise short. If you've got big numbers, go with long.

Indeed, the highest two bits are ignored when outputting to the pins.

Martin

Reply to
Martin Wells

John:

"int_fast16_t" shouldn't be anything other than plain old int.

Martin

Reply to
Martin Wells

Yes, I think that's true. But if the target/compiler does not support a true 16-bit int, then the type "int16_t" would not exist (even though int_least16_t and int_fast16_t would).

Note that it is perfectly possible, though somewhat unlikely in this contrived example, for an embedded compiler to support 16-bit integers while having short ints bigger than 16 bits. Embedded compilers sometimes support different sized types as extensions, beyond what is available through char, short int, int, long int and long long int. It's more realistic to expect sizes such as 24-bit or 40-bit integers to be available in this way, but some compilers have 32-bit short ints (perhaps having 64-bit ints, or perhaps just because 32-bit data are much faster on the architecture in question), and may provide 16-bit integers through an extension.

I hope you don't have to try to get code working on one of these horrible 16-bit DSPs that can't access 8-bit data directly - an unsigned char is 16 bits on such architectures. I'd rather have the compiler reject the code because there is no uint8_t implemented, than generate incorrect code because I'd assumed a char is 8 bits.

Yes, and they are allowed to generate all sorts of code as long as the visible results are the same. The padding will (normally!) follow specific rules according to alignment rules on the target, but it's certainly advisable to check that the padding is as you expect. I generally have the "-Wpadded" flag on my gcc compiles, so that I get a warning if my structs are unexpectedly padded.

Reply to
David Brown
