Delay Routine: Fully-portable C89 if possible

I'm doing an embedded systems project which consists of taking input from the user via simple buttons, and giving output in the form of lighting LEDs. So far, I've written the program as fully-portable C89, and I intend to keep it that way as much as possible. Obviously, I'll have microcontroller-specific parts to it such as:

void SetPinHigh(unsigned pin) { /* Must call microcontroller-specific library functions or something here */ }

, but the rest of my program calls these "wrapper" functions so I can keep the bulk of it fully-portable.

Anyway, I've come to a point where I need to introduce delays, and I again want this to be fully portable. The delays will be in the region of milliseconds (typically 250ms, e.g. for flashing LEDs).

I had considered using a macro which indicates the "Floating Point Operations per Second" for the given hardware setup, and then doing something like:

void Delay(unsigned const fraction_of_full_second)
{
    long unsigned amount_flop = FLOPS / fraction_of_full_second;
    float x = 56.3;

    do x /= x; while (--amount_flop);
}

(I realise I'll have to take into account the delays of looping and so forth)

Is this a bad idea? How should I go about introducing a delay? Must this part of my program be non-portable?

Martin

Reply to
Martin Wells

Yes, that's a bad idea - embedded systems are inherently non-portable, and are dependent on the particular choice of target and compiler. Obviously it's a good idea to limit the number of non-portable parts of the program, but it's impossible to make an entirely portable embedded solution. Given that, there is no reason to worry about your delay function being non-portable. It should have a portable API, but does not need a portable implementation.

Also, if you are going to try and stick to a standard, C99 makes more sense (even though few compilers support it completely). Most importantly, it gives you "stdint.h" and types like "uint32_t" so that you can avoid unspecific non-standardised types like "long unsigned" (which should always be written "long unsigned int").

As for using floating point ops for a delay - that's a bad idea too. Floating point on most embedded systems is slow and requires large library routines, thus wasting large amounts of space. Floating point operations also have inconsistent runtimes - even on a cpu with hardware floating point, division times are normally not constant.

Finally, a loop like the one you wrote would be optimised away by any half-decent compiler. If you want to use busy waiting loops like this, learn to use "volatile".

There is no way in C to express a delay or any other time-related information - thus your solution will inevitably be non-portable. If you want accurate times, it is normally best to use a hardware timer of some sort. Many compilers also come with library functions handling delays, which can be a good choice. Otherwise you have to write and test your delay functions carefully to see that they work properly even when using different optimisation settings (check the generated assembly code, as one usually does for critical functions in embedded programming).
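
To make the split concrete, something along these lines is possible. The names Delay_ms and LOOPS_PER_MS are invented for the example, and the constant would have to be calibrated for the specific target, clock and compiler settings - a hardware timer is still the better option:

/* delay.h -- the portable interface the rest of the program sees */
void Delay_ms(unsigned ms);

/* delay_target.c -- non-portable implementation for one particular
   target/compiler.  LOOPS_PER_MS is a made-up, hand-calibrated constant. */
#define LOOPS_PER_MS 1000UL

void Delay_ms(unsigned ms)
{
    volatile unsigned long i;  /* volatile keeps the loop from being optimised away */

    while (ms--) {
        for (i = 0; i < LOOPS_PER_MS; i++) {
            /* busy-wait; interrupts and caches will stretch the delay */
        }
    }
}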

Reply to
David Brown

Just think quickly about how portable your idea REALLY is. Two obvious problems:

- What happens if there are interrupts coming in?

- What happens if the Icache is being thrashed, on architectures that have cache?

Reply to
larwe

David:

"long unsigned int" is a part of C89.

Perhaps you were on about "long long unsigned int"? (which is a part of C99 but not C89)

As far as any compliant C89 compiler is concerned, "long unsigned" and "long unsigned int" are the same thing. If the compiler doesn't accept it, then it isn't a C89 compiler.

Even if it were worth switching to C99 (which I don't think it is), I still wouldn't because it's so poorly implemented today.

Martin

Reply to
Martin Wells

No, he is saying that he prefers being able to use uint32_t, instead of, for example, long unsigned.

I personally don't agree though. At least in my own work I have not found a real-life situation where these types are an improvement. Basically, if you are worried about the exact widths of types, then that part of the program is likely non-portable anyway, so the new types don't help much.

For example, in C99 I could define a 32 bit hardware register like this:

#include <stdint.h>
#define PORT (*(volatile uint32_t *)(0x1FFF0000))

But in fact this code will likely be useless on some hypothetical other CPU anyway. I can just as easily rely on ints being 32 bit on my platform, and do

#define PORT (*(volatile unsigned long *)(0x1FFF0000))

--

John Devereux
Reply to
John Devereux

*All* C standards are implemented to varying degrees, and *all* embedded compilers add their own extensions. Take advantage of what you get (such as <stdint.h>, inline, and // comments), and leave out the parts that are poorly or inconsistently implemented (such as variable length arrays). Even nominally C89 compilers frequently support such features.

Actually, I was saying two things (without making that very clear).

First the simple part - omitting the "int" part of declarations and definitions is an abomination brought about by the terrible keyboards K&R had to work with when developing C. The type in question is called "long unsigned int" - "long" and "unsigned" are type qualifiers. The language standards say that if a type is expected, but you haven't given one, then an "int" is assumed, and any C compiler will accept that. But the compiler will accept all sorts of hideous code and generate working results - you have to impose a certain quality of code standards if you are going to write good code. One of these rules is that if you mean a type should be "int" (optionally qualified), then you write "int". If you don't want to write out "long unsigned int", then use a typedef.

First the simple part - omitting the "int" part of declarations and definitions is an abomination brought about by the terrible keyboards K&R had to work with when developing C. The type in question is called "long unsigned int" - "long" and "unsigned" are type specifiers. The language standards say that if a type is expected, but you haven't given one, then an "int" is assumed, and any C compiler will accept that. But the compiler will accept all sorts of hideous code and generate working results - you have to impose a certain quality of code standards if you are going to write good code. One of these rules is that if you mean a type should be "int" (optionally qualified), then you write "int". If you don't want to write out "long unsigned int", then use a typedef.

Secondly, I was suggesting that if you want portable code, you have to use size-specific integer types. Using <stdint.h> is an easy way to get that - otherwise, a common format header file that is adapted for the compiler/target in question is a useful method. It doesn't really matter whether you use "uint32_t" from <stdint.h>, or have a "typedef unsigned long int uint32_t" in a common header file - nor does it matter if you give the type your own name. But it *does* matter that you have such types available in your code.
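
As a rough sketch, such a common header might contain nothing more than a handful of typedefs chosen once per compiler/target. The widths assumed below (8-bit char, 16-bit short, 32-bit long) are assumptions to be checked for the toolchain in use, and you may prefer your own names to avoid clashing with a later C99 <stdint.h>:

/* types.h -- adapted once per compiler/target */
typedef unsigned char       uint8_t;
typedef unsigned short int  uint16_t;
typedef unsigned long int   uint32_t;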

Certainly many of the situations where size specifics are important are hardware-dependent and non-portable - and thus the only concern is that the code in question is clear.

But there are many cases where you need a minimum range which may not be satisfied by "int" on every platform, and also where you want the fastest implementation. If you have a function delayMicrosecs(unsigned int n), then the useful range is wildly different on a 32-bit target and a 16-bit (or 8-bit, with 16-bit int) target. On the other hand, if it is declared with "uint32_t n", it is portable from 8-bit to 64-bit processors. Since the OP was asking for portable code in an embedded newsgroup, there's no way he can make assumptions about the size of "int".

mvh.,

David

Reply to
David Brown

John:

This will only work on implementations that actually have an unsigned integer type with exactly 32 value representation bits. On other implementations, it won't compile. Best to use uint_least32_t (at least 32 bits).

unsigned long is guaranteed to be at least 32 bits, so that's fine. If you wanted to go easy on space consumption, you could still use C89 and use macros:

#if VALUE_BITS(char unsigned) >= 32
typedef char unsigned uint_least32_t;
#elif VALUE_BITS(short unsigned) >= 32
typedef short unsigned uint_least32_t;
#elif VALUE_BITS(unsigned) >= 32
typedef unsigned uint_least32_t;
#else
typedef long unsigned uint_least32_t;
#endif
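
VALUE_BITS is a hypothetical macro here, not something C89 gives you; one rough way to get the same selection in strictly conforming C89 is to test the *_MAX limits from <limits.h> instead:

#include <limits.h>

#if UCHAR_MAX >= 0xFFFFFFFFUL
typedef unsigned char  uint_least32_t;
#elif USHRT_MAX >= 0xFFFFFFFFUL
typedef unsigned short uint_least32_t;
#elif UINT_MAX >= 0xFFFFFFFFUL
typedef unsigned int   uint_least32_t;
#else
typedef unsigned long  uint_least32_t;  /* always at least 32 bits in C89 */
#endif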

Reply to
Martin Wells
[...]

If I wanted a minimum range of 32 bits, I would use "unsigned long". As you know this is already guaranteed to be at least 32 bits on all platforms, so I don't think there is any portability problem with 8, 16 and 32 bit processors.

Now I admit that these are the only cases I think about when writing my own embedded code. But I suppose you could argue that on some hypothetical 64 bit embedded processor, I would then be using values that were longer than needed. But,

- with 64 bit CPUs, is it not true that the compilers still tend to have 32 bit longs, and use "long long" for 64 bits?

- If longs *are* 64 bits, it could be because 32 bit operations are *slower* on that processor. Strictly, I think there might not even *be* a 32 bit type - unlikely, I agree.

- On a 64 bit system, it is very likely that I would want to take advantage of the greater word size and use 64 bit arguments in any case.

--

John Devereux
Reply to
John Devereux

I think perhaps you missed my point, which was that this sort of code is inherently non-portable since it refers to hardware registers built into the chip. In this situation I am *allowed* to assume that, e.g., unsigned long is 32 bits. And even if it was not, your version would not be an improvement since PORT is a 32 bit register. If uint_least32_t was in fact 64 bits, the code could still fail (e.g. you would overwrite the next register, or endianness could be wrong).

That was my point; it is just as good as uint32_t for this sort of usage.

--

John Devereux
Reply to
John Devereux

Absolutely

Absolutely NOT All the embedded compilers are based on C95. Also the next version of C will be C99 less some items (and some more added)

Yes

I would suggest a HW timer and an interrupt.

BTW what is/are the target MCU(s) for this?

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
/\/\/ chris@phaedsys.org      www.phaedsys.org \/\/\
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris Hills

"unsigned long int" will, as you say, work for when you need at least 32 bits, and "unsigned int" will work for when you need at least 16 bits. I prefer to be more specific and explicit - I find I often want my types to be of a given width, neither more nor less than I specify. I'll grant you that this is somewhat a matter of style - but I am not alone in this (it takes a big demand to get something like into the standards).

The 64-bit MIPS processors are non-hypothetical embedded processors (although they are not common in this newsgroup).

That varies. "long long int" is always at least 64 bits, but there are different models for whether "long int" is 32-bit or 64-bit. In particular, 64-bit Windows uses 32-bit long ints, while 64-bit Linux uses 64-bit long ints. It is also perfectly possible for both "int" and "long int" to be 64-bit, but I think that is uncommon.

As you say, possible but unlikely. I don't agree with your thoughts as to why "long int" might be 32-bit or 64-bit - the trouble is, different implementations of the same instruction set might have different balances (the first Intel chips to support amd64 instructions were, IIRC, quite a bit slower at 64-bit arithmetic, while the AMD chips were faster in 64-bit mode).

For most of my work, I use 8 and 16 bit systems, but I sometimes use 32-bit cpus (some with 16-bit external buses, making 16-bit ints faster for some purposes) - thus I have the same situation when using 32-bit cpus as you describe for 64-bit cpus.
Reply to
David Brown

[...]

OK. I think I do see the rationale, but I don't find it convincing enough to want to expunge the native types and "pollute" my code with *int*_t everywhere.

One other point I did not mention (perhaps someone else did?) would be interfacing to standard C functions. E.g. What happens when you call printf with a uint32_t, or an int_atleast_32_and_fast_please_t?

Doesn't that imply a whole new set of things to worry about?

--

John Devereux
Reply to
John Devereux

These things are a matter of personal style, but I think it's important to have concrete specific sized types available when you need them. In my code, a lot of local variables end up as "int" for convenience (although not on 8-bit targets), but for many exported functions, types, and data, I use size-specific types. I also use them for structs that will be used in arrays - on small micros, it can make a big difference if the size of such structs makes it easy to calculate addresses. In practice, I make more use of my own typedef'ed types like "byte" and "word" (16-bit, regardless of the cpu), simply because I've been using them for over a decade. But whereas previously my common include file (target/compiler specific) might have "typedef unsigned short int word;", it would now have "typedef uint16_t word;".

First off, I don't use printf or friends very often (I write small embedded systems). Secondly, if I *do* use printf (more likely snprintf), I use gcc which will type-check the parameters against the format so that any mistakes are caught - although with any variable parameter function, you've lost much of C's already limited type checking. Thirdly, I occasionally have to cast the parameters explicitly so that I can be sure there are no mistakes.
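
For instance, either of these should be safe - PRIu32 is the format macro from C99's <inttypes.h>, and the explicit cast works even with a home-grown uint32_t typedef under C89:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint32_t ticks = 123456UL;

    /* C99: PRIu32 expands to the correct conversion specifier for uint32_t */
    printf("ticks = %" PRIu32 "\n", ticks);

    /* Fallback: cast so the argument is known to match "%lu" */
    printf("ticks = %lu\n", (unsigned long)ticks);

    return 0;
}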

I would not say so, no.

On the other hand, if I were writing a C program on a PC or an embedded Linux box, I'd expect to use the fundamental C types a lot more often - because then parameters like the size of an "int" are known and fixed, "int" is generally the fastest type (which is not always the case in embedded systems), and there are far fewer demands in trying to use the smallest possible type in order to save memory.

mvh.,

David

Reply to
David Brown

You'd be better off using uint_least32_t. It _may_ mean the same thing to the compiler, but it expresses your intent more clearly to the reader (and that's what really matters). It also allows the compiler to use a longer type if that would result in faster or smaller code.

Why worry about it? Use uint32_t if you want exactly 32 bits. Use uint_least32_t if you want at least 32 bits. Use uint64_t if you want 64 bits.

--
Grant Edwards                   grante             Yow! I just went below the
                                  at               poverty line!
                               visi.com
Reply to
Grant Edwards

It's a terribly good example to try to access H/W functionality when we are talking about portability - not.

Main problem is likely to be the int, which is quite often 16 bit on a small micro and 32 bit on a larger micro. Plain char is sometimes set to unsigned on small micros, and to be able to guarantee signedness regardless of compiler options is an improvement.

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB

Reply to
Ulf Samuelsson

Many embedded systems need to do multiple tasks, either with an RTOS kernel or with a task loop. These systems will _not_ look favorably on some "portable code" that steals the processor for a delay loop.

To make your code compatible with both a task loop and an OS you need to put everything into a state machine that gets run when the application programmer calls your 'update' function (don't just name it 'update', of course).

The best way (to my mind) to enforce timing is to require the user to hook you up with a system time function that returns your choice of time units (I find myself using milliseconds quite often). Then you can keep track of when you want to come alive, and immediately return from your update function if sufficient time has not passed.

The second best way to enforce timing, and always a good adjunct to the first way, is to have your 'update' function return a delay that you want to be externally implemented. So if you run your update function and decide that the next time you should be called is in 250 ms, you can return a number that means 250ms, and leave it up to the application programmer to wait that long.

I have done this with good success in portable code that needs to run in both sorts of environments.
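
As a rough sketch of the first approach - the names blink_update, get_ms and ToggleLedPin are invented for the example, and the 250 ms period and millisecond time base are just one choice:

/* Portable module that flashes an LED every 250 ms without blocking.
   The application supplies get_ms(), a free-running millisecond count,
   and ToggleLedPin(), a thin wrapper over the pin-setting functions. */

#define FLASH_PERIOD_MS 250UL

extern unsigned long get_ms(void);   /* supplied by the application */
extern void ToggleLedPin(void);      /* e.g. built on SetPinHigh() */

void blink_update(void)
{
    static unsigned long last_toggle_ms;
    unsigned long now = get_ms();

    /* Unsigned subtraction handles counter wrap-around correctly */
    if (now - last_toggle_ms < FLASH_PERIOD_MS)
        return;                       /* not time yet; return immediately */

    last_toggle_ms = now;
    ToggleLedPin();
}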

--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
Reply to
Tim Wescott

That is kind of what I was saying! Most cases where I want to use exact widths are not portable anyway, so the "exact-width" types are no help.

You can already guarantee signedness using "signed" or "unsigned" as needed.

I.e., I don't see what is wrong with using

"signed char" when you need signed
"unsigned char" when you need unsigned
"char" when you don't care.

If you start using e.g. uint8_t everywhere then you get into trouble with the library functions that expect plain char.

--

John Devereux
Reply to
John Devereux

Who said anything about expunging native types? The reason for uint16_t and friends is that there are situations (e.g. if you want to control wrap-around behaviour in an expression) where you need a type of exactly that size, or the code won't work. Now, of course "unsigned int" might work just the same, too --- on the platform the code is aimed at now. But the next controller you want to run it on may be a 32-bit one. So you'll have to go over the *entire* code and decide, based on design documentation (lots of it, hopefully), comments and the occasional guess, which of those "unsigned int" was actually meant to be exactly 16-bit unsigned, and which wasn't. Better to spell it out right there on the first shot, and be done with it.
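
A small illustration of the wrap-around point - ticks_elapsed is a made-up example, and it is the uint16_t types (and the cast back after integer promotion) that pin the arithmetic to modulo 2^16 regardless of the width of int:

#include <stdint.h>

/* Elapsed count between two captures of a free-running 16-bit timer.
   With plain "unsigned int" the result would silently change meaning
   when the code moves from a 16-bit to a 32-bit target. */
uint16_t ticks_elapsed(uint16_t now, uint16_t then)
{
    return (uint16_t)(now - then);
}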

Ultimately, the closer you look, the less useful the traditional, "native" integer types turn out to be.

You learn about <inttypes.h>. And you make sure the tools (compiler, lint) are up to the task of helping you with these like they hopefully already do with the traditional integer types. The ability to have tools help you with these is actually a major reason why they should be standardized. Lint shouldn't have to learn everybody's and their grandma's private re-invention of uint16_t.

Reply to
Hans-Bernhard Bröker

I suspect you'll find this hard to believe, but: that's actually a good thing. A platform that has no such type can't run that code as designed, so there's no point for it to compile on that platform. It should fail.

Not in this particular case. If the platform doesn't have 32-bit integers, it can't have 32-bit hardware registers, so it shouldn't compile this code.

It would be rather nice if a VALUE_BITS like that were a C89 standard functionality, wouldn't it? Well, sorry, it's not. And not only that: it can't even be implemented in C89. There's only CHAR_BIT (in <limits.h>), and sizeof() --- but the latter doesn't work in preprocessor instructions.

There are reasons <stdint.h> was made part of the standard. One of them is that it's quite hard to implement its functionality unless you're the compiler implementor.

Reply to
Hans-Bernhard Bröker

The above is critically unclear due to a typo. I'll assume what you meant is:

And I find that an absolutely stunning statement. So stunning that I find it impossible to believe. "All the embedded compilers", you say? As in each and every single one of them, and you're sure of that?

Reply to
Hans-Bernhard Bröker
