newlib and time() (2023 Update)

I often use the newlib standard C library with the gcc toolchain for Cortex-M platforms. Sometimes I need to manage calendar time: seconds since 1970 or broken-down time. And sometimes I need to manage the timezone too, because the time reference comes from NTP (which is UTC).

newlib as expected defines a time() function that calls a syscall function _gettimeofday(). It should be defined as in [1].

What is it? There's an assembler instruction that I don't understand:

asm ("swi %a1; mov %0, r0" : "=r" (value): "i" (SWI_Time) : "r0");

What is the cleanest way to override this behaviour and let newlib's time() return a custom calendar time, maybe counted by a local RTC synchronized with NTP?

The solution that comes to my mind is to override _gettimeofday() by defining a custom function.
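A minimal sketch of such an override, assuming a hypothetical rtc_seconds() that reads the NTP-synchronized RTC (the name and the fixed return value here are placeholders, not a real driver):

```c
#include <stdint.h>
#include <sys/time.h>

/* Hypothetical: returns seconds since the Unix epoch from a local RTC
 * that is periodically synchronized with NTP. */
static uint32_t rtc_seconds(void)
{
    return 1700000000u;   /* placeholder for a real RTC read */
}

/* newlib's time() and gettimeofday() end up in this syscall stub;
 * defining it in the application overrides the default version. */
int _gettimeofday(struct timeval *tv, void *tz)
{
    (void)tz;                       /* obsolete, NULL in practice */
    tv->tv_sec  = (time_t)rtc_seconds();
    tv->tv_usec = 0;                /* the RTC has 1 s resolution */
    return 0;
}
```

Since the stub is resolved at link time, no newlib sources need to be rebuilt.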

[1]
formatting link
Reply to
pozz

It is a "software interrupt" instruction. If you have a separation of user-space and supervisor-space code in your system, this is the way you make a call to supervisor mode.

Yes, that's the way to do it.

Or define your own time functions that are appropriate to the task. I have almost never had a use for the standard library time functions - they are too much for most embedded systems which rarely need all the locale stuff, time zones, and tracking leap seconds, while lacking the stuff you /do/ need like high precision time counts.

Use a single 64-bit monotonic timebase running at high speed (if your microcontroller doesn't support that directly, use a timer with an interrupt for tracking the higher part). That's enough for nanosecond precision for about 600 years.
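A common sketch of that scheme: keep the high 32 bits in software, bumped by the overflow interrupt, and read the two halves coherently. The register names here are stand-ins, not any particular vendor's API:

```c
#include <stdint.h>

/* Stand-ins for hardware: a free-running 32-bit timer register and a
 * high word maintained by software. */
volatile uint32_t timer_low;    /* would be e.g. a timer's CNT register */
volatile uint32_t timer_high;   /* incremented by the overflow IRQ */

/* Overflow interrupt handler: extends the counter to 64 bits. */
void timer_overflow_isr(void)
{
    timer_high++;
}

/* Read a coherent 64-bit timestamp: retry if an overflow slipped in
 * between reading the high and low words. */
uint64_t timebase_read(void)
{
    uint32_t hi, lo;
    do {
        hi = timer_high;
        lo = timer_low;
    } while (hi != timer_high);
    return ((uint64_t)hi << 32) | lo;
}
```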

For human-friendly time and dates, either update every second or write your own simple second-to-human converter. It's easier if your base point is relatively recent (there's no need to calculate back to 01.01.1970).

If you have an internet connection, NTP is pretty simple if you are happy to use the NTP pools as a rough reference without trying to do millisecond synchronisation.

Reply to
David Brown

It's the internal implementation of a bog-standard BSD Unix system call. Have you tried `man 2 gettimeofday`?

The first parameter points to a struct timeval with time_t tv_sec and suseconds_t tv_usec, and the second one (both optional) to a struct timezone with two ints called tz_minuteswest and tz_dsttime.

That depends on details of newlib and your tool chain.

Clifford Heath

Reply to
Clifford Heath

Ok, but how that instruction helps in returning a value from _gettimeofday()?

I agree with you, and I used to implement my own functions to manage calendar times. Sometimes I used an internal or external RTC that gives date and time in broken-down fields (seconds, minutes, ...).

However most RTCs don't manage automatic DST (daylight saving time), so I started using a different approach: a simple 32-bit timer that increments every second. Maybe a timer clocked from an accurate 32.768 kHz quartz with a 32768 prescaler (many RTCs can be configured as a simple 32-bit counter). I rarely need calendar times with a resolution better than 1 second.

Now the big question: what exactly does the counter represent? Of course, seconds elapsed from an epoch (which could be Unix 1970, or 2000, or 2020, or whatever you choose). But the real question is: UTC or local time?

I started using local time, for example a timer that counts seconds since year 2020 (so avoiding the wrap-around at year 2038) in the Rome timezone. However this approach pulls in other issues.

How do you convert this number (seconds since 2020 in Rome) to broken-down time (day, month, hours...)? It's very complex, because you have to account for leap years, but mostly for DST rules. In Rome there are calendar times that occur twice, when the clock is moved backward by one hour at the end of DST. What is the counter value, as seconds from the epoch in Rome, for such a time?

It's much simpler to start from seconds in UTC, as Linux (and maybe Windows) does. In this way you can use standard functions to convert seconds in UTC to local time. For example, you can use localtime() (or better, localtime_r()).

Another bonus is when you have NTP, which returns seconds in UTC, so you can set your counter with the exact number retrieved from NTP.
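With newlib this conversion works through a POSIX TZ string that encodes the DST rule, so the library applies the offset and the switch dates by itself. A sketch (the TZ string below encodes the EU rules for Rome):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Convert a UTC counter value to Rome local time. The TZ string encodes
 * CET/CEST with the EU DST rules (last Sunday of March and October), so
 * localtime_r() applies the +1h/+2h offset by itself. */
void print_rome_time(time_t utc_seconds)
{
    struct tm local;

    setenv("TZ", "CET-1CEST,M3.5.0,M10.5.0/3", 1);
    tzset();
    localtime_r(&utc_seconds, &local);
    printf("%04d-%02d-%02d %02d:%02d:%02d\n",
           local.tm_year + 1900, local.tm_mon + 1, local.tm_mday,
           local.tm_hour, local.tm_min, local.tm_sec);
}
```

Because the rule is in the string, no timezone database is needed on the target.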

Reply to
pozz

It traps into Kernel mode, with a different stack. The kernel uses memory manipulation to push return values into user-mode registers or the user stack as needed to simulate a procedure return.

That's a good way to always get the wrong result. You are ignoring the need for leap seconds. If you want a monotonic counter of seconds since some epoch, you must not use UTC, but TAI:

formatting link

When I implemented this, I used a 64-bit counter in units of 100 nanoseconds since a date around 6000 BC, measured in TAI. You can convert to UTC easily enough, and then use the timezone tables to get local times.

Clifford Heath.

Reply to
Clifford Heath

It will work if you have an OS that provides services to user-level code. The service type is passed in the SWI instruction (in this case, "SWI_Time"), and the service should return a value in r0.

Calls like this are part of the "hosted" C library functions - they rely on a host OS to do the actual work.

That should all be fine.

Reply to
David Brown

It appears 'libgloss' is the system-dependent part of newlib. It has various ideas of what those low level system functions should do, in particular linux-syscalls0.S, redboot-syscalls.c and syscalls.c (which appears to be calling Arm's Angel monitor).

There's a guide for porting newlib to a new platform that describes what libgloss is and how to port it:

formatting link
as well as:
formatting link
So the cleanest way wouldn't be to override _gettimeofday() as such, you'd make your own libgloss library that implemented the backend functions you wanted.

Theo

Reply to
Theo

What happens if the counter is UTC instead of TAI in a typical embedded application? There's a time when the counter is synchronized (by a manual operation from the user, by NTP or other means). At that time the broken-down time shown on the display is precise.

You should wait for the next leap second to have an error of... 1 second.

Reply to
pozz

The key thing to remember is there is more than one system of time.

As I remember, time() returns UTC seconds since the epoch, which ignores leap seconds. This means the time() function will either pause for 1 second during the leap second, or some systems smear the 1-second anomaly over a period of time. (This smeared UTC is monotonic, if not exactly accurate during the smear period.)

For time() ALL days are 24*60*60 seconds long.

There are other time systems (like TAI) that keep track of leap seconds, but then to use those to convert to wall-clock time, you need a historical table of when leap seconds occurred, and you need to either refuse to handle the far future or admit you need to "guess" when those leap seconds will be applied.

Most uses of TAI time are for just short intervals without a need to convert to wall clock.
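For illustration, here is the tail of that historical table and a lookup, with the standard caveat that entries past the table end are a guess (no further leap seconds have been announced since 2017-01-01):

```c
#include <stdint.h>
#include <time.h>

/* The last few entries of the leap-second table: the UTC instant at
 * which TAI-UTC changed, and the offset from then on. The full table
 * has 28 entries going back to 1972. */
struct leap { time_t utc; int tai_minus_utc; };

static const struct leap leaps[] = {
    { 1341100800, 35 },   /* 2012-07-01 */
    { 1435708800, 36 },   /* 2015-07-01 */
    { 1483228800, 37 },   /* 2017-01-01 */
};

/* TAI-UTC in seconds for a UTC time at or after 2012-07-01. Beyond the
 * table end this is a guess: it assumes no further leap seconds. */
int tai_offset(time_t utc)
{
    int off = leaps[0].tai_minus_utc;
    for (unsigned i = 0; i < sizeof leaps / sizeof leaps[0]; i++)
        if (utc >= leaps[i].utc)
            off = leaps[i].tai_minus_utc;
    return off;
}
```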

Reply to
Richard Damon

No. You always have to ensure that time keeps flowing in one direction.

So, time either "doesn't exist" before your initial sync with the time server (what if the server isn't available when you want to do that?) *or* you have to look at your current notion of "now" and ensure that the "real" value of now, when obtained from the time server, is always in the future relative to your notion.

[Note that NTP slaves don't blindly assume the current time is as reported but *slew* to the new value, over some interval.]

This also ignores the possibility of computations with relative *intervals* being inconsistent with these spontaneous "resets".
Reply to
Don Y

How did you address calls for times during the Gregorian changeover? Or, times before leap seconds were "created" (misnomer)?

Going back "too far" opens the door for folks to think values before "recent times" are valid.

I find it easier to treat "system time" as an arbitrary metric that runs at a nominal 1 Hz and is never "reset" (an external timebase allows you to keep adjusting your notion of a "second"). This ensures that there are always N seconds between any two *system* times, X and X+N (for all X).

Then, "wall time" is a bogus concept introduced just for human convenience. Do you prevent a user (or an external reference) from ever setting the wall time backwards? What if he *wants* to? Then, anything you've done relying on that is suspect.

You don't FORCE system time to remain in sync with wall time (even at a specific relative offset) but, rather, treat them as separate things.
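One way to sketch that separation: the system count is never written back, and setting the wall clock only moves an offset. All names here are hypothetical:

```c
#include <stdint.h>

/* 'System time': a monotonic count, incremented once per second by a
 * timer, never reset. 'Wall time' is derived through an offset that the
 * user (or NTP) may change at will. */
uint64_t system_secs;     /* incremented 1/s, never written back */
int64_t  wall_offset;     /* wall = system + offset, user-settable */

uint64_t wall_now(void)
{
    return (uint64_t)((int64_t)system_secs + wall_offset);
}

/* The user sets the wall clock: only the offset changes, so intervals
 * measured in system time are unaffected. */
void wall_set(uint64_t new_wall)
{
    wall_offset = (int64_t)new_wall - (int64_t)system_secs;
}
```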

So, if I (the user) want to "schedule an appointment" at 9:00AM, the code uses the *current* notion of the wall time -- which might change hundreds of times between now and then, at the whim of the user. If the wall time suddenly changes, then the time to the appointment will also change -- including being overshot.

Damn near everything else wants to rely on relative times which track the system time.

If a user wants to do something "in 5 minutes", you don't convert that to "current wall time + 5 minutes" but, rather, schedule it at "current SYSTEM time + 300 seconds".

OTOH, if it is now 11:50 and he wants something to happen at 11:55 (now+5 minutes), then he must *say* "11:55".

This allows a user to know what to expect in light of the fact that he can change one notion of time but not the other.

Reply to
Don Y

There haven't been that many of them, so it's not a very big table.

There is a proposal to never add more leap seconds anyhow. They never did anyone any good. Astronomers don't use UTC anyway.

Yes. But if you're going to implement a monotonic system, you may as well do it properly.

Clifford Heath

Reply to
Clifford Heath

You're asking a question about calendars, not time. Different problem.

> Then, "wall time" is a bogus concept introduced just for human convenience.

That doesn't work for someone who's travelling between timezones. Time keeps advancing regardless, but wall clock time jumps about. Same problem for DST. Quite a lot of enterprise (financial) systems are barred from running any transaction processing for an hour during DST switch-over, because of software that might malfunction.

Correctness is difficult, especially when you build systems on shifting sands.

Clifford Heath.

Reply to
Clifford Heath

They are related, as time is often interpreted relative to some *other* "bogus concept" (e.g., a calendar) related to how humans want to frame time references.

Or for someone who wants to change the current wall time. Note that these library functions were created when "only god" (sysadm) could change the current notion of time -- and didn't do so casually.

Now, damn near every device (especially embedded) allows the user to dick with the wall clock with impunity. Including intentionally setting the time incorrectly (e.g., folks who set their alarm clocks "5 minutes fast" thinking it will somehow trick them into getting out of bed promptly, whereas the CORRECT time might not).

And, one can have a suite of devices in a single environment each with their own notion of "now".

Because wall time has an ill-defined reference point -- that can often be changed, at will!

E.g., we don't observe DST, here. So the broadcast TV schedules are "off" by an hour. When something is advertised as airing at X mountain time (or pacific time), what does that really mean for us?

The issue is considerably larger than many folks would think. Because there are a multitude of time references in most environments; what your phone claims, what your TV thinks, what your PC/time server thinks, how you've set the clock on your microwave, bedside alarm, etc.

If you have two "systems" (appliances) interacting, which one's notion of time should you abide?

How do you *report* a timestamp on an event that happened 5 minutes ago -- if the wall clock was set BACKWARDS by an hour in the intervening interval? Should the timestamp reflect a *future* time ("The event happen-ED 55 minutes *from* now")? Should it be adjusted to reflect the time at which it occurred relative to the current notion of wall time?

How do you *order* events /ex post factum/ in the presence of such ambiguity?

(i.e., I map everything, internally, to system time as that lets me KNOW their relative orders, regardless of what the "wall clock" said at the time.)

If you've scheduled "something" to happen at 11:55 and the user sets the wall clock forward, an hour (perhaps accidentally), do you trigger that event (assuming the new "now" > 11:55) instantly? If you automatically clear "completed events", then setting the wall clock back to the "correct" time won't resurrect the 11:55 event at the originally intended "absolute time".

Reply to
Don Y

At startup, if the NTP server is not available and I don't have any notion of "now", I start from a date in the past, e.g. 01/01/2020.

Actually I don't do that, and I replace the timer counter with the value retrieved from NTP. What happens if the local timer is clocked by a faster clock than nominal? For example, 16.001 MHz with a 16M prescaler. If I re-sync with NTP every hour, the local counter will probably be greater than the value retrieved from NTP. I'm forced to decrease the local counter, my notion of "now".

What happens if the time doesn't flow in one direction only?

Reply to
pozz

NTP has solved this question; see the publications of Prof. Mills.

There is a short description in Wikipedia article on NTP.
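The core of Mills' approach, stripped to a sketch: correct the *rate*, not the count, so local time stays monotonic and converges on NTP time. The tick period, correction interval, and 500 ppm cap below are assumptions (the cap is loosely modeled on ntpd's limit); all names are hypothetical:

```c
#include <stdint.h>

int64_t now_us;        /* local calendar time in microseconds */
int32_t slew_ppm;      /* current rate correction in parts per million */

/* Called from a 10 ms periodic timer interrupt: advance the clock at
 * the nominal rate plus the current slew. */
void clock_tick(void)
{
    now_us += 10000 + (10000LL * slew_ppm) / 1000000;
}

/* Called after each NTP exchange with the measured offset
 * (server time minus local time, in microseconds): spread the
 * correction over roughly 1000 s, capped at 500 ppm. */
void clock_adjust(int64_t offset_us)
{
    int64_t ppm = offset_us / 1000;
    if (ppm >  500) ppm =  500;
    if (ppm < -500) ppm = -500;
    slew_ppm = (int32_t)ppm;
}
```

Because the count is only ever incremented, the "time flows backward" case never happens, even when the local oscillator runs fast.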

Reply to
Tauno Voipio

Eventually I had some free time to read this interesting post and reply.

On 03/10/2022 04:02, replying to my post of 30/09/2022 20:42 ("Another bonus is when you have NTP, that returns seconds in UTC, so..."):

Certainly there's an exception at startup. When the *first* NTP response is received, the code should accept a BIG step in the current notion of now (which could be undefined, or 2020, or another epoch, until then). I read that ntpd accepts a -g command-line option that enables one (and only one) big difference between the current system notion of now and "NTP now".

I admit that this could lead to odd behaviours as you explained. IMHO however there aren't many solutions at startup, especially if the embedded device should be autonomous and can't accept suggestions from the user.

One is to suspend, at startup, all the device activities until a "fresh now" is received from the NTP server. After that, the normal tasks are started. As you noted, this could introduce a delay (even a BIG delay, depending on the Internet connection and NTP servers) between power on and the start of tasks. I think this isn't compatible with many applications.

Another solution is to fix the code in such a way that it correctly handles a big forward or backward step in the "now" counter. The code I'm thinking of is not the one that manages normal timers, which can depend on a local reference (XTAL, ceramic resonator, ...) completely independent of the calendar counter. Most of the time, the precision of timers isn't strict and intervals are short: we need to activate a relay for 3 seconds (but nothing happens if it is activated for 3.01 seconds), or we need to generate a 100 ms pulse on an output (but no problem if it is 98 ms). This means having a main counter clocked at 10 ms (or whatever) from a local clock of 100 Hz (or whatever). This counter isn't corrected with NTP.

The only code that must be fixed is the one that manages events that must occur at specific calendar times (at 12 o'clock on 1st January, at 8:30 every day, and so on). So you should have *another* counter clocked at 1 Hz (or 10 Hz or 100 Hz) that is adjusted by NTP. And abrupt changes should be taken into account (even if I don't know how).

Good point. As I wrote before, events that aren't strictly related to wall clock shouldn't be coded with functions() that use now(). If the code that makes a 100ms pulse at an output uses now(), it is wrong and must be corrected.

Same thing. Instead of using now(), which returns "calendar seconds" related to NTP, this code should return ticks or jiffies that are related only to the local reference.

If you implement in this way:

void do_later(action_fn fn, uint32_t delay_ms)  /* "do" alone is a C keyword */
{
    timer_add(delay_ms, fn);
}

and timer_add() uses the counter that is clocked *only* from local reference, no problem occurs.

Some problems could occur when time1 and time2 are calendar times. One solution could be to have one module that manages calendar events with the following interface:

cevent_hdl_t cevent_add(time_t time, cevent_fn fn, void *arg);
void cevents_do(time_t now);

Every second cevents_do() is called with the new calendar time (seconds from an epoch).

void cevents_do(time_t now)
{
    static time_t old_now;

    if (now != old_now + 1) {
        /* There's a discontinuity in now. What can we do?
         * - Remove expired events without calling the callback
         * - Remove expired events and call the callback for each of them
         * I think the choice is application dependent */
    }

    /* Process the first elements of the FIFO queue (which is sorted),
     * guarding against an empty queue */
    cevent_s *ev;
    while ((ev = cevents_queue_peek()) != NULL && ev->time == now) {
        ev->fn(ev->arg);
        cevents_queue_pop();
    }
    old_now = now;
}

Good questions. You could try to implement a complex calendar time system in your device, one that mimics a full-featured OS. I mean the counter that tracks "now" (seconds or milliseconds from an epoch) isn't changed abruptly; instead its reference is slowed down or accelerated. You need hardware that supports this. Many processors have timers that can be used as counters, but their clock reference is limited to a prescaled main clock, and the prescaler value is usually an integer, maybe only one from a limited set of values (1, 2, 4, 8, 32, 64, 256).

Anyway, even if you are smart enough to implement this in a correct way, you have to solve the "startup issue". What happens if the first NTP response arrives 5 minutes after startup and your notion of now at startup is completely useless (i.e., no battery is present)? Maybe during initialization the code already added some calendar events.

I got the point, but IMHO it is not so simple to implement this in a correct way, and anyway you have the "startup issue".

In the real world, could this happen? Except at startup, the seconds reported by NTP should be very similar to the "local seconds" clocked from the local reference. I didn't make any test, but I expect offsets measured by NTP to be well below 1 s in normal situations. The worst case should be:

12:01:07 start something
12:01:08 did whatever
12:01:15 did something else
12:01:14 finished up

I admit it's not very good.

Reply to
pozz

Yes, the only solution that comes to my mind is to have a startup calendar time, such as 01/01/2023 00:00:00. Until a new time is received from NTP, that is the calendar time that the system will use.

Of course, with this wrong "now", any event that is related to a calendar time would fail.

Yes, of course. Otherwise NTP would be useless.

Yes, a log with timestamps can be managed in these ways.

Yes.

Yes, but I don't remember an application I worked on that didn't track the wall time and, at the same time, needed a greater precision than the local oscillator.

Suppose you have some alarms scheduled weekly, for example at 8:00:00 every Monday and at 9:00:00 every Saturday. In the week you have 604'800 seconds.

8:00 on Monday is at 28'800 seconds from the beginning of the week (I'm considering Monday as the first day of the week). 9:00 on Saturday is at 194'400 secs.

If the alarms manager is called exactly one time each second, it should be very simple to understand if we are on time for an alarm:

if (now_weekly_secs == 28800)  fire_alarm(ALARM1);
if (now_weekly_secs == 194400) fire_alarm(ALARM2);

Note the equality test. With an inequality you can't simply use this:

if (now_weekly_secs > 28800)  fire_alarm(ALARM1);
if (now_weekly_secs > 194400) fire_alarm(ALARM2);

otherwise the alarms will fire continuously after the deadline. You should tag the alarm as occurred for the current week to avoid firing it again at the next call.
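A sketch of that tagging approach, which also tolerates a missed call (>= instead of ==) and re-arms the flags when the weekly counter wraps. The names are illustrative, and the counter incremented in fired_count stands in for the real fire_alarm() side effect:

```c
#include <stdbool.h>
#include <stdint.h>

#define NALARMS 2

struct alarm { uint32_t when; bool fired; };

struct alarm alarms[NALARMS] = {
    { 28800,  false },    /* Monday   08:00 */
    { 194400, false },    /* Saturday 09:00 */
};

uint32_t last_weekly;
int fired_count;          /* stands in for fire_alarm() side effects */

void alarms_manager(uint32_t now_weekly_secs)
{
    /* The weekly counter wrapped: a new week, so re-arm every alarm. */
    if (now_weekly_secs < last_weekly)
        for (int i = 0; i < NALARMS; i++)
            alarms[i].fired = false;

    /* >= tolerates a missed second; the flag makes it 'at most once'. */
    for (int i = 0; i < NALARMS; i++)
        if (!alarms[i].fired && now_weekly_secs >= alarms[i].when) {
            alarms[i].fired = true;
            fired_count++;            /* fire_alarm(i); */
        }

    last_weekly = now_weekly_secs;
}
```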

Is it so difficult to *guarantee* calling alarms_manager(weekly_secs) every second?

A second is a very long interval. It's difficult to think of a system that can't programmatically meet a deadline of one second.

Simple to write.

No. The meeting is always at 5:00PM.

IMHO if the user set a time using the wall clock convention (shut the door at 8:00PM every afternoon), it shouldn't be changed when the calendar time used by the system is adjusted. Anyway this should be application dependent.

Reply to
pozz

You can't assume AC frequency will be held ... generator spin rates are not constant under changing loads, and rectification is more difficult because of the high voltages and the fact that generators typically are 4..12 multiphase (for efficiency) being squashed into (some resemblance of) a sine wave.

But the utilities are required to provide the expected number of cycles in a given period. In the US, that period is contractual (not law) and typically is from 6..24 hours.

If you've ever seen cycle driven wall clocks run slow all morning or afternoon and then suddenly run fast for several minutes just before noon, or 6pm, or midnight (or all three) ... that's the electric utility catching up on a low cycle count.

'exactly once' semantics are impossible to guarantee open loop. Lacking positive feedback, the best you can achieve is 'at most once' or 'at least once'.

Exactly. The above is an example of 'at most once'. But incomplete: it fails to reset for next week. ;-)

Yup!.

Simple enough to maintain monotonically increasing system time (at least until the counter rolls over, but that's easily handled).

The washer repairman says "I have you down for Tuesday". But where is down? And which Tuesday? -- Erma Bombeck

George

Reply to
George Neuner
