Software real-time clock using timer unit

I am trying to implement a software real-time clock on ARM using the timer module.

I am pretty much unable to do it at this point and was hoping that someone could point me in the right direction -- perhaps even to coded examples that could serve as a way to better understand it. I know C but am a newbie with microcontrollers. I have to display the time using one of the GPIO ports.

Thanks in advance!

AJ

---------------------------------------
This message was sent using the comp.arch.embedded web interface.

Reply to
alex99

It would help if you told us which ARM controller you're using.

Reply to
Arlet

Decide on a useful interval. For a human-space display, a 1 Hz update is plenty. Then, given your device's clock, PLL setting, and peripheral clock divider, select a prescaler and match register pair that results in a 1 Hz event. Set the timer's control registers appropriately and, depending on the ARM, do the necessary magic to connect the timer match event to an interrupt.
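As a sketch of the prescaler/match arithmetic above: the clock numbers here are hypothetical (a 15 MHz peripheral clock is assumed, not anything from the original post), and match-register semantics vary between parts -- many timers count from 0 up to and including the match value, which is what this helper assumes.

```c
#include <stdint.h>

/* Given the peripheral clock and a chosen prescaler, compute the match
   value that yields a 1 Hz timer event.  Assumes the counter ticks at
   pclk/prescale and runs 0 .. match inclusive (check your part's manual;
   some timers use prescale+1, some reset at match+1). */
uint32_t match_for_1hz(uint32_t pclk_hz, uint32_t prescale)
{
    uint32_t ticks_per_second = pclk_hz / prescale;
    return ticks_per_second - 1u;   /* count 0..match = one full second */
}
```

For example, with a hypothetical 15 MHz peripheral clock and a prescaler of 15000, the counter ticks at 1 kHz and the match value comes out to 999.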

Keep the running counts of hours/minutes/seconds in the foreground process, with the timer interrupt service routine just setting a flag that "it happened." That keeps the interrupt nice and short. The foreground process resets the flag, does the arithmetic to update the counts, and displays the result.
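The flag-plus-foreground pattern described above might look like this. Only the bookkeeping logic is shown; the interrupt-vector plumbing and the display routine are hardware-specific and omitted.

```c
#include <stdbool.h>

/* Set by the ISR, cleared by the foreground loop. */
volatile bool tick_flag = false;
unsigned hours, minutes, seconds;

/* Timer match ISR: record only that the 1 Hz event happened.
   On a real part you would also clear the timer's match flag here. */
void timer_isr(void)
{
    tick_flag = true;
}

/* Foreground: poll the flag, advance the clock, refresh the display. */
void rtc_poll(void)
{
    if (!tick_flag)
        return;
    tick_flag = false;

    if (++seconds == 60) {
        seconds = 0;
        if (++minutes == 60) {
            minutes = 0;
            if (++hours == 24)
                hours = 0;
        }
    }
    /* display_update(hours, minutes, seconds);  -- GPIO-specific */
}
```

Keeping the arithmetic out of the ISR keeps interrupt latency minimal, at the cost of the foreground loop having to run at least once per tick.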

--
Rich Webb     Norfolk, VA
Reply to
Rich Webb

I would definitely recommend a different approach: do your clock accounting in the interrupt service routine. If you only set a flag, there is no need for an interrupt at all, since you may equally well test the timer's event flag in your software; the sole function of the interrupt routine would be setting the software flag when the timer sets the hardware flag, so the extra software flag is not needed at all.
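The polling approach suggested above can be sketched as follows. `TIMER_IR` and `MR0_FLAG` are hypothetical names (loosely modelled on common ARM timer peripherals), and here the register is simulated with a plain variable; on real silicon it would be memory-mapped and the flag is often cleared by writing a 1 to it rather than masking it off.

```c
#include <stdint.h>

/* Hypothetical stand-in for the timer's match/interrupt flag register. */
volatile uint32_t TIMER_IR = 0;
#define MR0_FLAG (1u << 0)

uint32_t rtc_seconds;

/* Main-loop poll: no ISR and no software flag -- the hardware event
   flag itself is the "it happened" marker. */
void rtc_poll_hw(void)
{
    if (TIMER_IR & MR0_FLAG) {
        TIMER_IR &= ~MR0_FLAG;   /* simulated clear; real parts are often
                                    write-1-to-clear */
        rtc_seconds++;
    }
}
```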

Reply to
BlueDragon

That's the way I handle the software RTC. In my timer interrupt, I increment a tick count, which is a short integer. I generally have between 100 and 480 ticks per second. When the tick counter reaches the proper value, I increment a long integer which is the Unix seconds count and set the tick count to zero.

That is the minimum timer interrupt handler--and generally takes only a few microseconds. If there are other operations that need to be done at the tick interval with low jitter, they may also be done in the interrupt handler.
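The minimal tick-counting handler described above, as a host-testable sketch of the logic only (100 ticks per second is one of the rates mentioned; the variable names are illustrative):

```c
#include <stdint.h>

#define TICKS_PER_SECOND 100

volatile uint16_t tick_count;     /* short integer tick counter */
volatile uint32_t unix_seconds;   /* long integer Unix seconds count */

/* Timer tick ISR: a couple of increments and a compare, nothing more. */
void tick_isr(void)
{
    if (++tick_count >= TICKS_PER_SECOND) {
        tick_count = 0;
        unix_seconds++;
    }
}
```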

The foreground routine calls a function to retrieve the integer seconds and does all conversions between that long integer and various time and date displays using the standard C time conversion routines. The function that returns the seconds has to disable interrupts while fetching them, since a 32-bit read is not an atomic operation on the 16-bit MSP430.
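The atomic fetch might look like the sketch below. `__disable_irq`/`__enable_irq` stand in for whatever intrinsic your toolchain provides (on the MSP430 that would be something like `__disable_interrupt()`); the no-op stubs exist only so the sketch compiles on a host.

```c
#include <stdint.h>

volatile uint32_t unix_seconds;   /* updated by the tick ISR */

#ifndef __disable_irq             /* host-side no-op stubs */
#define __disable_irq() ((void)0)
#define __enable_irq()  ((void)0)
#endif

/* On a 16-bit CPU the two halves of a 32-bit variable could tear if
   the tick interrupt fires between the reads, so mask interrupts
   around the copy. */
uint32_t rtc_get_seconds(void)
{
    uint32_t s;
    __disable_irq();              /* critical section: ISR can't run */
    s = unix_seconds;
    __enable_irq();
    return s;
}
```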

Mark Borgerson

Reply to
Mark Borgerson

That's a fair cop, given the OP's minimalist requirements. As with Mark, I'm typically running a higher rate tick in the background but, yes, a simple 1 Hz ticker could be tested by examining (and resetting) the match flag.

--
Rich Webb     Norfolk, VA
Reply to
Rich Webb

One way of implementing this is to assume that a timer interrupt is available at, say, 1234 Hz; thus the average time between interrupts is

810372.77147487844408427876823339 ns.

Just use an ISR which adds 810373 to a 32-bit nanosecond counter. Each time the counter overflows (mod 1E9), add one to the seconds counter.

If the hardware does not support div/mod instructions usable in ISRs, just use some base2 operations, which can be implemented with shifts and mask instructions.
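The nanosecond-accumulator ISR described above can be sketched like this. At a 1234 Hz tick the ideal period is ~810372.77 ns, so each interrupt adds the rounded 810373 ns; the ~0.23 ns/tick rounding error is far below the crystal's own tolerance.

```c
#include <stdint.h>

#define NS_PER_TICK 810373u        /* rounded period of a 1234 Hz tick */
#define NS_PER_SEC  1000000000u

volatile uint32_t ns_accum;        /* 32-bit nanosecond accumulator */
volatile uint32_t rtc_secs;        /* running seconds count */

/* Tick ISR: one add, one compare, an occasional subtract. */
void tick_1234hz_isr(void)
{
    ns_accum += NS_PER_TICK;
    if (ns_accum >= NS_PER_SEC) {  /* "overflow mod 1E9" */
        ns_accum -= NS_PER_SEC;
        rtc_secs++;
    }
}
```

Note there is no divide or modulo at run time at all here, just a compare and subtract, so the base-2 shift-and-mask fallback is only needed if even this is too expensive.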

Paul

Reply to
Paul Keinanen

Hmmm, do you think that interval is sufficiently precise? ;-) Given the tolerances and temperature coefficients of standard crystals, 6 significant figures ought to be enough.

Is this supposed to be a joke? Why not just add 1 to a static short integer in the ISR and increment the seconds count each time the variable 'rolls over' at 1234?

That sounds like a special-case software floating point function.

I guess you can make a simple timing chore as complex as you like. If you keep this up, you'll get to the 18.2 ms tick interval of the IBM PC. ;-)

Mark Borgerson

Reply to
Mark Borgerson

What if the interrupt occurred at a 1234.567890123456 Hz rate? That would require weeks to get a usable reading.

Some operating systems (Windows NT at least) allow you to specify how many time units (e.g. 100 ns) elapse between timer interrupts.

Paul

Reply to
Paul Keinanen

If I understood you correctly, masking the interrupts may be unnecessary. This is pretty much how time is kept on the PPC (Power Architecture, as they now call it): two 32-bit registers give access to a 64-bit free-running counter. The way to read it is simple: read the MS part, then the LS part, then the MS part again; if the two MS reads differ, repeat.
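Written generically, the MS/LS/MS double-read looks like this. `read_tbu()`/`read_tbl()` stand in for the PPC `mftbu`/`mftb` instructions (or any split 64-bit counter); here they read host-side stand-in variables so the retry logic can be exercised.

```c
#include <stdint.h>

/* Host-side stand-ins for the timebase halves. */
volatile uint32_t fake_tbu, fake_tbl;

static uint32_t read_tbu(void) { return fake_tbu; }
static uint32_t read_tbl(void) { return fake_tbl; }

/* Read the high word, the low word, then the high word again;
   retry if a carry from LS to MS slipped in between the reads. */
uint64_t read_timebase(void)
{
    uint32_t hi, lo, hi2;
    do {
        hi  = read_tbu();
        lo  = read_tbl();
        hi2 = read_tbu();
    } while (hi != hi2);
    return ((uint64_t)hi << 32) | lo;
}
```

No interrupt masking is needed, which also makes this usable from user level.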

Obviously the "correct" way to get the real time is the one you describe: keep the kept time as simple as possible and calculate whatever is needed whenever it is needed. In DPS, I use the mentioned PPC timebase registers plus a known moment derived at boot time, and refreshed every hour or so from some platform-dependent RTC part (perhaps none... just using NTP to get the real time over the net then).

Dimiter

------------------------------------------------------
Dimiter Popoff, Transgalactic Instruments
------------------------------------------------------

Reply to
Didi

The problem with cycle counters is that you can't put the CPU to sleep.

VLV

Reply to
Vladimir Vassilevsky

I am sure they do count in nap mode on the parts I have used, though I really never used the sleep mode on these (it seemed a bad deal: almost as much consumption as nap, leaky dense technology, and a lot more hassle to get into and out of). But I can see how this could be a problem with your MCU PPC parts; I guess you will have to do a clock sync every time you come out of sleep (not necessarily practical, but it could be at times).

Dimiter


Reply to
Didi

Our machine is a BlackFin, and it has a 64-bit cycle counter. That would be very neat for time measurement; however, it is stopped in all low-power modes. The cost of keeping the CPU core always awake is +200 mW. There are peripheral timers which can run even while the core is halted, but they are only 32 bits wide (and designed by stupid idiots). I have to do tricks to get continuous and accurate time measurement.

VLV

Reply to
Vladimir Vassilevsky

That takes more cycles than the two instructions needed to disable, then reenable interrupts. However, it does have the advantage of avoiding about 1 microsecond of possible jitter in the response to the timer interrupt.

Mark Borgerson

Reply to
Mark Borgerson

Or two minutes to change to a crystal which is an integer multiple of your desired time interval. Isn't it Jeff Ciarcia of Circuit Cellar who says "my favorite programming language is solder"?

Mark Borgerson

Reply to
Mark Borgerson

If Jeff said that, he is a bastard and should give credit to Steve Ciarcia.

Reply to
Uniden

The benefit is indeed in latency, and in the ability to do this at user level, where masking interrupts is not that fast/easy at all. But the time penalty is not as bad as it may seem, because of the word "repeat": as I have it at the moment, the timebase is clocked at 33 MHz, so a carry from LS to MS occurs only once every 130 seconds or so. Without a carry we just have three reads, a compare, and a branch (with static prediction the obvious way). Masking/unmasking on Power would take 6 instructions, which is actually more (but this is Power/RISC specific, of course).

Dimiter


Reply to
Didi

For an MSP430 running at 4 MHz, 1 microsecond of jitter when servicing interrupts at 120 Hz isn't usually a problem.

On a M68K system or some ARMs, reading the 32-bit seconds requires no masking, as reading a long word is an atomic operation.

Which all goes to prove the point that in embedded system programming a LOT depends on the processor you are using--as well as the OS (if any).

Mark Borgerson

Reply to
Mark Borgerson

DOH! I should have visited the Circuit Cellar web site to get the attribution correct. I may have been mixing up names with Jeff Bachiochi, who writes the "From the Bench" articles in Circuit Cellar.

Mark Borgerson

Reply to
Mark Borgerson
