[cross-post] g21k and non-interruptible functions

Dear all,

I need to define some functions as "non-interruptible", and I remember a #pragma, at least for C51, that would do this. I looked for similar #pragmas for g21k but haven't found any hint.

Does anyone out there know how to reliably define a function as non-interruptible? I understand that I could disable all interrupts and enable them once the function has completed - which could potentially cause an interrupt to be lost, unless it is level-sensitive - but I believed there were some "utilities" to do this at the compilation level.

Some additional info: the non-interruptible function is one that returns the system time, so if it is interrupted while the value is being read it may return a wrong value.

Al

--
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
Reply to
Alessandro Basili

Such #pragmas are exceedingly non-portable. Besides, all they're going to do "under the hood" is disable interrupts for the duration of the function.

If your interrupt hardware is decent then you won't miss an interrupt: either the interrupt is level sensitive and the hardware that generates it won't give up until it is explicitly serviced, or the interrupt is edge sensitive and the interrupt controller takes care of remembering. In either case, it's your job to make sure that the interrupt will persist and be serviced as soon as you come out of the protected section of your function.

So -- bite the bullet and do it the grown-up way. You want to save interrupt status (because interrupts may already be turned off going into your function), do your interrupt-sensitive stuff as quickly as can be, restore interrupts to their former state, then exit.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Reply to
Tim Wescott

Obviously Tim is right if you need non-interruptable code.

But there are other ways of doing the job in hand - disabling interrupts is not always the only solution. For system timers like this, you can get correct results by reading repeatedly until you have two successive identical reads (and often you can do it with only partial duplicate reads). Most of the time you only need the two reads - but if you are unlucky and a timer rollover occurs during a read, perhaps because of an interrupt, then you will need an extra read.

You'll have to think carefully about the possible interactions between the code, the hardware, and interrupts - but it is typically quite possible without disabling interrupts.

Reply to
David Brown

If your application supports it, you could write some inline asm to mask all the interrupts. At least on the 21[234]xx, the sequencer will respond to latched interrupts when they are unmasked.

Mark DeArman

Reply to
Mac Decman

The word that I would use to refer to interrupt hardware that did not interrupt in such situations would be "broken".

--
Tim Wescott
Control system and signal processing consulting
Reply to
Tim Wescott

Haha, I have never done this but I remember it from the processor manual. That usage of the sequencer is beyond my application.

Mark DeArman

Reply to
Mac Decman

Since I don't know the G21K system, I'm not sure if this is true. On an ARM system with 32-bit memory, reading a 32-bit Unix time value should be an atomic operation and should not require any protection from interrupts. If the function returns seconds and fractions, it could use the well-established technique of reading the two or more values repeatedly until only the least-significant value changes between reads.

Unless the time function is unacceptably long and the system has a killer interrupt-response requirement, that's the way to go. If that technique is a problem, the system needs re-thinking.

Mark Borgerson

Reply to
Mark Borgerson

On 1/10/2012 7:04 PM, Tim Wescott wrote: [...]

[...]

Indeed I need to check both cases. I'm not sure the hardware will continue to assert the interrupt until it is serviced - I'm pretty sure the timer interrupt is held for 4 clock cycles and that's it, with no handshake of any sort. As for the interrupt controller, it is also not clear to me how I can test it. The description of the IRPTL functionality in the user manual is pretty clear, but in a very simple case I tried I was losing timer interrupts, and I'm not quite sure why yet. If the interrupt controller works correctly, I don't see why the timer interrupt could get lost (no nesting, and only the timer interrupt enabled).

Actually, before posting this reply I read the user manual again, and it is clearly stated that masking an interrupt does not prevent it from being latched; therefore I should not disable interrupts, but rather mask them.

Even though I do not need the application to be portable and could in principle whine a little about the lack of a #pragma, I believe that coding the functionality myself is good practice - it will surely teach me much more than a stupid preprocessor directive.

Thanks for the push!

Al

Reply to
Alessandro Basili

(snip)

But if you wait too long, you will miss an interrupt. Maybe there are interrupt controllers that can remember more than one pending event, though the usual solution is for the interrupt routine to take care of everything that needs to be done - emptying any I/O FIFOs, for example.

A computer I had a long time ago ran a terminal program with a software UART on a 6809. At some point I disassembled that routine and figured out that it took longer than the interrupt period to get through. It seems it relied on only processing every other interrupt.

-- glen

Reply to
glen herrmannsfeldt

That can lead to once-every-two-days kind of failure. A debugging nightmare.

Jerry

--
Engineering is the art of making what you want from things you can get.
Reply to
Jerry Avins

Note that some architectures do not tolerate glitches on the interrupt line. Glitches could be created when disabling the interrupt through a peripheral interrupt mask, instead of the CPU interrupt flag/level. If you disable an interrupt just after it occurs, the glitch that the CPU receives may be long enough that it triggers some portion of the interrupt handling, but too short to complete it.

Reply to
Arlet Ottens
[...]

Such behaviour would deserve to have the aforementioned word upped from "broken" to "FUBAR".

I mean, come on: _not_ losing track of events (before they repeat themselves) is the one job that interrupt hardware is charged with. If it can't do that job, it might as well give up its floorspace to something useful.

Reply to
Hans-Bernhard Bröker

Link with thread-safe libraries.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

It depends. The manufacturer could also say it's broken/fubar behavior to withdraw an interrupt before it's properly handled by the CPU, and there's some logic in that claim.

Anyway, I discovered this behavior the painful way on an ARM7. A short glitch on the FIQ line can cause an interrupt, but if you deassert it right away, the CPU can actually jump to the IRQ vector instead. Whatever you want to call that behavior, it's good to be cautious.

Reply to
Arlet Ottens

It will only lead to such problems if you have a badly designed system. You are right to be sceptical - after all, we are talking about an unbounded loop.

But if the circumstances are right, you can determine the maximum time for the function. In particular, you need to know your interrupt functions' maximum run times and rates, and how they relate to timer overflows (which can lead to a new run round the loop). If you know that these can never lead to a second overflow, your loop is clearly bounded. (And if you don't have that information about your interrupt functions, you'd better get it - otherwise you have no control over the rest of your scheduling.)

As with all such things, it is vital to think through the possibilities, and the circumstances that could lock up or delay the code. You want to be confident that the once-every-two-days failure will not occur.

Note also that while you don't want a once-every-two-days risk, some risks /are/ acceptable. Maybe a one-in-a-million chance is too much - but if a failure requires two coincidental, uncorrelated one-in-a-million chances, then it might be fine. Risk management, reliability, and secure programming are not about being sure that everything will always work. They are about being sure that the chances of failure, and the consequences of failure, are low enough to be acceptable.

Reply to
David Brown

As far as I know, the AD processors (do not quote me on this, as I have never verified the timing) take the flag from the interrupt processor and then evaluate the interrupt against the mask. If it is masked, they do nothing. This is a normal part of the sequencer delay and does not affect timing at all.

Mark DeArman

Reply to
Mac Decman

They do not exist, to my knowledge, for this architecture. As of now the only C library I have is the one ADI distributed along with their development suite (VisualDSP or the like). Unfortunately the device is old enough and unpopular enough to have very limited support. But I'll be more than happy to find out that I'm wrong.

Reply to
Alessandro Basili

On 1/10/2012 11:00 PM, glen herrmannsfeldt wrote: [snip]

While all the interrupts are asynchronous, each one has a limit on the maximum frequency it can run at. Therefore I know that as long as the interrupt-masked routine is shorter than two periods of the fastest interrupt, I should be on the safe side.

I do not have any wired-OR interrupts and all external interrupts have their own dedicated bit in the interrupt vector.

Reply to
Alessandro Basili

This is not unreasonable provided it's handled intelligently by the MCU when it does happen.

This however is totally unacceptable IMHO.

Which specific MCU was this ?

That's just seriously broken behaviour in my opinion. It's acceptable for the interrupt source information to no longer be available in this situation when your interrupt handler executes (and you need to handle that), but jumping to a different vector sounds like an MCU hardware-level bug IMHO.

(Of course, _if_ you don't need the FIQ line, you could just turn off the FIQ interrupt in the CPSR, but I would still be interested in knowing which MCU this was.)

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

It was a custom built ASIC, with an ARM7TDMI core (not sure what revision). In most cases, the IRQ would be quietly ignored, because the IRQ handler didn't see anything set in the interrupt status registers.

It would however crash the firmware when it was already servicing an IRQ. Even though IRQs were disabled, the FIQ glitch would cause an interrupt (because it was a FIQ), and then jump to the IRQ handler, messing up all the IRQ-banked registers.

By the way, the ARM7TDMI TRM says about nFIQ:

"This signal is level-sensitive and must be held LOW until a suitable interrupt acknowledge response is received from the processor"

So, the code was in violation of the ARM7 specification.

I have to say none of the peripheral interrupt sources would do this by themselves. It would only occur when disabling an interrupt by writing the interrupt mask register in the peripheral. I solved it by turning off the FIQ in the CPSR, masking the interrupt source, waiting for the write delay, and re-enabling the FIQ.

Reply to
Arlet Ottens
