ARM Interrupts

Hi there

Looking at several processors at the moment, ARM7 and Coldfire in particular. I like the popularity of ARM and the associated tools, but can't get over the fact that the interrupt structure within them (only two priority levels) is pants. Coldfire has 7 or 8 nestable levels.

I would appreciate anyone's view on this.

Cheers

Andrew

Reply to
Andrew Blackburn

Please remember that the ARM is a processor core only. It is usually embedded as part of a larger chip containing a bunch of necessary peripherals, including an interrupt controller.

For examples, get the data for Atmel AT91 series chips.

The need for several nested interrupt levels usually comes from an attempt to avoid a thread scheduler, or from trying to do things in the interrupt service that don't belong there.

--
Tauno Voipio
tauno voipio (at) iki fi
Reply to
Tauno Voipio

Look on the Freescale site for some app notes on the MAC7xxx series MCUs. There is an app note explaining how to do nested interrupts with the ARM. Their ARM controllers have, as do most of their other families, a very capable interrupt controller.

Regards Anton Erasmus

Reply to
Anton Erasmus

The ARM7 itself is only a processor core and does not include an interrupt controller. Most (all?) devices using the core have an external interrupt controller that prioritises to some level. There are also some good app notes on nesting interrupts. If you download some of the Atmel sample programs for their SAM7 series, you can see the interrupt entry and exit code they use. The Cortex-M3, on the other hand, has an integrated interrupt controller which is fully featured.

Regards, Richard.

Reply to
Jaba

No, it comes from trying to use a processor to do real-time things. Why are you saying he is trying to avoid using a scheduler? Maybe he is trying to do something useful in the real world like count pulses from a flow meter. Man, what has happened to "embedded" engineering? It is like handling interrupts is some kind of foreign thing.

Lou

Reply to
Mr. C

I've been handling hard real-time with single-level interrupts for over 40 years now.

The trick is to handle, in the interrupt service, only the immediate attention to the interrupt request line, and to leave the data processing to a high-priority thread triggered by the interrupt handler. To have threads, you need a scheduler (or at least a context switcher).

Interrupt handling is OK, but you have to get out of it ASAP, so as not to block the other interrupts for too long.

All the multi-level interrupt constructions I've seen actually attempt to do, in the interrupt routines, operations that belong to the peripheral-handling threads. This gives little or no advantage over a construction with the handlers in the associated threads. A single-level interrupt system is simpler in hardware, and that matters pretty often in embedded systems.
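
A minimal sketch of that pattern, to make it concrete. This is not from any particular RTOS: UART0_DR is a made-up data register, and sem_post_from_isr()/sem_wait() stand in for whatever signalling primitives your scheduler provides.

#include <stdint.h>

#define BUF_SIZE 64u                   /* power of two, so the modulo wraps cleanly */

extern volatile uint32_t UART0_DR;     /* hypothetical receive data register   */
extern void sem_post_from_isr(void);   /* hypothetical: wake the worker thread */
extern void sem_wait(void);            /* hypothetical: block until signalled  */
extern void process_byte(uint8_t b);

static volatile uint8_t  rx_buf[BUF_SIZE];
static volatile unsigned rx_head, rx_tail;

/* Interrupt service: drain the hardware register into a ring buffer
 * (which also clears the request), wake the thread, get out. */
void uart_isr(void)
{
    rx_buf[rx_head++ % BUF_SIZE] = (uint8_t)UART0_DR;
    sem_post_from_isr();
}

/* High-priority thread: all the real data processing happens here,
 * with interrupts enabled, so no other interrupt is blocked for long. */
void uart_thread(void)
{
    for (;;) {
        sem_wait();
        while (rx_tail != rx_head)
            process_byte(rx_buf[rx_tail++ % BUF_SIZE]);
    }
}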

--
Tauno Voipio
tauno voipio (at) iki fi
Reply to
Tauno Voipio

That's impressive - what processor were you using in 1966 ? :)

-jg

Reply to
Jim Granville

The first one was an IBM 1710, an IBM 1620 with an interrupt line.

The second one was built at the Helsinki University of Technology, called Reflac. It had a built-in 4-level interrupt system, and it was built for hard real-time. Of course, the time scales were remarkably longer than those we are considering now.

About 1/3 of Reflac was soldered by me.

--
Tauno Voipio
tauno voipio (at) iki fi
Reply to
Tauno Voipio

Most ARM implementations include third-party interrupt controllers that give prioritisation (or at least arbitration).

Very few designs actually *need* multi-level interrupts -- IMHO they encourage the development of sloppy code, running things that should really be task bodies at above user level. The job of an ISR is to service the interrupt source and get back down to user level for real work to be done ASAP.

Unfortunately, baroque interrupt controllers feature on too many CPUs these days, and "because it's there", fully exploiting them becomes a tick-list feature for RTOSes.

I've seen designs where the customer demanded a complex multi-level nested interrupt scheme in the OS, then, when it turned out to be slow (duuuuh, surprise!), managed to redesign everything elegantly using only single-level interrupts.

A plethora of priorities isn't an excuse for abandoning good design.

pete

--
pete@fenelon.com "That is enigmatic. That is textbook enigmatic..." - Dr Who
                 "There's no room for enigmas in built-up areas." - N Blackwell
Reply to
Pete Fenelon

If I'm reading this correctly, I'm not sure I agree with all of that. Coming from a time in micro design when you had only wired-OR IRQ and NMI lines, the problem is that you can execute a page of code just to get to the device that caused the interrupt, which is not very helpful.

The good designs, imho, are the ones that have fully vectored interrupts and priority levels that can be assigned to the different on- and off-chip peripherals via a register bitmap, in a user-preferred order. This gives the best response time and allows high-priority devices like scheduling timers to be set to a higher priority than serial drivers etc. It's also good for code modularity...
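
Something like the following sketch, say, for a hypothetical vectored controller; the register layout and source numbers are invented purely for illustration, loosely in the style of VIC-type controllers.

#include <stdint.h>

/* Hypothetical controller: one vector-address register and one
 * priority register per interrupt source, plus an enable bitmap. */
#define VIC_BASE     0xFFFF0000u
#define VIC_VECT(n)  (*(volatile uint32_t *)(VIC_BASE + 0x100u + 4u * (n)))
#define VIC_PRIO(n)  (*(volatile uint32_t *)(VIC_BASE + 0x200u + 4u * (n)))
#define VIC_ENABLE   (*(volatile uint32_t *)(VIC_BASE + 0x010u))

enum { SRC_TIMER = 4, SRC_UART0 = 6 };  /* made-up source numbers */

extern void timer_isr(void);
extern void uart0_isr(void);

void irq_setup(void)
{
    /* The scheduling timer gets a higher priority (0 = highest here)
     * than the serial driver, in exactly the user-preferred order
     * argued for above. Each driver can register its own vector, so
     * the dispatch path needs no driver headers. */
    VIC_VECT(SRC_TIMER) = (uint32_t)timer_isr;
    VIC_PRIO(SRC_TIMER) = 0;
    VIC_VECT(SRC_UART0) = (uint32_t)uart0_isr;
    VIC_PRIO(SRC_UART0) = 5;
    VIC_ENABLE = (1u << SRC_TIMER) | (1u << SRC_UART0);
}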

Chris

--
Greenfield Designs Ltd
-----------------------------------------------------------
Reply to
Chris Quayle

Guys

The application is real-time embedded. Surely it all depends on the criticality of the interrupt to be serviced and the amount of latency one can afford.

I've been looking at the Sharp ARM7s. Plenty of I/O, but pants on the interrupt vectors. High and low priority only, provided by the IPC.

The application, for those interested, is industrial metal detection. We're talking plenty of signal processing, as well as handling rejection mechanisms, half a dozen UARTs and a QVGA with touch screen.

Cheers for all your input.

Andrew

Reply to
Andrew Blackburn

I fully agree with this. Where interrupts are implemented as in the PC architecture, one ends up having extreme difficulty meeting some real-time deadlines. Whoever decided to make the keyboard interrupt the second-highest priority on the PC should be shot. The fact that they used a poorly designed interrupt controller does not help either. In a properly designed interrupt controller, as described above, one would have had the option of changing the interrupt priorities to something sensible.

Regards Anton Erasmus

Reply to
Anton Erasmus

Even the very first PDP-11, from 1970, had fully vectored interrupts, so it's not a new idea. Its spirit lives on in processors like the Coldfire and MSP430, both of which have a RISC core. It's faster to have dedicated hardware vector an interrupt than to poll registers, and it can aid data hiding and code modularity, because you don't end up with header files from several drivers included in the interrupt dispatch module. Of course, it saves silicon if you push the functionality into software, and I would suspect that's the real reason for leaving it out - can't think of any other justification...
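
For contrast, here is roughly what the single entry point looks like without hardware vectoring; the status register and handler names are hypothetical. Every driver leaks into this one module, and the chain of tests runs on every interrupt:

#include <stdint.h>

extern volatile uint32_t IRQ_PENDING;  /* hypothetical pending-status register */
extern void timer_isr(void);
extern void uart0_isr(void);
extern void spi_isr(void);

/* Software dispatch: poll the pending bits to find the source.
 * A vectored controller jumps straight to the right handler instead. */
void irq_dispatch(void)
{
    uint32_t pending = IRQ_PENDING;

    if (pending & (1u << 0)) timer_isr();
    if (pending & (1u << 1)) uart0_isr();
    if (pending & (1u << 2)) spi_isr();
    /* ...and so on, one test per source - the "page of code"
     * mentioned earlier in the thread. */
}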

Chris

--
Greenfield Designs Ltd
-----------------------------------------------------------
Reply to
Chris Quayle

Inexperience on the part of the designers? Incompetence? I sometimes find it disheartening that the best technical designs seem to be commercial failures, and vice versa.

Regards Anton Erasmus

Reply to
Anton Erasmus

Difficult to say; perhaps it wasn't a priority because the target market didn't need it, or they never got round to doing it right :-). Apparently, the original designers of the ARM were inspired by 8-bit designs like the 6502, widely used in early home computers. 8-bit micros typically had only a 2-level interrupt structure, though I seem to recall the Z80 peripheral devices were fully vectored. At the time, processor throughput and interrupt response were good enough, software was not very demanding, interface data rates were low, etc.

Now we have streaming video and the rest of the MM circus, which require much more throughput, memory bandwidth, DMA etc, but perhaps still don't need smart interrupt handling. From what I can see, ARM's market is primarily multimedia - phones, PDAs, set-top boxes, hard disk video recorders etc - for which it seems to do a great job. Different requirements to traditional embedded real time, though.

Even Wind River offer a supported Linux - wonder if they were pushed into this by the effect on revenue from open source, too much good and useful stuff to ignore, or what?...

Chris

--
Greenfield Designs Ltd
-----------------------------------------------------------
Reply to
Chris Quayle

While the 8259 is pretty basic as far as interrupt controllers go, it's trivially reprogrammable to put any of the eight interrupts at the top of the priority stack (e.g. you can request that the priority sequence be 23456701), and it supports an equal (rotating) priority mode as well as a mode (special mask mode) in which the ISRs can specify which interrupts are enabled with a couple of instructions, which lets you implement any priority scheme you want. That gets complicated in cascaded mode (e.g. with the AT scheme).
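
A sketch of both tricks, assuming the master PIC at the usual PC ports 0x20/0x21 and an outb(port, value) helper supplied elsewhere:

#include <stdint.h>

extern void outb(uint16_t port, uint8_t value);

#define PIC_CMD  0x20
#define PIC_DATA 0x21

/* OCW2 "set priority" command (R=1, SL=1, EOI=0): the level written
 * becomes the LOWEST priority. Setting IRQ1 lowest gives the
 * 2,3,4,5,6,7,0,1 sequence mentioned above. */
void pic_set_lowest(uint8_t irq)
{
    outb(PIC_CMD, 0xC0 | (irq & 7));
}

/* OCW3: enter/leave special mask mode, where the interrupt mask
 * register alone decides what is enabled, so an ISR can open and
 * close individual lines with a couple of instructions. */
void pic_special_mask(int on)
{
    outb(PIC_CMD, on ? 0x68 : 0x48);
}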

That the keyboard ended up as the second-highest priority interrupt has more to do with the way the 8259 is set up by the BIOS.

The APIC is rather more flexible.

Reply to
robertwessel2

The problem, if you set up the interrupts to 23456701, is that the timer interrupt is then the second lowest. For many types of apps, one would typically want the timer interrupt to be the highest priority, and then maybe some I/O interrupts for disk drives, Ethernet, etc.

The fact remains that the 8259 is not fully configurable; there are still some hard limits, in that one can only rotate the priorities, not choose them arbitrarily as on the 68000.

Haven't looked at this in detail, but I would expect an Advanced Programmable Interrupt Controller to be a bit more flexible than the 8259. Is it possible with the APIC to have the timer interrupt the highest, COM1 the second highest, the Ethernet interface the third highest, etc.?

Regards Anton Erasmus

Reply to
Anton Erasmus

Yes, there is considerable flexibility in assigning interrupt sources to targets (IOW, this interrupt pin to that interrupt vector on that CPU), although using it requires that you be rather aware of the way interrupts actually work in a modern system. The legacy IRQ0-15 stuff for the most part doesn't exist in any real fashion; it only appears because the APIC is in 8259 simulation mode.

For example, each PCI slot has four interrupt pins (A/B/C/D). Those may be shared in various ways between slots, or not at all, and eventually the mapping between those and actual IRQs has to be done by setting up the local and I/O APICs. Message-signaled interrupts add a further wrinkle, as do multiple CPUs and multiple APICs.
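
To make the routing concrete, here is a sketch of programming one I/O APIC redirection entry through its index/data window (IOREGSEL/IOWIN) at the conventional base address 0xFEC00000. On the local APIC, priority follows the vector number (the high nibble is the priority class), so giving the timer the highest vector answers the earlier question; level/polarity handling is omitted for brevity.

#include <stdint.h>

#define IOAPIC_BASE 0xFEC00000u
#define IOREGSEL (*(volatile uint32_t *)(IOAPIC_BASE + 0x00))
#define IOWIN    (*(volatile uint32_t *)(IOAPIC_BASE + 0x10))

static void ioapic_write(uint8_t reg, uint32_t val)
{
    IOREGSEL = reg;   /* select the register...           */
    IOWIN    = val;   /* ...then write through the window */
}

/* Route I/O APIC input 'pin' to 'vector' on the CPU with the given
 * APIC ID (fixed delivery, edge-triggered, unmasked). Redirection
 * entries start at register 0x10, two 32-bit halves per pin. */
void ioapic_route(unsigned pin, uint8_t vector, uint8_t apic_id)
{
    ioapic_write((uint8_t)(0x10 + 2 * pin + 1), (uint32_t)apic_id << 24);
    ioapic_write((uint8_t)(0x10 + 2 * pin), vector);
}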

OTOH, if you just want to mess with priorities, special mask mode on an 8259 can do a lot.

Of course, hardware I/O interrupt priorities are rarely* the correct solution; fixing the software is. If interrupts are disabled in the CPU, you're going to have latency problems no matter what the hardware does. OTOH, there's no real reason not to just handle an interrupt and then re-enable interrupts in general. Or use special mask mode: just take the interrupt, disable the one you just got, and re-enable all the others.

*Excepting some hardware (usually broken) with really hard latency requirements living with (arguably broken) system interrupt handler code that actually reenables interrupts but won't EOI the controller.
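
That last pattern, sketched in C; as before, the master 8259 is assumed at ports 0x20/0x21, and inb()/outb()/irq_enable() are hypothetical helpers (irq_enable() standing in for STI or equivalent):

#include <stdint.h>

extern uint8_t inb(uint16_t port);
extern void outb(uint16_t port, uint8_t value);
extern void irq_enable(void);     /* re-enable interrupts at the CPU */

#define PIC_CMD  0x20
#define PIC_DATA 0x21

void irq5_handler(void)
{
    uint8_t mask = inb(PIC_DATA);

    outb(PIC_DATA, mask | (1u << 5)); /* mask our own line           */
    outb(PIC_CMD, 0x20);              /* non-specific EOI            */
    irq_enable();                     /* everyone else may now nest  */

    /* ...long-running service work runs here, interruptibly... */

    outb(PIC_DATA, mask);             /* restore on the way out (a real
                                         handler would guard this
                                         read-modify-write)          */
}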
Reply to
robertwessel2
