Regarding interrupt sharing architectures!

Dear all,
  At many points during my career as an embedded developer I have come
across interrupt sharing, but I have not yet been fortunate enough to
understand what it means. Can anyone explain to me how interrupt
sharing works?
   My most pressing doubt is: how does the CPU come to know which
device has interrupted it? Is there a special register that holds a
reference to the devices sharing the same interrupt, so that the CPU
can check it and find out who the interrupt requester is?
  Also, in one OS manual I found it stated that, in case interrupts are
shared between multiple devices, it is the job of the ISR to verify
whether the device it is connected to has requested an interrupt or
not. I believe having such an architecture inside the ISR is a waste of
time, because whether the device requires service or not, the ISR will
be checking, which is very much like polling for a status.
Also, if what the manual says is true, where is the status identifying
the requester maintained, so that the ISR can tell who interrupted at a
given point in time?
I tried to get some books to understand this, but in vain. Most often I
see this shared interrupt concept in the PCI bus standard and in PCI
card implementations, but I have not understood until now how this
happens.
Can anyone point me to good links, or explain in more detail how this
works?
Regards,
s.subbarayan


Re: Regarding interrupt sharing architectures!



When several devices share the same interrupt line, the processor
must poll all of the devices to find out which one generated the
interrupt.

This is normally done in the ISR, usually by the processor reading
a status register on each device, i.e. performing read accesses in
the address space corresponding to those devices.
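
In C, a shared ISR of this kind might look roughly like the following
sketch (the device table, register layout, and "interrupt pending" bit
are purely hypothetical, just to show the structure):

#define NUM_SHARED_DEVICES  3

struct shared_dev {
    volatile unsigned int *status;   /* status register; bit 0 assumed to mean "interrupt pending" */
    void (*service)(int dev);        /* device-specific service routine, also clears the request */
};

extern struct shared_dev devices[NUM_SHARED_DEVICES];

/* ISR attached to the shared interrupt line. */
void shared_line_isr(void)
{
    int i;

    /* Poll every device on this line; more than one may be pending. */
    for (i = 0; i < NUM_SHARED_DEVICES; i++) {
        if (*devices[i].status & 0x1u)     /* did this device interrupt? */
            devices[i].service(i);         /* service it (which also clears its request) */
    }
}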

There are also vectored interrupts, where the device generating the
interrupt places the vector (or the address of the slot in memory
corresponding to the ISR) on the data bus. The ISR executed as a
result is dedicated to that particular device. Both the processor
and the devices must support this feature.

Some processors also have several auto-vectored, prioritized
interrupts. If only one device is assigned to a priority level,
then upon an interrupt the processor reads the vector corresponding
to that level, and the corresponding ISR is dedicated to the device.


Re: Regarding interrupt sharing architectures!
[F'up2 reduced to 1 group --- should have been done by OP]


Several logical interrupt sources drive the same physical interrupt
input on the CPU, combined by a simple logical OR.  I.e. if any of
them raises an interrupt request, the CPU will receive one.

Contrast this with a specialized chip like the "Programmable
Interrupt Controller" (PIC) found in the original IBM PC and its
offspring, which not only generated the IRQ to the CPU, but also
offered a register telling the CPU which of the PIC's inputs had
raised it.
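
With such a controller, the dispatch code doesn't need to touch every
device at all; it can read a single register on the controller and
branch directly. A rough C sketch, with a completely made-up controller
register and handler table:

/* Hypothetical interrupt controller: one "pending" register, one bit per input. */
#define IRQ_PENDING_REG  (*(volatile unsigned int *)0x40001000u)   /* address is assumed */
#define NUM_IRQ_INPUTS   8

extern void (*irq_handlers[NUM_IRQ_INPUTS])(void);   /* one handler per controller input */

void controller_dispatch(void)
{
    unsigned int pending = IRQ_PENDING_REG;   /* which inputs raised a request? */
    int i;

    for (i = 0; i < NUM_IRQ_INPUTS; i++) {
        if (pending & (1u << i))
            irq_handlers[i]();   /* go straight to the handler for that input */
    }
}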

> how does the CPU come to know which device has interrupted it?

It doesn't.  If it did, the interrupts wouldn't be shared, but
separate.

> Is there a special register that holds a reference to the devices
> sharing the same interrupt, so that the CPU can find out who the
> interrupt requester is?

No.  The CPU has to ask all registered ISRs for that shared interrupt
if they're responsible for it.  The driver has to know how to find out
if the device it manages has just created an interrupt (or at least
has to be able to find out if there's something to be done,
independent of the interrupt itself).
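
One driver's share of such a shared handler might look something like
this sketch in C (the register bits and the handled/not-handled
convention are invented for illustration, not any particular OS's API):

#define DEV_IRQ_PENDING  0x01u   /* assumed "I am interrupting" bit in the status register */

struct my_device {
    volatile unsigned int *status;   /* device status register */
    volatile unsigned int *ack;      /* write here to acknowledge/clear the request */
};

/* Returns 1 if our device caused the interrupt (and was serviced), 0 otherwise. */
int my_device_isr(struct my_device *dev)
{
    if (!(*dev->status & DEV_IRQ_PENDING))
        return 0;                    /* not ours -- some other device on the shared line */

    /* It was ours: do the actual work here (read data, refill FIFOs, ...). */

    *dev->ack = DEV_IRQ_PENDING;     /* clear the request at the device */
    return 1;
}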

> it is the job of the ISR to verify whether the device it is connected
> to has requested an interrupt or not

Exactly.


Well, sort of.  Sharing interrupts without need can indeed be a waste
of CPU cycles.  So you shouldn't usually do it if you can do without.

> the ISR will be checking, which is very much like polling for a status

No.  It's a middle way between a dedicated interrupt and polling.  You
still get no calls to your handler unless _something_ has happened.
It may just not have been something happening to the device the
particular driver is responsible for.

> where is the status identifying the requester maintained, so that the
> ISR can tell who interrupted?

That's for the individual ISR programmer to formulate --- it's none of
the OS's business to prescribe how hardware has to be constructed.

--
Hans-Bernhard Broeker ( snipped-for-privacy@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.

Re: Regarding interrupt sharing architectures!

Registering an interrupt source and raising a common IRQ is a very basic
thing, easily done in logic, and does not deserve the special treatment
you seem to afford it. It does require glue.

[...]

Re: Regarding interrupt sharing architectures!


In fact the MSP430 interrupt scheme is largely based on sharing. Most
interrupt status registers are designed so that you can use them
directly as an index into a lookup table to locate the appropriate ISR.
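
Roughly, the peripheral presents a single register whose value already
identifies which of its sources fired, so the ISR can switch on it
directly. A generic C sketch (the register name and offsets are
hypothetical, not taken from any particular MSP430 datasheet):

#define DEV_IV  (*(volatile unsigned int *)0x0200u)   /* hypothetical "interrupt vector" register */

/* Reads as 0 when nothing is pending, or as a small even offset identifying the
 * highest-priority pending source; the read is assumed to clear that source's flag. */
void peripheral_isr(void)
{
    switch (DEV_IV) {
    case 0x02:      /* source A pending */
        /* handle source A */
        break;
    case 0x04:      /* source B pending */
        /* handle source B */
        break;
    case 0x0A:      /* source C pending */
        /* handle source C */
        break;
    default:        /* 0: no interrupt pending */
        break;
    }
}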

Paul Burke

Re: Regarding interrupt sharing architectures!


Exactly.  And as soon as you add such glue, it's no longer really a
shared interrupt.

--
Hans-Bernhard Broeker ( snipped-for-privacy@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.

Re: Regarding interrupt sharing architectures!


Depends on the architecture.


That is the case for the "better" architectures; but on some
architectures (traditionally x86), the OS has no other choice than
to call all interrupt handlers defined for a specific interrupt
and have them return success or failure depending on whether they
were able to handle the interrupt.
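
In other words, the OS keeps a list of handlers per interrupt line and
simply tries each of them. A simplified C sketch of such a dispatcher
(the list structure and return convention are invented here; real
kernels differ in detail):

#define MAX_HANDLERS_PER_IRQ  8

typedef int (*irq_handler_t)(void *context);   /* returns nonzero if it handled the interrupt */

struct irq_line {
    irq_handler_t handler[MAX_HANDLERS_PER_IRQ];
    void         *context[MAX_HANDLERS_PER_IRQ];
    int           count;
};

/* Called by the low-level interrupt entry code for one (possibly shared) line. */
void dispatch_shared_irq(struct irq_line *line)
{
    int i, handled = 0;

    for (i = 0; i < line->count; i++)
        handled |= line->handler[i](line->context[i]);

    if (!handled) {
        /* nobody claimed it: a spurious interrupt, typically just counted or logged */
    }
}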


Yes, but deficiencies in the hardware/architecture don't allow for a
more efficient solution.


The ISR belongs to a certain device; each device has its
own status registers which need to be read or reset in order to
determine whether an interrupt occurred.

Casper
--
Expressed in this posting are my opinions.  They are in no way related
to opinions held by my employer, Sun Microsystems.
Re: Regarding interrupt sharing architectures!



Thanks, Mr. Casper and others. As mentioned in your reply, and in the
replies from the others who have posted: if what I understood is
correct, the maintaining of the request status, mask status, and
service status is part of the device and not part of the interrupt
controller. Until now I have been under the impression that these
registers are part of the interrupt controller and not the device
itself. (Is this thinking wrong?)

Can everyone in this group please confirm whether my understanding is
correct?

Thanks once again for all your replies, and looking forward to
confirmation of whether my understanding is correct or not.

Regards,
s.subbarayan


Re: Regarding interrupt sharing architectures!
[F'up2 reduced --- again.  Whitespace added after punctuation.]



You can either have an interrupt controller, or interrupt sharing, but
generally not both for the same interrupt signal.  The two concepts
are mutually exclusive.  They can still be combined in a hierarchy of
bundled and shared interrupt lines, of course.

In a nutshell, interrupt sharing is what you do when no free inputs
remain on the existing interrupt controller circuits, or when you've
exhausted the limited number of interrupt signal lines on a system bus
architecture like ISA or PCI.

--
Hans-Bernhard Broeker ( snipped-for-privacy@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.

Re: Regarding interrupt sharing architectures!

Yes, it does reduce to polling.  However, the polling is restricted
to a small group of devices (those that share the interrupt) and is
done only once (or one cycle) per interrupt.

The code has to go in the ISR.  Basically, each device (on that
interrupt) has a status register which can announce "I am
interrupting", and which can be reset by the ISR.  When all the
sharing devices are no longer interrupting, the ISR's work is done
and it exits, restoring overall interrupt service.
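
A C sketch of that style of ISR, rescanning until none of the sharing
devices still asserts its request (device table and status bit are
hypothetical):

struct sharer {
    volatile unsigned int *status;   /* bit 0 assumed to mean "I am interrupting" */
    void (*service)(void);           /* services the device and clears its request */
};

extern struct sharer sharers[4];     /* the devices wired to this one interrupt line */

void shared_isr(void)
{
    int any;

    /* Keep rescanning until no device on the line still requests service,
     * so that a request raised during the scan is not lost. */
    do {
        int i;

        any = 0;
        for (i = 0; i < 4; i++) {
            if (*sharers[i].status & 0x1u) {
                sharers[i].service();
                any = 1;
            }
        }
    } while (any);
}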

Re: Regarding interrupt sharing architectures!

well, in 360 ... there were channels, which were typically a shared i/o
bus.  the processor could mask interrupts from individual channels.
enabling interrupts from a channel allowed any pending interrupts
from devices on that channel to be presented.

an i/o interrupt would have the processor load a new PSW (program
status word; contains the instruction address, interrupt masking bits,
and a bunch of other stuff) from the i/o new psw location (this new psw
normally specified masking all interrupts) ... and store the current
PSW into the i/o old psw field.

OS interrupt routines were frequently referred to as FLIHs (first level
interrupt handlers), which would identify the running task, save the
current registers, copy in the old PSW information (preserving the
instruction address), etc. There was typically a FLIH specific to each
interrupt type (i/o, program, machine, supervisor call, external). The
I/O FLIH would pick up the device address from the i/o old PSW field
and locate the related control block structures.

the low-end and mid-range 360s typically had integrated channels ...
i.e. they had a microprocessing engine which was shared between the
microcode that implemented the 360 instruction set and the microcode
that implemented the channel logic.

an identified thruput issue in 360 was that the SIO (start i/o)
instruction would interrogate the channel, control unit, and device
as part of its operation. channel distances to a control unit could
be up to 200'. The various propagation and processing delays could
mean that an SIO instruction took a very long time.

for 370, they introduced a new instruction, SIOF (start i/o fast),
which would basically interrogate the channel, pass off the
information, and not wait to hear back from the control unit and
device. a new type of i/o interrupt was also defined: if there was
unusual status in the control unit or the device, in the 360 it
would be indicated as part of the completion of the SIO instruction.
With 370, the SIOF instruction had already completed ... so any
unusual selection status back from the control unit or the device now
had to be presented as a new i/o interrupt flavor.

however, interrupts themselves had a couple of thruput issues.  first,
in the higher performance cache machines, asynchronous interrupts
could have all sorts of bad effects on cache hit ratios. also, on large
systems there was an issue with device i/o redrive latency. in a big
system there might be a queue of requests from different sources for
the same device. a device would complete some operation and queue an
interrupt. the time from when the interrupt was queued until the
operating system took the interrupt, processed it, discovered there
was some queued i/o for the device, and redrove i/o to the device
could represent quite a bit of device idle time. compound this with
systems that might have several hundred devices, and this could
represent a fair amount of inefficiency.

370-XA added bump storage and expanded channel function with some
high-speed dedicated asynchronous processors. a new type of processor
instruction could add requests to a device i/o queue managed by the
channel processor. the processor could also specify that i/o
completion was to be placed on a queue that was likewise available to
the processor. the channel processor could now do real-time queuing of
device i/o completions and immediately restart the device with the
next request in the queue.

this was also frequently referred to as i/o handling offload. the
issue here was that some of the operating systems had a pathlength of
something like a few tens of thousands of instructions to take an i/o
interrupt and get around to doing an i/o device redrive.

in the late 70s, just prior to the introduction of 370-XA, i was
wandering around the disk engineering lab, and they had a problem
with regression testing of disk technology ("testcells") in
engineering development. they had tried doing some of this in a
traditional operating system environment and found that the
operating system MTBF was something like 15 minutes. As a result they
were scheduling stand-alone machine time between all the various
testcells contending for regression time. so i thought i would rewrite
an operating system i/o subsystem to be bulletproof against failures,
so they could work with all the testcells concurrently and not have
to wait for stand-alone test time.
http://www.garlic.com/~lynn/subtopic.html#disk

another thing i did was to clean up the total pathlength so that
device i/o redrive took a few hundred instructions instead of a
pathlength of a few tens of thousands of instructions. i claimed that
this would significantly mitigate the requirement for doing i/o
offload in 370-XA.

this was just a hypothetical argument (somewhat to highlight how
inefficient some kernel implementation pathlengths were). a number of
years earlier, I had done VAMPs, a multiprocessor system (that
never shipped to customers):
http://www.garlic.com/~lynn/subtopic.html#bounce

it had multiple close-in microcoded engines (in addition to the
processor microcode engines), all on the same memory bus. i had
designed a queued i/o offload interface for VAMPs.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn /

Re: Regarding interrupt sharing architectures!


Why is everything these days a 'technology'???  Interrupt sharing is a
method or technique but not a technology.


No, it has to find out for itself. The interrupt service routine has to
check each of the possible shared interrupting devices to find out
which one(s) interrupted. Obviously you need some means in hardware of
doing this.

Ian


Re: Regarding interrupt sharing architectures!


Because it sells ;-)  Have you never heard the would-be techies
explaining to anxious investors why they should spend their money
on the next cosmic revolution?



There is also daisy chaining of interrupts, with each device either
placing the vector on the bus or forwarding the signal to the next
device if it didn't request an interrupt. It worked wonders with the
good old 68K.


Re: Regarding interrupt sharing architectures!


And the PDP-11 (at least the Q-Bus version), the Z-80, the
Z-8000, and probably a bunch of others.

A very elegant solution.  It does require special backplane
connectors that short together the interrupt request in/out and
the interrupt ack in/out pins when a card is removed.  Otherwise
you can't have empty slots between cards.  I suppose you could put
dummy "stub" cards in.  For single-board systems that's not a
problem -- you just wire all the chips together and it works.

--
Grant Edwards
Re: Regarding interrupt sharing architectures!


On PDP-11s, the "bus grant" card was inserted into empty card slots to
provide interrupt continuity.

Unfortunately these "continuity" cards were very small.  Installing one
into a slot previously occupied by another card would either cause a
few bleeding fingers :-) or you had to remove the neighbouring cards,
insert the small bus grant card, and then reinstall the original cards.

Just a minor detail: if the bus grant card had been full size, there
would not have been any problems installing or removing it when actual
cards were removed or installed.

Paul


Re: Regarding interrupt sharing architectures!


Field service calls were not unusual after
a customer "reconfigured" their machine and
didn't get the bus grant card right.

I spent my early 20s maintaining DEC and
DG minis.  I have a working PDP-8/L in the
garage.



Re: Regarding interrupt sharing architectures!

There was a similar daisy-chaining of the interrupt and DMA chains
in the DG minis.

--

Tauno Voipio
tauno voipio (at) iki fi



Re: Regarding interrupt sharing architectures!

Because "nobody" understands what anything actually is
or how anything works, and the word "technology" is
commonly used as a synonym for "magic".

Many of today's programmers are only capable of cutting
and pasting together "technologies", and incapable of
creating anything.  The phrase "program their way out
of a paper bag" comes to mind.   (There have always been
such programmers, of course.  Just not so many of them.)

Re: Regarding interrupt sharing architectures!


Super LOL. Thanks for that, you made my day.

Ian
