Sending multiple MSI interrupts via Xilinx "AXI Memory Mapped to PCIe" core

I wanted to prepare a small IP core able to serialize multiple interrupts as MSI interrupts handled by the Xilinx AXI MM 2 PCIe bridge.

In the newest documentation  ( ) it is stated that:

* Additional IP is required in the Endpoint PCIe system to create the  
  prioritization scheme for the MSI vectors on the PCIe interface.
  (that's why I want to create that core)

* intx_msi_request
  Initiates a MSI write request when msi_enable = 1.
  Intx_msi_request is asserted for one clock period.
* The intx_msi_grant signal is asserted for one clock
  period when the interrupt is accepted by the PCIe core.

Does it mean that I should set the MSI interrupt number on msi_vector_num and assert intx_msi_request for one clock period, and then (with intx_msi_request deasserted) wait until intx_msi_grant goes high for one clock?

The more natural way would be for me to keep the intx_msi_request high until intx_msi_grant goes high...
Unfortunately the documentation does not contain any example timing diagram/waveform.

What if I have multiple interrupts to send?

Can I set the new msi_vector_num in the next cycle after asserting intx_msi_request, or should I keep the old value until it is confirmed by intx_msi_grant?

If the first option is right, may I assert intx_msi_request in the same cycle in which intx_msi_grant is asserted, to immediately send the next interrupt?

I've also sent this question to the Xilinx forum
( ) but have received no answer yet. Maybe somebody here can help.

Thank you in advance,
With best regards,

Re: Sending multiple MSI interrupts via Xilinx "AXI Memory Mapped to PCIe" core
I've analysed the axi_pcie.vhd module in the original Xilinx IP core, and I have found that:

* The msi_vector_num is delayed together with the intx_msi_request.
* The delayed msi_vector_num is read when the rising edge of
  the delayed intx_msi_request is detected. Therefore,
  the msi_vector_num value matters only in the cycle
  in which intx_msi_request is set to 1.
* After the delayed msi_vector_num is read, the state machine
  changes its state to INTR_HS. In that state it ignores further
  changes of intx_msi_request and msi_vector_num.
  The machine leaves that state only when sig_blk_interrupt_rdy
  is asserted. However, it is important that intx_msi_request
  goes low before the INTR_HS state is left (it is OK to set it high
  for only one cycle).
* As the delayed signals msi_vector_num and intx_msi_request are used,
  it is safe to set the new value of msi_vector_num and to assert
  intx_msi_request in the same cycle in which intx_msi_grant
  was set to '1'.
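Based on that analysis, the user-side serializer can be sketched as a small FSM like the one below. This is only an untested sketch of the handshake as I understand it: the irq_pending/irq_vector/irq_taken interface to the prioritization logic is my own invention, and I've assumed a 5-bit msi_vector_num.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Sketch only: serializes pending MSI vectors through the
-- intx_msi_request/intx_msi_grant handshake described above.
entity msi_serializer is
  port (
    clk              : in  std_logic;
    rst              : in  std_logic;
    -- hypothetical interface to the prioritization logic
    irq_pending      : in  std_logic;
    irq_vector       : in  std_logic_vector(4 downto 0);
    irq_taken        : out std_logic;
    -- AXI MM to PCIe bridge interrupt interface
    msi_enable       : in  std_logic;
    intx_msi_grant   : in  std_logic;
    intx_msi_request : out std_logic;
    msi_vector_num   : out std_logic_vector(4 downto 0)
  );
end entity;

architecture rtl of msi_serializer is
  type state_t is (IDLE, WAIT_GRANT);
  signal state : state_t := IDLE;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        state            <= IDLE;
        intx_msi_request <= '0';
        irq_taken        <= '0';
      else
        -- defaults: request and taken are one-cycle pulses
        intx_msi_request <= '0';
        irq_taken        <= '0';
        case state is
          when IDLE =>
            if msi_enable = '1' and irq_pending = '1' then
              -- the vector only matters in the request cycle
              msi_vector_num   <= irq_vector;
              intx_msi_request <= '1';
              irq_taken        <= '1';
              state            <= WAIT_GRANT;
            end if;
          when WAIT_GRANT =>
            if intx_msi_grant = '1' then
              if msi_enable = '1' and irq_pending = '1' then
                -- back-to-back: request again in the grant cycle,
                -- which the delayed sampling should make safe
                msi_vector_num   <= irq_vector;
                intx_msi_request <= '1';
                irq_taken        <= '1';
              else
                state <= IDLE;
              end if;
            end if;
        end case;
      end if;
    end if;
  end process;
end architecture;
```

If the bridge really samples only the delayed rising edge of intx_msi_request, the back-to-back path in WAIT_GRANT lets one interrupt be posted per handshake without idle cycles in between.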

However, the above scheme of interrupt handling raises one more question.

How is masking of interrupts handled?

In case of legacy interrupts, it is easy. When my device requires service,  
it keeps the IRQ line asserted.

If the interrupt is masked, the host CPU will not be interrupted. Of course
the device may be serviced in another way (e.g., via polling). So it may
happen that even though the interrupt was masked, the IRQ line may get deasserted.

When the interrupt gets unmasked, the interrupt will be generated only if
the IRQ line is still asserted.

Now, in the MSI handling scheme implemented in the AXI MM 2 PCIe, it is
unclear how the masking is handled.

The user IP core should assert intx_msi_request for one clock cycle, and
wait until intx_msi_grant is asserted (also for one cycle).

What happens if the interrupt is masked? Will intx_msi_grant never be
asserted? Then the core will wait forever and other MSI interrupts can't be
posted, which is obviously a bad solution.

If the core asserts intx_msi_grant even when the interrupt is masked, then
of course the next MSI interrupts may be posted, but what will be the state
of the current interrupt?

Will it be remembered as active, and will the appropriate ISR be executed
by the host CPU when the interrupt is unmasked?

However, what if the device is serviced by polling, and it does not
require servicing any more?

One of the main advantages of MSI(X) interrupts is that I don't need to
check the status of the device at the beginning of my ISR to ensure that my
device requires servicing. With that behavior I still have to do it!

OK. So what if the masked interrupt is confirmed (by intx_msi_grant) but
silently dropped? That case is even worse, because my IP core assumes
that the interrupt was successfully posted and the ISR will be executed,
which will never happen. So if that solution is implemented, my IP core
should resend the interrupt periodically until it is finally serviced. So
we get another crazy situation, where the peripheral must poll the PCIe host.

I hope that I'm mistaken in the above analysis. If not, it seems that
handling of MSI interrupts is somehow broken...

I'll be glad if someone explains how it is really handled.

Thank you in advance,
Best regards,

Re: Sending multiple MSI interrupts via Xilinx "AXI Memory Mapped to PCIe" core
I have found the following document:
It states (page 189):

"Per-vector masking is managed through a Mask and Pending bit pair
per MSI vector or MSI-X Table entry. An MSI vector is masked when its associated Mask bit is set. An MSI-X vector is masked when its
associated MSI-X Table entry Mask bit or the MSI-X Function Mask
bit is set. While a vector is masked, the function is prohibited from
sending the associated message, and the function must set the
associated Pending bit whenever the function would otherwise send
the message. When software unmasks a vector whose associated  
Pending bit is set, the function must schedule sending the
associated message, and clear the Pending bit as soon as the message
has been sent."
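For a single vector, the quoted Mask/Pending rule can be paraphrased as the illustrative model below. This is my reading of the spec text, not the actual core implementation; all port names are mine. event_i marks a new interrupt condition, mask_i mirrors the Mask bit written by software, and msi_send_o pulses when the message would be sent.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Illustrative model of the per-vector Mask/Pending rule quoted
-- above, for one MSI vector. Not the axi_pcie implementation.
entity msi_mask_model is
  port (
    clk        : in  std_logic;
    rst        : in  std_logic;
    event_i    : in  std_logic;  -- new interrupt condition
    mask_i     : in  std_logic;  -- per-vector Mask bit
    pending_o  : out std_logic;  -- per-vector Pending bit
    msi_send_o : out std_logic   -- message goes out this cycle
  );
end entity;

architecture rtl of msi_mask_model is
  signal pending : std_logic := '0';
begin
  pending_o <= pending;
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        pending    <= '0';
        msi_send_o <= '0';
      else
        msi_send_o <= '0';
        if event_i = '1' and mask_i = '1' then
          -- masked: prohibited from sending, set Pending instead
          pending <= '1';
        elsif event_i = '1' then
          -- unmasked: send the message immediately
          msi_send_o <= '1';
        elsif pending = '1' and mask_i = '0' then
          -- software unmasked a vector with Pending set:
          -- send the message and clear Pending
          msi_send_o <= '1';
          pending    <= '0';
        end if;
      end if;
    end if;
  end process;
end architecture;
```

Note that in this model there is no path that clears Pending without sending the message, which matches my reading that a triggered-but-masked MSI cannot be revoked.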

So it seems that, in the case of a masked MSI(X) interrupt, once triggered
it can't be revoked.
So if my driver switches to the polling mode for performance reasons
(like in NAPI), it must be prepared to receive an MSI(X) interrupt
as soon as it gets unmasked, even though the device does not require
servicing (it was serviced in the polling mode, before unmasking the
interrupt).
So reading the device status at the beginning of the ISR may be required,
like in the legacy mode...

Am I right?
