Re: [PATCH RFC] x86/acpi: don't ignore I/O APICs just because there's no local APIC

From: Jeremy Fitzhardinge
Date: Thu Jun 18 2009 - 15:34:38 EST

On 06/17/09 19:58, Eric W. Biederman wrote:
>> One of the options we discussed was changing the API to get rid of the exposed
>> vector, and just replace it with an operation to directly bind a gsi to a pirq
>> (internal Xen physical interrupt handle, if you will), so that Xen ends up doing
>> all the I/O APIC programming internally, as well as the local APIC.
> As an abstraction layer I think that will work out a lot better long term.
> Given what iommus do with irqs and DMA, I expect you want something like
> that, something that can be used from domU. Then you just make allowing the
> operation conditional on if you happen to have the associated hardware
> mapped into your domain.

A domU with a PCI passthrough device can bind a pirq to one of its event
channels. All the gsi->pirq binding happens in dom0, but binding a pirq
to an event channel can happen anywhere (that's why a gsi is never bound
directly to an event channel: event channels are strictly per-domain).

MSI interrupts also get bound to pirqs, so once the binding is created,
MSI and GSI interrupts can be treated identically (I think, I haven't
looked into the details yet).

>> On the Linux side, I think it means we can just point pcibios_enable/disable_irq
>> to our own xen_pci_irq_enable/disable functions to create the binding between a
>> PCI device and an irq.
> If you want xen to assign the linux irq number, that is absolutely the proper
> place to hook.

Yes. We'd want to keep the irq==gsi mapping for non-MSI interrupts, but
that's easy enough to arrange.

> When I was messing with the irq code, I do not recall finding many
> cases where migrating irqs from process context worked without hitting
> hardware bugs: ioapic state machine lockups and the like.

Keir mentioned that Xen avoids excessive masking/unmasking of interrupts
in the I/O APIC, because that has been problematic in the past. Is that
related to the problems you're talking about? Is there anywhere which
documents them?

> How does Xen handle domU with hardware directly mapped?

We call that "pci passthrough". Dom0 will bind the gsi to a pirq as
usual, and then pass the pirq through to the domU. The domU will bind
the pirq to an event channel, which gets mapped to a Linux irq and
handled as usual.

> Temporarily ignoring what we have to do to work with Xen 3.4, I'm curious
> if we could make the Xen dom0 irq case the same as the Xen domU case.

It is already; once the pirq is prepared, the process is the same in
both cases.
