Hi Marc,
On 11/18/2015 12:48 AM, Marc Zyngier wrote:
> On Tue, 17 Nov 2015 16:31:54 -0800
> Ray Jui <rjui@xxxxxxxxxxxx> wrote:
>> +static int iproc_msi_irq_domain_alloc(struct irq_domain *domain,
>> +				       unsigned int virq, unsigned int nr_irqs,
>> +				       void *args)
>> +{
>> +	struct iproc_msi *msi = domain->host_data;
>> +	int i, msi_irq;
>> +
>> +	mutex_lock(&msi->bitmap_lock);
>> +
>> +	for (i = 0; i < nr_irqs; i++) {
>> +		msi_irq = find_first_zero_bit(msi->used, msi->nirqs);
> This is slightly puzzling. Do you really have at most 6 MSIs? Usually,
> we end up with a larger number of MSIs (32 or 64) multiplexed on top of
> a small number of wired interrupts. Here, you seem to have a 1-1
> mapping. Is that really the case?
Yes, based on the poorly written iProc PCIe arch doc, :), we seem to
have a 1:1 mapping between each wired interrupt and each MSI, with each
MSI handled by an event queue that consists of 64 word entries allocated
from host memory (DDR). The MSI data is stored in the low 16 bits of
each entry, while the upper 16 bits of each entry are reserved for the
iProc PCIe controller's own use.
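
To make that layout concrete, here is how I picture an entry being
decoded (just a sketch; EQ_NR_ENTRIES, EQ_MSI_DATA_MASK, and
eq_entry_msi_data are made-up names for illustration, not from the
iProc doc):

#include <linux/types.h>

#define EQ_NR_ENTRIES		64	/* 64 word entries per event queue */
#define EQ_MSI_DATA_MASK	0xffff	/* low 16 bits carry the MSI data */

/*
 * Pull the MSI data out of the event queue entry at 'head'.  The queue
 * itself lives in host memory (DDR) allocated by the driver; the upper
 * 16 bits of each entry are owned by the controller and ignored here.
 */
static inline u16 eq_entry_msi_data(const u32 *eq, unsigned int head)
{
	return eq[head % EQ_NR_ENTRIES] & EQ_MSI_DATA_MASK;
}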
> If so (and assuming the wired interrupts are always contiguous), you
> shouldn't represent this as a chained interrupt (a multiplexer), but as
> a stacked irqchip, similar to what GICv2m does.
Okay, I think I might be missing something here, but I thought I
already have a stacked irq domain (chip) setup, i.e., GIC ->
inner_domain -> MSI domain?
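
For reference, this is roughly how I create the two domains. It's a
simplified sketch with error handling trimmed; iproc_msi_domain_ops,
iproc_msi_domain_info, and msi->parent_domain (the GIC's domain) stand
in for the real names:

static int iproc_msi_create_domains(struct device_node *node,
				    struct iproc_msi *msi)
{
	/* inner domain, stacked on top of the GIC's domain */
	msi->inner_domain = irq_domain_add_hierarchy(msi->parent_domain, 0,
						     msi->nirqs, node,
						     &iproc_msi_domain_ops,
						     msi);
	if (!msi->inner_domain)
		return -ENOMEM;

	/* PCI MSI domain, stacked on top of the inner domain */
	msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(node),
						    &iproc_msi_domain_info,
						    msi->inner_domain);
	if (!msi->msi_domain) {
		irq_domain_remove(msi->inner_domain);
		return -ENOMEM;
	}

	return 0;
}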
And does this imply I should expect 'nr_irqs' in this routine to always
be 1, so that I can get rid of the for loop here (and the same in the
domain free routine)?
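
For example, would something along these lines be the right direction,
assuming 'nr_irqs' can still be a power of 2 greater than 1 for
multi-MSI? It grabs one contiguous block up front (the way GICv2m does)
instead of searching the bitmap once per IRQ; iproc_msi_bottom_irq_chip
is just a placeholder name here:

static int iproc_msi_irq_domain_alloc(struct irq_domain *domain,
				      unsigned int virq, unsigned int nr_irqs,
				      void *args)
{
	struct iproc_msi *msi = domain->host_data;
	int hwirq, i;

	mutex_lock(&msi->bitmap_lock);

	/* one contiguous, power-of-2 sized block of hwirqs */
	hwirq = bitmap_find_free_region(msi->used, msi->nirqs,
					get_count_order(nr_irqs));

	mutex_unlock(&msi->bitmap_lock);

	if (hwirq < 0)
		return -ENOSPC;

	for (i = 0; i < nr_irqs; i++)
		irq_domain_set_info(domain, virq + i, hwirq + i,
				    &iproc_msi_bottom_irq_chip,
				    domain->host_data, handle_simple_irq,
				    NULL, NULL);

	return 0;
}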