Re: [PATCH v5 13/23] iommu: introduce device fault report API
From: Jean-Philippe Brucker
Date: Fri Sep 07 2018 - 07:23:19 EST
On 07/09/2018 08:11, Auger Eric wrote:
>>> On 09/06/2018 02:42 PM, Jean-Philippe Brucker wrote:
>>>> On 06/09/2018 10:25, Auger Eric wrote:
>>>>>> + mutex_lock(&fparam->lock);
>>>>>> + list_add_tail(&evt_pending->list, &fparam->faults);
>>>>> same doubt as Yi Liu. You cannot rely on userspace's willingness to
>>>>> drain the queue and deallocate this memory.
>>>
>>> By the way, I saw there is a kind of garbage collector for faults that
>>> never received a response. However, I am not sure this removes the
>>> concern of the kernel-side fault list growing beyond acceptable
>>> limits.
>>
>> How about per-device quotas? (https://lkml.org/lkml/2018/4/23/706 for
>> reference) With PRI the IOMMU driver already sets per-device credits
>> when initializing the device (pci_enable_pri), so if the device behaves
>> properly it shouldn't send new page requests once the number of
>> outstanding ones is maxed out.
>
> But this needs to work for the non-PRI use case too?
Only recoverable faults, PRI and stall, are added to the fparam->faults
list, because the kernel needs to make sure that each of these faults
gets a reply, or else they are held in hardware indefinitely.
Non-recoverable faults don't need tracking; the IOMMU API can forget
about them once they're reported. Rate-limiting could be done by the
consumer if it gets flooded with non-recoverable faults, for example by
dropping some of them.
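
To make the quota idea concrete, the enqueue path could look roughly
like this (untested sketch; nr_faults and max_faults are made-up fields
that don't exist in the current iommu_fault_param):

static int iommu_queue_recoverable_fault(struct iommu_fault_param *fparam,
					 struct iommu_fault_event *evt_pending)
{
	int ret = 0;

	mutex_lock(&fparam->lock);
	if (fparam->nr_faults >= fparam->max_faults) {
		/*
		 * The device exceeded its credits, it is misbehaving.
		 * Drop the fault instead of letting the list grow unbounded.
		 */
		ret = -ENOSPC;
	} else {
		fparam->nr_faults++;
		list_add_tail(&evt_pending->list, &fparam->faults);
	}
	mutex_unlock(&fparam->lock);

	return ret;
}

The counter would be decremented when the consumer replies to the fault
or when the garbage collector reclaims it.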
>> Right, an event contains more information than a PRI page request.
>> Stage-2 fields (CLASS, S2, IPA, TTRnW) cannot be represented by
>> iommu_fault_event at the moment.
>
> Yes, I am currently doing the mapping exercise between SMMUv3 events and
> iommu_fault_event, and I am missing config errors, for instance.
We may have initially focused only on guest and userspace config errors
(IOMMU_FAULT_REASON_PASID_FETCH, IOMMU_FAULT_REASON_PASID_INVALID, etc.),
since other config errors are most likely a bug in the host IOMMU
driver, and could be reported with pr_err.
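
Something like this is what I had in mind (untested sketch;
iommu_report_device_fault() is the function introduced by this patch,
the rest is illustrative):

static void smmu_report_config_error(struct device *dev,
				     struct iommu_fault_event *evt)
{
	switch (evt->reason) {
	case IOMMU_FAULT_REASON_PASID_FETCH:
	case IOMMU_FAULT_REASON_PASID_INVALID:
		/* Guest-owned config error: forward to the consumer (VFIO) */
		iommu_report_device_fault(dev, evt);
		break;
	default:
		/* Anything else is most likely a host IOMMU driver bug */
		dev_err(dev, "unexpected config error %d\n", evt->reason);
		break;
	}
}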
>> For precise emulation it might be
>> useful to at least add the S2 flag (as a new iommu_fault_reason?), so
>> that when the guest maps stage-1 to an invalid GPA, QEMU could for
>> example inject an external abort.
>
> Actually we may even need to filter events and return to the guest only
> the S1-related ones.
>>
>>> queue size, which may justify creating different APIs and internal
>>> data structs. This may also help separate the concerns.
>>
>> It might duplicate them. If the consumer of the event report is a host
>> device driver, the SMMU needs to report a "generic" iommu_fault_event,
>> and if the consumer is VFIO it would report a specialized one.
>
> I am unsure of my understanding of the UNRECOVERABLE error type. Is it
> everything other than PRI? For instance, are all SMMUv3 event errors
> supposed to fall under the IOMMU_FAULT_DMA_UNRECOV umbrella?
I guess it's more clear-cut in VT-d, which defines recoverable and
non-recoverable faults. In SMMUv3, PRI Page Requests are recoverable,
but event errors can also be recoverable if they have the Stall flag set.
Stall is a way for non-PCI endpoints to do SVA, and I have a patch in my
series that sorts events into PAGE_REQ and DMA_UNRECOV before feeding
them to this API: https://patchwork.kernel.org/patch/10395043/
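
In short, the sorting looks like this (simplified from that patch; the
EVTQ_0_STALL macro and the field names are approximate):

static void arm_smmu_report_event(struct device *dev,
				  struct iommu_fault_event *evt, u64 evt0)
{
	if (evt0 & EVTQ_0_STALL) {
		/* Stalled transaction: recoverable, handle as a page request */
		evt->type = IOMMU_FAULT_PAGE_REQ;
	} else {
		/* The transaction was already terminated by the SMMU */
		evt->type = IOMMU_FAULT_DMA_UNRECOV;
	}
	iommu_report_device_fault(dev, evt);
}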
> If I understand correctly there are different consumers for PRI and
> unrecoverable faults, so why not have two different APIs?
My reasoning was that for virtualization they go through the same
channel, VFIO, until the guest or the vIOMMU dispatches them depending
on their type, so we might as well use the same API.
In addition, host device drivers might also want to handle stall or PRI
events themselves instead of relying on the SVA infrastructure. For
example the MSM GPU with SMMUv2: https://patchwork.kernel.org/patch/9953803/
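
With the API from this patch, such a driver would do something like the
following (sketch; the registration call and handler signature are from
this patch, msm_gpu_fault_handler and the GPU structures are made up):

static int msm_gpu_fault_handler(struct iommu_fault_event *evt, void *data)
{
	struct msm_gpu *gpu = data;

	/* Hand the fault to the driver's own page fault worker */
	queue_work(gpu->wq, &gpu->fault_work);
	return 0;
}

static int msm_gpu_init_faults(struct msm_gpu *gpu, struct device *dev)
{
	/* Claim faults instead of going through the SVA infrastructure */
	return iommu_register_device_fault_handler(dev, msm_gpu_fault_handler,
						   gpu);
}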
>>> My remark also
>>> stems from the fact that the SMMU uses two different queues, whose
>>> sizes can also differ.
>>
>> Hm, for PRI requests the kernel-userspace queue size should actually be
>> the number of PRI credits for that device. Hadn't thought about it
>> before: where do we pass that info to userspace?
> Cannot help here at the moment, sorry.
>> For fault events, the
>> queue could be as big as the SMMU event queue, though using all that
>> space might be wasteful.
> The guest has its own programming of SMMU_EVENTQ_BASE.LOG2SIZE. This
> could be used to size the SW FIFO.
>
>> Non-stalled events should be rare and reporting
>> them isn't urgent. Stalled ones would need the number of stall credits I
>> mentioned above, which realistically will be a lot less than the SMMU
>> event queue size. Given that a device will use either PRI or stall but
>> not both, I still think events and PRI could go through the same queue.
> Did I get it right that PRI is for PCIe and stall for non-PCIe? But all
> that stuff is also related to the Page Request use case, right?
Yes, a stall event is a page request from a non-PCI device, but it comes
in through the SMMU event queue.
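
And in both cases the consumer completes the request the same way,
through the page response API later in this series (sketch, field names
approximate):

	struct page_response_msg msg = {
		.addr			= evt->addr,
		.pasid			= evt->pasid,
		.pasid_present		= true,
		.page_req_group_id	= evt->page_req_group_id,
		.resp_code		= IOMMU_PAGE_RESP_SUCCESS,
	};

	/* Resumes a stalled transaction or sends a PRI Page Response */
	iommu_page_response(dev, &msg);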
Thanks,
Jean