RE: A problem of Intel IOMMU hardware ?

From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
Date: Sun Mar 21 2021 - 19:52:35 EST


Hi Nadav,

> -----Original Message-----
> From: Nadav Amit [mailto:nadav.amit@xxxxxxxxx]
> Sent: Friday, March 19, 2021 12:46 AM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <longpeng2@xxxxxxxxxx>
> Cc: Tian, Kevin <kevin.tian@xxxxxxxxx>; chenjiashang
> <chenjiashang@xxxxxxxxxx>; David Woodhouse <dwmw2@xxxxxxxxxxxxx>;
> iommu@xxxxxxxxxxxxxxxxxxxxxxxxxx; LKML <linux-kernel@xxxxxxxxxxxxxxx>;
> alex.williamson@xxxxxxxxxx; Gonglei (Arei) <arei.gonglei@xxxxxxxxxx>;
> will@xxxxxxxxxx
> Subject: Re: A problem of Intel IOMMU hardware ?
>
>
>
> > On Mar 18, 2021, at 2:25 AM, Longpeng (Mike, Cloud Infrastructure
> > Service Product Dept.) <longpeng2@xxxxxxxxxx> wrote:
> >
> >
> >
> >> -----Original Message-----
> >> From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
> >> Sent: Thursday, March 18, 2021 4:56 PM
> >> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >> <longpeng2@xxxxxxxxxx>; Nadav Amit <nadav.amit@xxxxxxxxx>
> >> Cc: chenjiashang <chenjiashang@xxxxxxxxxx>; David Woodhouse
> >> <dwmw2@xxxxxxxxxxxxx>; iommu@xxxxxxxxxxxxxxxxxxxxxxxxxx; LKML
> >> <linux-kernel@xxxxxxxxxxxxxxx>; alex.williamson@xxxxxxxxxx; Gonglei
> >> (Arei) <arei.gonglei@xxxxxxxxxx>; will@xxxxxxxxxx
> >> Subject: RE: A problem of Intel IOMMU hardware ?
> >>
> >>> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >>> <longpeng2@xxxxxxxxxx>
> >>>
> >>>> -----Original Message-----
> >>>> From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
> >>>> Sent: Thursday, March 18, 2021 4:27 PM
> >>>> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >>>> <longpeng2@xxxxxxxxxx>; Nadav Amit <nadav.amit@xxxxxxxxx>
> >>>> Cc: chenjiashang <chenjiashang@xxxxxxxxxx>; David Woodhouse
> >>>> <dwmw2@xxxxxxxxxxxxx>; iommu@xxxxxxxxxxxxxxxxxxxxxxxxxx; LKML
> >>>> <linux-kernel@xxxxxxxxxxxxxxx>; alex.williamson@xxxxxxxxxx; Gonglei
> >>>> (Arei) <arei.gonglei@xxxxxxxxxx>; will@xxxxxxxxxx
> >>>> Subject: RE: A problem of Intel IOMMU hardware ?
> >>>>
> >>>>> From: iommu <iommu-bounces@xxxxxxxxxxxxxxxxxxxxxxxxxx> On Behalf
> >>>>> Of Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >>>>>
> >>>>>> 2. Consider ensuring that the problem is not somehow related to
> >>>>>> queued invalidations. Try to use __iommu_flush_iotlb() instead of
> >>>>>> qi_flush_iotlb().
> >>>>>>
> >>>>>
> >>>>> I tried to force the use of __iommu_flush_iotlb(), but something
> >>>>> went wrong and the system crashed, so I'd prefer to lower the
> >>>>> priority of this approach.
> >>>>>
> >>>>
> >>>> The VT-d spec clearly says that register-based invalidation can be
> >>>> used only when queued invalidations are not enabled. The Intel-IOMMU
> >>>> driver doesn't provide an option to disable queued invalidation,
> >>>> though, when the hardware is capable. If you really want to try,
> >>>> tweak the code in intel_iommu_init_qi.
> >>>>
> >>>
> >>> Hi Kevin,
> >>>
> >>> Thanks for pointing this out. Do you have any ideas about this
> >>> problem? I tried to describe the problem more clearly in my reply
> >>> to Alex; please have a look if you're interested.
> >>>
> >>
> >> btw I saw you used a 4.18 kernel in this test. What about the latest kernel?
> >>
> >
> > Not tested yet. It's hard to upgrade the kernel in our environment.
> >
> >> Also, one way to separate a software bug from a hardware bug is to
> >> trace the low-level interface (e.g., qi_flush_iotlb) which actually
> >> sends invalidation descriptors to the IOMMU hardware. Check the
> >> window between b) and c) and see whether the software does the right
> >> thing as expected there.
> >>
> >
> > We added some logging to the iommu driver over the past few days, and
> > the software seems fine. But we haven't looked inside qi_submit_sync
> > yet; I'll try that tonight.
>
> So here is my guess:
>
> Intel probably used an implementation of some other (regular) TLB design
> as a basis for the IOTLB.
>
> Intel SDM says regarding TLBs (4.10.4.2 “Recommended Invalidation”):
>
> "Software wishing to prevent this uncertainty should not write to a
> paging-structure entry in a way that would change, for any linear address, both the
> page size and either the page frame, access rights, or other attributes.”
>
>
> Now the aforementioned uncertainty is a bit different (multiple
> *valid* translations of a single address). Yet, perhaps this is yet another thing that
> might happen.
>
> From a brief look at the handling of MMU (not IOMMU) hugepages in Linux, indeed
> the PMD is first cleared and flushed before a new valid PMD is set. This is possible
> for MMUs since they allow the software to handle spurious page-faults gracefully.
> This is not the case for the IOMMU though (without PRI).
>

But in my case, flush_iotlb is called after the range (0x0, 0xa0000) is
unmapped. I have no idea why that invalidation isn't effective; I haven't
looked inside the QI path yet, but the driver reports no errors.
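
To check the QI side, I plan to add a one-line trace to qi_flush_iotlb()
so we can see the exact (did, addr, am, type) of every IOTLB invalidation
descriptor before it is queued. Roughly like this (a sketch against a
recent tree's drivers/iommu/intel/dmar.c; our 4.18 code still uses
desc.low/high and the two-argument qi_submit_sync()):

void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
		    unsigned int size_order, u64 type)
{
	u8 dw = 0, dr = 0;
	struct qi_desc desc;
	int ih = 0;

	if (cap_write_drain(iommu->cap))
		dw = 1;

	if (cap_read_drain(iommu->cap))
		dr = 1;

	desc.qw0 = QI_IOTLB_DID(did) | QI_IOTLB_DR(dr) | QI_IOTLB_DW(dw)
		| QI_IOTLB_GRAN(type) | QI_IOTLB_TYPE;
	desc.qw1 = QI_IOTLB_ADDR(addr) | QI_IOTLB_IH(ih)
		| QI_IOTLB_AM(size_order);
	desc.qw2 = 0;
	desc.qw3 = 0;

	/* added: log what is about to be invalidated */
	trace_printk("iotlb inv: did=%u addr=0x%llx am=%u type=0x%llx\n",
		     did, addr, size_order, type);

	qi_submit_sync(iommu, &desc, 1, 0);
}

If the (addr, am) pairs logged between b) and c) really cover the whole
unmapped range, that would point more strongly at the hardware.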

Could you please point me to the MMU code you mentioned above? In the MMU
code, is it possible for all entries of a PTE page to be not-present while
the PMD entry pointing to it is still present? (I take a guess at the code
you mean below, after the page tables.)

*Page table after (0x0, 0xa0000) is unmapped:
PML4: 0x1a34fbb003
PDPE: 0x1a34fbb003
PDE:  0x1a34fbf003
PTE:  0x0

*Page table after (0x0, 0xc0000000) is mapped:
PML4: 0x1a34fbb003
PDPE: 0x1a34fbb003
PDE:  0x15ec00883

(Note the new PDE has bit 7 (PS) set, so the same IOVA range goes from 4K
PTEs to a 2M superpage, which is exactly the page-size change the SDM
passage you quoted warns about.)
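
Regarding the MMU code: is the sequence you mean the collapse path in
mm/khugepaged.c? My (possibly wrong) reading of collapse_huge_page() in
a recent tree, heavily trimmed:

	pmd_ptl = pmd_lock(mm, pmd);
	/* clear the old PMD and flush the TLB for the covered range */
	_pmd = pmdp_collapse_flush(vma, address, pmd);
	spin_unlock(pmd_ptl);

	/* ... copy the 4K pages into the new huge page ... */

	_pmd = mk_huge_pmd(new_page, vma->vm_page_prot);
	_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);

	spin_lock(pmd_ptl);
	/* only now install the new 2M mapping */
	set_pmd_at(mm, address, pmd, _pmd);
	spin_unlock(pmd_ptl);

So the range really is non-present for a window, and a CPU racing with
the collapse just takes a spurious fault and retries, which a device
behind the IOMMU (without PRI) cannot do, as you said.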

> Not sure this explains everything though. If that is the problem, then
> during a mapping that changes page sizes, a TLB flush is needed, similar
> to the one Longpeng did manually.
>
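
For reference, the manual flush I added was essentially a
domain-selective IOTLB invalidation issued right after the map; roughly
like below (a sketch of the idea, not the exact patch we tested;
domain_flush_iotlb() is just an illustrative name):

	static void domain_flush_iotlb(struct dmar_domain *domain,
				       struct intel_iommu *iommu)
	{
		u16 did = domain->iommu_did[iommu->seq_id];

		/*
		 * Drop any stale IOTLB entries for the domain after a
		 * mapping that changed the page size of an IOVA range
		 * (4K PTEs replaced by a 2M superpage PDE).
		 */
		iommu->flush.flush_iotlb(iommu, did, 0, 0,
					 DMA_TLB_DSI_FLUSH);
	}

A domain-selective (DSI) flush is heavier than a page-selective one, but
for an experiment it avoids any doubt about the address/mask computation.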