Re: [PATCH v4 4/7] iommu/vt-d: Handle non-page aligned address

From: Lu Baolu
Date: Tue Jul 21 2020 - 21:16:10 EST


Hi Jacob,

On 7/22/20 12:50 AM, Jacob Pan wrote:
Hi Baolu,

Not sure what state this patch is in. There is a bug in it (see below);
shall I send an updated version of this patch only, or an incremental
fix on top?

Please send an updated version. I hope Joerg can pick these up as 5.8
fixes.

Best regards,
baolu


Thanks,

Jacob

On Mon, 6 Jul 2020 17:12:51 -0700
Jacob Pan <jacob.jun.pan@xxxxxxxxxxxxxxx> wrote:

From: Liu Yi L <yi.l.liu@xxxxxxxxx>

Address information for device TLB invalidation comes from userspace
when a device is directly assigned to a guest with vIOMMU support.
VT-d requires the address to be page aligned. This patch checks and
enforces page alignment; otherwise reserved bits can be set in the
invalidation descriptor, and an unrecoverable fault will be reported
due to the non-zero value in those reserved bits.
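
(Illustration only, not part of the patch: a minimal user-space sketch
of the page-alignment rule described above. VTD_PAGE_SHIFT is assumed
to be 12 as in the kernel headers; the variable names are made up for
this example.)

#include <stdint.h>
#include <stdio.h>

#define VTD_PAGE_SHIFT	12

int main(void)
{
	uint64_t aligned   = 0x201000;	/* 4KB aligned: low 12 bits clear */
	uint64_t unaligned = 0x201080;	/* offset 0x80 into the page */
	uint64_t low_mask  = (1ULL << VTD_PAGE_SHIFT) - 1;

	/* Any non-zero low bits would end up in the S/reserved bit
	 * positions of the descriptor's qw1 rather than the address. */
	printf("aligned   low bits: %#llx\n",
	       (unsigned long long)(aligned & low_mask));	/* 0 */
	printf("unaligned low bits: %#llx\n",
	       (unsigned long long)(unaligned & low_mask));	/* 0x80 */
	return 0;
}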

Fixes: 61a06a16e36d8 ("iommu/vt-d: Support flushing more translation cache types")
Acked-by: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
Signed-off-by: Liu Yi L <yi.l.liu@xxxxxxxxx>
Signed-off-by: Jacob Pan <jacob.jun.pan@xxxxxxxxxxxxxxx>

---
drivers/iommu/intel/dmar.c | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index d9f973fa1190..b2c53bada905 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1455,9 +1455,25 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
 	 * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
 	 * ECAP.
 	 */
-	desc.qw1 |= addr & ~mask;
-	if (size_order)
+	if (addr & GENMASK_ULL(size_order + VTD_PAGE_SHIFT, 0))
+		pr_warn_ratelimited("Invalidate non-aligned address %llx, order %d\n", addr, size_order);
+
+	/* Take page address */
+	desc.qw1 = QI_DEV_EIOTLB_ADDR(addr);
+
+	if (size_order) {
+		/*
+		 * Existing 0s in address below size_order may be the least
+		 * significant bit, we must set them to 1s to avoid having
+		 * smaller size than desired.
+		 */
+		desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT,
+					VTD_PAGE_SHIFT);
Yi reported the issue; it should be:
		desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT - 1,
					VTD_PAGE_SHIFT);

+		/* Clear size_order bit to indicate size */
+		desc.qw1 &= ~mask;
+		/* Set the S bit to indicate flushing more than 1 page */
 		desc.qw1 |= QI_DEV_EIOTLB_SIZE;
+	}
qi_submit_sync(iommu, &desc, 1, 0);
}
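
(Illustration only: a standalone sketch of the difference between the
two GENMASK_ULL() ranges discussed above. GENMASK_ULL() is
re-implemented locally; VTD_PAGE_SHIFT and the mask/page computations
follow the kernel definitions as I understand them, and addr/size_order
are made-up example values.)

#include <stdint.h>
#include <stdio.h>

#define VTD_PAGE_SHIFT	12
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

int main(void)
{
	uint64_t addr = 0x200000;	/* 8KB aligned */
	unsigned int size_order = 1;	/* flush 2 pages */
	uint64_t mask = 1ULL << (VTD_PAGE_SHIFT + size_order - 1);
	uint64_t page = addr & ~((1ULL << VTD_PAGE_SHIFT) - 1);
	uint64_t buggy, fixed;

	buggy = page | GENMASK_ULL(size_order + VTD_PAGE_SHIFT, VTD_PAGE_SHIFT);
	fixed = page | GENMASK_ULL(size_order + VTD_PAGE_SHIFT - 1, VTD_PAGE_SHIFT);

	/* Clear the size_order bit, as the patch does, to encode the range. */
	buggy &= ~mask;
	fixed &= ~mask;

	/* buggy: 0x202000 - address bit 13 is forced to 1, so the wrong
	 * 8KB region would be targeted.
	 * fixed: 0x200000 - address preserved; bit 12 == 0 still encodes
	 * the 8KB range. */
	printf("buggy: %#llx\nfixed: %#llx\n",
	       (unsigned long long)buggy, (unsigned long long)fixed);
	return 0;
}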

[Jacob Pan]