Re: [PATCH v14 6/6] iommu: Remove mode argument from iommu_set_dma_strict()

From: Lu Baolu
Date: Mon Jun 21 2021 - 10:32:36 EST


Hi Robin,

On 2021/6/21 19:59, Robin Murphy wrote:
On 2021-06-21 11:34, John Garry wrote:
On 21/06/2021 11:00, Lu Baolu wrote:
void iommu_set_dma_strict(bool force)
{
	if (force == true)
		iommu_dma_strict = true;
	else if (!(iommu_cmd_line & IOMMU_CMD_LINE_STRICT))
		iommu_dma_strict = true;
}

So we would use iommu_set_dma_strict(true) for a) and b), but iommu_set_dma_strict(false) for c).

Yes. We need to distinguish the "must" and "nice-to-have" cases of
setting strict mode.
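
To make the intended call sites concrete, here is a minimal sketch of how the two flavours might be used. The exact locations, and the caching-mode check standing in for case c), are assumptions for illustration only, not code from the series:

	/*
	 * a)/b): the "must" cases, e.g. an explicit iommu.strict=1 or a
	 * driver that requires synchronous invalidation; always takes effect.
	 */
	iommu_set_dma_strict(true);

	/*
	 * c): the virtualization optimisation. This only upgrades to strict
	 * mode when the user did not explicitly pass iommu.strict=0, so an
	 * explicit user choice is preserved.
	 */
	if (cap_caching_mode(iommu->cap))
		iommu_set_dma_strict(false);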


Then I am not sure what you want to do with the accompanying print for c). It was:
"IOMMU batching is disabled due to virtualization"

And now is from this series:
"IOMMU batching disallowed due to virtualization"

Using iommu_get_dma_strict(domain) is not appropriate here to know the current mode (so we know whether to print).
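
Purely as an illustration of the coupling being described here (not something proposed in the thread), one hypothetical shape would be for the setter itself to report the resulting mode, so the caller could gate the print without consulting a domain:

	bool iommu_set_dma_strict(bool force)
	{
		if (force || !(iommu_cmd_line & IOMMU_CMD_LINE_STRICT))
			iommu_dma_strict = true;

		return iommu_dma_strict;
	}

	/* c) call site: print only when batching really ends up disallowed */
	if (iommu_set_dma_strict(false))
		pr_info("IOMMU batching disallowed due to virtualization\n");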

Note that this change would mean that the current series would require non-trivial rework, which would be unfortunate so late in the cycle.

This patch series looks good to me and I have added my Reviewed-by.
Probably we could make another patch series to improve it so that the
kernel optimization does not override the user setting.

On a personal level I would be happy with that approach, but I think it's better to not start changing things right away in a follow-up series.

So how about we add this patch (which replaces 6/6 "iommu: Remove mode argument from iommu_set_dma_strict()")?

Robin, any opinion?

For me it boils down to whether there are any realistic workloads where non-strict mode *would* still perform better under virtualisation.

At present, we see that strict mode has better performance in the
virtualization environment because it makes shadow page table
management more efficient. When the hardware supports nested
translation, we may have to re-evaluate this, since there's no need for
a shadow page table anymore.

The only reason for the user to explicitly pass "iommu.strict=0" is that they expect it to increase unmap performance; if it's only ever going to lead to an unexpected performance loss, I don't see any value in overriding the kernel's decision purely for the sake of subservience.

If there *are* certain valid cases for allowing it for people who really know what they're doing, then we should arguably also log a counterpart message to say "we're honouring your override but beware it may have the opposite effect to what you expect" for the benefit of other users who assume it's a generic go-faster knob. At that point it starts getting non-trivial enough that I'd want to know for sure it's worthwhile.

The other reason this might be better to revisit later is that an AMD equivalent is still in flight[1], and there might be more that can eventually be factored out. I think both series are pretty much good to merge for 5.14, but time's already tight to sort out the conflicts which exist as-is, without making them any worse.

Agreed. We could revisit it later.

Best regards,
baolu