Hi Robin,
Thanks for the reply.
On Mon, Jan 08, 2024 at 05:35:26PM +0000, Robin Murphy wrote:
> Hmm, we've got what looks to be a set of magazines forming a plausible
> depot list (or at least the tail end of one):
>
> ffff8881411f9000 -> ffff8881261c1000
> ffff8881261c1000 -> ffff88812be26400
> ffff88812be26400 -> ffff8188392ec000
> ffff8188392ec000 -> ffff8881a5301000
> ffff8881a5301000 -> NULL
>
> which I guess has somehow become detached from its rcache->depot
> without being freed properly? However I'm struggling to see any
> conceivable way that could happen which wouldn't already be more
> severely broken in other ways as well (i.e. either general memory
> corruption or someone somehow still trying to use the IOVA domain
> while it's being torn down).
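For context on how I'm reading this: if I understand struct
iova_magazine in drivers/iommu/iova.c correctly after the depot rework,
a magazine sitting on the depot is always full, so its size field is
reused as the next pointer, and the depot is a singly linked,
NULL-terminated list of such magazines. A minimal sketch of that layout
(my reading, not the kernel code verbatim), plus a hypothetical
dump_depot() helper that would print a chain in the same
"addr -> addr" format as above:

#include <linux/printk.h>

/* Sketch of my reading of struct iova_magazine; IOVA_MAG_SIZE is 127
 * in the current iova.c, if I read it right.
 */
#define IOVA_MAG_SIZE 127

struct iova_magazine {
	union {
		unsigned long size;		/* while on a CPU rcache */
		struct iova_magazine *next;	/* while on the depot */
	};
	unsigned long pfns[IOVA_MAG_SIZE];
};

/* Hypothetical debug helper (mine, not in the kernel): walk a depot
 * chain and print each magazine and its successor, ending at NULL.
 */
static void dump_depot(struct iova_magazine *mag)
{
	while (mag) {
		pr_info("%px -> %px\n", mag, mag->next);
		mag = mag->next;
	}
}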
The machine is running a debug kernel that, among other things, has
KASAN enabled, but there are no KASAN splats in the kernel log, so
there is no memory corruption that I'm aware of.
> Out of curiosity, does reverting just patch #2 alone make a difference?
Will try and let you know.
> And is your workload doing anything "interesting" in relation to IOVA
> domain lifetimes, like creating and destroying SR-IOV virtual
> functions, changing IOMMU domain types via sysfs, or using that
> horrible vdpa thing, or are you seeing this purely from regular driver
> DMA API usage?
The machine is running networking-related tests, but it is not using
SR-IOV, VMs or vDPA, so there shouldn't be anything "interesting" as
far as the IOMMU is concerned.
The two networking drivers on the machine are "igb" for the management
port and "mlxsw" for the data ports (the machine is a physical switch).
I believe the DMA API usage in the latter is quite basic, and I don't
recall any DMA-related problems with this driver since it was first
accepted upstream in 2015.
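To be concrete about what I mean by "basic", it is essentially the
usual streaming map/unmap pattern, roughly along these lines (a generic
sketch, not actual mlxsw code; the helper names and parameters are made
up):

#include <linux/dma-mapping.h>

/* Generic sketch of the streaming DMA pattern I have in mind; not
 * lifted from mlxsw. dev, buf and len are placeholders.
 */
static int example_map_rx_buf(struct device *dev, void *buf, size_t len,
			      dma_addr_t *mapping)
{
	*mapping = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, *mapping))
		return -ENOMEM;

	return 0;
}

static void example_unmap_rx_buf(struct device *dev, dma_addr_t mapping,
				 size_t len)
{
	dma_unmap_single(dev, mapping, len, DMA_FROM_DEVICE);
}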