On Thu, 2020-09-03 at 14:41 +1000, Alexey Kardashevskiy wrote:
I am new to this, so I am trying to understand how a memory page mapped for DMA and also used for something else could be a problem.
From the device's perspective, there is PCI space and everything from 0
to 1<<64 is accessible; what that is mapped to, the device does not
know. The PHB's IOMMU is the thing that notices an invalid access and
raises EEH, but the PHB only knows about the PCI->physical memory
mapping (in IOMMU pages) and nothing about the host kernel pages. Does this help? Thanks,
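To make the above concrete, here is a toy model of that translation step (the names, types and page size below are made up for illustration, not the actual kernel or PHB code): anything covered by a valid IOMMU page is reachable by the device, and the PHB has no idea which host kernel page, process or VM owns the memory behind it.

#include <stdbool.h>
#include <stdint.h>

#define IOMMU_PAGE_SHIFT	16	/* e.g. 64kB IOMMU pages (hypothetical) */

struct tce_table {
	uint64_t *entries;		/* IOMMU page number -> physical page base */
	unsigned long nr_entries;
};

/* Returns true if the DMA is allowed; on an invalid access the PHB
 * would raise EEH instead. Note there is no notion of ownership here:
 * whatever a valid entry points at, the device can reach. */
static bool phb_translate(struct tce_table *tbl, uint64_t bus_addr,
			  uint64_t *phys_addr)
{
	uint64_t idx = bus_addr >> IOMMU_PAGE_SHIFT;

	if (idx >= tbl->nr_entries || !tbl->entries[idx])
		return false;		/* invalid access -> EEH */

	*phys_addr = tbl->entries[idx] |
		     (bus_addr & ((1ULL << IOMMU_PAGE_SHIFT) - 1));
	return true;
}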
According to our conversation on Slack:
1- There is a problem if a hypervisor gives its VMs contiguous
memory blocks that are not aligned to IOMMU pages, because then an
iommu_map_page() could map some memory in this VM and some memory in
another VM / process.
2- To guarantee this does not happen, we should have system pagesize >= IOMMU pagesize (see the sketch below).
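As a minimal sketch of why (2) matters, with made-up numbers rather than anything taken from the kernel: if the system page is 4kB and the IOMMU page is 64kB, one IOMMU mapping necessarily covers 16 system pages, so mapping a single page for DMA also exposes its 15 neighbours, which may belong to another VM or process.

#include <stdio.h>

int main(void)
{
	unsigned long sys_page_shift   = 12;	/* 4kB system pages (hypothetical)  */
	unsigned long iommu_page_shift = 16;	/* 64kB IOMMU pages (hypothetical) */

	/* One system page we want to map for DMA. */
	unsigned long buf = 0x11000;

	/* The mapping has to cover a whole IOMMU page, so it gets rounded
	 * to IOMMU page boundaries. */
	unsigned long map_start = buf & ~((1UL << iommu_page_shift) - 1);
	unsigned long map_end   = map_start + (1UL << iommu_page_shift);

	unsigned long covered = (map_end - map_start) >> sys_page_shift;

	/* Prints 16: 15 of these system pages were never asked for. */
	printf("mapping 1 system page makes %lu system pages reachable\n",
	       covered);
	return 0;
}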
One way to get (2) is by doing this in enable_ddw():
	if ((query.page_size & 4) && PAGE_SHIFT >= 24) {
		page_shift = 24; /* 16MB */
	} else if ((query.page_size & 2) && PAGE_SHIFT >= 16) {
		page_shift = 16; /* 64kB */
	} else if ((query.page_size & 1) && PAGE_SHIFT >= 12) {
		page_shift = 12; /* 4kB */
[...]
Another way of solving this would be to add to the LoPAR documentation
that the block size of contiguous memory the hypervisor gives a VM
should always be aligned to the IOMMU page sizes offered.
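As a rough illustration of what that documentation requirement would mean (the helper below is hypothetical, not from LoPAR or any hypervisor code): every contiguous memory block given to a VM would have to be a multiple of the IOMMU page size offered, so that no IOMMU page can straddle two owners.

#include <stdbool.h>

/* Hypothetical hypervisor-side check: a contiguous block handed to a VM
 * is only acceptable if it is a multiple of the IOMMU page size, so a
 * single IOMMU page never spans memory belonging to two different VMs. */
static bool block_aligned_to_iommu(unsigned long block_size,
				   unsigned int iommu_page_shift)
{
	unsigned long iommu_page_size = 1UL << iommu_page_shift;

	return (block_size & (iommu_page_size - 1)) == 0;
}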
I think we have diverged from the original patch :) I am not quite sure what you were fixing there. Thanks,
I think the best approach would be to first send the above patch, which
is faster, and then work on adding this to the documentation, so that
hypervisors guarantee this.
If this gets into the docs, we can revert the patch.
What do you think?