On 08.10.22 14:08, Xenia Ragiadakou wrote:
Hello Xenia
On 10/7/22 16:27, Oleksandr Tyshchenko wrote:
Hi Oleksandr
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
The offset in the page passed to xen_grant_dma_map_page()
can be > PAGE_SIZE even if the guest uses the same page granularity
as Xen (4KB).
Before this patch, if such a case happened, we ended up providing
grants for the whole region in xen_grant_dma_map_page(), which
was really unnecessary. Moreover, we ended up not releasing all
the grants that represented that region in xen_grant_dma_unmap_page().
This patch updates the code to be able to deal with such cases.
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
---
Cc: Juergen Gross <jgross@xxxxxxxx>
Cc: Xenia Ragiadakou <burzalodowa@xxxxxxxxx>
Depends on:
https://lore.kernel.org/xen-devel/20221005174823.1800761-1-olekstysh@xxxxxxxxx/
Should go in only after that series.
---
drivers/xen/grant-dma-ops.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index c66f56d24013..1385f0e686fe 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -168,7 +168,9 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
unsigned long attrs)
{
struct xen_grant_dma_data *data;
- unsigned int i, n_pages = PFN_UP(offset + size);
+ unsigned long dma_offset = offset_in_page(offset),
+ gfn_offset = PFN_DOWN(offset);
+ unsigned int i, n_pages = PFN_UP(dma_offset + size);
IIUC, the above with a later patch will become:
dma_offset = xen_offset_in_page(offset)
gfn_offset = XEN_PFN_DOWN(offset)
n_pages = XEN_PFN_UP(dma_offset + size)
If by "later patch" you mean "xen/virtio: Convert
PAGE_SIZE/PAGE_SHIFT/PFN_UP to Xen counterparts", then yes, exactly.
grant_ref_t grant;
dma_addr_t dma_handle;
@@ -187,10 +189,10 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
for (i = 0; i < n_pages; i++) {
gnttab_grant_foreign_access_ref(grant + i,
data->backend_domid,
- xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
+ xen_page_to_gfn(page) + i + gfn_offset, dir == DMA_TO_DEVICE);
Here, why is the pfn not calculated before passing it to pfn_to_gfn()?
I mean something like pfn_to_gfn(page_to_xen_pfn(page) + gfn_offset + i)
The gfn_offset is just a constant value here, which simply says how many
gfns we should skip. But ...
... I think I get your point. So, if a region which is contiguous in
pfn might be non-contiguous in gfn (which seems to be the case for x86's
PV, but I may be mistaken), we should indeed use the open-coded
construction "pfn_to_gfn(page_to_xen_pfn(page) + gfn_offset + i)". And
the gfn_offset should be renamed to pfn_offset then.
Correct?
}
- dma_handle = grant_to_dma(grant) + offset;
+ dma_handle = grant_to_dma(grant) + dma_offset;
return dma_handle;
}