Re: [PATCH 17/19] powerpc: book3s64: convert to pin_user_pages() and put_user_page()
From: Jan Kara
Date: Mon Nov 25 2019 - 03:59:36 EST
On Sun 24-11-19 20:20:09, John Hubbard wrote:
> 1. Convert from get_user_pages() to pin_user_pages().
>
> 2. As required by pin_user_pages(), release these pages via
> put_user_page(). In this case, do so via put_user_pages_dirty_lock().
>
> That has the side effect of calling set_page_dirty_lock(), instead
> of set_page_dirty(). This is probably more accurate.
>
> As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
> dealing with a file backed page where we have reference on the inode it
> hangs off." [1]
>
> 3. Release each page in mem->hpages[] (instead of mem->hpas[]), because
> that is the array that pin_user_pages() filled in. This is more
> accurate and should be a little safer from a maintenance point of
> view.
Except that this breaks the code. hpages is unioned with hpas, and once
the entries have been converted to physical-address values in hpas[],
the hpages[] slots no longer hold page pointers...
> [1] https://lore.kernel.org/r/20190723153640.GB720@xxxxxx
>
> Signed-off-by: John Hubbard <jhubbard@xxxxxxxxxx>
> ---
> arch/powerpc/mm/book3s64/iommu_api.c | 12 +++++-------
> 1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
> index 56cc84520577..196383e8e5a9 100644
> --- a/arch/powerpc/mm/book3s64/iommu_api.c
> +++ b/arch/powerpc/mm/book3s64/iommu_api.c
> @@ -103,7 +103,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
> for (entry = 0; entry < entries; entry += chunk) {
> unsigned long n = min(entries - entry, chunk);
>
> - ret = get_user_pages(ua + (entry << PAGE_SHIFT), n,
> + ret = pin_user_pages(ua + (entry << PAGE_SHIFT), n,
> FOLL_WRITE | FOLL_LONGTERM,
> mem->hpages + entry, NULL);
> if (ret == n) {
> @@ -167,9 +167,8 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
> return 0;
>
> free_exit:
> - /* free the reference taken */
> - for (i = 0; i < pinned; i++)
> - put_page(mem->hpages[i]);
> + /* free the references taken */
> + put_user_pages(mem->hpages, pinned);
>
> vfree(mem->hpas);
> kfree(mem);
> @@ -212,10 +211,9 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
> if (!page)
> continue;
>
> - if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
> - SetPageDirty(page);
> + put_user_pages_dirty_lock(&mem->hpages[i], 1,
> + MM_IOMMU_TABLE_GROUP_PAGE_DIRTY);
And the dirtying condition is wrong here as well: the hunk passes the
flag constant itself as the bool 'dirty' argument instead of testing it
against the entry, so currently it is always true.
Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR