Re: [BUG] vfio device assignment regression with THP ref counting redesign

From: Alex Williamson
Date: Thu Apr 28 2016 - 20:44:38 EST


On Fri, 29 Apr 2016 01:21:27 +0200
Andrea Arcangeli <aarcange@xxxxxxxxxx> wrote:

> Hello Alex and Kirill,
>
> On Thu, Apr 28, 2016 at 12:58:08PM -0600, Alex Williamson wrote:
> > > > specific fix to this code is not applicable. It also still occurs on
> > > > kernels as recent as v4.6-rc5, so the issue hasn't been silently fixed
> > > > yet. I'm able to reproduce this fairly quickly with the above test,
> > > > but it's not hard to imagine a test w/o any iommu dependencies which
> > > > simply does a user directed get_user_pages_fast() on a set of userspace
> > > > addresses, retains the reference, and at some point later rechecks that
> > > > a new get_user_pages_fast() results in the same page address. It
>
> Can you try to "git revert 1f25fe20a76af0d960172fb104d4b13697cafa84"
> and then apply the below patch on top of the revert?

Looking good so far! I haven't seen any errors yet with this
combination of v4.5, 1f25fe20a reverted, and your patch applied on
top. I'll keep testing since reverting 1f25fe20a alone already made
the bug much more elusive. Thanks Andrea!

Alex

> Totally untested... if I missed something and it isn't correct, I hope
> this brings us in the right direction faster at least.
>
> Overall, I think the problem is that we need to restore full accuracy:
> we can't tolerate false positive COWs (which aren't entirely cheap
> either... reading 512 cachelines should be much faster than copying
> 2MB and using 4MB of CPU cache). 32k vs 4MB. The problem, of course,
> is that when we really do need a COW we'll waste an additional 32k,
> but that doesn't matter much, as we'd be forced to load 4MB of cache
> anyway in that case. There's room for optimization, but even the
> simple patch below would be ok for now.
>
> From 09e3d1ff10b49fb9c3ab77f0b96a862848e30067 Mon Sep 17 00:00:00 2001
> From: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> Date: Fri, 29 Apr 2016 01:05:06 +0200
> Subject: [PATCH 1/1] mm: thp: calculate page_mapcount() correctly for THP
> pages
>
> This allows reverting commit 1f25fe20a76af0d960172fb104d4b13697cafa84
> and provides full accuracy for wrprotect faults, so page pinning
> will stop causing false positive copy-on-writes.
>
> Signed-off-by: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> ---
> mm/util.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/util.c b/mm/util.c
> index 6cc81e7..a0b9f63 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -383,9 +383,10 @@ struct address_space *page_mapping(struct page *page)
> /* Slow path of page_mapcount() for compound pages */
> int __page_mapcount(struct page *page)
> {
> - int ret;
> + int ret = 0, i;
>
> - ret = atomic_read(&page->_mapcount) + 1;
> page = compound_head(page);
> + for (i = 0; i < HPAGE_PMD_NR; i++)
> + ret = max(ret, atomic_read(&page[i]._mapcount) + 1);
> ret += atomic_read(compound_mapcount_ptr(page)) + 1;
> if (PageDoubleMap(page))