Re: GUP guarantees wrt userspace mappings
From: Kirill A. Shutemov
Date: Mon May 02 2016 - 12:13:10 EST
On Mon, May 02, 2016 at 05:22:49PM +0200, Jerome Glisse wrote:
> On Mon, May 02, 2016 at 06:00:13PM +0300, Kirill A. Shutemov wrote:
> > > > Quick look around:
> > > >
> > > > - I don't see any page_count() check around __replace_page() in uprobes,
> > > >   so it can easily replace a pinned page.
> > >
> > > Not an issue for existing users, as this is only used to instrument code;
> > > existing users do not execute code from a virtual address for which they
> > > have done a GUP.
> >
> > Okay, so we can establish that GUP doesn't provide the guarantee in some
> > cases.
>
> Correct, but it used to provide that guarantee with respect to THP.
Yes, the THP regression needs to be fixed. I don't argue with that.
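
To be precise about the guarantee we are debating, a minimal illustrative
fragment (pin_one() is my name for it, not an existing helper):

	/* Illustrative only: take a write pin on one page at 'addr'. */
	static int pin_one(unsigned long addr, struct page **page)
	{
		if (get_user_pages_fast(addr, 1, 1, page) != 1)
			return -EFAULT;
		return 0;
	}

Existing users assume that once pin_one() succeeds, 'addr' keeps mapping
that same physical page until the pin is dropped with put_page(). That
assumption is what this thread is about.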
> > > > - KSM has the page_count() check, but there's still a race wrt GUP_fast:
> > > >   it can take the pin between the check and establishing the new pte entry.
> > >
> > > KSM is not an issue for existing users, as they all do get_user_pages()
> > > with write = 1, and KSM first maps pages read-only before considering
> > > replacing them and checking the page refcount. So there can be no race
> > > with gup_fast there.
> >
> > In the vfio case, 'write' is conditional on IOMMU_WRITE, meaning not all
> > get_user_pages() calls are with write=1.
>
> I think this is still fine, as it means the device will only read, and thus
> you can migrate to a different page (ie the guest is not expecting to read
> back anything written by the device, and the device writing to the page
> would be illegal and a proper IOMMU would forbid it). So it is like
> direct-io when you write from anonymous memory to a file.
Hm. Okay.
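
For reference, the pinning pattern in question looks roughly like this -- a
minimal sketch, not the actual vfio code; pin_user_memory() is an
illustrative name:

	#include <linux/mm.h>
	#include <linux/iommu.h>

	/*
	 * Illustrative vfio-style pin: take a write pin only if the
	 * device is allowed to write through the IOMMU.
	 */
	static int pin_user_memory(unsigned long vaddr, int npage,
				   int prot, struct page **pages)
	{
		int write = !!(prot & IOMMU_WRITE);

		/*
		 * With write == 0 the pinned page may legitimately be
		 * replaced by one with identical content, as discussed
		 * above for KSM.
		 */
		return get_user_pages_fast(vaddr, npage, write, pages);
	}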
> > > > - khugepaged: the same story as with KSM.
> > >
> > > I am assuming you are talking about collapse_huge_page() here; if you look
> > > in that function there is a comment about GUP_fast. Nonetheless I believe
> > > the comment is wrong, as I believe there is an existing race window between
> > > pmdp_collapse_flush() and __collapse_huge_page_isolate():
> > >
> > >   get_user_pages_fast()           | collapse_huge_page()
> > >     gup_pmd_range() -> valid pmd  |   ...
> > >                                   |   pmdp_collapse_flush() clears pmd
> > >                                   |   ...
> > >                                   |   __collapse_huge_page_isolate()
> > >                                   |   [checks page count, sees no GUP ref]
> > >     gup_pte_range() -> ref page   |
> > >
> > > This is a very unlikely race, because get_user_pages_fast() cannot be
> > > preempted, while collapse_huge_page() can be preempted between
> > > pmdp_collapse_flush() and __collapse_huge_page_isolate(); moreover,
> > > collapse_huge_page() has a lot more instructions to chew on than
> > > get_user_pages_fast() has between gup_pmd_range() and gup_pte_range().
> >
> > Yes, the race window is small, but it's there.
>
> Now that I think about it again, I don't think it exists.
> pmdp_collapse_flush() will flush the TLB and thus send an IPI, but
> get_user_pages_fast() runs with local interrupts disabled, so the flush will
> have to wait for any in-progress get_user_pages_fast() to complete. Or am I
> misunderstanding the flush? So khugepaged is safe from the GUP_fast point of
> view, like the comment inside it says.
You are right. It's safe too.
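
To spell out why: the lockless fast-GUP walk runs with local interrupts
disabled, so on x86 (where the remote flush is IPI-based) the flush cannot
complete until the walk is done. A condensed sketch of
__get_user_pages_fast() -- details trimmed, and gup_pgd_range() here stands
in for the per-level pgd/pud/pmd/pte walk:

	int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
				  struct page **pages)
	{
		unsigned long flags;
		unsigned long addr = start;
		unsigned long end = start + (unsigned long)nr_pages * PAGE_SIZE;
		int nr = 0;

		/* Blocks the TLB-flush IPI from pmdp_collapse_flush()... */
		local_irq_save(flags);
		gup_pgd_range(addr, end, write, pages, &nr);
		/* ...until the lockless walk has completed. */
		local_irq_restore(flags);

		return nr;
	}

So by the time pmdp_collapse_flush() returns, any concurrent fast-GUP walk
has finished, and its reference is visible to the page_count() check in
__collapse_huge_page_isolate().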
> > > So as said above, I think existing users of get_user_pages() are not
> > > sensitive to the races you pointed out above. I am sure there are some
> > > corner cases where the guarantee that GUP pins a page against a virtual
> > > address is violated, but I do not think they apply to any existing user
> > > of GUP.
> > >
> > > Note that I would personally prefer that this existing assumption about
> > > GUP did not exist. I hate it, but the fact is that it does exist, and
> > > nobody can remember where Doc parked the DeLorean.
> >
> > Drivers that want the guarantee can provide their own ->mmap and have more
> > control over what is visible in userspace.
> >
> > Alternatively, we have mmu_notifiers to track changes in userspace
> > mappings.
> >
>
> Well, you cannot rely on a special vma here. Qemu allocates anonymous memory
> and hands it over to the guest; then a guest driver (ie running in the
> guest, not on the host) tries to map that memory and needs a valid DMA
> address for it. This is when vfio (in the host kernel) starts pinning memory
> of a regular anonymous vma (on the host). That same memory might back some
> special vma with an ->mmap callback, but in the guest. The point is there is
> no driver on the host and no special vma. From the host's point of view this
> is anonymous memory, but from the guest's POV it is just memory.
>
> Requiring a special vma would need major changes to kvm and probably xen,
> with respect to how they support things like PCI passthrough.
>
> In existing workloads, the host kernel cannot make assumptions about how
> anonymous memory is going to be used.
Any reason why mmu_notifier is not an option?
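
On the driver side it would look something like this -- a minimal sketch;
my_ctx and the callback names are illustrative, not an existing user:

	#include <linux/mmu_notifier.h>
	#include <linux/sched.h>

	struct my_ctx {
		struct mmu_notifier mn;
	};

	static void my_invalidate_range_start(struct mmu_notifier *mn,
					      struct mm_struct *mm,
					      unsigned long start,
					      unsigned long end)
	{
		struct my_ctx *ctx = container_of(mn, struct my_ctx, mn);

		/*
		 * The mapping for [start, end) is about to change: the
		 * driver would use ctx here to stop DMA and tear down the
		 * IOMMU entries covering the range.
		 */
	}

	static const struct mmu_notifier_ops my_mn_ops = {
		.invalidate_range_start	= my_invalidate_range_start,
	};

	static int my_register(struct my_ctx *ctx)
	{
		ctx->mn.ops = &my_mn_ops;
		/* Track the address space of the pinning process. */
		return mmu_notifier_register(&ctx->mn, current->mm);
	}

That keeps the device coherent with the userspace mapping without relying on
any guarantee from the GUP pin itself.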
--
Kirill A. Shutemov