Re: [PATCH 2/2] uprobes: add speculative lockless VMA-to-inode-to-uprobe resolution

From: Andrii Nakryiko
Date: Mon Sep 09 2024 - 17:29:33 EST


On Mon, Sep 9, 2024 at 6:13 AM Jann Horn <jannh@xxxxxxxxxx> wrote:
>
> On Fri, Sep 6, 2024 at 7:12 AM Andrii Nakryiko <andrii@xxxxxxxxxx> wrote:
> > Given filp_cachep is already marked SLAB_TYPESAFE_BY_RCU, we can safely
> > access vma->vm_file->f_inode field locklessly under just rcu_read_lock()
>
> No, not every file is SLAB_TYPESAFE_BY_RCU - see for example
> ovl_mmap(), which uses backing_file_mmap(), which does
> vma_set_file(vma, file) where "file" comes from ovl_mmap()'s
> "realfile", which comes from file->private_data, which is set in
> ovl_open() to the return value of ovl_open_realfile(), which comes
> from backing_file_open(), which allocates a file with
> alloc_empty_backing_file(), which uses a normal kzalloc() without any
> RCU stuff, with this comment:
>
> * This is only for kernel internal use, and the allocate file must not be
> * installed into file tables or such.
>
> And when a backing_file is freed, you can see on the path
> __fput() -> file_free()
> that files with FMODE_BACKING are directly freed with kfree(), no RCU delay.

Good catch on FMODE_BACKING, I didn't realize this exception existed, thanks!

I think the way forward would be to detect that vma->vm_file is a
backing file (FMODE_BACKING) and fall back to the mmap_lock-protected
code path.

I guess I have a question for Liam and Suren: do you think it would
be ok to add another bool after `bool detached` in struct
vm_area_struct (guarded by CONFIG_PER_VMA_LOCK), or should we try to
add an extra bit to vm_flags_t? The latter would work without
CONFIG_PER_VMA_LOCK, but I don't know what's acceptable to the mm folks.

This flag could be set in vma_set_file() when swapping in the backing
file, and wherever else vma->vm_file might be set/updated (I need to
audit the code).
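
As a rough sketch of what I have in mind (the vm_file_no_rcu field
name is purely illustrative; based on the current vma_set_file() in
mm/util.c):

    void vma_set_file(struct vm_area_struct *vma, struct file *file)
    {
            /* Changing an anonymous vma with this is illegal */
            get_file(file);
            swap(vma->vm_file, file);
            fput(file);
    #ifdef CONFIG_PER_VMA_LOCK
            /*
             * Hypothetical flag: vma->vm_file is a kernel-internal
             * backing file, which is freed without an RCU delay, so
             * the speculative lookup must not touch it and has to
             * take mmap_lock instead.
             */
            vma->vm_file_no_rcu = !!(vma->vm_file->f_mode & FMODE_BACKING);
    #endif
    }

And then in find_active_uprobe_speculative(), before dereferencing
vm_file:

            if (data_race(vma->vm_file_no_rcu))
                    goto bail;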

>
> So the RCU-ness of "struct file" is an implementation detail of the
> VFS, and you can't rely on it for ->vm_file unless you get the VFS to
> change how backing file lifetimes work, which might slow down some
> other workload, or you find a way to figure out whether you're dealing
> with a backing file without actually accessing the file.
>
> > +static struct uprobe *find_active_uprobe_speculative(unsigned long bp_vaddr)
> > +{
> > + const vm_flags_t flags = VM_HUGETLB | VM_MAYEXEC | VM_MAYSHARE;
> > + struct mm_struct *mm = current->mm;
> > + struct uprobe *uprobe;
> > + struct vm_area_struct *vma;
> > + struct file *vm_file;
> > + struct inode *vm_inode;
> > + unsigned long vm_pgoff, vm_start;
> > + int seq;
> > + loff_t offset;
> > +
> > + if (!mmap_lock_speculation_start(mm, &seq))
> > + return NULL;
> > +
> > + rcu_read_lock();
> > +
> > + vma = vma_lookup(mm, bp_vaddr);
> > + if (!vma)
> > + goto bail;
> > +
> > + vm_file = data_race(vma->vm_file);
>
> A plain "data_race()" says "I'm fine with this load tearing", but
> you're relying on this load not tearing (since you access the vm_file
> pointer below).
> You're also relying on the "struct file" that vma->vm_file points to
> being populated at this point, which means you need CONSUME semantics
> here, which READ_ONCE() will give you, and something like RELEASE
> semantics on any pairing store that populates vma->vm_file, which
> means they'd all have to become something like smp_store_release().

vma->vm_file should be set in the VMA before it is installed and never
modified afterwards, isn't that the case? So maybe no extra barriers
are needed and READ_ONCE() would be enough.
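
I.e., on the reader side just (sketch, assuming no store-side barrier
is needed because vm_file is published together with the VMA itself):

            /*
             * READ_ONCE() prevents load tearing and, through the
             * address dependency, orders subsequent accesses made
             * through vm_file
             */
            vm_file = READ_ONCE(vma->vm_file);
            if (!vm_file)
                    goto bail;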

>
> You might want to instead add another recheck of the sequence count
> (which would involve at least a read memory barrier after the
> preceding patch is fixed) after loading the ->vm_file pointer to
> ensure that no one was concurrently changing the ->vm_file pointer
> before you do memory accesses through it.
>
> > + if (!vm_file || (vma->vm_flags & flags) != VM_MAYEXEC)
> > + goto bail;
>
> missing data_race() annotation on the vma->vm_flags access

ack
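
I.e. (sketch; a racy vm_flags read is fine here since we re-validate
via the sequence count before returning):

            if (!vm_file || (data_race(vma->vm_flags) & flags) != VM_MAYEXEC)
                    goto bail;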

>
> > + vm_inode = data_race(vm_file->f_inode);
>
> As noted above, this doesn't work because you can't rely on having RCU
> lifetime for the file. One *very* ugly hack you could do, if you think
> this code is so performance-sensitive that you're willing to do fairly
> atrocious things here, would be to do a "yes I am intentionally doing
> a UAF read and I know the address might not even be mapped at this
> point, it's fine, trust me" pattern, where you use
> copy_from_kernel_nofault(), kind of like in prepend_copy() in
> fs/d_path.c, and then immediately recheck the sequence count before
> doing *anything* with this vm_inode pointer you just loaded.
>
>

yeah, let's keep that as a very unfortunate plan B and try to solve
this a bit more cleanly first.
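
For the record, I understand plan B to be roughly this (sketch;
assuming it's ok to re-validate the sequence count mid-way with the
same helper from patch 1):

            struct inode *vm_inode;

            /*
             * Deliberate potentially-UAF read: vm_file may already be
             * freed (FMODE_BACKING), so use the non-faulting copy and
             * don't trust the result until the mm sequence count has
             * been re-checked.
             */
            if (copy_from_kernel_nofault(&vm_inode, &vm_file->f_inode,
                                         sizeof(vm_inode)))
                    goto bail;
            if (!mmap_lock_speculation_end(mm, seq))
                    goto bail;
            /* only now is vm_inode safe to use */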


>
> > + vm_pgoff = data_race(vma->vm_pgoff);
> > + vm_start = data_race(vma->vm_start);
> > +
> > + offset = (loff_t)(vm_pgoff << PAGE_SHIFT) + (bp_vaddr - vm_start);
> > + uprobe = find_uprobe_rcu(vm_inode, offset);
> > + if (!uprobe)
> > + goto bail;
> > +
> > + /* now double check that nothing about MM changed */
> > + if (!mmap_lock_speculation_end(mm, seq))
> > + goto bail;
> > +
> > + rcu_read_unlock();
> > +
> > + /* happy case, we speculated successfully */
> > + return uprobe;
> > +bail:
> > + rcu_read_unlock();
> > + return NULL;
> > +}