Re: [PATCH] kvm: defer huge page recovery vhost task to later

From: Keith Busch
Date: Fri Jan 24 2025 - 23:05:58 EST


On Fri, Jan 24, 2025 at 04:10:45PM -0800, Sean Christopherson wrote:
> On Fri, Jan 24, 2025, Keith Busch wrote:
> > On Fri, Jan 24, 2025 at 12:07:24PM -0800, Sean Christopherson wrote:
> > > This is broken. If the module param is toggled before the first KVM_RUN, KVM
> > > will hit a NULL pointer deref due to trying to start a non-existent vhost task:
> > >
> > > BUG: kernel NULL pointer dereference, address: 0000000000000040
> > > #PF: supervisor read access in kernel mode
> > > #PF: error_code(0x0000) - not-present page
> > > PGD 0 P4D 0
> > > Oops: Oops: 0000 [#1] SMP
> > > CPU: 16 UID: 0 PID: 1190 Comm: bash Not tainted 6.13.0-rc3-9bb02e874121-x86/xen_msr_fixes-vm #2382
> > > Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
> > > RIP: 0010:vhost_task_wake+0x5/0x10
> > > Call Trace:
> > > <TASK>
> > > set_nx_huge_pages+0xcc/0x1e0 [kvm]
> >
> > Thanks for pointing out this gap. It looks like we'd have to hold the
> > kvm_lock in kvm_mmu_post_init_vm(), and add NULL checks in
> > set_nx_huge_pages() and set_nx_huge_pages_recovery_param() to prevent
> > the NULL deref. Is that okay?
>
> I don't _think_ we need to take kvm_lock. And I don't want to take kvm_lock,
> because we're also trying to eliminate a (very theoretical) deadlock[1] due to
> taking kvm_lock in the params helpers.
>
> There is a race that can happen with my proposed fix[2], but I'm not sure we care
> enough to address it. If kvm_nx_huge_page_recovery_worker() runs before the params
> are set, and the param setter processes the VM before nx_huge_page_recovery_thread
> is set, then the worker could sleep for too long, relative to what userspace expects.
>
> I suppose if we care then we could fix that by taking kvm->arch.nx_once.mutex
> when waking the task?

I think we can actually do this without any additional locks. The only
things we need to ensure are that the vhost task sees the updated params
and that the param setters never see a half-initialized task pointer,
and I think we can achieve both with appropriate memory barriers on the
reads and writes (plus the NULL check for the not-yet-created case).
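
Roughly something along these lines (just a sketch, not a tested patch;
the kvm_wake_nx_recovery_thread() helper name is made up here, and I'm
assuming the existing kvm->arch.nx_huge_page_recovery_thread field):
publish the task pointer with a release store only after the worker is
fully set up, and have the param setters wake it through an acquire load
that tolerates NULL:

  static void kvm_wake_nx_recovery_thread(struct kvm *kvm)
  {
          /*
           * The task is created on demand at the first KVM_RUN, so it
           * may not exist yet.  Pairs with the smp_store_release() that
           * publishes the pointer; if it's still NULL there's nothing
           * to wake (and no NX huge pages to recover yet anyway).
           */
          struct vhost_task *nx_thread =
                  smp_load_acquire(&kvm->arch.nx_huge_page_recovery_thread);

          if (nx_thread)
                  vhost_task_wake(nx_thread);
  }

and in the path that spawns the task at first KVM_RUN:

          /*
           * Make the pointer visible to the param setters only after
           * the worker's initial state is fully set up.
           */
          smp_store_release(&kvm->arch.nx_huge_page_recovery_thread,
                            nx_thread);

Then set_nx_huge_pages() and set_nx_huge_pages_recovery_param() would
call the helper instead of waking the task directly, which avoids the
NULL deref without taking kvm_lock in the setters.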