Re: [PATCH] x86/kvm: Fix broken irq restoration in kvm_wait

From: Wanpeng Li
Date: Fri Mar 12 2021 - 19:58:09 EST


On Thu, 11 Mar 2021 at 23:54, Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Tue, Feb 23, 2021, Wanpeng Li wrote:
> > On Tue, 23 Feb 2021 at 13:25, Wanpeng Li <kernellwp@xxxxxxxxx> wrote:
> > >
> > > From: Wanpeng Li <wanpengli@xxxxxxxxxxx>
> > >
> > > After commit 997acaf6b4b59c ("lockdep: report broken irq restoration"), the
> > > guest triggers the splat below during boot:
> > >
> > > raw_local_irq_restore() called with IRQs enabled
> > > WARNING: CPU: 1 PID: 169 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x26/0x30
> > > Modules linked in: hid_generic usbhid hid
> > > CPU: 1 PID: 169 Comm: systemd-udevd Not tainted 5.11.0+ #25
> > > RIP: 0010:warn_bogus_irq_restore+0x26/0x30
> > > Call Trace:
> > > kvm_wait+0x76/0x90
> > > __pv_queued_spin_lock_slowpath+0x285/0x2e0
> > > do_raw_spin_lock+0xc9/0xd0
> > > _raw_spin_lock+0x59/0x70
> > > lockref_get_not_dead+0xf/0x50
> > > __legitimize_path+0x31/0x60
> > > legitimize_root+0x37/0x50
> > > try_to_unlazy_next+0x7f/0x1d0
> > > lookup_fast+0xb0/0x170
> > > path_openat+0x165/0x9b0
> > > do_filp_open+0x99/0x110
> > > do_sys_openat2+0x1f1/0x2e0
> > > do_sys_open+0x5c/0x80
> > > __x64_sys_open+0x21/0x30
> > > do_syscall_64+0x32/0x50
> > > entry_SYSCALL_64_after_hwframe+0x44/0xae
> > >
> > > The irqflags handling in kvm_wait() ends up doing:
> > >
> > > 	local_irq_save(flags);
> > > 	safe_halt();
> > > 	local_irq_restore(flags);
> > >
> > > which trips the new consistency check: local_irq_save() and
> > > local_irq_restore() are expected to be paired and sanely nested,
> > > and so local_irq_restore() expects to be called with irqs
> > > disabled.
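> > >
> > > Concretely, safe_halt() executes STI;HLT and therefore returns with
> > > irqs enabled, so the restore runs with irqs already on (annotated
> > > sketch of the failing path, not verbatim kernel code):
> > >
> > > 	local_irq_save(flags);    /* irqs on, flags records "enabled" */
> > > 	safe_halt();              /* STI; HLT, returns with irqs on   */
> > > 	local_irq_restore(flags); /* irqs already enabled -> splat    */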
> > >
> > > Fix it by calling local_irq_disable() after safe_halt() so that irqs
> > > are disabled again by the time local_irq_restore() runs.
> > >
> > > Cc: Mark Rutland <mark.rutland@xxxxxxx>
> > > Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> > > Signed-off-by: Wanpeng Li <wanpengli@xxxxxxxxxxx>
> > > ---
> > > arch/x86/kernel/kvm.c | 4 +++-
> > > 1 file changed, 3 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > > index 5e78e01..688c84a 100644
> > > --- a/arch/x86/kernel/kvm.c
> > > +++ b/arch/x86/kernel/kvm.c
> > > @@ -853,8 +853,10 @@ static void kvm_wait(u8 *ptr, u8 val)
> > >  	 */
> > >  	if (arch_irqs_disabled_flags(flags))
> > >  		halt();
> > > -	else
> > > +	else {
> > >  		safe_halt();
> > > +		local_irq_disable();
> > > +	}
> >
> > An alternative fix:
> >
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index 5e78e01..7127aef 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -836,12 +836,13 @@ static void kvm_kick_cpu(int cpu)
> >
> >  static void kvm_wait(u8 *ptr, u8 val)
> >  {
> > -	unsigned long flags;
> > +	bool disabled = irqs_disabled();
> >
> >  	if (in_nmi())
> >  		return;
> >
> > -	local_irq_save(flags);
> > +	if (!disabled)
> > +		local_irq_disable();
> >
> >  	if (READ_ONCE(*ptr) != val)
> >  		goto out;
> > @@ -851,13 +852,14 @@
> >  	 * for irq enabled case to avoid hang when lock info is overwritten
> >  	 * in irq spinlock slowpath and no spurious interrupt occur to save us.
> >  	 */
> > -	if (arch_irqs_disabled_flags(flags))
> > +	if (disabled)
> >  		halt();
> >  	else
> >  		safe_halt();
> >
> >  out:
> > -	local_irq_restore(flags);
> > +	if (!disabled)
> > +		local_irq_enable();
> >  }
> >
> >  #ifdef CONFIG_X86_32
>
> A third option would be to split the paths. In the end, it's only the ptr/val
> line that's shared.

I just sent out a formal patch for my alternative fix; I think the
overall logic in kvm_wait() is clearer with my version.
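
For reference, with the alternative diff above applied, kvm_wait() ends
up reading roughly as follows (reassembled here from the hunks; the
formal patch may differ in details):

static void kvm_wait(u8 *ptr, u8 val)
{
	bool disabled = irqs_disabled();

	if (in_nmi())
		return;

	/* disable irqs once instead of save/restore, so the halt
	 * path below never re-enters with a stale flags snapshot */
	if (!disabled)
		local_irq_disable();

	if (READ_ONCE(*ptr) != val)
		goto out;

	/*
	 * halt until it's our turn and kicked. Note that we do safe halt
	 * for irq enabled case to avoid hang when lock info is overwritten
	 * in irq spinlock slowpath and no spurious interrupt occur to save us.
	 */
	if (disabled)
		halt();
	else
		safe_halt();

out:
	/* only re-enable if we were the ones who disabled */
	if (!disabled)
		local_irq_enable();
}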

>
> ---
> arch/x86/kernel/kvm.c | 23 ++++++++++-------------
> 1 file changed, 10 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5e78e01ca3b4..78bb0fae3982 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -836,28 +836,25 @@ static void kvm_kick_cpu(int cpu)
>
>  static void kvm_wait(u8 *ptr, u8 val)
>  {
> -	unsigned long flags;
> -
>  	if (in_nmi())
>  		return;
>
> -	local_irq_save(flags);
> -
> -	if (READ_ONCE(*ptr) != val)
> -		goto out;
> -
>  	/*
>  	 * halt until it's our turn and kicked. Note that we do safe halt
>  	 * for irq enabled case to avoid hang when lock info is overwritten
>  	 * in irq spinlock slowpath and no spurious interrupt occur to save us.
>  	 */
> -	if (arch_irqs_disabled_flags(flags))
> -		halt();
> -	else
> -		safe_halt();
> +	if (irqs_disabled()) {
> +		if (READ_ONCE(*ptr) == val)
> +			halt();
> +	} else {
> +		local_irq_disable();
> +
> -out:
> -	local_irq_restore(flags);
> +		if (READ_ONCE(*ptr) == val)
> +			safe_halt();
> +
> +		local_irq_enable();
> +	}
>  }
>
>  #ifdef CONFIG_X86_32
> --