[PATCH v2 1/1] KVM: x86/xen: Use trylock for fast path event channel delivery

From: shaikh.kamal

Date: Wed Apr 01 2026 - 21:33:14 EST


kvm_xen_set_evtchn_fast() acquires gpc->lock with read_lock_irqsave().
On PREEMPT_RT this is a sleeping lock, so acquiring it from hard IRQ
context (e.g. an hrtimer callback) triggers:

BUG: sleeping function called from invalid context
in_hardirq(): 1, in_serving_softirq(): 0
Call Trace:
<IRQ>
rt_spin_lock+0x70/0x130
kvm_xen_set_evtchn_fast+0x20b/0xa40
xen_timer_callback+0x91/0x1a0
__run_hrtimer
hrtimer_interrupt

The function uses read_lock_irqsave() to access two gpc structures:
shinfo_cache and vcpu_info_cache. On PREEMPT_RT, these rwlocks are
rt_mutex-based and cannot be acquired from hard IRQ context.

Switch both gpc lock acquisitions to local_irq_save() plus
read_trylock(). If the shinfo_cache lock is contended, return
-EWOULDBLOCK to trigger the existing slow path: xen_timer_callback()
sets vcpu->arch.xen.timer_pending, kicks the vCPU with
KVM_REQ_UNBLOCK, and the event is injected from process context via
kvm_xen_inject_timer_irqs(). If the vcpu_info_cache lock is
contended, set the bit in the in-kernel evtchn_pending_sel shadow and
kick the vCPU, as is already done when the vcpu_info cache is invalid.

This approach works on all kernels (RT and non-RT) and preserves the
"fast path" semantics: acquire the lock only if immediately available,
otherwise bail out rather than blocking.

Reported-by: syzbot+919877893c9d28162dc2@xxxxxxxxxxxxxxxxxxxxxxxxx
Closes: https://syzkaller.appspot.com/bug?extid=919877893c9d28162dc2
Fixes: 77c9b9dea4fb ("KVM: x86/xen: Use fast path for Xen timer delivery")
Suggested-by: Sean Christopherson <seanjc@xxxxxxxxxx>
Signed-off-by: shaikh.kamal <shaikhkamal2012@xxxxxxxxx>
---
arch/x86/kvm/xen.c | 33 +++++++++++++++++++++++++++++----
1 file changed, 29 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index d6b2a665b499..479e8f23a9c4 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -1817,7 +1817,17 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 
 	idx = srcu_read_lock(&kvm->srcu);
 
-	read_lock_irqsave(&gpc->lock, flags);
+	/*
+	 * Use trylock for the "fast" path. If the lock is contended,
+	 * return -EWOULDBLOCK to use the slow path which injects the
+	 * event from process context via timer_pending + KVM_REQ_UNBLOCK.
+	 */
+	local_irq_save(flags);
+	if (!read_trylock(&gpc->lock)) {
+		local_irq_restore(flags);
+		srcu_read_unlock(&kvm->srcu, idx);
+		return -EWOULDBLOCK;
+	}
 	if (!kvm_gpc_check(gpc, PAGE_SIZE))
 		goto out_rcu;
 
@@ -1848,10 +1858,22 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 	} else {
 		rc = 1; /* Delivered to the bitmap in shared_info. */
 		/* Now switch to the vCPU's vcpu_info to set the index and pending_sel */
-		read_unlock_irqrestore(&gpc->lock, flags);
+		read_unlock(&gpc->lock);
+		local_irq_restore(flags);
 		gpc = &vcpu->arch.xen.vcpu_info_cache;
 
-		read_lock_irqsave(&gpc->lock, flags);
+		local_irq_save(flags);
+		if (!read_trylock(&gpc->lock)) {
+			/*
+			 * Lock contended. Set the in-kernel pending flag
+			 * and kick the vCPU to inject via the slow path.
+			 */
+			local_irq_restore(flags);
+			if (!test_and_set_bit(port_word_bit,
+					      &vcpu->arch.xen.evtchn_pending_sel))
+				kick_vcpu = true;
+			goto out_kick;
+		}
 		if (!kvm_gpc_check(gpc, sizeof(struct vcpu_info))) {
 			/*
 			 * Could not access the vcpu_info. Set the bit in-kernel
@@ -1885,7 +1907,10 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 	}
 
  out_rcu:
-	read_unlock_irqrestore(&gpc->lock, flags);
+	read_unlock(&gpc->lock);
+	local_irq_restore(flags);
+
+ out_kick:
 	srcu_read_unlock(&kvm->srcu, idx);
 
 	if (kick_vcpu) {
--
2.43.0