[BUGFIX PATCH tip/master V2 1/3] kprobes/x86: Fix a possible deadlock case in kretprobe

From: Masami Hiramatsu
Date: Thu Feb 09 2017 - 11:31:32 EST


Fix a possible deadlock in the x86 kretprobe implementation. There
is a small chance that the kretprobe hash table lock can cause a
deadlock.

The scenario is that a user puts two kretprobes: one on a normal
function and one on a function which can be called from NMI context
(we don't recommend it, but it is possible). In this case, when the
kernel hits the first kretprobe on a normal function return, it
calls trampoline_handler(), which acquires the spinlock on the hash
table in kretprobe_hash_lock() and disables irqs. If an NMI occurs
at that point and the second kretprobe is kicked, it also calls
trampoline_handler() and tries to acquire the same spinlock (since
the hash is keyed on the current task, it is the same lock as for
the first kretprobe), which causes a deadlock.
Note that this is a very rare case, but it can theoretically happen.
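
For illustration only, here is a minimal userspace analogy of the
deadlock (this is NOT kernel code: the SIGALRM handler stands in for
the NMI, and a pthread spinlock stands in for the kretprobe hash
table lock; build with -pthread, the program hangs by design):

#include <pthread.h>
#include <signal.h>

static pthread_spinlock_t hash_lock;	/* "kretprobe hash table lock" */

static void nmi_like_handler(int sig)
{
	/* Second "kretprobe": tries to take the lock that the
	 * interrupted context already holds -> spins forever. */
	pthread_spin_lock(&hash_lock);
	pthread_spin_unlock(&hash_lock);
}

int main(void)
{
	pthread_spin_init(&hash_lock, PTHREAD_PROCESS_PRIVATE);
	signal(SIGALRM, nmi_like_handler);

	pthread_spin_lock(&hash_lock);		/* first "kretprobe" */
	raise(SIGALRM);				/* "NMI" while lock is held */
	pthread_spin_unlock(&hash_lock);	/* never reached */
	return 0;
}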

This bug was actually introduced by the kretprobe-booster, which
removed the kprobe from the return trampoline code, but in doing so
also removed the setting of the current kprobe, which had acted as a
blocker for nested k(ret)probes.

To fix this issue, I introduced a dummy kprobe which is set as the
current kprobe while holding the kretprobe hash lock. With that, if
an NMI occurs and the second kretprobe's kprobe is kicked (to modify
the return address, a kprobe fires when the target function is
called), that kprobe (and therefore the second kretprobe as well) is
skipped, because the reentrance check detects that another kprobe is
already running, as sketched below.
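
Roughly, the reentrance check behaves like the following standalone
sketch (a simplification for illustration, not the exact kernel code
path; current_kprobe here stands in for the per-cpu variable):

#include <stdio.h>

struct kprobe { const char *name; };

static struct kprobe *current_kprobe;	/* per-cpu in the real kernel */

/* Returns 1 if the probe may run, 0 if it must be skipped. */
static int kprobe_may_run(struct kprobe *p)
{
	if (current_kprobe)	/* another kprobe (maybe the dummy) is active */
		return 0;	/* nested probe is skipped: no deadlock */
	current_kprobe = p;
	return 1;
}

int main(void)
{
	struct kprobe dummy = { "dummy_retprobe" };
	struct kprobe nested = { "nmi_kretprobe" };

	kprobe_may_run(&dummy);		/* trampoline_handler() sets the dummy */
	/* A probe arriving from NMI context now is refused: */
	printf("nested probe runs? %d\n", kprobe_may_run(&nested));

	current_kprobe = NULL;		/* cleared after the hash lock is dropped */
	return 0;
}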

This reentrance detection and nested-kprobe blocker existed when
the original kretprobe was implemented using a kprobe on the
trampoline code. This fix just revives it.

Signed-off-by: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
---
arch/x86/kernel/kprobes/core.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 6384eb7..6aaabe1 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -709,6 +709,8 @@ asm(
NOKPROBE_SYMBOL(kretprobe_trampoline);
STACK_FRAME_NON_STANDARD(kretprobe_trampoline);

+static struct kprobe dummy_retprobe = {.addr = (void *)&kretprobe_trampoline};
+
/*
* Called from kretprobe_trampoline
*/
@@ -722,7 +724,6 @@ __visible __used void *trampoline_handler(struct pt_regs *regs)
kprobe_opcode_t *correct_ret_addr = NULL;

INIT_HLIST_HEAD(&empty_rp);
- kretprobe_hash_lock(current, &head, &flags);
/* fixup registers */
#ifdef CONFIG_X86_64
regs->cs = __KERNEL_CS;
@@ -733,6 +734,11 @@ __visible __used void *trampoline_handler(struct pt_regs *regs)
regs->ip = trampoline_address;
regs->orig_ax = ~0UL;

+ /* Prevent the kernel from migrating us to another CPU while processing */
+ preempt_disable();
+ get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
+ __this_cpu_write(current_kprobe, &dummy_retprobe);
+ kretprobe_hash_lock(current, &head, &flags);
/*
* It is possible to have multiple instances associated with a given
* task either because multiple functions in the call path have
@@ -773,10 +779,9 @@ __visible __used void *trampoline_handler(struct pt_regs *regs)
orig_ret_address = (unsigned long)ri->ret_addr;
if (ri->rp && ri->rp->handler) {
__this_cpu_write(current_kprobe, &ri->rp->kp);
- get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
ri->ret_addr = correct_ret_addr;
ri->rp->handler(ri, regs);
- __this_cpu_write(current_kprobe, NULL);
+ __this_cpu_write(current_kprobe, &dummy_retprobe);
}

recycle_rp_inst(ri, &empty_rp);
@@ -791,6 +796,8 @@ __visible __used void *trampoline_handler(struct pt_regs *regs)
}

kretprobe_hash_unlock(current, &flags);
+ __this_cpu_write(current_kprobe, NULL);
+ preempt_enable_no_resched();

hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
hlist_del(&ri->hlist);