Re: [PATCH v6 0/6] arm64: Add kernel probes (kprobes) support

From: William Cohen
Date: Tue May 05 2015 - 17:49:24 EST


On 05/05/2015 11:48 AM, Will Deacon wrote:
> On Tue, May 05, 2015 at 06:14:51AM +0100, David Long wrote:
>> On 05/01/15 21:44, William Cohen wrote:
>>> Dave Long and I did some additional experimentation to better
>>> understand what condition causes the kernel to sometimes spew:
>>>
>>> Unexpected kernel single-step exception at EL1
>>>
>>> The functioncallcount.stp test instruments the entry and return of
>>> every function in the mm files, including kfree. In most cases the
>>> arm64 trampoline_probe_handler just determines which return probe
>>> instance matches the current conditions, runs the associated handler,
>>> and recycles the return probe instance for another use by placing it
>>> on a hlist. However, it is possible that a return probe instance has
>>> been set up on function entry and the return probe is unregistered
>>> before the return probe instance fires. In this case kfree is called
>>> by the trampoline handler to remove the return probe instances related
>>> to the unregistered kretprobe. This case, where the kprobed kfree
>>> is called within the arm64 trampoline_probe_handler function,
>>> triggers the problem.
>>>
>>> The kprobe breakpoint for the kfree call from within the
>>> trampoline_probe_handler is encountered and started, but things go
>>> wrong when attempting the single step on the instruction.
>>>
>>> It took a while to trigger this problem with the systemtap testsuite.
>>> Dave Long came up with steps that reproduce this more quickly with a
>>> probed function that is always called within the trampoline handler.
>>> Trying the same on x86_64 doesn't trigger the problem. It appears
>>> that the x86_64 code can handle a single step from within the
>>> trampoline_handler.
>>>
>>
>> I'm assuming there are no plans for supporting software breakpoint debug
>> exceptions during processing of single-step exceptions on arm64 any time
>> soon. Given that, the only solution I can come up with is: instead of
>> making this orphaned kretprobe instance list exist only temporarily (in
>> the scope of the kretprobe trampoline handler), make it always exist and
>> kfree any items found on it as part of a periodic cleanup running outside
>> of the handler context. I think these changes would still all be in
>> architecture-specific code. This doesn't feel to me like a bad solution.
>> Does anyone think there is a simpler way out of this?
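
[For reference, the periodic-cleanup idea proposed above might look roughly
like this toy userspace model. This is not kernel code and every name in it
is invented; in the kernel the push would need the kretprobe locking and the
reaper would run from a workqueue, neither of which is modeled here.]

```c
#include <stdlib.h>

/* Persistent "orphan" list: the trampoline handler only pushes to it
 * (no kfree in handler context); a periodic cleanup drains it later. */
struct orphan {
	struct orphan *next;
};

static struct orphan *orphan_list;

/* Handler-context path: O(1) push, never calls free()/kfree(), so no
 * probed function can be re-entered from here. */
static void park_orphan(struct orphan *o)
{
	o->next = orphan_list;
	orphan_list = o;
}

/* Periodic cleanup, run outside handler context: free everything
 * parked so far and report how many instances were reclaimed. */
static int reap_orphans(void)
{
	int n = 0;

	while (orphan_list) {
		struct orphan *o = orphan_list;

		orphan_list = o->next;
		free(o);
		n++;
	}
	return n;
}
```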
>
> Just to clarify, is the problem here the software breakpoint exception,
> or trying to step the faulting instruction whilst we were already handling
> a step?
>
> I think I'd be inclined to keep the code run in debug context to a minimum.
> We already can't block there, and the more code we add the more black spots
> we end up with in the kernel itself. The alternative would be to make your
> kprobes code re-entrant, but that sounds like a nightmare.
>
> You say this works on x86. How do they handle it? Is the nested probe
> on kfree ignored or handled?
>
> Will
>

Hi Dave and Will,

The attached patch attempts to eliminate the need for the breakpoint in the trampoline. It is modeled after the x86_64 code: it saves the register state, calls the trampoline handler, and then fixes up the return address. The code compiles, but I have NOT verified that it works; it does look feasible to do things this way. In addition to avoiding the possible issue with a kretprobe on kfree, it would also make kretprobes faster, because it avoids the breakpoint exception and the associated kprobe handling in the trampoline.
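
To illustrate the intended control flow, here is a plain userspace model of
what the patch below does. None of this is kernel code and the names are
invented; regs[] stands in for the register frame the asm trampoline saves
on the stack, with slot 30 playing the part of the saved x30 (lr).

```c
#define LR_SLOT 30

static unsigned long recorded_ret_addr;	/* models the kretprobe instance */

/* Models arch_prepare_kretprobe(): stash the real return address and
 * divert lr to the trampoline. */
static void model_prepare(unsigned long *regs, unsigned long trampoline)
{
	recorded_ret_addr = regs[LR_SLOT];
	regs[LR_SLOT] = trampoline;
}

/* Models the C trampoline handler: run the user handler (elided here)
 * and hand the original return address back in x0. */
static unsigned long model_trampoline_handler(unsigned long *regs)
{
	(void)regs;
	return recorded_ret_addr;
}

/* Models the asm trampoline: save regs, call the handler, patch the
 * saved lr slot so the final `ret` goes back to the real caller, with
 * no breakpoint and no single-step anywhere on the path. */
static void model_trampoline(unsigned long *regs)
{
	regs[LR_SLOT] = model_trampoline_handler(regs);
}
```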

-Will
diff --git a/arch/arm64/kernel/kprobes-arm64.h b/arch/arm64/kernel/kprobes-arm64.h
index ff8a55f..0b9987d 100644
--- a/arch/arm64/kernel/kprobes-arm64.h
+++ b/arch/arm64/kernel/kprobes-arm64.h
@@ -27,4 +27,41 @@ extern kprobes_pstate_check_t * const kprobe_condition_checks[16];
enum kprobe_insn __kprobes
arm_kprobe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi);

+#define SAVE_REGS_STRING\
+ " stp x0, x1, [sp, #16 * 0]\n" \
+ " stp x2, x3, [sp, #16 * 1]\n" \
+ " stp x4, x5, [sp, #16 * 2]\n" \
+ " stp x6, x7, [sp, #16 * 3]\n" \
+ " stp x8, x9, [sp, #16 * 4]\n" \
+ " stp x10, x11, [sp, #16 * 5]\n" \
+ " stp x12, x13, [sp, #16 * 6]\n" \
+ " stp x14, x15, [sp, #16 * 7]\n" \
+ " stp x16, x17, [sp, #16 * 8]\n" \
+ " stp x18, x19, [sp, #16 * 9]\n" \
+ " stp x20, x21, [sp, #16 * 10]\n" \
+ " stp x22, x23, [sp, #16 * 11]\n" \
+ " stp x24, x25, [sp, #16 * 12]\n" \
+ " stp x26, x27, [sp, #16 * 13]\n" \
+ " stp x28, x29, [sp, #16 * 14]\n" \
+ " str x30, [sp, #16 * 15]\n"
+
+#define RESTORE_REGS_STRING\
+ " ldp x0, x1, [sp, #16 * 0]\n" \
+ " ldp x2, x3, [sp, #16 * 1]\n" \
+ " ldp x4, x5, [sp, #16 * 2]\n" \
+ " ldp x6, x7, [sp, #16 * 3]\n" \
+ " ldp x8, x9, [sp, #16 * 4]\n" \
+ " ldp x10, x11, [sp, #16 * 5]\n" \
+ " ldp x12, x13, [sp, #16 * 6]\n" \
+ " ldp x14, x15, [sp, #16 * 7]\n" \
+ " ldp x16, x17, [sp, #16 * 8]\n" \
+ " ldp x18, x19, [sp, #16 * 9]\n" \
+ " ldp x20, x21, [sp, #16 * 10]\n" \
+ " ldp x22, x23, [sp, #16 * 11]\n" \
+ " ldp x24, x25, [sp, #16 * 12]\n" \
+ " ldp x26, x27, [sp, #16 * 13]\n" \
+ " ldp x28, x29, [sp, #16 * 14]\n" \
+ " ldr x30, [sp, #16 * 15]\n"
+
+
#endif /* _ARM_KERNEL_KPROBES_ARM64_H */
diff --git a/arch/arm64/kernel/kprobes.c b/arch/arm64/kernel/kprobes.c
index 2b3ef17..f5dab2d 100644
--- a/arch/arm64/kernel/kprobes.c
+++ b/arch/arm64/kernel/kprobes.c
@@ -561,32 +561,29 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
}

/*
- * Kretprobes: kernel return probes handling
- *
- * AArch64 mode does not support popping the PC value from the
- * stack like on ARM 32-bit (ldmia {..,pc}), so atleast one
- * register need to be used to achieve branching/return.
- * It means return probes cannot return back to the original
- * return address directly without modifying the register context.
- *
- * So like other architectures, we prepare a global routine
- * with NOPs, which serve as trampoline address that hack away the
- * function return, with the exact register context.
- * Placing a kprobe on trampoline routine entry will trap again to
- * execute return probe handlers and restore original return address
- * in ELR_EL1, this way saved pt_regs still hold the original
- * register values to be carried back to the caller.
+ * When a retprobed function returns, this code saves the register state
+ * and calls trampoline_probe_handler(), which runs the kretprobe's handler.
 */
-static void __used kretprobe_trampoline_holder(void)
+static void __kprobes __used kretprobe_trampoline_holder(void)
{
- asm volatile (".global kretprobe_trampoline\n"
- "kretprobe_trampoline:\n"
- "NOP\n\t"
- "NOP\n\t");
+ asm volatile (
+ ".global kretprobe_trampoline\n"
+ "kretprobe_trampoline:\n"
+ " sub sp, sp, #16 * 16\n"
+ SAVE_REGS_STRING
+ " mov x0, sp\n"
+ " bl trampoline_probe_handler\n"
+ /* Replace the trampoline address in the saved lr slot with the
+ original return address returned by trampoline_probe_handler(). */
+ " str x0, [sp, #16 * 15]\n"
+ RESTORE_REGS_STRING
+ " add sp, sp, #16 * 16\n"
+ " ret\n"
+ : : : "memory");
}

-static int __kprobes
-trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
+
+static void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)
{
struct kretprobe_instance *ri = NULL;
struct hlist_head *head, empty_rp;
@@ -647,7 +642,7 @@ trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
}

- /* return 1 so that post handlers not called */
- return 1;
+ /* Hand the original return address back to the trampoline asm. */
+ return (void *) orig_ret_addr;
}

void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
@@ -659,18 +654,7 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
regs->regs[30] = (long)&kretprobe_trampoline;
}

-static struct kprobe trampoline = {
- .addr = (kprobe_opcode_t *) &kretprobe_trampoline,
- .pre_handler = trampoline_probe_handler
-};
-
-int __kprobes arch_trampoline_kprobe(struct kprobe *p)
-{
- return p->addr == (kprobe_opcode_t *) &kretprobe_trampoline;
-}
-
int __init arch_init_kprobes(void)
{
- /* register trampoline for kret probe */
- return register_kprobe(&trampoline);
+ return 0;
}