Re: [PATCH 3/3] arm64: ftrace: add cond_resched() to func ftrace_make_(call|nop)

From: Steven Rostedt
Date: Tue Dec 04 2018 - 00:50:19 EST


On Mon, 3 Dec 2018 22:51:52 +0100
Arnd Bergmann <arnd@xxxxxxxx> wrote:

> On Mon, Dec 3, 2018 at 8:22 PM Will Deacon <will.deacon@xxxxxxx> wrote:
> >
> > Hi Anders,
> >
> > On Fri, Nov 30, 2018 at 04:09:56PM +0100, Anders Roxell wrote:
> > > Both of those functions end up calling ftrace_modify_code(), which is
> > > expensive because it changes the page tables and flushes caches.
> > > Microseconds add up because this is called in a loop for each dyn_ftrace
> > > record, and this triggers the softlockup watchdog unless we let it sleep
> > > occasionally.
> > > Rework so that we call cond_resched() before going into the
> > > ftrace_modify_code() function.
> > >
> > > Co-developed-by: Arnd Bergmann <arnd@xxxxxxxx>
> > > Signed-off-by: Arnd Bergmann <arnd@xxxxxxxx>
> > > Signed-off-by: Anders Roxell <anders.roxell@xxxxxxxxxx>
> > > ---
> > > arch/arm64/kernel/ftrace.c | 10 ++++++++++
> > > 1 file changed, 10 insertions(+)
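
(For context, since only the diffstat is quoted above: the proposed
change is roughly of the following shape. This is a sketch
reconstructed from the patch description, not the actual diff; the
exact placement inside arch/arm64/kernel/ftrace.c may differ.)

int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
        unsigned long pc = rec->ip;
        u32 old, new;

        old = aarch64_insn_gen_nop();
        new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);

        /*
         * Proposed addition (sketch): give the scheduler a chance
         * before the expensive patch, so that looping over every
         * dyn_ftrace record does not trip the softlockup watchdog.
         */
        cond_resched();

        return ftrace_modify_code(pc, old, new, true);
}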
> >
> > It sounds like you're running into issues with the existing code, but I'd
> > like to understand a bit more about exactly what you're seeing. Which part
> > of the ftrace patching is proving to be expensive?
> >
> > The page table manipulation only happens once per module when using PLTs,
> > and the cache maintenance is just a single line per patch site without an
> > IPI.
> >
> > Is it the loop in ftrace_replace_code() that is causing the hassle?
>
> Yes: with an allmodconfig kernel, the ftrace selftest calls
> ftrace_replace_code() to loop >40000 times through
> ftrace_make_call()/ftrace_make_nop(), and these end up calling
>
> static int __kprobes __aarch64_insn_write(void *addr, __le32 insn)
> {
>         void *waddr = addr;
>         unsigned long flags = 0;
>         int ret;
>
>         raw_spin_lock_irqsave(&patch_lock, flags);
>         waddr = patch_map(addr, FIX_TEXT_POKE0);
>
>         ret = probe_kernel_write(waddr, &insn, AARCH64_INSN_SIZE);
>
>         patch_unmap(FIX_TEXT_POKE0);
>         raw_spin_unlock_irqrestore(&patch_lock, flags);
>
>         return ret;
> }
>
> int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
> {
>         u32 *tp = addr;
>         int ret;
>
>         /* A64 instructions must be word aligned */
>         if ((uintptr_t)tp & 0x3)
>                 return -EINVAL;
>
>         ret = aarch64_insn_write(tp, insn);
>         if (ret == 0)
>                 __flush_icache_range((uintptr_t)tp,
>                                      (uintptr_t)tp + AARCH64_INSN_SIZE);
>
>         return ret;
> }
>
> which seems to be where the main cost is. This is running inside of
> qemu, with lots of debugging options (in particular kcov and ubsan)
> enabled that make each function call more expensive.
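
To make the cost concrete, the per-record path being exercised here is
roughly the following (call chain reconstructed from the code quoted
above, not verbatim):

ftrace_replace_code(enable)
  __ftrace_replace_code(rec, enable)
    ftrace_make_call(rec, addr)                /* or ftrace_make_nop() */
      ftrace_modify_code(pc, old, new, validate)
        aarch64_insn_patch_text_nosync(addr, insn)
          __aarch64_insn_write(addr, insn)     /* spinlock + fixmap map/unmap */
          __flush_icache_range(...)            /* icache maintenance per insn */

so each of the >40000 records pays for a spinlock round trip, a fixmap
map/unmap and an icache flush, all without ever scheduling -- which is
what trips the softlockup watchdog.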

I was thinking more about this. Would something like this work?

-- Steve

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 8ef9fc226037..42e89397778b 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2393,11 +2393,14 @@ void __weak ftrace_replace_code(int enable)
 {
         struct dyn_ftrace *rec;
         struct ftrace_page *pg;
+        bool schedulable;
         int failed;
 
         if (unlikely(ftrace_disabled))
                 return;
 
+        schedulable = !irqs_disabled() && !preempt_count();
+
         do_for_each_ftrace_rec(pg, rec) {
 
                 if (rec->flags & FTRACE_FL_DISABLED)
@@ -2409,6 +2412,8 @@ void __weak ftrace_replace_code(int enable)
                         /* Stop processing */
                         return;
                 }
+                if (schedulable)
+                        cond_resched();
         } while_for_each_ftrace_rec();
 }
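
The schedulable check is there because ftrace_replace_code() is the
generic __weak implementation and can also be reached from atomic
context (e.g. when an architecture drives the update from
stop_machine()), where calling cond_resched() is not allowed. Checking
irqs_disabled() and preempt_count() once up front means the loop only
yields when it was entered from a context that may sleep, and behaves
exactly as before otherwise.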