Re: I915 CI-run with kfence enabled, issues found

From: Marco Elver
Date: Mon Mar 29 2021 - 17:35:12 EST


On Mon, 29 Mar 2021 at 23:03, Dave Hansen <dave.hansen@xxxxxxxxx> wrote:
> On 3/29/21 10:45 AM, Marco Elver wrote:
> > On Mon, 29 Mar 2021 at 19:32, Dave Hansen <dave.hansen@xxxxxxxxx> wrote:
> > Doing it to all CPUs is too expensive, and we can tolerate this being
> > approximate (nothing bad will happen, KFENCE might just miss a bug and
> > that's ok).
> ...
> >> BTW, the preempt checks in flush_tlb_one_kernel() are dependent on KPTI
> >> being enabled. That's probably why you don't see this everywhere. We
> >> should probably have unconditional preempt checks in there.
> >
> > In which case I'll add a preempt_disable/enable() pair to
> > kfence_protect_page() in arch/x86/include/asm/kfence.h.
>
> That sounds sane to me. I'd just plead that the special situation (not
> needing deterministic TLB flushes) is made obvious. We don't want
> folks copying this code.
>
> BTW, I know you want to avoid the cost of IPIs, but have you considered
> any other low-cost ways to get quicker TLB flushes? For instance, you
> could loop over all CPUs and set cpu_tlbstate.invalidate_other=1. That
> would induce a TLB flush at the next context switch without needing
> an IPI.

This is interesting, and it seems like it would work well for our use
case. Ideally we'd only flush the entries for the page we changed, but
it seems invalidate_other would flush the entire TLB.
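To make sure I understand the suggestion, a minimal sketch of what I
think you mean (kfence_flush_deferred() is a made-up name, and I'm
assuming it's even safe to write another CPU's cpu_tlbstate like this):

/*
 * Instead of an IPI, mark every CPU's TLB state so that it
 * invalidates its TLB at its next context switch.
 */
static void kfence_flush_deferred(void)
{
	int cpu;

	for_each_online_cpu(cpu)
		per_cpu(cpu_tlbstate.invalidate_other, cpu) = true;
}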

With PTI, flush_tlb_one_kernel() already does that for the current
CPU; with your suggestion we'd flush the entire TLB on all CPUs, even
when PTI is off.
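For reference, the path I'm looking at, paraphrased from my reading of
arch/x86/mm/tlb.c (not a verbatim quote):

void flush_tlb_one_kernel(unsigned long addr)
{
	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);

	/* With PTI off, this boils down to an INVLPG on this CPU. */
	flush_tlb_one_user(addr);

	if (!static_cpu_has(X86_FEATURE_PTI))
		return;

	/*
	 * With PTI on, also invalidate all other address spaces on
	 * this CPU at its next context switch.
	 */
	this_cpu_write(cpu_tlbstate.invalidate_other, true);
}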

Do you have an intuition for how much this would affect large
multi-socket systems? I currently can't quite say, and would err on
the side of caution.
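
In the meantime, the change I have in mind for
arch/x86/include/asm/kfence.h is roughly the below (untested sketch):

static inline bool kfence_protect_page(unsigned long addr, bool protect)
{
	unsigned int level;
	pte_t *pte = lookup_address(addr, &level);

	if (WARN_ON(!pte || level != PG_LEVEL_4K))
		return false;

	if (protect)
		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
	else
		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));

	/*
	 * Flush this CPU's TLB only: a stale entry on another CPU just
	 * means KFENCE may miss a bug there, which we can tolerate. The
	 * preempt_disable/enable pair keeps the "this CPU" semantics and
	 * satisfies the preempt checks in flush_tlb_one_kernel() with
	 * KPTI enabled.
	 */
	preempt_disable();
	flush_tlb_one_kernel(addr);
	preempt_enable();
	return true;
}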

Thanks,
-- Marco