Re: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all()
From: Andy Lutomirski
Date: Tue Oct 16 2018 - 19:29:13 EST
On Tue, Oct 16, 2018 at 2:39 PM Sebastian Andrzej Siewior
<bigeasy@xxxxxxxxxxxxx> wrote:
>
> On 2018-10-16 14:25:07 [-0700], Andy Lutomirski wrote:
> > > diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
> > > index 51a5a69ecac9f..fe6b21f0a6631 100644
> > > --- a/arch/x86/mm/pageattr.c
> > > +++ b/arch/x86/mm/pageattr.c
> > > @@ -2088,7 +2088,9 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
> > > * We should perform an IPI and flush all tlbs,
> > > * but that can deadlock->flush only current cpu:
> > > */
> > > + preempt_disable();
> > > __flush_tlb_all();
> > > + preempt_enable();
> > >
> >
> > Depending on your CPU, __flush_tlb_all() is either
> > __native_flush_tlb_global() or __native_flush_tlb(). Only
> > __native_flush_tlb() could have any problem with preemption, but it
> > has a WARN_ON_ONCE(preemptible()); in it. Can you try to figure out
> > why that's not firing for you?
>
> It is firing; it is the warning that was introduced in commit
> decab0888e6e (as mentioned in the commit message; I just noticed it way
> later because it popped up early in the boot log).
>
> > I suspect that a better fix would be to put preempt_disable() into
> > __native_flush_tlb(), but I'd still like to understand why the
> > warning isn't working.
>
> __native_flush_tlb() just had its preempt_disable() removed in
> decab0888e6e, and __kernel_map_pages() is only called from the debug
> code. The other callers of __native_flush_tlb() seem to hold a lock or
> run with interrupts disabled.
>
> Sebastian
Fair enough. But can you copy the warning to __flush_tlb_all() so
it's checked on all systems (but make it VM_WARN_ON_ONCE)?
--Andy
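
[Editorial note: a minimal sketch of what Andy is asking for here — moving the
preemptible() check up into __flush_tlb_all() itself, as VM_WARN_ON_ONCE() so
it is compiled out unless CONFIG_DEBUG_VM is set. This is a kernel fragment,
not runnable standalone; the helper names follow the x86 tlbflush code of that
era, and the exact placement is an illustration, not the committed patch.]

	/* arch/x86/include/asm/tlbflush.h -- sketch of the proposed check */
	static inline void __flush_tlb_all(void)
	{
		/*
		 * Proposed: warn about a preemptible context on all systems,
		 * not only on CPUs that take the __native_flush_tlb() path.
		 * VM_WARN_ON_ONCE() is a no-op without CONFIG_DEBUG_VM, so
		 * production builds pay nothing for the check.
		 */
		VM_WARN_ON_ONCE(preemptible());

		if (boot_cpu_has(X86_FEATURE_PGE)) {
			/* Global flush: toggles CR4.PGE, safe either way. */
			__flush_tlb_global();
		} else {
			/*
			 * Non-global flush via CR3 reload only affects the
			 * current CPU, so being migrated mid-flush would
			 * leave the original CPU's TLB stale -- hence the
			 * warning above.
			 */
			__flush_tlb();
		}
	}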