Re: [PATCH] x86/mm/pat: Fix missing preemption disable for __native_flush_tlb()
From: Dave Hansen
Date: Mon Nov 12 2018 - 14:07:58 EST
On 11/10/18 4:31 PM, Dan Williams wrote:
>> If it indeed can run late in boot or after boot, then it sure looks
>> buggy. Either the __flush_tlb_all() should be removed or it should
>> be replaced with flush_tlb_kernel_range(). It's unclear to me why a
>> flush is needed at all, but if it's needed, surely all CPUs need
>> flushing.
> Yeah, I don't think __flush_tlb_all() is needed at
> kernel_physical_mapping_init() time, and at
> kernel_physical_mapping_remove() time we do a full flush_tlb_all().
It doesn't look strictly necessary to me. I _think_ we're only ever
populating previously non-present entries, and those never need TLB
flushes, since the TLB doesn't cache non-present entries and there is
nothing stale to invalidate. I didn't look too deeply, so I'd
appreciate anyone else double-checking me on this.
The __flush_tlb_all() actually appears to predate git, and it was
originally intended solely for early boot. It probably lasted this
long because it looks really important. :)
It was even next to where we set MMU features in CR4, which is *really*
early in boot:
> + asm volatile("movq %%cr4,%0" : "=r" (mmu_cr4_features));
> + __flush_tlb_all();
I also totally agree with Andy that if it were needed on the local CPU,
this code would be buggy because it doesn't initiate any *remote* TLB
flushes.
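For reference, __flush_tlb_all() only touches the CPU it runs on; the
cross-CPU variant is flush_tlb_all(), which IPIs everybody. Roughly
(a from-memory sketch, not verbatim kernel source):

	static void do_flush_tlb_all(void *info)
	{
		__flush_tlb_all();	/* flushes the TLB of this CPU only */
	}

	void flush_tlb_all(void)
	{
		/* IPI every online CPU and have each do a local flush */
		on_each_cpu(do_flush_tlb_all, NULL, 1);
	}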
So, let's remove it, but also add some comments about not being allowed
to *change* page table entries, only populate them. We could even add
some warnings to enforce this.
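Just a sketch of what such a warning might look like, say in the
phys_pte_init() population path -- set_pte_safe() is an invented name
here, not anything that exists today:

	/*
	 * Sketch only: warn if we would *change* a live kernel
	 * mapping, since this path never flushes TLBs.  Populating a
	 * previously non-present entry is fine.
	 */
	static inline void set_pte_safe(pte_t *ptep, pte_t pte)
	{
		WARN_ONCE(!pte_none(*ptep) && pte_val(*ptep) != pte_val(pte),
			  "changing a present kernel PTE without a TLB flush\n");
		set_pte(ptep, pte);
	}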