Re: [PATCHv3 4/8] x86/mm: Handle LAM on context switch

From: Edgecombe, Rick P
Date: Fri Jun 10 2022 - 19:55:19 EST


On Fri, 2022-06-10 at 17:35 +0300, Kirill A. Shutemov wrote:
> @@ -687,6 +716,7 @@ void initialize_tlbstate_and_flush(void)
> struct mm_struct *mm = this_cpu_read(cpu_tlbstate.loaded_mm);
> u64 tlb_gen = atomic64_read(&init_mm.context.tlb_gen);
> unsigned long cr3 = __read_cr3();
> + u64 lam = cr3 & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57);
>
> /* Assert that CR3 already references the right mm. */
> WARN_ON((cr3 & CR3_ADDR_MASK) != __pa(mm->pgd));
> @@ -700,7 +730,7 @@ void initialize_tlbstate_and_flush(void)
> !(cr4_read_shadow() & X86_CR4_PCIDE));
>
> /* Force ASID 0 and force a TLB flush. */
> - write_cr3(build_cr3(mm->pgd, 0));
> + write_cr3(build_cr3(mm->pgd, 0, lam));
>

Can you explain why the LAM bits already in CR3 are preserved here? The
function seems concerned that some CR3 bits may have changed and need to
be reset to a known state. Why not take the LAM bits from the mm instead?

Also, since it already warns when the CR3 pfn doesn't match the mm's pgd,
should it also warn when the CR3 LAM bits don't match the mm's copy?