Re: [PATCH Part2 v5 08/45] x86/fault: Add support to handle the RMP fault for user address
From: Borislav Petkov
Date: Wed Sep 29 2021 - 14:19:33 EST
On Fri, Aug 20, 2021 at 10:58:41AM -0500, Brijesh Singh wrote:
> +static int handle_user_rmp_page_fault(struct pt_regs *regs, unsigned long error_code,
> + unsigned long address)
> +{
#ifdef CONFIG_AMD_MEM_ENCRYPT
> + int rmp_level, level;
> + pte_t *pte;
> + u64 pfn;
> +
> + pte = lookup_address_in_mm(current->mm, address, &level);
> +
> + /*
> + * This can happen if there was a race between an unmap event and
> + * the RMP fault delivery.
> + */
> + if (!pte || !pte_present(*pte))
> + return 1;
> +
> + pfn = pte_pfn(*pte);
> +
> + /* If it's a large page, then calculate the fault pfn */
> + if (level > PG_LEVEL_4K) {
> + unsigned long mask;
> +
> + mask = pages_per_hpage(level) - pages_per_hpage(level - 1);
Just use two properly named helper variables instead of this one-liner:
pages_level = page_level_size(level) / PAGE_SIZE;
pages_prev_level = page_level_size(level - 1) / PAGE_SIZE;
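and then compute the mask from those two so the intent is visible at a
glance (a sketch only, keeping your arithmetic):

	mask = pages_level - pages_prev_level;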
> + pfn |= (address >> PAGE_SHIFT) & mask;
> + }
> +
> + /*
> + * If it's a guest private page, then the fault cannot be resolved.
> + * Send a SIGBUS to terminate the process.
> + */
> + if (snp_lookup_rmpentry(pfn, &rmp_level)) {
> + do_sigbus(regs, error_code, address, VM_FAULT_SIGBUS);
> + return 1;
> + }
> +
> + /*
> + * The backing page level is higher than the RMP page level; request
> + * to split the page.
> + */
> + if (level > rmp_level)
> + return 0;
> +
> + return 1;
#else
	WARN_ON_ONCE(1);
return -1;
#endif
and also handle that -1 negative value at the call site.
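IOW, something along these lines at the call site in do_user_addr_fault() -
a sketch only, assuming the X86_PF_RMP error-code bit and the
FAULT_FLAG_PAGE_SPLIT flag your series introduces (adjust the names to
whatever they end up being):

	int ret;

	if (error_code & X86_PF_RMP) {
		ret = handle_user_rmp_page_fault(regs, error_code, address);

		/* RMP #PF without CONFIG_AMD_MEM_ENCRYPT - should not happen */
		if (ret < 0) {
			do_sigbus(regs, error_code, address, VM_FAULT_SIGBUS);
			return;
		}

		/* Fault handled (or the task was already killed) - done */
		if (ret)
			return;

		/* ret == 0: ask to split the large backing page */
		flags |= FAULT_FLAG_PAGE_SPLIT;
	}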
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette