RE: [PATCH Part2 v6 09/49] x86/fault: Add support to handle the RMP fault for user address

From: Kalra, Ashish
Date: Tue Sep 06 2022 - 11:12:37 EST



>> On Tue, Aug 09, 2022 at 06:55:43PM +0200, Borislav Petkov wrote:
>> > On Mon, Jun 20, 2022 at 11:03:43PM +0000, Ashish Kalra wrote:
>> > > + pfn = pte_pfn(*pte);
>> > > +
>> > > + /* If it's a large page then calculate the fault pfn */
>> > > + if (level > PG_LEVEL_4K) {
>> > > + unsigned long mask;
>> > > +
>> > > + mask = pages_per_hpage(level) - pages_per_hpage(level - 1);
>> > > + pfn |= (address >> PAGE_SHIFT) & mask;
>> >
>> > Oh boy, this is unnecessarily complicated. Isn't this
>> >
>> > pfn |= pud_index(address);
>> >
>> > or
>> > pfn |= pmd_index(address);
>>
>> I played with this a bit and ended up with
>>
>> pfn = pte_pfn(*pte) | PFN_DOWN(address & page_level_mask(level - 1));
>>
>> Unless I got something terribly wrong, this should do the same (see
>> the attached patch) as the existing calculations.

>Actually, I don't think they're the same. I think Jarkko's version is correct. Specifically:
>- For level = PG_LEVEL_2M they're the same.
>- For level = PG_LEVEL_1G:
>The current code calculates a garbage mask:
>mask = pages_per_hpage(level) - pages_per_hpage(level - 1); translates to:
> >>> hex(262144 - 512)
> '0x3fe00'

No, actually, this is not a garbage mask. As I explained in earlier responses, we need to capture
the address bits that get us to the correct 4K index into the RMP table.
Therefore, for level = PG_LEVEL_1G:
mask = pages_per_hpage(level) - pages_per_hpage(level - 1) => 0x3fe00, which is the correct mask.
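
To make the arithmetic concrete, here is a minimal standalone sketch (not the
kernel code: pages_per_hpage() is reimplemented locally to return the 4K-page
count per level, and the faulting address is made up) that prints the mask and
the index bits it extracts for both large-page levels:

#include <stdio.h>

#define PAGE_SHIFT	12

/* Matches the x86 enum pg_level numbering (PG_LEVEL_4K == 1). */
enum pg_level { PG_LEVEL_4K = 1, PG_LEVEL_2M, PG_LEVEL_1G };

/* Local stand-in: 4K -> 1, 2M -> 512, 1G -> 262144 4K pages. */
static unsigned long pages_per_hpage(int level)
{
	return 1UL << ((level - PG_LEVEL_4K) * 9);
}

int main(void)
{
	unsigned long address = 0x7f1234567000UL;	/* made-up faulting VA */
	int level;

	for (level = PG_LEVEL_2M; level <= PG_LEVEL_1G; level++) {
		unsigned long mask = pages_per_hpage(level) -
				     pages_per_hpage(level - 1);

		/*
		 * 2M: mask = 0x1ff   -> 4K-page index within the 2M page
		 * 1G: mask = 0x3fe00 -> 2M-page index within the 1G page,
		 *                       with the low nine bits left zero
		 */
		printf("level %d: mask = %#lx, index bits = %#lx\n",
		       level, mask, (address >> PAGE_SHIFT) & mask);
	}
	return 0;
}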

>But I believe Jarkko's version calculates the correct mask (below), incorporating all 18 offset bits into the 1G page.
> >>> hex(262144 - 1)
> '0x3ffff'

We can get 0x3ffff simply by doing (pages_per_hpage(level) - 1), but as I mentioned above, that is not what we need here.
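
For comparison, a quick standalone sketch (illustrative only) of how the two
candidate masks for PG_LEVEL_1G relate: they differ exactly in the low nine
bits, i.e. the 4K-page index within a 2M page, which the subtraction mask
leaves zero:

#include <stdio.h>

int main(void)
{
	/* Candidate masks for level = PG_LEVEL_1G (262144 4K pages). */
	unsigned long full_offset = 262144UL - 1;	/* 0x3ffff */
	unsigned long rmp_mask	  = 262144UL - 512;	/* 0x3fe00 */

	/* XOR isolates where the masks disagree: the low nine bits. */
	printf("full_offset ^ rmp_mask = %#lx\n",
	       full_offset ^ rmp_mask);			/* prints 0x1ff */
	return 0;
}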

Thanks,
Ashish