Re: [PATCH Part2 v6 09/49] x86/fault: Add support to handle the RMP fault for user address
From: Borislav Petkov
Date: Sun Sep 04 2022 - 02:37:44 EST
On Sat, Sep 03, 2022 at 05:30:28PM +0000, Kalra, Ashish wrote:
> There is 1 64-bit RMP entry for every physical 4k page of DRAM, so
> essentially every 4K page of DRAM is represented by a RMP entry.
Before we get to the rest - this sounds wrong to me. My APM has:
"PSMASH Page Smash
Expands a 2MB-page RMP entry into a corresponding set of contiguous
4KB-page RMP entries. The 2MB page’s system physical address is
specified in the RAX register. The new entries inherit the attributes
of the original entry. Upon completion, a return code is stored in EAX.
rFLAGS bits OF, ZF, AF, PF and SF are set based on this return code..."
So there *are* 2M entries in the RMP table.
> So even if host page is 1G and underlying (smashed/split) RMP
> entries are 2M, the RMP table entry has to be indexed to a 4K entry
> corresponding to that.
So if there are 2M entries in the RMP table, how is that indexing with
4K entries supposed to work?
Hell, even PSMASH pseudocode shows how you go and write all those 512 4K
entries using the 2M entry as a template. So *before* you have smashed
that 2M entry, it *is* an *actual* 2M entry.
So if you fault on a page which is backed by that 2M RMP entry, you will
get that 2M RMP entry.
> If it was simply a 2M entry in the RMP table, then pmd_index() will
> work correctly.
Judging by the above text, it *can* *be* a 2M RMP entry!
By reading your example you're trying to tell me that an RMP #PF will
always need to work on 4K entries. Which would then require a 2M entry
as above to be PSMASHed first in order to get the 4K thing. But that
would be silly - this way, RMP #PFs would gradually break up all the 2M
pages and degrade performance for no real reason.
So this still looks really wrong to me.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette