On 21/09/17 16:14, Boris Ostrovsky wrote:
> On 09/21/2017 04:01 AM, Juergen Gross wrote:
>> Physical addresses on processors supporting 5 level paging can be up to
>> 52 bits wide. For a Xen pv guest running on such a machine those
>> physical addresses have to be supported in order to be able to use any
>> memory on the machine even if the guest itself does not support 5 level
>> paging.
>>
>> So when reading/writing a MFN from/to a pte don't use the kernel's
>> PTE_PFN_MASK but a new XEN_PTE_MFN_MASK allowing full 40 bit wide MFNs.
>
> full 52 bits?

The MFN mask is only 40 bits wide. Those 40 bits plus the 12 bit page
offset make up the 52 bits of machine address width.
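
To spell out the arithmetic (a sketch only; SKETCH_MFN_MASK is a
made-up name for illustration, not the patch's XEN_PTE_MFN_MASK):

	/* pte bits 12..51 hold the MFN: 52 - 12 = 40 bits wide */
	#define SKETCH_MFN_MASK	(((1UL << 52) - 1) & ~((1UL << 12) - 1))
	/* machine address = (mfn << 12) | offset, i.e. 40 + 12 = 52 bits */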

>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>> ---
>>  arch/x86/include/asm/xen/page.h | 11 ++++++++++-
>>  arch/x86/xen/mmu_pv.c           |  4 ++--
>>  2 files changed, 12 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
>> index 07b6531813c4..bcb8b193c8d1 100644
>> --- a/arch/x86/include/asm/xen/page.h
>> +++ b/arch/x86/include/asm/xen/page.h
>> @@ -26,6 +26,15 @@ typedef struct xpaddr {
>>  	phys_addr_t paddr;
>>  } xpaddr_t;
>>
>> +#ifdef CONFIG_X86_64
>> +#define XEN_PHYSICAL_MASK	((1UL << 52) - 1)
>
> SME is not supported for PV guests, but for consistency (and in case
> the SME bit somehow gets set):
>
> #define XEN_PHYSICAL_MASK	__sme_clr(((1UL << 52) - 1))

Hmm, really? Shouldn't we rather add something like

	BUG_ON(sme_active());

somewhere?
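
For comparison, both alternatives sketched out (neither is in the patch
as posted, and the placement of the check is an assumption on my part):

	/* Alternative 1: mask out a stray SME bit, as suggested above.
	 * __sme_clr() clears sme_me_mask and is a no-op without SME.
	 */
	#define XEN_PHYSICAL_MASK	__sme_clr((1UL << 52) - 1)

	/* Alternative 2: refuse to run as a PV guest with SME active,
	 * placed somewhere early in PV boot, e.g. xen_start_kernel():
	 */
	BUG_ON(sme_active());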

> But the real question that I have is whether this patch is sufficient.
> We are trying to preserve more bits in the mfn, but then this mfn is
> used, say, in pte_pfn_to_mfn() to build a pte. Can we be sure that the
> pte won't be stripped of higher bits in native code (again, as an
> example, native_make_pte()) because we are compiled without 5LEVEL?

native_make_pte() just encapsulates the value in a pte_t. It doesn't
modify the value of the pte at all.
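
For reference, it is literally just this (in pgtable_types.h, if I'm
looking at the right spot):

	static inline pte_t native_make_pte(pteval_t val)
	{
		return (pte_t) { .pte = val };
	}

No masking happens there, so any stripping would have to come from a
mask applied before the value reaches it.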

Physical address bits are only ever masked away via PTE_PFN_MASK, and I
haven't found any place where it is applied to an MFN other than those
I touched in this patch.
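
To make that concrete, this is the kind of change meant (a sketch, as
the mmu_pv.c hunks aren't quoted above):

	/* Read the MFN with XEN_PTE_MFN_MASK instead of PTE_PFN_MASK so
	 * MFN bits above the kernel's own physical address width survive.
	 */
	static pteval_t pte_mfn_to_pfn(pteval_t val)
	{
		if (val & _PAGE_PRESENT) {
			unsigned long mfn = (val & XEN_PTE_MFN_MASK) >> PAGE_SHIFT;
			unsigned long pfn = mfn_to_pfn(mfn);
			pteval_t flags = val & PTE_FLAGS_MASK;

			if (unlikely(pfn == ~0UL))
				val = flags & ~_PAGE_PRESENT;
			else
				val = ((pteval_t)pfn << PAGE_SHIFT) | flags;
		}
		return val;
	}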