Re: [tip:x86/mm] x86/asm: Remove __VIRTUAL_MASK_SHIFT==47 assert

From: Denys Vlasenko
Date: Wed Apr 05 2017 - 07:50:59 EST

On 04/05/2017 01:12 PM, Kirill A. Shutemov wrote:
> On Tue, Apr 04, 2017 at 05:36:33PM +0200, Denys Vlasenko wrote:
> > diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> > index 044d18e..f07b4ef 100644
> > --- a/arch/x86/entry/entry_64.S
> > +++ b/arch/x86/entry/entry_64.S
> > @@ -265,12 +265,9 @@ return_from_SYSCALL_64:
> >   *
> >   * If width of "canonical tail" ever becomes variable, this will need
> >   * to be updated to remain correct on both old and new CPUs.
> > + *
> > + * Change top 16 bits to be the sign-extension of 47th bit
>
> The comment above stops being correct: it is no longer necessarily the
> top 16 bits that we sign-extend. With the larger __VIRTUAL_MASK_SHIFT
> of 5-level translation, it will become 7 bits (if I do the math right).
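
The math is right: with 5-level translation __VIRTUAL_MASK_SHIFT
becomes 56, so the shift count is 64 - (56+1) = 7, versus
64 - (47+1) = 16 today. A trivial userspace sketch to double-check
(plain C, nothing kernel-specific assumed):

    #include <stdio.h>

    int main(void)
    {
            /* 4-level paging: __VIRTUAL_MASK_SHIFT == 47 */
            printf("4-level: sign-extend %d top bits\n", 64 - (47 + 1));
            /* 5-level paging: __VIRTUAL_MASK_SHIFT == 56 */
            printf("5-level: sign-extend %d top bits\n", 64 - (56 + 1));
            return 0;
    }

This prints 16 and 7 respectively.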

> Does the patch below look okay to you?

> >   */
> > - .ifne __VIRTUAL_MASK_SHIFT - 47
> > - .error "virtual address width changed -- SYSRET checks need update"
> > - .endif
> > -
> > - /* Change top 16 bits to be the sign-extension of 47th bit */
> >   shl $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
> >   sar $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
> >
> > The bigger problem here would be the future boot-time choice of 4/5-level
> > page tables: __VIRTUAL_MASK_SHIFT will need to depend on that choice,
> > but in this location it is preferable to not use any variables
> > (memory references).

> Yeah. I'll see what I can come up with. Not sure yet.
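
(To make the concern concrete: the naive way to support a boot-time
choice would be to load the shift count from a variable, which puts a
memory reference and variable-count shifts on the syscall-return hot
path. A hypothetical C rendering, purely for illustration -- the
variable name is made up:

    #include <stdint.h>
    #include <stdio.h>

    /* hypothetical: set once at boot; 16 for 4-level, 7 for 5-level */
    static int vaddr_shift = 16;

    static uint64_t canonicalize_dynamic(uint64_t addr)
    {
            /* compiles to a load of vaddr_shift plus two shifts by
             * %cl, instead of two shifts by an immediate */
            return (uint64_t)(((int64_t)(addr << vaddr_shift)) >> vaddr_shift);
    }

    int main(void)
    {
            printf("%016llx\n", (unsigned long long)
                   canonicalize_dynamic(0x0000800000000000ULL));
            return 0;
    }

Patching the shift immediates at boot time would avoid the load, but
that is presumably the "not sure yet" part.)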

> -------------------8<----------------------
>
> From 2433cf4f8847bbc41cc2b02d6af4f191b3b5a0c5 Mon Sep 17 00:00:00 2001
> From: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
> Date: Wed, 5 Apr 2017 14:06:15 +0300
> Subject: [PATCH] x86/asm: Fix comment in return_from_SYSCALL_64
>
> On x86-64 __VIRTUAL_MASK_SHIFT depends on paging mode now.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> ---
>  arch/x86/entry/entry_64.S | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> index 607d72c4a485..c70e064d9592 100644
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -266,7 +266,8 @@ return_from_SYSCALL_64:
>   * If width of "canonical tail" ever becomes variable, this will need
>   * to be updated to remain correct on both old and new CPUs.
>   *
> - * Change top 16 bits to be the sign-extension of 47th bit
> + * Change top bits to match most significant valuable bit (47 or 56
> + * depending on paging mode) in the address.

Er... "Change top bits ... (47 or 56 [bits] depending on paging mode)"?
I know that's wrong and not what you meant to say, but the sentence
can be read that way too. "47th" instead of "47" would rule that
reading out, but you dropped the "th".

Spell it out to eliminate any chance of confusion:

Change top bits to match most significant bit (47th or 56th bit
depending on paging mode) in the address.


>   */
>   shl $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
>   sar $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
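
To spell out what the shl/sar pair computes, here is a userspace C
sketch of the whole check (illustration only; the function name and
test values are made up):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* vaddr_bits == __VIRTUAL_MASK_SHIFT + 1: 48 or 57 */
    static bool sysret_rip_ok(uint64_t rip, int vaddr_bits)
    {
            int shift = 64 - vaddr_bits;    /* 16 or 7 */
            /* Move the most significant valid bit up into bit 63,
             * then arithmetic-shift back so its value is replicated
             * into all the bits above it.  (Relies on gcc/clang
             * arithmetic right shift of signed values.) */
            uint64_t canon = (uint64_t)(((int64_t)(rip << shift)) >> shift);

            /* a canonical address comes back unchanged; anything
             * else must take the slow IRET path instead of SYSRET,
             * which would #GP on a non-canonical RIP */
            return canon == rip;
    }

    int main(void)
    {
            printf("%d\n", sysret_rip_ok(0x00007fffffffe000ULL, 48)); /* 1 */
            printf("%d\n", sysret_rip_ok(0x0000800000000000ULL, 48)); /* 0 */
            return 0;
    }

This mirrors what entry_64.S does with the shifted value: compare it
against the original return address and fall back to IRET on mismatch.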