Re: [tip:x86/mm] x86/asm: Remove __VIRTUAL_MASK_SHIFT==47 assert

From: Denys Vlasenko
Date: Tue Apr 04 2017 - 11:36:48 EST


On 04/04/2017 10:29 AM, tip-bot for Kirill A. Shutemov wrote:
> Commit-ID:  361b4b58ec4cf123e12a773909c6454dbd5e6dbc
> Gitweb:     http://git.kernel.org/tip/361b4b58ec4cf123e12a773909c6454dbd5e6dbc
> Author:     Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> AuthorDate: Thu, 30 Mar 2017 11:07:26 +0300
> Committer:  Ingo Molnar <mingo@xxxxxxxxxx>
> CommitDate: Tue, 4 Apr 2017 08:22:33 +0200
>
> x86/asm: Remove __VIRTUAL_MASK_SHIFT==47 assert
>
> We don't need the assert anymore, as:
>
>   17be0aec74fb ("x86/asm/entry/64: Implement better check for canonical addresses")
>
> made canonical address checks generic wrt. address width.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Andy Lutomirski <luto@xxxxxxxxxxxxxx>
> Cc: Andy Lutomirski <luto@xxxxxxxxxx>
> Cc: Borislav Petkov <bp@xxxxxxxxx>
> Cc: Brian Gerst <brgerst@xxxxxxxxx>
> Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
> Cc: Denys Vlasenko <dvlasenk@xxxxxxxxxx>
> Cc: H. Peter Anvin <hpa@xxxxxxxxx>
> Cc: Josh Poimboeuf <jpoimboe@xxxxxxxxxx>
> Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: linux-arch@xxxxxxxxxxxxxxx
> Cc: linux-mm@xxxxxxxxx
> Link: http://lkml.kernel.org/r/20170330080731.65421-3-kirill.shutemov@xxxxxxxxxxxxxxx
> Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
> ---
>  arch/x86/entry/entry_64.S | 7 ++-----
>  1 file changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> index 044d18e..f07b4ef 100644
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -265,12 +265,9 @@ return_from_SYSCALL_64:
>  	 *
>  	 * If width of "canonical tail" ever becomes variable, this will need
>  	 * to be updated to remain correct on both old and new CPUs.
> +	 *
> +	 * Change top 16 bits to be the sign-extension of 47th bit

The comment above stops being correct: it's not necessarily the top 16 bits
that we sign-extend now. With the larger __VIRTUAL_MASK_SHIFT for 5-level
translation, it will become 64 - (56 + 1) = 7 bits (if I do the math right).
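
To illustrate with a quick userspace sketch (the 5-level value
__VIRTUAL_MASK_SHIFT = 56 is my assumption here, nothing this patch sets):

#include <stdint.h>
#include <stdio.h>

/*
 * Mimic the shl/sar pair below: replicate bit VMS of 'addr' into
 * all higher bits, i.e. sign-extend from bit __VIRTUAL_MASK_SHIFT.
 */
static uint64_t canonicalize(uint64_t addr, int vms)
{
	int shift = 64 - (vms + 1);

	/* unsigned left shift, then arithmetic right shift
	   (implementation-defined, but arithmetic on gcc/clang) */
	return (uint64_t)((int64_t)(addr << shift) >> shift);
}

int main(void)
{
	printf("4-level: %d top bits rewritten\n", 64 - (47 + 1));  /* 16 */
	printf("5-level: %d top bits rewritten\n", 64 - (56 + 1));  /*  7 */

	/* 0x0000800000000000 is non-canonical with 4-level paging;
	   the check turns it into canonical 0xffff800000000000 */
	printf("%#llx\n",
	       (unsigned long long)canonicalize(0x0000800000000000ULL, 47));
	return 0;
}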

>  	 */
> -	.ifne __VIRTUAL_MASK_SHIFT - 47
> -	.error "virtual address width changed -- SYSRET checks need update"
> -	.endif
> -
> -	/* Change top 16 bits to be the sign-extension of 47th bit */
>  	shl	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
>  	sar	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx

The bigger problem here is the future boot-time choice between 4- and
5-level page tables: __VIRTUAL_MASK_SHIFT will need to depend on that
choice, but in this location it is preferable not to use any variables
(memory references). One way to handle that is sketched below.
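
A rough sketch of boot-time code patching via the kernel's ALTERNATIVE
mechanism, which keeps immediates in the hot path (X86_FEATURE_LA57 and
the 5-level shift constants are my assumptions, not part of this patch):

#include <asm/alternative.h>
#include <asm/cpufeatures.h>

/*
 * Sketch only: at boot, patch in the 5-level shift pair iff LA57
 * is enabled, so the check needs no memory reference at run time.
 */
static __always_inline unsigned long canonicalize(unsigned long addr)
{
	asm(ALTERNATIVE("shl $(64 - 48), %[addr]; sar $(64 - 48), %[addr]",
			"shl $(64 - 57), %[addr]; sar $(64 - 57), %[addr]",
			X86_FEATURE_LA57)
	    : [addr] "+r" (addr));
	return addr;
}

The same could presumably be done directly in entry_64.S with the
assembler ALTERNATIVE macro from asm/alternative-asm.h.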

--
vda