[tip: x86/boot] x86/kexec: Use correct swap page in swap_pages function

From: tip-bot2 for David Woodhouse
Date: Wed Jan 15 2025 - 04:22:07 EST


The following commit has been merged into the x86/boot branch of tip:

Commit-ID: 85d724df8c82c060dcdeb8d0de0bd986e6c37b72
Gitweb: https://git.kernel.org/tip/85d724df8c82c060dcdeb8d0de0bd986e6c37b72
Author: David Woodhouse <dwmw@xxxxxxxxxxxx>
AuthorDate: Thu, 09 Jan 2025 14:04:15
Committer: Borislav Petkov (AMD) <bp@xxxxxxxxx>
CommitterDate: Tue, 14 Jan 2025 12:54:36 +01:00

x86/kexec: Use correct swap page in swap_pages function

The swap_pages function expects the swap page to be in %r10, but there
was no documentation to that effect. Once upon a time, the setup code
loaded its value from a kernel virtual address and saved it to an
address which is accessible in the identity-mapped page tables, and
*happened* to use %r10 to do so, with no comment that it was left there
on *purpose* rather than just being a scratch register. Once that was
no longer necessary, %r10 simply held whatever the kernel happened to
leave in it.

Now that the original value passed by the kernel is accessible via
%rip-relative addressing, load directly from there instead of using %r10
for it. But document the other parameters that the swap_pages function
*does* expect in registers.
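
For readers unfamiliar with %rip-relative addressing, here is a minimal
standalone sketch of the technique (hypothetical labels, not the actual
relocate_kernel code): the value lives in a data word emitted alongside
the code, and the instruction encodes only that word's offset from the
next instruction, so the load still resolves correctly when the code
runs from the identity-mapped kexec pages rather than from its original
kernel virtual address.

	.text
load_swap_pa:
	/* Load the 64-bit value stored at my_pa_swap_page, regardless
	 * of where this code happens to be mapped at runtime. */
	movq	my_pa_swap_page(%rip), %rdi
	ret

my_pa_swap_page:
	.quad	0		/* placeholder for the physical address */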

Fixes: b3adabae8a96 ("x86/kexec: Drop page_list argument from relocate_kernel()")
Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxxxx>
Signed-off-by: Borislav Petkov (AMD) <bp@xxxxxxxxx>
Link: https://lore.kernel.org/r/20250109140757.2841269-4-dwmw2@xxxxxxxxxxxxx
---
arch/x86/kernel/relocate_kernel_64.S | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index 6fce4b4..3ca3bf6 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -264,6 +264,10 @@ SYM_CODE_END(virtual_mapped)
 /* Do the copies */
 SYM_CODE_START_LOCAL_NOALIGN(swap_pages)
 	UNWIND_HINT_END_OF_STACK
+	/*
+	 * %rdi indirection page
+	 * %r11 preserve_context
+	 */
 	movq	%rdi, %rcx	/* Put the indirection_page in %rcx */
 	xorl	%edi, %edi
 	xorl	%esi, %esi
@@ -302,7 +306,7 @@ SYM_CODE_START_LOCAL_NOALIGN(swap_pages)
 	jz	.Lnoswap
 
 	/* copy source page to swap page */
-	movq	%r10, %rdi
+	movq	kexec_pa_swap_page(%rip), %rdi
 	movl	$512, %ecx
 	rep ; movsq
 
@@ -314,7 +318,7 @@ SYM_CODE_START_LOCAL_NOALIGN(swap_pages)
 
 	/* copy swap page to destination page */
 	movq	%rdx, %rdi
-	movq	%r10, %rsi
+	movq	kexec_pa_swap_page(%rip), %rsi
 .Lnoswap:
 	movl	$512, %ecx
 	rep ; movsq
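
For context, the register contract that the added comment documents can
be pictured from the caller's side roughly as below. This is only an
illustrative sketch with made-up symbol names (pa_indirection_page,
preserve_context_flag), not the actual relocate_kernel code: the two
remaining register inputs are set up explicitly, while the swap page
itself is now fetched by swap_pages from kexec_pa_swap_page via
%rip-relative addressing instead of being inherited in %r10.

	/* Hypothetical caller of swap_pages */
	movq	pa_indirection_page(%rip), %rdi		/* indirection page */
	movq	preserve_context_flag(%rip), %r11	/* preserve_context */
	call	swap_pages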