[PATCH 4/9] x86/kexec: Fix stack and handling of re-entry point for ::preserve_context
From: David Woodhouse
Date: Mon Dec 16 2024 - 18:39:20 EST
From: David Woodhouse <dwmw@xxxxxxxxxxxx>

A ::preserve_context kimage can be invoked more than once, and the entry
point can be different every time. When the callee returns to the kernel,
it leaves the address of its entry point for next time on the stack.
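
(For context only, not part of the patch: a ::preserve_context image is one
loaded with the KEXEC_PRESERVE_CONTEXT flag and entered via the kexec reboot
command, so each reboot() below jumps into the image and comes back. This is
just a rough userspace sketch assuming a kernel built with CONFIG_KEXEC_JUMP;
the load address and payload are placeholders and error handling is omitted.)

#include <linux/kexec.h>
#include <linux/reboot.h>
#include <sys/reboot.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	static char payload[4096];		/* placeholder code for the image */
	struct kexec_segment seg = {
		.buf	= payload,
		.bufsz	= sizeof(payload),
		.mem	= (void *)0x1000000,	/* made-up, page-aligned address */
		.memsz	= sizeof(payload),
	};

	/* Load once, asking the kernel to preserve context across the jump */
	syscall(SYS_kexec_load, 0x1000000UL, 1UL, &seg,
		KEXEC_PRESERVE_CONTEXT | KEXEC_ARCH_DEFAULT);

	/* Each kexec reboot re-enters the image; the entry point used the
	 * second time is whatever the image left on the stack last time. */
	reboot(LINUX_REBOOT_CMD_KEXEC);
	reboot(LINUX_REBOOT_CMD_KEXEC);
	return 0;
}
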
That being the case, one might reasonably assume that the caller would
allocate space for it on the stack frame before actually performing the
'call' into the callee.

Apparently not, though. Ever since the kjump code was first added in
2009, it has set up a *new* stack at the top of the swap_page scratch
page, then just performed the 'call' without allocating any space for
the re-entry address to be returned. It then reads the re-entry point
for next time from 0(%rsp), which is actually the first qword of the page
*after* the swap page, which might not exist at all! And if the callee
has written to that, then it will have corrupted memory it doesn't own.
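
(Illustrative only, not part of the patch: the arithmetic of the old code,
using a made-up swap page address, showing why 0(%rsp) points at the first
qword of the page after the swap page once the callee has returned.)

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	uintptr_t swap_page = 0x100000;		/* made-up swap page address */
	uintptr_t rsp = swap_page + PAGE_SIZE;	/* leaq PAGE_SIZE(%r10), %rsp */

	rsp -= 8;	/* 'call *%rdx' pushes the return address */
	/* ... callee runs; its 'ret' pops the return address again ... */
	rsp += 8;

	/* The old 'movq 0(%rsp), %rbp' then read from: */
	printf("0(%%rsp) = %#lx, but the swap page ends at %#lx\n",
	       (unsigned long)rsp,
	       (unsigned long)(swap_page + PAGE_SIZE - 1));
	return 0;
}
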
Correct this by pushing the entry point of the callee onto the stack
before calling it. The callee may then adjust it, or not, as it sees fit,
and subsequent invocations should work correctly either way.
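
(Again purely illustrative: the same arithmetic with the fix applied. The
extra push gives the callee a caller-owned slot inside the swap page, which
is exactly what the later 'popq %rbp' reads back.)

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	uintptr_t swap_page = 0x100000;		/* same made-up address as above */
	uintptr_t rsp = swap_page + PAGE_SIZE;	/* leaq PAGE_SIZE(%r10), %rsp */

	rsp -= 8;				/* pushq %rdx: the entry-point slot */
	uintptr_t slot = rsp;
	rsp -= 8;				/* 'call *%rdx' pushes the return address */
	/* callee sees the slot at 8(%rsp) and may overwrite it; 'ret' pops */
	rsp += 8;

	/* 'popq %rbp' now reads the slot, safely inside the swap page: */
	printf("slot = %#lx, swap page = [%#lx, %#lx)\n",
	       (unsigned long)slot, (unsigned long)swap_page,
	       (unsigned long)(swap_page + PAGE_SIZE));
	return 0;
}
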
Remove a stray push of zero to the *relocate_kernel* stack, which may
have been intended for this purpose, but which was actually just noise.

Also, loading the stack for the callee relied on the address of the swap
page being in %r10 without ever documenting that fact. Recent code
changes made that no longer true, so load it directly from the local
kexec_pa_swap_page variable instead.

Fixes: b3adabae8a96 ("x86/kexec: Drop page_list argument from relocate_kernel()")
Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxxxx>
---
arch/x86/kernel/relocate_kernel_64.S | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index 0d6fce1e0a32..b680f24896b8 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -113,8 +113,6 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
 	 * %r13 original CR4 when relocate_kernel() was invoked
 	 */
 
-	/* set return address to 0 if not preserving context */
-	pushq	$0
 	/* store the start address on the stack */
 	pushq	%rdx
 
@@ -208,12 +206,19 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
 
 .Lrelocate:
 	popq	%rdx
+
+	/* Use the swap page for the callee's stack */
+	movq	kexec_pa_swap_page(%rip), %r10
 	leaq	PAGE_SIZE(%r10), %rsp
+
+	/* push the existing entry point onto the callee's stack */
+	pushq	%rdx
+
 	ANNOTATE_RETPOLINE_SAFE
 	call	*%rdx
 
 	/* get the re-entry point of the peer system */
-	movq	0(%rsp), %rbp
+	popq	%rbp
 	leaq	relocate_kernel(%rip), %r8
 	movq	kexec_pa_swap_page(%rip), %r10
 	movq	pa_backup_pages_map(%rip), %rdi
@@ -247,6 +252,7 @@ SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
 	lgdt	saved_context_gdt_desc(%rax)
 #endif
 
+	/* relocate_kernel() returns the re-entry point for next time */
 	movq	%rbp, %rax
 
 	popf
--
2.47.0