Re: [PATCH] powerpc: Save AMR/IAMR when switching tasks

From: Michael Ellerman
Date: Mon Sep 19 2022 - 08:38:56 EST


Christophe Leroy <christophe.leroy@xxxxxxxxxx> writes:
> On 16/09/2022 at 07:05, Samuel Holland wrote:
>> With CONFIG_PREEMPT=y (involuntary preemption enabled), it is possible
>> to switch away from a task inside copy_{from,to}_user. This left the CPU
>> with userspace access enabled until after the next IRQ or privilege
>> level switch, when AMR/IAMR got reset to AMR_KU[AE]P_BLOCKED. Then, when
>> switching back to the original task, the userspace access would fault:
>
> This is not supposed to happen. You never switch away from a task
> magically. A task switch will always happen in an interrupt, which
> means copy_{from,to}_user() gets interrupted.

Unfortunately this isn't true when CONFIG_PREEMPT=y.

We can switch away without an interrupt via:
  __copy_tofrom_user()
    -> __copy_tofrom_user_power7()
    -> exit_vmx_usercopy()
    -> preempt_enable()
    -> __preempt_schedule()
    -> preempt_schedule()
    -> preempt_schedule_common()
    -> __schedule()
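
That last leg is just preempt_enable() doing its job when kernel
preemption is on. Roughly (a sketch, simplified from the
CONFIG_PREEMPT=y definition in include/linux/preempt.h):

#define preempt_enable() \
do { \
	barrier(); \
	if (unlikely(preempt_count_dec_and_test())) \
		__preempt_schedule(); \
} while (0)

So any preempt_enable() that drops the count to zero with a resched
pending is a potential task switch, no interrupt required.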

I do some boot tests with CONFIG_PREEMPT=y, but I realise now those are
all on Power8, which is a bit of an oversight on my part.

And clearly no one else tests it, until now :)

I think the root of our problem is that our KUAP lock/unlock is done at
too high a level, i.e. we do it in C around the low-level copy to/from.

eg:

static inline unsigned long
raw_copy_to_user(void __user *to, const void *from, unsigned long n)
{
	unsigned long ret;

	allow_write_to_user(to, n);
	ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
	prevent_write_to_user(to, n);
	return ret;
}

There's a reason we did that: we have various different KUAP methods on
different platforms, not a single simple instruction like other arches
have.
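
For comparison, a sketch of what the "simple instruction" arches get
(simplified from x86's arch/x86/include/asm/smap.h; the real versions
are alternative()-patched based on SMAP support):

static __always_inline void stac(void)
{
	/* set EFLAGS.AC: allow kernel accesses to user memory */
	asm volatile("stac" ::: "memory");
}

static __always_inline void clac(void)
{
	/* clear EFLAGS.AC: block kernel accesses to user memory */
	asm volatile("clac" ::: "memory");
}

There the state lives in EFLAGS, which is saved and restored with the
rest of the register state, so nothing extra needs doing at task switch.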

But that means exit_vmx_usercopy() is called deep in the guts of
__copy_tofrom_user(), with KUAP disabled (i.e. user access still
allowed), and from there we call into the preempt machinery and
eventually schedule.
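
Spelled out, the pre-patch exit_vmx_usercopy() (visible as context in
the diff below) hands us exactly that preemption point while user
access is still allowed:

int exit_vmx_usercopy(void)
{
	disable_kernel_altivec();
	pagefault_enable();
	preempt_enable();	/* may schedule here, AMR still open */
	return 0;
}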

I don't see an easy way to fix that "properly"; it would be a big change
to all platforms to push the KUAP save/restore down into the low-level
asm code.

But I think the patch below does fix it, although it abuses things a
little. Namely, it only works because the 64s KUAP code can handle a
double call to prevent, and doesn't need the addresses or size for the
allow.
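
For reference, a loose sketch of why that holds on 64s with Radix
(simplified; the real code in arch/powerpc/include/asm/book3s/64/kup.h
has feature checks and a hash-MMU variant):

static inline void set_kuap(unsigned long value)
{
	/* the AMR update is bracketed by context-synchronising isyncs */
	isync();
	mtspr(SPRN_AMR, value);
	isync();
}

static inline void prevent_user_access(unsigned long dir)
{
	/* unconditionally write the fully-blocked value, so a second
	 * call in a row is harmless */
	set_kuap(AMR_KUAP_BLOCKED);
}

static inline void allow_read_write_user(void __user *to,
					 const void __user *from,
					 unsigned long size)
{
	/* to/from/size are ignored: the AMR gates all user access */
	set_kuap(0);
}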

Still I think it might be our best option for an easy fix.

Samuel, can you try this on your system and check it works for you?

cheers


diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index 97a77b37daa3..c50080c6a136 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -432,6 +432,7 @@ int speround_handler(struct pt_regs *regs);
 /* VMX copying */
 int enter_vmx_usercopy(void);
 int exit_vmx_usercopy(void);
+void exit_vmx_usercopy_continue(void);
 int enter_vmx_ops(void);
 void *exit_vmx_ops(void *dest);
 
diff --git a/arch/powerpc/lib/copyuser_power7.S b/arch/powerpc/lib/copyuser_power7.S
index 28f0be523c06..77804860383c 100644
--- a/arch/powerpc/lib/copyuser_power7.S
+++ b/arch/powerpc/lib/copyuser_power7.S
@@ -47,7 +47,7 @@
 	ld	r15,STK_REG(R15)(r1)
 	ld	r14,STK_REG(R14)(r1)
 .Ldo_err3:
-	bl	exit_vmx_usercopy
+	bl	exit_vmx_usercopy_continue
 	ld	r0,STACKFRAMESIZE+16(r1)
 	mtlr	r0
 	b	.Lexit
diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index f76a50291fd7..78a18b8384ff 100644
--- a/arch/powerpc/lib/vmx-helper.c
+++ b/arch/powerpc/lib/vmx-helper.c
@@ -8,6 +8,7 @@
  */
 #include <linux/uaccess.h>
 #include <linux/hardirq.h>
+#include <asm/kup.h>
 #include <asm/switch_to.h>
 
 int enter_vmx_usercopy(void)
@@ -34,12 +35,19 @@ int enter_vmx_usercopy(void)
  */
 int exit_vmx_usercopy(void)
 {
+	prevent_user_access(KUAP_READ_WRITE);
 	disable_kernel_altivec();
 	pagefault_enable();
 	preempt_enable();
 	return 0;
 }
 
+void exit_vmx_usercopy_continue(void)
+{
+	exit_vmx_usercopy();
+	allow_read_write_user(NULL, NULL, 0);
+}
+
 int enter_vmx_ops(void)
 {
 	if (in_interrupt())