Re: [PATCH 2/5] x86, fpu: don't drop_fpu() in __restore_xstate_sig() if use_eager_fpu()

From: Oleg Nesterov
Date: Mon Aug 25 2014 - 13:11:51 EST

On 08/25, Linus Torvalds wrote:
> On Mon, Aug 25, 2014 at 7:41 AM, Oleg Nesterov <oleg@xxxxxxxxxx> wrote:
> >
> > I think this should be safe, because this thread and/or swapper/0 can
> > do nothing with fpu->state, and they should not use the FPU.
> .. but if that's the case, then what was wrong with the old code

Confused... Just in case, I think you mean the current code, and,
ignoring the lack of preempt_disable() around math_state_restore(), it
is correct.

I'd like to change it only because this code is the main source of the
nasty special case: used_math() and/or __thread_has_fpu(current) can be
false even if use_eager_fpu().

> that
> just copied the state over the unused space from the user space
> buffer?

But it is not unused? Although I probably misunderstood you from the
very beginning.

OK, what I meant is that without switch_fpu_xstate(init_task.fpu.state)
or another hack we can't avoid drop_fpu(), which leads to this special
case.

Currently __copy_from_user(&xstate->xsave) copies the new registers
right into this thread's fpu->state. If this thread is preempted before
math_state_restore(), the context switch (__save_init_fpu) will overwrite
the same buffer, and the result of __copy_from_user() can simply be lost
(entirely or partially).

With this patch we can safely do __copy_from_user(xstate): this buffer
is not used until the 2nd switch_fpu_xstate().

> You can't have it both ways. Either the old code was fine (because it
> doesn't use the buffer while it is in flux), or the new code is broken
> (because it uses the shared buffer). Your choice. No?

It uses the shared buffer, yes. But in this case (I think! please correct
me!), when this thread uses the swapper's fpu->state, schedule() ->
fpu_xsave() into this shared buffer should be fine because it should write
the same content?

