Re: [PATCH v2] ipc/mqueue: Avoid relying on a stack reference past its expiry

From: Varad Gautam
Date: Fri May 07 2021 - 09:33:05 EST


Hi Hillf,

On 5/7/21 5:24 AM, Hillf Danton wrote:
> On Thu, 6 May 2021 12:33:56 Varad Gautam wrote:
>> On 5/6/21 11:18 AM, Hillf Danton wrote:
>>> On Thu, 6 May 2021 08:56:19
>>>> do_mq_timedreceive calls wq_sleep with a stack local address. The
>>>> sender (do_mq_timedsend) uses this address to later call
>>>> pipelined_send.
>>>>
>>>> This leads to a very hard to trigger race where a do_mq_timedreceive call
>>>> might return and leave do_mq_timedsend to rely on an invalid address,
>>>> causing the following crash:
>>>>
>>>> [ 240.739977] RIP: 0010:wake_q_add_safe+0x13/0x60
>>>> [ 240.739991] Call Trace:
>>>> [ 240.739999] __x64_sys_mq_timedsend+0x2a9/0x490
>>>> [ 240.740003] ? auditd_test_task+0x38/0x40
>>>> [ 240.740007] ? auditd_test_task+0x38/0x40
>>>> [ 240.740011] do_syscall_64+0x80/0x680
>>>> [ 240.740017] entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>>> [ 240.740019] RIP: 0033:0x7f5928e40343
>>>>
>>>> The race occurs as:
>>>>
>>>> 1. do_mq_timedreceive calls wq_sleep with the address of
>>>> `struct ext_wait_queue` on function stack (aliased as `ewq_addr` here)
>>>> - it holds a valid `struct ext_wait_queue *` as long as the stack has
>>>> not been overwritten.
>>>>
>>>> 2. `ewq_addr` gets added to info->e_wait_q[RECV].list in wq_add, and
>>>> do_mq_timedsend receives it via wq_get_first_waiter(info, RECV) to call
>>>> __pipelined_op.
>>>>
>>>> 3. Sender calls __pipelined_op::smp_store_release(&this->state, STATE_READY).
>>>> Here is where the race window begins. (`this` is `ewq_addr`.)
>>>>
>>>> 4. If the receiver wakes up now in do_mq_timedreceive::wq_sleep, it
>>>> will see `state == STATE_READY` and break. `ewq_addr` gets removed from
>>>> info->e_wait_q[RECV].list.
>>>>
>>>> 5. do_mq_timedreceive returns, and `ewq_addr` is no longer guaranteed
>>>> to be a `struct ext_wait_queue *` since it was on do_mq_timedreceive's
>>>> stack. (Although the address may not get overwritten until another
>>>> function happens to touch it, which means it can persist for an
>>>> indefinite time.)
>>>>
>>>> 6. do_mq_timedsend::__pipelined_op() still believes `ewq_addr` is a
>>>> `struct ext_wait_queue *`, and uses it to find a task_struct to pass
>>>> to the wake_q_add_safe call. In the lucky case where nothing has
>>>> overwritten `ewq_addr` yet, `ewq_addr->task` is the right task_struct.
>>>> In the unlucky case, __pipelined_op::wake_q_add_safe gets handed a
>>>> bogus address as the receiver's task_struct causing the crash.
>>>>
>>>> do_mq_timedsend::__pipelined_op() should not dereference `this` after
>>>> setting STATE_READY, as the receiver counterpart is now free to return.
>>>> Change __pipelined_op to call wake_q_add before setting STATE_READY
>>>> which ensures that the receiver's task_struct can still be found via
>>>> `this`.
>>>>
>>>> Fixes: c5b2cbdbdac563 ("ipc/mqueue.c: update/document memory barriers")
>>>> Signed-off-by: Varad Gautam <varad.gautam@xxxxxxxx>
>>>> Reported-by: Matthias von Faber <matthias.vonfaber@xxxxxxxxxxx>
>>>> Cc: <stable@xxxxxxxxxxxxxxx> # 5.6
>>>> Cc: Christian Brauner <christian.brauner@xxxxxxxxxx>
>>>> Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
>>>> Cc: "Eric W. Biederman" <ebiederm@xxxxxxxxxxxx>
>>>> Cc: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
>>>> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>>>> Cc: Davidlohr Bueso <dbueso@xxxxxxx>
>>>> ---
>>>> v2: Call wake_q_add before smp_store_release, instead of using a
>>>> get_task_struct/wake_q_add_safe combination across
>>>> smp_store_release. (Davidlohr Bueso)
>>>>
>>>> ipc/mqueue.c | 33 ++++++++++++++++++++++++---------
>>>> 1 file changed, 24 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/ipc/mqueue.c b/ipc/mqueue.c
>>>> index 8031464ed4ae..bfcb6f81a824 100644
>>>> --- a/ipc/mqueue.c
>>>> +++ b/ipc/mqueue.c
>>>> @@ -78,11 +78,13 @@ struct posix_msg_tree_node {
>>>> * MQ_BARRIER:
>>>> * To achieve proper release/acquire memory barrier pairing, the state is set to
>>>> * STATE_READY with smp_store_release(), and it is read with READ_ONCE followed
>>>> - * by smp_acquire__after_ctrl_dep(). In addition, wake_q_add_safe() is used.
>>>> + * by smp_acquire__after_ctrl_dep(). The state change to STATE_READY must be
>>>> + * the last write operation, after which the blocked task can immediately
>>>> + * return and exit.
>>>> *
>>>> * This prevents the following races:
>>>> *
>>>> - * 1) With the simple wake_q_add(), the task could be gone already before
>>>> + * 1) With wake_q_add(), the task could be gone already before
>>>> * the increase of the reference happens
>>>> * Thread A
>>>> *                              Thread B
>>>> @@ -97,10 +99,25 @@ struct posix_msg_tree_node {
>>>> * sys_exit()
>>>> *                              get_task_struct() // UaF
>>>> *
>>>> - * Solution: Use wake_q_add_safe() and perform the get_task_struct() before
>>>> - * the smp_store_release() that does ->state = STATE_READY.
>>>> + * 2) With wake_q_add(), the receiver task could have returned from the
>>>> + * syscall and had its stack-allocated waiter overwritten before the
>>>> + * sender could add it to the wake_q
>>>> + * Thread A
>>>> + *                              Thread B
>>>> + * WRITE_ONCE(wait.state, STATE_NONE);
>>>> + * schedule_hrtimeout()
>>>> + *                              ->state = STATE_READY
>>>> + * <timeout returns>
>>>> + * if (wait.state == STATE_READY) return;
>>>> + * sysret to user space
>>>> + * overwrite receiver's stack
>>>> + *                              wake_q_add(A)
>>>> + *                              if (cmpxchg()) // corrupted waiter
>>>> *
>>>> - * 2) Without proper _release/_acquire barriers, the woken up task
>>>> + * Solution: Queue the task for wakeup before the smp_store_release() that
>>>> + * does ->state = STATE_READY.
>>>> + *
>>>> + * 3) Without proper _release/_acquire barriers, the woken up task
>>>> * could read stale data
>>>> *
>>>> * Thread A
>>>> *                              Thread B
>>>> *
>>>> * Solution: use _release and _acquire barriers.
>>>> *
>>>> - * 3) There is intentionally no barrier when setting current->state
>>>> + * 4) There is intentionally no barrier when setting current->state
>>>> * to TASK_INTERRUPTIBLE: spin_unlock(&info->lock) provides the
>>>> * release memory barrier, and the wakeup is triggered when holding
>>>> * info->lock, i.e. spin_lock(&info->lock) provided a pairing
>>>> @@ -1005,11 +1022,9 @@ static inline void __pipelined_op(struct wake_q_head *wake_q,
>>>> struct ext_wait_queue *this)
>>>> {
>>>> list_del(&this->list);
>>>> - get_task_struct(this->task);
>>>> -
>>>> + wake_q_add(wake_q, this->task);
>>>> /* see MQ_BARRIER for purpose/pairing */
>>>> smp_store_release(&this->state, STATE_READY);
>>>> - wake_q_add_safe(wake_q, this->task);
>>>> }
>>>
>>> Given the smp_mb() in __wake_q_add(), and more crucially the position of
>>> wake_q_add() corrected in this work, that mb helps prevent the waiter
>>> from reading stale data, at least in the case of pipelined_send(). Update
>>> the comment if that is true.
>>>
>>
>> This race, IIUC, is covered by 3) under MQ_BARRIER after this patch. It is
>> guarded not by the smp_mb() in __wake_q_add() but by the use of
>> smp_store_release(STATE_READY), which ensures that the waiter only sees
>> STATE_READY after wake_q_add has run and after receiver->msg has been
>> updated in pipelined_send().
>
> Even after this work, if I don't misread you, the waiter only cares about
> READY and nothing more; otherwise, feel free to show why wake_q_add is a
> concern on the waiter side, because READY does nothing to prevent a waiter
> from going home, and a timeout can arrive with no knowledge of wake_q_add.
>
> Given that wake_q_add makes no sense without a wakeup on the waker side,
> what is the win from ordering wake_q_add before READY when the wakeup
> itself only happens later?
>

If the waiter timed out and did not see STATE_READY, it must retake
info->lock before exiting. The sender holds info->lock across
__pipelined_op, so the waiter's stack frame cannot go away while the
sender is still using it.
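
For reference, the receiver's wakeup check looks roughly like this (a
simplified sketch of the wq_sleep() loop in ipc/mqueue.c; the retry loop
and error paths are trimmed):

	spin_unlock(&info->lock);
	time = schedule_hrtimeout_range_clock(timeout, 0,
			HRTIMER_MODE_ABS, CLOCK_REALTIME);

	if (READ_ONCE(ewp->state) == STATE_READY) {
		/* see MQ_BARRIER for purpose/pairing */
		smp_acquire__after_ctrl_dep();
		/* Lockless exit: the sender must be done with ewp. */
		retval = 0;
		goto out;
	}
	spin_lock(&info->lock);	/* timed out: serialize with the sender */

	/* we hold info->lock, so no memory barrier required */
	if (READ_ONCE(ewp->state) == STATE_READY) {
		retval = 0;
		goto out_unlock;
	}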

If it saw STATE_READY, it is free to return immediately and leave the
waker holding an invalid `this` pointer in __pipelined_op. That is why
the patch moves wake_q_add before the smp_store_release: once
STATE_READY is published, `this` must not be touched again.
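
Spelled out, the sender-side ordering after this patch (the patched
__pipelined_op from above, with the reasoning added as comments; the
parameter list is abridged):

	static inline void __pipelined_op(struct wake_q_head *wake_q,
					  struct ext_wait_queue *this)
	{
		list_del(&this->list);
		/*
		 * Take the wakeup reference while `this` is still pinned:
		 * the waiter cannot return from wq_sleep() until it sees
		 * STATE_READY below, and wake_q_add() grabs its own
		 * reference on this->task, so a later task exit is fine.
		 */
		wake_q_add(wake_q, this->task);
		/* Publish STATE_READY last; `this` is not touched again. */
		smp_store_release(&this->state, STATE_READY);
	}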

Thanks,
Varad

> Hillf
>

--
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5
90409 Nürnberg
Germany

HRB 36809, AG Nürnberg
Geschäftsführer: Felix Imendörffer