Re: [RFC PATCH] futex: Introduce __vdso_robust_futex_unlock
From: Mathieu Desnoyers
Date: Fri Mar 13 2026 - 08:18:55 EST
On 2026-03-13 08:12, Sebastian Andrzej Siewior wrote:
> On 2026-03-12 18:52:43 [-0400], Mathieu Desnoyers wrote:
> > On 2026-03-12 18:23, Thomas Gleixner wrote:
> > > On Wed, Mar 11 2026 at 14:54, Mathieu Desnoyers wrote:
> > > [...]
> > > TBH, all of this is completely overengineered and tasteless bloat.
> > > The exact same thing can be achieved by doing the obvious:
> > >
> > > struct robust_list_head2 {
> > > 	struct robust_list_head	rhead;
> > > 	u32			unlock_val;
> > > };
> > >
> > > // User space
> > > unlock(futex)
> > > {
> > > 	struct robust_list_head2 *h = ....;
> > >
> > > 	h->unlock_val = 0;
> > > 	h->rhead.list_op_pending = .... | FUTEX_ROBUST_UNLOCK;
> > > 	xchg(futex->uval, h->unlock_val);
> > Here is the problem with your proposed approach:
> >
> > "XCHG — Exchange Register/Memory With Register"
> >                                       ^^^^^^^^
> >
> > So only one of the xchg arguments can be a memory location.
> > Therefore, you will end up needing an extra store after xchg
> > to store the content of the result register into h->unlock_val.
> But can't we also assign a role to pthread_mutex_destroy() here? So it
> would ensure that the futex death cleanup did run for every task having
> access to this memory? So it is either 0 or pid-of-dead-task before this
> memory location can be used again?
I did propose this exact approach recently:
https://lore.kernel.org/lkml/bd7a8dd3-8dee-4886-abe6-bdda25fe4a0d@xxxxxxxxxxxx/
but it's a far-reaching change. Then I thought of using rseq to identify the
critical section:
https://lore.kernel.org/lkml/694424f4-20d1-4473-8955-859acbad466f@xxxxxxxxxxxx/
And then Florian proposed to hide this under a vDSO:
https://lore.kernel.org/lkml/lhufr6ihelv.fsf@xxxxxxxxxxxxxxxxxxxxxxxx/
and here we are.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com