Re: [POC][RFC][PATCH] sched: Extended Scheduler Time Slice

From: Mathieu Desnoyers
Date: Wed Oct 25 2023 - 11:42:26 EST


On 2023-10-25 10:31, Steven Rostedt wrote:
On Wed, 25 Oct 2023 15:55:45 +0200
Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:

[...]

After digging through lore for context, here are some thoughts about the
actual proposal: AFAIU the intent here is to boost the scheduling slice
of a userspace thread running with a mutex held so it can complete its
critical section faster, and therefore reduce lock contention.

I suspect this is not completely unrelated to priority-inheritance
futexes, except that one goal stated by Steven is to extend the owner's
slice without requiring a system call on the fast path.

Compared to PI futexes, I think Steven's proposal misses the part
where a thread waiting on a futex boosts the lock owner's priority
so it can complete faster. By letting the lock owner selfishly claim
that it needs a larger scheduling slice, it opens the door to
scheduler disruption, and it is hard to come up with upper bounds
that work for all cases.

Hopefully I'm not oversimplifying if I state that we have mainly two
actors to consider:

[A] the lock owner thread

[B] threads that block trying to acquire the lock

The fast path here is [A]. [B] can go through a system call; I don't
think its overhead matters at all.

So perhaps we can extend the rseq per-thread area with a "held locks"
field, through which [A] lets the kernel know that it currently holds
a set of locks (entries can be chained when locks are nested). It would
be updated on lock/unlock with just a few stores in userspace.
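
To make this more concrete, here is a rough userspace sketch of what I
have in mind. The field layout, type names and helpers below are all
invented for the sake of discussion, not a proposed ABI:

#include <stdint.h>

/*
 * Hypothetical layout, invented for this discussion -- not a
 * proposed ABI. Each thread publishes a singly-linked list of the
 * locks it currently holds through a new field in its rseq
 * per-thread area.
 */
struct rseq_held_lock {
	uint64_t lock_addr;	/* userspace address of the held lock */
	uint64_t next;		/* next entry when locks nest, 0 at tail */
};

/* Stand-in for the new rseq per-thread area field. */
struct rseq_ext {
	uint64_t held_locks;	/* top of the held-locks list, 0 if empty */
};

static __thread struct rseq_ext rseq_ext;

/* Fast path after acquiring @lock: push @entry with two plain stores. */
static inline void held_locks_push(struct rseq_held_lock *entry,
				   void *lock)
{
	entry->lock_addr = (uint64_t)(uintptr_t)lock;
	entry->next = rseq_ext.held_locks;
	rseq_ext.held_locks = (uint64_t)(uintptr_t)entry;
}

/* Fast path before releasing the most recently acquired lock: pop it. */
static inline void held_locks_pop(struct rseq_held_lock *entry)
{
	rseq_ext.held_locks = entry->next;
}

The push/pop are plain stores to the per-thread area, so the lock/unlock
fast path stays entirely syscall-free.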

Those lock addresses could then be used as keys for process-private
locks, or transformed into inode/offset keys for shared-memory locks.
Threads [B] blocking while trying to acquire the lock can issue a system
call which boosts the lock owner's slice and/or priority for a given
lock key.
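
As a strawman, the slow path for [B] could be shaped like this (name,
arguments and semantics entirely made up here, just to illustrate the
shape of the interface):

/*
 * Strawman interface, every name invented for this discussion.
 * A waiter registers a slice and/or priority boost against a lock
 * key (the lock's userspace address for private locks, an
 * inode/offset tuple for shared ones), then blocks as FUTEX_WAIT
 * would.
 */
int futex_boost_wait(uint32_t *uaddr, uint32_t expected_val,
		     uint64_t slice_boost_ns, int prio_boost,
		     const struct timespec *timeout);

The kernel would translate @uaddr into the private or shared lock key,
record the requested boost against that key, and block the caller until
woken by the owner's unlock path.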

When the scheduler preempts [A], it would check whether the rseq
per-thread area has the "held locks" field set, look up the
slice/priority boosts currently active for each held lock, and boost
the task's slice/priority accordingly.
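
In pseudocode, the preemption-time check could look roughly like the
following, where rseq_ext_held_locks(), lock_key() and
boost_from_waiters() are placeholders for whatever mechanism ends up
being chosen:

/*
 * Pseudocode sketch, kernel side; helper names are placeholders.
 * Called when the scheduler is about to preempt @p.
 */
static void rseq_apply_lock_boosts(struct task_struct *p)
{
	u64 uptr = rseq_ext_held_locks(p);	/* read the new rseq field */

	while (uptr) {
		struct rseq_held_lock entry;

		/* Walk the userspace held-locks list without faulting. */
		if (copy_from_user_nofault(&entry, (void __user *)uptr,
					   sizeof(entry)))
			break;
		/*
		 * Apply the slice/priority boosts that waiters have
		 * registered for this lock key.
		 */
		boost_from_waiters(p, lock_key(p, entry.lock_addr));
		uptr = entry.next;
	}
}

A real implementation would of course bound the walk depth and the boost
amounts, so a misbehaving [A] cannot monopolize the CPU.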

A scheme like this should allow lock priority inheritance without
requiring system calls on the userspace lock/unlock fast path.

Thoughts?

Thanks,

Mathieu

--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com