Adding a new scheduler group type which allows all tasks to be removed
from certain CPUs through load balancing can help in scenarios where
such CPUs are currently unfavorable to use, for example in a
virtualized environment.
Functionally, this works as intended. The open question is whether
this approach could be considered for inclusion and would be worth
pursuing further. If so, which areas would need additional attention?
Some cases are referenced below.
The underlying concept and the approach of adding a new scheduler
group type were presented in the Sched MC of the 2024 LPC.
A short summary:
Some architectures (e.g. s390) provide virtualization on the firmware
level. This implies that Linux kernels running on such architectures
run on virtualized CPUs.
As in other virtualized environments, the CPUs are most likely shared
with other guests on the hardware level. This implies that Linux
kernels running in such an environment may encounter 'steal time'. In
other words, instead of being able to use all available time on a
physical CPU, some of that time is 'stolen' by other guests.
This can cause side effects if a guest is interrupted at an
unfavorable point in time, or if the guest is waiting for one of its
other virtual CPUs to perform certain actions while those are
suspended in favor of another guest.
Architectures like arch/s390 address this issue by providing an
alternative classification for the CPUs seen by the Linux kernel.
The following example is arch/s390 specific:
In the default mode (horizontal CPU polarization), all CPUs are treated
equally and can be subject to steal time equally.
In the alternate mode (vertical CPU polarization), the underlying
firmware hypervisor assigns the CPUs visible to the guest different
types, depending on how many CPUs the guest is entitled to use. This
entitlement is configured by assigning weights to all active guests.
The three CPU types are:
- vertical high   : On these CPUs, the guest always has the highest
                    priority over other guests. In particular, this
                    means that if the guest executes tasks on these
                    CPUs, it will encounter no steal time.
- vertical medium : These CPUs are meant to cover fractions of
                    entitlement.
- vertical low    : These CPUs have no priority when being scheduled.
                    In particular, this implies that while all other
                    guests are using their full entitlement, these
                    CPUs might not be run for a significant amount of
                    time.
As a consequence, using vertical lows should be avoided while the
underlying hypervisor experiences a high load driven by all defined
guests.
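For reference, a minimal sketch of how this classification could be
queried from kernel code on s390, assuming the existing s390 helpers
smp_cpu_get_polarization() and POLARIZATION_VL behave as their names
suggest; the helper below is illustrative and not taken from the
patches:

  #include <asm/smp.h>

  /* Sketch: check whether a given CPU is currently a vertical low. */
  static bool cpu_is_vertical_low(int cpu)
  {
          return smp_cpu_get_polarization(cpu) == POLARIZATION_VL;
  }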
In order to consistently move tasks off of vertical lows, introduce a
new type of scheduler group: group_parked.
Parked implies that processes should be evacuated from these CPUs as
fast as possible. This means that other CPUs should start pulling
tasks immediately, while the parked CPUs themselves should refuse to
pull any tasks.
Adding a group type beyond group_overloaded achieves the expected
behavior. By making its selection architecture dependent, it has no
effect on architectures which do not make use of that group type.
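A rough sketch of the idea (simplified, not the literal patch
content): a weak per-architecture hook decides whether a CPU is
parked, and fair.c gains a group type ranked above group_overloaded,
so that groups containing parked CPUs are always selected as busiest.
The hook name arch_cpu_parked() and the elided enum entries are
illustrative here:

  /* include/linux/sched/topology.h (sketch): default to "never parked". */
  #ifndef arch_cpu_parked
  static inline bool arch_cpu_parked(int cpu)
  {
          return false;
  }
  #endif

  /* kernel/sched/fair.c (sketch): rank parked above all other types. */
  enum group_type {
          group_has_spare = 0,
          /* ... existing group types ... */
          group_overloaded,
          group_parked,   /* CPUs to be evacuated via load balancing */
  };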
This approach works very well for many kinds of workloads. Tasks are
migrated back and forth in line with changes to the parked state of
the involved CPUs.
There are a couple of issues and corner cases which need further
consideration:
- no_hz:          While the scheduler tick can and should still be
                  disabled on idle CPUs, it should not be disabled on
                  parked CPUs which run only one task, as that task
                  would otherwise not be scheduled away in time (see
                  the sketch after this list). Side effects and
                  completeness need to be investigated further. One
                  option might be to allow dynamic changes to
                  tick_nohz_full_mask. It is also possible to handle
                  this exclusively in fair.c, but that does not seem
                  to be the best place to do so.
- pinned tasks:   If a task is pinned to CPUs which are all parked, it
                  will get moved to other CPUs. As during CPU hotplug,
                  the information about the task's initial CPU mask
                  gets lost.
- rt & dl: Realtime and deadline scheduling require some additional
attention.
- ext: Probably affected as well. Needs some conceptional
thoughts first.
- idle vs parked: It could be considered whether an idle parked CPU
                  should contribute to the count of idle CPUs. It is
                  usually preferable to utilize idle CPUs, but parked
                  CPUs should not be used. Hence, a scheduler group
                  with many idle but parked CPUs should not be the
                  target for additional workload. At this point, some
                  more thought needs to be spent on whether it would
                  be acceptable not to set the idle flag on parked
                  CPUs.
- optimization:   It is probably possible to cut some corners. In
                  order to avoid tampering with scheduler statistics
                  too much, the actions based on the parked state of a
                  CPU are not yet always taken at the earliest
                  possible occasion.
- raciness:       Right now, there are no synchronization efforts. It
                  needs to be considered whether synchronization is
                  necessary, or whether it is acceptable that the
                  parked state of a CPU might change during load
                  balancing.
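As referenced in the no_hz item above, one conceivable way to keep the
tick alive on a parked CPU that still runs a task would be an early
check in sched_can_stop_tick(). This is a sketch only, reusing the
illustrative arch_cpu_parked() hook from above, and is not taken from
the patches:

  /* kernel/sched/core.c (sketch): never stop the tick on a parked CPU
   * that still has runnable tasks, so load balancing can pull them. */
  bool sched_can_stop_tick(struct rq *rq)
  {
          if (arch_cpu_parked(cpu_of(rq)) && rq->nr_running)
                  return false;

          /* ... existing checks ... */
          return true;
  }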
Patches apply to tip:sched/core
The s390 patch serves as a simplified implementation example.
Tobias Huschle (2):
sched/fair: introduce new scheduler group type group_parked
s390/topology: Add initial implementation for selection of parked CPUs
arch/s390/include/asm/topology.h | 3 +
arch/s390/kernel/topology.c | 5 ++
include/linux/sched/topology.h | 20 +++++
kernel/sched/core.c | 10 ++-
kernel/sched/fair.c | 122 +++++++++++++++++++++++++------
5 files changed, 135 insertions(+), 25 deletions(-)