[PATCH 0/7 v2] Introduce local_lock()
From: Sebastian Andrzej Siewior
Date: Sun May 24 2020 - 17:57:53 EST
This is v2 of the local_lock() series. The v1 can be found at
https://lore.kernel.org/lkml/20200519201912.1564477-1-bigeasy@xxxxxxxxxxxxx/
v1…v2:
- Remove the static initializer so a local_lock is not used as a
standalone per-CPU variable but as a member of an existing structure
which is used per CPU (see the sketch after this list).
- Use LD_WAIT_CONFIG as wait-type in the dep_map.
- Expect a pointer-like value as argument (same as this_cpu_ptr()).
- Drop the SRCU patch. A different solution is being worked on.
- Drop the zswap patch. That code part will be reworked.
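To illustrate the first point, a minimal sketch of the intended usage
pattern. The struct, field and variable names are made up, and
local_lock_t / INIT_LOCAL_LOCK() are assumed here as the member type
and static initializer:

  #include <linux/local_lock.h>
  #include <linux/percpu.h>

  struct foo_pcpu {
          local_lock_t    lock;   /* protects 'count' below */
          unsigned int    count;
  };

  static DEFINE_PER_CPU(struct foo_pcpu, foo_pcpu) = {
          .lock = INIT_LOCAL_LOCK(foo_pcpu.lock),
  };

The lock is thus tied to the data it protects instead of living as a
separate per-CPU variable.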
preempt_disable() and local_irq_disable/save() are in principle per-CPU
big kernel locks. This has several downsides:
- The protection scope is unknown
- Violation of protection rules is hard to detect by instrumentation
- On PREEMPT_RT such sections, unless they are in low-level critical
code, can violate preemptibility constraints.
To address this, PREEMPT_RT introduced the concept of local_locks, which
are strictly per CPU.
On non-RT kernels the lock operations map to preempt_disable(),
local_irq_disable/save() and their enabling counterparts.
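Roughly, on a !PREEMPT_RT kernel, reusing the made-up foo_pcpu example
from above:

  static void foo_ops_sketch(void)
  {
          unsigned long flags;

          local_lock(&foo_pcpu.lock);       /* !RT: preempt_disable() */
          this_cpu_inc(foo_pcpu.count);
          local_unlock(&foo_pcpu.lock);     /* !RT: preempt_enable() */

          local_lock_irq(&foo_pcpu.lock);   /* !RT: local_irq_disable() */
          this_cpu_inc(foo_pcpu.count);
          local_unlock_irq(&foo_pcpu.lock); /* !RT: local_irq_enable() */

          local_lock_irqsave(&foo_pcpu.lock, flags);
          this_cpu_inc(foo_pcpu.count);     /* !RT: local_irq_save/restore() */
          local_unlock_irqrestore(&foo_pcpu.lock, flags);
  }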
If lockdep is enabled, local locks gain a lock map which tracks the
usage context. This catches cases where an area is protected by
preempt_disable() but is also accessed from interrupt context. Local
locks have identified quite a few such issues over the years; the most
recent example is:
b7d5dc21072cd ("random: add a spinlock_t to struct batched_entropy")
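A sketch of the bug pattern (hypothetical driver code), again based on
the foo_pcpu example: process context takes the lock with interrupts
enabled while an interrupt handler takes it as well. With the
local_lock annotation lockdep reports the inconsistent usage; with bare
preempt_disable() nothing is recorded and the problem goes unnoticed:

  #include <linux/interrupt.h>

  static void foo_update(void)
  {
          /* lockdep records: lock taken with interrupts enabled */
          local_lock(&foo_pcpu.lock);
          this_cpu_inc(foo_pcpu.count);
          local_unlock(&foo_pcpu.lock);
  }

  static irqreturn_t foo_interrupt(int irq, void *dev_id)
  {
          /*
           * Same lock from hardirq context -> lockdep complains.
           * The fix is local_lock_irqsave() in foo_update().
           */
          local_lock(&foo_pcpu.lock);
          this_cpu_inc(foo_pcpu.count);
          local_unlock(&foo_pcpu.lock);
          return IRQ_HANDLED;
  }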
Aside from the lockdep coverage this also improves code readability as
it precisely annotates the protection scope.
PREEMPT_RT substitutes these local locks with 'sleeping' spinlocks to
protect such sections while maintaining preemptibility and CPU locality.
The following series introduces the infrastructure, including
documentation, and provides a couple of examples of how it is used to
adjust code to be RT ready.
Sebastian