Re: [patch 0/9] mutex subsystem, -V4

From: Andrew Morton
Date: Thu Dec 22 2005 - 08:08:34 EST


Ingo Molnar <mingo@xxxxxxx> wrote:
>
> ...
>
> here are the top 10 reasons of why i think the generic mutex code should
> be considered for upstream integration:

Appropriate for the changelog, please.

> - 'struct mutex' is smaller: on x86, 'struct semaphore' is 20 bytes,
> 'struct mutex' is 16 bytes. A smaller structure size means less RAM
> footprint, and better CPU-cache utilization.

Because of the .sleepers thing. Perhaps a revamped semaphore wouldn't need
that field?

> - tighter code. On x86 i get the following .text sizes when
> switching all mutex-alike semaphores in the kernel to the mutex
> subsystem:
>
> text data bss dec hex filename
> 3280380 868188 396860 4545428 455b94 vmlinux-semaphore
> 3255329 865296 396732 4517357 44eded vmlinux-mutex
>
> that's 25051 bytes of code saved, or a 0.76% win - off the hottest
> codepaths of the kernel. (The .data savings are 2892 bytes, or 0.33%)
> Smaller code means better icache footprint, which is one of the
> major optimization goals in the Linux kernel currently.

Why is the mutex-using kernel any smaller? IOW: from where do these
savings come?

> - the mutex subsystem is faster and has superior scalability for
> contended workloads. On an 8-way x86 system, running a mutex-based
> kernel and testing creat+unlink+close (of separate, per-task files)
> in /tmp with 16 parallel tasks, the average number of ops/sec is:
>
> Semaphores: Mutexes:
>
> $ ./test-mutex V 16 10 $ ./test-mutex V 16 10
> 8 CPUs, running 16 tasks. 8 CPUs, running 16 tasks.
> checking VFS performance. checking VFS performance.
> avg loops/sec: 34713 avg loops/sec: 84153
> CPU utilization: 63% CPU utilization: 22%
>
> i.e. in this workload, the mutex based kernel was 2.4 times faster
> than the semaphore based kernel, _and_ it also had 2.8 times less CPU
> utilization. (In terms of 'ops per CPU cycle', the semaphore kernel
> performed 551 ops/sec per 1% of CPU time used, while the mutex kernel
> performed 3825 ops/sec per 1% of CPU time used - it was 6.9 times
> more efficient.)
>
> the scalability difference is visible even on a 2-way P4 HT box:
>
> Semaphores: Mutexes:
>
> $ ./test-mutex V 16 10 $ ./test-mutex V 16 10
> 4 CPUs, running 16 tasks. 4 CPUs, running 16 tasks.
> checking VFS performance. checking VFS performance.
> avg loops/sec: 127659 avg loops/sec: 181082
> CPU utilization: 100% CPU utilization: 34%
>
> (the straight performance advantage of mutexes is 41%, the per-cycle
> efficiency of mutexes is 4.1 times better.)

Why is the mutex-using kernel more scalable?

Can semaphores be tuned to offer the same scalability improvements?

> ...
>
> - the per-call-site inlining cost of the slowpath is cheaper and
> smaller than that of semaphores, by one instruction, because the
> mutex trampoline code does not do a "lea %0,%%eax" that the semaphore
> code does before calling __down_failed. The mutex subsystem uses out
> of line code currently so this makes only a small difference in .text
> size, but in case we want to inline mutexes, they will be cheaper
> than semaphores.

Cannot the semaphore code be improved in the same manner?

> - No wholesale or dangerous migration path. The migration to mutexes is
> fundamentally opt-in, safe and easy: multiple type-based and .config
> based migration helpers are provided to make the migration to mutexes
> easy. Migration is as fine-grained as it gets, so robustness of the
> kernel or out-of-tree code should not be jeopardized at any stage.
> The migration helpers can be eliminated once migration is completed,
> once the kernel has been separated into 'mutex users' and 'semaphore
> users'. Out-of-tree code automatically defaults to semaphore
> semantics, mutexes are not forced upon anyone, at any stage of the
> migration.

IOW: a complete PITA, sorry.

> - 'struct mutex' semantics are well-defined and are enforced if
> CONFIG_DEBUG_MUTEXES is turned on. Semaphores on the other hand have
> virtually no debugging code or instrumentation. The mutex subsystem
> checks and enforces the following rules:
>
> * - only one task can hold the mutex at a time
> * - only the owner can unlock the mutex
> * - multiple unlocks are not permitted
> * - recursive locking is not permitted
> * - a mutex object must be initialized via the API
> * - a mutex object must not be initialized via memset or copying
> * - task may not exit with mutex held
> * - memory areas where held locks reside must not be freed

Pretty much all of that could be added to semaphores-when-used-as-mutexes.
Without introducing a whole new locking mechanism.

> furthermore, there are also convenience features in the debugging
> code:
>
> * - uses symbolic names of mutexes, whenever they are printed in debug output
> * - point-of-acquire tracking, symbolic lookup of function names
> * - list of all locks held in the system, printout of them
> * - owner tracking
> * - detects self-recursing locks and prints out all relevant info
> * - detects multi-task circular deadlocks and prints out all affected
> * locks and tasks (and only those tasks)

Ditto, I expect.

> - the generic mutex subsystem is also one more step towards enabling
> the fully preemptible -rt kernel. Ok, this shouldn't bother the
> upstream kernel too much at the moment, but it's a personal driving
> force for me nevertheless ;-)

Actually that's the only compelling reason I've yet seen, sorry.

What is it about these mutexes which is useful to RT and why cannot that
feature be provided by semaphores?


Ingo, there appear to be quite a few straw-man arguments here. You're
comparing a subsystem (semaphores) which obviously could do with a lot of
fixing and enhancing with something new which has had a lot of recent
feature work put into it.

I'd prefer to see mutexes compared with semaphores after you've put as much
work into improving semaphores as you have into developing mutexes.