On Sat, 6 Jan 2024 17:36:23 +0100 Uladzislau Rezki <urezki@xxxxxxxxx> wrote:
I have also noticed a performance difference between rwlock and spinlock.
Thank you! I tried the patch, and it seems that the wait for rwlock_t
still exists, much as with spinlock_t. (The flamegraph is attached.
I am not sure why read_lock waits so long, given that there is no
frequent write_lock contention.)
              vzalloced shmem (spinlock_t)   vzalloced shmem (rwlock_t)
Requests/sec  583729.93                      460007.44
So I guess the overhead of finding the vmap area is unavoidable here,
and the original spin_lock is fine in this series.
So, yes. This is the extra work we need to do if CONFIG_HARDENED_USERCOPY
is set, i.e. find a VA.
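For context, the extra work is roughly the following. This is a condensed
sketch based on mm/usercopy.c and mm/vmalloc.c; the exact code (and the
locking around the lookup) varies by kernel version:

/* mm/usercopy.c: check_heap_object(), vmalloc branch (sketch) */
if (is_vmalloc_addr(ptr)) {
	struct vmap_area *area = find_vmap_area(addr);

	/* Abort the usercopy if no VA covers the address or the
	 * copy would overrun the mapping. */
	if (!area)
		usercopy_abort("vmalloc", "no area found", to_user, 0, n);
	if (n > area->va_end - addr)
		usercopy_abort("vmalloc", NULL, to_user,
			       addr - area->va_start, n);
	return;
}

/* mm/vmalloc.c: the lookup itself takes the tree lock, which is
 * where the extra wait shows up in the flamegraph (sketch). */
struct vmap_area *find_vmap_area(unsigned long addr)
{
	struct vmap_area *va;

	spin_lock(&vmap_area_lock);
	va = __find_vmap_area(addr, &vmap_area_root);
	spin_unlock(&vmap_area_lock);

	return va;
}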
See if read bias helps to explain the gap between spinlock and rwlock.
--- x/kernel/locking/qrwlock.c
+++ y/kernel/locking/qrwlock.c
@@ -23,7 +23,7 @@ void __lockfunc queued_read_lock_slowpath(struct qrwlock *lock)
 	/*
 	 * Readers come here when they cannot get the lock without waiting
 	 */
-	if (unlikely(in_interrupt())) {
+	if (1) {
 		/*
 		 * Readers in interrupt context will get the lock immediately
 		 * if the writer is just waiting (not holding the lock yet),
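(With this change every reader takes the branch normally reserved for
readers in interrupt context: it spins until no writer holds the lock
instead of queueing behind a waiting writer. That unfairness toward
writers is the read bias referred to above.)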