Re: [PATCH v2] pstore: Revert pmsg_lock back to a normal mutex
From: Steven Rostedt
Date: Sun Mar 05 2023 - 11:36:55 EST
On Sat, 4 Mar 2023 03:10:29 +0000
John Stultz <jstultz@xxxxxxxxxx> wrote:
> This reverts commit 76d62f24db07f22ccf9bc18ca793c27d4ebef721.
>
> So while priority inversion on the pmsg_lock is an occasional
> problem that an rt_mutex would help with, in use cases where
> logging writes to pmsg heavily from multiple threads, the
> pmsg_lock can be heavily contended.
>
> Normal mutexes can do adaptive spinning, which keeps the
> contention overhead fairly low, maybe adding on the order of 10s
> of us of delay waiting, but the slowpath w/ rt_mutexes makes the
> blocked tasks sleep & wake. This makes matters worse when there
> is heavy contention, as it just allows additional threads to
> run and line up to try to take the lock.
That is incorrect. rt_mutexes have pretty much the same adaptive spinning
as normal mutexes. So that is *not* the reason for the difference in
performance, and should not be stated so in this change log.
The difference between an rt_mutex and a normal mutex is that the
rt_mutex will trigger priority inheritance. Perhaps on high contention,
that makes a difference. But do not state it's because rt_mutex does not
have adaptive spinning, because it most definitely does.
-- Steve
>
> It devolves into a worst-case scenario where the lock acquisition
> and scheduling overhead dominates, and each thread is waiting on
> the order of ~ms to do ~us of work.
>
> Obviously, having tons of threads all contending on a single
> lock for logging is non-optimal, so the proper fix is probably
> reworking pstore pmsg to have per-cpu buffers so we don't have
> contention.
>
> But in the short term, let's revert the change to the rt_mutex
> and go back to normal mutexes to avoid a potentially major
> performance regression.
>
> Cc: Wei Wang <wvw@xxxxxxxxxx>
> Cc: Midas Chien <midaschieh@xxxxxxxxxx>
> Cc: "Chunhui Li (李春辉)" <chunhui.li@xxxxxxxxxxxx>
> Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
> Cc: Kees Cook <keescook@xxxxxxxxxxxx>
> Cc: Anton Vorontsov <anton@xxxxxxxxxx>
> Cc: "Guilherme G. Piccoli" <gpiccoli@xxxxxxxxxx>
> Cc: Tony Luck <tony.luck@xxxxxxxxx>
> Cc: kernel-team@xxxxxxxxxxx
> Fixes: 76d62f24db07 ("pstore: Switch pmsg_lock to an rt_mutex to avoid priority inversion")
> Reported-by: "Chunhui Li (李春辉)" <chunhui.li@xxxxxxxxxxxx>
> Tested-by: Chunhui Li <chunhui.li@xxxxxxxxxxxx>
> Signed-off-by: John Stultz <jstultz@xxxxxxxxxx>
> ---
> I know Steven is working on a fix to address the rtmutex not
> spinning, but as the earlier version of it didn't resolve the
> issue for Chunhui Li, I wanted to resend this out again w/
> Tested-by tags, so it is ready to go if needed. I am looking
> to get a local reproducer so I can help validate Steven's
> efforts.
>
> v2:
> * Fix quoting around Chunhui Li's email name (so they are actually
> cc'ed)
> * Added Tested-by tag
> ---
> fs/pstore/pmsg.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/fs/pstore/pmsg.c b/fs/pstore/pmsg.c
> index ab82e5f05346..b31c9c72d90b 100644
> --- a/fs/pstore/pmsg.c
> +++ b/fs/pstore/pmsg.c
> @@ -7,10 +7,9 @@
> #include <linux/device.h>
> #include <linux/fs.h>
> #include <linux/uaccess.h>
> -#include <linux/rtmutex.h>
> #include "internal.h"
>
> -static DEFINE_RT_MUTEX(pmsg_lock);
> +static DEFINE_MUTEX(pmsg_lock);
>
> static ssize_t write_pmsg(struct file *file, const char __user *buf,
> size_t count, loff_t *ppos)
> @@ -29,9 +28,9 @@ static ssize_t write_pmsg(struct file *file, const char __user *buf,
> if (!access_ok(buf, count))
> return -EFAULT;
>
> - rt_mutex_lock(&pmsg_lock);
> + mutex_lock(&pmsg_lock);
> ret = psinfo->write_user(&record, buf);
> - rt_mutex_unlock(&pmsg_lock);
> + mutex_unlock(&pmsg_lock);
> return ret ? ret : count;
> }
>