Re: [PATCH] proc: Avoid a thundering herd of threads freeing proc dentries

From: Junxiao Bi
Date: Fri Jun 19 2020 - 11:56:37 EST


Hi Eric,

The patch didn't reduce the lock contention.

 PerfTop: 48925 irqs/sec kernel:95.6% exact: 100.0% lost: 0/0 drop: 0/0 [4000Hz cycles], (all, 104 CPUs)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    69.66%  [kernel]        [k] native_queued_spin_lock_slowpath
     1.93%  [kernel]        [k] _raw_spin_lock
     1.24%  [kernel]        [k] page_counter_cancel
     0.70%  [kernel]        [k] do_syscall_64
     0.62%  [kernel]        [k] find_idlest_group.isra.96
     0.57%  [kernel]        [k] queued_write_lock_slowpath
     0.56%  [kernel]        [k] d_walk
     0.45%  [kernel]        [k] clear_page_erms
     0.44%  [kernel]        [k] syscall_return_via_sysret
     0.40%  [kernel]        [k] entry_SYSCALL_64
     0.38%  [kernel]        [k] refcount_dec_not_one
     0.37%  [kernel]        [k] propagate_protected_usage
     0.33%  [kernel]        [k] unmap_page_range
     0.33%  [kernel]        [k] select_collect
     0.32%  [kernel]        [k] memcpy_erms
     0.30%  [kernel]        [k] proc_task_readdir
     0.27%  [kernel]        [k] _raw_spin_lock_irqsave

Thanks,

Junxiao.

On 6/19/20 7:09 AM, ebiederm@xxxxxxxxxxxx wrote:
Junxiao Bi <junxiao.bi@xxxxxxxxxx> reported:
While debugging a performance issue, I found that thousands of threads exiting
at around the same time can cause severe spinlock contention on the proc dentry
"/proc/$parent_process_pid/task/", because each thread needs to remove its pid
file from that directory when it exits.
Matthew Wilcox <willy@xxxxxxxxxxxxx> reported:
We've looked at a few different ways of fixing this problem.
The flushing of the proc dentries from the dcache is an optmization,
and is not necessary for correctness. Eventually cache pressure will
cause the dentries to be freed even if no flushing happens. Some
light testing when I refactored the proc flushg[1] indicated that at
least the memory footprint is easily measurable.

An optimization that causes a performance problem due to a thundering
herd of threads is no real optimization.

Modify the code to only flush the /proc/<tgid>/ directory when all
threads in a process are killed at once. This continues to flush
practically everything when the process is reaped as the threads live
under /proc/<tgid>/task/<tid>.

There is a rare possibility that a debugger will access /proc/<tid>/,
which this change will no longer flush, but I believe such accesses
are sufficiently rare to not be observed in practice.

[1] 7bc3e6e55acf ("proc: Use a list of inodes to flush from proc")
Link: https://lkml.kernel.org/r/54091fc0-ca46-2186-97a8-d1f3c4f3877b@xxxxxxxxxx
Reported-by: Masahiro Yamada <masahiroy@xxxxxxxxxx>
Reported-by: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Signed-off-by: "Eric W. Biederman" <ebiederm@xxxxxxxxxxxx>
---

I am still waiting for word on how this affects performance, but this is
a clean version that should avoid the thundering herd problem in
general.


kernel/exit.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/kernel/exit.c b/kernel/exit.c
index cebae77a9664..567354550d62 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -151,8 +151,8 @@ void put_task_struct_rcu_user(struct task_struct *task)
 void release_task(struct task_struct *p)
 {
+	struct pid *flush_pid = NULL;
 	struct task_struct *leader;
-	struct pid *thread_pid;
 	int zap_leader;
 repeat:
 	/* don't need to get the RCU readlock here - the process is dead and
@@ -165,7 +165,16 @@ void release_task(struct task_struct *p)
 	write_lock_irq(&tasklist_lock);
 	ptrace_release_task(p);
-	thread_pid = get_pid(p->thread_pid);
+
+	/*
+	 * When all of the threads are exiting wait until the end
+	 * and flush everything.
+	 */
+	if (thread_group_leader(p))
+		flush_pid = get_pid(task_tgid(p));
+	else if (!(p->signal->flags & SIGNAL_GROUP_EXIT))
+		flush_pid = get_pid(task_pid(p));
+
 	__exit_signal(p);
 
 	/*
@@ -188,8 +197,10 @@ void release_task(struct task_struct *p)
 	}
 
 	write_unlock_irq(&tasklist_lock);
-	proc_flush_pid(thread_pid);
-	put_pid(thread_pid);
+	if (flush_pid) {
+		proc_flush_pid(flush_pid);
+		put_pid(flush_pid);
+	}
 	release_thread(p);
 
 	put_task_struct_rcu_user(p);