[GIT PULL] percpu changes for v5.13-rc4

From: Dennis Zhou
Date: Thu May 27 2021 - 16:23:10 EST


Hi Linus,

This contains a cleanup to lib/percpu-refcount.c and an update to the
MAINTAINERS file to more formally take over support for lib/percpu*.
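
For context, the refcount cleanup just replaces open-coded tests of the
__PERCPU_REF_DEAD bit with the existing percpu_ref_is_dying() helper
from include/linux/percpu-refcount.h, which reads roughly:

	/* true from percpu_ref_kill() until percpu_ref_resurrect()/exit */
	static inline bool percpu_ref_is_dying(struct percpu_ref *ref)
	{
		return ref->percpu_count_ptr & __PERCPU_REF_DEAD;
	}

No functional change, just one less place poking at the flag bit
directly.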

A few things I expect to have ready for-5.14: percpu depopulation
(queued) and an update to percpu memcg accounting (WIP from Roman
Gushchin).

Thanks,
Dennis

The following changes since commit 6efb943b8616ec53a5e444193dccf1af9ad627b5:

  Linux 5.13-rc1 (2021-05-09 14:17:44 -0700)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu.git for-5.13-fixes

for you to fetch changes up to c547addba7096debac4f99cdfe869a32a81081e2:

  MAINTAINERS: Add lib/percpu* as part of percpu entry (2021-05-13 04:50:30 +0000)

----------------------------------------------------------------
Nikolay Borisov (2):
      percpu_ref: Don't opencode percpu_ref_is_dying
      MAINTAINERS: Add lib/percpu* as part of percpu entry

 MAINTAINERS           | 2 ++
 lib/percpu-refcount.c | 6 +++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index bd7aff0c120f..9599e313d7f7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -14317,10 +14317,12 @@ PER-CPU MEMORY ALLOCATOR
 M:	Dennis Zhou <dennis@xxxxxxxxxx>
 M:	Tejun Heo <tj@xxxxxxxxxx>
 M:	Christoph Lameter <cl@xxxxxxxxx>
+L:	linux-mm@xxxxxxxxx
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu.git
 F:	arch/*/include/asm/percpu.h
 F:	include/linux/percpu*.h
+F:	lib/percpu*.c
 F:	mm/percpu*.c
 
 PER-TASK DELAY ACCOUNTING
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index a1071cdefb5a..af9302141bcf 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -275,7 +275,7 @@ static void __percpu_ref_switch_mode(struct percpu_ref *ref,
 	wait_event_lock_irq(percpu_ref_switch_waitq, !data->confirm_switch,
 			    percpu_ref_switch_lock);
 
-	if (data->force_atomic || (ref->percpu_count_ptr & __PERCPU_REF_DEAD))
+	if (data->force_atomic || percpu_ref_is_dying(ref))
 		__percpu_ref_switch_to_atomic(ref, confirm_switch);
 	else
 		__percpu_ref_switch_to_percpu(ref);
@@ -385,7 +385,7 @@ void percpu_ref_kill_and_confirm(struct percpu_ref *ref,
 
 	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
 
-	WARN_ONCE(ref->percpu_count_ptr & __PERCPU_REF_DEAD,
+	WARN_ONCE(percpu_ref_is_dying(ref),
 		  "%s called more than once on %ps!", __func__,
 		  ref->data->release);
 
@@ -465,7 +465,7 @@ void percpu_ref_resurrect(struct percpu_ref *ref)
 
 	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
 
-	WARN_ON_ONCE(!(ref->percpu_count_ptr & __PERCPU_REF_DEAD));
+	WARN_ON_ONCE(!percpu_ref_is_dying(ref));
 	WARN_ON_ONCE(__ref_is_percpu(ref, &percpu_count));
 
 	ref->percpu_count_ptr &= ~__PERCPU_REF_DEAD;