Re: [PATCH 2/3] cgroup: add lockless fast-path checks to cgroup_file_notify()
From: Chen Ridong
Date: Mon Mar 02 2026 - 22:19:39 EST
On 2026/3/3 1:00, Shakeel Butt wrote:
> On Mon, Mar 02, 2026 at 08:14:05AM -0800, Shakeel Butt wrote:
>> Hi Chen, thanks for taking a look.
>>
>> On Mon, Mar 02, 2026 at 09:50:53AM +0800, Chen Ridong wrote:
>>>
> [...]
>>>> + last = READ_ONCE(cfile->notified_at);
>>>> + if (time_before_eq(jiffies, last + CGROUP_FILE_NOTIFY_MIN_INTV))
>>>> + return;
>>>> +
>>>
>>> Previously, if a notification arrived within the rate-limit window, we would
>>> still call timer_reduce(&cfile->notify_timer, next) to schedule a deferred
>>> notification.
>>>
>>> With this change, returning early here bypasses that timer scheduling entirely.
>>> Does this risk missing notifications that would have been delivered by the timer?
>>>
>>
>> You are indeed right that this can cause missed notifications. After giving it
>> some thought, I think the lockless check-and-return can be simplified to a
>> timer_pending() check. If the timer is active, just do nothing and the
>> notification will be delivered eventually.
>>
>> I will send the updated version soon. Any comments on the other two patches?
>>
>
> Something like the following:
>
> From 598199723b50813b015393122796f6775eee02d7 Mon Sep 17 00:00:00 2001
> From: Shakeel Butt <shakeel.butt@xxxxxxxxx>
> Date: Sat, 28 Feb 2026 04:01:28 -0800
> Subject: [PATCH] cgroup: add lockless fast-path checks to cgroup_file_notify()
>
> Add two lockless checks before acquiring the lock:
>
> 1. READ_ONCE(cfile->kn) NULL check to skip torn-down files.
> 2. timer_pending() check to skip when a deferred notification
> timer is already armed.
>
> Both checks have safe error directions -- a stale read can only
> cause unnecessary lock acquisition, never a missed notification.
>
> Annotate cfile->kn write sites with WRITE_ONCE() to pair with the
> lockless reader.
>
> Signed-off-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
> Reported-by: Jakub Kicinski <kuba@xxxxxxxxxx>
> ---
> kernel/cgroup/cgroup.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index 2b298a2cf206..6e816d27ee25 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -1749,7 +1749,7 @@ static void cgroup_rm_file(struct cgroup *cgrp, const struct cftype *cft)
> struct cgroup_file *cfile = (void *)css + cft->file_offset;
>
> spin_lock_irq(&cgroup_file_kn_lock);
> - cfile->kn = NULL;
> + WRITE_ONCE(cfile->kn, NULL);
> spin_unlock_irq(&cgroup_file_kn_lock);
>
> timer_delete_sync(&cfile->notify_timer);
> @@ -4430,7 +4430,7 @@ static int cgroup_add_file(struct cgroup_subsys_state *css, struct cgroup *cgrp,
> timer_setup(&cfile->notify_timer, cgroup_file_notify_timer, 0);
>
> spin_lock_irq(&cgroup_file_kn_lock);
> - cfile->kn = kn;
> + WRITE_ONCE(cfile->kn, kn);
> spin_unlock_irq(&cgroup_file_kn_lock);
> }
>
> @@ -4689,6 +4689,12 @@ void cgroup_file_notify(struct cgroup_file *cfile)
> unsigned long flags;
> struct kernfs_node *kn = NULL;
>
> + if (!READ_ONCE(cfile->kn))
> + return;
> +
> + if (timer_pending(&cfile->notify_timer))
> + return;
> +
The added timer_pending() check seems problematic. According to its kernel-doc
comment, timer_pending() is only reliable when the caller serializes against
other operations on the timer. Checking it locklessly here means we might:
1. See an inconsistent state while another CPU is concurrently modifying the timer
2. Race with timer_delete_sync() in cgroup_rm_file(), or with timer_reduce()
   run under the lock in cgroup_file_notify() itself
> spin_lock_irqsave(&cgroup_file_kn_lock, flags);
> if (cfile->kn) {
> unsigned long last = cfile->notified_at;
--
Best regards,
Ridong