Re: [RFC PATCH 1/2] add pinned flags for kernfs node

From: Greg KH
Date: Fri Sep 10 2021 - 02:00:36 EST


On Fri, Sep 10, 2021 at 10:14:28AM +0800, taoyi.ty wrote:
>
> On 2021/9/8 8:35 PM, Greg KH wrote:
> > Why are kernfs changes needed for this? kernfs creation is not
> > necessarily supposed to be "fast", what benchmark needs this type of
> > change to require the addition of this complexity?
>
> The implementation of the cgroup pool should have nothing to do with
> kernfs, but during development I found that, when there is background
> CPU load, it takes a very significant amount of time for a process to
> go from being woken up on the mutex to actually starting execution.
>
> Creating 400 cgroups concurrently takes about 80ms with no background
> CPU load, but about 700ms when CPU usage is around 40%. Reducing
> sched_wakeup_granularity_ns also reduces the time taken, and changing
> the mutex to a spinlock improves the situation considerably.
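>
> For reference, the kind of measurement I am describing is roughly the
> following (a minimal sketch only; the parent directory
> /sys/fs/cgroup/test, the fork-per-cgroup structure and the worker count
> are illustrative assumptions, not the exact harness that produced the
> numbers above):
>
> #include <errno.h>
> #include <stdio.h>
> #include <sys/stat.h>
> #include <sys/time.h>
> #include <sys/types.h>
> #include <sys/wait.h>
> #include <unistd.h>
>
> #define NR_CGROUPS	400
> #define CGROUP_ROOT	"/sys/fs/cgroup/test"	/* assumed pre-created parent */
>
> int main(void)
> {
> 	struct timeval start, end;
> 	char path[256];
> 	int i;
>
> 	gettimeofday(&start, NULL);
>
> 	/* one child per cgroup so the mkdir()s hit kernfs concurrently */
> 	for (i = 0; i < NR_CGROUPS; i++) {
> 		if (fork() == 0) {
> 			snprintf(path, sizeof(path), CGROUP_ROOT "/cg-%d", i);
> 			if (mkdir(path, 0755) && errno != EEXIST)
> 				perror("mkdir");
> 			_exit(0);
> 		}
> 	}
>
> 	/* wait for every worker, then report the wall-clock time */
> 	while (wait(NULL) > 0)
> 		;
>
> 	gettimeofday(&end, NULL);
> 	printf("created %d cgroups in %ld ms\n", NR_CGROUPS,
> 	       (end.tv_sec - start.tv_sec) * 1000 +
> 	       (end.tv_usec - start.tv_usec) / 1000);
> 	return 0;
> }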
>
> So to solve this problem, the mutex should not be used. The cgroup pool
> relies on kernfs_rename, which takes kernfs_mutex, so I need to bypass
> kernfs_mutex and add a pinned flag for this.
>
> Because the locking scheme of kernfs_rename has changed, the creation
> and deletion paths of kernfs have also been changed accordingly in
> order to maintain data consistency.
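>
> To make the locking idea a little more concrete, it can be modelled in
> userspace roughly as follows (a toy sketch of one possible arrangement,
> not the actual patch; toy_node, toy_rename and tree_mutex are made-up
> names standing in for kernfs_node, kernfs_rename and kernfs_mutex):
>
> #include <pthread.h>
> #include <stdio.h>
>
> /* stands in for the global kernfs_mutex that every rename normally takes */
> static pthread_mutex_t tree_mutex = PTHREAD_MUTEX_INITIALIZER;
>
> struct toy_node {
> 	char name[64];
> 	int pinned;			/* the proposed "pinned" marker */
> 	pthread_spinlock_t lock;	/* per-node lock used on the pinned path */
> };
>
> static void toy_rename(struct toy_node *n, const char *new_name)
> {
> 	if (n->pinned) {
> 		/* pooled node: rename under its own lock instead of
> 		 * sleeping on the contended global mutex */
> 		pthread_spin_lock(&n->lock);
> 		snprintf(n->name, sizeof(n->name), "%s", new_name);
> 		pthread_spin_unlock(&n->lock);
> 	} else {
> 		/* ordinary node: serialize against the whole tree */
> 		pthread_mutex_lock(&tree_mutex);
> 		snprintf(n->name, sizeof(n->name), "%s", new_name);
> 		pthread_mutex_unlock(&tree_mutex);
> 	}
> }
>
> int main(void)
> {
> 	struct toy_node n = { .name = "pool-0", .pinned = 1 };
>
> 	pthread_spin_init(&n.lock, PTHREAD_PROCESS_PRIVATE);
> 	toy_rename(&n, "mycgroup");
> 	printf("%s\n", n.name);
> 	return 0;
> }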
>
> I admit that this is really not a very elegant design, but I don't know
> how to make it better, so I am laying out the problem here to seek help
> from the community.

Look at the changes to kernfs for 5.15-rc1 where a lot of the lock
contention was removed based on benchmarks where kernfs (through sysfs)
was accessed by lots of processes all at once.

That should help a bit in your case, but remember, the creation of
kernfs files is not the "normal" case, so it is not optimized at all.
We have optimized the access case, which is by far the most common.

good luck!

greg k-h