Re: [RFC PATCH 2/2] support cgroup pool in v1
From: Greg KH
Date: Wed Sep 08 2021 - 08:36:02 EST
On Wed, Sep 08, 2021 at 08:15:13PM +0800, Yi Tao wrote:
> Add the pool_size and delay_time interfaces. When the user writes
> pool_size, a cgroup pool is created; when the user later creates a
> cgroup, a fast path protected by a spinlock takes one from the
> resource pool. Performance improves in the following ways:
> 1. the critical section for creating cgroups is smaller
> 2. less time is spent sleeping while waiting for locks
> 3. competition with other cgroup operations protected by
>    cgroup_mutex is avoided
>
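A minimal sketch of the kind of spinlock-protected pool this description
implies (the struct, field, and helper names below are illustrative, not
taken from the patch):

#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/cgroup-defs.h>

/* Illustrative only -- not the patch's actual data structures. */
struct cgroup_pool {
	spinlock_t lock;		/* protects the fields below */
	struct list_head free_list;	/* pre-created, hidden cgroups */
	unsigned int nr_free;		/* current population */
	unsigned int pool_size;		/* value written to pool_size */
};

struct pooled_cgroup {
	struct cgroup *cgrp;
	struct list_head node;		/* linked on cgroup_pool.free_list */
};

/* Fast path: pop one pre-created cgroup without taking cgroup_mutex. */
static struct pooled_cgroup *cgroup_pool_get(struct cgroup_pool *pool)
{
	struct pooled_cgroup *pc = NULL;

	spin_lock(&pool->lock);
	if (!list_empty(&pool->free_list)) {
		pc = list_first_entry(&pool->free_list,
				      struct pooled_cgroup, node);
		list_del_init(&pc->node);
		pool->nr_free--;
	}
	spin_unlock(&pool->lock);

	return pc;	/* NULL means fall back to the normal slow path */
}
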
> Obtaining a resource from the pool is essentially a kernfs rename. With
> the pinned kernfs node support added by the previous patch, the cgroups
> in an enabled pool are kept in the pinned state, and their kernfs data
> structures are protected by the designated spinlock, getting rid of
> cgroup_mutex and kernfs_rwsem.
>
> To keep users from operating on them at random, the kernfs nodes of the
> cgroups in the pool are placed under a hidden kernfs tree that users
> cannot touch directly. When a user creates a cgroup, the fast path
> selects a node from the hidden tree and moves it to the correct
> position.
>
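Read together with the previous paragraph, the fast path amounts to
picking a pinned node from the hidden tree and re-parenting it. A rough
sketch using the existing kernfs_rename_ns() helper, with a hypothetical
cgroup_pool_place() wrapper and without the locking changes the patch
actually makes:

#include <linux/kernfs.h>

/*
 * Sketch only: move a pooled, pinned kernfs node out of the hidden tree
 * and under the parent directory the user asked for.  Per the commit
 * message, the real patch does this under the pool's spinlock rather
 * than cgroup_mutex or kernfs_rwsem, because the node is pinned while
 * it sits in the pool.
 */
static int cgroup_pool_place(struct kernfs_node *pooled_kn,
			     struct kernfs_node *new_parent,
			     const char *name)
{
	return kernfs_rename_ns(pooled_kn, new_parent, name, NULL);
}
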
> As users keep obtaining resources from the pool, the number of cgroups
> in it gradually decreases. When the number drops below a certain
> threshold, the pool is replenished. To avoid contending with the cgroup
> that is currently being created, this replenishment can be deferred by
> setting delay_time.
>
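A sketch of the deferred refill that delay_time would control, using a
standard delayed work item. The watermark, the worker, and the
assumption that delay_time is in milliseconds are all illustrative; the
struct cgroup_pool is reused from the sketch above:

#include <linux/workqueue.h>
#include <linux/jiffies.h>

/* Illustrative low-water mark below which the pool is topped up. */
#define CGROUP_POOL_LOW_WATERMARK	8

static void cgroup_pool_refill_fn(struct work_struct *work)
{
	/*
	 * Slow path: take cgroup_mutex and pre-create cgroups under the
	 * hidden tree until the pool is back at pool_size.
	 */
}

static DECLARE_DELAYED_WORK(cgroup_pool_refill_work, cgroup_pool_refill_fn);

/*
 * Called after a cgroup has been handed out.  Deferring the refill by
 * delay_time keeps it from contending with the cgroup the user is
 * setting up right now.
 */
static void cgroup_pool_maybe_refill(struct cgroup_pool *pool,
				     unsigned long delay_time_ms)
{
	if (pool->nr_free < CGROUP_POOL_LOW_WATERMARK)
		schedule_delayed_work(&cgroup_pool_refill_work,
				      msecs_to_jiffies(delay_time_ms));
}
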
> Suggested-by: Shanpei Chen <shanpeic@xxxxxxxxxxxxxxxxx>
> Signed-off-by: Yi Tao <escape@xxxxxxxxxxxxxxxxx>
> ---
> include/linux/cgroup-defs.h | 16 +++++
> include/linux/cgroup.h | 2 +
> kernel/cgroup/cgroup-v1.c | 139 ++++++++++++++++++++++++++++++++++++++++++++
I thought cgroup v1 was "obsolete" and not getting new features added to
it. What is wrong with just using cgroups 2 instead if you have a
problem with the v1 interface?
thanks,
greg k-h