Re: [linus:master] [xfs] c91d38b57f: stress-ng.chown.ops_per_sec 70.2% improvement
From: Dave Chinner
Date: Sun Oct 05 2025 - 07:55:11 EST
On Fri, Oct 03, 2025 at 09:56:15AM +0200, Christoph Hellwig wrote:
> On Thu, Oct 02, 2025 at 04:11:29PM +0800, kernel test robot wrote:
> >
> >
> > Hello,
> >
> > kernel test robot noticed a 70.2% improvement of stress-ng.chown.ops_per_sec on:
>
> I wonder what stress-ng chown is doing, because unless it is mixing fsync
> and ilock-heavy operations on the same inode this would be highly
> unexpected.
stress-ng puts an fsync() at the end of every ops loop:
        do {
                int ret;

                ret = do_fchown(fd, bad_fd, cap_chown, uid, gid);
                if ((ret < 0) && (ret != -EPERM)) {
                        pr_fail("%s: fchown failed, errno=%d (%s)%s\n",
                                args->name, errno, strerror(errno),
                                stress_get_fs_type(filename));
                        rc = EXIT_FAILURE;
                        break;
                }

                ret = do_chown(chown, filename, cap_chown, uid, gid);
                if ((ret < 0) && (ret != -EPERM)) {
                        pr_fail("%s: chown %s failed, errno=%d (%s)%s\n",
                                args->name, filename, errno, strerror(errno),
                                stress_get_fs_type(filename));
                        rc = EXIT_FAILURE;
                        break;
                }

                ret = do_chown(lchown, filename, cap_chown, uid, gid);
                if ((ret < 0) && (ret != -EPERM)) {
                        pr_fail("%s: lchown %s failed, errno=%d (%s)%s\n",
                                args->name, filename, errno, strerror(errno),
                                stress_get_fs_type(filename));
                        rc = EXIT_FAILURE;
                        break;
                }

>>>>>>>>        (void)shim_fsync(fd);

                stress_bogo_inc(args);
        } while (stress_continue(args));
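
Stripped of the stress-ng wrappers and bogo-op accounting, that per-op
sequence boils down to something like the standalone loop below. It is
only an approximation: the test path and iteration count are arbitrary,
and the do_*chown()/shim_fsync() wrappers are replaced with the raw
syscalls they wrap.

/* Rough standalone approximation of the stress-ng chown op loop above. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        const char *filename = "/tmp/chown-test-file";  /* arbitrary test path */
        uid_t uid = getuid();
        gid_t gid = getgid();
        int fd, i;

        fd = open(filename, O_CREAT | O_RDWR, 0600);
        if (fd < 0) {
                perror("open");
                return EXIT_FAILURE;
        }

        for (i = 0; i < 100000; i++) {          /* arbitrary op count */
                if (fchown(fd, uid, gid) < 0 && errno != EPERM)
                        break;
                if (chown(filename, uid, gid) < 0 && errno != EPERM)
                        break;
                if (lchown(filename, uid, gid) < 0 && errno != EPERM)
                        break;
                (void)fsync(fd);                /* the fsync at the end of every op */
        }

        close(fd);
        unlink(filename);
        return EXIT_SUCCESS;
}
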
It's also triggering a change in rwsem contention behaviour on the
inode->i_rwsem taken in chown_common(), from sleeping to spinning:
fsync() no longer generates ILOCK_EXCL contention, so the task
currently doing a chown operation no longer has to sleep waiting on
the ILOCK_EXCL whilst holding the inode->i_rwsem.
Hence all the chown() operations now trigger the spin-on-waiter
heuristic on the inode->i_rwsem as the lock holder never releases
the CPU during the modification. That explains why the CPU usage
increase (from ~2 CPUs to ~50 CPUs) is way out of proportion with
the actual increase in performance.
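
That contention pattern is easy enough to approximate outside
stress-ng. The sketch below is only an approximation, not what
stress-ng itself does (the thread count, op count and test path are
arbitrary, and it only runs fchown() rather than all three chown
variants), but it puts several tasks on one inode doing chown-style
updates with the trailing fsync():

#include <errno.h>
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NR_THREADS      8       /* arbitrary worker count */
#define NR_OPS          100000  /* arbitrary per-thread op count */

static int fd;                  /* shared fd, so all workers hit one inode */

static void *worker(void *arg)
{
        uid_t uid = getuid();
        gid_t gid = getgid();
        int i;

        (void)arg;
        for (i = 0; i < NR_OPS; i++) {
                /* fchown() goes through chown_common(), which holds
                 * the inode->i_rwsem across the attribute change. */
                if (fchown(fd, uid, gid) < 0 && errno != EPERM)
                        break;
                /* trailing fsync, as in the stress-ng loop */
                (void)fsync(fd);
        }
        return NULL;
}

int main(void)
{
        pthread_t tids[NR_THREADS];
        int i;

        fd = open("/tmp/chown-contention-test", O_CREAT | O_RDWR, 0600);
        if (fd < 0) {
                perror("open");
                return EXIT_FAILURE;
        }

        for (i = 0; i < NR_THREADS; i++)
                pthread_create(&tids[i], NULL, worker, NULL);
        for (i = 0; i < NR_THREADS; i++)
                pthread_join(tids[i], NULL);

        close(fd);
        unlink("/tmp/chown-contention-test");
        return EXIT_SUCCESS;
}

Build with -pthread; comparing CPU usage before and after the commit
should show the same sleep-vs-spin difference described above.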
-Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx