Re: [PATCH v6 11/12] xfs: add xfs_compute_atomic_write_unit_max()

From: John Garry
Date: Wed Apr 09 2025 - 04:16:42 EST


On 09/04/2025 06:30, Dave Chinner wrote:
>> This is why I don't agree with adding a static 16MB limit -- we clearly
>> don't need it to emulate current hardware, which can commit up to 64k
>> atomically. Future hardware can increase that by 64x and we'll still be
>> ok with using the existing tr_write transaction type.
>>
>> By contrast, adding a 16MB limit would result in a much larger minimum
>> log size. If we add that to struct xfs_trans_resv for all filesystems
>> then we run the risk of some ancient filesystem with a 12M log suddenly
>> failing to mount on a new kernel.
>>
>> I don't see the point.
> You've got stuck on the example size of 16MB I gave, not
> the actual reason I gave that example.
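
To put rough numbers on the quoted log-size concern (a hypothetical
sketch -- the 2x headroom factor and the helper math below are purely
illustrative, not the real xfs_log_calc_minimum_size() logic):

#include <stdio.h>

int main(void)
{
	unsigned long long hw_now  = 64ULL << 10; /* 64k: HW atomic commit today */
	unsigned long long hw_64x  = hw_now * 64; /* 4MB: 64x growth, still fits tr_write */
	unsigned long long cap     = 16ULL << 20; /* proposed static 16MB limit */
	unsigned long long old_log = 12ULL << 20; /* an ancient 12M log */

	/* assume the minimum log size must cover ~2x the largest reservation */
	unsigned long long min_log = 2 * cap;

	printf("64x HW growth: ~%lluMB reservation\n", hw_64x >> 20);
	printf("16MB cap: needs >= %lluMB of log; a %lluMB log would fail to mount\n",
	       min_log >> 20, old_log >> 20);
	return 0;
}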

16MB is indeed a relatively large value -- relative, that is, to what can be achieved with HW offload today.

The target users we see for this feature are databases (DBs), and they want to do writes in the 16/32/64KB size range. Indeed, these are the sorts of sizes which disks support for atomic writes today.

Furthermore, DBs want the fast and predictable performance which HW offload provides; they do not want a slow software-based solution. A software-based solution will always be slower, as we need to deal with block alloc/de-alloc and extent remapping for every write -- see the sketch below.
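
Roughly, each software atomic write has to do something like the
following (a sketch with hypothetical helper names, not the actual
XFS code path):

#include <linux/errno.h>
#include <linux/fs.h>

struct extent;	/* opaque for this sketch */

/* hypothetical helpers, declared only to show the shape of the path */
struct extent *alloc_new_blocks(struct inode *ip, size_t len);
int write_out_of_place(struct extent *e, const void *data, size_t len);
int remap_extent(struct inode *ip, loff_t pos, size_t len, struct extent *e);
void free_old_blocks(struct inode *ip, loff_t pos, size_t len);
void free_new_blocks(struct extent *e);

int sw_atomic_write(struct inode *ip, loff_t pos, size_t len,
		    const void *data)
{
	struct extent *ext;
	int error;

	ext = alloc_new_blocks(ip, len);	/* block allocation */
	if (!ext)
		return -ENOSPC;

	error = write_out_of_place(ext, data, len);
	if (error)
		goto out_free;

	/* commit point: atomically remap the new extent into the file */
	error = remap_extent(ip, pos, len, ext);
	if (error)
		goto out_free;

	free_old_blocks(ip, pos, len);		/* de-allocation */
	return 0;

out_free:
	free_new_blocks(ext);
	return error;
}

None of that per-write overhead exists when the disk commits the write
atomically itself.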

So are there really people who want very large atomic write support and will tolerate slow performance, i.e. slower than what can be achieved with a double-write buffer or some other application-level logging?

Thanks,
John