Re: [PATCH v8 5/5] block: Pass unshare intent via REQ_OP_PROVISION

From: Dave Chinner
Date: Sun Oct 08 2023 - 19:27:40 EST


On Fri, Oct 06, 2023 at 06:28:17PM -0700, Sarthak Kukreti wrote:
> Allow REQ_OP_PROVISION to pass in an extra REQ_UNSHARE bit to
> annotate unshare requests to underlying layers. Layers that support
> FALLOC_FL_UNSHARE will be able to use this as an indicator of which
> fallocate() mode to use.
>
> Suggested-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> Signed-off-by: Sarthak Kukreti <sarthakkukreti@xxxxxxxxxxxx>
> ---
> block/blk-lib.c           |  6 +++++-
> block/fops.c              |  6 ++++--
> drivers/block/loop.c      | 35 +++++++++++++++++++++++++++++------
> include/linux/blk_types.h |  3 +++
> include/linux/blkdev.h    |  3 ++-
> 5 files changed, 43 insertions(+), 10 deletions(-)
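
For reference, my reading of the mechanics from the diffstat above is
something like the sketch below. This is not code from the patch -
blkdev_issue_provision() comes from earlier in this series, and the
exact signature and flags plumbing here are assumptions on my part:

/*
 * Sketch only: a block device fallocate() path translating
 * FALLOC_FL_UNSHARE into the new REQ_UNSHARE flag. The trailing
 * flags argument is what this patch appears to add.
 */
static int provision_range(struct block_device *bdev, loff_t start,
                           loff_t len, bool unshare)
{
        return blkdev_issue_provision(bdev, start >> SECTOR_SHIFT,
                                      len >> SECTOR_SHIFT, GFP_KERNEL,
                                      unshare ? REQ_UNSHARE : 0);
}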

I have no idea how filesystems (or even userspace applications, for
that matter) are supposed to use this - they have no idea if the
underlying block device has shared blocks for LBA ranges it already
has allocated and provisioned. IOWs, I don't know what the semantics
of this operation are; it is not documented anywhere, and there is no
use case presented that tells me how it might get used.

Yes, unshare at the file level means the filesystem tries to break
internal data extent sharing, but if the block layers or backing
devices are doing deduplication and sharing unknown to the
application or filesystem, how do they ever know that this operation
might need to be performed? In what cases do we need to be able to
unshare block device ranges, and how is that different to the
guarantees that REQ_OP_PROVISION is already supposed to give for
provisioned ranges that are then subsequently shared by the block
device (e.g. by snapshots)?
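
To be concrete about the file-level operation I mean here, this is
the sort of thing an application does today to get stable, non-COW
blocks (userspace sketch; FALLOC_FL_UNSHARE_RANGE is the uapi
spelling of the unshare mode):

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>

/* Ask the filesystem to break any shared (reflinked/deduplicated)
 * extents backing this range, so later overwrites cannot fail with
 * ENOSPC during COW. */
static int unshare_file_range(int fd, off_t off, off_t len)
{
        return fallocate(fd, FALLOC_FL_UNSHARE_RANGE, off, len);
}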

Also, from an API perspective, this is an "unshare" data operation,
not a "provision" operation. Hence I'd suggest that the API should
be blkdev_issue_unshare() rather than optional behaviour bolted onto
_provision(), which - before this patch - had a clear and well defined
meaning....
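
i.e. something with this shape (a sketch of the suggestion only,
with arguments mirroring the existing blkdev_issue_* helpers -
nothing here is from the patch):

/*
 * A dedicated primitive for the "unshare" data operation rather
 * than a modifier flag on _provision(). Prototype is illustrative;
 * internally it could still be built on REQ_OP_PROVISION with
 * REQ_UNSHARE set.
 */
int blkdev_issue_unshare(struct block_device *bdev, sector_t sector,
                         sector_t nr_sects, gfp_t gfp_mask);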

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx