Re: [PATCH] xfs: getattr ignore blocks beyond eof

From: Dave Chinner
Date: Fri Apr 01 2022 - 18:14:34 EST


On Fri, Apr 01, 2022 at 04:09:40PM +0800, wang.yi59@xxxxxxxxxx wrote:
> > On Thu, Mar 31, 2022 at 04:32:07PM +0800, wang.yi59@xxxxxxxxxx wrote:
> > > > We do not, and have not ever tried to, hide allocation or block
> > > > usage artifacts from userspace because any application that depends
> > > > on specific block allocation patterns or accounting from the
> > > > filesystem is broken by design.
> > > >
> > > > If your application is dependent on block counts exactly matching
> > > > the file data space for whatever reason, then what speculative
> > > > preallocation does is the least of your problems.
> > > >
> > >
> > > Thanks for your explanation.
> > >
> > > Unfortunately, the app I'm using evaluates disk usage by querying
> > > the backend filesystem (XFS) file before and after the operation
> > > and taking the difference.
> >
> > What application is this?
> >
> > What is it trying to use this information for?
>
> Thanks very much, Dave.
>
> I'm trying to use a new xlator (module) named 'simple-quota' in
> glusterfs, which collects a file's disk usage via stat, for its
> quota function.
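
As an aside, here's a minimal sketch (not from this thread; the file
name is made up) of the artifact that stat()-based accounting runs
into on XFS. Whether surplus blocks show up, and for how long,
depends on the allocator's speculative preallocation heuristics:

/*
 * Grow a file with buffered writes, then compare st_blocks against
 * the blocks needed for the data alone. On XFS, speculative
 * preallocation beyond EOF may make st_blocks the larger number
 * while the preallocation persists.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        static char buf[1 << 20];
        struct stat st;
        int fd = open("testfile", O_CREAT | O_TRUNC | O_WRONLY, 0644);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        memset(buf, 'x', sizeof(buf));

        /* Extend the file in 1MB chunks; the allocator may reserve
         * space past EOF in anticipation of further appends. */
        for (int i = 0; i < 16; i++)
                if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
                        perror("write");
        fsync(fd);
        fstat(fd, &st);

        printf("st_size:            %lld\n", (long long)st.st_size);
        printf("data in 512B units: %lld\n",
               (long long)((st.st_size + 511) / 512));
        printf("st_blocks:          %lld\n", (long long)st.st_blocks);

        close(fd);
        unlink("testfile");
        return 0;
}

A before/after stat() pair around a write can therefore see the block
count move by more than the bytes written, and move again later when
the preallocation is trimmed.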

So a company (kadalu) has forked gluster, then removed the gluster
quota implementation because of undefined "major performance
problems" and then implemented their own quota thing that stores
disk usage information in extended attributes. And:

Advised method is adding quota limit info before writing any
information to directory. Even otherwise we don’t need quota
crawl for updating quota-limit, but do a du -s -b $dir and
write the output into xattr.

Yeah, that's broken by design.
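
For illustration, a minimal sketch (a hypothetical helper, not
gluster code) of the two different numbers such a crawl can produce.
Note that GNU "du -s -b" sums apparent size (st_size), not allocated
blocks, and that the crawl races with concurrent writers either way:

/*
 * Walk a tree and total both apparent and allocated bytes. Sparse
 * files, reflinked extents and speculative preallocation all drive
 * the two sums apart. (Regular files only; real du counts
 * directories too.)
 */
#define _XOPEN_SOURCE 700
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>

static long long apparent_bytes, allocated_bytes;

static int visit(const char *path, const struct stat *st,
                 int type, struct FTW *ftw)
{
        (void)path; (void)ftw;
        if (type == FTW_F) {
                apparent_bytes  += st->st_size;
                allocated_bytes += (long long)st->st_blocks * 512;
        }
        return 0;
}

int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s <dir>\n", argv[0]);
                return 1;
        }
        if (nftw(argv[1], visit, 64, FTW_PHYS) < 0) {
                perror("nftw");
                return 1;
        }
        printf("apparent:  %lld bytes\n", apparent_bytes);
        printf("allocated: %lld bytes\n", allocated_bytes);
        return 0;
}

Whichever of those two numbers gets written into the xattr, it's
stale by the time it lands there.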

Oh, look at that:

Just run statfs() on all dirs which has Quota set, and send
a flag to update status of quota xlator through internal
xattr mechanism if quota limit exceeds for a directory! This
can run once in 5 or 10 seconds, or even lesser frequency if
the quota limit is huge!

That's pretty broken, too. Doesn't scale out, either...
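
To make that concrete, a minimal sketch (paths, limits and the xattr
step are all hypothetical) of the quoted polling loop:

/*
 * Poll statfs() on each quota directory every few seconds and flag
 * it once over the limit. On a plain local filesystem statfs(2)
 * reports whole-fs numbers, not per-directory usage, so this only
 * works where a quota xlator rewrites the reply.
 */
#include <stdio.h>
#include <sys/vfs.h>
#include <unistd.h>

struct quota_dir {
        const char *path;               /* hypothetical quota dirs */
        unsigned long long limit_bytes;
};

int main(void)
{
        struct quota_dir dirs[] = {
                { "/mnt/vol/projects", 1ULL << 30 },
                { "/mnt/vol/scratch",  4ULL << 30 },
        };

        for (;;) {
                for (size_t i = 0; i < sizeof(dirs) / sizeof(dirs[0]); i++) {
                        struct statfs sfs;
                        unsigned long long used;

                        if (statfs(dirs[i].path, &sfs) < 0)
                                continue;
                        used = (unsigned long long)
                               (sfs.f_blocks - sfs.f_bfree) * sfs.f_bsize;
                        if (used > dirs[i].limit_bytes) {
                                /* here the xlator would set its
                                 * internal "limit exceeded" xattr */
                                printf("%s: over limit\n", dirs[i].path);
                        }
                }
                sleep(5);       /* "once in 5 or 10 seconds" */
        }
        return 0;
}

One statfs() per quota directory per pass, so the work grows linearly
with the number of quota dirs, and anything written between two polls
sails straight past the limit.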

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx