Re: [PATCH AUTOSEL 4.14 25/35] iomap: sub-block dio needs to zeroout beyond EOF
From: Greg KH
Date: Fri Nov 30 2018 - 03:22:09 EST
On Fri, Nov 30, 2018 at 09:40:19AM +1100, Dave Chinner wrote:
> On Thu, Nov 29, 2018 at 01:47:56PM +0100, Greg KH wrote:
> > On Thu, Nov 29, 2018 at 11:14:59PM +1100, Dave Chinner wrote:
> > >
> > > Cherry picking only one of the 50-odd patches we've committed into
> > > late 4.19 and 4.20 kernels to fix the problems we've found really
> > > seems like asking for trouble. If you're going to backport random
> > > data corruption fixes, then you need to spend a *lot* of time
> > > validating that it doesn't make things worse than they already
> > > are...
> >
> > Any reason why we can't take the 50-odd patches in their entirety? It
> > sounds like 4.19 isn't fully fixed, but 4.20-rc1 is? If so, what do you
> > recommend we do to make 4.19 work properly?
>
> You could pull all the fixes, but then you have a QA problem.
> Basically, we have multiple badly broken interfaces (the FICLONERANGE
> and FIDEDUPERANGE ioctls and the copy_file_range() syscall), and even
> 4.20-rc4 isn't fully fixed.
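>
> For reference, the interfaces in question are driven from userspace
> roughly like this. This is a simplified, hypothetical sketch with
> made-up file names, not one of our actual tests:
>
>     /* Hypothetical illustration of the interfaces under discussion. */
>     #define _GNU_SOURCE
>     #include <fcntl.h>
>     #include <stdio.h>
>     #include <unistd.h>
>     #include <sys/types.h>
>     #include <sys/ioctl.h>
>     #include <linux/fs.h>   /* FICLONERANGE, struct file_clone_range */
>
>     int main(void)
>     {
>         int src = open("src.img", O_RDONLY);
>         int dst = open("dst.img", O_RDWR | O_CREAT, 0644);
>
>         if (src < 0 || dst < 0)
>             return 1;
>
>         /* FICLONERANGE: reflink 1MB from src into dst at offset 0 */
>         struct file_clone_range fcr = {
>             .src_fd = src,
>             .src_offset = 0,
>             .src_length = 1024 * 1024,
>             .dest_offset = 0,
>         };
>         if (ioctl(dst, FICLONERANGE, &fcr) < 0)
>             perror("FICLONERANGE");
>
>         /* copy_file_range(): byte copy the filesystem may accelerate */
>         loff_t in_off = 0, out_off = 0;
>         if (copy_file_range(src, &in_off, dst, &out_off,
>                             1024 * 1024, 0) < 0)
>             perror("copy_file_range");
>
>         /* FIDEDUPERANGE is driven the same way via ioctl(2) with a
>          * struct file_dedupe_range; omitted here for brevity. */
>         return 0;
>     }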
>
> There were ~5 critical dedupe/clone data corruption fixes for XFS
> that went into 4.19-rc8.
Have any of those been tagged for stable?
> There were ~30 patches that went into 4.20-rc1 that fixed the
> FICLONERANGE/FIDEDUPERANGE ioctls. That completely reworks the
> entire VFS infrastructure for those calls, and touches several
> filesystems as well. It fixes problems with setuid files, swap
> files, modifying immutable files, failure to enforce rlimit and
> max file size constraints, behaviour that didn't match man page
> descriptions, etc.
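>
> As one hypothetical example of the kind of constraint involved: after
> a successful clone into a setuid destination, the setuid bit should be
> stripped, just as it is for a plain write(2). A rough userspace check
> (made-up file names, not an actual fstests case) might look like:
>
>     /* Hypothetical check: cloning into a setuid file should drop the bit. */
>     #define _GNU_SOURCE
>     #include <fcntl.h>
>     #include <stdio.h>
>     #include <unistd.h>
>     #include <sys/ioctl.h>
>     #include <sys/stat.h>
>     #include <linux/fs.h>
>
>     int main(void)
>     {
>         int src = open("donor.img", O_RDONLY);
>         int dst = open("suid.img", O_RDWR);
>         struct file_clone_range fcr = { .src_fd = src };  /* whole file */
>         struct stat st;
>
>         if (src < 0 || dst < 0 || fchmod(dst, 04644) < 0)
>             return 1;
>         if (ioctl(dst, FICLONERANGE, &fcr) < 0)
>             perror("FICLONERANGE");
>         if (fstat(dst, &st) == 0 && (st.st_mode & S_ISUID))
>             fprintf(stderr, "setuid bit survived the clone\n");
>         return 0;
>     }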
>
> There were another ~10 patches that went into 4.20-rc4 that fixed
> yet more data corruption and API problems that we found when we
> enhanced fsx to use the above syscalls.
>
> And I have another ~10 patches that I'm working on right now to fix
> the copy_file_range() implementation - it has all the same problems
> I listed above for FICLONERANGE/FIDEDUPERANGE and some other unique
> ones. I'm currently writing error condition tests for fstests so
> that we at least have some coverage of the conditions
> copy_file_range() is supposed to catch and fail. This might all make
> a late 4.20-rcX, but it's looking more like 4.21 at this point.
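>
> One example of the sort of error-condition coverage needed (a sketch,
> not the actual fstests code): copying a range of a file onto an
> overlapping range of the same file is supposed to be rejected with
> EINVAL rather than silently corrupting data. The file name and sizes
> below are made up for illustration:
>
>     /* Hypothetical sketch: overlapping source/destination ranges in
>      * the same file should fail with EINVAL. */
>     #define _GNU_SOURCE
>     #include <errno.h>
>     #include <fcntl.h>
>     #include <stdio.h>
>     #include <unistd.h>
>     #include <sys/types.h>
>
>     int main(void)
>     {
>         int fd = open("testfile.img", O_RDWR);
>         loff_t in_off = 0, out_off = 4096;   /* ranges overlap: len > 4096 */
>         ssize_t ret;
>
>         if (fd < 0)
>             return 1;
>         ret = copy_file_range(fd, &in_off, fd, &out_off, 65536, 0);
>         if (ret >= 0)
>             fprintf(stderr, "overlapping copy unexpectedly succeeded\n");
>         else if (errno != EINVAL)
>             perror("copy_file_range");
>         return 0;
>     }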
>
> As to testing this stuff, I've spent several weeks now on this and
> so has Darrick. Between us we've done a huge amount of the QA needed
> to verify that the problems are fixed, and it is still ongoing. From
> #xfs a couple of days ago:
>
> [28/11/18 16:59] * djwong hits 6 billion fsxops...
> [28/11/18 17:07] <dchinner_> djwong: I've got about 3.75 billion ops running on a machine here....
> [28/11/18 17:20] <djwong> note that's 1 billion fsxops x 6 machines
> [28/11/18 17:21] <djwong> [xfsv4, xfsv5, xfsv5 w/ 1k blocks] * [directio fsx, buffered fsx]
> [28/11/18 17:21] <dchinner_> Oh, I've got 3.75B x 4 instances on one filesystem :P
> [28/11/18 17:22] <dchinner_> [direct io, buffered] x [small op lengths, large op lengths]
>
> And this morning:
>
> [30/11/18 08:53] <djwong> 7 billion fsxops...
>
> I stopped my tests at 5 billion ops yesterday (i.e. 20 billion ops
> aggregate) to focus on testing the copy_file_range() changes, but
> Darrick's tests are still ongoing and have passed 40 billion ops in
> aggregate over the past few days.
>
> The reason we are running these so long is that we've seen fsx data
> corruption failures after 12+ hours of runtime and hundreds of
> millions of ops. Hence the testing for backported fixes will need to
> replicate these test runs across multiple configurations for
> multiple days before we have any confidence that we've actually
> fixed the data corruptions and not introduced any new ones.
>
> If you pull only a small subset of the fixes, then fsx will still
> fail and we have no real way of verifying that no regressions have
> been introduced by the backport. IOWs, there's a /massive/ amount of
> QA needed to ensure that these backports work correctly.
>
> Right now the XFS developers don't have the time or resources
> available to validate that stable backports are correct and regression
> free, because we are focussed on ensuring the upstream fixes we've
> already made (and are still writing) are solid and reliable.
Ok, that's fine, so users of XFS should wait until the 4.20 release
before relying on it? :)
I understand your reluctance to backport anything, but it really feels
like you are not even allowing fixes that are "obviously right" to be
backported, even after they pass testing. That isn't ok for your users.
thanks,
greg k-h