Re: bcachefs status update (it's done cooking; let's get this sucker merged)
From: Kent Overstreet
Date: Mon Jun 10 2019 - 21:22:08 EST
On Mon, Jun 10, 2019 at 10:46:35AM -1000, Linus Torvalds wrote:
> On Mon, Jun 10, 2019 at 9:14 AM Kent Overstreet
> <kent.overstreet@xxxxxxxxx> wrote:
> > So. Here's my bcachefs-for-review branch - this has the minimal set of patches
> > outside of fs/bcachefs/. My master branch has some performance optimizations for
> > the core buffered IO paths, but those are fairly tricky and invasive so I want
> > to hold off on those for now - this branch is intended to be more or less
> > suitable for merging as is.
> Honestly, it really isn't.
Heh, I suppose that's what review is for :)
> There are obvious things wrong with it - like the fact that you've
> rebased it so that the original history is gone, yet you've not
> actually *fixed* the history, so you find things like reverts of
> commits that should simply have been removed, and fixes for things
> that should just have been fixed in the original commit the fix is for.
Yeah, I suppose I have dropped the ball on that lately.
> But note that the cleanup should go further than just fix those kinds
> of technical issues. If you rebase, and you have fixes in your tree
> for things you rebase, just fix things as you rewrite history anyway
> (there are cases where the fix may be informative in itself and it's
> worth leaving around, but that's rare).
Yeah that has historically been my practice, I've just been moving away from
that kind of history editing as bcachefs has been getting more users. Hence the
in-between, worst of both workflows state of the current tree.
But, I can certainly go through and clean things up like that one last time and
make everything bisectable again - I'll go through and write proper commit
messages too. Unless you'd be ok with just squashing most of the history down to
one commit - which would you prefer?
> Anyway, aside from that, I only looked at the non-bcachefs parts. Some
> of those are not acceptable either, like
> struct pagecache_lock add_lock
> ____cacheline_aligned_in_smp; /* protects adding new pages */
> in 'struct address_space', which is completely bogus, since that
> forces not only a potentially huge amount of padding, it also requires
> alignment that that struct simply fundamentally does not have, and
> _will_ not have.
Oh, good point.
> You can only use ____cacheline_aligned_in_smp for top-level objects,
> and honestly, it's almost never a win. That lock shouldn't be so hot.
> That lock is somewhat questionable in the first place, and no, we
> don't do those hacky recursive things anyway. A recursive lock is
> almost always a buggy and mis-designed one.
You're preaching to the choir there, I still feel dirty about that code and I'd
love nothing more than for someone else to come along and point out how stupid
I've been with a much better way of doing it.
> Why does the regular page lock (at a finer granularity) not suffice?
Because the lock needs to prevent pages from being _added_ to the page cache -
to do it with a page-granularity lock it'd have to be part of the radix tree.
> And no, nobody has ever cared. The dio people just don't care about
> page cache anyway. They have their own thing going.
It's not just dio, it's even worse with the various fallocate operations. And
the xfs people care, but IIRC even they don't have locking for pages being
faulted in. This is an issue I've talked to other filesystem people quite a bit
about - especially Dave Chinner, maybe we can get him to weigh in here.
And this inconsistency does result in _real_ bugs. It goes something like this:
- dio write shoots down the range of the page cache for the file it's writing
to, using invalidate_inode_pages_range2
- After the page cache shoot down, but before the write actually happens,
another process pulls those pages back in to the page cache
- Now the write happens: if that write was e.g. an allocating write, you're
going to have page cache state (buffer heads) saying the page doesn't have
anything on disk backing it, when it actually does because of the dio write.
xfs has additional locking (that the vfs does _not_ do) around both the buffered
and dio IO paths to prevent this happening because of a buffered read pulling
the pages back in, but no one has a solution for pages getting _faulted_ back in
- either because of mmap or gup().
And there are some filesystem people who do know about this race, because at
some point the dio code has been changed to shoot down the page cache _again_
after the write completes. But that doesn't eliminate the race, it just makes it
harder to trigger.
And dio writes actually aren't the worst of it, it's even worse with fallocate
FALLOC_FL_INSERT_RANGE/COLLAPSE_RANGE. Last time I looked at the ext4 fallocate
code, it looked _completely_ broken to me - the code seemed to think it was
using the same mechanism truncate uses for shooting down the page cache and
keeping pages from being readded - but that only works for truncate because it's
changing i_size and shooting down pages above i_size. Fallocate needs to shoot
down pages that are still within i_size, so... yeah...
The recursiveness is needed because otherwise, if you mmap a file, then do a dio
write where you pass the address you mmapped to pwrite(), gup() from the dio
write path will be trying to fault in the exact pages it's blocking from being
added.
A better solution would be for gup() to detect that and return an error, so we
can just fall back to buffered writes. Or just return an error to userspace
because fuck anyone who would actually do that.
But I fear plumbing that through gup() is going to be a hell of a lot uglier
than this patch.
I would really like Dave to weigh in here.
> Similarly, no, we're not starting to do vmalloc in non-process context. Stop it.
I don't want to do vmalloc in non-process context - but I do need to call
vmalloc when reading in btree nodes, from the filesystem IO path.
But I just learned today about this new memalloc_nofs_save() thing, so if that
works I'm more than happy to drop that patch.
> And the commit comments are very sparse. And not always signed off.
Yeah, I'll fix that.
> I also get the feeling that the "intent" part of the six-locks could
> just be done as a slight extension of the rwsem, where an "intent" is
> the same as a write-lock, but without waiting for existing readers,
> and then the write-lock part is just the "wait for readers to be done" part.
> Have you talked to Waiman Long about that?
No, I haven't, but I'm adding him to the list.
I really hate the idea of adding these sorts of special case features to the
core locking primitives though - I mean, look what's happened to the mutex code,
and the intent state isn't the only special feature they have. As is, they're
small and clean and they do their job well, I'd really prefer to have them just
remain their own thing instead of trying to cram it all into the hyper
optimized rw semaphore code.
Also, six locks used to be in fs/bcachefs/, but last time I was mailing stuff
out for review Peter Zijlstra was dead set against exporting the osq lock stuff
- moving six locks to kernel/locking/ was actually his idea.
I can say more about six locks tomorrow when I'm less sleep deprived, if you're
still not convinced.