Re: [PATCH v2] ceph/iov_iter: fix bad iov_iter handling in ceph splice codepaths
From: Jeff Layton
Date: Wed Jan 18 2017 - 07:15:02 EST
On Thu, 2017-01-12 at 12:37 +0100, Ilya Dryomov wrote:
> On Thu, Jan 12, 2017 at 12:27 PM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> > On Thu, 2017-01-12 at 07:59 +0000, Al Viro wrote:
> > > On Tue, Jan 10, 2017 at 07:57:31AM -0500, Jeff Layton wrote:
> > > >
> > > > v2: fix bug in offset handling in iov_iter_pvec_size
> > > >
> > > > xfstest generic/095 triggers soft lockups in kcephfs. Basically it uses
> > > > fio to drive some I/O via vmsplice and splice. Ceph then ends up trying
> > > > to access an ITER_BVEC type iov_iter as an ITER_IOVEC one. That causes it
> > > > to pick up a wrong offset and get stuck in an infinite loop while trying
> > > > to populate the page array. dio_get_pagev_size has a similar problem.
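
(For anyone following along: the problematic pattern in fs/ceph/file.c is
essentially this, paraphrased rather than quoted exactly:

	size_t align = (unsigned long)(it->iov->iov_base + it->iov_offset) &
			(PAGE_SIZE - 1);

...when the iter is ITER_BVEC, ->iov is the wrong member of the union, so
the computed page alignment is garbage and the page-population loop never
makes the progress it thinks it has.)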
> > > >
> > > > To fix the first problem, add a new iov_iter helper to determine the
> > > > offset into the page for the current segment and have ceph call that.
> > > > I would just replace dio_get_pages_alloc with iov_iter_get_pages_alloc,
> > > > but that will only return a single page at a time for ITER_BVEC and
> > > > it's better to make larger requests when possible.
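
(The helper being described is something along these lines. This is a
rough, untested sketch, the name is purely illustrative rather than what
the patch necessarily calls it, and ITER_PIPE is ignored for brevity:

static size_t iov_iter_single_seg_page_offset(const struct iov_iter *i)
{
	unsigned long base;

	if (i->type & ITER_BVEC)
		base = i->bvec->bv_offset + i->iov_offset;
	else if (i->type & ITER_KVEC)
		base = (unsigned long)i->kvec->iov_base + i->iov_offset;
	else
		base = (unsigned long)i->iov->iov_base + i->iov_offset;

	return base & ~PAGE_MASK;	/* offset within the page */
}

...i.e. it looks at whichever union member is actually live for the iter
type before computing the in-page offset.)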
> > > >
> > > > For the second problem, we simply replace it with a new helper that does
> > > > what it does, but properly for all iov_iter types.
> > > >
> > > > Since we're moving that into generic code, we can also utilize the
> > > > iterate_all_kinds macro to simplify this. That means that we need to
> > > > rework the logic a bit since we can't advance to the next vector while
> > > > checking the current one.
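
(Likewise for the size helper, in spirit. This is an open-coded sketch of
the logic rather than the iterate_all_kinds version the patch actually
uses, since that macro lives in lib/iov_iter.c; untested, name
illustrative. ITER_PIPE is ignored, and ITER_KVEC falls through to the
iovec branch since struct kvec has the same layout:

static size_t iov_iter_pvec_size(const struct iov_iter *i)
{
	size_t size;
	unsigned long n;

	if (i->type & ITER_BVEC) {
		const struct bio_vec *bvec = i->bvec;

		size = bvec->bv_len - i->iov_offset;
		for (n = 1; n < i->nr_segs; n++, bvec++) {
			/* keep going only while this tail and the next
			 * base are both page aligned */
			if (!PAGE_ALIGNED(bvec->bv_offset + bvec->bv_len) ||
			    !PAGE_ALIGNED(bvec[1].bv_offset))
				break;
			size += bvec[1].bv_len;
		}
	} else {
		const struct iovec *iov = i->iov;

		size = iov->iov_len - i->iov_offset;
		for (n = 1; n < i->nr_segs; n++, iov++) {
			if (!PAGE_ALIGNED(iov->iov_base + iov->iov_len) ||
			    !PAGE_ALIGNED(iov[1].iov_base))
				break;
			size += iov[1].iov_len;
		}
	}

	return min(size, i->count);
}

...same idea as ceph's dio_get_pagev_size, just without assuming ->iov is
the live union member.)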
> > >
> > > Yecchhh... That really looks like exposing way too low-level stuff instead
> > > of coming up with a saner primitive ;-/
> > >
> >
> > Fair point. That said, I'm not terribly thrilled with how
> > iov_iter_get_pages* works right now.
> >
> > Note that it only ever touches the first vector. Would it not be better
> > to keep getting page references if the bvec/iov elements are aligned
> > properly? It seems quite plausible that they often would be, and being
> > able to hand back a larger list of pages in most cases would be
> > advantageous.
> >
> > IOW, should we have iov_iter_get_pages basically do what
> > dio_get_pages_alloc does -- try to build as long an array of pages as
> > possible before returning, provided that the alignment works out?
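
To illustrate the semantics I'm after: today a caller that wants one long
page array has to do roughly the loop below itself, and I'd like the
iov_iter_get_pages* primitives to do the equivalent internally whenever
the alignment allows. Hand-wavy sketch only; the initial in-page offset
and the ref cleanup on the "hole" case are glossed over:

static ssize_t get_pages_contig(struct iov_iter *it, struct page **pages,
				size_t maxsize, unsigned int maxpages)
{
	size_t total = 0;

	while (maxsize && maxpages) {
		size_t start;
		ssize_t got;
		int npages;

		got = iov_iter_get_pages(it, pages, maxsize, maxpages, &start);
		if (got <= 0)
			return total ? total : got;

		/* a later chunk that doesn't start on a page boundary
		 * can't be appended to the same array (a real version
		 * would put the refs we just took) */
		if (total && start)
			break;

		iov_iter_advance(it, got);
		npages = DIV_ROUND_UP(start + got, PAGE_SIZE);
		pages += npages;
		maxpages -= npages;
		maxsize -= got;
		total += got;

		/* a chunk that ends mid-page ends the run */
		if ((start + got) & ~PAGE_MASK)
			break;
	}

	return total;
}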
> >
> > The NFS DIO code, for instance, could also benefit there. I know we've
> > had reports there in the past that sending down a bunch of small iovecs
> > causes a lot of small-sized requests on the wire.
> >
> > > Is page vector + offset in the first page + number of bytes really what
> > > ceph wants? Would e.g. an array of bio_vec be saner? Because _that_
> > > would make a lot more natural iov_iter_get_pages_alloc() analogue...
> > >
> > > And yes, I realize that you have ->pages wired into the struct ceph_osd_request;
> > > how painful would it be to have it switched to struct bio_vec array instead?
> >
> > Actually...it looks like that might not be too hard. The low-level OSD
> > handling code can already handle bio_vec arrays in order to service RBD.
> > It looks like we could switch cephfs to use
> > osd_req_op_extent_osd_data_bio instead of
> > osd_req_op_extent_osd_data_pages. That would add a dependency in cephfs
> > on CONFIG_BLOCK, but I think we could probably live with that.
>
> Ah, just that part might be easy enough ;)
>
>

Yeah, that part doesn't look too bad. Regardless, though, I think we need
to get a fix in for this sooner rather than later, since any user with
write access to a ceph mount can trivially get the kernel stuck in this
loop today.

Al, when you mentioned switching this over to a bio_vec based interface,
were you planning to roll up the iov_iter->bio_vec array helper for
this, or should I be looking into doing that?
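
For the record, the sort of shape I'd imagine for it is below; the name
and exact signature are just a strawman on my part:

/*
 * Strawman: allocate and fill an array of bio_vecs covering up to maxsize
 * bytes from the iter, taking page references as needed. Returns the
 * number of bytes covered, with *bvecs and *num_bvecs filled in.
 */
ssize_t iov_iter_get_bvecs_alloc(struct iov_iter *i, size_t maxsize,
				 struct bio_vec **bvecs, int *num_bvecs);

That would map naturally onto the ceph OSD request data and, if we go the
CONFIG_BLOCK route above, onto a bio as well.
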
Thanks,
--
Jeff Layton <jlayton@xxxxxxxxxx>