Re: [PATCH] block: don't make BLK_DEF_MAX_SECTORS too big

From: Kent Overstreet
Date: Wed Mar 30 2016 - 20:52:19 EST


On Wed, Mar 30, 2016 at 09:50:30AM -0700, Shaohua Li wrote:
> On Tue, Mar 29, 2016 at 11:51:51PM -0700, Christoph Hellwig wrote:
> > On Tue, Mar 29, 2016 at 03:01:10PM -0700, Shaohua Li wrote:
> > > The problem is that bcache allocates a big bio (with bio_alloc). The bio
> > > is split with blk_queue_split, but it isn't split to a small size because
> > > of the queue limits. The bio is cloned later in md, which uses
> > > bio_alloc_bioset, and bio_alloc_bioset itself can't allocate a big bio.
> >
> > bcache should be fixed to not allocate larger than allowed bios then.
> > And handling too large arguments to bio_alloc_bioset is still useful to
> > avoid the checks in the callers and make it robust.
>
> Doesn't this conflict with the goal of arbitrary bio sizes? I think nothing
> is wrong on the bcache side. The caller can allocate any size of bio, and
> the block layer will split the bio into a proper size according to the block
> layer limits and the driver limits. As long as bio_split can do the right
> job, the caller of bio_alloc is fine. Fixing bcache goes in the opposite
> direction. I'm Cc'ing Kent to check if he wants to fix bcache.

_Allocating_ large bios definitely shouldn't be an issue provided they're split
by the time they get to a driver they'd pose a problem for; the reason is that
when the driver clones the bio & bvecs, it's only going to clone the bvecs that
are live in the current split, not all the bvecs in the original bio (and if it
did clone them all it'd be broken, as it'd have to be looking at bi_vcnt, and
bi_vcnt can be 0 in a split now).
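
To make that concrete, here's a rough sketch of the difference between walking
bi_iter and walking bi_vcnt (the helper name copy_live_bvecs is made up, not
anything in the tree):

#include <linux/bio.h>

static void copy_live_bvecs(struct bio *src, struct bio_vec *dst)
{
	struct bio_vec bv;
	struct bvec_iter iter;
	unsigned i = 0;

	/*
	 * Right: bio_for_each_segment() starts from src->bi_iter, so it
	 * only visits the bvecs live in the current split.
	 */
	bio_for_each_segment(bv, src, iter)
		dst[i++] = bv;

	/*
	 * Wrong: bi_vcnt describes the original bvec array, not the
	 * current split, and can be 0 for a split bio:
	 *
	 *	for (i = 0; i < src->bi_vcnt; i++)
	 *		dst[i] = src->bi_io_vec[i];
	 */
}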

And then since generic_make_request() always calls blk_queue_split() before
passing a bio on to a driver, I'm wondering what the actual bug was...
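
(For reference, the pattern I have in mind is roughly the below - a bio-based
driver splits to its own limits up front, so the rest of the driver never sees
an oversized bio no matter how big the original allocation was. Names are
placeholders and the signatures are the ~4.5-era ones:

#include <linux/bio.h>
#include <linux/blkdev.h>

static blk_qc_t example_make_request(struct request_queue *q, struct bio *bio)
{
	/*
	 * Split @bio against q's limits; on return @bio is within the
	 * queue limits and the remainder has been chained and resubmitted
	 * via generic_make_request().
	 */
	blk_queue_split(q, &bio, q->bio_split);

	/* ... hand the now suitably sized bio to the driver proper ... */

	return BLK_QC_T_NONE;
}

so an oversized allocation on its own shouldn't be what blew up.)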