On Wed, Nov 25 2015 at 4:04am -0500,
Hannes Reinecke <hare@xxxxxxx> wrote:
On 11/20/2015 04:28 PM, Ewan Milne wrote:
On Fri, 2015-11-20 at 15:55 +0100, Hannes Reinecke wrote:
Can't we have a joint effort here?
I've been spending a _LOT_ of time trying to debug things here, but
none of the ideas I've come up with have been able to fix anything.
Yes. I'm not the one primarily looking at it, and we don't have a
reproducer in-house. We just have the one dump right now.
I'm almost tempted to increase the count from scsi_alloc_sgtable()
by one and be done with ...
That might not fix it if it is a problem with the merge code, though.
And indeed, it doesn't. Seems I finally found the culprit.
How did you arrive at that? Do you have a reproducer now?
Yes. Not a reproducer, but several dumps for analysis.
What happens is this:
We have two paths, with these seg_boundary_masks:
path-1: seg_boundary_mask = 65535,
path-2: seg_boundary_mask = 4294967295,
consequently the DM request queue has this:
md-1: seg_boundary_mask = 65535,
What happens now is that a request is being formatted and sent
to path 2. During submission req->nr_phys_segments is calculated
with the limits of path 2, arriving at a count of 3.
Now the request gets retried on path 1, but as the NOMERGE request
flag is set, req->nr_phys_segments is never recalculated.
But blk_rq_map_sg() ignores all counters and just walks the
bio_vecs directly, resulting in a count of 4 -> boom.
So the culprit here is the NOMERGE flag, which is evaluated via
->dm_dispatch_request()
->blk_insert_cloned_request()
->blk_rq_check_limits()
NOMERGE is always set in __blk_rq_prep_clone() for cloned requests.
blk_insert_cloned_request() is the only caller of blk_rq_check_limits();
anyway, after reading your mail I'm still left wondering if your proposed
patch is correct.
If the above assessment is correct, the following patch should
fix it:
diff --git a/block/blk-core.c b/block/blk-core.c
index 801ced7..12cccd6 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1928,7 +1928,7 @@ EXPORT_SYMBOL(submit_bio);
*/
int blk_rq_check_limits(struct request_queue *q, struct request *rq)
{
- if (!rq_mergeable(rq))
+ if (rq->cmd_type != REQ_TYPE_FS)
return 0;
if (blk_rq_sectors(rq) > blk_queue_get_max_sectors(q,
rq->cmd_flags)) {
Mike? Jens?
Can you comment on it?
You're not explaining the actual change in the patch very well; I think
you're correct but you're leaving the justification as an exercise to
the reviewer:
blk_rq_check_limits() will call blk_recalc_rq_segments() after the
!rq_mergeable() check but you're saying for this case in question we
never get there -- due to the cloned request having NOMERGE set.
So in blk_rq_check_limits() you've unrolled rq_mergeable() and
open-coded the lone remaining check (rq->cmd_type != REQ_TYPE_FS).
I agree that the (rq->cmd_flags & REQ_NOMERGE_FLAGS) check in
the blk_insert_cloned_request() call-chain (via rq_mergeable()) makes no
sense for cloned requests that always have NOMERGE set.
So you're saying that by having blk_rq_check_limits() go on to call
blk_recalc_rq_segments() this bug will be fixed?
That is the idea.
BTW, I think blk_rq_check_limits()'s export should be removed and the
function made static and renamed to blk_clone_rq_check_limits(); again:
blk_insert_cloned_request() is the only caller of blk_rq_check_limits().
Seems prudent to make that change now to be clear that this code is only
used by cloned requests.
Yeah, that would make sense. Actually, seeing Jens' last comment, the
check for REQ_TYPE_FS is pointless, too, so we might as well remove the
entire if-clause. I'll be preparing a patch.