Re: [PATCH 005 of 35] Stop updating bi_idx, bv_len, bv_offset when a request completes

From: Tejun Heo
Date: Wed Aug 01 2007 - 10:59:41 EST


Hello,

Went through 1-4 and they all look sane and seem to be nice cleanups
with or without the rest of the series. I didn't really dig into each
conversion, so I can't say much about correctness, though.

NeilBrown wrote:
> Some requests signal partial completion. We currently record this
> by updating bi_idx, bv_len, and bv_offset.
> This is bad if the bi_io_vec is to be shared.
> So instead keep in "struct request" the amount of the first bio
> that has completed. This is "first_offset" (i.e. offset in to
> first bio). Update and use that instead.
>
> Signed-off-by: Neil Brown <neilb@xxxxxxx>
> @@ -3668,14 +3679,25 @@ EXPORT_SYMBOL(blk_rq_bio_prep);
>
>  void *blk_rq_data(struct request *rq)
>  {
> -	return page_address(bio_page(rq->bio)) +
> -		bio_offset(rq->bio);
> +	struct bio_vec bvec;
> +	struct req_iterator i;
> +
> +	rq_for_each_segment(rq, i, bvec)
> +		return page_address(bvec.bv_page) + bvec.bv_offset;
> +
> +	return NULL;
>  }
>  EXPORT_SYMBOL(blk_rq_data);
>
>  int blk_rq_cur_bytes(struct request *rq)
>  {
> -	return bio_iovec(rq->bio)->bv_len;
> +	struct bio_vec bvec;
> +	struct req_iterator i;
> +
> +	rq_for_each_segment(rq, i, bvec)
> +		return bvec.bv_len;
> +
> +	return 0;
>  }
>  EXPORT_SYMBOL(blk_rq_cur_bytes);

Just a small nit: it might be easier on the eyes to use something like
blk_first_segment(rq), which could also be used to implement
rq_for_each_segment().
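Something like this, maybe (completely untested sketch on top of the
first_offset field this patch adds; blk_first_segment() is a made-up
name):

static inline struct bio_vec blk_first_segment(struct request *rq)
{
	/*
	 * Hypothetical helper: assumes rq->bio is non-NULL and that
	 * first_offset never consumes the whole first bio, which the
	 * patch seems to maintain.
	 */
	struct bio *bio = rq->bio;
	int offset = rq->first_offset;
	int i = bio->bi_idx;
	struct bio_vec bvec = *bio_iovec_idx(bio, i);

	/* skip segments wholly consumed by first_offset */
	while (bvec.bv_len <= offset) {
		offset -= bvec.bv_len;
		bvec = *bio_iovec_idx(bio, ++i);
	}
	/* trim the partially consumed head of the segment */
	bvec.bv_offset += offset;
	bvec.bv_len -= offset;
	return bvec;
}

blk_rq_data() and blk_rq_cur_bytes() above would then collapse to
one-liners (plus a NULL rq->bio check to keep the current return-NULL
behavior), e.g.

void *blk_rq_data(struct request *rq)
{
	struct bio_vec bvec = blk_first_segment(rq);

	return page_address(bvec.bv_page) + bvec.bv_offset;
}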

> diff .prev/include/linux/blkdev.h ./include/linux/blkdev.h
> --- .prev/include/linux/blkdev.h	2007-07-31 11:20:46.000000000 +1000
> +++ ./include/linux/blkdev.h	2007-07-31 11:20:46.000000000 +1000
> @@ -254,6 +254,7 @@ struct request {
>
>  	struct bio *bio;
>  	struct bio *biotail;
> +	int first_offset;	/* offset into first bio in list */
>
>  	struct hlist_node hash;	/* merge hash */
>  	/*
> @@ -640,14 +641,25 @@ static inline void blk_queue_bounce(stru
>  struct req_iterator {
>  	int i;
>  	struct bio *bio;
> +	int offset;
>  };
>  #define rq_for_each_segment(rq, _iter, bvec) \
> -	for (_iter.bio = (rq)->bio; _iter.bio; _iter.bio = _iter.bio->bi_next) \
> -		for (_iter.i = _iter.bio->bi_idx, \
> -		     bvec = *bio_iovec_idx(_iter.bio, _iter.i); \
> +	for (_iter.bio = (rq)->bio, _iter.offset = (rq)->first_offset; \
> +	     _iter.bio; \
> +	     _iter.bio = _iter.bio->bi_next, _iter.offset = 0) \
> +		for (_iter.i = _iter.bio->bi_idx; \
>  		     _iter.i < _iter.bio->bi_vcnt; \
> -		     _iter.i++, bvec = *bio_iovec_idx(_iter.bio, _iter.i) \
> -		     )
> +		     _iter.i++ \
> +		     ) \
> +			if (bvec = *bio_iovec_idx(_iter.bio, _iter.i), \
> +			    bvec.bv_offset += _iter.offset, \
> +			    bvec.bv_len <= _iter.offset \
> +			    ? (_iter.offset -= bvec.bv_len, 0) \
> +			    : (bvec.bv_len -= _iter.offset, \
> +			       _iter.offset = 0, \
> +			       1))
> +
> +

Implementing and using blk_seg_iter_init(iter, rq) and
blk_seg_iter_next(iter) would be much more readable and would take less
cache space, since the loop body wouldn't get expanded at every call
site.
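Again, totally untested, just to illustrate (the names are made up;
struct req_iterator stays as this patch defines it):

static inline void blk_seg_iter_init(struct req_iterator *iter,
				     struct request *rq)
{
	iter->bio = rq->bio;
	iter->i = iter->bio ? iter->bio->bi_idx : 0;
	iter->offset = rq->first_offset;
}

/* fill *bvec with the next remaining segment; returns 0 when done */
static inline int blk_seg_iter_next(struct req_iterator *iter,
				    struct bio_vec *bvec)
{
	while (iter->bio) {
		for (; iter->i < iter->bio->bi_vcnt; iter->i++) {
			*bvec = *bio_iovec_idx(iter->bio, iter->i);
			if (bvec->bv_len <= iter->offset) {
				/* wholly consumed by first_offset */
				iter->offset -= bvec->bv_len;
				continue;
			}
			/* trim the partially consumed head */
			bvec->bv_offset += iter->offset;
			bvec->bv_len -= iter->offset;
			iter->offset = 0;
			iter->i++;
			return 1;
		}
		iter->bio = iter->bio->bi_next;
		iter->i = iter->bio ? iter->bio->bi_idx : 0;
		iter->offset = 0;
	}
	return 0;
}

and rq_for_each_segment() becomes a thin wrapper:

#define rq_for_each_segment(rq, _iter, bvec)		\
	for (blk_seg_iter_init(&(_iter), (rq));		\
	     blk_seg_iter_next(&(_iter), &(bvec));	\
	     /* nothing */)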

--
tejun