Re: [PATCH 34/36] block: implement bio_associate_current()

From: Vivek Goyal
Date: Wed Feb 22 2012 - 14:38:03 EST


On Wed, Feb 22, 2012 at 02:33:43PM -0500, Jeff Moyer wrote:
> Tejun Heo <tj@xxxxxxxxxx> writes:
>
> > Hey, Jeff.
>
> Hi, Tejun!
>
> > On Wed, Feb 22, 2012 at 08:45:02AM -0500, Jeff Moyer wrote:
> >> Tejun Heo <tj@xxxxxxxxxx> writes:
> >>
> >> > -v2: #ifdef CONFIG_BLK_CGROUP added around bio->bi_ioc dereference in
> >> > rq_ioc() to fix build breakage.
> >>
> >> This is useful for cfq without blk cgroups as well, right? Why have you
> >> limited the scope like this?
> >
> > Because blk-throttle is the only current user. We can move the
> > BLK_CGROUP to cover just bi_css later on as we add more users.
>
> I guess you're going to make me read the whole patch set. ;-) What I'm
> getting at is CFQ uses the io_context to make its scheduling decisions.
> If we can propagate the issuer's I/O context from bio creation all the
> way down to the I/O scheduler, then we can do a better job of accounting
> I/O (and hence scheduling, preemption, etc). As Vivek mentioned
> previously, we have seen performance issues with the dm-crypt target and
> CFQ, precisely because all of the I/O is submitted in the context of a
> worker thread, and the process that initiated the I/O is unknown at
> that point.
>
> Hopefully I've either cleared up my question, or proven to you that I do
> need to go read the rest of the patch set to understand why my question
> doesn't make sense. Let me know which is the case. ;-)
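
For reference, the rq_ioc() helper mentioned in the -v2 note above resolves
the io_context for a request roughly as sketched below -- reconstructed from
the description in this thread, not copied verbatim from the patch:

  static inline struct io_context *rq_ioc(struct bio *bio)
  {
  #ifdef CONFIG_BLK_CGROUP
          /* bio tagged by bio_associate_current(): use the issuer's ioc */
          if (bio && bio->bi_ioc)
                  return bio->bi_ioc;
  #endif
          /* otherwise fall back to the submitting task's io_context */
          return current->io_context;
  }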

Currently he has put the bio_associate_current() hook only in
blk_throtl_bio(), which is under CONFIG_BLK_CGROUP. It is agreed that the
mechanism is generally useful and that submit_bio() is probably a better
place for the hook. Tejun mentioned that once things are working well, we
can look at making the functionality more generic; at that point the
cgroup-specific #ifdefs will have to go.
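
To illustrate the dm-crypt case Jeff mentions, here is a rough sketch of how
a stacking driver could use bio_associate_current() once the hook is generic.
The driver-side names (my_io_ctx, my_defer_bio, my_do_submit) are made up for
the example; only bio_associate_current() and generic_make_request() are the
real interfaces:

  #include <linux/kernel.h>
  #include <linux/bio.h>
  #include <linux/blkdev.h>
  #include <linux/workqueue.h>

  /* Illustrative per-request context, not from the patch series. */
  struct my_io_ctx {
          struct workqueue_struct *wq;
          struct work_struct       work;
          struct bio              *bio;
  };

  /* Runs in the submitting task's context (e.g. from ->make_request). */
  static void my_defer_bio(struct my_io_ctx *ctx, struct bio *bio)
  {
          /*
           * Tag the bio with the issuer's io_context (and blkcg css) while
           * we are still the issuing task.  Without this, the I/O below is
           * attributed to the worker thread instead of the real issuer.
           */
          bio_associate_current(bio);

          ctx->bio = bio;
          queue_work(ctx->wq, &ctx->work);
  }

  /* Worker thread: submits on behalf of the original issuer. */
  static void my_do_submit(struct work_struct *work)
  {
          struct my_io_ctx *ctx = container_of(work, struct my_io_ctx, work);

          /* CFQ/blk-throttle now see the issuer's context, not the worker's. */
          generic_make_request(ctx->bio);
  }

With something like that in place the scheduler can charge and preempt based
on the task that actually issued the I/O, which is exactly the dm-crypt
problem described above.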

Thanks,
Vivek