Re: [PATCH RFC] fsio: filesystem io accounting cgroup
From: Vivek Goyal
Date: Mon Jul 08 2013 - 13:52:21 EST
On Mon, Jul 08, 2013 at 10:00:47AM -0700, Tejun Heo wrote:
> (cc'ing Vivek and Jens)
>
> Hello,
>
> On Mon, Jul 08, 2013 at 02:01:39PM +0400, Konstantin Khlebnikov wrote:
> > This is a proof of concept, just basic functionality for an IO controller.
> > This cgroup will control filesystem usage at the vfs layer; its main goal is
> > bandwidth control. It's supposed to be much more lightweight than memcg/blkio.
>
> While blkcg is pretty heavy handed right now, there's no inherent
> reason for it to be that way. The right thing to do would be updating
> blkcg to be light-weight rather than adding yet another controller.
> Also, all controllers should support full hierarchy.
Agreed.
Looks like he is looking to implement only IO throttling with max upper
limits in the fsio controller. And I thought the IO throttling part of blkcg was
pretty lightweight. Konstantin, is that not the case? Or do you find even
the throttling functionality to be heavyweight? If you have ideas to make
it lighter weight, we can always change it.
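
For reference, the kind of per-device max limit being discussed here is set
through the blkio throttle files. A minimal userspace sketch (assuming a
cgroup v1 blkio hierarchy mounted at /sys/fs/cgroup/blkio, a pre-created
group "grp1", and device 8:0 -- these names are illustrative, not from this
thread):

/* cap reads for tasks in grp1 at 10 MB/s on device 8:0 */
#include <stdio.h>
#include <stdlib.h>

static void write_file(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	fprintf(f, "%s\n", val);
	fclose(f);
}

int main(void)
{
	/* format: "<major>:<minor> <bytes per second>" */
	write_file("/sys/fs/cgroup/blkio/grp1/blkio.throttle.read_bps_device",
		   "8:0 10485760");
	/* writing "0" attaches the writing task itself to the cgroup */
	write_file("/sys/fs/cgroup/blkio/grp1/tasks", "0");
	return 0;
}

IOPS limits work the same way through blkio.throttle.read_iops_device and
blkio.throttle.write_iops_device.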
>
> > Unlike blkio, this method works for all filesystems, not just disk-backed ones.
> > Also it's able to handle writeback, because each inode has a context which can be
> > used in the writeback thread to account io operations.
>
> Again, a problem to be fixed in the stack rather than patching up from
> up above. The right thing to do is to propagate pressure through bdi
> properly and let whatever is backing the bdi generate appropriate
> amount of pressure, be that disk or network.
Ok, so use the network controller for controlling the IO rate on NFS? I
tried that once and it did not work. I think it had problems related
to losing the cgroup context info as the IO propagated through the stack. So
we will have to fix that too.
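
For the record, the net_cls-based approach being referred to looks roughly
like this (a sketch under assumptions: cgroup v1 net_cls mounted at
/sys/fs/cgroup/net_cls, a pre-created group "nfsgrp", and an HTB qdisc 10:
on the outgoing interface; the names are hypothetical, not an actual setup
from this thread):

/* tag traffic of tasks in nfsgrp with classid 10:1 for tc to match */
#include <stdio.h>
#include <stdlib.h>

static void write_file(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	fprintf(f, "%s\n", val);
	fclose(f);
}

int main(void)
{
	/* 0x00100001 == tc class 10:1 (major in upper 16 bits, minor in lower) */
	write_file("/sys/fs/cgroup/net_cls/nfsgrp/net_cls.classid", "0x00100001");
	/* attach the current task */
	write_file("/sys/fs/cgroup/net_cls/nfsgrp/tasks", "0");
	/*
	 * On the tc side (assumed, not from the original mail):
	 *   tc qdisc add dev eth0 root handle 10: htb
	 *   tc class add dev eth0 parent 10: classid 10:1 htb rate 10mbit
	 *   tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup
	 */
	return 0;
}

The trouble is that this only shapes the socket traffic generated on behalf
of the cgroup's tasks; once writeback or the NFS client issues the IO from a
different context, the classid no longer follows it.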
Thanks
Vivek