Re: [PATCH] loop: Limit the number of requests in the bio list
From: Lukáš Czerner
Date: Tue Oct 02 2012 - 04:52:17 EST
On Mon, 1 Oct 2012, Jeff Moyer wrote:
> Date: Mon, 01 Oct 2012 12:52:19 -0400
> From: Jeff Moyer <jmoyer@xxxxxxxxxx>
> To: Lukas Czerner <lczerner@xxxxxxxxxx>
> Cc: Jens Axboe <axboe@xxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx,
> Dave Chinner <dchinner@xxxxxxxxxx>
> Subject: Re: [PATCH] loop: Limit the number of requests in the bio list
>
> Lukas Czerner <lczerner@xxxxxxxxxx> writes:
>
> > Currently there is no limit on the number of requests in the loop bio
> > list. This can lead to some nasty situations when the caller spawns
> > tons of bio requests taking a huge amount of memory. This is even more
> > obvious with discard, where blkdev_issue_discard() will submit all bios
> > for the range and wait for them to finish afterwards. On really big loop
> > devices this can lead to an OOM situation, as reported by Dave Chinner.
> >
> > With this patch we will wait in loop_make_request() if the number of
> > bios in the loop bio list would exceed 'nr_requests'. We'll wake up
> > the process as we process the bios from the list.
>
> I think you might want to do something similar to what is done for
> request_queues by implementing a congestion on and off threshold. As
> Jens writes in this commit (predating the conversion to git):
Right, I've had the same idea. However, my first proof-of-concept
worked quite well without it, and my simple performance testing did
not show any regression.
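For what it's worth, the core of what the patch does can be modeled
in user space roughly like this (a sketch only; NR_REQUESTS and the
function names are made up for illustration, the real patch of course
works on the loop device's bio list under its lock):

#include <pthread.h>

#define NR_REQUESTS 128

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  room = PTHREAD_COND_INITIALIZER;
static unsigned int bio_count;

/* the loop_make_request() side: wait while the list is full */
void submit_bio_to_list(void)
{
	pthread_mutex_lock(&lock);
	while (bio_count >= NR_REQUESTS)
		pthread_cond_wait(&room, &lock);
	bio_count++;		/* bio_list_add() would go here */
	pthread_mutex_unlock(&lock);
}

/* the loop thread side: wake a waiter for every bio we complete */
void complete_bio_from_list(void)
{
	pthread_mutex_lock(&lock);
	bio_count--;		/* bio_list_pop() + handling would go here */
	pthread_cond_signal(&room);
	pthread_mutex_unlock(&lock);
}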
I basically just ran fstrim and blkdiscard on a huge loop device,
measuring time to completion, and measured dd bs=4k throughput. None
of those showed any performance regression. I chose them for being
quite simple and supposedly issuing quite a lot of bios. Any better
recommendation for testing this?
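In case the extra wakeups do become measurable, the on/off hysteresis
from the commit you quote below could be bolted onto the sketch above
like this (again just an illustration; the 1/8 gap is a made-up
example, not a tuned value):

#define CONGESTION_ON	NR_REQUESTS			/* block writers here... */
#define CONGESTION_OFF	(NR_REQUESTS - NR_REQUESTS / 8)	/* ...wake them here */

void complete_bio_from_list(void)
{
	pthread_mutex_lock(&lock);
	bio_count--;
	/* only wake writers once we have drained below the lower mark */
	if (bio_count == CONGESTION_OFF)
		pthread_cond_broadcast(&room);
	pthread_mutex_unlock(&lock);
}

The submit side would then wait while bio_count >= CONGESTION_ON, and
the gap between the two marks is what keeps us from waking processes
only to put them straight back to sleep.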
Also, I am still unable to reproduce the problem Dave originally
experienced, and I was hoping that he could test whether this helps
or not.
Dave, could you give it a try please? By creating a huge (500T, 1000T,
1500T) loop device on a machine with 2GB of memory I was not able to
reproduce it. Maybe the xfs punch hole implementation is just so damn
fast :). Please let me know.
Thanks!
-Lukas
>
> Author: Jens Axboe <axboe@xxxxxxx>
> Date: Wed Nov 3 15:47:37 2004 -0800
>
> [PATCH] queue congestion threshold hysteresis
>
> We need to open the gap between congestion on/off a little bit, or
> we risk burning many cycles continually putting processes on a wait
> queue only to wake them up again immediately. This was observed with
> CFQ at least, which showed way excessive sys time.
>
> Patch is from Arjan.
>
> Signed-off-by: Jens Axboe <axboe@xxxxxxx>
> Signed-off-by: Linus Torvalds <torvalds@xxxxxxxx>
>
> If you feel this isn't necessary, then I think you at least need to
> justify it with testing. Perhaps Jens can shed some light on the exact
> workload that triggered the pathological behaviour.
>
> Cheers,
> Jeff
>