Re: [Xen-devel] [PATCH 0/2] xen/blkback: Aggressively shrink page pools if a memory pressure is detected
From: sjpark
Date: Wed Dec 04 2019 - 07:09:34 EST
On 04.12.19 12:52, Durrant, Paul wrote:
>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of
>> SeongJae Park
>> Sent: 04 December 2019 11:34
>> To: konrad.wilk@xxxxxxxxxx; roger.pau@xxxxxxxxxx; axboe@xxxxxxxxx
>> Cc: sj38.park@xxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxxx; linux-
>> block@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; Park, Seongjae
>> <sjpark@xxxxxxxxxx>
>> Subject: [Xen-devel] [PATCH 0/2] xen/blkback: Aggressively shrink page
>> pools if a memory pressure is detected
>>
>> Each `blkif` has a free pages pool for the grant mapping. The size of
>> the pool starts from zero and is increased on demand while processing
>> I/O requests. When handling of the current I/O requests finishes, or
>> 100 milliseconds have passed since the last I/O request was handled,
>> the pool is checked and shrunk so that it does not exceed the size
>> limit, `max_buffer_pages`.
>>
>> Therefore, guests running `blkfront` can cause memory pressure in the
>> guest running `blkback` by attaching an arbitrarily large number of
>> block devices and inducing I/O on them.
> OOI... How do guests unilaterally cause the attachment of arbitrary numbers of PV devices?
Good point. Many systems impose a limit on the maximum number of devices, so
an 'arbitrarily' large number of devices cannot actually be attached; there is
an upper bound. System administrators might therefore be able to avoid the
memory pressure problem by setting the limit low enough, or by giving more
memory to the guest running 'blkback'.
However, many systems are also tempted to set the limit high enough to satisfy
guests, while giving only minimal memory to the guest running 'blkback' for
cost efficiency.
I believe this patchset can be helpful in such situations.
Anyway, using the term 'arbitrarily' was obviously my fault. I will update the
description in the next version of the patchset.
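
For reference, below is a minimal user-space sketch of the pool behavior
described in the cover letter above: pages are handed out on demand, returned
after use, and the pool is trimmed back to `max_buffer_pages` once request
handling goes idle. The names (`page_pool`, `pool_get_page`, `pool_put_page`,
`shrink_pool`) are hypothetical stand-ins for the logic in
drivers/block/xen-blkback/blkback.c, not the actual kernel code.

/*
 * Minimal user-space model of a per-blkif free pages pool: pages are
 * taken from the pool on demand (allocating when it is empty), returned
 * after use, and the pool is trimmed back down to max_buffer_pages once
 * request handling goes idle (or after the 100ms timeout in the real
 * backend). Illustrative sketch only, not the kernel implementation.
 */
#include <stdio.h>
#include <stdlib.h>

struct page_pool {
    void **pages;          /* stack of free pages */
    int free;              /* number of free pages currently pooled */
    int capacity;          /* allocated slots in 'pages' */
    int max_buffer_pages;  /* shrink target, cf. the module parameter */
};

/* Take a page from the pool, allocating a fresh one if it is empty. */
static void *pool_get_page(struct page_pool *p)
{
    if (p->free > 0)
        return p->pages[--p->free];
    return malloc(4096);   /* stands in for a real page allocation */
}

/* Return a page to the pool so later requests can reuse it. */
static void pool_put_page(struct page_pool *p, void *page)
{
    if (p->free == p->capacity) {
        p->capacity = p->capacity ? p->capacity * 2 : 16;
        p->pages = realloc(p->pages, p->capacity * sizeof(*p->pages));
    }
    p->pages[p->free++] = page;
}

/* Called when request handling is done: free pooled pages until no
 * more than 'limit' remain. */
static void shrink_pool(struct page_pool *p, int limit)
{
    while (p->free > limit)
        free(p->pages[--p->free]);
}

int main(void)
{
    struct page_pool pool = { .max_buffer_pages = 32 };
    void *batch[128];

    /* Simulate a burst of I/O that needs 128 pages at once ... */
    for (int i = 0; i < 128; i++)
        batch[i] = pool_get_page(&pool);
    /* ... and then completes, returning the pages to the pool. */
    for (int i = 0; i < 128; i++)
        pool_put_page(&pool, batch[i]);

    printf("pooled pages before shrink: %d\n", pool.free);
    shrink_pool(&pool, pool.max_buffer_pages);
    printf("pooled pages after shrink:  %d\n", pool.free);

    shrink_pool(&pool, 0);  /* drop everything before exit */
    free(pool.pages);
    return 0;
}

Compiled with gcc, this prints 128 pooled pages before the shrink and 32
afterwards, which is the per-`blkif` effect the real shrinking aims for.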
Thanks,
SeongJae Park
>
> Paul
>