Re: [PATCH v2 1/1] vhost scsi: alloc vhost_scsi with kvzalloc() to avoid delay
From: Joe Jin
Date: Mon Feb 01 2021 - 11:05:07 EST
Can anyone help review this patch and give a Reviewed-by for it, please?
Thanks,
Joe
On 1/24/21 7:12 PM, Jason Wang wrote:
>
> On 2021/1/23 下午4:08, Dongli Zhang wrote:
>> The allocation of 'struct vhost_scsi' (~2.3 MB) is an order-10 request.
>> When high-order pages are scarce, kzalloc() may retry memory compaction
>> multiple times and introduce a long delay. As a result, there is noticeable
>> latency when creating a VM (with vhost-scsi) or when hot-adding
>> vhost-scsi-based storage.
>>
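(For reference, assuming 4 KiB pages: 2.3 MB is roughly 590 pages, and a
kmalloc() of that size is rounded up to the next power-of-two block,
1024 pages = 4 MiB, hence an order-10 request from the page allocator.)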
>> The prior commit 595cb754983d ("vhost/scsi: use vmalloc for order-10
>> allocation") prefers to fall back to vzalloc() only when really needed, so
>> the initial kzalloc() may still retry compaction before failing. This patch
>> allocates with kvzalloc() instead, which implicitly sets __GFP_NORETRY to
>> avoid retrying memory compaction multiple times.
>>
>> __GFP_NORETRY is set implicitly when the size to allocate is larger than
>> PAGE_SIZE and __GFP_RETRY_MAYFAIL is not explicitly set.
>>
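For reference, the behaviour described above comes from kvmalloc_node() in
mm/util.c. A simplified sketch of that logic (paraphrased from the kernel
around this time, not a verbatim copy of the source) looks roughly like this:

	void *kvmalloc_node(size_t size, gfp_t flags, int node)
	{
		gfp_t kmalloc_flags = flags;
		void *ret;

		/*
		 * Try a physically contiguous allocation first, but make
		 * large requests non-disruptive: suppress failure warnings
		 * and, unless the caller explicitly passed
		 * __GFP_RETRY_MAYFAIL, skip the costly compaction/reclaim
		 * retries because a vmalloc() fallback is available.
		 */
		if (size > PAGE_SIZE) {
			kmalloc_flags |= __GFP_NOWARN;
			if (!(kmalloc_flags & __GFP_RETRY_MAYFAIL))
				kmalloc_flags |= __GFP_NORETRY;
		}

		ret = kmalloc_node(size, kmalloc_flags, node);

		/* Fall back to vmalloc only for multi-page requests. */
		if (ret || size <= PAGE_SIZE)
			return ret;

		return __vmalloc_node(size, 1, flags, node,
				      __builtin_return_address(0));
	}

So kvzalloc(sizeof(*vs), GFP_KERNEL) first attempts the order-10 kmalloc()
with __GFP_NORETRY | __GFP_NOWARN and, if that fails, immediately falls back
to a vmalloc()-based allocation instead of retrying compaction.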
>> Cc: Aruna Ramakrishna <aruna.ramakrishna@xxxxxxxxxx>
>> Cc: Joe Jin <joe.jin@xxxxxxxxxx>
>> Signed-off-by: Dongli Zhang <dongli.zhang@xxxxxxxxxx>
>> ---
>> Changed since v1:
>> - To combine kzalloc() and vzalloc() as kvzalloc()
>> (suggested by Jason Wang)
>>
>> drivers/vhost/scsi.c | 9 +++------
>> 1 file changed, 3 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
>> index 4ce9f00ae10e..5de21ad4bd05 100644
>> --- a/drivers/vhost/scsi.c
>> +++ b/drivers/vhost/scsi.c
>> @@ -1814,12 +1814,9 @@ static int vhost_scsi_open(struct inode *inode, struct file *f)
>>  	struct vhost_virtqueue **vqs;
>>  	int r = -ENOMEM, i;
>>  
>> -	vs = kzalloc(sizeof(*vs), GFP_KERNEL | __GFP_NOWARN | __GFP_RETRY_MAYFAIL);
>> -	if (!vs) {
>> -		vs = vzalloc(sizeof(*vs));
>> -		if (!vs)
>> -			goto err_vs;
>> -	}
>> +	vs = kvzalloc(sizeof(*vs), GFP_KERNEL);
>> +	if (!vs)
>> +		goto err_vs;
>>  
>>  	vqs = kmalloc_array(VHOST_SCSI_MAX_VQ, sizeof(*vqs), GFP_KERNEL);
>>  	if (!vqs)
>
>
> Acked-by: Jason Wang <jasowang@xxxxxxxxxx>
>
>
>