Re: [PATCH v5 00/10] xen-block: multi hardware-queues/rings support
From: Konrad Rzeszutek Wilk
Date: Thu Nov 26 2015 - 11:20:52 EST
On November 26, 2015 2:09:02 AM EST, Bob Liu <bob.liu@xxxxxxxxxx> wrote:
>
>On 11/26/2015 10:57 AM, Konrad Rzeszutek Wilk wrote:
>> On Thu, Nov 26, 2015 at 10:28:10AM +0800, Bob Liu wrote:
>>>
>>> On 11/26/2015 06:12 AM, Konrad Rzeszutek Wilk wrote:
>>>>> On Wed, Nov 25, 2015 at 03:56:03PM -0500, Konrad Rzeszutek Wilk wrote:
>>>>>> On Wed, Nov 25, 2015 at 02:25:07PM -0500, Konrad Rzeszutek Wilk wrote:
>>>>>>> xen/blkback: separate ring information out of struct xen_blkif
>>>>>>> xen/blkback: pseudo support for multi hardware queues/rings
>>>>>>> xen/blkback: get the number of hardware queues/rings from blkfront
>>>>>>> xen/blkback: make pool of persistent grants and free pages per-queue
>>>>>>
>>>>>> OK, got to those as well. I have put them in 'devel/for-jens-4.5' and am
>>>>>> going to test them overnight before pushing them out.
>>>>>>
>>>>>> I see two bugs in the code that we MUST deal with:
>>>>>>
>>>>>> - print_stats() is going to show zero values.
>>>>>> - the sysfs code (VBD_SHOW) isn't converted over to fetch data from
>>>>>> all the rings.
>>>>>
>>>>> - kthread_run can't handle the two "name, i" arguments. I see:
>>>>>
>>>>> root 5101 2 0 20:47 ? 00:00:00 [blkback.3.xvda-]
>>>>> root 5102 2 0 20:47 ? 00:00:00 [blkback.3.xvda-]
>>>>
>>>> And doing save/restore:
>>>>
>>>> xl save <id> /tmp/A;
>>>> xl restore /tmp/A;
>>>>
>>>> ends up losing the proper state and not getting the ring setup back.
>>>> I see this in the backend:
>>>>
>>>> [ 2719.448600] vbd vbd-22-51712: -1 guest requested 0 queues, exceeding
>>>> the maximum of 3.
>>>>
>>>> And XenStore agrees:
>>>> tool = ""
>>>>  xenstored = ""
>>>> local = ""
>>>>  domain = ""
>>>>   0 = ""
>>>>    domid = "0"
>>>>    name = "Domain-0"
>>>>    device-model = ""
>>>>     0 = ""
>>>>      state = "running"
>>>>    error = ""
>>>>     backend = ""
>>>>      vbd = ""
>>>>       2 = ""
>>>>        51712 = ""
>>>>         error = "-1 guest requested 0 queues, exceeding the maximum of 3."
>>>>
>>>> .. which also leads to a memory leak as xen_blkbk_remove never gets
>>>> called.
>>>
>>> I think that was already fixed by your patch:
>>> [PATCH RFC 2/2] xen/blkback: Free resources if connect_ring failed.
>>
>> Nope. I get that with or without the patch.
>>
>
>Attached patch should fix this issue.
Yup!
Thanks!