Re: [PATCH] net/mlx4_en: ensure rx_desc updating reaches HW before prod db updating
From: jianchao.wang
Date: Tue Jan 01 2019 - 20:43:16 EST
On 12/31/18 12:27 AM, Tariq Toukan wrote:
>
>
> On 1/27/2018 2:41 PM, jianchao.wang wrote:
>> Hi Tariq
>>
>> Thanks for your kind response.
>> That's really appreciated.
>>
>> On 01/25/2018 05:54 PM, Tariq Toukan wrote:
>>>
>>>
>>> On 25/01/2018 8:25 AM, jianchao.wang wrote:
>>>> Hi Eric
>>>>
>>>> Thanks for your kind response and suggestion.
>>>> That's really appreciated.
>>>>
>>>> Jianchao
>>>>
>>>> On 01/25/2018 11:55 AM, Eric Dumazet wrote:
>>>>> On Thu, 2018-01-25 at 11:27 +0800, jianchao.wang wrote:
>>>>>> Hi Tariq
>>>>>>
>>>>>> On 01/22/2018 10:12 AM, jianchao.wang wrote:
>>>>>>>>> On 19/01/2018 5:49 PM, Eric Dumazet wrote:
>>>>>>>>>> On Fri, 2018-01-19 at 23:16 +0800, jianchao.wang wrote:
>>>>>>>>>>> Hi Tariq
>>>>>>>>>>>
>>>>>>>>>>> Unfortunately, the crash was reproduced again after applying the patch.
>>>>>>>>
>>>>>>>> Memory barriers vary across architectures; can you please share more details about the arch and the repro steps?
>>>>>>> The hardware is HP ProLiant DL380 Gen9/ProLiant DL380 Gen9, BIOS P89 12/27/2015
>>>>>>> Xen is installed, and the crash occurred in Dom0.
>>>>>>> Regarding the repro steps, it is a customer test that does heavy disk I/O over NFS storage, without any guest running.
>>>>>>>
>>>>>>
>>>>>> What is the final suggestion on this?
>>>>>> If we use wmb() there, will performance be degraded?
>>>
>>> I want to evaluate this effect.
>>> I agree with Eric; the expected impact is limited, especially after batching the allocations.
>>>>>
>>>>> Since https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/commit/?id=dad42c3038a59d27fced28ee4ec1d4a891b28155
>>>>>
>>>>> we batch allocations, so mlx4_en_refill_rx_buffers() is not called that often.
>>>>>
>>>>> I doubt the additional wmb() will have serious impact there.
>>>>>
>>>
>>> I will test the effect (at the beginning of next week).
>>> I'll update so we can make a more confident decision.
>>>
>> I have also sent the patches with the wmb() and batched allocations to the customer so they can check whether performance is impacted.
>> I will update here as soon as I get their feedback.
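>>
>> For reference, the path being exercised looks roughly like this (a simplified
>> sketch with hypothetical helper names, not the exact mlx4 code); the wmb()
>> cost is paid once per refill batch rather than once per descriptor:
>>
>> 	static void refill_batch_and_ring_db(struct mlx4_en_rx_ring *ring, int missing)
>> 	{
>> 		/* fill a batch of rx descriptors first */
>> 		while (missing--)
>> 			write_rx_desc(ring, ring->prod++ & ring->size_mask);
>>
>> 		/* one barrier per batch: descriptor writes must be visible
>> 		 * before the producer doorbell update
>> 		 */
>> 		wmb();
>> 		*ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff);
>> 	}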
>>
>>> Thanks,
>>> Tariq
>>>
>
> Hi Jianchao,
>
> I am interested in pushing this bug fix.
> Do you want me to submit it, or will you do it yourself?
> Can you elaborate on the arch used in the repro?
>
> This is the patch I suggest:
>
> --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> @@ -161,6 +161,8 @@ static bool mlx4_en_is_ring_empty(const struct mlx4_en_rx_ring *ring)
>
> static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring)
> {
> + /* ensure rx_desc updating reaches HW before prod db updating */
> + wmb();
> *ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff);
> }
>
Hi Tariq
Happy new year!
The customer provided confusing test results.
This patch did not fix their issue.
We eventually found the upstream fix
5d70bd5c98d0e655bde2aae2b5251bdd44df5e71
("net/mlx4_en: fix potential use-after-free with dma_unmap_page"),
which resolved the issue in October 2018. It has been a long road.
Please go ahead with this patch.
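
One open question from our side, as an assumption rather than something we
verified on this hardware: since both the rx descriptors and the doorbell
record appear to live in coherent DMA memory, would a dma_wmb() be sufficient
here? It is cheaper than a full wmb() on some architectures, e.g.:

	/* hypothetical lighter-weight variant, assuming the rx descriptors and
	 * the doorbell record are both in coherent DMA memory
	 */
	dma_wmb();
	*ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff);
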
Thanks
Jianchao