Re: [PATCH] drm/nouveau/dmem: missing mutex_lock in error path

From: John Hubbard
Date: Fri Jun 14 2019 - 13:53:06 EST


On 6/14/19 10:39 AM, Ralph Campbell wrote:
> On 6/13/19 5:49 PM, John Hubbard wrote:
>> On 6/13/19 5:11 PM, Ralph Campbell wrote:
...
>> Actually, the pre-existing code is a little concerning. Your change preserves
>> the behavior, but doing a "return 0" (whether via the above break, or your
>> change) from this partially allocated state seems questionable: it reports
>> success when only part of what was requested has been allocated, and it
>> doesn't fill in the rest of the pages array either.
>>
>>> +					return 0;
>>>  				return ret;
>>>  			}
>>> +			mutex_lock(&drm->dmem->mutex);
>>>  			continue;
>>>  		}
>>>
>>
>> The above comment is about pre-existing potential problems, but your patch itself
>> looks correct, so:
>>
>> Reviewed-by: John Hubbard <jhubbard@xxxxxxxxxx>
>>
>>
>> thanks,
>>
> The crash was the NULL pointer bug in Christoph's patch #10.
> I sent a separate reply for that.
>
> Below is the console output I got; I then made the changes just from
> code inspection. Do you think I should include it in the change log?

Yes, I think it's good to have it in there. If you look at the git log,
you'll see that it's common to include the symptoms, including the
backtrace. It helps people see whether they are hitting the same problem,
for one thing.

>
> As for the "return 0": if you follow the call chain,
> nouveau_dmem_pages_alloc() is only ever called for one page, so this
> currently "works", but I agree it is a bit of a time bomb. There are a
> number of other bugs that I can see that need fixing, but I think those
> should be separate patches.
>

Yes, of course. I called it out for the benefit of the email list, not to
suggest that your patch needs any changes.
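
For the archives, here is roughly the shape of the loop in question. This
is a paraphrase from memory, not the exact upstream code; in particular,
the "no free page available" test below is a stand-in for whatever the
real check is:

	mutex_lock(&drm->dmem->mutex);
	for (c = 0; c < npages;) {
		if (no_free_page_available) {	/* stand-in for the real check */
			mutex_unlock(&drm->dmem->mutex);
			ret = nouveau_dmem_chunk_alloc(drm);
			if (ret) {
				if (c)
					return 0;	/* partial result reported as success */
				return ret;
			}
			/* Ralph's fix: re-take the lock before looping around */
			mutex_lock(&drm->dmem->mutex);
			continue;
		}
		/* ...otherwise, hand a free page out into pages[c] and c++... */
	}
	mutex_unlock(&drm->dmem->mutex);
	return 0;

The "return 0" with c < npages is the time bomb: the caller sees success,
but only the first c entries of the pages array are valid. As you say,
it's harmless today with npages == 1, but worth a follow-up patch.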

thanks,
--
John Hubbard
NVIDIA