Re: Regression: memory corruption on Atmel SAMA5D31
From: Tudor.Ambarus
Date: Thu Jun 30 2022 - 05:23:47 EST
On 6/30/22 08:20, Peter Rosin wrote:
>
> Hi!
Hi, Peter!
>
> 2022-06-27 at 18:53, Tudor.Ambarus@xxxxxxxxxxxxx wrote:
>> On 6/27/22 15:26, Tudor.Ambarus@xxxxxxxxxxxxx wrote:
>>>
>>> On 6/21/22 13:46, Peter Rosin wrote:
>>>>
>>>> 2022-06-20 at 16:22, Tudor.Ambarus@xxxxxxxxxxxxx wrote:
>>>>>
>>>>>>
>>>>>> git@xxxxxxxxxx:ambarus/linux-0day.git, branch dma-regression-hdmac-v5.18-rc7-4th-attempt
>>>>>>
>>>>>
>>>>> Hi, Peter,
>>>>>
>>>>> I've just force-pushed to this branch; I had a typo somewhere, and with that fixed I could
>>>>> no longer reproduce the bug. Tested for ~20 minutes. Would you please test the last 3 patches
>>>>> and tell me if you can still reproduce the bug?
>>>>
>>>> Hi!
>>>>
>>>> I rebased your patches onto my current branch, which is v5.18.2 plus a few unrelated
>>>> changes (at least they are unrelated now that the previous workaround of disabling
>>>> nand-dma entirely has been removed).
>>>>
>>>> The unrelated patches are two backports so that drivers recognize new compatibles [1][2],
>>>> which should be completely harmless, plus a couple of proposed fixes from Codrin Ciubotariu
>>>> that happen to fix EEPROM issues with the at91 I2C driver [3].
>>>>
>>>> On that kernel, I can still reproduce the problem, although it seems a bit harder to
>>>> trigger now. With the system otherwise idle, the sha256sum test did not fail in a run of
>>>> 150+ attempts, but if I let the "real" application run while I do the test, I get a failure
>>>> rate of about 10%, see below. The real application burns some CPU (but not all of it) and
>>>> communicates with HW over I2C, native UARTs and two of the four USB-serial ports
>>>> (FTDI, with the latency set to 1 ms as mentioned earlier), so I guess there is more DMA
>>>> pressure or something? There is a 100 Mbps network connection, but it was left "idle"
>>>> during this test.
>>>>
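For anyone trying to reproduce this, a minimal sketch of the kind of corruption-check
loop described above might look like the following in C. The pass count, the page-cache
drop and the file path are assumptions for illustration, not Peter's actual test:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Drop the page cache so every pass really re-reads the NAND
 * (requires root). */
static void drop_caches(void)
{
        FILE *f = fopen("/proc/sys/vm/drop_caches", "w");

        if (f) {
                fputs("3", f);
                fclose(f);
        }
}

/* Read the whole file into a fresh buffer; returns the length or -1. */
static long read_all(const char *path, unsigned char **buf)
{
        FILE *f = fopen(path, "rb");
        long len = -1;

        if (!f)
                return -1;
        if (!fseek(f, 0, SEEK_END) && (len = ftell(f)) >= 0) {
                rewind(f);
                *buf = malloc(len);
                if (!*buf || fread(*buf, 1, len, f) != (size_t)len)
                        len = -1;
        }
        fclose(f);
        return len;
}

int main(int argc, char **argv)
{
        unsigned char *ref, *cur;
        long ref_len;
        int i;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <file-on-nand>\n", argv[0]);
                return 1;
        }
        ref_len = read_all(argv[1], &ref);
        if (ref_len < 0)
                return 1;
        for (i = 0; i < 150; i++) {
                drop_caches();
                if (read_all(argv[1], &cur) != ref_len ||
                    memcmp(ref, cur, ref_len)) {
                        fprintf(stderr, "mismatch on pass %d\n", i);
                        return 1;
                }
                free(cur);
        }
        puts("no corruption observed");
        return 0;
}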
>>>
>>> Thanks, Peter.
>>> I got back to the office; I'm rechecking what could go wrong.
>>>
>>
>> Hi, Peter,
>>
>> Would you please help me with another round of testing? I'm having difficulty
>> reproducing the bug, and maybe you can speed up the process while I replicate
>> your testing setup. I made two more patches on top of the same branch [1].
>> My assumption is that the last problem you saw is that a transfer could be
>> started multiple times. I think these are the last less-invasive changes I'll
>> try; I'll have to rewrite the logic anyway.
>>
>> Thanks!
>>
>> [1] To github.com:ambarus/linux-0day.git
>> cbb2ddca4618..79c7784dbcf2 dma-regression-hdmac-v5.18-rc7-4th-attempt -> dma-regression-hdmac-v5.18-rc7-4th-attempt
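
To make the double-start hypothesis concrete: a one-shot guard on a descriptor
might look roughly like this. The names are illustrative, not the actual
at_hdmac code:

#include <linux/atomic.h>
#include <linux/types.h>

/* Illustrative descriptor with a one-shot "issued" flag. */
struct sketch_desc {
        atomic_t issued;        /* 0 = not started, 1 = already started */
        /* ... real descriptor fields would live here ... */
};

/*
 * Returns true for exactly one caller; any racing second attempt to
 * start the same descriptor sees the old value 1 and backs off.
 */
static bool sketch_try_issue(struct sketch_desc *d)
{
        return atomic_cmpxchg(&d->issued, 0, 1) == 0;
}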
>
> I was out of office, but I managed to get a test running overnight and can
> report that it still fails. This was a longer run of about 500 attempts with a
> failure rate of 5%, compared to the failure rate of 10% last time.

Thanks!

> I tend to think that the observed difference in failure rate may well be
> statistical noise, but who knows? Would it be useful to do a longer run without
> the last two patches, to see if they make a difference?
I pushed another patch where I added a write memory barrier to make sure everything
is in place before starting the transfer. Could you also take the last patch
and re-test, if it's not too complicated? I still can't reproduce the problem on my
side; I'm checking what else I can add to stress-test the DMA.
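
For reference, the pattern being described is roughly the following; the function
name and register offset are made up, not the driver's actual ones:

#include <linux/io.h>

/* Hypothetical channel-enable register offset. */
#define SKETCH_CHER 0x28

static void sketch_start_xfer(void __iomem *base, u32 chan_mask)
{
        /*
         * Make sure the descriptor writes done in normal memory are
         * visible before the MMIO write that enables the channel.
         * writel() already implies a barrier of its own; the explicit
         * wmb() is the belt-and-braces step described above.
         */
        wmb();
        writel(chan_mask, base + SKETCH_CHER);
}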
Thanks!
ta