Re: LTP hugemmap05 test case failure on arm64 with linux-next (next-20190613)

From: Qian Cai
Date: Mon Jun 24 2019 - 08:58:55 EST


On Mon, 2019-06-24 at 10:35 +0100, Will Deacon wrote:
> Hi Qian Cai,
>
> On Sun, Jun 16, 2019 at 09:41:09PM -0400, Qian Cai wrote:
> > > On Jun 16, 2019, at 9:32 PM, Anshuman Khandual <anshuman.khandual@xxxxxxx> wrote:
> > > On 06/14/2019 05:45 PM, Qian Cai wrote:
> > > > On Fri, 2019-06-14 at 11:20 +0100, Will Deacon wrote:
> > > > > On Thu, Jun 13, 2019 at 05:34:01PM -0400, Qian Cai wrote:
> > > > > > LTP hugemmap05 test case [1] could not exit properly and then
> > > > > > degraded system performance on arm64 with linux-next
> > > > > > (next-20190613). The bisection so far indicates,
> > > > > >
> > > > > > BAD:  30bafbc357f1 Merge remote-tracking branch 'arm64/for-next/core'
> > > > > > GOOD: 0c3d124a3043 Merge remote-tracking branch 'arm64-fixes/for-next/fixes'
> > > > >
> > > > > Did you finish the bisection in the end? Also, what config are you
> > > > > using (you usually have something fairly esoteric ;)?
> > > >
> > > > No, it is still running.
> > > >
> > > > https://raw.githubusercontent.com/cailca/linux-mm/master/arm64.config
> > > >
> > >
> > > Were you able to bisect the problem till a particular commit ?
> >
> > Not yet. It turned out the test case needs to run a few times (usually
> > within 5) to reproduce, so the previous bisection was totally wrong, as
> > it assumed the bad commit would fail every time. Once reproduced, the
> > test case becomes unkillable, stuck in the D state.
> >
> > I am still in the middle of running a new round of bisection. The current
> > progress is,
> >
> > 35c99ffa20ed GOOD (survived 20 times)
> > def0fdae813d BAD
>
> Just wondering if you got anywhere with this? We've failed to reproduce the
> problem locally.

Unfortunately, I have not had a chance to dig into this further yet. The
progress so far is,

The issue has been there for a long time, going back to 4.20 and probably
earlier. It does not fail every time. The script below can usually reproduce it
within 100 tries.

i=0; while :; do ./hugemmap05 -m -s; echo $((i++)); sleep 5; done

This can be reproduced in an error path, i.e., shmget() in the test case fails
every time before the hung task warnings are triggered.
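
Decoding the failure below: 0xb80 is SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W,
and 103079215104 bytes is 96 GiB, so the failing call boils down to the sketch
here (the key and size are illustrative; the real test computes them at run
time):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	/* 0xb80 == SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W */
	int id = shmget(IPC_PRIVATE, 96UL << 30,
			SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W);

	if (id < 0)
		printf("shmget: %s\n", strerror(errno)); /* ENOMEM here */
	else
		shmctl(id, IPC_RMID, NULL);
	return 0;
}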

# ./hugemmap05 -s -m
tst_test.c:1112: INFO: Timeout per run is 0h 05m 00s
hugemmap05.c:235: INFO: original nr_hugepages is 0
hugemmap05.c:248: INFO: original nr_overcommit_hugepages is 0
tst_safe_sysv_ipc.c:111: BROK: hugemmap05.c:97: shmget(218366029, 103079215104, b80) failed: ENOMEM
hugemmap05.c:192: INFO: restore nr_hugepages to 0.
hugemmap05.c:201: INFO: restore nr_overcommit_hugepages to 0.

Summary:
passed   0
failed   0
skipped  0
warnings 0

My understanding is that the hung tasks are triggered in this path,

ipcget
ipcget_public
ops->getnew
newseg
hugetlb_file_setup <- return ENOMEM
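
For reference, ops->getnew() (newseg() here) runs with ids->rwsem held for
write, so once one task stalls on this path, every other shmget() caller queues
up behind the rwsem, which matches the held-lock dump below. A condensed sketch
of that locking from ipc/util.c around v5.2, with the existing-key checks and
error handling trimmed:

static int ipcget_public(struct ipc_namespace *ns, struct ipc_ids *ids,
			 const struct ipc_ops *ops, struct ipc_params *params)
{
	struct kern_ipc_perm *ipcp;
	int err;

	/* taken as a writer: this is where the tasks below are stuck */
	down_write(&ids->rwsem);
	ipcp = ipc_findkey(ids, params->key);
	if (ipcp == NULL) {
		/* key not yet used: ops->getnew is newseg() for shmget() */
		err = ops->getnew(ns, params);
	} else {
		/* ... existing-key checks trimmed ... */
	}
	up_write(&ids->rwsem);
	return err;
}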

[ 1521.471216][ T1309] INFO: task hugemmap05:4718 blocked for more than 860 seconds.
[ 1521.478731][ T1309]       Tainted: G        W         5.2.0-rc4+ #8
[ 1521.485023][ T1309] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1521.493568][ T1309] hugemmap05      D27168  4718      1 0x00000001
[ 1521.499815][ T1309] Call trace:
[ 1521.502985][ T1309]  __switch_to+0x2e0/0x37c
[ 1521.507278][ T1309]  __schedule+0xa0c/0xd9c
[ 1521.511484][ T1309]  schedule+0x60/0x168
[ 1521.515430][ T1309]  __rwsem_down_write_failed_common+0x484/0x7b8
[ 1521.521546][ T1309]  rwsem_down_write_failed+0x20/0x2c
[ 1521.526717][ T1309]  down_write+0xa0/0xa4
[ 1521.530747][ T1309]  ipcget+0x74/0x414
[ 1521.534518][ T1309]  ksys_shmget+0x90/0xc4
[ 1521.538638][ T1309]  __arm64_sys_shmget+0x54/0x88
[ 1521.543366][ T1309]  el0_svc_handler+0x198/0x260
[ 1521.548005][ T1309]  el0_svc+0x8/0xc
[ 1521.551605][ T1309]
[ 1521.551605][ T1309] Showing all locks held in the system:
[ 1521.559349][ T1309] 1 lock held by khungtaskd/1309:
[ 1521.564251][ T1309]  #0: 00000000033dd0e2 (rcu_read_lock){....}, at: rcu_lock_acquire+0x8/0x38
[ 1521.573014][ T1309] 2 locks held by hugemmap05/4694:
[ 1521.578010][ T1309] 1 lock held by hugemmap05/4718:
[ 1521.582904][ T1309]  #0: 00000000c62a3d44 (&ids->rwsem){....}, at: ipcget+0x74/0x414
[ 1521.590707][ T1309] 1 lock held by hugemmap05/4755:
[ 1521.595595][ T1309]  #0: 00000000c62a3d44 (&ids->rwsem){....}, at: ipcget+0x74/0x414
[ 1521.603373][ T1309] 1 lock held by hugemmap05/4781:
[ 1521.608270][ T1309]  #0: 00000000c62a3d44 (&ids->rwsem){....}, at: ipcget+0x74/0x414