Re: [PATCH 04/19] sched: Prepare for Core-wide rq->lock
From: Don Hiatt
Date: Wed Apr 28 2021 - 12:04:54 EST
On Tue, Apr 27, 2021 at 4:35 PM Josh Don <joshdon@xxxxxxxxxx> wrote:
>
> On Tue, Apr 27, 2021 at 10:10 AM Don Hiatt <dhiatt@xxxxxxxxxxxxxxxx> wrote:
> > Hi Josh and Peter,
> >
> > I've been running into soft lockups and hard lockups when running a script
> > that just cycles setting the cookie of a group of processes over and over again.
> >
> > Unfortunately, the only way I can reproduce this is by setting the
> > cookies on qemu. I've tried sysbench and stress-ng, but those seem
> > to work just fine.
> >
> > I'm running Peter's branch and even tried the suggested changes here,
> > but I still see the same behavior. I enabled panic on hard lockup, and
> > below is a snippet of the log.
> >
> > Is there anything you'd like me to try, or any debugging you'd like
> > me to do? I'd certainly like to get to the bottom of this.
>
> Hi Don,
>
> I tried to repro using qemu, but did not generate a lockup. Could you
> provide more details on what your script is doing (or better yet,
> share the script directly)? I would have expected you to potentially
> hit a lockup if you were toggling sched_core between enabled and
> disabled, but it sounds like you are just recreating the cookie for a
> process group over and over?
>
> Best,
> Josh
Hi Josh,
Sorry if I wasn't clear, but I'm running on bare metal (Peter's
5.12-rc8 repo) and have two qemu-system-x86_64 VMs (one with 8 vCPUs,
the other with 24, though the count doesn't really matter). I then run
a script [1] that repeatedly sets the cookie for all the threads of a
given qemu-system-x86_64 pid.
I also just hit a lockup when I set the cookies for those two VMs,
pinned them both to the same processor pair, and ran sysbench within
each VM to generate some load [2]. This was without cycling the
cookies; they were set just once.
Thanks!
Don
----[1]---
This is a little test harness (I can provide the source, but it is
based on your kselftests):
dhiatt@s2r5node34:~/do_coresched$ ./do_coresched -h
Usage for ./do_coresched
-c Create sched cookie for <pid>
-f Share sched cookie from <pid>
-g Get sched cookie from <pid>
-t Share sched cookie to <pid>
-z Clear sched cookie from <pid>
-p PID
Create: _prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, pid, PIDTYPE_PID, 0)
Share: _prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO, pid, PIDTYPE_PID, 0)
Get: prctl(PR_SCHED_CORE, PR_SCHED_CORE_GET, pid, PIDTYPE_PID, (unsigned long)&cookie);
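
For reference, here is a minimal C sketch of what those three calls
boil down to (a sketch only: the constant values below are assumptions
on my part that match what eventually landed in mainline
<linux/prctl.h>, defined locally in case your headers don't have them
yet, and _prctl above is presumably just a thin wrapper around the raw
prctl(2) used here):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/prctl.h>

/* Core-scheduling prctl interface; values per mainline <linux/prctl.h>,
 * defined here in case the installed headers predate this series. */
#ifndef PR_SCHED_CORE
#define PR_SCHED_CORE           62
#define PR_SCHED_CORE_GET        0
#define PR_SCHED_CORE_CREATE     1
#define PR_SCHED_CORE_SHARE_TO   2
#endif

#define PIDTYPE_PID 0   /* a single task, from the kernel's enum pid_type */

int main(int argc, char **argv)
{
	unsigned long cookie = 0;
	pid_t pid;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}
	pid = atoi(argv[1]);

	/* Create a new cookie on the calling task (like do_coresched -c). */
	if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0, PIDTYPE_PID, 0))
		perror("create");

	/* Push our cookie onto the target pid (like do_coresched -t). */
	if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO, pid, PIDTYPE_PID, 0))
		perror("share-to");

	/* Read back the target's cookie (like do_coresched -g). */
	if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_GET, pid, PIDTYPE_PID,
		  (unsigned long)&cookie))
		perror("get");

	printf("pid %d cookie: 0x%lx\n", pid, cookie);
	return 0;
}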
--
dhiatt@s2r5node34:~/do_coresched$ cat set.sh
#!/bin/bash
# usage: set.sh pid
target=$1
echo "target pid: $target"
pids=`ps -eL | grep $target | awk '{print $2}' | sort -g`
echo "Setting cookies for $target"
./do_coresched -c -p $BASHPID
for i in $pids
do
	./do_coresched -t -p $i
	./do_coresched -g -p $i
done
---
---[2]---
[ 8911.926989] watchdog: BUG: soft lockup - CPU#53 stuck for 22s! [CPU 1/KVM:19539]
[ 8911.935727] NMI watchdog: Watchdog detected hard LOCKUP on cpu 6
[ 8911.935791] NMI watchdog: Watchdog detected hard LOCKUP on cpu 7
[ 8911.935908] NMI watchdog: Watchdog detected hard LOCKUP on cpu 12
[ 8911.935967] NMI watchdog: Watchdog detected hard LOCKUP on cpu 13
[ 8911.936070] NMI watchdog: Watchdog detected hard LOCKUP on cpu 19
[ 8911.936145] NMI watchdog: Watchdog detected hard LOCKUP on cpu 21
[ 8911.936220] NMI watchdog: Watchdog detected hard LOCKUP on cpu 23
[ 8911.936361] NMI watchdog: Watchdog detected hard LOCKUP on cpu 31
[ 8911.936453] NMI watchdog: Watchdog detected hard LOCKUP on cpu 34
[ 8911.936567] NMI watchdog: Watchdog detected hard LOCKUP on cpu 42
[ 8911.936627] NMI watchdog: Watchdog detected hard LOCKUP on cpu 46
[ 8911.936712] NMI watchdog: Watchdog detected hard LOCKUP on cpu 49
[ 8911.936827] NMI watchdog: Watchdog detected hard LOCKUP on cpu 58
[ 8911.936887] NMI watchdog: Watchdog detected hard LOCKUP on cpu 60
[ 8911.936969] NMI watchdog: Watchdog detected hard LOCKUP on cpu 70
[ 8915.926847] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[ 8915.932904] rcu: 2-...!: (7 ticks this GP) idle=38e/1/0x4000000000000000 softirq=181538/181540 fqs=1178
[ 8915.942617] rcu: 14-...!: (2 GPs behind) idle=d0e/1/0x4000000000000000 softirq=44825/44825 fqs=1178
[ 8915.954034] rcu: rcu_sched kthread timer wakeup didn't happen for 12568 jiffies! g462469 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[ 8915.965775] rcu: Possible timer handling issue on cpu=6 timer-softirq=10747
[ 8915.973021] rcu: rcu_sched kthread starved for 12572 jiffies! g462469 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=6
[ 8915.983681] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.
[ 8915.992879] rcu: RCU grace-period kthread stack dump:
[ 8915.998211] rcu: Stack dump where RCU GP kthread last ran:
[ 8939.925995] watchdog: BUG: soft lockup - CPU#53 stuck for 22s! [CPU 1/KVM:19539]
[ 8939.935274] NMI watchdog: Watchdog detected hard LOCKUP on cpu 25
[ 8939.935351] NMI watchdog: Watchdog detected hard LOCKUP on cpu 27
[ 8939.935425] NMI watchdog: Watchdog detected hard LOCKUP on cpu 29
[ 8939.935653] NMI watchdog: Watchdog detected hard LOCKUP on cpu 44
[ 8939.935845] NMI watchdog: Watchdog detected hard LOCKUP on cpu 63
[ 8963.997140] watchdog: BUG: soft lockup - CPU#71 stuck for 22s! [SchedulerRunner:4405]
-----