Re: [PATCH 3/5] kernel/watchdog: adapt the watchdog_hld interface for async model
From: Petr Mladek
Date: Mon Sep 20 2021 - 04:20:53 EST
On Fri 2021-09-17 23:41:31, Pingfan Liu wrote:
> On Thu, Sep 16, 2021 at 10:36:10AM +0200, Petr Mladek wrote:
> > On Thu 2021-09-16 10:29:05, Petr Mladek wrote:
> > > On Wed 2021-09-15 11:51:01, Pingfan Liu wrote:
> > > > When lockup_detector_init() calls watchdog_nmi_probe(), the PMU may not be
> > > > ready yet. E.g. on arm64, the PMU is not ready until
> > > > device_initcall(armv8_pmu_driver_init), and it is deeply integrated
> > > > with the driver model and cpuhp. Hence it is hard to push this
> > > > initialization before smp_init().
> > > >
> > > > But it is easy to take the opposite approach and let watchdog_hld gain
> > > > the PMU capability asynchronously.
> > >
> > > This is another cryptic description. I probably got it only after
> > > looking at the 5th patch (I was not Cc'ed) :-(
> > >
> > > > The async model is achieved by introducing an extra parameter,
> > > > notifier, to watchdog_nmi_probe().
> > >
> > > I would say that the code is horrible and looks too complex.
> > >
> > > What about simply calling watchdog_nmi_probe() and
> > > lockup_detector_setup() once again when watchdog_nmi_probe()
> > > failed in lockup_detector_init()?
> > >
> > > Or do not call lockup_detector_init() at all in
> > > kernel_init_freeable() when the PMU is not ready yet.
> >
> > BTW: It is overkill to create your own kthread just to run some
> > code once. And you implemented it the wrong way. The kthread
>
> I had thought about queue_work_on() in watchdog_nmi_enable(). But since
> that work would block the worker kthread for this CPU, another worker
> kthread would eventually have to be created for the other work items.
This is not a problem. Workqueues use a pool of workers that are
already created; when one worker gets blocked, another worker from the
pool can handle the remaining work items.
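For illustration only, here is a toy out-of-tree module sketch (made-up
names, not part of this patch set) showing that behavior: the sleeping
work item ties up one worker, and the pool still runs the second item:

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

static void slow_fn(struct work_struct *work)
{
        pr_info("slow work: sleeping\n");
        msleep(5000);           /* blocks this worker only, not the pool */
        pr_info("slow work: done\n");
}

static void fast_fn(struct work_struct *work)
{
        pr_info("fast work: runs while the slow item sleeps\n");
}

static DECLARE_WORK(slow_work, slow_fn);
static DECLARE_WORK(fast_work, fast_fn);

static int __init wq_pool_demo_init(void)
{
        /* Queue both items on the same CPU's pool of system_wq. */
        queue_work_on(0, system_wq, &slow_work);
        queue_work_on(0, system_wq, &fast_work);
        return 0;
}

static void __exit wq_pool_demo_exit(void)
{
        flush_work(&slow_work);
        flush_work(&fast_work);
}

module_init(wq_pool_demo_init);
module_exit(wq_pool_demo_exit);
MODULE_LICENSE("GPL");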
> But now, I think queue_work_on() may be neater.
>
> > must wait in a loop until someone else stops it and reads
> > the exit code.
> >
> Is this behavior mandatory? This kthread can decide the exit
> condition by itself.
I am pretty sure it is. Unfortunately, I can't find it in the documentation.
My view is the following. Each process has a task_struct. The
scheduler needs the task_struct so that it can switch processes, and
it must still exist when the process exits. The scheduler puts the
exiting task into the TASK_DEAD state, and another process has to read
the exit code and destroy the task_struct.
See do_exit() in kernel/exit.c; it ends with do_task_dead(), which is
the point where the process goes into the TASK_DEAD state.
For a good example, see lib/test_vmalloc.c. The kthread waits
until someone asks it to stop:
static int test_func(void *private)
{
        [...]
        /*
         * Wait for the kthread_stop() call.
         */
        while (!kthread_should_stop())
                msleep(10);

        return 0;
}
The kthreads are started and stopped in:
static void do_concurrent_test(void)
{
        [...]
        for (i = 0; i < nr_threads; i++) {
                [...]
                t->task = kthread_run(test_func, t, "vmalloc_test/%d", i);
                [...]
        }

        /*
         * Sleep quiet until all workers are done with 1 second
         * interval. Since the test can take a lot of time we
         * can run into a stack trace of the hung task. That is
         * why we go with completion_timeout and HZ value.
         */
        do {
                ret = wait_for_completion_timeout(&test_all_done_comp, HZ);
        } while (!ret);

        [...]
        for (i = 0; i < nr_threads; i++) {
                [...]
                if (!IS_ERR(t->task))
                        kthread_stop(t->task);
                [...]
        }
}
You do not have to solve this if you use the system workqueue
(system_wq).
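For illustration, a rough sketch of that direction (untested, glossing
over the __init annotations and serialization, and assuming it lives in
kernel/watchdog.c; the helper and work item names are made up here):

static void lockup_detector_delayed_probe(struct work_struct *work)
{
        /* The PMU driver is up by now; try the hardlockup detector again. */
        if (!watchdog_nmi_probe())
                lockup_detector_setup();
}

static DECLARE_WORK(lockup_detector_probe_work, lockup_detector_delayed_probe);

/* Called once the PMU becomes available, e.g. from a later initcall. */
void lockup_detector_queue_probe(void)
{
        queue_work(system_wq, &lockup_detector_probe_work);
}

With a work item there is no kthread to stop and no exit code to
collect; if needed, flush_work() is enough to wait for it.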
Best Regards,
Petr