RE: RE: [PATCH][RFC] sched/cpuacct: Fix cpuacct charge

From: Li,Rongqing
Date: Mon Aug 16 2021 - 23:55:20 EST


> > diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c index
> > 893eece..48b117e 100644
> > --- a/kernel/sched/cpuacct.c
> > +++ b/kernel/sched/cpuacct.c
> > @@ -340,7 +340,12 @@ void cpuacct_charge(struct task_struct *tsk, u64
> > cputime) {
> > struct cpuacct *ca;
> > int index = CPUACCT_STAT_SYSTEM;
> > - struct pt_regs *regs = get_irq_regs() ? : task_pt_regs(tsk);
> > + struct pt_regs *regs;
> > +
> > + if (task_cpu(tsk) == raw_smp_processor_id())
> > + regs = get_irq_regs() ? : task_pt_regs(tsk);
> > + else
> > + regs = task_pt_regs(tsk);
> >
> > if (regs && user_mode(regs))
> > index = CPUACCT_STAT_USER;
>
> It still suffers from task_pt_regs().
>
> Why not make cpuacct use cgroup2's approach? Remember only delta_exec
> here, then on reading cpuacct.usage_*, use cputime_adjust() to scale the
> user/sys from cpuacct_account_field().
>

I think your suggestion is reasonable. Could you send a patch?
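
To make sure I read the idea correctly, here is a rough, untested sketch (not a
proposed patch) of the read side. It assumes ca->cpuusage is collapsed back to a
plain u64 __percpu that cpuacct_charge() only feeds with delta_exec, and it adds
a hypothetical per-CPU prev_cputime field to struct cpuacct to hold
cputime_adjust()'s state; the raw user/sys values come from what
cpuacct_account_field() already accumulates in ca->cpustat:

/*
 * Sketch only: derive usage_user/usage_sys on read, cgroup2-style.
 * Assumes ca->cpuusage is a u64 __percpu holding the summed delta_exec
 * and ca->prev_cputime is a new struct prev_cputime __percpu field.
 */
static void cpuacct_cputime_read(struct cpuacct *ca, int cpu,
				 u64 *user, u64 *sys)
{
	u64 *cpustat = per_cpu_ptr(ca->cpustat, cpu)->cpustat;
	struct task_cputime cputime = {
		/* raw tick-based values from cpuacct_account_field() */
		.utime = cpustat[CPUTIME_USER] + cpustat[CPUTIME_NICE],
		.stime = cpustat[CPUTIME_SYSTEM] + cpustat[CPUTIME_IRQ] +
			 cpustat[CPUTIME_SOFTIRQ],
		/* precise total delta_exec accumulated by cpuacct_charge() */
		.sum_exec_runtime = *per_cpu_ptr(ca->cpuusage, cpu),
	};

	/* scale utime/stime so that *user + *sys == sum_exec_runtime */
	cputime_adjust(&cputime, per_cpu_ptr(ca->prev_cputime, cpu),
		       user, sys);
}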


> It's arguably more than just a fix for cgroup1, but there have been a few
> complaints about this function lately.
>
> > rcu_read_lock();
> >
> > for (ca = task_ca(tsk); ca; ca = parent_ca(ca))
> > __this_cpu_add(ca->cpuusage->usages[index], cputime);
> >
> > rcu_read_unlock();
>
> By the way, I think the __this_cpu_add() can be wrong in cases like you originally
> describe. Seems like a bug in 73e6aafd9ea8 ("sched/cpuacct:
> Simplify the cpuacct code").

We found this issue too.
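
For reference, below is an untested sketch of what a fix for the
__this_cpu_add() problem might look like: index the per-CPU counter by
task_cpu(tsk) instead of the local CPU, relying on the caller (update_curr(),
via cgroup_account_cputime()) holding the task's rq lock so the remote add does
not race. The user/system split issue discussed above is left untouched here:

void cpuacct_charge(struct task_struct *tsk, u64 cputime)
{
	unsigned int cpu = task_cpu(tsk);
	int index = CPUACCT_STAT_SYSTEM;
	struct pt_regs *regs;
	struct cpuacct *ca;

	/* unchanged, still has the task_pt_regs() problem */
	regs = get_irq_regs() ? : task_pt_regs(tsk);
	if (regs && user_mode(regs))
		index = CPUACCT_STAT_USER;

	rcu_read_lock();

	/* charge the CPU the task runs on, not the local CPU */
	for (ca = task_ca(tsk); ca; ca = parent_ca(ca))
		per_cpu_ptr(ca->cpuusage, cpu)->usages[index] += cputime;

	rcu_read_unlock();
}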

-Li