Re: [RFC][PATCH 2/2] sched: Use fancy new guards

From: Peter Zijlstra
Date: Fri May 26 2023 - 12:42:05 EST


On Fri, May 26, 2023 at 05:25:58PM +0100, Greg KH wrote:
> On Fri, May 26, 2023 at 05:05:51PM +0200, Peter Zijlstra wrote:
> > Convert kernel/sched/core.c to use the fancy new guards to simplify
> > the error paths.
>
> That's slightly crazy...
>
> I like the idea, but is this really correct:
>
>
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> > ---
> > kernel/sched/core.c | 1223 +++++++++++++++++++++++----------------------------
> > kernel/sched/sched.h | 39 +
> > 2 files changed, 595 insertions(+), 667 deletions(-)
> >
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -1097,24 +1097,21 @@ int get_nohz_timer_target(void)
> >
> > hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
> >
> > - rcu_read_lock();
> > - for_each_domain(cpu, sd) {
> > - for_each_cpu_and(i, sched_domain_span(sd), hk_mask) {
> > - if (cpu == i)
> > - continue;
> > + void_scope(rcu) {
> > + for_each_domain(cpu, sd) {
> > + for_each_cpu_and(i, sched_domain_span(sd), hk_mask) {
> > + if (cpu == i)
> > + continue;
> >
> > - if (!idle_cpu(i)) {
> > - cpu = i;
> > - goto unlock;
> > + if (!idle_cpu(i))
> > + return i;
>
> You can call return from within a "scope" and it will clean up properly?

Yep, that's the main feature here.
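
For reference, a minimal userspace sketch of the mechanism (not the actual
macros from the patch; mutex_scope, _scope and _done are names made up for
this sketch): the guarded block is a one-iteration for() loop that declares
a variable with __attribute__((cleanup(...))), so leaving the block by any
route -- including return -- runs the cleanup handler:

#include <pthread.h>

/* Called automatically when the _scope variable goes out of scope. */
static void mutex_scope_unlock(pthread_mutex_t **mp)
{
	pthread_mutex_unlock(*mp);
}

/* Lock on entry; _done makes the loop body run exactly once. */
#define mutex_scope(_m)						\
	for (pthread_mutex_t *_scope				\
		__attribute__((cleanup(mutex_scope_unlock))) =	\
			(pthread_mutex_lock(_m), (_m)),		\
	     *_done = NULL;					\
	     !_done; _done = (void *)1)

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int table[16];

static int find_used_slot(void)
{
	mutex_scope(&lock) {
		for (int i = 0; i < 16; i++) {
			if (table[i])
				return i;	/* unlock still happens */
		}
	}
	return -1;
}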

> I tried to read the cpp "mess" but couldn't figure out how to validate
> this at all. Do you have a set of tests for this somewhere?

I have it in userspace with printf, but yeah, I'll go make a selftest
somewhere.
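
Roughly the shape of that userspace check (a sketch only, not the eventual
selftest; scope_exit and test_scope are made-up names here) -- it just
verifies that the cleanup handler fires on an early return out of the
guarded block:

#include <stdio.h>

static int cleaned;

/* Runs when the _g variable leaves scope, including via return. */
static void scope_exit(int *unused)
{
	cleaned = 1;
	printf("cleanup ran\n");
}

#define test_scope()						\
	for (int _g __attribute__((cleanup(scope_exit))) = 0,	\
	     _done = 0; !_done; _done = 1)

static int early_return_path(void)
{
	test_scope() {
		printf("in scope\n");
		return 42;		/* scope_exit() must run here */
	}
	return 0;
}

int main(void)
{
	int ret = early_return_path();

	printf("%s: ret=%d cleaned=%d\n",
	       (ret == 42 && cleaned) ? "PASS" : "FAIL", ret, cleaned);
	return !(ret == 42 && cleaned);
}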

One advantage of using the scheduler locks as a testbed is that if you get
it wrong it burns *real* fast -- been there, done that, etc.

> Anyway, the naming is whack, but I don't have a better name to propose,
> except you might want to put "scope_" as the prefix rather than the suffix,
> but then that might look odd too, so who knows.

Yeah, naming is certainly crazy, but I figured I should get it all
working before spending too much time on that.

I can certainly do 's/lock_scope/scope_lock/g' on it all.

> But again, the idea is good; it might save us a lot of the "you forgot to
> clean this up on the error path" mess that we keep getting constant churn
> for these days...

That's the goal...