Re: [PATCH 5/6] sched_ext: idle: Per-node idle cpumasks
From: Yury Norov
Date: Tue Feb 11 2025 - 09:20:04 EST
On Tue, Feb 11, 2025 at 10:50:46AM +0100, Andrea Righi wrote:
> On Tue, Feb 11, 2025 at 08:41:45AM +0100, Andrea Righi wrote:
> > On Tue, Feb 11, 2025 at 08:32:51AM +0100, Andrea Righi wrote:
> > > On Mon, Feb 10, 2025 at 11:57:42AM -0500, Yury Norov wrote:
> > > ...
> > > > > > +/*
> > > > > > + * Find the best idle CPU in the system, relative to @node.
> > > > > > + */
> > > > > > +s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
> > > > > > +{
> > > > > > + nodemask_t unvisited = NODE_MASK_ALL;
> > > >
> > > > This should be a NODEMASK_ALLOC(). We don't want to eat up too much of the
> > > > stack, right?
> > >
> > > Ok, and if I want to initialize unvisited to all online nodes, is there a
> > > better way than doing:
> > >
> > > nodemask_clear(*unvisited);
> > > nodemask_or(*unvisited, *unvisited, node_states[N_ONLINE]);
> > >
> > > We don't have nodemask_copy() right?
> >
> > Sorry, and with that I mean nodes_clear() / nodes_or() / nodes_copy().
>
> Also, it might be problematic to use NODEMASK_ALLOC() here, since we're
> potentially holding raw spinlocks. Maybe we could use per-cpu nodemask_t,
> but then we need to preempt_disable() the entire loop, since
> scx_pick_idle_cpu() can potentially be called from any context.
>
> Considering that the maximum value of NODES_SHIFT is 10 with CONFIG_MAXSMP,
> nodemask_t should be 128 bytes at most, which doesn't seem too bad... Maybe
> we can accept having it on the stack in this case?
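
On the initialization question: nodes_copy() does exist in
include/linux/nodemask.h (there is no nodemask_copy() spelling), so the
clear + or pair collapses to a single call. A minimal sketch, not taken
from the patch, with a made-up helper name:

	#include <linux/nodemask.h>

	/* Sketch only: seed the walk with the online nodes. */
	static void scx_idle_init_unvisited(nodemask_t *unvisited)
	{
		nodes_copy(*unvisited, node_states[N_ONLINE]);
	}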
If you expect this to be called in strict SMP lock-held or IRQ contexts, you
need to be careful about stack overflow even more. We've got GFP_ATOMIC for
that:

    non sleeping allocation with an expensive fallback so it can access
    some portion of memory reserves. Usually used from interrupt/bottom-half
    context with an expensive slow path fallback.
Check Documentation/core-api/memory-allocation.rst for other options.
You may be interested in __GFP_NORETRY as well.
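
For illustration, a minimal sketch of how the allocation could look if
GFP_ATOMIC is acceptable here; the body is a placeholder, only the
NODEMASK_ALLOC() / NODEMASK_FREE() pattern and the NULL check are the
point. Note that on configs with NODES_SHIFT <= 8, NODEMASK_ALLOC()
degrades to an on-stack variable anyway, so the NULL check only matters
on the kmalloc() path.

	#include <linux/nodemask.h>
	#include <linux/gfp.h>

	s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
	{
		NODEMASK_ALLOC(nodemask_t, unvisited, GFP_ATOMIC);
		s32 cpu = -EBUSY;

		if (!unvisited)
			return -EBUSY;	/* placeholder error handling */

		nodes_copy(*unvisited, node_states[N_ONLINE]);

		/* ... walk the nodes in @unvisited, closest to @node first ... */

		NODEMASK_FREE(unvisited);
		return cpu;
	}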
Thanks,
Yury