Re: New NUMA scheduler and hotplug CPU

From: Andrew Theurer
Date: Mon Jan 26 2004 - 21:18:00 EST


On Monday 26 January 2004 18:09, Martin J. Bligh wrote:
> > For example, you boot on just the boot cpu, which by default is in the
> > first node on the first runqueue. For all other cpus, whether being
> > "booted" for the first time or hotplugged (maybe now there's really no
> > difference), the hotplug path tells where the cpu should be: in what node
> > and on what runqueue. HT cpus work even better, because you can hotplug
> > siblings, one at a time if you wanted, onto the same runqueue. Or you
> > have cpus sharing a die, same thing, lots of choices here. This removes
> > any per-arch updates to the kernel for things like scheduler topology,
> > and lets that information live somewhere more easily changed, like
> > userspace.
>
> Ummm ... but *none* of that is dictated as policy stuff - it's all just
> the hardware layout of the machine. You cannot "decide" as the sysadmin
> which node a CPU is in, or which HT sibling it has. It's just there ;-)
> The only thing you could possibly dictate is the CPU number you want
> assigned to the new CPU, which frankly, I think is pointless - they're
> arbitrary tags, and always have been.

How many cpus share a runqueue could, IMO, be a policy decision. Some HT cpus
may be better off sharing a runqueue, while others (lots and lots of siblings
in one core) may not. See the sketch below.
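To make that concrete, here is a minimal userspace sketch of what such a
policy could look like. Every name in it (cpu_attach, cpu_rq_map, max_shared)
is hypothetical illustration, not the actual scheduler code: a single policy
knob decides how many siblings may share one runqueue, and the hotplug path
consults it when attaching a cpu.

/*
 * Hypothetical sketch, not real kernel code: whether HT siblings
 * share a runqueue is a policy threshold, not a per-arch decision.
 */
#include <stdio.h>

#define NR_CPUS 8

static int cpu_rq_map[NR_CPUS];	/* cpu -> runqueue id */
static int rq_nr_cpus[NR_CPUS];	/* cpus currently on each runqueue */

/*
 * Policy knob: how many cpus may share one runqueue.
 * 1 disables sharing; 2 pairs HT siblings; higher values
 * would suit cores with many siblings.
 */
static int max_shared = 2;

/* "Hotplug" a cpu: share its sibling's runqueue if policy allows. */
static void cpu_attach(int cpu, int sibling)
{
	int rq = cpu;	/* default: private runqueue */

	if (sibling >= 0 && rq_nr_cpus[cpu_rq_map[sibling]] < max_shared)
		rq = cpu_rq_map[sibling];

	cpu_rq_map[cpu] = rq;
	rq_nr_cpus[rq]++;
	printf("cpu %d -> runqueue %d\n", cpu, rq);
}

int main(void)
{
	cpu_attach(0, -1);	/* boot cpu, no sibling */
	cpu_attach(1, 0);	/* HT sibling of cpu 0: shares rq 0 */
	cpu_attach(2, -1);
	cpu_attach(3, 2);	/* shares rq 2 */

	max_shared = 1;		/* policy change: no sharing */
	cpu_attach(4, -1);
	cpu_attach(5, 4);	/* now gets its own runqueue */
	return 0;
}

The point is only that the threshold is data rather than per-arch code: a box
with two-way HT might run with max_shared = 2, while a core with many
siblings could turn sharing off entirely, all without touching the kernel.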
