Re: [CPUISOL] CPU isolation extensions

From: Max Krasnyanskiy
Date: Mon Jan 28 2008 - 13:56:23 EST


Peter Zijlstra wrote:
On Mon, 2008-01-28 at 11:34 -0500, Steven Rostedt wrote:
On Mon, Jan 28, 2008 at 08:59:10AM -0600, Paul Jackson wrote:
Thanks for the CC, Peter.
Thanks from me too.

Max wrote:
We've had scheduler support for CPU isolation ever since the O(1) scheduler went in. I'd like to extend it further to avoid kernel activity on those CPUs as much as possible.
I recently added the per-cpuset flag 'sched_load_balance' for some
other realtime folks, so that they can disable the kernel scheduler
load balancing on isolated CPUs. It essentially allows for dynamic
control of which CPUs are isolated by the scheduler, using the cpuset
hierarchy, rather than enhancing the 'isolated_cpus' mask. That
'isolated_cpus' mask remained a minimal kernel boottime parameter.
I believe this went to Linus's tree about Oct 2007.

It looks like you have three additional tweaks for realtime in this
patch set, with your patches:

[PATCH] [CPUISOL] Do not route IRQs to the CPUs isolated at boot
I didn't know we still routed IRQs to isolated CPUs. I guess I need to
look deeper into the code on this one. But I agree that isolated CPUs
should not have IRQs routed to them.

While I agree with this in principle, I'm not sure flat out denying all
IRQs to these cpus is a good option. What about the case where we want
to service just this one specific IRQ on this CPU and no others?

Can't this be done by userspace irq routing as used by irqbalanced?
Peter, I think you missed the point of this patch. It's just a convenience feature.
It simply excludes isolated CPUs from the IRQ smp_affinity masks, that's all. What did
you mean by "flat out denying all IRQs to these cpus"? IRQs can still be routed to them by writing to /proc/irq/N/smp_affinity.

Also, this happens naturally when we bring a CPU offline and then bring it back online.
i.e. When a CPU comes back online it is excluded from the IRQ smp_affinity masks even without
my patch.

[PATCH] [CPUISOL] Support for workqueue isolation
The thing about workqueues is that they should only be woken on a CPU if
something on that CPU accessed them. IOW, the workqueue on a CPU handles
work that was called by something on that CPU. Which means that
something that high prio task did triggered a workqueue to do some work.
But this can also be triggered by interrupts, so by keeping interrupts
off the CPU no workqueue should be activated.

Quite so, if nobody uses it, there is no harm in having them around. If
they are used, its by someone already allowed on the cpu.

No no no. I just replied to Steven about that. The problem is that things like NFS and friends expect _all_ of their workqueue threads to report back when they do certain things like flushing buffers. The reason I added this is that my machines were getting stuck: CPU0 was waiting for CPU1 to run the NFS workqueue threads, even though no IRQs, softirqs or anything else were running on CPU1.

[PATCH] [CPUISOL] Isolated CPUs should be ignored by the "stop machine"
This I find very dangerous. We are making an assumption that tasks on an
isolated CPU wont be doing things that stopmachine requires. What stops
a task on an isolated CPU from calling something into the kernel that
stop_machine requires to halt?

Very dangerous indeed!
Please see my reply to Steven. I agree it's somewhat dangerous. What we could do is make it
configurable with a big fat warning. In other words, I'd rather have an option than a blanket
rule that says "do not use dynamic module loading" on those systems.

Max
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/