Re: [PATCH] bitmap, irq: Add smp_affinity_list interface to /proc/irq

From: Mike Travis
Date: Tue Mar 29 2011 - 20:51:21 EST

Andrew Morton wrote:
> On Tue, 29 Mar 2011 16:56:12 -0700 Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
>> On Tue, 29 Mar 2011 16:46:52 -0700
>> Mike Travis <travis@xxxxxxx> wrote:
>>
>>> +	/* create /proc/irq/<irq>/smp_affinity_list */
>>> +	proc_create_data("smp_affinity_list", 0600, desc->dir,
>>> +			 &irq_affinity_list_proc_fops, (void *)(long)irq);
>>
>> Always document your interfaces, please. `grep -r smp_affinity
>> Documentation' shows where.
>
> And once we've seen a description of the proposed new interface, we can
> review the patch!
>
> Also, the patch adds a new interface which duplicates an existing one,
> only the formats are different, yes? This is, of course, bad.
>
> The only justification we've seen for being bad is "Manually adjusting
> the smp_affinity for IRQ's becomes unwieldy when the cpu count is
> large". A more thorough description of how painful this is might help
> motivate people to do bad things to the kernel.
>
> Also, if it's just a matter of an alternative presentation of the data,
> why not implement the desired user interface with a little userspace
> tool then feed the results down into the existing kernel interface?


Setting smp affinity to cpus 256 to 263 would be:

echo 000000ff,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000 > smp_affinity

instead of:

echo 256-263 > smp_affinity_list

Think about what it looks like for cpus around, say, 4088 to 4095.
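
(As an aside: the userspace tool you suggest is certainly doable. Here
is a rough sketch of what such a helper would have to do, namely build
the comma-separated hex words that smp_affinity expects, most significant
32-bit word first. The helper itself is made up for illustration;
nothing like it exists in the tree, and NR_CPUS here is just a compile
time assumption:

#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS		4096
#define BITS_PER_WORD	32
#define NR_WORDS	(NR_CPUS / BITS_PER_WORD)

int main(int argc, char **argv)
{
	unsigned int words[NR_WORDS] = { 0 };
	unsigned int cpu, start, end;
	int i, top;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <first-cpu> <last-cpu>\n", argv[0]);
		return 1;
	}
	start = atoi(argv[1]);
	end = atoi(argv[2]);

	/* set one bit per cpu in the requested range */
	for (cpu = start; cpu <= end && cpu < NR_CPUS; cpu++)
		words[cpu / BITS_PER_WORD] |= 1u << (cpu % BITS_PER_WORD);

	/* print from the highest non-zero word down to word 0 */
	for (top = NR_WORDS - 1; top > 0 && !words[top]; top--)
		;
	for (i = top; i >= 0; i--)
		printf("%08x%s", words[i], i ? "," : "\n");
	return 0;
}

So "helper 256 263 > /proc/irq/<irq>/smp_affinity" would work, but then
every admin and every script has to carry the helper around, while the
list format needs nothing but echo.)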

We already have many alternate "list" interfaces:

/sys/devices/system/cpu/cpuX/indexY/shared_cpu_list
/sys/devices/system/cpu/cpuX/topology/thread_siblings_list
/sys/devices/system/cpu/cpuX/topology/core_siblings_list
/sys/devices/system/node/nodeX/cpulist
/sys/devices/pci***/***/local_cpulist

etc.

This just expands on that same philosophy.
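
For reference, the "list" format all of these files share is nothing
more than comma-separated decimal ranges; the kernel already parses it
with bitmap_parselist(). A minimal userspace sketch of that grammar
(the parse_cpulist() helper here is made up for illustration, it is not
a kernel function):

#include <stdio.h>
#include <string.h>

/* walk a list like "0-3,8,256-263", calling set() once per cpu named */
static int parse_cpulist(const char *buf, void (*set)(unsigned int))
{
	unsigned int a, b;

	while (*buf) {
		if (sscanf(buf, "%u-%u", &a, &b) == 2)
			;		/* a range such as "256-263" */
		else if (sscanf(buf, "%u", &a) == 1)
			b = a;		/* a single cpu */
		else
			return -1;
		if (a > b)
			return -1;
		for (; a <= b; a++)
			set(a);
		buf = strchr(buf, ',');	/* advance to the next term */
		if (!buf)
			break;
		buf++;
	}
	return 0;
}

static void show(unsigned int cpu)
{
	printf("cpu %u\n", cpu);
}

int main(void)
{
	return parse_cpulist("0-3,8,256-263", show);
}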

Thanks,
Mike