Re: Memory policy question for NUMA arch....

From: Rick Sherm
Date: Wed Apr 07 2010 - 11:48:36 EST

Hi Andy,

--- On Wed, 4/7/10, Andi Kleen <andi@xxxxxxxxxxxxxx> wrote:
> On Tue, Apr 06, 2010 at 01:46:44PM -0700, Rick Sherm wrote:
> > On a NUMA host, if a driver calls __get_free_pages()
> then
> > it will eventually invoke
> ->alloc_pages_current(..). The comment
> > above/within alloc_pages_current() says
> 'current->mempolicy' will be
> used. So what memory policy will kick-in if the driver
> is trying to
> > allocate some memory blocks during driver load
> time (say, from probe_one)? System-wide default
> policy, correct?
> Actually the policy of the modprobe or the kernel boot up
> if built in
> (which is interleaving)

Interleaving, yes, that's what I thought. I have tight control over the environment, so for one driver I need high throughput and will use the interleave policy. But for the other two or three drivers I need low latency, so I would like to restrict allocations to the local node. These are just my thoughts, though; I'll have to experiment and see what the numbers look like. Once I have some numbers I will post them, in a few weeks.

> >
> > What if the driver wishes to i) stay confined to a
> 'cpulist' OR ii) use a different mem-policy? How
> > do I achieve this?
> > I will choose the 'cpulist' after I am successfully
> able to affinitize the MSI-X vectors.
> You can do that right now by running numactl ... modprobe
> ...
Perfect. OK, then I'll probably write a simple user-space wrapper:
1) Set the mem-policy type depending on driver-foo-M.
2) Load driver-foo-M.
3) Go to 1) and repeat for the other driver[s]-foo-X.
BTW - I will know beforehand which adapter is placed in which slot, so I will be able to deduce its proximity to a node.
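A minimal sketch of that wrapper (the driver names and node numbers below are placeholders for illustration, not real modules; assumes numactl is installed):

```shell
#!/bin/sh
# Dry-run wrapper: set RUN= (empty) to actually load the drivers.
RUN=echo

# High-throughput driver: interleave its allocations across all nodes.
$RUN numactl --interleave=all modprobe foo_hi_tput

# Latency-sensitive drivers: bind allocations to each adapter's local node.
$RUN numactl --membind=0 modprobe foo_low_lat0
$RUN numactl --membind=1 modprobe foo_low_lat1
```

Since the policy set by numactl is inherited across exec, the modprobe (and hence the driver's init/probe-time allocations) picks it up.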

> Yes there should be probably a better way, like using a
> policy
> based on the affinity of the PCI device.

> -Andi
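Until something like that exists, the device's proximity can at least be read from sysfs. A hypothetical helper (the BDF below is a placeholder; numa_node reads -1 when the platform doesn't report a proximity domain):

```shell
#!/bin/sh
# Print the NUMA node a PCI device is local to, given its sysfs path.
# Prints "unknown" if the attribute is missing (e.g. a non-NUMA kernel).
pci_numa_node() {
    if [ -r "$1/numa_node" ]; then
        cat "$1/numa_node"
    else
        echo unknown
    fi
}

pci_numa_node /sys/bus/pci/devices/0000:03:00.0
```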

