Re: [PATCH 2/3] sched,numa: retry placement more frequently when misplaced
From: Rik van Riel
Date: Fri Apr 11 2014 - 14:04:23 EST
On 04/11/2014 01:46 PM, Joe Perches wrote:
> On Fri, 2014-04-11 at 13:00 -0400, riel@xxxxxxxxxx wrote:
>> This patch reduces the interval at which migration is retried,
>> when the task's numa_scan_period is small.
>
> More style trivia and a question.
>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> []
>> @@ -1326,12 +1326,15 @@ static int task_numa_migrate(struct task_struct *p)
>> /* Attempt to migrate a task to a CPU on the preferred node. */
>> static void numa_migrate_preferred(struct task_struct *p)
>> {
>> + unsigned long interval = HZ;
>
> Perhaps it'd be better without the unnecessary initialization.
>
>> /* This task has no NUMA fault statistics yet */
>> if (unlikely(p->numa_preferred_nid == -1 || !p->numa_faults_memory))
>> return;
>>
>> /* Periodically retry migrating the task to the preferred node */
>> - p->numa_migrate_retry = jiffies + HZ;
>> + interval = min(interval, msecs_to_jiffies(p->numa_scan_period) / 16);
>
> and use
>
> interval = min_t(unsigned long, HZ,
> msecs_to_jiffies(p->numa_scan_period) / 16);
That's what I had before, but spilling things over across
multiple lines like that didn't exactly help readability.
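For completeness, the helper as it would read with the patch applied
is roughly the following (just a sketch; the numa_migrate_retry update
line is assumed from the part of the diff elided above):

static void numa_migrate_preferred(struct task_struct *p)
{
	unsigned long interval = HZ;

	/* This task has no NUMA fault statistics yet */
	if (unlikely(p->numa_preferred_nid == -1 || !p->numa_faults_memory))
		return;

	/* Periodically retry migrating the task to the preferred node */
	interval = min(interval, msecs_to_jiffies(p->numa_scan_period) / 16);
	p->numa_migrate_retry = jiffies + interval;

	/* ... then try to move the task to p->numa_preferred_nid ... */
}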
> btw; why 16?
>
> Can msecs_to_jiffies(p->numa_scan_period) ever be < 16?
I picked 16 because there is a cost tradeoff: unmapping and faulting
(and potentially migrating) a task's memory is very expensive, while
searching for a better NUMA node to run on is only slightly expensive.
This way the task may run on the wrong NUMA node for around 6%
(one part in 16) of the time between unmapping all of its memory
(and faulting it back in with NUMA hinting faults), before we retry
migrating it to a better node.
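To put some (hypothetical) numbers on that, assuming HZ=1000 and a
1000 msec scan period:

	unsigned long interval = HZ;				/* cap: 1000 jiffies = 1 second */

	interval = min(interval, msecs_to_jiffies(1000) / 16);	/* 1000 / 16 = 62 jiffies, ~62 msec */

so in the worst case a misplaced task waits ~62 msec out of a 1000 msec
scan period before the next migration attempt, i.e. about 1/16 ~ 6%.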
I suppose it is possible for a sysadmin to set the minimum
numa scan period to under 16 milliseconds, but if your system
is trying to unmap all of a task's memory (and fault it back in)
every 16 milliseconds, task placement is likely to be the least
of your problems :)
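To spell out that corner case (again hypothetical numbers, HZ=1000,
scan period forced down to 10 msec by the sysadmin):

	interval = min(interval, msecs_to_jiffies(10) / 16);	/* 10 / 16 == 0 jiffies */
	p->numa_migrate_retry = jiffies + interval;		/* retry almost immediately */

so the code simply degrades to retrying placement on essentially every
NUMA hinting fault, which is the least of your problems at that point.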
--
All rights reversed