Re: [v3.10-rt / v3.12-rt] scheduling while atomic in cgroup code

From: Sebastian Andrzej Siewior
Date: Tue Feb 17 2015 - 04:28:42 EST


* Mike Galbraith | 2014-06-21 10:09:48 [+0200]:

>--- a/mm/memcontrol.c
>+++ b/mm/memcontrol.c
>@@ -2398,16 +2398,18 @@ static bool consume_stock(struct mem_cgr
> {
> struct memcg_stock_pcp *stock;
> bool ret = true;
>+ int cpu;
>
> if (nr_pages > CHARGE_BATCH)
> return false;
>
>- stock = &get_cpu_var(memcg_stock);
>+ cpu = get_cpu_light();
>+ stock = &per_cpu(memcg_stock, cpu);
> if (memcg == stock->cached && stock->nr_pages >= nr_pages)
> stock->nr_pages -= nr_pages;
> else /* need to call res_counter_charge */
> ret = false;
>- put_cpu_var(memcg_stock);
>+ put_cpu_light();
> return ret;
> }

I am not taking this chunk. That preempt_disable() is lighter weight,
and nothing happens in this function that does not work under it.

>@@ -2457,14 +2459,17 @@ static void __init memcg_stock_init(void
> */
> static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
> {
>- struct memcg_stock_pcp *stock = &get_cpu_var(memcg_stock);
>+ struct memcg_stock_pcp *stock;
>+ int cpu = get_cpu_light();
>+
>+ stock = &per_cpu(memcg_stock, cpu);
>
> if (stock->cached != memcg) { /* reset if necessary */
> drain_stock(stock);
> stock->cached = memcg;
> }

I am a little more worried that drain_stock() could be called more than
once on the same CPU:
- memcg_cpu_hotplug_callback() doesn't disable preemption
- drain_local_stock() doesn't either

so maybe it doesn't matter.

> stock->nr_pages += nr_pages;
>- put_cpu_var(memcg_stock);
>+ put_cpu_light();
> }

Sebastian
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/