Re: [RFC PATCH v3 1/3] sched: schedule balance map foundation
From: Michael Wang
Date: Thu Feb 21 2013 - 23:19:47 EST
On 02/22/2013 11:33 AM, Alex Shi wrote:
> On 02/22/2013 10:53 AM, Michael Wang wrote:
>>>>>> And the final cost is 3000 ints and 1030000 pointers, plus some
>>>>>> padding, but it won't be bigger than 10M; not a big deal for a
>>>>>> system with 1000 CPUs
>>>> Maybe, but quadratic stuff should be frowned upon at all times; these
>>>> things tend to explode when you least expect it.
>>>> For instance, IIRC the biggest single-image system SGI booted had 16k
>>>> CPUs in it; that ends up at something like 14+14+3=31, i.e. 2G of
>>>> storage just for your lookup -- that seems somewhat preposterous.
>> Honestly, if I were an admin who owned a 16k-CPU system (I can't even
>> imagine how much memory it could have...), I'd really prefer to trade
>> 2G of memory for some performance.
>> I see your point here: the space cost will grow quadratically, but
>> system memory will also grow, and according to my understanding,
>> it grows faster.
Thanks for your reply.
> Why not seek another way to change O(n^2) to O(n)?
> Accessing 2G of memory is an unbelievable performance cost.
It's not 2G of memory per access, but (2G / 16K), i.e. 128K per CPU; each
per-CPU sbm is O(N).
And please notice that on a 16k-CPU system the topology will be deep if
NUMA is enabled (O(log N) levels, as Peter said), and that's really a good
stage for this idea to perform on: we could save lots of recursive 'for'
loops.
> There are too many jokes about short-sightedness on compute scalability,
> like the apocryphal Gates "640K ought to be enough" remark.
Please do believe me that I won't pass up any chance to solve or mitigate
this issue (like applying Mike's suggestion), and please let me know if you
have any suggestions for reducing the memory cost.
Maybe I could make this idea an option: override select_task_rq_fair()
when people want the new logic, and if they don't want to trade memory for
it, they just leave the CONFIG disabled.