Re: [patch?] Re: Do ramdisk exec's map direct to buffer cache?

From: Linus Torvalds (torvalds@transmeta.com)
Date: Tue Aug 01 2000 - 21:01:43 EST


Interesting. But not written by me.

(Not that this would necessarily be a post I'd have to be ashamed of.
Which sometimes happens with stuff that _is_ written by me ;)

                Linus

In article <200008020053.RAA00405@eng2.sequent.com>,
Linus Torvalds <torvalds@transmeta.com> wrote:
>I personally think that a redesign of a major subsystem is doomed
>to failure, *unless* that redesign has some specific goals it can
>be measured against. The measurements need not be strictly quantitative,
>although comparisons like "the kernel build on my machine sure seemed
>a lot faster - it was done before I came back with my cup of coffee"
>seem quite weak.
>
>Today everyone seems to want Linux to solve all of their problems.
>Heck, even here at IBM we want it to run on the 390 architecture, on
>Netfinity PC's, on IA64 platforms, and heck, we'd even like it
>to scale to our 64-processor, 64 GB machines running databases such
>as those that track British Telecom's call load, department store
>inventories, Boeing's plane parts, etc. etc. And then there are cool
>things like Crusoe or small embedded systems that have very different
>constraints.
>
>What I'd *really* love to see is a description of the "sweet spot"
>that new subsystems in Linux should be designed to handle. For
>instance, should the 2.5 VM subsystem optimize for 16 K memories?
>Gawd, I hope not. What about 256 MB? 1 GB? My next laptop comes
>with 256 MB, I hope Linux runs well on it. If my laptop is that
>big today, what will typical systems be in two years (when 2.5
>enters test29-pre12 ;-)
>
>Also, what is the "typical" workload for a desktop? For a small
>server? What is similar between those two workloads? What are the
>notable differences? For instance, here's a stab at a few things
>I'd love to see considered as part of the "sweet spot" for, say, 2.5:
>
>Desktop/Laptop
>
> 1) Physical memory sizes between 256 MB and 1 GB
>
> 2) 1-4 CPUs (okay, big laptop, but I can dream... ;-)
>
> 3) 100 - 200 GB disk storage (probably about 2-4 disks
> worth, right? ;-) How does this affect the buffer/
> page cache?
>
> 4) < 100 instantiated processes, mostly servers, usually less
> than 2 CPU-hungry tasks running at once (and one most
> likely a game). Typical applications are X servers,
> napster clients (er, or the next underground instantiation),
> small web server, netscape (oops, may need to up that
> physical memory requirement), compiler/application
> development environment.
>
> 5) Primarily single user at the console
>
>Server
>
> 1) Physical memory sizes between 1 GB and 16 GB
>
> 2) 4-16 CPUs
>
> 3) ~500 disks, or up to about 1 TB of disk storage
>
> 4) < 1000 instantiated processes, many servers and/or users
> (e.g. ISP, web server, news server, irc server, ftp server)
>
> 5) Small ISP, 50-100 active users
>
>Big Honkin' Enterprise Class
>
> 1) Physical memory size of 64 GB or more
>
> 2) 16-256 CPUs (possibly clustered in groups of 16-64,
> depending on type of workload). And yes, some vendors
> produce machines of this size today!
>
> 3) ~5000 disks, or up to 100 TB of disk storage
>
> 4) 10,000+ processes, thousands of threads, heavy web server and
> network traffic, several large databases, thousands of users
> doing data entry, queries, business decision support, full
> data scans, large SETI@home processes, etc.
>
> 5) Large, commercial data center, used by Fortune 1000 folks
> to run a real business. 100's, 1000's of active users...
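>
>(A rough sketch, purely to make the three profiles above concrete: the
>numbers are just representative points inside the ranges listed, and
>the struct is invented for illustration, not anything in the tree.)
>
>    struct workload_profile {
>            const char    *name;
>            unsigned long  ram_mb;     /* typical physical memory  */
>            unsigned int   cpus;       /* typical CPU count        */
>            unsigned long  disk_gb;    /* typical attached storage */
>            unsigned long  tasks;      /* order of magnitude       */
>    };
>
>    static const struct workload_profile profiles[] = {
>            { "desktop/laptop",       512,  2,    150,   100 },
>            { "small/medium server", 8192,  8,   1000,  1000 },
>            { "big honkin' iron",   65536, 64, 100000, 10000 },
>    };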
>
>With fairly simple profiles, it becomes easier to think about
>optimizing subsystems and to decide when a particular user's needs
>are truly unique and may deserve special consideration. Also, I'd
>claim that a VM subsystem (for instance) could reasonably be designed
>to handle the first two cases pretty well, but it would be
>tough to design a VM subsystem that could handle all three "ideally".
>Perhaps the solution is to identify those core differences (would we
>*really* want to scan all of a 64 GB physical memory for page aging
>every second? Could we? What about the impacts of such page touches
>on the processor cache for those 16+ CPUs?).
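>
>(A back-of-the-envelope sketch of that scan, with 4 KB pages and an
>assumed 64-byte page descriptor - both numbers are assumptions for
>illustration, not measurements:)
>
>    #include <stdio.h>
>
>    int main(void)
>    {
>            unsigned long long mem   = 64ULL << 30;  /* 64 GB of RAM        */
>            unsigned long long psize = 4096;         /* 4 KB pages          */
>            unsigned long long descr = 64;           /* bytes per page desc */
>
>            unsigned long long pages = mem / psize;
>            printf("pages aged per second:   %llu\n", pages);
>            printf("descriptor bytes walked: %llu MB/s\n",
>                   pages * descr >> 20);
>            return 0;
>    }
>
>That is roughly 16.8 million page descriptors, about a gigabyte of
>struct data dragged through the caches every second, before touching a
>single page of actual memory.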
>
>Once those key differences are identified, you can think about solutions
>which work better for one or the other, or consider an abstraction or
>callout layer which allows you to choose the right algorithm based on,
>say, physical memory size at boot time.
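>
>(A minimal sketch of what such a boot-time callout layer could look
>like - the names, the two strategies and the 4 GB cut-over are all
>invented for illustration:)
>
>    #include <stdio.h>
>
>    struct vm_aging_ops {
>            const char *name;
>            void (*age_pages)(void);     /* called by the aging pass */
>    };
>
>    static void scan_everything(void)  { /* fine on a 256 MB laptop */ }
>    static void scan_incremental(void) { /* rate-limited, per node  */ }
>
>    static struct vm_aging_ops small_box = { "full scan",   scan_everything  };
>    static struct vm_aging_ops big_iron  = { "incremental", scan_incremental };
>
>    static struct vm_aging_ops *vm_aging;
>
>    static void vm_choose_aging(unsigned long long ram_bytes)
>    {
>            /* arbitrary cut-over: anything above 4 GB gets the
>               incremental scanner */
>            vm_aging = ram_bytes > (4ULL << 30) ? &big_iron : &small_box;
>    }
>
>    int main(void)
>    {
>            vm_choose_aging(64ULL << 30);   /* pretend we booted with 64 GB */
>            printf("aging strategy: %s\n", vm_aging->name);
>            return 0;
>    }
>
>The point is only that the decision is made once, at boot, from the
>machine's profile, rather than having one algorithm try to cover all
>three cases at run time.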
>
>Without identifying some sort of profile or profiles, reworking the
>VM subsystem will simply be redesigning out the current bugs, rather
>than solving the "real" problem - i.e. how to do memory management on
>today's systems.
>
>I'd love to hear Linus' vision of the ideal system profiles that
>Linux could address in the 2.5 timeframe. Trying to be everything
>to everyone all the time means that pretty much everyone is going
>to be unhappy. However, if you can find the sweet spots (so far that
>has been the 486-Pentium class desktop with 16-64 MB of memory in a
>single-user environment, IMHO) and design for them, you have at least
>a couple of sets of happy people, and a lot more who are only off by
>a little bit...
>
>gerrit
>
>
>> On Tue, 1 Aug 2000, Rik van Riel wrote:
>> > >
>> > > Quite frankly, nobody has convinced me that there is any way to fix VM
>> > > balancing issues even _if_ people were to re-write the VM.
>> >
>> > Nobody asks you to read all your email. However, I
>> > believe that most of the ideas for the new VM were CCd to
>> > you ;)
>>
>> I've seen a lot of discussion, yes.
>>
>> I haven't seen any really convincing arguments that any of the rewrites
>> would really make things all that much better.
>>
>> Yes, they'll probably fix the thing that you try to fix. And they'll
>> introduce new cases where _they_ work badly, and the old code happened to
>> work fine.
>>
>> For example, the "dd if=/dev/zero of=file" thing can be made to be very
>> nice on interactive behaviour, and you can obviously design a VM subsystem
>> that does that on purpose. Fine. I bet you that such a VM subsystem has
>> serious problems with some other workloads..
>>
>> Or the old idea to start writebacks early in order to try to minimize
>> having dirty pages in memory that are hard to get rid of. It's wonderful.
>> For certain loads. And it really sucks on others that have big temp-files
>> that will get deleted (like bench).
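>>
>> To make that concrete, "early writeback" amounts to a heuristic along
>> these lines (the 10% threshold is made up for the example):
>>
>>     #include <stdbool.h>
>>
>>     /* Start background writeback once dirty pages exceed a fraction
>>        of total memory, instead of waiting until reclaim is forced
>>        to deal with them. */
>>     bool should_start_writeback(unsigned long dirty, unsigned long total)
>>     {
>>             return dirty * 10 > total;      /* more than 10% dirty */
>>     }
>>
>> Every page flushed early for a temp file that is unlinked a second
>> later is pure wasted I/O, which is exactly the benchmark case above.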
>>
>> The thing that is dangerous about designing a new VM is that you can
>> design it so that it avoids the current pitfalls. But you won't even be
>> aware of the things that the current thing does well, and you may not
>> design it to do as well on those.
>>
>> And in the end, reality always tends to hit theory hard in the face when
>> you least expect it. That's why I'm not holding my breath for some magical
>> VM rewrite that will fix all performance problems. No matter _how_ much
>> people talk about it..
>>
>> Linus
>

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/


