No. The memory issue with Squid is almost purely a matter of needing to
keep a very large database (the index of every object on all the disks)
in RAM.
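
To give a feel for the scale, here is a rough sketch. The struct is
illustrative, not Squid's actual StoreEntry, and the cache numbers are
assumptions, but the shape is right: one fixed-size record in RAM per
object on disk.

    #include <stdio.h>
    #include <time.h>

    /* Illustrative per-object index record; Squid's real StoreEntry
     * differs, but it is likewise a fixed-size in-core record. */
    struct index_entry {
            unsigned char key[16];  /* MD5 hash of the URL */
            int swap_fileno;        /* which on-disk file holds it */
            long object_size;       /* size in bytes */
            time_t expires;         /* freshness metadata */
            time_t last_ref;        /* for LRU replacement */
    };

    int main(void)
    {
            /* Assumed numbers: a 50 GB cache with a 13 KB mean
             * object size holds roughly 4 million objects. */
            double objects = 50e9 / 13e3;
            double ram = objects * sizeof(struct index_entry);

            printf("~%.0f objects -> ~%.0f MB of index RAM\n",
                   objects, ram / 1e6);
            return 0;
    }

That RAM is needed no matter how fast the disks or the filesystem are.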
> > 2) Too slow disks
>
> With a filesystem that can't handle the extreme dir sizes
> that Squid uses.
Squid doesn't produce extreme dir sizes. For a given configuration, no
directory will ever hold more than a fixed number of files.
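
Roughly, the idea behind the two-level cache_dir layout (L1 top-level
directories, each with L2 subdirectories) looks like the sketch below.
The constants and the exact formula are illustrative, not lifted from
Squid's source:

    #include <stdio.h>

    #define L1 16
    #define L2 256

    /* Map an object's file number to a two-level directory path. */
    static void object_path(int fn, char *buf, size_t len)
    {
            int d1 = (fn / L2) % L1;    /* first-level directory */
            int d2 = fn % L2;           /* second-level directory */

            snprintf(buf, len, "cache/%02X/%02X/%08X", d1, d2, fn);
    }

    int main(void)
    {
            char path[64];

            object_path(123456, path, sizeof(path));
            /* Objects are spread over L1 * L2 = 4096 leaf
             * directories, so no one directory grows without bound. */
            puts(path);
            return 0;
    }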
> ReiserFS or another advanced filesystem
> will speed up this bottleneck by more than just a considerable
> amount...
The disk issue in Squid is driven almost purely by transaction times
(seeks per operation), not by raw throughput or directory lookup speed.
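
Back-of-the-envelope, with assumed numbers (not measurements): at
~10 ms per seek and a couple of disk operations per object, a single
spindle tops out at a rate that no directory layout can change.

    #include <stdio.h>

    int main(void)
    {
            double seek_ms = 10.0;        /* seek + rotational latency */
            double ops_per_object = 2.0;  /* say, open + read on a hit */
            double ops_per_sec = 1000.0 / seek_ms;

            printf("~%.0f disk ops/s -> ~%.0f objects/s per spindle\n",
                   ops_per_sec, ops_per_sec / ops_per_object);
            return 0;
    }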
> > which as far as I can see aren't helped by sendfile. They also
> > say "CPU limitations are rarely encountered except in very large
> > caches".
>
> This situation is also better handled by tree-based filesystems
> and better buffer/cache administration.
On Squid 1.2, large caches are CPU bound, and the number two CPU sucker
is read()s and write()s on network sockets. (Squid does a LOT of
copying of data from network to network, and from disk to network.)
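
The relay pattern looks roughly like the sketch below (not Squid's
actual code): every byte crosses the user/kernel boundary twice, once
on read() and once on write(). sendfile() can avoid that for
disk-to-network, but for network-to-network there is no such shortcut,
so the copies show up as CPU time.

    #include <unistd.h>

    /* Copy everything from one fd to another through a user-space
     * buffer; returns 0 on EOF, -1 on error. */
    static ssize_t relay(int from_fd, int to_fd)
    {
            char buf[8192];
            ssize_t n;

            while ((n = read(from_fd, buf, sizeof(buf))) > 0) {
                    ssize_t off = 0;

                    while (off < n) {   /* cope with short writes */
                            ssize_t w = write(to_fd, buf + off, n - off);

                            if (w < 0)
                                    return -1;
                            off += w;
                    }
            }
            return n;
    }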
Michael.