Re: File-descriptors - large quantities

Stephen C. Tweedie (sct@redhat.com)
Thu, 9 Jul 1998 15:28:42 +0100


Hi,

On Wed, 08 Jul 1998 13:38:36 +0800, "Michael O'Reilly"
<michael@metal.iinet.net.au> said:

> If you're even coming close to using 3000 FDs then I can guarantee
> that your disk throughput is maxed out for squid 1.1.

Michael,

There was an interesting paper on this at the recent Usenix. The
authors profiled a large proxy server and found an average of 50 hot
(active) connections present at any time, plus *750* cold (slow)
ones, at a rate of 220 new connections per second. The cold
connections were purely an artifact of WAN-level timing: under lab
conditions all connections were serviced quickly, but on the real
internet enough connections proceed slowly enough to dominate the
mean connection lifetime. They measured a median connection lifetime
of 250ms but a mean of 2.5 seconds.
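
As a sanity check (my back-of-the-envelope arithmetic, not a figure
from the paper): by Little's law, the mean number of connections in
flight is the arrival rate times the mean lifetime,

    L = lambda * W = 220 conn/s * 2.5 s ~= 550 concurrent connections

which is the right order of magnitude for the ~800 (50 hot + 750
cold) they observed. The median alone (250ms) would predict only
~55; it's the long tail of slow connections that fills the fd table.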

This is a real problem: you don't need a massively busy server, just
a server requesting items from a slow domain, for the number of
simultaneously outstanding connections to grow enormously. (The
Usenix paper was explicitly dealing with the kernel's slow select()
and slow allocation of new fds, both of which get worse as the total
number of open descriptors grows, btw.)
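
To make the select() cost concrete, here's a minimal sketch of the
classic single-process event loop (illustrative only; this is not
squid's code nor the paper's, and conns[] / handle_readable() are
stand-ins). Both userspace and the kernel end up walking a bitmap
covering every open descriptor on every pass, so the cold
connections are paid for on each wakeup even though they contribute
no work:

    /* Sketch of a select()-based event loop; assumes all fds fit
     * below FD_SETSIZE (typically 1024, so 800 is fine). */
    #include <sys/select.h>

    #define NCONNS 800           /* ~50 hot + ~750 cold, per the paper */

    static int conns[NCONNS];    /* filled in by accept()/connect() */
    static int nconns;

    static void handle_readable(int fd)
    {
        /* read, parse, forward... */
        (void) fd;
    }

    static void event_loop(void)
    {
        for (;;) {
            fd_set rfds;
            int i, maxfd = -1;

            /* Rebuild the interest bitmap from scratch: O(nconns). */
            FD_ZERO(&rfds);
            for (i = 0; i < nconns; i++) {
                FD_SET(conns[i], &rfds);
                if (conns[i] > maxfd)
                    maxfd = conns[i];
            }

            /* The kernel scans the bitmap again, 0..maxfd, inside
             * select() itself, so the cost is proportional to all
             * open connections, not the ~50 that are actually hot. */
            if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
                continue;

            /* And userspace scans a third time to find the ready few. */
            for (i = 0; i < nconns; i++)
                if (FD_ISSET(conns[i], &rfds))
                    handle_readable(conns[i]);
        }
    }

    int main(void)
    {
        event_loop();
        return 0;
    }

The general fix, of course, is an interface where the kernel
remembers interest between calls, so per-event cost scales with the
active set rather than the whole fd table.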

--Stephen
