Re: File-descriptors - large quantities

Dancer (dancer@brisnet.org.au)
Fri, 10 Jul 1998 02:19:17 +1000


And this is exactly the sort of thing we see. A 30-second upstream
choke also bites us pretty badly, and those chokes happen far too frequently.

D

Stephen C. Tweedie wrote:
>
> Hi,
>
> On Wed, 08 Jul 1998 13:38:36 +0800, "Michael O'Reilly"
> <michael@metal.iinet.net.au> said:
>
> > If you're even coming close to using 3000 FDs then I can guarantee
> > that your disk throughput is maxed out for squid 1.1.
>
> Michael,
>
> There was an interesting paper on this at the recent Usenix. A large
> proxy server had been profiled and they showed an average of 50 hot
> connections present at any time, and *750* cold (slow) connections, at
> a rate of 220 new connections per second. The cold connections were
> purely a result of WAN-level timings; under lab conditions all
> connections were serviced pretty rapidly, but on the internet, enough
> connections go really slowly to contribute a lot to mean connection
> lifetimes. They had a median connection lifetime of 250ms and a mean
> of 2.5 seconds.
>
> This is a real problem: you don't have to have a massively busy
> server, just a server requesting items from a slow domain, for the
> number of outstanding connections at a time to grow enormously. (The
> Usenix paper was explicitly dealing with slow select() and get-new-fd
> handling in the kernel, btw.)
>
> --Stephen

-- 
-----BEGIN GEEK CODE BLOCK-----
Version: 3.1
GAT d- s++: a C++++$ UL++++B+++S+++C++H++U++V+++$ P+++$ L+++ E-
W+++(--)$ N++ w++$>--- t+ 5++ X+() R+ tv b++++ DI+++ e- h-@ 
------END GEEK CODE BLOCK------

- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.rutgers.edu