Re: [PATCH 2.6.13-rc6 1/2] New Syscall: get rlimits of any process(update)
From: Lee Revell
Date: Thu Aug 18 2005 - 18:17:30 EST
On Fri, 2005-08-19 at 00:13 +0100, Alan Cox wrote:
> On Iau, 2005-08-18 at 14:17 -0400, Lee Revell wrote:
> > Maybe the distros need to just increase the default FD limit to 1024. I
> > hit this constantly with gtk-gnutella: if I try to download a file that's
> > available on more than 1024 hosts it will open sockets until it hits
> > that limit, then bomb out.
>
> Sounds like a remarkably badly designed application. The author should
> perhaps look at the papers on TCP capture and efficiency unless they
> have a truly remarkably huge network pipe.
>
OK, say your DSL line can download at 350KB/sec.
A search for a 200MB file tells you it's available on 2000 hosts, all of
whom are on dialup. You break it into 100 chunks and request a chunk
from what should be the fastest 100 hosts. You wait a bit for the
speeds to stabilize and find you're only getting 150KB/sec aggregate, so
you reduce the size of the chunks and connect to 100 more hosts. Repeat
until the pipe is full or you run out of FDs.
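Roughly, the ramp-up loop looks like this (a minimal sketch, not
gtk-gnutella's actual code; measure_rate() and the 1.5KB/sec-per-peer
figure are invented from the numbers above, where 100 hosts gave
~150KB/sec aggregate):

#include <stdio.h>
#include <sys/resource.h>

#define PIPE_KBPS 350        /* downstream DSL capacity */
#define BATCH     100        /* hosts added per round   */

/* Stand-in for the real bandwidth measurement: pretend each
 * dialup peer contributes ~1.5KB/sec on average.            */
static int measure_rate(int conns) { return conns * 3 / 2; }

int main(void)
{
	struct rlimit rl;
	int conns = 0;

	/* The ceiling we bomb out at; typically 1024. */
	getrlimit(RLIMIT_NOFILE, &rl);

	/* Shrink the chunks and ask BATCH more hosts each round,
	 * until the pipe is full or we run out of FDs.          */
	while (measure_rate(conns) < PIPE_KBPS &&
	       conns + BATCH <= (int)rl.rlim_cur) {
		conns += BATCH;	/* request a chunk from each new host */
		printf("%4d connections -> ~%3d KB/sec aggregate\n",
		       conns, measure_rate(conns));
	}
	return 0;
}

With those numbers the loop fills the pipe at ~300 connections, but
with slower peers it walks straight into rlim_cur instead, which is
exactly the failure mode above.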
What's inefficient about this? It's remarkably fast even for
high-demand files, doesn't suck up all available upstream BW like
BitTorrent, and is leech-friendly ;-)
Lee