Re: /proc reliability & performance

From: Albert Cahalan
Date: Thu Oct 16 2003 - 23:46:34 EST


On Thu, 2003-10-16 at 23:24, Brian McGroarty wrote:
> On Thu, Oct 16, 2003 at 10:07:18PM -0400, Albert Cahalan wrote:
> > I created a process with 360 thousand threads,
> > went into the /proc/*/task directory, and did
> > a simple /bin/ls. It took over 9 minutes on a
> > nice fast Opteron. (it's the same at top-level
> > with processes, but I wasn't about to mess up
> > my system that much)
>
> Are there many cases where the /proc directory
> contents are read in this fashion?

Sure. Run any of: top, ps, lsof, fuser...

> I'd be more curious about how performance fares
> with reading a thousand entries by name with 1k
> processes and with 360k processes.

Judging by the crazy example and the observation
that an O(n*n) algorithm is involved, directory
reads on that very fast machine should get annoying
once you have a few thousand processes. They'd be
perceptible one at a time, and the delays add up
when scripts, top, or multiple users trigger
repeated reads.
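
To put rough numbers on it: assume the kernel
hands back around 32 entries per getdents call
(the batch size is a guess on my part; the
quadratic shape isn't). Listing 360k tasks then
takes about 11,000 reads, each re-counting an
average of 180k tasks from the head of the list,
for roughly 2 billion list steps in one /bin/ls.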

Anyway, it's not just about performance! That's
only half of the problem. The other half is
reliability. The way /proc works is this:

Count tasks as you emit them; that count is
your directory offset. Each read returns a few
dozen entries at a time, so on every read you
have to find your place again by counting tasks
from the start until you reach your offset.
Of course, tasks will have been created and
destroyed between reads, so who knows where
you'll actually continue from?
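
Here's a little user-space simulation of that
scheme, to make the failure mode concrete. It's
illustrative only: the list layout, helper names,
and batch size are all made up, and it mimics the
idea rather than the actual fs/proc code.

#include <stdio.h>
#include <stdlib.h>

/* Simulation only: a toy stand-in for the
 * kernel's task list. */
struct task { int pid; struct task *next; };

static struct task *task_list;

static void add_task(int pid)
{
        struct task *t = malloc(sizeof *t);
        t->pid = pid;
        t->next = task_list;
        task_list = t;
}

static void remove_task(int pid)
{
        struct task **p = &task_list;
        while (*p && (*p)->pid != pid)
                p = &(*p)->next;
        if (*p) {
                struct task *dead = *p;
                *p = dead->next;
                free(dead);
        }
}

/* One getdents-style read: skip 'pos' entries by
 * counting from the head (the O(n) rescan), then
 * emit up to 'batch' more. */
static void readdir_batch(long *pos, int batch)
{
        struct task *t = task_list;
        long i;

        for (i = 0; t && i < *pos; t = t->next)
                i++;    /* find our place again */
        for (; t && batch-- > 0; t = t->next, (*pos)++)
                printf("  pid %d\n", t->pid);
}

int main(void)
{
        long pos = 0;
        int pid;

        for (pid = 5; pid >= 1; pid--)
                add_task(pid);  /* list: 1 2 3 4 5 */

        puts("first read:");
        readdir_batch(&pos, 2); /* emits 1, 2 */

        remove_task(1); /* a task exits between reads */

        puts("second read:");
        readdir_batch(&pos, 3); /* emits 4, 5 */
        return 0;
}

Run it and pid 3 never shows up: task 1 exiting
shifted every later entry down by one, and the
saved offset silently skipped over it. A task
being created mid-listing gets you the opposite
bug, a duplicate entry.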

That's simply not reliable.

