Re: Proc fs limit workaround?

From: Martin Dalecki (dalecki@evision-ventures.com)
Date: Thu Sep 14 2000 - 22:41:09 EST


Ricky Beam wrote:
>
> On Thu, 14 Sep 2000, Nick Pollitt wrote:
> ...
> >And second, why is the 4K limit there in the first place?
>
> Primarily because it was never designed for 90% of the crap that's in there
> now. I have long hated the BS required to get more than 4k worth of stuff
> out of /proc. The way around the limit is not a solution, it's a hack.
> There's no atomicity for processing more than one page unless you go out
> of your way to deal with it. I've banged my head on the desk a few times
> because of this -- what happens when there's any delay between read()'s?
> *sigh*
>
> #ifdef RANT
> In case you haven't noticed, a lot of present-day Linux is a nice collection
> of hacks. This is the nature of code evilution -- I have to deal with this
> every day (of course, I'm paid to.) procfs was actually a Very Good Thing(r)
> six or seven years ago when it was _designed_. Now look at it.
>
> I'm a perfectionist. I like things to be well planned, designed, and
> implemented to do what they were designed to do. If you want it to do
> something else, return to the planning stage. For example, 747's weren't
> designed to clear cut forests. While they can be used for such a task,
> they are quite inefficient at it.
> #endif

I disagree on only one point - /proc was a bad design from the
beginning, since you could have expected it to end up exactly where it
is now! (Actually, please see in the archives the MANY MANY people on
linux-kernel who were arguing against it before it ever got in...)
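
For reference, the interface being complained about is the 2.2/2.4-era
read_proc callback: proc_file_read() hands the callback one page at a
time, so emitting more than 4k means doing the off/count/*start
bookkeeping by hand across successive read() calls. Below is a minimal
sketch of that idiom; the entry name, buffer, and function names are
hypothetical, but the callback signature and the create_proc_read_entry()
helper are the ones from that era:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/proc_fs.h>
#include <linux/string.h>
#include <linux/errno.h>

/* Hypothetical backing store: 16k of data, regenerated elsewhere. */
static char big_buf[16384];

/*
 * 2.2/2.4-style read_proc callback.  proc_file_read() calls this
 * once per page, so serving more than 4k means handling the
 * off/count/*start bookkeeping ourselves.
 */
static int big_read_proc(char *page, char **start, off_t off,
                         int count, int *eof, void *data)
{
        int len;

        if (off >= sizeof(big_buf)) {   /* reader is past the end */
                *eof = 1;
                return 0;
        }

        len = sizeof(big_buf) - off;    /* bytes remaining */
        if (len > count)
                len = count;            /* at most one chunk per call */
        else
                *eof = 1;               /* this chunk is the last */

        /*
         * Nothing stops big_buf changing between the read() calls
         * that fetch successive chunks -- the atomicity problem
         * complained about above.
         */
        memcpy(page, big_buf + off, len);
        *start = page;                  /* data begins at page[0] */
        return len;
}

static int __init big_init(void)
{
        if (!create_proc_read_entry("big_example", 0, NULL,
                                    big_read_proc, NULL))
                return -ENOMEM;
        return 0;
}

static void __exit big_exit(void)
{
        remove_proc_entry("big_example", NULL);
}

module_init(big_init);
module_exit(big_exit);

Every chunk past the first is fetched by a separate read(), so a
consumer that sleeps between reads can see a torn view of the data -
exactly the "delay between read()'s" scenario above.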
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at http://www.tux.org/lkml/