[Please CC me on any replies, as I've been overwhelmed by the amount of
traffic on the list, so I'm no longer subscribed...]
Folks:
I've got a 512MB x86 box running lots of traffic through it and am
seeing some weird buffer/page cache behaviour. A couple of times over
the last few days, I've had the network stack fail to allocate SKBs
because ~40+% of physical memory was being used for the buffer
cache.
During these specific runs, I had debug logging turned on in
my server, so I generated probably 10GB of logs overnight. Also,
due to the load placed on the system, my server process grew to ~
200MB (that part, at least, was expected 8-). Of notable interest
was the fact that killing the server process, sync'ing the FS and
then blowing away all the logs didn't make the situation any better.
Should I have 200+MB of buffer cache on a system that is mostly doing
logging to disk? (Other than misc. Linux system processes and a monitor
script that dumped stats once a minute, no other processes were
running on the box, so nothing should have been reading from the disk
at any noticeable rate.)
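(For reference, here's roughly the kind of thing my monitor script samples to see how much memory is sitting in the buffer/page cache -- a minimal sketch; the exact field names in /proc/meminfo vary between kernel versions:)

```shell
#!/bin/sh
# Dump the buffer/page cache footprint from /proc/meminfo.
# MemTotal/MemFree/Buffers/Cached are the fields of interest;
# "Buffers" is the buffer cache, "Cached" the page cache.
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```

Run once a minute from cron (or a sleep loop) and you can watch Buffers+Cached creep up as the logs are written.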
More importantly, shouldn't SKB allocation be able to force pages out of
the buffer/page cache(s) so that it doesn't fail? I would expect so, but
I'm a relative Linux kernel newbie.
Any info appreciated,
--rafal
----
Rafal Boni                                          rafal.boni@metatel.com
PGP key C7D3024C, print EA49 160D F5E4 C46A 9E91 524E 11E0 7133 C7D3 024C
This archive was generated by hypermail 2b29 : Sun Apr 23 2000 - 21:00:19 EST