Re: fs/stat: Reduce memory requirements for stat_open

From: Peter Zijlstra
Date: Tue Jul 08 2014 - 09:10:14 EST


On Thu, Jun 12, 2014 at 03:00:17PM +0200, Stefan Bader wrote:
> When reading from /proc/stat we allocate a large buffer to maximise
> the chances of the results being from a single run and thus internally
> consistent. This currently is sized at 128 * num_possible_cpus() which,
> in the face of kernels sized to handle large configurations (256 cpus
> plus), results in the buffer being an order-4 allocation or more.
> When system memory becomes fragmented these cannot be guaranteed, leading
> to read failures due to allocation failures.

> @@ -184,7 +184,7 @@ static int show_stat(struct seq_file *p, void *v)
>
> static int stat_open(struct inode *inode, struct file *file)
> {
> - size_t size = 1024 + 128 * num_possible_cpus();
> + size_t size = 1024 + 128 * num_online_cpus();
> char *buf;
> struct seq_file *m;
> int res;

Old thread, and already solved in the meantime, but note that
CONFIG_NR_CPUS _should_ have no reflection on num_possible_cpus().

The arch code (x86 does this) should detect at boot time the maximum
number of CPUs the actual hardware supports and set num_possible_cpus()
to that number. So your typical laptop will mostly have
num_possible_cpus() <= 4, even though CONFIG_NR_CPUS could be 4k.

Of course, if you actually do put 256+ cpus in your system, well, then
the difference between possible and online isn't going to help either.

If on the other hand your 'board' reports it can hold 256 CPUs while in
fact it cannot, go kick your vendor in the nuts.

Attachment: pgp3ErUUejFXs.pgp
Description: PGP signature