Re: Idea for reducing sysfs memory usage
From: Greg Kroah-Hartman
Date: Tue Feb 16 2016 - 19:47:35 EST
On Wed, Feb 17, 2016 at 12:37:38AM +0000, Edward Cree wrote:
> On 16/02/16 23:55, Greg Kroah-Hartman wrote:
> >On Tue, Feb 16, 2016 at 11:46:49PM +0000, Edward Cree wrote:
> >>Sorry if this has been suggested before, but if so I couldn't find it.
> >>Short version: could a sysfs dir reference a list of default attributes
> >>rather than having to instantiate them all?
> >Shorter version, why do you think it is? :)
> >
> >Have you done some testing of the amount of memory that sysfs entries
> >consume and found any problems with it?
> Two reasons:
> a) in his netdev1.1 talk "Scaling the Number of Network Interfaces on
>    Linux", David Ahern claimed a memory overhead of (iirc) about 45kB
>    per netdevice, of which he attributed (again, iirc) about 20kB to
>    sysfs entries. He also indicated that this was a problem for his use
>    case. (My apologies to David if I've misrepresented him. CCed him so
>    he can correct me.)
How many sysfs entries are you creating for that 20kB? And how did you
measure it? If you don't access the files, the backing store is not
allocated, saving you a lot of memory. If you do access them, the memory
will be freed again later on, so it's sometimes really hard to measure
this accurately.
> b) my reading of the code suggested it was allocating stuff for every
>    call to sysfs_create_file() in the loop in populate_dir().
> Having re-read __kernfs_new_node() and struct kernfs_node, I now realise
> I misinterpreted them - the name isn't being allocated at all
> (kstrdup_const()) and the struct kernfs_node consists chiefly (if not
> entirely) of fields specific to the individual file rather than
> shareable between multiple instances. So there isn't any memory we can
> save here.
That's good to verify that we have already solved this, thanks :)
> Sorry for the noise.
Not a problem.
greg k-h