Is that really a problem? Isn't the trade-off worth the RAM
requirements? I knew this would be brought up when I decided
that caching the data was necessary, but it is still a
reasonable approach. If you want the data pulled out of
/proc to be consistent, atomicity is necessary.
Besides, the data is cached only while the file is held open.
When the file is closed, all that memory is freed. 3.2MB is
a pittance today. Anyone who is using 40K routes can afford
the US$12 for another 4MB.
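
To make the idea concrete, here is a rough userspace sketch of the
snapshot-on-open approach I have in mind. The names (route_entry,
table_open, and so on) are made up for illustration; this is not the
actual kernel code:

/* Snapshot-on-open sketch: copy the whole table once when the "file"
 * is opened, serve every read from that private copy, and free the
 * copy on close. All names here are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct route_entry { unsigned dst, gw, mask; };

/* stands in for the live kernel routing table */
static struct route_entry live_table[4] = {
    { 1, 0, 24 }, { 2, 0, 24 }, { 3, 0, 16 }, { 4, 0, 8 }
};
static size_t live_count = 4;

struct snapshot { struct route_entry *entries; size_t count; };

/* "open": one atomic copy of the current state */
static struct snapshot *table_open(void)
{
    struct snapshot *s = malloc(sizeof(*s));
    /* in the kernel this copy would be made with the table locked */
    s->entries = malloc(live_count * sizeof(*s->entries));
    memcpy(s->entries, live_table, live_count * sizeof(*s->entries));
    s->count = live_count;
    return s;
}

/* "close": the cached copy goes away with the open file */
static void table_close(struct snapshot *s)
{
    free(s->entries);
    free(s);
}

int main(void)
{
    struct snapshot *s = table_open();
    /* changes to live_table from here on are invisible to the reader,
     * so the view stays internally consistent */
    for (size_t i = 0; i < s->count; i++)
        printf("route %zu: dst=%u\n", i, s->entries[i].dst);
    table_close(s);
    return 0;
}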
The approach the kernel uses now gives a close approximation
(with possible duplicate or missing information) of the current
kernel state. And it really isn't the "current" kernel state,
which implies a discrete point in time, but rather a fuzzy
image of the kernel state over a period of time. How useful can
this be?
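
For anyone who hasn't been bitten by this, here is a toy illustration
(plain userspace C, made-up names) of how walking a table by offset
across separate reads can report one entry twice while never seeing a
newly added one:

/* The reader remembers an index between "read" calls; an entry is
 * inserted at the head in between, so the old entries shift and the
 * reader reports one of them twice. */
#include <stdio.h>

#define MAX 8

static int table[MAX] = { 10, 20, 30, 40 };
static int count = 4;

/* copy up to n entries starting at *pos into out, advancing *pos */
static int read_chunk(int *pos, int *out, int n)
{
    int copied = 0;
    while (copied < n && *pos < count)
        out[copied++] = table[(*pos)++];
    return copied;
}

/* insert a new entry at the head, like a new route being added */
static void insert_front(int value)
{
    for (int i = count; i > 0; i--)
        table[i] = table[i - 1];
    table[0] = value;
    count++;
}

int main(void)
{
    int pos = 0, buf[MAX], n, i;

    n = read_chunk(&pos, buf, 2);        /* reader sees 10, 20 */
    for (i = 0; i < n; i++) printf("%d ", buf[i]);

    insert_front(5);                     /* table changes under the reader */

    n = read_chunk(&pos, buf, MAX);      /* reader now sees 20, 30, 40:  */
    for (i = 0; i < n; i++)              /* 20 is reported twice and the */
        printf("%d ", buf[i]);           /* new entry 5 is never seen    */
    printf("\n");
    return 0;
}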
The current proc code was the best solution when RAM was limited.
Today's systems have much more RAM than in the past. Hacks like
this should be eliminated now that the majority of systems have
the resources to do "the right thing." (TM)
If this is really an issue, implementing backwards compatibility
for memory-critical proc routines is trivial.
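
By way of illustration only, the fallback could be as simple as a
compile-time switch between the two read paths; CONFIG_PROC_SNAPSHOT
and both functions below are hypothetical:

#include <stdio.h>

#define CONFIG_PROC_SNAPSHOT 1  /* set to 0 on memory-critical systems */

static void read_routes_snapshot(void)    { puts("cached, consistent view"); }
static void read_routes_incremental(void) { puts("old low-memory view"); }

int main(void)
{
#if CONFIG_PROC_SNAPSHOT
    read_routes_snapshot();
#else
    read_routes_incremental();
#endif
    return 0;
}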
Rob
(rriggs@tesser.com)