A buffer/page cache question

Richard S. Gray (sgray@preferred.com)
Tue, 28 Oct 1997 18:03:56 -0500


Hi,

I'm sorry this is a long message, but this thing with the cache is
driving me crazy. If you have time, please take a look at the questions
below; I would be very grateful.

1. Is the cache divided into the five subcomponents listed below?
a.) Buffer Cache
b.) Page Cache
c.) Swap Cache
d.) Inode Cache
e.) Directory Cache

2. What's bothering me is that I can't seem to track the flow of data
through the various caches. I'm primarily concerned with the Buffer,
Page and Swap caches. I've been told that only file system meta-data
goes through the Buffer Cache. I'm not saying that this isn't correct.
What I am saying is that I don't understand why. Doesn't this mean that
if I wanted to read a single database record, for instance, I would be
required to read an entire 4 kilobyte page just to get to that record?
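
To make this concrete, here is the kind of access I have in mind (a
minimal user-space sketch; the record layout and file name are made up
for illustration):

/* Minimal user-space sketch of the database-record read I have in mind.
 * The record layout and file name are made up for illustration only. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

struct record {                 /* hypothetical small database record */
    long key;
    char payload[56];
};

int main(void)
{
    struct record rec;
    int fd = open("records.dat", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Ask for record number 1000: only sizeof(rec) bytes... */
    lseek(fd, 1000L * sizeof(rec), SEEK_SET);
    if (read(fd, &rec, sizeof(rec)) != (ssize_t) sizeof(rec)) {
        perror("read");
        return 1;
    }
    /* ...but, as I understand it, the kernel still brings the whole
     * enclosing block/page (1K buffer or 4K page) into its cache. */
    printf("key = %ld\n", rec.key);
    close(fd);
    return 0;
}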

3. How does data get into the Page Cache? Does the Page Cache
directly interface with the device driver? If the above assertion that
only fs meta-data goes through the Buffer Cache is true, then that would
imply that the Page Cache does directly interface with the device
driver. If such an interface exists, is it found in the file_operations
structure associated with the appropriate device_struct, which is in
turn found in the blkdevs vector?
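
For reference, this is roughly the arrangement I am picturing from my
reading of fs/devices.c (a simplified stand-alone sketch, not the real
kernel declarations; the read/write signatures are only approximate):

/* Simplified sketch of the blkdevs arrangement I am picturing
 * (loosely modelled on fs/devices.c; not the real declarations). */
#include <stddef.h>

#define MAX_BLKDEV 128

struct file;    /* opaque here */
struct inode;   /* opaque here */

struct file_operations {
    long (*read)(struct inode *, struct file *, char *, unsigned long);
    long (*write)(struct inode *, struct file *, const char *, unsigned long);
    /* ... lseek, ioctl, open, release, etc. ... */
};

struct device_struct {
    const char *name;
    struct file_operations *fops;
};

static struct device_struct blkdevs[MAX_BLKDEV];

/* Look up the file_operations for a block major number, the way I
 * imagine the Page Cache (or anything else) would have to do it if it
 * talked to the driver through this table. */
struct file_operations *lookup_blkfops(unsigned int major)
{
    if (major >= MAX_BLKDEV)
        return NULL;
    return blkdevs[major].fops;
}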

4. I've been told, and believe, that the Page Cache is being used as a
read-ahead cache. This implies that if a requested page is read into
memory from disk, then several additional pages will subsequently be read
in as well. I think the number of pages to be prefetched can be adjusted
through mlord's "hdparm" utility. Is this why, when a page fault is
generated and the Physical Page Frame Number associated with the
appropriate Page Table Entry is equal to zero, you check whether the
requested page has already been prefetched into the Page Cache? As I
understand this, we're saying "I've prefetched so many pages and those
pages will remain in the cache until the number of free pages within the
system reaches a predefined minimum." Once there are fewer than the
predefined minimum number of free pages in the system, kswapd (the
kernel swap daemon) attempts to shrink the Buffer Cache and the Page
Cache to obtain the required number of free pages. I wonder if there
is a way to prevent kswapd from shrinking the Buffer/Page cache below a
predefined limit of, say, 10 megs.
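
Just so it is clear what I mean, here is a toy stand-alone model of the
two checks I am describing; the names (find_page_in_cache,
FREE_PAGES_MIN) and numbers are mine, not the kernel's:

/* Toy model of the checks I describe in question 4; the names and
 * numbers here are mine, not the kernel's. */
#include <stdio.h>

#define PAGE_CACHE_SLOTS 1024
#define FREE_PAGES_MIN   32     /* the "predefined minimum" I mention */

struct page {
    unsigned long inode_no;
    unsigned long offset;       /* page-aligned file offset */
    int valid;
};

static struct page page_cache[PAGE_CACHE_SLOTS];
static int nr_free_pages = 20;  /* pretend we are below the minimum */

/* On a fault where the PTE's page frame number is zero, first see
 * whether read-ahead already pulled the page into the Page Cache. */
static struct page *find_page_in_cache(unsigned long inode_no,
                                       unsigned long offset)
{
    int i;
    for (i = 0; i < PAGE_CACHE_SLOTS; i++)
        if (page_cache[i].valid &&
            page_cache[i].inode_no == inode_no &&
            page_cache[i].offset == offset)
            return &page_cache[i];
    return NULL;                /* not prefetched: must go to disk */
}

int main(void)
{
    if (find_page_in_cache(42, 0x3000) == NULL)
        printf("page not in cache: would schedule disk I/O\n");

    /* The other half of my question: when free pages drop below the
     * minimum, the swap daemon tries to shrink the caches. */
    if (nr_free_pages < FREE_PAGES_MIN)
        printf("below minimum: swap daemon would shrink the caches\n");
    return 0;
}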

5. If you explain anything, please try to describe how data moves
between the device driver and the rest of the OS. I understand that the
Buffer Cache makes use of a shared queue. Block device read/write
requests are placed on this queue. The device driver then services each
request and removes it from the queue. I also understand that the
file_operations structure associated with the block device special file
can be used to request that data be sent to/from the block device. What
I'm not sure of is whether the Page Cache interfaces directly with the
device driver. If the Page Cache does interface with the device driver,
then what mechanisms are used?
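
Here is the picture I have of that shared queue, again as a simplified
stand-alone sketch (loosely after struct request and ll_rw_block; the
make_request/service_one_request functions are my own invention):

/* Simplified model of the shared request queue I describe above
 * (loosely after struct request / ll_rw_block; not the real code). */
#include <stdio.h>
#include <stdlib.h>

#define READ  0
#define WRITE 1

struct request {
    int cmd;                    /* READ or WRITE */
    unsigned long sector;
    char *buffer;               /* data goes to/from the buffer cache */
    struct request *next;
};

static struct request *request_queue;  /* head of the shared queue */

/* File system / buffer cache side: queue a block request. */
static void make_request(int cmd, unsigned long sector, char *buffer)
{
    struct request *req = malloc(sizeof(*req));
    if (!req)
        return;
    req->cmd = cmd;
    req->sector = sector;
    req->buffer = buffer;
    req->next = request_queue;
    request_queue = req;
}

/* Device driver side: take a request off the queue and service it. */
static void service_one_request(void)
{
    struct request *req = request_queue;
    if (!req)
        return;
    request_queue = req->next;
    printf("%s sector %lu\n", req->cmd == READ ? "reading" : "writing",
           req->sector);
    /* ... a real driver would program the hardware here ... */
    free(req);
}

int main(void)
{
    char buf[512];
    make_request(READ, 100, buf);
    service_one_request();
    return 0;
}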

Any information would be very much appreciated.

Thanks
Scott Gray