Re: /tmp in swap space

Jamie Lokier (jamie@imbolc.ucc.ie)
Sun, 24 May 1998 05:22:10 +0100


I wrote:
: Is ext2 clever enough to avoid writing data blocks for the deleted file
: to the disk? This has always struck me as the reason why a Linux tmpfs
: would be useful.

Actually I was unclear. I meant: is ext2 clever enough to avoid _ever_
having to write data blocks for deleted files? By this I mean un-dirtying
the blocks when the file is deleted, so the data in memory no longer
competes with other uses of main memory.

Larry McVoy wrote:
> It is indeed clever enough to do exactly what you want, which is why
> a TMPFS is pointless for Linux systems which already have ext2fs.

Since you say so, I had a good look at the ext2 and generic fs code. I
can see how dirty buffers are created and eventually written to disk. I
can't see any mechanism to prevent the buffers of deleted files from
needing to be written. Am I missing it, or was my original question
phrased poorly?

> lat_fs.c in lmbench has a test which creates 1,000 files of size 10K,
> then deletes them, then repeats. The time for a create is !! 32 usecs !!
> and about !! 3 usecs !! for a delete on a 400Mhz Pentium II. Pretty darn
> good if you ask me. A 233Mhz AMD K6 is more like 780 & 72 usecs for
> create & delete.

1. That's excellent performance. However, that's only 12 meg of memory,
being repeatedly reused. It shows we can create, write and delete
files (you did actually write 10k?) quickly, with all the book-keeping,
which is good. It doesn't test if that memory is subsequently free
for other uses without writing it to disk.

2. Your figures suggest a 400MHz Pentium II is 24 times faster than a
   233MHz K6. Faster, yes, but 24 times??? Exactly 24 times? I
   cannot believe those figures!
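For anyone without lmbench to hand, here is a minimal sketch (not the
real lat_fs.c) of the test Larry describes: create a batch of 10K files,
then delete them, timing each phase. The directory name, file count and
the `f%d` naming scheme are my own assumptions for illustration; the
real benchmark's details may differ.

```c
/* Sketch of a create/delete latency test in the spirit of lat_fs.c.
 * NOT the real lmbench code -- file naming and counts are made up. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static double usecs_since(struct timeval *start)
{
    struct timeval now;
    gettimeofday(&now, NULL);
    return (now.tv_sec - start->tv_sec) * 1e6 +
           (now.tv_usec - start->tv_usec);
}

/* Create nfiles files of `size` bytes in dir, then unlink them.
 * Prints mean per-file latency for each phase; returns 0 on success. */
int create_delete_test(const char *dir, int nfiles, int size)
{
    char path[256], buf[10 * 1024];
    struct timeval t;
    int i;

    if (size > (int)sizeof(buf))
        return -1;
    memset(buf, 'x', sizeof(buf));

    gettimeofday(&t, NULL);
    for (i = 0; i < nfiles; i++) {
        int fd;
        snprintf(path, sizeof(path), "%s/f%d", dir, i);
        fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0600);
        if (fd < 0 || write(fd, buf, size) != size)
            return -1;
        close(fd);
    }
    printf("create: %.1f usec/file\n", usecs_since(&t) / nfiles);

    gettimeofday(&t, NULL);
    for (i = 0; i < nfiles; i++) {
        snprintf(path, sizeof(path), "%s/f%d", dir, i);
        if (unlink(path) != 0)
            return -1;
    }
    printf("delete: %.1f usec/file\n", usecs_since(&t) / nfiles);
    return 0;
}
```

Note that, as point 1 above says, this measures create/write/delete
book-keeping speed only -- it says nothing about whether the deleted
files' dirty buffers were ever scheduled for disk.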

> At any rate, all of these numbers would be little or no faster on TMPFS.
> Once you get the disk out of the equation, which both EXT2FS and TMPFS
> do for small tmp files, then you are purely CPU bound and the only thing
> that will show up is code paths and the code paths are unlikely to be
> dramatically different.

You are certainly right for the premise of this test, which is the "lots
of temporary files coming and going" scenario.

What I have in mind is a compile, with just a few passes one after the
other. Each pass writes out a large temporary file, and then deletes
the previous one:

1. Page in pass 1 code.
2. Write out /tmp/file1.
3. Page in pass 2 code (competing with /tmp/file1; unavoidable).
4. Read /tmp/file1, write /tmp/file2.
5. Delete /tmp/file1, nothing written to disk, memory freed immediately.
6. Page in pass 3 code, competing with /tmp/file2 but _not_ /tmp/file1.
7. etc.
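The I/O pattern above can be sketched as follows. Whether the deleted
file's dirty pages ever hit the disk is exactly the question at issue;
this code only reproduces the read-new/write-next/unlink-old sequence.
File names, pass count and sizes are illustrative assumptions.

```c
/* Toy sketch of a multi-pass compiler's /tmp usage: each pass reads
 * the previous pass's temporary file, writes a new one, and deletes
 * the old one.  Names and sizes are made up for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static int write_tmp(const char *path, const char *buf, int size)
{
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    int ok = write(fd, buf, size) == size;
    close(fd);
    return ok ? 0 : -1;
}

/* Run npasses passes, each handing `size` bytes to the next.
 * Returns 0 on success, -1 on any I/O error. */
int run_passes(const char *dir, int npasses, int size)
{
    char prev[256], cur[256];
    char *buf = malloc(size);
    int pass;

    if (!buf)
        return -1;
    memset(buf, 'x', size);

    snprintf(prev, sizeof(prev), "%s/pass1.tmp", dir);
    if (write_tmp(prev, buf, size) != 0)           /* step 2 */
        goto fail;

    for (pass = 2; pass <= npasses; pass++) {
        int fd = open(prev, O_RDONLY);             /* step 4: read old */
        if (fd < 0)
            goto fail;
        if (read(fd, buf, size) != size) {
            close(fd);
            goto fail;
        }
        close(fd);

        snprintf(cur, sizeof(cur), "%s/pass%d.tmp", dir, pass);
        if (write_tmp(cur, buf, size) != 0)        /* step 4: write new */
            goto fail;

        if (unlink(prev) != 0)                     /* step 5: delete old */
            goto fail;
        strcpy(prev, cur);
    }
    unlink(prev);
    free(buf);
    return 0;
fail:
    free(buf);
    return -1;
}
```

The point is that at step 5, only one temporary file's worth of dirty
data should be live at a time -- provided the unlink really un-dirties
the old file's buffers.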

If the buffers/pages are not un-dirtied by deleting the files in /tmp,
more memory (the size of /tmp/file1 at step 6) is required to avoid
needing swap.

> Feel free to snarf up the lat_fs.c code and try it on a Sun.

Where can I get this?

Cheers,
-- Jamie
