Re: /tmp in swap space

Larry McVoy (lmcvoy@dnai.com)
Mon, 25 May 1998 23:50:23 -0600


: I wrote:
: : Is ext2 clever enough to avoid writing data blocks for the deleted file
: : to the disk? This has always struck me as the reason why a Linux tmpfs
: : would be useful.
:
: Actually I was unclear. I meant, is ext2 clever to avoid _ever_ having
: to write data blocks for the deleted files. By this I mean un-dirtying
: the blocks when the file is deleted, so the data in memory no longer
: competes with other uses of main memory.

That's a good question. I very much doubt it. Because a block can
contain metadata for more than one file, the system would have to keep
track of who was using the block and then notice they had all gone away.
I'll bet you dollars to donuts that it does write the blocks. I'll also
bet that it makes no performance difference that you can measure.

: 1. That's excellent performance. However, that's only 12 meg of memory,
: being repeatedly reused. It shows we can create, write and delete
: files (you did actually write 10k?) quickly, with all the book-keeping,
: which is good. It doesn't test if that memory is subsequently free
: for other uses without writing it to disk.

Wait a second. I misunderstood you. I thought you were asking about
metadata. If you are worried about the actual data, of course it doesn't
write them; they are no longer associated with a file. Writing them
would be pointless. I haven't looked recently, but here's what you do:
go look at what happens when you call ftruncate(). That code path should
go find all data blocks associated with the inode and invalidate them.
The invalidation should clear the dirty bit. Not doing so would lead to
chaos.

: 2. Your figures suggest a 400MHz Pentium II is 24 times faster than a
: 255MHz K6. Faster yes, but 24 times??? Exactly 24 times? I
: cannot believe those figures!

It's not just the K6, it is also 2.0.31 vs 2.1.89 (?, 2.1.something). The
2.1.x tree has some performance changes which really shine under these
sorts of tests. So, no, the P2 isn't that much faster, but the combo of
the P2 & the kernel changes is that much faster. It certainly won't
show up as that dramatic a change unless you have an application that
creates and deletes files in a tight loop.

: What I have in mind is a compile, with just a few passes one after the
: other. Each pass writes out a large temporary file, and then deletes
: the previous one:
:
: 1. Page in pass 1 code.
: 2. Write out /tmp/file1.
: 3. Page in pass 2 code (competing with /tmp/file1; unavoidable).
: 4. Read /tmp/file1, write /tmp/file2.
: 5. Delete /tmp/file1, nothing written to disk, memory freed immediately.
: 6. Page in pass 3 code, competing with /tmp/file2 but _not_ /tmp/file1.
: 7. etc.
:
: If the buffers/pages are not un-dirtied by deleting the files in /tmp,
: more memory (the size of /tmp/file1 at step 6) is required to avoid
: needing swap.

The memory gets freed immediately. You can try this: write a tiny program
which writes a file about 80% the size of memory, deletes it, and then
time the rate at which it can do it the second time. It should go at
bcopy speeds.

: > Feel free to snarf up the lat_fs.c code and try it on a Sun.
:
: Where can I get this?

Right now:

http://www.kernel.org/pub/software/lmbench/

and in about two weeks:

http://www.bitmover.com/lmbench

--lm

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu