Re: Large tmp files, async flush and still lots of I/O?

DAVID BALAZIC (david.balazic@uni-mb.si)
Fri, 04 Jun 1999 14:46:31 +0100 (MET)


> > Your belief that this shouldn't happen if the file has already been deleted
> > sounds reasonable. I think the reason it doesn't work is due to Unix unlink
> > semantics. To be precise, unlink() does _not_ delete a file. Rather, a file is
> > deleted when unlink() has been used successfully on all its names and all
> > processes have closed it. Thus, technically, in your example, the file has not
> > been deleted, so the kernel is right to commit it to disk.
>
> But the inode has a link count of 0, which tells the kernel that it won't be
> accessible after it is closed, therefore the kernel doesn't need to bother
> writing the data to disk.
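
Just to make the unlink() semantics from the quote concrete, here is a minimal
sketch (not from the original mail; the file name is a placeholder) of the
open-then-unlink pattern being discussed. fstat() on the still-open descriptor
shows the link count at 0 while the file remains perfectly usable:

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat st;
        int fd;

        /* "scratch.tmp" is just a placeholder name for this sketch. */
        fd = open("scratch.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Remove the only name.  The file is NOT gone yet: we still
         * hold an open descriptor on it. */
        unlink("scratch.tmp");

        /* fstat() on the open descriptor still works; the link count
         * is now 0, which is exactly the state discussed above. */
        if (fstat(fd, &st) == 0)
            printf("st_nlink = %ld\n", (long) st.st_nlink);

        /* We can still read and write through fd ...               */
        write(fd, "still usable\n", 13);

        /* ... and only this close() actually frees the inode+data. */
        close(fd);
        return 0;
    }
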

I _guess_ this is what is happening:

The file is created: a few meta-data blocks are created in RAM and
flagged 'dirty' (so they must be written out to disk sooner or later).
The file contents are written to RAM, all flagged 'dirty' of course.
The file is deleted - the meta-data blocks are changed and flagged 'dirty',
but the file contents are not touched any more!
So when bdflush (or whoever) wakes up, it sees a big pile of 'dirty'
blocks (the file contents, though it doesn't know that - to it they
are just dirty blocks) and starts writing them out to disk.

But I may be completely wrong here ... ;-)
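
A rough way to test this guess (again, not from the original mail; the 64 MB
size and the 60-second sleep are arbitrary assumptions, and the actual flush
interval depends on the bdflush settings of the kernel in question) would be
to write a pile of scratch data, unlink it right away, keep the descriptor
open, and watch `vmstat 1` in another terminal to see whether the dirty data
blocks still get written out:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>

    #define CHUNK   (64 * 1024)
    #define NCHUNKS 1024            /* ~64 MB of scratch data (arbitrary) */

    int main(void)
    {
        static char buf[CHUNK];
        int fd, i;

        memset(buf, 'x', sizeof(buf));

        /* Steps 1+2 of the guess: creating and filling the file dirties
         * meta-data and data blocks in the buffer cache. */
        fd = open("bigscratch.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        for (i = 0; i < NCHUNKS; i++)
            write(fd, buf, sizeof(buf));

        /* Step 3: delete the file right away.  Only the meta-data blocks
         * get touched; the data blocks stay dirty in RAM. */
        unlink("bigscratch.tmp");

        /* Keep the descriptor open and wait well past the flush interval.
         * If the guess is right, bdflush will still write the (now useless)
         * dirty data blocks out -- visible in the block-out column of
         * `vmstat 1` in another terminal. */
        sleep(60);

        close(fd);
        return 0;
    }
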

--
David Balazic , student
E-mail   : 1stein@writeme.com     |     living in  sLOVEnija
home page: http://surf.to/stein
Computer: Amiga 1200 + Quantum LPS-340AT
--
