On Sun, Feb 03, 2002 at 09:05:04AM -0600, Stephen Lord wrote:
> I agree it is not a big issue in this case - my interpretation of
> tails was that the end of any file could be packed, but if it is only
> small files.....

But you can't mmap (say) a 1k file right now... so this isn't a
problem yet, but at some point a larger mmap granularity would be
nice --- especially on architectures with small (or untagged) TLBs.
I'm guessing that, so as not to break backwards compatibility, we
will have to support variable page sizes (creating a plethora of
nasties, I imagine).

> They are invalidated at the start of the I/O

Cool. That much I'd like to see under Linux.

> but page faults are not blocked out for the duration of the I/O so
> the coherency is weak.

I was thinking this would also be good: basically invalidate those
pages and remove them from the VMAs, marking them as unusable pending
I/O completion --- the logic here being that if you fault on an
invalidated page during the I/O, you deserve to block indefinitely
until the I/O completes.
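
To make that concrete, here is a crude user-space model of the
behaviour I mean --- it is not kernel code, and every name in it is
made up for illustration. The point is just that a fault on an
invalidated page sleeps until the direct I/O path signals completion:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Toy stand-in for a page-cache page: 'busy' means the page has been
 * invalidated because a direct I/O against it is in flight. */
struct toy_page {
	pthread_mutex_t lock;
	pthread_cond_t  done;
	int             busy;
};

static struct toy_page pg = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0
};

/* What the fault path would do: block while the page is invalidated. */
static void fault_on(struct toy_page *p)
{
	pthread_mutex_lock(&p->lock);
	while (p->busy)
		pthread_cond_wait(&p->done, &p->lock);
	pthread_mutex_unlock(&p->lock);
	printf("fault satisfied after the I/O completed\n");
}

/* What the direct I/O path would do: invalidate, do the I/O, release. */
static void *direct_io(void *unused)
{
	(void)unused;

	pthread_mutex_lock(&pg.lock);
	pg.busy = 1;			/* invalidate, pull from the VMAs */
	pthread_mutex_unlock(&pg.lock);

	sleep(1);			/* the I/O itself */

	pthread_mutex_lock(&pg.lock);
	pg.busy = 0;			/* I/O done, page usable again */
	pthread_cond_broadcast(&pg.done);
	pthread_mutex_unlock(&pg.lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, direct_io, NULL);
	usleep(100000);			/* let the "I/O" start first */
	fault_on(&pg);			/* blocks for roughly a second */
	pthread_join(t, NULL);
	return 0;
}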

> However, if an application is doing a combination of mmapped and
> direct I/O to a file at the same time, then it should generally
> have some form of user space synchronization anyway.

I hadn't considered that. I imagined an application doing either but
not both, and the kernel enforcing this. However, when you want to
mmap a large file, you may want to manipulate some pages using mmap
whilst writing others with O_DIRECT. Although in such cases you could
arguably use multiple mappings.
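
Something like this is what I have in mind --- only a sketch,
assuming the two ranges are page-aligned and do not overlap (the file
name and sizes are arbitrary, and the file has to live on a
filesystem that supports O_DIRECT); anything touching the same range
from both sides would still want its own user-space locking:

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long psz = sysconf(_SC_PAGESIZE);
	char *map, *buf;

	int fd = open("./odirect-demo.dat",
		      O_RDWR | O_CREAT | O_DIRECT, 0600);
	if (fd < 0 || ftruncate(fd, 2 * psz) < 0) {
		perror("open/ftruncate");
		return 1;
	}

	/* Page 0 is manipulated through a shared mapping... */
	map = mmap(NULL, psz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	strcpy(map, "written via mmap");
	msync(map, psz, MS_SYNC);

	/* ...while page 1 is written with O_DIRECT.  The buffer, the
	 * file offset and the length all have to be suitably aligned;
	 * page size is always enough. */
	if (posix_memalign((void **)&buf, psz, psz)) {
		fprintf(stderr, "posix_memalign failed\n");
		return 1;
	}
	memset(buf, 0, psz);
	strcpy(buf, "written via O_DIRECT");
	if (pwrite(fd, buf, psz, psz) != psz)
		perror("pwrite");

	munmap(map, psz);
	free(buf);
	close(fd);
	return 0;
}
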
--cw