On 04/01/07, Hua Zhong <hzhong@xxxxxxxxx> wrote:
> > I see that as a good argument _not_ to allow O_DIRECT on
> > tmpfs, which inevitably impacts cache, even if O_DIRECT were
> > requested.
> >
> > But I'd also expect any app requesting O_DIRECT in that way,
> > as a caring citizen, to fall back to going without O_DIRECT
> > when it's not supported.
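
Something like this untested sketch would do that graceful fallback
(the helper name and path are just placeholders; Linux reports an
unsupported O_DIRECT as EINVAL from open(2)):

/* Sketch only: try O_DIRECT first, fall back to a plain open if the
 * filesystem rejects the flag. */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>

static int open_maybe_direct(const char *path, int flags)
{
        int fd = open(path, flags | O_DIRECT);

        if (fd < 0 && errno == EINVAL) {
                /* filesystem refused O_DIRECT: degrade gracefully
                 * to ordinary cached I/O */
                fd = open(path, flags);
        }
        return fd;
}

int main(void)
{
        int fd = open_maybe_direct("testfile", O_RDONLY);

        if (fd < 0)
                perror("open");
        return 0;
}
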
>
> According to "man 2 open" on my system:
>
> O_DIRECT
> Try to minimize cache effects of the I/O to and from this file.
> In general this will degrade performance, but it is useful in
> special situations, such as when applications do their own
> caching. File I/O is done directly to/from user space buffers.
> The I/O is synchronous, i.e., at the completion of the read(2)
> or write(2) system call, data is guaranteed to have been
> transferred. Under Linux 2.4 transfer sizes, and the alignment
> of user buffer and file offset, must all be multiples of the
> logical block size of the file system. Under Linux 2.6
> alignment to 512-byte boundaries suffices.
>
> A semantically similar interface for block devices is described
> in raw(8).
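
To illustrate the alignment rules quoted above, a rough, untested
sketch (the file name is only a placeholder): allocate a 512-byte
aligned buffer with posix_memalign() and keep offset and length at
multiples of 512.

/* Illustrative only: O_DIRECT on 2.6 wants buffer, offset and
 * transfer size aligned to 512-byte boundaries. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        const size_t len = 4096;        /* multiple of 512 */
        void *buf;
        int fd;

        /* 512-byte aligned user buffer */
        if (posix_memalign(&buf, 512, len) != 0) {
                fprintf(stderr, "posix_memalign failed\n");
                return 1;
        }

        fd = open("testfile", O_RDONLY | O_DIRECT);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* offset 0 and len are both multiples of 512, so this read
         * satisfies the documented alignment constraints */
        if (read(fd, buf, len) < 0)
                perror("read");

        close(fd);
        free(buf);
        return 0;
}
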
>
> This says nothing about a (probably disk-based) persistent backing store. I don't see why tmpfs has to conflict with it.
>
> So I'd argue that it makes more sense to support O_DIRECT on tmpfs, as the memory IS the backing store.
>
I'd agree. O_DIRECT means data will go direct to the backing store,
so if RAM *is* the backing store, as in the tmpfs case, then I *don't*
see why O_DIRECT should fail for it...