Re: [PATCH] f2fs: Fix direct IO handling

From: Chao Yu
Date: Sat Nov 30 2019 - 02:19:56 EST


+Cc fsdevel mailing list

On 2019/11/28 10:10, Damien Le Moal wrote:
> On 2019/11/26 17:34, Ritesh Harjani wrote:
>> Hello Damien,
>>
>> IIUC, you are trying to fix a stale data read by a DIO read for the case
>> you explained in your patch, w.r.t. a DIO write forced to be written as
>> buffered IO.
>>
>> Coincidentally, I was looking at the same code path just now.
>> So I do have a query for you/the f2fs group. The below could be a silly
>> one, as I don't understand F2FS in great detail.
>>
>> How is a DIO read protected against reading stale data w.r.t. mmap
>> writes via f2fs_vm_page_mkwrite()?
>>
>> f2fs_vm_page_mkwrite()            f2fs_direct_IO (read)
>>                                   filemap_write_and_wait_range()
                                    - writepages
                                      - lock_page
                                      - writepage
                                      - unlock_page
lock_page
>>   -> f2fs_get_blocks()
        - f2fs_map_blocks
>>                                   -> submit_bio()
>>   -> set_page_dirty()
unlock_page

I guess the lock ranges are as above, so the race can happen; however:
1) If mkwrite() creates data in a hole, direct_IO->f2fs_map_blocks should
return NEW_ADDR, which means that it is a hole in the file, so direct_IO
should read all-zero data.
2) If mkwrite() overwrites data in a block, mkwrite->f2fs_get_blocks won't
change the old block address, so direct_IO->f2fs_map_blocks can get that
block address and won't read stale data.
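
To make cases 1) and 2) concrete, below is a minimal user-space model of
what the DIO reader would observe. NEW_ADDR's role as a hole marker matches
f2fs, but map_block(), dio_read() and the block address 42 are hypothetical
stand-ins for f2fs_map_blocks() and a real on-disk address, not f2fs code:

#include <stdio.h>
#include <string.h>

#define NEW_ADDR 0xFFFFFFFFu	/* marks a not-yet-allocated block (a hole) */

/* stand-in for direct_IO->f2fs_map_blocks in the two cases above */
static unsigned int map_block(int mkwrite_hit_hole)
{
	/* case 1: mkwrite() dirtied a hole -> mapping still reports NEW_ADDR;
	 * case 2: in-place overwrite -> the old, still-valid block address */
	return mkwrite_hit_hole ? NEW_ADDR : 42;
}

static void dio_read(int mkwrite_hit_hole, char *buf, size_t len)
{
	unsigned int blkaddr = map_block(mkwrite_hit_hole);

	if (blkaddr == NEW_ADDR)
		memset(buf, 0, len);			/* hole: zeroed data */
	else
		snprintf(buf, len, "blk-%u", blkaddr);	/* old valid data */
}

int main(void)
{
	char buf[16];

	dio_read(1, buf, sizeof(buf));
	printf("case 1 (hole):      %s\n", buf[0] ? buf : "(zeroes)");
	dio_read(0, buf, sizeof(buf));
	printf("case 2 (overwrite): %s\n", buf);
	return 0;
}

In neither case can the reader see uninitialized or stale media data, which
is why the mkwrite() race above looks harmless.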

But I suspect we could read stale data with the race condition below:

kworker                             DIO reader
- writepages
                                    - f2fs_map_blocks
                                      - get old block address
 - writepage
   trigger OPU, update old block address to new one

someone triggers a checkpoint, so the data at the old block address
becomes stale, and then anyone else can write new data there
                                    - submit_bio
                                      get stale data
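
To spell the window out, here is a compressed user-space model of the
timeline above, with the two actors' steps run sequentially in the racy
order; disk[], mapping and the data strings are hypothetical stand-ins
for the f2fs block address space, not real f2fs code:

#include <stdio.h>

static const char *disk[2] = { "old-data", "-" };	/* two block slots */
static int mapping;		/* logical block currently maps to slot 0 */

int main(void)
{
	/* DIO reader: f2fs_map_blocks returns the old block address */
	int rd_addr = mapping;

	/* kworker: writepage triggers OPU, the data moves to a new block
	 * and the mapping is updated to point at it */
	disk[1] = "new-data";
	mapping = 1;

	/* after a checkpoint the old block is reusable, so anyone may
	 * rewrite it and its content is no longer this file's data */
	disk[0] = "someone-elses-data";

	/* DIO reader: submit_bio finally reads the saved old address */
	printf("DIO read returns: %s\n", disk[rd_addr]);	/* stale! */
	return 0;
}

The read completes without error, yet it returns data the file no
longer owns.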

Or am I missing something, e.g. that the VFS has already done such
synchronization?

Thanks,

>>
>> Is the above race possible with the current f2fs code?
>> i.e., could f2fs_direct_IO read stale data from the blocks
>> which were allocated due to a mmap fault?
>
> The faulted page is locked until the fault is fully processed, so direct
> IO has to wait for that to complete first.
>
>>
>> Am I missing something here?
>>
>> -ritesh
>>
>> On 11/26/19 1:27 PM, Damien Le Moal wrote:
>>> f2fs_preallocate_blocks() identifies direct IOs using the IOCB_DIRECT
>>> flag for a kiocb structure. However, the file system direct IO handler
>>> function f2fs_direct_IO() may have decided that a direct IO has to be
>>> executed as a buffered IO using the function f2fs_force_buffered_io().
>>> This is the case, for instance, for volumes including zoned block devices
>>> and for unaligned write IOs with LFS mode enabled.
>>>
>>> These two different methods of identifying direct IOs can result in
>>> inconsistencies, generating stale data accesses for direct reads after a
>>> direct IO write that is treated as a buffered write. Fix this
>>> inconsistency by combining the IOCB_DIRECT flag test with the result
>>> of f2fs_force_buffered_io().
>>>
>>> Reported-by: Javier Gonzalez <javier@xxxxxxxxxxx>
>>> Signed-off-by: Damien Le Moal <damien.lemoal@xxxxxxx>
>>> ---
>>> fs/f2fs/data.c | 4 +++-
>>> 1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>> index 5755e897a5f0..8ac2d3b70022 100644
>>> --- a/fs/f2fs/data.c
>>> +++ b/fs/f2fs/data.c
>>> @@ -1073,6 +1073,8 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
>>>  	int flag;
>>>  	int err = 0;
>>>  	bool direct_io = iocb->ki_flags & IOCB_DIRECT;
>>> +	bool do_direct_io = direct_io &&
>>> +			!f2fs_force_buffered_io(inode, iocb, from);
>>>
>>>  	/* convert inline data for Direct I/O*/
>>>  	if (direct_io) {
>>> @@ -1081,7 +1083,7 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
>>>  		return err;
>>>  	}
>>>
>>> -	if (direct_io && allow_outplace_dio(inode, iocb, from))
>>> +	if (do_direct_io && allow_outplace_dio(inode, iocb, from))
>>>  		return 0;
>>>
>>>  	if (is_inode_flag_set(inode, FI_NO_PREALLOC))
>>>
>>
>>
>
>