[PATCH -next] mm/filemap: fix a data race in filemap_fault()

From: Qian Cai
Date: Mon Feb 10 2020 - 12:00:47 EST


The mmap_miss field of struct file_ra_state can be accessed concurrently
during page faults, as noticed by KCSAN:

BUG: KCSAN: data-race in filemap_fault / filemap_map_pages

write to 0xffff9b1700a2c1b4 of 4 bytes by task 3292 on cpu 30:
filemap_fault+0x920/0xfc0
do_sync_mmap_readahead at mm/filemap.c:2384
(inlined by) filemap_fault at mm/filemap.c:2486
__xfs_filemap_fault+0x112/0x3e0 [xfs]
xfs_filemap_fault+0x74/0x90 [xfs]
__do_fault+0x9e/0x220
do_fault+0x4a0/0x920
__handle_mm_fault+0xc69/0xd00
handle_mm_fault+0xfc/0x2f0
do_page_fault+0x263/0x6f9
page_fault+0x34/0x40

read to 0xffff9b1700a2c1b4 of 4 bytes by task 3313 on cpu 32:
filemap_map_pages+0xc2e/0xd80
filemap_map_pages at mm/filemap.c:2625
do_fault+0x3da/0x920
__handle_mm_fault+0xc69/0xd00
handle_mm_fault+0xfc/0x2f0
do_page_fault+0x263/0x6f9
page_fault+0x34/0x40

Reported by Kernel Concurrency Sanitizer on:
CPU: 32 PID: 3313 Comm: systemd-udevd Tainted: G W L 5.5.0-next-20200210+ #1
Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 07/10/2019
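
Reduced from the two stacks above, the racing pair is a plain
read-modify-write of the miss counter in the fault path against a plain
load and decrement in the map-pages path:

	/* CPU 30, do_sync_mmap_readahead(): plain load, then plain store */
	if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
		ra->mmap_miss++;

	/* CPU 32, filemap_map_pages(): concurrent plain load and store */
	if (file->f_ra.mmap_miss > 0)
		file->f_ra.mmap_miss--;

Neither access is annotated and no lock orders the two, so KCSAN
reports the pair.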

ra.mmap_miss contributes to readahead decisions, so a data race on it is
undesirable. Since the stores are aligned and no wider than a word,
assume they are safe as-is. Fix the reports by adding READ_ONCE() for
the loads, except for the compare-to-zero sites, which are safe and are
instead marked with data_race().
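
For reference, the two annotations make different promises; a minimal
sketch of the intent (the comments are mine, not from the patch):

	/*
	 * READ_ONCE() forces a single, untorn load that the compiler
	 * cannot refetch or fuse; the value may still be stale by the
	 * time it is compared.
	 */
	if (READ_ONCE(ra->mmap_miss) > MMAP_LOTSAMISS)
		return fpin;

	/*
	 * data_race() leaves the access plain but tells KCSAN the race
	 * is known and tolerated; the worst case is a missed or extra
	 * decrement of a heuristic counter.
	 */
	if (data_race(ra->mmap_miss > 0))
		ra->mmap_miss--;

Since mmap_miss only steers the readahead heuristic, an occasionally
stale value is acceptable; the annotations buy well-defined single
accesses and a documented, silenced KCSAN report.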

Signed-off-by: Qian Cai <cai@xxxxxx>
---
mm/filemap.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1784478270e1..b6c1d37f7ea3 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2380,14 +2380,14 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	}
 
 	/* Avoid banging the cache line if not needed */
-	if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
+	if (READ_ONCE(ra->mmap_miss) < MMAP_LOTSAMISS * 10)
 		ra->mmap_miss++;
 
 	/*
 	 * Do we miss much more than hit in this file? If so,
 	 * stop bothering with read-ahead. It will only hurt.
 	 */
-	if (ra->mmap_miss > MMAP_LOTSAMISS)
+	if (READ_ONCE(ra->mmap_miss) > MMAP_LOTSAMISS)
 		return fpin;
 
 	/*
@@ -2418,7 +2418,7 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
 	/* If we don't want any read-ahead, don't bother */
 	if (vmf->vma->vm_flags & VM_RAND_READ)
 		return fpin;
-	if (ra->mmap_miss > 0)
+	if (data_race(ra->mmap_miss > 0))
 		ra->mmap_miss--;
 	if (PageReadahead(page)) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
@@ -2622,7 +2622,7 @@ void filemap_map_pages(struct vm_fault *vmf,
 		if (page->index >= max_idx)
 			goto unlock;
 
-		if (file->f_ra.mmap_miss > 0)
+		if (data_race(file->f_ra.mmap_miss > 0))
 			file->f_ra.mmap_miss--;
 
 		vmf->address += (xas.xa_index - last_pgoff) << PAGE_SHIFT;
--
1.8.3.1