[PATCH] readahead: Update the file_ra_state.ra_pages with each readahead operation

From: Youling Tang
Date: Mon Oct 30 2023 - 03:41:59 EST


From: Youling Tang <tangyouling@xxxxxxxxxx>

Changing the read_ahead_kb value midway through a sequential read of a
large file shows that ra->ra_pages remains unchanged: the new value
only takes effect the next time the file is opened, because
file_ra_state_init() is normally called just once, from
do_dentry_open().

In ondemand_readahead(), update ra->ra_pages from bdi->ra_pages on each
call, so that the maximum number of pages the readahead algorithm may
allocate always matches (read_ahead_kb * 1024) / PAGE_SIZE, even after
read_ahead_kb has been modified.

Signed-off-by: Youling Tang <tangyouling@xxxxxxxxxx>
---
mm/readahead.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index e815c114de21..3dbabf819187 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -554,12 +554,14 @@ static void ondemand_readahead(struct readahead_control *ractl,
{
struct backing_dev_info *bdi = inode_to_bdi(ractl->mapping->host);
struct file_ra_state *ra = ractl->ra;
- unsigned long max_pages = ra->ra_pages;
+ unsigned long max_pages;
unsigned long add_pages;
pgoff_t index = readahead_index(ractl);
pgoff_t expected, prev_index;
unsigned int order = folio ? folio_order(folio) : 0;

+ max_pages = ra->ra_pages = bdi->ra_pages;
+
/*
* If the request exceeds the readahead window, allow the read to
* be up to the optimal hardware IO size
--
2.25.1