Re: [PATCH] afs: Don't unlock fetched data pages until the op completes successfully
From: Matthew Wilcox
Date: Sun May 17 2020 - 17:08:18 EST
On Sun, May 17, 2020 at 09:21:05PM +0100, David Howells wrote:
> Don't call req->page_done() on each page as we finish filling it with the
> data coming from the network. Whilst this might speed up the application a
> bit, it's a problem if there's a network failure and the operation has to
> be reissued.
It's readpages, which by definition is called for pages that the
application is _not_ currently waiting for. Now, if the application
is multithreaded and happens to want pages that are currently under
->readpages, then that's going to be a problem (but also unlikely).
Also if the application overruns the readahead window then it'll have
to wait a little longer (but we ramp up the readahead window, so this
should be a self-correcting problem).
> If this happens, an oops occurs because afs_readpages_page_done() clears
> the pointer to each page it unlocks and when a retry happens, the pointers
> to the pages it wants to fill are now NULL (and the pages have been
> unlocked anyway).
I mean, you could check for NULL pointers and not issue the I/O for that
region ... but it doesn't seem necessary.
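(If you ever did want to go that way, it would just be a skip in the loop that
issues the fetch -- purely illustrative, and assuming the pages[] array and
nr_pages in struct afs_read:

	for (i = 0; i < req->nr_pages; i++) {
		if (!req->pages[i])
			continue;	/* completed on a previous attempt */
		/* ... include page i in the range to refetch ... */
	}

but as above, it doesn't seem worth it for this case.)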
> Instead, wait till the operation completes successfully and only then
> release all the pages after clearing any terminal gap (the server can give
> us less data than we requested as we're allowed to ask for more than is
> available).
s/release/mark up to date/
> +	if (req->page_done)
> +		for (req->index = 0; req->index < req->nr_pages; req->index++)
> +			req->page_done(req);
> +
I'd suggest doing one call rather than N and putting the page iteration
inside the callback. But this patch is appropriate for this late in
the -rc series, just something to consider for the future.
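Something along these lines (untested sketch only -- the function name and a
->done pointer replacing ->page_done are made up, and I'm going from memory on
the afs_read fields):

	/* One completion callback that walks the whole request,
	 * rather than being invoked once per page. */
	static void afs_readpages_done(struct afs_read *req)
	{
		unsigned int i;

		for (i = 0; i < req->nr_pages; i++) {
			struct page *page = req->pages[i];

			if (!page)
				continue;
			req->pages[i] = NULL;
			SetPageUptodate(page);
			unlock_page(page);
			put_page(page);
		}
	}

and then the op completion path makes a single call:

	if (req->done)
		req->done(req);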
You might even want to use a bit in the req to indicate whether this is
a readahead request ... that's the only user of the ->page_done callback
that I can find.
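For instance (again just a sketch; the flag name and bit number are invented):

	struct afs_read {
		...
		unsigned long		flags;
#define AFS_READ_READAHEAD	0	/* set by afs_readpages() */
		...
	};

and at op completion:

	if (test_bit(AFS_READ_READAHEAD, &req->flags))
		afs_readpages_done(req);

which would let you drop ->page_done entirely.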
Anyway,
Reviewed-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>