As mentioned in [1], it seems odd to check for NULL elements in
the middle of bulk page allocation, and for most of the existing
users the caller can do a better job by bulk allocating pages
into the whole array sequentially, without checking for NULL
elements first.
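
To make the assumption concrete: alloc_pages_bulk() currently skips
array slots that already hold a page and returns the total number of
populated slots, so a caller may simply retry on the same, partially
filled array. A rough sketch of such a caller (hypothetical helper,
not quoted from any in-tree user):

	/*
	 * Hypothetical caller relying on the NULL-skipping semantics:
	 * page_array may already contain some pages, and only the
	 * remaining NULL slots are expected to be filled.
	 */
	static int fill_page_array(gfp_t gfp, unsigned int nr_pages,
				   struct page **page_array)
	{
		unsigned int filled = 0, last;

		do {
			last = filled;
			filled = alloc_pages_bulk(gfp, nr_pages, page_array);
			if (filled == last)
				return -ENOMEM;	/* no forward progress */
		} while (filled < nr_pages);

		return 0;
	}

Without the NULL checking, such a caller needs to keep the populated
pages compacted at the front of the array and pass only the empty
tail to alloc_pages_bulk(), as the btrfs change quoted below does.
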
Analyzing the users of the bulk allocation API in fs shows that the
callers in fs/btrfs/extent_io.c and net/sunrpc/svc_xprt.c depend on
the assumption of populating only the NULL elements, while other
callers such as erofs do not; see:
commit 91d6ac1d62c3 ("btrfs: allocate page arrays using bulk page allocator")
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index f0a1da40d641..ef52cedd9873 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -623,13 +623,26 @@ int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array,
 			   bool nofail)
 {
 	const gfp_t gfp = nofail ? (GFP_NOFS | __GFP_NOFAIL) : GFP_NOFS;
-	unsigned int allocated;
+	unsigned int allocated, ret;
 
-	for (allocated = 0; allocated < nr_pages;) {
-		unsigned int last = allocated;
+	/* Defragment page_array so pages can be bulk allocated into remaining
+	 * NULL elements sequentially.
+	 */
+	for (allocated = 0, ret = 0; ret < nr_pages; ret++) {
+		if (page_array[ret]) {
+			page_array[allocated] = page_array[ret];
+			if (ret != allocated)
+				page_array[ret] = NULL;
+
+			allocated++;
+		}
+	}
 
-		allocated = alloc_pages_bulk(gfp, nr_pages, page_array);
-		if (unlikely(allocated == last)) {
+	while (allocated < nr_pages) {
+		ret = alloc_pages_bulk(gfp, nr_pages - allocated,
+				       page_array + allocated);