Re: [PATCH linux-next] mm: huge_memory: fix misused mapping_large_folio_support() for anon folios
From: ran xiaokai
Date: Tue Jun 04 2024 - 22:22:45 EST
> On 04.06.24 07:47, xu.xin16@xxxxxxxxxx wrote:
> > From: Ran Xiaokai <ran.xiaokai@xxxxxxxxxx>
> >
> > When I ran a large folio split test, the WARNING
> > "[ 5059.122759][ T166] Cannot split file folio to non-0 order"
> > was triggered, but my test cases only cover anonymous folios, and
> > mapping_large_folio_support() is only reasonable for page cache
> > folios.
>
> Agreed.
>
> I wonder if mapping_large_folio_support() should either
>
> a) Complain if used for anon folios, so we can detect the wrong use more
> easily. (VM_WARN_ON_ONCE())
> b) Return "true" for anonymous mappings, although that's more debatable.
>
Hi, David,
Thanks for the review.
I think a) is better.
But then we would have to add a new "folio" parameter to
mapping_large_folio_support(), right? Something like
mapping_large_folio_support(struct address_space *mapping, struct folio *folio)?

But in the __filemap_get_folio() path:

__filemap_get_folio()
no_page:
	....
	if (!mapping_large_folio_support(mapping))

the folio is not allocated yet at that point, is it?
Or do you mean there is some other way to do this?
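
Something along these lines (completely untested; the existing function
body below is paraphrased from memory, and the NULL handling is only my
assumption about how folio-less callers such as __filemap_get_folio()
would use it):

static inline bool mapping_large_folio_support(struct address_space *mapping,
					       struct folio *folio)
{
	/* Option a): complain when a caller asks about an anonymous folio. */
	VM_WARN_ON_ONCE(folio && folio_test_anon(folio));

	return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
		test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
}

Callers that do not have a folio yet would then pass NULL and keep the
current behaviour.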
> >
> > In split_huge_page_to_list_to_order(), the folio passed to
> > mapping_large_folio_support() may be an anonymous folio, but the
> > folio_test_anon() check is missing, so splitting an anonymous THP
> > fails. The same applies to shmem_mapping(), so we had better add a
> > check for both. The shmem_mapping() call in __split_huge_page() is not
> > affected: for anonymous folios the end parameter is set to -1, so
> > (head[i].index >= end) is always false and shmem_mapping() is never called.
> >
> > Using /sys/kernel/debug/split_huge_pages to verify this: with this
> > patch, a large anonymous THP is successfully split and the warning no
> > longer triggers.
> >
> > Signed-off-by: Ran Xiaokai <ran.xiaokai@xxxxxxxxxx>
> > Cc: xu xin <xu.xin16@xxxxxxxxxx>
> > Cc: Yang Yang <yang.yang29@xxxxxxxxxx>
> > ---
> > mm/huge_memory.c | 38 ++++++++++++++++++++------------------
> > 1 file changed, 20 insertions(+), 18 deletions(-)
> >
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 317de2afd371..4c9c7e5ea20c 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -3009,31 +3009,33 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> > if (new_order >= folio_order(folio))
> > return -EINVAL;
> >
> > - /* Cannot split anonymous THP to order-1 */
> > - if (new_order == 1 && folio_test_anon(folio)) {
> > - VM_WARN_ONCE(1, "Cannot split to order-1 folio");
> > - return -EINVAL;
> > - }
> > -
> > if (new_order) {
> > /* Only swapping a whole PMD-mapped folio is supported */
> > if (folio_test_swapcache(folio))
> > return -EINVAL;
> > - /* Split shmem folio to non-zero order not supported */
> > - if (shmem_mapping(folio->mapping)) {
> > - VM_WARN_ONCE(1,
> > - "Cannot split shmem folio to non-0 order");
> > - return -EINVAL;
> > - }
> > - /* No split if the file system does not support large folio */
> > - if (!mapping_large_folio_support(folio->mapping)) {
> > - VM_WARN_ONCE(1,
> > - "Cannot split file folio to non-0 order");
> > - return -EINVAL;
> > +
> > + if (folio_test_anon(folio)) {
> > + /* Cannot split anonymous THP to order-1 */
> > + if (new_order == 1) {
> > + VM_WARN_ONCE(1, "Cannot split to order-1 folio");
> > + return -EINVAL;
> > + }
> > + } else {
> > + /* Split shmem folio to non-zero order not supported */
> > + if (shmem_mapping(folio->mapping)) {
> > + VM_WARN_ONCE(1,
> > + "Cannot split shmem folio to non-0 order");
> > + return -EINVAL;
> > + }
> > + /* No split if the file system does not support large folio */
> > + if (!mapping_large_folio_support(folio->mapping)) {
> > + VM_WARN_ONCE(1,
> > + "Cannot split file folio to non-0 order");
> > + return -EINVAL;
> > + }
> > }
> > }
>
> What about the following sequence:
>
> if (folio_test_anon(folio)) {
> if (new_order == 1)
> ...
> } else if (new_order) {
> if (shmem_mapping(...))
> ...
> ...
> }
>
> if (folio_test_swapcache(folio) && new_order)
> return -EINVAL;
>
> Should result in less churn and reduce the indentation level.
>
Thanks.
The code is indeed cleaner this way.
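
If I read your suggestion correctly, split_huge_page_to_list_to_order()
would then end up roughly like this (untested sketch, just combining your
sequence with the existing checks from the patch):

	if (folio_test_anon(folio)) {
		/* Cannot split anonymous THP to order-1 */
		if (new_order == 1) {
			VM_WARN_ONCE(1, "Cannot split to order-1 folio");
			return -EINVAL;
		}
	} else if (new_order) {
		/* Split shmem folio to non-zero order not supported */
		if (shmem_mapping(folio->mapping)) {
			VM_WARN_ONCE(1,
				"Cannot split shmem folio to non-0 order");
			return -EINVAL;
		}
		/* No split if the file system does not support large folio */
		if (!mapping_large_folio_support(folio->mapping)) {
			VM_WARN_ONCE(1,
				"Cannot split file folio to non-0 order");
			return -EINVAL;
		}
	}

	/* Only swapping a whole PMD-mapped folio is supported */
	if (folio_test_swapcache(folio) && new_order)
		return -EINVAL;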
> --
> Cheers,
>
> David / dhildenb