Re: 6.9/BUG: Bad page state in process kswapd0 pfn:d6e840

From: Qu Wenruo
Date: Wed May 29 2024 - 18:39:03 EST




On 2024/5/29 16:27, David Hildenbrand wrote:
On 28.05.24 16:24, David Hildenbrand wrote:
[...]
Hmm, your original report mentions kswapd, so I'm getting the feeling
someone does one folio_put() too much and we are freeing a pagecache
folio that is still in the pagecache and, therefore, has
folio->mapping set ... bisecting would really help.


A little bird just told me that I missed an important piece in the dmesg
output: "aops:btree_aops ino:1" from dump_mapping():

This is btrfs, i_ino is 1, and we don't have a dentry. Is that
BTRFS_BTREE_INODE_OBJECTID?

Summarizing what we know so far:
(1) Freeing an order-0 btrfs folio where folio->mapping
    is still set
(2) Triggered by kswapd and kcompactd; not triggered by other means of
    page freeing so far

From the implementation of filemap_migrate_folio() (and the earlier
migrate_page_move_mapping()), it looks like migration only involves:

- Migrate the mapping
- Copy the page private value
- Copy the contents (if needed)
- Copy all the page flags
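
For reference, here is a condensed sketch of filemap_migrate_folio()
(paraphrased from mm/migrate.c; exact helper names and details vary
across kernel versions, so treat it as illustrative):

	int filemap_migrate_folio(struct address_space *mapping,
			struct folio *dst, struct folio *src,
			enum migrate_mode mode)
	{
		int ret;

		/* Move the pagecache entry from src to dst: xarray
		 * update, refcount transfer, dst->mapping/->index
		 * setup. */
		ret = folio_migrate_mapping(mapping, dst, src, 0);
		if (ret != MIGRATEPAGE_SUCCESS)
			return ret;

		/* Carry over folio->private (for btrfs metadata this
		 * is the extent buffer pointer). */
		if (folio_get_private(src))
			folio_attach_private(dst,
					     folio_detach_private(src));

		/* Copy the contents and the relevant page flags. */
		folio_migrate_copy(dst, src);
		return MIGRATEPAGE_SUCCESS;
	}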

The most recent change to that migration path dates back to v6.0, so
I do not believe it is the cause at all.


Possible theories:
(A) folio->mapping not cleared when freeing the folio. But shouldn't
    this also happen on other freeing paths? Or are we simply lucky to
    never trigger that for that folio?

Yeah, in fact we never manually clear folio->mapping inside btrfs, so
I'm not sure that is the case.
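
(For context, the "Bad page state" splat itself comes from the page
allocator's free-time sanity checks; roughly paraphrased from
mm/page_alloc.c, with the exact helper names varying by version:

	static const char *page_bad_reason(struct page *page)
	{
		const char *bad_reason = NULL;

		if (unlikely(atomic_read(&page->_mapcount) != -1))
			bad_reason = "nonzero mapcount";
		if (unlikely(page->mapping != NULL))
			bad_reason = "non-NULL mapping"; /* <- our case */
		if (unlikely(page_ref_count(page) != 0))
			bad_reason = "nonzero _refcount";
		return bad_reason;
	}

So any path that returns the folio to the buddy allocator with
->mapping still set would trip it, not just kswapd.)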

(B) Messed-up refcounting: freeing a folio that is still in use (and
    therefore has folio->mapping still set)

I was briefly wondering if large folio splitting could be involved.

Although we have all the metadata support for large folios, we do not
yet enable them.

My current guess is that it could be some race with this commit:

09e6cef19c9f ("btrfs: refactor alloc_extent_buffer() to allocate-then-attach method")

For example, while we're allocating an extent buffer (btrfs' metadata
structure) and one page is already attached to the page cache, that
page gets migrated while the remaining pages are not yet attached?
(See the sketch below.)

It was first introduced in v6.8, which matches the earliest report.
But that patch is not easy to revert.
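
To illustrate the window I have in mind, a purely hypothetical
sketch (the loop shape and the names num_folios/eb_folios/
start_index are illustrative, not the literal alloc_extent_buffer()
code):

	/* After 09e6cef19c9f, folios are attached to the btree
	 * inode's page cache one at a time. */
	for (i = 0; i < num_folios; i++) {
		ret = filemap_add_folio(mapping, eb_folios[i],
					start_index + i, GFP_NOFS);
		if (ret)
			goto cleanup; /* hypothetical error path */
		/*
		 * <-- window: eb_folios[i] is now visible to
		 * migration/reclaim, but the extent buffer is not
		 * fully assembled yet, so its private state may not
		 * be final when kcompactd picks the folio up.
		 */
	}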


Do you have any extra reproducibility, or some extra way to debug the
lifespan of that specific page?

Or is there any way to temporarily disable migration?

Thanks,
Qu

CCing btrfs maintainers.