Re: ubifs: fix page_count in ->ubifs_migrate_page()
From: Dave Chinner
Date: Thu Dec 13 2018 - 17:57:43 EST
On Thu, Dec 13, 2018 at 03:23:37PM +0100, Richard Weinberger wrote:
> Hello zhangjun,
>
> thanks a lot for bringing this up!
>
> On Wednesday, 12 December 2018 at 15:13:57 CET, zhangjun wrote:
> > Because PagePrivate() has a different meaning in UBIFS,
> > alloc_cma() will fail when a dirty page cache page sits in a
> > MIGRATE_CMA region.
> >
> > If 'extra_count' is not adjusted for such a dirty page,
> > ubifs_migrate_page() -> migrate_page_move_mapping() will
> > always return -EAGAIN because of:
> > expected_count += page_has_private(page)
> > This causes the migration to fail until the page cache is cleaned.
> >
> > In general, PagePrivate() indicates that a buffer_head is already bound
> > to this page, and that page_count() has been increased accordingly.
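For readers without the original patch in front of them: the adjustment
being described is presumably something along these lines (a sketch only,
the helper below is not quoted from the patch, so treat the details as
illustrative):

/* sketch only; assumes CONFIG_MIGRATION and <linux/migrate.h> */
static int ubifs_migrate_page(struct address_space *mapping,
		struct page *newpage, struct page *page,
		enum migrate_mode mode)
{
	int rc;
	/*
	 * Illustrative: UBIFS sets PG_private purely as a marker and
	 * takes no extra reference on the page, so cancel out the
	 * reference migrate_page_move_mapping() would otherwise expect
	 * for a page with private data.
	 */
	int extra_count = 0 - page_has_private(page);

	rc = migrate_page_move_mapping(mapping, newpage, page, NULL,
				       mode, extra_count);
	if (rc != MIGRATEPAGE_SUCCESS)
		return rc;

	if (PagePrivate(page)) {
		ClearPagePrivate(page);
		SetPagePrivate(newpage);
	}

	migrate_page_copy(newpage, page);
	return MIGRATEPAGE_SUCCESS;
}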
The assumption that PagePrivate() implies an elevated page_count() is
invalid.
We should not be trying to infer what PagePrivate() means in code
that has no business looking at it, i.e. page->private is private
information for the owner of the page, and its life cycle and
intent are unknown to anyone other than the page owner.
e.g. on XFS, a page cache page's page->private /might/ contain a
struct iomap_page, or it might be NULL. Assigning a struct
iomap_page to the page does not change the reference count on the
page. IOWs, the page needs to be handled exactly the same
way by external code regardless of whether there is something
attached to page->private or not.
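To illustrate, attaching that state is (simplified, from memory, not the
exact fs/iomap.c code) no more than:

	struct iomap_page *iop;

	iop = kzalloc(sizeof(*iop), GFP_NOFS | __GFP_NOFAIL);
	/* ... initialise per-block uptodate/read/write tracking ... */
	set_page_private(page, (unsigned long)iop);
	SetPagePrivate(page);
	/* note: no get_page() anywhere, so page_count() is untouched */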
Hence it looks to me like the migration code is making an invalid
assumption by inferring a reference count from PagePrivate, and so
it is the migration code that needs to be fixed. Requiring filesystems to work
around invalid assumptions in the migration code is a sure recipe
for problems with random filesystems using page->private for their
own internal purposes....
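For reference, the check in question boils down to something like this
(paraphrasing migrate_page_move_mapping(), not quoting it exactly):

	expected_count = 1 + extra_count;
	expected_count += page_has_private(page);  /* private => +1 ref? */
	if (page_count(page) != expected_count)
		return -EAGAIN;  /* retried and failing until the page is clean */

That implicit "PG_private means an extra reference" is exactly the
assumption that does not hold for UBIFS, nor for the iomap case above.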
Cheers,
Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx