Re: [PATCH 2/4] PM: hibernate: improve robustness of mapping pages in the direct map
From: Mike Rapoport
Date: Mon Oct 26 2020 - 05:16:15 EST
On Mon, Oct 26, 2020 at 12:38:32AM +0000, Edgecombe, Rick P wrote:
> On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@xxxxxxxxxxxxx>
> >
> > When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled, a page may
> > not be present in the direct map and has to be explicitly mapped
> > before it can be copied.
> >
> > On arm64 it is possible that a page would be removed from the direct
> > map using set_direct_map_invalid_noflush(), but __kernel_map_pages()
> > will refuse to map this page back if DEBUG_PAGEALLOC is disabled.
>
> It looks to me that arm64 __kernel_map_pages() will still attempt to
> map it if rodata_full is true; how does this happen?
Unless I misread the code, arm64 requires both rodata_full and
debug_pagealloc_enabled() to be true for __kernel_map_pages() to do
anything.
But the rodata_full condition applies to set_direct_map_*_noflush() as
well, so with !rodata_full the linear map won't ever be changed.
> > Explicitly use set_direct_map_{default,invalid}_noflush() for
> > ARCH_HAS_SET_DIRECT_MAP case and debug_pagealloc_map_pages() for
> > DEBUG_PAGEALLOC case.
> >
> > While at it, rename kernel_map_pages() to hibernate_map_page() and
> > drop the numpages parameter.
> >
> > Signed-off-by: Mike Rapoport <rppt@xxxxxxxxxxxxx>
> > ---
> > kernel/power/snapshot.c | 29 +++++++++++++++++++----------
> > 1 file changed, 19 insertions(+), 10 deletions(-)
> >
> > diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> > index fa499466f645..ecb7b32ce77c 100644
> > --- a/kernel/power/snapshot.c
> > +++ b/kernel/power/snapshot.c
> > @@ -76,16 +76,25 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
> > static inline void hibernate_restore_unprotect_page(void *page_address) {}
> > #endif /* CONFIG_STRICT_KERNEL_RWX && CONFIG_ARCH_HAS_SET_MEMORY */
> >
> > -#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
> > -static inline void
> > -kernel_map_pages(struct page *page, int numpages, int enable)
> > +static inline void hibernate_map_page(struct page *page, int enable)
> > {
> > - __kernel_map_pages(page, numpages, enable);
> > + if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
> > + unsigned long addr = (unsigned long)page_address(page);
> > + int ret;
> > +
> > + if (enable)
> > + ret = set_direct_map_default_noflush(page);
> > + else
> > + ret = set_direct_map_invalid_noflush(page);
> > +
> > + if (WARN_ON(ret))
> > + return;
> > +
> > + flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> > + } else {
> > + debug_pagealloc_map_pages(page, 1, enable);
> > + }
> > }
> > -#else
> > -static inline void
> > -kernel_map_pages(struct page *page, int numpages, int enable) {}
> > -#endif
> >
> > static int swsusp_page_is_free(struct page *);
> > static void swsusp_set_page_forbidden(struct page *);
> > @@ -1366,9 +1375,9 @@ static void safe_copy_page(void *dst, struct page *s_page)
> > if (kernel_page_present(s_page)) {
> > do_copy_page(dst, page_address(s_page));
> > } else {
> > - kernel_map_pages(s_page, 1, 1);
> > + hibernate_map_page(s_page, 1);
> > do_copy_page(dst, page_address(s_page));
> > - kernel_map_pages(s_page, 1, 0);
> > + hibernate_map_page(s_page, 0);
> > }
> > }
> >
>
> If somehow a page was unmapped such that
> set_direct_map_default_noflush() would fail, then this code introduces
> a WARN, but it will still try to read the unmapped page. Why not just
> put the WARNs inside __kernel_map_pages() when the calls fail, and
> have a warning for the debug page alloc cases as well, since the
> logic around both expects them not to fail?
The intention of this series is to disallow usage of
__kernel_map_pages() when DEBUG_PAGEALLOC=n. I'll update this patch to
better handle possible errors, but I still want to keep the WARN in the
caller.
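Something along these lines, perhaps (an untested sketch of the
direction, not the final version): keep the WARN in
hibernate_map_page() but report the failure, so that safe_copy_page()
can avoid reading through a still-unmapped address:

	static int hibernate_map_page(struct page *page, int enable)
	{
		if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
			unsigned long addr = (unsigned long)page_address(page);
			int ret;

			if (enable)
				ret = set_direct_map_default_noflush(page);
			else
				ret = set_direct_map_invalid_noflush(page);

			/* warn here, but let the caller skip the copy */
			if (WARN_ON(ret))
				return ret;

			flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
		} else {
			debug_pagealloc_map_pages(page, 1, enable);
		}

		return 0;
	}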
--
Sincerely yours,
Mike.