[PATCH] x86/mm: avoid premature success when changing page attributes

From: Jan Beulich
Date: Mon Jan 25 2016 - 11:54:50 EST

Since successful return from __cpa_process_fault() makes
__change_page_attr() exit early (and successfully), its caller needs to
be instructed to continue its iteration by adjusting ->numpages. While
this already happens on one of __cpa_process_fault()'s successful exit
paths, the other needs this done similarly. This was in particular a
problem when the top level caller passed zero for "checkalias"
(becoming the "primary" value for the other two mentioned functions),
as is the case in change_page_attr_set_clr() when the OR of "mask_set"
and "mask_clr" equals _PAGE_NX, as e.g. passed from set_memory_{,n}x().

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
 arch/x86/mm/pageattr.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- 4.5-rc1/arch/x86/mm/pageattr.c
+++ 4.5-rc1-x86-cpa-non-primary/arch/x86/mm/pageattr.c
@@ -1122,8 +1122,10 @@ static int __cpa_process_fault(struct cp
 	/*
 	 * Ignore all non primary paths.
 	 */
-	if (!primary)
+	if (!primary) {
+		cpa->numpages = 1;
 		return 0;
+	}
 
 	/*
 	 * Ignore the NULL PTE for kernel identity mapping, as it is expected