The ioremap_xxx() functions are supposed to fail if the requested
memory range contains normal RAM. But due to boundary calculation and
boundary judgment issues, the RAM check can be skipped for the very
first or the very last page of the range. As a consequence,
ioremap_xxx() can mistakenly succeed on normal RAM pages, which may
cause severe performance problems.
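
To illustrate the calculation half of the problem, here is a minimal
sketch of the two pfn-rounding patterns for an inclusive byte range
[start, end]. It is not the actual code in arch/x86/mm/ioremap.c, and
range_overlaps_ram() is a hypothetical helper; pfn_valid(),
pfn_to_page(), PageReserved(), PAGE_SHIFT and PAGE_SIZE are the usual
kernel symbols:

	/* Sketch only: derive a pfn range from an inclusive byte range. */
	static bool range_overlaps_ram(resource_size_t start, resource_size_t end)
	{
		unsigned long start_pfn, stop_pfn, pfn;

		/*
		 * Inward rounding loses partially covered boundary pages:
		 * with a page-aligned start and end = start + PAGE_SIZE - 2,
		 * start_pfn == stop_pfn and the loop below never runs:
		 *
		 *	start_pfn = (start + PAGE_SIZE - 1) >> PAGE_SHIFT;
		 *	stop_pfn  = (end + 1) >> PAGE_SHIFT;
		 *
		 * Outward rounding covers every page the range touches:
		 */
		start_pfn = start >> PAGE_SHIFT;
		stop_pfn  = (end + PAGE_SIZE) >> PAGE_SHIFT;

		for (pfn = start_pfn; pfn < stop_pfn; pfn++)
			if (pfn_valid(pfn) && !PageReserved(pfn_to_page(pfn)))
				return true;	/* the range contains normal RAM */

		return false;
	}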
For example, suppose [phys_addr, phys_addr + PAGE_SIZE - 1] is a normal
RAM page. Calling ioremap(phys_addr, PAGE_SIZE - 1) will succeed,
although it should not. This sets the PCD (cache-disable) bit in the
direct-mapping PTE for phys_addr. What's worse, iounmap() does not
restore the cache attribute of the direct mapping, so the PTE in the
direct mapping stays polluted until a workaround (e.g. invoking
ioremap_cache() on phys_addr) fixes the cache bit. If the polluted page
holds frequently accessed data or code, machine performance degrades
badly.
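
The scenario above can be sketched as follows; this is only an
illustration of the misuse and the workaround, with phys_addr standing
for the physical address of the RAM page:

	void __iomem *p;

	/*
	 * Should fail because phys_addr is normal RAM, but succeeds
	 * since the partially covered page escapes the RAM check.
	 * The direct-mapping PTE for phys_addr becomes uncached (PCD).
	 */
	p = ioremap(phys_addr, PAGE_SIZE - 1);
	iounmap(p);	/* does not restore the direct-mapping PTE */

	/* Workaround: remap cacheable to rewrite the cache attribute. */
	p = ioremap_cache(phys_addr, PAGE_SIZE);
	iounmap(p);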
These two patches aim to address this issue.
Yahui Wang (2):
  x86/ioremap: fix the pfn calculation mistake in __ioremap_check_ram()
  kernel/resource: fix boundary judgment issues in find_next_iomem_res()
    and __walk_iomem_res_desc()
 arch/x86/mm/ioremap.c | 16 ++++++++--------
 kernel/resource.c     |  4 ++--
 2 files changed, 10 insertions(+), 10 deletions(-)
base-commit: 13311e74253fe64329390df80bed3f07314ddd61