Re: [v5 PATCH] arm64: mm: show direct mapping use in /proc/meminfo
From: Yang Shi
Date: Mon Jan 26 2026 - 12:59:59 EST
On 1/26/26 6:18 AM, Will Deacon wrote:
> On Tue, Jan 13, 2026 at 04:36:06PM -0800, Yang Shi wrote:
>> On 1/13/26 6:36 AM, Will Deacon wrote:
>>> On Tue, Jan 06, 2026 at 04:29:44PM -0800, Yang Shi wrote:
>>>> +#if defined(CONFIG_ARM64_4K_PAGES)
>>>> +	size[PTE] = "4k";
>>>> +	size[CONT_PTE] = "64k";
>>>> +	size[PMD] = "2M";
>>>> +	size[CONT_PMD] = "32M";
>>>> +	size[PUD] = "1G";
>>>> +#elif defined(CONFIG_ARM64_16K_PAGES)
>>>> +	size[PTE] = "16k";
>>>> +	size[CONT_PTE] = "2M";
>>>> +	size[PMD] = "32M";
>>>> +	size[CONT_PMD] = "1G";
>>>> +#elif defined(CONFIG_ARM64_64K_PAGES)
>>>> +	size[PTE] = "64k";
>>>> +	size[CONT_PTE] = "2M";
>>>> +	size[PMD] = "512M";
>>>> +	size[CONT_PMD] = "16G";
>>>> +#endif
>>>> +
>>>> +	seq_printf(m, "DirectMap%s: %8lu kB\n",
>>>> +		   size[PTE], dm_meminfo[PTE] >> 10);
>>>> +	seq_printf(m, "DirectMap%s: %8lu kB\n",
>>>> +		   size[CONT_PTE], dm_meminfo[CONT_PTE] >> 10);
>>>> +	seq_printf(m, "DirectMap%s: %8lu kB\n",
>>>> +		   size[PMD], dm_meminfo[PMD] >> 10);
>>>> +	seq_printf(m, "DirectMap%s: %8lu kB\n",
>>>> +		   size[CONT_PMD], dm_meminfo[CONT_PMD] >> 10);
>>>> +	if (pud_sect_supported())
>>>> +		seq_printf(m, "DirectMap%s: %8lu kB\n",
>>>> +			   size[PUD], dm_meminfo[PUD] >> 10);
>>> This seems a bit brittle to me. If somebody adds support for l1 block
>>> mappings for !4k pages in future, they will forget to update this and
>>> we'll end up returning kernel stack in /proc/meminfo afaict.
>> I can initialize size[PUD] to "NON_SUPPORT" by default. If that case
>> happens, /proc/meminfo just shows "DirectMapNON_SUPPORT", so we will
>> notice something is missed, but no kernel stack data will be leaked.
> Or just add the PUD sizes for all the page sizes...

Fine by me.
>>>> @@ -266,6 +351,17 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
>>>> 		    (flags & NO_BLOCK_MAPPINGS) == 0) {
>>>> 			pmd_set_huge(pmdp, phys, prot);
>>>> +			/*
>>>> +			 * It is possible to have mappings that allow cont
>>>> +			 * mapping but disallow block mapping, for example
>>>> +			 * map_entry_trampoline(). So we have to increase
>>>> +			 * the CONT_PMD and PMD sizes here to avoid double
>>>> +			 * counting.
>>>> +			 */
>>>> +			if (pgprot_val(prot) & PTE_CONT)
>>>> +				dm_meminfo_add(addr, (next - addr), CONT_PMD);
>>>> +			else
>>>> +				dm_meminfo_add(addr, (next - addr), PMD);
>>> I don't understand the comment you're adding here. If somebody passes
>>> NO_BLOCK_MAPPINGS then that also prevents contiguous entries except at
>>> level 3.
>> The comment may be misleading. I meant: if we had the accounting code for
>> CONT_PMD in alloc_init_cont_pmd() instead, for example:
>>
>> @@ -433,6 +433,11 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>> 		if (ret)
>> 			goto out;
>> +		if (pgprot_val(prot) & PTE_CONT)
>> +			dm_meminfo_add(addr, (next - addr), CONT_PMD);
>> 		pmdp += pmd_index(next) - pmd_index(addr);
>> 		phys += next - addr;
>> 	} while (addr = next, addr != end);
>>
>> then, if the described case happened, we would actually miscount CONT_PMD.
>> So I need to check whether it is CONT in init_pmd() instead. If the
>> comment is confusing, I can just remove it.
> I think I'd just drop the comment. The code is clear enough once you
> actually read what's going on.

Sure.

>>> It also doesn't look like you handle the error case properly when the
>>> mapping fails.
>> I don't quite get which failure you mean? pmd_set_huge() doesn't fail. Or
>> do you mean memory hotplug fails? If so, hot unplug will decrease the
>> counters, which is called in the error handling path.
> Sorry, I got confused here and thought that we could end up with a
> partially-formed contiguous region, but that's not the case. So you can
> ignore this comment :)

No problem. Thanks for taking the time to review the patch. I will prepare
a new revision once we figure out the potential contiguous bit
misprogramming issue.

> Will

Thanks,
Yang