Re: [PATCH] mm: vmstat: add cma statistics

From: John Hubbard
Date: Wed Feb 17 2021 - 15:58:26 EST


On 2/17/21 9:00 AM, Minchan Kim wrote:
> Since CMA is used more widely, it's worth having CMA
> allocation statistics in vmstat. With them, we can see
> how aggressively the system uses CMA allocation and
> how often it fails.
>
> Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
> ---
>  include/linux/vm_event_item.h |  3 +++
>  mm/cma.c                      | 12 +++++++++---
>  mm/vmstat.c                   |  4 ++++
>  3 files changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> index 18e75974d4e3..0c567014ce82 100644
> --- a/include/linux/vm_event_item.h
> +++ b/include/linux/vm_event_item.h
> @@ -70,6 +70,9 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
>  #endif
>  #ifdef CONFIG_HUGETLB_PAGE
>  		HTLB_BUDDY_PGALLOC, HTLB_BUDDY_PGALLOC_FAIL,
> +#endif
> +#ifdef CONFIG_CMA
> +		CMA_ALLOC, CMA_ALLOC_FAIL,
This seems wrong: here it's called "alloc", but in the output it's
called "alloc success", and in the implementation it's clearly
"alloc attempt" that is being counted.

Once these are all made consistent, the bug should naturally go
away as part of that.
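
Something like this, perhaps (just a sketch of one possible direction,
untested, and the exact naming is of course your call): rename the enum
items so they match the vmstat strings, and only count a success once a
page has actually been allocated:

#ifdef CONFIG_CMA
		CMA_ALLOC_SUCCESS, CMA_ALLOC_FAIL,
#endif

and at the end of cma_alloc():

out:
	/* page is non-NULL only if the allocation actually succeeded */
	if (page)
		count_vm_event(CMA_ALLOC_SUCCESS);
	else
		count_vm_event(CMA_ALLOC_FAIL);

	return page;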

nit: I think multiple items per line is a weak idea at best, even
though it's already used here. Each item is important and needs to be
visually compared to its output string later. One item per line might
have helped avoid this mismatch, and I think we should switch to that
style to encourage the trend.
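
In other words (again using the renamed items from the sketch above):

#ifdef CONFIG_HUGETLB_PAGE
		HTLB_BUDDY_PGALLOC,
		HTLB_BUDDY_PGALLOC_FAIL,
#endif
#ifdef CONFIG_CMA
		CMA_ALLOC_SUCCESS,
		CMA_ALLOC_FAIL,
#endif

so that each enum item lines up one-for-one with its string in
vmstat_text[].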

thanks,
--
John Hubbard
NVIDIA

>  #endif
>  		UNEVICTABLE_PGCULLED,	/* culled to noreclaim list */
>  		UNEVICTABLE_PGSCANNED,	/* scanned for reclaimability */
> diff --git a/mm/cma.c b/mm/cma.c
> index 23d4a97c834a..ea1e39559526 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -434,14 +434,16 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
>  	struct page *page = NULL;
>  	int ret = -ENOMEM;
>
> +	count_vm_event(CMA_ALLOC);
> +
>  	if (!cma || !cma->count || !cma->bitmap)
> -		return NULL;
> +		goto out;
>
>  	pr_debug("%s(cma %p, count %zu, align %d)\n", __func__, (void *)cma,
>  		 count, align);
>
>  	if (!count)
> -		return NULL;
> +		goto out;
>
>  	mask = cma_bitmap_aligned_mask(cma, align);
>  	offset = cma_bitmap_aligned_offset(cma, align);
> @@ -449,7 +451,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
>  	bitmap_count = cma_bitmap_pages_to_bits(cma, count);
>
>  	if (bitmap_count > bitmap_maxno)
> -		return NULL;
> +		goto out;
>
>  	for (;;) {
>  		mutex_lock(&cma->lock);
> @@ -506,6 +508,10 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
>  	}
>
>  	pr_debug("%s(): returned %p\n", __func__, page);
> +out:
> +	if (!page)
> +		count_vm_event(CMA_ALLOC_FAIL);
> +
>  	return page;
>  }
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 97fc32a53320..d8c32a33208d 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1305,6 +1305,10 @@ const char * const vmstat_text[] = {
>  #ifdef CONFIG_HUGETLB_PAGE
>  	"htlb_buddy_alloc_success",
>  	"htlb_buddy_alloc_fail",
> +#endif
> +#ifdef CONFIG_CMA
> +	"cma_alloc_success",
> +	"cma_alloc_fail",
>  #endif
>  	"unevictable_pgs_culled",
>  	"unevictable_pgs_scanned",