Re: [PATCH v2] ring-buffer: Align meta-page to sub-buffers for improved TLB usage

From: Steven Rostedt
Date: Wed Aug 21 2024 - 11:59:34 EST


On Fri, 28 Jun 2024 11:46:11 +0100
Vincent Donnefort <vdonnefort@xxxxxxxxxx> wrote:

> diff --git a/tools/testing/selftests/ring-buffer/map_test.c b/tools/testing/selftests/ring-buffer/map_test.c
> index a9006fa7097e..4bb0192e43f3 100644
> --- a/tools/testing/selftests/ring-buffer/map_test.c
> +++ b/tools/testing/selftests/ring-buffer/map_test.c
> @@ -228,6 +228,20 @@ TEST_F(map, data_mmap)
>  	data = mmap(NULL, data_len, PROT_READ, MAP_SHARED,
>  		    desc->cpu_fd, meta_len);
>  	ASSERT_EQ(data, MAP_FAILED);
> +
> +	/* Verify meta-page padding */
> +	if (desc->meta->meta_page_size > getpagesize()) {
> +		void *addr;
> +
> +		data_len = desc->meta->meta_page_size;
> +		data = mmap(NULL, data_len,
> +			    PROT_READ, MAP_SHARED, desc->cpu_fd, 0);
> +		ASSERT_NE(data, MAP_FAILED);
> +
> +		addr = (void *)((unsigned long)data + getpagesize());
> +		ASSERT_EQ(*((int *)addr), 0);

Should we make this a test that the entire page is zero?

	for (int i = desc->meta->meta_struct_len; i < desc->meta->meta_page_size; i += sizeof(int))
		ASSERT_EQ(*(int *)((char *)data + i), 0);

?
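
FWIW, an (untested) sketch of one way that whole-region check could look in
the data_mmap test, comparing against a zeroed scratch buffer with memcmp
instead of one ASSERT per int (assumes <string.h> and <stdlib.h> are
available to the test):

	/* Sketch only: everything past the meta struct up to meta_page_size should be zero */
	{
		size_t pad_len = desc->meta->meta_page_size - desc->meta->meta_struct_len;
		void *zeros = calloc(1, pad_len);

		ASSERT_NE(zeros, NULL);
		ASSERT_EQ(memcmp((char *)data + desc->meta->meta_struct_len, zeros, pad_len), 0);
		free(zeros);
	}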

> +		munmap(data, data_len);
> +	}
>  }

Also, looking at the init, if for some reason (I highly doubt it will
happen) meta_struct_len becomes bigger than page_size, we should update
the init section to:

	/* Handle the case where meta_struct_len is greater than the page size */
	if (page_size < desc->meta->meta_struct_len) {
		/* meta_page_size is >= meta_struct_len */
		unsigned long meta_page_size = desc->meta->meta_page_size;

		/* Unmap the single page that was mapped before remapping the full meta-page */
		munmap(desc->meta, page_size);
		page_size = meta_page_size;
		map = mmap(NULL, page_size, PROT_READ, MAP_SHARED, desc->cpu_fd, 0);
		if (map == MAP_FAILED)
			return -errno;
		desc->meta = (struct trace_buffer_meta *)map;
	}

-- Steve