[RFC PATCH net-next] Introducing lockless cache built on top of slab allocator

From: Kangmin Park
Date: Mon Sep 20 2021 - 19:49:13 EST


This is just an introduction and a proof of concept. The patch is
based on other RFC patches, so the code is not correct yet; it is only
a simple proof of concept.

The block layer recently implemented a percpu, lockless cache on top
of the slab allocator. It can be used for IO polling.

Link: https://lwn.net/Articles/868070/
Link: https://www.spinics.net/lists/linux-block/msg71964.html
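
For context, here is a minimal sketch of what such a percpu, lockless
object cache looks like when it is layered on top of the slab bulk API.
This is not the block layer code; the struct and function names are
made up for illustration, and it is essentially the same pattern that
napi_alloc_cache already uses for skbs:

#include <linux/percpu.h>
#include <linux/slab.h>

#define OBJ_CACHE_SIZE	64
#define OBJ_CACHE_BULK	16

struct obj_cache {
	void		*objs[OBJ_CACHE_SIZE];
	unsigned int	count;
};

static DEFINE_PER_CPU(struct obj_cache, obj_cache);

/*
 * Pop one object from the per-CPU cache and refill it from the slab
 * allocator with a bulk allocation when it runs empty.  No locks are
 * taken: the cache is only touched from contexts that stay on one CPU
 * (e.g. NAPI poll or a task with bottom halves disabled).
 */
static void *obj_cache_get(struct kmem_cache *s, gfp_t gfp)
{
	struct obj_cache *c = this_cpu_ptr(&obj_cache);

	if (unlikely(!c->count))
		c->count = kmem_cache_alloc_bulk(s, gfp, OBJ_CACHE_BULK,
						 c->objs);
	if (unlikely(!c->count))
		return NULL;

	return c->objs[--c->count];
}

/* Push an object back, flushing half of the cache to slab when full. */
static void obj_cache_put(struct kmem_cache *s, void *obj)
{
	struct obj_cache *c = this_cpu_ptr(&obj_cache);

	c->objs[c->count++] = obj;
	if (unlikely(c->count == OBJ_CACHE_SIZE)) {
		kmem_cache_free_bulk(s, OBJ_CACHE_SIZE / 2,
				     &c->objs[OBJ_CACHE_SIZE / 2]);
		c->count = OBJ_CACHE_SIZE / 2;
	}
}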

This gained some IOPS: performance in the block layer increased by
about 10%.

And there is an ongoing attempt to implement such a percpu, lockless
cache generically in the slab allocator.

Link: https://lore.kernel.org/linux-mm/20210920154816.31832-1-42.hyeyoo@xxxxxxxxx/T/#u
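
If the slab allocator grows such a cache internally, callers would no
longer need to open-code the per-CPU array above. A rough sketch of
the caller side, using the names from the proof-of-concept diff below
(kmem_cache_alloc_cached(), SLB_LOCKLESS_CACHE); the exact interface
and the free-path semantics are still being discussed in that thread:

static void *obj_cache_get(struct kmem_cache *s, gfp_t gfp)
{
	/* Served from a lockless, per-CPU cache inside the slab allocator,
	 * falling back to the regular slab paths when the cache is empty.
	 */
	return kmem_cache_alloc_cached(s, gfp | SLB_LOCKLESS_CACHE);
}

static void obj_cache_put(struct kmem_cache *s, void *obj)
{
	/* Assumption: frees go through the usual kmem_cache_free() and the
	 * allocator decides whether to keep the object in its per-CPU
	 * cache.  The RFC may end up handling this differently.
	 */
	kmem_cache_free(s, obj);
}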

If this cache is implemented successfully, how about using it in
napi_skb_cache_get() to allocate skbs instead of going through
kmem_cache_alloc_bulk()?
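
To make the open question in the hunk below concrete: if
kmem_cache_alloc_cached() ends up returning one object per call rather
than reporting how many objects sit in the per-CPU cache,
napi_skb_cache_get() could refill its array with a small loop instead.
This is only a sketch for discussion, reusing the hypothetical names
from the PoC; it is not part of the patch:

static struct sk_buff *napi_skb_cache_get(void)
{
	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
	struct sk_buff *skb;

	/* Top up the NAPI-level array from the slab-level lockless cache. */
	if (unlikely(!nc->skb_count)) {
		while (nc->skb_count < NAPI_SKB_CACHE_BULK) {
			skb = kmem_cache_alloc_cached(skbuff_head_cache,
						      GFP_ATOMIC | SLB_LOCKLESS_CACHE);
			if (!skb)
				break;
			nc->skb_cache[nc->skb_count++] = skb;
		}
	}

	if (unlikely(!nc->skb_count))
		return NULL;

	skb = nc->skb_cache[--nc->skb_count];
	kasan_unpoison_object_data(skbuff_head_cache, skb);

	return skb;
}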

I would like to hear your comments and opinions.

Signed-off-by: Kangmin Park <l4stpr0gr4m@xxxxxxxxx>
---
net/core/skbuff.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 7c2ab27fcbf9..f9a9deca423d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -170,11 +170,15 @@ static struct sk_buff *napi_skb_cache_get(void)
struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
struct sk_buff *skb;

-	if (unlikely(!nc->skb_count))
-		nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
-						      GFP_ATOMIC,
-						      NAPI_SKB_CACHE_BULK,
-						      nc->skb_cache);
+	if (unlikely(!nc->skb_count)) {
+		/* kmem_cache_alloc_cached() should be changed to return the
+		 * number of objects in the allocated per-CPU cache.
+		 */
+		nc->skb_cache = kmem_cache_alloc_cached(skbuff_head_cache,
+							GFP_ATOMIC | SLB_LOCKLESS_CACHE);
+		nc->skb_count = this_cpu_ptr(skbuff_head_cache)->size;
+	}
+
if (unlikely(!nc->skb_count))
return NULL;

--
2.26.2