Re: [PATCH v10 1/5] lib: objpool added: ring-array based lockless MPMC

From: wuqiang.matt
Date: Sun Oct 15 2023 - 12:08:36 EST


On 2023/10/15 23:43, Masami Hiramatsu (Google) wrote:
> On Sun, 15 Oct 2023 13:32:47 +0800
> "wuqiang.matt" <wuqiang.matt@xxxxxxxxxxxxx> wrote:

> > objpool is a scalable implementation of a high-performance queue for
> > object allocation and reclamation, such as kretprobe instances.
> >
> > By leveraging a percpu ring-array to mitigate hot spots of memory
> > contention, it delivers near-linear scalability for highly parallel
> > scenarios. The objpool is best suited for the following cases:
> > 1) Memory allocation or reclamation is prohibited or too expensive
> > 2) Consumers are of different priorities, such as irqs and threads
> >
> > Limitations:
> > 1) Maximum objects (capacity) is fixed after objpool creation
> > 2) All pre-allocated objects are managed in the percpu ring array,
> >    which consumes more memory than linked lists
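
For context, a typical objpool consumer looks roughly like the sketch
below; struct my_data, my_obj_init and the my_* functions are made-up
names for illustration, while the objpool_* calls follow the signatures
in this patch:

        struct my_data {
                int val;                /* user payload */
        };

        /* objinit callback, invoked once per pre-allocated object */
        static int my_obj_init(void *obj, void *context)
        {
                struct my_data *d = obj;

                d->val = 0;
                return 0;
        }

        static struct objpool_head my_pool;

        static int my_setup(void)
        {
                /* pre-allocate 16 objects of sizeof(struct my_data) */
                return objpool_init(&my_pool, 16, sizeof(struct my_data),
                                    GFP_KERNEL, NULL, my_obj_init, NULL);
        }

        static void my_use(void)
        {
                /* pop may return NULL if the pool is drained */
                struct my_data *d = objpool_pop(&my_pool);

                if (d)
                        objpool_push(d, &my_pool);      /* give it back */
        }

        static void my_teardown(void)
        {
                /* drops the pool once all objects are returned */
                objpool_fini(&my_pool);
        }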


> Thanks for updating! This looks good to me except for 2 points.

> [...]
> > +
> > +/* initialize object pool and pre-allocate objects */
> > +int objpool_init(struct objpool_head *pool, int nr_objs, int object_size,
> > +                 gfp_t gfp, void *context, objpool_init_obj_cb objinit,
> > +                 objpool_fini_cb release)
> > +{
> > +        int rc, capacity, slot_size;
> > +
> > +        /* check input parameters */
> > +        if (nr_objs <= 0 || nr_objs > OBJPOOL_NR_OBJECT_MAX ||
> > +            object_size <= 0 || object_size > OBJPOOL_OBJECT_SIZE_MAX)
> > +                return -EINVAL;
> > +
> > +        /* align up to unsigned long size */
> > +        object_size = ALIGN(object_size, sizeof(long));
> > +
> > +        /* calculate capacity of percpu objpool_slot */
> > +        capacity = roundup_pow_of_two(nr_objs);

> This must be 'roundup_pow_of_two(nr_objs + 1)', because if nr_objs is a
> power of 2 and all objects are pushed on the same slot, tail == head. This
> means empty and full look the same.

That won't happen: tail and head are free-running 32-bit counters that only
wrap at 2^32, and they are never masked until they are used as array
indices. When all objects are pushed to the same slot, tail will be
(head + capacity), which is still distinguishable from the empty case
(tail == head).
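
To make that concrete, here is a simplified model of the slot accounting
(the field names follow the patch, but the two helpers are illustrative
only, not code from this series):

        #include <stdbool.h>
        #include <stdint.h>

        /*
         * head and tail are free-running 32-bit counters; they are only
         * masked (with capacity - 1) when indexing the entry array, so
         * their difference tracks the slot occupancy exactly.
         */
        struct slot_model {
                uint32_t head;          /* next entry to pop */
                uint32_t tail;          /* next free entry to push into */
                uint32_t capacity;      /* power of 2, >= nr_objs */
        };

        static bool slot_empty(struct slot_model *s)
        {
                return s->head == s->tail;
        }

        /*
         * With all nr_objs objects pushed here, tail == head + nr_objs
         * and nr_objs <= capacity < 2^32, so a full slot is never
         * mistaken for an empty one, even when nr_objs is a power of 2.
         * Unsigned subtraction keeps this correct across 2^32 wraparound.
         */
        static bool slot_full(struct slot_model *s)
        {
                return s->tail - s->head >= s->capacity;
        }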


> > +        if (!capacity)
> > +                return -EINVAL;
> > +
> > +        /* initialize objpool pool */
> > +        memset(pool, 0, sizeof(struct objpool_head));
> > +        pool->nr_cpus = nr_cpu_ids;
> > +        pool->obj_size = object_size;
> > +        pool->capacity = capacity;
> > +        pool->gfp = gfp & ~__GFP_ZERO;
> > +        pool->context = context;
> > +        pool->release = release;
> > +        slot_size = pool->nr_cpus * sizeof(struct objpool_slot);
> > +        pool->cpu_slots = kzalloc(slot_size, pool->gfp);
> > +        if (!pool->cpu_slots)
> > +                return -ENOMEM;
> > +
> > +        /* initialize per-cpu slots */
> > +        rc = objpool_init_percpu_slots(pool, nr_objs, context, objinit);
> > +        if (rc)
> > +                objpool_fini_percpu_slots(pool);
> > +        else
> > +                refcount_set(&pool->ref, pool->nr_objs + 1);
> > +
> > +        return rc;
> > +}
> > +EXPORT_SYMBOL_GPL(objpool_init);
> > +

> [...]

> > +
> > +/* drop unused objects and deref objpool for releasing */
> > +void objpool_fini(struct objpool_head *pool)
> > +{
> > +        void *obj;
> > +
> > +        do {
> > +                /* grab object from objpool and drop it */
> > +                obj = objpool_pop(pool);
> > +
> > +                /*
> > +                 * drop reference of objpool anyway even if
> > +                 * the obj is NULL, since one extra ref upon
> > +                 * objpool was already grabbed during pool
> > +                 * initialization in objpool_init()
> > +                 */
> > +                if (refcount_dec_and_test(&pool->ref))
> > +                        objpool_free(pool);

> Nit: you can call objpool_drop() instead of repeating the same thing here.

objpool_drop() won't deref the objpool if the given obj is NULL, but here
we need to drop a reference anyway, even when obj is NULL.
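
For reference, objpool_drop() in this series reads roughly as below
(paraphrased, not quoted verbatim from the patch); it bails out on a
NULL obj before ever touching the refcount, which is exactly what
objpool_fini() must not do:

        int objpool_drop(void *obj, struct objpool_head *pool)
        {
                /* a NULL obj never reaches the refcount */
                if (!obj || !pool)
                        return -EINVAL;

                /* free the pool when the last reference goes away */
                if (refcount_dec_and_test(&pool->ref)) {
                        objpool_free(pool);
                        return 0;
                }

                return -EAGAIN;
        }

Since objpool_init() sets the refcount to nr_objs + 1, the objpool_fini()
loop drops one reference per object remaining in the pool plus one extra
(the final NULL iteration), so the pool is freed only after every
outstanding object has also been returned via objpool_drop().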

> Thank you,

> > +        } while (obj);
> > +}
> > +EXPORT_SYMBOL_GPL(objpool_fini);
> > --
> > 2.40.1


Thanks for your time.