Re: [PATCH v2 13/23] mm/slab: kmalloc: pass requests larger than order-1 page to page allocator

From: Vlastimil Babka
Date: Wed Apr 27 2022 - 04:10:09 EST


On 4/14/22 10:57, Hyeonggon Yoo wrote:
> There is not much benefit to serving large objects from kmalloc().
> Let's pass large requests to the page allocator, as SLUB does, for
> better maintenance of common code.
>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>

Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>

Some nits:

> @@ -3607,15 +3607,25 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
> {
> struct kmem_cache *s;
> size_t i;
> + struct folio *folio;
>
> local_irq_disable();
> for (i = 0; i < size; i++) {
> void *objp = p[i];

'folio' can be declared here instead, at the narrowest scope where it's used.
Could probably move 's' here too, and move 'i' into the for () declaration now
that the kernel builds with gnu11.
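I.e. something like this (untested sketch):

```c
	local_irq_disable();
	for (size_t i = 0; i < size; i++) {
		struct kmem_cache *s;
		struct folio *folio;
		void *objp = p[i];
		...
	}
```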

>
> - if (!orig_s) /* called via kfree_bulk */
> - s = virt_to_cache(objp);
> - else
> + if (!orig_s) {
> + folio = virt_to_folio(objp);
> + /* called via kfree_bulk */
> + if (!folio_test_slab(folio)) {
> + local_irq_enable();
> + free_large_kmalloc(folio, objp);
> + local_irq_disable();
> + continue;
> + }
> + s = folio_slab(folio)->slab_cache;
> + } else
> s = cache_from_obj(orig_s, objp);

Since the 'if' branch now uses { } brackets, the 'else' branch should too, per
kernel coding style.
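I.e.:

```c
	} else {
		s = cache_from_obj(orig_s, objp);
	}
```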

> +
> if (!s)
> continue;
>