Re: [FIX] slub: Remove kmalloc under list_lock from list_slab_objects() V2

From: Tetsuo Handa
Date: Sun Jan 12 2020 - 06:04:32 EST


On 2020/01/10 23:11, Vlastimil Babka wrote:
> On 12/7/19 11:03 PM, Yu Zhao wrote:
>> On Mon, Dec 02, 2019 at 03:12:20PM +0000, Christopher Lameter wrote:
>>> On Sat, 30 Nov 2019, Andrew Morton wrote:
>>>
>>>>> Perform the allocation in free_partial() before the list_lock is taken.
>>>>
>>>> No response here? It looks a lot simpler than the originally proposed
>>>> patch?
>>>
>>> Yup. I prefer this one but it's my own patch so I cannot Ack this.
>>
>> Hi, there is a pending question from Tetsuo-san. I'd be happy to ack
>> once it's addressed.
>
> Tetsuo's mails haven't been reaching linux-mm for a while, and he has
> given up trying to do something about it. That makes it hard to discuss
> anything outside the direct CC group. I don't know, for example, what
> the pending question is.
>

Hmm, this one? Even non-ML destinations are sometimes rejected (e.g.
554 5.7.1 Service unavailable; Client host [202.181.97.72] blocked using b.barracudacentral.org; http://www.barracudanetworks.com/reputation/?pr=1&ip=202.181.97.72
). Anyway, I was just worried about whether it is really safe to perform
a memory allocation here that might involve memory reclaim. You MM guys
know better...
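For reference, a minimal sketch of the two allocation calls being compared
(kernel API, not runnable on its own; the distinction below is the standard
meaning of these GFP flags, not something stated in the patch itself):

```c
unsigned long *map;

/*
 * Old code path: called with n->list_lock held, so the allocation had
 * to be GFP_ATOMIC. GFP_ATOMIC does not include __GFP_DIRECT_RECLAIM,
 * __GFP_IO, or __GFP_FS, so it never sleeps and never enters direct
 * reclaim -- legal under a spinlock, but allowed to fail under pressure.
 */
map = bitmap_zalloc(page->objects, GFP_ATOMIC);

/*
 * New code path: runs before list_lock is taken. GFP_KERNEL includes
 * __GFP_DIRECT_RECLAIM | __GFP_IO | __GFP_FS, so the allocator may
 * sleep and perform full reclaim, including I/O and filesystem
 * writeback -- which is exactly the safety question raised above.
 */
map = bitmap_alloc(oo_objects(s->max), GFP_KERNEL);
```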

-------- Forwarded Message --------
Subject: Re: [FIX] slub: Remove kmalloc under list_lock from list_slab_objects() V2
Message-ID: <54b6c6a1-f9e4-5002-c828-3084c5203489@xxxxxxxxxxxxxxxxxxx>
Date: Sun, 1 Dec 2019 10:17:38 +0900

On 2019/12/01 8:09, Andrew Morton wrote:
>> Perform the allocation in free_partial() before the list_lock is taken.
>
> No response here? It looks a lot simpler than the originally proposed
> patch?
>
>> --- linux.orig/mm/slub.c 2019-10-15 13:54:57.032655296 +0000
>> +++ linux/mm/slub.c 2019-11-11 15:52:11.616397853 +0000
>> @@ -3690,14 +3690,15 @@ error:
>> }
>>
>> static void list_slab_objects(struct kmem_cache *s, struct page *page,
>> - const char *text)
>> + const char *text, unsigned long *map)
>> {
>> #ifdef CONFIG_SLUB_DEBUG
>> void *addr = page_address(page);
>> void *p;
>> - unsigned long *map = bitmap_zalloc(page->objects, GFP_ATOMIC);

Changing from !(__GFP_IO | __GFP_FS) allocation to

>> +
>> if (!map)
>> return;
>> +
>> slab_err(s, page, text, s->name);
>> slab_lock(page);
>>
>> @@ -3723,6 +3723,11 @@ static void free_partial(struct kmem_cac
>> {
>> LIST_HEAD(discard);
>> struct page *page, *h;
>> + unsigned long *map = NULL;
>> +
>> +#ifdef CONFIG_SLUB_DEBUG
>> + map = bitmap_alloc(oo_objects(s->max), GFP_KERNEL);

__GFP_IO | __GFP_FS allocation.
How is it guaranteed that this path is safe for __GFP_IO | __GFP_FS reclaim?

>> +#endif
>>
>> BUG_ON(irqs_disabled());
>> spin_lock_irq(&n->list_lock);