Re: [PATCH 1/8] mm: cma: introduce cma_release_nowait()

From: Oscar Salvador
Date: Thu Mar 25 2021 - 05:40:23 EST


On Wed, Mar 24, 2021 at 05:28:28PM -0700, Mike Kravetz wrote:
> +struct cma_clear_bitmap_work {
> +        struct work_struct work;
> +        struct cma *cma;
> +        unsigned long pfn;
> +        unsigned int count;
> +};
> +
> struct cma cma_areas[MAX_CMA_AREAS];
> unsigned cma_area_count;
>
> +struct workqueue_struct *cma_release_wq;

Should this be static?
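i.e., assuming nothing outside this file needs to touch it, something
like:

        static struct workqueue_struct *cma_release_wq;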

> +
> phys_addr_t cma_get_base(const struct cma *cma)
> {
>         return PFN_PHYS(cma->base_pfn);
> @@ -146,6 +155,10 @@ static int __init cma_init_reserved_areas(void)
>         for (i = 0; i < cma_area_count; i++)
>                 cma_activate_area(&cma_areas[i]);
>
> +        cma_release_wq = create_workqueue("cma_release");
> +        if (!cma_release_wq)
> +                return -ENOMEM;
> +

I did not check the code that runs through the initcalls, and maybe this
scenario cannot really happen, but what happens if we return -ENOMEM
here? I can see that cma_release_nowait() later uses cma_release_wq.
Can cma_release_wq still be NULL at that point because of the -ENOMEM,
or are we guaranteed never to get that far?
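
If it can indeed be NULL there, perhaps cma_release_nowait() should
fall back to the synchronous path as a safety net. A rough sketch of
what I mean (untested):

        if (unlikely(!cma_release_wq))
                return cma_release(cma, pages, count);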

Also, should the creation of cma_release_wq go before the
cma_activate_area() loop?
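
Just to illustrate the ordering I mean (untested, and leaving aside the
question above of how the -ENOMEM would be handled):

        cma_release_wq = create_workqueue("cma_release");
        if (!cma_release_wq)
                return -ENOMEM;

        for (i = 0; i < cma_area_count; i++)
                cma_activate_area(&cma_areas[i]);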

> +bool cma_release_nowait(struct cma *cma, const struct page *pages,
> +                        unsigned int count)
> +{
> +        struct cma_clear_bitmap_work *work;
> +        unsigned long pfn;
> +
> +        if (!cma || !pages)
> +                return false;
> +
> +        pr_debug("%s(page %p)\n", __func__, (void *)pages);

cma_release() seems to have:

pr_debug("%s(page %p, count %u)\n", __func__, (void *)pages, count);

Any reason not to use the same format here?


> +
> +        pfn = page_to_pfn(pages);
> +
> +        if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
> +                return false;
> +
> +        VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
> +
> +        /*
> +         * Set CMA_DELAYED_RELEASE flag: subsequent cma_alloc()'s
> +         * will wait for the async part of cma_release_nowait() to
> +         * finish.
> +         */
> +        if (unlikely(!test_bit(CMA_DELAYED_RELEASE, &cma->flags)))
> +                set_bit(CMA_DELAYED_RELEASE, &cma->flags);

It seems this cannot really happen? This is the only place where we set
the bit, right?
Why not set the bit unconditionally? What is this guarding us against?
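
i.e., unless the test_bit() is guarding against something I am missing,
this could simply be:

        set_bit(CMA_DELAYED_RELEASE, &cma->flags);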

> +
> +        /*
> +         * To make cma_release_nowait() non-blocking, cma bitmap is cleared
> +         * from a work context (see cma_clear_bitmap_fn()). The first page
> +         * in the cma allocation is used to store the work structure,
> +         * so it's released after the cma bitmap clearance. Other pages
> +         * are released immediately as previously.
> +         */
> +        if (count > 1)
> +                free_contig_range(pfn + 1, count - 1);
> +
> +        work = (struct cma_clear_bitmap_work *)page_to_virt(pages);
> +        INIT_WORK(&work->work, cma_clear_bitmap_fn);
> +        work->cma = cma;
> +        work->pfn = pfn;
> +        work->count = count;
> +        queue_work(cma_release_wq, &work->work);

As I said above, can cma_release_wq be NULL if we had -ENOMEM before?


--
Oscar Salvador
SUSE L3