Re: [Intel-IOMMU 02/10] Library routine for pre-allocated pool handling

From: Andrew Morton
Date: Thu Jun 07 2007 - 19:27:56 EST


On Wed, 06 Jun 2007 11:57:00 -0700
anil.s.keshavamurthy@xxxxxxxxx wrote:

> Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@xxxxxxxxx>

That was a terse changelog.

Obvious question: how does this differ from mempools, and would it be
better to fill in any gaps in mempool functionality instead of
implementing something similar-looking?
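
For comparison, the pre-allocated-reserve part of this is roughly what the
existing mempool API already gives you.  A minimal sketch (the object size
and reserve depth below are made up for illustration, not taken from the
patch):

        #include <linux/mempool.h>
        #include <linux/slab.h>

        #define OBJ_SIZE        64      /* illustrative only */
        #define RESERVE_NR      16      /* illustrative only */

        static mempool_t *obj_pool;

        static int obj_pool_init(void)
        {
                /* pre-allocates RESERVE_NR kmalloc'ed objects up front */
                obj_pool = mempool_create_kmalloc_pool(RESERVE_NR, OBJ_SIZE);
                return obj_pool ? 0 : -ENOMEM;
        }

        static void *obj_get(gfp_t gfp)
        {
                /* dips into the reserve only when kmalloc(gfp) fails */
                return mempool_alloc(obj_pool, gfp);
        }

        static void obj_put(void *obj)
        {
                mempool_free(obj, obj_pool);
        }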

The changelog very much should describe all of this, as well as explain
what the dynamic behaviour of this new thing is, what applications are
envisaged, what problems it solves, and so on.


> --- /dev/null 1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6.22-rc3/lib/respool.c 2007-06-06 11:34:46.000000000 -0700

There are a number of coding-style glitches in here, but
scripts/checkpatch.pl catches most of them. Please run it, and fix.

> @@ -0,0 +1,222 @@
> +/*
> + * respool.c - library routines for handling generic pre-allocated pool of objects
> + *
> + * Copyright (c) 2006, Intel Corporation.
> + *
> + * This file is released under the GPLv2.
> + *
> + * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@xxxxxxxxx>
> + */
> +
> +#include <linux/respool.h>
> +
> +/**
> + * get_resource_pool_obj - gets an object from the pool
> + * @ppool - resource pool in question
> + * This function gets an object from the pool and
> + * if the pool count drops below min_count, this
> + * function schedules work to grow the pool. If
> + * no elements are found in the pool then this function
> + * tries to get memory from kernel.
> + */
> +void * get_resource_pool_obj(struct resource_pool *ppool)
> +{
> + unsigned long flags;
> + struct list_head *plist;
> + bool queue_work = 0;
> +
> + spin_lock_irqsave(&ppool->pool_lock, flags);
> + if (!list_empty(&ppool->pool_head)) {
> + plist = ppool->pool_head.next;
> + list_del(plist);
> + ppool->curr_count--;
> + } else {
> + /*Making sure that curr_count is 0 when list is empty */
> + plist = NULL;
> + BUG_ON(ppool->curr_count != 0);
> + }
> +
> + /* Check if pool needs to grow */
> + if (ppool->curr_count <= ppool->min_count)
> + queue_work = 1;
> + spin_unlock_irqrestore(&ppool->pool_lock, flags);
> +
> + if (queue_work)
> + schedule_work(&ppool->work); /* queue work to grow the pool */
> +
> +
> + if (plist) {
> + memset(plist, 0, ppool->alloc_size); /* Zero out memory */
> + return plist;
> + }
> +
> + /* Out of luck, try to get memory from kernel */
> + plist = (struct list_head *)ppool->alloc_mem(ppool->alloc_size,
> + GFP_ATOMIC);
> +
> + return plist;
> +}

A function like this should take a gfp_t from the caller, and pass it on.
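
Something along these lines, say (just a sketch against the posted code;
the prototype in respool.h and every caller would need the extra gfp_t
argument as well):

        void *get_resource_pool_obj(struct resource_pool *ppool, gfp_t gfp)
        {
                unsigned long flags;
                struct list_head *plist;
                bool queue_work = false;

                spin_lock_irqsave(&ppool->pool_lock, flags);
                if (!list_empty(&ppool->pool_head)) {
                        plist = ppool->pool_head.next;
                        list_del(plist);
                        ppool->curr_count--;
                } else {
                        /* curr_count must be 0 when the list is empty */
                        plist = NULL;
                        BUG_ON(ppool->curr_count != 0);
                }

                /* Check if the pool needs to grow */
                if (ppool->curr_count <= ppool->min_count)
                        queue_work = true;
                spin_unlock_irqrestore(&ppool->pool_lock, flags);

                if (queue_work)
                        schedule_work(&ppool->work);

                if (plist) {
                        memset(plist, 0, ppool->alloc_size);
                        return plist;
                }

                /* Fall back to the allocator, honouring the caller's mask */
                return ppool->alloc_mem(ppool->alloc_size, gfp);
        }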

> +/**
> + * put_resource_pool_obj - puts an object back to the pool
> + * @vaddr - object's address
> + * @ppool - resource pool in question.
> + * This function puts an object back to the pool.
> + */
> +void put_resource_pool_obj(void * vaddr, struct resource_pool *ppool)
> +{
> + unsigned long flags;
> + struct list_head *plist = (struct list_head *)vaddr;
> + bool queue_work = 0;
> +
> + BUG_ON(!vaddr);
> + BUG_ON(!ppool);
> +
> + spin_lock_irqsave(&ppool->pool_lock, flags);
> + list_add(plist, &ppool->pool_head);
> + ppool->curr_count++;
> + if (ppool->curr_count > (ppool->min_count +
> + ppool->grow_count * 2))
> + queue_work = 1;

Some of the indenting is a bit funny-looking in here.
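
For instance, the usual style is to line the continuation up with the
opening parenthesis:

        if (ppool->curr_count > (ppool->min_count +
                                 ppool->grow_count * 2))
                queue_work = 1;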

> + spin_unlock_irqrestore(&ppool->pool_lock, flags);
> +
> + if (queue_work)
> + schedule_work(&ppool->work); /* queue work to shrink the pool */
> +}
> +
> +void
> +__grow_resource_pool(struct resource_pool *ppool,
> + unsigned int grow_count)
> +{
> + unsigned long flags;
> + struct list_head *plist;
> +
> + while(grow_count) {
> + plist = (struct list_head *)ppool->alloc_mem(ppool->alloc_size,
> + GFP_KERNEL);

resource_pool.alloc_mem() already returns void *, so there is never a need
to cast its return value.
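
i.e. the plain call is enough:

        plist = ppool->alloc_mem(ppool->alloc_size, GFP_KERNEL);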

> + if (!plist)
> + break;
> +
> + /* Add the element to the list */
> + spin_lock_irqsave(&ppool->pool_lock, flags);
> + list_add(plist, &ppool->pool_head);
> + ppool->curr_count++;
> + spin_unlock_irqrestore(&ppool->pool_lock, flags);
> + grow_count--;
> + }
> +}
> +
