On Friday, August 20, 2010 11:50:42 Michal Nazarewicz wrote:
+**** Regions
+
+ Regions is a list of regions terminated by a region with size
+ equal to zero. The following fields may be set:
+
+ - size -- size of the region (required, must not be zero)
+ - alignment -- alignment of the region; must be a power of two or
+ zero (optional)
Just wondering: is alignment really needed since we already align to
PAGE_SIZE? Do you know of hardware with alignment requirements > PAGE_SIZE?
+ - start -- where the region has to start (optional)
+ - alloc_name -- the name of allocator to use (optional)
+ - alloc -- allocator to use (optional; besides, alloc_name is
+ probably what you want)
I would make this field internal only. At least for now.
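Just to check that I read the fields right, here is how I picture such a
region list would look. This is only a sketch: the sizes and the "bf"
allocator name are invented, and I am assuming the structure comes from
the patch's <linux/cma.h>:

static struct cma_region regions[] = {
	/* 16 MiB region, no particular placement or alignment */
	{ .size = 16 << 20 },
	/* 8 MiB region aligned to 1 MiB, fixed at 256 MiB, using a
	 * named allocator ("bf" is just an example name here) */
	{ .size = 8 << 20, .alignment = 1 << 20,
	  .start = 256 << 20, .alloc_name = "bf" },
	/* zero size terminates the list */
	{ }
};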
+*** The device and types of memory
+
+ The name of the device is taken from the device structure. It is
+ not possible to use CMA if the driver does not register a device
+ (actually this can be overcome if a fake device structure is
+ provided with at least the name set).
+
+ The type of memory is an optional argument provided by the device
+ whenever it requests a memory chunk. In many cases this can be
+ ignored but sometimes it may be required by some devices.
This really should not be optional but compulsory. 'type' has the same function
as the GFP flags with kmalloc. They tell the kernel where the memory should be
allocated. Only if you do not care at all can you pass in NULL. But in almost
all cases the memory should be at least DMA-able (and yes, for a lot of SoCs that
is the same as any memory -- for now).
Memory types should be defined in the platform code. Some can be generic
like 'dma' (i.e. any DMAable memory), 'dma32' (32-bit DMA) and 'common' (any
memory). Others are platform specific like 'banka' and 'bankb'.
A memory type definition can be a start address/size pair, but it could
perhaps also be a GFP type (e.g. .name = "dma32", .gfp = GFP_DMA32).
Regions should be of a single memory type. So when you define the region it
should have a memory type field.
Drivers request memory of whatever type they require. The mapping just maps
one or more regions to the driver and the cma allocator will pick only those
regions with the required type and ignore those that do not match.
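To make that concrete, the selection inside cma_alloc could be as simple
as something along these lines (every name below is hypothetical -- this
is only meant to show the matching, not to suggest an implementation):

/* Hypothetical sketch: pick the first region mapped to the device
 * whose memory type matches the type the driver asked for. */
static struct cma_region *
cma_pick_region(struct cma_region **mapped, unsigned int n,
		const char *type)
{
	unsigned int i;

	for (i = 0; i < n; ++i) {
		/* Skip regions of the wrong memory type. */
		if (type && strcmp(mapped[i]->type, type))
			continue;
		return mapped[i];
	}
	return NULL;	/* nothing of the requested type is mapped */
}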
+ For instance, let's say that there are two memory banks and for
+ performance reasons a device uses buffers in both of them.
+ The platform defines memory types "a" and "b" for regions in the
+ two banks. The device driver would then use those two types to
+ request memory chunks from different banks. CMA attributes could
+ look as follows:
+
+ static struct cma_region regions[] = {
+ { .name = "a", .size = 32 << 20 },
+ { .name = "b", .size = 32 << 20, .start = 512 << 20 },
+ { }
+ };
+ static const char map[] __initconst = "foo/a=a;foo/b=b;*=a,b";
So this would become something like this:
static struct cma_memtype types[] = {
{ .name = "a", .size = 32 << 20 },
{ .name = "b", .size = 32 << 20, .start = 512 << 20 },
// For example:
{ .name = "dma", .gfp = GFP_DMA },
{ }
};
static struct cma_region regions[] = {
// size may of course be smaller than the memtype size.
{ .name = "a", .type = "a", .size = 32 << 20 },
{ .name = "b", .type = "b", .size = 32 << 20 },
{ }
};
static const char map[] __initconst = "*=a,b";
No need to do anything special for driver foo here: cma_alloc will pick the
correct region based on the memory type requested by the driver.
It is probably no longer needed to specify the memory type in the mapping when
this is in place.
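And on the driver side the two banks would then be requested purely by
memory type, along these lines (I am assuming a
cma_alloc(dev, type, size, alignment) style prototype and that dev is
foo's struct device; I am going from memory on the exact signature):

dma_addr_t addr_a, addr_b;

/* Driver foo asks for a 1 MiB buffer in each bank, selected purely
 * by memory type -- no foo-specific entries in the map needed. */
addr_a = cma_alloc(dev, "a", 1 << 20, 0);
addr_b = cma_alloc(dev, "b", 1 << 20, 0);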