Re: Avoiding external fragmentation with a placement policy Version11

From: Joel Schopp
Date: Wed May 25 2005 - 13:22:19 EST


> Changelog since V10
>
> o Important - All allocation types now use per-cpu caches like the standard
>   allocator. Older versions may have trouble with large numbers of processors

Do you have a new set of benchmarks we could see? The ones you had for v10 were pretty useful.

> o Removed all the additional buddy allocator statistic code

Is there a separate patch for the statistic code or is it no longer being maintained?

> +/*
> + * Shared per-cpu lists would cause fragmentation over time
> + * The pcpu_list is to keep kernel and userrclm allocations
> + * apart while still allowing all allocation types to have
> + * per-cpu lists
> + */

Why are kernel nonreclaimable and kernel reclaimable allocations joined here? I'm not saying you are wrong; I'm just ignorant and need some education.

> +struct pcpu_list {
> +	int count;
> +	struct list_head list;
> +} ____cacheline_aligned_in_smp;
> +
>  struct per_cpu_pages {
> -	int count;		/* number of pages in the list */
> +	struct pcpu_list pcpu_list[2];	/* 0: kernel 1: user */
>  	int low;		/* low watermark, refill needed */
>  	int high;		/* high watermark, emptying needed */
>  	int batch;		/* chunk size for buddy add/remove */
> -	struct list_head list;	/* the list of pages */
>  };
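
Just to check my understanding: I assume the alloc/free fast paths now pick
one of the two lists by index, something like the sketch below (the gfp test
is only my guess at how you tell user-reclaimable allocations apart; substitute
whatever flag the patch actually uses):

	struct pcpu_list *plist;

	/* index 1 for user-reclaimable allocations, 0 for kernel allocations */
	plist = &pcp->pcpu_list[(gfp_flags & __GFP_USERRCLM) != 0];

	if (plist->count) {
		page = list_entry(plist->list.next, struct page, lru);
		list_del(&page->lru);
		plist->count--;
	}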

Instead of defining 0 and 1 in a comment, why not use a #define?

> +	pcp->pcpu_list[0].count = 0;
> +	pcp->pcpu_list[1].count = 0;

The #define would make code like this look more readable.
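
Something like this, maybe (the names are just a suggestion, use whatever
matches the rest of the patch):

#define PCPU_KERNEL	0
#define PCPU_USER	1

	struct pcpu_list pcpu_list[2];	/* indexed by PCPU_KERNEL/PCPU_USER */

	pcp->pcpu_list[PCPU_KERNEL].count = 0;
	pcp->pcpu_list[PCPU_USER].count = 0;

Then readers don't have to go back to the struct definition to remember
which index is which.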
