> I'm suffering from memory-fragmentation slowdowns, and my machine
> just has to be up for a while before memory gets sufficiently
> fragmented to cause trouble :(
>
> Anyone want to explain to me really slowly why we try to keep huge
> chunks of contiguous memory?
DMA, 8k NFS fragments, etc...
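To make the problem concrete, here is a toy model (my own sketch, not kernel code) of physical memory as a list of page frames. It shows why a DMA buffer or an 8k NFS fragment (two contiguous 4k pages) can be unsatisfiable even when half of memory is free:

```python
# Toy model: memory as a list of page frames, 1 = in use, 0 = free.
# Total free memory can be plentiful while no contiguous run exists.

def longest_free_run(pages):
    """Length of the longest run of contiguous free page frames."""
    best = cur = 0
    for used in pages:
        cur = 0 if used else cur + 1
        best = max(best, cur)
    return best

# 16 frames with every other one in use: 8 frames free in total...
pages = [i % 2 for i in range(16)]
assert pages.count(0) == 8
# ...but no two free frames are adjacent, so a request for two
# contiguous pages (an 8k fragment) cannot be satisfied.
assert longest_free_run(pages) == 1
```

This is exactly the "up for a while" failure mode: the free total looks healthy, but the runs have been chopped up.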
> And if that is important, why don't we implement a relocating
> defragmentation algorithm in the kernel? On the assumption that I
> could pause the kernel for a moment, it would probably be faster to
> do that on demand than to live with the current mess!
Even better, make sure that fragmentation doesn't occur very
often by freeing pages on demand before we use a free page
from a big free area.
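That policy can be sketched in a few lines. This is a hypothetical illustration of the idea, not the kernel's actual allocator: hand out a single free page if one exists, try to reclaim a page on demand next, and only split a block out of a larger contiguous free area as a last resort.

```python
# Hypothetical sketch of "reclaim before splitting a big free area".
# free_lists maps order -> list of block start addresses (a block of
# order n covers 2**n pages); reclaim() models freeing a page on
# demand (e.g. dropping a clean cache page) and returns its address,
# or None if nothing could be reclaimed.

def alloc_page(free_lists, reclaim):
    # 1. A lone free page costs no contiguity to hand out.
    if free_lists[0]:
        return free_lists[0].pop()
    # 2. Prefer freeing a page on demand over carving up a large
    #    contiguous area -- the policy suggested above.
    page = reclaim()
    if page is not None:
        return page
    # 3. Last resort: split the smallest larger block in half,
    #    returning one half and keeping the other free.
    for order in range(1, max(free_lists) + 1):
        if free_lists[order]:
            block = free_lists[order].pop()
            half = 1 << (order - 1)
            free_lists[order - 1].append(block + half)
            return block
    return None  # out of memory
```

With a working `reclaim`, the order-3 block survives untouched; without one, the allocator has to start splitting it.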
> There is a defragmentation algorithm that runs in O(mem_size) time
> with two passes over memory, and needs no extra memory.
But it does need huge amounts of CPU time... memcpy() isn't
exactly cheap :(
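The two-pass scheme quoted above resembles sliding ("mark-compact") compaction: one pass computes where each live page will land, a second pass does the copies and fixes up the mappings. A minimal sketch, with `frames` and `page_table` as hypothetical simplifications (a real in-kernel version would have to stash forwarding information in the freed frames themselves to honour the "no extra memory" claim; this sketch uses a dict for clarity). The second pass is where all the memcpy() time goes:

```python
# Sketch of two-pass sliding compaction. `frames` is physical memory
# (None = free frame); `page_table` maps virtual page numbers to
# frame indices. Both are illustrative names, not kernel structures.

def compact(frames, page_table):
    # Pass 1: assign every live frame the next slot from the left.
    forward = {}
    dest = 0
    for src, contents in enumerate(frames):
        if contents is not None:
            forward[src] = dest
            dest += 1
    # Pass 2: move the data (the expensive memcpy-equivalent step)
    # and rewrite the page table to point at the new frames.
    for src in sorted(forward):
        frames[forward[src]] = frames[src]
    for i in range(dest, len(frames)):
        frames[i] = None
    for vpn, frame in page_table.items():
        page_table[vpn] = forward[frame]
    return dest  # frames in use; everything above is one free run
```

Both passes touch each frame a constant number of times, so the work is O(mem_size); the objection stands, though, that pass 2 copies nearly every live page, and that copying is what a pause-the-kernel defragmenter would spend its time on.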
Rik.
+-------------------------------------------+--------------------------+
| Linux: - LinuxHQ MM-patches page | Scouting webmaster |
| - kswapd ask-him & complain-to guy | Vries cubscout leader |
| http://www.fys.ruu.nl/~riel/ | <H.H.vanRiel@fys.ruu.nl> |
+-------------------------------------------+--------------------------+