Re: bug? in __get_free_pages

kwrohrer@enteract.com
Tue, 4 Nov 1997 00:37:44 -0600 (CST)


And lo, Neilski saith unto me:
> Hmm, I'm new to kernel delving so I've probably picked this up wrong, but it
> *looks* like I've come across a bug.
>
> If lots of pages are free, but none of them happen to be in the DMA-able
> region, then __get_free_pages() will fail to return DMA-able pages - as far as
> I can see... This happened to me today, I reckon.
>
> Is this a bug or a feature ?
>
> The nitty-gritty: if you look in page_alloc.c, you see that if (nr_free_pages >
> reserved_pages) then __get_free_pages will never call try_to_free_page.
>
> I have a suspicion that this would surely have been caught by now, and thus
> must be my mistake, but I really can't see where I'm going wrong...
It's something we've been living with for some time now.

Currently the paging mechanism pays no attention to the details of any
pending memory demand, save for whether each request can be granted.
Network I/O can run into major problems when memory is fragmented,
because it requests order-2 chunks of memory and now blocks
if no chunk that large is available. Zlatko Calusic recently posted
a patch which makes sure at least half the minimum free pages are in
groups of order 2 or greater, but AFAIK even that patch just evicts
more pages until it gets enough large free areas.

It would be nice to see (and I may attempt) a patch which keeps track
of pending allocations for each (order, ISA DMA capable) pair, and
which actually tailors paging activity to create appropriate spaces,
possibly shuffling pages about in memory to create the desired
free spaces. This would solve both the no-free-DMA-memory problems
the sound and floppy drivers exhibit, and the memory fragmentation
problems the new blocking policy imposes on the network layer,
without throwing out (possibly quite a few) pages we didn't need to
throw out in the process.

Keith