Re: [RFC PATCH 1/5] mm, page_alloc: support multiple pages allocation
From: Joonsoo Kim
Date: Wed Jul 10 2013 - 21:02:53 EST
On Wed, Jul 10, 2013 at 03:52:42PM -0700, Dave Hansen wrote:
> On 07/03/2013 01:34 AM, Joonsoo Kim wrote:
> > - if (page)
> > + do {
> > + page = buffered_rmqueue(preferred_zone, zone, order,
> > + gfp_mask, migratetype);
> > + if (!page)
> > + break;
> > +
> > + if (!nr_pages) {
> > + count++;
> > + break;
> > + }
> > +
> > + pages[count++] = page;
> > + if (count >= *nr_pages)
> > + break;
> > +
> > + mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
> > + if (!zone_watermark_ok(zone, order, mark,
> > + classzone_idx, alloc_flags))
> > + break;
> > + } while (1);
>
> I'm really surprised this works as well as it does. Calling
> buffered_rmqueue() a bunch of times enables/disables interrupts a bunch
> of times, and mucks with the percpu pages lists a whole bunch.
> buffered_rmqueue() is really meant for _single_ pages, not to be called
> a bunch of times in a row.
>
> Why not just do a single rmqueue_bulk() call?
Hello, Dave.

There are a few reasons why I implemented the feature this way.

rmqueue_bulk() needs the zone lock. If we allocate only a few pages,
for example, 2 or 3, taking the zone lock can cost more than
allocating one page at a time from the percpu pages list. So, IMHO, it
is better to support multiple-page allocation on top of the percpu
pages list.

And I think that enabling/disabling interrupts between allocations
helps avoid a latency problem. If we kept interrupts disabled until
the whole batch was finished, interrupt handling could be delayed too
long. free_hot_cold_page_list() already enables/disables interrupts
many times in the same way.

Thanks for the helpful comment!
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@xxxxxxxxxx For more info on Linux MM,
> see: http://www.linux-mm.org/ .
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/