Re: [PATCH] mm: do not drain pagevecs for mlock

From: KOSAKI Motohiro
Date: Fri Dec 30 2011 - 03:12:33 EST


2011/12/30 Tao Ma <tm@xxxxxx>:
> In our test of mlock, we have found a severe performance regression.
> Further investigation shows that mlock() is blocked heavily by
> lru_add_drain_all(), which calls schedule_on_each_cpu() and flushes
> the work queue on every CPU; this becomes very slow when there are
> many cpus.
>
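For reference, the path in question looks roughly like this in
kernels of this era (simplified from mm/swap.c and kernel/workqueue.c;
details vary between versions):

int lru_add_drain_all(void)
{
        return schedule_on_each_cpu(lru_add_drain_per_cpu);
}

int schedule_on_each_cpu(work_func_t func)
{
        int cpu;
        struct work_struct __percpu *works;

        works = alloc_percpu(struct work_struct);
        if (!works)
                return -ENOMEM;

        get_online_cpus();
        for_each_online_cpu(cpu) {
                struct work_struct *work = per_cpu_ptr(works, cpu);

                INIT_WORK(work, func);
                schedule_work_on(cpu, work);
        }
        /* waiting here for every cpu, one after another, is the
         * expensive part when the cpus are busy */
        for_each_online_cpu(cpu)
                flush_work(per_cpu_ptr(works, cpu));
        put_online_cpus();
        free_percpu(works);
        return 0;
}
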
> So we have tried 2 ways to solve it:
> 1. Add a per-cpu counter for the pagevecs so that we do not schedule
>   and flush the lru_drain work on a cpu that has no pending pagevecs
>   (I have already finished this code; a rough sketch follows below).
> 2. Remove the lru_add_drain_all() call.
>
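Since the actual counter patch is not shown here, option 1 would
presumably look something like the sketch below; pagevec_count and
drain_work are hypothetical names, not taken from Tao Ma's code. The
idea is to track, per cpu, how many pages are parked in pagevecs and
to schedule/flush the drain work only on cpus whose count is non-zero:

/* hypothetical counter, bumped/cleared wherever pages enter or
 * leave the per-cpu pagevecs */
static DEFINE_PER_CPU(int, pagevec_count);
static DEFINE_PER_CPU(struct work_struct, drain_work);

int lru_add_drain_all(void)
{
        int cpu;
        cpumask_var_t scheduled;

        if (!alloc_cpumask_var(&scheduled, GFP_KERNEL))
                return -ENOMEM;
        cpumask_clear(scheduled);

        get_online_cpus();
        for_each_online_cpu(cpu) {
                /* racy check, but good enough as a heuristic */
                if (!per_cpu(pagevec_count, cpu))
                        continue;
                INIT_WORK(&per_cpu(drain_work, cpu),
                          lru_add_drain_per_cpu);
                schedule_work_on(cpu, &per_cpu(drain_work, cpu));
                cpumask_set_cpu(cpu, scheduled);
        }
        /* only wait for the cpus we actually scheduled work on */
        for_each_cpu(cpu, scheduled)
                flush_work(&per_cpu(drain_work, cpu));
        put_online_cpus();

        free_cpumask_var(scheduled);
        return 0;
}
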
> The first one has a problem: in our production system all the cpus
> are busy, so there is very little chance for a cpu to have empty
> pagevecs, except when you run several consecutive mlocks.
>
> From the log of the commit that added this call (8891d6da), it seems
> that we do not have to make it here. So the 2nd option seems both
> easy and workable, hence this patch.

Could you please show us your system environment and benchmark programs?
Usually lru_drain_*() is much faster than the mlock() body, because
mlock() itself does plenty of memset(page) work (every newly faulted
page has to be zeroed).
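
For what it's worth, the kind of micro-benchmark I would expect is a
sketch like the one below (hypothetical, not Tao Ma's test program):
it just times mlock() on a freshly mmap'ed anonymous region, where
the kernel has to fault in and zero every page, which normally dwarfs
the drain. You may need to raise RLIMIT_MEMLOCK or run it as root:

#include <stdio.h>
#include <sys/mman.h>
#include <sys/time.h>

#define LEN (64UL << 20)        /* 64MB per iteration */

int main(void)
{
        struct timeval tv1, tv2;
        int i;

        for (i = 0; i < 16; i++) {
                char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED) {
                        perror("mmap");
                        return 1;
                }
                gettimeofday(&tv1, NULL);
                /* faults in and zeroes every page in the region */
                if (mlock(p, LEN)) {
                        perror("mlock");
                        return 1;
                }
                gettimeofday(&tv2, NULL);
                printf("mlock: %lu usec\n",
                       (tv2.tv_sec - tv1.tv_sec) * 1000000UL +
                       tv2.tv_usec - tv1.tv_usec);
                munlock(p, LEN);
                munmap(p, LEN);
        }
        return 0;
}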