Re: [PATCH] Revert "mm: vmpressure: fix sending wrong events on underflow"
From: zhong jiang
Date: Wed Jun 07 2017 - 00:57:36 EST
On 2017/6/7 11:55, Minchan Kim wrote:
> On Wed, Jun 07, 2017 at 11:08:37AM +0800, zhongjiang wrote:
>> This reverts commit e1587a4945408faa58d0485002c110eb2454740c.
>>
>> When a THP LRU page is reclaimed, the THP is split into normal pages
>> and the reclaim loop runs again. The number of reclaimed pages cannot
>> be bigger than nr_scan, because each pass of the loop also increases
>> the nr_scan counter.
> Unfortunately, there is still an underflow issue caused by slab pages,
> as Vinayak reported in the description of e1587a4945408, so we cannot
> revert it. Please correct the comment instead of removing the logic.
>
> Thanks.
We calculate vmpressure based on LRU pages and exclude the slab pages,
per the previous discussion. Is that not the case here?
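
For reference, below is a minimal userspace sketch of the arithmetic in
vmpressure_calc_level() with the guard removed (this is an illustration,
not the kernel code path, and the sample values are made up). It shows
what the removed reclaimed >= scanned check protected against:

#include <stdio.h>

/*
 * Sketch of the arithmetic in vmpressure_calc_level() without the
 * reclaimed >= scanned guard. With unsigned math,
 * scale - (reclaimed * scale / scanned) wraps around whenever
 * reclaimed > scanned, producing a huge bogus pressure value.
 */
static unsigned long calc_pressure(unsigned long scanned,
				   unsigned long reclaimed)
{
	unsigned long scale = scanned + reclaimed;
	unsigned long pressure;

	pressure = scale - (reclaimed * scale / scanned);
	pressure = pressure * 100 / scale;
	return pressure;
}

int main(void)
{
	/* Normal case: half of the scanned pages reclaimed -> 50. */
	printf("%lu\n", calc_pressure(512, 256));

	/*
	 * THP case from the removed comment: scanned is 1 but the
	 * split huge page reports 512 reclaimed -> unsigned wrap,
	 * not a percentage.
	 */
	printf("%lu\n", calc_pressure(1, 512));
	return 0;
}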
Thanks
zhongjiang
>> Signed-off-by: zhongjiang <zhongjiang@xxxxxxxxxx>
>> ---
>> mm/vmpressure.c | 10 +---------
>> 1 file changed, 1 insertion(+), 9 deletions(-)
>>
>> diff --git a/mm/vmpressure.c b/mm/vmpressure.c
>> index 6063581..149fdf6 100644
>> --- a/mm/vmpressure.c
>> +++ b/mm/vmpressure.c
>> @@ -112,16 +112,9 @@ static enum vmpressure_levels vmpressure_calc_level(unsigned long scanned,
>> unsigned long reclaimed)
>> {
>> unsigned long scale = scanned + reclaimed;
>> - unsigned long pressure = 0;
>> + unsigned long pressure;
>>
>> /*
>> - * reclaimed can be greater than scanned in cases
>> - * like THP, where the scanned is 1 and reclaimed
>> - * could be 512
>> - */
>> - if (reclaimed >= scanned)
>> - goto out;
>> - /*
>> * We calculate the ratio (in percents) of how many pages were
>> * scanned vs. reclaimed in a given time frame (window). Note that
>> * time is in VM reclaimer's "ticks", i.e. number of pages
>> @@ -131,7 +124,6 @@ static enum vmpressure_levels vmpressure_calc_level(unsigned long scanned,
>> pressure = scale - (reclaimed * scale / scanned);
>> pressure = pressure * 100 / scale;
>>
>> -out:
>> pr_debug("%s: %3lu (s: %lu r: %lu)\n", __func__, pressure,
>> scanned, reclaimed);
>>
>> --
>> 1.7.12.4
>>