Re: [RFC][PATCH] release mmap_sem before starting migration (Was: Re: Need to take mmap_sem lock in move_pages)

From: KAMEZAWA Hiroyuki
Date: Thu Feb 05 2009 - 08:24:04 EST


Swamy Gowda wrote:
> KAMEZAWA Hiroyuki wrote:
>> On Wed, 4 Feb 2009 10:39:19 -0500 (EST)
>> Christoph Lameter <cl@xxxxxxxxxxxxxxxxxxxx> wrote:
>>
>>> On Wed, 4 Feb 2009, KAMEZAWA Hiroyuki wrote:
>>>
>>> > mmap_sem can be released after page table walk ends.
>>>
>>> No. The read lock on mmap_sem must be held, since the migrate functions
>>> manipulate page table entries. Concurrent large-scale changes to the
>>> page tables (splitting vmas, remapping, etc.) must not be possible.
>>>
>> Just for clarification:
>>
>> 1. Changes to the page table are not a problem from the kernel's viewpoint
>> (no panic, no leak, ...).
>> 2. But this loses the "atomic" aspect of migration and will allow unexpected
>> behavior.
>> (That is, the page-mapping status after sys_move_pages() may not be what
>> the user expects.)
>>
>>
>> Thanks,
>> -Kame
>>
>>
> But I can't understand how the user can see a different page->mapping, since
> the new page->mapping still holds the anon_vma pointer, which should still
> reflect the changes in the vma list (due to split vma, etc.). But,
> assuming it is a problem, how is it avoided in the case of hot-remove?
>
I'm sorry, "page-mapping" in my text is not page->mapping. It just means
the process's memory map.

From my point of view, there are no problems (as I wrote, no problem in the kernel).

One big difference between sys_move_pages() and hot-remove is that
hot-remove retries many times, but sys_move_pages() doesn't.
So, races/contention in migrate_page() will dramatically decrease the
success rate of page migration via the system call.

On the user side of sys_move_pages(), we may have to think more.
I suspect there would be much more contention on pte_lock and
page_lock() etc. if we removed mmap_sem.
The good point of mmap_sem is that waiters can sleep without any trouble
or nesting of locks.

Thanks,
-Kame

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/