Re: [rfc patch 3/3] mm: munlock COW pages on truncation unmap
From: KOSAKI Motohiro
Date: Sat Oct 03 2009 - 09:57:03 EST
>> Umm..
>> I don't understand this.
>>
>> (1) unmap_mapping_range() is called twice.
>>
>> unmap_mapping_range(mapping, new + PAGE_SIZE - 1, 0, 1);
>> truncate_inode_pages(mapping, new);
>> unmap_mapping_range(mapping, new + PAGE_SIZE - 1, 0, 1);
>>
>> (2) PG_mlocked is turned on from mlock() and vmscan.
>> (3) vmscan grabs anon_vma, but mlock doesn't grab anon_vma.
>
> You are right, I was so focused on the LRU side that I missed an
> obvious window here: an _explicit_ mlock can still happen between the
> PG_mlocked clearing section and releasing the page.
>
> If we race with it, the put_page() in __mlock_vma_pages_range() might
> free the freshly mlocked page.
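>
> Roughly, the interleaving I worry about (hand-drawn; the
> __mlock_vma_pages_range() details are from memory, so take the
> function names as approximate):
>
>     truncation:                       explicit mlock:
>                                       __mlock_vma_pages_range()
>                                         get_user_pages() takes a
>                                         reference on the page
>     zap_pte_range()
>       TestClearPageMlocked(page)
>       pte cleared, page unmapped,
>       mapping's reference dropped
>                                         mlock_vma_page(page)
>                                           SetPageMlocked(page)
>                                         put_page(page)
>                                         /* drops the last reference,
>                                            freeing a PG_mlocked page */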
>
>> (4) after truncate_inode_pages(), we don't need to think about vs-COW,
>>     because find_get_page() never succeeds. But the first
>>     unmap_mapping_range() has a vs-COW race.
>
> Yes, we can race with COW breaking, but I cannot see a problem there.
> It clears the old page's mlock, but it does so with an atomic
> TestClearPageMlocked(). And the new page is mapped and mlocked under
> the pte lock, and only if we didn't clear the pte in the meantime.
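>
> Condensed, the ordering I rely on looks like this (not literal
> do_wp_page() code, just the relevant steps):
>
>     pte = pte_offset_map_lock(mm, pmd, address, &ptl);
>     if (!pte_same(*pte, orig_pte))
>             goto unlock;    /* pte already zapped, new page never mapped */
>     set_pte_at(mm, address, pte, new_pte);
>     if (vma->vm_flags & VM_LOCKED) {
>             munlock_vma_page(old_page);     /* TestClearPageMlocked() */
>             mlock_vma_page(new_page);
>     }
> unlock:
>     pte_unmap_unlock(pte, ptl);
>
> The new page can only end up with PG_mlocked set if the pte was still
> intact under the lock, i.e. the truncation side has not zapped it yet;
> when it does, it will find and munlock the new page like any other.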
Ah, you are right.
>> So, is anon_vma grabbing really sufficient?
>
> No, the explicit mlocking race exists, I think.
>
>> Or, did you intend the following?
>>
>> unmap_mapping_range(mapping, new + PAGE_SIZE - 1, 0, 0);
>> truncate_inode_pages(mapping, new);
>> unmap_mapping_range(mapping, new + PAGE_SIZE - 1, 0, 1);
>
> As mentioned above, I don't see how it would make a difference.
Yes, sorry. Please forget this.
>> > @@ -544,6 +544,13 @@ redo:
>> > */
>> > lru = LRU_UNEVICTABLE;
>> > add_page_to_unevictable_list(page);
>> > + /*
>> > + * See the TestClearPageMlocked() in zap_pte_range():
>> > + * if a racing unmapper did not see the above setting
>> > + * of PG_lru, we must see its clearing of PG_mlocked
>> > + * and move the page back to the evictable list.
>> > + */
>> > + smp_mb();
>> > }
>>
>> add_page_to_unevictable_list() takes a spin lock. Why do we need an
>> additional explicit memory barrier?
>
> It sets PG_lru under the spinlock and tests PG_mlocked after the unlock.
> The following sections from memory-barriers.txt made me nervous:
>
> (5) LOCK operations.
>
> This acts as a one-way permeable barrier. It guarantees that all memory
> operations after the LOCK operation will appear to happen after the LOCK
> operation with respect to the other components of the system.
>
> (6) UNLOCK operations.
>
> This also acts as a one-way permeable barrier. It guarantees that all
> memory operations before the UNLOCK operation will appear to happen before
> the UNLOCK operation with respect to the other components of the system.
>
> Memory operations that occur after an UNLOCK operation may appear to
> happen before it completes.
>
> So the only guarantee this gives us is that both the PG_lru setting
> and the PG_mlocked test happen after the LOCK, and that the PG_lru
> setting finishes before the UNLOCK, no? I wanted to make sure this
> does not happen:
>
> LOCK, test PG_mlocked, set PG_lru, UNLOCK
>
> I don't know whether there is a data dependency between those two
> operations. They go to the same word, but I could also imagine
> setting one bit is independent of reading another one. Humm. Help.
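>
> To spell out the pairing I want (the zap side needs no explicit
> barrier, because an atomic RMW that returns a value, like
> TestClearPageMlocked(), already implies a full memory barrier):
>
>     putback_lru_page():               zap_pte_range():
>       SetPageLRU(page)                  TestClearPageMlocked(page)
>       smp_mb()                            /* implies full barrier */
>       page_evictable(page, NULL)        PageLRU(page)
>         /* re-tests PG_mlocked */         /* can the zapper rescue it? */
>
> With both barriers in place, at least one side must observe the
> other's store: either the unmapper sees PG_lru and can move the page
> itself, or we see PG_mlocked cleared and retry the putback.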
Ahh, yes! You are right.
We really need this barrier.
However, I think this issue doesn't depend on the zap_pte_range patch.
Other TestClearPageMlocked(page) callers have the same problem, because
putback_lru_page() doesn't have any exclusion, right?
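For instance (from memory, so the exact call sites may be off),
munlock_vma_page() and clear_page_mlock() both do

        if (TestClearPageMlocked(page))
                ...

with nothing that orders the clearing against a concurrent
putback_lru_page() setting PG_lru.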