BUG: TLB flush while unmapping a page in the memory subsystem

From: yunfeng zhang
Date: Thu Oct 19 2006 - 22:48:15 EST


In rmap.c::try_to_unmap_one of 2.6.16.29, there is the following code snippet:

.....
/* Nuke the page table entry. */
flush_cache_page(vma, address, page_to_pfn(page));
pteval = ptep_clear_flush(vma, address, pte);
// >>> The line above expands to roughly:
// >>> pte_t __pte;
// >>> __pte = ptep_get_and_clear((__vma)->vm_mm, __address, __ptep);
// >>> flush_tlb_page(__vma, __address);
// >>> __pte;

/* Move the dirty bit to the physical page now the pte is gone. */
if (pte_dirty(pteval))
set_page_dirty(page);
.....


It seems this can only work correctly on a UP system.

On SMP, suppose the PTE was clean. After CPU A executes ptep_get_and_clear but
before it executes flush_tlb_page, CPU B still holds a valid TLB entry for the
page and can write to it, making the page dirty. CPU A's pteval snapshot is
then stale: it shows a clean PTE, so set_page_dirty is skipped and the dirty
state is lost. Isn't that a problem?