Hi,
On Sat, 14 Apr 2001, Marcelo Tosatti wrote:
> There is a nasty race between shmem_getpage_locked() and
> swapin_readahead() with the new shmem code (introduced in 2.4.3-ac3
> and merged in the main tree in 2.4.4-pre3):
>
> shmem_getpage_locked() finds a page in the swapcache and moves it to
> the pagecache as an shmem page, freeing the swapcache and the swap
> map entry for this page. (which causes a BUG() in mm/shmem.c:353
> since the swap map entry is being used)
>
> In the meanwhile, swapin_readahead() is allocating a page and adding
> it to the swapcache.
Oh, I was just chasing this also.
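For reference, one possible interleaving that matches the description above
(reconstructed only from that description, not re-checked against the exact
call sites) looks roughly like this:

    shmem_getpage_locked()                 swapin_readahead()
    ----------------------                 ------------------
    finds the page in the swap cache
    moves it to the page cache as a
    shmem page
                                           allocates a fresh page and adds
                                           it to the swap cache for the
                                           same entry, taking a reference
                                           on the swap map entry
    frees the swap cache reference and
    tries to free the swap map entry,
    finds the entry still in use
    -> BUG() at mm/shmem.c:353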
> I don't see any clean fix for this one.
I think the check for swap_count is actually not necessary: if
swapin_readahead allocates a new swap cache page for the entry, that is
not a real bug. Under memory pressure this page will simply be reclaimed.
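Purely as an illustration of what "not necessary" means here (this is not
the real mm/shmem.c code, and swap_map_count() is a made-up helper), the
check that currently ends in a BUG() would just tolerate the extra
reference:

    /* Hypothetical sketch only -- identifiers are illustrative. */
    if (swap_map_count(entry) > 1) {
            /* A racing swapin_readahead() may hold the extra reference
             * through the page it put into the swap cache.  That page is
             * not mapped by any process, so the page stealer will reclaim
             * it under memory pressure -- no reason to BUG() here. */
    }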
Actually, to make shmem correct we have to make it much more unfriendly to
the swap cache: I think we have to drop the whole "drop swap cache pages
on truncate" logic, since it uses lookup_swap_cache and
delete_from_swap_cache, which both lock the page, while we are holding a
spinlock :-(
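To spell out why that combination is bad (generic locking rules, not
verified against the exact 2.4 truncate path): locking the page may sleep,
and sleeping while holding a spinlock is not allowed. The problematic
shape is roughly this, with the spinlock name made up for illustration:

    /* Illustrative pattern only -- not the real shmem truncate code. */
    spin_lock(&info->lock);                /* spinlock held ...          */
    page = lookup_swap_cache(entry);       /* ... but this locks the     */
    if (page)                              /* page, and so does this,    */
            delete_from_swap_cache(page);  /* and page locking can sleep */
    spin_unlock(&info->lock);              /* -- illegal under the lock  */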
The appended patch implements both changes and relies on the page
stealer to shrink the swap cache.
It also integrates fixes which Marcelo sent earlier.
Greetings
Christoph