Re: [PATCH 2/9] ksm: let shared pages be swappable

From: Hugh Dickins
Date: Mon Nov 30 2009 - 07:38:38 EST


On Mon, 30 Nov 2009, KOSAKI Motohiro wrote:
> > After this patch, the number of shared swappable pages will be unlimited.
>
> Probably, it doesn't matter. I mean
>
> - KSM sharing and Shmem sharing have almost the same performance characteristics.
> - If memory pressure is low, the split-LRU VM doesn't scan the anon lists much.
>
> If KSM swap is too costly, we need to improve anon list scanning generically.

Yes, we're in agreement that this issue is not new with KSM swapping.

> btw, I'm not sure why the kmem_cache_zalloc() below is necessary. Why can't we
> use the stack?

Well, I didn't use the stack: partly because I'm so ashamed of the pseudo-vmas
on the stack in mm/shmem.c, which have put shmem_getpage() into reports
of high stack users (I've unfinished patches to deal with that); and
partly because page_referenced_ksm() and try_to_unmap_ksm() are on
the page reclaim path, maybe way down deep on a very deep stack.
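
Just to illustrate the kind of thing I mean (a simplified sketch, not the
actual mm/shmem.c code, and the helper name is invented), the on-stack
pattern drops a whole struct vm_area_struct into the caller's frame, which
is unwelcome for a caller that may itself already be deep in reclaim:

	/* sketch only: fake up just enough of a vma for the callee */
	static struct page *pseudo_vma_swapin(swp_entry_t entry, gfp_t gfp,
					      unsigned long idx)
	{
		struct vm_area_struct pvma;	/* whole vma on this stack frame */

		memset(&pvma, 0, sizeof(pvma));
		pvma.vm_pgoff = idx;	/* only the fields the callee reads */
		return swapin_readahead(entry, gfp, &pvma, 0);
	}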

But it's not something you or I should be worrying about: as the comment
says, this is just a temporary hack, to present a patch which gets KSM
swapping working in an understandable way, while leaving some corrections
and refinements to subsequent patches. This pseudo-vma is removed in the
very next patch.
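
For reference, the direction the later patches take (my paraphrase of the
comment quoted below; the field layout here is only illustrative): each
rmap_item keeps a pointer to the anon_vma of the mm that mapped the page,
so page_referenced_ksm() and try_to_unmap_ksm() can walk the real vmas,
and find recently forked instances, instead of faking a vma up:

	/* illustrative sketch of an rmap_item carrying an anon_vma */
	struct rmap_item {
		struct rmap_item *rmap_list;	/* next in mm's list of items */
		struct anon_vma *anon_vma;	/* set once the page is stable */
		struct mm_struct *mm;		/* the mm this item belongs to */
		unsigned long address;		/* user address (+ flag bits) */
		/* checksum, tree linkage etc. omitted */
	};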

Hugh

>
> ----------------------------
> +	/*
> +	 * Temporary hack: really we need anon_vma in rmap_item, to
> +	 * provide the correct vma, and to find recently forked instances.
> +	 * Use zalloc to avoid weirdness if any other fields are involved.
> +	 */
> +	vma = kmem_cache_zalloc(vm_area_cachep, GFP_ATOMIC);
> +	if (!vma) {
> +		spin_lock(&ksm_fallback_vma_lock);
> +		vma = &ksm_fallback_vma;
> +	}