Re: [PATCH 4/4] Enabling Access bit when doing memory swapping
From: Marcelo Tosatti
Date: Thu May 17 2012 - 22:27:34 EST
On Wed, May 16, 2012 at 09:12:30AM +0800, Xudong Hao wrote:
> Enabling Access bit when doing memory swapping.
>
> Signed-off-by: Haitao Shan <haitao.shan@xxxxxxxxx>
> Signed-off-by: Xudong Hao <xudong.hao@xxxxxxxxx>
> ---
> arch/x86/kvm/mmu.c | 13 +++++++------
> arch/x86/kvm/vmx.c | 6 ++++--
> 2 files changed, 11 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index ff053ca..5f55f98 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -1166,7 +1166,8 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
> int young = 0;
>
> /*
> - * Emulate the accessed bit for EPT, by checking if this page has
> + * In the absence of EPT Access and Dirty bit support,
> + * emulate the accessed bit for EPT, by checking if this page has
> * an EPT mapping, and clearing it if it does. On the next access,
> * a new EPT mapping will be established.
> * This has some overhead, but not as much as the cost of swapping
> @@ -1179,11 +1180,11 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
> while (spte) {
> int _young;
> u64 _spte = *spte;
> - BUG_ON(!(_spte & PT_PRESENT_MASK));
> - _young = _spte & PT_ACCESSED_MASK;
> + BUG_ON(!is_shadow_present_pte(_spte));
> + _young = _spte & shadow_accessed_mask;
> if (_young) {
> young = 1;
> - clear_bit(PT_ACCESSED_SHIFT, (unsigned long *)spte);
> + *spte &= ~shadow_accessed_mask;
> }
Now a dirty bit can be lost: "*spte &= ~shadow_accessed_mask" is a
non-atomic read-modify-write of the whole entry, so a dirty bit the
hardware sets between the read and the write-back is overwritten,
whereas clear_bit() atomically touches only the accessed bit. Is there
a reason to remove the clear_bit?
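
To illustrate the concern, a minimal userspace sketch (not the KVM
code; the SPTE_ACCESSED/SPTE_DIRTY names and bit positions are made up
here, and C11 atomics stand in for the kernel's clear_bit()):

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_ACCESSED (1ULL << 8)   /* hypothetical bit positions */
#define SPTE_DIRTY    (1ULL << 9)

static _Atomic uint64_t spte = SPTE_ACCESSED;

int main(void)
{
	/* Racy version: read the whole entry, then write it back
	 * without the accessed bit. */
	uint64_t old = atomic_load(&spte);

	/* "Hardware" sets the dirty bit in between. */
	atomic_fetch_or(&spte, SPTE_DIRTY);

	atomic_store(&spte, old & ~SPTE_ACCESSED);   /* dirty bit lost */
	printf("plain RMW:   dirty %s\n",
	       (atomic_load(&spte) & SPTE_DIRTY) ? "kept" : "lost");

	/* Atomic single-bit clear (what clear_bit() provides in the
	 * kernel): only the accessed bit is touched. */
	atomic_store(&spte, SPTE_ACCESSED);
	atomic_fetch_or(&spte, SPTE_DIRTY);          /* hardware sets dirty */
	atomic_fetch_and(&spte, ~SPTE_ACCESSED);     /* like clear_bit() */
	printf("atomic bit clear: dirty %s\n",
	       (atomic_load(&spte) & SPTE_DIRTY) ? "kept" : "lost");

	return 0;
}

The first update drops the concurrently-set dirty bit; the atomic
single-bit clear keeps it.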