Re: [PATCH 01/22] kvm: mmu: Separate making SPTEs from set_spte

From: Ben Gardon
Date: Wed Sep 30 2020 - 19:03:18 EST


On Tue, Sep 29, 2020 at 9:55 PM Sean Christopherson
<sean.j.christopherson@xxxxxxxxx> wrote:
>
> On Fri, Sep 25, 2020 at 02:22:41PM -0700, Ben Gardon wrote:
> > +static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
> > +                    unsigned int pte_access, int level,
> > +                    gfn_t gfn, kvm_pfn_t pfn, bool speculative,
> > +                    bool can_unsync, bool host_writable)
> > +{
> > +        u64 spte = 0;
> > +        struct kvm_mmu_page *sp;
> > +        int ret = 0;
> > +
> > +        if (set_mmio_spte(vcpu, sptep, gfn, pfn, pte_access))
> > +                return 0;
> > +
> > +        sp = sptep_to_sp(sptep);
> > +
> > +        spte = make_spte(vcpu, pte_access, level, gfn, pfn, *sptep, speculative,
> > +                         can_unsync, host_writable, sp_ad_disabled(sp), &ret);
> > +        if (!spte)
> > +                return 0;
>
> This is an impossible condition. Well, maybe it's theoretically possible
> if page track is active, with EPT exec-only support (shadow_present_mask is
> zero), and pfn==0. But in that case, returning early is wrong.
>
> Rather than return the spte, what about returning 'ret', passing 'new_spte'
> as a u64 *, and dropping the bail-early path? That would also eliminate
> the minor wart of make_spte() relying on the caller to initialize 'ret'.

I agree that would make this much cleaner.
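
Something like the below is what I'd plan on for v2, just as a sketch to
make sure we're talking about the same shape (parameter names and ordering
aren't final):

        int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
                      gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool speculative,
                      bool can_unsync, bool host_writable, bool ad_disabled,
                      u64 *new_spte);

make_spte() would return the SET_SPTE_* flags and hand the computed SPTE
back through new_spte, so it owns 'ret' end-to-end and set_spte() can drop
the bail-early path entirely:

        ret = make_spte(vcpu, pte_access, level, gfn, pfn, *sptep, speculative,
                        can_unsync, host_writable, sp_ad_disabled(sp), &spte);

        if (spte & PT_WRITABLE_MASK)
                kvm_vcpu_mark_page_dirty(vcpu, gfn);

        if (mmu_spte_update(sptep, spte))
                ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;

        return ret;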

>
> > +
> > +        if (spte & PT_WRITABLE_MASK)
> > +                kvm_vcpu_mark_page_dirty(vcpu, gfn);
> > +
> >          if (mmu_spte_update(sptep, spte))
> >                  ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
> >          return ret;
> > --
> > 2.28.0.709.gb0816b6eb0-goog
> >