Re: [PATCH 05/24] kvm: x86/mmu: Fix yielding in TDP MMU

From: Ben Gardon
Date: Thu Jan 21 2021 - 20:07:44 EST


On Wed, Jan 20, 2021 at 11:28 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Tue, Jan 12, 2021, Ben Gardon wrote:
> > There are two problems with the way the TDP MMU yields in long running
> > functions. 1.) Given certain conditions, the function may not yield
> > reliably / frequently enough. 2.) In some functions the TDP iter risks
> > not making forward progress if two threads livelock yielding to
> > one another.
> >
> > Case 1 is possible if, for example, a paging structure was very large
> > but had few, if any, writable entries. wrprot_gfn_range could traverse
> > many entries before finding a writable entry and yielding.
> >
> > Case 2 is possible if two threads were trying to execute wrprot_gfn_range.
> > Each could write protect an entry and then yield. This would reset the
> > tdp_iter's walk over the paging structure and the loop would end up
> > repeating the same entry over and over, preventing either thread from
> > making forward progress.
> >
> > Fix these issues by moving the yield to the beginning of the loop,
> > before other checks, and only yielding if the loop has made forward
> > progress since the last yield.
>
> I think it'd be best to split this into two patches, e.g. ensure forward
> progress and then yield more aggressively. They are two separate bugs, and I
> don't think that ensuring forward progress would exacerbate case #1. I'm not
> worried about breaking things so much as getting more helpful shortlogs; "Fix
> yielding in TDP MMU" doesn't provide any insight into what exactly was broken.
> E.g. something like:
>
> KVM: x86/mmu: Ensure forward progress when yielding in TDP MMU iter
> KVM: x86/mmu: Yield in TDP MMU iter even if no real work was done
>
> > Fixes: a6a0b05da9f3 ("kvm: x86/mmu: Support dirty logging for the TDP MMU")
> > Reviewed-by: Peter Feiner <pfeiner@xxxxxxxxxx>
> >
> > Signed-off-by: Ben Gardon <bgardon@xxxxxxxxxx>
> > ---
> > arch/x86/kvm/mmu/tdp_mmu.c | 83 +++++++++++++++++++++++++++++++-------
> > 1 file changed, 69 insertions(+), 14 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index b2784514ca2d..1987da0da66e 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -470,9 +470,23 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
> >                            gfn_t start, gfn_t end, bool can_yield)
> >  {
> >          struct tdp_iter iter;
> > +        gfn_t last_goal_gfn = start;
> >          bool flush_needed = false;
> >
> >          tdp_root_for_each_pte(iter, root, start, end) {
> > +                /* Ensure forward progress has been made before yielding. */
> > +                if (can_yield && iter.goal_gfn != last_goal_gfn &&
>
> Make last_goal_gfn a property of the iterator, that way all this logic can be
> shoved into tdp_mmu_iter_flush_cond_resched(), and the comments about ensuring
> forward progress and effectively invalidating/resetting the iterator (the
> comment below) can be a function comment, as opposed to being copied everywhere.
> E.g. there can be a big scary warning in the function comment stating that the
> caller must restart its loop if the helper yielded.
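
That works. The warning could read something like this in the function
comment (rough sketch, exact wording TBD):

/*
 * Yield if the MMU lock is contended or this thread needs to return
 * control to the scheduler.
 *
 * If this function yields, it will also restart the walk over the
 * paging structure from the root, so the caller must skip to the next
 * iteration of its loop to pick the walk back up.
 */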
>
> Tangentially related, the name goal_gfn is quite confusing. "goal" and "end"
> are synonyms, but "goal" is often initialized with "start", and it's not used to
> terminate the walk. Maybe next_gfn instead? And maybe yielded_gfn, since
> last_next_gfn is pretty horrendous.

All these are excellent suggestions and definitely make the code
cleaner. I'll adopt yielded_gfn. While I agree goal_gfn is a little
odd, I think next_gfn could be more misleading because the goal_gfn is
really more of a target than the next step: it might take 4 or 5 steps
to actually reach a last-level entry mapping that gfn.
target_last_level_gfn or next_last_level_gfn would probably be the
most accurate options.
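
For example (just a sketch of the naming, not the final code), the
iterator could carry something like:

struct tdp_iter {
        ...
        /*
         * The gfn of the last-level mapping the walk is trying to
         * reach; it can take several steps down the paging structure
         * to get there.
         */
        gfn_t next_last_level_gfn;
        /*
         * Snapshot of next_last_level_gfn from the last time this
         * thread yielded, used to guarantee forward progress before
         * yielding again.
         */
        gfn_t yielded_gfn;
        ...
};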

>
> > +                    tdp_mmu_iter_flush_cond_resched(kvm, &iter)) {
>
> This isn't quite correct, as tdp_mmu_iter_flush_cond_resched() will do an
> expensive remote TLB flush on every yield, even if no flush is needed. The
> cleanest solution is likely to drop tdp_mmu_iter_flush_cond_resched() and
> instead add a @flush param to tdp_mmu_iter_cond_resched(). If it's tagged
> __always_inline, then the callers that unconditionally pass true/false will
> optimize out the conditional code.
>
> At that point, I think it would also make sense to fold tdp_iter_refresh_walk()
> into tdp_mmu_iter_cond_resched(), because really we shouldn't be mucking with
> the guts of the iter except for the yield case.
>
> > +                        last_goal_gfn = iter.goal_gfn;
>
> Another argument for both renaming goal_gfn and moving last_*_gfn into the iter:
> it's not at all obvious that updating the last gfn _after_ tdp_iter_refresh_walk()
> is indeed correct.
>
> You can also avoid a local variable by doing max(iter->next_gfn, iter->gfn) when
> calling tdp_iter_refresh_walk(). IMO, that's also a bit easier to understand
> than an open-coded equivalent.
>
> E.g. putting it all together, with yielded_gfn set by tdp_iter_start():
>
> static __always_inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
>                                                       struct tdp_iter *iter,
>                                                       bool flush)
> {
>         /* Ensure forward progress has been made since the last yield. */
>         if (iter->next_gfn == iter->yielded_gfn)
>                 return false;
>
>         if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
>                 if (flush)
>                         kvm_flush_remote_tlbs(kvm);
>                 cond_resched_lock(&kvm->mmu_lock);
>
>                 /*
>                  * Restart the walk over the paging structure from the root,
>                  * starting from the highest gfn the iterator had previously
>                  * reached. The entire paging structure, except the root, may
>                  * have been completely torn down and rebuilt while we yielded.
>                  */
>                 tdp_iter_start(iter, iter->pt_path[iter->root_level - 1],
>                                iter->root_level, iter->min_level,
>                                max(iter->next_gfn, iter->gfn));
>                 return true;
>         }
>
>         return false;
> }
>
> > +                        flush_needed = false;
> > +                        /*
> > +                         * Yielding caused the paging structure walk to be
> > +                         * reset so skip to the next iteration to continue the
> > +                         * walk from the root.
> > +                         */
> > +                        continue;
> > +                }
> > +
> >                  if (!is_shadow_present_pte(iter.old_spte))
> >                          continue;
> >
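
With the @flush param folded into tdp_mmu_iter_cond_resched() as you
suggest, the caller side in zap_gfn_range() should then collapse to
something like this (sketch, untested):

        tdp_root_for_each_pte(iter, root, start, end) {
                if (can_yield &&
                    tdp_mmu_iter_cond_resched(kvm, &iter, flush_needed)) {
                        /* The yield reset the walk back to the root. */
                        flush_needed = false;
                        continue;
                }

                if (!is_shadow_present_pte(iter.old_spte))
                        continue;
                ...
        }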