Re: [PATCH 0/6] IBPB cleanups and a fixup

From: Yosry Ahmed
Date: Thu Feb 20 2025 - 15:00:22 EST


On Thu, Feb 20, 2025 at 11:04:44AM -0800, Josh Poimboeuf wrote:
> On Wed, Feb 19, 2025 at 10:08:20PM +0000, Yosry Ahmed wrote:
> > This series removes X86_FEATURE_USE_IBPB, and fixes a KVM nVMX bug in
> > the process. The motivation is mostly the confusing name of
> > X86_FEATURE_USE_IBPB, which sounds like it controls IBPBs in general,
> > but it only controls IBPBs for the spectre_v2_user mitigation. A side
> > effect of this confusion is the nVMX bug, where correctly virtualizing
> > IBRS ends up depending on the spectre_v2_user mitigation.
> >
> > The feature bit is mostly redundant, except in controlling the IBPB in
> > the vCPU load path. For that, a separate static branch is introduced,
> > similar to switch_mm_*_ibpb.
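
To spell out the static branch idea: the mitigation code flips a key at
boot, and the vCPU load path checks it before issuing the barrier. A rough
sketch only -- the names below are illustrative, not necessarily what the
patches end up using:

#include <linux/jump_label.h>
#include <asm/nospec-branch.h>

/* bugs.c side: sketch, key name is illustrative */
DEFINE_STATIC_KEY_FALSE(switch_vcpu_ibpb);

/* enabled from the spectre_v2_user mitigation selection */
static void __init sketch_enable_vcpu_ibpb(void)
{
        static_branch_enable(&switch_vcpu_ibpb);
}

/* KVM vCPU load path: barrier only when the mitigation asked for it */
static void sketch_vcpu_load_barrier(void)
{
        if (static_branch_unlikely(&switch_vcpu_ibpb))
                indirect_branch_prediction_barrier();
}

The point being that the feature bit goes away and its only remaining
consumer gets its own, accurately-named control.
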
>
> Thanks for doing this. A few months ago I was working on patches to fix
> the same thing, but I got preempted multiple times.
>
> > I wanted to do more, but decided to stay conservative. I was mainly
> > hoping to merge indirect_branch_prediction_barrier() with entry_ibpb()
> > to have a single IBPB primitive that always stuffs the RSB if the IBPB
> > doesn't, but this would add some overhead in paths that currently use
> > indirect_branch_prediction_barrier(), and I was not sure if that's
> > acceptable.
>
> We always rely on IBPB clearing RSB, so yes, I'd say that's definitely
> needed. In fact I had a patch to do exactly that, with it ending up
> like this:

I was mainly concerned about the overhead this would add, but if it's a
requirement then yes, we should do it.
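
For context, the overhead I was worried about: if I'm reading the current
code right, the barrier today is an inline, alternatives-patched WRMSR with
no RSB stuffing of its own, roughly:

static inline void indirect_branch_prediction_barrier(void)
{
        u64 val = PRED_CMD_IBPB;

        alternative_msr_write(MSR_IA32_PRED_CMD, val, X86_FEATURE_USE_IBPB);
}

whereas the merged primitive below becomes a call that also stuffs the RSB
on CPUs whose IBPB doesn't flush return predictions, so it's extra work in
the paths that use it today. Hence my hesitation.
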

>
> static inline void indirect_branch_prediction_barrier(void)
> {
>         asm volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
>                      : ASM_CALL_CONSTRAINT
>                      : : "rax", "rcx", "rdx", "memory");
> }
>
> I also renamed "entry_ibpb" -> "write_ibpb" since it's no longer just
> for entry code.

Do you want me to add this to this series, or would you rather do it on
top? If you have a patch lying around, I can also include it as-is.