Re: [PATCH 3/4] x86,module: Detect VMX vs SLD conflicts

From: Peter Zijlstra
Date: Wed Apr 08 2020 - 05:56:56 EST


On Wed, Apr 08, 2020 at 05:09:34PM +0900, Masami Hiramatsu wrote:
> On Tue, 07 Apr 2020 13:02:39 +0200
> Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> > +static bool insn_is_vmx(struct insn *insn)
> > +{
> > + u8 modrm = insn->modrm.bytes[0];
> > + u8 modrm_mod = X86_MODRM_MOD(modrm);
> > + u8 modrm_reg = X86_MODRM_REG(modrm);
> > +
> > + u8 prefix = insn->prefixes.bytes[0];
>
> This should be the last prefix,
>
> u8 prefix = insn->prefixes.bytes[3];
>
> (The last prefix is always copied into bytes[3].)

And that is 0 on no-prefix, right?

> > +
> > + if (insn->opcode.bytes[0] != 0x0f)
> > + return false;
> > +
> > + switch (insn->opcode.bytes[1]) {
> > + case 0x01:
> > + switch (insn->opcode.bytes[2]) {
>
> Sorry, VMCALL etc. are in Grp7 (0f 01); the third opcode byte is
> encoded in the ModRM byte rather than in the opcode bytes. Thus it should be,
>
> switch (insn->modrm.value) {

Indeed, I was hoping (I really should've checked) that that byte was
duplicated in the opcode bytes.

Also, since I already have modrm = insn->modrm.bytes[0], I should
probably use that anyway.

> > + case 0xc1: /* VMCALL */
> > + case 0xc2: /* VMLAUNCH */
> > + case 0xc3: /* VMRESUME */
> > + case 0xc4: /* VMXOFF */
>
> case 0xd4: /* VMFUNC */

As per Andrew, VMCALL and VMFUNC are SMV, and I really only need VMX in
this case. Including SMV is probably harmless, but I'm thinking a
smaller function is better.

> > + return true;
> > +
> > + default:
> > + break;
> > + }
> > + break;
> > +
> > + case 0x78: /* VMREAD */
> > + case 0x79: /* VMWRITE */
>
> return !insn_is_evex(insn);
>
> With an EVEX prefix, these become vcvt* instructions.

Thanks!
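Folding the three fixes together (last prefix from bytes[3], Grp7 dispatch on the ModRM byte, and the EVEX check for 0f 78/79), something like the sketch below. This is a rough, self-contained illustration: the struct and field names are toy stand-ins, not the kernel's struct insn from <asm/insn.h>, and the prefix/mod/reg handling for the parts of the function not quoted above is omitted.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy stand-in for the kernel's struct insn; field names here are
 * assumptions for illustration, not the real decoder API.
 */
struct toy_insn {
	uint8_t opcode[3];	/* opcode bytes */
	uint8_t modrm;		/* ModRM byte, 0 if none */
	bool	evex;		/* stand-in for insn_is_evex() */
};

static bool insn_is_vmx(const struct toy_insn *insn)
{
	if (insn->opcode[0] != 0x0f)
		return false;

	switch (insn->opcode[1]) {
	case 0x01:
		/*
		 * Grp7: the third "opcode" is encoded in the ModRM
		 * byte, not in opcode[2].
		 */
		switch (insn->modrm) {
		case 0xc1: /* VMCALL */
		case 0xc2: /* VMLAUNCH */
		case 0xc3: /* VMRESUME */
		case 0xc4: /* VMXOFF */
			return true;
		default:
			break;
		}
		break;

	case 0x78: /* VMREAD */
	case 0x79: /* VMWRITE */
		/* with an EVEX prefix these are vcvt* instructions */
		return !insn->evex;

	default:
		break;
	}

	return false;
}
```

Note that 0f 01 d4 (VMFUNC) is deliberately left out, per the reasoning above that only VMX-root instructions need to be caught.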