Re: [PATCH] x86: Implement __WARN using UD0
Date: Thu Feb 23 2017 - 10:35:06 EST
On February 23, 2017 7:23:09 AM PST, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>On Thu, Feb 23, 2017 at 07:09:05AM -0800, hpa@xxxxxxxxx wrote:
>> Well, it only matters if the instruction extends past a segment
>> boundary or page. However, the CPU instruction decoder will consume
>> modrm for UD1, and so using just the two opcode bytes may cause a #PF
>> or #GP when a #UD was intended.
>It also matters if you want the decoded instruction stream to make
>sense. If for instance I use UD1 without the ModRM byte for WARN,
>objtool gets mighty confused because the instruction stream doesn't
>decode properly.
>objtool will also consume the extra byte, and then the next instruction
>is offset and decodes wrong, and it stresses out.
>Similarly, if you were to do objdump (and objdump were to actually
>correctly decode UD1) then the resulting asm would make no sense.
>The kernel will work 'fine', because even without ModRM it will #UD,
>the #UD handler will do IP += 2 and all is well, but it becomes
>impossible to actually decode the function.
Well, once you are using invalid instructions, what matters is not what the CPU decodes but what your own handler expects. Consider Microsoft's use of C4 C4 /ib as a meta-instruction (called BOP, "BIOS operation"): that encoding has nothing to do with the CPU, but if you want to disassemble the resulting code you need to know how they encode BOP.
Sent from my Android device with K-9 Mail. Please excuse my brevity.