Re: [patch] x86: fix ESP corruption CPU bug

From: Stas Sergeev
Date: Sun Mar 13 2005 - 15:57:39 EST


Hi.

Pavel Machek wrote:
> + andl $(VM_MASK | (4 << 8) | 3), %eax
> + cmpl $((4 << 8) | 3), %eax
> + je ldt_ss # returning to user-space with LDT SS
> All common linux apps use same %ss, no? Perhaps it would be more
> efficient to just check if %ss == 0x7b, and proceed directly to
> restore_nocheck if not?
Such an optimization would cost three more instructions, one of which
is a "taken" jump. A taken jump on the fast path does not look like a
good trade, while right now it is only 5 instructions with a not-taken
jump. I am not sure here, but I think the current solution is better
(it depends on how expensive the taken jump is versus carrying the
three extra instructions just for that optimization).
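
For reference, here is a rough user-space C sketch of what that
mask-and-compare is testing, as I read the patch (assuming %al holds
the saved CS, %ah the saved SS and the upper bits the saved EFLAGS);
the function name and the test values are only illustrative:

#include <stdio.h>

#define VM_MASK 0x00020000	/* EFLAGS.VM */

/* Mirrors the andl/cmpl pair: true only when VM is clear (not vm86),
 * CS has RPL 3 (returning to ring 3) and SS has the TI bit set (LDT). */
static int returning_to_ldt_ss(unsigned int eflags,
			       unsigned short cs, unsigned short ss)
{
	unsigned int eax = (eflags & ~0xffffu) | ((ss & 0xff) << 8) | (cs & 0xff);

	eax &= VM_MASK | (4 << 8) | 3;
	return eax == ((4 << 8) | 3);
}

int main(void)
{
	/* 0x73/0x7b are the usual GDT user CS/SS; 0x07 is LDT entry 0, RPL 3 */
	printf("%d\n", returning_to_ldt_ss(0x202, 0x73, 0x7b));	/* 0 */
	printf("%d\n", returning_to_ldt_ss(0x202, 0x73, 0x07));	/* 1 */
	return 0;
}

So those 5 instructions already fold the vm86, ring and LDT checks
into a single compare.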

> Or perhaps we could only enable this code
> after application loads custom ldt?
The good thing here is that the code already does what you describe:
it jumps to ldt_ss only when the app has loaded a custom LDT and put
that selector into %ss. The implementation is probably different from
what you have in mind; I assume you mean a new per-thread flag? But I
don't see how that would be more efficient: you propose checking
whether the app has altered the LDT at all (which could just be the
old glibc, I think), while the current solution also checks whether
such a selector was actually loaded into %ss, so the glibc case is
avoided (IIRC glibc used to load %gs, not %ss, with an LDT selector).
Since we jump to ldt_ss only when %ss holds an LDT selector, I think
the extra checks are not needed.
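
To make that case concrete, below is a minimal sketch (32-bit only,
build with -m32) of the kind of app the ldt_ss path exists for: it
installs a flat data descriptor in the LDT via modify_ldt() and loads
that selector into %ss. The descriptor fields and the direct %ss load
are just illustrative:

/* ldt-ss.c: run the rest of the program on an LDT stack selector */
#include <asm/ldt.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct user_desc desc;
	unsigned short sel;

	memset(&desc, 0, sizeof(desc));
	desc.entry_number = 0;		/* LDT slot 0 */
	desc.base_addr = 0;		/* flat base, so the stack stays valid */
	desc.limit = 0xfffff;
	desc.seg_32bit = 1;
	desc.limit_in_pages = 1;	/* 4GB limit */

	if (syscall(SYS_modify_ldt, 1, &desc, sizeof(desc)) != 0) {
		perror("modify_ldt");
		return 1;
	}

	sel = (desc.entry_number << 3) | 4 | 3;	/* TI=1 (LDT), RPL=3 -> 0x07 */
	asm volatile("movw %0, %%ss" : : "r"(sel));

	printf("running with LDT %%ss = %#x\n", sel);
	return 0;
}

From here on every return to this process goes through ldt_ss, which
is exactly the case the check selects.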
