Eric W. Biederman wrote:
I just took a quick look at how stack_protector works on x86_64. Unless there is
some deep kernel magic that changes the segment register to %gs from the
ABI-specified %fs, CC_STACKPROTECTOR is totally broken on x86_64. We access our
pda through %gs.
Further, -fstack-protector-all only seems to detect buffer overflows that corrupt
the stack, not stack overflows. So it doesn't appear especially useful.
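For reference, the generated check looks something like this (a sketch from
memory; the exact offset depends on the gcc version):

	foo:
		mov    %fs:40,%rax          # load the canary from TLS (ABI uses %fs)
		mov    %rax,-8(%rbp)        # stash the canary on the stack
		...                         # function body
		mov    -8(%rbp),%rdx
		xor    %fs:40,%rdx          # recheck the canary before returning
		jne    .Lfail               # mismatch -> __stack_chk_fail

With the kernel's per cpu data behind %gs, that hardcoded %fs access never
touches a canary the kernel controls.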
So why don't we kill the broken CONFIG_CC_STACKPROTECTOR, and stop trying to
figure out how to use a zero-based percpu area.
That should allow us to make the current pda a per cpu variable, and use %gs with
a large offset to access the per cpu area. And since it is only the per cpu
accesses and the pda accesses that will change, we should not need to fight
toolchain issues and other weirdness. The linked binary can remain the same.
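Untested, and with made up names, but something like this is what I have in
mind; it relies on cpu_pda staying first in the per cpu area so that its fields
sit at known %gs-relative offsets:

	DEFINE_PER_CPU(struct x8664_pda, cpu_pda);

	static inline struct task_struct *pda_current(void)
	{
		struct task_struct *t;

		/* %c1 prints the constant offset without the '$' */
		asm("mov %%gs:%c1,%0"
		    : "=r" (t)
		    : "i" (offsetof(struct x8664_pda, pcurrent)));
		return t;
	}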
Eric
Hi Eric,
There is one pda op that I was not able to remove. Most likely it can be recoded,
but that was a bit beyond my expertise. The "pda_offset(field)" can most likely be
replaced with "per_cpu_var(field)" [per_cpu__##field], but I wasn't sure what to
do about "_proxy_pda.field".
include/asm-x86/pda.h:
/*
 * This is not atomic against other CPUs -- CPU preemption needs to be off.
 * NOTE: This relies on the fact that the cpu_pda is the *first* field in
 * the per cpu area.  Move it and you'll need to change this.
 */
#define test_and_clear_bit_pda(bit, field)				\
({									\
	int old__;							\
	asm volatile("btr %2,%%gs:%c3\n\tsbbl %0,%0"			\
		     : "=r" (old__), "+m" (_proxy_pda.field)		\
		     : "dIr" (bit), "i" (pda_offset(field)) : "memory");\
	old__;								\
})