Improve preempt-scheduling and x86 user access v3

From: Andi Kleen
Date: Fri Aug 16 2013 - 18:48:28 EST


Various optimizations related to CONFIG_PREEMPT_VOLUNTARY
and x86 uaccess

- Optimize copy_*_inatomic on x86-64 to handle 1-8 bytes
without string instructions
- Inline might_sleep and other preempt code
to optimize various preemption paths.
This costs about 10k of text size, but generates far better code
with fewer unnecessary function calls.

This patch kit is an attempt to get us back to sane code,
mostly by doing proper inlining and doing sleep checks in the right
place. Unfortunately some of the inlining requires a tree sweep
to move might_sleep and friends to sched.h, which also avoids
a nasty include loop.

v2: Completely remove reschedule checks from the uaccess functions.
v3: Drop unnecessary changes (thanks Michael).
It now only optimizes copy_*_inatomic and inlines might_sleep().