[PATCH 06/19] x86/tsc: Use the full 64-bit TSC in delay_tsc()

From: Borislav Petkov
Date: Thu Jun 25 2015 - 12:45:25 EST


From: Andy Lutomirski <luto@xxxxxxxxxx>

As a very minor optimization, delay_tsc() was using only the low 32 bits
of the TSC. It's a delay function, so just use the whole thing.
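
For illustration only, a minimal userspace sketch of the same idea follows,
assuming a GCC/Clang-style compiler on x86. __rdtsc() and _mm_pause() from
<x86intrin.h> stand in for the kernel's native_read_tsc() and rep_nop(), the
rdtsc_barrier()/CPU-migration handling is omitted, and delay_cycles() is a
made-up name, not a kernel interface:

#include <stdint.h>
#include <x86intrin.h>		/* __rdtsc(), _mm_pause() */

/*
 * Spin until at least 'cycles' TSC ticks have elapsed.  With the full
 * 64-bit counter, the 'now - start' subtraction does not depend on the
 * low 32 bits wrapping every ~2^32 cycles.
 */
static void delay_cycles(uint64_t cycles)
{
	uint64_t start = __rdtsc();

	while (__rdtsc() - start < cycles)
		_mm_pause();	/* rep; nop: relax the CPU while spinning */
}

The kernel change below gets the same effect by widening bclock, now and
loops to u64 and reading the counter with native_read_tsc().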

Signed-off-by: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Denys Vlasenko <dvlasenk@xxxxxxxxxx>
Cc: Huang Rui <ray.huang@xxxxxxx>
Cc: John Stultz <john.stultz@xxxxxxxxxx>
Cc: Len Brown <lenb@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Ralf Baechle <ralf@xxxxxxxxxxxxxx>
Cc: kvm ML <kvm@xxxxxxxxxxxxxxx>
Cc: x86-ml <x86@xxxxxxxxxx>
Link: http://lkml.kernel.org/r/bd1a277c71321b67c4794970cb5ace05efe21ab6.1434501121.git.luto@xxxxxxxxxx
Signed-off-by: Borislav Petkov <bp@xxxxxxx>
---
arch/x86/lib/delay.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/lib/delay.c b/arch/x86/lib/delay.c
index 9a52ad0c0758..35115f3786a9 100644
--- a/arch/x86/lib/delay.c
+++ b/arch/x86/lib/delay.c
@@ -49,16 +49,16 @@ static void delay_loop(unsigned long loops)
 /* TSC based delay: */
 static void delay_tsc(unsigned long __loops)
 {
-	u32 bclock, now, loops = __loops;
+	u64 bclock, now, loops = __loops;
 	int cpu;
 
 	preempt_disable();
 	cpu = smp_processor_id();
 	rdtsc_barrier();
-	rdtscl(bclock);
+	bclock = native_read_tsc();
 	for (;;) {
 		rdtsc_barrier();
-		rdtscl(now);
+		now = native_read_tsc();
 		if ((now - bclock) >= loops)
 			break;
 
@@ -80,7 +80,7 @@ static void delay_tsc(unsigned long __loops)
 			loops -= (now - bclock);
 			cpu = smp_processor_id();
 			rdtsc_barrier();
-			rdtscl(bclock);
+			bclock = native_read_tsc();
 		}
 	}
 	preempt_enable();
--
2.3.5
