Re: [PATCH] x86: enable preemption in delay

From: Steven Rostedt
Date: Sun May 25 2008 - 09:17:19 EST



On Sun, 25 May 2008, Thomas Gleixner wrote:
>
> +/*
> + * 5 usec on a 1GHZ machine. Not necessarily correct, but not too long
> + * either.

And what happens when we have 10GHz boxes that can do a migration in 1us
and the delay asked for is 2us? We can return early. I don't like placing
assumptions of this kind in the code; they can come back to hurt us as
hardware gets faster.
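
To put numbers on it: 5000 TSC cycles is ~5us at 1GHz but only ~0.5us at
10GHz, so the constant means something different on every box. One way to
avoid the constant altogether (just a sketch to illustrate the point, not a
tested replacement for the patch below) would be to credit only the cycles
actually spent waiting on the old CPU and restart the measurement on the
new one:

static void delay_tsc(unsigned long loops)
{
	unsigned long bclock, now;
	int cpu;

	preempt_disable();
	cpu = smp_processor_id();
	rdtscl(bclock);
	for (;;) {
		rdtscl(now);
		if ((now - bclock) >= loops)
			break;

		/* Allow RT tasks to run */
		preempt_enable();
		rep_nop();
		preempt_disable();

		/*
		 * If we migrated, subtract only the cycles already
		 * waited on the old CPU and take a fresh TSC reading
		 * on the new one, so we never return early.
		 */
		if (unlikely(cpu != smp_processor_id())) {
			loops -= now - bclock;
			cpu = smp_processor_id();
			rdtscl(bclock);
		}
	}
	preempt_enable();
}

That keeps the "wait at least this long" guarantee without tying
correctness to any particular clock frequency.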

-- Steve


> + */
> +#define TSC_MIGRATE_COUNT 5000
> +
>  /* TSC based delay: */
>  static void delay_tsc(unsigned long loops)
>  {
>  	unsigned long bclock, now;
> +	int cpu;
>
> -	preempt_disable();	/* TSC's are per-cpu */
> +	preempt_disable();
> +	cpu = smp_processor_id();
>  	rdtscl(bclock);
>  	do {
>  		rep_nop();
> -		rdtscl(now);
> -	} while ((now-bclock) < loops);
> +
> +		/* Allow RT tasks to run */
> +		preempt_enable();
> +		preempt_disable();
> +
> +		/*
> +		 * It is possible that we moved to another CPU, and
> +		 * since TSC's are per-cpu we need to calculate
> +		 * that. The delay must guarantee that we wait "at
> +		 * least" the amount of time. Being moved to another
> +		 * CPU could make the wait longer but we just need to
> +		 * make sure we waited long enough. Rebalance the
> +		 * counter for this CPU.
> +		 */
> +		if (unlikely(cpu != smp_processor_id())) {
> +			if (loops <= TSC_MIGRATE_COUNT)
> +				break;
> +			cpu = smp_processor_id();
> +			rdtscl(bclock);
> +			loops -= TSC_MIGRATE_COUNT;
> +		} else {
> +			rdtscl(now);
> +			if ((now - bclock) >= loops)
> +				break;
> +			loops -= (now - bclock);
> +			bclock = now;
> +		}
> +	} while (loops > 0);
>  	preempt_enable();
>  }
>