[PATCH] i386 tlb flush optimization

From: Manfred Spraul (manfreds@colorfullife.com)
Date: Wed Jan 26 2000 - 13:00:55 EST


The i386 tlb flush code is still paranoid about lock-ups, but this is
superfluous: the Alpha port, for example, uses a normal
smp_call_function() in its tlb flush code.
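
For comparison, a flush built on smp_call_function() looks roughly like
this (an illustrative sketch in the spirit of the alpha code, not a copy
of alpha/kernel/smp.c; the helper name ipi_flush_tlb_all() is made up
here):

/* runs on every CPU that receives the call-function IPI */
static void ipi_flush_tlb_all(void *unused)
{
	local_flush_tlb();
}

void flush_tlb_all(void)
{
	/* wait=1: do not return before all other CPUs have flushed */
	if (smp_call_function(ipi_flush_tlb_all, NULL, 1, 1))
		printk("flush_tlb_all: smp_call_function failed\n");
	local_flush_tlb();
}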

My attached patch removes most of these paranoid checks.

Could you please test it?
I'm preparing a follow-up patch that optimizes flush_tlb_page() for
multi-threaded applications, and that patch will be incompatible with
the current anti-lock-up code in flush_tlb_others().

The patch was tested on 2.3.40 (i386 SMP).

--
	Manfred

--- 2.3/arch/i386/kernel/smp.c	Fri Jan 21 12:59:23 2000
+++ build-2.3/arch/i386/kernel/smp.c	Wed Jan 26 18:52:24 2000
@@ -281,56 +281,52 @@
 #endif
 }
 
-/*
- * This is fraught with deadlocks. Probably the situation is not that
- * bad as in the early days of SMP, so we might ease some of the
- * paranoia here.
- */
+#define TLB_PARANOIA 1
+
 static void flush_tlb_others(unsigned int cpumask)
 {
-	int cpu = smp_processor_id();
-	int stuck;
-	unsigned long flags;
+#ifdef TLB_PARANOIA
+	if(in_interrupt()) {
+		printk("tlb flush from interrupt: %d,%d",
+			local_bh_count[smp_processor_id()],
+			local_irq_count[smp_processor_id()]);
+	}
+	if(cpumask & (1<<smp_processor_id())) {
+		printk("flush_tlb_others: bad cpumask!");
+		cpumask &= ~(1<<smp_processor_id());
+		local_flush_tlb();
+	}
+	{
+		int flags;
+
+		save_flags(flags);
+		if(flags != 1) {
+static int limit=10;
+			if(limit > 0) {
+				limit--;
+				printk("flush_tlb_others: possible lock-up, broken!(%d)",
+					flags);
+			}
+			sti();
+		}
+	}
+#endif
+	cpumask &= cpu_online_map;
 
 	/*
 	 * it's important that we do not generate any APIC traffic
 	 * until the AP CPUs have booted up!
 	 */
-	cpumask &= cpu_online_map;
+
 	if (cpumask) {
 		atomic_set_mask(cpumask, &smp_invalidate_needed);
-
-		/*
-		 * Processors spinning on some lock with IRQs disabled
-		 * will see this IRQ late. The smp_invalidate_needed
-		 * map will ensure they don't do a spurious flush tlb
-		 * or miss one.
-		 */
-
-		__save_flags(flags);
-		__cli();
-
 		send_IPI_allbutself(INVALIDATE_TLB_VECTOR);
 
-		/*
-		 * Spin waiting for completion
-		 */
-
-		stuck = 50000000;
 		while (smp_invalidate_needed) {
-			/*
-			 * Take care of "crossing" invalidates
+			/* FIXME: lockup-detection, print backtrace on
+			 * lock-up
 			 */
-			if (test_bit(cpu, &smp_invalidate_needed))
-				do_flush_tlb_local();
-
-			--stuck;
-			if (!stuck) {
-				printk("stuck on TLB IPI wait (CPU#%d)\n",cpu);
-				break;
-			}
 		}
-		__restore_flags(flags);
 	}
 }
 
--- 2.3/arch/i386/kernel/irq.c	Fri Jan 21 12:59:23 2000
+++ build-2.3/arch/i386/kernel/irq.c	Wed Jan 26 17:46:35 2000
@@ -192,20 +192,6 @@
 atomic_t global_bh_count;
 atomic_t global_bh_lock;
 
-/*
- * "global_cli()" is a special case, in that it can hold the
- * interrupts disabled for a longish time, and also because
- * we may be doing TLB invalidates when holding the global
- * IRQ lock for historical reasons. Thus we may need to check
- * SMP invalidate events specially by hand here (but not in
- * any normal spinlocks)
- */
-static inline void check_smp_invalidate(int cpu)
-{
-	if (test_bit(cpu, &smp_invalidate_needed))
-		do_flush_tlb_local();
-}
-
 static void show(char * str)
 {
 	int i;
@@ -294,7 +280,6 @@
 			__sti();
 			SYNC_OTHER_CORES(cpu);
 			__cli();
-			check_smp_invalidate(cpu);
 			if (atomic_read(&global_irq_count))
 				continue;
 			if (global_irq_lock)
@@ -346,7 +331,6 @@
 		/* Uhhuh.. Somebody else got it. Wait.. */
 		do {
 			do {
-				check_smp_invalidate(cpu);
 			} while (test_bit(0,&global_irq_lock));
 		} while (test_and_set_bit(0,&global_irq_lock));
 	}
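
For reference, the simplified busy-wait above still terminates the same
way as before: each CPU that takes INVALIDATE_TLB_VECTOR flushes its own
TLB and clears its bit in smp_invalidate_needed.  The handler is roughly
of this shape (paraphrased from the existing i386 code, not a verbatim
copy of 2.3.40):

asmlinkage void smp_invalidate_interrupt(void)
{
	/* clear our bit so the sender's while(smp_invalidate_needed)
	 * loop can make progress, and invalidate the local TLB */
	if (test_and_clear_bit(smp_processor_id(), &smp_invalidate_needed))
		local_flush_tlb();
	ack_APIC_irq();
}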



