Re: new IRQ scalability changes in 2.3.48

From: Ingo Molnar (mingo@chiara.csoma.elte.hu)
Date: Mon Mar 13 2000 - 10:36:17 EST


On Mon, 13 Mar 2000, Andrea Arcangeli wrote:

> Anyway it looks more like your objective is to put automatic conditional
> schedules all over the place (when each lock gets dropped) than to take
> advantage of getting rescheduled in kernel mode by an irq.
>
> Such a way is more generic, but it's also costly because it does at
> runtime what we could do at compile time.

identifying these cases at compile time is possible - see the explicit
variant at the end of this mail - but the runtime cost is small, as
explained below.

> For example if you do:
>
> spin_lock(&lru_list_lock)
> spin_lock(&hash_table_lock);
> spin_unlock(&hash_table_lock);
> spin_unlock(&lru_list_lock)
>
> everybody will know that the spin_lock(&hash_table_lock) doesn't need to
> increase the lock_depth, but you increase it anyway, and adding such
> bloat is not so nice IMHO.

holding nested spinlocks does increase current->spinlock_depth once per
lock (for clarity: it's not the same as current->lock_depth) - but the
cost (let's say ~2 cycles) is going to be dwarfed by the cost of the LOCK;
operations alone (~20-40 cycles). The variable is on the current process's
stack page, so there is not going to be any cache miss, and in the typical
case there is not going to be any branch mispredict either.
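
roughly, the wrappers would look like this (a sketch only - the
preempt_schedule() helper and the exact placement of the need_resched
check are illustrative assumptions, not the actual patch):

        /* sketch: count lock nesting in the task struct; when the
         * last lock is dropped and a reschedule is pending, preempt.
         * preempt_schedule() is a hypothetical helper here. */
        #define spin_lock(lock)                                 \
                do {                                            \
                        current->spinlock_depth++;              \
                        __spin_lock(lock);                      \
                } while (0)

        #define spin_unlock(lock)                               \
                do {                                            \
                        __spin_unlock(lock);                    \
                        if (!--current->spinlock_depth &&       \
                                        current->need_resched)  \
                                preempt_schedule();             \
                } while (0)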

but yes, it's some overhead, so this thing is going to be optional. Also,
there is no problem optimizing the above as:

        inc_preemption_depth();
        __spin_lock(&lru_list_lock);
        __spin_lock(&hash_table_lock);
        [...]
        __spin_unlock(&hash_table_lock);
        __spin_unlock(&lru_list_lock);
        dec_preemption_depth();

(where the __spin variants are the ones without the preemption-control
bits) - if we want to make it explicit - but i don't think we want that,
at least initially.
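
(for completeness, the explicit variants above could be as simple as
this - again just a sketch, reusing the same assumed spinlock_depth
field and hypothetical preempt_schedule() helper:)

        #define inc_preemption_depth()  (current->spinlock_depth++)

        #define dec_preemption_depth()                          \
                do {                                            \
                        if (!--current->spinlock_depth &&       \
                                        current->need_resched)  \
                                preempt_schedule();             \
                } while (0)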

        Ingo
