Re: No locking is needed ... why?

From: BALBIR SINGH (balbir.singh@wipro.com)
Date: Tue Oct 09 2001 - 08:30:05 EST


Kirill Ratkin wrote:

>Hi.
>
>Could somebody explain this comment to me?
>/*
> * Incoming packets are placed on per-cpu queues so that
> * no locking is needed.
> */
>
>struct softnet_data
>{
>        int throttle;
>        int cng_level;
>        int avg_blog;
>        struct sk_buff_head input_pkt_queue;
>        struct net_device *output_queue;
>        struct sk_buff *completion_queue;
>} __attribute__((__aligned__(SMP_CACHE_BYTES)));
>
>I don't understand why packets are placed on per-cpu queues, or why
>no locking is needed.
>

As I understand this, the only reasons you need to lock are:

1) On an SMP (multiprocessor) system, somebody else may be running
   simultaneously with you; two or more processors executing the same
   code at the same time can race on shared data (which you do not want).
2) On a multiprocessor or a uniprocessor, data is shared among user
   processes running in the kernel, or between a user process in the
   kernel and an interrupt context (an IRQ handler, a bottom half, or
   a tasklet); a sketch of this case follows below.
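For case (2), here is a minimal sketch of why a lock is needed when
process context shares data with an interrupt handler. The names
my_lock, my_counter and my_irq_handler are made up for illustration:

    #include <linux/spinlock.h>
    #include <linux/interrupt.h>

    static spinlock_t my_lock = SPIN_LOCK_UNLOCKED;
    static int my_counter;

    /* Process context: disable local IRQs *and* take the lock, so the
     * IRQ handler can neither interrupt us here nor run on another CPU. */
    void bump_counter(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&my_lock, flags);
            my_counter++;
            spin_unlock_irqrestore(&my_lock, flags);
    }

    /* Interrupt context: local IRQs are already disabled, so the plain
     * lock is enough to keep the other CPUs out. */
    void my_irq_handler(int irq, void *dev_id, struct pt_regs *regs)
    {
            spin_lock(&my_lock);
            my_counter++;
            spin_unlock(&my_lock);
    }

On a uniprocessor build the spin_lock() compiles away and only the IRQ
disabling matters, but the code above is correct in both cases.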

So if you have a situation where (2) does not hold and you have a
multiprocessor system, per-CPU data need not be locked, since it is
not visible to or used by the other processors.
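That is exactly what the softnet code relies on. Roughly (simplified
from 2.4's netif_rx() in net/core/dev.c, with the queue-length and
throttling checks and the softirq kick left out):

    int netif_rx(struct sk_buff *skb)
    {
            int this_cpu = smp_processor_id();
            struct softnet_data *queue;
            unsigned long flags;

            /* Only *local* interrupts need to be shut off, because only
             * code running on this CPU ever touches this CPU's slot of
             * the softnet_data[] array.  No spinlock is taken. */
            local_irq_save(flags);
            queue = &softnet_data[this_cpu];
            __skb_queue_tail(&queue->input_pkt_queue, skb);
            local_irq_restore(flags);

            return NET_RX_SUCCESS;
    }

The per-CPU layout (plus the __aligned__(SMP_CACHE_BYTES) attribute,
which keeps each CPU's entry on its own cache line) is what makes the
"no locking is needed" comment true.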

Did I get it right?
Balbir



