Re: [net-next PATCH 1/2] igbvf: add new driver to support 82576 virtual functions

From: Andrew Morton
Date: Wed Mar 18 2009 - 22:06:32 EST


On Wed, 18 Mar 2009 17:40:47 -0700 Alexander Duyck <alexander.h.duyck@xxxxxxxxx> wrote:

> Andrew Morton wrote:
> > On Wed, 18 Mar 2009 08:22:46 -0700 Alexander Duyck <alexander.h.duyck@xxxxxxxxx> wrote:
> >
> >>>>>> +static int igbvf_set_ringparam(struct net_device *netdev,
> >>>>>> +                               struct ethtool_ringparam *ring)
> >>>>>> +{
> >>>>>> +        struct igbvf_adapter *adapter = netdev_priv(netdev);
> >>>>>> +        struct igbvf_ring *tx_ring, *tx_old;
> >>>>>> +        struct igbvf_ring *rx_ring, *rx_old;
> >>>>>> +        int err;
> >>>>>> +
> >>>>>> +        if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
> >>>>>> +                return -EINVAL;
> >>>>>> +
> >>>>>> +        while (test_and_set_bit(__IGBVF_RESETTING, &adapter->state))
> >>>>>> +                msleep(1);
> >>>>> No timeout needed here? Interrupts might not be working, for example..
> >>>> This bit isn't set in interrupt context. This is always used out of
> >>>> interrupt context and is just to prevent multiple setting changes at the
> >>>> same time.
> >>> Oh. Can't use plain old mutex_lock()?
> >> We have one or two spots that actually check to see if the bit is set
> >> and just report a warning instead of actually waiting on the bit to clear.
> >
> > mutex_is_locked()?
>
> I suppose that would work, but I would still prefer to keep this bit of
> code as it is. My main motivation is to reuse what has already been
> proven: e1000, e1000e, igb, and several other drivers all use this same
> approach and it works.

OK, that's a reason.
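
For reference, a minimal sketch of the mutex-based alternative being suggested
here. The field and function names below (reset_lock and the *_sketch helpers)
are invented for illustration and are not from the posted patch; the mutex
would need a one-time mutex_init() in the probe path.

#include <linux/mutex.h>
#include <linux/kernel.h>

/* Hypothetical adapter layout; "reset_lock" is an invented field and is
 * not part of the posted patch.  mutex_init(&adapter->reset_lock) would
 * be done once in the probe path. */
struct igbvf_adapter_sketch {
        struct mutex reset_lock;        /* would replace the __IGBVF_RESETTING bit */
        /* ... rings, stats, etc. ... */
};

static int igbvf_set_ringparam_sketch(struct igbvf_adapter_sketch *adapter)
{
        /* Sleeps until the lock is free instead of polling with msleep(1). */
        mutex_lock(&adapter->reset_lock);

        /* ... tear down and reallocate the rings ... */

        mutex_unlock(&adapter->reset_lock);
        return 0;
}

/* The one or two spots that only want to warn, rather than wait, could
 * test the lock with mutex_is_locked() (or take it opportunistically
 * with mutex_trylock()) instead of test_bit(). */
static void igbvf_warn_if_resetting_sketch(struct igbvf_adapter_sketch *adapter)
{
        if (mutex_is_locked(&adapter->reset_lock))
                printk(KERN_WARNING "igbvf: reset already in progress\n");
}

This is only the alternative under discussion; as the thread notes, the posted
driver keeps the bit-based scheme for consistency with e1000, e1000e and igb.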

> I don't think we need the extra overhead of the mutex lock, since most of
> the calls that end up setting the __IGBVF_RESETTING bit are already
> wrapped within rtnl_lock/unlock calls. As far as I can tell, the only two
> threads that would ever compete for the lock are igbvf_reinit_locked and
> whatever ethtool or ifconfig request decides to change the configuration
> of the netdevice.
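
A minimal sketch of the call context being described; the call shown is
illustrative only (in practice the core reaches the driver's ethtool ops
through dev_ethtool(), which already runs with the RTNL mutex held):

/* ethtool and ifconfig requests reach the driver with the RTNL mutex
 * already held, so they are serialized against each other; the
 * __IGBVF_RESETTING bit mainly guards against a concurrent
 * igbvf_reinit_locked(). */
rtnl_lock();
err = igbvf_set_ringparam(netdev, &ring);  /* normally invoked by the core, not directly */
rtnl_unlock();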

You may well find that mutex_lock is more efficient than setting a timer
and waking up once per millisecond. It would certainly give much lower
latency on some setups, given clock granularities of as much as 10
milliseconds.
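
To make the granularity point concrete, here is the loop from the patch
annotated with the worst-case numbers; HZ=100 is assumed purely for
illustration, and reset_lock is the hypothetical mutex from the sketch above:

/* Existing pattern: each pass sleeps at least one full timer tick,
 * because msleep(1) rounds the request up to jiffies.  At HZ=100 that
 * is roughly 10 ms (and potentially more) per iteration, even if the
 * bit cleared almost immediately. */
while (test_and_set_bit(__IGBVF_RESETTING, &adapter->state))
        msleep(1);

/* Mutex version: the waiter is woken as soon as the holder calls
 * mutex_unlock(), so the latency is bounded by scheduling rather than
 * by the timer tick. */
mutex_lock(&adapter->reset_lock);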

But it sounds like that's all a separate standalone exercise.