Re: [PATCH v2 1/7] lib/dlock-list: Distributed and lock-protected lists
From: Jan Kara
Date: Thu Jul 14 2016 - 10:35:57 EST
On Thu 14-07-16 07:50:43, Tejun Heo wrote:
> > > > +void dlock_list_add(struct dlock_list_node *node, struct dlock_list_head *head)
> > > > +{
> > > > + struct dlock_list_head *myhead;
> > > > +
> > > > + /*
> > > > + * Disable preemption to make sure that the CPU won't get changed.
> > > > + */
> > > > + myhead = get_cpu_ptr(head);
> > > > + spin_lock(&myhead->lock);
> > > > + node->lockptr = &myhead->lock;
> > > > + list_add(&node->list, &myhead->list);
> > > > + spin_unlock(&myhead->lock);
> > > > + put_cpu_ptr(head);
> > > > +}
> > > I wonder whether it'd be better to use irqsafe operations. Lists are
> > > often used from irq contexts.
> >
> > The current use case only needs to use the regular lock functions. You are
> > right that future use cases may require an irqsafe version of locks. I can
> > either modify the code now to allow lock type selection at init time, for
> > example, or defer it as a future enhancement when the need arises. What do
> > you think?
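(For concreteness, the init-time lock type selection mentioned above could
look something like the sketch below. The DLOCK_LIST_IRQSAFE flag, the
'flags' field, and the init helper are made up here, purely to illustrate
the idea; the add/delete paths would then pick spin_lock() or
spin_lock_irqsave() based on the flag:)

	/* Hypothetical: select the locking mode when the list is set up */
	#define DLOCK_LIST_IRQSAFE	0x1

	struct dlock_list_head {
		struct list_head list;
		spinlock_t lock;
		unsigned int flags;	/* set once at init, read-only after */
	};

	/* Hypothetical init helper taking the locking-mode flags */
	int init_dlock_list_head(struct dlock_list_head __percpu **phead,
				 unsigned int flags);
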
>
> The bulk of the performance gain of dlist would come from being per-cpu,
> and I don't think it's likely that we'd see any noticeable difference
> between irq- and preempt-safe operations. Given that what's being
> implemented are really low-level operations, I'd suggest going with
> irqsafe from the get-go.
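(For reference, the irqsafe variant would presumably just swap the plain
lock/unlock pair for the irqsave versions — an untested sketch of the
function quoted above:)

	void dlock_list_add(struct dlock_list_node *node,
			    struct dlock_list_head *head)
	{
		struct dlock_list_head *myhead;
		unsigned long flags;

		/* get_cpu_ptr() still disables preemption */
		myhead = get_cpu_ptr(head);
		spin_lock_irqsave(&myhead->lock, flags);
		node->lockptr = &myhead->lock;
		list_add(&node->list, &myhead->list);
		spin_unlock_irqrestore(&myhead->lock, flags);
		put_cpu_ptr(head);
	}
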
I'm not sure we want that unconditionally, though. i_sb_list, for which
these percpu lists will be used, is bashed pretty heavily under some
workloads, and the cost of the additional interrupt disabling & enabling
may be visible under those loads. Probably not in the cases where you get
a boost from percpu lists, but if the workload is mostly single-threaded,
the additional CPU cost may be measurable. So IMO we should check that a
load which creates tons of empty inodes in tmpfs from a single process
doesn't regress with this change.
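(Something as simple as the below, run against a tmpfs mount, should show
it — a rough userspace sketch; it assumes /tmp is on tmpfs, and the
iteration count is arbitrary:)

	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <time.h>
	#include <unistd.h>

	/*
	 * Create and unlink lots of empty files from a single process and
	 * report the elapsed time, to compare kernels with and without
	 * the patch.
	 */
	int main(void)
	{
		struct timespec t0, t1;
		char name[64];
		long i, iters = 1000000;	/* arbitrary */

		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (i = 0; i < iters; i++) {
			int fd;

			snprintf(name, sizeof(name), "/tmp/empty-%ld", i);
			fd = open(name, O_CREAT | O_WRONLY, 0600);
			if (fd < 0) {
				perror("open");
				exit(1);
			}
			close(fd);
			unlink(name);
		}
		clock_gettime(CLOCK_MONOTONIC, &t1);
		printf("%ld create+unlink cycles in %.2f s\n", iters,
		       (t1.tv_sec - t0.tv_sec) +
		       (t1.tv_nsec - t0.tv_nsec) / 1e9);
		return 0;
	}
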
Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR