Re: [PATCH v7 1/4] spinlock: A new lockref structure for lockless update of refcount

From: Ingo Molnar
Date: Fri Aug 30 2013 - 05:49:10 EST



* Sedat Dilek <sedat.dilek@xxxxxxxxx> wrote:

> On Fri, Aug 30, 2013 at 9:55 AM, Sedat Dilek <sedat.dilek@xxxxxxxxx> wrote:
> > On Fri, Aug 30, 2013 at 5:54 AM, Linus Torvalds
> > <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> >> On Thu, Aug 29, 2013 at 8:12 PM, Waiman Long <waiman.long@xxxxxx> wrote:
> >>> On 08/29/2013 07:42 PM, Linus Torvalds wrote:
> >>>>
> >>>> Waiman? Mind looking at this and testing?
> >>>>
> >>>> Linus
> >>>
> >>> Sure, I will try out the patch tomorrow morning and see how it works out for
> >>> my test case.
> >>
> >> Ok, thanks, please use this slightly updated patch attached here.
> >>
> >> It improves on the previous version in actually handling the
> >> "unlazy_walk()" case with native lockref handling, which means that
> >> one other not entirely odd case (symlink traversal) avoids the d_lock
> >> contention.
> >>
> >> It also refactored the __d_rcu_to_refcount() to be more readable, and
> >> added a big comment about what the heck is going on. The old code was
> >> clever, but I suspect not very many people could possibly understand
> >> what it actually did. Plus it used nested spinlocks because it wanted
> >> to avoid checking the sequence count twice. Which is stupid, since
> >> nesting locks is how you get really bad contention, and the sequence
> >> count check is really cheap anyway. Plus the nesting *really* didn't
> >> work with the whole lockref model.
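
( Side note for readers following along: the non-nested pattern being
described is roughly the following -- a sketch with hypothetical
helper/field names, not the actual fs/dcache.c code. Each dentry's
d_lock is taken on its own and the cheap sequence-count check is
simply redone under it, instead of holding two d_locks at once:

	/*
	 * Illustrative only: turn an RCU-walk reference into a real
	 * refcount.  The helper name and the d_lockref field are
	 * assumptions for the sake of the sketch.
	 */
	static int sketch_rcu_to_refcount(struct dentry *dentry, unsigned seq)
	{
		int gotref = 0;

		spin_lock(&dentry->d_lock);
		/*
		 * Re-check the sequence count now that the lock is held;
		 * if the dentry changed during the lockless walk, fail
		 * and let the caller fall back to the slow path.
		 */
		if (!read_seqcount_retry(&dentry->d_seq, seq)) {
			dentry->d_lockref.count++;
			gotref = 1;
		}
		spin_unlock(&dentry->d_lock);
		return gotref;
	}

  The worst case is one extra seqcount read, which is far cheaper than
  nested spinlocks under contention. )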
> >>
> >> With this, my stupid thread-lookup thing doesn't show any spinlock
> >> contention even for the "look up symlink" case.
> >>
> >> It also avoids the unnecessary aligned u64 for when we don't actually
> >> use cmpxchg at all.
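
( Side note: concretely, that means the lockref layout ends up as a
union where the u64 view of the lock/count pair -- and its 8-byte
alignment requirement -- only exists when the architecture opts into
the cmpxchg fast path. A sketch; the exact config symbol here is an
assumption:

	struct lockref {
		union {
	#ifdef CONFIG_CMPXCHG_LOCKREF	/* symbol name approximate */
			aligned_u64 lock_count;	/* cmpxchg target */
	#endif
			struct {
				spinlock_t lock;
				unsigned int count;
			};
		};
	};

  On everything else it is just a plain spinlock next to a count. )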
> >>
> >> It's still one single patch, since I was working on lots of small
> >> cleanups. I think it's pretty close to done now (assuming your testing
> >> shows it performs fine - the powerpc numbers are promising, though),
> >> so I'll split it up into proper chunks rather than random commit
> >> points. But I'm done for today at least.
> >>
> >> NOTE NOTE NOTE! My test coverage really has been pretty pitiful. You
> >> may hit cases I didn't test. I think it should be *stable*, but maybe
> >> there's some other d_lock case that your tuned waiting hid, and that
> >> my "fastpath only for unlocked case" version ends up having problems
> >> with.
> >>
> >
> > Following this thread with half an eye... was that "unsigned" issue
> > fixed (someone pointed it out)?
> > What's the subject line of that test patch?
> > I would like to test it on my SNB ultrabook with your test-case script.
> >
>
> Here on Ubuntu/precise v12.04.3 AMD64 I get these numbers for total loops:
>
> lockref:   w/o patch | w/ patch
> ================================
> Run #1:    2.688.094 | 2.643.004
> Run #2:    2.678.884 | 2.652.787
> Run #3:    2.686.450 | 2.650.142
> Run #4:    2.688.435 | 2.648.409
> Run #5:    2.693.770 | 2.651.514
>
> Average: 2687126,6 vs. 2649171,2 (-37955,4, i.e. ~1.4% fewer loops)
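
( Side note: the mean and sample stddev of those two run sets can be
computed directly from the table -- a quick standalone sketch, with
the loop counts copied from above; build with "gcc -O2 stats.c -lm":

	#include <math.h>
	#include <stdio.h>

	/* Total-loop counts from the five runs above. */
	static const double unpatched[] = {
		2688094, 2678884, 2686450, 2688435, 2693770
	};
	static const double patched[] = {
		2643004, 2652787, 2650142, 2648409, 2651514
	};

	static void stats(const char *name, const double *v, int n)
	{
		double sum = 0.0, var = 0.0, mean;
		int i;

		for (i = 0; i < n; i++)
			sum += v[i];
		mean = sum / n;

		for (i = 0; i < n; i++)
			var += (v[i] - mean) * (v[i] - mean);
		var /= n - 1;		/* sample variance */

		printf("%-12s mean %.1f, stddev %.1f ( +- %.2f%% )\n",
		       name, mean, sqrt(var), 100.0 * sqrt(var) / mean);
	}

	int main(void)
	{
		stats("w/o patch:", unpatched, 5);
		stats("w/ patch:", patched, 5);
		return 0;
	}

  That is essentially the "( +- x.xx% )" figure perf prints for
  wall-clock time with the invocation below. )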

For precise stddev numbers you can run it like this:

perf stat --null --repeat 5 ./test

and it will measure time only and print the stddev as a percentage:

 Performance counter stats for './test' (5 runs):

       1.001008928 seconds time elapsed                ( +- 0.00% )

Thanks,

Ingo