On 04/10/2013 06:31 AM, Ingo Molnar wrote:
> * Waiman Long <Waiman.Long@xxxxxx> wrote:
>> Yes, I can do that. So can I put your name down as reviewer or ack'er
>> for the 1st patch?
>
> Since I'll typically be the maintainer applying & pushing kernel/mutex.c
> changes to Linus via the locking tree, the commit will get a Signed-off-by
> from me once you resend the latest state of things - no need to add my
> Acked-by or Reviewed-by right now.

Thanks for the explanation. I am still pretty new to this process of
upstream kernel development.

> * Waiman Long <Waiman.Long@xxxxxx> wrote:
>>> That said, the MUTEX_SHOULD_XCHG_COUNT macro should die. Why shouldn't
>>> all architectures just consider negative counts to be locked? It
>>> doesn't matter that some might only ever see -1.
>>
>> I think so too. However, I don't have the machines to test out other
>> architectures. The MUTEX_SHOULD_XCHG_COUNT is just a safety measure to
>> make sure that my code won't screw up the kernel in other architectures.
>> Once it is confirmed that a negative count other than -1 is fine for all
>> the other architectures, the macro can certainly go.
>
> I'd suggest to just remove it in an additional patch, Cc:-ing
> linux-arch@xxxxxxxxxxxxxxxx. The change is very likely to be fine, if not
> then it's easy to revert it.

> I'm still hoping for another patch from you that adds queueing to the
> spinners ... That approach could offer better performance than current
> patches 1, 2, 3. In theory.
>
> I'd prefer that approach because you have a testcase that shows the
> problem and you are willing to maximize performance with it - so we could
> make sure we have reached maximum performance instead of dropping patches
> #2, #3, reaching partial performance with patch #1, without having a real
> full resolution.
>
> Thanks,
>
> 	Ingo

That is what I hope too. I am going to work on another patch to add spinner
queuing to see how much performance impact it will have.