Re: [RFC] [PATCH] Pre-emption control for userspace

From: David Lang
Date: Wed Mar 05 2014 - 18:16:28 EST


On Wed, 5 Mar 2014, Khalid Aziz wrote:

> On 03/05/2014 09:36 AM, Oleg Nesterov wrote:
>> On 03/05, Andi Kleen wrote:
>>>
>>> On Wed, Mar 05, 2014 at 03:54:20PM +0100, Oleg Nesterov wrote:
>>>> On 03/04, Andi Kleen wrote:
>>>>>
>>>>> Anything else?
>>>>
>>>> Well, we have yield_to(). Perhaps sys_yield_to(lock_owner) can help.
>>>> Or perhaps sys_futex() can do this if it knows the owner. Don't ask
>>>> me what exactly I mean though ;)
>>>
>>> You mean yield_to() would extend the time slice?
>>>
>>> That would be the same as the mmap page, just with a syscall right?

>> Not the same. Very roughly I meant something like
>>
>>     my_lock()
>>     {
>>         if (!TRY_LOCK()) {
>>             yield_to(owner);
>>             LOCK();
>>         }
>>
>>         owner = gettid();
>>     }
>>
>> But once again, I am not sure if this makes any sense.
>>
>> Oleg.


> The trouble with that approach is that by the time a thread finds out it cannot acquire the lock because someone else holds it, we have already paid the price of a context switch. What I am trying to do is avoid that cost. I looked into a few other approaches to solving this problem without making kernel changes:

Yes, you have paid the cost of the context switch, but your original problem description talked about multiple other threads each trying to get the lock and spinning on it (wasting time if the process holding it is asleep, though not if it's running on another core), causing a long delay before the process holding the lock gets a chance to run again.

Having the threads immediately yield to the process that has the lock reduces this down to two context switches, which isn't perfect, but it's a LOT better than what you started from.
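
For concreteness, here is a minimal sketch of the pattern Oleg hinted at above. Since no directed-yield syscall exists today, yield_to_tid() below is only a placeholder that falls back to sched_yield(); the lock word and owner bookkeeping are purely illustrative.

    /* Sketch only: a trylock that records the holder's tid and, on
     * contention, tries to push the CPU toward the holder. */
    #include <sched.h>
    #include <stdatomic.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static atomic_int lock_word;   /* 0 = free, 1 = held */
    static atomic_int owner_tid;   /* tid of the current holder */

    static void yield_to_tid(pid_t tid)
    {
        (void)tid;
        /* Placeholder: there is no directed-yield syscall, so an
         * undirected yield is the closest userspace equivalent. */
        sched_yield();
    }

    static void my_lock(void)
    {
        int expected = 0;

        while (!atomic_compare_exchange_strong(&lock_word, &expected, 1)) {
            yield_to_tid((pid_t)atomic_load(&owner_tid));
            expected = 0;
        }
        atomic_store(&owner_tid, (int)syscall(SYS_gettid));
    }

    static void my_unlock(void)
    {
        atomic_store(&owner_tid, 0);
        atomic_store(&lock_word, 0);
    }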

> - Use the PTHREAD_PRIO_PROTECT protocol to boost the priority of the thread that holds the lock, to minimize contention and the CPU cycles other threads waste only to find out someone already has the lock. The problem I ran into is that the implementation of PTHREAD_PRIO_PROTECT requires another system call, sched_setscheduler(), inside the library to boost the priority. Now I have added the overhead of a new system call, which easily outweighs any performance gain from removing lock contention. Besides, databases implement their own spinlocks to maximize performance and thus cannot use PTHREAD_PRIO_PROTECT from the POSIX threads library.

Well, writing to something in /proc isn't free either. And how is the thread supposed to know whether it needs to do so, or whether it's going to have enough time to finish its work before it's out of time (how can it know how much time it has left anyway?)
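
As an aside, for readers who haven't used it, setting up a priority-ceiling mutex through the standard pthreads API looks roughly like this. This is only the stock POSIX interface Khalid refers to, not anything from his patch, and the ceiling value is an arbitrary example:

    #include <pthread.h>

    static pthread_mutex_t db_lock;

    /* Create a mutex using the PTHREAD_PRIO_PROTECT protocol: while a
     * thread holds it, the thread runs at least at 'ceiling' priority,
     * which is where the extra scheduler work inside the library comes
     * from. */
    static int init_ceiling_mutex(int ceiling)
    {
        pthread_mutexattr_t attr;
        int err;

        pthread_mutexattr_init(&attr);
        err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
        if (!err)
            err = pthread_mutexattr_setprioceiling(&attr, ceiling);
        if (!err)
            err = pthread_mutex_init(&db_lock, &attr);
        pthread_mutexattr_destroy(&attr);
        return err;
    }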

> - I looked into the adaptive spinning futex work Darren Hart was working on. It looked very promising, but I ran into the same problem again. It reduces the cost of contention by delaying context switches in cases where spinning is quicker, but it still does nothing to reduce the cost of the context switch a thread incurs to get the CPU only to find out it cannot get the lock. This cost again outweighs the 3%-5% benefit we are seeing from simply not giving up the CPU in the middle of a critical section.

Is this gain from not giving up the CPU at all? Or is it from avoiding all the delays due to the contending threads trying in turn? The yield_to() approach avoids all those other threads trying in turn, so it should get fairly close to the same benefits.
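
For reference, the rough shape of the spin-then-sleep idea is sketched below. This is a simplified userspace approximation: Darren Hart's adaptive-spinning work did the spinning inside the kernel's futex code, where it can tell whether the lock owner is actually running on a CPU. The spin budget here is an arbitrary placeholder.

    #include <stdatomic.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    #define SPIN_BUDGET 1000        /* arbitrary; real code would tune or adapt this */

    static atomic_int futex_word;   /* 0 = free, 1 = held */

    static void spin_then_wait_lock(void)
    {
        int spins = 0;

        for (;;) {
            int expected = 0;

            if (atomic_compare_exchange_strong(&futex_word, &expected, 1))
                return;                         /* acquired */
            if (++spins < SPIN_BUDGET)
                continue;                       /* retry in userspace */
            /* Out of spin budget: sleep until the holder wakes us. */
            syscall(SYS_futex, &futex_word, FUTEX_WAIT, 1, NULL, NULL, 0);
            spins = 0;
        }
    }

    static void spin_then_wait_unlock(void)
    {
        atomic_store(&futex_word, 0);
        /* Simplification: wake unconditionally instead of tracking waiters. */
        syscall(SYS_futex, &futex_word, FUTEX_WAKE, 1, NULL, NULL, 0);
    }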

David Lang