Re: [RFC] observe and act upon workload parallelism: PERF_TYPE_PARALLELISM (Was: [RFC][PATCH] sched_wait_block: wait for blocked threads)

From: Ingo Molnar
Date: Mon Nov 16 2009 - 15:13:59 EST



* Stijn Devriendt <highguy@xxxxxxxxx> wrote:

> One extra catch, I didn't even think of in the original approach is
> that you still need a way of saying to the kernel: no more work here.
>
> My original approach fails bluntly and I will happily take credit for
> that ;) The perf-approach perfectly allows for this, by waking up the
> "controller" thread which does exactly nothing as there's no work
> left.

Note, the perf approach does not require a 'controller thread'.

The most efficient approach using perf-events would be:

- have the pool threads block in poll(perf_event_fd). (all threads
block in poll() on the same fd).

- blocking threads wake_up() the pool and cause them to drop out of
poll() (with no intermediary). [if there are fewer than
perf_event::min_concurrency tasks running.]

- waking threads observe the event state and only run if we are still
below perf_event::max_concurrency - otherwise they re-queue to the
poll() waitqueue.
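
( To make that flow concrete, here is a rough userspace sketch of the
worker side - purely illustrative: the wakeup semantics described above
are what is being proposed, not existing API, and run_one_work_item()
is just a stand-in for whatever the pool actually does: )

	/*
	 * Sketch only: 'concurrency_fd' is assumed to be a perf-event
	 * style fd opened with the proposed min_concurrency /
	 * max_concurrency attributes; plain poll() is the only
	 * existing API used here.
	 */
	#include <poll.h>

	static int concurrency_fd;		/* one fd, shared by the pool */

	static void run_one_work_item(void)	/* placeholder for real work */
	{
		/* dequeue and process a single item */
	}

	static void *pool_worker(void *arg)
	{
		struct pollfd pfd = { .fd = concurrency_fd, .events = POLLIN };

		(void)arg;

		for (;;) {
			/*
			 * Block here. Per the proposal the kernel wakes
			 * us only when the group's running-task count
			 * drops below min_concurrency; if we raced past
			 * max_concurrency we would simply end up back
			 * in poll(), re-queued on the waitqueue.
			 */
			if (poll(&pfd, 1, -1) <= 0)
				continue;

			run_one_work_item();
		}
		return NULL;
	}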

Basically the perf-event fd creates the 'group of tasks'. This can be
created voluntarily by cooperating threads - or involuntarily as well
via PID attach or CPU attach.

There's no 'tracing' overhead or notification overhead: we maintain a
shared state and the 'notifications' are straight wakeups that bring the
pool members out of poll(), to drive the workload further.

Such a special sw-event, with min_concurrency==max_concurrency==1, would
implement Linus's interface - using standard facilities like poll().
(The only 'special' act is the setup of the group itself.)
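
( Again only a sketch of what that setup could look like -
PERF_TYPE_PARALLELISM and the min_concurrency/max_concurrency fields
are the hypothetical extensions being discussed in this thread, not
existing perf ABI; only the perf_event_open() syscall itself is real: )

	#include <linux/perf_event.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static int setup_concurrency_fd(void)
	{
		struct perf_event_attr attr = {
			.type		  = PERF_TYPE_PARALLELISM, /* hypothetical */
			.size		  = sizeof(attr),
			.min_concurrency  = 1,	/* hypothetical field */
			.max_concurrency  = 1,	/* hypothetical field */
		};

		/*
		 * Attach to the current process ('PID attach'); every
		 * pool thread then blocks in poll() on this one fd.
		 */
		return syscall(__NR_perf_event_open, &attr, getpid(),
			       -1 /* any cpu */, -1 /* no group */, 0);
	}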

So various concurrency controls could be implemented that way -
including the one Linus suggests. Even an HPC workload-queueing daemon,
which shepherds 100% uncooperative tasks, could be done as well.

I don't think this 'fancy' approach is actually a performance drag: it
would really do precisely the same thing Linus's facility does (unless
i'm missing something subtle - or something less subtle about Linus's
scheme), with the two parameters set to '1'.

( It would also enable a lot of other things, and it would not tie the
queueing implementation into the scheduler. )

Only trying would tell us for sure though - maybe i'm wrong.

Thanks,

Ingo