I have read some research in this area done at the University of
Washington by Tom Anderson, IIRC. My recollection is that the effect
of the processor affinity in SMP configurations is negligible. The
authors expected to find that threads always scheduled on the same
CPU would benefit from a warm cache, but no experimental data
supported this hypothesis.
If you have a single CPU, cache *could* make a difference in a similar
fashion, but there is no reason to believe that scheduling all of the
threads for a given application, one after the other, is a good idea.
In fact, Mr. Anderson's research also found that it was most efficient
to create only as many threads as available CPUs. A program can
always be more discriminating about the best task (denotative usage)
to perform than can a scheduler that knows nothing about the
interrelationship of threads.
IMHO, the fellow who wrote the article appears to be basing his
criticism on preconceptions about the correct implementation of
threads (or whatever you call them). AFAIK, there is no evidence
showing that another execution model is usually better.
The only thing I have ever seen that looks like it could be a win in
some cases (and only for SMP) that we don't implement is the idea of
scheduler activations, as defined in Mr. Anderson's work. The
principle is to pass knowledge into the application about when the CPU
is running it. The application gains knowledge of the number of truly
simultaneous activities it is executing and can make appropriate
decisions about how it wants to accomplish its goals.
Here is a citation if anyone is interested:
T. Anderson, B. Bershad, E. Lazowska and H. Levy.
"Scheduler Activations: Effective Kernel Support for User-Level
Management of Parallelism",
ACM Transactions on Computer Systems 10(1), February 1992, pp. 53-79.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/