This sounds like the way to go (at least to me) -- it would also
get rid of the necessity of using those two user signal handlers
(I believe I saw this in the README?).
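For anyone who hasn't read it, here is roughly what I understand
those two handlers to do -- a minimal sketch only, assuming they
implement thread suspend/restart on a pair of user signals (the
signal numbers and handler names here are my guesses, not taken
from the actual library source):

    #include <signal.h>

    /* Hypothetical suspend/restart handlers of the kind a userland
     * thread library needs when the kernel gives it no help: one
     * signal parks the current thread, the other wakes it up. */
    static void suspend_handler(int sig)
    {
        sigset_t mask;
        sigfillset(&mask);
        sigdelset(&mask, SIGUSR2);  /* wait for the restart signal */
        sigsuspend(&mask);
    }

    static void restart_handler(int sig)
    {
        /* nothing to do: delivery alone breaks the sigsuspend() */
    }

    static void install_handlers(void)
    {
        struct sigaction sa;
        sa.sa_handler = suspend_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGUSR1, &sa, NULL);
        sa.sa_handler = restart_handler;
        sigaction(SIGUSR2, &sa, NULL);
    }

With kernel-supported threads the scheduler does this directly, and
the two signals (and the handler round trips) go away.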
> > [sharing of all thread invariant task_struct data to reduce overhead
> > of context switching]
>
> That's an interesting idea, and I'd suspect that some kernels with
> built-in threads (like Solaris or even Mach) are organized this way.
>
> However, it could be that the extra overhead of context switching is
> tolerable in most applications. In my experience, threads are mostly
> used to 1- do input/output in an overlapped way, and 2-
> heavy-duty computation on multiprocessors. In case 1-, the program
> spends lots of time in i/o system calls anyway, and for 2-, the goal
> is to have one thread per processor and as few context switches as
> possible (e.g. by tying threads to processors, or at least giving
> affinities between a thread and a processor).
>
> So, we'll see in practice if context switching time is really a
> problem.
There were a couple of other responses that seemed to indicate that
it is -- and there is obviously some merit to the idea, since other
OSes like Solaris host their threads on lightweight processes.
If, as in case 1-, the program spends lots of time in i/o system
calls, I think that makes fast context switch times especially
important. Right now, anyway, the libc functions use locks to
protect access to the global structures behind the stream-based
i/o calls (printf and friends), so an i/o-bound threaded program
in which there is contention for that libc i/o data pays for every
context switch.
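To make the contention concrete, here is roughly what a thread-safe
printf has to do. The lock name and wrapper function are made up
for illustration; the real libc keeps its own per-stream locks
internally:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdarg.h>

    /* Hypothetical per-stream lock of the sort a thread-safe libc
     * needs around its global stdio structures. */
    static pthread_mutex_t stdout_lock = PTHREAD_MUTEX_INITIALIZER;

    int locked_printf(const char *fmt, ...)
    {
        va_list ap;
        int n;

        pthread_mutex_lock(&stdout_lock);   /* loser of the race blocks */
        va_start(ap, fmt);
        n = vprintf(fmt, ap);               /* holder touches stdout */
        va_end(ap);
        pthread_mutex_unlock(&stdout_lock); /* waking a waiter = a switch */
        return n;
    }

Every time two threads race for that lock, the loser has to be
switched out and switched back in later, so the lock turns context
switch latency directly into i/o latency.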
I haven't thought too much about case 2-; I figure it will be a LONG
time before I end up with a multiprocessor Linux box.
--
Peeter Joot   TOROLAB(PJOOT)   joot@vnet.ibm.com
IBM Canada    Tie Line 778-3186   Phone 416-448-3186