On Wed, Sep 23, 1998 at 01:55:57AM +0300, Jukka Tapani Santala wrote:
> On Tue, 22 Sep 1998, Kurt Garloff wrote:
> >                             w/32 procs      per proc
> >              proc  thread   proc   thread   proc  thread
> > 2.1.120       6.5   2.8     28.3   22.0     0.68  0.60
> > 2.1.122 FPU   6.0   3.9     28.1   22.1     0.69  0.57
> > 2.1.122 both  4.7   2.5     16.4   11.2     0.37  0.27
>
> I'm surprised... It's my recollection that unaligned data is far slower
> than cache misses. I guess accessing byte-aligned bytes isn't that bad,
> though. Still I'd be very interested to see statistics on different
> computers, and (if the structures aren't specific to one architecture -
> can't check just now; if they are, ignore this) most importantly
> architectures. Which is the unfortunate point in optimizations like
> this; they're kinda architecture-dependent.
As I pointed out in a private mail, the SMP fields (which I changed to
bytes) are not accessed on my UP machine, so the rather good results have
nothing to do with that change.
IIRC, accessing byte-aligned bytes on IA32 is not that bad. Maybe PPro and
P-II don't like it, I don't know, but Pentium, Cx6X86 and K6-2 are OK, AFAIK.
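Just to illustrate the space side of that trade-off (this sketch is not part
of either patch, and the stand-in structs are invented for the example):
packing the four SMP/runqueue fields as bytes shrinks them from 16 to 4 bytes
on a typical 32-bit target, which is where the earlier patch got its room
from, at the price of byte-granularity accesses:

#include <stdio.h>

/* Stand-ins for the four SMP/runqueue members only; the member names follow
 * task_struct, the structs themselves exist only for this illustration. */
struct smp_as_ints {
        int has_cpu, processor, last_processor, lock_depth;
};

struct smp_as_bytes {
        char has_cpu, processor, last_processor, lock_depth;
};

int main(void)
{
        /* 16 vs. 4 bytes on a typical 32-bit target */
        printf("as ints:  %u bytes\n", (unsigned) sizeof(struct smp_as_ints));
        printf("as bytes: %u bytes\n", (unsigned) sizeof(struct smp_as_bytes));
        return 0;
}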
> But if you're going to optimize for special cases, see the "Optimization
> Manuals" on Intel's website - they give good insight into the cache- and
> burst-loading sequences on Intel architectures. I would also try to
> profile with int's instead of char's to see if it's possible to find an
> even faster combination between cache-line use and misalignment costs.
> But then, I don't have the references in question handy to say if that's
> supposed to have any effect, either ;)
Using bytes might be bad on other archs, so I changed those fields back to
ints. I had to move exec_domain to the third cache line in order to have
enough room for the important variables.
I could have done this in the first place, but I didn't want to touch too
many fields.
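For reference, this is roughly how one could double-check where the hot
fields end up on a given architecture (an untested sketch, not part of the
patch; the helper name and the 32-byte line size are my assumptions, and the
offset comments only hold on true 32-bit archs, like the ones in the patch):

#include <linux/sched.h>
#include <linux/kernel.h>

/* Illustrative only: dump the offsets of the scheduler-hot task_struct
 * members so the cache-line placement can be verified per architecture.
 * Assumes 32-byte L1 lines. */
#define TS_OFF(m)       ((unsigned long) &((struct task_struct *) 0)->m)
#define SHOW(m)         printk(#m ": offset 0x%02lx, cache line %lu\n", \
                               TS_OFF(m), TS_OFF(m) / 32)

void show_task_struct_layout(void)
{
        SHOW(state);            /* 0x00, first line */
        SHOW(need_resched);     /* 0x10 */
        SHOW(mm);               /* 0x2c, second line */
        SHOW(counter);          /* 0x30 */
        SHOW(exec_domain);      /* now pushed out to the third line (0x4c) */
}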
I append the current patch. It is not tested; I don't know whether the kernel
will crash (very unlikely, unless I messed up the order within INIT_TASK and
the compiler happens not to catch it because the types are the same) or what
the scheduling performance will be (I'm pretty sure it would be the same as
with my previous patch on UP systems). At least it compiles.
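To make that risk concrete (a contrived example, nothing of this is in the
patch; the labeled-element form is just a thought, not something I changed):
a stale positional initializer keeps compiling silently as long as the types
still line up:

/* Suppose two members of the same type were swapped during the reordering,
 * but a positional initializer (like INIT_TASK) was left in the old order.
 * long initializes long, so the compiler cannot object -- the value meant
 * for counter simply ends up in priority: */
struct mini_task {
        long priority;          /* used to come after counter */
        long counter;
        unsigned long policy;
};

static struct mini_task stale = { 10 /* counter? */, 20 /* priority? */, 0 };

/* GCC's labeled-element extension (later standardized as C99 designated
 * initializers) ties every value to a member name, so reordering the
 * struct cannot silently break the initializer: */
static struct mini_task safe = {
        counter:        10,
        priority:       20,
        policy:         0,
};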
I will provide results after the weekend, when I'm back.
Linus, what do you think? Regardless of whether Richard's test is broken (as
Larry claims) or not (as I think), it is certainly a good idea to have the
task_struct ordered to be cache-friendly, isn't it? I really think it would
be a good idea to have this in the kernel.
Regards,
--
Kurt Garloff, Dortmund <K.Garloff@ping.de>
PGP key on http://student.physik.uni-dortmund.de/homepages/garloff

[Attachment: 21122-task_struct2.diff]
--- linux/include/linux/sched.h.orig	Thu Sep 17 10:33:04 1998
+++ linux/include/linux/sched.h	Fri Sep 25 10:11:13 1998
@@ -207,7 +207,9 @@
 struct user_struct;
 
 struct task_struct {
-/* these are hardcoded - don't touch */
+/* First put all the info needed in schedule() for every task
+ * (according to L.McVoy) in order to be cache friendly.
+ * Offset comments are for true 32bit archs, only. K.Garloff, 98/09/22 */
 	volatile long state;	/* -1 unrunnable, 0 runnable, >0 stopped */
 	unsigned long flags;	/* per process flags, defined below */
 	int sigpending;
@@ -215,19 +217,27 @@
 						0-0xBFFFFFFF for user-thead
 						0-0xFFFFFFFF for kernel-thread
 					 */
-	struct exec_domain *exec_domain;
+/* 0x10: */
 	long need_resched;
-
+	unsigned long timeout;
+	struct task_struct *prev_run;
+	struct task_struct *next_task;
+
+/* 0x20: */
+	struct task_struct *next_run;
+	unsigned long policy, rt_priority;
+/* memory management info */
+	struct mm_struct *mm;
+/* 0x30: */
 /* various fields */
-	long counter;
-	long priority;
+	long counter, priority;
 /* SMP and runqueue state */
-	int has_cpu;
-	int processor;
-	int last_processor;
-	int lock_depth;		/* Lock depth. We can context switch in and out of holding a syscall kernel lock... */
-	struct task_struct *next_task, *prev_task;
-	struct task_struct *next_run, *prev_run;
+	int processor, has_cpu, last_processor, lock_depth;
+	/* Lock depth. We can context switch in and out of holding a syscall kernel lock... */
+
+/* 0x48: */
+	struct task_struct *prev_task;
+	struct exec_domain *exec_domain;
 
 /* task state */
 	struct linux_binfmt *binfmt;
@@ -258,7 +268,6 @@
 	struct task_struct **tarray_ptr;
 
 	struct wait_queue *wait_chldexit;	/* for wait4() */
-	unsigned long timeout, policy, rt_priority;
 	unsigned long it_real_value, it_prof_value, it_virt_value;
 	unsigned long it_real_incr, it_prof_incr, it_virt_incr;
 	struct timer_list real_timer;
@@ -295,8 +304,6 @@
 	struct fs_struct *fs;
 /* open file information */
 	struct files_struct *files;
-/* memory management info */
-	struct mm_struct *mm;
 /* signal handlers */
 	spinlock_t sigmask_lock;	/* Protects signal and blocked */
 	struct signal_struct *sig;
@@ -337,10 +344,13 @@
  * your own risk!. Base=0, limit=0x1fffff (=2MB)
  */
 #define INIT_TASK \
-/* state etc */	{ 0,0,0,KERNEL_DS,&default_exec_domain,0, \
+/* state etc */	{ 0,0,0,KERNEL_DS,0, \
+/* timeout */	0,&init_task,&init_task,&init_task,\
+/* policy */	SCHED_OTHER,0, \
+/* mm */	&init_mm, \
 /* counter */	DEF_PRIORITY,DEF_PRIORITY, \
 /* SMP */	0,0,0,-1, \
-/* schedlink */	&init_task,&init_task, &init_task, &init_task, \
+/* schedlink */	&init_task, &default_exec_domain,\
 /* binfmt */	NULL, \
 /* ec,brk... */	0,0,0,0,0,0, \
 /* pid etc.. */	0,0,0,0,0, \
@@ -348,7 +358,7 @@
 /* pidhash */	NULL, NULL, \
 /* tarray */	&task[0], \
 /* chld wait */	NULL, \
-/* timeout */	0,SCHED_OTHER,0,0,0,0,0,0,0, \
+/* it */	0,0,0,0,0,0, \
 /* timer */	{ NULL, NULL, 0, 0, it_real_fn }, \
 /* utime */	{0,0,0,0},0, \
 /* per CPU times */ {0, }, {0, }, \
@@ -367,7 +377,6 @@
 /* tss */	INIT_TSS, \
 /* fs */	&init_fs, \
 /* files */	&init_files, \
-/* mm */	&init_mm, \
 /* signals */	SPIN_LOCK_UNLOCKED, &init_signals, {{0}}, {{0}}, NULL, &init_task.sigqueue, 0, 0, \
 }
 