On Sunday 04 August 2002 02:47, Andrew Morton wrote:
> Daniel Phillips wrote:
> >
> > On Saturday 03 August 2002 23:40, Andrew Morton wrote:
> > > - total amount of CPU time lost spinning on locks is 1%, mainly
> > > in page_add_rmap and zap_pte_range.
> > >
> > > That's not much spintime. The total system time with this test went
> > > from 71 seconds (2.5.26) to 88 seconds (2.5.30). (4.5 seconds per CPU)
> > > So all the time is presumably spent waiting on cachelines to come from
> > > other CPUs, or from local L2.
> >
> > Have we tried this one:
> >
> > static inline unsigned rmap_lockno(pgoff_t index)
> > {
> > -	return (index >> 4) & (ARRAY_SIZE(rmap_locks) - 1);
> > +	return (index >> 4) & (ARRAY_SIZE(rmap_locks) - 16);
> > }
> >
> > (which puts all the rmap spinlocks in separate cache lines)
>
> Seems a strange way of doing it?

It is a strange way of doing it.  I felt like being enigmatic at the
time, and no, nothing like that should ever go into production code; it
would be better suited to an IOCCC submission.
> We'll only ever use four locks this way.

Look again: 256 - 16 = 240 = 0xf0.  That mask keeps only bits 4..7 of
the hash, so sixteen distinct locks, spaced sixteen entries apart, which
is what puts each one in its own cache line.
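
For anyone who wants to see it run, here is a quick userspace sketch of
what the two masks do.  The numbers are assumptions for illustration
only: 256 entries in rmap_locks, 4-byte spinlocks and 64-byte cache
lines, so sixteen locks would share one line with the stock mask.

/* mask_demo.c: standalone illustration, not kernel code. */
#include <stdio.h>

#define NR_RMAP_LOCKS	256
#define LOCK_BYTES	4	/* assumed sizeof(spinlock_t) */
#define CACHELINE	64	/* assumed cache line size */

/* The stock hash: uses all 256 locks, packed 16 per cache line. */
static unsigned lockno_dense(unsigned long index)
{
	return (index >> 4) & (NR_RMAP_LOCKS - 1);	/* mask = 0xff */
}

/* The trick above: mask = 0xf0 keeps only bits 4..7 of the hash,
 * so just 16 lock numbers ever occur (0, 16, 32, ..., 240), each
 * 64 bytes (one full cache line) from the next. */
static unsigned lockno_spread(unsigned long index)
{
	return (index >> 4) & (NR_RMAP_LOCKS - 16);	/* mask = 0xf0 */
}

int main(void)
{
	unsigned char dense[NR_RMAP_LOCKS] = { 0 };
	unsigned char spread[NR_RMAP_LOCKS] = { 0 };
	unsigned long index;
	unsigned i, nd = 0, ns = 0;

	/* Hash a range of page offsets and record which locks get hit. */
	for (index = 0; index < 1UL << 16; index++) {
		dense[lockno_dense(index)] = 1;
		spread[lockno_spread(index)] = 1;
	}
	for (i = 0; i < NR_RMAP_LOCKS; i++) {
		nd += dense[i];
		ns += spread[i];
	}
	/* Prints: 256 locks, 4 bytes apart vs. 16 locks, 64 bytes apart. */
	printf("mask 0xff: %3u locks used, %2u bytes apart\n",
	       nd, LOCK_BYTES);
	printf("mask 0xf0: %3u locks used, %2u bytes apart\n",
	       ns, 16 * LOCK_BYTES);
	return 0;
}

Sixteen locks, not four, and none of them sharing a cache line; the
trade is fewer locks for zero false sharing between them.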
--
Daniel