Re: [PATCH v3] tags: much faster, parallel "make tags"
From: Alexey Dobriyan
Date: Mon May 11 2015 - 16:20:16 EST
On Sun, May 10, 2015 at 09:58:12PM +0100, Pádraig Brady wrote:
> On 10/05/15 14:26, Alexey Dobriyan wrote:
> > On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
> >> On 08/05/15 14:26, Alexey Dobriyan wrote:
> >
> >>> exuberant()
> >>> {
> >>> - all_target_sources | xargs $1 -a \
> >>> + rm -f .make-tags.*
> >>> +
> >>> + all_target_sources >.make-tags.src
> >>> + NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
> >>
> >> `nproc` is simpler and available since coreutils 8.1 (2009-11-18)
> >
> > nproc was discarded because getconf is standardized.
>
> Note getconf doesn't honor CPU affinity which may be fine here?
>
> $ taskset -c 0 getconf _NPROCESSORS_ONLN
> 4
> $ taskset -c 0 nproc
> 1
Why would anyone tag files under affinity?
> >>> + NR_LINES=$(wc -l <.make-tags.src)
> >>> + NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
> >>> +
> >>> + split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
> >>
> >> `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8 (2010-12-22)
> >
> > -nl/ can't count and always makes the first file somewhat bigger, which is
> > suspicious. What else can't it do right?
>
> It avoids the overhead of reading all data and counting the lines,
> by splitting the data into approx equal numbers of lines as detailed at:
> http://gnu.org/s/coreutils/split
~1 second -- within statistical error.
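For reference, the two splitting strategies behave like this (a minimal sketch; the sample input and file names are illustrative, not the actual patch):

```shell
# Stand-in for the output of all_target_sources.
seq 10 > .make-tags.demo

# Strategy in the patch: count lines up front, round up to a fixed
# chunk size, then split by line count.
NR_CPUS=4
NR_LINES=$(wc -l < .make-tags.demo)
NR_LINES=$(((NR_LINES + NR_CPUS - 1) / NR_CPUS))
split -a 6 -d -l "$NR_LINES" .make-tags.demo .make-tags.demo.l.

# Suggested alternative (coreutils >= 8.8): split into N chunks at
# line boundaries without reading the file to count lines first.
split -d -n l/$NR_CPUS .make-tags.demo .make-tags.demo.n.

ls .make-tags.demo.l.* | wc -l    # 4 chunks: 3, 3, 3, 1 lines
ls .make-tags.demo.n.* | wc -l    # 4 chunks of roughly equal byte size
```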
> >>> + sort .make-tags.* >>$2
> >>> + rm -f .make-tags.*
> >>
> >> Using sort --merge would speed up significantly?
> >
> > By ~1 second, yes.
> >
> >> Even faster would be to get sort to skip the header lines, avoiding the need for sed.
> >> It's a bit awkward and was discussed at:
> >> http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
> >> Summarising that: if not using merge you can:
> >>
> >> tlines=$(($(wc -l < "$2") + 1))
> >> tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
> >>
> >> Or if merge is appropriate then:
> >>
> >> tlines=$(($(wc -l < "$2") + 1))
> >> eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2
> >
> > Might as well teach ctags to do real parallel processing.
> > LC_* are set by top level Makefile.
> >
> >> p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
> >
> > The real question is how to kill ctags reliably.
> > Naive
> >
> > trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT
> >
> > doesn't work.
> >
> > Files are removed, but processes aren't.
>
> Is $(jobs -p) generating the correct list?
It looks like it does.
> On an interactive shell here it is.
> Perhaps you need to explicitly use #!/bin/sh -m
> at the start to enable job control like that?
> Another option would be to append each background $! pid
> to a list and kill that list.
> Note also you may want to `wait` after the kill too.
None of this works reliably.
I switched to "xargs -P" and Ctrl+C became reliable, immediate, and
free for the programmer. See the updated patch.
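The shape of the new approach, as a minimal sketch (the sample file list, batch size, and the echo stand-in for ctags are illustrative, not the exact patch):

```shell
# Build the worklist, then let xargs fan it out over the CPUs.
printf '%s\n' a.c b.c c.c d.c > .make-tags.src
NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)

# xargs -P runs up to $NR_CPUS children at once. On Ctrl+C the
# terminal delivers SIGINT to the whole foreground process group,
# so xargs and every child die together -- no trap/kill bookkeeping.
< .make-tags.src xargs -P "$NR_CPUS" -n 2 echo ctags-stub > .make-tags.out
```

With 4 input files and -n 2, that is two invocations of the stub command, run in parallel up to the CPU count.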
--