RE: [PATCH 01/10] perf record: new interfaces to read ring buffer to file
From: Liang, Kan
Date: Wed Oct 11 2017 - 00:12:50 EST
> > /* When check_messup is true, 'end' must point to a good entry */
> > static union perf_event *perf_mmap__read(struct perf_mmap *md, bool
> > check_messup, u64 start,
> > diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
> > index b1c14f1..1ce4857 100644
> > --- a/tools/perf/util/evlist.h
> > +++ b/tools/perf/util/evlist.h
> > @@ -39,6 +39,16 @@ struct perf_mmap {
> > char event_copy[PERF_SAMPLE_MAX_SIZE] __aligned(8);
> > };
> >
> > +struct perf_mmap_read {
> > + struct perf_mmap *md;
> > + u64 head;
> > + u64 start;
> > + u64 end;
>
> So there will always be a one-to-one association of 'struct perf_mmap_read'
> and 'struct perf_mmap', so why not go on adding more fields to 'struct
> perf_mmap' as we need them,
The fields in 'struct perf_mmap' would need to be recalculated before each
read, so I put them in a new struct instead.
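To illustrate the point, here is a minimal user-space sketch (simplified
stand-in types and hypothetical accessor names, not the code from the patch)
of why the read window gets its own short-lived struct: head/start/end must
be re-derived from the ring buffer's control page at the start of every read
pass, while 'struct perf_mmap' only describes the long-lived mapping:

	#include <stdbool.h>
	#include <stdint.h>

	struct perf_mmap {
		void *base;		/* mmap'ed area, incl. control page */
		uint64_t mask;		/* data area size - 1 (power of two) */
	};

	struct perf_mmap_read {		/* per-pass state, recomputed each time */
		struct perf_mmap *md;
		uint64_t head;		/* fresh snapshot of data_head */
		uint64_t start;		/* first byte to consume */
		uint64_t end;		/* one past the last byte to consume */
		bool overwrite;
		bool backward;
		unsigned long size;	/* bytes available in this pass */
	};

	/* Hypothetical accessors for the control page's head/tail. */
	extern uint64_t read_data_head(struct perf_mmap *md);
	extern uint64_t read_data_tail(struct perf_mmap *md);

	static int mmap_read_init(struct perf_mmap *md, struct perf_mmap_read *r,
				  bool overwrite, bool backward)
	{
		r->md        = md;
		r->overwrite = overwrite;
		r->backward  = backward;
		r->head      = read_data_head(md);
		/*
		 * Non-overwrite: everything between data_tail and data_head
		 * is new.  Overwrite: at most one full buffer behind the
		 * newest data is valid (the real code must additionally
		 * detect records trashed by the writer).
		 */
		r->start     = overwrite ? r->head : read_data_tail(md);
		r->end       = overwrite ? r->head + md->mask + 1 : r->head;
		r->size      = (unsigned long)(r->end - r->start);
		return r->size ? 0 : -1;	/* -1: nothing to read this pass */
	}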
> but not doing it all at once (backward, snapshotting,
> overwrite, etc.), but first the simple part: make the most basic modes,
>
> perf record -a
>
> perf top
>
> work, multithreaded, leaving the other more complicated modes falling back
> to the old format; then, when we have it solid, go on adding the other
> features.
Agree.
When I did the perf top optimization, I also tried Namhyung's perf top
multi-thread patch:
https://lwn.net/Articles/667469/
I think it may be a good starting point.
I didn't build on his patch, though, because the root cause of the bad perf
top performance is non-overwrite mode, which generates a large number of
samples in a short time. That exceeds KNL's computational capability, and
multi-threading doesn't help much in this case.
So I tried to use overwrite mode instead.
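For reference, overwrite mode can be requested from user space roughly like
this (a sketch under the usual perf_event_open rules, not code from this
series): mapping the buffer without PROT_WRITE means the reader never updates
data_tail, so the kernel keeps writing and overwrites the oldest records
instead of stopping:

	#include <linux/perf_event.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static int open_overwrite_buffer(int cpu, size_t pages, void **out)
	{
		struct perf_event_attr attr;
		size_t len = (pages + 1) * (size_t)getpagesize();
		int fd;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = PERF_TYPE_HARDWARE;
		attr.config = PERF_COUNT_HW_CPU_CYCLES;
		attr.sample_period = 100000;
		attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_TID;
		attr.write_backward = 1;	/* newest-first layout, Linux >= 4.7 */

		fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
		if (fd < 0)
			return -1;

		/* PROT_READ only: no data_tail updates => overwritable buffer */
		*out = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
		return *out == MAP_FAILED ? -1 : fd;
	}

With write_backward set, the reader starts at data_head and walks records
newest-first, which is what makes a cheap snapshot-on-demand possible.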
>
> In the end, having the two formats supported will be needed anyway, and
> we can as well ask for processing with both perf.data file formats to compare
> results, while we strengthen the new code.
>
> I just think we should do this in a more fine-grained way to avoid too much
> code churn, as well as having a fallback to the old code, which, albeit not
> scalable, is what we have been using and can help in certifying that the new
> one works well, by comparing its outputs.
I already extended the multithreading support for event synthesis in perf
record:
https://github.com/kliang2/perf.git perf_record_opt
I will send it out for review shortly, after rebasing on the latest perf/core.
In that patch series, I maintain a growable (realloc'ed) buffer for each
thread to temporarily keep its processing results, and write them all to the
perf.data file at the end of event synthesis. The amount of synthesized event
data is not big (hundreds of kilobytes), so I think it should be OK to do that.
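In case it helps review, the buffering scheme described above looks roughly
like this (simplified types and hypothetical names, not the actual series):
each synthesizing thread appends records to a private buffer, and the buffers
are written out sequentially after all threads have joined, so the hot path
needs no locking and the output file stays ordered:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	struct synth_buf {
		char *data;
		size_t len, cap;
	};

	/* Called only by the buffer's owning thread: no locking needed. */
	static int synth_buf_append(struct synth_buf *b, const void *rec, size_t n)
	{
		if (b->len + n > b->cap) {
			size_t ncap = b->cap ? b->cap : 4096;
			char *p;

			while (ncap < b->len + n)
				ncap *= 2;
			p = realloc(b->data, ncap);
			if (!p)
				return -1;
			b->data = p;
			b->cap = ncap;
		}
		memcpy(b->data + b->len, rec, n);
		b->len += n;
		return 0;
	}

	/* Called once, after pthread_join() on every worker. */
	static int synth_flush_all(struct synth_buf *bufs, int nr, FILE *out)
	{
		int i;

		for (i = 0; i < nr; i++) {
			if (fwrite(bufs[i].data, 1, bufs[i].len, out) != bufs[i].len)
				return -1;
			free(bufs[i].data);
		}
		return 0;
	}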
Thanks,
Kan
>
> - Arnaldo
>
> > + bool overwrite;
> > + bool backward;
> > + unsigned long size;
> > +};
> > +
> > static inline size_t
> > perf_mmap__mmap_len(struct perf_mmap *map)
> > {
> > @@ -193,6 +203,11 @@
> > void perf_evlist__mmap_read_catchup(struct perf_evlist *evlist, int idx);
> >
> > void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx);
> >
> > +int perf_mmap__read_init(struct perf_mmap *md, struct perf_mmap_read *read,
> > +			 bool overwrite, bool backward);
> > +int perf_mmap__read_to_file(struct perf_mmap_read *read,
> > +			    struct perf_data_file *file);
> > +
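As a caller-side illustration of how the two interfaces above are meant to
compose (control flow only; the return conventions are my assumption, not
necessarily the patch's):

	/* Uses the declarations quoted above. */
	static int record_one_mmap(struct perf_mmap *md,
				   struct perf_data_file *file,
				   bool overwrite, bool backward)
	{
		struct perf_mmap_read read;

		/* Recompute head/start/end for this pass... */
		if (perf_mmap__read_init(md, &read, overwrite, backward))
			return 0;	/* assume non-zero means nothing new */

		/* ...then stream the [start, end) window into perf.data. */
		return perf_mmap__read_to_file(&read, file);
	}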
> > int perf_evlist__open(struct perf_evlist *evlist);
> > void perf_evlist__close(struct perf_evlist *evlist);
> >
> > --
> > 2.5.5