Re: [PATCH v2] mm: emit tracepoint when RSS changes by threshold
Date: Wed Sep 04 2019 - 19:59:59 EST
On Wed, Sep 04, 2019 at 10:15:10AM -0700, 'Daniel Colascione' via kernel-team wrote:
> On Wed, Sep 4, 2019 at 7:59 AM Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> > On Tue, Sep 03, 2019 at 10:42:53PM -0700, Daniel Colascione wrote:
> > > On Tue, Sep 3, 2019 at 10:15 PM Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> > > >
> > > > On Tue, Sep 03, 2019 at 09:51:20PM -0700, Daniel Colascione wrote:
> > > > > On Tue, Sep 3, 2019 at 9:45 PM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > On Tue, Sep 3, 2019 at 1:09 PM Joel Fernandes (Google)
> > > > > > <joel@xxxxxxxxxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > Useful to track how RSS is changing per TGID to detect spikes in RSS and
> > > > > > > memory hogs. Several Android teams have been using this patch in various
> > > > > > > kernel trees for half a year now. Many reported to me it is really
> > > > > > > useful so I'm posting it upstream.
> > > > >
> > > > > It's also worth being able to turn off the per-task memory counter
> > > > > caching, otherwise you'll have two levels of batching before the
> > > > > counter gets updated, IIUC.
> > > >
> > > > I prefer to keep split RSS accounting turned on if it is available.
> > >
> > > Why? AFAIK, nobody's produced numbers showing that split accounting
> > > has a real benefit.
> > I am not too sure. Have you checked the original patches that added this
> > stuff though? It seems to me the main win would be on big systems that have
> > to pay for atomic updates.
> I looked into this issue the last time I mentioned split mm
> accounting; see the lore link at the end of this message. It's my sense
> that the original change was
> inadequately justified, and Michal Hocko seems to agree. I've tried
> disabling split rss accounting locally on a variety of systems ---
> Android, laptop, desktop --- and failed to notice any difference. It's
> possible that some difference appears at a scale beyond that to which
> I have access, but if the benefit of split rss accounting is limited
> to these cases, split rss accounting shouldn't be on by default, since
> it comes at a cost in consistency.
>  https://lore.kernel.org/linux-mm/20180227100234.GF15357@xxxxxxxxxxxxxx/
> > > > I think
> > > > discussing split RSS accounting is a bit out of scope of this patch as well.
> > >
> > > It's in-scope, because with split RSS accounting, allocated memory can
> > > stay accumulated in task structs for an indefinite time without being
> > > flushed to the mm. As a result, if you take the stream of virtual
> > > memory management system calls that a program makes on one hand, and VM
> > > counter values on the other, the two don't add up. For various kinds
> > > of robustness (trace self-checking, say) it's important that various
> > > sources of data add up.
> > >
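[Editorial aside: the numbers-don't-add-up effect Daniel describes can be modelled in userspace. This is a simplified sketch of the idea only, not kernel code; the class and method names here are illustrative, not kernel identifiers.]

```python
# Simplified model of split RSS accounting: each thread batches counter
# updates in a private per-task cache and only folds them into the
# shared mm-wide counter at flush time. Until then, readers of the
# mm counter see a stale value that disagrees with the stream of
# allocations the thread actually performed.

class MM:
    def __init__(self):
        self.rss_pages = 0          # the globally visible counter

class ThreadCache:
    def __init__(self, mm):
        self.mm = mm
        self.pending = 0            # per-thread batched delta

    def fault_in_page(self):
        self.pending += 1           # counted locally, not yet visible

    def flush(self):
        self.mm.rss_pages += self.pending
        self.pending = 0

mm = MM()
t = ThreadCache(mm)
for _ in range(10):
    t.fault_in_page()

stale = mm.rss_pages                # reader sees 0, though 10 pages exist
t.flush()
accurate = mm.rss_pages             # now 10

print(stale, accurate)              # -> 0 10
```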
> > > If we're adding a configuration knob that controls how often VM
> > > counters get reflected in system trace points, we should also have a
> > > knob to control delayed VM counter operations. The whole point is for
> > > users to be able to specify how precisely they want VM counter changes
> > > reported to analysis tools.
> > We're not adding more configuration knobs.
> This position doesn't seem to be the thread consensus yet.
> > > > Any improvements on that front can be a follow-up.
> > > >
> > > > Curious, has split RSS accounting shown you any issue with this patch?
> > >
> > > Split accounting has been a source of confusion for a while now: it
> > > causes that numbers-don't-add-up problem even when sampling from
> > > procfs instead of reading memory tracepoint data.
> > I think you can just disable split RSS accounting if it does not work well
> > for your configuration.
> There's no build-time configuration for split RSS accounting. It's not
> reasonable to expect people to carry patches just to get their memory
> usage numbers to add up.
Sure, maybe send a patch to add one in that case, or to delete the split
RSS accounting code entirely, as you suggest below.
> > Also AFAIU, every TASK_RSS_EVENTS_THRESH events the page fault code syncs
> > the counters. So the cached values do not lurk indefinitely.
> If a thread incurs TASK_RSS_EVENTS_THRESH - 1 page faults and then
> sleeps for a week, all memory counters observable from userspace will
> be wrong for a week. Multiply this potential error by the number of
> threads on a typical system and you have to conclude that split RSS
> accounting produces a lot of potential uncertainty. What are we
> getting in exchange for this uncertainty?
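[Editorial aside: Daniel's multiplication can be made concrete with a back-of-the-envelope calculation. The threshold value of 64 and the 4 KiB page size are assumptions based on common historical defaults, and the thread count is an arbitrary illustrative figure.]

```python
# Back-of-the-envelope worst-case staleness from split RSS accounting.
# Assumes TASK_RSS_EVENTS_THRESH == 64 (a historical default) and 4 KiB
# pages; each sleeping thread can hold up to THRESH - 1 unsynced events,
# each representing at least one page of drift.

THRESH = 64
PAGE_SIZE = 4096
threads = 1000                       # illustrative count for a busy system

max_drift_bytes = (THRESH - 1) * PAGE_SIZE * threads
print(max_drift_bytes // (1024 * 1024), "MiB")   # -> 246 MiB
```

So on this (assumed) configuration, userspace-visible counters could understate RSS by a couple hundred MiB in the worst case.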
> > The tracepoint's main intended
> > use is to detect spikes, which provides ample opportunity to sync the cache.
> The intended use is measuring memory levels of various processes over
> time, not just detecting "spikes". In order to make sense of the
> resulting data series, we need to be able to place error bars on it.
> The presence of split RSS accounting makes those error bars much
> larger than they have to be.
> > You could reduce TASK_RSS_EVENTS_THRESH in your kernel, or even just disable
> > split RSS accounting if that suits you better. That would solve all the
> > issues you raised, not just any potential ones that you raised here for this
> > tracepoint.
> I think we should just delete the split RSS accounting code unless
> someone can demonstrate that it's a measurable win on a typical
> system. The first priority of any system should be correctness.
> Consistency is a kind of correctness. Departures from correctness
> should come only from quantitatively justifiable need.
I think you make some good points about correctness, but I still don't see
how all of that relates to _this_ change. We do want the ability to catch
these RSS spikes in traces (as the patch description explains).
You seem to be arguing against the correctness of split RSS accounting. I
would suggest sending a patch to delete the split RSS accounting code and
taking these *very valid* arguments there. I am struggling to see the point
of derailing this _specific_ change over that.
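[Editorial aside: the patch's threshold-gated emission can be sketched in userspace: fire an event only when the counter crosses a threshold boundary, so a hot counter does not flood the trace. This models the concept only, not the kernel's actual trace_rss_stat implementation, and the threshold value is an assumption.]

```python
# Sketch of threshold-gated event emission: emit only when the counter
# crosses into a new multiple of the threshold, bounding trace volume.
# Purely illustrative; not kernel code.

THRESHOLD = 512 * 1024               # bytes between emitted events (assumed)

def make_emitter(threshold):
    last_bucket = 0
    events = []
    def update(new_value):
        nonlocal last_bucket
        bucket = new_value // threshold
        if bucket != last_bucket:    # crossed a threshold boundary
            events.append(new_value)
            last_bucket = bucket
    return update, events

update, events = make_emitter(THRESHOLD)
for rss in range(0, 2 * 1024 * 1024, 64 * 1024):   # RSS growing by 64 KiB
    update(rss)

print(len(events))                   # 32 updates collapse to 3 events
```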