Re: [tip:timers/core] [posix] 1535cb8028: stress-ng.epoll.ops_per_sec 36.2% regression

From: Eric Dumazet
Date: Thu Mar 27 2025 - 09:44:58 EST


On Thu, Mar 27, 2025 at 2:43 PM Mateusz Guzik <mjguzik@xxxxxxxxx> wrote:
>
> On Thu, Mar 27, 2025 at 2:17 PM Eric Dumazet <edumazet@xxxxxxxxxx> wrote:
> >
> > On Thu, Mar 27, 2025 at 2:14 PM Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
> > >
> > > On Thu, Mar 27 2025 at 12:37, Eric Dumazet wrote:
> > > > On Thu, Mar 27, 2025 at 11:50 AM Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
> > > >> Cute. How much bloat does it cause?
> > > >
> > > > This would expand 'struct ucounts' by 192 bytes on x86, if the patch
> > > > was actually working :)
> > > >
> > > > Not sure if it is feasible without something more intrusive like
> > >
> > > I'm not sure about the actual benefit. The problem is that parallel
> > > invocations which access the same ucount still will run into contention
> > > of the cache line they are modifying.
> > >
> > > For the signal case, all invocations increment rlimit[SIGPENDING], so
> > > putting that into a different cache line does not buy a lot.
> > >
> > > False sharing is when you have a lot of hot path readers on some other
> > > member of the data structure, which happens to share the cache line with
> > > the modified member. But that's not really the case here.
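(As an aside, a minimal sketch of that distinction in kernel terms -- the struct and field names below are made up purely for illustration, not taken from any patch:)

	/*
	 * False sharing: read-mostly fields share a cache line with a
	 * hot-written counter, so the readers eat the invalidations.
	 */
	struct example_shared {
		struct user_namespace *ns;	/* read in hot paths */
		kuid_t uid;			/* read in hot paths */
		atomic_long_t sigpending;	/* written in hot paths */
	};

	/*
	 * Padding the written field onto its own line helps those readers,
	 * but concurrent writers of sigpending itself still contend on it.
	 */
	struct example_padded {
		struct user_namespace *ns;
		kuid_t uid;
		atomic_long_t sigpending ____cacheline_aligned_in_smp;
	};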
> >
> > We have applications stressing all the counters at the same time (from
> > different threads).
> >
> > You seem to focus on posix timers only :)
>
> Well in that case:
> (gdb) ptype /o struct ucounts
> /* offset    |  size */  type = struct ucounts {
> /*      0    |    16 */    struct hlist_node {
> /*      0    |     8 */        struct hlist_node *next;
> /*      8    |     8 */        struct hlist_node **pprev;
>
>                                /* total size (bytes):  16 */
>                            } node;
> /*     16    |     8 */    struct user_namespace *ns;
> /*     24    |     4 */    kuid_t uid;
> /*     28    |     4 */    atomic_t count;
> /*     32    |    96 */    atomic_long_t ucount[12];
> /*    128    |   256 */    struct {
> /*      0    |     8 */        atomic_long_t val;
>                            } rlimit[4];
>
>                            /* total size (bytes): 384 */
>                          }
>
> This comes from kmalloc. Given its 384-byte size it is going to be
> backed by a 512-byte buffer -- that's a clear-cut waste of 128
> bytes.
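(A quick way to confirm the rounding -- a sketch only, assuming the object comes from the generic kmalloc-512 bucket:)

	struct ucounts *uc = kzalloc(sizeof(*uc), GFP_KERNEL);

	/*
	 * sizeof(*uc) is 384, but there is no kmalloc-384 size class by
	 * default, so the object is expected to come from kmalloc-512.
	 */
	if (uc)
		pr_info("ucounts: sizeof=%zu ksize=%zu\n", sizeof(*uc), ksize(uc));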
>
> It is plausible that creating a 384-byte slab for kmalloc would help
> save memory overall (not just for this specific struct), but that
> would require extensive testing on real workloads. I think Google is
> in a position to do it on their fleet and Android? FWIW Solaris and
> FreeBSD do have slabs of this size and it does save memory over there.
> I understand it is a tradeoff, hence I'm not claiming this needs to be
> added. I do claim it warrants evaluation, but I won't blame anyone
> for not wanting to dig into it.
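(One way to measure the saving for this struct alone, without touching the generic kmalloc size classes, would be a dedicated cache -- a sketch, the init function name is made up:)

	static struct kmem_cache *ucounts_cachep;

	static int __init ucounts_cache_init(void)
	{
		/* 384-byte objects: ~10 per 4K page vs 8 from kmalloc-512. */
		ucounts_cachep = kmem_cache_create("ucounts",
						   sizeof(struct ucounts), 0,
						   SLAB_HWCACHE_ALIGN, NULL);
		return ucounts_cachep ? 0 : -ENOMEM;
	}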
>
> The other option is to lean into it. In this case I point out that the
> refcount shares a cacheline with some of the limits and that it
> could be moved to a dedicated line while still keeping the struct <
> 512 bytes, thus not spending more memory on the allocation. The refcount
> changes less frequently than the limits themselves, so it's not a big
> deal, but it can be adjusted "for free" if you will.
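(Roughly like this, untested and only a sketch of the layout -- the rlimit alignment matches the padded layout quoted above, and the total comes to ~448 bytes, still inside the same 512-byte bucket:)

	struct ucounts {
		struct hlist_node node;
		struct user_namespace *ns;
		kuid_t uid;
		atomic_long_t ucount[UCOUNT_COUNTS];
		struct {
			atomic_long_t val;
		} ____cacheline_aligned_in_smp rlimit[UCOUNT_RLIMIT_COUNTS];
		/* reference count on its own line, away from the limits */
		atomic_t count ____cacheline_aligned_in_smp;
	};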
>
> While here I would probably rename the field. A reference
> counter named "count" in a struct named "ucounts", followed by a
> "ucount" array, is rather unpleasing. How about s/count/refcount?


How many 'struct ucounts' are in use in a typical host?

Compared to other costs, this seems pure noise to me.