Re: RFC: Dynamic group limit

Mattias.Gronlund (Mattias.Gronlund@sa.erisoft.se)
Wed, 28 Jul 1999 12:05:11 +0200


Frank van Maarseveen wrote:
>
> On Tue, Jul 27, 1999 at 10:36:49AM +0200, Mattias.Gronlund wrote:
> > The proposal is to add a sysctl parameter kernel/ngroups_max that
> > is used as the maximum number of groups that a user can be member
> > of at once. The ngroup variable and group array are removed from
> > the task_struct and a pointer to a new groups_struct is added.
> IMO it is better to make the limit really dynamic: not all
> processes need the same number of groups. The size of a groups
> array might vary, even during the lifetime of a process.

Yes, but at the moment the only way the group list gets changed is by
sys_setgroups(). That call supplies a complete new list of groups, so
I thought we might just allocate space for a new groups array and
switch over to that one instead.

This way we get a fast fork() (just an increment of a reference
counter) and a slow sys_setgroups() (it has to allocate memory). The
slow sys_setgroups() may hurt applications that change their groups a
lot. That could be compensated for by not always switching to a new
array: the switch is only needed if the array is shared or if the new
group list is too big to fit in the current one.
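
Roughly, something like this is what I have in mind. All the names
(groups_struct, grp_alloc(), grp_free()) are placeholders, and locking
against concurrent readers is glossed over:

	/* Placeholder names, not a final interface. */
	struct groups_struct {
		atomic_t	count;		/* tasks sharing this array */
		int		max;		/* slots allocated */
		int		ngroups;	/* slots in use */
		gid_t		groups[0];	/* 'max' entries follow */
	};

	/* fork(): cheap -- just share the parent's array. */
	static inline void copy_groups(struct task_struct *p)
	{
		atomic_inc(&current->groups->count);
		p->groups = current->groups;
	}

	/* sys_setgroups(): only allocate when we really have to. */
	static int set_group_list(gid_t *list, int n)
	{
		struct groups_struct *old = current->groups;
		struct groups_struct *grp;

		if (atomic_read(&old->count) == 1 && n <= old->max) {
			/* not shared and it fits: reuse in place */
			memcpy(old->groups, list, n * sizeof(gid_t));
			old->ngroups = n;
			return 0;
		}

		/* shared, or too small: switch to a new array */
		grp = grp_alloc(n);	/* kmalloc() + init, placeholder */
		if (!grp)
			return -ENOMEM;
		memcpy(grp->groups, list, n * sizeof(gid_t));
		grp->ngroups = n;
		current->groups = grp;
		if (atomic_dec_and_test(&old->count))
			grp_free(old);	/* last user gone, kfree() it */
		return 0;
	}

With the reuse-in-place case an application that calls setgroups()
over and over only pays for an allocation the first time.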

> That would eliminate the need for a sysctl.

Not really. The sysctl is a configurable maximum that a sysadmin may
still want to set for some reason. Since POSIX allows the limit to be
queried at run time (sysconf(_SC_NGROUPS_MAX)) rather than hard-coded,
and it isn't a problem to implement, I see no reason not to.
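
Concretely, I was not thinking of anything fancier than a plain
integer hooked into kern_table[] in kernel/sysctl.c. KERN_NGROUPS_MAX
would be a new ctl_name and ngroups_max the variable that
sys_setgroups() checks against; just a sketch:

	int ngroups_max = 32;	/* boot default, same as NGROUPS today */

	/* new entry in kern_table[], kernel/sysctl.c */
	{KERN_NGROUPS_MAX, "ngroups_max", &ngroups_max,
	 sizeof(int), 0644, NULL, &proc_dointvec},

	/* and in sys_setgroups(), instead of the NGROUPS check: */
	if (gidsetsize > ngroups_max)
		return -EINVAL;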

>
> At my company we've set NGROUPS_MAX to 256 and adapted the NFS
> client code to handle >16 groups. I guess that adding 256*sizeof(gid_t)
> to the struct task_struct (i.e. 0.5/1.0 Kb increase for every process) is
> not a big memory issue when looking at the average size of a
> process. However, this becomes interesting if you want to use
> a really *large* number of groups, say 100K or more. Now memory
> will be an issue and looking up a group id in the group list becomes
> an additional performance problem (see in_group_p() in kernel/sys.c:
> might want to use a hash table or a binary tree there).

I have looked at your NFS patch and thought a lot about it, but I see
that as one of the next steps after this one.

The problem with raising NGROUPS_MAX is that it is a constant that
gets compiled into binaries. If I buy a program with no source I might
run into trouble.

You are right, a new lookup algorithm is needed once we allow more
groups. I do not know enough about search algorithms to just step
forward and say which one to use; I might just try to implement it
with a hash...
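
As a simpler alternative to a hash, keeping the array sorted when
sys_setgroups() installs it would turn the lookup into a binary
search. Untested, and it assumes the groups_struct sketch above
(groups_search() is just a made-up helper name):

	/* O(log n) lookup over a sorted groups array. */
	static int groups_search(struct groups_struct *gp, gid_t grp)
	{
		int lo = 0, hi = gp->ngroups;

		while (lo < hi) {
			int mid = (lo + hi) / 2;

			if (gp->groups[mid] == grp)
				return 1;
			if (gp->groups[mid] < grp)
				lo = mid + 1;
			else
				hi = mid;
		}
		return 0;
	}

	int in_group_p(gid_t grp)
	{
		if (grp == current->fsgid)
			return 1;
		return groups_search(current->groups, grp);
	}

Whether a hash buys anything on top of that for really huge lists is
probably something to measure rather than guess.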

> So, for (let's say) <1000 groups a #define will do but otherwise a
> whole new implementation will be necessary.

I think the problem is that I cannot see 1000 becoming the default
either; it would still just be moving the limit a bit, and who knows
what interesting uses people will find for these groups in the
future?

/Mattias
