Re: [PATCH v6 0/8] ipc: Clamp *mni to the real IPCMNI limit & increase that limit

From: Eric W. Biederman
Date: Wed May 02 2018 - 11:11:09 EST


Waiman Long <longman@xxxxxxxxxx> writes:

> On 05/01/2018 10:18 PM, Eric W. Biederman wrote:
>>
>>> The sysctl parameters msgmni, shmmni and semmni have an inherent limit
>>> of IPCMNI (32k). However, users may not be aware of that because they
>>> can write a much higher value without getting any error or
>>> notification. Reading the parameters back will show the newly written
>>> values, which are not real.
>>>
>>> Enforcing the limit by failing the sysctl parameter write, however,
>>> may cause regressions if existing user setup scripts set those
>>> parameters above 32k, as those scripts would now fail.
>> I have a serious problem with this approach. Have you made any effort
>> to identify any code that sets these values above 32k? Have you looked
>> to see if these applications actually care if you return an error when
>> a value is set too large?
>
> It is not that an application cares whether an error is returned or
> not. Most applications don't care. It is that if an error is returned,
> the sysctl parameter isn't changed at all, instead of being set to a
> large value and then internally clamped to a smaller number that is
> still bigger than the original value. That is what can break an
> application, because the sysctl parameter may simply be too small for
> it.
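
(For concreteness, the kind of internal clamping being described looks
roughly like this in a sysctl handler. This is an illustrative sketch
only, not code from this series; the handler name and IPC_MNI_LIMIT are
stand-ins, the latter for the kernel's real IPCMNI constant of 32k.)

#include <linux/sysctl.h>

#define IPC_MNI_LIMIT	32768	/* stand-in for the kernel's IPCMNI constant */

static int proc_ipc_dointvec_clamped(struct ctl_table *table, int write,
				     void __user *buffer, size_t *lenp,
				     loff_t *ppos)
{
	/* Accept any write, then cap the stored value at the real limit. */
	int ret = proc_dointvec(table, write, buffer, lenp, ppos);

	if (!ret && write) {
		int *val = table->data;

		if (*val > IPC_MNI_LIMIT)
			*val = IPC_MNI_LIMIT;	/* capped, but the write still succeeds */
	}
	return ret;
}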

Agreed, that is a possibility. The other possibility is that, like your
customer, they will try to use all of the increased number of shared
memory segments, it won't work, they will fail, and the failure will be
mysterious and weird.
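
Roughly the failure such a user would see, sketched as a userspace probe
(illustrative only, not part of the series; it assumes shmmni was raised
well past 32k and that no other shm limit is hit first):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	enum { MAX_SEGS = 40000 };	/* more than the real 32k limit */
	int *ids = malloc(sizeof(*ids) * MAX_SEGS);
	int n = 0;

	/* Create tiny segments until the kernel says no. */
	while (n < MAX_SEGS) {
		int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

		if (id < 0) {
			printf("shmget failed after %d segments: %s\n",
			       n, strerror(errno));
			break;
		}
		ids[n++] = id;
	}

	/* Remove the segments so the box is not left starved of ids. */
	while (n-- > 0)
		shmctl(ids[n], IPC_RMID, NULL);
	free(ids);
	return 0;
}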

I took a quick look to see if cargo-culting bad settings was a common
thing, and all I could see were examples of people setting the limits
to numbers smaller than 4096.

>> Right now this seems like a lot of work to avoid breaking applications
>> and/or users that may or may not exist. If you can find something that
>> will care, sure. We need to avoid breaking userspace and causing
>> regressions. However, as this stands, it looks like you are making
>> maintenance of the kernel more difficult just to avoid having to look
>> and see if there are monsters under the bed.
>
> I admit that it can be hard to find applications that will explicitly
> need that, as we usually don't have access to the applications that
> the customers run. It is more of a correctness issue: the existing
> code is, in a sense, lying about what can actually be supported. I
> just want to make users more aware of what the real limits are.

You presume the kernel is lying to applications. I admit that the
kernel can lie to applications; I just don't see any evidence that it
is actually doing so. So far (to me) it looks like using a large number
of sysv shared memory segments is not particularly common.

So I would not be at all surprised if no regressions resulted from
simply denying any attempt to set the value past the maximum.
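
A minimal sketch of that deny-the-write approach (illustrative only;
the real table lives in ipc/ipc_sysctl.c and goes through its own
wrappers, and the names below are made up, but proc_dointvec_minmax
with extra1/extra2 bounds is the standard way an out-of-range sysctl
write gets rejected with -EINVAL):

#include <linux/sysctl.h>

static int ipc_mni_min = 1;
static int ipc_mni_max = 32768;		/* the real IPCMNI ceiling */

/* Hypothetical value backing the sysctl; the real one is per ipc namespace. */
static int shmmni_sketch = 4096;

static struct ctl_table ipc_sketch_table[] = {
	{
		.procname	= "shmmni",
		.data		= &shmmni_sketch,
		.maxlen		= sizeof(shmmni_sketch),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &ipc_mni_min,		/* reject writes below 1 */
		.extra2		= &ipc_mni_max,		/* reject writes above 32k */
	},
	{ }
};

With bounds like these in place, "sysctl -w kernel.shmmni=262144" fails
visibly instead of appearing to succeed, which is exactly the behaviour
change being debated above.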

Eric