Just as with msgrcv (along with the rest of sysvipc since a few years
ago), perform the security checks without holding the ipc object lock.

Thinking about it: isn't this wrong?
CPU1:
* msgrcv()
* ipcperms()
  <sleep>

CPU2:
* msgctl(), change permissions
** msgctl() returns, new permissions should now be in effect
* msgsnd(), send secret message
** msgsnd() returns, new message stored

CPU1: resumes, receives the secret message
Obviously, we could argue that the msgrcv() was already in progress and that the old permissions therefore still apply; but by that argument we would not need to recheck after sleeping at all.
This also reduces the hogging of the lock for the entire duration of a
sender, as we drop the lock upon every iteration -- and this is exactly
why we also check for racing with RMID in the first place.
Which hogging do you mean? The lock is dropped upon every iteration; the schedule() is in the middle.

With your patch, the lock is now dropped twice per iteration: once around schedule(), and once more just for ipcperms().
 	for (;;) {
 		struct msg_sender s;

 		err = -EACCES;
 		if (ipcperms(ns, &msq->q_perm, S_IWUGO))
-			goto out_unlock0;
+			goto out_unlock1;
+
+		ipc_lock_object(&msq->q_perm);

 		/* raced with RMID? */
 		if (!ipc_valid_object(&msq->q_perm)) {
@@ -681,6 +681,7 @@ long do_msgsnd(int msqid, long mtype, void __user *mtext,
 			goto out_unlock0;
 		}
+		ipc_unlock_object(&msq->q_perm);
 	}
This doubles the lock acquire/release cycles.