Re: [PATCH] timerfd: Protect the might cancel mechanism proper

From: Thomas Gleixner
Date: Fri Feb 10 2017 - 06:34:48 EST


Dmitry,

On Thu, 2 Feb 2017, Dmitry Vyukov wrote:

> On Thu, Feb 2, 2017 at 7:54 PM, Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
> > On Wed, 1 Feb 2017, Dmitry Vyukov wrote:
> >>
> >> Can't we still end up with an inconsistently set up timer?
> >> do_timerfd_settime() executes timerfd_setup_cancel() and timerfd_setup()
> >> as two separate, non-atomic actions. So if there are two concurrent
> >> timerfd_settime() calls, one that needs cancel and another that does
> >> not, can't we end up with an inconsistent setup? E.g. a timer is set up
> >> that needs cancel, but it won't be on the cancel_list. Or vice versa.
> >
> > Do we really care? If an application arms the timer with cancel in one
> > thread and the same timer without cancel in another thread, then it's
> > probably completely irrelevant whether the state pair timeout/cancel is
> > correct or not. That's clearly an application bug, and I don't want to
> > add more locking just to make something that is broken by definition
> > pseudo-'atomic'.
>
> I agree that the program is bogus, and we don't have to ensure any sane
> behavior for it. But I am concerned about potential kernel corruption
> due to this. For example, the kernel code might decide not to remove
> such a timer from the cancel list on destruction because, based on
> clockid/flags, it should not be on the cancel list, while the timer
> actually is there, and we would end up with a leak or a dangling
> pointer. I did not check whether this actually happens; such an
> inconsistent state just looks like a red flag to me.

That can't happen.

ctx->might_cancel and ctx->clist are always in sync with the new lock, and
that's the only interesting thing. On destruction we don't look at the
clockid or anything else; we only care about might_cancel.
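
Simplified, the serialization looks roughly like this (a sketch from
memory, not the exact patch; ctx->cancel_lock is the new per-context lock,
and the global cancel_list keeps its own cancel_lock as before):

static void __timerfd_remove_cancel(struct timerfd_ctx *ctx)
{
        /* Called with ctx->cancel_lock held */
        if (ctx->might_cancel) {
                ctx->might_cancel = false;
                spin_lock(&cancel_lock);
                list_del_rcu(&ctx->clist);
                spin_unlock(&cancel_lock);
        }
}

static void timerfd_setup_cancel(struct timerfd_ctx *ctx, int flags)
{
        spin_lock(&ctx->cancel_lock);
        if ((ctx->clockid == CLOCK_REALTIME ||
             ctx->clockid == CLOCK_REALTIME_ALARM) &&
            (flags & TFD_TIMER_ABSTIME) &&
            (flags & TFD_TIMER_CANCEL_ON_SET)) {
                if (!ctx->might_cancel) {
                        ctx->might_cancel = true;
                        spin_lock(&cancel_lock);
                        list_add_rcu(&ctx->clist, &cancel_list);
                        spin_unlock(&cancel_lock);
                }
        } else {
                __timerfd_remove_cancel(ctx);
        }
        spin_unlock(&ctx->cancel_lock);
}

might_cancel only flips with ctx->cancel_lock held, and every flip updates
clist in the same critical section, so the flag and the list membership
cannot diverge.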

What is not guaranteed to be in sync is the timer expiry time and the
cancel state, if two threads operate on the same timerfd in parallel.
That's what I do not care about at all.
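
Purely for illustration, such a bogus application would look like this
(a hypothetical user-space sketch, not from any real code base):

#include <pthread.h>
#include <sys/timerfd.h>
#include <time.h>

static int tfd;

static void *arm_with_cancel(void *unused)
{
        struct itimerspec its = { 0 };

        /* Absolute CLOCK_REALTIME expiry 5s from now, with cancel-on-set:
         * arms the timer and puts the ctx on the cancel list. */
        clock_gettime(CLOCK_REALTIME, &its.it_value);
        its.it_value.tv_sec += 5;
        timerfd_settime(tfd, TFD_TIMER_ABSTIME | TFD_TIMER_CANCEL_ON_SET,
                        &its, NULL);
        return NULL;
}

static void *arm_without_cancel(void *unused)
{
        struct itimerspec its = { .it_value.tv_sec = 10 };

        /* Plain relative arm: takes the ctx off the cancel list. */
        timerfd_settime(tfd, 0, &its, NULL);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        tfd = timerfd_create(CLOCK_REALTIME, 0);
        pthread_create(&t1, NULL, arm_with_cancel, NULL);
        pthread_create(&t2, NULL, arm_without_cancel, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Which expiry/cancel pairing is in effect now depends on the
         * interleave; the kernel only guarantees that might_cancel
         * matches the list membership. */
        return 0;
}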

Thanks,

tglx