Re: [PATCH RFC tip/core/rcu] SRCU rewrite
From: Paul E. McKenney
Date: Fri Nov 18 2016 - 08:35:35 EST
On Thu, Nov 17, 2016 at 11:53:04AM -0800, Lance Roy wrote:
> On Thu, 17 Nov 2016 21:58:34 +0800
> Lai Jiangshan <jiangshanlai@xxxxxxxxx> wrote:
> > From the changelog, it sounds like "ULONG_MAX - NR_CPUS" is the limit
> > of the implementations (the old one or this one). But actually the real
> > maximum number of active readers is much smaller, so I think ULONG_MAX/4
> > could be used here instead and that part of the changelog could be removed.
> In the old version, there are two separate limits. The first is that there
> can be no more than ULONG_MAX nested or parallel readers, as otherwise ->c[]
> would overflow.
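>
> For reference, the per-CPU counters in the old version are laid out roughly
> like this (a sketch from memory; treat the field names as approximate):
>
> 	struct srcu_struct_array {
> 		unsigned long c[2];	/* reader counts, one per index */
> 		unsigned long seq[2];	/* sequence counts, to catch racing readers */
> 	};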
>
> The other limit is to prevent ->seq[] from overflowing during
> srcu_readers_active_idx_check(). For this to happen, there must be ULONG_MAX+1
> readers that loaded ->completed before srcu_flip() ran and then incremented
> ->seq[]. The ->seq[] array is supposed to prevent
> srcu_readers_active_idx_check() from completing successfully if any such
> readers increment ->seq[], because otherwise they could decrement ->c[] while
> it is being read, which could cause it to incorrectly report that there are no
> active readers. If ->seq[] overflows, then nothing (except the sheer
> improbability of it) prevents this from happening.
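>
> Concretely, the old check is structured roughly like this (a sketch, not the
> exact source):
>
> 	static bool srcu_readers_active_idx_check(struct srcu_struct *sp, int idx)
> 	{
> 		unsigned long seq;
>
> 		seq = srcu_readers_seq_idx(sp, idx);	/* first ->seq[] load */
> 		smp_mb();
> 		if (srcu_readers_active_idx(sp, idx) != 0)	/* sum ->c[] */
> 			return false;
> 		smp_mb();
> 		/* A wrapped ->seq[] would defeat this final comparison. */
> 		return srcu_readers_seq_idx(sp, idx) == seq;	/* second load */
> 	}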
>
> I used to think (because of the previous comment) that there could be at most
> one such increment of ->seq[] per CPU, as the readers would have to be using
> the old value of ->completed and preemption would be disabled. This is not the
> case, because there are no barriers around srcu_flip(), so the processor is not
> required to increment ->completed before reading ->seq[] the first time, nor is
> it required to finish reading ->seq[] the second time before incrementing
> ->completed. This means that the following code could cause ->seq[] to
> increment an arbitrarily large number of times between the two ->seq[] loads in
> srcu_readers_active_idx_check():
> 	while (true) {
> 		int idx = srcu_read_lock(sp);
>
> 		srcu_read_unlock(sp, idx);
> 	}
I also initially thought that there would need to be a memory barrier
immediately after srcu_flip(). But after further thought, I don't
believe that this is the case.
The key point is that updaters do the flip, sum the unlock counters,
do a full memory barrier, then sum the lock counters.
We therefore know that if an updater sees an unlock, it is guaranteed
to see the corresponding lock, which prevents negative sums. However,
it is true that the flip and the unlock reads can be interchanged.
This can result in failing to see a count of zero, but it cannot result
in spuriously seeing a count of zero.
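To make that ordering concrete, the rewritten check is structured roughly
as follows (a sketch with approximate helper names, not the exact patch):

	static bool srcu_readers_active_idx_check(struct srcu_struct *sp, int idx)
	{
		unsigned long unlocks;

		unlocks = srcu_readers_unlock_idx(sp, idx); /* sum unlock counts */
		smp_mb(); /* Pairs with the barrier in __srcu_read_lock(). */
		return srcu_readers_lock_idx(sp, idx) == unlocks; /* sum lock counts */
	}

Any unlock gathered by the first sum implies that the corresponding lock
is visible to the second sum, which is what rules out negative sums.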
More to the point, if an updater fails to see a lock, then the next time
that CPU/task does an srcu_read_lock(), that CPU/task is guaranteed
to see the new value of the index. This limits the number of CPUs/tasks
that can be using the old value of the index. Because preemption
is disabled across the fetch of the index and the increment of the lock
count, that number is at most NR_CPUS-1, since the updater has to be
running on one of the CPUs (as Mathieu pointed out earlier in this thread).
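For example, the reader-side fastpath looks roughly like this (again a
sketch; the field names are approximate):

	int __srcu_read_lock(struct srcu_struct *sp)
	{
		int idx;

		preempt_disable();
		idx = READ_ONCE(sp->completed) & 0x1;	/* fetch current index */
		__this_cpu_inc(sp->per_cpu_ref->lock_count[idx]);
		smp_mb(); /* Order the increment before the critical section. */
		preempt_enable();
		return idx;
	}

Because nothing can intervene between the fetch of ->completed and the
increment of the lock count, each CPU can contribute at most one reader
still using the old index when the updater scans the counters.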
Or am I missing something?
Thanx, Paul