Re: [patch] Real-Time Preemption, -RT-2.6.12-rc1-V0.7.41-07
From: Ingo Molnar
Date: Wed Mar 23 2005 - 02:17:49 EST
* Ingo Molnar <mingo@xxxxxxx> wrote:
> That callback will be queued on CPU#2 - while the task still keeps
> current->rcu_data of CPU#1. It also means that CPU#2's read counter
> did _not_ get increased - and a too-short grace period may occur.
>
> it seems to me that the only safe method is to pick an 'RCU CPU' when
> first entering the read section, and then sticking to it, no matter
> where the task gets migrated to. Or to 'migrate' the +1 read count
> from one CPU to the other, within the scheduler.
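(to make the first variant concrete, here is a minimal sketch - the names
->rcu_read_cpu, ->rcu_read_lock_nesting and rcu_read_count are made up for
illustration, not the actual -RT code. The per-CPU counter would have to be
atomic, because the unlock may run on a different CPU than the lock did:)

	void rcu_read_lock(void)
	{
		if (current->rcu_read_lock_nesting++ == 0) {
			/* remember the CPU we entered the read section on ... */
			current->rcu_read_cpu = smp_processor_id();
			atomic_inc(&per_cpu(rcu_read_count,
					    current->rcu_read_cpu));
		}
	}

	void rcu_read_unlock(void)
	{
		/* ... and decrement that same CPU's counter at unlock,
		   no matter where we have been migrated to meanwhile: */
		if (--current->rcu_read_lock_nesting == 0)
			atomic_dec(&per_cpu(rcu_read_count,
					    current->rcu_read_cpu));
	}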
i think the 'migrate read-count' method is not adequate either, because
all callbacks queued within an RCU read section must not be invoked
until the read lock has been dropped - while with the migration method
CPU#1 would be free to process callbacks queued by the RCU read section
that is still active on CPU#2.
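(for reference, the migration variant would be something like the sketch
below, called from the scheduler's migration path - again with made-up
names. The flaw is visible right in the code: the count moves, but the
callbacks this task queued via call_rcu() stay behind on the source
CPU's list:)

	static void rcu_migrate_read_count(struct task_struct *p,
					   int src_cpu, int dst_cpu)
	{
		if (p->rcu_read_lock_nesting) {
			atomic_sub(p->rcu_read_lock_nesting,
				   &per_cpu(rcu_read_count, src_cpu));
			atomic_add(p->rcu_read_lock_nesting,
				   &per_cpu(rcu_read_count, dst_cpu));
			/*
			 * callbacks queued by p remain on src_cpu's list,
			 * so once src_cpu's counter reaches zero they may
			 * be processed - before p does rcu_read_unlock()
			 * on dst_cpu.
			 */
		}
	}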
i'm wondering how much of a problem this is, though. Can there be stale
pointers at that point? Yes, in theory - code like:
	rcu_read_lock();
	call_rcu(&dentry->d_rcu, d_callback);	/* queue the dentry for freeing */
	func(dentry->whatever);			/* ... and still dereference it */
	rcu_read_unlock();
would be unsafe, because the pointer is still accessed within the RCU
read section: if we get migrated from CPU#1 to CPU#2 after the
call_rcu() but before the dentry->whatever dereference, the callback
may be processed early by CPU#1, making the dentry->whatever read
operation unsafe.
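(in theory such code could be fixed by finishing with the object before
queueing it for freeing - e.g. a reordered variant of the snippet above:)

	rcu_read_lock();
	tmp = dentry->whatever;			/* last access to the dentry */
	call_rcu(&dentry->d_rcu, d_callback);	/* only now queue it for freeing */
	func(tmp);
	rcu_read_unlock();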
the question is, does this occur in practice? Does existing RCU-using
code use pointers it has queued for freeing, relying on the fact that
the callback won't be processed until we drop the RCU read lock?
Ingo