On Thu, 17 Mar 2016, Jens Axboe wrote:
> On 03/17/2016 01:20 PM, Thomas Gleixner wrote:
> > > This might be better, we need to start at -1 to not miss the first one...
> > > Still untested.
> > >
> > > +static inline struct blk_mq_ctx *next_ctx(struct request_queue *q, int *i)
> > > +{
> > > +	do {
> > > +		(*i)++;
> > > +		if (*i < q->nr_queues) {
> > > +			if (cpu_possible(*i))
> > > +				return per_cpu_ptr(q->queue_ctx, *i);
> > > +			continue;
> > > +		}
> > > +		break;
> > > +	} while (1);
> > > +
> > > +	return NULL;
> > > +}
> > > +
> > > +#define queue_for_each_ctx(q, ctx, i)				\
> > > +	for ((i) = -1; (ctx = next_ctx((q), &(i))) != NULL;)
> > > +
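For concreteness, a call site of this macro would look something like the
sketch below (example_walk_ctxs is a made-up name for illustration, not part
of the patch):

	static void example_walk_ctxs(struct request_queue *q)
	{
		struct blk_mq_ctx *ctx;
		int i;

		/* next_ctx() starts from -1 and skips non-possible CPUs */
		queue_for_each_ctx(q, ctx, i) {
			/* ctx is the software queue of possible CPU i */
		}
	}
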
> > What's wrong with
> >
> > 	for_each_possible_cpu(cpu) {
> > 		ctx = per_cpu_ptr(q->queue_ctx, cpu);
> > 		....
> > 	}
> >
> > instead of hiding it behind an incomprehensible macro mess?
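Open-coded at the same hypothetical call site, that reads (a sketch; it
assumes q->queue_ctx has a slot for every possible CPU, which is exactly what
a per-cpu allocation provides):

	static void example_walk_ctxs(struct request_queue *q)
	{
		struct blk_mq_ctx *ctx;
		int cpu;

		for_each_possible_cpu(cpu) {
			ctx = per_cpu_ptr(q->queue_ctx, cpu);
			/* operate on ctx */
		}
	}

The only behavioral difference from the macro is the missing
*i < q->nr_queues bound, which matters only if nr_queues could be smaller
than the highest possible CPU number.
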
> We might not have mapped all of them.
blk_mq_init_cpu_queues() tells a different story, and q->queue_ctx is a
per_cpu allocation.
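That function already walks every possible CPU when it sets up the software
queues; condensed, it is roughly (a sketch from memory of the code of that
era, abridged and possibly off in details):

	static void blk_mq_init_cpu_queues(struct request_queue *q,
					   unsigned int nr_hw_queues)
	{
		unsigned int i;

		for_each_possible_cpu(i) {
			struct blk_mq_ctx *__ctx = per_cpu_ptr(q->queue_ctx, i);

			memset(__ctx, 0, sizeof(*__ctx));
			__ctx->cpu = i;
			spin_lock_init(&__ctx->lock);
			INIT_LIST_HEAD(&__ctx->rq_list);
			__ctx->queue = q;
			/* offline CPUs get mapped to a hctx later */
		}
	}

Since q->queue_ctx comes from a per-cpu allocation, per_cpu_ptr() yields
valid storage for every possible CPU, online or not, mapped or not.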