[PATCH v2] block: document blk-plug

From: Suresh Jayaraman
Date: Mon Aug 29 2011 - 07:29:31 EST


Thus spake Andrew Morton:

"And I have the usual maintainability whine. If someone comes up to
vmscan.c and sees it calling blk_start_plug(), how are they supposed to
work out why that call is there? They go look at the blk_start_plug()
definition and it is undocumented. I think we can do better than this?"

Adapted from the LWN article - http://lwn.net/Articles/438256/ by Jens
Axboe and from an earlier attempt by Shaohua Li to document blk-plug.

Changes since -v1:

* explain how blk_plug helps with potential deadlock avoidance.
* explain why we need blk-plug.
* add a note that cb_list is required by md.

Signed-off-by: Suresh Jayaraman <sjayaraman@xxxxxxx>
---
block/blk-core.c | 14 ++++++++++++++
include/linux/blkdev.h | 16 +++++++++++-----
2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 90e1ffd..ea360c8 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2626,6 +2626,20 @@ EXPORT_SYMBOL(kblockd_schedule_delayed_work);

#define PLUG_MAGIC 0x91827364

+/**
+ * blk_start_plug - initialize blk_plug and track it inside the task_struct
+ * @plug: The &struct blk_plug that needs to be initialized
+ *
+ * Description:
+ * Tracking blk_plug inside the task_struct will help with auto-flushing the
+ * pending I/O should the task end up blocking between blk_start_plug() and
+ * blk_finish_plug(). This is important from a performance perspective, but
+ * also ensures that we don't deadlock. For instance, if the task is blocking
+ * for a memory allocation, memory reclaim could end up wanting to free a
+ * page belonging to a request that is currently residing in our private
+ * plug. By flushing the pending I/O when the process goes to sleep, we avoid
+ * this kind of deadlock.
+ */
void blk_start_plug(struct blk_plug *plug)
{
struct task_struct *tsk = current;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 84b15d5..f45d783 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -863,17 +863,23 @@ struct request_queue *blk_alloc_queue_node(gfp_t, int);
extern void blk_put_queue(struct request_queue *);

/*
+ * blk_plug allows building up a queue of related requests by holding the I/O
+ * fragments for a short period. This allows merging of sequential requests
+ * into a single larger request. As the requests are moved from the per-task
+ * list to the device's request_queue in a batch, scalability improves
+ * because contention on the request_queue lock is reduced.
+ *
* Note: Code in between changing the blk_plug list/cb_list or element of such
* lists is preemptable, but such code can't do sleep (or be very careful),
* otherwise data is corrupted. For details, please check schedule() where
* blk_schedule_flush_plug() is called.
*/
struct blk_plug {
- unsigned long magic;
- struct list_head list;
- struct list_head cb_list;
- unsigned int should_sort;
- unsigned int count;
+	unsigned long magic; /* detect uninitialized use-cases */
+	struct list_head list; /* requests */
+	struct list_head cb_list; /* md requires an unplug callback */
+	unsigned int should_sort; /* list to be sorted before flushing? */
+	unsigned int count; /* request count to avoid list getting too big */
};
#define BLK_MAX_REQUEST_COUNT 16
--