On 05 May 2016, at 11:21, Matias Bjørling <mb@xxxxxxxxxxx> wrote:
On 05/04/2016 05:31 PM, Javier González wrote:
Within a target, I/O requests stem from different paths, which may differ
in the data structures allocated, the calling context, etc. This affects
how the request is treated and how memory is freed once the bio
completes.
Add two different types of I/Os: (i) NVM_IOTYPE_SYNC, which indicates
that the I/O is synchronous; and (ii) NVM_IOTYPE_CLOSE_BLK, which
indicates that the I/O closes the block to which all the ppas on the
request belong.
Signed-off-by: Javier González <javier@xxxxxxxxxxxx>
---
include/linux/lightnvm.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 29a6890..6c02209 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -11,6 +11,8 @@ enum {
NVM_IOTYPE_NONE = 0,
NVM_IOTYPE_GC = 1,
+ NVM_IOTYPE_SYNC = 2,
+ NVM_IOTYPE_CLOSE_BLK = 4,
};
#define NVM_BLK_BITS (16)
The sync should not be necessary when the read path is implemented
using bio_clone. Similarly for NVM_IOTYPE_CLOSE_BLK. The write
completion can be handled in the bio completion path.
We need to know where the request comes from; we cannot infer it from
the bio alone, because we allocate different structures depending on the
type of bio we send. It is not only a matter of which bio->end_io
function runs, but of which memory needs to be released. Sync is
necessary for the read path when we have a partial bio (data both in the
write buffer and on disk) that we need to fill up, and also for GC; in
these cases, the bio must be freed differently. The close case is
similar: we do not free memory on the end_io path, but in the caller.
You can see how these flags are used in pblk. Maybe there is a better
way of doing it that I could not see...
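To make the ownership question concrete, here is a minimal user-space sketch of the dispatch described above. The struct, function names, and "freed" bookkeeping fields are all hypothetical (this is not pblk code); the point is only that the completion path consults the iotype flags to decide whether it or the caller releases the request memory:

```c
#include <assert.h>

/* Flag values mirror the patch: bit flags, so a request can carry
 * more than one iotype at once. */
enum {
	NVM_IOTYPE_NONE      = 0,
	NVM_IOTYPE_GC        = 1,
	NVM_IOTYPE_SYNC      = 2,
	NVM_IOTYPE_CLOSE_BLK = 4,
};

/* Hypothetical per-request context; names are illustrative only. */
struct demo_rq {
	int flags;
	int freed_by_endio;   /* completion path released the memory */
	int freed_by_caller;  /* submitting path released the memory */
};

/* For SYNC and CLOSE_BLK requests the completion path must NOT free
 * the context: the caller still needs it (e.g. to fill up a partial
 * read, or to finish closing the block). Only the plain async path
 * frees in end_io. */
static void demo_end_io(struct demo_rq *rq)
{
	if (rq->flags & (NVM_IOTYPE_SYNC | NVM_IOTYPE_CLOSE_BLK))
		return;                 /* caller owns the memory */
	rq->freed_by_endio = 1;         /* async path: free here  */
}

static void demo_caller_cleanup(struct demo_rq *rq)
{
	if (rq->flags & (NVM_IOTYPE_SYNC | NVM_IOTYPE_CLOSE_BLK))
		rq->freed_by_caller = 1;    /* sync/close: free here */
}
```

Either way, exactly one side frees the request, and the decision is made from the flags rather than from the bio itself, which matches the argument above.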
Javier