Re: dm-bufio: adjust the reserved buffer for dm-verity-target.

From: Xiao, Jin
Date: Wed Aug 15 2018 - 21:32:49 EST

On 8/15/2018 4:32 AM, Mike Snitzer wrote:
On Wed, Aug 08 2018 at 2:40am -0400,
xiao jin <jin.xiao@xxxxxxxxx> wrote:

We hit the BUG() report at include/linux/scatterlist.h:144!
The call trace is as below:
=> verity_work
=> verity_hash_for_block
=> verity_verify_level
=> verity_hash
=> verity_hash_update
=> sg_init_one
=> sg_set_buf
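For reference, the check that fires is the CONFIG_DEBUG_SG sanity check in sg_set_buf() (include/linux/scatterlist.h). virt_addr_valid() fails for vmalloc'd addresses because they are not in the kernel's linear mapping:

```c
static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
			      unsigned int buflen)
{
#ifdef CONFIG_DEBUG_SG
	BUG_ON(!virt_addr_valid(buf));
#endif
	sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf));
}
```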

More debugging shows the root cause. When creating the bufio client,
dm-bufio uses __vmalloc() to allocate the buffer data for the reserved
dm_buffer. A buffer allocated by __vmalloc() is invalid according
to __virt_addr_valid().

Mostly the reserved dm_buffer is not touched. But occasionally the
allocation of dm_buffer data in __alloc_buffer_wait_no_callback()
fails, and the reserved dm_buffer has to be taken for use. It then
reports the BUG() because virt_addr_valid() detects that the buffer
data address is invalid.
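The fallback in question lives in __alloc_buffer_wait_no_callback() (drivers/md/dm-bufio.c, before this patch): when a fresh allocation fails, it hands out the head of the reserved list, regardless of how that buffer's data was allocated:

```c
		if (!list_empty(&c->reserved_buffers)) {
			b = list_entry(c->reserved_buffers.next,
				       struct dm_buffer, lru_list);
			list_del(&b->lru_list);
			c->need_reserved_buffers++;

			return b;
		}
```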

The patch adjusts the reserved buffers for dm-verity-target. We
allocate two dm_buffers onto the reserved buffers list when creating
the buffer interface. The first dm_buffer on the reserved buffer list
is allocated by __vmalloc() and is never used after that. The second
dm_buffer on the reserved buffer list is allocated by
__get_free_pages() and can be consumed after that.

Signed-off-by: xiao jin <jin.xiao@xxxxxxxxx>
drivers/md/dm-bufio.c | 4 ++--
drivers/md/dm-verity-target.c | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index dc385b7..3b7ca5e 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -841,7 +841,7 @@ static struct dm_buffer *__alloc_buffer_wait_no_callback(struct dm_bufio_client
 			tried_noio_alloc = true;
-		if (!list_empty(&c->reserved_buffers)) {
+		if (!c->need_reserved_buffers) {
 			b = list_entry(c->reserved_buffers.next,
 				       struct dm_buffer, lru_list);
@@ -1701,7 +1701,7 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
 		goto bad;
-	while (c->need_reserved_buffers) {
+	if (list_empty(&c->reserved_buffers)) {
 		struct dm_buffer *b = alloc_buffer(c, GFP_KERNEL);
 		if (!b) {
Point was to allocate N buffers (as accounted in
c->need_reserved_buffers). This change just allocates a single one.

Your header isn't clear on this at all.

Hi Mike,

Currently alloc_buffer(), when creating the client, uses __vmalloc() to
get the buffer data for c->reserved_buffers. If a buffer from
c->reserved_buffers is taken for use in the failure case of buffer
allocation in __alloc_buffer_wait_no_callback(), and CONFIG_DEBUG_SG is
enabled, we hit the BUG() report. That is the problem I found in
reality.

I have some thoughts on how to solve this issue. I would keep the
initial buffer, with data from __vmalloc(), in c->reserved_buffers,
but a reserved buffer whose data comes from __vmalloc() must never be
handed out for use. We can then allocate more buffers with a data mode
of DATA_MODE_SLAB or DATA_MODE_GET_FREE_PAGES for c->reserved_buffers.
Those reserved buffers can safely be used in the failure case of buffer
allocation in __alloc_buffer_wait_no_callback().
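For context, here is a simplified paraphrase (not verbatim) of how alloc_buffer_data() in drivers/md/dm-bufio.c picks the data mode; the key point is that large blocks, or allocations made without __GFP_NORETRY (such as GFP_KERNEL at client-create time), fall back to __vmalloc():

```c
/* Simplified paraphrase of alloc_buffer_data() in drivers/md/dm-bufio.c
 * (names from the real code, control flow condensed). */
static void *alloc_buffer_data(struct dm_bufio_client *c, gfp_t gfp_mask,
			       unsigned char *data_mode)
{
	if (/* block is small enough for a slab cache */) {
		*data_mode = DATA_MODE_SLAB;
		return kmem_cache_alloc(/* ... */);
	}
	if (/* block fits __get_free_pages() && (gfp_mask & __GFP_NORETRY) */) {
		*data_mode = DATA_MODE_GET_FREE_PAGES;
		return (void *)__get_free_pages(/* ... */);
	}
	/* Otherwise: vmalloc'd data, whose address fails virt_addr_valid(). */
	*data_mode = DATA_MODE_VMALLOC;
	return __vmalloc(/* ... */);
}
```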

I tested the code on my device and never saw the BUG() report again. Feel free to correct me.



diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
index 12decdbd7..40c66fc 100644
--- a/drivers/md/dm-verity-target.c
+++ b/drivers/md/dm-verity-target.c
@@ -1107,7 +1107,7 @@ static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
 	v->hash_blocks = hash_position;
 	v->bufio = dm_bufio_client_create(v->hash_dev->bdev,
-		1 << v->hash_dev_block_bits, 1, sizeof(struct buffer_aux),
+		1 << v->hash_dev_block_bits, 2, sizeof(struct buffer_aux),
 		dm_bufio_alloc_callback, NULL);
 	if (IS_ERR(v->bufio)) {
 		ti->error = "Cannot initialize dm-bufio";

dm-devel mailing list
It isn't at all clear from my initial review that what you're doing
makes any sense.

Seems like you're just papering over bufio's use of !__virt_addr_valid()
memory in unintuitive ways.

Mikulas, can you see a better way forward?