Hi Xiao,

On 24/12/20 11:18 pm, Xiao Ni wrote:
> Hi Matthew
>
> The root cause is found. Now we use a similar way to raid0 to handle
> the discard request for raid10. Because the discard region is very big,
> we can calculate the start/end address for each disk and then submit
> the discard request to each disk. But raid10 has copies. For the near
> layout, if the discard request doesn't align with the chunk size, we
> calculate a start_disk_offset. Currently we only use start_disk_offset
> for the first disk, but it should be used for the near-copy disks too.

Thanks for finding the root cause and making a patch that corrects the offset
addresses for multiple disks!

> [  789.709501] discard bio start : 70968, size : 191176
> [  789.709507] first stripe index 69, start disk index 0, start disk offset 70968
> [  789.709509] last stripe index 256, end disk index 0, end disk offset 262144
> [  789.709511] disk 0, dev start : 70968, dev end : 262144
> [  789.709515] disk 1, dev start : 70656, dev end : 262144
>
> For example, this test case has 2 near copies. The start_disk_offset
> for the first disk is 70968. The same offset should be used for the
> second disk, but instead the start address of the chunk is used, so a
> larger region is discarded. The patch in the attachment fixes this
> problem by splitting off the region that doesn't align with the chunk
> size.

Just wondering, what is the current status of the patchset? Is there anything
that I can do to help?

> There is another problem. The stripe size should be calculated
> differently for the near layout and the far layout.

I can help review and test the patches anytime. Do you need help with making
a patch to calculate the stripe size for the near and far layouts?
Let me know how you are going with this patchset, and if there is anything I
can do for you.
Thanks,
Matthew