[PATCH v2 0/4] md/raid10: reduce lock contention for io

From: Yu Kuai
Date: Tue Sep 13 2022 - 21:38:29 EST


From: Yu Kuai <yukuai3@xxxxxxxxxx>

Changes in v2:
- add patch 1, as suggested by Logan Gunthorpe.
- in patch 4, use write_seqlock/write_sequnlock in wait_event instead of
spin_lock/unlock, which would confuse lockdep (see the sketch below).
- in patch 4, use read_seqbegin() to get the sequence count instead of the
unusual usage of raw_read_seqcount().
- the aarch64 test results differ from v1 because they were retested in a
different environment.
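
For reference, a minimal sketch of the seqlock read/write pattern that
patch 4 converts resync_lock to. This is not the actual patch: demo_lock
and demo_barrier are made-up names, only the <linux/seqlock.h> API is real.

#include <linux/seqlock.h>

static DEFINE_SEQLOCK(demo_lock);
static int demo_barrier;

static void demo_raise_barrier(void)
{
	/* Writer side: taking the seqlock bumps the sequence count,
	 * so concurrent readers know they must retry. */
	write_seqlock_irq(&demo_lock);
	demo_barrier++;
	write_sequnlock_irq(&demo_lock);
}

static bool demo_barrier_is_zero(void)
{
	unsigned int seq;
	bool ret;

	/* Reader side, lockless: read_seqbegin() returns the current
	 * sequence count and read_seqretry() detects a racing writer. */
	do {
		seq = read_seqbegin(&demo_lock);
		ret = (demo_barrier == 0);
	} while (read_seqretry(&demo_lock, seq));

	return ret;
}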

Test environment:

Architecture:
aarch64: Huawei KUNPENG 920
x86: Intel(R) Xeon(R) Platinum 8380

Raid10 initialize:
mdadm --create /dev/md0 --level 10 --bitmap none --raid-devices 4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
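
(Before benchmarking it is usually best to let the initial resync finish;
progress can be watched with "cat /proc/mdstat".)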

Test cmd:
(taskset -c 0-15) fio -name=0 -ioengine=libaio -direct=1 -group_reporting=1 -randseed=2022 -rwmixread=70 -refill_buffers -filename=/dev/md0 -numjobs=16 -runtime=60s -bs=4k -iodepth=256 -rw=randread

Test result:

aarch64:
before this patchset: 3.2 GiB/s
bind node before this patchset: 6.9 GiB/s
after this patchset: 7.9 GiB/s
bind node after this patchset: 8.0 GiB/s

x86 (bind node not tested yet):
before this patchset: 7.0 GiB/s
after this patchset: 9.3 GiB/s

Please note that on the aarch64 test machine, memory access latency across
nodes is much worse than on the local node, which is why bandwidth is much
better when the test is bound to one node.
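
For completeness, "bind node" above means pinning fio to the CPUs (and
memory) of a single NUMA node. The exact binding command was not recorded
here; a typical form, assuming numactl is available, would be:

numactl --cpunodebind=0 --membind=0 fio <same arguments as above>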

Yu Kuai (4):
md/raid10: cleanup wait_barrier()
md/raid10: prevent unnecessary calls to wake_up() in fast path
md/raid10: fix improper BUG_ON() in raise_barrier()
md/raid10: convert resync_lock to use seqlock
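
As background for patch 2, the usual way to avoid needless wake_up() calls
in a fast path is to check for sleepers first. A hedged sketch (demo_wait
is a made-up name, not necessarily what the patch itself does):

#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(demo_wait);

static void demo_wake_up(void)
{
	/* wq_has_sleeper() includes the memory barrier that pairs with
	 * the waiter side, so skipping wake_up() here is safe when the
	 * queue is empty. */
	if (wq_has_sleeper(&demo_wait))
		wake_up(&demo_wait);
}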

drivers/md/raid10.c | 165 +++++++++++++++++++++++++++-----------------
drivers/md/raid10.h | 2 +-
2 files changed, 104 insertions(+), 63 deletions(-)

--
2.31.1