[RFC 0/1] Optimize ext4 DAX overwrites

From: Ritesh Harjani
Date: Thu Aug 20 2020 - 07:37:48 EST


In case of DAX writes, we currently start a journal (jbd2) transaction
irrespective of whether it is an overwrite or not. For an overwrite we don't
need to start a jbd2 txn at all, since the blocks are already allocated.
So this patch optimizes away the txn start in case of DAX overwrites.
This can significantly boost performance for multi-threaded random writes
(overwrites). The fio script used to collect the perf numbers is included
below, and a rough sketch of the idea follows this paragraph.
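
To make the idea concrete, here is a minimal sketch (not the actual patch:
the iomap plumbing, locking and error paths are elided, and
ext4_dax_map_sketch() is just an illustrative name). The trick is to probe
the extent tree with a lookup-only ext4_map_blocks() call and start a jbd2
handle only when the written range still needs block allocation:

/*
 * Illustrative sketch only. With flags == 0, ext4_map_blocks() performs
 * a pure lookup and never allocates, so a fully mapped, written range
 * can be served without touching the journal.
 */
static int ext4_dax_map_sketch(struct inode *inode, loff_t offset,
                               loff_t length)
{
        unsigned int blkbits = inode->i_blkbits;
        struct ext4_map_blocks map;
        unsigned int nr;
        handle_t *handle;
        int ret, credits;

        map.m_lblk = offset >> blkbits;
        map.m_len = ((offset + length - 1) >> blkbits) - map.m_lblk + 1;
        nr = map.m_len;

        ret = ext4_map_blocks(NULL, inode, &map, 0);
        if (ret < 0)
                return ret;

        if (ret == nr && (map.m_flags & EXT4_MAP_MAPPED) &&
            !(map.m_flags & EXT4_MAP_UNWRITTEN))
                return 0;       /* pure overwrite: no jbd2 txn needed */

        /* Not a pure overwrite: allocate (zeroed) blocks under a txn. */
        map.m_len = nr;
        credits = ext4_chunk_trans_blocks(inode, nr);
        handle = ext4_journal_start(inode, EXT4_HT_MAP_BLOCKS, credits);
        if (IS_ERR(handle))
                return PTR_ERR(handle);

        ret = ext4_map_blocks(handle, inode, &map,
                              EXT4_GET_BLOCKS_CREATE_ZERO);
        ext4_journal_stop(handle);

        return ret < 0 ? ret : 0;
}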

The numbers below were collected on a QEMU setup on a ppc64 box with a
simulated pmem device.

Performance numbers with different threads - (~10x improvement)
==========================================

[plot: vanilla_kernel, kIOPS vs. threads (1, 2, 4, 8, 12, 16), y-range
25-60 kIOPS, peak ~55 kIOPS]
[plot: patched_kernel, kIOPS vs. threads (1, 2, 4, 8, 12, 16), y-range
0-600 kIOPS, peak ~550 kIOPS at 16 threads]
fio script
==========
[global]
rw=randwrite
norandommap=1
invalidate=0
bs=4k
numjobs=16 --> varied for the different thread counts shown above
time_based=1
ramp_time=30
runtime=60
group_reporting=1
ioengine=psync
direct=1
size=16G
filename=file1.0.0:file1.0.1:file1.0.2:file1.0.3:file1.0.4:file1.0.5:file1.0.6:file1.0.7:file1.0.8:file1.0.9:file1.0.10:file1.0.11:file1.0.12:file1.0.13:file1.0.14:file1.0.15:file1.0.16:file1.0.17:file1.0.18:file1.0.19:file1.0.20:file1.0.21:file1.0.22:file1.0.23:file1.0.24:file1.0.25:file1.0.26:file1.0.27:file1.0.28:file1.0.29:file1.0.30:file1.0.31
file_service_type=random
nrfiles=32
directory=/mnt/

[name]
directory=/mnt/
direct=1

NOTE:
======
1. Looking at the ~10x perf delta, I probed a bit deeper to understand what is
causing this scalability problem. It seems that when we start a jbd2 txn, the
slab allocation code sees serious contention on a spinlock.

The spinlock contention could well be related to some other issue (I am
looking into that internally). But even on a QEMU setup on x86 with a
simulated pmem device, I could still see a perf improvement of close to ~2x
with the patched kernel v/s the vanilla kernel under the same fio workload.

perf report from vanilla_kernel, ppc64 (this contention is not seen with the patched kernel)
=======================================================================

47.86% fio [kernel.vmlinux] [k] do_raw_spin_lock
|
---do_raw_spin_lock
|
|--19.43%--_raw_spin_lock
| |
| --19.31%--0
| |
| |--9.77%--deactivate_slab.isra.61
| | ___slab_alloc
| | __slab_alloc
| | kmem_cache_alloc
| | jbd2__journal_start
| | __ext4_journal_start_sb
<...>
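
For context on where that kmem_cache_alloc() comes from: a
jbd2__journal_start() that cannot reuse the task's current handle allocates a
fresh handle from a slab cache. Paraphrased and trimmed from
fs/jbd2/transaction.c (field details are approximate):

static handle_t *new_handle(int nblocks)
{
        /*
         * jbd2_alloc_handle() is a kmem_cache_zalloc() from the jbd2
         * handle slab cache -- the kmem_cache_alloc() in the profile
         * above sits right here.
         */
        handle_t *handle = jbd2_alloc_handle(GFP_NOFS);

        if (!handle)
                return NULL;
        /* credit accounting setup elided */
        handle->h_ref = 1;
        return handle;
}

Skipping the txn start for overwrites keeps this allocation (and the matching
free in jbd2_journal_stop()) off the hot write path entirely.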

2. Kept this as an RFC, since maybe using ext4_iomap_overwrite_ops would be
better here. We could check for an overwrite in ext4_dax_write_iter(), the
same way we do for DIO writes; a rough sketch of that alternative follows.
Thoughts?
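
Roughly, that alternative would mirror the DIO write path. Sketch only,
assuming the existing DIO helpers ext4_overwrite_io() and
ext4_iomap_overwrite_ops can be reused here as-is (which needs verifying):

static ssize_t ext4_dax_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
        struct inode *inode = file_inode(iocb->ki_filp);
        const struct iomap_ops *ops = &ext4_iomap_ops;
        ssize_t ret;

        /* ext4_write_checks() and inode locking elided from this sketch */

        /*
         * Same overwrite check the DIO path does: if the whole range is
         * already allocated and written, pick the overwrite ops, whose
         * ->iomap_begin never starts a jbd2 handle.
         */
        if (ext4_overwrite_io(inode, iocb->ki_pos, iov_iter_count(from)))
                ops = &ext4_iomap_overwrite_ops;

        ret = dax_iomap_rw(iocb, from, ops);

        /* i_size update and unlock elided */
        return ret;
}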

3. This problem was reported by Dan Williams at [1].

Links
======
[1]: https://lore.kernel.org/linux-ext4/20190802144304.GP25064@xxxxxxxxxxxxxx/T/

Ritesh Harjani (1):
ext4: Optimize ext4 DAX overwrites

fs/ext4/ext4.h | 1 +
fs/ext4/file.c | 2 +-
fs/ext4/inode.c | 8 +++++++-
3 files changed, 9 insertions(+), 2 deletions(-)

--
2.25.4