Re: [PATCH v2] mm: implement write-behind policy for sequential file writes

From: Konstantin Khlebnikov
Date: Wed Sep 25 2019 - 04:15:45 EST


On 25/09/2019 10.18, Dave Chinner wrote:
On Tue, Sep 24, 2019 at 12:00:17PM +0300, Konstantin Khlebnikov wrote:
On 24/09/2019 10.39, Dave Chinner wrote:
On Mon, Sep 23, 2019 at 06:06:46PM +0300, Konstantin Khlebnikov wrote:
On 23/09/2019 17.52, Tejun Heo wrote:
Hello, Konstantin.

On Fri, Sep 20, 2019 at 10:39:33AM +0300, Konstantin Khlebnikov wrote:
With vm.dirty_write_behind 1 or 2 files are written even faster and

Is the faster speed reproducible? I don't quite understand why this
would be.

Writing to disk simply starts earlier.

Stupid question: how is this any different to simply winding down
our dirty writeback and throttling thresholds like so:

# echo $((100 * 1000 * 1000)) > /proc/sys/vm/dirty_background_bytes

to start background writeback when there's 100MB of dirty pages in
memory, and then:

# echo $((200 * 1000 * 1000)) > /proc/sys/vm/dirty_bytes

So that writers are directly throttled at 200MB of dirty pages in
memory?

This effectively gives us global writebehind behaviour with a
100-200MB cache write burst for initial writes.

Global limits affect all dirty pages, including memory-mapped and
randomly touched ones. Write-behind aims only at sequential streams.
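
(For illustration only, not part of the patch: an application can already
approximate per-stream write-behind from userspace with sync_file_range().
A minimal sketch; the file name, data size and window size are made up.)

    /*
     * Sequential writer that kicks off asynchronous writeback of each
     * completed window instead of waiting for the global dirty limits.
     */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>

    #define WINDOW (16 << 20)          /* 16MB write-behind window */

    int main(void)
    {
            static char buf[1 << 20];  /* 1MB per write() */
            off_t written = 0, flushed = 0;
            int fd = open("stream.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);

            if (fd < 0)
                    return 1;
            memset(buf, 'x', sizeof(buf));

            for (int i = 0; i < 1024; i++) {   /* write 1GB sequentially */
                    if (write(fd, buf, sizeof(buf)) != sizeof(buf))
                            break;
                    written += sizeof(buf);

                    /* queue writeback of the window just filled, don't wait */
                    if (written - flushed >= WINDOW) {
                            sync_file_range(fd, flushed, written - flushed,
                                            SYNC_FILE_RANGE_WRITE);
                            flushed = written;
                    }
            }
            fsync(fd);
            close(fd);
            return 0;
    }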

There are apps that do sequential writes via mmap()d files.
They should do writebehind too, yes?

I see no reason for that. This is a different scenario.

Mmap has no clear signal about "end of write", only a page fault at the
beginning. Theoretically we could implement a similar sliding window and
start writeback on consecutive page faults.

But applications that use memory-mapped files probably know better what
to do with their data. I prefer to leave them alone for now.
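
(Again purely illustrative, not from the patch: a sequential mmap writer
can schedule writeback of each finished window itself with msync(MS_ASYNC).
Note that on Linux MS_ASYNC is little more than a hint; file name and sizes
here are invented.)

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>

    #define FILE_SIZE (256UL << 20)    /* 256MB, arbitrary for the sketch */
    #define WINDOW    (16UL << 20)     /* 16MB writeback window */

    int main(void)
    {
            int fd = open("mapped.dat", O_RDWR | O_CREAT | O_TRUNC, 0644);

            if (fd < 0 || ftruncate(fd, FILE_SIZE) < 0)
                    return 1;

            char *p = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                    return 1;

            for (unsigned long off = 0; off < FILE_SIZE; off += WINDOW) {
                    memset(p + off, 'x', WINDOW);     /* dirty one window */
                    msync(p + off, WINDOW, MS_ASYNC); /* ask for writeback */
            }

            msync(p, FILE_SIZE, MS_SYNC);             /* final flush */
            munmap(p, FILE_SIZE);
            close(fd);
            return 0;
    }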


And, really, such strict writebehind behaviour is going to cause all
sorts of unintended problems with filesystems, because there will be
adverse interactions with delayed allocation. We need a substantial
amount of dirty data to be cached for writeback for the fragmentation
minimisation algorithms to be able to do their job....

I think most sequentially written files never change after close.

There are lots of apps that write zeros to initialise and allocate
space, then go write real data to them. Database WAL files are
commonly initialised like this...

Those zeros are just a bunch of dirty pages which have to be written.
Sync and memory pressure will write them anyway, so why shouldn't write-behind?
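
(For readers unfamiliar with the workload being argued about, a rough
sketch of WAL-style initialisation; file name and sizes are invented.
The segment is first filled sequentially with zeros, then the same range
is overwritten with real records later, so these files do change after
the initial sequential write.)

    #include <fcntl.h>
    #include <unistd.h>

    #define WAL_SIZE (64 << 20)        /* 64MB segment */

    int main(void)
    {
            static char zeros[1 << 20];            /* zero-filled 1MB chunk */
            char record[4096] = "log record...";
            int fd = open("wal.0001", O_WRONLY | O_CREAT | O_TRUNC, 0644);

            if (fd < 0)
                    return 1;

            /* phase 1: initialise the whole segment with zeros */
            for (off_t off = 0; off < WAL_SIZE; off += sizeof(zeros))
                    if (write(fd, zeros, sizeof(zeros)) < 0)
                            return 1;
            fsync(fd);

            /* phase 2: overwrite from the start with real data as it arrives */
            if (pwrite(fd, record, sizeof(record), 0) < 0)
                    return 1;
            fdatasync(fd);

            close(fd);
            return 0;
    }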


Except for not knowing the final size of huge files (>16MB in my patch),
there should be no difference for delayed allocation.

There is, because you throttle the writes down such that there is
only 16MB of dirty data in memory. Hence filesystems will typically
only allocate in 16MB chunks, as that's all the delalloc range
spans.

I'm not so concerned for XFS here, because our speculative
preallocation will handle this just fine, but for ext4 and btrfs
it's going to interleave the allocation of concurrent streaming writes
and fragment the crap out of the files.

In general, the smaller you make the individual file writeback
window, the worse the fragmentation problem gets....

AFAIR ext4 already preallocates extents beyond EOF too.

But this certainly must be tested carefully on all modern filesystems.


Probably write-behind could provide a hint about the streaming pattern:
pass something like "MSG_MORE" into the writeback call.

How does that help when we've only got dirty data and block
reservations up to EOF which is no more than 16MB away?

The block allocator should interpret this flag as "more data is
expected" and preallocate an extent bigger than the data, extending beyond EOF.


Cheers,

Dave.