> When the total dirty pages exceed vm_dirty_ratio, the dirtier is made to
> do the writeback. But this dirtier may not be the one who took the system
> to this state. Instead, if we can track the dirty count per-file, we could
> throttle the dirtier of a file when the file's dirty pages exceed a
> certain limit. Even though this dirtier may not be the one who dirtied the
> other pages of this file, it is fair to throttle this process, as it uses
> that file.

I agree with your problem description: a single program which writes a
single large file can make an interactive system suck. Creating a 25+GB
Blu-Ray image will often saturate the buffer space. I played with per-fd
limiting during 2.5.xx development, when I had an app writing 5-10GB files.
Much as I wanted to get something to submit while the kernel was changing,
I kept hitting corner cases.
> This patch
> 1. Adds dirty page accounting per-file.
> 2. Exports the number of pages of this file in cache and the number of
>    dirty pages via proc fdinfo.
> 3. Adds a new tunable, /proc/sys/vm/file_dirty_bytes. When a file's dirty
>    data exceeds this limit, the writeback of that inode is done by the
>    current dirtier.

I think you have this in the wrong place; can't it go in
balance_dirty_pages()?
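For what it's worth, here is a minimal sketch of what the check might look
like if it lived in balance_dirty_pages(). The i_dirty_pages counter and
the file_dirty_bytes sysctl variable are assumed names based on the
description above, not taken from the actual patch:

#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/writeback.h>

/* assumed: i_dirty_pages is an atomic_long_t added to struct inode,
 * maintained by the per-file accounting from point 1 above */

static void balance_file_dirty(struct address_space *mapping)
{
	struct inode *inode = mapping->host;
	unsigned long limit = file_dirty_bytes >> PAGE_SHIFT;
	struct writeback_control wbc = {
		.sync_mode = WB_SYNC_NONE,
		.nr_to_write = LONG_MAX,
	};

	/* file_dirty_bytes == 0 disables per-file throttling */
	if (!limit)
		return;

	/* over the limit: make the current dirtier write back
	 * this inode itself, paying for its own dirtying */
	if (atomic_long_read(&inode->i_dirty_pages) > limit)
		do_writepages(mapping, &wbc);
}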
> This certainly will affect the throughput of certain heavy-dirtying
> workloads, but should help for interactive systems.

I found that the effect was about the same as forcing the application to
use O_DIRECT, and since it was our application I could do that. Not all
badly-behaved programs are open source, so that addressed my issue but not
the general case.
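For reference, the O_DIRECT approach from the application side looks
roughly like this (the filename and sizes are made up for illustration;
the aligned-buffer requirement is part of why this is not a general fix
for programs you can't modify):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (1 << 20)	/* write in 1MB aligned chunks */

int main(void)
{
	void *buf;
	int fd, i;

	/* O_DIRECT requires block-aligned user buffers */
	if (posix_memalign(&buf, 4096, CHUNK))
		return 1;
	memset(buf, 0, CHUNK);

	fd = open("image.iso", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0)
		return 1;

	/* every write bypasses the page cache, so dirty pages
	 * never pile up to stall the rest of the system */
	for (i = 0; i < 25 * 1024; i++)	/* ~25GB total */
		if (write(fd, buf, CHUNK) != CHUNK)
			break;

	close(fd);
	free(buf);
	return 0;
}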