[PATCH v3 0/3] free reclaimed pages by paging out instantly

From: Minchan Kim
Date: Tue Jul 01 2014 - 20:12:50 EST


Normally, pages whose writeback for reclaim has completed are rotated
to the inactive LRU tail without being freed. The reason it works that
way is that we can't free a page from atomic context (ie,
end_page_writeback) because various locks on the freeing path are not
aware of atomic context.
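
For reference, the rotation happens at writeback completion, roughly
like this (v3.16-era mm/filemap.c, comments trimmed):

	void end_page_writeback(struct page *page)
	{
		/* Writeback completion can run in IRQ context, so a page
		 * marked PG_reclaim can only be moved to the inactive LRU
		 * tail here; it cannot be freed. */
		if (PageReclaim(page)) {
			ClearPageReclaim(page);
			rotate_reclaimable_page(page);
		}

		if (!test_clear_page_writeback(page))
			BUG();

		smp_mb__after_atomic();
		wake_up_page(page, PG_writeback);
	}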

So to actually reclaim those I/O-completed pages, we need one more
iteration of reclaim, which causes unnecessary aging as well as extra
CPU overhead.

A long time ago, at the first trial, the main concern was memcg
locking, but recently Johannes made a great effort to simplify the
memcg locking and it was merged into mmotm, so I coded this up based
on the mmotm tree. (Kudos to Johannes)
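
The core idea, in sketch form only (modeled on __remove_mapping; the
actual patch also deals with swapcache and error handling omitted
here), is to detach the page from its mapping with a trylock so the
atomic caller can simply give up on contention:

	/* Sketch: try to remove a clean, unused page from its mapping
	 * without sleeping.  Returns 0 on contention so the caller can
	 * fall back to the old rotation behaviour.  On success the
	 * refcount is frozen to zero and the caller frees the page
	 * directly, as __remove_mapping does. */
	static int atomic_remove_mapping(struct address_space *mapping,
					 struct page *page)
	{
		unsigned long flags;

		/* end_page_writeback may run in IRQ context: no sleeping. */
		if (!spin_trylock_irqsave(&mapping->tree_lock, flags))
			return 0;

		/* The caller and the page cache must hold the only refs. */
		if (!page_freeze_refs(page, 2)) {
			spin_unlock_irqrestore(&mapping->tree_lock, flags);
			return 0;
		}

		__delete_from_page_cache(page, NULL);
		spin_unlock_irqrestore(&mapping->tree_lock, flags);
		return 1;
	}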

On a 1G, 12 CPU kvm guest, I built the kernel 5 times and the results
were:

allocstall
vanilla: records: 5 avg: 4733.80 std: 913.55(19.30%) max: 6442.00 min: 3719.00
improve: records: 5 avg: 1514.20 std: 441.69(29.17%) max: 1974.00 min: 863.00

pgrotated
vanilla: records: 5 avg: 873313.80 std: 40999.20(4.69%) max: 954722.00 min: 845903.00
improve: records: 5 avg: 28406.40 std: 3296.02(11.60%) max: 34552.00 min: 25047.00

Most fields in vmstat do not change much, but what stands out is
allocstall and pgrotated. We save a lot of allocstall (ie, direct
reclaim) and pgrotated.

Testing, review and any feedback are welcome!

* From v2 - 2014.06.20
  * Rebased on v3.16-rc2-mmotm-2014-06-25-16-44
  * Removed RFC tag

Minchan Kim (3):
mm: Don't hide spin_lock in swap_info_get internal
mm: Introduce atomic_remove_mapping
mm: Free reclaimed pages independent of next reclaim

include/linux/swap.h | 4 ++++
mm/filemap.c | 17 +++++++++-----
mm/swap.c | 21 ++++++++++++++++++
mm/swapfile.c | 17 ++++++++++++--
mm/vmscan.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++
5 files changed, 114 insertions(+), 8 deletions(-)

--
2.0.0
