[PATCH] mm: do not drain pagevecs for mlock

From: Tao Ma
Date: Fri Dec 30 2011 - 01:20:08 EST


Commit 8891d6da added lru_add_drain_all to mlock to flush all the
per-cpu pagevecs. This makes the system call run much slower than its
predecessor (around 20 times slower on a 16-core Xeon E5620), and the
more cores we have, the larger the performance penalty, because of the
costly call to schedule_on_each_cpu.

From the commit log of 8891d6da we can see that "it isn't must. but it
reduce the failure of moving to unevictable list. its failure can rescue
in vmscan later." Christoph Lameter already removed the call for
mlockall(MCL_FUTURE), so this patch removes the remaining calls from
mlock/mlockall.

Without this patch:
time ./test_mlock -c 100000

real 0m20.566s
user 0m0.074s
sys 0m12.759s

With this patch:
time ./test_mlock -c 100000

real 0m1.675s
user 0m0.049s
sys 0m1.622s

Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Minchan Kim <minchan.kim@xxxxxxxxx>
Cc: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
Cc: Mel Gorman <mel@xxxxxxxxx>
Cc: Johannes Weiner <jweiner@xxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Tao Ma <boyu.mt@xxxxxxxxxx>
---
mm/mlock.c | 5 -----
1 files changed, 0 insertions(+), 5 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index 4f4f53b..bb5fc42 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -487,8 +487,6 @@ SYSCALL_DEFINE2(mlock, unsigned long, start, size_t, len)
 	if (!can_do_mlock())
 		return -EPERM;
 
-	lru_add_drain_all();	/* flush pagevec */
-
 	down_write(&current->mm->mmap_sem);
 	len = PAGE_ALIGN(len + (start & ~PAGE_MASK));
 	start &= PAGE_MASK;
@@ -557,9 +555,6 @@ SYSCALL_DEFINE1(mlockall, int, flags)
 	if (!can_do_mlock())
 		goto out;
 
-	if (flags & MCL_CURRENT)
-		lru_add_drain_all();	/* flush pagevec */
-
 	down_write(&current->mm->mmap_sem);
 
 	lock_limit = rlimit(RLIMIT_MEMLOCK);
--
1.7.4.1
