Re: [PATCH] mm: page_alloc: Avoid marking zones full prematurely after zone_reclaim()

From: Simon Jeons
Date: Tue Apr 09 2013 - 06:05:41 EST


Hi Michal,
On 04/05/2013 02:31 PM, Simon Jeons wrote:
Hi Michal,
On 03/21/2013 04:19 PM, Michal Hocko wrote:
On Thu 21-03-13 10:33:07, Simon Jeons wrote:
Hi Mel,
On 03/21/2013 02:19 AM, Mel Gorman wrote:
The following problem was reported against a distribution kernel when
zone_reclaim was enabled but the same problem applies to the mainline
kernel. The reproduction case was as follows

1. Run numactl -m +0 dd if=largefile of=/dev/null
This allocates a large number of clean pages in node 0
I'm confused: why does this need to allocate a large number of clean pages?
It reads from the file and puts pages into the page cache. The pages are
not modified, so they are clean. The output file is /dev/null, so no pages
are written. dd doesn't call fadvise(POSIX_FADV_DONTNEED) on the input
file by default, so pages from the file stay in the page cache.
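The behaviour described above can be observed directly. The sketch below is a hypothetical demonstration (file name and sizes are made up, not from the thread): it reads a file the way dd does, then shows that the same read with fadvise-style cache dropping leaves the page cache smaller. dd's `iflag=nocache` option asks dd to issue POSIX_FADV_DONTNEED itself.

```shell
# Create a test file (path and size are arbitrary for illustration).
dd if=/dev/zero of=/tmp/largefile bs=1M count=64 2>/dev/null

# Plain read: the 64 MB of clean pages stay in the page cache.
dd if=/tmp/largefile of=/dev/null bs=1M 2>/dev/null

# Read with cache dropping: dd calls fadvise(POSIX_FADV_DONTNEED)
# on the input, so the clean pages are discarded after use.
dd if=/tmp/largefile of=/dev/null bs=1M iflag=nocache 2>/dev/null

rm -f /tmp/largefile
echo done
```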

I try this in v3.9-rc5:
dd if=/dev/sda of=/dev/null bs=1MB
14813+0 records in
14812+0 records out
14812000000 bytes (15 GB) copied, 105.988 s, 140 MB/s

free -m -s 1

total used free shared buffers cached
Mem: 7912 1181 6731 0 663 239
-/+ buffers/cache: 277 7634
Swap: 8011 0 8011

It seems that almost 15GB was copied before I stopped dd, but the used memory I monitored during dd stayed around 1200MB. Weird, why?
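One thing worth checking here (a suggestion, not something established in the thread): `free` accounts a raw block-device read such as `dd if=/dev/sda` under "buffers" rather than "cached", and the kernel is free to reclaim those clean buffer pages under pressure, so "used" need not track the bytes copied. Watching /proc/meminfo while dd runs makes the accounting visible:

```shell
# Snapshot the relevant counters before, during, and after the dd run.
# Field names are as reported by /proc/meminfo on Linux.
grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo
```

Comparing snapshots taken a few seconds apart during the copy should show Buffers growing (for a block-device input) while Cached stays roughly flat.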


Sorry to waste your time, but the test result is weird, isn't it?
