Hi Michal,
On 03/21/2013 04:19 PM, Michal Hocko wrote:
> On Thu 21-03-13 10:33:07, Simon Jeons wrote:
>> Hi Mel,
>>
>> On 03/21/2013 02:19 AM, Mel Gorman wrote:
>>> The following problem was reported against a distribution kernel when
>>> zone_reclaim was enabled but the same problem applies to the mainline
>>> kernel. The reproduction case was as follows
>>>
>>> 1. Run numactl -m +0 dd if=largefile of=/dev/null
>>>    This allocates a large number of clean pages in node 0
>>
>> I'm confused: why does this allocate a large number of clean pages?
>
> It reads from the file and puts the pages into the page cache. The pages
> are not modified, so they are clean. The output file is /dev/null, so no
> pages are written. dd doesn't call fadvise(POSIX_FADV_DONTNEED) on the
> input file by default, so pages from the file stay in the page cache.
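
For reference, here is a minimal C sketch (my own illustration, not code
from dd or from the patch) of the hint dd skips by default: a reader that
drops its clean page cache pages with posix_fadvise(POSIX_FADV_DONTNEED)
once it is done. The program name, default file path and buffer size are
arbitrary.

/* Build: cc -O2 -o readdrop readdrop.c   ("readdrop" is a made-up name) */
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char buf[1 << 16];	/* 64K read buffer, size is arbitrary */
	const char *path = argc > 1 ? argv[1] : "largefile";
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Read the whole file and discard the data, like of=/dev/null. */
	while (read(fd, buf, sizeof(buf)) > 0)
		;

	/*
	 * Hint that the clean page cache pages backing this file are no
	 * longer needed. Without this, the pages stay resident after the
	 * reader exits, which is what fills node 0 in the reproduction
	 * case above.
	 */
	if (posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED) != 0)
		fprintf(stderr, "posix_fadvise failed\n");

	close(fd);
	return 0;
}

With a hint like that, the streaming read would not leave a large amount
of clean page cache behind on node 0 in the first place.
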
I tried this on v3.9-rc5:
dd if=/dev/sda of=/dev/null bs=1MB
14813+0 records in
14812+0 records out
14812000000 bytes (15 GB) copied, 105.988 s, 140 MB/s
free -m -s 1
             total       used       free     shared    buffers     cached
Mem:          7912       1181       6731          0        663        239
-/+ buffers/cache:        277       7634
Swap:         8011          0       8011
It seems that almost 15GB were copied before I stopped dd, but the "used" memory I monitored during dd stayed at around 1200MB. Weird, why?