[PATCH 1/11] maps3: add proportional set size accounting in smaps

From: Matt Mackall
Date: Mon Oct 15 2007 - 18:27:20 EST


From: Fengguang Wu <wfg@xxxxxxxxxxxxxxxx>

The "proportional set size" (PSS) of a process is the count of pages it has
in memory, where each page is divided by the number of processes sharing
it. So if a process has 1000 pages all to itself, and 1000 shared with one
other process, its PSS will be 1500.

- lwn.net: "ELC: How much memory are applications really using?"
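
For illustration only (this is not part of the patch), the definition
boils down to summing 1/mapcount over a process's resident pages; a
trivial userspace sketch of the example above:

	/* Illustrative only: PSS is the sum of 1/mapcount over resident pages. */
	#include <stdio.h>

	int main(void)
	{
		double pss;

		pss  = 1000 * (1.0 / 1);	/* 1000 pages mapped only by us */
		pss += 1000 * (1.0 / 2);	/* 1000 pages shared with one other process */

		printf("PSS = %.0f pages\n", pss);	/* PSS = 1500 pages */
		return 0;
	}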

The PSS proposed by Matt Mackall is a very nice metric for measuring a
process's memory footprint. So collect it and export it via
/proc/<pid>/smaps.

Matt Mackall's pagemap/kpagemap and John Berthels's exmap, both
comprehensive tools, can also do the job. But for PSS alone, let's do it
the simple way.
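
As a usage sketch (illustrative only, not part of the patch): with the
Pss: line added between Rss: and Shared_Clean: below, a userspace
consumer could total a process's proportional footprint like this:

	/* Illustrative consumer of the new Pss: field; not part of the patch. */
	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/self/smaps", "r");
		char line[256];
		unsigned long kb, total = 0;

		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f))
			if (sscanf(line, "Pss: %lu kB", &kb) == 1)
				total += kb;
		fclose(f);
		printf("total PSS: %lu kB\n", total);	/* sum over all mappings, in kB */
		return 0;
	}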

Cc: John Berthels <jjberthels@xxxxxxxxx>
Cc: Bernardo Innocenti <bernie@xxxxxxxxxxx>
Cc: Padraig Brady <P@xxxxxxxxxxxxxx>
Cc: Denys Vlasenko <vda.linux@xxxxxxxxxxxxxx>
Cc: Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Matt Mackall <mpm@xxxxxxxxxxx>
Signed-off-by: Fengguang Wu <wfg@xxxxxxxxxxxxxxxx>
Cc: Hugh Dickins <hugh@xxxxxxxxxxx>
---

fs/proc/task_mmu.c | 29 ++++++++++++++++++++++++++++-
1 files changed, 28 insertions(+), 1 deletion(-)

Index: l/fs/proc/task_mmu.c
===================================================================
--- l.orig/fs/proc/task_mmu.c 2007-10-14 13:35:31.000000000 -0500
+++ l/fs/proc/task_mmu.c 2007-10-14 13:36:56.000000000 -0500
@@ -122,6 +122,27 @@ struct mem_size_stats
unsigned long private_clean;
unsigned long private_dirty;
unsigned long referenced;
+
+ /*
+ * Proportional Set Size (PSS): my share of RSS.
+ *
+ * PSS of a process is the count of pages it has in memory, where each
+ * page is divided by the number of processes sharing it. So if a
+ * process has 1000 pages all to itself, and 1000 shared with one other
+ * process, its PSS will be 1500. - Matt Mackall, lwn.net
+ */
+ u64 pss;
+ /*
+ * To keep accumulated division errors low, pss is a 64bit fixed-point
+ * counter: the low PSS_DIV_BITS bits hold the fraction, so
+ * (pss >> PSS_DIV_BITS) is the real byte count.
+ *
+ * A shift of 12 before division means (assuming 4K page size):
+ * - 1M 3-user-pages add up to 8KB errors;
+ * - supports mapcount up to 2^24, or 16M;
+ * - supports PSS up to 2^52 bytes, or 4PB.
+ */
+#define PSS_DIV_BITS 12
};

struct pmd_walker {
@@ -195,6 +216,7 @@ static int show_map_internal(struct seq_
seq_printf(m,
"Size: %8lu kB\n"
"Rss: %8lu kB\n"
+ "Pss: %8lu kB\n"
"Shared_Clean: %8lu kB\n"
"Shared_Dirty: %8lu kB\n"
"Private_Clean: %8lu kB\n"
@@ -202,6 +224,7 @@ static int show_map_internal(struct seq_
"Referenced: %8lu kB\n",
(vma->vm_end - vma->vm_start) >> 10,
mss->resident >> 10,
+ (unsigned long)(mss->pss >> (10 + PSS_DIV_BITS)),
mss->shared_clean >> 10,
mss->shared_dirty >> 10,
mss->private_clean >> 10,
@@ -226,6 +249,7 @@ static void smaps_pte_range(struct vm_ar
pte_t *pte, ptent;
spinlock_t *ptl;
struct page *page;
+ int mapcount;

pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
for (; addr != end; pte++, addr += PAGE_SIZE) {
@@ -242,16 +266,19 @@ static void smaps_pte_range(struct vm_ar
/* Accumulate the size in pages that have been accessed. */
if (pte_young(ptent) || PageReferenced(page))
mss->referenced += PAGE_SIZE;
- if (page_mapcount(page) >= 2) {
+ mapcount = page_mapcount(page);
+ if (mapcount >= 2) {
if (pte_dirty(ptent))
mss->shared_dirty += PAGE_SIZE;
else
mss->shared_clean += PAGE_SIZE;
+ mss->pss += (PAGE_SIZE << PSS_DIV_BITS) / mapcount;
} else {
if (pte_dirty(ptent))
mss->private_dirty += PAGE_SIZE;
else
mss->private_clean += PAGE_SIZE;
+ mss->pss += (PAGE_SIZE << PSS_DIV_BITS);
}
}
pte_unmap_unlock(pte - 1, ptl);
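
To make the fixed-point arithmetic above concrete, here is a userspace
sketch (illustrative only, not kernel code, and assuming the 4K page
size mentioned in the comment) that accumulates the same quantities for
the 1000-private/1000-shared example and converts them to kB the way
show_map_internal() does:

	/* Illustrative userspace sketch of the PSS_DIV_BITS fixed-point scheme. */
	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_SIZE	4096UL
	#define PSS_DIV_BITS	12

	int main(void)
	{
		uint64_t pss = 0;
		int i;

		for (i = 0; i < 1000; i++)	/* private pages: whole page each */
			pss += PAGE_SIZE << PSS_DIV_BITS;
		for (i = 0; i < 1000; i++)	/* pages shared by 2: half a page each */
			pss += (PAGE_SIZE << PSS_DIV_BITS) / 2;

		/* drop the fraction bits, then convert bytes to kB as smaps does */
		printf("Pss: %8lu kB\n",
		       (unsigned long)(pss >> (10 + PSS_DIV_BITS)));
		/* prints 6000 kB: 1500 pages * 4 kB */
		return 0;
	}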