Re: [PATCH] oom_kill: use rss value instead of vm size for badness

From: David Rientjes
Date: Mon Nov 02 2009 - 05:42:17 EST


On Sun, 1 Nov 2009, KOSAKI Motohiro wrote:

> > total_vm
> > 673222 test
> > 195695 krunner
> > 168881 plasma-desktop
> > 130567 ktorrent
> > 127081 knotify4
> > 125881 icedove-bin
> > 123036 akregator
> > 121869 firefox-bin
> >
> > rss
> > 672271 test
> > 42192 Xorg
> > 30763 firefox-bin
> > 13292 icedove-bin
> > 10208 ktorrent
> > 9260 akregator
> > 8859 plasma-desktop
> > 7528 krunner
> >
> > firefox-bin is ranked much higher here than it is by total_vm, but Xorg
> > still ranks very high with this patch compared to the current
> > implementation.
>
> Hi David,
>
> I'm very interested in what you've pointed out; thanks for the good testing.
> So, I'd like to clarify your point a bit.
>
> Below are the badness scores on my desktop environment (x86_64, 6GB mem).
> They show Xorg with a pretty small badness score. Do you know why such a
> difference happens?
>

I don't know specifically what's different between your machine and Vedran's;
my data is simply the tasklist dump that /proc/sys/vm/oom_dump_tasks produced
in Vedran's oom log.

I guess we could add a call to badness() to the oom_dump_tasks tasklist
dump so we get a clearer picture of the score for each thread group
leader; a rough, untested sketch is below. Anything else would be
speculation at this point, though.
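
Something like this, against dump_tasks() in mm/oom_kill.c; it reuses the
same uptime that select_bad_process() already passes to badness(), and
assumes it runs where dump_tasks() does today, with the tasklist already
locked for iteration:

	struct task_struct *g, *p;
	struct timespec uptime;

	do_posix_clock_monotonic_gettime(&uptime);
	do_each_thread(g, p) {
		if (!thread_group_leader(p))
			continue;
		/* badness() itself returns 0 for tasks without an mm */
		printk(KERN_INFO "[%5d] badness %8lu %s\n",
		       p->pid, badness(p, uptime.tv_sec), p->comm);
	} while_each_thread(g, p);

The column layout is only illustrative; the real change would fold the
score into the existing oom_dump_tasks header and per-task printk.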

> score pid comm
> ==============================
> 56382 3241 run-mozilla.sh
> 23345 3289 run-mozilla.sh
> 21461 3050 gnome-do
> 20079 2867 gnome-session
> 14016 3258 firefox
> 9212 3306 firefox
> 8468 3115 gnome-do
> 6902 3325 emacs
> 6783 3212 tomboy
> 4865 2968 python
> 4861 2948 nautilus
> 4221 1 init
> (snip: about 100 lines)
> 548 2590 Xorg
>

Are these scores with your rss patch or without? If they're without the
patch, that's understandable, since Xorg didn't rank highly by total_vm in
Vedran's log either.