Re: [RFC][PATCH 6/8] mm: handle_speculative_fault()

From: KAMEZAWA Hiroyuki
Date: Wed Jan 06 2010 - 20:04:26 EST


On Wed, 6 Jan 2010 01:39:17 -0800 (PST)
Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:

>
>
> On Wed, 6 Jan 2010, KAMEZAWA Hiroyuki wrote:
> >
> > 9.08% multi-fault-all [kernel] [k] down_read_trylock
<snip>
> That way, it will do the cmpxchg first, and if it wasn't unlocked and had
> other readers active, it will end up doing an extra cmpxchg, but still
> hopefully avoid the extra bus cycles.
>
> So it might be worth testing this trivial patch on top of my other one.
>
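If I read the suggestion correctly, the idea is roughly the following. This
is only a plain-C sketch of the "cmpxchg first" approach, not the actual
per-arch asm implementation; the bias constants and the __sync_* helper are
stand-ins for illustration:

#define RWSEM_UNLOCKED_VALUE	0L	/* stand-in value, sketch only */
#define RWSEM_ACTIVE_READ_BIAS	1L	/* stand-in value, sketch only */

/* sketch only: the real kernel version lives in per-arch asm */
static inline int down_read_trylock_sketch(long *count)
{
	long old = RWSEM_UNLOCKED_VALUE;	/* optimistically assume "unlocked" */
	long seen;

	for (;;) {
		/*
		 * Do one locked cmpxchg up front; on the common uncontended
		 * path this avoids a plain read that would first pull the
		 * cacheline in shared state and then have to upgrade it.
		 */
		seen = __sync_val_compare_and_swap(count, old,
						   old + RWSEM_ACTIVE_READ_BIAS);
		if (seen == old)
			return 1;	/* got the read lock */
		if (seen + RWSEM_ACTIVE_READ_BIAS <= 0)
			return 0;	/* writer active or pending */
		old = seen;	/* other readers: one extra cmpxchg, retry */
	}
}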
Test: on an 8-core/2-socket x86-64 box

	while (1) {
		touch memory (fault in the whole range)
		barrier
		madvise(MADV_DONTNEED) on the whole range, by cpu 0 only
		barrier
	}
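
In concrete terms, each iteration looks roughly like the userspace sketch
below. This is not the actual multi-fault-all source; the mapping size,
thread handling and 60-second run time are assumptions for illustration:

#include <pthread.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define NR_THREADS	8
#define MAP_LEN		(NR_THREADS * 2UL * 1024 * 1024)

static char *map;
static pthread_barrier_t bar;

static void *worker(void *arg)
{
	long id = (long)arg;
	size_t chunk = MAP_LEN / NR_THREADS;
	char *base = map + id * chunk;

	for (;;) {
		/* touch memory: fault in this thread's part of the range */
		for (size_t off = 0; off < chunk; off += 4096)
			base[off] = 1;

		pthread_barrier_wait(&bar);		/* barrier */

		/* cpu 0 drops the whole range, forcing new faults next round */
		if (id == 0)
			madvise(map, MAP_LEN, MADV_DONTNEED);

		pthread_barrier_wait(&bar);		/* barrier */
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];

	map = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
		   MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	pthread_barrier_init(&bar, NULL, NR_THREADS);

	for (long i = 0; i < NR_THREADS; i++)
		pthread_create(&tid[i], NULL, worker, (void *)i);

	sleep(60);	/* the runs below were measured over ~60 seconds */
	return 0;
}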

<Before> (quoted from my earlier post)
> [root@bluextal memory]# /root/bin/perf stat -e page-faults,cache-misses --repeat 5 ./multi-fault-all 8
>
> Performance counter stats for './multi-fault-all 8' (5 runs):
>
>         33029186  page-faults               ( +-   0.146% )
>        348698659  cache-misses              ( +-   0.149% )
>
>     60.002876268  seconds time elapsed      ( +-   0.001% )
> 41.51% multi-fault-all [kernel] [k] clear_page_c
> 9.08% multi-fault-all [kernel] [k] down_read_trylock
> 6.23% multi-fault-all [kernel] [k] up_read
> 6.17% multi-fault-all [kernel] [k] __mem_cgroup_try_charg


<After>
[root@bluextal memory]# /root/bin/perf stat -e page-faults,cache-misses --repeat 5 ./multi-fault-all 8

Performance counter stats for './multi-fault-all 8' (5 runs):

        33782787  page-faults               ( +-   2.650% )
       332753197  cache-misses              ( +-   0.477% )

    60.003984337  seconds time elapsed      ( +-   0.004% )

# Samples: 1014408915089
#
# Overhead Command Shared Object Symbol
# ........ ............... ........................ ......
#
44.42% multi-fault-all [kernel] [k] clear_page_c
7.73% multi-fault-all [kernel] [k] down_read_trylock
6.65% multi-fault-all [kernel] [k] __mem_cgroup_try_char
6.15% multi-fault-all [kernel] [k] up_read
4.87% multi-fault-all [kernel] [k] handle_mm_fault
3.70% multi-fault-all [kernel] [k] __rmqueue
3.69% multi-fault-all [kernel] [k] __mem_cgroup_commit_c
2.35% multi-fault-all [kernel] [k] bad_range


Yes, it seems slightly improved, at least in this test.
But the page-fault throughput score is still within the error range.


Thanks,
-Kame


