Re: [PATCH] shmem: reduce locking in the pagefault path

From: Andrew Morton
Date: Tue Jul 06 2010 - 21:33:14 EST


On Wed, 07 Jul 2010 09:15:46 +0800 Shaohua Li <shaohua.li@xxxxxxxxx> wrote:

> I'm running a shmem pagefault test case (see attached file) on a 64-CPU
> system. Profiling shows shmem_inode_info->lock is heavily contended, with
> 100% of CPU time spent trying to acquire it.
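>
> The attachment isn't reproduced here. Roughly, the test has this shape
> (a hypothetical sketch: the thread count and slice size are illustrative,
> and the run is timed externally, e.g. with time(1)):
>
> 	/* NTHREADS threads each write-fault their own slice of one
> 	 * shared anonymous (i.e. shmem-backed) mapping. */
> 	#include <pthread.h>
> 	#include <stdlib.h>
> 	#include <sys/mman.h>
>
> 	#define NTHREADS	64
> 	#define SLICE		(64UL << 20)	/* 64MB per thread */
>
> 	static char *map;
>
> 	static void *worker(void *arg)
> 	{
> 		unsigned long off = (unsigned long)arg * SLICE, i;
>
> 		for (i = 0; i < SLICE; i += 4096)
> 			map[off + i] = 1;	/* one write fault per page */
> 		return NULL;
> 	}
>
> 	int main(void)
> 	{
> 		pthread_t tid[NTHREADS];
> 		long i;
>
> 		map = mmap(NULL, NTHREADS * SLICE, PROT_READ | PROT_WRITE,
> 			   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
> 		if (map == MAP_FAILED)
> 			exit(1);
> 		for (i = 0; i < NTHREADS; i++)
> 			pthread_create(&tid[i], NULL, worker, (void *)i);
> 		for (i = 0; i < NTHREADS; i++)
> 			pthread_join(tid[i], NULL);
> 		return 0;
> 	}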

I seem to remember complaining about that in 2002 ;) Faulting in a
mapping of /dev/zero is just awful on a 4-way(!).

> In the pagefault (no swap) case, shmem_getpage takes the lock twice; the
> second acquisition is avoidable if we preallocate a page, saving one round
> of locking. That is what the patch below does.
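>
> In outline (a sketch of the idea only, not the actual diff; the real flow
> in shmem_getpage() has more rechecks, and the variable names just follow
> that function's locals):
>
> 	/* Before: the no-swap fault path takes info->lock twice. */
> 	spin_lock(&info->lock);
> 	entry = shmem_swp_alloc(info, idx, sgp);	/* no page, no swap */
> 	spin_unlock(&info->lock);
> 	filepage = shmem_alloc_page(gfp, info, idx);	/* may sleep */
> 	spin_lock(&info->lock);
> 	/* ... recheck, charge, add to page cache ... */
> 	spin_unlock(&info->lock);
>
> 	/* After: allocate the page up front, so one acquisition suffices. */
> 	prealloc_page = shmem_alloc_page(gfp, info, idx);
> 	spin_lock(&info->lock);
> 	/* ... no page, no swap: consume prealloc_page directly ... */
> 	spin_unlock(&info->lock);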
>
> The result of the test case:
> 2.6.35-rc3: ~20s
> 2.6.35-rc3 + patch: ~12s
> so this is a 40% improvement.
>
> One might argue that we could have better locking for shmem. But even if
> shmem were lockless, the pagefault path would soon see the pagecache lock
> heavily contended, because shmem must add each new page to the pagecache.
> So until we have better locking for the pagecache, improving shmem's
> locking doesn't gain much. I ran a similar pagefault test against a ramfs
> file; the result was ~10.5s.
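>
> (The pagecache serialization point is, roughly, this part of
> add_to_page_cache_locked() in mm/filemap.c; every faulting CPU ends up
> here:)
>
> 	spin_lock_irq(&mapping->tree_lock);
> 	error = radix_tree_insert(&mapping->page_tree, offset, page);
> 	/* ... accounting ... */
> 	spin_unlock_irq(&mapping->tree_lock);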
>
> Signed-off-by: Shaohua Li <shaohua.li@xxxxxxxxx>
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index f65f840..c5f2939 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c

The patch doesn't make shmem_getpage() any clearer :(

shmem_inode_info.lock appears to be held over more code than it needs to
be. Surely lookup_swap_cache() didn't need it, for example.

What data does shmem_inode_info.lock actually protect?
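
For reference, the struct (roughly as it stands in
include/linux/shmem_fs.h; the field comments are the header's):

	struct shmem_inode_info {
		spinlock_t		lock;
		unsigned long		flags;
		unsigned long		alloced;	/* data pages alloced to file */
		unsigned long		swapped;	/* subtotal assigned to swap */
		unsigned long		next_index;	/* highest alloced index + 1 */
		struct shared_policy	policy;		/* NUMA memory alloc policy */
		struct page		*i_indirect;	/* top indirect blocks page */
		swp_entry_t		i_direct[SHMEM_NR_DIRECT]; /* first blocks */
		struct list_head	swaplist;	/* chain of maybes on swap */
		struct inode		vfs_inode;
	};

Presumably the counters and the i_direct/i_indirect swap-entry tables are
the main candidates.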

