Re: [patch 1/1] Replace the memtype_lock spinlock with an rwlock.

From: Suresh Siddha
Date: Fri Feb 26 2010 - 19:54:07 EST


On Fri, 2010-02-26 at 09:42 -0800, Robin Holt wrote:
> While testing an application using xpmem (an out-of-kernel driver), we
> noticed a significantly lower page fault rate on x86_64 than on ia64.
> For one test running with 256 cpus, one thread per cpu, it took one
> minute, eight seconds for each of the threads to vm_insert_pfn 2GB
> worth of pages.
>
> The slowdown was tracked to lookup_memtype which acquires the spinlock
> memtype_lock. This heavily contended lock was slowing down fault time.
>
> I quickly converted the spinlock to an rwlock. This greatly improved
> vm_insert_pfn time to 4.3 seconds for the above test.

Acked-by: Suresh Siddha <suresh.b.siddha@xxxxxxxxx>

> As a theoretical test, I removed the lock around get_page_memtype to see
> what the full impact of the single shared lock actually was. With that
> change, the vm_insert_pfn time dropped to 1.6 seconds.
>
> I do not think the current global lock to protect individual pages is
> the "correct" method for protecting get/set of the page_memtype flag.
> It seems like the locking should live inside the functions that get
> and set the flags, using either a per-page lock or an atomic update.

I agree. Would you be willing to look into this and post a patch? If not,
I can look at it and post a fix next week.

thanks,
suresh
