Re: [PATCH -mm -v3] mm, swap: Sort swap entries before free

From: Andrew Morton
Date: Fri Apr 07 2017 - 17:44:05 EST


On Fri, 7 Apr 2017 14:49:01 +0800 "Huang, Ying" <ying.huang@xxxxxxxxx> wrote:

> To reduce the lock contention on swap_info_struct->lock when freeing
> swap entries, the freed swap entries are first collected in a per-CPU
> buffer and then freed later in a batch.  During the batch freeing, if
> consecutive swap entries in the per-CPU buffer belong to the same swap
> device, swap_info_struct->lock needs to be acquired/released only
> once, so the lock contention is reduced greatly.  But if there are
> multiple swap devices, the lock may be unnecessarily released/acquired
> because swap entries that belong to the same swap device may be
> non-consecutive in the per-CPU buffer.
>
> To solve the issue, the per-CPU buffer is sorted according to the swap
> device before the swap entries are freed.  Tests show that the time
> spent in swapcache_free_entries() is reduced after applying the patch.
>
> The patch was tested by measuring the run time of
> swapcache_free_entries() during the exit phase of applications that
> use a lot of swap space.  The results show that the average run time
> of swapcache_free_entries() is reduced by about 20% after applying the
> patch.
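
(For illustration, the sort-before-batch-free idea described in the
changelog above could look roughly like the sketch below.  This is not
the patch itself: the swp_swap_info_sketch() and free_one_entry()
helpers are hypothetical, and the real code in mm/swapfile.c may
differ.  The point is that sorting the per-CPU buffer by swap device
makes entries of the same device adjacent, so each
swap_info_struct->lock is taken once per run of entries rather than
potentially once per entry.)

#include <linux/sort.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/spinlock.h>

/* Order entries by swap device (type) so same-device entries are adjacent. */
static int swp_entry_cmp(const void *ent1, const void *ent2)
{
	const swp_entry_t *e1 = ent1, *e2 = ent2;

	return (int)swp_type(*e1) - (int)swp_type(*e2);
}

static void batch_free_sorted(swp_entry_t *entries, int n)
{
	struct swap_info_struct *si, *prev = NULL;
	int i;

	if (n <= 0)
		return;

	/* Group the per-CPU buffer by swap device before freeing. */
	sort(entries, n, sizeof(entries[0]), swp_entry_cmp, NULL);

	for (i = 0; i < n; i++) {
		si = swp_swap_info_sketch(entries[i]);	/* hypothetical lookup */
		if (si != prev) {
			if (prev)
				spin_unlock(&prev->lock);
			spin_lock(&si->lock);		/* once per device run */
			prev = si;
		}
		free_one_entry(si, entries[i]);		/* hypothetical helper */
	}
	if (prev)
		spin_unlock(&prev->lock);
}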

"20%" is useful info, but it is much better to present the absolute
numbers, please. If it's "20% of one nanosecond" then the patch isn't
very interesting. If it's "20% of 35 seconds" then we know we have
more work to do.

If there is indeed still a significant problem here then perhaps it
would be better to move the percpu swp_entry_t buffer into the
per-device structure swap_info_struct, so it becomes "per cpu, per
device". That way we should be able to reduce contention further.

Or maybe we do something else - it all depends upon the significance of
this problem, which is why a full description of your measurements is
useful.