Re: [PATCH 1/2] mm,mlock: drain pagevecs asynchronously

From: Andrew Morton
Date: Wed Jan 04 2012 - 17:05:48 EST


On Sun, 1 Jan 2012 02:30:24 -0500
kosaki.motohiro@xxxxxxxxx wrote:

> Because lru_add_drain_all() takes a long time.

Those LRU pagevecs are horrid things. They add considerable code and
conceptual complexity, they add pointless overhead on uniprocessor
builds, and the way they leave LRU pages floating around off any LRU
list is rather maddening.
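
For reference, this is roughly what the machinery in mm/swap.c looks
like - a simplified paraphrase, not the exact source, and the details
differ between kernel versions: a set of per-CPU pagevecs plus an add
path which parks the page there, off any LRU list, until the pagevec
fills or the CPU gets drained.

static DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS], lru_add_pvecs);
static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);

void __lru_cache_add(struct page *page, enum lru_list lru)
{
	struct pagevec *pvec = &get_cpu_var(lru_add_pvecs)[lru];

	page_cache_get(page);
	/*
	 * The page now sits in this CPU's pagevec, not on any LRU list,
	 * until the pagevec fills up or lru_add_drain() runs on this CPU.
	 */
	if (!pagevec_add(pvec, page))
		____pagevec_lru_add(pvec, lru);
	put_cpu_var(lru_add_pvecs);
}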

So the best way to fix all of that, as well as the problem we're
observing here, is, I hope, to remove them completely.

They've been in there for ~10 years and at the time they were quite
beneficial in reducing lru_lock contention, hold times, acquisition
frequency, etc.

The approach to take here is to prepare the patches which eliminate the
lru_*_pvecs, then identify the problems which occur as a result via
code inspection and runtime testing, and then fix those up.

Many sites which take lru_lock are already batching the operation.
It's a matter of hunting down the sites which take the lock once per
page and, where they are high-frequency, batching them up.
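
To make that concrete, here's the shape of the conversion (illustrative
only - it assumes all the pages are in one zone and it glosses over the
PageLRU and refcount handling):

/* Once-per-page: one lru_lock round trip for every single page. */
static void lru_add_unbatched(struct zone *zone, struct page **pages, int nr)
{
	int i;

	for (i = 0; i < nr; i++) {
		spin_lock_irq(&zone->lru_lock);
		SetPageLRU(pages[i]);
		add_page_to_lru_list(zone, pages[i], page_lru(pages[i]));
		spin_unlock_irq(&zone->lru_lock);
	}
}

/* Batched: take lru_lock once for the whole run of pages. */
static void lru_add_batched(struct zone *zone, struct page **pages, int nr)
{
	int i;

	spin_lock_irq(&zone->lru_lock);
	for (i = 0; i < nr; i++) {
		SetPageLRU(pages[i]);
		add_page_to_lru_list(zone, pages[i], page_lru(pages[i]));
	}
	spin_unlock_irq(&zone->lru_lock);
}

The lock acquisition cost then gets amortised over the whole batch
instead of being paid per page.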

Converting readahead to batch the locking will be pretty simple
(read_pages(), mpage_readpages(), others). That will fix pagefaults
too.
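
The once-per-page work in readahead is the add_to_page_cache_lru() call
in the ->readpage() fallback loop of read_pages(); paraphrasing (not the
exact source):

for (page_idx = 0; page_idx < nr_pages; page_idx++) {
	struct page *page = list_entry(pages->prev, struct page, lru);

	list_del(&page->lru);
	/*
	 * add_to_page_cache_lru() currently ends up in the per-CPU
	 * lru_add pagevecs.  With those gone this becomes a per-page
	 * lru_lock acquisition unless the LRU insertion is hoisted out
	 * of the loop and done once for the whole batch of pages.
	 */
	if (!add_to_page_cache_lru(page, mapping, page->index, GFP_KERNEL))
		mapping->a_ops->readpage(filp, page);
	page_cache_release(page);
}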

rotate_reclaimable_page() can be batched by batching
end_page_writeback(): a bio contains many pages already.
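
Something along these lines (a sketch only: end_page_writeback_norotate()
and rotate_reclaimable_pages() are hypothetical helpers, and the
refcounting and racy page-flag tests are glossed over):

static void mpage_end_io_write_batched(struct bio *bio, int err)
{
	struct pagevec rotate;
	int i;

	pagevec_init(&rotate, 0);

	for (i = 0; i < bio->bi_vcnt; i++) {
		struct page *page = bio->bi_io_vec[i].bv_page;

		/* hypothetical: end_page_writeback() minus the per-page rotate */
		end_page_writeback_norotate(page);
		if (PageLRU(page) && !PageActive(page) && !PageDirty(page))
			pagevec_add(&rotate, page);
	}

	/* hypothetical: one lru_lock trip to move them all to the tail */
	rotate_reclaimable_pages(&rotate);
	bio_put(bio);
}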

deactivate_page() can be batched too - invalidate_mapping_pages()
already works on large chunks of pages.
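
Again just a sketch - deactivate_pages() is a hypothetical helper that
would take lru_lock once and move every page in the pagevec to its
inactive list; the page locking and index bookkeeping are omitted:

static void invalidate_chunk(struct address_space *mapping, pgoff_t index)
{
	struct pagevec pvec, deactivate;
	int i;

	pagevec_init(&pvec, 0);
	pagevec_init(&deactivate, 0);

	if (!pagevec_lookup(&pvec, mapping, index, PAGEVEC_SIZE))
		return;

	for (i = 0; i < pagevec_count(&pvec); i++) {
		struct page *page = pvec.pages[i];

		/* couldn't invalidate it: queue it for deactivation instead */
		if (!invalidate_inode_page(page))
			pagevec_add(&deactivate, page);
	}

	deactivate_pages(&deactivate);	/* hypothetical: one lru_lock trip */
	pagevec_release(&pvec);
}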

Those three cases are fairly simple - we just didn't try, because the
lru_*_pvecs were there to do the work for us.