Re: [RFC PATCH 00/22] Per-cpu page allocator replacement prototype

From: Dave Hansen
Date: Thu May 09 2013 - 11:41:55 EST


On 05/08/2013 09:02 AM, Mel Gorman wrote:
> So preliminary testing indicates the results are a mixed bag. As long
> as locks are not contended, it performs fine, but parallel fault testing
> runs into spinlock contention on the magazine locks. A greater problem
> is that because CPUs share magazines, the struct pages sit in frequently
> dirtied cache lines. If CPU A frees a page to a magazine and CPU B
> immediately allocates it, then the cache lines for both the page and the
> magazine bounce, and that costs. It's on the TODO list to research
> whether the available literature has anything useful to say that does
> not depend on per-cpu lists and their associated problems.

If we don't want to bounce 'struct page' cache lines around, then we
_need_ to make sure that things that don't share caches don't use the
same magazine. I'm not sure there's any other way. But that doesn't
mean we have to _statically_ assign cores/threads to particular
magazines.

Say we had a percpu hint that points us to the last magazine we used.
We always go to it first, and fall back to round-robin if our preferred
one is contended. That way, if we have a mixture of tasks doing heavy
and light allocations, the heavy allocators will tend to "own" a
magazine, and the lighter ones will gravitate toward sharing one.
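Roughly what I'm picturing, building on the hypothetical struct
magazine above (all names illustrative, none of this is from Mel's
patches): trylock the magazine we used last, and only if that fails,
round-robin over the others.

#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/spinlock.h>

static DEFINE_PER_CPU(struct magazine *, preferred_magazine);

/* Returns a magazine with ->lock held; the caller unlocks. */
static struct magazine *magazine_pick(struct magazine *mags, int nr_mags)
{
	struct magazine *mag = this_cpu_read(preferred_magazine);
	int start, i;

	/* Fast path: the magazine we used last time, if uncontended. */
	if (mag && spin_trylock(&mag->lock))
		return mag;

	/* Fall back to round-robin, starting away from magazine 0. */
	start = raw_smp_processor_id() % nr_mags;
	for (i = 0; i < nr_mags; i++) {
		mag = &mags[(start + i) % nr_mags];
		if (spin_trylock(&mag->lock))
			goto out;
	}

	/* Everything is contended; just queue on one of them. */
	mag = &mags[start];
	spin_lock(&mag->lock);
out:
	/* Remember the winner; it's only a hint, so races are harmless. */
	this_cpu_write(preferred_magazine, mag);
	return mag;
}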

It might be taking things too far, but we could even raise the number of
magazines only when we actually *see* contention on the existing set.
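One rough way to detect that, still with hypothetical names: count
trylock failures in the allocation path and defer the actual resize to
a workqueue once they cross a threshold, since growing the magazine
array needs to allocate.

#include <linux/atomic.h>
#include <linux/workqueue.h>

#define MAGAZINE_CONTEND_THRESH	128

static atomic_t magazine_contended = ATOMIC_INIT(0);
static struct work_struct magazine_grow_work;	/* INIT_WORK()ed at boot */

/* Called each time spin_trylock() on a magazine fails. */
static inline void magazine_note_contention(void)
{
	if (atomic_inc_return(&magazine_contended) == MAGAZINE_CONTEND_THRESH) {
		atomic_set(&magazine_contended, 0);
		/* The work function would allocate and publish a new magazine. */
		schedule_work(&magazine_grow_work);
	}
}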

> 24 files changed, 571 insertions(+), 788 deletions(-)

oooooooooooooooooohhhhhhhhhhhhh.

The only question is how much we'll have to bloat it as we try to
optimize things. :)

BTW, I really like the 'magazine' name. It's not frequently used in
this kind of context and it conjures up a nice mental image whether it
be of stacks of periodicals or firearm ammunition clips.