Re: [RFC PATCH 0/10] split anon and file LRUs

From: Andrew Morton
Date: Wed Nov 07 2007 - 13:00:32 EST


On Tue, 6 Nov 2007 21:51:27 -0500 Rik van Riel <riel@xxxxxxxxxx> wrote:
> On Tue, 6 Nov 2007 18:40:46 -0800 (PST)
> Christoph Lameter <clameter@xxxxxxx> wrote:
>
> > On Tue, 6 Nov 2007, Rik van Riel wrote:
> >
> > > Also, a factor 16 increase in page size is not going to help
> > > if memory sizes also increase by a factor 16, since we already
> > > have trouble with today's memory sizes.
> >
> > Note that a factor 16 increase usually goes hand in hand with
> > more processors. The synchronization of multiple processors becomes a
> > concern. If you have an 8p and each of them tries to get the zone locks
> > for reclaim then we are already in trouble. And given the immaturity
> > of the handling of cacheline contention in current commodity hardware this
> > is likely to result in livelocks and/or starvation on some level.
>
> Which is why we need to greatly reduce the number of pages
> scanned to free a page. In all workloads.

It strikes me that splitting one list into two will not provide enough
of an improvement in search efficiency to achieve that kind of
reduction. A naive guess would be that it will, on average, halve the
amount of work which needs to be done.
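
A toy model of that naive guess, with made-up numbers, just to make the
magnitude visible:

/* Toy model, made-up numbers: expected pages scanned per page freed
 * when a fraction f of the list is immediately reclaimable and evenly
 * mixed in. A two-way split that doubles that density on the list
 * being scanned halves the cost.
 */
#include <stdio.h>

int main(void)
{
	double f = 0.001;	/* assume 0.1% easily reclaimable */

	printf("one list : ~%.0f scanned per page freed\n", 1.0 / f);
	printf("two lists: ~%.0f scanned per page freed\n", 1.0 / (2.0 * f));
	return 0;
}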

But we need multiple-orders-of-magnitude improvements to address the
pathological worst cases you're targeting here. Where is that
improvement supposed to come from?

Or is the problem which you're seeing due to scanning of mapped pages
at low "distress" levels?

Would be interested in seeing more details on all of this, please.
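
One easy set of numbers would be the pages-scanned-per-page-freed ratio
under the problem workloads; that can be pulled from the pgscan_* and
pgsteal_* counters in /proc/vmstat. A rough userspace sketch (counter
names vary between kernel versions, hence the prefix match):

/* Sum the pgscan_* and pgsteal_* counters from /proc/vmstat and print
 * pages scanned per page reclaimed. Userspace, not kernel code.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char name[64];
	unsigned long long val, scanned = 0, stolen = 0;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 1;
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strncmp(name, "pgscan_", 7))
			scanned += val;
		else if (!strncmp(name, "pgsteal_", 8))
			stolen += val;
	}
	fclose(f);
	if (stolen)
		printf("scanned/freed: %.2f\n", (double)scanned / stolen);
	return 0;
}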