Re: [PATCH 1/5] inode: Make unused inode LRU per superblock

From: Nick Piggin
Date: Fri May 28 2010 - 06:07:30 EST


On Fri, May 28, 2010 at 08:54:18AM +1000, Dave Chinner wrote:
> On Thu, May 27, 2010 at 01:32:30PM -0700, Andrew Morton wrote:
> > On Tue, 25 May 2010 18:53:04 +1000
> > Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> >
> > > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > >
> > > The inode unused list is currently a global LRU. This does not match
> > > the other global filesystem cache - the dentry cache - which uses
> > > per-superblock LRU lists. Hence we have related filesystem object
> > > types using different LRU reclamation schemes.
> > >
> > > To enable a per-superblock filesystem cache shrinker, both of these
> > > caches need to have per-sb unused object LRU lists. Hence this patch
> > > converts the global inode LRU to per-sb LRUs.
> > >
> > > The patch only does rudimentary per-sb proportioning in the shrinker
> > > infrastructure, as this gets removed when the per-sb shrinker
> > > callouts are introduced later on.
> > >
> > > ...
> > >
> > > + list_move(&inode->i_list, &inode->i_sb->s_inode_lru);
> >
> > It's a shame that s_inode_lru is still protected by inode_lock. One
> > day we're going to get in trouble over that lock. Migrating to a
> > per-sb lock would be logical and might help.
> >
> > Did you look into this?
>
> Yes, I have. Yes, it's possible. It's solving a different problem,
> so I figured it can be done in a different patch set.

It almost all goes away in my inode lock splitup patches. The inode
LRU and dirty lists were the last things protected by the global lock
there.
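
Roughly, the per-sb shape Andrew is suggesting would look something
like the following (a minimal sketch, not the actual patch -- the
field and function names here are illustrative only):

	/* fields that would be added to struct super_block (sketch) */
	spinlock_t		s_inode_lru_lock;	/* replaces inode_lock for the LRU */
	struct list_head	s_inode_lru;		/* unused inodes for this sb */
	unsigned long		s_nr_inodes_unused;

	/* inode has become unused: put it on its own sb's LRU */
	static void inode_lru_add(struct inode *inode)
	{
		struct super_block *sb = inode->i_sb;

		spin_lock(&sb->s_inode_lru_lock);
		list_add(&inode->i_list, &sb->s_inode_lru);
		sb->s_nr_inodes_unused++;
		spin_unlock(&sb->s_inode_lru_lock);
	}

The per-sb shrinker then only ever walks sb->s_inode_lru under
sb->s_inode_lru_lock, so two filesystems never contend on the same
lock.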

I am actually going to do per-zone LRUs for these lists, with
per-zone locks (which is better than per-sb because it also gives
NUMA scalability within a single sb).
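
The per-zone variant keeps the same pattern but indexes the LRU by
where the inode was allocated, so reclaim on one node never takes
another node's lock. Again a sketch only: the names are made up, it
indexes by node rather than zone for brevity, and the lookup via
virt_to_page()/page_to_nid() is just one plausible way to pick the
list:

	struct inode_lru_zone {
		spinlock_t		lock;
		struct list_head	list;
		unsigned long		nr_items;
	} ____cacheline_aligned_in_smp;

	static struct inode_lru_zone inode_lru_zones[MAX_NUMNODES];

	static void inode_lru_add_zone(struct inode *inode)
	{
		int nid = page_to_nid(virt_to_page(inode));
		struct inode_lru_zone *lru = &inode_lru_zones[nid];

		spin_lock(&lru->lock);
		list_add(&inode->i_list, &lru->list);
		lru->nr_items++;
		spin_unlock(&lru->lock);
	}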

The dirty/writeback lists should probably be per-bdi locked.
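
i.e. something like this (sketch only: the real lists hang off the
embedded struct bdi_writeback, which is flattened here for brevity,
and "dirty_lock" is a made-up name):

	static void bdi_dirty_inode(struct inode *inode)
	{
		struct backing_dev_info *bdi = inode->i_mapping->backing_dev_info;

		spin_lock(&bdi->dirty_lock);	/* per-bdi, not inode_lock */
		list_move(&inode->i_list, &bdi->b_dirty);
		spin_unlock(&bdi->dirty_lock);
	}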

