Re: [PATCH 2/2] tmpfs: Make tmpfs scalable with caches for freeblocks

From: Tim Chen
Date: Wed May 26 2010 - 15:36:06 EST


On Thu, 2010-05-20 at 16:13 -0700, Andrew Morton wrote:

> >
> > - spin_lock(&sbinfo->stat_lock);
> > - sbinfo->free_blocks += pages;
> > + spin_lock(&inode->i_lock);
> > + qtoken_return(&sbinfo->token_jar, pages);
> > inode->i_blocks -= pages*BLOCKS_PER_PAGE;
> > - spin_unlock(&sbinfo->stat_lock);
> > + spin_unlock(&inode->i_lock);
>
> Well most of the calls into the qtoken layer occur under inode->i_lock.
> So did we really need that spinlock inside the qtoken library code?
>
> It is a problem when library code such as qtoken performs its own
> internal locking. We have learned that such code is much more useful
> and flexible if it performs no locking at all, and requires that
> callers provide the locking (lib/rbtree.c, lib/radix-tree.c,
> lib/prio_heap.c, lib/flex_array.c, etcetera). Can we follow this
> approach with qtoken?
>

Andrew,

The inode->i_lock protects only a single inode, while the token jar is
shared by all the inodes in the tmpfs mount, so for performance reasons
we do not want to use inode->i_lock to serialize the entire token jar.
In the qtoken scheme, the spinlock inside the qtoken library protects
only the free tokens in the token jar's common pool. Most of the time
this lock need not be taken, because we can operate on the tokens in
the token jar's per-CPU cache; the lock is needed only when the cache
runs out of tokens. We put the intelligence in the library to manage
the cache and to decide when it is necessary to lock and access the
free tokens in the common pool. It is better to leave that locking
decision in the library code than to expose it to the user; otherwise
every user would have to check whether tokens should be taken from the
cache or from the common pool, duplicating the qtoken library's logic.

Regards,
Tim Chen


