Re: [PATCH v7 1/4] spinlock: A new lockref structure for lockless update of refcount

From: Ingo Molnar
Date: Tue Sep 03 2013 - 02:01:39 EST



* Waiman Long <waiman.long@xxxxxx> wrote:

> On 08/30/2013 10:42 PM, Al Viro wrote:
> >On Sat, Aug 31, 2013 at 03:35:16AM +0100, Al Viro wrote:
> >
> >>Aha... OK, I see what's going on. We end up with shm_mnt *not* marked
> >>as long-living vfsmount, even though it lives forever. See if the
> >>following helps; if it does (and I very much expect it to), we want to
> >>put it in -stable. As it is, you get slow path in mntput() each time
> >>a file created by shmem_file_setup() gets closed. For no reason whatsoever...
> >We still want MS_NOUSER on shm_mnt, so we'd better make sure that
> >shmem_fill_super() sets it on the internal instance... Fixed variant
> >follows:
> >
> >Signed-off-by: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
> >diff --git a/mm/shmem.c b/mm/shmem.c
> >index e43dc55..5261498 100644
> >--- a/mm/shmem.c
> >+++ b/mm/shmem.c
> >@@ -2615,13 +2615,15 @@ int shmem_fill_super(struct super_block *sb, void *data, int silent)
> > 	 * tmpfs instance, limiting inodes to one per page of lowmem;
> > 	 * but the internal instance is left unlimited.
> > 	 */
> >-	if (!(sb->s_flags & MS_NOUSER)) {
> >+	if (!(sb->s_flags & MS_KERNMOUNT)) {
> > 		sbinfo->max_blocks = shmem_default_max_blocks();
> > 		sbinfo->max_inodes = shmem_default_max_inodes();
> > 		if (shmem_parse_options(data, sbinfo, false)) {
> > 			err = -EINVAL;
> > 			goto failed;
> > 		}
> >+	} else {
> >+		sb->s_flags |= MS_NOUSER;
> > 	}
> > 	sb->s_export_op = &shmem_export_ops;
> > 	sb->s_flags |= MS_NOSEC;
> >@@ -2831,8 +2833,7 @@ int __init shmem_init(void)
> > 		goto out2;
> > 	}
> >
> >-	shm_mnt = vfs_kern_mount(&shmem_fs_type, MS_NOUSER,
> >-				 shmem_fs_type.name, NULL);
> >+	shm_mnt = kern_mount(&shmem_fs_type);
> > 	if (IS_ERR(shm_mnt)) {
> > 		error = PTR_ERR(shm_mnt);
> > 		printk(KERN_ERR "Could not kern_mount tmpfs\n");
>
> Yes, that patch worked. It eliminated the lglock as a bottleneck in
> the AIM7 workload. The lg_global_lock did not show up in the perf
> profile, whereas the lg_local_lock was only 0.07%.
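
For anyone following along: the reason kern_mount() makes a difference is
that it passes MS_KERNMOUNT down and tags the resulting vfsmount as a
long-lived internal mount, so mntput() on it never has to consider the
"last reference" case. A rough, paraphrased sketch of the helper behind
the kern_mount() macro (going from memory of the 3.11-era fs/namespace.c,
so details may differ):

	/* paraphrased sketch, not verbatim kernel source */
	struct vfsmount *kern_mount_data(struct file_system_type *type, void *data)
	{
		struct vfsmount *mnt;

		mnt = vfs_kern_mount(type, MS_KERNMOUNT, type->name, data);
		if (!IS_ERR(mnt)) {
			/*
			 * Long-term internal mount: it only goes away when the
			 * filesystem type is unregistered, so mntput() can assume
			 * it is never dropping the last reference.
			 */
			real_mount(mnt)->mnt_ns = MNT_NS_INTERNAL;
		}
		return mnt;
	}

And since MS_KERNMOUNT is then visible in sb->s_flags, shmem_fill_super()
can set MS_NOUSER itself for the internal instance, which is why the flag
no longer needs to be passed in shmem_init().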

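Those numbers line up with the shape of mntput_no_expire(): a mount whose
->mnt_ns is set (attached to a namespace, or marked internal as above) only
needs the per-cpu read side of the vfsmount lglock to drop its count, while
an unmarked mount has to take the global side every time. A simplified
sketch (again paraphrased from the 3.11-era fs/namespace.c, not verbatim):

	static void mntput_no_expire(struct mount *mnt)
	{
		br_read_lock(&vfsmount_lock);	/* lg_local_lock: per-cpu, cheap */
		if (likely(mnt->mnt_ns)) {
			/* cannot be the last reference, just drop the per-cpu count */
			mnt_add_count(mnt, -1);
			br_read_unlock(&vfsmount_lock);
			return;
		}
		br_read_unlock(&vfsmount_lock);

		/*
		 * Slow path: take the global side (lg_global_lock) to get a
		 * stable total count and decide whether the mount really goes
		 * away. Before the patch above, every close of a file from
		 * shmem_file_setup() ended up here, which is what dominated
		 * the AIM7 profile.
		 */
		br_write_lock(&vfsmount_lock);
		mnt_add_count(mnt, -1);
		if (mnt_get_count(mnt)) {
			br_write_unlock(&vfsmount_lock);
			return;
		}
		/* ... tear the mount down ... */
		br_write_unlock(&vfsmount_lock);
	}
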
Just curious: what's the worst bottleneck now in the optimized kernel? :-)

Thanks,

Ingo