RE: [f2fs-dev][PATCH] f2fs: optimize fs_lock for better performance

From: Chao Yu
Date: Wed Sep 11 2013 - 22:42:04 EST


Hi Gu,

> -----Original Message-----
> From: Gu Zheng [mailto:guz.fnst@xxxxxxxxxxxxxx]
> Sent: Wednesday, September 11, 2013 1:38 PM
> To: jaegeuk.kim@xxxxxxxxxxx
> Cc: chao2.yu@xxxxxxxxxxx; shu.tan@xxxxxxxxxxx;
> linux-fsdevel@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> linux-f2fs-devel@xxxxxxxxxxxxxxxxxxxxx
> Subject: Re: [f2fs-dev][PATCH] f2fs: optimize fs_lock for better performance
>
> Hi Jaegeuk, Chao,
>
> On 09/10/2013 08:52 AM, Jaegeuk Kim wrote:
>
> > Hi,
> >
> > First of all, thank you for the report, and please follow the email
> > writing rules. :)
> >
> > Anyway, I agree with the issue below.
> > One thing I can think of is that we don't need the spin_lock, since we
> > don't care about the exact lock number; we just need any number that
> > doesn't collide.
>
> IMHO, just moving sbi->next_lock_num++ before
> mutex_lock(&sbi->fs_lock[next_lock]) can avoid most of the imbalance.
> IMO, the chance that two or more threads increment sbi->next_lock_num at
> the same time is really very small. If you think that is not rigorous
> enough, changing next_lock_num to an atomic can fix it.
> What's your opinion?
>
> Regards,
> Gu

I tested plain "sbi->next_lock_num++" against the atomic version, and their
performance is almost the same when only a small number of threads race.
So, as you and Kim suggested, plain "sbi->next_lock_num++" is enough to fix
this issue.

Thanks for the advice.
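For reference, here is a minimal sketch of what mutex_lock_op() looks like
with that plain increment applied (illustrative only, not the exact patch
that will be merged): the counter is advanced before blocking, so concurrent
waiters spread across different fs_lock slots, and a lost update under
racing only skews the round-robin slightly.

static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
{
        unsigned char next_lock;
        int i = 0;

        /* Fast path: take any lock that is currently free. */
        for (; i < NR_GLOBAL_LOCKS; i++)
                if (mutex_trylock(&sbi->fs_lock[i]))
                        return i;

        /*
         * All locks are held: pick a slot round-robin and bump the
         * counter before sleeping on the mutex, so the next waiter
         * targets a different slot.  The unserialized increment may
         * lose an update under heavy racing, which only skews the
         * distribution a little; an atomic_t counter would close even
         * that window.
         */
        next_lock = sbi->next_lock_num++ % NR_GLOBAL_LOCKS;
        mutex_lock(&sbi->fs_lock[next_lock]);
        return next_lock;
}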
>
> >
> > So, how about removing the spin_lock?
> > And how about using a random number?
>
> > Thanks,
> >
> > 2013-09-06 (Fri), 09:48 +0000, Chao Yu:
> >> Hi Kim:
> >>
> >> I think there is a performance problem: when all sbi->fs_lock mutexes
> >> are held, all other threads may get the same next_lock value from
> >> sbi->next_lock_num in mutex_lock_op(), and then wait on the same lock
> >> at fs_lock[next_lock], which unbalances fs_lock usage.
> >>
> >> This can cost performance in multithreaded tests.
> >>
> >> Here is the patch to fix this problem:
> >>
> >> Signed-off-by: Yu Chao <chao2.yu@xxxxxxxxxxx>
> >>
> >> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> >> old mode 100644
> >> new mode 100755
> >> index 467d42d..983bb45
> >> --- a/fs/f2fs/f2fs.h
> >> +++ b/fs/f2fs/f2fs.h
> >> @@ -371,6 +371,7 @@ struct f2fs_sb_info {
> >>  	struct mutex fs_lock[NR_GLOBAL_LOCKS];	/* blocking FS operations */
> >>  	struct mutex node_write;		/* locking node writes */
> >>  	struct mutex writepages;		/* mutex for writepages() */
> >> +	spinlock_t spin_lock;			/* lock for next_lock_num */
> >>  	unsigned char next_lock_num;		/* round-robin global locks */
> >>  	int por_doing;				/* recovery is doing or not */
> >>  	int on_build_free_nids;			/* build_free_nids is doing */
> >> @@ -533,15 +534,19 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
> >>
> >>  static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
> >>  {
> >> -	unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> >> +	unsigned char next_lock;
> >>  	int i = 0;
> >>
> >>  	for (; i < NR_GLOBAL_LOCKS; i++)
> >>  		if (mutex_trylock(&sbi->fs_lock[i]))
> >>  			return i;
> >>
> >> -	mutex_lock(&sbi->fs_lock[next_lock]);
> >> +	spin_lock(&sbi->spin_lock);
> >> +	next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> >>  	sbi->next_lock_num++;
> >> +	spin_unlock(&sbi->spin_lock);
> >> +
> >> +	mutex_lock(&sbi->fs_lock[next_lock]);
> >>  	return next_lock;
> >>  }
> >>
> >> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> >> old mode 100644
> >> new mode 100755
> >> index 75c7dc3..4f27596
> >> --- a/fs/f2fs/super.c
> >> +++ b/fs/f2fs/super.c
> >> @@ -657,6 +657,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
> >>  	mutex_init(&sbi->cp_mutex);
> >>  	for (i = 0; i < NR_GLOBAL_LOCKS; i++)
> >>  		mutex_init(&sbi->fs_lock[i]);
> >> +	spin_lock_init(&sbi->spin_lock);
> >>  	mutex_init(&sbi->node_write);
> >>  	sbi->por_doing = 0;
> >>  	spin_lock_init(&sbi->stat_lock);
> >>
> >
>
>
