Re: [PATCH v4 0/5] kernfs: proposed locking and concurrency improvement

From: Greg Kroah-Hartman
Date: Thu May 27 2021 - 02:50:28 EST


On Thu, May 27, 2021 at 09:23:06AM +0800, Ian Kent wrote:
> On Mon, 2021-05-17 at 09:32 +0800, Ian Kent wrote:
> > On Fri, 2021-05-14 at 10:34 +0800, Fox Chen wrote:
> > > On Fri, May 14, 2021 at 9:34 AM Ian Kent <raven@xxxxxxxxxx> wrote:
> > > >
> > > > On Thu, 2021-05-13 at 23:37 +0800, Fox Chen wrote:
> > > > > Hi Ian
> > > > >
> > > > > On Thu, May 13, 2021 at 10:10 PM Ian Kent <raven@xxxxxxxxxx>
> > > > > wrote:
> > > > > >
> > > > > > On Wed, 2021-05-12 at 16:54 +0800, Fox Chen wrote:
> > > > > > > On Wed, May 12, 2021 at 4:47 PM Fox Chen
> > > > > > > <foxhlchen@xxxxxxxxx>
> > > > > > > wrote:
> > > > > > > >
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > I ran it on my benchmark (
> > > > > > > > https://github.com/foxhlchen/sysfs_benchmark).
> > > > > > > >
> > > > > > > > machine: aws c5 (Intel Xeon with 96 logical cores)
> > > > > > > > kernel: v5.12
> > > > > > > > benchmark: create 96 threads, bind each to a core, then run
> > > > > > > > open+read+close on a sysfs file simultaneously 1000 times.
> > > > > > > > result:
> > > > > > > > Without the patchset, an open+read+close operation takes
> > > > > > > > 550-570 us, and perf shows significant time (>40%) spent on
> > > > > > > > mutex_lock.
> > > > > > > > After applying it, that operation takes 410-440 us and perf
> > > > > > > > shows only ~4% of time on mutex_lock.
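> > > > > > > >
> > > > > > > > For reference, each thread runs roughly the following loop
> > > > > > > > (a simplified sketch, not the exact benchmark code; the
> > > > > > > > sysfs path is a placeholder, the real thing is in the repo
> > > > > > > > linked above):
> > > > > > > >
> > > > > > > > #define _GNU_SOURCE
> > > > > > > > #include <pthread.h>
> > > > > > > > #include <sched.h>
> > > > > > > > #include <fcntl.h>
> > > > > > > > #include <unistd.h>
> > > > > > > >
> > > > > > > > static void *worker(void *arg)
> > > > > > > > {
> > > > > > > >         long cpu = (long)arg;
> > > > > > > >         cpu_set_t set;
> > > > > > > >         char buf[4096];
> > > > > > > >
> > > > > > > >         /* pin this thread to its own core */
> > > > > > > >         CPU_ZERO(&set);
> > > > > > > >         CPU_SET(cpu, &set);
> > > > > > > >         pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
> > > > > > > >
> > > > > > > >         /* every thread hammers the same sysfs attribute */
> > > > > > > >         for (int i = 0; i < 1000; i++) {
> > > > > > > >                 int fd = open("/sys/<some attribute>", O_RDONLY); /* placeholder path */
> > > > > > > >                 if (fd >= 0) {
> > > > > > > >                         read(fd, buf, sizeof(buf));
> > > > > > > >                         close(fd);
> > > > > > > >                 }
> > > > > > > >         }
> > > > > > > >         return NULL;
> > > > > > > > }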
> > > > > > > >
> > > > > > > > It's weird, I don't see a huge performance boost compared
> > > > > > > > to v2, even
> > > > > > >
> > > > > > > I meant I don't see a huge performance boost here and it's
> > > > > > > way worse than v2.
> > > > > > > IIRC, for v2 the fastest one takes only 40us.
> > > > > >
> > > > > > Thanks Fox,
> > > > > >
> > > > > > I'll have a look at those reports but this is puzzling.
> > > > > >
> > > > > > Perhaps the added overhead of checking whether an update is
> > > > > > needed costs more than expected, more even than just taking
> > > > > > the lock and being done with it. Then there's the v2 series
> > > > > > ... I'll see if I can dig out your reports on those too.
> > > > >
> > > > > Apologies, I was mistaken, it's compared to V3, not V2.  The
> > > > > previous
> > > > > benchmark report is here.
> > > > > https://lore.kernel.org/linux-fsdevel/CAC2o3DKNc=sL2n8291Dpiyb0bRHaX=nd33ogvO_LkJqpBj-YmA@xxxxxxxxxxxxxx/
> > > >
> > > > Are all these tests using a single file name in the
> > > > open/read/close
> > > > loop?
> > >
> > > Yes, because it's easy to implement yet sufficient to trigger the
> > > mutex_lock contention.
> > >
> > > And you are right, it's not a real-life pattern, but on the bright
> > > side, it proves the original mutex_lock problem is no longer there. :)
> >
> > I've been looking at your reports and they are quite interesting.
> >
> > >
> > > > That being the case, the per-object inode lock will behave like a
> > > > mutex, and once contention occurs any speed benefit of a spinlock
> > > > over a mutex (or rwsem) will disappear.
> > > >
> > > > In this case changing from a write lock to a read lock in those
> > > > functions and adding the inode mutex will do nothing but add the
> > > > overhead of taking the read lock. And similarly, adding the update
> > > > check function also just adds overhead and, as we see, once
> > > > contention starts it has a cumulative effect that's often not
> > > > linear.
> > > >
> > > > The whole idea of a read lock/per-object spin lock was to reduce
> > > > the possibility of contention between different paths while not
> > > > impacting same-path accesses too much, for an overall gain. Based
> > > > on this I'm thinking the update check function is probably not
> > > > worth keeping; it just adds unnecessary churn and has a negative
> > > > impact on same-file contention access patterns.
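> > > >
> > > > Just to make the intent concrete, the shape of the idea is roughly
> > > > the following (a hand-wavy kernel-style sketch with made-up names,
> > > > not the actual patch):
> > > >
> > > > #include <linux/rwsem.h>
> > > > #include <linux/spinlock.h>
> > > >
> > > > /* hypothetical structures, for illustration only */
> > > > struct obj {
> > > >         spinlock_t attr_lock;            /* per-object lock */
> > > >         /* ... cached attributes ... */
> > > > };
> > > >
> > > > static DECLARE_RWSEM(tree_rwsem);        /* tree-wide lock */
> > > >
> > > > static void refresh_inode_attrs(struct obj *o)
> > > > {
> > > >         down_read(&tree_rwsem);          /* different paths share this */
> > > >         spin_lock(&o->attr_lock);        /* same-object accesses serialize */
> > > >         /* ... copy the cached attributes to the inode ... */
> > > >         spin_unlock(&o->attr_lock);
> > > >         up_read(&tree_rwsem);
> > > > }
> > > >
> > > > With a single shared path every thread still funnels through the
> > > > same per-object lock, so it degenerates to the mutex case, which
> > > > is the point above.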
> >
> > The reports indicate (to me anyway) that the slowdown isn't
> > due to kernfs. It looks more like kernfs is now putting pressure
> > on the VFS, mostly on the file table lock, though there's now a
> > mild amount of contention on a few other locks as well.
> >
> > That's a whole different problem, and those file table handling
> > functions don't appear to have any obvious problems, so they are
> > doing what they have to do and that can't be avoided.
> >
> > That's definitely out of scope for these changes.
> >
> > And, as you'd expect, once any appreciable amount of contention
> > happens our measurements go out the window, certainly with
> > respect to kernfs.
> >
> > It also doesn't change my opinion that checking whether an inode
> > attribute update is needed in kernfs isn't useful since, IIUC,
> > that file table lock contention would result even if you were
> > using different paths.
> >
> > So I'll drop that patch from the series.
>
> It will probably not make any difference, but for completeness
> and after some thought, I felt I should post what I think is
> happening with the benchmark.
>
> The benchmark application runs a pthreads thread for each CPU
> and goes into a tight open/read/close loop to demonstrate the
> contention that can occur on the kernfs mutex as the number of
> CPUs grows.
>
> But that tight open/read/close loop causes contention on the VFS
> file table because the pthreads threads share the process file
> table.
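>
> For illustration only (a hypothetical variant, not something proposed
> for the series): if the same loop is run in forked processes instead
> of threads, each worker has a private file table, so fd allocation no
> longer goes through a shared files_struct lock:
>
>         /* run_loop() and ncpus are placeholders for the benchmark bits */
>         for (long cpu = 0; cpu < ncpus; cpu++) {
>                 if (fork() == 0) {
>                         run_loop(cpu);  /* same open/read/close loop */
>                         _exit(0);
>                 }
>         }
>         while (wait(NULL) > 0)
>                 ;
>
> A run like that would show whether the remaining cost is the shared
> file table rather than kernfs itself.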
>
> So the poor performance is actually evidence the last patch is
> doing what it's meant to do rather than evidence of a regression
> with the series.
>
> The benchmark is putting pressure on the process file table on
> some hardware configurations but those critical sections are small
> and there's really nothing obvious that can be done to improve the
> file table locking.
>
> It is, however, important to remember that the benchmark application's
> access pattern is far from a normal access pattern.
>
> So I don't see the need for a new revision of the series with the
> last patch dropped.
>
> If there's a change of heart and the series were to be merged, I'll
> leave whether to include this last patch to the discretion of the
> maintainer, as the bulk of the improvement comes from the earlier
> patches anyway.

Can you please resubmit the series? It is long gone from my review
queue.

thanks,

greg k-h