Re: BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!

From: Mikhail Gavrilov
Date: Thu Jan 26 2023 - 17:43:10 EST


On Thu, Jan 26, 2023 at 10:39 PM Boqun Feng <boqun.feng@xxxxxxxxx> wrote:
>
> [Cc lock folks]
>
> On Thu, Jan 26, 2023 at 02:47:42PM +0500, Mikhail Gavrilov wrote:
> > On Wed, Jan 25, 2023 at 10:21 PM David Sterba <dsterba@xxxxxxx> wrote:
> > >
> > > On Wed, Jan 25, 2023 at 01:27:48AM +0500, Mikhail Gavrilov wrote:
> > > > On Tue, Jul 26, 2022 at 9:47 PM David Sterba <dsterba@xxxxxxx> wrote:
> > > > >
> > > > > On Tue, Jul 26, 2022 at 05:32:54PM +0500, Mikhail Gavrilov wrote:
> > > > > > Hi guys.
> > > > > > Whenever there is intensive writing to a btrfs volume, the message
> > > > > > "BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!" appears in the kernel logs.
> > > > >
> > > > > Increase the config value of LOCKDEP_CHAINS_BITS; the default is 16,
> > > > > and 18 tends to work.
> > > >
> > > > Hi,
> > > > Today I was able to get the message "BUG: MAX_LOCKDEP_CHAIN_HLOCKS too
> > > > low!" again even with LOCKDEP_CHAINS_BITS=18 and kernel 6.2-rc5.
> > > >
> > > > ❯ cat /boot/config-`uname -r` | grep LOCKDEP_CHAINS_BITS
> > > > CONFIG_LOCKDEP_CHAINS_BITS=18
> > > >
> > > > [88685.088099] BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!
> > > > [88685.088124] turning off the locking correctness validator.
> > > > [88685.088133] Please attach the output of /proc/lock_stat to the bug report
> > > > [88685.088142] CPU: 14 PID: 1749746 Comm: mv Tainted: G W L
> > > > ------- --- 6.2.0-0.rc5.20230123git2475bf0250de.38.fc38.x86_64 #1
> > > > [88685.088154] Hardware name: System manufacturer System Product
> > > > Name/ROG STRIX X570-I GAMING, BIOS 4408 10/28/2022
> > > >
> > > > What's next? Increase this value to 19?
> > >
> > > Yes, though increasing the value is a workaround so you may see the
> > > warning again.
> >
> > Is there any sense in this WARNING if we just ignore it and increase
> > the threshold value every time?
>
> Lockdep uses a statically allocated array to track lock holding chains,
> to avoid dynamic memory allocation in its own code. So if you see the
> warning, it means your test has more combinations of lock holdings than
> the array can record. In other words, you have hit the resource limit,
> and in that sense it makes sense to just ignore the warning and increase
> the value: you want to give lockdep enough resources to work, right?

It is needed for btrfs to work correctly. David, am I right?
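
For my own understanding (and please correct me if I misread the
source), the array in question seems to be sized roughly like this:

/* kernel/locking/lockdep_internals.h, as I read it */
#define MAX_LOCKDEP_CHAINS_BITS   CONFIG_LOCKDEP_CHAINS_BITS
#define MAX_LOCKDEP_CHAINS        (1UL << MAX_LOCKDEP_CHAINS_BITS)
#define MAX_LOCKDEP_CHAIN_HLOCKS  (MAX_LOCKDEP_CHAINS * 5)

/* kernel/locking/lockdep.c: the statically allocated cache itself */
static u16 chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS];

If that is right, the footprint grows as
2^CONFIG_LOCKDEP_CHAINS_BITS * 5 * sizeof(u16), which I guess is where
the figure you give below comes from.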

>
> > Maybe set it to 99 right away? Or remove the check condition entirely?
>
> That would require 2^99 * 5 * sizeof(u16) bytes of memory for the lock
> holding chains array...
>
> However, a few other options we can try in lockdep are:
>
> * warn but do not turn off lockdep: the lock holding chain is
> only a cache for the lock holding combinations lockdep has ever
> seen; we also record the dependencies in the graph. Without the
> lock holding chain, lockdep can still work, just slower.
>
> * allow dynamic memory allocation in lockdep: I think this might
> be OK since we have lockdep_recursion to avoid lockdep code ->
> mm code -> lockdep code -> mm code ... deadlock. But maybe I'm
> missing something. And even if we allow it, the memory usage
> doesn't change; you will still need that amount of memory to
> track lock holding chains.
>
> I'm not sure whether these options are better than just increasing the
> number; maybe to unblock you ASAP, you can try making it 30 and make
> sure you have enough memory to test.
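
Taking the sizing above at face value, 2^CONFIG_LOCKDEP_CHAINS_BITS * 5
* sizeof(u16) works out to roughly (please correct my arithmetic if it
is off):

LOCKDEP_CHAINS_BITS=16: 2^16 * 5 * 2 bytes =  640 KiB
LOCKDEP_CHAINS_BITS=18: 2^18 * 5 * 2 bytes =  2.5 MiB
LOCKDEP_CHAINS_BITS=19: 2^19 * 5 * 2 bytes =  5.0 MiB
LOCKDEP_CHAINS_BITS=30: 2^30 * 5 * 2 bytes = 10.0 GiB

So 30 would still fit on this machine, which I assume is why you say to
make sure there is enough memory, while 99 is clearly out of the
question.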

About just increasing LOCKDEP_CHAINS_BITS by 1: where should this be
done? In the vanilla kernel on kernel.org? In a specific distribution?
Or must the user rebuild the kernel himself? Increasing
LOCKDEP_CHAINS_BITS by 1 may be the most reliable solution, but it is
hard to deliver to end users, because it defeats the point of using a
packaged distribution (the user would have to change LOCKDEP_CHAINS_BITS
in the config and rebuild the kernel themselves, roughly as sketched
below).
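
For anyone else hitting this who wants to bump the value themselves, I
believe the rebuild boils down to something like the following (the
value 19 here is just the next step up, not something btrfs guarantees
to be enough):

# in an unpacked kernel source tree, starting from the running config
cp /boot/config-$(uname -r) .config
./scripts/config --set-val LOCKDEP_CHAINS_BITS 19
make olddefconfig
make -j$(nproc)
sudo make modules_install install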

It would be great if the chosen value simply worked everywhere, all the
time. 30? OK! But as I understand it, btrfs does not give any guarantee
of that. David, am I right?

Anyway, thank you for keeping the conversation going.

--
Best Regards,
Mike Gavrilov.