Re: [PATCH] xfs: always free inline data before resetting inode fork during ifree

From: Sasha Levin
Date: Wed Mar 28 2018 - 15:30:16 EST


On Wed, Mar 28, 2018 at 02:32:28PM +1100, Dave Chinner wrote:
>How much time are your test rigs going to be able to spend running
>xfstests? A single pass on a single filesystem config on spinning
>disks will take 3-4 hours of run time. And we have at least 4 common
>configs that need validation (v4, v4 w/ 512b block size, v5
>(defaults), and v5 w/ reflink+rmap) and so you're looking at a
>minimum 12-24 hours of machine test time per kernel you'd need to
>test.

No reason they can't run in parallel, right?
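
For reference, here's roughly how those four configs could be expressed
as sections in an xfstests local.config, so each one can be handed off
to a separate test VM. This is only a sketch: device paths and section
names are placeholders, and I'd want you to confirm the MKFS_OPTIONS
actually match what you mean by each config:

    # shared settings (placeholder devices)
    TEST_DEV=/dev/vdb
    TEST_DIR=/mnt/test
    SCRATCH_DEV=/dev/vdc
    SCRATCH_MNT=/mnt/scratch

    # v4 format
    [xfs_v4]
    MKFS_OPTIONS="-m crc=0"

    # v4 with 512-byte blocks
    [xfs_v4_512]
    MKFS_OPTIONS="-m crc=0 -b size=512"

    # v5 (mkfs defaults)
    [xfs_v5]
    MKFS_OPTIONS=""

    # v5 with rmapbt and reflink enabled
    [xfs_v5_rmap_reflink]
    MKFS_OPTIONS="-m rmapbt=1,reflink=1"

Each machine would then run its own section, e.g. "./check -s xfs_v4",
so the four configs run side by side instead of back to back.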

>> > From: Sasha Levin <alexander.levin@xxxxxxxxxxxxx>
>> > To: Sasha Levin <alexander.levin@xxxxxxxxxxxxx>, linux-xfs@xxxxxxxxxxxxxxx, "Darrick J . Wong" <darrick.wong@xxxxxxxxxx>
>> > Cc: Brian Foster <bfoster@xxxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx
>> > Subject: Re: [PATCH] xfs: Correctly invert xfs_buftarg LRU isolation logic
>> > In-Reply-To: <20180306102638.25322-1-vbendel@xxxxxxxxxx>
>> > References: <20180306102638.25322-1-vbendel@xxxxxxxxxx>
>> >
>> > Hi Vratislav Bendel,
>> >
>> > [This is an automated email]
>> >
>> > This commit has been processed by the -stable helper bot and determined
>> > to be a high probability candidate for -stable trees. (score: 6.4845)
>> >
>> > The bot has tested the following trees: v4.15.12, v4.14.29, v4.9.89, v4.4.123, v4.1.50, v3.18.101.
>> >
>> > v4.15.12: OK!
>> > v4.14.29: OK!
>> > v4.9.89: OK!
>> > v4.4.123: OK!
>> > v4.1.50: OK!
>> > v3.18.101: OK!
>> >
>> > Please reply with "ack" to have this patch included in the appropriate stable trees.
>
>That might help, but the testing and validation are completely
>opaque. If I wanted to know what that "OK!" actually meant, where
>would I go to find that out?

This is actually something I want maintainers to dictate. What sort of
testing would make the XFS folks happy here? Right now I'm running
"./check 'xfs/*'" with xfstests. Is that sufficient? Is there anything
else you'd like to see?

--
Thanks,
Sasha