Re: [PATCH/RFC] clk: gate: Add some kunit test suites

From: Vaittinen, Matti
Date: Tue May 05 2020 - 02:16:07 EST



On Mon, 2020-05-04 at 13:19 -0700, Brendan Higgins wrote:
> On Sun, May 3, 2020 at 10:54 PM Vaittinen, Matti
> <Matti.Vaittinen@xxxxxxxxxxxxxxxxx> wrote:
> > On Wed, 2020-04-29 at 12:15 +0800, David Gow wrote:
> > > On Tue, Apr 14, 2020 at 7:46 PM Vaittinen, Matti
> > > <Matti.Vaittinen@xxxxxxxxxxxxxxxxx> wrote:
> > > > Hello Stephen & All,
> > > >
> > > > Prologue:
> > > >
> > > > I have been traumatized in the past - by unit tests :) Thus I
> > > > am always a bit jumpy when I see people adding UTs. I always
> > > > see the inertia UTs add to development - when people change
> > > > anything they must also change the impacted UTs - and every
> > > > unnecessary UT case is an actual burden. I was once buried
> > > > under such a burden. Back then I quite often found myself
> > > > wondering why I spent two days fixing UT cases after a
> > > > necessary code change which took maybe 10 minutes. Please read
> > > > my comments knowing this history and make your decisions based
> > > > on a less biased - brighter - understanding :]
> > > >
> > > >
> > >
> > > Hey Matti,
> > >
> > > As someone who's definitely wasted a lot of time fixing
> > > poorly-thought-through tests, but who is nevertheless working
> > > enthusiastically on KUnit, I wanted to respond quickly here.
> >
> > I appreciate your reply. And I appreciate your (and others') work
> > on KUnit. I don't think UTs are bad. I believe UTs are a great and
> > powerful tool - when used on the right occasions. But UTs
> > definitely are a "double-edged sword" - you can hit your own leg.
>
> Sure, I saw you just added a unit test, so I would assume that you
> don't think that UTs are inherently bad :-)
>
> > > Certainly, the goal is not to reduce development velocity, though
> > > it is redistributed a bit: hopefully the extra time spent updating
> > > tests will be more than paid back in identifying and fixing bugs
> > > earlier, as well as making testing one's own changes simpler. Only
> > > time will tell if we're right or not, but if they turn out to
> > > cause more problems than they solve we can always adjust or remove
> > > them. I remain optimistic, though.
> >
> > Fixing and identifying bugs is definitely "the thing" in UTs. But
> > when it comes to weighing the benefits of UTs vs. the downsides -
> > that's hard. The first thing on that front is to understand the
> > cost of UTs. And in my experience, many people underestimate that.
> > It's easy to see things black & white and think UTs are only good -
> > which is not true.
> >
> > UT cases are code which must be maintained like any other code. And
> > we must never add code which is not beneficial, as maintaining it
> > is work. We constantly work to decrease the amount of code to be
> > maintained - UTs are no exception - all code is a burden.
> > Unnecessary code is a burden with no purpose. And UTs have no other
> > purpose but to point out mistakes. The only
>
> I am in agreement with everything you said so far.
>
> > good test is one that fails, pointing out a bug which would
> > otherwise have gone unnoticed. A passing test is a waste.
>
> Not to be uncharitable in how I read this, but I don't think this is
> quite right. I think we would all agree that a test that *cannot*
> fail is a waste, but on the other hand, you should never submit code
> that doesn't pass *good tests*, right? Like you wrote tests for your
> linear range stuff to verify that it worked as expected; I don't
> know if some of the tests didn't pass on the first try, but you did
> ultimately fix all the issues, and submitted only tests that pass,
> right? I am pretty sure no one would accept code accompanied with
> tests that don't pass.
>
> So would a better way to phrase this be: "tests that can only pass
> are a waste"? Or maybe, "tests that don't provide useful information
> on a failure are a waste"?

What I really mean is that a test that never fails is a waste. To be
more accurate, though: a test that does not fail at some point during
development is a waste.

> > So the key thing when considering whether adding a test is
> > beneficial is whether the test is likely to point out a bug (in an
> > early phase) - a bug that would otherwise have gone unnoticed.
>
> Why not a bug later on? On another project, I noticed a piece of code
> that kept breaking, so I added a test for that piece of code, and
> people stopped breaking it: mission accomplished, right?

You are right. "Later on" is fine :) As is "on another project".

> > > I do think that these tests are unlikely to increase the burden
> > > on developers much:
> >
> > All code which uses kernel APIs increases the burden on whoever
> > needs to change the API. "Much" can be discussed ;)
> >
> > > they seem mostly to test interfaces and behaviour which is used
> > > quite a bit elsewhere in the kernel: changes that break them seem
> > > likely to be reasonably disruptive anyway, so these tests aren't
> > > going to add too much to that overall,
> >
> > This statement is valid for almost all exported kernel APIs - yet
> > adding tests for all APIs is definitely a bad idea. The effort is
>
> I don't understand, can you elaborate? From my experience, APIs are
> frequently one of the most valuable places to test.

Yes. APIs are what we should focus on. But not _all_ APIs. The APIs
to test should be carefully selected.


1. I would not add tests for APIs which have a simple implementation.
I know you said "simple" is subjective, so I'll write some pseudo code
to elaborate:

I think we can agree that for example APIs like

bool is_foo(sometype maybefoo)
{
        return !!maybefoo.foo;
}

are simple and not likely to break. Adding a test like

sometype yesfoo = { .foo = 1 };
sometype nofoo = { .foo = 0 };

footest()
{
        TEST_ASSERT(is_foo(yesfoo));
        TEST_ASSERT(!is_foo(nofoo));
}

is unlikely to bring any value - but it is likely to cause extra work
when sometype is replaced with a new, cooler construct - or foo is
renamed, or anything else changes here.
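
To illustrate the churn (a purely hypothetical rework - the new
layout below is made up for the sake of the example):

/* Hypothetical rework: sometype no longer has a plain .foo member. */
struct sometype {
        unsigned long flags;    /* bit 0 now encodes "foo" */
};

bool is_foo(sometype maybefoo)
{
        return !!(maybefoo.flags & BIT(0));
}

Now footest() no longer builds: both fixtures ({ .foo = 1 } and
{ .foo = 0 }) must be rewritten - work that gains zero coverage.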

This is just an extreme example - from my perspective the amount of
work required to write and maintain a test will easily exceed the
benefits even for _much_ more complex constructs. But as you said,
"simplicity" is subjective and I need to trust the maintainers to
understand where to draw the line :)

Another example of APIs for which I would omit tests are APIs whose
functionality is covered by some other tests. If we stick to the clk
domain we can look at the following example:

Clk has a few 'bulk' APIs that repeat some operation for many clk
entities.

https://elixir.bootlin.com/linux/v5.7-rc4/source/drivers/clk/clk-bulk.c

Let's assume someone wrote test(s) for clk_bulk_put():

void clk_bulk_put(int num_clks, struct clk_bulk_data *clks)
{
        while (--num_clks >= 0) {
                clk_put(clks[num_clks].clk);
                clks[num_clks].clk = NULL;
        }
}

Then I'd argue that writing tests for clk_put() would not bring much
additional value.
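
To make that concrete, here is a minimal sketch of such a bulk test
(not from any real suite - the clk names and the NULL device are made
up, and it assumes the clks can actually be acquired):

static void clk_bulk_put_test(struct kunit *test)
{
        /* Hypothetical consumer clk names, assumed to resolve. */
        struct clk_bulk_data clks[] = {
                { .id = "test_clk_a" },
                { .id = "test_clk_b" },
        };

        KUNIT_ASSERT_EQ(test, 0,
                        clk_bulk_get(NULL, ARRAY_SIZE(clks), clks));

        /* clk_bulk_put() calls clk_put() on every entry and NULLs it. */
        clk_bulk_put(ARRAY_SIZE(clks), clks);
        KUNIT_EXPECT_PTR_EQ(test, (struct clk *)NULL, clks[0].clk);
        KUNIT_EXPECT_PTR_EQ(test, (struct clk *)NULL, clks[1].clk);
}

A separate clk_put() test could only re-cover what the loop inside
clk_bulk_put() already exercises here.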

So - I would straight away drop at least the tests for simple APIs
and for APIs that are covered by other tests.

>
> > cumulative. We should never add new code just because the
> > maintenance effort is relatively small. We must only add code if
> > it adds value.
>
> I agree with this statement, but I don't see how it shows that API
> testing is inherently bad.

My intention was not to say API testing is inherently bad. My purpose
was to say that not _all_ APIs should be tested - even though all
exported APIs usually fulfill David's criterion of also being used in
many places other than the tests. So I was just saying that we should
definitely drop the "this is used by other code so adding a test
doesn't matter" argument. That argument leads to the dark side :]

> I know you are countering David's statement that it doesn't add much
> cost; however, I think that API testing, if done properly, not only
> doesn't add much cost, it actually reduces the future effort of
> changing those APIs.

Yes. I agree. A good test can reduce effort when it is written for a
complex, error-prone implementation. But for many APIs the resulting
test load would (in my opinion) just hinder the changes.

>
> > > and may ultimately make
> > > things a bit simpler by helping to document and verify the
> > > changes.
> >
> > Yes. Definitely. But we must never deny that all added (test) code
> > adds work - and weigh the pros and cons only with that in mind.
> > The danger of UTs is that we don't admit the cons.
>
> True. There is no magic bullet here. UTs are but one tool in a
> toolbox; there are other tools better suited to solve some problems,
> and no problem is likely to be solved well if the person using the
> tool uses the tool improperly. I don't think anyone here thinks that
> UTs are inherently good and all UTs make all code better.

You optimist :p XD

> > > A few other notes inline:
> > >
> > > > On Tue, 2020-04-07 at 20:56 -0700, Stephen Boyd wrote:
> > > > > Test various parts of the clk gate implementation with the
> > > > > kunit
> > > > > testing
> > > > > framework.
> > > > >
> > > > > Cc: Brendan Higgins <brendanhiggins@xxxxxxxxxx>
> > > > > Cc: <kunit-dev@xxxxxxxxxxxxxxxx>
> > > > > Signed-off-by: Stephen Boyd <sboyd@xxxxxxxxxx>
> > > > > ---
> > > > >
> > > > > This patch is on top of this series[1] that allows the clk
> > > > > framework to be selected by Kconfig language.
> > > > >
> > > > > [1]
> > > > > https://lore.kernel.org/r/20200405025123.154688-1-sboyd@xxxxxxxxxx
> > > > >
> > > > > drivers/clk/Kconfig | 8 +
> > > > > drivers/clk/Makefile | 1 +
> > > > > drivers/clk/clk-gate-test.c | 481 ++++++++++++++++++++++++++++++++++++
> > > > > 3 files changed, 490 insertions(+)
> > > > > create mode 100644 drivers/clk/clk-gate-test.c
> > > > >
> > > > > diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig
> > > > > index 6ea0631e3956..66193673bcdf 100644
> > > > > --- a/drivers/clk/Kconfig
> > > > > +++ b/drivers/clk/Kconfig
> > > > > @@ -377,4 +377,12 @@ source "drivers/clk/ti/Kconfig"
> > > > > source "drivers/clk/uniphier/Kconfig"
> > > > > source "drivers/clk/zynqmp/Kconfig"
> > > > >
> > > > > +# Kunit test cases
> > > > > +config CLK_GATE_TEST
> > > > > +        tristate "Basic gate type Kunit test"
> > > > > +        depends on KUNIT
> > > > > +        default KUNIT
> > > > > +        help
> > > > > +          Kunit test for the basic clk gate type.
> > > > > +
> > > > > endif
> > > > > diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile
> > > > > index f4169cc2fd31..0785092880fd 100644
> > > > > --- a/drivers/clk/Makefile
> > > > > +++ b/drivers/clk/Makefile
> > > > > @@ -7,6 +7,7 @@ obj-$(CONFIG_COMMON_CLK) += clk-divider.o
> > > > > obj-$(CONFIG_COMMON_CLK) += clk-fixed-factor.o
> > > > > obj-$(CONFIG_COMMON_CLK) += clk-fixed-rate.o
> > > > > obj-$(CONFIG_COMMON_CLK) += clk-gate.o
> > > > > +obj-$(CONFIG_CLK_GATE_TEST) += clk-gate-test.o
> > > > > obj-$(CONFIG_COMMON_CLK) += clk-multiplier.o
> > > > > obj-$(CONFIG_COMMON_CLK) += clk-mux.o
> > > > > obj-$(CONFIG_COMMON_CLK) += clk-composite.o
> > > > > diff --git a/drivers/clk/clk-gate-test.c b/drivers/clk/clk-gate-test.c
> > > > > new file mode 100644
> > > > > index 000000000000..b1d6c21e9698
> > > > > --- /dev/null
> > > > > +++ b/drivers/clk/clk-gate-test.c
> > > > > @@ -0,0 +1,481 @@
> > > > > +// SPDX-License-Identifier: GPL-2.0
> > > > > +/*
> > > > > + * KUnit test for clk gate basic type
> > > > > + */
> > > > > +#include <linux/clk.h>
> > > > > +#include <linux/clk-provider.h>
> > > > > +#include <linux/platform_device.h>
> > > > > +
> > > > > +#include <kunit/test.h>
> > > > > +
> > > > > +static void clk_gate_register_test_dev(struct kunit *test)
> > > > > +{
> > > > > +        struct clk_hw *ret;
> > > > > +        struct platform_device *pdev;
> > > > > +
> > > > > +        pdev = platform_device_register_simple("test_gate_device", -1,
> > > > > +                                               NULL, 0);
> > > > > +        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
> > > > > +
> > > > > +        ret = clk_hw_register_gate(&pdev->dev, "test_gate", NULL, 0,
> > > > > +                                   NULL, 0, 0, NULL);
> > > > > +        KUNIT_EXPECT_NOT_ERR_OR_NULL(test, ret);
> > > > > +        KUNIT_EXPECT_STREQ(test, "test_gate", clk_hw_get_name(ret));
> > > > > +        KUNIT_EXPECT_EQ(test, 0UL, clk_hw_get_flags(ret));
> > > > > +
> > > > > +        clk_hw_unregister_gate(ret);
> > > > > +        platform_device_put(pdev);
> > > > > +}
> > > > > +
> > > > > +static void clk_gate_register_test_parent_names(struct kunit *test)
> > > > > +{
> > > > > +        struct clk_hw *parent;
> > > > > +        struct clk_hw *ret;
> > > > > +
> > > > > +        parent = clk_hw_register_fixed_rate(NULL, "test_parent", NULL,
> > > > > +                                            0, 1000000);
> > > > > +        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent);
> > > > > +
> > > > > +        ret = clk_hw_register_gate(NULL, "test_gate", "test_parent",
> > > > > +                                   0, NULL, 0, 0, NULL);
> > > > > +        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
> > > > > +        KUNIT_EXPECT_PTR_EQ(test, parent, clk_hw_get_parent(ret));
> > > > > +
> > > > > +        clk_hw_unregister_gate(ret);
> > > > > +        clk_hw_unregister_fixed_rate(parent);
> > > > > +}
> > > > > +
> > > > > +static void clk_gate_register_test_parent_data(struct kunit *test)
> > > > > +{
> > > > > +        struct clk_hw *parent;
> > > > > +        struct clk_hw *ret;
> > > > > +        struct clk_parent_data pdata = { };
> > > > > +
> > > > > +        parent = clk_hw_register_fixed_rate(NULL, "test_parent", NULL,
> > > > > +                                            0, 1000000);
> > > > > +        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent);
> > > > > +        pdata.hw = parent;
> > > > > +
> > > > > +        ret = clk_hw_register_gate_parent_data(NULL, "test_gate",
> > > > > +                                               &pdata, 0, NULL, 0, 0,
> > > > > +                                               NULL);
> > > > > +        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
> > > > > +        KUNIT_EXPECT_PTR_EQ(test, parent, clk_hw_get_parent(ret));
> > > > > +
> > > > > +        clk_hw_unregister_gate(ret);
> > > > > +        clk_hw_unregister_fixed_rate(parent);
> > > > > +}
> > > > > +
> > > > > +static void clk_gate_register_test_parent_data_legacy(struct kunit *test)
> > > > > +{
> > > > > +        struct clk_hw *parent;
> > > > > +        struct clk_hw *ret;
> > > > > +        struct clk_parent_data pdata = { };
> > > > > +
> > > > > +        parent = clk_hw_register_fixed_rate(NULL, "test_parent", NULL,
> > > > > +                                            0, 1000000);
> > > > > +        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent);
> > > > > +        pdata.name = "test_parent";
> > > > > +
> > > > > +        ret = clk_hw_register_gate_parent_data(NULL, "test_gate",
> > > > > +                                               &pdata, 0, NULL, 0, 0,
> > > > > +                                               NULL);
> > > > > +        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
> > > > > +        KUNIT_EXPECT_PTR_EQ(test, parent, clk_hw_get_parent(ret));
> > > > > +
> > > > > +        clk_hw_unregister_gate(ret);
> > > > > +        clk_hw_unregister_fixed_rate(parent);
> > > > > +}
> > > > > +
> > > > > +static void clk_gate_register_test_parent_hw(struct kunit *test)
> > > > > +{
> > > > > +        struct clk_hw *parent;
> > > > > +        struct clk_hw *ret;
> > > > > +
> > > > > +        parent = clk_hw_register_fixed_rate(NULL, "test_parent", NULL,
> > > > > +                                            0, 1000000);
> > > > > +        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent);
> > > > > +
> > > > > +        ret = clk_hw_register_gate_parent_hw(NULL, "test_gate", parent,
> > > > > +                                             0, NULL, 0, 0, NULL);
> > > > > +        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
> > > > > +        KUNIT_EXPECT_PTR_EQ(test, parent, clk_hw_get_parent(ret));
> > > > > +
> > > > > +        clk_hw_unregister_gate(ret);
> > > > > +        clk_hw_unregister_fixed_rate(parent);
> > > > > +}
> > > > > +
> > > > > +static void clk_gate_register_test_hiword_invalid(struct kunit *test)
> > > > > +{
> > > > > +        struct clk_hw *ret;
> > > > > +
> > > > > +        ret = clk_hw_register_gate(NULL, "test_gate", NULL, 0, NULL,
> > > > > +                                   20, CLK_GATE_HIWORD_MASK, NULL);
> > > > > +
> > > > > +        KUNIT_EXPECT_TRUE(test, IS_ERR(ret));
> > > > > +}
> > > >
> > > > Do we really need all these registration tests? I guess most
> > > > of the code path would be tested by just one of these - how
> > > > much value do we add by adding the rest of the registration
> > > > cases? (just wondering)
> > > >
> > >
> > > The advantage of having multiple tests here is that it's really
> > > obvious if just one of these functions breaks: that's probably
> > > not enormously likely,
> >
> > My point exactly.
>
> I honestly don't know. TBH, I think I have a good idea of what a
> good test case should look like; as for measuring how useful a test
> case is, that's harder, and I am not sure anyone really has a great
> answer for it. I just read a literature review on measuring code
> quality, and they concluded that coverage provided only a weak
> signal at best, and that the best signal came from "engineers'
> perception of code quality".
>
> The way I usually write tests is: I write my code first. I then
> write some unit tests to exercise some basic features of my code.
> Then I write some integration or end-to-end tests to make sure my
> code "fits" in the job it's supposed to do. If everything works, I
> am done. If not, then I start writing tests to narrow down what
> doesn't work the way I expect. Eventually I write a test that
> clearly exposes the issue. I fix the issue, and I rerun the tests
> to make sure they flip from fail to pass. I then look through all
> the new tests I wrote and try to make a judgement call about how
> useful I think the test will be in the future. (I am curious what
> other people's thoughts are on this approach.)

I like this approach. I know colleagues who claim that writing the
test first is the only way to go. I also know colleagues who rely
only on UTs in some simulated environment. And I know colleagues who
hardly test at all.

And I usually have an opinion on these colleagues as engineers. I
know what to think when I ask myself whether I would hire this or
that colleague for my own company, or take him/her onto my project.
(And for all of the jealous persons, or persons who started to think
how to utilize my money and position: no - I don't have a company, I
am not rich, and I am not in a position where I pick co-workers. So
just forget it. XD ). And I must say that their preferred way of
testing does not correlate with this. So I'd say there is no single
good approach that makes you a better engineer.

I also write the code first. Then I run it in its real environment
and do some tests which verify the intended functionality. At that
phase I usually fix the code. Then I evaluate the complexity and add
UTs for the portions which I think are error-prone. And after the
release/submission I try to address questions/error reports quickly.

> When it comes down to it, I don't know this code base well enough to
> know whether all these registration tests are useful, but I think, as
> they are written, they are easy to understand, and that is what I was
> reviewing.

I think we can't demand more from you. I hope someone will be there
questioning whether tests are useful or could be shrunk - but I don't
think we can demand that from you. The problem is that many people
see test code as a secondary, less interesting thing and tend to skip
reviewing it - but as I have said in many places, test code has a
potentially huge impact on code maintenance. Hence we do need experts
in each particular area to pay attention to the test code too. And I
think I have now done my share of trying to ensure that happens.
"Raised awareness" is the term. And no doubt many people think I've
now done much more than I should :p

>
> This is all a very long way of saying: "I think this is a question
> for Stephen" :-)

Yes. I think Stephen is perfectly capable of deciding whether the
tests are good as they were, or whether some of my comments were
relevant.

>
> > > but there's also not much of a cost to having them
> >
> > And this is the other side of the equation. I leave this
> > estimation to others - I just want to point out that an imbalance
> > in this equation is what usually leads to writing excessive
> > amounts of tests - which eventually just hinders development with
> > little or no benefit. But as I said, I will leave this to smarter
> > guys to evaluate. Just wanted to point out that this should be
> > considered :)
>
> Fair enough. And I think that's great! Unit testing is "new" to the
> Linux kernel and I want to make sure we do it the right way. From my
> experience, I think the best way to bring about a "new" thing like
> this is to get someone who is overly liberal in adopting the new
> thing, someone who is more conservative, and maybe some invested, but
> less opinionated bystanders and make them all try to agree.

You didn't write overly conservative, thanks for that ;) (I'm using
vim, not vi, after all. And that was released in the 90s, which makes
me pretty modern, right?)

> Once we have debated over a number of tests, it should start getting
> easier. Hopefully, a good pattern will emerge and people will start
> following that pattern.

I like your view :) Well said.

Best Regards
--Matti