Re: RFC - kernel selftest result documentation (KTAP)

From: Frank Rowand
Date: Fri Jun 19 2020 - 13:58:38 EST


On 2020-06-16 07:08, Paolo Bonzini wrote:
> On 15/06/20 21:07, Bird, Tim wrote:
>>> Note: making the plan line required differs from TAP13 and TAP14. I
>>> think it's the right choice, but we should be clear.
>
> As an aside, where is TAP14?
>
>> With regards to making it optional or not, I don't have a strong
>> preference. The extra info seems helpful in some circumstances.
>> I don't know if it's too onerous to make it a requirement or not.
>> I'd prefer if it was always there (either at the beginning or the end),
>> but if there is some situation where it's quite difficult to calculate,
>> then it would be best not to mandate it. I can't think of any impossible
>> situations at the moment.
>
> I think making the plan mandatory is a good idea. "Late plans" work
> very well for cases where you cannot know in advance the number of tests
> (for example in filters that produce TAP from other output), and provide
> an additional safety net.
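
For reference, a plan-line check that accepts either an early or a late
plan is simple to write. Here is a sketch in Python (the line formats
follow the draft spec being discussed; nothing here is settled):

```python
import re

PLAN_RE = re.compile(r"^1\.\.(\d+)$")          # plan line: "1..N"
RESULT_RE = re.compile(r"^(ok|not ok)\b")      # test result lines

def check_plan(lines):
    """Return (planned, seen) test counts, accepting a plan line
    either before the first result ("early") or after the last one
    ("late"). Raises ValueError if no plan line is present, since
    the proposal is to make it mandatory."""
    planned = None
    seen = 0
    for line in lines:
        m = PLAN_RE.match(line)
        if m:
            planned = int(m.group(1))
        elif RESULT_RE.match(line):
            seen += 1
    if planned is None:
        raise ValueError("mandatory plan line (1..N) is missing")
    return planned, seen
```

A late plan, as in check_plan(["ok 1 - a", "not ok 2 - b", "1..2"]),
parses the same as an early one; the parser only needs to see the plan
line somewhere before end-of-stream.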
>
>>> "Bail out!" to be moved to "optional" elements, since it may not appear.
>>> And we should clarify TAP13 and TAP14's language to say it should only
>>> appear when the test is aborting without running later tests -- for this
>>> reason, I think the optional "description" following "Bail out!" should
>>> be made required. I.e. it must be: "Bail out! $reason"
>>
>> I'll make sure this is listed as optional.
>> I like adding a mandatory reason.
>
> +1.
>
>>> TAP13/14 makes the description optional; are we making it required?
>>> (I think we should.) There seems to be a TAP13/14 "convention" of
>>> starting <description> with "- ", which I'm on the fence about. It
>>> does make parsing maybe a little easier.
>>
>> I would like the description to be required.
>> I don't have a strong opinion on the dash. I'm OK with either one (dash
>> or no dash), but we should make kselftest and KUnit consistent.
>
> I think no mandatory dash is better (or even mandatory no-dash!). We
> can suggest removing it when formatting TAP output.
>
>>>> Finally, it is possible to use a test directive to indicate another
>>>> possible outcome for a test: that it was skipped. To report that
>>>> a test case was skipped, the result line should start with the
>>>> result "not ok", and the directive "# SKIP" should be placed after
>>>> the test description. (Note that this deviates from the TAP13
>>>> specification).
>
> How so? The description comes first, but there can be a description of
> the directive.
>
> not ok 4 - Summarized correctly # TODO Not written yet
>
>>>> It is usually helpful if a diagnostic message is emitted to explain
>>>> the reasons for the skip. If the message is a single line and is
>>>> short, the diagnostic message may be placed after the '# SKIP'
>>>> directive on the same line as the test result. However, if it is
>>>> not on the test result line, it should precede the test line (see
>>>> diagnostic data, next).
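
As a concrete illustration, picking apart a result line with an optional
directive can be done with one regular expression. A sketch (the grammar
is my reading of the draft above, not a settled spec; descriptions
containing '#' are not handled):

```python
import re

# <result> <number> [- ]<description> [# <directive> [<explanation>]]
LINE_RE = re.compile(
    r"^(?P<result>ok|not ok) (?P<num>\d+)(?: - | )(?P<desc>[^#]*?)"
    r"(?:\s*# (?P<directive>SKIP|TODO)\b\s*(?P<why>.*))?$"
)

def parse_result(line):
    """Return the parsed fields of a result line, or None if the
    line is not a test result at all."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None
```

So "not ok 4 - check_config # SKIP CONFIG_FOO not set" yields the result
"not ok", the description "check_config", the directive "SKIP", and the
short skip reason; a plain "ok 1 - simple test" yields no directive.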
>>>>
>>>> Bail out!
>>>> ---------
>>>> If a line in the test output starts with 'Bail out!', it indicates
>>>> that the test was aborted for some reason. It indicates that
>>>> the test is unable to proceed, and no additional tests will be
>>>> performed.
>>>>
>>>> This can be used at the very beginning of a test, or anywhere in the
>>>> middle of the test, to indicate that the test can not continue.
>>>
>>> I think the required syntax should be:
>>>
>>> Bail out! <reason>
>>>
>>> And to make it clear that this is optionally used to indicate an early
>>> abort. (Though with a leading plan line, a parser should be able to
>>> determine this on its own.)
>
> True. However, "Bail out!" allows distinguishing issues with the harness
> (such as ENOSPC) from test aborts.
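
If the reason becomes mandatory, detecting a bailout is cheap for a
parser. A sketch (the "malformed if empty" behavior assumes the required
"Bail out! <reason>" syntax proposed above):

```python
def parse_bail_out(line):
    """Return the reason string if this is a 'Bail out!' line,
    else None. A missing reason is treated as malformed, per the
    proposal to make the reason mandatory."""
    prefix = "Bail out!"
    if not line.startswith(prefix):
        return None
    reason = line[len(prefix):].strip()
    if not reason:
        raise ValueError("'Bail out!' requires a reason")
    return reason
```

A harness can then stop consuming results as soon as parse_bail_out()
returns non-None, and report the reason separately from ordinary "not ok"
failures.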
>
>>>> - TODO directive
>>>
>>> Agreed: SKIP should cover everything TODO does.
>
> XFAIL/XPASS are different from SKIP. I personally don't have a need for
> them, but kselftests includes XFAIL/XPASS exit codes and they aren't
> reflected into selftests/kselftest/runner.sh.
>
> Likewise, kselftest.h has ksft_inc_xfail_cnt but not
> ksft_test_result_xfail/ksft_test_result_xpass.

It has ksft_exit_xfail() and ksft_exit_xpass(). But at 5.8-rc1 I do not
see any callers of those two functions.

>
> It's important to notice in the spec that the TODO directive inverts the
> direction of ok/not-ok (i.e. XFAIL, the "good" result, is represented by
> "not ok # TODO").

For a slightly more verbose explanation, the mindset behind pytest "xfail"
and TAP "not ok # TODO explanation" is different, though they both provide
essentially the same information.

pytest "xfail" (if I understand correctly) means the test is expected to
fail, and it does indeed fail.

pytest "xpass" means the test is expected to fail, but it does not fail.

TAP "not ok # TODO explanation" means the test is expected to fail, and
it does indeed fail.

TAP "ok # TODO explanation" means the test is expected to fail, but it does not fail.

It does not seem to be a good idea to have two different ways of reporting
the same thing.

The TAP method seems cleaner to me. It is consistently reported whether or
not the test passed, no matter what the expectation of pass/fail is. "# TODO ..."
is a way for the code maintainer to convey information to the test community
that the test is expected to fail due to a known bug. If the output of
the test changes to "ok # TODO explanation", then the tester can report
that as an issue to the code maintainer, and the code maintainer can then
determine whether the bug got fixed (and thus the "# TODO explanation"
should be removed from the test code).
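
That inversion is easy to capture in a results interpreter. A sketch in
Python (the outcome names XFAIL/XPASS follow kselftest usage; the
function itself is illustrative, not from any existing parser):

```python
def classify(ok, directive):
    """Map a TAP result plus optional directive to a test outcome.
    Note how TODO inverts ok/not-ok: an expected failure that does
    fail ("not ok # TODO") is the benign XFAIL outcome, while an
    expected failure that passes ("ok # TODO") is the surprising
    XPASS that should be reported back to the maintainer."""
    if directive == "SKIP":
        return "SKIP"  # neither a success nor a failure
    if directive == "TODO":
        return "XFAIL" if not ok else "XPASS"
    return "PASS" if ok else "FAIL"
```

With this mapping, only FAIL (and arguably XPASS) should count against a
test run, which is the point being argued below about SKIP as well.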

Can we just remove ksft_exit_xfail() and ksft_exit_xpass(), since no one is
using them, and there is an equivalent way to express the same information
in TAP?

>
>>>> - test identifier
>>>> - multiple parts, separated by ':'
>>>
>>> This is interesting... is the goal to be able to report test status over
>>> time by name?
>
> What about "/" instead?
>
>>>> Finally,
>>>> - Should a SKIP result be 'ok' (TAP13 spec) or 'not ok' (current kselftest practice)?
>>>> See https://testanything.org/tap-version-13-specification.html
>>>
>>> Oh! I totally missed this. Uhm. I think "not ok" makes sense to me "it
>>> did not run successfully". ... but ... Uhhh ... how do XFAIL and SKIP
>>> relate? Neither SKIP nor XFAIL count toward failure, though, so both
>>> should be "ok"? I guess we should change it to "ok".
>
> See above for XFAIL.
>
> I initially raised the issue with "SKIP" because I have a lot of tests
> that depend on hardware availability---for example, a test that does not
> run on some processor kinds (e.g. on AMD, or old Intel)---and for those
> SKIP should be considered a success.

No, SKIP should not be considered a success. It should also not be considered
a failure. Please do not blur the lines between success, failure, and
skipped.

-Frank

>
> Paolo
>
>> I have the same initial impression. In my mind, a skip is "not ok", since
>> the test didn't run. However, a SKIP should be treated differently
>> from either "ok" or "not ok" by the results interpreter, so I don't think it
>> matters. Originally I was averse to changing the SKIP result to "ok"
>> (as suggested by Paolo Bonzini [1]), but I checked and it's pretty trivial to
>> change the parser in Fuego, and it would make the kernel results format
>> match the TAP13 spec. I don't see a strong reason for us to be different
>> from TAP13 here.
>>
>> I raised this issue on our automated testing conference call last week
>> (which includes people from the CKI, Fuego, KernelCI and LKFT projects), and
>> so people should be chiming in if their parser will have a problem with this change.
>>
>> [1] https://lkml.kernel.org/lkml/20200610154447.15826-1-pbonzini@xxxxxxxxxx/T/
>>
>> Thanks very much for the feedback.
>> -- Tim
>>
>
> .
>