Re: LTS testing with latest kselftests - some failures
From: Luis R. Rodriguez
Date: Fri Jun 16 2017 - 12:47:13 EST
Kees, please review 47e0bbb7fa98 below.
Brian, please review be4a1326d12c below.
On Thu, Jun 15, 2017 at 11:26:53PM +0530, Sumit Semwal wrote:
> Hello Greg, Shuah,
> While testing 4.4.y and 4.9.y LTS kernels with latest kselftest,
To be clear, it seems like you are taking the latest upstream kselftest and
running it against older stable kernels. Furthermore you seem to only run the
shell script tests but are using the older kselftest drivers? Is this all
correct? Otherwise it is unclear how you are running into the issues below.
Does 0-day do the same? I thought 0-day takes just the kselftest from each tree
submitted. That *seemed* to me like the way it was designed. Shuah?
What's the name of *this* testing effort BTW? Is this part of the overall
kselftest effort? Or is this something Linaro does for LTS kernels? If there
is a name for your effort, can you document it here so that others are aware of it?
Replying below only to the firmware stuff.
> we found a couple more test failures due to test-kernel mismatch:
> 1. firmware tests: - linux 4.5  and 4.10  added a few updates to
> tests, and related updates to lib/test_firmware.c to improve the
> tests. Stable-4.4 misses these patches to lib/test_firmware.c. Stable
> 4.9 misses the second update.
<-- snip, skipped 2. and 3. -->
> For all the 3 listed above, we will try and update the tests to gracefully exit.
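One way to have a newer shell test exit gracefully on an older kernel is to probe
for the test driver's sysfs knob before using it. A minimal sketch of the idea
(the structure and skip message here are illustrative, not the actual kselftest
code; the sysfs directory and the trigger_async_request knob are the real ones
exposed by the test_firmware driver):

```shell
#!/bin/sh
# Sketch: probe for a sysfs knob so a newer shell test can skip
# gracefully instead of failing against an older C test driver.
DIR=/sys/devices/virtual/misc/test_firmware

has_knob()
{
	# Return 0 if the loaded test driver exposes the named knob.
	[ -f "$DIR/$1" ]
}

if has_knob trigger_async_request; then
	echo "async knob present, running async request test"
	# ... the actual async test body would go here ...
else
	echo "SKIP: trigger_async_request not available (older test driver?)"
fi
```

This keeps the test usable across kernel versions without dropping it outright.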
Hmm, this actually raises a good kselftest question:
I *thought* kselftests were running tests on par with the kernels, so we would
*not* take the latest upstream kselftests to test against older kernels. Is this
not the case?
If this is indeed incorrect then indeed you have a problem and then I understand
this email, however this manual approach seems rather fragile. Had I
understood this practice was expected I would have tried to design test cases
a bit differently, but it *also* begs the question of what to do when
the latest kselftest shell script requires some new knob from a test driver
which *is* fine to backport to the respective kselftest C test driver for an
older kernel. What makes this hard is that C test drivers may depend on new APIs,
so you might have to do some manual work to backport some fancy new API in old
ways. This makes me question the value of this mismatch between shell and C
test drivers on kselftests. Your effort seems to be all manual and empirical?
Did we design kselftests with this in mind? Even though using the latest
kselftest shell tests against older stable kernels with older kselftest C
drivers seems like a good idea (provided the above is resolved), your
current suggestion to just drop some tests seems worrisome and seems to
*invalidate* the gains of such an effort and all the pains you are going through.
If you are just dropping patches / tests loosely, your approach could be missing
out on valid tests which *may* have missed out on respective stable patches.
The test-firmware async knobs are a good example, and so is the firmware custom
fallback trigger. These patches are just extending test coverage, so they help
test the existing old kernel API.
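For instance, the pre-existing trigger_request knob already exercises the old
synchronous request path; the newer tests just add coverage around it. A rough
sketch of what that exercise looks like (guarded so it is a no-op when the
test_firmware module is not loaded; the firmware file name is made up):

```shell
#!/bin/sh
# Sketch: writing a name to trigger_request asks the kernel to
# request that firmware file, exercising the old kernel API.
DIR=/sys/devices/virtual/misc/test_firmware

if [ -d "$DIR" ]; then
	if printf 'test-firmware.bin' > "$DIR"/trigger_request 2>/dev/null; then
		status=ran
		echo "triggered synchronous firmware request"
	else
		status=failed
		echo "request failed (no such firmware file?)"
	fi
else
	status=skipped
	echo "test_firmware driver not loaded; nothing to exercise"
fi
```

The point is that backported test extensions sit on top of knobs like this one,
so they widen coverage of an API the old kernel already has.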
It's not worth Cc'ing stable on those as they are not fixing a stable issue,
however their tests at times may reveal an issue which a subsequent patch *does
fix*, and that fix *is* Cc'd to stable.
An alternative to the way you are doing things, if I understood it correctly,
would be for us to consider evaluating pegging as stable candidates not only
kselftest shell tests but also kselftest C driver extensions; then, instead of
using the latest kselftests against older kernels, you could just use the
kselftest on the respective old stable kernel, and the *backport* effort becomes
part of the stable pipeline. Note I think this is very debatable... and I would
not be surprised if Greg does not like it, but it's worth *considering* if there
is indeed value to your current attempted approach.
The alternative, of course, is to only use the kselftest from each respective
kernel, under the assumption each stable fix does make its way through.
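Concretely, that would mean building and running the selftests that ship in the
stable tree itself rather than the latest upstream copies, something like the
following from a kernel source checkout (sketched with a guard so it is a no-op
outside a kernel tree; TARGETS=firmware and run_tests are the standard kselftest
make interface):

```shell
#!/bin/sh
# Sketch: run the firmware selftests shipped with *this* kernel tree,
# so the shell tests and the C test drivers always match.
if [ -d tools/testing/selftests/firmware ]; then
	if make -C tools/testing/selftests TARGETS=firmware run_tests; then
		result=ran
	else
		result=failed
	fi
else
	result=skipped
	echo "not inside a kernel source tree; nothing to run"
fi
```

With this approach the tests can never get ahead of the drivers they poke at.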
So -- what metrics show the value of your current approach? Do we have stats?
> I will also individually request subsystem authors / mailing lists for
> each of these towards help in improving these tests if required, but
> wanted to use this thread as a converging point.
> Thanks and Best regards,
> : https://lkml.org/lkml/2015/12/8/816
> Patches added via :
> eb910947c82f (test: firmware_class: add asynchronous request trigger)
This is an example of a C test driver extension which could be useful for
testing the existing API on older kernels.
> be4a1326d12c (test: firmware_class: use kstrndup() where appropriate)
I can't see this being a stable candidate; it's unclear why this has come up in
this thread?
> 47e0bbb7fa98 (test: firmware_class: report errors properly on failure)
Hrm, come to think of it, this *might* have been a stable fix, however the
commit did not mention any specifics about a real issue here. Kees?
> : https://lkml.org/lkml/2017/1/23/440
> Patch added via :
> 061132d2b9c9 (test_firmware: add test custom fallback trigger)
This is another C test driver extension for kselftest which is useful to test
the custom firmware fallback mechanism.
Also, just a heads up: these are other stable fixes for firmware in the pipeline;
they are not merged yet though. In this case no new test driver C functionality
is extended, just shell. But the test extensions do help test an old issue,
so the test cases are worthy of being cherry-picked into kselftests, as there is
a fix tagged as stable which is pending stable integration. Of course, since
they are not upstream yet it means they still have to go through final review:
[PATCH 0/4] firmware: fix fallback mechanism by ignoring SIGCHLD
[PATCH 1/4] test_firmware: add test case for SIGCHLD on sync fallback
[PATCH 2/4] swait: add the missing killable swaits
[PATCH 3/4] firmware: avoid invalid fallback aborts by using killable swait
[PATCH 4/4] firmware: send -EINTR on signal abort on fallback mechanism