Tarkan Erimer wrote:
| Of course, it's impossible to test all things/scenarios. But such a
| tool should allow us to minimize the issues we will face. To improve
| the quality of kernel releases, maybe we can create a special kernel
| testing tool.
A variety of bugs cannot be caught by automated tests, notably those which happen with rare hardware, arise from very specific interactions with hardware, or show up only under very special workloads.
| My idea is also to hunt bugs more easily via a tool like this, one
| that has a console/X interface and the ability to bisect. That way,
| users who have little or no knowledge of git/bisect can easily try
| to find the problematic commits/bugs.
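As an aside, much of that machinery exists already: "git bisect run"
drives a bisection automatically and classifies each commit by the exit
status of a user-supplied test script. Below is a minimal sketch of such
a script in Python; the build command and the boot_test.sh helper are
placeholders for whatever the tool would actually run, not parts of any
existing tool:

  #!/usr/bin/env python3
  # Sketch of a test script for "git bisect run".
  # Usage: git bisect start <bad> <good>
  #        git bisect run ./bisect-test.py
  # Exit status: 0 = good commit, 1 = bad commit, 125 = skip (untestable).
  import subprocess
  import sys

  def run(cmd):
      return subprocess.call(cmd, shell=True)

  # Build the kernel at whatever commit git has checked out for us.
  if run("make -j8") != 0:
      sys.exit(125)  # does not build: skip this commit

  # boot_test.sh stands in for whatever actually boots and checks the
  # result, e.g. a qemu run or a reboot of a dedicated test box.
  sys.exit(0 if run("./boot_test.sh") == 0 else 1)

A front end with a console/X interface would mostly have to wrap these
few commands and present the offending commit at the end.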
An interesting thing to investigate would be to start at the regression meta bugs at bugzilla.kernel.org, go through all bugs which are linked from there, and try to figure out
- whether these bugs could have been found by automated or at least
  semiautomatic tests on pre-merge code, and
- what those tests would have had to look like, e.g. what equipment
  would have been necessary.
Let's look back at the posting at the thread start:
| On Wed, Apr 30, 2008 at 10:03 AM, David Miller <davem@xxxxxxxxxxxxx> wrote:
| > Yesterday, I spent the whole day bisecting boot failures
| > on my system due to the totally untested linux/bitops.h
| > optimization, which I fully analyzed and debugged.
...
| > Yet another bootup regression got added within the last 24
| > hours.
Bootup regressions can be caught automatically if the necessary machines are available and candidate code gets exposure to test parks of those machines. I hear this is already being done, and increasingly so. But those test parks will only ever cover a tiny fraction of existing hardware and cannot be subjected to all code iterations and all possible .config permutations, hence they will have limited coverage of bugs.
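To illustrate the sort of check one node in such a test park might
perform, here is a rough Python sketch; the qemu invocation, file names,
and the "BOOT-OK" marker are assumptions for illustration, not a
description of any existing farm:

  #!/usr/bin/env python3
  # Rough sketch: boot a freshly built kernel under qemu and report
  # whether it reaches userspace within a time limit.
  import subprocess

  CMD = [
      "qemu-system-x86_64",
      "-kernel", "arch/x86/boot/bzImage",
      "-initrd", "test-initrd.img",   # initrd that prints BOOT-OK, then powers off
      "-append", "console=ttyS0 panic=5",
      "-nographic", "-no-reboot",     # exit instead of rebooting on panic
  ]

  def boot_ok(timeout=120):
      try:
          out = subprocess.run(CMD, capture_output=True,
                               timeout=timeout).stdout.decode(errors="replace")
      except subprocess.TimeoutExpired:
          return False                # hung before reaching userspace
      return "BOOT-OK" in out         # marker printed by the test initrd

  if __name__ == "__main__":
      print("PASS" if boot_ok() else "FAIL")

Running something like this per commit and per interesting .config on
each machine type is cheap for the hardware qemu can emulate; the hard
part is exactly the hardware it cannot.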
And things like the bitops issue depend on review much more than on tests, AFAIU.