Re: Quick extra regression report listing regressions with available fixes

From: Thorsten Leemhuis
Date: Thu Jul 11 2024 - 12:59:46 EST


On 10.07.24 17:15, Thorsten Leemhuis wrote:
> Hi Linus, as you might release the final on Sunday, here is a quick
> "extra" report in case you want to know about a few unfixed regressions
> introduced during the 6.10 cycle. By chance this list only contains
> issues for which a fix is already available; I track a few more
> regressions, but they are IMHO not worth mentioning for one reason or
> another.
> [...]


FWIW a quick add-on in case you are interested.

There are three regressions from recent cycles with available fixes
that afaics will miss 6.10:

* copy_from_kernel_nofault_allowed() breaks MMUless devices:
https://bugzilla.kernel.org/show_bug.cgi?id=218953
Known for 4 weeks. Caused by 169f9102f9198b ("ARM: 9350/1: fault:
Implement copy_from_kernel_nofault_allowed()") [v6.9-rc1]. Fix is in
-next as 3ccea4784fddd9 ("ARM: Remove address checking for MMUless
devices") since earlier this week:
https://lore.kernel.org/all/20240611100947.32241-1-yangyj.ee@xxxxxxxxx/

* fs/ntfs3: memory corruption when page_size changes (like from Windows
-> RasPi5)
https://lore.kernel.org/ntfs3/20240529064053.2741996-1-chenhuacai@xxxxxxxxxxx/
https://lore.kernel.org/all/20240614155437.2063380-1-popcornmix@xxxxxxxxx/
Known since the end of May. Caused by 865e7a7700d930 ("fs/ntfs3: Reduce
stack usage") [v6.8-rc4, v6.6.19]. Fix is in -next as 68ef5b8c612b0c
("fs/ntfs3: Update log->page_{mask,bits} if log->page_size changed"
(likely for 3+ weeks already, not sure, did not verify):
https://lore.kernel.org/ntfs3/20240529064053.2741996-2-chenhuacai@xxxxxxxxxxx/

* perf jevents: DDR controller metrics are completely unavailable on
i.MX8M* systems
https://lore.kernel.org/linux-perf-users/20240531194414.1849270-1-l.stach@xxxxxxxxxxxxxx/
Known since the end of May through a proposed fix that already fell
through the cracks once -- and maybe that happened again, as nothing
has happened for a week now.


And while at it and in case you care, there are also a few regressions
from recent cycles where the culprit has been identified for more than
three weeks now, but there still is no fix in sight or in -next:

* 9p: autopkgtest qemu jobs broken
https://bugzilla.kernel.org/show_bug.cgi?id=218916
https://lore.kernel.org/lkml/Zj0ErxVBE3DYT2Ea@gpd/
https://bugs.launchpad.net/ubuntu/+source/autopkgtest/+bug/2056461
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1072004
Known upstream for 2 months now. To quote a complaint about it I got
today: "[this is] causing the testing infrastructure of two distros to
grind to a halt, requiring pinning old kernels (Ubuntu) or simply
disabling all tests that require qemu (Debian). The reporters have
analyzed it, root caused it, found the commit causing it, provided
reproducers, reported to mailing list and bugzilla, provided tentative
patches. On the kernel side? Crickets."

* x86/bugs: dosemu crashes on some x86-32 machines
https://bugzilla.kernel.org/show_bug.cgi?id=218707
https://lore.kernel.org/lkml/IdYcxU6x6xuUqUg8cliJUnucfwfTO29TrKIlLGCCYbbIr1EQnP0ZAtTxdAM2hp5e5Gny_acIN3OFDS6v0sazocnZZ1UBaINEJ0HoDnbasSI=@protonmail.com/
Known for 13 weeks. Fixes have been under review for some time, but
review is slow. Latest proposed fix is:
https://lore.kernel.org/lkml/20240710-fix-dosemu-vm86-v4-1-aa6464e1de6f@xxxxxxxxxxxxxxx/

* can: m_can: kernel hang
https://lore.kernel.org/lkml/e72771c75988a2460fa8b557b0e2d32e6894f75d.camel@xxxxxxxxxxxxxxx/
Known for three weeks now and no fix in sight because people are
apparently busy with other stuff. But likely not something many people
care about.

Ciao, Thorsten