On Thu, Jun 24, 2021 at 05:38:35PM +0100, Robin Murphy wrote:
> On 2021-06-24 17:27, Al Viro wrote:
> > On Thu, Jun 24, 2021 at 02:22:27PM +0100, Robin Murphy wrote:
> > > FWIW I think the only way to make the kernel behaviour any more robust here
> > > would be to make the whole uaccess API more expressive, such that rather
> > > than simply saying "I only got this far" it could actually differentiate
> > > between stopping due to a fault which may be recoverable and worth retrying,
> > > and one which definitely isn't.
> > ... and propagate that "more expressive" information through what, 3 or 4
> > levels in the call chain?
> >
> > From include/linux/uaccess.h:
> >
> >  * If raw_copy_{to,from}_user(to, from, size) returns N, size - N bytes starting
> >  * at to must become equal to the bytes fetched from the corresponding area
> >  * starting at from. All data past to + size - N must be left unmodified.
> >  *
> >  * If copying succeeds, the return value must be 0. If some data cannot be
> >  * fetched, it is permitted to copy less than had been fetched; the only
> >  * hard requirement is that not storing anything at all (i.e. returning size)
> >  * should happen only when nothing could be copied. In other words, you don't
> >  * have to squeeze as much as possible - it is allowed, but not necessary.
> > arm64 instances violate the aforementioned hard requirement. Please, fix
> > it there; it's not hard. All you need is an exception handler in .Ltiny15
> > that would fall back to a (short) byte-by-byte copy if the faulting address
> > happened to be unaligned. Or just do a one-byte copy, not that it would be
> > considerably cheaper than a loop. Either way it will be cheaper than
> > propagating that extra information up the call chain, let alone paying for
> > an extra ->write_begin() and ->write_end() for a single byte in
> > generic_perform_write().
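
For illustration, a minimal C sketch of the fallback semantics being asked for.
This is not the actual arm64 fix (that would live in the assembly fixups around
.Ltiny15 in the copy templates); the helper name bytewise_tail is invented here,
and __get_user merely stands in for whatever byte-access primitive the real
fixup would use:

#include <linux/uaccess.h>

/*
 * After an optimised multi-byte access faults partway through, retry
 * the unfinished tail one byte at a time.  "Nothing copied" (return
 * value == n) is then only reported when even the first byte is
 * inaccessible, which is the hard requirement quoted above.
 */
static unsigned long bytewise_tail(char *to, const char __user *from,
				   unsigned long n)
{
	unsigned long left = n;
	char c;

	while (left) {
		if (__get_user(c, from))
			break;		/* this byte genuinely faults */
		*to++ = c;
		from++;
		left--;
	}
	return left;			/* bytes not copied */
}

With that behaviour the return value only equals the requested size when the
very first byte is inaccessible, so generic_perform_write() can always shorten
the request and make progress on retry instead of looping on a copy that
spuriously claims to have copied nothing.
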
> And what do we do if we then continue to fault with an external abort
> because whatever it is that warranted being mapped as Device-type memory in
> the first place doesn't support byte accesses?
If it does not support byte access, it would've failed on fault-in.
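
On the fault-in point: in kernels of that era, generic_perform_write() calls
iov_iter_fault_in_readable() before attempting the copy, and that probe is
itself a byte-sized userspace read (see fault_in_pages_readable() in
include/linux/pagemap.h). A rough sketch of the idea, with the helper name
invented for illustration and the real helper's page-boundary handling elided:

#include <linux/uaccess.h>

/*
 * Touch the user buffer with byte-sized reads before the real copy.
 * A mapping that cannot tolerate byte loads at all would fail here,
 * before raw_copy_from_user() is ever reached.
 */
static int probe_readable(const char __user *uaddr, size_t size)
{
	volatile char c;

	if (!size)
		return 0;
	if (__get_user(c, uaddr))
		return -EFAULT;
	return __get_user(c, uaddr + size - 1);	/* probe the last byte too */
}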