Re: [PATCH] lib/bch: fix possible stack overrun

From: Boris Brezillon
Date: Fri Oct 05 2018 - 12:40:56 EST


On Fri, 5 Oct 2018 17:56:51 +0200
Arnd Bergmann <arnd@xxxxxxxx> wrote:

> The previous patch introduced very large kernel stack usage and a Makefile
> change to hide the warning about it.
>
> From what I can tell, a number of things went wrong here:
>
> - The BCH_MAX_T constant was set to the maximum value for 'n',
> not the maximum for 't', which is much smaller.
>
> - The stack usage is actually larger than the entire kernel stack
> on some architectures that can use 4KB stacks (m68k, sh, c6x), which
> leads to an immediate overrun.
>
> - The justification in the patch description claimed that nothing
> changed; however, that is not the case even without the two points above:
> the configuration is machine specific, and most boards never use the
> maximum BCH_ECC_WORDS() length but instead have something much smaller.
> That maximum would only apply to machines that use both the maximum
> block size and the maximum ECC strength.
>
> The largest value for 't' that I could find is '32', which in turn leads
> to a 60 byte array instead of 2048 bytes. With that changed, the warning
> can be enabled again.
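
For the record, plugging those numbers into the macros touched below,
the worst case with BCH_MAX_M = 15 and BCH_MAX_T = 32 becomes:

	BCH_ECC_MAX_WORDS = DIV_ROUND_UP(15 * 32, 32) = 15	/* x4 = 60 bytes */

so the fixed r[] buffer in encode_bch() indeed shrinks to 60 bytes.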
>
> Fixes: 02361bc77888 ("lib/bch: Remove VLA usage")
> Reported-by: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Arnd Bergmann <arnd@xxxxxxxx>
> ---
> Please review carefully to ensure my logic is correct. I spent a
> long time trying to understand what is going on here, but I'm not
> too familiar with this algorithm, and it's possible I still got it
> wrong as well.
> In particular, I'm not 100% sure if '32' is the maximum ECC strength
> we can support, or if a larger one (which?) might be possible.

Well, the ECC strength (T) is dynamically configurable through the
nand-ecc-strength param, so it's theoretically not limited to 32. That
being said, I doubt people are using soft BCH with a strength higher
than 32.
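
To illustrate (from memory, so the exact lines may differ slightly):
the soft BCH path derives t from the ECC layout in nand_bch_init() and
passes it to init_bch() without any upper bound:

	/* drivers/mtd/nand/raw/nand_bch.c, roughly */
	m = fls(1 + 8 * eccsize);	/* Galois field order from the step size */
	t = (eccbytes * 8) / m;		/* strength implied by the ECC bytes per step */

	nbc->bch = init_bch(m, t, 0);	/* nothing caps t before it reaches lib/bch */

So a driver advertising a large enough ecc.bytes/ecc.size combination
could still end up with t > 32, hence my suggestion below to reject
such values at init time.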

> ---
> lib/Makefile | 1 -
> lib/bch.c | 10 ++++++----
> 2 files changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/lib/Makefile b/lib/Makefile
> index 8c9647fa271a..12c479dd46e0 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -122,7 +122,6 @@ obj-$(CONFIG_ZLIB_INFLATE) += zlib_inflate/
> obj-$(CONFIG_ZLIB_DEFLATE) += zlib_deflate/
> obj-$(CONFIG_REED_SOLOMON) += reed_solomon/
> obj-$(CONFIG_BCH) += bch.o
> -CFLAGS_bch.o := $(call cc-option,-Wframe-larger-than=4500)
> obj-$(CONFIG_LZO_COMPRESS) += lzo/
> obj-$(CONFIG_LZO_DECOMPRESS) += lzo/
> obj-$(CONFIG_LZ4_COMPRESS) += lz4/
> diff --git a/lib/bch.c b/lib/bch.c
> index 7b0f2006698b..3ef1a3467e7b 100644
> --- a/lib/bch.c
> +++ b/lib/bch.c
> @@ -79,20 +79,19 @@
> #define GF_T(_p) (CONFIG_BCH_CONST_T)
> #define GF_N(_p) ((1 << (CONFIG_BCH_CONST_M))-1)
> #define BCH_MAX_M (CONFIG_BCH_CONST_M)
> +#define BCH_MAX_T (CONFIG_BCH_CONST_T)
> #else
> #define GF_M(_p) ((_p)->m)
> #define GF_T(_p) ((_p)->t)
> #define GF_N(_p) ((_p)->n)
> -#define BCH_MAX_M 15
> +#define BCH_MAX_M 15 /* 2KB */
> +#define BCH_MAX_T 32 /* 32 bit correction */

I'd recommend adding a test on t here [1] (t > BCH_MAX_T), so that you
fail at init time if t is too big.

[1] https://elixir.bootlin.com/linux/v4.19-rc6/source/lib/bch.c#L1280
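
Something like this, next to the existing sanity checks (untested, just
to show the idea):

	/* in init_bch(), reject strengths the fixed-size buffers can't hold */
	if ((t < 1) || (t > BCH_MAX_T) || (m * t >= ((1 << m) - 1)))
		/* invalid t value */
		goto fail;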

> #endif
>
> -#define BCH_MAX_T (((1 << BCH_MAX_M) - 1) / BCH_MAX_M)
> -
> #define BCH_ECC_WORDS(_p) DIV_ROUND_UP(GF_M(_p)*GF_T(_p), 32)
> #define BCH_ECC_BYTES(_p) DIV_ROUND_UP(GF_M(_p)*GF_T(_p), 8)
>
> #define BCH_ECC_MAX_WORDS DIV_ROUND_UP(BCH_MAX_M * BCH_MAX_T, 32)
> -#define BCH_ECC_MAX_BYTES DIV_ROUND_UP(BCH_MAX_M * BCH_MAX_T, 8)
>
> #ifndef dbg
> #define dbg(_fmt, args...) do {} while (0)
> @@ -202,6 +201,9 @@ void encode_bch(struct bch_control *bch, const uint8_t *data,
> const uint32_t * const tab3 = tab2 + 256*(l+1);
> const uint32_t *pdata, *p0, *p1, *p2, *p3;
>
> + if (WARN_ON(r_bytes > sizeof(r)))
> + return;
> +
> if (ecc) {
> /* load ecc parity bytes into internal 32-bit buffer */
> load_ecc8(bch, bch->ecc_buf, ecc);