RE: [PATCH] ALSA: core: fix unsigned int pages overflow when compared
From: Zhang, Jun
Date: Thu Jul 19 2018 - 02:08:15 EST
Hello, Takashi
I think with our patch it's NOT possible for the returned size to exceed sgbuf->tblsize.
In function snd_malloc_sgbuf_pages,
pages is aligned to a page,
sgbuf->tblsize is aligned to 32 pages,
chunk is aligned to 2^n pages,
in our panic case, pages = 123, tblsize = 128,
1st loop chunk = 32
2nd loop chunk = 32
3rd loop chunk = 32
4th loop chunk = 16
5th loop chunk = 16
So in the 5th loop pages - chunk = -5; since pages is unsigned int, the result wraps around to a huge value, which makes an endless loop.
With our patch, the while loop breaks in the 5th loop, so the returned size can NOT exceed sgbuf->tblsize.
-----Original Message-----
From: Takashi Iwai [mailto:tiwai@xxxxxxx]
Sent: Wednesday, July 18, 2018 20:34
To: He, Bo <bo.he@xxxxxxxxx>
Cc: alsa-devel@xxxxxxxxxxxxxxxx; perex@xxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; Zhang, Jun <jun.zhang@xxxxxxxxx>; Zhang, Yanmin <yanmin.zhang@xxxxxxxxx>
Subject: Re: [PATCH] ALSA: core: fix unsigned int pages overflow when compared
On Wed, 18 Jul 2018 13:52:45 +0200,
He, Bo wrote:
>
> we see the below kernel panic in snd_malloc_sgbuf_pages() during a
> stress suspend/resume test: snd_dma_alloc_pages_fallback() may
> allocate a chunk larger than the remaining pages due to page
> alignment, which causes pages to overflow.
>
> while (pages > 0) {
> ...
> pages -= chunk;
> }
>
> the patch changes pages from unsigned int to int to fix the issue.
Thanks for the patch.
Although the analysis is correct, the fix doesn't look ideal. It's also possible that the returned size may exceed sgbuf->tblsize if we are more unlucky.
A change like below should work instead. Could you give it a try?
Takashi
-- 8< --
--- a/sound/core/sgbuf.c
+++ b/sound/core/sgbuf.c
@@ -108,7 +108,7 @@ void *snd_malloc_sgbuf_pages(struct device *device,
break;
}
chunk = tmpb.bytes >> PAGE_SHIFT;
- for (i = 0; i < chunk; i++) {
+ for (i = 0; i < chunk && pages > 0; i++) {
table->buf = tmpb.area;
table->addr = tmpb.addr;
if (!i)
@@ -117,9 +117,9 @@ void *snd_malloc_sgbuf_pages(struct device *device,
*pgtable++ = virt_to_page(tmpb.area);
tmpb.area += PAGE_SIZE;
tmpb.addr += PAGE_SIZE;
+ sgbuf->pages++;
+ pages--;
}
- sgbuf->pages += chunk;
- pages -= chunk;
if (chunk < maxpages)
maxpages = chunk;
}