Re: [PATCH 2/2] hugepages: Fix use after free bug in "quota" handling
From: David Gibson
Date: Thu Mar 08 2012 - 10:38:33 EST
On Thu, Mar 08, 2012 at 07:59:27PM +0800, Hillf Danton wrote:
> On Thu, Mar 8, 2012 at 12:17 PM, Aneesh Kumar K.V
> <aneesh.kumar@xxxxxxxxxxxxxxxxxx> wrote:
> > On Wed, 7 Mar 2012 20:28:39 +0800, Hillf Danton <dhillf@xxxxxxxxx> wrote:
> >> On Wed, Mar 7, 2012 at 12:48 PM, David Gibson
> >> > @@ -533,9 +611,9 @@ static void free_huge_page(struct page *page)
> >> > */
> >> > struct hstate *h = page_hstate(page);
> >> > int nid = page_to_nid(page);
> >> > - struct address_space *mapping;
> >> > + struct hugepage_subpool *spool =
> >> > + (struct hugepage_subpool *)page_private(page);
> >> >
> >> > - mapping = (struct address_space *) page_private(page);
> >> > set_page_private(page, 0);
> >> > page->mapping = NULL;
> >> > BUG_ON(page_count(page));
> >> > @@ -551,8 +629,7 @@ static void free_huge_page(struct page *page)
> >> > enqueue_huge_page(h, page);
> >> > }
> >> > spin_unlock(&hugetlb_lock);
> >> > - if (mapping)
> >> > - hugetlb_put_quota(mapping, 1);
> >> > + hugepage_subpool_put_pages(spool, 1);
> >>
> >> Like the current code, quota is handed back *unconditionally*, but ...
> >
> >
> > We will end up doing get_quota for every allocated page: get_quota
> > happens either during mmap(), if MAP_NORESERVE is not specified, or
> > during alloc_huge_page() if we haven't done a quota reservation
> > during mmap() for that range. Are you finding any part of the code
> > where we miss that?
> >
> >
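The pairing Aneesh describes boils down to one invariant: every
allocated huge page holds exactly one unit of quota, charged either in
bulk at mmap() time or page-by-page at fault time, and returned exactly
once when the page is freed.  A toy model of just that invariant (not
kernel code; get_quota()/put_quota() stand in for hugetlb_get_quota()
and hugetlb_put_quota(), and the limit is arbitrary):

#include <assert.h>

static long quota_used;
static long quota_limit = 8;

static int get_quota(long delta)
{
	if (quota_used + delta > quota_limit)
		return -1;		/* -ENOSPC in the real code */
	quota_used += delta;
	return 0;
}

static void put_quota(long delta)
{
	quota_used -= delta;
	assert(quota_used >= 0);	/* a double put would trip this */
}

int main(void)
{
	/* mmap() without MAP_NORESERVE: charge the whole range up front */
	assert(get_quota(4) == 0);

	/* MAP_NORESERVE mapping: charge one page at fault time instead */
	assert(get_quota(1) == 0);

	/* free_huge_page() hands back one unit per page, unconditionally */
	put_quota(1);
	put_quota(4);
	assert(quota_used == 0);	/* the books balance */
	return 0;
}
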
> Thank you, Aneesh, I worked it out along the lines of your question.
>
> Departing from David's approach, quota is no longer reclaimed when
> pages are freed but when they are truncated; we end up with a smaller
> .text size, and with the use-after-free bug fixed as well.
Sorry, I don't follow what you're saying here.
> --- a/mm/hugetlb.c Mon Mar 5 20:20:34 2012
> +++ b/mm/hugetlb.c Thu Mar 8 19:47:26 2012
> @@ -533,9 +533,7 @@ static void free_huge_page(struct page *
> */
> struct hstate *h = page_hstate(page);
> int nid = page_to_nid(page);
> - struct address_space *mapping;
>
> - mapping = (struct address_space *) page_private(page);
> set_page_private(page, 0);
> page->mapping = NULL;
> BUG_ON(page_count(page));
> @@ -551,8 +549,6 @@ static void free_huge_page(struct page *
> enqueue_huge_page(h, page);
> }
> spin_unlock(&hugetlb_lock);
> - if (mapping)
> - hugetlb_put_quota(mapping, 1);
> }
>
> static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
> @@ -2944,8 +2940,8 @@ void hugetlb_unreserve_pages(struct inod
> inode->i_blocks -= (blocks_per_huge_page(h) * freed);
> spin_unlock(&inode->i_lock);
>
> - hugetlb_put_quota(inode->i_mapping, (chg - freed));
> - hugetlb_acct_memory(h, -(chg - freed));
> + hugetlb_put_quota(inode->i_mapping, chg);
> + hugetlb_acct_memory(h, -chg);
> }
>
> #ifdef CONFIG_MEMORY_FAILURE
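If I'm reading the diff right, the accounting becomes: free_huge_page()
never puts quota back, so hugetlb_unreserve_pages() must return the
full chg at truncate time rather than chg - freed.  A toy model of just
that arithmetic (again not kernel code, only the bare counter):

#include <assert.h>

int main(void)
{
	long quota = 0;		/* units charged to the file */
	long chg = 4;		/* pages charged via hugetlb_get_quota() */
	long freed = 1;		/* pages the truncate found already freed */

	quota += chg;		/* charged at mmap()/fault time */

	/*
	 * With no put_quota() in free_huge_page(), the old rule of
	 * putting back chg - freed would strand 'freed' units:
	 */
	assert(quota - (chg - freed) == freed);

	quota -= chg;		/* returning the full chg balances */
	assert(quota == 0);
	return 0;
}
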
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson