Re: [PATCH v2] vfio/type1: Limit DMA mappings per container

From: Cornelia Huck
Date: Wed Apr 03 2019 - 03:18:24 EST


On Tue, 02 Apr 2019 10:15:38 -0600
Alex Williamson <alex.williamson@xxxxxxxxxx> wrote:

> Memory backed DMA mappings are accounted against a user's locked
> memory limit, including multiple mappings of the same memory. This
> accounting bounds the number of such mappings that a user can create.
> However, DMA mappings that are not backed by memory, such as DMA
> mappings of device MMIO via mmaps, do not make use of page pinning
> and therefore do not count against the user's locked memory limit.
> These mappings still consume memory, but the memory is not well
> associated with the process for the purpose of oom killing a task.
>
> To add bounding on this use case, we introduce a limit to the total
> number of concurrent DMA mappings that a user is allowed to create.
> This limit is exposed as a tunable module option where the default
> value of 64K is expected to be well in excess of any reasonable use
> case (a large virtual machine configuration would typically only make
> use of tens of concurrent mappings).
>
> This fixes CVE-2019-3882.
>
> Signed-off-by: Alex Williamson <alex.williamson@xxxxxxxxxx>
> ---
>
> v2: Remove unnecessary atomic, all runtime access occurs while
> holding vfio_iommu.lock. Change to unsigned int since we're
> no longer bound by the atomic_t.
>
> drivers/vfio/vfio_iommu_type1.c | 14 ++++++++++++++
> 1 file changed, 14 insertions(+)

The non-atomic counter seems fine, given that all runtime access happens under vfio_iommu.lock.

Reviewed-by: Cornelia Huck <cohuck@xxxxxxxxxx>