Re: KASAN: use-after-free Read in vhost_chr_write_iter

From: Michael S. Tsirkin
Date: Mon May 21 2018 - 09:47:32 EST


On Mon, May 21, 2018 at 10:38:10AM +0800, Jason Wang wrote:
>
>
> > On 2018/05/18 17:24, Jason Wang wrote:
> >
> >
> > > On 2018/05/17 21:45, DaeRyong Jeong wrote:
> > > We report the crash: KASAN: use-after-free Read in vhost_chr_write_iter
> > >
> > > This crash has been found in v4.17-rc1 using RaceFuzzer (a modified
> > > version of Syzkaller), which we describe more at the end of this
> > > report. Our analysis shows that the race occurs when invoking two
> > > syscalls concurrently, write$vnet and ioctl$VHOST_RESET_OWNER.
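
A rough reproducer of this syscall pattern would look something like the
sketch below (this is not the RaceFuzzer-generated program; device setup,
i.e. VHOST_SET_OWNER plus a VHOST_SET_FEATURES that enables the device
IOTLB, is elided and assumed to have happened first):

#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vhost.h>

static int fd;
static char backing[4096];

static void *iotlb_writer(void *arg)
{
        struct vhost_msg msg;

        memset(&msg, 0, sizeof(msg));
        msg.type = VHOST_IOTLB_MSG;
        msg.iotlb.type = VHOST_IOTLB_UPDATE;
        msg.iotlb.iova = 0x1000;
        msg.iotlb.size = sizeof(backing);
        msg.iotlb.uaddr = (unsigned long)backing;
        msg.iotlb.perm = VHOST_ACCESS_RW;

        for (;;)                               /* write$vnet */
                write(fd, &msg, sizeof(msg));  /* -> vhost_chr_write_iter() */
        return NULL;
}

static void *owner_resetter(void *arg)
{
        for (;;)                               /* ioctl$VHOST_RESET_OWNER */
                ioctl(fd, VHOST_RESET_OWNER);  /* -> vhost_dev_cleanup() */
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        fd = open("/dev/vhost-net", O_RDWR);
        /* ... VHOST_SET_OWNER + feature setup enabling the IOTLB ... */
        pthread_create(&t1, NULL, iotlb_writer, NULL);
        pthread_create(&t2, NULL, owner_resetter, NULL);
        pthread_join(t1, NULL);
        return 0;
}

Hitting the window in practice of course needs KASAN and many iterations.
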
> > >
> > >
> > > Analysis:
> > > We think the concurrent execution of vhost_process_iotlb_msg() and
> > > vhost_dev_cleanup() causes the crash.
> > > Both functions can run concurrently (please see the call sequence below),
> > > and possibly, there is a race on dev->iotlb.
> > > If the context switch occurs right after vhost_dev_cleanup() frees
> > > dev->iotlb, vhost_process_iotlb_msg() still sees the non-null value and
> > > keeps executing instead of returning -EFAULT. Consequently, a
> > > use-after-free occurs.
> > >
> > >
> > > Thread interleaving:
> > > CPU0 (vhost_process_iotlb_msg)          CPU1 (vhost_dev_cleanup)
> > > (In the case of both VHOST_IOTLB_UPDATE and
> > > VHOST_IOTLB_INVALIDATE)
> > > =====                                   =====
> > >                                         vhost_umem_clean(dev->iotlb);
> > > if (!dev->iotlb) {
> > >         ret = -EFAULT;
> > >         break;
> > > }
> > >                                         dev->iotlb = NULL;
> > >
> > >
> > > Call Sequence:
> > > CPU0
> > > =====
> > > vhost_net_chr_write_iter
> > >     vhost_chr_write_iter
> > >         vhost_process_iotlb_msg
> > >
> > > CPU1
> > > =====
> > > vhost_net_ioctl
> > >     vhost_net_reset_owner
> > >         vhost_dev_reset_owner
> > >             vhost_dev_cleanup
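
For reference, the code involved looks roughly like this (simplified,
paraphrased excerpt from drivers/vhost/vhost.c in v4.17-rc1; unrelated
lines dropped):

static int vhost_process_iotlb_msg(struct vhost_dev *dev,
                                   struct vhost_iotlb_msg *msg)
{
        ...
        case VHOST_IOTLB_UPDATE:
                if (!dev->iotlb) {                      /* check */
                        ret = -EFAULT;
                        break;
                }
                ...
                if (vhost_new_umem_range(dev->iotlb,    /* use */
                                         msg->iova, msg->size,
                                         msg->iova + msg->size - 1,
                                         msg->uaddr, msg->perm)) {
        ...
}

void vhost_dev_cleanup(struct vhost_dev *dev)
{
        ...
        vhost_umem_clean(dev->iotlb);  /* frees the umem dev->iotlb points to */
        dev->iotlb = NULL;
        ...
}

Nothing orders the check in vhost_process_iotlb_msg() against the free in
vhost_dev_cleanup(), so the "use" can run on freed memory.
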
> >
> > Thanks a lot for the analysis.
> >
> > This could be addressed by simply protecting it with the dev mutex.
> >
> > Will post a patch.
> >
>
> Could you please help test the attached patch? I've done some smoke
> testing.
>
> Thanks

> From 88328386f3f652e684ee33dc4cf63dcaed871aea Mon Sep 17 00:00:00 2001
> From: Jason Wang <jasowang@xxxxxxxxxx>
> Date: Fri, 18 May 2018 17:33:27 +0800
> Subject: [PATCH] vhost: synchronize IOTLB message with dev cleanup
>
> DaeRyong Jeong reports a race between vhost_dev_cleanup() and
> vhost_process_iotlb_msg():
>
> Thread interleaving:
> CPU0 (vhost_process_iotlb_msg)          CPU1 (vhost_dev_cleanup)
> (In the case of both VHOST_IOTLB_UPDATE and
> VHOST_IOTLB_INVALIDATE)
> =====                                   =====
>                                         vhost_umem_clean(dev->iotlb);
> if (!dev->iotlb) {
>         ret = -EFAULT;
>         break;
> }
>                                         dev->iotlb = NULL;
>
> The reason is that we don't synchronize between them; fix this by
> protecting vhost_process_iotlb_msg() with the dev mutex.
>
> Reported-by: DaeRyong Jeong <threeearcat@xxxxxxxxx>
> Fixes: 6b1e6cc7855b0 ("vhost: new device IOTLB API")

Long term, we might want to move the iotlb into the vqs so that messages
can be processed in parallel. Not sure how to do it yet.
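
Roughly something like the sketch below, maybe (names are invented, this
is not a proposal; the point is just that each IOTLB message would then
be processed under the relevant vq mutex instead of dev->mutex):

struct vhost_virtqueue {
        ...
        /* per-vq IOTLB owned by the vq instead of pointing at dev->iotlb */
        struct vhost_umem *iotlb;
        ...
};

/* hypothetical per-vq variant of vhost_process_iotlb_msg() */
static int vhost_vq_process_iotlb_msg(struct vhost_virtqueue *vq,
                                      struct vhost_iotlb_msg *msg)
{
        int ret = 0;

        mutex_lock(&vq->mutex);
        if (!vq->iotlb) {
                ret = -EFAULT;
                goto out;
        }
        /* VHOST_IOTLB_UPDATE/INVALIDATE would operate on vq->iotlb here */
out:
        mutex_unlock(&vq->mutex);
        return ret;
}

Cleanup would then have to take each vq mutex before tearing down the
per-vq iotlb, but messages for different vqs would no longer serialize
on a single device-wide lock.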

> ---
> drivers/vhost/vhost.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index f3bd8e9..f0be5f3 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -981,6 +981,7 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev,
> {
> int ret = 0;
>
> + mutex_lock(&dev->mutex);
> vhost_dev_lock_vqs(dev);
> switch (msg->type) {
> case VHOST_IOTLB_UPDATE:
> @@ -1016,6 +1017,8 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev,
> }
>
> vhost_dev_unlock_vqs(dev);
> + mutex_unlock(&dev->mutex);
> +
> return ret;
> }
> ssize_t vhost_chr_write_iter(struct vhost_dev *dev,
> --
> 2.7.4
>