[patch 049/149] virtio_net: Make delayed refill more reliable
From: Greg KH
Date: Thu Jul 01 2010 - 14:23:22 EST
2.6.32-stable review patch. If anyone has any objections, please let us know.
------------------
From: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
commit 39d321577405e8e269fd238b278aaf2425fa788a upstream.
I have seen RX stalls on a machine that experienced a suspected
OOM. After the stall, the RX buffer is empty on the guest side
and there are exactly 16 entries available on the host side. As
the number of entries is less than that required by a maximal
skb, the host cannot proceed.
The guest did not have a refill job scheduled.
My diagnosis is that an OOM had occurred, with the delayed refill
job scheduled. The job was able to allocate at least one skb, but
not enough to meet the minimum required by the host to proceed.
As the refill job would only reschedule itself if it failed completely
to allocate any skbs, this would lead to an RX stall.
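Concretely, the pre-patch decision in refill_work() was roughly the
following (paraphrased rather than quoted verbatim from the 2.6.32 tree;
vi->num counts the receive buffers currently posted to the ring):

	/* Refill the ring as far as allocations allow. */
	try_fill_recv(vi, GFP_KERNEL);
	still_empty = (vi->num == 0);
	napi_enable(&vi->napi);

	/* Reschedule only if the pass added nothing at all. */
	if (still_empty)
		schedule_delayed_work(&vi->refill, HZ/2);

Under OOM, try_fill_recv() gives up on the first failed allocation, so
it may post only one or two buffers. vi->num is then non-zero,
still_empty is false, and the job never reschedules, even though the
host is still short of the descriptors a maximal skb needs.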
The following patch removes this stall possibility by always
rescheduling the refill job until the ring is totally refilled.
Testing has shown that the RX stall no longer occurs whereas
previously it would occur within a day.
Signed-off-by: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Acked-by: Rusty Russell <rusty@xxxxxxxxxxxxxxx>
Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
Cc: Bruce Rogers <brogers@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxx>
---
drivers/net/virtio_net.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -398,8 +398,7 @@ static void refill_work(struct work_stru
 
 	vi = container_of(work, struct virtnet_info, refill.work);
 	napi_disable(&vi->napi);
-	try_fill_recv(vi, GFP_KERNEL);
-	still_empty = (vi->num == 0);
+	still_empty = !try_fill_recv(vi, GFP_KERNEL);
 	napi_enable(&vi->napi);
 
 	/* In theory, this can happen: if we don't get any buffers in
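With that hunk applied, the function as a whole reads roughly as follows
(a sketch assuming the surrounding 2.6.32 context, not a verbatim quote;
try_fill_recv() returns false when any allocation fails, i.e. the ring
is not yet full):

static void refill_work(struct work_struct *work)
{
	struct virtnet_info *vi;
	bool still_empty;

	vi = container_of(work, struct virtnet_info, refill.work);
	napi_disable(&vi->napi);

	/* False from try_fill_recv() means at least one allocation failed,
	 * so the ring is not completely refilled yet. */
	still_empty = !try_fill_recv(vi, GFP_KERNEL);
	napi_enable(&vi->napi);

	/* Keep rescheduling until a pass fills the ring without failure. */
	if (still_empty)
		schedule_delayed_work(&vi->refill, HZ/2);
}

The refill job now keeps coming back until try_fill_recv() reports a
complete fill, which closes the partial-refill stall described above.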
--