[PATCH 4.17 54/63] hv_netvsc: Fix napi reschedule while receive completion is busy
From: Greg Kroah-Hartman
Date: Mon Jul 23 2018 - 08:27:51 EST
4.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>
[ Upstream commit 6b81b193b83e87da1ea13217d684b54fccf8ee8a ]
If the outgoing ring is temporarily full and the receive completion cannot
be sent, we may still need to reschedule napi if certain conditions are met.
Otherwise the napi poll might be stopped forever, causing a network
disconnect.
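
For reviewers skimming the diff below, this is roughly the shape of the
completion logic in netvsc_poll() after the change (a condensed paraphrase
of the patched code with explanatory comments added, not a literal excerpt):

	/* Try to send any pending receive completions; this can fail
	 * (ret != 0) when the outgoing ring buffer is temporarily full.
	 */
	ret = send_recv_completions(ndev, net_device, nvchan);

	/* Complete NAPI only if the budget was not exhausted and we are
	 * not busy polling.  Reschedule either if the inbound ring still
	 * has data to read (hv_end_read() != 0) or if sending the receive
	 * completions failed, so the retry is not lost and the poll loop
	 * cannot stall forever.
	 */
	if (work_done < budget &&
	    napi_complete_done(napi, work_done) &&
	    (ret || hv_end_read(&channel->inbound)) &&
	    napi_schedule_prep(napi)) {
		hv_begin_read(&channel->inbound);
		__napi_schedule(napi);
	}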
Fixes: 7426b1a51803 ("netvsc: optimize receive completions")
Signed-off-by: Stephen Hemminger <stephen@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>
Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
drivers/net/hyperv/netvsc.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -1291,6 +1291,7 @@ int netvsc_poll(struct napi_struct *napi
struct hv_device *device = netvsc_channel_to_device(channel);
struct net_device *ndev = hv_get_drvdata(device);
int work_done = 0;
+ int ret;
/* If starting a new interval */
if (!nvchan->desc)
@@ -1302,16 +1303,18 @@ int netvsc_poll(struct napi_struct *napi
nvchan->desc = hv_pkt_iter_next(channel, nvchan->desc);
}
- /* If send of pending receive completions suceeded
- * and did not exhaust NAPI budget this time
- * and not doing busy poll
+ /* Send any pending receive completions */
+ ret = send_recv_completions(ndev, net_device, nvchan);
+
+ /* If it did not exhaust NAPI budget this time
+ * and not doing busy poll
* then re-enable host interrupts
- * and reschedule if ring is not empty.
+ * and reschedule if ring is not empty
+ * or sending receive completion failed.
*/
- if (send_recv_completions(ndev, net_device, nvchan) == 0 &&
- work_done < budget &&
+ if (work_done < budget &&
napi_complete_done(napi, work_done) &&
- hv_end_read(&channel->inbound) &&
+ (ret || hv_end_read(&channel->inbound)) &&
napi_schedule_prep(napi)) {
hv_begin_read(&channel->inbound);
__napi_schedule(napi);