On Wed, Jan 02, 2019 at 11:28:43AM +0800, Jason Wang wrote:
> On 2018/12/31 2:45, Michael S. Tsirkin wrote:
> > On Thu, Dec 27, 2018 at 06:00:36PM +0800, Jason Wang wrote:
> > > On 2018/12/26 11:19, Michael S. Tsirkin wrote:
> > > > On Thu, Dec 06, 2018 at 04:17:36PM +0800, Jason Wang wrote:
> > > > > On 2018/12/6 6:54, Michael S. Tsirkin wrote:
> > > > > > When use_napi is set, let's enable BQLs. Note: some of the issues are
> > > > > > similar to wifi. It's worth considering whether something similar to
> > > > > > commit 36148c2bbfbe ("mac80211: Adjust TSQ pacing shift") might be
> > > > > > beneficial.
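
For reference, the BQL driver contract being enabled here looks roughly
like this (a minimal sketch using the standard netdevice helpers, not
the actual patch; the example_* names are invented):

#include <linux/netdevice.h>

static void example_xmit_one(struct net_device *dev, struct sk_buff *skb,
			     int qidx)
{
	struct netdev_queue *txq = netdev_get_tx_queue(dev, qidx);

	/* ... post the skb to the tx ring here ... */
	netdev_tx_sent_queue(txq, skb->len);	/* BQL: account queued bytes */
}

static void example_tx_clean(struct net_device *dev, int qidx,
			     unsigned int pkts, unsigned int bytes)
{
	struct netdev_queue *txq = netdev_get_tx_queue(dev, qidx);

	/* BQL: account completed bytes; may re-wake a stopped queue */
	netdev_tx_completed_queue(txq, pkts, bytes);
}

With these two calls in place, BQL adapts a byte limit on the tx ring,
so excess packets queue in the qdisc layer instead of the device.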
> > > > >
> > > > > I've played with a similar patch several days ago. The tricky part is
> > > > > the mode switching between napi and no napi. We should make sure that
> > > > > when a packet is sent and tracked by BQL, it is consumed by BQL as
> > > > > well. I did it by tracking it through skb->cb, and dealt with the
> > > > > freeze by resetting the BQL status. Patch attached.
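
A sketch of that idea, inferred from the description above rather than
taken from the attached patch (the example_* names and cb layout are
invented): record in skb->cb whether the skb was accounted to BQL at
send time, credit BQL on completion only for packets it saw, and reset
the BQL state on a freeze or mode switch.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct example_tx_cb {
	bool bql_tracked;	/* set iff netdev_tx_sent_queue() was called */
};
#define EXAMPLE_TX_CB(skb) ((struct example_tx_cb *)(skb)->cb)

static void example_send(struct netdev_queue *txq, struct sk_buff *skb,
			 bool use_napi)
{
	EXAMPLE_TX_CB(skb)->bql_tracked = use_napi;
	if (use_napi)
		netdev_tx_sent_queue(txq, skb->len);
}

static void example_complete(struct netdev_queue *txq, struct sk_buff *skb)
{
	/* consume from BQL only what BQL actually tracked */
	if (EXAMPLE_TX_CB(skb)->bql_tracked)
		netdev_tx_completed_queue(txq, 1, skb->len);
}

static void example_freeze(struct netdev_queue *txq)
{
	netdev_tx_reset_queue(txq);	/* drop stale accounting */
}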
> > > > >
> > > > > But when testing with vhost-net, I don't get very stable performance,
> > > >
> > > > So how about increasing the TSQ pacing shift then?
> > >
> > > I can test this. But changing the default TCP value is much more than a
> > > virtio-net specific thing.
> >
> > Well, the same logic as wifi applies: unpredictable latencies related
> > to radio in one case, to the host scheduler in the other.
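
The wifi precedent (commit 36148c2bbfbe) lowers the per-socket TSQ
pacing shift so TCP keeps more data in flight toward a device with
unpredictable completion latency. A virtio-net analogue would be a call
like the following on the transmit path (a sketch; 8 is the value
mac80211 picked, about 4x the default budget):

#include <net/sock.h>
#include <linux/skbuff.h>

static void example_update_pacing_shift(struct sk_buff *skb)
{
	/* default sk_pacing_shift is 10, i.e. ~1ms of data at the pacing rate */
	if (skb->sk)
		sk_pacing_shift_update(skb->sk, 8);
}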
>
> Yes, but how about the cases of multiple flows? That's where I see unstable
> results.

Well, we already have VHOST_NET_WEIGHT - is it too big then?

> > > > > it was
> > > > > probably because we batch the used ring updates, so tx interrupts may
> > > > > come randomly. We probably need to implement a time-bounded coalescing
> > > > > mechanism which could be configured from userspace.
> > > >
> > > > I don't think it's reasonable to expect userspace to be that smart ...
> > > > Why do we need it to be time bounded? The used ring is always updated
> > > > when the ring becomes empty.
> > >
> > > We don't add used when the ring is not empty, which means BQL may not see
> > > the consumed packet in time. And the delay varies based on the workload,
> > > since we count packets, not bytes or time, before doing the batched update.
> > >
> > > Thanks
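
The batching in question, simplified from drivers/vhost/net.c (a sketch
over vhost internals, error handling omitted): completed heads
accumulate in the virtqueue and are only flushed to the guest's used
ring when a packet-count threshold is crossed or the ring goes empty -
there is no byte or time bound.

#define VHOST_NET_BATCH 64

static void example_signal_used(struct vhost_net_virtqueue *nvq)
{
	struct vhost_virtqueue *vq = &nvq->vq;

	if (!nvq->done_idx)
		return;
	vhost_add_used_and_signal_n(vq->dev, vq, vq->heads, nvq->done_idx);
	nvq->done_idx = 0;
}

/* called once per transmitted packet */
static void example_tx_done(struct vhost_net_virtqueue *nvq, int head)
{
	struct vhost_virtqueue *vq = &nvq->vq;

	vq->heads[nvq->done_idx].id = cpu_to_vhost32(vq, head);
	vq->heads[nvq->done_idx].len = 0;
	if (++nvq->done_idx >= VHOST_NET_BATCH)
		example_signal_used(nvq);	/* flush on packet count only */
}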
> >
> > Sorry, I still don't get it.
> > When nothing is outstanding then we do update the used.
> > So if BQL stops userspace from sending packets then
> > we get an interrupt and packets start flowing again.
> >
> > It might be suboptimal, and we might need to tune it, but I doubt running
> > timers is the solution; timer interrupts cause VM exits.
>
> Probably not a timer, but a time counter (or even a byte counter) in vhost
> to add used and signal the guest if it exceeds a value, instead of waiting
> for a number of packets.
>
> Thanks
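
A sketch of that suggestion (hypothetical code: the done_bytes field
and EXAMPLE_BYTE_BUDGET do not exist upstream): keep the packet-count
flush, but also flush once enough bytes have completed, so BQL sees
completions at a workload-independent pace. The budget is what could be
made configurable from userspace.

#define EXAMPLE_BYTE_BUDGET (64 * 1024)	/* illustrative value */

static void example_tx_done_bytes(struct vhost_net_virtqueue *nvq,
				  int head, int len)
{
	struct vhost_virtqueue *vq = &nvq->vq;

	vq->heads[nvq->done_idx].id = cpu_to_vhost32(vq, head);
	vq->heads[nvq->done_idx].len = 0;
	nvq->done_idx++;
	nvq->done_bytes += len;			/* hypothetical field */

	if (nvq->done_idx >= VHOST_NET_BATCH ||
	    nvq->done_bytes >= EXAMPLE_BYTE_BUDGET) {
		example_signal_used(nvq);
		nvq->done_bytes = 0;
	}
}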
And maybe we should expose the "MORE" flag in the descriptor -
do you think that will help?
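
Purely as a sketch of the "MORE" idea (no such flag exists in the
virtio spec; the flag value below is invented): the driver would mark a
descriptor when more packets are about to follow, much like
skb->xmit_more, so the device could defer used-ring updates until the
last buffer of a burst.

#include <linux/virtio_ring.h>

#define VRING_DESC_F_MORE 16	/* hypothetical, not in the spec */

/* device side: defer the used update while the driver promises more */
static bool example_can_defer_used(u16 desc_flags)
{
	return desc_flags & VRING_DESC_F_MORE;
}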