Re: [PATCH RFC 1/2] virtio-net: bql support

From: Jason Wang
Date: Thu Dec 06 2018 - 03:31:52 EST


On 2018/12/6 4:17 PM, Jason Wang wrote:

On 2018/12/6 6:54 PM, Michael S. Tsirkin wrote:
When use_napi is set, let's enable BQLs. Note: some of the issues are
similar to wifi. It's worth considering whether something similar to
commit 36148c2bbfbe ("mac80211: Adjust TSQ pacing shift") might be
beneficial.


I played with a similar patch several days ago. The tricky part is the mode switching between napi and non-napi: we need to make sure that when a packet is sent and tracked by BQL, it is also consumed by BQL. I did this by tracking it through skb->cb, and dealt with the freeze by resetting the BQL state. Patch attached.
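The skb->cb idea can be modeled roughly as below. This is a minimal user-space sketch of the accounting problem, not the attached patch: struct tx_cb, bql_tracked, and the toy bql counter are invented names for illustration. The point is that the send-time mode is stamped into the packet itself, so a later mode switch cannot make completion decrement BQL for a packet BQL never counted.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of an skb: the cb[] area lets the driver stash
 * per-packet state, here a flag meaning "counted by BQL". */
struct skb {
	unsigned int len;
	char cb[48];
};

struct tx_cb {			/* hypothetical layout stored in skb->cb */
	bool bql_tracked;
};

static struct bql {		/* stand-in for the BQL byte counter */
	unsigned int inflight_bytes;
} q;

static void xmit(struct skb *skb, bool use_napi)
{
	struct tx_cb *cb = (struct tx_cb *)skb->cb;

	cb->bql_tracked = use_napi;	/* record the mode at send time */
	if (use_napi)
		q.inflight_bytes += skb->len;
}

static void tx_complete(struct skb *skb)
{
	struct tx_cb *cb = (struct tx_cb *)skb->cb;

	/* Consume from BQL only what BQL tracked, even if the queue
	 * switched modes after the packet was queued. */
	if (cb->bql_tracked)
		q.inflight_bytes -= skb->len;
}
```

With this, a packet queued in napi mode and completed after switching to non-napi mode still balances the counter instead of underflowing it.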


Adding the patch now.

Thanks



But when testing with vhost-net, I don't see very stable performance. That is probably because we batch the used ring updates, so tx interrupts may arrive at random points. We probably need to implement a time-bounded coalescing mechanism that could be configured from userspace.
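A time-bounded coalescing policy could look roughly like the sketch below. This is a user-space model, not a vhost-net implementation; max_pending and max_delay_us are invented knob names (conceptually like ethtool's tx-frames/tx-usecs) standing in for whatever userspace interface would configure it. An interrupt is raised when either enough completions have been batched or the oldest pending one has waited too long, bounding how late BQL feedback can arrive.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical coalescing knobs, assumed to be set from userspace. */
struct coalesce {
	unsigned int max_pending;	/* fire after this many completions */
	unsigned long max_delay_us;	/* ...or after this much time */
	unsigned int pending;
	unsigned long first_pending_us;
};

/* Called per used-ring completion; returns true when a tx interrupt
 * should be raised.  The time bound keeps interrupts from being
 * deferred arbitrarily long by used-ring update batching. */
static bool coalesce_completion(struct coalesce *c, unsigned long now_us)
{
	if (c->pending++ == 0)
		c->first_pending_us = now_us;	/* start the clock */

	if (c->pending >= c->max_pending ||
	    now_us - c->first_pending_us >= c->max_delay_us) {
		c->pending = 0;
		return true;
	}
	return false;
}
```

The count bound preserves the batching win under load, while the time bound puts a ceiling on completion latency when traffic is sparse.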

Btw, maybe it's time to just enable napi TX by default. I get a ~10% TCP_RR regression on a machine without APICv (haven't found time to test an APICv machine). But considering it was done for correctness, I think it's acceptable? Then we can do optimization on top?


Thanks


Signed-off-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
---
 drivers/net/virtio_net.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index cecfd77c9f3c..b657bde6b94b 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1325,7 +1325,8 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
 	return stats.packets;
 }
 
-static void free_old_xmit_skbs(struct send_queue *sq)
+static void free_old_xmit_skbs(struct send_queue *sq, struct netdev_queue *txq,
+			       bool use_napi)
 {
 	struct sk_buff *skb;
 	unsigned int len;
@@ -1347,6 +1348,9 @@ static void free_old_xmit_skbs(struct send_queue *sq)
 	if (!packets)
 		return;
 
+	if (use_napi)
+		netdev_tx_completed_queue(txq, packets, bytes);
+
 	u64_stats_update_begin(&sq->stats.syncp);
 	sq->stats.bytes += bytes;
 	sq->stats.packets += packets;
@@ -1364,7 +1368,7 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
 		return;
 
 	if (__netif_tx_trylock(txq)) {
-		free_old_xmit_skbs(sq);
+		free_old_xmit_skbs(sq, txq, true);
 		__netif_tx_unlock(txq);
 	}
 
@@ -1440,7 +1444,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, vq2txq(sq->vq));
 
 	__netif_tx_lock(txq, raw_smp_processor_id());
-	free_old_xmit_skbs(sq);
+	free_old_xmit_skbs(sq, txq, true);
 	__netif_tx_unlock(txq);
 
 	virtqueue_napi_complete(napi, sq->vq, 0);
@@ -1505,13 +1509,15 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct send_queue *sq = &vi->sq[qnum];
 	int err;
 	struct netdev_queue *txq = netdev_get_tx_queue(dev, qnum);
-	bool kick = !skb->xmit_more;
+	bool more = skb->xmit_more;
 	bool use_napi = sq->napi.weight;
+	unsigned int bytes = skb->len;
+	bool kick;
 
 	/* Free up any pending old buffers before queueing new ones. */
-	free_old_xmit_skbs(sq);
+	free_old_xmit_skbs(sq, txq, use_napi);
 
-	if (use_napi && kick)
+	if (use_napi && !more)
 		virtqueue_enable_cb_delayed(sq->vq);
 
 	/* timestamp packet in software */
@@ -1552,7 +1558,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 		if (!use_napi &&
 		    unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
 			/* More just got used, free them then recheck. */
-			free_old_xmit_skbs(sq);
+			free_old_xmit_skbs(sq, txq, false);
 			if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) {
 				netif_start_subqueue(dev, qnum);
 				virtqueue_disable_cb(sq->vq);
@@ -1560,7 +1566,12 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 		}
 	}
 
-	if (kick || netif_xmit_stopped(txq)) {
+	if (use_napi)
+		kick = __netdev_tx_sent_queue(txq, bytes, more);
+	else
+		kick = !more || netif_xmit_stopped(txq);
+
+	if (kick) {
 		if (virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq)) {
 			u64_stats_update_begin(&sq->stats.syncp);
 			sq->stats.kicks++;
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization