Re: [PATCH RFC v2 3/4] virtio_net: move tx vq operation under tx queue lock

From: Michael S. Tsirkin
Date: Tue Apr 13 2021 - 15:38:33 EST


On Tue, Apr 13, 2021 at 10:20:39AM -0400, Willem de Bruijn wrote:
> On Tue, Apr 13, 2021 at 10:03 AM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
> >
> > On Tue, Apr 13, 2021 at 04:54:42PM +0800, Jason Wang wrote:
> > >
> > > On 2021/4/13 1:47 PM, Michael S. Tsirkin wrote:
> > > > It's unsafe to operate a vq from multiple threads.
> > > > Unfortunately this is exactly what we do when invoking
> > > > clean tx poll from rx napi.
>
> Actually, the issue goes back to napi-tx itself, even without the
> opportunistic cleaning from the receive interrupt, I think? That races
> with processing the vq in start_xmit.
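
To make the race concrete, the pre-patch call chains that touch the
same tx vq look roughly like this (an outline of the paths in
drivers/net/virtio_net.c around that commit, not a runnable example;
exact details vary by tree):

    CPU0: start_xmit()                  [tx queue lock held by the core]
              free_old_xmit_skbs()      -> virtqueue_get_buf()
              xmit_skb()                -> virtqueue_add_outbuf()

    CPU1: virtnet_poll_tx()             [tx napi]
              __netif_tx_lock()
              free_old_xmit_skbs()      -> virtqueue_get_buf()
              __netif_tx_unlock()
              virtqueue_napi_complete() -> virtqueue_enable_cb_prepare(),
                                           virtqueue_poll()
                                           ... vq touched with no tx lock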
>
> > > > As a fix move everything that deals with the vq to under tx lock.
> > > >
>
> If the above is correct:
>
> Fixes: b92f1e6751a6 ("virtio-net: transmit napi")
>
> > > > Signed-off-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
> > > > ---
> > > > drivers/net/virtio_net.c | 22 +++++++++++++++++++++-
> > > > 1 file changed, 21 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > index 16d5abed582c..460ccdbb840e 100644
> > > > --- a/drivers/net/virtio_net.c
> > > > +++ b/drivers/net/virtio_net.c
> > > > @@ -1505,6 +1505,8 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
> > > >          struct virtnet_info *vi = sq->vq->vdev->priv;
> > > >          unsigned int index = vq2txq(sq->vq);
> > > >          struct netdev_queue *txq;
> > > > +        int opaque;
>
> nit: virtqueue_napi_complete also stores as int opaque, but
> virtqueue_enable_cb_prepare actually returns, and virtqueue_poll
> expects, an unsigned int. In the end, conversion works correctly. But
> cleaner to use the real type.
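
For reference, the prototypes (as in include/linux/virtio.h around
this time; worth double-checking against your tree):

unsigned virtqueue_enable_cb_prepare(struct virtqueue *vq);
bool virtqueue_poll(struct virtqueue *vq, unsigned last_used_idx);

so the cleaner declaration here would be "unsigned int opaque;".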
>
> > > > +        bool done;
> > > >
> > > >          if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
> > > >                  /* We don't need to enable cb for XDP */
> > > > @@ -1514,10 +1516,28 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
> > > >          txq = netdev_get_tx_queue(vi->dev, index);
> > > >          __netif_tx_lock(txq, raw_smp_processor_id());
> > > > +        virtqueue_disable_cb(sq->vq);
> > > >          free_old_xmit_skbs(sq, true);
> > > > +
> > > > +        opaque = virtqueue_enable_cb_prepare(sq->vq);
> > > > +
> > > > +        done = napi_complete_done(napi, 0);
> > > > +
> > > > +        if (!done)
> > > > +                virtqueue_disable_cb(sq->vq);
> > > > +
> > > >          __netif_tx_unlock(txq);
> > > > -        virtqueue_napi_complete(napi, sq->vq, 0);
> > >
> > >
> > > So I wonder why not simply move __netif_tx_unlock() after
> > > virtqueue_napi_complete()?
> > >
> > > Thanks
> > >
> >
> >
> > Because that calls tx poll which also takes tx lock internally ...
>
> which tx poll?

Oh. It's virtqueue_poll actually. I confused it with
virtnet_poll_tx. Right. We can put it back the way it was.
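
For reference, the helper under discussion is (quoting
drivers/net/virtio_net.c as of this thread):

static void virtqueue_napi_complete(struct napi_struct *napi,
                                    struct virtqueue *vq, int processed)
{
        int opaque;

        opaque = virtqueue_enable_cb_prepare(vq);
        if (napi_complete_done(napi, processed)) {
                if (unlikely(virtqueue_poll(vq, opaque)))
                        virtqueue_napi_schedule(napi, vq);
        } else {
                virtqueue_disable_cb(vq);
        }
}

It only touches the vq and the napi state, so calling it with the tx
lock still held, as Jason suggests, doesn't recurse on the tx queue
lock.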

--
MST