Re: [PATCH net-next v5 4/5] mvpp2: recycle buffers

From: Matteo Croce
Date: Thu May 13 2021 - 19:53:39 EST


On Thu, May 13, 2021 at 8:21 PM Russell King (Oracle)
<linux@xxxxxxxxxxxxxxx> wrote:
>
> On Thu, May 13, 2021 at 06:58:45PM +0200, Matteo Croce wrote:
> > diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> > index b2259bf1d299..9dceabece56c 100644
> > --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> > +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> > @@ -3847,6 +3847,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
> > struct mvpp2_pcpu_stats ps = {};
> > enum dma_data_direction dma_dir;
> > struct bpf_prog *xdp_prog;
> > + struct xdp_rxq_info *rxqi;
> > struct xdp_buff xdp;
> > int rx_received;
> > int rx_done = 0;
> > @@ -3912,15 +3913,15 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
> > else
> > frag_size = bm_pool->frag_size;
> >
> > + if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
> > + rxqi = &rxq->xdp_rxq_short;
> > + else
> > + rxqi = &rxq->xdp_rxq_long;
> >
> > + if (xdp_prog) {
> > + xdp.rxq = rxqi;
> >
> > + xdp_init_buff(&xdp, PAGE_SIZE, rxqi);
> > xdp_prepare_buff(&xdp, data,
> > MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM,
> > rx_bytes, false);
> > @@ -3964,7 +3965,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
> > }
> >
> > if (pp)
> > + skb_mark_for_recycle(skb, virt_to_page(data), pp);
> > else
> > dma_unmap_single_attrs(dev->dev.parent, dma_addr,
> > bm_pool->buf_size, DMA_FROM_DEVICE,
>
> Looking at the above (I've quoted only the _resulting_ code after your
> patch), I don't see why you have moved the
> "bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE" conditional outside of
> the test for xdp_prog - rxqi doesn't appear to be used except within
> that conditional. Please can you explain the reasoning there?
>
>

Back in v3, skb_mark_for_recycle() took an xdp_mem_info *, so rxqi had
to live outside that conditional's scope so I could reach that pointer.
Now it only needs a page_pool *, so I can restore the original chunk.
Nice catch.
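For reference, the restored chunk would look roughly like this - a
sketch based only on the hunk quoted above, not the final patch - with
the pool-size test moved back inside the XDP branch, which is the only
place rxqi is used:

```c
	if (xdp_prog) {
		struct xdp_rxq_info *rxqi;

		/* Pick the per-pool rxq info; only the XDP path needs it */
		if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
			rxqi = &rxq->xdp_rxq_short;
		else
			rxqi = &rxq->xdp_rxq_long;

		/* xdp_init_buff() already stores rxqi in xdp.rxq, so the
		 * separate "xdp.rxq = rxqi;" assignment can go away too.
		 */
		xdp_init_buff(&xdp, PAGE_SIZE, rxqi);
		xdp_prepare_buff(&xdp, data,
				 MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM,
				 rx_bytes, false);
		/* ... run the program as before ... */
	}
```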

Thanks,
--
per aspera ad upstream