Re: [PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE

From: Jisheng Zhang
Date: Wed Dec 12 2018 - 04:25:34 EST


Hi Marcin,

On Wed, 12 Dec 2018 09:22:57 +0100 Marcin Wojtas <mw@xxxxxxxxxxxx> wrote:

> Hi Jisheng,
>
> On Wed, 12 Dec 2018 at 03:48, Jisheng Zhang <Jisheng.Zhang@xxxxxxxxxxxxx> wrote:
> >
> > Hi,
> >
> > On Tue, 11 Dec 2018 13:56:49 +0100 Marcin Wojtas wrote:
> >
> > > Recent changes in the mvneta driver reworked allocation
> > > and handling of the ingress buffers to use entire pages.
> > > Apart from that in SW BM scenario the HW must be informed
> > > via PRXDQS about the biggest possible incoming buffer
> > > that can be propagated by RX descriptors.
> > >
> > > The BufferSize field was filled according to the MTU-dependent
> > > pkt_size value. Later change to PAGE_SIZE broke RX operation
> > > when using 64K pages, as the field is simply too small.
> > >
> > > This patch conditionally limits the value passed to the BufferSize
> > > of the PRXDQS register, depending on the PAGE_SIZE used.
> > > On the occasion remove now unused frag_size field of the mvneta_port
> > > structure.
> > >
> > > Fixes: 562e2f467e71 ("net: mvneta: Improve the buffer allocation
> > > method for SWBM")
> >
> > IMHO, we'd better revert 562e2f467e71 and 7e47fd84b56bb
> >
> > The issue commit 562e2f467e71 wants to solve was caused by commit 7e47fd84b56bb.
> > It looks a bit weird to introduce a regression and then submit another commit
> > (in the same patch set) to solve it.
> >
> > Per my test, after reverting 562e2f467e71 and 7e47fd84b56bb, I can't reproduce
> > what's claimed in commit 562e2f467e71 -- "With system having a small memory
> > (around 256MB), the state "cannot allocate memory to refill with new buffer"
> > is reach pretty quickly."
>
> I am not the one to decide about patch reverting. From what I
> understand, commit 7e47fd84b56bb was introduced in order to increase
> performance by replacing mvneta_frag_alloc/free with using
> entire pages for RX buffers. I have 2 questions:
> - without reverting anything, do you observe memory allocation
> problems during refill?

I see memory waste: for a normal 1500 MTU, before commit 7e47fd84b56bb we
allocated 1920 bytes per rx buffer. After commit 7e47fd84b56bb, we always
allocate PAGE_SIZE bytes; with PAGE_SIZE=4096 that wastes about 53% of the
memory for each rx buffer.

> - are you able to check L2 forwarding numbers on top of the pure
> mainline branch and after reverting the mentioned patches? I'm
> wondering what would be the performance penalty (if any).

I don't have the numbers. IMHO, the performance numbers should have been
put into the commit msg when introducing commit 7e47fd84b56bb.

Thanks

>
> Best regards,
> Marcin
>
> >
> >
> > >
> > > Signed-off-by: Marcin Wojtas <mw@xxxxxxxxxxxx>
> > > ---
> > > drivers/net/ethernet/marvell/mvneta.c | 6 +++---
> > > 1 file changed, 3 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > > index e5397c8..61b2349 100644
> > > --- a/drivers/net/ethernet/marvell/mvneta.c
> > > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > > @@ -408,7 +408,6 @@ struct mvneta_port {
> > > struct mvneta_pcpu_stats __percpu *stats;
> > >
> > > int pkt_size;
> > > - unsigned int frag_size;
> > > void __iomem *base;
> > > struct mvneta_rx_queue *rxqs;
> > > struct mvneta_tx_queue *txqs;
> > > @@ -2905,7 +2904,9 @@ static void mvneta_rxq_hw_init(struct mvneta_port *pp,
> > > if (!pp->bm_priv) {
> > > /* Set Offset */
> > > mvneta_rxq_offset_set(pp, rxq, 0);
> > > - mvneta_rxq_buf_size_set(pp, rxq, pp->frag_size);
> > > + mvneta_rxq_buf_size_set(pp, rxq, PAGE_SIZE < SZ_64K ?
> > > + PAGE_SIZE :
> > > + MVNETA_RX_BUF_SIZE(pp->pkt_size));
> > > mvneta_rxq_bm_disable(pp, rxq);
> > > mvneta_rxq_fill(pp, rxq, rxq->size);
> > > } else {
> > > @@ -3760,7 +3761,6 @@ static int mvneta_open(struct net_device *dev)
> > > int ret;
> > >
> > > pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
> > > - pp->frag_size = PAGE_SIZE;
> > >
> > > ret = mvneta_setup_rxqs(pp);
> > > if (ret)
> >