Re: [net PATCH] skb: Do not mix page pool and page referenced frags in GRO

From: Ilias Apalodimas
Date: Fri Jan 27 2023 - 02:15:50 EST


Thanks Alexander!

On Fri, 27 Jan 2023 at 01:13, Jakub Kicinski <kuba@xxxxxxxxxx> wrote:
>
> On Thu, 26 Jan 2023 11:06:59 -0800 Alexander Duyck wrote:
> > From: Alexander Duyck <alexanderduyck@xxxxxx>
> >
> > GRO should not merge page pool recycled frames with standard reference
> > counted frames. Traditionally this didn't occur, or at least not often.
> > However, as we start adding support for wireless adapters, the two can be
> > mixed because A-MSDU handling repartitions frames in the receive path.
> > There are possibly other places where this can occur, but I suspect they
> > are few and far between, as we have not seen this issue until now.
> >
> > Fixes: 53e0961da1c7 ("page_pool: add frag page recycling support in page pool")
> > Reported-by: Felix Fietkau <nbd@xxxxxxxx>
> > Signed-off-by: Alexander Duyck <alexanderduyck@xxxxxx>
>
> Exciting investigation!
> Felix, out of curiosity: is the performance impact of losing GRO not
> significant enough to care about? We could possibly try to switch to
> using the frag list if we can't merge into frags safely.
>
> > diff --git a/net/core/gro.c b/net/core/gro.c
> > index 506f83d715f8..4bac7ea6e025 100644
> > --- a/net/core/gro.c
> > +++ b/net/core/gro.c
> > @@ -162,6 +162,15 @@ int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb)
> >  	struct sk_buff *lp;
> >  	int segs;
> >
> > +	/* Do not splice page pool based packets w/ non-page pool
> > +	 * packets. This can result in reference count issues as page
> > +	 * pool pages will not decrement the reference count and will
> > +	 * instead be immediately returned to the pool or have frag
> > +	 * count decremented.
> > +	 */
> > +	if (p->pp_recycle != skb->pp_recycle)
> > +		return -ETOOMANYREFS;
> > +
> >  	/* pairs with WRITE_ONCE() in netif_set_gro_max_size() */
> >  	gro_max_size = READ_ONCE(p->dev->gro_max_size);
> >
>
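
To spell out the failure mode the check avoids, here is a stand-alone toy
model in ordinary user-space C, not kernel code; the struct and function
names are purely illustrative. The point is that a merged skb carries a
single pp_recycle flag, so one release policy would get applied to frags
that came from two different ownership models:

/*
 * Toy model only - not kernel code. Names are illustrative.
 *
 * A page pool page is released by handing it back to its pool; an
 * ordinary page is released by dropping a reference. A merged skb has a
 * single pp_recycle flag, so whichever release path that flag selects is
 * wrong for frags that came from the other ownership model.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_page {
	int refcount;
	bool from_pool;
};

/* Release path taken when the owning skb has pp_recycle set. */
static void toy_return_to_pool(struct toy_page *pg)
{
	if (!pg->from_pool)
		printf("BUG: non-pool page handed back to the pool\n");
	/* pool pages are recycled; the reference count is not touched */
}

/* Release path taken for ordinary reference counted pages. */
static void toy_put_page(struct toy_page *pg)
{
	if (pg->from_pool)
		printf("BUG: refcount drop on a pool page, the pool loses it\n");
	pg->refcount--;
}

int main(void)
{
	struct toy_page pool_pg = { .refcount = 1, .from_pool = true };
	struct toy_page norm_pg = { .refcount = 1, .from_pool = false };
	struct toy_page *frags[] = { &pool_pg, &norm_pg };

	/* The merged skb keeps only one flag, say the target skb's. */
	bool merged_pp_recycle = true;

	/* Freeing the merged skb applies that one policy to every frag. */
	for (int i = 0; i < 2; i++) {
		if (merged_pp_recycle)
			toy_return_to_pool(frags[i]);	/* wrong for norm_pg */
		else
			toy_put_page(frags[i]);		/* wrong for pool_pg */
	}
	return 0;
}

The patch simply refuses the merge up front (returning -ETOOMANYREFS), so
each skb keeps releasing its own frags the way they were allocated.
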
Acked-by: Ilias Apalodimas <ilias.apalodimas@xxxxxxxxxx>