Re: [PATCH v4] netdev attribute to control xdpgeneric skb linearization

From: Willem de Bruijn
Date: Tue Mar 03 2020 - 16:10:58 EST


On Tue, Mar 3, 2020 at 3:50 PM Jakub Kicinski <kuba@xxxxxxxxxx> wrote:
>
> On Tue, 3 Mar 2020 20:46:55 +0100 Daniel Borkmann wrote:
> > Thus, when the data/data_end test fails in generic XDP, the user can
> > call e.g. bpf_xdp_pull_data(xdp, 64) to make sure we pull in as much as
> > is needed w/o full linearization and once done the data/data_end can be
> > repeated to proceed. Native XDP will leave xdp->rxq->skb as NULL, but
> > later we could perhaps reuse the same bpf_xdp_pull_data() helper for
> > native with skb-less backing. Thoughts?

Something akin to pskb_may_pull sounds like a great solution to me.
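
To make that concrete, a rough and untested sketch of how a program
could use such a helper follows. bpf_xdp_pull_data() is only a proposal
in this thread, so the signature and the helper id below are assumptions
for illustration, not a real API:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* Hypothetical helper from the proposal above; declared by hand with
   * an assumed signature and a made-up helper id, purely to sketch the
   * program flow.
   */
  static long (*bpf_xdp_pull_data)(struct xdp_md *xdp, __u32 len) =
          (void *) 999;

  SEC("xdp")
  int parse_headers(struct xdp_md *ctx)
  {
          void *data = (void *)(long)ctx->data;
          void *data_end = (void *)(long)ctx->data_end;

          if (data + 64 > data_end) {
                  /* Not enough linear data: pull 64 bytes in, then
                   * reload data/data_end and repeat the bounds check,
                   * much like pskb_may_pull() followed by re-reading
                   * skb->data.
                   */
                  if (bpf_xdp_pull_data(ctx, 64) < 0)
                          return XDP_DROP;
                  data = (void *)(long)ctx->data;
                  data_end = (void *)(long)ctx->data_end;
                  if (data + 64 > data_end)
                          return XDP_DROP;
          }

          /* ... parse Ethernet/IP headers from the linear area ... */
          return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";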

Another approach would be a new xdp_action XDP_NEED_LINEARIZED that
causes the program to be restarted after linearization. But that is both
more expensive and less elegant.

Instead of a sysctl or device option, is this an optimization that
could be taken based on the program? Specifically, would XDP_FLAGS be
a path to pass a SUPPORT_SG flag along with the program? I'm not
entirely familiar with the XDP setup code, so this may be totally
off. But from a quick read it seems like generic_xdp_install could
transfer such a flag to struct net_device.
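
On the attach side, a minimal userspace sketch of what I have in mind,
assuming a hypothetical XDP_FLAGS_SUPPORT_SG bit next to the existing
XDP_FLAGS_* values (the flag name and value are made up here); the
kernel side would then have generic_xdp_install() copy the bit into
struct net_device:

  #include <bpf/libbpf.h>
  #include <linux/if_link.h>

  /* Hypothetical new attach flag; the value is made up and would have
   * to be allocated in the uapi next to the existing XDP_FLAGS_* bits.
   */
  #define XDP_FLAGS_SUPPORT_SG    (1U << 5)

  /* Attach prog_fd in generic (skb) mode while declaring that the
   * program copes with non-linear packets, so the kernel could skip
   * linearization for it.
   */
  int attach_sg_capable(int ifindex, int prog_fd)
  {
          return bpf_set_link_xdp_fd(ifindex, prog_fd,
                                     XDP_FLAGS_SKB_MODE |
                                     XDP_FLAGS_SUPPORT_SG);
  }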

> I'm curious why we consider a xdpgeneric-only addition. Is attaching
> a cls_bpf program noticeably slower than xdpgeneric?

This should not be xdp*generic*-only, but should allow us to use any
XDP with large MTU sizes and without having to disable GRO. I'd still
like a way to drop or modify packets before GRO, or to signal that a
given type of packet should skip GRO.