Re: [PATCH iwl-next v3 16/18] idpf: add support for XDP on Rx

From: Simon Horman
Date: Thu Jul 31 2025 - 09:36:16 EST


On Wed, Jul 30, 2025 at 06:07:15PM +0200, Alexander Lobakin wrote:
> Use the libeth XDP infra to support running an XDP program on Rx
> polling. This includes all of the possible verdicts/actions.
> XDP Tx queues are cleaned only in "lazy" mode, when fewer than 1/4 of
> the descriptors on the ring are free. The libeth helper macros for
> defining driver-specific XDP functions make sure the compiler can
> uninline them when needed.
> Use __LIBETH_WORD_ACCESS to parse descriptors more efficiently when
> applicable. This gives a noticeable performance boost and code size
> reduction on x86_64.
>
> Co-developed-by: Michal Kubiak <michal.kubiak@xxxxxxxxx>
> Signed-off-by: Michal Kubiak <michal.kubiak@xxxxxxxxx>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@xxxxxxxxx>
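
FWIW, as I read it, the "lazy" cleaning described above boils down to a
threshold check before reclaiming completed Tx descriptors. A minimal
sketch, assuming hypothetical helper and field names (idpf_xdpsq_complete(),
pending) rather than the actual idpf ones:

/* Sketch only: reclaim completed XDPSQ descriptors only once fewer
 * than a quarter of the ring is free.  Names are illustrative.
 */
static void idpf_xdpsq_clean_lazy(struct idpf_tx_queue *xdpsq)
{
	u32 free = xdpsq->desc_count - xdpsq->pending;

	if (free >= xdpsq->desc_count / 4)
		return;

	/* hypothetical helper reclaiming completed descriptors */
	idpf_xdpsq_complete(xdpsq);
}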

...

> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c

...

> @@ -3127,14 +3125,12 @@ static bool idpf_rx_process_skb_fields(struct sk_buff *skb,
>  	return !__idpf_rx_process_skb_fields(rxq, skb, xdp->desc);
>  }
>
> -static void
> -idpf_xdp_run_pass(struct libeth_xdp_buff *xdp, struct napi_struct *napi,
> -		  struct libeth_rq_napi_stats *ss,
> -		  const struct virtchnl2_rx_flex_desc_adv_nic_3 *desc)
> -{
> -	libeth_xdp_run_pass(xdp, NULL, napi, ss, desc, NULL,
> -			    idpf_rx_process_skb_fields);
> -}
> +LIBETH_XDP_DEFINE_START();
> +LIBETH_XDP_DEFINE_RUN(static idpf_xdp_run_pass, idpf_xdp_run_prog,
> +		      idpf_xdp_tx_flush_bulk, idpf_rx_process_skb_fields);
> +LIBETH_XDP_DEFINE_FINALIZE(static idpf_xdp_finalize_rx, idpf_xdp_tx_flush_bulk,
> +			   idpf_xdp_tx_finalize);
> +LIBETH_XDP_DEFINE_END();
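
For readers following along: judging from the open-coded version removed
above, LIBETH_XDP_DEFINE_RUN() presumably generates an idpf_xdp_run_prog()
wrapper that uses idpf_xdp_tx_flush_bulk() as its flush callback, plus a
run_pass function roughly equivalent to the following (a sketch, not the
actual macro expansion):

/* Rough sketch of what the macro presumably expands to; not the
 * actual expansion from libeth.
 */
static void
idpf_xdp_run_pass(struct libeth_xdp_buff *xdp, struct libeth_xdp_tx_bulk *bq,
		  struct napi_struct *napi, struct libeth_rq_napi_stats *ss,
		  const struct virtchnl2_rx_flex_desc_adv_nic_3 *desc)
{
	libeth_xdp_run_pass(xdp, bq, napi, ss, desc, idpf_xdp_run_prog,
			    idpf_rx_process_skb_fields);
}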
>
> /**
> * idpf_rx_hsplit_wa - handle header buffer overflows and split errors
> @@ -3222,7 +3218,10 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
>  	struct libeth_rq_napi_stats rs = { };
>  	u16 ntc = rxq->next_to_clean;
>  	LIBETH_XDP_ONSTACK_BUFF(xdp);
> +	LIBETH_XDP_ONSTACK_BULK(bq);
>
> +	libeth_xdp_tx_init_bulk(&bq, rxq->xdp_prog, rxq->xdp_rxq.dev,
> +				rxq->xdpsqs, rxq->num_xdp_txq);
>  	libeth_xdp_init_buff(xdp, &rxq->xdp, &rxq->xdp_rxq);
>
>  	/* Process Rx packets bounded by budget */
> @@ -3318,11 +3317,13 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
>  		if (!idpf_rx_splitq_is_eop(rx_desc) || unlikely(!xdp->data))
>  			continue;
>
> -		idpf_xdp_run_pass(xdp, rxq->napi, &rs, rx_desc);
> +		idpf_xdp_run_pass(xdp, &bq, rxq->napi, &rs, rx_desc);
>  	}
>
>  	rxq->next_to_clean = ntc;
> +
>  	libeth_xdp_save_buff(&rxq->xdp, xdp);
> +	idpf_xdp_finalize_rx(&bq);

This will call __libeth_xdp_finalize_rx(), which calls rcu_read_unlock(),
but there doesn't appear to be a corresponding call to rcu_read_lock().

Flagged by Sparse.
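
If the rcu_read_lock() is intentionally taken elsewhere (e.g. somewhere
inside libeth_xdp_tx_init_bulk()), then the acquire and release live in
different functions, and Sparse will warn about the context imbalance
unless the handoff is annotated. Illustrative only:

/* Illustrative only: how a cross-function RCU handoff is usually
 * annotated so Sparse can track the context imbalance.
 */
static void example_rx_init(void) __acquires(RCU)
{
	rcu_read_lock();
}

static void example_rx_finalize(void) __releases(RCU)
{
	rcu_read_unlock();
}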

>
>  	u64_stats_update_begin(&rxq->stats_sync);
>  	u64_stats_add(&rxq->q_stats.packets, rs.packets);

...