Re: [PATCH v2 net-next 21/26] ice: add XDP and XSK generic per-channel statistics

From: Toke Høiland-Jørgensen
Date: Thu Nov 25 2021 - 06:58:12 EST


Daniel Borkmann <daniel@xxxxxxxxxxxxx> writes:

> Hi Alexander,
>
> On 11/23/21 5:39 PM, Alexander Lobakin wrote:
> [...]
>
> Just commenting on ice here as one example (similar applies to other drivers):
>
>> diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
>> index 1dd7e84f41f8..7dc287bc3a1a 100644
>> --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
>> +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
>> @@ -258,6 +258,8 @@ static void ice_clean_xdp_irq(struct ice_tx_ring *xdp_ring)
>> xdp_ring->next_dd = ICE_TX_THRESH - 1;
>> xdp_ring->next_to_clean = ntc;
>> ice_update_tx_ring_stats(xdp_ring, total_pkts, total_bytes);
>> + xdp_update_tx_drv_stats(&xdp_ring->xdp_stats->xdp_tx, total_pkts,
>> + total_bytes);
>> }
>>
>> /**
>> @@ -277,6 +279,7 @@ int ice_xmit_xdp_ring(void *data, u16 size, struct ice_tx_ring *xdp_ring)
>> ice_clean_xdp_irq(xdp_ring);
>>
>> if (!unlikely(ICE_DESC_UNUSED(xdp_ring))) {
>> + xdp_update_tx_drv_full(&xdp_ring->xdp_stats->xdp_tx);
>> xdp_ring->tx_stats.tx_busy++;
>> return ICE_XDP_CONSUMED;
>> }
>> diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
>> index ff55cb415b11..62ef47a38d93 100644
>> --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
>> +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
>> @@ -454,42 +454,58 @@ ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff **xdp_arr)
>> * @xdp: xdp_buff used as input to the XDP program
>> * @xdp_prog: XDP program to run
>> * @xdp_ring: ring to be used for XDP_TX action
>> + * @lrstats: onstack Rx XDP stats
>> *
>> * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
>> */
>> static int
>> ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
>> - struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring)
>> + struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring,
>> + struct xdp_rx_drv_stats_local *lrstats)
>> {
>> int err, result = ICE_XDP_PASS;
>> u32 act;
>>
>> + lrstats->bytes += xdp->data_end - xdp->data;
>> + lrstats->packets++;
>> +
>> act = bpf_prog_run_xdp(xdp_prog, xdp);
>>
>> if (likely(act == XDP_REDIRECT)) {
>> err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
>> - if (err)
>> + if (err) {
>> + lrstats->redirect_errors++;
>> goto out_failure;
>> + }
>> + lrstats->redirect++;
>> return ICE_XDP_REDIR;
>> }
>>
>> switch (act) {
>> case XDP_PASS:
>> + lrstats->pass++;
>> break;
>> case XDP_TX:
>> result = ice_xmit_xdp_buff(xdp, xdp_ring);
>> - if (result == ICE_XDP_CONSUMED)
>> + if (result == ICE_XDP_CONSUMED) {
>> + lrstats->tx_errors++;
>> goto out_failure;
>> + }
>> + lrstats->tx++;
>> break;
>> default:
>> bpf_warn_invalid_xdp_action(act);
>> - fallthrough;
>> + lrstats->invalid++;
>> + goto out_failure;
>> case XDP_ABORTED:
>> + lrstats->aborted++;
>> out_failure:
>> trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
>> - fallthrough;
>> + result = ICE_XDP_CONSUMED;
>> + break;
>> case XDP_DROP:
>> result = ICE_XDP_CONSUMED;
>> + lrstats->drop++;
>> break;
>> }
>
> Imho, the overall approach is way too bloated. I can see the
> packets/bytes but now we have 3 counter updates with return codes
> included and then the additional sync of the on-stack counters into
> the ring counters via xdp_update_rx_drv_stats(). So we now need
> ice_update_rx_ring_stats() as well as xdp_update_rx_drv_stats() which
> syncs 10 different stat counters via u64_stats_add() into the per ring
> ones. :/
>
> I'm just taking our XDP L4LB in Cilium as an example: there we already
> count errors and export them via per-cpu map that eventually lead to
> XDP_DROP cases including the /reason/ which caused the XDP_DROP (e.g.
> Prometheus can then scrape these insights from all the nodes in the
> cluster). Given the different action codes are very often application
> specific, there's not much debugging that you can do when /only/
> looking at `ip link xdpstats` to gather insight on *why* some of these
> actions were triggered (e.g. fib lookup failure, etc). If really of
> interest, then maybe libxdp could have such per-action counters as
> opt-in in its call chain..

To me, standardising these counters is less about helping people debug
their XDP programs (as you say, you can put your own telemetry into
those), and more about making XDP less "mystical" to the system
administrator (who may not be the same person who wrote the XDP
programs). So at the very least, they need to indicate "where are the
packets going", which means at least counters for DROP, REDIRECT and TX
(+ errors for tx/redirect) in addition to the "processed by XDP" initial
counter. Which in the above means 'pass', 'invalid' and 'aborted' could
be dropped, I guess; but I don't mind terribly keeping them either given
that there's no measurable performance impact.

> But then it also seems like above in ice_xmit_xdp_ring() we now need
> to bump counters twice just for sake of ethtool vs xdp counters which
> sucks a bit, would be nice to only having to do it once:

This I agree with, and while I can see the layering argument for putting
them into 'ip' and rtnetlink instead of ethtool, I also worry that these
counters will simply be lost in obscurity, so I do wonder if it wouldn't
be better to accept the "layering violation" and keep them all in the
'ethtool -S' output?

[...]

> + xdp-channel0-rx_xdp_redirect: 7
> + xdp-channel0-rx_xdp_redirect_errors: 8
> + xdp-channel0-rx_xdp_tx: 9
> + xdp-channel0-rx_xdp_tx_errors: 10
> + xdp-channel0-tx_xdp_xmit_packets: 11
> + xdp-channel0-tx_xdp_xmit_bytes: 12
> + xdp-channel0-tx_xdp_xmit_errors: 13
> + xdp-channel0-tx_xdp_xmit_full: 14
>
> From a user PoV to avoid confusion, maybe should be made more clear that the latter refers
> to xsk.

+1, these should probably be xdp-channel0-tx_xsk_* or something like
that...

-Toke