[PATCH v3 net-next 0/6] skbuff: micro-optimize flow dissection

From: Alexander Lobakin
Date: Sun Mar 14 2021 - 07:12:25 EST


This little number makes all of the flow dissection functions take the
raw input data pointer as const (patches 1-5) and shuffles the branches
in __skb_header_pointer() according to their hit probability (patch 6).
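
The __skb_header_pointer() part boils down to testing the in-linear-data
case first and annotating it as the likely one, since that branch is hit
for the vast majority of frames. Roughly, as a simplified sketch of the
reordered helper rather than the exact diff:

static inline void * __must_check
__skb_header_pointer(const struct sk_buff *skb, int offset, int len,
		     const void *data, int hlen, void *buffer)
{
	/* Fast path: the requested bytes already sit in the raw/linear
	 * data, which is by far the most common case, so test it first.
	 */
	if (likely(hlen - offset >= len))
		return (void *)data + offset;

	/* Slow path: pull the bytes into the caller-provided buffer. */
	if (!skb || unlikely(skb_copy_bits(skb, offset, buffer, len) < 0))
		return NULL;

	return buffer;
}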

The result is +20 Mbps per flow/core with one Flow Dissector pass
per packet. This benefits RPS (with software hashing), drivers that
use eth_get_headlen() on their Rx path, and so on.
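
For the eth_get_headlen() case, think of a header-split driver pulling
only the packet headers into the skb's linear area on Rx. A hypothetical
snippet (driver-local names such as RX_HDR_SIZE, va, size and rx_ring
are made up for illustration):

	/* The frame headers live in a page fragment pointed to by 'va';
	 * dissect them to learn how many bytes to copy into the linear
	 * area, capped by the made-up per-driver RX_HDR_SIZE constant.
	 */
	unsigned int headlen = size;

	if (headlen > RX_HDR_SIZE)
		headlen = eth_get_headlen(rx_ring->netdev, va, RX_HDR_SIZE);

	memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long)));

Each such call is one full Flow Dissector pass over the frame, which is
where the per-packet gain quoted above comes from.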

From v2 [1]:
- reword some commit messages as a potential fix for NIPA;
- no functional changes.

From v1 [0]:
- rebase on top of the latest net-next. This was super-weird: I
double-checked that the series applied cleanly with no conflicts,
yet on Patchwork it didn't;
- no other changes.

[0] https://lore.kernel.org/netdev/20210312194538.337504-1-alobakin@xxxxx
[1] https://lore.kernel.org/netdev/20210313113645.5949-1-alobakin@xxxxx

Alexander Lobakin (6):
flow_dissector: constify bpf_flow_dissector's data pointers
skbuff: make __skb_header_pointer()'s data argument const
flow_dissector: constify raw input data argument
linux/etherdevice.h: misc trailing whitespace cleanup
ethernet: constify eth_get_headlen()'s data argument
skbuff: micro-optimize {,__}skb_header_pointer()

include/linux/etherdevice.h  |  4 ++--
include/linux/skbuff.h       | 26 +++++++++++------------
include/net/flow_dissector.h |  6 +++---
net/core/flow_dissector.c    | 41 +++++++++++++++++++-----------------
net/ethernet/eth.c           |  2 +-
5 files changed, 40 insertions(+), 39 deletions(-)

--
2.30.2