On Fri, 6 Jun 2025 22:48:53 +0700 Bui Quang Minh wrote:
> > Is that always the case? Can the multi-buf not be due to header-data
> > split of the incoming frame? (I'm not familiar with the virtio spec)
>
> Here we are in the zerocopy path, so the buffers for the frame to fill
> in are allocated from the XDP socket's umem. And if the frame spans
> across multiple buffers then the total frame size is larger than the
> chunk size.

If it's simply too long for the chunk size that's a frame length error,
right?

> But currently, if a multi-buffer packet arrives, it will not go through
> the XDP program, so it doesn't increase the stats but still goes to the
> network stack. So I think it's not a correct behavior.

Sounds fair, but at a glance the normal XDP path seems to be trying to
linearize the frame. Can we not try to flatten the frame here?

> Furthermore, we are in the zerocopy path so we cannot linearize by
> allocating a large enough buffer to cover the whole frame and then
> copying the frame data into it. That's not zerocopy anymore. Also, XDP
> socket zerocopy receive assumes that the packets it receives come from
> the umem pool. AFAIK, the generic XDP path is for copy mode only.

Generic XDP == do_xdp_generic(); here I think you mean the normal XDP
path in the virtio driver? If so then no, XDP is very much not
expected to copy each frame before processing.
This is only slightly related to your patch but while we talk about
multi-buf - in the netdev CI the test which sends ping while XDP
multi-buf program is attached is really flaky :(
https://netdev.bots.linux.dev/contest.html?executor=vmksft-drv-hw&test=ping-py.ping-test-xdp-native-mb&ld-cases=1