Re: [PATCH v2 bpf] bpf, test_run: fix &xdp_frame misplacement for LIVE_FRAMES

From: Alexander Lobakin
Date: Tue Feb 14 2023 - 10:40:48 EST


From: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
Date: Tue, 14 Feb 2023 16:24:10 +0100

> On 2/13/23 3:27 PM, Alexander Lobakin wrote:
>> &xdp_buff and &xdp_frame are bound in a way that
>>
>> xdp_buff->data_hard_start == xdp_frame
>>
>> This is always the case, and e.g. xdp_convert_buff_to_frame() relies
>> on it.
>> IOW, the following:
>>
>>     for (u32 i = 0; i < 0xdead; i++) {
>>         xdpf = xdp_convert_buff_to_frame(&xdp);
>>         xdp_convert_frame_to_buff(xdpf, &xdp);
>>     }
>>
>> shouldn't ever modify @xdpf's contents or the pointer itself.
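>>
>> For illustration, the two conversion helpers boil down to roughly the
>> following (a simplified sketch of include/net/xdp.h, error handling
>> omitted):
>>
>>     /* buff -> frame: the frame is stored at data_hard_start */
>>     struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp)
>>     {
>>         struct xdp_frame *xdpf = xdp->data_hard_start;
>>
>>         /* headroom is counted from the end of the frame struct */
>>         xdpf->headroom = xdp->data - xdp->data_hard_start -
>>                          sizeof(*xdpf);
>>         /* ... */
>>         return xdpf;
>>     }
>>
>>     /* frame -> buff: data_hard_start must land on the frame again */
>>     void xdp_convert_frame_to_buff(struct xdp_frame *frame,
>>                                    struct xdp_buff *xdp)
>>     {
>>         xdp->data_hard_start = frame->data - frame->headroom -
>>                                sizeof(*frame);
>>         xdp->data = frame->data;
>>         /* ... */
>>     }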
>> However, the "live packet" code wrongly treats &xdp_frame as part of
>> its context, placed *before* data_hard_start. With this layout,
>> data_hard_start ends up sizeof(*xdpf) off to the right and no longer
>> points to the XDP frame.
>>
>> Instead of replacing `sizeof(ctx)` with `offsetof(ctx, xdpf)` in several
>> places and praying that there are no more miscalcs left somewhere in the
>> code, unionize ::frm with ::data in a flex array, so that both start
>> pointing to the actual data_hard_start and the XDP frame actually
>> becomes a part of it, i.e. a part of the headroom, not the context.
>> A nice side effect is that the maximum frame size for this mode gets
>> increased by 40 bytes, as xdp_buff::frame_sz includes everything from
>> data_hard_start (-> includes xdpf already) to the end of XDP/skb shared
>> info.
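>>
>> Concretely, the layout change is along these lines (a sketch of the
>> &xdp_page_head change in net/bpf/test_run.c):
>>
>>     struct xdp_page_head {
>>         struct xdp_buff orig_ctx;
>>         struct xdp_buff ctx;
>> -       struct xdp_frame frm;   /* sat before data_hard_start */
>> -       u8 data[];
>> +       union {                 /* both start at data_hard_start */
>> +           DECLARE_FLEX_ARRAY(struct xdp_frame, frame);
>> +           DECLARE_FLEX_ARRAY(u8, data);
>> +       };
>>     };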
>>
>> Minor: align `&head->data` with how `head->frm` is assigned for
>> consistency.
>> Minor #2: rename 'frm' to 'frame' in &xdp_page_head while at it for
>> clarity.
>>
>> (was found while testing the XDP traffic generator on ice, which
>>   calls xdp_convert_frame_to_buff() for each XDP frame)
>>
>> Fixes: b530e9e1063e ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN")
>> Signed-off-by: Alexander Lobakin <alexandr.lobakin@xxxxxxxxx>
>
> Could you double-check BPF CI? A number of XDP-related tests seem to
> be failing on your patch, which I'm not seeing on other patches where
> runs are green; for example, test_progs on several archs reports the
> below:
>
> https://github.com/kernel-patches/bpf/actions/runs/4164593416/jobs/7207290499
>
>   [...]
>   test_xdp_do_redirect:PASS:prog_run 0 nsec
>   test_xdp_do_redirect:PASS:pkt_count_xdp 0 nsec
>   test_xdp_do_redirect:PASS:pkt_count_zero 0 nsec
>   test_xdp_do_redirect:PASS:pkt_count_tc 0 nsec
>   test_max_pkt_size:PASS:prog_run_max_size 0 nsec
>   test_max_pkt_size:FAIL:prog_run_too_big unexpected prog_run_too_big: actual -28 != expected -22
>   close_netns:PASS:setns 0 nsec
>   #275     xdp_do_redirect:FAIL
>   Summary: 273/1581 PASSED, 21 SKIPPED, 2 FAILED
Ah, I see. The xdp_do_redirect.c test defines:

/* The maximum permissible size is: PAGE_SIZE -
* sizeof(struct xdp_page_head) - sizeof(struct skb_shared_info) -
* XDP_PACKET_HEADROOM = 3368 bytes
*/
#define MAX_PKT_SIZE 3368

This needs to be updated, since the maximum frame size has grown by 40
bytes with the fix. The test checks that a frame of exactly this size
passes and that size + 1 fails, but now the old limit + 1 fits as well.
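With sizeof(struct xdp_frame) being 40 bytes on 64-bit, the updated
define should presumably become:

/* The maximum permissible size is: PAGE_SIZE -
 * sizeof(struct xdp_page_head) - sizeof(struct skb_shared_info) -
 * XDP_PACKET_HEADROOM = 3408 bytes
 */
#define MAX_PKT_SIZE 3408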
Will send v3 in a couple of minutes.

Thanks,
Olek