Re: [PATCH net] net/core/dev.c: Ensure pfmemalloc skbs are correctly handled when receiving

From: Eric Dumazet
Date: Sat Apr 17 2021 - 00:52:24 EST


On Sat, Apr 17, 2021 at 2:08 AM Xie He <xie.he.0141@xxxxxxxxx> wrote:
>
> When an skb is allocated by "__netdev_alloc_skb" in "net/core/skbuff.c",
> if "sk_memalloc_socks()" is true, and if there's not sufficient memory,
> the skb would be allocated using emergency memory reserves. This kind of
> skbs are called pfmemalloc skbs.
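>
> A simplified sketch of that allocation path (illustrative only; the
> real code in "net/core/skbuff.c" has more detail):
>
>     struct sk_buff *__netdev_alloc_skb(struct net_device *dev,
>                                        unsigned int len, gfp_t gfp_mask)
>     {
>             struct sk_buff *skb;
>             bool pfmemalloc;
>             ...
>             if (sk_memalloc_socks())
>                     gfp_mask |= __GFP_MEMALLOC; /* may use reserves */
>             ...
>             /* the page allocator reports whether the allocation had
>              * to dip into the emergency reserves
>              */
>             if (pfmemalloc)
>                     skb->pfmemalloc = 1;
>             ...
>             return skb;
>     }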
>
> pfmemalloc skbs must be handled specially in "net/core/dev.c" on the
> receive path. They must NOT be delivered to the target protocol if
> "skb_pfmemalloc_protocol(skb)" is false.
>
> However, if, after a pfmemalloc skb is allocated and before it reaches
> the code in "__netif_receive_skb", "sk_memalloc_socks()" becomes false,
> then the skb will be handled by "__netif_receive_skb" as a normal skb.
> This causes the skb to be delivered to the target protocol even if
> "skb_pfmemalloc_protocol(skb)" is false.
>
> This patch fixes this problem by ensuring all pfmemalloc skbs are handled
> by "__netif_receive_skb" as pfmemalloc skbs.
>
> "__netif_receive_skb_list" has the same problem as "__netif_receive_skb".
> This patch also fixes it.
>
> Fixes: b4b9e3558508 ("netvm: set PF_MEMALLOC as appropriate during SKB processing")
> Cc: Mel Gorman <mgorman@xxxxxxx>
> Cc: David S. Miller <davem@xxxxxxxxxxxxx>
> Cc: Neil Brown <neilb@xxxxxxx>
> Cc: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
> Cc: Jiri Slaby <jslaby@xxxxxxx>
> Cc: Mike Christie <michaelc@xxxxxxxxxxx>
> Cc: Eric B Munson <emunson@xxxxxxxxx>
> Cc: Eric Dumazet <eric.dumazet@xxxxxxxxx>
> Cc: Sebastian Andrzej Siewior <sebastian@xxxxxxxxxxxxx>
> Cc: Christoph Lameter <cl@xxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Xie He <xie.he.0141@xxxxxxxxx>
> ---
> net/core/dev.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 1f79b9aa9a3f..3e6b7879daef 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -5479,7 +5479,7 @@ static int __netif_receive_skb(struct sk_buff *skb)
> {
> int ret;
>
> - if (sk_memalloc_socks() && skb_pfmemalloc(skb)) {
> + if (skb_pfmemalloc(skb)) {
> unsigned int noreclaim_flag;
>
> /*
> @@ -5507,7 +5507,7 @@ static void __netif_receive_skb_list(struct list_head *head)
> bool pfmemalloc = false; /* Is current sublist PF_MEMALLOC? */
>
> list_for_each_entry_safe(skb, next, head, list) {
> - if ((sk_memalloc_socks() && skb_pfmemalloc(skb)) != pfmemalloc) {
> + if (skb_pfmemalloc(skb) != pfmemalloc) {
> struct list_head sublist;
>
> /* Handle the previous sublist */
> --
> 2.27.0
>

The race window has been considered small enough that we prefer the
code as it is.

The reason we prefer the current code is that sk_memalloc_socks() is
implemented with a static key.
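
For context, it is roughly this (simplified from include/net/sock.h):

    DECLARE_STATIC_KEY_FALSE(memalloc_socks_key);

    static inline int sk_memalloc_socks(void)
    {
            return static_branch_unlikely(&memalloc_socks_key);
    }

When no memalloc sockets exist, the static branch is patched out at run
time, so the whole "sk_memalloc_socks() && skb_pfmemalloc(skb)" test
costs essentially nothing; removing the static-key check forces the
skb->pfmemalloc test on every received packet.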

Trading a minor race condition for extra cycles on every received
packet is a serious concern.

What matters is a persistent condition that would _deplete_ memory, not
for a dozen packets but for thousands. Can you demonstrate such an
issue?