Re: [PATCH net] tcp: avoid the lookup process failing to get sk in ehash table

From: Jason Xing
Date: Sat Jan 14 2023 - 07:06:38 EST


On Sat, Jan 14, 2023 at 5:45 PM Eric Dumazet <edumazet@xxxxxxxxxx> wrote:
>
> On Thu, Jan 12, 2023 at 7:54 AM Jason Xing <kerneljasonxing@xxxxxxxxx> wrote:
> >
> > From: Jason Xing <kernelxing@xxxxxxxxxxx>
> >
> > While one CPU is looking up the right socket in the ehash table,
> > another CPU may have just deleted the request socket and be about
> > to add (or be adding) the full socket to the table. This means the
> > lookup could miss both of them, even though the window is small.
> >
> > Let me draw a call trace map of the server side.
> > CPU 0                            CPU 1
> > -----                            -----
> > tcp_v4_rcv()                     syn_recv_sock()
> >                                  inet_ehash_insert()
> >                                  -> sk_nulls_del_node_init_rcu(osk)
> > __inet_lookup_established()
> >                                  -> __sk_nulls_add_node_rcu(sk, list)
> >
> > Notice that CPU 0 is receiving data sent after the final ACK of the
> > 3-way handshake, while CPU 1 is still handling that final ACK.
> >
> > Why could this be a real problem?
> > It only happens when the final ACK and the first data packet are
> > received on different CPUs. The server, receiving data with the ACK
> > flag set, tries to look up the matching established socket in the
> > ehash table, but fails as the map above shows. It then falls back to
> > a listener socket and sends a RST because the skb (data) carries an
> > ACK flag, following the RST rules of RFC 793.
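
[ For context: the reader side of __inet_lookup_established() only
  restarts its traversal when the nulls value found at the end of the
  chain does not match the expected slot; a momentarily-emptied bucket
  still ends in the right nulls value, so the miss goes undetected.
  A simplified sketch of that loop, paraphrased from
  net/ipv4/inet_hashtables.c rather than quoted verbatim:

begin:
	sk_nulls_for_each_rcu(sk, node, &head->chain) {
		if (sk->sk_hash != hash)
			continue;
		if (likely(inet_match(net, sk, acookie, ports, dif, sdif)))
			goto found;	/* established socket hit */
	}
	/* Restart only if the chain ended in another slot's nulls
	 * marker. In the race above, the chain merely lacks both
	 * sockets for a moment, the nulls value still matches, and
	 * the lookup falls through to the listener.
	 */
	if (get_nulls_value(node) != slot)
		goto begin;
]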
> >
> > Many thanks to Eric for his great help from beginning to end.
> >
> > Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
> > Signed-off-by: Jason Xing <kernelxing@xxxxxxxxxxx>
> > ---
> > net/ipv4/inet_hashtables.c | 10 ++++++++++
> > 1 file changed, 10 insertions(+)
> >
> > diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
> > index 24a38b56fab9..18f88cb4efcb 100644
> > --- a/net/ipv4/inet_hashtables.c
> > +++ b/net/ipv4/inet_hashtables.c
> > @@ -650,7 +650,16 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
> >  	spin_lock(lock);
> >  	if (osk) {
> >  		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
> > +		if (sk_hashed(osk))
> > +			/* Before deleting the node, we insert a new one to make
> > +			 * sure that the look-up-sk process would not miss either
> > +			 * of them and that at least one node would exist in ehash
> > +			 * table all the time. Otherwise there's a tiny chance
> > +			 * that lookup process could find nothing in ehash table.
> > +			 */
> > +			__sk_nulls_add_node_rcu(sk, list);
>
> In our private email exchange, I suggested to insert sk at the _tail_
> of the hash bucket.
>

Yes, I noticed that. At the time I was only considering the race
condition within RCU itself, not the scenario you describe below.

> Inserting it at the _head_ would still leave a race condition, because
> a concurrent reader might
> have already started the bucket traversal, and would not see 'sk'.

Thanks for the detailed explanation. Now I see why. I'll replace it
with __sk_nulls_add_node_tail_rcu() and send a v2 patch.
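
For the record, the delta on top of this patch should then be a
one-line change in the hunk above, something like (untested):

	if (sk_hashed(osk))
		/* Insert at the tail first, so that a reader that has
		 * already passed the head of the chain still sees the
		 * new node before the old one is removed.
		 */
		__sk_nulls_add_node_tail_rcu(sk, list);
	ret = sk_nulls_del_node_init_rcu(osk);
	goto unlock;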

By the way, I checked the removal of the TIMEWAIT socket, which is
also covered by this patch. Here is the call trace:
inet_hash_connect()
-> __inet_hash_connect()
   -> if (sk_unhashed(sk)) {
          inet_ehash_nolisten(sk, (struct sock *)tw, NULL);
          -> inet_ehash_insert(sk, osk, found_dup_sk);
Therefore, this patch covers the TIMEWAIT case as well.
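
For reference, the relevant lines in __inet_hash_connect() look
roughly like this (condensed, not a verbatim quote):

	if (sk_unhashed(sk)) {
		inet_sk(sk)->inet_sport = htons(port);
		/* tw, the matching timewait socket found during port
		 * selection (or NULL), is passed down as osk, so the
		 * add-before-delete ordering applies to it as well.
		 */
		inet_ehash_nolisten(sk, (struct sock *)tw, NULL);
	}
	if (tw)
		inet_twsk_bind_unhash(tw, hinfo);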

Thanks,
Jason

>
> Thanks.
>
> >  		ret = sk_nulls_del_node_init_rcu(osk);
> > +		goto unlock;
> >  	} else if (found_dup_sk) {
> >  		*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
> >  		if (*found_dup_sk)
> > @@ -660,6 +669,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
> >  	if (ret)
> >  		__sk_nulls_add_node_rcu(sk, list);
> >
> > +unlock:
> >  	spin_unlock(lock);
> >
> >  	return ret;
> > --
> > 2.37.3
> >