Re: Re: Re: Re: Re: Re: [PATCH net v2 0/2] Revert the 'socket_alloc' life cycle change

From: SeongJae Park
Date: Wed May 06 2020 - 11:21:02 EST


On Wed, 6 May 2020 07:41:51 -0700 "Paul E. McKenney" <paulmck@xxxxxxxxxx> wrote:

> On Wed, May 06, 2020 at 02:59:26PM +0200, SeongJae Park wrote:
> > TL;DR: It was not the kernel's fault, but the benchmark program's.
> >
> > So, the problem is reproducible only with lebench[1]. I carefully read
> > its code again.
> >
> > Before running the problematic "poll big" sub test, lebench executes the
> > "context switch" sub test. For that test, it sets its own CPU affinity[2]
> > and process priority[3] to '0' and '-20', respectively. However, it
> > doesn't restore the values to the originals even after the "context
> > switch" test is finished. For this reason, the "poll big" sub test also
> > runs bound to CPU 0 with the lowest (highest-priority) nice value.
> > Therefore, it can starve the RCU callback thread for CPU 0, which
> > processes the deferred deallocations of the sockets, and as a result it
> > triggers the OOM.
> >
> > We confirmed that the problem disappears by offloading the RCU callbacks
> > from CPU 0 using the rcu_nocbs=0 boot parameter, or by simply restoring
> > the affinity and/or priority.
> >
> > Someone _might_ still argue that this is a kernel problem because the
> > problem didn't occur on the old kernels prior to Al's patches. However,
> > setting the affinity and priority was possible only because the program
> > had been granted the permission to do so. Therefore, it would be
> > reasonable to blame the system administrators rather than the kernel.
> >
> > So, please ignore this patchset, and my apologies for the confusion. If
> > you still have any doubts or need more tests, please let me know.
> >
> > [1] https://github.com/LinuxPerfStudy/LEBench
> > [2] https://github.com/LinuxPerfStudy/LEBench/blob/master/TEST_DIR/OS_Eval.c#L820
> > [3] https://github.com/LinuxPerfStudy/LEBench/blob/master/TEST_DIR/OS_Eval.c#L822
>
> Thank you for chasing this down!
>
> I have had this sort of thing on my list as a potential issue, but given
> that it is now really showing up, it sounds like it is time to bump
> up its priority a bit. Of course there are limits, so if userspace is
> running at any of the real-time priorities, making sufficient CPU time
> available to RCU's kthreads becomes userspace's responsibility. But if
> everything is running at SCHED_OTHER (which is the case here, correct?),

Correct.
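
For reference, the policy can be double-checked with a small standalone
program like the below (my own sketch, not part of the benchmark):

#include <sched.h>
#include <stdio.h>

int main(void)
{
        /* pid 0 means the calling process. */
        int policy = sched_getscheduler(0);

        printf("SCHED_OTHER? %s\n", policy == SCHED_OTHER ? "yes" : "no");
        return 0;
}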

> then it is reasonable for RCU to do some work to avoid this situation.

That would also be great!

>
> But still, yes, the immediate job is fixing the benchmark. ;-)

Totally agreed.
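
For the record, the benchmark-side fix is essentially to save the
affinity and the nice value before the "context switch" sub test and
restore them afterwards. A minimal sketch of that approach (the function
name is mine, not LEBench's):

#define _GNU_SOURCE
#include <sched.h>
#include <sys/resource.h>

static void run_context_switch_subtest(void)
{
        cpu_set_t old_mask, new_mask;
        int old_nice;

        /* Save the current affinity and nice value. */
        sched_getaffinity(0, sizeof(old_mask), &old_mask);
        old_nice = getpriority(PRIO_PROCESS, 0);

        /* What lebench does today: pin to CPU 0 at nice -20. */
        CPU_ZERO(&new_mask);
        CPU_SET(0, &new_mask);
        sched_setaffinity(0, sizeof(new_mask), &new_mask);
        setpriority(PRIO_PROCESS, 0, -20);

        /* ... run the "context switch" sub test here ... */

        /* The missing piece: restore the saved values so later sub
         * tests don't starve CPU 0's RCU callback thread. */
        sched_setaffinity(0, sizeof(old_mask), &old_mask);
        setpriority(PRIO_PROCESS, 0, old_nice);
}

Alternatively, as noted above, booting with rcu_nocbs=0 offloads CPU 0's
RCU callbacks to kthreads that can run on other CPUs, which also makes
the problem disappear.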

>
> Thanx, Paul
>
> PS. Why not just attack all potential issues on my list? Because I
> usually learn quite a bit from seeing the problem actually happen.
> And sometimes other changes in RCU eliminate the potential issue
> before it has a chance to happen.

Sounds interesting; I will try some of those in my spare time ;)


Thanks,
SeongJae Park