Re: [PATCH v7 7/6] fs/epoll: scale nested callbacks

From: Jason Baron
Date: Wed Oct 18 2017 - 10:09:14 EST




On 10/17/2017 11:53 AM, Davidlohr Bueso wrote:
> On Mon, 16 Oct 2017, Jason Baron wrote:
>
>> Hi,
>>
>> I posted a patch to completely remove the lock from the
>> ep_poll_safewake() path here:
>>
>> http://lkml.iu.edu/hypermail/linux/kernel/1710.1/05834.html
>
> Kernel development never ceases to amaze me. Two complementary
> fixes to an 8+ year-old performance issue on the same day - not that
> nested epolls are that common, but this one also comes from two real
> workloads...
>
> Getting rid of the lock altogether is always the best way.
>
>>
>> So these are going to conflict. The reason it's safe to remove the lock
>> is that there are loop and depth checks that are now performed during
>> EPOLL_CTL_ADD, specifically in ep_loop_check(). I would prefer to do
>> these checks once at add time, as opposed to at each wakeup (even if
>> they can be scaled better).
>
> Wrt conflicts, no worries, I'll rebase -- and hopefully we can get
> the dlock stuff in for v4.15 as well.
>
>>
>> I also have a pending patch to do something similar for
>> poll_readywalk_ncalls, but I think that poll_safewake_ncalls is the most
>> egregious one here?
>
> The customer's workload issues are for the loop_ncalls and readywalk_ncalls
> lists, so I'd be interested in seeing your patch for the latter. The reason
> your patch above is likely not to help my scenario is that most of the time
> is spent at the dispatcher level doing epoll_wait; there are not too many
> EPOLL_CTL_ADDs going on, afaict.

If there are not many EPOLL_CTL_ADDs, then I wouldn't think loop_ncalls
would be highly contended (since it should only be used from the add
path)?
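
To be concrete about what I mean by the add path: in mainline, the
loop/max-depth walk is only reachable from EPOLL_CTL_ADD, and only when
the fd being added is itself an epoll. Roughly (a simplified sketch from
memory, so treat the exact names and error handling as approximate):

    /* epoll_ctl(EPOLL_CTL_ADD) path, simplified sketch */
    if (op == EPOLL_CTL_ADD && is_file_epoll(tf.file)) {
            mutex_lock(&epmutex);
            error = -ELOOP;
            if (ep_loop_check(ep, tf.file) != 0)    /* cycle/depth walk */
                    goto error_tgt_fput;
    }

    /* ep_loop_check() itself just bounds the recursion: */
    return ep_call_nested(&poll_loop_ncalls, EP_MAX_NESTS,
                          ep_loop_check_proc, file, ep, current);

So unless the workload is constantly adding epoll fds to other epoll
fds, loop_ncalls shouldn't see much traffic.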

Thanks,

-Jason


>
> Thanks,
> Davidlohr
>
>>
>> Thanks,
>>
>> -Jason
>>
>> On 10/13/2017 11:45 AM, Davidlohr Bueso wrote:
>>> A customer reported massive contention on the ncalls->lock in a
>>> workload designed around nested epolls (where the fd being added is
>>> already an epoll).
>>>
>>> 83.49%  [kernel]   [k] __pv_queued_spin_lock_slowpath
>>>  2.70%  [kernel]   [k] ep_call_nested.constprop.13
>>>  1.94%  [kernel]   [k] _raw_spin_lock_irqsave
>>>  1.83%  [kernel]   [k] __raw_callee_save___pv_queued_spin_unlock
>>>  1.45%  [kernel]   [k] _raw_spin_unlock_irqrestore
>>>  0.41%  [kernel]   [k] ep_scan_ready_list.isra.8
>>>  0.36%  [kernel]   [k] pvclock_clocksource_read
>>>
>>> The application, running on kvm, uses shared memory IPC with a pipe
>>> wakeup mechanism, and a two-level dispatcher, both built around
>>> 'epoll_wait'. There is one process per physical core and a full mesh
>>> of pipes between them, so on an 18-core system (the actual case) there
>>> are 18*18 pipes for the IPCs alone.
>>>
>>> This patch proposes replacing the global linked list of nested calls
>>> with a dlock interface, which lets us benefit from pcpu lists when
>>> doing ep_poll_safewake(), and from hashing on the current task for the
>>> other callers. This provides two benefits:
>>>
>>> 1. It speeds up the loop and max-depth checks from O(N) lookups to O(1)
>>>    (barring possible collisions, which we have to iterate); and,
>>>
>>> 2. It provides smaller lock granularity.
>>>
>>> cpus     before      after      diff
>>> 1       1409370    1344804    -4.58%
>>> 2       1015690    1337053    31.63%
>>> 3        721009    1273254    76.59%
>>> 4        380959    1128931   196.33%
>>> 5        287603    1028362   257.56%
>>> 6        221104     894943   304.76%
>>> 7        173592     976395   462.46%
>>> 8        145860     922584   532.51%
>>> 9        127877     925900   624.05%
>>> 10       112603     791456   602.87%
>>> 11        97926     724296   639.63%
>>> 12        80732     730485   804.82%
>>>
>>> With the exception of a single cpu (where the overhead of jhashing
>>> shows), we get some pretty good raw throughput increases. Similarly,
>>> the amount of time spent decreases immensely as well.
>>>
>>> Passes ltp related testcases.
>>>
>>> Signed-off-by: Davidlohr Bueso <dbueso@xxxxxxx>
>>> ---
>>> fs/eventpoll.c | 88 +++++++++++++++++++++++++++++++++++-----------------------
>>> 1 file changed, 53 insertions(+), 35 deletions(-)
>>>
>>> diff --git a/fs/eventpoll.c b/fs/eventpoll.c
>>> index 2fabd19cdeea..675c97fdc5da 100644
>>> --- a/fs/eventpoll.c
>>> +++ b/fs/eventpoll.c
>>> @@ -22,7 +22,7 @@
>>> #include <linux/slab.h>
>>> #include <linux/poll.h>
>>> #include <linux/string.h>
>>> -#include <linux/list.h>
>>> +#include <linux/dlock-list.h>
>>> #include <linux/hash.h>
>>> #include <linux/spinlock.h>
>>> #include <linux/syscalls.h>
>>> @@ -119,7 +119,7 @@ struct epoll_filefd {
>>>  * and loop cycles.
>>>  */
>>> struct nested_call_node {
>>> -    struct list_head llink;
>>> +    struct dlock_list_node llink;
>>>     void *cookie;
>>>     void *ctx;
>>> };
>>> @@ -129,8 +129,7 @@ struct nested_call_node {
>>>  * maximum recursion dept and loop cycles.
>>>  */
>>> struct nested_calls {
>>> -    struct list_head tasks_call_list;
>>> -    spinlock_t lock;
>>> +    struct dlock_list_heads tasks_call_list;
>>> };
>>>
>>> /*
>>> @@ -371,10 +370,14 @@ static inline int ep_op_has_event(int op)
>>> }
>>>
>>> /* Initialize the poll safe wake up structure */
>>> -static void ep_nested_calls_init(struct nested_calls *ncalls)
>>> +static inline int ep_nested_calls_init(struct nested_calls *ncalls)
>>> +{
>>> +    return alloc_dlock_list_heads(&ncalls->tasks_call_list, true);
>>> +}
>>> +
>>> +static inline void ep_nested_calls_free(struct nested_calls *ncalls)
>>> {
>>> -    INIT_LIST_HEAD(&ncalls->tasks_call_list);
>>> -    spin_lock_init(&ncalls->lock);
>>> +    free_dlock_list_heads(&ncalls->tasks_call_list);
>>> }
>>>
>>> /**
>>> @@ -483,45 +486,50 @@ static int ep_call_nested(struct nested_calls *ncalls, int max_nests,
>>> {
>>>     int error, call_nests = 0;
>>>     unsigned long flags;
>>> -    struct list_head *lsthead = &ncalls->tasks_call_list;
>>> -    struct nested_call_node *tncur;
>>> -    struct nested_call_node tnode;
>>> +    struct dlock_list_head *head;
>>> +    struct nested_call_node *tncur, tnode;
>>>
>>> -    spin_lock_irqsave(&ncalls->lock, flags);
>>> +    head = dlock_list_hash(&ncalls->tasks_call_list, ctx);
>>>
>>> +    spin_lock_irqsave(&head->lock, flags);
>>>     /*
>>>      * Try to see if the current task is already inside this wakeup call.
>>> -     * We use a list here, since the population inside this set is always
>>> -     * very much limited.
>>> +     *
>>> +     * We use a chained hash table with linked lists here, and take the
>>> +     * lock to avoid racing when collisions (where ctx pointer is the key).
>>> +     * Calls for which context is the cpu number, avoid hashing and act as
>>> +     * pcpu add/removal.
>>> +     *
>>> +     * Since the population inside this set is always very much limited,
>>> +     * list scanning should be short.
>>>      */
>>> -    list_for_each_entry(tncur, lsthead, llink) {
>>> -        if (tncur->ctx == ctx &&
>>> -            (tncur->cookie == cookie || ++call_nests > max_nests)) {
>>> -            /*
>>> -             * Ops ... loop detected or maximum nest level reached.
>>> -             * We abort this wake by breaking the cycle itself.
>>> -             */
>>> -            error = -1;
>>> -            goto out_unlock;
>>> -        }
>>> -    }
>>> +    list_for_each_entry(tncur, &head->list, llink.list) {
>>> +        if (tncur->ctx == ctx &&
>>> +            (tncur->cookie == cookie || ++call_nests > EP_MAX_NESTS)) {
>>> +            /*
>>> +             * Ops ... loop detected or maximum nest level reached.
>>> +             * We abort this wake by breaking the cycle itself.
>>> +             */
>>> +            error = -1;
>>> +            spin_unlock_irqrestore(&head->lock, flags);
>>> +            goto done;
>>> +        }
>>> +    }
>>> +
>>>
>>>     /* Add the current task and cookie to the list */
>>>     tnode.ctx = ctx;
>>>     tnode.cookie = cookie;
>>> -    list_add(&tnode.llink, lsthead);
>>> -
>>> -    spin_unlock_irqrestore(&ncalls->lock, flags);
>>> +    tnode.llink.head = head;
>>> +    list_add(&tnode.llink.list, &head->list);
>>> +    spin_unlock_irqrestore(&head->lock, flags);
>>>
>>>     /* Call the nested function */
>>>     error = (*nproc)(priv, cookie, call_nests);
>>>
>>>     /* Remove the current task from the list */
>>> -    spin_lock_irqsave(&ncalls->lock, flags);
>>> -    list_del(&tnode.llink);
>>> -out_unlock:
>>> -    spin_unlock_irqrestore(&ncalls->lock, flags);
>>> -
>>> +    dlock_lists_del(&tnode.llink);
>>> +done:
>>>     return error;
>>> }
>>>
>>> @@ -2313,13 +2321,16 @@ static int __init eventpoll_init(void)
>>>      * Initialize the structure used to perform epoll file descriptor
>>>      * inclusion loops checks.
>>>      */
>>> -    ep_nested_calls_init(&poll_loop_ncalls);
>>> +    if (ep_nested_calls_init(&poll_loop_ncalls))
>>> +        goto err;
>>>
>>>     /* Initialize the structure used to perform safe poll wait head wake ups */
>>> -    ep_nested_calls_init(&poll_safewake_ncalls);
>>> +    if (ep_nested_calls_init(&poll_safewake_ncalls))
>>> +        goto err_free1;
>>>
>>>     /* Initialize the structure used to perform file's f_op->poll() calls */
>>> -    ep_nested_calls_init(&poll_readywalk_ncalls);
>>> +    if (ep_nested_calls_init(&poll_readywalk_ncalls))
>>> +        goto err_free0;
>>>
>>>     /*
>>>      * We can have many thousands of epitems, so prevent this from
>>> @@ -2336,5 +2347,12 @@ static int __init eventpoll_init(void)
>>>             sizeof(struct eppoll_entry), 0, SLAB_PANIC, NULL);
>>>
>>>     return 0;
>>> +
>>> +err_free0:
>>> +    ep_nested_calls_free(&poll_safewake_ncalls);
>>> +err_free1:
>>> +    ep_nested_calls_free(&poll_loop_ncalls);
>>> +err:
>>> +    return -ENOMEM;
>>> }
>>> fs_initcall(eventpoll_init);
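
As an aside on the hashing scheme itself: as I read the patch, each ctx
(task or cpu) hashes to its own bucket, so the critical section becomes
a per-bucket lock plus a short chain scan instead of a global lock over
one long list. The fast path boils down to something like the sketch
below (using the dlock-list calls only as they appear in the diff above,
so the exact API may differ in detail):

    struct dlock_list_head *head;
    struct nested_call_node tnode;
    unsigned long flags;

    /* hash the context to a single per-cpu/bucket list head */
    head = dlock_list_hash(&ncalls->tasks_call_list, ctx);

    spin_lock_irqsave(&head->lock, flags);   /* bucket lock, not global */
    /* ... scan only &head->list for a matching ctx/cookie ... */
    tnode.ctx = ctx;
    tnode.cookie = cookie;
    tnode.llink.head = head;
    list_add(&tnode.llink.list, &head->list);
    spin_unlock_irqrestore(&head->lock, flags);

    /* the nested callback runs outside the lock */

    dlock_lists_del(&tnode.llink);           /* relocks only the bucket */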