Re: [PATCH RFC] rds: ib: Reduce the contention caused by the asynchronous workers to flush the mr pool

From: Leon Romanovsky
Date: Wed Jan 19 2022 - 01:59:25 EST


On Tue, Jan 18, 2022 at 07:42:54PM +0000, Santosh Shilimkar wrote:
> On Jan 18, 2022, at 11:17 AM, Jason Gunthorpe <jgg@xxxxxxxx> wrote:
> >
> > On Tue, Jan 18, 2022 at 04:48:43PM +0000, Santosh Shilimkar wrote:
> >>
> >>> On Jan 18, 2022, at 6:47 AM, Praveen Kannoju <praveen.kannoju@xxxxxxxxxx> wrote:
> >>>
> >>> This patch aims to reduce the number of asynchronous workers spawned to
> >>> execute the function "rds_ib_flush_mr_pool" during high I/O situations.
> >>> Synchronous call paths to "rds_ib_flush_mr_pool" will still be executed
> >>> without being disturbed. Reducing the number of processes contending to
> >>> flush the mr pool greatly reduces the number of D-state processes waiting
> >>> to acquire the mutex lock, which otherwise caused DB instance crashes, as
> >>> the corresponding processes made no progress while waiting on the mutex.
> >>>
> >>> Signed-off-by: Praveen Kumar Kannoju <praveen.kannoju@xxxxxxxxxx>
> >>> ---
> >>>
> >> […]
> >>
> >>> diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c
> >>> index 8f070ee..6b640b5 100644
> >>> --- a/net/rds/ib_rdma.c
> >>> +++ b/net/rds/ib_rdma.c
> >>> @@ -393,6 +393,8 @@ int rds_ib_flush_mr_pool(struct rds_ib_mr_pool *pool,
> >>>  	 */
> >>>  	dirty_to_clean = llist_append_to_list(&pool->drop_list, &unmap_list);
> >>>  	dirty_to_clean += llist_append_to_list(&pool->free_list, &unmap_list);
> >>> +	WRITE_ONCE(pool->flush_ongoing, true);
> >>> +	smp_wmb();
> >>>  	if (free_all) {
> >>>  		unsigned long flags;
> >>>
> >>> @@ -430,6 +432,8 @@ int rds_ib_flush_mr_pool(struct rds_ib_mr_pool *pool,
> >>>  	atomic_sub(nfreed, &pool->item_count);
> >>>
> >>>  out:
> >>> +	WRITE_ONCE(pool->flush_ongoing, false);
> >>> +	smp_wmb();
> >>>  	mutex_unlock(&pool->flush_lock);
> >>>  	if (waitqueue_active(&pool->flush_wait))
> >>>  		wake_up(&pool->flush_wait);
> >>> @@ -507,8 +511,17 @@ void rds_ib_free_mr(void *trans_private, int invalidate)
> >>>
> >>>  	/* If we've pinned too many pages, request a flush */
> >>>  	if (atomic_read(&pool->free_pinned) >= pool->max_free_pinned ||
> >>> -	    atomic_read(&pool->dirty_count) >= pool->max_items / 5)
> >>> -		queue_delayed_work(rds_ib_mr_wq, &pool->flush_worker, 10);
> >>> +	    atomic_read(&pool->dirty_count) >= pool->max_items / 5) {
> >>> +		smp_rmb();
> >> You won’t need these explicit barriers, since the atomics and WRITE_ONCE()
> >> above already issue them.
> >
> > No, they don't. Use smp_store_release() and smp_load_acquire() if you
> > want to do something like this, but I still can't quite figure out if
> > this usage of unlocked memory accesses makes any sense at all.
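> >
> > Roughly, that pairing would look like this (untested sketch, keeping the
> > patch's flush_ongoing field):
> >
> > 	/* flush side, in rds_ib_flush_mr_pool() */
> > 	smp_store_release(&pool->flush_ongoing, true);
> > 	...
> > 	smp_store_release(&pool->flush_ongoing, false);
> >
> > 	/* free side, in rds_ib_free_mr(); pairs with the releases above */
> > 	if (!smp_load_acquire(&pool->flush_ongoing))
> > 		queue_delayed_work(rds_ib_mr_wq, &pool->flush_worker, 10);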
> >
> Indeed, I see that now, thanks. Yeah, these multi-variable checks can indeed
> be racy, but they are at least under the lock for this code path. There are,
> however, a few hot-path places where single-variable state is evaluated
> atomically instead of under a heavy lock.

At least the pool->dirty_count read in rds_ib_free_mr() is not under any lock at all.
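
Assuming the rest of that last hunk goes on to skip queue_delayed_work()
while the flag reads true (the stated intent of the patch), the unlocked
read can also race with the clear at "out:". A sketch of one possible
interleaving (not from the patch itself):

	CPU0: rds_ib_free_mr()            CPU1: rds_ib_flush_mr_pool()
	                                  /* dirty lists already drained */
	/* dirty_count over threshold,
	   but flush_ongoing still reads
	   true, so no work is queued */
	                                  WRITE_ONCE(pool->flush_ongoing, false);
	                                  mutex_unlock(&pool->flush_lock);

The MRs freed on CPU0 then sit on the dirty lists with no flush scheduled
until a later rds_ib_free_mr() crosses the thresholds again.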

Thanks

>
> Regards,
> Santosh
>