Re: [PATCH 41/41] mm: replace rw_semaphore with atomic_t in vma_lock

From: Suren Baghdasaryan
Date: Mon Jan 16 2023 - 17:36:32 EST


On Mon, Jan 16, 2023 at 3:15 AM Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx> wrote:
>
> On Mon, Jan 09, 2023 at 12:53:36PM -0800, Suren Baghdasaryan wrote:
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index d40bf8a5e19e..294dd44b2198 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -627,12 +627,16 @@ static inline void vma_write_lock(struct vm_area_struct *vma)
> > * mm->mm_lock_seq can't be concurrently modified.
> > */
> > mm_lock_seq = READ_ONCE(vma->vm_mm->mm_lock_seq);
> > - if (vma->vm_lock_seq == mm_lock_seq)
> > + if (vma->vm_lock->lock_seq == mm_lock_seq)
> > return;
> >
> > - down_write(&vma->vm_lock->lock);
> > - vma->vm_lock_seq = mm_lock_seq;
> > - up_write(&vma->vm_lock->lock);
> > + if (atomic_cmpxchg(&vma->vm_lock->count, 0, -1))
> > + wait_event(vma->vm_mm->vma_writer_wait,
> > + atomic_cmpxchg(&vma->vm_lock->count, 0, -1) == 0);
> > + vma->vm_lock->lock_seq = mm_lock_seq;
> > + /* Write barrier to ensure lock_seq change is visible before count */
> > + smp_wmb();
> > + atomic_set(&vma->vm_lock->count, 0);
> > }
> >
> > /*
> > @@ -643,20 +647,28 @@ static inline void vma_write_lock(struct vm_area_struct *vma)
> > static inline bool vma_read_trylock(struct vm_area_struct *vma)
> > {
> > /* Check before locking. A race might cause false locked result. */
> > - if (vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
> > + if (vma->vm_lock->lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
> > return false;
> >
> > - if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
> > + if (unlikely(!atomic_inc_unless_negative(&vma->vm_lock->count)))
> > return false;
> >
> > + /* If atomic_t overflows, restore and fail to lock. */
> > + if (unlikely(atomic_read(&vma->vm_lock->count) < 0)) {
> > + if (atomic_dec_and_test(&vma->vm_lock->count))
> > + wake_up(&vma->vm_mm->vma_writer_wait);
> > + return false;
> > + }
> > +
> > /*
> > * Overflow might produce false locked result.
> > * False unlocked result is impossible because we modify and check
> > * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
> > * modification invalidates all existing locks.
> > */
> > - if (unlikely(vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))) {
> > - up_read(&vma->vm_lock->lock);
> > + if (unlikely(vma->vm_lock->lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))) {
> > + if (atomic_dec_and_test(&vma->vm_lock->count))
> > + wake_up(&vma->vm_mm->vma_writer_wait);
> > return false;
> > }
>
> With this change readers can cause writers to starve.
> What about checking waitqueue_active() before or after increasing
> vma->vm_lock->count?

The readers are in the page fault path, which is the fast path, while
the writers performing updates are in the slow path. Therefore I *think*
writer starvation should not be a big issue. So far I haven't seen it in
benchmarks, but maybe there is such a case?
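
Just to illustrate, the reader-side check you are suggesting could look
something like this (untested sketch, field names as in the patch above):

static inline bool vma_read_trylock(struct vm_area_struct *vma)
{
	/* Check before locking. A race might cause false locked result. */
	if (vma->vm_lock->lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
		return false;

	/*
	 * Back off if a writer is already waiting, so that a stream of
	 * readers cannot keep vm_lock->count elevated forever.
	 * waitqueue_active() is an unlocked check, so this is only a
	 * heuristic, not a hard guarantee against starvation.
	 */
	if (unlikely(waitqueue_active(&vma->vm_mm->vma_writer_wait)))
		return false;

	if (unlikely(!atomic_inc_unless_negative(&vma->vm_lock->count)))
		return false;

	/* ... rest unchanged ... */
}

That said, this would add another check on the page fault path, so I
would prefer to avoid it unless writer starvation turns out to be a
real problem in practice.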

>
> --
> Thanks,
> Hyeonggon
>