Re: [COUNTERPATCH] mm: avoid overflowing preempt_count() in mmu_take_all_locks()

From: Peter Zijlstra
Date: Thu Apr 01 2010 - 12:50:06 EST


On Thu, 2010-04-01 at 18:45 +0200, Peter Zijlstra wrote:
> On Thu, 2010-04-01 at 18:18 +0200, Andrea Arcangeli wrote:
> > On Thu, Apr 01, 2010 at 06:12:34PM +0200, Peter Zijlstra wrote:
> > > One thing we can do there is mutex_trylock(); if we get the lock, we
> > > check that we've got the right object, and if the trylock fails we can
> > > do the refcount thing and sleep. That would allow the fast path to
> > > remain a single atomic op.
> >
> > But then how do you know which anon_vma_unlink has to decrease the
> > refcount and which not? That info would need to be stored on the
> > kernel stack; it can't be stored in the vma. I guess it's feasible,
> > but passing that info around (an extra int *refcount parameter on
> > those functions) sounds more tricky than the trylock itself.
>
> I was thinking of something like:
>
> struct anon_vma *page_lock_anon_vma(struct page *page)
> {
> 	struct anon_vma *anon_vma = NULL;
> 	unsigned long anon_mapping;
>
> 	rcu_read_lock();
> 	anon_mapping = (unsigned long) ACCESS_ONCE(page->mapping);
> 	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
> 		goto out;
> 	if (!page_mapped(page))
> 		goto out;
>
> 	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
> 	if (!mutex_trylock(&anon_vma->lock)) {
> 		/* Contended: pin it with a reference so we can sleep. */
> 		if (atomic_inc_not_zero(&anon_vma->ref)) {
> 			rcu_read_unlock();
> 			mutex_lock(&anon_vma->lock);
> 			atomic_dec(&anon_vma->ref); /* ensure the lock pins it */
> 			return anon_vma;
> 		}
> 		/* Refcount already hit 0, it is on its way out. */
> 		anon_vma = NULL;
> 	}
> out:
> 	rcu_read_unlock();
>
> 	return anon_vma;
> }
>
> void page_unlock_anon_vma(struct anon_vma *anon_vma)
> {
> 	mutex_unlock(&anon_vma->lock);
> }
>
> Then anybody reaching ref==0 would only need to sync against the lock
> before freeing.
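
The freeing side would then be something like the below; anon_vma_free() is a
made-up name here for wherever the last reference actually gets dropped:

void anon_vma_free(struct anon_vma *anon_vma)
{
	/*
	 * Synchronize against a trylock fast path that got the lock in
	 * before the refcount hit 0; once we acquire the mutex nobody
	 * else can still be holding it.
	 */
	mutex_lock(&anon_vma->lock);
	mutex_unlock(&anon_vma->lock);
	kmem_cache_free(anon_vma_cachep, anon_vma);
}

The anon_vma slab being SLAB_DESTROY_BY_RCU keeps the RCU lookup itself safe
against this free, like it does today.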

Ah, there is a race where the dec after taking the lock can itself make the
count 0; we could catch that by making it -1 and freeing in
page_unlock_anon_vma().
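
Roughly, the slow path in page_lock_anon_vma() would become

		mutex_lock(&anon_vma->lock);
		/* If our dec is the one that hits 0, mark it dead. */
		if (atomic_dec_and_test(&anon_vma->ref))
			atomic_set(&anon_vma->ref, -1);

and the unlock side picks up the free. This is only a sketch; the
atomic_inc_not_zero() above would then also have to refuse the -1 state:

void page_unlock_anon_vma(struct anon_vma *anon_vma)
{
	/* Read before unlocking, the anon_vma may be gone afterwards. */
	int dead = atomic_read(&anon_vma->ref) < 0;

	mutex_unlock(&anon_vma->lock);
	if (dead)
		anon_vma_free(anon_vma);
}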
