Re: [PATCH 00/11] V4: rwsem changes + down_read_critical() proposal

From: Michel Lespinasse
Date: Thu May 27 2010 - 07:00:05 EST


On Tue, May 25, 2010 at 11:27:55AM +0200, Peter Zijlstra wrote:
> On Tue, 2010-05-25 at 02:12 -0700, Michel Lespinasse wrote:
> > Yes, we do have patches trying to release the mmap_sem when a page
> > fault for a file backed VMA blocks on accessing the corresponding
> > file. We have not given up on these, and we intend to try submitting
> > them again. However, these patches do *not* address the case of a page
> > fault blocking while trying to get a free page (i.e. when you get
> > under high memory pressure).
>
> But I guess they could, right? Simply make the allocation under mmap_sem
> be __GFP_HARDWALL|__GFP_HIGHMEM|__GFP_MOVABLE|__GFP_NOWARN or
> (GFP_HIGHUSER_MOVABLE & ~(__GFP_WAIT|__GFP_IO|__GFP_FS))|__GFP_NOWARN
>
> and drop the mmap_sem when that fails.

It's not clear to me whether this can lead to a clean, uncontroversial
solution. Doing this for file-backed VMAs does not sound any harder in
principle, but we could not get it past Linus's NACK last time. I think
it's worth exploring again, but I don't expect it to be easy :)
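
Just to make sure we are talking about the same thing, here is a rough
sketch of what that fallback could look like in the fault path. This is
hand-written for this mail, not code from any posted series; the helper
name and the *mmap_sem_released flag are made up for illustration, while
alloc_page_vma(), up_read() and the gfp flags are the existing interfaces:

	#include <linux/mm.h>
	#include <linux/gfp.h>

	/*
	 * Illustrative only: try the allocation with a mask that cannot
	 * block; only if that fails, drop mmap_sem before falling back
	 * to the full blocking allocation.
	 */
	static struct page *fault_alloc_page(struct mm_struct *mm,
					     struct vm_area_struct *vma,
					     unsigned long address,
					     bool *mmap_sem_released)
	{
		const gfp_t gfp_nowait =
			(GFP_HIGHUSER_MOVABLE &
			 ~(__GFP_WAIT | __GFP_IO | __GFP_FS)) | __GFP_NOWARN;
		struct page *page;

		*mmap_sem_released = false;

		/* Fast path: allocate without blocking, mmap_sem still held. */
		page = alloc_page_vma(gfp_nowait, vma, address);
		if (page)
			return page;

		/*
		 * Slow path: release mmap_sem so we don't hold everyone up
		 * while reclaim runs, then allocate with the blocking mask.
		 */
		up_read(&mm->mmap_sem);
		*mmap_sem_released = true;

		return alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
	}

The tricky part remains the retry: once mmap_sem has been dropped, the
fault handler has to retake it and revalidate the VMA before it can use
the page, and that retry machinery is where the controversy was last time.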

> > > I really don't like people tinkering with the lock implementations like
> > > this. Nor do I like the naming, stats are in no way _critical_.
> >
> > Critical here refers to the fact that you're not allowed to block
> > while holding the unfairly acquired rwsem.
>
> We usually call that atomic, your 0/n patch didn't explain any of that.

Would replacing the 'critical' name with 'atomic' address your concern,
though, or would you remain fundamentally opposed to anything that involves
an unfair acquire path?
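
To make the intended usage concrete, this is the kind of caller the series
has in mind; a hand-written example rather than an excerpt from the patches,
and I'm assuming here that the release side is named up_read_critical() to
mirror up_read():

	#include <linux/mm.h>
	#include <linux/sched.h>

	static unsigned long get_total_vm(struct task_struct *task)
	{
		struct mm_struct *mm = get_task_mm(task);
		unsigned long total_vm = 0;

		if (!mm)
			return 0;

		/*
		 * Unfair acquire: may pass queued writers, so nothing
		 * between here and the release is allowed to sleep.
		 */
		down_read_critical(&mm->mmap_sem);
		total_vm = mm->total_vm;	/* plain field reads only */
		up_read_critical(&mm->mmap_sem);

		mmput(mm);
		return total_vm;
	}

Whichever name we pick, the constraint is the same: the section must never
block, which is what keeps the delay imposed on queued writers by the
unfair acquire short.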

What about patches 1-7, which don't deal with the critical/atomic API;
can we agree to get these in before we figure out what to do with
the last 4?

--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.