Re: [PATCH v1 2/3] mm/gup: use gup_can_follow_protnone() also in GUP-fast
From: Jason Gunthorpe
Date: Tue Aug 30 2022 - 14:45:50 EST
On Tue, Aug 30, 2022 at 08:23:52PM +0200, David Hildenbrand wrote:
> ... and looking into the details of TLB flush and GUP-fast interaction
> nowadays, that case is no longer relevant. A TLB flush is no longer
> sufficient to stop concurrent GUP-fast ever since we introduced generic
> RCU GUP-fast.
Yes, we've had RCU GUP fast for a while, and it is more widely used
now, IIRC.
It has been a bit, but if I remember, GUP fast in RCU mode worked on a
few principles:
- The PTE page must not be freed without RCU
- The PTE page content must be convertible to a struct page using the
  usual rules (e.g. PTE Special)
- That struct page refcount may go from 0->1 inside the RCU
- In the case the refcount goes from 0->1 there must be sufficient
  barriers such that GUP fast observing the refcount of 1 will also
  observe that the PTE entry has changed, i.e. before the refcount is
  dropped in the zap it has to clear the PTE entry, the refcount
  decrement has to be a 'release' and the refcount increment in GUP
  fast has to be an 'acquire'.
- The rest of the system must tolerate speculative refcount
  increments from GUP on any random page
The basic idea is that if GUP fast obtains a valid reference on a
page *and* the PTE entry has not changed, then everything is fine.
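To make the ordering concrete, here is a stand-alone userspace model
of that acquire/release pairing. It is only a sketch: the atomics, the
function names and the values are made up for illustration, this is
not the actual GUP code.

/*
 * Model of the zap vs. GUP-fast race: the zap side clears the "PTE"
 * before the releasing refcount decrement; the GUP-fast side does an
 * acquiring speculative increment and then re-checks the PTE,
 * dropping the reference again if the entry changed.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static _Atomic unsigned long pte = 0x1000;	/* fake PTE value */
static _Atomic int refcount = 1;		/* page refcount */

/* zap side: clear the PTE, then drop the reference with release */
static void zap_page(void)
{
	atomic_store_explicit(&pte, 0, memory_order_relaxed);
	atomic_fetch_sub_explicit(&refcount, 1, memory_order_release);
}

/* GUP-fast side: speculative get, then re-check the PTE */
static bool gup_fast_one(void)
{
	unsigned long orig = atomic_load_explicit(&pte, memory_order_relaxed);

	if (!orig)
		return false;

	/* speculative increment, may race with the final put */
	atomic_fetch_add_explicit(&refcount, 1, memory_order_acquire);

	if (atomic_load_explicit(&pte, memory_order_relaxed) != orig) {
		/* PTE changed under us, back off */
		atomic_fetch_sub_explicit(&refcount, 1, memory_order_release);
		return false;
	}

	return true;	/* valid reference and unchanged PTE */
}

int main(void)
{
	printf("gup ok: %d\n", gup_fast_one());	/* succeeds */
	zap_page();
	printf("gup ok: %d\n", gup_fast_one());	/* fails: PTE cleared */
	return 0;
}

If the acquiring increment observes the releasing decrement from the
zap, the re-read of the PTE is guaranteed to see the cleared entry, so
GUP fast either gets a good reference or backs off.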
The tricks with TLB invalidation are just a "poor man's" RCU, and
arguably these days they aren't really needed, since I think we could
make everything use real RCU all the time without penalty if we really
wanted to.
Today we can create a unique 'struct pagetable_page', as Matthew has
been doing in other places, that guarantees an rcu_head is always
available for every page used in a page table. Using that we could
drop the code in the TLB flusher that allocates memory for the
rcu_head and hopes for the best. (Or is the common struct page
rcu_head already guaranteed to exist for page-table pages nowadays?)
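Roughly, the shape would be something like the sketch below. It is
only an illustration: 'struct pagetable_page' here and the helper
names are made up, not existing kernel API; the point is just that a
guaranteed rcu_head lets the free path always go through call_rcu()
with no allocation in the way.

struct pagetable_page {
	struct rcu_head rcu_head;	/* always available */
	struct page *page;		/* the backing page */
	/* ... */
};

static void pagetable_free_rcu(struct rcu_head *head)
{
	struct pagetable_page *pt =
		container_of(head, struct pagetable_page, rcu_head);

	__free_page(pt->page);
}

static void pagetable_free(struct pagetable_page *pt)
{
	/* wait for all RCU GUP-fast walkers before the page goes away */
	call_rcu(&pt->rcu_head, pagetable_free_rcu);
}

Without that guarantee the TLB flusher has to allocate the rcu_head
storage at zap time and fall back to the non-RCU path when that
allocation fails.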
IMHO that is the main reason we still have the non-RCU mode at all.
Jason