Re: [PATCH] mm: fix use-after-free of page_ext after race with memory-offline
From: Pavan Kondeti
Date: Wed Jul 20 2022 - 04:21:28 EST
Hi Charan,
On Tue, Jul 19, 2022 at 08:42:42PM +0530, Charan Teja Kalla wrote:
> Thanks Michal!!
>
> On 7/18/2022 8:24 PM, Michal Hocko wrote:
> >>>> The above-mentioned race is just one example __but the problem persists
> >>>> in the other paths too involving page_ext->flags access (eg:
> >>>> page_is_idle())__. Since offline waits till the last reference on the
> >>>> page goes down, any path that took a refcount on the page can make
> >>>> the memory offline operation wait. Eg: in the migrate_pages()
> >>>> operation, we take an extra refcount on the pages that are under
> >>>> migration and then copy page_owner by accessing page_ext.
> >>>>
> >>>> Fix those paths where offline races with page_ext access by maintaining
> >>>> synchronization with an RCU lock.
> >>> Please be much more specific about the synchronization. How does RCU
> >>> actually synchronize the offlining and access? A higher-level description
> >>> of all the actors would be very helpful, not only for the review but also
> >>> for future readers.
> >> I will improve the commit message to describe this synchronization change
> >> using RCU.
> > Thanks! The most important part is how the exclusion is actually achieved
> > because that is not really clear at first sight.
> >
> >          CPU1                              CPU2
> >   lookup_page_ext(PageA)            offlining
> >                                       offline_page_ext
> >                                         __free_page_ext(addrA)
> >                                           get_entry(addrA)
> >                                           ms->page_ext = NULL
> >                                           synchronize_rcu()
> >                                           free_page_ext
> >                                             free_pages_exact (now addrA is unusable)
> >
> >   rcu_read_lock()
> >   entryA = get_entry(addrA)
> >     base + page_ext_size * index  # an address not invalidated by the freeing path
> >   do_something(entryA)
> >   rcu_read_unlock()
> >
> > CPU1 never checks ms->page_ext so it cannot bail out early when the
> > thing is torn down. Or maybe I am missing something. I am not familiar
> > with page_ext much.
>
>
> Thanks a lot for catching this, Michal. You are correct that the code I
> proposed is still racy. I will correct this, along with a proper commit
> message, in the next version of this patch.
>
I am trying to understand your discussion with Michal. What part is still
racy? We do check for mem_section::page_ext and bail out early from
lookup_page_ext(), no?
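If the concern is about the callers of lookup_page_ext() rather than the
lookup itself, I guess the window would be something like the below (a
simplified sketch of the page_is_idle() path mentioned in the commit message,
not the exact kernel code):

static bool page_is_idle(struct page *page)
{
        struct page_ext *page_ext = lookup_page_ext(page);

        if (unlikely(!page_ext))
                return false;

        /*
         * Nothing prevents memory offline from running here: ms->page_ext
         * gets cleared and the page_ext array freed, yet we still hold the
         * stale page_ext pointer and dereference it below.
         */
        return test_bit(PAGE_EXT_IDLE, &page_ext->flags);
}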
Also, to make this scheme explicit, we can annotate the page_ext member with
__rcu and use rcu_assign_pointer() on the writer side.
struct page_ext *lookup_page_ext(const struct page *page)
{
        unsigned long pfn = page_to_pfn(page);
        struct mem_section *section = __pfn_to_section(pfn);

        /*
         * The sanity checks the page allocator does upon freeing a
         * page can reach here before the page_ext arrays are
         * allocated when feeding a range of pages to the allocator
         * for the first time during bootup or memory hotplug.
         */
        if (!section->page_ext)
                return NULL;

        return get_entry(section->page_ext, pfn);
}
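
To make the __rcu idea concrete, here is a rough sketch of both sides (only
for discussion, not the actual patch; the details of get_entry() and
free_page_ext() are elided, and the mem_section member would become
"struct page_ext __rcu *page_ext"):

static void __free_page_ext(unsigned long pfn)
{
        struct mem_section *ms = __pfn_to_section(pfn);
        struct page_ext *base;

        /* Offline is the only writer, so a protected dereference is fine. */
        base = rcu_dereference_protected(ms->page_ext, 1);
        if (!base)
                return;

        base = get_entry(base, pfn);

        /* Hide the array from new readers first... */
        rcu_assign_pointer(ms->page_ext, NULL);

        /* ...then wait for all existing RCU read-side sections to finish. */
        synchronize_rcu();

        free_page_ext(base);
}

struct page_ext *lookup_page_ext(const struct page *page)
{
        unsigned long pfn = page_to_pfn(page);
        struct mem_section *section = __pfn_to_section(pfn);
        struct page_ext *page_ext;

        /* Callers must be inside rcu_read_lock()/rcu_read_unlock(). */
        page_ext = rcu_dereference(section->page_ext);
        if (!page_ext)
                return NULL;

        return get_entry(page_ext, pfn);
}

With that, in Michal's diagram above, CPU1 either observes NULL and bails out,
or the offline path's synchronize_rcu() does not return until CPU1 reaches
rcu_read_unlock(), so the array cannot be freed underneath it.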
Thanks,
Pavan