Re: [PATCH RESEND V5,2/2] mm: shmem: implement POSIX_FADV_[WILL|DONT]NEED for shmem

From: Matthew Wilcox
Date: Tue Jan 24 2023 - 11:01:46 EST


On Thu, Mar 31, 2022 at 12:08:21PM +0530, Charan Teja Kalla wrote:
> +static void shmem_isolate_pages_range(struct address_space *mapping,
> +		loff_t start, loff_t end, struct list_head *list)
> +{
> +	XA_STATE(xas, &mapping->i_pages, start);
> +	struct page *page;
> +
> +	rcu_read_lock();
> +	xas_for_each(&xas, page, end) {
> +		if (xas_retry(&xas, page))
> +			continue;
> +		if (xa_is_value(page))
> +			continue;
> +
> +		if (!get_page_unless_zero(page))
> +			continue;
> +		if (isolate_lru_page(page)) {
> +			put_page(page);
> +			continue;
> +		}
> +		put_page(page);
> +
> +		if (PageUnevictable(page) || page_mapcount(page) > 1) {
> +			putback_lru_page(page);
> +			continue;
> +		}
> +
> +		/*
> +		 * Prepare the page to be passed to reclaim_pages().
> +		 * The VM won't reclaim the page unless we clear PG_young.
> +		 * Also, to ensure the page is written back before being
> +		 * reclaimed, it is marked dirty.
> +		 * Since we do not clear pte_young in the mapped PTEs,
> +		 * its reclaim may not be attempted.
> +		 */
> +		ClearPageReferenced(page);
> +		test_and_clear_page_young(page);
> +		list_add(&page->lru, list);
> +		if (need_resched()) {
> +			xas_pause(&xas);
> +			cond_resched_rcu();
> +		}
> +	}
> +	rcu_read_unlock();
> +}

This entire function needs to be converted to use folios instead of
pages if you're refreshing this patchset for current kernels.
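
For illustration, roughly the shape of that conversion (an untested sketch, not a drop-in replacement: the name shmem_isolate_folios_range() is made up here, and the folio helpers used -- folio_try_get(), folio_isolate_lru(), folio_putback_lru(), folio_clear_referenced(), folio_test_clear_young() -- should be double-checked against whatever tree you rebase onto; folio_isolate_lru()/folio_putback_lru() come from mm/internal.h):

/* Sketch only: a folio version of the loop above, for mm/shmem.c. */
static void shmem_isolate_folios_range(struct address_space *mapping,
		loff_t start, loff_t end, struct list_head *list)
{
	XA_STATE(xas, &mapping->i_pages, start);
	struct folio *folio;

	rcu_read_lock();
	xas_for_each(&xas, folio, end) {
		if (xas_retry(&xas, folio))
			continue;
		/* Skip swap entries and shadow values stored in the tree */
		if (xa_is_value(folio))
			continue;

		/* Take a reference unless the folio is already being freed */
		if (!folio_try_get(folio))
			continue;
		/* Unlike isolate_lru_page(), this returns true on success */
		if (!folio_isolate_lru(folio)) {
			folio_put(folio);
			continue;
		}
		folio_put(folio);

		if (folio_test_unevictable(folio) || folio_mapcount(folio) > 1) {
			folio_putback_lru(folio);
			continue;
		}

		/*
		 * Clear the referenced/young state so that reclaim will
		 * actually attempt to reclaim the folio rather than just
		 * rotating it back onto the LRU.
		 */
		folio_clear_referenced(folio);
		folio_test_clear_young(folio);
		list_add(&folio->lru, list);
		if (need_resched()) {
			xas_pause(&xas);
			cond_resched_rcu();
		}
	}
	rcu_read_unlock();
}

Two things the sketch glosses over: a large folio may extend past 'end', so the range handling probably wants more than a mechanical s/page/folio/, and the mapcount/dirty reasoning deserves a fresh look for multi-page folios. The caller side should need little change, since reclaim_pages() already takes a list of folios in current kernels (worth verifying on the tree you target).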