linux-next: manual merge of the akpm-current tree with the fscache tree

From: Stephen Rothwell
Date: Wed Jan 27 2021 - 06:54:57 EST


Hi all,

Today's linux-next merge of the akpm-current tree got a conflict in:

include/linux/pagemap.h

between commits:

fa4910177245 ("vm: Add wait/unlock functions for PG_fscache")
13aecd8259dc ("mm: Implement readahead_control pageset expansion")

from the fscache tree and commits:

f5614fc4780c ("mm/filemap: pass a sleep state to put_and_wait_on_page_locked")
7335e3449f74 ("mm/filemap: add mapping_seek_hole_data")

from the akpm-current tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

--
Cheers,
Stephen Rothwell

diff --cc include/linux/pagemap.h
index 4935ad6171c1,20225b067583..000000000000
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@@ -682,21 -681,7 +682,20 @@@ static inline int wait_on_page_locked_k
  	return wait_on_page_bit_killable(compound_head(page), PG_locked);
  }
  
 +/**
 + * wait_on_page_fscache - Wait for PG_fscache to be cleared on a page
 + * @page: The page
 + *
 + * Wait for the fscache mark to be removed from a page, usually signifying the
 + * completion of a write from that page to the cache.
 + */
 +static inline void wait_on_page_fscache(struct page *page)
 +{
 +	if (PagePrivate2(page))
 +		wait_on_page_bit(compound_head(page), PG_fscache);
 +}
 +
- extern void put_and_wait_on_page_locked(struct page *page);
- 
+ int put_and_wait_on_page_locked(struct page *page, int state);
  void wait_on_page_writeback(struct page *page);
  extern void end_page_writeback(struct page *page);
  void wait_for_stable_page(struct page *page);
@@@ -771,11 -756,11 +770,13 @@@ int add_to_page_cache_lru(struct page *
  				pgoff_t index, gfp_t gfp_mask);
  extern void delete_from_page_cache(struct page *page);
  extern void __delete_from_page_cache(struct page *page, void *shadow);
- int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask);
+ void replace_page_cache_page(struct page *old, struct page *new);
  void delete_from_page_cache_batch(struct address_space *mapping,
  				  struct pagevec *pvec);
 +void readahead_expand(struct readahead_control *ractl,
 +		      loff_t new_start, size_t new_len);
+ loff_t mapping_seek_hole_data(struct address_space *, loff_t start, loff_t end,
+ 		int whence);
  
  /*
   * Like add_to_page_cache_locked, but used to add newly allocated pages:

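For anyone wanting to see how the merged declarations sit together, here is a
minimal, illustrative sketch of callers for the interfaces that meet in this
file: wait_on_page_fscache() and readahead_expand() from the fscache tree and
the new put_and_wait_on_page_locked() prototype from akpm-current. The
example_*() helpers below are made up for illustration only; the pagemap.h
declarations are the ones shown in the diff above, nothing here is code from
either tree.

/*
 * Illustrative only: the example_*() helpers are hypothetical and exist
 * purely to show the merged prototypes in use.
 */
#include <linux/pagemap.h>
#include <linux/sched.h>

/* fscache tree: block until a pending write of @page to the cache
 * (PG_fscache, an alias of PG_private_2) has completed. */
static void example_wait_for_cache_write(struct page *page)
{
	wait_on_page_fscache(page);
}

/* akpm-current: put_and_wait_on_page_locked() now takes an explicit sleep
 * state and returns an int so a killable wait can report interruption. */
static int example_put_and_wait(struct page *page)
{
	return put_and_wait_on_page_locked(page, TASK_KILLABLE);
}

/* fscache tree: widen an in-progress readahead window, e.g. so that it
 * lines up with a cache granule boundary. */
static void example_round_out_readahead(struct readahead_control *ractl,
					loff_t start, size_t len)
{
	readahead_expand(ractl, start, len);
}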