Re: [PATCH] fs/buffer: use min folio order to calculate upper limit in __getblk_slow()

From: Pankaj Raghav (Samsung)
Date: Wed Jun 18 2025 - 15:51:40 EST


> > diff --git a/fs/buffer.c b/fs/buffer.c
> > index 8cf4a1dc481e..98f90da69a0a 100644
> > --- a/fs/buffer.c
> > +++ b/fs/buffer.c
> > @@ -1121,10 +1121,11 @@ __getblk_slow(struct block_device *bdev, sector_t block,
> > unsigned size, gfp_t gfp)
> > {
> > bool blocking = gfpflags_allow_blocking(gfp);
> > + int blocklog = PAGE_SHIFT + mapping_min_folio_order(bdev->bd_mapping);
> >
> > /* Size must be multiple of hard sectorsize */
> > - if (unlikely(size & (bdev_logical_block_size(bdev)-1) ||
> > - (size < 512 || size > PAGE_SIZE))) {
> > + if (unlikely(size & (bdev_logical_block_size(bdev) - 1) ||
> > + (size < 512 || size > (1U << blocklog)))) {
>
> So this doesn't quite make sense to me. Shouldn't it be capped from above
> by PAGE_SIZE << mapping_max_folio_order(bdev->bd_mapping)?

This __getblk_slow() function is used to read a block from a block
device and fill the page cache, creating buffer heads along the way.

I think the reason we have this check is to make sure the size, which is
the block size, stays within the limits: from 512 (SECTOR_SIZE) up to the
upper limit on block size.

That upper limit on block size was PAGE_SIZE before LBS (large block
size) support in block devices, but now it is determined by the minimum
folio order of the bdev mapping, which we set in set_blocksize(). So a
single block cannot be bigger than (PAGE_SIZE << mapping_min_folio_order).
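
To put numbers on it, here is a rough sketch of the bound (an
illustration only, not the actual kernel code; it assumes 4K pages and
a 16K logical block size, and reuses the mapping_min_folio_order()
helper from the patch above):

	/*
	 * Example: with 4K pages and a 16K block size, set_blocksize()
	 * raises the minimum folio order of the bdev mapping to 2, so
	 * the upper bound checked in __getblk_slow() works out to:
	 *
	 *	PAGE_SIZE << mapping_min_folio_order(bdev->bd_mapping)
	 *	  = 4096 << 2 = 16384
	 */
	int blocklog = PAGE_SHIFT + mapping_min_folio_order(bdev->bd_mapping);
	unsigned int max_size = 1U << blocklog;	/* 16K in this example */

	if (size < SECTOR_SIZE || size > max_size)
		return NULL;	/* block size outside the supported range */

With a plain 4K block size the minimum folio order stays 0, so the bound
falls back to PAGE_SIZE, which matches the old behaviour.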

I hope that makes sense.

--
Pankaj Raghav