Re: [PATCH 4/4] mm: swap: Per-cgroup per-CPU swap device cache with shared clusters

From: YoungJun Park
Date: Tue Jul 22 2025 - 14:30:49 EST


On Wed, Jul 23, 2025 at 01:44:49AM +0800, Kairui Song wrote:
> On Thu, Jul 17, 2025 at 4:21 AM Youngjun Park <youngjun.park@xxxxxxx> wrote:
>
> Hi Youngjun
>
> One thing I noticed after a quick glance is that this
> swap_alloc_cgroup_priority is bloated and is doing much the same thing
> as folio_alloc_swap.
>
> I imagine we could just have a struct (e.g. let's call it struct
> swap_percpu_info / pi) as a closure of what the allocator needs: it
> contains the plist and the fast-path device.
>
> With slight changes to folio_alloc_swap, it can respect either the
> cgroup's pi or global pi. (might be a horrible name though, feel free
> to change it)
>
> For example, the first thing swap_alloc_fast does would be:
>
> `struct swap_percpu_info *pi = folio_swap_percpu_info(folio);`
>
> folio_swap_percpu_info returns the cgroup's swap_percpu_info or the global one.
>
> swap_alloc_slow can do a similar thing: it can then just use pi->plist
> and pi->pcpu_swapdev (cluster info will be in si), ignoring all the
> cgroup differences.

I was also considering whether the priority handling (like `plist`) could be
abstracted to unify the allocation logic across the existing and per-cgroup
paths.

At the time, I leaned toward keeping the existing allocator logic intact as
much as possible, which is why I avoided introducing a new struct and instead
duplicated some logic.

Your suggestion with `swap_percpu_info` makes the design clearer and aligns
well with what I had in mind, so I'll look into this direction more closely.
If my thinking changes along the way, I'll share an update on the mailing
list.
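
To make sure I'm reading the idea the same way, here is a rough sketch of
what I currently have in mind. The struct layout and folio_swap_percpu_info()
follow your naming; the global instance and the mem_cgroup hookup
(memcg->swap_pi) are just placeholders of mine, not necessarily what the next
version will look like:

struct swap_percpu_info {
        struct plist_head plist;                /* priority-ordered swap devices */
        struct swap_info_struct *pcpu_swapdev;  /* fast-path device hint (probably __percpu) */
};

/* Global instance, used when no per-cgroup priority is configured. */
static struct swap_percpu_info swap_global_pi;

static struct swap_percpu_info *folio_swap_percpu_info(struct folio *folio)
{
        struct mem_cgroup *memcg = folio_memcg(folio);

        /* memcg->swap_pi is hypothetical; only here to show the selection. */
        if (memcg && memcg->swap_pi)
                return memcg->swap_pi;
        return &swap_global_pi;
}

swap_alloc_fast()/swap_alloc_slow() would then derive the pi from the folio
and only touch pi->plist and pi->pcpu_swapdev, so the cgroup and global cases
share the same allocator body.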

Thanks again for the helpful input!

> Also, it is better to check your patches with ./scripts/checkpatch.pl;
> I'm seeing some styling issues.

I should have paid more attention to this.
I'll be sure to run `./scripts/checkpatch.pl` on the series before sending and
address those issues in the next version of the patch. Thanks for the
reminder!

> I'll check your other patches too later this week, thanks for the
> update on this idea.

Thanks again for the great idea, and I really appreciate you taking the time to
review this in the middle of your busy schedule.

>
> Why not just remove `percpu_swap_cluster.offset` and share
> si->percpu_cluster among all cgroups (including the root cgroup)?
>
> Otherwise, e.g. if rootcg's pcpu cluster and one cgroup's pcpu
> cluster point to the same cluster, they might contend when allocating
> different orders; and even for the same order, performance might not
> be good, as multiple CPUs will race with each other.
>
> It will be easier to implement too.

I originally kept `percpu_swap_cluster.offset` around to
preserve compatibility when swap cgroup priority is not enabled, and to
minimize disruption to the existing fast path.

But after reviewing your suggestion, I agree it makes more sense to unify this
path and always rely on `si->percpu_cluster`, even for the root cgroup.

This simplifies the implementation and, as you pointed out, avoids the
contention and complexity that could arise when several per-CPU caches (the
root cgroup's and a cgroup's) end up pointing at the same cluster.
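
Just to confirm my understanding of the unified path, something along these
lines is what I picture. This is only a sketch: the pi argument comes from the
idea above, locking and the si reference handling are omitted, and
cluster_alloc_swap_entry() here just stands in for the existing per-device
cluster allocator, whatever its exact signature ends up being:

static bool swap_alloc_fast(struct swap_percpu_info *pi,
                            swp_entry_t *entry, int order)
{
        struct swap_info_struct *si;
        unsigned long offset;

        /* The device hint comes from the (cgroup or global) pi... */
        si = pi->pcpu_swapdev;
        if (!si)
                return false;

        /*
         * ...but the cluster position always comes from the device's own
         * si->percpu_cluster, shared by every cgroup, so there is no
         * separate per-cgroup offset to keep in sync.
         */
        offset = cluster_alloc_swap_entry(si, order, SWAP_HAS_CACHE);
        if (!offset)
                return false;

        *entry = swp_entry(si->type, offset);
        return true;
}

With that, the root cgroup simply uses the global pi, and the per-cgroup case
differs only in where pi comes from.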

Thanks again for the clear and helpful insight.