Re: [PATCH v18 00/32] per memcg lru_lock

From: Daniel Jordan
Date: Tue Aug 25 2020 - 21:11:35 EST


On Tue, Aug 25, 2020 at 11:26:58AM +0800, Alex Shi wrote:
> On 2020/8/25 9:56 AM, Daniel Jordan wrote:
> > Alex, do you have a pointer to the modified readtwice case?
>
> Sorry, no. My developer machine crashed, so I lost both my container and the
> modified case. I am struggling to get my container back from a repository with
> account problems.
>
> But some testing scripts are here. Generally, the original readtwice case runs
> one thread on each CPU. The new case runs one container on each CPU, with just
> one readtwice thread in each container.

Ok, what you've sent so far gives me an idea of what you did. My readtwice
changes were similar, except I used the cgroup interface directly instead of
docker and shared a filesystem between all the cgroups whereas it looks like
you had one per memcg. 30 second runs on 5.9-rc2 and v18 gave 11% more data
read with v18. This was using 16 cgroups (32 dd tasks) on a 40 CPU, 2 socket
machine.
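For reference, a minimal sketch of what such a memcg-ized readtwice run could look like when driven through the cgroup v2 interface directly, rather than docker. This is not the script from the thread; the paths, cgroup names, file names, and block size are illustrative assumptions. Each cgroup gets a worker that reads its file twice, so the second pass comes from the page cache and stresses the per-memcg LRU lists:

```shell
#!/bin/sh
# Hypothetical sketch: 16 cgroups, one readtwice worker per cgroup.
# Requires root, a mounted cgroup2 hierarchy, and pre-created data files
# on a shared filesystem (all names below are assumptions).
CG=/sys/fs/cgroup
NR_CGROUPS=16
DATA=/mnt/testfs            # shared filesystem holding file1..file16

i=1
while [ "$i" -le "$NR_CGROUPS" ]; do
    mkdir -p "$CG/readtwice$i"
    # Each worker moves itself into its cgroup, then reads the same file
    # twice: the first pass populates the page cache, the second pass
    # re-touches those pages and exercises the per-memcg LRU handling.
    sh -c "
        echo \$\$ > '$CG/readtwice$i/cgroup.procs'
        dd if='$DATA/file$i' of=/dev/null bs=1M 2>/dev/null
        dd if='$DATA/file$i' of=/dev/null bs=1M 2>/dev/null
    " &
    i=$((i + 1))
done
wait
```

With two dd invocations in each of the 16 cgroups, this roughly matches the "16 cgroups (32 dd tasks)" shape described above.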

> > Even better would be a description of the problem you're having in production
> > with lru_lock. We might be able to create at least a simulation of it to show
> > what the expected improvement of your real workload is.
>
> We are using thousands of memcgs on a machine, but as a simulation, I guess the
> above case could be helpful in showing the problem.

Using thousands of memcgs to do what? Any particulars about the type of
workload? Surely it's more complicated than page cache reads :)

> > I ran a few benchmarks on v17 last week (sysbench oltp readonly, kerndevel from
> > mmtests, a memcg-ized version of the readtwice case I cooked up) and then today
> > discovered there's a chance I wasn't running the right kernels, so I'm redoing
> > them on v18.

Neither kernel compile nor git checkout in the root cgroup changed much, just
0.31% slower on elapsed time for the compile, so no significant regressions
there. Now for sysbench again.