Re: [PATCH v3 0/4] sched/fair: Burstable CFS bandwidth controller

From: Odin Ugedal
Date: Wed Mar 10 2021 - 06:12:35 EST


Hi,

> If there are cases where the "start bandwidth" matters, I think there is a need to expose the
> "start bandwidth" explicitly too. However, judging from my view and the two examples above,
> I doubt such cases exist.

Yeah, I don't think there will be any cases where users "depend" on
having burst available, so I agree in that sense.

> In my thoughts, this patchset keeps cgroup usage within the quota in the longer term, and allows
> a cgroup to respond to a burst of work with the help of a reasonable burst buffer. If the quota is
> set correctly above average usage, and enough burst buffer is set to meet the needs of bursty work,
> then it makes no difference whether this cgroup runs with 0 start bandwidth or all of it.
> Thus I used sysctl_sched_cfs_bw_burst_onset_percent to decide the start bandwidth, as a
> convenience. If this sysctl interface is confusing, I wonder whether it is a good idea not to
> expose this interface.
>
> For the first case mentioned above, if Kubernetes users care about the "start bandwidth" for
> process startup, maybe it is better to give them all of it rather than a part?

Yeah, I am a bit afraid there will be some confusion, so I am not sure
the sysctl is the best way to do it.

But feedback from others to highlight the problem would be helpful as
well. I think a simple "API" where you get either 0 burst or full burst
on "set" (whichever we decide on) would be best, to avoid unnecessary
complexity.

Having burst available when starting a new process in a new cgroup
might be helpful, so maybe that is a vote for full burst? In the long
term it doesn't matter though, so 0 burst on start would work as well.

> For the second case with quota changes over time, I think it is important to make sure each change
> is in effect long enough to enforce the average quota limit. Does it really matter to control the
> "start burst" on each change?

No, I don't think so. Doing so would add another thing to set per
cgroup, and since we cannot come up with any real use cases, that would
just clutter the API more than necessary imo.

> It is a copy of the runtime at period start, and is used to calculate burst time during a period.
> Not quite remaining_runtime_prev_period.

Ahh, I see, I misunderstood the code. So in essence it is
"runtime_at_period_start"?

> Yeah, there is the updating problem. It is okay not to expose cfs_b->runtime then.

Yeah, I think dropping it altogether is the best solution.


> This comment does not imply any loss or unnecessary throttling for the present cfs_b.
> All it means is that any quota refilling that is not done while the timer is stopped should be
> done on timer start, for the burstable cfs_b.
>
> Maybe I shall change this comment in some way if it is misleading?

I think I formulated my question badly. The comment makes sense; I am
just trying to compare how "start_cfs_bandwidth" works with your patch
to how it works currently. As I understand it, without this patch
"start_cfs_bandwidth" will never refill runtime, while with your patch
it will refill even when overrun == 0 with burst disabled. Is that an
intended change in behavior, or am I not understanding the patch?
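
To make the question concrete, mainline start_cfs_bandwidth() is
roughly this (simplified from kernel/sched/fair.c, from memory):

	void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
	{
		lockdep_assert_held(&cfs_b->lock);

		if (cfs_b->period_active)
			return;

		cfs_b->period_active = 1;
		/*
		 * No refill of cfs_b->runtime happens on this path today.
		 * My reading of your patch is that a refill is now done
		 * here, even when overrun == 0 and burst is disabled.
		 */
		hrtimer_forward_now(&cfs_b->period_timer, cfs_b->period);
		hrtimer_start_expires(&cfs_b->period_timer, HRTIMER_MODE_ABS_PINNED);
	}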


On another note, I have also been testing this patch, and I am not
able to reproduce your schbench results. Both with and without burst,
it gets the same result, and nr_throttled stays at 0 (tested on a
32-core system). Can you try to rerun your tests against mainline
to see if you still get the same results? (Also, I see you are running
with 30 threads. How many cores does your test setup have?) To actually
say that the result is real, all cores used should maybe be exclusively
reserved as well, to avoid issues where other processes cause a spike
in latency.


Odin