Re: [v3,net-next 1/4] net: qos: introduce a gate control flow action

From: Vladimir Oltean
Date: Wed Apr 22 2020 - 15:28:57 EST


Hi Allan,

On Wed, 22 Apr 2020 at 22:20, Allan W. Nielsen
<allan.nielsen@xxxxxxxxxxxxx> wrote:
>
> Hi Po,
>
> Nice to see even more work on the TSN standards in the upstream kernel.
>
> On 22.04.2020 10:48, Po Liu wrote:
> >
> >Introduce an ingress frame gate control flow action.
> >The tc gate action works like this: a gate allows specified ingress
> >frames to be passed during certain time slots and dropped during
> >others. A tc filter selects the ingress frames, and the tc gate
> >action specifies in which time slots those frames may be passed to
> >the device and in which time slots they are dropped.
> >The gate action provides an entry list describing how long the gate
> >stays open and how long it stays closed. The action also assigns a
> >base time telling when the entry list starts; the driver then
> >repeats the gate entry list cyclically.
> >For the software simulation, the gate action requires the user to
> >assign a clock type.
> >
> >Below is an example configuration from user space. The tc filter
> >matches a stream whose source IP address is 192.168.0.20, and the
> >gate action owns two time slots: one lasts 200ms with the gate open,
> >letting frames pass; the other lasts 100ms with the gate closed,
> >dropping frames. Once the frames passed within one 200000000ns slot
> >exceed 8000000 octets in total, further frames in that slot are
> >dropped.
> >
> >> tc qdisc add dev eth0 ingress
> >
> >> tc filter add dev eth0 parent ffff: protocol ip \
> >      flower src_ip 192.168.0.20 \
> >      action gate index 2 clockid CLOCK_TAI \
> >         sched-entry open 200000000 -1 8000000 \
> >         sched-entry close 100000000 -1 -1
>
> First of all, it has been a long time since I read 802.1Qci, and when
> I did, it was a draft. So please let me know if I'm completely off
> here.
>
> I know you are focusing on the gate control in this patch series, but
> I assume that you will later want to do the policing and flow-meter
> as well, and it could make sense to consider how all of this works
> together.
>
> A common use-case for policing is to have multiple rules pointing at
> the same policer instance. Maybe you want the sum of the traffic on 2
> ports to be limited to 100 Mbit/s. If you specify such an action on
> the individual rule (as done with the gate), then you cannot have two
> rules pointing at the same policer instance.
>
> Long story short, have you considered whether it would be better to
> do something like:
>
> tc filter add dev eth0 parent ffff: protocol ip \
>     flower src_ip 192.168.0.20 \
>     action psfp-id 42
>
> And then have some other function to configure the properties of psfp-id
> 42?
>
>
> /Allan
>

It is very good that you brought this up, since in my opinion too it
is a rather important aspect, and it seems the fact that this
capability is already designed in was a bit too subtle.

"psfp-id" is actually his "index" argument.

You can actually do this:
tc filter add dev eth0 ingress \
    flower skip_hw dst_mac 01:80:c2:00:00:0e \
    action gate index 1 clockid CLOCK_TAI \
        base-time 200000000000 \
        sched-entry OPEN 200000000 -1 -1 \
        sched-entry CLOSE 100000000 -1 -1
tc filter add dev eth0 ingress \
    flower skip_hw dst_mac 01:80:c2:00:00:0f \
    action gate index 1

Then 2 filters get created with the same action:

tc -s filter show dev eth0 ingress
filter protocol all pref 49151 flower chain 0
filter protocol all pref 49151 flower chain 0 handle 0x1
  dst_mac 01:80:c2:00:00:0f
  skip_hw
  not_in_hw
        action order 1:
        priority wildcard clockid TAI flags 0x6404f
        base-time 200000000000 cycle-time 300000000
        cycle-time-ext 0
                number 0 gate-state open interval 200000000
                ipv wildcard max-octets wildcard
                number 1 gate-state close interval 100000000
                ipv wildcard max-octets wildcard
        pipe
        index 1 ref 2 bind 2 installed 168 sec used 168 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

filter protocol all pref 49152 flower chain 0
filter protocol all pref 49152 flower chain 0 handle 0x1
  dst_mac 01:80:c2:00:00:0e
  skip_hw
  not_in_hw
        action order 1:
        priority wildcard clockid TAI flags 0x6404f
        base-time 200000000000 cycle-time 300000000
        cycle-time-ext 0
                number 0 gate-state open interval 200000000
                ipv wildcard max-octets wildcard
                number 1 gate-state close interval 100000000
                ipv wildcard max-octets wildcard
        pipe
        index 1 ref 2 bind 2 installed 168 sec used 168 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0
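For intuition, the repeating schedule in that output (open 200ms,
close 100ms, cycle-time 300ms, anchored at base-time) can be modeled
in a few lines. This is only a sketch of the cyclic-schedule semantics;
the function names and the treatment of times before base-time are
assumptions of the sketch, not the kernel's implementation:

```python
# Toy software model of the gate schedule shown above (illustrative
# names; not the kernel's internal API).
BASE_TIME_NS = 200_000_000_000          # base-time from the filter
ENTRIES = [("open", 200_000_000),       # sched-entry OPEN 200ms
           ("close", 100_000_000)]      # sched-entry CLOSE 100ms
CYCLE_NS = sum(d for _, d in ENTRIES)   # cycle-time = 300000000

def gate_state(now_ns: int) -> str:
    """Return 'open' or 'close' for a frame arriving at now_ns.

    The entry list repeats cyclically starting at base-time. Before
    base-time this sketch simply reports the gate as open, which is
    an assumption, not necessarily what the kernel does.
    """
    if now_ns < BASE_TIME_NS:
        return "open"
    offset = (now_ns - BASE_TIME_NS) % CYCLE_NS
    for state, interval in ENTRIES:
        if offset < interval:
            return state
        offset -= interval
    return ENTRIES[-1][0]  # unreachable: offset < CYCLE_NS
```

So a frame arriving exactly at base-time sees an open gate, one
arriving 200ms into a cycle sees a closed gate, and the pattern wraps
every 300ms.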

Actually my only concern is that maybe this mechanism should (?) have
been more generic. At the moment, this patch series implements it via
a TCA_GATE_ENTRY_INDEX netlink attribute, so every action that wants
to be shared across filters needs to reinvent the wheel.
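To make the sharing point concrete, the index-based semantics amount
to something like the following toy registry: the first filter that
names an index creates the action, and later filters that name the
same index bind to it. This is purely illustrative (the real logic
lives in the kernel's tc action code and is considerably more
involved); all names here are made up:

```python
# Toy model of index-based tc action sharing (illustrative only).
class Action:
    def __init__(self, kind, index, params):
        self.kind, self.index, self.params = kind, index, params
        self.refcnt = 0  # how many filters are bound to this action

registry = {}  # (kind, index) -> Action

def get_or_create(kind, index, params=None):
    """Bind to an existing action with this index, or create one.

    Mirrors the semantics of 'action gate index 1': a second filter
    naming the same index shares the first filter's action instance.
    """
    key = (kind, index)
    act = registry.get(key)
    if act is None:
        if params is None:
            raise ValueError("a new action needs its parameters")
        act = registry[key] = Action(kind, index, params)
    act.refcnt += 1
    return act
```

Under this model, the "ref 2 bind 2" in the output above is exactly
the refcount of the single gate action that both filters point at.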

Thoughts, everyone?

Thanks,
-Vladimir