Re: [PATCH 5/9] x86/intel_rdt: Add new cgroup and Class of service management

From: Marcelo Tosatti
Date: Wed Aug 05 2015 - 20:24:49 EST


On Wed, Aug 05, 2015 at 01:22:57PM +0100, Matt Fleming wrote:
> On Sun, 02 Aug, at 12:31:57PM, Tejun Heo wrote:
> >
> > But we're doing it the wrong way around. You can do most of what
> > cgroup interface can do with systemcall-like interface with some
> > inconvenience. The other way doesn't really work. As I wrote in the
> > other reply, cgroups is a horrible programmable interface and we don't
> > want individual applications to interact with it directly and CAT's
> > use cases most definitely include each application programming its own
> > cache mask.
>
> I wager that this assertion is wrong. Having individual applications
> program their own cache mask is not going to be the most common
> scenario.

What I like about the syscall interface is that it moves the knowledge
of cache behaviour close to the application launch (or inside the
application itself), which allows the following common scenario, say on
a multi-purpose desktop:

Event: launch a high-performance application: use a cache reservation so
it finishes quickly.
Event: launch a cache hog application: do not let it thrash the cache.

The two cache reservations are logically unrelated: their configurations
do not affect each other, so they should be configured separately.

Also, the data/code reservation is specific to the application, so its
specification should live close to the application (it is just
cumbersome to maintain that data somewhere else).

> Only in very specific situations would you trust an
> application to do that.

Perhaps ulimit can be used to enforce a per-application limit on how
much cache may be reserved.

> A much more likely use case is having the sysadmin carve up the cache
> for a workload which may include multiple, uncooperating applications.

Sorry, what does cooperating mean in this context?

> Yes, a programmable interface would be useful, but only for a limited
> set of workloads. I don't think it's how most people are going to want
> to use this hardware technology.

It seems the syscall interface handles all use cases which the cgroup
interface handles.

> --
> Matt Fleming, Intel Open Source Technology Center

Tentative interface, please comment.

The "return key/use key" scheme would allow COSid sharing similarly to
shmget. Intra-application, that is functional, but i am not experienced
with shmget to judge whether there is a better alternative. Would have
to think how cross-application setup would work,
and in the simple "cacheset" configuration.
Also, the interface should work for other architectures (TODO item, PPC
at least has similar functionality).

enum cache_rsvt_flags {
	CACHE_RSVT_ROUND_UP   = (1 << 0), /* round "kbytes" up */
	CACHE_RSVT_ROUND_DOWN = (1 << 1), /* round "kbytes" down */
	CACHE_RSVT_EXTAGENTS  = (1 << 2), /* allow usage of area common with external agents */
};

enum cache_rsvt_type {
	CACHE_RSVT_TYPE_CODE = 0, /* cache reservation is for code */
	CACHE_RSVT_TYPE_DATA,     /* cache reservation is for data */
	CACHE_RSVT_TYPE_BOTH,     /* cache reservation is for code and data */
};

struct cache_reservation {
	size_t kbytes;
	u32    type;
	u32    flags;
};

int sys_cache_reservation(struct cache_reservation *cv);

Returns -ENOMEM if there is not enough space, -EPERM if there is no
permission.
Returns keyid > 0 if the reservation has been successful, copying the
actual number of kbytes reserved back to "kbytes".
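
For illustration, a minimal sketch of the creation path as I imagine a
launcher would use it. The raw syscall(2) invocation and the
__NR_cache_reservation number are placeholders, since no wrapper or uapi
header exists yet; the structure and enums just mirror the tentative
interface above:

/* Sketch only: placeholder syscall number, definitions mirror the
 * tentative interface above. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_cache_reservation
#define __NR_cache_reservation 1000	/* placeholder, not allocated */
#endif

enum cache_rsvt_flags {
	CACHE_RSVT_ROUND_UP   = (1 << 0),
	CACHE_RSVT_ROUND_DOWN = (1 << 1),
	CACHE_RSVT_EXTAGENTS  = (1 << 2),
};

enum cache_rsvt_type {
	CACHE_RSVT_TYPE_CODE = 0,
	CACHE_RSVT_TYPE_DATA,
	CACHE_RSVT_TYPE_BOTH,
};

struct cache_reservation {
	size_t   kbytes;
	uint32_t type;
	uint32_t flags;
};

int main(void)
{
	struct cache_reservation cv = {
		.kbytes = 2048,				/* ask for 2MB of LLC */
		.type   = CACHE_RSVT_TYPE_BOTH,
		.flags  = CACHE_RSVT_ROUND_DOWN,	/* accept a smaller area */
	};
	long key = syscall(__NR_cache_reservation, &cv);

	if (key < 0) {
		perror("cache_reservation");
		return 1;
	}
	/* on success, cv.kbytes holds the size actually reserved */
	printf("key=%ld, reserved=%zu kbytes\n", key, cv.kbytes);
	return 0;
}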

-----------------

int sys_use_cache_reservation_key(struct cache_reservation *cv, int key);

Returns -EPERM if there is no permission.
Returns -EINVAL if no such key exists.
Returns 0 if instantiation of the reservation has been successful,
copying the actual reservation back to cv.
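
And a sketch of how the key could be shared shmget-style, for example a
"cacheset"-like launcher that attaches an existing reservation by key and
then execs the real application. Again the syscall number is a
placeholder, and whether the reservation is inherited across exec is an
open question, not something decided here:

/* Sketch only: attach an existing reservation by key, then exec.
 * __NR_use_cache_reservation_key is a placeholder syscall number. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_use_cache_reservation_key
#define __NR_use_cache_reservation_key 1001	/* placeholder, not allocated */
#endif

struct cache_reservation {
	size_t   kbytes;
	uint32_t type;
	uint32_t flags;
};

int main(int argc, char *argv[])
{
	struct cache_reservation cv = { 0 };
	int key;

	if (argc < 3) {
		fprintf(stderr, "usage: %s <key> <program> [args...]\n", argv[0]);
		return 1;
	}
	key = atoi(argv[1]);

	/* instantiate the reservation for this task; cv is filled with
	 * the actual reservation on success */
	if (syscall(__NR_use_cache_reservation_key, &cv, key) < 0) {
		perror("use_cache_reservation_key");
		return 1;
	}

	/* assumes the reservation stays in effect across exec */
	execvp(argv[2], &argv[2]);
	perror("execvp");
	return 1;
}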

Backward compatibility for processors with no support for code/data
differentiation: by default, code and data cache allocation types fall
back to CACHE_RSVT_TYPE_BOTH on older processors (and the fact that they
have done so is reported back via "flags").
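
To illustrate the fallback, an application asking for a code-only
reservation could detect it roughly like this (reusing the declarations
from the first sketch; since no flag bit is defined yet for the
reporting, the sketch checks the type copied back after attach instead):

/* Sketch only: request a code-only reservation and detect the fallback
 * on processors without code/data differentiation. */
static int reserve_code_cache(size_t kbytes)
{
	struct cache_reservation cv = {
		.kbytes = kbytes,
		.type   = CACHE_RSVT_TYPE_CODE,
		.flags  = CACHE_RSVT_ROUND_DOWN,
	};
	long key = syscall(__NR_cache_reservation, &cv);

	if (key < 0)
		return -1;

	if (syscall(__NR_use_cache_reservation_key, &cv, (int)key) < 0)
		return -1;

	if (cv.type == CACHE_RSVT_TYPE_BOTH)
		fprintf(stderr, "no code/data differentiation, got a unified reservation\n");

	return (int)key;
}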

