On Tue, 13 Dec 2022 14:20:12 +0000
Douglas Raillard <douglas.raillard@xxxxxxx> wrote:
The above is for the kernel to build.
My understanding was that the comparison issue is related to in-kernel filtering?
If that's the case, I completely agree that the type the kernel code sees does not _have_
to be the same thing that is exposed to userspace, if that simplifies the problem.
Yes, and note the patch I sent out to fix this.
I did, that's some seriously fast turnaround :)
I'm going to pull that one in and start testing, so I can push it out in
this merge window.
That third part may be difficult with the above issue I mentioned.
Just do:
git grep '__field(' | cut -d',' -f1 | cut -d'(' -f2 | sed -e 's/[ ]*//' | sort -u
to see what's in the kernel.
There are lots of types, but as long as the caller knows what to ask for, it shouldn't be an issue.
Pretty printing the trace is obviously an important aspect and ideally requires the parser to know
how to format everything.
But when it comes to other processing in a compiled language, it's not a big burden to let people
declare the events they require and the expected fields + types so they can get the data into their
own struct (e.g. as with serde or any equivalent technology).
That's what is done today. The size and offset are how the tools get to the
data, and they know what to do with it.
I'm not sure how Rust can handle this kind of opaque type scheme.
Note, I put much more effort into the offset, size and sign than the type.
But do these restrictions apply only to arrays, or to any field type?
In terms of broken pretty printing, it's a general issue not limited to dynamic arrays.
Yeah, the pretty printing can easily fail, as the print format can contain
anything the kernel can do, including calls to functions that are not
available to user space. This is why the fallback is always to size, offset
and sign.
The only ways pretty printing for an opaque type can possibly work for new types the parser has no
specific knowledge of are:
1. The type is not actually opaque, i.e. it comes with some decoding schema (just like the events have
a schema listing their fields + types)
2. The type is opaque, but also ships with an executable description of how to print it.
E.g. if there was a WASM/eBPF/whatever bytecode printing routine made available to userspace.
Option (2) is not so appealing as it's both hard to achieve and only allows a fixed set of
behaviors for a type. Option (1) is a lot easier and allows the behaviors to be defined
on the user side.
Wild idea: include the BTF blob in the trace.dat header so no type is opaque anymore. The printing
issue is not entirely solved this way (e.g. cpumask still needs some plugin to be displayed as a list
of CPUs), but we could at least print all structs in "raw" mode and enum symbolically.
And how big is that blob?
I'm not against the idea, but I would like it to only hold what is needed.
That could also allow creating a quick&dirty way of defining a proper event (aka not trace_printk()):
I prefer not to have "quick&dirty" ;-)
#define SIMPLE_TRACE_EVENT(type, fields) \
	struct type fields; \
	TRACE_EVENT(type, \
		TP_PROTO(struct type *data), \
		TP_ARGS(data), \
		TP_STRUCT__entry(__field(struct type, data)), \
		TP_fast_assign(__entry->data = *data;), \
		TP_printk("print in raw mode to display the data") \
	);
#define SIMPLE_TRACE(type, fields) trace_##type(&(struct type)fields)
SIMPLE_TRACE_EVENT(myevent, {
char name[11];
int foobar;
});
SIMPLE_TRACE(myevent, {.name = "hello", .foobar = 42});
The format string could be either kernel-generated based on BTF or userspace could be expected
to make its own use of BTF.
What's the use case for the above?