On Tue, 1 Apr 2025 03:28:20 +0200
Jann Horn <jannh@xxxxxxxxxx> wrote:
> I think you probably need flushes on both sides, since you might have
> to first flush out the dirty cacheline you wrote through the kernel
> mapping, then discard the stale clean cacheline for the user mapping,
> or something like that? (Unless these VIVT cache architectures provide
> stronger guarantees on cache state than I thought.) But when you're
> adding data to the tracing buffers, I guess maybe you only want to
> flush the kernel mapping from the kernel, and leave flushing of the
> user mapping to userspace? I think if you're running in some random
> kernel context, you probably can't even reliably flush the right
> userspace context - see how for example vivt_flush_cache_range() does
> nothing if the MM being flushed is not running on the current CPU.

I'm assuming I need to flush both the kernel mapping (to get the updates
out to memory) and the user-space mapping (so user space can read those
updates).
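
Something like the below is what I have in mind for the kernel side. It's
just a sketch: flush_dcache_page() is the real helper, the rest of the
names are made up for illustration.

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/*
 * Sketch only: write an update into a page that user space has mmapped,
 * then flush the kernel alias so the data makes it out to memory and is
 * visible through the user mapping on aliasing (e.g. VIVT) caches.
 */
static void example_update_shared_page(struct page *page,
				       const void *data, size_t len)
{
	void *kaddr = page_address(page);	/* kernel mapping of the page */

	memcpy(kaddr, data, len);		/* write the update */

	/* write back the dirty kernel-alias cache lines */
	flush_dcache_page(page);
}
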
The paths are all done via system calls from user space, so everything
should happen on the same CPU. User space will do an ioctl() on the buffer
file descriptor asking for an update, the kernel will populate the page
with that update, and then user space will read the update after the
ioctl() returns. All very synchronous. Thus, we don't need to worry about
an update made on one CPU being read on another CPU.
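
From the user-space side it looks roughly like this. Again only a sketch:
the ioctl command number and the layout of the shared page are
placeholders here, not the real ABI.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_GET_UPDATE	0x5420		/* placeholder ioctl number */

/* fd is the buffer file descriptor; returns 0 on success */
static int read_update(int fd)
{
	static void *meta;

	/* map the shared page once */
	if (!meta) {
		meta = mmap(NULL, getpagesize(), PROT_READ, MAP_SHARED, fd, 0);
		if (meta == MAP_FAILED)
			return -1;
	}

	/* ask the kernel to populate the page; it flushes before returning */
	if (ioctl(fd, BUF_GET_UPDATE, 0) < 0)
		return -1;

	/* the ioctl() has returned on this CPU, so the page is up to date */
	printf("first word of update: %" PRIu64 "\n",
	       *(volatile uint64_t *)meta);
	return 0;
}
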
That holds even when user space wants to read the buffer. The ioctl() will
swap the old reader page with one of the writer pages, making that the new
"reader" page, where no more updates will happen. The flush happens after
that swap and before returning to user space.
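
To spell that out as a sketch (again, this is not the real ring-buffer
code; struct buf_private and buf_take_write_page() are placeholders):

#include <linux/errno.h>
#include <linux/highmem.h>
#include <linux/mm.h>

struct buf_private {
	struct page	*reader_page;	/* page currently exposed to the reader */
	/* writer pages, locks, etc. */
};

/* placeholder: pull one page out of the writer's list to become the reader page */
struct page *buf_take_write_page(struct buf_private *priv);

static int buf_swap_reader_page(struct buf_private *priv)
{
	struct page *new_reader = buf_take_write_page(priv);

	if (!new_reader)
		return -EAGAIN;

	/*
	 * No more writes land on this page.  Flush it before the ioctl()
	 * returns so user space sees the final contents through its mapping.
	 */
	flush_dcache_page(new_reader);

	priv->reader_page = new_reader;
	return 0;
}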