Re: RFC: paravirtualizing perf_clock

From: David Ahern
Date: Thu Oct 31 2013 - 12:46:06 EST


On 10/31/13, 2:09 AM, Masami Hiramatsu wrote:
(2013/10/30 23:03), David Ahern wrote:
On 10/29/13 11:59 PM, Masami Hiramatsu wrote:
(2013/10/29 11:58), David Ahern wrote:
To back out a bit, my end goal is to be able to create and merge
perf-events from any context on a KVM-based host -- guest userspace,
guest kernel space, host userspace and host kernel space (userspace
events with a perf-clock timestamp are another topic ;-)).

That is almost the same as what we (Yoshihiro and I) are trying to do with
integrated tracing; we are doing it on ftrace and trace-cmd (but perhaps it
will eventually work on perf-ftrace).

I thought at this point (well, once perf-ftrace gets committed) that you
can do everything with perf. What feature is missing in perf that you
get with trace-cmd or using debugfs directly?

The perf tools interface is best for profiling a process or over a short period.
However, what we'd like to do is monitor or trace in the background, in memory,
over a long period, across the system's whole life cycle, as a flight recorder.
This kind of tracing interface is required on mission-critical systems for
troubleshooting.

Right. I have a perf-based scheduling daemon that runs in a flight-recorder mode, retaining the last N seconds of scheduling data. The main challenge is handling memory growth from the task-based records (MMAP, FORK, EXIT, COMM); other events are handled fairly well.


Also, ftrace's on-the-fly configurability, such as snapshots, multiple buffers,
and adding/removing events, is very useful, since in the flight-recorder
use case we can't stop tracing even for a moment.

interesting.
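
(As an aside for anyone following along, those on-the-fly knobs map to plain
tracefs/debugfs files, roughly like the sketch below. The "flight" instance
name and the paths are only illustrative and assume debugfs is mounted in the
usual place.)

/* minimal sketch: create a separate ring buffer, enable one event in it,
 * and take a snapshot while tracing keeps running */
#include <stdio.h>
#include <sys/stat.h>

#define TRACEFS "/sys/kernel/debug/tracing"

static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	fclose(f);
	return 0;
}

int main(void)
{
	/* mkdir under instances/ creates an independent (multi-)buffer */
	mkdir(TRACEFS "/instances/flight", 0750);

	/* enable one event in that buffer without touching the others */
	write_str(TRACEFS "/instances/flight/events/sched/sched_switch/enable", "1");

	/* take a snapshot of the live buffer; tracing is not stopped */
	write_str(TRACEFS "/instances/flight/snapshot", "1");

	return 0;
}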

Moreover, our guest/host integrated tracer can pass event buffers from
guest to host with very little overhead, because it uses the ftrace ring
buffer and virtio-serial with splice (so, zero page copying in the guest).
Note that we need the tracing overhead to be as small as possible because
it is always running in the background.

Right. Been meaning to look at what you guys have done, just have not had the time.
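
(For reference, my rough understanding of that zero-copy path, as a sketch:
splice the per-cpu ftrace ring buffer into a virtio-serial port through a
pipe, so pages are never copied into guest userspace. The device path and
chunk size below are made up; the real agent presumably handles all cpus,
errors and shutdown properly.)

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int buf = open("/sys/kernel/debug/tracing/per_cpu/cpu0/trace_pipe_raw",
		       O_RDONLY);
	int port = open("/dev/virtio-ports/trace-port-cpu0", O_WRONLY);
	int pipefd[2];

	if (buf < 0 || port < 0 || pipe(pipefd) < 0) {
		perror("setup");
		return 1;
	}

	for (;;) {
		/* ring buffer pages -> pipe, no copy into userspace */
		ssize_t n = splice(buf, NULL, pipefd[1], NULL,
				   4096, SPLICE_F_MOVE);
		if (n <= 0)
			break;
		/* pipe -> virtio-serial port, again in-kernel */
		if (splice(pipefd[0], NULL, port, NULL, n, SPLICE_F_MOVE) < 0)
			break;
	}
	return 0;
}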

That's why we're using ftrace for our purposes. But anyway, time
synchronization is a common issue. Let's share the solution :)

Yes, one of the key takeaways from the Tracing Summit was the need for a common time source; this just extends it to VMs as well.

And then, as the cherry on top, a design that works across architectures
(e.g., x86 now, but ARM later).

I think your proposal is good as the default implementation, since it doesn't
depend on arch-specific features. However, since the physical timer (clock)
interfaces and the virtualization interfaces depend strongly on the
architecture, I expect the optimized implementations will differ for each
arch. For example, maybe we can export the tsc-offset to the guest to adjust
the clock on x86, but not on ARM or other platforms. In that case, until an
optimized one is implemented, we can use the paravirt perf_clock.
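
(To make sure I follow, something like the sketch below, where the portable
hypercall path is the default and an arch can install a faster reader at
boot? All the names here, KVM_HC_PERF_CLOCK, arch_has_fast_pv_perf_clock()
and arch_pv_perf_clock(), are made up for illustration.)

#include <linux/types.h>
#include <linux/init.h>
#include <asm/kvm_para.h>

static u64 pv_perf_clock_hypercall(void)
{
	/* works on any arch, but costs a full vmexit/vmentry round trip */
	return kvm_hypercall0(KVM_HC_PERF_CLOCK);
}

static u64 (*pv_perf_clock)(void) = pv_perf_clock_hypercall;

u64 perf_clock(void)
{
	return pv_perf_clock();
}

void __init pv_perf_clock_init(void)
{
	/* an arch that can do better (e.g. x86 reading the TSC plus an
	 * exported tsc-offset) overrides the slow default here */
	if (arch_has_fast_pv_perf_clock())
		pv_perf_clock = arch_pv_perf_clock;
}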

So this MSR read takes about 1.6 usecs (from 'perf kvm stat live'), and
that is the total time between VMEXIT and VMENTRY. The time it takes to
run perf_clock in the host should be a very small part of that 1.6 usecs.

Yeah, a hypercall is always a heavy operation, so that is not the best
solution; we need an optimized one for each arch.

I'll take a look at the TSC path to see how it is optimized (suggestions
appreciated).

At least on a machine which has a stable TSC, we can rely on that. We just
need the tsc-offset to adjust it in the guest. Note that this offset can
change if the guest sleeps/resumes or does a live migration; each time, we
need to refresh the tsc-offset.
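
(So on x86 the guest-side read could look something like the sketch below:
the host publishes the current tsc-offset plus a generation counter in a
shared page, bumps the generation whenever migration or resume changes the
offset, and the guest retries the read if the generation moves. All names
are made up, loosely modeled on the pvclock version/seqcount pattern.)

#include <linux/types.h>

struct pv_perf_clock_page {
	u32 generation;		/* odd while the host is updating it */
	u64 tsc_offset;		/* host_tsc = guest_tsc + tsc_offset */
};

static u64 guest_read_host_tsc(const volatile struct pv_perf_clock_page *p)
{
	u32 gen;
	u64 off, tsc;

	do {
		gen = p->generation;
		rmb();
		off = p->tsc_offset;
		tsc = native_read_tsc();	/* raw guest TSC */
		rmb();
	} while ((gen & 1) || gen != p->generation);

	return tsc + off;
}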

Another thought is to make the use of pv_perf_clock an option: the user can
knowingly decide the additional latency/overhead is worth the feature.

Yeah. BTW, would you look at paravirt_sched_clock (pv_time_ops)? It seems
that such a synchronized clock is already there.

I have poked around with it a bit.
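
(For reference, the hook Masami means: on the guest side, kvmclock wires its
reader into pv_time_ops.sched_clock, roughly as below, simplified from
arch/x86/kernel/kvmclock.c of that era. A paravirt perf_clock could
presumably plug in the same way.)

static cycle_t kvm_clock_read(void)
{
	struct pvclock_vcpu_time_info *src;
	cycle_t ret;

	preempt_disable_notrace();
	/* per-vcpu page the host keeps updated (tsc scale, offset, ...) */
	src = &hv_clock[smp_processor_id()].pvti;
	ret = pvclock_clocksource_read(src);
	preempt_enable_notrace();
	return ret;
}

void __init kvmclock_init(void)
{
	/* ... MSR registration, hv_clock allocation, etc. ... */
	pv_time_ops.sched_clock = kvm_clock_read;
}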

David