Re: [PATCH] KVM: add halt_attempted_poll to VCPU stats

From: Wanpeng Li
Date: Wed Sep 16 2015 - 08:12:38 EST


On 9/16/15 6:12 PM, Christian Borntraeger wrote:
> On 15.09.2015 at 18:27, Paolo Bonzini wrote:
>> This new statistic can help diagnose VCPUs that, for any reason,
>> trigger bad behavior of halt_poll_ns autotuning.
>>
>> For example, say halt_poll_ns = 480000, and wakeups are spaced
>> alternately at 479us and 481us. Then KVM always fails polling and wastes
>> 10+20+40+80+160+320+480 = 1110 microseconds out of every
>> 479+481+479+481+479+481+479 = 3359 microseconds. The VCPU then

For the first 481us wakeup, block_ns should be 481us; since block_ns > halt_poll_ns (480us), a long halt is detected and vcpu->halt_poll_ns will be shrunk.

>> is consuming about 30% more CPU than it would use without
>> polling. This would show as an abnormally high number of
>> attempted polls compared to the successful polls.
>>
>> Cc: Christian Borntraeger <borntraeger@xxxxxxxxxx>
>> Cc: David Matlack <dmatlack@xxxxxxxxxx>
>> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Acked-by: Christian Borntraeger <borntraeger@xxxxxxxxxx>

> yes, this will help to detect some bad cases, but not all.
>
> PS:
> upstream maintenance keeps me really busy at the moment :-)
> I am looking into a case right now where auto polling goes
> completely nuts on my system:
>
> guest1: 8 vcpus, guest2: 1 vcpu
> iperf with 25 processes (-P25) from guest1 to guest2.
>
> I/O interrupts on s390 are floating (pending on all CPUs), so on
> ALL VCPUs that go to sleep, polling will consider any pending
> network interrupt as a successful poll. So with auto polling the
> guest consumes up to 5 host CPUs; without auto polling, only 1.
> Reducing halt_poll_ns to 100000 seems to work (goes back to
> 1 cpu).
>
> The proper way might be to feed the result of the
> interrupt dequeue back into the heuristics. I don't know yet how
> to handle that properly.

Can this be reproduced on the x86 platform?

Regards,
Wanpeng Li
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/