Re: [PATCH] irqchip/mips-gic: mark count and compare accessors notrace

From: Marcin Nowakowski
Date: Thu Jun 08 2017 - 09:35:18 EST


Hi Marc,

On 08.06.2017 15:26, Marc Zyngier wrote:
> On Thu, Jun 08 2017 at 3:06:23 pm BST, Marcin Nowakowski <marcin.nowakowski@xxxxxxxxxx> wrote:
>> gic_read_count(), gic_write_compare() and gic_write_cpu_compare() are
>> often used in sequence to update the compare register with a count
>> value increased by a small offset.
>> With small delta values used to update the compare register, the
>> function-trace overhead of these operations may exceed the programmed
>> interval, so the deadline has already passed by the time the compare
>> register is written, leading to update failure.
>>
>> Signed-off-by: Marcin Nowakowski <marcin.nowakowski@xxxxxxxxxx>
>> ---
>> drivers/irqchip/irq-mips-gic.c | 6 +++---
>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
>> index eb7fbe1..ecee073 100644
>> --- a/drivers/irqchip/irq-mips-gic.c
>> +++ b/drivers/irqchip/irq-mips-gic.c
>> @@ -140,7 +140,7 @@ static inline void gic_map_to_vpe(unsigned int intr, unsigned int vpe)
>>  }
>>  #ifdef CONFIG_CLKSRC_MIPS_GIC
>> -u64 gic_read_count(void)
>> +notrace u64 gic_read_count(void)

> The attributes are usually placed between the return type and the
> function name.

OK, I'll change this.

>>  {
>>  	unsigned int hi, hi2, lo;
>> @@ -167,7 +167,7 @@ unsigned int gic_get_count_width(void)
>>  	return bits;
>>  }
>> -void gic_write_compare(u64 cnt)
>> +notrace void gic_write_compare(u64 cnt)
>>  {
>>  	if (mips_cm_is64) {
>>  		gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE), cnt);
>> @@ -179,7 +179,7 @@ void gic_write_compare(u64 cnt)
>>  	}
>>  }
>> -void gic_write_cpu_compare(u64 cnt, int cpu)
>> +notrace void gic_write_cpu_compare(u64 cnt, int cpu)
>>  {
>>  	unsigned long flags;

> What guarantees do you have that some event (interrupt? frequency
> scaling?) won't delay these anyway, generating the same missed deadline?
> Shouldn't the code deal with this case and acknowledge that the
> deadline has already expired?

Well - there is no guarantee of that at the moment. One solution the kernel provides (and that works in this scenario) is to enable GENERIC_CLOCKEVENTS_MIN_ADJUST, which ensures that failed updates are always retried with an increasing minimum adjustment step.
That approach, however, suffers from a different issue, as described and discussed here: https://patchwork.kernel.org/patch/8909491/

Various events can delay these operations, and even with notrace they might still fail. As it stands, however, even if the calling code does a retry, the latency I have observed with tracing enabled is often too long for the update to ever succeed.

Marcin