Re: [PATCH v3 next 09/10] lib: mul_u64_u64_div_u64() Optimise the divide code

From: David Laight
Date: Wed Jun 18 2025 - 18:27:01 EST


On Wed, 18 Jun 2025 16:12:49 -0400 (EDT)
Nicolas Pitre <nico@xxxxxxxxxxx> wrote:

> On Wed, 18 Jun 2025, David Laight wrote:
>
> > On Wed, 18 Jun 2025 11:39:20 -0400 (EDT)
> > Nicolas Pitre <nico@xxxxxxxxxxx> wrote:
> >
> > > > > + q_digit = n_long / d_msig;
> > > >
> > > > I think you want to do the divide right at the top - maybe even if the
> > > > result isn't used!
> > > > All the shifts then happen while the divide instruction is in progress
> > > > (even without out-of-order execution).
>
> Well.... testing on my old Intel Core i7-4770R doesn't show a gain.
>
> With your proposed patch as is: ~34ns per call
>
> With my proposed changes: ~31ns per call
>
> With my changes but leaving the divide at the top of the loop: ~32ns per call

I wonder what makes the difference...
Is that with random 64-bit values (where you don't expect zero digits)
or with values where small divisors and/or zero digits are likely?

On x86 you can use the PERF_COUNT_HW_CPU_CYCLES counter to get pretty accurate
counts for a single call.
The 'trick' is to use syscall(__NR_perf_event_open, ...) and pc = mmap() on the fd
to get the counter register number, pc->index - 1.
Then you want:
static inline unsigned int rdpmc(unsigned int counter)
{
	unsigned int low, high;

	asm volatile("rdpmc" : "=a" (low), "=d" (high) : "c" (counter));
	return low;
}
and do:
unsigned int start = rdpmc(pc->index - 1);
unsigned int zero = 0;
OPTIMISER_HIDE_VAR(zero);
q = mul_u64_add_u64_div_u64(a + (start & zero), b, c, d);
elapsed = rdpmc(pc->index - 1 + (q & zero)) - start;

That carefully forces the rdpmc reads to bracket the code being tested
(the data dependency through 'zero' stops them being re-ordered) without
the massive penalty of lfence/mfence.
Do 10 calls and the last 8 will be pretty similar.
That lets you time cold-cache and branch mis-prediction effects.
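
In case it helps, here is a minimal user-space sketch of the setup side of
that (the helper name and the global 'pc' are just illustrative, error
handling mostly skipped):

	#include <linux/perf_event.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static struct perf_event_mmap_page *pc;

	static int setup_cycle_counter(void)
	{
		struct perf_event_attr attr = {
			.type = PERF_TYPE_HARDWARE,
			.size = sizeof(attr),
			.config = PERF_COUNT_HW_CPU_CYCLES,
			.exclude_kernel = 1,
		};
		int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);

		if (fd < 0)
			return -1;

		/* The first page of the mapping is struct perf_event_mmap_page */
		pc = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);
		if (pc == MAP_FAILED)
			return -1;

		/* index is the hardware counter number + 1, 0 => rdpmc unusable */
		return pc->index ? 0 : -1;
	}

Once pc->index is non-zero the rdpmc() helper above can be used directly
with pc->index - 1.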

> > Can you do accurate timings for arm64 or arm32?
>
> On a Broadcom BCM2712 (ARM Cortex-A76):
>
> With your proposed patch as is: ~20 ns per call
>
> With my proposed changes: ~19 ns per call
>
> With my changes but leaving the divide at the top of the loop: ~19 ns per call

Pretty much no difference.
Is that 64-bit or 32-bit (or the 16-bits-per-iteration variant on 64-bit)?
The shifts get more expensive on 32-bit.
Have you timed the original code?


>
> Both CPUs have the same max CPU clock rate (2.4 GHz). These are obtained
> with clock_gettime(CLOCK_MONOTONIC) over 56000 calls. There is some
> noise in the results over multiple runs though but still.

That many loops definitely trains the branch predictor and ignores
any effects of loading the I-cache.
As Linus keeps saying, the kernel tends to run 'cold cache', so code size
matters.
That also means that branches are 50% likely to be mis-predicted.
(Although working out what the cpu actually does is hard.)

>
> I could get cycle measurements on the RPi5 but that requires a kernel
> recompile.

Or a loadable module - shame there isn't a sysctl.
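For reference, something like this should avoid the recompile - a rough
sketch only, assuming the aim is to let user space read the arm64 cycle
counter (PMCCNTR_EL0) directly; the module and function names are made up
and there is no cleanup on unload:

	#include <linux/module.h>
	#include <linux/smp.h>
	#include <linux/bits.h>
	#include <asm/sysreg.h>
	#include <asm/barrier.h>

	static void enable_user_ccnt(void *unused)
	{
		/* Allow EL0 reads of the counters (PMUSERENR_EL0.{EN,SW,CR,ER}) */
		write_sysreg(0xf, pmuserenr_el0);
		/* Count cycles at both EL1 and EL0 */
		write_sysreg(0, pmccfiltr_el0);
		/* Enable the cycle counter (PMCNTENSET_EL0.C) */
		write_sysreg(BIT(31), pmcntenset_el0);
		/* Start the counters (PMCR_EL0.E) */
		write_sysreg(read_sysreg(pmcr_el0) | 1, pmcr_el0);
		isb();
	}

	static int __init user_ccnt_init(void)
	{
		on_each_cpu(enable_user_ccnt, NULL, 1);
		return 0;
	}
	module_init(user_ccnt_init);

	MODULE_LICENSE("GPL");

User space can then read the counter with
asm volatile("mrs %0, pmccntr_el0" : "=r" (cycles)), with the usual
caveats about migration between cpus and idle states.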

>
> > I've found a 2004 Arm book that includes several I-cache busting
> > divide algorithms.
> > But I'm sure this pi-5 has hardware divide.
>
> It does.
>
>
> Nicolas