Re: Overhead of io{read,write}{8,16,32,64} on x86
From: Peter Zijlstra
Date: Wed Nov 01 2023 - 06:28:59 EST
On Wed, Nov 01, 2023 at 10:08:42AM +0100, Arnd Bergmann wrote:
> On Tue, Oct 31, 2023, at 22:41, Jiaxun Yang wrote:
> > Hi all,
> >
> > I'm trying to improve the kernel's support for devices that have
> > their I/O ports mapped into MMIO. That involves converting existing
> > drivers that use {in,out}{l,w,b} to io{read,write}{8,16,32,64}, so
> > they can benefit from ioport_map() and pci_iomap().
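
For illustration, such a conversion might look roughly like the
fragment below; the driver context and FOO_REG are made up, only
pci_iomap() and iowrite8() are real interfaces:

    /* Before: hard-coded port I/O, only works where PIO exists. */
    outb(val, ioport + FOO_REG);

    /* After: map BAR 0 once at probe time (0 = map whole BAR)... */
    void __iomem *base = pci_iomap(pdev, 0, 0);
    if (!base)
            return -ENOMEM;

    /* ...then one accessor handles both PIO and MMIO BARs. */
    iowrite8(val, base + FOO_REG);
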
> >
> > However, the problem is that io{read,write}{8,16,32,64} incur a
> > penalty on x86: they add extra function calls (they are not
> > inlined) and an extra run-time branch to distinguish MMIO from PIO.
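
That dispatch is easy to see in lib/iomap.c; simplified (the real
error path calls bad_io_access(), condensed to a WARN here), each
accessor is an out-of-line function doing roughly:

    unsigned int ioread8(const void __iomem *addr)
    {
            unsigned long port = (unsigned long __force)addr;

            if (port >= PIO_RESERVED)       /* a real MMIO mapping */
                    return readb(addr);
            if (port > PIO_OFFSET)          /* an ioport_map() cookie */
                    return inb(port & PIO_MASK);
            WARN_ON_ONCE(1);                /* bad cookie */
            return 0xff;
    }
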
> >
> > x86 folks, do you think this kind of overhead is acceptable? I do
> > think most PCI/ISA drivers will need to be converted.
> >
> > linux-arch folks, do you think it would be better to introduce a
> > variant of io{read,write}{8,16,32,64} that goes directly to PIO on
> > x86 but keeps the existing behaviour on other architectures?
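
One entirely hypothetical shape for such a variant (the _pio name is
invented here; nothing like this exists in the tree):

    #ifdef CONFIG_X86
    /* Collapse straight back to port I/O on x86... */
    #define ioread8_pio(addr)   inb((unsigned long __force)(addr))
    #else
    /* ...keep the generic MMIO/PIO dispatch elsewhere. */
    #define ioread8_pio(addr)   ioread8(addr)
    #endif
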
>
> I think in general there is not much of a problem here, since the
> inb()/outb() operations themselves are extremely slow already; in
> particular, outb() writes are non-posted, unlike writeb().
>
> My feeling is that converting to ioread/iowrite is generally a win
> for any driver that already needs to support both cases (e.g.
> serial-8250), since it can unify the two code paths.
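
As a sketch of what that unification buys (made-up fields, loosely
modeled on the UPIO_PORT/UPIO_MEM split in 8250):

    /* Before: every register access picks a path at run time. */
    if (up->iotype == UPIO_PORT)
            outb(val, up->iobase + offset);
    else
            writeb(val, up->membase + offset);

    /* After: map once via ioport_map()/pci_iomap(), then one path. */
    iowrite8(val, up->base + offset);
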
And here I looked at iowrite8() and found it includes tracing and all
sorts, which makes it unsuitable for things like early-serial and the
shiny new atomic write functionality of said serial-8250.
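
For reference, and assuming "tracing" here means the rwmmio
tracepoints that the generic accessors grow under
CONFIG_TRACE_MMIO_ACCESS, the traced path looks roughly like:

    static inline void writeb(u8 value, volatile void __iomem *addr)
    {
            /* Tracepoint fires before the store -- exactly what an
             * atomic/NMI-safe console write path cannot tolerate. */
            log_write_mmio(value, 8, addr, _THIS_IP_, _RET_IP_);
            __io_bw();
            __raw_writeb(value, addr);
            __io_aw();
    }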