Re: pci_set_mwi() ... why isn't it used more?

From: David Brownell (david-b@pacbell.net)
Date: Thu Jan 30 2003 - 11:25:07 EST


Anton Blanchard wrote:
>
>
>>I suppose the potential for broken PCI devices is exactly why MWI
>>isn't automatically enabled when DMA mastering is enabled, though
>>I don't understand why the cacheline size doesn't get fixed then
>>(unless it's that same issue). Devices can use the cacheline size
>>to get better "Memory Read Line/Read Multiple" throughput; setting
>>it shouldn't be tied exclusively to enabling MWI.
>
>
> There are a number of cases where we can't enable MWI, either because
> the PCI card doesn't support the CPU cacheline size or we have to set the
> PCI card cacheline size to something smaller due to various bugs.

At least the former case seems like it should be easily detectable.
The cacheline setting "must" be supported (sez the PCI spec) if MWI
can be enabled ... and may be supported in other cases too.
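
A sketch of what I mean, assuming nothing fancier than a write
followed by a read-back (the helper name is made up, it's not in
the tree):

	#include <linux/pci.h>
	#include <linux/errno.h>

	/*
	 * Write the desired cacheline size (in 32-bit words) and read
	 * it back.  If the device hardwires some of the bits -- like
	 * the e100 case below, where the top bits are read-only -- the
	 * values won't match and the caller can refuse to enable MWI.
	 */
	static int pci_check_cache_line_size(struct pci_dev *dev, u8 size)
	{
		u8 readback;

		pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE, size);
		pci_read_config_byte(dev, PCI_CACHE_LINE_SIZE, &readback);

		return (readback == size) ? 0 : -EINVAL;
	}

Remember PCI_CACHE_LINE_SIZE is in units of 32-bit words, so a 128
byte cacheline means writing 32 there.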

> e.g. I understand earlier versions of the e100 don't support a 128 byte
> cacheline (and the top bits are read only, so setting it up for 128 bytes
> will result in it being set to 0). Not good for read multiple/line
> and even worse if we decide to enable MWI :)

At least on 2.5.59, the pci_generic_prep_mwi() code doesn't check
for that type of error: it just writes the cacheline size and
doesn't verify that setting it worked as expected. Checking for
that kind of problem would make it safer to call pci_set_mwi() in
such cases ... e.g. using it on a P4 with 128 byte L1 cachelines
would fail cleanly, while on an Athlon (64 byte L1) it might work
(depending on which top bits are unusable).
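
A driver could then just call pci_set_mwi() unconditionally and
treat MWI as the optimization it is; roughly like this (sketch only,
driver name made up, error handling trimmed):

	/* MWI is optional; if the cacheline setup doesn't stick
	 * (say, 128 byte lines on a device whose top bits are
	 * read-only), keep going without it.
	 */
	if (pci_set_mwi(pdev) != 0)
		printk(KERN_INFO "mydrv: MWI not enabled, continuing without it\n");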

- Dave
