Re: [RFC] disable PCIE 'Enable No Snoop' bit by default

From: Matthew Wilcox
Date: Thu Sep 06 2007 - 07:57:51 EST


On Thu, Sep 06, 2007 at 01:35:08PM +0800, Shaohua Li wrote:
> The PCIE 'Enable No Snoop' bit is set by default per the PCIE spec, but the
> OS assumes PCI DMA is snooped, which is what legacy PCI devices do. Enabling
> no snoop might cause potential DMA issues. This patch disables it; if a
> driver can work with no snoop, we can provide a helper to enable it.

I'm not sure your analysis is correct. Here's what my draft copy of
the pcie 2.0 spec says:

Enable No Snoop - If this bit is Set, the Function is permitted to
Set the No Snoop bit in the Requester Attributes of transactions it
initiates that do not require hardware enforced cache coherency (see
Section 2.2.6.5). Note that setting this bit to 1b should not cause
a Function to Set the No Snoop attribute on all transactions that it
initiates. Even when this bit is Set, a Function is only permitted
to Set the No Snoop attribute on a transaction when it can guarantee
that the address of the transaction is not stored in any cache in
the system. This bit is permitted to be hardwired to 0b if a Function
would never Set the No Snoop attribute in transactions it initiates.
Default value of this bit is 1b.

That implies that devices are only allowed to set it when it's safe to
do so ... and we don't need to turn it off.

--
Intel are signing my paycheques ... these opinions are still mine
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."