Re: [PATCH v2] nd_blk: add support for "read flush" DSM flag

From: Dan Williams
Date: Thu Aug 20 2015 - 16:27:24 EST


On Thu, Aug 20, 2015 at 12:00 PM, Ross Zwisler
<ross.zwisler@xxxxxxxxxxxxxxx> wrote:
> On Thu, 2015-08-20 at 11:26 -0700, Dan Williams wrote:
>> On Thu, Aug 20, 2015 at 11:17 AM, Ross Zwisler
>> <ross.zwisler@xxxxxxxxxxxxxxx> wrote:
>> > On Thu, 2015-08-20 at 10:59 -0700, Dan Williams wrote:
>> [..]
>> > Ah, I think we're getting confused about the deinterleave part.
>> >
>> > The aperture is a set of contiguous addresses from the perspective of the
>> > DIMM, but when it's interleaved by the iMC it becomes a bunch of segments that
>> > are not contiguous in the virtual address space of the kernel.
>> >
>> > Meaning, say you have an 8k aperture that is interleaved with one other DIMM
>> > on a 256 byte granularity - this means that in SPA space you'll end up with a
>> > big mesh of 256 byte chunks, half of which belong to you and half which don't:
>> >
>> > SPA space:
>> > +--------------------+
>> > |256 bytes (ours)    |
>> > +--------------------+
>> > |256 bytes (not ours)|
>> > +--------------------+
>> > |256 bytes (ours)    |
>> > +--------------------+
>> > |256 bytes (not ours)|
>> > +--------------------+
>> > ...
>> >
>> > To be able to flush the entire aperture unconditionally, we have to walk
>> > through all the segments that belong to us and flush each one of them. I
>> > don't think we want to blindly flush the entire interleaved space because a)
>> > the other chunks are some other DIMMs' apertures, and b) we'd be flushing 2x
>> > or more (depending on how many DIMMs are interleaved) the space we need, one
>> > cache line at a time.
>>
>> I am indeed proposing flushing other DIMMs because those ranges are
>> invalidated by the aperture moving. This is based on the assumption
>> that flushing is cheap when no dirty lines are found. The performance
>> gains of doing piecemeal flushes seem not worth the complexity.
>
> Why are the segments belonging to other apertures invalidated because we have
> moved our aperture? They are all independent cache lines (segments must be a
> multiple of the cache line size), and the other apertures might be in the
> middle of some other I/O operation on some other CPU that we know nothing
> about.
>

Ah, OK, the other DIMM data in the aperture should never be consumed
and so does not need to be invalidated. I'm straightened out on that
aspect.

With regard to the fencing: since we already take care to flush
writes, we don't need to fence at all for the flush, right? All we
care about is that reads see valid data.
--