Re: [RFC PATCH 00/17] btrfs zoned block device support

From: Austin S. Hemmelgarn
Date: Mon Aug 13 2018 - 15:29:13 EST


On 2018-08-13 15:20, Hannes Reinecke wrote:
On 08/13/2018 08:42 PM, David Sterba wrote:
On Fri, Aug 10, 2018 at 03:04:33AM +0900, Naohiro Aota wrote:
This series adds zoned block device support to btrfs.

Yay, thanks!

As this is an RFC, I'll give you some feedback. The code looks OK for what it claims
to do; I'll skip style and unimportant implementation details for now, as
there are bigger questions.

Zoned devices bring some constraints, so not all filesystem features
can be expected to work; this rules out any form of in-place
updates like NODATACOW.
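
For illustration only, such a restriction could be enforced at mount time
along these lines; this is a hedged sketch, and btrfs_fs_is_zoned() plus the
surrounding shape are assumptions rather than code from the series:

        /* Hypothetical sketch: refuse in-place-update options on a zoned
         * filesystem at mount time.  btrfs_fs_is_zoned() is an assumed
         * helper, not taken from the patch series. */
        static int check_zoned_mount_opts(struct btrfs_fs_info *fs_info)
        {
                if (!btrfs_fs_is_zoned(fs_info))
                        return 0;

                if (btrfs_test_opt(fs_info, NODATACOW)) {
                        btrfs_err(fs_info,
                                  "zoned: NODATACOW needs in-place updates and cannot be supported");
                        return -EINVAL;
                }
                return 0;
        }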

Then there's a list of 'how will zoned devices work with feature X?' questions.

You disable fallocate and DIO. I haven't looked closer at the fallocate
case, but DIO could work in the sense that open() will open the file and
any write will fall back to buffered writes. This is already implemented, so
it would just need to be wired up.
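
Wiring it up could look roughly like the helper below, called from the
write_iter entry point; a hedged sketch, with btrfs_fs_is_zoned() again an
assumed helper:

        /* Hypothetical sketch: open(O_DIRECT) succeeds, but the write
         * itself is demoted to the buffered path on zoned filesystems. */
        static void zoned_demote_dio_write(struct kiocb *iocb, struct inode *inode)
        {
                if (!(iocb->ki_flags & IOCB_DIRECT))
                        return;
                if (!btrfs_fs_is_zoned(BTRFS_I(inode)->root->fs_info))
                        return;
                /* Clearing IOCB_DIRECT sends the write down the existing
                 * buffered write path instead of the DIO path. */
                iocb->ki_flags &= ~IOCB_DIRECT;
        }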

Mixed device types are not allowed, and I tend to agree with that,
though it could work in principle. The chunk allocator would just
have to be aware of the device types and be tweaked to allocate from
the same group, but the btrfs code is not ready for that in terms of
allocator capabilities and configuration options.
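
Concretely, the device list walk in the chunk allocator would need a filter
along these lines (a sketch under the assumption that the device type is
decided per chunk; names besides bdev_is_zoned() are illustrative):

        /* Hypothetical sketch: a chunk is built only from devices of one
         * type, either all zoned or all conventional. */
        static bool device_usable_for_chunk(struct btrfs_device *device,
                                            bool chunk_is_zoned)
        {
                /* bdev_is_zoned() comes from the block layer; mixing the
                 * two write models inside one chunk would require the
                 * extent allocator to handle both at once. */
                return bdev_is_zoned(device->bdev) == chunk_is_zoned;
        }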

Device replace is disabled, but the changelog suggests there's a way to
make it work, so it's a matter of implementation. And this should be
implemented at the time of merge.

How would a device replace work in general?
While I do understand that device replace is possible with RAID thingies, I somewhat fail to see how one could do a device replacement without RAID functionality.
Is it even possible?
If so, how would it be different from a simple umount?
Device replace is implemented in largely the same manner as most other live data migration tools (for example, LVM2's pvmove command).

In short, when you issue a replace command for a given device, all writes that would go to that device are instead sent to the new device. While this is happening, old data is copied over from the old device to the new one. Once all the data is copied, the old device is released (and its BTRFS signature wiped), and the new device has its device ID updated to that of the old device.

This is possible largely because of the COW infrastructure, but it's implemented in a way that doesn't entirely depend on it (otherwise it wouldn't work for NOCOW files).

Handling this on zoned devices is not likely to be easy, though: you would functionally have to freeze I/O that would hit the device being replaced so that you don't accidentally write to a sequential zone out of order.
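
As a rough outline of both points above (the normal flow plus the zoned
complication), in deliberately hypothetical pseudo-C with invented names:

        /* Hypothetical outline, not btrfs code: replace redirects new
         * writes to the target while old data is copied over; on a zoned
         * device each zone would additionally have to be frozen and copied
         * sequentially so the target's write pointers stay valid. */
        static int replace_device(struct fs *fs, struct dev *old, struct dev *new)
        {
                u64 zone;

                redirect_new_writes(fs, old, new);      /* new writes go to the target */

                for (zone = 0; zone < old->nr_zones; zone++) {
                        freeze_zone_io(fs, old, zone);  /* no out-of-order appends */
                        copy_zone_sequentially(old, new, zone);
                        thaw_zone_io(fs, old, zone);
                }

                wipe_signature(old);                    /* release the old device */
                new->devid = old->devid;                /* target takes over the devid */
                return 0;
        }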

RAID5/6 + zoned support is highly desired and the lack of it could be
considered a NAK for the whole series. The drive sizes are expected to
be several terabytes, and that sounds too risky without the redundancy
options (RAID1 is not sufficient here).

That really depends on the allocator.
If we can make the RAID code work with zone-sized stripes it should be pretty trivial. I can have a look at that; RAID support was on my agenda anyway (albeit for MD, not for btrfs).

The changelog does not explain why this does not or cannot work, so I
cannot reason about that or possibly suggest workarounds or solutions.
But I think it should work in principle.

As mentioned, it really should work for zone-sized stripes. I'm not sure we can make it work with stripes smaller than the zone size.
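
For what zone-sized stripes would mean in practice, a hedged sketch (the
fallback value and the call site are assumptions; bdev_zone_sectors() is the
existing block-layer helper):

        /* Hypothetical sketch: pin the RAID stripe length to the zone size
         * so each per-device stripe is written strictly sequentially into
         * one zone, avoiding sub-zone read-modify-write for parity. */
        static u64 stripe_len_for_device(struct block_device *bdev,
                                         u64 default_stripe_len)
        {
                if (!bdev_is_zoned(bdev))
                        return default_stripe_len;

                /* bdev_zone_sectors() reports the zone size in 512-byte
                 * sectors. */
                return (u64)bdev_zone_sectors(bdev) << SECTOR_SHIFT;
        }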

As this is the first post and an RFC, I don't expect that everything is
implemented, but at least the known missing points should be documented.
You've implemented lots of the low-level zoned support and extent
allocation, so even if raid56 might be difficult, it should be the
smaller part.

FYI, I've run a simple stress test on a zoned device (git clone linus && make) and haven't found any issues; compilation ran without a problem, and with quite decent speed.
Good job!

Cheers,

Hannes