*Re: [PATCH 02/19] btrfs: Get zone information of zoned block devices
From: David Sterba @ 2019-06-27 15:11 UTC
To: Naohiro Aota
Cc: dsterba, linux-btrfs, David Sterba, Chris Mason, Josef Bacik,
Qu Wenruo, Nikolay Borisov, linux-kernel, Hannes Reinecke,
linux-fsdevel, Damien Le Moal, Matias Bjørling,
Johannes Thumshirn, Bart Van Assche
On Tue, Jun 18, 2019 at 06:42:09AM +0000, Naohiro Aota wrote:
> >> + device->seq_zones = kcalloc(BITS_TO_LONGS(device->nr_zones),
> >> + sizeof(*device->seq_zones), GFP_KERNEL);
> >
> > What's the expected range for the allocation size? There's one bit per
> > zone, so one 4KiB page can hold up to 32768 zones, with 1GiB it's 32TiB
> > of space on the drive. Ok that seems safe for now.
>
> Typically, the zone size is 256MB (the default value in tcmu-runner). On such a device,
> we need one 4KB page per 8TB of disk space. Still, it's quite safe.
Ok, and for drives up to 16T the allocation is 8KB, which the allocator is
usually able to find.
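For illustration, a minimal sketch of the sizing math discussed here, reusing the
field names from the quoted hunk (this is not the patch code itself):

  /* One bit per zone marks a sequential-write-required zone.
   * 32768 zones -> 4KiB of bitmap: 8TB of disk with 256MB zones,
   * 32TiB with 1GiB zones, matching the numbers above. */
  device->seq_zones = kcalloc(BITS_TO_LONGS(device->nr_zones),
                              sizeof(*device->seq_zones), GFP_KERNEL);
  if (!device->seq_zones)
          return -ENOMEM;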
> >> + u8 zone_size_shift;
> >
> > So the zone_size is always power of two? I may be missing something, but
> > I wonder if the calculations based on shifts are safe.
>
> The kernel ZBD support has a restriction that
> "The zone size must also be equal to a power of 2 number of logical blocks."
> http://zonedstorage.io/introduction/linux-support/#zbd-support-restrictions
>
> So, the zone_size is guaranteed to be a power of two.
Ok. I don't remember if there are assertions, but I would like to see them
in the filesystem code independently anyway, as mount-time sanity
checks.

*Re: [PATCH 09/19] btrfs: limit super block locations in HMZONED mode
From: David Sterba @ 2019-06-17 22:53 UTC
To: Naohiro Aota
Cc: linux-btrfs, David Sterba, Chris Mason, Josef Bacik, Qu Wenruo,
Nikolay Borisov, linux-kernel, Hannes Reinecke, linux-fsdevel,
Damien Le Moal, Matias Bjørling, Johannes Thumshirn,
Bart Van Assche
On Fri, Jun 07, 2019 at 10:10:15PM +0900, Naohiro Aota wrote:
> When in HMZONED mode, make sure that device super blocks are located in
> randomly writable zones of zoned block devices. That is, do not write super
> blocks in sequential write required zones of host-managed zoned block
> devices as update would not be possible.
This could be explained in more detail. My understanding is that the 1st
and 2nd copy superblocks are skipped at write time but the zones
containing the superblocks are not excluded from allocations. I.e. regular
data can appear where the superblocks would exist on a non-hmzoned
filesystem. Is that correct?
The other option is to completely exclude the zone that contains the
superblock copies.
primary sb 64K
1st copy 64M
2nd copy 256G
It depends on the drive, but I think the size of the random write zone
will very often cover the primary and 1st copy. So there's at least some
backup copy.
The 2nd copy will be in a sequential-only zone, so the whole zone
needs to be excluded in exclude_super_stripes. But it's not, so this
means data can go there. I think the zone should be left empty.
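A rough sketch of the check such an exclusion would need, reusing the fields
from patch 02 (the helper name here is hypothetical):

  /* If a superblock copy (64K, 64M, 256G) lands in a sequential-write-required
   * zone, the whole zone would be excluded from allocation instead of letting
   * data fill it. */
  static bool sb_zone_is_sequential(struct btrfs_device *device, u64 sb_offset)
  {
          unsigned int zone = sb_offset >> device->zone_size_shift;

          return test_bit(zone, device->seq_zones);
  }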

*Re: [PATCH 09/19] btrfs: limit super block locations in HMZONED mode
From: Naohiro Aota @ 2019-06-18 9:01 UTC
To: dsterba
Cc: linux-btrfs, David Sterba, Chris Mason, Josef Bacik, Qu Wenruo,
Nikolay Borisov, linux-kernel, Hannes Reinecke, linux-fsdevel,
Damien Le Moal, Matias Bjørling, Johannes Thumshirn,
Bart Van Assche
On 2019/06/18 7:53, David Sterba wrote:
> On Fri, Jun 07, 2019 at 10:10:15PM +0900, Naohiro Aota wrote:
>> When in HMZONED mode, make sure that device super blocks are located in
>> randomly writable zones of zoned block devices. That is, do not write super
>> blocks in sequential write required zones of host-managed zoned block
>> devices as update would not be possible.
>
> This could be explained in more detail. My understanding is that the 1st
> and 2nd copy superblocks is skipped at write time but the zone
> containing the superblocks is not excluded from allocations. Ie. regular
> data can appear in place where the superblocks would exist on
> non-hmzoned filesystem. Is that correct?
Correct. You can see regular data stored at the usual SB locations on a HMZONED fs.
> The other option is to completely exclude the zone that contains the
> superblock copies.
>
> primary sb 64K
> 1st copy 64M
> 2nd copy 256G
>
> Depends on the drives, but I think the size of the random write zone
> will very often cover primary and 1st copy. So there's at least some
> backup copy.
>
> The 2nd copy will be in the sequential-only zone, so the whole zone
> needs to be excluded in exclude_super_stripes. But it's not, so this
> means data can go there. I think the zone should be left empty.
>
I see. That's safer for older kernels/userland, right? By keeping that zone empty,
we can prevent old ones from misinterpreting data as a SB.
Alright, I will change the code to do so.

*Re: [PATCH 09/19] btrfs: limit super block locations in HMZONED mode
From: David Sterba @ 2019-06-27 15:35 UTC
To: Naohiro Aota
Cc: dsterba, linux-btrfs, David Sterba, Chris Mason, Josef Bacik,
Qu Wenruo, Nikolay Borisov, linux-kernel, Hannes Reinecke,
linux-fsdevel, Damien Le Moal, Matias Bjørling,
Johannes Thumshirn, Bart Van Assche
On Tue, Jun 18, 2019 at 09:01:35AM +0000, Naohiro Aota wrote:
> On 2019/06/18 7:53, David Sterba wrote:
> > On Fri, Jun 07, 2019 at 10:10:15PM +0900, Naohiro Aota wrote:
> >> When in HMZONED mode, make sure that device super blocks are located in
> >> randomly writable zones of zoned block devices. That is, do not write super
> >> blocks in sequential write required zones of host-managed zoned block
> >> devices as update would not be possible.
> >
> > This could be explained in more detail. My understanding is that the 1st
> > and 2nd copy superblocks is skipped at write time but the zone
> > containing the superblocks is not excluded from allocations. Ie. regular
> > data can appear in place where the superblocks would exist on
> > non-hmzoned filesystem. Is that correct?
>
> Correct. You can see regular data stored at usually SB location on HMZONED fs.
>
> > The other option is to completely exclude the zone that contains the
> > superblock copies.
> >
> > primary sb 64K
> > 1st copy 64M
> > 2nd copy 256G
> >
> > Depends on the drives, but I think the size of the random write zone
> > will very often cover primary and 1st copy. So there's at least some
> > backup copy.
> >
> > The 2nd copy will be in the sequential-only zone, so the whole zone
> > needs to be excluded in exclude_super_stripes. But it's not, so this
> > means data can go there. I think the zone should be left empty.
> >
>
> I see. That's more safe for the older kernel/userland, right? By keeping that zone empty,
> we can avoid old ones to mis-interpret data to be SB.
That's not only for older kernels: the superblock locations are known,
and their contents should not depend on the type of device the filesystem
was created on. This can be considered part of the on-disk format.

*Re: [PATCH 11/19] btrfs: introduce submit buffer
From: Damien Le Moal @ 2019-06-17 3:16 UTC
To: Josef Bacik, Naohiro Aota
Cc: linux-btrfs, David Sterba, Chris Mason, Qu Wenruo,
Nikolay Borisov, linux-kernel, Hannes Reinecke, linux-fsdevel,
Matias Bjørling, Johannes Thumshirn, Bart Van Assche
Josef,
On 2019/06/13 23:15, Josef Bacik wrote:
> On Fri, Jun 07, 2019 at 10:10:17PM +0900, Naohiro Aota wrote:
>> Sequential allocation is not enough to maintain sequential delivery of
>> write IOs to the device. Various features (async compress, async checksum,
>> ...) of btrfs affect ordering of the IOs. This patch introduces submit
>> buffer to sort WRITE bios belonging to a block group and sort them out
>> sequentially in increasing block address to achieve sequential write
>> sequences with __btrfs_map_bio().
>>
>> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
>
> I hate everything about this. Can't we just use the plugging infrastructure for
> this and then make sure it re-orders the bios before submitting them? Also
> what's to prevent the block layer scheduler from re-arranging these io's?
> Thanks,
The block I/O scheduler reorders requests in LBA order, but that happens only for a
newly inserted request against pending requests. If there are no pending
requests because all requests were already issued, no ordering happens, and even
worse, if the drive queue is not full yet (e.g. there are free tags), then the
newly inserted request will be dispatched almost immediately, preventing
reordering with subsequent incoming write requests from happening.
The other problem is that the mq-deadline scheduler does not track the zone WP
position. Write request issuing is done regardless of the current WP value,
solely based on LBA ordering. This means that mq-deadline will not prevent
out-of-order, or rather, unaligned write requests. These will not be detected
and will be dispatched whenever possible. The reasons for this are that:
1) the disk user (the FS) has to manage zone WP positions anyway. So duplicating
that management at the block IO scheduler level is inefficient.
2) Adding zone WP management at the block IO scheduler level would also need a
write error processing path to resync the WP value in case of failed writes. But
the user/FS also needs that anyway. Again duplicated functionalities.
3) The block layer will need a timeout to force issue or cancel pending
unaligned write requests. This is necessary in case the drive user stops issuing
writes (for whatever reasons) or the scheduler is being switched. This would
unnecessarily cause write I/O errors or cause deadlocks if the request queue
quiesce mode is entered at the wrong time (and I do not see a good way to deal
with that).
blk-mq is already complicated enough. Adding this to the block IO scheduler will
unnecessarily complicate things further for no real benefits. I would like to
point out the dm-zoned device mapper and f2fs which are both already dealing
with write ordering and write error processing directly. Both are fairly
straightforward but completely different and each optimized for their own structure.
Naohiro's changes to the btrfs IO scheduling have the same intent, that is, to efficiently
integrate and handle write ordering "a la btrfs". Would creating a different
"hmzoned" btrfs IO scheduler help address your concerns?
Best regards.
--
Damien Le Moal
Western Digital Research

*Re: [PATCH 11/19] btrfs: introduce submit buffer
From: David Sterba @ 2019-06-18 0:00 UTC
To: Damien Le Moal
Cc: Josef Bacik, Naohiro Aota, linux-btrfs, David Sterba,
Chris Mason, Qu Wenruo, Nikolay Borisov, linux-kernel,
Hannes Reinecke, linux-fsdevel, Matias Bjørling,
Johannes Thumshirn, Bart Van Assche
On Mon, Jun 17, 2019 at 03:16:05AM +0000, Damien Le Moal wrote:
> Josef,
>
> On 2019/06/13 23:15, Josef Bacik wrote:
> > On Fri, Jun 07, 2019 at 10:10:17PM +0900, Naohiro Aota wrote:
> >> Sequential allocation is not enough to maintain sequential delivery of
> >> write IOs to the device. Various features (async compress, async checksum,
> >> ...) of btrfs affect ordering of the IOs. This patch introduces submit
> >> buffer to sort WRITE bios belonging to a block group and sort them out
> >> sequentially in increasing block address to achieve sequential write
> >> sequences with __btrfs_map_bio().
> >>
> >> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
> >
> > I hate everything about this. Can't we just use the plugging infrastructure for
> > this and then make sure it re-orders the bios before submitting them? Also
> > what's to prevent the block layer scheduler from re-arranging these io's?
> > Thanks,
>
> The block I/O scheduler reorders requests in LBA order, but that happens for a
> newly inserted request against pending requests. If there are no pending
> requests because all requests were already issued, no ordering happen, and even
> worse, if the drive queue is not full yet (e.g. there are free tags), then the
> newly inserted request will be dispatched almost immediately, preventing
> reordering with subsequent incoming write requests to happen.
This would be good to add to the changelog.
>
> The other problem is that the mq-deadline scheduler does not track zone WP
> position. Write request issuing is done regardless of the current WP value,
> solely based on LBA ordering. This means that mq-deadline will not prevent
> out-of-order, or rather, unaligned write requests.
This seems to be the key point.
> These will not be detected
> and dispatched whenever possible. The reasons for this are that:
> 1) the disk user (the FS) has to manage zone WP positions anyway. So duplicating
> that management at the block IO scheduler level is inefficient.
> 2) Adding zone WP management at the block IO scheduler level would also need a
> write error processing path to resync the WP value in case of failed writes. But
> the user/FS also needs that anyway. Again duplicated functionalities.
> 3) The block layer will need a timeout to force issue or cancel pending
> unaligned write requests. This is necessary in case the drive user stops issuing
> writes (for whatever reasons) or the scheduler is being switched. This would
> unnecessarily cause write I/O errors or cause deadlocks if the request queue
> quiesce mode is entered at the wrong time (and I do not see a good way to deal
> with that).
>
> blk-mq is already complicated enough. Adding this to the block IO scheduler will
> unnecessarily complicate things further for no real benefits. I would like to
> point out the dm-zoned device mapper and f2fs which are both already dealing
> with write ordering and write error processing directly. Both are fairly
> straightforward but completely different and each optimized for their own structure.
So the question is on which layer the decision logic lives. The
filesystem(s) or dm-zoned have enough information about the zones and
the writes can be pre-sorted. This is what the patch proposes.
From your explanation I get that the io scheduler can throw a wrench
into the sequential ordering, for various reasons depending on the state of
internal structures of device queues. This is my simplified
interpretation, as I don't understand all the magic below the filesystem
layer.
I assume there are some guarantees about the ordering, eg. within one
plug, that apply to all schedulers (maybe not the noop one). Something
like that should be the least common functionality that the filesystem
layer can rely on.
> Naohiro changes to btrfs IO scheduler have the same intent, that is, efficiently
> integrate and handle write ordering "a la btrfs". Would creating a different
> "hmzoned" btrfs IO scheduler help address your concerns ?
IMHO these sound like the same thing: all we care about is the sequential
ordering, which in some sense is "scheduling", but I would not call it
that due to the simplicity.
As implemented, it's a list of bios, but I'd suggest using an rb-tree or
xarray: insertion is fast and submission is a start-to-end traversal.
I'm not sure that the loop in __btrfs_map_bio_zoned after the label
send_bios: has reasonable complexity, it looks like O(N^2).
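For illustration, an rb-tree keyed insertion along those lines could look like
this (a sketch only; the struct and function names are made up, not taken from
the patch):

  struct buffered_bio {
          struct rb_node node;
          struct bio *bio;
  };

  /* Buffer a WRITE bio for its block group, keyed by start sector. */
  static void submit_buffer_add(struct rb_root *root, struct buffered_bio *bb)
  {
          struct rb_node **p = &root->rb_node, *parent = NULL;

          while (*p) {
                  struct buffered_bio *cur;

                  parent = *p;
                  cur = rb_entry(parent, struct buffered_bio, node);
                  if (bb->bio->bi_iter.bi_sector < cur->bio->bi_iter.bi_sector)
                          p = &(*p)->rb_left;
                  else
                          p = &(*p)->rb_right;
          }
          rb_link_node(&bb->node, parent, p);
          rb_insert_color(&bb->node, root);
  }

Submission would then be an in-order walk with rb_first()/rb_next(), so
insertion is O(log N) and draining the buffer is O(N).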

*Re: [PATCH 11/19] btrfs: introduce submit buffer
From: Damien Le Moal @ 2019-06-18 4:04 UTC
To: dsterba
Cc: Josef Bacik, Naohiro Aota, linux-btrfs, David Sterba,
Chris Mason, Qu Wenruo, Nikolay Borisov, linux-kernel,
Hannes Reinecke, linux-fsdevel, Matias Bjørling,
Johannes Thumshirn, Bart Van Assche
David,
On 2019/06/18 8:59, David Sterba wrote:
> On Mon, Jun 17, 2019 at 03:16:05AM +0000, Damien Le Moal wrote:
>> Josef,
>>
>> On 2019/06/13 23:15, Josef Bacik wrote:
>>> On Fri, Jun 07, 2019 at 10:10:17PM +0900, Naohiro Aota wrote:
>>>> Sequential allocation is not enough to maintain sequential delivery of
>>>> write IOs to the device. Various features (async compress, async checksum,
>>>> ...) of btrfs affect ordering of the IOs. This patch introduces submit
>>>> buffer to sort WRITE bios belonging to a block group and sort them out
>>>> sequentially in increasing block address to achieve sequential write
>>>> sequences with __btrfs_map_bio().
>>>>
>>>> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
>>>
>>> I hate everything about this. Can't we just use the plugging infrastructure for
>>> this and then make sure it re-orders the bios before submitting them? Also
>>> what's to prevent the block layer scheduler from re-arranging these io's?
>>> Thanks,
>>
>> The block I/O scheduler reorders requests in LBA order, but that happens for a
>> newly inserted request against pending requests. If there are no pending
>> requests because all requests were already issued, no ordering happen, and even
>> worse, if the drive queue is not full yet (e.g. there are free tags), then the
>> newly inserted request will be dispatched almost immediately, preventing
>> reordering with subsequent incoming write requests to happen.
>
> This would be good to add to the changelog.
Sure. No problem. We can add that explanation.
>> The other problem is that the mq-deadline scheduler does not track zone WP
>> position. Write request issuing is done regardless of the current WP value,
>> solely based on LBA ordering. This means that mq-deadline will not prevent
>> out-of-order, or rather, unaligned write requests.
>
> This seems to be the key point.
Yes it is. We can also add this to the commit message explanation.
>> These will not be detected
>> and dispatched whenever possible. The reasons for this are that:
>> 1) the disk user (the FS) has to manage zone WP positions anyway. So duplicating
>> that management at the block IO scheduler level is inefficient.
>> 2) Adding zone WP management at the block IO scheduler level would also need a
>> write error processing path to resync the WP value in case of failed writes. But
>> the user/FS also needs that anyway. Again duplicated functionalities.
>> 3) The block layer will need a timeout to force issue or cancel pending
>> unaligned write requests. This is necessary in case the drive user stops issuing
>> writes (for whatever reasons) or the scheduler is being switched. This would
>> unnecessarily cause write I/O errors or cause deadlocks if the request queue
>> quiesce mode is entered at the wrong time (and I do not see a good way to deal
>> with that).
>>
>> blk-mq is already complicated enough. Adding this to the block IO scheduler will
>> unnecessarily complicate things further for no real benefits. I would like to
>> point out the dm-zoned device mapper and f2fs which are both already dealing
>> with write ordering and write error processing directly. Both are fairly
>> straightforward but completely different and each optimized for their own structure.
>
> So the question is where on which layer the decision logic is. The
> filesystem(s) or dm-zoned have enough information about the zones and
> the writes can be pre-sorted. This is what the patch proposes.
Yes, exactly.
> From your explanation I get that the io scheduler can throw the wrench
> in the sequential ordering, for various reasons depending on state of
> internal structures od device queues. This is my simplified
> interpretation as I don't understand all the magic below filesystem
> layer.
Not exactly "throw the wrench". mq-deadline will guarantee per zone write order
to be exactly the order in which requests were inserted, that is, issued by the
FS. But mq-dealine will not "wait" if the write order is not purely sequential,
that is, there are holes/jumps in the LBA sequence for the zone. Order only is
guaranteed. The alignment to WP/contiguous sequential write issuing is the
responsibility of the issuer (FS or DM or application in the case of raw accesses).
> I assume there are some guarantees about the ordering, eg. within one
> plug, that apply to all schedulers (maybe not the noop one). Something
> like that should be the least common functionality that the filesystem
> layer can rely on.
The insertion side of the scheduler (upper level, from FS to scheduler), which
includes the per-CPU software queues and plug control, will not reorder requests.
However, the dispatch side (lower level, from scheduler to HBA driver) can cause
reordering. This is what mq-deadline prevents using a per-zone write lock to
avoid reordering of write requests within a zone, by allowing only a single write
request per zone to be dispatched to the device at any time. Overall order is
not guaranteed, nor is read request order. But per-zone write requests will not
be reordered.
But again, this is only ordering; it has nothing to do with trying to achieve a purely
sequential write stream per zone. It is the responsibility of the issuer to
deliver write requests per zone without any gap, all requests sequential in LBA
within each zone. Overall, the stream of requests does not have to be sequential,
e.g. if multiple zones are being written at the same time. But per zone, write
requests must be sequential.
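Conceptually, the dispatch-side zone write lock amounts to something like the
following (a simplified sketch, not the actual mq-deadline code):

  /* Dispatch a write only if no other write to its zone is in flight;
   * the bit is cleared again when the request completes. */
  static bool can_dispatch_write(struct request *rq, unsigned long *zone_wlock)
  {
          return !test_and_set_bit(blk_rq_zone_no(rq), zone_wlock);
  }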
>> Naohiro changes to btrfs IO scheduler have the same intent, that is, efficiently
>> integrate and handle write ordering "a la btrfs". Would creating a different
>> "hmzoned" btrfs IO scheduler help address your concerns ?
>
> IMHO this sounds both the same, all we care about is the sequential
> ordering, which in some sense is "scheduling", but I would not call it
> that way due to the simplicity.
OK. And yes, it is only ordering of writes per zone. For all other requests,
e.g. reads, order does not matter. And the overall interleaving of write
requests to different zones can also be anything. No constraints there.
> As implemented, it's a list of bios, but I'd suggest using rb-tree or
> xarray, the insertion is fast and submission is start to end traversal.
> I'm not sure that the loop in __btrfs_map_bio_zoned after label
> send_bios: has reasonable complexity, looks like an O(N^2).
OK. We can change that. rbtree is simple enough to use. We can change the list
to that.
Thank you for your comments.
Best regards.
--
Damien Le Moal
Western Digital Research

*Re: [PATCH 11/19] btrfs: introduce submit buffer
From: Josef Bacik @ 2019-06-18 13:33 UTC
To: Damien Le Moal
Cc: Josef Bacik, Naohiro Aota, linux-btrfs, David Sterba,
Chris Mason, Qu Wenruo, Nikolay Borisov, linux-kernel,
Hannes Reinecke, linux-fsdevel, Matias Bjørling,
Johannes Thumshirn, Bart Van Assche
On Mon, Jun 17, 2019 at 03:16:05AM +0000, Damien Le Moal wrote:
> Josef,
>
> On 2019/06/13 23:15, Josef Bacik wrote:
> > On Fri, Jun 07, 2019 at 10:10:17PM +0900, Naohiro Aota wrote:
> >> Sequential allocation is not enough to maintain sequential delivery of
> >> write IOs to the device. Various features (async compress, async checksum,
> >> ...) of btrfs affect ordering of the IOs. This patch introduces submit
> >> buffer to sort WRITE bios belonging to a block group and sort them out
> >> sequentially in increasing block address to achieve sequential write
> >> sequences with __btrfs_map_bio().
> >>
> >> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
> >
> > I hate everything about this. Can't we just use the plugging infrastructure for
> > this and then make sure it re-orders the bios before submitting them? Also
> > what's to prevent the block layer scheduler from re-arranging these io's?
> > Thanks,
>
> The block I/O scheduler reorders requests in LBA order, but that happens for a
> newly inserted request against pending requests. If there are no pending
> requests because all requests were already issued, no ordering happen, and even
> worse, if the drive queue is not full yet (e.g. there are free tags), then the
> newly inserted request will be dispatched almost immediately, preventing
> reordering with subsequent incoming write requests to happen.
>
This sounds like we're depending on specific behavior from the io scheduler,
which means we're going to have a sad day at some point in the future.
> The other problem is that the mq-deadline scheduler does not track zone WP
> position. Write request issuing is done regardless of the current WP value,
> solely based on LBA ordering. This means that mq-deadline will not prevent
> out-of-order, or rather, unaligned write requests. These will not be detected
> and dispatched whenever possible. The reasons for this are that:
> 1) the disk user (the FS) has to manage zone WP positions anyway. So duplicating
> that management at the block IO scheduler level is inefficient.
I'm not saying it has to manage the WP, and in fact I'm not saying the
scheduler has to do anything at all. We just need a more generic way to make
sure that bios submitted in order are kept in order. So perhaps a hmzoned
scheduler that does just that, and is pinned for these devices.
> 2) Adding zone WP management at the block IO scheduler level would also need a
> write error processing path to resync the WP value in case of failed writes. But
> the user/FS also needs that anyway. Again duplicated functionalities.
Again, no not really. My point is I want as little block layer knowledge in
btrfs as possible. I accept we should probably keep track of the WP, it just
makes it easier on everybody if we allocate sequentially. I'll even allow that
we need to handle the write errors and adjust our WP stuff internally when
things go wrong.
What I'm having a hard time swallowing is having an io scheduler in btrfs proper.
We just ripped out the old one we had because it broke cgroups. It just adds
extra complexity to an already complex mess.
> 3) The block layer will need a timeout to force issue or cancel pending
> unaligned write requests. This is necessary in case the drive user stops issuing
> writes (for whatever reasons) or the scheduler is being switched. This would
> unnecessarily cause write I/O errors or cause deadlocks if the request queue
> quiesce mode is entered at the wrong time (and I do not see a good way to deal
> with that).
Again we could just pin the hmzoned scheduler to those devices so you can't
switch them. Or make a hmzoned blk plug and pin no scheduler to these devices.
>
> blk-mq is already complicated enough. Adding this to the block IO scheduler will
> unnecessarily complicate things further for no real benefits. I would like to
> point out the dm-zoned device mapper and f2fs which are both already dealing
> with write ordering and write error processing directly. Both are fairly
> straightforward but completely different and each optimized for their own structure.
>
So we're duplicating this effort in 2 places already and adding a 3rd place
seems like a solid plan? For device-mapper it makes sense: it sits squarely
in the block layer, so moving around bios/requests is its very reason for
existing. I'm not sold on the file system needing to take up this behavior.
This needs to be handled in a more generic way so that all file systems can
share the same mechanism.
I'd even go so far as to say that you could just require using a dm device with
these hmzoned block devices and then handle all of that logic in there if you
didn't feel like doing it generically. We're already talking about esoteric
devices that require special care to use, adding the extra requirement of
needing to go through device-mapper to use it wouldn't be that big of a stretch.
Thanks,
Josef

*Re: [PATCH 11/19] btrfs: introduce submit buffer
From: Damien Le Moal @ 2019-06-19 10:32 UTC
To: Josef Bacik
Cc: Naohiro Aota, linux-btrfs, David Sterba, Chris Mason, Qu Wenruo,
Nikolay Borisov, linux-kernel, Hannes Reinecke, linux-fsdevel,
Matias Bjørling, Johannes Thumshirn, Bart Van Assche
On 2019/06/18 22:34, Josef Bacik wrote:
> On Mon, Jun 17, 2019 at 03:16:05AM +0000, Damien Le Moal wrote:
>> The block I/O scheduler reorders requests in LBA order, but that happens for a
>> newly inserted request against pending requests. If there are no pending
>> requests because all requests were already issued, no ordering happen, and even
>> worse, if the drive queue is not full yet (e.g. there are free tags), then the
>> newly inserted request will be dispatched almost immediately, preventing
>> reordering with subsequent incoming write requests to happen.
>>
>
> This sounds like we're depending on specific behavior from the ioscheduler,
> which means we're going to have a sad day at some point in the future.
In a sense, yes, we are. But my team and I always make sure that such a sad day does
not come. We are always making sure that HM-zoned drives can be used and work as
expected (all RCs and stable versions are tested weekly). For now, getting
guarantees on write request order mandates the use of the mq-deadline scheduler,
as it is currently the only one providing these guarantees. I just sent a patch
to ensure that this scheduler is always available with CONFIG_BLK_DEV_ZONED
enabled (see commit b9aef63aca77 "block: force select mq-deadline for zoned
block devices"), and automatically configuring it for HM zoned devices is simply
a matter of adding a udev rule to the system (mq-deadline is the default
scheduler for spinning rust anyway).
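For reference, such a udev rule is typically a one-liner of this shape (it keys
off the queue/zoned sysfs attribute; adjust the kernel name match as needed):

  ACTION=="add|change", KERNEL=="sd*", ATTR{queue/zoned}=="host-managed", \
          ATTR{queue/scheduler}="mq-deadline"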
>> The other problem is that the mq-deadline scheduler does not track zone WP
>> position. Write request issuing is done regardless of the current WP value,
>> solely based on LBA ordering. This means that mq-deadline will not prevent
>> out-of-order, or rather, unaligned write requests. These will not be detected
>> and dispatched whenever possible. The reasons for this are that:
>> 1) the disk user (the FS) has to manage zone WP positions anyway. So duplicating
>> that management at the block IO scheduler level is inefficient.
>
> I'm not saying it has to manage the WP pointer, and in fact I'm not saying the
> scheduler has to do anything at all. We just need a more generic way to make
> sure that bio's submitted in order are kept in order. So perhaps a hmzoned
> scheduler that does just that, and is pinned for these devices.
This is exactly what mq-deadline does for HM devices: it guarantees that the write
bio submission order is kept as is for request dispatching to the disk. The only
missing part is "pinned for these devices". This is not possible now; a user can
still change the scheduler to, say, BFQ. But in that case, unaligned write errors
will show up very quickly, so this is easy to debug. Not ideal, I agree, but that
can be fixed independently of BtrFS support for hmzoned disks.
>> 2) Adding zone WP management at the block IO scheduler level would also need a
>> write error processing path to resync the WP value in case of failed writes. But
>> the user/FS also needs that anyway. Again duplicated functionalities.
>
> Again, no not really. My point is I want as little block layer knowledge in
> btrfs as possible. I accept we should probably keep track of the WP, it just
> makes it easier on everybody if we allocate sequentially. I'll even allow that
> we need to handle the write errors and adjust our WP stuff internally when
> things go wrong.
>
> What I'm having a hard time swallowing is having a io scheduler in btrfs proper.
> We just ripped out the old one we had because it broke cgroups. It just adds
> extra complexity to an already complex mess.
I understand your point. It makes perfect sense. The "IO scheduler" added for the
hmzoned case is only the method proposed to implement sequential write issuing
guarantees. The sequential allocation was relatively easy to achieve, but what
is really needed is an atomic "sequentially allocate blocks + issue write BIOs
for these blocks" so that the block IO scheduler sees sequential write streams per
zone. If only the sequential allocation is achieved, write bios serving these
blocks may be reordered at the FS level and result in write failures, since the
block layer scheduler only guarantees preserving the order, without any
reordering of unaligned writes.
>> 3) The block layer will need a timeout to force issue or cancel pending
>> unaligned write requests. This is necessary in case the drive user stops issuing
>> writes (for whatever reasons) or the scheduler is being switched. This would
>> unnecessarily cause write I/O errors or cause deadlocks if the request queue
>> quiesce mode is entered at the wrong time (and I do not see a good way to deal
>> with that).
>
> Again we could just pin the hmzoned scheduler to those devices so you can't
> switch them. Or make a hmzoned blk plug and pin no scheduler to these devices.
That is not enough. Pinning the scheduler or using plugs cannot guarantee
that write requests issued out of order will always be correctly reordered. Even
worse, we cannot implement this, for multiple reasons, as I stated before.
One example that may illustrate this more easily is this: imagine a user doing
buffered I/Os to an HM disk (e.g. dd if=/dev/zero of=/dev/sdX). The first part
of this execution, that is, allocating a free page, copying the user data and adding
the page to the page cache as dirty, is in fact equivalent to an FS sequential block
allocation (the dirty pages are allocated in offset order and added to the page
cache in that same order).
Most of the time, this will work just fine because the page cache dirty page
writeback code is mostly sequential. Dirty pages for an inode are found in
offset order, packed into write bios and issued sequentially. But start putting
memory pressure on the system, or executing "sync" or other applications in
parallel, and you will start seeing unaligned write errors because the page
cache atomicity is per page so different contexts may end up grabbing dirty
pages in order (as expected) but issuing interleaved write bios out of order.
And this type of problem *cannot* be handled in the block layer (plug or
scheduler) because stopping execution of a bio while expecting that another bio will
come is very dangerous, as there are no guarantees that such a bio will ever be
issued. In the case of the page cache flush, this is actually a real eventuality,
as the memory allocation needed for issuing a bio may depend on the completion of
already issued bios, and if we cannot dispatch those, then we can deadlock.
This is an extreme example. It is unlikely, but still a real possibility.
Similarly to your position, that is, the FS should not know anything about the
block layer, the block layer position is that it cannot rely on a specific
behavior from the upper layers. Essentially, all bios are independent and
treated as such.
For HM devices, we needed sequential write guarantees, but could not break the
independence of write requests. So what we did is simply guarantee that the
dispatch order is preserved from the issuing order, nothing else. There is no
"buffering" possible and no checks regarding the sequentiality of writes.
As a result, the sequential write constraint of the disks is directly exposed to
the disk user (FS or DM).
>> blk-mq is already complicated enough. Adding this to the block IO scheduler will
>> unnecessarily complicate things further for no real benefits. I would like to
>> point out the dm-zoned device mapper and f2fs which are both already dealing
>> with write ordering and write error processing directly. Both are fairly
>> straightforward but completely different and each optimized for their own structure.
>>
>
> So we're duplicating this effort in 2 places already and adding a 3rd place
> seems like a solid plan? Device-mapper it makes sense, we're sitting squarely
> in the block layer so moving around bio's/requests is its very reason for
> existing. I'm not sold on the file system needing to take up this behavior.
> This needs to be handled in a more generic way so that all file systems can
> share the same mechanism.
I understand your point. But I am afraid it is not easily possible. The reason
is that for an FS, to achieve sequential write streams in zones, one needs an
atomic (or serialized) execution of "block allocation + write bio issuing".
Both combined achieve a sequential write stream that mq-deadline will preserve,
and everything will work as intended. This is obviously not easily possible in a
generic manner for all FSes. In f2fs, this was rather easy to do without
changing a lot of code, by simply using a mutex to have the 2 operations
atomically executed without any noticeable performance impact. A similar method
in BtrFS is not possible because of async checksum and async compression, which
can result in btrfs_map_bio() executing in an order that is different from the
extent allocation order.
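For illustration, the f2fs-style serialization described above boils down to
something like this (a sketch with hypothetical names; as noted, btrfs cannot
use this directly because of the async checksum/compression paths):

  static int alloc_and_submit(struct zone_info *zi, struct bio *bio, u64 len)
  {
          int ret;

          /* Serialize "allocate blocks at the zone WP + issue the bio"
           * so the block layer sees a sequential write stream per zone. */
          mutex_lock(&zi->io_lock);
          ret = allocate_sequential_blocks(zi, bio, len); /* hypothetical */
          if (!ret)
                  submit_bio(bio);
          mutex_unlock(&zi->io_lock);
          return ret;
  }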
>
> I'd even go so far as to say that you could just require using a dm device with
> these hmzoned block devices and then handle all of that logic in there if you
> didn't feel like doing it generically. We're already talking about esoteric
> devices that require special care to use, adding the extra requirement of
> needing to go through device-mapper to use it wouldn't be that big of a stretch.
HM drives are not so "esoteric" anymore. Entire data centers are starting to
run on them. And getting BtrFS to work natively on HM drives would be a huge
step toward facilitating their use, and removing this "esoteric" label :)
Back to your point, using a dm to do the reordering is possible, but it requires
temporary persistent backup of the out-of-order BIOs for the reasons pointed
out above (the dependency of memory allocation failure/success on bio completion).
This is basically what dm-zoned does, storing out-of-order writes in
conventional zones. Such a generic DM is enough to run any
file system (ext4 or XFS run perfectly fine on dm-zoned), but it comes at the cost
of needing garbage collection, with a huge impact on performance. The simple
addition of Naohiro's write bio ordering feature in BtrFS avoids all this and
preserves performance. I really understand your desire to reduce complexity. But
in the end, this is only a "sorted list" that is well controlled within btrfs
itself and avoids dependency on the behavior of other components besides the
block IO scheduler.
We could envision making such a feature generic, implementing it as a block layer
object. But it would still need to be used in btrfs. Since f2fs and dm-zoned do
not require it, btrfs would be the sole user though, so for now at least, this
generic implementation has, I think, little value. We can work on trying to
isolate this bio reordering code more, so that it is easier to remove it and use a
future generic implementation. Would that help in addressing your concerns?
Thank you for your comments.
Best regards.
--
Damien Le Moal
Western Digital Research

*Re: [PATCH v2 00/19] btrfs zoned block device support
From: Damien Le Moal @ 2019-06-17 2:44 UTC
To: dsterba, Naohiro Aota
Cc: linux-btrfs, David Sterba, Chris Mason, Josef Bacik, Qu Wenruo,
Nikolay Borisov, linux-kernel, Hannes Reinecke, linux-fsdevel,
Matias Bjørling, Johannes Thumshirn, Bart Van Assche
David,
On 2019/06/13 22:45, David Sterba wrote:
> On Thu, Jun 13, 2019 at 04:59:23AM +0000, Naohiro Aota wrote:
>> On 2019/06/13 2:50, David Sterba wrote:
>>> On Fri, Jun 07, 2019 at 10:10:06PM +0900, Naohiro Aota wrote:
>>>> btrfs zoned block device support
>>>>
>>>> This series adds zoned block device support to btrfs.
>>>
>>> The overall design sounds ok.
>>>
>>> I skimmed through the patches and the biggest task I see is how to make
>>> the hmzoned adjustments and branches less visible, ie. there are too
>>> many if (hmzoned) { do something } standing out. But that's merely a
>>> matter of wrappers and maybe an abstraction here and there.
>>
>> Sure. I'll add some more abstractions in the next version.
>
> Ok, I'll reply to the patches with specific things.
>
>>> How can I test the zoned devices backed by files (or regular disks)? I
>>> searched for some concrete example eg. for qemu or dm-zoned, but closest
>>> match was a text description in libzbc README that it's possible to
>>> implement. All other howtos expect a real zoned device.
>>
>> You can use tcmu-runer [1] to create an emulated zoned device backed by
>> a regular file. Here is a setup how-to:
>> http://zonedstorage.io/projects/tcmu-runner/#compilation-and-installation
>
> That looks great, thanks. I wonder why there's no way to find that, all
> I got were dead links to linux-iscsi.org or tutorials of targetcli that
> were years old and not working.
The site went online 4 days ago :) We will advertise it whenever we can. This is
intended to document all things "zoned block device" including Btrfs support,
when we get it finished :)
>
> Feeding the textual commands to targetcli is not exactly what I'd
> expect for scripting, but at least it seems to work.
Yes, this is not exactly obvious, but that is how most automation with linux
iscsi is done.
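As a concrete example, the linked how-to essentially boils down to a handful of
targetcli calls shaped roughly like the following (quoted from memory, so the
exact cfgstring options should be checked against that page; paths and sizes are
placeholders):

  # emulated 20G host-managed ZBC device, 256MB zones, backed by a file
  targetcli /backstores/user:zbc create name=zbc0 size=20G \
          cfgstring=model-HM/zsize-256/conv-10@/var/local/zbc0.raw
  # export it locally through the loopback fabric
  targetcli /loopback create
  targetcli /loopback/<naa.wwn>/luns create /backstores/user:zbc/zbc0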
>
> I tried to pass an emulated ZBC device on the host to a KVM guest (as a scsi
> device) but lsscsi does not recognize it as a zoned device (just a
> QEMU harddisk). So it seems the emulation must be done inside the VM.
>
What driver did you use for the drive? virtio block? I have not touched that
driver nor the qemu side, so zoned block dev support is likely missing. I will add
it. That would be especially useful for testing with a real drive. In the case
of tcmu-runner, the initiator can be started in the guest directly and the
target emulation done either in the guest if loopback is used, or on the host
using an iscsi connection. The former is what we use all the time and so is well
tested. I have to admit that testing with iscsi is lacking... Will add that to
the todo list.
Best regards.
--
Damien Le Moal
Western Digital Research