Re: bio linked list corruption.

From: Linus Torvalds
Date: Wed Oct 26 2016 - 18:24:11 EST


On Wed, Oct 26, 2016 at 2:52 PM, Chris Mason <clm@xxxxxx> wrote:
>
> This one is special because CONFIG_VMAP_STACK is not set. Btrfs triggers in < 10 minutes.
> I've done 30 minutes each with XFS and Ext4 without luck.

Ok, see the email I wrote that crossed yours - if it's really some
list corruption on ctx->rq_list due to some locking problem, I really
would expect CONFIG_VMAP_STACK to be entirely irrelevant, except
perhaps from a timing standpoint.
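
For context, the structure in question looks roughly like this in this
tree (block/blk-mq.h, trimmed): each software queue pairs rq_list with
its own lock, and every path that touches the list is supposed to hold
it - none of which has anything to do with where the stack lives:

struct blk_mq_ctx {
	struct {
		spinlock_t		lock;
		struct list_head	rq_list;
	} ____cacheline_aligned_in_smp;

	unsigned int		cpu;
	unsigned int		index_hw;
	/* ... */
} ____cacheline_aligned_in_smp;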

> WARNING: CPU: 6 PID: 4481 at lib/list_debug.c:33 __list_add+0xbe/0xd0
> list_add corruption. prev->next should be next (ffffe8ffffd80b08), but was ffff88012b65fb88. (prev=ffff880128c8d500).
> Modules linked in: crc32c_intel aesni_intel aes_x86_64 glue_helper lrw gf128mul ablk_helper i2c_piix4 cryptd i2c_core virtio_net serio_raw floppy button pcspkr sch_fq_codel autofs4 virtio_blk
> CPU: 6 PID: 4481 Comm: dbench Not tainted 4.9.0-rc2-15419-g811d54d #319
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.9.0-1.fc24 04/01/2014
> ffff880104eff868 ffffffff814fde0f ffffffff8151c46e ffff880104eff8c8
> ffff880104eff8c8 0000000000000000 ffff880104eff8b8 ffffffff810648cf
> ffff880128cab2c0 000000213fc57c68 ffff8801384e8928 ffff880128cab180
> Call Trace:
> [<ffffffff814fde0f>] dump_stack+0x53/0x74
> [<ffffffff8151c46e>] ? __list_add+0xbe/0xd0
> [<ffffffff810648cf>] __warn+0xff/0x120
> [<ffffffff810649a9>] warn_slowpath_fmt+0x49/0x50
> [<ffffffff8151c46e>] __list_add+0xbe/0xd0
> [<ffffffff814dec38>] blk_sq_make_request+0x388/0x580
> [<ffffffff814d5444>] generic_make_request+0x104/0x200

Well, it's very consistent, I have to say. So I really don't think
this is random corruption.
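
For reference, the warning Chris hit comes from the debug checks in
lib/list_debug.c, which in this tree look roughly like:

void __list_add(struct list_head *new,
		struct list_head *prev,
		struct list_head *next)
{
	WARN(next->prev != prev,
		"list_add corruption. next->prev should be prev (%p), but was %p. (next=%p).\n",
		prev, next->prev, next);
	WARN(prev->next != next,
		"list_add corruption. prev->next should be next (%p), but was %p. (prev=%p).\n",
		next, prev->next, prev);
	WARN(new == prev || new == next,
		"list_add double add: new=%p, prev=%p, next=%p.\n",
		new, prev, next);
	next->prev = new;
	new->next = next;
	new->prev = prev;
	prev->next = new;
}

So by the time it fires, 'prev' - the entry that was already last on
the list - points at some other request instead of back at the list
head. The corruption happened in an earlier list operation; the check
only notices it after the fact.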

Could you try the attached patch? It adds a couple of sanity tests:

- a number of tests to verify that 'rq->queuelist' isn't already on
some queue when it is being added to one (these lean on the
list_del_init() idiom sketched below)

- one test to verify that rq->mq_ctx is the same ctx that we have locked.
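
These rely on the usual idiom from include/linux/list.h (simplified
here, minus the READ_ONCE/WRITE_ONCE annotations): an entry removed
with list_del_init() is left pointing at itself, so list_empty() on
&rq->queuelist is true exactly when the request isn't linked into any
queue:

static inline void INIT_LIST_HEAD(struct list_head *list)
{
	list->next = list;
	list->prev = list;
}

static inline void list_del_init(struct list_head *entry)
{
	__list_del_entry(entry);
	INIT_LIST_HEAD(entry);
}

static inline int list_empty(const struct list_head *head)
{
	return head->next == head;
}

If one of the new WARN_ON_ONCE()s fires, we catch the double add right
where the request is queued, instead of waiting for __list_add() to
trip over the stale pointers later.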

I may be completely full of shit, and this patch may be pure garbage
or "obviously will never trigger", but humor me.

Linus
block/blk-mq.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index ddc2eed64771..4f575de7fdd0 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -521,6 +521,8 @@ void blk_mq_add_to_requeue_list(struct request *rq, bool at_head)
 	 */
 	BUG_ON(rq->cmd_flags & REQ_SOFTBARRIER);
 
+	WARN_ON_ONCE(!list_empty(&rq->queuelist));
+
 	spin_lock_irqsave(&q->requeue_lock, flags);
 	if (at_head) {
 		rq->cmd_flags |= REQ_SOFTBARRIER;
@@ -838,6 +840,7 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
 			queued++;
 			break;
 		case BLK_MQ_RQ_QUEUE_BUSY:
+			WARN_ON_ONCE(!list_empty(&rq->queuelist));
 			list_add(&rq->queuelist, &rq_list);
 			__blk_mq_requeue_request(rq);
 			break;
@@ -1034,6 +1037,8 @@ static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
 
 	trace_block_rq_insert(hctx->queue, rq);
 
+	WARN_ON_ONCE(!list_empty(&rq->queuelist));
+
 	if (at_head)
 		list_add(&rq->queuelist, &ctx->rq_list);
 	else
@@ -1137,6 +1142,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 			depth = 0;
 		}
 
+		WARN_ON_ONCE(!list_empty(&rq->queuelist));
 		depth++;
 		list_add_tail(&rq->queuelist, &ctx_list);
 	}
@@ -1172,6 +1178,7 @@ static inline bool blk_mq_merge_queue_io(struct blk_mq_hw_ctx *hctx,
 		blk_mq_bio_to_request(rq, bio);
 		spin_lock(&ctx->lock);
 insert_rq:
+		WARN_ON_ONCE(rq->mq_ctx != ctx);
 		__blk_mq_insert_request(hctx, rq, false);
 		spin_unlock(&ctx->lock);
 		return false;
@@ -1326,6 +1333,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 				old_rq = same_queue_rq;
 				list_del_init(&old_rq->queuelist);
 			}
+			WARN_ON_ONCE(!list_empty(&rq->queuelist));
 			list_add_tail(&rq->queuelist, &plug->mq_list);
 		} else /* is_sync */
 			old_rq = rq;
@@ -1412,6 +1420,7 @@ static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
 			trace_block_plug(q);
 		}
 
+		WARN_ON_ONCE(!list_empty(&rq->queuelist));
 		list_add_tail(&rq->queuelist, &plug->mq_list);
 		return cookie;
 	}