use-after-free. [libata/block]

From: Dave Jones
Date: Thu Jul 27 2017 - 14:31:23 EST


Found this in the logs this morning after an overnight fuzz run.

BUG: KASAN: use-after-free in __lock_acquire+0x1aa/0x1970
Read of size 8 at addr ffff880406805e30 by task trinity-c8/25954

CPU: 1 PID: 25954 Comm: trinity-c8 Not tainted 4.13.0-rc2-think+ #1
Call Trace:
<IRQ>
dump_stack+0x68/0xa1
print_address_description+0xd9/0x270
kasan_report+0x257/0x370
? __lock_acquire+0x1aa/0x1970
__asan_load8+0x54/0x90
__lock_acquire+0x1aa/0x1970
? save_stack+0xb1/0xd0
? save_stack_trace+0x1b/0x20
? save_stack+0x46/0xd0
? try_to_wake_up+0x9b/0xa20
? end_swap_bio_read+0xbe/0x1a0
? debug_check_no_locks_freed+0x1b0/0x1b0
? scsi_softirq_done+0x1a3/0x1d0
? __blk_mq_complete_request+0x14a/0x2a0
? blk_mq_complete_request+0x33/0x40
? scsi_mq_done+0x4e/0x190
? ata_scsi_qc_complete+0x15b/0x700
? __ata_qc_complete+0x16d/0x2e0
? ata_qc_complete+0x1a4/0x740
? ata_qc_complete_multiple+0xeb/0x140
? ahci_handle_port_interrupt+0x19e/0xa10
? ahci_handle_port_intr+0xd9/0x130
? ahci_single_level_irq_intr+0x62/0x90
? __handle_irq_event_percpu+0x6e/0x450
? handle_irq_event_percpu+0x70/0xf0
? handle_irq_event+0x5a/0x90
? handle_edge_irq+0xd9/0x2f0
? handle_irq+0xb4/0x190
? do_IRQ+0x67/0x140
? common_interrupt+0x97/0x97
? do_syscall_64+0x45/0x260
? entry_SYSCALL64_slow_path+0x25/0x25
lock_acquire+0xfc/0x220
? lock_acquire+0xfc/0x220
? try_to_wake_up+0x9b/0xa20
_raw_spin_lock_irqsave+0x40/0x80
? try_to_wake_up+0x9b/0xa20
try_to_wake_up+0x9b/0xa20
? rcu_read_lock_sched_held+0x8f/0xa0
? kmem_cache_free+0x2d3/0x300
? migrate_swap_stop+0x3f0/0x3f0
? mempool_free+0x5f/0xd0
wake_up_process+0x15/0x20
end_swap_bio_read+0xc6/0x1a0
bio_endio+0x12f/0x300
blk_update_request+0x12e/0x5c0
scsi_end_request+0x63/0x2f0
scsi_io_completion+0x3f3/0xa50
? scsi_end_request+0x2f0/0x2f0
? lock_downgrade+0x2c0/0x2c0
? lock_acquire+0xfc/0x220
? blk_stat_add+0x62/0x340
? scsi_handle_queue_ramp_up+0x42/0x1e0
scsi_finish_command+0x1b1/0x220
scsi_softirq_done+0x1a3/0x1d0
__blk_mq_complete_request+0x14a/0x2a0
? scsi_prep_state_check.isra.26+0xa0/0xa0
blk_mq_complete_request+0x33/0x40
scsi_mq_done+0x4e/0x190
? scsi_prep_state_check.isra.26+0xa0/0xa0
ata_scsi_qc_complete+0x15b/0x700
? lock_downgrade+0x2c0/0x2c0
? msleep_interruptible+0xb0/0xb0
? ata_scsi_activity_show+0xb0/0xb0
? trace_hardirqs_off_caller+0x70/0x110
? trace_hardirqs_off+0xd/0x10
? _raw_spin_unlock_irqrestore+0x4b/0x50
? intel_unmap+0x20b/0x300
? intel_unmap_sg+0x9e/0xc0
__ata_qc_complete+0x16d/0x2e0
? intel_unmap+0x300/0x300
ata_qc_complete+0x1a4/0x740
ata_qc_complete_multiple+0xeb/0x140
ahci_handle_port_interrupt+0x19e/0xa10
? ahci_single_level_irq_intr+0x57/0x90
ahci_handle_port_intr+0xd9/0x130
ahci_single_level_irq_intr+0x62/0x90
? ahci_handle_port_intr+0x130/0x130
__handle_irq_event_percpu+0x6e/0x450
handle_irq_event_percpu+0x70/0xf0
? __handle_irq_event_percpu+0x450/0x450
? lock_contended+0x810/0x810
? handle_edge_irq+0x30/0x2f0
? do_raw_spin_unlock+0x97/0x130
handle_irq_event+0x5a/0x90
handle_edge_irq+0xd9/0x2f0
handle_irq+0xb4/0x190
do_IRQ+0x67/0x140
common_interrupt+0x97/0x97
RIP: 0010:do_syscall_64+0x45/0x260
RSP: 0018:ffff8803b2bd7f08 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff1e
RAX: 0000000000000000 RBX: ffff8803b2bd7f58 RCX: ffffffff81146032
RDX: 0000000000000007 RSI: dffffc0000000000 RDI: ffff88044e6c9cc0
RBP: ffff8803b2bd7f48 R08: ffff88046b21d1c0 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000023
R13: ffff88044e6c9cc0 R14: ffff8803b2bd7fd0 R15: 00000000cccccccd
</IRQ>
? trace_hardirqs_on_caller+0x182/0x260
? do_syscall_64+0x41/0x260
entry_SYSCALL64_slow_path+0x25/0x25
RIP: 0033:0x7f80c5932230
RSP: 002b:00007fff521ac2e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000023
RAX: ffffffffffffffda RBX: 00007f80c5ff6058 RCX: 00007f80c5932230
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007fff521ac2f0
RBP: 0000000000006562 R08: 00007f80c5c130a4 R09: 00007f80c5c13120
R10: 0000000000000001 R11: 0000000000000246 R12: 000000000000005a
R13: 00007f80c5ff6058 R14: 0000004df61cd3a0 R15: 00000000cccccccd

Allocated by task 14480:
save_stack_trace+0x1b/0x20
save_stack+0x46/0xd0
kasan_kmalloc+0xad/0xe0
kasan_slab_alloc+0x12/0x20
kmem_cache_alloc+0xe0/0x2f0
copy_process.part.44+0xbe0/0x2f90
_do_fork+0x173/0x8a0
SyS_clone+0x19/0x20
do_syscall_64+0xea/0x260
return_from_SYSCALL_64+0x0/0x7a

Freed by task 0:
save_stack_trace+0x1b/0x20
save_stack+0x46/0xd0
kasan_slab_free+0x72/0xc0
kmem_cache_free+0xa8/0x300
free_task+0x69/0x70
__put_task_struct+0xdc/0x220
delayed_put_task_struct+0x59/0x1a0
rcu_process_callbacks+0x49a/0x1580
__do_softirq+0x109/0x5bc

The buggy address belongs to the object at ffff8804068055c0
which belongs to the cache task_struct of size 6848
The buggy address is located 2160 bytes inside of
6848-byte region [ffff8804068055c0, ffff880406807080)
The buggy address belongs to the page:
page:ffffea00101a0000 count:1 mapcount:0 mapping: (null) index:0x0 compound_mapcount: 0
flags: 0x8000000000008100(slab|head)
raw: 8000000000008100 0000000000000000 0000000000000000 0000000100040004
raw: ffffea0006f6f420 ffffea0005311820 ffff880467e68e40 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
ffff880406805d00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff880406805d80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff880406805e00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff880406805e80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff880406805f00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
Disabling lock debugging due to kernel taint
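
If I'm reading this right, end_swap_bio_read() handed wake_up_process() a
task whose task_struct had already been reaped (freed from the RCU callback
via delayed_put_task_struct -> __put_task_struct -> free_task), so
try_to_wake_up() is taking what is presumably p->pi_lock inside freed
memory, which is where lockdep's __lock_acquire() trips KASAN. Purely as an
illustration of the pattern that looks to be racing here -- the helpers
below are made up for the sketch, this is not the actual mm/page_io.c code --
the completion side needs the submitter to pin the task for as long as the
bio can still complete:

#include <linux/bio.h>
#include <linux/sched.h>
#include <linux/sched/task.h>	/* get_task_struct()/put_task_struct() */

/* Racy pattern: bi_private holds a bare task pointer and nothing pins it.
 * If the waiter exits before the completion fires, wake_up_process()
 * touches a freed task_struct -- the use-after-free above. */
static void racy_end_io(struct bio *bio)
{
	struct task_struct *waiter = bio->bi_private;

	wake_up_process(waiter);
	bio_put(bio);
}

/* One way to close the window: take a reference at submit time and
 * drop it in the completion handler. */
static void pinned_end_io(struct bio *bio)
{
	struct task_struct *waiter = bio->bi_private;

	wake_up_process(waiter);
	put_task_struct(waiter);	/* drops the ref taken at submit time */
	bio_put(bio);
}

static void submit_and_wait(struct bio *bio)
{
	get_task_struct(current);	/* pin ourselves until completion */
	bio->bi_private = current;
	bio->bi_end_io = pinned_end_io;
	submit_bio(bio);
	/* set_current_state()/io_schedule() wait loop elided */
}

Whether the real swap read path wants a task reference, or to synchronize
the wakeup some other way, I'll leave to people who know that code; the
sketch is just to show where the lifetime guarantee has to come from.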