[Possible REGRESSION, 4.16-rc4] Error updating SMART data during runtime and could not connect to lvmetad at some boot attempts

From: Martin Steigerwald
Date: Sun Mar 11 2018 - 04:20:36 EST


Hello.

Since 4.16-rc4 (upgraded from 4.15.2, which worked) I have an issue
with SMART checks occasionally failing like this:

smartd[28017]: Device: /dev/sdb [SAT], is in SLEEP mode, suspending checks
udisksd[24408]: Error performing housekeeping for drive /org/freedesktop/UDisks2/drives/INTEL_SSDSA2CW300G3_[…]: Error updating SMART data: Error sending ATA command CHECK POWER MODE: Unexpected sense data returned:
0000: 0e 09 0c 00 00 00 ff 00 00 00 00 00 00 00 50 00 ..............P.
0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
(g-io-error-quark, 0)
merkaba udisksd[24408]: Error performing housekeeping for drive /org/freedesktop/UDisks2/drives/Crucial_CT480M500SSD3_[…]: Error updating SMART data: Error sending ATA command CHECK POWER MODE: Unexpected sense data returned:
0000: 01 00 1d 00 00 00 0e 09 0c 00 00 00 ff 00 00 00 ................
0010: 00 00 00 00 50 00 00 00 00 00 00 00 00 00 00 00 ....P...........
(g-io-error-quark, 0)

(Intel SSD is connected via SATA, Crucial via mSATA in a ThinkPad T520)

However, when I then check manually with smartctl -a / -x / -H, the device
reports its SMART data just fine.
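
For reference, the manual checks were along these lines (the device name is
just an example; smartctl's -n standby option can be used to skip the check
while the drive is asleep instead of waking it up):

  smartctl -H /dev/sdb             # overall health self-assessment
  smartctl -a /dev/sdb             # all SMART information
  smartctl -x /dev/sdb             # extended SMART and device information
  smartctl -n standby -H /dev/sdb  # skip the check if the drive is in standby/sleep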

As smartd correctly detects that the device is in sleep mode, this may be a
userspace issue in udisksd.

Also, at some boot attempts the boot hangs with a message like "could not
connect to lvmetad, scanning manually for devices". I use BTRFS RAID 1 on
two LVs (one on each of the SSDs), a configuration that requires a manual
adaptation of the initramfs in order to boot (basically vgchange -ay before
btrfs device scan), as sketched below.
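
The adaptation is basically a local-top script in the initramfs roughly like
this (a sketch following Debian's initramfs-tools hook conventions; the
actual script name and PREREQ line may differ):

  #!/bin/sh
  # initramfs-tools local-top sketch: activate LVM and let btrfs see
  # both RAID 1 members before the root filesystem is mounted.
  PREREQ="lvm2"
  prereqs() { echo "$PREREQ"; }
  case "$1" in
      prereqs) prereqs; exit 0 ;;
  esac
  vgchange -ay        # activate all volume groups
  btrfs device scan   # register both LVs with the btrfs module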

I wonder whether this is related to the new SATA LPM policy work, but as I
had issues with

3 => Medium power with Device Initiated PM enabled

(the machine did not boot, which could also have been caused by me
accidentally removing all TCP/IP network support in the kernel together with
that setting), I set it back to

CONFIG_SATA_MOBILE_LPM_POLICY=0

(0 = keep firmware settings).
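
For what it's worth, the per-host policy that actually ends up in effect can
be read from (and, as far as I understand, also written to) sysfs at runtime:

  grep . /sys/class/scsi_host/host*/link_power_management_policy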

The only other significant change I am aware of is that I switched from the
SLAB to the SLUB allocator, as Debian recently did with their kernels, I think.
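
To double-check which allocator a given kernel was built with (assuming the
Debian-style config file in /boot is present):

  grep -E 'CONFIG_SL[AU]B=' /boot/config-$(uname -r)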

I attach the complete configuration as xz.

Please understand that I am not up for doing a bisect, as it can take quite a
while for the issue to appear and I will be giving a Linux training next
week. If you have any other suggestions, please tell.

I found a thread on LKML about another Crucial SSD not working with more
aggressive LPM settings ("[PATCH] libata: Apply NOLPM quirk to Crucial MX100
512GB SSDs"), yet my current 4.16-rc4 kernel runs with LPM policy 0, which
should be safe.

Also, regarding "3 => Medium power with Device Initiated PM enabled", I am
not yet sure which of the two SSDs causes the trouble; a possible way to
narrow that down is sketched below.
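
If it helps, I could try setting the policy per host via sysfs and watch
which disk produces errors, roughly like this (the host numbers are just
examples and would first have to be matched to the disks):

  # example: medium policy only on the host with the Intel SSD
  echo med_power_with_dipm > /sys/class/scsi_host/host0/link_power_management_policy
  echo max_performance     > /sys/class/scsi_host/host1/link_power_management_policy
  # then watch for SATA link errors
  dmesg --follow | grep -i -E 'ata[0-9]|exception|link'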

Also posted as bug report:

Bug 199077 - [Possible REGRESSION, 4.16-rc4] Error updating SMART data during runtime and could not connect to lvmetad at some boot attempts
https://bugzilla.kernel.org/show_bug.cgi?id=199077

Thanks,
--
Martin

Attachment: config-4.16.0-rc4-tp520-btrfstrim+.xz
Description: application/xz