The relevant portions of the log file are below (two independent events;
there is nothing related to ata before the "exception" message):
[901292.247428] ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[901292.247492] ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
[901292.247494] res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[901292.247500] ata2.00: status: { DRDY }
[901292.247512] ata2: hard resetting link
[901294.090746] ata2: SRST failed (errno=-19)
[901294.101922] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[901294.101938] ata2.00: failed to IDENTIFY (I/O error, err_mask=0x40)
[901294.101943] ata2.00: revalidation failed (errno=-5)
[901299.100347] ata2: hard resetting link
[901299.974103] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[901300.105734] ata2.00: configured for UDMA/133
[901300.105776] ata2: EH complete
[901300.137059] end_request: I/O error, dev sdb, sector 1953519935
[901300.137069] md: super_written gets error=-5, uptodate=0
[901300.137077] raid1: Disk failure on sdb1, disabling device.
[901300.137079] raid1: Operation continuing on 1 devices.
[90307.328266] ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[90307.328275] ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
[90307.328277] res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[90307.328280] ata2.00: status: { DRDY }
[90307.328288] ata2: hard resetting link
[90313.218511] ata2: link is slow to respond, please be patient (ready=0)
[90317.377711] ata2: SRST failed (errno=-16)
[90317.377720] ata2: hard resetting link
[90318.251720] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[90318.338026] ata2.00: configured for UDMA/133
[90318.338062] ata2: EH complete
[90318.370625] end_request: I/O error, dev sdb, sector 1953519935
[90318.370632] md: super_written gets error=-5, uptodate=0
[90318.370636] raid1: Disk failure on sdb1, disabling device.
[90318.370637] raid1: Operation continuing on 1 devices.
And here's the story for linux-ide from the earlier messages:

I'm using two ST31000528AS drives in a RAID1 array using MD. I've had
several failures occur over a period of a few months (see logs below).
I've RMA'd the drive, but then got curious why an otherwise normal drive
locks up while trying to write the same sector once a month or so, yet
does not report having bad sectors, doesn't fail any tests, and does
just fine if I do
dd if=/dev/urandom of=/dev/sdb bs=512 seek=1953519935 count=1
however many times I try.
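To double-check the sector and what the drive itself thinks of it, one
could also read it back and look at the SMART counters; a rough sketch,
assuming the same device and sector as above and that hdparm and
smartctl are installed:

  # read the sector back through the block layer and dump it
  dd if=/dev/sdb bs=512 skip=1953519935 count=1 2>/dev/null | xxd | head

  # read the same LBA directly from the drive, bypassing the page cache
  hdparm --read-sector 1953519935 /dev/sdb

  # any pending or reallocated sectors recorded by the drive?
  smartctl -A /dev/sdb | egrep -i 'Pending|Reallocated'

The hdparm read goes straight to the drive, so it sidesteps any caching
in between.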
I then tried Googling for this number (1953519935) and found that it
comes up quite a few times, and most of the time (or always) in the
context of md/raid.
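The "md: super_written gets error" line above suggests the failing write
is md updating its superblock. For version 0.90 metadata the superblock
sits (size & ~127) - 128 sectors into the member device, so a quick
sanity check (a sketch, assuming 0.90 metadata and sdb1 as the member
partition):

  # which metadata version is the array actually using?
  mdadm -E /dev/sdb1 | grep -i version

  # expected 0.90 superblock location, expressed as a whole-disk sector
  # number (0.90 puts it at (size & ~127) - 128 sectors into the member)
  start=$(cat /sys/block/sdb/sdb1/start)
  size=$(cat /sys/block/sdb/sdb1/size)
  echo $(( start + ((size & ~127) - 128) ))

If the printed number lands at or near 1953519935, the timed-out write
would be md's periodic superblock update, which would also explain why
that sector keeps showing up in md/raid-related reports.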