My Debian (Jessie) based system marks one of my RAID disks as faulty after a few days of running. If I reboot the machine, everything is fine again for some days until the problem reappears.
Here's my environment:
The system runs Debian Jessie 64-bit and has two physical disks that form a RAID 1 array with mdadm.
The system also uses LVM for more flexible handling of partitions.
Two virtual machines run under VirtualBox 5.1.10; the .VDI files of these machines are also stored on the LVM volumes mentioned above.
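To make the layering clearer, this is roughly how the stack can be inspected from the command line (the volume group name vg00 is taken from the fstab below; exact device names may of course differ):
~# lsblk -o NAME,TYPE,SIZE,MOUNTPOINT   # disks -> md arrays -> LVM volumes -> mountpoints
~# pvs                                  # physical volumes (the md array backing vg00)
~# vgs                                  # volume groups
~# lvs                                  # logical volumes holding /usr, /var, /home and the .VDI files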
Now I have the problem that after a few days one of the disks seems to have errors; at least mdadm marks the disk as faulty. Over the last two months both physical disks have been replaced with new ones, yet the problem is still there. This makes me wonder whether these were real disk failures or whether the software RAID marks the disks as faulty even though they are fine.
Are there any known bugs for this combination of software RAID (mdadm), LVM and VirtualBox?
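To tell a real hardware failure from a purely software-side ejection, I plan to run roughly the following the next time a disk gets kicked (assuming /dev/sdb is the affected disk, as in the output below; smartctl is from the smartmontools package):
~# smartctl -a /dev/sdb                   # SMART health, reallocated/pending sector counts
~# smartctl -t short /dev/sdb             # start a short self-test; read the result later with 'smartctl -a'
~# dmesg | grep -iE 'ata|sdb'             # kernel messages: link resets, read/write errors
~# grep -i 'sdb' /var/log/syslog          # what the kernel/mdadm logged when the member was failed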
Some command output:
~# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda3[0] sdb3[2](F)
1458846016 blocks [2/1] [U_]
md1 : active raid1 sda1[0] sdb1[2](F)
4194240 blocks [2/1] [U_]
unused devices: <none>
~# mdadm -D /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Sat May 14 00:24:24 2016
Raid Level : raid1
Array Size : 4194240 (4.00 GiB 4.29 GB)
Used Dev Size : 4194240 (4.00 GiB 4.29 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Sun Dec 4 00:59:17 2016
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Number Major Minor RaidDevice State
0 8 1 0 active sync
2 0 0 2 removed
2 8 17 - faulty /dev/sdb1
~# mdadm -D /dev/md3
/dev/md3:
Version : 0.90
Creation Time : Sat May 14 00:24:24 2016
Raid Level : raid1
Array Size : 1458846016 (1391.26 GiB 1493.86 GB)
Used Dev Size : 1458846016 (1391.26 GiB 1493.86 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 3
Persistence : Superblock is persistent
Update Time : Sun Dec 4 00:59:16 2016
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Number Major Minor RaidDevice State
0 8 3 0 active sync
2 0 0 2 removed
2 8 19 - faulty /dev/sdb3
~# cat /etc/fstab
/dev/md1 / ext3 defaults 1 1
/dev/sda2 none swap sw
/dev/sdb2 none swap sw
/dev/vg00/usr /usr ext4 defaults 0 2
/dev/vg00/var /var ext4 defaults 0 2
/dev/vg00/home /home ext4 defaults 0 2
#/dev/hdd/data /data ext4 defaults 0 2
devpts /dev/pts devpts gid=5,mode=620 0 0
none /proc proc defaults 0 0
none /tmp tmpfs defaults 0 0