Questions tagged [software-raid]

A RAID solution handled by the operating system.

Software RAID is implemented in the operating system and uses the host CPU to handle RAID operations. It is cheaper than hardware RAID, but it often lacks features found in hardware RAID, such as:

  • hot spare
  • fast rebuilding of an array
  • hot swapping
  • high write throughput

Another drawback is that it uses the computer's CPU and memory to perform all RAID tasks (hardware RAID cards have a dedicated processor and memory for this), thereby putting extra load on your system.
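As a minimal sketch of what this looks like in practice on Linux, the commands below create a two-disk RAID 1 mirror with mdadm (the device names /dev/sdb and /dev/sdc are examples; substitute your own spare disks):

```shell
# Create a two-disk RAID 1 mirror from whole disks (device names are examples).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the initial resync progress; the md driver does this work in the
# kernel using the host CPU, not a dedicated RAID processor.
cat /proc/mdstat

# Persist the array definition so it is assembled at boot.
mdadm --detail --scan >> /etc/mdadm.conf
```

Note that these commands require root and destroy any data on the member disks.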

866 questions
5 votes • 2 answers

FreeNAS with ZFS and TLER/ERC/CCTL

I am currently in the process of building a new storage server, to be used for virtual machines, files and backup. The OS is FreeNAS, which uses ZFS as software RAID. My problem is that I need to choose hard drives, and I have looked at both…
Indigo • 53 • 5
5 votes • 3 answers

Windows Server software RAID volume constant "Failed Redundancy"

I'm using Windows Server 2008 software RAID volumes. Recently I've started to receive an error in the System event log: "The device, \Device\Harddisk7\DR7, has a bad block." Meanwhile the volume in Disk Manager is marked as "Failed Redundancy". I could…
Artem Tikhomirov • 742 • 3 • 9 • 15
5 votes • 1 answer

Can I stripe 2 volumes on the operating system drive?

I'm new to system administration and I'm not really sure if this is possible. Our server supports up to 4 x 2 TB drives. We need a drive larger than 2TB for a particular use. We also need redundancy in the case of hard drive failure. We thought,…
Adam • 87 • 1 • 1 • 4
5 votes • 2 answers

Linux software raid 1 - can more than two devices be used?

I have a batch of linux servers using software raid 1 that need to have both disks swapped. While this can be done one disk at a time, I'd like to know if it is possible to do both at once with a process like the following, to reduce the outages…
DrStalker • 6,946 • 24 • 79 • 107
5 votes • 2 answers

Linux RAID 1: How to make a secondary HD boot?

I have the following RAID 1 on a Centos 6.5 server: # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdb1[3] 974713720 blocks super 1.0 [2/1] [_U] bitmap: 7/8 pages [28KB], 65536KB chunk md1 : active raid1 sdb2[3] sda2[2] …
Fernando • 1,189 • 6 • 23 • 32
5 votes • 2 answers

How to get an email alert if one of the RAID 1 disks fails?

I need to know how I can get an email alert if one of the RAID 1 disks fails or crashes. I have CentOS 6.4 64-bit, software RAID. I made a mistake following this tutorial, because there was a bottom note: NOTE: It has been found that mdadm will not send…
Blazer • 77 • 2 • 6
5 votes • 1 answer

Linux software RAID 1 - root filesystem becomes read-only after a fault on one disk

Linux software RAID 1 locking to read-only mode The setup: Centos 5.2, 2x 320 GB sata drives in RAID 1. /dev/md0 (/dev/sda1 + /dev/sdb1) is /boot /dev/md1 (/dev/sda1 + /dev/sdb1) is an LVM partition which contains /, /data and swap partitions All…
DrStalker • 6,946 • 24 • 79 • 107
5 votes • 2 answers

What happens to missed writes after a zpool clear?

I am trying to understand ZFS' behaviour under a specific condition, but the documentation is not very explicit about this so I'm left guessing. Suppose we have a zpool with redundancy. Take the following sequence of events: A problem arises in…
Kevin • 1,580 • 4 • 23 • 35
5 votes • 1 answer

100% packets dropped on first RX queue on 3/5 raid6 iSCSI NAS devices using intel igb (resolved)

Edit : The issue is resolved. The Queues in question have been used for Flow Control Packets. Why the igb driver propagated FC packets up to have them dropped (and counted) is another question. But the solution is, that there is nothing dropped in a…
Yamakuzure • 153 • 6
5 votes • 1 answer

Linux mdadm --grow RAID6: Something wrong - reshape aborted

I have a RAID60 that I want to expand. The current is: 2 axles each having 9 disks + 2 spares. The future is: 4 axles each having 10 disks + 1 spare. So I need to do some --grow to reshape the drives. I thought this would be enough: mdadm -v --grow…
Ole Tange • 2,946 • 6 • 32 • 47
5 votes • 2 answers

Windows Server 2008 Software Raid 5 - Data integrity issues

I've got a server running Windows Server 2008 R2, with a (windows native) software raid-5 array. The array consists of 7x 1TB Western Digital RE3 and RE4 drives. I have offline backups of this array. The problem is this: I noticed a few days ago…
Fopedush • 160 • 5
5 votes • 2 answers

Linux software RAID - partition first?

I have two identical drives that I intend to mirror in the interest of data safety. These are data-only drives, not a primary OS drive. In such a system, is it better to create a single partition (Linux raid auto: type 0xfd) on each drive and raid…
tylerl • 15,055 • 7 • 51 • 72
5 votes • 2 answers

Linux software RAID6: rebuild slow

I am trying to find the bottleneck in the rebuilding of a software raid6. ## Pause rebuilding when measuring raw I/O performance # echo 1 > /proc/sys/dev/raid/speed_limit_min # echo 1 > /proc/sys/dev/raid/speed_limit_max ## Drop caches so that does…
Ole Tange • 2,946 • 6 • 32 • 47
5 votes • 1 answer

mdadm+zfs vs mdadm+lvm

This may be a naive question since I'm new to this and I cannot find any results about mdadm+zfs, but after some testing it seems it might work: The use case is a server with RAID6 for some data that is backed-up somewhat infrequently. I think I'm…
Álex • 193 • 1 • 6
5 votes • 2 answers

How to know which disk has failed on a mirrored RAID? Marked as DR0

Our 2ndry DC, which is on a W2K8R2 mirrored software RAID, has lost its sync, and Disk Management displays the failed redundancy error. How do I know which of the disks has failed? (besides trying to replace one and seeing if it loads and syncs) On…
Saariko • 1,791 • 14 • 45 • 75