Originally Posted By: LittleBlueThing
Originally Posted By: drakino
One thing that does annoy me about RAID is the absolute need for matched drives.

You mean 'hardware raid'


No, I should clarify. I mean I dislike the requirement that the amount of space used on each disk for the RAID has to be identical. I.e., toss in two initial 160GB drives in a RAID 1 (Linux software RAID, or hardware RAID), then toss in a newer 200GB drive, and only 160GB of it can be added to the existing RAID. Sure, that leftover 40GB of space can normally be used for something else (yes, with both hardware and software RAID), but since the first two drives in the RAID were 160GB, that is now the set size.
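To make it concrete, here's roughly what that workaround looks like with Linux software RAID. This is just a sketch; the device names and partition boundaries are placeholders, not my exact setup:

    # Carve the new 200GB drive into a 160GB slice (to match the
    # existing members) and a ~40GB leftover slice
    parted --script /dev/sdc mklabel gpt
    parted --script /dev/sdc mkpart raid 1MiB 160GB
    parted --script /dev/sdc mkpart spare 160GB 100%

    # Rejoin the mirror using only the 160GB slice; the array
    # capacity stays pinned to its smallest member
    mdadm /dev/md0 --add /dev/sdc1

    # The leftover slice can only be used as an ordinary
    # standalone filesystem, outside the array
    mkfs.ext4 /dev/sdc2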

My third Linux server ended up starting with three 160GB drives in RAID 5 and an 80GB drive to boot off of. One failed and was replaced by a 200GB, so I used the extra 40GB of space to back up some files off the boot drive. Later I added a 500GB drive, used 160GB of it to expand the RAID, and left the rest of the space as an "UnRAID" share that was used only for data I didn't care if the server lost.
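For reference, growing the md RAID 5 with the new drive's 160GB slice went roughly like this (device names are examples again, and this assumes an ext filesystem on the array):

    # Add the 160GB partition of the 500GB drive, then reshape
    # the array from 3 members to 4
    mdadm /dev/md0 --add /dev/sdd1
    mdadm --grow /dev/md0 --raid-devices=4

    # After the reshape completes, expand the filesystem to
    # use the new capacity
    resize2fs /dev/md0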

With a Drobo, or ZFS, or something else that doesn't use traditional RAID, I could have just kept tossing in drives of various sizes and still had data redundancy. Odds are, if Linux had had stable support for this a few years back, I would have gone that route. The problem is, once I made the decision to go RAID with 160GB drives, I was locked into that until a lengthy migration to completely new and independent disks.
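To be fair, ZFS rounds a single raidz or mirror vdev down to its smallest member too; the flexibility comes from being able to keep adding vdevs of whatever size to the pool. A rough sketch (hypothetical device names):

    # Start the pool with a mirrored pair of 160GB drives
    zpool create tank mirror /dev/sda /dev/sdb

    # Later, toss in a mirrored pair of 500GB drives; the pool
    # simply grows, with no size matching required across vdevs
    zpool add tank mirror /dev/sdc /dev/sdd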

And to also clarify, hardware RAID can mix devices just fine too. I've worked with several RAID controllers that didn't care whether the attached devices were SATA (1.5 or 3.0Gbit; "SATA I" or "SATA II" is a bit of a misnomer), SAS, FATA or FC, SCSI, or tape. The term RAID also stopped meaning Redundant Array of Independent Disks and was changed to Redundant Array of Independent Devices once servers started shipping with RAID memory configurations. Where's the software RAM RAID driver in Linux? :-)