#345395 - 28/05/2011 15:17
3TB drives on linux software raid array
old hand
Registered: 15/02/2002
Posts: 1049
Hey everyone, I've been running a Debian etch system for several years now with 3x1TB drives in a software RAID5 array. I bought three 3TB drives to replace the existing drives, and I think I've run into the "2.1TB barrier" based on some googling. I see this in the boot messages:

    sdb : very big device. try to use READ CAPACITY(16).
    sdb : unsupported sector size -1548812288.
    SCSI device sdb: 0 512-byte hdwr sectors (0 MB)
    sdb: Write Protect is off
    sdb: Mode Sense: 00 3a 00 00
    SCSI device sdb: drive cache: write back

Do you guys know what I need to do to use these drives? They are recognized correctly (it seems) by my SATA controller card, but I have noticed that I can't boot the system when all drives are connected to this card (even just the 1TB drives). It seems my motherboard does not support booting from this controller card very well and requires a disk plugged in to the motherboard.

My plan is to leave a 1TB drive as a boot/operating system device connected to the motherboard, and I would like to have the 3TB drives each with a single RAID partition in a 3-disk array. Then I can just copy the boot disk to the other two old 1TB drives for physical backup rather than dicking around with RAID1 for the OS.

Is a kernel upgrade sufficient? Some googling leads me to believe that a different partition manager is required. Would it be easier to just upgrade the entire system? Will the latest version of Debian support the big drives? Is there a different distro you would recommend?

I'm using RAID1 partitions for /boot and / on the existing drives (4 partitions total, including the big storage RAID5 partition and swap). If reinstalling is easiest, my plan would be to reformat the / and /boot partitions as non-RAID on an existing disk, leaving the large RAID5 partitions alone, then install. I'm assuming a new version of Linux could use the existing RAID5 partitions and array just fine? I have backups of the data, but it's very slow media.
I'm trying to get this done leaving the data on the RAID5 filesystem intact. Hope all this rambling makes some sense... Thanks in advance! Jim
Edited by TigerJimmy (28/05/2011 15:22)
#345396 - 28/05/2011 16:37
Re: 3TB drives on linux software raid array
[Re: TigerJimmy]
carpal tunnel
Registered: 29/08/2000
Posts: 14493
Loc: Canada
You need (1) a recent (last year or two) Linux kernel, (2) CONFIG_LBDAF ("support for large (2TB+) block devices and files") enabled in the kernel configuration, and (3) a filesystem capable of addressing that amount of space (e.g. ext4, xfs, jfs, ...).
Cheers
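For context on the "2.1TB barrier" from the original post: it comes from addressing sectors with a 32-bit count. With 512-byte sectors, a quick bit of shell arithmetic shows exactly where the limit falls:

```shell
# Largest capacity addressable with a 32-bit count of 512-byte sectors:
echo $(( 2**32 * 512 ))
# prints 2199023255552 -- exactly 2 TiB (about 2.2 TB), hence "the 2.1TB barrier"
```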
#345397 - 28/05/2011 16:40
Re: 3TB drives on linux software raid array
[Re: mlord]
carpal tunnel
Registered: 29/08/2000
Posts: 14493
Loc: Canada
For partitioning, you can either (1) partition the drives and then construct RAIDs from the partitions, or (2) you can first RAID the (whole) drives, and then partition the RAID, or (3) not bother partitioning at all (one massive filesystem).
I think you may need to use something other than MSDOS partition tables if partitioning the raw drives.
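A minimal sketch of GPT labelling with parted, in case it helps; the device name /dev/sdb is only an example, and these commands destroy the drive's existing partition table:

```shell
# WARNING: wipes the existing partition table on the target drive.
# /dev/sdb is an example name -- confirm the right device with lsblk or dmesg.
parted /dev/sdb mklabel gpt                # GPT instead of an MS-DOS/MBR table
parted /dev/sdb mkpart primary 1MiB 100%   # one big partition, 1MiB-aligned
parted /dev/sdb print                      # verify the result
```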
#345400 - 29/05/2011 00:27
Re: 3TB drives on linux software raid array
[Re: mlord]
carpal tunnel
Registered: 08/06/1999
Posts: 7868
Correct, partitions must be GPT (the "newer" disk format introduced alongside EFI back in 2000), instead of the older MBR format (introduced with the IBM PC in 1981).
#345401 - 29/05/2011 02:12
Re: 3TB drives on linux software raid array
[Re: mlord]
old hand
Registered: 15/02/2002
Posts: 1049
> For partitioning, you can either (1) partition the drives and then construct RAIDs from the partitions, or (2) you can first RAID the (whole) drives, and then partition the RAID, or (3) not bother partitioning at all (one massive filesystem). I think you may need to use something other than MSDOS partition tables if partitioning the raw drives.

So it sounds like I should just install a new system on the 3 drives and then copy over the data...

I don't think I understand how to RAID whole drives. Can I make a software RAID array using /dev/sd1, /dev/sd2, etc? I didn't realize that.

I'm fine with one big filesystem comprised of the RAID5 array, especially if I'm using another disk for the OS disk, which will be required for this motherboard, it seems. I guess I get to learn about a new partition manager, too.

Thanks for the help, guys.

Jim
#345402 - 29/05/2011 12:09
Re: 3TB drives on linux software raid array
[Re: TigerJimmy]
carpal tunnel
Registered: 29/08/2000
Posts: 14493
Loc: Canada
> I don't think I understand how to RAID whole drives. Can I make a software RAID array using /dev/sd1, /dev/sd2, etc? I didn't realize that.

Actually, the raw drives will have names like /dev/sda, /dev/sdb, and /dev/sdc. Just drop those into your RAID configuration, and then make one massive filesystem on the resulting /dev/md0 device. Or partition /dev/md0 if you want to break it up into multiple filesystems.

Cheers
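A sketch of the whole-drive approach, assuming the three 3TB drives show up as /dev/sdb, /dev/sdc and /dev/sdd (example names only; mdadm will erase whatever is on them):

```shell
# Build a 3-disk RAID5 directly from the raw drives -- no partition tables needed:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

cat /proc/mdstat        # watch the initial resync progress

# One massive filesystem on the resulting md device:
mkfs.ext4 /dev/md0
```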
#345403 - 29/05/2011 15:22
Re: 3TB drives on linux software raid array
[Re: mlord]
old hand
Registered: 15/02/2002
Posts: 1049
> > I don't think I understand how to RAID whole drives. Can I make a software RAID array using /dev/sd1, /dev/sd2, etc? I didn't realize that.
>
> Actually, the raw drives will have names like /dev/sda, /dev/sdb, and /dev/sdc. Just drop those into your RAID configuration, and then make one massive filesystem on the resulting /dev/md0 device. Or partition /dev/md0 if you want to break it up into multiple filesystems.

Of course. I got my nomenclature backwards - the numbers are partitions. OK, I get it.

Thanks guys!
#345404 - 29/05/2011 17:43
Re: 3TB drives on linux software raid array
[Re: TigerJimmy]
carpal tunnel
Registered: 29/08/2000
Posts: 14493
Loc: Canada
I don't know the pros/cons of RAIDing the whole drives and then partitioning, versus partitioning first and then RAIDing the individual groups of partitions.
There may be some decent rationale for doing the latter. One reason to partition first is that it allows using RAID1 (bootable!) for the operating system partition, and then RAID5 for everything else.
But I prefer a separate flash/SSD drive for the boot/OS.
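A hypothetical sketch of that partition-first layout (device and partition names are illustrative: each drive carries a small partition 1 for the OS mirror and a large partition 2 for data):

```shell
# Bootable RAID1 across the small first partitions for the OS:
mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
# RAID5 across the large second partitions for everything else:
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
```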
#345406 - 29/05/2011 23:56
Re: 3TB drives on linux software raid array
[Re: mlord]
old hand
Registered: 15/02/2002
Posts: 1049
Yeah, about that :-)
So with the old 1TB drives, I partitioned first and then made the arrays for exactly that reason. My /boot and / partitions are RAID1.
With this new build, I want to use one of those disks as a system drive (I like your idea of switching to a SSD for the OS), and decommission the other two, using them as offline backups of the system drive.
How can I switch from a RAID1 md device, to a regular partition? Or, should I just fail the two other drives in the array and let it stay a RAID1 md device with just 1 drive in it? Can I clobber it and change the partition type of those partitions to regular filesystem partitions and NOT damage the big RAID5 partition?
Thanks again for the help!
Jim
#345409 - 30/05/2011 00:45
Re: 3TB drives on linux software raid array
[Re: TigerJimmy]
carpal tunnel
Registered: 29/08/2000
Posts: 14493
Loc: Canada
> How can I switch from a RAID1 md device, to a regular partition?

I'm not really a sysadmin expert, nor a RAID expert. There's probably some nifty way to just retag the partition as a non-RAID filesystem, but I've never tried/done it.

Copy off the RAID5 data to the new drives first, whatever you do. Then your old 1TB drives will all be "available" for experimentation. I would simply partition and mkfs on a new drive (or a repurposed 1TB drive) and then use mirrordir or rsync to copy everything over to it, and re-run grub-install afterward to make the "new" filesystem bootable.

But like I said, I'm not a sysadmin, other than for my own stuff.

Cheers
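A rough sketch of that copy-then-reinstall-grub approach (all device names and mount points are examples, and it assumes the GRUB legacy of etch-era Debian):

```shell
# Prepare the new OS disk (example: /dev/sdb1 already exists as a partition):
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt/newroot

# Copy the running system over (-x stays on one filesystem, -H keeps hard links):
rsync -aHx / /mnt/newroot/

# Make the copy bootable:
grub-install --root-directory=/mnt/newroot /dev/sdb
```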
#345417 - 30/05/2011 14:45
Re: 3TB drives on linux software raid array
[Re: TigerJimmy]
addict
Registered: 11/01/2002
Posts: 612
Loc: Reading, UK
> How can I switch from a RAID1 md device, to a regular partition?

You can't 'migrate' between md and non-md. You should remove the md tag (look at man mdadm and --zero-superblock) before running mkfs, though. The superblock can be stored at various locations on disk, so simply wiping the first few bytes or remaking the partition table won't work reliably.

> Or, should I just fail the two other drives in the array and let it stay a RAID1 md device with just 1 drive in it?

You could do that. A bit messy, but not a real problem. It also allows you to add another device later.

> Can I clobber it and change the partition type of those partitions to regular filesystem partitions and NOT damage the big RAID5 partition?

Yes. Also note that the old partition type (0xfd, Linux RAID autodetect) is deprecated, which is rational but kinda annoying - the suggestion is to use 0xda now.
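A sketch of retiring a RAID1 member so its partition can be reused directly (example names; the array must be unmounted and not in use):

```shell
mdadm --stop /dev/md0               # stop the old RAID1 array
mdadm --zero-superblock /dev/sda1   # wipe the md metadata from the member
mkfs.ext4 /dev/sda1                 # now it can hold a plain filesystem
```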
_________________________
LittleBlueThing
Running twin 30's