
Linux Software RAID

Posted: January 19th, 2009 | Filed under: Linux, Tech | 1 Comment »

I have always wanted to set up a NAS to house all of my data in one location, and now the price per GB is affordable enough for me to live the dream.  On Christmas Day I was sitting at the computer thinking that a NAS would be a good project to close out ’08, so I purchased the following:

7 – Western Digital 1TB hard drives
2 – PCI SATA controllers with 4x internal SATA II ports each

I decided to use the 2nd desktop computer I’ve ever owned, purchased in ’03.  Here are the specs:

AMD Athlon XP 2200+ 1.8GHz
3GB RAM
4 PCI Slots

That’s all you need to know about that PC.  So I got all of the hard drives set up in the new case and created a rat’s nest of SATA and Molex connectors.
Most geeks will tell you that you should definitely go with hardware RAID controllers because they take the load off of the CPU and the performance is much better.  I decided to go with software RAID6 because I am not a conformist. :)

Just kidding.  Here are my reasons for choosing software RAID over hardware:
-Most modern CPUs can handle the RAID operations with ease
-A hardware RAID controller failure means you have to track down the exact same model and firmware version before you can get your data back
-My application is not write- or read-intensive.  The highest load I can see on the NAS will be burning a DVD image over the network while several other users in the house stream high-def media and VPN users access the data.  So, a max of 40MB/s-ish.  That should be nothing for the gigabit network here.
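That 40MB/s figure is easy to sanity-check with some back-of-the-envelope shell arithmetic (the stream counts and per-stream rates below are my own assumptions, just to show the math):

```shell
# Rough peak load: a 16x DVD burn (~22 MB/s), three high-def
# streams (~3 MB/s each), and two VPN readers (~2 MB/s each).
# Gigabit ethernet tops out around 110 MB/s, so there is headroom.
echo $(( 22 + 3*3 + 2*2 ))   # total MB/s
```

Even with generous estimates the total lands in the mid-30s MB/s, well under what a gigabit network can carry.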

Here is the breakdown of how to set up the RAID:
1. Install all hardware and confirm it is recognized by the cards/BIOS
2. Boot into your favorite *NIX distro
3. Set up your main boot drive with the common partitioning scheme: 1x 100MB /boot, 2x RAM for swap, and / with the rest of the space on the OS drive
4. Boot up, yum -y update
5. Remote in and run fdisk -l to verify that everything is there, then check dmesg to make sure there are no issues.
6. fdisk /dev/sda, type n for a new primary partition, enter, enter, then t to set the type to fd (Linux RAID Autodetect), and w to write…Wait for the inode goodness
7. yum install mdadm
8. man mdadm and read
9. Set up your RAID with the mdadm command, specifying spare drives, chunk size (64k or 128k should do), and RAID level.  I went with RAID6 for dual redundancy, so my 7 drives ended up as ~4.5TB of usable space.
10. After you run the mdadm command the RAID will sync up and initialize.  You can use the array while it is syncing, but it will operate in a degraded state.
11. Wait about 15 hours (in my case) for the RAID to initialize, then check dmesg to see if you have any errors.  Also ‘cat /proc/mdstat’ to see the status of your RAID.
12. Set up Samba, NFS, FTP, or whatever you want to give your (l)users access to the data.
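Steps 6 through 11 above boil down to a short command sequence.  This is a sketch rather than a transcript of my exact setup: the device names /dev/sd[b-h], the 64k chunk size, the ext3 filesystem, and the /mnt/nas mount point are assumptions, so adjust for your own hardware (run as root):

```shell
# Partition each data drive (repeat for every disk in the array):
# inside fdisk, n = new primary partition spanning the disk,
# t then fd = Linux RAID Autodetect partition type, w = write.
fdisk /dev/sdb

# Create the 7-drive RAID6 array.  RAID6 spends two drives on parity,
# so usable space is (7 - 2) x 1TB = 5TB raw, ~4.5TB usable.
mdadm --create /dev/md0 --level=6 --raid-devices=7 --chunk=64 /dev/sd[b-h]1

# Watch the initial sync; the array works while syncing, just degraded.
cat /proc/mdstat
mdadm --detail /dev/md0

# Once the sync finishes, drop a filesystem on it and mount it.
mkfs.ext3 /dev/md0
mkdir -p /mnt/nas
mount /dev/md0 /mnt/nas

# Record the array so it reassembles automatically at boot.
mdadm --detail --scan >> /etc/mdadm.conf
```

The last line saves the array definition to /etc/mdadm.conf so the init scripts can assemble /dev/md0 on reboot instead of you doing it by hand.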

If you have any issues with RAID, leave a comment and I’ll email you to help.  I had many issues over the course of this RAID fiasco that I’ve learned from.

Final Say:  If you must go with software RAID, use at least a Pentium 4 or Core 2 Duo-class processor.