Descending into geekdom with a senior IT guy. My teams manage storage, and I architect storage solutions for various projects, so be warned. . .
Mr. Z's First rule: RAID IS NO SUBSTITUTE FOR FULL AND TESTED BACKUPS.
Now, for those who don't want to read further: Chances are excellent that RAID 5 will fill the bill for most educational projects. Short form: It's highly redundant, and the only performance penalty is during write activities. BE CERTAIN you can detect failures and address them promptly, have spares, and you should be fine until the equipment ages.
RAID 5: Uses a number of disks (typically 3 to 8) to create a "RAID Group" (RG).
Visualize an 8-layer cake. Instead of a sector on one disk, like a slice of one layer, there is a "stripe": a slice cut through all the layers. In that stripe, 7 of the pieces contain data and the 8th contains parity information. If you lose one "layer," the data in every stripe can be recreated as long as the pieces on the other 7 layers remain intact. The parity sector is calculated during each write operation and is written in round-robin fashion: to layer 1 for the first stripe, layer 2 for the next stripe, and so on.
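For the curious: the parity trick is just a bitwise XOR across the stripe. Here's a minimal sketch (my own illustration, assuming an 8-disk group with made-up 4-byte chunks) showing how a lost "layer" is recovered from the survivors:

```python
from functools import reduce

# Hypothetical 8-disk RAID 5 stripe: 7 data chunks plus 1 parity chunk.
data_chunks = [bytes([i] * 4) for i in range(1, 8)]        # 7 data "slices"
parity = bytes(reduce(lambda a, b: a ^ b, col)
               for col in zip(*data_chunks))               # XOR across the stripe

# Simulate losing disk 3: XOR the surviving chunks plus parity
# and the missing chunk falls out.
survivors = data_chunks[:3] + data_chunks[4:] + [parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
assert rebuilt == data_chunks[3]                           # lost data recovered
```

Note this only works with one missing piece per stripe, which is exactly why a second failure mid-rebuild is fatal.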
Disadvantage? These parity calculations take time and processing power, which is why I do not specify this RAID level for highly write-intensive applications like a transactional database with 3,000 concurrent users. But I doubt this is a serious issue in most educational uses.
Advantage? Saves money -- you only need one disk's worth of overhead to carry the parity information. But don't get carried away...
One thing to watch for -- as I mentioned already -- is to be certain you can detect failures promptly. When (NOT "if") a disk fails, you need to discover that and to replace it promptly. The system will typically begin reading every sector of every other disk (called running in 'degraded mode') to recreate the missing data both for user operations and to rebuild the disk when you replace it.
IF A SECOND DISK FAULTS BEFORE THE REBUILD COMPLETES, THE RG IS TOAST. That's why you replace failed disks promptly, and never put "too many" disks into one RG. I have been consulted after people built RAID 5 RGs with 15 disks, and 4 years later two failed. They ask "what can we do?" and I tell them to reach for the backups. (See my first rule.)
So, as disks get larger, rebuild times get longer. Which leads to...
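To see why, a back-of-envelope calculation helps. Assuming a sustained rebuild rate of around 100 MB/s (my figure, purely illustrative; real rates depend on controller, load, and disk type), rebuild time scales linearly with disk size:

```python
def rebuild_hours(disk_tb: float, rebuild_mb_per_s: float) -> float:
    """Hours to read/write one disk's worth of data at a sustained rate."""
    return disk_tb * 1_000_000 / rebuild_mb_per_s / 3600

# An 8x larger disk means an 8x longer window for a second failure.
print(f"1 TB disk: {rebuild_hours(1, 100):.1f} h")   # roughly 2.8 hours
print(f"8 TB disk: {rebuild_hours(8, 100):.1f} h")   # roughly 22 hours
```

Every extra hour of rebuild is an extra hour in which a second failure kills the RG.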
RAID 6: Like RAID 5, but the parity information is written to 2 disks in the RG.
Advantage: Lose one disk and there is no degraded-mode operation per se. Lose a second disk during the rebuild and the system shifts to degraded mode. BUT, it's not a disaster. RAID 6 is also resistant to LSEs (explained below).
Disadvantage: Higher cost ("wastes" another disk over RAID 5), and not all controllers support RAID 6 yet.
Latent Sector Error (LSE) - When a disk has a weak sector, but either that data has not been read, or the system corrected for it. You find it on a RAID 5 system when a DIFFERENT disk fails and you try to rebuild: the system tries to read the weak sector, and OOPS! Which is why RAID 5 on aging systems can be problematic. Sophisticated enterprise-class storage from EMC or Hitachi performs "disk scrubbing" in the background, constantly looking for weak sectors and moving the data. Lower-end systems typically do not have that option. If your controller supports it, make sure it's turned on.
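The scrubber's job can be sketched in a few lines. This toy model (my own illustration, not any vendor's implementation) stores a CRC32 checksum per block and flags any block whose data no longer matches, i.e. a candidate for rewriting from parity before a real failure forces the issue:

```python
import zlib

# Toy scrub pass (assumed model): each "block" carries its own CRC32.
blocks = [{"data": b"payload-%d" % i} for i in range(5)]
for b in blocks:
    b["crc"] = zlib.crc32(b["data"])

blocks[2]["data"] = b"bitrot!!"          # simulate a latent sector error

# The scrub: re-read every block and compare against the stored checksum.
suspect = [i for i, b in enumerate(blocks)
           if zlib.crc32(b["data"]) != b["crc"]]
print("blocks to rebuild from redundancy:", suspect)   # [2]
```

The point is that the bad sector is found while the rest of the RG is still healthy, so it can be repaired cheaply.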
Lastly, for the performance geeks out there:
RAID 0 - Not redundant at all. Writes data across multiple disks, which gives very high I/O capability, but calculates and writes no parity information. Good for very little in a business setting, since downtime will occur eventually.
RAID 1 - Mirrors 2 disks. No parity information to calculate, so in general as fast as a single disk, but with redundancy. Doubles the disk cost, though. Used where write performance needs to be higher than RAID 5 / 6 can deliver.
RAID 1 + 0 or RAID 10 - Mirror multiple pairs of disks to create RGs, then stripe across those RAID 1 RGs to create a RAID 0 super-RG. Redundant due to mirroring of individual pairs, very fast, since there are no parity calculations, and with great potential due to the available I/O bandwidth. Quite safe: A complete failure requires BOTH halves of any one mirrored pair to fail.
RAID 0 + 1 - Stripe then mirror. Don't do it. Unlike RAID 10, if you lose just ONE disk in each RAID 0 RG, you're toast.
RAID 5 + 0 / RAID 50 - Build multiple RAID 5 RGs, then stripe across them. Improved performance over plain RAID 5.
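A quick way to compare the levels above is by usable capacity. The formulas below are the standard ones (n equal-size disks; RAID 1 shown as a 2-disk mirror; numbers here are just an example configuration):

```python
# Usable capacity from n disks of size_tb each, per RAID level.
def usable(level: str, n: int, size_tb: float) -> float:
    if level == "0":  return n * size_tb            # no redundancy
    if level == "1":  return size_tb                # 2-disk mirror
    if level == "5":  return (n - 1) * size_tb      # 1 disk of parity
    if level == "6":  return (n - 2) * size_tb      # 2 disks of parity
    if level == "10": return (n // 2) * size_tb     # half lost to mirrors
    raise ValueError(f"unknown level {level!r}")

for lvl in ("0", "5", "6", "10"):
    print(f"RAID {lvl:>2}: {usable(lvl, 8, 2):.0f} TB usable from 8 x 2 TB")
```

RAID 10's halved capacity is the price of its speed and safety; RAID 5's single parity disk is why it's the budget favourite.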
Finally, for people using RAID:
CREATE A HOT SPARE. This is a mechanism for the system to keep one disk aside for emergencies. If a running disk fails, the system will "swap in" the hot spare and begin the rebuild. It reduces the window of vulnerability.
MONITOR THE SYSTEM. It sucks to have the system swap in the hot spare and for you not to know it.
Hope that answered your questions.
Thanks, that's very comprehensive. I currently have RAID 6 on my HP SAN and it does indeed run the scrubber utility, and there is a hot spare available. The SAN also sends plenty of emails, too many sometimes :P
Last edited by ChrisH; 16th January 2011 at 11:20 PM.
So.... what you really need is data integrity (i.e. knowing that when you write a block of data to your filesystem, you are safe in the knowledge that you can read it back at any point in the future). There is only one filesystem that can guarantee this.... ZFS (and to save me writing shedloads, look here: ZFS - Wikipedia, the free encyclopedia).
Any form of RAID only gives you protection against disk failure and allows you to carry on (albeit at reduced performance) whilst the parity is rebuilt on the hot spare. With ZFS, the parity is only rebuilt for the blocks that are in use. For example, if you have a RAID set made up of 2TB drives (not uncommon nowadays) with 100GB of used capacity, and one drive fails and the hot spare kicks in, you only rebuild the used blocks and NOT the whole drive like other filesystems force you to do. This means your "slow down" is considerably less.
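The arithmetic behind that claim is simple. Using the poster's figures plus an assumed 100 MB/s rebuild rate (my number, purely illustrative):

```python
disk_gb, used_gb, rate_mb_s = 2000, 100, 100   # 2 TB drive, 100 GB in use

# A traditional rebuild copies every sector; a ZFS resilver touches only
# allocated blocks, so the exposure window shrinks proportionally.
full_rebuild_h = disk_gb * 1000 / rate_mb_s / 3600
resilver_h = used_gb * 1000 / rate_mb_s / 3600
print(f"whole-disk rebuild: {full_rebuild_h:.1f} h")   # around 5.6 hours
print(f"ZFS resilver:       {resilver_h:.2f} h")       # around 0.28 hours
```

At 5% utilisation, the resilver finishes in one twentieth of the time, so the window for a second failure shrinks by the same factor.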
Regarding the read and write performance issues noted above, ZFS is the only "SSD aware" filesystem. What that means is ZFS understands about SSDs and uses them intelligently for both reads and writes to drive greater performance thus alleviating the need for "tiers of storage" and the associated management overhead maintaining data in the correct tier.
I am more than happy to explain this in detail off forum if anyone wants to know more.
What systems now support ZFS? It kind of got muddy after Oracle bought Sun.
Sun Unified Storage | Flash Optimized Storage | Oracle.
These use the latest Intel Nehalem and Westmere CPUs and support read and write SSDs (model dependent).
If you need more information then drop me a line and I can put you in touch with your local Oracle storage sales person.
The server runs some kind of Linux-based OS. Several people on this forum have obviously found the performance of these devices just fine, as they were recommended in a couple of recent threads where people were asking about storage servers. The server offers iSCSI, so it must be capable of good enough performance to act as a VM disk image host.
If the above is correct, is it likely that the QNAP server is using Linux's standard mdadm software RAID to run its RAID array, or are they likely to have had to write their own RAID system of some kind? If the performance of mdadm RAID is up to running an iSCSI server, why do people bother buying hardware RAID cards in the first place? Is there likely to be some practical limit to the number of disks mdadm RAID can handle? Is 8 maybe the most you should expect to be able to use?
In addition to the possible overhead of software RAID, there's the setup expertise -- the "care and feeding" part. I agree that VxVM, mdadm, and the like are not rocket science, but plugging in a card and running a vendor utility are even less so. I'd love a Hitachi VSP for every project I do, but I don't have that much budget. I've also done plenty of pro bono mdadm setups with great success (I recommend webmin to remove much of the grunt work). Depends on the money and the skill level.
And while I have no experience with QNAP, I would not be surprised to see them using FOSS tools like Linux and mdadm. But if my impression is right, understand that they're devoting all the CPU power on that appliance to running mdadm, NFS, Samba, or some combination of all 3. The CPU has no other work. Hence, you can consider it hardware RAID.
In my day job, I have worked with storage engineers from several huge, 3-letter vendors, and been given logins to what were obviously customized Linux-based appliances under the cover. If you're feeling really curious, pull a disk and mount it on a Linux system. See if mdadm can read the header.
Ultimately, RAID cards are easier than mdadm, and you don't need much for care and feeding. How many people running mdadm run checkarray or some similar tool religiously? If you can afford them, those cards have their advantages.
IIRC, mdadm has a limit of 28 devices per array. There are ways around that, but I think the stock kernel then gets confused. I have never used more than 8 in a RAID 5; disks do fail. Then I've used LVM to stripe or "glue" the individual arrays together. But that's more of a personal choice than anything else.
plexer: I have a few 1TB disks sitting outside my system ready to swap in, just in case.
How good is your monitoring? Will you know promptly when you need to replace a failed disk? The hot spare makes up for a less-effective monitoring system, giving you more time to actually notice a disk failed before it becomes a crisis. If you have Nagios or similar watching everything and emailing alerts, this is less important.
How physically accessible is the system? Do you have on-site support regularly available, at least during normal business hours? If it's a "one person show," does that techie take extended vacations? Are there weather-accessibility issues? For remote sites, how long is the travel time? The hot spare will buy more time to actually get to the task of changing the disk. But if those are not a factor, you may have plenty of time to swap disks manually.
How long does it take to acquire another disk? If the vendor takes 2 weeks to get another disk, and you don't have a spare on the shelf, you might get nervous in the interim. If you have a shelved spare, or can "share spares" among several teams / locations / schools / organizations, this may not be a concern.
How big is the disk / how busy will the system be / how long does it take to rebuild? The hot spare starts the rebuild as soon as the failure is detected. If rebuild times are not long (you can test this at any time, as long as you have a good backup first), then starting the rebuild later rather than sooner is not a problem.
Can you stand some small risk of downtime if the unlikely happens? Probably 'yes', but consider it. You could get a run of bad disks. It's rare, but we lost a pile of them in quick succession some years ago. Our SAN systems were slamming in hot spares (we allocate one for every 30 disks in service) at a (comparatively) stunning rate. This was a real corner condition, but the hot spares earned their keep in that we had zero downtime.
How are your backups? Do you test recovery regularly? I recently ran into an issue where a specialized backup technology from a really big vendor worked perfectly during commissioning tests, and subsequently during annual D/R tests. Until we grew the LUN size over a certain number of TB. Then it went kerblooey. All the backups were running perfectly, but we were still vulnerable due to a bug we didn't know about.
In the end, only you can decide. RAID 6 already goes a long way toward letting you sleep at night; you may need nothing else. Best of luck.
Have a look at ZFS (ZFS - Wikipedia, the free encyclopedia) and you get lots of good things for free, like unlimited snapshots, software RAID (RAID 5, RAID 6, mirroring, triple-parity RAID), deduplication, NFS, SMB, iSCSI, etc., etc.
ZFS is free to download and use, and as it is a copy-on-write filesystem you can avoid lots of nasty things that other filesystems may bring you (silent data corruption, bad blocks, phantom writes, etc.).
Just my tuppence worth.