^ yep. To further that, you can consider 2.5-inch over 3.5-inch drives, as you generally get around twice as many disks in the same enclosure. You're generally limited to 10k RPM disks, but that's negated by the doubled spindle count. Plus it gives you plenty of room for hot spares.
@LukeC aren't you glad you opened this can of worms?
EMC always recommends RAID 5 in a 4+1 layout for performance; anything else is considered slower in their setups.
For the OP, my personal choice would be RAID 10 or 50. For RAID 6 you need a really good RAID card, as it has to calculate a second, independent parity block on every write, which requires a lot more CPU cycles.
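To make the "second parity" point above concrete, here's a minimal sketch of the two computations. The P parity is the plain XOR that RAID 5 also does; the Q syndrome is a second, independent checksum computed in GF(2^8), which is the extra work a weak controller chokes on. The generator, polynomial, and block contents are illustrative assumptions, not any vendor's actual implementation.

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8), reducing by the common polynomial 0x11d."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return result

def raid5_parity(blocks):
    """P parity: byte-wise XOR across all blocks -- one cheap pass."""
    p = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            p[i] ^= byte
    return bytes(p)

def raid6_q_parity(blocks):
    """Q syndrome: XOR of g^i * D_i in GF(2^8) -- the costlier second pass
    RAID 6 adds on top of P."""
    q = bytearray(len(blocks[0]))
    for i, block in enumerate(blocks):
        coeff = 1
        for _ in range(i):              # coeff = g^i with generator g = 2
            coeff = gf_mul(coeff, 2)
        for j, byte in enumerate(block):
            q[j] ^= gf_mul(coeff, byte)
    return bytes(q)

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # pretend 4 data disks (4+2)
p = raid5_parity(data)
q = raid6_q_parity(data)

# Losing ONE data disk: rebuild from P alone, exactly as RAID 5 would.
# Q only earns its keep when a SECOND disk dies mid-rebuild.
lost = 2
survivors = [d for i, d in enumerate(data) if i != lost]
rebuilt = raid5_parity(survivors + [p])
assert rebuilt == data[lost]
```

Single-disk recovery never touches Q, which is why RAID 6 reads like RAID 5; the CPU cost shows up on writes, where both P and Q must be updated.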
For a database that is read-mostly, RAID 5 or 6 is fine. My organization uses it all the time for databases with low write/delete/modify rates, and we use RAID 6 for much of our data warehouses (temp space and index rebuild areas excepted). RAID 5 is only slower during write ops, not read ops, so consider the usage.
RAID 1 and its big brother RAID 10 we use for fast transactional systems, beginning with the log space (since that's written to on every write op), then the temp space (large sorts, etc.), then as needed for high-change-rate tablespaces.
For spindle capacities > 600 GB, we go to RAID 6, since the hot-spare rebuild times start to get long, and we really don't want to lose another drive while one is rebuilding. Otherwise it's "Hope the backup set is current," and "Tell the users it'll be down till the restore finishes."
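The capacity threshold above follows from simple arithmetic: rebuild time grows linearly with drive size, and the whole rebuild window is exposure to a second failure. The 40 MB/s sustained rebuild rate below is my own assumption (rebuilds compete with production I/O), not a vendor figure; adjust it for your array.

```python
def rebuild_hours(capacity_gb: float, rate_mb_s: float = 40.0) -> float:
    """Hours to rewrite a full replacement drive at a sustained rate.
    40 MB/s is an assumed effective rate under production load."""
    return capacity_gb * 1024 / rate_mb_s / 3600

for gb in (300, 600, 1200, 2000):
    print(f"{gb:>5} GB drive: ~{rebuild_hours(gb):.1f} h rebuild window")
```

Once the window stretches past half a day, the odds of a second failure inside it stop being negligible, which is the whole argument for the second parity disk.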
Also, don't confuse throughput/bandwidth with access time. They only correspond when there are enough active users, and then only roughly. Without a demonstrated bottleneck, adding more spindles to a RAID set (thus widening the stripe) means more users can get to data in the same time, but not necessarily that the same number of users can get to the same data faster.
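A quick sketch of that throughput-vs-access-time distinction: aggregate random-read IOPS scales with spindle count, but a single read still pays one drive's seek plus rotational latency. The latency figures are typical 10k RPM assumptions, not measurements from any particular array.

```python
SEEK_MS = 4.0        # assumed average seek for a 10k RPM drive
ROTATION_MS = 3.0    # half a rotation at 10k RPM: 60000 / 10000 / 2
SERVICE_MS = SEEK_MS + ROTATION_MS

def raidset_iops(spindles: int) -> float:
    """Aggregate random-read IOPS: spindles serve independent requests
    in parallel, so throughput scales with the count."""
    return spindles * 1000 / SERVICE_MS

for n in (5, 10, 20):
    print(f"{n:>2} spindles: ~{raidset_iops(n):.0f} IOPS aggregate, "
          f"but each individual read still takes ~{SERVICE_MS:.0f} ms")
```

Doubling the spindles doubles the aggregate number but leaves the per-request number alone, which is exactly the "more users, not faster users" point.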
Lastly, remember the hot spares. Our rubric is 1 HS per 30 disks in service, provided the hot spare can take over for any other spindle in the same VNX or VMAX (my teams manage several data centers with rather sizeable SANs in each). If you do not have hot spares, you MUST HAVE ACTIVE MONITORING watching for failures. I have seen too many organizations lose a disk in a redundant system and not know it until a second one fails. "Oops, this is going to suck."
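For completeness, the 1-per-30 rubric above as a one-liner; the ratio is the poster's site policy, not a universal rule, so treat it as a tunable parameter.

```python
import math

def hot_spares_needed(disks_in_service: int, ratio: int = 30) -> int:
    """Round up so even a partial group of 30 gets its own spare."""
    return math.ceil(disks_in_service / ratio)

print(hot_spares_needed(120))   # a 120-disk array gets 4 spares
print(hot_spares_needed(31))    # 31 disks already warrant a second spare
```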
As I recently explained to a colleague in another division, my preference (not always possible) is to start as small as possible and "watch and grow" the system as performance data becomes available. I advised him to keep 30% of his space in reserve and adjust its placement as he learns more.
Best of luck.