Originally Posted by LukeC
I was thinking of using 15k 600GB SAS Drives.
13th June 2013, 03:56 PM #16
The other thing to consider with RAID is that more spindles = more speed (more I/O operations in flight at once). So, depending on your enclosure, using say 8×300GB drives might be better than 4×600GB drives, which also opens up the RAID options into RAID 50 or RAID 60 territory. Also, if you are thinking of a large array in terms of disk numbers, then seriously consider 1 or 2 hot spares.
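A quick back-of-the-envelope sketch of the spindle-count point. The per-drive IOPS figures below are common rules of thumb, not vendor specs, and real arrays are further limited by the controller and workload:

```python
# Rough spindle-count comparison: aggregate random-read IOPS.
# Per-drive figures are illustrative rules of thumb, not measurements.
IOPS_15K = 175  # typical random IOPS for a 15k SAS drive (assumption)
IOPS_10K = 140  # typical random IOPS for a 10k SAS drive (assumption)

def array_read_iops(drives: int, per_drive_iops: int) -> int:
    """Random reads scale roughly with spindle count (ignoring controller limits)."""
    return drives * per_drive_iops

four_big = array_read_iops(4, IOPS_15K)     # 4 x 600GB 15k
eight_small = array_read_iops(8, IOPS_15K)  # 8 x 300GB 15k
print(four_big, eight_small)  # 700 1400 - twice the spindles, roughly twice the reads
```

Same usable capacity either way, but the eight-drive layout can service roughly twice as many concurrent random reads.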
Originally Posted by LukeC
13th June 2013, 04:27 PM #17
^ yep. To further that, you can consider 2.5-inch over 3.5-inch drives, as you generally get around twice as many disks in the same enclosure. You generally only get 10k disks, but that is offset by the doubled spindle count. Plus it gives you plenty of room for hot spares.
@LukeC aren't you glad you opened this can of worms?
13th June 2013, 11:51 PM #18
Not sure I agree with the more-spindles statement. With certain RAID levels, yes, more drives can be quicker, but again it all depends on the data being written to and read from them.
Originally Posted by tmcd35
EMC always recommend RAID 5 (4+1) for performance; anything else is considered slower in their setup.
For the OP, my personal choice would be RAID 10 or 50. For RAID 6 you need a really good RAID card, as it has to calculate two independent parity blocks per stripe, which requires a lot more CPU cycles.
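For context on why RAID 6 costs more: RAID 5's single parity (P) is just a byte-wise XOR across the stripe, while RAID 6 adds a second, independent syndrome (Q), typically a Reed-Solomon code over GF(2^8), which is the expensive part. A minimal sketch of the cheap half, the XOR parity and single-disk recovery:

```python
from functools import reduce

def xor_parity(blocks: list[bytes]) -> bytes:
    """RAID 5's P parity: byte-wise XOR across all blocks in a stripe."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Toy stripe of three equal-sized data blocks (illustrative values).
stripe = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
p = xor_parity(stripe)

# Lose one data block: XOR of the survivors plus parity recovers it.
lost = stripe[1]
recovered = xor_parity([stripe[0], stripe[2], p])
print(recovered == lost)  # True
```

RAID 6's Q syndrome involves Galois-field multiplies on every write, which is why a hardware card with a dedicated parity engine matters much more there than for RAID 5.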
14th June 2013, 01:41 AM #19
All our production stuff runs off RAID 10 LUNs; I have two RAID 6 LUNs for the testing kind of stuff.
Originally Posted by VeryPC_Ed
14th June 2013, 05:14 AM #20
What will your DBMS be doing?
For a database that is read-mostly, RAID 5 or 6 is fine. My organization uses it all the time for low write/delete/modify databases. And we use RAID 6 for much of our data warehouses (tempspace and index rebuild areas excepted). RAID 5 is only slower during write ops, not read ops. So consider the usage.
RAID 1 and its big brother RAID 10 we use for fast transactional systems, beginning with the log space (since that's written to with every write op), then the temp space (large sorts, etc.), then as needed for high-change-rate tablespaces.
For spindle capacities > 600GB, we go to RAID 6, since hot-spare rebuild times start to get long, and we really don't want to lose another drive while one is rebuilding. Otherwise it's "Hope the backup set is current," and "Tell the users it'll be down till the restore finishes."
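The rebuild-time concern is easy to ballpark: time is roughly capacity divided by sustained rebuild rate. The 50 MB/s rate below is purely illustrative; real rebuilds compete with foreground I/O and can run far slower:

```python
# Rough rebuild-time estimate: capacity / sustained rebuild rate.
# 50 MB/s is an illustrative assumption; foreground I/O slows real rebuilds.
def rebuild_hours(capacity_gb: float, rate_mb_s: float = 50.0) -> float:
    return capacity_gb * 1024 / rate_mb_s / 3600

print(round(rebuild_hours(600), 1))   # 3.4 hours for a 600GB spindle
print(round(rebuild_hours(2000), 1))  # 11.4 hours for a 2TB spindle
```

The window of second-drive-failure exposure grows linearly with spindle size, which is exactly why bigger drives push the choice toward double parity.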
Also, don't confuse throughput/bandwidth with access time. They only correspond when there are enough active users, and then only roughly. Without a demonstrated bottleneck, adding more spindles to a RAID set (thus widening the stripe) means more users can get to data in the same time, but not necessarily that the same number of users can get to the same data faster.
Lastly, remember the hot spares. Our rubric is 1 hot spare per 30 disks in service, provided the hot spare can take over for any other spindle in the same VNX or VMAX (my teams manage several data centers with rather sizeable SANs in each). If you do not have hot spares, you MUST HAVE ACTIVE MONITORING watching for failures. I have seen too many organizations lose a disk in a redundant system and not know it until a second one fails. "Oops, this is going to suck."
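That 1-per-30 rubric works out to a simple rounding-up calculation (a trivial sketch of the rule stated above, assuming spares are globally assignable within the frame):

```python
import math

def hot_spares_needed(disks_in_service: int, disks_per_spare: int = 30) -> int:
    """One hot spare per 30 in-service disks, rounded up (the rubric above)."""
    return math.ceil(disks_in_service / disks_per_spare)

print(hot_spares_needed(24), hot_spares_needed(60), hot_spares_needed(61))  # 1 2 3
```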
As I recently explained to a colleague in another division, my preference (not always possible) is to start as small as possible and "watch and grow" the system as performance data becomes available. I advised him to keep 30% of his space in reserve and adjust its placement as he learns more.
Best of luck.