Hardware Thread: RAID, Page 2 of 2 (Results 16 to 20 of 20)
  1. #16 - tmcd35
    Quote Originally Posted by LukeC View Post
    I was thinking of using 15k 600GB SAS Drives.
    The other thing to consider with RAID is that more spindles = more speed (lower seek/access times). So, depending on your enclosure, using say 8x300GB drives might be better than 4x600GB drives, which also opens up the RAID options into RAID-50 or RAID-60 territory. Also, if you are thinking of a large array in terms of disk numbers, then seriously consider one or two hot spares.
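    The trade-off above can be put into rough numbers. A minimal back-of-envelope sketch, assuming a ballpark per-drive IOPS figure for 15k SAS (not a measured value) and RAID 5 for the capacity comparison:

    ```python
    # Hypothetical comparison of the two enclosure options mentioned above
    # (8x300GB vs 4x600GB). DRIVE_IOPS_15K is an assumed ballpark figure
    # for a 15k RPM SAS drive, not a vendor spec.

    DRIVE_IOPS_15K = 175

    def raid5_usable(n_drives, size_gb):
        """RAID 5 usable capacity: one drive's worth lost to parity."""
        return (n_drives - 1) * size_gb

    def aggregate_read_iops(n_drives, per_drive_iops=DRIVE_IOPS_15K):
        """Random reads can be spread across all spindles."""
        return n_drives * per_drive_iops

    for n, size in [(8, 300), (4, 600)]:
        print(f"{n}x{size}GB RAID 5: {raid5_usable(n, size)}GB usable, "
              f"~{aggregate_read_iops(n)} read IOPS")
    # 8x300GB gives more usable space AND roughly twice the spindle IOPS.
    ```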

  2. #17
    ^ Yep. To further that, you can consider 2.5-inch over 3.5-inch drives, as you generally get around twice as many disks in the enclosure. You generally only get 10k disks, but that is negated by the doubled spindle count. Plus it gives you plenty of room for hot spares.

    @LukeC aren't you glad you opened this can of worms?

  3. #18
    Quote Originally Posted by tmcd35 View Post
    The other thing to consider with RAID is more spindles = more speed (lower seek/access times). So depending on your enclosure using say 8x300Gb drives might be better than 4x600Gb drives which also opens up the RAID options into the RAID-50 or RAID-60 territory. Also if you are thinking of a large array, in terms of disk numbers, then seriously consider 1 or 2 hot spares.
    Not sure I agree with the more-spindles statement. With certain RAID levels, yes, more drives can be quicker, but it all depends on the data being written to and read from them.
    EMC always recommends RAID 5 (4+1) for performance; anything else is considered slower in their setups.

    For the OP, my personal choice would be RAID 10 or 50. For RAID 6 you need a really good RAID card, as it has to calculate parity twice, which requires a lot more CPU cycles.
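    To see where that extra computation comes from: RAID 5 keeps a single XOR parity block per stripe, while RAID 6 adds a second, independently computed parity on top. A minimal sketch of the RAID 5 side, modelling blocks as small integers:

    ```python
    # Minimal illustration of RAID 5 style XOR parity: parity is the XOR
    # of all data blocks, and any single lost block can be rebuilt by
    # XOR-ing the parity with the survivors. RAID 6 computes a second,
    # different parity per stripe, hence the extra CPU cost per write.

    from functools import reduce
    import operator

    def xor_parity(blocks):
        """Parity block for a stripe: XOR of all data blocks."""
        return reduce(operator.xor, blocks)

    def rebuild(surviving_blocks, parity):
        """A single missing block is parity XOR the surviving blocks."""
        return reduce(operator.xor, surviving_blocks, parity)

    data = [0b1010, 0b0110, 0b1111]   # three data blocks on three drives
    p = xor_parity(data)              # stored on a fourth drive

    # Simulate losing the middle drive and rebuilding its block:
    recovered = rebuild([data[0], data[2]], p)
    assert recovered == data[1]
    ```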

  4. #19
    Quote Originally Posted by VeryPC_Ed View Post
    Hi Luke, I would suggest looking at Raid 10 as this will provide the best mix of performance and redundancy, especially if you are running i/o intensive applications such as SQL and Exchange.
    Ed
    All our production stuff runs off RAID 10 LUNs; I have two RAID 6 LUNs for the testing kind of stuff.

  5. #20

    What will your DBMS be doing?

    For a database that is read-mostly, RAID 5 or 6 is fine. My organization uses it all the time for low write/delete/modify databases. And we use RAID 6 for much of our data warehouses (tempspace and index rebuild areas excepted). RAID 5 is only slower during write ops, not read ops. So consider the usage.
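    The read-mostly point comes down to the classic write penalty: a small random write costs extra backend I/Os under parity RAID. A rough sketch using the standard textbook penalty factors (2 for RAID 10, 4 for RAID 5, 6 for RAID 6) and an assumed per-drive IOPS figure:

    ```python
    # Why read-mostly workloads tolerate RAID 5/6: each small random write
    # multiplies into extra backend I/Os. Penalty factors are the standard
    # textbook values; the 175 IOPS per drive is an assumed ballpark.

    WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

    def effective_iops(n_drives, drive_iops, read_fraction, level):
        """Front-end IOPS an array can sustain for a given read/write mix."""
        raw = n_drives * drive_iops
        write_fraction = 1 - read_fraction
        return raw / (read_fraction + write_fraction * WRITE_PENALTY[level])

    # 8 drives, 90% reads: the levels barely differ.
    # Flip to 90% writes and RAID 6 falls far behind RAID 10.
    for mix in (0.9, 0.1):
        for level in WRITE_PENALTY:
            print(f"{level}, {mix:.0%} reads: "
                  f"~{effective_iops(8, 175, mix, level):.0f} IOPS")
    ```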

    RAID 1 and it's big brother RAID 10 we use for fast transactional systems, beginning with the log space (since that's written to with every write op), then for the temp space (large sorts, etc), then as needed for high change rate tablespaces.

    For spindle capacities > 600GB, we go to RAID 6, since the hot-spare rebuild times start to get longer, and we really don't want to lose another drive while one is rebuilding. Else it's "Hope the backup set is current," and "Tell the users it'll be down till the restore finishes."
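    A quick sketch of how rebuild time scales with drive size, using an assumed sustained rebuild rate (real rates vary widely with controller, load, and drive type):

    ```python
    # Back-of-envelope estimate behind the ">600GB means RAID 6" rule:
    # bigger drives mean longer rebuilds, widening the window in which a
    # second failure kills a RAID 5 set. The 50 MB/s sustained rebuild
    # rate is an assumed figure, not a measurement.

    REBUILD_MB_PER_S = 50

    def rebuild_hours(drive_gb, mb_per_s=REBUILD_MB_PER_S):
        """Hours to resilver one drive at a sustained rebuild rate."""
        return drive_gb * 1024 / mb_per_s / 3600

    for size in (300, 600, 2000):
        print(f"{size}GB drive: ~{rebuild_hours(size):.1f}h to rebuild")
    ```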

    Also, don't confuse throughput/bandwidth with access time. They only correspond when there are enough active users, and then only roughly. Without a demonstrated bottleneck, adding more spindles to a RAID set (thus raising the stripe width) means more users can get to data in the same time, but not necessarily that the same number of users can get to the same data faster.
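    That distinction can be sketched numerically: a single random I/O still pays one seek plus rotational latency regardless of array width, while aggregate throughput scales with spindle count. The latency figures below are assumed round numbers for illustration:

    ```python
    # Throughput vs access time: more spindles raise aggregate IOPS, but
    # the per-request service time is unchanged. Seek and rotational
    # latencies below are assumed illustrative values.

    def service_time_ms(seek_ms=3.5, rotational_ms=2.0):
        """One random I/O pays one seek + avg rotational delay,
        however wide the array is."""
        return seek_ms + rotational_ms

    def array_throughput_iops(n_spindles, svc_ms):
        """Aggregate IOPS scales linearly with spindle count."""
        return n_spindles * 1000 / svc_ms

    for n in (4, 8):
        svc = service_time_ms()
        print(f"{n} spindles: ~{array_throughput_iops(n, svc):.0f} IOPS, "
              f"{svc} ms per request either way")
    ```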

    Lastly, remember the hot spares. Our rubric is 1 HS per 30 disks in service, provided the hot spare can take over for any other spindle in the same VNX or VMAX (my teams manage several data centers with rather sizeable SANs in each). If you do not have hot spares, you MUST HAVE ACTIVE MONITORING watching for failures. I have seen too many organizations lose a disk in a redundant system and not know it until a second one fails. "Oops, this is going to suck."
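    The 1-per-30 rubric is a site policy rather than a standard, but it is easy to express; a trivial sketch:

    ```python
    # The 1-hot-spare-per-30-disks rubric described above (a site policy,
    # not an industry standard), with a floor of one spare.

    import math

    def hot_spares_needed(disks_in_service, ratio=30):
        """Minimum hot spares under the 1-per-30 rubric."""
        return max(1, math.ceil(disks_in_service / ratio))

    for disks in (5, 30, 31, 90):
        print(f"{disks} disks -> {hot_spares_needed(disks)} hot spare(s)")
    ```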

    As I recently explained to a colleague in another division, my preference (not always possible) is to start as small as possible and "watch and grow" the system as performance data becomes available. I advised him to keep 30% of his space in reserve and adjust its placement as he learns more.

    Best of luck.


