Hardware Thread: ESXi and HDDs
  1. #1
    ESXi and HDDs

    A friend of mine is looking to go down the route of better implementing ESXi on his network (he's on version 3 at the moment). I've suggested that if all he's doing is running a maximum of 4 VMs, he's probably better off just whacking in some HDDs as RAID 5. However, he has other ideas and was wondering what your thoughts on the following are.

    He has 4 VMs doing different things: WSUS+WDS, AV, Print Server and SIMS.

    He believes giving each VM its own individual HDD is better than grouping them in a RAID 5 setup. So he was thinking of setting it up like this:

    SIMS: 2 x 146GB HD RAID 1+0
    AV: 2 x 72GB HD RAID 1+0
    Print: 2 x 72GB HD RAID 1+0
    WSUS+WDS: 2 x 146GB HD RAID 1+0

    All in a single server - HP ML350 G5/HP ML370 G5.

    What do you guys think?

  2. #2 - tmcd35
    If he can fit all 8 drives in the case and on the controller, and can afford to purchase all 8 drives, then he's probably right - though there's not much future expandability. It would certainly be both faster and offer better protection than RAID 5. And he'll be using RAID 1 on each set, not RAID 10 (he'd need 16 drives in total for RAID 10).

    For a more practical setup, a better price, and perhaps slightly better future-proofing, you're right that RAID 5 is better. I'd go with 5 larger drives, giving a larger total capacity in RAID 5, over 8 drives in RAID 1. But that's just me.
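    As a back-of-the-envelope check on the two proposals, here is a short sketch using the drive sizes from the thread. It only assumes the standard capacity rules: RAID 1 yields one drive's worth of space per mirrored pair, and an n-drive RAID 5 yields (n - 1) drives' worth.

    ```python
    # Usable capacity of the two proposals, using the drive sizes from the thread.
    # RAID 1: each mirrored pair yields one drive's worth of usable space.
    # RAID 5: an n-drive array yields (n - 1) drives' worth of usable space.

    def raid1_usable(mirror_sizes_gb):
        # One entry per mirrored pair, giving the size of a single drive in it.
        return sum(mirror_sizes_gb)

    def raid5_usable(n_drives, size_gb):
        return (n_drives - 1) * size_gb

    # Four mirrored pairs: 2x146, 2x72, 2x72, 2x146 (8 drives total)
    print(raid1_usable([146, 72, 72, 146]))  # 436 GB usable from 8 drives
    # Five 146GB drives in RAID 5
    print(raid5_usable(5, 146))              # 584 GB usable from 5 drives
    ```

    So the 5-drive RAID 5 proposal gives more usable space than all 8 drives in mirrored pairs, which is part of the utilisation argument made below.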

  3. #3 - apaton
    SIMS: 2 x 146GB HD RAID 1+0
    AV: 2 x 72GB HD RAID 1+0
    Print: 2 x 72GB HD RAID 1+0
    WSUS+WDS: 2 x 146GB HD RAID 1+0
    I'm not keen on this method. The suggested configuration has advantages for I/O isolation, but in my estimation the utilisation of the storage would be poor.

    Consider why people use ESXi in the first place: better utilisation of CPU and memory. So why not disks too? Using all the disks will improve IOPS, but there will be I/O contention, just like for CPU/memory.

    I don't know much about SIMS, but the other applications would all work fine on an internal RAID 5 controller (256 MB cache) with SAS disks.

    My suggested configuration would be a single VMFS volume based on RAID 5 with 7 SAS disks, including 1 hot spare.

    This is my opinion and not the definitive answer - it all depends on the applications' I/O requirements.

  4. #4
    Quote Originally Posted by tmcd35 View Post
    If he can fit all 8 drives in the case and on the controller, and can afford to purchase all 8 drives, then he's probably right - though there's not much future expandability. It would certainly be both faster and offer better protection than RAID 5. And he'll be using RAID 1 on each set, not RAID 10 (he'd need 16 drives in total for RAID 10).

    For a more practical setup, a better price, and perhaps slightly better future-proofing, you're right that RAID 5 is better. I'd go with 5 larger drives, giving a larger total capacity in RAID 5, over 8 drives in RAID 1. But that's just me.

    Yeah, my thoughts exactly. I figured if he got himself 15k RPM HDs he'd be laughing, as there shouldn't be any noticeable reduction in performance. Then again, I suppose that's what he's after: better protection and a whole lot more speed.

    Mind you, once he's virtualised SIMS, he's going to use the SIMS server as another one of his ESXi hosts. So I guess in that sense he does have some room for expansion.

  5. #5
    Quote Originally Posted by apaton View Post
    I'm not keen on this method. The suggested configuration has advantages for I/O isolation, but in my estimation the utilisation of the storage would be poor.

    Consider why people use ESXi in the first place: better utilisation of CPU and memory. So why not disks too? Using all the disks will improve IOPS, but there will be I/O contention, just like for CPU/memory.

    I don't know much about SIMS, but the other applications would all work fine on an internal RAID 5 controller (256 MB cache) with SAS disks.

    My suggested configuration would be a single VMFS volume based on RAID 5 with 7 SAS disks, including 1 hot spare.

    This is my opinion and not the definitive answer - it all depends on the applications' I/O requirements.
    That's his primary target: isolating the I/O without risking performance on the other VMs. I admit it's poor utilisation of storage, and this is where I was hoping to make him see sense. My recommendation was to put in 5 x 146GB as RAID 5, or 6 in total if the server is capable of using a HD as a hot spare. But he doesn't see it like that. I believe one of the servers can only handle 8 HDDs, so he wouldn't be able to increase storage in future.

  6. #6 - tmcd35
    Try this argument on him:

    Always design servers for possible future needs, not purely to satisfy current requirements.

    Give him two "what ifs" as examples:

    What if SIMS needed an extra 20GB of space? Since he's got a fixed 146GB mirror, he can't just increase the size of the .vmdk file to suit.

    What if the Science department insist on running the dreadful AQA exams again? Do you a) buy another new PC to run the server, b) run the AQA service on an existing server, say your print server, and risk bringing down the print server when AQA goes belly up (as it will), or c) run AQA in its own isolated VM - oh, you can't, because the ESXi server was designed to run four specific VMs and that's it!

    OK, it's long-winded, but it illustrates the point. You're right, he's wrong: a single larger RAID 5 array hosting the VMDKs for all the existing VMs lets you expand their size if ever needed, and lets you run additional VMs on the server in the future.

  7. #7 - apaton
    He wants performance, but creating small mirrors caps performance on an application-by-application basis.

    Mirrored 2 x 146GB 10k rpm will give us a maximum of 280 read IOPS, 140 write IOPS.
    RAID 5 6 x 146GB 10k rpm will give us a maximum of 840 read IOPS, 700 write IOPS.

    RAID 5 in this configuration would be:
    3.5x faster for reads
    5x faster for writes

    Now, these are theoretical numbers, due to the RAID controller and cache, but I'm sure they look better than sticking with just mirrored pairs.

    All VMs will be sharing the RAID 5, but I'm sure the performance is enough to cope. If not, tell me why not!

    Hope this helps!
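    These figures can be sanity-checked with a rough model. This is a sketch assuming ~140 IOPS per 10k rpm SAS drive and ignoring controller cache; note that applying the usual RAID 5 write penalty of 4 gives a lower write figure than the 700 quoted, which is the gap the write-back cache narrows in practice.

    ```python
    # Rough theoretical IOPS for the arrays discussed above, assuming ~140 IOPS
    # per 10k rpm SAS drive and ignoring controller cache.

    PER_DRIVE_IOPS = 140

    def read_iops(n_drives):
        # Every drive in the array can service reads concurrently.
        return n_drives * PER_DRIVE_IOPS

    def write_iops(n_drives, penalty):
        # Each logical write costs `penalty` physical I/Os:
        # RAID 1 -> 2 (write both mirrors); RAID 5 -> 4 (read old data,
        # read old parity, write new data, write new parity).
        return n_drives * PER_DRIVE_IOPS // penalty

    print(read_iops(2), write_iops(2, penalty=2))  # mirror pair: 280 reads, 140 writes
    print(read_iops(6), write_iops(6, penalty=4))  # 6-drive RAID 5: 840 reads, 210 writes
    ```

    The mirror numbers match the post above; the RAID 5 write figure comes out at 210 rather than 700 once the penalty is included, which is essentially the point raised in the next post.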

  8. #8 - tmcd35
    Quote Originally Posted by apaton View Post
    Mirrored 2 x 146GB 10k rpm will give us a maximum of 280 read IOPS, 140 write IOPS.
    RAID 5 6 x 146GB 10k rpm will give us a maximum of 840 read IOPS, 700 write IOPS.
    Just to play devil's advocate (as I agree RAID 5 is the way to go), as you say...

    Using all the disks will improve IOPS, but there will be I/O contention, just like for CPU/memory.
    210 read IOPS and 175 write IOPS per VM doesn't sound as good, and adding a 5th VM will obviously reduce those figures (assuming each VM requires equal disk access time).

    EDIT: actually, won't the write IOPS be a lot lower? Each time you write data to a RAID 5 array it is striped across all the disks. So that's a 6:1 write I/O ratio.

    There's only one thing for it - more spindles rotating faster! I propose RAID 5 with 7 x 146GB 15k rpm drives. (I like spending other people's money.)

  9. #9 - apaton
    tmcd35,

    I like playing devil's advocate - you can't, and shouldn't, accept everything someone says. You need to form your own opinion.

    Quote Originally Posted by tmcd35 View Post
    210 read IOPS and 175 write IOPS per VM doesn't sound as good, and adding a 5th VM will obviously reduce those figures (assuming each VM requires equal disk access time).
    I must assume that each VM doesn't require equal access, because that's real life.

    Quote Originally Posted by tmcd35 View Post
    EDIT: actually, won't the write IOPS be a lot lower? Each time you write data to a RAID 5 array it is striped across all the disks.
    A 10k SAS drive is capable of 140 IOPS; the challenge now lies with the SAS RAID controller and its cache making use of all the disks.

    Quote Originally Posted by tmcd35 View Post
    So that's a 6:1 write I/O ratio.
    That's the worst-case scenario: for a random write smaller than the RAID 5 stripe, performance will be poor. This is well known and understood.

    Again, this is where the RAID controller earns its money - by bunching up write I/Os so that when they go to disk we get optimal, full-stripe writes.
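    The small-write versus full-stripe point can be illustrated with a simplified cost model. The chunk size and data-disk count below are illustrative assumptions, not figures from the thread, and real controllers coalesce writes in cache before any of this applies.

    ```python
    # Simplified count of physical I/Os for a RAID 5 write, showing why writes
    # smaller than the stripe are expensive. Chunk size and disk count here are
    # hypothetical, chosen only to illustrate the arithmetic.

    CHUNK = 64 * 1024   # bytes written to each disk per stripe
    DATA_DISKS = 6      # a 7-disk RAID 5: 6 data chunks + 1 parity chunk per stripe

    def raid5_write_ios(write_bytes):
        stripe = CHUNK * DATA_DISKS
        full_stripes, rest = divmod(write_bytes, stripe)
        # Full-stripe write: parity is computed from the new data, so no reads
        # are needed - just one write per data disk plus one parity write.
        ios = full_stripes * (DATA_DISKS + 1)
        if rest:
            chunks = -(-rest // CHUNK)  # ceiling division
            # Partial stripe: read-modify-write. Read+write each data chunk,
            # plus read+write of the parity chunk.
            ios += chunks * 2 + 2
        return ios

    print(raid5_write_ios(64 * 1024))           # small write: 4 I/Os (the "penalty of 4")
    print(raid5_write_ios(CHUNK * DATA_DISKS))  # full stripe: 7 I/Os for 6 chunks of data
    ```

    A full stripe moves six chunks of data for seven physical I/Os, while a single-chunk write costs four, which is why a cache that batches small writes into full stripes makes such a difference.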

  10. #10 - AIT
    Setting aside the argument about HDD configuration: personally, I would go with his original suggestion, even though it doesn't give you future expandability.

    You're suggesting the G5 series of server; I would seriously look at the new G6 generation.

    Vastly improved chipset, and it can deliver far greater performance. I recently purchased 3 of the new G6s and would say they are worth the extra money. I was extremely impressed.


