  1. #1 - smarties11

    Virtualisation Advice - SAN or no SAN!

    Hi All,

    We are looking to replace our ageing server infrastructure this summer. I have quotes for a Dell R720 and an HP DL380p - both the same spec: dual Xeon E5-2640s, 96GB RAM, internal SD card for the hypervisor. We'll be buying three of these to use as ESXi hosts, and we'll eventually be running around 15 or 16 Win2k8 R2 x64 VMs.

    Now, we also have a Dell MD3200i iSCSI SAN which, at present, is only connected to our fileserver and is used for data storage (user areas, shared storage etc.).

    I can't make up my mind between the following two options....

    1. Put 4 x 300GB SAS disks in each server in a RAID 10 array and keep VMs stored locally. We'd then only buy VMware Essentials. We wouldn't have HA and vMotion, but as we will be taking backups with Veeam, if a host did fail we could restore its VMs to the two remaining hosts - though this would take time, and there would be a loss of data between the backup time and the failure time. But we would be back up and running fairly quickly.

    I was hoping to get enough money to buy a second SAN this year, but that hasn't been possible, unfortunately. So my other option is....

    2. Use the Dell MD3200i to store our VMs centrally. Purchase VMware Essentials Plus and make use of HA and vMotion. However, although the MD3200i has dual power supplies, dual quad-port NICs, RAID, and is cabled redundantly via two iSCSI switches, so the chances of failure are *very* slim, there is still a small chance that the MD3200i chassis could die, or suffer a multiple-disk failure, and I'd be left with no VMs at all! That's *really, really* scary! I'm not sure I like all my eggs in one basket. I'd save money on 15 x 300GB SAS disks in the servers (4 in each plus a hot spare in each - rough numbers sketched below), though I still might have some in one server for Exchange.
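    For what it's worth, the back-of-envelope capacity maths on option 1 (a quick sketch using only the figures above):

    ```python
    # Option 1: local RAID 10 on each ESXi host.
    # All figures come from the post above; nothing vendor-specific.
    disk_gb = 300
    disks_per_host = 4           # RAID 10 mirrors pairs, so half the raw space is usable
    hot_spares_per_host = 1
    hosts = 3

    usable_per_host_gb = disks_per_host * disk_gb / 2
    total_usable_gb = usable_per_host_gb * hosts
    disks_to_buy = hosts * (disks_per_host + hot_spares_per_host)

    print(f"Usable per host: {usable_per_host_gb:.0f} GB")   # 600 GB
    print(f"Total usable:    {total_usable_gb:.0f} GB")      # 1800 GB
    print(f"Disks to buy:    {disks_to_buy}")                # the 15 disks option 2 saves
    ```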

    I'm also not entirely sure how the MD3200i will perform with this sort of load. It has 8 x 1Gb Ethernet links to the 2 x iSCSI switches (Dell PowerConnect 5424s), though I'd have to check they are all active when not in a failover state - I think they are. And each VM host could potentially have 4 x 1Gb Ethernet links to the iSCSI switches, and then 4 x 1Gb back to the core. How would this compare to having local 10K SAS disks in the hosts?
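    Putting rough numbers on that comparison (a sketch only - the per-disk figures below are generic 10K SAS rules of thumb, not MD3200i measurements):

    ```python
    # Rough comparison: aggregate iSCSI link bandwidth vs. local 10K SAS spindles.
    # Rule-of-thumb figures, not measured values.
    link_mb_s = 125                # 1 Gb/s is ~125 MB/s before protocol overhead
    san_links = 8
    san_bw_mb_s = san_links * link_mb_s * 0.8   # assume ~80% after iSCSI/TCP overhead

    local_disks = 4                # per host; RAID 10 stripes reads across all four
    sas_10k_seq_mb_s = 100         # generic sequential MB/s per 10K SAS disk
    sas_10k_iops = 140             # generic random IOPS per 10K SAS disk

    print(f"SAN links, aggregate:    ~{san_bw_mb_s:.0f} MB/s shared by all hosts")
    print(f"Local RAID 10, per host: ~{local_disks * sas_10k_seq_mb_s} MB/s sequential")
    print(f"Local RAID 10, per host: ~{local_disks * sas_10k_iops} random read IOPS")
    # The catch: for typical VM workloads the random IOPS of the spindles behind
    # the array saturates long before the 1Gb links do, so the disk count and
    # speed inside the MD3200i matter more than the link count.
    ```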

    Price-wise, there's not a lot in it - we'd save money on server disks but spend more on VMware licensing and additional quad-port NICs.

    If I could get a second SAN it'd be a no-brainer (though I'm still sceptical of the iSCSI performance with the MD3200i hosting all the VMs), but as we only have one I'm in a quandary! I'd really appreciate your thoughts on this, especially if you have gone down one road or the other; I'd be interested to hear how that has worked out for you!

    Thanks.

  2. #2 - glennda
    If you have the SAN, use it - as you say, as long as your VMs are well backed up using Veeam you should be OK. And the thing to say to the budget holders is: you didn't give me the funding for the second SAN last year, so this year you have increased risk - it's up to you which you prefer, the increased risk or the additional expense. If they choose the increased risk, it's their fault if it dies one day.

    With regards to switching, 2 ports should be plenty and 4 ports certainly enough. I have a client running 20 servers and around 50 VDI machines with 2 x 1Gb iSCSI links from each host.

    This thread might be of interest - Do schools spend more money on back-end ICT than is necessary?

    EDIT: Personally, to get the full benefits of virtualisation you need to use a SAN - otherwise you are losing out on half of them.
    Last edited by glennda; 6th July 2012 at 06:26 PM.

  3. #3 - MrWu
    Hi,

    I do appreciate your dilemma - I went through something similar! When I joined the school they had a QSAN iSCSI SAN just acting as a fileserver. I did think about using it as a SAN for the VMs, but because the school was desperately in need of a backup solution (with 3TB of data!) I ended up using it for my DPM server. I ran a DL380 G7 with VMs on local RAID 10 for now (it did so very well), and now the school has a bit of budget I've purchased an MSA P2000 SAN and another 2 hosts.

    I would probably work out your total disk IOPS to help you plan whether iSCSI is enough. Also, what's the warranty on the MD3200i? That needs to be considered too.
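    Something like this rough sizing is what I mean (a sketch; the write penalties and per-disk IOPS are the usual rules of thumb, and the example load is invented - substitute your own measurements):

    ```python
    import math

    # Spindle-count sizing from a measured front-end load.
    # RAID write penalty: RAID 10 costs 2 backend writes per write, RAID 5 costs 4.
    def disks_needed(read_iops, write_iops, raid_penalty, per_disk_iops=140):
        backend_iops = read_iops + write_iops * raid_penalty
        return math.ceil(backend_iops / per_disk_iops)

    reads, writes = 800, 400   # hypothetical perfmon averages, not real figures
    print("RAID 10 on 10K SAS:", disks_needed(reads, writes, 2), "disks")  # 12
    print("RAID 5 on 10K SAS: ", disks_needed(reads, writes, 4), "disks")  # 18
    ```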

    The good thing with buying servers and hosting VMs locally on RAID: you spend less now, and if money comes up later you can choose your SAN then and migrate the disks over to it.

    Not having enough money for a 2nd SAN either, we are also looking at using Veeam to replicate to another server for DR.
    Last edited by MrWu; 6th July 2012 at 06:26 PM.

  4. #4 - TheScarfedOne
    A SAN, definitely, for central VM storage. Also, why not use Hyper-V for your platform? It has live migration etc., which you have to pay for in VMware. There is some info on this, and links, over on my blog and the Microsoft Schools Blog too...

  5. #5 - j17sparky
    I think I'd be sceptical of the performance of your current SAN. You don't state what speed its disks are, how many there are, or what RAID level they're in. Looking at the CPUs and amount of RAM you've specced, you are presumably a fairly heavy user. IMO 99% of bottlenecks are now at the storage level, be that physical disk speed or IO paths. For "proof" of that, stick an SSD in just about any PC and you will notice the difference.

    We have 2 physical file servers, a DC, SIMS, 2 terminal servers, and 2 virtual hosts running 2 terminal servers, www, VLE, WSUS, WDS etc. In total we have around 50 disks, all SAS 10K or 15K: RAID 5 for storage, RAID 10 for everything else (it will soon be RAID 10 on the file servers too, for performance). I'm just about to upgrade the virtual hosts to 64GB between them; add the ~64GB of RAM in the physicals and we are still only at half the RAM you have specced. And looking at the spec of your CPUs, all of ours put together will not be as powerful as your 3 servers! We have 1300 users, 400 concurrent.

    Assuming you only have a 12-bay SAN, it seems to me that will be the bottleneck by a long, long way. How about dropping down to 2 hosts (and maybe a baby backup one) and spending the money on an extra SAN?

    If you are going to put disks in the hosts themselves, I'd personally be putting in 12 x 15K disks in RAID 10.

    Of course, if you are running loads of thin clients off these, half of the above doesn't apply.
    Last edited by j17sparky; 6th July 2012 at 09:53 PM.

  6. #6 - glennda
    Quote Originally Posted by TheScarfedOne:
    A SAN, definitely, for central VM storage. Also, why not use Hyper-V for your platform? ...

    Yeah, but it's Hyper-V :/

  7. #7 - gshaw
    You'll only know what disks you need once you've run a capacity planner and got some IOPS stats from your existing servers. Once you've done that, spec up a SAN to match.
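    If you don't fancy a full capacity planner, even something this simple gets you usable IOPS stats (a sketch; typeperf and the counter ship with Windows, but the one-second/one-hour sampling is just my suggestion - run it on each existing server during a busy period and size for the sum of the peaks):

    ```python
    import csv
    import subprocess

    # Sample total disk transfers/sec (i.e. IOPS) once a second for an hour,
    # then summarise. \PhysicalDisk(_Total)\Disk Transfers/sec is a stock
    # Windows perfmon counter; typeperf writes the samples out as CSV.
    counter = r"\PhysicalDisk(_Total)\Disk Transfers/sec"
    subprocess.run(
        ["typeperf", counter, "-si", "1", "-sc", "3600", "-o", "iops.csv", "-y"],
        check=True,
    )

    samples = []
    with open("iops.csv", newline="") as f:
        for row in list(csv.reader(f))[1:]:   # first row is the header
            try:
                samples.append(float(row[1]))
            except (IndexError, ValueError):
                pass                           # skip blank or partial samples

    print(f"avg IOPS:  {sum(samples) / len(samples):.0f}")
    print(f"peak IOPS: {max(samples):.0f}")
    ```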

    Personally I wouldn't do the local disks; you lose pretty much the main benefit of virtualisation, i.e. reducing the reliance on the physical hardware. A Veeam restore, although comparatively quick in backup terms, is still slow when you look at it against VMware HA (the VM reboots automatically on another host and carries on regardless).

    For recovery, look at a decent-spec server filled with disks running Veeam. If your SAN goes down, it can run critical services from the backups until you get back online. Alternatively, if you have deep pockets, a replicated SAN is the ultimate option.

    Can you add extra shelves to the MD3200i? iSCSI should be OK if set up right, I reckon...
    Last edited by gshaw; 9th July 2012 at 03:32 PM.

  8. #8 - smarties11
    Hi All,

    Thanks for all the replies - they have all been really helpful. And apologies for the delay in replying - mad week!

    I'm monitoring the IO on our existing servers now, so I'll see what the results of that are; hopefully it will give me a better idea of whether the iSCSI SAN will cut it.

    At the moment I'm leaning towards going with local disks (but not going too mad on storage capacity, to keep costs down) and then hopefully purchasing a 6Gbps SAS SAN next year, which we can use for centralised VM storage, with our existing iSCSI SAN used for backups and as a failsafe should the new SAN go down. I'll be able to transfer the disks from the 3 hosts to the SAN, and then run the hosts diskless with ESXi on an SD card (or two!).

    I know we'll be losing the benefits of HA; however, compared to what we have at present (physical servers only) we'll have far more redundancy and resilience, and this can be our stepping stone to full redundancy via HA once funds allow.

    But let's see what the tests say first!

  9. #9 - Arthur
    Quote Originally Posted by glennda:
    to get the full benefits of virtualisation you need to use a SAN
    Once WS2012 is released, you will be able to live migrate VMs that are stored on file shares and local disks. No SAN or clustered hosts required.


  10. #10 - alttab
    Quote Originally Posted by smarties11:
    I'm also not entirely sure how the MD3200i will perform with this sort of load. It has 8 x 1Gb Ethernet links to the 2 x iSCSI switches (Dell PowerConnect 5424s)... And each VM host could potentially have 4 x 1Gb Ethernet links to the iSCSI switches, and then 4 x 1Gb back to the core.
    I don't know the specifics of the MD3200i, but it sounds like it's an array capable of being shared VM storage for most schools. 8 x 1Gb iSCSI? Why did they even bother with the switches, if those ports can be direct-connected to hosts? If they're split across controllers, you should be able to have four hosts doing MPIO, surely.

    I personally would stick with it as your primary storage, either for your fileserver or for your VM hosts, or both. If it's already got some large NL-SAS disks in, then it's probably got a load of storage, plus you can SAS-attach additional storage shelves. If you go down the VMware route you can have your HA features; after all, that's why most people pay for vSphere rather than use the free version or stick with Hyper-V. If it were me, I wouldn't get too hung up on buying an additional better-quality SAN; the important thing is to have good fault tolerance within the primary array, a couple of spare drives, and extended maintenance. Then, if for whatever reason you lose uptime, your other 'SAN' can be a cheaper variety designed to store Veeam backups in case of the worst-case scenario of something catastrophic happening on the primary array, or a temporary means of mounting your VMs over NFS or iSCSI while you get the SAN fixed and back online, in the unlikely event it's an extended outage.
    Last edited by alttab; 12th July 2012 at 07:53 AM.

  11. #11 - mrbios
    One of the biggest benefits I've found of having a SAN over local storage is the ability to tier your storage. You could do it on servers with enough drive bays, but then you'd have to replicate that tiering on each host, which would just end up costing a ton in disks.

    Running two SANs in two buildings, I've got 4 x SSDs in both, hosting just SQL databases and any other heavy read/write-intensive systems; 4 x fast HDDs hosting things like online applications, terminal servers etc.; then finally 4 x SATA disks per SAN, just holding the servers that don't need good performance, like DCs. This works out cheaper than buying, say, SAS drives for everything, because you need the performance for your more intensive servers but not for your less intensive ones. Also, if a VM host dies, HA should automatically bring those VMs back up on another host straight away, rather than you having to worry about getting the dead host back online ASAP.
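    The placement logic is roughly this (a toy sketch; the per-disk IOPS figures are generic rules of thumb and the VM demands are invented for illustration):

    ```python
    # Toy tiering illustration: put each VM on the cheapest tier whose 4-disk
    # RAID 10 set can still cover its IOPS demand. Figures are rules of thumb.
    tiers = [                       # cheapest first: (name, IOPS for 4 disks)
        ("SATA", 4 * 80),
        ("fast HDD", 4 * 140),
        ("SSD", 4 * 5000),
    ]

    vms = {                         # invented demands - substitute measurements
        "DC1": 50, "print": 30, "VLE": 400, "TS1": 450, "SQL": 3000,
    }

    for vm, iops in sorted(vms.items(), key=lambda kv: kv[1]):
        tier = next(name for name, cap in tiers if iops <= cap)
        print(f"{vm:>6}: {iops:>5} IOPS -> {tier}")
    ```

    (A real placement would also check the aggregate demand per tier, but the cost argument is the same.)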

  12. #12 - glennda
    Quote Originally Posted by Arthur:
    Once WS2012 is released, you will be able to live migrate VMs that are stored on file shares and local disks. No SAN or clustered hosts required.

    But you are then reliant on Hyper-V... yuck.

  13. #13 - mrbios
    Quote Originally Posted by glennda:
    But you are then reliant on Hyper-V... yuck.
    Not a Hyper-V user personally (VMware here), but there's nothing wrong with it. It's looking pretty shipshape in 2012 too! Granted, it's still playing catch-up to VMware in terms of features, but you can't really complain when it's as cheap as it is.

  14. #14 - gshaw
    Quote Originally Posted by mrbios:
    Not a Hyper-V user personally (VMware here), but there's nothing wrong with it...
    2012 looks like it's closing the gap, but I wouldn't want to run it in production until someone else has had the pleasure of finding the initial gotchas (and there will always be some!).

    How much have MS managed to strip down the install for Hyper-V with 2012? Until it can fit on an SD card like VMware, I'm staying put.

  15. #15 - mrbios
    Quote Originally Posted by gshaw:
    Until it can fit on an SD card like VMware, I'm staying put...
    Aye, same here - waiting for something that can fit on an SD card or USB pen before I move. Hopefully by that point most of the features we use in VMware will be in Hyper-V too.


