Virtualisation Advice - SAN or no SAN!
We are looking to replace our aging server infrastructure this summer. I have quotes for a Dell R720 and an HP DL380p - both the same spec: dual Xeon E5-2640s, 96GB RAM, and an internal SD card for the hypervisor. We'll be buying three of these to use as ESXi hosts, and we'll eventually be running around 15 or 16 Win2k8 R2 x64 VMs.
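As a quick sanity check on RAM headroom, here's a rough sketch using those numbers - it assumes VMs spread evenly and ignores hypervisor overhead, so treat it as illustrative only:

```python
# Rough RAM-per-VM check from the figures in the post.
# Assumes an even spread of VMs and ignores ESXi overhead.
hosts = 3
ram_per_host_gb = 96
vms = 16

all_up = hosts * ram_per_host_gb / vms            # every host running
one_down = (hosts - 1) * ram_per_host_gb / vms    # one host failed (N-1)
print(f"~{all_up:.0f} GB RAM per VM with all hosts up")
print(f"~{one_down:.0f} GB RAM per VM with one host down")
```

Even with a host down that's ~12GB per VM, so RAM shouldn't be the deciding factor either way.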
Now, we also have a Dell MD3200i iSCSI SAN which, at present, is only connected to our fileserver and used for data storage (user areas, shared storage, etc.).
I can't make up my mind whether to either...
1. Put 4 x 300GB SAS disks in each server in a RAID 10 array and keep the VMs stored locally. We'd then only buy VMware Essentials. We wouldn't have HA or vMotion, but as we'll be taking regular Veeam backups, if a host did fail we could restore its VMs to the two remaining hosts. That would take time, and any data written between the last backup and the failure would be lost, but we'd be back up and running fairly quickly.
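To put rough numbers on that recovery window, here's a back-of-envelope sketch - every figure in it (backup schedule, VM sizes, restore throughput) is an assumption, so substitute your own:

```python
# Worst-case data loss and restore time for option 1.
# All figures are assumptions -- plug in your real Veeam schedule
# and a measured restore throughput.
backup_interval_h = 24    # e.g. a nightly Veeam job (assumption)
vms_on_failed_host = 5    # ~15-16 VMs spread across 3 hosts
avg_vm_size_gb = 60       # average VM footprint (assumption)
restore_rate_mb_s = 100   # sustained restore rate to local RAID 10 (assumption)

worst_case_loss_h = backup_interval_h  # host dies just before the next backup
restore_h = vms_on_failed_host * avg_vm_size_gb * 1024 / restore_rate_mb_s / 3600

print(f"worst-case data loss: up to {worst_case_loss_h} hours")
print(f"restoring one host's VMs: ~{restore_h:.1f} hours")
```

With those (made-up) numbers the restore itself is around an hour; the backup interval is the bigger exposure.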
I was hoping to get enough money to buy a second SAN this year, but that hasn't been possible, unfortunately. So my other option is....
2. Use the Dell MD3200i to store our VMs centrally, purchase VMware Essentials Plus, and make use of HA and vMotion. The MD3200i has dual power supplies, dual quad-port NICs, RAID, and redundant cabling via two iSCSI switches, so the chances of failure are *very* slim - but there's still a small chance the MD3200i chassis could die, or suffer a multiple-disk failure, and I'd be left with no VMs at all! That's *really, really* scary - I'm not sure I like all my eggs in one basket. On the plus side, I'd save the cost of fifteen 300GB SAS disks in the servers (four in each plus a hot spare in each), though I might still keep some in one server for Exchange.
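Your instinct about the chassis is the right thing to formalise. A toy availability model (every number here invented purely for illustration, not vendor data) shows why the redundant parts can't cover the one non-redundant part:

```python
# Toy availability model for a single-array setup.
# Every figure is invented for illustration -- not vendor data.
def redundant(a):
    """Two independent components in parallel: fails only if both fail."""
    return 1 - (1 - a) ** 2

psu = 0.999       # one power supply (assumed)
nic = 0.999       # one quad-port NIC (assumed)
path = 0.999      # one iSCSI switch + cabling path (assumed)
chassis = 0.9995  # backplane/chassis -- the part with no redundant twin (assumed)

array = redundant(psu) * redundant(nic) * redundant(path) * chassis
print(f"array availability ~= {array:.6f}")
# However good the redundant pairs are, the product can never exceed
# the chassis figure -- the single shared component sets the ceiling.
```

That's just the maths behind "all eggs in one basket": backups held elsewhere (Veeam again) are what cover the residual risk, not more redundancy inside the box.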
I'm also not entirely sure how the MD3200i will perform under this sort of load. It has 8 x 1Gb Ethernet links into the two iSCSI switches (Dell PowerConnect 5424s), though I'd have to check they're all active when not in a failover state - I think they are. Each VM host could then have 4 x 1Gb Ethernet links to the iSCSI switches, plus 4 x 1Gb back to the core. How would this compare to having local 10K SAS disks in the hosts?
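For a rough feel of iSCSI-over-1GbE versus local 10K SAS, here's a back-of-envelope comparison - the per-link and per-disk figures are common rules of thumb, not benchmarks of your kit:

```python
# Back-of-envelope: iSCSI over 1GbE vs local 10K SAS RAID 10.
# Per-link and per-disk figures are rules of thumb, not measurements.
gbe_link_mb_s = 110      # realistic 1GbE throughput after protocol overhead
host_links = 4           # iSCSI NICs per host
sas_iops = 140           # typical single 10K SAS drive
sas_mb_s = 150           # sequential, single 10K SAS drive
raid10_disks = 4

# Local RAID 10: reads can hit all spindles, writes only half (mirroring).
local_read_iops = raid10_disks * sas_iops
local_write_iops = (raid10_disks // 2) * sas_iops
local_seq_mb_s = (raid10_disks // 2) * sas_mb_s

# iSCSI ceiling per host, assuming round-robin multipathing actually
# spreads I/O across all four links (worth verifying on the MD3200i).
host_iscsi_mb_s = host_links * gbe_link_mb_s

print(f"local RAID 10: ~{local_read_iops}/{local_write_iops} read/write IOPS, "
      f"~{local_seq_mb_s} MB/s sequential")
print(f"iSCSI per host: ~{host_iscsi_mb_s} MB/s link ceiling")
```

For 15-16 general-purpose VMs the real limit is usually spindle count and IOPS on the array rather than the 1GbE links, so it's worth totting up how many disks the MD3200i actually has behind those LUNs.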
Price-wise, there's not a lot in it - we'd save money on server disks but spend more on VMware licensing and additional quad-port NICs.
If I could get a second SAN it'd be a no-brainer (though I'm still sceptical of the iSCSI performance with the MD3200i hosting all the VMs), but as we only have one, I'm in a quandary! I'd really appreciate your thoughts on this, especially if you've gone down one road or the other - I'd be interested to hear how it has worked out for you!