11th February 2014, 03:11 PM #1
Server 2012 - Failover Cluster Configuration - Help Needed!
Hi,
I'm after a bit of advice really on the best way to set up our planned 2012 Hyper-V cluster. We currently have a Hyper-V failover cluster running on 2008 R2, but we're not happy with the way it was set up (my fault; I was new to clustering and made a few bad decisions). I've attached an image of our proposed network config, cobbled together from Microsoft technical articles and best practices. In short, what we will end up with is:
*EDIT - Sorry about the diagram; Visio isn't the best tool.
Does this sound like overkill? Is it even correct? Any help/guidance greatly appreciated.
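For reference, the plan is to validate and then build the cluster from PowerShell along these lines. This is only a sketch: the node names, cluster name, and IP address are placeholders for our real ones.

```powershell
Import-Module FailoverClusters

# Run the full validation suite first; fix anything it flags before building.
Test-Cluster -Node "HV-NODE1","HV-NODE2","HV-NODE3","HV-NODE4","HV-NODE5"

# Create the cluster once the validation report comes back clean.
New-Cluster -Name "HV-CLUSTER" `
            -Node "HV-NODE1","HV-NODE2","HV-NODE3","HV-NODE4","HV-NODE5" `
            -StaticAddress "10.0.0.50"

# Promote the shared disk to a Cluster Shared Volume so every node can run VMs from it.
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```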
11th February 2014, 03:21 PM #2
It looks very good. However, I'd say it's very much overkill for a school! All those switches! I know it's best practice to try to segregate different types of traffic onto different devices, but the difference in price to do so is thousands of pounds, money which could be used elsewhere.
I'd question a few things:
1. Why so many physical servers? Three seems to be the most I've seen needed in any school.
2. How many clients do you have connecting to the cluster?
3. You have all that redundancy in the switches for the SANs and the CSV traffic, but then a single 'core' switch? Seems a little odd.
We run a failover cluster here with 2 Hyper-V nodes. We don't have the money for redundant networking gear, so everything goes through a single HP 5406zl. We have 10GbE between the hosts and the core, and between the core and the storage server (we didn't go for a dedicated SAN, as in my experience a school doesn't need one). This will soon be joined by a second identical storage server, which will sit in its own failover cluster for file serving.
We run around 20 VMs, which effectively run the entire network, and traffic doesn't pass 10% utilisation on any 10GbE port. This is for about 300 clients.
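If it helps, a single-switch setup like ours is essentially the converged-networking pattern: team the 10GbE ports, put one Hyper-V switch on the team, and carve out host vNICs with QoS weights instead of separate physical networks. A rough sketch follows; the adapter, team, and vNIC names are just examples.

```powershell
# Team the two 10GbE ports (switch-independent, so no LACP config needed on the core).
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "10GbE-1","10GbE-2" `
                -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# One external switch on top of the team, using relative bandwidth weights for QoS.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
             -MinimumBandwidthMode Weight -AllowManagementOS $false

# Host-side vNICs for each traffic class, each with a minimum bandwidth guarantee.
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster-CSV"   -SwitchName "ConvergedSwitch"

Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Cluster-CSV"   -MinimumBandwidthWeight 20
```

The weights only bite under contention, so VM traffic gets the leftover bandwidth the rest of the time.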
11th February 2014, 04:04 PM #3
Hi, thanks for the input, much appreciated. I forgot to mention that at the moment we don't have 10Gb anywhere, and we are seeing network-related performance issues on the VMs, hence the backbone upgrade.
Hopefully these answers make some kind of sense:
1. We ended up with 5 hosts because we migrated from CC3 and had 4 very powerful servers that could be reused, so we did. The 5th is a lower-spec server that is joined to the cluster for testing purposes (and to give us an uneven number of nodes for quorum; see the sketch after this list). It hosts our management VMs: package build, image build, a Windows 8.1 test machine, a generic test machine for software, and various other low-use clients. Our 15 main production servers are spread across the remaining 4 hosts.
2. We have at most 600 clients connecting, which includes PCs, thin clients (not VDI though), laptops, iPads, Android tablets, and public-facing servers such as Exchange and our web server.
3. As usual everything is driven by cost, and we cannot afford 2 core switches. We already have all the other switches/equipment in school; they just need moving around (apart from the 10Gb modules), and to be honest we don't have the room for two core switches. The best we can achieve is two modules for the 10Gb and two modules for the fibre connections; at least that provides redundancy should one module fail.
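On the uneven-number point in answer 1: with five nodes the cluster runs node majority and needs no witness, but if we ever drop back to four we'd add a disk witness to keep an odd number of votes. A quick sketch; the cluster and disk names are placeholders.

```powershell
# Check which quorum model the cluster is currently using.
Get-ClusterQuorum -Cluster "HV-CLUSTER"

# With an even node count, add a small dedicated LUN as a disk witness
# so the cluster keeps an odd number of votes.
Set-ClusterQuorum -Cluster "HV-CLUSTER" -NodeAndDiskMajority "Cluster Disk 2"
```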
Thanks again for the input.