Windows Server 2012 Thread: Hyper-V Cluster Networks
  1. #1
    Mr.Ben

    Hyper-V Cluster Networks

    OK, so I have my cluster working already.

    Each of the 3 hosts has 12 x 1GbE network cards:

    1 Reserved for Management on the 10.6.183.0 network
    2 Teamed for Clustering and Live Migration services on a 192.168.10.0/29 network (VLAN'ned on our core switch)
    4 Teamed for the Hyper-V Virtual Switch (directly connected to the 10.6.183.0 network)
    2 x 2 Teams for MPIO Access via 2 switches to the iSCSI SAN (192.168.20.0/29 and 192.168.30.0/29)

    FYI, the iSCSI SAN is an MD3600i (10GbE)

    And a spare!
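    A minimal PowerShell sketch of how teams like these could be built on Server 2012 - adapter, team and switch names here are made up for illustration:

        # List the physical adapters first to get the real names
        Get-NetAdapter

        # 2-NIC team for cluster/live migration traffic
        New-NetLbfoTeam -Name "ClusterTeam" -TeamMembers "NIC02","NIC03" `
            -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

        # 4-NIC team for the Hyper-V virtual switch
        New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC04","NIC05","NIC06","NIC07" `
            -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

        # Bind the virtual switch to the 4-NIC team, keeping it off the management OS
        New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false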

    I've been reading that in 2008R2 the Live Migration and CSV networks should be separate, but I can't find anywhere to do this - have they been converged in 2012?


    Kind regards, Ben

  2. #2
    mutindac
    Hi Ben,

    Sorry, I don't have any answers, but I am working towards the same configuration as you. My environment is more or less identical, but I am having issues with getting the teamed virtual switch to connect to the right VLANs. Could you please clarify how the switch was configured? Are the 4 teamed connections VLAN tagged on the server, the switch, or both? Do the guest devices run on the same VLAN as the host?

    Thanks,

    Charles

  3. #3
    Tsonga
    Quote Originally Posted by Mr.Ben View Post
    OK, so I have my cluster working already ...
    An impressive setup.

  4. #4
    Mr.Ben
    OK, so I've found my answers through experimentation!

    First off, I needed 2 cluster networks to provide failover and cluster information to the nodes. I've assigned the management network as the secondary cluster network (failover clustering has prioritised the internal cluster network as the primary network for getting its information and for quick migration).

    The Live Migration facility can be set to run across any network: I've set mine to run only across the internal cluster network for the moment (Microsoft still say this should be on a separate network, but this is a tiny deployment compared to their scenarios). The option is in Networks, hidden away at the top right - Live Migration Settings.
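    For anyone doing the same from PowerShell rather than the GUI, a hedged sketch with the FailoverClusters module - the network name is a placeholder, and MigrationExcludeNetworks is the cluster parameter behind that Live Migration Settings dialog:

        Import-Module FailoverClusters

        # Role 1 = cluster traffic only, 3 = cluster and client
        Get-ClusterNetwork | Format-Table Name, Role, Metric

        # Keep the internal cluster network cluster-only
        (Get-ClusterNetwork "Internal Cluster Network").Role = 1

        # Restrict live migration to that network by excluding all the others
        $exclude = (Get-ClusterNetwork | Where-Object { $_.Name -ne "Internal Cluster Network" }).ID -join ";"
        Get-ClusterResourceType -Name "Virtual Machine" |
            Set-ClusterParameter -Name MigrationExcludeNetworks -Value $exclude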

    To answer mutindac's question, the Hyper-V Teamed devices go directly into our core switch, which has untagged ports for the correct VLAN - I've not done any configuration of the Hyper-V Virtual Switch. IP routing is enabled on the core and traffic seems to be fine.
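    If the switch ports were trunked rather than untagged, the tagging could instead be done per guest on the host - a one-line sketch, with a made-up VM name and VLAN ID:

        # Put a guest's virtual NIC in access mode on a specific VLAN
        Set-VMNetworkAdapterVlan -VMName "GuestVM01" -Access -VlanId 183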

  5. #5
    We have a setup similar to this

    3 x HP BL480 blades (2 x Intel Xeon quad-core 2.66GHz, 48GB RAM, 8 x 1GbE NICs)
    HP P4300G2 2-node SAN (2 x 1GbE active/active controllers per node - LeftHand do things differently from traditional SANs in that there is no head unit to manage comms, making it scalable)

    2 x host, teamed and running on separate modules in our core switch (10.0.0.x)
    2 x iSCSI, using MPIO rather than teaming, as teaming isn't supported for iSCSI (10.0.4.x)
    4 x Hyper-V VMs, teamed and split 2/2 across separate modules in our core switch (10.0.0.x)

    Management and live migrations all go over the 2 x host adapters, and iSCSI has its own VLAN. As @Mr.Ben said, the MS deployment scenarios are massive clusters, which is why they like to keep things separate.
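    A hedged sketch of the iSCSI/MPIO side on Server 2012, assuming the Multipath-IO feature is installed - the portal address is a placeholder:

        # Let the Microsoft DSM claim iSCSI devices for multipathing
        Enable-MSDSMAutomaticClaim -BusType iSCSI

        # Register the SAN portal and connect with multipath enabled
        New-IscsiTargetPortal -TargetPortalAddress "10.0.4.10"
        Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true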

    Simon

  6. #6

    localzuk's Avatar
    Join Date
    Dec 2006
    Location
    Minehead
    Posts
    17,892
    Thank Post
    518
    Thanked 2,494 Times in 1,935 Posts
    Blog Entries
    24
    Rep Power
    839
    Seems remarkably complicated to be honest.

    We have a Hyper-V cluster consisting of 2 nodes and another acting as the central storage.

    We run it all over SMB in 2012, using a single 10GbE card in each machine.

    Failover works fine for me!
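    For anyone wanting to try the same, a minimal sketch of a Hyper-V-over-SMB share on the 2012 storage box - server, path and account names are made up, and the hosts' computer accounts need full control on both the share and the NTFS ACLs:

        # On the storage server: share the VM folder to the Hyper-V hosts
        New-SmbShare -Name "VMs" -Path "D:\VMs" `
            -FullAccess 'DOMAIN\HV1$','DOMAIN\HV2$','DOMAIN\Domain Admins'

        # On a host: create a VM directly on the UNC path
        New-VM -Name "TestVM" -MemoryStartupBytes 2GB -Path "\\STORAGE\VMs"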

  7. #7
    Psymon
    Quote Originally Posted by localzuk View Post
    Seems remarkably complicated to be honest. ...
    It's not as bad as it sounds, and the cluster is then fully redundant against NIC failure, switch failure, etc. With a single NIC or switch, if one dies you lose everything.

  8. #8
    localzuk
    Quote Originally Posted by Psymon View Post
    It's not as bad as it sounds, and the cluster is then fully redundant against NIC failure, switch failure, etc. With a single NIC or switch, if one dies you lose everything.
    True, but you have to draw the line somewhere. Unless I were to buy a second storage server, a second switch, and extra NICs for each server, I'm not going to get full redundancy. So we've looked at the likelihood of failures: PSUs, RAM and HDDs/SSDs are the most likely to fail, so we have dealt with those aspects.

  9. #9

    tmcd35's Avatar
    Join Date
    Jul 2005
    Location
    Norfolk
    Posts
    5,727
    Thank Post
    859
    Thanked 905 Times in 750 Posts
    Blog Entries
    9
    Rep Power
    330
    Quote Originally Posted by localzuk View Post
    We have a Hyper-V cluster consisting of 2 nodes and another acting as the central storage. We run it all over SMB in 2012, using a single 10GbE card in each machine. ...
    Very, very interesting - almost identical to our setup here, other than we don't have failover working yet. Is this a new setup using 2012 with failover explicitly implemented from day 1, or did you upgrade from SMB in 2008R2, which didn't support clustering on SMB?

    We've upgraded from 2008R2 and are about to update the central storage server to 2012 over Easter. Not sure whether I can/want/need to set up failover clustering once this upgrade is complete. Just moving the host servers to 2012 gave us live migration, which was a very welcome and impressive addition; I think that might be all we need anyway.

  10. #10
    localzuk
    Quote Originally Posted by tmcd35 View Post
    Is this a new setup using 2012 with failover explicitly implemented from day 1, or did you upgrade from SMB in 2008R2, which didn't support clustering on SMB?
    Brand new installation, with failover configured from day 1.

    Quote Originally Posted by tmcd35 View Post
    Not sure whether I can/want/need to set up failover clustering once this upgrade is complete.
    You should be able to do it. If you migrated from 2008R2, I imagine you're still on iSCSI? Or have you migrated all the servers onto SMB shares?

    To enable failover, you simply need a witness share. Once you have that set up and enabled, failover is just ticking a couple of boxes.

    Step-by-Step: Building a FREE Hyper-V Server 2012 Cluster - Part 1 of 2 - IT Pros ROCK! at Microsoft - Site Home - TechNet Blogs

    I can't find the stuff I used to do it, but the witness share is covered in there - it's a couple of lines of PowerShell code.
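    A minimal sketch of those couple of lines - server, share and cluster account names are placeholders:

        # Share a small folder on a machine outside the cluster for the witness
        New-SmbShare -Name "Witness" -Path "D:\Witness" -FullAccess 'DOMAIN\ClusterName$'

        # Point the cluster quorum at it (Node and File Share Majority)
        Set-ClusterQuorum -NodeAndFileShareMajority "\\STORAGE\Witness"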

  11. #11
    tmcd35
    Quote Originally Posted by localzuk View Post
    You should be able to do it. If you migrated from 2008R2, I imagine you're still on iSCSI? Or have you migrated all the servers onto SMB shares?
    Not at all - 2008R2 did work with SMB shares, it was just never officially supported. I wanted iSCSI, but cost meant building my own storage server, and WSS wasn't available as a separate purchase, so I had to use SMB shares and lose live migration and failover clustering. Now 2012 is out, WSS is built in as standard, but SMB shares are also fully supported.

    That was the reason for the question. The SMB share is pre-existing, with live VMs running through it. I didn't know if that would be an issue when coming to set up a failover cluster.

  12. #12
    localzuk
    As far as I know, it shouldn't be an issue at all - you should be able to add the witness at any time.


  13. #13
    A witness or quorum disk is only needed if there is an even number of hosts. Although I run 4 x 10Gb fully redundant in our environment, the principles are the same, so going back to the OP's setup, it looks good for your environment. The cluster network is only really used when direct access to your storage is not available from a host (traffic is then redirected over the network). This should rarely happen, so it is safe to pair it with live migration, but if it does occur while you migrate a server you will see a big slowdown.

    You can create separate networks for clustering and live migration or use the same one, with management as the secondary in both cases. Both are in different areas of Hyper-V, with tick boxes for selecting the networks to use and then ordering the priority. In 2012 you no longer need to set the metric (well, I have not - it always seems to be correct when I check).
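    A quick way to sanity-check what 2012 picked automatically - the override line is only needed if AutoMetric gets it wrong, and the network name is a placeholder:

        # Lower metric = preferred for cluster/CSV traffic
        Get-ClusterNetwork | Format-Table Name, AutoMetric, Metric, Role

        # Manual override, if ever needed (setting Metric turns AutoMetric off)
        (Get-ClusterNetwork "Internal Cluster Network").Metric = 900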

    I would probably take the spare NIC and add it to the cluster/live migration team.
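    That would be a one-liner per host - team and adapter names are made up:

        # Add the spare adapter to the existing cluster/live migration team
        Add-NetLbfoTeamMember -Name "NIC12" -Team "ClusterTeam"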
