Hyper V Networking Setup Help - Clusters, ISCSI, VLANs (Windows Server 2012)
  #1 (Matt)

    Hi,

    I'm looking to build a new Server 2012 Hyper V setup in a clustered configuration and am a bit stumped on the best way to configure the networking for this setup. The system will be built with the following hardware:

    SAN - Dell Power Vault MD3200i
    Servers - Dell Power Edge 2970 x3 - All servers have a total of 4 NICs
    Switches - Dell Power Connect 5524 - These switches are currently in a stacked configuration & have 2 VLANs assigned - one for ISCSI & one for our main network


    I've set up the SAN and have its 8 NICs configured on a separate subnet & IP range (10.10.10.x) to our main network. The servers are configured and can talk to the SAN and see the available storage fine. However, when I run Validate Configuration in Failover Cluster Manager it throws up warnings about network communication and indicates a single point of failure.
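    (The network checks can also be re-run on their own from PowerShell rather than the whole wizard; the node names below are just placeholders, not our actual server names:)

        Import-Module FailoverClusters
        # Re-run only the network validation tests against the prospective cluster nodes
        Test-Cluster -Node HV1, HV2, HV3 -Include "Network"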

    I set the 4 server NICs up in two teams - one for our physical network & one for our ISCSI. I've since read that I shouldn't team the ISCSI NICs - is this correct? I've tried to change the configuration so that the ISCSI NICs are no longer teamed, but Validate Configuration then warns that the devices should not be on the same subnet. I've also read about needing to assign a NIC for Live Migration on a separate subnet to the ISCSI & physical networks - will this be required?
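    For reference, the un-teaming itself is straightforward in PowerShell - a minimal sketch, with placeholder team/adapter names and an example address from our 10.10.10.x range:

        # List the existing teams, then break the iSCSI one so each NIC stands alone
        Get-NetLbfoTeam
        Remove-NetLbfoTeam -Name "Team-iSCSI" -Confirm:$false

        # Each freed NIC then needs its own static IP (no default gateway on the iSCSI side)
        New-NetIPAddress -InterfaceAlias "ISCSI1" -IPAddress 10.10.10.21 -PrefixLength 24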

    If anyone is able to give me some pointers as to the best way of configuring the 4 NICs on the servers & any other advice I would really appreciate it. Has been an extremely long week & I seem to be going round & round in circles trying to get it sorted!

    Thanks,

    Matt

  #2 (adamf)

    We have a similar setup, albeit running 2008 R2. For iSCSI connectivity to the SAN I don't use IPs on our main network range (10.59.96.x); I use a class C range. We have five nodes configured like this:

    Node 1:

    ISCSI1: 192.168.1.20 (255.255.255.0) - No default gateway
    ISCSI2: 192.168.2.20 (255.255.255.0) - No default gateway

    The other nodes use 192.168.1.21/192.168.2.21, 192.168.1.22/192.168.2.22, and so on.

    On my SAN I only use ports 0 and 1 on each controller, set up like this:

    Controller 0 Port 0: 192.168.1.2 (255.255.255.0)
    Controller 0 Port 1: 192.168.2.2 (255.255.255.0)

    Controller 1 Port 0: 192.168.1.3 (255.255.255.0)
    Controller 1 Port 1: 192.168.2.3 (255.255.255.0)

    Each controller has a connection to both subnets, and the two subnets are on different switches.
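    If it helps, a rough PowerShell sketch of the node side of that layout on 2012 (the adapter names are placeholders, and the MD3200i normally ships with Dell's own MPIO DSM, so treat the MPIO lines as the generic in-box approach):

        # Static addresses for the two iSCSI NICs - one per subnet, no default gateway
        New-NetIPAddress -InterfaceAlias "ISCSI1" -IPAddress 192.168.1.20 -PrefixLength 24
        New-NetIPAddress -InterfaceAlias "ISCSI2" -IPAddress 192.168.2.20 -PrefixLength 24

        # MPIO so the node keeps a path if one controller or switch dies
        Install-WindowsFeature Multipath-IO          # a reboot may be needed afterwards
        Enable-MSDSMAutomaticClaim -BusType iSCSI

        # Make sure the iSCSI initiator service is running, then add one portal per SAN subnet
        Set-Service msiscsi -StartupType Automatic
        Start-Service msiscsi
        New-IscsiTargetPortal -TargetPortalAddress 192.168.1.2
        New-IscsiTargetPortal -TargetPortalAddress 192.168.2.2

        # Connect to the discovered target with multipath enabled and make it persistent
        Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true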

    I've attached a spreadsheet that I got from Dell which shows how to configure the iSCSI side of things if you are using 2 or 4 NICs for iSCSI: MD3200 Isics Subnet config.xlsx

    We also have separate NICs assigned for Live Migration traffic and CSV traffic. Each node has 12 NICs, used in this way (rough sketch of the host-side setup after the list):

    NIC1: iSCSI1
    NIC2: iSCSI2
    NIC3: LIVEMIG
    NIC4: CSV
    NIC5: MGMT (this is connected to our main network range, for RDP access etc..)

    NIC6-12: Virtual NICs assigned to VMs or groups of VMs
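    A minimal sketch of how that carve-up could look on the 2012 side with PowerShell (the adapter and switch names are just placeholders):

        # Rename the physical adapters so their role is obvious
        Rename-NetAdapter -Name "Ethernet"   -NewName "ISCSI1"
        Rename-NetAdapter -Name "Ethernet 2" -NewName "ISCSI2"
        Rename-NetAdapter -Name "Ethernet 3" -NewName "LIVEMIG"
        Rename-NetAdapter -Name "Ethernet 4" -NewName "CSV"
        Rename-NetAdapter -Name "Ethernet 5" -NewName "MGMT"

        # External virtual switch for VM traffic only - the host doesn't share it,
        # because management already has its own dedicated NIC
        New-VMSwitch -Name "VM-Traffic" -NetAdapterName "Ethernet 6" -AllowManagementOS $false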

  #3 (Matt)

    Hi Adam,

    Thanks for your reply, it's extremely helpful. A couple of queries if you don't mind!

    We currently have all 8 ports on our SAN active - would you recommend cutting this down to just the 4?

    In terms of switching, we only really have the capacity for using two switches. Currently I have these stacked, with ports 1-12 on a VLAN for our main network range & ports 13-24 on a VLAN for the ISCSI range - are we best to change this? I'm happy to use the two switches independently rather than stacking them if it allows us to be more flexible.

    How many physical NICs do your nodes have?

    Really appreciate your help,

    Matt

  #4 (adamf)

    How many ports does each switch that is part of the stack have?

    Yes, 4 is easier to work with, 2 on each controller. I use 2 independent switches for all the cluster-type traffic (iSCSI, LiveMig, CSV) and then the virtual NICs for the actual VM traffic go into a different switch (main server switch). With 3 nodes you'll need 12 ports just for iSCSI, LiveMig and CSV traffic, plus 4 ports from the SAN to the switch, totalling 16 - and you've only got 11 on the VLAN for iSCSI. You'll also want more than 1 VLAN: I put iSCSI traffic on 2 different VLANs, LiveMig traffic on a separate VLAN and CSV traffic on a separate VLAN (sketch of the cluster-network side below).

    Each node has 12 physical NICs.
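    On the 2012 side, a rough sketch of tagging those networks with cluster roles and keeping live migration on its own network (the network names are placeholders and would need to match whatever Failover Cluster Manager shows):

        Import-Module FailoverClusters

        # 0 = not used by the cluster (iSCSI), 1 = cluster only (CSV/heartbeat/LiveMig),
        # 3 = cluster and client (management)
        (Get-ClusterNetwork "ISCSI1").Role  = 0
        (Get-ClusterNetwork "ISCSI2").Role  = 0
        (Get-ClusterNetwork "CSV").Role     = 1
        (Get-ClusterNetwork "LIVEMIG").Role = 1
        (Get-ClusterNetwork "MGMT").Role    = 3

        # Exclude every network except LIVEMIG from live migration
        $exclude = (Get-ClusterNetwork | Where-Object { $_.Name -ne "LIVEMIG" }).ID
        Get-ClusterResourceType -Name "Virtual Machine" |
            Set-ClusterParameter -Name MigrationExcludeNetworks -Value ([string]::Join(";", $exclude))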

  #5 (Matt)

    They are two 24 port switches.

    We currently only have 4 physical NICs on each node - am I correct in thinking that this will be insufficient to achieve this setup?

  #6 (adamf)

    Yes, 4 will be insufficient. If you cut the iSCSI ports down to 1 per node it'll work, but you'll lose the ability to use MPIO and any high availability, because there will only be a connection to 1 controller on the SAN - and if that controller fails you lose the connection to the storage.
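    Once both iSCSI NICs are in place, it's worth checking that both paths really are up - standard in-box tools, nothing Dell-specific:

        # Expect one session per portal/subnet
        Get-IscsiSession | Select-Object TargetNodeAddress, InitiatorPortalAddress, IsConnected

        # MPIO's view of the claimed disks and how many paths each one has
        mpclaim.exe -s -d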

    I started with 8 physical NICs on each node and added an additional 4 later on when I realised I wanted to create more virtual networks.

  #7 (Matt)

    Thanks, that's as I thought. I'll look into purchasing a card with a further 4 NICs.

    Just out of interest how many VMs do you run?

  #8 (adamf)

    Currently running 43 VMs...

  #9 (Matt)

    Few more than we are looking at then! We'll only need about 9!

  #10 (Matt)

    Hi, we've decided on our setup now - many thanks for your help.

    Out of interest how do you have your SAN configured in terms of RAID setup & number of LUNs?

    Thanks

  #11 (adamf)

    In terms of the SAN config, it's fully populated with 8x 2TB 7.2k rpm drives, 2x 300GB 15k rpm drives and 2x 900GB 10k rpm drives. I have 3 LUNs on the SAN:

    LUN 1: 7x 2TB in RAID 5 (the eighth 2TB drive is a hot spare)
    LUN 2: 2x 300GB in RAID 1
    LUN 3: 2x 900GB in RAID 1

  #12 (Matt)

    Do you use the LUNs for specific purposes? We have 8x 600GB SAS drives & I was thinking of RAID 5 (with hot spare) & creating just a single LUN. As I said previously we'll only have 9 or so servers in the setup.

    Any thoughts on this setup?

  #13 (adamf)

    Yer - we use the big LUN (with the 2TB drives) for general VMs; the LUN with the 900GB 10k drives has the SIMS SQL DBs and Exchange mailbox DBs on it; and the LUN with the 300GB 15k drives has any smaller/not heavily used SQL DBs and the SIMS SQL logs.

    It depends on the speed of the drives as to whether you want to go for multiple LUNs. What speed are they?

  #14 (Matt)

    The drives are 15krpm SAS drives.

    We won't be using Exchange as we have migrated to O365. We use CMIS, which will need SQL, & also SIMS FMS (only used by 3 users). All other servers will be standard - DCs, print servers, file server etc.

  #15 (adamf)

    You could go 5x 600GB in RAID 5 with a hot spare, which will give you 2.4TB of usable space, then 2x 600GB in RAID 1 for SQL DBs - but maybe 600GB is too much for DBs that won't grow to anywhere near that size.
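    For what it's worth, the usable-space arithmetic behind that (RAID 5 loses one drive's worth of capacity, RAID 1 half, and the hot spare counts for nothing):

        $gbPerDisk = 600

        # Single-LUN idea: 7-drive RAID 5 + 1 hot spare
        $singleLun = (7 - 1) * $gbPerDisk            # 3600 GB usable

        # The split above: 5-drive RAID 5 + hot spare, plus a 2-drive RAID 1
        $raid5Lun  = (5 - 1) * $gbPerDisk            # 2400 GB usable
        $raid1Lun  = (2 / 2) * $gbPerDisk            #  600 GB usable

        "Single LUN: {0} GB   Split: {1} GB + {2} GB" -f $singleLun, $raid5Lun, $raid1Lun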


