Hyper-V Networking Setup Help - Clusters, iSCSI, VLANs
I'm looking to build a new Server 2012 Hyper-V setup in a clustered configuration and am a bit stumped on the best way to configure the networking for this setup. The system will be built with the following hardware:
SAN - Dell PowerVault MD3200i
Servers - Dell PowerEdge 2970 x3 - all servers have a total of 4 NICs
Switches - Dell PowerConnect 5524 - these switches are currently in a stacked configuration & have 2 VLANs assigned - one for iSCSI & one for the main network
I've set up the SAN and have the 8 NICs configured on a separate subnet & IP range (10.10.10.x) from our main network. The servers are configured and can talk to the SAN and see the available storage fine; however, when I run Validate Configuration in Failover Cluster Manager it throws up warnings about network communication and indicates a single point of failure.
I set the 4 server NICs up as two teams - one for our physical network & one for iSCSI. I've since read that I shouldn't team the iSCSI NICs - is this correct? I've tried changing the configuration so that the iSCSI NICs are no longer teamed, but Validate Configuration then warns that the devices should not be on the same subnet. I've also read about needing to assign a NIC for Live Migration on a separate subnet from the iSCSI & physical networks - will this be required?
If anyone is able to give me some pointers as to the best way of configuring the 4 NICs on the servers, & any other advice, I would really appreciate it. It's been an extremely long week & I seem to be going round & round in circles trying to get it sorted!
We have a similar setup, albeit running 2008 R2. For iSCSI connectivity to the SAN I don't use IPs on our main network range (10.59.96.x); I use a class C range. We have five nodes, configured like this:
iSCSI1: 192.168.1.20 (255.255.255.0) - no default gateway
iSCSI2: 192.168.2.20 (255.255.255.0) - no default gateway
The other nodes follow the same pattern - 192.168.1.21 & 192.168.2.21, then .22, and so on.
On my SAN I only use ports 0 and 1 on each controller, set up like this:
Controller 0 Port 0: 192.168.1.2 (255.255.255.0)
Controller 0 Port 1: 192.168.2.2 (255.255.255.0)
Controller 1 Port 0: 192.168.1.3 (255.255.255.0)
Controller 1 Port 1: 192.168.2.3 (255.255.255.0)
Each controller has a connection to both subnets, and the two subnets are on different switches.
I've attached a spreadsheet that I got from Dell which shows how to configure the iSCSI side of things if you are using 2 or 4 NICs for iSCSI: MD3200 Isics Subnet config.xlsx
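In case it helps, this is roughly what the node side of that looks like in PowerShell on 2012. The adapter names, IPs and feature install are just placeholders/assumptions based on my numbering above, so adjust for your own kit - it's a sketch, not a copy/paste job:

# MPIO feature is needed before the DSM claim below (assumption: not already installed)
Add-WindowsFeature Multipath-IO

# Make sure the iSCSI initiator service is running and starts automatically
Set-Service msiscsi -StartupType Automatic
Start-Service msiscsi

# Give the two dedicated iSCSI adapters obvious names (adapter names are made up)
Rename-NetAdapter -Name "Ethernet 3" -NewName "iSCSI1"
Rename-NetAdapter -Name "Ethernet 4" -NewName "iSCSI2"

# One IP per adapter, one subnet each, no default gateway on either
New-NetIPAddress -InterfaceAlias "iSCSI1" -IPAddress 192.168.1.20 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "iSCSI2" -IPAddress 192.168.2.20 -PrefixLength 24

# Let MPIO claim the iSCSI disks rather than teaming the NICs
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Point the initiator at one SAN port on each subnet
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.2 -InitiatorPortalAddress 192.168.1.20
New-IscsiTargetPortal -TargetPortalAddress 192.168.2.2 -InitiatorPortalAddress 192.168.2.20

# Connect persistently with multipath enabled (one session per path)
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true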
We also have separate NICs assigned for Live Migration traffic and CSV traffic. Each node has 12 NICs, used in this way:
NIC5: MGMT (this is connected to our main network range, for RDP access etc.)
NIC6-12: Virtual NICs assigned to VMs or groups of VMs
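The Hyper-V end of the VM-facing NICs is just an external virtual switch per NIC (or per team). Rough 2012 PowerShell sketch - the switch, adapter, team and VM names here are all made up, so rename to suit:

# External virtual switch bound to one of the VM-facing NICs
# (-AllowManagementOS $false keeps the parent partition off this NIC)
New-VMSwitch -Name "VM-Network-1" -NetAdapterName "NIC6" -AllowManagementOS $false

# Attach a guest's network adapter to that switch
Connect-VMNetworkAdapter -VMName "TestVM" -SwitchName "VM-Network-1"

# If you do team NICs for VM/LAN traffic (never for iSCSI), 2012 can do it natively
New-NetLbfoTeam -Name "LAN-Team" -TeamMembers "NIC7","NIC8" -TeamingMode SwitchIndependent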
Thanks for your reply, it's extremely helpful. A couple of queries if you don't mind!
We currently have all 8 ports on our SAN active - would you recommend cutting this down to just the 4?
In terms of switching we only really have the capacity for two switches. Currently I have these stacked, with ports 1-12 on a VLAN for our main network range & ports 13-24 on a VLAN for the iSCSI range - are we best to change this? I'm happy to use the two switches independently rather than stacking them if it allows us to be more flexible.
How many ports does each switch that is part of the stack have?
Yes, 4 is easier to work with - 2 on each controller. I use 2 independent switches for all the cluster-type traffic (iSCSI, Live Migration, CSV), and the virtual NICs for the actual VM traffic go into a different switch (our main server switch). With 3 nodes you'll need 12 ports just for iSCSI, Live Migration and CSV traffic, plus 4 ports from the SAN to the switches, totalling 16 - and you've only got 11 on the VLAN for iSCSI. You'll also want more than 1 VLAN: I put iSCSI traffic on 2 different VLANs, Live Migration traffic on a separate VLAN and CSV traffic on another separate VLAN.
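Once the cluster exists you can also tell it what each network is allowed to carry, which keeps iSCSI out of cluster and Live Migration traffic entirely. Rough sketch - the network names below are just examples, use whatever yours end up being called in Failover Cluster Manager:

Import-Module FailoverClusters

# Role: 0 = not used by the cluster (iSCSI), 1 = cluster-only traffic
# (CSV / heartbeat / Live Migration), 3 = cluster + client (management)
(Get-ClusterNetwork "iSCSI1").Role  = 0
(Get-ClusterNetwork "iSCSI2").Role  = 0
(Get-ClusterNetwork "LiveMig").Role = 1
(Get-ClusterNetwork "CSV").Role     = 1
(Get-ClusterNetwork "MGMT").Role    = 3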
Yes, 4 will be insufficient. If you cut the iSCSI ports down to 1 per node it'll work, but you'll lose the ability to use MPIO and any high availability, because there will only be a connection to 1 controller on the SAN - if that controller fails you lose connection to the storage.
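If you do keep 2 iSCSI NICs per node, it's worth checking you really have two sessions/paths before you trust the redundancy - something along these lines (output will obviously differ on your kit):

# Should show one iSCSI connection per subnet on each node
Get-IscsiConnection
Get-IscsiSession

# MPIO's view of the claimed disks and their paths (mpclaim is built in)
mpclaim -s -d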
I started with 8 physical NICs on each node and added an additional 4 later on when I realised I wanted to create more virtual networks.
In terms of the SAN config, it's fully populated with 8x 2TB 7.2k rpm drives, 2x 300GB 15k rpm drives and 2x 900GB 10k rpm drives. I have 3 LUNs on the SAN: one LUN has 7x 2TB drives in RAID 5 (the other drive is a hot spare), one LUN has 2x 300GB drives in RAID 1, and the other LUN has 2x 900GB drives, also RAID 1.
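Once the LUNs are mapped to all of the hosts, pulling them into the cluster as Cluster Shared Volumes is just a couple of lines - the disk name below is an example, use whatever the cluster calls yours:

# Add every disk the cluster can see as available storage
Get-ClusterAvailableDisk | Add-ClusterDisk

# Convert a clustered disk to a CSV; it then appears on every node
# under C:\ClusterStorage\
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Get-ClusterSharedVolume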
Do you use the LUNs for specific purposes? We have 8x 600GB SAS drives & I was thinking of RAID 5 (with a hot spare) & creating just a single LUN. As I said previously, we'll only have 9 or so servers in the setup.
Yeah, we use the big LUN (with the 2TB drives) for general VMs, the LUN with the 900GB 10k drives has SIMS SQL DBs and Exchange mailbox DBs on it, and the LUN with the 300GB 15k drives has any smaller/less heavily used SQL DBs and the SIMS SQL logs.
It depends on the speed of the drives as to whether you want to go for multiple LUNs. What speed are they?
You could go 5x 600GB in RAID 5 with a hot spare, which will give you 2.4TB of usable space, then 2x 600GB in RAID 1 for SQL DBs - although maybe 600GB is too much for DBs that won't grow to anywhere near that size.
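Quick sanity check on the maths (assuming 600GB per drive):

$driveGB = 600
# RAID 5 usable = (drives in array - 1) x drive size; the hot spare sits outside the array
"RAID 5, 5 drives + hot spare: {0} GB usable" -f (4 * $driveGB)   # 2400 GB, roughly 2.4 TB
# RAID 1 usable = a single drive's worth
"RAID 1, 2 drives: {0} GB usable" -f $driveGB                     # 600 GB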