Hi guys, was just wondering if any of you could suggest any bright ideas to increase network redundancy in our Hyper-V setup.
Our initial setup (get it up and going fast) is described in network plan original.gif: a 4-port trunk from one blade switch module to the core switch, with each port of our iSCSI SAN plugged into the other switch module. Performance-wise this setup worked great, but reliability has been an issue. Occasionally one or the other blade switch module would freeze up, which meant the virtual servers either died because they couldn't see their disks, or disappeared from the LAN. We purchased another new, identical switch module hoping there was perhaps a hardware fault, but the freezing issue remains.
I set about working out my ideal performance/reliability setup, which is outlined in network plan desired.gif. This consists of two 4-gig trunks, one from each blade switch module to the core switch, with the SAN also connected directly to the core (meaning it can be accessed from more than just the blade servers). Each blade server would have a load-balanced trunk with failover, with iSCSI and LAN traffic kept on separate VLANs. The eventual plan would have been to buy more switch modules and network adapters and have individual trunks for LAN/iSCSI traffic.
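For anyone wanting to picture the core-switch side of that layout, here's a rough sketch in Cisco-style IOS syntax. The VLAN IDs (10 for LAN, 20 for iSCSI), port-channel numbers, and interface names are all made up for illustration; your switches may well use different syntax entirely:

```
! Hypothetical core switch sketch -- VLAN 10 = LAN, VLAN 20 = iSCSI
vlan 10
 name LAN
vlan 20
 name ISCSI
!
! One 4-port trunk per blade switch module, both VLANs tagged
interface Port-channel1
 description Trunk to blade switch module 1
 switchport mode trunk
 switchport trunk allowed vlan 10,20
!
interface Port-channel2
 description Trunk to blade switch module 2
 switchport mode trunk
 switchport trunk allowed vlan 10,20
!
! SAN ports patched directly into the core on the iSCSI VLAN
interface GigabitEthernet0/10
 description SAN port 1
 switchport mode access
 switchport access vlan 20
```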
However, due to incompatibilities between the Broadcom NIC teaming and Hyper-V, this setup does not work, no way no how! The failover trunk works absolutely fine on the host blade server, and will pass VLANs until there's no tomorrow. As soon as you make the trunk available to Hyper-V, failover stops working on the trunk and you cannot attach a virtual server to a specific VLAN.
We've currently had to settle for a mishmash of the two outlined above: a trunk from one switch module to the core for network, and another from switch module 2 to the core for iSCSI. This is no more resilient than our initial setup, however.
So, any bright sparks want to suggest anything that might make life a little more reliable? We're currently experiencing a freeze at least once a fortnight and it is beginning to drive us a little crazy.
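In the meantime, something that might help pin down exactly when a module freezes is a small reachability probe run from a machine outside the blade chassis. Here's a minimal sketch in Python; the IP addresses and the iSCSI port target are placeholders, not our real addressing:

```python
# Minimal TCP reachability probe for timestamping switch-module freezes.
# All addresses below are hypothetical -- substitute your own.
import socket
from datetime import datetime

TARGETS = {
    "san-iscsi": ("192.168.20.10", 3260),  # placeholder SAN iSCSI portal
    "core-switch": ("192.168.10.1", 22),   # placeholder switch mgmt address
}

def reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def poll_once(targets):
    """Probe every target once; return a {name: is_up} map."""
    return {name: reachable(h, p) for name, (h, p) in targets.items()}

if __name__ == "__main__":
    # Run this from cron/Task Scheduler every minute or so and log the output;
    # the timestamps give you a record of when each device went dark.
    for name, ok in poll_once(TARGETS).items():
        print(f"{datetime.now().isoformat()} {name} {'up' if ok else 'DOWN'}")
```

Correlating those timestamps against the switch logs should at least tell you whether it's the LAN side or the iSCSI side that drops first.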
Thanks in advance and sorry for the length.
Last edited by saundersmatt; 6th January 2009 at 02:30 PM.
Sad, posting follow-ups to my own posts, I know.
I have just tested the latest Broadcom NetXtreme II drivers (and management tools) on Server 2008 R2 beta, and so far they appear to support NIC teaming combined with Hyper-V. I am not sure if this is entirely a driver fix, or increased compatibility with 2008 R2.
Tomorrow's plan is to test the new drivers on an existing 2008 machine in the live cluster and see what happens.