VMware ESX NIC Load Balancing Problem
I have a small problem with VMware and the load balancing of physical NICs on my ESX servers. I have attached a diagram of how I have set up the networking for my ESX hosts and SAN (VMWare Physical Setup.pdf). I have two switches and have followed guidelines from VMware white papers and forum users for the configuration, but I seem to have a problem with the load balancing of the physical NICs on my ESX hosts. All network cards are gigabit. I have one dual-port card used solely for iSCSI traffic, on a separate port-based VLAN to keep that traffic off the default VLAN our computers reside on. I also have another dual-port card used solely for VMotion traffic, again on its own VLAN. Finally, I have a quad-port card used for all virtual machine traffic, plugged into the default VLAN that all of our computers sit on. I have attached a diagram (networking.png) showing how the networking is configured on each ESX host.
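In case it helps, this is roughly how I check the layout from the service console; the commands below just list what ESX can see, and the names they print (vSwitch0, vmnic0 and so on) will obviously differ per host:

# Physical NICs ESX can see, with link state and speed
esxcfg-nics -l

# Virtual switches, their port groups and which vmnics are uplinked to each
esxcfg-vswitch -l

# VMkernel interfaces (iSCSI and VMotion) and their IP addresses
esxcfg-vmknic -l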
The problem is with the load balancing of the physical network cards on the server. I first noticed it when I realised that each ESX host could only see two targets on my SAN (not four, as per the four NICs in the SAN). In my test environment I could see all four targets, but there I was only using one switch with everything plugged into it. So I logged into the service console and tried pinging all four IPs, one per NIC on the SAN. I only got replies from NIC 1 and NIC 3. At first I thought it was a problem with the second switch (after tracing the path back from the two iSCSI NICs on the ESX host, one routes through one switch and the other through the second), but after pulling the network lead joining the ESX host to the first switch I discovered that I could now ping NIC 3 and NIC 4 on the SAN! So it seems to me that the two NICs used for iSCSI traffic on the ESX host are actually configured for failover and not load balancing. I had a quick look at the performance monitor for the NICs and, sure enough, of the two NICs assigned to iSCSI traffic only one seems to be in use (no load balancing). Similarly, and more worryingly, I have four NICs assigned to virtual machine traffic, and according to the performance chart (attached as performance.png) only one of the four is carrying ALL virtual machine traffic.
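For reference, this is the sort of test I ran from the service console; the 10.0.0.x addresses below are just placeholders for the four SAN NIC IPs, not the real ones:

# Ping each SAN NIC from the service console network stack
ping -c 3 10.0.0.1
ping -c 3 10.0.0.2
ping -c 3 10.0.0.3
ping -c 3 10.0.0.4

# Repeat from the VMkernel interface that actually carries the iSCSI traffic
vmkping 10.0.0.1
vmkping 10.0.0.2
vmkping 10.0.0.3
vmkping 10.0.0.4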
I can't believe this is the case, because I have read on the VMware website that NIC teaming provides load balancing as well as failover!
The link is here (NIC Teaming heading):
Creating a Virtual Networks (VLAN) in a Virtual Infrastructure - VMware
I have had a look at the virtual switch configuration (Switch Config.png, below) that connects the physical NICs to virtual services like the service console/VMkernel, and load balancing is set to 'Route based on the originating virtual port ID'. I assumed that this, as the default setting, would be fine for load balancing. However, NIC load balancing just doesn't seem to be working (although NIC failover does).
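If it's useful, this is how I've been double-checking which uplinks are attached to the vSwitch and which NIC is actually carrying the traffic (the esxtop key is from memory):

# Show each vSwitch with its port groups and uplink vmnics
esxcfg-vswitch -l

# Watch per-NIC throughput in real time; press 'n' inside esxtop for the network view
esxtop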
As a side note, I have read that load balancing in ESX only applies to outbound traffic, and that to get inbound load balancing I would need to enable VLAN trunking. I've googled this, but it doesn't strike me as the solution to my problem.
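For what it's worth, my understanding is that tagging a port group with a VLAN ID from the service console looks something like this; the port group name, VLAN ID and vSwitch name below are made up, and the physical switch ports would need to be trunked for it to work:

# Tag the virtual machine port group with VLAN 100
esxcfg-vswitch -p "VM Network" -v 100 vSwitch0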
Has anyone had a similar experience, or is there some simple configuration setting that I'm missing here? As a result of the load-balancing problem I believe we are experiencing performance issues, especially at peak times when the VMs are under load. I'm sure that load balancing the NICs would greatly improve performance, particularly between the ESX hosts running the virtual machines and the SAN, where connectivity is currently limited to a single gigabit link for iSCSI traffic.