Pretty much what I've got now, but nada. The two servers can ping each other and the VLAN gateway now.
So to confirm:
Both servers are on the same VLAN/vSwitch in ESX, which is set to VLAN ID 100.
The switch is set to TRUNK mode and tagged for 100 (the interface is Trk1).
Trk1 is also untagged for the management VLAN.
DHCP isn't working, nor can I ping any of the above from other stations.
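For anyone trying to replicate this, the switch side of the setup described above would look roughly like the following on an HP ProCurve. This is only a sketch: the member ports (1-2) and the management VLAN ID (1) are assumptions for illustration; only the VLAN 100 tag on Trk1 comes from the posts above.

```shell
; Hypothetical ProCurve config sketch - ports 1-2 and VLAN 1 are assumed.
trunk 1-2 trk1 trunk        ; bond ports 1 and 2 into Trk1 (HP static trunk)
vlan 100
   tagged Trk1              ; servers VLAN carried tagged over the trunk
   exit
vlan 1
   untagged Trk1            ; management VLAN carried untagged
   exit
```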
What is ip route 10.12.156.0 255.255.252.0 10.12.148.3 for?
Also, I assume you are using TRK1 for aggregation of multiple interfaces to the server? There is also some config for static teams that needs to be set up on ESXi if you are using multiple NICs on a vSwitch.
I assume the new scopes for the IP ranges are created in DHCP, and the gateways for clients and server VMs are set to their VLAN address?
They are indeed. That static route points to our other site's switch, which is as yet unconfigured (that site will serve that client address range). I've actually already removed it to make sure things aren't going squiffy because of it.
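For anyone following along, the route quoted earlier reads as destination network, mask, next hop:

```shell
; Static route as quoted above (ProCurve/Cisco-style syntax):
; "send traffic for 10.12.156.0/22 via 10.12.148.3"
ip route 10.12.156.0 255.255.252.0 10.12.148.3
;        destination  mask (= /22)  next hop (the other site's switch)
```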
TRK1 is for multiple-interface use and is configured in line with VMware's recommendations. Again, I'm currently using only a single connection to rule out such issues.
Currently I have everything untagged, and naturally things are pinging away happily, but that's rather against the point.
Woo, nearly there!
Firstly, a huge thanks to Dan Jackson at TalkStraight ( @SchoolsBroadband ), who's been massively helpful in getting our router reconfigured to speak to everything correctly. Probably the best part of 3 hours on the phone and a lot of learning done in the process, but all good! I suspect certain other large providers would have told us to go hang, or charged extortionate amounts on top of the fees!
Everything is speaking to everything else, NEARLY, as it should! The big problem this morning, mostly on the internet access side, was my own mistake: thinking that I would need the "enable management-vlan" setting. No! Took it out and things started to behave!
Plus, getting the VMware stuff down has been good. In a nutshell:
vSwitch to physical switch set up as a trunk, tagged with all the relevant VLANs on the physical switch.
Virtual NICs set up on VLAN 100 (the servers VLAN).
NIC teaming set up to route based on IP hash rather than the default option.
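The ESXi side of those steps can also be done from the host command line. A minimal sketch using esxcli, assuming the default vSwitch0 and a port group called "Servers VLAN" (both names are assumptions, not from the posts above):

```shell
# Assumed names: vSwitch0 and a port group called "Servers VLAN".
# Tag the server port group with VLAN 100:
esxcli network vswitch standard portgroup set \
    --portgroup-name "Servers VLAN" --vlan-id 100

# Change NIC teaming from the default (route based on originating port ID)
# to route based on IP hash, to match the static trunk on the physical switch:
esxcli network vswitch standard policy failover set \
    --vswitch-name vSwitch0 --load-balancing iphash
```

The same settings are reachable in the vSphere Client under the vSwitch/port group Properties, which is probably where most people will set them.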
That's got that down. DHCP is working like a dream at both sites, with clients picking up the relevant IPs from each specific server.
The SCCM IP change didn't bat an eyelid; thank God for working DNS!
**** EDIT ****
Couldn't get from one site's clients to the other site's server. Turned out I'd removed the static IP routes I'd put in for that during testing. Back in, and job's a good'un. Roll on Monday!
Last edited by synaesthesia; 5th April 2013 at 08:02 PM.
You need to make sure the trunk on the HP switch is NOT set to type TRUNK and SHOULD be set to LACP.
A trunk of type TRUNK on an HP switch is a proprietary method used for bonding multiple links to increase bandwidth.
LACP is the industry standard for bonding multiple links and should be common across vendors.
If you run "show trunk" on your HP switch, the type should show as LACP.
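To illustrate the difference on a ProCurve, the trunk type is chosen when the trunk group is created, and "show trunk" reports which type is in use. Port numbers here are assumed; the two trunk commands are alternatives, not to be applied together:

```shell
; Hypothetical ProCurve commands - ports 1-2 assumed; pick ONE trunk type.
trunk 1-2 trk1 lacp    ; standards-based 802.3ad LACP aggregation
trunk 1-2 trk1 trunk   ; HP static trunk (no LACP negotiation)
show trunk             ; lists each trunk group and its type
```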
It's just "TRUNK" as advised to me by a Finnish friend who runs an almost identical setup. As it's working exactly as it should I'm not too inclined to go playing much further.
I've used trunk for many years, as recommended by VMware in this KB: VMware KB: Sample configuration of EtherChannel / Link Aggregation Control Protocol (LACP) with ESXi/ESX and Cisco/HP switches
Indeed. Both our main boxes are on 5.1; however, we still have a lot on 4.1. No point upgrading them, as they just continue to work through thick and thin. It's that very article above that I was pointed to (and had also found previously in older searches) to get my setup working well.