VMware Physical Configuration
I have just bought 3x Dell PowerEdge 2900 servers, 2x Dell PowerConnect 5448 switches and 1x Dell PowerVault MD3000i SAN. I have drawn a diagram of how I intend to physically cable these together and, following information from VMware white papers, I'm pretty sure I have this correct.
Each server has 1x 4-port gigabit PCI-e network card and 2x onboard gigabit ports, so each server has a total of 6 NICs.
I was going to team them in pairs: the two onboard NICs for the service console, two of the four ports on the PCI-e card for the VMkernel (VMotion and iSCSI), and the other two ports for the VM network (virtual machines connecting to the LAN).
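For reference, that teaming layout could be sketched with the ESX service console's esxcfg-vswitch commands. This is only a sketch: the vmnic numbering and vSwitch/port-group names below are my assumptions (check your actual numbering with `esxcfg-nics -l`), and in practice you'd set the NIC teaming policy per vSwitch afterwards.

```shell
# Sketch of the proposed three-vSwitch layout (vmnic numbers are assumptions)

# vSwitch0: service console on the two onboard ports
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0

# vSwitch1: VMkernel (VMotion + iSCSI) on two PCI-e ports
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VMkernel" vSwitch1

# vSwitch2: VM network on the remaining two PCI-e ports
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "VM Network" vSwitch2
```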
Is this the best way of configuring network connectivity for each ESX server? I'm not sure whether two ports are overkill for the service console.
I have connected one port from each pair to each of the two switches for redundancy, and the SAN is connected to both switches in the same way. So in the event of a switch failing, I have failover onto the other switch. From what I have seen, this is best practice.
My second question: the VMkernel physical network ports and the SAN's ports need to be directly connected, but they don't need connectivity with the LAN. So I was going to use a port-based VLAN on each switch to segregate all the ports connected to the VMkernel and the SAN (it works out to 5 ports on each switch). Is this standard practice? I know I could probably buy another two small 8-port switches and plug one of the SAN's two ports and one of each host's two VMkernel ports into each switch (again for failover), and this would work. I'm just wondering what others have done.
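Whichever way you segregate the iSCSI traffic (port-based VLAN or separate physical switches), it's worth verifying from each host that the VMkernel interface can actually reach the SAN, since a normal `ping` from the service console doesn't test the VMkernel stack. The addresses below are placeholders for the MD3000i's iSCSI ports, not real values from the post:

```shell
# Verify VMkernel -> SAN reachability (IP addresses are placeholders)
vmkping 10.0.10.10   # SAN controller 0 iSCSI port
vmkping 10.0.10.11   # SAN controller 1 iSCSI port
```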
My last question relates to storage. I'm new to SAN technology. The SAN I have bought has 10x 150GB SAS drives and 3x 500GB SATA II drives. I'm not sure of the best way to set the storage up. Should I use RAID 5 on the 3x SATA II disks and use those for storing my virtual machines, then use the rest of the disks for data storage? If so, I was thinking of using four of the ten SAS disks in a RAID 10 for staff profiles and home directories, then using the remaining six as either two RAID 5 volumes or a single RAID 5 volume. (I am assuming in all this that I can assign multiple LUNs to a virtual machine so that it can see multiple storage volumes?)
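To put rough usable-capacity numbers on that layout (RAID 5 loses one disk to parity, RAID 10 loses half; figures are raw GB before formatting overhead):

```shell
# Usable capacity of the proposed arrays, in GB (raw, before formatting)
echo $(( (3 - 1) * 500 ))      # RAID 5 across 3x 500GB SATA II: 1000
echo $(( (4 / 2) * 150 ))      # RAID 10 across 4x 150GB SAS:     300
echo $(( (6 - 1) * 150 ))      # one RAID 5 across 6x 150GB SAS:  750
echo $(( 2 * (3 - 1) * 150 ))  # or two 3-disk RAID 5 sets:       600
```

So the single 6-disk RAID 5 gives more usable space than two 3-disk sets, at the cost of a larger fault domain.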
I'm not sure whether performance would suffer if I ran, say, 6 virtual servers from the 3 SATA II disks in RAID 5. In my non-virtualized environment I usually mirror two OS disks and install to those separately on each server. I'm also sure I remember reading that installing an OS onto a RAID 5 volume was a bad idea, but effectively I am installing my OS onto a single virtual disk that resides on a RAID 5 volume, so I'm not sure whether performance would be any worse.
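The usual concern with RAID 5 for OS volumes is the random-write penalty: each small random write costs roughly 4 back-end disk operations on RAID 5 versus 2 on a mirror. A very rough ceiling, assuming (my assumption, not a measured figure) about 80 random IOPS per 7.2k SATA spindle:

```shell
# Rough random-write ceiling, assuming ~80 IOPS per 7.2k SATA spindle
# and write penalties of 4 (RAID 5) and 2 (RAID 1/10)
echo $(( (3 * 80) / 4 ))   # 3-disk SATA RAID 5: ~60 writes/sec shared by all VMs
echo $(( (2 * 80) / 2 ))   # a plain 2-disk mirror, for comparison: ~80
```

That's a back-of-envelope estimate only, but it suggests six OS disks sharing a 3-spindle SATA RAID 5 could be tight under write-heavy load.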
Does anyone else have any opinions on the above? This is my first shot at server virtualization and I'm open to any ideas/questions/suggestions. I have the diagram of the physical network; I'll attach it as a PDF for anyone interested. Bear in mind it was hand drawn though! :P
EDIT: I noticed that you cannot view the PDF by clicking it. You can save it to your PC for viewing by right-clicking and selecting 'Save as' or 'Save target as'. Make sure it has the .pdf file extension and it should open.