Thread: VMWare Physical Configuration (Thin Client and Virtual Machines)
  #1 - JamesC

    VMWare Physical Configuration

    I have just bought 3x Dell PowerEdge 2900 servers, 2x Dell PowerConnect 5448 switches and 1x Dell PowerVault MD3000i SAN. I have drawn a diagram of how I intend to physically cable these together and, following information from VMware white papers, I'm pretty sure I have it correct.

    Each server has one 4-port gigabit PCI-E network card and two onboard gigabit ports, so each server has a total of 6 NICs.
    I was going to team them all in pairs: the two onboard NICs for the service console, two of the four ports on the PCI-E card for the VMkernel (VMotion and iSCSI), and the other two ports for the VM network (virtual machines connecting to the LAN).

    Is this the best way of configuring network connectivity for each ESX server? I'm not sure if two ports are overkill for the service console.
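    To make the plan concrete, here's a rough sketch of the per-host layout I have in mind (the NIC and switch labels are just placeholders I've made up, not real device names):

    Code:
        # Rough sketch of the per-host layout (placeholder names, not real device IDs).
        # Each role gets a two-port team, with one port cabled to each physical switch
        # so a single switch failure leaves every role with a working uplink.
        host_plan = {
            "Service Console": ("onboard-0", "onboard-1"),
            "VMkernel (VMotion + iSCSI)": ("pcie-0", "pcie-1"),
            "VM Network (LAN)": ("pcie-2", "pcie-3"),
        }
        switches = ("PowerConnect-A", "PowerConnect-B")

        for role, (nic_a, nic_b) in host_plan.items():
            print(f"{role}: {nic_a} -> {switches[0]}, {nic_b} -> {switches[1]}")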

    I have connected one port from each team to each of the two switches for redundancy, and the SAN is connected to both switches in the same way, so in the event of a switch failing I have failover onto the other switch. From what I have seen this is best practice.

    My second question: the VMkernel physical network ports and the SAN's network ports need connectivity to each other, but they do not need connectivity with the LAN, so I was going to use a port-based VLAN on each switch to segregate all the ports (it works out at 5 ports on each switch) that are connected to the VMkernel NICs and the SAN. Is this standard practice? I know I could probably buy another two small 8-port switches instead and plug one of the SAN's two ports and one of each host's two VMkernel ports into each switch (again for failover), and that would work. I'm just wondering what others have done.

    My last question relates to storage. I'm new to SAN technology. The SAN I have bought has 10x 150GB SAS drives and 3x 500GB SATA II drives. I'm not sure what (or how) the best way to set the storage up would be. Should I use RAID 5 on the 3x SATA II disks and use those for storing my virtual machines, then use the rest of the disks for data storage? If I do this I was thinking of using four (of the 10) SAS disks in a RAID 10 for staff profiles and home directories, then using the rest as two RAID 5 volumes or even a single RAID 5 volume. (I am assuming in all this that I can assign multiple LUNs to a virtual machine so that it can see multiple storage volumes?)

    I'm not sure whether the performance would be bad if I am running, say, six virtual servers from the three SATA II disks in RAID 5. In my non-virtualized environment I usually mirror two OS disks on each server and install to those. Also, I'm sure I remember reading that installing an OS to a RAID 5 volume was a bad idea, but effectively I am installing my OS onto a single virtual disk that resides on a RAID 5 volume, so I am not sure whether performance would be any worse.
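    For what it's worth, this is the quick usable-capacity arithmetic I've been doing for the layouts I'm considering (standard RAID maths only, ignoring controller and formatting overhead):

    Code:
        # Approximate usable capacity of the array layouts under consideration.
        def raid5_usable(disks, size_gb):
            return (disks - 1) * size_gb   # one disk's worth lost to parity

        def raid10_usable(disks, size_gb):
            return (disks // 2) * size_gb  # half the disks hold mirror copies

        print("VM store, 3x 500GB SATA II in RAID 5:    ", raid5_usable(3, 500), "GB")
        print("Profiles/home dirs, 4x 150GB SAS RAID 10:", raid10_usable(4, 150), "GB")
        print("Remaining 6x 150GB SAS as one RAID 5:    ", raid5_usable(6, 150), "GB")
        print("...or as two 3-disk RAID 5 sets:         ", 2 * raid5_usable(3, 150), "GB")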

    Does anyone else have any opinions on the above? This is my first shot at server virtualization and I'm open to any ideas/questions/suggestions. I have drawn a diagram of the physical network; I'll attach it as a PDF for anyone interested. Bear in mind it was hand drawn though! :P

    EDIT: I noticed that you cannot view the PDF by clicking it. You can save it to your PC by right-clicking and selecting 'Save as' or 'Save target as', then make sure it has the .pdf file extension and it should open.
    Attached: physical network diagram (PDF)

  #2 - dhicks
    Quote Originally Posted by JamesC View Post
    I'm not sure if two ports are overkill for the service console.
    I haven't used VMware, but I'm guessing the management console certainly shouldn't need 2Gbit/s of bandwidth. The second port might be handy to have for failover, though.

    Should I use RAID 5 on the 3x SATA II disks and use those for storing my virtual machines?
    Again, this is from theory, not practice: I'm guessing that once booted and running, and assuming a decent amount of RAM each to avoid swapping, your virtual servers shouldn't constantly need to read/write massive amounts of data to the OS disk (just logs, etc.), so they would be better on a RAID 5 array of your slower disks, with your data on the faster disks for quicker access. You might want to consider what, exactly, your OSes are going to be doing: if one's a web server, with logs being appended many times a second, faster disk writes might be an idea (either faster disks or more RAM assigned to the disk cache).

    Same goes for swap space: you're advised to use a fast disk for it. If you have enough RAM free you could even put swap space on a RAM disk, avoiding any disk access at all.

    Also, I'm sure I remember reading that installing an OS to a RAID 5 volume was a bad idea
    But you've just bought a fancy SAN device with a hardware RAID controller which has chip-based support for the XOR operations needed for RAID 5, so performance should be fine.
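    (If it helps to picture why the hardware XOR matters: RAID 5 parity is just the XOR of the data blocks in a stripe, so every write means recalculating it. A toy illustration in Python - nothing to do with the MD3000i's actual firmware:)

    Code:
        # Toy illustration of RAID 5 parity: the parity block is the XOR of the
        # data blocks, and any single lost block can be rebuilt by XORing the rest.
        from functools import reduce

        def xor_blocks(blocks):
            return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

        d0, d1 = b"\x10\x20\x30\x40", b"\x0f\x0e\x0d\x0c"
        parity = xor_blocks([d0, d1])

        # Pretend the disk holding d1 failed: rebuild it from d0 and the parity.
        rebuilt = xor_blocks([d0, parity])
        assert rebuilt == d1
        print("rebuilt block matches original:", rebuilt == d1)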

    --
    David Hicks


  #3 - DMcCoy
    Quote Originally Posted by JamesC View Post
    I was going to team them all in pairs: the two onboard NICs for the service console, two of the four ports on the PCI-E card for the VMkernel (VMotion and iSCSI), and the other two ports for the VM network (virtual machines connecting to the LAN).
    You will only be able to team ports if they are on the same switch (unless you have some vastly expensive switches which let you link multiple switches and create teams from ports on both).

    You want a VLAN for each of the following:

    console, VMotion/VMkernel, and the VMs (one or more VLANs).

    You could team the four ports on the PCI-E card and use that for console/VMkernel; the console only really needs a second link for redundancy, so you might as well waste as little bandwidth on it as possible, then team the final two for VMs.

    I keep my VM host hardware on the console VLAN; the VMkernel will use a different IP address from the console, by the way. This means you can keep everything on private VLANs except the VMs if you wish (although VirtualCenter will need access to the console VLAN too).

    I have early blades, which were limited on NICs because they had fibre HBAs as well, so I have two adapters teamed with everything on them, just separated by VLANs.

    If you wish to use VMotion then you *will* need your SAN plugged into a switch; again, put it on the private VLAN that the VMkernel will use so all your hosts can see it. I suggest you don't plug these into cheap switches: use the core switch or a separate high-performance switch if you have the money. That on its own isn't going to provide any redundancy though; I've not looked at iSCSI redundancy as I have two fibre switches instead, which makes life easier.

    From the SAN side you could even split the 10 drives into a RAID 1 set and a RAID 5 set, etc. You must, however, try to keep the number of active VMs on each LUN to a maximum of 10 (15 if pushed). You can partition your RAID sets (e.g. the RAID 5 set) into multiple LUNs to stay within that.

    You assign a disk image *on* a LUN to a virtual machine, and you can add multiple disk images from multiple LUNs to it. I have a RAID set for databases, for example, and another for long-term storage, and I create disk images and assign them to VMs as appropriate.


  #4 - DMcCoy
    OK, I've just been reading up on the iSCSI side of ESX; there are some limitations on the number of HBAs:

    Software HBA: 1
    Hardware HBA: 1 dual-port, or 2 single-port

    What counts as a software HBA, I wouldn't know.

    Hmm, it seems I can't really answer any iSCSI questions (I'm still on fibre channel).

    Edit: it is on the supported list, it's just that the link in the PDF takes you past the actual Dell ones. :/

  #5 - sahmeepee
    When they say a software HBA, do they not just mean an iSCSI initiator like the Microsoft one?

    Also, the only difference between how the diagram in post #1 was cabled and how I'd read to do it is that the two switches weren't connected:

    Further, whenever possible, mesh the network between your initiators and your target. That is, use two good quality gigabit Ethernet switches and multihome two connections (one from each switch) to two separate NICs in each server. To your iSCSI targets, multihome each switch to two separate ports on each individual array. Finally, connect each switch to the other. Using this configuration, you can lose a NIC, one of the two switches, and a cable, and remain operational.
    from "Get to know iSCSI SAN components"

    It made sense when I first read it, but I'm finding it hard to see a scenario where the cable connecting the two switches together actually saves you.
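    Out of curiosity I sketched it as a little connectivity check - a toy model of one host, two switches and the array, nothing to do with the real kit:

    Code:
        # Toy model: one host with two NICs, two switches, and one array with a
        # port on each switch, plus an optional inter-switch link (ISL). Check
        # whether the host can still reach the array after a set of link failures.
        from itertools import combinations

        BASE_LINKS = [
            ("host", "sw1"), ("host", "sw2"),    # one host NIC to each switch
            ("sw1", "array"), ("sw2", "array"),  # one array port on each switch
        ]
        ISL = ("sw1", "sw2")                     # the inter-switch link in question

        def reachable(links, start="host", goal="array"):
            seen, todo = {start}, [start]
            while todo:
                node = todo.pop()
                for a, b in links:
                    for nxt in ((b,) if a == node else (a,) if b == node else ()):
                        if nxt not in seen:
                            seen.add(nxt)
                            todo.append(nxt)
            return goal in seen

        # Try every double failure of the four basic links, with and without the ISL.
        for failed in combinations(BASE_LINKS, 2):
            survivors = [l for l in BASE_LINKS if l not in failed]
            if reachable(survivors + [ISL]) and not reachable(survivors):
                print("ISL saves you when these fail:", failed)

    Running that, the inter-switch link only comes into play on a double failure, e.g. a host NIC/cable on one switch plus the array's port on the other switch - so it doesn't save you from any single failure, but it's cheap insurance.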


  #6 - DMcCoy
    Quote Originally Posted by sahmeepee View Post
    When they say a software HBA, do they not just mean an iSCSI initiator like the Microsoft one?
    I would assume so, although it does limit a few options.

  #7 - JamesC
    @dhicks - thanks for the heads-up regarding the storage of my VMs. I will try storing them on the three SATA II disks (using RAID 5) to begin with and see how they hold up.

    @DMcCoy - obviously my VMs need to be on the same VLAN as my LAN so that client computers can communicate with them. As far as the console port group is concerned, I'd have thought this needs to be on the same VLAN as well, so that I can telnet/SSH into it should I need to make configuration changes and so that VirtualCenter can use it (like you mentioned). The VMkernel can then go on its own separate VLAN, as all traffic between the HBA (be it hardware or software based) and the iSCSI device can be isolated, which is what I was going to do (thanks for confirming though!).

    As for the switches, I have two Dell PowerConnect 5448 48-port gigabit managed switches; I have consoled into them and enabled the web interface, which makes it somewhat easier to configure the VLANs.

    Out of curiosity DMcCoy, how big do you make your virtual machines when you create them? I was thinking 80GB for an OS install but I'm not sure whether this is overkill.

    If I RAID 5 the three SATA II disks I will have 1TB of storage. If I create one 1TB partition and estimate 80GB per VM, then I would be able to fit 12 virtual machines on the volume in a single LUN, although I don't anticipate that many VMs just yet. Does this sound OK?
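    This is the back-of-the-envelope maths I'm working from, checked against the 10-active-VMs-per-LUN guideline you mentioned (no formatting overhead accounted for):

    Code:
        # Rough sizing check for the VM datastore.
        usable_gb = (3 - 1) * 500        # 3x 500GB SATA II in RAID 5 -> ~1TB usable
        per_vm_gb = 80                   # planned size per VM disk image
        max_active_per_lun = 10          # guideline from earlier in the thread

        vms_by_space = usable_gb // per_vm_gb
        print(f"Capacity allows {vms_by_space} VMs of {per_vm_gb}GB in {usable_gb}GB,")
        print(f"but keep a single LUN to ~{max_active_per_lun} active VMs,"
              f" i.e. {min(vms_by_space, max_active_per_lun)} on this one")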

    As for the software HBA: this is where a standard network card plus a software initiator is used in place of a dedicated HBA like those used in FC SANs.

    @sahmeepee - yes, you are correct in saying an iSCSI initiator is used as a software HBA. I hadn't thought about cabling the two switches together, and now that I have thought about it I can't see many benefits either (although for the sake of a cable I will probably do it). Thanks for the link to the TechRepublic article; I hadn't seen that.

    Thanks for your help guys. I just have one more question. This is gonna sound kind of silly, but it was one of the first things I thought about when researching virtualization. I have all this storage in the SAN that I want to allocate to different VMs, but I was always unsure how to do it. There are two ways I can see of doing this. The first is to create a VM with enough capacity (say 400GB, for example) for both the OS and all the data I would ever expect it to use. When installing the OS I could partition the disk space into 80GB for the OS and leave the rest for data. I would be left with 320GB for data, and both the OS and the data would be encapsulated inside a single VM.

    The second way I envisaged doing it was to create a VM that was only ever going to be the size of the OS. So I could create an 80GB VM and install the OS into it using the full 80GB partition (with room to spare for apps etc.). I would then create a RAID 5 volume on the other, faster disks in my SAN, make a 320GB partition on it and present that as a single 320GB LUN. I would then guess that I could assign that LUN to a VM and the VM would see the space as storage which I could partition up and do with as I please.

    I'm pretty sure the second method of sharing storage is more correct than the first, but I'd like to hear how you guys do it so I get it right from the start.

    Thanks guys,

    James

  #8 - DMcCoy
    You can put the console in a separate VLAN and then add a trusted machine to both your normal VLANs and the console VLAN. This keeps it private but still allows access.

    My server VMs have 20GB assigned for their system drive; additional storage is allocated from storage or DB LUNs. Server 2008 will probably need 25-30GB.

    I would not put more than 10 active disk images on the same LUN; you can split a single RAID group into multiple LUNs instead to minimise SCSI locking issues.


    I always keep the VM system/installed-apps disk image separate from the data storage disk image, as it means I can attach the data to other VMs if needed.

  #9 - tmcd35
    Does the Dell SAN have a 2Gb bonded NIC? If not, 2Gb iSCSI is pointless. Also, AFAIK, you can only have one software initiator, or two hardware HBAs (a very specific QLogic TOE card), so you may only need 1x 1Gbps port on each server for the SAN.

    Best practice would be to put all the SAN connections on their own VLAN.

    The service console only needs one 100Mbps port - best practice is two for redundancy, but it's not needed.

    We have a 2Gbps bonded NIC going from the vSwitch to our physical switch.

    Personally, I'd put all the user data, applications, etc. on the SAS drives and the OS images on the SATA drives. We use a 10x 300GB SATA II DAS system here which has some write-back speed issues; we are installing SANMelody soon to manage the array away from the two ESX servers. Give anything that needs lots of reads/writes - such as home directories - the faster SAS drives.

    We mainly use RAID 50 LUNs here. It gives us two-drive-failure redundancy, though it may contribute to some (minor) speed issues.

    We tend to use 12GB VMDKs for the servers' C: drives, which are stored in one LUN. The D: drives (home directories, shares, applications, whatever) are stored in separate LUNs.
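    Roughly, for a 10-disk set like your SAS drives, RAID 50 works out like this (assuming two 5-disk RAID 5 spans - just an example layout, not necessarily how yours would be carved up):

    Code:
        # Rough RAID 50 arithmetic, assuming two RAID 5 spans striped together.
        disks, size_gb, spans = 10, 150, 2
        per_span = disks // spans
        usable_gb = spans * (per_span - 1) * size_gb   # each span loses one disk to parity
        print(f"Usable: {usable_gb}GB of {disks * size_gb}GB raw")
        print(f"Survives one failed disk in each span ({spans} total),"
              " but not two failures in the same span")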


  #10 - JamesC
    @DMcCoy - thanks for all your help, it has been most useful. Seeing as I have the space for the VMs, I'll probably create each at 50-80GB just to be safe. Thanks for letting me know about keeping the number of VMs per LUN down to avoid SCSI locking problems; I'll bear this in mind when creating VMs, but in all honesty I wasn't aware of the problem before you mentioned it.

    @tmcd35 - thanks for the quick explanation of how you have your virtualized environment configured. User data, applications, etc. will be going on the SAS drives and the VMs will go onto the SATA. I have actually just ordered another matching disk for the three SATA disks so that I can use RAID 10 instead of RAID 5; I will still only get 1TB of storage for my VMs, but hopefully there will be a performance increase for the sake of one extra disk. I think RAID 50 may be a little overkill for my needs to begin with, although I may consider it in the future. I understand the two NICs in the SAN are teamed/bonded. I too will store all my VMs on a single LUN, although like I said to DMcCoy I will probably use slightly bigger VMs as I can afford the disk space.

    Thanks for all your help here guys, you've all definitely given me a better understanding of virtualization.

    James

  #11 - DMcCoy
    From what I can make out on the forums, the SCSI reservations don't apply to the whole LUN with VMFS 3, although you still don't want too many high-load images on the same LUN.

  #12 - DMcCoy
    Replying to myself: a slight misread of the many numbers in the VMware docs. 32 is the maximum number of *hosts* connected to a single LUN. The 256-file limitation that was present in VMFS 2 has also been removed in VMFS 3. You can have folders now too!

  #13 - tmcd35
    Most OSes (Windows/Linux/MacOS/etc.) put a lock on the LUN, so only one instance of that OS has control over the LUN and its contents. VMware does not do this, so multiple copies of VMware can access the same LUN - this is how VMotion and HA work. Instead, VMware puts a lock on the individual VMDKs - only one ESX server can run a given VM at a time. Although you can have 32 hosts connected to a LUN at once, you'd probably have read/write issues on the SCSI bus. Depending on the virtual servers' workloads, you probably want between 8 and 16 VMs per LUN. With our move to SANMelody I'm looking at splitting our current 'C drives' LUN in two, aiming for around 10 VMs per LUN.
