Virtualizing with current hardware
  #1 - Tricky_Dicky

    Hi all,

    I'm looking at splitting some of the workload from the main server, plus some additional roles, into virtual machines, so that when I need to do maintenance it only affects a small area rather than taking down the whole server.

    I currently have an NEC Express5800 with dual Xeon E5504s, and a separate storage server, an Intel SSR212MC2RBR, which is just acting as a NAS at the moment. I was wondering: if I added a second Express5800, could I use the Intel storage server as a SAN and virtualize my machines?

    If I was going to do this, what software would I be best to use? The servers are both currently running Server 2003, but I would really like to get them onto 2008 R2. I was looking at the free XenServer, as it can do a P2V conversion during install, which would save a lot of work.

    Thanks in advance

    Rich

  2. #2

    tmcd35's Avatar
    Join Date
    Jul 2005
    Location
    Norfolk
    Posts
    5,620
    Thank Post
    845
    Thanked 883 Times in 731 Posts
    Blog Entries
    9
    Rep Power
    326
    I don't know the Intel SSR212MC2RBR, but the real question is: does it support iSCSI? If it does, there is nothing stopping you setting up a SAN with it. If it doesn't then, depending on drive/network speeds, there is nothing stopping you using the NAS as a central file store for the VM images.

    I haven't used XenServer, so I can't comment (I'm doing well here), but ordinarily I'd say take a gander at VMware ESXi, as this tends to be the best-of-breed VM solution. I very much doubt Hyper-V is an option, since I'm pretty sure those are the same Xeon processors as I have in my three old HP servers, and they don't support hardware virtualisation (which Hyper-V requires).

    Obviously you need to double-check RAM and network card support on the NECs to make sure they will run the VM workload you require, and be careful with OS licensing if you are buying new Server 08 R2 licenses: how many VMs each host will run and the number of physical processors (not cores) in each will determine what you need to buy OS-wise. (If memory serves, Standard covers one virtual instance per license, Enterprise four, and Datacenter an unlimited number per physical processor.)

    Hope that helps.

  3. #3

    SYNACK's Avatar
    Join Date
    Oct 2007
    Posts
    11,076
    Thank Post
    853
    Thanked 2,676 Times in 2,270 Posts
    Blog Entries
    9
    Rep Power
    769
    The CPUs in these do support all of the required extensions for Hyper-V to function, and for all of them to host 64-bit guest OSs: Intel® Xeon® Processor E5504 (4M Cache, 2.00 GHz, 4.80 GT/s Intel® QPI), SPEC code SLBF9.

    There are several free solutions: Xen, ESXi and Hyper-V. If you want automatic failover and load balancing then you will need to look at either Xen or Hyper-V, as this is a paid extra with ESXi.

    As said before, if the storage unit can be accessed over iSCSI then you should be fine using it in the way that you want to.

  #4 - Tricky_Dicky

    Thanks guys, that's great info.

    Glad the chips support virtualization; that makes things cheaper.
    I was looking at ESXi as it seems to be very popular, and I don't need automatic failover; if a server dies it would only take a few minutes to get it up and running again, which is no worse than now. Do you know if you can upgrade later on?

    The biggy is the iSCSI, and I just don't know. Grrrr. I've had a look around the Intel site and can't seem to find any information about it, and the supplier hasn't got back to me yet either. Is there any way of telling from the machine itself? Like I said, it's running Server 2003 R2 at the moment.
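
    One thing I might try in the meantime is checking whether anything on the box is already listening on TCP 3260, the standard iSCSI target port. Just a sketch (the address is a made-up placeholder for the Intel box): an open port would mean a target is already running, but a closed one wouldn't rule out adding iSCSI software later.

        import socket

        STORAGE_IP = "192.168.0.50"  # hypothetical address of the SSR212MC2
        ISCSI_PORT = 3260            # IANA well-known port for iSCSI targets

        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(2)
            target_up = sock.connect_ex((STORAGE_IP, ISCSI_PORT)) == 0

        print("iSCSI target listening" if target_up else "nothing on port 3260")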

    Thanks again

    Rich

  #5 - plexer

    If it's running Server 2003 then you should be able to get some software for it to turn it into an iSCSI target.

    Ben

  #6 - SYNACK

    You can upgrade later if you need to, but the pricing is steep on the VMware side.

    Your storage unit: is it a dedicated storage device, or a server with a bunch of drives in it running Server 2003? If it is a server then you would need some software to expose it as an iSCSI device. SANmelody is one that works on Windows, I think, and something like Openfiler would replace the 2003 OS altogether with a Linux solution that presents the drives as iSCSI.

  #7 - Tricky_Dicky

    Cracking. I was also thinking of trying Openfiler; it looks really good, and if I bond the NICs together or put some extra cards in it, I think it will do the job well.

    The good thing about this server is that it hasn't been used a lot, so it's relatively easy to just move the data from it onto a backup box for the time being.

    How much RAM would you recommend in each of the servers and the SAN itself?

    Rich

  #8 - SYNACK

    Quote Originally Posted by Tricky_Dicky:
    How much RAM would you recommend in each of the servers and the SAN itself?
    It depends on how many virtual servers you want to host and what you want them to do. Just add up the amount of RAM each separate machine would need if it were physical, then add 1GB for the host, and you should be sorted. I'd look at a minimum of 2GB per Win2k8 guest.

    For the SAN you should be able to get away with 1-2GB, but more may make it go faster; I'm not 100% sure how Openfiler utilizes it, though, so I could be wrong.
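
    As a rough sketch of that sum (the guest list and sizes here are just placeholders):

        # Add up the RAM each guest would need as a physical box,
        # then allow ~1GB of overhead for the hypervisor host itself.
        guests_gb = {"DC": 2, "print": 2, "apps": 2}  # hypothetical guests
        HOST_OVERHEAD_GB = 1

        total_gb = sum(guests_gb.values()) + HOST_OVERHEAD_GB
        print(f"Host needs at least {total_gb}GB of RAM")  # -> 7GB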

  #9 - tmcd35

    Quote Originally Posted by SYNACK:
    The CPUs in these do support all of the required extensions for Hyper-V to function, and for all of them to host 64-bit guest OSs: Intel® Xeon® Processor E5504 (4M Cache, 2.00 GHz, 4.80 GT/s Intel® QPI), SPEC code SLBF9.
    Oops, my bad! I was thinking of the older 5000 series. Intel need to do something about their processor naming policies!

    I'm going to have to stop driving home at the end of the night. SYNACK has pretty much said everything I was about to (I actually typed up some replies, then re-read the thread to see he'd beaten me to a few punchlines).

    I would just ask again: is the Intel a dedicated NAS box, or an x86 PC with a large drive array dressed up as a dedicated NAS? The difference determines whether or not you can run Openfiler on it.

    As for RAM, I'd go for either as much as the motherboards can take or as much as you can afford. My top-spec host currently has 24GB.

  #10 - Tricky_Dicky

    Quote Originally Posted by SYNACK:
    It depends on how many virtual servers you want to host and what you want them to do. Just add up the amount of RAM each separate machine would need if it were physical, then add 1GB for the host, and you should be sorted. I'd look at a minimum of 2GB per Win2k8 guest.

    For the SAN you should be able to get away with 1-2GB, but more may make it go faster; I'm not 100% sure how Openfiler utilizes it, though, so I could be wrong.
    I was planning on running guests for the DC, print, Moodle, Exchange and apps, so maybe 12GB over the two hosts?
    The SAN has 4GB already, I think (I will double-check), so hopefully that will be enough.
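
    As a quick sanity check on that 12GB figure, using the 2GB-per-guest minimum and 1GB host overhead from the previous post (a sketch only; Exchange in particular will probably want more than the minimum):

        # Five planned guests at the suggested 2GB minimum each,
        # plus 1GB of hypervisor overhead on each of the two hosts.
        guests = ["DC", "print", "Moodle", "Exchange", "apps"]
        hosts = 2

        total_gb = len(guests) * 2 + hosts * 1
        print(f"Minimum RAM across both hosts: {total_gb}GB")  # -> 12GB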


    Quote Originally Posted by tmcd35:
    I would just ask again: is the Intel a dedicated NAS box, or an x86 PC with a large drive array dressed up as a dedicated NAS? The difference determines whether or not you can run Openfiler on it.

    As for RAM, I'd go for either as much as the motherboards can take or as much as you can afford. My top-spec host currently has 24GB.
    I'm pretty sure it's a proper NAS box: it's listed on the Intel website as such, and the Openfiler site actually has a picture of it on its products page.

    So the basic plan is: get another NEC machine and 12GB+ of RAM to spread between the hosts, put Openfiler on the Intel box and link it via iSCSI, and run VMware ESXi on the main machines.

    Is there anything I need to bear in mind when running a P2V on the main DC?

    Thanks again for everyone's input.

    Rich

  #11 - tmcd35

    Quote Originally Posted by Tricky_Dicky:
    Is there anything I need to bear in mind when running a P2V on the main DC?
    Don't do it!

    Okay, it can be done (I've done it), but it's not recommended and can seriously screw up your AD. I would:

    1. Buy the new NEC and set it up with ESXi.
    2. Build a new DC with DNS on the new ESXi host.
    3. Transfer the FSMO roles to the new DC.
    4. Demote the old DC and remove the old DNS role.
    5. Check the DNS settings in DHCP point at the new DNS server, not the old one.

    Once this is done, if the old server was running any other roles that you don't want to spin off onto their own VMs, that would be the ideal time to P2V the old server.
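
    Once the roles have been transferred, it's worth confirming which server now holds each FSMO role and that name resolution works against the new DC specifically. A minimal sketch (netdom needs the Support Tools installed on 2003; the server and domain names are made-up placeholders):

        import subprocess

        NEW_DC = "newdc.school.local"  # hypothetical new DC / DNS server
        DOMAIN = "school.local"        # hypothetical AD domain name

        # Report which server holds each FSMO role
        subprocess.run(["netdom", "query", "fsmo"], check=True)

        # Resolve the domain against the new DNS server only
        subprocess.run(["nslookup", DOMAIN, NEW_DC], check=True)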

  #12 - Tricky_Dicky

    Ahh OK.

    I suppose it also makes sense to do it this way, as I was hoping to move to Server 2008 instead of 2003.

    The other roles are the random little apps which primary schools seem to love, all of which are on the DC at the moment, and I don't like it. I would much rather they were on a separate, low-powered VM in case there is a problem.

    Rich

  #13 - Tricky_Dicky

    [Attached image: random port.jpg]
    Can anyone shed any light on this port? It's on the back of the Intel storage box, and I think it's an external SAS port, but I'm not entirely sure. If it is, can I use it to connect back to the NEC server?

    If not, what's the best way of physically connecting the servers to the SAN device?

    Rich

  #14 - SYNACK

    Yes, that does look like a connector for external SAS enclosures. As the storage box is a server rather than a dedicated storage device, though, it is not designed to work as an output; even if it were, it would only be able to connect to a single machine at a time.

    You would need to use software like that mentioned above to expose the storage as iSCSI over the network ports. You would then use the iSCSI initiator (a separate download for 2003, integrated into 2008) to subscribe to the storage offered by the server.
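
    For example, subscribing from a 2008 box with the built-in iscsicli tool looks roughly like this (a sketch: the portal address and IQN are placeholders, and you'd copy the real IQN from the ListTargets output):

        import subprocess

        PORTAL = "192.168.0.50"  # hypothetical IP of the Openfiler/SAN box

        # Register the target portal, then list the targets it offers
        subprocess.run(["iscsicli", "QAddTargetPortal", PORTAL], check=True)
        subprocess.run(["iscsicli", "ListTargets"], check=True)

        # Log in to a discovered target (IQN taken from ListTargets output)
        subprocess.run(["iscsicli", "QLoginTarget",
                        "iqn.2006-01.com.openfiler:tsn.abc123"], check=True)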

    As for the migration, you could temporarily install your preferred VM host OS onto a separate computer, install the newer server OS in a VM on that, and perform the migration that way. Once it's done, you can simply reinstall the main server with your VM host OS and move the VM from the temporary box onto it; if your storage is provided over iSCSI, that makes this step much easier and quicker.

  #15 - diggory

    I was using Openfiler for my SANs with VMware ESX 4; however, OF has stability issues under high throughput with iSCSI, which can cause server freezes. Ouch.

    I've swapped all my SANs over to Nexenta, which is free up to 7TB and VMware certified.
    It seems to work well.

    The Openfiler issue is apparently down to the iSCSI target software used; there seems to have been a rift between the developer and some users, and the developer quit. It doesn't seem to be an issue under light loads, but it's risky: if the SAN bottles, it can make all your connected VMware hosts unstable!
