Thin Client and Virtual Machines Thread, Virtualizing with current hardware in Technical; Hi all,
2nd July 2010, 02:43 PM #1
Virtualizing with current hardware
I'm looking at trying to split some of the workload from the main server into virtual machines and some additional roles so that when I need to do some maintenance it only affects a small area and doesn't involve taking down the whole server.
I currently have an NEC Express5800 with dual Xeon E5504s and a separate storage server, an Intel SSR212MC2RBR, which is just acting as a NAS at the moment. I was wondering: if I added a second Express5800, could I use the Intel storage server as a SAN and virtualize my machines?
If I was going to do this, what software would I be best to use? The servers are both currently running Server 2003 but I would really like to get them onto 2008 R2. I was looking at the free XenServer as it can do a P2V during install, which will save a lot of work.
Thanks in advance
2nd July 2010, 03:55 PM #2
I don't know the Intel SSR212MC2RBR but the question really is - does it support iSCSI? If it does then there is nothing stopping you from setting up a SAN with it. If it doesn't then, depending on drive/network speeds, there is nothing stopping you from using the NAS as a central file store for the VM images.
I've not used XenServer so can't comment (I'm doing well here) but ordinarily I'd say take a gander at VMware ESXi as this tends to be the best-of-breed VM solution. I very much doubt Hyper-V could be an option since I'm pretty sure they are the same Xeon processors as I have in my 3 old HP servers, and they don't support hardware virtualisation (which Hyper-V requires).
Obviously you need to double check RAM and network card support on the NECs to make sure they will run the VM workload you require, and be careful with OS licensing if you are buying new Server 2008 R2 licences - how many VMs each host will run and the number of physical processors (not cores) in each will determine what you need to buy OS-wise.
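To make the licence arithmetic concrete, here's a rough sketch of the counting logic. It assumes the commonly quoted Server 2008 R2 virtualisation rights - Standard covers 1 running VM instance per licence, Enterprise covers 4, and Datacenter (licensed per physical processor) covers unlimited - and the function name and figures are illustrative only, so check your actual agreement before buying anything.

```python
# Rough sketch of Windows Server 2008 R2 licence counting per host.
# Assumed virtual-instance rights: Standard = 1 running VM per licence,
# Enterprise = 4, Datacenter = unlimited but licensed per physical CPU.
import math

VM_RIGHTS = {"standard": 1, "enterprise": 4}

def licences_needed(edition, vms, physical_cpus=2):
    """How many licences one host needs to cover `vms` Windows guests."""
    edition = edition.lower()
    if edition == "datacenter":
        return physical_cpus  # one licence per physical processor, any VM count
    return math.ceil(vms / VM_RIGHTS[edition])

# A two-socket host running 6 Windows guests:
print(licences_needed("standard", 6))    # 6 Standard licences
print(licences_needed("enterprise", 6))  # 2 Enterprise licences
print(licences_needed("datacenter", 6))  # 2 per-processor licences
```

For a small number of guests per host, Enterprise usually works out cheaper than stacking Standard licences; past that, Datacenter's per-processor model takes over.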
Hope that helps.
2nd July 2010, 04:02 PM #3
The CPUs in these do support all of the required extensions for Hyper-V to function and for all of them to be able to host 64-bit guest OSs: Intel® Xeon® Processor E5504 (4M Cache, 2.00 GHz, 4.80 GT/s Intel® QPI), spec code SLBF9.
There are several free solutions: Xen, ESXi and Hyper-V. If you are doing automatic failover and load balancing then you will need to look at either Xen or Hyper-V, as this is a paid extra with ESXi.
As said before if the storage unit can be accessed as iSCSI then you should be fine using this in the way that you want to.
2nd July 2010, 04:09 PM #4
Thanks guys, that's great info.
Glad the chips support virtualization, makes things cheaper.
I was looking at ESXi as it seems to be very popular and I don't need automatic failover. If a server dies it would only take a few minutes to get it up and running again, which is no worse than now. Do you know if you can upgrade later on?
The biggy is the iSCSI and I just don't know. Grrrr. I've been having a look around the Intel site and can't seem to find any information about it, and the supplier hasn't got back to me yet about it either. Is there any way of telling from the machine itself? Like I said, it's running Server 2003 R2 at the moment.
2nd July 2010, 04:15 PM #5
If it's running Server 2003 then you must be able to get some software for it to turn it into an iSCSI target.
2nd July 2010, 04:15 PM #6
You can upgrade later if you need to, but the pricing is steep on the VMware side.
Your storage unit - is it a dedicated storage device, or a server computer with a bunch of drives in it running Server 2003? If it is a server then you would need to use some software to allow access to it as an iSCSI device. SANmelody is one that works on Windows, I think, and something like Openfiler would replace the 2003 OS altogether with a Linux solution that would allow access to the drives as iSCSI.
2nd July 2010, 04:20 PM #7
Cracking. I was also thinking of trying Openfiler; it looks really good, and if I bonded the NICs together or put some extra cards in it, I think it will do the job well.
Good thing about this server is that it hasn't been used a lot and as such it's relatively easy to just move the data from it on to a backup box for the time being.
How much RAM would you recommend in each of the servers and the SAN itself?
2nd July 2010, 05:29 PM #8
Depends on how many virtual servers you want to host and what you want them to do. Just add up the amount of RAM that each separate machine would need if it was physical, then add 1GB for the host and you should be sorted. I'd look at a minimum of 2GB per Win2k8 guest.
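That rule of thumb is simple enough to sketch as arithmetic. The per-guest allocations below are hypothetical figures for the kind of roles discussed in this thread (only the 2GB-per-guest minimum and the 1GB host overhead come from the advice above; giving Exchange extra is my own assumption):

```python
# Back-of-envelope host RAM sizing: sum what each guest would need as a
# physical box, then add 1GB of overhead for the hypervisor/host itself.
HOST_OVERHEAD_GB = 1

def host_ram_gb(guest_allocations_gb):
    """Minimum host RAM for a collection of per-guest allocations (GB)."""
    return sum(guest_allocations_gb) + HOST_OVERHEAD_GB

# Hypothetical split: 2GB minimum per Win2k8 guest, Exchange given extra.
guests = {"DC": 2, "Print": 2, "Moodle": 2, "Exchange": 4, "Apps": 2}
print(host_ram_gb(guests.values()))  # 13
```

In practice you would also leave headroom for growth rather than buying exactly the minimum.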
For the SAN you should be able to get away with 1-2GB, but more may make it go faster - I'm not 100% on how Openfiler utilizes it though, so could be wrong.
2nd July 2010, 07:01 PM #9
Ooops, my bad! I was thinking of the older 5000 series. Intel need to do something about their processor naming policies!
I'm gonna have to stop driving home at the end of the night - SYNACK has pretty much said everything I was about to (actually I typed up some replies and then re-read the post to see he'd beaten me to a few punch lines).
I would just ask again: is the Intel a dedicated NAS box, or an x86 PC with a large drive array dressed up as a dedicated NAS? The difference would determine whether or not you can run Openfiler on it.
As for RAM, I'd say go for either as much as the motherboards can take or as much as you can afford. My top-spec host currently has 24GB.
5th July 2010, 08:08 AM #10
I was planning on running VMs for the DC, Print, Moodle, Exchange and Apps, so across the two hosts maybe 12GB?
The SAN has 4GB already I think (will double check), so hopefully that will be enough.
I'm pretty sure it's a proper NAS box, because it's listed on the Intel website as such, and when I was looking around the Openfiler site it actually has a picture of it.
So the basic plan is: get another NEC machine, 12GB+ of RAM to spread between the hosts, Openfiler on the Intel box linked via iSCSI, and VMware ESXi on the main machines.
Is there anything I need to bear in mind when running a P2V on the main DC?
Thanks again for everyones input.
5th July 2010, 08:34 AM #11
Don't do it!
Okay, it can be done (I've done it) but it's not recommended and can seriously screw up your AD. I would...
Buy new NEC and set that up with ESXi
Build a new DC with DNS on the new ESXi host
Transfer FSMO roles to new DC
demote old DC and remove old DNS role
Check the DNS settings in DHCP point at the new DNS server and not the old one.
Once this is done, if the old server was running any other roles that you don't want to spin off onto their own VM servers, now would be the ideal time to P2V the old server.
5th July 2010, 08:40 AM #12
I suppose it would also make sense to do it this way as I was hoping to move to Server 2008 instead of 2003.
The other roles are the random little apps which primary schools seem to love, all of which are on the DC at the moment, and I don't like it. I'd much rather they be on a separate low-powered VM in case there is a problem.
5th July 2010, 10:16 AM #13
Can anyone shed any light on this port? It's on the back of the Intel storage box and I think it's an external SAS port but I'm not entirely sure. If it is, can I use that to connect back to the NEC server?
If not, what's the best way of physically connecting the servers to the SAN device?
5th July 2010, 11:08 AM #14
Yes, that does look to be a connector for external SAS enclosures. As the storage box is a server rather than a dedicated storage device, though, that port is designed for attaching extra enclosures rather than presenting its storage outwards. Even if it was, it would only be able to connect to a single machine at a time.
You would need to use some software like that mentioned above to expose the storage as iSCSI over the network ports. You would then use the iSCSI initiator - a separate download for 2003, or integrated into 2008 - to subscribe to the storage offered by the server.
As for migration, you could install your preferred VM host OS onto a separate computer temporarily, install the newer server OS on that in a VM, and then perform the migration that way. Once it's done you can simply reinstall the main server with your VM host OS and move the VM from the temporary box to the main server; if your storage is provided by iSCSI, this makes the step much easier and quicker.
8th September 2010, 11:03 AM #15
I was using Openfiler for my SANs with VMware ESX 4; however, OF has stability issues under high throughput with iSCSI, which can cause server freezes - ouch.
I've swapped all my SANs for Nexenta, which is free up to 7TB and VMware certified.
Seems to work well.
The Openfiler issue is apparently down to the iSCSI target software used. There seems to have been a rift between the developer and some users - the developer quit.
It doesn't seem to be an issue under light loads, but it's risky: if the SAN bottles, it can make all your connected VMware hosts unstable!