What to do, hardware-wise
Next year I am looking at consolidating our servers and want to virtualise. This is a huge gap in my personal knowledge, so I've started reading the VMware site and other related bits of information.
We currently have around 8 servers, with at least three-quarters of them being one server, one application. This needs to change. The servers are ageing and running Server 2003, so I want to move forward.
From what I gather, VMware is the way forward - but which version? We would virtualise our servers first as they stand (how?) and then upgrade each one slowly to 2008 R2.
So questions are:
1. What is the ideal spec for my main virtualisation host? I've read that I don't need a host OS.
2. What spec of hard drives? How would they be organised? How big? Are there any examples online that I can be pointed at?
3. Am I right in thinking there is a Physical to Virtual (P2V) application out there that works with SCSI hard drives? What about servers with partitions?
I'm sure there are more questions but they will come.
Cheers in advance,
VMware is just the same as any OS (Windows Server or otherwise) - the hypervisor is itself an OS, so you need to store it on something (either conventional drives, or an SSD/SD card/USB stick).
Arrange your hard drives in a way that makes sense for you. Here we have a high-speed 8x 15k SAS storage array for applications that need heavy read/write (web filter and cache, databases, etc.) and then 4x 2TB 7.2k drives for our shared storage (staff drives, etc.).
P2V will (or at least should) work with any hardware, as it creates a 'snapshot' of your server at a given point in time and converts it into a virtual hard disk file.
Here we use Hyper-V as our virtualisation software. I don't know if you have looked into it, but if you buy a copy of 2k8 R2 Enterprise you are entitled to run up to 4 virtual machines on that server, and if you go up to Datacenter you are entitled to unlimited virtual machines.
Mix this in with System Center Virtual Machine Manager and boom - you have a very good virtual infrastructure for much less than the cost of VMware.
Sorry, on the hardware front: we have 2x HP DL165 G7s (8-core AMD Opterons) and a SAS storage array, and haven't seen any more than 20% processor usage.
Very small 1U rack-mount servers that run a school of 800 students and 100 staff.
Hi - it's the storage arrays that worry me. I've never set one up - do they cover failure and backup?
Originally Posted by jamesfed
Ours is basically a SAS box using dual-port SAS cables to hook into the back of the servers. We then have 4x 2TB in the servers themselves, which handles AD/file storage, and I have a backup internet connection virtual machine ready on the local storage.
It's a bit of a weird setup so you might want to wait for others to throw in their 2p :)
Not quite right. You can run as many machines as you like on the Hyper-V server, but with one Windows 2008 Enterprise licence you're only licensed to run 4 virtual instances of Windows 2008 Standard or Enterprise on that box. You can buy another Windows 2008 Enterprise licence and run an additional four, and/or run as many other non-Windows OSes as you're licensed for on there too.
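Based on the entitlement described above (one Enterprise licence covers up to 4 Windows VM instances per host), the licence count for a given number of Windows VMs is just a ceiling division. A minimal sketch - the function name and figures are illustrative, not from any Microsoft tool:

```python
import math

def enterprise_licences_needed(windows_vms: int, instances_per_licence: int = 4) -> int:
    """Each Server 2008 Enterprise licence covers up to 4 Windows virtual
    instances on the licensed host; stack licences to cover more VMs."""
    return math.ceil(windows_vms / instances_per_licence)

# e.g. 10 Windows VMs on one host would need 3 Enterprise licences;
# at some VM count, Datacenter (unlimited instances) becomes cheaper.
print(enterprise_licences_needed(10))  # 3
print(enterprise_licences_needed(4))   # 1
```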
Originally Posted by jamesfed
We have a VMware vSphere farm set up here. It consists of two clusters. Cluster A has five Dell PE 2950 servers with dual quad-core Xeon CPUs and 32GB RAM. They're all attached to two EqualLogic SANs and have about 9TB of storage available to them. Each server has four 1GbE connections to the iSCSI fabric, one 10GbE connection to the network backbone, one 1GbE connection to the network backbone for the service port, one connection to a separate network for vMotion and one 1GbE connection to our DMZ.
Cluster B has three PE M910 servers, which have 4 quad-core Xeon CPUs and 64GB RAM each. That's attached to another EqualLogic SAN with about 4TB of available storage. They have a similar number of network connections to the servers in Cluster A, but the four 1GbE iSCSI connections are replaced with a single 10GbE connection.
Of course, it all depends on how much you have to spend, but if you're serious about setting up a robust virtualisation farm, a setup similar to this is a good bet. Three servers with eight cores and 32GB RAM each would probably be enough, plus an iSCSI SAN with as much storage as you can afford. A setup like that will give you good redundancy and high availability: if one of your hosts goes down, there ought to be enough capacity in the other two to make sure you don't lose any services. If you go the Hyper-V route, make sure you get SCVMM (System Center Virtual Machine Manager) to make this automated. I'd also say that if you go the iSCSI route, make sure you don't plug the iSCSI ports into your core network, even if you VLAN it off. A lot of traffic is generated by iSCSI and your switch may not have the capacity to cope. Install a separate iSCSI switch.
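The "three hosts so the other two can absorb a failure" reasoning above is a simple N+1 headroom check. A rough sketch, using RAM as the limiting resource (the host and VM figures below are made-up examples, not measurements from this setup):

```python
def survives_host_failure(hosts: int, per_host_ram_gb: float, total_vm_ram_gb: float) -> bool:
    """Simple N+1 headroom test: can the remaining hosts hold all VM RAM
    if one host in the cluster fails?"""
    if hosts < 2:
        return False  # nowhere for the VMs to go
    return total_vm_ram_gb <= (hosts - 1) * per_host_ram_gb

# Three 32GB hosts running 60GB of VMs: lose one host, 64GB remains - OK.
# The same 60GB on only two hosts would not survive a failure.
print(survives_host_failure(3, 32, 60))  # True
print(survives_host_failure(2, 32, 60))  # False
```

The same check applies to CPU; whichever resource fails the test first sets your cluster size.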
We're running Hyper-V clustered on a Server 2008 Datacenter host OS: 2x Dell PowerEdge R710, one bought last year, one bought a few months ago (40 CPUs, mixed 5500/5600 series with CPU masking, 112GB RAM, 16 NICs total across both platforms). iSCSI connectivity to a Dell PV MD3200i 3TB SAN (6x 600GB 15k RPM SAS), directly attached to support up to 4 hosts at present; it can be extended into an IP SAN if required. Multipathing is enabled and managed with cluster shared volumes and SCVMM 2008 R2 SP1. We're currently running 12 HA VMs with various operating systems; failover performance is fantastic, with as little as ~2 seconds of downtime in a host failover situation. I performed P2V on all of my physical and virtual servers to move them over to Hyper-V, which was flawless (TAKES FOREVER). Backups are a breeze with Symantec BE 2010 R3, and we're getting licences for direct CSV backups soon. *EDIT* - Forgot to mention Live Migration (WOW :D), and the PRO tools if you ever bother to license them (intelligent VM placement according to situation and load balancing).
From what I understand, our solution is considerably cheaper than VMware, and we moved from a single-host Citrix XenServer 5.5 setup to this one in about a week and a half, during the summer break. I completed the migration from Xen to Hyper-V and all the server/SAN installation myself; I am unqualified and have had just over a year working with XenServer, so I would say it's fairly straightforward, even for someone new to virtualisation. The SAN configuration was the most difficult part - configuring iSCSI, allocating LUNs correctly and testing failover - but once that's working, easy street.
Good luck, I hope you find the right solution!
VMware is probably the best virtualisation platform out there. You 'can' do it all using the free VMware ESXi - there are some constraints with that, but to be honest we use it for quite a lot of our virtualisation: 20+ virtual servers on 3 boxes, and it's fine for us. You don't get live migration, failover etc., but it does enough to be fine for a school. If you want to go for the higher-end versions of VMware, I hope you have a nice budget for the project ;-)
You can move existing physical servers to virtual using a converter utility provided by VMware; it works great. You will see some old posts about not doing that with DCs, but they are fine as well now.
I would strongly recommend you do some homework on your current servers - find out how much memory they use, CPU load, network load etc. - and that should give you an idea of how many hosts you need.
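Once you've measured your existing servers, turning those figures into a host count is straightforward arithmetic. A back-of-envelope sketch, sizing on peak RAM - all the numbers below are placeholders for your own measurements, and a proper capacity planner would also weigh CPU, disk and network:

```python
import math

def hosts_required(vm_peak_ram_gb, host_ram_gb=32.0, overcommit=1.0, spare_hosts=1):
    """Rough consolidation estimate: sum the measured peak RAM of the
    servers being virtualised, divide by usable RAM per host, round up,
    and add spare hosts for failover headroom."""
    needed = sum(vm_peak_ram_gb) / (host_ram_gb * overcommit)
    return math.ceil(needed) + spare_hosts

# Eight existing servers with hypothetical peak RAM figures (GB):
peaks = [4, 4, 2, 8, 6, 4, 2, 4]   # total 34GB
print(hosts_required(peaks))        # 2 hosts to fit, +1 spare = 3
```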
As for hardware, speak to someone like Dell or HP and they will spec you a full solution with the servers and the storage. They will also try selling you loads of consultancy, training, high-end VMware etc., but tell them you just want the hardware.
Also, don't skimp on the budget for this: spending a decent amount now saves in the long run, and you want to spend on the back-end storage, as that is the bit that really affects how well the solution performs.
We currently use ESXi 4.1 on just one virtual host; I have lots of separate servers running bare metal too. It's a fantastic basis and very easy to learn, but I'm sure the competing solutions are well up there now (Hyper-V looks good, but I've not had any need to try it). For us, it's just one HP DL385 G5p with 12GB RAM; ESXi is on a memory stick and the VMs are on the internal RAID 5 array. It backs up to a NAS over NFS every night, so I have the ability to restore the servers if the host dies.
I wouldn't touch VMware anymore - too expensive and heavy on resources, even on the high-spec servers we have. We have just turned to Xen and won't look back - it's fantastic... and it's free!
Citation needed there. I did speed comparisons on two DL380 G5s, one running ESXi and the other running Linux KVM on Ubuntu 10.04.
Originally Posted by teejay
KVM won hands down, and therefore I now run that. It does pretty much all of what ESXi does and is completely free - live migration etc. is all included.
You just need a few Linux skills, and it's easy and fully customisable.
Also, HP are currently doing a deal: if you purchase a qualifying HP P2000 Smart Array you can get a qualifying server free (although you pay out initially, you claim the funds back). So you can get a DL360/380 with a quad-core Xeon for free, which could save you around £2k.
The P2000 \ free server offer looks fantastic, but I'm not sure if that will apply with special education bid pricing as well... if it does, you're quids in!
The platform choice really comes down to Hyper-V (a complete MS package, and cheap) vs VMware (better on a technical level, but costs more). I also prefer 3 hosts rather than two, on the theory that if one goes down I'd prefer not to have all the load on one server. It also gives you a bit more room for expansion as you add more services (which you'll want to do once you've gained the flexibility of instant server provisioning!)
If you're going SAN, get the disk configuration right, as that's the most vital part. Get a capacity planner done with your chosen supplier, find out how much utilisation you have at the moment, and plan disk type \ speed \ capacity accordingly...
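To give a feel for what the capacity planner is doing with disk type and speed: a common back-of-envelope method is to take your measured read/write IOPS, apply a RAID write penalty, and divide by per-spindle throughput. A sketch using widely quoted rule-of-thumb figures - these are assumptions, not vendor specs, so use your supplier's numbers for real sizing:

```python
import math

# Rule-of-thumb per-spindle IOPS (assumed figures, not vendor-quoted):
SPINDLE_IOPS = {"15k_sas": 175, "10k_sas": 125, "7.2k_sata": 75}
# Each front-end write costs extra back-end I/Os depending on RAID level:
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def spindles_needed(read_iops: int, write_iops: int,
                    disk: str = "15k_sas", raid: str = "raid10") -> int:
    """Back-of-envelope SAN sizing: convert front-end IOPS to back-end
    IOPS via the RAID write penalty, then divide by per-disk throughput."""
    backend_iops = read_iops + write_iops * RAID_WRITE_PENALTY[raid]
    return math.ceil(backend_iops / SPINDLE_IOPS[disk])

# e.g. a measured peak of 1000 reads/s + 250 writes/s on RAID5 15k SAS:
print(spindles_needed(1000, 250, raid="raid5"))  # 12 disks
```

This is why slow 7.2k drives that look fine on capacity can fall badly short on IOPS once VMs share them.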
And a big plus here is that you don't even need to put it on expensive servers. As long as you have a decent SAN, you can use cheap-as-chips PCs as hosts; if one falls over, the apps migrate to another 'server'. Saves bags of money, and you get way more power/redundancy for your £.
Originally Posted by glennda
Out of interest, Norphy, what are you running off of that setup - machines, users, servers etc.?
Originally Posted by Norphy