Thin Client and Virtual Machines Thread: What to do hardware wise (Technical)
  1. #1 - garethedmondson

    What to do hardware wise

    Afternoon,

    Next year I am looking at consolidating our servers and want to virtualise. There's a huge gap in my personal knowledge here, so I've started reading the VMware site and other related bits of information.

    We currently have around eight servers, with at least three-quarters of them running one application per box. This needs to change. The servers are ageing and running Server 2003, so I want to move forward.

    From what I gather, VMware is the way forward - but which version? We would virtualise our servers first as they stand (how?) and then slowly upgrade each one to 2008 R2.

    So questions are:

    1. What is the ideal spec for my main virtual server? I've read that I don't need a host OS.
    2. What spec hard drives? How would they be organised? How big? Any examples online that I can be pointed at?
    3. Am I right in thinking there is a physical-to-virtual (P2V) application out there that works with SCSI hard drives? What about servers with partitions?

    I'm sure there are more questions but they will come.

    Cheers in advance,

    Gareth

  2. #2 - jamesfed
    VMware is just the same as any OS (Windows Server or otherwise) - the hypervisor is itself the operating system - so you still need to store it on something (conventional drives, an SSD, an SD card or a USB stick).
    Arrange your hard drives in a way that makes sense for you. Here we have a high-speed 8x 15k SAS storage array for applications that need high read/write performance (web filter and cache, databases, etc.) and then 4x 2TB 7.2k drives for our shared storage (staff drives etc.).
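That two-tier logic is easy to sanity-check with a quick sketch. The disk sizes for the fast tier and the per-disk IOPS figures below are illustrative assumptions, not figures from the post:

```python
# Rough usable-capacity and IOPS estimates for common RAID levels.
# Per-disk IOPS are ballpark assumptions: 15k SAS ~175, 7.2k SATA ~80.

def raid_usable_tb(disks: int, size_tb: float, level: str) -> float:
    """Usable capacity after RAID overhead."""
    if level == "raid10":
        return disks // 2 * size_tb   # half the spindles hold mirrors
    if level == "raid5":
        return (disks - 1) * size_tb  # one disk's worth of parity
    if level == "raid6":
        return (disks - 2) * size_tb  # two disks' worth of parity
    raise ValueError(f"unknown level: {level}")

def raid_read_iops(disks: int, per_disk_iops: int) -> int:
    """Aggregate read IOPS scales roughly with spindle count."""
    return disks * per_disk_iops

# Fast tier: 8x 15k SAS in RAID 10 (300GB disks assumed for illustration)
fast = raid_usable_tb(8, 0.3, "raid10")
# Bulk tier: 4x 2TB 7.2k in RAID 5
bulk = raid_usable_tb(4, 2.0, "raid5")

print(f"fast tier: {fast:.1f} TB usable, ~{raid_read_iops(8, 175)} read IOPS")
print(f"bulk tier: {bulk:.1f} TB usable, ~{raid_read_iops(4, 80)} read IOPS")
```

The point of the split: the fast tier gives you roughly four times the IOPS per TB, which is what databases and web caches care about, while the bulk tier gives cheap TB for file shares.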
    P2V will (or at least should) work with any hardware, as it creates a 'snapshot' of your server at a given point in time and converts it into a virtual hard disk file.

    Here we use Hyper-V as our virtualisation software. I don't know if you have looked into it, but if you buy a copy of 2k8 R2 Enterprise you are entitled to run up to four virtual machines on that server, and if you go up to Datacentre you are entitled to unlimited virtual machines.
    Mix this in with System Center Virtual Machine Manager and boom - you have a very good virtual infrastructure for much less than the cost of VMware.

  3. #3 - jamesfed
    Sorry - on the hardware front, we have 2x HP DL165 G7s and a SAS storage array. They're 8-core AMD Opterons and we haven't seen any more than 20% processor usage.

    Very small 1U rack-mount servers that run a school of 800 students and 100 staff.

  4. #4 - garethedmondson
    Quote Originally Posted by jamesfed View Post
    Sorry on the hardware front we have 2x HP DL165 G7s and a SAS storage array - 8 core AMD Opterons and haven't seen any more than 20% processor usage.

    Very small 1U rack mount servers that run a school of 800 students and 100 staff.
    Hi - it's the storage arrays that worry me. I've never set one up - do they cover failure and backup?

    Gareth

  5. #5 - jamesfed
    Ours is basically a SAS box using dual-port SAS cables to hook into the back of the servers. We then have 4x 2TB in the servers themselves, which handles AD and file storage, and I have a backup internet-connection virtual machine ready on the local storage.

    It's a bit of a weird setup, so you might want to wait for others to throw in their 2p.

  6. #6 - Norphy
    Quote Originally Posted by jamesfed View Post
    I don't know if you have looked into it but if you buy a copy of 2k8R2 Enterprise you are then entitled to run up to 4 virtual machines on that server
    Not quite right. You can run as many machines as you like on the Hyper-V server, but with one Windows 2008 Enterprise licence you're only licensed to run four virtual instances of Windows 2008 Standard or Enterprise on that box. You can buy another Windows 2008 Enterprise licence and run an additional four, and/or run as many other non-Windows OSes as you're licensed for on there too.

    We have a VMware vSphere farm set up here. It consists of two clusters. Cluster A has five Dell PE 2950 servers with dual quad-core Xeon CPUs and 32GB RAM. They're all attached to two EqualLogic SANs and have about 9TB of storage available to them. Each server has four 1GbE connections to the iSCSI fabric, one 10GbE connection to the network backbone, one 1GbE connection to the backbone for the service port, one connection to a separate network for vMotion and one 1GbE connection to our DMZ.

    Cluster B has three PE M910 servers, each with four quad-core Xeon CPUs and 64GB RAM. That's attached to another EqualLogic SAN with about 4TB of available storage. They have a similar set of network connections to the servers in Cluster A, but with the four 1GbE iSCSI connections replaced by a single 10GbE connection.

    Of course, it all depends on how much you have to spend, but if you're serious about setting up a robust virtualisation farm, a setup similar to this is a good bet. Three servers with eight cores and 32GB RAM each would probably be enough, plus an iSCSI SAN with as much storage as you can afford. A setup like that will give you good redundancy and high availability: if one of your hosts goes down, there ought to be enough capacity in the other two to make sure you don't lose any services. If you go the Hyper-V route, make sure you get SCVMM (System Center Virtual Machine Manager) to automate this. I'd also say that if you go the iSCSI route, don't plug the iSCSI ports into your core network, even if you VLAN it off - iSCSI generates a lot of traffic and your switch may not have the capacity to cope. Install a separate iSCSI switch.
    Last edited by Norphy; 13th September 2011 at 03:53 PM.
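Norphy's "if one host goes down, the other two absorb the load" argument is worth checking numerically before you buy. A minimal sketch, using made-up host and VM sizes rather than any real cluster:

```python
# N+1 headroom check: can the surviving hosts absorb all VM RAM
# if the largest host fails? Figures below are hypothetical examples.

def survives_single_failure(host_ram_gb: list, vm_ram_gb: list) -> bool:
    """True if total VM RAM fits on the cluster minus its largest host."""
    spare_capacity = sum(host_ram_gb) - max(host_ram_gb)
    return sum(vm_ram_gb) <= spare_capacity

hosts = [32.0, 32.0, 32.0]          # three hosts, 32GB each
vms = [4, 4, 8, 2, 2, 6, 8, 4]      # 38GB of VMs in total

print(survives_single_failure(hosts, vms))  # 38GB fits in 64GB -> True
```

The same check with two 32GB hosts running 40GB of VMs fails, which is the case for three smaller hosts over two big ones.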

  7. #7 - cogrady84
    We're running Hyper-V clustered on a Server 2008 Datacentre host OS: 2x Dell PowerEdge R710, one bought last year and one a few months ago (40 CPUs of mixed 5500/5600-series Xeons with CPU masking, 112GB RAM and 16 NICs in total across both hosts). iSCSI connectivity goes to a Dell PowerVault MD3200i 3TB SAN (6x 600GB 15k RPM SAS), directly attached to support up to four hosts at present; it can be extended into an IP SAN if required. Multipathing is enabled and managed with Cluster Shared Volumes and SCVMM 2008 R2 SP1.

    We're currently running 12 HA VMs with various operating systems. Failover performance is fantastic - as little as ~2 seconds of downtime in a host-failover situation. I performed P2V on all of my physical and virtual servers to move them over to Hyper-V, which was flawless (though it takes forever). Backups are a breeze with Symantec BE 2010 R3, and we're getting licences for direct CSV backups soon. *EDIT* - Forgot to mention Live Migration (wow!) and the PRO tools, if you ever bother to license them (intelligent VM placement and load balancing).

    From what I understand, our solution is considerably cheaper than VMware, and we moved from a single-host Citrix XenServer 5.5 setup to this in about a week and a half during the summer break. I completed the migration from Xen to Hyper-V and all the server/SAN installation myself; I'm unqualified and have had just over a year working with XenServer, so I'd say it's fairly straightforward, even for someone new to virtualisation. The SAN configuration was the most difficult part - configuring iSCSI, allocating LUNs correctly and testing failover - but once that's working, it's easy street.

    Good luck, I hope you find the right solution!
    Last edited by cogrady84; 16th September 2011 at 04:25 PM.

  8. #8 - teejay
    VMware is probably the best virtualisation platform out there. You 'can' do it all using the free VMware ESXi - there are some constraints, but tbh we use it for quite a lot of our virtualisation: 20+ virtual servers on three boxes, and it's fine for us. You don't get live migration, failover etc., but it does enough for a school. If you want to go into the higher-end versions of VMware, I hope you have a nice budget for the project ;-)
    You can move existing physical servers to virtual using a utility provided by VMware; it works great. You will see some old posts about not doing that with DCs, but they are fine now too.
    I would strongly recommend you do some homework on your current servers - find out how much memory they use, CPU load, network load etc. - and that should give you an idea of how many hosts you need.
    As for hardware, speak to someone like Dell or HP and they will spec you a full solution with the servers and the storage. They will also try to sell you loads of consultancy, training, high-end VMware etc., but tell them you just want the hardware.
    Also, don't skimp on the budget for this. Spending a decent amount up front saves money in the long run, and you especially want to spend on the back-end storage, as that is the part that really affects how well the solution performs.
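The homework step above boils down to summing your measured peaks and adding headroom. A rough sketch of the arithmetic - every measurement below is a made-up example, not real data:

```python
# Estimate how many virtualisation hosts you need from measured peak usage.
import math

def hosts_needed(peak_ram_gb: float, peak_cores: float,
                 host_ram_gb: float, host_cores: int,
                 headroom: float = 0.25) -> int:
    """Size for peak load plus headroom on both RAM and CPU,
    then add one extra host for N+1 failover capacity."""
    by_ram = (peak_ram_gb * (1 + headroom)) / host_ram_gb
    by_cpu = (peak_cores * (1 + headroom)) / host_cores
    return math.ceil(max(by_ram, by_cpu)) + 1  # +1 = N+1 redundancy

# Eight old servers, measured peaks summed (hypothetical figures):
total_ram = 4 + 2 + 8 + 4 + 2 + 4 + 6 + 2             # 32 GB at peak
total_cores = 1.5 + 0.5 + 2 + 1 + 0.5 + 1 + 2 + 0.5   # 9 cores busy at peak

print(hosts_needed(total_ram, total_cores, host_ram_gb=32, host_cores=8))
```

With these numbers CPU is the binding constraint (9 x 1.25 / 8 ≈ 1.4 hosts), so two hosts carry the load and a third gives failover - which lines up with the three-host recommendations elsewhere in the thread.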

  9. #9 - 3s-gtech
    We currently use ESXi 4.1 on just one virtual host; I have lots of separate servers running bare metal too. It's a fantastic basis and very easy to learn, but I'm sure the competing solutions are well up there now (Hyper-V looks good, but I've had no need to try it). For us, it's just one HP DL385 G5p with 12GB RAM; ESXi lives on a memory stick and the VMs sit on the internal RAID 5 array. It backs up to a NAS over NFS every night, so I can restore the servers if the host dies.

  10. #10 - cpjitservices
    I wouldn't touch VMware any more - too expensive and heavy on resources, even on the high-spec servers we have. We have just turned to Xen and won't look back. It's fantastic... and it's free!

  11. #11 - glennda
    Quote Originally Posted by teejay View Post
    VMWare is probably the best virtualisation platform out there,
    Citation needed there. I did speed comparisons on two DL380 G5s, one running ESXi and the other running Linux KVM on Ubuntu 10.04.

    KVM won hands down, so that's what I now run. It does pretty much everything ESXi does and is completely free - live migration etc. is all included.

    You just need a few Linux skills; it's easy and fully customisable.

  12. #12 - glennda
    Also, HP are currently doing a deal: if you purchase a qualifying HP P2000 Smart Array, you can get a qualifying server free (you pay out initially and claim the funds back). So you can get a DL360/380 with a quad-core Xeon for free, which could save you around £2k.

  13. #13 - gshaw
    The P2000 / free server offer looks fantastic, but I'm not sure whether it applies on top of special education bid pricing as well... if it does, you're quids in!

    The platform choice really comes down to Hyper-V (complete MS package and cheap) vs VMware (better on a technical level but costs more). I also prefer three hosts rather than two, on the theory that if one goes down I'd rather not have all the load on a single server. It also gives you a bit more room for expansion as you add more services (which you'll want to do once you've gained the flexibility of instant server provisioning!)

    If you're going SAN, get the disk configuration right, as that's the most vital part. Get a capacity planner done with your chosen supplier, find out how much utilisation you have at the moment, and plan disk type / speed / capacity accordingly...

  14. #14
    Quote Originally Posted by glennda View Post
    Citation needed there. I did speed comparisons on two DL380 G5's one running ESXI and the other running Linux KVM on ubuntu 10.04 [...]
    And a big plus here is that you don't even need to put it on expensive servers. As long as you have a decent SAN, you can use cheap-as-chips PCs as servers; if one falls over, the apps migrate to another 'server'. It saves bags of money and gets you way more power and redundancy for your £.

  15. #15 - Disease
    Quote Originally Posted by Norphy View Post
    Not quite right. You can run as many machines as you like on the Hyper-V server but with one Windows 2008 Enterprise licence, you're only licenced to run 4 virtual instances of Windows 2008 Standard or Enterprise on that box. [...]
    Out of interest, Norphy, what are you running off that setup - machines, users, servers etc.?


