  1. #1

    sonofsanta

    New Virtualized Servers - how would you do it?

    More or less as the title says, tbh.

    Background:
    Current servers are old and running on 2k3; I want to upgrade to 2k8 R2 ahead of going W7 in Summer '12. The server room is currently full of fat HP towers of varying ages, and it could do with tidying up and improving.

    I want to move to rack-mounted, and I want to go down the virtualized route for disaster recovery and reliability purposes. I'm planning on having 2 DCs, 1 Exchange 2010 server, 1 print server and 1 general apps server to run the AV console, WSUS, intranet site etc.

    We run about 400 computers with about 1200 users.


    I know enough about virtualization to talk about it and appreciate what's being said, but not enough to plan this out properly. I've got a few competing quotes from companies already, mostly based around 2 Xeon-powered servers with buckets of RAM and an iSCSI SAN, but there are variations.

    For what I want to do budget shouldn't be an issue, based on existing quotes, but obviously I need to be saving money where reasonable in the current climate. There's no need for corners to be cut though.


    So if you were virtualizing the 5 servers above - how would you do it?
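Not something anyone in the thread spelled out, but a quick way to sanity-check a two-host design like this is to total the VM requirements and confirm the surviving host can carry everything if the other dies (N+1). A minimal sketch in Python - the per-VM RAM figures and the 4GB hypervisor overhead are illustrative assumptions, not numbers from any quote:

```python
# Rough N+1 sizing check: can the remaining hosts hold every VM's RAM
# if one host fails? All figures below are illustrative guesses.

vms = {
    "DC1":          {"ram_gb": 4},
    "DC2":          {"ram_gb": 4},
    "Exchange2010": {"ram_gb": 16},
    "PrintServer":  {"ram_gb": 4},
    "AppsServer":   {"ram_gb": 8},   # AV console, WSUS, intranet
}

def n_plus_one_ok(vms, hosts, host_ram_gb, overhead_gb=4):
    """True if (hosts - 1) machines can hold the total VM RAM."""
    needed = sum(v["ram_gb"] for v in vms.values())
    usable = (hosts - 1) * (host_ram_gb - overhead_gb)
    return needed <= usable

print(n_plus_one_ok(vms, hosts=2, host_ram_gb=48))  # True  (36GB <= 44GB)
print(n_plus_one_ok(vms, hosts=2, host_ram_gb=32))  # False (36GB > 28GB)
```

The same arithmetic extends to vCPUs; the point is simply that with only two hosts, each one must be specced to run the entire estate on its own.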

  2. #2

    strawberry
    Physical: DC, virtual host 1, virtual host 2, SAN, standalone virtual host.

    DC for disaster recovery, 2 clustered virtual hosts with a SAN for instant failover, and 1 standalone host in case the SAN fails, so you can restore the VMs to a standalone machine. I'd then use the Enterprise licence to put 4 VMs on each server, have a DC on the standalone, a DC on the cluster, and do some service duplication across the VMs.

  3. Thanks to strawberry from: sonofsanta (5th April 2011)

  4. #3

    sonofsanta
    Quote Originally Posted by strawberry View Post
    Physical: DC, virtual host 1, virtual host 2, SAN, standalone virtual host.

    DC for disaster recovery, 2 clustered virtual hosts with a SAN for instant failover, and 1 standalone host in case the SAN fails, so you can restore the VMs to a standalone machine. I'd then use the Enterprise licence to put 4 VMs on each server, have a DC on the standalone, a DC on the cluster, and do some service duplication across the VMs.
    Good point - no-one mentioned "what if the SAN failed"... would you run physical DC as primary or backup though? I figure a virtual DC must be easier to recover so would be better as primary, and if physical DC goes down and is only a backup, it could just pick everything up again from replication.

  5. #4
    nicholab
    I would run a software SAN in a VM in HA mode across two physical boxes, so the SAN is mirrored across the two virtual hosts and the other VMs run on the same two hosts.

  6. Thanks to nicholab from: sonofsanta (5th April 2011)

  7. #5

    sonofsanta
    Quote Originally Posted by nicholab View Post
    I would run a software SAN in a VM in HA mode across two physical boxes, so the SAN is mirrored across the two virtual hosts and the other VMs run on the same two hosts.
    One supplier did quote on that possibility - two very well-specced servers, each running a virtual FalconStor appliance in an HA pair, with each also running the VMs. That was the cheaper option, and I got the impression that it wasn't as powerful/recommended (although of course, that may be because they wanted me to spend more... )

  8. #6

    glennda
    We have the following

    3 x DL380 G5 (2 x quad-core Xeon each), 2 of them with 64GB RAM and 1 with 8GB (testing host). Connected via Fibre Channel to an MSA2000 with multiple controllers and automatic failover between controllers if one dies (which happened once, and only one server crashed - we put that down to it writing to a system file at the time).

    On the 2 main hosts (using QEMU/KVM on Ubuntu with Oracle's cluster file system) we currently run: 1 x DC, 3 x file servers, 1 x SIMS server, 1 x Eclipse library server, 2 x terminal servers, plus WSUS, AV, Xibo and print servers. The processors never get above 45-50% usage and RAM averages around 30GB.

    We then run 1 x physical primary DC, a physical proxy (using Squid), our firewall (ISA) and a couple more file servers.

  9. Thanks to glennda from: sonofsanta (5th April 2011)

  10. #7
    Duke
    Just throwing in my two pence...

    If you have two virtual hosts then can a single host handle the whole load if one fails? Usually everyone recommends having a minimum of three hosts for this reason, but if you've only got 5-or-so virtual servers then one host could probably handle it.

    As strawberry said, you must keep one DC physical; I'm not 100% sure what the best practice is on making it primary or not.

    Clustered pool of hosts with VMware ESXi (my preference), Oracle Sun S7000 SAN (again my preference), with a budget for either a second mirrored SAN or a physical backup solution - probably a server with DAS or a tape drive, plus Veeam.

    Chris

  11. Thanks to Duke from: sonofsanta (5th April 2011)

  12. #8

    TechMonkey
    Two points, which may be me being simple: (1) if you get a decent SAN then surely it has built-in failover (duplicated NICs, power, disks with RAID etc.), so failover for the failover is commendable but maybe overkill; (2) can someone point me to the docs/discussion/theory behind having a physical DC? It seems counter-productive to virtualize everything and then have a single point of failure with one physical DC.

  13. #9
    Duke
    Quote Originally Posted by TechMonkey View Post
    Two points, which may be me being simple: (1) if you get a decent SAN then surely it has built-in failover (duplicated NICs, power, disks with RAID etc.), so failover for the failover is commendable but maybe overkill; (2) can someone point me to the docs/discussion/theory behind having a physical DC? It seems counter-productive to virtualize everything and then have a single point of failure with one physical DC.
    The SAN will have redundant individual components, but the whole head itself can die or you could get a bad firmware/OS update that kills it. Clustered heads or a failover SAN with replication should solve this, and while it seems overkill the SAN is obviously a big component if it's hosting your VMs - one SAN failure could wipe out pretty much your entire server infrastructure.

    I don't think anyone's suggesting you have just a single physical DC, but rather that you must have at least one physical one (then other virtual ones too), so that when you first power up the virtual hosts, the virtual servers on them have a server to talk to for DNS, DHCP (if applicable) and AD authentication.

    Cheers,
    Chris

  14. #10

    sonofsanta
    Quote Originally Posted by glennda View Post
    We have the following

    3 x DL380 G5 (2 x quad-core Xeon each), 2 of them with 64GB RAM and 1 with 8GB (testing host). Connected via Fibre Channel to an MSA2000 with multiple controllers and automatic failover between controllers if one dies (which happened once, and only one server crashed - we put that down to it writing to a system file at the time).

    On the 2 main hosts (using QEMU/KVM on Ubuntu with Oracle's cluster file system) we currently run: 1 x DC, 3 x file servers, 1 x SIMS server, 1 x Eclipse library server, 2 x terminal servers, plus WSUS, AV, Xibo and print servers. The processors never get above 45-50% usage and RAM averages around 30GB.

    We then run 1 x physical primary DC, a physical proxy (using Squid), our firewall (ISA) and a couple more file servers.
    Crikey - how much did that lot all cost you? (ballpark)

    Do you find it better separating everything out so much? I'd have thought running a separate instance for each web app (WSUS, AV etc.) would introduce a lot of overhead from all the 2k8 installations running.

  15. #11

    AngryTechnician
    Counterpoint: you do not need to keep a physical DC.

    Just don't put it in the cluster.

    I have a DC on every physical host, but none of them are in a clustered disk on the SAN. Their virtual HDs reside on local RAID1 disks on the host. If the SAN fails, they will still work. If a host fails, one of the others will still work. Even if the Hyper-V environment goes pear-shaped on every host, I can simply reinstall Hyper-V from scratch, create a new VM and attach the locally-stored DC disks, and it will boot.

    You do not need to keep a physical DC. Just keep at least one that isn't on the SAN, and there is no failure you could recover from with a physical DC that you can't recover from this way.
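To make the "DC outside the cluster" idea concrete, here is a toy model - the VM placement below is illustrative, not AngryTechnician's actual layout - that checks which VMs remain recoverable when a given component fails:

```python
# Toy failure-domain model. A VM is recoverable as long as its storage
# survives: local disks die with their host, while SAN-backed VMs can
# be restarted on another host. Placement is illustrative only.

vms = {
    # name: (host, storage)
    "DC-A":     ("host1", "host1-local"),   # local RAID1, not on the SAN
    "DC-B":     ("host2", "host2-local"),
    "Exchange": ("host1", "san"),
    "Apps":     ("host2", "san"),
}

def survivors(vms, failed):
    """VMs whose storage is unaffected by the named failed component."""
    return sorted(
        name for name, (_host, storage) in vms.items()
        if storage != failed and storage != failed + "-local"
    )

print(survivors(vms, "san"))    # ['DC-A', 'DC-B']: AD stays available
print(survivors(vms, "host1"))  # ['Apps', 'DC-B', 'Exchange']
```

With the DCs on per-host local RAID1, a total SAN failure still leaves both DCs intact, which is exactly the recovery property the post describes.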

  16. Thanks to AngryTechnician from: sonofsanta (5th April 2011)

  17. #12

    sonofsanta
    Quote Originally Posted by AngryTechnician View Post
    Counterpoint: you do not need to keep a physical DC.

    Just don't put it in the cluster.

    I have a DC on every physical host, but none of them are in a clustered disk on the SAN. Their virtual HDs reside on local RAID1 disks on the host. If the SAN fails, they will still work. If a host fails, one of the others will still work. Even if the Hyper-V environment goes pear-shaped on every host, I can simply reinstall Hyper-V from scratch, create a new VM and attach the locally-stored DC disks, and it will boot.

    You do not need to keep a physical DC. Just keep at least one that isn't on the SAN, and there is no failure you could recover from with a physical DC that you can't recover from this way.
    I like that idea. A cunning plan, my lord...

    Do you have any DCs running from the SAN as well, then, or just one DC per physical host running from local storage, relying on AD's natural resilience through replication?

  18. #13

    glennda
    Quote Originally Posted by sonofsanta View Post
    Crikey - how much did that lot all cost you? (ballpark)

    Do you find it better separating everything out so much? I'd have thought running a separate instance for each web app (WSUS, AV etc.) would introduce a lot of overhead from all the 2k8 installations running.
    I don't know how much the SAN was, but each set of 64GB RAM is around 3,500 (although I upgraded each from 32 to 64). We only have separate servers because I haven't had a chance to move the WSUS server onto the AV server yet (AV was converted from physical to virtual). The Xibo server is Linux and doesn't use a lot, but the SIMS server and each TS use 10GB RAM apiece.

    Also, running on KVM is quicker than the likes of ESXi/Hyper-V as it runs inside the Linux kernel, plus it's free, as is the Oracle cluster file system software.

  19. #14
    chazzy2501
    2 hosts with 1 CPU and 32GB RAM each, and 1 SAN with 3 x 10k SAS drives (for OS) and as many 7.2k SATA drives as you need for data. iSCSI SAN and 2 x layer 2 gigabit switches. Pair that with VMware Essentials Plus and some per-socket backup software, Veeam or vRanger. I've got 2 x 2-CPU hosts and it laughably uses less than 10% max. As most licensing with VM stuff is per socket, just get what you need.

    I have no physical DC and just keep both on the SAN. It's got two of everything; if I experience 2 simultaneous failures then tough - this is a school, not a multimillion-pound business. The SAN has a 5-year, 4-hour call-out. This is a significant improvement on my old physical hardware, which was only backed up (no redundancy, so turnaround was a day or two).

    So far only 1 PSU on the SAN has gone - was bricking it until the new one arrived though.
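Since most of the VM-stack licensing mentioned in this thread is per CPU socket, fewer, bigger sockets can be cheaper to license than more, smaller ones. A trivial sketch - the price per socket is a made-up placeholder, not a real quote:

```python
# Per-socket licence arithmetic. The 400-per-socket price is a placeholder.
def licence_cost(hosts, sockets_per_host, price_per_socket):
    """Total licence cost when software is licensed per CPU socket."""
    return hosts * sockets_per_host * price_per_socket

print(licence_cost(2, 1, 400))  # 800: two single-socket hosts
print(licence_cost(2, 2, 400))  # 1600: same hosts with a second CPU each
```

This is why a single higher-core-count CPU per host, as in the spec above, can keep licensing costs down while still leaving plenty of headroom.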

  20. Thanks to chazzy2501 from: sonofsanta (5th April 2011)

  21. #15

    TechMonkey
    Quote Originally Posted by Duke View Post
    The SAN will have redundant individual components, but the whole head itself can die or you could get a bad firmware/OS update that kills it. Clustered heads or a failover SAN with replication should solve this, and while it seems overkill the SAN is obviously a big component if it's hosting your VMs - one SAN failure could wipe out pretty much your entire server infrastructure.

    I don't think anyone's suggesting you have just a single physical DC, but rather that you must have at least one physical one (then other virtual ones too), so that when you first power up the virtual hosts, the virtual servers on them have a server to talk to for DNS, DHCP (if applicable) and AD authentication.

    Cheers,
    Chris
    Aha, I didn't realise there was something major that could take the SAN down and didn't have a redundant partner. I was thinking I'd like 2 SANs, but the cost may be prohibitive.

    Sorry, bad typing on my part. I realised that everything wouldn't be on the DC, but I wasn't understanding why you have to have 1 physical DC alongside your virtual servers. Wouldn't just starting the first VM and letting it start up before initialising the others be enough? They would then see the first server with the services and be happy. Or am I being a VM n00b?
