Where to start creating a virtual server setup? RM CC4 to Vanilla (Page 2 of 2, posts 16 to 20)
  1. #16
    mrbios (Stroud, Gloucestershire)
    Quote Originally Posted by tj2419:
    We have 4 DCs: 1 physical and 1 virtual on either side of the WAN.

    What is the benefit of keeping a physical DC as well, instead of having them all virtualised?
    If you forget to set up the automatic startup routine, then after your whole network has been down it can be a bit of a PITA trying to get access again.

    I've had all my DCs virtual for the past three years, though; as long as you have more than one, on more than one host, it's unlikely to be an issue. Just keep some IPs noted down so that you can connect without DNS should the occasion ever arise.
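    That crib sheet of IPs can even be scripted. Here's a minimal sketch (the IPs and hostnames are made-up placeholders, not anything from this thread) that keeps a plain-text list of DC addresses and prints it in hosts-file format, ready to paste into your admin machine's hosts file if DNS is ever down:

```shell
#!/bin/sh
# Emergency name-resolution crib sheet: keep a plain-text list of DC IPs
# so you can still reach them by name when DNS is down.
# The addresses below are placeholders -- substitute your own.
cat > dc-list.txt <<'EOF'
10.0.1.10 dc1.school.internal
10.0.1.11 dc2.school.internal
10.0.2.10 dc3.school.internal
EOF

# Print in hosts(5) format; in an emergency, append these lines to
# C:\Windows\System32\drivers\etc\hosts (or /etc/hosts) on your workstation.
while read -r ip name; do
  printf '%s\t%s\n' "$ip" "$name"
done < dc-list.txt
```

    Regenerate the list whenever a DC's address changes, and keep a printed copy too, for the day the workstation itself won't boot.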

    So would I be right in thinking you're looking at around 10 VMs? Plan for 15 in that case (trust me, when you first virtualise you'll suddenly realise there's so much you can set up in an instant that you'll start wanting new things, haha). I'd say two hosts would suffice, but for some reason part of me wants to say three, even though you wouldn't ever max them out. On the bright side, if you go VMware then you only need to pay for vCenter Server Foundation rather than Standard: you could buy the vSphere Essentials Plus Kit, which gives you six CPU licences to cover up to three hosts, plus vCenter Foundation (VMware Europe Official Online Store - VMware vSphere Essentials Kit). That's all assuming you go down the VMware path, obviously.

  2. #17
    tj2419 (United Kingdom)
    Quote Originally Posted by mrbios:
    So would I be right in thinking you're looking at around 10 VMs? Plan for 15 in that case ... I'd say two hosts would suffice, but for some reason part of me wants to say three, even though you wouldn't ever max them out.
    Very true. We started off by virtualising our Moodle install on XenServer, and we're already up to three VMs now with Spiceworks and Xibo on there too. :P

    We currently have three servers on our curriculum network: svr1 (DC, DHCP, DNS), svr2 (DNS, member server) and svr3 (DNS, member server). Would you keep the same setup with three VMs for the extra resilience, or consolidate, given that the user directories would be stored on the SAN? And how many of you run two SANs mirroring each other, in case your SAN goes wrong?

    Thanks for all the help and advice; it's a lot to take in, but I'm finding it very interesting. :P

  3. #18
    mrbios (Stroud, Gloucestershire)
    Quote Originally Posted by tj2419:
    We currently have three servers on our curriculum network: svr1 (DC, DHCP, DNS), svr2 (DNS, member server) and svr3 (DNS, member server). Would you keep the same setup with three VMs for the extra resilience, or consolidate ...? And how many of you run two SANs mirroring each other, in case your SAN goes wrong?
    What do svr2 and svr3 actually do, exactly? (I'm not sure what's involved in being a "member server".) I think you could consolidate those three into one, then add an additional DC doing DNS and DHCP (even if only for failover) so that you have one on each host. That seems the best way to go, to be honest. You could reuse the hardware from one of them if you wanted to keep a physical DC for just-in-case purposes; some people would, some wouldn't, it's up to you really.

    Rather than mirroring the SAN using the various replication tools and features SANs can provide, the cheaper alternative is disk-to-disk-to-tape (D2D2T) backups: get a cheap but decent-sized RAID 6 storage SAN in a separate location from your server room, have it take incremental backups every night, and then back that up to tape. In a pinch you could commission the slow SAN to take over if the main one died; performance wouldn't be great, but you'd only be looking to run on it for a day at most anyway. It's a tricky one, though. Others may give you different advice, but that's the route I'd look at personally.
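    For the disk-to-disk leg of a D2D2T scheme, the nightly incrementals can be done with completely standard tools. A minimal sketch using GNU tar's `--listed-incremental` mode (all paths here are illustrative placeholders; the tape leg and the cron scheduling are left out):

```shell
#!/bin/sh
# D2D sketch: a full backup, then a nightly incremental driven by a
# snapshot (.snar) file that records what tar has already seen.
rm -rf /tmp/d2d-demo
mkdir -p /tmp/d2d-demo/source /tmp/d2d-demo/backups
echo "report" > /tmp/d2d-demo/source/monday.txt

# Level-0 (full) backup: creates the snapshot file as a side effect.
tar --create --listed-incremental=/tmp/d2d-demo/backups/state.snar \
    --file=/tmp/d2d-demo/backups/full.tar -C /tmp/d2d-demo source

# A new file appears; re-running with the same snapshot file archives
# only what changed since the full backup.
echo "minutes" > /tmp/d2d-demo/source/tuesday.txt
tar --create --listed-incremental=/tmp/d2d-demo/backups/state.snar \
    --file=/tmp/d2d-demo/backups/incr1.tar -C /tmp/d2d-demo source
```

    In practice you'd run the incremental from cron, pointed at the slow SAN's mount, and sweep the archives to tape on your existing autoloader; Backup Exec and Veeam both do the same job with more polish.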

  4. #19
    tj2419 (United Kingdom)
    Quote Originally Posted by mrbios:
    I think you could consolidate those three into one, then add an additional DC doing DNS and DHCP (even if only for failover) so that you have one on each host ... Rather than mirroring the SAN, the cheaper alternative is D2D2T backups ...
    I've just had a look, and svr1, svr2 and svr3 are all acting as DCs, all running DNS, with just svr1 running DHCP.

    Possible Setup
    I think I would (money permitting) go for three hosts, one fast SAN and one slow SAN, set up with three VMs running as DCs plus the additional VMs (Spiceworks, print server and so on), able to fail over across the three hosts in case one of them dies. For that failover to work, I take it the file directories and shared drives need to be stored on the SAN so that all the hosts can access them?

    Hardware-wise we already have a pretty good backup system: two HP ProLiant DL180 G6 servers (Xeon E5520 @ 2.27 GHz, 8 CPUs), each with 16 TB of storage (8 x 2 TB drives), plus an HP autoloader. We're running Backup Exec at the minute, but hopefully we'll change that to Veeam.

    So am I right in thinking that:

    • As users log in and connect to the VMs, the VMs are automatically balanced across the three hosts to help with performance?
    • We can remote onto a particular VM's IP and make changes, and those changes are replicated across the hosts immediately?
    • If configured correctly, a host could die and users would automatically be redirected to the other two hosts and continue working?
    • Would you connect the three hosts up to the core switch directly, with a separate switch linking the hosts to the SAN for extra speed?


    Thanks
    Last edited by tj2419; 14th November 2013 at 01:36 PM. Reason: Correction

  5. #20
    mrbios (Stroud, Gloucestershire)
    Quote Originally Posted by tj2419:
    ... So am I right in thinking that:

    • As users log in and connect to the VMs, they are automatically balanced across the three hosts to help with performance?
    • We can remote onto a particular VM's IP and make changes, and those changes are replicated across the hosts immediately?
    • If configured correctly, a host could die and users would automatically be redirected to the other two hosts to continue working?
    • Would you connect the three hosts and SAN up to the core switch directly, with a separate switch linking the hosts to the SAN for extra speed?
    I'm going to answer on the assumption of VMware, as it's the one I'm used to. With three VMware hosts you'd also have a vCenter Server (you can make this a VM if you wish; there's even a virtual appliance for it), and you can then create a High Availability cluster containing the three hosts.

    All three hosts share the same SAN storage, so moving machines between hosts while they're running is as simple as dragging and dropping, or right-clicking and migrating, from one host to another. How long the migration takes mostly depends on how much RAM is assigned to the VM; even my biggest VM, with 24 GB of RAM, only takes a minute or two at most. That's all it takes to get a VM running on a different host, so you can balance them out yourself quite easily, and there are very granular performance graphs and event warnings (high latency and so on) to help you decide when to.

    The VMs don't auto-balance themselves based on performance, BUT if one host fails with five running VMs on it, the cluster will automatically bring those five VMs back up on a different host. So you get a couple of minutes of downtime on those VMs (or less; VMs boot really, really fast), but the system brings them back up for you.

    I'd connect the three hosts directly to the core, but I'd put all the iSCSI traffic from hosts to SAN (assuming you went for iSCSI) on a separate switch, using separate ports on each host, ideally as a separate VLAN. It would still be connected to the core as a centre point for the VLAN routing, but the storage traffic just wouldn't go out that way.
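    The switch-side config for that separation is small. Here's a hypothetical Cisco-style fragment, just to illustrate the idea; the VLAN number, port range and jumbo-frame command are all examples and depend entirely on your kit:

```
! Dedicated storage VLAN so iSCSI traffic stays off the core
vlan 40
 name iSCSI-Storage
!
! Access ports for the hosts' dedicated iSCSI NICs and the SAN controllers
interface range GigabitEthernet0/1 - 6
 description iSCSI uplinks (hosts and SAN)
 switchport mode access
 switchport access vlan 40
 spanning-tree portfast
!
! Jumbo frames, if the SAN and host NICs support them end to end
system mtu jumbo 9000
```

    On the host side you'd then bind a separate vmkernel port, on its own vSwitch and physical NICs, to that VLAN.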


