Windows Server 2008 Thread, virtualisation in 2008, in Technical
  1. #1 (KK20)

    virtualisation in 2008

    I've looked at some threads in here referencing virtualisation, but I have a few questions for the learned amongst you. We are a relatively small school: 200 PCs and 3 servers (one dedicated firewall, one DC with IIS and SQL, one Exchange). I've been given enough cash to replace two of them, but I was thinking about simply purchasing one and virtualising. Is it cost-effective to virtualise only 3 machines? The current machines are all dual Xeon (old 2.8s), so any quad core would blow them away even taking on the load of two servers plus the firewall. I don't plan on running a SAN; I will simply stuff SAS drives into the new box and RAID 1 them.
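    As a rough sanity check on whether one quad-core box really can absorb the load of the old dual-Xeon pair, a back-of-envelope consolidation calculation might look like the sketch below. All the utilisation and speed-up figures are hypothetical placeholders, not measurements; substitute real Perfmon averages from your own servers.

```python
# Back-of-envelope consolidation check: can one new host absorb the old servers?
# All figures below are hypothetical placeholders -- replace them with real
# Perfmon averages before trusting the result.

OLD_SERVERS = {
    # name: (cpu_cores, avg_cpu_utilisation, ram_gb_in_use)
    "dc-iis-sql": (2, 0.40, 3.0),
    "exchange":   (2, 0.35, 3.5),
}

NEW_HOST_CORES = 4
NEW_HOST_RAM_GB = 16
PER_CORE_SPEEDUP = 2.0      # assume a modern core ~2x an old 2.8GHz Xeon core
HYPERVISOR_OVERHEAD = 0.10  # reserve ~10% for the parent partition

def consolidation_fits():
    # Express demand in "old core" units, then convert to new-host cores.
    old_core_demand = sum(cores * util for cores, util, _ in OLD_SERVERS.values())
    new_core_demand = old_core_demand / PER_CORE_SPEEDUP
    ram_demand = sum(ram for _, _, ram in OLD_SERVERS.values())
    cpu_ok = new_core_demand <= NEW_HOST_CORES * (1 - HYPERVISOR_OVERHEAD)
    ram_ok = ram_demand <= NEW_HOST_RAM_GB * (1 - HYPERVISOR_OVERHEAD)
    return cpu_ok and ram_ok

print(consolidation_fits())
```

    With these placeholder numbers the answer comes out as "fits easily", which matches the intuition in the post; the exercise is mainly useful for spotting RAM shortfalls, which bite virtual hosts sooner than CPU does.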

    We have Software Assurance for the 2003 to 2008 (R2?) upgrade (and for SQL, Exchange and ISA), and I think Hyper-V comes with 2008? I guess it would be a case of loading 2008, virtualising, keeping our 3 server licences and away I go?

    I know I will need new, more expensive CALs over my 2003 ones.

    Has anyone virtualised on a small scale? Was it cost-effective, or should I simply buy 2 separate boxes (and leave the firewall box alone; it's plenty powerful enough)?

  2. #2 (Abaddon)
    Personally, I wouldn't. If you have only one host server, you have no failover: if you're hosting important server roles and your host has a hardware failure, you've lost multiple functions at once. I'd only consider it with at least two host servers with vMotion (or whatever the MS equivalent is now) and some form of shared storage. Just my view, of course. Probably not worth it unless you are considering virtualising more of your infrastructure (if you have more infrastructure!)

  3. #3 (KK20)
    If we lose any hardware we are hosed anyway (in an operational sense, not a backup sense), as both servers are symbiotically dependent on each other thanks to the horrific infrastructure I inherited. DFS was not meant to be abused the way it is.....

    Anyway, I digress. I would be virtualising purely to move three big boxes down to one: space, power, heat. The more I look into it, the more I think I should stick with extra boxes rather than one multicore machine with heaps of RAM.

  4. #4 (tmcd35)
    Quote Originally Posted by Abaddon View Post
    Personally, I wouldn't. If you have only one host server, you have no failover: if you're hosting important server roles and your host has a hardware failure, you've lost multiple functions at once. I'd only consider it with at least two host servers with vMotion (or whatever the MS equivalent is now) and some form of shared storage. Just my view, of course. Probably not worth it unless you are considering virtualising more of your infrastructure (if you have more infrastructure!)
    I'd agree totally. I wouldn't do virtualisation without at least two hosts and some shared storage (a NAS at minimum).

    I do want to raise two points, though. I think vMotion (or equivalent) is a nice feature but far from worth the money. Really, how long does it take to press play on another host and get the VM going again?

    The other point: unless you have $$$s to spare, you're still going to have a single point of failure. It's all well and good having two hosts in case of hardware failure on one, but what about the shared storage? We're getting into replicating-between-SANs territory here.

    OT (but related): just had a thought, how well would a DFS share between two Windows file servers work as shared storage for VMs?

  5. #5
    It depends what and why you are virtualising. Of the couple of schools I've deployed it in, almost all are single-server, BUT the VM is just doing a few odds and ends: usually a 2003 print server where drivers are an issue, and things like the Abacus Evolve interactive planner that won't run on 2008/Vista/7. In the case of the 2 VMs I'm currently doing, I'm building 2008 terminal servers for out-of-hours use. If the VMs die in any of these cases I'm not going to be too worried; worst case, I rebuild them in a few hours.

  6. #6 (m25man)
    We run a whole load of stuff on Hyper-V, including our own SBS 2008 server, which is the DC and Exchange and everything, with no failover!

    But that's our choice, because I know I have bare-metal recovery capability within a few hours and a choice of fresh hardware from stock in hours if need be.

    I certainly wouldn't recommend anyone did the same unless you have the resources to do so.

    On a smaller scale, by all means virtualise anything you think you could manage without if the host was out of service for a while, because the loss of your virtual host could be far more disruptive than you might first think.

    Put it all down on paper, then try to work out how you would cope if you lost all 3 virtual boxes at once! Then decide.
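    The "put it all down on paper" exercise can even be done as a trivial script: list each service, note which box it lives on, and see what disappears when a host dies. The service placement below mirrors the consolidated setup discussed in this thread (DC/IIS/SQL and Exchange on one virtual host, firewall kept separate), but is otherwise illustrative.

```python
# What do we lose if a given host dies? The "paper exercise" as a dict lookup.
# Service placement mirrors the setup discussed in the thread; adjust to taste.

HOSTS = {
    "virtual-host": ["AD/DNS", "IIS", "SQL", "Exchange"],
    "firewall-box": ["firewall/ISA"],
}

def lost_services(failed_host):
    """Return every service that goes down when one host fails."""
    return HOSTS.get(failed_host, [])

for host in HOSTS:
    print(f"{host} fails -> lose: {', '.join(lost_services(host))}")
```

    Seeing four critical services listed against one hostname is usually the moment single-host consolidation stops looking quite so attractive.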

  7. #7
    Quote Originally Posted by m25man View Post
    We run a whole load of stuff on Hyper-V, including our own SBS 2008 server, which is the DC and Exchange and everything, with no failover!

    But that's our choice, because I know I have bare-metal recovery capability within a few hours and a choice of fresh hardware from stock in hours if need be.

    I certainly wouldn't recommend anyone did the same unless you have the resources to do so.

    On a smaller scale, by all means virtualise anything you think you could manage without if the host was out of service for a while, because the loss of your virtual host could be far more disruptive than you might first think.

    Put it all down on paper, then try to work out how you would cope if you lost all 3 virtual boxes at once! Then decide.
    Thing is, if you only have one server anyway, you don't have much to lose with virtualisation; it's still a single point of failure, but if your VM is backed up you can hopefully just dump it onto a workstation or laptop and, if it's a DC, have at least some of your network and policies back in a short time.

  8. #8 (m25man)
    Exactly; it works for me, but the host is a dual quad-core with 32GB RAM, 3TB of disk and 8 NICs, running 2008 R2 Datacenter, and it does a lot more than just the SBS.
    It's got my Home Server running on it with all of my video collection as well!
    Oh, and I forgot to mention the iSCSI disk array attached.

    That, however, is the DR strategy: the backup VHDs can be brought back online as quickly as we can get the new hardware installed, so for us it's not an issue. All of the media content has been archived to Blu-ray just in case.

    I'm not trying to sell you off the idea; the one reason we virtualised was the electricity and cooling cost of running a 12-server sandbox!

    The only really important server can, as you say, be revived in a very short window, but I don't know many schools that have that type of hardware going spare in the store room!

    By all means virtualise, but do so with business continuity as the determining factor.

  9. #9
    I have a very similar situation coming up.

    The school has got a PCP grant to upgrade its gear: a complete newly wired Cat6 network with managed wireless and a core switch, the works. They've also specced some money for "server upgrades".

    Currently the school is running curriculum on a single-CPU quad-core Xeon with 4GB RAM; a 2nd DC (single CPU, dual core, 2GB memory) that does nothing, it seems, except occasionally break replication between the servers (I may turn the sucker off!); and an admin server, which I suspect is of similar spec to the curriculum one, though I believe slightly older (I've not had a chance to nose about and see what it is; I've been here 10 days so far!).
    All servers run Server 2003; client PCs are a mix of XP, Vista and a couple of Windows 7.
    There is also a 2TB NAS (1TB in a RAID 5 setup).

    I have considered looking into a virtualisation system here: one curriculum, one admin (SIMS), all data on the NAS. I don't need a print server; all printing is done either by local printers (or ones low-tech enough that I might as well add them individually to laptops as locals), or by a Riso networked/managed photocopier/printer thing.

    From my experience, most schools don't have the money or resources for any real redundancy. You can set up all these possible systems, but without a complete 2nd network sitting idle you will always have a single point of failure to contend with.

    With my current setup, the DC runs printing, AD and all user data; the NAS is mapped as an iSCSI drive to the DC. So if the DC fails, all printing is lost and all document access is lost; it's all mapped through the iSCSI drive, not direct to the NAS.
    If the NAS fails, users can log on, but all profile data is lost and all documents are lost. RAID arrays in various mirror configs will help here, but we all know that remirroring a DC or a NAS with hundreds of profiles' worth of work is at minimum a day's work.
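    That dependency chain (users reach their data through the DC's iSCSI mapping, not the NAS directly) is exactly the kind of thing a tiny transitive-failure model makes visible. The sketch below just encodes the setup from this post; the component and service names are illustrative.

```python
# Model the failure chain from the post: user data is reached *through* the
# DC's iSCSI mapping, so a DC failure takes the NAS-hosted data with it.

DEPENDS_ON = {
    "printing":  ["dc"],
    "logons":    ["dc"],
    "user-data": ["dc", "nas"],  # mapped via iSCSI through the DC
}

def affected_by(failed_component):
    """Services that stop working when one component fails."""
    return sorted(svc for svc, deps in DEPENDS_ON.items()
                  if failed_component in deps)

print(affected_by("dc"))   # DC failure takes everything with it
print(affected_by("nas"))  # NAS failure still leaves logons and printing
```

    Writing the dependencies out like this is a quick way to spot that remapping clients straight at the NAS would shrink the DC's blast radius.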

    I have the added problem that only the admin server is backed up externally at all, but that's a discussion for another place!

  10. #10 (SYNACK)
    I would also suggest at least two boxes; this may even fit within your budget, and with shared storage it would give you a much more robust system.

    As for live migration and clustering support, it is actually free with the Microsoft solution. You can download Hyper-V Server R2 ( http://www.microsoft.com/hyper-v-ser...s/default.aspx ) for free, and it supports this feature across multiple servers. Depending on how you are licensed for Server 2003, you could probably just convert them all to virtual machines, dump the whole lot into a virtual setup and go from there.

    The one thing I would caution against is virtualising the firewall box: it is supposed to be as secure as possible, and some virtualised configurations can make it easier to compromise.

  11. #11 (KK20)
    One good idea you have all pointed me towards is snapshotting. I could snapshot the VMs to an iSCSI SATA box, which would help recover the system (not necessarily the data) in a failure. That at least is better than what I have now. I would also gain the current physical PCs.

    The idea behind virtualising was my old training: keep Exchange away from the DC where you can. Since I have the money to replace 2 machines, I was thinking about replacing them with one bigger machine; I don't think I can afford a SAN plus the required card on top of this.

    I won't have failover in any case; a hardware failure (or mad RAID collapse) would kibosh the system, and we simply aren't a big enough school to afford redundancy. Although I would have the 2 PowerEdge 2800 servers that I will be retiring.

    -neilfisher- Your iSCSI NAS: what type of drives do you have in there, and is it fast enough for you? Do you mind me asking what make of NAS it is? I trust my Openfiler RAID 10 (or is it 01? I can't remember now) SATA box with backups, but I wouldn't trust it with live data (redirected documents, so I don't think SATA or nearline SAS would cut it!), so a dedicated box (for example a Dell EqualLogic) would be more appropriate (if I can shoehorn in the money!).

    -anyone- How does Hyper-V handle snapshotting? Is this done by the hypervisor, or is each VM expected to snapshot itself? I am looking towards the bare-metal hypervisor (the free one); both VMs will have 2008 on there. I am not sure of the terminology, but could I automatically replicate a virtualised setup from my NEW Hyper-V server to a cluster of my 2 old servers? Or am I barking up the wrong tree? My thinking is to utilise the 2 old servers as a failover. This means that I *will* need a SAN/NAS iSCSI arrangement rather than a new server full of drives.

    I still think that single-box virtualisation is the way to go for me, with perhaps the option of a clustered (but still woefully slow) backup. Let's see how my core switches handle that traffic.

  12. #12 (SYNACK)
    Quote Originally Posted by KK20 View Post
    -anyone- How does Hyper-V handle snapshotting? Is this done by the hypervisor, or is each VM expected to snapshot itself? I am looking towards the bare-metal hypervisor (the free one); both VMs will have 2008 on there. I am not sure of the terminology, but could I automatically replicate a virtualised setup from my NEW Hyper-V server to a cluster of my 2 old servers? Or am I barking up the wrong tree? My thinking is to utilise the 2 old servers as a failover. This means that I *will* need a SAN/NAS iSCSI arrangement rather than a new server full of drives.

    I still think that single-box virtualisation is the way to go for me, with perhaps the option of a clustered (but still woefully slow) backup. Let's see how my core switches handle that traffic.
    Hyper-V snapshots are taken by the host hypervisor. If I remember correctly, it locks off the present state and makes all further changes to a differencing drive that holds everything written from that point on.

    You could use the two old servers in a cluster with the new one, using the free Hyper-V Server, and get it to do live failover, but this would require centralised storage. It should be quite possible over iSCSI, so no extra cards needed. There are even suppliers on here (CPLD) who have been known to do custom-built Openfiler boxes for good prices that manage quite speedy operation, which would probably work fine for this, especially given the far lower IO requirements of Exchange 2010. For live migration there is really no way around centralised storage, as all products require it. You may be able to manage a manual failover to a previous state if you were to back up regularly to the other boxes.

    I'd also be careful about snapshotting a DC unless you only have one, because they get really screwy if their databases get out of sync.
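    The snapshot mechanism described above (freeze the current state, write all subsequent changes to a differencing disk) is classic copy-on-write. The sketch below is a stripped-down model of that mechanism only, not Hyper-V's actual AVHD on-disk format.

```python
# Minimal copy-on-write model of a snapshot plus differencing disk.
# Reads check the differencing layer first, falling back to the frozen base.
# Illustrative only: Hyper-V's real AVHD implementation differs in detail.

class DiffDisk:
    def __init__(self, base_blocks):
        self.base = dict(base_blocks)  # frozen at snapshot time
        self.diff = {}                 # all writes after the snapshot land here

    def write(self, block, data):
        self.diff[block] = data        # the base is never touched again

    def read(self, block):
        return self.diff.get(block, self.base.get(block))

    def revert(self):
        self.diff.clear()              # discarding changes = rolling back

disk = DiffDisk({0: "boot", 1: "data-v1"})
disk.write(1, "data-v2")
print(disk.read(1))   # the differencing layer wins
disk.revert()
print(disk.read(1))   # back to the snapshot state
```

    The model also shows why reverting a DC snapshot is dangerous: the revert silently throws away every database change made since the snapshot, which is exactly what puts replicated AD databases out of sync.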

  13. #13 (KK20)
    Thank you for taking the time to reply.

    Although it will blow the budget, I will need centralised storage. Because of our size a SAN would be overkill; we only have circa 1TB of storage between both of our servers in their current guise, so a small iSCSI box would be best. The irony is, it will need numerous decent-quality drives due to the flogging they would get shared between 2 VMs plus failovers, as opposed to the RAID 5 set of 76GB 15k SCSI drives living in the servers at the moment.
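    Whether a small iSCSI box really needs "numerous decent-quality drives" comes down to random IOPS rather than capacity. A rough spindle-count estimate can be sketched as below; the per-drive IOPS figures are common rules of thumb and the VM demand is a guess, so measure your own servers (Perfmon disk transfers/sec) instead.

```python
# Rough spindle-count estimate for shared VM storage.
# Per-drive IOPS are rules of thumb; the 400 IOPS demand is a made-up
# example figure -- measure real servers with Perfmon before buying disks.

DRIVE_IOPS = {"7.2k SATA": 80, "10k SAS": 130, "15k SAS": 180}

def drives_needed(total_vm_iops, drive_type, raid_write_penalty=2,
                  write_fraction=0.3):
    """Spindles needed, counting RAID 1's doubled writes."""
    reads = total_vm_iops * (1 - write_fraction)
    writes = total_vm_iops * write_fraction * raid_write_penalty
    backend_iops = reads + writes
    return -(-backend_iops // DRIVE_IOPS[drive_type])  # ceiling division

print(drives_needed(400, "7.2k SATA"))
print(drives_needed(400, "15k SAS"))
```

    The gap between the two answers is the whole argument for spending on faster drives in a shared box: slow SATA spindles have to be bought in bulk to match a handful of 15k drives.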

    <sigh> What a can of worms I have opened.

  14. #14 (KK20)
    After looking at all the options, I won't be virtualising. I will stick with the 2-server (+1 firewall) approach. The cost is simply too much for the small gain achieved, as I would need a decent external storage unit of some description (almost the cost of a small server) and a decent core switch for the storage and servers to sit on (although a direct link via a dedicated card was a possibility in the short term if iSCSI were used).

    I wasn't able to find any real-world data on raw access speeds for iSCSI in a virtualised environment, but I suspect it is as fast as a bonded connection would allow, minus overhead.
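    On that raw-speed question, a theoretical ceiling for iSCSI over bonded gigabit links can at least be estimated: wire speed times link count, minus protocol overhead. The ~10% overhead figure below is a common rule of thumb for TCP/IP plus iSCSI headers, not a measurement, and real bonding rarely gives a single iSCSI session more than one link's worth of bandwidth.

```python
# Theoretical best-case iSCSI throughput over bonded gigabit links.
# The 10% protocol overhead (TCP/IP + iSCSI headers) is a rule of thumb;
# actual throughput also depends on the bonding mode and the workload.

def iscsi_ceiling_mb_s(links, link_gbps=1.0, overhead=0.10):
    """Approximate usable MB/s across a bonded set of links."""
    usable_bits = links * link_gbps * 1e9 * (1 - overhead)
    return usable_bits / 8 / 1e6  # bits/s -> megabytes/s

print(iscsi_ceiling_mb_s(1))  # single GbE link
print(iscsi_ceiling_mb_s(2))  # two bonded links
```

    A single gigabit link tops out at roughly 110MB/s usable, which is in the same ballpark as a couple of local 15k spindles; that is why small iSCSI deployments usually care more about random IOPS than raw bandwidth.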
