Virtualization Plan Sanity Check - Thin Client and Virtual Machines (page 2 of 2, posts 16 to 29)
#16 (jamesb)
Quote (sted): because as far as I'm aware that's how Hyper-V works; the machines are 100% independent, so you can't share core files
    Differencing disks should allow for multiple instances based off the same parent disk in Hyper-V. The instances themselves will be 100% independent - the parent just gives them a common starting system installation rather than having to install twelve copies of the same OS and consume twelve times the disk space.
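Conceptually, a differencing disk is copy-on-write storage. As a rough illustration (a toy Python model, not the actual VHD format; the block contents are made up):

```python
# Toy model of a differencing (copy-on-write) disk: each child VM stores
# only the blocks it has changed; unchanged blocks come from the parent.
class DifferencingDisk:
    def __init__(self, parent):
        self.parent = parent      # read-only base image (dict: block -> data)
        self.delta = {}           # this VM's changed blocks only

    def read(self, block):
        # Changed blocks come from the child; everything else from the parent.
        return self.delta.get(block, self.parent.get(block))

    def write(self, block, data):
        self.delta[block] = data  # the parent is never modified

parent = {0: "bootloader", 1: "system32"}           # one shared OS install
vms = [DifferencingDisk(parent) for _ in range(12)] # twelve independent VMs
vms[0].write(1, "patched")                          # only VM 0 diverges
```

All twelve children share one installed OS on disk, but a write by one VM is invisible to the others, which is the "100% independent" part.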

Quote (sted): I have never used it so I can't comment; I've only really used Hyper-V, Hyper-V Core (or whatever it's called) and something else whose name escapes me. My main reason for liking Hyper-V is that you get an actual usable console on the host PC, so you don't need to rely on a second machine to manage it.
    But it does mean you're using all the system resources of a full OS with GUI just in order to have that console - probably why Hyper-V requires such a scandalous amount of memory for itself.

Quote (Mr.Ben): With the RAM requirements, why not use the Hyper-V dynamic memory feature? (Introduced with 2008 R2 SP1)
    Sadly not in KVM yet (at least not for Windows guests - Linux guests support balloon drivers and they're working on a Windows one).

#17 (Irazmus)
    Firstly, thank you to everyone for all the advice, I'll go through again later and thank individual posts.

    @jamesb
    Is that a bit fine just on the RAM or the CPU as well?
    I can probably persuade our Business Manager to cover an upgrade to 24GB, but I'm working on the assumption of 16GB for the time being.

    Since we're currently running those services (except Syslog) as separate hosts we should be able to manage. But we can lose IIS, WDS and the Intranet without much issue. And if absolutely necessary I can always press-gang my desktop to assist.

    We currently have 2 DCs, 1 of which will be staying physical as I'm too paranoid to virtualize them both.

    @sted
The host OS will either be Debian or Gentoo running KVM as hypervisor. I'm more comfortable with Debian, but Gentoo has support for newer versions of Ganeti.

    The print server is a Debian, CUPS, Pykota box, so merging with WDS isn't possible.
    I guess I should have mentioned which OS each VM is running, I may go back and update that.

    @FN-GM
    Merging IIS into the SM box is an idea I'd not considered, although I'd like to try to segregate services if at all possible, since that's one of the things that attracts me to virtualization. I'd planned syslog to run on Debian, but I'll look into Windows options as a possible alternative if RAM becomes an issue.

    @glennda
That sounds like a nice setup. I looked into SANs but they're simply out of our price range, although it's definitely something to aspire to, and your CPU numbers are reassuring.

    @sted
    It's probably rather fortunate we're still running 2K3 then. I'd like to at least upgrade our DCs to 2K8 but the cost of new CALs would prevent us buying the VM server. Our budget really is that tight this year.

    @plexer
    No, the 2800s will be mirrored file servers running OpenFiler. Not ideal since Dell will be classifying them End of Life later in the year, but we should be able to keep one going by robbing parts from the other if needed.

    @Mr.Ben
    If and when KVM supports it I may well do that. But even if KVM did support it, for planning I'd still assume them to each have dedicated RAM, at least that way I shouldn't under spec.

    @dhicks
    I think I already checked that, but I'll double check to be safe.

Yes, mirrored as in DRBD style, just in case the server itself fails. We have a separate backup for disaster recovery (external HDDs stored in a firesafe in a separate building, with 3 TeraStations for server/VM backups).
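For what it's worth, the point of that style of mirroring can be sketched in a few lines (a toy model, not the real DRBD protocol; synchronous replication means every write lands on both nodes before it counts):

```python
# Toy model of DRBD-style synchronous mirroring: every write is committed
# to both nodes, so either node alone holds a complete copy after a failure.
class MirroredStore:
    def __init__(self):
        self.nodes = [{}, {}]   # primary and secondary block stores
        self.alive = [True, True]

    def write(self, block, data):
        # Synchronous replication: the write lands on every live node.
        for node, up in zip(self.nodes, self.alive):
            if up:
                node[block] = data

    def read(self, block):
        # Serve from the first node that is still alive.
        for node, up in zip(self.nodes, self.alive):
            if up:
                return node[block]
        raise RuntimeError("both mirrors down")

store = MirroredStore()
store.write(7, "user-files")
store.alive[0] = False          # the primary server dies
```

After the primary fails, reads still succeed because the secondary already holds every committed write; that is what makes "rob parts from the other if needed" viable without data loss.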

    My home server and half the servers here run Debian, the only reason I'm considering Gentoo is that Ganeti seems to be better supported there.

#18 (plexer)
@Irazmus OK, thanks, good to know. I still have a 2800 as my second DC here and was going to run it as an additional 2008 R2 DC, but if it's going end of life with no more warranty available then I won't.

    Ben

#19 (jamesb)
Quote (Irazmus): @jamesb
Is that a bit fine just on the RAM or the CPU as well?
A bit fine on both. I'd recommend making sure you've got more cores to play with before virtualising anything else onto it; 1 core/VM should be fine though. The RAM upgrade wouldn't hurt, but again, so long as you're planning to upgrade before expanding the virtualisation project, it can be done later.

    Since a large part of virtualisation is intended to maximise resource usage, cutting things a bit fine is the best way to do it.

Quote (Irazmus): Since we're currently running those services (except Syslog) as separate hosts we should be able to manage. But we can lose IIS, WDS and the Intranet without much issue. And if absolutely necessary I can always press-gang my desktop to assist.

We currently have 2 DCs, 1 of which will be staying physical as I'm too paranoid to virtualize them both.
    Good call. As @dhicks said you don't really want to have any DC mirrored - much better either to have a separate virtual one on the other host, or a separate physical one, or both.

Quote (Irazmus): @Mr.Ben
If and when KVM supports it I may well do that. But even if KVM did support it, for planning I'd still assume them to each have dedicated RAM, at least that way I shouldn't under spec.
    With well-managed dynamic memory and ballooning drivers, you can actually provide about 50% more memory than actually exists (assuming that the guests never try to actually use it all at once, then things get messy). KVM does actually support balloon drivers for your Linux guests apparently, so it may be worth looking into.
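That 50% figure is just planning arithmetic: what you advertise to guests can exceed physical RAM as long as combined working sets fit. A hedged sketch (the VM names and GB sizings below are hypothetical, not this thread's actual plan):

```python
# Rough memory over-commit planning: with ballooning you can advertise
# more RAM than physically exists, provided the guests' combined working
# set stays below the real physical total.
physical_gb = 16
overcommit_factor = 1.5            # the ~50% figure mentioned above
advertised_gb = physical_gb * overcommit_factor

# Hypothetical guest sizings, purely for illustration:
guests = {"DC": 2, "SIMS": 4, "print": 2, "web": 4, "WDS": 2, "syslog": 1}
allocated = sum(guests.values())
fits = allocated <= advertised_gb  # True: 15 GB allocated vs 24 GB advertised
```

The danger case is when the guests actually try to use the full 24 GB at once; then the host has to page, which is where "things get messy".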

Out of curiosity, is there a specific reason you're going for KVM, or is it purely an expense thing? If it's purely expense, did you have a look at VMware's ESXi Hypervisor (free)? I'm mainly asking since ESXi does provide balloon drivers for all of the OSes I've ever had a chance to try it with.

#20 (jamesfed)
I'd strongly suggest you never cut corners on a virtualisation project; there are just too many things that can go wrong when it comes to using existing (and potentially ageing) hard disks and expecting them to work harder than before (with your SIMS/file server).

I know the budget is tight, and we just had the same problem here. Ultimately we cut corners on a physical-to-virtual conversion by not spending enough on the SAN (which was a repurposed Dell PowerEdge 2900) and are now feeling the pain of corrupted disks and non-working VMs.

Not that I want to be a doomsayer, but you might be better off waiting a year for the money to get a proper SAN and a second virtual host before you start your virtual journey.

On a lighter note, that server sounds pretty much like our HP DL165s but with an 8-core processor; we also have a DL385 G7 with another 8-core (this one runs our virtual desktops), and both models run a treat.

Also have a look at Hyper-V Server 2008 R2 from Microsoft; it's free and you get things like Dynamic Memory, which would help with your RAM requirements.

#21 (jamesb)
Quote (jamesfed): Also have a look at Hyper-V Server 2008 R2 from Microsoft; it's free and you get things like Dynamic Memory, which would help with your RAM requirements.
Microsoft Hyper-V, VMware vSphere Hypervisor (ESXi) and KVM are all free, and all perfectly fine for simple server virtualisation (except for KVM's lack of balloon drivers for Windows).

#22 (NS_tech)
Sounds like a good, well-thought-out plan to me; the 24GB RAM upgrade would be good if you can get it from the money side of things. Nice server choice.

Hope all goes well; be sure to let us know how it's going once it's up and running.

#23 (Irazmus)
Quote (jamesb): With well-managed dynamic memory and ballooning drivers, you can actually provide about 50% more memory than actually exists (assuming that the guests never try to actually use it all at once, then things get messy). KVM does actually support balloon drivers for your Linux guests apparently, so it may be worth looking into.
I'll definitely look into that, especially for those *nix servers that could do with more RAM anyway.

    It's partly an expense thing, but it's also the fact that it's GPL.
    We looked into ESXi but decided against it for a number of reasons.
    I couldn't see any way to get High Availability and auto failover, which I think I can probably manage with KVM/Ganeti/LHAP. It's not vital but I'd like to have the option for later.
I also don't think it's possible to have local VM storage using DRBD (or similar) to clone between VM hosts. Without the DRBD cloning I can't avoid using network storage for VM images while maintaining quick recovery.
I was initially planning to use Xen with Ganeti, but KVM has replaced it as it's kernel-based. I may still use some Xen hosts if some of my older kit doesn't support HVM, but that's a decision I can finalize later.

    @jamesfed
    I don't mind a little doom saying, the more information/opinions I get, the better.

    I take your warning on board, but SIMS is too important to run on unsupported hardware so we have to act now, either with a new SIMS server, or by starting the virtualization. The upside is that most of the load will be put on the new server, the 2800s will actually get less of a hammering since they'll be sharing the load (unless one decides to fail) and will only be serving user files. Plus the drives they'll be using are only ~3 years old since they aren't original to either server, they were added as an upgrade.

    But if this all goes to pot because we're cutting corners, you have full permission to say 'I warned you' in a vaguely Scottish accent and then throw a rabbit at me

Not to nit-pick, but Hyper-V isn't free; it's included in the server licence cost, and we have no 2K8 licences, nor 2K8 CALs either (which I assume I'd need to allow clients to connect to the VMs). To sort the above plan using Hyper-V, we'd need a 2K8 licence for the new host and each of the backup hosts. When adding the CALs on (250 computers @ 3.70), we're looking at ~1200 - 1500, which I'd rather put towards the hardware. That, and I'm happier managing *nix machines.
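For reference, the arithmetic behind those numbers (the client count and CAL price are the poster's; the number of hosts and the per-host server licence price below are made-up placeholders just to show how the total climbs towards the quoted range):

```python
# Licensing arithmetic from the post above.
clients = 250
cal_price = 3.70                   # per-device CAL (poster's figure)
cal_cost = clients * cal_price     # CALs alone come to 925

# Hypothetical extras: server licences for the new host plus the
# two backup 2800s push the total towards the quoted ~1200 - 1500.
hosts = 3                          # assumed: new host + two backup hosts
server_licence = 120.0             # placeholder per-host price, not a quote
total = cal_cost + hosts * server_licence
```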

    @NS_tech
    I'm confident I can get the upgrade, but prefer to err on the side of caution until it's confirmed.
The HP DL3xx range comes up often in the virtualization threads here, so it seemed the obvious choice, especially when you compare the prices to Dell.

    I'll definitely report back when we get it running, but it'll probably be the summer before we really get going with it.

#24 (jamesfed)
Quote (Irazmus): But if this all goes to pot because we're cutting corners, you have full permission to say 'I warned you' in a vaguely Scottish accent and then throw a rabbit at me
    Rabbit at the ready (said in the best of faith)

Quote (Irazmus): Not to nit-pick, but Hyper-V isn't free; it's included in the server licence cost, and we have no 2K8 licences, nor 2K8 CALs either (which I assume I'd need to allow clients to connect to the VMs). To sort the above plan using Hyper-V, we'd need a 2K8 licence for the new host and each of the backup hosts. When adding the CALs on (250 computers @ 3.70), we're looking at ~1200 - 1500, which I'd rather put towards the hardware. That, and I'm happier managing *nix machines.
Hyper-V Server is free: http://www.microsoft.com/hyper-v-ser...s/default.aspx (it's a separate thing from Windows Server). Basically Microsoft realised VMware and Citrix were making a killing from 'free' products, and they jumped on the bandwagon.
Just looking through your requirements again, I don't think Debian works too well on Hyper-V, so my bad for mentioning it.

#25 (jamesb)
Quote (Irazmus): I'll definitely look into that, especially for those *nix servers that could do with more RAM anyway.
It's important to remember that over-committing memory (ballooning) doesn't actually create more RAM from nowhere. Instead, it grabs memory back from machines that aren't using it by having a fake system driver request the memory; that memory can then be parcelled out to a machine that does need it at the time. Ballooning is good; paging to disk is bad, and that's what happens if too much over-committing goes on.
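A minimal sketch of that reclaim-and-reassign step (pure Python bookkeeping, not a real hypervisor API; the guest names and MB figures are invented):

```python
# Sketch of what a balloon driver does: the host asks an idle guest's
# balloon to inflate (the guest pins pages and hands them back), then
# gives those pages to a guest that needs them. No new RAM is created.
def balloon_transfer(guests, donor, recipient, mb):
    """Move up to `mb` of reclaimable memory from donor to recipient."""
    free = guests[donor]["allocated"] - guests[donor]["in_use"]
    take = min(mb, free)                  # never steal pages in active use
    guests[donor]["allocated"] -= take
    guests[recipient]["allocated"] += take
    return take

guests = {
    "print":  {"allocated": 2048, "in_use": 512},    # mostly idle
    "moodle": {"allocated": 2048, "in_use": 2000},   # under load
}
moved = balloon_transfer(guests, "print", "moodle", 1024)
```

The `min(mb, free)` line is the whole safety story: the donor only gives up memory it isn't actively using, which is why things get messy once every guest is busy at the same time.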

Quote (Irazmus): It's partly an expense thing, but it's also the fact that it's GPL.
    Fair enough.

Quote (Irazmus): We looked into ESXi but decided against it for a number of reasons.
I couldn't see any way to get High Availability and auto failover, which I think I can probably manage with KVM/Ganeti/LHAP. It's not vital but I'd like to have the option for later.
It can be done for free (obviously it can be done with the full vSphere stack, but that gets pricey). You'd need a separate monitoring server and a bit of scripting knowledge, and it wouldn't be the instantaneous HA you get with the full stack; it would take maybe a minute to fire up. I don't know about doing it with Ganeti, though that's probably easier.
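The monitoring-server approach can be as simple as counting missed heartbeats and restarting the VM on the surviving host once a threshold is crossed; a hedged sketch (the action names and threshold are invented for illustration):

```python
# Minimal sketch of scripted failover from a separate monitoring box:
# watch heartbeats from a VM, and after N consecutive misses tell the
# standby host to start its copy. This is why recovery takes ~a minute
# rather than being instantaneous like a full HA stack.
def failover_decision(missed_heartbeats, threshold=3):
    """Return the action the monitoring script should take."""
    return "restart-on-standby" if missed_heartbeats >= threshold else "wait"

# Simulated run: heartbeats go missing one poll interval at a time.
history = [0, 1, 2, 3]           # consecutive missed heartbeats over time
actions = [failover_decision(m) for m in history]
```

The threshold guards against restarting a VM over one dropped packet; the trade-off is that each extra poll interval before acting adds to the downtime.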

Quote (Irazmus): @jamesfed
Not to nit-pick, but Hyper-V isn't free; it's included in the server licence cost, and we have no 2K8 licences, nor 2K8 CALs either (which I assume I'd need to allow clients to connect to the VMs).
Actually, to be nit-picky, Microsoft Hyper-V Server 2008 R2 is free (see 'Microsoft Hyper-V Server: How to Get It'); it's just that most people assume you can only get Hyper-V with Windows attached.

Quote (Irazmus): To sort the above plan using Hyper-V, we'd need a 2K8 licence for the new host and each of the backup hosts. When adding the CALs on (250 computers @ 3.70), we're looking at ~1200 - 1500, which I'd rather put towards the hardware. That, and I'm happier managing *nix machines.
    No CALs needed. And ESXi is Linux-based. I'd say stick with KVM though, since it looks like you've researched it and there's very little difference between the hypervisors.

#26 (dhicks)
Quote (jamesfed): Ultimately we cut corners on a physical-to-virtual conversion by not spending enough on the SAN (which was a repurposed Dell PowerEdge 2900) and are now feeling the pain of corrupted disks and non-working VMs.
    Why did you get corrupted disks? Do you mean you just had too many disks in your RAID array physically fail, or did something else happen?

#27 (sted)
Quote (jamesb): Differencing disks should allow for multiple instances based off the same parent disk in Hyper-V. The instances themselves will be 100% independent - the parent just gives them a common starting system installation rather than having to install twelve copies of the same OS and consume twelve times the disk space.

Quote (jamesb): But it does mean you're using all the system resources of a full OS with GUI just in order to have that console - probably why Hyper-V requires such a scandalous amount of memory for itself.

Quote (jamesb): Sadly not in KVM yet (at least not for Windows guests - Linux guests support balloon drivers and they're working on a Windows one).

Again, there's no ideal solution, but I think it's a bonus to be able to use the server as a console if things go pear-shaped (always have a plan B at least). Yes, it wastes a bit of RAM, but RAM is cheap enough these days that wasting 2GB is hardly the end of the world, and if I ever dare SP1 the PC I won't even need to do that as such.

Quote (jamesb): Microsoft Hyper-V, VMware vSphere Hypervisor (ESXi) and KVM are all free, and all perfectly fine for simple server virtualisation (except for KVM's lack of balloon drivers for Windows).

Though bare-metal Hyper-V is a pain to get started. Once it's going it's fine, but that first step... I managed it once and, like a fool, didn't document it, and I don't seem to be able to replicate it.

#28 (Irazmus)
Quote (jamesfed): Hyper-V Server is free: http://www.microsoft.com/hyper-v-ser...s/default.aspx (it's a separate thing from Windows Server). Basically Microsoft realised VMware and Citrix were making a killing from 'free' products, and they jumped on the bandwagon.

Quote (jamesb): Actually, to be nit-picky, Microsoft Hyper-V Server 2008 R2 is free; it's just that most people assume you can only get Hyper-V with Windows attached.
Doh! I stand corrected.
* Fetches dunce's hat and goes to write lines in the corner
    Code:
    I will get my facts straight before nit-picking someone's comments.
    I will get my facts straight before nit-picking someone's comments.
    ...
Quote (jamesfed): Just looking through your requirements again, I don't think Debian works too well on Hyper-V, so my bad for mentioning it.
No idea is a bad idea. With the exception of nit-picking without doing proper research first [facepalm]

Quote (jamesb): It's important to remember that over-committing memory (ballooning) doesn't actually create more RAM from nowhere. Instead, it grabs memory back from machines that aren't using it by having a fake system driver request the memory; that memory can then be parcelled out to a machine that does need it at the time. Ballooning is good; paging to disk is bad, and that's what happens if too much over-committing goes on.
    I was thinking of the print and web servers which will want generous amounts in exceptional circumstances (~100MB document or entire school accessing Moodle), but only moderate amounts during normal usage.

If I'm right, Ganeti doesn't do auto failover, but LHAP does, so I should be able to combine the two to get the benefits of both. Either that or I'll end up with an unmanageable mess. But since full HA is desirable, not essential, it won't be the end of the world if I can't get it working.

Quote (jamesb): No CALs needed.
Again I stand corrected.
Since I can't wear more than one dunce's hat at once, I'll just have to wear this one for twice as long.

Quote (jamesb): And ESXi is Linux-based.
But I know my way around Debian better, so I'll still stick with that, unless Gentoo tempts me away (fickle?)

#29 (jamesfed)
Quote (dhicks): Why did you get corrupted disks? Do you mean you just had too many disks in your RAID array physically fail, or did something else happen?
Just a faulty NIC on our makeshift SAN causing problems with iSCSI; the data transferred over fine, but on booting, the newly created VHDs just fried.

All working now, though, thanks to a new NIC and two days of watching backup tapes spin around in the autoloader (still could have done without the hassle over the holiday).

