sparkeh (3rd February 2014)
OK, last word on this as the OP seems to have the info he needed.
Our setup has run for six years with zero downtime. If the server did have an issue it would be fixed the same day, or a new server would be run up the next day at the latest. This is acceptable to SMT. We get the benefits of virtualisation at a lower cost and complexity than ensuring the greater resilience that the school's managers judge we don't need.
It works for us. It will suit some other schools in a similar situation. Running with one host is not always unacceptable.
It wouldn't suit everyone; some people will need as near as dammit 100% guaranteed uptime. OK then, don't follow our model; make your system more resilient.
You pays ya money ya takes ya choice
I think you all have the same ideas; as has been said, you are all singing from the same song sheet. In the risk assessment, some can accept downtime and some can't, but the whole point of the thread was to give Jollity the best advice he can have, which I think, by the way, has been achieved.
Last edited by Trev_LCHS; 3rd February 2014 at 12:34 PM.
If your host is part of the domain (sometimes it has to be), then please follow this advice on preventing a nightmare with system time.
We were having timing issues because the PDC was a virtual machine. Hosts look to the PDC for time, but VMs take their time from the host, so time errors can spiral. The scenario: the PDC is a minute out; the host looks at the PDC and changes its time to match; later, the PDC looks at the host, finds itself a minute out, and changes its time to match the host; and so on.
Configure your PDC as per the Microsoft article "How to configure an authoritative time server in Windows Server" using the "fix it myself" steps, configuring the PDC to use an external time source.
Then make sure you go to the virtual PDC in Hyper-V Manager, open its Settings, go to Integration Services, and untick Time synchronization.
This works well now.
Time servers I used when configuring are "1.uk.pool.ntp.org,0x1 2.uk.pool.ntp.org,0x1" (don't forget to append the ,0x1 or it won't work).
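The two steps above can be sketched from an elevated PowerShell prompt. This is an illustration, not a recipe: the VM name "DC1" is a placeholder, and the Hyper-V cmdlet needs Server 2012 or later.

```shell
# On the PDC emulator: point w32time at external NTP pool servers.
# The ,0x1 flag makes w32time use the SpecialPollInterval for that peer.
w32tm /config /manualpeerlist:"1.uk.pool.ntp.org,0x1 2.uk.pool.ntp.org,0x1" /syncfromflags:manual /reliable:yes /update

# Restart the time service and force a resync.
Restart-Service w32time
w32tm /resync

# On the Hyper-V host: untick time sync for the virtual PDC
# ("DC1" is a placeholder VM name).
Disable-VMIntegrationService -VMName 'DC1' -Name 'Time Synchronization'
```

With time sync disabled in integration services, the virtual PDC takes its time only from the external NTP source, breaking the host-and-guest feedback loop described above.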
I have a few more questions. (Surprise!)
Do people tend to use the same backup solution for the host machine's system partition as for the virtual machines? If Veeam is for virtual machines only, then maybe not. Or is the host machine's configuration minimal enough that it can just be recreated? I am guessing that might be true for VMware, but not for Hyper-V.
Similarly do you backup the backup server system partition? (Who backs up the backers up?!)
A fellow admin I was talking to yesterday said that he keeps some roles (including a DC) on the host system, and puts others, like Exchange, in Hyper-V virtual machines. My understanding is that it would be better practice to keep every other role off the host for security reasons. Is that correct? Are there other reasons for not doing it?
Edit: He also said that one of the virtual servers is a remote desktop server. It would be attractive to have a virtual remote desktop server but I think I read in another thread on here that it was a bad idea. Why?
Last edited by Jollity; 5th February 2014 at 11:55 PM.
We run the same backup solution for everything: BackupAssist. It leverages the backup capabilities built into Windows but puts a better front end and more capabilities at your fingertips.
It's usually inadvisable to do anything other than run guest VMs on a Hyper-V host, and Microsoft don't recommend it.
Regarding backing up the backup system - there should be no need really, as the restoration process should be as simple as "get software used to backup, install on machine, restore from backups to other machines".
With BackupAssist there isn't a central backup host anyway; you run it on each host you're backing up. We run it on three servers: the two Hyper-V nodes and our storage server.
I wouldn't run anything else on the host machine. Our production host (and new replication target) are both off-domain boxes with just Hyper-V installed.
This afternoon I spun up the server and added the Hyper-V role in about 20 minutes. As a test I copied across a VHD, did some minor config, and had the VM up and running in no time. Probably quicker than restoring a backup of the host. The same probably goes for the backup system.
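That rebuild-rather-than-restore approach can be sketched in PowerShell on Server 2012 or later. The switch name, VM name, VHD path, and memory size below are all placeholders for illustration:

```shell
# Install the Hyper-V role and management tools, then reboot.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# After the reboot: recreate the virtual switch on the physical NIC.
New-VMSwitch -Name 'LAN' -NetAdapterName 'Ethernet'

# Attach the copied-across VHD to a fresh VM definition and start it.
New-VM -Name 'SVR01' -MemoryStartupBytes 4GB -VHDPath 'D:\VHDs\SVR01.vhdx' -SwitchName 'LAN'
Start-VM -Name 'SVR01'
```

Because the guest's state all lives in the VHD, the host itself carries almost no configuration worth backing up, which is the point being made above.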
Last edited by sparkeh; 6th February 2014 at 12:14 AM.
We back up our VMs using Veeam to a FreeNAS server, and then have Veeam set up to do a secondary copy to a Drobo B800i connected to the backup server via iSCSI. It is recommended that you always keep at least two copies of your backups, and that you verify them occasionally.
Whatever you do, if you're using Hyper-V, NEVER run a DC on the host. A DC doesn't like multi-homed network cards (pretty much a requirement for Hyper-V). You'll be in for a world of pain if you do.
For backup we do the following. Our storage server (based on SMB shares) holds both VHDs and user data (home drives, etc.). User data is backed up using Yosemite Backup - pretty much standard, traditional Windows backup software. The SQL servers (VMs) run backups into a backup folder on the storage server, which is then backed up again as part of the Yosemite system. This is our daily, incremental, help-users-out-if-they-lose-a-file backup.
We also have a NAS box I take offsite. The whole shooting match, VHD images and all, is Robocopied onto this once every term. This is our disaster recovery backup.
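The termly offsite copy could look something like this; the source paths, NAS share, and log locations are placeholders, not the poster's actual setup:

```shell
# Mirror the VHD images and user data to the offsite NAS.
# /MIR mirrors the tree (including deletions); /R and /W limit
# retries and wait time on locked files; /LOG keeps a record.
# Shut down (or export) the VMs first so the VHD copies are consistent.
robocopy D:\VHDs \\NASBOX\DR\VHDs /MIR /R:2 /W:5 /LOG:C:\Logs\dr-vhds.log
robocopy D:\UserData \\NASBOX\DR\UserData /MIR /R:2 /W:5 /LOG:C:\Logs\dr-userdata.log
```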
For terminal servers - they're in essence a form of virtualisation already: multiple users sharing one set of CPU/RAM resources. Depending on the number of users, they can often require the same kind of specs as virtualisation host servers to provide a smooth end-user experience. So it's considered a bad idea to virtualise them, because you're adding an extra layer of complexity between the terminal server and the hardware, and you probably wouldn't be using the host for any other VMs.
That said, server hardware is getting cheap and powerful. I just got 24 cores and 128GB of RAM for around £2.5k. On this kind of host, giving over 4 cores and 16GB of RAM to an RDP server is nothing (which is what I plan to do). With powerful hardware, dynamic memory, and easy VHD backups, I think the traditional "don't run Terminal Services in a VM" mentality has had its day.
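Carving out that sort of allocation for an RDP guest is a couple of lines with the Hyper-V PowerShell module (Server 2012+); the VM name 'RDS1' and the sizes are illustrative, and the VM must be powered off to change them:

```shell
# Give the RDP guest 4 virtual CPUs.
Set-VMProcessor -VMName 'RDS1' -Count 4

# Dynamic memory: start at 4GB, allow growth up to 16GB under load.
Set-VMMemory -VMName 'RDS1' -DynamicMemoryEnabled $true `
    -MinimumBytes 2GB -StartupBytes 4GB -MaximumBytes 16GB
```

Dynamic memory is what makes this cheap on a big host: the RDP guest only holds its full 16GB when the user load actually demands it.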
Hyper-V should be easy enough to set up again should it fail.
VeryPC. Novtech have managed similar for me in the past. Both good bespoke manufacturers who know how to treat their customers.
As far as I am aware that's still true. Sounds like a bonkers thing to try, IMHO. Bear in mind I was talking about running Terminal Server in a VM, which doesn't rely on Hyper-V. Also, I read somewhere that you couldn't run Hyper-V in a VM as it's already running on the host? Is that not right any more?
Last edited by tmcd35; 6th February 2014 at 10:31 AM.