Another +1 for virtualisation here. VMware Essentials Plus, three hosts, a SAN and Veeam B&R - sorted.
Cannot recommend it enough: less physical space used, less power consumption, more efficient use of hardware, backup and DR covered, redundancy and HA built in. It's a marvel.
Removing old hardware after a P2V conversion - Virtualization Pro
However, where possible, I'd recommend building new boxes and moving over to them anyway.
I have been starting to do some reading and working out specifications. I am tending towards Hyper-V, mainly because, in the limited time available before bringing it into use, it seems it would be easier for my colleagues and me to get our heads round, as Windows is what we are already used to - but I am not set on that.
I was wondering what specification people use for a minimal Server 2012 guest. A DC, for example, presumably does not need much in the way of memory or other resources. The minimum RAM requirement for Server 2012 is supposedly 512MB, but in practice would you stick at 2GB?
Similarly for hard disk: how much space do you give a Windows partition? Or do dynamically expanding disks make this fairly unimportant? It seems a pity to have to store 20GB+ of identical system files for each guest, but I suppose that is part of the price for the benefits of virtualisation.
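To put rough numbers on that last worry (all figures below are made up for illustration):

Code:
# Rough cost of per-guest system files: fixed-size vs dynamically
# expanding virtual disks. All figures are invented for illustration.
guests = 10
partition_gb = 60      # provisioned size per guest
actual_used_gb = 18    # OS + roles actually written to disk

print(f"Fixed-size disks:            {guests * partition_gb} GB allocated up front")
print(f"Dynamically expanding disks: {guests * actual_used_gb} GB actually consumed")

Even thin-provisioned, though, the base OS footprint is still paid once per guest.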
Do people tend to remove the GUI from guest Windows installs?
My VMware DCs use 4GB each. I am moving to 2012 R2 soon and imagine they would be fine with that too. As for partitions, my 2008 ones have 40GB as an HDD partition, but 60GB seems to be the norm nowadays; I would think you could still get away with 40GB easily.
I only run one Server Core install, and it is great, but I only did it as a training exercise. If you are happy with Server Core then knock yourself out; if not, a standard (GUI) install will be fine in a Hyper-V environment.
Veeam backup is a great recommendation, but you want your backup server to have lots of CPU grunt, at least 12GB RAM, fast storage and at least two NICs that can be trunked. Otherwise, your backups run slow, restores run slower, and you would barely be able to run more than one VM in an instant restore scenario.
Here are my recommendations based on 7 years of running a virtualised infrastructure:
1. ALWAYS use at least two hosts for your VM environment. Using only one is foolhardy; more than two is unnecessary for most, and just more expense and more to manage.
2. ALWAYS overspec your VM servers by at least 30%, especially the CPUs, which are much harder and more expensive to upgrade. The minimum RAM you should use is 1.5GB for Linux servers, 4GB for standard Windows servers (file, web, general purpose), 6GB for DCs and 8GB for database servers. Always use at least 2 vCPU cores per Windows VM and 4 vCPUs for DB servers. Use a minimum of two NIC ports for every vSwitch (LAN, DMZ, SAN); 4 or more are better for the SAN network (if you use a SAN). A bare minimum for most environments is two servers, each configured with at least 2 x quad-core Xeon CPUs (1 x 8-core or 2 x 8-core even better), 32GB of RAM and 3 x dual-port NICs (6 ports total). See the sizing sketch after this list.
3. ONLY use a SAN if you know what you are doing. An improperly configured or managed SAN will give you nightmares, and you are better off using DAS. Don't try to use an inexpensive NAS as your VM storage - bad mistake. If you are spending less than $8,000 on your SAN, it isn't good enough to use for your primary VM storage. The lowest end I would go is a mid-tier TrueNAS server, a Drobo B1200i, or similar. Better yet, build a SAM-SD or buy a Nimble Storage SAN or an Oracle 7120/7320 (just discontinued, so you might be able to find one at a deep discount). Otherwise, just stick with DAS with good, fast 10,000 or 15,000 RPM SAS disks.
4. Go with servers that can support 10GbE NICs. Even if you won't use 10GbE right now, it's good to have the option over the next 5 years. Remember that with a VM host, ALL of your VM servers share the bandwidth of that one host, rather than it being spread across multiple physical servers, each with its own independent NIC. Many fail to realise this. 10GbE really comes into its own with VM hosts and SAN storage.
5. BACKUP, BACKUP, BACKUP! Ensure you have a good backup server and backup storage. Don't skimp on this aspect of your environment. Use a server with at least a quad-core CPU (i7 or Xeon), 12GB RAM, and two NIC ports. Ensure your backup storage uses enterprise-rated disks such as the WD SE or RE series. A good option for backup storage is a Drobo B800i with 3 or 4TB WD RE drives. I recommend Veeam Backup for backing up your VMs. The newest version also supports secondary backup to tape or to secondary backup storage. I personally recommend building a cheap micro-server running FreeNAS, located in another building, to copy your backups to just in case. I built one with 7.8TB of storage for $1,600. Another option is an ioSafe fireproof backup drive connected to your backup server via USB 3.0. BackupAssist is also a great option, particularly for backing up the Hyper-V hosts themselves (bare metal) and critical data backups.
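To put point 2 into numbers, here's a rough sizing sketch - the VM inventory is entirely made up, so substitute your own list:

Code:
# Back-of-envelope host sizing: total the planned VMs, add the 30%
# headroom from point 2, then remember that with two hosts EITHER
# one must carry the whole load while the other is down.
planned_vms = [
    # (name, vcpus, ram_gb) - example inventory only
    ("dc01",   2, 6),
    ("dc02",   2, 6),
    ("file01", 2, 4),
    ("web01",  2, 4),
    ("sql01",  4, 8),
]

HEADROOM = 1.30  # overspec by at least 30%

total_vcpus = sum(v for _, v, _ in planned_vms)
total_ram_gb = sum(r for _, _, r in planned_vms)

print(f"Planned load:      {total_vcpus} vCPUs, {total_ram_gb} GB RAM")
print(f"With 30% headroom: {total_vcpus * HEADROOM:.0f} vCPUs, "
      f"{total_ram_gb * HEADROOM:.0f} GB RAM")
print("Size EACH host to roughly the headroom figure, not half of it,")
print("so one host can run everything during a failure.")

That last line is why the "bare minimum" hardware above looks generous: in a two-host setup, each host has to be able to carry the whole environment alone.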
Re point 2 (the RAM and vCPU minimums) - this I don't agree with (100%).
I've never needed more than 3GB in a DC, and this is a 450-seat environment - DC2 is currently sitting at 1.5GB, and DC3 is using a bit more at 2.7GB out of 12GB (physical server, old VM host). 6GB would be overkill, and 1 core is more than fine. I have VMs running on 1 core and 1GB and others up at 4 cores and 12GB RAM. It's about matching the workload requirements. The best place to start is to look at the specs and performance data from your existing machines.
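If it helps, this is the sort of check I mean - a rough sketch assuming you've exported a Performance Monitor log to CSV (the file name and counter column below are just examples; match them to whatever your export actually contains):

Code:
import csv

LOG_FILE = "server01_perf.csv"            # hypothetical perfmon CSV export
MEM_COLUMN = "\\Memory\\Committed Bytes"  # example counter column header

values = []
with open(LOG_FILE, newline="") as f:
    for row in csv.DictReader(f):
        try:
            values.append(float(row[MEM_COLUMN]))
        except (KeyError, ValueError):
            continue  # skip blank or garbled samples

values.sort()
gb = 1024 ** 3
peak = values[-1] / gb
p95 = values[int(len(values) * 0.95)] / gb

print(f"{len(values)} samples")
print(f"95th percentile committed memory: {p95:.1f} GB")
print(f"Peak committed memory:            {peak:.1f} GB")
print(f"Suggested VM RAM (peak + 25%):    {peak * 1.25:.1f} GB")

Do the same for CPU and disk counters; most existing boxes turn out to need far less than the blanket minimums quoted earlier in the thread.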
Likewise, while it might be good to have 2 x NICs for each network, your infrastructure needs to support it. More NICs = more CPU/RAM requirements on the host OS to support them, and do your switches support 802.3ad link aggregation? (That said, Server 2012 now includes some very cool built-in NIC teaming features.)
Re point 3 (only use a SAN if you know what you are doing): while I agree with the sentiment, the size of the environment says different. Personally I'd stick with local storage in each host server, but I think a couple of grand could buy a suitable NAS that supports iSCSI with good enough speed/reliability for that environment. SANs are ridiculously expensive, and I very much doubt they are something you should be looking at.
Re point 4 (10GbE-capable servers): all servers support 10GbE - it's called PCIe. Seriously, don't buy this now unless you need it. Buy it when you need it, when prices have come down.
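For what it's worth, the bandwidth-sharing arithmetic is easy to sanity-check before spending anything - illustrative numbers only:

Code:
# Worst-case share of host uplink per VM if every guest transmits at
# once. Illustrative only - real traffic is bursty, so averages matter
# far more than this floor.
vms_per_host = 15
uplinks_mbps = {
    "2 x 1GbE teamed": 2 * 1000,
    "10GbE":           10_000,
}

for label, mbps in uplinks_mbps.items():
    print(f"{label}: ~{mbps / vms_per_host:.0f} Mbps per VM worst case")

Roughly 133Mbps worst case per VM on teamed gigabit is plenty for most school workloads, which rather supports waiting on 10GbE until the monitoring says otherwise.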
Re point 5 (backup, backup, backup): pretty much this! I turn all my servers off once every term and back up to an 8TB QNAP NAS. My entire virtual environment (nearly 30 servers plus user data) fits inside less than 4TB, and it's taken off site. If the school burned down, we could get everything back very quickly and easily.
I suppose the point I'm trying to make is that there is as much danger in over-speccing and spending too much (or getting a project mothballed on cost) as there is in under-speccing.
Another thing to bear in mind - Hyper-V pretty much requires at least one physical domain controller (I'm aware 2012 R2 has "measures" to get around this particular chicken-and-egg problem, but...)
I have one host with 11GB of RAM hosting Veeam backup and two VM DCs on local storage - one also running as a file and print server, the other running SCCM, Sophos, Spiceworks and other stuff I can't think of right now. Everything (including Veeam backups) runs very smoothly.
Not exactly how I would run things given more money, but it works well and I would rather have this setup than physical servers.
Everyone keeps talking of SANs for our environments, and I still don't think we need them. I can't justify spending that much money on a SAN when I can get a DL380 with 25 disk bays for far less, and can populate it how I wish. Stick Windows 2012 on it and you've got a great environment for shared storage for Hyper-V.
In fact, that's what we have here.
So, you really don't have to splash out on expensive storage systems to have shared storage. For a small environment, just find a server with enough drive bays and spec the disks you want.
With Windows 2012, you can even have two such servers and set up a file server cluster which the Hyper-V cluster uses, so both Hyper-V and the storage are redundant.
If your SOLE server ever goes down (highly likely at some point), how long will your ENTIRE environment be down while you obtain a replacement? The likelihood of multiple physical servers failing simultaneously, on the other hand, is extremely low - and even in such circumstances, only part of your environment would be offline. A good (and inexpensive) backup solution like BackupAssist can perform a full (bare-metal) backup of physical servers for rapid restore.
Virtualisation is not the answer in every situation.
I have actually had to action this, as a server had a RAID failure, meaning it needed to be brought online.
Yeah, we always have one spare server with Hyper-V installed. If the main machine fails, we just push out the backups from Veeam to the new machine.