Jollity (6th February 2014)
I run multiple RDS servers in VMs very successfully. In fact, Hyper-V is specifically optimised to handle it, just as XenServer is optimised to handle XenApp.
Let's see if I have learned anything. Any thoughts on the plan I have come up with would be valued.
The omens suggest the finance gods are in a fairly favourable mood, so I am pitching for a more Seawolf-style solution, but I may fall back on a more Glenda-style solution if things are looking dicey. I am going to try to sell getting two powerful main servers now as an alternative to replacing the current (rather underutilised) MIS database and web servers, and absorb their roles into the main servers. I plan to use the current web server as a backup server with Microsoft DPM, and to use one of the current main servers as a dedicated physical DC (until it dies of old age, at least).
The main servers would be Hyper-V hosts with identical specs, and each should be able to run one of each of the essential virtual machines on its own. Files would be replicated between the two virtual file servers using DFSR, as they are currently. I have divided the server roles up as follows:
Role | No. needed | Essential? | Memory (GB) | System disk space (GB) | Notes
Host operating system | 2 | TRUE | 4 | 80 |
File | 2 | TRUE | 6 | 80 | Separate RAID 5 array for files - passthrough to VM
DC, DHCP, DNS | 2 | TRUE | 3 | 80 |
Print | 1 | TRUE | 4 | 100 |
WSUS, internal | 1 | TRUE | 4 | 80 | Additional dynamic VHD for update files on SATA disk
AV Management, Log Monitoring, Unifi Controller | 1 | TRUE | 4 | 100 |
AV Updates | 1 | FALSE | 2 | 100 |
WDS | 2 | FALSE | 6 | 160 | Dynamic VHD
Certificate | 1 | TRUE | 2 | 40 | Server Core
3Sys Web Server | 1 | TRUE | 6 | 80 |
PASS Database | 1 | TRUE | 8 | 100 |
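To sanity-check that one host really can carry everything on its own, here is a quick back-of-the-envelope sketch in Python. The per-role memory figures are the ones from my table above (role names abbreviated), and the 56GB installed total comes from the spec further down; this is just arithmetic, not anything Hyper-V-specific.

```python
# Rough RAM budget check for a single Hyper-V host, using the
# per-role memory figures (GB) from the role table above.
essential_vms = {
    "File": 6,
    "DC/DHCP/DNS": 3,
    "Print": 4,
    "WSUS/internal": 4,
    "AV management, log monitoring, Unifi": 4,
    "Certificate (Server Core)": 2,
    "3Sys web server": 6,
    "PASS database": 8,
}
non_essential_vms = {"AV updates": 2, "WDS": 6}

host_os = 4     # Hyper-V host operating system
installed = 56  # 14 x 4GB UDIMMs per server (see spec below)

essential_total = host_os + sum(essential_vms.values())
headroom = installed - essential_total

print(essential_total)  # 41 GB for the host OS plus one of every essential VM
print(headroom)         # 15 GB spare
print(headroom >= sum(non_essential_vms.values()))  # True: the non-essential VMs fit too
```

So even with both hosts' essential VMs consolidated onto one box during a failure, there is still room for WDS and AV updates.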
I have been playing with specifications for a PowerEdge T620 on the Dell website, but I will get an equivalent quote for an HP ProLiant ML350p. Each of the two main servers would be something like:
2 x Intel® Xeon® E5-2620v2, 2.1GHz, 15M Cache, 7.2GT/s QPI, Turbo, HT, 6C, 80W, DDR3-1600MHz
14 x 4GB UDIMM, 1600 MHz, Low Volt, Dual Rank, x8 Data Width - total RAM: 56GB
PERC H710 Adapter RAID Controller, 512MB NV Cache
System Array (for host and guest images): RAID 1 - 2 x 1.2TB, SAS 6Gbps, 2.5in, 10K RPM Hard Drive (Hot-plug)
Data Array (passthrough to the file server): RAID 5 - 3 x 600GB, SAS 6Gbps, 2.5in, 10K RPM Hard Drive (Hot-Plug)
Extras Disk (80GB for WSUS files, rest for previous versions files): 500GB, SATA, 2.5in, 7.2K RPM Hard Drive (Hot-Plug)
Dual, Hot-plug, Redundant Power Supply (1+1), 750W, Titanium
Additional network card: Broadcom 5720 DP 1Gb Network Interface Card
5Yr ProSupport and Next Business Day On-Site Service
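For reference, the usable capacities of those arrays work out as below. This is just the standard RAID arithmetic (before filesystem overhead), sketched in Python rather than taken from any vendor sizing tool; the RAID6 line shows what one extra drive would buy on the data array.

```python
def raid_usable_gb(level, drives, drive_gb):
    """Usable capacity (GB) of a simple array, before filesystem overhead."""
    if level == 1:   # mirror: capacity of a single drive
        return drive_gb
    if level == 5:   # one drive's worth of parity
        return (drives - 1) * drive_gb
    if level == 6:   # two drives' worth of parity
        return (drives - 2) * drive_gb
    raise ValueError("unsupported RAID level")

print(raid_usable_gb(1, 2, 1200))  # system array (RAID 1, 2 x 1.2TB): 1200 GB usable
print(raid_usable_gb(5, 3, 600))   # data array (RAID 5, 3 x 600GB):   1200 GB usable
print(raid_usable_gb(6, 4, 600))   # RAID 6 with one more 600GB drive: 1200 GB usable
```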
Any suggestions? Would you divide up the roles differently? Anywhere you would suggest increasing the specification? Anywhere I would be able to cut back if I have to?
I have specified RAID 5 on the data disk as an additional layer of redundancy on top of replication to the other server by DFSR. Is this unnecessary? I have a colleague who is not keen on using RAID, so I will need to justify this.
Any reason you need two file servers with DFSR?
If you use Veeam as a backup tool instead of DPM, you could instantly boot a failed file server if the need arose.
With my clients I set up Veeam with direct SAN access (for VMware) and local storage in the backup server, and the servers are also replicated offsite.
EDIT: Does Veeam with Hyper-V have changed block tracking?
1. I would never use RAID5 except for backup storage, as I find it too unreliable for primary storage. You didn't specify whether you planned to use a hot spare or not; if you do plan to, don't. Besides my own experience (and that of others I know) with RAID5, there are technical reasons for not using it, such as the risk of hitting an unrecoverable read error partway through a long rebuild.
So, what to use instead? RAID10 ideally, but if that is not in your budget, at least use RAID6; it only costs you one more drive.
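The standard argument against RAID5 here is the chance of an unrecoverable read error (URE) while rebuilding a degraded array, since a single URE mid-rebuild can sink the whole rebuild. A rough sketch of the maths, assuming the commonly quoted consumer/nearline figure of one URE per 10^14 bits read (do check the actual datasheet figure for the drives you buy):

```python
import math

def p_ure_during_rebuild(bytes_read, ure_per_bit=1e-14):
    """Probability of at least one URE while reading bytes_read during a rebuild."""
    bits = bytes_read * 8
    # 1 - (1 - p)^bits, computed stably via log1p/expm1
    return -math.expm1(bits * math.log1p(-ure_per_bit))

# Rebuilding a 3 x 600GB RAID5 array means reading both survivors in full:
p = p_ure_during_rebuild(2 * 600e9)
print(round(p, 3))  # ~0.092, i.e. roughly a 9% chance the rebuild hits a URE
```

With an enterprise-class 10^-15 drive the same rebuild comes out under 1%, which is part of why drive quality matters as much as the RAID level; RAID6 additionally survives a URE during a single-drive rebuild because the second parity is still available.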
2. Of the other specs I might bump up if the money were there, it would be the CPUs and NICs.
NICs - how many total NICs will you have? You should aim for 6-8 per server ideally. If I had to choose between more CPU grunt and more 1Gb NICs in a virtualisation solution, I would spend the money on the extra NICs.
CPUs - I would go for E5-2650s for the extra two cores (if I could afford them), which can become important in virtualisation. If the budget doesn't allow it, the 2620s will certainly suffice unless you experience a lot of growth.
As for the roles of the VM servers, your plan looks good. I do question the need for the AV management and AV update roles to be on their own separate servers; we combine those here. Also, I think it would be fine to run the print server on one of your file servers with the number of clients you have; it shouldn't be a problem at all.
3. Re-purposing older servers is a great plan. We do the same, and it is partially this process over time that allows you to build up a rack or two of great kit. If you buy quality, well-spec'd servers, they are still useful after 4-5 years in non-mission-critical roles, especially if you have a spare one of similar spec to fire up if something goes down. Our backup server now runs on one of our 4.5-year-old previous ESXi hosts, so it is a very well-spec'd backup server, which means backups run fast!
4. What do you plan to use for backup storage? This is the one place where RAID5 using good quality SATA drives (WD SE for instance) can be used relatively safely and save you a few dollars. Another option is to do what we did, build your own server with at least 4 drives, 16GB+ RAM and install FreeNAS for rock solid ZFS backup storage. If you have an older server with several SATA drive bays then you have a great inexpensive backup storage solution as long as you use quality drives.
Yes, we use Veeam v6.1. We have direct SAN access too, backing up 20-odd VMs from 3 ESXi hosts.
I'd love to add a 4th host to our VMware infrastructure, but the licence pricing for this is complicated and I haven't yet got my head around it.
To be honest, I'm not too concerned about the write speeds, as we are well within the allotted time for completing backups. I think our Veeam setup processes at about 200MB/s, so I can't complain.
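At that sort of rate the backup window arithmetic is simple enough to sketch (the 2TB figure below is purely illustrative, not a number from this thread):

```python
def backup_window_hours(total_gb, throughput_mb_s=200.0):
    """Hours needed to move total_gb at a sustained throughput in MB/s."""
    return (total_gb * 1000.0 / throughput_mb_s) / 3600.0

# e.g. a hypothetical 2TB of VM data at a sustained ~200MB/s:
print(round(backup_window_hours(2000), 1))  # ~2.8 hours
```

In practice changed block tracking means incremental runs move far less data than this full-backup worst case.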
A significant reason is perhaps that this is what we do currently. It has the advantage of also splitting the load, though I do not really know whether that is necessary. It means no technician action is required if one server goes down, although there is a slowdown while connections to the failed server time out. Does the Veeam setup you are talking about do automatic failover?
Though if we have the disk space available to store all the data files on the second server, it occurs to me that it might as well act as a DFSR replica. I suppose we could use slower, cheaper disks if it were only to be used in emergencies.