ESX host sizing
We would like to fully virtualize our servers, which consist of:
- RDS (also WSUS and WDS)
Our current data totals 600GB, the bulk of which is user data, including Exchange mailboxes. The SIMS database is about 500MB.
We've 100 clients and about 400 users.
We use Dell for servers and buy 5 year warranty/support contracts.
Our plan is to purchase:
- ESX1 - 'production' machine
- ESX2 - redundancy machine
- Backup host - er, backup host
Our proposal is:
- ESX1 will run the network on a day-to-day basis
- ESX2 will keep the network running should ESX1 fail for some reason, albeit at reduced performance
- The backup host will use Veeam to carry out D2D backups and weekly D2T (LTO-6) for offsite (rough tape-throughput arithmetic below)
We want to provide about 2TB of storage, and we will be using local storage on each of the ESX hosts and the Veeam machine.
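As a sanity check on the weekly tape job, here's a rough, illustrative Python calculation - the 160 MB/s figure is LTO-6's native (uncompressed) transfer rate, and it assumes the whole 600GB set is streamed as one full backup:

DATA_GB = 600          # current data set, from above
LTO6_MBPS = 160        # LTO-6 native transfer rate, no compression

seconds = DATA_GB * 1000 / LTO6_MBPS   # GB -> MB, then MB / (MB/s)
print(f"Full set to tape: ~{seconds / 60:.0f} minutes")

So a full pass to tape is only around an hour at native speed; the sequential read rate of the backup host's disk array is more likely to be the limit than the drive itself.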
We're dithering over the spec, and the following is what we can't settle on:
- As a rule of thumb (https://www.google.co.uk/search?q=es...ient=firefox-a), one would run 4 guests per CPU core, so a 4-core machine should allow us to run our network as above and leave room for further machines if necessary (see the sizing sketch after this list).
- However, we don't feel this reflects current processor standards, and we keep looking towards 6-core machines.
- We think that spending money on large fast disks will benefit us more than spending money on processors.
- We can't decide whether or not to purchase dual processor machines.
- ESX1 should have fast disks (we propose RAID 10). What about ESX2? It only needs to run the network until the first unit can be repaired, so performance is secondary to keeping the network running - will SATA disks suffice in this host?
- We wonder whether SATA disks in the backup machine will cripple the backup. We think not, provided they're in a RAID array.
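To put some (purely illustrative) numbers on the 4-core vs 6-core question, here's a quick Python sketch - the VM counts and ratios below are guesses, not measurements:

def cores_needed(vm_count, vms_per_core):
    """Physical cores needed at a given VM-to-core ratio, rounded up."""
    return -(-vm_count // vms_per_core)  # ceiling division

for vms in (8, 12):              # hypothetical current/future VM counts
    for ratio in (4, 2):         # rule-of-thumb vs a more cautious ratio
        print(f"{vms} VMs at {ratio}:1 -> {cores_needed(vms, ratio)} cores")

Even at a cautious 2:1 ratio, a dozen VMs fit on a single 6-core socket, which supports the feeling above that RAM and disk I/O will run out long before CPU does.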
Any input would be hugely appreciated. We have a budget of £12k (software is already purchased), but think it could be £2-£4k less with some more planning/thought/input/etc.
I'm sure I've omitted some info.
Many thanks in advance.
With regards to storage, what about some sort of local tiered storage? For example, SSDs for the OS and SQL, and then cheaper disks for the user data that doesn't require high IOPS. And shouldn't ESX2 be the same spec as ESX1, so end users don't notice anything different?
Why wouldn't you set up ESX1 and ESX2 to share the load? Then take advantage of the High Availability functions in ESX to fail over if one does break?
We run a lot more than that off 3x hosts (2x quad core / 32GB RAM). They're all set up to share the load; when one dies (like last week), it takes about 20-30 seconds for services to recover.
That said, I think that to take advantage of ESX HA you can't store your VMs on the hosts - it needs to be some form of SAN (we use an MD3000i tied to an MD1000 for extra space); the VMs then reside entirely on that.
I would totally consider drafting up some sort of proposal that revolves around High Availability - it's usually pretty easy to sell. Being able to keep your network up when a RAID controller, a power supply, or even a whole host fails (short of some massive problem, which probably indicates something bigger anyway) is an easy case to make - especially if it would otherwise take you a day to get hold of the part you need for the repair.
The other option, if you're looking to avoid purchasing a SAN, is the VMware vSphere Storage Appliance (VSA), which will give you HA. I think it comes as part of VMware Essentials Plus. You would need identical disk configs on each host though, I believe.
vSphere Storage Appliance and vSphere Data Protection for backup would be an option.
Originally Posted by Ashm
4 VMs per physical core is feasible if those VMs aren't doing much, i.e. typically under-utilised web servers, but I'd aim for a lower VM-to-physical-core ratio myself.
Yup, I would go with sharing the load across both hosts - you can actually provide HA and live migration now without the need for a SAN.
When it comes to memory, think about what you need, double it, then add some more (it is always needed!). Along with running Veeam on the physical backup server, I would also make it a physical DC.
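Writing that heuristic out in Python (the per-VM allocations below are placeholders, not measured figures):

vm_ram_gb = {            # hypothetical guest allocations
    "DC": 4, "Exchange": 16, "RDS": 16, "SIMS-SQL": 8, "WSUS-WDS": 4,
}

base = sum(vm_ram_gb.values())    # what the guests ask for (48GB here)
doubled = base * 2                # headroom for growth and failover
recommended = doubled + 16        # "...then add some more"
print(f"base {base}GB -> doubled {doubled}GB -> buy ~{recommended}GB")

On those made-up numbers you land around 112GB per host - hence "it is always needed".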
Are you planning on buying an ESXi licence, or are you wanting to use the free version? If the latter, Veeam is not supported on that platform.
With regards to D2T, Veeam will support this shortly, in the coming release.
Thank you everyone for your replies/input.
We have licenses for vSphere Essentials Plus & Veeam Essentials Enterprise.
We will stay with local storage. I believe Veeam will take care of VM replication.
In my experience Dell's hardware reliability is very good. We have two servers that are 7-8 years old that haven't developed any faults; I've no reason to suppose that any new ones would either.
On my many (many) visits to Dell's website I have been configuring the ESX hosts with 64GB RAM.
@DrPerception: that's really useful info. I think the school's money is better spent on fast disks/controllers, but I need to verify what my software is going to let me do.
Many thanks again everyone.
Veeam will do the replication, but not in real time. Go for as much memory as possible - if you think you need 64, go to 100; you will use it (and it's always nice to have plenty of extra capacity).
Originally Posted by ijk
Personally, like what was mentioned before, I would configure both ESX hosts for High Availability. I'm pretty sure you get the functionality of vMotion with vSphere Essentials.
vMotion is pretty awesome and has been a life saver for me in the past.
For the disks in your array, I would try and go for 15k SAS disks if you have the budget for them.
I have used Veeam before and I quite like the product. You can set up your servers to replicate on a differential schedule; this will replicate any changes to your DR cluster (see the sketch below for rough numbers).
I have had to use this in anger before and I think it worked quite well.
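For a rough feel of what that differential replication moves per cycle, here's an illustrative Python sketch - the change rate and schedule are assumptions, not figures from this thread:

DATA_GB = 600            # data set size from the original post
CHANGE_RATE = 0.05       # assume ~5% of data changes per day
RUNS_PER_DAY = 6         # e.g. a replication job every 4 hours

delta_gb = DATA_GB * CHANGE_RATE / RUNS_PER_DAY
print(f"~{delta_gb:.0f}GB per replication cycle")

About 5GB per cycle on those assumptions - easy over gigabit - with the worst-case data loss (RPO) being the job interval, which is why scheduled rather than real-time replication is usually fine at this scale.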