Hardware Thread: ESX host sizing, in Technical; ...
14th May 2013, 07:57 PM #1
14th May 2013, 08:12 PM #2
With regards to storage, what about some sort of local tiered storage? For example, SSDs for the OS and SQL, and then cheaper disks for the user data that doesn't require high IOPS. And shouldn't ESX2 be the same spec as ESX1, so end users don't notice anything different?
14th May 2013, 09:36 PM #3
Why wouldn't you set up ESX1 and ESX2 to share the load? Then take advantage of the High Availability functions in ESX to fail over if one does break?
We run a lot more than that off 3x hosts (2x quad core / 32GB RAM). They're all set up to share the load; if one dies (like last week), it takes about 20-30 seconds for services to recover.
That said, I think to take advantage of ESX HA you can't store your VMs on the hosts - it needs to be some form of SAN (we use an MD3000i tied to an MD1000 for extra space), and the VMs then reside entirely on that.
I would totally consider drafting up a proposal that revolves around High Availability. Short of some massive problem (which probably indicates something bigger anyway), being able to keep your network up if a RAID controller fails, or a power supply, or even a whole host, is usually pretty easy to sell - especially if it would otherwise take you a day to get hold of the part you need to fix it.
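To make that HA proposal concrete, the key check is whether the surviving hosts can absorb the load when one host fails (an N+1 check). A minimal sketch, with purely illustrative numbers based on the 2x quad core / 32GB hosts mentioned above:

```python
# Hypothetical N+1 sizing check: can the cluster still carry its load
# after losing the single largest host? All figures are illustrative.

def can_tolerate_host_failure(host_cpu_ghz, used_ghz, host_mem_gb, used_gb):
    """True if remaining CPU (GHz) and memory (GB) cover current usage
    after removing the largest host from the cluster."""
    cpu_left = sum(host_cpu_ghz) - max(host_cpu_ghz)
    mem_left = sum(host_mem_gb) - max(host_mem_gb)
    return used_ghz <= cpu_left and used_gb <= mem_left

# Three identical hosts: 8 cores @ ~2.4 GHz and 32 GB each (assumed specs)
cpus = [8 * 2.4] * 3   # ~19.2 GHz per host
mems = [32] * 3
print(can_tolerate_host_failure(cpus, 30.0, mems, 50))  # True: load fits on 2 hosts
```

This is the same arithmetic vSphere HA admission control performs, just simplified to raw totals.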
Last edited by DrPerceptron; 14th May 2013 at 09:39 PM.
14th May 2013, 09:45 PM #4
The other option, if looking to avoid purchasing a SAN, is the VMware vSphere Storage Appliance (VSA), which will give you HA. I think it comes as part of VMware Essentials Plus. You would need identical disk configs on each host though, I believe.
14th May 2013, 10:13 PM #5
vSphere Storage Appliance and vSphere Data Protection for backup would be an option.
Originally Posted by Ashm
4 VMs per physical core is feasible if these VMs aren't doing much, i.e. typically under-utilised web servers, but I'd aim for a lower VM-to-physical-core ratio myself.
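The consolidation-ratio rule of thumb above is easy to turn into a quick host-count estimate. A sketch, where the VM count and core counts are assumptions for illustration:

```python
import math

def hosts_needed(total_vms, vms_per_core, cores_per_host):
    """Estimate physical hosts required for a given VM-to-core ratio.
    Rounds up at both steps, since you can't buy a fraction of a host."""
    cores_required = math.ceil(total_vms / vms_per_core)
    return math.ceil(cores_required / cores_per_host)

# 40 lightly loaded VMs at 4 VMs per physical core, on 8-core hosts
print(hosts_needed(40, 4, 8))  # -> 2

# A more conservative 2:1 ratio needs more hardware
print(hosts_needed(40, 2, 8))  # -> 3
```

Remember to leave N+1 headroom on top of this if you want HA to actually work during a host failure.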
14th May 2013, 11:32 PM #6
Yup I would go with sharing the load across both hosts, you can actually provide HA and live migration now without the need for a SAN.
When it comes to memory, think about what you need, double it, then add some more (it is always needed!). Along with running Veeam on the physical backup server, I would also make this a physical DC.
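That memory rule of thumb can be sketched as a one-liner. The 16 GB headroom figure is my own illustrative assumption, not from the thread:

```python
def ram_per_host_gb(expected_need_gb, headroom_gb=16):
    """'Think about what you need, double it, then add some more.'
    headroom_gb is an assumed fudge factor for the 'add some more' part."""
    return expected_need_gb * 2 + headroom_gb

# If the current VM estate needs roughly 24 GB, this lands on the
# 64 GB per-host figure mentioned later in the thread.
print(ram_per_host_gb(24))  # -> 64
```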
Are you planning on buying an ESXi licence, or are you wanting to use the free version? If the latter, note that Veeam is not supported on that platform.
With regards to the D2T then Veeam will shortly support this in the coming release.
15th May 2013, 11:29 AM #7
Thank you everyone for your replies/input.
We have licenses for vSphere Essentials Plus & Veeam Essentials Enterprise.
We will stay with local storage. I believe Veeam will take care of VM replication.
In my experience Dell's hardware reliability is very good. We have two servers that are 7-8 years old that haven't developed any faults; I've no reason to suppose that any new ones would either.
On my many (many) visits to Dell's website I have been configuring the ESX hosts with 64GB RAM.
@DrPerceptron: that's really useful info. I think the school's money is better spent on fast disks/controllers. I think I need to verify what my software is going to let me do.
Many thanks again everyone.
15th May 2013, 11:33 AM #8
Veeam will do the replication, but not in real time. Go for as much memory as possible - if you think you need 64, go to 100. You will use it (and it's always nice to have plenty of extra capacity).
11th June 2013, 11:41 AM #9
Personally, like others have mentioned, I would configure both ESX hosts for High Availability. I'm pretty sure you get the functionality of vMotion with vSphere Essentials.
vMotion is pretty awesome and has been a life saver for me in the past.
For Disks in your array I would try and go for the 15k SAS Disks if you have the budget for them.
I have used Veeam before and I quite like the product. You can set up your servers to replicate on a differential; this will replicate any changes to your DR cluster.
I have had to use this in anger before, and I think it worked quite well.