  1. #1
    ijk

    ESX host sizing

    We would like to fully virtualize our servers, which consist of:
    • DC
    • SIMS
    • Exchange
    • RDS (also WSUS and WDS)
    • RHEL

    Our current data totals 600GB, the bulk of which is user data, including Exchange mailboxes. The SIMS database is about 500MB.
    We have 100 clients and about 400 users.
    We use Dell for servers and buy 5 year warranty/support contracts.
    Our plan is to purchase:
    • ESX1 - 'production' machine
    • ESX2 - redundancy machine
    • Backup host - er, backup host

    Our proposal is:
    • ESX1 will run the network on a day-to-day basis
    • ESX2 will continue to run the network should ESX1 fail for some reason, and at reduced performance
    • The backup host will use Veeam to carry out D2D backups and weekly D2T (LTO6) for offsite

    We want to provide about 2TB of storage and we will be using local storage on each of the ESX hosts and the Veeam machine.

    We're still dithering over the spec, and these are the points we can't settle:


    • As a rule of thumb (https://www.google.co.uk/search?q=esx+vm+per+core&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-USfficial&client=firefox-a), one would run four guests per CPU core, so a 4-core machine should let us run our network as above and leave room for further machines if necessary.
    • However, we don't feel this reflects current processor standards and keep looking towards 6 core machines.
    • We think that spending money on large fast disks will benefit us more than spending money on processors.
    • We can't decide whether or not to purchase dual processor machines.
    • ESX1 should have fast disks (we propose RAID10). What about ESX2? It only needs to run the network until the first unit can be repaired, so performance is secondary to keeping the network operating. Will SATA disks in this host suffice?
    • We wonder whether SATA disks in the backup machine are going to cripple the backup. We think not if they're in a RAID array.
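    For what it's worth, the rule-of-thumb and storage sums above can be sketched in a few lines. The per-guest vCPU counts here are illustrative assumptions, not figures from the thread:

    ```python
    # Rough sizing arithmetic for the proposed hosts. The vCPU counts
    # per guest below are illustrative assumptions only.

    GUESTS = {          # assumed vCPUs per VM
        "DC": 2,
        "SIMS": 2,
        "Exchange": 4,
        "RDS": 4,
        "RHEL": 2,
    }

    VMS_PER_CORE = 4    # the rule-of-thumb consolidation ratio

    total_vms = len(GUESTS)
    cores_by_rule = -(-total_vms // VMS_PER_CORE)   # ceiling division: 5 VMs -> 2 cores
    total_vcpus = sum(GUESTS.values())

    print(f"{total_vms} VMs at {VMS_PER_CORE}/core needs {cores_by_rule} physical cores")
    print(f"total vCPUs allocated: {total_vcpus}")

    # RAID10 mirrors everything, so usable capacity is half the raw capacity:
    usable_tb = 2.0
    raw_tb = usable_tb * 2
    print(f"RAID10: {raw_tb}TB raw disk for {usable_tb}TB usable")
    ```

    By the bare rule of thumb the whole load fits in a couple of cores, which is why the OP suspects the 4-guests-per-core figure understates modern processors; the vCPU total is the more useful planning number.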


    Any input on this would be hugely appreciated. We have a budget of 12k (software is already purchased), but think it could be 2-4k less with some more planning/thought/input.

    I'm sure I've omitted some info.

    Many thanks in advance.
    Nic

  2. #2

    With regard to storage, what about some sort of local tiered storage - for example, SSDs for the OS and SQL, and cheaper disks for the user data that doesn't require high IOPS? And shouldn't ESX2 be the same spec as ESX1, so end users don't notice anything different?

  3. #3
    DrPerceptron
    Why wouldn't you set up ESX1 and ESX2 to share the load, then take advantage of the High Availability functions in ESX to fail over if one does break?

    We run a lot more than that off 3x hosts (2x quad core / 32GB RAM). They're all set up to share the load, if one dies (like last week) it takes about 20-30 seconds for services to recover.

    That said, I think that to take advantage of ESX HA you can't store your VMs on the hosts - it needs to be some form of SAN (we use an MD3000i tied to an MD1000 for extra space); the VMs then reside entirely on that.

    I would totally consider drafting up a proposal that revolves around High Availability. Short of some massive problem (which probably indicates something bigger anyway), being able to keep your network up if a RAID controller, a power supply, or even a whole host fails is usually a pretty easy sell - especially if it would take you a day to get hold of the part you need for the repair.
    Last edited by DrPerceptron; 14th May 2013 at 09:39 PM.
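    The headroom reasoning behind load-sharing plus failover can be made explicit: with n hosts sharing the load, surviving the loss of one host means each host must normally sit at or below (n-1)/n of its capacity. A minimal sketch:

    ```python
    # N-1 failover headroom: if one of n load-sharing hosts can fail,
    # the survivors must be able to absorb its work, so each host's
    # normal utilisation must stay at or below (n-1)/n.

    def max_safe_utilisation(n_hosts: int) -> float:
        """Fraction of each host usable while still surviving one host failure."""
        if n_hosts < 2:
            raise ValueError("need at least two hosts for failover")
        return (n_hosts - 1) / n_hosts

    print(max_safe_utilisation(2))  # two hosts: keep each below 50%
    print(max_safe_utilisation(3))  # three hosts, as in the post: ~67%
    ```

    This is why a two-host design effectively halves each box's usable capacity, while the three-host setup described above only sacrifices a third.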

  4. #4

    Ashm
    The other option, if you're looking to avoid purchasing a SAN, is the VMware vSphere Storage Appliance (VSA), which will give you HA. I think it comes as part of VMware Essentials Plus. You would need to have identical disk configs on each host though, I believe.

  5. #5

    Quote Originally Posted by Ashm View Post
    The other option, if you're looking to avoid purchasing a SAN, is the VMware vSphere Storage Appliance (VSA), which will give you HA. I think it comes as part of VMware Essentials Plus. You would need to have identical disk configs on each host though, I believe.
    The vSphere Storage Appliance plus vSphere Data Protection for backup would be an option.
    4 VMs per physical core is feasible if the VMs aren't doing much - typically underutilised web servers - but I'd aim for a lower VM-to-physical-core ratio myself.

  6. #6

    glennda
    Yup, I would go with sharing the load across both hosts; you can now provide HA and live migration without the need for a SAN.

    When it comes to memory, think about what you need, double it, then add some more (it is always needed!). As well as running Veeam on the physical backup server, I would also make that machine a physical DC.

    Are you planning to buy an ESXi licence, or do you want to use the free version? If the latter, note that Veeam is not supported on free ESXi.

    With regard to D2T, Veeam will support this in its coming release.
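    The "double it, then add some more" memory rule is easy to turn into arithmetic. A sketch, where the fixed headroom figure is an arbitrary assumption rather than anything from the thread:

    ```python
    # glennda's memory rule of thumb: estimate what the guests need,
    # double it, then add headroom. The 16GB headroom is an assumption.

    def sized_ram_gb(estimated_gb: float, headroom_gb: float = 16) -> float:
        """Double the working estimate, then add fixed headroom."""
        return estimated_gb * 2 + headroom_gb

    # e.g. a working estimate of 24GB across all guests:
    print(sized_ram_gb(24))   # -> 64.0
    ```

    With these assumed figures the rule lands on 64GB, which happens to match the spec the OP has been configuring on Dell's site; a larger estimate pushes the answer toward the 100GB suggested later in the thread.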

  7. #7
    ijk
    Thank you everyone for your replies/input.
    We have licenses for vSphere Essentials Plus & Veeam Essentials Enterprise.
    We will stay with local storage. I believe Veeam will take care of VM replication.
    In my experience Dell's hardware reliability is very good. We have two servers that are 7-8 years old that haven't developed any faults; I've no reason to suppose that any new ones would either.
    On my many (many) visits to Dell's website I have been configuring the ESX hosts with 64GB RAM.

    @DrPerceptron: that's really useful info. I think the school's money is better spent on fast disks/controllers, but I need to verify what my software will let me do.

    Many thanks again everyone.
    Nic

  8. #8

    glennda
    Quote Originally Posted by ijk View Post
    We will stay with local storage. I believe Veeam will take care of VM replication. [...] I have been configuring the ESX hosts with 64GB RAM.
    Veeam will do the replication, but not in real time. Go for as much memory as possible: if you think you need 64GB, go to 100GB - you will use it (and it's always nice to have plenty of extra capacity).

  9. #9
    doddsworthy
    Personally, like what was mentioned before, I would configure both ESX hosts for High Availability. I'm pretty sure you get vMotion functionality with vSphere Essentials; vMotion is pretty awesome and has been a lifesaver for me in the past.

    For the disks in your array I would try to go for 15k SAS disks if you have the budget for them.
    I have used Veeam before and I quite like the product. You can set up your servers to replicate differentials; this will replicate any changes to your DR cluster.
    I have had to use this in anger before and I think it worked quite well.
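    As a rough cross-check on the D2D/D2T plan, the backup window for the thread's ~600GB data set scales linearly with sustained throughput. The throughput figures below are assumptions for illustration, not measurements:

    ```python
    # Back-of-envelope backup window for ~600GB of data.
    # Throughput figures are assumptions; measure your own kit.

    def backup_hours(data_gb: float, throughput_mb_s: float) -> float:
        """Hours needed to move data_gb at a sustained rate in MB/s."""
        return (data_gb * 1024) / throughput_mb_s / 3600

    print(round(backup_hours(600, 160), 1))  # LTO6-class tape, ~160MB/s assumed
    print(round(backup_hours(600, 100), 1))  # a modest SATA RAID target, assumed
    ```

    Even at the slower assumed rate the full data set moves in under two hours, which supports the earlier hunch that SATA disks in a RAID array won't cripple the backup host.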
