VMware announces vSphere 5.5
The free vSphere Hypervisor no longer has a memory limit of 32GB!
Source: VMware (via Virtualization.info and The Register)
What’s New in VMware vSphere 5.5 Platform (PDF)
This 5.5 release offers several enhancements, all related to helping our customers 1) deliver better performance and availability for business critical apps and 2) support next-gen workloads (Big Data anyone?). These enhancements include:
- Greater Scalability – Configurations have doubled from previous limits when it comes to physical CPUs, memory and NUMA nodes. Virtual disk files also now scale up to 62 TB.
- vSphere Customization for Low Latency Applications – vSphere with Operations Management can be tuned to deliver the best performance for low-latency applications, such as in-memory databases.
- vSphere Flash Read Cache – Server side flash can now be virtualized to provide a high performance read cache layer that dramatically lowers application latency.
- vSphere App HA – This new level of availability enables vSphere with Operations Management to detect and recover from application or operating system failure.
- vSphere Big Data Extensions – Apache Hadoop workloads can now run on vSphere with Operations Management to achieve higher utilization, reliability and agility.
The Virtual SAN feature that's included with vSphere 5.5 looks good too.
The new features introduced with 5.5 are as follows:
- Support for VMDK files of up to 62 TB (was 2 TB) on VMFS-5 and NFS
- No change in the pricing of vSphere editions.
- 4 new features: AppHA, Reliable Memory, Flash Read Cache and Big Data Extensions
- Latency-sensitivity feature for latency-critical applications such as high-performance computing and stock-trading apps
- vSphere Hypervisor (free) has no physical memory limit anymore (was 32 GB)
- Max. RAM per host 4 TB (was 2 TB)
- Virtual CPUs per host 4096 (was 2048)
- NUMA Nodes per host 16 (was 8)
- Logical CPUs per host 320 (was 160)
- PCI hotplug support for SSD
- VMFS heap size improvements
- 16 Gb end-to-end Fibre Channel support, i.e. 16 Gb from host to switch and 16 Gb from switch to SAN
- Support for 40 Gbps NICs
- Enhanced IPv6 support
- Enhancements for CPU C-states, which reduce power consumption
- Expanded vGPU support: in vSphere 5.1 VMware only supported NVIDIA GPUs; in vSphere 5.5 support for AMD and Intel GPUs is added. Three rendering modes are supported: automatic, hardware and software.
- The ability to vMotion a virtual machine between hosts with GPUs from different vendors is also supported. However, if hardware rendering is enabled on the source host and no GPU is present on the destination host, the vMotion will not be attempted.
- Added Microsoft Windows Server 2012 guest clustering support
- AHCI controller support, which enables Mac OS guests to use IDE CD-ROM drives. AHCI is an operating mode for SATA.
vSphere App HA is one of the most interesting features introduced in this release: it monitors specific applications inside the guest OS and, when a monitored service fails, automatically restarts first the service and then, if that is not enough, the whole VM.
For now App HA supports the following applications:
- Microsoft SQL 2005, 2008, 2008R2, 2012
- Tomcat 6.0, 7.0
- TC Server Runtime 6.0, 7.0
- Microsoft IIS 6.0, 7.0, 8.0
- Apache HTTP Server 1.3, 2.0, 2.2
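The restart-service-first, reset-VM-second behaviour described above can be sketched as a small escalation loop. This is purely illustrative: the hook functions (`is_healthy`, `restart_service`, `reset_vm`) are hypothetical stand-ins, not VMware APIs, and the retry count is an assumption.

```python
def remediate(is_healthy, restart_service, reset_vm, max_service_restarts=3):
    """Escalating remediation in the spirit of App HA: restart the
    failed service first, reset the whole VM only as a last resort."""
    if is_healthy():
        return "healthy"
    for _ in range(max_service_restarts):
        restart_service()            # step 1: try to revive the service
        if is_healthy():
            return "service-restart"
    reset_vm()                       # step 2: escalate to a VM reset
    return "vm-reset"
```

The key design point mirrored here is the escalation order: a service restart is far cheaper than rebooting the guest, so the VM reset only fires once in-guest remediation has been exhausted.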
The SSO component has also been rewritten, due to the well-known problems with the RSA component embedded in version 5.1. The vSphere Web Client has been improved as well: it supports all the new features and feels much more responsive.
For the complete list of the new features, refer to the official VMware paper.
Virtual SAN (VSAN from now on in this article) is a software-based distributed storage solution built directly into the hypervisor. No, this is not a virtual appliance like many of the other solutions out there; it sits right inside your ESXi layer. VSAN is about simplicity, and when I say simple I do mean simple. Want to play around with VSAN? Create a VMkernel NIC for VSAN traffic and enable VSAN at the cluster level. Yes, that is it!
When VSAN is enabled, a single shared datastore is presented to all hosts which are part of the VSAN-enabled cluster. Typically all hosts will contribute performance (SSD) and capacity (magnetic disks) to this shared datastore, which means that when your cluster grows, your datastore grows with it. (This is not a requirement; there can be hosts in the cluster which just consume the datastore!) Note that there are some requirements for hosts that want to contribute storage: each host needs at least one SSD and one magnetic disk. Also good to know is that with this beta release the limit for a VSAN-enabled cluster is 8 hosts (total cluster size, including hosts not contributing storage to your VSAN datastore).
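As a rough sketch of the contribution rules above (at least one SSD plus one magnetic disk to contribute; consume-only hosts are allowed), here is how the shared datastore's capacity aggregates across a cluster. Host names and disk sizes are made up for illustration.

```python
def vsan_capacity_gb(hosts):
    """Sum magnetic-disk capacity across hosts that meet the
    contribution requirement: >= 1 SSD and >= 1 magnetic disk.
    Each host is a tuple (name, ssd_count, list_of_magnetic_disk_gb)."""
    return sum(
        sum(magnetic)
        for _name, ssds, magnetic in hosts
        if ssds >= 1 and magnetic
    )

cluster = [
    ("esx01", 1, [900, 900]),  # contributes 1800 GB
    ("esx02", 1, [900]),       # contributes 900 GB
    ("esx03", 0, []),          # consume-only host: no contribution
]
```

Adding another contributing host to `cluster` grows the datastore; adding another consume-only host leaves the capacity unchanged, which is exactly the "datastore grows with the cluster" behaviour described above.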
As expected, VSAN relies heavily on SSD for performance. Every write I/O goes to SSD first and is eventually destaged to magnetic disks (SATA). As mentioned, you can set policies on a per-virtual-machine level; these also dictate, for instance, what percentage of your read I/O you can expect to come from SSD. On top of that you can use these policies to define the availability of your virtual machines. Yes, you read that right: you can have different availability policies for virtual machines sitting on the same datastore. For resiliency, “objects” are replicated across multiple hosts; across how many hosts/disks depends on the chosen policy.
VSAN does not require a local RAID set, just a bunch of local disks. Whether you define 1 host failure to tolerate or, for instance, 3 host failures to tolerate, VSAN will ensure enough replicas of your objects are created. Is this awesome or what? So let's take a simple example to illustrate this. We configure 1 host failure to tolerate and create a new virtual disk. This means that VSAN will create 2 identical objects and a witness. The witness is there just in case something happens to your cluster, to help decide who takes control in case of a failure; the witness is not a copy of your object, let that be clear! Note that the number of hosts in your cluster can limit the number of “host failures to tolerate”: in a 3-node cluster you cannot create an object configured with 2 “host failures to tolerate”. Difficult to visualize? Well, this is what it would look like at a high level for a virtual disk which tolerates 1 host failure:
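The replica-and-witness arithmetic behind this can be sketched in a few lines. To be clear about the assumption: this simplified rule (n failures to tolerate → n+1 replicas, n witnesses, at least 2n+1 hosts) matches the FTT=1 example above, but real VSAN placement has more nuance than this model.

```python
def placement(ftt, hosts):
    """For `ftt` host failures to tolerate, return (replicas, witnesses),
    raising if the cluster is too small. Simplified model: ftt+1 replicas
    plus ftt witnesses, so a majority of components survives any ftt
    host failures; that requires at least 2*ftt+1 hosts."""
    min_hosts = 2 * ftt + 1
    if hosts < min_hosts:
        raise ValueError(f"FTT={ftt} needs at least {min_hosts} hosts, got {hosts}")
    return ftt + 1, ftt
```

For the example above, `placement(1, 3)` gives 2 replicas and 1 witness, while `placement(2, 3)` raises, which is exactly why a 3-node cluster cannot offer 2 host failures to tolerate.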