What’s New in vSphere 5.5

Team evoila
August 28, 2013
Reading time: 7 min

With VMworld 2013 in progress and the new version of vSphere going GA soon, I figured it was time to publish a quick overview of the features that come with the new vSphere 5.5 release. I will try to summarize each feature in just a few words and go into detail with a dedicated article where necessary. Also take a look at What’s New in vSphere 5.5 Platform!

ESXi Features

Hot-Pluggable PCI SSD Devices: 
ESXi now allows hot-plugging of PCIe SSD devices, in addition to the already supported SATA and SAS disks.

Support for Reliable Memory Technology:
Currently, the VMkernel runs in memory in a non-redundant fashion: memory errors can potentially crash the hypervisor and its VMs with it. RMT (Reliable Memory Technology) is a CPU feature that reports a certain memory region as reliable. The VMkernel uses this information to place critical components such as the init thread, hostd and the watchdog in this region, guarding them against failure.
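
Whether the host actually reports reliable memory can be checked from the ESXi shell. A quick sketch (the output values are illustrative; on hosts without RMT the reliable portion simply stays at 0):

    # Show total physical memory and the portion reported as reliable
    esxcli hardware memory get
    #    Physical Memory: 137438953472 Bytes
    #    Reliable Memory: 4294967296 Bytes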

Enhancements to CPU C-States:
While earlier releases supported CPU P-states (frequency scaling), ESXi 5.5 additionally leverages CPU C-states, turning off idle CPU components entirely in order to save power.
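
Which power policy is active, and therefore whether deep C-states may be used, can be inspected via the host's advanced settings. A minimal sketch, assuming the /Power/CpuPolicy option exposed on 5.5 hosts:

    # Show the active CPU power management policy
    # ("dynamic" = Balanced, which makes use of deep C-states on 5.5)
    esxcli system settings advanced list -o /Power/CpuPolicy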

VM Features

Virtual Machine Compatibility:
Formerly known as the “Virtual Machine Version”, this setting has been renamed to Virtual Machine Compatibility. VMs on ESXi 5.5 bring the new VM Version 10. This allows for

  • support for LSI SAS for Solaris 11
  • new CPU architectures
  • a new AHCI SATA controller supporting up to 30 disks and CD/DVD drives

With four of these controllers per VM, the maximum number of disk devices per VM therefore went up from 60 to 120.
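
For illustration, this is roughly what the new pieces look like in a VM's configuration file (file names are hypothetical):

    # Excerpt from a hardware version 10 .vmx file
    virtualHW.version = "10"
    # New AHCI SATA controller with a disk and a CD/DVD drive attached
    sata0.present = "TRUE"
    sata0:0.present = "TRUE"
    sata0:0.fileName = "disk0.vmdk"
    sata0:1.present = "TRUE"
    sata0:1.deviceType = "cdrom-image"
    sata0:1.fileName = "install.iso"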

Expanded vGPU Support:
Support for virtual GPUs was limited to NVIDIA-based chips up until vSphere 5.1. Version 5.5 brings support for several Intel- and AMD-based GPUs while still allowing vMotion migrations. Using the vSphere Web Client or Horizon View, the feature can be enabled for Windows 7, Windows 8, Fedora 17, Ubuntu 12, RHEL 7 and later versions. Three renderer modes (automatic, hardware, software) govern whether a VM can be vMotioned to hosts with a different supported GPU or without one at all.
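
The renderer mode ultimately lands in the VM's configuration. As an illustrative sketch only (I have not verified the exact keys on a GA build):

    # Hypothetical .vmx excerpt: enable 3D and select the renderer mode
    mks.enable3d = "TRUE"
    # "automatic" falls back to software rendering on hosts without a
    # supported GPU, which is what keeps the VM vMotion-compatible
    mks.use3dRenderer = "automatic"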

Graphic Acceleration for Linux Guests:
A new guest driver for the virtual video card is provided and has been fully contributed to the open source community, allowing every distribution to integrate it into its kernel. With the new driver, every Linux VM can support OpenGL 2.1, DRM kernel mode setting, Xrandr, XRender and Xv.

vCenter Server Features

vCenter SSO:
SSO can now connect to its Microsoft SQL database without requiring the customary user IDs and passwords. After joining the hosting system to the AD domain, vCenter SSO interacts with the database using the identity of the machine itself.

vSphere Web Client:
VMware continues to integrate all new features into the Web Client only, enhancing it further and making it more usable for daily administration. This includes full client support for Mac OS X, full support for the Firefox and Chrome web browsers, and usability features like drag and drop, filters and recent items.

vCenter Server Appliance:
The Linux-based vCenter appliance now uses vPostgres as an embedded database, supporting up to 500 ESXi hosts or 5,000 virtual machines. This new scalability maximum enables the vCenter Server Appliance to be used for most environments out there! Great job, VMware!

vSphere App HA:
Application monitoring in an HA cluster has been around for a while, but third-party monitoring software had to implement the vSphere Guest SDK in order to report application health. vSphere App HA renders this concept obsolete. Two appliances must be deployed per vCenter:

  • vSphere App HA appliance: stores and manages policies
  • vFabric Hyperic appliance: uses agents installed in guest OSs to monitor health and enforce policies stored in vSphere App HA

VM (Anti-)Affinity Rules and HA/DRS Clusters:
vSphere 5.5 HA now takes DRS VM rules into account during a failover. This removes the need for DRS to rebalance workloads once they are up and running again, allowing better compliance with multitenancy or regulatory restrictions and higher availability for latency-sensitive workloads that suffer from vMotion migrations.

vSphere Big Data Extensions:
BDE comes as a Web Client plugin allowing simplified provisioning of multinode Hadoop clusters and ecosystem components such as Apache Pig, Apache Hive and Apache HBase.

vSphere Storage Enhancements

62TB VMDK:
The former VMDK size limitation of 2TB minus 512 bytes has finally been broken. With 5.5, VMDKs and virtual compatibility mode RDMs support up to 62TB while still supporting snapshots.
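
Growing an existing disk beyond the old limit works with the usual tooling. A sketch with vmkfstools (path hypothetical; the VM should be powered off, and I am assuming 5.5's vmkfstools accepts the t suffix — otherwise specify the size in gigabytes):

    # Extend an existing VMDK to 62TB
    vmkfstools -X 62t /vmfs/volumes/datastore1/bigvm/bigvm.vmdk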

MSCS:
Earlier, MSCS was supported with FC shared storage only. This support has been extended to iSCSI, FCoE and the Round Robin path policy for Windows Server 2012.

16Gb E2E Support:
Before 5.5, VMware did not support end-to-end 16Gb FC connectivity from ESXi hosts to storage arrays. This changed in 5.5.

PDL AutoRemove:
Once a LUN enters the “permanent device loss” state, ESXi now removes the device from the host, freeing one of the 255 available LUN slots instead of letting the dead device keep occupying one.
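
The behavior is governed by a host advanced setting that ships enabled on 5.5. Checking or disabling it looks like this:

    # Check whether devices in PDL state are removed automatically
    esxcli system settings advanced list -o /Disk/AutoremoveOnPDL
    # Disable automatic removal if required (1 = enabled, 0 = disabled)
    esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0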

vSphere Replication Interoperability:
Primary-site VMs replicated with VSR can now be migrated with Storage vMotion without any penalty for the replication process. Earlier, .psf files were not relocated but deleted during a Storage vMotion migration. As a result, the VM had to go through a full sync, including tons of checksum tests on both ends. As of 5.5, migration of the replica disk is still not possible.

vSphere Replication Multi-Point-in-Time (MPIT) Snapshot Retention:
vSphere Replication 5.5 now supports multiple restore points for a single VM. While earlier releases retained only the state after the last replication, a retention policy now dictates the number of recovery points to be stored.

VAAI UNMAP Improvements:
The esxcli storage vmfs unmap command now supports specifying the reclaim size in blocks rather than as a percentage value. Dead space can now be reclaimed in increments rather than all at once.
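
A short example of the new syntax (datastore name and reclaim unit are hypothetical):

    # Reclaim dead space on a VMFS volume in units of 200 blocks at a time
    esxcli storage vmfs unmap -l MyDatastore -n 200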

VMFS Heap Improvements:
Former releases of VMFS had problems with open files of more than 30TB in size from a single ESXi host. vSphere 5.0 p5 and vSphere 5.1 U1 introduced larger VMFS heap spaces to address the issue, at the cost of higher memory usage when mounting a VMFS volume. vSphere 5.5 improved the heap eviction process, rendering a larger heap size obsolete. With a maximum memory consumption of 256MB, an ESXi host can now access the entire address space of a 64TB VMFS volume.
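
For reference, the pre-5.5 workaround meant raising the heap limit by hand, which 5.5 makes unnecessary. A sketch of the now-obsolete tuning:

    # Pre-5.5 workaround: raise the VMFS heap limit manually
    # (no longer needed on 5.5 thanks to improved heap eviction)
    esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 256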

vSphere Flash Read Cache:
ESXi 5.5 hosts now discover locally attached flash devices and pool them into a “vSphere Flash Resource”. A flash resource can be consumed in two different ways (see the sketch after the list):

  1. vSphere Flash Swap Cache: replaces the previously introduced Swap to SSD feature
  2. vSphere Flash Read Cache: significantly improves read-intensive VM workloads
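
Once devices are pooled, the flash resource can be inspected from the ESXi shell. A sketch, assuming the esxcli storage vflash namespace shipped with 5.5:

    # List locally attached flash devices eligible for the flash resource
    esxcli storage vflash device list
    # List the Flash Read Cache instances of running VMs
    esxcli storage vflash cache list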

Networking Features

LACP:
LACP between physical switches and the vSphere Distributed Switch now supports the following (see the verification sketch after the list):

  • 22 new hashing algorithms
  • multiple link aggregation groups (LAG): 64 LAGs per host and 64 LAGs per vDS
  • a workflow to configure LACP across a large number of hosts
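
To verify the result from the host side, here is a sketch assuming the esxcli network vswitch dvs vmware lacp namespace present on 5.5 hosts:

    # Show the LACP configuration of all LAGs known to this host
    esxcli network vswitch dvs vmware lacp config get
    # Show the runtime LACP status negotiated with the physical switch
    esxcli network vswitch dvs vmware lacp status get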

Traffic Filtering:
Network packets can now be filtered based on various qualifiers: MAC address (src and dst), system traffic qualifiers (vMotion, iSCSI, …) and IP qualifiers (protocol type, src, dst, port number). Matching packets can either be dropped or tagged. This way, VMware implements Access Control Lists (ACLs) for the vDS.

Quality of Service Tagging:
In addition to the previously available IEEE 802.1p Class of Service tagging of Ethernet frames (Layer 2) as part of Network I/O Control (NIOC), the vDS now supports Differentiated Services Code Point (DSCP) marking. This standard inserts QoS tags into the IP header, making QoS information available across the entire networking environment, including routers.

SR-IOV:
Single Root I/O Virtualization has been improved by simplifying the configuration process and allowing propagation of port group settings to the virtual functions of SR-IOV devices.
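
Enabling virtual functions still happens through the NIC driver's module parameter. A sketch for an Intel ixgbe adapter (parameter names are driver-specific, values hypothetical, reboot required):

    # Create 4 virtual functions on each of two ixgbe ports
    esxcli system module parameters set -m ixgbe -p "max_vfs=4,4"
    # Verify the parameter after the reboot
    esxcli system module parameters list -m ixgbe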

Enhanced Host-Level Packet Capture:
Behind this fancy name hides a simple tcpdump-like command-line traffic sniffer available on ESXi hosts. It allows detailed capturing of vSS and vDS traffic.
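
The tool behind the name is pktcap-uw, which ships with ESXi 5.5. A couple of illustrative invocations (uplink names and port IDs are hypothetical):

    # Capture traffic on a physical uplink and write it to a pcap file
    pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap
    # Capture the traffic of a single virtual switch port
    pktcap-uw --switchport 50331662 -o /tmp/port.pcap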

40Gb NIC Support:
Support for 40Gb NICs was introduced, initially covering Mellanox ConnectX-3 VPI adapters in Ethernet mode.