If you’re interested in what really makes the new instalment of VMware’s ESXi 5.1 tick, then you’re in the right place. VMware has given us the liberty of looking at the new features implemented within VMware ESXi 5.1, as well as vSphere 5.1. We’re going to run through these features individually so you can get a good grasp of what they’re offering us this year. Are you as excited as I am? Not to spoil the surprise, but one of the biggest changes made this year is the removal of the vRAM limits, which is what we’re going to cover first.
VMware ESXi 5.1’s Newest Features
Removal of vRAM Limits
One of the largest things to note is that the vRAM memory limit has been removed, even though it isn’t listed as a specific “feature” of the product. I guess you could say it’s just icing on the cake, if you want to put it like that! If you think back to the whole debacle surrounding the vRAM limitations within ESXi and vSphere 5.0, you’ll look at this new instalment as something you’ve been waiting for. It seemed as if VMware was trying to milk its customers with those limitations; the “RAM-tax”, as it came to be known, forced you to dip into your pockets for extra licences whenever your server boxes carried more memory than your entitlement covered. They fixed that mistake this time around, so that’s something we won’t have to worry about for future endeavours! Pricing for this edition is based solely on a per-CPU-socket basis, as opposed to the ironically terrible combination of sockets, virtual memory, and the number of VMs being used.
Brand New Flash/Web-Based Management Client For vSphere 5.1
An Adobe Flash, web-based management client for vSphere 5.1 is also included in this rendition. It was actually written with Apache Flex (which itself makes prevalent use of Flash), and while many applications don’t necessarily need Flash, there are still some useful Flash-based applications you could use within your VMs. The old management client is still available and works with 5.1 easily, but the new features that come alongside the Flash-based web client won’t be available there. Of course, you’re going to have to put Flash on your machine, but the chances are you’ve already got it installed anyway.
Great Support For The Newer Hardware
Seeing as this release has so many goodies in it already, support for the more powerful types of computing hardware from AMD and Intel is almost a given. Not only that, but the virtualization hardware-abstraction layer has been rejuvenated, upgraded to version 9 virtual hardware. This brings support for Intel’s VT-x with Extended Page Tables (EPT) virtualization-assistance features, as well as AMD-V with Rapid Virtualization Indexing (known to most simply as RVI), i.e. nested page tables. This hardware support exists because VMware wanted to minimise the hypervisor and VM guest operating system overhead inflicted on your system’s physical processors.
Another nice touch in this edition is that any VM generated on VMware ESX Server 3.5 or later can run on an ESXi 5.1 system virtually unchanged. This means there’s no need to shut down and upgrade to version 9 virtual hardware straight away. If you want the newest features offered with 5.1, however, you’re going to have to update your VMs anyway, but this does give you the ability to upgrade whenever you see fit.
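If you do decide to move a VM up to version 9 hardware, the upgrade can be kicked off from the ESXi shell with vim-cmd. This is only a rough sketch: the VM ID shown is made up (look yours up with the first command), and the VM has to be powered off first.

```shell
# List registered VMs and their IDs, then upgrade one of them to
# virtual hardware version 9. The ID (42) is illustrative, not real.
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.off 42       # the VM must be powered off to upgrade
vim-cmd vmsvc/upgrade 42 vmx-09  # move the VM to version 9 virtual hardware
```

The same upgrade can be driven from the management client; either way, snapshot or back up the VM first, since a hardware upgrade isn’t easily rolled back.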
VM Hardware-Accelerated 3D Graphics Support
If you used the 5.0 version, you were probably asking yourself why there wasn’t support for NVIDIA CUDA/vGPU. It seems VMware was thinking the exact same thing, as they figured out a way to introduce it into the technology this time around.
When you’re using vSphere 5.1, VMware has made it possible to use hardware-based vGPUs within a virtual machine if you want. This comes on the heels of a partnership between VMware and NVIDIA themselves. vGPUs ease the graphical constraints associated with a virtual machine through off-loading: demanding graphics processing is off-loaded to a physical GPU located on the vSphere host. Within vSphere 5.1, the brand-new vGPU support is aimed at View environments that are incredibly dependent on graphics; examples would be things like graphic design or even medical imaging (X-rays and such).
The hardware-based vGPU support in vSphere is somewhat limited, as it focuses solely on View environments running on vSphere hosts with supported NVIDIA GPU cards. Also, the original vGPU release only supported desktop virtual machines (running either Microsoft Windows 7 or 8). Although vGPU support is enabled within vSphere 5.1, it’s important to know that how far you can leverage this particular feature depends on wherever future releases of View go. Even though these “future release” statements never seem to sit that well with everybody, it’s much, much better than the complete lack of support for NVIDIA GPUs we had before. Nothing about CUDA is mentioned, and the off-loading is something to delve deeper into, but it sounds pretty good overall.
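To give a feel for what turning on 3D for a View desktop looks like at the VM level, the settings below are appended to the VM’s .vmx file while it is powered off. This is a hedged sketch: the datastore path and the video-memory value are assumptions for the example, not values from the article.

```shell
# Hypothetical example: enable 3D support for a Windows 7 View desktop VM.
VMX=/vmfs/volumes/datastore1/win7-desktop/win7-desktop.vmx  # illustrative path
echo 'mks.enable3d = "TRUE"' >> "$VMX"        # turn on 3D for this VM
echo 'svga.vramSize = "134217728"' >> "$VMX"  # 128 MB of video memory (example value)
```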
Extra Goodies And Features
There is support for Windows 8 desktop systems, as well as Windows Server 2012. These aren’t necessarily used that often yet, but the support is there for those who plan on making use of it.
CPU virtualization has also been improved in ESXi 5.1 (known to most as VHV, or virtual hardware-assisted virtualization). This new process is supposedly going to allow your virtualized operating systems near-native access to the physical CPUs themselves. Speed is important, so this is good to know.
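As a small, hedged example, VHV is switched on per-VM with a single flag in the .vmx file; the VM needs version 9 hardware and must be powered off, and the path below is made up for illustration.

```shell
# Illustrative only: expose hardware virtualization extensions (VHV)
# to the guest OS of a powered-off, version 9 VM.
echo 'vhv.enable = "TRUE"' >> /vmfs/volumes/datastore1/myvm/myvm.vmx
```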
The ability to perform live migration of a VM between two separate physical servers (if both are running ESXi, of course) is now enabled, and it can be done without both physical servers being linked to the same SAN. Maybe a bit overdue, but welcome nonetheless.
Hardware-assisted virtualization information (as well as CPU performance counters) can now be exposed to guest operating systems. This is especially useful for any developers who may be in the process of debugging or tuning applications within a VM.
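A minimal sketch of enabling this, assuming a version 9 VM: the virtualized performance counters are switched on with one .vmx flag, after which an in-guest profiler such as Linux’s `perf` can read them. The path here is illustrative.

```shell
# Illustrative: let the guest OS see virtualized CPU performance counters.
echo 'vpmc.enable = "TRUE"' >> /vmfs/volumes/datastore1/devvm/devvm.vmx
```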
There are brand-new storage features, too. Read-only file sharing on a VMFS volume has been increased to 32 hosts rather than the original 8. There are Space-Efficient Sparse Virtual Disks, which come with automated processes that reclaim any stranded space you may have, along with a dynamic block-allocation unit size, which is good for matching storage and application needs. There is also 5-node MSCS cluster support (versus 2 nodes), jumbo frame support for every single iSCSI adapter available, and finally Boot from Software FCoE.
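To give a feel for the jumbo-frame side of this, the sketch below raises the MTU on a standard vSwitch and on the VMkernel port used for iSCSI via esxcli. The vSwitch and interface names are assumptions, and your physical switches and storage array have to support 9000-byte frames end to end as well.

```shell
# Hedged example: enable jumbo frames for iSCSI traffic
# (vSwitch1 and vmk1 are made-up names for illustration).
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```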
The reliance on a shared “root” user account (specifically for administration) has been removed, and support for SNMPv3 has been added. Local users who are assigned administration duties can access the full shell, and they can also run root commands without having to “sudo”. Easier monitoring, easier access, more efficiency!
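For the monitoring side, here is a minimal sketch of turning on the host’s SNMP agent with esxcli; SNMPv3 users and trap targets need further configuration beyond what’s shown.

```shell
# Hedged example: enable the ESXi SNMP agent and inspect its configuration.
esxcli system snmp set --enable true
esxcli system snmp get
```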
The Guest OS Storage Reclamation feature is another useful one: when files are removed from within the guest operating system, the size of the VMDK file can be reduced as well, and the de-allocated storage space can be returned to the free storage pool (this makes use of the new SE sparse VMDK format, which is available with View). Although this too is dependent on future releases, it seems pretty useful at the moment.