Open Source Xen Server 6.2

Citrix has taken it upon themselves to bless us with something we’ve all been eagerly waiting for: an open source version of Xen Server 6.2 (this is to match up against VMware’s closed source, but also free, vSphere Hypervisor). Citrix has maintained a free and open source version of Xen for years now, but it hasn’t been doing particularly well against other hypervisor solutions; VMware’s free vSphere Hypervisor has been gaining ground, as has KVM. Citrix took it upon themselves to make change happen, which is why the release of Xen Server 6.2 is so exciting to talk about. This release is supposed to help the team increase the amount of community support they receive, and what better way to promote helpful behaviour than to release an open source project? This particular release (although it was some time ago) comes with plenty of beneficial features, among them support for CloudStack, OpenStack, and Citrix’s very own CloudPlatform. Citrix was also keen to highlight its broad guest operating system support, including Microsoft’s Windows 8 and Windows Server 2012.

Sameer Dholakia, who is Group Vice President and General Manager of Citrix’s Cloud Platforms Group, stated that, “The cloud era has brought a lot of exciting opportunities for data centre infrastructure, but the reality is that one size doesn’t fit all when it comes to virtualization.” Dholakia continued, saying, “By empowering our users and partners with a committed open source strategy and community for XenServer – which already powers some of the largest clouds in the world – we are moving the needle in innovation to help customers of all sizes, and at all stages of their cloud strategies, to maximize the benefits they gain from virtualization and the cloud.” James Dunn, who specialises in virtualisation technology at Sphere IT, an IT company in London, said, “The new features of open source Xen Server are a great step forward in virtualisation and will really be appreciated by system admins all over the world.”

Citrix stated that it was going to support its XenDesktop on Xen Server 6.2 (as if that wasn’t obvious enough!), including support for IntelliCache and Dynamic Memory Control. The team at Citrix also stated that it has added Desktop Director alerts, which will be especially useful for administrators, as they will be warned when resources run low (so they can act before their VMs become unstable). Citrix is pushing to help firms and companies get used to the new and improved (did we mention free?) version of Xen Server, at least until they move up to the paid editions, which can cost up to 3,250 euros. Seeing as Citrix is supporting a free platform such as this, VMware is going to have to do the same (and maybe even more) if they’re going to keep their grasp on the market.

The Ins And Outs Of The Shared VHDX File Feature (With Windows Server 2012 R2)

This would have to be one of the most notable features introduced in Hyper-V with Windows Server 2012 R2, so we’re going to explain how it works in its entirety. The most important storage feature (in some people’s opinions) would be shared VHDX. The whole point of this feature is to make it possible to create a virtual hard disk file and then share that file amongst a number of virtual machines. The most common (and intended) use of a shared VHDX file is with guest clustering. Previously, if you wanted to make use of shared storage for a guest cluster, the nodes of the cluster were required to connect to shared storage through iSCSI, Fibre Channel or a Server Message Block (more commonly referred to as SMB) 3.0 file share (Hyper-V 3.0 only). With Windows Server 2012 R2, it’s actually possible to share a virtual hard disk directly (though it has to be in the VHDX format). The cluster nodes then see it exactly as they would a shared Serial Attached SCSI disk.

Many people are probably wondering how all of this is useful to them, and we’re going to tell you why. Larger companies and businesses make use of private clouds, within which authorized users have the ability to create their own VMs (virtual machines) based on pre-created templates. This kind of self-service VM creation works well for standalone VMs. When it comes to building guest clusters, however, you may have a tougher time than you would have figured. There are, of course, exceptions to this rule, but guest clusters (for the most part) require the cluster nodes to access Cluster Shared Volumes (CSVs). Virtualization administrators, on the other hand, are usually quite reluctant to expose the underlying storage architecture in a way that would allow users to manage a CSV themselves. Those who find themselves rather intrigued by self-service features will find this particular one rather useful.

This would be where shared VHDX comes in handy; after all, there has to be a method to the madness! Administrators usually prefer not to expose the storage infrastructure, so instead they can allow their users to create a VHDX file themselves and then share that file amongst whichever guest cluster nodes they see fit (the whole process is easy, as if the shared VHDX were a CSV in its own right). Another thing worth mentioning is the fact that shared VHDX files don’t actually remove the need for physical shared storage. The shared VHDX file still has to live somewhere that all the guest cluster nodes can access. In practice this means the VHDX file has to be stored on a CSV or an SMB 3.0 file share, although it does let virtualization administrators allow creation of shared VHDX files without exposing the entirety of the storage architecture underneath.
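
To make that concrete, here is a rough sketch of how an administrator might create a shared VHDX and attach it to two guest cluster nodes with the Hyper-V PowerShell module. The VM names and paths are made up for the example, and the -SupportPersistentReservations switch is, to my understanding, how Server 2012 R2 exposes the sharing option in PowerShell.

```powershell
# Create the data disk on a Cluster Shared Volume so every Hyper-V node can reach it.
New-VHD -Path "C:\ClusterStorage\Volume1\SQLCluster-Data.vhdx" -SizeBytes 200GB -Dynamic

# Attach the same VHDX to both guest cluster nodes and flag it for sharing so the
# guests can use SCSI persistent reservations for clustering.
foreach ($vm in "SQLNODE1", "SQLNODE2") {
    Add-VMHardDiskDrive -VMName $vm `
        -ControllerType SCSI -ControllerNumber 0 `
        -Path "C:\ClusterStorage\Volume1\SQLCluster-Data.vhdx" `
        -SupportPersistentReservations
}
```

Inside the guests the disk then simply shows up as shared SAS storage, ready for Failover Clustering.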

One final thing to note (other than the sheer fact that it’s a great feature for self-service) is that the shared VHDX feature is incredibly useful in hybrid clouds, because it reduces the complexity of connecting cloud-based guest cluster nodes to shared storage.

I know you are probably reading this wishing you had configured your now-live VMs with a VHDX format disk rather than the older VHD type, but no worries… there is in fact a way to convert VHD to VHDX, and this can be found here. You will need to use PowerShell, and there is also a script you can use to make this process easier. Below is also a nice video which shows you what you need to do to perform this conversion.
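
If you would rather skip the video, a minimal PowerShell sketch of the conversion looks roughly like this; the VM name, paths and controller numbers are placeholders, and the VM should be shut down (or the disk detached) before you convert and repoint it.

```powershell
# Convert the old VHD to the newer VHDX format; the source file is left untouched.
Convert-VHD -Path "D:\VMs\AppServer01\AppServer01.vhd" `
            -DestinationPath "D:\VMs\AppServer01\AppServer01.vhdx"

# Repoint the VM's existing disk attachment at the new VHDX file.
Set-VMHardDiskDrive -VMName "AppServer01" -ControllerType IDE `
                    -ControllerNumber 0 -ControllerLocation 0 `
                    -Path "D:\VMs\AppServer01\AppServer01.vhdx"
```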

VMware ESXi 5.1 Overview

If you’re interested in what really makes the new instalment of VMware’s ESXi 5.1 tick, then you’re in the right place. VMware has given us the liberty of looking at the new features implemented within VMware ESXi 5.1, as well as vSphere 5.1. We’re going to run through these features individually so you can get a good grasp of what they’re offering us this year; are you as excited as I am? Not to spoil the surprise, but one of the biggest changes they made this year was the removal of the vRAM limits, which is what we’re going to cover first.

VMware ESXi 5.1’s Newest Features

Removal of vRAM Limits

One of the largest things to note would be the fact that the vRAM memory limit has been removed, even though it isn’t listed as a specific “feature” of the product. I guess you could say it’s just icing on the cake, if you want to put it like that! If you think back to the whole debacle surrounding the vRAM limitations within ESXi and vSphere 5.0, you’ll look at this new instalment as something you’ve always been waiting for. It seemed as if VMware was trying to milk their customers with those limitations, a “RAM-tax” of sorts that had you dipping into your pockets whenever your server boxes carried more memory than your licences entitled you to use. They fixed that mistake this time around, so that’s something we won’t have to worry about for future endeavours! Pricing for this edition is based solely on a per-CPU-socket basis, as opposed to the ironically terrible combination of sockets, virtual memory and the number of VMs being used.

Brand New Flash/Web-Based Management Client For vSphere 5.1

An Adobe Flash, web-based management client for vSphere 5.1 is also included in this release. It was actually written with Apache Flex (which in turn makes prevalent use of Flash), so you’re going to have to have Flash on your machine, but the chances are you’ve already got it installed anyway. The old management client is still available and works with 5.1 easily enough, but the new features that come with this release are only exposed through the new Flash-based web client.

Great Support For The Newer Hardware

Seeing as this release has so many goodies in it already, support for the more powerful types of computing hardware from AMD and Intel is almost a given. Not only that, but the virtualization hardware-abstraction layer has been rejuvenated in a sense: VMs can be upgraded to version 9 virtual hardware, which comes with support for Intel’s VT-x with Extended Page Tables (EPT) virtualization assists, as well as AMD-V with Rapid Virtualization Indexing (known to most simply as RVI, or nested page tables). The point of supporting these hardware assists is to minimize the hypervisor and guest operating system overhead inflicted on your system’s physical processors.

Another nice touch in this edition is that any VM created on VMware ESX Server 3.5 or later can run on an ESXi 5.1 system virtually unchanged, so there is no need to shut it down and upgrade it to version 9 virtual hardware straight away. If you want the newest features offered with 5.1, however, you’re going to have to upgrade your VMs anyway, but this does give you the ability to do it whenever you see fit.
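
If and when you do decide to move a VM up to version 9 hardware, something along these lines should do it in PowerCLI; the vCenter address and VM name are placeholders, the VM generally needs to be powered off first, and taking a snapshot beforehand is sensible.

```powershell
# Connect to vCenter with VMware PowerCLI.
Connect-VIServer -Server "vcenter.example.local"

# See which virtual hardware version each VM is currently running.
Get-VM | Select-Object Name, Version

# Upgrade a single powered-off VM to virtual hardware version 9.
Get-VM -Name "legacy-app01" | Set-VM -Version v9 -Confirm:$false
```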

VM Hardware-Accelerated 3D Graphics Support

If you used the 5.0 version, you were probably asking yourself why there wasn’t support for Nvidia CUDA/vGPU. It seems as if VMware was thinking the exact same thing themselves, as they figured out a way to introduce it into the technology this time around.

With vSphere 5.1, VMware has made it possible to use hardware-based vGPUs within a virtual machine if you want to. This comes on the heels of a partnership between VMware and NVIDIA themselves. vGPUs ease the graphical constraints associated with a virtual machine through off-loading: demanding graphics processing is handed off to a physical GPU located on the vSphere host. Within vSphere 5.1, the brand new vGPU support is aimed at View environments that are heavily graphics-dependent, with examples being things like graphic design or even medical imaging (X-rays and such).

The hardware-based vGPU support in vSphere 5.1 is somewhat limited, as it focuses solely on View environments running on vSphere hosts with supported NVIDIA GPU cards. Also, the initial release of vGPU only supports desktop virtual machines (running either Microsoft Windows 7 or 8). Although vGPU support is enabled within vSphere 5.1, it’s important to know that how far you can take this particular feature depends on where future releases of View go. Even though these “future release” statements never seem to sit that well with everybody, it’s much, much better than no support for NVIDIA GPUs at all. There is nothing about CUDA mentioned, and the off-loading is something to delve deeper into, but it sounds pretty good overall.

Extra Goodies And Features

- There is guest support for Windows 8 desktop systems, as well as Windows Server 2012 support. These aren’t necessarily used that often yet, but the support is there for those who plan on making use of it.

- There is improved CPU virtualization in ESXi 5.1 (known as VHV to most). This is supposed to allow near-native access to the physical CPUs themselves from within your virtualized operating systems. Speed is important, so this is good to know.

- The ability to perform live migration of a VM between two separate physical servers (if both are running ESXi, of course) is now enabled, and it can be done without both physical servers being linked to the same SAN. Maybe a bit outdated, but maybe not.

- Hardware-assisted virtualization information (as well as CPU counters) can now be exposed to guest operating systems. This is especially useful for any developers who may be in the process of debugging or tuning applications within a VM.

- Brand new storage features, which include read-only file sharing on a VMFS volume being increased to 32 hosts rather than the original 8; Space-Efficient Sparse Virtual Disks, which automatically reclaim any stranded space you may have; a dynamic block allocation unit size, which is good for matching storage and application needs; 5-node MSCS clusters (versus 2-node); jumbo frame support for every iSCSI adapter available; and finally Boot from Software FCoE.

- Reliance on a shared “root” user account for administration has been removed, and support for SNMPv3 has been added. Local users who are assigned administration duties are able to access the full shell and run root commands without having to “sudo”. Easier monitoring, easier access, more efficiency!

- The Guest OS Storage Reclamation feature is another useful one: when files are removed from within the guest operating system, the size of the VMDK file can be reduced as well, and the de-allocated storage space can be returned to the storage free pool (this makes use of the new SE sparse VMDK format, which is available with View). Although this one is also dependent on future releases, it seems pretty useful at the moment.

Server 2012 R2

Some of the great new features in Hyper-V 2012 R2

Generation 2 VMs

Traditional VMs emulate hardware such as network cards, IDE/SCSI controllers, video cards and so on. With the introduction of Server 2012 R2, Microsoft has introduced generation 2 VMs, which use a brand new architecture whereby the VM uses synthetic devices provided by the host rather than emulated legacy hardware. This opens the door to a number of new features such as Secure Boot and booting from a virtual SCSI controller. Generation 2 is limited to 64-bit Windows 8 or Server 2012 guest VMs and is not supported with Server 2008 or Windows 7 VMs.
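
Creating one is just a switch on New-VM; a quick hedged example below, where the VM name, paths, sizes and virtual switch name are all placeholders.

```powershell
# Create a generation 2 VM with a new VHDX boot disk, attached to an existing virtual switch.
New-VM -Name "Gen2-Web01" -Generation 2 `
       -MemoryStartupBytes 2GB `
       -NewVHDPath "D:\VMs\Gen2-Web01\Gen2-Web01.vhdx" -NewVHDSizeBytes 60GB `
       -SwitchName "External vSwitch"

# Secure Boot is enabled by default on generation 2 VMs; the firmware settings can be inspected here.
Get-VMFirmware -VMName "Gen2-Web01"
```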

Virtual Machine direct connect

Before Hyper-V 2012 R2, to remotely connect to a running guest VM you would have to RDP to the guest in question, which required that the guest VM had a NIC and an IP address configured that you could reach. Now, with the release of Hyper-V 2012 R2, Microsoft has introduced another method which does not require this and enables you to remotely manage VMs that are not yet live on your network with an IP address. This is an interesting feature: you connect via the VMBus to manage such VMs.
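
Assuming this refers to the enhanced session mode that 2012 R2 introduced, the workflow is simply to allow it on the host and then open a console session with vmconnect; the host and VM names below are placeholders.

```powershell
# Allow enhanced sessions on the Hyper-V host (Windows Server 2012 R2).
Set-VMHost -EnableEnhancedSessionMode $true

# Open a console session to the VM over the VMBus; no guest NIC or IP address is needed.
vmconnect.exe HV-HOST01 "Build-Test-VM"
```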

Extended Replication

While the replica feature in 2012 was a great addition for implementing DR scenarios for your virtual infrastructure, Microsoft has now gone a step further in Server 2012 R2. You now have the option of a third replica, which in essence means you can store one copy onsite and push another copy of the same VM offsite.

Replica intervals

In Hyper-V 2012, the replication interval could not be changed and was set at 15 minutes. So even if you had the supporting hardware and network speed, you could not reduce it, nor did you have the freedom to extend it to a longer interval. In Hyper-V 2012 R2 you now have the freedom to select replication intervals of 30 seconds, 5 minutes or 15 minutes. Furthermore, this will work with an intermittent connection, as Hyper-V will allow 12 missed cycles before it deems replication failed; with the 15-minute interval that means you can have up to 3 hours of network downtime before it switches to a failed state.
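
Changing the interval is a single parameter on an already-replicating VM; the VM name here is a placeholder, and 30, 300 and 900 seconds are the only values Hyper-V accepts.

```powershell
# Tighten replication to the new 30-second interval for a critical VM.
Set-VMReplication -VMName "FileServer01" -ReplicationFrequencySec 30

# Or relax it to 15 minutes for a slower WAN link.
Set-VMReplication -VMName "FileServer01" -ReplicationFrequencySec 900
```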

Compression for Quicker migration

Microsoft has introduced two new options you can select to make migration of VMs faster over the network. The first compresses the data being transmitted; obviously compressing the data will have an impact on processing, so you will need to ensure that your hypervisor has the resources available to do this. The other option is SMB Direct, whereby the memory of the VM being migrated is copied over using SMB. Microsoft recommends 10 Gb NICs at each end for better performance when using the latter option; otherwise, use the first option and compress the transmitted data.
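
Both options are exposed as a host-level setting, so a rough sketch of switching between them looks like this.

```powershell
# Compress live migration traffic (this is the default behaviour in 2012 R2).
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# Or, if you have fast RDMA-capable NICs at both ends, copy the VM memory over SMB instead.
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```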

Live Exporting and Cloning of VMs

In Hyper-V 2012 you would have to power off a VM you wished to clone or export, which in a production environment can be a difficult task, especially if the role it is hosting is in constant demand. Now, in 2012 R2, Microsoft has enabled a feature which allows you to back up, export or clone a running VM. This is an amazing new feature and will no doubt make a lot of sys admins very happy.
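
For example, exporting a running VM is now a one-liner; the VM name and export path below are placeholders.

```powershell
# Export a running VM (configuration, checkpoints and disks) without any downtime.
Export-VM -Name "CRM-App01" -Path "\\backupserver\HyperV-Exports"
```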

Resizing VHDX drives online

With Hyper-V 2012 R2 we now have the freedom to expand or reduce the size of a guest VM’s VHDX drive without having to shut the machine down. Bear in mind that this can only be done if the guest VM in question was configured to use a VHDX format virtual disk and not the older VHD format.
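
A quick hedged example with Resize-VHD; the path and sizes are placeholders, and you still have to extend (or shrink) the volume inside the guest afterwards.

```powershell
# Grow a VHDX that is attached to a running VM. Online resize in 2012 R2 requires the
# disk to be attached to a virtual SCSI controller, not the IDE controller.
Resize-VHD -Path "C:\ClusterStorage\Volume1\FS01-Data.vhdx" -SizeBytes 500GB

# Shrinking is also possible, down to whatever the file system inside the disk allows.
Resize-VHD -Path "C:\ClusterStorage\Volume1\FS01-Data.vhdx" -SizeBytes 300GB
```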

Storage QoS

This feature enables you to control disk I/O by setting the minimum and maximum IOPS each guest VM’s virtual hard disk can consume. This is especially useful when you have a guest VM running a disk I/O-hungry application that could otherwise cause other guest VMs on the same hypervisor to run poorly because all the resources are being absorbed by the disk-hungry VM in question.
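
The limits are set per virtual hard disk; a minimal sketch, where the VM name, controller location and IOPS figures are arbitrary examples you would tune to your own storage.

```powershell
# Cap a noisy VM's data disk at 500 IOPS, and set a minimum threshold of 100 IOPS
# (Hyper-V raises an alert if the disk cannot achieve the minimum).
Set-VMHardDiskDrive -VMName "SQL-Reporting01" `
    -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 `
    -MinimumIOPS 100 -MaximumIOPS 500
```
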
These are some of the new features in Hyper-V 2012 R2, and we can appreciate how virtualisation is developing in the IT world. As the technology continues to develop, we system admins are going to appreciate it more and more. No more worrying about tape and bare-metal-restore backup scenarios: we will be in a world where everything is in a virtualised state and can quickly be backed up and restored to another hypervisor.

Introduction to Virtualisation in the world of computing.

In the past five years server virtualisation has become an increasingly popular choice over running several physical servers, one per role. For example, if you had an Active Directory server, a file server and an Exchange server, before the days of virtualisation you would (in a best-practice scenario) require at least two servers. Now, with ever-expanding virtualisation technology, you can host all three servers in their own virtual instances (guest machines) on one physical server (the host, or hypervisor).

Back in 2006, when VMware released VMware Server (formerly known as GSX Server), this was a big leap for most IT professionals and businesses to make. That was mostly due to uncertainty about the technology and its reliability; furthermore, the cost of hardware was a lot higher than it is now. Now, however, as technology has evolved and processing speed and data storage have increased exponentially, the justification for a virtual infrastructure is much more appealing and cost effective.

There is a wide range of hypervisors to choose from, but the three biggest players are VMware ESX, Citrix XenServer and Microsoft Hyper-V. Both ESX and Xen run on a Linux-based platform and are managed using a client installed on another server or PC, while Hyper-V is managed on the server itself through the Windows GUI. Fundamentally the procedure for creating and managing virtual machines is the same through each proprietary management application, but each has its own advantages over the others. If you have never had (or needed) the opportunity to use or experiment with virtual servers, then I would suggest getting started with Hyper-V, which can simply be added as a role to your Server 2008/2012 installation. You will also need to ensure that your CPU supports virtualisation and that virtualisation is enabled in the BIOS.
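
If you do go the Hyper-V route on Server 2012, adding the role is a one-liner in an elevated PowerShell session (the server will reboot to finish); on Server 2008 R2 the older Add-WindowsFeature route applies instead.

```powershell
# Add the Hyper-V role plus its management tools, then restart to complete the install.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```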

When Server 2008 was released, Microsoft also introduced “virtual licences”, which in a nutshell means that when you purchase Server 2008 Standard you can install the media on the physical hardware as you normally would, and you then have the right to run another virtual instance of Server 2008 Standard within Hyper-V on that server. The CD label has two CD keys: one for the physical server and the other for your virtual guest. Assuming your physical server’s hardware is good enough, you can effectively use this virtual server for another role that you might otherwise want on a separate box, without paying for additional hardware and an additional Windows Server licence. Since the release of Server 2012, Microsoft has increased the number of virtual instances you can activate on the same physical hardware from one to two, allowing you to run three instances of Server 2012 Standard (one physical and two virtual on the same physical hardware).

Virtualisation also brings another great advantage in how companies plan their disaster recovery scenarios. Because business-critical servers are hosted as virtual instances on a physical hypervisor, backing up and restoring full VMs becomes a breeze. For instance, let’s assume you have a couple of physical servers in your comms room that need to be backed up each night. Depending on the types of backups you have planned for in your DR strategy, and whether you have chosen data-level backups or a full bare-metal restore procedure, you will find yourself having to either source new hardware to restore the image to or reinstall the operating system from scratch and then restore the data. In a virtual environment, however, there are a vast number of methods and third-party tools (such as Veeam) which you can use to simply take a full backup each night of the whole VM as it was at that time. To restore one of these VMs, if for example there was a corruption or you managed to destroy something on the production VM, you would simply restore the whole VM back to the live hypervisor and start it up. Furthermore, if the whole hypervisor failed, you could quickly install the Hyper-V role on another server (assuming it has enough resources), or even a high-spec workstation, then restore your VMs and start them up in the interim until you get the defective server operational again. Once that server is operational again, you can simply move the production VMs from the “temporary” hypervisor back to the primary.

Another advantage of virtualisation is the use of snapshots, which enable you to take a “picture” of how the server was at that exact time and allow you to revert to that snapshot in the future. A good example of when you would use this is if you wanted to install some new software on a server and want the flexibility to revert back to this point if the installation fails or the software inevitably breaks something.
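
On Hyper-V, for instance, that “picture and revert” workflow looks roughly like this; the VM and snapshot names are placeholders.

```powershell
# Take a checkpoint (snapshot) before the risky software install...
Checkpoint-VM -Name "App-Server02" -SnapshotName "Before CRM upgrade"

# ...and roll the VM back to it if the installation breaks something.
Restore-VMSnapshot -VMName "App-Server02" -Name "Before CRM upgrade" -Confirm:$false
```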