In part 1 of Virtualizing Microsoft Lync we focused on which workloads can be virtualized and on the characteristics of the server architecture needed to virtualize Microsoft Lync. In part 2, we will dig a little deeper into the configuration of Microsoft Lync.
Guest Server Configuration
While storage is not an especially demanding component of Microsoft Lync, it should not be overlooked. The type of storage to use will depend on the size of the deployment. The storage platforms supported within Lync include DAS, iSCSI SAN, and Fibre Channel SAN. For each supported storage type, plan for a multipath architecture. Also, monitor storage I/O to ensure you meet the required performance and that storage does not become the bottleneck for the Lync environment.
Networking, on the other hand, is a component that deserves specific attention. Microsoft Lync is extremely sensitive to high latency and packet loss. It is quite possible for workloads, especially a Lync server with a media workload, to reach network utilization of 500 Mbps. Therefore, dedicated network adapters are recommended for Lync workloads. Consider using 10 GbE adapters or multiple network adapters with link aggregation. Also, enable VLAN tagging and consider creating a separate vSS or DvS with dedicated NICs. Remember, there is no way to assign traffic priority to VMs as there is with CPU, disk, and RAM.
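As a rough planning aid, the bandwidth math above can be sketched in a few lines. This is only an illustration: the 500 Mbps peak figure comes from the guidance above, while the specific NIC configurations compared are assumptions.

```python
# Rough NIC capacity check for a Lync media workload.
# The 500 Mbps peak figure is from the guidance above;
# the NIC configurations below are illustrative assumptions.

def headroom_pct(peak_mbps: float, link_mbps: float, links: int = 1) -> float:
    """Percent of aggregate link capacity left free at peak load."""
    capacity = link_mbps * links
    return round(100 * (1 - peak_mbps / capacity), 1)

peak = 500  # Mbps, per the guidance above

# Two aggregated 1 GbE links vs. a single 10 GbE link
print(headroom_pct(peak, 1000, links=2))   # 75.0 (% headroom)
print(headroom_pct(peak, 10000, links=1))  # 95.0 (% headroom)
```

Either option leaves headroom at the stated peak, but a single shared 1 GbE adapter would be at 50% utilization from Lync alone, which is why dedicated adapters or aggregation are suggested.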
Within the Microsoft virtualization world, use Virtual Machine Queue (VMQ) on the physical NICs. Enabling VMQ offloading increases receive and transmit performance, as the adapter hardware can perform these tasks faster than the OS. This also frees up CPU resources. Within VMware, check with your NIC vendor to see if the drivers loaded on your host offer similar features.
Virtual Machine Performance Considerations
Once you have your host and virtual machines configured, there are a few things to consider to ensure the best performance of the environment.
With CPUs, it is recommended to dedicate processors. However, setting priority within VMware for CPU cycles on Lync 2010 servers is also an option if it is not possible to dedicate CPUs. Once allocated, ensure that you do not oversubscribe the virtual CPUs. Oversubscription of CPU cores on a host running virtualized Lync Server media workloads is not supported, and it is not recommended for other non-media virtualized Lync Server workloads either.
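The no-oversubscription rule above amounts to a simple 1:1 check of allocated vCPUs against physical cores. The sketch below illustrates it; the host size and VM layout are hypothetical examples, not recommendations.

```python
# Sketch of a vCPU oversubscription check for a host running
# Lync media workloads, where oversubscription is unsupported.
# Host size and VM layout below are hypothetical examples.

def is_oversubscribed(physical_cores: int, vcpus_allocated: int) -> bool:
    """True if total allocated vCPUs exceed physical cores (1:1 rule)."""
    return vcpus_allocated > physical_cores

host_cores = 16                 # hypothetical 16-core host
vm_vcpus = [4, 4, 4]            # three Lync VMs at 4 vCPUs each
print(is_oversubscribed(host_cores, sum(vm_vcpus)))  # False

vm_vcpus.append(8)              # adding another 8-vCPU VM
print(is_oversubscribed(host_cores, sum(vm_vcpus)))  # True
```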
Another consideration is the use of VM image templates. Lync Server 2010 does not support sysprep, so a preconfigured generic Lync Server virtual machine image template cannot be created. Instead, create an OS template for Windows Server 2008 R2, then use that template to install and configure Microsoft Lync.
A minor point, but disable the virtual DVD/CD drives, as is recommended for all production virtual machines.
Performance Indicators
As with any environment, the virtual Lync environment should be monitored to ensure the best performance. For the processors, measure the percentage of total processor time across all processors; it should stay below 70% during peak loads. Network adapters should operate at no more than 80% of capacity. For memory, monitor pages/second. In addition to these typical CPU, memory, and network items, database and SIP behavior should be monitored specifically.
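The thresholds above lend themselves to a simple automated check. The sketch below compares sampled counter values against them; the counter names and sample values are illustrative placeholders, not a real Performance Monitor API.

```python
# Hedged sketch: comparing sampled counters against the thresholds
# mentioned above (CPU < 70% at peak, NIC < 80% of capacity).
# Counter names and sample values are illustrative assumptions.

THRESHOLDS = {
    "processor_total_pct": 70.0,   # % total processor time at peak
    "nic_utilization_pct": 80.0,   # % of adapter capacity
}

def over_threshold(samples: dict) -> list:
    """Return the names of counters whose samples breach their threshold."""
    return [name for name, value in samples.items()
            if value >= THRESHOLDS.get(name, float("inf"))]

peak_samples = {"processor_total_pct": 82.5, "nic_utilization_pct": 64.0}
print(over_threshold(peak_samples))  # ['processor_total_pct']
```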
Physical and Virtual Comparison
Finally, a look at what you can expect when comparing a virtual environment to a physical one. In general, a virtualized Lync server role can handle approximately 50% of the load of a physical server. The reason is that an 8-CPU server hardware profile is recommended, while Hyper-V supports a maximum of 4 virtual CPU cores per VM. The 50% figure therefore reflects this resource limitation.
VMware, on the other hand, can support 8 vCPUs with the appropriate licensing, so it should be possible for a VMware virtual environment to match a physical environment. That said, there is no documentation from either Microsoft or VMware to confirm or refute this, so to be safe we still assume a 50% load comparison.
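The 50% rule above translates directly into capacity planning arithmetic: each virtual role carries about half the load of its physical counterpart, so roughly twice as many virtual servers are needed. A minimal worked example, with hypothetical server counts:

```python
# Worked arithmetic for the 50% rule above: estimating how many
# virtualized Lync servers replace a set of physical ones.
# The physical server counts used are hypothetical examples.

import math

PHYSICAL_TO_VIRTUAL_RATIO = 0.5  # a virtual role handles ~50% of a physical one

def virtual_servers_needed(physical_servers: int) -> int:
    """Virtual servers required if each carries 50% of a physical server's load."""
    return math.ceil(physical_servers / PHYSICAL_TO_VIRTUAL_RATIO)

print(virtual_servers_needed(2))  # 4
print(virtual_servers_needed(3))  # 6
```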