Virtualizing Microsoft Exchange 2010/2013 on vSphere 5 Best Practices


We’ll talk today about virtualizing Microsoft Exchange on the vSphere 5.x platform. Microsoft Exchange is one of the most common messaging systems in businesses all over the world. From SMBs to huge enterprises and corporations, all of them may use Microsoft Exchange as their messaging and communication system. For most of them, Exchange is a Tier 0/1 application that should be served with the highest levels of performance and availability.

vSphere 5.x is capable of providing that level of performance and availability while reducing the Microsoft Exchange footprint by 5x to 10x, consolidating many Exchange roles and nodes on the same physical hardware while still providing 100% or more of the required performance. The best practices mentioned here are collected from the different sources listed in the References section below. I’ll follow the same schema as my previous posts and relate these best practices to our five design qualifiers (AMPRS: Availability, Manageability, Performance, Recoverability and Security), in addition to Scalability.

Availability:
1-) Try to use vSphere HA in addition to Exchange DAG to provide the highest level of availability. Adopt an N+1 protection policy, where N is the number of DAG member VMs in a vSphere cluster of N+1 hosts. In case of an ESXi host failure, vSphere HA powers on the failed VM on another host in the background, restoring the failed DAG member and bringing its now-passive database copy up to date, ready to take over on a subsequent failure or to be manually reactivated as the primary active database.
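A minimal PowerCLI sketch of this setup, assuming a cluster named EX-Cluster and a vCenter at vcenter.example.com (both placeholders): enabling HA with admission control that reserves the capacity of one host implements the N+1 policy.

Connect-VIServer -Server vcenter.example.com    # placeholder vCenter FQDN
Get-Cluster -Name 'EX-Cluster' |
    Set-Cluster -HAEnabled:$true `
                -HAAdmissionControlEnabled:$true `
                -HAFailoverLevel 1 `            # tolerate one host failure (N+1)
                -Confirm:$false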

2-) Try to separate your DAG MBX VMs across different racks, blade chassis and storage arrays, if available, using VM affinity/anti-affinity rules and storage affinity/anti-affinity rules for the highest availability.

3-) For MBX VMs, use VM anti-affinity rules to separate them across different hosts. When HA restarts a VM, it doesn’t respect the anti-affinity rule, but on the following DRS invocation the VM will be migrated to comply with the rule. In vSphere 5.5, configure the vSphere cluster with the advanced option “das.respectVmVmAntiAffinityRules” set to “true” so HA respects all VM anti-affinity rules, as shown in the sketch below.
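A hedged PowerCLI sketch of both settings, assuming three DAG members named MBX01–MBX03 (all names are placeholders):

$cluster = Get-Cluster -Name 'EX-Cluster'

# VM-VM anti-affinity rule: keep the DAG members on separate hosts.
New-DrsRule -Cluster $cluster -Name 'Separate-DAG-Members' `
            -KeepTogether:$false `
            -VM (Get-VM -Name 'MBX01','MBX02','MBX03')

# vSphere 5.5: tell HA to respect VM-VM anti-affinity rules on restart.
New-AdvancedSetting -Entity $cluster -Type ClusterHA `
                    -Name 'das.respectVmVmAntiAffinityRules' `
                    -Value 'true' -Confirm:$false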

4-) As Microsoft supports vMotion of Exchange VMs, use DRS Clusters in Fully Automated Mode.

5-) vMotion:
a- Leverage the Multi-NIC vMotion feature to make the vMotion process much faster, even for large MBX VMs.
b- Try to enable Jumbo Frames on the vMotion network to reduce overhead and increase throughput.
c- DAG members are very sensitive to any latency or drop on their heartbeat network, so a vMotion operation may cause a false failover during the switch between the source and destination hosts, although this drop is really negligible (typically a single dropped ping packet). That’s why we should set the DAG cluster heartbeat setting “SameSubnetDelay” to 2 seconds (2000 milliseconds), as shown in the sketch after this list. For a step-by-step guide on changing this value, refer to: Exchange 2013 on VMware – Best Practices. In some cases, when leveraging Multi-NIC vMotion and enabling Jumbo Frames on the vMotion network, changing the heartbeat setting isn’t necessary.
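A sketch of items a–c, using standard PowerCLI on the host side and the Windows FailoverClusters module inside a DAG member (all names, IPs and labels are placeholders; the vSwitch itself must also carry MTU 9000, e.g. via Set-VirtualSwitch -Mtu 9000):

# vSphere side: add a second jumbo-frame, vMotion-enabled vmkernel port.
New-VMHostNetworkAdapter -VMHost (Get-VMHost 'esx01.example.com') `
                         -VirtualSwitch 'vSwitch1' -PortGroup 'vMotion-02' `
                         -IP '10.0.2.11' -SubnetMask '255.255.255.0' `
                         -VMotionEnabled:$true -Mtu 9000

# Inside a DAG member: raise the cluster heartbeat tolerance to 2 seconds.
Import-Module FailoverClusters
(Get-Cluster -Name 'DAG01').SameSubnetDelay = 2000   # milliseconds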

6-) Try to leverage VM Monitoring to mitigate the risk of Guest OS failure. VMware Tools inside the Exchange VMs sends heartbeats to the HA agent on the host. If the heartbeats stop because of a Guest OS failure, the host monitors the I/O and network activity of the VM for a certain period. If there’s no activity either, the host restarts the VM. This adds an additional layer of availability for Exchange VMs. For more information, check this: vSphere HA VM Monitoring – Back to Basics | VMware vSphere Blog – VMware Blogs.

7-) Try to leverage the Symantec ApplicationHA agent for Exchange together with vSphere HA for maximum availability. Using ApplicationHA, the monitoring agent monitors the Exchange services, sending heartbeats to the HA agent on the ESXi host. In case of an application failure, it may restart the services and any dependent resources. If the ApplicationHA agent can’t recover the application from that failure, it stops sending heartbeats and the host initiates a VM restart as an HA action. This adds another layer of availability to your Exchange nodes. For more information, check this PDF from VMware.

Performance:
1-) Try to leverage the “Building Blocks” approach. Divide your required number of user mailboxes into equally-sized MBX VMs of 500, 1,000, 2,000 or 4,000 mailboxes per VM. Calculate the required resources per VM and then decide the number of VMs required to serve your requirement; for example, 8,000 mailboxes can be served by four 2,000-mailbox building blocks, plus extra capacity for passive copies in a DAG. Keep in mind that sizing MBX VMs for a standalone configuration is somewhat different from sizing for a DAG configuration.

2-) It’s recommended to distribute all user mailboxes evenly across your DAG members to load-balance the user load between all MBX VMs. Keep in mind that additional compute capacity is needed on MBX VMs to host passive DBs during failover. In addition, distribute the DAG member VMs evenly across all hosts using DRS VM anti-affinity rules for better load balancing and higher availability.

3-) Use Exchange Server Calculators (2010 & 2013) to calculate the required resources for your Exchange VMs according to your chosen building block sizes.

4-) CPU Sizing:
a- For complete CPU sizing guidance, check the following links: Exchange 2010 CAS/HUB CPU Sizing, Exchange 2010 MBX CPU Sizing & Exchange 2013 CPU Sizing.
b- Exchange is an SMP application that can use all of a VM’s vCPUs. Assign vCPUs as required and don’t over-allocate them, to prevent CPU scheduling issues at the hypervisor level and high %RDY time.
c- Don’t over-commit CPUs. The virtual:physical core ratio should be 2:1 at most (better to keep it near 1:1) to stay under the Microsoft support umbrella. In some cases, such as small environments, over-commitment is acceptable after establishing a performance baseline.
d- Enable Hyper-Threading when available. It won’t double the processing power (in contrast to what the ESXi host shows as a doubled number of logical cores), but it can give a CPU processing boost of up to 20-25% in some cases. Don’t count it when calculating the virtual:physical core ratio.
e- Exchange VMs should run at less than 70-75% CPU utilization in a standalone configuration. If a DAG is implemented, CPU utilization should stay below 80% even after failover of a failed MBX DB. The MBX role shouldn’t consume more than 40% CPU utilization in a multi-role deployment.
f- Exchange 2010/2013 aren’t NUMA-aware, so they won’t benefit from vNUMA and there’s no need to configure large Exchange VMs that span two or more physical NUMA nodes. Try to size your Exchange VMs to fit inside a single NUMA node to gain the performance boost of NUMA locality, as estimated in the sketch below. The ESXi hypervisor, on the other hand, is NUMA-aware and leverages the NUMA topology for a significant performance boost.
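A rough PowerCLI sketch to estimate a host’s NUMA node size, assuming one NUMA node per physical socket (not true of every CPU generation, so treat the output as a starting point; the host name is a placeholder):

$vmhost  = Get-VMHost -Name 'esx01.example.com'
$sockets = $vmhost.ExtensionData.Hardware.CpuInfo.NumCpuPackages
$coresPerNode = $vmhost.ExtensionData.Hardware.CpuInfo.NumCpuCores / $sockets
$gbPerNode    = [math]::Round($vmhost.MemoryTotalGB / $sockets, 1)
"Keep MBX VMs within $coresPerNode vCPUs and $gbPerNode GB to fit one NUMA node."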

5-) Memory Sizing:
a- For complete memory sizing guidance, check the following links: Exchange 2010 CAS/HUB Memory Sizing, Exchange 2010 MBX Memory Sizing & Exchange 2013 Memory Sizing.
b- Don’t over-commit memory, as Exchange is a memory-intensive application. If needed, reserve the configured memory to guarantee the required performance level, especially for MBX VMs (see the sketch after this list). Keep in mind that memory reservations affect HA aspects such as slot size, as well as vMotion opportunities and duration. On the other hand, reserving a VM’s memory removes its swapfile from the datastore, so that space becomes usable for additional VMs.
c- Don’t disable the balloon driver installed with VMware Tools. It’s the ESXi host’s first line of defense against memory contention, ahead of the compression and swap-to-disk mechanisms.
d- Exchange VMs’ memory performance should always be monitored, and the configured and reserved (if any) memory values adjusted to meet their requirements.
e- Memory added on the fly to an Exchange VM gives no performance gain until the VM is rebooted, which is why enabling memory hot add isn’t necessary.
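A minimal PowerCLI sketch of item b, reserving the full configured memory of an MBX VM (the VM name is a placeholder):

$vm = Get-VM -Name 'MBX01'
$vm | Get-VMResourceConfiguration |
      Set-VMResourceConfiguration -MemReservationMB $vm.MemoryMB
# Side effect: the VM swapfile shrinks to zero, freeing datastore space.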

6-) Storage Sizing:
a- Always account for storage space overhead when calculating the required VM space. Overhead includes swapfiles, VM logs and snapshots. It’s recommended to add 20-30% of space as overhead.
b- Separate Exchange VMs’ disks onto different (dedicated, if needed) datastores to avoid I/O contention, as Exchange is an I/O-intensive application.
c- Provide at least four paths, through two HBAs, between each ESXi host and the storage array for maximum availability.
d- For IP-based storage, enable Jumbo Frames end-to-end on its network. Jumbo Frames reduce network overhead and increase throughput.
e- RDMs can be used in many cases, such as an Exchange P2V migration or to leverage a 3rd-party array-based backup tool. Choosing between RDM disks and VMFS-based disks is driven by your technical requirements; there’s no performance difference between the two.
f- Microsoft supports virtualized Exchange only on FC, FCoE or iSCSI SANs, or using in-guest iSCSI targets. NAS arrays aren’t supported, whether presented as an NFS datastore or accessed through a UNC path from inside the Guest OS.
g- For heavy workloads, dedicate a LUN/datastore per MBX VM for maximum performance, although this adds management and maintenance overhead.
h- Use the Paravirtual SCSI (PVSCSI) adapter in all of your Exchange VMs, especially for the disks used for DBs and logs, for maximum performance with the least latency and CPU overhead.
i- Distribute each Exchange VM’s disks across the four allowed SCSI adapters for maximum parallelism and higher IOPS, and use eager-zeroed thick disks for DB and log disks (see the sketch after this list).
j- An ESXi 5.x host may need its VMFS heap size increased to host Exchange VMs with large disks of several TBs, according to this link: Increase ESXi VMFS Heap Size. This issue is mitigated in vSphere 5.1 U1 and later.
k- Partition alignment gives a performance boost on your back-end storage, as the spindles won’t make two reads or writes to process a single request. VMFS5 datastores created with the vSphere (Web) Client are aligned automatically, as are disks formatted by newer versions of Windows. Upgraded VMFS datastores or upgraded Windows guests require a partition alignment process. For upgraded VMFS, this is done by migrating the VM disks to another datastore using Storage vMotion, then recreating the datastore as a fresh VMFS5 volume.
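A sketch of items h and i, assuming placeholder VM and datastore names: create an eager-zeroed thick DB disk and attach it to a new PVSCSI adapter (the guest needs VMware Tools installed for the PVSCSI driver).

$vm   = Get-VM -Name 'MBX01'
$disk = New-HardDisk -VM $vm -CapacityGB 500 `
                     -StorageFormat EagerZeroedThick -Datastore 'MBX01-DB-DS'
New-ScsiController -HardDisk $disk -Type ParaVirtual   # new PVSCSI adapter

Repeat per DB/log disk, spreading the disks across up to four SCSI adapters.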

7-) Network:
a- Use the VMXNET3 vNIC in all Exchange VMs for maximum performance and throughput with the least CPU overhead.
b- Exchange VMs’ port groups should be backed by at least two physical NICs for redundancy and NIC teaming. Connect each physical NIC to a different physical switch for maximum redundancy.
c- Consider network separation between the different types of traffic, such as vMotion, management, Exchange production and Fault Tolerance. Separation can be physical, or virtual using VLANs.
d- DAG member VMs should have two vNICs: one for the public network and the other for the heartbeat and replication network (see the sketch below). Keep in mind that configuring DAG members with one vNIC is supported, but it’s not considered a best practice.
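A one-line PowerCLI sketch of item d, adding a second VMXNET3 vNIC for the replication network (the VM and port-group names are placeholders):

New-NetworkAdapter -VM (Get-VM -Name 'MBX01') -NetworkName 'DAG-Replication' `
                   -Type Vmxnet3 -StartConnected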

8-) Monitoring:
Always try to monitor your environment using both in-guest tools and the ESXi and vCenter performance charts. Monitor the following:
a- ESXi & vCenter Counters:

Subsystem | ESXTOP Counters          | vCenter Counters
CPU       | %RDY, %USED              | Ready (milliseconds in a 20,000 ms window), Usage
Memory    | %ACTV, SWW/s, SWR/s      | Active, Swap-in Rate, Swap-out Rate
Storage   | ACTV, DAVG/cmd, KAVG/cmd | Commands, Device Latency, Kernel Latency
Network   | MbRX/s, MbTX/s           | packets-Rx, packets-Tx

b- In-guest Counters:

Subsystem    | Win PerfMon Counter | Description
VM Processor | % Processor Time    | Processor usage across all vCPUs.
VM Memory    | Memory Ballooned    | Amount of memory in MB reclaimed by the balloon driver.
VM Memory    | Memory Swapped      | Amount of memory in MB forcibly swapped to the ESXi host swapfile.
VM Memory    | Memory Used         | Physical memory in use by the virtual machine.
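The host-side data can also be pulled programmatically; a small PowerCLI sketch using two of the counters above (real-time samples; the VM name is a placeholder):

Get-Stat -Entity (Get-VM -Name 'MBX01') -Realtime -MaxSamples 10 `
         -Stat 'cpu.ready.summation','mem.vmmemctl.average'

cpu.ready.summation is reported as milliseconds per 20,000 ms sample, so divide it by 200 to convert it to a ready percentage.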

Manageability:
1-) Microsoft’s support statement for Exchange in virtual environments can be found at this link.

2-) Try to leverage vSphere templates in your environment. Create a golden template for every tier of your VMs. This reduces the time required for deploying or scaling your Exchange environment, as well as preserving configuration consistency throughout your environment, as sketched below.
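For example, a hedged PowerCLI sketch of deploying a new building block from a golden template (the template, customization spec, host and datastore names are all placeholders):

New-VM -Name 'MBX04' -Template (Get-Template -Name 'W2K12-EX2013-Gold') `
       -OSCustomizationSpec (Get-OSCustomizationSpec -Name 'Exchange-Spec') `
       -VMHost 'esx02.example.com' -Datastore 'MBX04-OS-DS'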

3-) Use vCenter Operations Manager to monitor your environment’s performance trends, establish a dynamic baseline of your VMs’ performance to prevent false static alerts, estimate the capacity required for further scaling, and proactively protect your environment against sudden peaks in VM load that need immediate scaling-up of resources.

4-) Time synchronization is one of the most important things in Exchange environments. It’s recommended to do the following:
a- Let all your Exchange VMs sync their time according to the following best practices: VMware KB: Timekeeping best practices for Windows, including NTP.
b- Disable time sync between Exchange VMs and hosts through VMware Tools completely (even after unchecking the box on the VM settings page, the VM can still sync with the host through VMware Tools on startup, resume, snapshotting, etc.), according to the following KB: VMware KB: Disabling Time Synchronization. A sketch follows this list.
c- Sync all ESXi hosts in the virtual infrastructure to the same Stratum 1 NTP server, which should be the same time source as your forest/domain.
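A PowerCLI sketch of items b and c, with the in-guest part following VMware KB 1189 (the VM name and NTP server are placeholders):

$vm = Get-VM -Name 'MBX01'

# Stop VMware Tools from syncing time on startup/resume/snapshot operations.
'time.synchronize.continue','time.synchronize.restore',
'time.synchronize.resume.disk','time.synchronize.shrink',
'time.synchronize.tools.startup' | ForEach-Object {
    New-AdvancedSetting -Entity $vm -Name $_ -Value '0' -Confirm:$false
}

# Point every host at the same Stratum 1 NTP source as the AD forest.
Get-VMHost | Add-VMHostNtpServer -NtpServer 'ntp1.example.com'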

Recoverability:
1-) Try to leverage backup software that uses the Microsoft Volume Shadow Copy Service (VSS). Such products are Exchange-aware and don’t cause any corruption in the mailbox DB, because they quiesce the DB during the backup operation. One of them, of course, is vSphere Data Protection Advanced. Check the following link.

2-) If available, you can use any backup software that depends on array-based snapshots, provided it’s Exchange-aware.

3-) Use VMware Site Recovery Manager (SRM), if available, for DR. With SRM, automated failover to a replicated copy of the VMs in your DR site can be carried out in case of a disaster, or even on the failure of a single MBX VM, for example, in your environment.

4-) If SRM isn’t available for any reason, try to leverage 3rd-party replication software to replicate your Exchange VMs to a DR site for recovery in case of a disaster.

5-) Other approaches for DR:
a- You can use Stretched DAG configuration with Automatic Failover.
b- You can use Stretched DAG Configuration with Lagged Copy.
For more information, check Exchange on VMware – High Availability Guide.

Security:
1-) All security procedures applied to physical Exchange environments should also be applied in the virtual environment, such as a role-based access policy.

2-) Follow the VMware Hardening Guide (v5.1/v5.5) for more security procedures to secure both your VMs and your vCenter Server.

Scalability:
1-) The scale-up approach to DAG members requires large ESXi hosts with many sockets and lots of RAM. It reduces the number of VMs required to serve a given number of mailbox users, so a single failed VM affects a large portion of users; that’s why the scale-up approach needs careful attention to the availability of the DAG VMs. The scale-out approach requires smaller ESXi hosts and gives more flexibility in designing a DAG VM, but needs a higher number of ESXi hosts to provide the required level of availability. A single VM failure has less impact with the scale-out approach, and the smaller VMs take less time to migrate with vMotion, so DRS is more effective. There’s no single best approach here; it all depends on your environment and your requirements.

Another long guide, I’m sorry. I hope I’ve summarized all the information required for smooth and easy operation of any virtualized Exchange environment. Do your homework of sizing your Exchange VMs carefully before attempting virtualization to maintain the required performance. Enjoy!

References:
** Virtualizing MS Business Critical Applications by Matt Liebowitz and Alex Fontana.
** Exchange 2010 on VMware – Best Practices Guide.
** Exchange 2010 on VMware – Availability & Recovery Guide.
** Exchange 2010 on VMware – Designing & Sizing Guide.
** Using vSphere HA, DRS & vMotion with Exchange 2010 DAGs.
** Exchange 2013 on VMware – Best Practices Guide.
** Exchange 2013 on VMware – Availability & Recovery Guide.
** Exchange 2013 on VMware – Designing & Sizing Guide.
** Microsoft Best Practices for Virtualized Exchange 2013.
** vSphere Design Sybex 2nd Edition by Scott Lowe, Kendrick Coleman and Forbes Guthrie.

