Microsoft Clustering and VMware High Availability
We now have our CiB configured and ready. Of course, there is still the Windows Failover Cluster configuration part, but I will leave that up to you; just let me know how it goes in the comments area of the article. This second type of cluster is the most popular for business-critical applications because it allows us to put the Windows nodes on different ESXi hosts and take advantage of vSphere HA and vMotion.
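If you want a head start on the Windows side, here is a minimal sketch of the very first step, assuming you run it in an elevated PowerShell session on every Windows node:

    # Install the Failover Clustering feature plus its management tools
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

    # Verify the feature is now installed
    Get-WindowsFeature -Name Failover-Clustering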
The configuration is pretty much the same as for CiB; the only difference is the disks which, as mentioned at the beginning of the article, are handled differently starting with vSphere 7. Go ahead and prepare your LUNs on your storage array, but do not add them as datastores in vCenter; just leave them there, unformatted and untouched.
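If you want to double-check which raw LUNs the hosts can actually see before mapping them, a quick PowerCLI sketch (the vCenter and host names are placeholders, and it assumes VMware PowerCLI is installed):

    # Connect to vCenter, then list the raw SCSI LUNs visible to a host;
    # note the naa.* canonical names for the RDM step below
    Connect-VIServer -Server vcenter.example.local
    Get-ScsiLun -VmHost "esxi01.example.local" -LunType disk |
        Select-Object -Property CanonicalName, CapacityGB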
Now right-click the first Windows VM and choose Edit Settings, then add a new RDM disk pointing at one of the LUNs you prepared earlier. As in the CiB section, we have some settings to configure on the disk before we can actually use it. On the Location drop-down box we can change the location of the virtual disk file; you can read more about it here. As before, set the Sharing option to No sharing and make sure the Compatibility Mode is set to Physical. Click OK when you are done.
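The same disk can be attached from PowerCLI instead of the UI. A minimal sketch, with hypothetical VM and device names (the naa.* identifier comes from the LUN listing above):

    # Attach the LUN to the first node as a physical compatibility mode RDM
    $vm1  = Get-VM -Name "WSFC-Node1"
    $disk = New-HardDisk -VM $vm1 -DiskType RawPhysical `
        -DeviceName "/vmfs/devices/disks/naa.600508b1001c123456789abcdef01234"

    # Put the shared disk on its own SCSI controller with physical bus sharing
    New-ScsiController -HardDisk $disk -Type VirtualLsiLogicSAS -BusSharingMode Physical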
On the second VM (or the rest of them, if you are building a Windows Failover Cluster with more than two nodes), add an existing hard disk and browse to the RDM pointer file created on the first VM. Click OK when you are done, then add the rest of the disks you might have. Of course, repeat this operation for the rest of the VMs.
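The PowerCLI equivalent points at the pointer VMDK created on the first node; the datastore path and names below are placeholders:

    # Attach the first node's RDM pointer file to the second node as an existing disk
    $vm2  = Get-VM -Name "WSFC-Node2"
    $disk = New-HardDisk -VM $vm2 -DiskPath "[Datastore01] WSFC-Node1/WSFC-Node1_1.vmdk"

    # The shared disk needs physical bus sharing on this node as well
    New-ScsiController -HardDisk $disk -Type VirtualLsiLogicSAS -BusSharingMode Physical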
In the CiB section we created an affinity rule to keep the VMs together; this time we need the opposite, an anti-affinity rule that keeps the nodes on separate hosts. From the Actions pane, hit the Add button. Type ForceAffinePoweron in the Options column and give it a value of 1. Once you click OK, you should have an anti-affinity rule created in the vCenter console similar to the one below. Now when we go and build the Windows Failover Cluster, the storage checks should pass with no problems during the validation wizard, which is exactly what we wanted.
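The rule and the advanced option can also be created with PowerCLI; the cluster, rule, and VM names below are hypothetical:

    # Anti-affinity rule: keep the cluster nodes on different ESXi hosts
    $cluster = Get-Cluster -Name "Production"
    New-DrsRule -Cluster $cluster -Name "WSFC-AntiAffinity" -KeepTogether $false `
        -VM (Get-VM -Name "WSFC-Node1", "WSFC-Node2")

    # DRS advanced option so the rule is also enforced at power-on
    New-AdvancedSetting -Entity $cluster -Type ClusterDRS `
        -Name "ForceAffinePoweron" -Value 1 -Confirm:$false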
If you get any errors or warnings, please post them in the comments area so the community can know about them. If we want to migrate the VMs to a different ESXi host, no problem; vMotion does the job just fine and the validation still succeeds. But remember, we have an anti-affinity rule, so a VM will not be started on an ESXi server where other Windows machines that participate in the Windows Failover Cluster are running.
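For reference, the validation and cluster creation can be scripted on the Windows side too; node names and the cluster IP address below are placeholders:

    # Run the validation tests, then create the cluster once they pass
    Test-Cluster -Node "WSFC-Node1", "WSFC-Node2"
    New-Cluster -Name "WSFC01" -Node "WSFC-Node1", "WSFC-Node2" `
        -StaticAddress 192.168.1.50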
This last type of cluster configuration is pretty much the same as Cluster Across Boxes (CAB), except that now we have at least one physical Windows host involved. I will not go too deep into the subject here since most of the settings and configurations were covered in the previous (CAB) section, so if you need extra information, please read that section.
The most important thing in making this work is to map the same LUNs to both the physical and the virtual machines. Consider what happens when the host running Guest1 fails: the clustered roles fail over to Guest2, host clustering restarts Guest1 on another host, and if Guest2 then has problems, Guest1 is available again for the roles to fail over to it.
In this scenario, if Guest1 was instead running on a stand-alone host, the virtual machine would be unavailable until Host1 was brought back online and the virtual machine started. If there was an issue such as a hardware failure, the virtual machine could be unavailable for a lengthy period.
We recommend that you place virtual machines that are part of the same guest cluster on different physical hosts. This enables the failover of the workload to a running guest cluster node if there is a host failure. The surviving guest cluster node detects the loss of the other node and starts the clustered roles. This shortens the recovery time and increases the availability of the workload. If all nodes of the guest cluster are on the same host and a host failure occurs, after the host recovers you must restart at least one of the virtual machines to make the workload available.
To avoid this situation, you can configure the cluster to place virtual machines on different physical nodes of the cluster. To do this, use Windows PowerShell to set the AntiAffinityClassNames property of each virtual machine role that is part of the same guest cluster to the same text string. AntiAffinityClassNames is a property of a cluster group. For more information, see AntiAffinityClassNames and the example in this section.
When the cluster must fail over clustered roles or move a virtual machine to another node, it checks all clustered roles on the destination node to determine whether any of the roles have the same string value set for the AntiAffinityClassNames property. If a role on the destination node has the same string value, the cluster targets another node and does the same check until it finds a node that can host the virtual machine. If there is no available node except for one with a role that has the same string value, the cluster places the virtual machine on a node with a matching string value.
The following Windows PowerShell example shows how to view the AntiAffinityClassNames property value for a clustered virtual machine named "VM1", and how to set the property value to the string "GuestCluster1".
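Assuming the FailoverClusters module that ships with the feature, the commands look like this:

    # View the current AntiAffinityClassNames value for the clustered role "VM1"
    Get-ClusterGroup -Name "VM1" | Format-List -Property Name, AntiAffinityClassNames

    # Set the property to the same string on every node of the same guest cluster
    (Get-ClusterGroup -Name "VM1").AntiAffinityClassNames = "GuestCluster1"

Repeat the second command for each virtual machine role in the guest cluster, using the same string value.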
The deployment and management of a guest cluster is very similar to that of a physical host cluster; you would follow the same steps to create it. Before you install an application on the guest cluster, check whether there is a deployment guide that outlines specific requirements and recommendations for running the application in a virtual machine. If you use System Center 2012 R2 Virtual Machine Manager, note that it includes a new feature to deploy guest clusters: you can use service templates to create the virtual machines and to build a Windows Server 2012 R2 guest cluster that uses a shared virtual hard disk. The requirements for Microsoft Customer Service and Support to officially support a guest cluster are the same as for clusters that run directly on physical hardware.
This includes the requirement to pass all cluster validation tests. To receive support for the Windows Server operating system or for server features that are running on a virtual machine, you must use a supported hypervisor. Note that Microsoft's Life Cycle Support policy still applies to the guest operating system.
The same node limit applies to physical host clusters and to guest clusters that run Windows Server 2012 R2 or Windows Server 2012. The number of nodes that you deploy depends on the clustered role. For example, if you have a database role that is deployed on more than two physical servers to spread out the instances, you can emulate this environment by using more than two nodes of a guest cluster to host the instances.
Realize that although deploying virtual machines offers significant cost savings over dedicated physical servers, each virtual machine consumes CPU, memory, and other system resources that could otherwise be provisioned to other virtual machines. Therefore, do not add more nodes to a guest cluster than needed. Failover clusters require that nodes of the same cluster are members of the same Active Directory domain.
This is the same idea that has been playing on my thoughts: with a single VM, I would recommend testing out vMotion of that VM to another host. But it seems that having a cluster is still the better idea, just in case the host goes down, the application goes awry, or you want to do some patching or configuration changes on that one VM.
On the other hand, deploying SQL mirroring along with clustering seems to cover both data availability and application availability.
I am still at the "Virtualize or Not" crossroads. I would probably need to spin up another thread for this question. From what I have read, I agree with your reply to this.
We've now gone away from the "Microsoft Clustering" scenario with less than a month to go-live! You have a total of three guests running in the cluster, so you require either two Exchange Server licenses (one for each host), or one Exchange Server license with Software Assurance, which allows license migration.
Server licenses are bound to the physical hardware even if you're running them in virtual machines, and because of that the 90-day rule applies: you're only allowed to move a license 90 days after you assigned it. I would talk to your reseller; depending on the purchase date, you may be able to buy Software Assurance for your licenses. When your primary server goes down, or after 90 days, you're allowed to move your existing licenses to a new machine.
After that you'll have to wait 90 days until you can move your licenses again. I'm not so sure this is allowed by Microsoft, so perhaps a Licensing Specialist could confirm it. We are only just moving from Exchange, and the only reason we are looking at Software Assurance is the mobility rights, not that we are thinking about upgrading; this will most probably be in place until Exchange goes out of support.