Cisco Nexus 1000V and Cisco Nexus 1110 Virtual Services Appliance (VSA) across data centers


With improvements in storage, virtualization, and L2 extension technologies, customers can now choose to have multiple physical data centers in a contained geographical area work as a single virtual data center.

As technology vendors support more such deployments and improve their product offerings, the distances between these data centers (and correspondingly the latency tolerated by the solutions deployed) will also increase. This document explores deploying the Cisco Nexus 1000V and the Cisco Nexus 1110 Virtual Services Appliance (VSA) in the following two environments where the Nexus 1000V may be deployed across data centers:

• Disaster Avoidance – a scenario that attempts to reduce the impact of an oncoming disruption at a site, such as a planned change window. Major natural disasters are not included, as those would probably affect both sites situated in a small geographic area.

• Reduction in management points – a customer with multiple locations in a small metro area may want to deploy one or two ESX servers in each location and manage them all from one single location (where the VSMs reside).

Intended Audience

This document is intended for network architects, network engineers, virtualization administrators, and server administrators interested in understanding and deploying Cisco Nexus 1000V and Nexus 1110 Virtual Services Appliance (VSA) in an environment spanning multiple physical data centers.

What You Will Learn

This document covers the usage scenarios and deployment models for using the Nexus 1000V in an environment spanning multiple data centers. The considerations for high availability of the Nexus 1000V VSMs and the latency requirements for VSM-to-VSM communication and VSM-to-VEM communication are also detailed.

The deployment of the Nexus 1000V across data centers assumes the existence of an L2 extension between the two data centers. There are many options available for L2 extension, such as OTV, EoMPLS, and VPLS. While each of these methods has mechanisms to prevent loops without the use of spanning tree, complex topologies in which multiple independent loop-free domains are linked together can still cause loops.


For the purposes of this document, we will assume that an L2 extension method exists and supports all unicast, broadcast, and multicast traffic, without getting into the details of the implementation.

Nexus 1000V VSM across data centers

Figure 1. Nexus 1000V across DC

Starting with Release 4.2(1)SV2(1.1), the high availability functionality is enhanced to support splitting the active and standby Cisco Nexus 1000V Virtual Supervisor Modules (VSMs) across two data centers, enabling cross-DC clusters and VM mobility while maintaining high availability. The VSM can either be hosted as a Virtual Service Blade (VSB) on the Nexus VSA or deployed as a virtual machine.
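No special commands are needed to verify the split pair; the same NX-OS checks used in a single data center apply. For illustration, the redundancy role of each supervisor and the list of attached VEMs can be confirmed from the VSM with -

show system redundancy status

show module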

A multi-site data center is commonly used for disaster avoidance and recovery, and although the data centers are in separate physical locations, they may not be geographically far apart. This sort of deployment helps with maintenance operations, where one site can be brought down for maintenance by shifting network services to the other data center. It also helps with load balancing, or with shifting network services to a branch where they are required. This deployment model may not be very helpful for major natural disasters; however, for disasters such as a power outage or the failure of some other critical system in a single DC, or a localized event such as a fire, flood, or vandalism, the backup data center can take over the operations of the primary until it is brought back up.

A comprehensive design for disaster recovery will also include considerations for vCenter high availability and storage solutions for replicating storage between the two data centers. The details of this design, however, are beyond the scope of this document.


Figure 1 shows a common deployment of the Nexus 1000V in a scenario where the VSM pair is split between two physical data centers. The data centers are connected with Layer 2 extension provided by OTV on the Nexus 7000. The OTV configuration for the Nexus 7000 can be found in the configuration guide:

http://www.cisco.com/en/US/docs/switches/datacenter/sw/nx-os/OTV/config_guide/b_Cisco_Nexus_7000_Series_NX-OS_OTV_Configuration_Guide.html
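As an illustration only, a minimal OTV overlay on each Nexus 7000 edge device looks roughly like the sketch below; the site VLAN, site identifier, join interface, group addresses, and extended VLAN range are placeholder values, and the configuration guide above remains the authoritative reference -

feature otv
otv site-vlan 99
otv site-identifier 0x1
!
interface Overlay1
  ! placeholder join interface and multicast groups
  otv join-interface Ethernet1/2
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  ! extend the VLANs carrying VSM control/packet and VM traffic
  otv extend-vlan 100-110
  no shutdown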

The Nexus 1000V does not require any special configuration to support VSMs split across data centers. However, for VSM high availability, the round-trip latency between the VSMs must be less than 10 milliseconds.
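One simple way to spot-check this is to ping the standby VSM's management address from the active VSM and review the reported round-trip times; the address below is a placeholder -

ping 192.0.2.12 count 10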

Cisco Nexus 1000V VSM hosted on the Cisco Nexus 1100 series across data centers

The Cisco Nexus 1100 Series Virtual Services Appliances are hardware platforms intended to host multiple virtual service blades that provide different network services, including the Cisco Nexus 1000V VSM. The Nexus 1110 is the first platform in the Nexus 1100 series.

Hosting the Nexus 1000V VSM on the Nexus 1110 VSA provides additional benefits when the Nexus 1000V VSM spans multiple data centers. Because the Nexus 1110 VSA is managed and operated by the network administrator, it provides the following benefits over deploying the VSM as a VM –

- When the VSM is deployed as a VM, the network administrator needs to engage the server administrator to ensure that the VSM VM has the correct settings, and subsequently has no independent way of tracking where the VSM VM resides. With the Nexus 1110 VSA, the network administrator determines exactly where the VSM is placed and can independently identify where the active VSM resides at any point (see the command sketch after this list).

- In the VMware environment it is possible to set up complex DRS and anti-affinity rules, which the network administrator would need to understand in order to ensure correct deployment of the VSM. The Nexus VSA, on the other hand, provides most operations through the familiar NX-OS CLI.

- Another consideration is the network administrator’s lack of control over disaster avoidance/recovery operations. DRS rules apply to all hosts, irrespective of whether they host ordinary VMs or specific network services. The Nexus 1110 VSA allows the network administrator to carry out disaster avoidance/recovery operations for the specific network services hosted on it, in parallel with the server team’s operations.
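For example, the HA role and state of each virtual service blade, including the VSM, can be listed directly on the Nexus 1110 VSA with the following command (shown for illustration) -

show virtual-service-blade summary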

The configuration and setup for the Nexus VSA across data centers is no different from a deployment in a single data center. To ensure high availability, the Nexus 1110 VSA pair must be Layer 2 adjacent and, like the Nexus 1000V VSM pair, must have a round-trip latency of less than 10 milliseconds.

Nexus 1000V enhancements for split-brain recovery

A split-brain scenario occurs when communication is lost between the active and standby VSMs; each VSM then assumes it is active, attempts to establish a connection with vCenter, and tries to control the VEMs. When communication between the VSMs is restored, the split brain must be resolved by rebooting one of the VSMs. In setups where the VSM pair is split between data centers, high latency on the DCI link can cause this issue to occur more frequently than in a setup where both VSMs reside in the same data center.

Prior to Release 4.2(1)SV2(1.1), the method of resolving a split brain was to reboot the primary VSM after communication between the VSMs was restored. This works in scenarios where the secondary VSM has a proper configuration and all the VEMs are properly connected. If the secondary VSM does not have a proper configuration, it may overwrite the valid configuration of the primary VSM. As a result, both VSMs may have an invalid configuration after the split-brain resolution.

Starting with Release 4.2(1)SV2(1.1), the high availability functionality on the Cisco Nexus 1000V is enhanced to address this issue. Both the primary and secondary VSMs process the same data to select which VSM (primary or secondary) needs to be rebooted. When the selected VSM is rebooted and reattaches, the high availability functionality returns to normal. The following parameters are used, in order of precedence, to select the VSM to be rebooted during split-brain resolution:

- Last configuration time: the time when the last configuration was done on the VSM.

- Module count: the number of modules (VEMs) attached to the VSM.

- vCenter (VC) status: the status of the connection between the VSM and vCenter.

- Last active time: the time when the VSM became active.

In addition, there are enhanced CLI commands to display the redundancy traces that were collected during the split-brain resolution. These commands can provide useful information for understanding the state transitions that resulted in the split-brain state.

State transitions on active VSM -

show system internal active-active redundancy traces

State transitions on standby VSM -

show system internal active-active remote redundancy traces

Clear state transition logs -


clear active-active remote redundancy traces

Nexus 1000V – Extending VEMs to branch offices

Figure 2. Nexus 1000V – extending VEMs to branch offices

Another common usage scenario requiring Nexus 1000V support across data centers is connecting small branch offices with one or two servers to a common data center infrastructure hosted at a company’s main site. The vCenter instance and the Nexus 1000V VSMs reside in this main data center, and the VEMs reside in branch offices in different geographical locations.

In this scenario, too, there are no special configuration requirements for the Nexus 1000V. The Nexus 1000V must be deployed in L3 mode for VSM-to-VEM communication, and the VSM-to-VEM round-trip latency must be less than 100 milliseconds. If the active and standby VSMs are also split between data centers, the round-trip latency between the VSM pair should be less than 10 milliseconds.
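For reference, a minimal sketch of the L3 mode configuration on the VSM is shown below; the domain ID, VLAN, and the choice of the mgmt0 interface are example values rather than requirements, and the hosts’ VMkernel interfaces are assumed to attach to a port profile with L3 control capability -

svs-domain
  domain id 100
  ! use the VSM mgmt0 interface for L3 control traffic to the VEMs
  svs mode L3 interface mgmt0

port-profile type vethernet L3-control
  ! placeholder profile for the VEM hosts' VMkernel interfaces
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 10
  no shutdown
  system vlan 10
  state enabled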

Conclusion

The 4.2(1)SV2(1.1) release of the Cisco Nexus 1000V provides enhancements that enable deployment of the Nexus 1000V in different scenarios involving multiple data centers. In this document we have explored two commonly deployed scenarios and the considerations for optimally deploying the Nexus 1000V and the Nexus 1100 series VSA in these environments.

For More Information

Cisco Nexus 1000V Series Switches: http://www.cisco.com/en/US/partner/products/ps9902/index.html

Cisco Nexus 1000V High Availability Configuration Guide:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_2_1_1/high_availability/configuration/guide/b_Cisco_Nexus_1000V_High_Availability_and_Redundancy_Configuration_Guide_2_1_1.html


Cisco Nexus 1100 Series Virtual Services Appliance:
