Procedure for deploying a logical router

In document Packt.vmware.nsx.Network.essentials.1782172939 (Page 96-106)

Let's walk through the step-by-step configuration of a Distributed Logical Router:

1. In the vSphere web client, navigate to Home | Networking & Security | NSX Edges.

Select the appropriate NSX Manager on which to make your changes. If you are creating a universal logical router, you must select the primary NSX Manager. We will discuss the primary/secondary NSX Manager concepts in Chapter 7, NSX Cross vCenter.

2. Select the type of router you wish to add; in this case, we will add a logical router.

3. Select Logical (Distributed) Router to add a logical router local to the selected NSX Manager.

Since we haven't discussed the cross-vCenter NSX environment, we won't leverage a universal logical distributed router in this chapter.

4. Type a name for the device. This name appears in your vCenter inventory. The name should be unique across all logical routers within a single tenant.

Optionally, you can also enter a hostname. This name appears in the CLI. If you do not specify a hostname, the edge ID, which is generated automatically, is displayed in the CLI.

5. The Deploy Edge Appliance option is selected by default. An edge appliance (also called a logical router virtual appliance) is required for dynamic routing and for the logical router appliance's firewall, which applies to logical router pings, SSH access, and dynamic routing traffic. You can deselect the Deploy Edge Appliance option if you require only static routes and do not want to deploy an edge appliance. You cannot add an edge appliance to the logical router after the logical router has been created.

6. The Enable High Availability option is not selected by default. Select the Enable High Availability check box to enable and configure high availability. High availability is required if you are planning to do dynamic routing. Think from a cloud provider's perspective: if your tenant requests the high availability feature, how do you satisfy that requirement? NSX Edge replicates the configuration of the primary appliance to the standby appliance and ensures that the two HA NSX Edge virtual machines are not on the same ESXi host, even after DRS and vMotion run. The two virtual machines are deployed on vCenter in the same resource pool and datastore as the appliance you configured. Local link IPs are assigned to the HA virtual machines so that they can communicate with each other.

The following screenshot shows NSX DLR-VM deployment:

7. Type and retype a password for the logical router. The password must be 12-255 characters and must contain the following:

At least one uppercase letter

At least one lowercase letter

At least one number

At least one special character
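The password rules above can be sketched as a simple validator. This is an illustrative helper, not part of NSX; the function name and the sample passwords are made up for this example:

```python
import re

def is_valid_dlr_password(password: str) -> bool:
    """Check a candidate password against the NSX logical router rules:
    12-255 characters, with at least one uppercase letter, one lowercase
    letter, one number, and one special character."""
    if not (12 <= len(password) <= 255):
        return False
    checks = [
        re.search(r"[A-Z]", password),         # at least one uppercase letter
        re.search(r"[a-z]", password),         # at least one lowercase letter
        re.search(r"[0-9]", password),         # at least one number
        re.search(r"[^A-Za-z0-9]", password),  # at least one special character
    ]
    return all(checks)

print(is_valid_dlr_password("VMware1!VMware1!"))  # True
print(is_valid_dlr_password("short1!A"))          # False: under 12 characters
```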

8. Enable SSH and set the log level (optional). By default, SSH is disabled. If you do not enable SSH, you can still access the logical router by opening the virtual appliance console. Enabling SSH here causes the SSH process to run on the logical router virtual appliance, but you will also need to adjust the logical router firewall configuration manually to allow SSH access to the logical router's protocol address. The protocol address is configured when you configure dynamic routing on the logical router. By default, the log level is emergency.

On logical routers, only IPv4 addressing is supported.

9. Configure the interfaces. Under Configure interfaces, add four logical interfaces (LIFs) to the logical router:

Uplink connected to Transit-Network-01 logical switch with an IP of 192.168.10.2/29

Internal connected to Web-Tier-01 Logical Switch with IP 172.16.10.1/24

Internal connected to App-Tier-01 Logical Switch with IP 172.16.20.1/24

Internal connected to DB-Tier-01 Logical Switch with IP 172.16.30.1/24
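The addressing plan above can be sanity-checked with Python's standard ipaddress module. The interface names come from this chapter's lab topology; the script itself is just an illustrative sketch:

```python
import ipaddress

# LIF addressing plan from the chapter's lab topology
lifs = {
    "Transit-Network-01 (Uplink)": "192.168.10.2/29",
    "Web-Tier-01 (Internal)":      "172.16.10.1/24",
    "App-Tier-01 (Internal)":      "172.16.20.1/24",
    "DB-Tier-01 (Internal)":       "172.16.30.1/24",
}

for name, cidr in lifs.items():
    iface = ipaddress.ip_interface(cidr)
    # Each internal LIF IP acts as the default gateway for VMs on that tier
    print(f"{name}: gateway {iface.ip}, network {iface.network}, "
          f"{iface.network.num_addresses - 2} usable hosts")
```

Note that the /29 transit network leaves only six usable host addresses, which is enough for the DLR uplink and an upstream Edge services gateway.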

The following screenshot depicts the Add Interface screen:

10. Configure the interfaces of this NSX Edge: internal interfaces are for connections to logical switches that allow VM-to-VM (East-West) communication. Internal interfaces are created on the logical router virtual appliance, and we call them LIFs. Uplink interfaces are for North-South communication. A logical router uplink interface can be connected to an NSX Edge services gateway, a third-party router VM, or a VLAN-backed dvPortgroup to connect the logical router directly to a physical router. You must have at least one uplink interface for dynamic routing to work. Uplink interfaces are created as vNICs on the logical router virtual appliance.

We can add, remove, and modify interfaces after a logical router is deployed.

The following screenshot depicts the DLR configuration that we have performed so far:

Now that we have successfully deployed a DLR and configured it with logical interfaces, we expect the DLR to perform basic routing so that the web, app, and DB machines can communicate with each other, which was not possible earlier.

The following screenshot depicts the three-tier application architecture without routing:

Let's go ahead and perform a quick ping test between web-01a (172.16.10.11) and app (172.16.20.11). As we can see from the following screenshot, the web servers and application servers are able to communicate with each other, since the Distributed Logical Router now does the routing. The first ping result is from before the Distributed Logical Router was added:
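The forwarding decision the DLR makes for this ping can be modeled as a longest-prefix-match lookup over its directly connected routes. This is a simplified sketch of the lookup logic, not the actual VDR implementation:

```python
import ipaddress

# Directly connected routes on the DLR, one per LIF (from the lab topology)
routes = {
    ipaddress.ip_network("172.16.10.0/24"): "Web-Tier-01 LIF",
    ipaddress.ip_network("172.16.20.0/24"): "App-Tier-01 LIF",
    ipaddress.ip_network("172.16.30.0/24"): "DB-Tier-01 LIF",
    ipaddress.ip_network("192.168.10.0/29"): "Uplink LIF",
}

def lookup(dst: str) -> str:
    """Return the egress LIF for a destination via longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    if not matches:
        return "no route"
    return routes[max(matches, key=lambda n: n.prefixlen)]

# web-01a pinging the app server: the DLR routes between the two tiers
print(lookup("172.16.20.11"))  # App-Tier-01 LIF
print(lookup("10.0.0.1"))      # no route
```

Before the DLR existed, no device held routes for both tier networks, so the same lookup would have failed at the source VM's gateway.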

So far, we have discussed the Distributed Logical Router (DLR), which gives each ESXi hypervisor local routing intelligence and thereby optimizes East-West data plane traffic. Next, let's view the DLR routing table on an ESXi host.

Let's look at the following screenshot to understand the network topology.

The following screenshot depicts the three-tier application architecture with DLR connection:

The following questions might come to our mind:

How many networks do we have?

Four:

172.16.10.0/24

172.16.20.0/24

172.16.30.0/24

192.168.10.0/29

Are the networks directly connected to the router?

Yes, they are connected to the router.

Listing the VDR instances on the ESXi host with the net-vdr command displays the logical router instance as shown in the following screenshot. You can see the following parameters:

VDR Name is default+edge-19

Number of Lifs is 4. Remember we connected four logical networks to the distributed router? Hence the count is 4.

Number of Routes is 4. Since we have connected four logical networks, the router is aware of those directly connected networks:

The following command verifies the network routes discovered by the DLR:

net-vdr --route -l <VDR Name>

For example:

The logical router routing table is pushed by the NSX Controller to the ESXi host and it will be consistent across all the ESXi hosts. You will see the following output:

Now log in to the controller CLI to view the logical router state information:

nvp-controller # show control-cluster logical-routers instance all (List all LR instances)

You will see the following output:

The other command is:

nvp-controller # show control-cluster logical-routers interface-summary 1460487509

All four logical switches (VXLAN 5000, 5001, 5002, and 5003), which we connected to the logical router, are displayed in the following output with their respective interface IPs, which serve as the default gateways for the web, app, and DB machines. Again, the idea here is to showcase the power of the NSX CLI commands, which give granular-level information and are extremely useful when troubleshooting:
