Friday, December 13, 2019

Deploying Edge Node VM on N-VDS of Compute Transport Node



Segments that are overlay networks require an N-VDS, which is a virtual switch specific to NSX-T.

Virtualized workload VMs are connected to segments hosted on the N-VDS of the compute hosts; in other words, the compute guest VMs attach to the N-VDS of the compute host.

Transport Nodes in NSX-T run an instance of N-VDS.

NSX-T has two types of Transport Nodes:
1. Edge Transport Node
Available in two form factors - VM and Bare Metal - these are required for services like routing, VPN, load balancing, connectivity with the physical network, edge firewall and NAT. They represent a pool of capacity and are grouped into an Edge Node Cluster.

2. Hypervisor Transport Node
These are hosts on which NSX has been configured.
When a hypervisor transport node is created in NSX-T, an N-VDS is effectively created on the host. This N-VDS will have dual uplinks for availability purposes.
While configuring NSX on the hosts, you specify settings like Transport Zones and N-VDS settings, which include the uplink profile, TEP IP addressing and uplink information.
The uplink profile contains the teaming policy, active and standby uplink information, the VLAN for TEPs and the MTU.

In the above topology, the Edge Node VM uses the N-VDS of the compute transport node for connectivity with the upstream physical network.
The fp-eth0 interface of the Edge Node VM is uplinked to Trunk Segment - External LS 1.
Likewise, the fp-eth1 interface of the Edge Node VM is uplinked to Trunk Segment - External LS 2.
The management vnic of the Edge Node VM is connected to the management segment created on the host - Management LS.

To the right of the above diagram are the transport zones associated with the compute transport node and the transport zones associated with the edge transport node.

As you can see, the Overlay transport zone is available on both the compute transport node and the edge transport node.
ESXi is a VLAN backed transport zone available only on the compute transport node.

VLAN-1 and VLAN-2 are VLAN backed transport zones which are available only on the edge transport node.


Logical Topology used in this lab

The above is a single tier topology which uses only a Tier 0 Gateway.
It is recommended to use a multi tier topology with a Tier 1 Gateway because it inherently supports multi tenancy.
A tenant in NSX-T has a specific Tier 1 Gateway, and hence Tier 1 Gateways are also called tenant gateways.


Transport Zones for Overlay and VLAN backed traffic


Transport Zones are created as above:
- ESXi and Overlay transport zones will be used on Compute Transport Nodes
- Overlay, VLAN-1 and VLAN-2 transport zones will be used on Edge Transport Nodes.

Uplink Profile for Compute Host

The uplink profile above shows that Transport VLAN ID 2 is used for the TEP interfaces on the compute hosts.
TEP interfaces are used for encapsulation and decapsulation of Geneve traffic.
Geneve is the protocol used in NSX-T for building tunnels between Tunnel End Points, which are present on Edge Transport Nodes as well as on Compute Transport Nodes.
In addition to the default teaming policy, two teaming policies, U1 and U2, are created. These new teaming policies are failover based with a single uplink.
These teaming policies can be used to pin the traffic of a specific VLAN backed segment to a particular uplink.
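
Once the compute hosts are prepared, TEP-to-TEP reachability and MTU can be verified from the ESXi shell with vmkping. A rough sketch, with assumptions: vmk10 is the usual name of the TEP VMkernel interface but may differ, the TEP TCP/IP stack is typically named vxlan even though the encapsulation is Geneve, and a payload size of 1572 assumes a TEP MTU of 1600 (1572 bytes of ICMP data + 8 bytes ICMP header + 20 bytes IP header = 1600).

vmkping ++netstack=vxlan -I vmk10 -d -s 1572 <remote-TEP-IP>

The -d option sets the don't-fragment bit, so a successful reply confirms that the underlay can carry full-size Geneve frames between TEPs.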


We will first create a compute transport node profile and then use this profile to configure NSX on the compute hosts.



Transport Zones on Compute Transport Nodes


Overlay and ESXi transport zones are selected in the Compute Transport Node Profile.

N-VDS settings of Compute Transport Nodes

The above are the N-VDS settings in the Compute Transport Node Profile.
vmnic4 and vmnic5 on the hosts will be used for the N-VDS.
An IP Pool has been created for assigning IP addresses to the TEP interfaces on the compute transport nodes.
The uplink profile for the compute transport nodes is selected.


Compute Host Transport Nodes with single N-VDS for Overlay and VLAN backed ESXi Transport Zones

Using the compute transport node profile created earlier, the compute hosts have been configured as compute transport nodes.
Effectively, an N-VDS is installed on the compute hosts along with the appropriate transport zones - ESXi and Overlay.
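
To confirm that the hosts picked up their TEP addresses from the IP pool, the VMkernel interfaces can be listed from the ESXi shell. A small sketch (the TEP interfaces usually appear as vmk10, and vmk11 when two TEPs are created, but the names can differ per environment):

esxcli network ip interface list
esxcli network ip interface ipv4 get

The first command shows the VMkernel interfaces and the switch they are attached to, and the second shows their IPv4 addresses, which should include the TEP IPs assigned from the pool.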


Trunk Segments on N-VDS of compute host for attaching fp-eth0 and fp-eth1 of Edge Transport Node

VLAN backed segments for creating uplink interfaces on Tier 0 Gateway

The above segments are created for the external (uplink) interfaces on the Tier 0 Gateway.
Segment EDGE-VLAN-1 uses a VLAN tag of 5 and segment EDGE-VLAN-2 uses a VLAN tag of 51.

Uplink Profile on Edge




The uplink profile for the Edge Transport Node uses a different VLAN ID, 4, for the TEPs on the Edge.
When the Edge uses the N-VDS of the compute host for uplink connectivity, two different subnets should be used for the TEPs on the compute hosts and the TEPs on the edge.
Additional teaming policies have been created which will be used for pinning traffic on the VLAN backed segments.


Transport Zones on Edge Transport Node

The above transport zones are used on Edge Transport Nodes:
- Overlay transport zone for Geneve backed traffic
- VLAN-1 transport zone, which will be used for peering with upstream router 1
- VLAN-2 transport zone, which will be used for peering with upstream router 2


N-VDS settings on Edge Transport Node

The above shows the N-VDS settings of the Edge Transport Node.
The appropriate TEP pool for the Edge has been selected.
The uplink profile related to the Edge has been selected.
Notice that fp-eth0 and fp-eth1 on the Edge Node VM are uplinked to Edge-Trunk-Uplink-1 and Edge-Trunk-Uplink-2 respectively, which are VLAN backed trunk segments on the N-VDS of the compute host.


After the Edge Node VMs are configured properly, we then need to configure the Edge Node Cluster as shown below.
 


A Tier 0 Gateway requires an edge node cluster for peering with the physical network.
Active-Active high availability mode has been used in this lab.

Tier 0 Gateway

Next, interfaces need to be created on the Tier 0 Gateway.
Interfaces on Tier 0 Gateway

Configure local BGP AS number on Tier 0 Gateway.


Configure BGP neighbors on Tier 0 Gateway.

Configure route redistribution on Tier 0 Gateway.


Validation:

The validation commands below are run on Edge Node VM 1.


BGP peerings on Tier 0 SR of Edge Node VM 1

Routing Table on Tier 0 SR of Edge Node VM 1

Reachability between Tier 0 SR of Edge Node VM 1 and upstream interfaces on routers

BGP routes on Tier 0 SR of Edge Node VM 1
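
For readers reproducing these checks, the outputs above typically come from the edge node CLI. A rough sketch of the command sequence (the VRF ID of the Tier 0 SR, the peer IPs and the upstream interface IPs are specific to this lab and will differ in yours):

get logical-routers
vrf <vrf-id-of-Tier0-SR>
get bgp neighbor summary
get route
ping <upstream-router-interface-IP>
get route bgp

get logical-routers lists the DR and SR instances with their VRF IDs; the remaining commands are run from within the SR's VRF context.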

The command output below is from TOR1.
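
Assuming a Cisco-style CLI on the TOR (adjust to your platform), the equivalent checks on TOR1 look something like:

show ip bgp summary
show ip bgp
show ip route bgp

These show the BGP sessions towards the edge nodes, the BGP table and the BGP-learnt routes respectively.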





Friday, November 1, 2019

NSX-T Layer 2 Bridging


One important use case of layer 2 bridging is the migration of physical machines to virtual machines.
Here the same IP subnet is shared between virtual machines backed by an overlay network and physical machines backed by a VLAN backed distributed port group.

There will be times when certain physical machines cannot leverage virtualization. Layer 2 bridging can be used in such cases, where the physical machines need to keep layer 2 adjacency with virtual machines attached to a Geneve backed segment in NSX-T.


Configuration for layer 2 bridging:

This lab uses a host which is prepared for NSX-T, and hence there is an N-VDS on the compute host.
The setup uses a shared edge and compute cluster; hence the edge VMs are hosted on a cluster prepared for NSX-T.

1. A dedicated edge node cluster is used in the lab for the Layer 2 Bridging purpose.
2. The Tier 0 Gateway uses separate Edge Transport Nodes, other than the ones used for Layer 2 Bridging.
3. The figure below shows the connectivity of the edge dedicated to layer 2 bridging.
4. fp-eth2 is unused.
5. fp-eth0 is mapped to a logical switch on the host N-VDS which is used for overlay traffic. This logical switch is a trunk logical switch.
6. fp-eth1 is used for the bridged VLAN, which is backed by a distributed port group on the DVS.

Important Note:
The security settings of the distributed port group used for bridging should have:
a. Promiscuous mode enabled
b. Forged transmits enabled


Dedicated Edge Nodes for Layer 2 Bridging

NSX-T Logical Topology for Layer 2 Bridging Use Case


We will create a dedicated transport zone for Layer 2 Bridging.
This new transport zone for Layer 2 Bridging will use a new N-VDS; the N-VDS name in this lab is bridge.
The name of the N-VDS used for overlay traffic is ndvs.





Next, on the edges used for Layer 2 Bridging, we will ensure that there are two transport zones, one used for Overlay traffic and the second transport zone for Layer 2 Bridging.

N-VDS Configuration for Overlay Traffic


N-VDS used for Bridging

Edges nsx-edge-5 and nsx-edge-6 are part of the edge cluster dedicated to the Layer 2 Bridging service.
This lab only covers the use case related to Layer 2 Bridging.




Two bridge profiles are created.
Bridge-Profile-1 has nsx-edge-5 as the Primary Node.
Bridge-Profile-2 has nsx-edge-6 as the Primary Node.
With such a setup, both edge nodes in the edge node cluster can be used to bridge traffic for different VLANs.
Preemption has also been enabled to ensure that when the primary node recovers from a failure, it becomes active again.



We have a Tier 1 Gateway in the lab setup which has segments Web and App attached to it.
Please note that this Tier 1 Gateway does not have Edge Cluster associated with it.




The Advanced Networking and Security tab in the NSX-T Manager GUI is used to map the bridge profile to the appropriate segment.

The Web segment is mapped to Bridge-Profile-1. VLAN ID 11 is used to bridge the Web segment.

The App segment is mapped to Bridge-Profile-2. VLAN ID 12 is used to bridge the App segment.


This completes the bridging configuration.


 
Validation:
Next up is the validation part.
For validation, I will create a layer 3 interface on my physical router and assign it an IP from the subnet where bridging is used.
This is for demo purposes only; in a production setup you would keep the gateway for the bridged subnet either on the physical router or on the Tier 1 Gateway.
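
As a sketch of that test setup (assuming a Cisco-style physical router; VLAN 11 is the VLAN bridged to the Web segment, and the IP address and mask are placeholders to be taken from your own Web subnet):

interface Vlan11
 ip address <test-IP-from-Web-subnet> <mask>

Then, from exec mode:

ping <IP-of-a-VM-on-the-Web-segment>

A successful ping between this interface and a VM attached to the overlay Web segment confirms that the bridge is forwarding traffic between VLAN 11 and the Geneve backed segment. On a Linux VM, checking the neighbor table (ip neigh) should also show the MAC address of the physical side learnt across the bridge.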




 



Wednesday, July 3, 2019

NSX-T Routing Configuration


Overall Topology used in the lab

Prerequisites like NSX-T Manager installation, preparing and configuring the compute host transport nodes, and preparing and configuring the edge transport nodes are covered here.

As shown in the topology above, two Tier 0 gateways are configured in the lab.
One Tier 0 gateway is configured in Active-Active High Availability mode and the other Tier 0 gateway is configured in Active-Passive High Availability mode.
I will be referring to the two Tier 0 Gateways as Tier 0 Gateway Left and Tier 0 Gateway Right.

A total of four edge node VMs are utilized, two for each Tier 0 gateway.
Two edge node clusters are defined, and each edge node cluster utilizes two edge node VMs.


BGP peerings
The above topology shows the eBGP peerings between the NSX-T Edge Nodes and the physical routers.
Two VLANs are utilized as shown above.



Edge Transport Nodes

Here you see that Edge Transport Nodes are ready.


Below you find the interface configs applied on the Tier 0 Gateways.
Interfaces belonging to Edge Nodes on Tier 0 Gateway Left



Interfaces belonging to Tier 0 Gateway Right


BGP Diagram along with IP addressing

This topology shows the BGP AS numbers used in the lab setup.
Also shown in the diagram is the IP addressing used.
A segment with subnet 172.16.10.0/24 is attached to Tier 1 Gateway Left.
And a segment with subnet 10.0.0.0/24 is directly attached to Tier 0 Gateway Right.

========================================

BGP Configuration on Tier 0 Gateway


For the BGP configuration, edit the Tier 0 Gateway, configure the local BGP AS number first, and save the configuration.
Next, set up the BGP peers.


BGP peer configuration on Tier 0 Gateway

You will apply a similar BGP configuration on the other Tier 0 Gateway (Tier 0 Gateway Right), but in that case the local BGP AS number is 65002, as shown in the BGP topology above, while the remote AS number remains the same, i.e. 65001.


BGP configuration on the TOR Physical Routers
Make sure you have BGP configured on the physical routers and also ensure that the BGP peerings are up.
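
As a minimal sketch of the TOR side (assuming Cisco-style routers; the neighbor IP addresses are placeholders, the TOR AS is 65001 as per the BGP topology above, and the remote AS is the local AS of the corresponding Tier 0 Gateway, e.g. 65002 for Tier 0 Gateway Right):

router bgp 65001
 neighbor <edge-node-1-uplink-IP> remote-as <Tier0-local-AS>
 neighbor <edge-node-2-uplink-IP> remote-as <Tier0-local-AS>

Once both sides are configured, show ip bgp summary on each TOR should list the edge node peers in the Established state.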


BGP peerings on TOR1

BGP Peerings on TOR2

============================================

Configure redistribution on Tier 0 Gateways and Tier 1 Gateway

As mentioned in my earlier post, there is no dynamic routing between Tier 0 gateway and Tier 1 gateway in NSX-T.

We just need to configure redistribution on Tier 0 Gateway and Tier 1 Gateway appropriately.


As shown above, we are redistributing the connected networks on the Tier 1 Gateway.


Enable redistribution on the Tier 0 Gateway.
Ensure you also redistribute the connected subnets of the Tier 1 gateway on the Tier 0 gateway.


Follow the same steps to enable redistribution of connected subnet on Tier 0 Gateway Right.

===============================================

Validation


The above command is executed from the CLI of the NSX-T Edge Node VM.
This is the first edge node VM corresponding to Tier 0 Gateway Left.
We know that the Tier 0 Logical Router consists of an SR and a DR; the SR sits atop the DR.
Notice from the above output that VRF 5 corresponds to the SR named SR-T0-GW.

Let's go to VRF 5
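
For reference, the listing of the logical router instances and the jump into the VRF are typically done with the following edge CLI commands (a sketch; the VRF number 5 is specific to this lab):

get logical-routers
vrf 5

The first command lists the DR and SR instances along with their VRF IDs, and the second enters the VRF context of SR-T0-GW so that the subsequent get commands apply to that SR.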


High Availability Mode on Tier 0 SR
Check the high availability status of this Tier 0 SR corresponding to Tier 0 Gateway Left and you will find that it is Active-Active
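
The same state can usually be checked from the edge CLI as well; as an assumption (the exact command set varies slightly between NSX-T releases, so verify against the CLI guide for your version):

get high-availability status

For a Tier 0 Gateway in Active-Active mode, both edge nodes in the cluster report their SR as active.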


BGP Peerings on Tier 0 SR corresponding to Tier 0 Gateway Left
Notice the three BGP peerings:
a. BGP peering with TOR1
b. BGP peering with TOR2
c. Inter SR BGP peering using link local IP address 169.254.0.131

Notice the BGP best path advertisement.
Also notice the additional attributes like metric, Local Preference, Weight.

Check all the BGP routes on this SR; these BGP routes are learnt on the SR of Tier 0 Gateway Left.


Below are all the routes noticed on Tier 0 SR corresponding to Tier 0 Gateway Left. This is the routing table.
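
The BGP and routing outputs in this section map roughly to the following edge CLI commands, run inside the SR's VRF context (a sketch, not verbatim output):

get bgp neighbor summary
get bgp neighbor
get route bgp
get route

get bgp neighbor summary shows the state of the three peerings, get bgp neighbor gives the per-neighbor details, get route bgp lists only the BGP-learnt routes, and get route shows the full routing table of the SR.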


=======================================

Validation on the physical routers for subnet connected to Tier 0 Gateway Right

Let's check how the route for 10.0.0.0/24 (which is locally connected to Tier 0 Gateway Right) is learnt by the physical routers.




Notice from the output above that the route is learnt via the Active Edge Node VM corresponding to Tier 0 Gateway Right.
The Standby Edge Node VM corresponding to Tier 0 Gateway Right performs an automatic AS path prepend.
There is no explicit configuration needed on the Tier 0 Gateway to achieve this AS path prepend.
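
To see this from the TOR side (assuming a Cisco-style CLI), look at the BGP paths for the prefix:

show ip bgp 10.0.0.0

Both edge nodes advertise the prefix, but the path learnt from the standby edge node carries a longer AS_PATH because the NSX AS is automatically prepended, so BGP best path selection prefers the path via the active edge node.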

=========================

End to end connectivity between Windows VM attached to 10.0.0.0/24 and Ubuntu machine connected to 172.16.10.0/24


Use ping and traceroute to verify connectivity between the source and destination VMs.





Ensure that you are able to ping with the default VM MTU of 1500 bytes.
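
A sketch of that end-to-end check (the IP addresses are placeholders for the two lab VMs):

From the Ubuntu VM:
ping -M do -s 1472 <Windows-VM-IP>
traceroute <Windows-VM-IP>

From the Windows VM:
ping -f -l 1472 <Ubuntu-VM-IP>
tracert <Ubuntu-VM-IP>

A 1472 byte payload plus the 8 byte ICMP header and 20 byte IP header adds up to exactly 1500 bytes, and the don't-fragment flag (-M do on Linux, -f on Windows) ensures the packet is dropped rather than fragmented if any hop cannot carry a full 1500 byte frame.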