Monday, January 11, 2021

NSX-T VRF Gateway

The VRF Gateway feature in NSX-T is similar to the VRF Lite feature in physical networks in the following ways:

  1. Just as VRF Lite does not require another physical router for a separate routing instance, NSX-T VRF gateways do not require additional edge nodes to be deployed. This drastically reduces the resource requirements. In NSX-T version 3.1, 100 VRFs are supported per edge node.
  2. NSX-T VRF gateways provide routing isolation, with each VRF gateway maintaining its own routing table.

In NSX-T, a parent Tier 0 Gateway must exist before a VRF gateway can be created.

A VRF gateway inherits the following from its parent Tier 0 Gateway:

a. HA mode

If the parent Tier 0 gateway uses active-active availability mode, then all VRF gateways that utilize that parent T0 will also have active-active HA mode.

b. Edge Cluster

c. BGP AS number

d. Graceful restart settings

e. BGP multipath relax

The above topology is what I have used in my lab setup.

VRF Gateways are deployed as Tier 0 Gateways and downstream Tier 1 Gateways are connected to VRF gateways.

As shown in the above diagram, separate VLANs and IP subnets have been used on the VRF Gateways.

In the case of VRF gateways, VLANs are the data-plane channel towards the upstream physical network.

BGP can be run in each VRF gateway for route exchange with the upstream infrastructure.

VLANs and IP subnets have been tabulated below:

Before deploying the VRF gateways, we will ensure the following is in place:

  1. NSX Managers are deployed.
  2. vCenter Server is added as a compute manager to NSX-T.
  3. Hosts are configured as host transport nodes.
  4. Edges have been deployed and an edge cluster is created.
  5. The parent Tier 0 Gateway is configured, along with its uplink interface configuration and BGP configuration.

You can check my previous posts for the above workflows.

Transport Zones
Host Uplink Profile
Edge Uplink Profile
Transport zones on hosts
Hosts have been configured for NSX using VDS as NSX-T host switch
Edge Node Configuration
Edge Transport Nodes
Edge Cluster has been created for parent T0 and VRF gateways
Parent Tier 0 Gateway
Segments for uplinks on parent Tier 0 Gateway
Layer 3 interfaces on parent T0 gateway
BGP configuration on parent Tier 0 Gateway
BGP neighbors on parent T0 gateway

The deployment workflow for a VRF gateway is as follows:

In my lab, I have deployed two VRF Gateways.

  1. Create uplink segments for the VRF gateways; here, specify the VLAN ID information.
Segments for uplinks of VRF Gateway A
  2. Next, create the VRF gateway.
Create VRF Gateway A

The VRF gateway is associated with its parent Tier 0 Gateway.
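If you prefer the API route, the same two steps can be done with the NSX-T Policy API. The below is only a rough, untested sketch: the endpoints and field names reflect my reading of the NSX-T 3.x Policy API documentation, and the manager FQDN, credentials, IDs, VLAN and paths are placeholders rather than the exact values from this lab.

import requests

NSX = "https://nsx-manager.lab.local"        # placeholder manager FQDN
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")       # lab credentials, replace as needed
S.verify = False                             # lab only - self-signed certificate

# 1. VLAN-backed uplink segment for VRF Gateway A (VLAN ID is a placeholder)
S.patch(f"{NSX}/policy/api/v1/infra/segments/vrf-a-uplink-1", json={
    "display_name": "vrf-a-uplink-1",
    "transport_zone_path": "/infra/sites/default/enforcement-points/"
                           "default/transport-zones/<vlan-tz-id>",
    "vlan_ids": ["101"],
})

# 2. VRF Gateway A: a Tier 0 whose vrf_config points at the parent Tier 0.
#    HA mode, edge cluster and BGP AS number are inherited from the parent.
S.patch(f"{NSX}/policy/api/v1/infra/tier-0s/vrf-gw-a", json={
    "display_name": "VRF-GW-A",
    "vrf_config": {"tier0_path": "/infra/tier-0s/parent-t0"},
})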

L3 interfaces on VRF Gateway A
BGP configuration on VRF Gateway A

Note: inter-SR iBGP is not supported on VRF gateways.

BGP peers on VRF Gateway A
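For reference, the BGP part of VRF Gateway A could also be driven through the Policy API. Again this is an untested sketch: "default" as the locale-services ID, the neighbor address and AS number are placeholders, and the field names are my reading of the 3.x API. The local AS itself is inherited from the parent Tier 0, so only the neighbors need to be defined here.

import requests

NSX = "https://nsx-manager.lab.local"
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")
S.verify = False

BGP = f"{NSX}/policy/api/v1/infra/tier-0s/vrf-gw-a/locale-services/default/bgp"

# Enable BGP on the VRF gateway; inter-SR iBGP stays disabled since it is
# not supported on VRF gateways.
S.patch(BGP, json={"enabled": True, "inter_sr_ibgp": False})

# One eBGP neighbor towards the upstream router for this VRF (placeholder values)
S.patch(f"{BGP}/neighbors/tor-a-vrf-a", json={
    "neighbor_address": "192.168.101.1",
    "remote_as_num": "65001",
})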

Since my topology also has VRF Gateway B, I will follow the same workflow and configure VRF Gateway B as well.

Next, create the corresponding Tier 1 gateways and connect them to the respective VRF gateways as shown in the topology above.

Create corresponding Tier 1 Gateways and attach to respective VRF gateways

Next, create segments for the workloads and attach the workloads to the correct overlay segments.

Overlay segments for workloads
VM connected to overlay network Web-VRF-A
VM connected to overlay network Web-VRF-B

The physical network needs to learn the NSX routes.

Hence, connected networks on the Tier 1 gateways will be advertised towards the VRF gateways.

The VRF gateways will redistribute the NSX routes into BGP.

Advertise connected networks on Tier 1 Gateways
Route redistribution on VRF Gateway A
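The redistribution step on VRF Gateway A roughly corresponds to the Policy API call below. This is an untested sketch; the redistribution type names are as I recall them from the 3.x API and the rule name is arbitrary.

import requests

NSX = "https://nsx-manager.lab.local"
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")
S.verify = False

# Redistribute Tier 0 and Tier 1 connected routes of this VRF into BGP
S.patch(f"{NSX}/policy/api/v1/infra/tier-0s/vrf-gw-a/locale-services/default",
        json={
            "route_redistribution_config": {
                "redistribution_rules": [{
                    "name": "nsx-into-bgp",
                    "route_redistribution_types": [
                        "TIER0_CONNECTED",
                        "TIER1_CONNECTED",
                    ],
                }],
            },
        })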

Validation:

ICMP test between VM on VRF-A to VM on VRF-B
Physical router 1 learns NSX routes
Reachability between loopback on physical router 2 and VM on VRF-B

As the outputs above show, there is connectivity between VMs that are in different VRFs.

Monday, August 31, 2020

Multiple N-VDS' for Overlay on compute hosts

NSX-T allows the creation of overlay networks which are independent of networks in physical/underlay network.
 
 
Along with the creation of overlay networks, the micro-segmentation/zero-trust model of NSX is widely used, implemented both over NSX-defined overlay networks and over traditional VLAN-backed networks.
 
Architecturally, the NSX-T solution has components that sit in the management/control plane and components that sit in the data plane:
1. Management plane, which has the NSX-T Managers.
They are responsible for the management plane as well as the control plane.
 
2. Data Plane
In the data plane, we have:
a. Compute hosts, which are prepared as host transport nodes and which typically host the workloads connected to overlay networks.
b. Edges, which provide north-south connectivity with the physical network.
They can also be leveraged for services like load balancing, edge firewall, NAT and DHCP. Edges are used to create Tier 0 Gateways, which sit between the physical network and the Tier 1 Gateways.
Tier 1 Gateways are the gateways to which overlay networks are typically connected.

A typical deployment of NSX-T has one N-VDS on the compute host, which is capable of handling both overlay traffic and VLAN traffic.

Here we explore how to deploy multiple N-VDS' on a host to provide uplink connectivity from the host to two different environments,
for example a Data Center Zone and a DMZ Zone.

I have used the below logical setup in this lab.

Logical Setup


IP Addressing and VLAN details

The edges in this setup have been used for Tier 0 Gateways.
There are two Tier 0 Gateways in this setup, one for the Data Center Zone and the other for the DMZ Zone.
Edge clustering provides high availability at the Tier 0 Gateway level as well as at the Tier 1 Gateway level.
The availability mode at the Tier 0 Gateway level can be Active-Active, or Active-Standby if any stateful service is to be consumed on the Tier 0 Gateway.
The availability mode at the Tier 1 Gateway level is Active-Standby when any service is to be used on that specific Tier 1 Gateway.
 
If services (NAT, edge firewall, load balancing, VPN) are not required on the Tier 1 Gateway, then do not associate the Tier 1 Gateway with an edge cluster. This avoids potential hairpinning of traffic through the edge.
 

Fabric Preparation


N-VDS' on compute host

As shown in the diagram above, there are two N-VDS' installed on the compute host once the host is prepared for NSX.
 
A couple of things are required before the hosts can be prepared for NSX here:
a. NSX Manager should be installed.
b. vCenter Server should be added as a compute manager to the NSX-T Manager.
c. Tunnel Endpoint (TEP) pools are to be defined, which take care of TEP IP assignment on hosts and edges (a small API sketch follows this list).
TEP interfaces on hosts/edges are responsible for encapsulation/decapsulation of Geneve traffic. Geneve is the overlay protocol utilized in NSX-T.
d. Uplink profiles are to be created for the edges and hosts respectively.
e. A compute transport node profile should be defined.
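As a reference for item c, the TEP pools can also be defined through the Policy API. The sketch below is untested; the endpoint and field names follow my reading of the 3.x API, and the pool IDs, CIDRs, ranges and gateways are placeholders rather than the exact addressing used in this lab.

import requests

NSX = "https://nsx-manager.lab.local"
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")
S.verify = False

def make_tep_pool(pool_id, cidr, start, end, gateway):
    # Create a static IP pool plus one subnet used for TEP assignment
    base = f"{NSX}/policy/api/v1/infra/ip-pools/{pool_id}"
    S.patch(base, json={"display_name": pool_id})
    S.patch(f"{base}/ip-subnets/{pool_id}-subnet", json={
        "resource_type": "IpAddressPoolStaticSubnet",
        "cidr": cidr,
        "allocation_ranges": [{"start": start, "end": end}],
        "gateway_ip": gateway,
    })

make_tep_pool("compute-tep-pool", "172.16.10.0/24",
              "172.16.10.11", "172.16.10.50", "172.16.10.1")
make_tep_pool("edge-tep-pool", "172.16.20.0/24",
              "172.16.20.11", "172.16.20.50", "172.16.20.1")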
 
Compute TEP Pool

Edge TEP Pool

 

Uplink profiles for edges and hosts

An uplink profile contains the TEP VLAN ID, the active uplink names, the teaming policy and the MTU setting.
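The profiles in this lab were built in the UI; the snippet below is only an illustration that puts those fields in one place. The payload shape follows my understanding of the manager API object (UplinkHostSwitchProfile) and is untested, and the profile name, VLAN, MTU and uplink names are placeholders.

import json

host_uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "host-uplink-profile",
    "transport_vlan": 20,      # TEP VLAN carrying host overlay traffic
    "mtu": 1700,               # at least 1600 to carry the Geneve overhead
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
        ],
        "standby_list": [
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
}

print(json.dumps(host_uplink_profile, indent=2))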
 

Transport zones come in two types: overlay and VLAN.
Overlay networks/segments are created using an overlay transport zone.
VLAN-backed networks/segments are created using a VLAN transport zone.
 
The above transport zones are used for this lab setup.
Trunk segments, which are used for the uplink connectivity of the edges, utilize the VLAN transport zone on the hosts.
 
Next, a compute transport node profile will be created, which will be mapped to the single cluster used in this setup.
 
Compute Transport Node Profile

 
 
1st N-VDS on compute host - For Data Center Workloads

 
2nd N-VDS on compute host - For DMZ workloads


Please note that the same TEP pool is used on both the N-VDS'.

The compute transport node profile is then mapped to the cluster so that the cluster is prepared for NSX.
Here the NSX software is pushed to the hosts, the defined N-VDS' are created on the hosts, and the TEP interfaces are created on the hosts.
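To show what the profile boils down to, here is an illustrative payload with the two host switches side by side: the Data Center N-VDS and the DMZ N-VDS both point at the same compute TEP pool but at different overlay transport zones. The structure follows my understanding of the transport node profile API and is untested; every name and ID below is a placeholder.

import json

def host_switch(name, tz_id, uplink_profile_id, tep_pool_id):
    # One N-VDS definition inside the transport node profile; the VLAN
    # transport zone for the edge trunk segments would be added to the
    # transport_zone_endpoints list as well.
    return {
        "host_switch_name": name,
        "host_switch_mode": "STANDARD",
        "transport_zone_endpoints": [{"transport_zone_id": tz_id}],
        "host_switch_profile_ids": [
            {"key": "UplinkHostSwitchProfile", "value": uplink_profile_id},
        ],
        "ip_assignment_spec": {
            "resource_type": "StaticIpPoolSpec",
            "ip_pool_id": tep_pool_id,
        },
    }

tn_profile = {
    "display_name": "compute-tn-profile",
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [
            host_switch("nvds-dc", "<dc-overlay-tz-id>",
                        "<host-uplink-profile-id>", "<compute-tep-pool-id>"),
            host_switch("nvds-dmz", "<dmz-overlay-tz-id>",
                        "<host-uplink-profile-id>", "<compute-tep-pool-id>"),
        ],
    },
}

print(json.dumps(tn_profile, indent=2))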
 

Hosts are prepared for NSX using compute transport node profile

 
 
Data Center N-VDS and DMZ N-VDS on the compute host

We now move onto edges.
Edge placement in the lab is as below.
Edge Node Placement



Appropriate VLAN-backed segments/networks are created in NSX-T Manager for the uplink connectivity of the edges.

Trunk Segments for up-link connectivity of edges

Note that the appropriate transport zones are selected when the VLAN-backed trunk segments are created on the host N-VDS'.


Next edges are deployed.
Edges up-linked to appropriate N-VDS'

Each edge also has an N-VDS, which handles both VLAN traffic and overlay traffic:
VLAN traffic towards the physical network, and overlay traffic to/from the compute hosts.

The edge for the Data Center Zone is connected to the N-VDS allocated for the Data Center Zone,
and the edge for the DMZ Zone is connected to the N-VDS allocated for the DMZ Zone.

Edge Transport Node Config for Data Center Zone

Edge Transport Node Config for DMZ Zone


Edges are configured for NSX

Four edges are deployed, two for Data Center Zone and two for DMZ Zone.

Edge Clusters are created appropriately.
Edge Clusters for Data Center Zone and for DMZ Zone


Gateway Configuration and Routing

VLAN-backed segments are created, which are used when creating the Layer 3 interfaces on the Tier 0 Gateways.

Segments for creating L3 interfaces on Tier 0 Gateways

VLAN 5 and VLAN 51 are used on the Tier 0 Gateway for the Data Center Zone.
VLAN 55 and VLAN 56 are used on the Tier 0 Gateway for the DMZ Zone.


IP Addressing for Tier 0 Gateways

 
Next, the Tier 0 Gateways will be defined and Layer 3 interfaces will be created on them using the IP addressing shown in the diagram above.
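For completeness, one of the Layer 3 interfaces on the Data Center Tier 0 would look roughly like this via the Policy API. This is an untested sketch; the Tier 0 ID, segment path, edge path, address and prefix length are placeholders standing in for the VLAN 5 uplink in the diagram.

import requests

NSX = "https://nsx-manager.lab.local"
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")
S.verify = False

LS = f"{NSX}/policy/api/v1/infra/tier-0s/t0-dc/locale-services/default"

# External interface on edge node 1, bound to the VLAN 5 uplink segment
S.patch(f"{LS}/interfaces/uplink-vlan5-en1", json={
    "type": "EXTERNAL",
    "segment_path": "/infra/segments/t0-dc-uplink-vlan5",
    "edge_path": "/infra/sites/default/enforcement-points/default/"
                 "edge-clusters/<dc-edge-cluster-id>/edge-nodes/<edge-node-id>",
    "subnets": [{"ip_addresses": ["192.168.5.2"], "prefix_len": 24}],
})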
 

Tier 0 Gateway for Data Center Zone

When the Tier 0 Gateway for the Data Center Zone is created, the edge cluster for the Data Center Zone is selected, and the availability mode on the Tier 0 Gateway is Active-Active.
 
 
Layer 3 interfaces on Tier 0 Gateway for Data Center Zone

 
 
Tier 0 Gateway for DMZ Zone

 
 
BGP AS Numbering and BGP Setup

 

eBGP peering between edges and upstream routers


 
From a routing perspective, the upstream routers should advertise a default route to their eBGP peers, which are the edges.
NSX routes are redistributed into BGP on the NSX end.
Tier 1 Gateways advertise routes towards the Tier 0 Gateway.
There is no routing protocol between the Tier 0 Gateway and the Tier 1 Gateway.
Please note that inter-SR iBGP peering can be enabled when the availability mode on the Tier 0 Gateway is Active-Active, and this is preferred.
 
BGP Peering on Tier 0 Gateway for Data Center Zone

Redistribute routes on Tier 0 Gateway

 
Next, create the Tier 1 Gateways and connect them to the appropriate Tier 0 Gateway.
Tier 1 Gateways are configured

Routes on the Tier 1 Gateways are advertised towards the Tier 0 Gateway upstream.
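As a sketch, the Tier 1 for the Data Center Zone boils down to a small Policy API object: it is attached to the Data Center Tier 0, advertises its connected segments, and gets no edge cluster because no services run on it. Untested, and the IDs and names are placeholders.

import requests

NSX = "https://nsx-manager.lab.local"
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")
S.verify = False

S.patch(f"{NSX}/policy/api/v1/infra/tier-1s/t1-dc", json={
    "display_name": "T1-DC",
    "tier0_path": "/infra/tier-0s/t0-dc",
    # Advertise connected segments towards the Tier 0; no edge cluster is
    # set since no services (NAT, LB, firewall, VPN) run on this Tier 1.
    "route_advertisement_types": ["TIER1_CONNECTED"],
})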


Segments are created for workloads


Overlay segments are created for the Data Center Zone and for the DMZ Zone respectively.
These overlay networks are connected to the corresponding Tier 1 Gateways.

Once the overlay networks are created, workloads can be connected to these overlay networks from vCenter.
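One of the workload overlay segments for the Data Center Zone would look roughly as below via the Policy API. Again an untested sketch; the segment ID, transport zone path and gateway address are placeholders for my lab values.

import requests

NSX = "https://nsx-manager.lab.local"
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")
S.verify = False

# Overlay segment connected to the Data Center Tier 1
S.patch(f"{NSX}/policy/api/v1/infra/segments/web-dc", json={
    "display_name": "Web-DC",
    "connectivity_path": "/infra/tier-1s/t1-dc",
    "transport_zone_path": "/infra/sites/default/enforcement-points/"
                           "default/transport-zones/<dc-overlay-tz-id>",
    "subnets": [{"gateway_address": "172.20.10.1/24"}],
})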

VM in DMZ connected to overlay network for DMZ



VM in Data Center Zone connected to overlay network for Data Center

Validation

 

Ping/Trace from VM in DC Zone to VM in DMZ Zone

Ping/Trace from physical router in DC Zone to VM in DMZ


Ping/trace from physical router in DMZ to VM in DC Zone


Traffic Flow from VM in DC to VM in DMZ