Monday, August 31, 2020

Multiple N-VDS' for Overlay on compute hosts

NSX-T allows the creation of overlay networks that are independent of the physical/underlay network.
 
 
Along with overlay networking, the micro-segmentation/zero-trust model of NSX is widely used, both over NSX-defined overlay networks and over traditional VLAN-backed networks.
 
Architecturally, the NSX-T solution has components in the management/control plane and components in the data plane:
1. Management plane
This has the NSX-T Managers, which are responsible for both the management plane and the control plane.
 
2. Data Plane
In the data plane, we have:
a. Compute hosts, which are prepared as host transport nodes and which typically host the workloads connected to overlay networks.
b. Edges, which provide north-south connectivity to the physical network.
These can also be leveraged for services like load balancing, edge firewall, NAT and DHCP. Edges are used to create Tier 0 Gateways, which sit between the physical network and the Tier 1 Gateways.
Tier 1 Gateways are the gateways to which overlay networks are typically connected.

A typical deployment of NSX-T has one N-VDS on the compute host, capable of handling both overlay and VLAN traffic.

Here we explore how one can deploy multiple N-VDS' on a host to provide uplink connectivity from the host to two different environments, for example a Data Center Zone and a DMZ Zone.

I have used the logical setup below in this lab.

Logical Setup


IP Addressing and VLAN details

The edges in this setup are used for the Tier 0 Gateways.
There are two Tier 0 Gateways in this setup, one for the Data Center Zone and the other for the DMZ Zone.
Edge clustering provides high availability at the Tier 0 Gateway level as well as at the Tier 1 Gateway level.
The availability mode at the Tier 0 Gateway level can be Active-Active, or Active-Standby if stateful services are consumed on the Tier 0 Gateway.
The availability mode at the Tier 1 Gateway level is Active-Standby when a service is used on that specific Tier 1 Gateway.
 
If no service (NAT, edge firewall, load balancing, VPN) is required on a Tier 1 Gateway, then do not associate that Tier 1 Gateway with an edge cluster. This avoids potential hairpinning of traffic through the edge.
 

Fabric Preparation


N-VDS' on compute host

As shown in the diagram above, there are two N-VDS' installed on the compute host once the host is prepared for NSX.
 
A few things are required before the hosts can be prepared for NSX here:
a. NSX Manager should be installed.
b. vCenter Server should be added as a compute manager in NSX-T Manager.
c. Tunnel Endpoint (TEP) IP pools are to be defined, which take care of TEP IP assignment on hosts and edges.
TEP interfaces on hosts/edges are responsible for encapsulation/decapsulation of Geneve traffic. Geneve is the overlay protocol used in NSX-T.
d. Uplink profiles are to be created for edges and hosts respectively.
e. A compute transport node profile should be defined.
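As a side note on the Geneve encapsulation mentioned in point c, the reason the underlay MTU must be raised can be sketched with simple arithmetic. This is a rough model, not VMware's exact accounting: each inner Ethernet frame is carried whole inside an outer IP/UDP/Geneve envelope.

```python
# Rough sketch of Geneve encapsulation overhead per frame.
# Variable-length Geneve options can add more bytes on top.

INNER_ETHERNET = 14  # inner Ethernet header carried inside the tunnel
GENEVE_BASE = 8      # fixed Geneve header (UDP port 6081), before options
OUTER_UDP = 8        # outer UDP header
OUTER_IPV4 = 20      # outer IPv4 header

def geneve_overhead(options_len=0):
    """Bytes added on top of the inner IP MTU by Geneve encapsulation."""
    return INNER_ETHERNET + GENEVE_BASE + OUTER_UDP + OUTER_IPV4 + options_len

# With a standard 1500-byte guest MTU, the underlay IP MTU must be at
# least 1550; NSX-T guidance is 1600 or more, leaving room for options.
required_underlay_mtu = 1500 + geneve_overhead()
print(required_underlay_mtu)  # 1550
```

This is why the uplink profiles later in this post carry an MTU of at least 1600.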
 
Compute TEP Pool

Edge TEP Pool

 

Uplink profiles for edges and hosts

The uplink profile contains the TEP VLAN ID, the active uplink names, the teaming policy and the MTU setting.
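The settings an uplink profile carries can be sketched as plain data. The field names below loosely mirror the NSX-T style but are illustrative, not an exact REST payload:

```python
# Illustrative model of what an uplink profile captures.
# Field names are assumptions for the sketch, not a literal API body.

host_uplink_profile = {
    "display_name": "host-uplink-profile",
    "transport_vlan": 0,          # TEP VLAN ID for overlay traffic
    "mtu": 1600,                  # must cover Geneve overhead
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": ["uplink-1"],
        "standby_list": ["uplink-2"],
    },
}

def validate(profile):
    """Minimal sanity checks on the profile sketch."""
    teaming = profile["teaming"]
    return profile["mtu"] >= 1600 and len(teaming["active_list"]) >= 1

print(validate(host_uplink_profile))  # True
```

Separate profiles are created for hosts and edges because their TEP VLANs and teaming needs usually differ.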
 

Transport zones come in two types: overlay and VLAN.
Overlay networks/segments are created using an overlay transport zone.
VLAN-backed networks/segments are created using a VLAN transport zone.
 
The above transport zones are used for this lab setup.
Trunk segments, which are used for edge uplink connectivity, will use the VLAN transport zone on the host.
 
Next, the compute transport node profile will be created and mapped to the single cluster used in this setup.
 
Compute Transport Node Profile

 
 
1st N-VDS on compute host - For Data Center Workloads

 
2nd N-VDS on compute host - For DMZ workloads


Please note that the TEP pools used are the same on both N-VDS'.

The compute transport node profile is then mapped to the cluster so that the cluster is prepared for NSX.
During preparation, the NSX software is pushed to the hosts, the defined N-VDS' are created on the hosts, and TEP interfaces are created on the hosts.
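The shape of this profile, two host switches drawing TEP addresses from one shared pool, can be sketched as data. All names below are illustrative, not the lab's actual identifiers:

```python
# Sketch of a compute transport node profile with two N-VDS (host
# switches) on the same host, one per zone, sharing a single TEP pool.

COMPUTE_TEP_POOL = "pool-compute-tep"  # hypothetical pool name

transport_node_profile = {
    "display_name": "compute-tn-profile",
    "host_switches": [
        {
            "host_switch_name": "nvds-datacenter",
            "transport_zones": ["tz-overlay-dc", "tz-vlan-dc"],
            "uplinks": ["vmnic2", "vmnic3"],
            "ip_pool_id": COMPUTE_TEP_POOL,
        },
        {
            "host_switch_name": "nvds-dmz",
            "transport_zones": ["tz-overlay-dmz", "tz-vlan-dmz"],
            "uplinks": ["vmnic4", "vmnic5"],
            "ip_pool_id": COMPUTE_TEP_POOL,
        },
    ],
}

# Both N-VDS' draw TEP addresses from the same pool, as noted above.
pools = {hs["ip_pool_id"] for hs in transport_node_profile["host_switches"]}
print(len(pools) == 1)  # True
```

The point to take away is that each N-VDS claims its own physical NICs, while TEP addressing can still come from one pool.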
 

Hosts are prepared for NSX using compute transport node profile

 
 
Data Center N-VDS and DMZ N-VDS on the compute host

We now move onto edges.
Edge placement in the lab is as below.
Edge Node Placement



Appropriate VLAN backed segments/networks are created in NSX-T Manager for uplink connectivity of edges.

Trunk segments for uplink connectivity of edges

Note that the appropriate transport zones are selected when the VLAN-backed trunk segments are created on the host N-VDS'.


Next edges are deployed.
Edges uplinked to the appropriate N-VDS'

The edge also has an N-VDS, which handles both VLAN traffic and overlay traffic:
VLAN traffic towards the physical network, and overlay traffic to/from the compute hosts.

The edges for the Data Center Zone are connected to the N-VDS allocated for the Data Center Zone, and the edges for the DMZ Zone are connected to the N-VDS allocated for the DMZ Zone.

Edge Transport Node Config for Data Center Zone

Edge Transport Node Config for DMZ Zone


Edges are configured for NSX

Four edges are deployed, two for Data Center Zone and two for DMZ Zone.

Edge Clusters are created appropriately.
Edge Clusters for Data Center Zone and for DMZ Zone


Gateway Configuration and Routing

VLAN-backed segments are created, which will be used when creating Layer 3 interfaces on the Tier 0 Gateways.

Segments for creating L3 interfaces on Tier 0 Gateways

VLANs 5 and 51 are used on the Tier 0 Gateway for the Data Center Zone.
VLANs 55 and 56 are used on the Tier 0 Gateway for the DMZ Zone.


IP Addressing for Tier 0 Gateways

 
Next, the Tier 0 Gateways will be defined and Layer 3 interfaces will be created on them using the IP addressing shown in the diagram above.
 

Tier 0 Gateway for Data Center Zone

When the Tier 0 Gateway for the Data Center Zone is created, the edge cluster for the Data Center Zone is selected, and the availability mode on the Tier 0 Gateway is Active-Active.
 
 
Layer 3 interfaces on Tier 0 Gateway for Data Center Zone

 
 
Tier 0 Gateway for DMZ Zone

 
 
BGP AS Numbering and BGP Setup

 

eBGP peering between edges and upstream routers


 
From a routing perspective, the upstream routers should advertise a default route to their eBGP peers, the edges.
On the NSX end, NSX routes are redistributed into BGP.
Tier 1 Gateways advertise routes towards the Tier 0 Gateway.
There is no routing protocol between the Tier 0 Gateway and the Tier 1 Gateways.
Please note that inter-SR iBGP peering can be enabled when the availability mode on the Tier 0 Gateway is Active-Active; this is preferred.
 
BGP Peering on Tier 0 Gateway for Data Center Zone

Redistribute routes on Tier 0 Gateway

 
Next, create Tier 1 Gateways and connect them to the appropriate Tier 0 Gateway.
Tier 1 Gateways are configured

Routes on the Tier 1 Gateways are advertised upstream towards the Tier 0 Gateway.


Segments are created for workloads


Overlay segments are created for Data Center Zone and for DMZ Zone respectively.
These overlay networks are connected to corresponding Tier 1 Gateways.

Once the overlay networks are created, workloads can be connected to them from vCenter.

VM in DMZ connected to overlay network for DMZ



VM in Data Center Zone connected to overlay network for Data Center

Validation

 

Ping/Trace from VM in DC Zone to VM in DMZ Zone

Ping/Trace from physical router in DC Zone to VM in DMZ


Ping/trace from physical router in DMZ to VM in DC Zone


Traffic Flow from VM in DC to VM in DMZ

 

Friday, August 21, 2020

NSX-T Federation - Active Active Data Centers

 

This blog covers the NSX-T Federation feature, which allows L2 stretching between data centers as well as micro-segmentation for workloads based on security tags.

Earlier blogs covered NSX-T Federation with a single stretched Tier 0 Gateway.
Here we explore how two Tier 0 Gateways can be utilized for workloads that are active in both data centers.


Logical Setup used in Lab
 
The above setup is used in the lab.
 
A brief about the setup:
1. The Global Manager sits in Bangalore.
2. Both sites have a Local Manager.
3. Hosts in each site have been prepared as host transport nodes.
4. Edges have been deployed in each site and configured as edge transport nodes.
5. Each site has site-local uplink VLANs, an edge TEP VLAN, a host TEP VLAN and an RTEP VLAN.
RTEP interfaces are instantiated on edges to handle inter-site traffic.
Every stretched segment has local edges designated as Active and Standby for that specific segment.
6. Four edges correspond to Tier 0 Gateway Bangalore, which has Bangalore as the primary location and Delhi as the secondary location: one edge cluster in Bangalore and another in Delhi.
7. Four edges correspond to Tier 0 Gateway Delhi, which has Delhi as the primary location and Bangalore as the secondary location: one edge cluster in Bangalore and another in Delhi.
8. Transport zone configuration, edge node configuration and host transport node configuration are done from the Local Manager.
9. The stretched Tier 0 Gateways, the segments used for their uplinks, and the stretched Tier 1 Gateways are created from the Global Manager UI.
Segments connected to stretched Tier 1 Gateways are also created from the Global Manager UI.
10. A total of eight edge nodes are configured in this lab setup.
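The active-active design described above can be summarized as data: two stretched Tier 0 Gateways, each primary in one site and secondary in the other. The gateway names below are shorthand for the lab's gateways:

```python
# Sketch of the symmetric primary/secondary layout of the two
# stretched Tier 0 Gateways (names are shorthand, not lab identifiers).

tier0_gateways = {
    "T0-Bangalore": {"primary": "Bangalore", "secondary": "Delhi"},
    "T0-Delhi": {"primary": "Delhi", "secondary": "Bangalore"},
}

# Every site is the primary location for exactly one Tier 0 Gateway,
# which is what lets workloads be active in both data centers.
primaries = [cfg["primary"] for cfg in tier0_gateways.values()]
print(sorted(primaries))  # ['Bangalore', 'Delhi']
```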

IP Addressing and VLAN details


NSX-T Fabric for Bangalore Location

IP Address Pools for Compute TEP, Edge TEP and RTEPs

Local Transport Zones in Bangalore

Uplink Profiles in Bangalore




 
Compute Transport Node Profile



Edge Node Connectivity

In the diagram above, the edge is connected to the VDS that was earlier used to configure NSX on the hosts.
VLAN-backed trunk segments are created on the hosts' VDS for edge uplink connectivity.
The fast path interfaces of the edge are connected to these trunk segments.


Edge Transport Node Configuration

 
 
Host Transport Nodes are configured


 
Once the hosts are configured for NSX, the NSX-T software is installed on the hosts and tunnel endpoint interfaces are created on them.
 
 
Edge Transport Nodes are configured 

Once the edges are configured for NSX, tunnel endpoint interfaces are created on the edges, and each edge is connected to the appropriate trunk segments on the host VDS.
 
Edge Clusters in Bangalore
 
 
RTEP configuration on edge clusters of Bangalore

RTEP configuration is applied to both edge clusters in Bangalore.

Before applying configurations on the Global Manager, ensure that the below configurations are also applied at the other location:
a. Transport zones
b. IP pools
c. Uplink profiles
d. Compute transport node profiles
e. Edge transport node configuration
f. Hosts configured as host transport nodes
g. RTEP configuration on the edge clusters in Delhi

BGP Setup

 
BGP Setup for Tier 0 Gateway Bangalore


BGP AS 65000 is used on Tier 0 Gateway Bangalore.
eBGP is used between the Tier 0 Gateway and the upstream routers.
The physical network is under AS 65001.
Traffic ingressing and egressing the subnets connected to Tier 1 Gateway Bangalore goes through the physical routers in Bangalore.
This gives deterministic traffic flow.
AS path prepending is used on the physical routers at the Delhi location to influence this traffic flow.
The physical routers send a default route on a per-BGP-peer basis.
Routes from NSX are redistributed into BGP.
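The effect of AS path prepending can be sketched with a simplified best-path model. Real BGP best-path selection has many tie-breakers before AS path length (weight, local preference); the sketch below assumes all of those are equal, and the AS numbers are the lab's:

```python
# Simplified BGP best-path model: with earlier tie-breakers equal,
# the advertisement with the shortest AS path wins.

def best_path(routes):
    """Pick the advertisement with the shortest AS path."""
    return min(routes, key=lambda r: len(r["as_path"]))

# Two advertisements of a subnet behind Tier 1 Gateway Bangalore:
advertisements = [
    # Path learned via the Bangalore physical routers
    {"via": "bangalore", "as_path": [65001, 65000]},
    # Same prefix via Delhi, with AS 65001 prepended by Delhi's routers
    {"via": "delhi", "as_path": [65001, 65001, 65001, 65000]},
]

print(best_path(advertisements)["via"])  # bangalore
```

This is how the Delhi routers' prepending keeps ingress traffic for Bangalore-homed subnets flowing through Bangalore.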


BGP Setup for Tier 0 Gateway Delhi

 
BGP AS 65002 is used on Tier 0 Gateway Delhi.
eBGP is used between the Tier 0 Gateway and the upstream routers.
The physical network is under AS 65001.
Traffic ingressing and egressing the subnets connected to Tier 1 Gateway Delhi goes through the physical routers in Delhi.
This gives deterministic traffic flow.
AS path prepending is used on the physical routers at the Bangalore location to influence this traffic flow.
The physical routers send a default route on a per-BGP-peer basis.
Routes from NSX are redistributed into BGP.
 

Global Manager Configuration


Locations are added to Global Manager


Segments are created on Global Manager for uplink connectivity of Tier 0 Gateway

While creating segments on the Global Manager, specify the location, the local transport zone and the VLAN ID.




Tier 0 Gateway Bangalore
 
While creating a stretched Tier 0 Gateway, specify the edge clusters and mark the corresponding locations as primary or secondary.
For Tier 0 Gateway Bangalore, the primary location is Bangalore.
For Tier 0 Gateway Delhi, the primary location is Delhi.

L3 interfaces on Tier 0 Gateway Bangalore

Likewise, the second Tier 0 Gateway is created with Delhi as the primary location and Bangalore as the secondary location.


Tier 1 Gateway config on Global Manager

Next, create stretched Tier 1 Gateways and connect them to the already defined Tier 0 Gateways.


Tier 1 Gateways on Global Manager
 

 
Segments on Global Manager connected to Tier 1 Gateway


Next, deploy VMs and connect them to the appropriate segments.
 
 
VM connected to Overlay Network


Validation

Trace from router in Delhi to VM behind Tier 1 Gateway Bangalore goes through physical router of Bangalore

Trace from router in Bangalore to VM behind Tier 1 Gateway Delhi goes through physical router of Delhi 


Trace from loopback of second physical router in Delhi to VM behind Tier 1 Gateway Bangalore goes through physical router 1 of Bangalore location


Trace from VM behind Tier 1 Gateway Bangalore towards loopback of physical router 2 in Delhi goes through physical router 2 of Bangalore location

 
Trace from VM in Delhi to loopback of physical router 1 in Bangalore goes through physical router of Delhi



Trace from loopback of physical router 2 in Bangalore to VM behind Tier 1 Gateway Delhi goes through physical router in Delhi


RTEP to RTEP tunnel is established