NSX-T: Virtual Edge Node with 4 Datapath Interfaces

Daniel Krieger
January 11, 2023
Reading time: 1 min

With NSX-T 3.2.1, VMware introduced a new feature that is not yet used often in practice; to make matters worse, it was buggy until NSX-T 3.2.1.2. We are talking about the option to use 4 interfaces for NSX-T traffic in an Edge VM. The feature gives you more control and flexibility over the data flows and reduces the interference between them. In some scenarios, the design can improve performance because the TEP traffic has less impact on the north/south traffic.

Disclaimer

This blog article is not a step-by-step guide on how to install an Edge Node or NSX-T. The deployment is presented in a very abbreviated way and essentially only covers the differences from the common deployment.

Design

In this design, we use 4 datapath interfaces (fp-eth0 to fp-eth3) on the edge node. Two interfaces handle the TEP overlay traffic, and two interfaces are used for eBGP peering. A separate VDS is used for NSX-T here, but you can also run uplink and overlay traffic over separate VDSs.
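
For orientation, here is a minimal sketch of the intended layout. The port group names are hypothetical (the actual ones come from the preparation step below); when an Edge VM is deployed through the API, the fp-eth interfaces are assigned in order from the data networks chosen for the VM, so the order matters.

```python
# Hypothetical interface-to-port-group layout for the 4-datapath edge node.
# In an API-driven Edge VM deployment, the entries of data_network_ids in
# vm_deployment_config map to fp-eth0..fp-eth3 in order.
EDGE_DATAPATH_LAYOUT = {
    "fp-eth0": {"portgroup": "pg-edge-tep",     "role": "TEP overlay, VLAN 20"},
    "fp-eth1": {"portgroup": "pg-edge-tep",     "role": "TEP overlay, VLAN 20"},
    "fp-eth2": {"portgroup": "pg-edge-uplink1", "role": "eBGP peering with ToR1, VLAN 30"},
    "fp-eth3": {"portgroup": "pg-edge-uplink2", "role": "eBGP peering with ToR2, VLAN 31"},
}
```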

Preparation

NSX-T version 3.2.1.2 or later is required, and the servers must provide at least 4 dedicated interfaces for NSX-T. VCF deployments are not supported. The design is based on VMware’s single-NVDS multi-TEP architecture. We need a distributed port group (overlay TEP, VLAN 20) for the TEP traffic, with uplink1 and uplink2 attached in active-active mode. In addition, we need a port group for Edge BGP uplink 1 (VLAN 30) in active-standby mode (uplink3 active, uplink4 standby) and another port group for Edge BGP uplink 2 (VLAN 31) in active-standby mode (uplink4 active, uplink3 standby), as in the sketch below.
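
These port groups are normally created in vCenter. Purely as an illustration, a pyVmomi sketch that builds the three port groups described above; the vCenter address, credentials, VDS name, and port group names are assumptions, and uplink1 to uplink4 must match the uplink names of your VDS.

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim

def pg_spec(name, vlan_id, policy, active, standby):
    """Distributed port group spec with VLAN tag and explicit teaming."""
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        inherited=False,
        policy=vim.StringPolicy(inherited=False, value=policy),
        uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
            inherited=False, activeUplinkPort=active, standbyUplinkPort=standby),
    )
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
            inherited=False, vlanId=vlan_id),
        uplinkTeamingPolicy=teaming,
    )
    return vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name, type="earlyBinding", numPorts=8, defaultPortConfig=port_cfg)

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="***", disableSslCertValidation=True)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
vds = next(d for d in view.view if d.name == "vds-nsx")  # assumed VDS name

vds.AddDVPortgroup_Task([
    # Overlay TEP: uplink1/uplink2 active-active, default load balancing.
    pg_spec("pg-edge-tep", 20, "loadbalance_srcid", ["uplink1", "uplink2"], []),
    # BGP uplink 1: active on uplink3, standby on uplink4.
    pg_spec("pg-edge-uplink1", 30, "failover_explicit", ["uplink3"], ["uplink4"]),
    # BGP uplink 2: active on uplink4, standby on uplink3.
    pg_spec("pg-edge-uplink2", 31, "failover_explicit", ["uplink4"], ["uplink3"]),
])
```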

Deployment 

After we have made all the preparations, we still need an uplink profile with the appropriate teaming policies: two named teaming policies and the default teaming policy. As in the standard design with two fast path interfaces, we use the default load balancing policy for the overlay network (fp-eth0, fp-eth1) and two named teaming policies for the BGP peering with the ToR switches. The named policies require “Failover Order” as the teaming mode. fp-eth2 is used for peering with ToR1, and fp-eth3 is used for peering with ToR2.
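
Created via the NSX-T Manager API instead of the UI, such an uplink profile could look roughly like the following sketch. The profile name, uplink names, and teaming names are assumptions; uplink-3 and uplink-4 are the uplinks that later get mapped to fp-eth2 and fp-eth3 on the edge transport node.

```python
import requests

NSX = "https://nsx.lab.local"  # assumed NSX Manager address
AUTH = ("admin", "***")

profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "edge-uplink-profile-4dp",
    # 0 assuming the VDS port group tags VLAN 20; set 20 instead if the
    # edge attaches to a trunk port group and must tag the TEP VLAN itself.
    "transport_vlan": 0,
    # Default teaming: load balance source over the two TEP uplinks.
    "teaming": {
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    # Named teamings: failover order, one per ToR switch.
    "named_teamings": [
        {"name": "tor1-uplink", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink-3", "uplink_type": "PNIC"}]},
        {"name": "tor2-uplink", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink-4", "uplink_type": "PNIC"}]},
    ],
}

r = requests.post(f"{NSX}/api/v1/host-switch-profiles",
                  json=profile, auth=AUTH, verify=False)
r.raise_for_status()
```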

Next, we need a VLAN transport zone. Here, it is important that we specify the named teaming policies. In addition, the uplink segments must be created, and the appropriate teaming policy must be selected in the segment. Now everything is prepared to perform BGP peering as usual.
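
As a sketch of the same two steps against the Policy API (the transport zone ID, segment IDs, and display names are again assumptions; the teaming names must match the named teamings from the uplink profile):

```python
import requests

NSX = "https://nsx.lab.local"  # assumed NSX Manager address
AUTH = ("admin", "***")
TZ_PATH = "/infra/sites/default/enforcement-points/default/transport-zones/tz-edge-vlan"

# VLAN transport zone that carries the named teaming policies.
tz = {
    "display_name": "tz-edge-vlan",
    "tz_type": "VLAN_BACKED",
    "uplink_teaming_policy_names": ["tor1-uplink", "tor2-uplink"],
}
requests.put(f"{NSX}/policy/api/v1{TZ_PATH}",
             json=tz, auth=AUTH, verify=False).raise_for_status()

# One uplink segment per ToR, each pinned to its named teaming policy.
segments = {
    "seg-edge-uplink1": ("30", "tor1-uplink"),
    "seg-edge-uplink2": ("31", "tor2-uplink"),
}
for seg_id, (vlan, teaming) in segments.items():
    body = {
        "display_name": seg_id,
        "transport_zone_path": TZ_PATH,
        "vlan_ids": [vlan],
        "advanced_config": {"uplink_teaming_policy_name": teaming},
    }
    requests.put(f"{NSX}/policy/api/v1/infra/segments/{seg_id}",
                 json=body, auth=AUTH, verify=False).raise_for_status()
```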

Summary

The 4-datapath design is an extension of the well-known standard design with two fast path interfaces and gives us even more control over the traffic flow. At the same time, it can bring more performance for environments that have to handle a lot of north/south traffic. A possible scenario would be, for example, a DMZ under heavy load with proxy servers that have to handle a lot of north/south traffic and then forward it as east/west traffic. Of course, the solution must always be evaluated case by case, but it is a cost-effective way to avoid performance bottlenecks. It also physically separates TEP and uplink traffic, which can be relevant in high-security scenarios.