Advanced Load Balancer (AVI) deployment over the NSX-T 3.2 GUI

Daniel Krieger
23 May 2022
Reading time: 4 min

Since NSX-T version 3.2 it has been possible to deploy and configure the Advanced Load Balancer (AVI) via the NSX-T GUI. This blog article gives a brief overview of this feature.


You need NSX-T Manager version 3.2 or above and NSX Advanced Load Balancer version 21.1.x or above. In addition, a VIP is needed, even for a single-node deployment. A management IP and working DNS are also required. An NTP server is not strictly required but strongly recommended, as the time must be synchronized on all servers involved.

The actual creation of the cluster takes place via the NSX-T Manager GUI under System/Appliances. A wizard collects all the data required for the cluster, and the cluster is then created automatically. It is recommended to store a public SSH key for the administrator during the installation. This cannot be done later, and without the key no SSH connection to the controllers is possible. After the cluster has been rolled out successfully, the NSX-T connector must still be configured in the AVI GUI (Infrastructure/Clouds).
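Since the public key can only be stored at deployment time, it is worth generating a key pair before starting the wizard. A minimal sketch with ssh-keygen (the file name, key type, and comment are arbitrary choices, not mandated by NSX-T):

```shell
# Generate an ed25519 key pair for the controller admin user.
# File name and comment are arbitrary; adjust to your conventions.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -N "" -C "avi-admin" -f ~/.ssh/avi_admin

# The contents of the .pub file is what you paste into the
# NSX-T deployment wizard:
cat ~/.ssh/avi_admin.pub
```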

Providing virtual services

After these preparations, virtual services can be provided via the NSX-T GUI under Network/Advanced Load Balancer. At the moment, not all features are fully integrated. Note: services, pools, or virtual IP addresses created in the AVI GUI cannot be used in the NSX-T GUI.

Deployment problems

In rare cases, the NSX-T Manager may fail to configure the cluster cleanly. The deployment aborts at 85%; the cluster IP is reachable, but the cluster cannot be configured. The NSX-T Manager displays the following error message:

nsx advanced load balancer controller is not reachable


There are several approaches to solving the problem. In most cases it is sufficient to delete the appliance via the NSX-T Manager and roll it out again. If this is not successful, a force delete can be triggered via the REST API:

POST /policy/api/v1/alb/controller-nodes/deployments/{node-id}?action=delete&force_delete=true
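The force delete above can be issued with any REST client. A minimal curl sketch, where the manager host name, the admin credentials, and {node-id} (the ID of the stuck controller deployment) are placeholders for your environment:

```shell
# Placeholders - replace with your environment's values.
NSX_MANAGER="nsx-manager.example.local"
NODE_ID="<node-id>"   # ID of the stuck controller deployment

# Quote the URL so the shell does not treat '&' as a control operator.
URL="https://${NSX_MANAGER}/policy/api/v1/alb/controller-nodes/deployments/${NODE_ID}?action=delete&force_delete=true"

# Uncomment to run against your NSX-T Manager
# (-k skips certificate verification, -u prompts for the admin password):
# curl -k -u admin -X POST "$URL"
```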

In rare cases even this is not enough, and the NSX-T Manager may still hold referenced configuration remnants from old deployments. In the log (/var/log/proton/nsxapi.log) you can find entries similar to the following:

[ALB Controller] Controller configuration failed during on-boarding task in Policy. Error: An object with the same path=[/infra/sites/default/enforcement-points/alb-endpoint] is marked for deletion. Either use another path or wait for the purge cycle (max 5 minutes) for permanent removal of the object.

A force cleanup via the API helps here:

POST /policy/api/v1/troubleshooting/infra/tree/realization?action=cleanup

Body (JSON)

{"paths" : ["/infra/sites/default/enforcement-points/alb-endpoint"]}
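As a curl sketch (again, the manager host name and credentials are placeholders; the path in the body is the stale path reported in the log message):

```shell
# Placeholder - replace with your environment's value.
NSX_MANAGER="nsx-manager.example.local"

# JSON body listing the stale path(s) to purge, taken from the log entry.
BODY='{"paths" : ["/infra/sites/default/enforcement-points/alb-endpoint"]}'

# Uncomment to run against your NSX-T Manager:
# curl -k -u admin -X POST \
#   -H "Content-Type: application/json" \
#   -d "$BODY" \
#   "https://${NSX_MANAGER}/policy/api/v1/troubleshooting/infra/tree/realization?action=cleanup"
```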

After the workaround, the deployment must be performed again and should now succeed.

Supported Features over the NSX-T GUI

  • Create Virtual Services (fully featured – with exceptions see below)
    • application profile (new profiles must be created via the REST API)
    • TCP/UDP Profile (new profiles must be created via the REST API)
    • Error Page Profile (new profiles must be created via the REST API)
    • SE Group (new groups must be created via AVI GUI)
    • LB Policies (Network Security, HTTP Request, HTTP Security, HTTP Response)
  • Create Virtual IP Addresses
  • Create Pools (fully featured – with exceptions see below)
    • AutoScale Policy (new policy must be created via the REST API)
    • AutoScale Launch Config (new config must be created via the REST API)
    • Analytics Profile (new profile must be created via the REST API)
  • Create Pool Groups (fully featured – with exceptions see below)
    • Deployment Policy (new policy must be created via the REST API)
  • Persistence Profiles (fully featured – with exceptions see below)
    • Only generate new Profiles
    • Only Cookie and HTTP Cookie Profiles
  • Add SSL/Root Certificates