vSphere with Tanzu AKO integration for Ingress

Steven Schramm
20. October 2022
Reading time: 16 min

vSphere with Tanzu deployments have to fulfill certain networking requirements and need at least NSX Advanced Load Balancer (NSX ALB) to provide load balancing for all services that should be published without exposing the K8s worker nodes themselves.
If advanced networking features are required, it may be necessary to integrate NSX-T Datacenter with its full feature set, such as routing, switching, micro-segmentation, load balancing and more.
Currently the load balancing features of NSX ALB and NSX-T Datacenter differ, but in future NSX-T releases the built-in NSX-T load balancer will be removed and replaced by NSX ALB. This also matters from a vSphere with Tanzu point of view, since a migration to NSX ALB as the load balancing solution will become necessary.
That means NSX ALB is required both for vSphere with Tanzu deployments with NSX-T Datacenter integration and for all deployments that use vCenter networking combined with NSX ALB as load balancer.
Independent of the type of integration, vSphere with Tanzu integrates NSX ALB or the NSX-T load balancer for all K8s services of type LoadBalancer, but not for Ingress resources. In other words, L4 load balancing is integrated out of the box, but all L7 services require a dedicated ingress controller such as Nginx or Contour.
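
For illustration, the following minimal Service manifest shows the kind of L4 LoadBalancer service that vSphere with Tanzu publishes automatically through the built-in integration; the name and selector are placeholders. Ingress resources, in contrast, need the AKO setup described in this article.

apiVersion: v1
kind: Service
metadata:
  name: frontend-lb        # placeholder name
spec:
  type: LoadBalancer       # published by the built-in L4 integration (NSX ALB or NSX-T LB)
  selector:
    app: frontend          # placeholder selector
  ports:
  - port: 80
    targetPort: 8080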

The following article describes how to implement the L7 load balancing features of NSX ALB, including advanced security features like the Web Application Firewall (WAF), in order to replace a dedicated ingress controller with NSX ALB. It does not describe how to deploy vSphere with Tanzu itself, nor the integration of NSX-T for vSphere with Tanzu.
The deployment of NSX ALB is also not covered and is assumed to be done already, but the configuration of the NSX-T Cloud within NSX ALB is covered. The setup described in this article is based on a vSphere with Tanzu deployment with NSX-T integration, but we will give some hints on what should be changed to implement AKO for vSphere with Tanzu deployments without NSX-T.

Requirements

To be able to integrate NSX ALB as ingress for vSphere with Tanzu Guest Clusters, the following components are required.

  • NSX ALB Controller
  • AKO Sources for Helm Chart deployment
  • NSX-T Datacenter Deployment
  • vSphere with Tanzu deployment integrated with NSX-T
  • Management Network for NSX ALB Service Engines
  • Frontend Network for NSX ALB Service Engines and Ingress Services
  • Local content library for the NSX ALB service engine OVA

Overview of the logical infrastructure


Preparing NSX-T for NSX ALB integration

Info: This step is not needed for Tanzu deployments without NSX-T integration

In preparation for creating an “NSX-T Cloud” inside the NSX ALB configuration, it is required to assign T1-Routers and NSX-T segments as management and data network. Both networks will be used later on to automatically create NSX ALB service engines. Service engines are created on demand whenever a virtual service is created and a suitable service engine is not already available.

Creating required T1-Routers

Two T1-Routers should be created to fulfill the requirements for creating an “NSX-T Cloud” within NSX ALB. First, create a new T1-Router for the management network that will be used for the service engines later on.
In my example the T1-Router for the management network is created with the name “T1-ALB-MGMT” and the following settings.

  • Edge Cluster: No (Distributed only)
  • Assigned T0 Router: Available T0 Router used for Tanzu deployment
  • Router Advertisement: Enabled for “All connected segments & service ports”; “All IPsec local Endpoints” is already enabled by default and not required.

The following screenshot shows the configuration in detail.


In my example the T1-Router for the data network is created with the name “T1-ALB-DATA” and the following settings.

  • Edge Cluster: No (Distributed only)
  • Assigned T0 Router: Available T0 Router used for Tanzu deployment
  • Router Advertisement: Enabled for “All connected segments & service ports”; “All IPsec local Endpoints” is already enabled by default and not required.

The following screenshot shows the configuration in detail.


Creating required Segments

After the T1-Routers have been created, an NSX-T segment is required for each of them.
The first segment is for the management network; it will be attached to the T1-Router “T1-ALB-MGMT” and should be assigned to a transport zone of type overlay. The detailed settings are shown in the following screenshot.


The second segment is for the data network; it will be attached to the T1-Router “T1-ALB-DATA” and should also be assigned to a transport zone of type overlay. The detailed settings are shown in the following screenshot.


Creating content library

Furthermore, it is required to create a content library within the vCenter Server used for the NSX-T and Tanzu deployment. The content library is needed to store the OVA files that will be used to automatically deploy the NSX ALB service engines.

First, choose a name for the content library; in my example it is called “ako”.


In the second step choose the type “Local content library”. The setting under “Download content” is not important in this case and the default can be kept.
The content library will only be used for the service engine OVA and does not consume much storage.


In the next step, choose an available datastore to store the OVA file of the service engines.


In the last step a short summary of the previous steps is shown and can be confirmed with the “Finish” button.


NSX ALB Configuration

As soon as NSX-T is prepared for the integration, the following steps should be performed in NSX ALB to complete the integration of NSX-T and NSX ALB.

Create NSX-T Cloud

Info: If you plan to use AKO for Tanzu deployments with vSphere networking instead of the NSX-T integration, you need to create a cloud of type “VMware vCenter/vSphere ESX”. In this case the networks already created for the NSX ALB integration during the Tanzu deployment can be used instead of creating NSX-T specific segments.

The following steps should be performed to integrate NSX-T with NSX ALB.

Log in to NSX ALB, switch to the “Infrastructure” tab, then open the “Cloud” tab and hit “Create”. After you hit “Create” a window opens that should be completed with the following information.

As shown in the screenshot below, you need to choose a name and an object name prefix for the objects that will be created by NSX ALB later on.


In the next step you configure the connection to NSX-T with the information shown in the screenshot below. For this you need to define the user and password that NSX ALB will use to connect to NSX-T. After the credentials are applied you are able to fill in the remaining information for the “Management Network” and “Data Network”.
In the current example we choose the transport zone “TZ-Overlay” and the T1-Routers and segments created above.


In the next step you need to add the vCenter connection to enable NSX ALB to automatically deploy service engines later on.


To complete the connection to the required vCenter, just hit “ADD” as shown in the previous screenshot and enter the information shown in the following screenshot, such as the credentials for the vCenter Server. The vCenter address is discovered via the NSX-T connection already created and depends on the vCenter Server added as Compute Manager within NSX-T.
Furthermore, you need to choose the content library created earlier.


The following step shows an IPAM and a DNS profile added to the “NSX-T Cloud”. This step can be skipped for now and needs to be completed after both profiles have been created as shown in the upcoming steps.


Modify networks of NSX-T Cloud

The next step is to modify the two networks assigned to the “NSX-T Cloud” in the last step. The modification is needed to define the subnets and IP pools for both networks.
To do so, switch to the tab “Cloud Resources” –> “Networks” and edit both networks as shown in the upcoming screenshots.

For the management network it is necessary to add a new subnet including an IP pool that will be used to assign IPs to the NSX ALB service engines that will be deployed automatically.
For the management network the checkbox “Use Static IP Address for VIPs and Service Engine” should be unchecked and the option “Use for Service Engine” should be checked. This ensures that this IP pool is only used to assign IPs to service engines, but not to virtual services created by Tanzu later on.
Furthermore, the routing context for the management network should be set to “global”.


For the data network “AKO-ALB-Frontend” the steps are very similar, but the IP pool should be configured with “Use Static IP Address for VIPs and Service Engine” checked, and the routing context should be set to “T1-ALB-DATA”.
In your environment the routing context may have a different name, since it is derived from the name of the T1-Router created and assigned for the data network.


Routing

The next step is to configure the gateway that should be used for the data network under “Cloud Resources” –> “Routing”.
For this and all other steps, make sure you are changing the objects in the context of the “NSX-T Cloud” created earlier, as shown in the following screenshot under “Select Cloud”.

The following screenshot shows the different routing tables for the two routing contexts created by the “NSX-T Cloud”. You need to create a static default route towards the T1-Router “T1-ALB-DATA” for the routing context “T1-ALB-DATA”.
For the routing context “global” it is not required to add a static route, since the service engines automatically get the right gateway for the management network. In case you have problems with management connectivity after the deployment of service engines, you can set a specific gateway within the dedicated management routing context via the NSX ALB CLI.


Create IPAM Profile

Under the tab “Templates” –> “Profiles” –> “IPAM/DNS Profiles” you need to create an IPAM profile to ensure that all assigned IPs of the different IP pools are tracked in the inventory of NSX ALB.

The following screenshot shows the configuration in detail.


The created IPAM profile should be assigned to the “NSX-T Cloud” created earlier.

Create DNS Profile

Under the tab “Templates” –> “Profiles” –> “IPAM/DNS Profiles” you need to create a DNS profile. The detailed configuration is shown in the following screenshot.


The created DNS profile should be assigned to the “NSX-T Cloud” created earlier.

Create Service Engine Group

Under the tab “Infrastructure” –> “Cloud Resources” –> “Service Engine Group” select the created cloud and edit the available service engine group “Default-Group”.
The basic settings of the default service engine group can be kept, but under the “Advanced” tab it is required to adjust the settings regarding the placement of the service engines within the vSphere cluster.

For the advanced settings it is recommended to configure the following:

  • vCenter Server
  • Service Engine Folder (VM and host folder created within vCenter to place the service engines in)
  • vSphere Cluster where the service engines should be deployed
  • Datastore that should be used for the service engine deployment

The following screenshot shows the settings used for the described example deployment.


AKO Integration

Now you should be ready to deploy AKO (Avi Kubernetes Operator) within a new or existing Tanzu K8s Guest Cluster. The following example shows the deployment of a K8s cluster, the deployment of AKO based on a publicly available Helm chart, and how to create an ingress with NSX ALB as ingress controller.

Deploy vSphere with Tanzu Guest Cluster

As the first step to create a new Tanzu K8s Guest Cluster, it is necessary to create a new vSphere namespace. The vSphere namespace is created in the vCenter UI under “Menu” –> “Workload Management” –> “Namespaces” –> “New Namespace”.

As shown in the screenshot below, you have to choose the vSphere cluster where Tanzu is deployed and a DNS-compliant name. Afterwards hit “Create” and the vSphere namespace is created.


After the vSphere namespace is created you need to adjust the following settings.

  • Assign Permissions to a User
  • Assign a storage policy
  • Assign allowed VM classes
  • Assign content library which was already created for Tanzu deployment

Additional settings, such as limiting the available resources, are possible but optional.

To create the K8s cluster you need to connect to the Tanzu Supervisor Cluster API. Example: kubectl vsphere login --server https://10.5.198.2 --insecure-skip-tls-verify --vsphere-username administrator@vsphere.local
After the login, switch to the namespace created above. Example: kubectl config set-context --current --namespace=ako-test

Furthermore, it is required to prepare a YAML-based configuration for the deployment of the cluster, as shown in the following example.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: ako-test
  namespace: ako-test
spec:
  distribution:
    version: v1.20.9
  topology:
    controlPlane:
      count: 1
      class: best-effort-xsmall
      storageClass: tanzu-k8s-custom-policy
    workers:
      count: 2
      class: best-effort-xsmall
      storageClass: tanzu-k8s-custom-policy

The YAML file will be applied with the command kubectl apply -f <YAML file name> -n <namespace name>

After the worker and master nodes of the cluster are deployed based on your applied configuration, you need to log in to this cluster. Example: kubectl vsphere login --vsphere-username administrator@vsphere.local --server=10.5.198.2 --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace=ako-test --tanzu-kubernetes-cluster-name=ako-test
Furthermore, it is required to adjust the pod security policies to allow the deployment of pods inside this new K8s cluster. Example: kubectl create clusterrolebinding allow-any-sa --clusterrole=psp:vmware-system-privileged --group=system:serviceaccounts

Warning: Please note that the previous command assigns the privileged cluster role to all service accounts. This is not best practice and should not be done in production environments; a more restrictive, namespace-scoped alternative is sketched below.
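
As a rough sketch of such an alternative, a RoleBinding can grant the same pod security policy only to the service accounts of a single namespace, for example the avi-system namespace created in the next section; one such binding would be needed per namespace that runs pods.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-privileged-psp
  namespace: avi-system                        # grant only within this namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:vmware-system-privileged
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:avi-system      # all service accounts in avi-system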

Deploy Helm Chart

The first step for the deployment of the AKO Helm chart is to create a dedicated namespace within the previously created K8s cluster. Example: kubectl create ns avi-system

The next step is to add the Helm repository with the command helm repo add ako <AKO Helm repository URL>
After the repo is added you are able to search the repo content with the command helm search repo ako

In preparation for the deployment you need to gather the configuration file for AKO with the command helm show values ako/ako --version 1.7.1 > values.yaml

The most important required adjustments within the “values.yaml” file are shown below; a condensed example excerpt follows the list.

  • clusterName: ako-test
  • layer7Only: true (required to prevent AKO from handling services of type LoadBalancer, which are already integrated by NSX-T)
  • nsxtT1LR: ‘/infra/tier-1s/T1-ALB-DATA’
  • vipNetworkList:
    • networkName: AKO-ALB-Frontend (enter the exact name of the data network assigned to the NSX-T Cloud of NSX ALB)
    • cidr: 10.5.213.32/27 (CIDR of the data network)
  • serviceEngineGroupName: Default-Group
  • controllerVersion
  • cloudName: sdbx01-new (name of the NSX-T cloud created within NSX ALB)
  • controllerHost: <fqdn of NSX ALB controller>
  • avicredentials: (NSX ALB Credentials)
    • username: admin
    • password: <secure password>
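
To illustrate how these settings fit together, here is a condensed values.yaml excerpt based on the example values above. The exact nesting of the keys may differ between AKO chart versions, so verify it against the output of helm show values; the empty values are placeholders for details specific to your environment.

AKOSettings:
  clusterName: ako-test
  layer7Only: true
NetworkSettings:
  nsxtT1LR: '/infra/tier-1s/T1-ALB-DATA'
  vipNetworkList:
    - networkName: AKO-ALB-Frontend
      cidr: 10.5.213.32/27
ControllerSettings:
  serviceEngineGroupName: Default-Group
  controllerVersion: ''            # version of your NSX ALB controller
  cloudName: sdbx01-new
  controllerHost: ''               # FQDN of the NSX ALB controller
avicredentials:
  username: admin
  password: ''                     # secure password of the NSX ALB admin user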

After you have completed the values.yaml file, you just need to install the chart into the previously created namespace “avi-system” with the command helm install ako/ako --generate-name --version 1.7.1 -f values.yaml -n avi-system

To be able to use advanced settings like WAF policies you need to add the “HostRule” extension. To gather the required sources, enter the command helm template ako/ako --version 1.7.1 --include-crds --output-dir <some directory for the sources>

Apply the sources gathered in the previous step: kubectl apply -f <output directory from the previous step>

Now you have successfully completed the integration of NSX ALB within your Tanzu K8s Guest Cluster and you are able to create ingress services as shown in the next steps.

Deploy example Application

The first step to deploy a new application is to create a dedicated namespace within the K8s cluster. Example: kubectl create ns hipster

For the example deployment we used the manifest available under https://raw.githubusercontent.com/aidrees/k8s-lab/master/hipster-no-lb.yaml
However, we adjusted this deployment to change the service type of the frontend service from “ClusterIP” to “NodePort”. The example configuration for the NodePort definition is shown below.

spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30002
  selector:
    app: frontend
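
Put together, the adjusted frontend Service could look roughly as follows; the service name and selector are taken from the linked hipster manifest and should be verified against it.

apiVersion: v1
kind: Service
metadata:
  name: frontend           # name as used in the linked hipster manifest
spec:
  type: NodePort           # changed from ClusterIP to NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30002
  selector:
    app: frontend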

In the next step you need to apply your application with the command kubectl apply -f hipster-no-lb.yaml -n hipster

Create SSL Certificates for HTTPs Ingress

If you plan to publish an application with SSL enabled between ingress and user, you need to create a certificate and key pair. For our example we created a self-signed certificate and key pair using the script available under https://github.com/jhasensio/avi_ako/blob/main/scripts/create_secret.sh

After you have downloaded the script you are able to generate the certificate and key as well as create the secret with the command ./create_secret.sh <friendly name of secret> <certificate subject> <namespace where the secret should be created>.
Example: ./create_secret.sh hipster /C=DE/ST=Mainz/CN=hipster.test.local hipster
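
The result is a standard Kubernetes TLS secret in the target namespace. For reference, a manually created equivalent would look roughly like the sketch below; the secret name is an assumption here and must match the secretName referenced by the ingress in the next step, so adapt it to whatever the script actually created.

apiVersion: v1
kind: Secret
metadata:
  name: hipster-secret                        # must match secretName in the Ingress
  namespace: hipster
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>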

Create Ingress

In this step we create an ingress for the previously deployed application, including the secret created in the last step. This can be done with a YAML configuration file as shown in the following example.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hipster-tls
  labels:
    app: hipster
spec:
  tls:
  - hosts:
    - hipster.hob.local
    secretName: hipster-secret
  rules:
    - host: hipster.hob.local
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: frontend
              port:
                number: 80

After the YAML file is created, just apply it with the command kubectl apply -f <ingress YAML file> -n hipster

Create Host Rule for created Ingress

Now the ingress is created and your application is available, but sometimes it is required to add some additional security. Therefore we show an example of how to apply a WAF policy to your previously created ingress.

First you need to create a WAF policy within NSX ALB under “Templates” –> “WAF” –> “WAF Policy”. We will not describe this in detail, but you can use the official documentation (https://avinetworks.com/docs/latest/waf-policy/).

After you have created the WAF policy, just create a YAML configuration file for the host rule you want to apply, as shown in the following example.

apiVersion: ako.vmware.com/v1alpha1
kind: HostRule
metadata:
  name: hipster-ingress
  namespace: hipster
spec:
  virtualhost:
    fqdn: hipster.hob.local
    fqdnType: Contains
    wafPolicy: System-WAF-Policy
    enableVirtualHost: true

After the YAML file is created, just apply it with the command kubectl apply -f <host rule YAML file> -n hipster

Host rules can apply many more settings, but we will not discuss all of them here. You can read the documentation (https://avinetworks.com/docs/ako/1.7/custom-resource-definitions/) for more information.