NSX Application Platform Part 3: NSX-T, NSX-ALB (Avi), and Tanzu

NSX-T Logical Networking, Ingress, Load Balancing, and Tanzu Kubernetes Grid Service (TKGS)

With all the pre-work now complete, we are ready to configure the Supervisor Control Plane for TKGS.

4. We will need this certificate in a later step, so on the screen below, click the small down arrow next to the certificate and copy it to Notepad.

The workload network will be assigned to workload clusters deployed from the supervisor cluster; in my example this is the “workload-tkg (172.52.0.0/24)” network. Click Save, then Next when complete.

NSX-T Networking

The Tier-0 gateway is responsible for making these networks available to the physical network. In this case BGP is used; however, you may use static routes or OSPF instead. The screenshots below display the basic configuration required on the Tier-0 gateway, similar to the Tier-1. At a minimum, “Connected Interfaces & Segments” will need to be enabled for route redistribution.
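If you prefer to drive this from the API rather than the UI, the same redistribution settings can be applied through the NSX-T Policy API. The sketch below is only illustrative: the manager FQDN, Tier-0 ID and locale-services ID are placeholders from my lab, and the exact payload can vary between NSX-T versions, so validate it against the API guide for your release.

# Hypothetical sketch - enable redistribution of connected Tier-0/Tier-1
# interfaces and segments on the Tier-0 gateway (IDs are placeholders)
curl -k -u admin -X PATCH \
  "https://nsx-manager.lab.local/policy/api/v1/infra/tier-0s/sm-m01-t0-gw01/locale-services/default" \
  -H "Content-Type: application/json" \
  -d '{
        "route_redistribution_config": {
          "bgp_enabled": true,
          "redistribution_rules": [
            {
              "name": "tkgs-redistribution",
              "route_redistribution_types": ["TIER0_CONNECTED", "TIER0_SEGMENT", "TIER1_CONNECTED", "TIER1_SEGMENT"]
            }
          ]
        }
      }'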

napp tanzu networking stack options

root@jump:~# kubectl vsphere login --server 172.51.0.2 -u [email protected] --insecure-skip-tls-verify

KUBECTL_VSPHERE_PASSWORD environment variable is not set. Please enter the password below
Password:
Logged in successfully.

You have access to the following contexts:
172.51.0.2
If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`

## Change to the supervisor cluster context
root@jump:~# kubectl config use-context 172.51.0.2

## Check the namespace
root@jump:~# kubectl get ns
NAME                                         STATUS   AGE
default                                      Active   79m
kube-node-lease                              Active   79m
kube-public                                  Active   79m
kube-system                                  Active   79m
svc-tmc-c8                                   Active   76m
vmware-system-ako                            Active   79m
vmware-system-appplatform-operator-system   Active   79m
vmware-system-capw                           Active   77m
vmware-system-cert-manager                   Active   79m
vmware-system-csi                            Active   77m
vmware-system-kubeimage                      Active   79m
vmware-system-license-operator               Active   76m
vmware-system-logging                        Active   79m
vmware-system-netop                          Active   79m
vmware-system-nsop                           Active   76m
vmware-system-registry                       Active   79m
vmware-system-tkg                            Active   77m
vmware-system-ucs                            Active   79m
vmware-system-vmop                           Active   77m

To check the addresses assigned to the supervisor control plane VMs, click on any of them. Then click More next to the IP addresses.

I have recently put together a video that provides clear guidance on deploying NAPP; it can be seen here.

NSX-T Segments

To summarise the NSX-T and NSX-ALB section: I have created segments in NSX-T which are presented to vCenter and NSX-ALB. These segments will be utilized for the VIP network in NSX-ALB, as well as workload and front-end networks required for TKGS. The below illustration shows the communication from service engines to the NSX-T workload-tkg segment, and eventually the workload clusters that will reside on the segment.

napp tanzu nsx-t segments

To create the static route, navigate to Infrastructure -> Cloud Resources -> Routing -> Click Create.

NSX-T Gateways

Tier-1 Gateway

3.4 Fill in the required specifications and then click Create.

napp tanzu tier1

5. On the next screen, select Subscribed content library, enter the URL, and leave Download content set to “immediately”.

Tier-0 Gateway

The IPAM profile will be covered in one of the following sections.

napp tier-0 route redistribution
napp tanzu tier-0 gateway configuration

You should now be able to see the VM class you just created. I have selected it, along with some others. Click OK when you are done.
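If you would like to double-check from the command line which VM classes are available and which ended up bound to the namespace, the supervisor cluster exposes them as Kubernetes objects. This is purely an optional verification step; “impactor” is the namespace name used later in this post.

# Optional check - list the VM classes in the supervisor cluster and those bound to the namespace
kubectl get virtualmachineclasses
kubectl get virtualmachineclassbindings -n impactor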

NSX Advanced Load Balancer (NSX-ALB/ Avi)

The below screenshot shows the configuration of my IPAM profile.

NSX-ALB Default-Cloud Configuration

Once the deployment is complete, you should see a green tick and “Running” under Config Status.

All three segments are connected to the same Tier-1 gateway “sm-edge-cl01-t1-gw01” and overlay transport zone “sm-m01-tz-overlay01”. Because the segments are connected to a Tier-1 gateway, they must be overlay networks and will utilize NSX-T logical routing; VLAN-backed segments cannot be attached to a Tier-1 gateway.
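If you ever need to recreate one of these segments outside the UI, the same Tier-1 attachment can be expressed through the NSX-T Policy API. The example below is only a rough sketch: the manager FQDN, the Tier-1 policy ID, the overlay transport zone UUID, and the gateway address are placeholders from my lab and will differ in your environment.

# Hypothetical sketch - create the workload-tkg overlay segment attached to the Tier-1 gateway
# (FQDN, Tier-1 ID, transport zone UUID and gateway address are placeholders)
curl -k -u admin -X PATCH \
  "https://nsx-manager.lab.local/policy/api/v1/infra/segments/workload-tkg" \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "workload-tkg",
        "connectivity_path": "/infra/tier-1s/sm-edge-cl01-t1-gw01",
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-uuid>",
        "subnets": [ { "gateway_address": "172.52.0.1/24" } ]
      }'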

7. Select the storage device you would like the content to be stored on.

avi tanzu napp default cloud configuration

The nodes have been assigned an IP address in the workload-tkg range.
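If you prefer to confirm this from the command line rather than the vSphere Client, the VirtualMachine objects in the supervisor namespace report the address each node picked up. This is only an optional check; the namespace name (“impactor”) is from my lab, and the exact status field name may vary slightly between vSphere releases.

# Optional check - list each node VM and the IP address it was assigned
# ("impactor" is my lab namespace; the status field may differ by release)
kubectl -n impactor get virtualmachines -o custom-columns=NAME:.metadata.name,IP:.status.vmIp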

avi tanzu napp default cloud configuration

3. Click Save once complete.

NSX-ALB Controller Certificate

NSX Application Platform Part 4: Deploying the Application Platform

  1. To configure a certificate select Templates -> Security -> SSL/TLS Certificates -> Create -> Controller Certificate
avi nsx-alb controller certificate

We will need to configure permissions, storage policies, VM classes, and a content library.

The cluster is deployed in vCenter.

Note: if you have more than one vCenter in linked mode, ensure you have selected the right one.

Note: If you are running a cluster of NSX-ALB controllers, ensure you enter the FQDN and VIP for the cluster under SAN entries.
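Once the certificate has been created and exported, a quick way to confirm the SAN entries made it in is to inspect the PEM file with openssl. The file name below is just a placeholder for wherever you saved the exported certificate.

# Verify the SAN entries on the exported controller certificate (file name is a placeholder)
openssl x509 -in nsx-alb-controller.pem -noout -text | grep -A 1 "Subject Alternative Name"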

avi controller certificate
avi controller certificate

6. Click Next once complete. You will be prompted to accept the certificate, select Yes.

avi control plane access settings

2. Click Create to define a new content library.

Note: This is your last chance to change any configuration. If you go past this point and need to change something that can’t be changed later, you will need to disable and re-enable TKGS.

controller certificate

Service Engine Group Configuration

Depending on which mode of NAPP you want to deploy, you need to configure your VM classes accordingly. A table of the modes and their requirements can be found here. I will be deploying the Advanced form factor, so I will create a VM class to suit.

  1. To configure this, click on Infrastructure -> Cloud Resources -> Service Engine Group -> Edit Default-Group.

2. Select the Advanced tab, click Include and select the cluster.

Note: I selected Small for my lab deployment. However, in production you might choose differently.

Configure the VIP Network

root@jump:/mnt/tanzuFiles# kubectl get virtualmachine -n impactor
NAME                                          POWERSTATE   AGE
impactorlab-control-plane-k4nnd               poweredOn    12m
impactorlab-workers-5srcx-7c776c7b4f-bts8m    poweredOn    8m29s
impactorlab-workers-5srcx-7c776c7b4f-kjr6j    poweredOn    8m27s
impactorlab-workers-5srcx-7c776c7b4f-v98z8    poweredOn    8m28s

To build out this cluster I will use the commands below. I have provided a copy of my cluster.yml file here.
Note: Depending on which NAPP form factor you are deploying, your resource requirements may vary; each form factor’s requirements are listed here. My cluster YAML has been created for the Advanced deployment, which includes NSX Intelligence.
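For readers who just want the general shape of the manifest, the sketch below shows roughly what a TanzuKubernetesCluster definition for this deployment looks like. It is not a copy of my actual cluster.yml (use the link above for that): the TKr version, VM class, and storage class names are placeholders that you would replace with values from your own environment.

# Rough sketch of a TanzuKubernetesCluster manifest (v1alpha2 API) - not my exact file
# vmClass, storageClass and the TKr name below are placeholders
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: impactorlab
  namespace: impactor
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: guaranteed-medium          # placeholder VM class
      storageClass: tanzu-storage-policy  # placeholder storage policy
      tkr:
        reference:
          name: v1.21.6---vmware.1-tkg.1  # placeholder TKr version
    nodePools:
      - name: workers
        replicas: 3
        vmClass: napp-advanced            # VM class sized for the Advanced form factor
        storageClass: tanzu-storage-policy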

nsx-alb configure vip network

2. Fill out the details relevant to your environment in the screen below.

Static Routes

This is also the address / URL that we will now use to connect to the cluster using kubectl and run some further commands.

Note: This article will not walk through the deployment of NSX-T or NSX-ALB (Avi).

Note: You will also need to use the ‘Controller Certificate’ you saved earlier in this window.

IPAM Configuration


Summary

Each of them will have at least one interface and address in the infrastructure-tkg segment and one in the workload-tkg segment. We have now confirmed that all addresses and VIPs have been configured in their respective subnets.

avi vip network and tanzu workload network communication

TKGS (Workload Management)

5. Next we need to assign this certificate as the SSL/TLS certificate used to access the NSX-ALB control plane. To do so, click on Administration -> Settings -> Access Settings -> Edit.

The final part of the series demonstrates the deployment process for NSX Application Platform and its security features (NSX Intelligence, Network Detection and Response, and Malware Prevention).

Configure the Content Library

  1. In vCenter, navigate to Menu -> Content Libraries.

Click Create once you are ready. You should now see the screen below.

To create an IPAM profile navigate to Templates -> Profiles -> IPAM/DNS Profiles -> Click Create.

3.3 Click on Create VM Class.

Note: The difference between read and write access modes can be found here.

root@jump:~# kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 84m
kube-system docker-registry ClusterIP 10.96.0.232 <none> 5000/TCP 84m
kube-system kube-apiserver-lb-svc LoadBalancer 10.96.1.93 172.51.0.2 443:30905/TCP,6443:32471/TCP 77m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 84m
vmware-system-appplatform-operator-system vmware-system-appplatform-operator-controller-manager-service ClusterIP None <none> <none> 84m
vmware-system-capw capi-controller-manager-metrics-service ClusterIP 10.96.1.19 <none> 9844/TCP 82m
vmware-system-capw capi-kubeadm-bootstrap-controller-manager-metrics-service ClusterIP 10.96.1.178 <none> 9845/TCP 82m
vmware-system-capw capi-kubeadm-bootstrap-webhook-service ClusterIP 10.96.1.238 <none> 443/TCP 82m
vmware-system-capw capi-kubeadm-control-plane-controller-manager-metrics-service ClusterIP 10.96.1.138 <none> 9848/TCP 82m
vmware-system-capw capi-kubeadm-control-plane-webhook-service ClusterIP 10.96.0.109 <none> 443/TCP 82m
vmware-system-capw capi-webhook-service ClusterIP 10.96.0.218 <none> 443/TCP 82m
vmware-system-capw capw-controller-manager-metrics-service ClusterIP 10.96.1.43 <none> 9846/TCP 82m
vmware-system-capw capw-webhook-service ClusterIP 10.96.0.87 <none> 443/TCP 82m
vmware-system-cert-manager cert-manager ClusterIP 10.96.1.78 <none> 9402/TCP 83m
vmware-system-cert-manager cert-manager-webhook ClusterIP 10.96.1.199 <none> 443/TCP 83m
vmware-system-license-operator vmware-system-license-operator-webhook-service ClusterIP 10.96.0.13 <none> 443/TCP 81m
vmware-system-netop vmware-system-netop-controller-manager-metrics-service ClusterIP 10.96.1.85 <none> 9851/TCP 84m
vmware-system-nsop vmware-system-nsop-webhook-service ClusterIP 10.96.1.65 <none> 443/TCP 81m
vmware-system-tkg vmware-system-tkg-controller-manager-metrics-service ClusterIP 10.96.0.148 <none> 9847/TCP 81m
vmware-system-tkg vmware-system-tkg-webhook-service ClusterIP 10.96.1.184 <none> 443/TCP 81m
vmware-system-vmop vmware-system-vmop-controller-manager-metrics-service ClusterIP 10.96.1.254 <none> 9848/TCP 81m
vmware-system-vmop vmware-system-vmop-webhook-service ClusterIP 10.96.0.141 <none> 443/TCP 81m

Creating and Configuring a Namespace

Using the jumpbox configured in Part 1, run the below command.

NSX-T: Using this option will instantiate the logical networking components required for the deployment in NSX-T. This includes a Tier-1 gateway and overlay segments for ingress, service, egress, and VIPs. The VIPs are created in NSX-T’s native load balancer.

2. Select vCenter Server Network and click Next. The reasons for this selection were explained at the beginning of this article.

root@jump:/mnt/tanzuFiles# kubectl vsphere login --server 172.51.0.2 -u [email protected] --insecure-skip-tls-verify

KUBECTL_VSPHERE_PASSWORD environment variable is not set. Please enter the password below
Password:
Logged in successfully.

## deploy the cluster
root@jump:/mnt/tanzuFiles# kubectl apply -f cluster.yml

## workflow kicked off
tanzukubernetescluster.run.tanzu.vmware.com/impactorlab created
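
After applying the manifest, you can keep an eye on the rollout from the supervisor context with the commands below; “impactor” and “impactorlab” are the namespace and cluster names used in this post.

## check on the cluster build
kubectl get tanzukubernetescluster impactorlab -n impactor
kubectl get virtualmachine -n impactor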