
Getting Started with AKS and .NET Core

Containers have been rapidly gaining steam over the last few years, and as companies seek to run containers in production, new challenges arise: scaling, load balancing, deployment, and service discovery are all prime examples. Container orchestrators are the technology that helps solve these problems and increase developer productivity.

Azure Kubernetes Service (AKS) makes it fairly simple to set up and operationalize Kubernetes, a container orchestrator, in Azure. It takes care of tasks such as automating Kubernetes upgrades and scaling the cluster. Because it is Kubernetes, you also maintain application portability, meaning you aren't locked into Azure.

In this blog post, we are going to look at setting up an AKS cluster and deploying an application. The example is an MVC application that communicates with a backend service using NATS. NATS is a simple, high performance open source messaging system for cloud native applications, IoT messaging, and microservices architectures. This example application will introduce us to a few of the fundamental Kubernetes concepts: pods, controllers, services, and service discovery. It will also show us how we can structure an application to scale backend services.

Getting Started with AKS

Install Azure CLI

Setting up an AKS cluster is easy using the Azure CLI. The Azure CLI is the command line interface that allows you to manage your Azure resources. If you don't have the Azure CLI installed, you can find installation instructions in the Azure CLI documentation.

Once you have the Azure CLI installed, log in and ensure that the needed Azure service providers are enabled.


az login
az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService
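
Provider registration can take a few minutes. If you want to confirm a provider has finished registering before moving on, you can check its state.

az provider show -n Microsoft.ContainerService --query registrationState -o tsv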

Create a Resource Group

Create an Azure resource group. As AKS is still in preview, only some locations are available.

az group create --name <resource-group-name> --location eastus

Create an AKS Cluster

Create a 1 node AKS cluster using the Azure CLI. By default, this command creates a standard DS1 v2 VM node. This step takes quite a few minutes to complete.

az aks create --resource-group <resource-group-name> --name <cluster-name> --node-count 1 --generate-ssh-keys

This step creates a new resource group and adds several new resources to create your cluster.
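
If you want to check on the cluster, az aks show reports its state; the provisioning state should read Succeeded once the cluster is ready.

az aks show --resource-group <resource-group-name> --name <cluster-name> --output table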


Connecting to the Cluster

Kubectl is the Kubernetes CLI. If you need to install it, you can run this command using the Azure CLI.

az aks install-cli

Once you have the CLI installed, you can configure its context to point to your new cluster with this command.

az aks get-credentials --resource-group <resource-group-name> --name <cluster-name>

You can confirm that everything is configured correctly by running the kubectl get nodes command.

kubectl get nodes

You should see your cluster's node listed in the output. Voila, you have an AKS cluster set up in Azure and you're ready to deploy applications.

NAME                        STATUS    ROLES    AGE    VERSION
aks-nodepool1-50042188-0    Ready     agent    4d     v1.7.9

Scaling the Cluster

As mentioned earlier, a benefit of AKS is that it makes managing your Kubernetes cluster easy. For example, you can scale your cluster to 2 nodes with a single command. The command takes a couple of minutes because it is spinning up a new Azure VM and adding it to the cluster.

az aks scale --resource-group <resource-group-name> --name <cluster-name> --node-count 2
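
Once the command finishes, kubectl get nodes should list both nodes with a Ready status.

kubectl get nodes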

Deploying an Application

The example application is available on GitHub. The branch aks-getting-started-v1 was used at the time of this blog post.

git clone https://github.com/joelbrinkley/aks-getting-started.git
cd aks-getting-started
git checkout aks-getting-started-v1

Application Overview

This application consists of a front-end ASP.NET Core MVC application that sends commands via NATS messaging to a backend .NET Core service. Within the k8s folder, you will find YAML files that describe the desired state of our application.

If you want to install and explore this application locally using Docker, you can run the PowerShell script local-deploy.ps1. This will use Docker Compose to build and deploy the application; a rough sketch of what it wires together follows the command below.

./local-deploy.ps1 -build
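
A Compose file for this application would wire together the same three pieces we are about to deploy to Kubernetes: the NATS server, the todo service, and the web app. The sketch below is only illustrative; the service names, the web app image name, and the port mappings are assumptions rather than the exact contents of the repository's compose file.

version: '3'
services:
  nats:
    image: nats:linux                         # same image used in the Kubernetes manifests
    ports:
      - "4222:4222"
  todo-service:
    image: joelvbrinkley/todo-service:v1
    environment:
      - NATS_CONNECTION=nats://nats:4222      # Compose DNS name for the nats service
    depends_on:
      - nats
  todo-webapp:
    image: joelvbrinkley/todo-webapp:v1       # hypothetical image name
    environment:
      - NATS_CONNECTION=nats://nats:4222
    ports:
      - "51101:80"
    depends_on:
      - nats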

Kubernetes Concepts

In this example, each component we deploy consists of a deployment controller, a pod, and, in most cases, a service. The following is a brief, high-level explanation of each concept.

Pods are the basic building blocks in Kubernetes. A pod can run a single container or multiple containers. Pods are designed to be disposable and mortal: once a pod dies, it stays dead, and a new pod is created in its place.

A deployment controller allows us to describe the desired state of our pods. The controller ensures that we have the correct number of pods by replacing any that fail, and it also manages rolling out updates.

A service exposes a set of pods and load balances across them, either internally or externally to the cluster. Every service is assigned a DNS name, and our application uses these names to integrate its components; for example, both the web app and the backend service reach NATS at nats://nats-service:6565.

Deploying a NATS Container

Within the k8s folder, let's examine nats.yml. This YAML file consists of a deployment and a service that exposes the NATS container internally to the cluster. Use Kubectl to create the deployment and service on your cluster. After we apply the file and look through the logs, we will examine it in more detail.

kubectl apply -f .\k8s\nats.yml

Let's check whether the pod was created and look at the logs to confirm that NATS has started.

First, list the pods so that we can see the pod's status and grab its name for displaying logs.

kubectl get pods

NAME                        READY    STATUS        RESTARTS    AGE
nats-3798695116-mvpp4       1/1      Running       0           3d

Next, show the log of the pod. We can see that the NATS server has started successfully.

kubectl logs <pod-name>

[5] 2018/03/22 20:15:23.414561 [INF] Starting nats-server version 1.06
[5] 2018/03/22 20:15:23.414600 [INF] Git commit [02dd205]
[5] 2018/03/22 20:15:23.414678 [INF] Starting http monitor on 0.0.0.0:8222
[5] 2018/03/22 20:15:23.414711 [INF] Listening for client connections on 0.0.0.0:4222
[5] 2018/03/22 20:15:23.414721 [INF] Server is ready
[5] 2018/03/22 20:15:23.415189 [INF] Listening for route connections on 0.0.0.0:6222

Finally, get the services. We can see our new nats-service listed in the output.

kubectl get services

NAME                  TYPE            CLUSTER-IP    EXTERNAL-IP    PORT(S)                                        AGE
kubernetes            ClusterIP       10.0.0.1      <none>         443/TCP                                        4d
nats-service          NodePort        10.0.39.147   <none>         6565:30831/TCP,6566:30967/TCP,6567:31324/TCP   3d

The deployment describes the name, number of replicas, any labels, and the pod template. The pod template describes how the containers should be configured. In our case, we are using the container image nats:linux, and we want to expose the ports 4222, 6222, and 8222.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nats
  labels:
    component: nats
    app: todo-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: nats
        app: todo-app
        version: v1
    spec:
      containers:
      - name: todo-messaging
        image: nats:linux
        ports:
          - containerPort: 4222
            name: client
          - containerPort: 6222
            name: cluster
          - containerPort: 8222
            name: monitor

The service describes its name and the ports we want to map and expose to the cluster. Inside the cluster, other pods reach NATS through the service's DNS name and ports; the NodePort type also opens a port on each node. Later, when we examine the MVC web app, we will see how to expose a service externally through a load balancer.

apiVersion: v1
kind: Service
metadata:
  name: nats-service
spec:
  selector:
    component: nats 
  ports:
  - name: client
    port: 6565
    targetPort: 4222
  - name: cluster
    port: 6566
    targetPort: 6222
  - name: monitor
    port: 6567
    targetPort: 8222
  type: NodePort

It is important to note that if you use NATS in production, a NATS cluster should be configured. You can find more information on setting up a NATS cluster on Kubernetes in the NATS documentation.

Deploying the Todo-Service

Within the k8s folder, let's examine todo-service.yml. This YAML file consists of a deployment only. No Kubernetes service is needed because the todo service communicates via NATS and does not need to be exposed within the cluster. Use Kubectl to create the deployment, then use the previous commands to check whether the pod is running.

kubectl apply -f .\k8s\todo-service.yml
kubectl get pods
kubectl logs <pod-name>

The log should show that the subscription has started.

Using natsconnection string: nats://nats-service:6565
Command Bus connected to nats://nats-service:6565
Starting subscription for command handler TodoMessages.CreateTodo

As part of the container template, we specify the NATS connection via an environment variable. We use the name given to the NATS service and the port that the service exposed internally to the cluster.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: todo-service-deployment
spec:
  selector:
      matchLabels:
          app: todo-app
          component: todo-service
  replicas: 1
  template:
    metadata:
      labels:
        component: todo-service
        app: todo-app
        version: v1
    spec:
      containers:
      - name: todo-service
        image: joelvbrinkley/todo-service:v1
        env:
          - name: NATS_CONNECTION
            value: nats://nats-service:6565
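
If you want to confirm the variable made it into the running container, you can print it from the pod.

kubectl exec <pod-name> -- printenv NATS_CONNECTION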

Deploying the Todo-Webapp

Within the k8s folder, let's examine todo-webapp.yml. This YAML file consists of a deployment and a service that exposes the MVC application externally. Use Kubectl to create the deployment and service. Again, we will use the previous commands to check whether the pod is running.

kubectl apply -f .\k8s\todo-webapp.yml
kubectl get pods
kubectl logs <pod-name>

The log should show that your ASP.NET Core application has started and is listening on port 80.

Hosting environment: K8s
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.

As part of the container template, we assign the ASPNETCORE_ENVIRONMENT variable and expose port 80. The service maps port 51101 to container port 80, and the LoadBalancer type tells Kubernetes to provision an external Azure load balancer so the website is reachable from outside the cluster. Creating this service can take a few minutes; check the service for an external IP.
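
The deployment portion of todo-webapp.yml is similar to the one for the todo service, and the service portion looks roughly like the sketch below. The port mapping and LoadBalancer type come straight from the description above; the selector label is an assumption based on the component/app labeling pattern used in the other manifests.

apiVersion: v1
kind: Service
metadata:
  name: todo-webapp-service
spec:
  selector:
    component: todo-webapp   # assumed label, following the pattern used for nats
  ports:
  - name: http
    port: 51101
    targetPort: 80
  type: LoadBalancer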

kubectl get services

NAME                  TYPE            CLUSTER-IP    EXTERNAL-IP      PORT(S)                                        AGE
kubernetes            ClusterIP       10.0.0.1      <none>           443/TCP                                        4d
nats-service          NodePort        10.0.39.147   <none>           6565:30831/TCP,6566:30967/TCP,6567:31324/TCP   3d
todo-webapp-service   LoadBalancer    10.0.35.199   52.224.167.126   51101:31497/TCP                                7m                    

You can see in the Azure Portal that this creates a public IP address.


Running the Application

Navigate to the URL and port provided by kubectl get services. The MVC home page should load.

Navigate to the todo section, enter a name and description, and then press Add. This sends a command via NATS to the backend service. The MVC controller waits for a response and displays it in the view.

Check the logs of the service to verify that it was processed.

kubectl logs <todo-service-pod-name>            

You should see a message that your todo command was processed.

Using natsconnection string: nats://nats-service:6565
Command Bus connected to nats://nats-service:6565
Starting subscription for command handler TodoMessages.CreateTodo
Processing create todo command: test name, test description        
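
Because the todo service listens on NATS and has no Kubernetes service in front of it, scaling the backend is just a matter of increasing the deployment's replica count; note that spreading commands across multiple instances assumes the handlers subscribe through a NATS queue group.

kubectl scale deployment todo-service-deployment --replicas=2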

Final Thoughts

Realizing the full value of containers in production requires container orchestration. AKS makes it quick and easy to get started leveraging Kubernetes to build scalable applications in Azure.

Though not covered in this tutorial, there is a lot of potential for .NET application modernization with container orchestration. As support grows for Windows containers, it will become easier to modernize a legacy .NET application by containerizing it and deploying it in the cloud. It also becomes easier to deploy new services that extend a legacy application's functionality. This approach saves IT organizations from complete application rewrites and lets them focus on new business value and speed to market.

After reading this blog post, you should now have an idea of how easy it is to set up AKS and start structuring applications to take advantage of the platform.

About The Author

App Dev Consultant

Joel is part of Cardinal's App Dev Practice working out of their Nashville office. He works with a plethora of technologies from .NET and JavaScript frameworks to Terraform. Currently, he is interested in leveraging Azure and container orchestration to build robust .NET solutions.