Using Azure Containers with PaaS services: a containerized cloud app example.

Welcome to the new era of software development, where you can fully focus on your solution and don’t have to worry about topics like:
– dependency management
– (high) availability/scaling issues
– monitoring
– security

You can find the code here; there is also a YouTube clip.

Motivation / Why

Because it’s fun! And I love to show you how easy it is to start with an idea and end up with something usable in a very short period of time.

In this example I am using the following technologies:

  • Python; code around the shell scripts, for timing the results and for persisting data to and retrieving it from the Redis cache
  • Java; Spring Boot code that can only be accessed by authenticated users, as shown here
  • Docker; to containerize my code
  • Managed Kubernetes on Azure (AKS); this provides a container orchestration solution on Azure where I don’t have to worry about setting up and maintaining machines, as it is available as a PaaS service. I am using version 1.8.1 so I can use CronJobs out of the box.
  • Azure Container Instances (ACI); a serverless solution on Azure for hosting a single Docker container, used for hosting the (Java) frontend app.
  • Azure Container Registry (ACR); a serverless solution on Azure for a private Docker registry; it contains all the builds.
  • Redis Cache; an open-source solution for storing and accessing (temporary) data; this is available as a PaaS service on Azure.
  • Cosmos DB; Microsoft’s document database, comparable to MongoDB. This database can be globally scaled and is available as a PaaS service on Azure.
  • Azure Key Vault; an enterprise-ready cloud solution with all the required security certifications. This stores my secrets (e.g. passwords) and certificates.

What does it do:

Provision: with the Azure CLI it retrieves all the DS2 machines per region and stores the machine/region in the Redis cache.
Build: on a scheduled basis the vmcr8tester retrieves a value from the cache and, based on these values, creates the machine in that region. It measures the time it took to build the machine and persists the results in Cosmos DB.
Show: the frontend app (once you are authenticated) retrieves the values and calculates the average build time per region. Averages longer than 3 minutes (180 sec) are considered slow builds. This is what it looks like:

This is how we do it:

  1. Get the latest sources for this project:  git clone https://github.com/chrisvugrinec/azure-vm-weathermap.git
  2. cd to scripts and then run the create-all script to create all the required (Azure) resources: ./create-all.sh
  3. Prepare Key Vault (see the command sketch after this walkthrough)
    1. Create the vmcr8tester-redis-pw secret in Key Vault and give it the value of the primary Redis Cache key
    2. Create the vmcr8tester-cosmosdb-pw secret in Key Vault and give it the value of the primary Cosmos DB key
    3. Make sure that [ solution name ] gets the following authorization on the secrets of this vault: get/list and set, or select the “Secret Management” template
  4. Prepare Cosmos DB; create the DB database and the azurevms collection with partition key /id
  5. Kubernetes stuff
    1. On your local machine get the Kubernetes credentials with the following command: az aks get-credentials -n [ NAME OF YOUR AKS CLUSTER ] -g [ NAME OF YOUR RESOURCEGROUP ]. This copies the cluster’s kube config into your local ~/.kube/config file so you can talk to the API server
    2. Create the secret for accessing the private registry with the following command: kubectl create secret docker-registry [ solution name ] --docker-server=[ solution name ].azurecr.io --docker-username=[ solution name ] --docker-password=[ primary registry key ] --docker-email=[ your email ]
  6. Create Provision job
    1. cd to backend/provision and do ./create.sh
    2. docker login [ your registry URL, for example cvugrinecapp1.azurecr.io ] and then push the provision image to your private registry: docker tag [ your image name:1.0 ] [ your registry URL/your image name:latest ], followed by docker push [ your registry URL/your image name:latest ] (see the registry sketch after this walkthrough)
    3. Go to backend/kube and edit the provision-job.yaml (don’t forget to change your secret), then create the job with the following command: kubectl create -f provision-job.yaml (use the example as a template)
  7. Create VM create job
    1. cd to backend/vmcr8tester and do ./create.sh
    2. docker login [ your registry URL, for example cvugrinecapp1.azurecr.io ] and then push the vmcr8tester image to your private registry: docker tag [ your image name:1.0 ] [ your registry URL/your image name:latest ], followed by docker push [ your registry URL/your image name:latest ]
    3. Go to backend/kube and edit the create-job.yaml (don’t forget to change your secret), then create the job with the following command: kubectl create -f create-job.yaml (use the example as a template; a CronJob sketch follows this walkthrough)
  8. Create Frontend with Container Instance
    1. change the properties in the frontend/java/src/main/resources/application.properties
    2. mvn clean package
    3. docker build -t [ your container url ]/someimage:latest .
    4. docker push [ your container url ]/someimage:latest
    5. ./createContainerInstance.sh (a rough az container create sketch follows this walkthrough)
    6. change the reply URL in Azure AD to your container instance (you can see the container instance with the following command: az container list -o table)
    7. grant permissions to this user to access the azure ad tenant
  9. Set up OMS logging
    1. create a workspace by adding Log Analytics to your resource group
    2. get your workspace ID and key: go to your workspace, then Settings, Connected Sources, Linux machines
    3. edit the backend/oms/omsagent.yaml file (check the WSID and KEY section)
    4. create the deployment with kubectl create -f omsagent.yaml
    5. enable your OMS agent logging: go to the marketplace (solutions gallery), select Application and Insights and then the Container Monitoring solution, and you will see some cool things like this:

Here is a screenshot of OMS logging, which contains a query for the logs of the provision container:
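Back to the walkthrough above: for step 3 (preparing Key Vault) the commands look roughly like this. It is only a sketch; the names in [ brackets ] are placeholders for your own resources, and it assumes the Azure CLI 2.0 commands for Redis Cache, Cosmos DB and Key Vault:

# read the primary keys of Redis Cache and Cosmos DB
REDIS_KEY=$(az redis list-keys -n [ your redis name ] -g [ your resourcegroup ] --query primaryKey -o tsv)
COSMOS_KEY=$(az cosmosdb list-keys -n [ your cosmosdb name ] -g [ your resourcegroup ] --query primaryMasterKey -o tsv)

# store them as the two secrets the jobs expect
az keyvault secret set --vault-name [ your keyvault name ] --name vmcr8tester-redis-pw --value "$REDIS_KEY"
az keyvault secret set --vault-name [ your keyvault name ] --name vmcr8tester-cosmosdb-pw --value "$COSMOS_KEY"

# give [ solution name ] get/list/set rights on the secrets (or pick the "Secret Management" template in the portal)
az keyvault set-policy --name [ your keyvault name ] --spn [ your app id ] --secret-permissions get list set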
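For the registry steps (6.2 and 7.2), pushing an image to ACR boils down to a login, a tag and a push. Again a sketch with placeholder names:

docker login [ solution name ].azurecr.io -u [ solution name ] -p [ primary registry key ]
docker tag [ your image name ]:1.0 [ solution name ].azurecr.io/[ your image name ]:latest
docker push [ solution name ].azurecr.io/[ your image name ]:latest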
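For steps 6.3 and 7.3, the job yaml files in backend/kube are Kubernetes (Cron)Job definitions. I am not reproducing the exact files from the repo here, but on Kubernetes 1.8 a scheduled job looks along these lines (apiVersion batch/v1beta1); the schedule, image and secret name are placeholders you have to change:

cat > create-job.yaml <<'EOF'
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: vmcr8tester
spec:
  schedule: "*/30 * * * *"            # how often the VM create job runs; adjust to taste
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: vmcr8tester
            image: [ solution name ].azurecr.io/vmcr8tester:latest
          restartPolicy: Never
          imagePullSecrets:
          - name: [ solution name ]   # the docker-registry secret created in step 5.2
EOF
kubectl create -f create-job.yaml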
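For step 8, the createContainerInstance.sh script from the repo creates the Azure Container Instance that hosts the frontend. With the Azure CLI that roughly boils down to a single command like the one below (names, image and port are placeholders; the port depends on your Spring Boot configuration):

az container create -g [ your resourcegroup ] -n [ your container name ] \
  --image [ solution name ].azurecr.io/someimage:latest \
  --registry-login-server [ solution name ].azurecr.io \
  --registry-username [ solution name ] \
  --registry-password [ primary registry key ] \
  --ports 8080 --ip-address Public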

Start Docker as a systemd service on CentOS7/RHEL7

Here is an example of how you can run a Docker container as a systemd service, so that the container is started again automatically after a reboot.
Create a service file like this:

[Unit]
Description=Redis Docker Container
After=docker.service
Requires=docker.service

[Service]
User=chris
RemainAfterExit=true
ExecStart=/bin/docker run -d --name redis redis
ExecStop=/bin/docker stop -t 2 redis
ExecStopPost=/bin/docker rm -f redis

[Install]
WantedBy=multi-user.target

Save this file as /etc/systemd/system/redis.service.

test it with the following command:

systemctl start redis

stop it with: systemctl stop redis

make this permanent with systemctl enable redis
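One thing that is easy to forget: after creating or editing the unit file, systemd has to reload its configuration before start/enable picks it up. A minimal sequence, assuming the redis.service example above:

sudo systemctl daemon-reload     # reload unit files after creating/editing /etc/systemd/system/redis.service
sudo systemctl start redis       # start the container
systemctl status redis           # verify the unit is active and the container is running
sudo systemctl enable redis      # make it permanent: the container starts at boot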

Docker Image for Azure CLI and Azure PowerShell

If you are a Mac or Linux user like me and you like to manage your Azure environment with azure-cli or azure-powershell, you can use the following Docker image: cvugrinec/ubuntu-azure-powershellandcli:latest

Just type the following:

docker run -it cvugrinec/ubuntu-azure-powershellandcli /bin/sh

Please note that azure-powershell only supports ARM mode; azure-cli supports both ASM and ARM mode.
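By default your login does not survive the container, so you have to authenticate every time you start a fresh one. A small trick (just a sketch; it assumes the CLI inside this image runs as root and keeps its profile under /root/.azure) is to mount your local Azure profile into the container:

docker run -it -v $HOME/.azure:/root/.azure cvugrinec/ubuntu-azure-powershellandcli /bin/sh
# inside the container, log in once; the credentials end up on the host and are reused next time
azure login     # or: az login, depending on which CLI version the image ships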