Using Azure containers with PaaS services; a containerized cloud app example.

Welcome to the new era of software development, where you can fully focus on your solution and no longer have to worry about topics like:
– dependency management
– (high) availability/scaling issues
– monitoring
– security

The code can be found here; there is also a YouTube clip.

Motivation/ Why

Because it's fun! And I love to show how easy it is to start with an idea and end up with something usable in a very short period of time.

In this example I am using the following technologies:

  • Python; code around the shell scripts for timing results and for persisting data to and retrieving it from the Redis cache
  • Java; Spring Boot code that can only be accessed by authenticated users, as shown here
  • Docker; to containerize my code
  • Managed Kubernetes on Azure (AKS); this provides a container orchestration solution on Azure where I don't have to worry about setting up and maintaining machines, as it is available as a PaaS service. I am using version 1.8.1 so I can use CronJobs out of the box.
  • Azure Container Instances (ACI); a serverless solution on Azure for hosting a single Docker image, used here for the (Java) frontend app
  • Azure Container Registry (ACR); a managed private Docker registry on Azure; it contains all the builds
  • Redis Cache; an open-source solution for storing and accessing (temporary) data, made available as a PaaS service on Azure
  • Cosmos DB; Microsoft's document database, comparable to MongoDB. It can be globally distributed and is available as a PaaS service on Azure.
  • Azure Key Vault; an enterprise-ready cloud solution with all the required security certifications. It stores my secrets (e.g. passwords) and certificates.


What does it do:

Provision: using the Azure CLI, it retrieves all the DS2 machines per region and stores the machine/region pairs in the Redis cache.
Build: on a scheduled basis, the vmcr8tester job retrieves a value from the cache and, based on it, creates a machine in that region. It measures the time it takes to build the machine and persists the result in Cosmos DB.
Show: the frontend app (once you are authenticated) retrieves the values and calculates the average build time per region. Averages longer than 3 minutes (180 seconds) are considered slow builds. This is what it looks like:

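The averaging and slow-build classification described above can be sketched in a few lines of shell, assuming the per-region build timings have already been fetched from Cosmos DB into a plain text file (all file names and sample values here are made up for illustration):

```shell
# Stand-in for timings fetched from Cosmos DB: "<region> <build seconds>" per line
cat > /tmp/timings.txt <<'EOF'
westeurope 120
westeurope 150
eastus 200
eastus 190
EOF

# Average the build time per region; averages above 180 s are flagged as SLOW
awk '{sum[$1]+=$2; n[$1]++}
     END {for (r in sum) {avg = sum[r]/n[r];
          printf "%s %.0f %s\n", r, avg, (avg > 180 ? "SLOW" : "OK")}}' /tmp/timings.txt | sort
```

With the sample data this prints eastus as a slow build (average 195 s) and westeurope as OK (average 135 s).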
This is how we do it:

  1. Get the latest sources for this project:  git clone
  2. cd to scripts and then run the create_all script to create all the required (Azure) resources:  ./
  3. Prepare Key Vault:
    1. Create the vmcr8tester-redis-pw secret in Key Vault and give it the value of the primary Redis cache key
    2. Create the vmcr8tester-cosmosdb-pw secret in Key Vault and give it the value of the primary Cosmos DB key
    3. Make sure that [ solution name ] gets the following permissions on the secrets of this vault: get, list and set, or select the “Secret Management” template
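These Key Vault steps can also be scripted with the Azure CLI. A sketch with placeholder names (the vault, cache, Cosmos DB account, resource group and app id below are assumptions, not the repo's actual values):

```shell
# Placeholder names -- substitute your own vault/cache/cosmos/resource group
az keyvault secret set --vault-name myVault --name vmcr8tester-redis-pw \
  --value "$(az redis list-keys -n myRedisCache -g myResourceGroup --query primaryKey -o tsv)"

az keyvault secret set --vault-name myVault --name vmcr8tester-cosmosdb-pw \
  --value "$(az cosmosdb list-keys -n myCosmosAccount -g myResourceGroup --query primaryMasterKey -o tsv)"

# Grant the app (service principal) get/list/set on the vault's secrets
az keyvault set-policy --name myVault --spn <your app id> \
  --secret-permissions get list set
```

Running this requires an Azure subscription and a logged-in Azure CLI.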
  4. Prepare Cosmos DB; create the database and the azurevms collection with partition key /id
  5. Kubernetes stuff
    1. On your local machine, get the Kubernetes credentials with the following command: az aks get-credentials -n [ NAME OF YOUR AKS CLUSTER ] -g [ NAME OF YOUR RESOURCEGROUP ]. This merges the cluster credentials into your local ~/.kube/config file so you can talk to the API server.
    2. Create the secret for accessing the private registry with the following command: kubectl create secret docker-registry [ secret name ] --docker-server=[ your registry URL ] --docker-username=[ your registry name ] --docker-password=[ primary registry key ] --docker-email=[ your email ]
  6. Create Provision job
    1. cd to backend/provision and do ./
    2. docker login [ your ] and then push the provision image to your private registry: docker tag [ your image name:1.0 ] [ your registry URL/your image name:latest ], then docker push [ your registry URL/your image name:latest ]
    3. Go to backend/kube and edit provision-job.yaml (don't forget to change your secret), then create the job with the following command: kubectl create -f provision-job.yaml (use the example as a template)
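For reference, a CronJob manifest for Kubernetes 1.8 (batch/v1beta1) might look roughly like the sketch below; the schedule, image and secret names are placeholders, not the repo's actual values, and the file is written to /tmp here:

```shell
# Sketch of a provision CronJob manifest for Kubernetes 1.8 (placeholder names)
cat > /tmp/provision-job.yaml <<'EOF'
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: provision
spec:
  schedule: "0 * * * *"              # run hourly
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: provision
            image: myregistry.azurecr.io/provision:latest
          imagePullSecrets:
          - name: my-registry-secret # the docker-registry secret created earlier
          restartPolicy: OnFailure
EOF
```

Once the placeholders are replaced, submit it with kubectl create -f /tmp/provision-job.yaml.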
  7. Create VM create job
    1. cd to backend/vmcr8tester and do ./
    2. docker login [ your ] and then push the vmcr8tester image to your private registry: docker tag [ your image name:1.0 ] [ your registry URL/your image name:latest ], then docker push [ your registry URL/your image name:latest ]
    3. Go to backend/kube and edit create-job.yaml (don't forget to change your secret), then create the job with the following command: kubectl create -f create-job.yaml (use the example as a template)
  8. Create Frontend with Container Instance
    1. change the properties in the frontend/java/src/main/resources/
    2. mvn clean package
    3. docker build -t [ your container url ]/someimage:latest .
    4. docker push [ your container url ]/someimage:latest
    5. ./
    6. change the reply URL in Azure AD to your container instance (you can see the container instance with the following command: az container list -o table)
    7. Grant this user permission to access the Azure AD tenant
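The container-instance creation script boils down to something like the following az container create call; the resource group, instance name, image and port are assumptions to adapt to your own registry and app:

```shell
# Placeholder names -- the frontend is assumed to listen on port 8080 here
az container create -g myResourceGroup -n vmcr8tester-frontend \
  --image myregistry.azurecr.io/someimage:latest \
  --registry-username myregistry \
  --registry-password "<primary registry key>" \
  --ip-address Public --ports 8080
```

Running this requires an Azure subscription and a logged-in Azure CLI; afterwards az container list -o table shows the assigned public IP.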
  9. Setup OMS logging
    1. create workspace, by adding Log Analytics to your resource group
    2. get your workspace id and key, go to your workspace, then to settings, connected sources, linux machines
    3. edit the backend/oms/omsagent.yaml file (check the WSID and KEY section)
    4. create the deployment with kubectl create -f omsagent.yaml
    5. Enable OMS agent logging: go to the Marketplace (solutions gallery), select the Container Monitoring solution, and you will see cool things like this:

Here is a screenshot of OMS logging which contains a query for the logging of the provision container:

Configure Azure AD authentication on Tomcat

I just made a simple demo project (not for production purposes) that shows how you can use Azure Active Directory as a realm for your authentication. Nowadays you would use a framework that does third-party authentication for you: your custom authenticator would use a framework that federates the authentication to a (trusted) third party like Facebook or Google.

Please note that this is NOT the way to use AAD in a production environment.
This is demo code that shows how you can (mis)use AAD as an LDAP-like solution.
In the future I will create a federated authentication solution using AAD.

Here is the project:

Here you see the code in action:

Docker image for Azure CLI and Azure PowerShell

If you are a Mac or Linux user like me and you want to manage your Azure environment with azure-cli or azure-powershell, you can use the following Docker image: cvugrinec/ubuntu-azure-powershellandcli:latest

Just type the following:

docker run -it cvugrinec/ubuntu-azure-powershellandcli /bin/sh

Please note that azure-powershell supports only ARM mode; azure-cli supports both ASM and ARM mode.
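If you want your az login to survive container restarts, you can mount your local Azure profile directory into the container (the image name is from the post; the mount path assumes the CLI runs as root inside the container):

```shell
# Re-use the host's Azure CLI profile inside the container
docker run -it --rm \
  -v "$HOME/.azure:/root/.azure" \
  cvugrinec/ubuntu-azure-powershellandcli /bin/sh
```

This requires Docker on the host; drop the -v flag if you prefer a clean session each time.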

70-533 Azure certification notes

Recently I certified myself for the 70-533 exam, which is the MCP certification for Implementing Microsoft Azure Infrastructure Solutions. Here are my notes on what I think you should do (hands-on) in order to pass:

  • Web Applications/ Paas services
    • Deploy some web applications using the concept of slots; also practice doing a production update
    • Enable monitoring for 1 or 2 endpoints in your app for different test locations
    • Play with the traffic manager and understand when to use it
    • Enable CDN and understand what needs to be done (for e.g. which DNS records)
    • Implement several databases and understand the difference in products and service levels.
    • Implement autoscaling
  • Azure Virtual Machines
    • Create some VMs, preferably with your own image, and attach your own data disks
    • Make an availability set
    • Make a scale set
    • Do an update on an update domain
    • Test a failover scenario with the fault domain(s)
    • Enable diagnostics and download the diagnostics with powershell commands
  • Storage and Disks
    • Create storage accounts with PowerShell, create shares on them and put files on them
    • Upload a VHD and create an image for an OS you would like to make available
    • Create a data disk and play with the optimization parameters for caching
    • Play with Azure Site Recovery and Backup
    • Play with and understand the redundancy options (LRS/ZRS/GRS/RA-GRS)
  • Azure Virtual Networks
    • Play with the settings for Site-to-Site and Point-to-Site and understand when to use ExpressRoute. Understand when a VPN gateway needs to be installed
    • Export an existing network config, change it and import it back again
    • Make a connection between 2 Virtual Networks
    • Implement subnets and routing between them
    • Implement NSG and play with ACL
    • Play with static IP addresses for PaaS services (reserved) or VMs
  • Azure Active Directory (IAM)
    • Add a custom domain to your AAD
    • Add a custom web application and use SSO with your own credential store
    • Add an application from the store and enable SSO
    • Add an application using existing SSO (e.g. from Google or Facebook)
    • Implement a multi-site network
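As an example of the storage bullet above, the same hands-on exercise can be done with the Azure CLI as well as PowerShell; a sketch with placeholder names (account and share names are made up, and account names must be globally unique):

```shell
# Create a storage account (placeholder names)
az storage account create -n mystorage0123 -g myResourceGroup \
  -l westeurope --sku Standard_LRS

# Create a file share and upload a file to it
az storage share create --name myshare --account-name mystorage0123
az storage file upload --share-name myshare --source ./notes.txt \
  --account-name mystorage0123
```

Running this requires an Azure subscription and a logged-in Azure CLI.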

To be honest, I haven't done all of this… but this is what I think I should have done, in retrospect. The exam is doable (I passed it the first time, so everyone can 🙂). If I were to create a course for passing this exam (maybe I will someday), I would spend a week doing the things mentioned here. PS: I passed my exam by doing prep exams from: …