Moving from Minikube to IBM Cloud Kubernetes Service

In my last blog I described a project we are working on: Cloud Native Starter. It is a microservices architecture, written mostly in Java with Eclipse MicroProfile, and it uses many Istio features. We started to deploy on Minikube because that is easy to set up if you have a reasonably powerful notebook. Now that everything works on Minikube, I wanted to deploy it on the IBM Cloud, too, using IBM Cloud Kubernetes Service (IKS).

IKS is a managed Kubernetes offering that provides Kubernetes clusters on either bare metal or virtual servers in many of IBM’s Cloud data centers in Europe, the Americas, and Asia Pacific. One of the latest features (currently beta) is cluster add-ons to automatically install a managed Istio (together with Kiali, Jaeger, Prometheus, etc.) onto an IKS cluster. You can even install the Istio Bookinfo sample with a single click; Knative is also available as a preview.

There is even a free (lite) Kubernetes cluster available (single node, 2 vCPUs, 4 GB RAM), but you need an IBM Cloud account with a credit card entered in order to use it, even though it is free of charge. I have heard stories that there was too much Bitcoin mining going on on the lite clusters, go figure! You can also try to get an IBM Cloud promo code; we hand them out at conferences where we are present. Your next chances in Germany are JAX in Mainz, WeAreDevelopers and DevOpsCon in Berlin, and ContainerDays in Hamburg.

There is also the IBM Cloud Container Registry (ICR), a private container image registry on the IBM Cloud comparable to Docker Hub. You can store your own container images there and reference them in Kubernetes deployment files for deployment on the IBM Cloud. You can even use ICR to build your container images.

I have created scripts to deploy Cloud Native Starter onto the IBM Cloud and documented the steps here. In this post I want to point out the few things that are different and very specific when deploying to IBM Cloud Kubernetes Service compared to deploying to Minikube.

First, you need to be logged in to the IBM Cloud, of course, which you do with the ibmcloud CLI; then you need to set the cloud-based Kubernetes environment configuration, and finally log in to the Container Registry, too:
$ ibmcloud login
$ ibmcloud region-set us-south
$ ibmcloud ks cluster-config <clustername>
$ ibmcloud cr login

After that, ‘kubectl’ and ‘docker’ commands work with the IBM Cloud and not a local resource. ‘ibmcloud ks cluster-config’ is comparable to the ‘minikube docker-env’ for Minikube.

‘ibmcloud ks cluster-config’ outputs an ‘export KUBECONFIG=/…/… .yaml’. Copy and paste this export statement into your shell and execute it. This statement needs to be executed every time a new shell is opened where a kubectl command should run on your IKS cluster!

This is the command to build the container image for the Authors Service API locally or on Minikube:
$ docker build -f Dockerfile -t authors:1 .
To build the image on the IBM Container Registry requires this command:
$ ibmcloud cr build -f Dockerfile --tag us.icr.io/cloud-native/authors:1 .
‘cr’ is the subcommand for the Container Registry, ‘us.icr.io’ is the URL of the registry hosted in the US, and ‘cloud-native’ is a namespace within this registry. This is the dashboard view of the registry with all images of Cloud Native Starter:

The deployment YAML files need to be adapted to reference the correct location of the image. This is the spec for Minikube with the image being locally available in Minikube:

  - image: authors:1
    name: authors

This is the spec for IBM Cloud Container Registry:

  - image: us.icr.io/cloud-native/authors:1
    name: authors

Everything else in the files is identical to Minikube. In my deployment scripts, I use ‘sed’ to automatically create new deployment files.
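For illustration, a sed call along these lines can rewrite the image reference (the file names here are made up; the actual scripts are in the repository):

$ sed 's|image: authors:1|image: us.icr.io/cloud-native/authors:1|' deployment/authors.yaml > deployment/authors-iks.yaml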

Deploying to IKS is no different from deploying to Minikube; just make sure that the KUBECONFIG environment is set up to use the IKS cluster.

A lite (free) Kubernetes cluster on the IBM Cloud has no Ingress or load balancer available; those are reserved for paid clusters. Istio, however, has its own ingress (istio-ingressgateway), and it is accessible via a NodePort: http on port 31380, https on port 31390. To determine the public IP address of an IKS worker node, issue this command:
$ ibmcloud ks workers <clustername>
The result looks like this:

To access the Cloud Native Starter web app, simply point your browser to http://<public IP>:31380.
Our GitHub repository contains a script in iks-scripts/ that will point out all important URLs of the IBM Cloud deployment, including the commands to access Kiali, Jaeger, etc.

Managing Microservices Traffic with Istio

I have recently started to work on a new project “Cloud Native Starter” where we want to build a sample polyglot microservices application with Java and Node.js on Kubernetes (Minikube) using Istio for traffic management, tracing, metrics, fault injection, fault tolerance, etc.

There are currently not many Istio examples available; the one most widely used and talked about is probably Istio’s own “Bookinfo” sample, and another one I found is the Red Hat Istio tutorial. Unlike our example here, those tutorials and examples do not do the request routing in the user-facing service directly behind the Istio ingress. It took me a full weekend to figure out how to get request routing working for a user-facing service behind an Istio ingress, and with the help of @stefanprodan I finally did.

We are building this sample on Minikube; instructions to set up Minikube, Istio, and Kiali can be found here.

The application is made up of four services:

  • Web-App Hosting is an Nginx server that provides a Vue app to the browser
  • Web-API is accessed by the Vue app and provides a list of blog articles and their authors
  • Articles holds the list of blog articles
  • Authors holds the blog authors details (blog URL and Twitter handle)

The interesting part is that there are two versions of Web-API, and these exist as two different Kubernetes deployments running in parallel:
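The original post shows the two deployments as screenshots; a minimal sketch of what they could look like, with names assumed from the text:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web-api-v1
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: web-api
        version: v1
    template:
      metadata:
        labels:
          app: web-api        # shared by both deployments
          version: v1         # distinguishes this deployment
      spec:
        containers:
        - name: web-api
          image: web-api:1
          ports:
          - containerPort: 9080

The second deployment, web-api-v2, is identical except for the name, the “version: v2” label, and the image.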

Normally, in Kubernetes you would replace v1 with v2. With Istio you can use two or more deployments of different versions of an app to do a green/blue, A/B, or canary deployment to test if v2 works as expected.

Note the “version” label: this is very important for Istio to distinguish between the two deployments. There is also a Kubernetes service definition:
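A sketch of such a service definition (again assumed from the description; the original shows a screenshot):

  apiVersion: v1
  kind: Service
  metadata:
    name: web-api
  spec:
    selector:
      app: web-api      # no “version” label here
    ports:
    - name: http        # the named port Istio requires
      port: 9080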

The selector only uses the “app” label. Without Istio, Kubernetes will distribute traffic between the two deployments evenly. Note that the port is named (“name: http”); this is a requirement for Istio.

Now comes the Istio part. Istio works with Envoy proxies to control inbound and outbound traffic and to gather telemetry data of a Kubernetes pod. The Envoy proxy is injected as an additional container into a pod. The Envoy “sidecar” allows you to add Istio’s capabilities to an application without adding code or additional libraries to your application.


Routing traffic (e.g. REST API calls) into a Kubernetes application normally requires a Kubernetes Ingress. With Istio, the equivalent is an Istio Gateway, which allows Istio to manage and monitor incoming traffic. This gateway in turn uses the Istio ingressgateway, which is a pod running in Kubernetes. This is the definition of an Istio gateway:
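The definition appears as a screenshot in the original post; a minimal sketch of such a gateway (the name is an assumption):

  apiVersion: networking.istio.io/v1alpha3
  kind: Gateway
  metadata:
    name: default-gateway-ingress
  spec:
    selector:
      istio: ingressgateway    # use the Istio ingressgateway pod
    servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
      - '*'                    # answers to any request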

This gateway listens on port 80 and answers to any request (“*”). The “hosts: *” should not be used in production, of course. For a Minikube test environment it is OK.

The second required Istio configuration object is a “Virtual Service” which overlays the Kubernetes service definition. The Web-API service in the example exposes 3 REST URIs. Two of them are used for API documentation (Swagger), they are /openapi and /openapi/ui/ and are currently independent of the version of Web-API. The third URI is /web-api/v1/getmultiple and this is version-specific. This is the VirtualService definition:
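The original shows the definition as an annotated screenshot; a sketch of what it could look like, with the numbers from the list below as comments (names assumed from the text):

  apiVersion: networking.istio.io/v1alpha3
  kind: VirtualService
  metadata:
    name: virtual-service-ingress
  spec:
    hosts:
    - '*'
    gateways:
    - default-gateway-ingress              # (1)
    http:
    - match:
      - uri:
          prefix: /openapi                 # (2) covers /openapi and /openapi/ui/
      route:
      - destination:
          host: web-api
          port:
            number: 9080
    - match:
      - uri:
          exact: /web-api/v1/getmultiple   # (3)
      route:
      - destination:
          host: web-api
          subset: v1
          port:
            number: 9080
    - match:
      - uri:
          prefix: /                        # (4)
      route:
      - destination:
          host: web-app
          port:
            number: 80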

  1. is the pointer to the Ingress Gateway
  2. are URIs that directly point to the Kubernetes service web-api listening on port 9080 (without Istio)
  3. is a URI that uses “subset: v1” of the service web-api, which we haven’t defined yet; this is Istio-specific
  4. the root / points to port 80 of the web-app service, which is different from web-api! It is the service that provides the Vue app to the browser.

The last object required is a DestinationRule, also Istio specific:
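A sketch of the DestinationRule (names assumed; the original shows a screenshot):

  apiVersion: networking.istio.io/v1alpha3
  kind: DestinationRule
  metadata:
    name: web-api-destination
  spec:
    host: web-api
    subsets:
    - name: v1
      labels:
        version: v1    # pods of deployment web-api-v1
    - name: v2
      labels:
        version: v2    # pods of deployment web-api-v2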

Here the subset v1 selects pods that belong to web-api and carry the label “version: v1”, i.e. the pods of deployment “web-api-v1”.

With this Istio rule set in place all incoming traffic will go to version 1 of the Web-API.

We can change the VirtualService to distribute incoming traffic, e.g. 80% should go to version 1, 20% should go to version 2:
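A sketch of the changed route in the VirtualService (weights as described; everything else as in the sketch above):

    - match:
      - uri:
          exact: /web-api/v1/getmultiple
      route:
      - destination:
          host: web-api
          subset: v1
          port:
            number: 9080
        weight: 80
      - destination:
          host: web-api
          subset: v2
          port:
            number: 9080
        weight: 20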

And this is how it looks in Kiali:

I will continue to experiment with other Istio features like telemetry (monitoring, logging), fault injection, etc. I feel like “Jugend forscht” (the German youth science competition) 🙂

Install Istio and Kiali on IBM Cloud or Minikube

I recently started to look into Istio and Kiali.

Istio is an open-source service mesh that sits on top of Kubernetes. It provides functions for traffic control, fault tolerance, logging and monitoring, and security. It started as a joint project by IBM, Google, and Lyft. Kiali is an Istio dashboard, and in my opinion Istio is only half the fun without Kiali.

In order to explore Istio you need a Kubernetes Cluster. I have tested two options:

  1. IBM Kubernetes Service on IBM Cloud. There is a free (“lite”) cluster that you can use for 21 days.
  2. Minikube. If you have a reasonably sized notebook, Minikube is great, too.

IBM Kubernetes Service

You can create a free Kubernetes cluster (single node, 2 CPUs, 4 GB memory) that will be active for 21 days; after this time it will be deleted. Although the cluster is free, you still need an IBM Cloud account where you entered a credit card.

To create a Kubernetes cluster, select “Kubernetes” from the burger menu in the upper left corner of the IBM Cloud dashboard.

Create a free Kubernetes cluster on the IBM Cloud

In the “Kubernetes” dashboard, select “Clusters” on the left, then click on “Create cluster.” Select “Free” (1), a location (2), give your cluster a name (3) and click “Create cluster”. Creation takes about 15 to 20 minutes.

Once the cluster is deployed and in status “Normal”, go to the “Add-ons” tab.

Kubernetes Add-Ons on IBM Cloud

Click “Install” for Managed Istio, then select “Istio”, “Extras”, and “Sample”. This is so cool: 5 clicks and you have a managed Istio, with Grafana, Jaeger, and Kiali, and the Istio Bookinfo sample to start with.

Gaining access to the Kubernetes cluster is described in the “Access” tab of the cluster dashboard:

Basically you need the “ibmcloud” CLI: log in to your IBM Cloud account, target the region where your cluster is located, and download the Kubernetes configuration file. The instructions in the dashboard and at the end of the download specify an “export” statement that you need to paste into the command shell. After that, kubectl will target the cluster in IBM Kubernetes Service. In addition, there is a button in the cluster dashboard that opens the “Kubernetes Dashboard”.

Access to Kiali requires a port-forward; the URL for Kiali then is http://localhost:20001

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=kiali -o jsonpath='{.items[0].metadata.name}') 20001:20001

If you installed the Bookinfo sample, this is how you can find the public IP address of the worker node:

ibmcloud ks workers <cluster name> 
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'

This gives the port number of the istio-ingressgateway, typically 31380.

Bookinfo productpage is then available at http://publicIP:31380/productpage

Istio Bookinfo Sample


Minikube

Installation instructions for Minikube can be found here. I run it on Linux (Fedora 29) with VirtualBox (version 5.2.26) as hypervisor. By default, Minikube starts with 2 CPUs, 2 GB of RAM, and 20 GB of disk space for the virtual machine in VirtualBox. This is not sufficient for Istio; I was able to run it with 4 GB of RAM, but the more CPU and memory are available, the better it performs.

You can set the configuration for Minikube with these commands before starting it for the first time:

minikube config set cpus 4
minikube config set memory 8192
minikube config set disk-size 50g
minikube addons enable ingress 
minikube start

This starts Minikube with 4 CPUs, 8 GB of memory, and a 50 GB virtual disk (VirtualBox uses thin provisioning, so it doesn’t really use 50 GB of disk space unless the virtual disk actually fills up), and it enables Kubernetes Ingress. The first start of Minikube can take 15 to 20 minutes.

“minikube dashboard” will then open the Kubernetes dashboard, “minikube stop” stops the cluster, “minikube start” restarts the existing cluster, and “minikube delete” deletes the cluster (in case you want to start fresh or get rid of it).

The fastest method to install Istio on Minikube is this:

curl -L https://git.io/getLatestIstio | sh -

This downloads Istio into a directory “istio-1.0.6” (the current version at the time of writing) and instructs you to add a directory to your PATH variable so that you can use “istioctl”, the Istio CLI. Change into the istio-1.x.x directory and install some custom resource definitions:

kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

Once this is complete, install Istio itself:

kubectl apply -f install/kubernetes/istio-demo.yaml

Once the installation has completed, check the status of the Istio pods:

kubectl get pod -n istio-system

All pods must be in status Running (2/2) or Completed (0/1). Then install Kiali:

bash <(curl -L https://git.io/getLatestKialiKubernetes)

You are asked for a Kiali admin userid and password. Once installation is complete, check if the Kiali pod is ready (“kubectl get pod -n istio-system”). Once it is ready, look at its log:

kubectl logs kiali-xxxx-xxxx -n istio-system

It should look similar to this:

I0208 05:56:09.375998       1 update_base_url.go:13] Updating base URL in index.html with [/kiali]
I0208 05:56:09.376658       1 javascript_config.go:13] Generating env.js from config
I0208 05:56:09.379867       1 server.go:44] Server endpoint will start at [:20001/kiali]
I0208 05:56:09.379884       1 server.go:45] Server endpoint will serve static content from [/opt/kiali/console]
I0208 05:56:09.380503       1 server.go:50] Server endpoint will require https

This means that Kiali is listening on URI /kiali on https. It is configured to use a NodePort:

kubectl get svc -n istio-system

The Kiali entry looks like this (abbreviated):

kiali NodePort  20001:31993/TCP

The Minikube IP address can be found with

minikube ip

The Kiali dashboard can then be accessed at https://<Minikube IP>:31993/kiali. Accessing the dashboard requires accepting a security exception since Kiali uses a self-signed TLS certificate!

Kiali Dashboard

Install the Bookinfo sample; you need to be inside the istio-1.x.x directory:

kubectl create namespace bookinfo
kubectl apply -n bookinfo -f <(istioctl kube-inject -f  samples/bookinfo/platform/kube/bookinfo.yaml)

Once the Bookinfo pods are available (“kubectl get pods -n bookinfo”), create an istio-ingressgateway to access the Bookinfo productpage:

kubectl apply -n bookinfo -f samples/bookinfo/networking/bookinfo-gateway.yaml

The Bookinfo productpage is then available at http://<Minikube IP>:31380/productpage

Blue Cloud Mirror — (Don’t) Open The Doors!

This isn’t specific to our game “Blue Cloud Mirror”: everyone trying to create a hybrid cloud will need to decide how to connect a local application in a secure manner with code running on the cloud without fully opening “the doors”. IBM offers a service called Secure Gateway exactly for this purpose. It creates a TLS-encrypted tunnel (TLS v1.2) between a Secure Gateway Server on the IBM Cloud and a Secure Gateway Client installed on-premise in your private network. The connection is initiated from the client, so there shouldn’t be any issues with your firewall.

IBM Secure Gateway

You can test a limited (“lite”) version of IBM Secure Gateway with a free IBM Cloud account. Limited means you can connect to one destination, i.e. one on-premise application, with a limited amount of traffic (500 MB/month), which is sufficient for our needs with Blue Cloud Mirror.

The IBM Secure Gateway Service can be found in the “Integration” section of the IBM Cloud Catalog. Log on to the IBM Cloud, go to the Catalog, select IBM Secure Gateway, choose a region, an organization, and a space, click “Create” and wait a moment until the service is ready.

I wrote about the configuration of IBM Secure Gateway in the Users section of our GitHub repository. There are two things that may be confusing when you start to configure it:

  1. What is the difference between Client and Destination?

The Secure Gateway Client is a piece of software that is installed on a server (physical or virtual) on-premise in your data center. It creates the connection to the IBM Secure Gateway service running on the IBM Cloud.

The Destination is the application or API that you want to connect to. It could run on the same server as the client or it could run somewhere on the same network within the data center.

  2. Why do I need to configure ACLs, too?

I already specified the destination address and port in the destination configuration, and then I need to specifically allow access to that address and port in the ACL (access control list), too. The ACL is a list that contains information about all clients and their destinations that you manage with an IBM Secure Gateway instance. With the ACL you can “turn off” (deny access to) a destination without deleting it, for example when you are about to install a new version of the application/API.
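For illustration: the Secure Gateway Client offers an interactive terminal in which ACL entries can be managed. Assuming a destination myapi.local on port 443 (both placeholders, not our actual values), commands along these lines allow, deny, and list access:

acl allow myapi.local:443
acl deny myapi.local:443
show acl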

At the end of my last blog, “Blue Cloud Mirror – Of Kubes and Couches”, I explained that access to the Users API goes through a Kubernetes ingress which is configured for a specific host name and secured with a self-signed TLS certificate. Both the host name and the self-signed certificate would be a problem if I tried to access the API via the Internet directly. With Secure Gateway this is not an issue. In the README I give instructions on how to create the TLS certificate and how to add the ingress (Minikube) IP address together with the host name to the server’s hosts file so that the Secure Gateway Client can resolve it. The host name and TLS certificate are used in the Secure Gateway destination configuration.

If you go through the configuration yourself, you’ll notice that the Secure Gateway Client is also available as a Docker image. I tried to use that and even tried to create a Kubernetes deployment from it. The problem is that you can’t easily change the hosts file of the Docker image, and without the host name entry the Secure Gateway Client isn’t able to resolve the IP address of the Users API ingress. When installing the Secure Gateway Client locally with a classical installer, this is no problem.

With everything in place — Secure Gateway Service with Client and Destination set up — the Users API is now available under a very cryptic URL, something like https://cap-eu-de-sg*****

In my next blog I will explain how to manage and describe the API to a developer using IBM API Connect, another service available on the IBM Cloud.

Blue Cloud Mirror – Of Kubes and Couches

In my last blog I presented an overview and introduction to Blue Cloud Mirror. In this blog I want to describe the back end of the Users API.

If a player of Blue Cloud Mirror decides to enter the competition and enters their user data, it is stored once the game is over and the player clicks “Save Score” on the results page. The data is stored on premise (off the cloud) with the help of this Users API. The data set contains first name, last name, email, and the acceptance of the terms.

The Users API back end is made up of a Node.js application and CouchDB, both deployed on Minikube. You will find the details here.

Our original plan was to use IBM Cloud Private (“eat your own cookies”), and I started to build an IBM Cloud Private instance, but this is too big for a simple demo. There is an IBM Cloud Private Community Edition that everyone can download and use, but its resource requirements by far exceed what is available on a typical notebook; there is no way to carry it around for a demo at a conference. You would need a server of a certain size that you can spare for the demo. Instead, we decided to go with Minikube.

Minikube is a single-node Kubernetes “cluster” that can run on a notebook. It is not suitable for production, of course, but it is sufficient to run this demo. By default, Minikube starts a cluster that uses 2 CPUs (or CPU threads, depending on how you count them), 2 GB of RAM, and 20 GB of disk. If your notebook or server has more resources, you can utilize them. Other than that, setting up Minikube is really easy: download the Minikube executable and type “minikube start”. After 10 to 15 minutes you’ll have a Kubernetes cluster. All you need to do then is enable ingress, and that’s it.

There is a CouchDB container image on Docker Hub which I have used to create a simple deployment with a single pod. You need to persist the configuration and the data of CouchDB; information on this is available on Docker Hub. My deployment creates two persistent volumes of type hostPath and two persistent volume claims, one for the configuration and one for the data directory. Minikube provides a /data directory in the node’s file system that is persistent over reboots. This is why both persistent volumes point into the /data directory.
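A minimal sketch of one such volume pair (names and size are assumptions; the actual files are in the repository, and the pair for the configuration directory looks the same):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: couchdb-data-pv
  spec:
    storageClassName: ""
    capacity:
      storage: 1Gi
    accessModes:
    - ReadWriteOnce
    hostPath:
      path: /data/couchdb-data    # /data survives Minikube reboots
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: couchdb-data-pvc
  spec:
    storageClassName: ""          # bind to the hostPath volume, not the dynamic provisioner
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi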

CouchDB starts up unconfigured in “Admin Party” mode. To be able to access CouchDB externally there is a NodePort definition for the CouchDB service, using port 32001. Once CouchDB is started, its admin dashboard (Fauxton) is available on this port. CouchDB configuration is described here.
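A sketch of such a NodePort service (the selector label is an assumption):

  apiVersion: v1
  kind: Service
  metadata:
    name: couchdb
  spec:
    type: NodePort
    selector:
      app: couchdb
    ports:
    - port: 5984        # CouchDB’s default port
      nodePort: 32001   # Fauxton is reachable on this port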

The User Core Service is written in Node.js and provides an API to access CouchDB. It uses “express” for the POST and GET methods, “express-basic-auth” to allow only authenticated access to the API, and “nano” to access CouchDB. The CouchDB URL is passed as an environment variable in the Kubernetes deployment; the URL must contain the user ID and password of the CouchDB setup.
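For illustration, the environment variable could be passed like this in the deployment spec (variable name, image, and credentials are hypothetical):

    containers:
    - name: users-api
      image: users-api:1
      env:
      - name: COUCHDB_URL
        value: http://admin:secret@couchdb:5984   # user:password@host:port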

The API is exposed externally with a Kubernetes ingress. (Remember to enable ingress in Minikube!) The ingress is configured for TLS for a specific host name. TLS uses a self-signed certificate, and the host name must be entered into the /etc/hosts file of the system running Minikube (unless you are the master of your DNS); instructions are in the README. Using a self-signed TLS certificate is no problem since it is used in only one place, the configuration of IBM Secure Gateway, which I will explain in my next post. Stay tuned!
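For illustration, a minimal sketch of such an ingress definition (host name, secret, service name, and port are hypothetical placeholders, not the actual values from the repository):

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: users-api-ingress
  spec:
    tls:
    - hosts:
      - users.api.local
      secretName: users-api-tls    # holds the self-signed certificate
    rules:
    - host: users.api.local
      http:
        paths:
        - path: /
          backend:
            serviceName: users-api
            servicePort: 3000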

Blue Cloud Mirror — A fun IBM Cloud showcase

Blue Cloud Mirror is an online game based on multiple IBM Cloud technologies. It has two levels: in level one you need to show five facial expressions like happy, angry, etc., and in level two you need to show five body positions. Have a look at it and play it here.

I created the game together with my colleagues Niklas Heidloff and Thomas Südbröcker. Niklas described many aspects of it in his blog, starting here.

Basically, Blue Cloud Mirror has three parts:

  1. Game, can be played anonymously or as a registered user
  2. Scores Service keeps the highscore list for registered users
  3. Users Service keeps the user data of the registration

My part of this project is the Users Service. It does not run in the Cloud for several reasons:

  • Users may not be comfortable with having their data stored on the Cloud.
  • We wanted to deploy part of our microservices on Kubernetes, for example on IBM Cloud Private.
  • We wanted to show how easy it is to securely connect a local backend with an application on the Cloud. Instead of the Users Service the connection could be to any application running on-premise.

I originally started to develop on an IBM Cloud Private instance, but since we wanted as many people as possible to use our game, I decided to switch to a local instance of Minikube because it is simple, has a small footprint, and, if you like, you can carry it around on your notebook.

You can find our code in the IBM organization on GitHub, and you will find the Users Service in the users directory of the repository.

I will describe the Users Service in follow-on blogs. Stay tuned!

Stuttgart Kubernetes Meetup

Last Thursday night was the Stuttgart Kubernetes Meetup, hosted by CGI in Echterdingen (thanks!!!). I got the chance to talk about “Project Eirini”.

There is Kubernetes and there is Cloud Foundry. Both are cloud PaaS platforms, both offer container orchestration and scheduling, and both are available on the IBM Cloud. While Kubernetes is all about container orchestration, Cloud Foundry is a developer experience where the concept of containers is pretty much hidden from the developer. Both have their strengths and weaknesses: you can do almost anything with Kubernetes, but it has a steep learning curve, and as a developer you have to know a lot about orchestration. Cloud Foundry is limited to stateless or 12-factor apps, but as a developer you only focus on your code; Cloud Foundry takes care of the rest.

A while ago, SuSE started a project in the Cloud Foundry Incubator called “Cloud Foundry (CF) Containerization”. It converts the VMs running CF management or backplane functions into containers and deploys them on Kubernetes, using a component called “fissile”. There is a GitHub repo for this. The project has been around for a while and works quite well. IBM uses this technology for “Cloud Foundry Enterprise Edition” to run a dedicated Cloud Foundry for a single customer on top of a Kubernetes cluster.

Cloud Foundry has a container orchestration component called “Diego”; Kubernetes is a container orchestrator itself. With the CF Containerization approach, Diego cells — the equivalent of Kubernetes worker nodes — are deployed as pods. That way, Cloud Foundry apps run as containers within containers (nested), and they are not visible to Kubernetes. If you deploy Kubernetes apps via kubectl into the Kubernetes cluster that hosts CF Containerization, those apps are not visible to Diego. Diego and Kubernetes then work against each other instead of together. This is where Project Eirini starts.

Eirini is the Greek goddess of peace 🙂

Eirini replaces Diego with Kubernetes (actually, it gives you a choice between the two). When you deploy an application to Cloud Foundry (native), Diego uses a buildpack — a runtime that matches the programming language of the application — and combines it with the application code and dependencies to form what is called a “droplet”. The droplet is then placed into an empty container and executed; this forms the running application.

Eirini uses a published build mechanism that creates a container image instead of a droplet; in addition, it creates a Helm chart and deploys the application as a StatefulSet directly into Kubernetes. The application is visible in Kubernetes, and the Kubernetes cluster can be used to run other Kubernetes-native applications as well.

This is the Eirini repository on GitHub; it contains information on how to run CF Containerization and Eirini together. In December 2018, Eirini passed the Cloud Foundry acceptance tests and should be production-ready in a while.