Cloud Native Starter and OpenShift, OKD, Minishift

Over the last weeks we have worked intensively on our Cloud Native Starter project and made a lot of progress. It is an example of a microservices architecture based on Java, Kubernetes, and Istio. We have developed and tested it on Minikube and IBM Cloud Kubernetes Service.

Currently we are enabling Cloud Native Starter to run on Red Hat OpenShift starting with Minishift.

OpenShift is Red Hat’s commercial Kubernetes distribution. There is a community version of OpenShift called OKD, which stands for “Origin Community Distribution of Kubernetes”; OKD is the upstream Kubernetes distribution embedded in Red Hat OpenShift. And then there is Minishift: like Minikube, it is an OKD-based single-node Kubernetes cluster running in a VM.

The latest version Minishift currently runs is OKD/OpenShift 3.11. OpenShift Version 4 will probably never be supported.

I experimented with Minishift a while ago when I had a notebook with 2 CPU cores (4 threads) and 8 GB of RAM. That is not enough! My current notebook has 4 CPU cores (8 threads) and 32 GB of RAM, and Minishift runs quite well on this machine.

If you, like me, come from a plain Kubernetes background, OpenShift is a challenge. Red Hat enables many security features like role-based access control and also enables TLS in many places, so there is a lot to learn anew. And while bringing up Minishift is quite simple (“minishift start”), installing Istio isn’t. There are instructions for OpenShift on the Istio website, but they ignore Kiali and I don’t want to miss out on Kiali. And I was not able to get automatic injection to work because I couldn’t find the file to patch.

One day I stumbled upon this blog by Kamesh Sampath from Red Hat, and with it installing Istio on Minishift is almost a breeze:
1. Set up a Minishift instance with some prerequisites
2. Download the Minishift Add-ons from Github
3. Install the Istio add-on
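In terms of commands, the process looks roughly like this; this is only a sketch from my notes, the repository path and exact add-on names may differ, so check the minishift-addons README for the precise steps:

# clone the community add-ons repository and install/enable/apply the Istio add-on
git clone https://github.com/minishift/minishift-addons.git
minishift addon install minishift-addons/add-ons/istio
minishift addon enable istio
minishift addon apply istio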

There are still some things missing and I have documented the process that works for me here.

A couple of comments:

The Istio add-on installs Istio with a Kubernetes operator which is cool. It is based on a project called Maistra which seems to be the base for the upcoming OpenShift Service Mesh. It installs a very downlevel Istio version (1.0.3), though. But the integration with OpenShift is very good and all security aspects are in place. For testing I think this works very well.

Did I mention security? Maistra seems to enable mTLS by default, which results in upstream 503 errors between your services once you apply Istio rules. For the sake of simplicity we therefore decided to disable mTLS in our cloud-native-starter project for Minishift.
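For reference, one common way to switch off mTLS for a single service in Istio 1.0.x is a DestinationRule with TLS mode DISABLE. This is only a sketch with an assumed service name (“authors”); it is not necessarily the exact mechanism used in the project:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: authors-disable-mtls
spec:
  host: authors          # assumed service name, adjust to your own service
  trafficPolicy:
    tls:
      mode: DISABLE      # send plain text traffic to this service, no mTLS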

Automatic sidecar injection is also handled differently in Maistra/Istio: In our Minikube and IBM Kubernetes Service environments we label the namespace with a specific tag (“istio-injection=enabled”). With this label present, every pod in that namespace will automatically get a sidecar injected. Maistra instead relies on opt-in and requires an annotation in the deployment yaml file as described here. This requires enablement of “admission webhooks” in the master configuration file which is done by patching this file. Fortunately, this is made very easy in Minishift. All you need to do is enable an add-on (“minishift addon enable admissions-webhook”).
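The opt-in annotation sits in the pod template of the deployment. Here is a minimal sketch; the deployment name, image, and port are assumptions, only the annotation matters:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: authors
spec:
  replicas: 1
  selector:
    matchLabels:
      app: authors
  template:
    metadata:
      labels:
        app: authors
      annotations:
        sidecar.istio.io/inject: "true"   # opt in to automatic sidecar injection
    spec:
      containers:
      - name: authors
        image: authors:1
        ports:
        - containerPort: 3000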

What’s Going On (in my cluster)?

Logging and monitoring have always been important, but in a distributed microservices architecture on a Kubernetes cluster they are even more important: watching the ever-changing components of a cluster is like “guarding a bag of fleas”, as the German proverb says. Even our demo “Cloud Native Starter” has at least 4 or 5 pods running, and all of them create logs that you need to look at when something doesn’t work. There are plenty of articles about logging in a Kubernetes cluster with many different solutions. What is important to me as a developer is that I don’t want to care about maintaining it. I need a logging and monitoring solution, but I want somebody else to keep it running for me. Fortunately, IBM Cloud offers exactly that in the form of “IBM Log Analysis with LogDNA” and “IBM Cloud Monitoring with Sysdig”.

Logging and monitoring are somewhat hidden in the IBM Cloud dashboard. You find them in the “Observability” area of the “burger” menu, where you can create the services, learn how to configure the sources, and access their dashboards.

LogDNA can be used with a Kubernetes cluster running on the IBM Cloud and also with a Minikube cluster. It is available in the IBM Cloud data centers in Dallas and Frankfurt. There is a free (lite) version available, but it is limited in its features.

Once a LogDNA instance has been created, the next thing to do is to “Edit log sources”. There are several options, we are only interested in Kubernetes here:

Two kubectl commands need to be executed against the Kubernetes cluster (IBM Kubernetes Service and Minikube both work).
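The exact commands are shown in the LogDNA “Edit log sources” instructions and contain your personal ingestion key; roughly, they look like this (the daemon set file is a placeholder, take the actual URL from the instructions):

kubectl create secret generic logdna-agent-key --from-literal=logdna-agent-key=<your ingestion key>
kubectl create -f <logdna-agent daemon set YAML from the instructions>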

The first command creates a Kubernetes secret holding my specific LogDNA ingestion key which is required to write log events into my LogDNA instance. The second command creates a logdna-agent daemon set in the Kubernetes cluster which creates a pod on every Kubernetes worker node. No further installation or configuration is required. If you click on the “View LogDNA” button you’ll see the dashboard:

Notice the filters in the header area. In this screenshot I have filtered on 3 Apps; the listing shows “authors”, “web-api”, and “articles”. I can further filter on showing errors only, save that as a view, and attach an alerting channel to it, for example email or a Slack channel. You can find more info here.

Sysdig can be used with a Kubernetes cluster running on the IBM Cloud and also with a Minikube cluster. It is available in the IBM Cloud data centers in Dallas, London, and Frankfurt. There is a trial version available with limited features which expires after 30 days.

Again, once the Sysdig instance has been created, go to “Edit sources”. There are instructions for Kubernetes, Linux, and Docker. The Kubernetes instructions first explain how to log on to the IBM Cloud and then access the Kubernetes cluster with the ibmcloud CLI; this is of course not required for Minikube. Lastly, there is a curl command that downloads and installs the Sysdig agent for Kubernetes. Again, no further configuration is required. The “View Sysdig” button opens the Sysdig dashboard:

There are several predefined dashboards including 2 predefined Istio dashboards which are not available in the trial version of Sysdig.


Moving from Minikube to IBM Cloud Kubernetes Service

In my last blog I have described a project we are working on: Cloud Native Starter. It is a microservices architecture, written mostly in Java with Eclipse MicroProfile, and using many Istio features. We started to deploy on Minikube because that is easy to implement if you have a reasonably powerful notebook. Now that everything works on Minikube, I wanted to deploy it on the IBM Cloud, too, using IBM Cloud Kubernetes Service (IKS).

IKS is a managed Kubernetes offering that provides Kubernetes clusters on either bare metal or virtual servers in many of IBM’s Cloud data centers in Europe, the Americas, and Asia Pacific. One of the latest features (currently beta) are cluster add-ons to automatically install a managed Istio (together with Kiali, Jaeger, Prometheus, etc.) onto an IKS cluster. You can even install the Istio Bookinfo sample with a single click, and Knative is also available as a preview.

There is even a free (lite) Kubernetes cluster available (single node, 2 vCPUs, 4 GB RAM), but you need an IBM Cloud account with a credit card entered in order to use it, even though it is free of charge. I have heard stories that there was too much Bitcoin mining going on on the lite clusters, go figure! You can also try to get an IBM Cloud promo code; we hand them out at conferences where we are present. Your next chances in Germany are JAX in Mainz, WeAreDevelopers and DevOpsCon in Berlin, and ContainerDays in Hamburg.

There is also an IBM Cloud Container Registry (ICR) available; this is a container image registry comparable to Docker Hub, but private and hosted on the IBM Cloud. You can store your own container images there and reference them in Kubernetes deployment files for deployment on the IBM Cloud. You can even use ICR to build your container images.

I have created scripts to deploy Cloud Native Starter onto the IBM Cloud and documented the steps here. In this post I want to point out the few things that are different and very specific when deploying to IBM Cloud Kubernetes Service compared to deploying to Minikube.

First, you need to be logged in to the IBM Cloud, which you do with the ibmcloud CLI. Then you need to set the cloud-based Kubernetes environment configuration and finally log in to the Container Registry, too:
$ ibmcloud login
$ ibmcloud region-set us-south
$ ibmcloud ks cluster-config <clustername>
$ ibmcloud cr login

After that, ‘kubectl’ and ‘docker’ commands work with the IBM Cloud and not a local resource. ‘ibmcloud ks cluster-config’ is comparable to the ‘minikube docker-env’ for Minikube.

‘ibmcloud ks cluster-config’ outputs an ‘export KUBECONFIG=/…/… .yaml’. Copy and paste this export statement into your shell and execute it. This statement needs to be executed every time a new shell is opened where a kubectl command should run on your IKS cluster!

This is the command to build the container image for the Authors Service API locally or on Minikube:
$ docker build -f Dockerfile -t authors:1 .
To build the image on the IBM Container Registry requires this command:
$ ibmcloud cr build -f Dockerfile --tag us.icr.io/cloud-native/authors:1 .
‘cr’ is the subcommand for Container Registry, ‘us.icr.io’ is the URL for the Registry hosted in the US, and ‘cloud-native’ is a namespace within this registry. This is the dashboard view of the Registry with all images of Cloud Native Starter:

The deployment YAML files need to be adapted to reference the correct location of the image. This is the spec for Minikube with the image being locally available in Minikube:

spec:
  containers:
  - image: authors:1
    name: authors

This is the spec for IBM Cloud Container Registry:

spec:
  containers:
  - image: us.icr.io/cloud-native/authors:1
    name: authors

Everything else in the files is identical to Minikube. In my deployment scripts, I use ‘sed’ to automatically create new deployment files.
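As an illustration, the substitution is essentially a one-liner; the file names below are made up for this example:

$ sed "s|image: authors:1|image: us.icr.io/cloud-native/authors:1|" deployment/authors.yaml > deployment-iks/authors.yaml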

Deploying to IKS is no different from deploying to Minikube; just make sure that the KUBECONFIG environment is set up to use the IKS cluster.

A lite (free) Kubernetes cluster on the IBM Cloud has no Ingress or LoadBalancer available; those are reserved for paid clusters. Istio, however, has its own ingress (istio-ingressgateway) and this is accessible via a NodePort: http on port 31380, https on port 31390. To determine the public IP address of an IKS worker node, issue the command:
$ ibmcloud ks workers <clustername>
The result looks like this:

To access the Cloud Native Starter webapp, simply point your browser to
http://149.81.xx.x3:31380
In our GitHub repository there is a script iks-scripts/show-urls.sh that will point out all important URLs of the IBM Cloud deployment, including the commands to access Kiali, Jaeger, etc.

Managing Microservices Traffic with Istio

I have recently started to work on a new project “Cloud Native Starter” where we want to build a sample polyglot microservices application with Java and Node.js on Kubernetes (Minikube) using Istio for traffic management, tracing, metrics, fault injection, fault tolerance, etc.

There are currently not many Istio examples available; the one most widely used and talked about is probably Istio’s own “Bookinfo” sample, and another one I found is the Red Hat Istio tutorial. Unlike our example here, those tutorials and examples do not do the request routing in the user-facing service directly behind the Istio ingress. It took me a full weekend to figure out how to get request routing for a user-facing service working behind an Istio ingress, and with the help of @stefanprodan I finally figured it out.

We are building this sample on Minikube; instructions to set up Minikube, Istio, and Kiali can be found here.

The application is made up of four services:

  • Web-App Hosting is an Nginx server that provides a Vue app to the browser
  • Web-API is accessed by the Vue app and provides a list of blog articles and their authors
  • Articles holds the list of blog articles
  • Authors holds the blog authors details (blog URL and Twitter handle)

The interesting part is that there are two versions of Web-API, and these exist as two different Kubernetes deployments running in parallel:

Normally, in Kubernetes you would replace v1 with v2. With Istio you can run two or more deployments of different versions of an app in parallel to do a blue/green, A/B, or canary deployment and test whether v2 works as expected.
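The deployment definitions themselves are not shown here; a minimal sketch could look like this (names, image tags, and ports are assumptions based on the project layout). The second deployment, web-api-v2, is identical except for its name, the version label, and the image tag:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-api
      version: v1
  template:
    metadata:
      labels:
        app: web-api      # same "app" label in both deployments
        version: v1       # the "version" label distinguishes v1 from v2
    spec:
      containers:
      - name: web-api
        image: web-api:1
        ports:
        - containerPort: 9080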

Note the “version” label: it is very important for Istio to distinguish between the two deployments. There is also a Kubernetes service definition:
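A sketch of that service definition (the name is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: web-api
  labels:
    app: web-api
spec:
  selector:
    app: web-api        # no "version" label here, both deployments match
  ports:
  - name: http          # named port, required by Istio
    port: 9080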

The selector only uses the “app” label. Without Istio, it will distribute traffic between the two deployments evenly. Note that the port is named (“name: http”); this is a requirement for Istio.

Now comes the Istio part. Istio works with Envoy proxies to control inbound and outbound traffic and to gather telemetry data of a Kubernetes pod. The Envoy proxy is injected as an additional container into a pod. This Envoy “sidecar” allows you to add Istio’s capabilities to an application without adding code or additional libraries to your application.

(c) istio.io

Routing traffic (e.g. REST API calls) into a Kubernetes application normally requires a Kubernetes Ingress. With Istio, the equivalent is an Istio Gateway, which allows Istio to manage and monitor incoming traffic. This gateway in turn uses the Istio ingressgateway, which is a pod running in Kubernetes. This is the definition of an Istio Gateway:
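A sketch of such a gateway (the resource name is an assumption):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: default-gateway-ingress
spec:
  selector:
    istio: ingressgateway     # use the Istio ingressgateway pod
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                     # answer to any host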

This gateway listens on port 80 and answers to any request (“*”). The “hosts: *” should not be used in production, of course. For a Minikube test environment it is OK.

The second required Istio configuration object is a “Virtual Service” which overlays the Kubernetes service definition. The Web-API service in the example exposes 3 REST URIs. Two of them are used for API documentation (Swagger), they are /openapi and /openapi/ui/ and are currently independent of the version of Web-API. The third URI is /web-api/v1/getmultiple and this is version-specific. This is the VirtualService definition:
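A sketch of this VirtualService, with assumed resource names; the numbers in the comments refer to the list below:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice-ingress-web-api-web-app
spec:
  hosts:
  - "*"
  gateways:
  - default-gateway-ingress              # (1) bind to the Istio Ingress Gateway
  http:
  - match:
    - uri:
        prefix: /openapi                 # (2) Swagger documentation ...
    - uri:
        prefix: /openapi/ui/             # (2) ... and Swagger UI, version independent
    route:
    - destination:
        host: web-api
        port:
          number: 9080
  - match:
    - uri:
        exact: /web-api/v1/getmultiple   # (3) version-specific URI
    route:
    - destination:
        host: web-api
        subset: v1                       # (3) subset defined in the DestinationRule
        port:
          number: 9080
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: web-app                    # (4) the service serving the Vue app
        port:
          number: 80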

  1. is the pointer to the Ingress Gateway
  2. are URIs that directly point to the Kubernetes service web-api listening on port 9080 (without Istio)
  3. is a URI that uses “subset: v1” of the service web-api, which we haven’t defined yet; this is Istio-specific
  4. the root / points to port 80 of the web-app service, which is different from web-api! It is the service that provides the Vue app to the browser.

The last object required is a DestinationRule, which is also Istio-specific:
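A sketch of the DestinationRule (again, the resource name is an assumption):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: destinationrule-web-api
spec:
  host: web-api
  subsets:
  - name: v1
    labels:
      version: v1      # pods of deployment web-api-v1
  - name: v2
    labels:
      version: v2      # pods of deployment web-api-v2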

Here the subset v1 selects pods that belong to web-api and carry the label “version: v1”, which is the deployment “web-api-v1”.

With this Istio rule set in place all incoming traffic will go to version 1 of the Web-API.

We can change the VirtualService to distribute incoming traffic, e.g. 80% should go to version 1, 20% should go to version 2:
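Only the version-specific route of the VirtualService needs to change; sketched with the same assumed names as above:

  - match:
    - uri:
        exact: /web-api/v1/getmultiple
    route:
    - destination:
        host: web-api
        subset: v1
        port:
          number: 9080
      weight: 80        # 80% of the traffic goes to v1
    - destination:
        host: web-api
        subset: v2
        port:
          number: 9080
      weight: 20        # 20% goes to v2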

And this is how it looks in Kiali:

I will continue to experiment with other Istio features like telemetry (monitoring, logging), fault injection, etc. I feel like “Jugend forscht” 🙂

Install Istio and Kiali on IBM Cloud or Minikube

I recently started to look into Istio and Kiali.

Istio is an open-source service mesh that sits on top of Kubernetes. It provides functions for traffic control, fault tolerance, logging and monitoring, and security. It started as a joint project by IBM, Google, and Lyft. Kiali is an Istio dashboard, and in my opinion Istio is only half the fun without Kiali.

In order to explore Istio you need a Kubernetes Cluster. I have tested two options:

  1. IBM Kubernetes Service on IBM Cloud. There is a free (“lite”) cluster that you can use for 21 days.
  2. Minikube. If you have a reasonably sized notebook, Minikube is great, too.

IBM Kubernetes Service

You can create a free Kubernetes cluster (single node, 2 CPUs, 4 GB memory) that will be active for 21 days, after this time it will be deleted. Although the cluster is free, you still need an IBM Cloud account where you entered a credit card.

To create a Kubernetes cluster, select “Kubernetes” from the burger menu in the upper left corner of the IBM Cloud dashboard.

Create a free Kubernetes cluster on the IBM Cloud

In the “Kubernetes” dashboard, select “Clusters” on the left, then click on “Create cluster.” Select “Free” (1), a location (2), give your cluster a name (3) and click “Create cluster”. Creation takes about 15 to 20 minutes.

Once the cluster is deployed and in status “Normal”, go to the “Add-ons” tab.

Kubernetes Add-Ons on IBM Cloud

Click “Install” for Managed Istio, then select “Istio”, “Extras”, and “Sample”. This is so cool: 5 clicks and you have a managed Istio, with Grafana, Jaeger, and Kiali, and the Istio Bookinfo sample to start with.

Gaining access to the Kubernetes cluster is described in the “Access” tab of the cluster dashboard:

Basically you need the “ibmcloud” CLI, log in to your IBM Cloud account, target the region where your cluster is located, and download the Kubernetes configuration file. The instructions in the dashboard and at the end of the download specify an “export” statement that you need to paste into the command shell. After that, kubectl will target the cluster in IBM Kubernetes Service. In addition, there is a button in the cluster dashboard that opens the “Kubernetes Dashboard”.

Access to Kiali requires a port-forward; the URL for Kiali then is http://localhost:20001:

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=kiali -o jsonpath='{.items[0].metadata.name}') 20001:20001

If you installed the Bookinfo sample, this is how you can find the public IP address of the worker node:

ibmcloud ks workers <cluster name> 
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}' 

gives the port number of the istio-ingressgateway, typically 31380.

Bookinfo productpage is then available at http://publicIP:31380/productpage

Istio Bookinfo Sample

Minikube

Installation instructions for Minikube can be found here. I run it on Linux (Fedora 29) with VirtualBox (version 5.2.26) as hypervisor. It starts with a default of 2 CPUs, 2 GB of RAM, and 20 GB of disk space for the virtual machine in VirtualBox. This is not sufficient for Istio; I was able to run it with 4 GB of RAM, but the more CPU and memory are available to it, the better it performs.

You can set the configuration for Minikube with these commands before starting it for the first time:

minikube config set cpus 4
minikube config set memory 8192
minikube config set disk-size 50g
minikube addons enable ingress 
minikube start

This starts Minikube with 4 CPUs, 8 GB of memory, a 50 GB virtual disk (VirtualBox uses thin provisioning, so it doesn’t really use 50 GB of disk space unless the virtual disk actually fills up), and it enables Kubernetes Ingress. Starting Minikube can take 15 to 20 minutes the first time.

“minikube dashboard” will then open the Kubernetes dashboard, “minikube stop” stops the cluster, “minikube start” restarts the existing cluster, and “minikube delete” deletes the cluster (in case you want to start fresh or get rid of it).

The fastest method to install Istio on Minikube is this:

curl -L https://git.io/getLatestIstio | sh -

This will download Istio into a directory “istio-1.0.6” (at the time when I wrote this blog) and will instruct you to add a directory to your PATH variable so that you can use “istioctl”, its CLI. Change into the istio-1.x.x directory and install some custom resource definitions:

kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

Once this is complete, install Istio itself:

kubectl apply -f install/kubernetes/istio-demo.yaml

Once the installation has completed, check the status of the Istio pods:

kubectl get pod -n istio-system

All pods must be in status Running (2/2) or Completed (0/1). Then install Kiali:

bash <(curl -L http://git.io/getLatestKialiKubernetes)

You are asked for a Kiali admin userid and password. Once installation is complete, check if the Kiali pod is ready (“kubectl get pod -n istio-system”). Once it is ready, look at its log:

kubectl logs kiali-xxxx-xxxx -n istio-system

It should look similar to this:

I0208 05:56:09.375998       1 update_base_url.go:13] Updating base URL in index.html with [/kiali]
I0208 05:56:09.376658       1 javascript_config.go:13] Generating env.js from config
I0208 05:56:09.379867       1 server.go:44] Server endpoint will start at [:20001/kiali]
I0208 05:56:09.379884       1 server.go:45] Server endpoint will serve static content from [/opt/kiali/console]
I0208 05:56:09.380503       1 server.go:50] Server endpoint will require https

This means that Kiali is listening on URI /kiali on https. It is configured to use a NodePort:

kubectl get svc -n istio-system

The Kiali entry looks like this:

kiali NodePort 10.103.75.235  20001:31993/TCP

The Minikube IP address can be found with

minikube ip

In my environment it is 192.168.99.100.

So the Kiali dashboard can be accessed at https://192.168.99.100:31993/kiali. Accessing the dashboard requires accepting a security exception since it uses a self-signed TLS certificate!

Kiali Dashboard

To install the Bookinfo sample, you need to be inside the istio-1.x.x directory:

kubectl create namespace bookinfo
kubectl apply -n bookinfo -f <(istioctl kube-inject -f  samples/bookinfo/platform/kube/bookinfo.yaml)

Once the Bookinfo pods are available (“kubectl get pods -n bookinfo”), create an istio-ingressgateway to access the Bookinfo productpage:

kubectl apply -n bookinfo -f samples/bookinfo/networking/bookinfo-gateway.yaml

The Bookinfo productpage is then at http://minikubeIP:31380/productpage

Blue Cloud Mirror — (Don’t) Open The Doors!

This isn’t specific to our game “Blue Cloud Mirror”. Everyone trying to create a Hybrid Cloud will need to decide how to connect a local application in a secure manner with code running on the Cloud without fully opening “the doors”. IBM offers a service called Secure Gateway exactly for this purpose. It creates a TLS encrypted tunnel (TLS v1.2) between a Secure Gateway Server on the IBM Cloud and a Secure Gateway Client installed on-premise in your private network. The connection is initiated from the Client so there shouldn’t be any issues with your firewall.

IBM Secure Gateway

You can test a limited (“lite”) version of IBM Secure Gateway with a free IBM Cloud account. Limited means you can connect to one destination which is one on-premise application with a limited amount of traffic (500 MB/month), sufficient for our needs with Blue Cloud Mirror.

The IBM Secure Gateway Service can be found in the “Integration” section of the IBM Cloud Catalog. Log on to the IBM Cloud, go to the Catalog, select IBM Secure Gateway, choose a region, an organization, and a space, click “Create” and wait a moment until the service is ready.

I wrote about the configuration of IBM Secure Gateway in the Users section of our Github repository. There are two things that may be confusing when you start to configure:

  1. What is the difference between Client and Destination?

The Secure Gateway Client is a piece of software that is installed on a server (physical or virtual) on-premise in your data center. It creates the connection to the IBM Secure Gateway service running on the IBM Cloud.

The Destination is the application or API that you want to connect to. It could run on the same server as the client or it could run somewhere on the same network within the data center.

  2. Why do I need to configure ACLs, too?

I already specified the destination address and port in the destination configuration, and then I need to specifically allow access to that address and port in the ACL (access control list), too. The ACL is a list that contains information about all clients and their destinations that you manage with an IBM Secure Gateway instance. With the ACL you can “turn off” (deny access to) a destination without deleting it, for example when you are about to install a new version of the application/API.

At the end of my last blog “Blue Cloud Mirror – Of Kubes and Couches” I explained that access to the Users API is via a Kubernetes ingress which is configured for a host “users-api.cloud” and secured with a self-signed TLS certificate. Both the host name and the self-signed certificate would be a problem if I tried to access the API via the Internet directly. With Secure Gateway this is not an issue. In the README I give instructions on how to create the TLS certificate and how to add the Ingress (Minikube) IP address together with the hostname “users-api.cloud” to the server’s hosts file so that the Secure Gateway Client can resolve it. The hostname and TLS certificate are used in the Secure Gateway destination configuration.
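The hosts file entry is just one line; the IP address below is an example value (the Minikube ingress IP from “minikube ip”):

192.168.99.100   users-api.cloud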

If you go through the configuration yourself, you’ll notice that the Secure Gateway Client is also available as a Docker image. I tried to use that and even tried to create a Kubernetes deployment from it. The problem is that you can’t easily change the hosts file of the Docker image, and without adding the hostname “users-api.cloud” the Secure Gateway Client isn’t able to resolve the IP address of the Users API ingress. When installing the Secure Gateway Client locally with a classical installer, there is no problem.

With everything in place — Secure Gateway Service with Client and Destination set up — the Users API is now available under a very cryptic URL, something like https://cap-eu-de-sg*****.securegateway.appdomain.cloud:12345.

In my next blog I will explain how to manage and describe the API to a developer using IBM API Connect, another service available on the IBM Cloud.

Blue Cloud Mirror – Of Kubes and Couches

In my last blog I presented an overview and introduction to Blue Cloud Mirror. In this blog I want to describe the back end of the Users API.

If a player of Blue Cloud Mirror decides to enter the competition and enters their user data, it is stored when the game is over and the player clicks “Save Score” on the Results page. The data is stored on premise (off the cloud) with the help of this Users API. The data set contains first name, last name, email, and the acceptance of the terms.

The Users API back end is made up of a Node.js application and CouchDB, both deployed on Minikube. You will find the details here: https://github.com/IBM/blue-cloud-mirror/tree/master/users

Our original plan was to use IBM Cloud Private (“eat your own cookies”) and I started to build an IBM Cloud Private instance, but it is too big for a simple demo. There is an IBM Cloud Private Community Edition that everyone can download and use, but its resource requirements far exceed what is available on a typical notebook; there is no way to carry it around for a demo at a conference. You would need a server of a certain size that you can spare for the demo. Instead we decided to go with Minikube.

Minikube is a single-node Kubernetes “cluster” that can run on a notebook. It is not suitable for production, of course, but it is sufficient to run this demo. By default Minikube starts a cluster that uses 2 CPUs (or CPU threads, depending on how you count them), 2 GB of RAM, and 20 GB of disk. If your notebook or server has more resources, you can utilize them. Other than that, the setup of Minikube is really easy: download the Minikube executable and type “minikube start”. After 10 to 15 minutes you’ll have a Kubernetes cluster. All you need to do then is enable ingress, and that’s it.

There is a CouchDB container image on Docker Hub which I have used to create a simple deployment with a single pod. You need to persist the configuration and the data of CouchDB; information on this is available on Docker Hub. My deployment creates two persistent volumes of type hostPath and two persistent volume claims, one for the configuration and one for the data directory. Minikube provides a /data directory in the node’s file system that is persistent across reboots. This is why both persistent volumes point to the /data directory.
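A sketch of one of the two volume/claim pairs; the names and sizes are assumptions, the hostPath under /data is the important part:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: couchdb-data-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/couchdb/data      # /data survives Minikube reboots
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: couchdb-data-pvc
spec:
  storageClassName: ""            # bind to the hostPath volume, no dynamic provisioning
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi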

CouchDB starts up unconfigured in “Admin Party” mode. To be able to access CouchDB externally, there is a NodePort definition for the CouchDB service using port 32001. Once CouchDB is started, its admin dashboard (Fauxton) is available on this port. The CouchDB configuration is described here.
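The NodePort definition could look like this (service name and labels are assumptions, 5984 is the CouchDB default port):

apiVersion: v1
kind: Service
metadata:
  name: couchdb
spec:
  type: NodePort
  selector:
    app: couchdb
  ports:
  - port: 5984
    targetPort: 5984
    nodePort: 32001     # Fauxton and the CouchDB API are reachable on this port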

The User Core Service is written in Node.js and provides an API to access CouchDB. It uses “express” for the POST and GET methods, “express-basic-auth” to allow only authenticated access to the API, and “nano” to access CouchDB. The CouchDB URL is passed as an environment variable in the Kubernetes deployment; the URL must contain the user ID and password of the CouchDB setup.

The API is exposed externally with a Kubernetes ingress. (Remember to enable ingress in Minikube!) The ingress is configured for TLS for the host name “users-api.cloud”. TLS uses a self-signed certificate, and the host name must be entered into the /etc/hosts file of the system running Minikube (unless you are the master of your DNS). Instructions are in the README. Using a self-signed TLS certificate is no problem since it is used in only one place: the configuration of IBM Secure Gateway, which I will explain in my next post. Stay tuned!
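A sketch of such an ingress; the service name, port, and secret name are assumptions, and the TLS secret is created from the self-signed certificate as described in the README:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: users-api-ingress
spec:
  tls:
  - hosts:
    - users-api.cloud
    secretName: users-api-tls         # self-signed certificate
  rules:
  - host: users-api.cloud
    http:
      paths:
      - path: /
        backend:
          serviceName: users-core     # assumed name of the Node.js service
          servicePort: 3000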