Blue Cloud Mirror — (Don’t) Open The Doors!

This isn’t specific to our game “Blue Cloud Mirror”. Everyone trying to create a Hybrid Cloud needs to decide how to securely connect a local application with code running on the Cloud without fully opening “the doors”. IBM offers a service called Secure Gateway for exactly this purpose. It creates a TLS-encrypted tunnel (TLS v1.2) between a Secure Gateway Server on the IBM Cloud and a Secure Gateway Client installed on-premise in your private network. The connection is initiated by the Client, so there shouldn’t be any issues with your firewall.

IBM Secure Gateway

You can test a limited (“lite”) version of IBM Secure Gateway with a free IBM Cloud account. Limited means you can connect to a single destination, i.e. one on-premise application, with a limited amount of traffic (500 MB/month), which is sufficient for our needs with Blue Cloud Mirror.

The IBM Secure Gateway Service can be found in the “Integration” section of the IBM Cloud Catalog. Log on to the IBM Cloud, go to the Catalog, select IBM Secure Gateway, choose a region, an organization, and a space, click “Create” and wait a moment until the service is ready.
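If you prefer the command line, the instance can also be created with the IBM Cloud CLI. The following is only a sketch: the service and plan identifiers are assumptions, “ibmcloud cf marketplace” lists the real ones.

```
# log in and target a Cloud Foundry org and space
ibmcloud login
ibmcloud target --cf

# look up the exact service and plan names in the catalog
ibmcloud cf marketplace

# create the instance ("securegateway" and "lite" are assumptions)
ibmcloud cf create-service securegateway lite my-secure-gateway
```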

I wrote about the configuration of IBM Secure Gateway in the Users section of our GitHub repository. Two things may be confusing when you start to configure it:

  1. What is the difference between Client and Destination?

The Secure Gateway Client is a piece of software that is installed on a server (physical or virtual) on-premise in your data center. It creates the connection to the IBM Secure Gateway service running on the IBM Cloud.

The Destination is the application or API that you want to connect to. It could run on the same server as the Client, or somewhere else on the same network within the data center.

  2. Why do I need to configure ACLs, too?

You already specified the destination address and port in the destination configuration, and yet you need to explicitly allow access to that address and port in the ACL (access control list), too. The ACL lists all the clients and destinations that you manage with an IBM Secure Gateway instance. With it you can “turn off” (deny access to) a destination without deleting it, for example while you install a new version of the application/API.
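As a sketch, allowing and then denying the Users API destination in the Secure Gateway Client’s interactive terminal could look like this (check the client’s built-in help for the exact syntax; hostname and port are taken from the Users API example below):

```
# permit forwarding to the on-premise destination
acl allow users-api.cloud:443

# temporarily block it again, e.g. during an upgrade,
# without deleting the destination on the cloud side
acl deny users-api.cloud:443
```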

At the end of my last blog, “Blue Cloud Mirror – Of Kubes and Couches”, I explained that access to the Users API goes through a Kubernetes ingress which is configured for the host “users-api.cloud” and secured with a self-signed TLS certificate. Both the host name and the self-signed certificate would be a problem if I tried to access the API directly via the Internet. With Secure Gateway this is not an issue. In the README I give instructions on how to create the TLS certificate and how to add the ingress (Minikube) IP address together with the hostname “users-api.cloud” to the server’s hosts file so that the Secure Gateway Client can resolve it. The hostname and TLS certificate are then used in the Secure Gateway destination configuration.
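A minimal sketch of those steps; file and secret names are examples, the README has the exact commands:

```
# create a self-signed certificate for the ingress host
# (file names and the one-year validity are examples)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout users-api.key -out users-api.crt \
  -subj "/CN=users-api.cloud"

# store it as a Kubernetes TLS secret for the ingress
kubectl create secret tls users-api-tls \
  --key users-api.key --cert users-api.crt

# make the hostname resolvable for the Secure Gateway Client
echo "$(minikube ip) users-api.cloud" | sudo tee -a /etc/hosts
```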

If you go through the configuration yourself you’ll notice that the Secure Gateway Client is also available as a Docker image. I tried to use it and even to create a Kubernetes deployment from it. The problem is that you can’t easily change the hosts file of the Docker image, and without an entry for the hostname “users-api.cloud” the Secure Gateway Client isn’t able to resolve the IP address of the Users API ingress. With the Secure Gateway Client installed locally through the classical installer there is no such problem.

With everything in place — Secure Gateway Service with Client and Destination set up — the Users API is now available under a very cryptic URL, something like https://cap-eu-de-sg*****.securegateway.appdomain.cloud:12345.
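Calling the API through the gateway is then a normal HTTPS request. A sketch, assuming the /users path and the basic-auth credentials of the Users API (depending on the destination settings the gateway may additionally require a client certificate):

```
# the host and port come from the Secure Gateway destination's
# cloud host ("*****" is the masked gateway id)
curl -u apiuser:apipassword \
  "https://cap-eu-de-sg*****.securegateway.appdomain.cloud:12345/users"
```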

In my next blog I will explain how to manage and describe the API to a developer using IBM API Connect, another service available on the IBM Cloud.

Blue Cloud Mirror – Of Kubes and Couches

In my last blog I presented an overview of and introduction to Blue Cloud Mirror. In this blog I want to describe the Users API back end.

If a player of Blue Cloud Mirror decides to enter the competition and enters their user data, that data is stored when the game is over and the player clicks “Save Score” on the Results page. The data is stored on-premise (off the cloud) with the help of this Users API. A data set contains first name, last name, email, and the acceptance of the terms.
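A stored record could look like this; the field names are illustrative, the exact schema is in the repository:

```
{
  "firstName": "Jane",
  "lastName": "Doe",
  "email": "jane.doe@example.com",
  "termsAccepted": true
}
```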

The Users API back end is made up of a Node.js application and CouchDB, both deployed on Minikube. You will find the details at https://github.com/IBM/blue-cloud-mirror/tree/master/users

Our original plan was to use IBM Cloud Private (“eat your own cookies”) and I started to build an IBM Cloud Private instance, but it is too big for a simple demo. There is an IBM Cloud Private Community Edition that everyone can download and use, but its resource requirements far exceed what is available on a typical notebook; there is no way to carry it around for a demo at a conference. You would need a server of a certain size that you can spare for the demo. Instead we decided to go with Minikube.

Minikube is a single-node Kubernetes “cluster” that can run on a notebook. It is not suitable for production, of course, but it is sufficient to run this demo. By default Minikube starts a cluster that uses 2 CPUs (or CPU threads, depending on how you count them), 2 GB of RAM, and 20 GB of disk. If your notebook or server has more resources, you can utilize them. Other than that, setup of Minikube is really easy: download the Minikube executable and type “minikube start”. After 10 to 15 minutes you’ll have a Kubernetes cluster. All you need to do then is enable ingress, and that’s it.
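The whole setup boils down to two commands; the resource flags are optional and just make the defaults mentioned above explicit:

```
# download the minikube executable first, then:
minikube start --cpus 2 --memory 2048 --disk-size 20g

# enable the ingress addon (needed for the Users API later)
minikube addons enable ingress
```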

There is a CouchDB container image on Docker Hub which I have used to create a simple deployment with a single pod. You need to persist both the configuration and the data of CouchDB; information is available on Docker Hub. My deployment creates two persistent volumes of type hostPath and two persistent volume claims, one for the configuration and one for the data directory. Minikube provides a /data directory in the node’s file system that persists across reboots, which is why both persistent volumes point into the /data directory.
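A sketch of one of those volume definitions; names, sizes, and the exact path are assumptions, and the configuration volume looks the same with a different path:

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couchdb-data-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # inside Minikube's persistent /data directory
    path: /data/couchdb-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: couchdb-data-pvc
spec:
  # an empty storageClassName disables dynamic provisioning,
  # so the claim binds to the hostPath volume above
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```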

CouchDB starts up unconfigured in “Admin Party” mode. To be able to access CouchDB externally there is a NodePort definition for the CouchDB service, using port 32001. Once CouchDB is started, its admin dashboard (Fauxton) is available on this port. CouchDB configuration is described here.
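A sketch of that service definition; CouchDB listens on its default port 5984 internally, and the name and label selector are assumptions:

```
apiVersion: v1
kind: Service
metadata:
  name: couchdb
spec:
  type: NodePort
  selector:
    app: couchdb
  ports:
    - port: 5984        # CouchDB's default port
      targetPort: 5984
      nodePort: 32001   # exposed on the Minikube node
```

Fauxton is then reachable at http://<minikube-ip>:32001/_utils.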

The User Core Service is written in Node.js and provides an API to access CouchDB. It uses “express” for the POST and GET methods, “express-basic-auth” to allow only authenticated access to the API, and “nano” to access CouchDB. The CouchDB URL is passed as an environment variable in the Kubernetes deployment; the URL must contain the user ID and password of the CouchDB setup.
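A minimal sketch of that service, assuming the route paths, database name, credentials, and the COUCH_URL variable name (the real code is in the repository linked above):

```
const express = require('express');
const basicAuth = require('express-basic-auth');

// COUCH_URL includes the CouchDB credentials,
// e.g. http://admin:secret@couchdb:5984
const couch = require('nano')(process.env.COUCH_URL);
const users = couch.db.use('users');

const app = express();
app.use(express.json());
// only authenticated callers may use the API
app.use(basicAuth({ users: { apiuser: 'apipassword' } }));

// store a registration record
app.post('/users', async (req, res) => {
  try {
    const result = await users.insert(req.body);
    res.status(201).json(result);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

// read a record by its CouchDB document id
app.get('/users/:id', async (req, res) => {
  try {
    res.json(await users.get(req.params.id));
  } catch (err) {
    res.status(404).json({ error: 'not found' });
  }
});

app.listen(8080);
```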

The API is exposed externally with a Kubernetes ingress. (Remember to enable ingress in Minikube!) The ingress is configured with TLS for the host name “users-api.cloud”. TLS uses a self-signed certificate, and the host name must be entered into the /etc/hosts file of the system running Minikube (unless you are the master of your DNS). Instructions are in the README. Using a self-signed TLS certificate is no problem since it is used in only one place, the configuration of IBM Secure Gateway, which I will explain in my next post. Stay tuned!
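A sketch of such an ingress; the resource names and service port are assumptions, and the extensions/v1beta1 API version matches Kubernetes at the time of writing:

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: users-api
spec:
  tls:
    - hosts:
        - users-api.cloud
      # the secret holding the self-signed certificate
      secretName: users-api-tls
  rules:
    - host: users-api.cloud
      http:
        paths:
          - path: /
            backend:
              serviceName: users-api
              servicePort: 8080
```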

Blue Cloud Mirror — A fun IBM Cloud showcase

Blue Cloud Mirror is an online game based on multiple IBM Cloud technologies. It has two levels: in level one you need to show five facial expressions such as happy, angry, etc.; in level two you need to show five body positions. Have a look at it and play it here.

I created the game together with my colleagues Niklas Heidloff and Thomas Südbröcker. Niklas described many aspects of it in his blog, starting here.

Basically, Blue Cloud Mirror has three parts:

  1. The Game, which can be played anonymously or as a registered user
  2. The Scores Service, which keeps the high score list for registered users
  3. The Users Service, which keeps the user data from the registration

My part of this project is the Users Service. It does not run in the Cloud for several reasons:

  • Users may not be comfortable with having their data stored on the Cloud.
  • We wanted to deploy part of our microservices on Kubernetes, for example on IBM Cloud Private.
  • We wanted to show how easy it is to securely connect a local back end with an application on the Cloud. Instead of the Users Service, the connection could go to any application running on-premise.

I actually started to develop on an IBM Cloud Private instance, but since we wanted as many people as possible to use our game I decided to switch to a local instance of Minikube: it is simple, has a small footprint, and if you like you can carry it around on your notebook.

You can find our code in the IBM organization on GitHub at https://github.com/IBM/blue-cloud-mirror; the Users Service is in the users directory of the repository.

I will describe the Users Service in follow-on blogs. Stay tuned!

Stuttgart Kubernetes Meetup

Last Thursday night was the Stuttgart Kubernetes Meetup, hosted by CGI in Echterdingen (thanks!!!). I got the chance to talk about “Project Eirini”.

There is Kubernetes and there is Cloud Foundry. Both are cloud platforms, both offer container orchestration and scheduling, and both are available on the IBM Cloud. While Kubernetes is all about container orchestration, Cloud Foundry is a developer experience where the concept of containers is pretty much hidden from the developer. Both have their strengths and weaknesses: you can do almost anything with Kubernetes, but it has a steep learning curve, and as a developer you have to know a lot about orchestration. Cloud Foundry is limited to stateless or 12-factor apps, but as a developer you only focus on your code; Cloud Foundry takes care of the rest.

A while ago, SUSE started a project in the Cloud Foundry Incubator called “Cloud Foundry (CF) Containerization”. It converts the VMs running the CF management or backplane functions into containers and deploys them on Kubernetes, using a component called “fissile”. There is a GitHub repo for this. The project has been around for a while and works quite well. IBM uses this technology in “Cloud Foundry Enterprise Edition” to run a Cloud Foundry for a single customer on top of a Kubernetes cluster.

Cloud Foundry has a container orchestration component called “Diego”; Kubernetes is a container orchestrator itself. With the CF Containerization approach, Diego cells — the equivalent of Kubernetes worker nodes — are deployed as pods. That way, Cloud Foundry apps run as containers within containers (nested) and are not visible to Kubernetes. If you deploy Kubernetes apps via kubectl into the Kubernetes cluster that hosts CF Containerization, those apps are not visible to Diego. Diego and Kubernetes then work against each other instead of together. This is where Project Eirini comes in.

Eirini is the Greek goddess of peace 🙂

Eirini replaces Diego with Kubernetes (actually, it gives you a choice between the two). When you deploy an application to native Cloud Foundry, Diego uses a buildpack — a runtime that matches the programming language of the application — and combines it with the application code and dependencies to form what is called a “droplet”. The droplet is then placed into an empty container and executed; this forms the running application.

Eirini uses a mechanism published by buildpacks.io: it creates a container image instead of a droplet, plus a Helm chart, and deploys the application as a StatefulSet directly into Kubernetes. The application is visible in Kubernetes, and the Kubernetes cluster can be used to run other Kubernetes-native applications as well.
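In practice that means a plain “cf push” ends up as regular Kubernetes resources. A sketch; the namespace depends on the Eirini installation and is an assumption here:

```
# push the app through Cloud Foundry as usual
cf push my-app

# with Eirini the app shows up as native Kubernetes resources
kubectl get statefulsets,pods --namespace eirini
```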

This is the Eirini repository on GitHub; it contains information on how to run CF Containerization and Eirini together. In December 2018 Eirini passed the Cloud Foundry acceptance tests, and it should be production-ready in a while.