I am moving

Not me personally but this blog is moving.

Image by congerdesign on Pixabay

I used this blog a lot when I was still actively employed by IBM. In November last year I retired from work. No, I am not a “Rentner” (a pensioner) yet; I am in what is called the “passive phase” of an early retirement program. But I don’t blog that much anymore; actually, I hardly blog at all.

So in the last couple of days I played around with GitHub Pages, themes, Jekyll, WordPress export, and conversion to Markdown. The result is here, my new (old) blog:

https://haralduebele.github.io

It looks a bit different, of course, but the structure is identical, only the host part of the URL has changed.

This blog, the one you are reading right now, will stay alive until the end of December 2021. No new articles, though. And then the domain will be unregistered.

Run your Code and Containers Serverless on IBM Cloud Code Engine

Please note: I am moving this blog.
You can read this article at its new home:
https://haralduebele.github.io/2020/09/21/run-your-code-and-containers-serverless-on-ibm-cloud-code-engine/

IBM Cloud Code Engine is a fully managed, serverless platform that runs your containerized workloads, including web apps, micro-services, event-driven functions, or batch jobs. Code Engine even builds container images for you from your source code. Because these workloads are all hosted within the same Kubernetes infrastructure, all of them can seamlessly work together. The Code Engine experience is designed so that you can focus on writing code and not on the infrastructure that is needed to host it.

I am a big fan of Kubernetes, it is a very powerful tool to manage containerized applications. But if you only want to run a small application without knowing exactly how much traffic it will generate, then Kubernetes may be too big, too expensive, and too much effort. A serverless platform would most likely be better suited for this, for example Knative Serving. But it still requires Kubernetes, and if you run a Knative instance on your own you probably don’t gain much. This is where something like IBM’s Code Engine comes into play: IBM runs the (multi-tenant) environment, you use a small part of it and in the end pay only for what you use. You don’t pay for any idle infrastructure. Code Engine is currently available as a Beta.

Code Engine offers 3 different options: Applications, Jobs, and Container Builds. Applications and jobs are organized in “Projects”, which are based on Kubernetes namespaces and act as a kind of folder. Apps and jobs within the same project can communicate with each other over a private network.

Run your code as an application

This is based on Knative Serving. A container image is deployed; it runs and accepts requests until it is terminated by the operator. An example would be a web application that users interact with, or a microservice that receives requests from a user or from other microservices. Since it is based on Knative Serving, it allows scale-to-zero: no resources are used and hence no money is spent when nobody uses the service. If it receives a request, it spins up, serves the request, and goes dormant again after a time-out. If you allow auto-scaling, it spins up more instances when a huge number of requests comes in. Knative Serving itself can do all of this, but IBM’s Code Engine adds a nice web-based GUI for it, plus some additional features that I describe later.

Run a job

What is the difference between an app and a job? An app runs until it is terminated by an operator, and it can receive requests. A job doesn’t receive requests and runs to completion, i.e. it runs until the task it has been started for is complete. This is not based on Knative Serving, but Kubernetes has the concept of jobs, and the linked documentation contains an example that computes π to 2000 places and prints it out, which is a typical task for a job.
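
For reference, the Kubernetes Job manifest from that example looks roughly like this (trimmed from the Kubernetes documentation):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4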

This is how the job would look in Code Engine:

There is a Job Configuration; it specifies the container image (perl) and, in the Pi example, the command (perl) plus the 3 arguments to compute π to 2000 places and print it.

Submitting a “jobrun” creates a pod and in the pod’s log we will find π as:

3.14159265358979323846264338327950288419716939937…

Submitting a job is interesting:

This is where a Code Engine job differs from Kubernetes: in this screenshot, Array indices of “1-50” means that Code Engine will start 50 jobs numbered 1 through 50 using the same configuration. It doesn’t really make sense to calculate the number Pi 50 times. (It should render the identical result 50 times; if not, something is seriously wrong.) But imagine a scenario like this: you have a huge sample of sensor data (or images, or voice samples, etc.) that you need to process to create an ML model. Instead of starting one huge job to process it all, you could start 50 or 100 or even more smaller jobs that work on subsets of the data in an “embarrassingly parallel” approach. The current limit is a maximum of 1000 job instances at the same time.

Each of the pods for one of these jobs in an array gets an environment variable JOB_INDEX injected. You could then create an algorithm where each job is able to determine which subset of data to work on based on the index number. If one of the jobs fails, e.g. JOB_INDEX=17, you could restart a single job with just this single Array index instead of rerunning all of them.
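
As a sketch of the idea (the data file names and the process-data command are hypothetical), the script running inside each job instance could look like this:

#!/bin/sh
# JOB_INDEX is injected by Code Engine into every instance of the job array
echo "Working on chunk ${JOB_INDEX}"
# hypothetical command: each instance only processes 'its' subset of the data
process-data --input "sensor-data-part-${JOB_INDEX}.csv" --output "result-${JOB_INDEX}.json"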

Build a Container Image

Code Engine can build container images for you. There are 2 “build strategies”: Buildpack and Dockerfile.

Buildpack (or “Cloud Native Buildpack”) is something you may know from Cloud Foundry or Heroku: the Buildpack inspects your code in a source repository, determines the language environment, and then creates a container image. This is of course limited to the supported languages and language environments, and it is based on a number of assumptions. So it will not always work, but when it does it relieves developers from writing and maintaining Dockerfiles. The Buildpack strategy is based on Paketo, which is a Cloud Foundry project. Paketo in turn is based on Cloud Native Buildpacks, which are maintained under Buildpacks.io and are a Cloud Native Computing Foundation (CNCF) sandbox project at the moment. Buildpacks are currently available for Go, Java, Node.js, PHP, and .NET Core. More will probably follow.

The Dockerfile strategy is straightforward: Specify your source repository and the name of the Dockerfile within, then start to create. It is based on Kaniko and builds the container image inside a container in the Kubernetes cluster. The Dockerfile strategy should always work, even when using Buildpack fails.

The container images are stored in an image registry; this can be Docker Hub, the IBM Cloud Container Registry (ICR), or other registries, both public and private. You can safely store the credentials to access private image registries in Code Engine. These secrets can then be used to store images after they have been built or to retrieve images to deploy a Code Engine app or job.

Of course, you don’t have to build your container images in Code Engine. You can use your existing DevOps toolchains to create the images and store them in a registry, and Code Engine can pick them up from there. But it’s nice that you can build them in a simple and easy way with Code Engine.

Code Engine CLI

There is a Code Engine plugin for the ibmcloud CLI. Currently the Code Engine (CE or ce) CLI has more functionality than the web-based UI in the IBM Cloud dashboard. This will most likely change as Code Engine progresses through the Beta and becomes generally available later.
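
The plugin is installed like any other ibmcloud CLI plugin; the plugin name should be code-engine:

$ ibmcloud plugin install code-engine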

You can use the CLI to retrieve the Kubernetes API configuration used by Code Engine. Once this is done you can also use kubectl and the kn CLI, although you only have limited permissions in the Kubernetes cluster. I made a quick test: kubectl apply -f service.yaml does work and creates an app in Code Engine; kn service list and kn service describe hello work as well. So you are not limited to the ibmcloud CLI.
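
For such a test, a minimal Knative Service definition along these lines is enough (the image name is just an example):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: ibmcom/hello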

Networking

Code Engine apps are assigned a URL in the form https://hello.abcdefgh-1234.us-south.codeengine.appdomain.cloud. They are accessible externally using HTTPS/TLS secured by a Let’s Encrypt certificate. If you deploy a workload with multiple services/apps, maybe only one of them needs to be accessed from the Internet, e.g. the backend-for-frontend. You can limit the networking of the other services to private Code Engine internal endpoints with the CLI:

$ ibmcloud ce application create --name myapp --image ibmcom/hello --cluster-local

This is the same thing you would do with a label in the YAML file of a Knative service.
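
As a sketch, assuming the cluster-local visibility label (networking.knative.dev/visibility in recent Knative versions), the plain Knative equivalent would be:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
  labels:
    networking.knative.dev/visibility: cluster-local
spec:
  template:
    spec:
      containers:
      - image: ibmcom/hello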

Code Engine jobs do not need this; by definition they cannot be accessed externally. Jobs can still make external requests, though, and they can call Code Engine apps internally; there is an example in the Code Engine sample Git repo at https://github.com/IBM/CodeEngine.

Integrate IBM Cloud services

If you know Cloud Foundry on the IBM Cloud this should be familiar. IBM Cloud services like Cloud Object Storage, the Cloudant database, the Watson services, etc. can be “bound” to a Cloud Foundry app. When the Cloud Foundry app is started, an environment variable VCAP_SERVICES is injected into its environment; it holds a JSON object with the configuration (URLs, credentials, etc.) of the bound service(s). The starting application can then retrieve the configuration and configure access to the service(s). The developers of Code Engine have duplicated this method: in addition to the JSON object in VCAP_SERVICES they also inject individual environment variables per service (for code that struggles with JSON, like Bash scripts).
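
Inside the container this can be inspected roughly like this (a sketch that assumes a bound Cloud Object Storage instance appearing under the key cloud-object-storage and jq being available in the image):

# dump the full JSON object with all bound services
echo "$VCAP_SERVICES" | jq .
# hypothetical: extract the API key of the bound Cloud Object Storage instance
echo "$VCAP_SERVICES" | jq -r '.["cloud-object-storage"][0].credentials.apikey'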

The helloworld example displays the environment variables of the pod it is running in. If you bind an IBM Cloud service to it, you can see the result:

This binding of IBM Cloud services is really interesting for Code Engine jobs. Remember that you cannot connect to them and that by themselves they can only write to the job log. With this feature, you can for example bind a Cloud Object Storage (COS) service to the job, place your data into a COS bucket, run an array of jobs that pick “their” data based on their JOB_INDEX number, and, when done, place the results back into the COS bucket.

You may have guessed that under the covers, binding an IBM Cloud service to a Code Engine app or job creates a Kubernetes secret automatically.

Conclusion

Keep in mind that at the time of this writing IBM Cloud Code Engine has just entered Beta (it was announced last week). It still has beta limitations, some functions are only available in the CLI and not in the Web UI, and during the Beta, price plans are not available yet. But it is already very promising: it is a very easy start for your small apps using serverless technologies. I am sure that there will be more features and functions in Code Engine as it progresses towards general availability.

Application Security from a Platform Perspective

Please note: I am moving this blog.
You can read this article at its new home:
https://haralduebele.github.io/2020/09/03/application-security-from-a-platform-perspective/

We have added an application security example to our pet project Cloud Native Starter.

Picture 1: Application Architecture

The functionality of our sample is this:

  • A Web-App service serves a Vue.js/Javascript Web-App frontend application running in the browser of a client
  • This frontend redirects the user to the login page of Keycloak, an open source identity and access management (IAM) system
  • After successful login, the frontend obtains a JSON Web Token (JWT) from Keycloak
  • It requests a list of blog articles from the Web-API using the JWT
  • The Web-API in turn requests the article information from the Articles service, again using the JWT
  • The Web-API and Articles services use Keycloak to verify the validity of the JWT and authorize the requests

My colleague Niklas Heidloff has blogged about the language-specific application security aspects here:

We also created an app security workshop from it; the material is publicly available on Gitbook.

In this article I want to talk about application security from the platform side. This is what we cover in the above-mentioned workshop:

Picture 2: Platform view of the Cloud Native Starter security sample

There are two things that I want to write about:

  1. Accessing the application externally using TLS (HTTPS, green arrow)
  2. Internal Istio Service Mesh security using mutual TLS (mTLS, red-brown arrows)

About the architecture

This is a sample setup for a workshop with the main objective to make it as complete as possible while also keeping it as simple as possible. That’s why there are some “shortcuts”:

  1. Istio installation is performed with the demo profile.
  2. Istio Pod auto-injection is enabled on the default namespace using the required label (see the command after this list).
  3. Web-App deployment in the default namespace is part of the Istio service mesh although it doesn’t benefit a lot from it; there is no communication with other services in the mesh. But it allows us to use the Istio Ingress for TLS-encrypted HTTPS access. In a production environment I would probably place Web-App outside the mesh, maybe even outside of Kubernetes, since it is only a web server.
  4. Keycloak is installed into the default namespace, too. It is an ‘ephemeral’ development install that consists only of a single pod without persistence. By placing it in the default namespace it can be accessed by the Web-App frontend in the browser through the Istio Ingress using TLS/HTTPS which is definitely a requirement for an IAM — you do not want your authentication information flowing unencrypted through the Internet!
    Making it part of the Service Mesh itself automatically enables encryption in the communication with the Web-API and Articles services; both call Keycloak to verify the validity of the JWT token passed by the frontend.
    In a production setup, Keycloak would likely be installed in its own namespace. You could either make this namespace part of the Istio service mesh, too. Or you could configure the Istio Egress to enable outgoing calls from the Web-API and Articles services to a Keycloak service outside the mesh. Or maybe you even have an existing Keycloak instance running somewhere else. Then you would also use the Istio Egress to get access to it.
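
For reference, enabling Istio Pod auto-injection on the default namespace (item 2 above) is a single standard command:

$ kubectl label namespace default istio-injection=enabled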

We are using Keycloak in our workshop setup; it is open source and widely used. Actually, any OpenID Connect (OIDC) compliant IAM service should work. Another good example would be the App ID service on IBM Cloud, which has the advantage of being a managed service, so you don’t have to operate it yourself.

Accessing the application with TLS

In this example we are using Istio to help secure our application. We will use the Istio Ingress to route external traffic from the Web-App frontend into the application inside the service mesh.

From a Kubernetes networking view, the Istio Ingress is a Kubernetes service of type LoadBalancer. It requires an external IP address to make it accessible from the Internet. And it will also need a DNS entry in order to be able to create a TLS certificate and to configure the Istio Ingress Gateway correctly.

How you do that is dependent on your Kubernetes implementation and your Cloud provider. In our example we use the IBM Cloud and the IBM Cloud Kubernetes Service (IKS). For IKS the process of exposing the Istio Ingress with a DNS name and TLS is documented in this article and here based on the Istio Bookinfo sample.

The documentation is very good, so I won’t repeat it here. But a little background may be helpful: when you issue the command to create a DNS entry for the load balancer (ibmcloud ks nlb-dns create ...), this command also produces a Let’s Encrypt TLS certificate for the DNS entry in the background and stores it in a Kubernetes secret in the default namespace. The Istio Ingress runs in the istio-system namespace and cannot access a secret in default. That is the reason for the intermediate step of exporting the secret with the certificate and recreating it in istio-system.
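
A sketch of that intermediate step (the secret name is a placeholder):

$ kubectl get secret <nlb-dns-secret> -n default -o yaml > tls-secret.yaml
# edit tls-secret.yaml and change the namespace to istio-system, then:
$ kubectl apply -f tls-secret.yaml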

So how can storing a TLS certificate in a Kubernetes secret be secure when the secret is only base64-encoded, not encrypted? It isn’t by itself, but there are two possible solutions:

  1. Use a certificate management system like IBM Certificate Manager: Certificate Manager uses the Hardware Security Module (HSM)-based IBM Key Protect service for storing root encryption keys. Those root encryption keys are used to wrap per-tenant data encryption keys, which are in turn used to encrypt per-certificate keys which are then stored securely within Certificate Manager databases.
  2. Add a Key Management System (KMS) to the IKS cluster on the IBM Cloud. There is even a free option, IBM Key Protect for IBM Cloud, or for the very security conscious there is the IBM Hyper Protect Crypto Service. Both can be used to encrypt the etcd server of the Kubernetes API server and Kubernetes secrets. You would need to manage the TLS certificates yourself, though.

Or use both, the certificate management system to manage your TLS certificates and the KMS for the rest.

We didn’t cover adding a certificate management system or a KMS in our workshop to keep it simple. But there is a huge documentation section on many aspects of protecting sensitive information in your cluster on the IBM Cloud:

Picture 3 (c) IBM Corp.

Istio Security

In my opinion, Istio is a very important and useful addition to Kubernetes when you work with Microservices architectures. It has features for traffic management, security, and observability. The Istio documentation has a very good section on Istio security features.

In our example we set up Istio with “pod auto-injection” enabled for the default namespace. This means that into every pod that is deployed into the default namespace, Istio deploys an additional container, the Envoy proxy. Istio then changes the routing information in the pod so that all other containers in the pod communicate with services in other pods only through this proxy. For example, when the Web-API service calls the REST API of the Articles service, the Web-API container in the Web-API pod connects to the Envoy proxy in the Web-API pod which makes the request to the Envoy proxy in the Articles pod which passes the request to the Articles container. Sounds complicated but it happens automagically.

The Istio control plane contains a certificate authority (CA) that can manage keys and certificates. This Istio CA creates an X.509 certificate for every Envoy proxy, and this certificate can be used for encryption and authentication in the service mesh.

(c) istio.io

You can see in picture 2 that each of our pods is running an Envoy sidecar and each sidecar holds an (X.509) certificate; this includes the Istio Ingress, which is of course part of the service mesh, too.

With the certificates in place in all the pods, all the communication in the service mesh is automatically encrypted using mutual TLS or mTLS. mTLS means that in the case of a client service (e.g. Web-API) calling a server service (e.g. Articles) both sides can verify the authenticity of the other side. When using “simple” TLS, only the client can verify the authenticity of the server, not vice versa.

The Istio CA even performs automatic certificate and key rotation. Imagine what you would need to add to your code to implement this yourself!

You still need to configure the Istio Ingress Gateway. “Gateway” is an Istio configuration resource. This is what its definition looks like:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: default-gateway-ingress
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "harald-uebele-k8s-1234567890-0001.eu-de.containers.appdomain.cloud"

This requires that you followed the instructions that I linked in the previous section “Accessing the application with TLS”. These instructions create the DNS hostname specified in the hosts: variable and the TLS privateKey and serverCertificate in the correct location.

Now you can access the Istio Ingress using the DNS hostname and only (encrypted) HTTPS as protocol. HTTPS is terminated at the Istio Ingress, which means the communication is decrypted there; the Ingress has the required keys to do so. The Istio Ingress is part of the Istio service mesh, so all the communication between the Ingress and any other service in the mesh will be re-encrypted using mTLS. This happens automatically.

We also need to define an Istio VirtualService for the Istio Ingress Gateway to configure the internal routes:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice-ingress
spec:
  hosts:
  - "harald-uebele-k8s-1234567890-0001.eu-de.containers.appdomain.cloud"
  gateways:
  - default-gateway-ingress
  http:
  - match:
    - uri:
        prefix: /auth
    route:
    - destination:
        port:
          number: 8080
        host: keycloak
  - match:
    - uri:
        prefix: /articles
    route:
    - destination:
        port:
          number: 8081
        host: web-api
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: web-app

The DNS hostname is specified in the hosts: variable, again.

There are 3 routing rules in this example:

  1. https://harald-uebele-k8s-1234567890-0001.eu-de.containers.appdomain.cloud/auth will route the request to the Keycloak service, port 8080. If you know Keycloak you will know that 8080 is the unencrypted port!
  2. https://harald-uebele-k8s-1234567890-0001.eu-de.containers.appdomain.cloud/articles to the Web-API service, port 8081.
  3. Calling https://harald-uebele-k8s-1234567890-0001.eu-de.containers.appdomain.cloud without a path sends the request to the Web-App service, which basically is an Nginx web server listening on port 80. Again: HTTP only!

Is this secure? Yes, because all involved parties establish their service mesh internal communications via the Envoy proxies and those will encrypt traffic.

Can it be more secure? Yes: by default the Istio service mesh uses mTLS in “permissive” mode, so you can still access the services via unencrypted requests. This is done on purpose to allow you to migrate into an Istio service mesh without immediately breaking your application. In our example you could still call the Articles service using its NodePort, which effectively bypasses Istio security.

Switching to STRICT mTLS

STRICT means that mTLS is enforced for all communication in the Istio service mesh. No unencrypted communication is possible, and no unauthenticated communication either (that is what the X.509 certificates are for). This pretty much eliminates the possibility of man-in-the-middle attacks.

This requires a PeerAuthentication definition:

apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "default"
spec:
  mtls:
    mode: STRICT

The PeerAuthentication policy can be set mesh wide, for a namespace, or for a workload using a selector. In this example the policy is set for namespace default.

Once this definition is applied, only mTLS-encrypted traffic is possible. You cannot access any service running inside the Istio service mesh by calling it on its NodePort. This also means that services running inside the service mesh cannot call services outside without going through an Istio Egress Gateway.

You can do even more with Istio without changing a line of your code. The Istio security concepts and security tasks documentation gives a good overview of what is possible.

Two great additions to ‘kubectl’

Please note: I am moving this blog.
You can read this article at its new home:
https://haralduebele.github.io/2020/05/20/two-great-additions-to-kubectl/

I started to learn Kubernetes in its vanilla form. Almost a year ago I took my first steps on Red Hat OpenShift. Since then, going back to vanilla Kubernetes has made me miss the easy way you switch namespaces (aka projects) in OpenShift: with ‘oc project’ it is like switching directories on your notebook. You can do that with ‘kubectl’ somehow, but it is not as simple.
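
For the record, it is possible with plain kubectl, just more to type:

$ kubectl config set-context --current --namespace=istio-system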

Recently I found 2 power tools for kubectl: ‘kubectx’ and ‘kubens’. Ahmet Alp Balkan, a Google Software Engineer, created them and open sourced them (https://github.com/ahmetb/kubectx).

The GitHub repo has installation instructions for macOS and different flavours of Linux. When you install them, also make sure to install ‘fzf’ (“A command-line fuzzy finder”, https://github.com/junegunn/fzf); it is a cool addition.

kubens

‘kubens’ allows you to quickly switch namespaces in Kubernetes. Normally you work in ‘default’ and whenever you need to check something or do something in another namespace you need to add the ‘-n namespace’ parameter to your command.

‘kubens istio-system’ will make ‘istio-system’ your new home, and a subsequent ‘kubectl get pod’ or ‘kubectl get svc’ will show the pods and services in istio-system. That’s not all.

‘kubens’ without a parameter will list all namespaces, and with ‘fzf’ installed you even get a selectable list:

I think that is even better than ‘oc projects’!

kubectx

‘kubectx’ is really helpful when you work with multiple Kubernetes clusters. I typically work with a Kubernetes cluster on the IBM Cloud (IKS) and then very often start CRC (CodeReady Containers) to try something out on OpenShift. When I log into OpenShift, my connection to the IKS cluster seems to drop. It doesn’t actually drop; the kube context is simply switched to CRC. With ‘kubectx’ you can switch between them.

In this example I have two contexts, one is CRC, the other IKS (Kubernetes on IBM Cloud):

Not exactly easy to know which one is which, is it? But you can set aliases for the entries like this:

$ kubectx CRC=default/api-crc-testing:6443/kube:admin
$ kubectx IKS=knative/br1td2of0j1q10rc8aj0

And then you get a list with recognizable names:

You can now switch via the list. In addition, with ‘kubectx -‘ you can switch to the previous context.

When you constantly create new kube contexts, e.g. by creating new CRC or Minikube instances, this list may grow and become unmanageable. But with ‘kubectx -d <NAME>’ you can delete entries from the list. (They will still be in the kube context, though.)

Deploy your Quarkus applications on Kubernetes. Almost automatically!

Please note: I am moving this blog.
You can read this article at its new home:
https://haralduebele.github.io/2020/04/03/deploy-your-quarkus-applications-on-kubernetes-almost-automatically/

You want to code Java, not Kubernetes deployment YAML files? And you use Quarkus? Then you may have seen the announcement blog for Quarkus 1.3.0. Under “much much more” is a feature that is very interesting to everyone who uses Kubernetes or OpenShift and dislikes the required YAML files:

Easy deployment to Kubernetes or OpenShift

The Kubernetes extension has been overhauled and now gives users the ability to deploy their Quarkus applications to Kubernetes or OpenShift with almost no effort. Essentially the extension now also takes care of generating a container image and applying the generated Kubernetes manifests to a target cluster, after the container image has been generated.

Image © quarkus.io

There are two Quarkus extensions required.

  1. Kubernetes Extension
    This extension generates the Kubernetes and OpenShift YAML (or JSON) files and also manages the automatic deployment using these files.
  2. Container Images
    There are actually 3 extensions that can handle the automatic image build using:
    – Jib
    – Docker
    – OpenShift Source-to-image (s2i)

Both extensions use parameters that are placed into the application.properties file. The parameters are listed in the respective guides of the extensions. Note that I use the term “listed”. Some of these parameters are really just listed without any further explanation.

You can find the list of parameters for the Kubernetes extension here, those for the Container Image extension are here.

I tested the functionality in 4 different scenarios: Minikube, IBM Cloud Kubernetes Service, and Red Hat OpenShift in the form of CodeReady Containers (CRC) and Red Hat OpenShift on IBM Cloud. I will describe all of them here.

Demo Project

I use the simple example from the Quarkus Getting Started Guide as my demo application. The current Quarkus 1.3.1 uses Java 11 and requires Apache Maven 3.6.2+. My notebook runs on Fedora 30 so I had to manually install Maven 3.6.3 because the version provided in the Fedora 30 repositories is too old.

The following command creates the Quarkus Quickstart Demo:

$ mvn io.quarkus:quarkus-maven-plugin:1.3.1.Final:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=config-quickstart \
    -DclassName="org.acme.config.GreetingResource" \
    -Dpath="/greeting"
$ cd config-quickstart

You can run the application locally:

$ ./mvnw compile quarkus:dev

Then test it:

$ curl -w "\n" http://localhost:8080/hello
hello

Now add the Kubernetes and Docker Image extensions:

$ ./mvnw quarkus:add-extension -Dextensions="kubernetes, container-image-docker"

Edit application.properties

The Kubernetes extension will create 3 Kubernetes objects:

  1. Service Account
  2. Service
  3. Deployment

The configuration and naming of these is based on some basic parameters that have to be added in application.properties:

quarkus.kubernetes.part-of=todo-app
quarkus.container-image.registry=
quarkus.container-image.group=
quarkus.container-image.name=getting-started
quarkus.container-image.tag=1.0
quarkus.kubernetes.service-type=NodePort

  • quarkus.kubernetes.part-of sets one of the Kubernetes “recommended” labels (recommended, not required).
  • The quarkus.container-image.* parameters specify the container image in the K8s deployment; the result is ‘image: getting-started:1.0’. Make sure there are no excess or trailing spaces! I specify empty registry and group parameters to obtain predictable results.
  • quarkus.kubernetes.service-type=NodePort creates a service of type NodePort; the default would be ClusterIP (which doesn’t really work with Minikube).

Now do a test compile with

$ ./mvnw clean package

This should result in BUILD SUCCESS. Look at the kubernetes.yml file in the target/kubernetes directory.

Every object (ServiceAccount, Service, Deployment) has a set of annotations and labels. The annotations are picked up automatically when the source directory is under version control (e.g. git) and also include the last compile time. The labels are picked up from the parameters specified above. You can specify additional parameters, but the Kubernetes extension uses specific defaults:

  • app.kubernetes.io/name and name in the YAML are set to quarkus.container-image.name.
  • app.kubernetes.io/version in the YAML is set to the container-image.tag parameter.

The definition of the port (http, 8080) is picked up by Quarkus from the source code during compile.
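
To give an impression, with the parameters above the Deployment in the generated kubernetes.yml looks roughly like this (a hand-written, trimmed sketch, not the literal generated output):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: getting-started
  labels:
    app.kubernetes.io/name: getting-started
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/part-of: todo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: getting-started
      app.kubernetes.io/version: "1.0"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: getting-started
        app.kubernetes.io/version: "1.0"
    spec:
      containers:
      - name: getting-started
        image: getting-started:1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP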

Deploy to Minikube

With Minikube, we will create the Container (Docker) Image in the Docker installation that is part of the Minikube VM. So after starting Minikube (minikube start) you need to point your local docker command to the Minikube environment:

$ eval $(minikube docker-env)

The Kubernetes extension specifies imagePullPolicy: Always as the default for a container image. This is a problem when using the Minikube Docker environment; it should be never instead. Your application.properties should therefore look like this:

quarkus.kubernetes.part-of=todo-app
quarkus.container-image.registry=
quarkus.container-image.group=
quarkus.container-image.name=getting-started
quarkus.container-image.tag=1.0
quarkus.kubernetes.image-pull-policy=never
quarkus.kubernetes.service-type=NodePort

Now try a test build & deploy in the getting-started directory:

$ ./mvnw clean package -Dquarkus.kubernetes.deploy=true

Check that everything is started with:

$ kubectl get pod 
$ kubectl get deploy
$ kubectl get svc

Note that in the result of the last command you can see the NodePort of the getting-started service, e.g. 31304 or something in that range. Get the IP address of your Minikube cluster:

$ minikube ip

And then test the service, in my example with:

$ curl 192.168.39.131:31304/hello
hello

The result of this exercise:

Installing 2 Quarkus extensions and adding 7 statements to the application.properties file (of which 1 is optional) allows you to compile your Java code, build a container image, and deploy it into Kubernetes with a single command. I think this is cool!

What I just described for Minikube also works for the IBM Cloud. IBM Cloud Kubernetes Service (or IKS) does not have an internal container image registry; instead, this is a separate service and you may have guessed its name: IBM Cloud Container Registry (ICR). This example works on free IKS clusters, too. A free IKS cluster is free of charge and you can use it for 30 days.

For our example to work, you need to create a “namespace” in an ICR location (this is different from a Kubernetes namespace). For example, my test Kubernetes cluster (with the name mycluster) is located in Houston, so I create a namespace called ‘harald-uebele’ in the registry location Dallas (because it is close to Houston).
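
Creating that registry namespace is a single CLI command:

$ ibmcloud cr namespace-add harald-uebele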

Now I need to login and setup the connection using the ibmcloud CLI:

$ ibmcloud login
$ ibmcloud ks cluster config --cluster mycluster
$ ibmcloud cr login
$ ibmcloud cr region-set us-south

The last command will set the registry region to us-south which is Dallas and has the URL ‘us.icr.io’.

application.properties needs a few changes:

  • registry now holds the ICR URL (us.icr.io)
  • group is the registry namespace mentioned above
  • image-pull-policy is changed to always for ICR
  • service-account needs to be ‘default’; the service account created by the Kubernetes extension (‘getting-started’) is not allowed to pull images from the ICR image registry

quarkus.kubernetes.part-of=todo-app
quarkus.container-image.registry=us.icr.io
quarkus.container-image.group=harald-uebele
quarkus.container-image.name=getting-started
quarkus.container-image.tag=1.0
quarkus.kubernetes.image-pull-policy=always
quarkus.kubernetes.service-type=NodePort
quarkus.kubernetes.service-account=default

Compile & build as before:

$ ./mvnw clean package -Dquarkus.kubernetes.deploy=true

Check if the image has been built:

$ ibmcloud cr images

You should see the newly created image, correctly tagged, and hopefully with a ‘security status’ of ‘No issues’. That is the result of a Vulnerability Advisor scan that is automatically performed on every image.

Now check the status of your deployment:

$ kubectl get deploy
$ kubectl get pod
$ kubectl get svc

With kubectl get svc you will see the number of the NodePort of the service, in my example it is 30850. You can obtain the public IP address of an IKS worker node with:

$ ibmcloud ks worker ls --cluster mycluster

If you have multiple worker nodes, any of the public IP addresses will do. Test your service with:

$ curl <externalIP>:<nodePort>/hello

The result should be ‘hello’.

All this also works on

Red Hat OpenShift

I have tested this with CodeReady Containers (CRC) and on Red Hat OpenShift on IBM Cloud. CRC was a bit flaky; sometimes it would build the image and create the deployment config but wouldn’t start the pod.

On OpenShift, the container image is built using Source-to-Image (s2i) and this requires a different Maven extension:

$ ./mvnw quarkus:add-extension -Dextensions="container-image-s2i"

It seems like you can have only one container-image extension in your project. If you installed the container-image-docker extension before, you’ll need to remove it from the dependency section of the pom.xml file, otherwise the build may fail later.

There is an OpenShift-specific section of parameters/options in the documentation of the extension.

Start by logging in to OpenShift and creating a new project (quarkus):

$ oc login ...
$ oc new-project quarkus

This is the application.properties file I used:

quarkus.kubernetes.deployment-target=openshift
quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000
quarkus.container-image.group=quarkus
quarkus.container-image.name=getting-started
quarkus.container-image.tag=1.0
quarkus.openshift.part-of=todo-app
quarkus.openshift.service-account=default
quarkus.openshift.expose=true
quarkus.kubernetes-client.trust-certs=true

Line 1: Create an OpenShift deployment
Line 2: This is the (OpenShift internal) image repository URL for OpenShift 4
Line 3: The OpenShift project name
Line 4: The image name will also be used for all other OpenShift objects
Line 5: Image tag, will also be the application version in OpenShift
Line 6: Name of the OpenShift application
Line 7: Use the ‘default’ service account
Line 8: Expose the service with a route (URL)
Line 9: Needed for CRC because of self-signed certificates, don’t use with OpenShift on IBM Cloud

With these options in place, start a compile & build:

$ ./mvnw clean package -Dquarkus.kubernetes.deploy=true

It will take a while but in the end you should see a “BUILD SUCCESS” and in the OpenShift console you should see an application called “todo-app” with a Deployment Config, Pod, Build, Service, and Route:

Additional and missing options

Namespaces (Kubernetes) and projects (OpenShift) cannot be specified with an option in application.properties. With OpenShift that’s not really an issue because you can specify which project (namespace) to work in with the oc CLI before starting the mvn package. But it would be nice if there were a namespace and/or project option.

The Kubernetes extension picks up which port your app is using during the build. But if you need to specify an additional port, this is how you do it:

quarkus.kubernetes.ports.https.container-port=8443

This will add an https port on 8443 to the service and an https containerPort on 8443 to the containers spec in the deployment.
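
In the generated YAML this shows up roughly as follows (a sketch, not the literal output):

# in the Service spec:
  ports:
  - name: https
    port: 8443
    targetPort: 8443
# in the Deployment's container spec:
    ports:
    - containerPort: 8443
      name: https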

The number of replicas is supposed to be defined with:

quarkus.kubernetes.replicas=4

This results in the warning WARN io.qua.config Unrecognized configuration key “quarkus.kubernetes.replicas” was provided; it will be ignored, and the replica count remains 1 in the deployment. Instead, use the deprecated configuration option without the quarkus. prefix (I am sure this will be fixed):

kubernetes.replicas=4

Adding a key/value pair as an environment variable to the deployment:

quarkus.kubernetes.env-vars.DB.value=local

will result in this YAML:

    spec:
      containers:
      - env:
        - name: "DB"
          value: "local"

There are many more options, for readiness and liveness probes, mounts and volumes, secrets, config maps, etc. Have a look at the documentation.

Cloud Native and Reactive Microservices on Red Hat OpenShift 4

Please note: I am moving this blog.
You can read this article at its new home:
https://haralduebele.github.io/2020/02/03/cloud-native-and-reactive-microservices-on-red-hat-openshift-4/

My colleague Niklas Heidloff has started to create another version of our Cloud Native Starter using a reactive programming model, and he has also written an extensive series of blogs about it, starting here. He uses Minikube to deploy the reactive example, and I have created documentation and scripts to deploy it on CodeReady Containers (CRC), which runs Red Hat OpenShift 4.

The reactive version of Cloud Native Starter is based on Quarkus (“Supersonic Subatomic Java”), uses Apache Kafka for messaging, and PostgreSQL for data storage of the articles service. Postgres is accessed via the reactive SQL client. Niklas has blogged about all of the details.

Cloud Native Starter Reactive: High Level Architecture

The deployment on OpenShift is very similar to the deployment of the original Cloud Native Starter which I have written about in my last blog.

The services (web-app, web-api, authors, articles) are built locally in Docker, then tagged with an image path suitable for the OpenShift image repository, then pushed with Docker into the internal repository.

Two things are different, though:

  1. The reactive example currently doesn’t require Istio, so there is no need to install it.
  2. Kafka and Postgres weren’t used before.

I install Kafka using the Strimzi operator, and Postgres with the Dev4Devs operator.

In the OpenShift OperatorHub catalog, the Strimzi operator is at version 0.14.0, but we need version 0.15.0. That’s why I use a script to install the Strimzi Kafka operator and then deploy a Kafka cluster into a kafka namespace/project.
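
For illustration, a minimal ephemeral (non-production) Kafka cluster custom resource for Strimzi looks roughly like this sketch:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
spec:
  kafka:
    replicas: 1
    listeners:
      plain: {}
    config:
      offsets.topic.replication.factor: 1
    storage:
      type: ephemeral
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}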

The Dev4Devs Postgres operator is installed through the OperatorHub catalog in the OpenShift web console into its own namespace (postgres).

An example Postgres “cluster” with a single pod is deployed via the operator into the same namespace/project.

Using operators makes it very easy to install components into your architecture. The way they are created in this example is not really applicable to production environments, but for creating test environments for developers it’s perfect.