Deploy your Quarkus applications on Kubernetes. Almost automatically!

You want to code Java, not Kubernetes deployment YAML files? And you use Quarkus? You may have seen the announcement blog for Quarkus 1.3.0. Under “much much more” is a feature that is very interesting to everyone using Kubernetes or OpenShift and with a dislike for the required YAML files:

Easy deployment to Kubernetes or OpenShift

The Kubernetes extension has been overhauled and now gives users the ability to deploy their Quarkus applications to Kubernetes or OpenShift with almost no effort. Essentially the extension now also takes care of generating a container image and applying the generated Kubernetes manifests to a target cluster, after the container image has been generated.

Image © quarkus.io

Two Quarkus extensions are required:

  1. Kubernetes Extension
    This extension generates the Kubernetes and OpenShift YAML (or JSON) files and also manages the automatic deployment using these files.
  2. Container Images
    There are actually 3 extensions that can handle the automatic image build, using:
    – Jib
    – Docker
    – OpenShift Source-to-image (s2i)

Both extensions use parameters that are placed into the application.properties file. The parameters are listed in the respective guides of the extensions. Note that I use the term “listed”. Some of these parameters are really just listed without any further explanation.

You can find the list of parameters for the Kubernetes extension here, those for the Container Image extension are here.

I tested the functionality in 4 different scenarios: Minikube, IBM Cloud Kubernetes Service, and Red Hat OpenShift in the form of CodeReady Containers (CRC) and Red Hat OpenShift on IBM Cloud. I will describe all of them here.

Demo Project

I use the simple example from the Quarkus Getting Started Guide as my demo application. The current Quarkus 1.3.1 uses Java 11 and requires Apache Maven 3.6.2+. My notebook runs on Fedora 30 so I had to manually install Maven 3.6.3 because the version provided in the Fedora 30 repositories is too old.

The following command creates the Quarkus Quickstart Demo:

$ mvn io.quarkus:quarkus-maven-plugin:1.3.1.Final:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=config-quickstart \
    -DclassName="org.acme.config.GreetingResource" \
    -Dpath="/greeting"
$ cd config-quickstart

You can run the application locally:

$ ./mvnw compile quarkus:dev

Then test it:

$ curl -w "\n" http://localhost:8080/hello
hello

Now add the Kubernetes and Docker Image extensions:

$ ./mvnw quarkus:add-extension -Dextensions="kubernetes, container-image-docker"

Edit application.properties

The Kubernetes extension will create 3 Kubernetes objects:

  1. Service Account
  2. Service
  3. Deployment

The configuration and naming of these objects are based on some basic parameters that have to be added to application.properties:

quarkus.kubernetes.part-of=todo-app
One of the Kubernetes “recommended” labels (recommended, not required).

quarkus.container-image.registry=
quarkus.container-image.group=
quarkus.container-image.name=getting-started
quarkus.container-image.tag=1.0
These four parameters specify the container image in the Kubernetes deployment; the result is ‘image: getting-started:1.0’. Make sure there are no excess or trailing spaces! I specify empty registry and group parameters to obtain predictable results.

quarkus.kubernetes.service-type=NodePort
Creates a service of type NodePort; the default would be ClusterIP (which doesn’t really work with Minikube).

Now do a test compile with

$ ./mvnw clean package

This should result in BUILD SUCCESS. Look at the kubernetes.yml file in the target/kubernetes directory.

Every object (ServiceAccount, Service, Deployment) has a set of annotations and labels. The annotations are generated automatically from version control information (e.g. git), if the source directory is under version control, and from the last compile time. The labels are picked up from the parameters specified in the table above. You can specify additional parameters, but the Kubernetes extension uses specific defaults:

  • app.kubernetes.io/name and name in the YAML are set to quarkus.container-image.name.
  • app.kubernetes.io/version in the YAML is set to the container-image.tag parameter.

The definition of the port (http, 8080) is picked up by Quarkus from the source code during compile.
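
For illustration, the Service in my generated kubernetes.yml looked roughly like this (an abbreviated sketch; the exact output can differ between Quarkus versions):

apiVersion: v1
kind: Service
metadata:
  name: getting-started
  labels:
    app.kubernetes.io/name: getting-started
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/part-of: todo-app
spec:
  type: NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app.kubernetes.io/name: getting-started
    app.kubernetes.io/version: "1.0"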

Deploy to Minikube

With Minikube, we will create the Container (Docker) Image in the Docker installation that is part of the Minikube VM. So after starting Minikube (minikube start) you need to point your local docker command to the Minikube environment:

$ eval $(minikube docker-env)

The Kubernetes extension specifies imagePullPolicy: Always as the default for a container image. This is a problem when using the Minikube Docker environment; it should be never instead. Your application.properties should therefore look like this:

quarkus.kubernetes.part-of=todo-app
quarkus.container-image.registry=
quarkus.container-image.group=
quarkus.container-image.name=getting-started
quarkus.container-image.tag=1.0
quarkus.kubernetes.image-pull-policy=never
quarkus.kubernetes.service-type=NodePort

Now try a test build & deploy in the project directory:

$ ./mvnw clean package -Dquarkus.kubernetes.deploy=true

Check that everything is started with:

$ kubectl get pod 
$ kubectl get deploy
$ kubectl get svc

Note that in the result of the last command you can see the NodePort of the getting-started service, e.g. 31304 or something in that range. Get the IP address of your Minikube cluster:

$ minikube ip

And then test the service, in my example with:

$ curl 192.168.39.131:31304/hello
hello

The result of this exercise:

Installing 2 Quarkus extensions and adding 7 statements to the application.properties file (of which 1 is optional) allows you to compile your Java code, build a container image, and deploy it into Kubernetes with a single command. I think this is cool!

What I just described for Minikube also works for the IBM Cloud. IBM Cloud Kubernetes Service (or IKS) does not have an internal container image registry; instead, this is a separate service and you may have guessed its name: IBM Cloud Container Registry (ICR). This example works on free IKS clusters, too. A free IKS cluster is free of charge and you can use it for 30 days.

For our example to work, you need to create a “namespace” in an ICR location; this is different from a Kubernetes namespace. For example, my test Kubernetes cluster (with the name: mycluster) is located in Houston, so I create a namespace called ‘harald-uebele’ in the registry location Dallas (because it is close to Houston).

Now I need to log in and set up the connection using the ibmcloud CLI:

$ ibmcloud login
$ ibmcloud ks cluster config --cluster mycluster
$ ibmcloud cr login
$ ibmcloud cr region-set us-south

The last command will set the registry region to us-south which is Dallas and has the URL ‘us.icr.io’.
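
In case you haven’t created the registry namespace yet, this can also be done with the ibmcloud CLI, for example (assuming the container-registry plugin is installed):

$ ibmcloud cr namespace-add harald-uebele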

application.properties needs a few changes:

  • registry now holds the ICR URL (us.icr.io)
  • group is the registry namespace mentioned above
  • image-pull-policy is changed to always for ICR
  • service-account needs to be ‘default’, the service account created by the Kubernetes extension (‘getting-started’) is not allowed to pull images from the ICR image registry

quarkus.kubernetes.part-of=todo-app
quarkus.container-image.registry=us.icr.io
quarkus.container-image.group=harald-uebele
quarkus.container-image.name=getting-started
quarkus.container-image.tag=1.0
quarkus.kubernetes.image-pull-policy=always
quarkus.kubernetes.service-type=NodePort
quarkus.kubernetes.service-account=default

Compile & build as before:

$ ./mvnw clean package -Dquarkus.kubernetes.deploy=true

Check if the image has been built:

$ ibmcloud cr images

You should see the newly created image, correctly tagged, and hopefully with a ‘security status’ of ‘No issues’. That is the result of a Vulnerability Advisor scan that is automatically performed on every image.

Now check the status of your deployment:

$ kubectl get deploy
$ kubectl get pod
$ kubectl get svc

With kubectl get svc you will see the number of the NodePort of the service; in my example it is 30850. You can obtain the public IP address of an IKS worker node with:

$ ibmcloud ks worker ls --cluster mycluster

If you have multiple worker nodes, any of the public IP addresses will do. Test your service with:

$ curl <externalIP>:<nodePort>/hello

The result should be ‘hello’.
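
If you don’t want to look up the NodePort manually, you can also read it with kubectl, for example (a sketch, assuming the service is called getting-started):

$ NODE_PORT=$(kubectl get svc getting-started -o jsonpath='{.spec.ports[0].nodePort}')
$ curl <externalIP>:$NODE_PORT/hello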

All this also works on Red Hat OpenShift

I have tested this with CodeReady Containers (CRC) and on Red Hat OpenShift on IBM Cloud. CRC was a bit flaky: sometimes it would build the image and create the deployment config but wouldn’t start the pod.

On OpenShift, the container image is built using Source-to-Image (s2i) and this requires a different Maven extension:

$ ./mvnw quarkus:add-extension -Dextensions="container-image-s2i"

It seems like you can have only one container-image extension in your project. If you installed the container-image-docker extension before, you’ll need to remove it from the dependency section of the pom.xml file, otherwise the build may fail later.
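
For reference, the dependency to remove looks like this in pom.xml (the artifact id follows the extension name):

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-container-image-docker</artifactId>
</dependency>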

There is an OpenShift-specific section of parameters/options in the documentation of the extension.

Start by logging in to OpenShift and creating a new project (quarkus):

$ oc login ...
$ oc new-project quarkus

This is the application.properties file I used:

quarkus.kubernetes.deployment-target=openshift
quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000
quarkus.container-image.group=quarkus
quarkus.container-image.name=getting-started
quarkus.container-image.tag=1.0
quarkus.openshift.part-of=todo-app
quarkus.openshift.service-account=default
quarkus.openshift.expose=true
quarkus.kubernetes-client.trust-certs=true

Line 1: Create an OpenShift deployment
Line 2: This is the (OpenShift internal) image repository URL for OpenShift 4
Line 3: The OpenShift project name
Line 4: The image name will also be used for all other OpenShift objects
Line 5: Image tag, will also be the application version in OpenShift
Line 6: Name of the OpenShift application
Line 7: Use the ‘default’ service account
Line 8: Expose the service with a route (URL)
Line 9: Needed for CRC because of self-signed certificates, don’t use with OpenShift on IBM Cloud

With these options in place, start a compile & build:

$ ./mvnw clean package -Dquarkus.kubernetes.deploy=true

It will take a while but in the end you should see a “BUILD SUCCESS” and in the OpenShift console you should see an application called “todo-app” with a Deployment Config, Pod, Build, Service, and Route:
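
If you prefer the command line over the console, a quick check could look like this (a sketch; the object names are derived from the image name, getting-started):

$ oc get dc,bc,is,pod,svc,route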

Additional and missing options

Namespaces (Kubernetes) and Projects (OpenShift) cannot be specified with an option in application.properties. With OpenShift that’s not really an issue because you can specify which project (namespace) to work in with the oc CLI before starting the mvn package. But it would be nice if there were a namespace and/or project option.

The Kubernetes extension picks up which port your app is using during the build. But if you need to specify an additional port, this is how you do it:

quarkus.kubernetes.ports.https.container-port=8443

This will add an https port on 8443 to the service and an https containerPort on 8443 to the containers spec in the deployment.
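
In the generated deployment this shows up roughly like this (an abbreviated sketch of the containers spec, not the complete file):

        ports:
        - containerPort: 8443
          name: https
          protocol: TCP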

The number of replicas is supposed to be defined with:

quarkus.kubernetes.replicas=4

This results in a warning (WARN io.qua.config Unrecognized configuration key “quarkus.kubernetes.replicas” was provided; it will be ignored) and the replica count remains 1 in the deployment. Instead, use the deprecated configuration option without the quarkus. prefix (I am sure this will be fixed):

kubernetes.replicas=4

Adding a key/value pair as an environment variable to the deployment:

quarkus.kubernetes.env-vars.DB.value=local

will result in this YAML:

    spec:
      containers:
      - env:
        - name: "DB"
          value: "local"

There are many more options, for readiness and liveness probes, mounts and volumes, secrets, config maps, etc. Have a look at the documentation.

Cloud Native and Reactive Microservices on Red Hat OpenShift 4

My colleague Niklas Heidloff has started to create another version of our Cloud Native Starter using a reactive programming model, and he has also written an extensive series of blogs about it, starting here. He uses Minikube to deploy the reactive example, and I have created documentation and scripts to deploy it on CodeReady Containers (CRC), which runs Red Hat OpenShift 4.

The reactive version of Cloud Native Starter is based on Quarkus (“Supersonic Subatomic Java”), uses Apache Kafka for messaging, and PostgreSQL for data storage of the articles service. Postgres is accessed via the reactive SQL client. Niklas has blogged about all of the details.

Cloud Native Starter Reactive: High Level Architecture

The deployment on OpenShift is very similar to the deployment of the original Cloud Native Starter which I have written about in my last blog.

The services (web-app, web-api, authors, articles) are built locally with Docker, then tagged with an image path suitable for the OpenShift image repository, and then pushed with Docker into the internal repository.

Two things are different, though:

  1. The reactive example currently doesn’t require Istio, so there is no need to install it.
  2. Kafka and Postgres weren’t used before.

I install Kafka using the Strimzi operator, and Postgres with the Dev4Devs operator.

In the OpenShift OperatorHub catalog, the Strimzi operator is version 0.14.0, but we need version 0.15.0. That’s why I use a script to install the Strimzi Kafka operator and then deploy a Kafka cluster into a kafka namespace/project.

The Dev4Devs Postgres operator is installed through the OperatorHub catalog in the OpenShift web console into its own namespace (postgres).

An example Postgres “cluster” with a single pod is deployed via the operator into the same namespace/project.

Using operators makes it so easy to install components into your architecture. The way they are created in this example is not really applicable to production environments, but for creating test environments for developers it’s perfect.

Cloud Native Starter on Red Hat OpenShift 4

I have written about Cloud Native Starter many times in this blog. It is a project created by Niklas Heidloff, Thomas Südbröcker, and myself that demonstrates how to get started with cloud-native applications and microservice based architectures. We have started it on Minikube, and ported it to IBM Cloud Kubernetes Service and to Red Hat OpenShift in the form of Minishift and Red Hat OpenShift on IBM Cloud, the last two based on OpenShift version 3.

Cloud Native Starter Vue.js frontend

OpenShift 4 on the IBM Cloud is imminent and Minishift has a successor based on version 4 called CodeReady Containers or CRC. Time to move Cloud Native Starter to OpenShift 4. Here is a summary of my experience.

© Red Hat Inc.

Install CRC

I have blogged about CRC before; back then, in September 2019, CRC was version 1.0.0-beta3 and based on OpenShift 4.1.11. Today CRC is version 1.4, based on OpenShift 4.2.13. It has matured quite a bit. The installation hasn’t changed: CRC is still free of charge, but you need a Red Hat ID (also free) to obtain the pull secrets to install/start it. If you want to use Istio (of course you do!), the minimum requirement of 8 GB memory will not suffice; in my opinion, 16 GB of memory are a requirement in this case. Other than that, setting up CRC is done by entering two commands: ‘crc setup’, which checks the prerequisites and does some setup for virtualization and networking, and ‘crc start’, which does the rest. The first start takes around 15 minutes. In the end, it will tell you that the cluster is started (hopefully), issue a warning (“The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage.”), and give you the credentials to log into OpenShift as kubeadmin and as developer.

Install OpenShift Service Mesh aka Istio

There is a simple way to install Istio — which is called OpenShift Service Mesh by Red Hat — into OpenShift 4. It uses Operators and I already described it in another blog. Service Mesh requires 4 Operators for Elasticsearch, Jaeger, Kiali, and Service Mesh itself. The official documentation still states that you have to install all 4 of them in this sequence. Actually, last time I tried I simply installed the Service Mesh Operator and this pulled the other three without intervention.

While Service Mesh is Istio under the covers, Red Hat has added some features. You can have more than one Istio Control Plane in an OpenShift cluster, and they can have different configurations (demo and production for example). A ‘Member Roll’ then describes which OpenShift projects (namespaces) are a member of a specific Istio Control Plane. With vanilla upstream Istio, a namespace can be tagged to enable ‘automatic sidecar injection’. When a deployment is made to a tagged namespace, an envoy sidecar is then automatically injected into each pod. This is very convenient in Kubernetes but not helpful in OpenShift. Consider constantly doing binary builds: this automatic sidecar injection would inject an envoy into every build pod where it has zero function because this pod will terminate once the build is complete and it doesn’t communicate. Red Hat decided to trigger sidecar injection by adding an annotation to the deployment yaml file:

“Vanilla” Kubernetes/Istio ignores this annotation, so there is no problem having it in YAML files that are also used on vanilla Kubernetes/Istio.

The telemetry tools and my favorite, Kiali, are integrated into the OpenShift authentication and accessible via a simple OpenShift route (https://kiali-istio-system.apps-crc.testing):

Kiali as part of OpenShift Service Mesh

Access the OpenShift Internal Image Repository

For the CRC/OpenShift 4 port of Cloud Native Starter, I decided to do the container image builds on the local Docker daemon, then tag the resulting image, and push it into the internal image repository of OpenShift. How do you access the internal image repository? You need to log in to OpenShift first, then do a ‘docker login’ to the repository:

$ oc login --token=<APITOKEN> --server=https://api.crc.testing:6443
$ docker login -u developer -p $(oc whoami -t) default-route-openshift-image-registry.apps-crc.testing

The problem is that the Docker CLI uses TLS and doesn’t “know” the internal repository. The ‘docker login’ will terminate with an x509 error.

Error response from daemon: Get https://default-route-openshift-image-registry.apps-crc.testing/v2/: x509: certificate signed by unknown authority

CRC uses self-signed certificates that Docker doesn’t know about. But you can extract the required certificate and pass it to Docker; I have described the process here.
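
In short, the process looks roughly like this for CRC (a sketch; secret and registry route names can differ in other OpenShift 4 environments):

$ oc extract secret/router-ca --keys=tls.crt -n openshift-ingress-operator
$ sudo mkdir -p /etc/docker/certs.d/default-route-openshift-image-registry.apps-crc.testing
$ sudo cp tls.crt /etc/docker/certs.d/default-route-openshift-image-registry.apps-crc.testing/ca.crt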

With the certificate in place for Docker, ‘docker login’ to the OpenShift repository is possible. The ‘docker build’ in our scripts is local, the image is then tagged on the local Docker, and in the end pushed to OpenShift, e.g. for the authors-nodejs service:

$ docker build -f Dockerfile -t  authors:1 .
$ docker tag authors:1 default-route-openshift-image-registry.apps-crc.testing/cloud-native-starter/authors:1
$ docker push default-route-openshift-image-registry.apps-crc.testing/cloud-native-starter/authors:1

After that the deployment is standard Kubernetes business with the notable exception that the image name in the deployment YAML file must reflect the location of the image within the OpenShift repository. Of course, our deployment scripts take care of that.

OpenShift and Container Permissions

In Cloud Native Starter there is one service, web-app-vuejs, that provides a Vue.js application, the frontend, to the browser of the user. To do that, Nginx is used as the web server. The Docker build has two stages: stage 1 builds the Vue.js application with yarn, stage 2 puts the resulting Vue.js app into the directory /usr/share/nginx/html in the image, and Nginx serves this directory at the default port 80. This works with vanilla Kubernetes (e.g. Minikube).

In OpenShift, the pod reports “CrashLoopBackOff” and never starts. When you look at the logs you’ll notice messages about non-root users and permissions. This image will never run on OpenShift unless you lower the security constraints on the project (which were implemented for a reason).

Information on how to solve this problem can be found in the blog Deploy VueJS applications on OpenShift written by Joel Lord:

  1. Start Nginx on a port number above 1024; the default port 80 and anything up to 1024 requires root
  2. Move all temporary files (PID, cache, logs, CGI, etc.) to the /tmp directory (which can be accessed by everyone)
  3. Use /code as the base directory

Look here for my modified nginx.conf and Dockerfile.
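
To give an idea, a minimal unprivileged nginx.conf along those lines could look like this (a sketch following the three points above, not the exact file from the repository):

worker_processes auto;
error_log /tmp/error.log;
pid /tmp/nginx.pid;

events {
  worker_connections 1024;
}

http {
  include /etc/nginx/mime.types;
  client_body_temp_path /tmp/client_temp;
  proxy_temp_path /tmp/proxy_temp;
  fastcgi_temp_path /tmp/fastcgi_temp;
  uwsgi_temp_path /tmp/uwsgi_temp;
  scgi_temp_path /tmp/scgi_temp;
  access_log /tmp/access.log;

  server {
    listen 8080;
    root /code;
    index index.html;
  }
}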

Result

This is the Cloud Native Starter project in the OpenShift 4 Console:

Project Overview in OpenShift 4

W-JAX 2019 Impressions

I attended the W-JAX 2019 conference beginning of November in Munich. It is a big conference for software developers and had somewhere between 1300 and 1500 participants. Here are some impressions and pictures.

This is the team preparing our booth: unpacking, setting up the background

front: Miriam and Thomas, back: Niklas and myself

Jason McGee, IBM Fellow, Vice President and CTO of IBM Cloud Platform gave a keynote on “The 20 Year Platform – bringing together Kubernetes, 12-Factor and Functions“.

Full house for Jason McGee

Emily Jiang, our MicroProfile hero from the IBM Hursley lab, did On Stage Hacking: “Building a 12-Factor Microservice“:

l2r: Thomas, Miriam, Niklas, Emily, Harald

Grace Jansen, Developer Advocate and also from the IBM Hursley lab, presented “Reacting to the Future of Application Architecture” :

The next IBM session at W-JAX was Niklas Heidloff and myself explaining “How to develop your first cloud-native app in Java“. I never had such an attentive audience asking so many clever questions … maybe giving out swag (T-Shirts with the cool IBM rebus logo) for asking good questions does help 🙂

And finally Jeremias Werner, Senior Software Developer at the IBM Böblingen lab, presented “A peek behind the scenes and how Knative is changing the serverless landscape” (unfortunately the link to his agenda topic doesn’t work: HTTP 404):

Between the talks we were quite busy at the booth:

Some people work at a new career path as movie star 🙂

Thomas is interviewed by the W-JAX team

And of course Blue Cloud Mirror, our Open Source game project, was an attraction at the booth:

The team on day 1:

l2r: Andrea, Niklas, Miguel, Harald, Miriam, Jason, Thomas

Thomas posted a cool video on Twitter:

Feierabendbier at day 2 … and we do the coolest boomerangs!

Installing Istio 1.4 – New version, new methods

The latest release of Istio — 1.4.x — is changing the way Istio is installed. There are now two methods; the method using Helm will be deprecated in the future:

  • Istio Operator: this is in alpha state at the moment and seems to be similar to the way Red Hat Service Mesh is installed (see here)
  • Using istioctl

I have tried the istioctl method with a Kubernetes cluster on IBM Cloud (IKS) and want to document my findings.

Download Istio

Execute the following command:

$ curl -L https://istio.io/downloadIstio | sh -

This will download the latest Istio version from GitHub, currently 1.4.0. When the command has finished, there are instructions on how to add istioctl to your PATH environment variable. Doing this is important for the next steps.

Target your Kubernetes cluster

Execute the commands needed (if any) to be able to access your Kubernetes cluster. With IKS this is at least:

$ ibmcloud ks cluster config <cluster-name>

Verify Istio and Kubernetes

$ istioctl verify-install

This will try to access your Kubernetes cluster and check if Istio can be installed on it. The command should result in: “Install Pre-Check passed! The cluster is ready for Istio installation.”

Installation Configuration Profiles

There are 5 built-in Istio installation profiles: default, demo, minimal, sds, remote. Check with:

$ istioctl profile list

“minimal” installs only Pilot, and “default” is a small-footprint installation. “demo” installs almost all features and in addition sets the logging and tracing ratio to 100% (= everything), which is definitely not desirable in a production environment; it would put too much load on your cluster simply for logging and tracing.

Here is a good overview of the different profiles. You can modify the profiles and enable or disable certain features. I will use the demo profile; it has all the options I want enabled.
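
If you want to see exactly what a profile enables before installing it, you can dump its configuration (with istioctl 1.4):

$ istioctl profile dump demo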

Installing Istio

This requires a single command:

$ istioctl manifest apply --set profile=demo

Verify the installation

First, generate a manifest for the demo installation:

$ istioctl manifest generate --set profile=demo > generated-manifest.yaml

Then verify that this was applied on your cluster correctly:

$ istioctl verify-install -f generated-manifest.yaml

The result (last lines) should look like this:

Checked 23 crds
Checked 9 Istio Deployments
Istio is installed successfully

Also check the Istio pods with:

$ kubectl get pod -n istio-system

The result should look similar to this:

NAME                                      READY   STATUS    RESTARTS   AGE
grafana-5f798469fd-r756w                  1/1     Running   0          7m
istio-citadel-56465d79b9-vtbbd            1/1     Running   0          7m5s
istio-egressgateway-5ff488489-99jkw       1/1     Running   0          7m5s
istio-galley-86c8659987-mlmsb             1/1     Running   0          7m4s
istio-ingressgateway-66c76dfc5f-kp8zb     1/1     Running   0          7m5s
istio-pilot-68bd4747d8-89qqt              1/1     Running   0          7m3s
istio-policy-77964b9766-v8l8n             1/1     Running   1          7m4s
istio-sidecar-injector-759bf6b4bc-ppwg2   1/1     Running   0          7m2s
istio-telemetry-5649c7d7c6-xt8wz          1/1     Running   1          7m3s
istio-tracing-cd67ddf8-ldvlp              1/1     Running   0          7m6s
kiali-7964898d8c-qb4jb                    1/1     Running   0          7m3s
prometheus-586d4445c7-tmw2q               1/1     Running   0          7m4s 

Kiali, Prometheus, Jaeger

Kiali is Istio’s dashboard, and this is one of the coolest features in 1.4.x: to open the Kiali dashboard you no longer need to execute complicated port-forwarding commands, simply type:

$ istioctl dashboard kiali

Then log in with admin/admin:

The same command works for Prometheus (monitoring) and Jaeger (tracing), too:

$ istioctl dashboard prometheus
$ istioctl dashboard jaeger

OpenShift Service Mesh aka Istio on CodeReady Containers

Last week I wrote about running OpenShift 4 on your laptop. This uses CodeReady Containers (CRC) and deploys a full Red Hat OpenShift into a single VM on a workstation.

You can install OpenShift Service Mesh, which is Red Hat’s version of Istio, into CRC. This is done using Operators, and in this blog I want to write about my experience.

Please note: an unmodified CRC installation reserves 8 GB of memory (RAM) for the virtual machine running OpenShift. This is not enough to run Istio/Service Mesh. I am in the fortunate situation that my notebook has 32 GB of RAM, so in the article about CRC I have set the memory limit of CRC to 16 GB with this command:

$ crc config set memory 16384

You need to do that before you start CRC for the first time.

Install the Service Mesh Operators

Here are the official instructions I followed. OpenShift uses an Operator to install the Red Hat Service Mesh. There are also separate Operators for Elasticsearch, Jaeger, and Kiali. We need all 4 and install them in sequence.

In the Web Console, go to Catalog, OperatorHub and search for Elasticsearch:

Click on the Elasticsearch (provided by Red Hat, Inc.) tile, click “Install”, accept all defaults for “Create Operator Subscription”, and click “Subscribe”.

In the “Subscription Overview” wait for “UPGRADE STATUS” to be “Up to date”, then check section “Installed Operators” for “STATUS: InstallSucceeded”:

Repeat these steps for Jaeger, Kiali, and Service Mesh. There are Community and Red Hat provided Operators, make sure to use the Red Hat provided ones!

I don’t know if this is really necessary but I always wait for the Operator status to be InstallSucceeded before continuing with the next one.

In the end there will be 4 Operators in Project “openshift-operators”:

Create the Service Mesh Control Plane

The Service Mesh Control Plane is the actual installation of all Istio components into OpenShift.

We begin by creating a project ‘istio-system’, either in the Web Console or via the command line (‘oc new-project istio-system’). You can actually name the project whatever you like; in fact, you can have more than one service mesh in a single OpenShift instance. But to be consistent with Istio I like to stay with ‘istio-system’ as the name.

In the Web Console, in project ‘istio-system’, click on “Installed Operators”. You should see all 4 Operators in status “Copied”. The Operators are installed in project ‘openshift-operators’, but we will create the Control Plane in ‘istio-system’. Click on “Red Hat OpenShift Service Mesh”. This Operator provides 2 APIs: ‘Member Roll’ and ‘Control Plane’:

Click on “Create New” Control Plane. This opens an editor with a YAML file of kind “ServiceMeshControlPlane”. Look at it but accept it as is. It will create a Control Plane of name ‘basic-install’ with Kiali, Grafana, and Tracing (Jaeger) enabled, Jaeger will use an ‘all-in-one’ template (without Elasticsearch). Click “Create”.

You will now see “basic-install” in the list of Service Mesh Control Planes. Click on “basic-install” and “Resources”. This will display a list of objects that belong to the control plane and this list will grow in the next minutes as more objects are created:

A good way to check if the installation is complete is by looking into Networking – Routes. You should see 5 routes:

Click on the Routes for grafana, jaeger, prometheus, and kiali. Accept the security settings. I click on Kiali last because Kiali is using the other services and in that way all the security settings for those are in place already.

One last thing to do: you need to specify which projects are managed by your Service Mesh Control Plane, and this is done by creating a Service Mesh Member Roll.

In your project ‘istio-system’ go to “Installed Operators” and click on the “OpenShift Service Mesh” operator. In the Overview, create a new ‘Member Roll’:

In the YAML file make sure that namespace is indeed ‘istio-system’ and then add all projects to the ‘members’ section that you want to be managed.
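
For reference, my Member Roll YAML looked roughly like this (a sketch; check the template generated by the operator):

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    - cloud-native-starter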

Good to know: These projects do not need to exist at this time (in fact we are going to create ‘cloud-native-starter’ in a moment) and you can always change this list at any time!

Click “Create”. You are now ready to deploy an application.

Example Application

As an example I use one part of our OpenShift on IBM Cloud Workshop.

The first step is to create a build config and start a build, which results in a container image being built and stored in the OpenShift internal image registry:

$ oc new-build --name authors --binary --strategy docker
$ oc start-build authors --from-dir=.

The instructions in the workshop to check for the image (part 1, step 3) no longer work: OpenShift 4 doesn’t use a Docker registry anymore, and the new registry doesn’t have a UI. Check the build logs and wait until the image has been pushed successfully.
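
You can follow the build from the command line with:

$ oc logs -f bc/authors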

Before deploying the application, we need to change the deployment.yaml file in the deployment directory:

OpenShift Service Mesh uses an annotation in the Kubernetes Deployment definition to trigger the Istio Proxy or Sidecar injection into a pod. The tagging of a namespace that you may use on default Istio doesn’t work on OpenShift. With the “OpenShift way” you have control over which pods receive a sidecar and hence are part of the service mesh; build containers for example shouldn’t use a sidecar.

The annotation is ‘sidecar.istio.io/inject: “true” ‘ and the YAML file looks like this:

kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: authors
spec:
  replicas: 1
  template:
    metadata:
      annotations: 
        sidecar.istio.io/inject: "true"    
      labels:
        app: authors
        version: v1

You also need to change the location of the image in the deployment.yaml. The registry service has changed between OpenShift 3.11 – on which the workshop is based – and OpenShift 4, which is used in this article:

    spec:
      containers:
      - name: authors
        image: image-registry.openshift-image-registry.svc:5000/cloud-native-starter/authors:latest

Once these changes are made to deployment.yaml, start the deployment (you must be in the deployment directory) and create a Route:

$ oc apply -f deployment.yaml
$ oc apply -f service.yaml
$ oc expose svc/authors

The second command creates the service for the deployment. Note: without a service in place, the sidecar container will not start! If you check the istio-proxy log, it will constantly show that it can’t find a listener for port 3000; that is caused by the missing service definition.

You can try if the example works by calling the API, e.g.:

$ curl -X GET "http://authors-cloud-native-starter.apps-crc.testing/api/v1/getauthor?name=Niklas%20Heidloff" -H "accept: application/json"

This will return a JSON object with author information.

You can check if it works by “curl-ing” the address a couple of times and checking the result in Kiali (https://kiali-istio-system.apps-crc.testing):

Red Hat OpenShift 4 on your laptop

[Nov 29, 2019: Another update to the Expiration section at the end]

I use Minishift on my laptop and have blogged about it. Minishift is based on OKD 3.11, the Open Source upstream version of OpenShift. An update of Minishift to OpenShift 4 never happened and wasn’t planned. I haven’t actually seen OKD 4.1 except for some source code.

But recently I found something called Red Hat CodeReady Containers, which allows you to run OpenShift 4.1 in a single-node configuration on your workstation. It operates almost exactly like Minishift and Minikube. Actually, under the covers it works completely differently, but that’s another story.

CodeReady Containers (CRC) runs on Linux, MacOS, and Windows, and it only supports the native hypervisors: KVM for Linux, Hyperkit for MacOS, and HyperV for Windows.

This is the place where you need to start: Install on Laptop: Red Hat CodeReady Containers

To access this page you need to register for a Red Hat account which is free. It contains a link to the Getting Started guide, the download links for CodeReady Containers (for Windows, MacOS, and Linux) and a link to download the pull secrets which are required during installation.

The Getting Started guide lists the hardware requirements; they are similar to those for Minikube and Minishift:

  • 4 vCPUs
  • 8 GB RAM
  • 35 GB disk space for the virtual disk

You will also find the required versions of Windows 10 and MacOS there.

I am running Fedora (F30 at the moment) on my notebook and I normally use VirtualBox as hypervisor. VirtualBox is not supported, so I had to install KVM first; here are good instructions. The requirements for CRC also mention NetworkManager as required, but most Linux distributions will use it; Fedora certainly does. There are additional instructions for Ubuntu/Debian/Mint users for libvirt in the Getting Started guide.

Start with downloading the CodeReady Containers archive for your OS and download the pull secrets to a location you remember. Extracting the CodeReady Containers archive results in an executable ‘crc’ which needs to be placed in your PATH. This is very similar to the ‘minikube’ and ‘minishift’ executables.

The first step is to set up CodeReady Containers:

$ crc setup

This checks the prerequisites, installs some drivers, configures the network, and creates an initial configuration in a directory ‘.crc’ (on Linux).

You can check the configurable options of ‘crc’ with:

$ crc config view

Since I plan to test Istio on crc I have changed the memory limit to 16 GB and added the path to the pull secret file:

$ crc config set memory 16384
$ crc config set pull-secret-file path/to/pull-secret.txt

Start CodeReady Containers with:

$ crc start

This will take a while and in the end give you instructions on how to access the cluster.

INFO To access the cluster using 'oc', run 'eval $(crc oc-env) && oc login -u kubeadmin -p ********* https://api.crc.testing:6443' 
INFO Access the OpenShift web-console here: https://console-openshift-console.apps-crc.testing 
INFO Login to the console with user: kubeadmin, password: *********  
CodeReady Containers instance is running

I found that you need to wait a few minutes after that because OpenShift isn’t fully started yet. Check with:

$ crc status

Output should look like:

CRC VM:          Running
OpenShift:       Running (v4.x)  
Disk Usage:      11.18GB of 32.2GB (Inside the CRC VM)
Cache Usage:     11.03GB

If your cluster is up, access it using the link in the completion message or use:

$ crc console

The user is ‘kubeadmin’ and the password has been printed in the completion message above. You will need to accept the self-signed certificates, and then you will be presented with the OpenShift 4 Web Console:

There are some more commands that you probably need:

  1. ‘crc stop’ stops the OpenShift cluster
  2. ‘crc delete’ completely deletes the cluster
  3. ‘eval $(crc oc-env)’ correctly sets the environment for the ‘oc’ CLI

I am really impressed with CodeReady Containers. They give you the full OpenShift 4 experience with the new Web Console and even include the OperatorHub catalog to get started with Operators.

Expiration

Starting with CodeReady Containers (crc) version 1.1.0 and officially with version 1.2.0 released end of November 2019, the certificates no longer expire. Or to be precise: they do expire, but crc will renew them at ‘crc start’ when they are expired. Instead, ‘crc start’ will print a message at startup when a newer version of crc, which typically includes a new version of OpenShift, is available. Details are here.