OpenShift Service Mesh aka Istio on CodeReady Containers

Last week I wrote about running OpenShift 4 on your laptop. This uses CodeReady Containers (CRC) and deploys a full Red Hat OpenShift into a single VM on a workstation.

You can install OpenShift Service Mesh, which is Red Hat’s version of Istio, into CRC. This is done using Operators, and in this blog I want to write about my experience.

Please note: an unmodified CRC installation reserves 8 GB of memory (RAM) for the virtual machine running OpenShift. This is not enough to run Istio/Service Mesh. I am in the fortunate situation that my notebook has 32 GB of RAM, so in the article about CRC I have set the memory limit of CRC to 16 GB with this command:

$ crc config set memory 16384

You need to do that before you start CRC for the first time.

Install the Service Mesh Operators

Here are the official instructions I followed. OpenShift uses an Operator to install the Red Hat Service Mesh. There are also separate Operators for Elasticsearch, Jaeger, and Kiali. We need all 4 and will install them in sequence.

In the Web Console, go to Catalog, OperatorHub and search for Elasticsearch:

Click on the Elasticsearch (provided by Red Hat, Inc.) tile, click “Install”, accept all defaults for “Create Operator Subscription”, and click “Subscribe”.

In the “Subscription Overview” wait for “UPGRADE STATUS” to be “Up to date”, then check section “Installed Operators” for “STATUS: InstallSucceeded”:

Repeat these steps for Jaeger, Kiali, and Service Mesh. There are Community and Red Hat provided Operators; make sure to use the Red Hat provided ones!

I don’t know if this is really necessary but I always wait for the Operator status to be InstallSucceeded before continuing with the next one.

In the end there will be 4 Operators in Project “openshift-operators”.
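If you prefer the command line, you can also verify the installations by listing the ClusterServiceVersions in that project; all 4 Operators should report the phase ‘Succeeded’ (the exact names and versions depend on the current releases):

$ oc get csv -n openshift-operators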

Create the Service Mesh Control Plane

The Service Mesh Control Plane is the actual installation of all Istio components into OpenShift.

We begin with creating a project ‘istio-system’, either in the Web Console or via the command line (‘oc new-project istio-system’). You can actually name the project whatever you like; in fact, you can have more than one service mesh in a single OpenShift instance. But to be consistent with Istio I like to stay with ‘istio-system’ as the name.

In the Web Console in project: ‘istio-system’ click on “Installed Operators”. You should see all 4 Operators in status “Copied”. The Operators are installed in project ‘openshift-operators’ but we will create the Control Plane in ‘istio-system’. Click on “Red Hat OpenShift Service Mesh”. This Operator provides 2 APIs: ‘Member Roll’ and ‘Control Plane’:

Click on “Create New” Control Plane. This opens an editor with a YAML file of kind “ServiceMeshControlPlane”. Look at it but accept it as is. It will create a Control Plane named ‘basic-install’ with Kiali, Grafana, and Tracing (Jaeger) enabled; Jaeger will use an ‘all-in-one’ template (without Elasticsearch). Click “Create”.
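For reference, a trimmed-down sketch of that default YAML looks roughly like this; the exact defaults depend on the Operator version, so treat it only as an illustration of the structure, not as the authoritative file:

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system
spec:
  istio:
    kiali:
      enabled: true
    grafana:
      enabled: true
    tracing:
      enabled: true
      jaeger:
        template: all-in-one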

You will now see “basic-install” in the list of Service Mesh Control Planes. Click on “basic-install” and “Resources”. This will display a list of objects that belong to the control plane and this list will grow in the next minutes as more objects are created:

A good way to check if the installation is complete is by looking into Networking – Routes. You should see 5 routes.
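You can also check this on the command line; once everything is up, listing the routes in the project should show entries for grafana, jaeger, prometheus, kiali, and the istio-ingressgateway:

$ oc get routes -n istio-system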

Click on the Routes for grafana, jaeger, prometheus, and kiali and accept the security warnings for the self-signed certificates. I click on Kiali last because Kiali uses the other services, and that way all the security exceptions for those are already in place.

One last thing to do: you need to specify which projects are managed by your Service Mesh Control Plane, and this is done by creating a Service Mesh Member Roll.

In your project ‘istio-system’ go to “Installed Operators” and click on the “OpenShift Service Mesh” operator. In the Overview, create a new ‘Member Roll’:

In the YAML file make sure that the namespace is indeed ‘istio-system’ and then add all projects that you want to be managed to the ‘members’ section.

Good to know: These projects do not need to exist at this time (in fact we are going to create ‘cloud-native-starter’ in a moment) and you can change this list at any time!
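A minimal Member Roll for this setup could look like the following sketch; ‘cloud-native-starter’ is the project we are about to create, and the Member Roll itself is expected to be named ‘default’:

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    - cloud-native-starter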

Click “Create”. You are now ready to deploy an application.

Example Application

As an example I use one part of our OpenShift on IBM Cloud Workshop.

The first step is to create a build config and run a build, which results in a container image being built and stored in the OpenShift internal image registry:

$ oc new-build --name authors --binary --strategy docker
$ oc start-build authors --from-dir=.

The instructions in the workshop to check for the image (part 1, step 3) no longer work: OpenShift 4 doesn’t use a Docker registry anymore, and the new internal registry doesn’t have a UI. Check the build logs and wait until the image has been pushed successfully.
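If you prefer the command line over the Web Console, something like the following works; the build is finished when the log reports a successful push to the internal registry:

$ oc logs -f bc/authors
$ oc get imagestream authors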

Before deploying the application, we need to change the deployment.yaml file in the deployment directory:

OpenShift Service Mesh uses an annotation in the Kubernetes Deployment definition to trigger the injection of the Istio proxy (sidecar) into a pod. Labeling a namespace for automatic injection, as you may do with upstream Istio, doesn’t work on OpenShift. With the “OpenShift way” you have control over which pods receive a sidecar and hence are part of the service mesh; build containers, for example, shouldn’t use a sidecar.

The annotation is ‘sidecar.istio.io/inject: "true"’ and the YAML file looks like this:

kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: authors
spec:
  replicas: 1
  template:
    metadata:
      annotations: 
        sidecar.istio.io/inject: "true"    
      labels:
        app: authors
        version: v1

You also need to change the location of the image in the deployment.yaml. The registry service has changed between OpenShift 3.11, on which the workshop is based, and the OpenShift 4 used in this article:

    spec:
      containers:
      - name: authors
        image: image-registry.openshift-image-registry.svc:5000/cloud-native-starter/authors:latest

Once these changes are made to deployment.yaml, start the deployment (you must be in the deployment directory) and create a Route:

$ oc apply -f deployment.yaml
$ oc apply -f service.yaml
$ oc expose svc/authors

The second command creates the service for the deployment. Note: Without a service in place, the sidecar container will not start! If you check the istio-proxy log, it will constantly complain that it can’t find a listener for port 3000; that is caused by the missing service definition.
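For completeness, here is a minimal sketch of what the service.yaml essentially defines, a ClusterIP service for port 3000 that selects the authors pods; the actual workshop file may differ in details:

kind: Service
apiVersion: v1
metadata:
  name: authors
  labels:
    app: authors
spec:
  selector:
    app: authors
  ports:
  - port: 3000
    name: http

Once the service exists, ‘oc get pods’ should show the authors pod with 2/2 containers, i.e. the application container plus the istio-proxy sidecar.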

You can test whether the example works by calling the API, e.g.:

curl -X GET "http://authors-cloud-native-starter.apps-crc.testing/api/v1/getauthor?name=Niklas%20Heidloff" -H "accept: application/json"

This will return a JSON object with author information.

You can check if it works by “curl-ing” the address a couple of times and checking the result in Kiali (https://kiali-istio-system.apps-crc.testing).
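To generate a bit of traffic for Kiali, you can simply call the API in a loop, for example:

$ for i in {1..20}; do curl -s "http://authors-cloud-native-starter.apps-crc.testing/api/v1/getauthor?name=Niklas%20Heidloff" > /dev/null; done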

Red Hat OpenShift 4 on your laptop

[Nov 29, 2019: Another update to the Expiration section at the end]

I use Minishift on my laptop and have blogged about it. Minishift is based on OKD 3.11, the Open Source upstream version of OpenShift. An update of Minishift to OpenShift 4 never happened and wasn’t planned. I haven’t actually seen OKD 4.1 except for some source code.

But recently I found something called Red Hat CodeReady Containers, which allows you to run OpenShift 4.1 in a single-node configuration on your workstation. It operates almost exactly like Minishift and Minikube. Under the covers it actually works completely differently, but that’s another story.

CodeReady Containers (CRC) runs on Linux, MacOS, and Windows, and it only supports the native hypervisors: KVM for Linux, Hyperkit for MacOS, and HyperV for Windows.

This is the place where you need to start: Install on Laptop: Red Hat CodeReady Containers

To access this page you need to register for a Red Hat account which is free. It contains a link to the Getting Started guide, the download links for CodeReady Containers (for Windows, MacOS, and Linux) and a link to download the pull secrets which are required during installation.

The Getting Started guide lists the hardware requirements; they are similar to those for Minikube and Minishift:

  • 4 vCPUs
  • 8 GB RAM
  • 35 GB disk space for the virtual disk

You will also find the required versions of Windows 10 and MacOS there.

I am running Fedora (F30 at the moment) on my notebook and I normally use VirtualBox as hypervisor. VirtualBox is not supported, so I had to install KVM first; here are good instructions. The requirements for CRC also list NetworkManager, but most Linux distributions use it anyway; Fedora certainly does. There are additional instructions for Ubuntu/Debian/Mint users for libvirt in the Getting Started guide.
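If you want to verify the prerequisites before running crc, two optional checks on Fedora look like this (assuming the libvirt client tools are installed; the first command validates KVM/virtualization support, the second should print ‘active’):

$ virt-host-validate qemu
$ systemctl is-active NetworkManager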

Start with downloading the CodeReady Containers archive for your OS and download the pull secrets to a location you remember. Extracting the CodeReady Containers archive results in an executable ‘crc’ which needs to be placed in your PATH. This is very similar to the ‘minikube’ and ‘minishift’ executables.
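On Linux this boils down to something like the following; the exact archive and directory names depend on the crc version you downloaded:

$ tar xvf crc-linux-amd64.tar.xz
$ sudo cp crc-linux-*-amd64/crc /usr/local/bin/
$ crc version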

The first step is to set up CodeReady Containers:

$ crc setup

This checks the prerequisites, installs some drivers, configures the network, and creates an initial configuration in a ‘.crc’ directory in your home directory (on Linux).

You can check the configurable options of ‘crc’ with:

$ crc config view

Since I plan to test Istio on crc I have changed the memory limit to 16 GB and added the path to the pull secret file:

$ crc config set memory 16384
$ crc config set pull-secret-file path/to/pull-secret.txt

Start CodeReady Containers with:

$ crc start

This will take a while and in the end give you instructions on how to access the cluster.

INFO To access the cluster using 'oc', run 'eval $(crc oc-env) && oc login -u kubeadmin -p ********* https://api.crc.testing:6443' 
INFO Access the OpenShift web-console here: https://console-openshift-console.apps-crc.testing 
INFO Login to the console with user: kubeadmin, password: *********  
CodeReady Containers instance is running

I found that you need to wait a few minutes after that because OpenShift isn’t fully started yet. Check with:

$ crc status

Output should look like:

CRC VM:          Running
OpenShift:       Running (v4.x)  
Disk Usage:      11.18GB of 32.2GB (Inside the CRC VM)
Cache Usage:     11.03GB

If your cluster is up, access it using the link in the completion message or use:

$ crc console

User is ‘kubeadmin’ and the password has been printed in the completion message above. You will need to accept the self-signed certificates and then be presented with an OpenShift 4 Web Console:

There are some more commands that you probably need:

  1. ‘crc stop’ stops the OpenShift cluster
  2. ‘crc delete’ completely deletes the cluster
  3. ‘eval $(crc oc-env)’ correctly sets the environment for the ‘oc’ CLI

I am really impressed with CodeReady Containers. It gives you the full OpenShift 4 experience with the new Web Console and even includes the OperatorHub catalog to get started with Operators.

Expiration

Starting with CodeReady Containers (crc) version 1.1.0, and officially with version 1.2.0 released at the end of November 2019, the certificates no longer expire. Or to be precise: they do expire, but crc will renew them at ‘crc start’ when they have expired. Instead, ‘crc start’ will print a message at startup when a newer version of crc, which typically includes a new version of OpenShift, is available. Details are here.