Serverless and Knative – Part 3: Knative Eventing

This is part 3 of my blog series about Serverless and Knative. I covered Installing Knative on CodeReady Containers in part 1 and Knative Serving in part 2.

Knative Eventing allows you to pass events from an event producer to an event consumer. Knative events follow the CloudEvents specification.

Event producers can be anything:

  • “Ping” jobs that periodically send an event
  • Apache CouchDB sending an event when a record is written, changed, or deleted
  • Kafka Message Broker
  • GitHub repository
  • Kubernetes API Server emitting cluster events
  • and many more.

An event consumer is any type of code (typically running on Kubernetes) that is callable. It can be a “classic” Kubernetes deployment and service, and of course it can be a Knative Service.

Good sources to learn Knative Eventing are the Knative documentation itself and the Red Hat Knative Tutorial. I think the Red Hat tutorial is better structured and more readable.

There are three usage patterns for Knative Eventing, the first one being the simplest:

Source to Sink

In this case, the source sends a message directly to a sink; there is no queuing or filtering. It is a one-to-one relationship.

Source to Sink
(c) Red Hat, Inc.

Knative Event Sources are Knative objects. The following sources are installed when Knative is installed:

$ kubectl api-resources --api-group='sources.knative.dev'
NAME               SHORTNAMES   APIGROUP              NAMESPACED   KIND
apiserversources                sources.knative.dev   true         ApiServerSource
pingsources                     sources.knative.dev   true         PingSource
sinkbindings                    sources.knative.dev   true         SinkBinding

There are many more sources, e.g. a Kafka Source or a CouchDB Source, but they need to be installed separately. To get a basic understanding of Knative Eventing, the PingSource is sufficient. It creates something comparable to a cron job on Linux that periodically emits a message.

The Source references the Sink, so it is best to define/deploy the Sink first. It is a simple Knative Service; the code snippets are all from the Red Hat Knative Tutorial:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: eventinghello
spec:
  template:
    metadata:
      name: eventinghello-v1
    spec:
      containers:
      - image: quay.io/rhdevelopers/eventinghello:0.0.2

And this is the Source definition:

apiVersion: sources.knative.dev/v1alpha2
kind: PingSource 
metadata:
  name: eventinghello-ping-source
spec: 
  schedule: "*/2 * * * *"
  jsonData: '{"key": "every 2 mins"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: eventinghello

  • PingSource is one of the default Knative Sources.
  • schedule is typical cron syntax; here it defines that the “ping” happens every 2 minutes.
  • jsonData is the (fixed) message that is transmitted.
  • sink defines the Knative Service that the Source connects to: eventinghello.

When both elements are deployed, we can see that an eventinghello pod is started every two minutes; in its log we can see the message ‘{“key”: “every 2 mins”}’. The pod itself terminates after about 60 to 70 seconds (Knative scale to zero), and another pod is started once the two-minute interval of the PingSource is over and the next message is sent.
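To watch this happen, use kubectl's watch flag (the deployment name prefix derives from the Revision name eventinghello-v1 used above):

$ kubectl get pods -w

Every two minutes a new eventinghello-v1-deployment-... pod should appear, run for roughly a minute, and terminate again.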

To recap the Source-to-Sink pattern: it connects an event source with an event sink in a one-to-one relation. In my opinion it is a starting point to understand Knative Eventing terminology but it would be an incredible waste of resources if this were the only available pattern. The next pattern is:

Channel and Subscription

A Knative Channel is a custom resource that can persist events and can forward events to multiple destinations (via Subscriptions). There are multiple Channel implementations: InMemoryChannel, KafkaChannel, NATSChannel, etc.

By default all Knative Channels in a Kubernetes cluster use the InMemoryChannel implementation. The Knative documentation describes InMemoryChannels as “a best effort Channel. They should NOT be used in Production. They are useful for development.” Characteristics are:

  • No Persistence: When a Pod goes down, messages go with it.
  • No Ordering Guarantee: There is nothing enforcing an ordering, so two messages that arrive at the same time may go to subscribers in any order. Different downstream subscribers may see different orders.
  • No Redelivery Attempts: When a subscriber rejects a message, there are no attempts to retry sending it.
  • Dead Letter Sink: When a subscriber rejects a message, it is sent to the dead letter sink, if present; otherwise it is dropped.

That is a lot of restrictions, but an InMemoryChannel is much easier to set up than a KafkaChannel, which requires a Kafka cluster first.

Knative Eventing is very configurable here: you can change the cluster-wide Channel default, and you can change the Channel implementation per namespace. For example, you can keep InMemoryChannel as the cluster default but use KafkaChannel in one or two projects (namespaces) with much higher requirements for availability and message delivery.
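To make this concrete, here is a minimal sketch of the default-ch-webhook ConfigMap that Knative Eventing reads its channel defaults from. The exact API versions depend on the installed Knative release, and ‘my-kafka-namespace’ is a made-up example namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: default-ch-webhook
  namespace: knative-eventing
data:
  default-ch-config: |
    # cluster-wide default: best-effort in-memory channels
    clusterDefault:
      apiVersion: messaging.knative.dev/v1beta1
      kind: InMemoryChannel
    # per-namespace override for workloads with higher delivery requirements
    namespaceDefaults:
      my-kafka-namespace:
        apiVersion: messaging.knative.dev/v1alpha1
        kind: KafkaChannel
        spec:
          numPartitions: 1
          replicationFactor: 1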

A Knative Subscription connects (= subscribes) a Sink service to a Channel. Each Sink service needs its own Subscription to a Channel.

Coming from the Source to Sink pattern in the previous section, the Source to Sink relation is now replaced with a Source to Channel relation. One or multiple Sink services subscribe to the Channel:

Channels and Subscriptions
(c) Red Hat, Inc.

The Channel and Subscription pattern decouples the event producer (Source) from the event consumer (Sink) and allows for a one-to-many relation between Source and Sinks. Every message / event emitted by the Source is forwarded to the one or many Sinks that are subscribed to the Channel.
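To make the terminology concrete, here is a minimal sketch of a Channel and a Subscription that connects the eventinghello Service from the previous section to it. The object names are chosen for illustration, and the messaging API version may differ per Knative release:

apiVersion: messaging.knative.dev/v1beta1
kind: Channel
metadata:
  name: eventinghello-channel
---
apiVersion: messaging.knative.dev/v1beta1
kind: Subscription
metadata:
  name: eventinghello-sub
spec:
  channel:
    apiVersion: messaging.knative.dev/v1beta1
    kind: Channel
    name: eventinghello-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: eventinghello

A PingSource would then reference the Channel (instead of the Service) as its sink.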

The next pattern (Broker and Trigger) extends the Channel and Subscription pattern and is the most interesting scenario. Therefore I won’t go into more detail here but the Red Hat Knative Tutorial has an example for Channel and Subscriber.

Brokers and Triggers

A Broker is a Knative custom resource that is composed of at least two distinct objects, an ingress and a filter. Events are sent to the Broker ingress; the filter strips all metadata from the event data that is not part of the CloudEvent. Brokers typically use Knative Channels to deliver the events.

This is the definition of a Knative Broker:

apiVersion: eventing.knative.dev/v1beta1
kind: Broker
metadata:
  name: default
spec:
  # Configuration specific to this broker.
  config:
    apiVersion: v1
    kind: ConfigMap
    name: config-br-default-channel
    namespace: knative-eventing

A Trigger is very similar to a Subscription: it subscribes to events from a specific Broker, but the most interesting aspect is that it allows filtering on specific events based on their CloudEvent attributes:

apiVersion: eventing.knative.dev/v1beta1
kind: Trigger
metadata:
  name: my-service-trigger
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.foo.bar
      myextension: my-extension-value
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service

I think this is where Knative Eventing gets interesting. Why would you install an overhead of resources (called Knative Eventing) into your Kubernetes cluster to simply send a message / event from one pod to another? But with an event broker that receives a multitude of different events, and triggers that filter out specific events and route them to specific (micro)services, I can see an advantage.

Brokers and Triggers
(c) Red Hat, Inc.

This is the slightly modified example from the Red Hat Knative Tutorial:

Creating a default Broker requires no YAML. To use the default Broker for a Kubernetes namespace, just add a label:

kubectl label namespace knativetutorial knative-eventing-injection=enabled

This will automatically create the required resources. To check:

$ kubectl get broker
NAME      READY   REASON   URL                                                       AGE
default   True             http://default-broker.knativetutorial.svc.cluster.local   3d19h

$ kubectl get channel
NAME                                                        READY   REASON   URL                                                                       AGE
inmemorychannel.messaging.knative.dev/default-kne-trigger   True             http://default-kne-trigger-kn-channel.knativetutorial.svc.cluster.local   3d19h

The first command shows the “default” broker is ready and listens to the URL http://default-broker.knativetutorial.svc.cluster.local. The second command shows that our default broker uses the InMemoryChannel implementation.

The example implements 2 services (sinks) to receive events: eventingaloha and eventingbonjour.

aloha-sink.yaml:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: eventingaloha
spec:
  template:
    metadata:
      name: eventingaloha-v1
      annotations:
        autoscaling.knative.dev/target: "1"
    spec:
      containers:
      - image: quay.io/rhdevelopers/eventinghello:0.0.2

bonjour-sink.yaml:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: eventingbonjour
spec:
  template:
    metadata:
      name: eventingbonjour-v1
      annotations:
        autoscaling.knative.dev/target: "1"
    spec:
      containers:
      - image: quay.io/rhdevelopers/eventinghello:0.0.2

They are exactly the same and based on the same container image; only the name is different. The name will help to distinguish which service received an event.

When everything is set up, we will send three different event types to the broker: ‘aloha’, ‘bonjour’, and ‘greetings’. The ‘aloha’ type should go to the eventingaloha service, ‘bonjour’ to the eventingbonjour service, and ‘greetings’ to both. To accomplish this we need triggers.

Triggers have some limitations. First, you can filter on multiple attributes, e.g.:

  filter:
    attributes:
      type: dev.knative.foo.bar
      myextension: my-extension-value

But the attributes are always ANDed: ‘dev.knative.foo.bar’ AND ‘my-extension-value’. We cannot define a trigger that filters on ‘aloha’ OR ‘greetings’; we need two triggers for that.

Also, a trigger can only define a single subscriber (service). We cannot define a trigger for ‘greetings’ with both the eventingaloha service and the eventingbonjour service as subscribers.

This means we will need four Trigger configurations.
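As a sketch (using the Trigger names from the listing further below and the same filter syntax as above), the two Triggers for the eventingaloha service could look like this; the ‘bonjour’ pair is built analogously:

apiVersion: eventing.knative.dev/v1beta1
kind: Trigger
metadata:
  name: alohaaloha
spec:
  broker: default
  filter:
    attributes:
      type: aloha
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: eventingaloha
---
apiVersion: eventing.knative.dev/v1beta1
kind: Trigger
metadata:
  name: greetingsaloha
spec:
  broker: default
  filter:
    attributes:
      type: greetings
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: eventingaloha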

If you start to work seriously with Knative Triggers, think about a good naming convention for them first. Otherwise troubleshooting can be difficult when the triggers don’t work as expected: the OpenShift Web Console does a very good job at visualizing Knative objects, but it ignores Triggers. And this is all you see on the command line:

$ kubectl get trigger
NAME               READY   REASON   BROKER    SUBSCRIBER_URI   AGE
alohaaloha         True             default                    21h
bonjourbonjour     True             default                    21h
greetingsaloha     True             default                    21h
greetingsbonjour   True             default                    21h

Our example now looks like this:

We have the Knative default Broker, 4 Knative Triggers that filter on specific event attributes and pass the events to one or both of the 2 Knative eventing services. We don’t have an event source yet.

A little further up we saw that the broker listens to the URL
http://default-broker.knativetutorial.svc.cluster.local

We will now simply start a pod in our cluster from a Fedora base image that contains the curl command, using this curler.yaml:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: curler
  name: curler
spec:
  containers:
  - name: curler
    image: fedora:29 
    tty: true

Start with:

$ kubectl -n knativetutorial apply -f curler.yaml

Get a bash shell in the running pod:

$ kubectl -n knativetutorial exec -it curler -- /bin/bash

In the curler pod, we send an event using curl to the broker URL, event type ‘aloha’:

[root@curler /]# curl -v "http://default-broker.knativetutorial.svc.cluster.local" \
> -X POST \
> -H "Ce-Id: say-hello" \
> -H "Ce-Specversion: 1.0" \
> -H "Ce-Type: aloha" \
> -H "Ce-Source: mycurl" \
> -H "Content-Type: application/json" \
> -d '{"key":"from a curl"}'

In the OpenShift Web Console we can see that an eventingaloha pod has been started:

After about a minute this scales down to 0 again. The next test is type ‘bonjour’, again in the curler pod:

[root@curler /]# curl -v "http://default-broker.knativetutorial.svc.cluster.local" \
-X POST \
-H "Ce-Id: say-hello" \
-H "Ce-Specversion: 1.0" \
-H "Ce-Type: bonjour" \
-H "Ce-Source: mycurl" \
-H "Content-Type: application/json" \
-d '{"key":"from a curl"}'

This starts an eventingbonjour pod, as expected:

If we are fast enough we can check its logs and see our event has been forwarded:

2020-06-09 08:38:22,348 INFO eventing-hello ce-id=say-hello
2020-06-09 08:38:22,349 INFO eventing-hello ce-source=mycurl
2020-06-09 08:38:22,350 INFO eventing-hello ce-specversion=1.0
2020-06-09 08:38:22,351 INFO eventing-hello ce-time=2020-06-09T08:38:12.512544667Z
2020-06-09 08:38:22,351 INFO eventing-hello ce-type=bonjour
2020-06-09 08:38:22,352 INFO eventing-hello content-type=application/json
2020-06-09 08:38:22,355 INFO eventing-hello content-length=21
2020-06-09 08:38:22,356 INFO eventing-hello POST:{"key":"from a curl"}

In the last test we send the ‘greetings’ type event:

[root@curler /]# curl -v "http://default-broker.knativetutorial.svc.cluster.local" \
-X POST \
-H "Ce-Id: say-hello" \
-H "Ce-Specversion: 1.0" \
-H "Ce-Type: greetings" \
-H "Ce-Source: mycurl" \
-H "Content-Type: applicatio

And, as expected, we see that pods in both services are started:

Using Apache Kafka

I didn’t go through the Knative Kafka example myself. But since it is hard to find, and since it is the preferable way to set up a production-scale Broker & Trigger pattern for Knative Eventing, I wanted to document it here.

There are actually two parts to the Kafka example:

  1. Start with Installing Apache Kafka: this will probably work on OpenShift (and CRC), too. But depending on the OpenShift version, I would rather install the Strimzi or the Red Hat AMQ Streams operator from the OperatorHub catalog in the OpenShift Web Console and create a Kafka cluster with the help of the installed operator.
  2. Continue with the Apache Kafka Channel example. It installs a KafkaChannel (a sketch of such an object follows below) and uses it together with the Knative default Broker. In the end, an Event Sink is created, a Trigger that connects the Sink to the Broker, and an Event Source (that uses the Kubernetes API Server to generate events).
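For reference, the KafkaChannel object itself is small; a sketch (the API version depends on the installed channel component, and the name is chosen for illustration):

apiVersion: messaging.knative.dev/v1alpha1
kind: KafkaChannel
metadata:
  name: my-kafka-channel
spec:
  numPartitions: 3
  replicationFactor: 1

The heavy lifting is the Kafka cluster behind it, which is why the operator-based installation is attractive.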

Knative Eventing Recap

I have had a look now at both Knative Serving and Knative Eventing:

I really like Knative Serving; I think it can help a developer be more productive.

I am undecided about Eventing, though. The Broker & Trigger example based on the InMemoryChannel is easy to set up. But the InMemoryChannel is for testing and learning only; it is not viable for production. And if I set up my cluster with an instance of Apache Kafka anyway, I ask myself why I should take the messaging detour through Knative Eventing and not use Kafka messaging directly in my code.

Serverless and Knative – Part 2: Knative Serving

In the first part of this series I went through the installation of Knative on CodeReady Containers which is basically Red Hat OpenShift 4.4 running on a notebook.

In this second part I will cover Knative Serving, which is responsible for deploying and running containers, as well as networking and auto-scaling. Auto-scaling allows scale to zero, which is probably the main reason why Knative is referred to as a Serverless platform.


Before digging into Knative Serving let me share a piece of information from the Knative Runtime Contract which helps to position Knative. It compares Kubernetes workloads (general-purpose containers) with Knative workloads (stateless request-triggered containers):

In contrast to general-purpose containers, stateless request-triggered (i.e. on-demand) autoscaled containers have the following properties:

  • Little or no long-term runtime state (especially in cases where code might be scaled to zero in the absence of request traffic).
  • Logging and monitoring aggregation (telemetry) is important for understanding and debugging the system, as containers might be created or deleted at any time in response to autoscaling.
  • Multitenancy is highly desirable to allow cost sharing for bursty applications on relatively stable underlying hardware resources.

Or in other words: Knative sees itself as better suited for short-running processes. You need to provide central logging and monitoring because the pods come and go. And multi-tenant hardware can be provided large enough to scale for peaks and at the same time make effective use of the resources.

As a developer, I would expect Knative to make my life easier (Knative claims that it is “abstracting away the complex details and enabling developers to focus on what matters”), but coming from Kubernetes it gets more complicated and confusing at first, because Knative uses new terminology for its resources. These are:

  1. Service: Responsible for managing the life cycle of an application/workload. Creates and owns the other Knative objects Route and Configuration.
  2. Route: Maps a network endpoint to one or multiple Revisions. Allows Traffic Management.
  3. Configuration: Desired state of the workload. Creates and maintains Revisions.
  4. Revision: A specific version of a code deployment. Revisions are immutable. Revisions can be scaled up and down. Rules can be applied to the Route to direct traffic to specific Revisions.
(c) knative.dev

Did I already mention that this is confusing? We now need to distinguish between Kubernetes services and Knative services. And on OpenShift, between OpenShift Routes and Knative Routes.

Enough complaining, here starts the interesting part:

Creating a sample application

I am following this example from the Knative web site which is a simple Hello World type of application written in Node.js. The sample is also available in Java, Go, PHP, Python, Ruby, and some other languages.

Instead of using the Docker build explained in the example, I am using an OpenShift binary build, which builds the container image on OpenShift and stores it as an image stream in the OpenShift internal registry. Of course, the container image could also be on Docker Hub or Quay.io or any other registry that you can access. If you follow the Knative example step by step, you create the Node.js application, a Dockerfile, and some more files. On OpenShift, for the binary build, we need the application code and the Dockerfile; we then create an OpenShift project and the container image with these commands:

$ oc new-project knativetutorial
$ oc new-build --name helloworld --binary --strategy docker
$ oc start-build helloworld --from-dir=.
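A quick way to verify the build result is to list the image streams in the project:

$ oc get is

The helloworld image stream should show up with the internal registry address that the Knative Service definition below references.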

Deploying an app as Knative Service

Next I continue with the Knative example. This is the service.yaml file required to deploy the ‘helloworld’ example as a Knative Service:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-nodejs
spec:
  template:
    metadata:
      name: helloworld-nodejs-v1
    spec:
      containers:
        - image: image-registry.openshift-image-registry.svc:5000/knativetutorial/helloword:latest
          env:
            - name: TARGET
              value: "Node.js Sample v1"

If you are familiar with Kubernetes, you have to pay close attention to the first line to see that this is the definition of a Knative Service.

All you need for your deployment are a handful of these lines, specifically the first ‘metadata’.’name’ and the ‘containers’.’image’ specification to tell Kubernetes where to find the container image.

The ‘image’ line specifies the location of the container image just like in every other Kubernetes deployment description. In this example, the ‘helloworld’ image is the image stream in the OpenShift internal registry in a project called ‘knativetutorial’. It is the result of the previous section “Creating a sample application”.

The ‘env’ section sets an environment variable and is used to “create” different versions. (In the Hello World code, the variable TARGET represents the “World” part.)

The second ‘metadata’.’name’ pair, inside ‘template’, is optional but highly recommended. It provides an arbitrary name for the Revision. If you omit it, Knative will use generated default names for the Revisions (“helloworld-nodejs-xhz5df”), and if you have more than one version/revision this makes it difficult to distinguish between them.

With CRC and Knative correctly set up, I simply deploy the service using oc:

$ oc apply -f service.yaml
service.serving.knative.dev/helloworld-nodejs created

The reply isn’t very spectacular, but if you look around (oc get all) you can see that a lot has happened:

  1. A Kubernetes Pod is created, running two containers: user-container and Envoy
  2. Multiple Kubernetes services are created, one of them equipped with an OpenShift route
  3. An OpenShift Route is created
  4. A Kubernetes deployment and a replica-set are created
  5. Knative service, configuration, route, and revision objects are created

It would have taken a YAML file with a lot more definitions and specifications to accomplish all that with plain Kubernetes. I would say that the Knative claim of “abstracting away the complex details and enabling developers to focus on what matters” is definitely true!

Take a look at the OpenShift Console, in the Developer, Topology view:

I really like the way the Red Hat OpenShift developers have visualized Knative objects here.

If you click on the link (Location) of the Route, you will see the helloworld-nodejs response in a browser:

If you wait a minute or so, the Pod will terminate: “All Revisions are autoscaled to 0”. If you then click on the Route location (URL), a Pod will be spun up again.

Another good view of the Knative service is available through the kn CLI tool:

$ kn service list
NAME                URL                                                         LATEST                 AGE   CONDITIONS   READY   REASON
helloworld-nodejs   http://helloworld-nodejs-knativetutorial.apps-crc.testing   helloworld-nodejs-v1   13m   3 OK / 3     True  

$ kn service describe helloworld-nodejs
Name:       helloworld-nodejs
Namespace:  knativetutorial
Age:        15m
URL:        http://helloworld-nodejs-knativetutorial.apps-crc.testing

Revisions:  
  100%  @latest (helloworld-nodejs-v1) [1] (15m)
        Image:  image-registry.openshift-image-registry.svc:5000/knativetutorial/helloword:latest (at 53b1b4)

Conditions:  
  OK TYPE                   AGE REASON
  ++ Ready                  15m 
  ++ ConfigurationsReady    15m 
  ++ RoutesReady            15m 

Adding a new revision

I will now create a second version of our app and deploy it as a second Revision using a new file, service-v2.yaml:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-nodejs
spec:
  template:
    metadata:
      name: helloworld-nodejs-v2
    spec:
      containers:
        - image: image-registry.openshift-image-registry.svc:5000/knativetutorial/helloword:latest
          env:
            - name: TARGET
              value: "Node.js Sample v2 -- UPDATED"

I have changed the Revision name to ‘-v2’ and modified the environment variable TARGET so that we can see which “version” is called. Apply with:

$ oc apply -f service-v2.yaml
service.serving.knative.dev/helloworld-nodejs configured

Checking with the kn CLI we can see that Revision ‘-v2’ is now used:

$ kn service describe helloworld-nodejs
Name:       helloworld-nodejs
Namespace:  knativetutorial
Age:        21m
URL:        http://helloworld-nodejs-knativetutorial.apps-crc.testing

Revisions:  
  100%  @latest (helloworld-nodejs-v2) [2] (23s)
        Image:  image-registry.openshift-image-registry.svc:5000/knativetutorial/helloword:latest (at 53b1b4)

Conditions:  
  OK TYPE                   AGE REASON
  ++ Ready                  18s 
  ++ ConfigurationsReady    18s 
  ++ RoutesReady            18s 

It is visible in the OpenShift Web Console, too:

Revision 2 has now fully replaced Revision 1.

Traffic Management

What if we want to Canary test Revision 2? It is just a simple modification in the YAML:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-nodejs
spec:
  template:
    metadata:
      name: helloworld-nodejs-v2
    spec:
      containers:
        - image: image-registry.openshift-image-registry.svc:5000/knativetutorial/helloword:latest
          env:
            - name: TARGET
              value: "Node.js Sample v2 -- UPDATED"
  traffic:
    - tag: v1
      revisionName: helloworld-nodejs-v1
      percent: 75
    - tag: v2
      revisionName: helloworld-nodejs-v2
      percent: 25

This will create a 75% / 25% distribution between revision 1 and 2. Deploy the change and watch in the OpenShift Web Console:

Have you ever used Istio? Accomplishing this with Istio requires configuring the Ingress Gateway plus defining a Destination Rule and a Virtual Service. In Knative it is just a few additional lines in the Service description. Have you noticed the “Set Traffic Distribution” button in the screenshot of the OpenShift Web Console? There you can modify the distribution on the fly:
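Depending on your kn version, the same split can also be set with the CLI instead of editing YAML; a sketch:

$ kn service update helloworld-nodejs \
  --traffic helloworld-nodejs-v1=75 \
  --traffic helloworld-nodejs-v2=25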

Auto-Scaling

Scale to zero is an interesting feature, but without additional tricks (like pre-started containers or pods, which aren’t available in Knative) it can be annoying: users have to wait until a new pod is started and ready to receive requests. It can also lead to problems like time-outs in a microservices architecture, if a scaled-to-zero service is called by another service and has to be started first.

On the other hand, if our application / microservice is hit hard with requests, a single pod may not be sufficient to serve them and we may need to scale up. And preferably scale up and down automatically.

Auto-scaling is accomplished by simply adding a few annotation statements to the Knative Service description:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-nodejs
spec:
  template:
    metadata:
      name: helloworld-nodejs-v3
      annotations:
        # the minimum number of pods to scale down to
        autoscaling.knative.dev/minScale: "1"
        # the maximum number of pods to scale up to
        autoscaling.knative.dev/maxScale: "5"
        # Target in-flight-requests per pod.
        autoscaling.knative.dev/target: "1"
    spec:
      containers:
        - image: image-registry.openshift-image-registry.svc:5000/knativetutorial/helloword:latest
          env:
            - name: TARGET
              value: "Node.js Sample v3 -- Scaling"

  • minScale: “1” prevents scale to zero; there will always be at least 1 pod active.
  • maxScale: “5” allows scaling up to a maximum of 5 pods.
  • target: “1” limits every started pod to 1 concurrent request at a time; this is just to make the demo easier.

All auto-scale parameters are listed and described here.

Here I deployed the auto-scale example and ran a load test against it using the hey command:

$ hey -z 30s -c 50 http://helloworld-nodejs-knativetutorial.apps-crc.testing/

Summary:
  Total:        30.0584 secs
  Slowest:      1.0555 secs
  Fastest:      0.0032 secs
  Average:      0.1047 secs
  Requests/sec: 477.1042
  
  Total data:   501935 bytes
  Size/request: 35 bytes

Response time histogram:
  0.003 [1]     |
  0.108 [9563]  |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.214 [3308]  |■■■■■■■■■■■■■■
  0.319 [899]   |■■■■
  0.424 [367]   |■■
  0.529 [128]   |■
  0.635 [42]    |
  0.740 [15]    |
  0.845 [10]    |
  0.950 [5]     |
  1.056 [3]     |


Latency distribution:
  10% in 0.0249 secs
  25% in 0.0450 secs
  50% in 0.0776 secs
  75% in 0.1311 secs
  90% in 0.2157 secs
  95% in 0.2936 secs
  99% in 0.4587 secs

Details (average, fastest, slowest):
  DNS+dialup:   0.0001 secs, 0.0032 secs, 1.0555 secs
  DNS-lookup:   0.0001 secs, 0.0000 secs, 0.0197 secs
  req write:    0.0000 secs, 0.0000 secs, 0.0079 secs
  resp wait:    0.1043 secs, 0.0031 secs, 1.0550 secs
  resp read:    0.0002 secs, 0.0000 secs, 0.3235 secs

Status code distribution:
  [200] 14341 responses


$ oc get pod
NAME                                               READY   STATUS    RESTARTS   AGE
helloworld-nodejs-v3-deployment-66d7447b76-4dhql   2/2     Running   0          28s
helloworld-nodejs-v3-deployment-66d7447b76-pvxqg   2/2     Running   0          29s
helloworld-nodejs-v3-deployment-66d7447b76-qxkbc   2/2     Running   0          28s
helloworld-nodejs-v3-deployment-66d7447b76-vhc69   2/2     Running   0          28s
helloworld-nodejs-v3-deployment-66d7447b76-wphwm   2/2     Running   0          2m35s

At the end of the output we see that 5 pods are started, one of them for a longer time (2m 35s) than the rest. That is the minScale: “1” pre-started pod.

Jakarta EE Example from Cloud Native Starter

I wanted to see how easy it is to deploy any form of application using Knative Serving.

I used the authors-java-jee microservice that is part of our Cloud Native Starter project and that we use in an exercise of an OpenShift workshop. A container image of this service is stored on Docker Hub in my colleague Niklas Heidloff’s registry as nheidloff/authors:v1.

This is the Knative service.yaml:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: authors-jee
spec:
  template:
    metadata:
      name: authors-jee-v1
    spec:
      containers:
      - image: docker.io/nheidloff/authors:v1

When I deployed this I noticed that it never starts (you need to scroll the following view to the right to see the problem):

$ kn service list
NAME          URL                                                   LATEST   AGE   CONDITIONS   READY     REASON
authors-jee   http://authors-jee-knativetutorial.apps-crc.testing            33s   0 OK / 3     Unknown   RevisionMissing : Configuration "authors-jee" is waiting for a Revision to become ready.

$ oc get pod
NAME                                         READY   STATUS    RESTARTS   AGE
authors-jee-v1-deployment-7dd4b989cf-v9sv9   1/2     Running   0          42s

The user-container in the pod never starts and the Revision never becomes ready. Why is that?

To understand this problem you have to know that there are two versions of the authors service: the first version is written in Node.js and listens on port 3000. The second version is the JEE version we are trying to deploy here. To make it a drop-in replacement for the Node.js version, it is configured to listen on port 3000, too: very unusual for JEE, and something Knative obviously does not pick up from the Docker metadata in the image.

The Knative Runtime Contract has some information about Inbound Network Connectivity, Protocols and Ports:

“The developer MAY specify this port at deployment; if the developer does not specify a port, the platform provider MUST provide a default. Only one inbound containerPort SHALL be specified in the core.v1.Container specification. The hostPort parameter SHOULD NOT be set by the developer or the platform provider, as it can interfere with ingress autoscaling. Regardless of its source, the selected port will be made available in the PORT environment variable.”

I found another piece of information regarding containerPort in the IBM Cloud documentation about Knative:

By default, all incoming requests to your Knative service are sent to port 8080. You can change this setting by using the containerPort specification.

I modified the Knative service yaml with ports.containerPort info:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: authors-jee
spec:
  template:
    metadata:
      name: authors-jee-v2
    spec:
      containers:
      - image: docker.io/nheidloff/authors:v1
        ports:
        - containerPort: 3000

Note the Revision ‘-v2’! Check after deployment:

$ kn service list
NAME          URL                                                   LATEST           AGE   CONDITIONS   READY   REASON
authors-jee   http://authors-jee-knativetutorial.apps-crc.testing   authors-jee-v2   11m   3 OK / 3     True    

$ oc get pod
NAME                                        READY   STATUS    RESTARTS   AGE
authors-jee-v2-deployment-997d44565-mhn7w   2/2     Running   0          51s

The authors-java-jee microservice uses Eclipse MicroProfile and implements specific health checks. These can be used as Kubernetes readiness and liveness probes. The YAML file then looks like this; the syntax is exactly the standard Kubernetes syntax:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: authors-jee
spec:
  template:
    metadata:
      name: authors-jee-v2
    spec:
      containers:
      - image: docker.io/nheidloff/authors:v1
        ports:
        - containerPort: 3000
        livenessProbe:
          exec:
            command: ["sh", "-c", "curl -s http://localhost:3000/"]
          initialDelaySeconds: 20
        readinessProbe:
          exec:
            command: ["sh", "-c", "curl -s http://localhost:3000/health | grep -q authors"]
          initialDelaySeconds: 40

Microservices Architectures and Knative private services

So far, the examples I tested were all exposed on public URLs using the Kourier Ingress Gateway. This is useful for testing and also for externally accessible microservices, e.g. backend-for-frontend services that serve a browser-based web front end or a REST API for other external applications. But the majority of microservices in a cloud native application should only be called cluster-locally and not be exposed with an external URL.

The Knative documentation has information on how to label a service cluster-local. You can add the label either to the Knative Service or to the Knative Route. The steps described in the documentation are to first deploy the service and then convert it to cluster-local via the label.
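Following that two-step approach, the label can be added to an already deployed service on the command line; a sketch, assuming the authors-jee service from above:

$ kubectl label kservice authors-jee serving.knative.dev/visibility=cluster-local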

You can easily add the label to the YAML file and immediately deploy a cluster-local Knative service. This is the modified Jakarta EE example of the previous section:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: authors-jee
  labels:
    serving.knative.dev/visibility: cluster-local
spec:
  template:
    metadata:
      name: authors-jee-v2
    spec:
      containers:
      - image: docker.io/nheidloff/authors:v1
        ports:
        - containerPort: 3000

When this is deployed to OpenShift, the correct URL shows up in the Route:

Of course you can no longer open the URL in your browser; this address is only reachable from within the Kubernetes cluster.

Debugging Tips

There are new places to look for information as to why a Knative service doesn’t work. Here is a list of helpful commands and examples:

  1. Display the Knative service:
$ kn service list
NAME          URL                                                   LATEST   AGE    CONDITIONS   READY   REASON
authors-jee   http://authors-jee-knativetutorial.apps-crc.testing            3m7s   0 OK / 3     False   RevisionMissing : Configuration "authors-jee" does not have any ready Revision.

It is normal and to be expected that the revision is not available for some time immediately after the deployment, because the application container needs to start first. But in this example the revision still isn’t available after more than 3 minutes, and that is not normal.

You can also display Knative service info using oc instead of kn by using ‘kservice’:

$ oc get kservice
NAME          URL                                                   LATESTCREATED    LATESTREADY   READY   REASON
authors-jee   http://authors-jee-knativetutorial.apps-crc.testing   authors-jee-v2                 False   RevisionMissing

2. Check the pod:

$ oc get pod
No resources found in knativetutorial namespace.

That is bad: no pod means no logs to look at.

3. Get information about the revision:

$ oc get revision
NAME             CONFIG NAME   K8S SERVICE NAME   GENERATION   READY   REASON
authors-jee-v2   authors-jee                      1            False   ContainerMissing

$ oc get revision authors-jee-v2 -o yaml
apiVersion: serving.knative.dev/v1
kind: Revision
[...]
status:
  conditions:
  - lastTransitionTime: "2020-06-03T08:12:49Z"
    message: 'Unable to fetch image "docker.io/nheidloff/authors:1": failed to resolve
      image to digest: failed to fetch image information: GET https://index.docker.io/v2/nheidloff/authors/manifests/1:
      MANIFEST_UNKNOWN: manifest unknown; map[Tag:1]'
    reason: ContainerMissing
    status: "False"
    type: ContainerHealthy
  - lastTransitionTime: "2020-06-03T08:12:49Z"
    message: 'Unable to fetch image "docker.io/nheidloff/authors:1": failed to resolve
      image to digest: failed to fetch image information: GET https://index.docker.io/v2/nheidloff/authors/manifests/1:
      MANIFEST_UNKNOWN: manifest unknown; map[Tag:1]'
    reason: ContainerMissing
    status: "False"
    type: Ready
  - lastTransitionTime: "2020-06-03T08:12:47Z"
    status: Unknown
    type: ResourcesAvailable
[...]

The conditions under the status topic show that I have (on purpose, as a demo) mistyped the container image tag.

This is a real example:

$ oc get revision helloworld-nodejs-v1 -o yaml
[...]
status:
  conditions:
  - lastTransitionTime: "2020-05-28T06:42:14Z"
    message: The target could not be activated.
    reason: TimedOut
    severity: Info
    status: "False"
    type: Active
  - lastTransitionTime: "2020-05-28T06:40:04Z"
    status: Unknown
    type: ContainerHealthy
  - lastTransitionTime: "2020-05-28T06:40:05Z"
    message: '0/1 nodes are available: 1 Insufficient cpu.'
    reason: Unschedulable
    status: "False"
    type: Ready
  - lastTransitionTime: "2020-05-28T06:40:05Z"
    message: '0/1 nodes are available: 1 Insufficient cpu.'
    reason: Unschedulable
    status: "False"
    type: ResourcesAvailable

These conditions clearly show that the cluster is under CPU pressure and unable to schedule a new pod. This was on my first CRC configuration that used only 6 vCPUs.

In my next blog article in this series I will talk about Knative Eventing.

Serverless and Knative – Part 1: Installing Knative on CodeReady Containers

I have worked with Kubernetes for quite some time now, also with Istio Service Mesh. Recently I decided that I want to explore Knative and its possibilities.


So what is Knative? The Knative web site describes it as “components build on top of Kubernetes, abstracting away the complex details and enabling developers to focus on what matters.” It has two distinct components; originally there were three:

  1. Knative Build. It is no longer part of Knative; it is now a project of its own: Tekton.
  2. Knative Serving, responsible for deploying and running containers, also networking and auto-scaling. Auto-scaling allows scale to zero and is the main reason why Knative is referred to as Serverless platform.
  3. Knative Eventing, connecting Knative services (deployed by Knative Serving) with events or streams of events.

This will be a series of blogs about installing Knative, Knative Serving, and Knative Eventing.

In order to explore Knative you need to have access to an instance, of course, and that may require installing it yourself. The Knative documentation (for v0.12) has instructions on how to install it on many different Kubernetes platforms, including Minikube. Perfect, Knative on my notebook.

Installation

I followed the instructions for Minikube, installed Knative, and started a tutorial. At some point I finished for the day and stopped Minikube. The next morning it wouldn’t start again. I tried to find out what went wrong and in the end deleted the Minikube profile, recreated it, and reinstalled Knative. Just out of curiosity I restarted Minikube and ran into the very same problem. This time I was a little more successful with my investigation and found this issue: https://github.com/knative/eventing/issues/2544. I briefly thought about moving to Knative 0.14 but then decided to test it on OpenShift instead. If you read some of my previous blogs you may know that I am a fan of CodeReady Containers (CRC).

Knative on Red Hat OpenShift is called OpenShift Serverless. It had been a preview (“beta”) for quite some time, but since the end of April 2020 it is GA (generally available), no longer preview only. According to the Red Hat OpenShift documentation, OpenShift Serverless v1.7.0 is based on Knative 0.13.2 (as of May 1st, 2020) and is tested on OpenShift 4.3 and 4.4. The CRC version I am currently using (v1.10) is built on top of OpenShift 4.4, so it should work.

The hardware or cluster size requirements for OpenShift Serverless are steep: a minimum of 10 CPUs and 40 GB of RAM. I only have 8 vCPUs (4 cores) and 32 GB of RAM in my notebook, and I do need to run an operating system besides CRC, but I thought I’d give it a try. I started the Knative installation on a CRC configuration using 6 vCPUs and 20 GB of RAM, and so far it seems to work. I have tried smaller configurations and got unschedulable pods (memory and/or CPU pressure).

Installation is accomplished via the OpenShift Serverless Operator, and it took me probably less than 20 minutes to have both Knative Serving and Eventing installed by just following the instructions:

  1. Install the OpenShift Serverless operator
  2. Create a namespace for Knative Serving
  3. Create Knative Serving via the Serverless operator’s API (see the sketch after this list). This also installs Kourier as “an open-source lightweight Knative Ingress based on Envoy.” Kourier is a lightweight replacement for Istio.
  4. Create a namespace for Knative Eventing
  5. Create Knative Eventing via the Serverless operator’s API.
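For steps 3 and 5 the operator API objects are plain custom resources. A minimal sketch of the Serving one, assuming the API group/version used by OpenShift Serverless 1.7 (the KnativeEventing resource looks analogous, in the knative-eventing namespace):

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving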

I have started and stopped CRC many times now, and it doesn’t show the issues that Minikube had.

As a future exercise I will test the Knative add-on for the IBM Cloud Kubernetes Service. It installs Knative 0.14 together with Istio on top of Kubernetes and requires a minimum of 3 worker nodes with 4 CPUs and 16 GB of memory (b3c.4x16 is the machine specification).

In the next blog article I will cover Knative Serving with an example from the Knative documentation.