
Deployment and testing

Before testing out the conversion, you'll need to enable the conversion webhook in the CRD.

Kubebuilder generates Kubernetes manifests under the config directory with the webhook bits disabled. To enable them:

  • Enable patches/webhook_in_<kind>.yaml and patches/cainjection_in_<kind>.yaml in the config/crd/kustomization.yaml file.

  • Enable the ../certmanager and ../webhook directories under the bases section in the config/default/kustomization.yaml file.

  • Enable all the vars under the CERTMANAGER section in the config/default/kustomization.yaml file.
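For reference, after those edits the bases section of config/default/kustomization.yaml looks roughly like this (a sketch; the exact layout varies by Kubebuilder version):

```yaml
# config/default/kustomization.yaml (excerpt; layout varies by Kubebuilder version)
bases:
- ../crd
- ../rbac
- ../manager
- ../webhook       # uncommented to deploy the webhook server
- ../certmanager   # uncommented so cert-manager provisions serving certs
```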

Additionally, if present in the Makefile, set the CRD_OPTIONS variable to just "crd", removing the trivialVersions option (this ensures that controller-gen actually generates validation schemas for each version, instead of telling Kubernetes that they are the same):

CRD_OPTIONS ?= "crd"
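For context, CRD_OPTIONS is passed to controller-gen by the scaffolded manifests target. In a typical Kubebuilder Makefile it looks roughly like this (a sketch; your scaffolding may differ):

```makefile
CRD_OPTIONS ?= "crd"

# Generate CRDs and webhook manifests into config/crd/bases
manifests: controller-gen
	$(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
```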

Now that all code changes and manifests are in place, deploy it to the cluster and test it out.

You'll need cert-manager installed (version 0.9.0+) unless you have some other certificate management solution in place. The Kubebuilder team has tested the instructions in this tutorial with the 0.9.0-alpha.0 release.

Once all ducks are in a row with certificates, run make install deploy (as normal) to deploy all the bits (the CRD and the controller-manager deployment) onto the cluster.
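Concretely, the deployment steps might look like the following (the cert-manager manifest URL and version here are illustrative; check the cert-manager release page for the version you actually use):

```shell
# Install cert-manager (URL/version illustrative; see cert-manager releases)
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.9.0/cert-manager.yaml

# Install the CRDs and deploy the controller-manager (standard Kubebuilder targets)
make install deploy
```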

Testing

Once all of the bits are up and running on the cluster with conversion enabled, test out the conversion by requesting different versions.

Make a v2 version based on the v1 version and put it under config/samples:

apiVersion: batch.tutorial.kubebuilder.io/v2
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/name: project
    app.kubernetes.io/managed-by: kustomize
  name: cronjob-sample
spec:
  schedule:
    minute: "*/1"
  startingDeadlineSeconds: 60
  concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
  jobTemplate:
    spec:
      template:
        spec:
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000
            seccompProfile:
              type: RuntimeDefault
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              readOnlyRootFilesystem: false
          restartPolicy: OnFailure

Then, create it on the cluster:

kubectl apply -f config/samples/batch_v2_cronjob.yaml

If you have done everything correctly, it should create successfully, and you should be able to fetch it using both the v2 resource

kubectl get cronjobs.v2.batch.tutorial.kubebuilder.io -o yaml
apiVersion: batch.tutorial.kubebuilder.io/v2
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/name: project
    app.kubernetes.io/managed-by: kustomize
  name: cronjob-sample
spec:
  schedule:
    minute: "*/1"
  startingDeadlineSeconds: 60
  concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
  jobTemplate:
    spec:
      template:
        spec:
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000
            seccompProfile:
              type: RuntimeDefault
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              readOnlyRootFilesystem: false
          restartPolicy: OnFailure

and the v1 resource

kubectl get cronjobs.v1.batch.tutorial.kubebuilder.io -o yaml
apiVersion: batch.tutorial.kubebuilder.io/v1
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/name: project
    app.kubernetes.io/managed-by: kustomize
  name: cronjob-sample
spec:
  schedule: "*/1 * * * *"
  startingDeadlineSeconds: 60
  concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
  jobTemplate:
    spec:
      template:
        spec:
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000
            seccompProfile:
              type: RuntimeDefault
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              readOnlyRootFilesystem: false
          restartPolicy: OnFailure
  

Both should be filled out, and look equivalent to the v2 and v1 samples, respectively. Notice that each has a different API version.
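The equivalence between the two views comes from the conversion webhook: the hub/spoke conversion methods written earlier flatten the v2 structured schedule into the v1 cron string (and back). A minimal self-contained sketch of that flattening logic (field names mirror the tutorial's v2 CronSchedule type; this is an illustration, not the actual webhook code):

```go
package main

import "fmt"

// CronSchedule mirrors the tutorial's v2 structured schedule:
// each field is optional and defaults to the cron wildcard when unset.
type CronSchedule struct {
	Minute     *string
	Hour       *string
	DayOfMonth *string
	Month      *string
	DayOfWeek  *string
}

// orStar returns the field's value, or "*" if the field is unset.
func orStar(field *string) string {
	if field != nil {
		return *field
	}
	return "*"
}

// toCronString flattens a v2 schedule into the single v1 cron string.
func toCronString(s CronSchedule) string {
	return fmt.Sprintf("%s %s %s %s %s",
		orStar(s.Minute), orStar(s.Hour), orStar(s.DayOfMonth),
		orStar(s.Month), orStar(s.DayOfWeek))
}

func main() {
	minute := "*/1"
	// The v2 sample sets only schedule.minute: "*/1" ...
	fmt.Println(toCronString(CronSchedule{Minute: &minute}))
	// ... which the v1 view reports as schedule: "*/1 * * * *"
}
```

This is why the v2 object's schedule.minute: "*/1" shows up in the v1 view as schedule: "*/1 * * * *".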

Finally, if you wait a bit, you should notice that the CronJob continues to reconcile, even though the controller is written against the v1 API version.