Tutorial: Writing a Kubernetes CRD and Controller with Kubebuilder

I recently came across Kubebuilder, a tool that provides scaffolding for writing your own CustomResourceDefinition (CRD) and controller to extend the API running in Kubernetes clusters. I worked through the official tutorial, which builds a CronJob controller, but I wanted to write something from scratch. That gave me the inspiration for a simple (contrived) example: a “WebPage” CRD that supports CRUD operations for a static web page:

apiVersion: sandbox.rvmiller.com/v1beta1
kind: WebPage
metadata:
  name: sample-web-page
spec:
  html: |
    <title>WebPage CRD</title>
    <h2>This page served from a Kubernetes CRD!</h2>

The idea is that our custom controller will create a Deployment with a Pod running an nginx webserver image that uses a mounted ConfigMap volume to display the web page contents. We can then kubectl port-forward to the pod(s) to verify the web page is displayed.

You can find the code for this tutorial here.

Step 1: Set up the environment

Download and install Kubebuilder, Kustomize, and Kind to run your Kubernetes cluster locally. I prefer Kind over Minikube since it starts up faster, but you’re welcome to use any tool to deploy your cluster.

Step 2: Set up the scaffolding

Create a directory to store your Kubebuilder files:

$ mkdir webpage-crd && cd webpage-crd
$ kubebuilder init --domain <any base domain, I used rvmiller.com>
$ kubebuilder create api --group sandbox --version v1beta1 --kind WebPage

This will generate a CRD to create resources with the following manifest:

apiVersion: sandbox.rvmiller.com/v1beta1
kind: WebPage

Step 3: Define the Custom Resource fields

We need to update the generated webpage_types.go file with our custom Spec and Status fields. We can also use kubebuilder’s annotations to add OpenAPI validation:

// WebPageSpec defines the desired state of WebPage
type WebPageSpec struct {
	// Html field stores the static web page contents
	// +kubebuilder:validation:MinLength=1
	Html string `json:"html"`
}

// WebPageStatus defines the observed state of WebPage
type WebPageStatus struct {
	// Stores the last time the job was successfully scheduled.
	// +optional
	LastUpdateTime *metav1.Time `json:"lastUpdateTime,omitempty"`
}

You can now run make manifests to have Kubebuilder generate the CRD yaml:

  group: sandbox.rvmiller.com
  names:
    kind: WebPage
    ...
          spec:
            properties:
              html:
                description: Html field stores the static web page contents
                minLength: 1
                type: string
            required:
            - html
          status:
            properties:
              lastUpdateTime:
                description: Stores the last time the job was successfully scheduled.
                format: date-time
                type: string

Step 4: Define the controller to reconcile state changes

This is where the core logic for your controller lives, reconciling actual state with desired state, based on updates made to WebPage CRDs.

func (r *WebPageReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	ctx := context.Background()
	log := r.Log.WithValues("webpage", req.NamespacedName)

	log.Info("starting reconcile")

	// Get custom resource
	var webpage api.WebPage
	if err := r.Get(ctx, req.NamespacedName, &webpage); err != nil {
		log.Error(err, "unable to fetch WebPage")
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Desired ConfigMap
	cm, err := r.desiredConfigMap(webpage)
	if err != nil {
		return ctrl.Result{}, err
	}

	// Desired Deployment
	dep, err := r.desiredDeployment(webpage, cm)
	if err != nil {
		return ctrl.Result{}, err
	}

	// Patch (create/update) both owned resources
	applyOpts := []client.PatchOption{client.ForceOwnership, client.FieldOwner("webpage-controller")}

	err = r.Patch(ctx, &cm, client.Apply, applyOpts...)
	if err != nil {
		return ctrl.Result{}, err
	}

	err = r.Patch(ctx, &dep, client.Apply, applyOpts...)
	if err != nil {
		return ctrl.Result{}, err
	}

	// Set the last update time
	webpage.Status.LastUpdateTime = &metav1.Time{Time: time.Now()}
	if err = r.Status().Update(ctx, &webpage); err != nil {
		log.Error(err, "unable to update status")
		return ctrl.Result{}, err
	}

	log.Info("finished reconcile")

	return ctrl.Result{}, nil
}

The helper functions below construct the Deployment and ConfigMap with the values we’re interested in. We then PATCH the objects in Kubernetes (server-side apply), which avoids maintaining and updating state for the whole object. This approach is based on a talk from KubeCon 2019 documented here.

func (r *WebPageReconciler) desiredConfigMap(webpage api.WebPage) (corev1.ConfigMap, error) {
	cm := corev1.ConfigMap{
		TypeMeta: metav1.TypeMeta{APIVersion: corev1.SchemeGroupVersion.String(), Kind: "ConfigMap"},
		ObjectMeta: metav1.ObjectMeta{
			Name:      webpage.Name + "-config",
			Namespace: webpage.Namespace,
		},
		Data: map[string]string{
			"index.html": webpage.Spec.Html,
		},
	}

	// For garbage collector to clean up resource
	if err := ctrl.SetControllerReference(&webpage, &cm, r.Scheme); err != nil {
		return cm, err
	}

	return cm, nil
}

func (r *WebPageReconciler) desiredDeployment(webpage api.WebPage, cm corev1.ConfigMap) (appsv1.Deployment, error) {
	dep := appsv1.Deployment{
		TypeMeta: metav1.TypeMeta{APIVersion: appsv1.SchemeGroupVersion.String(), Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{
			Name:      webpage.Name,
			Namespace: webpage.Namespace,
		},
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"webpage": webpage.Name},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"webpage": webpage.Name},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{
							Name:  "nginx",
							Image: "nginx",
							VolumeMounts: []corev1.VolumeMount{
								{
									Name:      "config-volume",
									MountPath: "/usr/share/nginx/html",
								},
							},
						},
					},
					Volumes: []corev1.Volume{
						{
							Name: "config-volume",
							VolumeSource: corev1.VolumeSource{
								ConfigMap: &corev1.ConfigMapVolumeSource{
									LocalObjectReference: corev1.LocalObjectReference{
										Name: cm.Name,
									},
								},
							},
						},
					},
				},
			},
		},
	}

	// For garbage collector to clean up resource
	if err := ctrl.SetControllerReference(&webpage, &dep, r.Scheme); err != nil {
		return dep, err
	}

	return dep, nil
}

Step 5: Compile, Install, and Run

Now you can use Kubebuilder to generate manifests for your CRD, install the CRD into your running Kubernetes cluster, and run your controller locally (this runs in the foreground and will occupy your terminal):

$ make
go build -o bin/manager main.go
$ make install
kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/webpages.sandbox.rvmiller.com created
$ make run
2020-07-04T22:21:21.748-0400    INFO    setup   starting manager

Step 6: View your hard work

You can use the webpage.yaml file from the top of this blog post and apply it to your cluster. You will see a configmap and pod created, and when you port-forward to the running pod, you can view your HTML locally in a web browser!

$ kubectl apply -f webpage.yaml 
webpage.sandbox.rvmiller.com/sample-web-page created
$ kubectl get configmaps
NAME                     DATA   AGE
sample-web-page-config   1      10m
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
sample-web-page-nginx   1/1     Running   0          10m
$ kubectl port-forward sample-web-page-nginx 7070:80
Forwarding from 127.0.0.1:7070 -> 80
Forwarding from [::1]:7070 -> 80

You can also edit the CRD, wait a few seconds for reconciliation, and force refresh the page to see the updated HTML:

$ kubectl edit webpage.sandbox.rvmiller.com sample-web-page
  <h2>Another test...</h2>

And you can view the status of the custom resource to view its last updated time:

$ kubectl get webpage.sandbox.rvmiller.com sample-web-page -o yaml
  lastUpdateTime: "2020-07-05T03:14:34Z"

Since the controller “owns” the Deployment and ConfigMap objects, they will be reconciled even if you delete them. Normally deleting a Deployment is final, but if you delete the sample-web-page Deployment, you will see it recreated through reconciliation.

Step 7: Tear down

Thanks to the controller references set above, you can simply remove the CRD to have all the dependent resources automatically garbage collected:

$ kubectl delete crd webpages.sandbox.rvmiller.com
customresourcedefinition.apiextensions.k8s.io "webpages.sandbox.rvmiller.com" deleted
$ kubectl get pods
NAME                    READY   STATUS        RESTARTS   AGE
sample-web-page-nginx   0/1     Terminating   0          7m39s

Kubernetes Secret Environment Variable Gotcha

While playing around with starting a MySQL pod on Kubernetes with an environment variable populated from a Secret, I hit a gotcha with an error message I couldn’t easily find by googling:

The Error Message

mysqladmin: [ERROR] unknown option '--"'.

The Investigation

Since I had previously started a MySQL pod with a non-secret env variable without any problems, I suspected an issue with my configuration:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-password
type: Opaque
data:
  MYSQL_ROOT_PASSWORD: cGFzc3dvcmQK
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: mysql
      image: mysql
      envFrom:
        - secretRef:
            name: mysql-root-password

The value “cGFzc3dvcmQK” comes from base64-encoding the password, in this case the word “password”:

$ echo "password" | base64
cGFzc3dvcmQK

But this is actually incorrect, since echo will implicitly add a newline character, which gets base64-encoded into the string! When this string later gets base64-decoded inside Kubernetes, the environment variables in the MySQL container look like this:
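The same difference is easy to reproduce in Python, mirroring what echo produces with and without the trailing newline:

```python
import base64

# echo "password" pipes "password\n", so the newline is encoded too
print(base64.b64encode(b"password\n").decode())  # cGFzc3dvcmQK

# echo -n "password" pipes just "password"
print(base64.b64encode(b"password").decode())    # cGFzc3dvcmQ=
```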

$ kubectl exec -it db printenv
...
MYSQL_ROOT_PASSWORD=password

...
That newline character is included, and MySQL fails to start while attempting to apply an option for an empty environment variable ('--"'), causing that somewhat confusing error message to appear in the container logs.

The Takeaway

Be sure to base64-encode secrets without a trailing newline. When generating the secret with echo, use the "-n" flag so echo does not append one:

$ echo -n "password" | base64
cGFzc3dvcmQ=

Using this encoded string will prevent empty environment variables being injected into the MySQL container and allow MySQL to start:

$ kubectl exec -it db printenv
...
MYSQL_ROOT_PASSWORD=password
...
I did come across this issue, which describes the gotcha affecting other applications as well, even 4 years after it was originally filed. But since I couldn’t find any posts with this exact MySQL error log, I thought I’d post my experience.

Wifi-based trilateration on Android

Triangulation offers a way to locate yourself in space. Cartographers in the 1600s originally used the technique to measure things like the height of a cliff, which would be too impractical to measure directly. Later, triangulation evolved into an early navigation system when Dutch mathematician Willebrord Snell discovered that three points can be used to locate a point on a map.

While triangulation uses angles to locate points, trilateration uses lateral distances. If we know the positions of three points P1, P2, and P3, as well as our distances from each of those points, r1, r2, and r3, we can look at the overlapping circles they form to estimate where we are relative to the three points. We can even extend the technique to 3D, finding the intersecting region of spheres surrounding the points.

In this project, I’d like to show how we can use the Wifi signal strength, in dB, to approximate distance from a wireless access point (AP) or router.  Once we have this distance, we can create a circle surrounding an AP to show possible locations we might occupy.  In the next part of the project, I plan to show how we can use three APs to estimate our position in a plane using concepts of trilateration. (Note: I haven’t had time to implement this, but you can use this Wiki article to implement it yourself).
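For reference, here is a minimal sketch of the 2-D trilateration math in Python (not the Android implementation, just the algebra): subtracting pairs of circle equations cancels the quadratic terms, leaving a linear system we can solve directly.

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Estimate (x, y) from three known points and measured distances.

    Subtracting the circle equations pairwise cancels the x^2 and y^2
    terms, leaving two linear equations solved via Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = 2 * (x2 - x1)
    b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2)
    e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a * e - b * d  # zero if the three points are collinear
    return ((c * e - b * f) / det, (a * f - c * d) / det)

# Example: APs at (0,0), (4,0), (0,5); distances measured from (1,2)
print(trilaterate((0, 0), 5**0.5, (4, 0), 13**0.5, (0, 5), 10**0.5))
```

With noisy real-world distances the three circles will not intersect in a single point, so a practical implementation would average many samples or do a least-squares fit instead of solving exactly.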

Trilateration using 3 access points providing a very precise position (a) and a rougher estimate (b)


Determining distance from decibel level

There’s a useful concept in physics that lets us mathematically relate the signal level in dB to a real-world distance.  Free-space path loss (FSPL) characterizes how the wireless signal degrades over distance (following an inverse square law):

FSPL (dB) = 20 log10(d) + 20 log10(f) + 92.45

The constant there, 92.45, varies depending on the units you’re using for other measurements (right now it’s using GHz for frequency and km for distance).  For my application I used the recommended constant -27.55, which treats frequency in MHz and distance in meters (m).  We can re-arrange the equation to solve for d, in Java:

public double calculateDistance(double levelInDb, double freqInMHz) {
   double exp = (27.55 - (20 * Math.log10(freqInMHz)) + Math.abs(levelInDb)) / 20.0;
   return Math.pow(10.0, exp);
}

Now, there are a few drawbacks to this rough approximation:

  1. FSPL explicitly requires “free space” for calculation, while most Wifi signals are obstructed by walls and other materials.
  2. Ideally, we will want to sample the signal strength many times (10+) to account for varying interference.

Problem (1) will be resolved in the future by using the signal-to-noise ratio to more accurately estimate (that sounds like an oxymoron) obstructions to the wifi signal.  Problem (2) can be addressed in code by sampling many times and computing the average signal level.
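As a quick sanity check, here is the same rearranged formula in Python, fed a made-up reading of -50 dB from a 2.4 GHz (2400 MHz) access point:

```python
import math

def calculate_distance(level_in_db, freq_in_mhz):
    # Same rearranged FSPL equation as the Java method above
    exp = (27.55 - (20 * math.log10(freq_in_mhz)) + abs(level_in_db)) / 20.0
    return 10.0 ** exp

# Hypothetical values: -50 dB at 2400 MHz works out to roughly 3.1 m
print(round(calculate_distance(-50.0, 2400.0), 2))
```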

Using the above code along with Android’s WifiManager and ScanResult classes, I can print out our final measurements:

WifiManager wifi = (WifiManager) getSystemService(Context.WIFI_SERVICE);

registerReceiver(new BroadcastReceiver() {
	@Override
	public void onReceive(Context c, Intent intent) {
		List<ScanResult> results = wifi.getScanResults();
		for (ScanResult s : results) {
			DecimalFormat df = new DecimalFormat("#.##");
			Log.d(TAG, s.BSSID + ": " + s.level + ", d: " +
					df.format(calculateDistance((double) s.level, s.frequency)) + "m");
		}
	}
}, new IntentFilter(WifiManager.SCAN_RESULTS_AVAILABLE_ACTION));


And we can get back data that appears to be correct when moving further away from my test router (MAC address: 84:1b:5e:2c:76:f2):

[Image lost during host transition, but basically just showed how the distance increased]

Quickie: Which way does gravity point?


Everyone knows a compass always points north, and most people know it’s because of magnetic fields present on Earth’s surface. There’s another force here on Earth directed toward a central point, and that’s gravity. Humans are quite adept at sensing gravity thanks to equilibrioception, where fluid contained in structures in our inner ear provides feedback to help us stay balanced.

But machines, too, can detect gravity thanks to the simple accelerometer.  Already present in most smartphones today, accelerometers react to gravity with tiny springs, creating a voltage difference that we can measure and turn into meaningful units.

On Android, we can easily read the accelerometer data:

SensorManager sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);


public void onSensorChanged(SensorEvent event) {
	float x, y, z;
	x = event.values[0];
	y = event.values[1];
	z = event.values[2];
}

Using accelerometers to emulate the human perception of gravity

I’d like to show how we can use an Android phone (even my dusty old Droid Eris) to visualize the force of gravity.  To save time, we’re only going to use two dimensions, x and y, but the technique used here can easily be extended into 3D.

Let’s represent gravity the same way students in a high school physics class would — with an arrow pointing down.  The goal would be the ability to rotate the phone (changing the x and y position), while still having that arrow point down, illustrating the direction of gravity.

The first thing we’ll need to do is convert the rectangular coordinates given to us (x and y) to a polar system (r, θ), where extracting an angle is much easier.

Thinking back to high school geometry, the inverse tangent will provide that angle directly.  Java has a built-in method, atan2(), which even gracefully handles the divide-by-zero case when x = 0. Because the image rotation I’m using is based on degrees (more on that in a moment), we can convert the radian angle to a common degree (0-360°).

double theta = Math.atan2(y, x);
double degree = ((theta * -180.0) / Math.PI) + 180;  // +180 to keep 0 on the right

That gives us the degree rotation of the phone in 2D.  We’re almost there.  To determine the degree that we would like the gravity arrow to point, we need to offset that degree, modulo 360 to keep us within the range (0-360°):

float rotateDegree = (float) ((degree + 270.0) % 360.0);
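To sanity-check the two formulas, here they are ported to Python and fed a few hypothetical (x, y) accelerometer readings (the values are made up; the math mirrors the Java above):

```python
import math

def rotate_degree(x, y):
    theta = math.atan2(y, x)
    degree = ((theta * -180.0) / math.pi) + 180  # +180 to keep 0 on the right
    return (degree + 270.0) % 360.0              # offset, wrapped to 0-360

# Hypothetical accelerometer readings
for x, y in [(1.0, 0.0), (-1.0, 0.0), (0.0, -1.0)]:
    print((x, y), rotate_degree(x, y))
```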

Now it’s just a matter of re-drawing the arrow image on the screen.  Android offers some fancy animation techniques, but for this quickie project, I chose to use a matrix rotation:

Matrix matrix = new Matrix();
matrix.postRotate(rotateDegree);  // apply the rotation computed above
Bitmap rotated = Bitmap.createBitmap(myImg, 0, 0, myImg.getWidth(), myImg.getHeight(), matrix, true);

With that code in place, we can finally visualize the force of gravity, at least in two dimensions:

This project was a quick one (writing this blog entry actually took longer than the code itself), but I think it’s important to show how we can figuratively “teach” a device a human trait and give it a new skill.  For instance, with a faster refresh rate and perhaps a little more accuracy, a robot could use this technique to keep itself balanced, much like humans use feedback from gravitational forces to stay upright.

Github available here.

CS530 Visualization Projects

This is a collection of projects I created for CS 530: Introduction to Computer Visualization. Each project required an HTML writeup, so I figured it would be easiest to keep a collection of links here…

Project 1: First Steps with VTK

To get acquainted with the Visualization Toolkit (VTK), we used bathymetry (sea depth) and topography information from NASA to visualize the earth in a few different ways. We also implemented a Sea Rise simulation that shows how land masses on Earth change as the sea level rises.

LINK to project 1

Project 2: Color Mapping

This project focused on choosing the right color maps to visualize different types of data. The two types of data we looked at were MRI scans and a topographical map of the western U.S. With these data sets, we were tasked with creating appropriate color maps in both continuous and discrete styles.

LINK to project 2

Project 3: Isosurfaces

Isosurfacing allows the medical industry to convert 2-dimensional slices, such as the CT slices used in this project, into a 3-dimensional surface in space. This project explored different techniques of building isosurfaces and mapping colors to them.

LINK to project 3

Project 4: Direct Volume Rendering

Although isosurfacing can generate a surface in 3D, the medical industry often uses raycast volume rendering instead because it better reflects the ambiguity and imprecision of the measurement. Rather than creating a geometry from the data, volume rendering casts rays through the volume, accumulating opacity and color along the way. This project dealt with two data sets: the CT scan from the previous project, and vorticity surrounding a delta wing on an aircraft.

LINK to project 4

Project 4 Bonus: Multidimensional Transfer Function

Using the programs created for the last project, I added a 2nd component to the rendering using the gradient magnitude files generated from the same data set.

LINK to project 4 bonus

Project 5: Vector Field/Flow Visualization

This final project explored vector field visualization of velocity data surrounding a delta wing. I visualize the vector field in different ways: plane slices showing the velocity data with arrow glyphs, streamlines, stream tubes, and a stream surface. Finally, I present the streamlines with the isosurface of vortex magnitude for reference.

LINK to project 5

Wind Turbine Analysis



For our final project for CS 59000: Embedded Systems, a partner and I implemented several tests on a small-scale wind turbine using the Texas Instruments MSP430 board. We use the Analog-to-Digital Converter (ADC) to measure the voltage generated by the turbine, and we compute rotations per minute with the help of an optical tachometer. We then send these values to a Java-based user interface for real-time reporting on an attached computer.

For the final part of our project, we designed a wind turbine stand on springs that we can use, along with the MSP430, to measure accelerometer data from the wind turbine under stress. We also send the real-time data to the user interface on an attached computer.


Power Coefficient (Cp)

Cp = P / (½ · ρ · A · V³) (Wiki link)

We measured the following characteristics of the wind turbine at LOW fan speed:

  • A = 0.134614 m² (swept area)
  • V³ = (2.101 m/s)³ = 9.275 m³/s³
  • ρ = 1.2041 kg/m³ at 20°C (from Wikipedia)

The average voltage reported by our program at LOW fan speed was 2.304 volts. Resistance was set at 330 Ω.

Using these values, we found the power coefficient, Cp, to be:

Cp = 0.00929, or roughly 0.01

Tip-Speed Ratio

λ = (blade tip speed) / (wind speed) = ωR / V (Wiki Link)

This part of the project required the use of the optical tachometer connected to the MSP430 board. The tachometer outputs a high value when no blade blocks the beam, and a low value (close to zero) when a blade is in front of it. We read this signal and convert the rate at which blades pass through the beam into a rotations-per-minute (RPM) value.

The average rotational speed we measured was 55 RPM.

We measured the radius of a blade, and found R = 20.7 cm or 0.207 meters.

At LOW fan speed, the velocity of the wind was recorded as V = 2.101 m/s, or 126.06 m/min.

Using these values, we found the Tip-Speed Ratio to be:
λ ≈ 0.567 (dimensionless)
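The arithmetic can be reproduced directly from the measured values above as a quick check:

```python
import math

rpm = 55              # measured rotational speed
radius_m = 0.207      # measured blade radius
wind_m_per_s = 2.101  # wind velocity at LOW fan speed

tip_speed = (rpm * 2 * math.pi / 60) * radius_m  # blade tip speed in m/s
tsr = tip_speed / wind_m_per_s                   # dimensionless tip-speed ratio
print(round(tsr, 3))  # 0.567
```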

Accelerometer Data


We constructed a special stand for the wind turbine that allows the turbine and MSP board to move in unison, while still being flexible to allow natural movement due to the wind.

For this part of the project, we modified the provided Java program to also display accelerometer data in the X- and Y-axes. We track and record this data in real-time, which gives some insight into how the wind turbine is moving as the speed and direction of wind changes.

Although we are not able to give a unit for these values, the magnitude of change can indicate what is happening in the physical system. For instance, when we see X values change from near-zero to negative, we know that stress is being placed on the wind turbine in the negative X direction (see diagram below — blue values represent negative readings).

[Image lost during host transition: diagram of accelerometer readings, with blue values representing negative X readings]

Arduino Web Server


Winter break means plenty of time to toy around with something new. I’m not sure what inspired this project, perhaps the ethernet driver we designed for our Operating Systems course, but I’ve decided to explore the field of embedded networking. And you can’t get much more embedded than a 16 MHz Arduino Uno with 32K of memory.


I want to create an Arduino-based web server, but with a few twists, because the idea already exists and has been implemented. The first link points to Lady Ada’s quick-and-dirty Arduino file server, which can serve up character-based files stored on micro SD. The second link offers a more functional server called Webduino, which claims to offer image support (i.e., binary transfers). However, reading through the code, it looks like the developer took the easy way out by re-encoding a PNG as hex values and then sending those values byte-by-byte over the network. That’s not image support! Also, both implementations seem to suffer from the limitation that only one client can connect at a time.

Because the Arduino has no formal notion of threads, it would make sense that multiple clients just won’t work. But I’ve been reading up on a project called Protothreads, which adds the most basic threading you can imagine. No separate stacks. No pre-emptive scheduling. Just a way to give the appearance that two computations are concurrent. I’m hoping that I can use protothreading to allow multiple clients to connect.
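The protothread idea is easy to illustrate outside of C. In this Python sketch (not Arduino code, and not the actual Protothreads library), each handler yields wherever it would block, and a simple loop interleaves whichever handlers are still alive:

```python
def client_handler(name, chunks):
    # A "protothread": each yield marks a point where it gives up the CPU
    for chunk in chunks:
        yield name + " sent " + chunk

def run(threads):
    # Round-robin over live handlers, giving the appearance of concurrency
    log = []
    while threads:
        for t in list(threads):
            try:
                log.append(next(t))
            except StopIteration:
                threads.remove(t)
    return log

log = run([client_handler("A", ["headers", "body"]),
           client_handler("B", ["headers"])])
print(log)
```

The two clients' sends come out interleaved even though there is only one real thread, no separate stacks, and no preemptive scheduling, which is the effect I am hoping protothreading will give the Arduino server.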

Additionally, it would be nice to find a way to do binary transfers. Glancing at the EthernetClient and EthernetServer API, it looks like they’re both set up for byte transfers. I wonder if there’s a way I can trick it into sending binary information. We’ll see.

Update – 26 January 2012:

I found an easy way (untested) to get the Arduino to send non-text content over the EthernetClient interface. When a client requests a file of a certain type, say, PNG, you can send a response indicating that you will be sending PNG binary data byte by byte as follows:

client.println("HTTP/1.1 200 OK");
client.println("Content-Type: image/png");
client.println();  // blank line ends the HTTP headers

I hope to test this technique soon. Admittedly, I still have a long way to go on this project, but other projects (iPhone app, stay tuned) keep arising.