
Tembo Operator


Requirements

  • A Kubernetes cluster; kind works for local development
  • Rust
  • just
  • An OpenTelemetry collector (optional)

Rust Linting

Run linting with cargo fmt and clippy


rustup component add clippy
cargo clippy

cargo fmt uses the nightly toolchain:

rustup toolchain install nightly
rustup component add rustfmt --toolchain nightly
cargo +nightly fmt

Running Locally

To build and run the operator locally:

just start-kind
just run
  • Or, run with automatic reloading of your local changes. First, install cargo-watch:
cargo install cargo-watch
  • Then, run with auto-reload:
just watch

Install on an existing cluster

just install-dependencies
just install-chart

Integration testing

To automatically set up a local cluster for functional testing, use this script. This will start a local kind cluster, annotate the default namespace for testing and install the CRD definition.

just start-kind

Or, follow the steps below.

  • Connect to a cluster that is safe to run the tests against.
  • Set your kubecontext to any namespace, and label it to indicate it is safe to run tests against this cluster (do not do this against non-test clusters):
NAMESPACE=<namespace> just annotate
  • Start or install the controller you want to test (see the following sections). Do this in a separate shell from where you will run the tests:
export DATA_PLANE_BASEDOMAIN=localhost
cargo run
  • Run the integration tests:
cargo test -- --ignored
  • The integration tests assume you have already installed, or are running, the operator connected to the cluster.

Other testing notes

  • Include the --nocapture flag to show print statements during test runs, for example cargo test -- --ignored --nocapture


As an example, install kind. Once installed, follow these instructions to create a kind cluster connected to a local image registry.


Apply the CRD from cached file, or pipe it from crdgen (best if changing it):

just install-crd

OpenTelemetry (TBD)

Set up an OpenTelemetry Collector in your cluster. Tempo, the opentelemetry-operator, and Grafana Agent should all work out of the box. If your collector does not support gRPC OTLP, you need to change the exporter in main.rs.
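As a sketch only, a minimal standalone collector configuration that accepts gRPC OTLP on the default port 4317 and logs received spans could look like the following. The receiver and exporter names are the stock OpenTelemetry Collector components, not anything specific to this operator; adjust them to your collector distribution.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  # Prints received telemetry to the collector's stdout; useful for smoke tests.
  # Older collector releases call this exporter `logging` instead of `debug`.
  debug: {}

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```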



cargo run
  • Or, with optional telemetry (adjust as needed):
OPENTELEMETRY_ENDPOINT_URL= RUST_LOG=info,kube=trace,controller=debug cargo run --features=telemetry


Compile the controller with:

just compile

Build an image with:

just build

Push the image to your local registry with:

docker push localhost:5001/controller:<tag>

Edit the deployment's image tag appropriately, then run:

kubectl apply -f yaml/deployment.yaml
kubectl port-forward service/coredb-controller 8080:80

NB: the namespace is assumed to be default. If you need a different namespace, replace default in the yaml and set the namespace in your current context so all the commands here work.
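The namespace swap can also be scripted instead of hand-edited. This is only an illustration of the substitution (my-namespace is a placeholder); it runs against an inline manifest fragment so the rewrite is visible.

```shell
# In practice you would pipe the real manifest through sed and into kubectl:
#   sed 's/namespace: default/namespace: my-namespace/' yaml/deployment.yaml | kubectl apply -f -
# Shown here against an inline fragment:
sed 's/namespace: default/namespace: my-namespace/' <<'EOF'
metadata:
  name: coredb-controller
  namespace: default
EOF
```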


In either of the run scenarios, your app is listening on port 8080, and it will observe events.

Try some of:

kubectl apply -f yaml/sample-coredb.yaml
kubectl delete coredb sample-coredb
kubectl edit coredb sample-coredb # change replicas

The reconciler will run and write the status object on every change. You should see results in the logs of the pod, or in the .status object output of kubectl get coredb -o yaml.
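If you only want the status block out of that YAML, a quick text filter works. The document below is inlined purely for illustration; the group/version and the running field are hypothetical examples, not the operator's actual schema.

```shell
# Print everything from the top-level `status:` key onward.
# In practice: kubectl get coredb sample-coredb -o yaml | awk '/^status:/ { found = 1 } found'
awk '/^status:/ { found = 1 } found { print }' <<'EOF'
apiVersion: coredb.io/v1alpha1
kind: CoreDB
metadata:
  name: sample-coredb
status:
  running: true
EOF
```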

Webapp output

The sample web server exposes some example metrics and debug information you can inspect with curl.

$ kubectl apply -f yaml/sample-coredb.yaml
$ curl
# HELP cdb_controller_reconcile_duration_seconds The duration of reconcile to complete in seconds
# TYPE cdb_controller_reconcile_duration_seconds histogram
cdb_controller_reconcile_duration_seconds_bucket{le="0.01"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="0.1"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="0.25"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="0.5"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="1"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="5"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="15"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="60"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="+Inf"} 1
cdb_controller_reconcile_duration_seconds_sum 0.013
cdb_controller_reconcile_duration_seconds_count 1
# HELP cdb_controller_reconciliation_errors_total reconciliation errors
# TYPE cdb_controller_reconciliation_errors_total counter
cdb_controller_reconciliation_errors_total 0
# HELP cdb_controller_reconciliations_total reconciliations
# TYPE cdb_controller_reconciliations_total counter
cdb_controller_reconciliations_total 1
$ curl

The metrics will be auto-scraped if you have a standard PodMonitor for prometheus.io/scrape.
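Beyond scraping, the Prometheus text output above is easy to post-process by hand. For instance, the average reconcile duration is the histogram's _sum divided by its _count. The sample lines are inlined here via a heredoc so the snippet is self-contained; in practice you would pipe the curl output into the same awk program.

```shell
# Average reconcile duration = histogram _sum / _count.
awk '
  /reconcile_duration_seconds_sum/   { sum = $2 }
  /reconcile_duration_seconds_count/ { count = $2 }
  END { printf "avg reconcile: %.3fs\n", sum / count }
' <<'EOF'
cdb_controller_reconcile_duration_seconds_bucket{le="+Inf"} 1
cdb_controller_reconcile_duration_seconds_sum 0.013
cdb_controller_reconcile_duration_seconds_count 1
EOF
# prints: avg reconcile: 0.013s
```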


Updating the CRD: if you change the CRD types, regenerate the manifest from crdgen and re-apply it (see just install-crd above).

