
Rust client for Kubernetes in the style of a more generic client-go. It makes certain assumptions about the kubernetes api to allow writing generic abstractions, and as such contains rust reinterpretations of Reflector and Informer to allow writing kubernetes controllers/watchers/operators more easily.

NB: This library is currently undergoing a lot of changes with async/await stabilizing. Please check the CHANGELOG when upgrading.


Select a version of kube along with the generated k8s api types that corresponds to your cluster version:

kube = "0.31.0"
kube-derive = "0.31.0"
k8s-openapi = { version = "0.7.1", default-features = false, features = ["v1_15"] }

Note that turning off default-features for k8s-openapi is recommended to speed up your compilation (and we provide an api anyway).
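For reference, a complete `[dependencies]` section under those assumptions might look like the following; the serde crates and their versions are our addition (the derive and yaml examples below assume the usual serde stack), not something this crate pins for you:

```toml
[dependencies]
kube = "0.31.0"
kube-derive = "0.31.0"
k8s-openapi = { version = "0.7.1", default-features = false, features = ["v1_15"] }
# assumed for the derive/serialization examples below:
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
serde_yaml = "0.8"
```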


See the examples directory for how to watch over resources in a simplistic way.

API Docs

See version-rs for a super light (~100 lines) actix*, prometheus, and deployment api setup.

See controller-rs for a full actix* example, with circleci and kube yaml.

NB: actix examples with futures are due for a rewrite with the current version.


The direct Api type takes a client, and is constructed with either the ::global or ::namespaced functions:

use k8s_openapi::api::core::v1::Pod;
let pods: Api<Pod> = Api::namespaced(client, "default");

let p = pods.get("blog").await?;
println!("Got blog pod with containers: {:?}", p.spec.unwrap().containers);

let patch = json!({"spec": {
    "activeDeadlineSeconds": 5
}});
let pp = PatchParams::default();
let patched = pods.patch("blog", &pp, serde_json::to_vec(&patch)?).await?;
assert_eq!(patched.spec.unwrap().active_deadline_seconds, Some(5));

pods.delete("blog", &DeleteParams::default()).await?;
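The patch body above is a JSON merge patch (RFC 7386), assuming the default patch strategy (check PatchParams if you need a different strategy). Its merge semantics can be sketched with a toy value type; this is an illustration only, not serde_json or anything kube actually uses:

```rust
use std::collections::BTreeMap;

// Toy JSON-like value for demonstrating RFC 7386 merge patch semantics.
#[derive(Debug, Clone, PartialEq)]
enum Value {
    Null,
    Num(i64),
    Str(String),
    Obj(BTreeMap<String, Value>),
}

// Apply `patch` to `target` per RFC 7386: objects merge recursively,
// nulls delete keys, and anything else replaces the target wholesale.
fn merge_patch(target: &Value, patch: &Value) -> Value {
    match patch {
        Value::Obj(pmap) => {
            let mut out = match target {
                Value::Obj(tmap) => tmap.clone(),
                _ => BTreeMap::new(),
            };
            for (k, v) in pmap {
                if let Value::Null = v {
                    out.remove(k); // null in a merge patch deletes the key
                } else {
                    let base = out.get(k).cloned().unwrap_or(Value::Null);
                    out.insert(k.clone(), merge_patch(&base, v));
                }
            }
            Value::Obj(out)
        }
        other => other.clone(), // non-objects replace the target
    }
}

fn main() {
    // a "spec" with one existing field, patched with activeDeadlineSeconds: 5
    let mut spec = BTreeMap::new();
    spec.insert("restartPolicy".to_string(), Value::Str("Always".into()));
    let target = Value::Obj(spec);

    let mut p = BTreeMap::new();
    p.insert("activeDeadlineSeconds".to_string(), Value::Num(5));
    let patch = Value::Obj(p);

    let merged = merge_patch(&target, &patch);
    if let Value::Obj(m) = &merged {
        // new field is set, untouched fields survive
        assert_eq!(m.get("activeDeadlineSeconds"), Some(&Value::Num(5)));
        assert_eq!(m.get("restartPolicy"), Some(&Value::Str("Always".into())));
    }
}
```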

See the examples ending in _api for more detail.

Custom Resource Definitions

Working with custom resources uses automatic code-generation via proc_macros in kube-derive.

You need to #[derive(CustomResource)] and some #[kube(attrs..)] on a spec struct:

#[derive(CustomResource, Serialize, Deserialize, Default, Clone)]
#[kube(group = "clux.dev", version = "v1", namespaced)]
pub struct FooSpec {
    name: String,
    info: String,
}

Then you can use a lot of generated code as:

println!("kind = {}", Foo::KIND); // impl k8s_openapi::Resource
let foos: Api<Foo> = Api::namespaced(client, "default");
let f = Foo::new("my-foo");
println!("foo: {:?}", f);
println!("crd: {}", serde_yaml::to_string(&Foo::crd())?);

There are a ton of kubebuilder-like instructions that you can annotate with here. See the crd_ prefixed examples for more.


The optional kube::runtime module contains sets of higher level abstractions on top of the Api and Resource types so that you don't have to do all the watch book-keeping yourself.


An Informer is a basic event watcher that presents a stream of api events on a resource with a given set of ListParams. Events are received as a raw WatchEvent type.

An Informer updates the last received resourceVersion internally on every event, before shipping the event to the app. If your controller restarts, you will receive one event for every active object at startup, before entering a normal watch.
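That resourceVersion bookkeeping can be sketched with simplified std-only types; this is an illustration of the idea, not the actual kube internals:

```rust
// Sketch of the bookkeeping an Informer performs: remember the last seen
// resourceVersion so a restarted watch can resume from that point.
struct VersionTracker {
    last_version: Option<String>,
}

impl VersionTracker {
    fn new() -> Self {
        VersionTracker { last_version: None }
    }

    // Called for every incoming event, before handing it to the app.
    fn observe(&mut self, resource_version: &str) {
        self.last_version = Some(resource_version.to_string());
    }

    // The version a fresh watch would resume from. "0" means: replay
    // current state first, which is why a restarted controller sees one
    // event per live object before entering a normal watch.
    fn resume_from(&self) -> String {
        self.last_version.clone().unwrap_or_else(|| "0".to_string())
    }
}

fn main() {
    let mut t = VersionTracker::new();
    assert_eq!(t.resume_from(), "0"); // cold start: replay current state
    t.observe("12345");
    t.observe("12346");
    assert_eq!(t.resume_from(), "12346"); // resume from the last seen event
    println!("resuming watch from resourceVersion {}", t.resume_from());
}
```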

let r = Resource::all::<Pod>();
let inf = Informer::new(client, r);

The main feature of Informer<K> is being able to subscribe to events while having a streaming .poll() open:

let mut pods = inf.poll().await?.boxed(); // starts a watch and returns a stream

while let Some(event) = pods.try_next().await? { // await next event
    handle(event).await?; // pass the WatchEvent to a handler
}

How you handle events is up to you: you could build your own state, use the Api, or just print events. In this example you get complete Pod objects:

async fn handle(event: WatchEvent<Pod>) -> anyhow::Result<()> {
    match event {
        WatchEvent::Added(o) => {
            let containers = o.spec.unwrap().containers.into_iter().map(|c| c.name).collect::<Vec<_>>();
            println!("Added Pod: {} (containers={:?})", Meta::name(&o), containers);
        }
        WatchEvent::Modified(o) => {
            let phase = o.status.unwrap().phase.unwrap();
            println!("Modified Pod: {} (phase={})", Meta::name(&o), phase);
        }
        WatchEvent::Deleted(o) => {
            println!("Deleted Pod: {}", Meta::name(&o));
        }
        WatchEvent::Error(e) => {
            println!("Error event: {:?}", e);
        }
    }
    Ok(())
}

The node_informer example has an example of using api calls from within event handlers.


A Reflector is a cache for K that keeps itself up to date. It does not expose events, but you can inspect the state map at any time.
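The essence of such a cache can be sketched with simplified std-only types; this is an illustration of the mechanism, not the kube implementation:

```rust
use std::collections::HashMap;

// Illustrative stand-in for a watched kubernetes object.
#[derive(Debug, Clone, PartialEq)]
struct Obj {
    name: String,
    data: String,
}

// Illustrative stand-ins for watch events.
enum Event {
    Added(Obj),
    Modified(Obj),
    Deleted(Obj),
}

// A reflector-style cache: a name -> object map kept up to date
// by applying every watch event, never exposing the events themselves.
struct Cache {
    state: HashMap<String, Obj>,
}

impl Cache {
    fn new() -> Self {
        Cache { state: HashMap::new() }
    }

    // Apply one watch event to the cached state.
    fn apply(&mut self, ev: Event) {
        match ev {
            Event::Added(o) | Event::Modified(o) => {
                self.state.insert(o.name.clone(), o);
            }
            Event::Deleted(o) => {
                self.state.remove(&o.name);
            }
        }
    }

    // Analogue of inspecting the state map: clone out the current view.
    fn state(&self) -> Vec<Obj> {
        self.state.values().cloned().collect()
    }
}

fn main() {
    let mut c = Cache::new();
    c.apply(Event::Added(Obj { name: "a".into(), data: "v1".into() }));
    c.apply(Event::Modified(Obj { name: "a".into(), data: "v2".into() }));
    c.apply(Event::Added(Obj { name: "b".into(), data: "v1".into() }));
    c.apply(Event::Deleted(Obj { name: "b".into(), data: "v1".into() }));
    assert_eq!(c.state().len(), 1); // only "a" survives
    assert_eq!(c.state()[0].data, "v2"); // with its latest version
}
```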

let r = Resource::namespaced::<Node>(&namespace);
let lp = ListParams::default();
let rf = Reflector::new(client, lp, r);

Then you should poll() the reflector, and call state() to get the current cached state:

rf.poll().await?; // watches + updates state

// Clone state and do something with it
rf.state().await?.into_iter().for_each(|node| {
    println!("Found Node {:?}", node);
});

Note that poll holds the future for 290s by default, but you can (and should) get .state() from another async context (see reflector examples for how to spawn an async task to do this). See also the self-driving issue.
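The shape of that pattern (one task driving the poll loop while another reads the shared state) can be sketched with plain threads instead of async tasks; this is an illustration only, with made-up types in place of Reflector:

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    // Shared state analogous to the reflector's internal cache.
    let state = Arc::new(Mutex::new(Vec::<String>::new()));

    // One task drives the long-running poll loop and updates the state;
    // this stands in for `loop { rf.poll().await?; }` in a spawned task.
    let writer = Arc::clone(&state);
    let poller = thread::spawn(move || {
        for i in 0..3 {
            writer.lock().unwrap().push(format!("node-{}", i));
            thread::sleep(Duration::from_millis(10));
        }
    });

    poller.join().unwrap();

    // Another context reads a snapshot at any time; this stands in for
    // calling `rf.state().await` from a separate async task.
    let snapshot = state.lock().unwrap().clone();
    assert_eq!(snapshot.len(), 3);
    println!("cached nodes: {:?}", snapshot);
}
```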

If you need the details of just a single object, you can use the more efficient Reflector::get and Reflector::get_within.


Examples that show some common flows. These all have logging for this library set to debug, and where possible pick up on the NAMESPACE evar.

# watch pod events
cargo run --example pod_informer
# watch event events
cargo run --example event_informer
# watch for broken nodes
cargo run --example node_informer

or for the reflectors:

cargo run --example pod_reflector
cargo run --example node_reflector
cargo run --example deployment_reflector
cargo run --example secret_reflector
cargo run --example configmap_reflector

for one based on a CRD, you need to create the CRD first:

kubectl apply -f examples/foo.yaml
cargo run --example crd_reflector

then you can kubectl apply -f crd-baz.yaml -n default, or kubectl delete -f crd-baz.yaml -n default, or kubectl edit foos baz -n default to verify that the events are being picked up.

For straight API use examples, try:

cargo run --example crd_api
cargo run --example job_api
cargo run --example log_stream
cargo run --example pod_api
NAMESPACE=dev cargo run --example log_stream -- kafka-manager-7d4f4bd8dc-f6c44


Kube has basic support (with caveats) for rustls as a replacement for the openssl dependency. To use this, turn off default features, and enable rustls-tls:

cargo run --example pod_informer --no-default-features --features=rustls-tls

or in Cargo.toml:

kube = { version = "0.31.0", default-features = false, features = ["rustls-tls"] }
k8s-openapi = { version = "0.7.1", default-features = false, features = ["v1_15"] }

This will pull in the variant of reqwest that also uses its rustls-tls feature.


Apache 2.0 licensed. See LICENSE for details.

