

kube-lease-manager

Ergonomic and reliable leader election using Kubernetes Lease API.


kube-lease-manager is a high-level helper that facilitates leader election using the Kubernetes Lease resource. It ensures that only a single lease manager instance holds the lock at any moment in time.

Some typical use cases:

  • automatic coordination of leader election between several instances (Pods) of a Kubernetes controller;
  • ensuring that only a single instance of a concurrent job runs at a time;
  • exclusive acquisition of a shared resource.

Features

  • LeaseManager is the central part of the crate. It is a convenient wrapper around a Kubernetes Lease resource that manages all aspects of the leader election process.
  • Provides two different high-level approaches to locking and releasing the lease: fully automated or partially manual lock control.
  • Uses the Server-Side Apply approach to update lease state, which facilitates conflict detection and resolution and makes concurrent locking impossible (see the sketch after this list).
  • Tolerates a configurable time skew between nodes of the Kubernetes cluster.
  • Behavioral parameters of the lease manager are easily and flexibly configurable.
  • Uses the well-known and highly regarded kube and Tokio crates to access the Kubernetes API and coordinate asynchronous task execution.
  • You don't need to touch the low-level Kubernetes API.
  • Uses the Tokio tracing crate to provide event logs.
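
The crate handles Server-Side Apply internally, so you never write this yourself, but a rough sketch of the mechanism it relies on, written against the kube crate directly, may clarify the point. The lease name and holder identity below are made up for illustration:

use k8s_openapi::api::coordination::v1::Lease;
use kube::{
    api::{Patch, PatchParams},
    Api, Client,
};
use serde_json::json;

async fn try_acquire(client: Client) -> kube::Result<Lease> {
    let leases: Api<Lease> = Api::default_namespaced(client);

    // The field manager identifies the applier; without `force`,
    // the API server rejects changes to fields owned by someone else.
    let params = PatchParams::apply("instance-a");

    let patch = json!({
        "apiVersion": "coordination.k8s.io/v1",
        "kind": "Lease",
        "metadata": { "name": "example-lease" },
        "spec": { "holderIdentity": "instance-a" }
    });

    // If another manager already owns `spec.holderIdentity`, this apply
    // fails with a 409 Conflict instead of silently overwriting it,
    // which is what rules out concurrent locking.
    leases.patch("example-lease", &params, &Patch::Apply(patch)).await
}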

Please visit the crate's documentation for details and more examples.


As mentioned above, kube-lease-manager provides two possible ways to manage the lease lock:

  1. Fully automated: you create a LeaseManager instance and run its watch() method. It returns a Tokio watch channel to observe lock state changes. Besides that, it spawns an unattended background task that permanently tries to lock the lease whenever it's free and publishes state changes to the channel. The task finishes when the channel is closed.
  2. Partially manual: you create a LeaseManager instance and use its changed() and release() methods to control the lock. changed() tries to lock the lease as soon as it becomes free and returns the actual lock state whenever it changes. Your responsibilities are:
    • to keep changed() running (it returns a Future) to ensure the lock is refreshed while it's in use;
    • to call release() when you no longer need the lock and want to free it for others.

The first approach ensures that the lease is locked (has a holder) at any moment in time. The second makes it possible to acquire and release the lock only when you need it, as the following sketch shows.
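
A minimal sketch of the partially manual approach, assuming changed() resolves to the current lock state as a bool wrapped in the crate's Result and release() frees the lease; the lease name and the simulated workload are placeholders:

use kube::Client;
use kube_lease_manager::{LeaseManagerBuilder, Result};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<()> {
    let client = Client::try_default().await?;
    let manager = LeaseManagerBuilder::new(client, "test-manual-lease")
        .build()
        .await?;

    // Block until we actually hold the lock.
    while !manager.changed().await? {}

    // Keep changed() running so the lease is refreshed while we work;
    // if it resolves first, the state changed and we lost the lock.
    tokio::select! {
        state = manager.changed() => {
            if let Ok(false) = state {
                eprintln!("Lost the lease lock, stopping the exclusive work");
            }
        }
        _ = tokio::time::sleep(Duration::from_secs(5)) => {
            // Placeholder for the actual exclusive work.
            println!("Exclusive work is done");
        }
    }

    // Free the lease for other contenders.
    manager.release().await?;
    Ok(())
}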

Example

The simplest example using the first locking approach:

use kube::Client;
use kube_lease_manager::{LeaseManagerBuilder, Result};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<()> {
    // Use the default Kube client
    let client = Client::try_default().await?;
    // Create the simplest LeaseManager with reasonable defaults using a convenient builder.
    // It uses a Lease resource called `test-watch-lease`.
    let manager = LeaseManagerBuilder::new(client, "test-watch-lease")
        .build()
        .await?;

    let (mut channel, task) = manager.watch().await;
    // Watch the channel for lock state changes
    tokio::select! {
        _ = channel.changed() => {
            let lock_state = *channel.borrow_and_update();

            if lock_state {
                // Do something useful as a leader
                println!("Got a lock!");
            }
        }
        _ = tokio::time::sleep(Duration::from_secs(10)) => {
            println!("Unable to get the lock within 10s");
        }
    }

    // Explicitly close the control channel
    drop(channel);
    // Wait for the manager to finish and get it back
    let _manager = tokio::join!(task).0.unwrap()?;

    Ok(())
}
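
Note how the watch channel doubles as a shutdown signal in this example: dropping the receiver closes the channel, which lets the background task finish, and joining the task hands the LeaseManager back for further use.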

Please visit the crate's documentation for more examples and usage details.

Credits

The author was inspired to write this piece of work by two other crates that provide similar functionality: kubert and kube-leader-election. Both of them are great, and thanks go to their authors. But each was missing something needed for one of my projects, so that was the reason to create this one.

License

This project is licensed under the MIT license.
