kube-leader-election

Leader election implementations for Kubernetes workloads

Kubernetes Leader Election in Rust

This library provides simple leader election for Kubernetes workloads. Add it to your project's Cargo.toml:

[dependencies]
kube-leader-election = "0.31.0"

Example

Acquire leadership on a Kubernetes Lease called some-operator-lock in the default namespace, with a 15-second lease TTL, meaning the lock lapses unless the holder renews it within that window:

use kube_leader_election::{LeaseLock, LeaseLockParams};
use std::time::Duration;

let leadership = LeaseLock::new(
    kube::Client::try_default().await?,
    "default",
    LeaseLockParams {
        holder_id: "some-operator".into(),
        lease_name: "some-operator-lock".into(),
        lease_ttl: Duration::from_secs(15),
    },
);

// Run this in a background task every 5 seconds
// Share the result with the rest of your application; for example using Arc<AtomicBool>
// See https://github.com/hendrikmaus/kube-leader-election/blob/master/examples/shared-lease.rs
let lease = leadership.try_acquire_or_renew().await?;

log::info!("currently leading: {}", lease.acquired_lease);

Please refer to the examples for runnable usage demonstrations.
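
The comments in the snippet above point at sharing the lease state through an Arc<AtomicBool>. Below is a condensed sketch of that pattern, assuming a tokio runtime plus the log and anyhow crates; the loop structure and flag name are illustrative, and the linked shared-lease.rs example remains the canonical version.

use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Duration;

use kube_leader_election::{LeaseLock, LeaseLockParams};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let leadership = LeaseLock::new(
        kube::Client::try_default().await?,
        "default",
        LeaseLockParams {
            holder_id: "some-operator".into(),
            lease_name: "some-operator-lock".into(),
            lease_ttl: Duration::from_secs(15),
        },
    );

    // Shared flag the rest of the application can read cheaply.
    let is_leader = Arc::new(AtomicBool::new(false));
    let flag = is_leader.clone();

    tokio::spawn(async move {
        loop {
            // Renew every 5 seconds, well inside the 15-second TTL,
            // so the lease never lapses while this replica leads.
            match leadership.try_acquire_or_renew().await {
                Ok(lease) => flag.store(lease.acquired_lease, Ordering::Relaxed),
                Err(err) => log::error!("lease error: {}", err),
            }
            tokio::time::sleep(Duration::from_secs(5)).await;
        }
    });

    // Elsewhere in the application:
    if is_leader.load(Ordering::Relaxed) {
        // perform leader-only work here
    }

    Ok(())
}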

Features

Kubernetes Lease Locking

A very basic form of leader election without fencing: only use this if your application can tolerate multiple replicas acting as leader for a short period of time.

This implementation uses a Kubernetes Lease resource from the API group coordination.k8s.io, which is locked and continuously renewed by the leading replica. The leaseholder, as well as all candidates, use timestamps to determine whether a lease can be acquired; therefore, this implementation is sensitive to clock skew within a cluster.

Only use this implementation if you are aware of its downsides and your workload can tolerate them.
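
To make the clock-skew caveat concrete, the decision every candidate effectively makes is a timestamp comparison along these lines (a simplified illustration, not the crate's actual internals):

use std::time::{Duration, SystemTime};

/// A candidate treats the lease as free once the holder's last renewal
/// is older than the TTL. Because each replica evaluates this against
/// its own local clock, skewed clocks can make two replicas disagree
/// about whether the lease has expired.
fn lease_expired(last_renewal: SystemTime, lease_ttl: Duration) -> bool {
    match last_renewal.elapsed() {
        Ok(elapsed) => elapsed > lease_ttl,
        // The renewal timestamp lies in the future relative to the local
        // clock; that is itself clock skew, so treat the lease as held.
        Err(_) => false,
    }
}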

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

License

MIT
