#kubernetes #election #leader #lease #workload #lock #leader-election

kube-leader-election

Leader election implementations for Kubernetes workloads

35 breaking releases

0.38.0 Nov 20, 2024
0.36.0 Sep 16, 2024
0.34.0 Jul 23, 2024
0.30.0 Mar 26, 2024
0.1.2 Jul 26, 2021

#269 in Concurrency


3,720 downloads per month

MIT license

23KB
274 lines

Kubernetes Leader Election in Rust


This library provides simple leader election for Kubernetes workloads.

[dependencies]
kube-leader-election = "0.38.0"

Example

Acquire leadership on a Kubernetes Lease called some-operator-lock in the default namespace, with a lease TTL of 15 seconds; the lock must be renewed before the TTL expires, or another replica can take it over:

use std::time::Duration;

use kube_leader_election::{LeaseLock, LeaseLockParams};

let leadership = LeaseLock::new(
    kube::Client::try_default().await?,
    "default",
    LeaseLockParams {
        holder_id: "some-operator".into(),
        lease_name: "some-operator-lock".into(),
        lease_ttl: Duration::from_secs(15),
    },
);

// Run this in a background task, for example every 5 seconds
// Share the result with the rest of your application, for example via an Arc<AtomicBool>
// See https://github.com/hendrikmaus/kube-leader-election/blob/master/examples/shared-lease.rs
let lease = leadership.try_acquire_or_renew().await?;

log::info!("currently leading: {}", lease.acquired_lease);

Please refer to the examples for runnable usage demonstrations.

Features

Kubernetes Lease Locking

A very basic form of leader election without fencing: only use this if your application can tolerate multiple replicas acting as leader for a short period of time.

This implementation uses a Kubernetes Lease resource from the API group coordination.k8s.io, which is locked and continuously renewed by the leading replica. The leaseholder and all candidates use timestamps to determine whether the lease can be acquired, so the implementation is sensitive to clock skew within a cluster.

Only use this implementation if you are aware of its downsides and your workload can tolerate them.
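
As a rough illustration of that timestamp check (simplified; the struct and function here are hypothetical sketches, not the crate's internal code):

use std::time::{Duration, SystemTime};

// Hypothetical, simplified view of the state stored in the Lease resource.
struct LeaseState {
    holder_id: String,
    renew_time: SystemTime, // last renewal recorded by the current holder
}

// A candidate may take the lease if it already holds it, or if the holder's
// last renewal is older than the TTL. `now` comes from the candidate's own
// clock, which is why clock skew between nodes can cause overlapping leaders.
fn can_acquire(lease: &LeaseState, candidate_id: &str, ttl: Duration, now: SystemTime) -> bool {
    lease.holder_id == candidate_id
        || now
            .duration_since(lease.renew_time)
            .map_or(false, |age| age > ttl)
}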

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

License

MIT

Dependencies

~61MB
~1M SLoC