#kubernetes #election #leader #lease #workload #lock #leader-election

kube-leader-election

Leader election implementations for Kubernetes workloads

34 breaking releases

0.37.0 Oct 11, 2024
0.35.0 Sep 14, 2024
0.34.0 Jul 23, 2024
0.30.0 Mar 26, 2024
0.1.2 Jul 26, 2021

#267 in Concurrency


1,999 downloads per month

MIT license

23KB
274 lines

Kubernetes Leader Election in Rust


This library provides simple leader election for Kubernetes workloads.

[dependencies]
kube-leader-election = "0.37.0"

Example

Acquire leadership on a Kubernetes Lease called some-operator-lock in the default namespace, with a lease TTL of 15 seconds; the lock expires unless the holder renews it within that window:

use std::time::Duration;

use kube_leader_election::{LeaseLock, LeaseLockParams};

let leadership = LeaseLock::new(
    kube::Client::try_default().await?,
    "default",
    LeaseLockParams {
        holder_id: "some-operator".into(),
        lease_name: "some-operator-lock".into(),
        lease_ttl: Duration::from_secs(15),
    },
);

// Run this in a background task every 5 seconds
// Share the result with the rest of your application; for example using Arc<AtomicBool>
// See https://github.com/hendrikmaus/kube-leader-election/blob/master/examples/shared-lease.rs
let lease = leadership.try_acquire_or_renew().await?;

log::info!("currently leading: {}", lease.acquired_lease);
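
Expanded into a full program, the pattern from the comments above looks roughly like this; a sketch assuming a tokio runtime plus the anyhow and log crates, none of which this crate prescribes:

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;

use kube_leader_election::{LeaseLock, LeaseLockParams};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let leadership = LeaseLock::new(
        kube::Client::try_default().await?,
        "default",
        LeaseLockParams {
            holder_id: "some-operator".into(),
            lease_name: "some-operator-lock".into(),
            lease_ttl: Duration::from_secs(15),
        },
    );

    // Shared flag: the background task writes it, the rest of the app reads it.
    let is_leader = Arc::new(AtomicBool::new(false));
    let flag = is_leader.clone();

    // Try to acquire or renew the lease every 5 seconds, well within the 15s TTL.
    tokio::spawn(async move {
        loop {
            match leadership.try_acquire_or_renew().await {
                Ok(lease) => flag.store(lease.acquired_lease, Ordering::Relaxed),
                Err(err) => log::error!("failed to acquire/renew lease: {err}"),
            }
            tokio::time::sleep(Duration::from_secs(5)).await;
        }
    });

    // Elsewhere in the application: gate leader-only work on the flag.
    if is_leader.load(Ordering::Relaxed) {
        // ... do work only the leader should perform
    }

    Ok(())
}

The shared-lease.rs example linked above is the maintained, end-to-end version of this pattern.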

Please refer to the examples for runnable usage demonstrations.

Features

Kubernetes Lease Locking

A very basic form of leader election without fencing; only use it if your application can tolerate multiple replicas acting as leader for a short period of time.

This implementation uses a Kubernetes Lease resource from the API group coordination.k8s.io, which is locked and continuously renewed by the leading replica. The leaseholder, as well as all candidates, use timestamps to determine whether a lease can be acquired; the implementation is therefore vulnerable to clock skew within the cluster.
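
Because the lock is just a regular Lease object, you can inspect it directly. A sketch using the kube and k8s-openapi crates, reusing the names from the example above (this is not an API of this library):

use k8s_openapi::api::coordination::v1::Lease;
use kube::{Api, Client};

// Fetch the Lease backing the lock and print the fields candidates compare.
let client = Client::try_default().await?;
let leases: Api<Lease> = Api::namespaced(client, "default");
let lease = leases.get("some-operator-lock").await?;
let spec = lease.spec.unwrap_or_default();

println!("holder:        {:?}", spec.holder_identity);
println!("last renewed:  {:?}", spec.renew_time);
println!("ttl (seconds): {:?}", spec.lease_duration_seconds);

Equivalently, kubectl get lease some-operator-lock -o yaml shows the same holder and renewal timestamps.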

Only use this implementation if you are aware of its downsides, and your workload can tolerate them.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

License

MIT

Dependencies

~61MB
~1M SLoC