lmrc-k3s
Part of the LMRC Stack - Infrastructure-as-Code toolkit for building production-ready Rust applications
A Rust library for managing K3s Kubernetes clusters via SSH.
Features
- Simple API: Easy-to-use interface for cluster management
- Declarative Reconciliation: Define desired cluster state and automatically add/remove workers
- SSH-based: Manages remote nodes via SSH connections using the ssh-manager crate
- Async/Await: Built on Tokio for efficient async operations
- Flexible Configuration: Support for custom K3s versions, tokens, and disabled components
- Private Network Support: Handles both public and private IP configurations for cloud deployments
- Cluster Status Tracking: Query real-time cluster status from kubectl
- Graceful Node Removal: Automatically drain nodes before removal
- Idempotent Operations: Safe to run multiple times, won't reinstall if already present
- Well Documented: Comprehensive API documentation with examples
Installation
Add this to your Cargo.toml:
[dependencies]
lmrc-k3s = "0.3"
tokio = { version = "1.0", features = ["full"] }
Quick Start
use lmrc_k3s::K3sManager;
#[tokio::main]
async fn main() -> Result<(), lmrc_k3s::K3sError> {
// Create a manager with default settings
let manager = K3sManager::builder()
.token("my-secure-token")
.build();
// Install K3s on master node (force=false won't reinstall if already present)
manager.install_master("192.168.1.10", None, false).await?;
// Join worker nodes
manager.join_worker("192.168.1.11", "192.168.1.10", None).await?;
manager.join_worker("192.168.1.12", "192.168.1.10", None).await?;
// Download kubeconfig
manager.download_kubeconfig("192.168.1.10", "kubeconfig").await?;
// Check cluster state
if let Some(state) = manager.get_cluster_state() {
println!("Cluster has {} ready workers", state.ready_workers_count());
}
Ok(())
}
Usage Examples
Basic Cluster Setup
use lmrc_k3s::K3sManager;
#[tokio::main]
async fn main() -> Result<(), lmrc_k3s::K3sError> {
let manager = K3sManager::builder()
.token("my-cluster-token")
.build();
// Install master (force=false won't reinstall if already present)
manager.install_master("192.168.1.10", None, false).await?;
// Join workers
manager.join_worker("192.168.1.11", "192.168.1.10", None).await?;
Ok(())
}
Custom K3s Version and Disabled Components
let manager = K3sManager::builder()
.version("v1.28.5+k3s1")
.token("my-secure-token")
.disable(vec![
"traefik".to_string(),
"servicelb".to_string(),
])
.build();
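The components passed to disable() presumably map onto the K3s installer's --disable flags. A minimal sketch of how such a list could be rendered into CLI arguments follows; this is an illustrative assumption about the crate's internals, not its actual code:

```rust
// Illustrative sketch: render a list of disabled components into
// K3s installer arguments (e.g. `k3s server --disable traefik`).
// An assumption about what lmrc-k3s does internally, not its code.
fn render_disable_flags(components: &[String]) -> String {
    components
        .iter()
        .map(|c| format!("--disable {}", c))
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let components = vec!["traefik".to_string(), "servicelb".to_string()];
    let flags = render_disable_flags(&components);
    println!("{}", flags); // --disable traefik --disable servicelb
}
```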
Private Network Setup (e.g., Hetzner Cloud)
// Install master with private IP for internal communication
manager.install_master(
"203.0.113.10", // Public IP for SSH access
Some("10.0.0.10"), // Private IP for cluster communication
false // Don't force reinstall
).await?;
// Join worker with private IP
manager.join_worker(
"203.0.113.11", // Worker public IP
"10.0.0.10", // Master private IP (for K3s API)
Some("10.0.0.11") // Worker private IP
).await?;
Check Cluster Status
// Check if K3s is installed
if manager.is_installed("192.168.1.10").await? {
println!("K3s is installed");
// Get node status
let nodes = manager.get_nodes("192.168.1.10").await?;
println!("Nodes: {}", nodes);
}
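Since get_nodes() returns the raw kubectl output as a string, you can post-process it yourself. The hypothetical helper below counts Ready nodes from that text; it is illustrative only (the crate's get_node_info_list() and get_cluster_state() already expose structured data):

```rust
// Hypothetical helper: count Ready nodes in raw `kubectl get nodes`
// output, as returned by get_nodes(). Illustrative only.
fn count_ready_nodes(raw: &str) -> usize {
    raw.lines()
        .skip(1) // skip the NAME/STATUS header row
        .filter(|line| line.split_whitespace().nth(1) == Some("Ready"))
        .count()
}

fn main() {
    let raw = "\
NAME      STATUS     ROLES                  AGE   VERSION
master    Ready      control-plane,master   10d   v1.28.5+k3s1
worker1   Ready      <none>                 10d   v1.28.5+k3s1
worker2   NotReady   <none>                 10d   v1.28.5+k3s1";
    assert_eq!(count_ready_nodes(raw), 2);
}
```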
Uninstall K3s
// Uninstall from master
manager.uninstall("192.168.1.10", true).await?;
// Uninstall from worker
manager.uninstall("192.168.1.11", false).await?;
Declarative Cluster Reconciliation
The library supports declarative cluster management - define your desired state and let lmrc-k3s handle the rest!
use lmrc_k3s::{DesiredClusterConfig, K3sManager, WorkerConfig};
// Define desired state
let mut desired = DesiredClusterConfig {
master_ip: "192.168.1.10".to_string(),
master_private_ip: None,
workers: vec![
WorkerConfig::new("192.168.1.11".to_string()),
WorkerConfig::new("192.168.1.12".to_string()),
WorkerConfig::new("192.168.1.13".to_string()),
],
k3s_version: "v1.28.5+k3s1".to_string(),
token: "my-token".to_string(),
disabled_components: vec!["traefik".to_string()],
};
// Reconcile - will add/remove workers to match desired state
manager.reconcile(&desired).await?;
// Later, scale up by adding to desired state
desired.workers.push(WorkerConfig::new("192.168.1.14".to_string()));
manager.reconcile(&desired).await?; // Adds worker .14
// Scale down by removing from desired state
desired.workers.retain(|w| w.ip != "192.168.1.13");
manager.reconcile(&desired).await?; // Removes worker .13 gracefully
Benefits:
- Idempotent - Safe to run multiple times
- Automatic diffing - Only makes necessary changes
- Graceful removal - Drains nodes before deletion
- State-aware - Won't reinstall if already present
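The "automatic diffing" step can be pictured as a set difference between current and desired worker IPs. The sketch below illustrates that idea; it is an assumption about the approach, not the crate's actual plan_reconciliation() implementation:

```rust
use std::collections::HashSet;

// Illustrative sketch of the diffing behind reconcile(): compare
// current vs desired worker IPs and derive the joins and removals
// needed. Not the crate's actual implementation.
fn diff_workers(current: &[&str], desired: &[&str]) -> (Vec<String>, Vec<String>) {
    let cur: HashSet<&str> = current.iter().copied().collect();
    let want: HashSet<&str> = desired.iter().copied().collect();
    let to_add: Vec<String> = want.difference(&cur).map(|s| s.to_string()).collect();
    let to_remove: Vec<String> = cur.difference(&want).map(|s| s.to_string()).collect();
    (to_add, to_remove)
}

fn main() {
    let (add, remove) = diff_workers(
        &["192.168.1.11", "192.168.1.13"],
        &["192.168.1.11", "192.168.1.14"],
    );
    assert_eq!(add, vec!["192.168.1.14"]);    // new worker to join
    assert_eq!(remove, vec!["192.168.1.13"]); // worker to drain and remove
}
```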
Examples
The repository includes several complete examples:
- basic_setup.rs - Simple cluster setup with one master and two workers
- private_network.rs - Cloud deployment with private networking
- cluster_status.rs - Check cluster status and node information
- declarative_reconciliation.rs - Declarative cluster management with automatic reconciliation
Run examples with:
cargo run --example basic_setup
cargo run --example declarative_reconciliation
SSH Configuration
This crate uses the ssh-manager crate for SSH connections. Ensure you have:
- SSH access configured to your target nodes
- SSH keys set up (typically in ~/.ssh/)
- Root access on target nodes (K3s installation requires root)
Authentication Methods
Public Key Authentication (Default):
// Uses ~/.ssh/id_rsa by default
let manager = K3sManager::builder()
.token("my-token")
.build();
// Or specify a custom key path
let manager = K3sManager::builder()
.token("my-token")
.ssh_key_path("/path/to/custom/id_rsa")
.build();
Password Authentication:
let manager = K3sManager::builder()
.token("my-token")
.ssh_username("root")
.ssh_password("your-password")
.build();
Custom SSH Username:
let manager = K3sManager::builder()
.token("my-token")
.ssh_username("admin") // Default is "root"
.build();
API Overview
K3sManager
The main struct for managing K3s clusters.
Methods
Cluster Setup:
- builder() - Create a new builder for K3sManager
- install_master(server_ip, private_ip, force) - Install K3s on a master node
- join_worker(worker_ip, master_ip, worker_private_ip) - Join a worker to the cluster
- download_kubeconfig(master_ip, output_path) - Download kubeconfig from master
- uninstall(server_ip, is_master) - Uninstall K3s from a node
Declarative Management:
- reconcile(desired) - Reconcile cluster to match desired configuration
- plan_reconciliation(desired) - Create a reconciliation plan without applying it
- apply_reconciliation(desired, plan) - Apply a reconciliation plan
- remove_worker(master_ip, worker_ip) - Gracefully remove a worker node
Status & Monitoring:
- is_installed(server_ip) - Check if K3s is installed on a node
- get_nodes(master_ip) - Get cluster node status
- get_node_info_list(master_ip) - Get a detailed node information list
- get_cluster_state() - Get the current cluster state with node information
- refresh_cluster_state(master_ip) - Refresh cluster state from the master node
K3sManagerBuilder
Builder pattern for creating K3sManager instances.
Methods
Cluster Configuration:
- version(version) - Set the K3s version (default: "v1.28.5+k3s1")
- token(token) - Set the cluster token (required)
- disable(components) - Set components to disable
SSH Configuration:
- ssh_username(username) - Set the SSH username (default: "root")
- ssh_key_path(path) - Set the SSH private key path (default: "~/.ssh/id_rsa")
- ssh_password(password) - Set the SSH password for password-based authentication
Build:
- build() - Build the K3sManager instance
Requirements
- Rust 1.70 or later
- SSH access to target nodes
- Root privileges on target nodes
- Target nodes running a Linux distribution supported by K3s
License
Part of the LMRC Stack project. Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add some amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
Acknowledgments
- Built on top of the ssh-manager crate for SSH functionality
- Uses K3s - Lightweight Kubernetes distribution