Linch
Linch is a high-performance async/sync channel library for Rust that prioritizes simplicity and efficiency. While performance is important, Linch's primary focus is providing a clean, straightforward implementation that's easy to understand and use, with seamless communication between sync and async contexts.
Etymology: The name "linch" is short for "lined channel" - like a concrete-lined channel that provides a smooth, reliable pathway for water flow. Similarly, Linch provides a smooth, reliable pathway for data flow between different parts of your application.
Key Features
- Mixed Sync/Async: Send and receive both synchronously and asynchronously
- Cross-Context Communication: Seamlessly bridge sync and async code without blocking
- Simple Design: Clean, minimal API that's easy to understand and integrate
- High Performance: Optimized for throughput and low latency
- Select Operations: Support for selecting over multiple channels
- Two Implementations: Choose between the main channel and schannel variants
- Thread Safe: Full support for multi-producer, multi-consumer scenarios
Quick Start
Add this to your Cargo.toml:
[dependencies]
linch = "0.6"
Basic Usage
use linch::bounded;

fn main() {
    // Create a bounded channel with capacity 10
    let (sender, receiver) = bounded(10);
    // Send synchronously
    sender.send(42).unwrap();
    // Receive synchronously
    let value = receiver.recv().unwrap();
    assert_eq!(value, 42);
}
Async Usage
use linch::bounded;
#[tokio::main]
async fn main() {
let (sender, receiver) = bounded(10);
// Send asynchronously
sender.send_async(42).await.unwrap();
// Receive asynchronously
let value = receiver.recv_async().await.unwrap();
assert_eq!(value, 42);
}
Two Channel Implementations
Linch provides two channel implementations to suit different use cases:
1. Main Channel (linch::channel)
The primary implementation focuses on simplicity and correctness with efficient async handling:
use linch::bounded;
let (tx, rx) = bounded(100);
When to use:
- โ General-purpose applications
- โ When you want proper async/await integration
- โ Applications with mixed sync/async workloads
- โ Bridging sync and async contexts (e.g., sync threads communicating with async tasks)
- โ When CPU efficiency matters during idle periods
- โ Production applications requiring reliability
Characteristics:
- Uses proper waker-based async coordination
- CPU-efficient when channels are idle
- Slightly more complex internals for better async integration
- Supports select operations
2. SChannel (linch::schannel)
A high-throughput implementation optimized for maximum performance:
use linch::schannel;
let (tx, rx) = schannel::with_capacity(100);
When to use:
- โก Benchmarking against other channel implementations
- โก When maximum throughput is critical
- โก Applications that keep channels busy continuously
Characteristics:
- Uses active polling for async operations
- Optimized for scenarios where channels rarely stay empty
- Higher CPU usage during async operations
- Maximum performance for high-throughput scenarios
Examples
Cross-Context Communication
One of Linch's key strengths is enabling seamless communication between sync and async contexts:
use linch::bounded;
use std::thread;
#[tokio::main]
async fn main() {
let (tx, rx) = bounded(10);
// Spawn a synchronous thread (e.g., CPU-intensive work)
let tx_clone = tx.clone();
thread::spawn(move || {
for i in 0..5 {
// Synchronous send from thread
tx_clone.send(format!("Processed item {}", i)).unwrap();
// Simulate work
thread::sleep(std::time::Duration::from_millis(100));
}
});
// Receive asynchronously in async context without blocking
for _ in 0..5 {
let value = rx.recv_async().await.unwrap();
println!("Async task received: {}", value);
// Can continue with other async work here
}
}
You can also send from async contexts to sync contexts:
use linch::bounded;
use std::thread;
#[tokio::main]
async fn main() {
let (tx, rx) = bounded(10);
// Spawn a sync thread that receives
let handle = thread::spawn(move || {
while let Ok(value) = rx.recv() {
println!("Sync thread received: {}", value);
}
});
// Send from async context
for i in 0..5 {
tx.send_async(format!("Async message {}", i)).await.unwrap();
// Can do other async work between sends
tokio::time::sleep(tokio::time::Duration::from_millis(50)).await;
}
drop(tx); // Close channel
handle.join().unwrap();
}
Select Operations
use linch::{bounded, Select};
#[tokio::main]
async fn main() {
let (tx1, rx1) = bounded(1);
let (tx2, rx2) = bounded(1);
// Send to both channels
tx1.send("Hello").unwrap();
tx2.send("World").unwrap();
// Select from multiple receivers
let mut sel = Select::new();
let idx1 = sel.recv(&rx1);
let idx2 = sel.recv(&rx2);
let op = sel.select();
match op.index() {
i if i == idx1 => {
let msg = op.recv(&rx1).unwrap();
println!("Received from channel 1: {}", msg);
},
i if i == idx2 => {
let msg = op.recv(&rx2).unwrap();
println!("Received from channel 2: {}", msg);
},
_ => unreachable!(),
}
}
Stream Integration
The schannel implementation supports conversion to async streams:
use linch::schannel;
use futures::StreamExt;
#[tokio::main]
async fn main() {
let (tx, rx) = schannel::with_capacity(10);
// Send some values
for i in 0..5 {
tx.send(i).unwrap();
}
drop(tx); // Close the channel
// Convert to stream and collect
let values: Vec<_> = rx.into_stream().collect().await;
println!("Collected: {:?}", values);
}
Performance Characteristics
Linch is designed with performance in mind, but simplicity comes first. The implementations are:
- Zero-copy where possible
- Optimized for both single and multi-threaded scenarios
- Benchmarked against other popular Rust channel libraries
Benchmarking
The crate includes comprehensive benchmarks comparing against other channel implementations:
# Run all benchmarks
cargo bench
# Run specific benchmark categories
cargo bench congestion
cargo bench realistic_workload
See the benchmark guide for detailed performance analysis.
Design Philosophy
Linch prioritizes:
- Simplicity: Clean, understandable code that's easy to maintain and debug
- Cross-Context Communication: Seamless bridging between sync and async code without complexity
- Correctness: Proper async/await integration and memory safety
- Performance: High throughput and low latency, but not at the expense of simplicity
- Flexibility: Support for both sync and async patterns in the same channel
The goal is to provide a channel implementation that's both easy to use and fast enough for most applications, with particular focus on making sync/async interoperability effortless.
API Reference
Main Channel Functions
- bounded(capacity) - Create a bounded channel
- Sender::send(item) - Send synchronously
- Sender::send_async(item) - Send asynchronously
- Receiver::recv() - Receive synchronously
- Receiver::recv_async() - Receive asynchronously
SChannel Functions
- schannel::with_capacity(capacity) - Create a high-throughput channel
- Sender::send_async(item) - Send asynchronously with active polling
- Receiver::recv_async() - Receive asynchronously with active polling
- Receiver::into_stream() - Convert to async stream
Select Operations
Both the main channel and schannel support select operations:
Main Channel Select:
- Select::new() - Create a new select operation
- Select::recv(receiver) - Add receiver to select
- Select::select() - Wait for any operation to complete
SChannel Select:
- schannel::Select::new() - Create a new select operation
- schannel::Select::recv(receiver) - Add receiver to select
- schannel::Select::send(sender) - Add sender to select
- schannel::Select::select() - Wait for any operation to complete
- schannel::Select::try_select() - Non-blocking select
- schannel::Select::select_timeout(timeout) - Select with timeout
Error Handling
Both implementations use standard Rust error types:
- SendError<T> - Returned when all receivers are dropped
- RecvError - Returned when all senders are dropped
- SendTimeoutError<T> - Returned on send timeout
- RecvTimeoutError - Returned on receive timeout
Contributing
Contributions are welcome! Please feel free to submit a Pull Request. The project values:
- Code clarity over micro-optimizations
- Comprehensive tests for all features
- Good documentation with examples
- Benchmark coverage for performance-critical changes
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- Built on top of the excellent crossbeam-channel library
- Inspired by Go's channels and Rust's async ecosystem
- The concrete-lined channel metaphor reflects the library's goal of providing reliable, efficient data pathways
- Thanks to the Rust community for feedback and contributions