Quantum Pulse
A lightweight, customizable profiling library for Rust applications with support for custom categories and percentile statistics.
Features
- True Zero-Cost Abstraction - Stub implementation compiles to nothing when disabled
- Derive Macro Support - Automatic implementation with #[derive(ProfileOp)]
- Percentile Statistics - Automatic calculation of p50, p95, p99, and p99.9 percentiles using HDR histograms
- Type-Safe Categories - Define your own operation categories with compile-time guarantees
- Multiple Output Formats - Console and CSV export options
- Pausable Timers - Exclude specific periods from measurements
- Clean API - Same interface whether profiling is enabled or disabled
- Async Support - Full support for async/await patterns
- No Conditionals Required - Use the same code for both production and development
Installation
Add this to your Cargo.toml:
[dependencies]
# For production builds (zero overhead)
quantum-pulse = { version = "0.1.5", default-features = false }
# For development builds (with profiling and macros)
quantum-pulse = { version = "0.1.5", features = ["full"] }
Or use feature flags in your application:
[dependencies]
quantum-pulse = { version = "0.1.5", default-features = false }
[features]
profiling = ["quantum-pulse/full"]
Quick Start
Recommended: Using the Derive Macro
The easiest and most maintainable way to use quantum-pulse is with the ProfileOp derive macro:
use quantum_pulse::{ProfileOp, profile, ProfileCollector};
// Simply derive ProfileOp and add category attributes
#[derive(Debug, ProfileOp)]
enum AppOperation {
    #[category(name = "Database", description = "Database operations")]
    QueryUser,
    #[category(name = "Database")] // Reuses the description
    UpdateUser,
    #[category(name = "Network", description = "External API calls")]
    HttpRequest,
    #[category(name = "Cache", description = "Cache operations")]
    ReadCache,
    ComputeHash, // No category attribute - uses variant name as category
}
fn main() {
    // Profile operations with zero boilerplate
    let user = profile!(AppOperation::QueryUser, {
        fetch_user_from_database()
    });
    let result = profile!(AppOperation::HttpRequest, {
        call_external_api()
    });
    // Generate and display report
    let report = ProfileCollector::get_summary();
    println!("Total operations: {}", report.total_operations);
}
Category Management
The ProfileOp macro intelligently manages categories:
#[derive(Debug, ProfileOp)]
enum DatabaseOps {
    #[category(name = "Query", description = "Read operations")]
    SelectUsers,
    #[category(name = "Query")] // Automatically reuses "Read operations" description
    SelectPosts,
    #[category(name = "Mutation", description = "Write operations")]
    InsertUser,
    #[category(name = "Mutation")] // Automatically reuses "Write operations" description
    UpdateUser,
    DeleteUser, // Uses "DeleteUser" as both name and description
}
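As a rough usage sketch (assuming the profile! macro and ReportBuilder shown elsewhere in this README; load_users and insert_user are hypothetical helpers), operations that share a category name are aggregated together when the report is grouped by category:
use quantum_pulse::{profile, ReportBuilder};
fn profile_database_layer() {
    // Recorded under the "Query" category
    let users = profile!(DatabaseOps::SelectUsers, { load_users() });
    // Recorded under the "Mutation" category
    profile!(DatabaseOps::InsertUser, { insert_user(&users) });
    // Grouping by category shows the Query and Mutation totals side by side
    let report = ReportBuilder::new().group_by_category(true).build();
    println!("{}", report.to_string());
}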
Alternative: Manual Implementation
For advanced use cases or when you prefer explicit control:
use quantum_pulse::{Operation, Category, profile};
#[derive(Debug)]
enum AppOperation {
    DatabaseQuery,
    NetworkRequest,
}
// Implement Operation trait manually
impl Operation for AppOperation {
    fn get_category(&self) -> &dyn Category {
        match self {
            AppOperation::DatabaseQuery => &DatabaseCategory,
            AppOperation::NetworkRequest => &NetworkCategory,
        }
    }
}
// Define custom categories
struct DatabaseCategory;
impl Category for DatabaseCategory {
    fn get_name(&self) -> &str { "Database" }
    fn get_description(&self) -> &str { "Database operations" }
}
struct NetworkCategory;
impl Category for NetworkCategory {
    fn get_name(&self) -> &str { "Network" }
    fn get_description(&self) -> &str { "Network operations" }
}
Advanced Features
Async Support
use quantum_pulse::{ProfileOp, profile_async};
#[derive(Debug, ProfileOp)]
enum AsyncOperation {
    #[category(name = "IO", description = "I/O operations")]
    FileRead,
    #[category(name = "Network", description = "Network operations")]
    HttpRequest,
    #[category(name = "Database")]
    DatabaseQuery,
}
async fn fetch_data() -> Result<Data, Error> {
    // Profile async operations seamlessly
    let data = profile_async!(AsyncOperation::HttpRequest, async {
        client.get("https://api.example.com/data").await
    }).await;
    profile_async!(AsyncOperation::DatabaseQuery, async {
        process_data(data).await
    }).await
}
Complex Enum Variants
The ProfileOp macro supports all enum variant types:
#[derive(Debug, ProfileOp)]
enum ComplexOperation {
    // Unit variant
    #[category(name = "Simple")]
    Basic,
    // Tuple variant with data
    #[category(name = "Database", description = "Database operations")]
    Query(String),
    // Struct variant with named fields
    #[category(name = "Cache", description = "Cache operations")]
    CacheOp { key: String, ttl: u64 },
}
fn example() {
    let op1 = ComplexOperation::Basic;
    let op2 = ComplexOperation::Query("SELECT * FROM users".to_string());
    let op3 = ComplexOperation::CacheOp {
        key: "user:123".to_string(),
        ttl: 3600
    };
    // All variants work seamlessly with profiling
    profile!(op1, { /* work */ });
    profile!(op2, { /* work */ });
    profile!(op3, { /* work */ });
}
Report Generation
use quantum_pulse::{ProfileCollector, ReportBuilder, TimeFormat};
// Quick summary
let summary = ProfileCollector::get_summary();
println!("Total operations: {}", summary.total_operations);
println!("Total time: {} Β΅s", summary.total_time_micros);
// Detailed report with configuration
let report = ReportBuilder::new()
.include_percentiles(true)
.group_by_category(true)
.time_format(TimeFormat::Milliseconds)
.build();
println!("{}", report.to_string());
// Export to CSV
let stats = ProfileCollector::get_all_stats();
let mut csv = String::from("Operation,Count,Mean(Β΅s)\n");
for (name, stat) in stats {
csv.push_str(&format!("{},{},{:.2}\n",
name, stat.count, stat.mean().as_micros()));
}
std::fs::write("profile.csv", csv).unwrap();
Pausable Timers
For operations where you need to exclude certain periods:
use quantum_pulse::{PausableTimer, ProfileOp};
#[derive(Debug, ProfileOp)]
enum Operation {
    #[category(name = "Processing")]
    DataProcessing,
}
fn process_with_io() {
    let mut timer = PausableTimer::new(&Operation::DataProcessing);
    // Processing phase 1 (measured)
    process_part_1();
    timer.pause();
    // I/O operation (not measured)
    let data = read_from_disk();
    timer.resume();
    // Processing phase 2 (measured)
    process_part_2(data);
    // Timer automatically records on drop
}
Zero-Cost Abstractions
Quantum Pulse implements true zero-cost abstractions through compile-time feature selection:
How It Works
#[derive(Debug, ProfileOp)]
enum AppOp {
    #[category(name = "Critical")]
    ImportantWork,
}
// Your code always looks the same
let result = profile!(AppOp::ImportantWork, {
    expensive_operation()
});
// With default features (stub mode):
// - profile! macro expands to just the code block
// - No timing, no allocations, no overhead
// - Compiler optimizes it to: let result = expensive_operation();
// With "full" feature enabled:
// - Full profiling with timing and statistics
// - HDR histograms for accurate percentiles
// - Comprehensive reporting
Performance Characteristics
| Configuration | Overhead | Use Case |
|---|---|---|
| Stub (default) | Zero - methods are empty and inlined away | Production |
| Full | ~200-300ns per operation | Development, debugging |
Pause/Unpause Profiling
Control profiling dynamically with pause!() and unpause!() macros:
use quantum_pulse::{profile, pause, unpause, ProfileOp};
#[derive(Debug, ProfileOp)]
enum AppOperation {
    #[category(name = "Core")]
    CriticalWork,
    #[category(name = "Debug")]
    DiagnosticWork,
}
// Normal profiling - operations are recorded
profile!(AppOperation::CriticalWork, {
    perform_important_work();
});
// Pause all profiling
pause!();
// This won't be recorded
profile!(AppOperation::DiagnosticWork, {
    debug_operations();
});
// Resume profiling
unpause!();
// This will be recorded again
profile!(AppOperation::CriticalWork, {
    more_important_work();
});
Stack-Based Pause/Unpause
For fine-grained control, pause only timers currently on the call stack with pause_stack!() and unpause_stack!():
use quantum_pulse::{profile, pause_stack, unpause_stack, ProfileOp};
#[derive(Debug, ProfileOp)]
enum AppOperation {
    #[category(name = "Processing")]
    DataProcessing,
    #[category(name = "IO")]
    DatabaseQuery,
}
// Profile data processing, but exclude I/O wait time
profile!(AppOperation::DataProcessing, {
    // Initial processing (measured)
    process_data();
    // Pause only the DataProcessing timer
    pause_stack!();
    // Database query (not counted in DataProcessing time)
    // But the query itself is still profiled separately
    profile!(AppOperation::DatabaseQuery, {
        query_database();
    });
    // Resume the DataProcessing timer
    unpause_stack!();
    // More processing (measured)
    finalize_data();
});
Key differences:
- pause!() / unpause!() - Affects all profiling globally
- pause_stack!() / unpause_stack!() - Affects only timers currently on the call stack
Use Cases
Global Pause/Unpause:
- Exclude initialization/cleanup from performance measurements
- Focus profiling on specific sections during debugging
- Reduce overhead during non-critical operations
- Selective measurement in loops or batch operations
Stack-Based Pause/Unpause:
- Exclude I/O wait time from algorithm profiling
- Measure only CPU-bound work in mixed operations
- Exclude network latency from processing metrics
- Fine-grained control without affecting concurrent operations
- Conditional profiling based on runtime conditions (see the sketch below)
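For instance, here is a minimal sketch of runtime-conditional profiling built on the global pause!()/unpause!() macros above (the APP_PROFILE environment variable is a hypothetical switch):
use quantum_pulse::{pause, unpause};
fn init_profiling() {
    // Keep the profiler paused unless explicitly requested at runtime
    if std::env::var_os("APP_PROFILE").is_some() {
        unpause!();
    } else {
        pause!();
    }
}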
Migration Guide
From String-based Profiling
If you're currently using string-based operation names, migrate to type-safe enums:
// Before: String-based (error-prone, no compile-time checks)
profile!("database_query", {
query_database()
});
// After: Type-safe with ProfileOp (recommended)
#[derive(Debug, ProfileOp)]
enum DbOp {
#[category(name = "Database")]
Query,
}
profile!(DbOp::Query, {
query_database()
});
From Manual Implementation
If you have existing manual Operation implementations, you can gradually migrate:
// Before: Manual implementation
#[derive(Debug)]
enum OldOp {
    Task1,
    Task2,
}
impl Operation for OldOp {
    fn get_category(&self) -> &dyn Category {
        // Manual category logic
    }
}
// After: Simply add ProfileOp derive
#[derive(Debug, ProfileOp)]
enum NewOp {
    #[category(name = "Tasks", description = "Application tasks")]
    Task1,
    #[category(name = "Tasks")]
    Task2,
}
Examples
Check out the examples/ directory for comprehensive examples:
- macro_derive.rs - Recommended: Using the ProfileOp derive macro
- basic.rs - Simple profiling example
- custom_categories.rs - Manual category implementation
- async_profiling.rs - Profiling async code
- trading_system.rs - Real-world trading system example
Run examples with:
# Recommended: See the derive macro in action
cargo run --example macro_derive --features full
# Other examples
cargo run --example basic --features full
cargo run --example async_profiling --features full
cargo run --example trading_system --features full
Feature Flags
- full: Enable full profiling functionality with HDR histograms and derive macros
- macros: Enable only the derive macros (included in full); see the example below
- Default (no features): Stub implementation with zero overhead
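For example, a consumer that wants the derive macro but keeps the zero-overhead stub runtime might depend on the crate like this (a sketch; pin whichever version you actually use):
[dependencies]
quantum-pulse = { version = "0.1.5", default-features = false, features = ["macros"] }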
Best Practices
- Use ProfileOp Derive: Start with the derive macro for cleaner, more maintainable code
- Organize by Category: Group related operations under the same category name
- Descriptive Names: Use clear, descriptive names for both categories and operations
- Profile Boundaries: Profile at meaningful boundaries (API calls, database queries, etc.) - see the sketch after this list
- Avoid Over-Profiling: Don't profile every function - focus on potential bottlenecks
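A brief sketch of boundary-level profiling following the practices above (handle_request, load_profile, call_recommendations_api, and render are hypothetical application functions):
use quantum_pulse::{profile, ProfileOp};
#[derive(Debug, ProfileOp)]
enum RequestOp {
    #[category(name = "Database")]
    LoadProfile,
    #[category(name = "Network")]
    Recommendations,
}
fn handle_request(user_id: u64) {
    // Profile the two expensive boundaries, not every helper in between
    let profile_data = profile!(RequestOp::LoadProfile, { load_profile(user_id) });
    let recs = profile!(RequestOp::Recommendations, { call_recommendations_api(user_id) });
    render(profile_data, recs);
}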
Performance Considerations
The library is designed with performance in mind:
- True Zero-Cost: Stub implementations are completely removed by the compiler
- Efficient Percentiles: HDR histograms record values in constant time and keep percentile queries fast and accurate
- Lock-Free Operations: Using atomic operations and thread-local storage
- Smart Inlining: Critical paths marked with #[inline(always)] in stub mode
- No Runtime Checks: Feature selection happens at compile time
Benchmarks
Run benchmarks with:
cargo bench
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under either of
- Apache License, Version 2.0, (LICENSE-APACHE or https://apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or https://opensource.org/licenses/MIT)
at your option.
Acknowledgments
This library was designed for high-performance applications requiring microsecond-precision profiling with minimal overhead and maximum ergonomics.