The world's first agentic system monitoring utility and API.
Built in Rust for safety and performance, featuring revolutionary hardware interfaces for AI.
Cross-platform agentic system monitoring
Silicon Monitor is a powerful, cross-platform hardware monitoring utility designed primarily for AI agents and interactive interfaces. It provides deep insights into CPUs, GPUs, memory, disks, motherboards, and network interfaces across Windows, Linux, and macOS.
Primary Usage Modes
| Mode | Command | Description |
|---|---|---|
| AI Agent | `amon` / `simon ai` | Natural language queries, MCP server for Claude, tool manifests for LLMs |
| CLI | `simon <component>` | Command-line monitoring with JSON output for scripting |
| TUI | `simon tui` | Interactive terminal dashboard with real-time graphs and selectable themes |
| GUI | `simon gui` | Native desktop application with egui |
Overview
Silicon Monitor provides comprehensive hardware monitoring:
- **GPU Monitoring**: NVIDIA, AMD, and Intel GPUs with utilization, memory, temperature, power, and process tracking
- **CPU Monitoring**: Per-core metrics, frequencies, temperatures, and hybrid architecture support
- **Memory Monitoring**: RAM, swap, bandwidth, and latency tracking
- **Disk Monitoring**: I/O operations, throughput, queue depth, and SMART data
- **Motherboard Monitoring**: System information, BIOS version, and hardware sensors
- **Process Monitoring**: System-wide process tracking with GPU attribution
- **Network Monitoring**: Interface statistics, bandwidth rates, and network health
- **Network Tools**: nmap-style port scanning, ping, traceroute, DNS lookup
- **Audio Monitoring**: Audio device enumeration, volume levels, and mute states
- **Bluetooth Monitoring**: Adapter and device enumeration, battery levels, connection states
- **Display Monitoring**: Connected displays, resolutions, refresh rates, and scaling
- **USB Monitoring**: USB device enumeration, device classes, and connection topology
Interfaces:
- **AI Agent**: Natural language queries, MCP server for Claude Desktop, tool manifests for all major LLMs
- **CLI**: Structured command-line output with JSON support for scripting and automation
- **TUI**: Terminal interface with real-time graphs, selectable themes, and integrated AI chat
- **GUI**: Native desktop application with multiple themes and visualizations
Features
Multi-Vendor GPU Support
- NVIDIA: Full NVML integration for all CUDA-capable GPUs (GeForce, Quadro, Tesla, Jetson)
- AMD: ROCm/sysfs support for RDNA/CDNA architectures (Radeon, Instinct)
- Intel: i915/xe driver support for Arc, Iris Xe, and Data Center GPUs
- Unified API: Single interface for all GPU vendors with vendor-specific capabilities
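The unified API centers on a device abstraction implemented per vendor (simonlib's actual trait is shown later as `Device`). As an illustrative sketch with hypothetical names, the pattern looks like:

```rust
// Minimal sketch of a vendor-agnostic GPU abstraction. All names here
// are hypothetical; simonlib's real trait is `Device` in `simonlib::gpu`.
trait GpuBackend {
    fn vendor(&self) -> &'static str;
    fn utilization_percent(&self) -> f32;
}

struct NvidiaGpu; // would wrap NVML handles
struct AmdGpu;    // would read sysfs/DRM files

impl GpuBackend for NvidiaGpu {
    fn vendor(&self) -> &'static str { "NVIDIA" }
    fn utilization_percent(&self) -> f32 { 42.0 } // stub value
}

impl GpuBackend for AmdGpu {
    fn vendor(&self) -> &'static str { "AMD" }
    fn utilization_percent(&self) -> f32 { 17.0 } // stub value
}

// Callers iterate one collection regardless of vendor.
fn detect() -> Vec<Box<dyn GpuBackend>> {
    let mut gpus: Vec<Box<dyn GpuBackend>> = Vec::new();
    gpus.push(Box::new(NvidiaGpu));
    gpus.push(Box::new(AmdGpu));
    gpus
}

fn main() {
    for gpu in detect() {
        println!("{}: {:.0}%", gpu.vendor(), gpu.utilization_percent());
    }
}
```

The design choice is the usual one: trait objects hide vendor-specific capabilities behind one interface, so a TUI or agent can treat all GPUs uniformly.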
Comprehensive Metrics
GPU Metrics:
- Utilization (graphics, compute, video engines)
- Memory usage (used, free, total, bandwidth)
- Clock frequencies (graphics, memory, streaming multiprocessor)
- Temperature sensors (GPU, memory, hotspot)
- Power consumption (current, average, limit, TDP)
- PCIe bandwidth and generation
- Per-process GPU memory attribution
CPU Metrics:
- Per-core utilization and frequency
- Temperature sensors
- Cache sizes (L1, L2, L3)
- Thread topology
- Power states
Memory Metrics:
- Total, used, free, available
- Swap usage
- Page faults
- Memory bandwidth
Disk Metrics:
- Read/write bytes and operations
- Queue depth and latency
- SMART attributes
- Device information
Network Metrics:
- Per-interface RX/TX statistics
- Bandwidth rates
- Packet errors and drops
- Link speed and state
Process Monitoring with GPU Attribution
Silicon Monitor uniquely correlates system processes with GPU usage across all vendors:
```rust
use simonlib::{ProcessMonitor, GpuCollection};

let gpu_collection = GpuCollection::auto_detect()?;
let mut monitor = ProcessMonitor::with_gpus(gpu_collection)?;

// Get processes sorted by GPU memory usage
let gpu_procs = monitor.processes_by_gpu_memory()?;
for proc in gpu_procs.iter().take(10) {
    println!("{} (PID {}): {} MB GPU memory",
        proc.name, proc.pid, proc.total_gpu_memory_bytes / 1024 / 1024);
}
```
AI Agent for System Analysis
Ask questions about your system in natural language:
```rust
use simonlib::agent::{Agent, AgentConfig, ModelSize};

let config = AgentConfig::new(ModelSize::Medium); // 500M parameters
let mut agent = Agent::new(config)?;

let response = agent.ask("What's my GPU temperature?", &monitor)?;
println!("{}", response.response);
// "GPU temperature is 65°C. ✅ Temperature is within safe range."

let response = agent.ask("How much power am I using?", &monitor)?;
// "Current GPU power consumption: 280.5W"
```
Features:
- Natural language queries (state, predictions, energy, recommendations)
- Multiple model sizes (100M, 500M, 1B parameters)
- Zero latency impact on monitoring (non-blocking)
- Response caching for instant repeated queries
- See AI_AGENT.md for details
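As a rough sketch of the response-caching idea (hypothetical structure, not the agent's actual implementation), repeated queries can be memoized by their normalized text:

```rust
use std::collections::HashMap;

// Hypothetical sketch: answers are memoized by normalized query text,
// so a repeated question returns instantly without re-running the model.
struct ResponseCache {
    entries: HashMap<String, String>,
}

impl ResponseCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    fn normalize(query: &str) -> String {
        query.trim().to_lowercase()
    }

    // Returns the cached answer, or computes and stores one.
    fn get_or_insert_with(&mut self, query: &str, compute: impl FnOnce() -> String) -> String {
        let key = Self::normalize(query);
        self.entries.entry(key).or_insert_with(compute).clone()
    }
}

fn main() {
    let mut cache = ResponseCache::new();
    let a = cache.get_or_insert_with("GPU temp?", || "65C".to_string());
    // Second call hits the cache; the closure is never run.
    let b = cache.get_or_insert_with("  gpu temp? ", || unreachable!());
    println!("{a} {b}");
}
```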
AI Agent Discoverability & Hardware Ontology
Silicon Monitor is designed from the ground up to be discoverable by AI agents. It provides a structured hardware ontology that allows agents to understand what monitoring capabilities are available and how to query them.
Hardware Ontology
The library exposes a machine-readable ontology describing hardware domains, properties, and their data types:
```rust
use simonlib::ai_api::HardwareOntology;

let ontology = HardwareOntology::complete();
println!("{}", serde_json::to_string_pretty(&ontology)?);
```

```json
{
  "version": "1.0.0",
  "name": "Silicon Monitor Hardware Ontology",
  "domains": [
    {
      "id": "gpu",
      "name": "Graphics Processing Unit",
      "properties": [
        { "id": "utilization", "data_type": "percentage", "unit": "%" },
        { "id": "temperature", "data_type": "temperature", "unit": "C" },
        { "id": "power_draw", "data_type": "power", "unit": "W" }
      ]
    },
    { "id": "cpu", "name": "Central Processing Unit" },
    { "id": "memory", "name": "System Memory" },
    { "id": "disk", "name": "Storage Devices" },
    { "id": "network", "name": "Network Interfaces" },
    { "id": "process", "name": "System Processes" }
  ]
}
```
Tool Discovery
AI agents can enumerate all available monitoring tools with their schemas:
```rust
use simonlib::ai_api::{AiDataApi, ToolDefinition};

let api = AiDataApi::new()?;
let tools: Vec<ToolDefinition> = api.list_tools();

for tool in &tools {
    println!("{}: {}", tool.name, tool.description);
    // get_gpu_status: Get current status of all GPUs...
    // get_cpu_usage: Get CPU utilization per-core...
}
```
MCP Server (Model Context Protocol)
For seamless integration with Claude Desktop and other MCP-compatible AI systems:
```bash
simon ai server   # or: amon server
```
Configure in Claude Desktop's `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "silicon-monitor": {
      "command": "simon",
      "args": ["ai", "server"]
    }
  }
}
```
Multi-Format Tool Export
Export tool definitions in formats optimized for different AI platforms:
```bash
amon manifest --format openai     # OpenAI function calling format
amon manifest --format anthropic  # Claude tool use format
amon manifest --format mcp        # Model Context Protocol
```
This enables AI agents to:
- Discover available hardware monitoring capabilities at runtime
- Understand the data types and units for each metric
- Query system state using structured tool calls
- Reason about hardware relationships through the ontology
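A minimal sketch of what dispatching such a structured tool call might look like on the host side. The tool names follow the listing above, but the `dispatch` function and its JSON payloads are illustrative, not simonlib's API:

```rust
// Hypothetical sketch of tool-call dispatch, as an MCP or
// function-calling host might route a request by tool name.
fn dispatch(tool: &str) -> Result<String, String> {
    match tool {
        "get_gpu_status" => Ok(r#"{"utilization": 45, "unit": "%"}"#.to_string()),
        "get_cpu_usage"  => Ok(r#"{"cores": [12.0, 8.5], "unit": "%"}"#.to_string()),
        other => Err(format!("unknown tool: {other}")),
    }
}

fn main() {
    match dispatch("get_gpu_status") {
        Ok(json) => println!("{json}"),
        Err(e) => eprintln!("{e}"),
    }
}
```

In practice the host would validate the call's arguments against the exported schema before dispatching.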
Installation
From Source
```bash
# Clone the repository
git clone https://github.com/nervosys/SiliconMonitor
cd SiliconMonitor

# Build with all GPU vendor support
cargo build --release --features full

# Or build for specific vendors
cargo build --release --features nvidia      # NVIDIA only
cargo build --release --features amd         # AMD only
cargo build --release --features intel       # Intel only
cargo build --release --features nvidia,amd  # NVIDIA + AMD
```
Binary Aliases
The CLI provides two binary names optimized for different use cases:
#### `simon` - Full Silicon Monitor
Complete hardware monitoring with subcommands for specific metrics:
```bash
# Launch TUI (default)
simon

# Monitor specific components
simon cpu
simon gpu
simon memory
simon processes

# Peripheral hardware
simon audio      # List audio devices and volume
simon bluetooth  # List Bluetooth adapters and devices
simon displays   # Show connected displays
simon usb        # List USB devices

# Watch mode: continuously monitor devices (press 'q' to quit)
simon cli audio --watch       # Watch audio devices
simon cli bluetooth --watch   # Watch Bluetooth devices
simon cli display --watch     # Watch connected displays
simon cli usb --watch         # Watch USB devices
simon cli usb --watch -i 2.0  # Watch USB with 2s refresh interval

# AI agent subcommands
simon ai query "What's my GPU temperature?"  # Ask a question
simon ai query                               # Interactive AI mode
simon ai manifest --format openai            # Export for OpenAI/GPT
simon ai manifest --format anthropic         # Export for Claude
simon ai manifest --format gemini            # Export for Gemini
simon ai manifest --format grok              # Export for xAI Grok
simon ai manifest --format llama             # Export for Meta Llama
simon ai manifest --format mistral           # Export for Mistral
simon ai manifest --format deepseek          # Export for DeepSeek
simon ai server                              # Start MCP server for Claude Desktop
```
#### `amon` - AI Monitor
Dedicated AI agent interface for natural language system queries. This is syntactic sugar for `simon ai`:
```bash
# Query subcommand (default if no subcommand)
amon query "What's my GPU temperature?"  # Ask a question
amon query                               # Interactive AI mode
amon                                     # Also starts interactive mode

# Export manifests for AI agents
amon manifest --format openai     # Export for OpenAI/GPT-4o/o1/o3
amon manifest --format anthropic  # Export for Claude 4
amon manifest --format gemini     # Export for Gemini 2.0
amon manifest --format grok       # Export for xAI Grok 3
amon manifest --format llama      # Export for Meta Llama 4
amon manifest --format mistral    # Export for Mistral Large
amon manifest --format deepseek   # Export for DeepSeek-R1/V3
amon manifest --format mcp        # Export as MCP tools
amon manifest -o tools.json       # Save to file

# Start MCP server for Claude Desktop integration
amon server

# List available AI backends
amon --list-backends
```

Both binaries provide the same underlying functionality - use **`simon`** for traditional monitoring commands or **`amon`** for AI-focused interactions!

```bash
# Build both binaries
cargo build --release --features cli
```
Feature Flags
- `nvidia` - NVIDIA GPU support via NVML
- `amd` - AMD GPU support via sysfs/DRM
- `intel` - Intel GPU support via i915/xe drivers
- `cli` - Command-line interface and TUI
- `full` - All features enabled
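In a downstream project, the same flags would be selected through Cargo features. A sketch of a `Cargo.toml` entry; the crate name `simonlib` is inferred from the examples and the version is a placeholder, so check crates.io for the published name and current release:

```toml
[dependencies]
# Hypothetical entry; verify the crate name and current version on crates.io.
simonlib = { version = "1", default-features = false, features = ["nvidia", "amd"] }
```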
Quick Start
GPU Monitoring
```rust
use simonlib::gpu::GpuCollection;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Auto-detect all GPUs (NVIDIA, AMD, Intel)
    let gpus = GpuCollection::auto_detect()?;

    // Get snapshot of all GPUs
    for (idx, info) in gpus.snapshot_all()?.iter().enumerate() {
        println!("GPU {}: {}", idx, info.static_info.name);
        println!("  Vendor: {:?}", info.static_info.vendor);
        println!("  Utilization: {}%", info.dynamic_info.utilization.graphics);
        println!("  Memory: {} / {} MB",
            info.dynamic_info.memory.used / 1024 / 1024,
            info.static_info.memory_total / 1024 / 1024);
        println!("  Temperature: {}°C", info.dynamic_info.temperature.gpu);
        println!("  Power: {:.1}W", info.dynamic_info.power.current / 1000.0);
    }
    Ok(())
}
```
CPU Monitoring
```rust
use simonlib::cpu::CpuMonitor;

let mut monitor = CpuMonitor::new()?;
let info = monitor.update()?;

println!("CPU: {}", info.name);
for (idx, core) in info.cores.iter().enumerate() {
    println!("  Core {}: {:.1}% @ {} MHz",
        idx, core.utilization, core.frequency_mhz);
}
```
Memory Monitoring
```rust
use simonlib::memory::MemoryMonitor;

let mut monitor = MemoryMonitor::new()?;
let info = monitor.update()?;

println!("Memory: {} / {} MB ({:.1}% used)",
    info.used_mb(), info.total_mb(), info.used_percent());
println!("Swap: {} / {} MB",
    info.swap_used_mb(), info.swap_total_mb());
```
Network Monitoring
```rust
use simonlib::network_monitor::NetworkMonitor;

let mut monitor = NetworkMonitor::new()?;
let interfaces = monitor.interfaces()?;

for iface in interfaces {
    if iface.is_active() {
        let (rx_rate, tx_rate) = monitor.bandwidth_rate(&iface.name, &iface);
        println!("{}: ↓{:.2} MB/s ↑{:.2} MB/s",
            iface.name, rx_rate / 1_000_000.0, tx_rate / 1_000_000.0);
    }
}
```
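The `bandwidth_rate` figures above are presumably derived from successive counter samples; the underlying arithmetic is just a delta over elapsed time, sketched here without any simonlib internals:

```rust
use std::time::Duration;

// Sketch of delta-based rate computation: bytes/sec from two successive
// counter readings and the elapsed wall time. saturating_sub guards
// against counter resets producing a negative delta.
fn rate_bytes_per_sec(prev_bytes: u64, curr_bytes: u64, elapsed: Duration) -> f64 {
    let secs = elapsed.as_secs_f64();
    if secs <= 0.0 {
        return 0.0;
    }
    curr_bytes.saturating_sub(prev_bytes) as f64 / secs
}

fn main() {
    // 10 MB received over 2 seconds -> 5 MB/s.
    let r = rate_bytes_per_sec(0, 10_000_000, Duration::from_secs(2));
    println!("{:.2} MB/s", r / 1_000_000.0);
}
```

The same delta-sampling idea underlies the CPU% calculation mentioned in the Contributing section.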
Network Diagnostic Tools (nmap, traceroute, ping style)
Silicon Monitor includes network diagnostic utilities inspired by popular CLI tools:
```rust
use simonlib::{ping, traceroute, scan_ports, dns_lookup, check_port};
use std::time::Duration;

// Ping a host
let result = ping("8.8.8.8", 4)?;
println!("RTT: min={:.2}ms avg={:.2}ms max={:.2}ms",
    result.rtt_min_ms, result.rtt_avg_ms, result.rtt_max_ms);

// DNS lookup
let ips = dns_lookup("google.com")?;
for ip in ips {
    println!("  → {}", ip);
}

// Traceroute
let hops = traceroute("google.com", 30)?;
for hop in &hops.hops {
    println!("{:>2} {:>15} {:>10}ms",
        hop.ttl,
        hop.address.as_deref().unwrap_or("*"),
        hop.rtt_ms.unwrap_or(0.0));
}

// Port scan (nmap-style TCP connect scan)
let ports = [22, 80, 443, 8080];
let results = scan_ports("192.168.1.1", &ports)?;
for r in results {
    println!("{}/tcp {} {}", r.port, r.status, r.service.unwrap_or_default());
}

// Quick port check (netcat-style)
let open = check_port("192.168.1.1", 80, Duration::from_secs(2))?;
println!("Port 80: {}", if open { "OPEN" } else { "CLOSED" });
```
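The quick port check can be reproduced with the standard library alone. A sketch using `TcpStream::connect_timeout` (not necessarily how `check_port` is implemented internally):

```rust
use std::net::{SocketAddr, TcpListener, TcpStream};
use std::time::Duration;

// Netcat-style reachability probe: a port counts as "open" if a TCP
// connect succeeds within the timeout.
fn is_port_open(addr: SocketAddr, timeout: Duration) -> bool {
    TcpStream::connect_timeout(&addr, timeout).is_ok()
}

fn main() -> std::io::Result<()> {
    // Bind an ephemeral local port so the demo is self-contained.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    println!("port {}: {}", addr.port(),
        if is_port_open(addr, Duration::from_secs(1)) { "OPEN" } else { "CLOSED" });
    Ok(())
}
```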
Peripheral Hardware Monitoring
Silicon Monitor provides cross-platform monitoring for audio, Bluetooth, display, and USB devices:
Audio Devices
```rust
use simonlib::audio::AudioMonitor;

let mut monitor = AudioMonitor::new()?;
let devices = monitor.devices();

for device in devices {
    println!("{} ({:?}): {:?}", device.name, device.device_type, device.state);
    if device.is_default {
        println!("  * Default device");
    }
    if let Some(vol) = device.volume {
        println!("  Volume: {}%", vol);
    }
}

// Get master volume (0-100)
if let Some(volume) = monitor.master_volume() {
    println!("Master volume: {}%", volume);
}
```
Bluetooth Devices
```rust
use simonlib::bluetooth::BluetoothMonitor;

let mut monitor = BluetoothMonitor::new()?;

// List adapters
for adapter in monitor.adapters() {
    println!("Adapter: {} ({})", adapter.name, adapter.address);
    println!("  Powered: {}", adapter.powered);
}

// List connected/paired devices
for device in monitor.devices() {
    println!("{} ({:?})", device.name, device.device_type);
    if let Some(battery) = device.battery_percent {
        println!("  Battery: {}%", battery);
    }
}
```
Display/Monitor Information
```rust
use simonlib::display::DisplayMonitor;

let monitor = DisplayMonitor::new()?;

for display in monitor.displays() {
    println!("Display {}: {}x{} @ {}Hz",
        display.id, display.width, display.height, display.refresh_rate);
    if display.is_primary {
        println!("  * Primary display");
    }
    println!("  Aspect ratio: {}", display.aspect_ratio());
    if let Some(scale) = display.scale_factor {
        println!("  Scale: {}x", scale);
    }
}
```
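An `aspect_ratio()` helper like the one above is typically just the resolution reduced by its greatest common divisor; a self-contained sketch:

```rust
// Sketch of reducing a resolution to an aspect-ratio string via the GCD
// (one plausible way a method like `aspect_ratio()` can be implemented).
fn gcd(mut a: u32, mut b: u32) -> u32 {
    while b != 0 {
        let t = a % b;
        a = b;
        b = t;
    }
    a
}

fn aspect_ratio(width: u32, height: u32) -> String {
    let d = gcd(width, height).max(1); // guard against 0x0
    format!("{}:{}", width / d, height / d)
}

fn main() {
    println!("{}", aspect_ratio(1920, 1080)); // 16:9
    println!("{}", aspect_ratio(2560, 1080)); // 64:27 (ultrawide)
}
```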
USB Devices
```rust
use simonlib::usb::UsbMonitor;

let monitor = UsbMonitor::new()?;

for device in monitor.devices() {
    println!("USB {:04x}:{:04x} - {} {}",
        device.vendor_id, device.product_id,
        device.manufacturer.as_deref().unwrap_or("Unknown"),
        device.product.as_deref().unwrap_or("Unknown"));
    println!("  Class: {:?}", device.device_class);
    println!("  Bus {}, Port {}", device.bus_number, device.port_number);
}
```
AI Agent CLI and API
Silicon Monitor includes a lightweight AI agent that can answer questions about your system in natural language:
Command Line Interface
```bash
# Quick single queries with amon
amon query "What's my GPU temperature?"
amon query "Show CPU usage"
amon query "Is my memory usage normal?"

# Interactive AI session
amon
# You: What's my GPU doing?
# 🤖 Agent: Your GPU is currently at 45% utilization...

# Or use the simon ai subcommand
simon ai query "Analyze my system performance"
simon ai                                        # Interactive mode
```
Programmatic Usage
```rust
use simonlib::agent::{Agent, AgentConfig, ModelSize};
use simonlib::SiliconMonitor;

let monitor = SiliconMonitor::new()?;
let config = AgentConfig::new(ModelSize::Medium);
let mut agent = Agent::new(config)?;

let response = agent.ask("What's my GPU temperature?", &monitor)?;
println!("{}", response.response);
```
Agent Features
- Natural Language: Ask questions in plain English
- System Aware: Accesses real-time hardware metrics
- Multiple Backends: Automatic detection of local and remote AI models
- Privacy First: Rule-based fallback when no backends configured
- Smart Caching: Remembers recent queries for instant responses
- Interactive Mode: Multi-turn conversations about your system
Supported Backends
The AI agent automatically detects and uses available backends in this order:
Local Inference Backends (✅ Implemented):
- TensorRT-LLM - NVIDIA optimized inference (requires Triton server)
- vLLM - High-performance serving with PagedAttention
- Ollama - Easy local model management (recommended for beginners)
- LM Studio - User-friendly GUI for local models
- llama.cpp - Direct GGUF model loading (placeholder, needs bindings)
Remote API Backends:
- OpenAI API - GPT models (requires `OPENAI_API_KEY`)
- Anthropic Claude - Claude models (requires `ANTHROPIC_API_KEY`)
- GitHub Models - Free AI models (requires `GITHUB_TOKEN`)
- Azure OpenAI - Enterprise OpenAI (requires `AZURE_OPENAI_API_KEY`)
Always Available:
- Rule-Based - Built-in fallback system (no setup required)
See LOCAL_AI_BACKENDS.md for detailed setup instructions.
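The ordered detection can be pictured as a first-available scan over candidates, with the rule-based backend as the guaranteed fallback. A hypothetical sketch (names and the availability probe are illustrative, not simonlib's actual code):

```rust
// Hypothetical sketch of ordered backend detection with a guaranteed
// rule-based fallback when nothing else is available.
#[derive(Debug, PartialEq)]
enum Backend {
    Ollama,
    OpenAi,
    RuleBased,
}

struct Candidate {
    backend: Backend,
    available: bool, // in reality: probe a local server or check an env var
}

fn select(candidates: Vec<Candidate>) -> Backend {
    candidates
        .into_iter()
        .find(|c| c.available)
        .map(|c| c.backend)
        .unwrap_or(Backend::RuleBased) // always-available fallback
}

fn main() {
    let picked = select(vec![
        Candidate { backend: Backend::Ollama, available: false },
        Candidate { backend: Backend::OpenAi, available: true },
    ]);
    println!("{picked:?}");
}
```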
Backend Configuration
```bash
# List available backends
amon --list-backends
# [*] Available AI Backends:
#   1. Rule-Based (Built-in)
#   2. Ollama (Local Server) [+] running
#   3. GitHub Models
#      API Key: GITHUB_TOKEN [+] configured
#      Endpoint: https://models.inference.ai.azure.com

# Automatic backend detection (default)
amon query "What's my GPU temperature?"
# [*] Using backend: Ollama
# Question: What's my GPU temperature?
# ...

# Configure via environment variables (remote APIs only)
export OPENAI_API_KEY="sk-..."     # For OpenAI
export ANTHROPIC_API_KEY="sk-..."  # For Anthropic
export GITHUB_TOKEN="ghp_..."      # For GitHub Models

# Or start local inference servers
ollama serve                      # Ollama (easiest; serves on port 11434)
vllm serve meta-llama/Llama-3-8B  # vLLM (fastest)
# TensorRT-LLM via Triton         # TensorRT (NVIDIA only)
# LM Studio GUI → Start Server    # LM Studio on port 1234
```
Programmatic Backend Selection
```rust
use simonlib::agent::{Agent, AgentConfig, BackendConfig, BackendType};

// Auto-detect best backend
let config = AgentConfig::auto_detect()?;

// Or use a specific backend
let config = AgentConfig::with_backend_type(BackendType::RemoteOpenAI)?;

// Or a custom configuration
let backend = BackendConfig::openai("gpt-4o-mini", Some("sk-...".into()));
let config = AgentConfig::with_backend(backend);

let mut agent = Agent::new(config)?;
```
Example Queries:
- "What's my GPU temperature and is it safe?"
- "Show me CPU usage across all cores"
- "How much memory am I using?"
- "Is my system running hot?"
- "What processes are using the most GPU memory?"
Terminal User Interface (TUI)
Silicon Monitor includes a beautiful TUI for real-time monitoring with an integrated AI agent:
```bash
# Build and run the TUI
cargo run --release --features cli --example tui

# Or after installation (using either binary name)
simon
amon   # AI Monitor alias
```
TUI Features:
- Real-time graphs with 60-second history
- Selectable color themes - Press `t` to choose from 6 themes (Catppuccin Mocha/Latte, Glances, Nord, Dracula, Gruvbox Dark)
- Keyboard navigation (←/→ or 1-6 for tabs, Q to quit)
- Sparkline charts for trends
- Process selection - Use ↑/↓ arrows to navigate processes, Enter for detailed view
- Integrated AI Agent - Press `a` to ask questions about your system
- 6 tabs: Overview, CPU, GPU, Memory, System, Agent
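The 60-second graph history can be modeled as a fixed-capacity deque that evicts the oldest sample on overflow; a sketch (not the TUI's actual data structure):

```rust
use std::collections::VecDeque;

// Sketch of a fixed-length sample history: pushing past capacity
// (e.g. 60 one-second samples) drops the oldest value.
struct History {
    samples: VecDeque<f32>,
    capacity: usize,
}

impl History {
    fn new(capacity: usize) -> Self {
        Self { samples: VecDeque::with_capacity(capacity), capacity }
    }

    fn push(&mut self, value: f32) {
        if self.samples.len() == self.capacity {
            self.samples.pop_front(); // evict the oldest sample
        }
        self.samples.push_back(value);
    }

    fn latest(&self) -> Option<f32> {
        self.samples.back().copied()
    }
}

fn main() {
    let mut h = History::new(60);
    for t in 0..100 {
        h.push(t as f32);
    }
    println!("{} samples, latest {:?}", h.samples.len(), h.latest());
}
```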
Keyboard Shortcuts:
| Key | Action |
|---|---|
| `q` | Quit |
| `Tab` | Cycle process sort mode |
| `↑`/`↓` | Select process |
| `Enter` | Open process detail view |
| `Esc` | Close overlay/detail view |
| `t` / `T` | Open theme picker |
| `PgUp`/`PgDn` | Page through processes |
| `Home`/`End` | Jump to first/last process |
| `r` | Reset scroll position |
Agent Tab:
- Natural language queries: "What's my GPU temperature?"
- Conversation history with timing
- Response caching for instant repeated queries
- Zero impact on monitoring performance
See TUI_AGENT_GUIDE.md for complete agent usage guide.
TUI Screenshots (click to expand)
| Overview Tab | GPU Tab | Agent Tab |
|---|---|---|
| Real-time system overview with gauges | GPU metrics, temperature, utilization | Natural language AI queries |
Graphical User Interface (GUI)
Silicon Monitor also includes a native desktop GUI built with egui for a modern graphical experience:
```bash
# Build and run the GUI
cargo run --release --features gui

# Or after installation
simon gui
```
GUI Features:
- Native desktop application (Windows, Linux, macOS)
- Multiple color themes (Dark, Light, Ocean, Forest, Sunset, Monochrome)
- Real-time graphs and visualizations
- Auto-refreshing metrics
- Mouse-friendly interface with scrollable panels
- Historical data with trend charts
GUI Screenshots (click to expand)
| System Overview | GPU Monitoring | Theme Selection |
|---|---|---|
| Main dashboard with system stats | Detailed GPU metrics panel | Available color themes |
Examples
The repository includes comprehensive examples:
- `gpu_monitor.rs` - Multi-vendor GPU monitoring with all metrics
- `nvidia_monitor.rs` - NVIDIA-specific features (NVML)
- `amd_monitor.rs` - AMD-specific features (sysfs/DRM)
- `intel_monitor.rs` - Intel-specific features (i915/xe)
- `all_gpus.rs` - Unified multi-vendor GPU example
- `cpu_monitor.rs` - CPU metrics and per-core stats
- `memory_monitor.rs` - Memory and swap usage
- `disk_monitor.rs` - Disk I/O and SMART data
- `motherboard_monitor.rs` - System information and sensors
- `process_monitor.rs` - Process listing with GPU attribution
- `network_monitor.rs` - Network interface statistics
- `tui.rs` - Interactive terminal UI
- `audio_monitor.rs` - Audio device enumeration and volume
- `bluetooth_monitor.rs` - Bluetooth adapter and device discovery
- `display_monitor.rs` - Display/monitor information
- `usb_monitor.rs` - USB device enumeration
- `agent_simple.rs` - AI agent quick demo
- `agent_demo.rs` - AI agent interactive demo with model selection
Run any example with:
```bash
cargo run --release --features nvidia --example gpu_monitor
cargo run --release --features nvidia --example process_monitor
cargo run --release --example network_monitor
cargo run --release --features cli --example tui
cargo run --release --features cli --example audio_monitor
cargo run --release --features cli --example bluetooth_monitor
cargo run --release --features cli --example display_monitor
cargo run --release --features cli --example usb_monitor
cargo run --release --features full --example agent_simple
```
Platform Support
| Platform | CPU | Memory | Disk | GPU (NVIDIA) | GPU (AMD) | GPU (Intel) | GPU (Apple) | Network | Audio | Bluetooth | Display | USB |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Linux | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Windows | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| macOS | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |

✅ Fully Supported | 🚧 Partial/In Progress | ❌ Not Supported
GPU Backend Details
NVIDIA:
- Linux: Full NVML support via `libnvidia-ml.so`
- Windows: Full NVML support via `nvml.dll`
- Metrics: All metrics supported - utilization, memory, clocks, power, temperature, processes, throttling, ECC
- Devices: GeForce, Quadro, Tesla, Jetson (Nano, TX1/TX2, Xavier, Orin, Thor)
AMD:
- Linux: sysfs via `/sys/class/drm/card*/device/`
- Windows: WMI performance counters (`Win32_PerfFormattedData_GPUPerformanceCounters`) for utilization, memory, and temperature
- Metrics: Utilization (GFX/compute), VRAM, clocks (SCLK/MCLK), temperature, power, fan speed
- Devices: RDNA 1/2/3, CDNA 1/2 (Radeon RX 5000+, Instinct MI series)
- Requirements: amdgpu driver (Linux), AMD display driver (Windows)
Intel:
- Linux: i915/xe drivers via `/sys/class/drm/card*/`
- Windows: WMI performance counters for utilization, memory (shared + dedicated), and temperature
- Metrics: GT frequency, memory (discrete GPUs), temperature, power via hwmon
- Devices: Arc A-series, Iris Xe, UHD Graphics, Data Center GPU Max
- Requirements: i915 (legacy) or xe (modern) kernel driver (Linux), Intel graphics driver (Windows)
Apple:
- macOS: `powermetrics` + `system_profiler` for GPU detection and monitoring
- Metrics: Utilization, frequency, power draw (via powermetrics plist output)
- Devices: Apple Silicon M1/M2/M3/M4 (Base, Pro, Max, Ultra) integrated GPUs
- Requirements: macOS with Apple Silicon; sudo for full powermetrics data
Architecture
```text
simon/
├── core/                  # Core metric structs (CPU, memory, power, etc.)
├── gpu/                   # Multi-vendor GPU abstraction
│   ├── mod.rs             # Unified Device trait, GpuCollection
│   ├── nvidia_new.rs      # NVIDIA backend (NVML)
│   ├── amd_rocm.rs        # AMD backend (sysfs/DRM)
│   └── intel_levelzero.rs # Intel backend (i915/xe)
├── disk/                  # Disk I/O and SMART monitoring
├── motherboard/           # System info, BIOS, sensors
├── silicon/               # Apple/Intel/AMD silicon-level monitoring
├── audio/                 # Audio device enumeration
├── bluetooth/             # Bluetooth adapter/device monitoring
├── display/               # Connected display monitoring
├── usb/                   # USB device enumeration
├── hwmon/                 # Hardware monitor sensors
├── observability/         # AI-oriented observability API
├── ai_api/                # AI agent tools and ontology
├── process_monitor.rs     # Process enumeration with GPU attribution
├── network_monitor.rs     # Network interface statistics
├── tui/                   # Terminal user interface (ratatui)
├── bin/main.rs            # CLI/TUI entry point
└── platform/              # Platform-specific implementations
```
API Documentation
GPU Collection API
The `GpuCollection` provides a unified interface for all GPU vendors:

```rust
use simonlib::gpu::{GpuCollection, Device};

// Auto-detect all available GPUs
let collection = GpuCollection::auto_detect()?;

// Get count of detected GPUs
println!("Found {} GPUs", collection.device_count());

// Snapshot all GPUs at once
let snapshots = collection.snapshot_all()?;

// Access individual devices
for device in collection.gpus() {
    println!("{} ({})", device.name()?, device.vendor());
}
```
Process Monitoring
The `ProcessMonitor` correlates system processes with GPU usage:

```rust
use simonlib::process_monitor::ProcessMonitor;
use simonlib::gpu::GpuCollection;

let gpus = GpuCollection::auto_detect()?;
let mut monitor = ProcessMonitor::with_gpus(gpus)?;

// Get all processes
let processes = monitor.processes()?;

// Get top GPU consumers
let gpu_procs = monitor.processes_by_gpu_memory()?;

// Get top CPU consumers
let cpu_procs = monitor.processes_by_cpu()?;

// Get only GPU processes
let gpu_only = monitor.gpu_processes()?;
```
Network Monitor API
The `NetworkMonitor` tracks network interface statistics:

```rust
use simonlib::network_monitor::NetworkMonitor;

let mut monitor = NetworkMonitor::new()?;

// Get all interfaces
let interfaces = monitor.interfaces()?;

// Get only active interfaces
let active = monitor.active_interfaces()?;

// Get a specific interface
if let Some(iface) = monitor.interface_by_name("eth0")? {
    println!("RX: {} MB", iface.rx_mb());
    println!("TX: {} MB", iface.tx_mb());

    // Calculate bandwidth rate
    let (rx_rate, tx_rate) = monitor.bandwidth_rate("eth0", &iface);
    println!("Rate: ↓{:.2} MB/s ↑{:.2} MB/s",
        rx_rate / 1_000_000.0, tx_rate / 1_000_000.0);
}
```
Full API documentation:
```bash
cargo doc --features full --no-deps --open
```
Building from Source
Prerequisites
Linux:
```bash
# Ubuntu/Debian
sudo apt install build-essential pkg-config libdrm-dev

# Fedora
sudo dnf install @development-tools libdrm-devel

# Arch
sudo pacman -S base-devel libdrm

# For NVIDIA support, install the CUDA toolkit or driver (provides libnvidia-ml.so)
```
Windows:
```powershell
# Install Visual Studio Build Tools (2019 or later)
# For NVIDIA support, install the CUDA toolkit or NVIDIA driver (provides nvml.dll)
```
macOS:
```bash
# Install Xcode command line tools
xcode-select --install
```
Compilation
```bash
# Clone the repository
git clone https://github.com/nervosys/SiliconMonitor
cd SiliconMonitor

# Development build
cargo build --features full

# Release build (optimized)
cargo build --release --features full

# Run tests
cargo test --features full

# Run a specific example
cargo run --release --features nvidia --example gpu_monitor
```
Contributing
Contributions are welcome! Areas that need help:
- Apple GPU enhancements: Apple Silicon GPU auto-detection is integrated via `GpuCollection::auto_detect()`; Metal Performance Shaders could be added for richer metrics
- macOS Process I/O: I/O read/write bytes and handle counts on macOS
- CPU% refinements: CPU% now uses delta-based sampling (matching Task Manager/top behavior); further improvements could include per-core attribution
- Documentation: More examples, tutorials, API documentation
- Testing: Multi-GPU setups, edge cases, platform-specific bugs
- GUI: Native desktop application (egui) - planned but not yet implemented
See CONTRIBUTING.md for guidelines.
Development Workflow
```bash
# Format code
cargo fmt

# Run clippy
cargo clippy --features full -- -D warnings

# Run tests
cargo test --features full

# Build documentation
cargo doc --features full --no-deps --open

# Run examples
cargo run --release --features nvidia --example gpu_monitor
```
License
This project is dual-licensed:
- Open Source: GNU Affero General Public License v3.0 (AGPL-3.0) - free for open-source use with copyleft obligations
- Commercial: A proprietary license is available for closed-source, SaaS, or embedded use without AGPL requirements
Contributors must agree to the Contributor License Agreement to enable dual licensing.
See LICENSE-COMMERCIAL.md for commercial licensing details and contact information.
Acknowledgments
Silicon Monitor builds upon and is inspired by:
- jetson-stats by Raffaello Bonghi - Comprehensive monitoring for NVIDIA Jetson devices
- nvtop - GPU monitoring TUI for Linux
- radeontop - AMD GPU monitoring
- intel_gpu_top - Intel GPU monitoring
Special thanks to the Rust community and the maintainers of the following crates:
- ratatui - Terminal UI framework
- sysinfo - System information
- nvml-wrapper - NVIDIA NVML bindings
Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: docs.rs/simon
Made with 🦾 by NERVOSYS