# Helios Engine - LLM Agent Framework
Helios Engine is a powerful and flexible Rust framework for building LLM-powered agents with tool support, streaming chat capabilities, and easy configuration management. Create intelligent agents that can interact with users, call tools, and maintain conversation context - with both online and offline local model support.
## Key Features
- 🆕 ReAct Mode: Enable agents to reason and plan before taking actions with a simple `.react()` call - includes custom reasoning prompts for domain-specific tasks
- 🆕 Forest of Agents: Multi-agent collaboration system where agents can communicate, delegate tasks, and share context
- Agent System: Create multiple agents with different personalities and capabilities
- 🆕 Tool Builder: Simplified tool creation with builder pattern - wrap any function as a tool without manual trait implementation
- Tool Registry: Extensible tool system for adding custom functionality
- Extensive Tool Suite: 16+ built-in tools including web scraping, JSON parsing, timestamp operations, file I/O, shell commands, HTTP requests, system info, and text processing
- 🆕 RAG System: Retrieval-Augmented Generation with vector stores (InMemory and Qdrant)
- 🆕 Custom Endpoints: Ultra-simple API for adding custom HTTP endpoints to your agent server - ~70% less code than before!
- Streaming Support: True real-time response streaming for both remote and local models with immediate token delivery
- Local Model Support: Run local models offline using llama.cpp with HuggingFace integration (optional `local` feature)
- HTTP Server & API: Expose OpenAI-compatible API endpoints with full parameter support
- Dual Mode Support: Auto, online (remote API), and offline (local) modes
- CLI & Library: Use as both a command-line tool and a Rust library crate
- 🆕 Feature Flags: Optional `local` feature for offline model support - build only what you need!
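The Tool Builder bullet above describes wrapping a plain function as a tool without a manual trait implementation. As a rough sketch of that builder pattern in plain Rust - the `Tool` and `ToolBuilder` types here are hypothetical illustrations, *not* helios-engine's actual API:

```rust
// Generic builder-pattern sketch (hypothetical types, not helios-engine's API):
// wrap an ordinary closure as a named "tool" without hand-writing a trait impl.

struct Tool {
    name: String,
    run: Box<dyn Fn(&str) -> String>,
}

struct ToolBuilder {
    name: String,
}

impl ToolBuilder {
    fn new(name: &str) -> Self {
        Self { name: name.to_string() }
    }

    // Accept any compatible closure and box it behind a uniform interface.
    fn handler(self, f: impl Fn(&str) -> String + 'static) -> Tool {
        Tool { name: self.name, run: Box::new(f) }
    }
}

fn main() {
    // Wrap a plain closure as a tool - no manual trait implementation needed.
    let upper = ToolBuilder::new("uppercase").handler(|input| input.to_uppercase());
    assert_eq!(upper.name, "uppercase");
    println!("{}", (upper.run)("hello")); // prints "HELLO"
}
```

The payoff of the pattern is that tool registration code only ever sees the uniform `Tool` value, while authors write ordinary functions.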
## Documentation

### Online Resources
- Official Website - Complete interactive documentation with tutorials, guides, and examples
- Official Book - Comprehensive guide to Helios Engine
- API Reference - Detailed API documentation on docs.rs
### Quick Links
- Getting Started - Installation and first steps
- Core Concepts - Agents, LLMs, chat, and error handling
- Tools - Using and creating tools
- Forest of Agents - Multi-agent systems
- RAG System - Retrieval-Augmented Generation
- Examples - Code examples and use cases
### Local Documentation
- Getting Started - Comprehensive guide: installation, configuration, first agent, tools, and CLI
- Tools Guide - Built-in tools, custom tool creation, and Tool Builder
- Forest of Agents - Multi-agent systems, coordination, and communication
- RAG System - Retrieval-Augmented Generation with vector stores
- API Reference - Complete API documentation
- Configuration - Configuration options and local inference setup
- Using as Crate - Library usage guide
Full Documentation Index - Complete navigation and updated structure
## Quick Start
Version 0.4.4
### Install CLI Tool

```shell
# Install without local model support (lighter, faster install)
cargo install helios-engine

# Install with local model support (enables offline mode with llama-cpp-2)
cargo install helios-engine --features local
```
### Basic Usage

```shell
# Initialize configuration
helios-engine init

# Start interactive chat
helios-engine chat

# Ask a quick question
helios-engine ask "What is Rust?"
```
### As a Library Crate

Add to your `Cargo.toml`:

```toml
[dependencies]
helios-engine = "0.4.4"
tokio = { version = "1.35", features = ["full"] }
```

For local model support:

```toml
[dependencies]
helios-engine = { version = "0.4.4", features = ["local"] }
tokio = { version = "1.35", features = ["full"] }
```
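The example below loads settings with `Config::from_file("config.toml")`. The real schema is generated by `helios-engine init`; the fragment below is only a hypothetical sketch of what a remote-provider configuration could contain - every field name in it is an assumption, not the crate's documented format:

```toml
# Hypothetical sketch only - run `helios-engine init` to generate the real file.
[llm]
provider = "openai"                      # assumed field name
model = "gpt-4o-mini"                    # assumed field name
api_key = "YOUR_API_KEY"                 # assumed field name
base_url = "https://api.openai.com/v1"   # assumed field name
```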
### Simple Example

```rust
use helios_engine::{Agent, Config, CalculatorTool};

#[tokio::main]
async fn main() -> helios_engine::Result<()> {
    let config = Config::from_file("config.toml")?;

    let mut agent = Agent::builder("MyAssistant")
        .config(config)
        .system_prompt("You are a helpful AI assistant.")
        .tool(Box::new(CalculatorTool))
        .react() // Enable ReAct mode for reasoning before acting!
        .build()
        .await?;

    let response = agent.chat("What is 15 * 8?").await?;
    println!("{}", response);

    Ok(())
}
```
See Getting Started Guide or visit the Official Book for detailed examples and comprehensive tutorials!
## Custom Endpoints Made Simple

Create custom HTTP endpoints with minimal code:

```rust
use helios_engine::{ServerBuilder, get, EndpointBuilder, EndpointResponse};

let endpoints = vec![
    // Simple static endpoint
    get("/api/version", serde_json::json!({"version": "1.0"})),
    // Dynamic endpoint with request handling
    EndpointBuilder::post("/api/echo")
        .handle(|req| {
            let msg = req.and_then(|r| r.body).unwrap_or_default();
            EndpointResponse::ok(serde_json::json!({"echo": msg}))
        })
        .build(),
];

ServerBuilder::with_agent(agent, "model")
    .endpoints(endpoints)
    .serve()
    .await?;
```

~70% less code than the old API! See the Custom Endpoints Guide for details.
## Use Cases
- Chatbots & Virtual Assistants: Build conversational AI with tool access and memory
- Multi-Agent Systems: Coordinate multiple specialized agents for complex workflows
- Data Analysis: Agents that can read files, process data, and generate reports
- Web Automation: Scrape websites, make API calls, and process responses
- Knowledge Management: Build RAG systems for semantic search and Q&A
- API Services: Expose your agents via OpenAI-compatible HTTP endpoints
- Local AI: Run models completely offline for privacy and security
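For the API Services use case, a served agent exposes OpenAI-compatible endpoints, so a plain `curl` call can exercise it using the standard chat-completions request shape. The host, port, and model name below are illustrative assumptions - check your server configuration for the actual address:

```shell
# Assumed address and model name - adjust to your deployment.
curl -s http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "model",
        "messages": [{"role": "user", "content": "What is 15 * 8?"}]
      }'
```

Because the endpoint follows the OpenAI wire format, existing OpenAI client libraries can typically be pointed at the server by overriding their base URL.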
## Built-in Tools (16+)
Helios Engine includes a comprehensive suite of production-ready tools:
- File Management: Read, write, edit, and search files
- Web & API: Web scraping, HTTP requests
- System Utilities: Shell commands, system information
- Data Processing: JSON parsing, text manipulation, timestamps
- Communication: Agent-to-agent messaging
- Knowledge: RAG tool for semantic search and retrieval
Learn more in the Tools Guide.
## Project Structure

```text
helios-engine/
├── src/          # Source code
├── examples/     # Example applications
├── docs/         # Documentation
├── book/         # mdBook source (deployed to Vercel)
├── tests/        # Integration tests
├── Cargo.toml    # Project configuration
└── README.md     # This file
```
## Contributing
We welcome contributions! See our Contributing Guide for details on:
- Development setup
- Code standards
- Documentation guidelines
- Testing procedures
## Links
- Official Website & Book - Complete documentation and guides
- Crates.io - Package registry
- API Documentation - API reference
- GitHub Repository - Source code
- Examples - Code examples
## License
This project is licensed under the MIT License - see the LICENSE file for details.
Made with love in Rust by Ammar Alnagar