# ADK-Rust
Rust Agent Development Kit (ADK-Rust) - Build AI agents in Rust with modular components for models, tools, memory, RAG, security, realtime voice, and more.
A flexible framework for developing AI agents with simplicity and power. Model-agnostic, deployment-agnostic, optimized for frontier AI models. Includes support for realtime voice agents, RAG pipelines, graph workflows, declarative security, and 120+ working examples.
## Supported Providers

| Provider | Feature Flag | Default Model |
|---|---|---|
| Gemini | `gemini` (default) | `gemini-2.5-flash` |
| OpenAI | `openai` | `gpt-5-mini` |
| Anthropic | `anthropic` | `claude-sonnet-4-6` |
| DeepSeek | `deepseek` | `deepseek-chat` |
| Groq | `groq` | `llama-3.3-70b-versatile` |
| Ollama | `ollama` | `llama3.2` |
| Fireworks AI | `fireworks` | `accounts/fireworks/models/llama-v3p1-8b-instruct` |
| Together AI | `together` | `meta-llama/Llama-3.3-70B-Instruct-Turbo` |
| Mistral AI | `mistral` | `mistral-small-latest` |
| Perplexity | `perplexity` | `sonar` |
| Cerebras | `cerebras` | `llama-3.3-70b` |
| SambaNova | `sambanova` | `Meta-Llama-3.3-70B-Instruct` |
| Amazon Bedrock | `bedrock` | `us.anthropic.claude-sonnet-4-6` |
| Azure AI Inference | `azure-ai` | (endpoint-specific) |
## Quick Start

1. Create a new project:

```sh
cargo new my_agent && cd my_agent
```

2. Add dependencies:

```toml
[dependencies]
adk-rust = "0.5.0"
tokio = { version = "1.40", features = ["full"] }
dotenvy = "0.15"
```

3. Set your API key:

```sh
echo 'GOOGLE_API_KEY=your-key' > .env
```

4. Write `src/main.rs`:

```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;

#[tokio::main]
async fn main() -> AnyhowResult<()> {
    dotenvy::dotenv().ok();
    let api_key = std::env::var("GOOGLE_API_KEY")?;

    let model = GeminiModel::new(&api_key, "gemini-2.5-flash")?;
    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Launcher::new(Arc::new(agent)).run().await?;
    Ok(())
}
```

5. Run:

```sh
cargo run
```
## Even Faster — `adk::run()`

Auto-detect your provider and run an agent in one call:

```rust
use adk_rust::run;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    dotenvy::dotenv().ok();
    // Set ANTHROPIC_API_KEY, OPENAI_API_KEY, or GOOGLE_API_KEY
    let response = run("You are a helpful assistant.", "What is Rust?").await?;
    println!("{response}");
    Ok(())
}
```

`provider_from_env()` checks `ANTHROPIC_API_KEY` → `OPENAI_API_KEY` → `GOOGLE_API_KEY` in order.
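That precedence can be pictured as a plain first-match check. The sketch below is illustrative only — `first_configured` is a hypothetical helper, not part of adk-rust's API:

```rust
/// Hypothetical helper mirroring the documented precedence: return the
/// first provider whose API-key variable the lookup reports as set.
fn first_configured<'a>(
    order: &[(&'a str, &str)],
    is_set: impl Fn(&str) -> bool,
) -> Option<&'a str> {
    order.iter().find(|(_, var)| is_set(var)).map(|(name, _)| *name)
}

fn main() {
    // Same order the README documents for provider_from_env().
    let order = [
        ("anthropic", "ANTHROPIC_API_KEY"),
        ("openai", "OPENAI_API_KEY"),
        ("gemini", "GOOGLE_API_KEY"),
    ];
    // In real code this would be std::env::var(var).is_ok(); here we
    // simulate an environment where only OPENAI_API_KEY is set.
    let is_set = |var: &str| var == "OPENAI_API_KEY";
    println!("{:?}", first_configured(&order, is_set)); // Some("openai")
}
```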
## Adding Tools

```rust
let agent = LlmAgentBuilder::new("researcher")
    .instruction("Search the web and summarize findings.")
    .model(Arc::new(model))
    .tool(Arc::new(GoogleSearchTool::new()))
    .build()?;
```
## Workflow Agents

```rust
// Sequential execution
let pipeline = SequentialAgent::new("pipeline", vec![agent1, agent2, agent3]);

// Parallel execution
let parallel = ParallelAgent::new("analysis", vec![analyst1, analyst2]);

// Loop until condition (max 5 iterations)
let loop_agent = LoopAgent::new("refiner", vec![agent])
    .with_max_iterations(5);
```
## Multi-Agent Systems

```rust
let coordinator = LlmAgentBuilder::new("coordinator")
    .instruction("Delegate tasks to specialists.")
    .model(model)
    .sub_agent(code_agent)
    .sub_agent(test_agent)
    .build()?;
```
## Realtime Voice Agents

Build voice-enabled AI assistants with bidirectional audio streaming:

```rust
use adk_realtime::{RealtimeAgent, openai::OpenAIRealtimeModel, RealtimeModel};
use std::sync::Arc;

let model: Arc<dyn RealtimeModel> = Arc::new(
    OpenAIRealtimeModel::new(&api_key, "gpt-4o-realtime-preview-2024-12-17")
);

let agent = RealtimeAgent::builder("voice_assistant")
    .model(model)
    .instruction("You are a helpful voice assistant.")
    .voice("alloy")
    .server_vad() // Voice activity detection
    .build()?;
```
Features:
- OpenAI Realtime API & Gemini Live API
- Vertex AI Live with Application Default Credentials
- LiveKit WebRTC bridge for production audio routing
- Bidirectional audio (PCM16, G711, Opus)
- Server-side VAD
- Mid-session context mutation (swap instructions/tools without dropping the call)
- Real-time tool calling
- Multi-agent handoffs
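When sizing buffers for the PCM16 streams mentioned above, the byte math is simple. This is generic audio arithmetic, not tied to any particular realtime API:

```rust
/// Bytes per mono PCM16 frame: 2 bytes per sample at the given rate.
fn pcm16_frame_bytes(sample_rate_hz: u32, frame_ms: u32) -> u32 {
    sample_rate_hz * frame_ms / 1000 * 2
}

fn main() {
    // e.g. a 20 ms frame at 24 kHz
    println!("{}", pcm16_frame_bytes(24_000, 20)); // 960
}
```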
## Graph-Based Workflows

Build complex workflows using LangGraph-style graph agents:

```rust
use adk_graph::prelude::*;

let agent = GraphAgent::builder("processor")
    .node_fn("fetch", |ctx| async move { /* ... */ })
    .node_fn("transform", |ctx| async move { /* ... */ })
    .edge(START, "fetch")
    .edge("fetch", "transform")
    .edge("transform", END)
    .checkpointer(SqliteCheckpointer::new("state.db").await?)
    .build()?;
```
Features:
- Cyclic graphs for ReAct patterns
- Conditional routing
- State management with reducers
- Checkpointing (memory, SQLite)
- Human-in-the-loop interrupts
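The fetch → transform wiring above boils down to walking an edge map until a terminal node. A dependency-free sketch of that routing skeleton (not adk-graph's engine, which also runs per-node functions and supports conditional edges):

```rust
use std::collections::HashMap;

/// Follow edges from a start node until "END", recording each node
/// visited along the way.
fn walk(edges: &HashMap<&str, &str>, start: &str) -> Vec<String> {
    let mut visited = Vec::new();
    let mut current = start;
    while current != "END" {
        visited.push(current.to_string());
        current = edges.get(current).copied().unwrap_or("END");
    }
    visited
}

fn main() {
    let edges = HashMap::from([("fetch", "transform"), ("transform", "END")]);
    println!("{:?}", walk(&edges, "fetch")); // ["fetch", "transform"]
}
```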
## Browser Automation

Give agents web browsing capabilities with 46 tools:

```rust
use adk_browser::{BrowserSession, BrowserToolset, BrowserConfig};
use std::sync::Arc;

let config = BrowserConfig::new().webdriver_url("http://localhost:4444");
let session = Arc::new(BrowserSession::new(config));
let toolset = BrowserToolset::new(session);
let tools = toolset.all_tools(); // 46 browser tools

let mut builder = LlmAgentBuilder::new("web_agent")
    .model(model);
for tool in tools {
    builder = builder.tool(tool);
}
let agent = builder.build()?;
```
Tools include navigation, extraction, forms, screenshots, JavaScript execution, and more.
## Agent Evaluation

Test and validate agent behavior:

```rust
use adk_eval::{Evaluator, EvaluationConfig, EvaluationCriteria};

let evaluator = Evaluator::new(EvaluationConfig::with_criteria(
    EvaluationCriteria::exact_tools().with_response_similarity(0.8)
));
let report = evaluator.evaluate_file(agent, "tests/agent.test.json").await?;
assert!(report.all_passed());
```
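A response-similarity threshold of 0.8 implies some text-similarity metric; adk-eval's actual metric is not specified here, but token-level Jaccard overlap is one common stand-in, sketched below for intuition:

```rust
use std::collections::HashSet;

/// Jaccard similarity over whitespace tokens: |A ∩ B| / |A ∪ B|.
fn jaccard(a: &str, b: &str) -> f64 {
    let ta: HashSet<&str> = a.split_whitespace().collect();
    let tb: HashSet<&str> = b.split_whitespace().collect();
    let union = ta.union(&tb).count();
    if union == 0 {
        return 1.0; // two empty responses count as identical
    }
    ta.intersection(&tb).count() as f64 / union as f64
}

fn main() {
    let score = jaccard("rust is memory safe", "rust is fast");
    println!("{score:.2}"); // intersection {rust, is} = 2, union = 5 → 0.40
    println!("{}", score >= 0.8); // false: would fail a 0.8 threshold
}
```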
## RAG Pipeline

Build retrieval-augmented generation pipelines with `adk-rag`:

```rust
use adk_rag::{RagPipelineBuilder, InMemoryVectorStore, FixedSizeChunker};

let pipeline = RagPipelineBuilder::new()
    .chunker(FixedSizeChunker::new(512, 50))
    .vector_store(InMemoryVectorStore::new())
    .embedding_provider(embedding_model)
    .build()?;

// Ingest documents
pipeline.ingest("doc1", "Your document text here...").await?;

// Search
let results = pipeline.search("query", 5).await?;
```
Features:
- 3 chunking strategies (fixed-size, recursive, markdown-aware)
- 6 vector store backends (in-memory, Qdrant, Milvus, Weaviate, Pinecone, SurrealDB)
- Pluggable embedding providers and rerankers
- Ready-made `RagTool` for agent integration
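The `FixedSizeChunker::new(512, 50)` call above suggests chunks of 512 units with a 50-unit overlap. A minimal sketch of that strategy, using characters as the unit — illustrative only, not adk-rag's implementation, whose unit and boundary rules may differ:

```rust
/// Split text into chunks of `size` characters, stepping back `overlap`
/// characters between chunks so adjacent chunks share context.
fn chunk_fixed(text: &str, size: usize, overlap: usize) -> Vec<String> {
    assert!(overlap < size, "overlap must be smaller than chunk size");
    let chars: Vec<char> = text.chars().collect();
    let mut chunks = Vec::new();
    let mut start = 0;
    while start < chars.len() {
        let end = (start + size).min(chars.len());
        chunks.push(chars[start..end].iter().collect());
        if end == chars.len() {
            break;
        }
        start = end - overlap;
    }
    chunks
}

fn main() {
    println!("{:?}", chunk_fixed("abcdefghij", 4, 1)); // ["abcd", "defg", "ghij"]
}
```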
## Declarative Security

Tools declare their required scopes — the framework enforces automatically:

```rust
use adk_tool::FunctionTool;
use adk_auth::{ScopeGuard, ContextScopeResolver};

// Tool declares what scopes it needs
let transfer = FunctionTool::new("transfer", "Transfer funds", handler)
    .with_scopes(&["finance:write", "verified"]);

// Framework enforces before execution — no imperative checks in handlers
let guard = ScopeGuard::new(ContextScopeResolver);
let protected = guard.protect(transfer);
```
Features:
- Declarative `required_scopes()` on the `Tool` trait
- `ScopeGuard` with pluggable resolvers (context, static, custom)
- Role-based access control with allow/deny rules
- SSO/OAuth integration (Auth0, Okta, Azure AD, Google OIDC)
- Audit logging for all access decisions
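At its core, this kind of declarative enforcement is a superset check: the call proceeds only if the caller's granted scopes cover everything the tool declared. A minimal sketch of that check (not adk-auth's actual `ScopeGuard`):

```rust
use std::collections::HashSet;

/// Allow the call only if every required scope is among those granted.
fn allowed(required: &[&str], granted: &HashSet<&str>) -> bool {
    required.iter().all(|scope| granted.contains(scope))
}

fn main() {
    let granted = HashSet::from(["finance:write", "verified"]);
    println!("{}", allowed(&["finance:write", "verified"], &granted)); // true
    println!("{}", allowed(&["admin"], &granted)); // false
}
```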
## Deployment

```sh
# Console mode (default)
cargo run

# Server mode
cargo run -- serve --port 8080
```
## Installation Options

```toml
# Standard (default) — agents, models, tools, sessions, runner, guardrails, auth
adk-rust = "0.5.0"

# Full — standard + all stable specialist crates (graph, realtime, browser, eval, rag)
# Does NOT include experimental crates (code, sandbox, audio) — use `labs` for those
adk-rust = { version = "0.5.0", features = ["full"] }

# Labs — standard + experimental crates (code, sandbox, audio)
adk-rust = { version = "0.5.0", features = ["labs"] }

# Full + Labs — everything including experimental crates
adk-rust = { version = "0.5.0", features = ["full", "labs"] }

# Minimal
adk-rust = { version = "0.5.0", default-features = false, features = ["minimal"] }

# Custom
adk-rust = { version = "0.5.0", default-features = false, features = ["agents", "gemini", "tools"] }

# With new providers (forwarded to adk-model)
adk-model = { version = "0.5.0", features = ["fireworks", "together", "mistral", "perplexity", "cerebras", "sambanova", "bedrock", "azure-ai"] }
```
## Documentation
- API Reference
- Official Guides
- Examples — 120+ working examples
- Wiki — Comprehensive guides and tutorials
## License
Apache 2.0