cnctd_ai

A Rust abstraction layer for AI/LLM providers (Anthropic Claude, OpenAI, Google Gemini, OpenRouter) with integrated MCP (Model Context Protocol) support and autonomous agent framework.

Features

  • Multi-Provider Support: Unified interface for Anthropic Claude, OpenAI, Google Gemini, and OpenRouter
  • Streaming & Non-Streaming: Support for both regular completions and streaming responses
  • Tool Calling: Full support for function/tool calling across all providers
  • Agent Framework: Autonomous task execution with tool calling loops
  • MCP Integration: Native support for MCP servers (stdio and HTTP gateway)
  • Error Handling: Comprehensive error types with provider-specific handling
  • Type Safety: Strong typing throughout the API

Installation

cargo add cnctd_ai
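
Or add it to your Cargo.toml manually:

[dependencies]
cnctd_ai = "0.1"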

Quick Start

Agent Framework

The easiest way to build autonomous AI applications:

use cnctd_ai::{Agent, Client, AnthropicConfig, McpGateway};

// Setup client and gateway
let client = Client::anthropic(
    AnthropicConfig {
        api_key: "your-key".into(),
        model: "claude-sonnet-4-20250514".into(),
        version: None,
    },
    None,
)?;

let gateway = McpGateway::new("https://mcp.cnctd.world");

// Create agent with default settings
let agent = Agent::new(&client).with_gateway(&gateway);

// Run autonomous task - agent will use tools as needed
let trace = agent.run_simple(
    "Research the latest Rust async trends and summarize key findings"
).await?;

// View results
trace.print_summary();

For advanced configuration:

use std::time::Duration;

let agent = Agent::builder(&client)
    .max_iterations(10)
    .max_duration(Duration::from_secs(300))
    .system_prompt("You are a helpful research assistant.")
    .gateway(&gateway)
    .build();
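
Pieced together, the snippets above form a complete program along these lines. This is a sketch, not one of the crate's examples: it assumes a tokio runtime, that the crate's error type converts into Box<dyn std::error::Error>, and it reads ANTHROPIC_API_KEY from the environment (see Environment Variables below):

use std::time::Duration;

use cnctd_ai::{Agent, AnthropicConfig, Client, McpGateway};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Read the API key from the environment rather than hardcoding it
    let client = Client::anthropic(
        AnthropicConfig {
            api_key: std::env::var("ANTHROPIC_API_KEY")?.into(),
            model: "claude-sonnet-4-20250514".into(),
            version: None,
        },
        None,
    )?;

    let gateway = McpGateway::new("https://mcp.cnctd.world");

    // Configure the agent via the builder shown above
    let agent = Agent::builder(&client)
        .max_iterations(10)
        .max_duration(Duration::from_secs(300))
        .system_prompt("You are a helpful research assistant.")
        .gateway(&gateway)
        .build();

    let trace = agent
        .run_simple("Research the latest Rust async trends and summarize key findings")
        .await?;

    trace.print_summary();
    Ok(())
}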

See Agent Framework Documentation for more details.

Basic Completion

use cnctd_ai::{Client, AnthropicConfig, Message, CompletionRequest};

let client = Client::anthropic(
    AnthropicConfig {
        api_key: "your-api-key".into(),
        model: "claude-sonnet-4-20250514".into(),
        version: None,
    },
    None,
)?;

let request = CompletionRequest {
    messages: vec![Message::user("Hello, how are you?")],
    tools: None,
    options: None,
};

let response = client.complete(request).await?;
println!("Response: {}", response.text());

Streaming

use cnctd_ai::{Client, AnthropicConfig, Message, CompletionRequest};
use futures::StreamExt; // `next()` comes from StreamExt (assumes the stream implements futures::Stream)
use std::io::Write; // for flushing stdout below

// Reuse `client` and `request` from the Basic Completion example above
let mut stream = client.complete_stream(request).await?;

while let Some(chunk) = stream.next().await {
    let chunk = chunk?;
    if let Some(text) = chunk.text() {
        print!("{}", text);
        std::io::stdout().flush()?; // flush so tokens appear as they arrive
    }
}

Tool Calling

use cnctd_ai::{Client, Message, CompletionRequest, create_tool};
use serde_json::json;

// Create a tool using the helper function
let weather_tool = create_tool(
    "get_weather",
    "Get the current weather for a location",
    json!({
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"]
            }
        },
        "required": ["location"]
    })
)?;

let mut request = CompletionRequest {
    messages: vec![Message::user("What's the weather in SF?")],
    tools: None,
    options: None,
};
request.add_tool(weather_tool);

let response = client.complete(request).await?;

// Check if model wants to use a tool
if let Some(tool_use) = response.tool_use() {
    println!("Tool: {}", tool_use.name);
    println!("Arguments: {}", tool_use.input);
    
    // Execute tool and continue conversation
    // See examples/tool_calling.rs for full implementation
}
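
To complete the loop, execute the tool yourself and send the result back. A sketch of that continuation, using the ToolResult and Message::tool_results types shown later in this README (run_weather_lookup is a hypothetical stand-in for your own tool implementation):

use cnctd_ai::{CompletionRequest, Message, ToolResult};

if let Some(tool_use) = response.tool_use() {
    // Run your own implementation of the requested tool (hypothetical helper)
    let output = run_weather_lookup(&tool_use.input);

    // Echo the assistant message that made the call, then attach the matching result
    let followup = client.complete(CompletionRequest {
        messages: vec![
            Message::user("What's the weather in SF?"),
            response.message.clone(),
            Message::tool_results(vec![ToolResult::new(
                tool_use.call_id.clone().unwrap_or_else(|| tool_use.id.clone()),
                output,
            )]),
        ],
        tools: None,
        options: None,
    }).await?;

    println!("{}", followup.text());
}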

MCP Gateway Integration

use cnctd_ai::McpGateway;
use serde_json::json;

let gateway = McpGateway::new("https://mcp.cnctd.world");

// List available servers
let servers = gateway.list_servers().await?;

// List tools from a specific server
let tools = gateway.list_tools("brave-search").await?;

// Execute a tool
let result = gateway.call_tool(
    "brave-search",
    "brave_web_search",
    Some(json!({"query": "Rust programming"})),
).await?;

Examples

The repository includes several examples:

Agent Framework:

  • agent_simple.rs - Minimal agent setup
  • agent_basic.rs - Full-featured agent with configuration

Core Functionality:

  • basic_completion.rs - Simple completion example
  • streaming.rs - Streaming responses
  • tool_calling.rs - Function/tool calling
  • tool_calling_streaming.rs - Tool calling with streaming
  • conversation.rs - Multi-turn conversations
  • error_handling.rs - Error handling patterns
  • mcp_gateway.rs - MCP gateway integration

Run examples with:

cargo run --example agent_simple
cargo run --example basic_completion

Environment Variables

Set these for the examples:

ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
GEMINI_API_KEY=your-gemini-key
GATEWAY_URL=https://mcp.cnctd.world  # Optional
GATEWAY_TOKEN=your-token  # Optional

Tool Creation Helpers

The library provides helper functions for easier tool creation:

use cnctd_ai::{create_tool, create_tool_borrowed};

// For owned strings (runtime data)
let tool = create_tool(name, description, schema)?;

// For static strings (compile-time constants)
let tool = create_tool_borrowed(name, description, schema)?;
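
For example, a tool defined entirely from string literals can use the borrowed variant (illustrative; the schema mirrors the weather example above):

use cnctd_ai::create_tool_borrowed;
use serde_json::json;

let tool = create_tool_borrowed(
    "get_weather",
    "Get the current weather for a location",
    json!({
        "type": "object",
        "properties": {
            "location": { "type": "string" }
        },
        "required": ["location"]
    }),
)?;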

OpenAI Responses API & Multi-Turn Tool Calls

cnctd_ai uses OpenAI's newer Responses API (/v1/responses) for GPT-4, GPT-4.1, GPT-5, and reasoning models (o1, o3). This provides better tool calling support but has specific requirements for multi-turn conversations:

Key Concepts

  • call_id: OpenAI uses call_id (format: call_...) to match function_call items with their function_call_output responses
  • ToolUse.call_id: The library captures this from API responses and stores it in ToolUse.call_id
  • ToolResult.effective_call_id(): Returns the correct ID to use when sending tool results back

Reasoning Models (GPT-5.2-pro, o1, o3)

Reasoning models require special handling for multi-turn tool calls:

  1. Encrypted reasoning content: The library automatically requests reasoning.encrypted_content for reasoning models
  2. Reasoning items: Must be echoed back in continuation requests via Message.reasoning_items

The library handles this automatically -- just ensure you preserve reasoning_items when building continuation messages:

// After getting a response with tool calls
let response = client.complete(request).await?;

// The response.message includes reasoning_items if present
// When building the next request, include the full message
messages.push(response.message.clone());

// Add tool results
let tool_result = ToolResult::new(
    tool_use.call_id.clone().unwrap_or_else(|| tool_use.id.clone()),
    output,
);
messages.push(Message::tool_results(vec![tool_result]));

Application Considerations

When persisting and reconstructing conversations from a database, keep these points in mind (a storage sketch follows the list):

  1. Store call_id: Save both tool_use_id and call_id from tool calls
  2. Match 1:1: Every function_call must have a matching function_call_output
  3. Preserve reasoning: Store and restore reasoning_items for reasoning models
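
A minimal storage shape covering these points might look like the following. This is purely illustrative -- the struct and field names are hypothetical, not part of the crate's API:

use serde::{Deserialize, Serialize};
use serde_json::Value;

// Hypothetical database row for one persisted tool call
#[derive(Serialize, Deserialize)]
struct StoredToolCall {
    tool_use_id: String,            // provider-agnostic tool use id
    call_id: Option<String>,        // OpenAI call_... id, when present
    output: Value,                  // the matching function_call_output (must be 1:1)
    reasoning_items: Option<Value>, // echoed back in continuations for reasoning models
}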

Error Handling

The library provides comprehensive error types:

use cnctd_ai::Error;

match client.complete(request).await {
    Ok(response) => { /* handle success */ },
    Err(Error::AuthenticationFailed(msg)) => { /* handle auth */ },
    Err(Error::RateLimited { retry_after }) => { /* handle rate limit */ },
    Err(Error::ProviderError { provider, message, status_code }) => { /* handle provider error */ },
    Err(e) => { /* handle other errors */ },
}
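
Building on the match above, a simple retry on rate limits might look like this. This is a sketch under stated assumptions: a tokio runtime, a CompletionRequest that implements Clone, and retry_after being an optional number of seconds (the crate's actual type may differ):

use std::time::Duration;
use cnctd_ai::Error;

let response = loop {
    match client.complete(request.clone()).await {
        Ok(response) => break response,
        Err(Error::RateLimited { retry_after }) => {
            // Assumption: retry_after is Option<u64> seconds; default to 5s
            let secs = retry_after.unwrap_or(5);
            tokio::time::sleep(Duration::from_secs(secs)).await;
        }
        Err(e) => return Err(e.into()),
    }
};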

cnctd_ai_server

The repository includes a subcrate at crates/cnctd_ai_server/ -- an Axum-based REST API that provides:

  • Streaming SSE chat with full tool-calling loops
  • MCP integration for tool discovery and execution
  • 4-point data obfuscation protecting sensitive entity data during AI conversations
  • Agent execution with background task management

The obfuscation system is fully dynamic -- the server fetches its entity dictionary from an HTTP source URL hosted by your application. No entity types are hardcoded.

See the Obfuscation Setup Guide for integration details.

License

MIT License - see LICENSE file for details.
