5 releases

| Version | Released |
|---|---|
| 0.1.4 | Nov 7, 2024 |
| 0.1.3 | Nov 7, 2024 |
| 0.1.2 | Nov 7, 2024 |
| 0.1.1 | Nov 7, 2024 |
| 0.1.0 | Nov 7, 2024 |
MagicAPI AI Gateway
The world's fastest AI Gateway proxy, written in Rust and optimized for maximum performance. This high-performance API gateway routes requests to various AI providers (OpenAI, GROQ) with streaming support, making it perfect for developers who need reliable and blazing-fast AI API access.
Features
- 🚀 Blazing fast performance - built in Rust with zero-cost abstractions
- ⚡ Optimized for low latency and high throughput
- 🔄 Unified API interface for multiple AI providers (OpenAI, GROQ)
- 📡 Real-time streaming support with minimal overhead
- 🔍 Built-in health checking
- 🛡️ Configurable CORS
- 🔀 Smart provider-specific request routing
- 📊 Efficient request/response proxying
- 💪 Production-ready and battle-tested
Quick Start
Installation
You can install MagicAPI Gateway using one of these methods:
Using Cargo Install
```shell
cargo install magicapi-ai-gateway
```
After installation, you can start the gateway by running:
```shell
magicapi-ai-gateway
```
Building from Source
- Clone the repository:

```shell
git clone https://github.com/magicapi/ai-gateway
cd ai-gateway
```

- Build the project:

```shell
cargo build --release
```

- Run the server:

```shell
cargo run --release
```
The server will start on `http://127.0.0.1:3000` by default.
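Once it is running, a quick request confirms the gateway is up. Note that the `/health` path below is an assumption based on the built-in health checking feature, not a documented route; adjust it to whatever your build exposes:

```shell
# Sanity check: the /health path is assumed, not confirmed by these docs
curl -i http://127.0.0.1:3000/health
```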
Running the Gateway
You can configure the gateway using environment variables:
```shell
# Basic configuration
export RUST_LOG=info

# Start the gateway
magicapi-ai-gateway

# Or with a custom port
PORT=8080 magicapi-ai-gateway
```
Usage
Making Requests
To make requests through the gateway, use the `/v1/*` endpoint and specify the provider with the `x-provider` header.
Example: OpenAI Request
```shell
curl -X POST http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-provider: openai" \
  -H "Authorization: Bearer your-openai-api-key" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
Example: GROQ Request
```shell
curl -X POST http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-provider: groq" \
  -H "Authorization: Bearer your-groq-api-key" \
  -d '{
    "model": "llama2-70b-4096",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true,
    "max_tokens": 300
  }'
```
Streaming Responses
To enable streaming, add the `Accept: text/event-stream` header to your request.
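As a sketch (the model name and API key are placeholders), a streamed request combines `"stream": true` in the body with the `Accept` header:

```shell
# Streamed chat completion via server-sent events;
# -N disables curl's output buffering so events print as they arrive
curl -N -X POST http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -H "x-provider: openai" \
  -H "Authorization: Bearer your-openai-api-key" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```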
Configuration
The gateway can be configured using environment variables:
```shell
RUST_LOG=debug   # Logging level (debug, info, warn, error)
```
Performance
MagicAPI Developer AI Gateway is designed for maximum performance:
- Zero-cost abstractions using Rust's ownership model
- Asynchronous I/O with Tokio for optimal resource utilization
- Connection pooling via Reqwest for efficient HTTP connections
- Memory-efficient request/response proxying
- Minimal overhead in the request path
- Optimized streaming response handling
Architecture
The gateway leverages the best-in-class Rust ecosystem:
- Axum - High-performance web framework
- Tokio - Industry-standard async runtime
- Tower-HTTP - Robust HTTP middleware
- Reqwest - Fast and reliable HTTP client
- Tracing - Zero-overhead logging and diagnostics
Security Notes
- Always run behind a reverse proxy in production
- Configure CORS appropriately for your use case
- Use environment variables for sensitive configuration
- Consider adding rate limiting for production use
Contributing
We welcome contributions! Please see our CONTRIBUTING.md for guidelines.
Development Setup
```shell
# Install development dependencies
cargo install cargo-watch

# Run tests
cargo test

# Run with hot reload
cargo watch -x run
```
Troubleshooting
Common Issues
- Connection Refused
  - Check if port 3000 is available
  - Verify the HOST and PORT settings
- Streaming Not Working
  - Ensure the `Accept: text/event-stream` header is set
  - Check that the client supports streaming
  - Verify the provider supports streaming for the requested endpoint
- Provider Errors
  - Verify provider API keys are correct
  - Check provider-specific headers are properly set
  - Ensure the provider endpoint exists and is correctly formatted
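When diagnosing provider errors, it can help to rule the gateway out by calling the provider directly with the same key. This sketch uses OpenAI's models endpoint; if the direct call also fails, the problem is the key rather than the gateway:

```shell
# Call the provider directly with the same credentials
curl -H "Authorization: Bearer your-openai-api-key" \
  https://api.openai.com/v1/models
```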
Support
For support, please open an issue in the GitHub repository. Our community is active and happy to help!
License
This project is dual-licensed under both the MIT License and the Apache License (Version 2.0). You may choose either license at your option. See the LICENSE-MIT and LICENSE-APACHE files for details.
Acknowledgments
Special thanks to all our contributors and the Rust community for making this project possible.
Docker Support
Building and Running with Docker
- Build the Docker image:

```shell
docker build -t magicapi1/magicapi-ai-gateway:latest .
```

- Run the container:

```shell
docker run -p 3000:3000 \
  -e RUST_LOG=info \
  magicapi1/magicapi-ai-gateway:latest
```
Using Pre-built Docker Image
```shell
docker pull magicapi1/magicapi-ai-gateway:latest

docker run -p 3000:3000 \
  -e RUST_LOG=info \
  magicapi1/magicapi-ai-gateway:latest
```
Docker Compose
Option 1: Build from Source
Create a `docker-compose.yml` file:

```yaml
version: '3.8'

services:
  gateway:
    build: .
    ports:
      - "3000:3000"
    environment:
      - RUST_LOG=info
    restart: unless-stopped
```
Option 2: Use Prebuilt Image
Create a `docker-compose.yml` file:

```yaml
version: '3.8'

services:
  gateway:
    image: magicapi1/magicapi-ai-gateway:latest
    ports:
      - "3000:3000"
    environment:
      - RUST_LOG=info
    restart: unless-stopped
```
Then run either option with:
```shell
docker-compose up -d
```