1 unstable release: 0.1.0 (Oct 1, 2024)
#1658 in Web programming · 14KB · 130 lines
openai-realtime-proxy
Safely deploy OpenAI's Realtime APIs in less than 5 minutes!
The OpenAI Realtime API provides a seamless voice-to-voice conversation experience. To reduce latency, it establishes a WebSocket connection between the client and the backend. However, production apps typically need a proxy sitting in the middle to handle authentication and rate limiting, and to avoid leaking the API key or other sensitive data to the client.
This library takes care of the proxying part, allowing you to focus on the rest of your application.
```rust
use axum::{extract::WebSocketUpgrade, response::IntoResponse, routing::get, Router};

#[tokio::main]
async fn main() {
    let app = Router::new().route("/ws", get(ws_handler));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

async fn ws_handler(ws: WebSocketUpgrade) -> impl IntoResponse {
    // check for authentication/access/etc. here
    let proxy = realtime_proxy::Proxy::new(
        std::env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY env var not set."),
    );
    ws.on_upgrade(|socket| proxy.handle(socket))
}
```
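The `// check for authentication/access/etc.` comment is where you would gate access before upgrading the socket. As a minimal sketch of that check (the header format, token-comparison scheme, and function names below are illustrative assumptions, not part of this crate's API), a bearer-token validation could look like:

```rust
// Hypothetical bearer-token check. The expected token would come from your
// own configuration; it is unrelated to the OpenAI API key the proxy holds.
fn is_authorized(auth_header: Option<&str>, expected_token: &str) -> bool {
    match auth_header.and_then(|h| h.strip_prefix("Bearer ")) {
        Some(token) => token == expected_token,
        None => false,
    }
}

fn main() {
    // Requests without a valid `Authorization: Bearer <token>` header are rejected.
    assert!(is_authorized(Some("Bearer s3cret"), "s3cret"));
    assert!(!is_authorized(Some("Bearer wrong"), "s3cret"));
    assert!(!is_authorized(None, "s3cret"));
    println!("auth checks passed");
}
```

In the handler above, you could read the header via axum's `HeaderMap` extractor and return an unauthorized status before calling `ws.on_upgrade` when the check fails.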
Refer to the documentation on docs.rs for detailed usage instructions.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Dependencies
~16–28MB
~475K SLoC