5 stable releases
| Version | Date |
|---|---|
| 2.1.1 | Nov 21, 2024 |
| 2.1.0 | Oct 31, 2024 |
| 2.0.1 | Oct 28, 2024 |
| 2.0.0 | Oct 23, 2024 |
| 1.0.0 | Oct 21, 2024 |
Aristech STT-Client for Rust
This is the Rust client implementation for the Aristech STT-Server.
Installation
To use the client in your project, add it to your `Cargo.toml` or use `cargo` to add it:

```sh
cargo add aristech-stt-client
```
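
Equivalently, the dependency can be declared by hand; a minimal `Cargo.toml` entry, pinning the latest release listed above, would be:

```toml
[dependencies]
aristech-stt-client = "2.1.1"
```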
Usage
```rust
use aristech_stt_client::{get_client, recognize_file, Auth, TlsOptions};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Connect to the server; the TLS options carry an optional root
    // certificate and the access credentials.
    let mut client = get_client(
        "https://stt.example.com",
        Some(TlsOptions {
            ca_certificate: None,
            auth: Some(Auth {
                token: "your-token",
                secret: "your-secret",
            }),
        }),
    )
    .await?;

    // Recognize a local audio file; `None` uses the default configuration.
    let results = recognize_file(&mut client, "path/to/audio/file.wav", None).await?;
    for result in results {
        // Print the text of the first alternative of the first chunk.
        println!(
            "{}",
            result.chunks.get(0).unwrap().alternatives.get(0).unwrap().text
        );
    }
    Ok(())
}
```
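
The `unwrap` calls above panic if a result arrives without chunks or alternatives. A more defensive loop over the same fields (using nothing beyond the types already shown) could look like this:

```rust
for result in results {
    // Walk every chunk and print its best alternative, skipping empty chunks.
    for chunk in &result.chunks {
        if let Some(alternative) = chunk.alternatives.first() {
            println!("{}", alternative.text);
        }
    }
}
```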
There are several examples in the `examples` directory:
- `file.rs`: Demonstrates how to perform recognition on a file.
- `live.rs`: Demonstrates how to perform live recognition using the microphone.
- `models.rs`: Demonstrates how to get the available models from the server.
- `nlpFunctions.rs`: Demonstrates how to list the configured NLP-Servers and the corresponding functions.
- `nlpProcess.rs`: Demonstrates how to perform NLP processing on a text by using the STT-Server as a proxy.
- `account.rs`: Demonstrates how to retrieve the account information from the server.
You can run the examples directly using `cargo` like this:
- Create a `.env` file in the `rust` directory:
```sh
HOST=stt.example.com
# The credentials are optional but probably required for most servers:
TOKEN=your-token
SECRET=your-secret
# The following are optional:
# ROOT_CERT=your-root-cert.pem # If the server uses a self-signed certificate
# If neither credentials nor an explicit root certificate are provided,
# you can still enable SSL by setting the SSL environment variable to true:
# SSL=true
# MODEL=some-available-model
# NLP_SERVER=some-config
# NLP_PIPELINE=function1,function2
```
- Run the examples, e.g.:

```sh
cargo run --example live
```
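
For reference, the examples presumably wire these variables into the client before connecting. A minimal sketch of that setup, assuming the `dotenvy` crate for loading the `.env` file and assuming `Auth`'s fields accept owned `String`s (the usage example above passes string literals, so the exact field types may differ), might look like:

```rust
use aristech_stt_client::{get_client, Auth, TlsOptions};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load HOST, TOKEN, SECRET, ... from the .env file into the environment.
    dotenvy::dotenv().ok();

    let host = std::env::var("HOST")?;
    // Only build an Auth when both credentials are present
    // (assumption: the fields are owned Strings).
    let auth = match (std::env::var("TOKEN"), std::env::var("SECRET")) {
        (Ok(token), Ok(secret)) => Some(Auth { token, secret }),
        _ => None,
    };

    let url = format!("https://{host}");
    let mut client =
        get_client(url.as_str(), Some(TlsOptions { ca_certificate: None, auth })).await?;
    // ... proceed as in the usage example above.
    Ok(())
}
```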
Build
To build the library, run:

```sh
cargo build
```
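
For an optimized build, the standard cargo release profile applies:

```sh
cargo build --release
```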