#inference #nvidia #triton #deep-learning

triton-client

A client for interfacing with the NVIDIA Triton Inference Server

3 releases (breaking)

0.2.0 Aug 15, 2022
0.1.0 Jun 16, 2022
0.0.1 Jun 16, 2022

#541 in Machine learning

48 downloads per month

Apache-2.0

64KB
1K SLoC

C++: 802 SLoC // 0.2% comments
Rust: 235 SLoC
Python: 73 SLoC // 0.3% comments

triton-client-rs

A Rust gRPC client library for NVIDIA Triton.

This library provides the necessary setup to generate a Triton client from NVIDIA's Protocol Buffers definitions.
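
One common way to generate such a client in Rust is to compile Triton's grpc_service.proto with tonic-build in a build script. The sketch below shows that general approach only; the proto path and build setup are assumptions for illustration and may not match this crate's actual build.

// build.rs — minimal sketch of gRPC stub generation with tonic-build.
// Assumes Triton's grpc_service.proto has been vendored under proto/ (an assumption).
fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::compile_protos("proto/grpc_service.proto")?;
    Ok(())
}

With the generated types in place, basic use of the client looks like this: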

use triton_client::Client;

// Unauthenticated use of Triton (no API key).
let client = Client::new("http://localhost:8001/", None).await?;
let models = client
    .repository_index(triton_client::inference::RepositoryIndexRequest {
        repository_name: "".into(), // Empty name: list models from all repositories.
        ready: false,               // Show all models, not just those ready for inference.
    })
    .await?;
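
Other Triton gRPC calls should follow the same pattern. As a hedged sketch, assuming the wrapper exposes methods mirroring the service's RPC names in snake_case (as repository_index does above), a model readiness check might look roughly like the following; the model_ready method, the field names, and "my_model" are assumptions based on Triton's grpc_service.proto, not verified against this crate:

// Hedged sketch: method and field names assumed from Triton's grpc_service.proto.
let ready = client
    .model_ready(triton_client::inference::ModelReadyRequest {
        name: "my_model".into(), // hypothetical model name
        version: "".into(),      // empty string selects the server's default version policy
    })
    .await?;
println!("model ready: {}", ready.ready);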

Dependencies

~13–29MB
~428K SLoC