-
safetensors
Functions to read and write safetensors, which aim to be safer than their PyTorch counterpart. The format is 8 bytes holding an unsigned int that is the size of a JSON header, the JSON…
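The header layout described above can be sketched in plain Rust over an in-memory buffer. This is a toy illustration of the file format, not the safetensors crate's API, and it stops at extracting the JSON header text:

```rust
// Toy parser for the safetensors header layout: an 8-byte
// little-endian u64 giving the JSON header size, followed by
// that many bytes of JSON metadata, then the raw tensor data.
// The real crate also validates the JSON and the tensor offsets.
fn read_header(buf: &[u8]) -> Option<&str> {
    let len_bytes: [u8; 8] = buf.get(..8)?.try_into().ok()?;
    let header_len = u64::from_le_bytes(len_bytes) as usize;
    let json = buf.get(8..8 + header_len)?;
    std::str::from_utf8(json).ok()
}

fn main() {
    let json = br#"{"weight":{"dtype":"F32","shape":[2,2],"data_offsets":[0,16]}}"#;
    let mut file = (json.len() as u64).to_le_bytes().to_vec();
    file.extend_from_slice(json);
    file.extend_from_slice(&[0u8; 16]); // tensor data would follow here
    assert_eq!(read_header(&file).unwrap(), std::str::from_utf8(json).unwrap());
    println!("header: {}", read_header(&file).unwrap());
}
```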
-
tantivy-tokenizer-api
Tokenizer API of tantivy
-
tch
Rust wrappers for the PyTorch C++ api (libtorch)
-
torch-sys
Low-level FFI bindings for the PyTorch C++ api (libtorch)
-
hf-hub
This crate aims to ease interaction with the Hugging Face Hub. It aims to be compatible with the huggingface_hub Python package…
-
tiktoken-rs
Encoding and decoding with the tiktoken library in Rust
-
rv
Random variables
-
tract-linalg
Tiny, no-nonsense, self contained, TensorFlow and ONNX inference
-
tract-nnef
Tiny, no-nonsense, self contained, TensorFlow and ONNX inference
-
tract-data
Tiny, no-nonsense, self contained, TensorFlow and ONNX inference
-
criterion-stats
Criterion's statistics library
-
tract-onnx-opl
Tiny, no-nonsense, self contained, TensorFlow and ONNX inference
-
ort
A safe Rust wrapper for ONNX Runtime 1.17 - Optimize and Accelerate Machine Learning Inferencing
-
tract-pulse-opl
Tiny, no-nonsense, self contained, TensorFlow and ONNX inference
-
candle-core
Minimalist ML framework
-
linfa
A Machine Learning framework for Rust
-
re_smart_channel
A channel that keeps track of latency and queue length
-
tract-core
Tiny, no-nonsense, self contained, TensorFlow and ONNX inference
-
openvino
High-level bindings for OpenVINO
-
linfa-nn
A collection of nearest neighbour algorithms
-
linfa-clustering
A collection of clustering algorithms
-
re_int_histogram
A histogram with i64 keys and u32 counts, supporting both sparse and dense uses
-
openvino-sys
Low-level bindings for OpenVINO (use the openvino crate for easier-to-use bindings)
-
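The sparse use case of re_int_histogram (i64 keys, u32 counts, range queries) can be approximated with a std BTreeMap. This is a conceptual sketch only; the actual crate uses a specialized tree that also handles dense key ranges efficiently:

```rust
use std::collections::BTreeMap;

// Sparse histogram with i64 keys and u32 counts, conceptually
// similar to re_int_histogram's sparse use case.
#[derive(Default)]
struct IntHistogram {
    counts: BTreeMap<i64, u32>,
}

impl IntHistogram {
    fn increment(&mut self, key: i64) {
        *self.counts.entry(key).or_insert(0) += 1;
    }

    /// Total count over an inclusive key range.
    fn range_count(&self, min: i64, max: i64) -> u64 {
        self.counts.range(min..=max).map(|(_, &c)| c as u64).sum()
    }
}

fn main() {
    let mut h = IntHistogram::default();
    for key in [-5, -5, 0, 3, 1_000_000] {
        h.increment(key);
    }
    assert_eq!(h.range_count(-10, 10), 4);
    assert_eq!(h.range_count(i64::MIN, i64::MAX), 5);
    println!("counts in [-10, 10]: {}", h.range_count(-10, 10));
}
```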
burn-tensor
Tensor library with user-friendly APIs and automatic differentiation support
-
ort-sys
Unsafe Rust bindings for ONNX Runtime 1.17 - Optimize and Accelerate Machine Learning Inferencing
-
burn-autodiff
Automatic differentiation backend for the Burn framework
-
burn
Flexible and Comprehensive Deep Learning Framework in Rust
-
burn-tch
LibTorch backend for the Burn framework using the tch bindings
-
burn-ndarray
Ndarray backend for the Burn framework
-
burn-wgpu
WGPU backend for the Burn framework
-
burn-fusion
Kernel fusion backend decorator for the Burn framework
-
burn-common
Common crate for the Burn framework
-
burn-compute
Compute crate that helps create high-performance async backends
-
burn-train
Training crate for the Burn framework
-
burn-candle
Candle backend for the Burn framework
-
burn-dataset
Dataset APIs for creating ML data pipelines
-
tract-hir
Tiny, no-nonsense, self contained, TensorFlow and ONNX inference
-
tract-libcli
Tiny, no-nonsense, self contained, TensorFlow and ONNX inference
-
fsrs
FSRS (spaced repetition algorithm) for Rust, including Optimizer and Scheduler
-
burn-core
Flexible and Comprehensive Deep Learning Framework in Rust
-
candle-nn
Minimalist ML framework
-
find_cuda_helper
Helper crate for searching for CUDA libraries
-
tflite
Rust bindings for TensorFlow Lite
-
tract-pulse
Tiny, no-nonsense, self contained, TensorFlow and ONNX inference
-
tract-onnx
Tiny, no-nonsense, self contained, TensorFlow and ONNX inference
-
rust_tokenizers
High performance tokenizers for Rust
-
tfrecord
De/serialization of the TFRecord data format used by TensorFlow and TensorBoard
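The TFRecord framing that crates like tfrecord handle is simple enough to sketch with std only. Each record is a little-endian u64 length, a masked CRC-32C of the length bytes, the payload, and a masked CRC-32C of the payload. This toy reader skips CRC verification (the real crate and TensorFlow check both checksums):

```rust
// Sketch of TFRecord on-disk framing:
//   u64 LE  payload length
//   u32 LE  masked CRC-32C of the length bytes (unchecked here)
//   [u8]    payload of `length` bytes
//   u32 LE  masked CRC-32C of the payload (unchecked here)
fn read_records(mut buf: &[u8]) -> Vec<&[u8]> {
    let mut records = Vec::new();
    while buf.len() >= 12 {
        let len = u64::from_le_bytes(buf[..8].try_into().unwrap()) as usize;
        let start = 12; // skip length field + its CRC
        let end = start + len;
        if buf.len() < end + 4 {
            break; // truncated record
        }
        records.push(&buf[start..end]);
        buf = &buf[end + 4..]; // skip payload CRC
    }
    records
}

fn main() {
    // Build two fake records with zeroed (unchecked) CRC fields.
    let mut file = Vec::new();
    for payload in [b"hello".as_slice(), b"tf".as_slice()] {
        file.extend_from_slice(&(payload.len() as u64).to_le_bytes());
        file.extend_from_slice(&[0u8; 4]);
        file.extend_from_slice(payload);
        file.extend_from_slice(&[0u8; 4]);
    }
    let records = read_records(&file);
    assert_eq!(records, vec![b"hello".as_slice(), b"tf".as_slice()]);
    println!("read {} records", records.len());
}
```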
-
ggml-sys-bleedingedge
Bleeding edge low-level bindings to GGML
-
tensorflow
Rust language bindings for TensorFlow
-
rgwml
A crate for reducing cognitive overload while using Rust for ML, AI, and data science operations
-
lance-testing
A columnar data format that is 100x faster than Parquet for random access
-
lance-arrow
Arrow Extension for Lance
-
lance-datagen
A columnar data format that is 100x faster than Parquet for random access
-
llm-samplers
Token samplers for large language models
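Two of the common sampling steps such a crate provides, temperature scaling and top-k filtering, can be sketched generically. This is not llm-samplers' actual trait-based API, just the underlying idea, and it ends with a softmax rather than a random draw so no RNG is needed:

```rust
// Temperature + top-k over raw logits, then softmax over the
// survivors. Returns (token_index, probability) pairs sorted by
// probability. Panics on empty input or k == 0 (sketch only).
fn top_k_probs(logits: &[f32], temperature: f32, k: usize) -> Vec<(usize, f32)> {
    // Lower temperature => sharper distribution.
    let scaled: Vec<f32> = logits.iter().map(|&l| l / temperature).collect();
    // Keep the k largest logits.
    let mut indexed: Vec<(usize, f32)> = scaled.iter().copied().enumerate().collect();
    indexed.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    indexed.truncate(k);
    // Softmax over the survivors (subtract max for numerical stability).
    let max = indexed[0].1;
    let exps: Vec<f32> = indexed.iter().map(|&(_, l)| (l - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    indexed
        .iter()
        .zip(&exps)
        .map(|(&(i, _), &e)| (i, e / sum))
        .collect()
}

fn main() {
    let logits = [1.0, 3.0, 0.5, 2.5];
    let probs = top_k_probs(&logits, 0.8, 2); // keeps tokens 1 and 3
    assert_eq!(probs.len(), 2);
    assert_eq!(probs[0].0, 1); // highest-logit token ranks first
    let total: f32 = probs.iter().map(|&(_, p)| p).sum();
    assert!((total - 1.0).abs() < 1e-6);
    println!("{probs:?}");
}
```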
-
lance-datafusion
Internal utilities used by other lance modules to simplify working with datafusion
-
lance
A columnar data format that is 100x faster than Parquet for random access
-
lance-linalg
A columnar data format that is 100x faster than Parquet for random access
-
lance-index
Lance indices implementation
-
lance-file
Lance file format
-
lance-io
I/O utilities for Lance
-
candle-kernels
CUDA kernels for Candle
-
candle-transformers
Minimalist ML framework
-
lance-table
Lance table format
-
lance-core
Lance Columnar Format -- Core Library
-
re_analytics
Rerun's analytics SDK
-
tensorflow-sys
The package provides bindings to TensorFlow
-
linfa-linear
A Machine Learning framework for Rust
-
lingua
An accurate natural language detection library, suitable for short text and mixed-language text
-
surrealml-core
The core machine learning library for SurrealML that enables SurrealDB to store and load ML models
-
smartcore
Machine Learning in Rust
-
mnist
MNIST data set parser
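The IDX files this kind of crate parses have a simple layout: a big-endian magic number (0x0803 for images, 0x0801 for labels), big-endian u32 dimensions, then raw bytes. A std-only sketch of the image-file case (not the mnist crate's API):

```rust
// Parse an MNIST IDX image file: magic 0x00000803 (big-endian),
// then u32 BE count, rows, cols, then count*rows*cols pixel bytes.
fn parse_idx_images(buf: &[u8]) -> Option<(usize, usize, usize, &[u8])> {
    let magic = u32::from_be_bytes(buf.get(..4)?.try_into().ok()?);
    if magic != 0x0000_0803 {
        return None; // not an IDX image file
    }
    let dim = |i: usize| -> Option<usize> {
        Some(u32::from_be_bytes(buf.get(i..i + 4)?.try_into().ok()?) as usize)
    };
    let (count, rows, cols) = (dim(4)?, dim(8)?, dim(12)?);
    let pixels = buf.get(16..16 + count * rows * cols)?;
    Some((count, rows, cols, pixels))
}

fn main() {
    // Synthetic "image file": one 2x2 image.
    let mut file = Vec::new();
    file.extend_from_slice(&0x0000_0803u32.to_be_bytes()); // magic
    for dim in [1u32, 2, 2] {
        file.extend_from_slice(&dim.to_be_bytes());
    }
    file.extend_from_slice(&[0, 128, 255, 64]); // pixel data
    let (count, rows, cols, pixels) = parse_idx_images(&file).unwrap();
    assert_eq!((count, rows, cols), (1, 2, 2));
    assert_eq!(pixels, &[0, 128, 255, 64]);
    println!("{count} image(s) of {rows}x{cols}");
}
```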
-
rust-bert
Ready-to-use NLP pipelines and language models
-
lance-encoding
Encoders and decoders for the Lance file format
-
langchain-rust
LangChain for Rust, the easiest way to write LLM-based programs in Rust
-
openai-api-rs
OpenAI API client library for Rust (unofficial)
-
tdigest
T-Digest algorithm in Rust
-
linfa-kernel
Kernel methods for non-linear algorithms
-
web-rwkv
RWKV language model in pure WebGPU
-
candle-metal-kernels
Metal kernels for Candle
-
ollama-rs
A library for interacting with the Ollama API
-
linfa-reduction
A collection of dimensionality reduction techniques
-
dfdx
Ergonomic auto differentiation in Rust, with pytorch like apis
-
openai
An unofficial Rust library for the OpenAI API
-
linfa-elasticnet
A Machine Learning framework for Rust
-
liboxen
Oxen is a fast version control system for unstructured data, written in Rust, to help version datasets
-
glowrs
SentenceTransformers for candle-rs
-
rstats
Statistics, Information Measures, Data Analysis, Linear Algebra, Clifford Algebra, Machine Learning, Geometric Median, Matrix Decompositions, PCA, Mahalanobis Distance, Hulls, Multithreading
-
petal-clustering
A collection of clustering algorithms
-
fastembed
https://github.com/qdrant/fastembed
-
rten
Machine learning runtime
-
llama-cpp-2
llama.cpp bindings for Rust
-
rten-tensor
Tensor library for the RTen machine learning runtime
-
FerriteChatter
ChatGPT CLI
-
llama-cpp-sys-2
Low Level Bindings to llama.cpp
-
finalfusion
Reader and writer for common word embedding formats
-
burn-tensor-testgen
Test generation crate for burn-tensor
-
rten-vecmath
SIMD vectorized implementations of various math functions used in ML models
-
llama-core
The core component of LlamaEdge
-
cudnn
A safe Rust wrapper for CUDA's cuDNN
-
fastapprox
Fast approximate versions of certain functions that arise in machine learning
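One representative trick from this family is the classic "fast pow2" approximation (Schraudolph's method), which builds an f32 directly by writing a scaled value into the exponent bits. The constant 126.94269504 appears in the fastapprox sources; this std-only sketch trades a few percent of accuracy for a handful of integer operations:

```rust
// Approximate 2^p by constructing the f32 bit pattern directly:
// the exponent field of an f32 occupies bits 23..31, so writing
// (p + bias) * 2^23 as an integer lands p (approximately) in the
// exponent. Accuracy is within a few percent.
fn faster_pow2(p: f32) -> f32 {
    let clipped = p.max(-126.0); // avoid denormal/underflow range
    f32::from_bits(((1u32 << 23) as f32 * (clipped + 126.942_695)) as u32)
}

fn faster_exp(x: f32) -> f32 {
    // exp(x) = 2^(x * log2(e))
    faster_pow2(x * std::f32::consts::LOG2_E)
}

fn main() {
    for x in [-2.0f32, -0.5, 0.0, 1.0, 3.0] {
        let approx = faster_exp(x);
        let exact = x.exp();
        let rel_err = ((approx - exact) / exact).abs();
        assert!(rel_err < 0.05, "x={x}: {approx} vs {exact}");
        println!("exp({x:>4}) ~ {approx:.4} (exact {exact:.4})");
    }
}
```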
-
llm-chain
A crate for running chains of LLMs (such as ChatGPT) in series to complete complex tasks, such as text summarization
-
cudnn-sys
FFI bindings to cuDNN
-
tlparse
Parse TORCH_LOG logs produced by PyTorch torch.compile
-
instant-clip-tokenizer
Fast text tokenizer for the CLIP neural network
-
aleph-alpha-client
Interact with large language models provided by the Aleph Alpha API in Rust code
-
rten-imageio
Utilities for loading images for use with RTen
-
lance-test-macros
A columnar data format that is 100x faster than Parquet for random access
-
clust
An unofficial Rust client for the Anthropic/Claude API
-
pasta-msm
Optimized multiscalar multiplication for Pasta moduli for x86_64 and aarch64
-
ct2rs
Rust bindings for OpenNMT/CTranslate2