ggml-sys-bleedingedge

Bleeding edge Rust bindings for GGML.

Release Info

See VERSION.txt and ggml-tag-current.txt.

This repo is set up with a workflow that automatically checks for the latest GGML release several times per day. The workflow currently only builds for Linux x86: if that build succeeds, a new release and package are published.

Note that the GGML project is under very rapid development. Beyond the fact that the bindings could be generated and the package could be built (on x86 Linux, at least), you really can't make any assumptions about a release of this crate.

Releases are versioned as YYMMDDHHMM.0.0+sourcerepo-release.releasename, with the timestamp in UTC. As a purely illustrative example, a release built at 2023-12-31 18:30 UTC against a llama.cpp release tagged b1234 would be versioned 2312311830.0.0+llamacpp-release.b1234. At present, sourcerepo will be llamacpp (from the llama.cpp repo), but at some point it may change to point at the ggml repo instead (currently llama.cpp seems to get new features first). Build metadata after the + is informational only.


You can find the crate published here: https://crates.io/crates/ggml-sys-bleedingedge

Automatically generated documentation: https://docs.rs/ggml-sys-bleedingedge/
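
As a rough sketch of what using the raw bindings looks like (assuming bindgen's default naming for the GGML C API and a recent GGML graph API; the exact items available depend on the GGML snapshot a given release was generated from, so check the generated docs for the version you pin), a minimal end-to-end computation might be:

    use ggml_sys_bleedingedge as sys;

    fn main() {
        unsafe {
            // Give GGML a small self-contained arena to allocate from.
            let ctx = sys::ggml_init(sys::ggml_init_params {
                mem_size: 16 * 1024 * 1024,
                mem_buffer: std::ptr::null_mut(),
                no_alloc: false,
            });
            assert!(!ctx.is_null());

            // c = a + b on two scalar F32 tensors.
            let a = sys::ggml_new_f32(ctx, 1.5);
            let b = sys::ggml_new_f32(ctx, 2.5);
            let c = sys::ggml_add(ctx, a, b);

            // Build the forward graph and compute it on one thread.
            let graph = sys::ggml_new_graph(ctx);
            sys::ggml_build_forward_expand(graph, c);
            sys::ggml_graph_compute_with_ctx(ctx, graph, 1);

            println!("1.5 + 2.5 = {}", sys::ggml_get_f32_1d(c, 0));
            sys::ggml_free(ctx);
        }
    }

Since these are raw -sys bindings, the usual FFI caveats apply; a higher-level safe wrapper (such as rusty-ggml, which builds on this crate) is generally a friendlier starting point.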


Features

There is now experimental support for compiling with BLAS.

Available features:

  • no_k_quants - Disables building with k_quant quantizations (e.g. Q4_K).
  • no_accelerate - Only relevant on Mac, disables building with Accelerate.
  • use_cmake - Builds and links against libllama using cmake.
  • cublas - Nvidia's CUDA BLAS implementation.
  • clblast - OpenCL BLAS.
  • hipblas - AMD's ROCm/HIP BLAS implementation. Set the ROCM_PATH environment variable to point at your ROCm installation; it defaults to /opt/rocm. Note: unless your GPU is natively supported by ROCm, you will very likely need to set the HSA_OVERRIDE_GFX_VERSION environment variable, otherwise your app will crash immediately when initializing ROCm. For example, HSA_OVERRIDE_GFX_VERSION=10.3.0 works on an RX 6600.
  • openblas - OpenBLAS.
  • metal - Metal support, only available on Mac.
  • llamacpp_api - Include the llama.cpp C++ API in bindings.

Enabling any of the BLAS features or metal implies use_cmake, and you will need a working C++ compiler and cmake set up to build with these features. Due to current limitations in the llama.cpp cmake build system, it's necessary to build and link against libllama (which pulls in dependencies like libstdc++) even though only GGML is needed. Also, although the library can be built with cmake, there is no simple way to determine the necessary library search paths and libraries: the build script makes a reasonable guess, but if you have libraries in unusual locations or multiple installed versions, weird stuff may happen.
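
Because which backends actually end up compiled in depends on the enabled features, the toolchain, and what cmake manages to find, it can be worth verifying at runtime what you got. GGML exposes capability query functions for this; as a small sketch (again assuming bindgen's default naming for the C API), a quick check might look like:

    use ggml_sys_bleedingedge as sys;

    fn main() {
        // Each call reports whether the linked GGML was compiled with
        // the corresponding backend.
        unsafe {
            println!("BLAS:    {}", sys::ggml_cpu_has_blas() != 0);
            println!("cuBLAS:  {}", sys::ggml_cpu_has_cublas() != 0);
            println!("CLBlast: {}", sys::ggml_cpu_has_clblast() != 0);
            println!("Metal:   {}", sys::ggml_cpu_has_metal() != 0);
        }
    }

Build with e.g. cargo build --features cublas and the corresponding line should flip to true.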


The project has a slow, irresponsible person like me maintaining it. This is not an ideal situation.



License

The files under ggml-src/ are distributed under the terms of the MIT license. As they are simply copied from the source repo (see below), refer to that repo for definitive license and credit information.

Credit goes to the original authors: Copyright (c) 2023 Georgi Gerganov

The files and bindings are currently generated automatically from the llama.cpp project.

Build Scripts

Initially derived from the build script and bindings generation in the Rustformers llm project.
