
ggml-sys-bleedingedge

Bleeding edge Rust bindings for GGML.

Release Info

See VERSION.txt, ggml-tag-current.txt

This repo is set up with a workflow that automatically checks for the latest GGML release several times per day. The workflow currently only builds for x86 Linux: if that build succeeds, a new release and package are published.

Note that the GGML project is undergoing very rapid development. Other than being able to generate the bindings and build the package (on x86 Linux, at least), you really can't make any assumptions about a release of this crate.

Releases are in the format YYMMDDHHMM.0.0+sourcerepo-release.releasename (UTC). At present, sourcerepo will be llamacpp (from the llama.cpp repo), but at some point it may change to point to the ggml repo instead (currently llama.cpp seems to get new features first). The build metadata after the + is informational only.

Crate

You can find the crate published here: https://crates.io/crates/ggml-sys-bleedingedge

Automatically generated documentation: https://docs.rs/ggml-sys-bleedingedge/
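
For illustration, here is a minimal sketch (not taken from the crate's documentation) of driving the raw bindings directly: it creates a GGML context, allocates a small tensor in it, and frees the context. It assumes the bindgen-generated items are re-exported at the crate root, as is typical for -sys crates; the exact name of the type constant (ggml_type_GGML_TYPE_F32 here) depends on how bindgen renders the C enum and may differ.

```rust
use ggml_sys_bleedingedge as ggml_sys;

fn main() {
    unsafe {
        // GGML allocates everything out of a context; reserve a small scratch buffer.
        let params = ggml_sys::ggml_init_params {
            mem_size: 16 * 1024 * 1024, // 16 MiB
            mem_buffer: std::ptr::null_mut(),
            no_alloc: false,
        };
        let ctx = ggml_sys::ggml_init(params);
        assert!(!ctx.is_null(), "ggml_init failed");

        // Allocate a 1-D f32 tensor with 8 elements inside the context.
        // NOTE: the constant name below is an assumption about the bindgen output.
        let tensor = ggml_sys::ggml_new_tensor_1d(ctx, ggml_sys::ggml_type_GGML_TYPE_F32, 8);
        assert!(!tensor.is_null());

        // Freeing the context releases the tensor along with everything else in it.
        ggml_sys::ggml_free(ctx);
    }
}
```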

Features

There is now experimental support for compiling with BLAS.

Available features:

  • no_k_quants - Disables building with the k_quant quantizations (e.g. Q4_K).
  • no_accelerate - Only relevant on Mac; disables building with Accelerate.
  • use_cmake - Builds and links against libllama using cmake.
  • cublas - Nvidia's CUDA BLAS implementation.
  • clblast - OpenCL BLAS.
  • hipblas - AMD's ROCM/HIP BLAS implementation. Set the ROCM_PATH environment variable to point at your ROCM installation; it defaults to /opt/rocm. Note: unless your GPU is natively supported by ROCM, it's very likely you'll need to set the HSA_OVERRIDE_GFX_VERSION environment variable, otherwise your app will immediately crash when initializing ROCM. For example, on an RX 6600, HSA_OVERRIDE_GFX_VERSION=10.3.0 works.
  • openblas - OpenBLAS.
  • metal - Metal support, only available on Mac.
  • llamacpp_api - Include the llama.cpp C++ API in bindings.

Enabling any of the BLAS features or metal implies use_cmake, and you will need a working C++ compiler and cmake set up to build with these features. Due to current limitations in the llama.cpp cmake build system, it's necessary to build and link against libllama (which pulls in things like libstdc++) even though only GGML is needed. Also, although we can build the library using cmake, there's no simple way to know the necessary library search paths and libraries: we try to make a reasonable choice here, but if you have libraries in unusual locations or multiple versions installed, weird stuff may happen.
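
To check which backends actually got compiled into your build, a sketch like the following may work. It assumes the ggml_cpu_has_* reporting functions from ggml.h are exposed by the generated bindings; each returns 1 when the corresponding backend was compiled in and 0 otherwise.

```rust
use ggml_sys_bleedingedge as ggml_sys;

fn main() {
    // Each function reports 1 if that backend was compiled in, 0 otherwise.
    unsafe {
        println!("BLAS:    {}", ggml_sys::ggml_cpu_has_blas());
        println!("cuBLAS:  {}", ggml_sys::ggml_cpu_has_cublas());
        println!("CLBlast: {}", ggml_sys::ggml_cpu_has_clblast());
        println!("Metal:   {}", ggml_sys::ggml_cpu_has_metal());
    }
}
```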

Limitations

The project has a slow, irresponsible person like me maintaining it. This is not an ideal situation.

Credits

GGML

The files under ggml-src/ are distributed under the terms of the MIT license. As they are simply copied from the source repo (see below), refer to that repo for definitive information on the license and credits.

Credit goes to the original authors: Copyright (c) 2023 Georgi Gerganov

Currently automatically generated from the llama.cpp project.

Build Scripts

Initially derived from the build script and bindings generation in the Rustformers llm project.
