#llm #build #cpp #llama #binary #server #compile

llama_cpp_low

Small server binary built from llama.cpp

7 releases

0.3.5 (new) May 9, 2024
0.3.3 May 8, 2024
0.3.2 May 6, 2024
0.3.1 May 5, 2024
0.3.0 May 5, 2024

#31 in #llama


671 downloads per month
Used in llm-daemon

MIT license

7MB
138K SLoC

C++: 72K SLoC (0.1% comments)
C: 30K SLoC (0.1% comments)
Python: 11K SLoC (0.2% comments)
CUDA: 9K SLoC (0.0% comments)
Metal Shading Language: 5.5K SLoC (0.0% comments)
Objective-C: 2.5K SLoC (0.0% comments)
Shell: 2.5K SLoC (0.2% comments)
GLSL: 1K SLoC (0.0% comments)
Swift: 1K SLoC (0.0% comments)
JavaScript: 1K SLoC (0.1% comments)
Kotlin: 630 SLoC (0.1% comments)
Gherkin (Cucumber): 506 SLoC (0.1% comments)
RPM Specfile: 163 SLoC (0.2% comments)
Zig: 146 SLoC (0.0% comments)
Vim Script: 135 SLoC (0.1% comments)
Batch: 78 SLoC (0.2% comments)
Rust: 35 SLoC
Prolog: 18 SLoC
INI: 7 SLoC

Contains (JAR file, 60KB) gradle-wrapper.jar

llama-cpp-low

Script to build the llama.cpp server binary using cargo

Wait, are you sober?

I just wanted a daemon to run the LLM with minimal external dependencies...
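In rough outline, the idea is to drive llama.cpp's native CMake build from a cargo build script, so `cargo build` can produce the server binary without extra tooling. A minimal sketch of that approach, assuming vendored llama.cpp sources in a `llama.cpp/` directory and its `server` CMake target; this is illustrative, not the crate's actual code:

```rust
// build.rs — illustrative sketch only; not the crate's actual code.
// Drives llama.cpp's CMake build from cargo to produce the server binary.
use std::env;
use std::process::Command;

fn main() {
    // cargo provides OUT_DIR as the scratch directory for build artifacts.
    let out_dir = env::var("OUT_DIR").expect("cargo sets OUT_DIR");

    // Configure the vendored llama.cpp sources (the path is an assumption).
    let status = Command::new("cmake")
        .args(["-S", "llama.cpp", "-B", out_dir.as_str()])
        .status()
        .expect("failed to spawn cmake");
    assert!(status.success(), "cmake configure failed");

    // Build only the `server` target (llama.cpp's HTTP server binary).
    let status = Command::new("cmake")
        .args(["--build", out_dir.as_str(), "--target", "server", "--config", "Release"])
        .status()
        .expect("failed to spawn cmake --build");
    assert!(status.success(), "cmake build failed");

    // Re-run this build script whenever the vendored sources change.
    println!("cargo:rerun-if-changed=llama.cpp");
}
```

Doing the compilation at build time like this keeps the crate itself tiny (35 SLoC of Rust in the breakdown above) while the heavy lifting stays in llama.cpp's own build system.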

No runtime deps