#llm #build #cpp #llama #binary #server #compile

llama_cpp_low

Builds a small server binary from llama.cpp

18 releases

0.4.0 Dec 15, 2024
0.3.14 Sep 6, 2024
0.3.13 Jul 12, 2024
0.3.7 Jun 19, 2024
0.3.5 May 9, 2024

#45 in #llama

Download history (weekly, Sep 2024 – Dec 2024): ranged from 1/week to 195/week

343 downloads per month
Used in llm-daemon

MIT license

9.5MB
188K SLoC

C++ 105K SLoC // 0.1% comments
C 31K SLoC // 0.1% comments
Python 17K SLoC // 0.1% comments
CUDA 8K SLoC // 0.0% comments
GLSL 5.5K SLoC // 0.0% comments
Metal Shading Language 5K SLoC // 0.0% comments
OpenCL 4K SLoC
Objective-C 4K SLoC // 0.0% comments
JavaScript 3K SLoC // 0.1% comments
Shell 2K SLoC // 0.1% comments
Swift 1K SLoC // 0.0% comments
Kotlin 701 SLoC // 0.1% comments
Vim Script 671 SLoC // 0.1% comments
RPM Specfile 109 SLoC // 0.2% comments
Batch 78 SLoC // 0.2% comments
Prolog 36 SLoC
Rust 27 SLoC
INI 7 SLoC

Contains (JAR file, 60KB) gradle-wrapper.jar

llama-cpp-low

Script to build the llama.cpp server binary using cargo

Wait, are you sober?

I just wanted a daemon to run the LLM with minimal external dependencies...
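Since the crate's whole job is driving a llama.cpp compile from cargo, its build step presumably shells out to CMake from a `build.rs`. The sketch below is an assumption about how that could look, not the crate's actual script: the source path `vendor/llama.cpp`, the build directory, and the `llama-server` target name are all hypothetical, and a real build script would run these commands with `.status()` and fail the build on error rather than just printing them.

```rust
// Hypothetical sketch of a build.rs that compiles llama.cpp's server
// binary with CMake. Paths and target names are illustrative only.
use std::process::Command;

// Configure step: point CMake at the vendored llama.cpp sources.
fn cmake_configure(src: &str, build_dir: &str) -> Command {
    let mut cmd = Command::new("cmake");
    cmd.args(["-S", src, "-B", build_dir, "-DCMAKE_BUILD_TYPE=Release"]);
    cmd
}

// Build step: compile only the server target to keep the binary small.
fn cmake_build(build_dir: &str, target: &str) -> Command {
    let mut cmd = Command::new("cmake");
    cmd.args(["--build", build_dir, "--target", target, "--config", "Release"]);
    cmd
}

fn main() {
    // A real build.rs would call .status() on each command and panic on
    // failure; here we only print the composed invocations.
    println!("{:?}", cmake_configure("vendor/llama.cpp", "target/llama-build"));
    println!("{:?}", cmake_build("target/llama-build", "llama-server"));
}
```

A downstream crate such as llm-daemon could then locate the resulting binary at build time and spawn it as its LLM backend.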

No runtime deps