Llama link
Set up the llama.cpp server
Manual
- Clone https://github.com/ggerganov/llama.cpp/
- Follow ./llama.cpp/docs/build.md to build the project
- Run the server (a quick connectivity check follows this list), e.g.:
  ./build/bin/llama-server -m ./models/7B/ggml-model-f16.gguf --prompt "Once pick an action" --json-schema '{}'
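Once the server is running, you can sanity-check it over HTTP. The following is a minimal sketch, independent of this crate, assuming the reqwest (with the "blocking" and "json" features) and serde_json crates and llama-server's default address of http://127.0.0.1:8080; it uses the server's /health and /completion endpoints.

use std::error::Error;

fn main() -> Result<(), Box<dyn Error>> {
    // Default llama-server address; adjust if you passed --host/--port.
    let base = "http://127.0.0.1:8080";

    // GET /health reports whether the model has finished loading.
    let health = reqwest::blocking::get(format!("{base}/health"))?.status();
    println!("health: {health}");

    // POST /completion asks the model for a short completion.
    let request = serde_json::json!({
        "prompt": "Once pick an action",
        "n_predict": 16,
    });
    let response: serde_json::Value = reqwest::blocking::Client::new()
        .post(format!("{base}/completion"))
        .json(&request)
        .send()?
        .json()?;
    println!("completion: {}", response["content"]);
    Ok(())
}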
NixOS
Options:
- Use the llama-cpp package from nixpkgs: https://search.nixos.org/options?channel=unstable&from=0&size=50&sort=relevance&type=packages&query=llama-cpp
- Use the flake at ./llama.cpp/flake.nix, e.g.:
{
  description = "My CUDA-enabled llama.cpp development environment";

  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    flake-parts.url = "github:hercules-ci/flake-parts";
    llama-cpp.url = "github:ggerganov/llama.cpp";
  };

  outputs = { self, nixpkgs, flake-parts, llama-cpp }@inputs:
    flake-parts.lib.mkFlake { inherit inputs; } {
      systems = [ "x86_64-linux" "aarch64-linux" ];
      perSystem = { config, self', inputs', pkgs, system, ... }: {
        devShells.default = pkgs.mkShell {
          # llama.cpp built with CUDA support from the upstream flake,
          # plus the CUDA toolchain and build tools from nixpkgs.
          buildInputs = [
            llama-cpp.packages.${system}.cuda
            pkgs.cudatoolkit
            pkgs.gcc
            pkgs.cmake
          ];
          # Point the build environment at the CUDA toolkit store path.
          shellHook = ''
            export CUDA_PATH=${pkgs.cudatoolkit}
            export LD_LIBRARY_PATH=${pkgs.cudatoolkit}/lib:$LD_LIBRARY_PATH
          '';
        };
      };
    };
}
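Assuming the upstream flake's cuda package ships the server binary, entering the shell with nix develop should put llama-server on PATH alongside the CUDA toolkit, so the server can be started directly from the dev shell as in the manual steps above.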