#llama #cpp #interface #link #server

llama_link

A llama.cpp server interface

2 unstable releases

0.2.0 Dec 16, 2024
0.1.0 Dec 13, 2024

#14 in #llama

75 downloads per month

Custom license

23KB
201 lines

Llama link

Set up a llama.cpp server

Manual
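
A minimal sketch of a manual build and launch, assuming a recent llama.cpp checkout; the GGML_CUDA flag and the build/bin/llama-server binary name reflect current llama.cpp and may differ in older versions, and the model path and port are placeholders:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON        # omit -DGGML_CUDA=ON for a CPU-only build
cmake --build build --config Release
./build/bin/llama-server -m /path/to/model.gguf --port 8080   # placeholder model path and port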

NixOS

Options:

  1. Use the llama-cpp package from nixpkgs (https://search.nixos.org/options?channel=unstable&from=0&size=50&sort=relevance&type=packages&query=llama-cpp); a usage sketch follows the flake example below.

  2. Use the flake at ./llama.cpp/flake.nix, e.g.:

{
  description = "My CUDA-enabled llama.cpp development environment";

  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    flake-parts.url = "github:hercules-ci/flake-parts";
    llama-cpp.url = "github:ggerganov/llama.cpp";
  };

  outputs = { self, nixpkgs, flake-parts, llama-cpp }@inputs:
    flake-parts.lib.mkFlake { inherit inputs; } {
      systems = [ "x86_64-linux" "aarch64-linux" ];

      perSystem = { config, self', inputs', pkgs, system, ... }: {
        devShells.default = pkgs.mkShell {
          buildInputs = [
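            # CUDA-enabled build of llama.cpp exposed by the upstream flake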
            llama-cpp.packages.${system}.cuda

            pkgs.cudatoolkit
            pkgs.gcc
            pkgs.cmake
          ];

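          # Point builds inside the shell at the CUDA toolkit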
          shellHook = ''
            export CUDA_PATH=${pkgs.cudatoolkit}
            export LD_LIBRARY_PATH=${pkgs.cudatoolkit}/lib:$LD_LIBRARY_PATH
          '';
        };
      };
    };
}
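
For option 1, a usage sketch assuming flakes are enabled and that the nixpkgs llama-cpp package ships a llama-server binary (the model path and port are placeholders):

nix shell nixpkgs#llama-cpp --command llama-server -m /path/to/model.gguf --port 8080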
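
For option 2, with the flake above saved as flake.nix, entering the dev shell should put the CUDA build on PATH, assuming the cuda package exposes a llama-server binary (the model path is a placeholder):

nix develop
llama-server -m /path/to/model.gguf --port 8080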

Dependencies

~11–23MB
~334K SLoC