#inference #llm

app llwm

Run LLM inference with WebAssembly

1 unstable release

0.0.0 Dec 1, 2024

#118 in #inference

Apache-2.0

7KB

llama.wasm

Run LLM inference with WebAssembly. This is experimental.
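The listing does not document the crate's API, so as a rough sketch only: one common way to run a self-contained module such as llama.wasm from Rust is through the wasmtime crate. The module path and the exported `infer` function below are assumptions for illustration, not the crate's actual interface.

```rust
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    // Compile the Wasm module (path is an assumption).
    let engine = Engine::default();
    let module = Module::from_file(&engine, "llama.wasm")?;

    // Instantiate with no imports; a real inference module may require WASI.
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;

    // "infer" is a hypothetical export name used only for this sketch.
    let infer = instance.get_typed_func::<(), ()>(&mut store, "infer")?;
    infer.call(&mut store, ())?;
    Ok(())
}
```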

No runtime deps